The Future of Playback

By Kristinn Sigurðsson, Head of IT at the National and University Library of Iceland and the Lead of the IIPC Tools Development Portfolio

It is difficult to overstate the importance of playback in web archiving. While it is possible to evaluate and make use of a web archive via data mining, text extraction and analysis, and so on, nothing can replace the ability to present the captured content in its original form for human inspection of the pages. A good playback tool opens up a wide range of practical use cases for the general public, facilitates non-automated quality assurance efforts and (sometimes most importantly) gives a highly visible “face” to our efforts.

OpenWayback

Over the last decade or so, most IIPC members who operate their own web archives in-house have relied on OpenWayback, even before it acquired that name. Recognizing the need for a playback tool and the prevalence of OpenWayback, the IIPC has been supporting OpenWayback in a variety of ways over the last five or six years. Most recently, Lauren Ko (UNT), a co-lead of the IIPC’s Tools Development Portfolio, has shepherded work on OpenWayback and pushed out new releases (thanks, Lauren!).

Unfortunately, it has been clear for some time that OpenWayback would require a ground-up rewrite if it were to be continued. The software, now almost a decade and a half old, is complicated and archaic. Adding features is nearly impossible, and even bug fixes often require exceptional effort. This has led to OpenWayback falling behind as web material evolves, its replay fidelity fading.

As there was no prospect of the IIPC funding a full rewrite, the Tools Development Portfolio, along with other interested IIPC members, began to consider alternatives. As far as we could see, there was only one viable contender on the market: Pywb.

Survey

Last fall, the IIPC sent out a survey to our members to gain insight into the playback software currently in use, plans to transition to Pywb, and the key roadblocks preventing IIPC members from adopting Pywb. The IIPC also organised online calls for members and gathered feedback from institutions that had already adopted Pywb.

Unsurprisingly, these consultations with the membership confirmed the current importance of OpenWayback. The results also showed a general interest in adopting Pywb, whilst highlighting a number of likely hurdles our members faced in making that change. Consequently, we decided to move ahead with the decision to endorse Pywb as a replay solution and to work to support IIPC members’ adoption of it.

The members of the IIPC’s Tools Development Portfolio then analyzed the results of the survey and, in consultation with Ilya Kreymer, came up with a list of requirements that, once met, would make it much easier for IIPC members to adopt Pywb. These requirements were then divided into three work packages to be delivered over the next year.

Pywb

Over the last few years, Pywb has emerged as a capable alternative to OpenWayback. In some areas of playback it is better than, or at least comparable to, OpenWayback, having been updated to account for recent developments in web technology. Being more modern and still actively maintained, Pywb is only likely to widen that gap over time. As it is also open source, it makes for a reasonable choice for the IIPC to support as the new “go-to” replay tool.

However, while Pywb’s replay abilities are impressive, it is far from a drop-in replacement for OpenWayback. Notably, OpenWayback offers more customization and localization support than Pywb. There are also many differences between the two tools that make migration from one to the other difficult.

To address this, the IIPC has signed a contract with Ilya Kreymer, the maintainer of the web archive replay tool Pywb. The IIPC will be providing financial support for the development of key new features in Pywb.

Planned work

The first work package will focus on developing a detailed migration guide for existing OpenWayback users. This will include example configurations for common cases and cover diverse backend setups (e.g. CDX vs. ZipNum vs. OutbackCDX).
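
As a purely illustrative sketch of what such example configuration might cover (the collection names and paths here are hypothetical, and the exact keys should be verified against the Pywb documentation for your version), a config.yaml defining a flat-CDXJ-backed collection alongside one served from a remote index such as OutbackCDX might look roughly like:

```yaml
# Illustrative only -- verify key names against the Pywb documentation
collections:
  my-archive:
    index_paths: ./indexes/my-archive.cdxj    # flat CDXJ index on disk
    archive_paths: ./warcs/                   # where the WARC files live
  outback-archive:
    index: cdx+http://localhost:8080/my-index # remote CDX server (e.g. OutbackCDX)
    archive_paths: ./warcs/
```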

The second package will deliver improvements to make Pywb more modular, extended support and documentation for localization, and extended access control options.

The third package will focus on customization and integration with existing services. It will also improve the Pywb “calendar page” and “banner”, adding features now available in OpenWayback.

There is clearly more work that can be done on replay. The ever fluid nature of the web means we will always be playing catch-up. As work progresses on the work packages mentioned above, we will be soliciting feedback from our community. Based on that, we will consider how best to meet those challenges.


LinkGate: Let’s build a scalable visualization tool for web archive research

By Youssef Eldakar of Bibliotheca Alexandrina and Lana Alsabbagh of the National Library of New Zealand

Bibliotheca Alexandrina (BA) and the National Library of New Zealand (NLNZ) are working together to bring to the web archiving community a tool for scalable web archive visualization: LinkGate. The project was awarded funding by the IIPC for the year 2020. This blog post gives a detailed overview of the work that has been done so far and outlines what lies ahead.


In all domains of science, visualization is essential for deriving meaning from data. In web archiving, the data is linked data that may be visualized as a graph, with web resources as nodes and outlinks as edges.

This phase of the project aims to deliver the core functionality of a scalable web archive visualization environment consisting of a Link Service (link-serv), Link Indexer (link-indexer), and Link Visualizer (link-viz) components as well as to document potential research use cases within the domain of web archiving for future development.

The following illustrates the data flow for LinkGate in the web archiving ecosystem: a web crawler archives captured web resources into WARC/ARC files, which are then checked into storage; metadata is extracted from the WARC/ARC files into WAT files; link-indexer extracts outlink data from the WAT files and inserts it into link-serv; and link-serv then serves graph data to link-viz for rendering as the user navigates the graph representation of the web archive:

LinkGate: data flow

In what follows, we look at development by Bibliotheca Alexandrina to get each of the project’s three main components, Link Service, Link Indexer and Link Visualizer, off the ground. We also discuss the outreach part of the project, coordinated by the National Library of New Zealand, which involves gathering researcher input and putting together an inventory of use cases.

Please watch the project’s code repositories on GitHub for commits following a code review later this month.

Please see also the Research Use Cases for Web Archive Visualization wiki.

Link Service

link-serv is the Link Service that provides an API for inserting web archive interlinking data into a data store and for retrieving that data back for rendering and navigation.
We worked on the following:

  • Data store scalability
  • Data schema
  • API definition and Gephi compatibility
  • Initial implementation

Data store scalability

link-serv depends on an underlying graph database as the repository for web resources as nodes and outlinks as relationships. Building upon BA’s previous experience with graph databases in the Encyclopedia of Life project, we worked on adapting the Neo4j graph database to versioned web archive data. Scalability being a key interest, we ran a benchmark of Neo4j on Intel Xeon E5-2630 v3 hardware using a generated test dataset and examined bottlenecks to tune performance. In the benchmark, over a series of progressions, a total of 15 billion nodes and 34 billion relationships were loaded into Neo4j, and matching and updating performance was tested. While the time to insert nodes into the database for the larger progressions ran to hours or even days, match and update times in all progressions remained in the range of seconds once a database index was added: 0.01 to 25 seconds for nodes (85% of cases below 7 seconds) and 0.5 to 34 seconds for relationships (67% of cases below 9 seconds). Considering the results promising, we hope that tuning work during the coming months will lead to even better performance. Further testing is underway using a second set of generated relationships to more realistically simulate web links.

We ruled out Virtuoso, 4store, and OrientDB as graph data store options for being less suitable for the purposes of this project. A more recent alternative, ArangoDB, is currently being looked into and is also showing promising initial results, and we are leaving open the possibility of additionally supporting it as an option for the graph data store in link-serv.

Data schema

To represent web archive data in the graph data store, we designed a schema with the goals of supporting time-versioned interlinked web resources and being friendly to search using the Cypher Query Language. The schema defines Node and VersionNode as node types, and HAS_VERSION and LINKED_TO as relationship types linking a Node to a descendant VersionNode and a VersionNode to a hyperlinked Node, respectively. A Node has as an attribute the URI of the resource in Sort-friendly URI Reordering Transform (SURT) format, and a VersionNode has as an attribute the ISO 8601 timestamp of the version. The following illustrates the schema:

LinkGate: data schema
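
As a hypothetical Cypher sketch of how such a schema might be populated (the property names and example values here are our own illustrations, not the project’s actual ones), this is the kind of query the node and relationship types above suggest:

```cypher
// Create a resource node keyed by its SURT-form URI (illustrative properties)
MERGE (n:Node {surt: "is,bok)/"})
MERGE (v:VersionNode {timestamp: "2020-03-01T12:00:00Z"})
MERGE (n)-[:HAS_VERSION]->(v)
// Link this capture to a resource it hyperlinks to
MERGE (m:Node {surt: "is,landsbokasafn)/"})
MERGE (v)-[:LINKED_TO]->(m)
```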

API definition and Gephi compatibility

link-serv is to receive data extracted by link-indexer from a web archive and to respond to queries by link-viz as the graph representation of web resources is navigated. At this point, two API operations have been defined for this interfacing: updateGraph and getGraph. updateGraph is to be invoked by link-indexer and takes as input a JSON representation of outlinks to be loaded into the data store. getGraph, on the other hand, is to be invoked by link-viz and returns a JSON representation of possibly nested outlinks for rendering. Additional API operations may be defined as development progresses.

One of the project’s premises is maintaining compatibility with the popular graph visualization tool, Gephi. This would enable users to render web archive data served by link-serv using Gephi as an alternative to the project’s frontend component, link-viz. To achieve this, the updateGraph and getGraph API operations were based on their counterparts in the Gephi graph streaming API with the following adaptations:

  • Redefining the workspace to refer to a timestamp and URL
  • Adding timestamp and url parameters to both updateGraph and getGraph
  • Adding depth parameter to getGraph
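
To make the interface concrete, here is a hedged Python sketch of how a client such as link-viz might construct a getGraph call. The host, port, and exact parameter spellings are assumptions based on the description above, not the project’s finalized API:

```python
from urllib.parse import urlencode

def get_graph_url(base, url, timestamp, depth=1):
    """Build a getGraph request URL (parameter names assumed from the
    description above: url, timestamp and depth)."""
    query = urlencode({"url": url, "timestamp": timestamp, "depth": depth})
    return f"{base}/getGraph?{query}"

print(get_graph_url("http://localhost:8080", "http://example.com/", "20200301120000", 2))
# → http://localhost:8080/getGraph?url=http%3A%2F%2Fexample.com%2F&timestamp=20200301120000&depth=2
```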

An instance of Gephi with the graph streaming plugin installed was used to examine API behavior. We also examined API behavior using the Neo4j APOC library, which provides a procedure for data export to Gephi.

Initial implementation

Initial minimal API service for link-serv was implemented. The implementation is in Java and uses the Spring Boot framework and Neo4j bindings.
We have the following issues up next:

  • Continue to develop the service API implementation
  • Tune insertion and matching performance
  • Test integration with link-indexer and link-viz
  • ArangoDB benchmark

Link Indexer

link-indexer is the tool that runs on web archive storage where WARC/ARC files are kept and collects outlink data to feed to link-serv for loading into the graph data store. In a subsequent phase of the project, the collected data may include details besides outlinks to enrich the visualization.
We worked on the following:

  • Invocation model and choice of programming tools
  • Web Archive Transformation (WAT) as input format
  • Initial implementation

Invocation model and choice of programming tools

link-indexer collects data from the web archive’s underlying file storage, which means it will often be invoked on multiple nodes in a computer cluster. To handle future research use cases, the tool will also eventually need to do a fair amount of data processing, such as language detection, named entity recognition, or geolocation. For these reasons, we found Python a fitting choice for link-indexer. Additionally, several Python modules are readily available that implement functionality related to web archiving, such as WARC file reading and writing and URI transformation.
In a distributed environment such as a computer cluster, invocation would be on an ad-hoc basis using a tool such as Ansible, dsh, or pdsh (among many others), or configured using a configuration management tool (again, such as Ansible) for periodic execution on each host. Given this intended usage and the magnitude of the input data, we identified the following requirements for the tool:

  • Non-interactive (unattended) command-line execution
  • Flexible configuration using a configuration file as well as command-line options
  • Reduced system resource footprint and optimized performance

Web Archive Transformation (WAT) as input format

Building upon already existing tools, Web Archive Transformation (WAT) is used as input format rather than directly reading full WARC/ARC files. WAT files hold metadata extracted from the web archive. Using WAT as input reduces code complexity, promotes modularity, and makes it possible to run link-indexer on auxiliary storage having only WAT files, which are significantly smaller in size compared to their original WARC/ARC sources.
warcio is used in the Python code to read WAT files, which conform in structure to the WARC format. We initially used archive-metadata-extractor to generate WAT files. However, testing our implementation with sample files showed the tool generates files that do not exactly conform to the WARC structure and cause warcio to fail on reading. The more recent webarchive-commons library was subsequently used instead to generate WAT files.
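
As a rough illustration of the kind of extraction involved, the sketch below pulls outlinks out of one WAT record’s JSON payload. The nested key names follow the WAT layout commonly produced by webarchive-commons and should be treated as an assumption; real code would also read the records via warcio rather than from an in-memory dict:

```python
from urllib.parse import urljoin

def extract_outlinks(wat_payload: dict) -> list:
    """Pull outlinks from one WAT metadata record's JSON payload and
    resolve relative links against the capture's target URI."""
    envelope = wat_payload.get("Envelope", {})
    base = envelope.get("WARC-Header-Metadata", {}).get("WARC-Target-URI", "")
    links = (envelope.get("Payload-Metadata", {})
                     .get("HTTP-Response-Metadata", {})
                     .get("HTML-Metadata", {})
                     .get("Links", []))
    return [urljoin(base, l["url"]) for l in links if "url" in l]

record = {
    "Envelope": {
        "WARC-Header-Metadata": {"WARC-Target-URI": "http://example.com/a/"},
        "Payload-Metadata": {"HTTP-Response-Metadata": {"HTML-Metadata": {
            "Links": [{"path": "A@/href", "url": "../b.html"},
                      {"path": "A@/href", "url": "http://other.example/"}]}}},
    }
}
print(extract_outlinks(record))
# → ['http://example.com/b.html', 'http://other.example/']
```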

Initial implementation

The current initial minimal implementation of link-indexer includes the following:

  • Basic command-line invocation with multiple input WAT files as arguments
  • Traversal of metadata records in WAT files using warcio
  • Collecting outlink data and converting relative links to absolute
  • Composing JSON graph data compatible with the Gephi streaming API
  • Grouping a defined count of records into batches to reduce hits on the API service
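
The last two points can be sketched in Python as follows. The "an" (add node) and "ae" (add edge) event shapes follow the Gephi graph streaming format, per the compatibility goal described earlier; the exact payload LinkGate exchanges may differ:

```python
import json

def batch_outlinks(source, timestamp, outlinks):
    """Compose Gephi-streaming-style add-node ("an") and add-edge ("ae")
    events for one capture's outlinks, one JSON event per line
    (shapes assumed from the Gephi graph streaming format)."""
    events = [{"an": {source: {"label": source, "timestamp": timestamp}}}]
    for i, target in enumerate(outlinks):
        events.append({"an": {target: {"label": target}}})
        events.append({"ae": {f"{source}-{i}": {
            "source": source, "target": target, "directed": True}}})
    return "\n".join(json.dumps(e) for e in events)
```

Batching many records’ worth of events into one request body like this is what reduces the number of hits on the API service.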

We plan to continue work on the following:

  • Rewriting links in Sort-friendly URI Reordering Transform (SURT) format
  • Integration with the link-serv API
  • Command-line options
  • Configuration file

Link Visualizer

link-viz is the project’s web-based frontend for accessing data provided by link-serv as a graph that can be navigated and explored.
We worked on the following:

  • Graph rendering toolkit
  • Web development framework and tools
  • UI design and artwork

Graph visualization libraries, as well as web application frameworks, were researched for the web-based link visualization frontend. D3.js and Vis.js emerged as the most suitable candidates for the visualization toolkit. After experimenting with both toolkits, we decided to go with Vis.js, which fits the needs of the application and is better documented.
We also took a fresh look at current web development frameworks and decided to house the Vis.js visualization logic within a Laravel framework application combining PHP and Vue.js for future expandability of the application’s features, e.g., user profile management, sharing of graphs, etc.
A virtual machine was allocated on BA’s server infrastructure to host link-viz for the project demo that we will be working on.
We built a barebones frontend consisting of the following:

  • Landing page
  • Graph rendering page with the following UI elements:
    • Graph area
    • URL, depth, and date selection inputs
    • Placeholders for add-ons

As we outlined in the project proposal, we plan to implement add-ons during a later phase of the project to extend functionality. Add-ons would come in two categories: vizors for modifying how the user sees the graph, e.g., GeoVizor for superimposing nodes on a map of the world, and finders to help the user explore the graph, e.g., PathFinder for finding all paths from one node to another.
Some work has already been done in UI design, color theming, and artwork, and we plan to continue work on the following:

  • Integration with the link-serv API
  • Continue work on UI design and artwork
  • UI actions
  • Performance considerations

Research use cases for web archive visualization

In terms of outreach, the National Library of New Zealand has been getting in touch with researchers from a wide array of backgrounds, ranging from data scientists to historians, to gather feedback on potential use cases and the types of features researchers would like to see in a web archive visualization tool. Several issues have been brought up, including frustrations with existing tools’ lack of scalability, being tied to a physical workstation, time wasted on preprocessing datasets, and inability to customize an existing tool to a researcher’s individual needs. Gathering first hand input from researchers has led to many interesting insights. The next steps are to document and publish these potential research use cases on the wiki to guide future developments in the project.

We would like to extend our thanks and appreciation to all the researchers who generously gave their time to provide us with feedback, including Dr. Ian Milligan, Dr. Niels Brügger, Emily Maemura, Ryan Deschamps, Erin Gallagher, and Edward Summers.

Acknowledgements

Meet the people involved in the project at Bibliotheca Alexandrina:

  • Amr Morad
  • Amr Rizq
  • Mohamed Elsayed
  • Mohammed Elfarargy
  • Youssef Eldakar

And at the National Library of New Zealand:

  • Andrea Goethals
  • Ben O’Brien
  • Lana Alsabbagh

We would also like to thank Alex Osborne at the National Library of Australia and Andy Jackson at the British Library for their advice on technical issues.

If you have any questions or feedback, please contact the LinkGate Team at linkgate[at]iipc.simplelists.com.

Pywb at the Australian Web Archive: notes on the migration

Last year the National Library of Australia (NLA) launched the revamped Australian Web Archive (AWA), which expanded their older PANDORA selective archive with comprehensive snapshots of the .au domain. The AWA is full-text searchable through Trove, a single discovery service for the collections of Australia’s libraries, museums, archives and other cultural institutions. To replay archived pages, AWA used Java-based OpenWayback, but NLA’s technical team is now transitioning to Python Wayback (pywb).


By Alex Osborne, Web Archive Technical Lead at National Library of Australia and Co-Lead of the IIPC Tools Development Portfolio

We recently migrated the Australian Web Archive (AWA) from OpenWayback to Pywb in order to take advantage of Pywb’s better replay fidelity, particularly for JavaScript heavy websites. First I must give some background about the existing architecture. The Trove-branded user interface to the web archive is a separate web application which displays Wayback in an iframe.

The old architecture

We originally chose to write a separate application rather than customising OpenWayback’s banner templates for a few reasons:

  • We thought it would make it easier to update Wayback and the UI independently of each other. This is particularly important as the web archive backend and the Trove user interface are managed and developed by different teams with different development processes and release cycles.
  • It allows replayed content to live on a different origin (domain) to the UI. This is an important security measure to prevent archived content from being able to interfere with the UI. While we didn’t take advantage of this in the original release, it’s something I’ve long wanted to implement.
  • We had in mind from the beginning that we may eventually want to swap OpenWayback out for another replay tool and it’d be nice not to have to rewrite our UI in order to do it.

While it has caused a few problems in the past (redirect loops, PDF plugins), this architecture made the transition to Pywb straightforward. Pywb out of the box renders its own UI within an iframe, so it was close to a drop-in replacement for us. There were a few small problems we encountered along the way, though most were of our own making rather than Pywb’s.

Problem 1: Notifying the UI when a navigation event happens

In the AWA’s initial release, archived content and UI were both served from the same domain name. This meant that the browser allowed the UI’s JavaScript to reach inside the iframe and access the archived page. The Trove-Web UI therefore was able to listen to the iframe’s load event and even intercept click events. When the iframe loads we can inspect the page’s title to update it in the UI and extract the current URL and timestamp from the iframe’s URL.

While this was convenient, and would have worked with Pywb if we had kept it on the same domain, it also meant archived content could do the same to our UI! We never encountered anyone doing this in the wild, but we’ve always been a little worried that the web archive could be abused for attacks like phishing.

This meant we needed an alternative way for the UI to get information about what was happening inside the replay iframe. Fortunately, Pywb has already solved this problem: it uses Window.postMessage() to send a message like this when an archived page loads:

{
    "wb_type":"load",
    "url":"http://www.example.com/",
    "ts":"20060821035730",
    "title":"Example Web Page",
    "icons":[],
    "request_ts":"20060821035730",
    "is_live":false,
    "readyState":"interactive"
}

One gotcha I encountered, though, was that Pywb doesn’t send a postMessage when displaying error pages. Our custom UI intercepts OpenWayback’s not-found errors in order to display a more detailed message suggesting alternative ways to find the content, or an explanatory message about restricted content.

I worked around that by including a custom templates/not_found.html which sends the load message:

<script>
    // Mimic Pywb's "load" message so the outer UI can still update its state;
    // the timestamp is recovered from the replay URL path ('mp_' is the frame modifier).
    parent.postMessage({
        'wb_type': 'load',
        'url': '{{ url }}',
        'ts': location.href.split('/')[4].replace('mp_',''),
        'title': 'Webpage snapshot not found',
        'status': 404
    }, '*');
</script>

Problem 2: Accessing HTML meta tags

Trove’s user interface (and this is not specific to the web archive) has a tab which shows how to cite the item you’re looking at when referencing it in an academic context or on Wikipedia. The original implementation would, on the client side, inspect the contents of the iframe for HTML meta tags to pull out information such as an author or publisher’s name. This, of course, also broke when we moved Wayback to a separate security origin, and unlike the URL and page title, Pywb doesn’t include the page’s meta tags in its load message.

Rather than trying to provide JavaScript access to the page content, and potentially risking undoing some of the isolation we were trying to introduce, I moved the meta tag extraction server side, translating the code from JavaScript/DOM to Java/JSoup.

Problem 3: Multiple access points

The library has a take-down procedure for restricting access to content under certain circumstances. Restricted content can have several different policies applied to it. Content can be fully public, accessible only to staff, accessible on-site in the reading room or fully restricted.

OpenWayback also allows a different URL to be configured for routing incoming requests (accessPointPath) from the one that’s generated when rewriting links (replayPrefix). So our original implementation simply configured three access points under paths like /public, /onsite and /staff, but with the generated links all at /wayback.

<beans>
     <bean name="publicaccesspoint" parent="standardaccesspoint">
        <property name="accessPointPath" value="http://backend.example:8080/public/"/>
        <property name="replayPrefix" value="https://frontend.example/wayback/"/>
        <property name="collection" ref="publiccollection" />
     </bean>
    
     <bean name="onsiteaccesspoint" parent="standardaccesspoint">
        <property name="accessPointPath" value="http://backend.example:8080/onsite/"/>
        <property name="replayPrefix" value="https://frontend.example/wayback/"/>
        <property name="collection" ref="onsitecollection" />
    </bean>
    
    <bean name="staffaccesspoint" parent="standardaccesspoint">
        <property name="accessPointPath" value="http://backend.example:8080/staff/"/>
        <property name="replayPrefix" value="https://frontend.example/wayback/"/>
        <property name="collection" ref="staffcollection" />
    </bean>
</beans>

Our frontend webserver (nginx) has a set of IP range rules which map the different access locations, and it rewrites the path accordingly:

rewrite /wayback/(.*) /$webarchive_access_point/$1 break;
proxy_pass http://openwayback;

I couldn’t find a way to replicate this configuration as Pywb appears to only have one parameter – the collection name – used for both routing and link generation. (Aside: perhaps it’s possible by using uwsgi and overriding SCRIPT_NAME but I couldn’t figure it out.) Therefore rather than running a single instance of Pywb with multiple collections configured, we ended up with a separate instance of Pywb for each access point and have nginx route requests to the appropriate port. It’s a little more complex to deploy than I’d like but works well enough.

upstream pywb-public { server backend.example:8080; }
upstream pywb-staff  { server backend.example:8081; }
upstream pywb-onsite { server backend.example:8082; }

proxy_pass http://pywb-$webarchive_access_point;

Problem 4: Hiding the UI for thumbnails

Trove’s collection browse path displays thumbnails of the archived sites. These are generated by a web service that wraps Chromium’s headless mode. Obviously, we don’t want the UI of the archive to be visible in the thumbnails. But if Chromium loads the URL of Pywb directly, there’s a JavaScript redirect back to the Trove UI. This exists so that if a user opens an archived link in a new tab, they get the web archive’s UI in that new tab too rather than just the contents of the replay frame.

In our original implementation we kind of hacked around this for screenshots by passing a magic flag as a URL fragment that the JavaScript redirect looked for as an indicator not to redirect to the UI. This time, though, that redirect was being done by Pywb itself rather than by our template customisations. Plus, we don’t really want to expose a way of hiding the UI entirely again, due to the risk of the archive being abused for phishing.

I hoped to set up a second Pywb collection with framed_replay: false and a blank banner, but was thwarted, as that option can only be specified at the top level. So yes, you guessed it, we’re now up to four instances of Pywb. ¯\_(ツ)_/¯

Problem 5: The PDF workaround that broke

In the original iframe implementation we encountered problems with displaying PDFs in an iframe in some browsers. The developers ended up working around this by embedding PDF.js rather than relying on the browser’s rendering. This broke when switching to Pywb on an isolated domain, as the interception was based on an onClick handler injected into the iframe, and the PDF.js viewer can’t load documents from a different origin without special configuration. Fortunately, it seems either Pywb does something differently or, more likely, browsers now handle PDFs in iframes better, so I was able to just disable the whole PDF interception.

Funnily enough, we do still use PDF.js for generating thumbnails, though, as Chromium’s PDF viewer doesn’t work in headless mode. While it would probably be more efficient to use some native PDF renderer, just using PDF.js lets us reuse the same thumbnail generation logic and also piggyback on the browser’s security sandbox.

Problem 6: Reusing OpenWayback’s manhattan graph

I discovered, a little to my surprise, that Trove’s visualisation of captures over time was actually using OpenWayback’s server-side manhattan graph renderer. There’s no exact equivalent of this in Pywb and I didn’t want to keep a running instance of OpenWayback for that alone. Fortunately the graph renderer is standalone and could be incorporated into Trove-Web directly.

Screenshot of manhattan graph

Conclusion

Overall the problems we encountered were relatively minor. Pywb’s built-in frames support allowed us to eliminate virtually all the modifications and custom templates we’d had to make for OpenWayback. If there’s one wishlist item I have for Pywb, it’s to allow more options to be overridden at a per-collection level, not just the top level. The migration fixed a large set of longstanding delivery problems for us, including for key websites like the Sydney 2000 Olympic Games. I encourage other archives looking to improve the quality of their replay to make the switch.


A New Release of Heritrix 3

By Andy Jackson, Web Archiving Technical Lead at the British Library

One of the outcomes of the Online Hours meetings has been an increase in activity around Heritrix 3. Most of us rely on Heritrix to carry out our web crawls, but recognise that to keep this large, complex crawler framework sustainable we need to try to get more people using the most recent versions, and make it easier for new users to get on board.

The most recent ‘formal’ release of Heritrix 3 was version 3.2.0 back in 2014, but a lot has happened since then. Numerous serious bugs have been discovered and resolved, and some new features added, but only those of us running the very latest code were able to take advantage of these changes.

Those of us who would rather base our crawling on a software release rather than building from source have been relying on the stable releases built by Kristinn Sigurðsson, and hosted on the NetarchiveSuite Maven Repository. This worked well for ‘those in the know’, but did little to make things easier for new users.

In an attempt to resolve this, and in coordination with the Internet Archive, we have started releasing ‘formal’ versions of Heritrix, culminating in the 3.4.0-20190207 Interim Release. This new release is believed to be stable, and is recommended over previous releases of Heritrix 3. As well as being released on GitHub, it is also available through the Maven Central Repository, which should make it easier for others to re-use Heritrix.
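
For Maven users, pulling Heritrix in as a dependency would look something like the fragment below. The coordinates shown are illustrative only, so check Maven Central for the exact groupId and artifactId of the module you need (the engine, commons and other modules are published separately):

```xml
<!-- Illustrative only: verify the exact coordinates on Maven Central -->
<dependency>
    <groupId>org.archive.heritrix</groupId>
    <artifactId>heritrix-engine</artifactId>
    <version>3.4.0-20190207</version>
</dependency>
```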

You may notice we’ve added a date to the version tag. Traditionally, Heritrix 3 has used a tag of the form “X.X.X”, which gives the impression we are using a form of Semantic Versioning. However, that does not reflect how Heritrix is evolving. Heritrix is a broad framework of modules for building a crawler, and has lots of different components of different ages, at different levels of maturity and use. Given there are only a small number of developers working on Heritrix, we don’t have the resources to guarantee that a breaking change won’t slip into a minor release, so it’s best not to appear to be promising something we cannot deliver.

This means that, when you are upgrading your Heritrix 3 crawler, we recommend that you thoroughly test each release using your configuration (your ‘crawler beans’ in Heritrix3 jargon) under a realistic workload. If you can, please let us know how this goes, to help us understand how reliable the different parts of Heritrix 3 are.

As well as making new releases, we have also moved the Heritrix 3 documentation over to GitHub to populate the Heritrix3 wiki, and shifted the API documentation to a more modern platform. We hope this will help those who have been frustrated by the available documentation, and we encourage you to get in touch with any ideas for improving the situation, particularly when it comes to helping new users get on board.

If you want to know more, please drop into the Online Hours calls or use the archive-crawler mailing list or IIPC Slack to get in touch. To join IIPC Slack, submit a request through this form.

Web Archiving Down Under: Relaunch of the Web Curator Tool at the IIPC conference, Wellington, New Zealand

Kees Teszelszky, Curator Digital Collections at the National Library of the Netherlands/Koninklijke Bibliotheek (with input from Hanna Koppelaar and Jeffrey van der Hoeven – KB-NL, and Ben O’Brien, Steve Knight and Andrea Goethals – National Library of New Zealand)

Hanna Koppelaar, KB & Ben O’Brien, NLNZ. IIPC Web Archiving Conference 2018. Photo by Kees Teszelszky

The Web Curator Tool (WCT) is a globally used workflow management application designed for selective web archiving in digital heritage collecting organisations. Version 2.0 of the WCT is now available on Github. This release is the product of a collaborative development effort started in late 2017 between the National Library of New Zealand (NLNZ) and the National Library of the Netherlands (KB-NL). The new version was previewed during a tutorial at the IIPC Web Archiving Conference on 14 November 2018 at the National Library of New Zealand in Wellington. Ben O’Brien (NLNZ) and Hanna Koppelaar (KB-NL) presented the new features of the WCT and showed how to work collaboratively from opposite sides of the world, in front of an audience of more than 25 attendees.

The tutorial highlighted that part of our road map for this version has been dedicated to improving the installation and support of WCT. We recognised that the majority of support requests related to database setup and application configuration. To improve this experience we consolidated and refactored the setup process, correcting ambiguities and misleading documentation. Another component of this improvement was the migration of our documentation to the readthedocs platform (found here), making the content more accessible and the process of updating it much simpler. This has replaced the PDF versions of the documentation, but not the Github wiki; the wiki content will be migrated where we see fit.

A guide on how to install WCT can be found here, and a video can be found here.

1) WCT Workflow

One of the objectives in upgrading the WCT was to raise it to a level where it could keep pace with the requirements of archiving the modern web. The first step in this process was decoupling the integration with the old Heritrix 1 web crawler and allowing the WCT to harvest using the more modern Heritrix 3 (H3) version. This work started as a proof of concept in 2017, which did not include any configuration of H3 from within the WCT UI; a single H3 profile was used in the backend to run H3 crawls. Today H3 crawls are fully configurable from within the WCT, mirroring the profile management that users had with Heritrix 1.

2) 2018 Work Plan Milestones

The second step in this process of raising up the WCT is a technical uplift. Over the past six or seven years, the software has fallen into neglect, with mounting technical debt: the tool sits atop outdated and unsupported libraries and frameworks, two of which are Spring and Hibernate. The feasibility of this upgrade has been explored through a successful proof of concept. We also want to make the WCT much more flexible and less tightly coupled by exposing each component via an API layer. To make that API development easier, we are looking to migrate the existing SOAP API to REST and to change components so they are less dependent on each other.

Currently the Web Curator Tool is tightly coupled with the Heritrix crawler (H1 and H3). However, other crawl tools exist and the future will bring more. The third step is re-architecting the WCT to be crawler agnostic. Abstracting out all crawler-specific logic will allow new crawling tools to be integrated with minimal development effort. The path to this stage has already been started with the integration of Heritrix 3, and will be developed further during the technical uplift.

More detail about future milestones can be found in the Web Curator Tool Developer Guide in the appropriately titled section Future Milestones. This section will be updated as development work progresses.

3) Diagram showing the relationships between different Web Curator Tool components

We are conscious that there are long-time users on various old versions of WCT, as well as regular downloads of those older versions from the old Sourceforge repository (soon to be deactivated). We would like to encourage users of older versions to upgrade to WCT 2.0 and to reach out for support in doing so. The primary channels for contact are the WCT Slack group and the Github repository. We hope that the WCT will be widely used by the web archiving community in future and will build a large development and support base. Please contact us if you are interested in cooperating! See the Web Curator Tool Developer Guide for more information about how to become involved in the Web Curator Tool community.

WCT facts

The WCT is one of the most common open-source enterprise solutions for web archiving. It was developed in 2006 as a collaborative effort between the National Library of New Zealand and the British Library, initiated by the International Internet Preservation Consortium (IIPC), as can be read in the original documentation. Since January 2018 it has been upgraded through collaboration with the Koninklijke Bibliotheek – National Library of the Netherlands. The WCT is open source and available under the terms of the Apache Public License. The project was moved from Sourceforge to Github in 2014. The latest release of the WCT, v2.0, is available now. It has an active user forum on Github and Slack.

Online Hours: Supporting Open Source

By Andrew Jackson, Web Archiving Technical Lead at the British Library

At the UK Web Archive, we believe in working in the open, and that organisations like ours can achieve more by working together and pooling our knowledge through shared practices and open source tools. However, we’ve come to realise that simply working in the open is not enough – it’s relatively easy to share the technical details, but less clear how to build real collaborations (particularly when not everyone is able to release their work as open source).

To help us work together (and maintain some momentum in the long gaps between conferences or workshops), we were keen to try something new, and hit upon the idea of Online Hours. It’s simply a regular web conference slot (organised and hosted by the IIPC, but open to all) which can act as a forum for anyone interested in collaborating on open source tools for web archiving. We’ve been running for a while now, and have settled on a rough agenda:

Full-text indexing:
– Mostly focussing on our Web Archive Discovery toolkit so far.

Heritrix3:
– including Heritrix3 release management, and the migration of Heritrix3 documentation to the GitHub wiki.

Playback:
– covering e.g. SolrWayback as well as OpenWayback and pywb.

AOB/SOS:
– for Any Other Business, and for anyone to ask for help if they need it.

This gives the meetings some structure, but is really just a starting point. If you look at the notes from the meetings, you’ll see we’ve talked about a wide range of technical topics, e.g.

  • OutbackCDX features and documentation, including its API;
  • web archive analysis, e.g. via the Archives Unleashed Toolkit;
  • a summary of the technologies used in our organisations, to compare how we do things and to find out which tools and approaches are shared and so might benefit from more collaboration;
  • coming up with ideas for possible new tools that meet a shared need in a modular, reusable way, and identifying potential collaborative projects.
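To give a flavour of the OutbackCDX topic above: its lookup API returns plain, space-separated CDX lines, so working with it from any language is straightforward. The sketch below, in Python, assumes a local OutbackCDX instance with a hypothetical index name and the common 11-field CDX layout; the exact field order can vary between indexes:

```python
from urllib.parse import quote

def build_query_url(base="http://localhost:8080", index="myindex", url="example.org/"):
    """Build an OutbackCDX lookup URL for the given target URL.
    The base address and index name are placeholders for your deployment."""
    return f"{base}/{index}?url={quote(url, safe='')}"

# Field names assume the common 11-field CDX layout; order can vary by index.
CDX_FIELDS = ["urlkey", "timestamp", "original", "mimetype", "statuscode",
              "digest", "redirect", "robotflags", "length", "offset", "filename"]

def parse_cdx_line(line):
    """Split one space-separated CDX line into a field dict."""
    return dict(zip(CDX_FIELDS, line.split()))

# Invented sample line for illustration.
sample = ("org,example)/ 20190207123000 http://example.org/ text/html 200 "
          "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA - - 1043 771 crawl-00001.warc.gz")
record = parse_cdx_line(sample)
print(build_query_url(url="http://example.org/"))
print(record["timestamp"], record["statuscode"], record["filename"])
```

Parsing the response into named fields like this is usually the first step before handing records to a replay tool or an analysis pipeline.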

The meeting is weekly, but we’ve attempted to make the meetings inclusive by alternating the specific time between 10am and 4pm (GMT). This doesn’t catch everyone who might like to attend, but at the moment I’m personally not able to run the call at a time that might tempt those of you on Pacific Standard Time. Of course, I’m more than happy to pass the baton if anyone else wants to run one or more calls at a more suitable time.

If you can’t make the calls, please consider:

My thanks go to everyone who has come along to the calls so far, and to the IIPC for supporting this effort while keeping it open to non-members.

Maybe see you online?

World Wide Webarchiving: Upgrading the Web Curator Tool

by Kees Teszelszky, Curator digital collections, National Library of the Netherlands

The Web Curator Tool (WCT) is a workflow management application designed for selective web archiving. It was created for use in libraries and other digital heritage collecting organisations, and supports collection by non-technical users while still allowing complete control of the web harvesting process. Employed by collaborating users in a library environment, the WCT supports the selection, harvesting and quality assessment of online material. The application is integrated with the existing Heritrix web crawler and supports key processes such as permissions, job scheduling, harvesting, quality review, and the collection of descriptive metadata. The WCT allows institutions to capture almost any online resource. These artefacts are handled with all possible care, so that their integrity and authenticity are preserved.

The WCT was developed in 2006 as a collaborative effort by the National Library of New Zealand (NLNZ) and the British Library (BL), initiated by the International Internet Preservation Consortium (IIPC) as can be read in the original documentation. The WCT is open-source and available under the terms of the Apache Public License. The project was moved in 2014 from Sourceforge to Github. The latest ‘binary’ release of the WCT, v1.6.3, was published in July 2017 on the Github page of NLNZ. Even after 12 years, the WCT still continues as one of the most common, open-source enterprise solutions for web archiving. It has an active user forum on Github and Slack.

From January 2018 onwards, NLNZ has been collaborating with the Koninklijke Bibliotheek – National Library of the Netherlands (KB-NL) to upgrade the WCT and add new features to make the application future-proof. This involves learning the lessons of previous development and recognising the advancements and trends occurring in the web archiving community. The objective is to get the WCT to a platform where it can keep pace with the requirements of archiving the modern web. Further, the Permission Request module will be extended to fit the Dutch situation, which lacks a legal deposit for digital publications.

The first step in that process was decoupling the WCT from the old Heritrix 1.x web crawler, and allowing the WCT to harvest using the updated Heritrix 3.x version. A proof of concept for this change was successfully developed and deployed by the NLNZ, and has been the basis for a joint development work plan. The project will be extensively documented.

The NLNZ has been using the WCT for its selective web archiving programme since January 2007, and KB-NL since 2009. In 2008 NLNZ published an article describing its experience using the WCT in a production environment. However, the software had fallen into a period of neglect, with mounting technical debt: most notably its tight integration with an outdated version of the Heritrix web crawler. While the last public release of the WCT is still used day-to-day in various institutions, it has essentially reached its end of life, falling further and further behind the requirements for harvesting the modern web. The community of users has echoed these sentiments over the last few years.

During 2016-2017 the NLNZ conducted a review of the WCT, how it fulfils business requirements, and how it compares to alternative software and services. The NLNZ concluded that the WCT was still the closest solution to meeting its requirements – provided the necessary upgrades could be done, namely a change to use the modern Heritrix 3 web crawler. Through a series of fortunate conversations, the NLNZ discovered that another WCT user, KB-NL, was going through a similar review process and had reached the same conclusions. This led to collaborative development between the two institutions to uplift the WCT technically and functionally into a fit-for-purpose tool within their respective web archiving programmes.

Who is involved:

National Library of New Zealand:

Steve Knight
Andrea Goethals
Ben O’Brien
Gillian Lee
Susanna Joe
Sholto Duncan

Koninklijke Bibliotheek:

Peter de Bode
Jeffrey van der Hoeven
Hanna Koppelaar
Tymen Kwant
Barbara Sierman
René Voorburg
Kees Teszelszky

Announcing the IIPC Technical Speaker Series

By Jefferson Bailey, Director, Web Archiving (Internet Archive) & IIPC Chair

The IIPC is excited to announce a call for presenters in a new online series, the IIPC Technical Speaker Series. The goal of the IIPC Technical Speaker Series (TSS) is to facilitate knowledge sharing and foster conversations and collaborations among IIPC members around web archiving technical work.

The TSS will feature 30-60 minute online presentations or demonstrations related to tool development, software engineering, infrastructure management, or other specific technology projects. Presentations can take any format, including prepared slides, open conversations, or live demonstrations via screen sharing. Presentations will be from employees at IIPC member organizations and attendance will be open to all IIPC members. The TSS is intended to be informational, not a formal training or education program, and to provide an open venue for knowledge exchange on technical issues. The series will also give IIPC members the chance to demo and discuss technical work (including R&D, prototype, or early-stage work) taking place in member institutions that may have no other venue for presentation or discussion.

If you are interested in presenting, please fill out the short application form.

Details on applying:

  • Applicants must be employed by an IIPC member institution in good standing
  • Access to an online webinar system (WebEx, Zoom, etc) will be provided
  • Presentations will be scheduled for 60 minutes, but can be shorter and should allow time for questions and discussion
  • Small stipends are available to presenters, if needed or if helpful in getting managerial approval to participate.

We aim to have 2-3 TSS events per quarter, scheduled at times amenable to as many time zones as possible. Details on upcoming speakers and registration will be shared via the normal IIPC communication channels (listservs, blog, Slack, Twitter). This project is funded as part of IIPC’s 2018 suite of projects, including work by IIPC Portfolios and Working Groups, as well as other forthcoming member services. The TSS is currently administered by the IIPC Steering Committee Chair (jefferson@archive.org) and the IIPC Program and Communications Officer (Olga.Holownia@bl.uk). Contact either or both with any questions.

Please apply and present to the IIPC community all the excellent technical work taking place at your organization!

New OpenWayback lead

By Lauren Ko, University of North Texas Libraries

In response to IIPC’s call, I have volunteered to take on a leadership role in the OpenWayback project. Having been involved with web archives since 2008 as a programmer at the University of North Texas Libraries, I expect my experience working with OpenWayback, Heritrix, and WARC files, as well as writing code to support my institution’s broader digital library initiatives, to aid me in this endeavor.

Over the past few years, the web archiving community has seen much development in the area of access-related projects such as pywb, Memento, ipwb, and OutbackCDX – to name a few. There is great value in a growing number of available tools written in different languages and running in different environments. In line with this, we would like to keep the OpenWayback project’s development moving forward while it remains of use. Further, we hope to facilitate the development of access-related standards and APIs, interoperability of components such as index servers, and compatibility of formats such as CDXJ.
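On the formats point, a CDXJ line is just a canonicalised URL key, a timestamp, and a JSON object, which makes interoperability between index servers comparatively easy to build. A minimal parser sketch in Python (the sample record is invented for illustration):

```python
import json

def parse_cdxj_line(line):
    """Parse one CDXJ line of the form '<urlkey> <timestamp> <json>' into a dict."""
    urlkey, timestamp, json_blob = line.split(" ", 2)
    fields = json.loads(json_blob)
    fields["urlkey"] = urlkey
    fields["timestamp"] = timestamp
    return fields

# Invented sample record for illustration.
sample = ('is,landsbokasafn)/ 20190207120000 '
          '{"url": "http://landsbokasafn.is/", "mime": "text/html", '
          '"status": "200", "filename": "crawl-00042.warc.gz"}')
record = parse_cdxj_line(sample)
print(record["timestamp"], record["status"], record["filename"])
```

Because the JSON blob is self-describing, tools can add fields without breaking one another's parsers, which is one reason the format suits cross-tool compatibility.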

Moving OpenWayback forward will take a community. With Kristinn Sigurðsson soon relinquishing his leadership position, we are seeking a co-leader for the OpenWayback project. We also continue to need people to contribute code, provide code review, and test deployments. I hope this community will continue not only to develop access tools, but also access to those tools, encouraging and supporting newcomers via mailing lists and Slack channels as they begin building and interacting with web archives.

If your institution uses OpenWayback, please consider:

If you are interested in taking a co-leadership role in this project, or are otherwise interested in helping with OpenWayback and IIPC’s access-related initiatives – even if you are not yet sure how – I welcome you to contact me as lauren.ko via IIPC Slack or by email at lauren.ko@unt.edu.

Wanted: New Leaders for OpenWayback

By Kristinn Sigurðsson, National and University Library of Iceland

The IIPC is looking for one or two people to take on a leadership role in the OpenWayback project.

The OpenWayback project is responsible not only for the widely used OpenWayback software, but also for the underlying webarchive-commons library. In addition, the OpenWayback project has been working to define access-related APIs.

The OpenWayback project thus plays an important role in the IIPC’s efforts to foster the development and use of common tools and standards for web archives.

Why now?

The OpenWayback project is at a crossroads. The IIPC first took this project on three years ago, with the initial objective of making the software easier to install, run and manage. This included cleaning up the code and improving documentation.

Originally this work was done by volunteers in our community. About two years ago the IIPC decided to fund a developer to work on it; the initial funding was for 16 months. With this we were able to complete the task of stabilizing the software, as evidenced by the releases of OpenWayback 2.0.0 through 2.3.0.

We then embarked on the somewhat more ambitious task of improving the core of the software. That work is now reaching a significant milestone, as a new ‘CDX server’, or resource resolver, is being introduced. You can read more about that here.

This marks the end of the paid position (at least for the time being). The original 16 months wound up being spread over a somewhat longer time frame, but they are now exhausted. Currently, the National Library of Norway (which hosted the paid developer) is contributing, free of charge, the work to finalize the new resource resolver.

I’ve been guiding the project over the last year, since the previous project leader moved on. While I was happy to assume this role to ensure that our funded developer had a functioning community, I was never able to give the project the kind of attention needed to grow it. Now seems a good time for a change.

With the end of the paid position, we are at a point where the project either undergoes a significant transformation or will likely die away, bit by bit, which would be a shame bearing in mind the significance of the project to the community and the time already invested in it.

Who are we looking for?

While a technical background is certainly useful, it is not a primary requirement for this role. As you may have surmised from the above, building up this community will definitely be a part of the job. Being a good communicator, manager and organizer may be far more important at this stage.

Ideally, I’d like to see two leads with complementary skill sets: one technical, the other communications/management. Ultimately, the most important requirement is a willingness and ability to take on this challenge.

You’ll not be alone: aside from your prospective co-lead, there is an existing community to build on, notably when it comes to the technical aspects of the project. You can get a feel for the community on the OpenWayback Google Group and the IIPC GitHub page.

It would be simplest if the new leads were drawn from IIPC member institutions. We may, however, be willing to consider a non-member, especially as a co-lead, if they are uniquely suited for the position.

If you would like to take up this challenge and help move this project forward, please get in touch. My email is kristinn (at) landsbokasafn (dot) is.

There is no deadline, as such, but ideally I’d like the new leads to be in place prior to our next General Assembly in Lisbon next March.