Raiders of the lost Web: WARCnet 💽World Wide Web 🌐 archiving discovery challenge 🏃‍♀️

Launched in the spring of 2020, WARCnet (Web ARChive studies network researching web domains and events) aims to promote high-quality national and transnational research that will help to understand the history of (trans)national web domains and of transnational events on the web. Led by Niels Brügger (Aarhus University), Valérie Schafer (University of Luxembourg), and Jane Winters (School of Advanced Study, University of London), WARCnet brings together the expertise of researchers in the field of web archiving as well as seven national web archives. The forthcoming WARCnet Autumn meeting (November 4-6 2020) is accompanied by the first WARCnet Challenge!


By Niels Brügger (Aarhus University), Valérie Schafer (University of Luxembourg), Jane Winters (School of Advanced Study, University of London), and Kees Teszelszky (National Library of the Netherlands)

As all web archivists 💾, scholars of the web 🔬 and digital natives 👶 are aware, the World Wide Web 🌐 is full of old, rare, shocking and weird gems💎 waiting to be discovered 👩‍🔬🧑‍💻. Show us your skills as internet researchers 🔎, web archaeologists 🏺 and twenty-first century online Indiana Joneses 🔮 and uncover the most interesting 😍, thought-provoking 🤔 or downright tasteless treasures 🏴‍☠️ from the online mud. Let’s see if you 👈 can surprise us 🎁 with your finds and show us 🖥️ wonderful things!

You can join the WARCnet network’s challenge with text, image, video or code found on the live web, in a web archive or elsewhere (library, physical archive, computer museum, basement of a web collector).

Your web archiving discovery challenge entry must consist of three parts: 1. Your discovery (what have you found?), 2. Your method (how did you find it or discover it?), 3. Your story (why is your find so special?).

“Can you see anything?” “Yes, wonderful things!” Be smart and creative! Let us learn from how you have done it! Show us web archiving is more than web crawlers and WARC files! Have fun and make us laugh or shiver!

Need some ideas 💡 for your online treasure 👑 trove ⚒️? Have a look at https://cc.au.dk/en/warcnet/warcnet-twitter-challenge/ or go straight to #WarcnetChallenge.

Theme of this WARCnet Challenge

The theme of this WARCnet challenge is ‘Web archaeology and history (trial version!) (Temple of ZOOM)’ — we’ll leave it up to participants to interpret the theme…

How to participate in the WARCnet challenge?

You participate in the challenge by tweeting your reply using the hashtag #WarcnetChallenge. It is OK to post more than one tweet; please mark them ‘1 of 5’ for five tweets, etc. The deadline ⏳ for tweets is Friday 6 November at 09:00 CET!

How is the winner found?

A jury composed of Niels Brügger, Valérie Schafer, Kees Teszelszky and Jane Winters will nominate 3-5 entries. Then, on Friday 6 November, the last day of the WARCnet meeting in Luxembourg, the members of the WARCnet network will vote for one of the nominees as the winner.

What will the winner get?

The winner will get a unique laptop sticker 🏆 and eternal fame on the wall of fame of the Raiders of the lost Web on this web page. The winning entry will also be included in the Grand Finale at the closing WARCnet conference in June 2022, to potentially win the Great Raider of the Lost Web Award.

Will there be more WARCnet challenges?

Yes, indeed there will. The WARCnet challenge is a play in four parts, each with a specific theme:

Part 1: Web archaeology and history (trial version!) (Temple of ZOOM): WARCnet Autumn meeting in Luxembourg, November 4-6 2020

Part 2: Web design and culture (Raiders of the lost WARC): WARCnet Spring meeting in Aarhus April 20-22 2021

Part 3: Offline internet culture, digital-born time travellers and internet culture in old analogue history (Dr. Jones and the Wayback Machine): WARCnet Autumn meeting in London, November 3-5 2021

Part 4: Grand Finale and announcement of winner. (Back to the Future from the Digital Dark Age): The last WARCnet meeting and conference in Aarhus, June 13-15 2022.

Who to contact with questions?

Any questions, just email us at warcnet@cc.au.dk.

Rapid Response Twitter Collecting at NLNZ

By Gillian Lee, Coordinator, Web Archives at the Alexander Turnbull Library, National Library of New Zealand (NLNZ)

This blog post has been adapted from an IIPC Research Speaker Series (RSS) webinar held in August where presenters shared their social media web archiving projects. Thanks to everyone who participated and for your feedback. It’s always encouraging to see the projects colleagues are working on.

Collecting content when you only have a short window of opportunity

The National Library responds quickly to collecting web content when unexpected events occur. Our focus in the past was on collecting websites, and this worked well for us using the Web Curator Tool; however, collecting social media was much more difficult. We tried capturing social media using different web archiving tools, but none of them produced satisfactory results.

The Preservation, Research and Consultancy (PRC) team includes programmers and web technicians. They thought running Twitter crawls using the public Twitter API could be a good solution for capturing Twitter content. It has enabled us to capture commentary about significant New Zealand events, and we’ve been running these Twitter crawls since late 2016.

One such event was the Christchurch mosque shootings, which took place on 15 March 2019. This terrorist attack by a lone gunman at two mosques in Christchurch, in which 51 people were killed, was the deadliest mass shooting in modern New Zealand history. The image you see here by Shaun Yeo was created in response to the tragic events and was shared widely via social media.

Shaun Yeo: Crying Kiwi
Crying Kiwi. Ref: DCDL-0038997. Alexander Turnbull Library, Wellington, New Zealand. http://natlib.govt.nz/records/42144570   (used with permission)

While the web archivists focussed on collecting web content relating to the attacks, and the IIPC community assisted us by providing links to international commentary for us to crawl using Archive-It, the PRC web technician was busy getting the Twitter harvest underway. He needed to work quickly because there was only a few days’ leeway to pick up Tweets using Twitter’s public API.

Search Criteria

Our web technician checked Twitter and identified a wide range of hashtags and search terms that we needed to use to collect the tweets.

Hashtags: ‘#ChristchurchMosqueShooting’ ‘#ChristchurchMosqueShootings’ ‘#ChristchurchMosqueAttack’ ‘#ChristchurchTerrorAttack’ ‘#ChristchurchTerroristAttack’ ‘#KiaKahaChristchurch’ ‘#NewZealandMosqueShooting’ ‘#NewZealandShooting’ ‘#NewZealandTerroristAttack’ ‘#NewZealandMosqueAttacks’ ‘#PrayForChristchurch’ ‘#ThisIsNotNewZealand’ ‘#ThisIsNotUs’ ‘#TheyAreUs’

Keywords: ‘zealand AND (gun OR ban OR bans OR automatic OR assault OR weapon OR weapons OR rifle OR military)’ ‘zealand AND (terrorist OR Terrorism OR terror)’ ‘zealand AND mass AND shooting’ ‘Christchurch AND mosque’ ‘Auckland AND vigil’ ‘Wellington AND vigil’

The Dataset

The Twitter crawl ran from 15-29 March 2019. We captured 3.2 million tweets in JSON files. We also collected 30,000 media files that were found in the tweets and we crawled 27,000 seeds referenced in the tweets. The dataset in total was around 108 GB in size.

Collecting the Twitter content

We used twarc to capture the Tweets, along with some in-house scripts that enabled us to merge and deduplicate the Tweets each time a crawl was run. The original set for each crawl was kept in case anything went wrong during the deduping process or if we needed to change the search parameters.
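To give a concrete sense of this workflow (the Library’s own scripts are not published, so this is only an illustrative sketch), the snippet below uses twarc v1 to pull tweets for one of the search terms above and then merges several crawl files while deduplicating on tweet ID. The credentials, query and file names are placeholders.

```python
# Illustrative sketch only -- not NLNZ's actual scripts.
import json
from twarc import Twarc

# Placeholder Twitter API credentials.
t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

def capture(query, outfile):
    """Search the public API and append the raw tweets as JSON lines."""
    with open(outfile, "a", encoding="utf-8") as out:
        for tweet in t.search(query):
            out.write(json.dumps(tweet) + "\n")

def merge_and_dedupe(infiles, outfile):
    """Merge several crawl files, keeping only one copy of each tweet ID."""
    seen = set()
    with open(outfile, "w", encoding="utf-8") as out:
        for path in infiles:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    tweet = json.loads(line)
                    if tweet["id_str"] not in seen:
                        seen.add(tweet["id_str"])
                        out.write(line)

capture("Christchurch AND mosque", "crawl_2019-03-16.jsonl")
merge_and_dedupe(["crawl_2019-03-15.jsonl", "crawl_2019-03-16.jsonl"], "merged.jsonl")
```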
We also used scripts to capture the media files referenced in the Tweets, and a harvest was run using Heritrix to pick up webpages. These webpage URLs were run through a URL unshortening service prior to crawling to ensure we were collecting the original URL, and not a shortened URL that might become invalid within a few months. We felt that the Tweet text without the accompanying images, media files and links might lose its context. We were also thinking about the long-term preservation of content that will be an important part of New Zealand’s documentary heritage.
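The unshortening step can be as simple as following redirects before a URL is added to the seed list. A minimal sketch using the requests library (the actual service NLNZ used is not named above):

```python
import requests

def unshorten(url, timeout=10):
    """Resolve a possibly shortened URL by following its redirects."""
    try:
        return requests.head(url, allow_redirects=True, timeout=timeout).url
    except requests.RequestException:
        return url  # keep the original if resolution fails

# Hypothetical shortened link found in a tweet.
print(unshorten("https://bit.ly/example"))
```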

Access copies

We created three access copies that provide different views of the dataset, namely Tweet IDs and hashed and non-hashed text files. This enables the Library to restrict access to content where necessary.

Tweet IDs

Tweet IDs (system numbers) will be available to the public online. When you rehydrate the Tweet IDs online, you only receive back the Tweets that are still publicly available – not any of the Tweets that have since been deleted.
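Rehydration can be done with the same twarc library used for capture. A sketch, with a placeholder file name and the same placeholder credentials as above:

```python
from twarc import Twarc

t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

with open("tweet_ids.txt") as f:                 # one Tweet ID per line
    tweet_ids = [line.strip() for line in f if line.strip()]

# Deleted or protected Tweets are simply not returned by the API.
for tweet in t.hydrate(tweet_ids):
    print(tweet["id_str"], tweet.get("full_text", tweet.get("text", ""))[:80])
```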

Hashed and non-hashed access copies

In 2018, Twitter released a series of election integrity datasets (https://transparency.twitter.com/en/information-operations.html), which contained access copies of Tweets. We have used their structure and format as a precedent for our own reading room copies. These provide access to all Tweets and the majority of their metadata, but with all identifying user details obfuscated by hashed values. You can see in the table below the Tweet ID highlighted in yellow, the user display name in red and its corresponding system number (instead of an actual name) and the tweet text highlighted in blue.

The non-hashed copy provides the actual names and full URL rather than system numbers.
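As an illustration of the hashing approach (the Library’s own processing scripts differ), the sketch below replaces the user’s screen name with a stable hashed token while keeping the tweet text and metadata. The salt and field selection are assumptions; in practice a secret salt prevents the hashes from being reversed by guessing screen names.

```python
import csv
import hashlib
import json

SALT = "replace-with-a-secret-salt"   # assumption: a salted hash is used

def pseudonym(value):
    """Map an identifying value to a stable, non-reversible token."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()

with open("merged.jsonl", encoding="utf-8") as src, \
     open("access_copy_hashed.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    writer.writerow(["tweet_id", "user_hash", "created_at", "text"])
    for line in src:
        tweet = json.loads(line)
        writer.writerow([
            tweet["id_str"],
            pseudonym(tweet["user"]["screen_name"]),
            tweet["created_at"],
            tweet.get("full_text", tweet.get("text", "")),
        ])
```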

Shaping the SIP for ingest
National Digital Heritage Archive (NDHA)

We have had some technical challenges ingesting Twitter files into the National Digital Heritage Archive (NDHA). Some files were too large to ingest using Indigo, which is a tool the web and digital archivists use to deposit content into the NDHA, so we have had to use another tool called the SIP Factory, which enables the ingest of large files to the NDHA. This is being carried out by the PRC team.

We’ve shaped the SIPs (submission information packages) into the files listed below and have chosen to use file naming conventions for each event. We thought it would be helpful to create a readme file that records some of the provenance and technical details of the dataset. Some of this information will be added to the descriptive record, but we felt that a readme file could include more information and would remain with the dataset.

chch_terror_attack_2019_twitter_tweet_IDs
chch_terror_attack_2019_Twitter_access_copy
chch_terror_attack_2019_Twitter_access_copy_hashed
chch_terror_attack_2019_Twitter_crawl
chch_terror_attack_2019_twitter_readme
chch_terror_attack_2019_twitter_media_files
chch_terror_attack_2019_twitter_warc_files

Description of the dataset

Even though the tweets are published, we have decided to describe them in Tiaki, our archival content management system. This is because we’re effectively creating the dataset and our archival system works better for describing this kind of content than our published catalogue does.
NLNZ, Tiaki archival content management system

Research interest in the dataset

A PhD student was keen to view the dataset as a possible research topic. This was a great opportunity to see what we could provide and the level of assistance that might be required.

Due to the sensitivity of the dataset and the fact that it wasn’t in our archive yet, we liaised with the Library’s Access and Use Committee about what data the Library was comfortable providing. The decision was that, at this initial stage, while the researcher was still determining the scope of her research study, the data should only come from Tweets that were still available online.

The Tweet IDs were put in Dropbox for the researcher to download. There were several complicating factors that meant she was unable to rehydrate the Tweet IDs, so we did what we could to assist her.

We determined that the researcher simply wanted to get a sense of what was in the dataset, so we extracted a random sample of 2,000 Tweets. The sample included only original Tweets (no retweets) and had already been rehydrated so that any deleted Tweets were removed. The data included the Tweet time, user location, likes, retweets, Tweet language and the Tweet text. She was pleased with what we were able to provide, because it gave her some idea of what was in the dataset, even though it was a very small subset of the dataset itself.

Unfortunately, the research project has been put on hold due to Covid-19. If the research project does go ahead, we will need to work with the University to see what level of support they can provide the researcher and what kind of support we will need to provide.

The BESOCIAL project: towards a sustainable strategy for social media archiving in Belgium

By Jessica Pranger, Scientific Assistant at KBR / Royal Library of Belgium

In August, we had the opportunity to present the new BESOCIAL research project during the IIPC RSS webinar. Many thanks to all viewers who have shared their remarks, questions and enthusiasm with us!

The aim of the BESOCIAL project is to set up a sustainable strategy for social media archiving in Belgium. Some Belgian institutions are already archiving social media content related to their holdings or interests, but it is necessary to reflect on a national strategy. Launched in summer 2020, this project will run over two years and is divided into seven steps, called ‘Work packages’ (WP):

  • WP1: Review of existing social media archiving projects and corpora in Belgium and abroad (M1-M6). The aim of this WP is to analyse selection, access and preservation policies, existing foreign legal frameworks and existing technical solutions.
  • WP2: Preparation of a pilot for social media archiving (M4-M15) including the development of a methodology for selection and the technical and functional requirements. An analysis of the user requirements and the existing legal framework is also included.
  • WP3: Pilot for social media archiving (M7-M24) including harvesting, quality control and the development of a preservation plan.
  • WP4: Pilot for access to social media archive (M16-M21) focusing on legal considerations, the development of an access platform and evaluating the pilot.
  • WP5: Recommendations for sustainable social media archiving in Belgium on the legal, technical and operational level (M16-M24).
  • WP6: Coordination, dissemination and valorisation.
  • WP7: Helpdesk for legal enquiries throughout the project.

Figure 1 shows these seven stages and how they will unfold over the two years of the project.

Figure 1. Work Packages of the BESOCIAL project.

Review of existing projects

We are currently in the first stage of the project (Work Package 1). To this end, a survey has been sent to 18 international heritage institutions and 10 Belgian institutions with questions on various topics related to the management of their born-digital collections. To date, we have received 13 responses, and a first analysis of these answers has been completed and submitted for publication. Another task currently being undertaken is an overview of the tools used for social media archiving. It is now important to dig deeper and check which kinds of metadata are supported by the tools. We are also working on an analysis of the digital preservation policies, strategies and plans of libraries and archives that already archive digital content, especially social media data. For the legal aspects, we are analysing the legal framework of social media archiving in other European and non-European countries.

Our team

The BESOCIAL project is coordinated by the Royal Library of Belgium (KBR) and is financed by the Belgian Science Policy Office’s (Belspo) BRAIN-be programme. KBR partnered with three universities for this project: CRIDS (University of Namur) works on legal issues related to the information society, CENTAL (University of Louvain) and IDLab (Ghent University) will contribute the necessary technical skills related to information and data science, whereas GhentCDH and MICT (both from Ghent University) have significant expertise in the field of communication studies and digital humanities.

The interdisciplinarity of the team and the thorough analyses of existing policies will ensure that the social media archiving strategy for Belgium will be based on existing best practices and that all involved stakeholders (heritage institutions, users, legislators, etc.) will be taken into account.

If you want to learn more about this project, feel free to follow our hashtag #BeSocialProject on social media platforms, and visit the BESOCIAL web page.

LinkGate: Initial web archive graph visualization demo

By Mohammed Elfarargy and Youssef Eldakar of Bibliotheca Alexandrina

LinkGate is an IIPC-funded project to develop a scalable web archive graph visualization environment and collect research use cases, led by Bibliotheca Alexandrina (BA) and the National Library of New Zealand (NLNZ). The project provides three modular components:

  • Link Service (link-serv) for the scalable temporal graph data service with an underlying graph data store and API
  • Link Indexer (link-indexer) for collecting inter-linking data from the web archive
  • Link Visualizer (link-viz) for the web-based frontend geared towards web archive graph data navigation and exploration

Research use cases are being documented to guide future development.

You can read more about our work in the blog post published in April.

During a webinar held at the end of July as part of the IIPC Research Speaker Series (RSS), we presented a demo of the tools being developed and a summary of feedback gathered so far from the community towards a research use case inventory. In this blog post, we give an update on progress of the technical development, focusing on the initial UI of link-viz.

Link Visualizer

LinkGate’s frontend visualization component, link-viz, has developed on many fronts over the last four months. While the link-serv component is compatible with the Gephi streaming API, Gephi remains a desktop-only, general-purpose graph visualization tool. link-viz, on the other hand, is a web-based, scalable graph visualization tool made specifically to visualize web archive graph data. This makes it possible to produce more informative graphs for web archive users.

link-viz works in a similar manner to web-based map services like Google Maps. The user gets a graph based on the queried URL and the desired snapshot. Users can set the initial depth of the graph and then incrementally add more nodes as they explore deeper in the graph. This smart loading makes the exploration of such a dense graph run more smoothly.

The link-viz UI is designed to set the main focus on the graph. Users can click on any graph node to select it and perform actions using tools available in the UI. Graph nodes can be moved around and are, by default, distributed using a spring force model to help make a uniform distribution over 2D space. It’s possible to toggle this off to give users the option to organize nodes manually. Users can easily pan and zoom in/out the view using mouse controls or touch gestures. All other tools are located in four floating panels surrounding the main graph area:

The left-hand panel is used to search for a URL and to select the desired snapshot based on which the initial graph will be rendered. The snapshot selection widget is illustrated in Figure 1:

Figure 1: Snapshot selection widget

The bottom panel shows detailed information on the highlighted graph node. This includes a full URL and a listing of all the outlinks and inlinks. This can be seen in Figure 2:

Figure 2: Node details panel

The top panel contains a set of tools for graph navigation (zoom in/out and reset view), taking graph screenshots, setting graph depth, collapsing/expanding portions of the graph, and configuring the look of the graph (selection of color, size, and shape for both graph nodes and edges to represent different pieces of information). One nice feature of link-viz compared to standard graph visualization tools is the usage of website favicons for graph nodes instead of geometric shapes, which makes nodes instantly identifiable and results in a much more readable graph. Figures 3 and 4 show the top panel and favicon usage, respectively:

Figure 3: Top panel

 

Figure 4: Favicons for graph nodes

The right-hand panel contains two tabs reserved for two sets of tools, Vizors and Finders. Vizors are tools to display the same graph highlighting additional information. Two vizors are currently planned. The GeoVizor will put graph nodes on top of a world map to show the hosting physical location. The FileTypeVizor will display file-type icons as graph nodes, making it very easy to identify most common file types and their distribution over the web. Finders perform graph exploration functions, such as finding loops or paths between nodes.

Apart from Vizors and Finders, we are also working on other features, including smart graph loading and an animated graph timeline. We are also going to improve the UI styling.

Link Indexer

link-indexer is now integrated with link-serv via the API. We have been testing the process of inserting data extracted with link-indexer into link-serv to identify data and scalability problems to work on. link-indexer now accepts command-line options for specifying the target link-serv instance and controlling the insertion batch size to manage how often the API is invoked. More command-line options are being added to control various aspects of the tool, as well as the ability to load options from a configuration file. We are also working to enhance tolerance to data issues, such as very long URLs, and network issues, such as short service outages. Figure 5 shows a sample output from a link-indexer run:

Figure 5: Sample output from a link-indexer run

Link Service

link-serv implements an API for link-indexer and link-viz to communicate with the graph data store. The API is compatible with the Gephi streaming API, giving users the option to connect to link-serv using the popular graph visualization tool, Gephi, as an alternative to the project’s frontend, link-viz. Figure 6 shows a Gephi client streaming graph data from a link-serv instance:

Figure 6: Gephi client streaming from a link-serv instance

A data schema customized for temporal, versioned web archive data is used in the underlying Neo4j graph data store, and link-serv defines extra API operations not defined in the Gephi streaming API to support temporal navigation functionality in link-viz.
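To illustrate what Gephi-streaming compatibility means in practice, the sketch below posts “add node” and “add edge” events to a graph streaming endpoint. The endpoint URL, workspace name and node attribute names are assumptions made for illustration only; they are not link-serv’s documented API.

```python
# Hedged sketch of pushing graph data to a Gephi-streaming-compatible endpoint.
import json
import requests

ENDPOINT = "http://localhost:8080/workspace1"   # hypothetical link-serv instance

def add_node(node_id, **attrs):
    # Gephi streaming "add node" event: {"an": {"<id>": {...attributes...}}}
    event = {"an": {node_id: attrs}}
    requests.post(ENDPOINT, params={"operation": "updateGraph"},
                  data=json.dumps(event), timeout=10)

def add_edge(edge_id, source, target, **attrs):
    # "add edge" event: {"ae": {"<id>": {"source": ..., "target": ..., ...}}}
    event = {"ae": {edge_id: dict(source=source, target=target, directed=True, **attrs)}}
    requests.post(ENDPOINT, params={"operation": "updateGraph"},
                  data=json.dumps(event), timeout=10)

# A page and one of its outlinks at a given snapshot time (attribute name assumed).
add_node("iipc.example.org", timestamp="20200615000000")
add_node("example.org", timestamp="20200615000000")
add_edge("e1", "iipc.example.org", "example.org")
```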

As more data is added to link-serv, the underlying graph data store has difficulty scaling up when reliant on a single instance. Our primary focus in link-serv at the moment, therefore, is to implement clustering. Work is in progress on a customized dispatcher service for the Neo4j graph data store as a substitute for the clustering functionality in the commercially licensed Neo4j Enterprise Edition. As a side track, we are also looking into ArangoDB as a possible alternative deployment option for link-serv’s graph data store.

Robustify your links! A working solution to create persistently robust links

By Martin Klein, Scientist in the Research Library at Los Alamos National Laboratory (LANL), Shawn M. Jones, Ph.D. student and Graduate Research Assistant at LANL, Herbert Van de Sompel, Chief Innovation Officer at Data Archiving and Network Services (DANS), and Michael L. Nelson, Professor in the Computer Science Department at Old Dominion University (ODU).

Links on the web break all the time. We frequently experience the infamous “404 – Page not found” message, also known as “a broken link” or “link rot.” Sometimes we follow a link and discover that the linked page has significantly changed and its content no longer represents what was originally referenced, a scenario known as “content drift.” Both link rot and content drift are forms of “reference rot”, a significant detriment to our web experience. In the realm of scholarly communication where we increasingly reference web resources such as blog posts, source code, videos, social media posts, datasets, etc. in our manuscripts, we recognize that we are losing our scholarly record to reference rot.

Robust Links background

As part of the Andrew W. Mellon Foundation-funded Hiberlink project, the Prototyping team of the Los Alamos National Laboratory’s Research Library, together with colleagues from EDINA and the Language Technology Group of the University of Edinburgh, developed the Robust Links concept a few years ago to address this problem. Given the renewed interest in the digital preservation community, we have now collaborated with colleagues from DANS and the Web Science and Digital Libraries Research Group at Old Dominion University on a service that makes creating Robust Links straightforward. To create a Robust Link, we need to:

  1. Create an archival snapshot (memento) of the link URL and
  2. Robustify the link in our web page by adding a couple of attributes to the link.

Robust Links creation

The first step can be done by submitting a URL to a proactive web archiving service such as the Internet Archive’s “Save Page Now”, Perma.cc, or archive.today. The second step guarantees that the link retains the original URL, the URL of the archived snapshot (memento), and the datetime of linking. We detail this step in the Robust Links specification. With both done, we truly have robust links with multiple fallback options. If the original link on the live web is subject to reference rot, readers can access the memento from the web archive. If the memento itself is unavailable, for example, because the web archive is temporarily out of service, we can use the original URL and the datetime of linking to locate another suitable memento in a different web archive. The Memento protocol and infrastructure provides a federated search that seamlessly enables this sort of lookup.
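As a rough illustration of these two steps, the sketch below requests a snapshot from the Internet Archive’s “Save Page Now” endpoint and then decorates the link with the data-versionurl and data-versiondate attributes described in the Robust Links specification. It assumes Save Page Now redirects to the new snapshot; consult the specification and the service described below for the authoritative details.

```python
# Minimal sketch of robustifying a link; see the Robust Links spec for the real rules.
import datetime
import requests

def robustify(url, link_text):
    # Step 1: request an archival snapshot (memento) of the URL.
    resp = requests.get("https://web.archive.org/save/" + url, timeout=120)
    memento_url = resp.url  # final URL after redirects; assumed to be the snapshot
    version_date = datetime.date.today().isoformat()
    # Step 2: keep the original URL, the memento URL and the datetime of linking.
    return (f'<a href="{url}" data-versionurl="{memento_url}" '
            f'data-versiondate="{version_date}">{link_text}</a>')

print(robustify("https://netpreserve.org/", "IIPC website"))
```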

Robust Links web service.

To make Robust Links more accessible to everyone, we provide a web service to easily create Robust Links. To “robustify” your links, submit the URL of your HTML link to the web form, optionally specify a link text, and click “Robustify”. The Robust Links service creates a memento of the provided URL either with the Internet Archive or with archive.today (the selection is made randomly). To increase robustness, the service utilizes multiple publicly available web archives and we are working to include additional web archives in the future. From the result page after submitting the form, copy the HTML snippet for your robust link (shown as step 1 on the result page) and paste it into your web page. To make robust links actionable in a web browser, you need to include the Robust Links JavaScript and CSS in your page. We make this easy by providing an HTML snippet (step 2 on the result page) that you can copy and paste inside the HEAD section of your page.

Robust Links web service result page.

Robust Links sustainability

During the implementation of this service, we identified two main concerns regarding its sustainability. The first issue is the reliable inclusion of the Robust Links JavaScript and CSS needed to make Robust Links actionable. Specifically, we were looking for a feasible approach to improve the chances that both files are available in the long term, can continuously be maintained, and that their URIs persistently resolve to the latest version. Our approach is two-fold:

  1. we moved the source files into the IIPC GitHub repository so they can be maintained (and versioned) by the community and served with the correct MIME type via GitHub Pages and
  2. we minted two Digital Object Identifiers (DOIs) with DataCite, one to resolve to the latest version of the Robust Links JavaScript and the other to the CSS.

The other sustainability issue relates to the Memento infrastructure used to automatically access mementos across web archives (the second fallback mentioned above). Here we rely on the fact that LANL and ODU, both IIPC member organizations, continue to maintain the Memento infrastructure.

Because of limitations with the WordPress platform, we unfortunately cannot demonstrate robust links in this blog post. However, we have created a copy with robustified links hosted at https://robustlinks.mementoweb.org/demo/IIPC/robust_links_blog.html. In addition, our Robust Links demo page showcases how robust links are actionable in a browser via the included CSS and JavaScript. We have also created an API for machine access to our Robust Links service.

Robust Links in action.

Acknowledgements and feedback

Lastly, we would like to thank DataCite for granting two DOIs to the IIPC for this effort at no cost. We are also grateful to ODU’s Karen Vaughan for her help minting the DOIs.

For feedback, comments or questions, please do not hesitate to get in touch (martinklein0815[at]gmail.com)!

Relevant URIs

https://robustlinks.mementoweb.org/
https://robustlinks.mementoweb.org/about/
https://robustlinks.mementoweb.org/spec/
https://robustlinks.mementoweb.org/api-docs/

The Danish Coronavirus web collection – Coronavirus on the curators’ minds

By Sabine Schostag, Web Curator, The Royal Danish Library

Introduction – a provocative cartoon

In a sense, the story of Corona and the national Danish Web Archive (Netarchive) starts at the end of January 2020 – about six weeks before Corona came to Denmark. A cartoon by Niels Bo Bojesen in the Danish newspaper Jyllands-Posten (2020-01-26), showing the Chinese flag with a circle of yellow coronaviruses instead of the stars, caused indignation in China and captured attention worldwide. We focused on collecting reactions on different social media and in the international news media. Particularly on Twitter, a seething discussion arose, with vehement comments and memes about Denmark.

From epidemic to pandemic

After that, the curators again focused on the daily routines in web archiving, as we believed that Corona (Covid-19) was a closed chapter in Netarchive’s history. But this was not the case. When the IIPC Content Development Working Group launched the Covid-19 collection in February, the Royal Danish Library contributed the Danish seeds.

Suddenly, the coronavirus arrived in Europe and the first infected Dane came home from a skiing trip in Italy. The epidemic turned into a pandemic. On March 12, the Danish Government decided to lock down the country: all public employees were sent to their home offices and the borders were closed. Not only did the public sector shut down; trade and industry, shops, restaurants, bars etc. had to close too. Only supermarkets were still open, and people in the health care sector had to work overtime.

While Denmark came to a standstill, so to speak, the Netarchive curators worked at full throttle on the coronavirus event collection. Zoom became the most important work tool for the following 2½ months. In daily Zoom meetings, we coordinated who worked on which facet of this collection. To put it briefly, we curators had coronavirus on our minds.

Event crawls in Netarchive

The Danish Web Archive crawls all Danish news media at frequencies ranging from several times daily to once a week, so there is no need to include news articles in an event crawl. Thus, with an event crawl we focus on increased activity on social media, blog articles, new sites emerging in connection with the event – and reactions in news media outside Denmark.

Coronavirus documentation in Denmark

The Danish web collection on coronavirus in Denmark is part of a general documentation of the corona lockdown in Denmark in 2020. This documentation is a cooperation between several cultural institutions: the National Archives (Rigsarkivet), the National Museum (Nationalmuseet), the Workers Museum (Arbejdermuseet), local archives and, last but not least, the Royal Danish Library. The corona lockdown documentation was planned in two steps: the “here and now” collection of documentation during the corona lockdown, and a more systematic follow-up collecting materials from authorities and public bodies.

“Days with Corona” – a call for help

All Danes were asked to contribute to the corona lockdown documentation, for instance by sending photos and narratives from their daily life under the lockdown. “Days with Corona” is the title of this part of the documentation of the Danish Folklore Archives run by the National Museum and the Royal Library.

Netarchive also asked the public for help by nominating URLs of web pages related to coronavirus, social media profiles, hashtags, memes and any other relevant material.

Help from colleagues

Web archiving is part of the Department for Digital Cultural Heritage at the Royal Library. Almost all colleagues from the department were able to continue their everyday work from their home offices. Many colleagues from other departments were not able to do so. Some of them helped the Netarchive team by nominating URLs, as this event crawl could keep curators busy for more than 7½ hours a day. We used a Google spreadsheet for all nominations (fig. 1).

Fig. 1 Nomination sheet for curators and colleagues from other departments, and a call for contributions.

The Queen’s 80th birthday

On April 16, Queen Margrethe II celebrated her 80th birthday. One of the first things she did after the Corona lockdown, on March 13, was to cancel all her birthday celebration events. In a way, she set a good example, as everybody was asked not to meet in groups of more than ten people; ideally, we should only socialize with members of our own household.

As part of the Corona event crawl, we collected web activity related to the Queen’s birthday, which mainly consisted of reactions on social media.

The big challenge – capturing social media

Knowledge of the coronavirus Covid-19 changes continuously. Consequently, authorities, public bodies, private institutions and companies frequently change the information and precaution rules on their webpages. We try to capture as many of these changes as possible. Companies and private individuals offering safety gear for protection against the virus were another facet of the collection. However, capturing all relevant activity on social media was much more challenging than the frequent updates on traditional web pages. Most of the social media platforms use technologies that Heritrix (used by Netarchive for event crawling) is not able to capture.

Fig. 2 The Queen’s speech to the Danes on how to cope with the corona crisis. This was the second time in history (the first was during World War II) that a royal head of state addressed the nation outside the annual New Year’s Eve speech.

More or less successfully, we tried to capture content from Facebook, TikTok, Twitter, YouTube, Instagram, Reddit, Imgur, Soundcloud, and Pinterest. Twitter is the platform we are able to crawl with Heritrix with rather good results. We collect Facebook profiles with an account at Archive-It, as they have a better set of tools for capturing Facebook. With frequent quality assurance and follow-ups, we also get rather good results from Instagram, TikTok and Reddit. We capture YouTube videos by crawling the watch URLs with a specific configuration using youtube-dl. One of the collected YouTube videos comes from the Royal family’s YouTube channel: the Queen’s address to the people on how to behave to prevent or limit the spread of the coronavirus (https://www.youtube.com/watch?v=TZKVUQ-E-UI, Fig. 2).
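For the YouTube watch URLs, the sketch below shows the general youtube-dl approach; the options shown are illustrative, not Netarchive’s actual configuration.

```python
import youtube_dl

options = {
    "outtmpl": "%(id)s.%(ext)s",   # store the file under its YouTube ID
    "writeinfojson": True,         # keep the video metadata alongside it as JSON
}
with youtube_dl.YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=TZKVUQ-E-UI"])
```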

As Heritrix has problems with dynamic web content and streaming, we also used Webrecorder.io, although we have not yet implemented this tool in our harvesting setup. However, captures with Webrecorder.io are only drops in the ocean. The use of Webrecorder.io is manual: a curator clicks on all the elements of a page we want to capture. An example is a page on the BBC website with a video of the reopening of Danish primary schools after the total lockdown (https://www.bbc.com/news/av/world-europe-52649919/coronavirus-inside-a-reopened-primary-school-in-the-time-of-covid-19, Fig. 3). There is still an issue with ingesting the resulting WARC files from Webrecorder.io into our web archive.

Danes have produced a range of podcasts on coronavirus issues, and we crawled the podcasts we had identified. We get good results when we have the URL of an RSS feed, which we crawl with XML extraction.
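A small sketch of that RSS-based approach (the feed URL is hypothetical and the exact Netarchive setup differs): parse the feed and list the audio enclosure URLs that can then be handed to the crawler.

```python
import feedparser

feed = feedparser.parse("https://example.dk/corona-podcast/feed.xml")  # hypothetical feed
for entry in feed.entries:
    for enclosure in entry.get("enclosures", []):
        print(entry.get("title", ""), enclosure.get("href", ""))
```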

Fig. 3 Crawled with Webrecorder.io to get the video.

Capture as much as possible – a broad crawl

Netarchive runs up to four broad crawls a year. We launched our first broad crawl for 2020 right at the beginning of the Danish corona lockdown, on March 14. A broad crawl is an in-depth snapshot of all .dk domains and all other top-level domains (TLDs) where we have identified Danish content. A side benefit of this broad crawl might be getting corona-related content into the archive – content which the curators do not find with their various methods. We identify content both with keyword searches and with a variety of link scraping tools.

Is the coronavirus-related web collection of any value to anybody?

In accordance with the Danish personal data protection law, the public has no access to the archived web material. Only researchers affiliated with Danish research institutions can apply for access in connection with specific research projects. We have already received an application for one research project dealing with values in the Covid-19 communication. We hope that our collection will inspire more research projects.

The Croatian Web Archive – what’s new?

The Croatian Web Archive (Hrvatski arhiv weba, HAW), launched in 2004, is open access. To celebrate its 15th anniversary, the National and University Library in Zagreb hosted the IIPC General Assembly and the Web Archiving Conference in June 2019. HAW has been the central point in Croatia for researching website development (.hr domain) and the HAW Team has also been organising training for librarians. One of HAW’s most recent projects was the development of the new portal.


By Karolina Holub, Library Adviser at the Croatian Digital Library Development Centre, Croatian Institute for Librarianship, Ingeborg Rudomino, Senior Librarian at the Croatian Web Archive, & Marta Matijević, Librarian at the Croatian Web Archive (National and University Library in Zagreb)

June 2019 – June 2020

It’s been more than a year since the National and University Library in Zagreb (NSK) hosted the IIPC General Assembly and Web Archiving Conference, which we remember with nostalgia.

Last year was a very busy year for the Croatian Web Archive (HAW) and we would like to share some of the key projects that we have been working on.

New portal design

The highlight of the past year was the launch of the new HAW portal.

Croatian Web Archive (HAW)

It was a complex project that took two years – from the initial idea to the launch of the portal in February 2020. The portal was developed and is maintained by NSK website developers and the HAW team, and is built on a customized WordPress theme. Since the new portal had to be integrated with the database of archived content, which is maintained by our partner, the University of Zagreb University Computing Centre (SRCE), a lot of coding was required to connect the portal with the archive database and ensure that everything works properly and smoothly.

Below you can see screenshots of our previous portals, from 2006 and from 2020:

HAW’s website from 2006 until 2011

HAW’s website from 2011 until 2020

So, what’s new?

The most important objective was to put the search box in focus for all types of crawls and give users an easier way to find a resource. Because of the diverse ways of searching, our goal was to have a clear distinction between selective crawls (which are indexed and can be searched by keyword, by any word in the title or URL, or via advanced search) and domain crawls (which can only be searched by entering the full URL). A valuable addition to this version of the portal is the set of basic metadata elements that accompany each resource with a catalogue record available in the portal.


Archived resource with the basic metadata elements (available also via library catalogue)

Additionally, the browsing of subject categories has been expanded with subject subcategories.

The visibility of the thematic collections has been improved by placing them on the title page. A new feature, In Focus, has also been added to highlight, in the form of blog posts, some of the most important or interesting events and anniversaries happening in the country, the city or at the Library. This feature is available only in the Croatian version of the portal. The central part of the homepage features the New in HAW and Gone from the web sections, where users can browse publications that are new or that are no longer available on the live web. The About HAW page features a timeline marking all the important dates in the history of HAW.

Some parts of the new portal have largely remained the same with only slight improvements to make them more user-friendly and up to date. More information about Selection criteria, National .hr domain crawls, Statistics, Bibliography, FAQ etc. can be found in the footer.

The portal is also available in English.

New thematic collections

During this one-year period, we have been working on six thematic collections. Some of them are already available and others are still ongoing:

Elections for the President of the Republic of Croatia 2019-2020

At the end of 2019, presidential elections were held in Croatia. The thematic crawl was conducted in January and the content is publicly available as part of this thematic collection.

Rijeka – European Capital of Culture 2020

The Croatian city of Rijeka is the European Capital of Culture 2020. All content related to this event, during this challenging time, will be harvested; we are still collecting it.

Croatian Presidency of the Council of the European Union

Croatia chaired the Council of the European Union from January to June 2020. We are finishing this thematic collection and it will soon be publicly available on the HAW portal.

COVID-19

Our largest thematic collection so far is definitely COVID-19, which is still ongoing. We have involved the public in collecting the content by inviting nominations related to the coronavirus. In this thematic collection, we follow the events beginning with the onset of the coronavirus in the Republic of Croatia and the world, as featured on Croatian portals, blogs and articles – from the outbreak of the coronavirus, through the general lockdown, to the gradual return to normality in which we now find ourselves.

Archived website (19.03.2020)

2020 Zagreb earthquake

On March 22, just a few days after the start of the coronavirus lockdown in Croatia, Zagreb was hit by its biggest earthquake in 140 years, causing numerous injuries and extensive damage. The Croatian Web Archive immediately started collecting content about this disaster. This thematic collection is publicly available on the HAW portal.

Archived website (15.04.2020) (photo by HINA; Damir Senčar)

2020 Parliamentary Elections

When the spread of the coronavirus was believed to be under control, Croatia held parliamentary elections on July 5. The content for this collection will be collected until the new Croatian Parliament is constituted.

In May of this year, we started cataloguing thematic collections at the collection level. We have also contributed the Croatian content to the IIPC Coronavirus (Covid-19) Collection.

Annual .hr crawl

In December 2019, we conducted the 9th annual domain crawl and collected 119 million resources amounting to 9.3 TB.

HAW has also started the installation and configuration of tools for indexing and enabling full-text search of domain and thematic crawls: Webarchive-Discovery for parsing and indexing WARC files, Apache Solr for indexing and searching text content, and the SHINE web interface for index search and analysis. We are still in the testing phase and only a part of the existing crawled content is indexed.
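Once WARC content has been indexed into Solr by Webarchive-Discovery, full-text queries can be run against the index. The sketch below is only an illustration: the Solr core name and the field names (content, crawl_date, url, title) are assumptions and should be checked against the schema Webarchive-Discovery actually produces.

```python
import requests

SOLR = "http://localhost:8983/solr/haw"   # hypothetical Solr core

params = {
    "q": 'content:potres AND crawl_date:[2020-03-22T00:00:00Z TO *]',  # "potres" = earthquake
    "fl": "url,title,crawl_date",
    "rows": 10,
    "wt": "json",
}
response = requests.get(SOLR + "/select", params=params, timeout=30).json()
for doc in response["response"]["docs"]:
    print(doc.get("crawl_date"), doc.get("url"))
```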

Testing Web Curator Tool for new collaborative processes – Local Web Crowd crawls

A new development phase is the collaboration with public libraries on crawling their local history collections, for which we are testing the Web Curator Tool. We expect the first results by the end of November this year.

What’s next?

In the coming months, we will be working on enabling more advanced use of HAW’s content to better suit researchers, starting with the creation of datasets from HAW collections. We will also prepare guidelines for using the archived content on HAW’s portal. In addition, we are planning to update our training material in line with the new IIPC training material. In the meantime, we invite you to explore our new portal.

Documenting COVID-19 and the Great Confinement in Canada

By Sylvain Bélanger, Director General, Transition Team, Library and Archives Canada and Treasurer, International Internet Preservation Consortium

It seemed like it happened overnight: suddenly we were told to work from home and limit our physical interactions with people outside our household until further notice. The information was changing and evolving very rapidly, and as we started seeing the rise in COVID-19-related cases globally, the anxiety among colleagues and employees was rising as well. Business rapidly ground to an almost complete halt, and only essential services would continue to operate, with strict controls and restrictions.

Spanish Flu and the Great Confinement of 2020

Even during these early days, in these times of uncertainty, a group of individuals saw a parallel between the current situation and the period of the Spanish Flu a century earlier. Thinking ahead to fifty years from now, this group was asking the question: how will future generations know about this period of time, the Great Confinement of 2020 as they may call it, or the time of great creativity, or perhaps the time the Internet became our lifeline? Turning the clock back one hundred years to the period of the Spanish Flu has given us hints. Let’s not forget that the tragedy of the early 1900s was documented through newspapers, diaries, photographs, and publications detailing the fight against the Spanish Flu and its aftermath.

In 2020, when social media and websites are the key means citizens use to document events and stay informed, how do we capture such ephemeral material? Does any country have the answer? Isn’t that the question we often ask ourselves?

The importance of web archiving

Screenshot from the Public Health Agency of Canada website.

This period has given all of us an opportunity to educate news publishers, citizens, and government decision makers about the work done by web archiving teams across Canada and around the world. The efforts of the IIPC have been pushed to the forefront in this crisis, and have helped us demonstrate the importance of preserving web content for future generations.

In Canada the work entails coordinating efforts with other governmental institutions as well as with university libraries and provincial/territorial archives to limit duplication of effort. At Library and Archives Canada (LAC), to ensure a proper reflection of Canadian society, we have captured over 662,000 Tweets with hashtags such as #covidcanada, #covid19canada, #canadalockdown and #canadacovid19, as part of over 38 million digital assets collected for COVID-19 in 2020. Of that, a little over 87% of the content is non-governmental, from media and non-media web resources selected for the COVID-19 collection. This includes 33 Canadian news and media sites collected daily, to ensure we capture a robust sample of the published news on COVID-19. Added to that are non-media web resources that bring the overall LAC seed list to over 900 resources. Total data collected to date is a little more than 3.09 TB at LAC alone.

Documenting the Canadian response

In addition to our web archiving program, LAC librarians have noticed an increase in books being published about the crisis. This has been measured through our ISBN team observing an increase in authors requesting ISBNs for books about various aspects of the pandemic. In addition, LAC will document the Government of Canada’s response to the COVID-19 pandemic through our Government Records Disposition Program. In this way the government’s decision-making on COVID-19 and its impact on Canadians will be acquired and preserved by LAC for present and future generations. Also, our Private Archives personnel are monitoring the activities, responses and reactions of individuals, communities, organizations and associations within their respective portfolios. LAC will endeavour to acquire documents about the pandemic when discussing possible acquisitions with current and potential donors and when evaluating offers. Descriptions in archival fonds will now highlight COVID-19 content where appropriate.

The efforts undertaken to date at LAC are meant to document the Canadian response. Are our efforts enough to help citizens 100 years from now understand the times we were living in, and how we responded to and tackled the challenges of COVID-19? Only time will tell whether this is enough, or whether we need to do more work to truly document the historical times we live in.

IIPC Content Development Group’s activities 2019-2020

By Nicola Bingham, Lead Curator Web Archives, British Library and Co-Chair of the IIPC Content Development Working Group

Introduction

I was delighted to present an update on the Content Development Group’s (CDG) activities at the 2020 IIPC General Assembly (GA) on behalf of myself, Alex and the curators that have worked so hard on collaborative collections over the past year.

Socks, not contributing to Web Archiving

Although it was disappointing not to have been in Montreal for the GA and Web Archiving Conference (WAC), there are many advantages to attending a conference remotely. Apart from the cost and time savings, it meant that many more staff members from our organisations could attend. I liked the fact that I could see many “old” web archiving friends online, and it did feel like the same friendly, enthusiastic, innovative environment that is normally fostered at IIPC events. I was also delighted to see some of the attendees’ pets on screen, although it did highlight that other people’s cats are generally much more affectionate than my own, who has, I have to say, contributed little to the field of web archiving over the years, although he did show a mild interest in Warcat.

Several things become clear when you are tasked with pre-recording a presentation with a time limit of 2 to 3 minutes. Firstly, it is extremely difficult to fit everything you need to say into such a short space of time; secondly, what you do want to say must be tightly scripted – although this does have the advantage that there is no room for the pauses or “errs” that can sometimes pepper my in-person presentations. Thirdly, recording even a two-minute video calls for a surprising number of retakes, taking many hours for no apparent reason. Fourthly, naively explaining these facts to the Programme and Communications Officer leads quite seamlessly to the suggestion of writing a blog post in order that one can be more expansive on the points bulleted in the two-minute presentation….

CDG Collection Update

Since our last General Assembly in Zagreb, in June 2019, the CDG has continued working on several established collections, as well as two new ones:

  • The International Cooperation Organizations Collection was initiated in 2015 and is led by Alex Thurman of Columbia University Libraries. It previously consisted of all known active websites in the .int top-level domain (available only to organizations created by treaties), but was expanded to include a large group of similar organizations with .org domain hosts, and renamed Intergovernmental Organizations this year. This increased the collection from 163 to 403 intergovernmental organizations, all of which will continue to be crawled each year.
  • The National Olympic and Paralympic Committees collection, led by Helena Byrne of the British Library, was initiated in 2016 and consists of the websites of national Olympic and Paralympic committees and associations, as identified from the official listings of these groups found on the official sites http://www.olympic.org and http://www.paralympic.org.
  • Online News Around the World is led by Sabine Schostag of the Royal Danish Library. This collection of seeds was first crawled in October 2018 to document a selection of online news from as many countries as possible. It was crawled again in November 2019. The collection was promoted at the Third RESAW Conference, “The web that was: archives, traces, reflections”, in Amsterdam in June 2019 and at the IFLA News Media Conference at Universidad Nacional Autónoma de México, Mexico City, in March 2020.
  • New in 2019, the CDG undertook a Climate Change Collection, led by Kees Teszelszky of the National Library of the Netherlands. The first crawl took place in June, with a final crawl shortly after the UN Climate summit in September 2019.
  • New in 2019, a collection on Artificial Intelligence was undertaken between May and December, led by Tiiu Daniel (National Library of Estonia), Liisi Esse (Stanford University Libraries) and Rashi Joshi (Library of Congress).

Coronavirus (Covid-19) Collection

The main collecting activity in 2020 has been around the Covid-19 Global pandemic. This has involved a huge effort by IIPC members with contributions from over 30 members as well as public nominations from over 100 individuals/institutions.

We have been very careful with scoping rules so that we are able to collect a diverse range of content within the data budget – and Archive-It generously increased the data limit for this collection to 5 TB. Collecting will continue to run, budget permitting, while the event is of global significance.

Publicly available CDG collections can be viewed on the Archive-It website (https://archive-it.org/home/IIPC), and an overview of the collection statistics can be seen below.

CDG Collection statistics. Figures correct as of 15th June 2020. Slide presented at IIPC GA 17th June 2020.

Researcher-use of Collections

The CDG has worked closely with the Research Working Group co-chairs to promote and facilitate use of the CDG collections, which are now available through the Archives Unleashed Cloud thanks to the Archives Unleashed project. The collections have been analysed, and a large number of derivatives are available to researchers for IIPC-led events and/or research projects. For more information about how to access these collections, please refer to the guidelines.

Next Steps/Getting in touch

We would very much welcome new members to the CDG. We will be having an online meeting in the next couple of months which would be an excellent opportunity to find out more. In the meantime, any IIPC member is welcome to suggest and/or lead on possible 2021 collaborative collections. For more information please contact the co-chairs or the Programme and Communications Officer.

Nicola Bingham & Alex Thurman CDG co-chairs

The CDG Working Group at the 2019 IIPC General Assembly in Zagreb.

From pilot to portal: a year of web archiving in Hungary

The National Széchényi Library started a web archiving pilot project in 2017. The aim of the pilot project was to identify the requirements for establishing the Hungarian Internet Archive. In the two years of the pilot phase, some hundred cultural and scientific websites were selected and published with the owners’ permission. The Hungarian Web Archive (MIA) was officially launched in 2017. The Library joined the IIPC in 2018 and the Hungarian Web Archive was first introduced at the General Assembly in Wellington in 2018. Last year, the achievements of the project were presented at the Web Archiving Conference (WAC) in Zagreb, in June 2019. This blog post offers a summary of some key developments since the 2019 conference.


By Márton Németh, Digital librarian at the National Széchényi Library, Hungary

In just about a year, we moved from a pilot project to officially launching our web archive, running a comprehensive crawl and creating special collections. In May 2020, the Hungarian parliament passed modifications to the Cultural Law that allow the Library to run web archiving activities as part of its basic service portfolio. Over the past year we have also organised training and participated in various collaborative initiatives.

Conferences and collaborations

In the summer, just after the Zagreb conference, we exchanged experiences with our Czech and Slovak colleagues about the current status and major development points of web archiving projects in the Czech Republic, Slovakia and Hungary at the Visegrad 4 Library Conference in Bratislava. Our presentation is available here. In the autumn, at the annual international digital preservation conference in Bratislava, we elaborated on our thoughts about the potential use of microdata in a library environment. The presentation can be downloaded here.

At the Digital Humanities 2020 conference in Budapest, Hungary, we organized a whole web archiving session with presentations and panel discussions, together with Marie Haskovcová from the Czech National Library, Kees Teszelszky from the National Library of the Netherlands, Balázs Indig from the Digital Humanities Research Centre of Eötvös Loránd University and Márton Németh from the National Széchényi Library. The main aim was to put a spotlight on digital humanities research activities in the web archiving context. Our presentation is available here.

Training

Our annual workshop at the National Széchényi Library focused on the metadata enrichment of web archives; crawling and managing local web content in university library and city library environments; crawling and managing online newspaper articles; and setting the limits of web archiving in research library environments.

We also ran several accredited training courses for Hungarian librarians and summarized our experiences in the field of web archiving education in an article published by Emerald. Membership in the IIPC Training Working Group has offered us valuable experience in this field.

Domain crawl and new portal

We ran our second comprehensive harvest of a large segment of the Hungarian web domain at the end of 2019. The crawler started from 246,819 seed addresses and crawled 110 million URLs in less than eight days, amounting to 6.4 TB of storage.

Our original project website was the first repository of resources related to web archiving in Hungarian. In 2019 we built a new portal. This new website serves as a knowledge base for the web archiving field in Hungary. Beyond an introduction to the web archive and to the project, separate groups of resources (informational materials, documents, etc.) are available for everyday users, content owners, professional experts and journalists. It is available at https://webarchivum.oszk.hu.

webarchivum.oszk.hu

We created a new sub-collection in 2019-2020 on the Francis II Rákóczi Memorial Year at the National Széchényi Library (NSZL), within the framework of the Public Collection Digitization Strategy. Its primary goal was to present the technology of web archiving and the integration of the web archive with other digital collections through a demo application. The content focuses on the webpages and websites related to the Memorial Year, to the War of Independence, to the Prince and to his family. Furthermore, it contains born-digital or digitized books from the Hungarian Electronic Library, articles from the Electronic Periodical Archives, and photos, illustrations and other visual documents from the Digital Archive of Pictures. The service is available at the following address: http://rakoczi2019.webarchivum.oszk.hu.

rakoczi2019.webarchivum.oszk.hu

Legislation and new collections

In May 2020 the Hungarian parliament passed modifications to the Cultural Law that entitle the National Széchényi Library to run web archiving activities as part of its basic service portfolio. Legal deposit of web materials will also be established. The corresponding governmental and ministerial decrees will appear soon; all the law modifications and decrees will be in effect from 1 January 2021.

We made our first experiment in harvesting materials from 700 Instagram pages with more than 100,000 posts using the Webrecorder software. We are also running event-based harvests about COVID-19, the Summer Olympic Games and the Paris Peace Conference (1919-1920), and we are joining the corresponding international IIPC collaborative collection development projects.

Next steps

Supported by the framework of the Public Collection Digitization Strategy, we have started to develop a collaboration network with various regional libraries in Hungary in order to collect local materials for the Hungarian Web Archive. We hope to summarize our first experiences at our next annual workshop in the autumn and to further develop our joint collection activities.