Feed aggregator

DuraSpace News: ARKs in the Open - Project Update #1

planet code4lib - Wed, 2018-02-28 00:00

As announced last week, the California Digital Library (CDL) and DuraSpace are collaborating to build an open, international community around Archival Resource Keys (ARKs) and their use as persistent identifiers in the open scholarly ecosystem. We’re calling the project ARKs in the Open.

District Dispatch: #FundLibraries infographic now an audio description file, thanks to the Colorado Talking Book Library

planet code4lib - Tue, 2018-02-27 17:50

Since the President’s budget proposal dropped (less than two weeks ago), library advocates have been voicing their support for the Institute of Museum and Library Services. Over 15,000 emails have been sent via the ALA Action Center.

In order to help patrons of the national network of Talking Book and Braille Libraries, also known as the LBPH Network, the Colorado Talking Book Library (CTBL) has created an audio description file for the ALA infographic on the legislative process and shared both with LBPH network libraries so they can inform their patrons.

“Many libraries in the network are depending on Library Services and Technology Act (LSTA) funding and our patrons speak with their legislators about what the libraries mean to them. This is a great resource for understanding the entire legislative process,” says Debbi MacLeod, director of the CTBL. “We posted the Digital Talking Book version to BARD, the audio download service for the network. LBPH libraries partner with the Library of Congress, National Library for the Blind and Physically Handicapped (NLS).”

According to IMLS’s newly released Five-Year Evaluations, individuals with disabilities are the second largest beneficiary group of the grants to states program. Thanks to LSTA and other IMLS funds, many state libraries are able to support Libraries for the Blind and Physically Handicapped or Talking Book services, which provide access to reading materials in alternate formats. We often hear about how life-changing these services can be, and although there is federal coordination behind some of these offerings, there are no dedicated federal funding streams for them at the local and state level. IMLS Grants to States funding often fills that gap.

“The funding also supports the large print collection and our large print resource sharing program we have with other libraries in CO,” says MacLeod. “Our patrons routinely tell us: ‘I really look forward to getting my books, they are my lifeline,’ and ‘I thought I would never read again, but CTBL changed that.'”

We are very grateful to Director Debbi MacLeod as well as CTBL’s Studio Director Tyler Kottmann and all of the CTBL staff for recording this infographic to help NLS patrons engage in the legislative process. We’ll have more news soon about the next steps (in particular, step 3 on the infographic!). In the meantime, please continue to voice your support for libraries directly to your members of Congress via email and social media.

The post #FundLibraries infographic now an audio description file, thanks to the Colorado Talking Book Library appeared first on District Dispatch.

Library of Congress: The Signal: Rethinking LC for Robots: From Topics to Actions

planet code4lib - Tue, 2018-02-27 16:57

Have you noticed that our LC for Robots page has a new look this month? We integrated feedback from visitors, discussion, and a card sorting exercise to consolidate resources for machine-readable access to Library of Congress digital collections. We’re looking for your feedback, but first, learn more about how we approached this redesign.

In September 2017, we launched Library of Congress Labs with help from our friends in the Library’s Web Services. Bundled in the mix was our LC for Robots page. It was envisioned as a stepping off point for research using digital collections. The page offered access points for multiple data sets and APIs available at the Library of Congress. We wished to highlight the variety of resources and included tutorials and documentation to guide or prompt their use. And with the launch of this site, these resources were presented together for the first time on a single Library of Congress page.

The way LC for Robots looked two weeks after launch in September 2017

Our initial layout of resources was topical, based on the collections and subject divisions at the Library of Congress. For example, we called out separately the APIs for Chronicling America, the American Archive of Public Broadcasting, and the loc.gov JSON API. Affiliated Jupyter notebook tutorials and documentation were also arranged in relation to each collection.
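
As a rough illustration of what calling out an API means in practice, the short Python sketch below queries the Chronicling America API named above. The endpoint, parameters, and result fields shown are the publicly documented ones as of this writing; check the current documentation before building on them.

# Minimal sketch: search digitized newspaper pages via the Chronicling America API.
# Endpoint, parameters, and result fields are as publicly documented at the time
# of writing; verify against the current documentation before relying on them.
import requests

def search_pages(term, rows=5):
    url = "https://chroniclingamerica.loc.gov/search/pages/results/"
    params = {"andtext": term, "rows": rows, "format": "json"}
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for item in search_pages("public broadcasting").get("items", []):
        print(item.get("title"), item.get("date"))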

Building up and Breaking down LC for Robots 

Breaking down LC for Robots

From the start, we anticipated that we would regularly add other data sets and tutorials to LC for Robots. Still, it became more difficult to pinpoint what we were presenting even as we began to incorporate additional resources. For example, how might a visitor compare tutorials in Jupyter notebooks to decide which approach they wanted to take to accessing images? They would have to scan the page to find and explore them. It was time to rethink what we wanted to communicate based on the audiences we planned to serve and their goals. We discussed new approaches to presenting information on the page in December. We also met with colleagues in Web Services to discuss the general presentation of the Library of Congress Labs site. Part of that discussion focused on our plans to grow with content, including on LC for Robots. We looked ahead and made plans to carve out time in the New Year to assess our approaches with LC for Robots.

In January 2018, visiting software development librarian Laura Wrubel led us through a card sorting exercise. We started by printing out the LC for Robots page. Then, we promptly cut out each section and bullet into their own “cards.” We also had blank strips of paper at hand so that we could add elements or resources missing from the LC for Robots page.

We took 5 minutes to reflect on the feedback we’d received since September and the audiences we sought to serve. With that framing, we arranged the slips to correspond to those needs. Reorganizing the information required some of us to create new headings and groupings. Next, we took turns talking through the audiences we considered, the proposed layout, what might not fit, and what was missing. As you can see below, we shared perspectives about serving our visitors but had different ways of visualizing the best way to arrange Library of Congress resources.

Then, we combined our overlapping ideas to set out a revised layout. At this step, we discussed what each section represented or at whom it was aimed, and captured images to make this revised layout a reality. We ended the session by reflecting on the ways this discussion sparked ideas about better structure for the page and where we might have challenges in the future with increased content. So, what surfaced as key considerations in our attempts to improve the page? 

Laying out our LC for Robots options

Skills and Experience of LC for Robots visitors

Visitors to LC for Robots have offered feedback and expressed different needs and interests in pursuit of their goals. Some visitors seek to get started with data, whether through bulk download or via API. Other visitors wish to integrate Library of Congress APIs in course instruction or in interactive sessions. This set of visitors would be interested in exploring documentation more closely. Another cohort expressed interest in trying these resources; they also signaled a need for more guidance for getting started.

Communicating where to start with specific tasks using a revised grouping

As a group, we focused on 3 main actions that could be supported by the re-arrangement of resources on the LC for Robots page:

  • Help visitors find data to use and download
  • Point directly to APIs and their documentation and context 
  • Highlight several types of support for getting started with guides and Jupyter Notebooks

A-P-I-s and spelling out documentation 

We had originally provided links to API pages and documentation details. Looking more closely at the sections, some of the information began to appear redundant, as it was more appropriately presented with supporting context elsewhere on Library of Congress sites. We took this turn toward brevity because our goals are to showcase resources and offer paths to using them. Rather than LC for Robots becoming a definitive, primary access point, our approach builds on the fantastic work of our Library of Congress colleagues in documenting their collections and APIs.

As a result, we have a revised look!

The revised LC for Robots!

You’ll see that we’ve divided the content into three sections in an effort to help visitors to more quickly find what they’re seeking:

  1. Get Data
  2. APIs
  3. Get Started

Now, we’re asking for your feedback. Please leave a comment below or send us a tweet at @LC_Labs. We’d like to know more about:

  • What encouraged or is prompting you to explore LC for Robots?
  • Have you tried to access digital collections or bulk data?
  • What’s holding you back from digging in further?

Interested in running your own card sorting exercise? See this guide from 18F.

Lucidworks: Fusion 4 Ready for Download

planet code4lib - Tue, 2018-02-27 16:22

We are pleased to announce the release of Fusion 4, our application development platform for creating powerful search-driven data applications.

Fusion 4 is our most significant release to date; we’ve been hard at work to bring you our most feature-rich and production-ready release yet.

Introducing Fusion Apps

Fusion Apps are a logical grouping of all linked Fusion objects. Apps can be exported and shared between Fusion instances, promoting multi-tenant deployment and significantly reducing the time to value for businesses deploying smart search applications. Fusion Objects within Apps can be shared as well, reducing development time and duplication and promoting reusability.

Updates to Fusion AI

We’ve added significant updates to our AI suite. Fusion AI now includes several new features that allow organizations to deliver superior, industry-leading search relevance powered by our AI capabilities:

Experiment Management & A/B Testing

Our new Experiment Management framework provides a full suite of A/B testing tools for comparing different production pipeline configuration variants to determine which pipelines are most successful. This allows tuning of Fusion pipelines for a significant increase in relevancy, click-throughs, and conversions.
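
Conceptually (and purely as an illustration, not the actual Fusion Experiment Management API), an A/B framework of this kind assigns each incoming query deterministically to one pipeline variant and records that variant alongside click events, so click-through rates for the variants can be compared later. A minimal sketch:

# Illustrative only: deterministic traffic splitting between two hypothetical
# query pipeline variants. This is not the Fusion Experiment Management API.
import hashlib

VARIANTS = {"pipeline_a": 0.5, "pipeline_b": 0.5}  # traffic weights, summing to 1.0

def assign_variant(user_id: str) -> str:
    # Hash the user id to a stable point in [0, 1) and pick the matching bucket,
    # so the same user always lands in the same variant across sessions.
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    point = (h % 10_000) / 10_000
    cumulative = 0.0
    for name, weight in VARIANTS.items():
        cumulative += weight
        if point < cumulative:
            return name
    return name  # guard against floating-point rounding at the top of the range

print(assign_variant("user-42"), assign_variant("user-42"))  # same variant both times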

All New Smart Jobs

Smart jobs are pre-configured, tested, and optimized AI jobs for Spark that bring the most popular models and approaches of machine learning to your apps. Our data scientists have tweaked and optimized a couple dozen of these jobs through extensive deployment in both testing and customer production environments.

Just drop them into your query or index pipelines and you’re ready to go. Smart jobs range from clustering and outlier detection to classification, query insights like head-n-tail analysis, content insights like statistically interesting phrases, and user insights like item similarity recommenders.

App Insights

App Insights is our new interface for providing detailed, real-time, customizable dashboards to visualize your App and Query Analytics. Our built-in analytics reports, based on our Smart Jobs, provide key metrics for analyzing query performance.

Refreshed UI and Enhanced App-Centric Workflows

We’ve taken your valuable feedback and overhauled our UI with a fresh new look and feel, optimizing for App development and deployment workflows. Significant updates to our Object Explorer allow visualization of Apps and the intrinsic relationships between shared Fusion objects.

Connectors SDK

Our new Connectors SDK provides a stable API interface for development of custom connectors to ingest data into Fusion. This can be used to augment our suite of 200+ data sources, allowing us to ingest data from any data source.

And Under the Hood

And of course, Fusion 4.0 is powered by Apache Solr 7.2.1 and Apache Spark 2.3.

Webinar: What’s New In Fusion 4

Join Lucidworks SVP of Engineering Trey Grainger for a guided tour of what’s new and improved with Fusion 4. You’ll learn how Fusion 4 lets you build portable apps that can be quickly deployed anywhere, manage experiments for more successful queries, and execute sophisticated custom AI jobs across your data.

Full details and registration.

Learn More

Read the release notes.

Download Fusion now.

The post Fusion 4 Ready for Download appeared first on Lucidworks.

David Rosenthal: "Nobody cared about security"

planet code4lib - Tue, 2018-02-27 16:00
There's a common meme that ascribes the parlous state of security on the Internet to the fact that in the ARPAnet days "nobody cared about security". It is true that in the early days of the ARPAnet security wasn't an important issue; everybody involved knew everybody else face-to-face. But it isn't true that the decisions taken in those early days hampered the deployment of security as the Internet took the shape we know today in the late 80s and early 90s. In fact the design decisions taken in the ARPAnet days made the deployment of security easier. The main reason for today's security nightmares is quite different.

I know because I was there, and to a small extent involved. Follow me below the fold for the explanation.

Making the original ARPAnet work at all was a huge achievement. It was, to a large extent, made possible because its design was based on the End-to-End Principle:
it is far easier to obtain reliability beyond a certain margin by mechanisms in the end hosts of a network rather than in the intermediary nodes, especially when the latter are beyond the control of, and not accountable to, the former.

This principle allowed for the exclusion from the packet transport layer of all functions not directly related to getting the packets from their source to their destination. Simplifying the implementation by pushing functions to the end hosts made designing and debugging the network much easier. Among the functions that were pushed to the end hosts was security. Had the design of the packet transport layer included security, and thus been vastly more complex, it is unlikely that the ARPAnet would ever have worked well enough to evolve into the Internet.

Thus a principle of the Internet was that security was one of the functions assigned to network services (file transfer, e-mail, Web, etc.), not to the network on which they operated.

In the long run, however, the more significant reason why the ARPAnet and early Internet lacked security was not that it wasn't needed, nor that it would have made development of the network harder, it was that implementing security either at the network or the application level would have required implementing cryptography. At the time, cryptography was classified as a munition. Software containing cryptography, or even just the hooks allowing cryptography to be added, could only be exported from the US with a specific license. Obtaining a license involved case-by-case negotiation with the State Department. In effect, had security been a feature of the ARPAnet or the early Internet, the network would have to have been US-only. Note that the first international ARPAnet nodes came up in 1973, in Norway and the UK.

Sometime in the mid-80s Unix distributions, such as Berkeley Unix, changed to eliminate cryptography hooks and implementations from versions exported from the US. This actually removed even the pre-existing minimal level of security from Unix systems outside the US. People outside the US noticed this, which had some influence on the discussions of export restrictions in the following decade.

Commercial domestic Internet started to become available in 1989 in a few areas:
The ARPANET was decommissioned in 1990. Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990, and the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic.

It wasn't widely available until the mid-90s.

In 1991 Phil Zimmermann released PGP. The availability of PGP outside the US was considered a violation of the Arms Export Control Act. The result was:
a grand jury in San Jose, Calif., has been gathering evidence since 1993, pondering whether to indict Zimmermann for violating a federal weapons-export law--a charge that carries a presumptive three-to-five-year sentence and a maximum $1 million fine. The investigation is being led by Silicon Valley Assistant U.S. Attorney William P. Keane; a grand jury indictment must be authorized by the Justice Department in Washington.

In 1996 the investigation was dropped without filing any charges. But it meant that in the critical early days of mass deployment of Internet services everyone developing the Internet and its services knew that they were potentially liable for severe penalties if they implemented cryptography and it appeared overseas without a license.

Getting a license got a little easier in 1992:
In 1992, a deal between NSA and the [Software Publishers Association] made 40-bit RC2 and RC4 encryption easily exportable using a Commodity Jurisdiction (which transferred control from the State Department to the Commerce Department).

Exporting encryption software still needed a license. It was easier to get, but only for encryption everyone understood was so weak as to be almost useless. Thus as the Internet started taking off US developers of Internet applications faced a choice:
  • Either eliminate cryptography from the product,
  • Or build two versions of the product, the Export version with 40-bit Potemkin encryption, and the Domestic version with encryption that actually provided useful security.
Starting in 1996, the export restrictions were gradually relaxed, though they have not been eliminated. But as regards Internet applications, by 2000 it was possible to ship a single product with effective encryption.
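
For a rough sense of why 40-bit "Potemkin encryption" was regarded as almost useless, compare key spaces. The arithmetic below is back-of-envelope, and the guesses-per-second rate is only an assumed order of magnitude, not a measured figure:

# Back-of-envelope keyspace comparison; the 1e9 guesses/second rate is an
# assumption for illustration, not a measurement of any real attacker.
keys_40 = 2 ** 40        # about 1.1e12 possible keys
keys_128 = 2 ** 128      # about 3.4e38 possible keys
rate = 1e9               # assumed brute-force guesses per second

print(f"40-bit:  {keys_40 / rate / 3600:.1f} hours to exhaust")
print(f"128-bit: {keys_128 / rate / 3.15e7:.1e} years to exhaust")

At that assumed rate a 40-bit key space can be exhausted in well under a day, which is why the Export versions provided little more than the appearance of security.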

The first spam e-mail was sent in 1978 and evoked this reaction:
ON 2 MAY 78 DIGITAL EQUIPMENT CORPORATION (DEC) SENT OUT AN ARPANET MESSAGE ADVERTISING THEIR NEW COMPUTER SYSTEMS. THIS WAS A FLAGRANT VIOLATION OF THE USE OF ARPANET AS THE NETWORK IS TO BE USED FOR OFFICIAL U.S. GOVERNMENT BUSINESS ONLY. APPROPRIATE ACTION IS BEING TAKEN TO PRECLUDE ITS OCCURRENCE AGAIN.

Which pretty much fixed the problem for the next 16 years. But in 1994 lawyers Canter & Siegel spammed the Usenet with an advertisement for their "green card" services, and that December the first commercial e-mail spam was recorded. Obviously, IMAP and SMTP needed security. That month John Gardiner Myers had published RFC1731 for IMAP, and by the following April had published a draft of what became RFC2554. So, very quickly after the need became apparent, a technical solution became available. Precisely because of the end-to-end principle, the solution was constrained to e-mail applications, incrementally deployable, and easily upgraded as problems or vulnerabilities were discovered.

Mosaic, the browser that popularized the Web, was first released in January 1993. It clearly needed secure communication. In SSL and TLS: Theory and Practice Rolf Oppliger writes:
Eight months later, in the middle of 1994, Netscape Communications already completed the design for SSL version 1 (SSL 1.0). This version circulated only internally (i.e., inside Netscape Communications), since it had several shortcomings and flaws. For example, it didn't provide data integrity protection. ... This and a few other problems had to be resolved, and at the end of 1994 Netscape Communications came up with SSL version 2 (SSL 2.0).

SSL stands for the Secure Sockets Layer. SSL 3.0, the final version, was published in November 1996. So, very quickly after the need became evident, Netscape built Export versions of SSL and used them to build Export versions of Mosaic. SSL could clearly be used to protect e-mail as well as Web traffic, but doing so involved exporting cryptography.

In order to be usable outside the US, both Web browsers and Web servers had to at least support 40-bit SSL. The whole two-version development, testing and distribution process was a huge hassle, not to mention that you still needed to go through the export licensing process. So many smaller companies and open source developers took the "no-crypto" option, as Unix had in the 80s. Among them was sendmail, the dominant SMTP software on the Internet. Others took different approaches. OpenBSD arranged for all crypto-related work to be done in Canada, so that crypto was imported rather than exported at the US border.

Thus, for the whole of the period during which the Internet was evolving from an academic network into the world's information infrastructure it was impossible, at least for US developers, to deploy comprehensive security for legal reasons. It wasn't that people didn't care about security, it was because they cared about staying out of jail.

Library Tech Talk (U of Michigan): The Voyages of a Digital Collections Audit/Assessment: Episode 2 (The Pilot Group)

planet code4lib - Tue, 2018-02-27 00:00

The Digital Content & Collections department begins an ambitious audit/assessment of our 280+ digital collections. This is the second in a blog series about the endeavor, noting how we started with a pilot group of collections to assess and the lessons learned.

DuraSpace News: Recording and Slides From CLAW Webinar: Islandora + Fedora 4

planet code4lib - Tue, 2018-02-27 00:00

Last Friday February 23rd the University of Toronto Scarborough hosted an Islandora CLAW + Linked Data Lunch and Learn session. The recording is available here:

Webinar presentations:

0:00:14 - Introduction to Islandora CLAW and Linked Data (Danny Lamb, Technical Lead, Islandora Foundation)

DuraSpace News: REGISTER for ILIDE (Innovative Library in Digital Era) 2018

planet code4lib - Tue, 2018-02-27 00:00
The April 15-18 ILIDE (Innovative Library in Digital Era) International Conference in Jasna, Slovakia, is open for registration. This international conference attracts librarians from all over the world. The aim of the Innovative Library in Digital Era Conference is to present visionary and original ideas based on the extensive experience of the participating experts and institutions.

District Dispatch: ALA joins operation #OneMoreVote

planet code4lib - Mon, 2018-02-26 19:51

ALA has been working closely with allies to support Senate legislation to restore 2015’s strong, enforceable net neutrality rules. The bill is a Congressional Review Act (CRA) resolution from Sen. Ed Markey (D-MA), which would block the Federal Communications Commission’s (FCC) December repeal of net neutrality rules. The CRA currently has bipartisan support from 50 of 100 senators and would be assured of passage if just one more Republican backs the effort. That’s why we are participating in tomorrow’s massive online day of action—including with other net neutrality advocates and companies like Etsy, Vimeo, Medium, Imgur, Slashdot and Tumblr—to get one more Senator on board.

The measure is backed by all 49 members of the Senate Democratic caucus, including 47 Democrats and two independents who caucus with Democrats. Sen. Susan Collins (R-ME) is the only Republican to support the bill so far, but we are hoping that an internet-wide push will secure #onemorevote.

Getting the CRA resolution passed in the Senate may also apply pressure to the House, where a simple majority is needed (218 votes) to push the CRA past leadership to the floor later this spring. That will require winning over more than 20 Republican members of the House. Ultimately, this Congressional action would be subject to Presidential approval.

But getting past this #onemorevote milestone in the Senate is the first step and puts Senators on the record on this critical issue.

With the Federal Register publication of the December repeal of net neutrality regulations, there are now deadlines established for passing a CRA resolution and for filing legal challenges. In light of this, several net neutrality advocates, ranging from Public Knowledge to the Internet Association to state attorneys general, have begun filing suits against the December FCC regulations.

Much is still in play as to when and how these cases will progress, but we have seen how effective courts can be in slowing or stopping attacks on Net Neutrality and we are supportive of the strategies in court.

What you can do:

  • For more information on how the Congressional Review Act works, check out this video from Public Knowledge
  • Contact your members of Congress via the ALA Action Alert to ask them to support the CRA to overturn the FCC’s egregious action
  • Share the #onemorevote ALA Action Alert via your social media networks

The post ALA joins operation #OneMoreVote appeared first on District Dispatch.

District Dispatch: ALA welcomes three Alternative Spring Break students

planet code4lib - Mon, 2018-02-26 16:31

Today, we welcomed a new cohort of University of Michigan students to the Washington Office for a week-long internship in D.C. The internship is a part of the University’s School of Information “Alternative Spring Break” program. The students—Sophia McFadden-Keesling, Adrienne Royce, and Natalia Holtzman—will work on two projects related to the Washington Office.

Sophia and Adrienne will travel around D.C. taking photographs of important libraries, locations, events, and activities. Their photos will help build our photo library. They will also tour the Department of Commerce Research Library, the Consumer Financial Protection Bureau Library, and several university libraries as part of their travels. In addition, they will review the images we have in the photo library and update the alternative text where needed to ensure that it is descriptive.

Natalia will interview all of the ALA policy staff in order to learn about the range of activities and issues in the ALA Washington Office. Her final product will be a draft one-pager that gives a snapshot introduction to the office. She will also publish a blog post about her experience.

All three students are currently working to earn a master’s degree in Information at the University of Michigan. Outside of their studies, Sophia currently works as a cataloging assistant in the William L. Clements Library and as an instructional aide at the School of Information; Adrienne is the Lab Manager and Research Lab Technician at the University of Michigan’s Senior Research Center for Group Dynamics; and Natalia has a Master of Fine Arts with a specialization in creative writing.

The University of Michigan Alternative Spring Break program creates an opportunity for students to engage in a service-oriented integrative learning experience, connects public sector organizations to the knowledge and abilities of students, and facilitates relationships between the School and the greater library community. In addition to ALA, the students are hosted by other organizations and federal agencies such as the Library of Congress, the Smithsonian Institution, and the National Archives. The students get a taste of life here in D.C. and an opportunity to network with information professionals. The Washington Office has participated several times in the past, including 2013, 2015 and 2016.

The post ALA welcomes three Alternative Spring Break students appeared first on District Dispatch.

DuraSpace News: 4Science Awarded OpenAIRE Funding to Increase Open Science Interoperability Features

planet code4lib - Mon, 2018-02-26 00:00

The 4Science proposal has been selected for funding under the OpenAIRE call for services launched last December. The proposed implementation aims to increase the interoperability features supported by the most broadly used platforms in the open science ecosystem for literature repositories, data repositories, journal platforms, and CRIS/RIMS, including DSpace, Dataverse, OJS, and DSpace-CRIS.

The tender by 4Science focuses on two main topics:

LITA: LITA, ALCTS, and LLAMA document on small division collaboration

planet code4lib - Fri, 2018-02-23 21:22

Hi, LITAns.

I’m sharing with you a document on small division collaboration (LITA, LLAMA, and ALCTS) which I encourage you all to read carefully. I am also interested in any thoughts, questions, feelings, or ideas that you may have. 

The context for this document is that, as you may know, LITA, ALA, and membership associations generally have been experiencing declining membership for some time. The resulting budgetary deficits make it difficult for us to sustain services. The Presidents, Vice Presidents, and Executive Directors of LITA, LLAMA, and ALCTS have been discussing our shared challenges in this arena, and imagining how we could reduce duplication and build on our strengths were we to work together, whether through formal collaboration or potentially merging our divisions.

All three division Boards discussed this document on Monday afternoon at Midwinter, and we decided it is worth considering further. The division leadership, myself included, will be regrouping on February 28 to update each other on our Board meetings and discuss next steps.

I want to emphasize that nothing has been decided; this document is only the beginning of a discussion. We will be planning a process and timeline for gathering information and other next steps. This will include an open and public dialogue with you, our members, with numerous opportunities for you to participate across a variety of channels.

I expect you (like us!) have a range of feelings on this topic. I know for a fact that any direction we take will be substantially improved by your creativity and insight. You are welcome to discuss this topic here on LITAblog, as well as privately with me, Executive Director Jenny Levine, or President-Elect Bohyun Kim. You may also submit an anonymous question for the Board as a whole; responses will be collated and addressed here on LITAblog. I look forward to your responses.

On behalf of the LITA Board,

Andromeda Yelton

David Rosenthal: Brief Talk at Video Game Preservation Workshop

planet code4lib - Thu, 2018-02-22 22:30
I was asked to give a brief talk to the Video Game Preservation Workshop: Setting the Stage for Multi-Partner Projects at the Stanford Library, discussing the technical and legal aspects of cooperation on preserving software via emulation. Below the fold is an edited text of the talk with links to the sources.

On the basis of the report I wrote on Emulation and Virtualization as Preservation Strategies two years ago, I was asked to give a brief talk today. That may have been a mistake; I retired almost a year ago and I haven't been following developments in the field closely. But I'll do my best and I'm sure you will let me know where I'm out-of-date. As usual, you don't need to take notes, the text of what follows with links to the sources will go up on my blog at the end of this session.

I don't have much time, so I'll just cover a few technical, legal and business points, then open it up for questions (and corrections).
Technical

First, a point I've been making for a long time. Right now, we're investing resources into building emulations to make preserved software accessible. But we're not doing it in a preservable way. We're wrapping the hardware metadata and the system image in a specific emulator, probably bwFLA or Emularity. This is a bad idea because emulation technology will evolve, and maybe your collaborators want to use a different technology now.

I wrote a detailed post about this a year ago, using the analogy of PDF. We didn't wrap PDFs in Adobe Reader, and then have to re-wrap them all in pdf.js. We exposed the metadata and the content so that, at rendering time, the browser could decide on the most appropriate renderer. I wrote:
The linked-to object that the browser obtains needs to describe the hardware that should be emulated. Part of that description must be the contents of the disks attached to the system. So we need two MimeTypes:
  • A metadata MimeType, say Emulation/MachineSpec, that describes the architecture and configuration of the hardware, which links to one or more resources of:
  • A disk image MimeType, say DiskImage/qcow2, with the contents of each of the disks.
Then the browser can download "emul.js", JavaScript that can figure out how to configure and invoke the appropriate emulator for that particular rendering.
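
To make this concrete, a hypothetical Emulation/MachineSpec object might look something like the Python sketch below. Every field name here is invented for illustration; nothing about this format is standardized, and the MimeType names are just the placeholders suggested above.

# Hypothetical sketch of an Emulation/MachineSpec document as proposed above.
# Field names and values are illustrative placeholders, not a real standard.
import json

machine_spec = {
    "mimetype": "Emulation/MachineSpec",
    "architecture": "x86",
    "cpu_count": 1,
    "memory_mb": 64,
    "display": {"type": "vga", "resolution": "640x480"},
    "disks": [
        {
            "mimetype": "DiskImage/qcow2",
            "role": "boot",
            "href": "https://example.org/images/game-system.qcow2",
        }
    ],
}

# A client like "emul.js" could fetch this description and choose whichever
# emulator best matches the declared hardware at rendering time.
print(json.dumps(machine_spec, indent=2))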

Second, one of the things "emul.js" is going to have to work out is how to emulate the UI devices of the original hardware, and how to communicate this mapping to the user. Almost a half-century ago I was quite good at SpaceWar on the PDP7. The Internet Archive's emulation is impressive, but the UI is nothing like the knobs and switches I used back in the day, let alone the huge round calligraphic CRT. As the desktop and laptop gradually die out, the set of controls available to "emul.js" become sparser, and more different from the originals. This could be an issue for a collaboration in which, for example, one partner wanted a kiosk for access and another wanted a phone.

Third, the eventual user needs more than just some keymappings. VisiCalc for the Apple ][ is usable in emulation only because Dan Bricklin put the reference card up on his website. I defy anyone trained on Excel to figure it out without the card. Games are supposed to be "self-teaching" but this relies on a lot of social context that will be quite different a half-century later. How is this contextual metadata to be collected, preserved and presented to the eventual user?

Fourth, network access by preserved software poses a lot of very difficult problems. If you haven't read Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities by Sandy Clarke, Matt Blaze, Stefan Frei and Jonathan Smith, you really need to. They show that the rate of detection of vulnerabilities in software goes up with time. Exploits for these vulnerabilities on the Internet never go away. Preserved software will have vast numbers of vulnerabilities that are under active exploitation on the Internet. And, worse, it will still have zero-days waiting to be discovered. Adequately firewalling preserved software from the net will be extremely difficult, especially in a collaborative setting where partners want to access it over the net.
Legal

Which brings me to my first legal point. Note that I am not a lawyer, and that the following does not apply to preserving open-source software (thank you, Software Heritage for stepping up to do so).

Software makers disclaim liability for the vulnerabilities in their products. Users in effect disclaim liability for vulnerabilities in things they connect to the net by claiming that they follow "best practice" by keeping them patched up-to-date. We're going to be treading new legal ground in this area, because Spectre is a known vulnerability that cannot be patched in software, and for which there will not be fixed hardware for some time. Where does the liability for the inevitable breaches end up? With the cloud provider? With the cloud user who was too cheap to pay for dedicated hosts?

Archives are going to connect known-vulnerable software that cannot be patched to the net. The firewall between the software and the net will disclaim liability. Are your lawyers happy that you can disclaim liability for the known risk that your preserved software will be used to attack someone? And are your sysadmins going to let you connect known-vulnerable systems to their networks?

Of course, the two better-known legal minefields surrounding old software are copyright and the End User License Agreement (EULA). I wrote about both in my report. In both areas the law is an ass; on a strict interpretation of the law almost anyone preserving closed-source software would incur criminal liability under the DMCA and civil liability under both copyright and contract law. Further, collaborations to preserve software would probably be conspiracies in criminal law, and the partners might incur joint and several liability under civil law. All it would take would be a very aggressive prosecutor or plaintiff and the wrong judge to make these theoretical possibilities real enough to give your institution's lawyers conniptions.

Copyright and EULAs apply separately to each component of the software stack, and to each partner in a collaboration. Games are typically a simple case, with only the game and an OS in the stack. But even then, a three-partner collaboration ends up with a table like this:

Partners' Rights

         A     B     C
OS       C E   C E   C E
Game     C E   C E   C E

(C = copyright, E = EULA)

There are only a few closed-source OS left, so it's likely that all partners have a site license for the OS, giving them cover under both copyright and EULA for what they want to do. But suppose partner A is a national library that acquired the game under copyright deposit, and thus never agreed to an EULA. Partner C purchased the game, so is covered under both (depending on the terms of the EULA). But partner B got the game in the donated papers of a deceased faculty member.

There's a reason why in the LOCKSS system that Vicky Reich & I designed nearly two decades ago each library's LOCKSS box gets its own copy of the journals from the publisher under its own subscription agreement. Subsequently, the box proves to other boxes that they have the same content, so that copying content from one to another to repair damage isn't creating a new, leaked copy. In a collaboration where one partner is responsible for ingesting content, and providing access to the other partners, new leaked copies are being created, potentially violating copyright, and used, potentially violating the EULA.

Even if the law were both clear and sensible, this would be tricky. As things are it is too hard to deal with. In the meantime, the best we can do is to define, promulgate, and adhere to reasonable "best practices". And to lobby the Copyright Office for exemptions, as the MADE (Museum of Arts and Digital Entertainment) and Public Knowledge are doing. They are asking for an exemption to allow museums and archives to run servers for abandoned online multi- and single-player games.
Business

In my panel presentation at the 2016 iPRES I said:
Is there a business model to support emulation services? This picture is very discouraging. Someone is going to have to pay the cost of emulation. As Rhizome found when the Theresa Duncan CD-ROMs went viral, if the end user doesn't pay there can be budget crises. If the end user does pay, it's a significant barrier to use, and it starts looking like it is depriving the vendor of income. Some vendors might agree to cost-recovery charging. But typically there are multiple vendors involved. Consider emulating an old Autodesk on Windows environment. That is two vendors. Do they both agree to the principle of cost-recovery, and to the amount of cost-recovery?

Somehow, all three phases of preservation need to be paid for:
  • Ingest: is expensive and, obviously, has to be paid up-front. That makes it look like a capital investment. The return on the investment will be in the form of future accesses, which are uncertain. This makes it hard to justify, leading to skimping on the expensive parts, such as metadata generation, which leads to less access, which leads to less justification for ingest.
  • Preservation: may not be that expensive each year, but it is an on-going cost whose total is hard to predict.
  • Dissemination: is typically not expensive but it can spike if the content gets popular, as it did with Rhizome's Theresa Duncan CDs. You only discover the cost after it's too late to plan for it.
Most content will not be popular, and this is especially true if it is pay-per-view, thus most content is unlikely to earn enough to pay for its ingest and preservation. Taking into account the legal complexities, charging for access doesn't seem like a viable business model for software archives. But what other sustainable business models are there?

Cynthia Ng: Technical Services: Rationale and Benefits of a Workflow Review

planet code4lib - Thu, 2018-02-22 21:49
I have been doing a bunch of work reviewing workflows and implementing new workflows or changes to existing ones, especially in Technical Services. In the process, I have been asked not only about the process I went through, but also about the rationale and value of doing such an exercise, especially for organizations where most Technical Services …

Open Knowledge Foundation: We crack the Schufa, the German credit scoring

planet code4lib - Thu, 2018-02-22 12:33

Last week the Open Knowledge Foundation Germany (OKFDE) and AlgorithmWatch launched the project OpenSCHUFA. Inspired by OKF Finland and the „mydata“ project, OpenSCHUFA is the first „mydata“ project by OKFDE. Over the last 7 days, the campaign has generated Germany-wide media attention and already over 8,000 individual SCHUFA data requests (30,000 personal data requests in total).

Why we started OpenSCHUFA and why you should care about credit scoring

Germany’s leading credit rating bureau, SCHUFA, has immense power over people’s lives. A low SCHUFA score means landlords will refuse to rent you an apartment, banks will reject your credit card application and network providers will say ‘computer says no’ to a new Internet contract. But what if your SCHUFA score is low because there are mistakes in your credit history? Or if the score is calculated by a mathematical model that is biased?

The big problem is, we simply don’t know how accurate SCHUFA’s or any other credit scoring data is and how it computes its scores. OpenSCHUFA wants to change this by analyzing thousands of credit records.

This is not just happening in Germany, or just with credit scoring: for example, the Chinese government has decided to introduce a scoring system by 2020 that assigns a “social value” to all residents. Or think about the “Nosedive” episode of the Black Mirror series.

We want to

  • start a discussion on that topic
  • bring more transparency to (credit) scoring
  • empower people with their own data and show what can be done once this data is donated or crowd-shared
What exactly is SCHUFA?

SCHUFA is Germany’s leading credit rating bureau. It’s a private company similar to Equifax, Experian or TransUnion, some of the major credit reporting agencies operating in the US, UK, Canada or Australia.

SCHUFA collects data of your financial history – your unpaid bills, credit cards, loans, fines and court judgments – and uses this information to calculate your SCHUFA score. Companies pay to check your SCHUFA score when you apply for a credit card, a new phone or Internet contract. A rental agent even checks with SCHUFA when you apply to rent an apartment. A low score means you have a high risk of defaulting on payments, so it makes it more difficult, or even impossible, to get credit. A low score can also affect how much interest you pay on a loan.

Why should you care about SCHUFA score or any other credit scores?

SCHUFA holds data on about 70 million people in Germany. That’s nearly everyone in the country aged 18 or older. According to SCHUFA, nearly one in ten of these people living in Germany (around 7 million people) have negative entries in their record. That’s quite a lot.

SCHUFA gets its data from approximately 9,000 partners, such as banks and telecommunication companies. SCHUFA doesn’t believe it has a responsibility to check the accuracy of data it receives from its partners.

In addition, the algorithm used by SCHUFA to calculate credit scores is protected as a trade secret so no one knows how the algorithm works and whether there are errors or injustices built into the model or the software.

So basically, if you are an adult living in Germany, there is a good chance your life is affected by a credit score produced by a multimillion euro private company using an automatic process that they do not have to explain and an algorithm based on data that nobody checks for inaccuracies. And this is not just the case in Germany, but everywhere where credit scores determine everyday life.

How can you help?

Not living in Germany? Money makes the world go round.

Please donate some money – 5 EUR, though we also take GBP or USD – to enable us to develop data-donation software (that is open source and re-usable in your country as well). Get in touch if you are interested in a similar campaign about the credit bureau in your country:

And now some of the famous German fun, our campaign video:

Terry Reese: MarcEdit 2017 Usage Information

planet code4lib - Thu, 2018-02-22 05:34

Every year, I like to take a couple of minutes and pull my log files to get a quick and dirty look at who might be making use of MarcEdit. This year, I was also interested in how quickly MarcEdit 7 is being picked up, as the application install base is large and diverse and I’m thinking about how many MarcEdit 6 maintenance releases to plan over this next year (thinking 4 at this point).

Look at the numbers:

Number of Executions: ~3 million

Executions are measured by tracking those users who make use of the automated update/notifications tool. Since MarcEdit will ping the update service, I get a rough idea of how often the program was started during the year. However, as this only captures folks taking advantage of the notification service while online, this number represents only a slice of usage.

Countries: ~190

Again, using the log files from the update service, the analytic software I use provides a set of broad country/administrative region codes. Over the course of 2017, ~190 individual regions were represented, with ~120 regions having an active presence month-over-month.

MarcEdit 7/MacOS Downloads

This I’m interested in because I’m curious about the rate of adoption. On update, I usually see ~8-10,000 active users that routinely update the software. Looking at Dec. 2017 (first month of release) and Jan. 2018, it looks like folks are slowly starting to test and put MarcEdit 7 through its paces. Below, a download represents a unique user.

Dec. 2017: 6,400 total downloads

Jan. 2018: 11,700 total downloads

Finally – how many questions did I get to answer? Again, this is hard to say, but looking just at the MarcEdit Listserv, I provided roughly 5,500 responses. Given questions I get on the list and off, it wouldn’t be a stretch to say that I probably answer ~20 questions a day regarding some aspect of the application.

Development Hours: ~820 hrs

This one surprised me, but this past year was spent revising the application – and honestly, it could be low. On average though, it wouldn’t be out of the realm of possibility to say that I spent ~17 hours per week in 2017 writing code for MarcEdit, most of it happening between the hours of midnight and 3 am.

So, that’s roughly a snapshot of 2017 usage.  I’ll be interested to see what 2018 will bring.


In the Library, With the Lead Pipe: Editorial: What we’ve been up to

planet code4lib - Thu, 2018-02-22 01:35

Your editors at Lead Pipe wanted to share some of the things we’ve been working on and thinking about, Lead Pipe aside. Enjoy!



One of the projects I work on at my library is the Civic Lab, a pop-up participatory program initiative centered around facilitating deeper exploration of how our government works, social issues with policy implications, and topics in the news. The Civic Lab has been an active initiative for a year and a half, and as we’ve continued to iterate this concept we’ve been thinking about and developing strategies to address two key questions that have emerged.

First: How do we balance our desire to respond quickly to topics in the news with our desire to provide vetted, curated resources? For most of our pop-ups to date, we’ve planned topics well in advance of the program. That timeline has allowed us to create curated handouts for the topics we’ve discussed, filled with content like key definitions, questions for discussion, and resources for further exploration. If we want to be able to respond to a news topic immediately after it happens, however, we can’t take the time to curate a resource list, format it, put it through proofreading, etc. The result is that we’ve been experimenting with what we’re calling “rapid response” pop-ups, where we show up to talk about a current news item with some Civic Lab signage, a laptop to be able to dig into topics, and a handout of go-to news sources that is broadly applicable. This standard handout offers multiple avenues for answering questions about emerging news topics, with tips like “for local news stories, start with a local source” and listings of reputable go-to sources for business, science, and political news items. Having these handouts available at rapid response pop-ups has allowed us to give a solid resource to participants looking to use more effective strategies for staying informed on any topic, including recent rapid response discussions like immigration legislation and gun violence in schools.

Second: How do we facilitate participants adopting a more critical lens on the news media they consume? For us, this isn’t about sussing out so-called “fake news.” Rather, it’s about helping patrons understand the conventions and ethics of journalism so that they can confidently consume news coverage from any news source. After consulting with a journalism professor friend, we put together a pop-up on the topic “What is Journalism? (And What Isn’t?)” meant specifically to help patrons think about what is news coverage, what is analysis, and what is opinion content in their chosen news sources. We’re looking to have conversations about how to tell what’s objective coverage regardless of the source, and in the process de-emphasize the subjective analysis and opinion pieces that tend to infuriate rather than inform.



There’s a lot of stuff going on for me lately, including finalizing the manuscript that I am co-editing, Pushing the Margins: Women of Color and Intersectionality in LIS. It really takes a lot of work to edit a book and this process has taught me a lot about what goes into it. That being said, it also has taken up a lot of my time and cognitive energy. The other night, I was texting my friend about how I constantly feel like I’m behind on everything, but then I realize that perhaps I am really doing too much. My friend replied “We need to learn to do less and that it’s okay. We’re still awesome people and professionals” which maybe sounds simple, but for many women of color (WOC) librarians I know, this is a really difficult message to internalize and accept.

I’ve been thinking a lot about WOC librarians and labor, especially in reflecting on Fobazi Ettarh’s article about vocational awe and how it relates to mental health and burnout. Veronica Arrellano’s latest on Humblebrags, Guilt, and Professional Insecurities also resonated with me as I have been grappling with my own physical and mental health and making sure that I am taking time for myself outside of work. All of this seems timely, because this week is also LIS Mental Health Week (Feb 19-23, 2018) and there’s a host of things happening in conjunction with the week, including a Twitter chat on Feb 22 at 2 pm PST / 4 pm CST / 5 pm EST / 10 pm UTC with the hashtag #LISmentalhealth, as well as a zine that you can purchase with all proceeds going to Mental Health First Aid.



As a library director, I’m realizing the importance of grant writing to support projects that fall outside of my day-to-day operating budget. So, that’s what I’ve been spending a lot of time doing over the past few months. Ivy Tech Community College Columbus has offered faculty/staff the opportunity to apply for internal mini grants to support student retention and learning. Last semester I applied for a grant to fund my library’s Columbus Past, Present & Future Series, which was aimed at highlighting our community’s past and expectations for the future. I was a recipient of one of those grants, which put the needed wind in my sails to apply for another mini grant last month. I received that one as well and am now in the process of purchasing Kindles that I’ll be loading bestseller titles on in the next few weeks. These will be available for checkout to my constituents, as well as the other institutional partners that reside on my campus – IUPUC and Purdue Polytechnic.

With the help of our grants office in Indy, I applied for a Sparks! grant through the Institute of Museum and Library Services (IMLS) towards the beginning of February. If we are selected for this grant, it will help fund an entrepreneurial space for students, faculty and staff, as well as the Columbus community. We’re hoping to incorporate new furniture layouts and conduct business workshops that would be offered by faculty at Ivy Tech, IUPUC, and Purdue Polytechnic.

Grant writing has always terrified me. I’ve always worried that I wouldn’t be able to articulate my library’s mission/vision in a way that would compel granting bodies to give. I’m learning to take risks, however, and it’s an exciting time to learn how the grants office at my institution can support and coach me through the process. If you are looking for a way to offer new services and resources outside of your annual budget, don’t let the fear of not being a good writer or a lack of grant knowledge stop you! Chances are you have a grants office available to you too, and you may be just months away from securing your first grant!



In October of last year, after having been a cataloger for three and a half years, I started my first full-time, faculty-status reference librarian position. It has been an exciting adventure so far to shift gears and put into practice ideas I had been collecting as an MLIS student. Some were simple, such as creating a book display and purchasing more books by people of color, but my main purpose, to support students and student organizations, is a slow and steady process. It will take time and relationship building. Meanwhile, I am creating a contest for National Library Week and delving into the world of student outreach and library marketing.

Additionally, two colleagues and I began working on a 60-minute presentation, our first, about professional development and career advice for our state library association’s annual conference. It has caused a great deal of reflection about this profession, my place in it, and my identity. I am also working on a poster proposal about languages, libraries, and communities. This means I will be traveling to several conferences this year, the first of which was ALA Midwinter, where Junot Díaz asked a question that I think we all need to pause and consider. He asked the audience if during these conferences there is ever a day of remembrance, to remember the history of libraries, and “recommended that every year we recognize the history of segregation as it relates to libraries. Every year, we must remind ourselves from which we come. At the heart of decolonization is to remember.” Since then I have been thinking about how ALA as an organization can have a day of remembrance and do justice to its equity, diversity, and inclusion initiatives.



I recently attended the two-day symposium “Libraries in the Context of Capitalism” at the Metropolitan Library Council of New York, which was keynoted by Barbara Fister. Fister’s presentation set the tone for much of the following proceedings: a hopefulness that rested on a trenchant, unsentimental, and sober(ing) view of libraries’ place in North American history and society. Fister reminded us that North American libraries were, from their origins, institutions of social control. Although she did not explicitly say so, she implied that they continue to be, depending on the roles that librarians wish to play in either furthering the aims of settler colonialism, white supremacy, patriarchy, and economic exploitation or resisting these structural features of our society and culture. All of the panels, presentations, discussions, and activities that followed explored a wide variety of ways that we can do the latter. You can read Fister’s reflections on the symposium here.

Among the highlights of the conference for me was a presentation by Carrie Salazar entitled “Using the Library to Empower Diverse Community College Students.” Salazar described the various ways that she prioritizes and centers the needs of marginalized students in her library, as well as the strategies she uses to communicate with them and to earn their trust. She also described the ways that she tries to deemphasize her role as an authority figure whose presence intimidates or constrains students’ research behaviors; in particular, she noted how positioning the librarian as content ‘expert’ can have a negative impact, and she suggested a more supportive and productive (and radical) approach in which the librarian treats the students as the experts.

Another presentation that I hope will find broader circulation soon was Roxanne Shirazi’s “Rethinking Value in Academic Libraries.” This rich and suggestive talk began with a reminder from the Leap Manifesto: “public scarcity in times of unprecedented private wealth is a manufactured crisis.” Such manufactured crises are also created, or echoed, by library administrators, often ignoring both librarians and patrons in the process. The reason for this, Shirazi argued, is that library work is a form of domestic labor upon which capitalism depends and which it exploits and devalues. Academic capitalism has generated a literature on academic value, but Shirazi found that it tellingly never mentions libraries or librarians, underscoring the invisibility of library labor. The appropriate response to this situation, Shirazi suggested, is to carry on the struggle for professional autonomy.



Our Special Collections library just wrapped up hosting a research day for the Chicago Metro History Fair. The History Fair gives middle and high school students an opportunity to participate in historical research and in a statewide competition. Several libraries and archives around Chicago host a research day for the students, and the Special Collections library at the University of Illinois at Chicago is one of them. During research day, students learn how to locate, evaluate, and use secondary and archival resources for their projects.

Before I accepted my current position, I didn’t know that I would be working with middle and high school students for four to six months out of the year. As with post-secondary students, middle and high school students are at varying levels of skill when it comes to research planning and strategy. Some work independently, while others work in groups of three. Some are more serious than others, but they all have the opportunity to engage with historical documents and rare books.

Working with the students takes a lot of patience, but I am glad that many of the students are African American and other students of color, because that serves one of my goals as a professional: to introduce students of color to archival materials and to complicate history with them.

I am also currently working on a few research ideas and proposals, and plan to attend a few professional conferences this year.



The biggest change we’ve made to instruction at my library lately has been “flipping” the lower-order components of our one-shot sessions. Our faculty took to the term “badges,” so we’ve run with that for the combination of videos, tutorials, and quizzes that students complete before working with a librarian in class. We started with the three courses we visit most frequently and are building out from there. Flipping the lower-order parts of instruction has given us time for more engaging and reflective activities with students in class, making the visits more compelling and memorable for everyone involved.

At the 2017 ACRL Washington & Oregon Joint Conference back in mid-October I saw a number of great panels. Two that I still think of weekly came from fellow community colleges. Jennifer Snoek-Brown and Candice Watkin gave an inspiring presentation on all the ways that Tacoma Community College has been integrating OER into their library. Samantha Hines from Peninsula College gave a great talk about learning from failure around diversity in the library profession, a version of which she has apparently just published in PNLA Quarterly.



Over the past year, I’ve been doing a lot of reading and absorbing of various talks, workshops, and presentations to help me envision a new teaching and learning program at my library with social justice as our end goal. Something that’s come up repeatedly is this notion of the stories we tell ourselves and how that influences how we behave, the worlds or systems we create or hold on to, and how we move through those worlds or systems. A workshop I attended in January of this year, Unleashing Alternative Futures: Constructing New Worlds through Imagination, Narrative, and Radical Hope, clarified much of this for me when the wonderful facilitators defined world building as a paradigm shift and a way of making space for new perspectives or world views.

Whose Global Village?, by Ramesh Srinivasan, discusses how Enlightenment and colonialist ideas have led to a series of myths that have heavily influenced the way we view and develop technology: “…the way we choose to historicize technology, most notably the Internet, shapes our beliefs and assumptions about what it can be. Creation myths shape visions of the future” (p. 30). This book, in conjunction with Linda Tuhiwai Smith’s Decolonizing Methodologies, which fellow librarian Vani Natarajan recommended to me, is helping me deconstruct the foundations of our very Euro-centric systems of organizing and defining knowledge. How do we teach this to our students while also empowering them to envision new paths forward, as the Alternative Futures workshop asks us to do? How do I prepare my colleagues to teach this to our students? Have thoughts on this? Get in touch! I would love to talk with other folks as I continue to struggle with these questions and develop my thinking around these heavy ideas.

Library of Congress: The Signal: Digital Scholarship Resource Guide: People, Blogs and Labs (part 7 of 7)

planet code4lib - Wed, 2018-02-21 21:31

Image: “I Love Data” She Wept, by bixentro, on Flickr.

This is the final post in a seven-part series by Samantha Herron, our 2017 Junior Fellow. She created this guide to help LC Labs explore how to support digital scholarship at the Library, and we started publishing the series in January. She has covered why digital materials matter, how to create digital documents, what digital documents make possible, text analysis tools, spatial humanities/GIS/mapping & timelines, and network analysis tools. Herron rounds out all of this useful information with lists of digital scholarship people, labs, and blogs to follow to learn even more. The full guide is also available as a PDF download.

We hope you’ve gained some introductory understanding of what digital scholarship is and how to go about planning a digital project. Please feel free to add your own favorite resources to this list in the comments of this post. And a big “thank-you” to Sam for creating this guide during her fellowship. Look for this content to be developed into a workshop in the next year, so we would really value your input on more resources and guides to consider. LC Labs is interested in helping scholars and users of all backgrounds use the Library’s collections in digital projects. Also check out the LC for Robots page, where we’ve created a one-stop shop for the Library’s computational resources, and get started using the tools and techniques you’ve learned about in this blog series.
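To make that invitation a little more concrete, here is a minimal sketch (not part of the original guide) of how a short script might pull machine-readable search results from the Library’s public loc.gov JSON API, one of the resources the LC for Robots page points to. Appending fo=json to a loc.gov search URL to get JSON back is part of the public interface; the search term, results-per-page count, and printed fields below are illustrative assumptions, not prescriptions from the guide.

# Minimal sketch (illustrative): search the loc.gov JSON API for matching items.
# The fo=json parameter asks loc.gov to return JSON instead of HTML; the query
# term, per-page count, and printed fields are assumptions for demonstration.
import requests

def search_loc(query, per_page=10):
    """Return a list of result records from a loc.gov keyword search."""
    response = requests.get(
        "https://www.loc.gov/search/",
        params={"q": query, "fo": "json", "c": per_page},
    )
    response.raise_for_status()
    return response.json().get("results", [])

if __name__ == "__main__":
    for item in search_loc("digital humanities"):
        # Result records typically include a title and a link to the item page.
        print(item.get("title"), "-", item.get("url"))

From records like these, the text analysis, mapping, and network techniques covered earlier in the series can be applied directly to the Library’s collections.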



Miriam Posner, UCLA

Bethany Nowviskie, UVA + DLF at CLIR

Ted Underwood, University of Illinois

Dan Cohen, Northeastern

Ben Schmidt, Northeastern (Sapping Attention)

Matthew Jockers, University of Nebraska

Matthew Kirschenbaum, University of Maryland

Mark Sample, Davidson College

And more… see the list of feeds subscribed to by DHNow and the list of blogs from the CUNY Digital Humanities Resource Guide.



Explore these websites for more examples of completed and in-progress digital scholarship projects. This is not an exhaustive list, but it is meant as a starting point.

 Digital Scholarship Lab, University of Richmond

Digital Scholarship Lab, Brown University

CESTA (Center for Spatial and Textual Analysis), Stanford University

Spatial History group, CESTA, Stanford University

Literary Lab, Stanford University

Text Technologies, Stanford University

Center for Interdisciplinary Research, Stanford University

Roy Rosenzweig Center for History and New Media, George Mason University – creators of THATCamp, the DHNow blog, and the Omeka and Zotero software.

Scholars’ Lab, University of Virginia

Institute for Advanced Technology in the Humanities, University of Virginia

Maryland Institute for Technology in the Humanities, University of Maryland

MIT Hyperstudio, Massachusetts Institute of Technology

Matrix, Michigan State University



The CUNY Digital Humanities Resource Guide has compiled many available digital scholarship syllabi and related tools here.

Miriam Posner’s Fall 2015 Introduction to Digital Humanities syllabus is online here, and she has also collected other intro syllabi here.

Miriam Posner has also made a Digital Humanities and the Library bibliography.

Digital Art History 101 – Johanna Drucker, Steven Nelson, Todd Presner, Miriam Posner



All of the following are available online:

Burdick, Anne, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp. Digital_Humanities. Cambridge: MIT Press, 2012.

Gold, Matthew K., ed. Debates in the Digital Humanities 2016. Minneapolis: University of Minnesota Press, 2016.

Gold, Matthew K., ed. Debates in the Digital Humanities 2012. Minneapolis: University of Minnesota Press, 2012.

Schreibman, S., Siemens, R., and Unsworth, J., eds. A Companion to Digital Humanities. Blackwell Companions to Literature and Culture, 2007.

Schreibman, S., and Siemens, R., eds. A Companion to Digital Literary Studies. Blackwell Companions to Literature and Culture, 2008.


LITA: Jobs in Information Technology: February 21, 2018

planet code4lib - Wed, 2018-02-21 19:35

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

AIM Law Firm Library, Law Library Technical Assistant, Atlanta, GA

Georgia State University, Health Informationist, Atlanta, GA

California Historical Society, Special Collections Metadata and Systems Librarian, San Francisco, CA

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

