Planet Code4Lib
Jonathan Rochkind: On the graphic design of

Fri, 2017-04-21 16:18

I like to pay attention to design, and enjoy good design in the world, graphic and otherwise. A well-designed printed page, web page, or physical tool is a joy to interact with.

I’m not really a trained designer, but in my web development career I’ve often effectively been the UI/UX/graphic designer of the apps I work on. I do the best I can (our users deserve good design) and try to develop my skills by paying attention to graphic design in the world, reading up (I’d recommend Donald Norman’s The Design of Everyday Things, Robert Bringhurst’s The Elements of Typographic Style, and one free online resource, Butterick’s Practical Typography), and improving my practice; I think my graphic design skills are always getting better. (I also learned a lot looking at and working with the productions of the skilled designers at Friends of the Web, where I worked last year.)

Implementing turned out to be a great opportunity to practice some graphic and web design. has very few graphical or interactive elements; it’s a simple thing that does just a few things. The relative simplicity of what’s on the page, combined with it being a hobby side project — with no deadlines, no existing branding styles, and no stakeholders saying things like “how about you make that font just a little bit bigger” — made it a really good design exercise for me: I could focus on making each element, and the pages as a whole, as good as I could in both aesthetics and utility, and develop my personal design vocabulary a bit.

I’m proud of the outcome. While I don’t consider it perfect (I’m not totally happy with the typography of the mixed-case headers in Fira Sans), I think it’s pretty good typography and graphic design, probably my best design work. It’s nothing fancy, but I think it’s pleasing to look at and effective. As with much good design, the simplicity of the end result belies the amount of work I put in to make it seem straightforward and unsophisticated. :)

My favorite element is the page-specific navigation (and sometimes info) “side bar”.

At first I tried to put these links in the site header, but there wasn’t quite enough room for them, and I didn’t want to make the header two lines — on desktop or wide tablet displays, I think vertical space is a precious resource not to be squandered. I also realized it was perhaps better anyway for the header to hold only unchanging site-wide links, with page-specific links elsewhere.

Perhaps encouraged by the somewhat hand-written look (especially of all-caps text) in Fira Sans, the free font I was trying out, I got the idea of trying to include these as a sort of ‘margin note’.

The CSS got a bit tricky with screen-size responsiveness (flexbox is a wonderful thing). On wide screens, the main content is centered in the screen, as you can see above, with the links to the left: the ‘like a margin note’ idea.

On somewhat narrower screens, where there’s not enough room to have margins on both sides big enough for the links, the main content column is no longer centered.

And on very narrow screens, where there’s not even room for that, such as most phones, the page-specific nav links switch to being above the content. On narrow screens, which are typically phones that are much higher than they are wide, it’s horizontal space that becomes precious, with some more vertical to spare.

Note that on really narrow screens, which is probably most phones, especially held in vertical orientation, the margins on the main content disappear completely: you get actual content with its white border running edge-to-edge. This seems an obvious thing to do on phone-sized screens: why waste any horizontal real estate on different-colored margins, or provide a visual distraction with even a few pixels of different-colored margin or border jammed up against the edge? I’m surprised it seems a relatively rare thing to do in the wild.
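The three layouts described above can be sketched with flexbox and media queries. This is a minimal, hypothetical sketch (the class names and breakpoints are invented for illustration), not the site's actual stylesheet:

```css
/* Wide screens: 'margin note' nav sits to the left of a centered content column. */
.page {
  display: flex;
  justify-content: center;
}
.page-nav {
  flex: 0 0 10em;    /* fixed-width 'margin note' column */
}
.main-content {
  flex: 0 1 40em;    /* content column, capped at a readable width */
  margin: 1em;
}

/* Narrower screens: stop centering the content column. */
@media (max-width: 60em) {
  .page {
    justify-content: flex-start;
  }
}

/* Very narrow screens (phones): nav stacks above the content,
   and the content's side margins disappear entirely. */
@media (max-width: 40em) {
  .page {
    flex-direction: column;
  }
  .main-content {
    margin: 0;    /* edge-to-edge content */
  }
}
```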

Nothing too fancy, but I quite like how it turned out. I don’t remember exactly what CSS tricks I used to make it so. And I still haven’t really figured out how to write clear, maintainable CSS: I’m less proud of the messy CSS source code than I am of the result. :)

Filed under: General

Evergreen ILS: Evergreen 3.0 development update #2

Fri, 2017-04-21 15:30

Charles the male King Eider duck. Photo courtesy Arlene Schmuland.

As of this writing, 34 patches have been committed to master since the previous development update. Many of them were bugfixes in support of the 2.10.11, 2.11.4, and 2.12.1 releases.

The 3.0 road map can be considered complete at this point, although as folks come up with additional feature ideas — and more importantly for the purposes of the road map, working code — entries can and should be added.

One of the latest road map additions I’d like to highlight in this update is bug 1682923, where Kathy Lussier proposes to add links in the public catalog to allow users to easily share records via social media. This is an example of a case where the expedient way of doing it — putting in whatever JavaScript Twitter or Facebook recommends — would be the wrong way of doing it. Why? Because it’s up to the user to decide what they share; using the stock social media share JavaScript could instead expose users to involuntary surveillance of their habits browsing an Evergreen catalog. Fortunately, we can have our cake and eat it too by building old-fashioned share links.
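As a sketch of what such old-fashioned links might look like: the Twitter "web intent" and Facebook "sharer" endpoints below are commonly used public share URLs, but the function names and the example catalog URL are hypothetical, not Evergreen code.

```javascript
// Plain share links: no third-party JavaScript runs, so nothing is
// disclosed to the social network until the user actively clicks.
function twitterShareLink(url, text) {
  return 'https://twitter.com/intent/tweet' +
    '?url=' + encodeURIComponent(url) +
    '&text=' + encodeURIComponent(text);
}

function facebookShareLink(url) {
  return 'https://www.facebook.com/sharer/sharer.php' +
    '?u=' + encodeURIComponent(url);
}

// Example: a share link for a hypothetical catalog record page.
const link = twitterShareLink(
  'https://example-catalog.org/record/123',
  'Check out this record'
);
console.log(link);
```

Rendering these as ordinary `<a href="…">` anchors in the catalog template gives users the same one-click sharing without loading any tracking scripts.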

Another highlight for this week is offline mode… using only the web browser. This is something that would have been essentially impossible to implement back when Evergreen was getting off the ground, as, short of writing a bespoke plugin, there was no way to store either the offline transactions or the block list. Nowadays it’s much easier; we can put pretty much whatever we like in a browser’s IndexedDB. IndexedDB’s API is pretty low-level, so Mike Rylander is working on using Google’s Lovefield, which offers a “relational database for web apps” that can be serialized to IndexedDB. Here’s a snippet of how Mike proposes to wrap Lovefield for use by the offline module:

/**
 * Core Service - egLovefield
 *
 * Lovefield wrapper factory for low level offline stuff
 */
angular.module('egCoreMod')
.factory('egLovefield', ['$q','$rootScope','egCore',
             function($q , $rootScope , egCore) {

    var osb = lf.schema.create('offline', 1);
    osb.createTable('Object').
        addColumn('type', lf.Type.STRING).   // class hint
        addColumn('id', lf.Type.STRING).     // obj id
        addColumn('object', lf.Type.OBJECT).
        addPrimaryKey(['type','id']);

Duck trivia

The Cornell Lab of Ornithology operates the All About Birds website, a great resource for birders. If you find yourself waiting for the test suite to finish running, you can pass the time solving one of their online jigsaw puzzles and learn how to identify some diving ducks.


Updates on the progress to Evergreen 3.0 will be published every Friday until general release of 3.0.0. If you have material to contribute to the updates, please get them to Galen Charlton by Thursday morning.

LITA: Library Blog Basics

Fri, 2017-04-21 15:00

I think we can probably agree that libraries are no longer exclusively geographical locations that our users come to: patrons also visit virtually. Many of their tasks at a library’s website are pragmatic — renewing books, checking their records, searching the online catalog and placing holds — but, increasingly, libraries are beginning to think of their online spaces as destinations for patrons; as communities of web denizens.

Victoria recently discussed social media planning for libraries. Another way librarians can create community in the library’s virtual space is by designing and sustaining blogs.

Last year, my library decided to expand our blog, from a repository of new titles lists and the occasional notice of a change in policy, to a content-rich space for library users to get to know staff, learn more about services, find topical book reviews, read about recent developments, and, yes, also to find the new titles lists they love.

To start the process of revamping our blog space as a virtual living room of ideas, we went through a process that took about a month in total. This was to be an experiment of sorts, something we’d try out and see if it was interesting to our members.

  • A colleague and I brainstormed the logistics of changing the blog’s style and format; for example, we decided how many lines of a post are visible before the reader needs to click through to read the full piece (our answer was four).
  • Then we settled on some metrics for measuring engagement — namely, pageviews (how often a post was clicked through to be read in full).
  • I researched blogs that other libraries were hosting, and analyzed how often they posted, recurring topics, and formats for posts (e.g., video, text, image).
  • Colleagues and I discussed the feasibility of posting regularly, considering our existing workloads. In my research, I’d found that most libraries were publishing an average of three or four pieces on their blogs each week, so we set our aim on posting two or three topical articles and a new titles list.
  • After settling on a frequency, we hashed out a list of possible topical categories for posts, based on what I’d seen on other libraries’ blogs: library services, events and programs, reading recommendations, history articles, and personal essays of staff.
  • I created a style guide so that we could develop a consistent tone, while preserving each author’s individual voice. The style guide indicates image guidelines (both size and sourcing), a list of topical categories, desired word count ranges, how to link to web sources and to materials in our catalog, and other technical specs.
  • I set a new posting schedule every six months. Each of sixteen contributors is scheduled to post once every two months, and we’re flexible on this schedule — if any writer is busy with other projects, they are free to skip that post deadline.
  • As we got this project underway, I hosted a peer-to-peer learning session in which I demonstrated all the features of our blog, a step-by-step how-to of posting, and a discussion of topics and categories of articles, followed by Q&A.

Within a few months of beginning this experiment in institutional blogging, we measured results — blog pageviews had increased by over 300%! Anecdotally, we were hearing about some of the pieces at the reference desk. Members began to request books listed in the posts. Although our blog isn’t open for comments, we began to feel this sense of online community bleeding into the IRL world of the library building.

Thus far, blogging has been a successful venture for us, allowing our patrons to share in the life of the library more fully by engaging with staff on a regular basis a few times a week. To be sure, members still visit our website to renew their books and check the library hours. But for those who are interested in content — whether they’re reading about our Chess Coordinator’s personal experience as a child coming to Mechanics’ Institute to watch the chess matches of Boris Spassky, or a readers’ advisory article on resistance-themed fiction, or a collection of the writerly quotes of Truman Capote — there’s also something on our site for these patrons to linger over. Our blog has become a virtual leisure space on the website, and, all things being equal, it’s something we plan to sustain over the long haul.

Does your library have a blog? What tips do you have for developing one?

Open Knowledge Foundation: Gender inequality on focus in São Paulo Open Data Day

Fri, 2017-04-21 14:58

This blog is part of the event report series on International Open Data Day 2017. On Saturday 4 March, groups from around the world organised over 300 events to celebrate, promote and spread the use of open data. 44 events received additional support through the Open Knowledge International mini-grants scheme, funded by SPARC, the Open Contracting Program of Hivos, Article 19, Hewlett Foundation and the UK Foreign & Commonwealth Office. This event was supported through the mini-grants scheme under the human rights theme.

This blog has been translated from this Portuguese original post.

International Open Data Day was celebrated for the seventh time on March 4th, 2017. It is always a good opportunity to introduce newcomers to open data and show its benefits. This year, as a joint initiative between PoliGNU, PoliGen, MariaLab and Transparência Hacker under the human rights theme, we focused on the discussion of women’s participation in public policy development by looking at related open datasets.

Our open data day activity was designed around the following four steps:

  1.     Initial presentations and explanations;
  2.     Open data initiatives mapping;
  3.     Women’s struggles related initiatives mapping;
  4.     Data analysis and visualization by thematic groups.

1st Step – Initial presentations and explanations

We started with a brief introduction from each participant so everyone could get to know each other. This showed how diverse a group we were: engineers, developers, business consultants, designers, social workers, teachers, journalists, students and researchers.

Some of the participants had been involved with the Brazilian Freedom of Information Act (FOIA – 12.527/2012), so we had a short discussion about how this law was produced, its purposes and its limitations. There was also a brief presentation about what open data is, focusing on the eight principles: Complete, Primary, Timely, Accessible, Machine-processable, Non-discriminatory, Non-proprietary, and License-free.

2nd Step – Open Data initiatives mapping

We started with a brainstorm in which everybody wrote open data related solutions onto post-it notes. The solutions were grouped into four macro themes: Macro Politics, Local Politics, Services and Media.


3rd Step – Women’s struggles related initiatives mapping

Then we had a second brainstorm, in which initiatives connected to women’s struggles, claims and demands were mapped and added onto post-its. The initiatives did not have to be internet-related, as long as they related to open data. The post-its were grouped into 5 themes: “Empowerment through Entrepreneurship”, “Empowerment through Technology”, “Visualisations”, “Campaigns” and “Apps”.

4th Step – The teams’ work on Data Analysis and DataViz

Two groups with complementary interests were formed: one that focused on the underrepresentation of women in elected public positions, and another that sought to address gender inequality from an economic perspective.

The team that focused on the political perspective sought open data from the Superior Electoral Court on the Brazilian 2016 elections (available here). The group spent considerable time downloading and wrangling the data, but even so they arrived at interesting statistics, such as the average expenditure per candidate: ~R$16,000 for male candidates and ~R$6,000 for female candidates. Although all parties and states reached the 30% share of women candidates required by law, women’s campaigns receive much less investment. For example, all of the women’s campaigns together did not reach 7% of the total campaign money in the Rio de Janeiro City Hall elections.

Tables, graphs and maps were generated in and the code produced is available in PoliGNU’s GitHub. Given this disparity in women’s representation, it is undeniable that decision-making power is concentrated in the hands of rich white men. How is it possible to ensure the human rights of such a diverse society if decisions are taken by such a homogeneous group of rich white men, most of whom also happen to be old? This and other questions remain, awaiting another hackday to delve into the data again.

The team that focused on the economic perspective sought open data from the IBGE website on income, the employed population, the unemployed population, the workforce, and individual microentrepreneur profiles, among others. Much of the open data available was structured in a highly aggregated form, which prevented manipulating it to generate visualizations or do any kind of analysis. As a consequence, this team had to redefine their question a few times.

Some pieces of information deserve to be highlighted:

  • women’s workforce increasing rate (~ 40%) is higher than that of the men (~ 20%)
  • the main segments of women’s small business are: (i) hairdressers, (ii) clothing and accessories sales, and (iii) beauty treatment activities;
  • the main segments of men’s small business are: (i) masonry works, (ii) clothing and accessories sales, and (iii) electrical maintenance.

These facts show an existing sexual division of labour segments. If this happened only due to vocation, it would not be a problem; however, this sexual division of work reveals that some areas impose barriers that prevent women’s entrance, even though these areas often provide better pay than those with a female majority.

Graphs were generated in and the data used for the graphs is available here.

District Dispatch: Build relationships to advance advocacy

Fri, 2017-04-21 13:18

This advocacy guest post was written by Arizona’s Pima County Public Library Director Amber Mathewson, whose member of Congress, Rep. Raul Grijalva (AZ-3), led the recent effort to gather 144 signatures on a “Dear Appropriator” letter in support of LSTA funding. To highlight the important local uses of Federal LSTA funding, Rep. Grijalva held a press conference in front of the library at the El Pueblo Neighborhood Center during Congress’ spring recess.

A crowd gathered this week outside the El Pueblo Library in South Tucson, where Congressman Raúl Grijalva (D) and other library advocates discussed the possible effects of President Trump’s proposed budget cuts — including the elimination of the IMLS — on libraries in Arizona and nationwide. A statement by ALA President Julie Todaro was read at the event, in which the American Library Association thanked Rep. Grijalva for his leadership in fighting for library funding.

Manager of the El Pueblo Library Anna Sanchez was among those who spoke: “Public libraries play a significant role in maintaining and supporting our free democratic society. They are America’s great equalizers, providing everyone the same access to information and opportunities for success.”

At Pima County Public Library, across 26 locations and 9,200 square miles in Southern Arizona, we passionately embrace that role in all that we do. From innovative programming helping entrepreneurs launch their dreams to high-tech youth centers where young adults engage in life-long learning, the Library gives everyone — regardless of age, gender, ethnicity or economic status — a chance to thrive.

Sanchez added: “Libraries are truly the one place in America where the doors are open to everyone.”

While libraries nationwide form the cornerstone of our democratic society, they cannot afford to be complacent. As the current threat to funding demonstrates, it is critical that we dedicate ourselves to building relationships with elected officials. It is their votes that can drastically affect the future of libraries. In Southern Arizona’s 3rd Congressional District, we have a champion and steadfast ally in Congressman Grijalva. He recently secured 144 lawmakers’ signatures, across party lines, on a letter to Congress, urging against the cuts and requesting more than $186 million in funding for library programs. Last year, the letter was signed by 88 Representatives.

Grijalva has helped to preserve and defend libraries, elevating library service in the local, state and national arenas. We must build upon that support and expand relationships with other policymakers. Like Rep. Grijalva, they are the ones who will help ensure a future in which libraries are valued as pillars not only of our communities but of our nation.

Last year, as the President of the Arizona Library Association, I attended ALA’s 42nd Annual National Library Legislative Day. Alongside State Librarian Holly Henley, citizen advocate Teresa Quale, and Legislative Chair Kathy Husser, we spoke to all 11 Arizona staff representatives from the House and Senate. We highlighted STEM programming and workforce development, answered funding questions, discussed collaborations and made plans for onsite visits.

In-person meetings are immeasurably meaningful. They are vital if we wish lawmakers to view libraries and librarians as true changemakers. It is in those meetings where we are afforded the space to share the powerful stories of transformation that take place at our libraries every day.

Pima County Public Library is an active partner in the Arizona State Library Association and the Arizona State Library, Archives and Public Records. These organizations are committed to our success and offer much to help us become our own best advocates.

Staff training provides tools to communicate effectively, while easy-to-use resources guide us in identifying and securing meetings with elected officials.

As a county-run system, the relationship we have with our Board of Supervisors is one of paramount importance. To be fully engaged in a library’s vision, one must see for themselves what the library makes possible.

We regularly invite supervisors to attend events and to visit their district libraries. The location of our annual Library Board Retreat alternates between districts, which helps strengthen those relationships.

At Pima County Public Library, we believe it is our job to educate others so they can advocate on our behalf. The value we bring to our community is incalculable. Every day, we provide people with pathways to a better future. For many, we are a lifeline.

“Free and public libraries are a great tradition in this nation,” said Grijalva. Thankfully, he vows to continue fighting on our behalf. But it is up to us to make sure others — from lawmakers to board members, volunteers to citizen advocates — do, too.

As writer Caitlin Moran once said, “a library in the middle of a community is a cross between an emergency exit, a life raft and a festival.” We have seen it in our libraries and on the faces of the customers we serve. Now is the time to make their stories heard and to ensure our future.

The post Build relationships to advance advocacy appeared first on District Dispatch.

Cynthia Ng: BCLA 2017: Hot Topic: Never Neutral: Ethics & Digital Collections

Fri, 2017-04-21 01:31
Notes from the hot topic panel. Tara Robertson, CAPER-BC Jarrett M. Drake, Princeton University Archives Michael Wynne, Washington State University How Libraries Can Trump the Trend to Make America Hate Again (Jarrett) I apologize in advance as it was difficult to take notes for this talk. campaign slogan: Make America Great Again; it signals to … Continue reading BCLA 2017: Hot Topic: Never Neutral: Ethics & Digital Collections

Evergreen ILS: Evergreen 2.10.11, 2.11.4, and 2.12.1 released

Fri, 2017-04-21 00:16

The Evergreen community is pleased to announce three maintenance releases of Evergreen, 2.10.11, 2.11.4, and 2.12.1.

If you upgraded to 2.12.0 from an earlier version of Evergreen, please note that Evergreen 2.12.1 contains an important fix to the new geographic and chronological term browse indexes and requires that a browse reingest of your bibliographic records be run. If your Evergreen database started at 2.12.0, the browse reingest can be skipped; if you have not yet upgraded to 2.12.x, you need to run the browse reingest only once, after applying the 2.12.0 to 2.12.1 database updates.

Evergreen 2.10.11 is the final regular release in the 2.10.x series. Further releases in that series will be made only if security bug fixes warrant, and community support will end on 17 June 2017.

Please visit the downloads page to view the release notes and retrieve the server software and staff clients.

Cynthia Ng: BCLA 2017: Measuring Value Beyond Our Walls

Thu, 2017-04-20 23:14
Notes from the panel session. Panel members: Kyla Epstein, Vancouver Public Library Board Karen Ranalletta, CUPE BC Alex Hemingway, Canadian Centre for Policy Alternatives Carlos Carvalho, United Way of the Lower Mainland Councillor Kiersten Duncan, City of Maple Ridge Why the Panel Members are Here What can we do to show value to those who … Continue reading BCLA 2017: Measuring Value Beyond Our Walls

Alf Eaton, Alf: async is more than await

Thu, 2017-04-20 22:09

If you want to use await in JavaScript, it has to be inside a function marked async.

It's not just sugar, though: async means that the function always returns a Promise.

async function () { return 'foo' } is equivalent to function () { return Promise.resolve('foo') }
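To see the equivalence concretely (a standalone sketch; the function names are just for illustration):

```javascript
// An async function always wraps its return value in a Promise,
// even when the body returns a plain value.
async function foo() {
  return 'foo';
}

// The equivalent plain function, wrapping the value explicitly.
function bar() {
  return Promise.resolve('foo');
}

console.log(foo() instanceof Promise); // true: async implies a Promise
foo().then(value => console.log(value)); // logs 'foo'
bar().then(value => console.log(value)); // logs 'foo'
```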

Cynthia Ng: BCLA 2017: Maximizing Library Vendor Relationships: The Inside Scoop

Thu, 2017-04-20 21:31
Notes from the first afternoon session on developing library vendor relationships. Scott Hargrove, Jeff Narver, FVRL How to develop a relationship of trust, mutual respect, and partnership. You’re the customer, you can do whatever you want. Vendors are an integral part of a library’s business. Goal of the Presentation define and enhance optimal vendor/library relationships. … Continue reading BCLA 2017: Maximizing Library Vendor Relationships: The Inside Scoop

District Dispatch: Next CopyTalk webinar: The Durationator

Thu, 2017-04-20 21:28

Join us for the next CopyTalk webinar: code + copyright = the Durationator.

Plan ahead! One hour CopyTalk webinars occur on the first Thursday of every month at 11 a.m. Pacific / 2 p.m. Eastern.

For the last decade, the Copyright Research Lab at Tulane University has been building the Durationator — a tool, helpdesk and resource for solving copyright questions. Designed to be used by libraries, archives, museums, artists and content owners (and everyone else!), the Durationator Copyright System combines complex legal research + code + human experts. The Durationator looks at every kind of cultural work (poems, films, books, photographs, art, sound recordings) in every country and territory of the world. It even covers state sound recordings! Elizabeth Townsend Gard will discuss what was learned during the ten-year development process. She will touch on basic information that is available for determining whether a work is under copyright or in the public domain, and how to think through copyright questions at the help desk.

Dr. Elizabeth Townsend Gard is an Associate Professor of Law and the Jill H. and Avram A. Glazer Professor of Social Entrepreneurship at Tulane University. She teaches intellectual property, art law, copyright and trademark law, advertising, property, and law and entrepreneurship. Her research interests include fan fiction, the role of law in creativity in the content industries, and video games. She also fosters kittens, which makes Elizabeth an even more appealing speaker!


Date: Thursday, May 4, 2017

Time: 2:00 p.m. (Eastern) / 11:00 a.m. (Pacific)

Link: Go to and sign in as a guest. You’re in!

This program is brought to you by the Office for Information Technology Policy’s copyright education subcommittee. An archive of previous CopyTalk webinars is available.

The post Next CopyTalk webinar: The Durationator appeared first on District Dispatch.

Library of Congress: The Signal: Who Does What? Defining the Roles & Responsibilities for Digital Preservation

Thu, 2017-04-20 20:51

This is a guest post by Andrea Goethals, Manager of Digital Preservation and Repository Services at Harvard Library.

Harvard Library’s digital preservation program has evolved a great deal since the first incarnation of its digital preservation repository (“the DRS”) was put into production in October 2000. Over the years, we have produced 3GB worth of DRS documentation – everything from security policies to architectural diagrams to format migration plans to user documentation. Some of this documentation helps me to manage this repository; in fact, there are a handful of documents I could not effectively do my job without. This post is about one of them – the “DRS Roles & Responsibilities” document.

Like many other libraries, Harvard Library has gone through several reorganizations. Back in 2000, the DRS was solely managed by a library IT department called the Office for Information Services (OIS). When the Library’s digital preservation program was officially launched in 2008, it was naturally set up within OIS. Then in 2012, digital preservation was integrated with its analog preservation counterpart in a new large department called Preservation, Conservation & Digital Imaging (PCDI). But the IT staff who managed the DRS’ technical infrastructure were moved into a new department called Library Technology Services (LTS) within the university’s central IT. So essentially the management and maintenance of the DRS would now be distributed across departments. Once the reorganization dust settled, it became clear that there was a lot of confusion throughout the Library, and even within the departments directly involved, over whose responsibility it was to do what, and even over which were digital preservation vs. IT responsibilities. For example, who creates the DRS enhancement roadmaps? Is that a responsibility of digital preservation or of the system development manager? And how should decisions be made about preservation storage? Clearly that should be influenced by both digital preservation and IT best practices.

In response, in 2013, a small group of us met to consider a first draft of what has now come to be known as the DRS Roles & Responsibilities document. It was essential to the eventual buy-in of the division of responsibilities that the group was composed of the heads of the two departments (PCDI and LTS) as well as myself (the manager of the digital preservation program and the DRS) and the manager of the library’s system development. Over the course of a few meetings we refined the document into something we all agreed on.

Since then we have continued to refine it whenever it’s clear that we forgot to define who has responsibility for something, or when multiple departments think they are responsible for the same thing. Having this document has proved enormously helpful: not only has it made day-to-day operations more efficient, it has also improved working relationships by removing contention over responsibilities. Most recently we used the document as a guide for deciding which information belongs on websites managed by Digital Preservation Services vs. LTS. It has also proved useful as a communication tool: now we can better explain to other staff who to go to for what.

This document has now been used as a model within Harvard Library in other areas, to clarify responsibilities for a functional area that is distributed across departments. My hope in sharing this is that it might serve as a useful tool for other institutions – to clarify digital preservation responsibilities distributed across departments, or possibly even among different cooperating institutions.

Page one of the DRS Roles & Responsibilities document.

Version 6 of the DRS Roles & Responsibilities can be found at

Code4Lib Journal: Linked Data is People: Building a Knowledge Graph to Reshape the Library Staff Directory

Thu, 2017-04-20 18:18
One of our greatest library resources is people. Most libraries have staff directory information published on the web, yet most of this data is trapped in local silos, PDFs, or unstructured HTML markup. With this in mind, the library informatics team at Montana State University (MSU) Library set a goal of remaking our people pages by connecting the local staff database to the Linked Open Data (LOD) cloud. In pursuing linked data integration for library staff profiles, we have realized two primary use cases: improving the search engine optimization (SEO) for people pages and creating network graph visualizations. In this article, we will focus on the code to build this library graph model as well as the linked data workflows and ontology expressions developed to support it. Existing linked data work has largely centered around machine-actionable data and improvements for bots or intelligent software agents. Our work demonstrates that connecting your staff directory to the LOD cloud can reveal relationships among people in dynamic ways, thereby raising staff visibility and bringing an increased level of understanding and collaboration potential for one of our primary assets: the people that make the library happen.
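A staff directory entry in such a graph can be sketched as JSON-LD using schema.org terms. This is a minimal illustration only: the names, property choices, and URIs below are invented for the example and are not MSU's actual model.

```python
import json

def staff_person_jsonld(name, job_title, org, knows_about, same_as):
    """Build an illustrative JSON-LD description of one staff member,
    roughly in the spirit of a library staff knowledge graph."""
    return {
        "@context": "",
        "@type": "Person",
        "name": name,
        "jobTitle": job_title,
        "worksFor": {"@type": "Library", "name": org},
        "knowsAbout": knows_about,   # shared topics connect people in the graph
        "sameAs": same_as,           # LOD identifiers (e.g. ORCID, Wikidata)
    }

# Hypothetical staff member, for illustration only.
record = staff_person_jsonld(
    "Jane Example",
    "Metadata Librarian",
    "Montana State University Library",
    [""],
    [""],
)
print(json.dumps(record, indent=2))
```

Linking `knowsAbout` values to shared concept URIs, rather than free-text strings, is what lets a network graph visualization connect staff members who work on the same topics.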

Code4Lib Journal: Recommendations for the application of to aggregated Cultural Heritage metadata to increase relevance and visibility to search engines: the case of Europeana

Thu, 2017-04-20 18:18
Europeana provides access to more than 54 million cultural heritage objects through its portal, Europeana Collections. It is crucial for Europeana to be recognized by search engines as a trusted, authoritative repository of cultural heritage objects. Indeed, even though the portal is the main entry point, most Europeana users come to it via search engines. Europeana Collections is fuelled by metadata describing cultural objects, represented in the Europeana Data Model (EDM). This paper presents the research and consequent recommendations for publishing Europeana metadata using the vocabulary and best practices. metadata is embedded in HTML pages to be consumed by search engines to power rich services (such as the Google Knowledge Graph). is an open and widely adopted initiative (used by over 12 million domains), backed by Google, Bing, Yahoo!, and Yandex, for sharing metadata across the web. It underpins the emergence of new web techniques, such as so-called Semantic SEO. Our research addressed the representation of the embedded metadata as part of the Europeana HTML pages and sitemaps so that the re-use of this data can be optimized. The practical objective of our work is to produce a representation of Europeana resources described in EDM that is as rich as possible, tailored to Europeana's realities and user needs as well as to search engines and their users.
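As a rough illustration of the kind of embedded markup discussed here, the sketch below renders a JSON-LD `<script>` block for a single object page. The property choices and URLs are assumptions made for the example, not Europeana's actual recommendation.

```python
import json

def embed_jsonld(obj_title, creator, page_url, image_url):
    """Render a <script type="application/ld+json"> block describing one
    cultural heritage object page (illustrative field mapping only)."""
    data = {
        "@context": "",
        "@type": "CreativeWork",
        "name": obj_title,
        "creator": {"@type": "Person", "name": creator},
        "url": page_url,
        "image": image_url,
    }
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + "\n</script>")

# Hypothetical object page, for illustration only.
snippet = embed_jsonld(
    "Mona Lisa", "Leonardo da Vinci",
    "",
    "",
)
print(snippet)
```

A block like this would be placed in the page `<head>`, where search engine crawlers can parse it without affecting the visible page.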

Code4Lib Journal: Autoload: a pipeline for expanding the holdings of an Institutional Repository enabled by ResourceSync

Thu, 2017-04-20 18:18
Providing local access to locally produced content is a primary goal of the Institutional Repository (IR). Guidelines, requirements, and workflows are among the ways in which institutions attempt to ensure this content is deposited and preserved, but some content is always missed. At Los Alamos National Laboratory, the library implemented a service called LANL Research Online (LARO), to provide public access to a collection of publicly shareable LANL researcher publications authored between 2006 and 2016. LARO exposed the fact that we have full text for only about 10% of eligible publications for this time period, despite a review and release requirement that ought to have resulted in a much higher deposition rate. This discovery motivated a new effort to discover and add more full text content to LARO. Autoload attempts to locate and harvest items that were not deposited locally, but for which archivable copies exist. Here we describe the Autoload pipeline prototype and how it aggregates and utilizes Web services including Crossref, SHERPA/RoMEO, and oaDOI as it attempts to retrieve archivable copies of resources. Autoload employs a bootstrapping mechanism based on the ResourceSync standard, a NISO standard for resource replication and synchronization. We implemented support for ResourceSync atop the LARO Solr index, which exposes metadata contained in the local IR. This allowed us to utilize ResourceSync without modifying our IR. We close with a brief discussion of other uses we envision for our ResourceSync-Solr implementation, and describe how a new effort called Signposting can replace cumbersome screen scraping with a robust autodiscovery path to content which leverages Web protocols.

Code4Lib Journal: Outside The Box: Building a Digital Asset Management Ecosystem for Preservation and Access

Thu, 2017-04-20 18:18
The University of Houston (UH) Libraries made an institutional commitment in late 2015 to migrate the data for its digitized cultural heritage collections to open source systems for preservation and access: Hydra-in-a-Box, Archivematica, and ArchivesSpace. This article describes the work that the UH Libraries implementation team has completed to date, including open source tools for streamlining digital curation workflows, minting and resolving identifiers, and managing SKOS vocabularies. These systems, workflows, and tools, collectively known as the Bayou City Digital Asset Management System (BCDAMS), represent a novel effort to solve common issues in the digital curation lifecycle and may serve as a model for other institutions seeking to implement flexible and comprehensive systems for digital preservation and access.
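One of the pieces mentioned, minting identifiers, is often implemented as a NOID-style opaque minter that encodes a counter in a vowel-free alphabet. The sketch below is a generic illustration under that assumption; the prefix and alphabet are invented, and this is not the actual BCDAMS tool.

```python
# NOID-style alphabet: digits plus consonants, avoiding vowels so that
# minted strings never accidentally spell words.
ALPHABET = "0123456789bcdfghjkmnpqrstvwxz"

def mint(n, prefix="bcdams:"):
    """Mint an opaque identifier from a monotonically increasing counter
    (illustrative prefix; a production minter would also persist state
    and may append a check character)."""
    if n == 0:
        return prefix + ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, len(ALPHABET))
        digits.append(ALPHABET[rem])
    return prefix + "".join(reversed(digits))

ids = [mint(i) for i in range(3)]
print(ids)
```

Because the encoding is a bijection from counters to strings, a resolver can store only the counter and regenerate or parse identifiers deterministically.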

Code4Lib Journal: Medici 2: A Scalable Content Management System for Cultural Heritage Datasets

Thu, 2017-04-20 18:18
Digitizing large collections of Cultural Heritage (CH) resources and providing tools for their management, analysis and visualization is critical to CH research. A key element in achieving the above goal is to provide user-friendly software offering an abstract interface for interaction with a variety of digital content types. To address these needs, the Medici content management system is being developed in a collaborative effort between the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, Bibliotheca Alexandrina (BA) in Egypt, and the Cyprus Institute (CyI). The project is pursued in the framework of the European Project “Linking Scientific Computing in Europe and Eastern Mediterranean 2” (LinkSCEEM2) and supported by work funded through the U.S. National Science Foundation (NSF), the U.S. National Archives and Records Administration (NARA), the U.S. National Institutes of Health (NIH), the U.S. National Endowment for the Humanities (NEH), the U.S. Office of Naval Research (ONR), the U.S. Environmental Protection Agency (EPA), as well as other private sector efforts. Medici is a Web 2.0 environment integrating analysis tools for the auto-curation of un-curated digital data, allowing automatic processing of input (CH) datasets and visualization of both data and collections. It offers a simple user interface for dataset preprocessing, previewing, automatic metadata extraction, user input of metadata and provenance support, storage, archiving and management, representation and reproduction. Building on previous experience (Medici 1), NCSA and CyI are working towards the improvement of the technical, performance and functionality aspects of the system. The current version of Medici (Medici 2) is the result of these efforts. It is a scalable, flexible, robust distributed framework with wide data format support (including 3D models and Reflectance Transformation Imaging, RTI) and metadata functionality. We provide an overview of Medici 2’s current features supported by representative use cases, as well as a discussion of future development directions.

Code4Lib Journal: An Interactive Map for Showcasing Repository Impacts

Thu, 2017-04-20 18:18
Digital repository managers rely on usage metrics such as the number of downloads to demonstrate research visibility and impacts of the repositories. Increasingly, they find that current tools such as spreadsheets and charts are ineffective for revealing important elements of usage, including reader locations, and for attracting the targeted audiences. This article describes the design and development of a readership map that provides an interactive, near-real-time visualization of actual visits to an institutional repository using data from Google Analytics. The readership map exhibits the global impacts of a repository by displaying the city of every view or download together with the title of the scholarship being read and a hyperlink to its page in the repository. We will discuss project motivation and development issues such as authentication with Google API, metadata integration, performance tuning, and data privacy.
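The core transformation for such a map, turning analytics rows into plottable map features, can be sketched as below. The row shape here is an assumption for illustration, not the actual Google Analytics API response.

```python
def readership_geojson(rows):
    """Convert readership rows (city, lat, lon, scholarship title, URL;
    an assumed shape, not the real Google Analytics payload) into a
    GeoJSON FeatureCollection for an interactive web map."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                # GeoJSON coordinate order is [longitude, latitude].
                "geometry": {"type": "Point",
                             "coordinates": [r["lon"], r["lat"]]},
                "properties": {"city": r["city"],
                               "title": r["title"],
                               "url": r["url"]},
            }
            for r in rows
        ],
    }

# One hypothetical view event, for illustration only.
rows = [{"city": "Chicago", "lat": 41.88, "lon": -87.63,
         "title": "Sample Thesis", "url": ""}]
geo = readership_geojson(rows)
print(geo["features"][0]["properties"]["city"])
```

Attaching the title and repository URL as feature properties is what lets a map popup show what is being read and link back to the item, as the article describes.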

DPLA: DPLA Celebrates Continued Growth and Plans for the Future at DPLAfest 2017

Thu, 2017-04-20 15:15

Chicago, IL— DPLAfest 2017, the fourth annual event bringing together members of the broad DPLA community, officially kicked off Thursday morning at Chicago Public Library’s Harold Washington Library Center. In addition to Chicago Public Library, DPLAfest 2017 is co-hosted by the Black Metropolis Research Consortium, Chicago Collections, and the Reaching Across Illinois Library System (RAILS). Over the next two days, over 350 participants, representing diverse fields including libraries, archives, museums, technology, education, and more, will come together to learn, converse, and collaborate in a broad range of sessions, workshops, and working sessions. At this morning’s opening plenary, DPLAfest-goers received a warm welcome to the city of Chicago from Chicago Public Library Commissioner and CEO Brian Bannon as well as greetings from Amy Ryan, Chair of DPLA’s Board of Directors, and a report on DPLA’s recent milestones and new initiatives from DPLA Executive Director Dan Cohen.

Following the welcoming remarks, panelists Luis Herrera, City Librarian of San Francisco, Nell Taylor, Executive Director of the Read/Write Library, and Jennifer Brier, Associate Professor of History and Gender and Women’s Studies at the University of Illinois Chicago, discussed community archives, the future of open access to library, archive, and museum collections, and intersections between local community practice and DPLA’s national network in a panel entitled, “Telling Stories of Who We Are,” moderated by DPLA Board Member Sarah Burnes.

Selected announcements from the DPLAfest opening plenary include:

Continued growth of the DPLA network

DPLA celebrated the continued expansion of its partner network over the past year with the addition of new collections from Service Hubs in Wisconsin, Illinois, and Michigan as well as newly accepted applications from Service Hubs representing Ohio, Florida, Montana, Colorado and Wyoming, and the District of Columbia. In addition to its growing list of Service Hubs, DPLA was proud to officially welcome the Library of Congress as a contributing Content Hub in November 2016. With these new collections and others from established partners, DPLA now makes over 16 million items from 2,350 libraries, archives, and museums freely discoverable for all. With the growth of the collections, use of the site has grown dramatically, with new analytics implemented this year showing the important role of both search and curated projects like the Exhibitions and Primary Source Sets in ensuring discovery of and engagement with partner collections.

Implementing Rights Statements

Launched one year ago at DPLAfest 2016, has been well received by cultural heritage professionals within the DPLA network and around the world. Partners across the DPLA network have begun working towards implementation of the new statements, which will be the subject of the "Turn the Rights On" session Thursday at 3:30pm CT. partners DPLA and Europeana also look forward to welcoming new international partners to the project over the coming months. Digital libraries in Brazil, Australia, New Zealand, and India will be joining the project, with interest from additional libraries on every continent.

Reading the Ebooks Landscape

DPLA celebrated continued success and new initiatives towards its mission of maximizing access to ebooks. Open eBooks, a collaboration between DPLA, The New York Public Library, FirstBook, and Clever, with support from Baker and Taylor, marked its first full year in February 2017, during which children across the country read over 1.5 million ebooks using the app. In addition to Open eBooks, DPLA announced a $1.5 million grant from the Sloan Foundation in January to support the development of DPLA’s mobile-friendly open collection of ebooks and exploration into new ways of facilitating discovery of free, open content; unlocking previously gated content through new licensing and/or access models; and facilitating better purchasing options for libraries.

Expanding our Education Work

Since 2015, DPLA has collaborated with an Education Advisory Committee of ten teachers in grades 6-12 and higher education to design and curate 100 Primary Source Sets about topics in history, literature, and culture using DPLA partner content. These educators come from a variety of geographic and institutional settings including public K-12 schools, community colleges, school district administration, and research universities.

In 2017-2018, DPLA will continue to work with these ten teachers and add six more members from higher education with funding from the Teagle Foundation. With this team, DPLA will continue to develop primary source sets and build and pilot a curriculum for professional development. Professional development workshops with educators in diverse institutional settings will help instructors form next steps for implementing DPLA and the Primary Source Sets into their teaching practices and course syllabi.

Announcing our Values Statement

In today’s society, where fake news abounds, funding for arts and humanities programs is at risk, inequality is expanding, and our nation continues to wrestle with questions of belonging and inclusion for many people, we at DPLA believe it is more important than ever to be clear about who we are and what we value as an organization. As such, we are proud to unveil DPLA’s new Values Statement, which outlines the following core commitments of our organization and our staff:

  • A Commitment to our Mission
  • A Commitment to Constructive Collaboration
  • A Commitment to Diversity, Inclusion, and Social Justice
  • A Commitment to the Public and our Community
  • A Commitment to Stewardship of Resources

The ideas captured in the Values Statement emerged from discussions among our entire staff, with input from our board, about the mission of our institution, the ways we approach our work, and why we as professionals and individuals are committed to the essential goals of DPLA. For each tenet of the statement, we have outlined the core principle to which we aspire as well as specific ways that each value drives our everyday practice. We intend for this document to be a dynamic guide for our practice going forward and a reference against which we can track our progress as we continually strive to embody these values throughout the institution.

Volunteer Opportunity: Join the DPLA Community Reps

DPLA is currently accepting applications for the next class of Community Reps, a grassroots network of enthusiastic volunteers who help connect DPLA with members of their local communities through outreach activities. DPLA staff have worked with hundreds of terrific reps from diverse places and professions so far and look forward to welcoming a new cohort this spring. The application will remain open until Monday, April 24, 2017.

Welcome to DPLAfest Awardees

Cohen introduced and welcomed the five talented and diverse members of the extended DPLA community who are attending DPLAfest 2017 as recipients of the inaugural DPLA travel awards. After receiving a tremendous response to the call from many excellent candidates, DPLA was pleased to award travel support to Tommy Bui of Los Angeles Public Library, Amanda H. Davis of Charlotte Mecklenburg Library, Raquel Flores-Clemons of Chicago State University, Valerie Hawkins of Prairie State College, and Nicole Umayam of Arizona State Library.

Thanks to our Hosts and Sponsors

DPLA would like to acknowledge and thank the gracious DPLAfest 2017 host organizations, Chicago Public Library, Black Metropolis Research Consortium, Chicago Collections, and Reaching Across the Illinois Library System (RAILS) as well as the generous sponsors of DPLAfest 2017, Datalogics, OCLC, Lyrasis, Sony, and an anonymous donor.

DPLA invites all participants and those interested in joining the conversation from afar to follow and contribute to the conversation on Twitter using #DPLAfest.

For additional information or media requests, please contact Arielle Perry, DPLA Program Assistant at