Planet Code4Lib

Open Knowledge Foundation: Using the Global Open Data Index to strengthen open data policies: Best practices from Mexico

Wed, 2017-08-16 09:00

This is a blog post coauthored with Enrique Zapata, of the Mexican National Digital Strategy.

As part of the last Global Open Data Index (GODI), Open Knowledge International (OKI) decided to include a dialogue phase, in which we invited individuals, CSOs, and national governments to exchange points of view and knowledge about the data, and to understand data publication in a more useful way.

In this process, we had a number of valuable exchanges that we tried to capture in our report on the state of open government data in 2017, as well as in the records of the forum. Additionally, we decided to highlight the dialogue process between the government and civil society in Mexico and its results towards improving data publication in the executive branch, as well as funding to expand this work to other authorities and improve the GODI process. Here is what we learned from the Mexican dialogue:

The submission process

During this stage, GODI tries to evaluate directly how easy it is to find government data and how good its quality is in general. To achieve this, civil society and government actors discussed how best to submit, and agreed to submit jointly, based on actual data availability.

Besides creating an open space to discuss open data in Mexico and agreeing on a joint submission process, this exercise showed some room for improvement in the characteristics that GODI measured in 2016:

  • Open licenses: In Mexico, as in many other countries, licenses are linked to datasets through open data platforms. This led to some discrepancies with the sources referenced by the reviewers, since the data could be found on different sites where the applicable license was not clear.
  • Data findability: Most of the datasets assessed in GODI are the responsibility of the federal government and are available on the national open data portal. Nevertheless, the titles used to identify the datasets are based on technical regulatory needs, which makes it difficult for data users to reach the data easily.
  • Differences of government levels and authorities: GODI assesses national governments, but some of these datasets – such as land rights or national laws – are in the hands of other authorities or local governments. This meant that some datasets couldn’t be published by the federal government, since it is not in their jurisdiction and they can’t make publication of these data mandatory.
Open dialogue and the review process

During the review stage, the Open Data Office of the National Digital Strategy took this feedback into account and worked on some of these issues. They convened a new session with civil society, including representatives from the Open Data Charter and OKI, in order to:

  • Agree on the state of the data in Mexico according to GODI characteristics;
  • Show the updates and publication of data requested by GODI;
  • Discuss paths to publish data that is not the responsibility of the federal government;
  • Converse about how they could continue to strengthen the Mexican Open Data Policy.


The results

As a result of this dialogue, we agreed on six actions that could be implemented internationally beyond the Mexican context, both by governments with centralised open data repositories and by those that don’t centralise their data, as well as a way to improve the GODI methodology:

  1. Open dialogue during the GODI process: Mexico was the first country to develop a structured dialogue to agree with open data experts from civil society about submissions to GODI. The Mexican government will seek to replicate this process in future evaluations and include new groups to promote open data use in the country. OKI will take this experience into account to improve the GODI processes in the future.
  2. Open licenses by default: The Mexican government is reviewing and modifying their regulations to implement the terms of Libre Uso MX for every website, platform and online tool of the national government. This is an example of good practice which OKI has highlighted in our ongoing Open Licensing research.
  3. “GODI” data group in CKAN: Most data repositories allow users to create thematic groups. In the case of GODI, the Mexican government created the “Global Open Data Index” group on its national open data portal. This will allow users to access these datasets based on their specific needs.
  4. Create a link between government-built visualization tools and the underlying open data: Visualisations and reference tools tend to be the first point of contact for citizens. For this reason, the Mexican government will include new regulations in their upcoming Open Data Policy so that any new development includes visible links to the open data it uses.
  5. Multiple access points for data: In August 2018, the Mexican government will launch a new section of its open data portal to provide non-technical users easy access to valuable data. These data, called ‘Infraestructura de Datos Abiertos MX’, will be divided into five easy-to-explore and understand categories.
  6. Common language for datasets: Government naming conventions aren’t the easiest to understand and can make it difficult to access data. The Mexican government has agreed to change dataset names to more colloquial language, which can improve data findability and promote their use. Where this is not possible for some datasets, the government will opt for a solution similar to the one described in point 5.

We hope these changes will be useful for data users as well as other governments who are looking to improve their publication policies. Got any other ideas? Share them with us on Twitter by messaging @OKFN or send us an email.


LITA: #NoFilter: Creating Compelling Visual Content for Social Media

Tue, 2017-08-15 13:55

The #NoFilter series has as its focus the numerous challenges that surround social media and its use in the library. In previous posts, I discussed sources for content inspiration as well as tips for content planning. This entry will concentrate on creating compelling visual content for your library’s social media.

A strong visual component for a social media post is imperative for capturing the attention of users and bringing them into dialogue with the library and forming the relationships that are key to institutional social media success.  Social media is not a one-way self-promotional tool for a library, but rather an interactive space allowing a library to engage meaningfully with users, cultivate their support and kindle their enthusiasm for the library’s work. Quality visual content in a social media post has the potential to spur conversations with/among users who in turn share the library’s content with ever-wider audiences.

Below are three tips for generating compelling visual content for your library’s social media posts:

Recipe card created using Canva for a Honey Bread Recipe from 1915. The card was shared on the Othmer Library’s Pinterest and Tumblr where it elicited numerous user responses.

  1. Craft an aesthetically pleasing design using Canva, the user-friendly web-based graphic design service. Utilizing one of Canva’s social media templates makes the creation process all the more efficient. If graphic design is not your forte, you can complete Canva’s Design Essentials tutorials, which cover everything from fonts and colors to images and backgrounds.
  2. Assemble an infographic to display information in a captivating way. Canva makes this easy with its infographic template. PowerPoint can also be used for this purpose. The Penn Libraries provide excellent instructions for this process on its Infographics Guide.
  3. Bring a static photo or illustration to life with an animated GIF (short for Graphic Interchange Format). At the Othmer Library of Chemical History, we employ Photoshop Elements in order to create GIFs of images in our rare books and archives. Does using Photoshop seem intimidating to you? The Smithsonian Libraries offer some useful tips and tricks in their 2014 blog post: Library Hacks: Creating Animated GIFs. My colleague also created a handy step-by-step guide for GIF-making: Animated GIF Basics.


What types of visual content do you share on your library’s social media? Do you have any tips for creating compelling visuals? Share them in the comments below!

Eric Lease Morgan: How to do text mining in 69 words

Tue, 2017-08-15 13:38

Doing just about any type of text mining is a matter of: 0) articulating a research question, 1) acquiring a corpus, 2) cleaning the corpus, 3) coercing the corpus into a data structure one’s software can understand, 4) counting & tabulating characteristics of the corpus, and 5) evaluating the results of Step #4. Everybody wants to do Step #4 & #5, but the initial steps usually take more time than desired.
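The steps above can be sketched in a few lines. This is a rough illustration only, assuming a plain string as the "corpus" and crude regex tokenisation; real projects would substitute proper acquisition and cleaning for steps 1 and 2:

```python
import re
from collections import Counter

def tabulate(text, n=10):
    """Steps 2-4: clean the corpus, coerce it into a data structure
    (a list of lowercase tokens), then count and tabulate the most
    frequent words. Step 5, evaluation, is up to the researcher."""
    tokens = re.findall(r"[a-z']+", text.lower())  # crude cleaning/tokenising
    return Counter(tokens).most_common(n)

print(tabulate("The cat sat on the mat. The cat slept.", n=3))
```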

District Dispatch: FCC extends Net Neutrality public comment period to August 30

Tue, 2017-08-15 12:00

On Friday, the FCC announced it would extend the public comment period on its proposal to roll back a 2015 order protecting net neutrality for an additional two weeks. This phase of the process is supposed to allow for “replies” to arguments raised by other commenters.

With close to 20 million comments in the public record so far, any additional time is useful. It’s worth noting, however, that many advocates have called for the FCC to release the consumer complaints received since the 2015 Open Internet Order went into effect and all documents related to the ombudsperson’s interactions with internet users. The comment extension, while welcome, does not address the fact the FCC has yet to make public more than 40,000 net neutrality complaints that could provide direct and relevant evidence in response to numerous questions that the FCC poses in this proceeding.

The extra time means more opportunities for the library community to engage. Even if you have already submitted comments, you can do so again “on reply.” Here are a few easy strategies:

  • Submit a comment amplifying the library and higher education principles for an open internet.
  • Cite specific examples or arguments from the initial comments submitted by ALA and allies earlier in the proceeding.
  • Thousands of librarians and library staff from across the country have filed comments on their own or via the ALA’s action alert. Members of the library community called on the FCC to keep the current net neutrality rules and shared their worries that the internet with “slow lanes” would hurt libraries and the communities they serve. The comments below offer a few examples and may help with your comments:
    • The New Jersey Library Association submits: “Abandoning net neutrality in favor of an unregulated environment where some content is prioritized over other content removes opportunities for entrepreneurs, students and citizens to learn, grow and participate in their government. It will further enhance the digital divide and severely inhibit the ability of our nation’s libraries to serve those on both sides of that divide.”
    • “If net neutrality is to be abolished, then our critical online services could be restricted to ‘slow lanes’ unless we pay a premium,” wrote John, a public library employee in Georgia. “These include our job and career gateway, language learning software, grant finding, medical information, ebooks, and test preparation guides, such as for the GED and ASVAB. Ending net neutrality would hurt the people who need equal access the most. These people use our career gateway to find jobs, our grant finder to support their businesses and nonprofits, and use our test aids to earn their GED or get into the military. If we were forced to pay a premium to access these resources, it will limit our ability to fund our other programs and services.”
    • Catherine, a reference librarian at a major university in Oregon writes, “I [have] learned that imaginative online searching is an invaluable research tool for personal, professional, and scholarly interests. Yes, going online can be fun, but the internet must not be considered a plaything. Access must not be restricted or limited by corporate packaging.”
    • Hampton, a chief executive officer of a public library system in Maryland, wrote about all the functions and services of the modern library dependent on reliable, unfettered internet access: “In our library, we offer downloadable eBooks, eMagazines, and eAudiobooks as well as numerous databases providing online courses, language learning through Rosetta Stone, 365-days-a-year tutoring for kindergarten through adult with BrainFuse, and many more resources online. We have public computers with internet access as well as free WiFi in our fifteen libraries extending Internet access to thousands of customers who bring their tablets and smartphones to the library. We work with customers to help them in the health care marketplace, with applications for Social Security and jobs, and every conceivable use of the internet. Obviously, being relegated to lower priority internet access would leave our customers in a very difficult position.”
    • Others wrote with concerns about the need for access to information for democracy to thrive, like Carrie, an information professional from Michigan: “The internet is not merely a tool for media consumption, but is also a means of free expression, a resource for education, and most importantly, an implement of democracy. I will not mince words: Allowing corporations to manipulate the flow of information on the internet is not the way forward. An end to net neutrality would hurt businesses large and small, inhibit the free flow of speech online, and allow telecommunications corporations to unjustly interfere with market forces.”

Stay tuned via the District Dispatch and American Libraries blog posts.

The post FCC extends Net Neutrality public comment period to August 30 appeared first on District Dispatch.

Open Knowledge Foundation: Data-cards – a design pattern

Tue, 2017-08-15 10:31


It can be useful to recognise patterns in the challenges we face, and in our responses to those challenges. In doing this, we can build a library of solutions, a useful resource when similar challenges arise in the future. When working on innovative projects, as is often the case at Open Knowledge International, creating brand new challenges is inevitable. With little or no historical reference material on how best to tackle these challenges, paying attention to your own repeatable solutions becomes even more valuable.

From a user interface design point of view, these solutions come in the form of design patterns – reusable solutions to commonly occurring problems. Identifying, and using design patterns can help create familiar processes for users; and by not reinventing the wheel, you can save time in production too.

In our work on Data Packages, we are introducing a new task into the world – creating those data packages. This task can be quite simple, and it will ultimately be time saving for people working with data. That said, there is no escaping the fact that this is a task that has never before been asked of people, one that will need to be done repeatedly, and potentially, from within any number of interfaces.

It has been my task of late to design some of these interfaces; I’d like to highlight one pattern that is starting to emerge – the process of describing, or adding metadata to, the columns of a data table.

I was first faced with this challenge when working on OS Packager. The objective was to present a recognisable representation of the columns, and facilitate the addition of metadata for each of those columns. Adding the data would be relatively straightforward – a few form fields. The challenge lay in helping the user to recognise those columns from the tables they originated in. As anyone who works with spreadsheets on a regular basis will know, they aren’t often predictably or uniformly structured, meaning it is not always obvious what you’re looking at. Take them out of the familiar context of the application they were created in, and this problem can get worse. For this reason, just pulling a table header is probably not sufficient to identify a column.

We wanted to provide a preview of the data, to give the best chance of it being recognisable. In addition to this, I felt it important to keep the layout as close as possible to that of, say, Excel. The simplest solution would be to take the first few rows of the table, and put a form under each column for the user to add their metadata.



This is a good start, about as recognisable and familiar as you’re going to get. There is one obvious problem though: this could extend well beyond the edge of the user’s screen, leading to an awkward navigation experience. For an app aimed at desktop users, horizontal scrolling, in any of its forms, would be problematic.

So, in the spirit of the good ol’ webpage, let’s make this thing wrap. That is to say that when an element cannot fit on the screen, it moves to a new “line”. When doing this we’ll need some vertical spacing where the new line occurs, to make it clear that one column is separate from the one above it. We then need horizontal spacing to prevent the false impression of grouping created by the rows.



The data-card was born. At the time of writing it is utilised in OS Packager, pretty closely resembling the above sketch.



Data Packagist is another application that creates data packages, and it faces the same challenges as described above. When I got involved in this project there was already a working prototype, and in that prototype I saw data-cards beginning to emerge.

It struck me that if these elements followed the same data card pattern created for OS Packager, they could benefit in two significant ways. The layout and data preview would again allow the user to more easily recognise the columns from their spreadsheet; plus the grid layout would lend itself well to drag and drop, which would mean avoiding multiple clicks (of the arrows in the screenshot above) when reordering. I incorporated this pattern into the design.



Before building this new front-end, I extracted what I believe to be the essence of the data-card from the OS Packager code, to reuse in Data Packagist, and potentially future projects. While doing so I thought about the current and potential future uses, and the other functions useful to perform at the same time as adding metadata. Many of these will be unique to each app, but there are a couple that I believe likely to be recurring:

  • Reorder the columns
  • Remove / ignore a column

These features combine with those of the previous iteration to create this stand-alone data-card project:

Time will tell how useful this code will be for future work, but as I was able to use it wholesale (changing little more than a colour variable) in the implementation of the Data Packagist front-end, it came at virtually no additional cost. More important than the code however, is having this design pattern as a template, to solve this problem when it arises again in the future.

Alf Eaton, Alf: Styling and theming React Components

Tue, 2017-08-15 09:56

When styling applications with global CSS stylesheets, generic class names become specialised over time (e.g. section_heading__active), and eventually cause problems when a style change in one part of the app accidentally affects usage of the same class name in another part of the app.

Importantly, when authoring app components, you don’t know (or want to know) what CSS classes are being used in the rest of the app.

When authoring components in JSX it’s tempting to just write inline styles, but these can be slow and make the element code more difficult to read.

There are some standardised solutions to this - scoped styles in Shadow DOM - but lack of browser support means that JS polyfills are required.

Another alternative is CSS Modules: when a CSS file is imported into a JS file, the class names in the CSS file are replaced with globally unique names and are provided as an object.

A solution: CSS in JS

Define the styles inside the component and inject them with a Higher Order Component, generating unique class names at the same time.

In order for components to be themeable, there needs to be a way for theme variables to be available when writing styles. A theme provider HOC makes the theme object available via React’s context, so the variables can be used in styles.

There are a few leading contenders:

  • styled-components: write styles in CSS, interpolate theme variables, injects a className string containing a single class name.
  • react-jss: write styles in JSS (objects), use theme variables as values, injects a classes object containing multiple class names.
  • styled-jsx: write CSS in <style> tags, scoped to the parent component.
  • material-ui: uses react-jss and adds a name to each style object, allowing any class to be targeted for overriding by the theme.
Further reading
  • CSS Modules aren’t currently supported by react-scripts. SASS is one way to define and use global variables in CSS, but it is also not currently supported by react-scripts.

Ed Summers: UTR

Tue, 2017-08-15 04:00

I’ve always intended to use this blog as more of a place for rough working notes as well as somewhat more fully formed writing. So in that spirit here are some rough notes for some digging into a collection of tweets that used the #unitetheright hashtag. Specifically I’ll describe a way of determining what tweets have been deleted.

Note: be careful with deleted Twitter data. Specifically be careful about how you publish it on the web. Users delete content for lots of reasons. Republishing deleted data on the web could be seen as a form of Doxing. I’m documenting this procedure for identifying deleted tweets because it can provide insight into how particularly toxic information is traveling on the web. Please use discretion in how you put this data to use.

So I started by building a dataset of #unitetheright data using twarc:

twarc search '#unitetheright' > tweets.json

I waited two days and then was able to gather some information about the tweets that were deleted. I was also interested in what content and websites people were linking to in their tweets because of the implications this has for web archives. Here are some basic stats about the dataset:

number of tweets: 200,113

collected at: 2017-08-13 11:46:05 EDT

date range: 2017-08-04 11:44:12 - 2017-08-13 15:45:39 UTC

tweets deleted: 16,492 (8.2%)

[Tables: Top 10 Domains in Tweeted URLs; Top 25 Tweeted URLs (after unshortening). The domain and URL values did not survive extraction; only the counts remain.]

Deletes

So how do you get a sense of what has been deleted from your data? While it might make sense to write a program to do this eventually, I find it can be useful to work in a more exploratory way on the command line first, and then, when I’ve got a good workflow, put that into a program. I guess if I were a real data scientist I would be doing this in R, or a Jupyter notebook at least. But I still enjoy working at the command line, so here are the steps I took to identify tweets that had been deleted from the original dataset:

First I extracted and sorted the tweet identifiers into a separate file using jq:

jq -r '.id_str' tweets.json | sort -n > ids.csv

Then I hydrated those ids with twarc. If the tweet has been deleted since it was first collected it cannot be hydrated:

twarc hydrate ids.csv > hydrated.json

I extracted these hydrated ids:

jq -r .id_str hydrated.json | sort -n > ids-hydrated.csv

Then I used diff to compare the pre- and post-hydration ids, and a little bit of Perl to strip off the diff formatting, which results in a file of tweet ids that have been deleted.

diff ids.csv ids-hydrated.csv | perl -ne 'if (/< (\d+)/) {print "$1\n";}' > ids-deleted.csv

Since we still have the data that was deleted, we can now build a file of just the deleted tweets. Maybe there’s a fancy way to do this on the command line, but I found it easiest to write a little bit of Python:
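The script itself was embedded in the original post. A minimal version, assuming the filenames used in the steps above, might look like this:

```python
import json

def extract_deleted(tweets_path, ids_path, out_path):
    """Write the tweets from tweets_path (line-oriented JSON) whose
    id_str appears in ids_path (one id per line) to out_path."""
    with open(ids_path) as f:
        deleted = set(line.strip() for line in f)
    with open(tweets_path) as infile, open(out_path, "w") as outfile:
        for line in infile:
            if json.loads(line)["id_str"] in deleted:
                outfile.write(line)

# e.g. extract_deleted("tweets.json", "ids-deleted.csv", "delete.json")
```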

After you run it you should have a file delete.json. You might want to convert it to CSV with something like twarc’s utility to inspect in a spreadsheet program.

Calling these tweets deleted is a bit of a misnomer. A user could have deleted their tweet, deleted their account, protected their account, or Twitter could have decided to suspend the user’s account. Also, the user could have done none of these things and simply retweeted a user who had done one of them. Untangling what happened is left for another blog post. To be continued…

District Dispatch: ALA celebrates 10 years of Google Policy Fellows

Mon, 2017-08-14 19:50

ALA celebrates the 10th anniversary of the Google Policy Fellowship Program.

Last Friday, we said goodbye to our 2017 Google Policy Fellow Alisa Holahan. The week before her departure, she and OITP hosted a lunch and discussion for this year’s cohort of Google Policy Fellows.

Similar to the six Policy Fellows lunches we have hosted in the past (in 2016, 2015, 2014, 2013, 2012 and 2011), the gathering was an opportunity for the Fellows to explore the intersection of information technology policy and libraries. Fellows from various policy organizations including the Center for Democracy and Technology, the National Hispanic Media Coalition, and Public Knowledge attended to learn more about ALA’s role in shaping technology policy and addressing library needs.

Alan Inouye, Marijke Visser and Carrie Russell shared a brief overview of their roles and areas of focus at OITP, and I represented the intersection between OITP and OGR. After introductions, the conversation turned to a series of questions: How does the Ready to Code initiative support workforce innovation? How does the Washington Office set priorities? How do we decide our portfolios of work? The informal question-and-answer format generated an interesting exchange around libraries’ roles and interests in technology and innovation.

Most notably, this summer’s lunch marked the 10th anniversary of the Google Policy Fellow Program, of which ALA is a founding host organization. Since 2008, we have encouraged master’s and doctoral students in library and information studies or related areas with an interest in national public policy to apply and have now amassed a decade of alumni, including:

As the expanding role of libraries of all types evolves, the need for information professionals with Washington experience and savvy will continue to grow. The Washington Office is privileged to have hosted ten early-career professionals and to provide the means for them to obtain direct experience in national policy making.

The post ALA celebrates 10 years of Google Policy Fellows appeared first on District Dispatch.

Islandora: Islandora 7.x Committers Calls are on a new platform

Mon, 2017-08-14 18:27

Ever wanted to come to the bi-weekly Islandora Committers Call but don't like using Skype? Good news! We have moved the call to the FreeConferenceCallHD line used by the weekly Islandora CLAW Calls and many of our Interest Groups. You can join by phone or using the web interface in your browser, and join us in the #islandora IRC channel on Freenode for sharing links and text comments (there's a web version for that as well, if you don't want to run an IRC client).

How to join:

Lucidworks: Using A Query Classifier To Dynamically Boost Solr Ranking

Mon, 2017-08-14 18:14

As we countdown to the annual Lucene/Solr Revolution conference in Las Vegas next month, we’re highlighting talks and sessions from past conferences. Today, we’re highlighting Target’s Howard Wan’s talk, “Using A Query Classifier To Dynamically Boost Solr Ranking”.

About 40% of our queries are ambiguous, which can result in products from many categories. For example, the query “red apple” can match the following products: a red apple iPod (electronics category), red apple fruit (fresh produce), a red apple iPhone case (accessories). It is desirable to have a classifier to instruct Solr to boost items from the desired category. In addition, for a search engine with a small index, a good percentage of queries may return little or no results. Is it possible to use the classifier to solve both problems? This talk discusses a classifier built from behavior data which can dynamically re-classify the query to solve both problems.
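The general idea can be sketched as follows. This is a toy illustration, not Target’s implementation: the click data, category names, and boost weight are all invented. It maps each query to its most-clicked category and emits a Solr boost query (`bq`) parameter for it:

```python
from collections import Counter, defaultdict

# Hypothetical behaviour data: (query, category-of-clicked-product) pairs.
clicks = [
    ("red apple", "produce"), ("red apple", "produce"),
    ("red apple", "electronics"), ("iphone case", "accessories"),
]

# Aggregate clicks per query into per-category counts.
by_query = defaultdict(Counter)
for query, category in clicks:
    by_query[query][category] += 1

def boost_params(query, weight=5.0):
    """Return extra Solr params boosting the query's most-clicked
    category, or no extra params if the query was never seen."""
    if query not in by_query:
        return {}
    category, _ = by_query[query].most_common(1)[0]
    return {"bq": f"category:{category}^{weight}"}
```

For "red apple" above, the classifier would emit a boost for the `produce` category, since that is where most clicks landed.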


Join us at Lucene/Solr Revolution 2017, the biggest open source conference dedicated to Apache Lucene/Solr on September 12-15, 2017 in Las Vegas, Nevada. Come meet and network with the thought leaders building and deploying Lucene/Solr open source search technology. Full details and registration…

The post Using A Query Classifier To Dynamically Boost Solr Ranking appeared first on Lucidworks.

Evergreen ILS: Evergreen 3.0 development update #14: results of the second Feedback Fest

Mon, 2017-08-14 16:43

Display at the circulation desk of the Lucius Beebe Memorial Library in Wakefield, MA. Photo courtesy Jeff Klapes.

Since the previous update, another 61 patches have made their way into Evergreen’s master branch. Last week was the second Feedback Fest for the 3.0 release cycle, so let’s talk about some numbers.

A total of 52 bugs either had a pull request when the Feedback Fest started or got one during the course of the week. Of those, 23 resulted in patches getting merged. Some merges of particular note include:

  • A new database view for more efficiently retrieving lists of current and aged loans (bug 1695007)
  • Teaching Evergreen about the time zone that a library belongs to, to help format due date and time displays (bug 1705524)
  • Teaching the web staff client how to print spine and item labels (bug 1704873)
  • Adding two more popularity metrics (bug 1688096)
  • Several improvements to Evergreen’s SIP server
  • Numerous fixes to the internationalization infrastructure; the net effect is that more strings are available to translators

The fix for bug 1480432 warrants a particular shout-out, as it clarifies a subtle detail of the staff permissions system. It’s possible for a staff member to receive a given permission at more than one depth. For example, a staff member could have VIEW_USER permission at the branch level because of their primary profile, but could also have it at system level because of a secondary permission group. Prior to the bugfix, the actual depth that was applied would depend on hidden details of exactly when and how the permission was granted. As a result of the bugfix, there’s now a clear rule: if a staff account is granted a given permission multiple times, the depth applied is the one that grants the permission as broadly as possible. Many thanks to Michele Morgan for the bugfix and to Cesar Velez for writing unit tests.
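The post-bugfix rule can be modelled in a few lines. This is an illustrative sketch only, not Evergreen’s actual code or schema; the depth numbering is an assumption (smaller depth = broader org scope, e.g. 1 = system, 2 = branch):

```python
# Hypothetical grants table: each permission a staff account holds may
# have been granted at several org-tree depths (primary profile,
# secondary groups, ...). Smaller depth = broader scope (assumed).
grants = {
    ("staff1", "VIEW_USER"): [2, 1],  # branch via profile, system via secondary group
}

def effective_depth(user, perm):
    """Post-bugfix rule: if a permission is granted multiple times,
    the depth that grants it most broadly (the smallest) wins."""
    depths = grants.get((user, perm))
    return min(depths) if depths else None
```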

Back to the numbers: three of the feedback fest bugs were tested and signed off, but not yet merged. Four of them got a pull request during the fest, while nine received comments of various sorts. In total, 75% of the feedback fest bugs received substantive feedback, which matches the results of the first Feedback Fest.

I would like to thank the following people who participated in Feedback Fest #2:

  • Galen Charlton
  • Jeff Davis
  • Bill Erickson
  • Jason Etheridge
  • Rogan Hamby
  • Blake Henderson
  • Kyle Huckins
  • Kathy Lussier
  • Terran McCanna
  • Michele Morgan
  • Andrea Neiman
  • Dan Pearl
  • Mike Rylander
  • Dan Scott
  • Chris Sharp
  • Ben Shum
  • Cesar Velez

It should be noted that several people also reported or commented on bugs that didn’t have a pull request.

As a reminder, the deadline for feature slush is this Friday, 18 August. Feature slush means that new features meant for inclusion in 3.0 should have a pull request on their bugs by end of business on the 18th (although as release manager and as somebody who hopes to be viewing the solar eclipse on the 21st, there is some wiggle room with that deadline). There will not be similar wiggle room for the feature freeze deadline of 1 September.

There are some large pull requests awaiting review, including the web staff serials module and the copy alert and suppression matrix, so please keep testing, y’all!

Duck trivia

Ducks can help streamline work at the circulation desk. The image in today’s post is courtesy of Jeff Klapes of the Lucius Beebe Memorial Library in Wakefield, MA, who says, “We have a monthly duck display at our circulation desk to divert the attention of the little ones while their parents check things out. It’s been wildly successful.”


Updates on the progress to Evergreen 3.0 will be published every Friday until general release of 3.0.0. If you have material to contribute to the updates, please get them to Galen Charlton by Thursday morning.

District Dispatch: Make a nomination for the 2018 National Medal for Museum and Library Service

Mon, 2017-08-14 15:00

The Institute of Museum and Library Services is now accepting nominations for the 2018 National Medal for Museum and Library Service awards. Anyone — an employee, a board member, a member of the public, or an elected official — can nominate an institution. To be considered, the institution must complete and return a nomination form by October 2, 2017.

In 2017, libraries from Iowa, California, South Carolina, Minnesota and Maine were selected to receive this high honor. In 2018, IMLS is particularly interested in libraries with programs that build community cohesion and serve as catalysts for positive community change, including programs that provide services for veterans and military families, at-risk children and families, the un- and under-employed and youth confronting barriers to STEM-related employment.

The ten winning institutions will be honored at a ceremony in Washington, D.C., and are invited to host a two-day visit from StoryCorps to record community member stories. You can hear some of these moving impact stories, dating back to 2009, here.

Institutions interested in being considered should read the nomination form carefully and contact the designated program contacts with questions. The library-specific program contact for the National Medal for Museum and Library Service is Laura McKenzie, who can be reached at (202) 653-4644 or by email at

As we noted with the announcement of the National Leadership Grants for Libraries and the Laura Bush 21st Century Librarian programs (deadline September 1!), an increase in nominations for the National Medal would send a signal to our Members of Congress that libraries are vital institutions in communities across the country. So, don’t delay — write your nomination today and celebrate the library workers who make our country a better place to live.

The post Make a nomination for the 2018 National Medal for Museum and Library Service appeared first on District Dispatch.

District Dispatch: Voting now open! Bring ALA to SxSW

Fri, 2017-08-11 13:22

For a third year, ALA is planning for Austin’s annual South by Southwest (SXSW) festival. As in years past, we need your help to bring our programs to the SXSW stage. Public voting counts for 30 percent of SXSW’s decision to pick a panel, so please join us in voting for these two ALA programs.

YALSA Past President Linda Braun and OITP Fellow Mega Subramaniam have partnered with IMLS and Google to put on a panel called “Ready to Code: Libraries Supporting CS Education.” Here’s the description:

In the last decade, libraries have transformed from traditional book providers into community anchors where the next generation of technology innovations takes place. Drawing from initiatives such as the Libraries Ready to Code project and IMLS grants, this session provides perspectives from thought leaders in industry, government, universities, and libraries on the role libraries play in our national CS education ecosystem and how they work together with communities to support youth success. You can view the video here.

The Office for Diversity, Literacy and Outreach Services and the Office for Intellectual Freedom are partnering to offer a workshop entitled “Free Speech or Hate Speech?” Here is the quick summary:

The Supreme Court agrees with the rock group The Slants that their name is protected under the First Amendment. An increase in uses of hate speech in the United States has sparked a new fire in the debate: Is hate speech free speech? Is it a hate crime? The lines can be blurry. We will explore the history of intellectual freedom challenges and how to respond to traumatic interactions involving hate speech that are not seen as “crimes.” See the video here.

As you might remember, in 2016, ALA and Benetech collaborated on a session about leveraging 3D printers to create new learning opportunities for students with disabilities. And, in 2015, OITP partnered with D.C. Public Library and MapStory to present an interactive panel about the ways that libraries foster entrepreneurship and creativity.

Become a registered voter in the PanelPicker process by signing up for an account and get your votes in before Friday, August 25. (Also, be sure to keyword search “library” in the PanelPicker – there are over 30 related programs!)

You will have the opportunity to “Vote Up” or “Vote Down” on all session ideas (votes will be kept private) and add comments to each page. We encourage you to use this commenting feature to show support and even engage with the voting community.

The post Voting now open! Bring ALA to SxSW appeared first on District Dispatch.

FOSS4Lib Recent Releases: veraPDF - 1.8.1

Fri, 2017-08-11 12:48

Last updated August 11, 2017. Created by Peter Murray on August 11, 2017.

Package: veraPDF
Release Date: Wednesday, August 9, 2017

Harvard Library Innovation Lab: Git physical

Thu, 2017-08-10 20:40

This is a guest blog post by our summer fellow Miglena Minkova.

Last week at LIL, I had the pleasure of running a pilot of git physical, the first part of a series of workshops aimed at introducing git to artists and designers through creative challenges. In this workshop I focused on covering the basics: three-tree architecture, simple git workflow, and commands (add, commit, push). These lessons were fairly standard but contained a twist: The whole thing was completely analogue!

The participants, a diverse group of fellows and interns, engaged in a simplified version control exercise. Each participant was tasked with designing a postcard about their summer at LIL. Following basic git workflow, they took their designs from the working directory, through the staging index, to the version database, and to the remote repository where they displayed them. In the process they “pushed” five versions of their postcard design, each accompanied by a commit note. Working in this way allowed them to experience the workflow in a familiar setting and learn the basics in an interactive and social environment. By the end of the workshop everyone had ideas on how to implement git in their work and was eager to learn more.
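For reference, the digital workflow the analogue exercise mirrors looks like this (file names are illustrative, and the push step is shown as a comment since it needs a real remote):

```shell
# The three trees the exercise walks through, in their digital form.
mkdir postcards && cd postcards && git init -q
echo "my summer at LIL" > postcard.txt               # working directory
git add postcard.txt                                 # staging index
git -c user.name=fellow -c user.email=fellow@example.com \
    commit -q -m "First version of my postcard"      # version database
git log --oneline                                    # one commit recorded
# Displaying it on the "remote" wall would then be:
#   git remote add origin <remote-url> && git push -u origin master
```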

Timelapse gif by Doyung Lee.

Not to mention some top-notch artwork was created.

The workshop was followed by a short debriefing session and Q&A.

Check GitHub for more info.

Alongside this overview, I want to share some of the thinking that went behind the scenes.

Starting with some background. Artists and designers perform version control in their work, but in a much different way than developers do with git. They often use error-prone strategies to track document changes, such as saving files in multiple places using obscure file naming conventions, working in large master files, or relying on in-built software features. At best these strategies result in inconsistencies, duplication, and wasted disc storage; at worst, irreversible mistakes, loss of work, and multiple conflicting documents. Despite experiencing some of the same problems as developers, artists and designers are largely unfamiliar with git (exceptions exist).

The impetus for teaching artists and designers git was my personal experience with it. I had not been formally introduced to the concept of version control or git through my studies, nor my work. I discovered git during the final year of my MLIS degree when I worked with an artist to create a modular open source digital edition of an artist’s book. This project helped me see git as a ubiquitous tool with versatile application across multiple contexts and practices, the common denominator of which is making, editing, and sharing digital documents.

I realized that I was faced with a challenge: How do I get artists and designers excited about learning git?

I used my experience as a design educated digital librarian to create relatable content and tailor delivery to the specific characteristics of the audience: highly visual, creative, and non-technical.

Why create another git workshop? There are, after all, plenty of good-quality learning resources out there, and I have no intention of reinventing the wheel or competing with them. However, I have noticed some gaps that I wanted to address through my workshop.

First of all, I wanted to focus on accessibility and have everyone start on equal ground with no prior knowledge or technical skills required. Even the simplest beginner level tutorials and training materials rely heavily on technology and the CLI (Command Line Interface) as a way of introducing new concepts. Notoriously intimidating for non-technical folk, the CLI seems inevitable given the fact that git is a command line tool. The inherent expectation of using technology to teach git means that people need to learn the architecture, terminology, workflow, commands, and the CLI all at the same time. This seems ambitious and a tad unrealistic for an audience of artists and designers.

I decided to put the technology on hold and combine several pedagogies to leverage learning: active learning, learning through doing, and project-based learning. To contextualize the topic, I embedded elements of the practice of artists and designers by including an open-ended creative challenge to serve as a trigger and an end goal. I toyed with different creative challenges using deconstruction, generative design, and surrealist techniques. However, this seemed to steer away from the main goal of the workshop. It also made it challenging to narrow down the scope, especially as I realized that no single workflow can embrace the diversity of creative practices. In the end, I chose to focus on versioning a combination of image and text in a single document. This helped to define the learning objectives and cover only one functionality: the basic git workflow.

I considered it important to introduce concepts gradually in a familiar setting using analogue means to visualize black-box concepts and processes. I wanted to employ abstraction to present the git workflow in a tangible, easily digestible, and memorable way. To achieve this the physical environment and set up was crucial for the delivery of the learning objectives.

In terms of designing the workspace, I assigned and labelled different areas of the space to represent the components of git’s architecture. I made use of directional arrows to illustrate the workflow sequence alongside the commands that needed to be executed, and used a “remote” as a way of displaying each version on a timeline. Low-tech or no-tech solutions such as carbon paper were used to make multiple copies. It took several experiments to get the sketchpad layering right, especially as I did not want to introduce manual redundancies that do little justice to git.

Thinking over the audience interaction, I had considered role play and collaboration. However, these modes did not enable each participant to go through the whole workflow and fell short of addressing the learning objectives. Instead I provided each participant with initial instructions to guide them through the basic git workflow and repeat it over and over again using their own design work. The workshop was followed by a debriefing which articulated the specific benefits for artists and designers, outlined use cases depending on the type of work they produce, and featured some existing examples of artwork done using git. This was to emphasize that the workshop did not offer a one-size-fits-all solution, but rather a tool that artists and designers can experiment with and adopt in many different ways in their work.

I want to thank Becky and Casey for their editing work.

Going forward, I am planning to develop a series of workshops introducing other git functionality such as basic merging and branching, diff-ing, and more, and tag a lab exercise to each of them. By providing multiple ways of processing the same information I am hoping that participants will successfully connect the workshop experience and git practice.

Terry Reese: MarcEdit 7 Z39.50/SRU Client Wireframes

Thu, 2017-08-10 17:37

One of the appalling discoveries when taking a closer look at the MarcEdit 6 codebase was the presence of 3(!) Z39.50 clients, all using slightly different codebases. This happened because of the ILS integration, the direct Z39.50 database editing, and the actual Z39.50 client. In the Mac version, these clients are all the same thing – so I wanted to emulate that approach in the Windows/Linux version. And as a plus, maybe I would stop (or reduce) my utter disdain at having to support Z39.50 generally, within any library program that I work with.

* Sidebar – I really, really, really can’t stand working with Z39.50.  SRU is a fine replacement for the protocol, and yet, over the 10-15 years that it’s been available, SRU remains a fringe protocol.  That tells me two things:

  1. Library vendors generally have rejected this as a protocol, and there are some good reasons for this…most vendors that support it (and I’m thinking specifically about Ex Libris) use a custom profile.  This is a pain in the ass because the custom profile requires code to handle foreign namespaces.  This wouldn’t be a problem if it only happened occasionally, but it happens all the time.  Every SRU implementation works best if you use their custom profiles.  I think what made Z39.50 work is the well-defined set of Bib-1 attributes.  The flexibility in SRU is a good thing, but I also think it’s why very few people support it, and fewer understand how it actually works.
  2. That SRU is a poor solution to begin with.  Hey, just like OAI-PMH, we created library standards to work on the web.  If we had it to do over again, we’d do it differently.  We should probably do it differently at this point…because supporting SRU in software is basically just checking a box.  People have heard about it, they ask for it, but pretty much no one uses it.
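For context, the flexibility complained about above is visible in the requests themselves: Z39.50 searches against well-defined numeric Bib-1 attributes, while SRU accepts CQL indexes that vary with each server's profile. A sketch of building an SRU 1.1 searchRetrieve URL (the host and the choice of index are placeholders):

```python
# Sketch of an SRU 1.1 searchRetrieve request. The host and the "dc.title"
# index are placeholders - real servers often require their own custom
# context sets, which is exactly the pain described above.
from urllib.parse import urlencode

params = {
    "version": "1.1",
    "operation": "searchRetrieve",
    "query": 'dc.title="open data"',
    "maximumRecords": "5",
}
url = "http://sru.example.org/catalog?" + urlencode(params)
print(url)
```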

By consolidating the Z39.50 client code, I’m able to clean out a lot of old code and, better yet, actually focus on a few improvements (which has been hard because I make improvements in the main client, but forget to port them everywhere else).  The main improvements that I’ll be applying have to do with searching multiple databases.  Single search has always allowed users to select up to 5 databases to query.  I may remove that limit; it’s kind of an arbitrary one.  However, I’ll also be adding this functionality to the batch search.  When doing multiple database searches in batch, users will have an option to take all records, the first record found, or potentially (I haven’t worked this one out), records based on order of database preference.
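Those three batch-result options can be sketched like so. This is a hypothetical illustration, not MarcEdit's actual implementation, and the database names are invented:

```python
# Hypothetical sketch of the three batch-search strategies discussed above;
# not MarcEdit's actual code, and the database names are invented.

def select_records(results, strategy, preference=None):
    """results maps database name -> list of found records (in search order)."""
    if strategy == "all":            # keep every record from every database
        return [rec for recs in results.values() for rec in recs]
    if strategy == "first":          # keep only the first record found anywhere
        return next(([recs[0]] for recs in results.values() if recs), [])
    if strategy == "preference":     # first record from the most-preferred database
        return next(([results[db][0]] for db in preference or [] if results.get(db)), [])
    raise ValueError(f"unknown strategy: {strategy}")

hits = {"LC": [], "OCLC": ["record-a"], "Local": ["record-b"]}
print(select_records(hits, "first"))                          # -> ['record-a']
print(select_records(hits, "preference", ["Local", "OCLC"]))  # -> ['record-b']
```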


Main Window:

Z39.50 Database Settings:

SRU Settings:

There will be a preferences panel as well (haven’t created it yet), but this is where you will set proxy information and notes related to batch preferences.  You will no longer need to set title field or limits, as the limits are moving to the search screen (this has always needed to be variable) and the title field data is being pulled from preferences already set in the program preferences.

One of the benefits of making these changes is that this folds the Z39.50/SRU client into the main MarcEdit application (rather than a program that was shelled out to), which allows me to leverage the same accessibility platform that has been developed for the rest of the application.  It also highlights one of the other changes happening in MarcEdit 7.  MarcEdit 6 is a collection of about 7 or 8 individual executables.  This makes sense in some cases, less sense in others.  I’m evaluating all the stand-alone programs, and if I replicate the functionality in the main program, then it means that while initially having these as separate programs might have been a good thing, the current structure of the application has changed, and so the code (both external and internal) needs to be re-evaluated and put in one spot.  In the application, this has meant that in some cases, like the Z39.50 client, the code will move into MarcEdit proper (rather than being a separate program called mebatch.exe), and for SQL interactions, it will mean that I’ll create a single shared library (rather than replicating code between three different component parts: the SQL explorer, the ILS integration, and the local database query tooling).

Questions, let me know.


FOSS4Lib Recent Releases: veraPDF - 1.8.1

Thu, 2017-08-10 12:04

Last updated August 10, 2017. Created by Peter Murray on August 10, 2017.

Package: veraPDF
Release Date: Wednesday, August 9, 2017

Open Knowledge Foundation: An approach to building open databases

Thu, 2017-08-10 07:30

This post has been co-authored by Adam Kariv, Vitor Baptista, and Paul Walsh.

Open Knowledge International (OKI) recently coordinated a two-day work sprint as a way to touch base with partners in the Open Data for Tax Justice project. Our initial writeup of the sprint can be found here.

Phase I of the project ended in February 2017 with the publication of What Do They Pay?, a white paper that outlines the need for a public database on the tax contributions and economic activities of multinational companies.

The overarching goal of the sprint was to start some work towards such a database, by replicating data collection processes we’ve used in other projects, and to provide a space for domain expert partners to potentially use this data for some exploratory investigative work. We had limited time, a limited budget, and we are pleased with the discussions and ideas that came out of the sprint.

One attendee, Tim Davies, criticised the approach we took in the technical stream of the sprint. The problem with the criticism is the extrapolation of one stream of activity during a two-day event to posit an entire approach to a project. We think exploration and prototyping should be part of any healthy project, and that is exactly what we did with our technical work in the two-day sprint.

Reflecting on the discussion presents a good opportunity here to look more generally at how we, as an organisation, bring technical capacity to projects such as Open Data for Tax Justice. Of course, we often bring much more than technical capacity to a project, and Open Data for Tax Justice is no different in that regard, being mostly a research project to date.

In particular, we’ll take a look at the technical approach we used for the two-day sprint. While this is not the only approach towards technical projects we employ at OKI, it has proven useful on projects driven by the creation of new databases.

An approach

Almost all projects that OKI either leads on, or participates in, have multiple partners. OKI generally participates in one of three capacities (sometimes, all three):

  • Technical design and implementation of open data platforms and apps.
  • Research and thought leadership on openness and data.
  • Dissemination and facilitating participation, often by bringing the “open data community” to interact with domain specific actors.

Only the first capacity is strictly technical, but each capacity does, more often than not, touch on technical issues around open data.

Some projects have an important component around the creation of new databases targeting a particular domain. Open Data for Tax Justice is one such project, as are OpenTrials, and the Subsidy Stories project, which itself is a part of OpenSpending.

While most projects have partners, usually domain experts, it does not mean that collaboration is consistent or equally distributed over the project life cycle. There are many reasons for this to be the case, such as the strengths and weaknesses of our team and those of our partners, priorities identified in the field, and, of course, project scope and funding.

With this as the backdrop for projects we engage in generally, we’ll focus for the rest of this post on aspects when we bring technical capacity to a project. As a team (the Product Team at OKI), we are currently iterating on an approach in such projects, based on the following concepts:

  • Replication and reuse
  • Data provenance and reproducibility
  • Centralise data, decentralise views
  • Data wrangling before data standards

While not applicable to all projects, we’ve found this approach useful when contributing to projects that involve building a database to, ultimately, unlock the potential to use data towards social change.

Replication and reuse

We highly value the replication of processes and the reuse of tooling across projects. Replication and reuse enables us to reduce technical costs, focus more on the domain at hand, and share knowledge on common patterns across open data projects. In terms of technical capacity, the Product Team is becoming quite effective at this, with a strong body of processes and tooling ready for use.

This also means that each project enables us to iterate on such processes and tooling, integrating new learnings. Many of these learnings come from interactions with partners and users, and others come from working with data.

In the recent Open Data for Tax Justice sprint, we invited various partners to share experiences working in this field and try a prototype we built to extract data from country-by-country reports to a central database. It was developed in about a week, thanks to the reuse of processes and tools from other projects and contexts.

When our partners started looking into this database, they had questions that could only be answered by looking back to the original reports. They needed to check the footnotes and other context around the data, which weren’t available in the database yet. We’ve encountered similar use cases in both OpenSpending and OpenTrials, so we can build upon these experiences to iterate towards a reusable solution for the Open Data for Tax Justice project.

By doing this enough times in different contexts, we’re able to solve common issues quickly, freeing more time to focus on the unique challenges each project brings.

Data provenance and reproducibility

We think that data provenance, and reproducibility of views on data, is absolutely essential to building databases with a long and useful future.

What exactly is data provenance? A useful definition from wikipedia is “… (d)ata provenance documents the inputs, entities, systems, and processes that influence data of interest, in effect providing a historical record of the data and its origins”. Depending on the way provenance is implemented in a project, it can also be a powerful tool for reproducibility of the data.

Most work around open data at present does not treat data provenance and reproducibility as an essential aspect of working with open data. We think this is to the detriment of the ecosystem’s broader goals of seeing open data drive social change: in our “post-truth” era, the credible use of data is significantly diminished when it comes from projects that build no provenance or reproducibility into the creation of their databases.

Our current approach builds data provenance and reproducibility right into the heart of building a database. There is a clear, documented record of every action performed on data, from the extraction of source data, through to normalisation processes, and right to the creation of records in a database. The connection between source data and processed data is not lost, and, importantly, the entire data pipeline can be reproduced by others.
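To make this concrete, here is a minimal sketch of logging every action performed on data so the chain from source to processed record can be audited and re-run. The step names and toy rows are invented, and this is not OKI's actual pipeline tooling:

```python
# Minimal provenance sketch: every step is named and its input/output are
# fingerprinted, so the chain from source data to processed data is auditable.
# Step names and sample rows are invented for illustration.
import hashlib
import json

def fingerprint(data):
    """Stable hash of the data at one point in the pipeline."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()[:12]

def run_pipeline(source_rows, steps):
    data, log = source_rows, []
    for name, fn in steps:
        entry = {"step": name, "input": fingerprint(data)}
        data = fn(data)
        entry["output"] = fingerprint(data)
        log.append(entry)
    return data, log   # re-running the same steps reproduces the same fingerprints

rows = [{"company": "Acme Corp ", "tax_paid": "1,000"}]
steps = [
    ("strip_whitespace", lambda d: [{k: v.strip() for k, v in r.items()} for r in d]),
    ("parse_numbers", lambda d: [dict(r, tax_paid=int(r["tax_paid"].replace(",", ""))) for r in d]),
]
clean, provenance = run_pipeline(rows, steps)
print(clean)  # -> [{'company': 'Acme Corp', 'tax_paid': 1000}]
```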

We acknowledge that a clear constraint of this approach, in its current form, is that it is necessarily more technical than, say, ad hoc extraction and manipulation with spreadsheets and other consumer tools used in manual data extraction processes. However, as such approaches make data provenance and reproducibility harder, because there is no history of the changes made, or where the data comes from, we are willing to accept this more technical approach and iterate on ways to reduce technical barriers.

We hope to see more actors in the open data ecosystem integrating provenance and reproducibility right into their data work. Without doing so, we greatly reduce the ability for open data to be used in an investigative capacity, and likewise, we diminish the possibility of using the outputs of open data projects in the wider establishment of facts about the world. Recent work on beneficial ownership data takes a step in this direction, leveraging the PROV-DM standard to declare data provenance facts.

Centralise data, decentralise views

In OpenSpending, OpenTrials, and our initial exploratory work on Open Data for Tax Justice, there is an overarching theme to how we have approached data work, user stories and use cases, and co-design with domain experts: “centralise data, decentralise views”.

Building a central database for open data in a given domain affords ways of interacting with such data that are extremely difficult, or impossible, by actively choosing to decentralise such data. Centralised databases make investigative work that uses the data easier, and allows for the discovery, for example, of patterns across entities and time that can be very hard to discover if data is decentralised.

Additionally, by having in place a strong approach to data provenance and reproducibility, the complete replication of a centralised database is relatively easily done, and very much encouraged. This somewhat mitigates a major concern with centralised databases, being that they imply some type of “vendor lock-in”.

Views on data are better when decentralised. By “views on data” we refer to visualisations, apps, websites – any user-facing presentation of data. While having data centralised potentially enables richer views, data almost always needs to be presented with additional context, localised, framed in a particular narrative, or otherwise presented in unique ways that will never be best served from a central point.

Further, decentralised usage of data provides a feedback mechanism for iteration on the central database. For example, providing commonly used contextual data, establishing clear use cases for enrichment and reconciliation of measures and dimensions in the data, and so on.

Data wrangling before data standards

As a team, we are interested in, engage with, and also author, open data standards. However, we are very wary of efforts to establish a data standard before working with large amounts of data that such a standard is supposed to represent.

Data standards that are developed too early are bound to make untested assumptions about the world they seek to formalise (the data itself). There is a dilemma here of describing the world “as it is”, or, “as we would like it to be”. No doubt, a “standards first” approach is valid in some situations. Often, it seems, in the realm of policy. We do not consider such an approach flawed, but rather, one with its own pros and cons.

We prefer to work with data, right from extraction and processing, through to user interaction, before working towards public standards, specifications, or any other type of formalisation of the data for a given domain.

Our process generally follows this pattern:

  • Get to know available data and establish (with domain experts) initial use cases.
  • Attempt to map what we do not know (e.g.: data that is not yet publicly accessible), as this clearly impacts both usage of the data, and formalisation of a standard.
  • Start data work by prescribing the absolute minimum data specification to use the data (i.e.: meet some or all of the identified use cases).
  • Implement data infrastructure that makes it simple to ingest large amounts of data, and also to keep the data specification reactive to change.
  • Integrate data from a wide variety of sources, and, with partners and users, work on ways to improve participation / contribution of data.
  • Repeat the above steps towards a fairly stable specification for the data.
  • Consider extracting this specification into a data standard.
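The “absolute minimum data specification” in the third step can be as small as a handful of typed fields. As a purely illustrative sketch, loosely in the style of a Frictionless Data Table Schema descriptor, with field names invented here:

```python
# A hypothetical minimal specification for country-by-country report rows,
# loosely in the style of a Table Schema descriptor; field names are invented.
minimal_spec = {
    "fields": [
        {"name": "company", "type": "string", "constraints": {"required": True}},
        {"name": "country", "type": "string"},
        {"name": "year", "type": "year"},
        {"name": "tax_paid", "type": "number"},
    ],
    "primaryKey": ["company", "country", "year"],
}
# Kept deliberately small and reactive to change: new sources can add fields
# without breaking ingestion, and the spec is only extracted into a public
# standard once it has stabilised across many real datasets.
```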

Throughout this entire process, there is a constant feedback loop with domain expert partners, as well as a range of users interested in the data.


We want to be very clear that we do not think that the above approach is the only way to work towards a database in a data-driven project.

Design (project design, technical design, interactive design, and so on) emerges from context. Design is also a sequence of choices, and each choice has an opportunity cost based on various constraints that are present in any activity.

In projects we engage in around open databases, technology is a means to other, social ends. Collaboration around data is generally facilitated by technology, but we do not think the technological basis for this collaboration should be limited to existing consumer-facing tools, especially if such tools have hidden costs on the path to other important goals, like data provenance and reproducibility. Better tools and processes for collaboration will only emerge over time if we allow exploration and experimentation.

We think it is important to understand general approaches to working with open data, and how they may manifest within a single project, or across a range of projects. Project work is not static, and definitely not reducible to snapshots of activity within a wider project life cycle.

Certain approaches emphasise different ends. We’ve tried above to highlight some pros and cons of our approach, especially around data provenance and reproducibility, and data standards.

In closing, we’d like to invite others interested in approaches to building open databases to engage in a broader discussion around these themes, as well as a discussion around short term and long term goals of such projects. From our perspective, we think there could be a great deal of value for the ecosystem around open data generally – CSOs, NGOs, governments, domain experts, funders – via a proactive discussion or series of posts with a multitude of voices. Join the discussion here if this is of interest to you.

Library Tech Talk (U of Michigan): Problems with Authority

Thu, 2017-08-10 00:00

MARC Authority records can be used to create a map of the Federal Government that will help with collection development and analysis. Unfortunately, MARC is not designed for this purpose, so we have to find ways to work around the MARC format's limitations.

District Dispatch: The Copyright Office belongs in the Library of Congress

Wed, 2017-08-09 23:10

In “Lessons From History: The Copyright Office Belongs in the Library of Congress,” a new report from the American Library Association (ALA), Google Policy Fellow Alisa Holahan compellingly documents that Congress repeatedly has considered the best locus for the U.S. Copyright Office (CO) and consistently reaffirmed that the Library of Congress (Library) is its most effective and efficient home.

The U.S. Copyright Office is located in the James Madison Memorial Building of the Library of Congress in Washington, D.C. Photo credit: The Architect of the Capitol

Prompted by persistent legislative and other proposals to remove the CO from the Library in both the current and most recent Congresses, Holahan’s analysis comprehensively reviews the history of the locus of copyright activities from 1870 to the present day. In addition to providing a longer historical perspective, the Report finds that Congress has examined this issue at roughly 20-year intervals, declining to separate the CO and Library each time.

Notable developments occurred, for example, in the deliberations leading to the Copyright Act of 1976. In particular, an argument was made that the CO performs executive branch functions, and thus its placement in the legislative branch is unconstitutional. The 1976 Act nevertheless left the U.S. Copyright Office in the Library. Moreover, in 1978, the U.S. Court of Appeals for the Fourth Circuit in Eltra Corp. v. Ringer directly addressed this constitutional question. It found no constitutional problem with the co-location of the CO and the Library because the Copyright Office operates under the direction of the Librarian of Congress, an appointee of the president.

Holahan also notes a later challenge, the Omnibus Patent Act of 1996, which proposed that copyright, patent and trademark activities be consolidated under a single government corporation. The bill was opposed by then-Register of Copyrights Marybeth Peters and then-Librarian of Congress James Billington, as well as an array of stakeholders including the American Society of Composers, Authors and Publishers (ASCAP); the American Society of Journalists and Authors; and the library, book publishing and scholarly communities. The legislation was not enacted, leaving the placement of the Copyright Office unchanged.

The neutral question that launched this research was whether the historical record contains anything of relevance to the placement of the Copyright Office. ALA recommends Holahan’s research (see her full report for additional historical milestones and further details) to anyone contemplating whether the Register of Copyrights should be appointed by the President or whether the Copyright Office should be relocated from the Library.

In a nutshell, these questions have been asked and answered the same way many times already: “it ain’t broke, so don’t fix it.” Holahan’s research and report will inform ALA’s continuing lobbying and policy advocacy on these questions as we work to protect and enhance copyright’s role in promoting the creation and dissemination of knowledge for all.

The post The Copyright Office belongs in the Library of Congress appeared first on District Dispatch.