Planet Code4Lib - http://planet.code4lib.org

LITA: Jobs in Information Technology: July 19, 2017

Wed, 2017-07-19 20:30

New vacancy listings are posted weekly on Wednesdays at approximately noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Library of the University of California, Davis, Associate University Librarian for Research and Learning, Davis, CA

Library of the University of California, Davis, Associate University Librarian for Scholarly Resources, Davis, CA

Sullivan University, Electronic Resources Librarian, Louisville, KY

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.


HangingTogether: Trust, sustainability, collaboration… and research libraries under the Greek sun

Wed, 2017-07-19 20:29

We recently returned from attending the LIBER annual conference, held in Patras, Greece. LIBER is the Ligue des Bibliothèques Européennes de Recherche – Association of European Research Libraries and serves as an important professional organization for national and university libraries throughout Europe. We attended to talk with European research librarians and also to present preliminary findings on our collaborative research with LIBER on the adoption and integration of persistent person and organizational identifiers in European research information management infrastructures.

This year, trust and sustainability were the dominant themes emerging over the three days. Whether distributed solutions (such as the new COAR initiative to create a globally networked infrastructure for scholarly communication) or shared efforts (such as the European Print Initiatives Collaboration, EPICo), these undertakings require trust to work and sustainable operational models to survive.

Another theme that stood out for us was collaboration. Jacquelijn Ringersma, Head of the Digital Production Centre at Wageningen University & Research (WUR), spoke about Research Data Management (RDM) policies at WUR as well as about the Dutch National Coordination Point Research Data Management, an initiative led by SURF to coordinate knowledge sharing and stimulate cooperation among stakeholders nationwide, which has helped develop a visible, knowledgeable, and efficient RDM community in the Netherlands. She offered one of our favorite quotes of the conference: “You cannot build RDM services as only a library,” adding that libraries succeed through local partnerships with researchers, IT, legal services, and the graduate school, and that national collaboration is also needed.

Join us next week when we hear more from Jacquelijn on Policy Realities in Research Data Management in an OCLC webinar.

Our own presentation on “The Adoption and Integration of Persistent Identifiers in European Research Information Management” also addressed themes of trust, sustainability, and collaboration. We define research information management (RIM) as the aggregation, curation, and utilization of metadata about research activities; in Europe, RIM systems are also widely known as Current Research Information Systems (CRIS). In collaboration with LIBER, OCLC Research is examining the nexus of RIM infrastructures with person and organizational identifiers in three national contexts (Finland, Germany, and the Netherlands) in order to gain useful insights into emerging practices and challenges at different levels of scale.

We have conducted interviews with more than fifteen research universities and ICT providers across these national landscapes. Our presentation offered preliminary findings from our research, documenting how Finnish and Dutch infrastructures are highly organized and aggregate research outputs at the national scale, largely in response to national and funder mandates for improved impact assessment and open access. These external incentives are largely absent in Germany, where we observe far less state- or national-scale coordination.

One thing we found interesting at the LIBER conference was that persistent identifiers (PIDs) and research information management infrastructures did not feature prominently in the printed program, including the abstracts, and we were curious to learn why. For persistent identifiers, it turned out to be pretty simple: persistent person and organizational identifiers are critical infrastructure, necessary for scaling, networked solutions, interoperability, and interlinking, so they were rarely mentioned but often implied, as was confirmed whenever we asked. System integration of PIDs with RIM systems also adds convenience for busy researchers, who enter metadata only once, which can make them more likely to comply.
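To make that “enter once, reuse everywhere” point concrete, here is a minimal sketch (ours, not any RIM vendor’s code) of pulling a researcher’s public record from the ORCID API so a local system could prefill profile metadata. The JSON field names follow ORCID’s public v3.0 API, and the iD used is the sample from ORCID’s own documentation.

```python
# Minimal sketch: fetch a public ORCID record so locally entered metadata
# can be reused rather than re-keyed. The iD below is ORCID's documentation
# example ("Josiah Carberry"), not a real researcher's claim.
import requests

ORCID_ID = "0000-0002-1825-0097"  # sample iD from ORCID's documentation
url = f"https://pub.orcid.org/v3.0/{ORCID_ID}/record"

resp = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
resp.raise_for_status()
record = resp.json()

# Pull the display name out of the public record.
name = record["person"]["name"]
print(name["given-names"]["value"], name["family-name"]["value"])
```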

The absence of RIM-related topics from a library-related conference, however common in our experience, is less easily explained. Is it that RIM, unlike RDM, is regarded as a purely administrative topic? Do research libraries feel excluded from this sphere in their institutions, or do they selectively focus on those parts of RIM that are close to their skills and mission, such as publication management and researcher support? Share your thoughts below if you have any; we would love to hear from you.

In Patras, we ended up having terrific conversations on the joys and sorrows in PID land, and learned a lot. Watch for more news about our research through this blog and by following @OCLC on Twitter.

Rebecca Bryant & Annette Dortmund

Thanks to our research partner Constance Malpas for her input in writing this post.

Cynthia Ng: Article: A Practical Starter Guide on Developing Accessible Websites

Wed, 2017-07-19 17:21
After years of prepping and months of writing and editing, I finally published my first article! The article is focused on accessibility and assumes that you are a web developer or can understand web development to at least an intermediate level. The idea was to fill a bit of a gap since so many accessibility …

Open Knowledge Foundation: Brazil’s Information Access Law and the problem of ‘un-anonymous’ request for public information

Wed, 2017-07-19 09:35

It is critical to build mechanisms that allow and promote the exercise of the right to information access in a way that is safe for users of the Information Access Law. In this blog post, Ariel Kogan, managing director of Open Knowledge Brasil, and Fabiano Angélico, transparency and integrity adviser and author of the book “Lei de Acesso à Informação: Reforço ao Controle Democrático” (Information Access Act: Reinforcement for Democratic Control), discuss the importance of anonymous information requests for preserving the identity, privacy, and safety of citizens.

According to the Brazilian Information Access Law, which completed five years in effect this May, the requesting party – whether an individual or an entity – must provide the government authority with a name and a document number. This obligation has proven problematic, especially for journalists and activists who seek information that might uncover cases of corruption or misappropriation of public resources.

Brazil submitted its third action plan to the Open Government Partnership in December 2016. One of the country’s commitments is to “create new mechanisms or improve existing mechanisms to evaluate and monitor the passive transparency of Law 12.527 of 2011 in the Federal Government”. Another commitment is to “safeguard the requesting party’s identity under excusable cases through adjustments in request procedures and channels”.

Image: Digital Rights LAC (CC-BY-SA 2.0)

Brazil has, however, failed to adhere to some of its Open Government Partnership commitments. The following paragraphs document the treatment meted out to individuals who have dared to use the Information Access Act to request somewhat sensitive data.

Several cases of subtle or aggressive threats, employee dismissals, and other kinds of reprisal have been reported. Renato, a member of a non-governmental organisation (NGO), used a state government’s system to request information about its military police. A military police officer responded to his request in a threatening tone, even mentioning the names of the fundraisers of Renato’s NGO. Joana, a federal government employee, asked a ministry for information about a quite controversial contract. Shortly afterwards, without notice and while she was on vacation, she was dismissed from her leadership position.

João, an employee of a state-owned company, suspected that the company’s top executives were misusing public funds, so he asked his brother to file the information request. He was then discharged for cause, for disobedience. Feeling threatened, Maria was afraid to request information about the budget execution of the town where she lived. Searching the Internet, she found another person, living in a very distant town, in a similar situation. The two decided to exchange favours, each requesting information on behalf of the other; it was safer for both of them. Manoel, a journalist, requested information from a city hall via the Information Access Act without mentioning that he was a journalist. A few days later, the municipal secretary of communications called him and, in a less than cordial tone, said that Manoel didn’t need to use the Information Access Law to collect data.

All names mentioned above are fictitious. The reported cases, however, are unfortunately real. In addition to dismissals and threats, identification of the requesting party leads the government to respond to information requests according to the requester’s “status”.

Research in several countries, including Brazil, shows that the response to the same information request is more complete when the requesting party is identified as, for example, a researcher from a renowned university than when the individual is identified just by his or her name.

These cases demonstrate that identifying the requesting party can have consequences that are neither democratic nor republican. In all of these cases, illegal and disproportionate force was used to silence requests for information.

It is, therefore, critical to develop mechanisms that allow and promote the exercise of the right to access information safely and, if necessary, anonymously. This would be enriching for all and would enable social control in many critical situations.

The Information Access Act can be an excellent tool for identifying and monitoring suspected misuse of public resources, contract fraud, and other improprieties in public agencies. For the law to be effective, however, it is essential that the requesting party be safeguarded. We believe this will be the next great challenge in the implementation of the Information Access Act.


Access Conference: Looking for Access 2018 hosts!

Tue, 2017-07-18 22:08

The Access 2017 planning committee is now accepting proposals from institutions and groups interested in hosting Access 2018. Bring Canada’s leading (and most fun) library tech conference to your campus or city in 2018!

Interested? Submit your proposal to accesslibcon@gmail.com, including:

  • The host organization(s) name
  • Proposed dates
  • The location where the event will likely be held (campus facility, hotel name, etc.)
  • Considerations noted in the hosting guidelines
  • Anything else to convince us that you would put on a successful Access conference

Proposals will be accepted until September 1st, 2017. The 2018 hosts will be selected by the 2017 Planning Committee and notified in early September. The official announcement will be made on September 28th at Access 2017 in Saskatoon.

Questions? Let us know at accesslibcon@gmail.com!

Code4Lib Journal: Editorial: Welcome New Editors, What We Know About Who We Are, and Submission Pro Tip!

Tue, 2017-07-18 21:03
Want to see your work in C4LJ? Here's a pro tip!

Code4Lib Journal: A Practical Starter Guide on Developing Accessible Websites

Tue, 2017-07-18 21:03
There is growing concern about the accessibility of the online content and services provided by libraries and public institutions. While many articles cover legislation, general benefits, and common surface-level opportunities to improve web accessibility (e.g., alt tags), few articles discuss web accessibility in more depth, and those that do are typically not specific to library web services. This article is meant to fill that gap and provides practical best practices and code.

Code4Lib Journal: Recount: Revisiting the 42nd Canadian Federal Election to Evaluate the Efficacy of Retroactive Tweet Collection

Tue, 2017-07-18 21:03
In this paper, we report the development and testing of a methodology for collecting tweets from periods beyond the Twitter API’s seven-to-nine-day limitation. To accomplish this, we used Twitter’s advanced search feature to find tweets older than that limit, and then used JavaScript to automatically scan the resulting webpage for tweet IDs. These IDs were then rehydrated (tweet metadata retrieved) using twarc. To examine the efficacy of this method for retrospective collection, we revisited the case study of the 42nd Canadian Federal Election. Comparing the two datasets, we found that our methodology does not produce results as robust as real-time streaming, but that it might be useful as a starting point for researchers or collectors. We close by discussing the implications of these findings.
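For readers who want to try the rehydration step, here is a minimal sketch in Python using the twarc library the paper names. It assumes the scraped tweet IDs have already been saved to ids.txt (one per line); the API credentials are placeholders you must supply yourself.

```python
# Minimal sketch of the rehydration step: turn scraped tweet IDs back into
# full tweet objects via the Twitter API. Credentials are placeholders.
from twarc import Twarc

t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

with open("ids.txt") as ids:
    for tweet in t.hydrate(ids):
        # Each hydrated tweet is a full JSON object from Twitter's v1.1 API;
        # twarc requests extended mode, so full_text is usually present.
        print(tweet["id_str"], tweet.get("full_text", tweet.get("text", "")))
```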

Code4Lib Journal: Extending Omeka for a Large-Scale Digital Project

Tue, 2017-07-18 21:03
In September 2016, the department of Special Collections and Archives, Kent State University Libraries, received a Digital Dissemination grant from the National Historical Publications and Records Commission (NHPRC) to digitize roughly 72,500 pages from the May 4 collection, which documents the May 1970 shootings of thirteen students by Ohio National Guardsmen at Kent State University. This article highlights the project team’s efforts to adapt the Omeka instance, modifying the interface and ingestion processes to support presenting unique archival collections online, including an automated method to create folder-level links on the relevant finding aids upon ingestion; implementing the open source Tesseract engine to provide OCR for uploaded files; automating PDF creation from the raw image files using Ghostscript; and integrating Mirador to present a folder-level display reflecting the archival organization of the physical collections. These adaptations, which have been shared via GitHub, will be of interest to other institutions looking to present archival material in Omeka.
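As a rough illustration of the OCR-and-PDF portion of such a pipeline (a sketch under assumed file names and layout, not the project’s published code), Tesseract can emit searchable per-page PDFs that Ghostscript then concatenates into a folder-level document:

```python
# Hedged sketch of one plausible ingest step: OCR each page image to a
# searchable per-page PDF with Tesseract, then merge the pages into one
# folder-level PDF with Ghostscript. File names here are assumptions.
import subprocess
from pathlib import Path

pages = sorted(Path("folder_01").glob("*.tif"))
pdfs = []
for page in pages:
    out = page.with_suffix("")  # tesseract appends ".pdf" to this base name
    subprocess.run(["tesseract", str(page), str(out), "pdf"], check=True)
    pdfs.append(str(out) + ".pdf")

# Ghostscript concatenates the per-page PDFs into a single document.
subprocess.run(
    ["gs", "-dBATCH", "-dNOPAUSE", "-sDEVICE=pdfwrite",
     "-sOutputFile=folder_01.pdf", *pdfs],
    check=True,
)
```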

Code4Lib Journal: Annotation-based enrichment of Digital Objects using open-source frameworks

Tue, 2017-07-18 21:03
The W3C Web Annotation Data Model, Protocol, and Vocabulary unify approaches to annotations across the web, enabling their aggregation, discovery, and persistence over time. In addition, new JavaScript libraries provide the ability for users to annotate multi-format content. In this paper, we describe how we have leveraged these developments to provide annotation features alongside Islandora’s existing preservation, access, and management capabilities. We also discuss our experience developing with the Web Annotation Model as an open web architecture standard, as well as our approach to integrating mature external annotation libraries. The resulting software (the Web Annotation Utility Module for Islandora) accommodates annotation across multiple formats. This solution can be used in various digital scholarship contexts.
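For readers unfamiliar with the model, here is a minimal hand-built example of what a Web Annotation looks like, assembled as a Python dict and serialized to JSON-LD; the object URL, annotation ID, and region values are hypothetical.

```python
# A minimal example of a W3C Web Annotation (JSON-LD): a textual note
# targeting a rectangular region of an image. IDs and URLs are invented.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "http://example.org/anno/1",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "A handwritten marginal note appears here.",
        "format": "text/plain",
    },
    "target": {
        "source": "http://example.org/islandora/object/page-3",
        "selector": {
            "type": "FragmentSelector",
            "value": "xywh=220,340,510,120",  # region of the image, in pixels
        },
    },
}
print(json.dumps(annotation, indent=2))
```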

Code4Lib Journal: The FachRef-Assistant: Personalised, subject specific, and transparent stock management

Tue, 2017-07-18 21:03
We present in this paper a personalized web application for the weeding of printed resources: the FachRef-Assistant. It offers an extensive range of tools for evidence-based stock management, based on thorough analysis of usage statistics. Special attention is paid to the criteria of individualization, transparency of the parameters used, and generic functionality. Currently, it is designed to work with the Aleph system from Ex Libris, but effort was spent on keeping the application as generic as possible. For example, all procedures specific to the local library system have been collected in one Java package. The inclusion of library-specific properties such as collections and systematics has been designed to be highly generic as well, by mapping the individual entries onto an in-memory database. Hence, simple adaptation of the package and the mappings would render the FachRef-Assistant compatible with other library systems. The personalization of the application allows for the inclusion of subject-specific usage properties as well as of variations between different collections within one subject area. The parameter sets used to analyse the stock and to prepare weeding and purchase proposal lists are included in the output XML files to facilitate a high degree of transparency, objectivity, and reproducibility.
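The last point, writing the analysis parameters into the output itself, is easy to picture with a small sketch (in Python rather than the application’s Java; the element and parameter names below are invented for illustration, not taken from the FachRef-Assistant):

```python
# Hedged sketch of the transparency idea: embed the exact parameter set used
# for an analysis inside the output XML, so any weeding proposal can be
# reproduced later. All element and parameter names here are invented.
import xml.etree.ElementTree as ET

params = {"min_loans_per_year": "2", "observation_years": "5",
          "collection": "physics"}

root = ET.Element("weeding_proposal")
param_el = ET.SubElement(root, "parameters")
for name, value in params.items():
    ET.SubElement(param_el, "parameter", name=name).text = value

titles = ET.SubElement(root, "titles")
ET.SubElement(titles, "title", barcode="12345678").text = "Example monograph"

ET.ElementTree(root).write("proposal.xml", encoding="utf-8",
                           xml_declaration=True)
```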

Code4Lib Journal: The Semantics of Metadata: Avalon Media System and the Move to RDF

Tue, 2017-07-18 21:03
The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
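As a toy illustration of the MODS-to-RDF direction of that migration (not Avalon’s actual mapping, which targets its own chosen ontologies; Dublin Core is used here as a stand-in), the sketch below reads a MODS title from XML and restates it as an RDF triple with rdflib:

```python
# Hedged sketch: extract a title from MODS XML and express it as an RDF
# triple. The object URI is hypothetical; DCTERMS stands in for whichever
# vocabulary a real mapping would use.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

MODS_NS = {"mods": "http://www.loc.gov/mods/v3"}

mods = ET.fromstring(
    '<mods xmlns="http://www.loc.gov/mods/v3">'
    "<titleInfo><title>Oral history interview, 1970</title></titleInfo>"
    "</mods>"
)
title = mods.find("mods:titleInfo/mods:title", MODS_NS).text

g = Graph()
obj = URIRef("http://example.org/fedora/object/1")  # hypothetical object URI
g.add((obj, DCTERMS.title, Literal(title)))
print(g.serialize(format="turtle"))
```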

Code4Lib Journal: OpeNumisma: A Software Platform Managing Numismatic Collections with A Particular Focus On Reflectance Transformation Imaging

Tue, 2017-07-18 21:03
This paper describes OpeNumisma, a reusable web-based platform focused on digital numismatic collections. The platform provides an innovative merge of digital imaging and data management systems, offering great new opportunities for research and the dissemination of numismatic knowledge online. A unique feature of the platform is its application of Reflectance Transformation Imaging (RTI), a computational photography method that offers tremendous possibilities for image analysis in numismatic research. This technique allows the user to observe, in the browser, minor details unseen by the naked eye, simply by moving the computer mouse rather than handling the actual object. The first successful implementation of OpeNumisma has been the creation of a digital library for the medieval coins from the collection of the Bank of Cyprus Cultural Foundation.
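The relighting trick behind RTI is easiest to see in its Polynomial Texture Map (PTM) form, where each pixel stores six fitted coefficients and luminance is a biquadratic function of the light direction, so the viewer can relight the coin as the mouse moves. Below is a small sketch of that evaluation; the coefficients are illustrative, not data from OpeNumisma.

```python
# Sketch of PTM evaluation: per-pixel luminance as a biquadratic function
# of the light direction (lu, lv). Coefficients below are made up.
import numpy as np

def ptm_luminance(coeffs, lu, lv):
    """Evaluate L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.

    coeffs: array of shape (..., 6) holding a0..a5 per pixel.
    """
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5

# One pixel's coefficients, relit from two light directions:
pixel = np.array([0.1, 0.1, 0.05, 0.4, 0.3, 0.2])
print(ptm_luminance(pixel, lu=0.0, lv=0.0))   # light straight on
print(ptm_luminance(pixel, lu=0.5, lv=-0.5))  # raking light from the side
```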

Code4Lib Journal: DuEPublicA: Automated bibliometric reports based on the University Bibliography and external citation data

Tue, 2017-07-18 21:03
This paper describes a web application to generate bibliometric reports based on the University Bibliography and the Scopus citation database. Our goal is to offer an alternative to easy-to-prepare automated reports from commercial sources. These often suffer from incomplete coverage of publication types and difficult attribution to people, institutes, and universities. Using our University Bibliography as the source for selecting relevant publications solves both problems. As it is a local system, maintained and set up by the library, we can include every publication type we want. And because the University Bibliography is linked to the identity management system of the university, it enables easy selection of publications for people, institutes, and the whole university. The program is designed as a web application, which collects publications from the University Bibliography, enriches them with citation data from Scopus, and performs three kinds of analyses: 1) a general analysis (number and type of publications, publications per year, etc.), 2) a citation analysis (average citations per publication, h-index, uncitedness), and 3) an affiliation analysis (home and partner institutions). We tried to keep the code highly generic, so that the inclusion of other databases (Web of Science, IEEE) or other bibliographies is easily feasible. The application is written in Java and XML and uses XSL transformations and LaTeX to generate bibliometric reports as HTML pages and in PDF format. Warnings and alerts are automatically included if the citation analysis covers only a small fraction of the publications from the University Bibliography. In addition, we describe a small tool that helps to collect author details for an analysis.
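The citation metrics named in that second analysis are simple to compute once citation counts are in hand. Here is a minimal sketch of the h-index and uncitedness calculations on a toy list of counts (illustrative data, not from the paper):

```python
# Minimal sketch of the citation-analysis metrics: input is just a list of
# citation counts, one per publication.
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def uncitedness(citations):
    """Share of publications that have never been cited."""
    return sum(1 for c in citations if c == 0) / len(citations)

cites = [12, 7, 5, 5, 2, 0, 0]           # toy citation counts
print(h_index(cites))                     # -> 4
print(sum(cites) / len(cites))            # average citations per publication
print(uncitedness(cites))                 # -> ~0.29
```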

Code4Lib Journal: New Metadata Recipes for Old Cookbooks: Creating and Analyzing a Digital Collection Using the HathiTrust Research Center Portal

Tue, 2017-07-18 21:03
The Early American Cookbooks digital project is a case study in analyzing collections as data using HathiTrust and the HathiTrust Research Center (HTRC) Portal. The purposes of the project are to create a freely available, searchable collection of full-text early American cookbooks within the HathiTrust Digital Library, to offer an overview of the scope and contents of the collection, and to analyze trends and patterns in the metadata and the full text of the collection. The digital project has two basic components: a collection of 1450 full-text cookbooks published in the United States between 1800 and 1920 and a website to present a guide to the collection and the results of the analysis. This article will focus on the workflow for analyzing the metadata and the full-text of the collection. The workflow will cover: 1) creating a searchable public collection of full-text titles within the HathiTrust Digital Library and uploading it to the HTRC Portal, 2) analyzing and visualizing legacy MARC data for the collection using MarcEdit, OpenRefine and Tableau, and 3) using the text analysis tools in the HTRC Portal to look for trends and patterns in the full text of the collection.
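As a hint of what step 2 can look like in code (a sketch using pymarc rather than MarcEdit, with an assumed input file name and pymarc’s subscript-style field access), one can flatten the title and imprint fields into a CSV that OpenRefine or Tableau will accept:

```python
# Hedged sketch: pull title (245$a), place (260$a), and date (260$c) from
# each MARC record into a CSV for downstream analysis and visualization.
import csv
from pymarc import MARCReader

with open("cookbooks.mrc", "rb") as marc, \
        open("pubdata.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["title", "place", "date"])
    for record in MARCReader(marc):
        if record is None:  # skip records pymarc could not parse
            continue
        f245 = record["245"]
        f260 = record["260"]
        writer.writerow([
            f245["a"] if f245 else "",
            f260["a"] if f260 else "",
            f260["c"] if f260 else "",
        ])
```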

Code4Lib Journal: Countering Stryker’s Punch: Algorithmically Filling the Black Hole

Tue, 2017-07-18 21:03
Two current digital image editing programs are examined in the context of filling in missing visual image data from hole-punched United States Farm Security Administration (FSA) negatives. Specifically, Photoshop's Content-Aware Fill feature and GIMP's Resynthesizer plugin are evaluated and contrasted against comparable images. A possible automated workflow geared towards large-scale editing of similarly hole-punched negatives is also explored. Finally, potential future research based upon this study's results is proposed in the context of leveraging previously enhanced, image-level metadata.
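Neither of the evaluated tools is trivially scriptable at scale, so as a sketch of what a fully automated pass over a batch of negatives might look like, the example below swaps in OpenCV’s inpainting, a different algorithm whose results will differ from Content-Aware Fill or Resynthesizer; the file name and the threshold used to locate the punch hole are assumptions.

```python
# Hedged sketch: locate the punched hole (assumed to scan as near-black)
# and fill it from surrounding image content with OpenCV inpainting.
import cv2

img = cv2.imread("fsa_negative.tif")

# Build a mask of the hole punch; the threshold value of 10 is an assumption
# about how dark the punched region reads in the scan.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY_INV)

# Fill the masked hole from the surrounding pixels (radius 5, Telea method).
filled = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("fsa_negative_filled.tif", filled)
```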

District Dispatch: Rights reversion: restoring knowledge and culture, one book at a time

Tue, 2017-07-18 20:02

Guest post by: Brianna Schofield, Executive Director, Authors Alliance; Erika Wilson, Communications & Operations Manager, Authors Alliance


For many of us, it’s an all-too-familiar scenario: We’re searching for a book that’s fallen out of print and is unavailable to read or purchase online. Maybe it’s an academic text, with volumes held in only a few research library collections and all but inaccessible to the public. Or maybe it’s one of the many 20th-century books whose initial commercial life has ended and whose copyright status keeps it out of circulation. Most of these books were published long before the advent of the Internet or of e-books. Finding and accessing these volumes can be frustrating and time-consuming, even with the benefit of interlibrary loan. There’s all this valuable knowledge and culture out there, but we can’t get to it!

Wouldn’t it be great if there were some mechanism to give new life to the many books that have been “locked away,” to make them newly available, and to share them with new audiences?

Thanks to rights reversion, there is a way! Reversion enables authors to regain the rights to their previously published books, so that they can make them newly available in the ways they want. Some authors may want to bring their out-of-print books back into print, while others may want to deposit their books in open access online repositories. Still others might want to update their works, create e-book versions with multimedia resources, or commission translations.

A “right of reversion” is a contractual provision that permits authors to work with their publishers to regain some or all of the rights in their books when certain conditions are met. But authors may also be able to revert rights even if they have not met the triggering conditions in their contract, or if their contracts do not have a reversion clause at all! Reversion can be a powerful tool for authors, but many authors do not know where to start.

That’s where Authors Alliance comes in. We’re a non-profit education and advocacy organization whose mission is to facilitate widespread access to works of authorship by assisting authors who want to share knowledge and products of the imagination broadly. We provide information and tools designed to help authors better understand and manage key legal, technological, and institutional aspects of authorship in the digital age.

Our Guide to Understanding Rights Reversion was written to help authors navigate the reversion process. (Check out the rights reversion portal on our website to download or buy the guide, and for more resources including letter templates for use in contacting publishers about reversion). Since we released the guide two years ago, we’ve featured a number of reversion success stories. For example, Robert Darnton (professor emeritus at Harvard and a founding member of Authors Alliance) worked with his publisher to regain rights to two of his books about the French Enlightenment, and he has made them freely available to all via HathiTrust and the Authors Alliance collection page at the Internet Archive. Novelist and Authors Alliance member Tracee Garner successfully leveraged reversion to regain the rights to two of her previously published books. She’s currently working on a third volume, and she plans to release all three as a new trilogy.

Rights reversion has a great deal of potential to help authors and the public, and librarians are in an excellent position to help spread the word about it. Many senior academics have decades’ worth of scholarly books, many of which may be out of print, locked away in inaccessible library stacks, and unavailable online. Rights reversion can be a way to help authors secure their intellectual legacy while also bringing their works to new audiences.

Reversion is good for authors, good for publishers, and good for the public interest. You can learn more by visiting our website, where we invite you to become a member of Authors Alliance! Basic membership is free, and our members are the first to hear of new resources, such as our forthcoming guide to fair use and our guide to publication contracts. We also feature news on copyright policy and advocacy.

If you have questions about rights reversion, we can be reached at reversions@authorsalliance.org. We’d also love to hear about your experiences with assisting authors with these issues—who knows, maybe yours could be the next rights reversion success story!

The post Rights reversion: restoring knowledge and culture, one book at a time appeared first on District Dispatch.
