Feed aggregator

DuraSpace News: Oslo and Akershus University College (HiOA) Launch DSpace IRs on KnowledgeArc Platform

planet code4lib - Tue, 2017-02-28 00:00

From Michael Guthrie, KnowledgeArc

Hove, UK: Oslo and Akershus University College (Høgskolen i Oslo og Akershus, or HiOA) has deployed its two new institutional repositories, HiOA Open Digital Archive and HiOA Fagarkivet, on KnowledgeArc's managed, hosted DSpace platform.

DuraSpace News: VIVO Updates Feb 26–Camp, Ontologies, Strategy, Outreach

planet code4lib - Tue, 2017-02-28 00:00

VIVO Camp registration has been extended and will remain open until we're full. We have a great group signed up to learn about VIVO at VIVO Camp, April 6-8 in Albuquerque, New Mexico. Register today for Camp here.

LibUX: Listen: Trey Gordner and Stephen Bateman from Koios (23:08)

planet code4lib - Mon, 2017-02-27 22:26

In this episode, Trey Gordner and Stephen Bateman from Koios join Amanda and Michael to chat about startup opportunities in this space, the opportunity and danger of “interface aggregation,” the design of their new service Libre, and more.

These two were super fun to interview.

You can also download the MP3 or subscribe to Metric: A UX Podcast on Overcast, Stitcher, iTunes, YouTube, SoundCloud, Google Music, or just plug our feed straight into your podcatcher of choice.

Open Knowledge Foundation: 7 ways the ROUTE-TO-PA project has improved data sharing through CKAN

planet code4lib - Mon, 2017-02-27 10:01

Data sharing has come a long way over the years. With open source tools, improvements and new features are always just over the horizon. Serah Rono looks at the improvements made to the open source data management system CKAN over the course of the ROUTE-TO-PA project.

In the present day, 5MB worth of data would probably be a decent photo, a three-minute song, or a spreadsheet. Nothing worth writing home about, let alone splashing across front pages of mainstream media. This was not the case in 1956, though: in September of that year, IBM made the news by creating a 5MB hard drive. It was so big, a crane was used to lift it onto a plane. Two years later, in 1958, the World Data Centre was established to allow users open access to scientific data. Over the years, data storage and sharing options have evolved to be more portable, secure, and with the blossoming of the Internet, virtual, too.

One such virtual data sharing platform, CKAN, has been up and running for ten years now. CKAN is a powerful data management system that makes data accessible – by providing tools to streamline publishing, sharing, finding and using data. CKAN is aimed at data publishers (national and regional governments, companies and organizations) wanting to make their data open and available.
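
For readers who have not worked with CKAN before, every CKAN portal exposes the same JSON Action API over HTTP, and that API is the hook on which extensions like the ones described below hang. As a rough illustration (the portal URL and search term here are placeholders, not anything specific to ROUTE-TO-PA), a dataset search looks like this in Python:

import json
import urllib.parse
import urllib.request

CKAN_SITE = "https://demo.ckan.org"  # placeholder: any CKAN 2.x portal

def package_search(query, rows=5):
    # CKAN's package_search action returns matching datasets as JSON.
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    url = CKAN_SITE + "/api/3/action/package_search?" + params
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    # Every action response is wrapped in {"success": ..., "result": ...}.
    return payload["result"]

result = package_search("transport")
print(result["count"], "datasets found")
for dataset in result["results"]:
    print("-", dataset["name"])

Every action (package_show, datastore_search and so on) follows this same request/response shape, which is part of what makes CKAN straightforward to build on.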

It is no wonder then that ROUTE-TO-PA, a Horizon2020 project pushing for transparency in public administrations across the EU, chose CKAN as a foundation for its Transparency Enhancing Toolset (TET). As one of ROUTE-TO-PA’s tools, the Transparency Enhancing Toolset provides data publishers with a platform on which they can open up data in their custody to the general public.

So, what improvements have been made to the CKAN base code to constitute the Transparency Enhancing Toolset? Below is a brief list:

1. Content management system support

Integrating CKAN with a content management system lets publishers easily publish content related to datasets and post updates about the portal. The TET WordPress plugin integrates seamlessly with a TET-enabled CKAN instance, providing publishers with rich content publishing features and an elegantly organized entry point to the data portal.

2. PivotTable

The core CKAN platform has limited data analysis capabilities, which are essential for working with data. ROUTE-TO-PA added a PivotTable feature that lets users view, summarize and visualize data. From the data explorer in this example, users can easily create pivot tables and even run SQL queries. See the source code here.
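
The SQL option in the data explorer is backed by CKAN's DataStore API, specifically the datastore_search_sql action; only resources that have been loaded into the DataStore can be queried this way. Here is a minimal sketch of the same kind of query from code, with a placeholder portal URL and a hypothetical resource ID:

import json
import urllib.parse
import urllib.request

CKAN_SITE = "https://demo.ckan.org"  # placeholder portal
RESOURCE_ID = "00000000-aaaa-bbbb-cccc-dddddddddddd"  # hypothetical

# In DataStore SQL, each resource is a table named by its resource ID.
sql = 'SELECT count(*) AS n FROM "' + RESOURCE_ID + '"'
url = (CKAN_SITE + "/api/3/action/datastore_search_sql?"
       + urllib.parse.urlencode({"sql": sql}))
with urllib.request.urlopen(url) as response:
    payload = json.load(response)
print(payload["result"]["records"])  # e.g. [{"n": 1234}]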

3. OpenID

ROUTE-TO-PA created an OpenID plugin for CKAN, which enables OpenID authentication on CKAN. See the source code here.
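
For anyone curious what such a plugin involves, CKAN authentication plugins implement the IAuthenticator interface. The skeleton below is only a sketch of that shape, with the OpenID specifics omitted and the class name invented; it is not the ROUTE-TO-PA plugin's actual code:

import ckan.plugins as plugins

class ExampleOpenIDPlugin(plugins.SingletonPlugin):
    plugins.implements(plugins.IAuthenticator)

    def identify(self):
        # Called on each request: inspect the session for a verified
        # OpenID identity and set the current user (logic omitted).
        pass

    def login(self):
        # Redirect the browser to the OpenID provider here.
        pass

    def logout(self):
        # Clear the OpenID session state here.
        pass

    def abort(self, status_code, detail, headers, comment):
        # Leave CKAN's default error handling unchanged.
        return status_code, detail, headers, comment

A plugin like this is packaged as a CKAN extension and enabled by adding its name to the ckan.plugins setting in the CKAN configuration file.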

4. Recommendation for related datasets

With this feature, the application recommends related datasets a user can look at, based on the current selection and other contextual information. The feature guides users to potentially useful and relevant datasets. See, for example, this search result for datasets on bins in Dublin, Ireland.

5. Combine Datasets Feature

This feature allows users to combine related datasets from their TET search results into a single dataset. Along with the Refine Results feature, the Combine Datasets feature is found in the top right corner of the search results page, as in this example. Please note that only datasets with the same structure can be combined at this point. Once combined, the resulting dataset can be downloaded for use.

6. Personalized search and recommendations

The personalized search feature gives logged-in users search results tailored to the details provided in their profiles. Logged-in users also receive personalized dataset recommendations on the same basis.

7. Metadata quality check/validation

Extra validations were added to the dataset entry form to prevent data entry errors and ensure consistent metadata.
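
In CKAN, form validation like this is usually added by overriding the dataset schema through the IDatasetForm plugin interface. The example below is a hedged sketch of that mechanism with an invented rule (rejecting placeholder titles); the actual TET validations differ:

import ckan.plugins as plugins
import ckan.plugins.toolkit as toolkit

def no_placeholder_title(value):
    # Invented example rule: reject obviously unfinished titles.
    if value.strip().lower() in ("test", "untitled", "tbd"):
        raise toolkit.Invalid("Please provide a meaningful dataset title")
    return value

class StricterMetadataPlugin(plugins.SingletonPlugin, toolkit.DefaultDatasetForm):
    plugins.implements(plugins.IDatasetForm)

    def create_package_schema(self):
        schema = super(StricterMetadataPlugin, self).create_package_schema()
        # Chain the custom check after CKAN's built-in not_empty validator.
        schema["title"] = [toolkit.get_validator("not_empty"),
                           no_placeholder_title]
        return schema

    def update_package_schema(self):
        return self.create_package_schema()

    def is_fallback(self):
        # Apply this schema to datasets that declare no special type.
        return True

    def package_types(self):
        return []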

You can find, borrow from and contribute to the CKAN and TET code repositories on GitHub, join CKAN's global user group or email serah.rono@okfn.org with any/all of your questions. Viva el open source!

John Miedema: Dusting off the OpenBook WordPress Plugin. There will be Code.

planet code4lib - Sun, 2017-02-26 14:41

This morning I dusted off my OpenBook WordPress plugin. WordPress sensibly suppresses old plugins from its search directory, so I had to manually install it from its website. WordPress warns that the plugin has not been updated in over two years and may have compatibility issues. I was fully expecting it to break, if not during the installation then during the book search or preview function. To my surprise, everything worked fine!

Some of you will remember that during my days in library school I programmed the OpenBook plugin to insert rich book data from Open Library into WordPress posts/pages. The plugin got a bit of attention. I wrote an article in Code4Lib. I was asked to write another for NISO (pdf). I did some presentations at library conferences. I got commissioned to create a similar plugin for BookNet Canada; it is still available. OpenBook even got a favourable mention in the Guardian newspaper during the Google Books controversy. 

OpenBook went through a few significant upgrades. Templates allowed users to customize the look. COinS support was added for integration with third-party applications like Zotero. An OpenURL resolver allowed libraries to point webpages directly to entries in their online catalogues. I’m still proud of it. I had plenty of new ideas. Alas, code projects are never finished, merely abandoned.

Today, I am invested in publishing my After Reading web series. I am not announcing the resurrection of OpenBook. The web series will, however, use technical diagrams and code to illustrate important bookish concepts. For example, a couple of years ago I had an idea for a cloud-based web service that could pull book data from multiple sources. I was excited about this use of distributed data. Today that concept has a name, the distributed ledger, and it could be applied to book records. I will not be developing that technology in this series, but you can count on at least one major code project. There will be code.

The After Reading series will be posting book reviews, so I figured what the heck, dust off OpenBook. Maybe a small refresh will make it active in the WordPress search directory again. 


District Dispatch: ALA in NYC (Part 2): ‘More than a collection of books’

planet code4lib - Fri, 2017-02-24 20:46

How many times have you heard that phrase? A visit to the Central Library of the Brooklyn Public Library system proved the statement undoubtedly true. I work for ALA, I am a librarian, I worked for a library, I go to my branch library every week, so I know something about libraries. But this visit was the kind of experience that makes you want to point, jump and exclaim, “See what is going on here!”

Photo credit: Carrie Russell

Indeed, much more than a collection of books. And more than shiny digital things. And more than an information commons. All are welcome here because the library is our community.

A visit to Brooklyn Public was first on the agenda of ALA’s Digital Content Working Group (DCWG), the leadership contingent that meets periodically with publishers in New York City. (Read more about the publisher meetings here.)

On arrival, we found several FBI agents assembled outside the building, which gave us momentary pause until we learned that they were cast members of the television series Homeland, one of the many film crews that use the Central Library as a scenic backdrop. Look at the front of the building. Someone said “it’s like walking into a church.” Very cool if you can call this your library home.

Brooklyn is making a difference in its community. Heck, in 2016 it won the Institute of Museum and Library Services’ National Medal, the nation’s highest honor for museums and libraries. Brooklyn was recognized in part for its Outreach Services Department, which provides numerous social services, including a creative aging program, an oral history project for veterans, free legal services in multiple languages and a reading program called TeleStory. (TeleStory connects families to loved ones in prison via video conference so they can read books together. Who knew a public library did that?!) All people are welcome here.

The Adult Learning Center, operating at Central and five Brooklyn Library branches under the direction of Kerwin Pilgrim, provides adult literacy training for new immigrants, digital literacy, citizenship classes, job search assistance and an extensive arts program that takes students to cultural centers where some see their first live dance performance! The active program depends on volunteer tutors. “Our tutors are the backbone of our adult basic education program serving students reading below the fifth grade reading level. Without our trained and dedicated volunteers, our program would suffer,” said Pilgrim.

Photo credit: Jim Neal

“Our volunteers are also strong advocates for our students and ambassadors of literacy because many of them share their experiences tutoring and inspire others to join as well. In many of our receptions over the years when we’ve asked tutors to provide reflections on their experiences we’ve heard that volunteering with BPL’s adult literacy programs not only opened their eyes, but also informed their perspective about what it means to be illiterate in a digital world. They empathize and empower our students to go beyond their comfort zones. Tutors have helped students progress to higher level groups and even to our Pre-HSE program. Students have achieved personal and professional goals like passing the citizenship and driver’s license exams, and completing applications for various city agencies. We help students with their stated goals as well as other aspects of their development and growth.”

During our tour of the library, we caught the tail end of story time, and I have never seen so many baby carriages lined up in the hallway. Another crowd scene when we were leaving the building – a huge line of people waiting to apply for their passports. Yes, the library has a passport office. Brooklyn also partners with the City of New York by providing space for the municipal ID program called IDNYC. All New Yorkers are eligible for a free New York City Identification Card, regardless of citizenship status. Already, 10 percent of the city’s population has an IDNYC Card! This proof of identity card provides one with access to city services. With the ID, people can apply for a bank or credit union account, get health insurance from the NY State Health Insurance Marketplace and more.

In the Business and Career Center, Brooklyn provides numerous online databases and other resources, many helpful for entrepreneurs looking to start their own business. The biggest barrier to starting a new business is inadequate start-up funds. Brooklyn Public tries to mitigate this problem with its “Power Up” program, funded by Citibank, in which budding entrepreneurs take library training to write a business plan. After classes on business plan writing, marketing and discovering sources of financing, participants enter a “best business plan” competition with a first prize of $15,000. Nearly one-third of Brooklyn residents do not have sufficient access to the Internet, so the information commons and the dedicated staff available to help, open to all residents, are critical.

I know this is just the tip of the iceberg of all the work that Brooklyn Public does. The library—more than a collection of books—is certainly home to Brooklyn’s 2.5 million residents. And all people are welcome here.

The post ALA in NYC (Part 2): ‘More than a collection of books’ appeared first on District Dispatch.

Brown University Library Digital Technologies Projects: Solr LocalParams and dereferencing

planet code4lib - Fri, 2017-02-24 18:03

A few months ago, at the Blacklight Summit, I learned that Blacklight defines certain settings in solrconfig.xml to serve as shortcuts for a group of fields with different boost values. For example, in our Blacklight installation we have a setting for author_qf that references four specific author fields with different boost values.

<str name="author_qf">
  author_unstem_search^200
  author_addl_unstem_search^50
  author_t^20
  author_addl_t
</str>

In this case author_qf is a shortcut that we use when issuing searches by author. By referencing author_qf in our request to Solr we don’t have to list all four author fields (author_unstem_search, author_addl_unstem_search, author_t, and author_addl_t) and their boost values; Solr is smart enough to use those four fields when it notices author_qf in the query. You can see the exact definition of this field in our GitHub repository.

Although the Blacklight project talks about this feature in its documentation page, and our Blacklight instance takes advantage of it via the Blacklight Advanced Search plugin, I had never quite understood how it works internally in Solr.

LocalParams

It turns out Blacklight takes advantage of a feature in Solr called LocalParams, which allows us to customize individual values for a parameter on each request:

LocalParams stands for local parameters: they provide a way to “localize” information about a specific argument that is being sent to Solr. In other words, LocalParams provide a way to add meta-data to certain argument types such as query strings. https://wiki.apache.org/solr/LocalParams

The syntax for LocalParams is p={! k=v }, where p is the parameter to localize, k is the setting to customize, and v is the value for the setting. For example, the following

q={! qf=author}jane

uses LocalParams to customize the q parameter of a search. In this case it forces the query fields (qf) parameter to use the author field when searching for “jane”.

Dereferencing

When using LocalParams you can also use dereferencing to tell the parser to use an already-defined value as the value for a LocalParam. For example, the following shows how to use the already-defined author_qf value when setting qf in the LocalParams. Notice how the value is prefixed with a dollar sign to indicate dereferencing:

q={! qf=$author_qf}jane

When Solr sees $author_qf it replaces it with the four author fields we defined and sets the qf parameter to use those four fields.

You can see how Solr handles dereferencing if you pass debugQuery=true to your Solr query and inspect the debug.parsedquery in the response. The previous query would return something along the lines of

(+DisjunctionMaxQuery(
    (
      author_t:jane^20.0 |
      author_addl_t:jane |
      author_addl_unstem_search:jane^50.0 |
      author_unstem_search:jane^200.0
    )~0.01
  )
)/no_coord

Notice how Solr dereferenced (i.e. expanded) author_qf to the four author fields that we have configured in our solrconfig.xml with the corresponding boost values.

It’s worth noting that dereferencing only works if you use the eDisMax parser in Solr.
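
To see the expansion for yourself, here is a small sketch of issuing that debug query from Python; the host and core name (catalog) are assumptions about a local test setup, not Blacklight’s actual deployment:

import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "defType": "edismax",         # dereferencing requires eDisMax
    "q": "{!qf=$author_qf}jane",  # $author_qf expands server-side
    "debugQuery": "true",
    "wt": "json",
})
url = "http://localhost:8983/solr/catalog/select?" + params
with urllib.request.urlopen(url) as response:
    data = json.load(response)
print(data["debug"]["parsedquery"])  # shows the expanded author fields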

Several advantages of this Solr feature come to mind. One is that your queries are a bit shorter, since you pass an alias (author_qf) rather than all four fields and their boost values, which makes the query easier to read. Another is that you can change the definition of author_qf on the server (say, to include a new author field in your Solr index) and client applications will automatically pick up the new definition when they reference author_qf.

Open Knowledge Foundation: Announcing the 2017 International Open Data Day Mini-Grant Winners!

planet code4lib - Fri, 2017-02-24 17:05

This blog was co-written by Franka Vaughan and Mor Rubinstein, OKI Network team.

This is the third year of the Open Knowledge International Open Data Day mini-grants scheme, our best one yet! Building on last year’s lessons, and in the spirit of Open Data Day, we are trying to make the scheme more transparent. We aim to email every mini-grant applicant a response with feedback on their application. This blog is the first in a series where we look at who has received grants, how much has been given, and our criteria for deciding who to fund (more about that next week!).

Our selection process took more time than expected, for the best of reasons: the UK Foreign & Commonwealth Office joined the scheme last week and is funding eight more events! Added to the support we received from SPARC, the Open Contracting Program of Hivos, Article 19 and our grant from the Hewlett Foundation, a total of $16,530 in mini-grants is being distributed. This is $4,030 more than we initially committed.

The grants are divided into six categories: Open Research, Open Data for Human Rights, Open Data for Environment, Open Data Newbies, Open Contracting and FCO special grantees. The Newbie category was not planned, but we decided to be flexible and accommodate places hosting an Open Data Day event for the first time. These events didn’t necessarily fit our criteria, but showed potential or are first-time hosts. Two of these events will get special assistance from our Open Data for Development Africa Lead, David Opoku.

So without further ado, here are the grantees:  

Open Research

  1. Open Knowledge Nepal’s ODD event will focus on “Open Access Research for Students” to highlight the conditions of Open Access and Open Research in Nepal, showcasing research opportunities and the direction of research trends. Amount: $350
  2. Open Switch Africa will organise a workshop to encourage open data practices in academic and public institutions, teach attendees how to create and utilize open data sheets and repositories, and build up an open data community in Nigeria. Amount: $400
  3. The Electrochemical Society’s ODD event in the USA will focus on informing the general public about their mission to Free the Science and make scientific research available to everyone, and also share their plans to launch their open access research repository through Research4Life in March. Amount: $400
  4. Wiki Education Brazil aims to create and build structures to publish Brazilian academic research on Wikipedia and WikiData. They will organise a hackathon and edit-a-thon in partnership with pt.wiki and wikidata communities with support from Wikimedia Foundation research team to create a pilot event, similar to https://meta.wikimedia.org/wiki/WikiCite. Amount: $400
  5. Kyambogo University, Uganda will organise a presentation on how open data and the library promote open access. They will host an exhibition on open access resources and organise a library tour to acquaint participants with the available open access resources in the University’s library. Amount: $400
  6. Kirstie Whitaker will organise a brainhack to empower early career researchers at Cambridge University, England, on how to access open neuroimaging datasets already in existence for new studies and add their own data for others to use in the future. Amount: $375
  7. The University of Kashmir’s ODD event in India will target scholars, researchers and the teaching community, introducing them to Open Data Lab projects available through Open Knowledge Labs and to open research repositories through re3data.org. Amount: $400
  8. The Research Computing Centre of the University of Chicago will organise a hackathon that will introduce participants to public data available on different portals on the internet. Amount: $300
  9. Technarium hackerspace in Lithuania will organise a science cafe to open up a conversation, in an otherwise conservative Lithuanian scientist population, about the benefits of open data and ways to share the science they do. Amount: $400
  10. UNU-MERIT/BITSS-YAOUNDE in Cameroon will organise hands-on practical training courses on GitHub, OSF, STATA dynamic documents, R Markdown, advocacy campaigns and more, targeting 100 people. Amount: $400
  11. Open Sudan will organise a high-level conference to discuss the current state of research data sharing in Sudan, highlight the global movement and its successes, shed light on lessons from the global movement that could be implemented locally and, most importantly, create a venue for collaboration. Amount: $400

Open Data for Human Rights

  1. Dag Medya’s ODD event will increase awareness of deceased workers in Turkey by structuring and compiling raw data in a tabular format and opening it to the public for the benefit of open data lovers and data enthusiasts. Amount: $300
  2. Election Resource Centre Zimbabwe will organise a training to build the capacity of project champions who will use data to tell human rights stories, analysis, visualisation, reporting, stimulating citizen engagement and campaigns. Amount: $350
  3. PoliGNU’s ODD event in Brazil will be a discussion on women’s participation in the development of public policies, guided by open data collection and visualizations. Amount: $390
  4. ICT4Dev Research Center will organise a press conference to launch their new website [ict4dev.ma], which highlights their open data work, and a panel discussion about the relationship between Human Rights and Open Data in Morocco. Amount: $300
  5. Accountabilitylab.org will train and engage Citizen Helpdesk volunteers from four earthquake-hit districts in Nepal (Kavre, Sindhupalchowk, Nuwakot and Dhading) who are working as interlocutors, problem solvers and advocates on migration-related problems, to codify citizen feedback using qualitative data from the ground and amplify it using open data tools. Amount: $300
  6. Abriendo Datos Costa Rica will gather people interested in human rights activism and accountability, teach them open data concepts and the context of Open Data Day, and check the openness (or otherwise) of the available human rights data. Amount: $300

Open Data for Environment

  1. SpaceClubFUTA will use OpenStreetMap, the TeachOSM tasking manager, remote sensing and GIS tools to map garbage sites in Akure, Nigeria and track their exact locations and the size and type of garbage. The data collected will be handed over to the agency in charge of cleanup to help them organise the necessary logistics. Amount: $300
  2. Open Data Durban will initiate a project about the impacts of open data in society through the engagement of the network of labs and open data school clubs (wrangling data through an IoT weather station) in Durban, South Africa. Amount: $310
  3. Data for Sustainable Development in Tanzania’s ODD event will focus on using available information from opendata.go.tz to create a thematic map visualization showing how data can be used in the health sector to track the spread of infectious diseases, monitor demand, or use demographic factors to find opportunities for opening new health facilities. Amount: $300
  4. SubidiosClaros / Datos Concepción will create an Interactive Map of Floods on the Argentine and Uruguayan Coasts of the Uruguay River using 2000-2015 data. This will serve as an outline for implementing warning systems in situations of water emergency. Amount: $400
  5. Outbox Hub Uganda will teach participants how to tell stories using open data on air quality from various sources and their own open data project. Amount: $300
  6. Lakehub will use data to highlight the effects of climate change and deforestation on Lake Victoria, Kenya. Amount: $300
  7. Tupale.co will create the basis for a generic data model to analyze air quality in the city of Medellin over the last five years. This initial “scaffolding” will serve as the go-to basis for engaging more city stakeholders while demonstrating the need for more open datasets in Colombia. Amount: $300
  8. Beog Neere will develop an action plan to open up extractives’ environmental impact data and develop data skills for key stakeholders: government and civil society. Amount: $300

Open Data Newbies

  1. East-West Management Institute’s Open Development Initiative (EWMI-ODI) in Laos will build an open data community in Laos, and promote and localise the Open Data Handbook. Amount: $300
  2. Mukono District NGO Forum will use OpenCon resource repositories and make a presentation on Open Data, Open Access and Open Data for Environment. Amount: $350
  3. The Law Society of the Catholic University of Malawi will advocate for sexual reproductive health rights by going to secondary schools and disseminating information to young women on their rights and how to report once they have been victimized. Amount: $350

Open Contracting

  1. LabHacker will take their Hacker Bus to a small city near São Paulo, run a hack day/workshop there and create a physical tool to visualize the city budget, which will be made available to local citizens. They will document the process and share it online so others can copy and modify it. Amount: $400
  2. Anti-Corruption Coalition Uganda will organize a meetup of 40 people identified from civil society, media, government and the general public, and educate them on the importance of open data in improving public service delivery. Amount: $400
  3. Youth Association for Development will hold a discussion on current government policies about open data in Pakistan. The discussions will cover open budgets, open contracting, open bidding, open procurement, open tendering, open spending, Cooking Budgets, the Panama Papers, Municipal Money and more. Amount: $400
  4. DRCongo Open Data initiative will organise a conference to raise awareness of the role of open data and mobile technologies in enhancing transparency and promoting accountability in the management of revenues from extractive industries in DR Congo. Amount: $400
  5. Daystar University in Kenya will organise a seminar to raise awareness among student journalists about using public data to cover public officers’ use of taxpayer money. Amount: $380
  6. Centre for Geoinformation Science, University of Pretoria in South Africa will develop a web-based application that uses gamification to encourage the local community (school learners specifically) to engage with open data on public funds and spending. Amount: $345
  7. Socialtic will host a data expedition, workshops, panel and lightning talks, and an open data BBQ to encourage groups like NGOs and journos to use data in their work. Amount: $350
  8. OpenDataPy and Girolabs will show civil society organizations public contract data from Paraguay, along with visualizations and apps made with that data. Their goal is to use all the data available and generate a debate on how this information can help achieve transparency. Amount: $400
  9. Code for Ghana will bring together data enthusiasts, developers, CSOs and journalists to work on analysing and visualising the previous government’s expenditure to bring out insights that would educate the general public. Amount: $400
  10. Benin Bloggers’ Association will raise awareness of the need for Benin to have an effective access-to-information law that obliges elected officials and public officials to publish their assets and revenues. Amount: $400

FCO special grantees

  1. Red Ciudadana will organize a presentation on open data and human rights in Guatemala. They aim to show the importance of opening data linked to the Sustainable Development Goals and human rights, and the impact it has on people’s quality of life. Amount: $400
  2. School of Data – Latvia is organizing a hackathon and inviting journalists, programmers, data analysts, activists and the general public interested in data-driven opportunities. Their aim is to create real projects that draw out data-based arguments and help solve issues that are important to society. Amount: $280
  3. Code for South Africa (Code4SA)’s event will introduce participants to what open data is, why it is valuable and how it is relevant in their lives. They are choosing to *not* work directly with raw data, but rather using an interface on top of census and IEC data to create a more inclusive event. Amount: $400
  4. Code for Romania will use the “Vote Monitoring” App to build a user-friendly repository of open data on election fraud in Romania and Moldova. Amount: $400
  5. Albanian Institute of Science – AIS will organize a workshop on Open Contracting & Red Flag Index and present some of their instruments and databases, with the purpose of encouraging the use of facts in journalistic investigations or citizens’ advocacy. Amount: $400
  6. TransGov Ghana will clean data on public expenditure on development projects [2015 to 2016] and show how they are distributed in the Greater Accra Metropolis (data from Accra Metropolitan Assembly), to meet open data standards and deploy it on the Ghana Open Data Initiative (GODI) platform. Amount: $400


For those who were not successful on this occasion, we will be providing further feedback and would encourage you to try again next time the scheme runs. We look forward to seeing, sharing and participating in your successful events. We invite you all to register your event on the ODD website.

Wishing you all a happy and productive Open Data Day! Follow #OpenDataDay on Twitter for more!

Jonathan Rochkind: rubyland infrastructure, and a modest sponsorship from honeybadger

planet code4lib - Fri, 2017-02-24 16:41

Rubyland.news is my hobby project ruby RSS/atom feed aggregator.

Previously it was run on entirely free heroku resources — free dyno, free postgres (limited to 10K rows, which dashes my dreams of a searchable archive, oh well). The only thing I had to pay for was the domain. Rubyland doesn’t take many resources because it is mostly relatively ‘static’ and cacheable content, so could get by fine on one dyno. (I’m caching whole pages with Rails “fragment” caching and an in-process memory-based store, not quite how Rails fragment caching was intended to be used, but works out pretty well for this simple use case, with no additional resources required).

But the heroku free dyno doesn’t allow SSL on a custom hostname.  It’s actually pretty amazing what one can accomplish with ‘free tier’ resources from various cloud providers these days.  (I also use a free tier mailgun account for an MX server to receive @rubyland.news emails, and SMTP server for sending admin notifications from the app. And free DNS from cloudflare).  Yeah, for the limited resources rubyland needs, a very cheap DigitalOcean droplet would also work — but just as I’m not willing to spend much money on this hobby project, I’m also not willing to spend any more ‘sysadmin’ type time than I need — I like programming and UX design and enjoy doing it in my spare ‘hobby’ time, but sysadmin’ing is more like a necessary evil to me. Heroku works so well and does so much for you.

With a very kind sponsorship gift of $20/month for 6 months from Honeybadger, I used the money to upgrade to a heroku hobby-dev dyno, which does allow SSL on custom hostnames. So now rubyland.news is available at https, via letsencrypt.org, with cert acquisition and renewal fully automated by the letsencrypt-rails-heroku gem, which makes it incredibly painless: just set a few heroku config variables and you’re pretty much done.

I still haven’t redirected all http to https, and am not sure what to do about https on rubyland. For one, if I don’t continue to get sponsorship donations, I might not continue the heroku paid dyno, and then wouldn’t have custom domain SSL available. Also, even with SSL, since the rubyland.news feed often includes embedded <img> tags with their original src, you still get browser mixed-content warnings (which browsers may be moving to give you a security error page on?).  So not sure about the ultimate disposition of SSL on rubyland.news, but for now it’s available on both http and https — so at least I can do secure admin or other logins if I want (haven’t implemented yet, but an admin interface for approving feed suggestions is on my agenda).

Honeybadger

I hadn’t looked at Honeybadger before myself. I have used bugsnag on client projects before, and been quite happy with it. Honeybadger looks like basically a bugsnag competitor: its main feature set is about capturing errors from your Rails (or other, including non-ruby platform) apps, and presenting them well for your response, with grouping, notifications, status disposition, etc.

I’ve set up honeybadger integration on rubyland.news, to check it out. (Note: “Honeybadger is free for non-commercial open-source projects”, which is pretty awesome, thanks honeybadger!) Honeybadger’s feature set and user/developer experience are looking really good.  It’s got much more favorable pricing than bugsnag for many projects–pricing is just per-app, not per-event-logged or per-seat.  It’s got pretty similar featureset to bugsnag, in some areas I like how honeybadger does things a lot better than bugsnag, in others not sure.

(I’ve been thinking for a while about wanting to forward all Rails.logger error-level log lines to my error monitoring service, even though they aren’t fatal exceptions/500s. I think this would be quite do-able with honeybadger, might try to rig it up at some point. I like the idea of being able to put error-level logging in my code rather than monitoring-service-specific logic, and have it just work with whatever monitoring service is configured).

So I’d encourage folks to check out honeybadger — yeah, my attention was caught by their (modest, but welcome and appreciated! $20/month) sponsorship, but I’m not being paid to write this specifically, all they asked for in return for sponsorship was a mention on the rubyland.news about page.

Honeybadger also includes some limited uptime monitoring. The other important piece of monitoring, in my opinion, is request- or page-load time monitoring, with reports and notifications on median and 90th/95th percentile. I’m not sure if honeybadger includes that in any way. (For non-heroku deploys, disk space, RAM, and CPU usage monitoring is also key. RAM and CPU can still be useful with heroku, but less vital in my experience.)

Is there even a service that will work well for Rails apps that combines error, uptime, and request time monitoring, with a great developer experience, at a reasonable price? It’s a bit surprising to me that there are so many services that do just one or two of these, and few that combine all of them in one package.  Anyone had any good experiences?

For my library-sector readers, I think this is one area where most library web infrastructure is not yet operating at professional standards. In this decade, a professional website means you have monitoring and notification to tell you about errors and outages without needing to wait for users to report em, so you can get em fixed as soon as possible. Few library services are operated this way, and it’s time to get up to speed. While you can run your own monitoring and notification services on your own hardware, in my experience few open source packages are up to the quality of current commercial cloud offerings — and when you run your own monitoring/notification, you run the risk of losing notice of problems because of misconfiguration of some kind (it’s happened to me!), or a local infrastructure event that takes out both your app and your monitoring/notification (that too!). A cloud commercial offering makes a lot of sense. While there are many “reasonably” priced options these days, they are admittedly still not cheap for a library budget (or lack thereof), but it’s a price worth paying; it’s what it means to run websites, apps, and services professionally.



District Dispatch: Top 5 myths about National Library Legislative Day

planet code4lib - Thu, 2017-02-23 18:36

Originally published by American Libraries in Cognotes during ALA Midwinter 2017.

The list of core library values is a proud one, and a long one. For the past 42 years, library supporters from all over the country have gathered in Washington, D.C. in May with one goal in mind – to advance libraries’ core values and communicate the importance of libraries to Members of Congress. They’ve told their stories, shared data and highlighted pressing legislation impacting their libraries and their patrons.

Photo Credit: Adam Mason Photography

This year, as never before, Congressional action may well threaten principles and practices that librarians hold dear. That makes it more important than ever that National Library Legislative Day 2017 be the best attended ever. So, let’s tackle a few of the common misconceptions about National Library Legislative Day that often keep people from coming to D.C. to share their own stories:

  1. Only librarians can attend.
    This event is open to the public and anyone who loves libraries – students, business owners, stay-at-home moms, just plain library enthusiasts – has a story to tell. Those firsthand stories are critical to conveying to members of Congress and their staffs just how important libraries are to their constituents.
  2. Only policy and legislative experts should attend.
    While some attendees have been following library legislative issues for many years, many are first time advocates. We provide a full day of training to ensure that participants have the most up-to-date information and can go into their meetings on Capitol Hill fully prepared to answer questions and convey key talking points.
  3. I’m not allowed to lobby.
    The IRS has developed guidelines so that nonprofit groups and private citizens can advocate legally. Even if you are a government appointee, there are ways you can advocate on issues important to libraries and help educate elected officials about the important work libraries do.
    Still concerned? The National Council of Nonprofits has resources to help you.
  4. My voice won’t make a difference.
    From confirming the new Librarian of Congress in 2016 to limiting mass surveillance under the USA FREEDOM Act in 2015 to securing billions in federal support for library programs over many decades, your voice combined with other dedicated library advocates’ has time and again defended the rights of the people we serve and moved our elected officials to take positive action. This can’t be done without you!
  5. I can’t participate if I don’t go to D.C.
    Although having advocates in D.C. to personally visit every Congressional office is hugely beneficial – and is itself a powerful testimony to librarians’ commitment to their communities – you can participate from home. During Virtual Library Legislative Day you can help effectively double the impact of National Library Legislative Day by calling, emailing or tweeting Members of Congress using the same talking points carried by onsite NLLD participants.

Legislative threats to core library values are all too real this year. Don’t let myths prevent you from standing up for them on May 1-2, 2017. Whether you’ve been advocating for 3 months or 30 years, there’s a place for you in your National Library Legislative Day state delegation, either in person or online.

For more information, and to register for National Library Legislative Day, please visit ala.org/nlld.

The post Top 5 myths about National Library Legislative Day appeared first on District Dispatch.

David Rosenthal: Poynder on the Open Access mess

planet code4lib - Thu, 2017-02-23 16:00
Do not be put off by the fact that it is 36 pages long. Richard Poynder's Copyright: the immoveable barrier that open access advocates underestimated is a must-read. Every one of the 36 pages is full of insight.

Briefly, Poynder argues that the mismatch of resources, expertise and motivation makes it futile to depend on a transaction between an author and a publisher to provide useful open access to scientific articles. As I have argued before, Poynder concludes that the only way out is for universities to act:
As it happens, the much-lauded Harvard open access policy contains the seeds for such a development. This includes wording along the lines of: “each faculty member grants to the school a nonexclusive copyright for all of his/her scholarly articles.” A rational next step would be for schools to appropriate faculty copyright all together. This would be a way of preventing publishers from doing so, and it would have the added benefit of avoiding the legal uncertainty some see in the Harvard policies. Importantly, it would be a top-down diktat rather than a bottom-up approach. Since currently researchers can request a no-questions-asked opt-out, and publishers have learned that they can bully researchers into requesting that opt-out, the objective of the Harvard OA policies is in any case subverted.

Note the word "faculty" above. Poynder does not examine the issue that very few papers are published all of whose authors are faculty. Most authors are students, post-docs or staff. The copyright in a joint work is held by the authors jointly, or if some are employees working for hire, jointly by the faculty authors and the institution. I doubt very much that the copyright transfer agreements in these cases are actually valid, because they have been signed only by the primary author (most frequently not a faculty member), and/or have been signed by a worker-for-hire who does not in fact own the copyright.

District Dispatch: Look Back, Move Forward: network neutrality

planet code4lib - Thu, 2017-02-23 15:44

Background image is from the ALA Archives.

With news about network neutrality in everyone’s feeds recently, let’s TBT to 2014 at the Annual Conference in Las Vegas, Nevada, where the ALA Council passed a resolution “Reaffirming Support for National Open Internet Policies and Network Neutrality.” And in 2006—over a decade ago!—our first resolution “Affirming Network Neutrality” was approved.

You can read both resolutions from 2006 and 2014 in ALA’s Institutional Repository. While you are here, be sure to sign up for the Washington Office’s legislative action center for more news and opportunities to act as the issue evolves.

2014 Resolution Reaffirming Support for National Open Internet Policies and “Network Neutrality”

Citations
• Resolution endorsed by ALA Council on June 28, 2006. Council Document 20.12.
• Resolution adopted by ALA Council on July 1, 2014, in Las Vegas, Nevada. Council Document 20.7.

The post Look Back, Move Forward: network neutrality appeared first on District Dispatch.

LibUX: WordPress could be libraries’ best bet against losing their independence to vendors

planet code4lib - Thu, 2017-02-23 13:17

Stephen Francoeur: Interesting play by EBSCO. I’m going to guess that it’s optimized to work with EDS and other EBSCO products. “When It Comes To Improving Your Library Website, Not All Web Platforms Are Created Equal” https://libraryux.slack.com/archives/what-to-read/p1487376220000478

Stephen’s linking to an article where Ebsco announces Stacks:

Stacks is the only web platform created by library professionals for library professionals. Stacks understands the challenges librarians face when it comes to the library website and has built a web platform and native mobile apps that lets you get back to doing what you do best; curating excellent content for your users. Learn more about how Stacks and the New Library Experience.

I haven’t had any hands-on opportunity with Stacks, so I can’t comment on the product – it might be good. My contention, however, is that it is probably worse for libraries if it’s good.

Ebsco is not the first in this space. I think, probably, Springshare has the leg up – so far. Ebsco won’t be the last in this space, either. I know of two vendors who are poised to announce their product.

The opportunity for library-specific content management systems is huge, though. Open source is still such an incredibly steep hill for libraries that installing, maintaining, and customizing a superior platform like WordPress requires too much involvement (I say this without any first-hand experience with Stacks, but I can’t believe Ebsco will break free of the vendor-wide pattern). So, because library websites fail to convert and library professionals lack the expertise to solve that problem themselves, the market is ripe for the picking.

This is part of a trend I’ve warned about in my last few posts, the last podcast (called “Your front end is doomed”), and so on all the way back to my once optimistic observation of the Library as Interface: libraries are losing control of their most important asset – the gate.

Libraries are so concerned with being help-desk-level professionals that they ignore the in-house opportunity for design and development expertise, unable to comprehend the role it plays in libraries’ independence.

I titled this post “WordPress could be libraries’ best bet against losing their independence to vendors” because WordPress — more so than Drupal — is the easiest platform through which to learn how to develop custom solutions. There are more developers, cheap conferences worldwide, ubiquitous meetups, and more sites running WordPress than any other platform on the internet; it is easy-ish to use out of the box and well able to scale for complexity.

These in-house skills are crucial for the libraries’ ability to say “no” over the long term.

Open Knowledge Foundation: Measuring the openness of government data in southern Africa: the experience of a GODI contributor

planet code4lib - Thu, 2017-02-23 10:22

The Global Open Data Index (GODI) is one of our core projects at Open Knowledge International. The index measures and benchmarks the openness of government data around the world. As we complete the review phase of the audit of government data, we are soliciting feedback on the submission process. Tricia Govindasamy shares her experience submitting to #GODI16.

Open Data Durban (ODD), a civic tech lab based in Durban South Africa, received the opportunity from Open Knowledge International (OKI) to contribute to the Global Open Data Index (GODI) 2016 for eight (8) southern African countries. OKI defines GODI as “an annual effort to measure the state of open government data around the world.” With a fast approaching deadline, I was eager to take up the challenge of measuring the openness of specified datasets as made available by the governments of South Africa, Botswana, Namibia, Malawi, Zambia, Zimbabwe, Mozambique and Lesotho.

This intense data wrangling consisted of finding the state of open government data for the following datasets: National Maps, National Laws, Government Budget, Government Spending, National Statistics, Administrative Boundaries, Procurement, Pollutant Emissions, Election Results, Weather Forecast, Water Quality, Locations, Draft Legislation, Company Register and Land Ownership. A quick calculation: 15 datasets multiplied by 8 individual countries results in 120 surveys! As you can imagine, this repetitive task took hours of Google searches until late at night (the best and most productive time for data wrangling, I reckon), leaving my sleep pattern completely messed up. Nonetheless, I got the task done. Here are some of the findings.

Part of the survey for Pollutant Emissions in South Africa

Trends

The African Development Bank developed Open Data Portals for most of the 8 countries. At first sight, these portals are quite impressive, with data visualisations and graphics; however, they are poorly organised and rarely updated. For most countries, the environmental departments are lagging, as there are barely any records on Pollutant Emissions or Water Quality. Datasets on Weather Forecast and Land Ownership are only available for half of the countries. In some cases, sections of the datasets were not available. For example, while both South Africa and Malawi had data on land parcel boundaries, there was no data on property value or tenure type.

It was quite shocking to note that Company Register, an important dataset that can help monitor fraud as it relates to trade and industry, was unavailable for all the countries with the exception of Lesotho.

The National Laws dataset was found for all countries with the exception of Mozambique, whereas Draft Legislation data was not available in Mozambique, Namibia and Botswana. I believe the availability of data on National Laws for almost all the countries can in part be attributed to the African Legal Information Institute, which has contributed to making legislation open and has created websites for South Africa, Lesotho, Malawi and Zambia. Also, while Government Budget and Expenditure data are available, important detailed information such as transactions is lacking for most countries.

On a more positive note, Election Results compiled by independent electoral commissions were the easiest data to find and were generally up to date for all countries except Mozambique, for which I found no results.

It is important to note that none of the datasets for any of the 8 countries are openly licensed or in the public domain, which points to the need for more education on the importance of the matter.

Challenges

OKI has a forum in which Network members from around the world discuss projects and also ask and resolve questions. I must admit, I took full advantage of this since I am a new member of the community with my training wheels still on. The biggest challenge I faced during this process was searching for Mozambique’s government data. I had to resort to using Google translator to find relevant data sources since all the data are published in Portuguese, Mozambique’s national language.

Due to the language barrier, I felt certain things were lost in translation, thus not providing a fair depiction of the survey. Luckily, OKI members from Brazil will be reviewing my submission to verify the data sources.

Tricia Govindasamy submitting to GODI on behalf of 8 countries in southern Africa.

Being South African and having prior knowledge of available government data made the process much easier when I submitted for South Africa. I already knew where to find the data sources, even though many of them did not show up in simple Google searches. I have no experience with government data from the 7 other countries, so I relied solely on Google searches, which may not have surfaced every available source of data in the first few pages of results.

The part of the survey where I felt my efforts provided little insight into the Index was where I found no datasets. If no datasets are found, the survey asks the contributor to “provide the reason that the data are not collected by the government”. I did not have any evidence to sufficiently substantiate an answer, and contacting government departments in a variety of countries to get one was simply not practical at the time.

I would like to thank OKI for giving Open Data Durban the opportunity to contribute to GODI. It was a fulfilling experience, as it is a volunteer-based programme for people around the world. It is always great to know that the open data community extends beyond Durban or South Africa: it is an international community, always collaborating on projects with the joint objective of advocating for open data.

LibUX: Listen: Your Front End is Doomed (33:10)

planet code4lib - Wed, 2017-02-22 21:56

Metric alum Emily King @emilykingatcsn swings by to chat with me about conversational UI and “interface aggregation” – front ends other than yours letting users connect with your service without ever actually having to visit your app. We cover a lot: API-first, considering the role of tone in voice user interfaces, and — of course — predicting doom.

You can also download the MP3 or subscribe to Metric: A UX Podcast on Overcast, Stitcher, iTunes, YouTube, SoundCloud, Google Music, or just plug our feed straight into your podcatcher of choice.

LITA: Jobs in Information Technology: February 22, 2017

planet code4lib - Wed, 2017-02-22 20:09

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Yale University, Sterling Memorial Library, Workflow Analyst/Programmer, New Haven, CT

Penn State University Libraries, Nursing and Allied Health Liaison Librarian, University Park, PA

St. Lawrence University, Science Librarian, Canton, NY

Louisiana State University, Department Head/Chairman, Baton Rouge, LA

Louisiana State University, Associate Dean for Special Collections, Baton Rouge, LA

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Evergreen ILS: Evergreen 2.12 beta is released

planet code4lib - Wed, 2017-02-22 18:55

The Evergreen community is pleased to announce the beta release of Evergreen 2.12 and the beta release of OpenSRF 2.5. The releases are available for download and testing from the Evergreen downloads page and from the OpenSRF downloads page. Testers must upgrade to OpenSRF 2.5 to test Evergreen 2.12.

This release includes the implementation of acquisitions and booking in the new web staff client in addition to many web client bug fixes for circulation, cataloging, administration and reports. We strongly encourage libraries to start using the web client on a trial basis in production. All functionality is available for testing with the exception of serials and offline circulation.

Other notable new features and enhancements for 2.12 include:

  • OverDrive and OneClickdigital integration. When configured, patrons will be able to see ebook availability in search results and on the record summary page. They will also see ebook checkouts and holds in My Account.
  • Improvements to metarecords that include:
    • improvements to the bibliographic fingerprint to prevent the system from grouping different parts of a work together and to better distinguish between the title and author in the fingerprint;
    • the ability to limit the “Group Formats & Editions” search by format or other limiters;
    • improvements to the retrieval of e-resources in a “Group Formats & Editions” search;
    • and the ability to jump to other formats and editions of a work directly from the record summary page.
  • The removal of advanced search limiters from the basic search box, with a new widget added to the sidebar where users can see and remove those limiters.
  • A change to topic, geographic and temporal subject browse indexes that will display the entire heading as a unit rather than displaying individual subject terms separately.
  • Support for right-to-left languages, such as Arabic, in the public catalog. Arabic has also become a new officially-supported language in Evergreen.
  • A new hold targeting service supporting new targeting options and runtime optimizations to speed up targeting.
  • In the web staff client, the ability to apply merge profiles in the record bucket merge and Z39.50 interfaces.
  • The ability to display copy alerts when recording in-house use.
  • The ability to ignore punctuation, such as hyphens and apostrophes, when performing patron searches.
  • Support for recognition of client time zones, particularly useful for consortia spanning time zones.

With release 2.12, minimum requirements for Evergreen have increased to PostgreSQL 9.3 and OpenSRF 2.5.

For more information about what will be available in the release, check out the draft release notes.

Many thanks to all of the developers, testers, documenters, translators, funders and other contributors who helped make this release happen.

DPLA: Michele Kimpton to Lead Business Development Strategy at DPLA

planet code4lib - Wed, 2017-02-22 16:00

The Digital Public Library of America is pleased to announce that Michele Kimpton will be joining its staff as Director of Business Development and Senior Strategist beginning March 1, 2017.

In this critical role, Michele will be responsible for developing and implementing business strategies to increase the impact and reach of DPLA. This will include building key strategic partnerships, creating new services and exploring new opportunities, expanding private and public funding, and developing community support models, both financial and in-kind. Together these important activities will support DPLA’s present and future.

“We are truly fortunate to have someone of Michele’s deep experience, tremendous ability, and stellar reputation join DPLA at this time,” said Dan Cohen, DPLA’s Executive Director. “Along with the rest of the DPLA staff, I look forward to working with Michele to strengthen and expand our community and mission.”

Prior to joining DPLA, Michele Kimpton worked as Chief Strategist for LYRASIS and CEO of DuraSpace, where she developed several new cloud-based managed services for the digital library community and developed new sustainability and governance models for multiple open source projects. Kimpton is a founding member of both the National Digital Strategic Alliance (NDSA) and the IIPC (International Internet Preservation Consortium). In 2013, Kimpton was named a Digital Preservation Pioneer by the NDIIPP program at the Library of Congress. She holds an MBA from Santa Clara University and a Bachelor of Science in Mechanical Engineering from Lehigh University. She can now be reached at michele dot kimpton at dp dot la.

Welcome, Michele!

DuraSpace News: INTRODUCING Fedora 4 Ansible

planet code4lib - Wed, 2017-02-22 00:00

From Yinlin Chen, Software Engineer, Digital Library Development, University Libraries, Virginia Tech
