Planet Code4Lib

Jason Ronallo: The Lenovo X240 Keyboard and the End/Insert Key With FnLk On as a Software Developer on Linux

Thu, 2016-09-22 15:44

As a software developer I’m using keys like F5 a lot. When I’m doing any writing, I use F6 a lot to turn off and on spell correction underlining. On the Lenovo X240 the function keys are overlaid on the same keys as volume and brightness control. This causes some problems for me. Luckily there’s a solution that works for me under Linux.

To access the function keys you have to also press the Fn key. If most of what you’re doing is reloading a browser and not using the volume control, then this is a problem, so they’ve created a function lock which is enabled by pressing the Fn and Esc/FnLk key. The Fn key lights up and you can press F5 without using the Fn modifier key.

That’s all well and good until you get to another quirk of this keyboard where the Home, End, and Delete keys are in the same function key row in a way that the End key also functions as the Insert key. When function lock is on the End key becomes an Insert key. I don’t ever use the Insert key on a keyboard, so I understand why they combined the End/Insert key. But in this combination it doesn’t work for me as a software developer. I’m continually going between something that needs to be reloaded with F5 and in an editor where I need to quickly go to the end of a line in a program.

Luckily there’s a pretty simple answer to this if you don’t ever need to use the Insert key. I found the answer on askubuntu.

All I needed to do was run the following:

xmodmap -e "keycode 118 = End"

And now even when the function keys are locked the End/Insert key always behaves as End. To make this permanent so that the mapping gets loaded when X11 starts, add xmodmap -e "keycode 118 = End" to your ~/.xinitrc.

Jason Ronallo: Styling HTML5 Video with CSS

Thu, 2016-09-22 15:44

If you add an image to an HTML document you can style it with CSS. You can add borders, change its opacity, use CSS animations, and lots more. HTML5 video is just as easy to add to your pages and you can style video too. Lots of tutorials will show you how to style video controls, but I haven’t seen anything that will show you how to style the video itself. Read on for an extreme example of styling video just to show what’s possible.

Here’s a simple example of a video with a single source wrapped in a div:

<div id="styled_video_container">
  <video src="/video/wind.mp4" type="video/mp4" controls poster="/video/wind.png" id="styled_video" muted preload="metadata" loop></video>
</div>

Add some buttons under the video to style and play the video and then to stop the madness.

<button type="button" id="style_it">Style It!</button>
<button type="button" id="stop_style_it">Stop It!</button>

We’ll use this JavaScript just to add a class to the containing element of the video and play/pause the video.

jQuery(document).ready(function($) {
  $('#style_it').on('click', function(){
    $('#styled_video')[0].play();
    $('#styled_video_container').addClass('style_it');
  });
  $('#stop_style_it').on('click', function(){
    $('#styled_video_container').removeClass('style_it');
    $('#styled_video')[0].pause();
  });
});

Using the class that gets added we can then style and animate the video element with CSS. This is a simplified version without vendor flags.

#styled_video_container.style_it {
  background: linear-gradient(to bottom, #ff670f 0%, #e20d0d 100%);
}

#styled_video_container.style_it video {
  border: 10px solid green !important;
  opacity: 0.6;
  transition: all 8s ease-in-out;
  transform: rotate(300deg);
  box-shadow: 12px 9px 13px rgba(255, 0, 255, 0.75);
}


OK, maybe there aren’t a lot of practical uses for styling video with CSS, but it is still fun to know that we can. Do you have a practical use for styling video with CSS that you can share?

Jason Ronallo: HTML5 Video Caption Cue Settings in WebVTT

Thu, 2016-09-22 15:44

TL;DR Check out my tool to better understand how cue settings position captions for HTML5 video.

Having video be a part of the Web with HTML5 <video> opens up a lot of new opportunities for creating rich video experiences. Being able to style video with CSS and control it with the JavaScript API makes it possible to do fun stuff and to create accessible players and a consistent experience across browsers. With better support in browsers for timed text tracks in the <track> element, I hope to see more captioned video.

An important consideration in creating really professional looking closed captions is placing them correctly. I don’t rely on captions, but I do increasingly turn them on to improve my viewing experience. I’ve come to appreciate some attributes of really well done captions. Accuracy is certainly important. The captions should match the words spoken. As someone who can hear, I see inaccurate captions all too often. Thoroughness is another factor. Are all the sounds important for the action represented in captions? Captions will often include a “music” caption, but other sounds, especially those off screen, are often omitted. But accuracy and thoroughness aren’t the only factors to consider when evaluating caption quality.

Placement of captions can be equally important. The captions should not block other important content. They should not run off the edge of the screen. If two speakers are on screen you want the appropriate captions to be placed near each speaker. If a sound or voice is coming from off screen, the caption is best placed as close to the source as possible. These extra clues can help with understanding the content and action. These are the basics. There are other style guidelines for producing good captions. Producing good captions is something of an art form. More than two rows long is usually too much, and rows ought to be split at phrase breaks. Periods should be used to end sentences and are usually the end of a single cue. There’s judgment necessary to have pleasing phrasing.

While there are tools for doing this proper placement for television and burned-in captions, I haven’t found a tool for this for Web video. While I don’t yet have a tool to do this, in the following I’ll show you how to:

  • Use the JavaScript API to dynamically change cue text and settings.
  • Control placement of captions for your HTML5 video using cue settings.
  • Play around with different cue settings to better understand how they work.
  • Style captions with CSS.

Track and Cue JavaScript API

The <video> element has an API which allows you to get a list of all tracks for that video.

Let’s say we have the following video markup which is the only video on the page. This video is embedded far below, so you should be able to run these in the console of your developer tools right now.

<video poster="soybean-talk-clip.png" controls autoplay loop>
  <source src="soybean-talk-clip.mp4" type="video/mp4">
  <track label="Captions" kind="captions" srclang="en" src="soybean-talk-clip.vtt" id="soybean-talk-clip-captions" default>
</video>

Here we get the first video on the page:

var video = document.getElementsByTagName('video')[0];

You can then get all the tracks (in this case just one) with the following:

var tracks = video.textTracks; // returns a TextTrackList
var track = tracks[0]; // returns a TextTrack

Alternately, if your track element has an id you can get it more directly:

var track = document.getElementById('soybean-talk-clip-captions').track;

Once you have the track you can see the kind, label, and language:

track.kind; // "captions"
track.label; // "Captions"
track.language; // "en"

You can also get all the cues as a TextTrackCueList:

var cues = track.cues; // TextTrackCueList

In our example we have just two cues. We can also get just the active cues (in this case only one so far):

var active_cues = track.activeCues; // TextTrackCueList

Now we can see the text of the current cue:

var text = active_cues[0].text;

Now the really interesting part is that we can change the text of the caption dynamically and it will immediately change:

track.activeCues[0].text = "This is a completely different caption text!!!!1";

Cue Settings

We can also then change the position of the cue using cue settings. The following will move the first active cue to the top of the video.

track.activeCues[0].line = 1;

The cue can also be aligned to the start of the line position:

track.activeCues[0].align = "start";
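Cue settings like these can also be grouped and applied in one call. Here is a minimal sketch (the helper name is my own, not part of the API), using a plain object in place of a real TextTrackCue so it runs anywhere:

```javascript
// Apply a group of WebVTT cue settings at once to any object that
// exposes the cue properties (such as track.activeCues[0] in a browser).
function applyCueSettings(cue, settings) {
  ['line', 'align', 'position', 'size'].forEach(function(key) {
    if (settings[key] !== undefined) {
      cue[key] = settings[key];
    }
  });
  return cue;
}

// A plain object stands in for a real cue here.
var cue = { text: 'Hello', line: 'auto', align: 'center' };
applyCueSettings(cue, { line: 1, align: 'start' });
```

In a browser you would pass track.activeCues[0] instead of the stand-in object.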

Now for one last trick we’ll add another cue with the arguments of start time and end time in seconds and the cue text:

var new_cue = new VTTCue(1, 30, "This is the text of the new cue.");

We’ll set a position for our new cue before we place it in the track:

new_cue.line = 5;

Then we can add the cue to the track:

track.addCue(new_cue);
And now you should see your new cue for most of the duration of the video.

Playing with Cue Settings

The other settings you can play with include position and size. Position is the text position as a percentage of the width of the video. Size is the width of the cue as a percentage of the width of the video.
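To make the percentage arithmetic concrete, here is a sketch (the helper is illustrative, not part of the API) of how position and size translate into pixels for a cue box; the exact box a browser renders also depends on the cue's alignment:

```javascript
// Given a video width in pixels and a cue's position and size settings
// (both percentages of the video width), compute the cue box's left
// edge and width in pixels. Alignment is ignored for simplicity.
function cueBox(videoWidth, position, size) {
  return {
    left: videoWidth * position / 100,
    width: videoWidth * size / 100
  };
}

var box = cueBox(640, 10, 50);
```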

While I could go through all of the different cue settings, I found it easier to understand them after I built a demonstration of dynamically changing all the cue settings. There you can play around with all the settings together to see how they actually interact with each other.

At least as of the time of this writing there is some variability between how different browsers apply these settings.

Test WebVTT Cue Settings and Styling

Cue Settings in WebVTT

I’m honestly still a bit confused about all of the optional ways in which cue settings can be defined in WebVTT. The demonstration outputs the simplest and most straightforward representation of cue settings. You’d have to read the spec for optional ways to apply some cue settings in WebVTT.

Styling Cues

In browsers that support styling of cues (Chrome, Opera, Safari), the demonstration also allows you to apply styling to cues in a few different ways. This CSS code is included in the demo to show some simple examples of styling.

::cue(.red){ color: red; }
::cue(.blue){ color: blue; }
::cue(.green){ color: green; }
::cue(.yellow){ color: yellow; }
::cue(.background-red){ background-color: red; }
::cue(.background-blue){ background-color: blue; }
::cue(.background-green){ background-color: green; }
::cue(.background-yellow){ background-color: yellow; }

Then the following cue text can be added to show red text with a yellow background.

<c.red.background-yellow>This cue has red text with a yellow background.</c>

In the demo you can see which text styles are supported by which browsers for styling the ::cue pseudo-element. There’s a text box at the bottom that allows you to enter any arbitrary styles and see what effect they have.

Example Video

Test WebVTT Cue Settings and Styling

Jason Ronallo: HTML Slide Decks With Synchronized and Interactive Audience Notes Using WebSockets

Thu, 2016-09-22 15:44

One question I got asked after giving my Code4Lib presentation on WebSockets was how I created my slides. I’ve written about how I create HTML slides before, but this time I added some new features like an audience interface that synchronizes automatically with the slides and allows for audience participation.

TL;DR I’ve open sourced starterdeck-node for creating synchronized and interactive HTML slide decks.

Not every time that I give a presentation am I able to use the technologies that I am talking about within the presentation itself, so I like to do it when I can. I write my slide decks as Markdown and convert them with Pandoc to HTML slides which use DZslides for slide sizing and animations. I use a browser to present the slides. Working this way with HTML has allowed me to do things like embed HTML5 video into a presentation on HTML5 video and show examples of the JavaScript API and how videos can be styled with CSS.

For a presentation on WebSockets I gave at Code4Lib 2014, I wanted to provide another example from within the presentation itself of what you can do with WebSockets. If you have the slides and the audience notes handout page open at the same time, you will see how they are synchronized. (Beware slowness as it is a large self-contained HTML download using data URIs.) When you change to certain slides in the presenter view, new content is revealed in the audience view. Because the slides are just an HTML page, it is possible to make the slides more interactive. WebSockets are used to allow the slides to send messages to each audience members’ browser and reveal notes. I am never able to say everything that I would want to in one short 20 minute talk, so this provided me a way to give the audience some supplementary material.
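The synchronization itself only needs a tiny message protocol. As a sketch (the message shape here is my invention, not necessarily starterdeck-node's actual protocol), the presenter view might broadcast a JSON message on each slide change and the audience page would react to it:

```javascript
// Presenter side: build the message sent over the WebSocket when the
// slide changes.
function slideChangeMessage(slideId) {
  return JSON.stringify({ type: 'slidechange', slide: slideId });
}

// Audience side: decide what to reveal based on an incoming message.
function handleMessage(raw) {
  var msg = JSON.parse(raw);
  if (msg.type === 'slidechange') {
    return 'reveal-notes-for-' + msg.slide;
  }
  return null;
}

var action = handleMessage(slideChangeMessage('websockets-demo'));
```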

Within the slides I even included a simplistic chat application that allowed the audience to send messages directly to the presenter slides. (Every talk on WebSockets needs a gratuitous chat application.) At the end of the talk I also accepted questions from the audience via an input field. The questions were then delivered to the slides via WebSockets and displayed right within a slide using a little JavaScript. What I like most about this is that even someone who did not feel confident enough to step up to a microphone would have the opportunity to ask an anonymous question. And I even got a few legitimate questions amongst the requests for me to dance.

Another nice side benefit of getting the audience to the notes page before the presentation starts is that you can include your contact information and Twitter handle on the page.

I have wrapped up all this functionality for creating interactive slide decks into a project called starterdeck-node. It includes the WebSocket server and a simple starting point for creating your own slides. It strings together a bunch of different tools to make creating and deploying slide decks like this simpler so you’ll need to look at the requirements. This is still definitely just a tool for hackers, but having this scaffolding in place ought to make the next slide deck easier to create.

Here’s a video where I show starterdeck-node at work. Slides on the left; audience notes on the right.

Other Features

While the new exciting feature added in this version of the project is synchronization between presenter slides and audience notes, there are also lots of other great features if you want to create HTML slide decks. Even if you aren’t going to use the synchronization feature, there are still lots of reasons why you might want to create your HTML slides with starterdeck-node.

Self-contained HTML. Pandoc uses data URIs so that the HTML version of your slides has no external dependencies. Everything, including images, video, JavaScript, CSS, and fonts, is embedded within a single HTML document. That means that even if there’s no internet connection from the podium you’ll still be able to deliver your presentation.

Onstage view. Part of what gets built is a DZSlides onstage view where the presenter can see the current slide, next slide, speaker notes, and current time.

Single page view. This view is a self-contained, single-page layout version of the slides and speaker notes. This is a much nicer way to read a presentation than just flipping through the slides on various slide sharing sites. If you put a lot of work into your talk and are writing speaker notes, this is a great way to reuse them.

PDF backup. A script is included to create a PDF backup of your presentation. Sometimes you have to use the computer at the podium and it has an old version of IE on it. PDF backup to the rescue. While you won’t get all the features of the HTML presentation you’re still in business. The included Node.js app provides a server so that a headless browser can take screenshots of each slide. These screenshots are then compiled into the PDF.


I’d love to hear from anyone who tries to use it. I’ll list any examples I hear about below.

Here are some examples of slide decks that have used starterdeck-node or starterdeck.

Jason Ronallo: A Plugin For Mediaelement.js For Preview Thumbnails on Hover Over the Time Rail Using WebVTT

Thu, 2016-09-22 15:44

The time rail or progress bar on video players gives the viewer some indication of how much of the video they’ve watched, what portion of the video remains to be viewed, and how much of the video is buffered. The time rail can also be clicked on to jump to a particular time within the video. But figuring out where in the video you want to go can feel kind of random. You can usually hover over the time rail and move from side to side and see the time that you’d jump to if you clicked, but who knows what you might see when you get there.

Some video players have begun to use the time rail to show video thumbnails on hover in a tooltip. For most videos these thumbnails give a much better idea of what you’ll see when you click to jump to that time. I’ll show you how you can create your own thumbnail previews using HTML5 video.

TL;DR Use the time rail thumbnails plugin for Mediaelement.js.

Archival Use Case

We usually follow agile practices in our archival processing. This style of processing was popularized by the article More Product, Less Process: Revamping Traditional Archival Processing by Mark A. Greene and Dennis Meissner. For instance, we don’t read every page of every folder in every box of every collection in order to describe it well enough for us to make the collection accessible to researchers. Over time we may decide to make the materials for a particular collection or parts of a collection more discoverable by doing the work to look closer and add more metadata to our description of the contents. But we try not to let the perfect be the enemy of the good enough. Our goal is to make the materials accessible to researchers and not hidden in some box no one knows about.

Some of our collections of videos are highly curated, such as our video oral histories. We’ve created transcripts for the whole video. We extract the most interesting or on-topic clips. For each of these video clips we create a WebVTT caption file and an interface to navigate within the video from the transcript.

At NCSU Libraries we have begun digitizing more archival videos. And for these videos we’re much more likely to treat them like other archival materials. We’re never going to watch every minute of every video about cucumbers or agricultural machinery in order to fully describe the contents. Digitization gives us some opportunities to automate the summarization that would be manually done with physical materials. Many of these videos don’t even have dialogue, so even when automated video transcription is more accurate and cheaper we’ll still be left with only the images. In any case, the visual component is a good place to start.

Video Thumbnail Previews

When you hover over the time rail on some video viewers, you see a thumbnail image from the video at that time. YouTube does this for many of its videos. I first saw that this would be possible with HTML5 video when I saw the JW Player page on Adding Preview Thumbnails. From there I took the idea to use an image sprite and a WebVTT file to structure which media fragments from the sprite to use in the thumbnail preview. I’ve implemented this as a plugin for Mediaelement.js. You can see detailed instructions there on how to use the plugin, but I’ll give the summary here.

1. Create an Image Sprite from the Video

This uses ffmpeg to take a snapshot every 5 seconds in the video and then uses montage (from ImageMagick) to stitch them together into a sprite. This means that only one file needs to be downloaded before you can show the preview thumbnail.

ffmpeg -i "video-name.mp4" -f image2 -vf fps=fps=1/5 video-name-%05d.jpg
montage video-name*jpg -tile 5x -geometry 150x video-name-sprite.jpg

2. Create a WebVTT metadata file

This is just a standard WebVTT file except the cue text is metadata instead of captions. The URL is to an image and uses a spatial Media Fragment for what part of the sprite to display in the tooltip.
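On the JavaScript side, those coordinates have to be pulled back out of the cue text before they can crop the sprite. A sketch (the parser and names are my own; the plugin itself relies on vtt.js and handles this differently):

```javascript
// Extract the x, y, width, and height from a spatial Media Fragment
// like "video-name-sprite.jpg#xywh=150,0,150,100".
function parseXywh(cueText) {
  var match = cueText.match(/#xywh=(\d+),(\d+),(\d+),(\d+)/);
  if (!match) return null;
  return {
    x: parseInt(match[1], 10),
    y: parseInt(match[2], 10),
    w: parseInt(match[3], 10),
    h: parseInt(match[4], 10)
  };
}

var frag = parseXywh('video-name-sprite.jpg#xywh=150,0,150,100');
```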

WEBVTT

00:00:00.000 --> 00:00:05.000
video-name-sprite.jpg#xywh=0,0,150,100

00:00:05.000 --> 00:00:10.000
video-name-sprite.jpg#xywh=150,0,150,100

00:00:10.000 --> 00:00:15.000
video-name-sprite.jpg#xywh=300,0,150,100

00:00:15.000 --> 00:00:20.000
video-name-sprite.jpg#xywh=450,0,150,100

00:00:20.000 --> 00:00:25.000
video-name-sprite.jpg#xywh=600,0,150,100

00:00:25.000 --> 00:00:30.000
video-name-sprite.jpg#xywh=0,100,150,100

3. Add the Video Thumbnail Preview Track

Put the following within the <video> element.

<track kind="metadata" class="time-rail-thumbnails" src=""></track>

4. Initialize the Plugin

The following assumes that you’re already using Mediaelement.js, jQuery, and have included the vtt.js library.

$('video').mediaelementplayer({
  features: ['playpause', 'progress', 'current', 'duration', 'tracks', 'volume', 'timerailthumbnails'],
  timeRailThumbnailsSeconds: 5
});

The Result

Your browser won’t play an MP4. You can [download it instead](/video/mep-feature-time-rail-thumbnails-example.mp4).

See Bug Sprays and Pets with sound.


The plugin can either be installed using the Rails gem or the Bower package.


One of the DOM API features I hadn’t used before is MutationObserver. One thing the thumbnail preview plugin needs to do is know what time is being hovered over on the time rail. I could have calculated this myself, but I wanted to rely on MediaElement.js to provide the information. Maybe there’s a callback in MediaElement.js for when this is updated, but I couldn’t find it. Instead I use a MutationObserver to watch for when MediaElement.js changes the DOM for the default display of a timestamp on hover. Looking at the time code there then allows the plugin to pick the correct cue text to use for the media fragment. MutationObserver is more performant than the now deprecated MutationEvents. I’ve experienced very little latency using a MutationObserver which allows it to trigger lots of events quickly.

The plugin currently only works in the browsers that support MutationObserver, which is most current browsers. In browsers that do not support MutationObserver the plugin will do nothing at all and just show the default timestamp on hover. I’d be interested in other ideas on how to solve this kind of problem, though it is nice to know that plugins that rely on another library have tools like MutationObserver around.
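The graceful fallback comes down to a simple feature test. A sketch (the helper name is mine) of the check a plugin can make before wiring anything up:

```javascript
// Only set up the observer where the MutationObserver constructor
// actually exists; otherwise fall back to the default hover timestamp.
function supportsMutationObserver(globalObj) {
  return typeof globalObj.MutationObserver === 'function';
}

var root = typeof window !== 'undefined' ? window : {};
if (supportsMutationObserver(root)) {
  // Create the observer and watch the time tooltip element here.
}
```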

Other Caveats

This plugin is brand new and works for me, but there are some caveats. All the images in the sprite must have the same dimensions. The durations for each thumbnail must be consistent. The timestamps currently aren’t really used to determine which thumbnail to display; the selection instead relies on the consistent durations. The plugin just does some simple arithmetic and plucks the correct thumbnail from the array of cues. Hopefully in future versions I can address some of these issues.
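The arithmetic described above can be sketched like this (the function names are my own):

```javascript
// With consistent thumbnail durations, the cue index is simple
// division on the hovered time rather than a search through cue
// start and end times.
function thumbnailIndex(hoverSeconds, secondsPerThumbnail) {
  return Math.floor(hoverSeconds / secondsPerThumbnail);
}

// Parse an "mm:ss" timestamp like the one shown in the hover tooltip.
function parseTimestamp(text) {
  var parts = text.split(':').map(Number);
  return parts[0] * 60 + parts[1];
}

var index = thumbnailIndex(parseTimestamp('00:12'), 5);
```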


With this feature available for our digitized video, we’ve already found things in our collection that we wouldn’t have seen before. You can see how a “Profession with a Future” evidently involves shortening your life by smoking (at about 9:05). I found a spinning spherical display of Soy-O and synthetic meat (at about 2:12). Some videos switch between black & white and color which you wouldn’t know just from the poster image. And there are some videos, like talking heads, that appear from the thumbnails to have no surprises at all. But maybe you like watching boiling water for almost 13 minutes.

OK, this isn’t really a discovery in itself, but it is fun to watch a head banging JFK as you go back and forth over the time rail. He really likes milk. And Eisenhower had a different speaking style.

You can see this in action for all of our videos on the NCSU Libraries’ Rare & Unique Digital Collections site and make your own discoveries. Let me know if you find anything interesting.

Preview Thumbnail Sprite Reuse

Since we already had the sprite images for the time rail hover preview, I created another interface to allow a user to jump through a video. Under the video player is a control button that shows a modal with the thumbnail sprite. The sprite alone provides a nice overview of the video that allows you to see very quickly what might be of interest. I used an image map so that the rather large sprite images would only have to be in memory once. (Yes, image maps are still valid in HTML5 and have their legitimate uses.) jQuery RWD Image Maps allows the map area coordinates to scale up and down across devices. Hovering over a single thumb will show the timestamp for that frame. Clicking a thumbnail will set the current time for the video to be the start time of that section of the video. One advantage of this feature is that it doesn’t require the kind of fine motor skill necessary to hover over the video player time rail and move back and forth to show each of the thumbnails.
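The click handling reduces to mapping a thumbnail back to its start time. A sketch (names are mine; a plain object stands in for the video element):

```javascript
// Jump the video to the start time of the clicked thumbnail.
function seekToThumbnail(video, index, secondsPerThumbnail) {
  video.currentTime = index * secondsPerThumbnail;
  return video.currentTime;
}

var video = { currentTime: 0 };
seekToThumbnail(video, 7, 5);
```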

This feature was added and deployed to production just this week, so I’m looking for feedback on whether folks find it useful, how to improve it, and any bugs that are encountered.

Summarization Services

I expect that automated summarization services will become increasingly important for researchers as archives do more large-scale digitization of physical collections and collect more born digital resources in bulk. We’re already seeing projects like fondz which autogenerates archival description by extracting the contents of born digital resources. At NCSU Libraries we’re working on other ways to summarize the metadata we create as we ingest born digital collections. As we learn more what summarization services and interfaces are useful for researchers, I hope to see more work done in this area. And this is just the beginning of what we can do with summarizing archival video.

LITA: Social Media For My Institution – a new LITA web course

Thu, 2016-09-22 14:22

Social Media For My Institution: from “mine” to “ours”

Instructor: Dr. Plamen Miltenoff
Wednesdays, 10/19/2016 – 11/9/2016
Blended format web course

Register Online, page arranged by session date (login required)

This course has been re-scheduled from a previous date.

A course for librarians who want to explore the institutional application of social media, based on an established academic course at St. Cloud State University, “Social Media in Global Context”. This course will critically examine the institutional need for social media (SM) and juxtapose it with private use. It will discuss how to choose among current and future SM tools, present a theoretical introduction to the subculture of social media, and show how to align library SM policies with the goals and mission of the institution. There will be hands-on exercises on the creation and dissemination of textual and multimedia content and on patron engagement. The course will also include brainstorming on strategies suitable for the institution regarding human and technological resources, workload sharing, storytelling, and branding, and related issues such as privacy and security.

This is a blended format web course:

The course will be delivered as 4 separate live webinar lectures, one per week on Wednesdays, October 19, 26, November 2, and 9 at 2pm Central. You do not have to attend the live lectures in order to participate. The webinars will be recorded and distributed through the web course platform, Moodle, for asynchronous participation. The web course space will also contain the exercises and discussions for the course.

Details here and Registration here


By the end of this class, participants will be able to:

  • Move from the state of personal use of social media (SM) and contemplate the institutional approach
  • Have a hands-on experience with finding and selecting multimedia resources and their application for branding of the institution
  • Acquire the foundational structure of the elements which constitute meaningful institutional social media

Dr. Plamen Miltenoff is an information specialist and Professor at St. Cloud State University. His education includes several graduate degrees in history and Library and Information Science and in education. His professional interests encompass social Web development and design, gaming and gamification environments. For more information see

And don’t miss other upcoming LITA fall continuing education offerings:

Beyond Usage Statistics: How to use Google Analytics to Improve your Repository
Presenter: Hui Zhang
Tuesday, October 11, 2016
11:00 am – 12:30 pm Central Time
Register Online, page arranged by session date (login required)

Online Productivity Tools: Smart Shortcuts and Clever Tricks
Presenter: Jaclyn McKewan
Tuesday November 8, 2016
11:00 am – 12:30 pm Central Time
Register Online, page arranged by session date (login required)

Questions or Comments?

For questions or comments, contact LITA at (312) 280-4268 or Mark Beatty,

Open Knowledge Foundation: Spotlight on tax evasion: Connecting with citizens and activists working on tax justice campaigns across Africa

Thu, 2016-09-22 10:40

Open Knowledge International is coordinating the Open Data for Tax Justice project in partnership with the Tax Justice Network to create a global network of people and organisations using open data to inform local and global efforts around tax justice.

Tax evasion, corruption and illicit financial flows rob countries around the world of billions in revenue which could be spent on improving life for citizens.

That much can be agreed. But how many billions are lost, who is responsible and which countries are worst affected? Those are difficult questions to answer given the lack of transparency and public disclosure in many tax jurisdictions.

The consensus is that it is the economies of the world’s poorest countries which are proportionally most affected by this revenue loss, with African governments estimated to be losing between $30 billion and $60 billion a year to tax evasion or illicit financial flows, according to a 2015 report commissioned by the African Union and United Nations.

International bodies have been slow to produce solutions which fight for the equitable sharing of tax revenues, with lobbying leading to the retrenchment of proposed transparency measures and the scuppering of efforts to create a global tax body under the auspices of the UN.

More transparency and public information is needed to understand the true extent of these issues. To that end, Open Knowledge International is coordinating the Open Data for Tax Justice project with the Tax Justice Network to create a global network of people and organisations using open data to improve advocacy, journalism and public policy around tax justice.

And last week, I joined the third iteration of the International Tax Justice Academy, organised by the Tax Justice Network – Africa, to connect with advocates working to shed light on these issues across Africa.

The picture they painted over three days was bleak: Dr Dereje Alemayehu laid out how the views of African countries had been marginalised or ignored in international tax negotiations due in part to a lack of strong regional power blocs; Jane Nalunga of SEATINI-Uganda bemoaned politicians who continue to “talk left, walk right” when it comes to taking action on cracking down on corrupt or illicit practices; and Professor Patrick Bond of South Africa’s Witwatersrand School of Governance foresaw a rise in violent economic protests across Africa as people become more and more aware of how their natural capital is being eroded.

Several speakers said that an absence of data, low public awareness, lack of political will and poor national or regional coordination all hampered efforts to generate action on illicit financial flows in countries across Africa. Everyone agreed that these debates are not helped by the opacity of key tax terms like transfer pricing, country-by-country reporting and beneficial ownership.

“…an absence of data, low public awareness, lack of political will and poor national or regional coordination all hampered efforts to generate action on illicit financial flows”

The governments of South Africa, Nigeria, Kenya and Tanzania may all have publicly pledged measures like creating beneficial ownership registers to stop individuals hiding their wealth or activities behind anonymous company structures. But at the same time, a key concern of those attending the academy was the closing of civic space in many countries across the continent, making it harder for them to carry out their work and investigate such activities.

Michael Otieno of the Tax Justice Network – Africa told delegates that they should set the advocacy agenda around tax to ensure that human rights and development issues could be understood by the public in the context of how taxes are collected, allocated and spent. He encouraged all those present to combine forces by adding their voices to the Stop the Bleeding campaign to end illicit financial flows from Africa.

Through our Open Data for Tax Justice project, Open Knowledge International will be looking to incorporate the views of more civil society groups, advocates and public policy makers like those at the tax justice academy into our work. If you would like to join us or learn more about the project, please email

Evergreen ILS: Evergreen 2.9.8 and 2.10.7 released

Thu, 2016-09-22 01:26

We are pleased to announce the release of Evergreen 2.9.8 and 2.10.7, both bugfix releases.

Evergreen 2.9.8 fixes the following issues:

  • When adding a price to the Acquisitions Brief Record price field, it will now propagate to the lineitem estimated price field.
  • Declares UTF-8 encoding when printing from the catalog to resolve issues where non-ASCII characters printed incorrectly in some browsers.
  • Fixes an issue where the circ module sometimes skipped over booking logic even when booking was running on a system.

Evergreen 2.10.7 fixes the same issues fixed in 2.9.8, and also fixes the following:

  • Fixes an issue where the workstation parameter was not passed through the login function, causing problems with opt-in settings and transit behaviors.

Please visit the downloads page to retrieve the server software and staff clients.

DuraSpace News: AVAILABLE: Fedora Camel Component 4.4.4

Thu, 2016-09-22 00:00

From Aaron Coburn, systems administrator and programmer, Academic Technology Services, Amherst College

Amherst, MA  I would like to announce the 4.4.4 release of the Fedora Camel component.

This is a patch release that deprecates two endpoint options: transform and tombstone. Those options are still available in this release, but they will log a warning; they will be completely removed in the 4.5.0 release.

DuraSpace News: TAKE A LOOK: DuraSpace Pictures on Instagram

Thu, 2016-09-22 00:00

Austin, TX  Who doesn't like a pretty picture? DuraSpace Pictures on Instagram features community news and updates plus photographs of people and places related to DuraSpace initiatives and activities. Follow us on Instagram:

LibUX: On the User Experience of Ebooks

Wed, 2016-09-21 20:30

When it comes to ebooks I am in the minority: I prefer them to the real thing. The aesthetic or whats-it about the musty trappings of paper and ink or looming space-sapping towers of shelving just doesn’t capture my fancy. But these are precisely the go-to attributes people wax poetic about — and you can’t deny there’s something to it.

In fact, beyond convenience, ebooks don’t have much to offer in terms of user experience. They are certainly not as convenient as they could be.

All the storytelling power of the web is lost on such a stubbornly static industry where print – where it should be most advantageous – drags its feet. Write in the gloss on, but not in an ebook; embellish a narrative with animation at the New York Times (a newspaper), but not in an ebook; share, borrow, copy, paste, link to anything but an ebook.

Note what is lacking when it comes to ebook’s advantages: the user experience. True, some people certainly prefer an e-reader (or their phone or tablet), but a physical book has its advantages as well: relative indestructibility, and little regret if it is destroyed or lost; tangibility, both in regards to feel and in the ability to notate; the ability to share or borrow; and, of course, the fact a book is an escape from the screens we look at nearly constantly. At the very best the user experience comparison (excluding the convenience factor) is a push; I’d argue it tilts towards physical books.

Ben Thompson
Disconfirming Ebooks

All things being equal, where ebooks fall short could be made up for by the near-zero cost of their distribution, but the rarely discounted price of the ebook is often higher – and especially costly considering that readers neither own nor can legally migrate their ebook-as-licensed-software to a device, medium, or format where the user experience could be improved.

This aligns with data demonstrating that while ebook access increases with the proliferation of internet-connected devices and even with the number of ebook lending programs in libraries, the number of people reading ebooks isn’t meaningfully pulling away from those reading print – like we all imagined it might when this stuff was science fiction.

Grim reader

Similar reports last year that seemed to signal the death of the ebook — you might dig my podcast on the ebookalypse — were a misreading that totally ignored the sales of ebooks without ISBNs — you know, the self-publishers! — and proved not that the ebook was a lost cause but that Amazon dominates because of the ubiquity of the Kindle and its superior bookstore.

There, big-publisher books are forced to a fixed price using an Amazon-controlled interface wherein authors add and easily publish good content on the cheap. Again we see how investing in even a slightly better user experience than everyone else is at the crux of creating a monopoly.

Ebook reading tends to be objectively better on a Kindle, and so the entire ebook market largely funnels through Amazon.

  • the price of ebooks is competitively low – or even free
  • ebooks, through Kindles or the Kindle app, are easy to download
  • while ebooks are still largely encumbered by DRM, readers already have a Kindle – so they don’t require inconvenient additional software or — what’s worse — have to be read on a computer
  • since ebook reading kind of sucks on other platforms, there’s not much incentive at present to port Kindle books anyway
  • features like WhisperSync enhance the reading experience in a way that isn’t available in print

— which is sort of what I was lamenting when I wrote, “all the storytelling power of the web is lost on such a stubbornly static industry where print – where it should be most advantageous – drags its feet.”

Other vendors, particularly those available to libraries, have so far been able to provide only a middling user experience that doesn’t do much for their desirability to either party. So, print wins out.

SearchHub: Now Everybody Knows Their Names…

Wed, 2016-09-21 17:01

As previously mentioned: On October 13th, Lucene/Solr Revolution 2016 will once again be hosting “Stump The Chump” in which I (The Chump) will be answering tough Solr questions — submitted by users like you — live, on stage, sight unseen.

Today, I’m happy to announce the Panel of experts that will be challenging me with those questions, and deciding which questions were able to Stump The Chump!

In addition to taunting me with the questions, and ridiculing all my attempts to stall while I rack my brain for answers, the Panel members will be responsible for deciding which questions did the best job of “Stumping” me and awarding prizes to the folks who submitted them.

Information on how to submit questions can be found on the session agenda page, and I’ll be posting more details with the Chump tag as we get closer to the conference.

(And don’t forget to register for the conference ASAP if you plan on attending!)


DPLA: 10 Ways to Use the Primary Source Sets in Your Classroom

Wed, 2016-09-21 16:00

Now that the school year is well underway, we are already hearing great things about how educators and students across the country are putting DPLA and its education resources to use. The Primary Source Sets, in particular, were designed to be versatile and adaptable for a broad variety of classroom environments, lessons, assignments and grade levels, so we wanted to share a few different ideas that demonstrate that versatility in action!

An 1830 pamphlet printed by the Cherokee nation discussing Indian Removal. Courtesy of Hargrett Library via Digital Library of Georgia.

Document-Based Questions, or DBQs, ask students to critically engage with primary sources and use evidence to support an argument or position. DBQs have traditionally been a hallmark of the AP History class to prep for the exam, but we see room for broad applications of DBQs across a variety of courses all year long.  Pull sources from the sets to devise a DBQ for your students or assign a question from one of the teaching guides.

Example: Question 4 from Jacksonian Democracy?: “Using Jackson’s message to Congress concerning Indian Removal and the 1830 pamphlet by the Cherokee nation, determine whether Indian Removal was a democratic action taken by the federal government or an invasion of Cherokee sovereignty.”

Ask students to analyze, interpret, or respond to a specific primary source from the sets to kick off your class session or lesson unit. Alternately, let students pick one source from a set and respond.

Example: Ask students to examine this photograph from the Immigration and Americanization, 1880-1930 set to begin your class session on late nineteenth century immigration. What does it reveal about the experience of immigrating to the US? What questions does it raise?

Have students pick a set and use the sources in their next research project on that topic. For a more focused selection, try a thematic subset like Science and Technology or Women.

Each Primary Source Set Teaching Guide has at least one suggested classroom activity.  Try a new way of bringing primary sources to life in your classroom:

A photo of a student protester carrying a sign depicting a burned draft card, 1969. Courtesy of Suffolk University, Moakley Archive & Institute via Digital Commonwealth.

  • Be Creative – Explore where history meets social media in this activity from the set on The Things They Carried.
  • Take a Stand – Students create a vintage radio or TV advertisement in small groups to raise awareness about polio prevention in this activity from “There is no cure for Polio”.
  • Engage – Have students explore the Civil Rights Movement in stations using primary sources after reading The Watsons Go To Birmingham – 1963 in this activity.
  • Debate – Student teams stake a claim and debate each other in this activity from the Texas Revolution set.

Use the primary source sets to help students make connections between past and present and add historical perspective to the headlines and news stories we see every day.

Ida B. Wells and Anti-Lynching Activism may offer an important historical counterpart to the #BlackLivesMatter Movement.

Sets on the Fifteenth Amendment and Fannie Lou Hamer and the Civil Rights Movement in Mississippi could help contextualize voting rights activism today.

Sets on immigration provide a historical lens for contemporary news stories about immigration of Latinos, Muslims, and Syrian refugees.

Photograph of a Charleston dance contest in St. Louis on November 13, 1925 from The Great Gatsby set. Courtesy of the Missouri History Museum via Missouri Hub.

Use the primary source sets to add historical and cultural context to works of literature.

Teacher Testimonials:
“I use the literature primary source sets after we read each novel. It’s especially helpful for students to see the connection between what we read as fiction and in the real world.”

“I just started my first semester…where I learned that one of the student learning outcomes for literature courses is something like ‘students will be able to situate literary texts within their cultural contexts.’  This learning outcome is being assessed right now, and there is some room for improvement. Primary source sets to the rescue!”

Starting a unit on American Colonization, the Revolutionary War, or the Civil War and Reconstruction? Use the time period filters to see all the sets from that era and mix and match sources from sets to complement your lesson and help students make connections between topics and ideas.

Teacher Testimonial:
“I will use materials from several sets, including the Underground Railroad, to teach the novel Kindred this semester.”

An American poster discouraging food waste to assist the European Allies. Courtesy of North Carolina Department of Cultural Resources via North Carolina Digital Heritage Center.

Select five examples of a type of media featured throughout the sets and analyze how they communicate a message to their audiences. For example, analyzing five posters featured in the sets can introduce students to visual thinking and build interpretation skills.

Example: Consider starting with the World War I: America Heads to War set, which includes a great selection of posters.

After analyzing the primary sources in a set, ask students to write their own discussion questions to add to the list provided in the teaching guide. Use student-generated questions to drive class discussion and analysis of the topic.

Example: Check out the discussion questions in the teaching guide for Attacks on American Soil: Pearl Harbor and September 11 as a starting point and then add your own.

Using the DPLA sets as inspiration, have students create their own primary source sets. Student sets could be as simple as a list of links in a document or more elaborate using images on a website. Students can identify items in DPLA and write an overview about the chosen topic.

Teacher Testimonial: “Before reading Code Name Verity, which is a Young Adult historical fiction novel, students had to locate 5 different primary sources about WWII on the DPLA website and then analyze them before sharing them with the class. Students were able to easily navigate the website.”

DPLA is an ever-growing resource and we’ll be working to create exhibitions and primary source sets and develop new educational opportunities all year so let’s keep in touch!

  • Stay in the loop and get all the updates from the education department by joining our email list for education.
  • And let us know about your experience using the primary source sets or DPLA in your class by emailing  Your feedback will impact our future work!

LITA: Volunteer for LITA!

Wed, 2016-09-21 14:10

Do you want to…

  • learn and apply valuable skills?
  • meet colleagues from all over the US (and maybe even beyond)?
  • help your colleagues learn, grow, and have great experiences with LITA?

Then please volunteer for a LITA committee!


As the LITA Vice President, I’m responsible (along with the Appointments Committee) for making committee appointments happen. What am I looking for?

People who get things done. If you’re a worker bee, a visionary, an artist, a coder, a problem-solver, a community builder, an initiative-taker, or anyone else ready to pitch in, I want you on our committees. (Conversely, I’m not looking for anyone who’s just here for a line on their CV.)

A diverse range of people. Our committees should reflect not just librarianship today, but the fully inclusive librarianship I’d like to see tomorrow — and that starts with making sure our leaders and our voices embrace a wide range. I want to appoint people from a variety of backgrounds, including perspectives from traditionally underrepresented groups.

If you’re inclined toward accomplishment (not just participation), and/or you bring a voice we don’t hear enough of around LITA, please say so on the committee volunteer form so that we know to flag you.

Wondering what the process looks like after you’ve submitted your volunteer form? Well, assuming I’ve got the code on my appointments app right, and assuming you put a working email on your volunteer form (please do this!), you should get an email with the details within a week after submitting your form.

I’m looking forward to hearing from you!

Library of Congress: The Signal: Carla Hayden: Harnessing the Power of Technology with the Resources at the Library of Congress

Wed, 2016-09-21 13:44

This is an excerpt from the inaugural speech by Carla Hayden, the Librarian of Congress.

The 14th Librarian of Congress, Carla Hayden. Photo by Shawn Miller.

Today, through the power of technology, thousands around the country are able to watch this ceremony live. This is the opportunity to build on the contributions of the Librarians who have come before, to realize a vision of a national library that reaches outside the limits of Washington.

When I contemplate the potential of harnessing that power of technology with the unparalleled resources at the Library of Congress, I am overwhelmed with the possibilities…This Library holds some of the world’s largest collections from maps to comic books; founding documents like Thomas Jefferson’s handwritten draft of the Declaration of Independence; the full papers of 23 presidents, and the works of eminent Americans such as Samuel Morse, Frederick Douglass, Clara Barton, Leonard Bernstein, Bob Hope and Thurgood Marshall.

What is the possibility for those treasures? How are they relevant today? I am reminded of a moment during the unrest in the City of Baltimore in April 2015. The Pennsylvania Avenue Branch library was located in the center of those events. But I made the decision to keep the library open, to provide a safe place for our citizens to gather. I was there, hand in hand with the staff, as we opened the doors every morning. Cars were still smoldering in the streets. Closed signs were hanging in storefronts for blocks. But people lined up outside the doors of the library. I remember in particular a young girl coming up to me and asking, “What’s the matter? What is everyone so upset about?” She came to the library for sanctuary and understanding.

Librarian of Congress Carla Hayden reads to children from Brent Elementary school in the Young Readers Center, September 16, 2016. Photo by Shawn Miller.

I recently had the opportunity to view one of the latest Library of Congress acquisitions – the Rosa Parks Collection – which includes her family bible, the bible she carried in her purse, and her handwritten letters. In one such letter she reflects on her December 1, 1955 arrest, writing, “I had been pushed around all my life and felt at this moment that I couldn’t take it anymore.” That letter – and all of her papers – are now digitized and available online.

So anyone anywhere can read her words in her own handwriting. Read them in the classrooms of Racine, Wisconsin, in a small library on a reservation in New Mexico, and even in the library of a young girl in Baltimore, looking around as her city is in turmoil. That is a real public service. And a natural next step for this nation’s library, a place where you can touch history and imagine your future. This Library of Congress, a historic reference source for Congress, an established place for scholars, can also be a place where we grow scholars, where we inspire young authors, where we connect with those individuals outside the limits of Washington and help them make history themselves.

How do we accomplish this? By building on a legacy that depends so much on the people in this room. Not only the elected officials, who have quite a bit to say about the direction of this institution, but also the staff of the Library of Congress, my new colleagues, here on the mezzanine, watching in the Madison Hall, the Adams Café and the Montpelier Room; watching in Culpeper at the Packard Campus for audio/visual conservation; and watching at the National Library Services for the Blind and Physically Handicapped.

Public service has been such a motivating factor for me, in my life and my career. When I received the call from the White House about this opportunity, and was asked, “Will you serve?” Without hesitation I said “yes.” Throughout my career I have known the staff of the Library of Congress to be a dedicated and enthusiastic group of public servants. I look forward to working with you for years to come. But we cannot do it alone. I am calling on you, both who are here in person and those watching virtually, that to have a truly national library, an institution of opportunity for all: it is the responsibility of all.

That means collaborating with other institutions. That means private sector support and patriotic philanthropy for necessary projects like digitization. That means starting a new dialogue about connectivity to classrooms and other libraries. I cannot wait to work with all of you to seize this moment in our history. Let’s make history at the Library of Congress together.

Open Knowledge Foundation: New Open Knowledge Network chapters launched in Japan and Sweden

Wed, 2016-09-21 12:36

This month sees the launch of two new chapters in the Open Knowledge Network: a chapter for Japan and a chapter for Sweden. Chapters are the Open Knowledge Network’s most developed form of group; they have legal independence from the organisation and are affiliated by a Memorandum of Understanding. For a full list of our current chapters, see here, and to learn more about their structure visit the network guidelines.

Open Knowledge Japan is one of our oldest groups. Started in 2012, the group has done a lot of work promoting open data use in government. The group is also leading the open data day effort in Japan, with more than 60 local events around the country. This is our first chapter in East Asia.

Open Knowledge Sweden, the chapter in the land that implemented the first Freedom of Information legislation in 1766, is still active in promoting FOI through their platform Fragastaten, and very active in the hacks-for-heritage realm. They are currently part of the EU-funded project Clarity – Open eGovernment Services, and have just launched OKawards, which will be the first award in the region recognising Open Knowledge contributors from the public and private sector. They are our second chapter in the Nordic countries, joining their neighbours in Finland.

The Open Knowledge International global network now includes groups in over 40 countries, from Scotland to Cameroon, China to the Czech Republic. Eleven of these groups have now affiliated as chapters. This network of practice, made up of dedicated civic activists, openness specialists, and data diggers, is at the heart of the Open Knowledge International mission, and at the forefront of the movement for Open.

“The launch of these new chapters emphasizes the importance of openness in East Asia and the Nordic countries,” said Pavel Richter, Open Knowledge CEO. “These chapters are a manifestation of continuous engagement by volunteers around the world to work towards more open and accountable societies. We are looking forward to following their work and supporting their efforts in the future.”

One of the many events in Japan during open data day. Credit:

The Representative Director of Open Knowledge Japan, Masahiko Shoji, added, “Open Knowledge Japan has been leading the open data utilization and open knowledge movement in Japan in cooperation with 21 experts and ten companies. We are delighted to become an official Chapter of Open Knowledge International and share this joy with the active open data communities in Japan. We would like to move forward with other Asian Open Knowledge communities and fellows around the world.”


Members of OK SE in open data day. Credit:

Similarly, the Chairman of Open Knowledge Sweden, Serdar Temiz, said, “We are happy to be a closer part of the changemakers network in Open Knowledge. Being a chapter in the Open Knowledge Network is a great pleasure and a privilege. We are happy to be part of an organization that is at the forefront of the Open Knowledge movement. It is very motivating for us that, within two years of our initial period, OKI recognizes our efforts in the OK community and we could become one of the few official Chapters.”

Galen Charlton: How to build an evil library catalog

Wed, 2016-09-21 01:48

Consider a catalog for a small public library that features a way to sort search results by popularity. There are several ways to measure “popularity” of a book: circulations, hold requests, click-throughs in the catalog, downloads, patron-supplied ratings, place on bestseller lists, and so forth.

But let’s do a little thought experiment: let’s use a random number generator to calculate popularity.

However, the results will need to be plausible. It won’t do to have the catalog assert that the latest J.D. Robb book is gathering dust in the stacks. Conversely, the copy of the 1959 edition of The geology and paleontology of the Elk Mountain and Tabernacle Butte area, Wyoming that was given to the library right after the last weeding is never going to be a doorbuster.

So let’s be clever and ensure that the 500 most circulated titles in the collection retain their expected popularity rating. Let’s also leave books that have never circulated alone in their dark corners, as well as those that have no cover images available. The rest, we leave to the tender mercies of the RNG.
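The scheme above can be sketched in a few lines of Python. This is only an illustration of the thought experiment, not any real ILS code; the function name, record fields (`id`, `circ_count`, `has_cover`), and the 0.5 score split are all hypothetical:

```python
import random

def plausible_fake_popularity(records, seed=None):
    """Assign a fake-but-plausible 'popularity' score to each record.

    The 500 most circulated titles keep a high, circulation-based score,
    never-circulated or coverless titles stay at zero, and everything
    else is left to the tender mercies of the RNG.
    """
    rng = random.Random(seed)
    by_circ = sorted(records, key=lambda r: r["circ_count"], reverse=True)
    top_500 = {r["id"] for r in by_circ[:500]}
    max_circ = by_circ[0]["circ_count"]

    scores = {}
    for r in records:
        if r["circ_count"] == 0 or not r["has_cover"]:
            # Leave dusty or coverless titles alone in their dark corners.
            scores[r["id"]] = 0.0
        elif r["id"] in top_500:
            # Preserve plausibility: heavy circulators stay near the top.
            scores[r["id"]] = 0.5 + 0.5 * (r["circ_count"] / max_circ)
        else:
            # Everything else gets a random score, capped below the top tier.
            scores[r["id"]] = rng.uniform(0.0, 0.5)
    return scores
```

Because the random scores never overlap the top tier, a patron comparing search results against well-known bestsellers would see nothing obviously wrong.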

What will happen? If patrons use the catalog’s popularity rankings, if they trust them — or at least are more likely to look at whatever shows up near the top of search results — we might expect that the titles with an artificial bump from the random number generator will circulate just a bit more often.

Of course, testing that hypothesis by letting a RNG skew search results in a real library catalog would be unethical.

But if one were clever enough to be subtle in one’s use of the RNG, the patrons would have a hard time figuring out that something was amiss.  From the user’s point of view, a sufficiently advanced search engine is indistinguishable from a black box.

This suggests some interesting possibilities for the Evil Librarian of Evil:

  • Some manual tweaks: after all, everybody really ought to read $BESTBOOK. (We won’t mention that it was written by the ELE’s nephew.)
  • Automatic personalization of search results. Does geolocation show that the patron’s IP address is on the wrong side of the tracks? Titles with a lower reading level just got more popular!
  • Has the patron logged in to the catalog? Personalization just got better! Let’s check the patron’s gender and tune accordingly!

Don’t be the ELE.

But as you work to improve library catalogs… take care not to become the ELE by accident.

DuraSpace News: ONLINE DSpace, Fedora and VIVO Technical Specifications

Wed, 2016-09-21 00:00

Austin, TX  Get current technical facts about DSpace, Fedora and VIVO in a concise format online, or as a downloadable PDF. Overview technical specifications for each DuraSpace open source project offer a basic description, history, a typical product use case, an architectural overview, key features and more.

DSpace technical specifications here.

Fedora technical specifications here.

DuraSpace News: CALL for Open Repositories Conference 2017 Proposals

Wed, 2016-09-21 00:00

From the organisers of the Twelfth International Conference on Open Repositories, OR2017

Brisbane, Australia  The Twelfth International Conference on Open Repositories, OR2017, will be held on June 26th-30th, 2017 in Brisbane, Australia. The organisers are pleased to issue this call for contributions to the program, with submissions due by 20 November 2016.