
LITA: LITA Forum Assessment Task Force Survey

planet code4lib - Mon, 2016-05-16 14:28

Dear Colleagues,

The LITA Forum Assessment Task Force wants your opinions about the impact of LITA Forum and how it fits within the library technology conference landscape. We invite everyone who works in the overlapping space between libraries and technology, whether or not you belong to LITA or have attended the LITA Forum recently (or at all), to take a short survey:

https://www.surveymonkey.com/r/litaforumassess

We anticipate this survey will take approximately 10 minutes to complete. Participation is anonymous unless you provide your email address for potential follow-up questions. The survey closes on Friday, May 27th, 2016, so don’t delay!

We will summarize what we learn from this survey on the LITA Blog after July 1st. If you have any questions or are having problems completing the survey, please feel free to contact:

Jenny Taylor (emanuelj@uic.edu) or Ken Varnum (varnum@umich.edu).

We thank you in advance for taking the time to provide us with this important information.

Jenny Taylor
Co-Chair, LITA Forum Assessment Task Force
emanuelj@uic.edu

Ken Varnum
Co-Chair, LITA Forum Assessment Task Force
varnum@umich.edu

LITA: Transmission #4

planet code4lib - Mon, 2016-05-16 14:14

In a fun-filled fourth episode, Begin Transmission sits down with John Klima, Assistant Director at the Waukesha Public Library and LITA Blogger. Learn about Klima’s commitment to public service and his steampunk expertise.

Begin Transmission will return with our fifth episode on May 31st.

D-Lib: Scientific Stewardship in the Open Data and Big Data Era -- Roles and Responsibilities of Stewards and Other Major Product Stakeholders

planet code4lib - Mon, 2016-05-16 14:13
Article by Ge Peng, Cooperative Institute for Climate and Satellites-North Carolina (CICS-NC), North Carolina State University and NOAA's National Centers for Environmental Information (NCEI), Nancy A. Ritchey, NCEI, Kenneth S. Casey, NCEI, Edward J. Kearns, NCEI, Jeffrey L. Privette, NCEI, Drew Saunders, NCEI, Philip Jones, STG, Inc., Tom Maycock, CICS-NC/NCEI, and Steve Ansari, NCEI

D-Lib: Report from the Sixth Annual DuraSpace Member Summit, March 2016

planet code4lib - Mon, 2016-05-16 14:13
Conference Report by Carol Minton Morris, DuraSpace

D-Lib: Linking Publications and Data: Challenges, Trends, and Opportunities

planet code4lib - Mon, 2016-05-16 14:13
Conference Report by Matthew S. Mayernik and Jennifer Phillips, NCAR/UCAR Library, National Center for Atmospheric Research (NCAR), University Corporation for Atmospheric Research (UCAR); Eric Nienhouse, Computational and Information Systems Lab, National Center for Atmospheric Research, University Corporation for Atmospheric Research (UCAR)

D-Lib: Institutional Repositories: Home for Small Scholarly Journals?

planet code4lib - Mon, 2016-05-16 14:13
Article by Julie Kelly and Linda Eells, University of Minnesota

D-Lib: Stewardship

planet code4lib - Mon, 2016-05-16 14:13
Editorial by Laurence Lannom, CNRI

D-Lib: Customization of Open Source Applications to Support a Multi-Institution Digital Repository Using DSpace

planet code4lib - Mon, 2016-05-16 14:13
Article by Youssef Benchouaf, Daniel Hamp and Mark Shelstad, Colorado State University

Islandora: Islandora Show & Tell: Berklee College of Music

planet code4lib - Mon, 2016-05-16 13:21

We have a particularly interesting Islandora Show & Tell this week, with the archives of the Berklee College of Music. The collection includes (of course) some pretty great music from their students, but also extends to event programs, oral histories, scrapbooks, and other products of their students' graduate projects.

They use some minor customizations of the Newspaper Solution Pack to fit it to their event programs, displaying a title for each “Islandora Newspaper Issue,” which provides more context and metadata for events. Some more involved customizations went into creating their “Highlighted Items” block, which appears on the landing page. It's done with a Drupal block module that taps into the Islandora / Tuque API with a custom web form; the team enters PIDs for the objects they want to display. There are also smaller customizations to the user interface spread throughout the site, such as navigation breadcrumbs, “page rotation” options in the OpenSeaDragon plugin, a new “viewer” module that replaces the Internet Archive Book Reader, and, in the back end, heavily customized XML to create a simplified interface for non-archives staff. Once they get their configurations locked down, they plan to share the details in the Islandora Deployments repo.

Which is all very cool. But since I'm the one reviewing this collection, it's all about the cats. Berklee's archives (specifically, their event programs) do not disappoint. Front and centre on the search results page is The Cat's Pajamas. Not to be outdone by Cats 'N Spats. Or, my personal favourite to invent a context for: Our Cat Is Sleeping On Your Back Porch.

And on those notes, let's hear from Sofía Becerra-Licha and Ernie Gillis about how their collection came together:

What is the primary purpose of your repository? Who is the intended audience?

The mission of the Archives at Berklee College of Music is to preserve and provide access to institutional history as well as special collections focusing on popular music. Our audience includes Berklee faculty, staff, current students, and alumni on both campuses (Boston and Valencia) as well as a diverse array of outside researchers and casual viewers. Given the College’s emphasis on technological innovation and our very limited physical space on campus, digital access to collections has been a priority since the Archives’ establishment in 2012. 

Our online presence is divided into two parts: an institutional repository containing master’s capstone projects, and digital collections consisting of both institutional records and special collections. The institutional repository both showcases recent graduates’ work and is actively used by the graduate student body and their faculty as research and reference material. Our digital collections include oral histories chronicling Berklee and Boston popular music history, materials relating to the Schillinger System of Musical Composition upon which Berklee’s distinctive curriculum was based, and other curricular and institutional highlights.

Why did you choose Islandora?

This repository was a long time coming. Area requirements included an open-source solution and the ability to ably manage audiovisual assets. Originally, the Learning Resources technical team attempted to build a repository structure from scratch, resulting in several iterations over the past decade before discovering and settling on the Fedora repository structure. Then, over the last five years, Learning Resources as a whole (including the College Archives and Berklee’s library) migrated to a Drupal-based CMS. Given these developments, and both the desire and the necessity of sticking to open source solutions, Islandora emerged as a perfect fit: it matched the CMS migration to Drupal and tapped into the offerings of Fedora. In short, Islandora fit our workflow management needs and open source requirements.

Which modules or solution packs are most important to your repository?

What feature of your repository are you most proud of?

Some highlights: 

  • Using the Newspaper Solution Pack to present our campus event programs, organized chronologically by venue: https://archives.berklee.edu/bca-009-bcm-event-programs. 
  • Incorporating TurnJS for multi-page object viewing (to replace the Internet Archive Book Reader)
  • Solr integration and configuration fine-tuning

Who built/developed/designed your repository (i.e., who was on the team)?

We’re a small team of three-ish. Web development and design work was done by the Manager of Learning Resources Web Development (Ernie Gillis) and the Senior Web Developer (Jaesung Song), in consultation with the College Archivist (Sofía Becerra-Licha), who assisted with content development and ingest. In the planning and implementation stages, the team had input from the Director of Library Services and the Dean of Learning Resources. Twelve shared work-study students were collaboratively employed in the areas of digitization (6), graphic design (2), web development (2), and archival processing (2).

Do you have plans to expand your site in the future?
Yes! Modules we plan to install and/or implement include OAI and Scholar. Planned improvements to local functionality development and design include: SSO rights management, implementing a (TBD) library services discovery layer, and improving responsive and mobile design interfaces for Islandora/Fedora Objects. With regard to asset management and creation, we plan to add:

  • Closed captioning for videos (such as oral histories)
  • Real time streaming for on demand audio / video (for improved control over downloads, and for better mobile or remote / low data access via adaptive bitrate) 
  • PDF to Image & image protection (to better control high quality download of certain images or graphic objects)
  • EAD / Finding Aids as Fedora Objects
  • Web technology objects (for HTML5 SVG objects, or other web animation types)

What is your favourite object in your collection to show off?
The scrapbook in the Franklin McGinley collection on Duke Ellington (BCA-004). Great illustrations, fun clippings, musician autographs, and more! 

LibUX: Why am I doing this to our users?

planet code4lib - Mon, 2016-05-16 07:00
Let’s Redesign a Web Application!

In the spring of 2015 the WVU Libraries development team was called upon by the eResources committee to redesign their legacy Databases web application, which was first started in 2010.

A screenshot of the WVU Libraries databases web application as it existed before 2015. This design wasn’t responsive, used a left-hand faceting system, and its navigation/search elements were inherited by the web application.

I was given less than two days to design a user interface (UI) redesign for the databases web application and present it to the entire eResources committee. There was no time to establish quantitative design research – analytics, usage data, server-side statistics, etc. – or to conduct any type of qualitative usability testing.

Due to the development team being in constant backlog, being extremely understaffed, and deadlines being decided without our input, it was apparent that this was a UI upgrade only – and we were going in blind. After launch I would be able to conduct summative usability testing.

Four Missed Opportunities
  1. We didn’t include the user in the process
    We missed the opportunity to include the user in the process. By not including the user, we couldn’t determine what worked for them, how they used the UI, or what their satisfaction and pain points were.
  2. We hadn’t established any type of quantitative research
    We didn’t have any type of quantitative research for the web application (Google Analytics, CrazyEgg, etc.). So we had no quantitative (or qualitative) data on which we could base any UI decisions.
  3. We didn’t use interactive wireframes
    I have found that without interactive wireframes, stakeholders, developers, and user experience (UX) professionals have very different ideas about how a web application or design should function. Ultimately you can end up with a hybrid or compromised project that falls short of the developer’s, stakeholder’s, or client’s expectations.
  4. We launched before we tested
    We didn’t conduct any type of user survey or testing on the different interface in a development environment, but instead went live with the intention of testing the live product.

Images of the proposed UI redesign for the WVU Libraries databases web application.

Despite the missed opportunities, the major features and improvements of what was designed and implemented for the web application were:

  • Being responsive
  • Having featured wikilinks for resource and type tags
  • A cleaner, simpler UI
  • A faceted breadcrumb navigation system (similar to online shopping experiences) for titles and subjects.

All in all, not too many differences existed between what was proposed and what was actually designed and built. The only differences were a move to a tabbar-based mobile navigation, on-page help information on every view, and a slightly reduced faceting system.

A design in context of the tabbar-based mobile navigation, which contained the sort type, help, and faceting system.

TEST 1: Redesign Usability Testing Results

The committee had requested changes be made on the live product before testing, without knowing the goals and objectives to test in the first place. At this point I stressed that we had no qualitative or quantitative data to go by, and that doing some summative usability testing before making any changes was critical. There was a lot of push back on this, and even though I held my ground as a UX professional, I sensed that this was the beginning of a breakdown in communication, in understanding, and in the user’s role in the development process.

Database Web Application Usability Testing from Tim Broadwater

The Database web application usability results report clarified the web application’s primary target audience: 73% were undergraduate students, their average age was 21, and 80% of them had smartphones. Additionally, the report clarified that 96% of the Database web application usage came from users not actually in the library.

The usability test results demonstrated that as users spent more time in the web application, it became easier for them to navigate and use. The results also indicated that the majority of error rates could be dramatically improved just by making a few UI, content, and development changes. Examples included changing the locations of certain UI elements, changing some naming conventions, and adding a search box.

It seemed like the blind attempt at redesigning the web application by our UX professional and development team was a success – that we were pretty good, on the right path, and the UI was workable… right?

The UX Conspiracy

I couldn’t have been more wrong. First and foremost the design and UI was not what the stakeholders wanted; they stated that users wouldn’t scroll, that the faceting was too complicated, and that they weren’t happy with the web application. The development team was then questioned as to why we didn’t make the requested changes before testing, to which I responded in kind concerning the lack of user data.

Next we were told that librarians needed to be involved with the entire usability testing process, and that what they were seeing users do at the desk and in person was not what we were designing and building. Finally, it was implied that the UX professional (me) was bending data to fulfill an anti-librarian agenda, pushing UI elements that weren’t necessary, and not listening to what the librarians thought the users wanted.

I was flabbergasted. Here we had our first insights into how the user interacted with the Databases UI, measurable data to weigh against future summative user testing going forward, and a clear pathway for improvement. However, the stakeholders were claiming data manipulation occurred, stating that users don’t know how to scroll, and exclaiming that it was better before the redesign.

The Breakdown of Communication

We ended up not being able to meet with the eResources committee for three months, during which time the development team was not permitted to make any of the improvements identified in the usability testing. Near the end of the year, we finally had an hour-long meeting with the eResources committee in which we argued the same points and tried to compromise, but in an entire hour the user was never mentioned or considered once. The conversation had devolved into a blame game.

Later the development team was informed that the eResources committee was possibly moving the Databases web application into another product entirely unless we built verbatim exactly what they wanted. At this point some on our team became so disgusted with the entire process, and with the complete exclusion of the user, that they didn’t care if the eResources committee moved to another product. Simultaneously, other members of the development team who had been working on the web application for years didn’t want to throw away their work, and agreed to do what the committee wanted, even if it meant taking out the innovative and groundbreaking features of the web application (faceted breadcrumbs, wikilink tags, etc.).

During all of this time, I kept thinking, “Where is the user in this whole process?”

Let’s Re-Redesign a Web Application?

So the development team started work on the second redesign of the Databases web application in the same academic year. We were told that we had to get approval from the eResources committee every step of the way, and that we couldn’t go live without their approval.

This process was grueling, disheartening, and a morale killer for our team – mostly because we devolved into a ‘my way or the highway’ work mode, the usability test results were thrown out the window, and we were forced to work along a bulleted list of what we needed to do to get approval to go live. At this point I remember thinking to myself, “When did I become a mouse the librarians could just click-and-drag?” and asking myself, “Whose job am I doing, and why am I doing this to our users?”

The re-redesigned Databases web application went live on March 7, 2016. The head of the eResources committee sent out an email to all library personnel exclaiming how the committee worked hard with the ‘web folks’ to simplify and make it easier to navigate; many library-wide congratulations followed.

TEST 2: Re-Redesign Usability Testing Results

On April 15th, 2016 we began conducting a repeat of the previous year's usability test, omitting the questions that couldn’t be answered in the changed UI.

Database Web Application User Test 2 from Tim Broadwater

TechSmith Morae was used on a laptop computer to conduct usability testing of the recently revised Database web application, using test questions from the first round of testing that were still relevant to the web application.

Comparing the test results of the development team’s redesign (Test 1) with those of the re-redesign mandated by the eResources committee (Test 2), the following occurred:

  • 54% of tasks took more time to complete in Test 2 than in Test 1
  • Success rates for both ‘Completed with ease’ and ‘Completed’ were largely reduced in Test 2 compared to Test 1
  • Rates for both ‘Completed with difficulty’ and ‘Failed to complete’ largely increased in Test 2 compared to Test 1
  • The standard deviation of error rates increased substantially in Test 2 compared to Test 1

Additionally, favorable responses decreased from Test 1 to Test 2 for 70% of the system usability scale items:

  • I needed to learn a lot of things before I could get going with this system
  • I found the system very cumbersome to use
  • I thought there was too much inconsistency in this system
  • I found the various functions in this system were well integrated
  • I thought that the system was easy to use
  • I think that I would like to use this system frequently
What Went Wrong?

Data from the second round of usability testing indicated a False-Consensus Effect (from stakeholder to user), which can be seen in the average time, success rate, and error rate metric comparison. In the field of psychology and UX, a false-consensus effect is a type of cognitive bias whereby people tend to overestimate the extent to which their opinions, preferences, and habits are normal and typical of those of others. Basically, it is assuming that others act and think the same way that they do.

A false-consensus effect is the largest problem and area of concern in the field of UX, and the biggest factor contributing to a decrease in UX quality. In the context of the Databases web application, the false-consensus effect was demonstrated when the users tested the second time were unable to complete most usability testing tasks because the UI was too simple. The majority of users’ results ended with a wall of text that took too much time to read through, and users were unable to facet or filter their search results.

So basically, the second time around we redesigned for what the eResources librarians thought the users wanted, and not for what actually worked for our 21-year-old undergraduate students. At that point I stuck my tongue in my cheek, took a deep breath, and quoted Jakob Nielsen out loud: “Pay attention to what users do, not what they say.”

Going Forward, Ask Yourself

Now that we are back at the drawing board and looking to bring back some of the features that did work for our users, we have more perspective. We are halting everything because we have learned a valuable lesson. When we think about how we got here, it’s pretty simple: at every step of the way we all omitted the user from the development process, and at every step the role of the UX professional (me) was devalued and diminished. That’s it, pure and simple. Going forward, and on every project, a UX professional should ask these questions:

Where is the user?

You are not your users, the librarians are not your users, the developers are not your users, and your coworkers are not your users. Have you stopped to do a little bit of work to determine exactly who your target audience is? Once you know that, you know who to include in your testing – not librarians, not staff, and not co-workers. Include the user in the data mining process, the interactive wireframing process, and the usability testing process.

Are we ignoring data?

It’s widely said in UX that if you design for everybody you design for nobody. If you don’t have any quantitative data at all – server statistics, heat maps, analytics, etc. – your UX and usability design research will fail. Don’t blindly ignore test results, or you will blindly ignore your users.

What are we building for?

Do you ever get the feeling that you’re spinning your wheels and wasting precious development and enhancement time? If you do, then most likely you are. Work smarter, not harder, by focusing all of your development on your users and basing it on formative user data, not whimsical stakeholder requests. Also, if you haven’t read “Why Design-By-Committee Should Die” by Speider Schneider, go there immediately so you don’t get overpowered by committees.

How do we get everyone on the same page?

To get everyone on the same page before you even touch code, work with interactive wireframes. Corporations and tech startups have been doing this for years, so it’s time for libraries to do the same. Concept.ly, Pixate, and Principle work very nicely as collaboration/feedback tools, and Adobe Experience Design just came out for Mac!

In The End, Don’t Sweat It

Dilbert Daily Comic. Dilbert.com. Monday May 7th, 2012.

To some degree we all make mistakes, hindsight is clearer than foresight, and everybody has been guilty of false-consensus effect at one point or another due to deadlines and work constraints. The important thing is to keep good working relationships with people, constantly make a case for good usability by including your users as much as possible, and learn best practices through failure. That’s all that anyone can hope to do.

The post Why am I doing this to our users? appeared first on LibUX.

DuraSpace News: LYRASIS and DuraSpace Announce Dissolution of "Intent to Merge"

planet code4lib - Mon, 2016-05-16 00:00

Austin, TX – Following four months of formal due diligence and six months of exploration, the Boards of LYRASIS and DuraSpace have decided that a full merger is not currently the best way for each organization to achieve its long-term goals. In lieu of a formal merger, LYRASIS and DuraSpace will continue to pursue more informal collaborations that benefit the members and communities of both organizations while allowing each organization to remain focused on its mission. This decision is the result of extensive investigation and good-faith due diligence.

Jason Ronallo: The Lenovo X240 Keyboard and the End/Insert Key With FnLk On as a Software Developer on Linux

planet code4lib - Sat, 2016-05-14 18:24

As a software developer I’m using keys like F5 a lot. When I’m doing any writing, I use F6 a lot to turn off and on spell correction underlining. On the Lenovo X240 the function keys are overlaid on the same keys as volume and brightness control. This causes some problems for me. Luckily there’s a solution that works for me under Linux.

To access the function keys you have to also press the Fn key. If most of what you’re doing is reloading a browser and not using the volume control, then this is a problem, so they’ve created a function lock which is enabled by pressing the Fn and Esc/FnLk key. The Fn key lights up and you can press F5 without using the Fn modifier key.

That’s all well and good until you get to another quirk of this keyboard: the Home, End, and Delete keys are in the same function key row, and the End key also functions as the Insert key. When function lock is on, the End key becomes an Insert key. I don’t ever use the Insert key on a keyboard, so I understand why they combined the End/Insert key. But in this combination it doesn’t work for me as a software developer. I’m continually going between something that needs to be reloaded with F5 and an editor where I need to quickly go to the end of a line in a program.

Luckily there’s a pretty simple answer to this if you don’t ever need to use the Insert key. I found the answer on askubuntu.

All I needed to do was run the following:

xmodmap -e "keycode 118 = End"

And now even when the function keys are locked, the End/Insert key always behaves as End. To make this permanent so that the mapping gets loaded when X11 starts, add xmodmap -e "keycode 118 = End" to your ~/.xinitrc.

Jason Ronallo: Styling HTML5 Video with CSS

planet code4lib - Sat, 2016-05-14 18:24

If you add an image to an HTML document you can style it with CSS. You can add borders, change its opacity, use CSS animations, and lots more. HTML5 video is just as easy to add to your pages and you can style video too. Lots of tutorials will show you how to style video controls, but I haven’t seen anything that will show you how to style the video itself. Read on for an extreme example of styling video just to show what’s possible.

Here’s a simple example of a video with a single source wrapped in a div:

<div id="styled_video_container">
  <video src="/video/wind.mp4" type="video/mp4" controls poster="/video/wind.png" id="styled_video" muted preload="metadata" loop></video>
</div>

Add some buttons under the video to style and play the video and then to stop the madness.

<button type="button" id="style_it">Style It!</button>
<button type="button" id="stop_style_it">Stop It!</button>

We’ll use this JavaScript just to add a class to the containing element of the video and play/pause the video.

jQuery(document).ready(function($) {
  $('#style_it').on('click', function() {
    $('#styled_video')[0].play();
    $('#styled_video_container').addClass('style_it');
  });
  $('#stop_style_it').on('click', function() {
    $('#styled_video_container').removeClass('style_it');
    $('#styled_video')[0].pause();
  });
});

Using the class that gets added we can then style and animate the video element with CSS. This is a simplified version without vendor flags.

#styled_video_container.style_it {
  background: linear-gradient(to bottom, #ff670f 0%, #e20d0d 100%);
}

#styled_video_container.style_it video {
  border: 10px solid green !important;
  opacity: 0.6;
  transition: all 8s ease-in-out;
  transform: rotate(300deg);
  box-shadow: 12px 9px 13px rgba(255, 0, 255, 0.75);
}

Conclusion

OK, maybe there aren’t a lot of practical uses for styling video with CSS, but it is still fun to know that we can. Do you have a practical use for styling video with CSS that you can share?

Jason Ronallo: HTML5 Video Caption Cue Settings in WebVTT

planet code4lib - Sat, 2016-05-14 18:24

TL;DR Check out my tool to better understand how cue settings position captions for HTML5 video.

Having video be a part of the Web with HTML5 <video> opens up a lot of new opportunities for creating rich video experiences. Being able to style video with CSS and control it with the JavaScript API makes it possible to do fun stuff and to create accessible players and a consistent experience across browsers. With better support in browsers for timed text tracks in the <track> element, I hope to see more captioned video.

An important consideration in creating really professional looking closed captions is placing them correctly. I don’t rely on captions, but I do increasingly turn them on to improve my viewing experience. I’ve come to appreciate some attributes of really well done captions. Accuracy is certainly important: the captions should match the words spoken. As someone who can hear, I see inaccurate captions all too often. Thoroughness is another factor: are all the sounds important to the action represented in the captions? Captions will also include a “music” caption, but other sounds, especially those off screen, are often omitted. But accuracy and thoroughness aren’t the only factors to consider when evaluating caption quality.

Placement of captions can be equally important. The captions should not block other important content. They should not run off the edge of the screen. If two speakers are on screen you want the appropriate captions to be placed near each speaker. If a sound or voice is coming from off screen, the caption is best placed as close to the source as possible. These extra clues can help with understanding the content and action. These are the basics. There are other style guidelines for producing good captions. Producing good captions is something of an art form. More than two rows long is usually too much, and rows ought to be split at phrase breaks. Periods should be used to end sentences and are usually the end of a single cue. There’s judgment necessary to have pleasing phrasing.

While there are tools for doing this proper placement for television and burned-in captions, I haven’t found a tool for this for Web video. Since I don’t yet have a tool to do this, in the following I’ll show you how to:

  • Use the JavaScript API to dynamically change cue text and settings.
  • Control placement of captions for your HTML5 video using cue settings.
  • Play around with different cue settings to better understand how they work.
  • Style captions with CSS.
Track and Cue JavaScript API

The <video> element has an API which allows you to get a list of all tracks for that video.

Let’s say we have the following video markup which is the only video on the page. This video is embedded far below, so you should be able to run these in the console of your developer tools right now.

<video poster="soybean-talk-clip.png" controls autoplay loop>
  <source src="soybean-talk-clip.mp4" type="video/mp4">
  <track label="Captions" kind="captions" srclang="en" src="soybean-talk-clip.vtt" id="soybean-talk-clip-captions" default>
</video>

Here we get the first video on the page:

var video = document.getElementsByTagName('video')[0];

You can then get all the tracks (in this case just one) with the following:

var tracks = video.textTracks; // returns a TextTrackList
var track = tracks[0]; // returns a TextTrack

Alternately, if your track element has an id you can get it more directly:

var track = document.getElementById('soybean-talk-clip-captions').track;

Once you have the track you can see the kind, label, and language:

track.kind; // "captions"
track.label; // "Captions"
track.language; // "en"

You can also get all the cues as a TextTrackCueList:

var cues = track.cues; // TextTrackCueList

In our example we have just two cues. We can also get just the active cues (in this case only one so far):

var active_cues = track.activeCues; // TextTrackCueList

Now we can see the text of the current cue:

var text = active_cues[0].text;

Now the really interesting part is that we can change the text of the caption dynamically and it will immediately change:

track.activeCues[0].text = "This is a completely different caption text!!!!1";

Cue Settings

We can also then change the position of the cue using cue settings. The following will move the first active cue to the top of the video.

track.activeCues[0].line = 1;

The cue can also be aligned to the start of the line position:

track.activeCues[0].align = "start";

Now for one last trick we’ll add another cue with the arguments of start time and end time in seconds and the cue text:

var new_cue = new VTTCue(1,30, "This is the next of the new cue.");

We’ll set a position for our new cue before we place it in the track:

new_cue.line = 5;

Then we can add the cue to the track:

track.addCue(new_cue);

And now you should see your new cue for most of the duration of the video.

Playing with Cue Settings

The other settings you can play with include position and size. Position is the text position as a percentage of the width of the video. Size is the width of the cue as a percentage of the width of the video.
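For example, here's a quick sketch you could run in the console, reusing the track from the examples above (the numbers are just illustrative):

track.activeCues[0].position = 50; // text position: 50% across the video width
track.activeCues[0].size = 40; // cue box takes up 40% of the video width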

While I could go through all of the different cue settings, I found it easier to understand them after I built a demonstration of dynamically changing all the cue settings. There you can play around with all the settings together to see how they actually interact with each other.

At least as of the time of this writing there is some variability between how different browsers apply these settings.

Test WebVTT Cue Settings and Styling

Cue Settings in WebVTT

I’m honestly still a bit confused about all of the optional ways in which cue settings can be defined in WebVTT. The demonstration outputs the simplest and most straightforward representation of cue settings. You’d have to read the spec for optional ways to apply some cue settings in WebVTT.

Styling Cues

In browsers that support styling of cues (Chrome, Opera, Safari), the demonstration also allows you to apply styling to cues in a few different ways. This CSS code is included in the demo to show some simple examples of styling.

::cue(.red) { color: red; }
::cue(.blue) { color: blue; }
::cue(.green) { color: green; }
::cue(.yellow) { color: yellow; }
::cue(.background-red) { background-color: red; }
::cue(.background-blue) { background-color: blue; }
::cue(.background-green) { background-color: green; }
::cue(.background-yellow) { background-color: yellow; }

Then the following cue text can be added to show red text with a yellow background:

<c.red.background-yellow>This cue has red text with a yellow background.</c>

In the demo you can see which text styles are supported by which browsers for styling the ::cue pseudo-element. There’s a text box at the bottom that allows you to enter any arbitrary styles and see what effect they have.

Example Video

Test WebVTT Cue Settings and Styling

Jason Ronallo: HTML Slide Decks With Synchronized and Interactive Audience Notes Using WebSockets

planet code4lib - Sat, 2016-05-14 18:24

One question I got asked after giving my Code4Lib presentation on WebSockets was how I created my slides. I’ve written about how I create HTML slides before, but this time I added some new features like an audience interface that synchronizes automatically with the slides and allows for audience participation.

TL;DR I’ve open sourced starterdeck-node for creating synchronized and interactive HTML slide decks.

Not every time that I give a presentation am I able to use the technologies that I am talking about within the presentation itself, so I like to do it when I can. I write my slide decks as Markdown and convert them with Pandoc to HTML slides which use DZslides for slide sizing and animations. I use a browser to present the slides. Working this way with HTML has allowed me to do things like embed HTML5 video into a presentation on HTML5 video and show examples of the JavaScript API and how videos can be styled with CSS.

For a presentation on WebSockets I gave at Code4Lib 2014, I wanted to provide another example from within the presentation itself of what you can do with WebSockets. If you have the slides and the audience notes handout page open at the same time, you will see how they are synchronized. (Beware slowness as it is a large self-contained HTML download using data URIs.) When you change to certain slides in the presenter view, new content is revealed in the audience view. Because the slides are just an HTML page, it is possible to make the slides more interactive. WebSockets are used to allow the slides to send messages to each audience members’ browser and reveal notes. I am never able to say everything that I would want to in one short 20 minute talk, so this provided me a way to give the audience some supplementary material.
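The synchronization pattern itself is small. Here is a minimal sketch of the audience side (the URL, message shape, and element id are illustrative assumptions, not starterdeck-node’s actual protocol):

// Each audience browser listens for slide-change messages from the server.
var socket = new WebSocket('ws://example.com/slides'); // hypothetical endpoint
socket.onmessage = function (event) {
  var message = JSON.parse(event.data);
  if (message.slide) {
    // Reveal the notes that correspond to the presenter's current slide.
    document.getElementById('notes-' + message.slide).style.display = 'block';
  }
};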

Within the slides I even included a simplistic chat application that allowed the audience to send messages directly to the presenter slides. (Every talk on WebSockets needs a gratuitous chat application.) At the end of the talk I also accepted questions from the audience via an input field. The questions were then delivered to the slides via WebSockets and displayed right within a slide using a little JavaScript. What I like most about this is that even someone who did not feel confident enough to step up to a microphone would have the opportunity to ask an anonymous question. And I even got a few legitimate questions amongst the requests for me to dance.

Another nice side benefit of getting the audience to the notes before the presentation starts is that you can include your contact information and Twitter handle on the page.

I have wrapped up all this functionality for creating interactive slide decks into a project called starterdeck-node. It includes the WebSocket server and a simple starting point for creating your own slides. It strings together a bunch of different tools to make creating and deploying slide decks like this simpler so you’ll need to look at the requirements. This is still definitely just a tool for hackers, but having this scaffolding in place ought to make the next slide deck easier to create.

Here’s a video where I show starterdeck-node at work. Slides on the left; audience notes on the right.

Other Features

While the new exciting feature added in this version of the project is synchronization between presenter slides and audience notes, there are also lots of other great features if you want to create HTML slide decks. Even if you aren’t going to use the synchronization feature, there are still lots of reasons why you might want to create your HTML slides with starterdeck-node.

Self-contained HTML. Pandoc uses data URIs so that the HTML version of your slides has no external dependencies. Everything – images, video, JavaScript, CSS, and fonts – is embedded within a single HTML document. That means that even if there’s no internet connection from the podium, you’ll still be able to deliver your presentation.

Onstage view. Part of what gets built is a DZSlides onstage view where the presenter can see the current slide, next slide, speaker notes, and current time.

Single page view. This view is a self-contained, single-page layout version of the slides and speaker notes. This is a much nicer way to read a presentation than just flipping through the slides on various slide sharing sites. If you put a lot of work into your talk and are writing speaker notes, this is a great way to reuse them.

PDF backup. A script is included to create a PDF backup of your presentation. Sometimes you have to use the computer at the podium and it has an old version of IE on it. PDF backup to the rescue. While you won’t get all the features of the HTML presentation you’re still in business. The included Node.js app provides a server so that a headless browser can take screenshots of each slide. These screenshots are then compiled into the PDF.

Examples

I’d love to hear from anyone who tries to use it. I’ll list any examples I hear about below.

Here are some examples of slide decks that have used starterdeck-node or starterdeck.

Jason Ronallo: A Plugin For Mediaelement.js For Preview Thumbnails on Hover Over the Time Rail Using WebVTT

planet code4lib - Sat, 2016-05-14 18:24

The time rail or progress bar on video players gives the viewer some indication of how much of the video they’ve watched, what portion of the video remains to be viewed, and how much of the video is buffered. The time rail can also be clicked on to jump to a particular time within the video. But figuring out where in the video you want to go can feel kind of random. You can usually hover over the time rail and move from side to side and see the time that you’d jump to if you clicked, but who knows what you might see when you get there.

Some video players have begun to use the time rail to show video thumbnails on hover in a tooltip. For most videos these thumbnails give a much better idea of what you’ll see when you click to jump to that time. I’ll show you how you can create your own thumbnail previews using HTML5 video.

TL;DR Use the time rail thumbnails plugin for Mediaelement.js.

Archival Use Case

We usually follow agile practices in our archival processing. This style of processing was popularized by the article More Product, Less Process: Revamping Traditional Archival Processing by Mark A. Greene and Dennis Meissner. For instance, we don’t read every page of every folder in every box of every collection in order to describe it well enough to make the collection accessible to researchers. Over time we may decide to make the materials for a particular collection, or parts of a collection, more discoverable by doing the work to look closer and add more metadata to our description of the contents. But we try not to let the perfect be the enemy of the good enough. Our goal is to make the materials accessible to researchers, not hidden in some box no one knows about.

Some of our collections of videos are highly curated, as with our video oral histories. We’ve created transcripts for the whole video. We extract the most interesting or on-topic clips. For each of these video clips we create a WebVTT caption file and an interface to navigate within the video from the transcript.

At NCSU Libraries we have begun digitizing more archival videos. And for these videos we’re much more likely to treat them like other archival materials. We’re never going to watch every minute of every video about cucumbers or agricultural machinery in order to fully describe the contents. Digitization gives us some opportunities to automate the summarization that would be manually done with physical materials. Many of these videos don’t even have dialogue, so even when automated video transcription is more accurate and cheaper we’ll still be left with only the images. In any case, the visual component is a good place to start.

Video Thumbnail Previews

When you hover over the time rail on some video viewers, you see a thumbnail image from the video at that time. YouTube does this for many of its videos. I first saw that this would be possible with HTML5 video when I saw the JW Player page on Adding Preview Thumbnails. From there I took the idea to use an image sprite and a WebVTT file to structure which media fragments from the sprite to use in the thumbnail preview. I’ve implemented this as a plugin for Mediaelement.js. You can see detailed instructions there on how to use the plugin, but I’ll give the summary here.

1. Create an Image Sprite from the Video

This uses ffmpeg to take a snapshot every 5 seconds in the video and then uses montage (from ImageMagick) to stitch them together into a sprite. This means that only one file needs to be downloaded before you can show the preview thumbnail.

ffmpeg -i "video-name.mp4" -f image2 -vf fps=fps=1/5 video-name-%05d.jpg
montage video-name*jpg -tile 5x -geometry 150x video-name-sprite.jpg

2. Create a WebVTT metadata file

This is just a standard WebVTT file except the cue text is metadata instead of captions. The URL is to an image and uses a spatial Media Fragment for what part of the sprite to display in the tooltip.

WEBVTT

00:00:00.000 --> 00:00:05.000
http://example.com/video-name-sprite.jpg#xywh=0,0,150,100

00:00:05.000 --> 00:00:10.000
http://example.com/video-name-sprite.jpg#xywh=150,0,150,100

00:00:10.000 --> 00:00:15.000
http://example.com/video-name-sprite.jpg#xywh=300,0,150,100

00:00:15.000 --> 00:00:20.000
http://example.com/video-name-sprite.jpg#xywh=450,0,150,100

00:00:20.000 --> 00:00:25.000
http://example.com/video-name-sprite.jpg#xywh=600,0,150,100

00:00:25.000 --> 00:00:30.000
http://example.com/video-name-sprite.jpg#xywh=0,100,150,100

3. Add the Video Thumbnail Preview Track

Put the following within the <video> element.

<track kind="metadata" class="time-rail-thumbnails" src="http://example.com/video-name-sprite.vtt"></track>

4. Initialize the Plugin

The following assumes that you’re already using Mediaelement.js, jQuery, and have included the vtt.js library.

$('video').mediaelementplayer({
  features: ['playpause','progress','current','duration','tracks','volume','timerailthumbnails'],
  timeRailThumbnailsSeconds: 5
});

The Result

See Bug Sprays and Pets with sound.

Installation

The plugin can either be installed using the Rails gem or the Bower package.

MutationObserver

One of the DOM API features I hadn’t used before is MutationObserver. One thing the thumbnail preview plugin needs to know is what time is being hovered over on the time rail. I could have calculated this myself, but I wanted to rely on MediaElement.js to provide the information. Maybe there’s a callback in MediaElement.js for when this is updated, but I couldn’t find one. Instead I use a MutationObserver to watch for when MediaElement.js changes the DOM for the default display of a timestamp on hover. Looking at the time code there then allows the plugin to pick the correct cue text to use for the media fragment. MutationObserver is more performant than the now-deprecated Mutation Events. I’ve experienced very little latency using a MutationObserver, which allows it to trigger lots of events quickly.
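The watching code amounts to something like this (a rough sketch; the selector is my assumption about MediaElement.js’s markup, not a documented API):

// Watch the hover tooltip that MediaElement.js updates with the hovered time.
var tooltip = document.querySelector('.mejs-time-float-current'); // assumed element
var observer = new MutationObserver(function () {
  // The tooltip's text holds the hovered timestamp, e.g. "00:42",
  // which can be used to pick the matching cue.
  console.log('hovered time:', tooltip.textContent);
});
observer.observe(tooltip, { childList: true, characterData: true, subtree: true });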

The plugin currently only works in the browsers that support MutationObserver, which is most current browsers. In browsers that do not support MutationObserver the plugin will do nothing at all and just show the default timestamp on hover. I’d be interested in other ideas on how to solve this kind of problem, though it is nice to know that plugins that rely on another library have tools like MutationObserver around.

Other Caveats

This plugin is brand new and works for me, but there are some caveats. All the images in the sprite must have the same dimensions. The durations for each thumbnail must be consistent. The timestamps currently aren’t really used to determine which thumbnail to display; instead the selection is faked by relying on the consistent durations. The plugin just does some simple addition and plucks the correct thumbnail out of the array of cues. Hopefully in future versions I can address some of these issues.
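In other words, the lookup is roughly this (illustrative names, not the plugin’s actual internals):

var secondsPerThumbnail = 5; // must match the fps used when generating the sprite
var index = Math.floor(hoverTime / secondsPerThumbnail); // hoverTime in seconds
var cue = track.cues[index]; // this cue's text is the media-fragment URL to display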

Discoveries

Having this feature be available for our digitized video, we’ve already found things in our collection that we wouldn’t have seen before. You can see how a “Profession with a Future” evidently involves shortening your life by smoking (at about 9:05). I found a spinning spherical display of Soy-O and synthetic meat (at about 2:12). Some videos switch between black & white and color which you wouldn’t know just from the poster image. And there are some videos, like talking heads, that appear from the thumbnails to have no surprises at all. But maybe you like watching boiling water for almost 13 minutes.

OK, this isn’t really a discovery in itself, but it is fun to watch a head banging JFK as you go back and forth over the time rail. He really likes milk. And Eisenhower had a different speaking style.

You can see this in action for all of our videos on the NCSU Libraries’ Rare & Unique Digital Collections site and make your own discoveries. Let me know if you find anything interesting.

Preview Thumbnail Sprite Reuse

Since we already had the sprite images for the time rail hover preview, I created another interface to allow a user to jump through a video. Under the video player is a control button that shows a modal with the thumbnail sprite. The sprite alone provides a nice overview of the video that allows you to see very quickly what might be of interest. I used an image map so that the rather large sprite images would only have to be in memory once. (Yes, image maps are still valid in HTML5 and have their legitimate uses.) jQuery RWD Image Maps allows the map area coordinates to scale up and down across devices. Hovering over a single thumb will show the timestamp for that frame. Clicking a thumbnail will set the current time for the video to the start time of that section of the video, as sketched below. One advantage of this feature is that it doesn’t require the kind of fine motor skill necessary to hover over the video player time rail and move back and forth to show each of the thumbnails.
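The click behavior amounts to something like this (a hedged sketch; the selector and data attribute are assumptions, not the actual implementation):

$('area.sprite-thumb').on('click', function (e) {
  e.preventDefault();
  var video = $('video')[0];
  // Jump the player to the start time of the clicked thumbnail's section.
  video.currentTime = parseFloat($(this).data('start'));
});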

This feature was added and deployed to production just this week, so I’m looking for feedback on whether folks find it useful, how to improve it, and any bugs that are encountered.

Summarization Services

I expect that automated summarization services will become increasingly important for researchers as archives do more large-scale digitization of physical collections and collect more born digital resources in bulk. We’re already seeing projects like fondz which autogenerates archival description by extracting the contents of born digital resources. At NCSU Libraries we’re working on other ways to summarize the metadata we create as we ingest born digital collections. As we learn more what summarization services and interfaces are useful for researchers, I hope to see more work done in this area. And this is just the beginning of what we can do with summarizing archival video.

Tara Robertson: changing the rules of the game: what libraries can learn from Beyoncé

planet code4lib - Sat, 2016-05-14 17:49

 

Dr. Safiya U. Noble’s selfie

Recently two awesome things changed my world. Beyoncé released her album Lemonade and the BC Library Association conference happened.

Cory Doctorow’s opening keynote was brilliant. As expected, he gave a smart and funny talk full of examples to illustrate the bigger issues. I don’t think anyone will forget his example of privacy flaws in everyday “smart” devices: the baby monitor cam that was taken over by creepy men who taunted the baby. I feel like he gave libraries more credit than we deserve. I felt pretty depressed and without hope thinking about how libraries continue to choose proprietary vendor technology that does not reflect our core values.

One of my favourite conversations at this conference was with Alison Macrina, from the Library Freedom Project.  We talked about many things, including our mutual love for Beyoncé. She saw her concert in Houston and told me about the amazing choreography for Freedom, which was the last song Beyoncé performed.

When I asked friends what their favourite song on Beyoncé’s Lemonade was, a few people said that they thought of the whole album as one song, or as an opera. So, on the way home from the conference, I was listening to the whole album and hearing it in a new way. I jumped off the bus and walked up the street to my home just as Freedom came on, and by the end of the song I had a realization. Beyoncé embodies freedom by owning her creative product, but perhaps even more importantly she owns the means of distribution. Like Beyoncé, libraries need to own our distribution platforms.

Tidal, Beyoncé’s distribution channel, is a streaming music platform that competes with Spotify and Pandora. I’m not sure what the ownership breakdown is, but Tidal is owned by artists. A few of the artist-owners are Jay Z, Beyoncé, Prince, Rihanna, Kanye West, Nicki Minaj, Daft Punk, Jack White, Madonna, Arcade Fire, Alicia Keys, Usher, Chris Martin, Calvin Harris, deadmau5, Jason Aldean and J. Cole. Initially many people thought Tidal was a failure, but that has changed.

Lemonade was launched on HBO on April 22. On the 23rd, the only place Lemonade was available was streaming through Tidal, with purchase available there the day after. On the 25th it became available for purchase by track or album on Amazon Music and the iTunes Store. Physical copies of the album went on sale at brick and mortar stores on May 6. Initially the shift to digital distribution replicated the business model for distributing records, which generated huge profits for record labels but often cut out the artist.

PKP (Public Knowledge Project) is a great example of how academic libraries built open source publishing tools to challenge scholarly publishers. This has been a game changer in terms of how research is published, distributed and accessed.

For more than 10 years we’ve been complaining about Overdrive’s DRM-laced ebooks and their crappy user experience. Instead of relying on vendors, we need to build our own distribution platform for ebooks. I realize that it’s the content our patrons are hungry for, and that we’re neither Jay Z nor Beyoncé. If publishers aren’t willing to play with us, we have strong relationships with authors and could work directly with them as content creators. There needs to be a new business model where people can access creative works and content creators can make a living. Access Copyright’s model doesn’t work, but we could work with content creators to figure out a business model that does.

In her closing keynote at BCLA activist and writer Harsha Walia talked about systemic power structures and the need to change how we do things. Talking about pay equity she said “It’s not about breaking the glass ceiling, it’s about shattering the whole house.” Vendor rules and platforms are about profit margins for those companies. Libraries need to change the rules of the game.

Tryna rain, tryna rain on the thunder
Tell the storm I’m new
I’m a wall, come and march on the regular
Painting white flags blue

Freedom! Freedom! I can’t move
Freedom, cut me loose!
Freedom! Freedom! Where are you?
Cause I need freedom too!
