Planet Code4Lib - http://planet.code4lib.org

Engard, Nicole: Bookmarks for April 8, 2014

Tue, 2014-04-08 20:30

Today I found the following resources and bookmarked them:

  • Midori A lightweight, fast, and free web browser

Digest powered by RSS Digest

The post Bookmarks for April 8, 2014 appeared first on What I Learned Today....

Related posts:

  1. Wikipedia Browser
  2. Donate A Book Day in April
  3. Oh Cool – More Ways to Find RSS Feeds for Journals

Summers, Ed: Where Brooklyn At?

Tue, 2014-04-08 19:41

As a follow-up to my last post I added a script to my fork of Aaron’s py-flarchive that will load up a Redis instance with comments, notes, tags and sets for Flickr images that were uploaded by Brooklyn Museum. The script assumes you’ve got a snapshot of the archived metadata, which I downloaded as a tarball. It took several hours to unpack the tarball on a medium EC2 instance, so if you want to play around and just want the Redis database, let me know and I’ll get it to you.

Once I loaded up Redis I was able to generate some high-level stats:

  • images: 5,697
  • authors: 4,617
  • tags: 6,132
  • machine tags: 933
  • comments: 7,353
  • notes: 963
  • sets: 141
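
Stats like these can be derived with a short pass over the archived metadata. A minimal sketch, assuming a hypothetical record layout (the actual py-flarchive structure may differ):

```python
from collections import Counter

# Hypothetical structure for one archived Flickr photo record;
# the real py-flarchive layout may differ.
photos = [
    {
        "id": "1234",
        "tags": [{"raw": "egypt", "author": "user-a"}],
        "machine_tags": [{"raw": "bm:unique=S10_08_Thebes/9928", "author": "user-a"}],
        "comments": [{"author": "user-b", "text": "very nostalgic..."}],
        "notes": [{"author": "user-c", "text": "Ramesses III Temple"}],
    },
]

def high_level_stats(photos):
    """Count images, unique annotation authors, and each annotation type."""
    authors = set()
    counts = Counter(images=len(photos))
    for p in photos:
        for kind in ("tags", "machine_tags", "comments", "notes"):
            for item in p.get(kind, []):
                counts[kind] += 1
                authors.add(item["author"])
    counts["authors"] = len(authors)
    return dict(counts)

print(high_level_stats(photos))
```

"Authors" here is the union of everyone who left any kind of annotation, which is why a single user who tagged and commented counts only once.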

Given how many images there were, that’s an astonishing number of authors: unique people who added tags, comments or notes. If you are curious, I generated a list of the tags and saved them as a Google Doc. The machine tags were particularly interesting to me. The majority of them (849) look like Brooklyn Museum IDs of some kind, for example:

bm:unique=S10_08_Thebes/9928

But there were also 51 geotags, and what looks like 23 links to items in Pleiades, for example:

tag:pleiades:depicts=721417202

If I had to guess I’d say this particular machine tag indicated that the Brooklyn Museum image depicted Abu Simbel. Now there weren’t tons of these machine tags but it’s important to remember that other people use Flickr as a scratch space for annotating images this way.
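
Machine tags follow a namespace:predicate=value convention, so grouping them like this is straightforward. A small sketch (the `classify` helper is my own illustration, not part of py-flarchive):

```python
import re
from collections import Counter

# Machine tags follow "namespace:predicate=value"; the greedy namespace
# group also handles tags like "tag:pleiades:depicts=...".
MACHINE_TAG = re.compile(r"^(?P<ns>.+):(?P<pred>[^:=]+)=(?P<value>.+)$")

def classify(tag):
    """Return the namespace:predicate prefix of a machine tag, or None."""
    m = MACHINE_TAG.match(tag)
    return f"{m.group('ns')}:{m.group('pred')}" if m else None

sample = [
    "bm:unique=S10_08_Thebes/9928",
    "tag:pleiades:depicts=721417202",
    "geo:lat=40.67",
    "egypt",  # plain tags don't match the pattern
]
print(Counter(classify(t) for t in sample if classify(t)))
```

Counting the classified prefixes is how you arrive at the kind of breakdown above: mostly `bm:unique`, plus a handful of geo and Pleiades tags.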

If you aren’t familiar with them, Flickr notes are annotations of an image, where the user has attached a textual note to a region in the image. Just eyeballing the list, it appears that there is quite a bit of diversity in them, ranging from the whimsical:

  • cool! they look soo surreal
  • teehee somebody wrote some graffiti in greek
  • Lol are these painted?
  • Steaks are ready!

to the seemingly useful:

  • Hunter’s Island
  • Ramesses III Temple
  • Lapland Village
  • Lake Michigan
  • Montuemhat Crypt
  • Napoleon’s troops are often accused of destroying the nose, but they are not the culprits. The nose was already gone during the 18th century.

Similarly the general comments run the gamut from:

  • very nostalgic…
  • always wanted to visit Egypt

to:

  • Just a few points. This is not ‘East Jordan’ it is in the Hauran region of southern Syria. Second it is not Qarawat (I guess you meant Qanawat) but Suweida. Third there is no mention that the house is enveloped by the colonnade of a Roman peripteral temple.
  • The fire that destroyed the buildings was almost certainly arson. it occurred at the height of the Pullman strike and at the time, rightly or wrongly, the strikers were blamed.
  • You can see in the background, the TROCADERO with two towers .. This “medieval city” was built on the right bank where are now buildings in modern art style erected for the exposition of 1937.

Brooklyn Museum pulled over 48 tags from Flickr before they deleted the account. That’s just 0.7% of the tags that were there. None of the comments or notes were moved over.

In the data that Aaron archived there was one indicator of user engagement: the datetime included with comments. Combined with the upload time for the images it was possible to create a spreadsheet that correlates the number of comments with the number of uploads per month:
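
The per-month correlation can be sketched by bucketing timestamps into year-month keys (the timestamps here are made up; the archive’s exact datetime format may differ):

```python
from collections import Counter
from datetime import datetime

# Hypothetical timestamps; the archive stores a datetime per comment
# and an upload time per image.
comment_times = ["2008-11-03T14:22:00", "2008-11-21T09:05:00", "2009-01-02T10:00:00"]
upload_times = ["2008-11-01T08:00:00", "2009-01-15T12:30:00"]

def per_month(timestamps):
    """Bucket ISO timestamps into YYYY-MM counts."""
    return Counter(datetime.fromisoformat(t).strftime("%Y-%m") for t in timestamps)

comments = per_month(comment_times)
uploads = per_month(upload_times)
months = sorted(set(comments) | set(uploads))
for m in months:
    print(m, uploads.get(m, 0), comments.get(m, 0))
```

Dumping those rows to CSV gives exactly the uploads-versus-comments spreadsheet described above.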

I’m guessing the drop off in December of 2013 is due to that being the last time Aaron archived Brooklyn Museum’s metadata. You can see that there was a decline in user engagement: the peak in late 2008 / early 2009 was never matched again. I was half expecting to see that user engagement fell off when Brooklyn Museum’s interest in the platform (uploads) fell off. But you can see that they continued to push content to Flickr, without seeing much of a reward, at least in the shape of comments. It’s impossible now to tell if tagging, notes or sets trended differently.

Since Flickr records the number of times each image was viewed, it’s also possible to sum the view counts across all the images. The answer?

9,193,331

Not a bad run for 5,697 images. I don’t know if Brooklyn Museum downloaded their metadata prior to removing their account. But luckily Aaron did.

Morgan, Eric Lease: The 3D Printing Working Group is maturing, complete with a shiny new mailing list

Tue, 2014-04-08 19:28

A couple of weeks ago Kevin Phaup took the lead in facilitating a 3D printing workshop here in the Libraries’ Center For Digital Scholarship. More than a dozen students from across the University participated. Kevin presented them with an overview of 3D printing, pointed them towards an online 3D image editing application (Shapeshifter), and everybody created various objects, which Matt Sisk has been diligently printing. The event was deemed a success, and there will probably be more specialized workshops scheduled for the Fall.

Since the last blog posting there has also been another Working Group meeting. A short dozen of us got together in Stinson-Remick where we discussed the future possibilities for the Group. The consensus was to create a more formal mailing list, maybe create a directory of people with 3D printing interests, and see about doing something more substantial — with a purpose — for the University.

To those ends, a mailing list has been created. Its name is 3D Printing Working Group. The list is open to anybody, and its purpose is to facilitate discussion of all things 3D printing around Notre Dame and the region. To subscribe, address an email message to listserv@listserv.nd.edu, and in the body of the message include the following command:

subscribe nd-3d-printing Your Name

where Your Name is… your name.

Finally, the next meeting of the Working Group has been scheduled for Wednesday, May 14, from 11:30 to 1 o’clock in Innovation Park across from the University, sponsored by Bob Sutton of Springboard Technologies. I’m pretty sure lunch will be provided. The purpose of the meeting will be to continue outlining the future directions of the Group as well as to see a demonstration of a printer called the Isis3D.

Tennant, Roy: Being a Savvy Social Media User

Tue, 2014-04-08 16:25

Recently my colleague Karen Smith-Yoshimura noted a blog post that demonstrates effective traits for using social media on behalf of an organization. Titled “Social Change”, the post documents the choices that Brooklyn Museum staff made recently to pare down their social media participation to venues that they find most effective. As they put it:

There comes a moment in every trajectory where one has to change course.  As part of a social media strategic plan, we are changing gears a bit to deploy an engagement strategy which focuses on our in-building audience, closely examines which channels are working for us, and aligns our energies in places where we feel our voice is needed, but allows for us to pull away where things are happening on their own.

This clearly indicates that it doesn’t make a lot of sense to simply get an account on every social media site out there and let ’er rip. For one, it is highly unlikely that your organization has the bandwidth to engage effectively on every platform. Another reason is that without the ability to engage effectively, it’s best not to even attempt it. Having a moribund presence on a social platform is worse than having no presence at all.

Therefore, being a savvy social media user means consciously reviewing your social media use periodically to:

  • Identify venues that are no longer useful to you and either shut down the account or put it on ice.
  • Identify venues that you find useful and maintain or increase your use of those venues.
  • Consider whether the nature of your engagement should change. For example, should you use more pictures to make your posts more engaging? Should you craft messages that are more intriguing than informative, thus potentially increasing visits to your site?

Kudos to the Brooklyn Museum for doing this right. Read the post, and understand what it means to be a thoughtful social media user. We should all be so savvy.

 

Image courtesy of Brantley Davidson, Creative Commons Attribution 2.0 Generic License

Library Hackers Unite blog: OpenSSL Vulnerability

Tue, 2014-04-08 15:51

SSL certificates can be compromised using a new OpenSSL vulnerability that shipped in currently supported versions of Debian, Ubuntu, CentOS, Fedora, the BSDs, etc.

Time to update your servers, regenerate certs, and, if you are being rigorous about it, go through the certificate revocation process for your old ones. BUT be careful that you have OpenSSL 1.0.1g available (or newer, should there be one). Versions prior to 1.0.1 are NOT vulnerable to Heartbleed. Though many of these old versions are vulnerable to other bugs, you would not want to update from 1.0.0 for the sole purpose of avoiding Heartbleed, only to land on 1.0.1e and thereby introduce the problem.
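
The version logic is easy to get wrong, so here is a sketch of a check (the `heartbleed_vulnerable` helper is hypothetical; it covers the 1.0.1 through 1.0.1f range described above and ignores the also-affected 1.0.2 betas):

```python
import re

def heartbleed_vulnerable(version):
    """Hypothetical helper: True for OpenSSL 1.0.1 through 1.0.1f.
    1.0.1g is fixed; pre-1.0.1 releases never had the bug.
    (The affected 1.0.2 betas are ignored in this sketch.)"""
    m = re.match(r"^1\.0\.1(?P<letter>[a-z]?)$", version)
    return bool(m) and (m.group("letter") or "") < "g"

for v in ("1.0.0", "1.0.1e", "1.0.1g"):
    print(v, heartbleed_vulnerable(v))
```

The comparison works because patch letters sort alphabetically: anything from bare 1.0.1 up through 1.0.1f falls before "g".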

Considering the widespread deployment of OpenSSL, it is hard to overstate how common this bug is online.

Miedema, John: Cognitive technologies can eliminate the silly amount of time we spend sifting through search results. ‘Whatson’ success criteria revisited.

Tue, 2014-04-08 03:21

My first build of ‘Whatson’ left me wanting. I felt I needed to better define how cognitive technology differed from good old-fashioned search, like Google. On one level, cognitive technology is, well, more mental. It uses more than keyword matching and regular expressions; but then so does Google. It uses language analysis; so does Google. It succeeds using very large unstructured data sets. So too Google. So what distinguishes cognitive technology like Watson, and must be wired into the bone of my Whatson?

Some of my posts pointed to the deeper feature I was struggling to find. An early post asked, “Natural Language Processing — Can we do away with Unique Identifiers?” Another asked, “You have just been beamed aboard the starship Enterprise. You can ask one question of the ship’s computer. What would it be?” A more conclusive post was entitled, “Good-bye database design and unique identifiers. Strong NLP and the singularity of Watson.”

I benefited from reading Final Jeopardy: Man vs. Machine and the Quest to Know Everything, by Stephen Baker. The difference between search and cognitive technology is the difference between a set of search results and a single correct answer, between looking and finding, seeking versus knowing. Google provides a “vague pointer” to the answer. Watson provides a single, precise answer. Many versus one.

There is rarely one right answer to a question. The essence of critical thinking is the ability to find other ways of thinking about a problem. Google stacks up a list of results and assigns a confidence level to each one. So do cognitive technologies. Unlike Google, cognitive technologies have to be good enough that the top answer is right most of the time. Watson made its public debut playing the game of Jeopardy. Part of its smarts was knowing when to pass a turn, but it had to be able to answer quickly and correctly most of the time or it would lose the game. Cognitive technology raises the bar. It must use more sophisticated language analysis to really understand a human question. It has to be better at pattern recognition. It must employ more thoughtful decision making and follow a big picture strategy.

We have become so used to Google that we are content with a list of search results. What would it be like if we could answer a question on the first try? Would that be it? Done? Not quite. A game can have one right answer, but not the real world. What cognitive technologies can do is eliminate the silly amount of time we spend sifting through search results. We could ask a question and get a satisfactory answer, and then, just like in a dialog with a person, we would ask another question. Beautiful.

Ronallo, Jason: Questions Asked During the Presentation Websockets For Real-time And Interactive Interfaces At Code4lib 2014

Mon, 2014-04-07 23:30

During my presentation on WebSockets, there were a couple points where folks in the audience could enter text in an input field that would then show up on a slide. The data was sent to the slides via WebSockets. It is not often that you get a chance to incorporate the technology that you’re talking about directly into how the presentation is given, so it was a lot of fun. At the end of the presentation, I allowed folks to anonymously submit questions directly to the HTML slides via WebSockets.

I ran out of time before I could answer all of the questions that I saw. I’ll try to answer them now.

Questions From Slides

You can see in the YouTube video at the end of my presentation (at 1h38m26s) the following questions coming in. (The full presentation starts here: https://www.youtube.com/watch?v=_8MJATYsqbY&feature=share&t=1h25m37s.) Some lines that came in were not questions at all. For those that are really questions, I’ll answer them now, even if I already answered them.

Are you a trained dancer?

No. Before my presentation I was joking with folks about how little of a presentation I’d have, at least for the interactive bits, if the wireless didn’t work well enough. Tim Shearer suggested I just do an interpretive dance in that eventuality. Luckily it didn’t come to that.

When is the dance?

There was no dance. Initially I thought the dance might happen later, but it didn’t. OK, I’ll admit it, I was never going to dance.

Did you have any efficiency problems with the big images and chrome?

On the big video walls in Hunt Library we often use Web technologies to create the content and Chrome for displaying it on the wall. For the most part we don’t have issues with big images or lots of images on the wall. But there’s a bit of a trick happening here. For instance, when we display images for My #HuntLibrary on the wall, they’re just images from Instagram, so only 600x600px. We initially didn’t know how these would look blown up on the video wall, but they end up looking fantastic. So you don’t necessarily need super high resolution images to make a very nice looking display.

Upstairs on the Visualization Wall, I display some digitized special collections images. While the possible resolution on the display is higher, the current effective resolution is only about 202px wide for each MicroTile. The largest image is then only 404px wide. In this case we are also using a Djatoka image server to deliver the images. Djatoka has an issue with the quality of its scaling between quality levels, where the algorithm chosen can make the images look very poor. How I usually work around this is to pick the quality level that is just above the width required to fit whatever design. Then the browser scales the image down and does a better job making it look OK than the image server would. I don’t know which of these factors affects the look on the Visualization Wall the most, but some images have a stair-stepping look on some lines. This especially affects line drawings with diagonal lines, while photographs can look totally acceptable. We’ll keep looking for ways to improve the look of images on these walls, especially in the browser.
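
The workaround of requesting the level just above the needed width can be sketched as follows, assuming a JPEG2000-style pyramid where each level halves the width of the one above it (Djatoka’s actual level numbering differs):

```python
def pick_level_width(full_width, target_width):
    """Width of the smallest pyramid level that is still >= target_width.
    Assumes each decomposition level halves the previous width."""
    width = full_width
    while width // 2 >= target_width:
        width //= 2
    return width

# e.g. a 3200px master displayed in a 404px slot: request the 800px level
# and let the browser do the final downscale.
print(pick_level_width(3200, 404))
```
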

Have you got next act after Wikipedia?

This question is referring to the adaptation of Listen to Wikipedia for the Immersion Theater. You can see video of what this looks like on the big Hunt Library Immersion Theater wall.

I don’t currently have solid plans for developing other content for any of the walls. Some of the work that I and others in the Libraries have done early on has been to help see what’s possible in these spaces and begin to form the cow paths for others to produce content more easily. We answered some big questions. Can we deliver content through the browser? What templates can we create to make this work easier? I think the next act is really for the NCSU Libraries to help more students and researchers to publish and promote their work through these spaces.

Is it lunchtime yet?

In some time zone somewhere, yes. Hopefully during the conference lunch came soon enough for you and was delicious and filling.

Could you describe how testing worked more?

I wish I could think of some good way to test applications that are destined for these kinds of large displays. There’s really no automated testing that is going to help here. BrowserStack doesn’t have a big video wall that they can take screenshots on. I’ve also thought that it’d be nice to have a webcam trained on the walls so that I could make tweaks from a distance.

Ronallo, Jason: Questions Asked During the Presentation Websockets For Real-time And Interactive Interfaces At Code4lib 2014

Mon, 2014-04-07 23:30

During my presentation on WebSockets, there were a couple points where folks in the audience could enter text in an input field that would then show up on a slide. The data was sent to the slides via WebSockets. It is not often that you get a chance to incorporate the technology that you’re talking about directly into how the presentation is given, so it was a lot of fun. At the end of the presentation, I allowed folks to anonymously submit questions directly to the HTML slides via WebSockets.

I ran out of time before I could answer all of the questions that I saw. I’ll try to answer them now.

Questions From Slides

You can see in the YouTube video at the end of my presentation (at 1h38m26s) the following questions came in. ([Full presentation starts here](https://www.youtube.com/watch?v=_8MJATYsqbY&feature=share&t=1h25m37s).) Some lines that came in were not questions at all. For those that are really questions, I’ll answer them now, even if I already answered them.

Are you a trained dancer?

No. Before my presentation I was joking with folks about how little of a presentation I’d have, at least for the interactive bits, if the wireless didn’t work well enough. Tim Shearer suggested I just do an interpretive dance in that eventuality. Luckily it didn’t come to that.

When is the dance?

There was no dance. Initially I thought the dance might happen later, but it didn’t. OK, I’ll admit it, I was never going to dance.

Did you have any efficiency problems with the big images and chrome?

On the big video walls in Hunt Library we often use Web technologies to create the content and Chrome for displaying it on the wall. For the most part we don’t have issues with big images or lots of images on the wall. But there’s a bit of a trick happening here. For instance, when we display images for My #HuntLibrary on the wall, they’re just images from Instagram, so only 600x600px. We initially didn’t know how these would look blown up on the video wall, but they end up looking fantastic. So you don’t necessarily need super high resolution images to make a very nice looking display.

Upstairs on the Visualization Wall, I display some digitized special collections images. While the possible resolution on the display is higher, the current effective resolution is only about 202px wide for each MicroTile. The largest image is then only 404px wide. In this case we are also using a Djatoka image server to deliver the images. Djatoka has an issue with the quality of its scaling between quality levels, where the algorithm chosen can make the images look very poor. How I usually work around this is to pick the quality level that is just above the width required to fit whatever design. Then the browser scales the image down and does a better job making it look OK than the image server would. I don’t know which of these factors affects the look on the Visualization Wall the most, but some images have a stair-stepping look on some lines. This especially affects line drawings with diagonal lines, while photographs can look totally acceptable. We’ll keep looking for how to improve the look of images on these walls, especially in the browser.
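The workaround described above can be sketched as a small helper: given the pyramid of widths an image server like Djatoka can deliver an image at, pick the smallest quality level at least as wide as the design slot, then let the browser scale it down. The level widths here are invented for illustration and are not Djatoka’s actual API.

```python
def pick_quality_level(level_widths, target_width):
    """Return the index of the smallest quality level whose width is at
    least target_width, falling back to the largest level available.

    level_widths: widths available at each quality level, smallest first.
    """
    for i, width in enumerate(level_widths):
        if width >= target_width:
            return i
    return len(level_widths) - 1

# Hypothetical pyramid for one image: each level doubles the width.
levels = [202, 404, 808, 1616]

# A 350px-wide slot gets the 404px level; the browser scales it down.
print(pick_quality_level(levels, 350))   # 1
print(pick_quality_level(levels, 1000))  # 3
print(pick_quality_level(levels, 5000))  # 3 (largest available)
```

Requesting the level just above the needed width keeps the downscaling in the browser, which handles it more gracefully than the server-side algorithm.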

Have you got next act after Wikipedia?

This question is referring to the adaptation of Listen to Wikipedia for the Immersion Theater. You can see video of what this looks like on the big Hunt Library Immersion Theater wall.

I don’t currently have solid plans for developing other content for any of the walls. Some of the work that I and others in the Libraries have done early on has been to help see what’s possible in these spaces and begin to form the cow paths for others to produce content more easily. We answered some big questions. Can we deliver content through the browser? What templates can we create to make this work easier? I think the next act is really for the NCSU Libraries to help more students and researchers to publish and promote their work through these spaces.

Is it lunchtime yet?

In some time zone somewhere, yes. Hopefully during the conference lunch came soon enough for you and was delicious and filling.

Could you describe how testing worked more?

I wish I could think of some good way to test applications that are destined for these kinds of large displays. There’s really no automated testing that is going to help here. BrowserStack doesn’t have a big video wall that they can take screenshots on. I’ve also thought that it’d be nice to have a webcam trained on the walls so that I could make tweaks from a distance.

But Chrome does have its screen emulation developer tools which were super helpful for this kind of work. These kinds of tools are useful not just for mobile development, which is how they’re usually promoted, but for designing for very large displays as well. Even on my small workstation monitor I could get a close enough approximation of what something would look like on the wall. Chrome will shrink the content to fit to the available viewport size. I could develop for the exact dimensions of the wall while seeing all of the content shrunk down to fit my desktop. This meant that I could develop and get close enough before trying it out on the wall itself. Being able to design in the browser has huge advantages for this kind of work.

I work at DH Hill Library while these displays are in Hunt Library. I don’t get over there all that often, so I would schedule some time to see the content on the walls when I happened to be over there for a meeting. This meant that there’d often be a lag of a week or two before I could get over there. This was acceptable as this wasn’t the primary project I was working on.

By the time I saw it on the wall, though, we were really just making tweaks for design purposes. We wanted the panels to the left and right of the Listen to Wikipedia visualization to fall along the bezel. We would adjust font sizes for how they felt once you’re in the space. The initial, rough cut work of modifying the design to work in the space was easy, but getting the details just right required several rounds of tweaks and testing. Sometimes I’d ask someone over at Hunt to take a picture with their phone to ensure I’d fixed an issue.

While it would have been possible for me to bring my laptop and sit in front of the wall to work, I personally didn’t find that to work well for me. I can see how it could work to make development much faster, though, and it is possible to work this way.

Race condition issues between devices?

Some spaces could allow you to control a wall from a kiosk and completely avoid any possibility of a race condition. When you allow users to bring their own device as a remote control to your spaces you have some options. You could allow the first remote to connect and lock everyone else out for a period of time. Because of how subscriptions and presence notifications work this would certainly be possible to do.
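The first-remote lock-out option could be sketched as a simple lease that expires after a fixed interval. This is only an illustration of the idea; the class, device ids, and timeout are hypothetical, not how our walls actually work.

```python
import time

class RemoteLease:
    """Grant wall control to one remote at a time, expiring after ttl seconds.

    A sketch of the lock-out approach: the first device to connect holds
    the lease; others are rejected until it expires.
    """

    def __init__(self, ttl=60, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock  # injectable for testing
        self.holder = None
        self.expires_at = 0.0

    def acquire(self, device_id):
        now = self.clock()
        # Grant if nobody holds the lease, it has expired, or the
        # current holder is renewing.
        if self.holder is None or now >= self.expires_at or self.holder == device_id:
            self.holder = device_id
            self.expires_at = now + self.ttl
            return True
        return False

lease = RemoteLease(ttl=60)
print(lease.acquire("phone-a"))  # True: first remote gets control
print(lease.acquire("phone-b"))  # False: locked out until the lease expires
```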

For Listen to Wikipedia we allow more than one user to control the wall at the same time. Then we use WebSockets to try to keep multiple clients in sync. Even though we attempt to quickly update all the clients, it is certainly possible that there could be race conditions, though it seems unlikely. Because we’re not dealing with persisting data, I don’t really worry about it too much. If one remote submits just after another but before it is synced, then the wall will reflect the last to submit. That’s perfectly acceptable in this case. If a client were to get out of sync with what is on the wall, then any change by that client would just be sent to the wall as is. There’s no attempt to make sure a client had the most recent, freshest version of the data prior to submitting.

While this could be an issue for other use cases, it does not adversely affect the experience here. We do an alright job keeping the clients in sync, but don’t shoot for perfection.
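The last-write-wins behavior described above can be sketched without any real sockets: each submission simply replaces the shared state and is echoed back to every connected client, including any client that was out of date. The names here are illustrative; the real application does this over WebSockets.

```python
class WallState:
    """Last-write-wins state shared between a wall and its remotes.

    No client needs the freshest version before submitting; whatever
    arrives last is what the wall shows, and every client is re-synced.
    Clients are modeled as plain lists acting as message sinks.
    """

    def __init__(self):
        self.state = {}
        self.clients = []

    def connect(self, client):
        self.clients.append(client)
        client.append(dict(self.state))  # sync the newcomer immediately

    def submit(self, update):
        self.state = dict(update)        # the last submission simply wins
        for client in self.clients:
            client.append(dict(self.state))

wall = WallState()
remote_a, remote_b = [], []
wall.connect(remote_a)
wall.connect(remote_b)
wall.submit({"muted": False})
wall.submit({"muted": True})   # arrives just after; it wins
print(remote_a[-1], remote_b[-1])
```

Because nothing is persisted, two near-simultaneous submissions only mean the wall briefly shows one before settling on the other, which matches the behavior described above.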

How did you find the time to work on this?

At the time I worked on these I had at least a couple other projects going. While waiting for someone else to finish something before I could make more progress, or on a Friday afternoon, I’d take a look at one of these projects for a little while. It meant the progress was slow, but these also weren’t projects that anyone was asking to be delivered on a deadline. I like to have a couple projects of this nature around. If I’ve got a little time, say before a meeting, but not enough for something else, I can pull one of these projects out.

I wonder, though, if this question isn’t more about why I did these projects. There were multiple motivations. A big motivation was to learn more about WebSockets and how the technology could be applied in the library context. I always like to have a reason to learn new technologies, especially Web technologies, and see how to apply them to other types of applications. And now that I know more about WebSockets I can see other ways to improve the performance and experience of other applications, in ways that might not be as overt in their use of the technology as these projects were.

The real-time digital collections view is integrated into an application I had already developed, so it did not take much to begin adding in some new functionality. We do a great deal of business analytics tracking for this application. The site has excellent SEO for the kind of content we have. I wanted to explore other types of metrics of our success.

The video wall projects allowed us to explore several different questions. What does it take to develop Web content for them? What kinds of tools can we make available for others to develop content? What should the interaction model be? What messaging is most effective? How should we kick off an interaction? Is it possible to develop bring your own device interactions? All of these kinds of questions will help us to make better use of these kinds of spaces.

Speed of an unladen swallow?

I think you’d be better off asking a scientist or a British comedy troupe.

Questions From Twitter

Mia (@mia_out) tweeted at 11:47 AM on Tue, Mar 25, 2014
@ostephens @ronallo out of curiosity, how many interactions compared to visitor numbers? And in-app or relying on phone reader?

sebchan (@sebchan) tweeted at 12:06 PM on Tue, Mar 25, 2014
@ostephens @ronallo (but) what are the other options for ‘interacting’?

This question was in response to how 80% of the interactions with the Listen to Wikipedia application are via QR code. We placed a URL and QR code on the wall for Listen to Wikipedia not knowing which would get the most use.

Unfortunately there’s no simple way I know of to kick off an interaction in these spaces when the user brings their own device. Once when there was a stable exhibit for a week we used a kiosk iPad to control a wall so that the visitor did not need to bring a device. We are considering how a kiosk tablet could be used more generally for this purpose. In cases where the visitor brings their own device it is more complicated. The visitor either must enter a URL or scan a QR code. We try to make the URLs short, but because we wanted to use some simple token authentication they’re at least 4 characters longer than they might otherwise be. I’ve considered using geolocation services as the authentication method, but they are not as exact as we might want them to be for this purpose, especially if the device uses campus wireless rather than GPS. We also did not want to have a further hurdle of asking for permission of the user and potentially being rejected. For the QR code the visitor must have a QR code reader already on their device. The QR code includes the changing token. Using either the URL or QR code sends the visitor to a page in their browser.
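A sketch of the short-URL-plus-token scheme mentioned above: a rotating token a few characters long is appended to the remote-control URL and baked into the QR code, so previously issued links stop working once the token changes. The base URL, alphabet, and token length are made up for illustration.

```python
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits

def new_token(length=4):
    """Rotating token; regenerating it invalidates previously issued URLs."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def remote_url(base, token):
    """URL a visitor types in, or that the QR code encodes."""
    return f"{base}/{token}"

token = new_token()
print(remote_url("https://example.org/ltw", token))
```

The token is what makes the URL "at least 4 characters longer" than a plain short URL would be; the trade-off is a little extra typing in exchange for stale links expiring on their own.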

Because the walls I’ve placed content on are in public spaces there is no good way to know how many visitors there are compared to the number of interactions. One interesting thing about the Immersion Theater is that I’ll often see folks standing outside of the opening to the space looking in, so even if there were some way to track folks going in and out of the space, that would not include everyone who has viewed the content.

Other Questions

If you have other questions about anything in my presentation, please feel free to ask. (If you submit them through the slides I won’t ever see them, so better to email or tweet at me.)

ALA Equitable Access to Electronic Content: Two billion for E-rate provides “2-for-1” benefits

Mon, 2014-04-07 22:04

Today, the American Library Association (ALA) called on (pdf) the Federal Communications Commission (FCC) to deploy newly identified E-rate program funding to boost library broadband access and alleviate historic shortfalls in funding for internal connections. In response to the FCC’s March Public Notice, the ALA seeks to leverage existing high-speed, scalable networks to increase library broadband speeds, improve area networks and further explore cost efficiencies that could be enabled through new consortium approaches.

ALA proposes:

  • Supporting school-library wide-area network partnerships to better leverage local E-rate investments and support community use of high-capacity connections during non-school hours;
  • Providing short-term funding focused on deployment where libraries are in close proximity to providers that can ensure scalable broadband at affordable construction charges and recurring costs over time; and
  • Advancing cost-efficient library network development with new diagnostic and technical support provided at the state level.

“ALA welcomes this new $2 billion investment to support broadband networks in our nations’ libraries and schools so we may meet growing community demand for services ranging from interactive online learning to videoconferencing to downloading and streaming increasingly digital collections,” said ALA President Barbara Stripling. “This infusion can provide ‘two-for-one’ benefits by advancing library broadband to and within our buildings immediately and continuing to improve the E-rate program in the near future.”

Read the ALA press release

The post Two billion for E-rate provides “2-for-1” benefits appeared first on District Dispatch.

Engard, Nicole: Bookmarks for April 7, 2014

Mon, 2014-04-07 20:30

Today I found the following resources and bookmarked them:

  • Lubuntu Lubuntu is a fast and lightweight operating system developed by a community of Free and Open Source enthusiasts. The core of the system is based on Linux and Ubuntu.
  • Lubuntu XP three flavors XP Themes for Lubuntu to help people transition to Linux.

Digest powered by RSS Digest

The post Bookmarks for April 7, 2014 appeared first on What I Learned Today....

Related posts:

  1. Can you say Kebberfegg 3 times fast
  2. What’s new in Ubuntu?
  3. Amazon’s bestselling laptop is open source!

Summers, Ed: Glass Houses

Mon, 2014-04-07 16:29

You may have noticed Brooklyn Museum’s recent announcement that they have pulled out of Flickr Commons. Apparently they’ve seen a “steady decline in engagement level” on Flickr, and decided to remove their content from that platform, so they can focus on their own website as well as Wikimedia Commons.

Brooklyn Museum announced three years ago that they would be cross-posting their content to Internet Archive and Wikimedia Commons. Perhaps I’m not seeing their current bot, but they appear to have two, neither of which has done an upload since March of 2011, based on their user activity. It’s kind of ironic that content like this was uploaded to Wikimedia Commons by Flickr Uploader Bot and not by one of their own bots.

The announcement stirred up a fair bit of discussion about how an institution devoted to the preservation and curation of cultural heritage material could delete all the curation that has happened at Flickr. The theory being that all the comments, tagging and annotation that has happened on Flickr has not been migrated to Wikimedia Commons. I’m not even sure if there’s a place where this structured data could live at Wikimedia Commons. Perhaps some sort of template could be created, or it could live in Wikidata?

Fortunately, Aaron Straup-Cope has a backup copy of Flickr Commons metadata, which includes a snapshot of the Brooklyn Museum’s content. He’s been harvesting this metadata out of concern for Flickr’s future, but surprise, surprise — it was an organization devoted to preservation of cultural heritage material that removed it. It would be interesting to see how many comments there were. I’m currently unpacking a tarball of Aaron’s metadata on an ec2 instance just to see if it’s easy to summarize.

But:

I’m pretty sure I’m living in one of those.

I agree with Ben:

@edsu @textfiles @dantobias @waxpancake @brooklynmuseum Yep. Unfortunately this is a blind spot even for orgs doing things relatively well

— Ben Fino-Radin (@benfinoradin), April 7, 2014

It would help if we had a bit more method to the madness of our own Web presence. Too often the Web is treated as a marketing platform instead of our culture’s predominant content delivery mechanism. Brooklyn Museum deserves a lot of credit for talking about this issue openly. Most organizations just sweep it under the carpet and hope nobody notices.

What do you think? Is it acceptable that Brooklyn Museum discarded the user contributions that happened on Flickr, and that all the people who happened to be pointing at said content from elsewhere now have broken links? Could Brooklyn Museum instead have decided to leave the content there, with a banner of some kind indicating that it is no longer actively maintained? Don’t lots of copies keep stuff safe?

Or perhaps having too many copies detracts from the perceived value of the currently endorsed places of finding the content? Curators have too many places to look, which aren’t synchronized, which add confusion and duplication. Maybe it’s better to have one place where people can focus their attention?

Perhaps these two positions aren’t at odds, and what’s actually at issue is a framework for thinking about how to migrate Web content between platforms. And different expectations about content that is self hosted, and content that is hosted elsewhere?
