Feed aggregator

LITA: Lost Art of Creativity

planet code4lib - Mon, 2017-08-07 15:55

The Lost Art series examines tech tools that encourage communication between libraries and their users. The Lost Art of Conversation looked at ways that podcasts can connect with the community, as well as the technology required to create a professional podcast.

This month is all about the 3-D printer, a tool that creates three-dimensional objects based on a digital design. A brief history of this technology: the first patent was issued in the 1980s, and today these printers can create anything from a kidney to a car.

A 2016 Pew Research Study found that 50% of those polled think 3-D printers are a good investment for libraries (up 5% from 2015) and this number goes up when people are broken out by race: “69% of blacks and 63% of Hispanics say libraries should definitely buy 3-D printers and other high-tech tools, compared with 44% of whites.”

Some people might wonder why libraries should invest in such an expensive technology. There are both symbolic and practical reasons for the investment. Open access has been a tenet of librarianship since the beginning of U.S. public libraries, when books were difficult to get if you were not wealthy or a member of the church. The current Digital Divide is real, and libraries continue to provide access to technology that people otherwise couldn’t afford. The 3-D printer is just another example of leveling the playing field. And the tool is not just for show; there are many practical applications.

One of the hands created at the Mastics-Moriches-Shirley Community Library in Suffolk County, NY

Earlier this month, a library in Suffolk County, New York, printed 15 prosthetic hands to donate to disabled children around the world. Custom hands cost as much as $10,000, while the library is able to create a hand using $48 in materials. Makerspaces, gathering places for users to share ideas and tools, offer 3-D printing along with Legos, sewing machines, and other tools to encourage creativity.  Some additional uses can be found in a 2016 article published in School Library Journal: “My Love/Hate Relationship with 3-D Printers in Libraries.”

ALA published a practical guide, Progress in the Making, for those new to the 3-D printing world or those considering the purchase.  Below are some highlights:

  1. Cost: printers range from $200 to $2,000, but there is no need to spend more than $1,500. Not sure which printer is best for your library? Check with colleagues or product reviews, like this one from Library Journal.
  2. Supplies: ALA recommends having 2-3 rolls of material in stock; these cost around $25.
  3. Space: most printers are the size of a desktop computer, so allocate the same desk space as a computer, plus storage for supplies and prototypes!
  4. Software: ALA recommends Tinkercad, free computer-aided design (CAD) software that runs in a browser. Some printers, like the LulzBot Mini, offer free, open-source software.
  5. Time: this depends on what is being created. A Rubik’s Cube takes around 5 hours to complete, whereas a larger or more intricate design will take longer.
  6. Issues: many printers come with warranties and customer service reps who can help troubleshoot by phone or email.

How is your library using 3-D printers? Any creative or interesting designs to share?

HangingTogether: A research and learning agenda for special collections and archives

planet code4lib - Mon, 2017-08-07 15:38

As we previously shared with you, we have been hard at work developing a research and learning agenda for special collections and archives. Here’s what has happened since Chela’s last post in May…

Chela continued conversations with the advisory group and also with many others in the field. The goal has been to develop a practitioner-based view of challenges and opportunities — and to include many voices in that process.

Workshop in Iowa City

We held a workshop at the RBMS Conference in June with an invited group of special collections and other library leaders to help refine an early draft of our agenda. That group was very generous with their time and helped improve the agenda considerably. Thanks to the Iowa City Public Library for being generous hosts!

Following the useful input and critique we gathered in Iowa City, we revised the document and released it for broader comment. We also held a larger workshop focused on the current draft in July at Archives 2017, the annual meeting of the Society of American Archivists.

Workshop at Archives 2017

In developing this agenda, which we see as important not only for OCLC Research but also for other organizations and stakeholders, we are taking a transparent, iterative approach. We are seeking substantial input from the OCLC Research Library Partnership, as well as the broader archives and special collections community.

We are inviting you today to play a role in the next steps of shaping the agenda, and asking for your feedback on the current draft by August 28th. We are happy to hear thoughts on any element of the draft agenda, but in particular we are interested in hearing comments on the following questions:

  1. Proposed Research Activities: do you have ideas for activities in areas that are left blank in the current draft? Are there other research activities or questions you would like to see addressed within each of the outlined topical areas of investigation?
  2. Relevant Existing Work in the Community: Is there current or early-stage work going on that addresses any of the topical areas of investigation and that we should be aware of?
  3. Priorities for OCLC: OCLC Research will be able to address only a small portion of the issues and activities outlined in the agenda, and wants to put its resources and expertise to best use. Which of the topical areas of investigation and proposed research activities would you most like to see OCLC take on, and where do you think they can make the most impact?

Please find the draft agenda either as a Google Doc or as a PDF. You are welcome to add comments in the Google Doc itself, or submit comments via email to RLPStrategy@oclc.org. We welcome feedback and comments through August 28th.

District Dispatch: A visit to Senator Tester’s field office

planet code4lib - Mon, 2017-08-07 15:30

I moved to Montana three years ago when I accepted a position as director of Montana State University’s School Library Media preparation program. Like any good librarian, the very first thing I did when I moved to Bozeman was obtain my library card. And like any good library advocate, the second thing I did was learn about Montana politics. Montana is an interesting place. It’s incredibly rural (our largest city is Billings, population 110,000). Just over one million people live in the Treasure State, and it takes about ten hours to travel across the state east to west. Accordingly, Montana is represented by our two Senators, Steve Daines and Jon Tester, and one at-large Representative, Greg Gianforte.

Senator Tester
Source: Thom Bridge

Senator Tester is the only working farmer in Congress. He lives in Big Sandy, population 598, where he produces organic wheat, barley, lentils, peas, millet, buckwheat and alfalfa. He butchers his own meat and brings it to Washington in an extra carry-on bag. A former teacher and school board member, he is a staunch advocate for public education. I looked at his background and priorities and found that Senator Tester has a good track record of supporting some of ALA’s key issues, such as open access to government information and the Library Services and Technology Act.

I’ve participated in ALA’s National Library Legislative Day as part of the Montana delegation annually since 2015, so I was familiar with Senator Tester’s Washington, DC-based staff. This summer, with the Senate’s August recess looming, I saw another opportunity to connect with the Senator’s field staff. In Bozeman and the surrounding area, the Senator’s staff regularly schedules outreach and listening sessions in public libraries. On July 27, I attended one of these listening sessions at the Bozeman Public Library. I came prepared with a short list of items that I wanted to cover. Because there were about eight people in the listening session, I wasn’t able to get specific about my issues, so I scheduled a one-on-one appointment the following week with the field office staff in Downtown Bozeman.

I met with Jenna Rhoads, who is a new field officer and a recent graduate of MSU’s political science program. We chatted briefly about people we knew in common and I congratulated her on her new position and recent graduation. I then spoke about several issues, keeping it short, to the point, and being very specific about my “asks.” These issues included:

  1. Congratulating Senator Tester for receiving the Madison Award from the American Library Association and thanking him for his support of the Library Services and Technology Act by signing the Dear Appropriator letter for the FY18 appropriations cycle. I asked that next year, the Senator please consider signing the Dear Appropriator letter on the Innovative Approaches to Literacy program as well.
  2. Thanking the field office for holding listening sessions in local public libraries and encouraging this partnership to continue.
  3. Asking that Senator Tester use his position on the Interior Appropriations subcommittee to assure continued funding for the U.S. Geological Survey when the Interior Appropriations bill is voted on after Labor Day. I provided Jenna with a copy of ALA’s related letter and asked that she pass it along to the appropriate Washington staffer.
  4. Inviting the Senator to continue to work in the long term on school library issues, particularly in rural and tribal schools, which Senator Tester already cares deeply about.

The meeting lasted about 30 minutes. Later that day I followed up with a thank you email, reiterating my issues and “asks.”

As the Senate goes into its traditional August recess, this is a very good time to schedule a meeting with your senator’s field office staff in your local area and perhaps even meet with your senator. I hope that you will take the opportunity to engage with your senators and their field office staff to advocate for important library issues. There are many resources on District Dispatch, the ALA Washington Office blog, that can help you home in on the issues that are important to your senator. Additionally, the ALA Washington Office’s Office of Government Relations staff are always willing to help you craft your message and give you valuable information about where your senator stands on library issues so you can make your case in the most effective manner.

I chose to take the time to meet with my senator’s field office staff because I believe in the power of civic engagement – and because I know that libraries change lives. I hope that you will take some time to connect with your senator’s field office this August.

The post A visit to Senator Tester’s field office appeared first on District Dispatch.

Terry Reese: MarcEdit 7 Alpha: the XML/JSON Profiler

planet code4lib - Sun, 2017-08-06 19:06

Metadata transformations can be really difficult.  While I try to make them easier in MarcEdit, the reality is, the program has really functioned for a long time as a facilitator of the process, handling the binary data processing and character set conversions that may be necessary.  But the heavy lifting, that’s all been on the user.  And if you think about it, there is a lot of expertise tied up in even the simplest transformation.  Say your library gets an XML file full of records from a vendor.  As a technical services librarian, I’d have to go through the following steps to remap that data into MARC (or something else):

  1. Evaluate the vended data file
  2. Create a metadata dictionary for the new XML file (so I know what each data element represents)
  3. Create a mapping between the data dictionary for the vended file and MARC
  4. Create the XSLT crosswalk that contains all the logic for turning this data into MARCXML
  5. Set up the process to move data from XML to MARC (a minimal sketch of applying the crosswalk follows this list)
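To make steps 4 and 5 concrete, here is a minimal sketch (not MarcEdit's internal code) of applying an XSLT crosswalk to a vendor file with Python's lxml. The file names and the crosswalk itself are hypothetical stand-ins for whatever your vendor and mapping actually look like:

```python
# Minimal sketch: apply a hand-written XSLT crosswalk to a vendor XML file.
# "vendor_records.xml" and "vendor_to_marcxml.xsl" are hypothetical names.
from lxml import etree

vendor_doc = etree.parse("vendor_records.xml")
crosswalk = etree.XSLT(etree.parse("vendor_to_marcxml.xsl"))

# Run the transformation and write out MARCXML.
marcxml = crosswalk(vendor_doc)
with open("records_marcxml.xml", "wb") as out:
    out.write(etree.tostring(marcxml, xml_declaration=True,
                             encoding="UTF-8", pretty_print=True))
```

From there, a tool like MarcEdit handles the MARCXML-to-MARC conversion and the character set issues mentioned above.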

 

All of these steps are really time consuming, but the development of the XSLT/XQuery to actually translate the data is the one that stops most people.  While there are many folks in the library technology space (and technical services spaces) who would argue that the ability to create XSLT is a vital job skill, let’s be honest, people are busy.  Additionally, there is a big difference between knowing how to create an XSLT and writing a metadata translation.  These things get really complicated and change all the time (XSLT is up to version 3), meaning that even if you learned how to do this years ago, those skills may be stale or may not translate into the current XSLT version.

Additionally, in MarcEdit, I’ve tried really hard to make the XSLT process as simple and straightforward as possible.  But, the reality is, I’ve only been able to work on the edges of this goal.  The tool handles the transformation of binary and character encoding data (since the XSLT engines cannot do that), it uses a smart processing algorithm to try to improve speed and memory handling while still enabling users to work with either DOM or Sax processing techniques.  And I’ve tried to introduce a paradigm that enables reuse and flexibility when creating transformations.  Folks that have heard me speak have likely heard me talk about this model as a wheel and spoke:

The idea behind this model is that as long as users create translations that map to and from MARCXML, the tool can automatically enable transformations to any of the known metadata formats registered with MarcEdit.  There are definitely tradeoffs to this approach (for sure, a direct, 1-to-1 translation would produce the best result, but it also requires more work and requires users to be experts in both the source and final metadata formats), but the benefit from my perspective is that I don’t have to be the bottleneck in the process.  Were I to hard-code or create 1-to-1 conversions, any deviation or local use within a spec would render the process unusable…and that was something that I really tried to avoid.  I’d like to think that this approach has been successful, and has enabled technical services folks to make better use of the marked up metadata that they are provided.

The problem is that as content providers have moved more of their metadata operations online, a large number have shifted away from standards-based metadata to locally defined metadata profiles.  This is challenging because these are one-off formats that really are only applicable to a publisher’s particular customers.  As a result, it’s really hard to find conversions for these formats.  The result of this, for me, is large numbers of catalogers/MarcEdit users asking for help creating these one-off transformations…work that I simply don’t have time to do.  And that can surprise folks.  I try hard to make myself available to answer questions.  If you find yourself on the MarcEdit listserv, you’ll likely notice that I answer a lot of the questions…I enjoy working with the community.  And I’m pretty much always ready to give folks feedback and toss around ideas when folks are working on projects.  But there is only so much time in the day, and only so much that I can do when folks ask for this type of help.

So, transformations are an area where I get a lot of questions.  Users faced with these publisher specific metadata formats often reach out for advice or to see if I’ve worked with a vendor in the past.  And for years, I’ve been wanting to do more for this group.  While many metadata librarians would consider XSLT or XQuery as required skills, these are not always in high demand when faced with a mountain of content moving through an organization.  So, I’ve been collecting user stories and outlining a process that I think could help: an XML/JSON Profiler.

So, it’s with a lot of excitement that I can write that MarcEdit 7 will include this tool.  As I say, it’s been a long time coming, and the goal is to reduce the technical requirements needed to process XML or JSON metadata.

XML/JSON Profiler

To create this tool, I had to decide how users would define their data for mapping.  Given that MarcEdit has a Delimited Text Translator for converting Excel data to MARC, I decided to work from this model.  The code produced does a couple of things:

  1. It validates the XML format to be profiled.  Mostly, this means that the tool is making sure that schemas are followed, namespaces are defined and discoverable, etc.
  2. It outputs data in MARC, MARCXML, or another XML format.
  3. It shifts the mapping of data from an XML file to a delimited-text-style mapping (though it’s not actually creating a delimited text file); a toy sketch of this mapping idea follows the list.
  4. Since the data is in XML, there is a general assumption that the data should be in UTF-8.
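As a toy illustration of that mapping idea (this is not MarcEdit's actual profile format, just the general shape of it), imagine a small table of XML element names mapped to MARC tags and subfields, applied to each record element; the vendor file, record XPath, and field names below are hypothetical:

```python
# Toy sketch of an XML-to-MARC profile: element names mapped to MARC tags.
from lxml import etree

RECORD_XPATH = "//record"
PROFILE = {
    "title":   ("245", "a"),
    "creator": ("100", "a"),
    "isbn":    ("020", "a"),
}

doc = etree.parse("vendor_records.xml")
for rec in doc.xpath(RECORD_XPATH):
    for element, (tag, subfield) in PROFILE.items():
        for node in rec.findall(element):
            if node.text and node.text.strip():
                # MarcEdit-style mnemonic line, e.g. =245  \\$aSome title
                print(f"={tag}  \\\\${subfield}{node.text.strip()}")
    print()  # blank line between records
```

The profiler wizard builds something conceptually similar from your answers to the interview questions, so you never have to write the mapping code yourself.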

 

Users can access the Wizard through the updated XML Functions Editor.  Users open MARC Tools and select Edit XML function list, and see the following:

I highlighted the XML Function Wizard.  I may also make this tool available from the main window.  Once selected, the program walks users through a basic reference interview:

Page 1:

 

From here, users just need to follow the interview questions.  Users will need a sample XML file that contains at least one record in order to create the mappings against.  As users walk through the interview, they are asked to identify the record element in the XML file, as well as map XML tags to MARC tags, using the same interface and tools found in the delimited text translator.  Users also have the option to map data directly to a new metadata format by creating an XML mapping file – or a representation of the XML output – which MarcEdit will then use to generate new records.

Once a new mapping has been created, the function will then be registered into MarcEdit, and be available like any other translation.  Whether this process simplifies the conversion of XML and JSON data for librarians, I don’t know.  But I’m super excited to find out.  This creates a significant shift in how users can interact with marked up metadata, and I think will remove many of the technical barriers that exist for users today…at least, for those users working with MarcEdit.

To give a better idea of what is actually happening, I created a demonstration video of an early version of this tool in action.  You can find it here: https://youtu.be/9CtxjoIktwM.  This provides an early look at the functionality and hopefully provides some context around the above discussion.  If you are interested in seeing how the process works, I’ve posted the code for the parser on my GitHub page here: https://github.com/reeset/meparsemarkup

Do you have questions, concerns?  Let me know.

 

–tr

John Miedema: Evernote Random. Get a Daily Email to a Random Note.

planet code4lib - Sun, 2017-08-06 15:25

I write in bits and pieces. I expect most writers do. I think of things at the oddest moments. I surf the web and find a document that fits into a writing project. I have an email dialog and know it belongs with my essay. It is almost never a good time to write, so I file everything. Evernote is an excellent tool for aggregating all of the bits in notebooks. I have every intention of getting back to them. Unfortunately, once the content is filed, it usually stays buried and forgotten.

I need a way to keep my content alive. The solution is a daily email, a link to a random Evernote note. I can read the note to keep it fresh in memory. I can edit the note, even just one change to keep it growing.

I looked around for a service but could not find one. I did find an IFTTT recipe for emailing a daily link to a random Wikipedia page; it works because Wikipedia has a page that automatically generates a random entry. In the end, I had to build a page that does a similar thing for Evernote.
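The Evernote Random script linked below is PHP; purely as an illustration of the core idea (pick a random note and produce a link to it), here is a rough Python sketch. It assumes the official Evernote SDK and a developer token, and the exact calls and the link format should be treated as approximate:

```python
# Rough sketch (illustrative only): pick a random note and print an in-app link.
# Assumes the official "evernote" Python SDK and a developer token.
import random

from evernote.api.client import EvernoteClient
from evernote.edam.notestore.ttypes import NoteFilter, NotesMetadataResultSpec

AUTH_TOKEN = "your-developer-token-here"  # placeholder

client = EvernoteClient(token=AUTH_TOKEN, sandbox=True)
note_store = client.get_note_store()

# Grab metadata (titles and GUIDs) for a batch of notes and pick one at random.
spec = NotesMetadataResultSpec(includeTitle=True)
result = note_store.findNotesMetadata(NoteFilter(), 0, 250, spec)
note = random.choice(result.notes)

# Build an in-app note link (evernote:///view/<userId>/<shardId>/<guid>/<guid>/).
user = client.get_user_store().getUser()
print(note.title)
print(f"evernote:///view/{user.id}/{user.shardId}/{note.guid}/{note.guid}/")
```

A daily IFTTT recipe or cron job can then email whatever this produces, which is essentially what the PHP script plus the IFTTT recipe described below do.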

You can set up Evernote Random too, but you need a few things:

  • An Evernote account, obviously.
  • A web host that supports PHP.
  • A bit of technical skill. I have already written the Evernote Random script that generates the random link. But you have to walk through some technical Evernote setup steps, like generating keys and testing your script in their sandbox.
  • The Evernote Random script from my GitHub Gist site. It has all the instructions.
  • An IFTTT recipe. That’s the easy part.
  • Take the script. Use it. Improve it. I would enjoy hearing from you.

Originally published at this website on April 1, 2015.

Hugh Rundle: Welcome to Ghost

planet code4lib - Sun, 2017-08-06 06:06

Hey! Welcome to Ghost, it's great to have you :)

We know that first impressions are important, so we've populated your new site with some initial Getting Started posts that will help you get familiar with everything in no time. This is the first one!

There are a few things that you should know up-front:
  1. Ghost is designed for ambitious, professional publishers who want to actively build a business around their content. That's who it works best for. If you're using Ghost for some other purpose, that's fine too - but it might not be the best choice for you.

  2. The entire platform can be modified and customized to suit your needs, which is very powerful, but doing so does require some knowledge of code. Ghost is not necessarily a good platform for beginners or people who just want a simple personal blog.

  3. For the best experience we recommend downloading the Ghost Desktop App for your computer, which is the best way to access your Ghost site on a desktop device.

Ghost is made by an independent non-profit organisation called the Ghost Foundation. We are 100% self funded by revenue from our Ghost(Pro) service, and every penny we make is re-invested into funding further development of free, open source technology for modern journalism.

The main thing you'll want to read about next is probably: the Ghost editor.

Once you're done reading, you can simply delete the default Ghost user from your team to remove all of these introductory posts!

Hugh Rundle: Using the Ghost editor

planet code4lib - Sun, 2017-08-06 06:06

Ghost uses a language called Markdown to format text.

When you go to edit a post and see special characters and colours intertwined between the words, those are Markdown shortcuts which tell Ghost what to do with the words in your document. The biggest benefit of Markdown is that you can quickly apply formatting as you type, without needing to pause.

At the bottom of the editor, you'll find a toolbar with basic formatting options to help you get started as easily as possible. You'll also notice that there's a ? icon, which contains more advanced shortcuts.

For now, though, let's run you through some of the basics. You'll want to make sure you're editing this post in order to see all the Markdown we've used.

Formatting text

The most common shortcuts are of course, bold text, italic text, and hyperlinks. These generally make up the bulk of any document. You can type the characters out, but you can also use keyboard shortcuts.

  • CMD/Ctrl + B for Bold
  • CMD/Ctrl + I for Italic
  • CMD/Ctrl + K for a Link
  • CMD/Ctrl + H for a Heading (Press multiple times for h2/h3/h4/etc)

With just a couple of extra characters here and there, you're well on your way to creating a beautifully formatted story.

Inserting images

Images in Markdown look just the same as links, except they're prefixed with an exclamation mark, like this:

![Image description](/path/to/image.jpg)

Most Markdown editors don't make you type this out, though. In Ghost you can click on the image icon in the toolbar at the bottom of the editor, or you can just click and drag an image from your desktop directly into the editor. Both will upload the image for you and generate the appropriate Markdown.

Important Note: Ghost does not currently have automatic image resizing, so it's always a good idea to make sure your images aren't gigantic files before uploading them to Ghost.

Making lists

Lists in HTML are a formatting nightmare, but in Markdown they become an absolute breeze with just a couple of characters and a bit of smart automation. For numbered lists, just write out the numbers. For bullet lists, just use * or - or +. Like this:

  1. Crack the eggs over a bowl
  2. Whisk them together
  3. Make an omelette

or

  • Remember to buy milk
  • Feed the cat
  • Come up with idea for next story
Adding quotes

When you want to pull out a particularly good excerpt in the middle of a piece, you can use > at the beginning of a paragraph to turn it into a Blockquote. You might've seen this formatting before in email clients.

A well placed quote guides a reader through a story, helping them to understand the most important points being made

All themes handle blockquotes slightly differently. Sometimes they'll look better kept shorter, while other times you can quote fairly hefty amounts of text and get away with it. Generally, the safest option is to use blockquotes sparingly.

Dividing things up

If you're writing a piece in parts and you just feel like you need to divide a couple of sections distinctly from each other, a horizontal rule might be just what you need. Dropping --- on a new line will create a sleek divider, anywhere you want it.

This should get you going with the vast majority of what you need to do in the editor, but if you're still curious about more advanced tips then check out the Advanced Markdown Guide - or if you'd rather learn about how Ghost taxonomies work, we've got an overview of how to use Ghost tags.

Hugh Rundle: Organising your content with tags

planet code4lib - Sun, 2017-08-06 06:06

Ghost has a single, powerful organisational taxonomy, called tags.

It doesn't matter whether you want to call them categories, tags, boxes, or anything else. You can think of Ghost tags a lot like Gmail labels. By tagging posts with one or more keywords, you can organise articles into buckets of related content.

Basic tagging

When you write a post, you can assign tags to help differentiate between categories of content. For example, you might tag some posts with News and other posts with Cycling, which would create two distinct categories of content listed on /tag/news/ and /tag/cycling/, respectively.

If you tag a post with both News and Cycling - then it appears in both sections.

Tag archives are like dedicated home-pages for each category of content that you have. They have their own pages, their own RSS feeds, and can support their own cover images and meta data.

The primary tag

Inside the Ghost editor, you can drag and drop tags into a specific order. The first tag in the list is always given the most importance, and some themes will only display the primary tag (the first tag in the list) by default. So you can add the most important tag which you want to show up in your theme, but also add a bunch of related tags which are less important.

News, Cycling, Bart Stevens, Extreme Sports

In this example, News is the primary tag which will be displayed by the theme, but the post will also still receive all the other tags, and show up in their respective archives.

Private tags

Sometimes you may want to assign a post a specific tag, but you don't necessarily want that tag appearing in the theme or creating an archive page. In Ghost, hashtags are private and can be used for special styling.

For example, if you sometimes publish posts with video content - you might want your theme to adapt and get rid of the sidebar for these posts, to give more space for an embedded video to fill the screen. In this case, you could use private tags to tell your theme what to do.

News, Cycling, #video

Here, the theme would assign the post publicly displayed tags of News, and Cycling - but it would also keep a private record of the post being tagged with #video.

In your theme, you could then look for private tags conditionally and give them special formatting:

{{#post}}
    {{#has tag="#video"}}
        ...markup for a nice big video post layout...
    {{else}}
        ...regular markup for a post...
    {{/has}}
{{/post}}

You can find documentation for theme development techniques like this and many more over on Ghost's extensive theme documentation.

Hugh Rundle: Managing Ghost users

planet code4lib - Sun, 2017-08-06 06:06

Ghost has a number of different user roles for your team

Authors

The base user level in Ghost is an author. Authors can write posts, edit their own posts, and publish their own posts. Authors are trusted users. If you don't trust users to be allowed to publish their own posts, you shouldn't invite them to Ghost admin.

Editors

Editors are the 2nd user level in Ghost. Editors can do everything that an Author can do, but they can also edit and publish the posts of others - as well as their own. Editors can also invite new authors to the site.

Administrators

The top user level in Ghost is Administrator. Again, administrators can do everything that Authors and Editors can do, but they can also edit all site settings and data, not just content. Additionally, administrators have full access to invite, manage or remove any other user of the site.

The Owner

There is only ever one owner of a Ghost site. The owner is a special user which has all the same permissions as an Administrator, but with two exceptions: The Owner can never be deleted. And in some circumstances the owner will have access to additional special settings if applicable — for example, billing details, if using Ghost(Pro).

It's a good idea to ask all of your users to fill out their user profiles, including bio and social links. These will populate rich structured data for posts and generally create more opportunities for themes to fully populate their design.

Hugh Rundle: Making your site private

planet code4lib - Sun, 2017-08-06 06:06

Sometimes you might want to put your site behind closed doors

If you've got a publication that you don't want the world to see yet because it's not ready to launch, you can hide your Ghost site behind a simple shared pass-phrase.

You can toggle this preference on at the bottom of Ghost's General Settings

Ghost will give you a short, randomly generated pass-phrase which you can share with anyone who needs access to the site while you're working on it. While this setting is enabled, all search engine optimisation features will be switched off to help keep the site off the radar.

Do remember though, this is not secure authentication. You shouldn't rely on this feature for protecting important private data. It's just a simple, shared pass-phrase for very basic privacy.

Hugh Rundle: Setting up your own Ghost theme

planet code4lib - Sun, 2017-08-06 06:06

Creating a totally custom design for your publication

Ghost comes with a beautiful default theme called Casper, which is designed to be a clean, readable publication layout and can be easily adapted for most purposes. However, Ghost can also be completely themed to suit your needs. Rather than just giving you a few basic settings which act as a poor proxy for code, we just let you write code.

There are a huge range of both free and premium pre-built themes which you can get from the Ghost Theme Marketplace, or you can simply create your own from scratch.

Anyone can write a completely custom Ghost theme, with just some solid knowledge of HTML and CSS

Ghost themes are written with a templating language called handlebars, which has a bunch of dynamic helpers to insert your data into template files. For example, {{author.name}} outputs the name of the current author.

The best way to learn how to write your own Ghost theme is to have a look at the source code for Casper, which is heavily commented and should give you a sense of how everything fits together.

  • default.hbs is the main template file, all contexts will load inside this file unless specifically told to use a different template.
  • post.hbs is the file used in the context of viewing a post.
  • index.hbs is the file used in the context of viewing the home page.
  • and so on

We've got full and extensive theme documentation which outlines every template file, context and helper that you can use.

If you want to chat with other people making Ghost themes to get any advice or help, there's also a #themes channel in our public Slack community which we always recommend joining!

FOSS4Lib Recent Releases: Fedora Repository - 4.7.4

planet code4lib - Sun, 2017-08-06 01:49

Last updated August 5, 2017. Created by Peter Murray on August 5, 2017.

Package: Fedora Repository
Release Date: Tuesday, August 1, 2017

FOSS4Lib Recent Releases: YAZ - 5.23.0

planet code4lib - Sun, 2017-08-06 01:35

Last updated August 5, 2017. Created by Peter Murray on August 5, 2017.

Package: YAZ
Release Date: Friday, August 4, 2017

FOSS4Lib Recent Releases: Zebra - 2.1.2

planet code4lib - Sun, 2017-08-06 01:34

Last updated August 5, 2017. Created by Peter Murray on August 5, 2017.

Package: Zebra
Release Date: Friday, August 4, 2017

FOSS4Lib Upcoming Events: Blacklight European Summit 2017

planet code4lib - Sun, 2017-08-06 00:55
Date: Monday, October 16, 2017 - 09:00 to Wednesday, October 18, 2017 - 21:00
Supports: Blacklight

Last updated August 5, 2017. Created by Peter Murray on August 5, 2017.

For details, see http://projectblacklight.org/european-summit-2017.

Evergreen ILS: Evergreen 3.0 development update #13: let the fest begin again

planet code4lib - Fri, 2017-08-04 21:02

Flying female Mallard duck by Martin Correns (CC-BY-SA on Wikimedia Commons) It is to be hoped that she is going after nice, juicy bugs to squash and eat.

Since the previous update last month, another 72 patches have made their way into Evergreen. Dominoes are toppling into place; new features added to master in July include:

  • Adding (back) the ability for patrons to place holds via the public catalog and have them be suspended for later activation. (bug 1189989)
  • Teaching MARC export and the Z39.50 server to include call number prefixes and suffixes. (bugs 1692106 and 1705478)
  • A new feature that adds the ability to apply tags to copy records and display them as digital bookplates. (bug 1673857)
  • A number of improvements to the web staff interface.

Next week will be the second feedback fest, a week in which Evergreen developers focus on providing feedback on active code submissions. At the moment, 42 pull requests are targeted for review, many of which deal with major features on the Evergreen 3.0 road map. Some of the larger pull requests include the web staff client’s serials module and its offline circulation module, batch patron editing, catalog search improvements, improvements to Evergreen’s ability to handle consortia that cross time zones, configurable copy alerts, and a new popularity parameter for in-house use.

Speaking of concentrated community efforts, a Bug Squashing Week ran from 17 to 21 July. As reported by the wrangler of the bug squashing week, Terran McCanna, a total of 145 updates to existing bug reports were made, with 22 signoffs and 13 patches merged. The next Bug Squashing Week will occur on 11 to 15 September.

A couple important deadlines for 3.0 are fast approaching, with feature slush scheduled for 18 August and feature freeze for 1 September.

Duck trivia

The U.K. has a number of canals. The walkways and towpaths alongside them tend to be a bit narrow and are used by pedestrians, cyclists… and ducks. How to avoid duck paillard on the pavement? The Canal and River Trust will be painting duck lanes on the walkways to encourage folks to slow down.

This bit of trivia was contributed by Irene Patrick of the North Carolina Government & Heritage Library. Thanks!

Submissions

Updates on the progress to Evergreen 3.0 will be published every Friday until general release of 3.0.0. If you have material to contribute to the updates, please get them to Galen Charlton by Thursday morning.

District Dispatch: Bi-partisan bill would support library wi-fi

planet code4lib - Fri, 2017-08-04 15:35

Earlier this week, the Advancing Innovation and Reinvigorating Widespread Access to Viable Electromagnetic Spectrum (AIRWAVES) Act, S. 1682, was introduced by Senators Cory Gardner (R-CO) and Maggie Hassan (D-NH). As described by Sen. Hassan, “The bipartisan AIRWAVES Act will help ensure that there is an adequate supply of spectrum for licensed and unlicensed use, which in turn will enhance wireless services to our people, stimulate our economy, and spur innovation.” Senator Gardner stated, “This legislation offers innovative ways to avoid a spectrum crunch, pave the way for 5G services, and provide critical resources to rural America.” The legislation would encourage a more efficient use of spectrum, the airwaves over which signals and data travel, while helping to close the urban-rural digital gap.

Image source: http://www.yourmoney.com

In a statement on the new bill, ALA President Jim Neal said:

The American Library Association applauds Senators Cory Gardner (R-CO) and Maggie Hassan (D-NH) on the introduction of the AIRWAVES Act and supports their efforts to increase the amount of unlicensed spectrum available to power libraries’ Wi-Fi networks. Access to Wi-Fi is important to virtually every patron of the nearly 120,000 school, public and higher education libraries in the United States. More spectrum for library Wi-Fi means more public access to the internet for everyone from school children to entrepreneurs, job seekers and scientists. The AIRWAVES Act will mean that millions more people, especially those in rural areas, will benefit from the library programs and services increasingly essential to their and the nation’s success in the digital age.

Specifically, the AIRWAVES Act would direct the Federal Communications Commission to free up unused or underused spectrum currently assigned to government users for commercial providers to expand their broadband offerings and for the expansion of services like Wi-Fi. The auctioned spectrum would include low-band, mid-band, and high-band frequencies, enabling the deployment of a variety of new wireless technologies. It also includes a proposal to auction other spectrum and would require that 10 percent of the auction proceeds be dedicated to funding wireless infrastructure projects in unserved and underserved rural areas.

Finally, the bill requires the Government Accountability Office (GAO) to report on the efficiency of the transfer of federal money from the Spectrum Relocation Fund to better encourage federal agencies to make additional spectrum available.

ALA urges Congress to support the AIRWAVES Act’s creative, bi-partisan approach to spectrum use and rapid action on this important legislation.

The post Bi-partisan bill would support library wi-fi appeared first on District Dispatch.

District Dispatch: Where’s CopyTalk?

planet code4lib - Fri, 2017-08-04 13:00

We are on a summer hiatus! CopyTalk webinars will start up again in September. In the meantime, you can listen to those webinars you missed in the archive!

Brought to you by an enthusiastic ALA committee—OITP Copyright Education Committee—upcoming webinars will address music copyright, copyright tutorials on music, and rights reversion with the Authors Alliance. We would love your suggestions for future topics! Contact Patrick Newell pnewell@csuchico.edu or me crussell@alawash.org with your ideas.

CopyTalks are one hour in duration and scheduled on the first Thursday of every month at 2 pm Eastern (11 am Pacific) and, of course, are free. The webinar address is always ala.adobeconnect.com/copytalk. Sign in as a guest. You’re in!

Copyright Tools! These are fun!

Our copyright education committee provides fun copyright tools—guides to help you respond to common copyright questions, like “is this a fair use?” Michael Brewer, committee member extraordinaire, created these tools, which are now in digital form—the 108 Spinner (library reproductions), the public domain slider, the copyright genie (doesn’t she sound cute?), exceptions for instructors and the very popular fair use evaluator, available for download. All tools are available at the Copyright Advisory Network (CAN).

Our most recent tools are the fair use foldy thingys that were a big hit at Annual. You will be enthralled playing with the foldy thingy – see the video! They are available for bulk purchase from the manufacturer.

We also created fair use factor coasters, one coaster for each factor. Collect all four! Each includes a quote from a court case that illuminates the meaning and importance of each factor. Tested for quality, the coasters are functional and work well with cold bottles of beer. Collect yours at a copyright conference in your area!

Talk about service!

Don’t forget to visit the Copyright Advisory Network! Post your copyright question to the question forum and get a quick response from a copyright expert. We don’t provide legal advice but have informed opinions and are willing to share our expertise. Get on the CAN!

The post Where’s CopyTalk? appeared first on District Dispatch.

David Rosenthal: Preservation Is Not A Technical Problem

planet code4lib - Thu, 2017-08-03 15:00
As I've always said, preserving the Web and other digital content for posterity is an economic problem. With an unlimited budget, collection and preservation aren't a problem. The reason we're collecting and preserving less than half the classic Web of quasi-static linked documents, and much less of "Web 2.0", is that no-one has the money to do much better.

The budgets of libraries and archives, the institutions tasked with acting as society's memory, have been under sustained attack for a long time. I'm working on a talk and I needed an example. So I drew this graph of the British Library's annual income in real terms (year 2000 pounds). It shows that the Library's income has declined by almost 45% in the last decade.

Memory institutions that can purchase only half what they could 10 years ago aren't likely to greatly increase funding for acquiring new stuff; it's going to be hard for them just to keep the stuff (and the staff) they already have.

Below the fold, the data for the graph and links to the sources.

The nominal income data was obtained from the British Library's Annual Report series. The real income was computed from it using the Bank of England's official inflation calculator. Here is the data from which the graph was drawn:
Year    Nominal GBP (millions)    Year 2000 GBP (millions)
2016    118.0                     76.39
2015    117.8                     77.59
2014    118.9                     79.09
2013    124.7                     84.90
2012    126.1                     88.46
2011    140.1                     101.44
2010    137.9                     105.05
2009    142.2                     113.32
2008    140.5                     111.37
2007    141.2                     116.39
2006    159.2                     136.85
2005    136.9                     121.44
2004    121.6                     110.92
2003    119.5                     112.25
2002    119.2                     115.20
2001    120.9                     118.80
2000    110.2                     110.20
1999    112.3                     115.62
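As a quick check on the "almost 45%" figure, here is the arithmetic on the real (year 2000) values from the table:

```python
# Decline in the British Library's real income, 2006 to 2016 (values from the table above).
real_2006 = 136.85  # millions of year-2000 pounds
real_2016 = 76.39

decline = (real_2006 - real_2016) / real_2006
print(f"Real-terms decline, 2006-2016: {decline:.1%}")  # about 44.2%
```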

Library of Congress: The Signal: Collections as Data and National Digital Initiatives

planet code4lib - Thu, 2017-08-03 12:54

This is the text of my talk from the Collections as Data: IMPACT event. Once the videos of the individual talks are processed and available, we’ll share those with you here — in the meanwhile, you can watch starting at minute 6:45 in the video of the entire event.

Welcome. Published by Currier & Ives. <https://www.loc.gov/item/2002698194/>.

Welcome to Collections as Data! When we hosted our first Collections as Data meeting last year, we explored issues around computationally processing, analyzing, and presenting digital collections. The response overwhelmed us. The topic seemed to strike a chord with many of our colleagues and intersected with other efforts in the field in a fun way. But still, after a year of talking about this, we’re struggling to explain it in a way that is tangible to friends and colleagues without direct experience. We’re calling this second iteration “Collections as Data: IMPACT” because we want to get to the heart of why this type of work matters. We’ve invited speakers to tell stories about using data to better their communities and the world.

And in this spirit, I’m going to kick things off with a short story about computation applied to library collections when computers were people doing the calculating, not machines in our pockets. I hope that this will connect the work we’ll discuss today to a longer history to illustrate the power of computation when it’s applied to library collections.

Charles Willson Peale’s portrait of James Madison https://www.loc.gov/item/95522332/

Portrait of Alexander Hamilton. The Knapp Co. https://www.loc.gov/item/2003667031/
The Federalist Papers are a collection of essays written by John Jay, Alexander Hamilton, and James Madison. They were published under a pseudonym, Publius, to persuade colonial citizens to ratify the Constitution. When changing public opinion converted these documents from anonymous trolling to foundational to our democracy, we started to get a sense of the authorship of the papers.

After the dust settled, 12 remained in dispute between Hamilton and Madison. Hamilton pretty much said they were joint papers, Madison said he didn’t have much to do with them, and lots of people thought Madison actually wrote them. Why so much finger pointing? These were propaganda pieces, and sometimes the authors held public positions that were different from the ones they presented in the papers.

Historical opinion about who wrote what swung back and forth, depending on new evidence that came forward or on the popularity of the given historical figure at the time. As always there is a really interesting story about the sources of historical evidence for authorship. If you’re curious about that, I encourage you to talk to your local librarian.

In 1944, Douglass Adair, an American academic, concluded that Madison most likely wrote the disputed papers. But the historical evidence was modest, so he sought another way of making an analysis.

 

He talked to two statisticians to see if there was a computational way to determine authorship.

Frederick Mosteller and David Wallace were intrigued by the idea, and decided to take on the challenge. They thought maybe average sentence length would be a possible indicator, so they laboriously counted sentence length for the known Hamilton and Madison papers, performed some analysis (for example, they had to determine whether quoted sentences counted toward the averages), and did the calculations.

They came up with an average length of 34.5 words for Hamilton and 34.6 words for Madison. So, that wasn’t going to work. Then they tried standard deviation: although the average lengths were essentially the same, maybe one author wrote mostly average-length sentences while the other wrote lots of teeny tiny sentences and lots of long ones. Unfortunately, that effort turned out to be a bust as well. So they shelved the project.
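In modern terms, that first, unsuccessful test is a few lines of code. Here is a quick sketch of computing average sentence length and its standard deviation; the sample text is a made-up placeholder:

```python
# Quick sketch of the sentence-length test: mean length and standard deviation.
import re
import statistics

def sentence_lengths(text):
    # Split on sentence-ending punctuation and count words per sentence.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

sample = ("This is a short sentence. Here is a somewhat longer sentence that "
          "keeps going for a while. Done.")
lengths = sentence_lengths(sample)
print("mean:", statistics.mean(lengths))
print("standard deviation:", statistics.stdev(lengths))
```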

A few years later, Douglass Adair reached back out to say that he had found a tool that could be useful. He found that Hamilton uses the word “while,” and Madison uses “whilst.” This fact in itself is not enough to determine authorship: the word isn’t used enough in the papers for that to work, and it could have been introduced during the editing process. But it gave them somewhere to start.

The statisticians counted word usage in a screening set of known Madison and Hamilton papers, which I imagine was about as fun as watching paint dry. From that they created a frequency analysis of words used in each author’s writing. They then determined which words were predictable discriminators and which were what they called “dangerously contextual,” because they were correlated with a certain subject favored by a particular author.

They ended up with 117 words to analyze.

Buck, Matt. Maths in neon at Autonomy in Cambridge. 2009. Photograph. Retrieved from Flickr, https://www.flickr.com/photos/mattbuck007/3676624894/

 

Using Bayesian statistics, they determined probability of authorship based on the number of times the words appear. If you’re interested in any of this, I encourage you to read the book — it’s very readable and kind of fun.
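As a toy sketch of that kind of scoring (not Mosteller and Wallace's actual model, and with made-up texts and a tiny marker-word list), you can compare smoothed function-word frequencies for each author against a disputed text:

```python
# Toy authorship scoring: smoothed marker-word frequencies per author, then a
# log-likelihood for the disputed text. Texts and the word list are made up.
import math
from collections import Counter

MARKER_WORDS = ["while", "whilst", "upon", "on", "by", "to"]

def rates(text, smoothing=0.5):
    # Smoothed relative frequencies of the marker words in a known text.
    words = [w for w in text.lower().split() if w in MARKER_WORDS]
    total = len(words) + smoothing * len(MARKER_WORDS)
    counts = Counter(words)
    return {w: (counts[w] + smoothing) / total for w in MARKER_WORDS}

def log_likelihood(text, author_rates):
    words = [w for w in text.lower().split() if w in MARKER_WORDS]
    return sum(math.log(author_rates[w]) for w in words)

hamilton_known = "upon the whole while the states ..."   # placeholder text
madison_known  = "on the whole whilst the states ..."    # placeholder text
disputed       = "whilst the power rests on the states"  # placeholder text

scores = {
    "Hamilton": log_likelihood(disputed, rates(hamilton_known)),
    "Madison":  log_likelihood(disputed, rates(madison_known)),
}
print(max(scores, key=scores.get), scores)
```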

They concluded that “Our data independently supplement those of the historian. On the basis of our data alone, Madison is extremely likely, in the sense of degree of belief to have written all the disputed Federalists.”

None of this was digital. This was all ink on paper, so it’s one of my favorite examples of using collections as data.

What does digital do? It democratizes this kind of analysis, and makes being wrong much less expensive. Which is great! Because we know that being wrong is the cost of being right.

Our heroes in this story spent years inventing this analysis, but much of that time was spent laboriously counting word frequencies and calculating by hand. Their data set was limited by human scale. Imagine what we could do with lots of data and faster analysis.

Detroit Publishing Co,. City of Detroit III, Gothic room. Photograph. Retrieved from the Library of Congress, https://www.loc.gov/item/det1994012547/PP/

 

This sort of linguistic analysis is now very common. A few years ago, a computer scientist, Patrick Juola, got a call from a reporter asking him if he could show that Robert Galbraith was really J.K. Rowling. He did. And his code is open source for anyone to use.

Collections as Data graphic created by Natalie Buda Smith, User Experience Manager, Library of Congress https://blogs.loc.gov/thesignal/2016/10/user-experience-ux-design-in-libraries-an-interview-with-natalie-buda-smith/

 

This brings us back to today. What excites me about the possibilities inherent in Collections as Data is that we can now make these kinds of intellectual breakthroughs on our laptops. People have been doing this kind of analysis — computational analysis of collections — for a long time. But now, for the first time, we have huge data sets to train our algorithms. We can figure stuff out without having to hand count words in sentences.

And this means that discovery and play with collections materials become even more available to what I consider a core constituency of the Library: the informed and curious.

We’ve invited some academic luminaries here today, and we’re so proud they could join us. We’re learning so much from the ground-breaking work of our colleagues in academic libraries, like our friends working on the IMLS-grant-funded “Always Already Computational: Library Collections as Data” project. But many of our speakers, and many of you out there, don’t have access to well-funded institutional libraries. We hope that you will consider us an intellectual home for exploration.

 

Canaries & jewels / Marston Ream 1874. https://www.loc.gov/item/2003677675/.

 

We, in my group, National Digital Initiatives, are very inspired by our new boss, Dr. Carla Hayden. She is leading us strongly in a direction toward opening up the collection as much as possible. She talks about this place, lovingly, like the American people’s treasure chest that she is helping to crack open. We in NDI see our responsibility as helping make that happen for our digital and digitized collections. I’d like to tell you about a few things we’re working on.

The first is crowdsourcing.

Screenshot of a development version of Beyond Words, an application created by Tong Wang (with support from Repository Development/OCIO and SGP/LS) on the Scribe platform

 

We’re working to expand the Library’s ability to learn from our users on more digital platforms. Here’s a screenshot of an application that’s still in development, built by Tong Wang, an engineer on the Repository Team at the Library of Congress, while he was an Innovator in Residence in NDI. It invites people to identify cartoons or photographs in historic newspapers and to update the captions. This will enhance findability and also gets us data sets of images that are useful for scholarship. For example, we could create a gallery of cartoons published during WWI.

We’re excited to announce this will be launching late this summer (in beta). This is not the only crowdsourcing project we’re working on (but I’ll save those details for another time). We hope that this work will supplement the other programs LC is using to crowdsource its collections, including our presence on Flickr, the American Archive of Public Broadcasting Fix It game, and efforts in the Law Library and World Digital Library.

 

Zwaard, Kate. A view of the Capitol on my way home from work. 2016. Photograph

 

Our CIO, Bud Barton, announced at this year’s Legislative Data and Transparency Conference that LC will be hosting a competition for creative use of Legislative data. We’re still working on the details, and we should have more to share soon.

 

National Digital Initiatives hosts a “Hack To Learn” event, May 17, 2017. Photo by Shawn Miller.

 

I’m thrilled to let you all know that we’ll be launching labs.loc.gov in a few months. In addition to giving a platform to all Library staff for play and experimentation, Labs will be NDI’s home — where we’ll host things like results from our hackathons (like the one pictured here) and experiments by our Innovators in Residence (more on that in a bit).

And now, selfishly, since you’re trapped here, I’d like to share a few LC things that you might find useful.

 

Palmer, Alfred T, [Operating a hand drill at the North American Aviation, Inc.,]. Oct. Photograph. https://www.loc.gov/item/fsa1992001189/PP/

There are a couple of interesting jobs posted right now, and I’d like to encourage you all to apply and share widely. Keep checking back! We need your good brains here, helping us.

Highsmith, Carol M. Great Hall, second floor, north. Library of Congress Thomas Jefferson Building, Washington, D.C. [Between 1980 and 2006] Photograph. https://www.loc.gov/item/2011632164/.

Speaking of stuff to apply to, please consider coming here for a short period as a Kluge fellow in digital studies! It’s a paid fellowship for research, using LC resources, into the impact of the digital revolution on society, culture, and international relations. Applications are due December 6th. More than one can be awarded each year, so share with your friends.

V. Donaghue. [WPA Art Project]. https://www.loc.gov/item/98509756/.

Lastly, I want to mention a program NDI has been working on to bring exciting people to the Library of Congress for short-term, high-impact projects: we call it the Innovators in Residence program. We’re wrapping up some details on this year’s fellowship, and I’ll have more to announce soon. Our vision for the innovator in residence program is to bring bright minds and new blood to the library who can help create more access points to the collection.

Bendorf, Oliver, artist. What does it mean to assemble the whole? 2016. Mixed Media.

 

So thanks for coming! As I mentioned, we’re working to launch our website, which will make what we’re working on much easier to follow. In the meanwhile, you can always keep up with the latest news on our blog.

Enjoy the program. And, if you’re using social media today, please use the hashtag #asData

 
