
Hugh Rundle: Setting up your own Ghost theme

planet code4lib - Sun, 2017-08-06 06:06

Creating a totally custom design for your publication

Ghost comes with a beautiful default theme called Casper, which is designed to be a clean, readable publication layout and can be easily adapted for most purposes. However, Ghost can also be completely themed to suit your needs. Rather than just giving you a few basic settings which act as a poor proxy for code, we just let you write code.

There is a huge range of both free and premium pre-built themes which you can get from the Ghost Theme Marketplace, or you can simply create your own from scratch.

Anyone can write a completely custom Ghost theme, with just some solid knowledge of HTML and CSS

Ghost themes are written with a templating language called Handlebars, which has a bunch of dynamic helpers to insert your data into template files. For example, {{author.name}} outputs the name of the current author.

The best way to learn how to write your own Ghost theme is to have a look at the source code for Casper, which is heavily commented and should give you a sense of how everything fits together.

  • default.hbs is the main template file; all contexts will load inside this file unless specifically told to use a different template.
  • post.hbs is the file used in the context of viewing a post.
  • index.hbs is the file used in the context of viewing the home page.
  • and so on

We've got full and extensive theme documentation which outlines every template file, context and helper that you can use.

If you want to chat with other people making Ghost themes to get any advice or help, there's also a #themes channel in our public Slack community which we always recommend joining!

FOSS4Lib Recent Releases: Fedora Repository - 4.7.4

planet code4lib - Sun, 2017-08-06 01:49

Last updated August 5, 2017. Created by Peter Murray on August 5, 2017.

Package: Fedora Repository
Release Date: Tuesday, August 1, 2017

FOSS4Lib Recent Releases: YAZ - 5.23.0

planet code4lib - Sun, 2017-08-06 01:35

Last updated August 5, 2017. Created by Peter Murray on August 5, 2017.

Package: YAZ
Release Date: Friday, August 4, 2017

FOSS4Lib Recent Releases: Zebra - 2.1.2

planet code4lib - Sun, 2017-08-06 01:34

Last updated August 5, 2017. Created by Peter Murray on August 5, 2017.

Package: Zebra
Release Date: Friday, August 4, 2017

FOSS4Lib Upcoming Events: Blacklight European Summit 2017

planet code4lib - Sun, 2017-08-06 00:55
Date: Monday, October 16, 2017 - 09:00 to Wednesday, October 18, 2017 - 21:00
Supports: Blacklight

Last updated August 5, 2017. Created by Peter Murray on August 5, 2017.

For details, see http://projectblacklight.org/european-summit-2017.

Evergreen ILS: Evergreen 3.0 development update #13: let the fest begin again

planet code4lib - Fri, 2017-08-04 21:02

Flying female Mallard duck by Martin Correns (CC-BY-SA on Wikimedia Commons) It is to be hoped that she is going after nice, juicy bugs to squash and eat.

Since the previous update last month, another 72 patches have made their way into Evergreen. Dominoes are toppling into place; new features added to master in July include:

  • Adding (back) the ability for patrons to place holds via the public catalog and have them be suspended for later activation. (bug 1189989)
  • Teaching MARC export and the Z39.50 server to include call number prefixes and suffixes. (bugs 1692106 and 1705478)
  • A new feature adding the ability to apply tags to copy records and display them as digital bookplates. (bug 1673857)
  • A number of improvements to the web staff interface.

Next week will be the second feedback fest. The feedback fest is a week during which Evergreen developers will be focusing on providing feedback on active code submissions. At the moment, 42 pull requests are being targeted for review, many of which deal with major features on the Evergreen 3.0 road map. Some of the larger pull requests include the web staff client’s serials module and its offline circulation module, batch patron editing, catalog search improvements, improvements to Evergreen’s ability to handle consortia that cross time zones, configurable copy alerts, and a new popularity parameter for in-house use.

Speaking of concentrated community efforts, a Bug Squashing Week ran from 17 to 21 July. As reported by the wrangler of the bug squashing week, Terran McCanna, a total of 145 updates to existing bug reports were made, with 22 signoffs and 13 patches merged. The next Bug Squashing Week will occur from 11 to 15 September.

A couple of important deadlines for 3.0 are fast approaching, with feature slush scheduled for 18 August and feature freeze for 1 September.

Duck trivia

The U.K. has a number of canals. The walkways and towpaths alongside them tend to be a bit narrow and are used by pedestrians, cyclists… and ducks. How to avoid duck paillard on the pavement? The Canal and River Trust will be painting duck lanes on the walkways to encourage folks to slow down.

This bit of trivia was contributed by Irene Patrick of the North Carolina Government & Heritage Library. Thanks!

Submissions

Updates on the progress to Evergreen 3.0 will be published every Friday until the general release of 3.0.0. If you have material to contribute to the updates, please get it to Galen Charlton by Thursday morning.

District Dispatch: Bi-partisan bill would support library wi-fi

planet code4lib - Fri, 2017-08-04 15:35

Earlier this week, the Advancing Innovation and Reinvigorating Widespread Access to Viable Electromagnetic Spectrum (AIRWAVES) Act, S. 1682, was introduced by Senators Cory Gardner (R-CO) and Maggie Hassan (D-NH). As described by Sen. Hassan, “The bipartisan AIRWAVES Act will help ensure that there is an adequate supply of spectrum for licensed and unlicensed use, which in turn will enhance wireless services to our people, stimulate our economy, and spur innovation.” Senator Gardner stated, “This legislation offers innovative ways to avoid a spectrum crunch, pave the way for 5G services, and provide critical resources to rural America.” The legislation would encourage a more efficient use of spectrum, the airwaves over which signals and data travel, while helping to close the urban-rural digital gap.

In a statement on the new bill, ALA President Jim Neal said:

The American Library Association applauds Senators Cory Gardner (R-CO) and Maggie Hassan (D-NH) on the introduction of the AIRWAVES Act and supports their efforts to increase the amount of unlicensed spectrum available to power libraries’ Wi-Fi networks. Access to Wi-Fi is important to virtually every patron of the nearly 120,000 school, public and higher education libraries in the United States. More spectrum for library Wi-Fi means more public access to the internet for everyone from school children to entrepreneurs, job seekers and scientists. The AIRWAVES Act will mean that millions more people, especially those in rural areas, will benefit from the library programs and services increasingly essential to their and the nation’s success in the digital age.

Specifically, the AIRWAVES Act would direct the Federal Communications Commission to free up unused or underused spectrum currently assigned to government users for commercial providers to expand their broadband offerings and for the expansion of services like Wi-Fi. The auctioned spectrum would include low-band, mid-band, and high-band frequencies, enabling the deployment of a variety of new wireless technologies. It also includes a proposal to auction other spectrum and would require that 10 percent of the auction proceeds be dedicated to funding wireless infrastructure projects in unserved and underserved rural areas.

Finally, the bill requires the Government Accountability Office (GAO) to report on the efficiency of the transfer of federal money from the Spectrum Relocation Fund to better encourage federal agencies to make additional spectrum available.

ALA urges Congress to support the AIRWAVES Act’s creative, bi-partisan approach to spectrum use and rapid action on this important legislation.

The post Bi-partisan bill would support library wi-fi appeared first on District Dispatch.

District Dispatch: Where’s CopyTalk?

planet code4lib - Fri, 2017-08-04 13:00

We are on a summer hiatus! CopyTalk webinars will start up again in September. In the meantime, you can listen to those webinars you missed in the archive!

Brought to you by an enthusiastic ALA committee—OITP Copyright Education Committee—upcoming webinars will address music copyright, copyright tutorials on music, and rights reversion with the Authors Alliance. We would love your suggestions for future topics! Contact Patrick Newell pnewell@csuchico.edu or me crussell@alawash.org with your ideas.

CopyTalks are one hour in duration and scheduled on the first Thursday of every month at 2 pm Eastern (11 am Pacific) and, of course, are free. The webinar address is always ala.adobeconnect.com/copytalk. Sign in as a guest. You’re in!

Copyright Tools! These are fun!

Our copyright education committee provides fun copyright tools—guides to help you respond to common copyright questions, like “is this a fair use?” Michael Brewer, committee member extraordinaire, created these tools that are now in digital form—the 108 Spinner (library reproductions), the public domain slider, the copyright genie (doesn’t she sound cute?), exceptions for instructors and the very popular fair use evaluator, available for download. All tools are available at the Copyright Advisory Network (CAN).

Our most recent tools are the fair use foldy thingys that were a big hit at Annual. You will be enthralled playing with the foldy thingy – see the video! They are available for bulk purchase from the manufacturer.

We also created fair use factor coasters, one coaster for each factor. Collect all four! Each includes a quote from a court case that illuminates the meaning and importance of each factor. Tested for quality, the coasters are functional and work well with cold bottles of beer. Collect yours at a copyright conference in your area!

Talk about service!

Don’t forget to visit the Copyright Advisory Network! Post your copyright question to the question forum and get a quick response from a copyright expert. We don’t provide legal advice but have informed opinions and are willing to share our expertise. Get on the CAN!

The post Where’s CopyTalk? appeared first on District Dispatch.

David Rosenthal: Preservation Is Not A Technical Problem

planet code4lib - Thu, 2017-08-03 15:00
As I've always said, preserving the Web and other digital content for posterity is an economic problem. With an unlimited budget, collection and preservation aren't a problem. The reason we're collecting and preserving less than half the classic Web of quasi-static linked documents, and much less of "Web 2.0", is that no-one has the money to do much better.

The budgets of libraries and archives, the institutions tasked with acting as society's memory, have been under sustained attack for a long time. I'm working on a talk and I needed an example. So I drew this graph of the British Library's annual income in real terms (year 2000 pounds). It shows that the Library's income has declined by almost 45% in the last decade.

Memory institutions that can purchase only half what they could 10 years ago aren't likely to greatly increase funding for acquiring new stuff; it's going to be hard for them just to keep the stuff (and the staff) they already have.

Below the fold, the data for the graph and links to the sources.

The nominal income data was obtained from the British Library's Annual Report series. The real income was computed from it using the Bank of England's official inflation calculator. Here is the data from which the graph was drawn:
Year    Nominal GBP (millions)    Year 2000 GBP (millions)
2016    118.0                     76.39
2015    117.8                     77.59
2014    118.9                     79.09
2013    124.7                     84.90
2012    126.1                     88.46
2011    140.1                     101.44
2010    137.9                     105.05
2009    142.2                     113.32
2008    140.5                     111.37
2007    141.2                     116.39
2006    159.2                     136.85
2005    136.9                     121.44
2004    121.6                     110.92
2003    119.5                     112.25
2002    119.2                     115.20
2001    120.9                     118.80
2000    110.2                     110.20
1999    112.3                     115.62
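As a rough cross-check on the "almost 45%" decline mentioned above, here is a minimal Python sketch that computes the change from the real-terms figures in the table (the calculation is mine, using only the numbers shown):

```python
# Real-terms British Library income (millions of year-2000 GBP), from the table above.
real_income = {2006: 136.85, 2016: 76.39}

decline = (real_income[2006] - real_income[2016]) / real_income[2006]
print(f"Real-terms decline, 2006-2016: {decline:.1%}")  # ~44.2%
```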

Library of Congress: The Signal: Collections as Data and National Digital Initiatives

planet code4lib - Thu, 2017-08-03 12:54

This is the text of my talk from Collections as Data: IMPACT. Once the videos of the individual talks are processed and available, we’ll share those with you here — in the meanwhile, you can watch starting at minute 6:45 in the video of the entire event.

Welcome. Published by Currier & Ives. <https://www.loc.gov/item/2002698194/>.

Welcome to Collections as Data! When we hosted our first Collections as Data meeting last year, we explored issues around computationally processing, analyzing, and presenting digital collections. The response overwhelmed us. The topic seemed to strike a chord with many of our colleagues and intersected with other efforts in the field in a fun way. But, still to this day, after a year of talking about this, we’re struggling to explain it in a way that is tangible to friends and colleagues without direct experience. We’re calling this second iteration “Collections as Data: IMPACT” because we want to get to the heart of why this type of work matters. We’ve invited speakers to tell stories about using data to better their communities and the world.

And in this spirit, I’m going to kick things off with a short story about computation applied to library collections when computers were people doing the calculating, not machines in our pockets. I hope that this will connect the work we’ll discuss today to a longer history to illustrate the power of computation when it’s applied to library collections.

Charles Willson Peale’s portrait of James Madison https://www.loc.gov/item/95522332/

Portrait of Alexander Hamilton. The Knapp Co. https://www.loc.gov/item/2003667031/


The Federalist Papers are a collection of essays written by John Jay, Alexander Hamilton, and James Madison. They were published under a pseudonym, Publius, to persuade colonial citizens to ratify the Constitution. When changing public opinion converted these documents from anonymous trolling to something foundational to our democracy, we started to get a sense of the authorship of the papers.

After the dust settled, 12 remained in dispute between Hamilton and Madison. Hamilton pretty much said they were joint papers, Madison said he didn’t have much to do with them, and lots of people thought Madison actually wrote them. Why so much finger pointing? These were propaganda pieces, and sometimes the authors held public positions that were different from the ones they presented in the papers.

Historical opinion about who wrote what swung back and forth, depending on new evidence that came forward or on the popularity of the given historical figure at the time. As always there is a really interesting story about the sources of historical evidence for authorship. If you’re curious about that, I encourage you to talk to your local librarian.

In 1944, Douglass Adair, an American academic, determined that it was most likely that Madison wrote the disputed papers. But the historical evidence was modest, so he sought another way of making an analysis.

 

He talked to two statisticians to see if there was a computational way to determine authorship.

Frederick Mosteller and David Wallace were intrigued by the idea, and decided to take on the challenge. They thought maybe average sentence length would be a possible indicator, so they laboriously counted sentence length for the known Hamilton and Madison papers, performed some analysis (for example, they had to determine whether quoted sentences counted toward the averages), and did the calculations.

They came up with an average length of 34.5 words for Hamilton and 34.6 words for Madison. So, that wasn’t going to work. Then they tried standard deviation. They thought that, although the average length is the same, maybe one author writes mostly average-length sentences and the other writes lots of teeny tiny sentences and lots of long sentences. Unfortunately, that effort turned out to be a bust as well. So they shelved the project.

A few years later, Douglass Adair reached back out to say that he had found a tool that could be useful. He found that Hamilton uses the word “while,” and Madison uses “whilst.” This fact, in itself, is not enough to determine authorship. The word isn’t used enough in the papers for that to work, and it could have been introduced during the editing process. But it gave them somewhere to start.

The statisticians counted word usage in a screening set of known Madison and Hamilton papers, which I imagine was about as fun as watching paint dry. From that they created a frequency analysis of words used in each author’s writing. They then determined which words were predictable discriminators and which were what they called “dangerously contextual,” because they were correlated with a certain subject favored by a particular author.

They ended up with 117 words to analyze.

Buck, Matt. Maths in neon at Autonomy in Cambridge. 2009. Photograph. Retrieved from Flickr, https://www.flickr.com/photos/mattbuck007/3676624894/

 

Using Bayesian statistics, they determined the probability of authorship based on the number of times the words appear. If you’re interested in any of this, I encourage you to read the book — it’s very readable and kind of fun.
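To make the approach concrete, here is a minimal modern sketch in Python. It is not Mosteller and Wallace’s actual model: the marker-word rates and counts below are invented for illustration, and the scoring is a simple Poisson-style log-likelihood over those rates.

```python
import math

# Invented per-1,000-word rates for a few marker words, loosely in the spirit
# of Mosteller and Wallace's discriminators (the numbers are not theirs).
rates = {
    "Hamilton": {"upon": 3.0, "whilst": 0.005, "while": 0.3},
    "Madison":  {"upon": 0.2, "whilst": 0.5,   "while": 0.005},
}

def log_likelihood(observed_counts, author, total_words):
    """Poisson-style log-likelihood of the observed marker-word counts."""
    score = 0.0
    for word, count in observed_counts.items():
        expected = rates[author][word] * total_words / 1000.0
        score += count * math.log(expected) - expected - math.lgamma(count + 1)
    return score

# Marker-word counts in a hypothetical disputed paper of 2,000 words.
observed = {"upon": 0, "whilst": 2, "while": 0}

for author in rates:
    print(author, round(log_likelihood(observed, author, 2000), 2))
# The author with the higher log-likelihood is the more probable one;
# with these made-up numbers, the counts point to Madison.
```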

They concluded that “Our data independently supplement those of the historian. On the basis of our data alone, Madison is extremely likely, in the sense of degree of belief to have written all the disputed Federalists.”

None of this was digital. This was all ink on paper, so it’s one of my favorite examples of using collections as data.

What does digital do? It democratizes this kind of analysis, and makes being wrong much less expensive. Which is great! Because we know that being wrong is the cost of being right.

Our heroes in this story spent years inventing this analysis, but much of that time was spent laboriously counting word frequencies and hand calculating. Their data set was limited by human scale. Imagine what we could do with lots of data and faster analysis.

Detroit Publishing Co,. City of Detroit III, Gothic room. Photograph. Retrieved from the Library of Congress, https://www.loc.gov/item/det1994012547/PP/

 

This sort of linguistic analysis is now very common. A few years ago, a computer scientist, Patrick Juola, got a call from a reporter asking him if he could show that Robert Galbraith was really J.K. Rowling. He did. And his code is open-source for anyone to use.

Collections as Data graphic created by Natalie Buda Smith, User Experience Manager, Library of Congress https://blogs.loc.gov/thesignal/2016/10/user-experience-ux-design-in-libraries-an-interview-with-natalie-buda-smith/

 

This brings us back to today. What excites me about the possibilities inherent in Collections as Data is that we can now make these kinds of intellectual breakthroughs on our laptops. People have been doing this kind of analysis — computational analysis of collections — for a long time. But now, for the first time, we have huge data sets to train our algorithms. We can figure stuff out without having to hand count words in sentences.

And this means that discovery and play with collections materials become even more available to what I consider a core constituency of the Library: the informed and curious.

We’ve invited some academic luminaries here today, and we’re so proud they could join us. We’re learning so much from the ground-breaking work of our colleagues in academic libraries, like our friends working on the IMLS-grant-funded “Always Already Computational: Library Collections as Data” project. But many of our speakers, and many of you out there, don’t have access to well-funded institutional libraries. We hope that you will consider us an intellectual home for exploration.

 

Canaries & jewels / Marston Ream 1874. https://www.loc.gov/item/2003677675/.

 

We, in my group, National Digital Initiatives, are very inspired by our new boss, Dr. Carla Hayden. She is leading us strongly in a direction toward opening up the collection as much as possible. She talks about this place, lovingly, like the American people’s treasure chest that she is helping to crack open. We in NDI see our responsibility as helping make that happen for our digital and digitized collections. I’d like to tell you about a few things we’re working on.

The first is crowdsourcing.

Screenshot of a development version of Beyond Words, an application created by Tong Wang (with support from Repository Development/OCIO and SGP/LS) on the Scribe platform

 

We’re working to expand the library’s ability to learn from our users on more digital platforms. Here’s a screenshot of an application that’s still in development, built by Tong Wang, an engineer on the Repository Team at the Library of Congress, while he was an Innovator in Residence in NDI. It invites people to identify cartoons or photographs in historic newspapers and to update the captions. This will enhance findability and also gets us data sets of images that are useful for scholarship. For example, we could create a gallery of cartoons published during WWI.

We’re excited to announce this will be launching late this summer (in beta). This is not the only crowdsourcing project we’re working on (but I’ll save those details for another time). We hope that this work will supplement the other programs LC is using to crowdsource its collections, including our presence on Flickr, the American Archive of Public Broadcasting Fix It game, and efforts in the Law Library and World Digital Library.

 

Zwaard, Kate. A view of the Capitol on my way home from work. 2016. Photograph

 

Our CIO, Bud Barton, announced at this year’s Legislative Data and Transparency Conference that LC will be hosting a competition for creative use of Legislative data. We’re still working on the details, and we should have more to share soon.

 

National Digital Initiatives hosts a “Hack To Learn” event, May 17, 2017. Photo by Shawn Miller.

 

I’m thrilled to let you all know that we’ll be launching labs.loc.gov in a few months. In addition to giving a platform to all Library staff for play and experimentation, Labs will be NDI’s home — where we’ll host things like results from our hackathons (like the one pictured here) and experiments by our Innovators in Residence (more on that in a bit).

And now, selfishly, since you’re trapped here, I’d like to share a few LC things that you might find useful.

 

Palmer, Alfred T, [Operating a hand drill at the North American Aviation, Inc.,]. Oct. Photograph. https://www.loc.gov/item/fsa1992001189/PP/

There are a couple of interesting jobs posted right now, and I’d like to encourage you all to apply and share widely. Keep checking back! We need your good brains here, helping us.

Highsmith, Carol M. Great Hall, second floor, north. Library of Congress Thomas Jefferson Building, Washington, D.C. [Between 1980 and 2006] Photograph. https://www.loc.gov/item/2011632164/.

Speaking of stuff to apply to, please consider coming here for a short period as a Kluge fellow in digital studies! It’s a paid fellowship for research using LC resources into the impact of the digital revolution on society, culture, and international relations. Applications are due December 6th. More than one can be awarded each year, so share with your friends.

V. Donaghue. [WPA Art Project]. https://www.loc.gov/item/98509756/.

Lastly, I want to mention a program NDI has been working on to bring exciting people to the Library of Congress for short-term, high-impact projects: we call it the Innovators in Residence program. We’re wrapping up some details on this year’s fellowship, and I’ll have more to announce soon. Our vision for the innovator in residence program is to bring bright minds and new blood to the library who can help create more access points to the collection.

Bendorf, Oliver, artist. What does it mean to assemble the whole? 2016. Mixed Media.

 

So thanks for coming! As I mentioned, we’re working to launch our website, which will make what we’re working on much easier to follow. In the meanwhile, you can always keep up with the latest news on our blog.

Enjoy the program. And, if you’re using social media today, please use the hashtag #asData

 

Open Knowledge Foundation: OKI Agile: Kanban – the dashboard of doing

planet code4lib - Thu, 2017-08-03 09:38

This is the fourth in a series of blogs on how we are using the Agile methodology at Open Knowledge International. Originating from software development, the Agile manifesto describes a set of principles that prioritise agility in work processes: for example through continuous development, self-organised teams with frequent interactions and quick responses to change (http://agilemanifesto.org). In this blogging series we explore the different ways Agile can be used to work better in teams and to create more efficiency in how to deliver projects. Previous posts dealt with user stories, methodologies and the use of scrum and sprints in open data; this time we go into Kanban.

Picking a methodology has to take into account both the team doing the work and the project. Bigger teams call for a bigger methodology, and more critical projects call for more methodology density (higher ceremony, or publicly visible correctness). A bigger methodology adds a larger amount of cost to the project. So picking a big methodology, with high ceremony, for a small team and less critical projects is a waste of time, money and effort.

Scrum is an example of a relatively big methodology with high ceremony (in the agile universe). There are daily standups, sprint backlogs and commitments, product owners, scrum masters, etc. Always going for Scrum may make us look like we’re organised, but we might be organised in the wrong way.

Less can lead to more

The opposite of Scrum in the agile world is perhaps Kanban. It’s designed to not interfere in any way with how the team already works. It may even be an addition to Scrum. It’s a form of what has been called Just-In-Time delivery/development.

In Scrum the team creates a sprint backlog of what the team commits to and what will be worked on during the sprint. During the sprint, which typically ranges from 2-4 weeks, nobody can touch the backlog: nothing can be added, nothing can be modified. Items in the sprint backlog can only be closed. This is done to allow the team to focus on delivery, not requirements discussions.

Kanban works on a different level. In Japanese, kan stands for card and ban means signal. Kanban could therefore be translated as card signals, and that comes pretty close, because Kanban is about using cards, or some equivalent of cards, to signal progress. It’s a card dashboard showing what’s being done where. There is no sprint that denotes work units; it’s just continuous work and planning when needed, or just in time.

Spreadsheet planning

In its essence, Kanban is progress visualised in a structured spreadsheet. You have columns that show different stages of progress or work. The different stages can include:

  • Incoming work
  • Brainstorming/Specifications
  • Implementation/development
  • Ready for review
  • In tests/proofreading
  • Ready for deployment
  • Deployed

This is not a be-all, end-all list; it is an example of what columns might be included. Kanban doesn’t tell us what columns we should use. Kanban is designed to be an addition on top of our current processes.

The rows in our spreadsheet are used to track different aspects of the project. For example, if we have members of the team working in different areas (designers, content writers, software developers), a row would represent each area. The rows could also be used to track features of the project if the team is homogeneous (all team members work together in the same area, e.g. all software developers).

The cells of the spreadsheet will then include requirements, for example user stories, that are in each stage for each feature or team. If we remember the requirements iceberg, these are the 1-2 day tasks that we perform.

Visualising progress

As work progresses on each requirement item, it moves between the different columns. This creates the Kanban dashboard, visualising progress of the project. Kanban encourages autonomy and leadership on all levels. Every team member can come in each day, have a look at what’s needed, just do some work and then, once done, move it between columns. That’s part of the just-in-time attitude. We don’t need to plan everything beforehand; things will be done just in time.

That means that managers, who normally push work schedules to team members, now have to sit back and give the team autonomy over the process. Likewise, team members who are used to being told what to do now have to take matters into their own hands and pull the work that needs to be done.

What about prioritisation, you may ask? What about team members just picking low-hanging fruit or easy tasks and leaving all the boring tasks until last, even if they are the highest priority?

Kanban puts limits on each column. How many items is up to the team, but it is not allowed to have more than a fixed number of requirements in each column at any given moment. So there might never be more than 8 incoming work items, 2 items being drafted, 3 being implemented, etc. The only exception to this can be the last column, delivered, which collects everything that has been done but is usually not a part of the Kanban dashboard, or, if it is, it’s regularly purged.

So if the team wants to implement a drafted feature but the number of items in the implementation column has already reached the maximum, they need to sort that out first, before they can take on more implementation work. The project manager or client representative will typically manage the incoming work column, but that one is also limited by the maximum amount and progress.

Common sense should of course be applied to these limitations. If the team is heterogeneous, i.e. each row in the “pretend spreadsheet” is a different area instead of a different feature, the limitations apply to each of the cells in the column (e.g. we can’t have the content team limit the software developers just because they have more to do).
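To make the mechanics concrete, here is a minimal Python sketch of a board that enforces per-column limits. The column names and limits are just examples; Kanban itself does not prescribe them.

```python
# A minimal Kanban board: each column has a work-in-progress limit, and a card
# can only be added to or moved into a column that still has room.
class KanbanBoard:
    def __init__(self, limits):
        self.limits = limits                               # e.g. {"Incoming work": 8, ...}
        self.columns = {name: [] for name in limits}

    def add(self, column, card):
        self._check_room(column)
        self.columns[column].append(card)

    def move(self, card, source, target):
        self._check_room(target)
        self.columns[source].remove(card)
        self.columns[target].append(card)

    def _check_room(self, column):
        if len(self.columns[column]) >= self.limits[column]:
            raise RuntimeError(f"WIP limit reached for '{column}'")

board = KanbanBoard({"Incoming work": 8, "Brainstorming": 2, "Implementation": 3, "Deployed": 1000})
board.add("Incoming work", "As a user I can export my data as CSV")
board.move("As a user I can export my data as CSV", "Incoming work", "Brainstorming")
```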

Principles

Kanban is really simple: it’s all about visualising the work flow and not overwhelming the team. It’s built around four principles:

  • It should work with whatever process is currently used by the team
  • It helps the team make small incremental changes towards improvement
  • It respects roles and responsibilities of each team member
  • To each her/his own, team members are autonomous and leaders over their own work

It should be really simple to pick up Kanban and add it on top of whatever you’re doing today. Kanban works really well on whiteboards, but the biggest obstacle for a remote organisation is finding a good digital tool for the workflow visualisation; Trello might work if you don’t care too much about the rows. A Google spreadsheet might work if you don’t care about the extra effort of moving things around.

If you find a good way to manage a Kanban board, please share it through the comments below or on our forum!

 

LITA: Jobs in Information Technology: August 2, 2017

planet code4lib - Wed, 2017-08-02 18:33

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

University of California, Riverside Library, Web Developer and User Interface Designer, Riverside, CA

University of Miami School of Law, Library Director, Coral Gables, FL

University at Albany, State University of New York, User Experience / Web Design Librarian, Albany, NY

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Open Knowledge Foundation: Why MyData 2017?

planet code4lib - Wed, 2017-08-02 09:02

This is a guest post explaining the focus of the MyData conference in Tallinn and Helsinki later this month.

By a famous writing tip, you should always start texts with ‘why?’. Here we are taking that tip, and we actually find many ways to answer the big Why. So,

Why MyData 2017?

Did you get your data after the MyData 2016 conference? No, you did not. There is lots of work to be done, and we need all the companies, governments, individuals and NGOs on board on Aug 31-Sep 1 in Tallinn and Helsinki. When else would you meet over 800 other friends at once?

Because no. 1: The work did not stop after MyData 2016

The organizers Fing, Aalto University, Open Knowledge Finland, and Tallinn University have continued working on the topic after the conference. Fing continues their MesInfos project, started in 2012, which goes into its second phase in 2017: implementing the MyData approach in France with a long-term pilot involving big corporations, public actors, testers and a platform. Aalto University is the home base of human-centric personal data research in Finland. Many Helsinki-based researchers contribute their academic skills to the conference’s Academic workshops.

Open Knowledge Finland, apart from giving the conference an organizational kick, also fosters a project researching MyData implementation in the Finnish public sector, which we will hear about at the conference too. Tallinn University, as the newest addition to the group of organizers, will host the conference day in Tallinn to set the base for and inspire MyData initiatives in Estonian companies, the public sector, and the academic domain.

In addition to the obvious ones, multiple MyData-inspired companies continue on their own. Work continues, for example, in Alliance meetings, and in some cases there are people working from the bottom up and acting as change makers in their organizations.

MyData 2016 went extremely well: 95% of the feedback was positive, and the complaints were related to organizational issues like the positioning of the knives during lunch time. The total individual visitor count was 670, from 24 countries. All this for what was (at the time) a niche conference, organized for the first time by a team consisting mainly of part-time workers.

The key to success was the people who came in offering their insights as presenters or their talents in customer care as volunteers. MyData 2017 is even more community-driven than the year before – again a big bunch of devoted presenters, and the volunteers have been working since March in weekly meetings, talkoot.

Because no. 2: The Community did not stop existing – it started to grow

MyData gained momentum in 2016 – the MyData White paper is mentioned in a ‘Staff Working Document on the free flow of data and emerging issues of the European data economy’, on pages 24-25. The white paper is also now translated from Finnish to English and Portuguese. Internationally, multiple Local Hubs have been founded this year – which you will hear more about in the Global track of the conference – and a MyData Symposium was held in Japan earlier this year.

The PIMS (Personal Information Management Systems) community, who met for the fourth time during the 2016 conference, has been requesting a more established community around the topic. “Building a global community and sharing ideas” is one goal of MyData 2017, and as a very concrete action, the conference organizing team and the PIMS community have agreed to merge their efforts under the umbrella name of MyData. The MyData Global Network Founding Members are reviewing the Declaration of MyData Principles to be presented during MyData 2017. The next round table meeting of the MyData Global Network will be held in Aarhus on November 23–24, 2017.

 

Open Knowledge Estonia was founded after last year’s conference. Since MyData was nurtured into its current form inside the Open Knowledge movement, where Open Knowledge Finland still plays the biggest role, MyData people feel very close to other Open Knowledge chapters. See for yourself how nicely Rufus Pollock explains, in this video from MyData 2016, how Open Data and MyData are related.

Because no. 3: Estonians are estonishing

“Why Tallinn then?” is a question we hear a lot. The closeness of the two cities, also sometimes jointly called Talsinki, makes the choice very natural to the Finns and Estonians, but might seem weird looking from outside.

Estonia holds the Presidency of the Council of the EU in the second half of 2017. In e-Estonia, home of the infamous e-residency, MyData fits naturally into the pool of ideas to be tossed around during that period. Estonians have suggested making the ‘free movement of data’ the fifth freedom within the European Union, in addition to goods, capital, services, and people, and the MyData way of thinking is a crucial part of advancing that.

Estonia and Finland co-operate in developing X-road, a data exchange layer for national information systems, between the two countries. In 2017, the Nordic Institute for Interoperability Solutions (NIIS) was founded to advance the X-road in other countries as well. The Finnish population registry center, with its digitalized services esuomi.fi, is the main partner of the conference in 2017.

Estonia and Finland, both small countries, are very good places to test new ideas. In both Helsinki and Tallinn, we now have ongoing ‘MyData Alliance’ meetups for companies and public organizations that want to advance MyData in their organizations. A goal of MyData in general, “we want to make Finland the Moomin Valley of personal data”, will be expanded to “we want to make Finland and Estonia the Moomin Valley of personal data”.

 

Terry Reese: MarcEdit 7: MarcEditor Performance Metrics

planet code4lib - Wed, 2017-08-02 04:39

Because I change version numbers so rarely when it comes to MarcEdit, I usually like to take the major version numbers as an opportunity to look at how some of the core code works, and this time is no different.  One of the things that I’ve occasionally heard is that Opening and Saving larger files in the MarcEditor can be slow.  I guess, before I talk about some of the early metrics, I’d like to explain how the MarcEditor works, because it works differently than a normal text editor in order to allow users to work with files of any size.

Opening records in the MarcEditor

When you open the MarcEditor, the program utilizes one of two modes to read files into the editing screen: Preview and Paging. 

Preview Mode:

Preview mode has been designed specifically for really large files – but the caveat is that when in Preview mode, the editor gets locked into Read Only mode.  This means you can’t type in the Editor, but you can use any of the Editing functions to change the file.  The benefit of the Preview mode is you remove the need to load the file (which is an expensive process).

Paging Mode:

Paging mode is the editing mode enabled by default. This mode breaks files into pages, meaning that MarcEdit must first read the file to determine the number of records and create an internal directory of page start and end locations. Once that is accomplished, the program then renders data onto the screen. The pages created are all virtual (they don’t exist) unless a user actually edits (types onto the screen) information on a page. Global edits affect the whole file, so the file gets re-paged after every global edit.

The paging mode is by far the best rendering mode for data under, say, 150 MBs (in MarcEdit 6).  This is because at around 150 MB, it starts taking a lot longer to create the virtual pages.  And depending on your operating system, and hard drive type, this process could be really expensive.  I’ve found on older equipment (non-Solid State (SD) drives), this process can really slow down reading and writing because so many disk accesses have to occur when creating pages (even virtually).
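To make the paging idea concrete, here is a minimal Python sketch of building a virtual page directory: only byte offsets are recorded, keyed on the MARC record terminator, and page content is read later on demand. (This is just an illustration; MarcEdit’s actual implementation is in C#/.NET.)

```python
RECORD_TERMINATOR = 0x1D   # MARC 21 record terminator byte

def build_page_directory(path, records_per_page=100, buffer_size=8 * 1024 * 1024):
    """Return (start, end) byte offsets for each virtual page; no page content is kept."""
    pages, page_start, records_in_page, position = [], 0, 0, 0
    with open(path, "rb") as f:
        while chunk := f.read(buffer_size):
            for i, byte in enumerate(chunk):
                if byte == RECORD_TERMINATOR:
                    records_in_page += 1
                    if records_in_page == records_per_page:
                        pages.append((page_start, position + i + 1))
                        page_start = position + i + 1
                        records_in_page = 0
            position += len(chunk)
    if records_in_page:                        # trailing partial page
        pages.append((page_start, position))
    return pages
```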

Saving records in the MarcEditor

Saving files essentially does the paging operation in reverse, though now, rather than working with a virtual page, the program does have to access the file and extract the page content for every virtual page in existence. Again, if you have a non-SD drive or an older 5400 rpm drive, this can be a slow process. If your operating system is already having disk usage issues (and older computers upgraded to Windows 10 have many of these), this can slow the process considerably.

MarcEdit 7 Enhancements

In thinking about how this process works, I started wondering how I could improve file operations in MarcEdit 7. Obviously, the easiest way to improve the open and save processes would be to remove as many disk operations as possible. The fewer file operations, the faster the process. So, I started looking. Now, one of the benefits of updating to the new version of .NET is that I have access to some new programming concepts. One of these new elements is Thread Tasks to initiate parallel processes in C# (though I’ve found these must be handled with care, or I can really cause disk issues as threads spawn too quickly), and the other is simply lambda expressions that enable the compiler to optimize the operations code. With this in mind, I started working.

Testing:

For the purpose of this benchmark, I’m using a Dell Inspiron 13 with an i5 processor, an SD drive, and 16 GB of RAM.

Reading Data into the MarcEditor

In order to speed up the reading operation, I had to reduce the number of file operations that were being run on the system.  To do this, I made two significant changes. 

  1. When MarcEdit’s Enhanced File reading mode is enabled, MarcEdit reads files under 60 MB into memory.  Using Parallel Tasks, I was able to improve this process, reducing the number of file reads by 50%.  So, if the old method made 100 file reads to build the page, the new process would only make 50 file reads.  Additionally, with the processing now in a Parallel process, data could be read asynchronously, though this doesn’t help as much as one might hope since data needs to be processed in order.  But, it does seem to help.
  2. For files larger than 60 MB, again, I needed to find a way to reduce the number of file reads. To do this, I tried two things. First, I increased the buffer. This means that more data is read at a time, so fewer file reads must occur. Previously, the buffer was 1 MB; it has been increased to 8 MB. This makes a big difference, as files under 8 MB are now read only once, since the remainder of the data lives in the buffer (the effect is sketched below). The second thing that I did was to move access down to the abstract classes. This allowed me to interact beneath the StreamReader class and access the actual positions in the file when data was read. This couldn’t be done in the current version of MarcEdit, because the position properties report where buffered data was read. This meant that an additional file operation had to occur just to get the file positions. Again, if the file previously needed 100 reads, the updated process would only need 50.
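To illustrate why the buffer size matters, here is a tiny Python sketch that counts how many raw disk reads are needed to stream a file with a given buffer size. (Again, this is a generic illustration, not MarcEdit’s actual C#/.NET code, and "records.mrc" is a hypothetical file.)

```python
def count_reads(path, buffer_size):
    """Count the raw read() calls needed to stream a file with a given buffer size."""
    reads = 0
    with open(path, "rb", buffering=0) as f:   # unbuffered handle, so every read() hits the file
        while f.read(buffer_size):
            reads += 1
    return reads

# For a ~350 MB file: roughly 350 reads with a 1 MB buffer versus ~44 with an 8 MB
# buffer, and any file smaller than the buffer is consumed in a single read.
# count_reads("records.mrc", 1 * 1024 * 1024)
# count_reads("records.mrc", 8 * 1024 * 1024)
```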

So, what’s the impact of this? Well, let’s see. I have a 350 MB file and paging set to 100 records per page. This is a UTF-8 file with records from materials The Ohio State University Libraries has loaded into the HathiTrust. Using this as my test set, I simply opened the file in the MarcEditor in MarcEdit 6.3.x and MarcEdit 7.0.0.alpha. To test, I loaded this file five times, throwing out the slowest and fastest times, and selecting the status message closest to the average.

MarcEdit 6.3.x:   

MarcEdit 7.0.0.alpha:   

We can see that by reducing the number of file reads, the process improves significantly, though it could be better. Digging deeper into the results, I’m finding that the actual reading of the data is even faster, with the rendering of the data in the newer control taking longer than in the previous editing control. The reason for this is that in MarcEdit 6.3.x, this control usage has been optimized, double buffered, etc. In MarcEdit 7.0.0.alpha, this hasn’t been done yet. My guess is that I can probably get these numbers down to around 8.7-9 seconds for a file of this size. That would represent a 5 to 5 1/2 second improvement in performance. Of course the question is, will this help individuals opening smaller files? I think yes. On my SD drive, loading a 50 MB file takes roughly the same amount of time: 1.3 seconds. But on a non-SD drive, I think the improvement will be significant given that the number of file reads will be reduced.

This test, though, was with the old defaults in MarcEdit. For MarcEdit 7.0.0.alpha, I would like to change the default paging size to 1000 records per page (since the new component is more efficient when dealing with larger sets). So, let’s run the test again, this time using the different paging values and the same approach as above:

MarcEdit 6.3.x:  

MarcEdit 7.0.0.alpha:   

Looking at the process, you can see that the gap between the two versions gets larger. Again, looking closer at the data, the actual loading of the file is faster than in the first tests, but rendering the data pushed the final load times higher. As in the first tests, I believe that once the Editor itself has been optimized, we’ll see this improve significantly. By the time the final version comes out, the performance difference on this type of file could be between 6-8 seconds, or a 37-50% speed improvement over the current 6.3.x version of the software.

Writing files in the MarcEditor

In looking at the process used to write files on save, the same kinds of issues are causing problems there. First, saving requires a lot of file access (both read and write), and second, once a file is saved, it is reloaded into the Editor. This means that on systems with SD drives, the performance benefits may be modest, but for non-SD systems, the gains should be significant. But there was only one way to tell. Using the same file, I made edits on 4 pages: the first page, the 50th page, the 150th page, and the last page. Paging was set back to 100 records per page. This forces the tool to combine the changed pages with the unchanged data in the virtual space. Using the loading times above, we can estimate the time actually used when saving the data. I’ll be providing numbers for both the save and save as processes (since they work slightly differently):

Saving the file using Save:

MarcEdit 6.3.x:

MarcEdit 7.0.0.alpha:

Saving the file using Save As:

MarcEdit 6.3.x:

MarcEdit 7.0.0.alpha:

As you can see here, the difference between the new saving method and the old saving method is pretty significant.  The time posted here reflects the time it takes to both save the file, and then reload the data back into the Editor window.  Taking the times from the first test, we can determine that the Save function in MarcEdit 6.3.x takes ~6.2 seconds, if rendering the file takes an average of 14 seconds, and the Save As operation takes approximately 6.7 seconds.  Let’s compare that to MarcEdit 7.0.0.alpha.  We know that the rendering of the file takes approximately 10 seconds.  That means that the Save function takes ~.8 seconds to complete, and the Save as function, 1.2 seconds to complete.  In each case, this represents a significant performance improvement, and as noted above, optimizations have yet to be completed.  Additionally, I do believe that on non-SD systems, the performance gains will be even more noticeable.

Thoughts, Conclusions, and So what

Given how early I am in the development and optimization process, why start looking at any of these metrics now? Surely some of these things will change, and I’m sure they will. But these give me a base-line to work with, and a touchstone as I continue working on optimizing the process. And it is early, but one of the things that I wanted to highlight here is that in addition to the new features, updated interface, and accessibility improvements – a big part of this update is about performance and speed. When I initially wrote MarcEdit, nearly all the code was written in Assembly. Shifting to a higher-level language was incredibly painful for me to do because I want things to be fast, and Assembly programming is all about building things small and building things fast. You have access to the CPU registers, and you can make magic happen. Unfortunately, keeping up with the changes in the metadata world, the need to provide better Unicode support, and my desire to support Mac systems (which used, at the time, a different CPU architecture) meant moving to a higher-level language that could be compiled for different systems. Ever since that code migration, I’ve been chasing the clock, trying to get the processing speeds down to those of the original assembly code-base. Is that possible? No. Though even if it were, so many things have changed and been added that the process simply does more than the simple libraries that I first created in 1999…but still, that desire is there.

So, while I am spending most of my time communicating publicly about the new wireframes and new functionality in MarcEdit 7 (and I’m really excited about these changes)…please know – MarcEdit 7 is also about making it fast. I think MarcEdit 6.3.x is already pretty quick on its feet. As you can see here, it’s about to get faster.

–tr

DuraSpace News: Duke University Libraries Seeks a Repository Developer

planet code4lib - Wed, 2017-08-02 00:00

Durham, North Carolina: Duke University Libraries is seeking a repository developer to work on redeveloping the MorphoSource project into a full-scale repository. This position is funded for three years through a grant from the National Science Foundation Advances in Biological Informatics (NSF ABI) program, with the potential for renewal.

District Dispatch: Connecting with your members of Congress

planet code4lib - Tue, 2017-08-01 18:46

Guest post by: Eileen M. Palmer, NJLA Public Policy Committee Chair (July 2016-June 2017)

We’ve all heard it before but it is nonetheless true: effective advocacy is about building relationships. Building strong relationships is more than the occasional call to an elected official’s office requesting support for a bill or funding. Learning who your officials are and understanding their interests and concerns is at the heart of building that relationship and should be ongoing.

Members of the NJ delegation at National Library Legislative Day 2017 with Congressman Leonard Lance (NJ-7).

The New Jersey Library Association (NJLA) has worked to develop strong relationships with our congressional delegation through training for advocates provided by our Public Policy Committee, during our annual NJ Library Advocacy Week and at ALA’s National Library Legislative Day. And over the last several months we’ve seen the benefits of relationship building in our work supporting the ALA Washington Office’s advocacy efforts for federal funding.

As 2017 began we learned that the House Committee on Appropriations would be chaired by a representative from New Jersey. Rodney Frelinghuysen represents the 11th district, one rich with libraries and passionate library advocates, from library staff to trustees to mayors. When ALA reached out to us we were ready, willing and able to get to work taking our message to Rep. Frelinghuysen and his staff. Our NJLA Public Policy Committee was the key link in communications between ALA, NJLA and selected advocates from the 11th district. By working together, we were able to develop and execute a plan that has been successful on several fronts. Our plan included:

  • Repeatedly requesting a meeting with the congressman. Though we were unsuccessful in securing a face to face meeting, these communications were critical opportunities to convey our messages on both library funding and access to Congressional Research Service (CRS) reports, an issue also included in the appropriations legislation.
  • Making sure all local advocates in the 11th district knew our issues and the need to make their own contacts with Rep. Frelinghuysen’s local office. Parsippany Library Director Jayne Beline has had a longstanding relationship with the Congressman and his office, which was invaluable in communicating our message when he was in her library. Building relationships also includes making sure your local congressional office knows if your library has a meeting room they can use for events!
  • Working with local stakeholders – trustees, local officials and even patrons – to convey our message about how federal library funding impacts local library patrons. This message is so much more powerful when delivered locally with local examples.

ALA chapters play an indispensable role in ALA’s advocacy efforts. Coordinating our chapter efforts with the ALA Washington Office has amplified our message and assured each member of our NJ congressional delegation knows, not just how much money we are requesting but, even more importantly, how those funds impact their constituents.

At this point in the legislative process we have reached a significant milestone. The House Committee on Appropriations has passed a bill that holds IMLS, LSTA and IAL funding at current levels and includes a provision to make CRS reports available to all. But we are not close to being done. To move forward, we must work with the Senate to support similar funding as their process begins in earnest this fall. I encourage all chapters to take an active role in working with ALA on these issues. Here are some specific ways to do that:

  • Get friends from inside and outside the library world to sign up for alerts and to act. The ALA Action Center or your local Chapter Action Center makes this very easy.
  • Offer your library for a town hall, tour, summer reading or other program visit by members and/or their staff.
  • Write a brief, personal letter-to-the-editor about the issues we care about. ALA has resources to help you.
  • Ask to meet with your representative and senator (or their staff) over the summer. Don’t be discouraged if you are turned down. Use the opportunity to convey your concern about library funding. Also, ask to be included on the invitation list for any telephone town halls.

Each of these activities can help to build the lasting relationships we need to effectively tell our story to every member of Congress. We’ve seen a very positive impact in New Jersey, not only with the optimistic budget outlook, but also in the further development of our relationship with our legislators and their staff. The benefits of advocacy are well worth the effort of all of us.

The post Connecting with your members of Congress appeared first on District Dispatch.

David Rosenthal: Disk media market update

planet code4lib - Tue, 2017-08-01 15:34
It's time for an update on the disk media market, based on reporting from The Register's Chris Mellor here and here and here.

WD hard disk shipments

WD's hard disk shipments were basically flat in unit terms but sharply up in exabytes:
Financial statements revealed 39.3 million disk drives were shipped, slightly down on the 40.1 million a year ago. But that's 81.2 disk exabytes shipped, much more than the year-ago total of 66.1. The average selling price per drive stayed the same at $63.

A look at the disk segment splits shows the long-term slump in disk drive sales as flash takes over in PCs, notebooks and the high-performance enterprise drive areas.

Note the graph showing a kickup in "Consumer electronics". This may represent more large customers deciding that cheaper consumer drives are "good enough" for bulk storage use.

Seagate hard disk shipments were flat in exabyte terms, meaning a decline in unit terms:
In terms of exabytes shipped – mostly in disk drives – Seagate said enterprise mission-critical exabyte shipments were flat year-on-year, and there was 4.5 per cent growth on the previous quarter.

Nearline high capacity enterprise capacity shipped declined 14 per cent, while PC exabyte shipments were up 14.3 per cent year-over-year. Non-compute exabyte ships were down quarter-on-quarter.

The reason is that Seagate was late delivering 10TB helium drives, a favorite in the bulk storage market:
Luczo has Seagate focused on bulk storage of enterprise data on high-capacity disk drives, yet shipments of such drives fell in the quarter as Seagate missed a switchover to 10TB helium-filled drives. Stifel analyst and MD Aaron Rakers sees Western Digital having an 80 per cent ship share in this market.

This failure, and even more Seagate's failure in the flash market, had a big impact on Seagate's revenues and its position against WD:
[Chart: WD vs. Seagate revenue]
The right-hand side of the chart shows the $2.4bn gap in revenues that is the result of Seagate boss Steve Luczo's failure to break into the flash drive business and being late to helium-filled disk drives. Seagate is now a shrinking business while WD is growing.

Seagate's response has been to kick Luczo upstairs:
Steve Luczo will go upstairs to become executive chairman of the board on October 1, with president and chief operating officer Dave Mosley taking over the CEO spot and getting a board slot.

He's supposed to focus on the long term, but this doesn't seem to be his forte. Mellor writes:
Seagate says Luczo will focus on longer-term shareholder value creation, whatever that means. As he’s so far avoided Seagate getting any more involved in the NAND business than if it were playing Trivial Pursuit, we don’t have high hopes for moves in that direction.

Seagate's poor performance poses a real problem for the IT industry, similar to problems it has faced in other two-vendor areas, such as AMD's historically poor performance against Intel, and ATI's historically poor performance against Nvidia. The record shows that big customers, reluctant to end up with a single viable supplier of critical components, will support the weaker player by strategic purchases of less-competitive product.

[Chart: HDD shipments]
The even bigger problem for the IT industry is that flash vendors cannot manufacture enough exabytes to completely displace disk, especially in the bulk storage segment:
NAND capacity shipped in the second quarter, including for phones and other smart devices (some 40 per cent of capacity shipped), and enterprise storage, was about 35 exabytes. The total HDD capacity shipped number was 159.5 exabytes, almost five times larger, with some 58 exabytes constituting nearline/high-capacity enterprise disk drives. So bulk storage alone could consume nearly twice the entire flash production, leaving none for the higher-value uses such as phones. Note that these numbers, combined with Aaron Rakers' revenue estimates:

                  Revenues in 2nd Quarter   Annual Change   Quarter Change
    Flash         c$13.2 Bn                 55%             8%
    Disk Drives   c$5.7 Bn                  -5.5%           -4%

imply that Flash averages about $0.38/GB while HDD averages about $0.036/GB, making disk roughly ten times cheaper per byte.
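
As a sanity check on those per-gigabyte averages, here is a minimal sketch of the arithmetic, assuming the quoted revenues and exabytes shipped and treating 1 EB as 10^9 GB:

    # Back-of-the-envelope cost per gigabyte from the quoted Q2 figures.
    # Assumptions: revenues and exabytes shipped as reported above; 1 EB = 1e9 GB.

    def dollars_per_gb(revenue_billions, exabytes_shipped):
        """Average selling price per gigabyte, in dollars."""
        revenue_dollars = revenue_billions * 1e9
        gigabytes = exabytes_shipped * 1e9
        return revenue_dollars / gigabytes

    flash = dollars_per_gb(13.2, 35.0)   # ~0.377 $/GB
    hdd = dollars_per_gb(5.7, 159.5)     # ~0.036 $/GB

    print(f"Flash: ${flash:.3f}/GB, HDD: ${hdd:.3f}/GB, ratio: {flash / hdd:.1f}x")
    # Flash: $0.377/GB, HDD: $0.036/GB, ratio: 10.6x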

[Chart: HDD revenue & exabytes]
So the industry needs disk vendors to stay in business and continue to invest in increasing density, despite falling unit shipments. Because hard disk is a volume manufacturing business, falling unit shipments tend to put economies of scale into reverse, and reduce profit margins significantly.

Kryder's Law implies that capacity shipped will increase faster than revenues. The graph shows capacity shipped increasing while revenues decrease. The IT industry must hope that this trend continues without killing the goose that is laying this golden egg.
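
To see why flat or falling revenue still implies growing capacity shipped, here is a small illustration (my own sketch, not figures from the article): if the price per gigabyte falls at an annual Kryder rate r while revenue stays constant, exabytes shipped grow by a factor of 1/(1 - r) per year.

    # Illustration (assumed rates, not from the article): how a Kryder rate
    # maps to capacity growth. If $/GB falls by rate r per year and revenue
    # stays flat, exabytes shipped grow by a factor of 1 / (1 - r).

    def capacity_growth_at_flat_revenue(kryder_rate):
        """Annual growth factor in capacity shipped, assuming constant revenue."""
        return 1.0 / (1.0 - kryder_rate)

    for r in (0.10, 0.20, 0.30):
        print(f"{r:.0%} annual $/GB decline -> {capacity_growth_at_flat_revenue(r):.2f}x capacity")
    # 10% -> 1.11x, 20% -> 1.25x, 30% -> 1.43x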

LibUX: Writing for the User Experience with Rebecca Blakiston

planet code4lib - Tue, 2017-08-01 12:15


Writing for the User Experience is our second ever Library User Experience Community webinar. Ours is a community of designers, developers, librarians, info architecture people, content strategists, marketing folks, accessibility enthusiasts, and others, started by me — Michael Schofield (@schoeyfeld) — and Amanda L. Goodman (@godaisies). We do our best to make content that pushes the conversation around the user experience and libraries, non-profits, and higher-ed forward.

In this session, Rebecca Blakiston — author of books on usability testing and writing with clarity; Library Journal mover and shaker — talks shop and makes the case for content strategy, using active and authentic voice, removing unnecessary words, writing meaningful titles/headings, using parallelism, and more.

A [really good] transcript is being made and will be available to supporters and patrons.

Rebecca is super

Rebecca Blakiston (@blakistonr) is the team lead for Web Design & User Experience at the University of Arizona Libraries. She is the author of two books: Usability Testing: a Practical Guide for Librarians, and Writing Effectively in Print and on the Web: a Practical Guide for Librarians. She’s also the former Chair of the University Libraries Section, Association of College and Research Libraries (ACRL ULS). In 2016, she was named a Library Journal Mover and Shaker.

Supporters

These events are bootstrapped with your help. Whether through donations, old-fashioned street-teaming, or talking these events up to the boss, it all goes a long way.

Specific and heartfelt thanks to Lauren Seaton, Stephen Bateman, Emma Boettcher, Emily King, Rum Rubinstein, Kelly Sattler, Anna Stackeljahn, Alyssa Hanson, Amanda Brite, Amy Croft, Tobias Treppmann, Stephanie Van Ness, and Angie Chan-Geiger.

Novare Library Services provides our webinar space and records and archives our video. They specialize in IT solutions for libraries and small businesses. In addition to our Library User Experience Community webinars, they’re behind a bunch of other events, too.


Islandora: New Partner: Born-Digital

planet code4lib - Tue, 2017-08-01 12:11

The Islandora Foundation is very happy to announce that long-time member Born-Digital (also known as Common Media) has become a Partner in the Foundation. A service company that specializes in support for open source digital tools in the cultural preservation field, Born-Digital has been an active member of the Islandora community for several years, with particularly notable contributions to the Dev-Ops Interest Group and the Islandora ISLE project, and a big presence at both of our conferences. As a part of this new Partner membership, the Islandora Foundation also welcomes Noah Smith to our Board of Directors.

District Dispatch: New report explores rural library technology access

planet code4lib - Mon, 2017-07-31 19:24

A new report from the Office for Information Technology Policy focuses attention on the capacity of rural public libraries to deploy Internet-enabled computing technologies and other resources to meet the needs of their residents.

“Rural Libraries in the United States: Recent Strides, Future Possibilities, and Meeting Community Needs” explores nuances of rurality, details challenges rural libraries face in maximizing their community impacts and describes how existing collaborative regional and statewide efforts help rural libraries and their communities.

Authors Brian Real and Norman Rose combine data from the final Digital Inclusion Survey with Public Libraries Survey data from the Institute of Museum and Library Services to find:

  • Sixty percent of rural libraries have a single location as part of their administrative system, hampering economies of scale.
  • Rural libraries furthest from population centers (“rural remote”) are most likely to be single-outlet entities and lag rural counterparts (“rural distant” and “rural fringe”) in most measures of operational capacity.
  • Rural library broadband capacity falls short of the benchmark set for U.S. home access, which is 25 Mbps download and 4 Mbps upload. By contrast, rural fringe libraries average 13/8.6 Mbps, rural distant 7.7/2.2 Mbps and rural remote 6.7/1 Mbps.
  • Overall, one in 10 rural libraries report their internet speeds rarely meet patron needs.
  • Rural libraries are on par with colleagues in larger communities in terms of public wi-fi access and assisting patrons with basic computer and internet training, but more specialized training and resources can lag.
  • More than half of all rural libraries offer programs that help local residents apply for jobs and use job opportunity resources (e.g., online job listings, resume software), and rural libraries are comparable to their peers in providing work space for mobile workers.
  • Significant proportions of all rural libraries (even the most remote) offer programs and services related to employment, entrepreneurship, education, community engagement and health and wellness.
  • The level of programming and services is particularly noteworthy in light of staffing levels: 4.2 median FTE for rural fringe, 2.0 for rural distant and just 1.3 for rural remote libraries.
  • Rural libraries were the least likely to report renovations had taken place in the past five years: about 15 percent, compared with a national average of 21 percent. The Digital Inclusion Survey noted a relationship between facility updates and services and library program offerings.

Finally, the authors consider the roles of state and regional cooperation in adding capacity and resources for rural libraries, looking at examples from Maryland and Iowa.

One-third of all U.S. public libraries serve areas with populations of 2,500 or fewer people, and this new report provides one of the most detailed looks at their services available to date.

The post New report explores rural library technology access appeared first on District Dispatch.
