Planet Code4Lib - http://planet.code4lib.org

Islandora: Islandora 7.x-1.7 Release Team ASSEMBLE!

Mon, 2016-02-01 15:24

Ever thought about joining an Islandora Release Team? If you have worked with Islandora at all, there's a role to suit your skills and we'd like your help. Join Release Manager Dan Aitken and a team of your fellow volunteers to help get Islandora 7.x-1.7 released this April. We are looking for volunteers for the following roles:

Documentation:

Documentation will need to be updated for the next release. Any new components will also need to be documented. If you are interested in working on the documentation for a given component, please add your name to any component here.

Testers:

All components with JIRA issues set to 'Ready for Test' will need to be tested and verified. Additionally, testers test the overall functionality of a given component. If you are interested in being a tester for a given component, please add your name to any component here. Testers will be provided with a release candidate virtual machine to do testing on.

Auditors:

Each release, we audit our README and LICENSE files. Auditors will be responsible for auditing a given component. If you are interested in being an auditor for a given component, please add your name to any component listed here.

Component Managers:

Component managers are responsible for the code base of their components. If you are interested in being a component manager, please add your name to any component listed here.

More information about contributor roles can be found at http://islandora.ca/resources/contributors. If you'd like to assist with the release but don't know what to do, feel free to drop us a line and we can point you in the right direction so you can help out when the time comes.

The tentative schedule for the release is:

  • Code Freeze: February 18, 2016
  • First Release Candidate: March 3, 2016
  • Release: Mid to late April

Islandora: Dispatches from the User List: Islandora Scholar, Solr for Chinese Text, and Drag & Drop Ingest

Mon, 2016-02-01 14:15

Time to shine a spotlight on some great information you may have missed if you're not a subscriber to our listserv. We've done this a few times before, so consider this another installment.

Florida State University Report

First up is a post that wasn't actually on our main listserv. Instead, we're visiting the IR Interest Group's direct listserv, where Florida State University's Bryan Brown shared a report he wrote about why and how they migrated from Bepress to Islandora Scholar. Although the report is specific to their use case, there's some great stuff in there for anyone who is considering a migration - especially for an institutional repository, since FSU's work with Islandora Scholar is some of the best in the community.

Indexing Chinese text with Solr

Back to the main listserv, where Mark Jordan from Simon Fraser asked the community for advice on how to get Solr to handle "phrase" searches with multiple Chinese characters. Commenters brought up a presentation given by Jeff Liu from the Chinese University of Hong Kong at the Islandora Conference last summer, which showed how they handle this issue. Jeff himself chimed in with the details: a custom Solr config file by discoverygarden, Inc. that can be found on their GitHub.
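
For readers hitting the same issue: out of the box, Solr tokenizes unsegmented Chinese text character by character, so multi-character phrase matching behaves poorly. A common remedy (sketched here as a generic example; discoverygarden's actual config may differ) is a field type that indexes overlapping two-character tokens using CJKBigramFilter:

    <fieldType name="text_zh" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <!-- StandardTokenizer emits one token per CJK character -->
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <!-- Normalize full-width and half-width character forms -->
        <filter class="solr.CJKWidthFilterFactory"/>
        <!-- Join adjacent CJK characters into overlapping bigrams so
             multi-character "phrase" queries match as expected -->
        <filter class="solr.CJKBigramFilterFactory"/>
      </analyzer>
    </fieldType>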

Drag and Drop Ingest

Finally, a really great solution for "easy" ingest comes from the University of North Carolina Charlotte's Brad Spry, in response to a request from Jennifer Eustis of the University of Connecticut for advice on how other Islandora sites handle the ingest of very large files (Islandora Plupload is another approach). Brad's solution was to create "a 'drag and drop' ingest solution based upon a local NAS system with built in rsync, and server-side incron, PHP CLI, and islandora_batch," allowing UNCC's archivists to have all the power of islandora_batch without the need to use terminal commands. It's a very user-friendly approach that UNCC has shared on their GitHub.
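
For anyone curious about the server-side plumbing of such a setup: incron runs a command whenever the kernel reports a filesystem event in a watched directory. A minimal, hypothetical incrontab entry (the paths and script name are invented for illustration) could hand each newly dropped file to a PHP CLI script that queues an islandora_batch ingest:

    # Hypothetical /etc/incron.d/islandora-ingest entry:
    # <watched dir>   <events>                    <command>
    /mnt/nas/dropbox  IN_CLOSE_WRITE,IN_MOVED_TO  /usr/bin/php /opt/ingest/queue_batch.php $@/$#

incron expands $@ to the watched directory and $# to the file name, so the script receives the full path of the file that just arrived.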

This was followed up on Friday with another tool that works alongside the drag and drop ingest: Islandora Ingest Indicator, which is "designed to communicate Islandora ingest status to Archivists; a methodology for integrating Blink indicator lights with an Islandora ingest server. We have programmed Blink to glow GREEN for indicating 'ready for ingest' and RED for 'ingest currently running.'"

Open Knowledge Foundation: Google Funds Frictionless Data Initiative at Open Knowledge

Mon, 2016-02-01 14:05

We are delighted to announce that Open Knowledge has received funding from Google to work on tool integration for Data Packages as part of our broader work on Frictionless Data to support the open data community.

What are Data Packages?

The funding will support a growing set of tooling around Data Packages.  Data Packages provide functionality for data similar to “packaging” in software and “containerization” in shipping: a simple wrapper and basic structure for the transportation of data that significantly reduces the “friction” and challenges associated with data sharing and integration.

Data Packages also support better automation in data processing and do so without imposing major changes on the underlying data being packaged.  As an example, comprehensive country codes is a Data Package which joins together standardized country information from various sources into a single CSV file. The Data Package format, at its simplest level, allows its creator to provide information describing the fields, license, and maintainer of the dataset, all in a machine-readable format.
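
As a concrete illustration, here is a minimal sketch of such a descriptor, written as a short Python script that emits a datapackage.json (the dataset name, file paths, and maintainer are invented for the example):

    import json

    descriptor = {
        "name": "country-codes",                    # invented example name
        "title": "Comprehensive Country Codes",
        "licenses": [{"name": "ODC-PDDL-1.0"}],     # machine-readable license
        "maintainers": [{"name": "A. Maintainer"}], # hypothetical maintainer
        "resources": [{
            "path": "data/country-codes.csv",       # the actual data file
            "schema": {"fields": [                  # describes each CSV column
                {"name": "official_name", "type": "string"},
                {"name": "ISO3166-1-Alpha-2", "type": "string"},
            ]},
        }],
    }

    with open("datapackage.json", "w") as f:
        json.dump(descriptor, f, indent=2)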

In addition to the basic Data Package format, which supports any data structure, there are other, more specialised Data Package formats: Tabular Data Package for tabular data, based on CSV, and Geo Data Package for geodata, based on GeoJSON. You can also extend Data Package with your own schemas and create topic-specific Data Packages like Fiscal Data Package for public financial data.

What will be funded?

The funding supports adding Data Package integration and support to CKAN, BigQuery, and popular open-source SQL relational databases like PostgreSQL and MySQL / MariaDB.

CKAN Integration

CKAN is an open source data management system that is used by many governments and civic organizations to streamline publishing, sharing, finding and using data. This project implements a CKAN extension so that all CKAN datasets are automatically available as Data Packages through the CKAN API. In addition, the extension ensures that the CKAN API natively accepts Tabular Data Package metadata and preserves this information on round-tripping.
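
To give a feel for what that enables, here is a hedged sketch of a client pulling a CKAN dataset as a Data Package; the host, dataset name, and URL pattern are hypothetical, since the extension's actual routes may differ:

    import requests

    # Hypothetical URL pattern for a CKAN dataset exposed as a Data Package.
    url = "https://demo.ckan.org/dataset/example-dataset/datapackage.json"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()

    package = resp.json()
    print(package["name"])
    for resource in package.get("resources", []):
        # Each resource points at a data file, locally or by URL.
        print(resource.get("path") or resource.get("url"))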

BigQuery Integration

This project also creates support for import and export of Tabular Data Packages to BigQuery, Google's web service for querying massive datasets. This involves scripting and a small online service to map Tabular Data Package to BigQuery data definitions. Because Tabular Data Packages already use CSV as the data format, this work focuses on the transformation of data definitions.
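
The heart of that transformation is a type mapping between Table Schema and BigQuery column types. A minimal sketch (not the funded project's actual code, and covering only the common types):

    # Table Schema field types -> BigQuery column types (partial mapping).
    TYPE_MAP = {
        "string": "STRING",
        "integer": "INTEGER",
        "number": "FLOAT",
        "boolean": "BOOLEAN",
        "datetime": "TIMESTAMP",
    }

    def bigquery_fields(resource):
        """Translate one Tabular Data Package resource's schema into a
        BigQuery-style list of column definitions."""
        return [
            {"name": field["name"],
             "type": TYPE_MAP.get(field.get("type", "string"), "STRING")}
            for field in resource["schema"]["fields"]
        ]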

General SQL Integration

Finally, general SQL integration is being funded, covering key open source databases like PostgreSQL and MySQL / MariaDB. This will allow Data Packages to be used natively in an even wider variety of software that depends on these databases.

These integrations move us closer to a world of "frictionless data". For more information about our vision, visit: http://data.okfn.org/.

If you have any questions, comments or would like more information, please visit this topic in our OKFN Discuss forum.

LibUX: How to Talk About User Experience

Mon, 2016-02-01 13:00

In 2015, Craig M. MacDonald published interesting research reporting "the results of a qualitative study involving interviews with 16 librarians who have 'User Experience' in their official job title." He was able to demonstrate the quite healthy state of — let's capitalize it — Library User Experience. Healthy, but emerging. The blossoming of library user experience roles, named and unnamed, the community growing around it (like on Slack and Facebook), the talks, conferences, and corresponding literature signal a broad — if shallow — pond, because while we can workshop card sorts and redesign websites, we find it pretty hard to succinctly answer: what is user experience?

Some resist definition in the way others resist referring to people as “users” — you know who you are — but this serves only to conflate an organizational philosophy of being user-centric with the practice of user experience design. These aren’t the same. Muddling the two confuses a service mentality — a bias — with the use of tools and techniques to measure and improve user experience.

Although she is writing about Service Design, Jess Leitch sums up my same concerns about the fluid definition of "user experience design."

I will argue that the status of Service Design and its impact as a professional field is impacted by the absence of a single consistent definition of the area, the wide spread of professional practices and the varied backgrounds and training of its practitioners. Jess Leitch, What is Service Design?

How we talk about user experience matters.

The User Experience

When we talk about the user experience, we are talking about something that can be measured. It is plottable and predictable.

The user experience is the measure of your end-user’s interaction with your organization: its brand, its product, and its services.

The overall value of the user experience is holistic: a cumulative quality, one we try to understand through models like Peter Morville’s Honeycomb that serve as practical ways to focus our efforts.

Some, like the Kano Model, reflect how new features to a product or service impact — positively or negatively — the customer experience, the work involved implementing them (and whether it is worth it), and predict what impact if any these features will have in the long run. Others, like Coral Sheldon-Hess’s CMMI-based model, simply define how much consideration is afforded to the user experience at organizational levels.

Kano Model

This is to say that the value of the user experience is both qualitative and quantitative, in which the who and why give meaning to the when, what, and how.

In this way, talking about “user experience” as a measurement makes otherwise woo-woo intangible car-salesman bullshit — “this insert-thing-here has a super UX” — into something that can be practically evaluated and improved.

The customer is always right — but the user isn’t

What we choose to call our end-users can betray the underlying assumptions we make about them, our relationship, and the business model, and these assumptions shape how we interpret results. "Customer experience" is conceptually different from "user experience." The role of the patron is different from that of the member.

These distinctions can have real impact in determining which metrics matter, and where to focus.

I like “user” as a default. It gets a little tsked-at for being impersonal, but I suspect “user” is more conducive to data-driven design and development than one where the customer is always right.

What matters is that our user-, patron-, member-, customer-, xenomorph-centric ethic is the same, whether we are motivated by business, bosses, or — er — our humanity.

Usability isn’t the Point

Useful, usable, desirable: like three legs of a stool, if your library is missing the mark on any of these it’s bound to wobble. Amanda Etches and Aaron Schmidt

It is easy to confuse "usability" for "user experience" because a product or service's ease of use is often so crucial that blogs like mine beat that drum ad nauseam, but we should resist using these interchangeably. We otherwise narrow the scope from a nuanced holistic approach to product and service design and business, to one reduced to — I don't know — bashing hamburger menus on Twitter. When all user experience designers see is usability like bulls see red, they may forget that hard-to-learn, ugh inconvenient interfaces, tasks, services, can nevertheless net a positive user experience, succeed, make money, do work.

One of the reasons I like Peter Morville's User Experience Honeycomb so darn much is because it is such a useful way to visualize a multifaceted user experience where "usability is necessary but not sufficient." Where the value of the UX is cumulative, all ships rise with the tide. The look might suffer, but the drag a poor aesthetic creates is tempered by the product's usefulness.

"The honeycomb hits the sweet spot by … helping people understand the need to define priorities. Is it more important for your [service] to be desirable or accessible? How about usable or credible? The truth is, it depends on your unique balance of context, content, and users, and the required tradeoffs are better made explicitly than unconsciously." Peter Morville

The added value of this model is that user experience is represented as a hive. We can add, remove, jostle facets as we need. It can not only grow in area, but we can even demonstrate its three-dimensionality by — let’s say — elaborating on the relationship between “usable” and “useful.”

When we talk about “usability,” it should be in relation to something’s “utility”:
  • usable — something is easy to use and intuitive
  • utility — something fulfills a demonstrable need
  • useful — a usable product, service, application, process, etc. that fulfills a demonstrable need

Usability and utility are equally important and together determine whether something is useful: it matters little that something is easy if it’s not what you want. It’s also no good if the system can hypothetically do what you want, but … is too difficult. Jakob Nielsen, Usability 101: Introduction to Usability

The User Experience and Organizational Inertia

Good design is determined by its functional success, its efficacy, how masterfully it served this-or-that purpose. Its aesthetic has a role. Its emotional impact plays a part. But design is not art. The practical application of design thinking to services or instruction or libraries isn't just to make an awesome website but to empower decision makers with user-centric strategies to better meet mission or business goals.

Most of the time there are business- or mission-sensitive stakeholders behind user experience design work. In the same way we differentiate design from art, it may be generally more practical to differentiate a user experience design strategy from the desire to make whizzbang emotional experiences.

Often in real-world business/mission-driven design work, particularly in which design decisions need stakeholder support — sometimes in the form of cold hard cash — "making good experiences" can be nebulous, whereas "demonstrably improving the user experience of such-and-such service in ways that correlate to the success of such-and-such bottom line" is better suited for the kind of buy-in required for organizational user-centricity.

Anyway, in summary, this is how I choose to talk about user experience:

As a measurement. Something plottable, predictable.

I write a weekly newsletter called the Web for Libraries, chock-full of data-informed commentary about user experience design, including the bleeding-edge trends and web news I think user-oriented thinkers should know. Take a minute to sign up!



Open Library: February 1-5 is #ColorOurCollections Week

Sun, 2016-01-31 17:26

There are a lot of neat public domain images in our collections. We’ve highlighted them in the past and continue to encourage people to use, remix and share our content. This week for the #ColorOurCollections event, we’ve pulled out some especially colorable images and made them into PDFs that you can print out and color. We’ve created a few pairs of images we think you’ll like. Here are the images and links to the books where you can find and download even more. If you just want to download a zip file of all eight images, click here.

pinboard: Technology Awareness Resources

Sun, 2016-01-31 17:22
The resources below were compiled as part of my research while writing The Neal-Schuman Library Technology Companion: A Basic Guide for Library Staff (forthcoming from ALA Neal-Schuman, 2016): Websites and Blogs; Twitter; Electronic Discussion Lists; Periodicals; Continuing Education, Conference, and Trade Show Opportunities; Find Libraries Near You To Visit.

District Dispatch: ALA joins NFCC to serve military and their families through libraries

Fri, 2016-01-29 21:43


ALA has joined forces with the National Foundation for Credit Counseling® (NFCC®) and local libraries to deliver financial education and resources to members of the military and their families across the country.

Members of the U.S. armed forces, Coast Guard, veterans, and their families face financial challenges often not adequately addressed by resources designed for the general public. ALA and NFCC will leverage local member agencies and libraries to help improve the financial lives of service members, veterans and their families.

ALA President Sari Feldman commented on the vital new initiative:

The Digital Age has seen libraries transform and be recognized as a critical part of the infrastructure delivering services to communities nationwide. It is a particular honor to be able to serve those who have sacrificed so much on behalf of all Americans – our veterans and their families. We are especially pleased to partner with NFCC, an organization that understands the unique financial needs of military families. Together, our organizations and local members will boost access to relevant and customized resources and learning where it is needed most.

Recent preliminary data from NFCC’s Sharpen Your Financial Focus™ (Sharpen) program reveals military families face unique challenges. For example, military Sharpen participants had higher unsecured debt balances ($400-$500 more) than the average Sharpen participant. Fewer tangible assets and higher debt-related expenses were also more common among these families. Relocation, frequent deployment, and changes in local economic conditions are likely among the factors influencing these impacts.

This new initiative emerged out of conversations related to the National Policy Agenda for Libraries, and how ALA and libraries may partner with others to build capacity and further expand services to meet community needs and/or advance the public interest. One of the identified community focuses in the policy agenda is veterans and military families. Roughly 22 million Americans are veterans of military service, and another 2.2 million currently serve on military active duty or in reserve units.

NFCC and ALA had the opportunity to discuss this unique partnership on the radio show, Home & Family Finance, which regularly provides practical financial information to its listeners across the country. It is nationally syndicated and airs on the American Forces Radio Network and Sirius/XM Satellite Radio.

NFCC member agencies will work with local libraries to offer financial education workshops, access to personalized counseling, and other resources that help families reach their financial goals and contribute to the economic stability of their neighborhoods. The workshops will cover subjects like housing, budgeting, banking, credit, permanent change of station (PCS) & deployment, and career transition into civilian life. Local libraries and certified counselors will select the most relevant and timely topics for their communities.

This collaboration builds on relationships and library services developed to meet the needs of veterans, service members and their families. One example of this work can be found in the Veterans connect @ the library initiative with California libraries and the California Department of Veterans Affairs. Close to 40 Veterans Resource Centers have opened or are planned to open this year to connect veterans and their families to benefits and services for which they are eligible.

NFCC and ALA will announce the local communities and libraries where the program will first be launched in the coming weeks. For more information, please email: lclark@alawash.org


Peter Murray: Emerging Tech: Bluetooth Beacons and the DPLA

Fri, 2016-01-29 21:16

This is the text of a talk that I gave at the NN/LM Greater Midwest Region tech talk on January 29, 2016. It has been lightly edited and annotated with links to articles and other information. The topic was "Emerging Technology," and Trisha Adamus, Research Data Librarian at UW-Madison, and Jenny Taylor, Assistant Health Sciences Librarian at UIC LHS in Urbana, presented topics as well.

Bluetooth Beacons

Libraries of all types face challenges bridging the physical space with the online space. I'd wager that we've all seen stories of people walking around with their eyes glued to their mobile devices; you and I might have even been the subject of such stories. We want users to know about new services available in our spaces — both the physical and the online — yet it is difficult to connect to users.

Bluetooth Beacons, along with a phone and applications written to make use of beacons, can turn a user's smartphone into a tool for reaching users with information tailored to your library. Some examples:

Facebook Bluetooth Beacons

Facebook is one company experimenting with Bluetooth beacons. In a trial program underway now, Facebook will send you a beacon that you can tie to a Facebook Place. When a patron uses Facebook in range of the beacon, they see a welcome note and posts about the place and are prompted to like the associated Facebook Page and check in at the location. Facebook Bluetooth Beacons are in limited deployment now, and there is a web page available for you to sign up to receive one.

Brooklyn Museum

The Brooklyn Museum experimented with indoor positioning with beacons in 2014 and 2015. They scattered beacons throughout the galleries and added a function to their mobile app to pinpoint where the user is as they ask questions about artwork. They have a blog post on their website where they describe the challenges with positioning the beacons and having the beacons fit into the aesthetics of their gallery spaces.

University of Oklahoma NavApp

As described by the University of Oklahoma Libraries, its NavApp guides users through the main library building, including various resources, service desks, and event spaces. The app also includes outdoor geolocation to guide users to the libraries' branches and special collections. When a student is standing in front of a study room, the app shows how to book the room. The library also has about 100 beacons in its museum space to show more information and videos about artworks.

How Bluetooth Beacons Work

The foundation of Bluetooth Beacons is the iBeacon protocol. As with anything that has an 'i' in front of it nowadays, you would rightly guess that this is something created by Apple. In 2013, Apple announced a way for an iPhone to figure out its location in an indoor space. (When outside, a device can receive GPS satellite signals, but those signals do not penetrate into buildings.) The iBeacon technology has since been adopted by many companies; it isn't something limited to Apple. A beacon continuously transmits a globally unique number to any device within range — typically up to 30 feet, sometimes farther. An app on the device can then take action based on that unique number.

Say, for instance, you have your library's app on your phone when you walk into the library. The beacon at the entrance is transmitting its unique number, and your phone wakes up the app when it gets in range of the library's beacon. The app can then decide what to do — maybe it connects to the library's web server to get the time when the building closes and displays an alert with that information. Or the app can retrieve the events calendar and let you know what is happening in the library today. Maybe the app checks your hold queue to see if you have items to pick up.

Once inside the library, the smartphone starts receiving unique identifiers from beacons scattered around the space. The smartphone app has a built-in map of where the devices are located, and based on which identifiers it receives it can figure out where in the space the phone is. As you move with your smartphone around the space, it sees different identifiers and in that way can track your movement. So when you get that notification about an item to pick up from the hold queue, a map in the library app can guide you to the hold pickup location.

It is important to note that all the intelligence is in the smartphone app. The beacon itself is just a dumb device that transmits the same unique number over and over. The beacon is not connected to your wired or wireless network, and it doesn't receive any information from the smartphone. It is up to the smartphone and the apps on it to do something with the unique number from the beacon. This means that the beacons themselves can be really cheap — sometimes less than $5 — and can last a really long time on one battery — months or years. That's why Facebook can give them away for free and why retailers are installing dozens of them per store.
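
Because the beacon's payload is so simple, decoding it is simple too. As a sketch of what a listening app does under the hood, here is a small Python function that pulls the iBeacon fields (Apple's 0x004C company ID, the 0x02 0x15 type/length header, a 16-byte proximity UUID, two-byte major and minor numbers, and a signed one-byte calibrated TX power) out of an advertisement's manufacturer data:

    import struct
    import uuid

    def parse_ibeacon(mfg_data: bytes):
        """Return (uuid, major, minor, tx_power) if mfg_data looks like an
        iBeacon advertisement, otherwise None."""
        # 2-byte company ID (0x004C, little-endian) + 0x02 0x15 + 21-byte payload
        if len(mfg_data) < 25:
            return None
        if mfg_data[0:2] != b"\x4c\x00" or mfg_data[2:4] != b"\x02\x15":
            return None
        beacon_uuid = uuid.UUID(bytes=mfg_data[4:20])         # region identifier
        major, minor = struct.unpack(">HH", mfg_data[20:24])  # app-defined numbers
        (tx_power,) = struct.unpack(">b", mfg_data[24:25])    # RSSI at 1 meter
        return beacon_uuid, major, minor, tx_power

The TX power byte is what makes rough distance estimates possible: the app compares the signal strength it actually receives against that calibrated one-meter value to judge whether the beacon is "immediate," "near," or "far."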

Concerns about Beacons

You might think all of this sounds great — a futuristic science fiction world where machines know your exact location and can serve up information tailored specifically to where you are. There are some not so nice aspects, too.

Privacy is one. Your position within a building — whether it be in front of a shelf full of books or a shelf full of flu remedies — can be recorded along with the exact date and time by any number of third parties. The vendor that is supplying the Rite Aid pharmacy chain with beacons for its 4,500 stores is also partnering with publishers like Condé Nast and Gannett, so apps from those companies will also be listening for the unique beacon identifiers. Now apps like Epicurious, Coupon Sherpa and ScanLife will know when and where your phone has been in a store.

Security is another. In the basic iBeacon protocol, there is nothing that validates a beacon's signal, so it is possible to fool a smartphone app into thinking it is near a beacon when it isn't. There is a story of how the staff at Make Magazine hacked a scavenger hunt at the 2014 Consumer Electronics Show. They showed how they could win the hunt without ever being in Las Vegas.

If you are interested in hearing more about Bluetooth Beacons, check out the article on my blog that will have links to the things I've talked about and more.

For More Information…

Digital Public Library of America

One of my fondest themes in the evolution of library services is how libraries have dealt with massive waves of information. In fact, I think we are in the third such wave of change. The first wave came with the printing press. It gave rise first to bibles, then to all sorts of commercially published tomes of fact and fiction. Libraries grew out of a desire to make that information more broadly accessible, and that was the first wave — commercially produced physical material. The second wave came just a few decades ago with commercially produced digital material. You know what this looks like: journal articles as standalone PDF files, electronic books downloaded to handheld devices, and indexes first on faraway computers with the Thomson Reuters Dialog system — then on CD-ROMs — and then spread all over the world wide web. For a time, libraries tried to collect and curate the wave of commercially produced digital material themselves, but for the most part this has been ceded to commercial providers.

And now we are in the third wave: local, digital materials. Libraries are taking on the responsibility of stewardship for article preprints, reports, datasets, and other materials for our users. This is not necessarily a new thing — through both the first and second waves of commercially produced information, libraries have been a place for local, unique material. What has changed is that libraries have become a publisher of sorts by offering that information to a community broader than the one that could physically come to the library. We are not only taking in born-digital materials in this third wave, but also reaching back into our collections and archives to digitize and publish material that is unique to our holdings.

This dispersion of library activity was becoming a problem, though. How could users find the relevant material published by the library down the street, across the state, or on the other side of the country? The European Union, faced with this same question last decade, formed Europeana — an internet portal that provides pointers to the collective digital information of Europe. In 2011, libraries in the U.S. took on the task of forming our own solution, and it is the Digital Public Library of America.

DPLA Portal

Perhaps the most well known aspect of the DPLA is its search portal, and the URL to it is very easy to remember: dp.la. If you can remember "Digital Public Library of America", you can remember this web address. The portal has several ways to search for content: you can look at curated exhibitions of content pulled from all the DPLA partners, you can explore by place through an interactive map, and you can look at a timeline of material. There are apps that use the DPLA application programming interface to search for material in innovative ways or to integrate material from the DPLA into other systems.
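
The API itself is a conventional JSON-over-HTTP service. As a quick illustrative sketch (the search term is arbitrary, and you would substitute an API key requested from DPLA):

    import requests

    resp = requests.get(
        "https://api.dp.la/v2/items",
        params={"q": "yearbooks", "api_key": "YOUR_API_KEY"},
        timeout=30,
    )
    resp.raise_for_status()

    for doc in resp.json().get("docs", []):
        # sourceResource carries the descriptive metadata for each item
        print(doc.get("sourceResource", {}).get("title"))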

The DPLA Portal is just that — it is an aggregation and a view of metadata harvested from hubs across the country. The DPLA Portal doesn't store information; it just points to where the information is stored. A series of content hubs and service hubs provide metadata to DPLA. Content hubs are large standalone units such as ARTstor, the Government Printing Office, and the Internet Archive. Service hubs gather metadata from libraries in a region and provide a single feed of that metadata to DPLA. Service hubs are also a gathering point for professional development, expertise on digitization and metadata creation, and community outreach.

"Hydra-in-a-Box"

The most difficult part of this library-to-hub-to-portal arrangement is at the local library. At this point in time, it is tricky to publish information to the web in a way that can be harvested by a service hub and maintained for the long term. Your average digital asset management system has a lot of moving parts and requires complex server setups. The Hydra-in-a-Box project aims to reduce this complexity so a library won't need developers to install, configure and run the application. The project launched last year and is nearing the completion of the design phase.

E-books

Since the early formation days of the DPLA, one of the most desired streams of activity has been around ebooks. Ebooks have not yet been a good fit for library service offerings. We've seen problems ranging from purchasing and licensing models that don't work well for libraries to electronic book platforms that have limited or no integration with existing library systems. DPLA has a number of ebook initiatives where librarians and publishers are working through ways to smooth the rough edges. One is the Open Ebooks Initiative, a partnership among DPLA, the New York Public Library, and the First Book organization. This initiative is offering public domain and current popular books for free to low-income students. DPLA is also the host of working groups that aim to develop a national library ebook strategy.

DPLA Community Representatives

If you are interested in getting involved with the DPLA, one of the best ways to do so is to join the community reps program. These volunteers are a two-way conduit of information between users of DPLA services and the DPLA staff. Community reps organize regional activities to promote DPLA and provide feedback with a local perspective to other reps and to the staff. Applications for the next class of community reps are due on February 19th.

Cynthia Ng: Tips on Converting PDF to an Accessible Document

Fri, 2016-01-29 20:46
I’ve talked about making documents accessible and the editing guidelines, but the more editing I do, the more I realize I save a lot of time because I don’t do all my editing manually. Some of these tips might also help when editing after converting from EPUB and other ebook formats.

District Dispatch: National Library Legislative Day 2016

Fri, 2016-01-29 18:07

It’s that time again! Registration for the 42nd annual National Library Legislative Day is open.

This year, the event will be held in Washington, D.C. on May 2-3, 2016, bringing hundreds of librarians, trustees, library supporters, and patrons to the capital to meet with their Members of Congress and rally support for library issues and policies. As with previous years, participants will receive advocacy tips and training, along with important issue briefings prior to their meetings.

Participants at National Library Legislative Day are also able to take advantage of a discounted room rate by booking at the Liaison (for the nights of May 1st and 2nd). To register for the event and find hotel registration information, please visit the website.

Want to see a little more? Check out the video from last year!


We also offer a scholarship opportunity to one first-time participant at National Library Legislative Day. Recipients of the White House Conference on Library and Information Services Taskforce (WHCLIST) Award receive a stipend of $300 and two free nights at a D.C. hotel. For more information about the WHCLIST Award, visit our webpage.

I hope you will consider joining us!

For more information or assistance of any kind, please contact Lisa Lindle, ALA Washington’s Grassroots Communications Specialist, at llindle@alawash.org or 202-628-8140.


District Dispatch: State Government Information and the copyright conundrum (updated information!)

Fri, 2016-01-29 18:02

CopyTalk is back with a new webinar on February 4, 2016.

Updated webinar registration information! (see below)

Figuring out whether state government documents are copyrighted is a tricky question. Copyright law has a significant impact on the work of libraries, digital repositories, and even state agencies with regard to digitizing and web archiving state government information.

Free State Government Information (FSGI) http://stategov.freegovinfo.info/ has been steadily working to raise awareness and find pathways forward for policy change with regard to the copyright status of state government publications.

Get the scoop from the FSGI at the next CopyTalk on February 4th at 2 pm Eastern/11 am Pacific.

This presentation will cover:

  • who we are and why we are tackling copyright issues with state government
  • specific state government information projects that academic, state, and digital libraries are engaged in that are impacted by copyright
  • a way forward to address copyright policy in the states: Kyle Courtney’s 50 state survey of copyright policies, State Copyright Resource Center http://copyright.lib.harvard.edu/states/

Speakers:

For full bios see: http://stategov.freegovinfo.info/about

  • Bernadette Bartlett, Library of Michigan, Michigan Documents Librarian
  • Kyle Courtney, Copyright Advisor, Harvard University
  • Kristina Eden, Copyright Review Program Manager, HathiTrust
  • Kris Kasianovitz, Stanford University Library, Government Information

If we have more than 100 attendees, we are charged some ridiculous amount that will come out of my paycheck! So we ask that attendees watch the webinar with colleagues when possible. To access the webinar, go here and register as a guest and you’re in!

Yes, it’s FREE because the Office for Information Technology Policy and the Copyright Education Subcommittee want to expand copyright awareness and education opportunities.

An archived copy will be available after the webinar.


District Dispatch: ALA’s Charlie Wapner promoted

Fri, 2016-01-29 16:38

Will serve as senior information policy analyst in Office for Information Technology Policy (OITP)


Please join me in congratulating Charlie Wapner on his promotion from Information Policy Analyst to Senior Information Policy Analyst effective in January 2016.

Many of you know Charlie through his leadership on 3D printing. He completed a major report, “Progress in the Making: 3D Printing Policy Considerations Through the Library Lens,” which attracted library and general press coverage (e.g., Charlie contributed to a piece by the Christian Science Monitor), and he was invited to write an article for School Library Journal based on his report. Charlie also produced a more accessible, shorter report on 3D printing, in collaboration with United for Libraries and the Public Library Association, and in December 2015 released a report on the merits of 3D printing and libraries targeted to the national policy community as part of our advocacy in conjunction with the Policy Revolution! initiative. Charlie was invited to present at a number of venues, such as the Dupont Summit and a workshop at Virginia Tech, and was invited as an expert to a 3-day workshop hosted by Benetech (under an IMLS grant) in Silicon Valley.

Notwithstanding the import of Charlie’s 3D printing contributions, the large majority of his time is dedicated to the extensive and wide-ranging research and analysis that he provides under the rubric of the Policy Revolution! initiative. With general (or even vague) direction, Charlie clarifies research needs, finds and digests relevant material, and writes syntheses on topics from veterans’ services and entrepreneurship to broadband and youth and technology. In the past few months, Charlie’s research and analysis have extended to informing our work to identify new collaborators (e.g., funders) and specifically to identify new funding opportunities for OITP and for the Association generally. Going forward, Charlie will also be increasing his focus on international policy work.

Charlie came to ALA in March 2014 from the Office of Representative Ron Barber (Ariz.) where he was a legislative fellow. Earlier, he also served as a legislative correspondent for Representative Mark Critz (Penn.). Charlie also interned in the offices of Senator Kirsten Gillibrand (N.Y.) and Governor Edward Rendell (Penn.). After completing his B.A. in diplomatic history at the University of Pennsylvania, Charlie received his M.S. in public policy and management from Carnegie Mellon University.


Open Knowledge Foundation: Open Data Day Mini Grants: back for 2016!

Fri, 2016-01-29 14:18

This year, on Saturday, the 5th of March, the fourth annual Open Data Day will take place. For us at Open Knowledge, Open Data Day is one of our favourite initiatives. This is a grassroots event that has no particular organisation behind it, and it is able to bring together people from all over the world to discuss, hack and promote open data. From Japan to Vancouver, Cape Town to Oslo, Brazil to Nepal, London and Greece, Open Data Day is a global celebration of openness. It helps us all raise awareness about openness of data in different fields across the world, and it unites us once a year as a community.

Last year, Open Knowledge International started the mini grant scheme to support Open Data Day events across the world. Because these are volunteer-based events, we know how a small chunk of money can make a great difference — from getting food for your hackathon to hiring a venue to whatever you need. In 2015, with the support of ILDA, the Sunlight Foundation and the Caribbean Open Institute, we were able to give 28 grants all over the world and to enrich Open Data Day.

This year, we are happy to announce that we will keep giving mini grants to support Open Data Day around the world. Open Knowledge will be able to distribute a total of $7500 USD among different groups around the world. The mini grants will be $250-$350 USD each and will serve Open Data Day 2016 events only. The deadline for all applications is Sunday, 14/2/2016.

2015 Open Data Day participants in Indonesia

How to apply for the mini grants scheme?

First, add your event to the Open Data Day website and wiki. Then, fill out this FORM. NOTICE: Events that are not registered on the Open Data Day website will not be considered for the grant.

Who can apply for a mini grant?

Any civil society group from anywhere around the world. We will give preference to current groups and affiliate groups that already work as part of the Open Knowledge Network, but we will consider other groups as well. Note that we will not give this grant to governments.

Is there any topic that the event should focus on? No, it can be Open Science, Open GLAM, Open Gov… As long as it has something to do with Open Data. :-)

Are there any geographical restrictions? It doesn’t matter where your event is; you are welcome to apply. Please note that we will not fund two events in the same country, so we encourage groups to merge into one event where they can.

What is the catch? Do I have to do anything in return? Yes, there is a small catch, but only for the sake of knowledge sharing and smooth operations!

  1. Since Open Data Day is really around the corner, we ask you to provide us with all the information needed to deliver your grant within 3 working days of being notified that you will receive it.
  2. We do ask you to write a blog post that describes your event and what the group learned from it. We believe that in this way members of the Open Knowledge Network can learn from one another and make better connections between people and ideas.

If my application is successful, how are you going to transfer us the money?

If your application is successful, you will be required to immediately provide sufficient bank information in order for us to make payment. All payments will be made either via PayPal or directly to your bank account.

When will you announce if I got the mini grant? We aim to notify all grant winners by Friday, 19/2/2016.

The deadline for all applications is Sunday, 14/2/2016.

For more information, please ask in our forum and one of us will be happy to assist: https://discuss.okfn.org/c/network-and-community/open-data-day

FOSS4Lib Upcoming Events: Hydra Developer Congress

Thu, 2016-01-28 21:34
Date: Thursday, March 24, 2016 - 08:00 to Saturday, March 26, 2016 - 16:30. Supports: Sufia, Hydra, Fedora Repository.

Last updated January 28, 2016. Created by Peter Murray on January 28, 2016.

See the meeting announcement for more details.

FOSS4Lib Recent Releases: Sufia - 6.6.0

Thu, 2016-01-28 21:32

Last updated January 28, 2016. Created by Peter Murray on January 28, 2016.

Package: Sufia. Release Date: Thursday, January 28, 2016.

District Dispatch: Amazonopoly!

Thu, 2016-01-28 21:28

Amazon’s control of the print and e-book market is raising concern.

The New America Foundation held another excellent program yesterday titled “Amazon’s Book Monopoly: A Threat to Freedom of Expression?” Particularly relevant — this summer the Authors Guild and the American Booksellers Association appealed to the Department of Justice, asking for an investigation into Amazon’s alleged anti-trust activities. An archived copy of the program can be viewed here.

What’s the problem? Sure, we understand that Amazon dominates the online retailer space but we love Amazon because we can buy just about anything, at a good price with prompt delivery, right? The speakers agreed with that assessment but are concerned with Amazon’s control of “the culture market.” The dominance of Amazon’s book retail division is trampling on the free flow of books and information to the detriment of American democracy. And the evidence is awfully convincing.

Similar to Standard Oil of the 1900s in market domination, Amazon controls 75% of the print book market and 65% of the ebook market. It is known to undercut book prices at a financial loss to capture more and more of the market. The problem is vertical integration: Amazon owns its own publishing company (among other things) and competes with its own customers who rely on its platform. If you are an author, you want your books sold on Amazon, but Amazon undercuts traditional retail prices of books, leading to less revenue for the author. Recently the big five publishers negotiated agency pricing with Amazon, pumping book prices up to a more profitable level. But when books priced at $19.95 are listed next to books priced at 99 cents, what will customers choose? And what will they think? Do customers believe this is the going rate for books these days?

Amazon is known to manipulate search results, placing its published books at a higher ranking than other books. Amazon had a well-publicized fight with Hachette over prices last year. For the 11 months of the dispute, Amazon refused to sell Hachette books. One Hachette author described searching Amazon for his own book and finding it not available. Instead, a pop-up box appeared suggesting that buyers read an alternate author — in this case, Lee Child — whose books were “just as good.” (Lee Child is published by Penguin Random House.)

Speakers also expressed dissatisfaction with the Obama administration and its unwillingness to address online retail monopolies. The Justice Department has shied away from bringing anti-trust suits of this kind — the solutions are difficult and their impact would be extensive. A speaker also noted that The Washington Post did not send a reporter to cover the event. (Jeff Bezos, the head of Amazon, owns The Washington Post.) There was a great sense of concern, fear and even suspicion throughout the meeting. Apparently other speakers were invited to the event but were afraid of retaliation.

Putting aside conspiracy theories, this is an important information policy issue for libraries to monitor. We need to consider what information is becoming less available, and what speech is effectively quashed when the public is unable to find content. Moreover, if certain genres are favored over others because of higher sales numbers, will authorship change, and will people be less willing to write what they want? Will there be a book market for libraries?

This, and a lot more, was discussed at this fascinating program, which leaves much food for thought.

I encourage people to watch the archived video.


Code4Lib Journal: Editorial Introduction: New Year Resolutions

Thu, 2016-01-28 16:07
While New Year’s Day came and went with very little fanfare at my house (well, if you don’t count our Star Wars marathon), I think I’d be remiss if I didn’t take the time to mark the passing of the new year, with a look ahead to the future. And I think it is fitting, then, […]

Code4Lib Journal: Beyond Open Source: Evaluating the Community Availability of Software

Thu, 2016-01-28 16:07
The Code4Lib community has produced an increasingly impressive collection of open source software over the last decade, but much of this creative work remains out of reach for large portions of the library community. Do the relatively privileged institutions represented by a majority of Code4Lib participants have a professional responsibility to support the adoption of their innovations? Drawing from old and new software packaging and distribution approaches (from freeware to Docker), we propose extending the open source software values of collaboration and transparency to include the wide and affordable distribution of software. We believe this will not only simplify the process of sharing our applications within the library community, but also make it possible for less well-resourced institutions to actually use our software. We identify areas of need, present our experiences with the users of our own open source projects, discuss our attempts to go beyond open source, propose a preliminary set of technology availability performance indicators for evaluating software availability, and make an argument for the internal value of supporting and encouraging a vibrant library software ecosystem.

Code4Lib Journal: Bringing our Internet Archive collection back home: A case study from the University of Mary Washington

Thu, 2016-01-28 16:07
The Internet Archive is a great boon to smaller libraries that may not have the resources to host their own digital materials. However, individual items uploaded to the Internet Archive are hard to treat as a collection. Full text searching can only be done within an item. It can be difficult to direct patrons to local resources. Since 2010, the University of Mary Washington has uploaded over two thousand digitized university publications, including the student newspaper and the yearbook, to the Internet Archive. Taken together, these represent almost 100 years of UMW history. Using Apache Lucy, we built a search interface, Eagle Explorer, that treats our Internet Archive collection as a cohesive whole. Patrons can use Eagle Explorer to full-text search within the collection and to filter by date and publication. This article will describe how we created Eagle Explorer, the challenges we encountered, and its reception from the campus community.

Code4Lib Journal: Extracting, Augmenting, and Updating Metadata in Fedora 3 and 4 Using a Local OpenRefine Reconciliation Service

Thu, 2016-01-28 16:07
When developing local collections, librarians and archivists often create detailed metadata which then gets stored in collection-specific silos. At times, the metadata could be used to augment other collections but the software does not provide native support for object relationship update and augmentation. This article describes a project updating author metadata in one collection using a local reconciliation service generated from another collection's authority records. Because the Goddard Library is on the cusp of a migration from Fedora 3 to Fedora 4, this article addresses the challenges in updating Fedora 3 and ways Fedora 4's architecture will allow for easier updates.
