In honor of Thanksgiving, I’d like to give thanks for 5 tech tools that make life as a librarian much easier.
On any given day I work on at least 6 different computers and tablets. That means I need instant access to my documents wherever I go, and without cloud storage I’d be lost. While there are plenty of other free file hosting services, I like Drive the most because it offers 15GB of free storage and it’s incredibly easy to use. When I’m working with patrons who already have a Gmail account, setting up Drive is just a click away.
I dabbled in Goodreads for a bit, but I must say, Libib has won me over. Libib lets you catalog your personal library and share your favorite media with others. While it doesn’t handle images quite as well as Goodreads, I much prefer Libib’s sleek and modern interface. Instead of cataloging books that I own, I’m currently using Libib to create a list of my favorite children’s books to recommend to patrons.
Hopscotch is my favorite iOS app right now. With Hopscotch, you can learn the fundamentals of coding through play. The app is marketed towards kids, but I think the bubbly characters and lighthearted nature appeal to adults too. I’m using Hopscotch in an upcoming adult program at the library to show that coding can be quirky and fun. If you want to use Hopscotch at your library, check out their resources for teachers. They’ve got fantastic ready-made lesson plans for the taking.
My love affair with Photoshop started many years ago, but as I’ve gotten older, Illustrator and I have become a much better match. I use Illustrator to create flyers, posters, and templates for computer class handouts. The best thing about Illustrator is that it’s designed for working with vector graphics. That means I can easily translate a design for a 6-inch bookmark into a 6-foot poster without losing image quality.
Twitter is hands-down my social network of choice. My account is purely for library-related stuff and I know I can count on Twitter to pick me up and get me inspired when I’m running out of steam. Thanks to all the libraries and librarians who keep me going!
What tech tools are you thankful for? Please share in the comments!
When Boston Public Library first designed its statewide digitization service plan as an LSTA-funded grant project in 2010, we offered free imaging to any institution that agreed to make their digitized collections available through the Digital Commonwealth repository and portal system. We hoped and suggested that money not spent by our partners on scanning might then be invested in the other side of any good digital object – descriptive metadata. We envisioned a resurgence of special collections cataloging in libraries, archives, and historical societies across Massachusetts.
After a couple of years, reality set in. Most of our partners did not have the resources to generate good descriptive records structured well enough to fit into our MODS application profile without major oversight and intervention on our part. What we did find, however, were some very dedicated and knowledgeable local historians, librarians, and archivists who maintained a variety of documentation that could be best described as “pre-metadata.” Their local landscapes included inventories, spreadsheets, caption files, finding aids, catalog cards, sleeve inscriptions, dusty three-ring binders – the rich soil from which good metadata grows.
We understood it was now our job to cultivate and harvest metadata from these local sources. And thus the “Metadata Mob” was born. It is a fun and creative type of mob — less roughneck and more spontaneous dance routine. Except, instead of wildly cavorting to Do-Re-Mi in train stations, we cut-and-paste, we transcribe, we script, we spell check, we authorize, we regularize, we refine, we edit, and we enhance. It is a highly customized, hands-on process that differs slightly (or significantly) from collection to collection, institution to institution.
In many ways, the work Boston Public Library does has come to resemble the locally-sourced food movement in that we focus on how each community understands and represents their collections in their own unique way. Free-range metadata, so to speak, that we unearth after plowing through the annals of our partners.
We don’t impose our structures or processes on anyone beyond offering advice on some standard information science principles – the three major “food groups” of metadata as it were – well defined schema, authority control, and content standard compliance. We encourage our partners to maintain their local practices.
We then carefully nurture their information into healthy, juicy, and delicious metadata records that we can ingest into the Digital Commonwealth repository. We have all encountered online resources with weak and frail frames — malnourished with a few inconsistently used Dublin Core fields and factory-farmed values imported blindly from collection records or poorly conceived legacy projects. Our mob members eschew this technique. They are craftsmen, artisans, information viticulturists. If digital library systems are nourished by the metadata they ingest, then ours will be kept vigorous and healthy with the rich diet they have produced.
Thanks to SEMAP for use of their logo in the header image. Check out SEMAP’s very informative website at semaponline.org. Buy Fresh, Buy Local! Photo credit: Lori De Santis.
All written content on this blog is made available under a Creative Commons Attribution 4.0 International License. All images found on this blog are available under the specific license(s) attributed to them, unless otherwise noted.
From Bram Luyten, @mire
With the DSpace 5 release coming up, we wanted to make it easier for aspiring developers to get up and running with DSpace development. In our experience, starting off on the right foot with a proven set of tools and practices can reduce someone’s learning curve and help in quickly getting to initial results. IntelliJ IDEA 13, the integrated development environment from JetBrains, can make a developer’s life a lot easier thanks to a truckload of features that are not included in your run-of-the-mill text editor.
By Michele Mennielli, International Relations, Cineca
Bologna, Italy. During the recent euroCRIS Strategic Membership Meeting, held in Amsterdam November 11-13, Cineca had the opportunity to present a new version of DSpace-CRIS based on DSpace 4.2. This version of DSpace-CRIS will be released in the next few days.
From James Evans, Product Manager, Open Repository
As previously reported in The Digital Reader, the bill passed in September by wide margins in both houses of the New Jersey State Legislature and would have codified the right to read ebooks without the government and everybody else knowing about it.
I wrote about some problems I saw with the bill. Based on a California law focused on law enforcement, the proposed NJ law added civil penalties on booksellers who disclosed the personal information of users without a court order. As I understood it, the bill could have prevented online booksellers from participating in ad networks (they all do!).
Governor Christie's veto statement pointed out more problems. The proposed law didn't explicitly prevent the government from asking for personal reading data, it just made it against the law for a bookseller to comply. So, for example, a local sheriff could still ask Amazon for a list of people in his town reading an incriminating book. If Amazon answered, somehow the reader would have to:
- find out that Amazon had provided the information
- sue Amazon for $500.
In New Jersey, a governor can issue a "Conditional Veto". In doing so, the governor outlines changes in a bill that would allow it to become law. Christie's revisions to the Reader Privacy Act make the following changes:
- The civil penalties are stripped out of the bill. This allows Gov. Christie to position himself and NJ as "business-friendly".
- A requirement is added preventing the government from asking for reader information without a court order or subpoena. Christie gets to be on the side of liberty. Yay!
- It's made clear that the law applies only to government snooping, and not to promiscuous data sharing with ad networks. Christie avoids the ire of rich ad network moguls.
- Child porn is carved out of the definition of "books". Being tough on child pornography is one of those politically courageous positions that all politicians love.
I'm not a fan of his by any means, but Chris Christie's version of the Reader Privacy Act is a solid step in the right direction and would be an excellent model for other states. We could use a law like it on the national level as well.
(Guest posted at The Digital Reader)
As some of you already know, Marlene and I are moving from Seattle to Atlanta in December. We’ve moved many (too many?) times before, so we’ve got most of the logistics down pat. Movers: hired! New house: rented! Mail forwarding: set up! Physical books: still too dang many!
We could do it in our sleep! (And the scary thing is, perhaps we have in the past.)
One thing that is different this time is that we’ll be driving across the country, visiting friends along the way. 3,650 miles, one car, two drivers, one Keurig, two suitcases, two sets of electronic paraphernalia, and three cats.
Who wants to lay odds on how many miles it will take each day for the cats to lose their voices?
Fortunately Sophia is already testing the cats’ accommodations:
I will miss the friends we made in Seattle, the summer weather, the great restaurants, being able to walk down to the water, and decent public transportation. I will also miss the drives up to Vancouver for conferences with a great bunch of librarians; I’m looking forward to attending Code4Lib BC next week, but I’m sorry that our personal tradition of American Thanksgiving in British Columbia is coming to an end.
As far as Atlanta is concerned, I am looking forward to being back in MPOW’s office, having better access to a variety of good barbecue, the winter weather, and living in an area with less de facto segregation.
It’s been a good two years in the Pacific Northwest, but much to my surprise, I’ve found that the prospect of moving back to Atlanta feels a bit like a homecoming. So, onward!
PeerLibrary participated at OpenCon 2014, the student and early career researcher conference on Open Access, Open Education, and Open Data.
Today I found and bookmarked the following resources:
- FnordMetric | Framework for building beautiful real-time dashboards FnordMetric allows you to write SQL queries that return SVG charts rather than tables. Turning a query result into a chart is literally one line of code.
As the Ebola outbreak continues, the public must sort through all of the information being disseminated via the news media and social media. In this rapidly evolving environment, librarians are providing valuable services to their communities as they assist their users in finding credible information sources on Ebola, as well as other infectious diseases.
On December 12, 2014, library leaders from the U.S. National Library of Medicine will host the free webinar “Ebola and Other Infectious Diseases: The Latest Information from the National Library of Medicine.” As a follow-up to the webinar they presented in October, librarians from the U.S. National Library of Medicine will discuss how to provide effective services in this environment, as well as provide an update on information sources that can be of assistance to librarians.

Speakers
- Siobhan Champ-Blackwell is a librarian with the U.S. National Library of Medicine Disaster Information Management Research Center. Champ-Blackwell selects material to be added to the NLM disaster medicine grey literature database and is responsible for the Center’s social media efforts. She has over 10 years of experience in providing training on NLM products and resources.
- Elizabeth Norton is a librarian with the U.S. National Library of Medicine Disaster Information Management Research Center where she has been working to improve online access to disaster health information for the disaster medicine and public health workforce. Norton has presented on this topic at national and international association meetings and has provided training on disaster health information resources to first responders, educators, and librarians working with the disaster response and public health preparedness communities.
Date: December 12, 2014
Time: 2:00 PM–3:00 PM Eastern
Register for the free event
If you cannot attend this live session, a recorded archive will be available to view at your convenience. To view past webinars also done in collaboration with iPAC, please visit Lib2Gov.org.
As faculty and students delve into digital scholarly works, they are tripping over the kinds of challenges that libraries specialize in overcoming, such as questions about digital project planning, improving discovery, or using quality metadata. Indeed, nobody is better suited to helping scholars decide how to organize and deliver their digital works than librarians.
At my institution, we have not marketed our expertise in any meaningful way (yet), but we receive regular requests for help from faculty and campus organizations who are struggling with publishing digital scholarship. For example, a few years ago a team of librarians at my library helped researchers from the National University of Ireland, Galway migrate and restructure their online collection of annotations from the Vatican Archive to a more stable home on Omeka.net. Our expertise in metadata standards, OAI harvesting, digital collection platforms and digital project planning turned out to be invaluable in saving their dying collection and giving it a stable, long-term home. You can read more in my Saved by the Cloud post.
These kinds of requests have continued since. In recognition of this growing need, we are poised to launch a digital consultancy service on our campus.

Digital Project Planning
A core component of our jobs is planning digital projects. Over the past year, in fact, we’ve developed a standard project planning template that we apply to each digital project that comes our way. This has done wonders at keeping us all up to date on what stage each project is in and who is up next in terms of the workflow.
Researchers are often experts at planning out their papers, but they don’t normally have much experience with planning a digital project. Metadata and preservation, for example, don’t normally come up for them, so they overlook planning around these aspects. More generally, I’ve found that just having a template to work with can help them understand how the experts do digital projects and give them a sense of the issues they need to consider when planning their own projects, whether that’s building an online exhibit or organizing their selected works in ways that will get the biggest bang for the buck.
We intend to begin formally offering project planning help to faculty very soon.

Platform Selection
It’s also our job to keep abreast of the various technologies available for distributing digital content, whether that is harvesting protocols, web content management systems, new plugins for WordPress or digital humanities exhibit platforms. Sometimes researchers know about some of these, but in my experience, their first choice is not necessarily the best for what they want to do.
It is fairly common for me to meet with campus partners whose existing online collection is published on a platform that is ill-suited to what they are trying to accomplish. Currently, we have many departments moving old content based in SQL databases to plain HTML pages with no database behind them whatsoever. When I show them some of the other options, such as our Digital Commons-based institutional repository or Omeka.net, they often state they had no idea that such options existed and are very excited to work with us.

Metadata
I think people in general are becoming more aware of metadata, but there are still many technical considerations that your typical researcher may not be aware of. At our library, we have helped out with all aspects of metadata. We have helped researchers clean up their data to conform to authorized terms and standard vocabularies. We have explained Dublin Core. We have helped re-encode their data so that diacritics display online. We have done crosswalking and harvesting. It’s a deep area of knowledge and one that few people outside of libraries know on a suitably deep level.
One recommendation I would share with any budding metadata consultant is that you really need to be the Carl Sagan of metadata. This is pretty technical stuff and most people don’t need all the details. Stick to discussing the final outcome rather than the technical details and your help will be far better understood and appreciated. For example, I once presented to a room of researchers on all the technical fixes we made to a database to enhance and standardize the metadata, but this went over terribly. People later came up to me and joked that whatever it was we did, they’re sure it was important, and thanked us for being there. I guess that was a good outcome since they acknowledged our contribution. But it would have been better had they understood the practical benefits for the collection and for the users of that content.

SEO
Search Engine Optimization is not hard, but few people outside of the online marketing and web design world know what it is. I often find people can understand it very quickly if you simply define it as “helping Google understand your content so it can help people find you.” Simple SEO tricks like defining and then using keywords in your headers will do wonders for your collection’s visibility in the major search engines. But you can go deep with this too, so I like to gauge my audience’s appetite and provide only as much detail as I think they want.

Discovery
It’s a sad statement on the state of libraries, but the real discovery game is in the major search engines…not in our siloed, boutique search interfaces. Most people begin their searches (whether academic or not) in Google, and this is really bad news for our digital collections since, by and large, library collections sit in the deep web, beyond the reach of the search robots.
I recently searched Google.com for the title of a digital image in one of our collections and found it. Yay! Then I tried the same search in Google Images. No dice.
More librarians are coming to terms with this discovery problem now and we need to share this with digital scholars as they begin considering their own online collections so that they don’t make the mistakes libraries made (and continue to make…sigh) with our own collections.
We had one department at my institution that was sitting on a print journal that they were considering putting online. Behind this was a desire to bring the publication back to life since they had been told by one researcher in Europe that she thought the journal had been discontinued years ago. Unfortunately, it was still being published, it just wasn’t being indexed in Google. We offered our repository as an excellent place to do so, especially because it would increase their visibility worldwide. Unfortunately, they opted for a very small, non-profit online publisher whose content we demonstrated was not surfacing in Google or Google Scholar. Well, you can lead a horse to water…
Still, I think this kind of understanding of the discovery universe does resonate with many. Going back to our somewhat invisible digital images, we will be pushing many to social media like Flickr with the expectation that this will boost visibility in the image search engines (and social networks) and drive more traffic to our digital collections.

Usability
This one is a tough one because people often come with pre-conceived notions of how they want their content organized or the site designed. For this reason, sometimes usability advice does not go over well. But for those instances when our experiences with user studies and information architecture can influence a digital scholarship project, it’s time well spent. In fact, I often hear people remark that they “never thought of it that way” and they’re willing to try some of the expert advice that we have to share.
Such advice includes things like:
- Best practices for writing for the web
- Principles of information architecture
- Responsive design
- Accessibility support
- User Experience design
Marketing

It’s fitting to end on marketing. This is usually the final step in any digital project and one that often gets dropped. And yet, why do all the work of creating a digital collection only to let it go unnoticed? As digital project experts, librarians are familiar with the various channels available to promote collections and build followers with tools like social networking sites, blogs and the like.
With our own digital projects, we discuss marketing at the very beginning so we are sure all the hooks, timing and planning considerations are understood by everyone. In fact, marketing strategy will impact some of the features of your exhibit, your choice of keywords used to help SEO, the ultimate deadlines that you set for completion and the staffing time you know you’ll need post launch to keep the buzz buzzing.
Most importantly, though, marketing plans can greatly influence the decision for which platform to use. For example, one of the benefits of Omeka.net (rather than self-hosted Omeka) is that any collection hosted with them becomes part of a network of other digital collections, boosting the potential for serendipitous discovery. I often urge faculty to opt for our Digital Commons repository over, say, their personal website, because anything they place in DC gets aggregated into the larger DC universe and has built-in marketing tools like email subscriptions and RSS feeds.
The bottom line here is that marketing is an area where librarians can shine. Online marketing of digital collections really pulls together all of the other forms of expertise that we can offer (our understanding of metadata, web technology and social networks) to fulfill the aim of every digital project: to reach other people and teach them something.
Steve's basic graph is a log-log plot with performance increasing up and to the right. Response time for accessing an object (think latency) decreases to the right on the X-axis and the touch rate, the proportion of the total capacity that can be accessed by random reads in a year (think bandwidth) increases on the Y-axis. For example, a touch rate of 100/yr means that random reads could access the entire contents 100 times a year. He divides the graph into regions suited to different applications, with minimum requirements for response time and touch rate. So, for example, transaction processing requires response times below 10ms and touch rates above 100 (the average object is accessed about once every 3 days).
The touch rate depends on the size of the objects being accessed. If you take a specific storage medium, you can use its specifications to draw a curve on the graph as the size varies. Here Steve uses "capacity disk" (i.e. commodity 3.5" SATA drives) to show the typical curve, which varies from being bandwidth limited (for large objects, the horizontal side on the left) to being response limited (for small objects, the vertical side on the right).
As an example of the use of these graphs, Steve analyzed the idea of MAID (Massive Array of Idle Drives). He used HGST MegaScale DC 4000.B SATA drives, and assumed that at any time 10% of them would be spun-up and the rest would be in standby. With random accesses to data objects, 9 out of 10 of them will encounter a 15sec spin-up delay, which sets the response time limit. Fully powering-down the drives as Facebook's cold storage does would save more power but increase the spin-up time to 20s. The system provides only (actually somewhat less than) 10% of the bandwidth per unit content, which sets the touch rate limit.
Then Steve looked at the fine print of the drive specifications. He found two significant restrictions:
- The drives have a life-time limit of 50K start/stop cycles.
- For reasons that are totally opaque, the drives are limited to a total transfer of 180TB/yr.
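The numbers above can be sketched with some back-of-the-envelope arithmetic. This is a rough illustration, not Steve's actual model: the 4 TB capacity, 10% duty cycle, 15 s spin-up, and 180 TB/yr cap come from the text, while the ~150 MB/s sustained read rate is my assumed figure for a commodity SATA drive.

```python
# Rough touch-rate and response-time estimate for the MAID scenario.
# Capacity, duty cycle, spin-up time, and the transfer cap are from
# the text; the 150 MB/s sustained read rate is an assumption.

SECONDS_PER_YEAR = 365 * 24 * 3600

capacity = 4e12        # bytes per drive (4 TB)
bandwidth = 150e6      # bytes/s sustained read (assumed)
duty_cycle = 0.10      # fraction of drives spun up at any time
spin_up = 15.0         # seconds to spin up a standby drive

# Touch rate: fraction of total content readable per year. Per unit
# content, the array delivers only duty_cycle * bandwidth.
touch_rate = duty_cycle * bandwidth * SECONDS_PER_YEAR / capacity

# Expected response time: 9 of 10 random requests hit a spun-down drive
# and must wait out the spin-up delay.
expected_response = 0.9 * spin_up

# The fine-print transfer cap lowers the touch-rate ceiling further.
touch_rate_capped = min(touch_rate, 180e12 / capacity)

print(round(touch_rate), expected_response, touch_rate_capped)
# Prints roughly: 118 13.5 45.0
```

Under these assumptions the 180 TB/yr restriction, not the 10% duty cycle, is what binds: it cuts the achievable touch rate from about 118/yr down to 45/yr.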
This analysis suggests that traditional MAID is not significantly better than tapes in a robot. Here, for example, Steve examines configurations varying from one tape drive for 1600 LTO6 tapes, or 4PB per drive, to a quite unrealistically expensive 1 drive per 10 tapes, or 60TB per drive. Tape drives have a 120K lifetime load/unload cycle limit, and the tapes can withstand at most 260 full-file passes, so tape has a similar pair of horizontal and vertical lines.
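The tape side yields to the same arithmetic. Here the 1600-tapes-per-drive (4 PB) configuration and the 260-pass media limit are from the text, while the 2.5 TB native cartridge capacity and ~160 MB/s drive rate are my assumed LTO6 figures.

```python
# Rough touch-rate estimate for the densest tape configuration from
# the text: one drive serving 1600 LTO6 tapes (4 PB of content).
# Cartridge capacity and drive rate are assumed LTO6 native figures.

SECONDS_PER_YEAR = 365 * 24 * 3600

tape_capacity = 2.5e12    # bytes per cartridge (assumed LTO6 native)
drive_rate = 160e6        # bytes/s (assumed LTO6 native rate)
tapes_per_drive = 1600

total_content = tapes_per_drive * tape_capacity  # 4e15 bytes = 4 PB

# One drive streaming continuously all year, divided by the content
# it serves, gives the bandwidth-limited touch rate.
touch_rate = drive_rate * SECONDS_PER_YEAR / total_content

# Media endurance is an independent ceiling: at 260 full-file passes,
# a tape can never be read end-to-end more than 260 times a year.
print(round(touch_rate, 2), min(round(touch_rate, 2), 260))
# Prints roughly: 1.26 1.26
```

At roughly 1.26/yr, drive bandwidth rather than the 260-pass media limit constrains this configuration, which is why varying the drives-per-tape ratio moves the limit so dramatically.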
The reason that Facebook's disk-based cold storage doesn't suffer from the same limits as traditional MAID is that it isn't doing random I/O. Facebook's system schedules I/Os so that it uses the full bandwidth of the disk array, raising the touch rate limit to that of the drives, and reducing the number of start-stop cycles. Admittedly, the response time for a random data object is now a worst-case 7 times the time for which a group of drives is active, but this is not a critical parameter for Facebook's application.
Steve's metric seems to be a major contribution to the analysis of storage systems.
I presented a version of this talk at the 2014 Futurebook Conference in London, England. They also kindly featured me in the program. Thank you to The Bookseller for a wonderful conference filled with innovation and intelligent people!
A few days ago, I was in the Bodleian Library at Oxford University, often considered the most beautiful library in the world. My enthusiastic guide told the following story:
After the Reformation when all the books in Oxford were burned, Sir Thomas Bodley decided to create a place where people could go and access all the world’s information at their fingertips, for free.
“What does that sound like?” she asked. “…the Internet?”
While this is a lovely conceit, the part of the story that resonated with me for this talk is the other big change that Bodley made, which was to work with publishers, who were largely a monopoly at that point, to fill his library for free by turning the library into a copyright library. While this seemed antithetical to the ways that publishers worked, in giving a copy of their very expensive books away, they left an indelible and permanent mark on the face of human knowledge. It was not only preservation, but self-preservation.
Bodley was what people nowadays would probably call “an innovator” and maybe even in the parlance of my field, a “community manager.”
By thinking outside of the scheme of how publishing works, he joined together with a group of skeptics and created one of the greatest knowledge repositories in the world, one that still exists more than 400 years later. This speaks to a few issues:
Sharing economies, community, and publishing should and do go hand in hand and have since the birth of libraries. By stepping outside of traditional models, you are creating a world filled with limitless knowledge and crafting it in new and unexpected ways.
The bound manuscript is one of the most enduring technologies. This story remains relevant because books are still books and people are still reading them.
At the same time, things are definitely changing. For the most part, books and manuscripts were pretty much identifiable as books and manuscripts for the past 1000 years.
But what if I were to give Google Maps to a 16th Century Map Maker? Or what if I were to show Joseph Pulitzer Medium? Or what if I were to hand Gutenberg a Kindle? Or Project Gutenberg for that matter? What if I were to explain to Thomas Bodley how I shared the new Lena Dunham book with a friend by sending her the file instead of actually handing her the physical book? What if I were to try to explain Lena Dunham?
These innovations have all taken place within the last twenty years, and I would argue that we haven’t even scratched the surface in terms of the innovations that are to come.
We need to accept that over the past 500 years the printed word has ranged from words on paper to the ereader and the computer, but I want to emphasize that in the 500 years to come, it will more likely range from the ereader to a giant question mark.
International literacy rates have risen rapidly over the past 100 years and companies are scrambling to be the first to reach what they call “developing markets” in terms of connectivity. In the vein of Mark Surman’s talk at the Mozilla Festival this year, I will instead call these economies post-colonial economies.
Because we (as people of the book) are fundamentally idealists who believe that the printed word can change lives, we need to be engaged with rethinking the printed word in a way that recognizes power structures and does not settle for the limited choices that the corporate Internet provides (think Facebook vs WhatsApp). This is not a panacea to fix the world’s ills.
In the Atlantic last year, Phil Nichols wrote an excellent piece that paralleled Web literacy and early 20th century literacy movements. The dualities between “connected” and “non-connected,” he writes, impose the same kinds of binaries and blind cure-all for social ills that the “literacy” movement imposed in the early 20th century. In equating “connectedness” with opportunity, we are “hiding an ideology that is rooted in social control.”
Surman, who is director of the Mozilla Foundation, claims that the Web, which had so much potential to become a free and open virtual meeting place for communities, has started to resemble a shopping mall. While I can go there and meet with my friends, it’s still controlled by cameras that are watching my every move and its sole motive is to get me to buy things.
85 percent of North America is connected to the Internet and 40 percent of the world is connected. Connectivity has increased by 676% in the past 13 years. Studies show that literacy and connectivity go hand in hand.
How do you envision a fully connected world? How do you envision a fully literate world? How can we empower a new generation of connected communities to become learners rather than consumers?
I’m not one of these technology nuts who’s going to argue that books are going to somehow leave their containers and become networked floating apparatuses, and I’m not going to argue that the ereader is a significantly different vessel than the physical book.
I’m also not going to argue that in twenty years we’ll have a world of people who are only Web literate and not reading books. To make any kind of future prediction would be a false prophecy, elitist, and perhaps dangerous.
Although I don’t know what the printed word will look like in the next 500 years,
I want to take a moment to think outside the book,
to think outside traditional publishing models, and to embrace the instantaneousness, randomness, and spontaneity of the Internet as it could be, not as it is now.
One way I want you to embrace the wonderful wide Web is to try to at least partially decouple your social media followers from your community.
Twitter and other forms of social media are certainly a delightful and fun way for communities to communicate and get involved, but your viral campaign, if you have it, is not your community.
True communities of practice are groups of people who come together to think beyond traditional models and innovate within a domain. For a touchstone, a community of practice is something like the Penguin Labs internal innovation center that Tom Weldon spoke about this morning and not like Penguin’s 600,000 followers on Twitter. How can we bring people together to allow for innovation, communication, and creation?
The Internet provides new and unlimited opportunities for community and innovation, but we have to start managing communities and embracing the people we touch as makers rather than simply followers or consumers.
The maker economy is here: participatory content creation has become the norm rather than the exception. You have the potential to reach and mobilize 2.1 billion people and let them tell you what they want, but you have to identify leaders and early adopters, and you have to empower them.
How do you recognize the people who create content for you? I don’t mean authors, but instead the ambassadors who want to get involved and stay involved with your brand.
I want to ask you, in the spirit of innovation from the edges: What is your next platform for radical participation? How are you enabling your community to bring you to the next level? How can you differentiate your brand and make every single person you touch psyched to read your content, together? How can you create a community of practice?
Community is conversation. Your users are not your community.
Ask yourself the question Rachel Fershleiser asked when building a community on Tumblr: Are you reaching out to the people who want to hear from you and encouraging them or are you just letting your community be unplanned and organic?
There comes a point where unplanned organic growth hits its limit. Know when you reach it.
Target, plan, be upbeat, encourage people to talk to one another without your help, and stretch the creativity of your work to its upper limit.
Does this model look different from when you started working in publishing? Good.
As the story of the Bodleian Library illustrated, sometimes a totally crazy idea can be the beginning of an enduring institution.
To repeat, the book is one of the most durable technologies and publishing is one of the most durable industries in history. Its durability has been put to the test more than once, and it will surely be put to the test again. Think of your current concerns as a minor stumbling block in a history filled with success, a history that has documented and shaped the world.
Don’t be afraid of the person who calls you up and says, “I have this crazy idea that may just change the way you work…” While the industry may shift, the printed word will always prevail.
Publishing has been around in some shape or form for 1,000 years. Here’s hoping it’s around for 1,000 more.
On Tuesday, November 18th, the American Library Association (ALA) held a panel discussion on recent judicial interpretations of the doctrine of fair use. The discussion, entitled “Too Good to be True: Are the Courts Revolutionizing Fair Use for Education, Research and Libraries?”, was the first in a series of information policy discussions intended to help us chart the way forward as the ongoing digital revolution fundamentally changes the way we access, process and disseminate information.
These events are part of the ALA Office for Information Technology Policy’s broader Policy Revolution! initiative—an ongoing effort to establish and maintain a national public policy agenda that will amplify the voice of the library community in the policymaking process and position libraries to best serve their patrons in the years ahead.
Tuesday’s event convened three copyright experts to discuss and debate recent developments in digital fair use. The experts—ALA legislative counsel Jonathan Band; American University practitioner-in-residence Brandon Butler; and Authors Guild executive director Mary Rasenberger—engaged in a lively discussion that highlighted some points of agreement and disagreement between librarians and authors.
The library community is a strong proponent of fair use, a flexible copyright exception that permits certain uses of copyrighted works without prior authorization from the rights holder. Whether a particular use is fair is determined by weighing four statutory factors. A number of court decisions issued over the last three years, including Authors Guild v. HathiTrust, have affirmed uses of copyrighted works by libraries as fair, including the mass digitization of books held by research libraries.
Band and Butler disagreed with Rasenberger on several points concerning recent judicial fair use interpretations. Band and Butler described judicial rulings on fair use in disputes like the Google Books case and the HathiTrust case as on-point, and rejected arguments that the reproductions of content at issue in these cases could result in economic injury to authors. Rasenberger, on the other hand, argued that repositories like HathiTrust and Google Books can in fact lead to negative market impacts for authors, and therefore do not represent a fair use.
Rasenberger believes that licensing arrangements should be made between authors and those members of the library, academic and research communities who want to reproduce the content to which those authors hold rights. She takes specific issue with judicial interpretations of market harm that require authors to demonstrate a loss of profits, suggesting that harm can instead be established by showing that future injury to an author is likely as a result of the reproduction of his or her work.
Despite their differences of opinion, the panelists provided those in attendance at Tuesday’s event with some meaningful food for thought, and offered a thorough overview of the ongoing judicial debates over fair use. We were pleased that the Washington Internet Daily published an article “Georgia State Case Highlights Fair Use Disagreement Among Copyright Experts,” on November 20, 2014, about our session. ALA continues to fight for public access to information as these debates play out.
Stay tuned for the next event, planned for early 2015!
The post ALA Washington Office copyright event “too good to be true” appeared first on District Dispatch.
Last year, we reached a milestone at Cherry Hill when we moved all of our projects into a managed deployment system. We have talked about Jenkins, one of the tools we use to manage our workflow, and there has been continued interest in what our "recipe" consists of. Because we use open source tools, and we think of ourselves as part of the (larger than Drupal) open source community, I want to share a bit more of what we use and how it is stitched together. Our hope is that this helps to spark a larger discussion of the tools others are using, so we can all learn from each other.
Git is a distributed code revision control system. While we could use any revision control system, such as CVS or Subversion (and even though this is a given with most agencies, we strongly suggest you use *some* system over nothing at all), git is fairly easy to use, has great...
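For anyone who hasn't used git before, the day-to-day workflow the post alludes to looks roughly like this (a minimal sketch; the repository and file names here are hypothetical, not Cherry Hill's actual setup):

```shell
# Create a new local repository and make a first commit
mkdir demo-project && cd demo-project
git init                                  # initialize the repository
git config user.name  "Demo User"         # identity git records on commits
git config user.email "demo@example.com"
echo "Initial draft" > notes.txt
git add notes.txt                         # stage the change
git commit -m "Add first draft of notes"  # record it in the history
git log --oneline                         # one line per commit so far
```

Because git is distributed, the full history lives in the local `.git` directory; a hosted remote for a tool like Jenkins to pull from is attached later with `git remote add`.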
In a continuation of our weekly facial hair inspiration (check out last week’s list of Civil War mustached men), we recognize that the “Movember” challenge isn’t easy. Growing an impressive beard or mustache, even for a good cause, can be a struggle. Let us help!
This week: A collection of historic mustache must-haves.
- A “mustache-guard” best used “with drinking-cups or goblets, tumblers, and other drinking-vessels.”
- A support group: the “Mustache Club,” 1893.
- A little synthetic help (like this woman wearing a fake ‘stache in a skit).
- A Japanese “mustache-lifter” from the 1920s. Or this stick, which Japanese men used to raise their mustaches while drinking wine.
- A little bit of dye, to keep your mustache a “natural brown or black,” as this advertisement promises.
- A steady reflection.
- A sense of humor (or not, if you aren’t a fan of clowns).
- A nice ride, for regular trips to the barber.
- A theme song.
This week we did a guerrilla-style test to see how (or if) people find our subject guides, particularly if they are not in our main listing. We asked, “Pretend that someone has told you there is a really great subject guide on the library website about [subject]. What would you do to find it?” We cycled through three different subjects not listed on our main subject guide page: Canadian History, Ottawa, and Homelessness.

Some Context
Our subject guides use a template created in-house (not LibGuides), and we use Drupal Views and Taxonomy to create our lists. The main subject guide page has an A-Z list, an autocomplete search box, a list of broad subjects (e.g. Arts and Social Sciences) and a list of narrower subjects (e.g. Sociology). The list of every subject guide is on another page. Subject specialists were not sure whether users would find guides that didn’t correspond to the narrower subjects (e.g. Sociology of Sport).

Results
The 21 students we saw did all kinds of things to find subject guides. We purposely used the same vocabulary as the site itself, because this wasn’t meant to be a test of the label “subject guide.” Even so, fewer than 30% clicked on the Subject Guides link; the majority used some sort of search.
When people used our site search, they had little problem finding the guide (although a typo stymied one person). However, many participants used our Summon search instead. I think there are a couple of reasons for this:
- Students didn’t know what a subject guide was and so looked for guides the way they look for articles, books, etc.
- Students think the Summon search box is for everything
Of the 6 students who did click on the Subject Guides link:
- 2 used broad subjects (and neither was successful with this strategy)
- 2 used narrow subjects (both were successful)
- 1 used the A-Z list (with success)
- 1 used the autocomplete search (with success)
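The reported numbers are easy to sanity-check against the 21 participants; a quick tally (counts taken straight from the list above, variable names are mine):

```shell
# Students who clicked the Subject Guides link, by strategy: attempts/successes
broad=2;    broad_ok=0
narrow=2;   narrow_ok=2
az=1;       az_ok=1
auto=1;     auto_ok=1
total=21

clicked=$((broad + narrow + az + auto))
succeeded=$((broad_ok + narrow_ok + az_ok + auto_ok))
pct=$((clicked * 100 / total))

echo "$clicked of $total clicked the link (~$pct%)"   # under 30%, as reported
echo "$succeeded of $clicked then found the guide"
```

So 6 of 21 (about 29%) used the link at all, and 4 of those 6 succeeded, consistent with the "fewer than 30%" figure above.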
One person thought that she couldn’t possibly find the Ottawa guide under “Subject Guides” because she thought those were only for courses. I found this very interesting because a number of our subject guides do not map directly to courses.
The poor performance of the broad subjects on the subject guide page is an issue and Web Committee will look at how we might address that. Making our site search more forgiving of typos is also going to move up the to-do list. But I think the biggest takeaway is that we really have to figure out how to get our guides indexed in Summon.
Today, the American Library Association (ALA) and its Digital Content Working Group (DCWG) welcomed Simon & Schuster’s announcement that it will allow libraries to opt into the “Buy It Now” program. The publisher began offering all of its ebook titles for library lending nationwide in June 2014, with required participation in the “Buy It Now” merchandising program, which enables library users to directly purchase a title rather than check it out from the library. Simon & Schuster ebooks are available for lending for one year from the date of purchase.
In an ALA statement, ALA President Courtney Young applauded the move:
From the beginning, the ALA has advocated for the broadest and most affordable library access to e-titles, as well as licensing terms that give libraries flexibility to best meet their community needs.
We appreciate that Simon & Schuster is modifying its library ebook program to give libraries a choice in whether or not to participate in Buy It Now. Providing options like these allows libraries to enable digital access while also respecting local norms or policies. This change also speaks to the importance of sustaining conversations among librarians, publishers, distributors and authors to continue advancing our shared goals of connecting writers and readers.
DCWG Co-Chairs Carolyn Anthony and Erika Linke also commented on the Simon & Schuster announcement:
“We are still in the early days of this digital publishing revolution, and we hope we can co-create solutions that expand access, increase readership and improve exposure for diverse and emerging voices,” the co-chairs said. “Many challenges remain, including high prices, privacy concerns, and other terms under which ebooks are offered to libraries. We are continuing our discussions with publishers.”
For more library ebook lending news, visit the American Libraries magazine E-Content blog.
The post ALA welcomes Simon & Schuster change to Buy It Now program appeared first on District Dispatch.