I was pleased to read last week that the National Digital Newspaper Program, which has sponsored the digitization of over 1 million historically significant newspaper pages, has announced that it has expanded its scope to include content published up to 1963, as long as public domain status can be established. I’m excited about this initiative, which will surface content of historic interest that’s in many readers’ living memory. I’ve advocated opening access to serials up to 1963 for a long time, and have worked on various efforts to surface information about serial copyright renewals (like this one), to make it easier to find public domain serial content that can be made freely readable online. (In the US, renewal became automatic for copyrights secured after 1963, making it difficult to republish most newspapers after that date. Up till then, though, there’s a lot that can be put online.)
Copyright in contributions
Clearing copyright for newspapers after 1922 can be challenging, however. Relatively few newspapers renewed copyrights for entire issues; as I noted 10 years ago, none outside of New York City did so before the end of World War II. But newspapers often aggregate lots of content from lots of sources, and as far as I can tell, determining the copyright status of those various pieces of content is necessary as well. While section 201(c) of copyright law normally gives copyright holders of a collective work, such as a magazine or newspaper, the right to republish contributions as part of that work, people digitizing a newspaper that didn’t renew its own copyright aren’t usually copyright holders for that newspaper. (I’m not a lawyer, though; if any legal experts want to argue that digitizing libraries get similar republication rights as the newspaper copyright holders, feel free to comment.)
As I mentioned in my last post, we at Penn are currently going through the Catalog of Copyright Entries to survey which periodicals have contributions with copyright renewals, and when those renewals started. (My previous post discussed this in the context of journals, but the survey covers newspapers as well.) Most of the contributions in the section we’re surveying are text, and we’ve now comprehensively surveyed up to 1932. In the process, we’ve found a number of newspapers that had copyright-renewed text contributions, even when they did not have copyright-renewed issues. The renewed contributions are most commonly serialized fiction (which was more commonly run in newspapers decades ago than it is now). Occasionally we’ll see a special nonfiction feature by a well-known author renewed. I have not yet seen any contribution renewals for straight news stories, though, and most newspapers published in the 1920s and early 1930s have not made any appearance in our renewal survey to date. I’ll post an update if I see this pattern changing; but right now, if digitizers are uncertain about the status of a particular story or feature article in a newspaper, searching for its title and author in the Catalog of Copyright Entries should suffice to clear it.
Photographs and advertisements
Newspapers contain more than text, though. They also include photos, as well as other graphical elements, which often appear in advertisements. It turns out, however, that the renewal rate for images is very low, and the renewal rate for “commercial prints”, which include advertisements, is even lower. There isn’t yet a searchable text file or database for these types of copyright renewals (though I’m hoping one can come online before long, with help from Distributed Proofreaders), and in any case, images typically don’t have unambiguous titles one can use for searching. However, most news photographs were published just after they were taken, and therefore they have a known copyright year and specific years in which a renewal, if any, should have been filed. It’s possible to go through the complete artwork and commercial-print renewal listings for any given year, get an overview of all the renewed photos and ads that exist, and look for matches. (It’s a little cumbersome, but doable, with page images of the Catalog of Copyright Entries; it will be easier once there are searchable, classified transcriptions of these pages.)
Fair use arguments may also be relevant. Even in the rare case where an advertisement was copyright-renewed, or includes copyright-renewed elements (like a copyrighted character), an ad in the context of an old newspaper largely serves an informative purpose, and presenting it there online doesn’t typically take away from the market for that advertisement. As far as I can tell, what market exists for ads mostly involves relicensing them for new purposes such as nostalgia merchandise. For that matter, most licensed reuses of photographs I’m aware of involve the use of high-resolution original prints and negatives, not the lower-quality copies that appear on newsprint (and that could be made even lower-grade for purposes of free display in a noncommercial research collection, if necessary). I don’t know if NDNP is planning to accommodate fair use arguments along with public domain documentation, but they’re worth considering.
Syndicated and reprinted content: A thornier problem
Many newspapers contain not only original content, but also content that originated elsewhere. This type of content comes in many forms: wire-service stories and photos, ads, and syndicated cartoons and columns. I don’t yet see much cause for concern about wire news stories; typically they originate in a specific newspaper, and would normally need to be renewed with reference to that newspaper. And at least as far as 1932, I haven’t yet seen any straight news stories renewed. Likewise, I suspect wire photos and national ads can be cleared much like single-newspaper photos and ads can be.
But I think syndicated content may be a stickier issue. Syndicated comics and features grew increasingly popular in newspapers over the 20th century, and there’s still a market for some content that goes back a long way. For instance, the first contribution renewal for the Elizabethan Star, dated September 8, 1930, is the very first Blondie comic strip. That strip soon became wildly popular, carried by thousands of newspapers across the country. It still enjoys a robust market, with its official website noting it runs in over 2000 newspapers today. Moreover, its syndicator, King Features, also published weekly periodicals of its own, with issues as far back as 1933 renewed. (As far as I can tell, it published these for copyright purposes, as very few libraries have them, but according to WorldCat an issue “binds together one copy of each comic, puzzle, or column distributed by the syndicate in a given week”. Renew that, and you renew everything in it.) King Features remains one of the largest syndicators in the world. Most major newspapers, then, include at least some copyrighted (and possibly still marketable) material going at least as far back as the early 1930s.
Selective presentation of serial content
The most problematic content of these old newspapers from a copyright point of view, though, is probably the least interesting content from a researcher’s point of view. Most people who want to look at a particular locale’s newspaper want to see the local content: the news its journalists reported, the editorials it ran, the ads local businesses and readers bought. The material that came from elsewhere, and ran identically in hundreds of other newspapers, is of less research interest. Why not omit that, then, while still showing all the local content?
This should be feasible given current law and technology. We know from the Google and HathiTrust cases that fair use allows completely copyrighted volumes to be digitized and used for certain purposes like search, as long as users aren’t generally shown the full text. And while projects like HathiTrust and Chronicling America now typically show all the pages they scan, commonly used digitized-newspaper software can either highlight or blank out not only specific pages but even the specific sections of a page in which a particular article or image appears.
This gives us a path forward for providing access to newspapers up to 1963 (or whatever date the paper started being renewed in its entirety). Specifically, a library digitization project can digitize and index all the pages, but then only expose the portions of the issues it’s comfortable showing given its copyright knowledge. It can summarize the parts it’s omitting, so that other libraries (or other trusted collaborators) can research the parts it wasn’t able to clear on its own. Sections could then be opened up as researchers across the Internet found evidence to clear up their status. Taken as a whole, it’s a big job, but projects like the Copyright Review Management System show how distributed copyright clearance can be feasibly done at scale.
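As a rough illustration of this selective-display idea, here is a minimal sketch in Python. All the field names, region kinds, and clearance statuses are hypothetical, invented for illustration; they are not drawn from any actual newspaper-digitization software:

```python
# Hypothetical model of per-region copyright clearance on a digitized page.
# Each region is an article, image, or ad with a clearance status; only
# regions marked "cleared" are exposed to readers, while the rest are
# summarized so other libraries or collaborators can research them later.

def displayable_regions(regions):
    """Split a page's regions into those to show and those to withhold."""
    shown = [r for r in regions if r["status"] == "cleared"]
    withheld = [r for r in regions if r["status"] != "cleared"]
    return shown, withheld

page = [
    {"id": "a1", "kind": "local-news", "status": "cleared"},
    {"id": "a2", "kind": "syndicated-comic", "status": "renewal-found"},
    {"id": "a3", "kind": "local-ad", "status": "needs-research"},
]

shown, withheld = displayable_regions(page)
print([r["id"] for r in shown])      # ['a1'] - safe to display
print([r["id"] for r in withheld])   # ['a2', 'a3'] - summarize for later clearance
```

As evidence turned up by researchers clears a region's status, flipping it to "cleared" would open that section up without re-scanning anything.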
Moreover, if we can establish a workable clearance and selective display process for US newspapers, it will probably also work for most other serials published in the US. Most of them, whether magazines, scholarly journals, conference proceedings, newsletters, or trade publications, are no more complicated in their sources and structures than newspapers are, and they’re often much simpler. So I look forward to seeing how this expansion in scope up to 1963 works out for the National Digital Newspaper Program. And I hope we can use their example and experience to open access to a wider variety of serials as well.
Open Knowledge Foundation: Open Access: Why do scholarly communication platforms matter and what is the true cost of gold OA?
During the past 2.5 years Open Knowledge has been a partner in PASTEUR4OA, a project focused on aligning open access policies for European Union research. As part of the work, a series of advocacy resources was produced that stakeholders can use to promote the development and reinforcement of such open access policies. The final two briefing papers, written by Open Knowledge, have been published this week and deal with two pressing issues around open access today: the financial opacity of open access publishing and its potentially harmful effects on the research community, and the expansion of open and free scholarly communication platforms in the academic world – explaining the new dependencies that may arise from those platforms and why this matters for the open access movement.
Revealing the true cost of gold OA
“Reducing the costs of readership while increasing access to research outputs” has been a rallying cry for open access publishing, or Gold OA. Yet the Gold OA market is largely opaque, which makes it hard to evaluate how the costs of readership actually develop. Data on both the costs of subscriptions (for hybrid OA journals) and of APCs are hard to gather, and even when they can be obtained, they offer only partial and very different insights into the market. This is a problem for efficient open access publishing. Funders, institutions, and individual researchers are therefore increasingly concerned that a transition to Gold OA could leave the research community open to exploitative financial practices and prevent effective market coordination.
Which factors contribute to the current opacity in the market? Which approaches are taken to foster financial transparency of Gold OA? And what are recommendations to funders, institutions, researchers and publishers to increase transparency?
The paper Revealing the true costs of Gold OA – Towards a public data infrastructure of scholarly publishing costs, written by researchers of Open Knowledge International, King’s College London and the University of London, presents the current state of financial opacity in scholarly journal publishing. It describes what information is needed in order to obtain a bigger, more systemic picture of financial flows, and to understand how much money is going into the system, where this money comes from, and how these financial flows might be adjusted to support alternative kinds of publishing models.
Why do scholarly communication platforms matter for open access? Over the past two decades, open access advocates have made significant gains in securing public access to the formal outputs of scholarly communication (e.g. peer reviewed journal articles). The same period has seen the rise of platforms from commercial publishers and technology companies that enable users to interact and share their work, as well as providing analytics and services around scholarly communication.
How should researchers and policymakers respond to the rise of these platforms? Do commercial platforms necessarily work in the interests of the scholarly community? How and to what extent do these proprietary platforms pose a threat to open scholarly communication? What might public alternatives look like?
The paper Infrastructures for Open Scholarly Communication provides a brief overview of the rise of scholarly platforms – describing some of their main characteristics as well as debates and controversies surrounding them. It argues that in order to prevent new forms of enclosure, it is essential that public policymakers be concerned with the provision of public infrastructures for scholarly communication as well as public access to the outputs of research. It concludes with a review of some of the core elements of such infrastructures, as well as recommendations for further work in this area.
Catherine E. Kerrigan
Recent postings from ACRL indicate that the library world is paying more attention than ever to demonstrating the impact we have on student learning, faculty productivity, serving our communities, and the overall missions of our institutions. Megan Oakleaf has written extensively on this issue, and her work revolves around the way we can try to make connections between assessment efforts and student learning, among other things.
Blame shrinking budgets, clueless campus administrators, or just a failure to share the great work we do, but we are all faced with the reality of validating our role on our respective campuses in one way or another. I don’t want to get into the merits of such an argument, but rather to offer a possible solution to this issue (one of many options, to be sure).
Setting aside annual reports, which are at best long-winded and most likely end up in a forgotten file folder, chances are we have only a few brief opportunities to communicate that which is very difficult to encapsulate, much less quantify. So how can you pack that proverbial punch? Enter the increasingly popular infographic. At OSU, we’ve embarked on an ambitious project to do just that, and we are in the throes of deciding how best to harness the power of such a tool for our purposes.
There are really two broad issues to take into consideration if you would like to use this type of tool: what to include and how to design for maximum impact.
First, you’ll need to think about the information you want to collect, both quantitative and qualitative. A good Google Form, Excel spreadsheet, or Springshare’s LibAnalytics will do the trick. But beware: things may not be as simple as they appear. Numbers are easy: put in a 3 or a 10 and off you go. What’s harder to capture is the story behind that figure. Make sure that all of your quantitative data have a qualitative equivalent, which is where defining your categories comes into play. For example, if you want to capture how many successful consultations librarians averaged in a given year, make sure everyone understands exactly what you mean by that term. Some may interpret it as all the reference questions they answer, others may report only appointment-based interactions, while others still might think it relates only to a particular user group.
In addition, whatever non-numerical information you capture should be able to answer the question “So what?” If you can’t determine its importance, chances are someone outside the library won’t either, no matter how much you try to explain it. Ideally, whatever categories you select match your library or institutional strategic goals (or both), so that you can directly correlate them to the areas that are important on a broader level and aggregate individual efforts into a composite snapshot for the semester or the year. This section will allow you to tell that ever-important story and show how the numbers are actually meaningful. The recent article by Anne Kenney speaks more directly to liaison work, but her insights can easily be extrapolated to more general terms. In other words, focus on the impact of the activity rather than merely measuring its existence.
Which leads me to the next point: whatever data you decide to capture, start by actually capturing it! You can have the most perfect form in the world, but if no one is filling it out, it’s pointless. Consistency is also key, and for this you may need a department head or library administration to help nudge participation in the right direction. But even some data, however incomplete, is better than none at all. You can always build on your efforts, but you have to start somewhere and establish that initial benchmark.
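One way to keep category definitions consistent is to enforce a shared vocabulary at data-entry time, so that “consultation” means the same thing to everyone. Here is a minimal sketch; the category names and field names are invented for illustration, not taken from LibAnalytics or any particular form tool:

```python
# Hypothetical controlled vocabulary for recording librarian interactions.
# Validating entries against an agreed-upon category list keeps everyone's
# numbers comparable, and requiring a qualitative note pairs each count
# with the story behind it.

CATEGORIES = {"reference-question", "appointment-consultation", "instruction-session"}

def record_interaction(log, category, count, story):
    """Append an entry only if it uses an agreed-upon category and
    includes a qualitative note answering 'so what?'."""
    if category not in CATEGORIES:
        raise ValueError(f"Unknown category: {category!r}")
    if not story:
        raise ValueError("Every count needs a qualitative note")
    log.append({"category": category, "count": count, "story": story})
    return log

log = []
record_interaction(log, "appointment-consultation", 3,
                   "Helped seniors finish capstone literature reviews")
print(len(log))  # 1
```

An entry with an undefined category (say, "coffee-chat") would be rejected rather than silently diluting the data.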
Formatting and creating the infographic is just as important as what’s in it. Luckily, there are several free tools out there which help to make this work a little easier. Whichever you choose, start by asking yourself a few questions:
- What actions/learning are you trying to enable? Do you want to simply inform or perhaps persuade?
- What questions are you trying to answer?
- What do you want to show? What story are you trying to tell?
- Who is your audience? What are their priorities and level of knowledge about your information?
- What key information do you want to relay? Where do you want the reader to focus and on what?
Knowing the answers to these questions will help you decide on layout and formatting choices. Keep things simple and choose complementary colors. Make sure the infographic is easy to print out and can be viewed online just as easily; try to avoid making it so long that readers have to scroll endlessly to see everything. And most importantly, keep trying!
Yesterday, S. 2893, legislation introduced by Senator Schumer, passed! It authorizes the National Library Service for the Blind and Physically Handicapped (NLS) to extend its service by providing refreshable Braille display devices to NLS users. Previously, NLS could only supply Braille books in print, which are expensive to produce and costly to ship. The NLS did have the capability of sending Braille files to users, but many could not afford the refreshable Braille display devices. Braille readers, popular with many people with print disabilities, let users read Braille from a device connected to a computer keyboard. With so much content now displayed on a computer screen, Braille readers are indispensable. Isn’t technology cool?
Kudos to Senator Schumer for acting on a recommendation from the Government Accountability Office (GAO) in its recent report, “Library Services For Those With Disabilities: Additional Steps Needed to Ease Services and Modernize Technology.” To “give NLS the opportunity to provide braille in a modernized format and potentially achieve cost savings,” the GAO wrote, “Congress should consider amending the law to allow the agency to use federal funds to provide its users playback equipment for electronic braille files (i.e., refreshable braille devices).”
The VIAF API is undergoing enhancements in an upcoming install scheduled for July 19, 2016.
“Yeas 74, Nays 18”: with those few magic words yesterday, Dr. Carla Hayden was confirmed overwhelmingly by the United States Senate to serve as the nation’s 14th Librarian of Congress. ALA strongly endorsed Dr. Hayden’s nomination, worked hard for her confirmation as an organization, and is proud to have enabled tens of thousands of Americans (librarians and many others alike) to communicate their pride in and support of Dr. Hayden to their Senators.
Today’s magic words are the ones that our parents first acquainted us with – “thank you.” Too often, in the heat of legislative debate and public advocacy, they’re forgotten, but not by librarians and the people who support what (and who) we stand for. Today, keep calling, emailing, and Tweeting the Senators who voted “Yea” to confirm Dr. Hayden (complete list by state below), and no matter where you live, also thank:
- Senate Majority Leader Mitch McConnell for initiating and enabling yesterday’s historic vote;
- Senate Majority Whip John Cornyn for influentially supporting Dr. Hayden with his vote;
- The Rules Committee’s indefatigable staff and leadership, Chairman Roy Blunt and Ranking Member Chuck Schumer; and, by no means least,
- Dr. Hayden’s biggest boosters in the Senate, her home state of Maryland’s Senators Barbara Mikulski and Ben Cardin.
Dr. Hayden’s nomination, Rules Committee vetting, hearing, and ultimate consideration on the floor of the Senate were, appropriately, not partisan. They were done right, done fairly, and done well, and the nation will benefit for a decade from that model process.
Saying “thank you” is appropriate, easy, and it’s the right thing to do. Please, pass it on proudly and loudly – #HaydenISLoC
The post Thank your Senators for the new Librarian of Congress appeared first on District Dispatch.
Great to hear Koha’s Nicole Engard and Brendan Gallagher interviewed on FLOSS Weekly episode 236 talking about the integrated library system. Six (!) years ago Evergreen was on FLOSS Weekly episode 132, with Mike Rylander and the rich radio-friendly baritone voice of Ontario’s own Dan Scott explaining about the other free and open ILS written in Perl.
Austin, TX – The peak of summer is also the mid-point of the annual DuraSpace Membership Campaign. Many thanks to those in our community who have become 2016 DuraSpace Members. We are pleased to report that we are within reach of our Membership Campaign goal of $1,250,000. Financial contributions come from our members, registered service providers, and our corporate sponsors.
This afternoon, the Senate voted to confirm Dr. Carla Hayden as the 14th Librarian of Congress! Dr. Hayden will be the first professional librarian to hold the position in over 40 years, as well as the first woman and first African American Librarian of Congress.
You can join our celebration on social media (#HaydenISLoC) and by taking a moment to thank the 74 Senators who voted to confirm Dr. Hayden!
The post Hayden confirmed as the 14th Librarian of Congress appeared first on District Dispatch.
CRRA Update Spring 2016
(December, January, February)
Please see the PDF for the more visually rich version.
Open Knowledge Foundation: Why Open Source Software Matters for Government and Civic Tech – and How to Support It
Today we’re publishing a new research paper looking at whether free/open source software matters for government and civic tech. Matters in the sense that it should have a deep and strategic role in government IT and policy rather than just being a “nice to have” or something “we use when we can”.
As the paper shows, the answer is a strong yes: open source software does matter for government and civic tech — and, conversely, government matters for open source. The paper covers:
- Why open software is especially important for government and civic tech
- Why open software needs special support and treatment by government (and funders)
- What specific actions can be taken to provide this support for open software by government (and funders)
We also discuss how software is different from other things that governments traditionally buy or fund. This difference is why government cannot buy software the way it buys office furniture or procures the building of bridges — and why buying open matters so much.
The paper is authored by our President and Founder Dr Rufus Pollock.
Read the Full Version of the Paper Online »
Download PDF Version of the paper »
Discussion and Comments »
Why Open Software
We begin with four facts about software and government which form a basis for the conclusions and recommendations that follow.
1. The economics of software: software has high fixed costs and low (zero) marginal costs, and it is also incremental, in that new code builds on old. This cost structure creates a fundamental dilemma between funding the fixed costs, e.g. by making software proprietary and raising prices, and promoting optimal access by setting the price at the marginal-cost level of zero. In resolving this dilemma, proprietary software models favour the funding of fixed costs but at the price of inefficiently raised prices and hampered future development, whilst open source models favour efficient pricing and access but face the challenge of funding the fixed costs to create high-quality software in the first place. The incremental nature of software sharpens this dilemma and contributes to technological and vendor lock-in.
2. Switching costs are significant: it is (increasingly) costly to switch off a given piece of software once you start using it. This is because you make “asset (software) specific investments”: in learning how to use the software, integrating the software with your systems, extending and customizing the software, etc. These all mean there are often substantial costs associated with switching to an alternative later.
3. The future matters and is difficult to know: software is used for a long time — whether in its original or upgraded form. Knowing the future is therefore especially important in purchasing software. Predictions about the future are especially hard for software because of its complex nature and adaptability, and behavioural biases mean the level of uncertainty and likely future change are underestimated. Together these mean lock-in is under-estimated.
4. Governments are bad at negotiating, especially in this environment, and hence the lock-in problem is especially acute for government. Governments are generally poor decision-makers and bargainers due to the incentives faced by government as a whole and by individuals within government. They are especially weak when having to make trade-offs between the near term and the more distant future, and weaker still when the future is complex, uncertain, and hard to specify contractually up front. Software procurement has all of these characteristics, making it particularly prone to error compared to other government procurement areas.
Note: numbers in brackets e.g. (1) refer to one of the four observations of the previous section.
A. Lock-in to Proprietary Software is a Problem
- Incremental nature of software (1) + switching costs (2) → lock-in happens for a software technology and, if it is proprietary, to a vendor.
- Zero marginal cost of software (1) + uncertainty about the future in user needs and technologies (3) + governments are poor bargainers (4) → lock-in has high costs and is under-estimated, especially so for government.
- Together → lock-in to proprietary software is a problem.
B. Open Source is a Solution
- Lock-in is a problem → strategies that reduce lock-in are valuable.
- Economics of software (1) → open source is a strategy for government (and others) to reduce future lock-in. Why? Because it requires the software provider to make an up-front commitment to making the essential technology available to both users and other technologists at zero cost, both now and in the future.
- Together, these two points → open source is a solution, and a specific commitment to open source in government / civic tech is important and valuable.
C. Open Source Needs Support
And Government / Civic Tech is an area where it can be provided effectively
- Software has high fixed costs, and a challenge for open source is to secure sufficient investment to cover these fixed costs (1 – economics).
- Governments are large spenders on IT and are bureaucratic: they can make rules to pre-commit up front (e.g. in procurement) and can feasibly coordinate, whether at local, national, or even international levels, on buying and investment decisions related to software.
- Government is therefore especially well situated to support open source, and has the tools to provide systematic support → government should provide systematic support.
We have established in the previous section that there is a strong basis for promoting open software. This section provides specific strategic and tactical suggestions for how to do that. There are five proposals, summarized here; each is covered in more detail in the main section below. We especially emphasize the potential of the last three options, as they do not require up-front participation by government and can be boot-strapped with philanthropic funding.
1. Recognize and reward open source in IT procurement.
Give open source explicit recognition and beneficial treatment in procurement. Specifically, introduce into government tenders: EITHER an explicit requirement for an open source solution OR a significant points value for open source in the scoring for solutions (more than 30% of the points on offer).
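To make the weighting concrete, here is a toy scorecard under the 30%-of-points assumption. The criteria, weights, and ratings are invented purely for illustration; they are not from any actual procurement framework:

```python
# Toy procurement scorecard where open source carries 30 of 100 points,
# as the proposal suggests, alongside conventional criteria.

WEIGHTS = {"functionality": 40, "price": 30, "open_source": 30}

def score(proposal):
    """Sum the weighted criterion scores; each criterion is rated 0.0-1.0."""
    return sum(WEIGHTS[k] * proposal[k] for k in WEIGHTS)

# A slightly stronger proprietary bid vs. a fully open competitor:
proprietary = {"functionality": 1.0, "price": 0.8, "open_source": 0.0}
open_bid    = {"functionality": 0.9, "price": 0.9, "open_source": 1.0}

print(score(proprietary))  # 64.0
print(score(open_bid))     # 93.0
```

With a weighting that large, an open solution can overcome modest deficits on other criteria, which is the point of giving it "significant points value" rather than a token bonus.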
2. Make government IT procurement more agile and lightweight.
Current methodologies follow a “spec and deliver” model, in which government attempts to define a full spec up front and then seeks solutions that deliver against it. This model greatly diminishes the value of open source – which allows for rapid iteration in the open and more rapid switching of provider – and implicitly builds in lock-in to the selected provider, whose solution is a black box to the buyer. In addition, whilst theoretically shifting risk to the supplier of the software, given the difficulty of specifying software up front it really just inflates upfront costs (since the supplier has to price in risk) and sets the scene for complex and cumbersome later negotiations about under-specified elements.
3. Develop a marketing and business development support organization for open source in key markets (e.g. US and Europe).
The organization would be small, at least initially, and focused on three closely related activity areas (in rough order of importance):
- General marketing of open source to government at both local and national level: getting in front of CIOs, explaining open source, demystifying and de-risking it, making the case, etc. This is not tied to any specific product or solution.
- Supporting open source businesses, especially those at an early stage, in initial business development activities, including connecting startups to potential customers (“opening the rolodex”) and offering guidance in navigating the bureaucracy of government procurement, including discovering and responding to RFPs.
- Promoting commercialization of open source by providing advice, training, and support for open source startups and developers in commercializing and marketing their technology. Open source developers and startups are often strong on technology and weak on marketing and selling their solutions, and this support would help address those deficiencies.
4. Open Offsets: establish target levels of open source financing, combined with an “offsets”-style scheme to discharge these obligations.
An “Open Offsets” program would combine three components:
- Establish target commitments for funding open source for participants in the program, who could include government, philanthropists, and the private sector. Targets would be a specific, measurable figure, like 20% of all IT spending or $5m.
- Participants discharge their funding commitment either through direct spending, such as procurement or sponsorship, or via purchase of open source “offsets”. Offsets enable organizations to discharge their open source funding obligation in a manner analogous to the way carbon offsets allow groups to deliver on their climate change commitments.
- Administrators of the open offsets fund distribute the funds to relevant open source projects and communities in a transparent manner, likely using some combination of expert advice, community voting, and value generated (the latter based on an estimate of the usage and value created by given pieces of open software).
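The arithmetic of such a commitment is simple enough to sketch. In this hypothetical example, the budget figures and function names are invented for illustration only:

```python
# Hypothetical tracking of an Open Offsets commitment: a participant pledges
# a share of IT spending to open source, discharges part of it through
# direct procurement or sponsorship, and covers the rest by buying offsets.

def offsets_due(it_spend, target_share, direct_open_spend):
    """Remaining obligation to be met by purchasing offsets (never negative)."""
    obligation = it_spend * target_share
    return max(0.0, obligation - direct_open_spend)

# A government with a $10m IT budget and a 20% open source target that has
# already spent $1.2m directly on open source solutions:
print(offsets_due(10_000_000, 0.20, 1_200_000))  # 800000.0
```

A participant whose direct open source spending already meets or exceeds its target would owe nothing in offsets, mirroring how carbon-offset schemes treat direct emissions reductions.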
5. “Choose Open”: a grass-roots oriented campaign to promote open software in government and government run activities such as education.
"Choose Open" would be modelled on recent initiatives in online political organizing, such as "Move On" in the 2004 US Presidential election, as well as online initiatives like Avaaz. It would combine central provision of message, materials and policy with localized community participation to drive change.

Read the Full Version of the Paper Online »
Download PDF Version of the paper »
Discussion and Comments »
In an earlier post I speculated about the plateau in ebook adoption. According to recent statistics from publishers we are now actually seeing a decline in ebook sales after a period of growth (and then the leveling off that I discussed before). Here’s my guess about what’s going on—an educated guess, supported by what I’m hearing from my sources and network.
First, re-read my original post. I believe it captured a significant part of the story. A reminder: when we hear about ebook sales, we mostly hear about sales from large publishers, and I have no doubt that ebooks are a troubled part of their sales portfolios. But there are many more ebooks than those reported by the publishers that release their stats, and many other ways to acquire them, so there's a good chance that considerable "dark reading" (as I called it) accounts for the disconnect between surveys that say e-reading is growing while sales (again, from the publishers that reveal these stats) are declining.
The big story I now perceive is a bifurcation of the market between what used to be called high and low culture. For genre fiction (think sexy vampires) and other genres where there is a lot of self-publishing, readers seem to be moving to cheap (often 99 cent) ebooks from Amazon’s large and growing self-publishing program. Amazon doesn’t release its ebook sales stats, but we know that they already have 65% of the ebook market and through their self-publishing program may reach a disturbing 90% in a few years. Meanwhile, middle- and high-brow books for the most part remain at traditional publishers, where advances still grease the wheels of commerce (and writing).
Other changes I didn't discuss in my last post are also happening that impact ebook adoption. Audiobook sales rose by an astonishing 40% over the last year, a notable story that likely impacts ebook growth; for the vast majority of smartphone owners, audiobooks and ebooks are substitutes (see also the growth in podcasts). In addition, ebooks have gotten more expensive in the past few years, while print (especially paperback) prices have become more competitive; for many consumers, a simple Econ 101 assessment of pricing accounts for the ebook stall.
I also failed to account in my earlier post for the growing buy-local movement that has impacted many areas of consumption—see vinyl LPs and farm-to-table restaurants—and is, in part, responsible for the turnaround in bookstores—once dying, now revived—an encouraging trend pointed out to me by Oren Teicher, the head of the American Booksellers Association. These bookstores were clobbered by Amazon and large chains late last decade but have recovered as the buy-local movement has strengthened and (more behind the scenes, but just as important) they adopted technology and especially rapid shipping mechanisms that have made them more competitive.
Personally, I continue to read in both print and digitally, from my great local public library and from bookstores, and so I’ll end with an anecdotal observation: there’s still a lot of friction in getting an ebook versus a print book, even though one would think it would be the other way around. Libraries still have poor licensing terms from publishers that treat digital books like physical books that can only be loaned to one person at a time despite the affordances of ebooks; ebooks are often not that much cheaper, if at all, than physical books; and device-dependency and software hassles cause other headaches. And as I noted in my earlier post, there’s still not a killer e-reading device. The Kindle remains (to me and I suspect many others) a clunky device with a poor screen, fonts, etc. In my earlier analysis, I probably also underestimated the inertial positive feeling of physical books for most readers—which I myself feel as a form of consumption that reinforces the benefits of the physical over the digital.
It seems like all of these factors—pricing, friction, audiobooks, localism, and traditional physical advantages—are combining to restrict the ebook market for “respectable” ebooks and to shift them to Amazon for “less respectable” genres. It remains to be seen if this will hold, and I continue to believe that it would be healthy for us to prepare for, and create, a better future with ebooks.
Austin, TX: DuraSpace is pleased to announce the launch of a new DuraCloud web site: http://duracloud.org. The site makes it easy to request a customized DuraCloud quote or to create a free trial account. Simple navigation points users to more information about the service and its four different subscription plans. Please let us know what you think!
We are excited to announce that the second face-to-face Mashcat event in North America will be held on January 24th, 2017, in downtown Atlanta, Georgia, USA. We invite you to save the date. We will be sending out a call for session proposals and opening up registration in the late summer and early fall.
Not sure what Mashcat is? "Mashcat" was originally an event in the UK in 2012 aimed at bringing together people working on the IT systems side of libraries with those working in cataloguing and metadata. Four years later, Mashcat is a loose group of metadata specialists, cataloguers, developers and anyone else with an interest in how metadata in and around libraries can be created, manipulated, used and re-used by computers and software. The aim is to work together and bridge the communications gap that has sometimes gotten in the way of building the best tools we possibly can to manage library data. Among our accomplishments in 2016 were holding the first North American face-to-face event in Boston in January and running webinars. If you're unable to attend a face-to-face meeting, we will be holding at least one more webinar in 2016.
Thanks for considering, and we hope to see you in January.
Register now for the 2016 LITA Forum
Fort Worth, TX
November 17-20, 2016
Join us in Fort Worth, Texas, at the Omni Fort Worth Hotel located in Downtown Fort Worth, for the 2016 LITA Forum, a three-day education and networking event featuring 2 preconferences, 3 keynote sessions, more than 55 concurrent sessions and 25 poster presentations. It’s the 19th annual gathering of the highly regarded LITA Forum for technology-minded information professionals. Meet with your colleagues involved in new and leading edge technologies in the library and information technology field. Registration is limited in order to preserve the important networking advantages of a smaller conference. Attendees take advantage of the informal Friday evening reception, networking dinners and other social opportunities to get to know colleagues and speakers.
Keynote speakers:
- Cecily Walker, Vancouver Public Library
- Waldo Jaquith, U.S. Open Data
- Tara Robertson, @tararobertson
Preconference workshops:
- Librarians can code! A "hands-on" computer programming workshop just for librarians
- Letting the Collections Tell Their Story: Using Tableau for Collection Evaluation
Comments from past attendees:
“Best conference I’ve been to in terms of practical, usable ideas that I can implement at my library.”
“I get so inspired by the presentations and conversations with colleagues who are dealing with the same sorts of issues that I am.”
“After LITA I return to my institution excited to implement solutions I find here.”
“This is always the most informative conference! It inspires me to develop new programs and plan initiatives.”
See you in Fort Worth.
There are probably a hundred reasons why the Senate should immediately vote, and unanimously at that, to confirm Dr. Carla Hayden to serve as the next Librarian of Congress. With the clock ticking down to zero this week on its pre-recess calendar, here are our top ten reasons for the Senate to award her the job now:
- She brought Baltimore’s large historic library system into the 21st Century and she’ll do the same for the Library of Congress.
- The nation’s Library has been led by a library professional three times before in its history; its technology and organizational needs demand a fourth now.
- The Senate Rules Committee approved her without dissent by voice vote.
- Every state and major national library association in America strongly backs her confirmation.
- University of Chicago PhDs don’t come in cereal boxes.
- Breathing new life into the Library of Congress demands Dr. Hayden’s deep understanding of technology, opportunity and community.
- The world’s greatest library deserves to be led by one of Fortune Magazine’s 50 “World’s Greatest Leaders” for 2016.
- Congress and the public it serves need the best possible librarian as the Librarian.
- It’s hard to find anything or anyone else that the Copyright Alliance and Internet Association agree on.
- “Vacancy” is the sign you want to see on a motel marquee at the end of a long drive, not on the Librarian of Congress’ chair at the beginning of a new Congress.
Ask your Senators to confirm Dr. Carla Hayden today – visit the Action Center for additional talking points and pre-written tweet messages.
The post Ten reasons to confirm Dr. Hayden for Librarian of Congress appeared first on District Dispatch.
UI/UX Assets, which creates design assets and resources for user interface and user experience designers, makes available these really useful flowchart cards designed by Johan Netzler. These are common design patterns you can use to think through the design and flow of a site. Super handy.
I love this kind of stuff. Here, I pieced together an idea for the homepage of a public library.
128 UX flowchart cards. Perfect tool for creating user journeys and UX flows using Sketch. Not only does it come with hundreds of elements, it is as always extremely well organized. Each card follows a flexible grid and a strict layer structure, creating consistency across all cards. This is a perfect instrument to make your ideas minimal, readable and easy to follow.
UX Flowchart Cards on UI/UX Assets
DPLA Welcomes Denise Stephens and Mary Minow to Board, Honors Departing Paul Courant and Laura DeBonis
On July 1, 2016, the Digital Public Library of America had several transitions on its Board of Directors. Two of our original board members rotated off the board at the end of their second terms, and two new board members joined in their stead. We wish to salute the critical roles that Paul Courant and Laura DeBonis played in our young organization, and give a warm welcome to Denise Stephens and Mary Minow as we continue to mature.

Paul Courant
Paul Courant was at the first meeting that conceptualized DPLA in the fall of 2010 at the Radcliffe Institute, and he has been instrumental in DPLA’s inception and growth ever since. Paul led the creation of one of our founding hubs, HathiTrust, and, with his wide-ranging administrative experience as a provost and university librarian at the University of Michigan and his deep economic knowledge, he has been a tremendous resource to DPLA. With HathiTrust, Paul crystallized the importance of nonprofit institutions holding, preserving, and making accessible digital copies of books (and later, other documents). HathiTrust’s model of large-scale collaboration was also an inspiration for DPLA.
Paul has long been a vocal and effective advocate for open access and for sharing the holdings of our cultural heritage institutions as widely as possible with the global public. His shrewd vision of the national and international landscape for libraries was tremendously influential as we formed, launched, and expanded over the last six years. Paul’s very good humor will also be greatly missed.
Paul N. Courant previously served as the University Librarian and Dean of Libraries, Harold T. Shapiro Collegiate Professor of Public Policy, Arthur F. Thurnau Professor, Professor of Economics and Professor of Information at the University of Michigan. From 2002-2005 he served as Provost and Executive Vice-President for Academic Affairs, the chief academic officer and the chief budget officer of the University. He has also served as the Associate Provost for Academic and Budgetary Affairs, Chair of the Department of Economics and Director of the Institute of Public Policy Studies (which is now the Gerald R. Ford School of Public Policy). In 1979 and 1980 he was a Senior Staff Economist at the Council of Economic Advisers. Paul has authored half a dozen books and over seventy papers covering a broad range of topics in economics and public policy, including tax policy, state and local economic development, gender differences in pay, housing, radon and public health, relationships between economic growth and environmental policy, and university budgeting systems. More recently, his academic work has considered the economics of universities, the economics of libraries and archives, and the effects of new information technologies and other disruptions on scholarship, scholarly publication, and academic libraries. Paul holds a BA in History from Swarthmore College, an MA in Economics from Princeton University, and a PhD in Economics from Princeton University.

Laura DeBonis
Laura DeBonis’s background is very different from Paul’s, but she brought an equal measure of economic and business expertise, and a similar passion to seeing how technology can help the general public. Her early and leading involvement with Google Books, and her ability to establish partnerships across multiple domains, was incredibly helpful to DPLA. Laura’s knowledge of digitization and sense of the power of computational technology—as well as her understanding of where its limits lie and where human activity and collaboration must step in—were enormously useful as we set up DPLA’s distributed national system. In recent years, her savvy understanding of the ebook ecosystem has helped us plan our work in this area, and impacted the Open eBook Initiative. Laura was constantly available to staff, and always ready with well-considered, thoughtful advice. We wish her well and plan to stay in touch.
Laura DeBonis currently works as a consultant to education companies and non-profits. In addition to the DPLA, she also serves on the Public Interest Declassification Board at the National Archives. Laura previously worked at Google in a variety of positions including Director of Library Partnerships for Book Search, Google’s initiative to make all the world’s books discoverable and searchable online. Laura started her career in documentary film and multimedia and in strategy consulting for internet businesses. She is a graduate of Harvard College and has a MBA from Harvard Business School.
Denise Stephens, the University Librarian at the University of California, Santa Barbara, begins her first term on the board this month. We have been particularly impressed with the way that Denise has combined a deep understanding of libraries, physical and digital, with a public spirit and sense of community. The recently renovated library at UCSB, with both analog and digital resources oriented toward the many needs of students, teachers, and the public, is itself a model for DPLA. Her many years of experience and passion for libraries and public service will be greatly appreciated at DPLA.
Denise Stephens has served as University Librarian at UCSB since 2011. Her background includes a broad range of leadership and management roles related to the intersection of evolving information resource strategies and scholarship in the academic environment. She has actively participated in implementing digital library initiatives and service programs in research university libraries for 20 years. In addition to her current position, she has held campus-wide library and information technology executive leadership roles at Syracuse University (as Associate and Acting University Librarian) and the University of Kansas, where she served as Vice Provost and Chief Information Officer. Early in her career, she helped to launch transformative spatial data services among emerging digital library programs at the University of Virginia. Ms. Stephens has also contributed to efforts promoting transformed scholarly communications and persistent accessibility of information resources as a member of the BioOne Board of Directors and the Depository Library Council of the Federal Depository Library Program. Ms. Stephens has a BA in Political Science and a Master of Library and Information Studies from the University of Oklahoma.

Mary Minow
Mary Minow is one of the foremost legal scholars on issues that impact libraries, including copyright and fair use. She has been very active in the library community, serving on boards and committees that span a range of interests and communities. Her thoughtful discourses on the nature and role of libraries, the importance of access to culture and the need for intellectual freedom fit beautifully into our work, and we look forward to her inspiring words and advice. She has worked as both a librarian and a lawyer, and will help us bridge these worlds as well.
Mary Minow is an advanced leadership initiative fellow at Harvard University and is a Presidential Appointee to the board of the Institute of Museum and Library Services. She has also worked as a consultant with libraries in California and across the country on copyright, privacy, free speech and related legal issues. She most recently was counsel to Califa, a consortium of California libraries that set up its own statewide ebook lending service. Previously she was the Follett Chair at Dominican University’s School of Library and Information Science. Current and past board memberships include the Electronic Privacy Information Center, the Freedom to Read Foundation and the California Association of Trustees and Commissioners (Past Chair). She is the recipient of the first Zoia Horn Intellectual Freedom award and also received a WISE (Web-based Information Science Education) award for excellence in online education when she taught part time at San Jose State University.