Register now for the 2016 LITA Forum
Fort Worth, TX
November 17-20, 2016
Join us in Fort Worth, Texas, at the Omni Fort Worth Hotel in downtown Fort Worth for the 2016 LITA Forum, a three-day education and networking event featuring 2 preconferences, 3 keynote sessions, more than 55 concurrent sessions, and 25 poster presentations. It’s the 19th annual gathering of the highly regarded LITA Forum for technology-minded information professionals. Meet with your colleagues involved in new and leading-edge technologies in the library and information technology field. Registration is limited in order to preserve the important networking advantages of a smaller conference. Attendees take advantage of the informal Friday evening reception, networking dinners, and other social opportunities to get to know colleagues and speakers.
Keynote speakers:
- Cecily Walker, Vancouver Public Library
- Waldo Jaquith, U.S. Open Data
- Tara Robertson, @tararobertson
Preconferences:
- Librarians can code! A “hands-on” computer programming workshop just for librarians
- Letting the Collections Tell Their Story: Using Tableau for Collection Evaluation
Comments from past attendees:
“Best conference I’ve been to in terms of practical, usable ideas that I can implement at my library.”
“I get so inspired by the presentations and conversations with colleagues who are dealing with the same sorts of issues that I am.”
“After LITA I return to my institution excited to implement solutions I find here.”
“This is always the most informative conference! It inspires me to develop new programs and plan initiatives.”
See you in Fort Worth.
There are probably a hundred reasons why the Senate should immediately vote, and unanimously at that, to confirm Dr. Carla Hayden to serve as the next Librarian of Congress. With the clock ticking down to zero this week on its pre-recess calendar, here are our top ten reasons for the Senate to award her the job now:
- She brought Baltimore’s large historic library system into the 21st Century and she’ll do the same for the Library of Congress.
- The nation’s Library has been led by a library professional three times before in its history; its technology and organizational needs demand a fourth now.
- The Senate Rules Committee approved her without dissent by voice vote.
- Every state and major national library association in America strongly backs her confirmation.
- University of Chicago PhDs don’t come in cereal boxes.
- Breathing new life into the Library of Congress demands Dr. Hayden’s deep understanding of technology, opportunity and community.
- The world’s greatest library deserves to be led by one of Fortune Magazine’s 50 “World’s Greatest Leaders” for 2016.
- Congress and the public it serves need the best possible librarian as the Librarian.
- It’s hard to find anything or anyone else that the Copyright Alliance and Internet Association agree on.
- “Vacancy” is the sign you want to see on a motel marquee at the end of a long drive, not on the Librarian of Congress’ chair at the beginning of a new Congress.
Ask your Senators to confirm Dr. Carla Hayden today – visit the Action Center for additional talking points and pre-written tweet messages.
The post Ten reasons to confirm Dr. Hayden for Librarian of Congress appeared first on District Dispatch.
UI/UX Assets, which creates design assets and resources for user interface and user experience designers, has made available these really useful flowchart cards designed by Johan Netzler. They illustrate common design patterns you can use to think through the design and flow of a site. Super handy.
I love this kind of stuff. Here, I pieced together an idea for the homepage of a public library.
128 UX flowchart cards. Perfect tool for creating user journeys and UX flows using Sketch. Not only does it come with hundreds of elements, it is as always extremely well organized. Each card follows a flexible grid and a strict layer structure, creating consistency across all cards. This is a perfect instrument to make your ideas minimal, readable and easy to follow.
UX Flowchart Cards on UI/UX Assets
DPLA: DPLA Welcomes Denise Stephens and Mary Minow to Board, Honors Departing Paul Courant and Laura DeBonis
On July 1, 2016, the Digital Public Library of America had several transitions on its Board of Directors. Two of our original board members rotated off the board at the end of their second terms, and two new board members joined in their stead. We wish to salute the critical roles that Paul Courant and Laura DeBonis played in our young organization, and give a warm welcome to Denise Stephens and Mary Minow as we continue to mature.
Paul Courant
Paul Courant was at the first meeting that conceptualized DPLA in the fall of 2010 at the Radcliffe Institute, and he has been instrumental in DPLA’s inception and growth ever since. Paul led the creation of one of our founding hubs, HathiTrust, and, with his wide-ranging administrative experience as a provost and university librarian at the University of Michigan and his deep economic knowledge, he has been a tremendous resource to DPLA. With HathiTrust, Paul crystallized the importance of nonprofit institutions holding, preserving, and making accessible digital copies of books (and later, other documents). HathiTrust’s model of large-scale collaboration was also an inspiration for DPLA.
Paul has long been a vocal and effective advocate for open access and for sharing the holdings of our cultural heritage institutions as widely as possible with the global public. His shrewd vision of the national and international landscape for libraries was tremendously influential as we formed, launched, and expanded over the last six years. Paul’s very good humor will also be greatly missed.
Paul N. Courant previously served as the University Librarian and Dean of Libraries, Harold T. Shapiro Collegiate Professor of Public Policy, Arthur F. Thurnau Professor, Professor of Economics and Professor of Information at the University of Michigan. From 2002-2005 he served as Provost and Executive Vice-President for Academic Affairs, the chief academic officer and the chief budget officer of the University. He has also served as the Associate Provost for Academic and Budgetary Affairs, Chair of the Department of Economics and Director of the Institute of Public Policy Studies (which is now the Gerald R. Ford School of Public Policy). In 1979 and 1980 he was a Senior Staff Economist at the Council of Economic Advisers. Paul has authored half a dozen books and over seventy papers covering a broad range of topics in economics and public policy, including tax policy, state and local economic development, gender differences in pay, housing, radon and public health, relationships between economic growth and environmental policy, and university budgeting systems. More recently, his academic work has considered the economics of universities, the economics of libraries and archives, and the effects of new information technologies and other disruptions on scholarship, scholarly publication, and academic libraries. Paul holds a BA in History from Swarthmore College, an MA in Economics from Princeton University, and a PhD in Economics from Princeton University.
Laura DeBonis
Laura DeBonis’s background is very different from Paul’s, but she brought an equal measure of economic and business expertise, and a similar passion to seeing how technology can help the general public. Her early and leading involvement with Google Books, and her ability to establish partnerships across multiple domains, was incredibly helpful to DPLA. Laura’s knowledge of digitization and sense of the power of computational technology—as well as her understanding of where its limits lie and where human activity and collaboration must step in—were enormously useful as we set up DPLA’s distributed national system. In recent years, her savvy understanding of the ebook ecosystem has helped us plan our work in this area, and impacted the Open eBook Initiative. Laura was constantly available to staff, and always ready with well-considered, thoughtful advice. We wish her well and plan to stay in touch.
Laura DeBonis currently works as a consultant to education companies and non-profits. In addition to the DPLA, she also serves on the Public Interest Declassification Board at the National Archives. Laura previously worked at Google in a variety of positions including Director of Library Partnerships for Book Search, Google’s initiative to make all the world’s books discoverable and searchable online. Laura started her career in documentary film and multimedia and in strategy consulting for internet businesses. She is a graduate of Harvard College and has a MBA from Harvard Business School.
Denise Stephens, the University Librarian at the University of California, Santa Barbara, begins her first term on the board this month. We have been particularly impressed with the way that Denise has combined a deep understanding of libraries, physical and digital, with a public spirit and sense of community. The recently renovated library at UCSB, with both analog and digital resources oriented toward the many needs of students, teachers, and the public, is itself a model for DPLA. Her many years of experience and passion for libraries and public service will be greatly appreciated at DPLA.
Denise Stephens has served as University Librarian at UCSB since 2011. Her background includes a broad range of leadership and management roles related to the intersection of evolving information resource strategies and scholarship in the academic environment. She has actively participated in implementing digital library initiatives and service programs in research university libraries for 20 years. In addition to her current position, she has held campus-wide library and information technology executive leadership roles at Syracuse University (as Associate and Acting University Librarian) and the University of Kansas, where she served as Vice Provost and Chief Information Officer. Early in her career, she helped to launch transformative spatial data services among emerging digital library programs at the University of Virginia. Ms. Stephens has also contributed to efforts promoting transformed scholarly communications and persistent accessibility of information resources as a member of the BioOne Board of Directors and the Depository Library Council of the Federal Depository Library Program. Ms. Stephens has a BA in Political Science and a Master of Library and Information Studies from the University of Oklahoma.
Mary Minow
Mary Minow is one of the foremost legal scholars on issues that impact libraries, including copyright and fair use. She has been very active in the library community, serving on boards and committees that span a range of interests and communities. Her thoughtful discourses on the nature and role of libraries, the importance of access to culture and the need for intellectual freedom, fits beautifully into our work, and we look forward to her inspiring words and advice. She has worked as both a librarian and a lawyer, and will help us bridge these worlds as well.
Mary Minow is an advanced leadership initiative fellow at Harvard University and is a Presidential Appointee to the board of the Institute of Museum and Library Services. She has also worked as a consultant with libraries in California and across the country on copyright, privacy, free speech and related legal issues. She most recently was counsel to Califa, a consortium of California libraries that set up its own statewide ebook lending service. Previously she was the Follett Chair at Dominican University’s School of Library and Information Science. Current and past board memberships include the Electronic Privacy Information Center, the Freedom to Read Foundation and the California Association of Trustees and Commissioners (Past Chair). She is the recipient of the first Zoia Horn Intellectual Freedom award and also received a WISE (Web-based Information Science Education) award for excellence in online education when she taught part time at San Jose State University.
Inc.’s John Brandon recently wrote about The Slow, Sad, and Ultimately Predictable Decline of 3D Printing. Uh, not so fast.
3D Printing is just getting started. For libraries whose adopted mission is to introduce people to emerging technologies, this is a fantastic opportunity to do so. But it has to be done right.
Another dead end?
Brandon cites a few reasons for his pessimism:
- 3D printed objects are low quality and the printers are finicky
- 3D printing growth is falling behind initial estimates
- people in manufacturing are not impressed
- and the costs are too high
I won’t get into all that’s wrong with this analysis, as I feel most of it is incorrect or, at the very least, a temporary problem typical of a new technology. Instead, I’d like to discuss this in the library maker context. In fact, you can apply these ideas to any tech project.
How to make failure a win—no matter what
Libraries are quick to jump on tech. Remember those QR Codes that would revolutionize mobile access? Did your library consider a Second Life branch? How about those Chromebooks!
Inevitably, these experiments are going to fail. But that’s okay.
As this blog often suggests, failure is a win when it teaches you something. Experimenting is the first step in the process of discovery. And that’s really what all these kinds of projects need to be.
In the case of a 3D Printing project at your library, it’s important to keep this notion front and center. A 3D Printing pilot with the goal of introducing the public to the technology can be successful if people simply try it out. That seems easy enough. But to be really successful, even this kind of basic 3D Printing project needs to have a fair amount of up-front planning attached to it.
Chicago Public Library created a successful Maker Lab. Their program was pretty simple: Hold regular classes showing people how to use the 3D printers and then allow those that completed the introductory course to use the printers in open studio lab times. When I tried this out at CPL, it was quite difficult to get a spot in the class due to popularity. The grant-funded project was so successful, based on the number of attendees, that it was extended and continues to this day.
As a grant-funded endeavor, CPL likely wrote out the specifics before any money was handed over. But even an internally-funded project should do this. Keep the goals simple and clear so expectations on the front line match those up the chain of command. Figure out what your measurements of success are before you even purchase the first printer. Be realistic. Always document everything. And return to that documentation throughout the project’s timeline.
Taking it to the next level
San Diego Public Library is an example of a Maker Project that went to the next level. Uyen Tran saw an opportunity to merge startup seminars with their maker tools at her library. She brought aspiring entrepreneurs into her library for a Startup Weekend event where budding innovators learned how the library could be a resource for them as they launched their companies. 3D printers were part of this successful program.
It’s important to note that Uyen already had the maker lab in place before she launched this project. And it would be risky for a library to skip the establishment of a rudimentary 3D printer program before trying for this more ambitious program.
But it could be done if that library were well organized, with solid project managers and deep roots in the target community. That’s a tall order to fill, though.
What’s the worst thing that could go wrong?
The worst thing that could go wrong is doubling down on failure: repeating one failed project after another without changing the flawed approach behind it.
I’d also add that libraries are often out ahead of the public on these technologies, so dead ends are inevitable. To address this, I would also add one more tactic to your tech projects: listening.
The public has lots of concerns about a variety of things. If you ask them, they’ll tell you all about them. Not all of their concerns are directly related to libraries, but we can often help anyway. We have permission to do so. People trust us. It’s a great position to be in.
But we have to ask them to tell us what’s on their mind. We have to listen. And then we need to think creatively.
Listening and thinking outside the box was how San Diego took their 3D printers to the next level.
The Long Future of 3D Printing
The Wright Brothers’ first flight covered only 120 feet. A year later, they flew 24 miles. These initial attempts looked nothing like the jet age, and yet the technology of flight was born from these humble experiments.
Already, 3D printing is being adopted in multiple industries. Artists are using it to prototype their designs. Astronauts are using it to print parts aboard the International Space Station. Bio-engineers are now looking at printing stem-cell structures to replace organs and bones. We’re decades away from the jet age of 3D printing, but this tech is here to stay.
John Brandon’s read is incorrect simply because he’s looking at the current state and not seeing the long-term promise. When he asks a Ford engineer for his take on 3D Printing in the assembly process, he gets a smirk. Not a hotbed of innovation. What kind of reaction would he have gotten from an engineer at Tesla? At Apple? Fundamentally, he’s approaching 3D Printers from the wrong perspective and this is why it looks doomed.
Libraries should not make this mistake. The world is changing ever more quickly and the public needs us to help them navigate the new frontier. We need to do this methodically, with careful planning and a good dose of optimism.
Starting in 2012, the British Library replaced its interlibrary loan service with a licensed document delivery agreement with the International Association of Scientific, Technical & Medical Publishers (STM) and the Publishers Association. Perhaps to improve turnaround time and provide better service, perhaps to save money by outsourcing, or perhaps out of fear of infringement, the British Library agreed to switch to the International Non-Commercial Document Supply (INCD) service. Its previous interlibrary loan service was extremely popular and apparently lawful, because UK copyright law has an interlibrary loan copyright exception similar to the one in US copyright law: libraries may send journal articles to other libraries to meet the request of a user. But did it cover international ILL?
The abandoned interlibrary loan service provided resources to 59 countries whose libraries did not have the materials requested by their faculty, researchers, and students. Holding one of the largest research collections in the world, the British Library was naturally heavily relied upon for interlibrary loan. After moving to the INCD service, however, the popular interlibrary loan service deteriorated in spectacular fashion, as detailed by Teresa Hackett from Electronic Information for Libraries (EIFL). In her blog post entitled “Licensed to Fail,” Hackett describes the swift demise of the INCD service and, through a freedom of information request, has the data to bolster her argument. You must read it, although you likely will not be surprised.
Back in 2012, when announcing the INCD partnership, Michael Mabe, CEO of STM, said that “the British Library framework license (INCD) will give publishers, including our members, contractual control over the international cross-border delivery of copies from their material via an established and respected document supply service. It will also allow the British Library to improve the service, and delivery times, available to its authorized users.” Alas, the British Library cancelled the service this month. It did not fit the bill, dramatically reducing access to research materials (while delivering on publisher contractual control).
One wonders. Maybe this explains the popularity of Sci-Hub.
For the past three weeks, I’ve been doing a lot of work on MarcEdit. These initial changes impact just the Windows and Linux versions of MarcEdit. I’ll be taking some time tomorrow and Wednesday to update the Mac version. The current changes are as follows:
* Enhancement: Language files have been updated
* Enhancement: Command-line tool: -task option added to support tasks being run via the command-line.
* Enhancement: Command-line tool: -clean and -validate options updated to support structure validation.
* Enhancement: Alma integration: Updating version numbers and cleaned up some windowing in the initial release.
* Enhancement: Small update to the validation rules file.
* Enhancement: Update to the linked data rules file around music headings processing.
* Enhancement: Linked Data Platform: collections information has been moved into the configuration file. This will allow local indexes to be added so long as they support a json return.
* Enhancement: Merge Records — 001 matching now looks at the 035 and included OCLC numbers by default.
* Enhancement: MarcEngine: Updated the engine to accommodate invalid data in the LDR (leader).
* Enhancement: MARC SQL Explorer — added an option to allow the MySQL database to be created as UTF-8.
* Enhancement: Handful of odd UI changes.
You can get the update from the downloads page (http://marcedit.reeset.net/downloads) or via the automated update tools.
MarcEdit’s command-line function has always had the ability to run validation tasks against the MarcEdit rules file. However, the program hasn’t included access to the cleaning functions of the validator. As of the last update, this has changed. If the -validate command is invoked without a rules file defined, the program will validate the structure of the data. If the -clean option is passed, the program will remove invalid structural data from the file.
Here’s an example of the command:
>> cmarcedit.exe -s "C:\Users\rees\Desktop\CLA_UCB 2016\Data File\sample data\bad_sample_records.mrc" -validate
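A -clean run would look similar. The file paths here are hypothetical, and I’m assuming the cleaned output is written to the file named by the -d switch, mirroring the other commands:
>> cmarcedit.exe -s "C:\Users\rees\Desktop\bad_sample_records.mrc" -d "C:\Users\rees\Desktop\cleaned_records.mrc" -clean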
MarcEdit’s task list functionality has made doing repetitive tasks in MarcEdit a fairly simple process. But one limitation has always been that the tasks must be run from within MarcEdit. Well, that limitation has been lifted. As of the last update, a new option has been added to the command-line tool: -task. When run with a path to a task file, MarcEdit will perform the task from the command-line.
Here’s an example of a command:
cmarcedit.exe -s "C:\Users\rees\Desktop\withcallnumbers.mrk" -d "C:\Users\rees\Desktop\remote_task.mrk" -task "C:\Users\rees\AppData\Roaming\marcedit\macros\tasksfile-2016_06_17_190213223.txt"
This functionality is only available in the Windows and Linux version of the application.
Journal of Web Librarianship: Pakistani University Library Web Sites: Features, Contents, and Maintenance Issues
Muhammad Abbas Ganaee
Do you have an inventive VIVO application or exemplary linked open data set? Show off your creativity at the VIVO conference! Submit your work to the VIVO App or Linked Open Data Contests and give it the recognition that it deserves. Winners will be announced and recognized at the conference. Submissions are due by August 1, 2016. Instructions can be found here.
Book your room now for VIVO2016
FOSS4Lib Upcoming Events: FOLIO Open Source Project to build a Library Services Platform – Questions and Answers
Last updated July 11, 2016. Created by Peter Murray on July 11, 2016.
Open Library Community Forum: Wednesday, July 13, 2016, at 11 AM EDT/3 PM GMT
Please come join the Open Library Community Forum!
Speakers will answer questions about FOLIO, the open source project to build a library services platform (LSP), and attendees can learn more about the project as well as how they can participate. Members of the audience are welcome to submit questions during the Forum. Questions not answered during the Forum will be answered in a soon-to-come blog posting from the FOLIO Project Leaders.
A community collaboration to develop an open source Library Services Platform (LSP) designed for innovation.
Package Type: Integrated Library System
License: Apache 2.0
Development Status: In Development
Academic libraries have long provided workshops that focus on research skills and tools to the community. Topics often include citation software or specific database search strategies. Increasingly, however, libraries are offering workshops on topics that some may consider untraditional or outside the natural home of the library. These topics include using R and other analysis packages, data visualization software, and GIS technology training, to name a few. Librarians are becoming trained as Data and Software Carpentry instructors in order to pull from their established lesson plans and become part of a larger instructional community. Librarians are also partnering with non-profit groups like Mozilla’s Science Lab to facilitate research and learning communities.
Traditional workshops have generally been conceived and executed by librarians in the library. Collaborating with outside groups like Software Carpentry (SWC) and Mozilla is a relatively new endeavor. As an example, certified trainers from SWC can come to campus and teach a topic from their course portfolio (e.g. using SQL, Python, R, Git). These workshops may or may not have a cost associated with them and are generally open to the campus community. From what I know, the library is typically the lead organizer of these events. This shouldn’t be terribly surprising. Librarians are often very aware of the research hurdles that faculty encounter, or what research skills aren’t being taught in the classroom to students (more on this later).
Librarians are helpers. For those with some biology knowledge, I find it useful to think of librarians as chaperone proteins: the proteins that help other proteins fold into their functional conformation. Librarians act in the same way, guiding and helping people to be more prepared to do effective research. We may not be altering their DNA, but we are helping them bend in new ways and take on different perspectives. When we see a skills gap, we think about how we can help. But workshops don’t just *spring* into being. They take a huge amount of planning and coordination. Librarians, on top of all the other things we do, pitch the idea to administration and other stakeholders on campus; coordinate the space, timing, refreshments, registration, and travel for the instructors (if they aren’t available in-house); and advocate for the funding to pay for the event in order to make it free to the community. A recent listserv discussion regarding hosting SWC workshops resulted in consensus around a recommended minimum six-week lead time. The workshops have all been hugely successful at the institutions responding on the list, and there are even plans for future Library Carpentry events.
A colleague once said that everything librarians do in instruction is something the disciplinary faculty should be doing in the classroom anyway. That is, the research skills workshops, the use of a reference manager, searching databases, and data management best practices are all appropriately – and possibly more appropriately – taught in the classroom by the professor for the subject. While he is completely correct, that is most certainly not happening. We know this because faculty send their students to the library for help. They do this because they lack curricular time to cover any of these topics in depth, because they lack professional development time to keep abreast of changes in certain research methods and technologies, and because these are all things that librarians should have expertise in. The beauty of our profession is that information is the coin of the realm for us, regardless of its form or subject. With minimal effort, we should be able to navigate information sources with precision and accuracy. This is one of the reasons why, time and again, the library is considered the intellectual center, the hub, or the heart of the university. Have an information need? We got you. Whether those information sources are in GitHub as code, spreadsheets as data, or databases as article surrogates, we should be able to chaperone our users through that process.
All of this is to the good, as far as I am concerned. Yet I have a persistent niggle at the back of my mind that libraries are too often taking a passive posture. [Sidebar: I fully admit that this post is written from a place of feeling, of suspicions and anecdotes, and not from empirical data. Therefore, I am both uncomfortable writing it, yet unable to turn away from it.] My concern is that as libraries take on these workshops because there is a need on campus for discipline-agnostic learning experiences, we (as a community) do so without really articulating what the expectations and compensations of an academic library are, or should be. This is a natural extension of the “what types of positions should libraries provide/support?” question that seems to persist. How much of this response is based on the work of individuals volunteering to meet needs, stretching the work to fit into a job description or existing workloads, and ultimately putting user needs ahead of organizational health? I am not advocating that we ignore these needs; rather, I am advocating that we integrate the support for these initiatives within the organization, that we systematize it, and that we own our expertise in it.
This brings me back to the idea of workshops and how we claim ownership of them. Are libraries providing these workshops only because no one else on campus is meeting the need? Or are we asserting our expertise in the domain of information/data shepherding and producing these workshops because the library is the best home for them, not a home by default? And if we are making this assertion, then have we positioned our people to be supported in the continual professional development that this demands? Have we set up mechanisms within the library and within the university for this work to be appropriately rewarded? The end result may be the same – say, providing workshops on R – but the motivation and framing of the service is important.
Information is our domain. We navigate its currents and ride its waves. It is ever changing and evolving, as we must be. And while we must be agile and nimble, we must also be institutionally supported and rewarded. I wonder if libraries can table the self-reflection and self-doubt regarding the appropriateness of our services (see everything ever written regarding libraries and data, digital humanities, digital scholarship, altmetrics, etc.) and instead advocate for the resourcing and recognition that our expertise warrants.
Library of Congress: The Signal: FADGI MXF Video Specification Moves Up an Industry-organization Approval Ladder
The following is a guest post by Carl Fleischhauer, who organized the FADGI Audio-Visual Working Group in 2007. Fleischhauer recently retired from the Library of Congress.
The Federal Agencies Digitization Guidelines Initiative Audio-Visual Working Group is pleased to announce a milestone in the development of the AS-07 MXF video-preservation format specification. AS-07 has taken shape under the auspices of a not-for-profit trade group: the Advanced Media Workflow Association. AS-07 is now an official AMWA Proposed Specification, and the current version (Creative Commons CC BY-SA license and all) has been posted at the AMWA website. Although this writer retired from the Library in April, he helped shepherd the specification through this phase.
AS-07 is one of three new AMWA specifications announced in June. Another one is the organization’s new process rule book. The new AMWA process is patterned on the Requests for Comment approach used by the Internet Engineering Task Force. In the new AMWA scheme, there are three levels of maturity:
- Work in Progress
- Proposed Specification
- Specification
Two earlier versions of AS-07 were exposed for community comment at the AMWA website, beginning in September 2014, and this met the requirements for a Work in Progress. For more information about the history of AS-07, refer to the FADGI website.
AS-07 is a standards-based specification. For the most part it is a cookbook recipe for a particular subtype of the MXF standard. MXF stands for Material eXchange Format, and that format’s complex and lengthy set of rules and options is spelled out in more than thirty standards from the Society of Motion Picture and Television Engineers. AS-07 also enumerates a number of permitted encodings and other components, each of which is based on other standards from SMPTE, the International Organization for Standardization and International Electrotechnical Commission, the European Broadcast Union, and special White Paper documents from the British Broadcasting Corporation. It is no wonder that a cookbook recipe is called for!
Why the emphasis on standards? The short answer is that standards underpin interoperability, in the digital world just as surely as they have for, say, the dimensions of railroad tracks, so my boxcar will roll down your rail line. It is worth saying that, in our preservation context, interoperability has both current and future dimensions. Today, cooperating archives may exchange preservation master files and these must be readable by both parties. More important, however, is temporal interoperability: today’s content must be readable by the archive of tomorrow. AS-07’s extensive use of standards-based design supports both types of interoperability.
At a high level, the objectives for video archival master files (aka preservation masters) are like those for digital preservation reformatting of other categories of content. Archives want their masters to reproduce picture and sound at very high levels of quality. In addition, the preservation masters should be complete and authentic copies of the originals, i.e., in the case of video, they should retain components like multiple timecodes, closed captions and multiple soundtracks. And–back to temporal interoperability–the files must support access by future users.
What are some of the features of AS-07? The specification emphasizes encodings that ensure the highest possible quality of picture and sound, including requirements for declaring the correct aspect ratio and handling the intricacies of interlaced picture, a characteristic of pre-digital video. Beyond those elements, AS-07 also specifies options for the following:
- Captions and Subtitles
- retain and provide carriage for captions and subtitles
- translate binary-format captions and subtitles to XML Timed Text
- Audio Track Layout and Labeling
- provide options for audio track layout and labeling
- Content integrity
- provide support for within-file content integrity data
- provide coherent master timecode
- retain legacy timecode
- label multiple timecodes
- Embedding Text-Based and Binary Data
- provide carriage of supplementary metadata (text-based data)
- provide carriage of captions and subtitles in the form of Timed Text (text-based data)
- provide carriage of a manifest (text-based data)
- provide carriage of still images, documents, EBU STL, etc. (binary data)
- Language Tagging
- provide a means to tag Timed Text languages
- retain language tagging associated with legacy binary caption or subtitle data
- provide a means to tag soundtrack languages
- provide support for segmented content
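To make the caption and language-tagging provisions above concrete, a binary caption translated to XML Timed Text might look like the following minimal TTML fragment. This is an illustrative sketch of the W3C Timed Text format that AS-07 points to, not an excerpt from AS-07 itself; the timing values and text are invented.

```xml
<!-- Minimal Timed Text (TTML) sketch: one caption, with the language
     tagged via xml:lang as the Language Tagging bullets describe. -->
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:04.000">Hello, and welcome.</p>
    </div>
  </body>
</tt>
```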
AS-07 has not been developed exclusively in writing (“on paper,” in oldspeak). The format is based on pioneering work done by Jim Lindner in the early 2000s, when he developed a system called SAMMA (System for the Automated Migration of Media Archives). SAMMA produces MXF files for which the picture data is encoded as lossless JPEG 2000 frame images. It also operates in a robotic mode, to support high-volume reformatting.
Jim’s design for SAMMA was motivated by the forecasts for high-volume reformatting at the Library’s audio-visual center in Culpeper, Virginia (today’s Packard Campus for Audio-Visual Conservation), which was then in its planning phase. The Packard Campus began operation in 2007 and, since then, more than 160,000 videotapes have been reformatted using the SAMMA system. AS-07 is very much a refinement and elaboration of the SAMMA format. In order to get a better look at those refinements, in 2015, the AS-07 team commissioned the production of custom-made sample files.
What next? The interesting — and I think proper — feature of the new AMWA process concerns the movement from Proposed Specification to Specification. The rulebook lists several bullets as requirements but the gist is this: you gotta have implementation and adoption. AS-07 at this time is, metaphorically, a recipe ready to test in the kitchen. Now it is time to cook and taste the pudding. After there are instances of implementation and adoption, these will be reported to the AMWA board with a request to advance AS-07 to the level of [approved] Specification. (Of course, if the process reveals problems, the specification will be modified.)
The first steps toward implementation are under way. On FADGI’s behalf, the Library has contracted with Audiovisual Preservation Solutions and EVS to assemble additional test files, and to have them reviewed by an outside expert. At the same time, James Snyder, the Senior Systems Administrator at the Packard Campus, is working with vendors to do some actual workups. (James oversees the campus’s use of SAMMA and has been an active AS-07 team member.) We trust that these implementation efforts will bear fruit during the remaining months of 2016.
D-Lib Magazine has just published my analysis of the 2015 International Linked Data Survey for Implementers.* I published the results of the 2014 linked data survey in a series of blog posts here between 28 August 2014 and 8 September 2014 (1 — Who’s doing it; 2 — Examples in production; 3 — Why and what institutions are consuming; 4 — Why and what institutions are publishing; 5 — Technical details; 6 — Advice from the implementers). Discussions with OCLC Research Library Partners metadata managers prompted these surveys, as they thought there were more linked data projects that had been implemented than they were aware of.
I had two objectives for repeating the 2014 survey in 2015:
- Increase survey participation, especially by national libraries.
- Identify changes in the linked data environment, as described in my 1 June 2015 posting, What’s changed in linked data implementations?
We met the first objective. Few national libraries were represented in the 48 responding institutions to the 2014 survey (those that had implemented or were implementing linked data projects or services), and several commentators noted their absence. To address this gap, we conducted the 2015 survey earlier, between 1 June and 31 July (rather than 7 July and 15 August in 2014). We were also more pro-active in recruiting responses. We indeed had increased participation, receiving responses from 71 institutions that had implemented or were implementing linked data projects or services, including 14 from national libraries (compared to just 4 in 2014). The number of projects described also increased, from 76 in 2014 to 112 in 2015.
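The counts reported above amount to roughly the same rate of growth in both measures, which a quick check confirms:

```python
# Survey counts reported above: responding institutions and projects described
responses_2014, responses_2015 = 48, 71
projects_2014, projects_2015 = 76, 112

def pct_increase(old, new):
    """Percentage increase from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

print(pct_increase(responses_2014, responses_2015))  # 48 (% more institutions)
print(pct_increase(projects_2014, projects_2015))    # 47 (% more projects)
```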
The idea that we could compare responses to the same set of questions to identify changes or trends proved to be unrealistic for three reasons:
- Although I asked each responding institution in 2014 to also respond to the 2015 survey, only 29 did so. This is too small a pool to provide any over-arching “changes in the linked data environment.”
- One year is insufficient to note significant changes.
- Although repeat respondents had access to their responses in 2014, a number of their 2015 responses differed in areas that were not likely to change within a year (such as licenses, platforms, serializations, vocabularies used). It was unclear whether they really represented a change or just a different answer.
It is easier to note what did not change between the two surveys. For example:
- Most linked data projects or services both consume and publish linked data. Those that publish linked data only (and do not consume it) are relatively few in both survey results.
- The chief motivations for publishing linked data are the same: expose data to a larger audience on the Web and demonstrate what could be done with datasets as linked data (80% or more of all respondents in each survey).
- Similarly, the chief motivations for consuming linked data are the same: provide local users with a richer experience and enhance local data by consuming linked data from other sources (74% or more of all respondents in each survey).
- Most respondents in each survey were libraries or networks of libraries. We had few responses from outside the library domain. In hindsight this is not surprising, as our social networks are with those who work in, for or with libraries.
The 2015 survey results may be considered a partial snapshot of the (mostly) library linked data environment. Museums and digital humanities linked data projects are not well represented. I have been asked whether I plan to repeat the survey. I haven’t decided – what do you think?
If you’re interested in looking at the responses from institutions you consider your peers, or would like to analyze the results for yourself, all responses to both the 2015 and 2014 surveys (minus the contact information which we promised to keep confidential) are available at: http://www.oclc.org/content/dam/research/activities/linkeddata/oclc-research-linked-data-implementers-survey-2014.xlsx
* Full citation: Smith-Yoshimura, Karen. 2016. “Analysis of International Linked Data Survey for Implementers.” D-Lib Magazine 22 (7/8). doi:10.1045/july2016-smith-yoshimura

About the author: Karen Smith-Yoshimura, senior program officer, works on topics related to creating and managing metadata with a focus on large research libraries and multilingual requirements.
DuraSpace News: VIVO Updates for July 10–VIVO16 Conference News, VIVO 1.9 and Vitro 1.9 Release Candidates, OpenVIVO Update
From Mike Conlon, VIVO Project Director
Apps Contest and Linked Data Contest. Do you have an application that uses VIVO data? Do you have a set of linked data that you can share? The VIVO Conference is holding its annual Application Contest and Linked Data Contest. You will receive an email this week with instructions for applying. Applications will be due August 1. It's easy to apply. Winners will be recognized in the program and in OpenVIVO!
Before we can posit any solutions to the problems that I have noted in these posts, we need to at least know what questions we are trying to answer. To me, the main question is:
What should happen between the search box and the bibliographic display?
Or as Pauline Cochrane asked: "Why should a user ever enter a search term that does not provide a link to the syndetic apparatus and a suggestion about how to proceed?" I really like the "suggestion about how to proceed" that she included there. Although I can think of some exceptions, I do consider this an important question.
If you took a course in reference work at library school (and perhaps such a thing is no longer taught - I don't know), then you learned a technique called "the reference interview." The Wikipedia article on this is not bad, and defines the concept as an interaction at the reference desk "in which the librarian responds to the user's initial explanation of his or her information need by first attempting to clarify that need and then by directing the user to appropriate information resources." The assumption of the reference interview is that the user arrives at the library with either an ill-formed query, or one that is not easily translated to the library's sources. Bill Katz's textbook "Introduction to Reference Work" makes the point bluntly:
"Be skeptical of the information the patron presents"
If we're so skeptical that the user could approach the library with the correct search in mind/hand, then why do we think that giving the user a search box in which to put that poorly thought out or badly formulated search is a solution? This is another mind-boggler to me.
So back to our question, what SHOULD happen between the search box and the bibliographic display? This is not an easy question, and it will not have a simple answer. Part of the difficulty is that there will not be one single right answer. Another difficulty is that we won't know a right answer until we try it, give it some time, open it up for tweaking, and carefully observe. That's the kind of thing that Google does when they make changes in their interface, but we have neither Google's money nor its network (we depend on vendor systems, which define what we can and cannot do with our catalog).
Since I don't have answers (I don't even have all of the questions) I'll pose some questions, but I really want input from any of you who have ideas on this, since your ideas are likely to be better informed than mine. What do we want to know about this problem and its possible solutions?
(Some of) Karen's Questions

Why have we stopped evolving subject access?
Is it that keyword access is simply easier for users to understand? Did the technology deceive us into thinking that a "syndetic apparatus" is unnecessary? Why have the cataloging rules and bibliographic description been given so much more of our profession's time and development resources than subject access has?
Is it too late to introduce knowledge organization to today's users?
The user of today is very different from the user of pre-computer times. Some of our users have never used a catalog with an obvious knowledge organization structure that they must/can navigate. Would they find such a structure intrusive? Or would they suddenly discover what they had been missing all along?
Can we successfully use the subject access that we already have in library records?
Some of the comments in the articles organized by Cochrane in my previous post were about problems in the Library of Congress Subject Headings (LCSH), in particular that the relationships between headings were incomplete and perhaps poorly designed. Since LCSH is what we have as headings, could we make them better? Another criticism was the sparsity of "see" references, once dictated by the difficulty of updating LCSH. Can this be ameliorated? Crowdsourced? Localized?
We still do not have machine-readable versions of the Library of Congress Classification (LCC), and the machine-readable Dewey Decimal Classification (DDC) has been taken off-line (and may be subject to licensing). Could we make use of LCC/DDC for knowledge navigation if they were available as machine-readable files?
Given that both LCSH and LCC/DDC have elements of post-composition and are primarily instructions for subject catalogers, could they be modified for end-user searching, or do we need to develop a different instrument altogether?
How can we measure success?
Without Google's user laboratory apparatus, the answer to this may be: we can't. At least, we cannot expect to have a definitive measure. How terrible would it be to continue to do as we do today and provide what we can, and presume that it is better than nothing? Would we really see, for example, a rise in use of library catalogs that would confirm that we have done "the right thing?"
Notes

* Modern Subject Access in the Online Age: Lesson 3
Author(s): Pauline A. Cochrane, Marcia J. Bates, Margaret Beckman, Hans H. Wellisch, Sanford Berman, Toni Petersen, and Stephen E. Wiberley, Jr.
Source: American Libraries, Vol. 15, No. 4 (Apr., 1984), pp. 250-252, 254-255
Stable URL: http://www.jstor.org/stable/25626708
 Katz, Bill. Introduction to Reference Work: Reference Services and Reference Processes. New York: McGraw-Hill, 1992. p. 82 http://www.worldcat.org/oclc/928951754. Cited in: Brown, Stephanie Willen. The Reference Interview: Theories and Practice. Library Philosophy and Practice 2008. ISSN 1522-0222
 One answer, although it doesn't explain everything, is economic: the cataloging rules are published by the professional association and are a revenue stream for it. That provides an incentive to create new editions of rules. There is no economic gain in making updates to the LCSH. As for the classifications, the big problem there is that they are permanently glued onto the physical volumes making retroactive changes prohibitive. Even changes to descriptive cataloging must be moderated so as to minimize disruption to existing catalogs, which we saw happen during the development of RDA, but with some adjustments the new and the old have been made to coexist in our catalogs.
 Note that there are a few places online, in particular Wikipedia, where there is a mild semblance of organized knowledge and with which users are generally familiar. It's not the same as the structure that we have in subject headings and classification, but users are prompted to select pre-formed headings, with a keyword search being secondary.
 Simon Spero did a now famous (infamous?) analysis of LCSH's structure that started with Biology and ended with Doorbells.
One of my favourite exercises from library school is perhaps one that you had to do as well. We were instructed to find a particular term from the Library of Congress Subject Headings (the “red books”) and develop that term into a topic map that would illustrate the relationships between the chosen term and its designated broader terms, narrower terms and related terms. Try as I might, I cannot remember the term that I used in my assignment so many years ago, so here is such a mapping for existentialism.
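The kind of mapping that exercise produced is a small syndetic structure: each heading carries its broader terms (BT), narrower terms (NT) and related terms (RT), and you can walk the broader-term links up the hierarchy. A minimal sketch in Python — the headings and relationships here are illustrative, not copied from LCSH:

```python
# Toy syndetic structure: each heading lists broader (BT), narrower (NT),
# and related (RT) terms. Headings and links are illustrative only.
headings = {
    "Existentialism": {
        "BT": ["Philosophy, Modern"],
        "NT": ["Existential psychology"],
        "RT": ["Phenomenology"],
    },
    "Philosophy, Modern": {"BT": ["Philosophy"], "NT": ["Existentialism"], "RT": []},
    "Philosophy": {"BT": [], "NT": ["Philosophy, Modern"], "RT": []},
}

def broader_chain(term):
    """Follow the first broader-term link up to the top of the hierarchy."""
    chain = [term]
    while headings.get(term, {}).get("BT"):
        term = headings[term]["BT"][0]
        chain.append(term)
    return chain

print(broader_chain("Existentialism"))
# ['Existentialism', 'Philosophy, Modern', 'Philosophy']
```

Even a toy like this makes the point of the exercise: the value of a heading lies less in the string itself than in the web of relationships around it.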
Recently we’ve been paying much attention to the language of these subject headings as we come to recognize those particular headings that are reductive and problematic. For example, undocumented students are denied their basic humanity when they are described as illegal aliens. And as most of you already know, the act of reforming this particular heading was seriously hindered by Republicans in the House of Representatives.
As troubling as this interference is, this is not what I want to write about LCSH for you today. For this post, I want to bring greater attention to something else about subject headings. I want to share something that Karen Coyle has pointed out repeatedly but that I have only recently finally grokked.
When we moved to online library catalogues, we stripped all the relationship context from our subject headings — all those related terms, broader terms, all those relationships that placed a concept in relationship with other concepts. As such, all of our subject headings may as well be ‘tags’ for how they are used in our systems. Furthermore, the newer standards that are being developed to replace MARC (FRBR, Bibframe, RDF) either don’t capture this information or, if they do, the systems being developed around these standards do not use these subject relationships or preserve subject ordering [ed. text corrected].
From the slides of “How not to waste catalogers’ time: Making the most of subject headings“, a code4lib presentation from John Mark Ockerbloom:

Here’s another way we can view and explore works on a particular subject. This is a catalog I’ve built of public domain and other freely readable texts available on the Internet. It organizes works based on an awareness of subjects and how subjects are cataloged. The works we see at the top of the list on the right, for instance, tend to be works where “United States – History – Revolution, 1775-1783” was the first subject assigned. Books where that subject was further down their subject list tend to appear further down in this list. I worry about whether I’ll still be able to do this when catalogs migrate to RDF. [You just heard in the last talk] that in RDF, unlike in MARC, you have to go out of your way to preserve property ordering. So here’s my plea to you who are developing RDF catalogs: PLEASE GO OUT OF YOUR WAY AND PRESERVE SUBJECT ORDERING!
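One way an RDF catalog can honor that plea is to serialize subject headings as an ordered RDF collection rather than an unordered set of triples. In JSON-LD, the @list keyword does exactly this. A minimal sketch, assuming a JSON-LD serialization; the record URI and the use of dc:subject here are illustrative:

```python
import json

# JSON-LD sketch: "@list" produces an ordered RDF collection, so the
# position of each subject heading survives serialization and round-trips.
record = {
    "@id": "http://example.org/record/1",
    "dc:subject": {
        "@list": [
            "United States--History--Revolution, 1775-1783",
            "Soldiers--United States--Correspondence",
        ]
    },
}

# Round-trip through serialization: the first-assigned heading stays first.
round_tripped = json.loads(json.dumps(record))
print(round_tripped["dc:subject"]["@list"][0])
# United States--History--Revolution, 1775-1783
```

The design point is simply that ordering must be opted into: a bare set of subject triples carries no sequence, so a catalog that cares which heading was assigned first has to say so explicitly in the data model.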
I highly recommend reading Karen Coyle’s series of posts on Catalog and Context in which she patiently presents the reader the history and context of why Library of Congress Subject Headings were developed, how they were used and then explains what has been lost and why.
It begins like this:
Imagine that you do a search in your GPS system and are given the exact point of the address, but nothing more.
Without some context showing where on the planet the point exists, having the exact location, while accurate, is not useful.
In essence, this is what we provide to users of our catalogs. They do a search and we reply with bibliographic items that meet the letter of that search, but with no context about where those items fit into any knowledge map.
And what was lost? While our online catalogs make known-item searching very simple, our catalogues are terrible! dismal! horrible! for discovery and exploration.
Perhaps this is one of the reasons why there is so much interest in outsider-libraries that are built for discovery, like The Prelinger Library.
This remarkable library – which is run by only two people – turns a collection of ephemera, found material and library discards into a collection built for visual inspiration and support of the independent scholar, through careful selection and a unique arrangement that was developed by Megan Prelinger:
Inspired by Aby Warburg’s “law of the good neighbor,” the Prelinger Library’s organization does not follow conventional classification systems such as the Dewey Decimal System. Instead it was custom-designed by Megan Shaw Prelinger in a way that would allow visitors to browse and encounter titles by accident or, better yet, by good fortune. Furthermore, somewhat evoking the shifts in magnitude at play in Charles and Ray Eames’s Powers of Ten (1977), the shelves’ contents are arranged according to a geospatial model, departing from the local material specifically originating from or dealing with San Francisco and ending with the cosmic, where books on both outer space and science fiction are combined with the more ethereal realms of math, religion, and philosophy.
Of particular note: The Prelinger Library does not have a library catalogue and they don’t support query based research. They think query based research is reductive (Situated Systems, Issue 3: The Prelinger Library).
One thing I wonder: why do we suggest that catalogers work the reference desk but don't suggest that reference folks work in cataloging?
— Erin Leach (@erinaleach) June 26, 2016
Frankly, I’m embarrassed by how little I know about the intellectual work behind the systems that I use and teach as a liaison librarian. I do understand that libraries, like many other organizations such as museums, theatres and restaurants, have a “front of house” and “back of house” with separate practices and cultures, and that there are very good reasons for specialization. That being said, I believe that the force of digitization has collapsed the space between the public and technical services of the library. In fact, I would go as far as to say that the separation is largely a product of past organizational practice and it doesn’t make much sense anymore.
Inspired by Karen Coyle, Christina Harlow, and the very good people of mashcat, I’m working on improving my own understanding of the systems and if you are interested, you can follow my readings in this pursuit on my reading journal, Reading is Becoming. It contains quotes like this:
GV: You mentioned “media archeology” and I was wondering if you’re referring to any of Shannon Mattern’s work…
RP: Well, she’s one of the smartest people in the world. What Shannon Mattern does that’s super-interesting is she teaches both urban space and she teaches libraries and archives. And it occurred to me after looking at her syllabi — and I know she’s thought about this a lot, but one model for thinking about archives in libraries — you know, Megan was the creator of the specialized taxonomy for this place, but in a broader sense, collections are cities. You know, there’s neighborhoods of enclosure and openness. There’s areas of interchange. There’s a kind of morphology of growth which nobody’s really examined yet. But I think it’s a really productive metaphor for thinking about what the specialty archives have been and what they might be. [Mattern’s] work is leading in that position. She teaches a library in her class.
I understand the importance of taking a critical stance towards the classification systems of our libraries and recognizing when these systems use language that is offensive or unkind to the populations we serve. But critique is not enough. These are our systems, and the responsibility to amend them, to improve them, to re-imagine them, and to re-build them as necessary – these are the responsibilities of our profession.
We know where we need to go. We already have a map.