Feed aggregator

Aaron Schmidt: Library Websites Worth Looking At

planet code4lib - Tue, 2014-10-21 14:00

Last week Chris Zammarelli asked Amanda Etches and me for some library website inspiration. So we decided to compile a short list of some sites that we’re liking right now. If we missed one that you really like, please holler!

Hennepin County Library

We like:

The huge search box. The visual design of the site is pleasant, but the best part of the HCL website is the catalog integration. Totally into it. Search results are legible, and bib records aren’t filled with junk that people don’t want to see (though additional information is available below).

Red flags:

At 1440 x 900, there’s some odd white space on the left of most pages. (A somewhat minor gripe, to be sure.)

Addison Public Library

We like:

Legible typography, calm visual design, restrained content.

Red flags:

Wish the search box was a bit bigger, but it is in a conventional location so maybe that’s okay. Also, the site uses the classic and ever popular public library audience segmentation of kids/teens/adults. We understand the problem that this solves but think there’s probably a better solution out there somewhere.

MIT Libraries

We like:

Great homepage! Nice, clear, bold typography. Useful content.

Red flags:

Catalog isn’t integrated; lots of content is thrown into link-laden LibGuides.

CSU Channel Islands John Spoor Broome Library

We like:

Another great homepage! Very welcoming, with friendly writing. Task-oriented, with a big search box.

Red flags:

Not responsive.

Open Library: Open Library Scheduled Hardware Maintenance

planet code4lib - Tue, 2014-10-21 13:59

Open Library will be down from 5:00PM to 6:00PM SF Time (PDT, UTC/GMT -7 hours) on Tuesday October 21, 2014 due to scheduled hardware maintenance.

We’ll post updates here and on @openlibrary twitter.

Thank you for your cooperation.

Library of Congress: The Signal: New Season, New Viewshare

planet code4lib - Tue, 2014-10-21 13:49

The following is a guest post by NDIIPP summer intern Elizabeth Tobey. Liz is a graduate student in the Master of Library Science program at the University of Maryland.

Along with the fall weather, food, activities and the new layer of clothes that are now necessary, this season also brings us a new and improved Viewshare. The new Viewshare has all the capabilities of the previous version but has a simplified workflow; an improved, streamlined look; and larger and more legible graphics in its views.

Originally launched in 2011, Viewshare is visualization software that libraries, archives and museums can use for free to generate “views” of their digital collections. Users have discovered a multitude of applications for Viewshare, including visualizations of LAM (Library, Archives and Museum) collections’ data, representation of data sets in academic scholarship and student use of Viewshare in library science classwork.

The new version of Viewshare has streamlined the workflow so that users can proceed directly from uploading data sets to creating views. The old Viewshare divided this process into three distinct stages: uploading records, augmenting data fields and creating/sharing views. While all these functions are still part of the Viewshare workflow, the new Viewshare accelerates the process by creating your first view for you directly from the imported data.

Once you have uploaded your data from the web or from a file on your computer, the fields will immediately populate records in a List View of your collection. You can immediately start reviewing the uploaded records in the List View and, if you choose, begin creating additional views once you save your data set.

List View of uploaded data set.

Once you save your data set, you can start adding new views immediately.

As in the old version of Viewshare, you will need to augment some of your data fields in order to get the best results when creating certain types of views, such as maps based upon geographical location or timelines based upon date. Viewshare still needs to generate latitudinal/longitudinal coordinates for locations and standardize dates, but the augmentation process has been simplified.

In the new Viewshare, you can create an augmented field by clicking on the blue “Add a Property” button and entering information into the dialog box about the augmented field and the fields you wish to base it upon. Here, the user is creating an augmented date field for use in a timeline:

Augmenting fields has also been streamlined.

Once you hit the “Create Property” button, Viewshare automatically starts augmenting the data. A status bar at the top of the window alerts the user when the field has been created successfully. The new field appears at the very top of the field list:

A status bar alerts users to the progress of augmenting fields.

Another great feature of the new Viewshare is that whenever you make changes to a record field (such as changing a field type from text to date), Viewshare saves those changes automatically. (However, you still need to remember to hit the “Save” button for any new views or widgets you create!).

The views in the new Viewshare have larger, more readable graphics than in the previous version. Here is an example of a pie chart showing conference participation data in the old Viewshare:

Old pie chart view.

The pie chart takes up only about a third of the screen width and is tilted at an angle. Here is the same view in the new Viewshare:

Improved pie chart view.

Here, the pie chart occupies more than half of the screen and is displayed flat rather than tilted. This new style of view renders Viewshare graphics much more legible, especially when projected onto a screen.

Lastly, Viewshare has been redesigned with a simplified, streamlined interface that is as pleasing to the eye as it is easy to use. Unlike the old Viewshare, where a user’s data sets and views were listed under different tabs, the new Viewshare consolidates them into one dashboard:

Improved dashboard.

Navigation has also been streamlined. Instead of multiple navigation options (a top menu and two sets of tabs) in the old Viewshare, the navigation options have been consolidated into a dropdown menu at the upper right-hand corner of the browser window. Thus, it is easier for users to find the information they need.

Some users may wonder whether the new Viewshare will affect existing data sets and views they have created. Viewshare’s designers have already thought of this, and, rest assured, all existing accounts, data sets and views will be migrated from the old version to the new version. Users will still be able to access, view, embed and share data sets that they uploaded in the past.

Many of the changes to Viewshare were influenced directly by user feedback about the older version. Here at the Library of Congress we are eager to hear your suggestions about improving Viewshare and about any problems you encounter in its use. Please feel free to report your problems and suggestions by clicking on the green “Feedback” tab on the Viewshare website. You should also feel free to add your comments and contact information in the comment form below.

Enjoy the rest of fall, and make sure to take time to check out Viewshare’s new features and look!

Jason Ronallo: HTML5 Video Caption Cue Settings in WebVTT

planet code4lib - Tue, 2014-10-21 13:25

TL;DR Check out my tool to better understand how cue settings position captions for HTML5 video.

Having video be a part of the Web with HTML5 <video> opens up a lot of new opportunities for creating rich video experiences. Being able to style video with CSS and control it with the JavaScript API makes it possible to do fun stuff and to create accessible players and a consistent experience across browsers. With better support in browsers for timed text tracks in the <track> element, I hope to see more captioned video.

An important consideration in creating really professional looking closed captions is placing them correctly. I don’t rely on captions, but I do increasingly turn them on to improve my viewing experience. I’ve come to appreciate some attributes of really well done captions. Accuracy is certainly important: the captions should match the words spoken. As someone who can hear, I see inaccurate captions all too often. Thoroughness is another factor. Are all the sounds important for the action represented in the captions? Captions will also include a “music” caption, but other sounds, especially those off screen, are often omitted. But accuracy and thoroughness aren’t the only factors to consider when evaluating caption quality.

Placement of captions can be equally important. The captions should not block other important content. They should not run off the edge of the screen. If two speakers are on screen you want the appropriate captions to be placed near each speaker. If a sound or voice is coming from off screen, the caption is best placed as close to the source as possible. These extra clues can help with understanding the content and action. These are the basics. There are other style guidelines for producing good captions. Producing good captions is something of an art form. More than two rows long is usually too much, and rows ought to be split at phrase breaks. Periods should be used to end sentences and are usually the end of a single cue. There’s judgment necessary to have pleasing phrasing.

While there are tools for doing this proper placement for television and burned-in captions, I haven’t found a tool for this for Web video. Though I don’t yet have a tool to do this, in the following I’ll show you how to:

  • Use the JavaScript API to dynamically change cue text and settings.
  • Control placement of captions for your HTML5 video using cue settings.
  • Play around with different cue settings to better understand how they work.
  • Style captions with CSS.

Track and Cue JavaScript API

The <video> element has an API which allows you to get a list of all tracks for that video.

Let’s say we have the following video markup which is the only video on the page. This video is embedded far below, so you should be able to run these in the console of your developer tools right now.

<video poster="soybean-talk-clip.png" controls autoplay loop>
  <source src="soybean-talk-clip.mp4" type="video/mp4">
  <track label="Captions" kind="captions" srclang="en" src="soybean-talk-clip.vtt" id="soybean-talk-clip-captions" default>
</video>

Here we get the first video on the page:

var video = document.getElementsByTagName('video')[0];

You can then get all the tracks (in this case just one) with the following:

var tracks = video.textTracks; // returns a TextTrackList
var track = tracks[0]; // returns TextTrack

Alternately, if your track element has an id you can get it more directly:

var track = document.getElementById('soybean-talk-clip-captions').track;

Once you have the track you can see the kind, label, and language:

track.kind; // "captions"
track.label; // "Captions"
track.language; // "en"

You can also get all the cues as a TextTrackCueList:

var cues = track.cues; // TextTrackCueList

In our example we have just two cues. We can also get just the active cues (in this case only one so far):

var active_cues = track.activeCues; // TextTrackCueList

Now we can see the text of the current cue:

var text = active_cues[0].text;

Now the really interesting part is that we can change the text of the caption dynamically and it will immediately change:

track.activeCues[0].text = "This is a completely different caption text!!!!1";

Cue Settings

We can also then change the position of the cue using cue settings. The following will move the first active cue to the top of the video.

track.activeCues[0].line = 1;

The cue can also be aligned to the start of the line position:

track.activeCues[0].align = "start";

Now for one last trick we’ll add another cue with the arguments of start time and end time in seconds and the cue text:

var new_cue = new VTTCue(1, 30, "This is the text of the new cue.");

We’ll set a position for our new cue before we place it in the track:

new_cue.line = 5;

Then we can add the cue to the track:

track.addCue(new_cue);

And now you should see your new cue for most of the duration of the video.
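
Putting the pieces together, here’s a minimal end-to-end sketch of the steps above (my own consolidation of the calls already shown, assuming the same markup; run it in the console once the video has loaded):

var video = document.getElementsByTagName('video')[0];
var track = video.textTracks[0];
track.mode = "showing"; // make sure the captions are being displayed
var another_cue = new VTTCue(1, 30, "Another cue added from the console.");
another_cue.line = 5; // set cue settings before adding the cue to the track
track.addCue(another_cue);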

Playing with Cue Settings

The other settings you can play with include position and size. Position is the text position as a percentage of the width of the video. Size is the width of the cue as a percentage of the width of the video.
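
For example, here’s a hedged sketch of setting both on the first active cue (the values are arbitrary):

var cue = track.activeCues[0];
cue.position = 10; // text position: 10% of the video width
cue.size = 50; // cue box width: 50% of the video width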

While I could go through all of the different cue settings, I found it easier to understand them after I built a demonstration of dynamically changing all the cue settings. There you can play around with all the settings together to see how they actually interact with each other.

At least as of the time of this writing there is some variability between how different browsers apply these settings.

Test WebVTT Cue Settings and Styling

Cue Settings in WebVTT

I’m honestly still a bit confused about all of the optional ways in which cue settings can be defined in WebVTT. The demonstration outputs the simplest and most straightforward representation of cue settings. You’d have to read the spec for optional ways to apply some cue settings in WebVTT.
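
As a rough illustration (a hand-written sketch, not output from the demonstration), cue settings are appended after the timing line of a cue in the WebVTT file, separated by spaces:

WEBVTT

00:00:01.000 --> 00:00:30.000 line:5 align:start position:50% size:80%
This cue uses the line, align, position and size settings.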

Styling Cues

In browsers that support styling of cues (Chrome, Opera, Safari), the demonstration also allows you to apply styling to cues in a few different ways. This CSS code is included in the demo to show some simple examples of styling.

::cue(.red){ color: red; }
::cue(.blue){ color: blue; }
::cue(.green){ color: green; }
::cue(.yellow){ color: yellow; }
::cue(.background-red){ background-color: red; }
::cue(.background-blue){ background-color: blue; }
::cue(.background-green){ background-color: green; }
::cue(.background-yellow){ background-color: yellow; }

Then the following cue text can be added to show red text with a yellow background:

<c.red.background-yellow>This cue has red text with a yellow background.</c>

In the demo you can see which text styles are supported by which browsers for styling the ::cue pseudo-element. There’s a text box at the bottom that allows you to enter any arbitrary styles and see what effect they have.

Example Video

Test WebVTT Cue Settings and Styling

FOSS4Lib Recent Releases: ArchivesSpace - 1.1.0

planet code4lib - Tue, 2014-10-21 12:53
Package: ArchivesSpace
Release Date: Tuesday, October 21, 2014


The ArchivesSpace team is happy to release version v1.1.0.

Please see the documentation for information on how to upgrade your ArchivesSpace installs.

This release includes upgrading Rails to 3.2.19, which addresses another important security patch. It is recommended that users update ArchivesSpace in order to apply this patch.

Jodi Schneider: Genre defined, a quote from John Swales

planet code4lib - Tue, 2014-10-21 12:06

A genre comprises a class of communicative events, the members of which share some set of communicative purposes. These purposes are recognized by the expert members of the parent discourse community and thereby constitute the rationale for the genre. This rationale shapes the schematic structure of the discourse and influences and constrains choice of content and style. Communicative purpose is both a privileged criterion and one that operates to keep the scope of a genre as here conceived narrowly focused on comparable rhetorical action. In addition to purpose, exemplars of a genre exhibit various patterns of similarity in terms of structure, style, content and intended audience. If all high probability expectations are realized, the exemplar will be viewed as prototypical by the parent discourse community. The genre names inherited and produced by discourse communities and imported by others constitute valuable ethnographic communication, but typically need further validation.1

  1. Genre defined, from John M. Swales, page 58, Chapter 3 “The concept of genre” in Genre Analysis: English in Academic and Research Settings, Cambridge University Press, 1990. Reprinted with other selections in The Discourse Studies Reader: Main Currents in Theory and Analysis (see pages 305-316).

Open Knowledge Foundation: Storytelling with Infogr.am

planet code4lib - Tue, 2014-10-21 11:55

As we well know, data is only data until you use it for storytelling and insights. Some people are super talented and can use D3 or other amazing visual tools; just see this great list of resources on Visualising Advocacy. In this 1-hour Community Session, Nika Aleksejeva of Infogr.am shares some easy ways that you can get started with simple data visualizations. Her talk also includes tips for telling a great story and some thoughtful comments on when to use various data viz techniques.

We’d love you to join us and do a skillshare on tools and techniques. Really, we are tool agnostic and simply want to share with the community. Please do get in touch to learn more about Community Sessions.

Open Knowledge Foundation: New Open Knowledge Initiative on the Future of Open Access in the Humanities and Social Sciences

planet code4lib - Tue, 2014-10-21 10:58

To coincide with Open Access Week, Open Knowledge is launching a new initiative focusing on the future of open access in the humanities and social sciences.

The Future of Scholarship project aims to build a stronger, better connected network of people interested in open access in the humanities and social sciences. It will serve as a central point of reference for leading voices, examples, practical advice and critical debate about the future of humanities and social sciences scholarship on the web.

If you’d like to join us and hear about new resources and developments in this area, please leave us your details and we’ll be in touch.

For now we’ll leave you with some thoughts on why open access to humanities and social science scholarship matters:

“Open access is important because it can give power and resources back to academics and universities; because it rightly makes research more widely and publicly available; and because, like it or not, it’s beginning and this is our brief chance to shape its future so that it benefits all of us in the humanities and social sciences” – Robert Eaglestone, Professor of Contemporary Literature and Thought, Royal Holloway, University of London.

*

“For scholars, open access is the most important movement of our times. It offers an unprecedented opportunity to open up our research to the world, irrespective of readers’ geographical, institutional or financial limitations. We cannot falter in pursuing a fair academic landscape that facilitates such a shift, without transferring prohibitive costs onto scholars themselves in order to maintain unsustainable levels of profit for some parts of the commercial publishing industry.” Dr Caroline Edwards, Lecturer in Modern & Contemporary Literature, Birkbeck, University of London and Co-Founder of the Open Library of Humanities

*

“If you write to be read, to encourage critical thinking and to educate, then why wouldn’t you disseminate your work as far as possible? Open access is the answer.” – Martin Eve, Co-Founder of the Open Library of Humanities and Lecturer, University of Lincoln.

*

“Our open access monograph The History Manifesto argues for breaking down the barriers between academics and wider publics: open-access publication achieved that. The impact was immediate, global and uniquely gratifying–a chance to inject ideas straight into the bloodstream of civic discussion around the world. Kudos to Cambridge University Press for supporting innovation!” — David Armitage, Professor and Chair of the Department of History, Harvard University and co-author of The History Manifesto

*

“Technology allows for efficient worldwide dissemination of research and scholarship. But closed distribution models can get in the way. Open access helps to fulfill the promise of the digital age. It benefits the public by making knowledge freely available to everyone, not hidden behind paywalls. It also benefits authors by maximizing the impact and dissemination of their work.” – Jennifer Jenkins, Senior Lecturing Fellow and Director, Center for the Study of the Public Domain, Duke University

*

“Unhappy with your current democracy providers? Work for political and institutional change by making your research open access and joining the struggle for the democratization of democracy” – Gary Hall, co-founder of Open Humanities Press and Professor of Media and Performing Arts, Coventry University

District Dispatch: I’m right! Librarians have to think

planet code4lib - Tue, 2014-10-21 09:00

I will pat myself on the back (somebody has to). I wrote in the 2004 edition of Copyright Copyright, “Fair use cannot be reduced to a checklist. Fair use requires that people think.” This point has been affirmed (pdf) by the Eleventh Circuit Court of Appeals in the long-standing Georgia State University (GSU) e-reserves copyright case. The appeals court rejected the lower court’s use of quantitative fair use guidelines in making its fair use ruling, stating that fair use should be determined on a case-by-case basis and that the four factors of fair use should be evaluated and weighed.

Lesson: Guidelines are arbitrary and silly. Determine fair use by considering the evidence before you. (see an earlier District Dispatch article).

The lower court decision was called a win for higher education and libraries because only five assertions of infringement (out of 99) were actually infringing. Hooray for us! But most stakeholders on both sides of the issue felt that the use of guidelines in weighing the third factor, the amount of the work used, was puzzling to say the least (but no matter, we won!).

Now that the case has been sent back to the lower court, some assert that GSU has lost the case. But not so fast. This decision validates what the U.S. Supreme Court has long held: that fair use is not to be simplified with “bright line rules, for the statute, like the doctrine it recognizes, calls for case-by-case analysis. . . . Nor may the four statutory factors be treated in isolation, one from another. All are to be explored, and the results weighed together, in light of the purposes of copyright.” (510 U.S. 569, 577–78).

Thus, GSU could prevail. Or it might not. But at least fair use will be applied in the appropriate fashion.

Thinking—it’s a good thing.


HangingTogether: BIBFRAME Testing and Implementation

planet code4lib - Tue, 2014-10-21 08:00

That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Philip Schreur of Stanford. We were fortunate that staff from several BIBFRAME testers participated: Columbia, Cornell, George Washington University, Princeton, Stanford and University of Washington. They shared their experiences and tips with others who are still monitoring BIBFRAME developments.

Much of the testers’ focus has been on data evaluation and identifying problems or errors in converting MARC records to BIBFRAME using either the BIBFRAME Comparison Service or Transformation Service. Some have started to create BIBFRAME data from scratch using the BIBFRAME Editor. This raised a concern among managers about how much time and staffing was needed to conduct this testing. Several institutions have followed Stanford’s advice and enrolled staff in the Library Juice Academy series to gain competency in XML and RDF-based systems, a good skill set to have for digital library and linked data work, not just for BIBFRAME. Others are taking Zepheira’s Linked Data and BIBFRAME Practical Practitioner Training course. The Music Library Association’s Bibliographic Control Committee has created a BIBFRAME Task Force focusing on how LC’s MARC-to-BIBFRAME converter handles music materials.

Rather than looking at how MARC data looks in BIBFRAME, people should be thinking about how RDA (Resource Description and Access) works with BIBFRAME. We shouldn’t be too concerned if BIBFRAME doesn’t handle all the MARC fields and subfields, as many are rarely used anyway. See for example Roy Tennant’s “MARC Usage in WorldCat”, which shows the fields and subfields that are actually used in WorldCat, and how they are used, by format. (Data is available by quarters in 2013 and for 1 January 2013 and 1 January 2014, now issued annually.) Caveat: a field/subfield might be used rarely, but be very important when it occurs. For example, a Participant/Performer note (511) is mostly used in visual materials and recordings; for maps, scale is incredibly important. People agreed the focus should be on the most frequently used fields first.

Moving beyond MARC gives libraries an opportunity to identify entities as “things not strings”. RDA was considered “way too stringy” for linked data. The metadata managers mentioned the desire to use various identifiers, including id.loc.gov, FAST, ISNI, ORCID, VIAF and OCLC WorkIDs.  Sometimes transcribed data would still be useful, e.g., a place of publication that has changed names. Many still questioned how authority data fits into BIBFRAME (we had a separate discussion earlier this year on Implications of BIBFRAME Authorities.) Core vocabularies need to be maintained and extended in one place so that everyone can take advantage of each other’s work.
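
As a hedged sketch of what “things not strings” looks like in practice (written in Turtle, per the tip in the list below; the vocabulary terms are simplified and the identifiers are made up for illustration, not output from any converter):

@prefix bf: <http://bibframe.org/vocab/> .

# The work is a "thing" with its own URI, and so is its creator.
<http://example.org/work/1> a bf:Work ;
    bf:title "Example work title" ;
    bf:creator <http://viaf.org/viaf/00000000> .

Transcribed data, such as a place of publication that has changed names, can still be carried along as a literal string where that is useful.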

Several noted “floundering” due to insufficient information about how the BIBFRAME model was to be applied. In particular, it is not always clear how to differentiate FRBR “works” from BIBFRAME “works”. There may never be a consensus on what a “work” is between “FRBR and non-FRBR people”. Concentrate instead on identifying the relationships among entities. If you have an English translation linked to a German translation linked to a work originally published in Danish, does it really matter whether you consider the translations separate works or expressions?

Will we still have the concept of “database of record”? Stanford currently has two databases of record, one for the ILS and one for the digital library. A triple store will become the database of record for materials not expressed in MARC or MODS.  This raised the question of developing a converter for MODS used by digital collections. Columbia, LC and Stanford have been collaborating on mapping MODS to BIBFRAME. Colorado College has done some sample MODS to BIBFRAME transformations.

How do managers justify the time and effort spent on BIBFRAME testing to administrators and other colleagues? Currently we do not have new services built upon linked data to demonstrate the value of this investment. The use cases developed by the Linked Data for Libraries project offer a vision of what could be done in a linked data environment that can’t be done now. A user interface is needed to show others what the new data will look like; pulling data from external resources is the most compelling use case.

Tips offered:

  • The LC Converter has a steep learning curve; to convert MARC data into BIBFRAME, use Terry Reese’s MARCEdit MARCNext Bibframe Testbed, which also converts EADs (Encoded Archival Descriptions). See Terry’s blog post introducing the MARCNext toolkit.
  • Use Turtle rather than XML to look at records (less verbose).
  • Use subfield 0 (authority record control number) when including identifiers in MARC access points (several requested that OCLC start using $0 in WorldCat records).

About Karen Smith-Yoshimura

Karen Smith-Yoshimura, program officer, works on topics related to renovating descriptive and organizing practices with a focus on large research libraries and area studies requirements.

PeerLibrary: PeerLibrary Facebook

planet code4lib - Tue, 2014-10-21 02:48

Please follow and support us on our recently-opened Facebook page: https://www.facebook.com/PeerLibrary

PeerLibrary: Knight News Challenge Update

planet code4lib - Tue, 2014-10-21 02:46

Semifinalists for the Knight News Challenge will be chosen tomorrow and the refinement period will begin. This is your last chance to show your support for our submission before the next stage of the competition. The Knight Foundation is asking “How might we leverage libraries as a platform to build more knowledgeable communities?” We believe that PeerLibrary closely parallels the theme of the challenge and provides an answer to the foundation’s question. By facilitating a community of independent learners and promoting collaborative reading and discussion of academic resources, PeerLibrary is modernizing the concept of a library in order to educate and enrich the global community. Please help us improve our proposal, give us feedback, and wish PeerLibrary good luck in the next stage of the Knight News Challenge.

Peter Murray: Case Studies on Open Source Adoption in Libraries: Koha, CoralERM, and Kuali OLE

planet code4lib - Tue, 2014-10-21 02:12


LYRASIS has published three open source software case studies on FOSS4LIB.org as part of its continuing support and services for libraries and other cultural heritage organizations interested in learning about, evaluating, adopting, and using open source software systems.

With support from a grant from The Andrew W. Mellon Foundation, LYRASIS asked academic and public libraries to share their experiences with open source systems, such as content repositories, integrated library systems, and websites. Of the submitted proposals, LYRASIS selected three concepts for development into case studies from Crawford County Federated Library System (Koha), Fenway Libraries Online (Coral), and the University of Chicago Library (Kuali OLE). The three selected organizations then prepared narrative descriptions of their experience and learning, to provide models, advice, and ideas for others.

Each case study details how the organization handled the evaluation, selection, adoption, conversion, and implementation of the open source system. They also include the rationale for going with an open source solution. The case studies all provide valuable information and insights, including:

  • Actual experiences, both good and bad
  • Steps, decision points, and processes used in evaluation, selection, and implementation
  • Factors that led to selection of an open source system
  • Organization-wide involvement of and impact on staff and patrons
  • Useful tools created or applied to enhance the open source system and/or expand its functionality, usefulness, or benefit
  • Plans for ongoing support and future enhancement
  • Key takeaways from the process, including what worked well, what didn’t work as planned, and what the organization might do differently in the future

The goal of freely offering these case studies to the public is to help cultural heritage organizations use firsthand experience with open source to inform their evaluation and decision-making process, the same objective of FOSS4LIB.org. While open source software is typically available at no cost, these case studies provide tangible examples of the associated costs, time, energy, commitment and resources required to effectively leverage open source software and participate in the community.

“These three organizations expertly outline the in-depth process of selecting and implementing open source software with insight, humor, candor and clarity. LYRASIS is honored to work with these organizations to share this invaluable information with the larger community,” stated Kate Nevins, Executive Director of LYRASIS. “The case studies exemplify the importance of understanding the options and experiences necessary to fully utilize open source software solutions.”


Tara Robertson: The Library Juice Press Handbook of Intellectual Freedom

planet code4lib - Tue, 2014-10-21 00:28

Ahhhh! It’s done!

This project took over 7 years and went through a few big iterations. I was just finishing library school when it started and learned a lot from the other advisory board members. I appreciate how the much more experienced folks on the advisory board helped bring me up to speed on issues I was less familiar with, and how they treated me, even though I was just a student.

It was published this spring but my copy just arrived in the mail.  Here’s the page about the book on the Library Juice Press site, and here’s where you can order a copy on Amazon.

Tara Robertson: Porn in the library

planet code4lib - Tue, 2014-10-21 00:09

At the Gender and Sexuality in Information Studies Colloquium the program session I was the most excited about was Porn in the library. There were 3 presentations in this panel exploring this theme.

First, Joan Beaudoin and Elaine Ménard presented The P Project: Scope Notes and Literary Warrant Required! Their study looked at 22 websites that are aggregators of free porn clips. Most of these sites were in English, but a few were in French. Ménard acknowledged that it is risky and sometimes uncomfortable to study porn in the academy. They looked at the terminology used to describe porn videos, specifically the categories available to access porn videos. They described their coding manual, which outlined various metadata facets (activity, age, cinematography, company/producers, ethnicity, gender, genre, illustration/cartoon, individual/stars, instruction, number of individuals, objects, physical characteristics, role, setting, sexual orientation). I learned that xhamster has scope notes for their various categories (mouseover the lightbulb icon to see).

While I appreciate that Beaudoin and Ménard are taking a risk to look at porn, I think they made the mistake of using very clinical language to legitimize and sanitize their work. I’m curious why they are so interested in porn, but realize that it might be too risky for them to situate themselves in their research.

It didn’t seem like they understood the difference between production company websites and free aggregator sites. Production company sites have very robust and high quality metadata and excellent information architecture. Free aggregator sites have variable quality metadata and likely have a business model that is based on ads or referring users to the main production company websites. Porn is, after all, a content business, and most porn companies are invested in making their content findable, and making it easy for the user to find more content with the same performers, same genre, or by the same director.

Beaudoin and Ménard expressed disappointment that porn companies didn’t want to participate in their study. As these two researchers don’t seem to understand the porn industry or have relationships with individuals I don’t think it’s surprising at all. For them to successfully build on this line of inquiry I think they need to have some skin in the game and clearly articulate what they offer their research subjects in exchange for building their own academic capital.

It was awesome to have a quick Twitter conversation with Jiz Lee and Chris Lowrance, the web manager for feminist porn company Pink and White Productions, about how the terms a consumer might be looking for are sometimes prioritized over the performers’ own gender identity.

Jiz Lee is a genderqueer porn performer who uses the pronouns they/them and is sometimes misgendered by mainstream porn and by feminist porn. I am a huge fan of their work.

I think this is the same issue that Amber Billey, Emily Drabinski and K.R. Roberto raise in their paper What’s gender got to do with it? A critique of RDA rule 9.7. They argue that it is regressive for a cataloguer to assign a binary gender value to an author. In both these cases someone (porn company, consumer, or cataloguer) is assigning gender to someone else (porn performer or content creator). This process can be disrespectful, offensive and inaccurate, and it highlights a power dynamic where the consumer’s (porn viewer or researcher/student/librarian) desires/politics/needs/worldview is put above someone’s own identity.

Next, Lisa Sloniowski and Bobby Noble presented Fisting the Library: Feminist Porn and Academic Libraries (which is the best paper title ever). I’ve been really excited about their SSHRC-funded porn archive research. This research project has become more of a conceptual project, rather than building a brick-and-mortar porn archive. Bobby talked about the challenging process of getting his porn studies class going at York University. Lisa talked about how they initially hoped to start a porn collection as part of York University Library’s main collection, not as a reading room or a marginal collection. Lisa spoke about the challenges of drafting a collection development policy and some of the labour issues, presumably about staff who were uncomfortable with porn having to order, catalogue, process and circulate it. They also talked about the Feminist Porn Awards and the second feminist porn conference that took place before the Feminist Porn Awards last year.

Finally, Emily Lawrence and Richard Fry presented Pornography, Bomb Building and Good Intentions: What would it take for an internet filter to work? They presented a philosophical argument against internet filters. They argued that for a filter to neither overblock nor underblock, it would need to be both mind reading and fortune telling. A filter would need to be able to read an individual’s mind, noting factors like the person viewing, their values, their mood, etc., and be fortune telling by knowing exactly what information the user was seeking before they looked at it. I’ve been thinking about internet filtering a lot lately, because of Vancouver Public Library’s recent policy change that forbids “sexually explicit images”. I was hoping to get a new or deeper understanding of filtering but was disappointed.

This colloquium was really exciting for me. The conversations that people on the porn in the library panel were having are discussions I haven’t heard elsewhere in librarianship. I look forward to talking about porn in the library more.

Library Tech Talk (U of Michigan): Practical Relevance Ranking for 11 Million Books

planet code4lib - Tue, 2014-10-21 00:00
Relevance is a complex concept which reflects aspects of a query, a document, and the user as well as contextual factors. Relevance involves many factors such as the user's preferences, task, stage in their information-seeking, domain knowledge, intent, and the context of a particular search. Tom Burton-West, one of the HathiTrust developers, has been working on practical relevance ranking for all the volumes in HathiTrust for a number of years.

DuraSpace News: NEW tool for Archiving Social Media–Instagram and Facebook

planet code4lib - Tue, 2014-10-21 00:00

From Jon Ippolito, Professor of New Media, Director, Digital Curation graduate program, The University of Maine

Orono, ME: Digital conservator Dragan Espenschied and the crew at Rhizome, one of the leading platforms for new media art, have created a tool for archiving social media such as Instagram and Facebook.

Open Knowledge Foundation: Celebrating Open Access Week by highlighting community projects!

planet code4lib - Mon, 2014-10-20 16:15

This week is Open Access Week all around the world, and from Open Knowledge’s side we are following up on last year’s tradition by putting together a blog post series to highlight great Open Access projects and activities in communities around the world. Every day this week will feature a new writer and activity.

Open Access Week, a global event now entering its eighth year, is an opportunity for the academic and research community to continue to learn about the potential benefits of Open Access, to share what they’ve learned, and to help inspire wider participation in helping to make Open Access a new norm in scholarship and research.

This past year has seen lots of great progress, and with the Open Knowledge blog we want to help amplify this amazing work done in communities around the world:

  • Tuesday, Jonathan Gray from Open Knowledge: “Open Knowledge work on Open Access in humanities and social sciences”
  • Wednesday, David Carroll from Open Access Button: “Launching the New Open Access Button”
  • Thursday, Alma Swan from SPARC Europe: “Open Access and the humanities: on our travels round the UK”
  • Friday, Jenny Molloy from Open Science working group: “OK activities in open access to science”
  • Saturday, Kshitiz Khanal from Open Knowledge Nepal: “Combining Open Science, Open Access, and Collaborative Research”
  • Sunday, Denis Parfenov from Open Knowledge Ireland: “Open Access: Case of Ireland”

We’re hoping that this series can inspire even more work around Open Access in the year to come and that our community will use this week to get involved both locally and globally. A good first step is to sign up at http://www.openaccessweek.org for access to a plethora of support resources, and to connect with the worldwide Open Access Week community. Another way to connect is to join the Open Access working group.

Open Access Week is an invaluable chance to connect the global momentum toward open sharing with the advancement of policy changes on the local level. Universities, colleges, research institutes, funding agencies, libraries, and think tanks use Open Access Week as a platform to host faculty votes on campus open-access policies, to issue reports on the societal and economic benefits of Open Access, to commit new funds in support of open-access publication, and more. Let’s add to their brilliant work this week!
