
Feed aggregator

Library Tech Talk (U of Michigan): HTTPS Everywhere -- Promise Fulfilled

planet code4lib - Wed, 2017-01-11 00:00

Over Fall 2016, the University of Michigan Library updated most of its websites to operate exclusively over the secure HTTPS protocol. Along the way, we learned a few lessons.

FOSS4Lib Recent Releases: veraPDF - 1.0

planet code4lib - Tue, 2017-01-10 21:15

Last updated January 10, 2017. Created by Peter Murray on January 10, 2017.

Package: veraPDF
Release Date: Tuesday, January 10, 2017

District Dispatch: Equipping librarians to code: part 2

planet code4lib - Tue, 2017-01-10 19:41

I know, I know, you just put down Increasing CS Opportunities for Young People, the Libraries Ready to Code final report, and are already saying to yourself, “What’s next?” and “How can I get involved?” Here’s the answer:

Today we officially launched Ready to Code 2 (RtC2), Embedding RtC Concepts in LIS Curricula. Building on findings from last year’s work, RtC2 focuses on ensuring pre-service and in-service librarians are prepared to facilitate and deliver youth coding activities that foster the development of computational thinking skills—skills critical for success in education and career. Like Phase 1, this will be a yearlong project and is also supported by Google, Inc.

Photo Credit: Los Angeles Public Library

Several of the findings from Phase 1 led us to consider the potential impact that focusing on librarian preparation and professional development could have on increasing the pool of librarians and library staff who have the skills necessary to design and implement coding programs that spark the curiosity and creativity of our young patrons and help them connect coding to their own interests and passions, which can lie outside of computer science-specific domains.

RtC2 will include a carefully selected LIS Faculty cohort of seven that will redesign and then pilot their new tech/media courses at their institutions. Results of the pilot courses will then be synthesized, and course models will be disseminated nationally. Faculty and their students will provide input throughout the project to the project team through faculty documentation, regular virtual meetings, a survey, student products and other outreach mechanisms. An outside evaluator will also work with the project team to identify the impacts of project activities and outcomes. This input will provide content for the final synthesis and recommendations for scaling in other LIS institutions.

Working along with me, the RtC2 project team includes Dr. Mega Subramaniam, Associate Professor and Associate Director of the Information Policy and Access Center at the University of Maryland’s College of Information Studies; Linda Braun, Learning Consultant, LEO: Librarians and Educators Online; and Dr. Alan S. Inouye, Director, OITP. OITP Youth and Technology Fellow Christopher Harris will provide overall guidance throughout the project. Can you tell how excited this makes me?!

Curious? Read the RtC2 Summary.

Are you LIS faculty? You can

  • Read the RtC2 Call for Applications.
  • Attend an in-person information session at the 2017 ALISE conference on Wednesday, January 18 at 6:00pm (meet Dr. Subramaniam in the Sheraton Atlanta hotel lobby).
  • Attend a virtual information session on January 27, 2017 at noon EST via Adobe Connect. Please complete this form if you are interested in attending the information session or would like to receive a recording of the session.

Yes, there will be a Libraries Ready to Code Website, where all this and more will live. In the meantime, if you have questions, please contact me directly at mvisser@alawash.org.

The post Equipping librarians to code: part 2 appeared first on District Dispatch.

Library of Congress: The Signal: The Keepers Registry: Ensuring the Future of the Digital Scholarly Record

planet code4lib - Tue, 2017-01-10 17:12

Humanités Numériques, on Wikimedia by Calvinius: http://bit.ly/2jacCXv.

This is a guest post by Ted Westervelt, section head in the Library of Congress’s US Arts, Sciences & Humanities Division.

Strange as it now seems, it was not that long ago that scholarship was not digital. Writing a dissertation in the 1990s was done on a computer and took full advantage of the latest suite of word-processing tools available (that a graduate student could afford). And it certainly was a world away from the typewritten dissertations of the 1950s, requested at the university library and pored over in  reading rooms.

Yet these were tools to create physical items not much different than those dissertations of the previous forty, fifty or a hundred years. That sense of completion and accomplishment came with the bound copy of the dissertation taken from the bookbinders or with the offprints of the article sent in the post by the journal publisher.

Now, instead of using digital tools to create a physical item, we create a digital item. From this digital item we might make a physical copy but that is no longer a necessary endpoint. Once we have created the digital item encompassing our scholarly work, it can be complete. Now when we talk of the scholarly record, we talk of the digital scholarly record, for they are almost entirely one and the same.

The advantages of this near-complete overlap are evident to anyone who works with scholarly works or with any creative works for that matter. The challenges, on the other hand, can be less immediately apparent, though they are not hidden too deeply.

“The library at Holland House in Kensington, London, extensively damaged by a Molotov ‘Breadbasket’ fire bomb.” On Flickr by Musgo Dumio_Momio. http://bit.ly/2j5q5vr.

The most immediate of these challenges relate to managing and preserving the digital scholarly record as we have done for the scholarly record for centuries and millennia (if we draw a curtain over the destruction of the Library of Alexandria). We have had those centuries to learn how to manage, keep and preserve the textual part of the scholarly record, to use it while also keeping it safe and usable for future generations (e.g. keep it away from fire).

With digital content, we do not have those centuries of knowledge; the sharp shift to digital creation and to a digital scholarly record has not come with a history of experiences in keeping that record safe and secure.

Which is not to say that preserving, protecting and ensuring the ongoing use and value of that digital scholarly record are hopeless dreams or that there is not a lot of work being dedicated to accomplishing these ends. This is a concern of anyone with an interest in scholarly works and in the scholarly record as a whole and, as any who delve into this at all know, productive work is being undertaken by a variety of groups using a variety of means in order to ensure its survival.

The Keepers Registry, based at the University of Edinburgh, is an important effort to preserve the digital scholarly record. The Keepers Registry brings together institutions and organizations that are committed to the preservation of electronic serials and enables those institutions to share titles, volumes and issues they have preserved.

In doing so, The Keepers Registry allows us to identify which parts of the digital scholarly record are being preserved, which institutions have taken on this responsibility and, just as important, which parts of the digital scholarly record are not being preserved and are therefore at higher risk of being lost.

There is a clear general benefit in sharing the names, missions and holdings of the institutions and organizations (the Keepers) that have committed themselves and their resources to the preservation of these parts of the digital scholarly record. But there is also a very clear benefit to the individual Keepers in knowing better their fellows who have similarly committed themselves to serve as Keepers of the digital scholarly record.

Since all are committed to the same end, the staff behind The Keepers Registry organized meetings of The Keepers and other similar organizations in Edinburgh in September 2015 and in Paris in June 2016. The organizations and institutions – and the individuals who represented them at these meetings – can and in some cases do meet and interact with each other in other forums. But the meetings arranged by The Keepers Registry allowed for a focus on the preservation of the digital scholarly record and how that can be accomplished collectively.

The preservation of the digital scholarly record cannot happen except through collaboration and cooperation. And none know this better than the individual Keepers and their colleagues at these meetings who have committed resources to the issue. The very existence of The Keepers Registry is an admission that the preservation of the scholarly record is larger than any one institution and in fact cannot be entrusted to any one institution alone.

But as much as this is known, the answer to how best to work, collaboratively and cooperatively, is less readily apparent.  We benefit from the opportunities to discuss this in person.

CC0 Public Domain.

These meetings and discussions, while valuable, were not intended as an end unto themselves.  The meetings helped crystallize in the minds of the participants that, because of what they do and because of their participation in The Keepers Registry, they form a Keepers network.

As such, they have a shared commitment and a shared idea of how they can do for the digital scholarly record what they have managed for the scholarly record in centuries past: ensure its preservation and ongoing use.  This vision has been encapsulated in the joint statement that representatives from the Keepers issued this past August, “Ensuring the Future of the Digital Scholarly Record.”  It sets out a plan of engagement with other stakeholders –- especially publishers, research libraries and national libraries -– who also have a role in this mission.

To this end, the Keepers network welcomes any institution, organization or consortium that wishes to endorse the statement, as some have already.  And it encourages any stakeholders which wish to begin working with the Keepers network to let them know.

The Keepers network is committed to making all parties aware of their roles in the preservation of the digital scholarly record and has already begun reaching out to those stakeholders, such as at the Fall Meeting of the Coalition for Networked Information.  This is a shared need and a shared responsibility.  No one institution has to do it alone. We cannot succeed in preserving the digital scholarly record unless we do it together.

DPLA: Registration Now Open for DPLAfest 2017

planet code4lib - Tue, 2017-01-10 16:00

We’re pleased to announce that registration for DPLAfest 2017 — taking place on April 20-21 in Chicago, Illinois — has officially opened. We invite all those interested from public and research libraries, cultural organizations, the educational community, the creative community, publishers, the technology sector, and the general public to join us for conversation and community building as we celebrate our fourth annual DPLAfest.

The two-day event is open to all and advance registration is required. Registration for DPLAfest 2017 is $150 and includes access to all DPLAfest events including a reception on April 20. Coffee, refreshments, and a boxed lunch will be provided on April 20 and 21. Register today.

About

Participants collaborate at DPLAfest 2016. Photo by Jason Dixson.

DPLAfest 2017 will take place on April 20-21, 2017 in Chicago at Chicago Public Library’s Harold Washington Library Center. The hosts for DPLAfest 2017 include Chicago Public Library, the Black Metropolis Research Consortium, Chicago Collections, and the Reaching Across Illinois Library System (RAILS).

Agenda

We are currently seeking session proposals for DPLAfest 2017. The deadline to submit a session proposal is January 17, 2017. Click here to review submission terms and submit a session proposal.

We will be posting a full set of activities and programming for DPLAfest 2017 in February. Until then, to review topics and themes from previous fests, check out the agendas from DPLAfest 2016 and 2015.

Travel and Logistics

Click here for logistical and travel information about DPLAfest and our host city, Chicago.

Contact

Should you have any questions, please do not hesitate to reach out to us at info@dp.la. We look forward to seeing you in Chicago!

Register for DPLAfest 2017

David Rosenthal: Gresham's Law

planet code4lib - Tue, 2017-01-10 16:00
Jeffrey Beall, who has done invaluable work identifying predatory publishers and garnered legal threats for his pains, reports that:
Hyderabad, India-based open-access publisher OMICS International is on a buying spree, snatching up legitimate scholarly journals and publishers, incorporating them into its mega-fleet of bogus, exploitative, and low-quality publications. ... OMICS International is on a mission to take over all of scholarly publishing. It is purchasing journals and publishers and incorporating them into its evil empire. Its strategy is to saturate scholarly publishing with its low-quality and poorly-managed journals, aiming to squeeze out and acquire legitimate publishers.

Below the fold, a look at how OMICS demonstrates the application of Gresham's Law to academic publishing.

Following John Bohannon's 2013 sting against predatory publishers with papers that were superficially credible, in 2014 Tom Spears wrote a paper "that absolutely shouldn’t be published by anyone, anywhere", submitted it to 18 journals, and got it accepted by 8 of them. None of them could even have understood the title:
“Acidity and aridity: Soil inorganic carbon storage exhibits complex relationship with low-pH soils and myeloablation followed by autologous PBSC infusion.”

Look more closely. The first half is about soil science. Then halfway through it switches to medical terms, myeloablation and PBSC infusion, which relate to treatment of cancer using stem cells.

The reason: I copied and pasted one phrase from a geology paper online, and the rest from a medical one, on hematology.

I wrote the whole paper that way, copying and pasting from soil, then blood, then soil again, and so on. There are a couple of graphs from a paper about Mars. They had squiggly lines and looked cool, so I threw them in.

Footnotes came largely from a paper on wine chemistry. The finished product is completely meaningless.

The university where I claim to work doesn’t exist. Nor do the Nepean Desert or my co-author. Software that catches plagiarism identified 67% of my paper as stolen (and that’s missing some). And geology and blood work don’t mix, even with my invention of seismic platelets.

Among the publishers recently acquired by OMICS were two previously legitimate Canadian companies. Tom Spears tried and scored again:
OMICS has publicly insisted it will maintain high standards. But now the company has published an unintelligible and heavily plagiarized piece of writing submitted by the Citizen to test its quality control. The paper is online today in the Journal of Clinical Research and Bioethics — not one of the original Canadian journals, but now jointly owned with them. And it’s awful. OMICS claims this paper passed peer review, and presents useful insights in philosophy, when clearly it is entirely fake.

Bryson Masse's Fake Science News Is Just As Bad As Fake News explains how Spears came to submit the paper:
This summer, OMICS reached out to Spears, who has previously demonstrated how to game the scientific publishing system, and now gets a lot of spam from journal publishers. This time, he decided he might have some fun with them.

“I'd sent test submissions to a couple of predators in the past and had kind of moved on, but then I got this request to write for what looked like a fake journal—of ethics,” Spears wrote me in an email. “Something about that attracted me so I just thought: Why not? And one morning in late August when I woke up early I made extra coffee and banged out some drivel and sent it to them.”
...
And voila, his minutes of toil paid off.

It got published without him paying. He did get invoiced, though, and the publisher was not afraid to haggle, said Spears.

At the New York Times, Kevin Carey's A Peek Inside the Strange World of Fake Academia reveals that OMICS is following the lead of less deplorable academic publishers who have observed that in the Internet era, conferences are less vulnerable to disintermediation than journals:
The caller ID on my office telephone said the number was from Las Vegas, but when I picked up the receiver I heard what sounded like a busy overseas call center in the background. The operator, “John,” asked if I would be interested in attending the 15th World Cardiology and Angiology Conference in Philadelphia next month.

“Do I have to be a doctor?” I said, because I’m not one. I got the call because 20 minutes earlier I had entered my phone number into a website run by a Hyderabad, India, company called OMICS International.

“You can have the student rate,” the man replied. With a 20 percent discount, it would be $599. The conference was in just a few weeks, I pointed out — would that be enough time for the academic paper I would be submitting to be properly reviewed? ... It would be approved on an “expedited basis” within 24 hours, he replied, and he asked which credit card I would like to use.

If it seems that I was about to be taken, that’s because I was. OMICS International is a leader in the growing business of academic publication fraud. It has created scores of “journals” that mimic the look and feel of traditional scholarly publications, but without the integrity. This year the Federal Trade Commission formally charged OMICS with “deceiving academics and researchers about the nature of its publications and hiding publication fees ranging from hundreds to thousands of dollars.”

OMICS is also in the less well-known business of what might be called conference fraud, which is what led to the call from John. Both schemes exploit a fundamental weakness of modern higher education: Academics need to publish in order to advance professionally, get better jobs or secure tenure. Even within the halls of respectable academia, the difference between legitimate and fake publications and conferences is far blurrier than scholars would like to admit.

Carey goes into considerable detail about OMICS and its competitors in the fraudulent conference business and concludes:
There are real, prestigious journals and conferences in higher education that enforce and defend the highest standards of scholarship. But there are also many more Ph.D.-holders than there is space in those publications, and those people are all in different ways subject to the “publish or perish” system of professional advancement. The academic journal-and-conference system is subject to no real outside oversight. Standards are whatever the scholars involved say they are.

So it’s not surprising that some academics have chosen to give one another permission to accumulate publication credits on their C.V.’s and spend some of the departmental travel budget on short holidays. Nor is it surprising that some canny operators have now realized that when standards are loose to begin with, there are healthy profits to be made in the gray areas of academe.

Carey's right, but that isn't the fundamental problem. Two years ago I wrote Stretching the "peer reviewed" brand until it snaps, responding to a then-current outbreak of concern about this issue:
These recent examples, while egregious, are merely a continuation of a trend publishers themselves started many years ago of stretching the "peer reviewed" brand by proliferating journals. If your role is to act as a gatekeeper for the literature database, you better be good at being a gatekeeper. Opening the gate so wide that anything can get published somewhere is not being a good gatekeeper.

Gresham's Law states "Bad money drives out good". The major publishers can hardly complain if others more enthusiastically follow their example by proliferating journals (and conferences) and lowering reviewing standards. Their value-added was supposed to be "peer review", but the trend they started has devalued it to the point where peer-reviewed science no longer influences public policy. It is true that industries such as tobacco and fossil fuels have funded decades-long campaigns pushing invented reasons to doubt published research. But at the same time academic publishers were providing real reasons for doing so.

Mark E. Phillips: LC Name Authority File Analysis: Where are the Commas?

planet code4lib - Tue, 2017-01-10 15:31

This is the second in a series of blog posts on some analysis of the Name Authority File dataset from the Library of Congress. If you are interested in the setup of this work and a bit more background, take a look at the previous post.

The goal of this work is to better understand how personal and corporate names are formatted so that I can hopefully train a classifier to automatically identify a new name into either category.

In the last post we saw that commas seem to be important in differentiating between corporate and personal names.  Here is a graphic from the previous post.

Distribution of Commas in Name Strings

You can see that the vast majority of personal names (99%) have commas, while a much smaller share of corporate names (14%) have a comma present.

The next thing that I was curious about is whether the placement of the comma in the name string reveals anything about the kind of name it is.

How Many?

The first thing to look at is just counting the number of commas per name string.  My initial thought is that there are going to be more commas in the Corporate Names than in the Personal Names.  Let’s take a look.

Name Type | Total Name Strings | Names With Comma | min | 25% | 50% | 75% | max | mean | std
Personal | 6,362,262 | 6,280,219 | 1 | 1 | 1 | 2 | 8 | 1.309 | 0.471
Corporate | 1,499,459 | 213,580 | 1 | 1 | 1 | 1 | 11 | 1.123 | 0.389
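If you want to reproduce numbers like these, here is a minimal sketch of how the comma counts and summary statistics might be computed with pandas. It assumes the Personal and Corporate headings have already been extracted from the authority file into plain-text files, one name string per line; the file names are hypothetical.

import pandas as pd

def comma_stats(path, label):
    # Count commas in every name string, then summarize only the
    # strings that contain at least one comma.
    with open(path, encoding="utf-8") as f:
        names = [line.rstrip("\n") for line in f if line.strip()]
    counts = pd.Series([name.count(",") for name in names], name=label)
    with_comma = counts[counts > 0]
    print(label, "total:", len(counts), "with comma:", len(with_comma))
    print(with_comma.describe())  # count, mean, std, min, quartiles, max

# Hypothetical input files, one authorized heading per line.
comma_stats("personal_names.txt", "Personal")
comma_stats("corporate_names.txt", "Corporate")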

Looking at the overall statistics for the number of commas in the name strings, there are on average more commas in the Personal Names than in the Corporate Names.  The Corporate Name with the most commas, in this case eleven, is International Monetary Fund. Office of the Executive Director for Antigua and Barbuda, the Bahamas, Barbados, Belize, Canada, Dominica, Granada, Ireland, Jamaica, St. Kitts and Nevis, St. Lucia, and St. Vincent and the Grenadines; you can view the name record here.

The Personal Name with the most commas had eight of them: Seu constante leitor, hum homem nem alto, nem baixo, nem gordo, nem magro, nem corcunda, nem ultra-liberal, que assistio no Beco do Proposito, e mora hoje no Cosme-Velho; you can view the name record here.

I could figure out the Corporate Name but needed a little help with the Personal Name, so Google Translate to the rescue. From what I can tell it translates to His constant reader, a man neither tall, nor short, nor fat, nor thin, nor hunchback nor ultra-liberal, who attended in the Alley of the Purpose, and lives today in Cosme-Velho, which I think is a pretty cool sounding Personal Name.

I was surprised when I made a histogram of the values and saw that it was actually pretty common for Personal Names to have more than one comma.   Very common actually.

Number of Commas in Personal Names

And while there are individual Corporate Names with more commas than any Personal Name, you are generally only going to see one comma per Corporate Name string.

Number of Commas in Corporate Names

Which Half?

The next thing that I wanted to look at is the placement of the first comma in the name string.

The numbers below represent the stats for just the name strings that contain a comma. The values are the position of the first comma expressed as a percentage of the total number of characters in the name string.

Name Type | Names With Comma | min | 25% | 50% | 75% | max | mean | std
Personal | 6,280,219 | 1.9% | 26.7% | 36.4% | 46.7% | 95.7% | 37.3% | 13.8%
Corporate | 213,580 | 2.2% | 60.5% | 76.9% | 83.3% | 99.0% | 69.6% | 19.3%
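For anyone following along at home, here is one way the relative placement could be computed. It assumes the measure is simply the character index of the first comma divided by the length of the whole string; the post doesn't state the exact formula, so treat this as a sketch. The two example name strings are the ones used in the next section.

def first_comma_percent(name):
    # Position of the first comma as a percentage of the string's length,
    # or None if the name string has no comma at all.
    idx = name.find(",")
    if idx == -1:
        return None
    return 100.0 * idx / len(name)

print(first_comma_percent("Phillips, Mark Edward"))          # roughly 38%, first half
print(first_comma_percent("Worldwide Documentaries, Inc."))  # roughly 79%, second half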

If we look at these as graphics we can see some trends a bit better.  Here is a histogram of the placement of the first comma in the Personal Name strings.

Comma Percentage Placement for Personal Name

It shows that the bulk of the names with a comma have that comma occurring in the first half (below 50%) of the string.

This looks a bit different with the Corporate Names as you can see below.

Comma Percentage Placement for Corporate Name

You will see that the placement of that first comma trends very strongly to the right side of the graph, definitely over 50%.

Let’s be Absolute

Next up I wanted to take a look at the absolute distance from the first comma to the first space character in the name string.

My thought is that a Personal Name is going to have an overall lower absolute distance than the Corporate Names.  Two examples will hopefully help you see why.

For a Personal Name string like “Phillips, Mark Edward” the absolute distance from the first comma to the first space is going to be one.

For a Corporate Name string like “Worldwide Documentaries, Inc.” the absolute distance from the first comma to the first space is fourteen.
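As a quick sketch, the distance for those two examples can be computed directly from the index of the first comma and the index of the first space:

def comma_space_distance(name):
    # Absolute distance, in characters, between the first comma and the
    # first space in the name string; None if either is missing.
    comma = name.find(",")
    space = name.find(" ")
    if comma == -1 or space == -1:
        return None
    return abs(comma - space)

print(comma_space_distance("Phillips, Mark Edward"))          # 1
print(comma_space_distance("Worldwide Documentaries, Inc."))  # 14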

I’ll jump right to the graphs here.  First is the histogram of the Personal Name strings.

Personal Name: Absolute Distance Between First Space and First Comma

You can see that the vast majority of the name strings have an absolute distance from the first comma to the first space of 1 (that’s the value for the really tall bar).

If you compare this to the Corporate Name strings in graph below you will see some differences.

Corporate Name: Absolute Distance Between First Space and First Comma

Compared to the Personal Names, the Corporate Name graph has quite a bit more variety in the values.  Most of the values are higher than one.

If you are interested in the data tables they can provide some additional information.

Name Type | Names With Comma | min | 25% | 50% | 75% | max | mean | std
Personal | 6,280,219 | 1 | 1 | 1 | 1 | 131 | 1.4 | 1.8
Corporate | 213,580 | 1 | 18 | 27 | 37 | 270 | 28.9 | 17.4

Absolute Tokens

This next section is very similar to the previous one, but this time I am interested in the placement of the first comma in relation to the first token in the string. I have a feeling that it will look similar to the absolute first-space distance above, but it should normalize the data a bit because we are dealing with tokens instead of characters.

Name Type | Names With Comma | min | 25% | 50% | 75% | max | mean | std
Personal | 6,280,219 | 1 | 1 | 1 | 1 | 17 | 1.1 | 0.3
Corporate | 213,580 | 1 | 3 | 4 | 6 | 35 | 4.8 | 2.4
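The post doesn't spell out exactly how the token distance was calculated, so the sketch below is just one plausible reading: split the string on whitespace and report the 1-based position of the first token that contains a comma.

def comma_token_distance(name):
    # 1-based index of the first whitespace-delimited token containing a
    # comma; 1 means the comma sits in the very first token.
    for i, token in enumerate(name.split(), start=1):
        if "," in token:
            return i
    return None  # no comma anywhere in the string

print(comma_token_distance("Phillips, Mark Edward"))          # 1
print(comma_token_distance("Worldwide Documentaries, Inc."))  # 2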

And now to round things out with graphs of both of the datasets for the absolute distance from first comma to first token.

Personal Name: Absolute Distance Between First Token and First Comma

Just as we saw in the section above, the Personal Name strings have commas placed right next to the first token in the string.

Corporate Name: Absolute Distance Between First Token and First Comma

The Corporate Names are a bit more distributed away from the first token.

Conclusion

Some observations that I have now that I’ve spent a little more time with the LC Name Authority File while working on this post and the previous one.

First, it appears that the presence of a comma in a name string is a very good indicator that it is going to be a Personal Name.  Another thing is that if the first comma occurs in the first half of the name string it is most likely going to be a Personal Name, and if it occurs in the second half of the string it is most likely to be a Corporate Name. Finally, the absolute distance from the first comma to either the first space or the first token is a good indicator of whether the string is a Personal Name or a Corporate Name.
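Those observations are already enough for a rough rule of thumb. The toy heuristic below is not the classifier this series is working toward, just an illustration of how the comma features line up; the 50% threshold comes straight from the "Which Half?" section above.

def guess_name_type(name):
    # Toy heuristic based on the observations above, not a trained classifier.
    comma = name.find(",")
    if comma == -1:
        return "Corporate"       # names without a comma are usually corporate
    if comma / len(name) < 0.5:  # first comma in the first half of the string
        return "Personal"
    return "Corporate"

print(guess_name_type("Phillips, Mark Edward"))          # Personal
print(guess_name_type("Worldwide Documentaries, Inc."))  # Corporate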

If you have questions or comments about this post,  please let me know via Twitter.

DPLA: One Week Left! Apply to Present at DPLAfest 2017

planet code4lib - Tue, 2017-01-10 15:00

DPLA is seeking session proposals for DPLAfest 2017, an annual conference that brings together librarians, archivists, and museum professionals, developers and technologists, publishers and authors, educators, and many others to celebrate DPLA and its community of creative professionals.

Proposals should be related to digital libraries, broadly defined, including topics at the intersection of digital libraries and social justice, copyright and rights management, public engagement, metadata, collaboration, and more. Learn more.

The deadline to submit a session proposal is Tuesday, January 17, 2017.

See you in Chicago!

View full Call for Proposals and Submission Form

LITA: #NoFilter: Social Media Content Ideas for Libraries

planet code4lib - Tue, 2017-01-10 15:00

In my previous blog entry, I introduced the #NoFilter series which will explore some of the challenges and concerns pertaining to social media and its use in the library. For this post, let’s consider a topic that can be simultaneously fun and perplexing: generating quality content for social media! Thoughtful, consistent, and varied content is one of the keys to cultivating a meaningful social media presence for a library i.e., opening up channels of communication with patrons and encouraging enthusiasm for the library’s materials, services, and staff.  Where does one look for social media content ideas? Keeping in mind that the intricacies of each platform necessitate different presentations in content, below are three suggestions for where those in charge of a library’s social media may find some inspiration.

Image accompanying a Tumblr post about the behind-the-scenes process of evaluating donations at the Othmer Library.

  • Behind-the-scenes – The day-to-day operations in a library may not seem like the most riveting subject matter for a social media post. However, in my experience, posts that feature behind-the-scenes work at the library often do very well. Think of it this way: isn’t it exciting when you get a sneak peek of what is to come or a look into processes with which you are not familiar? In terms of social media content, this could mean providing patrons with a photo of the library preparing to open, new acquisitions being processed, a book being repaired, a recent donation to the library still in boxes, a new addition being built, a new technology being installed, or a new fish tank being set up. For this type of content, consider consulting staff throughout the library such as those in technical services, collection development, or interlibrary loan. Not sure how a post about an ILL would look? Check out this great Instagram post from The Frick Collection.
  • Reference Questions – What questions have the library staff recently answered for patrons or for one another? What information was unearthed? What resources were consulted? What steps were taken to track down an answer? You may want to consider working with reference staff to compose social media posts that not only share the findings of research, but also the research process. Chances are that such information will be of interest to others. Additionally, this type of post highlights the expertise and talents of library staff. Individuals who may never have thought to consult your library before on such topics may find themselves reconsidering after seeing your post. One example is this “From the Othmer Library Reference Desk” post on my library’s Tumblr.
  • Events – Event-driven content is one of the most commonly employed on institutional social media outlets. There is an event coming up (e.g., an open house, a movie night, a special guest lecturer, edible book festival) and the library wants to get the word out about it. It’s not a guarantee of higher attendance at the actual event, but such a post, when written in a personable tone, does alert patrons to the fact that the library is a dynamic place, not just a repository of materials in varying formats. Taking this type of post one step further, a library’s social media manager may want to consider sharing stories that come about from the event. Did the library debut a new gadget at the event? Did a quote from the lecturer stand out? Did the cake you ordered for your National Library Week celebration arrive with the library’s name misspelled – e.g., the “Othmer Library of Chemical History” became the “Other Library of Chemical History”? The fun moments, the serious moments, the quirky moments – all can have a place on social media, all are demonstrations of what patrons can take away from participating in a library event.

    The History of Four Footed Beasts and Serpents (1658) on display at the 2015 Othmer Library Open House. An iPad next to the book displays a GIF made from one of the book’s illustrations.

Whether you are new to social media or an established presence on a platform(s), I hope the above suggestions have provided some creative inspiration for your library’s future content.

Where do you look for social media content ideas? What types of content seem to do the best on your library’s social media? Share your thoughts in the comments below!

District Dispatch: Fight for Email Privacy Act passage begins now . . . again

planet code4lib - Tue, 2017-01-10 14:49

It’s a pretty sure bet that, when James Madison penned the Fourth Amendment to assure the right of all Americans to be “secure in their persons, houses, papers, and effects against unreasonable searches and seizures,” he didn’t have protecting emails, texts, tweets and cloud-stored photo and other files in mind. Fortunately, Congress attempted to remedy that understandable omission 197 years later by passing the Electronic Communications Privacy Act (ECPA) to require that authorities obtain a search warrant, based on probable cause, to access the full content of such material. But given the difficulty, expense and thus unlikelihood of storing digital information for extended periods in 1986, ECPA’s protections were written to sunset 180 days after a communication had been created.

Credit: Mike McQuade

Thirty years later, in an age of essentially limitless, cheap storage – and the routine “warehousing” of our digital lives and materials – this anachronism has become a real, clear and present danger to Americans’ privacy. ALA, in concert with the many other members of the Digital Due Process coalition, has been pushing hard in every Congress since early 2010 to update ECPA for the digital age by requiring authorities to obtain a “warrant for content” for access to any electronic communications from the moment that they’re created. Last year, in the 114th Congress, we got tantalizingly close as the Email Privacy Act (H.R. 699) passed the House of Representatives unanimously (yup, you read that right) by a vote of 419 – 0: a margin unheard of for any bill not naming a post office or creating “National Remember a Day” day. Sadly, action on the bill then stalled in the Senate.

Undaunted, the bill’s diehard sponsors – Reps. Kevin Yoder (R-KS3) and Jared Polis (D–CO2) – have come roaring back in the first full week of the new, 115th Congress to reintroduce exactly the same version of the Email Privacy Act that passed the House last year without opposition. Look for similar action to get the ball rolling again in the Senate in the very near future and, not long after that, for a call to action to help convince Congress to heed ALA President Julie Todaro’s call for immediate action on this critical bill. As she put it in a January 9 statement:

"ALA calls on both chambers of Congress to immediately enact H.R. 387 and send this uniquely bipartisan and long-overdue update of our laws to the President in time for him to mark Data Privacy Day on January 28, 2017, by signing it into law."

The post Fight for Email Privacy Act passage begins now . . . again appeared first on District Dispatch.

Karen G. Schneider: A scholar’s pool of tears, Part 2: The pre in preprint means not done yet

planet code4lib - Tue, 2017-01-10 14:23

Note, for two more days, January 10 and 11, you (as in all of you) have free access to my article, To be real: Antecedents and consequences of sexual identity disclosure by academic library directors. Then it drops behind a paywall and sits there for a year.

When I wrote Part 1 of this blog post in late September, I had keen ambitions of concluding this two-part series by discussing “the intricacies of navigating the liminal world of OA that is not born OA; the OA advocacy happening in my world; and the implications of the publishing environment scholars now work in.”

Since then, the world, and my priorities have changed. My goals are to prevent nuclear winter and lead our library to its first significant building upgrades since it opened close to 20 years ago. But at some point I said on Twitter, in response to a conversation about posting preprints, that I would explain why I won’t post a preprint of To be real. And the answer is very simple: because what qualifies as a preprint for Elsevier is a draft of the final product that presents my writing before I incorporated significant stylistic guidance from the second reviewer, and that’s not a version of the article I want people to read.

In the pre-Elsevier draft, as noted before, my research is present, but it is overshadowed by clumsy style decisions that Reviewer 2 presented far more politely than the following summary suggests: quotations that were too brief; rushing into the next thought without adequately closing out the previous thought; failure to loop back to link the literature review to the discussion; overlooking a chance to address the underlying meaning of this research; and a boggy conclusion. A crucial piece of advice from Reviewer 2 was to use pseudonyms or labels to make the participants more real.

All of this advice led to a final product, the one I have chosen to show the world. That’s really all there is to it. It would be better for the world if my article were in an open access publication, but regardless of where it is published, I as the author choose to share what I know is my best work, not my work in progress.

The OA world–all sides of it, including those arguing against OA–has some loud, confident voices with plenty of “shoulds,” such as the guy (and so many loud OA voices are male) who on a discussion list excoriated an author who was selling self-published books on Amazon by saying “people who value open access should praise those scholars who do and scorn those scholars who don’t.” There’s an encouraging approach! Then there are the loud voices announcing the death of OA when a journal’s submissions drop, followed by the people who declare all repositories are Potemkin villages, and let’s not forget the fellow who curates a directory of predatory OA journals that is routinely cited as an example of what’s wrong with scholarly publishing.

I keep saying, the scholarly-industrial complex is broken. I’m beyond proud that the Council of Library Deans for the California State University–my 22 peers–voted to encourage and advocate for open access publishing in the CSU system. I’m also excited that my library has its first scholarly communications librarian who is going to bat on open access and open educational resources and all other things open–a position that in consultation with the library faculty I prioritized as our first hire in a series of retirement/moving-on faculty hires. But none of that translates to sharing work I consider unfinished.

We need to fix things in scholarly publishing and there is no easy, or single, path. And there are many other things happening in the world right now. I respect every author’s decision about what they will share with the world and when and how they will share it. As for my decision–you have it here.


LibUX: Critical Librarianship in the Design of Libraries

planet code4lib - Tue, 2017-01-10 12:41

Sarah Houghton and Andy Woodworth announced Operation 451, a movement intentionally invoking Fahrenheit 451 as a

symbolic affirmation of our librarian values of knowledge, service of others, and free expression of ideas. [Operation 451] stands in direct opposition to the forces of intolerance and ignorance that seek to divide neighbors, communities, and the country.

“Call it luck, fate, or serendipity, but we noticed that that individual numbers matched up with the fourth and fifth articles of the Library Bill of Rights and the First Amendment to the Constitution. These were the values that we want to promote.”

I want to be a part of this. This resonates with me.

I said as much in “The Election as a Design Problem”: at a moment defined by fake news and the echo chambers that blossom as a result of the user-experience zeitgeist, these — and the racist, sexist, other-ist fires they start — might be assuaged by that same ethos of deliberate design at the core of the UX boom.

Bear with me.

Fake news proliferates because of the more-useful-than-not algorithms that tailor our time online to our tastes, our friends, and family. We don’t complain about our connectivity — for example — to Facebook, nor do we complain about its uptime, because we massage the obvious kinks out of the web that interfere with access to and engagement with our feed.

These work for us – except when they don’t.

In Facebook’s case, the mechanisms to report bullshit already exist, sort of, but they’re not obvious. The interaction cost is high.

It takes four steps just to get to this point.

And even though you’re one of the good guys, your experience is substantially better — because you’re either walking between meetings, or sitting in traffic, being a parent, or whatever — by scrolling and letting it slip by like a piece of trash in the stream.

Because the features to train the Facebook algorithm are designed poorly, scrolling is the straightest path back to the content you’re interested in. Even the hardiest fist-shakers don’t want to be on the job all the time.

But, now, fake news et al. actively impede your access to and engagement with Facebook, and the pain of reporting this bullshit is now for many becoming too great to even deal with the feed whatsoever. Fake news sucks for Facebook.

Approached as a design challenge, the answer seems to be to make these reporting tools painless to use.

The user experience is a net value, so negative features pull all ships down with the low tide.

What’s more, there is an opportunity for institutions that are positioned — either actively or by reputation — as intellectual and moral community cores (libraries) to exert greater if not just more obvious influence on the filters through which patrons access content.

We take for granted that these filters are wide open.

Librarianship, more than other disciplines, is wrapped up in deep worldviews about information freedom that lend themselves in practice to objectivity in the journalistic sense. But whereas I believe the commitment to objectivity in journalism was rooted — at one time, but no longer — in good business sense, our commitment is moral.

Even so, I am not sure objectivity is good for the success of libraries, either, although I didn’t really have the vocabulary to communicate this before reading Meredith Farkas’s column in American Libraries, “Never Neutral,” where she defines “critical librarianship”:

Critical librarianship supports the belief that, in our work as librarians, we should examine and fight attempts at social oppression. … Many librarians are thinking about how they can fight for social justice in their work, which raises the question of whether that work reflects the neutrality that has long been a value in our profession. Meredith Farkas

For me it’s been just out of earshot, although as Meredith mentions #critlib’s been edging into conversation around algorithmic bias and — in my own way — when I make the point about not blindly trying to serve all users but deliberately identifying and eschewing non-adopters.

I imply in the library interface that the best design decisions for libraries are those that get them out of the way, not as an end unto itself but to optimize the user experience so that libraries are primed to strategically exert control over that interface.

The library is the interface

When I usually talk about this, I tend to refer to controlling negotiations with vendors who want access to the audience libraries attract. We want to force vendors to commit to a user experience that suits the library mission. Users don’t discern between the services libraries control and those that libraries don’t, so it behooves libraries to wrest control and aggressively negotiate. We have the leverage, after all.

That said, these design decisions also position libraries to more deliberately influence the user experience in other ways – such as communicating moral or social values.

For gun-shy administrations, values do not have to be in policy or in the mission statement – although I suspect that’s better marketing than not. Service design communicates these values just as effectively.

Meredith again:

Librarians may not be able to change Google or Facebook, but we can educate our patrons and support the development of the critical-thinking skills they need to navigate an often-biased online world. We can empower our patrons when we help them critically evaluate information and teach them about bias in search engines, social media, and publishing.

This is the opportunity I mean for librarians to embrace.

Reframing the #Operation451 pledge as a design challenge to integrate critical librarianship

Let’s approach support for #Operation451 as a design challenge. We’ll start with participants’ pledge to

  • work towards increasing information access, especially for vulnerable populations;
  • establish your library as a place for everyone in the community, no matter who they are;
  • ensure and expand the right of free speech, particularly for minorities’ voices and opinions.

We can rally behind these philosophically but as problems to be solved they’re too big to approach practically.

First, to prime even stubborn obstacles for brainstorming ideas that can be tested, you can reword these as how might we notes. It’s a little workshop trick, shifting focus from how daunting the challenge is to how it can be solved.

  • How might we increase information access in general?
  • How might we increase information access for vulnerable populations?
  • How might we establish the library as a place for everyone?
  • How might we ensure the right of free speech?
  • How might we encourage patrons to share their voices and opinions?
  • How might we amplify our patrons’ voices and opinions?
  • How might we encourage and amplify the voices and opinions of minorities?

What we then want to do is identify as many ways as possible in which these challenges manifest.

These could just be bullet points, but you can step into the user’s shoes and phrase ideas as job stories. These are madlib-style statements meant to approach motivations as actionable problems: “When _____, I want to _____, so that _____.”

And what may seem counterintuitive is that unlike a user story — “As a _____, I want to _____, so that _____” — the job story shifts the focus away from the persona to tasks that are independent of demographic.

A list of job stories in the spirit of #Operation451

I think this could be a living list that can serve to inspire design solutions intended to bake the spirit of critical librarianship into our practice. I would love your help.

You can contribute by either leaving a comment below, or triple hashtagging a job story #libux #op451 #critlib on twitter.

  • When I don’t have a stable place to live, I want to be able to get a library card, so that I can still take advantage of library services.
  • When I am concerned about privacy and my personal information, I want to get a library card without feeling like I’m giving too much information, so that I can take advantage of library services.
  • When I can’t pay library fines, I want to continue to be able to use the library services, so that I can get what I need done.
  • When I owe library fines, I want to use the library services without feeling like I’m in trouble, so that I still feel welcome. Okay, this could use some wordsmithing
  • When I see fake news or fake information in the library collection, I want to report it, so that the item gets reevaluated for its place in the collection.
  • When I am looking for non-fiction in the library collection, I want to see that content is untrustworthy or questionable, so that I can make an informed choice.
  • When I am looking for non-fiction in the library collection, I want to be able to filter results by most factual, so that I can make an informed choice.

This is just a start.

Ed Summers: Document Time

planet code4lib - Tue, 2017-01-10 05:00
A segment from @Berg:1997a about the ways in which documents and records fix narratives. I think I picked up Berg's work from @Mol:2002. I particularly like this notion of *document time* where experience is flattened, and then used in particular ways.

> In addition, the record is the very place where a public account of "what has happened" is created. It is when writing into this potential source for retrospective inspection that physicians and nurses construe narratives that align what actually happened with what should have happened, no matter how insignificant these occurrences may seem [@Garfinkel:1967, pp. 197-207; @Hunter:1991]. If a patient has been hospitalized for several days, for example, nurses may omit measuring the blood pressure and just fill in yesterday's measurement in today's column. Likewise, residents often ask nurses what to prescribe while they complete the order form in the regular fashion, as if it is they who have told the nurses what to do. The same phenomenon occurs in and through the summaries that are continually being produced. In this process, details are omitted, and the story is simplified and retold in ways that fit the situation at hand. This results in an increasing stylization of past events into a standard canon, a sign leading to a diagnosis leading to a therapy leading to an outcome. A sentence like "admitted with Hodgkins, now 8 days post-reinfusion" effectively sets the focus of the current attention. Yet in doing so, it also smoothes over any diagnostic uncertainties that might have played a role, erasing the deliberations that went into the selection of this therapy, and Mr Wood's fears and anxieties.
>
> Finally, all this adds to the peculiar feature of written text that, once written, tends to have a privileged position, vis-á-vis other recollections of these events (see @Clanchy:1993 for the historical genesis of this privilege). Wherever it travels (from the audit committee to the insurance inspector's desk to the courtroom), it becomes *the* trace to the "original event". As @Smith:1974 aptly summarizes these issues, accounts enter "document time" once they are written: "that crucial point at which much if not every trace of what has gone into the making of that account is obliterated and what remains is only the text which aims at being read as 'what actually happened'"
>
> @Berg:1997a, p. 525

District Dispatch: Network neutrality in the crosshairs

planet code4lib - Tue, 2017-01-10 00:15

Jointly authored by Larra Clark, Krista Cox and Kara Malenfant

It is widely reported that network neutrality is one of the most endangered telecommunications policy gains of the past two years. The ALA, ARL and ACRL—with EDUCAUSE and other library and higher education allies—have been on the front lines of this battle with the Federal Communications Commission (FCC), Congress, and the courts for more than a decade. Here’s an update on where we stand, what might come next, and what the library community may do to mobilize.

From Flickr

What’s at stake?

Net neutrality is the principle that internet service providers (ISPs) should enable access to all content and applications regardless of the source, and without favoring or blocking particular services or websites. Net neutrality is essential for library and educational institutions to carry out our missions and to ensure protection of freedom of speech, educational achievement, research and economic growth. The Internet has become the pre-eminent platform for learning, collaboration and interaction among students, faculty, library patrons, local communities and the world.

In February 2015, the FCC adopted Open Internet rules that provided the strongest network neutrality protections we’ve seen, and which are aligned with library and higher education principles for network neutrality and ongoing direct advocacy with FCC and other allies. The rules:

  • Prohibit blocking or degrading access to legal content, applications, services, and non-harmful devices; as well as banning paid prioritization, or favoring some content over other traffic;
  • Apply network neutrality protections to both fixed and mobile broadband, which the library and higher education coalition advocated for in our most recent filings, as well as (unsuccessfully) in response to the 2010 Open Internet Order;
  • Allow for reasonable network management while enhancing transparency rules regarding how ISPs are doing this;
  • Create a general Open Internet standard for future ISP conduct; and
  • Re-classify ISPs as Title II “common carriers.”

As anticipated, the decision was quickly challenged in court and in Congress. A broad coalition of network neutrality advocates successfully stymied Congressional efforts to undermine the FCC’s Open Internet Order, and library organizations filed as amici at the U.S. Appeals Court for the D.C. Circuit. In June 2016, the three-judge panel affirmed the FCC’s rules.

What’s the threat?

During the presidential campaign, and with more specificity since the election, President-elect Donald Trump and members of his transition team, as well as some Republican members of Congress and the FCC, have made rolling back network neutrality protections a priority for action.

Here’s a sample of what we are reading and hearing these days:

As in the past, attacks on network neutrality may take many different forms, including new legislation, judicial appeal to the Supreme Court, initiating a new rulemaking and/or lack of enforcement by new FCC leadership, or new efforts by ISPs to skirt the rules.

For instance, there may be an effort by some Members of Congress to craft a “compromise” bill that would prohibit blocking and degradation by statute but reverse the FCC’s decision to classify ISPs as Title II common carriers. We are wary, however, that this so-called compromise may not give the FCC the authority to enforce the statutory rules.

So, now what?

As the precise shape of the attacks is still taking form, the library and higher education communities are beginning to connect and engage in planning discussions. We will monitor developments and work with others to mobilize action to ensure Open Internet protections are preserved.

Library advocates can help in several ways:

  • Stay informed via District Dispatch blog (subscribe here) and ARL Policy Notes blog (subscribe here)
  • Sign up for Action Alerts so we can reach you quickly when direct action is needed
  • Share your stories, blog and engage on social networks about the importance of network neutrality and the need to defend it

Larra Clark is Deputy Director for the ALA Office for Information Technology Policy and Public Library Association. Krista Cox is ARL Director of Public Policy Initiatives. Kara Malenfant is ACRL Senior Strategist for Special Initiatives.

The post Network neutrality in the crosshairs appeared first on District Dispatch.

Islandora: Updates: the Second Islandora Conference, May 15 - 19, 2017

planet code4lib - Mon, 2017-01-09 15:16

The first Islandora conference took place in Charlottetown, PEI, back in August of 2015. It was a rousing success, so we're doing it again, this time in Hamilton, Ontario.

The Islandora Foundation, with Islandoracon host McMaster University, invites you to join us May 15 - 19 for sessions, workshops, and the best opportunity you'll get to meet up with your fellow Islandora users and share your knowledge.

Registration is open, with Early Bird rates until January 31st.

Call for Proposals

Our Call for Proposals is ongoing! Extended to January 13th, the field is wide open and we are actively seeking sessions. If you work with Islandora and have done something that you'd like to share with the community, please consider presenting a poster or 30 minute session.

Speakers get a discounted rate for conference registration.

Logo Contest

One of the features of Islandora events is the t-shirt given to all attendees. Every camp has its own logo, and so shall the conference. We want to give a free registration and an extra t-shirt to the Islandoracon attendee who comes up with the best logo to represent our second conference.

This was the logo the first time around:

Entries will be accepted through January 31st, 2017. Entries will be judged by the Planning Committee and a winner will be selected and announced in early February. Details here.

Schedule

The detailed schedule is still pending, but the rough outline for the week goes like this:

Monday, May 15: Hackfest
Tuesday, May 16: Opening and General Sessions
Wednesday, May 17: General Sessions
Thursday, May 18: Workshops
Friday, May 19: Post-Conference Sessions 

The Hackfest will be structured so that all sorts of skillsets can contribute, so please don't feel you have to be a developer to join in!

Workshops will be 90 minutes each, led by expert members of the Islandora community. They will cover the following topics:

  • Working with Linked Data and Ontologies
  • Islandora CLAW Overview
  • Solr Tuning and Solr Views
  • Working with Form Builder
  • Infrastructure and Performance
  • Islandora Scholar
  • Querying and SPARQL update
  • Islandora 101

Post-Conference Sessions will be a mix of longer/more specific workshops, Interest Groups meetings, working groups, and other content that doesn't fit into the main track of the conference.

Sponsors

Islandoracon will be brought to you through the support of a number of sponsors from our community. If you would like to join them, you can find out more about sponsoring Islandoracon here.

Many thanks to those who have already contributed:

Host

Gold


Silver


Bronze

None yet!

Code of Conduct

All Islandora gatherings, be they online or in person, are covered by our community Code of Conduct.

Let's be friendly, professional, and safe.

LibUX: The Nintendo Switch

planet code4lib - Mon, 2017-01-09 05:48

Even if you’re not much of a gamer, the modular design of the Nintendo Switch is super compelling. What you see in its reveal is an experience that moves not just between your living room screen and your backpack, but one that adapts to different social contexts.

Its controllers — joycons — peel off: they can attach to a more conventional lean-back controller, pop onto the sides of the screen when you leave the house, or be divvied up between friends for — you know — some rooftop co-op (watch the video, really). What’s more, there’s a good chance that joycons can be swapped out for alternate designs.

Nintendo’s really selling the experience here.

Notes

Remediation is the process through which the characteristics and approaches of competing media are imitated, altered, and critiqued in a new medium… (or) the representation of one medium in another. Meredith Davis

Listen and subscribe

If you like, you can download the MP3 or subscribe to LibUX on Stitcher, iTunes, YouTube, Soundcloud, Google Music, or just plug our feed straight into your podcatcher of choice.

District Dispatch: Rethinking education of youth and children’s librarians

planet code4lib - Wed, 2017-01-04 22:21

Guest post by Mega Subramaniam and Amanda Waugh*

Recent reports from a wide variety of sources including the Institute of Museum and Library Services, the Joan Ganz Cooney Center (leading researchers on media and young children), and the Young Adult Library Services Association are a clarion call to think differently about the education of youth and children’s librarians. Their findings outline several areas for growth, including that libraries should “bridge the growing digital and knowledge divide,” “leverage teens’ motivation to learn,” “provide workforce development training” and “serve as the connector between teens and other community agencies.”

Photo credit: Los Angeles Public Library

Working with partners such as ALA’s Office for Information Technology Policy (OITP) and YALSA, a group of faculty and researchers from the University of Maryland came together to develop a continuing education program for youth librarians serving young people from birth to 18.

With the goal of answering this call to action, a new, online Graduate Certificate of Professional Studies in Youth Experience (YX) will be offered at the College of Information Studies at the University of Maryland (UMD) in May 2017.

The YX Certificate will train librarians working with children and teens to:

  • demonstrate an understanding of the issues, concepts and policies related to youth-led learning and programming through libraries;
  • implement best practices to be inclusive of all youth’s needs, in particular youth from disadvantaged populations;
  • apply core theories and models from information science and learning sciences to address needs of youth; and
  • partner with other cultural institutions and community organizations to help with youth programming, education and other projects related to youth development.

There are several events coming up in January 2017 that are relevant to anyone interested in this certificate:

Online information session on January 10, at 1 p.m. EST

YX faculty will describe the courses in this certificate that will enhance librarians’ skills to incorporate design thinking, participatory design, and connected learning into their programming, as well as provide details on the application process. The session will be recorded and the link made available after the session. To participate, please complete the following form either to attend the session or to receive the recording of the session: https://go.umd.edu/52b.

ALA Midwinter Conference in Atlanta, January 22-23

The YX faculty will conduct participatory design/information sessions with public librarians working with children and teens. This interactive session will help guide the development of post-MLIS continuing education for youth, teen and children’s librarians, as well as learning more about the Graduate Certificate of Professional Studies in YX. Public librarians working with children and teens are invited to join this session with a team of researchers and library educators to re-envision the next generation of professional education (light refreshments included). Sessions will be on Sunday, January 22 (Room: GWCC A410) and Monday, January 23 (Room: GWCC A406) from 10:30 am to noon. If you would like to participate, please follow this link to register: https://go.umd.edu/52a.

And most importantly, applications for the first cohort of Graduate Certificate of Professional Studies in Youth Experience (YX) offered by the College of Information Studies at the University of Maryland will open on January 9, 2017. Substantial tuition support is available through the generosity of the Institute of Museum and Library Services. More information about the certificate and the application process is available at: http://yx.umd.edu/.

We are excited to share information about this program with you. If you have any questions, send an email to the YX team at yxischool@umd.edu.

 

Mega Subramaniam (@mmsubram) is an Associate Professor at the University of Maryland’s College of Information Studies, studying young adults’ use of libraries for their development of digital literacies and information practices. She led the development of this certificate, and serves as the certification Director.

Amanda Waugh (@amandainmd) is a doctoral candidate at the University of Maryland’s College of Information Studies studying the information practices of teens in online communities. She is a certified school librarian with experience in elementary schools. Amanda provides administrative support to the YX Certificate.

The post Rethinking education of youth and children’s librarians appeared first on District Dispatch.

LITA: Jobs in Information Technology: January 4, 2017

planet code4lib - Wed, 2017-01-04 19:54

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

State of Oregon Law Library, Communications Librarian, Salem, OR

Northwest Area Education Agency, Administrator, Media/Technology and Educational Services, Sioux City, IA

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Eric Hellman: How to check if your library is leaking catalog searches to Amazon

planet code4lib - Wed, 2017-01-04 16:42
I've been writing about privacy in libraries for a while now, and I get a bit down sometimes because progress is so slow. I've come to realize that part of the problem is that the issues are sometimes really complex and technical; people just don't believe that the web works the way it does, violating user privacy at every opportunity.

Content embedded in websites is a huge source of privacy leakage in library services. Cover images can be particularly problematic. I've written before that, without meaning to, many libraries send data to Amazon about the books a user is searching for; cover images are almost always the culprit. I've been reporting this issue to the library automation companies that enable this, but a year and a half later, nothing has changed. (I understand that "discovery" services such as Primo/Summon even include config checkboxes that make this easy to do; the companies say this is what their customers want.)

Two indications that a third-party cover image is a privacy problem are:
  1. the provider sets tracking cookies on the hostname serving the content.
  2. the provider collects personal information, for example as part of commerce. 
For example, covers served by Amazon send a bonanza of actionable intelligence to Amazon.

Here's how to tell if your library is sending Amazon your library search data.
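Before opening the developer tools, you can run a quick scripted pre-check for the most obvious symptom: cover images hot-loaded from Amazon's image servers in a results page. This is a rough sketch in Python under stated assumptions: the search URL is a placeholder for your own catalog, and a catalog that builds its pages in JavaScript (Primo, for instance) won't show the images in the raw HTML, so the browser walkthrough below remains the definitive test.

# Rough pre-check: does a catalog results page embed Amazon-hosted images?
# SEARCH_URL is a placeholder; substitute your own catalog's search URL.
import re
import urllib.request

SEARCH_URL = "https://catalog.example.edu/search?q=killing+trump"  # placeholder

with urllib.request.urlopen(SEARCH_URL) as resp:
    charset = resp.headers.get_content_charset() or "utf-8"
    html = resp.read().decode(charset, errors="replace")

# Amazon serves covers from hosts such as images.amazon.com and
# ssl-images-amazon.com; any hit means cover requests go straight to Amazon.
pattern = r'https?://[^"\'\s>]*(?:images\.amazon\.com|images-amazon\.com)[^"\'\s>]*'
hits = sorted(set(re.findall(pattern, html)))

if hits:
    print("Amazon-hosted image URLs found in the results page:")
    for url in hits:
        print("  ", url)
else:
    print("None in the raw HTML (a script-built page may still request them).")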
Setup

You'll need a web browser equipped with developer tools; I use Chrome. Firefox should work, too.

Log into Amazon.com. They will give you a tracking cookie that identifies you. If you buy something, they'll have your credit card number, your physical and electronic addresses, records about the stuff you buy, and a big chunk of your web browsing history on websites that offer affiliate linking. These cookies are used to optimize the advertisements you're shown around the web.

To see your Amazon cookies, go to Preferences > Settings. Click "Show advanced settings..." (It's hiding at the bottom.)

Click the "Content settings..." button.

Now click the "All cookies and site data" button.

In the "Search cookies" box, type "amazon". Chances are, you'll see something like this.

I've got 65 cookies for "amazon.com"!

If you remove all the cookies and then go back to Amazon, you'll get 15 fresh cookies, most of them set to last for 20 years. Amazon knows who I am even if I delete all the cookies except "x-main".

Test the Library

Now it's time to find a library search box. For demonstration purposes, I'll use Harvard's "Hollis" catalog. I would get similar results at 36 different ARL libraries, but Harvard has lots of books and returns plenty of results. In the past, I've used What to expect as my search string, but just to make a point, I'll use Killing Trump, a book that Bill O'Reilly hasn't written yet.

Once you've executed your search, choose View > Developer > Developer Tools

Click on the "Sources" tab to see the requests made of "images.amazon.com". Amazon has returned 1x1 clear pixels for three requested covers. The covers are requested by ISBN. But that's not all the information contained in the cover request.

To see the cover request, click on the "Network" tab and hit reload. You can see that the cover images were requested by a javascript called "primo_library_web" (Hollis is an instance of Ex Libris' Primo discovery service.)

Now click on the request you're interested in. Look at the request headers.


There are two of interest, the "Cookie" and the "Referer".

The "Cookie" sent to Amazon is this:
x-main="oO@WgrX2LoaTFJeRfVIWNu1Hx?a1Mt0s";
skin=noskin; session-token="bcgYhb7dksVolyQIRy4abz1kCvlXoYGNUM5gZe9z4pV75B53o/4Bs6cv1Plr4INdSFTkEPBV1pm74vGkGGd0HHLb9cMvu9bp3qekVLaboQtTr+gtC90lOFvJwXDM4Fpqi6bEbmv3lCqYC5FDhDKZQp1v8DlYr8ZdJJBP5lwEu2a+OSXbJhfVFnb3860I1i3DWntYyU1ip0s="; x-wl-uid=1OgIBsslBlOoArUsYcVdZ0IESKFUYR0iZ3fLcjTXQ1PyTMaFdjy6gB9uaILvMGaN9I+mRtJmbSFwNKfMRJWX7jg==; ubid-main=156-1472903-4100903;
session-id-time=2082787201l;
session-id=161-0692439-8899146

Note that Amazon can tell who I am from the x-main cookie alone. In the privacy biz, this is known as "PII" or personally identifiable information.

The "Referer" sent to Amazon is this:
http://hollis.harvard.edu/primo_library/libweb/action/search.do?fn=search&ct=search&initialSearch=true&mode=Basic&tab=everything&indx=1&dum=true&srt=rank&vid=HVD&frbg=&tb=t&vl%28freeText0%29=killing+trump&scp.scps=scope%3A%28HVD_FGDC%29%2Cscope%3A%28HVD%29%2Cscope%3A%28HVD_VIA%29%2Cprimo_central_multiple_fe&vl%28394521272UI1%29=all_items&vl%281UI0%29=contains&vl%2851615747UI0%29=any&vl%2851615747UI0%29=title

To put this plainly, my entire search session, including my search string killing trump, is sent to Amazon, alongside my personal information, whether I like it or not. I don't know what Amazon does with this information. I assume that if a government actor wants my search history, they will get it from Amazon without much fuss.
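To see how little effort it takes to pull the search term back out of that Referer, here is a minimal sketch using only Python's standard library. The URL below is abbreviated from the Hollis request above, and the alternative parameter names are illustrative guesses for non-Primo catalogs.

# Minimal sketch: recover the search term from a leaked Referer header.
from urllib.parse import urlsplit, parse_qs

referer = ("http://hollis.harvard.edu/primo_library/libweb/action/search.do"
           "?fn=search&ct=search&mode=Basic&vid=HVD"
           "&vl%28freeText0%29=killing+trump")   # abbreviated from the post

params = parse_qs(urlsplit(referer).query)       # decodes %28/%29 and '+'
for name, values in params.items():
    if "freeText" in name or name in ("q", "query", "lookfor"):
        print("search term visible to Amazon:", values[0])
# prints: search term visible to Amazon: killing trump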

I don't like it.
Rant

[I wrote a rant, but I decided to save it for a future post if needed.] Anyone want a Cookie?

Notes 12/23/2016:
  1. As Keith Jenkins noted, users can configure Chrome and Safari to block 3rd party cookies. Firefox won't block Amazon cookies, however. And some libraries advise users not to block 3rd party cookies because doing so can cause problems with proxy authentication.
  2. If Chrome's network panel tells you "Provisional headers are shown", this means it doesn't know what request headers were really sent because another plugin is modifying headers. So if you have HTTPS Everywhere, Ghostery, Adblock, or Privacy Badger installed, you may not be able to use Chrome developer tools to see request headers (the HAR-based check sketched after these notes sidesteps this). Thanks to Scott Carlson for the heads up.
  3. Cover images from Google leak similar data; as does use of Google Analytics. As do Facebook Like buttons. Et cetera.
  4. Thanks to Sarah Houghton for suggesting that I write this up.
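If the manual header inspection gets tedious, or browser plugins hide the headers (note 2), one repeatable alternative is to export the recorded requests from the Network panel as a HAR file and scan it offline; a HAR is plain JSON. A rough sketch, assuming the export is saved as capture.har and using a hand-picked list of third-party hosts to flag:

# Scan a HAR export (HAR 1.2, plain JSON) for third-party requests that
# carried Cookie or Referer headers. File name and host list are assumptions.
import json

THIRD_PARTY_HOSTS = ("amazon.com", "google.", "facebook.")

with open("capture.har", encoding="utf-8") as f:
    har = json.load(f)

for entry in har["log"]["entries"]:
    url = entry["request"]["url"]
    if not any(host in url for host in THIRD_PARTY_HOSTS):
        continue
    headers = {h["name"].lower(): h["value"] for h in entry["request"]["headers"]}
    print(url)
    if "cookie" in headers:
        print("   cookies sent:", headers["cookie"][:60], "...")
    if "referer" in headers:
        print("   referer sent:", headers["referer"])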

David Rosenthal: Error 400: Blogger is Bloggered

planet code4lib - Wed, 2017-01-04 16:00
If you tried to post a comment and got the scary message:
Bad Request
Error 400

please read below the fold for an explanation and a work-around.

This problem appeared on many Blogger blogs at the end of October, and has collected a lot of reports on the Blogger Help Forum. It is a really annoying problem but, in the two months since replicating it, Google has made exactly zero progress toward a fix. Which sucks.

The problem is that, at least for me and many others, 100% of the time publishing a comment after previewing it gets "Bad Request Error 400". Publishing a comment without previewing it works. So what I'm forced to do is:
  • Click "Post a comment".
  • Write the comment.
  • Select the whole comment and copy it to the clipboard.
  • Click "Preview".
  • Click "Publish", get "Bad Request Error 400".
  • Left-click on the "Back" button, choose the post in which you were commenting.
  • Click "Post a comment".
  • Paste the comment from the clipboard.
  • Click "Publish" - do NOT click "Preview".
It is really unacceptable that for the last two months we have been reduced to this level of bogosity.
