Feed aggregator

Nicole Engard: SxSW: Behind the Social at WGBH

planet code4lib - Fri, 2015-03-13 21:11

WGBH is the number one producer of content for PBS. Great content is not a problem for the people on this panel – the problem they sometimes have is doing more than the bare minimum with social.

Today’s panel was made up of:

Molly Jacobs works with American Experience on PBS and spends about 20% of her time on social networks. Hannah Auerbach from Antiques Roadshow spends up to 40% of her time on social networks. Olivia Wong works on Masterpiece on PBS and spends about 100% of her time on social media.

Social Media Best Practices:

  1. Know your audience

  2. Have a unique voice
  3. Plan ahead
  4. Prioritize your platforms
  5. Take advantage of partnerships
  6. Be visual
  7. Engage

For #1, Molly says ‘content only’ instead of ‘content first’ for the fans of her American history series. The goal is to be part of the audience – to be total history nerds – not just posting stuff for the audience but posting for everyone. Instead of announcing that a broadcast is coming up, she posts about the events the broadcast covers to build interest ahead of time.

For #2, Hannah said they decided that their voice was that of a trade magazine – they post content from a lot of other related folks.

For Olivia, the biggest challenge they have is that there is a lot of buzz out there ahead of time because Downton airs in the UK before it airs here. For #2, the voice of Masterpiece is a knowledgeable, fun voice that knows a lot about the history of Masterpiece and entertainment.

Molly plans ahead (#3) by posting facts that will later be shown on an episode of American Experience. Hannah does this by giving context to conversations that are going on around her show. For example, Antiques Roadshow had nothing to say about whether a dress was gold or blue, so they passed on joining that conversation. “Be nimble. Instead of news hijacking, figure out how you can add context to what’s happening now”

When talking about #4, Olivia said she has only recently started using tools like Vine and Instagram because it’s hard to figure out what you can offer on each platform that’s different when your followers overlap. You don’t want to repeat content on all outlets; you have to figure out how to diversify and keep it unique. Hannah is primarily focused on Facebook and Pinterest with content from Antiques Roadshow. Molly encourages us not to be afraid to fail – try out the tools – if you fail it’s no big deal. You can choose the one tool that you love and start there.

Side note – give Vine a try because it’s a great way to share little nuggets with your audience.

We live in a sharing economy. So no one is going to listen to you if you’re pointing at yourself all the time – you have to share others’ content. Olivia talked about how (for #5) they formed partnerships with Jane Austen fan bloggers to get them to live tweet during Downton Abbey. They even found fashion bloggers to live tweet about the fashion in the show. They aren’t paid; they’re just asked if they want to tweet while they watch, and they get promotion by doing so. “Always be on the lookout for high profile people who love your content. They are the best ambassadors”

Olivia brings in cast to do video interviews for #6 (visuals). Molly mentioned that everyone loves visuals – but keep in mind that different types of audiences like different types of content. Keep diversification in mind.

Olivia is always looking for ways to do things a little bit differently to engage folks (#7). What are the new things they can do that still feel familiar? Again – don’t be afraid to fail. Hannah finds that they have different audiences – their social media audience (with the exception of Facebook) is younger than their broadcast audience, and it excites her to see social media reaching those new users. Facebook users, though, get really engaged and participate in the online community – which shows that the social media strategies are succeeding. Molly gets excited when a post on Facebook shows a reach higher than the number of followers they have on the page.

Each speaker gave us one final lesson they’ve learned from using social media on a shoestring budget.

Olivia started by telling us about her experience with the finale of Downton. Her team bought a poll app for Facebook for $200 (My Polls) and posted a poll every day leading up to the finale asking people what they thought was going to happen. They got over 30,000 viewers of the polls and saw a 70% uptick in participation on their page. At the end they turned the most popular answers into an infographic, and they posted all the responses to the Masterpiece website.

Hannah has been emailing appraisers before their segments air to let them know when they will be on. Recently she started including social media links and hashtags in the emails to get more interaction – a free way to get others tweeting and posting about the show.



CrossRef: New CrossRef Members

planet code4lib - Fri, 2015-03-13 20:16

Updated March 9, 2015

Voting Members
American Mental Health Counselors Association
Audio Engineering Society
Auricle Technologies, Pvt., Ltd.
Austrian Geological Society (OGG)
Croatian Society of Art Historians
Diacronia
Editorial Board of Journal Radioelectronics, Nanosystems, Information Technology RENSIT
Entomological Society of Israel
Eurasian Academy of Sciences
Future Energy Service and Publishing
Harvard Education Publishing Group
INCDMTM
Institute of Environmental Sciences and Technology (IEST)
International Academy Publishing (IAP)
Neurosciences
Paul Mellon Centre for Studies in British Art
Pyatigorsk State Linguistic University
Techmind Research Society
The Finnish Society of Photogrammetry and Remote Sensing
The NCHERM Group, LLC

Represented Members
Bumhan Philosophical Society
Centro Universitario de Maringa
Daegu Historical Association
Historical and Social Educational Ideas
Institute of Archaeology and Ethnography SB RAS
IRBIS
Journal of Security Strategies
Kazan Medical Journal
Modern Studies in English
Panorama of Brazilian Law
Raizes e Amidos Tropicais/Tropical Roots and Starches
Real Economy Publishing
Sociedade Brasileira de Dermatologia

Last updated March 2, 2015

Voting Members
Asian Scientific Publishers
Global Business Publications
Institute of Polish Language
Journal of Case Reports
Journal Sovremennye Tehnologii v Medicine
Penza Psychological Newsletter
QUASAR, LLC
Science and Education, Ltd.
The International Child Neurology Association (ICNA)
Universidad de Antioquia

Represented Members
Balkan Journal of Electrical & Computer Engineering (BAJECE)
EIA Energy in Agriculture
Faculdade de Enfermagem Nova Esperanca
Faculdade de Medicina de Sao Jose do Rio Preto - FAMERP
Gumushane University Journal of Science and Technology Institute
Innovative Medical Technologies Development Foundation
Laboratorio de Anatomia Comparada dos Vertebrados
Nucleo para o Desenvolvimento de Tecnologia e Ambientes Educacionais (NPT)
The Journal of International Social Research
The Korean Society for the Study of Moral Education
Turkish Online Journal of Distance Education
Uni-FACEF Centro Universitario de Franca
Yunus Arastirma Bulteni

Eric Hellman: 16 of the top 20 Research Journals Let Ad Networks Spy on Their Readers

planet code4lib - Fri, 2015-03-13 13:55
A recent query to the "LibLicense" listserv asked:
Is there any kind of organization that has put together a website or list of database providers/publishers that indicate the extent to which they respect patron privacy?

The answer is "no", but I thought it would be useful to look at the top journal publishers to see if their websites are built with an orientation towards reader privacy.

I came up with a list of 20 top journals. I took the 10 journals with the most citations and the 10 journals with the most citations per published article, according to the SCImago journal rankings.

I used Ghostery to count the number of trackers present on the web page for an article in each journal. Each of these trackers gets a feed of each user's browsing behavior. I looked at the trackers to see if user browsing behavior was being sent to advertising networks. I also determined whether the journal supported secure connections. Based on these results, I assigned a letter grade for each journal.
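Ghostery is a browser extension, but the two signals described above – third-party scripts and HTTPS support – can also be approximated in a few lines of code. The sketch below is illustrative only, not the methodology used for these grades: it assumes the Python requests and beautifulsoup4 packages, counts third-party script hosts rather than consulting a tracker database, and the URL is a placeholder.

    # Rough approximation of two of the checks described above: which third-party
    # hosts a page loads scripts from, and whether the same page answers over HTTPS.
    # Assumes the requests and beautifulsoup4 packages; the URL is a placeholder.
    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup

    def third_party_script_hosts(article_url):
        """Return the external hosts that <script src=...> tags point at."""
        page_host = urlparse(article_url).hostname
        html = requests.get(article_url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        hosts = set()
        for tag in soup.find_all("script", src=True):
            host = urlparse(urljoin(article_url, tag["src"])).hostname
            if host and host != page_host:
                hosts.add(host)
        return hosts

    def supports_https(article_url):
        """True if the article can also be fetched over HTTPS."""
        https_url = article_url.replace("http://", "https://", 1)
        try:
            return requests.get(https_url, timeout=30).ok
        except requests.RequestException:
            return False

    if __name__ == "__main__":
        url = "http://journal.example.org/article/123"  # placeholder, not a real journal
        print(sorted(third_party_script_hosts(url)))
        print("HTTPS supported:", supports_https(url))

Counting script hosts undercounts trackers loaded by other trackers, which is part of why a browser-based tool like Ghostery sees more.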
Passing, Grade A

None of the scholarly journals I looked at earned excellent grades for reader privacy.

Passing, Grade B

Two journals, both published by the American Physical Society, earned good grades for reader privacy. They use a social sharing widget that respects privacy.

  • Reviews of Modern Physics. Ranked #2 in citations/article. 1 tracker (Google Analytics). No advertising networks. Supports HTTPS, but allows insecure connections.
  • Physical Review Letters. Ranked #9 in total citations, #393 in citations/article. 1 tracker (Google Analytics). No advertising networks. Supports HTTPS, but allows insecure connections.

Passing, Grade C

Two journals, both published by Annual Reviews, earned acceptable grades for reader privacy.

  • Annual Review of Immunology. Ranked #3 in citations/article. 1 tracker (Google Analytics). No advertising networks. Insecure connections only.
  • Annual Review of Biochemistry. Ranked #5 in citations/article. 1 tracker (Google Analytics). No advertising networks. Insecure connections only.

Failing, Grade D

Failing grades are earned by publishers that allow their readers to be tracked by advertising networks. These networks get access to the full browsing history of a user and track them with cookies; it's difficult for users to maintain anonymity when most of their web browsing is exposed to tracking.

  • Science, published by AAAS. Ranked #5 in total citations, #49 in citations/article. 10 trackers. Multiple advertising networks. Science gets a D rather than an F because it supports HTTPS, although it allows insecure connections.

Failing, Grade F

15 journals earned failing grades because their participation in advertising networks exposes their readers to tracking and spying. Some of the publishers are more flagrant about this than others. Maybe I should have given F+ to some and F- to others. All of these journals force insecure connections.

  • PLoS One, published by the Public Library of Science. #1 in total citations, #1776 in citations/article. 3 trackers. One advertising network.
  • Proceedings of the National Academy of Sciences of the United States, published by the National Academy of Sciences. #2 in total citations, #155 in citations/article. 3 trackers. One advertising network.
  • Journal of Biological Chemistry, published by the American Society for Biochemistry and Molecular Biology. #8 in total citations, #513 in citations/article. 3 trackers. One advertising network.
  • Quarterly Journal of Economics, published by Oxford Journals. #6 in citations/article. 4 trackers. One advertising network.
  • Chemical Communications, published by the Royal Society of Chemistry. #10 in total citations, #680 in citations/article. 6 trackers. Multiple advertising networks.
  • Journal of the American Chemical Society, published by the American Chemical Society. #4 in total citations, #185 in citations/article. 7 trackers. Multiple advertising networks.
  • Chemical Reviews, published by the American Chemical Society. #10 in citations/article. 8 trackers. Multiple advertising networks.
  • CA: A Cancer Journal for Clinicians, published by Wiley. #1 in citations/article. 9 trackers. Multiple advertising networks.
  • Cell, published by Elsevier. #4 in citations/article. 9 trackers. Multiple advertising networks.
  • Angewandte Chemie - International Edition, published by Wiley. #6 in total citations, #202 in citations/article. 11 trackers. Multiple advertising networks.
  • Nature Genetics, published by Nature Publishing Group. #7 in citations/article. 11 trackers. Multiple advertising networks.
  • Nature, published by Nature Publishing Group. #3 in total citations, #11 in citations/article. 11 trackers. One advertising network.
  • Nature Reviews Genetics, published by Nature Publishing Group. #8 in citations/article. 12 trackers. Multiple advertising networks.
  • Nature Reviews Molecular Cell Biology, published by Nature Publishing Group. #9 in citations/article. 13 trackers. Multiple advertising networks.
  • New England Journal of Medicine, published by the Massachusetts Medical Society. #7 in total citations, #41 in citations/article. 14 trackers. Multiple advertising networks.
Remarks

I'm particularly concerned about the medical journals that participate in advertising networks. Imagine that someone is researching clinical trials for a deadly disease. A smart insurance company could target such users with ads that mark them for higher premiums. A pharmaceutical company could use advertising targeting researchers at competing companies to find clues about their research directions. Most journal users (and probably most journal publishers) don't realize how easily online ads can be used to gain intelligence as well as to sell products.
In defense of the publishers, it should be noted that the web advertising business has developed very rapidly over the past few years due to intense competition. A few years ago, the attacks on user privacy enabled by the ad networks' massive data collection were mostly theoretical. But competition has led the networks to increase their targeting ability and scoop up more and more "demographic" data. What was theory a few years ago is today's reality. We still have time to prevent tomorrow's privacy disaster, but change will only happen if the institutions that purchase and fund these journals learn what's really going on and start to demand the privacy that readers deserve.

HangingTogether: Complete* List of Terry Pratchett’s Discworld Novels

planet code4lib - Fri, 2015-03-13 13:29

In honor of Terry Pratchett, I want to share with everyone one of my favorite places in all the worlds – Terry Pratchett’s Discworld.  If you know it, you love it.  If you don’t know it, I highly encourage you to explore it.  There are over 40 books in the series, and I’ve read them all – many more than once. Well, to be honest, many more than a dozen times. It is a world populated by many creatures including, but not limited to: humans, sentient luggage, dwarves, trolls, witches, wizards, vampires, werewolves, heroes, gods, and one Nobby Nobbs.  These books never fail to inspire me, and I want to share them with you.

Fortunately, lots and lots of libraries around the world hold these books.  To help you find them, I’ve compiled the complete* list of all the Discworld books with links to WorldCat (so that you can find them near you).  Enjoy! And be warned – reading one book usually leads to reading 3 or more.  (It’s the original binge-watching. I know, I’ve been binge-reading Pratchett since the 1990s.)

Complete* List of Terry Pratchett’s Discworld Novels (In order of publication date)

Not Discworld – but I love it; Good Omens with Neil Gaiman.

There are more Discworld books that are not novels: mapps, cookbooks, portfolios, and handbooks.  You can find these in your library too, and WorldCat can help:

And now, if you’ll excuse me I have appointments with Rincewind, Commander Vimes, Tiffany Aching, Granny Weatherwax, Moist von Lipwig, and the Librarian**. I’ll send a clacks to let you know when I’ll be back.

Terry Pratchett, I will never forget you and thank you for sharing the Discworld with our world.
THE END

 

* The list is as complete as I could make it. But I’m only human, and the series is impressive, if not magical unto itself and does mysterious things. (I don’t have proof, but I think the books change a bit on every 3rd reading. Text can be slippery that way.) Whatever I’ve missed, please post it in the comments with a link to WorldCat if you can.

** The Librarian is a side character in many stories, but he’s a personal favorite.  Just don’t call him a monkey, unless you want your arm ripped off.

^ these two are, strictly speaking, picture books and not novels. But I put them on the list anyway.

*^Also, if you’ve never read Pratchett before, I recommend you just pick one and read.  The Discworld series is actually made up of several series, and they do have a reading order. A quick internet search for “Discworld reading order” can lead you to some guides. You don’t really need it though. Read what you think looks interesting and just dive in.  I don’t think the Discworld would approve of that much order imposed upon it anyway.

About JD Shipengrover

JD Shipengrover. OCLC Research. Information Architect. My primary focus is to bring user-centered interface design and usability principles to the web applications created by OCLC Research. I have been with OCLC for over 7 years and have been working as a Web Creative for 15+ years.


Library of Congress: The Signal: Creating Workflows for Born-Digital Collections: An NDSR Project Update

planet code4lib - Fri, 2015-03-13 13:28

The following is a guest post by Julia Kim, National Digital Stewardship Resident at New York University Libraries.

Julia Kim analyzing Jeremy Blake’s digital artwork. Photo by Elena Olivo.

I’m now into the last leg of my nine-month residency, and I’m amazed by what has been accomplished and the major steps still ahead of me. In this post, I’ll give a project update on my primary task: to create, test and implement access-driven workflows of born-digital collections at New York University Libraries.

My residency is very broad; I am tasked with investigating and implementing workflows that encompass the entirety of the born-digital process, from accession to access (project overview). This means that while I spent a month learning digital forensics techniques, I have also researched and implemented workflow steps that occur before acquisition and after ingest. Rather than signing off when the bits have been checked, duplicated and dispersed in multiple locations to long-term storage, I’ve also focused on access. In the past five months, I’ve worked on many collections. Such depth and breadth has been crucial. Time and again, I’ve been challenged to revise and refine my sense of the workflow.

The ingestion of incoming born-digital material is time consuming. In many cases, I only create a bit-exact disk image or copy of the content for ingest with minimal metadata from my end. NYU’s three archives (and now Abu Dhabi) collect actively. Imaging or copying files, validating, bagging and ingesting such increasingly large collections tie up our dedicated imaging station and localized storage. This past week, for example, I finished ingesting a collection into the repository with 2 TB, 5 TB and 3 TB hard drives. It took the full weekend to create the initial image of the 2 TB hard drive and validate with checksums and approximately the same amount of time for ingest into the repository. The Digital Forensics Lab, however, contains a number of other computers at my disposal in addition to the imaging desktop. This is also extremely helpful with collections that rely on other operating systems.
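The imaging itself is usually done with dedicated forensic tools, but the validate-and-bag portion of a workflow like this is easy to picture in code. Below is a minimal sketch, not NYU's actual workflow: it assumes the bagit-python package, and the paths and bag metadata are hypothetical.

    # Minimal sketch of the "validate with checksums, then bag for ingest" steps,
    # not NYU's actual workflow. Assumes the bagit package (bagit-python);
    # paths and bag metadata are hypothetical.
    import hashlib
    from pathlib import Path

    import bagit

    def sha256(path, chunk_size=1024 * 1024):
        """Compute a SHA-256 checksum, reading the file in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_copy(source_dir, copy_dir):
        """Confirm every file in the source has a checksum-identical copy."""
        for src in Path(source_dir).rglob("*"):
            if src.is_file():
                dst = Path(copy_dir) / src.relative_to(source_dir)
                assert sha256(src) == sha256(dst), f"checksum mismatch: {src}"

    if __name__ == "__main__":
        verify_copy("/mnt/source_drive", "/data/accession_001")  # hypothetical paths
        # Wrap the verified copy in a BagIt bag with fixity manifests,
        # ready to hand off to the repository for ingest.
        bag = bagit.make_bag("/data/accession_001", {"Source-Organization": "Example Library"})
        print("bag valid:", bag.is_valid())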

NYU’s Digital Forensics Laboratory.

Over the course of my residency I’ve also worked with the digital counterparts of previously published hybrid collections including Exit Art Archive (2 TB organizational RAID) and the Robert Fitch Papers (several floppy disks with easily renderable text files and no researcher restrictions). The collection I’ve spent the most time with is the Jeremy Blake Papers which were acquired in 2007. These “papers” include files copied on-site at the donor’s house from Blake’s MacBook Pro, an external hard drive and a flash drive. NYU also acquired several hundred optical disks, three additional hard drives, dozens of zip disks and digital linear tapes. The Blake Papers present many of the challenges that hinder access: sheer data size and variety of media format types, a prevalence of incompletely documented or misunderstood proprietary file formats, and complicated rights and privacy restrictions.

Jeremy Blake’s PSD files, accessed with a Power PC.

The bulk of the Blake Papers is composed of Photoshop files (PSD) that span the late 1990s to 2007. To create his work, Blake would collage different sources into Photoshop. These sources would be layered and further processed to create the dense and dreamlike imagery characteristic of his final moving image work. Blake would share these layered PSD files with close collaborators that animated his still images and composed the soundtracks under his close supervision.

PSD file format normalization was not a viable preservation solution. Normalization would render a file with fifty layers, turned on and off in different ways, into a singular flat image. Any normalization process would lose Blake’s working process, the area in which we thought his archive could be most valuable to future researchers. We cannot simply migrate the files to TIFF 6.0. Paradoxically, any TIFF that did encompass layers would no longer be a true TIFF.
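To make the loss concrete, here is a small sketch using the open-source psd-tools package (my choice for illustration, not a tool named by the project; the file name is hypothetical). The layer listing is the working process a flattened derivative cannot capture; the composite is all that flattening would retain.

    # What flattening keeps and what it loses, sketched with the open-source
    # psd-tools package (not a tool named in the project). File name is hypothetical.
    from psd_tools import PSDImage

    psd = PSDImage.open("blake_working_file.psd")

    # The layer structure -- names, visibility, blend modes -- carries the
    # working process that a flat derivative cannot capture.
    for layer in psd:
        print(layer.name, layer.visible, layer.blend_mode)

    # Compositing everything into a single raster image is all that a
    # normalized TIFF or PNG derivative would retain.
    psd.composite().save("blake_working_file_flattened.png")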

While Photoshop has retained robust backward and forward compatibility with its files and software, Blake’s working methods are very much a product of the intersection of developing technologies and art-making practices of his time. His methods were cutting edge at the time, but they seem unimaginably labor-intensive today. For these reasons, his works will be migrated through Photoshop software to the current version of Photoshop, but they will also be migrated and made accessible through emulations of the approximate software versions and operating systems used. Some of my focus recently was to create these emulations.

Emulated Access of Blake’s artwork.

Next month, I will lead and design a usability test of representative portions of the Jeremy Blake Papers and the Exit Art Collection with a small, representative group of NYU’s Fales Library & Special Collections researchers. This will serve as a pilot test for making complex media accessible through emulation. It will also be an opportunity to test my documentation as I explain these concepts and strategies to researchers unused to the idea of archival research done with only a (non-networked) laptop.

A secondary purpose will be to note qualities of interest to researchers. This may seem an odd question to pose, but given the still enormous effort needed to stabilize and make accessible this type of work, it is worth noting which qualities researchers are interested in. Their subjects of research and even their definition of “content” may differ. A digital humanist may be more interested in the timestamps across a large digital collection rather than any of the text and image “content” in the files themselves. Some researchers may be well versed in Photoshop’s changes, while some may only be interested in the finalized moving images. Through these pilot studies, I hope to answer some of these questions while creating a template for other archivists interested in replicating and adding to the data gathered from this study.

In addition to this technical work, I’m also coordinating a born-digital workflows CURATEcamp (April 23), which will be hosted at the beautiful landmark Brooklyn Historical Society in Brooklyn Heights. This un-conference will bring together digital archivists, stewards, repository managers, and staff involved in managing born-digital collections for discussions, presentations and demonstrations. In addition to two streams of small groups that will tackle issues like the Forensic Toolkit’s integration into workflows, we will also have a larger stream of demonstrations and workshops to highlight developments with BitCurator Access, for example.

In addition to CURATEcamp, I will be sharing updates of my work at the American Institute of Conservation conference (May 2015), as well as at the Society of American Archivists (August 2015). It’s been especially gratifying to be able to learn from different intersecting worlds and competencies, whether moving images, digital curation, fine art or archiving.

The activities and tasks mentioned in this post should keep me busy for the next two months. As someone who loves investigation and research with tangible “hands-on” components and outputs, this has been a great experience for me. I’d like to note that without the administrative and technical support from my mentors, Don Mennerich and Lisa Darms, this work would not have been possible at all. I have been able to explore very interesting questions with not only exceptional collections, but exceptional mentors.

HangingTogether: Introducing the 2015 OCLC Research Collective Collections Tournament! Madness!

planet code4lib - Thu, 2015-03-12 17:29

It’s March, and along with the approach of Spring, that means March Madness is around the corner – the NCAA Men’s and Women’s College Basketball Tournaments! This year, OCLC Research is presenting a library-themed tournament to help get you in the mood for the real thing – except this competition doesn’t require a basketball. Instead, get ready for the 2015 OCLC Research Collective Collections Tournament! #oclctourney

A collective collection is the combined collections of a group of institutions, with duplicate holdings removed, yielding the set of distinct publications held across the collections of the group’s members. Collective collections are an important concept for thinking about library collections today, as collection building and management increasingly take place within, and are informed by, the broader context of the system-wide library resource. OCLC Research has done a great deal of work with collective collections, culminating in our recently published (and award-winning!) volume Understanding Collective Collections. Our work in this area continues, but we’ve taken a little time out to have some fun with collective collections, with our own Collective Collections Tournament.

Here’s how the tournament works. Thirty-two athletic conferences receive an automatic bid into the Men’s and Women’s NCAA basketball tournaments. Using WorldCat data, we will construct the collective collection for each conference – that is, the distinct publications held across the library collections of all conference members. In the first round, the 32 conference collective collections will be randomly assigned into 16 pairs. Each pair of conference collections will then “compete” on the basis of some metric related to the contents of the collections. The “winner” of each pairing will then move on to the second round, and so on, until only two conference collections are left standing to compete for the championship!
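In data terms, a collective collection is simply the union of the members' holdings with duplicates removed. The toy sketch below illustrates the construction and the random first-round pairing; the conference names and work identifiers are made up, and the real tournament is of course computed from WorldCat holdings data.

    # Toy illustration of building conference collective collections and pairing
    # them for the first round. All data here is made up; the real tournament
    # is computed from WorldCat holdings.
    import random

    # Each conference maps to its member libraries' holdings (sets of work IDs).
    holdings = {
        "Conference A": [{"w1", "w2", "w3"}, {"w2", "w4"}],
        "Conference B": [{"w2", "w5"}, {"w5", "w6", "w7"}],
        "Conference C": [{"w1", "w8"}, {"w8", "w9"}],
        "Conference D": [{"w3", "w10"}, {"w10", "w11"}],
    }

    # The collective collection is the set of distinct publications held across
    # all members: a set union, which removes duplicate holdings automatically.
    collective = {conf: set().union(*members) for conf, members in holdings.items()}
    print({conf: len(works) for conf, works in collective.items()})

    # Round of 32 (here, a round of 4): shuffle and pair conferences at random.
    conferences = list(collective)
    random.shuffle(conferences)
    pairs = [tuple(conferences[i:i + 2]) for i in range(0, len(conferences), 2)]
    print(pairs)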

Here are the key dates:

  • Round of 32: Results posted Friday, March 20
  • Round of 16: Results posted Friday, March 27
  • Round of 8: Results posted Tuesday, March 31
  • Round of 4: Results posted Friday, April 3
  • Championship: Results posted Monday, April 6

You can participate! The Collective Collections Tournament will have a “bracket competition”. Enter the competition using our convenient entry form. You’ll be asked to select one of the 32 competing conferences. Choose a conference, and then follow the tournament to see how your conference fares. All entrants that have selected the winning conference will be entered into a random drawing for a $100 Visa Gift Card! If no one selects the winning conference, then a random drawing will be held among all entrants to determine the winner! Entries must be received by 5 PM Eastern time, Thursday, March 19, 2015. The winner will be announced on the HangingTogether blog no later than April 8, 2015. Please read the 2015 OCLC Research Collective Collections Tournament: Bracket Competition Official Rules (“Official Rules”); submitting an Entry shall constitute acknowledgment and acceptance of the Official Rules. The Collective Collections Tournament or the Bracket Competition is not endorsed by, associated with, or sponsored by, the National Collegiate Athletic Association (“NCAA”).

Please keep in mind that the tournament is not intended to show that one conference collective collection is “better” than another (and when you see the metrics we’ve chosen to compete on, you’ll see there is no danger of that!). Our purpose is to have some fun, but also to highlight the concept of collective collections, and demonstrate how they can be constructed and analyzed with WorldCat data. In reality, of course, collective collections are not a source of competition for libraries, but a way of identifying collective strengths and complementarities within the system-wide library resource.

Our data source for the tournament is WorldCat, so all conference collective collections reflect their members’ collections as they are cataloged in WorldCat. We recognize that the NCAA basketball tournament may not be familiar to many of our non-US colleagues; we chose it because of its timeliness, and because WorldCat’s coverage of North American academic library collections is particularly strong. If you haven’t heard of the NCAA basketball tournament, we hope you’ll find our Collective Collections Tournament entertaining anyway!

Watch this space for further announcements, and we’ll see you in the first round!

#oclctourney

About Brian Lavoie

Brian Lavoie is a Research Scientist in OCLC Research. Brian's research interests include collective collections, the system-wide organization of library resources, and digital preservation.


District Dispatch: President Obama nominates Kathryn Matthew to lead IMLS

planet code4lib - Thu, 2015-03-12 16:36

This week, President Barack Obama announced his intent to nominate Dr. Kathryn Matthew to serve as the director of the Institute of Museum and Library Services (IMLS). Dr. Kathryn Matthew is currently the chief science educator at the Children’s Museum of Indianapolis, a position she has held since 2014. She was a principal consultant and a product manager at Blackbaud, Inc. from 2008 to 2013, a director at Historic Charleston Foundation from 2006 to 2008, and an exhibits consultant at Chemical Heritage Foundation from 2005 to 2006.

Previously, Dr. Matthew was vice president at Please Touch Museum from 2003 to 2005 and a director at The Nature Conservancy from 2001 to 2002. She was a director at Reebok International from 1998 to 2001. Dr. Matthew also held senior positions at various museums, including a director at Science City at Union Station from 1996 to 1998, executive director at the New Mexico Museum of Natural History and Science from 1991 to 1994, deputy director at the Virginia Museum of Natural History from 1988 to 1990, and an assistant director at the Santa Barbara Museum of Natural History from 1986 to 1988. Dr. Matthew received a B.A. from Mount Holyoke College, an M.B.A. from the University of Minnesota Carlson School of Management, and a Ph.D. from the University of Pennsylvania.


District Dispatch: Your “dialing for dollars” critical to saving millions for library programs

planet code4lib - Thu, 2015-03-12 16:00

By Philip Taylor

Congress’ process for funding programs is in full swing and millions in federal funding for libraries hang in the balance.  There’s never enough money to go around, and Members are always looking for programs to “zero out” so they can reallocate those budgets to their pet projects. Right now, the real keys to saving library funding from the chopping block – particularly the Library Services and Technology Act (LSTA) and Innovative Approaches to Literacy (IAL) programs — are the members of the powerful House and Senate Appropriations Committees. Your Representative in the House and two Senators have influence with those Committee members, so it’s important that your Members let the Appropriations Committee know of their support for continued library funding.

The best way for them to do that is to sign what we call “Dear Appropriator” letters that three Members of Congress who are huge library champions have drafted to the members of the Appropriations Committees in the House and Senate. The more Members of Congress that we can get to sign these “Dear Appropriator” letters, the better the chance of preserving and securing real money for libraries.

But there’s a catch – Members of Congress generally only add their names to “Dear Appropriator” letters if they hear from their own constituents. Right now, it’s your Senators and Representative in the House who need to sign the LSTA and IAL “Dear Appropriator” letters.

With the March 20 deadline for signatures fast approaching, it’s urgent that you email or phone your own Senators and Representative today. Call (202) 225-3121, ask the operator to connect you to your Senators’ and Representative’s offices (you can find out who they are easily here), and ask the person who answers to have their boss add their name to the “Dear Appropriator” letters supporting LSTA and IAL currently being circulated by our champions in Congress. To see whether your Members of Congress signed the letters last year, view the FY 2015 Funding Letter Signees document (pdf).  If so, please be sure to thank and remind them of that when you email or call!

Background material for you and contact information for your Senators and Representative to use to add their name to these crucial letters follow. Again, signatures on the letters are due by March 20 so, please, call your Congressperson’s office now and ask him or her to sign both the LSTA and IAL “Dear Appropriator” letters being circulated by our champions (see chart below).

Please join us. There’s not a moment, but millions and millions of dollars, to lose!

Note: these letters are due before the end of the month so you will need to call this week.

“DEAR APPROPRIATOR” LETTERS

LSTA

  • Background: LSTA is the only source of funding for libraries in the federal budget. The bulk of this funding is returned to states through a population-based grant program through the Institute of Museum and Library Services (IMLS). Libraries use LSTA funds to, among other things, build and maintain 21st century collections that facilitate employment and entrepreneurship, community engagement, and individual empowerment. For more information on LSTA, check out this document: LSTA Background and Ask (pdf).
  • House staff/champion your Member should contact to sign: Norma Salazar (Representative Raul Grijalva)
  • Senate staff/champion your Member should contact to sign: Elyse Wasch (Senator Jack Reed)

IAL

  • Background: IAL is the only federal program supporting literacy for underserved school libraries and has become the primary source of federal funding for school library materials. Focusing on low-income schools, these funds help many schools bring their school libraries up to standard. For more information on IAL, view School Libraries Brief (pdf).
  • House staff/champion your Member should contact to sign: Don Andres (Representative Eddie Bernice Johnson)
  • Senate staff/champion your Member should contact to sign: Elyse Wasch (Senator Jack Reed) and James Rice (Senator Charles Grassley)

 

Additional information:

House Ask Sheet (pdf)

Senate Ask Sheet (pdf)

 


David Rosenthal: Google's near-line storage offering

planet code4lib - Thu, 2015-03-12 15:00
Yesterday, Google announced the beta of their Nearline Storage offering. It has the same 1c/GB/mo pricing as Amazon's Glacier, but it has three significant differences:
  • It claims to have much lower latency, a few seconds instead of a few hours.
  • It has the same (synchronous) API as Google's more expensive storage, where Glacier has a different (asynchronous) API than S3 (a sketch of the difference follows this list).
  • Its pricing for getting data out lacks Glacier's 5% free tier, but otherwise is much simpler than Glacier's.
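Here is a rough sketch of what that second difference looks like from client code. It is illustrative only: the vault, bucket and object names are placeholders, and it uses today's boto3 and google-cloud-storage Python clients, which is an anachronism for 2015 but shows the shape of the calls.

    # Illustrative contrast: Glacier retrieval is asynchronous (start a job,
    # poll, then fetch the output), while a Nearline object is fetched with the
    # same synchronous GET as any other Google Cloud Storage object.
    # All names below are placeholders.
    import time

    import boto3
    from google.cloud import storage

    # Glacier: initiate an archive-retrieval job and wait for it to complete.
    glacier = boto3.client("glacier")
    job = glacier.initiate_job(
        accountId="-",
        vaultName="example-vault",
        jobParameters={"Type": "archive-retrieval", "ArchiveId": "EXAMPLE-ARCHIVE-ID"},
    )
    while not glacier.describe_job(
        accountId="-", vaultName="example-vault", jobId=job["jobId"]
    )["Completed"]:
        time.sleep(600)  # historically this wait was measured in hours
    archive_bytes = glacier.get_job_output(
        accountId="-", vaultName="example-vault", jobId=job["jobId"]
    )["body"].read()

    # Nearline: one synchronous call through the ordinary GCS API.
    blob = storage.Client().bucket("example-nearline-bucket").blob("example-object")
    data = blob.download_as_bytes()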
As I predicted at Glacier's launch, Amazon stuck with the 1c/GB/mo price point while, albeit slowly, the technology got cheaper. So they have room to cut prices in response, but I'd bet that they won't.

I believe I know how Google has built their nearline technology; I wrote about it two years ago.

Ed Summers: twarc & Ferguson demo

planet code4lib - Thu, 2015-03-12 09:41

Here’s a brief demo of what it looks like to use twarc on the command line to archive tweets that mention Ferguson. I’ve been doing archiving around this topic off and on since August of last year, and happened to start it up again recently to collect the response to the Justice Department report.

I kind of glossed over getting your Twitter keys set up, which is a bit tedious. I have them set in environment variables for that demo, but you can pass them in on the command line now. I guess that could be another demo sometime. If you are interested, send me a tweet.

Ed Summers: JavaScript and Archives

planet code4lib - Thu, 2015-03-12 09:33

Tantek Çelik has some strong words about the use of JavaScript in Web publishing, specifically regarding its accessibility and longevity:

… in 10 years nothing you built today that depends on JS for the content will be available, visible, or archived anywhere on the web

It is a dire warning. It sounds and feels true. I am in the middle of writing a webapp that happens to use React, so Tantek’s words are particularly sobering.

And yet, consider for a moment how Twitter makes personal downloadable archives available. When you request your archive you eventually get a zip file. When you unzip it, you open an index.html file in your browser and are provided with a view of all the tweets you’ve ever sent.

If you take a look under the covers you’ll see it is actually a JavaScript application called Grailbird. If you have JavaScript turned on it looks something like this:

If you have JavaScript turned off it looks something like this:

But remember this is a static site. There is no server side piece. Everything is happening in your browser. You can disconnect from the Internet and, as long as your browser has JavaScript turned on, it is fully functional. (Well, the avatar URLs break, but that could be fixed.) You can search across your tweets. You can drill into particular time periods. You can view your account summary. It feels pretty durable. I could stash it away on a hard drive somewhere, and come back in 10 years and (assuming there are still web browsers with a working JavaScript runtime) I could still look at it, right?

So is Tantek right about JavaScript being at odds with preservation of Web content? I think he is, but I also think JavaScript can be used in the service of archiving, and that there are starting to be some options out there that make archiving JavaScript heavy websites possible.

The real problem that Tantek is talking about is when human readable content isn’t available in the HTML and is getting loaded dynamically from Web APIs using JavaScript. This started to get popular back in 2005 when Jesse James Garrett coined the term AJAX for building app-like web pages using asynchronous requests for XML, which is now mostly JSON. The scene has since exploded with all sorts of client side JavaScript frameworks for building web applications as opposed to web pages.

So if someone (e.g. Internet Archive) comes along and tries to archive a URL it will get the HTML and associated images, stylesheets and JavaScript files that are referenced in that HTML. These will get saved just fine. But when the content is played back later in (e.g. Wayback Machine) the JavaScript will run and try to talk to these external Web APIs to load content. If those APIs no longer exist, the content won’t load.

One solution to this problem is for the web archiving process to execute the JavaScript and to archive any of the dynamic content that was retrieved. This can be done using headless browsers like PhantomJS, and supposedly Google has started executing JavaScript. Like Tantek I’m dubious about how widely they execute JavaScript. I’ve had trouble getting Google to index a JavaScript heavy site that I’ve inherited at work. But even if the crawler does execute the JavaScript, user interactions can cause different content to load. So does the bot start clicking around in the application to get content to load? This is yet more work for an archiving bot to do, and could potentially result in write operations which might not be great.
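One way to measure how much of a page depends on JavaScript execution is to compare the readable text in the raw HTML with the text a headless browser produces after running the page's scripts. The sketch below is illustrative only and is not how Internet Archive crawls; it assumes the requests, beautifulsoup4 and selenium packages plus a local Firefox/geckodriver, and the URL is a placeholder.

    # How much of a page's readable text only appears after JavaScript runs?
    # Compares raw HTML text with text rendered by a headless browser.
    # Assumes requests, beautifulsoup4, selenium and a local Firefox/geckodriver.
    import requests
    from bs4 import BeautifulSoup
    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options

    def static_text(url):
        """Readable text available without executing any JavaScript."""
        html = requests.get(url, timeout=30).text
        return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

    def rendered_text(url):
        """Readable text after a headless browser has executed the page's scripts."""
        options = Options()
        options.add_argument("--headless")
        driver = webdriver.Firefox(options=options)
        try:
            driver.get(url)
            return BeautifulSoup(driver.page_source, "html.parser").get_text(
                separator=" ", strip=True
            )
        finally:
            driver.quit()

    if __name__ == "__main__":
        url = "https://example.com/"  # placeholder
        before, after = static_text(url), rendered_text(url)
        print(f"static HTML: {len(before)} chars; rendered: {len(after)} chars")

A large gap between the two numbers is a decent hint that an archived copy of the raw HTML will not be very useful once the backing APIs disappear.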

Another option is to change or at least augment the current web archiving paradigm by adding curator driven web archiving to the mix. The best examples I’ve seen of this are Ilya Kreymer’s work on pywb and pywb-recorder. Ilya is a former Internet Archive engineer, and is well aware of the limitations in the most common forms of web archiving today. pywb is a new player for web archives and pywb-recorder is a new recording environment. Both work in concert to let archivists interactively select web content that needs to be archived, and then for that content to be played back. The best example of this is his demo service webrecorder.io which composes pywb and pywb-recorder so that anyone can create a web archive of a highly dynamic website, download the WARC archive file, and then reupload it for playback.

The nice thing about Ilya’s work is that it is geared toward archiving this JavaScript heavy content. Rhizome and the New Museum in New York City have started working with Ilya to use pywb to archive highly dynamic Web content. I think this represents a possible bright future for archives, where curators or archivists are more part of the equation, and where Web archives are more distributed, not just at Internet Archive and some major national libraries. I think the work Genius is doing to annotate the Web, including archived versions of the Web, is in a similar space. It’s exciting times for Web archiving. You know, exciting if you happen to be an archivist and/or archiving things.

At any rate, getting back to Tantek’s point about JavaScript. If you are in the business of building archives on the Web definitely think twice about using client side JavaScript frameworks. If you do, make sure your site degrades gracefully so that the majority of the content is still available. You want to make it easy for Internet Archive to archive your content (lots of copies keeps stuff safe) and you want to make it easy for Google et al to index it, so people looking for your content can actually find it. Stanford University’s Web Archiving team has a super set of pages describing archivability of websites. We can’t control how other people publish on the Web, but I think as archivists we have a responsibility to think about these issues as we create archives on the Web.

Nicole Engard: Bookmarks for March 11, 2015

planet code4lib - Wed, 2015-03-11 20:30

Today I found the following resources and bookmarked them:

  • Avalon – The Avalon Media System is an open source system for managing and providing access to large collections of digital audio and video. The freely available system enables libraries and archives to easily curate, distribute and provide online access to their collections for purposes of teaching, learning and research.

Digest powered by RSS Digest



LITA: Yes, You Can Video!

planet code4lib - Wed, 2015-03-11 19:24

A how-to guide for creating high-impact instructional videos without tearing your hair out.

Tuesday May 12, 2015
1:00 pm – 2:30 pm Central Time
Register now for this webinar

This brand new LITA webinar promises a fun time learning how to create instructional videos.

Have you ever wanted to create an engaging and educational instructional video, but felt like you didn’t have the time, ability, or technology? Are you perplexed by all the moving parts that go into creating an effective tutorial? In this session, Anne Burke and Andreas Orphanides will help to demystify the process, breaking it down into easy-to-follow steps, and provide a variety of technical approaches suited to a range of skill sets. They will cover choosing and scoping your topic, scripting and storyboarding, producing the video, and getting it online. They will also address common pitfalls at each stage.

Join

Anne Burke
Undergraduate Instruction & Outreach Librarian
North Carolina State University Libraries

and

Andreas Orphanides
Librarian for Digital Technologies and Learning
North Carolina State University Libraries

Then register for the webinar

Full details
Can’t make the date but still want to join in? Registered participants will have access to the recorded webinar.
Cost:

LITA Member: $45
Non-Member: $105
Group: $196
Registration Information

Register Online page arranged by session date (login required)
OR
Mail or fax form to ALA Registration
OR
Call 1-800-545-2433 and press 5
OR
email registration@ala.org

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4269 or Mark Beatty, mbeatty@ala.org.

Open Knowledge Foundation: Open Knowledge Russia: Experimenting with data expeditions

planet code4lib - Wed, 2015-03-11 00:43

As part of Open Education Week #openeducationwk activities we are publishing a post on how Open Knowledge Russia have been experimenting with data expeditions. This is a follow-up post to one that appeared on the Open Education Working Group website, which gave an overview of Open Education projects in Russia.

Anna Sakoyan

The authors of this post are Anna Sakoyan and Irina Radchenko, who together have founded DataDrivenJournalism.RU.

Irina Radchenko

Anna is currently working as a journalist and translator for a Russian analytical resource Polit.ru and is also involved in the activities of NGO InfoCulture. You can reach Anna on Twitter on @ansakoy, on Facebook and on LinkedIn. She blogs in English at http://ourchiefweapons.wordpress.com/.

Irina Radchenko is an Associate Professor at ITMO University and Chief Coordinator of Open Knowledge Russia. You can reach Irina on Twitter on @iradche, on Facebook and on LinkedIn. She blogs in Russian at http://iradche.ru//.

1. DataDrivenJournalism.RU project and Russian Data Expeditions

The open educational project DataDrivenJournalism.RU was launched in April 2013 by a group of enthusiasts. Initially it was predominantly a blog, which accumulated translated and originally written manuals on working with data, as well as more general articles about data driven journalism. Its mission was formulated as promoting the use of data (Open Data first of all) in the Russian-language environment, and its main objective was to create an online platform to consolidate Russian-speaking people who were interested in working with data, so that they could exchange their experiences and learn from each other. As the number of published materials grew, they had to be structured in a searchable way, which resulted in making it look more like a website, with special sections for learning materials, interactive educational projects (data expeditions), helpful links, etc.

On one hand, it operates as an educational resource with a growing collection of tutorials, a glossary and lists of helpful external links, as well as the central platform of its data expeditions; on the other hand, as a blog, it provides a broader context of open data application to various areas of activity, including data driven journalism itself. After almost two years of its existence, DataDrivenJournalism.RU has a team of 10 regular authors (enthusiasts from Germany, Kazakhstan, Russia, Sweden and the UK). More than a hundred posts have been published, including 15 tutorials. It has also launched 4 data expeditions, the most recent in December 2014.

The term data expedition was first coined by Open Knowledge’s School of Data, which launched such peer-learning projects in both online and offline formats. We took this model as the basic principle and tried to apply it to the Russian environment. It turned out to be rather promising, so we began experimenting with it in order to make this format a more efficient educational tool. In particular, we have tried a very loose organisational approach, where the participants only had a general subject in common but were free to choose their own strategy for working with it; a rather rigid approach, with a scenario and tasks; and a model which included experts who could guide the participants through the area they had to explore. These have been discussed in our guest post on Brian Kelly’s blog ‘UK Web Focus’.

Our fourth data expedition was part of a hybrid learning model. Namely, it was the practical part of a two-week offline course taught by Irina Radchenko in Kazakhstan. This experience proved rather inspiring and instructive.

2. International Data Expedition in Kazakhstan

The fourth Russian-language data expedition (DE4) was part of a two-week course taught by Irina Radchenko under the auspices of Karaganda State Technical University. After the course was over, the university participants who successfully completed all the tasks within DE4 received a certificate. The most interesting projects were later published at DataDrivenJournalism.RU. One of them, by Asylbek Mubarak, is about industry in Kazakhstan; he also writes (in Russian) about his experience of participating in DE4 and about the key stages of his work with data. The other, by Roman Ni, is about some aspects of the Kazakhstan budget.

First off, it was a unique experience: launching a data expedition outside Russia. It was also interesting that DE4 was part of a hybrid learning format, which combined traditional offline lectures and seminars with a peer-learning approach. What was specific about the peer-learning part was that it was open, so any online user could participate. The problem was that the decision to make it open came rather late, so there was not much time to properly promote the announcement. Several people from Russia and Ukraine did register for participation; unfortunately none of them participated actively, but hopefully they managed to make some use of the course materials and tasks published in the DE4 Google group.

This mixed format was rather time-consuming, because it required not only preparation for regular lectures, but also a lot of online activity, including interaction with the participants, answering their questions in the Google group and checking their online projects. The participants of the offline course seemed enthusiastic about the online part; many found it interesting and intriguing. In the final survey following DE4, most of the respondents emphasised that they liked the online part.

The initial level of the participants was very uneven. Some of them knew how to program and work with databases; others had hardly ever been exposed to working with data. DE4's main tasks were built in such a way that they could be done from scratch based only on the knowledge provided within the course. Meanwhile, there were also more advanced tasks and techniques for those who might find them interesting. Unfortunately, many participants could not complete all the tasks because they were students right in the middle of taking their midterm exams at university.

Compared to our previous DEs, the percentage of completed tasks was much higher. The DE4 participants were clearly better motivated in terms of demonstrating their performance. Most importantly, some of them were interested in receiving a certificate. Another considerable motivation was participation in offline activities, including face-to-face discussions, as well as interaction during Irina’s lectures and seminars.

Technically, like all the previous expeditions, DE4 was centered around a closed Google group, which was used by the organisers to publish materials and tasks and by participants to discuss tasks, ask questions, exchange helpful links and coordinate their working process (as most of them worked in small teams). The chief tools within DE4 were Google Docs, Google Spreadsheets, Google Refine and Infogr.am. Participants were also encouraged to suggest or use other tools if they found it appropriate.

42 people registered for participation, 36 of whom took the offline course at Karaganda State Technical University. They were the most active, so most of our observations are based on their results and feedback. Also, due to the university base of the course, 50% of the participants were undergraduate students, while the other half included postgraduate students, people with a higher education, and PhD holders. Two thirds of the participants were women. As to age groups, almost half of the participants were between 16 and 21 years old, but there was also a considerable number between 22 and 30 years old, and two were over 50.

13 per cent of the participants completed all the tasks, including the final report. According to their responses to the final survey, most of them did their practical tasks in small pieces, but regularly. As to online interaction, the majority of respondents said they were quite satisfied with their communication experience. About half of them, though, admitted that they did not contribute to online discussions, although they found others’ contributions helpful. General feedback was very positive. Many pointed out that they were inspired by the friendly atmosphere and mutual helpfulness. Most said they were going to keep learning how to work with open data on their own. Almost all said they would like to participate in other data expeditions.

3. Conclusions

DE4 was an interesting step in the development of the format. In particular, it showed that an open peer-learning format can be an important integral part of a traditional course. It had a ready-made scenario and an instructor, but at the same time it relied heavily on the participants’ mutual help and experience exchange, and also provided a great degree of freedom and flexibility regarding the choice of subjects and tools. It is also yet another contribution to the collection of materials which might be helpful in future expeditions, alongside the materials from all the previous DEs. It is part of a process of gradual formation of an educational resources base, as well as a supportive social base. As new methods are applied and tested in DEs, the practices that proved best are stored and used, which helps to make this format more flexible and helpful. What is most important is that this model can be applied to almost any educational initiative, because it is easily replicated and based on using free online services.

DuraSpace News: OR2015 NEWS: Registration Opens; Speakers from Mozilla and Google Announced

planet code4lib - Wed, 2015-03-11 00:00

From Jon Dunn, Julie Speer, and Sarah Shreeves, OR2015 Conference Organizing Committee; Holly Mercer, William Nixon, and Imma Subirats, OR2015 Program Co-Chairs

Indianapolis, IN – We are pleased to announce that registration is now open for the 10th International Conference on Open Repositories, to be held on June 8-11, 2015 in Indianapolis, Indiana, United States of America. Full registration details and a link to the registration form may be found at: http://www.or2015.net/registration

Ed Summers: Facts are Mobile

planet code4lib - Tue, 2015-03-10 19:42

To classify is, indeed, as useful as it is natural. The indefinite multitude of particular and changing events is met by the mind with acts of defining, inventorying and listing, reducing to common heads and tying up in bunches. But these acts like other intelligent acts are performed for a purpose, and the accomplishment of purpose is their only justification. Speaking generally, the purpose is to facilitate our dealing with unique individuals and changing events. When we assume that our clefts and bunches represent fixed separations and collections in rerum natura, we obstruct rather than aid our transactions with things. We are guilty of a presumption which nature promptly punishes. We are rendered incompetent to deal effectively with the delicacies and novelties of nature and life. Our thought is hard where facts are mobile; bunched and chunky where events are fluid, dissolving.

John Dewey in Human Nature and Conduct (p. 131)

FOSS4Lib Recent Releases: Koha - Maintenance and security releases v 3.16.8 and 3.18.4

planet code4lib - Tue, 2015-03-10 19:22
Package: Koha
Release Date: Tuesday, March 3, 2015

Last updated March 10, 2015. Created by David Nind on March 10, 2015.

Monthly maintenance and security releases for Koha. See the release announcements for the details.

Koha 3.18 is the latest stable release of Koha and is recommended for new installations.

OCLC Dev Network: Developer House Project: Advanced Typeahead

planet code4lib - Tue, 2015-03-10 16:00

We are Jason Thomale from the University of North Texas and George Campbell from OCLC, and we created an advanced “Advanced Typeahead” application during the December 1-5, 2014 Developer House event at OCLC headquarters in Columbus, Ohio. The Developer House events provide OCLC Platform Engineers and library developers an opportunity to brainstorm and develop applications against OCLC Web Services. We would like to share our development experience and the application we designed in this blog post.

HangingTogether: OCLC Research Library Partnership, making a difference: part 2

planet code4lib - Tue, 2015-03-10 15:37

I previously shared the story of Keio University, who benefited from attending our 2013 partner meeting — I wanted to share two more “member stories” which have roots in the OCLC Research Library Partnership.

OCLC member stories are being highlighted on the OCLC web page — there are many other interesting and dare I say inspiring stories shared there, so go check them out.

About Merrilee Proffitt


Islandora: Site built on Islandora wins the ABC-CLIO Online History Award

planet code4lib - Tue, 2015-03-10 15:24

Congratulations are in order for Drexel University and its Doctor or Doctress digital collection, which has been selected as this year's winner of the ABC-CLIO Online History Award.

This amazing site, which is both a historical collection and an online learning tool, more than fulfills its mission of helping people to "explore American history through the stories of women physicians." It is also one of the hands-down most stunningly executed Islandora sites out there in production right now and we could not be more thrilled to see it recognized as the accomplishment it truly is. For more information about the award, please visit the ALA's announcement.
