
planet code4lib

Planet Code4Lib - http://planet.code4lib.org

LibUX: Digital Technology as Affordance and Barrier in Higher Education

Thu, 2017-02-02 01:45

@stephenfrancoeur: This new book is written by colleagues of mine at CUNY. Both are librarians who have been conducting ethnographic research into the ways that CUNY students use technology. I’ve learned a lot from their presentations and articles over the years and am happy to see that their book is finally out. https://ushep.commons.gc.cuny.edu/2017/01/18/our-book-is-here/

DuraSpace News: NOW AVAILABLE: VIVO 1.9.2

Thu, 2017-02-02 00:00

From Graham Triggs, VIVO Tech Lead, on behalf of the VIVO team

Austin, TX: VIVO 1.9.2 is now available. This is a minor release that addresses an ORCID integration issue and a problem with creating new items.

District Dispatch: Call for applications: Ready to Code Faculty Fellows

Wed, 2017-02-01 21:17

The call for participation in OITP’s Ready to Code Phase II (RtC) is open now. In partnership with the University of Maryland’s iSchool and with support from Google, Inc., we are seeking full-time faculty members of ALA accredited graduate schools of Library and Information Science or graduate schools that provide school library certification programs in the U.S. to become RtC Faculty Fellows. LIS faculty applicants must be teaching technology/media course(s) in Fall 2017 tailored for pre-service library staff planning on working with children and teens.

Los Angeles Public Library Coder Time

Ready to Code Phase II: Embedding RtC Concepts in Library and Information Science Curricula, builds on one of the recommendations from the recently released Ready to Code: Connecting Youth to CS Opportunity through Libraries (pdf). Findings from Phase I highlight the need for pre-service librarians to have access to courses that prepare them with skills to design and implement youth learning programs infused with RtC core concepts (pdf). These concepts are integral to ensuring library programs provide youth with opportunity to develop computational thinking skills while inspiring them to explore the intersection of coding and computer science with their personal interests and passions. A cohort of RtC Faculty Fellows will work with the project team to address this challenge throughout 2017.

RtC Faculty Fellows will work with the RtC Phase II project team to develop, revise and pilot technology and media curricula that infuses existing courses with content and learning experiences grounded in RtC concepts. The resulting curricula will challenge future librarians working with children and teens to develop requisite teaching skills and pedagogical expertise to engage with children and teens through programs and experiences that foster computational thinking.

The RtC project team held a virtual information session last week, but if you missed it, the recording and slides (pdf) are now available. The application period closes February 28, 2017. More information, including the application, is available on the Libraries Ready to Code website.

 

The post Call for applications: Ready to Code Faculty Fellows appeared first on District Dispatch.

ACRL TechConnect: Data Refuge and the Role of Libraries

Wed, 2017-02-01 17:00

Society is always changing. For some, the change can seem slow and frustrating, while for others it may seem to occur in the blink of an eye. What is this change that I speak of? It can be anything…civil rights, autonomous cars, or national leaders. One change that no one ever seems particularly prepared for, however, is when a website link becomes broken. One day, you could click a link and get to a site; the next day you get a 404 error. Sometimes this occurs because a site was migrated to a new server and the link was not redirected. Sometimes this occurs because the owner ceased to maintain the site. And sometimes, this occurs for less benign reasons.

Information access via the Internet is an activity that many (but not all) of us do every day, in sometimes unconscious fashion: checking the weather, reading email, receiving news alerts. We also use the Internet to make datasets and other sources of information widely available. Individuals, universities, corporations, and governments share data and information in this way. In the Obama administration, the Open Government Initiative led to the development of Project Open Data and data.gov. Federal agencies started looking at ways to make information sharing easier, especially in areas where the data are unique.

One area of unique data is climate science. Since climate data is captured at a specific day and time, and under certain conditions, it can never be truly reproduced. It will never be January XX, 2017 again. With these constraints, climate data can be thought of as fragile. The copies that we have are the only records that we have. Much of our nation’s climate data has been captured by research groups at institutes, universities, and government labs and agencies. During the election, much of the rhetoric from Donald Trump was rooted in the belief that climate change is a hoax. Upon his election, Trump tapped Scott Pruitt, who has fought many of the EPA’s attempts to regulate pollution, to lead the EPA. This, along with other messages from the new administration, has raised alarms within the scientific community that the United States may repeat the actions of the Harper administration in Canada, which literally threw away thousands of items from federal libraries that were deemed outside scope, through a process that was criticized as not transparent.

In an effort to safeguard and preserve this data, the Penn Program of Environmental Humanities (PPEH) helped organize a collaborative project called Data Refuge. This project requires the expertise of scientists, librarians, archivists, and programmers to organize, document, and back-up data that is distributed across federal agencies’ websites. Maintaining the integrity of the data, while ensuring the re-usability of it, are paramount concerns and areas where librarians and archivists must work hand in glove with the programmers (sometimes one and the same) who are writing the code to pull, duplicate, and push content. Wired magazine recently covered one of the Data Refuge events and detailed the way that the group worked together, while much of the process is driven by individual actions.

In order to capture as much of this data as possible, the Data Refuge project relies on groups of people organizing around this topic across the country. The PPEH site details the requirements to host a successful DataRescue event and has a Toolkit to help promote and document the event. There is also a survey that you can use to nominate climate or environmental data to be part of the Data Refuge. Not in a position to organize an event? Don’t like people? You can also work on your own! An interesting observation from the work on your own page is the option to nominate any “downloadable data that is vulnerable and valuable.” This means that Internet Archive and the End of Term Harvest Team (a project to preserve government websites from the Obama administration) is interested in any data that you have reason to believe may be in jeopardy under the current administration.

A quick note about politics. Politics are messy, and it can seem odd that people are organizing in this way when administrations change every four or eight years and a change in the party holding the presidency almost certainly brings major departures in policy and priorities. What is important to recognize is that our data holdings are increasingly solely digital, and therefore fragile. The positions on issues like climate, the environment, and civil rights are so diametrically opposed between the Obama and Trump administrations that we – the public – have no assurance that the data will be retained or made widely available for sharing. This administration speaks of “alternative facts” and “disagree[ing] with the facts,” and this makes people charged with preserving facts wary.

Many questions about the sustainability and longevity of the project remain. Will End of Term or Data Refuge be able to/need to expand the scope of these DataRescue efforts? How much resourcing can people donate to these events? What is the role of institutions in these efforts? This is a fantastic way for libraries to build partnerships with entities across campus and across a community, but some may view the political nature of these actions as incongruous with the library mission.

I would argue that policies and political actions are not inert abstractions. There is a difference between promoting a political party and calling attention to policies that are in conflict with human rights and freedom of information. Loath as I am to make this comparison, would anyone truly claim that burning books is protected political speech, and that opposing such burning is “playing politics?” Yet, these were the actions of a political party – in living memory – carried out in university towns across Germany. Considering the initial attempt to silence the USDA and the temporary freeze on the EPA, libraries should strongly support the efforts of PPEH, Data Refuge, End of Term, and concerned citizens across the country.

 

LibUX: I want to redesign LibUX, so I am going to blog the whole process.

Wed, 2017-02-01 16:33

This isn’t the first LibUX website, but it’s definitely been the most visible. Our first was a static site made with Jekyll and hosted for free on Github. We needed a landing page for our brand new podcast, which we now call Metric.

I started writing about design and the user experience on the regular, so we hopped ship to WordPress. And while this site has been customized, it’s piggy-backing on a theme that’s not one I developed. It’s always sat wrong with me. As someone who runs a small freelancing operation, too, I’ve never been able — or comfortable enough — to use our own site as part of the portfolio. Time’s short, there’s thankfully never been a lack of work, so the site’s been doing its job.

Growing pains

Throughout 2016 I was feeling around for what LibUX was going to be next. I experimented with link-sharing Daring Fireball style. I started publishing amazing, aaaamazing guest-writers. The slack community’s been growing, so I tried figuring out ways to make more community-driven content and started posting jobs in the newsletter. I even started a second weekly podcast called W3 Radio, which I think is really fun and really good, but it was using up all the Metric bandwidth we pay for and I wasn’t exactly chomping at the bit to pay for more.

Now, LibUX will be offering its first, free, high-quality webinar, in addition to a brand-spanking new Patreon service that — if we’re projecting a year out — means I will be writing twenty times the number of articles I wrote in 2016, in addition to piloting the return of W3 Radio and other projects, as well as doing LibUX’s part to break libraryland’s blog-for-exposure culture and pay all guest writers and speakers.

All that’s to say that this website needs more wiggle room, and I’m not willing to use too many other people’s plugins. So, I am going to build something to purpose – and write about it here. I’m not necessarily committed to WordPress, and as I walk through the small discovery process I will let the needs of the site determine its technical makeup.

These kinds of projects benefit from unbreakable axioms

With clients and in my regular work, we start new projects by agreeing on axioms. These are guiding decisions, parts of the contract, that shape all the choices that follow. They’re useful with multiple stakeholders to prevent eventual scope creep, or to bake in user-centricity from the beginning if you anticipate that the upper ranks will make decisions based on what they like as opposed to what works. In libraries, I often make an axiom that reads something like

Decisions that impact navigation or include animation must cite user research.

This usually gets us away from carousels, mega menus, and the like, but it also sets a useful precedent for making research part of a process that may be unfamiliar to our stakeholders. We may take for granted that design is about efficacy, not look and feel, that it is a strategic part of business or mission success. People who took other career paths have only ever been end-users. They look, and feel, but may not put too much thought into the fact that what they feel is at least in part by design (or, maybe, design that failed).

So, for this project, the axioms I choose play both to technical aspirations — so I am proud to include LibUX in my portfolio — and to freeing me from having to make major ground-up redesigns in the future.

Here they are.

LibUX will work offline

I want to be able to save posts and tutorials to my phone. That’s mostly it, but I also think there is a user-minded ethic to offline support that aligns with my beliefs, and it’s important that LibUX — which preaches a lot, we’re not fooling anyone — walks the walk.

Content must COPE

This is technically already true mainly through the WP REST API and RSS, but I want to approach content creation and modeling not as an afterthought but as core to what LibUX does. It’s a content machine, and I reuse content liberally – in newsletters, in podcasts, and in Patreon now, but in the future I want to be able to craft small courses out of content that already exists without having to actually edit.
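The WP REST API route mentioned above makes this kind of reuse concrete. As a rough sketch (the endpoint URL and the newsletter and show-note formats are my own assumptions, not LibUX’s actual pipeline), one fetch of a post object can feed more than one channel:

```python
# COPE-style reuse sketch: fetch a post once from the standard
# WordPress REST API, then render it into several output formats.
import json
from urllib.request import urlopen

API = "https://libux.co/wp-json/wp/v2/posts"  # hypothetical site URL

def get_posts(url=API, limit=5):
    """Fetch the most recent posts as a list of post objects (dicts)."""
    with urlopen(f"{url}?per_page={limit}") as resp:
        return json.load(resp)

def as_newsletter_item(post):
    """Render one post object as a plain-text newsletter entry."""
    title = post["title"]["rendered"]
    return f"* {title}\n  {post['link']}"

def as_podcast_note(post):
    """Render the same post object as a one-line podcast show note."""
    return f"{post['title']['rendered']} - {post['link']}"
```

The point of the sketch is that the content is modeled once and rendered many times; each new channel is just another small render function over the same post object.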

Speed Index of 1000

This roughly means that I would like readers to be able to interact with the site in one second or less. This sort of axiom puts restraints on page weight, the order in which things load, and even server response times.

Strictly Mobile First

I almost didn’t put mobile first, kind of figuring that it was a given, or that the above axioms strategically lead to a mobile-first process. But I kind of want to hold myself to it almost like a rigid diet. I tend to daydream about what sites will look like on the widescreen – I might even sketch it out first, and figure out what the “mobile first” realization of that looks like. This time, I am going to resist my guilty impulses and I promise to flagellate myself live for every – single – widescreen – thought.

You caught me. I’m lying about that last part.

So, what’s next?

Anyway, stay tuned. Normally, the next step would be gathering all your user research. I am going to skip that, but I will touch on the business goals that will inform what I want landing pages (like the homepage) to do down the road, as well as other information architecture decisions. After that, the real fun begins with content models.

If you need help keeping posted, consider subscribing to the newsletter, a mostly-weekly that includes anything that appears in this space. Or, if you can, consider supporting LibUX on Patreon. At a dollar per month you’ll get something like twenty or so exclusive articles and early access to Metric. If your organization has an earmark for professional development, there’s even a tier there to get me on retainer. Let me teach your crew how to do something cool.

Library of Congress: The Signal: Spotlighting Research Data: Building Relationships with Outreach for the NYU Data Catalog

Wed, 2017-02-01 14:23

This is a guest post by Nicole Contaxis, Data Catalog Coordinator at NYU Health Sciences Library. You can email her at nicole.contaxis@nyumc.org.

Screenshot of the NYU Data Catalog Homepage.

An increasing number of publishers and grant-funding organizations are requiring researchers to share their data, so libraries and other institutions are creating tools and strategies to support researchers in this effort. To meet these challenges and communicate the benefits of data sharing, the NYU Health Sciences Library created the NYU Data Catalog, a low-barrier way for researchers to share information about their data.

The NYU Data Catalog
The NYU Data Catalog is a searchable and browsable online collection of datasets. Rather than function as a data repository, the catalog is a digital way-finder for researchers looking for data relevant to their work. Each dataset is described in detail with rich metadata. We include information about who can access each dataset and how, using the metadata elements Access Restrictions and Access Instructions. Other important descriptors include Subject Domains and Keywords, which are meant to give users a better idea of the content of the dataset. Some metadata elements are not intended to be used for all types of datasets but are particularly helpful in certain circumstances. Geographic Coverage and Timeframe of Data Collection help researchers identify data about population characteristics and public health by explaining where and for how long the data was collected. When these descriptors are not important or not pertinent to the dataset, we simply leave them blank.
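To picture the element list above, here is a hypothetical record sketched in Python. This is only an illustration built from the elements named in this post, not the catalog’s actual schema, and the example values are invented; the helper reflects the practice of simply leaving optional elements blank:

```python
# Illustrative dataset record using the metadata elements named above.
CORE = ["Title", "Description", "Access Restrictions",
        "Access Instructions", "Subject Domains", "Keywords"]
OPTIONAL = ["Geographic Coverage", "Timeframe of Data Collection"]

def display_fields(record):
    """Return the fields to show for a record: core elements always,
    optional elements only when a value was actually supplied."""
    shown = {k: record.get(k, "") for k in CORE}
    shown.update({k: record[k] for k in OPTIONAL if record.get(k)})
    return shown

example = {  # hypothetical record, not a real catalog entry
    "Title": "Community Health Survey",
    "Description": "Survey data on population health.",
    "Access Restrictions": "Requires IRB application",
    "Access Instructions": "Contact the listed researcher",
    "Subject Domains": ["Public Health"],
    "Keywords": ["survey", "health equity"],
    "Geographic Coverage": "New York City",
    "Timeframe of Data Collection": "",  # left blank, so it is omitted
}
```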

Some of the datasets in the catalog are created by researchers at NYU and others are created by outside agencies, like the U.S. Census Bureau. The catalog includes information on licensed datasets, datasets that require IRB application, as well as datasets that are publicly available. To connect researchers to colleagues with knowledge about these datasets and to encourage collaboration, the catalog lists the NYU researcher who has authored the data or has expertise in it (e.g. published on the dataset or used it previously in her research).

In this way, the NYU Data Catalog was designed to:

  • increase the visibility of research data generated at NYU
  • facilitate collaboration across departments and institutes
  • help researchers locate and understand datasets generated by outside institutions
  • support the process of re-using research data.

These goals, while lofty, are attainable with adequate researcher participation, and for that reason the NYU Health Sciences Library is currently engaged in a comprehensive outreach effort.

A community meeting of participants in the NYU study, Diabetes Research, Education, and Action for Minorities. A description of the data from this study is now available on the NYU Data Catalog. Credit: Laura Wyatt.

The success of the catalog relies on researcher buy-in. In order for the catalog to be a helpful resource, researchers need to contribute records for their datasets and they need to use the catalog to locate datasets and possible collaborators. Achieving adequate user participation for library projects is not a novel obstacle, and this issue has received attention on The Signal previously with posts about the Smithsonian’s Transcription Center.

For projects that require user participation like the NYU Data Catalog, it is imperative to perform outreach in a way that ensures the researchers feel comfortable contributing to the resource, using the resource, sharing the resource with other researchers and updating their contributions as their expertise and publication history grows over time. It is not enough for researchers to contribute records for their research data; we want the catalog to grow and change along with the research community.

Outreach for Building & Maintaining Relationships
Outreach for this project is best understood as a bedrock for building and maintaining relationships with researchers. To design our outreach strategy, we have pulled from the experience and expertise of other librarians, including T-Kay Sangwand’s work on ethical partnerships for digital libraries and Micha Broadnax’s work on archival outreach with students. Building a successful relationship with a researcher means that she will be engaged with the catalog, that she will be more likely to point her students to it, that she will be more likely to use it herself and that she is more likely to contribute new datasets as her research expands.

To help maintain relationships with researchers, we are working to create services that will continue to engage researchers after they initially describe their data in the catalog. We are working towards creating usable and helpful analytics so that we can send reports to researchers on how frequently users look at their dataset records.

The Story of One Record
Each record in the NYU Data Catalog is the result of a discussion between the cataloger and the researcher. While a cataloger can gather a substantial amount of information about a researcher’s data from her publications and grants, researcher approval and input is necessary to ensure that each record is accurate, helpful, and complete. Research data is not a monolith. It needs to be cataloged in a way that respects differences across academic disciplines, privacy and ethics concerns, and data sharing requirements from publishers and grant funders. Because of these facts, it is necessary to listen attentively to each individual researcher while cataloging their data. Deferring to their subject expertise is particularly important.

Laura Wyatt, for example, is the Research Data Manager for the Section for Health Equity at the Department of Population Health in the NYU School of Medicine. We located her while becoming better acquainted with the staff, faculty and research projects within the Department of Population Health. After introducing Ms. Wyatt to the NYU Data Catalog via email, we set up an in-person meeting to discuss the various datasets in her care and whether or not they would be a good fit for the catalog. During the meeting, Ms. Wyatt mentioned that the team had heard about the catalog before and had wanted to contribute to it but with publication and grant application deadlines, they were never able to complete the process. Although Ms. Wyatt needed to confirm with each of the Principal Investigators what could be shared, she was able to contribute five unique datasets. Those datasets include:

Screenshot of the DREAM dataset on the NYU Data Catalog.

Throughout the outreach process, it has become increasingly apparent that focused and personalized attention, demonstrated through individualized emails and one-on-one meetings, helps increase researcher participation. Because of the number of obligations researchers have, it is important to demonstrate that the cataloger has the time and energy to address their specific needs and the needs of their data. Individual outreach, including exploring each researcher’s work before emailing her, can make all of the difference. Even researchers who are interested in sharing their data may not contribute to the catalog unless they are individually addressed. Forming an individual relationship may be time-consuming but it can make a big difference in the quantity and quality of researcher contributions.

With permission from the Principal Investigators, Wyatt sent detailed descriptions about each dataset, including a description, time frame, geographic coverage, subject domains, keywords, grant support, and publications that describe how the data was collected or analyzed. While we added information about how to access the datasets and who to contact about them, there was little additional work for the cataloger to do.

Making the Invisible Visible
It is important to note that some of the datasets in the catalog, like the datasets that Wyatt helped contribute, are only made visible through the NYU Data Catalog. Although the publications related to these datasets are available elsewhere, the NYU Data Catalog is the only resource that explicitly describes the data and explains how to access it. Because we allow researchers to retain control over their data, there are fewer obstacles to contributing to the catalog than there are to depositing data in a repository. While it would be ideal for researchers to store their data in a repository, and we do encourage them to do so, it is not always practical, possible or desirable. By being flexible, we are able to highlight unique datasets that cannot be found anywhere else.

Performing outreach and building relationships with researchers requires time and energy but it allows us to highlight previously unknown datasets, encourage collaboration and create a resource for the research community. Building tools and devising strategies to help researchers share and re-use data is only helpful with researcher buy-in. We at the NYU Health Sciences Library aim to generate that buy-in by developing long-lasting relationships.

Code Availability
In addition to helping researchers share and locate data, the NYU Data Catalog’s code is available on GitHub and documentation is available on the Open Science Framework. Moving forward, the NYU Health Sciences Library hopes to work with other institutions so that they too can create catalogs for datasets relevant to their researchers. If others implement the Data Catalog’s code, it would facilitate the creation of a cross-institutional data catalog that would enable greater data discovery through federated searching.
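As a rough illustration of that federated-search idea (the record shape and institution names here are hypothetical, and real catalog instances would presumably expose an HTTP search endpoint rather than in-memory lists), merging hits from several catalog instances could look like:

```python
# Sketch of federated search across several data catalog instances,
# each represented here as an in-memory list of dataset records.
def search_catalog(records, term):
    """Case-insensitive match against a record's title and keywords."""
    term = term.lower()
    return [r for r in records
            if term in r["title"].lower()
            or any(term in k.lower() for k in r.get("keywords", []))]

def federated_search(catalogs, term):
    """Search every catalog and label each hit with its source."""
    hits = []
    for source, records in catalogs.items():
        for r in search_catalog(records, term):
            hits.append({**r, "source": source})
    return hits
```

A shared record schema across installations is what would make this kind of cross-institutional discovery possible; each institution keeps its own catalog, and a thin layer fans the query out and merges results.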

pinboard: Twitter

Wed, 2017-02-01 14:02
Why don't you join us in the #libux slack? Sign yourself up: #litaux #ux #code4lib…


In the Library, With the Lead Pipe: Updated Submission Guidelines for What We Publish

Wed, 2017-02-01 14:00

It’s been about a year since we updated our publication schedule, and we thought it a good time to revisit our submission guidelines and clarify the description of the types of articles we publish, too. If you’ve been thinking about submitting to In the Library with the Lead Pipe, take a look at these updates (also available under the Submission Guidelines tab).

WHAT WE PUBLISH

We publish high quality peer-reviewed articles in a range of formats. Whilst we are open to suggestions for new article types and formats, including material previously published in part or full, we expect proposals to include unique and substantial new content from the author. Examples of material we would publish include:

  • Original research with a discussion of its consequences and an argument for action that makes a unique, significant contribution to the professional literature.
  • Articles arguing for a particular approach, strategy or development in librarianship, with practical examples of how it might be achieved.
  • Transformative works with additional explanatory or interpretive content. For example, a transcription of an interview or panel discussion, with a substantial introduction explaining the importance of the subject to librarianship and a discussion of related literature.

HOW TO PROPOSE AN ARTICLE

To propose an article, please submit the following to itlwtlp at gmail dot com:

1. An abstract of your proposed article (200 word maximum);
2. A link to (or attachment with) an example of your writing; and
3. Your current resume/CV or a brief biography. (Our goal is to share perspectives from across the library community, so this item is intended to give us a sense of who you are, with what type of library you are associated, if any, and what perspective you bring to the topic.)

Alternately, you may submit a completed article. It should be approximately 2,000-5,000 words with citations as appropriate. If submitting a completed article please ensure it follows our style guide.

A member of the Editorial Board will respond to your message within a week. In general, we will make a decision based on how well your proposal seems to fit our goals, content, and style. We will include in our initial decision email any thoughts your submission raised among the Editorial Board.

ARTICLE FRAMEWORK QUESTIONS

If we like the sound of your initial proposal, we will proceed to the next step in the submissions process: Framework Questions. This step is vital in allowing the Editorial Board to have a stronger sense of your proposed article, your thesis, and what your article would contribute to the professional literature. We are interested in well written articles that have actionable solutions, and we intend that these questions will help frame your idea appropriately. We expect the Framework Questions will be answered thoughtfully and completely:

1. What specific event or experience led you to be passionate about this topic?
2. What 5-7 things are most interesting to you about your topic?
3. Of those 5-7 things, what are the 3 most important things to consider about your topic and why are they the most critical?
4. What problem is your article addressing and what actions do you want readers to take after reading it?
5. Is your topic more relevant to someone in an academic, school, public, government, private, medical setting, etc.? If so, how is your topic meaningful to someone not in that target audience?
6. In what ways does your article build upon and/or contribute to the existing literature? Provide 3 annotated citations. Depending upon your topic, these citations may be for research on which your article is based; examples that reinforce issues that you’re raising in your article; articles to which yours is responding; conversations to which you are adding; etc.
7. What do you want your readers to remember after they finish reading your article?

The Editorial Board may ask additional questions or request further clarification after receipt of the Framework Questions before determining the final status of the proposal.

IF YOUR ARTICLE IS ACCEPTED

If we choose to accept your proposal, you will be assigned a Publishing Editor who will guide you through the Lead Pipe Publication Process.

Please see the About Page for information on Open Access, Copyright, Licensing, and Article Processing Fees.

LibUX: Library vendors should gamble on taking ethical stances. They’re good for business.

Tue, 2017-01-31 21:59

Starbucks pledged to hire 10,000 refugees and — to the vindication of the #boycottstarbucks crowd — their stock took a hit.

This is probably going to be good for Starbucks.

In May last year, around the time of the passage of North Carolina’s bathroom law, Taylor Tepper wrote in Time that

In recent years, Americans seem to have embraced the notion that the private sector has a role in shaping the political debate, according to surveys by the public relations firm Global Strategy Group. In the most recent survey — the results of which were published in January — 78% of Americans said that “corporations should take action to address important issues facing society.” That’s up from 72% in 2013.

To be sure, Americans tend to think it’s more “appropriate” for companies to take stands on economic issues such as the minimum wage, pay equality, and parental leave. Still, a majority also think it is suitable for companies to weigh in on social and political issues ranging from LGBT equality to Obamacare to race relations.

This is not just a sign of the times but a trend carried by the momentum of aggregation theory, which describes how the user experience has become such a dominant force shaping the success of businesses.

And this makes sense.

We increasingly use products and services we identify with. We buy food that aligns with our ethics, we laud brands who align with our worldview, we put a face to vast companies — Tim Cook and Apple, Marissa Mayer and Yahoo!, Richard Branson and Virgin, Mark Zuckerberg and Facebook — and by so anthropomorphizing them, the character of these individuals either shakes our loyalty or increases our engagement.

Starbucks made a shrewd move. They took an ethical stance at a highly controversial and thus highly public time, one that — regardless of your political beliefs — positions them in an inarguably favorable matchup.

racists: #BoycottStarbucks

me: pic.twitter.com/y7n536zZH4

— #HeWillNotDivideUs (@HWNDUS) January 30, 2017

Which leads me to this plea from Tim Spalding:

A plea to library-tech companies. Regular tech is speaking out—Twitter, FB, Google, even Amazon. With our values, shouldn't we be leading?

— Tim Spalding (@librarythingtim) January 31, 2017

This is actually an opportunity for vendors to generate goodwill among their customers. Goodwill for library vendors tends to be low. https://t.co/H4JUWz5mf6

— Michael Schofield (@schoeyfield) January 31, 2017

Libraries skew liberal, and what with the growing clamor around never-neutral / critical librarianship — read my thoughts about how you might bake these kinds of ethical stances into your designs — for-profit library vendors have an opportunity to make a statement that resonates with their existing — and potential — buyers.

District Dispatch: Whiplash – back to Washington, D.C.

Tue, 2017-01-31 17:25

Whiplash — going from Midwinter in Atlanta to arriving back in my office in Washington, D.C. and facing a week of policy onslaught. As with much of the nation, I find it hard to know where to begin. Of course, my starting point is always the ALA mission, the Library Bill of Rights and the strategic directions of the Association, ably highlighted in yesterday’s statement from ALA President Julie Todaro. Then, I put on my political hat to see what can be done.

Whiplash – moving from ALA Midwinter Conference back to the policy onslaught in D.C.

Let me begin where Larra Clark, Krista Cox and Kara Malenfant left off in their post on net neutrality earlier this month. One development in the past week was the appointment of Ajit Pai as the new chairman of the Federal Communications Commission (FCC). As Larra and company explained, we are gearing up for challenges on the net neutrality front. However, Chairman Pai, a native of Parsons, Kansas, also is a proponent of broadband access (especially in rural areas) and of closing the digital divide. He calls this his “Digital Empowerment Agenda.” Clearly, there is reason to talk with him and his staff, and we are now formulating the best approach to doing just that.

Also, this past week, there was press speculation about the futures of the National Endowment for the Arts (NEA) and National Endowment for the Humanities (NEH). Now there isn’t yet a legislative or White House budget proposal to eliminate these agencies, but such action was recommended in a 2016 report by the Heritage Foundation, a think tank that has influence with the Trump Administration.

So, there is no specific threat today. Are we concerned? Yes. Things can move quickly, and we need to be prepared. Now is the time to do spade work. For libraries and the communities we serve, this means gathering evidence about the impact and value of national programs and how they benefit libraries and our communities. We need compelling stories of how libraries contribute to national missions such as economic advancement, services to veterans and educational opportunity.

Yes, you have heard us ask for these stories hundreds of times before, but now we need you to answer the call. When we meet or communicate with Members or staff in Congress, senior Administration officials, or other decision makers, we need stories that will resonate with them (more detail about the desired characteristics of these stories and those we wish to influence in a forthcoming blog post by Kevin Maher).

Sanctuary cities—localities that limit cooperation with immigration-related agencies—also have come to the fore as a policy issue. Among other things, President Trump’s executive order explores the possibility that federal funding could be reduced or terminated for cities found to be deficient in this cooperation. Questions have arisen regarding the legality of this order, as well as the scope—does it implicate E-rate discounts, Institute of Museum and Library Services grants, NEH grants, National Science Foundation or National Library of Medicine grants? At this moment, nobody has the answers, but along with our allies, we will engage and fight the policy as we can.

There is increasing discussion about the Congressional Review Act and how it might be employed to reverse regulations already in place from the Obama Administration (such as the net neutrality order of the FCC or regulations from the Department of Labor related to the Workforce Innovation and Opportunity Act). There are also new proposals for challenging regulatory actions, such as the Regulations from the Executive in Need of Scrutiny (REINS) Act (passed in the House), which would limit the ability of executive-branch agencies to adopt new regulations without congressional approval. We are engaged with our allies in Washington on how to proceed.

There is much more I could say, but I must stop now. Really, I could spend all my time writing blog posts about problems and possible directions to take, given that the waterfront of challenges is broadening by the hour or, rather, by the tweet or executive order. The Washington Office is committed to informing and engaging our diverse membership, as well as taking action with allies to advance shared policy priorities. The ALA and libraries have a big mission, so we must be strategic in our work. Where do we need to spend our time and attention at a given time? When is behind-the-scenes work most important? When is grassroots action most beneficial? When is more background research needed? I dare say that our plate will be overflowing for the foreseeable future.

The post Whiplash – back to Washington, D.C. appeared first on District Dispatch.

David Rosenthal: Preservable emulations

Tue, 2017-01-31 16:00
This post is an edited extract from my talk at last year's IIPC meeting. This part was the message I was trying to get across, but I buried the lede at the tail end. So I'm repeating it here to try and make the message clear.

Emulation technology will evolve through time. The way we expose emulations on the Web right now means that this evolution will break them. We're supposed to be preserving stuff, but the way we're doing it isn't preservable. We need to expose emulations to the Web in a future-proof way, a way whereby they can be collected, preserved and reanimated using future emulation technologies. Below the fold, I explain what is needed using the analogy of PDFs.


The PDF Analogy

Let's make an analogy between emulation and something that everyone would agree is a Web format: PDF. Browsers lack built-in support for rendering PDF. They used to depend on external PDF renderers, such as Adobe Reader, via a MimeType binding. Now they download pdf.js and render the PDF internally, even though it's a format for which they have no built-in support. The Webby, HTML5 way to provide access to formats that don't qualify for built-in support is to download a JavaScript renderer. We don't preserve PDFs by wrapping them in a specific PDF renderer; we preserve them as PDF plus a MimeType. At access time the browser chooses an appropriate renderer, which used to be Adobe Reader and is now pdf.js.
Landing Pages

[Image: ACM landing page]

There's another interesting thing about PDFs on the web. In many cases the links to them don't actually get you to the PDF. The canonical, location-independent link to the LOCKSS paper in ACM ToCS is http://dx.doi.org/10.1145/1047915.1047917, which currently redirects to http://dl.acm.org/citation.cfm?doid=1047915.1047917. That is a so-called "landing page": not the paper but a page about the paper, on which, if you look carefully, you can find a link to the PDF.

Like PDFs, preserved system images, the disk image for a system to be emulated and the metadata describing the hardware it was intended for, are formats that don't qualify for built-in support. The Webby way to provide access to them is to download a JavaScript emulator, as Emularity does. So is the problem of preserving system images solved?
Problem Solved? NO!

No it isn't. We have a problem that is analogous to, but much worse than, the landing page problem. The analogy would be that, instead of a link on the landing page leading to the PDF, embedded in the page was a link to a rendering service. The metadata indicating that the actual resource was a PDF, and the URI giving its location, would be completely invisible to the user's browser or a Web crawler. At best, all that could be collected and preserved would be a screenshot.

All three frameworks, bwFLA, Olive and Emularity, have this problem. The underlying emulation service, the analogy of the PDF rendering service, can access the system image and the necessary metadata, but nothing else can. Humans can read a screenshot of a PDF document; a screenshot of an emulation is useless. Wrapping a system image in an emulation like this makes it accessible in the present, not preservable for the future.

If we are using emulation as a preservation strategy, shouldn't we be doing it in a way that is itself able to be preserved?
A MimeType for Emulations?

What we need is a MimeType definition that allows browsers to follow a link to a preserved system image and construct an appropriate emulation for it in whatever way suits them. This would allow Web archives to collect preserved system images and later provide access to them.

The linked-to object that the browser obtains needs to describe the hardware that should be emulated. Part of that description must be the contents of the disks attached to the system. So we need two MimeTypes:
  • A metadata MimeType, say Emulation/MachineSpec, that describes the architecture and configuration of the hardware, which links to one or more resources of:
  • A disk image MimeType, say DiskImage/qcow2, with the contents of each of the disks.
Emulation/MachineSpec is pretty much what the hardware part of bwFLA's internal metadata format does, though from a preservation point of view there are some details that are workable but not ideal. For example, using the Handle system is like using a URL shortener or a DOI, it works well until the service dies. When it does, as for example last year when doi.org's domain registration expired, all the identifiers become useless.

I suggest DiskImage/qcow2 because QEMU's qcow2 format is a de facto standard for representing the bits of a preserved system's disk image.
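None of this specifies what a machine spec document would actually contain. Purely as a hypothetical illustration — every field name below is invented, and bwFLA's actual internal metadata format is XML-based and different — a spec of MimeType Emulation/MachineSpec linking to one disk image might look something like:

```json
{
  "mediaType": "Emulation/MachineSpec",
  "architecture": "x86",
  "cpu": "80486",
  "memoryMB": 16,
  "devices": ["vga", "sb16", "ne2k"],
  "disks": [
    {
      "mediaType": "DiskImage/qcow2",
      "href": "https://archive.example.org/images/win31-system.qcow2"
    }
  ]
}
```

The essential properties are only that the hardware description is self-contained and that the disk images are reachable by explicit, typed links, so a crawler can follow them.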
And binding to "emul.js"

Then, just as with pdf.js, the browser needs a binding to a suitable "emul.js" which knows, in this browser's environment, how to instantiate a suitable emulator for the specified machine configuration and link it to the disk images. This would solve both problems:
  • The emulated system image would not be wrapped in a specific emulator; the browser would be free to choose appropriate, up-to-date emulation technology.
  • The emulated system image and the necessary metadata would be discoverable and preservable because there would be explicit links to them.
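Concretely, the job of such an "emul.js" binding — and of a crawler collecting the emulation — reduces to reading the machine spec, choosing an engine for its architecture, and resolving its typed links. A minimal sketch in Python (the field names, the emulator table, and the URL are all invented for illustration):

```python
# Hypothetical dispatch logic for an "emul.js"-style binding: given a parsed
# Emulation/MachineSpec document, pick an emulation engine for the specified
# architecture and enumerate the disk images that must also be collected.
# All field names, the emulator table, and the URL are illustrative only.

EMULATORS = {
    "x86": "v86",          # a JavaScript x86 emulator
    "m68k": "BasiliskII",
    "ppc": "SheepShaver",
}

def choose_emulator(spec):
    """Pick an emulation engine for the spec's architecture, if one exists."""
    arch = spec.get("architecture")
    if arch not in EMULATORS:
        raise ValueError(f"no emulator available for architecture {arch!r}")
    return EMULATORS[arch]

def resources_to_collect(spec):
    """Return the URLs of every disk image the machine spec links to."""
    return [disk["href"] for disk in spec.get("disks", [])
            if disk.get("mediaType") == "DiskImage/qcow2"]

spec = {
    "mediaType": "Emulation/MachineSpec",
    "architecture": "x86",
    "disks": [{"mediaType": "DiskImage/qcow2",
               "href": "https://archive.example.org/images/win31.qcow2"}],
}

print(choose_emulator(spec))        # the binding's choice of engine
print(resources_to_collect(spec))   # what a crawler must also fetch
```

The point of the sketch is the separation of concerns: the preserved object is the spec plus its disk images, while the choice of emulator is made freshly at access time.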
The details need work but the basic point remains. Unless there are MimeTypes for disk images and system descriptions, emulations cannot be first-class Web objects that can be collected, preserved and later disseminated.

DPLA: Michigan Service Hub Collections now live in DPLA

Tue, 2017-01-31 15:45

We are pleased to announce that the collections of the Michigan Service Hub are officially ‘live’ in DPLA and ready to explore! Accepted to the DPLA network in 2015, the Michigan Service Hub represents a collaborative effort between the Library of Michigan, University of Michigan, Wayne State University, Michigan State University, Western Michigan University and the Midwest Collaborative for Library Services. As of this week, the Michigan Hub partners have made 42,000 new items discoverable in DPLA and plan to add more in the future.

Michigan’s contributions to DPLA are rich in the state’s local history and culture including the auto industries of the Motor City, but that’s not all – look for collections and items representing such diverse topics as Civil War soldiers’ experiences, cookbooks, botany, and social protest posters.

Take a peek below at some of the newly-added materials from the Michigan Service Hub and start exploring today!

This Civil War pocket diary was kept by Union soldier Augustus Yenner and is one of several digitized as part of Western Michigan University’s United States Civil War collection. On New Year’s Day, 1863, Yenner wrote of the difficult conditions from a Kentucky battlefield: “Oh such a morn & day will never be forgotten, as long as reason remains, We lay in the frosty air & frozen ground…” 
This 1912 poster created by the Industrial Workers of the World (IWW) is one of over 2,000 in the Joseph A. Labadie Protest Poster collection at the University of Michigan, which documents twentieth century social protest movements including anarchism, labor rights, women’s rights, and more.

 

The Changing Face of the Auto Industry Collection contributed by Detroit Public Library’s National Automotive History Collection and Burton Historical Collections via Wayne State University documents Detroit’s booming auto industries of the early twentieth century.

  • A woman steps out of a Packard touring car c. 1920-1923
  • Sheet metal radiator assembly workers at the Cadillac Motor Company
  • Workers on the Ford Motor Company Model T Assembly Line
  • Cadillac Motor Company Advertisement, 1902

Michigan State University’s Feeding America historic cookbook collection documents not only some surely delicious recipes, like those included in this chocolate, cocoa, and candy cookbook, but also an essential element of America’s culinary past and cultural history.

Social activist Caroline Bartlett Crane designed “Everyman’s House” for Herbert Hoover’s Better Homes of America campaign in 1924 to best meet the needs of the average American family. Crane’s design, part of which is seen here in blueprint form, was the contest winner and the house was constructed (and is still standing) in Kalamazoo, Michigan. The Caroline Bartlett Crane Everyman’s House collection comes from Western Michigan University. 

Last, but not least, if you are researching botany, you won’t want to miss the University of Michigan’s Vascular Plant Type collection of over 9,000 plant specimens with images. This specimen of Calyptrogenia cuspidata, a flowering shrub or small tree found in the rainforest, was collected in the Dominican Republic. The collection is international in scope with a particular focus on Michigan and the Great Lakes region and makes a substantial contribution to DPLA’s science collections.

In the months and years to come, the Michigan Service Hub partners look forward to adding new collections from institutions across the state — stay tuned!

For more collection highlights and information about the Michigan Service Hub, visit michiganservicehub.org.

FOSS4Lib Upcoming Events: VIVO Camp

Tue, 2017-01-31 14:54
Date: Thursday, April 6, 2017 - 09:00 to Saturday, April 8, 2017 - 17:00
Supports: Vivo

Last updated January 31, 2017. Created by Peter Murray on January 31, 2017.

GET READY for VIVO Camp in New Mexico | DuraSpace

Open Knowledge Foundation: Open State Foundation Netherlands wins OGP 2016 award for work to advance fiscal transparency through OpenSpending

Tue, 2017-01-31 10:15

Open State Foundation is a non-profit based in the Netherlands, working on digital transparency by opening up public information as open data and making it accessible for re-use. Last December, the organization received one of the seven Open Government Partnership 2016 Awards for its work on OpenSpending at the OGP Global Summit in Paris, France. The awards celebrated civil society initiatives that are using government data to bring concrete benefits. This blog post describes Open State Foundation’s work on advancing fiscal transparency through OpenSpending.

The financial crisis and various budget cuts in the Netherlands made it more urgent than ever for citizens to gain real-time access to the financial data of all local and regional governments. Civil servants, journalists and citizens alike need data on budgets and spending to hold their own local governments to account.

Two years ago, Open State Foundation sat down with some civil servants of the Central District of the City of Amsterdam. We discovered that each quarter they were obliged to send an Excel file with financial data on budgets and spending to the Central Bureau of Statistics. We decided to ask for the same Excel file from all districts of the city of Amsterdam and built a website to visualise the data and make comparisons possible. Each district could compare not only its own budget with its actual spending, but also compare itself with the other districts. We built a tool to show what unlocking all local government financial data would look like.

Image credit: Amsterdam Canal by Lies Thru a Lens CC BY 2.0

Open State Foundation decided then to approach all local governments ourselves and ask each of them for the data. It was a great opportunity to raise awareness about the importance of open data not only for society but also for the local governments themselves.

We thought the easiest thing to do next was to approach the Central Bureau of Statistics and ask for all the files of all local governments. However, we were told that this was not allowed. Each of the 400 municipalities, 12 provinces, 24 water boards and a couple of hundred common arrangements decided on its own in what form to present its financial records to its own citizens. It was the decision of the local governments themselves whether the data could be open or not.

We built a template for local advocacy and started by asking civil servants first. We asked for the data and, if they declined our request, we then approached the alderman. If the alderman also rejected our request, we approached the municipal council, sometimes with the help of local journalists. And so, in various municipalities, council members raised questions and resolutions were even tabled.

Within a year, using this approach, we managed to gain access to financial data of more than 200 local governments in the Netherlands, collecting thousands of files, containing millions of data points.

We then approached the Central Bureau of Statistics again. Now, together with the Ministry of Interior, that supported our mission. We could show that there was a huge number of cities and towns that were willing to share their financial information with anyone.

And so, not much later, the Central Bureau of Statistics sent out a memorandum to all local and regional governments in the Netherlands announcing that, by the end of that year, the budgets and spending of all local governments and regional authorities would be released as open data. Not only was historical data released; from that moment on, the data was published each quarter in a sustainable manner.

Municipal council members can now hold their local government to account throughout the year. Civil servants can easily benchmark the financial performance of their own city and create their own benchmarks, something they previously spent a lot of money on. Journalists use the tool to see how their local governments are performing. Citizens are now able to challenge the government by showing that they could do things better and reduce costs.

Ultimately, this success depended on finding the right approach to engage the various local governments. With a strong community and a mix of technical and political knowledge, everyone should be able to hold power to account.

By now, a number of cities are providing data down to the transaction level. At the moment, Open State Foundation is working with a number of local governments to unlock deeper levels of detail and to make it possible to scale this up. Together with the process of unlocking local council data on minutes and decisions, we want to continue working towards connecting spending to the decisions made.

DuraSpace News: It is a Happy New Year for 4Science!

Tue, 2017-01-31 00:00

From Michele Mennielli, International Business Developer 4Science  

DuraSpace News: GET READY for VIVO Camp in New Mexico

Tue, 2017-01-31 00:00

Austin, TX  The VIVO Project is pleased to announce that the first VIVO Camp will be offered April 6-8 (Thursday-Saturday) at the University of New Mexico in Albuquerque.

DuraSpace News: VIVO Updates–Camp, Conference, Microsoft Academic

Tue, 2017-01-31 00:00

From Mike Conlon, VIVO Project Director
