
Feed aggregator

District Dispatch: Washington Office at Annual 2017: Libraries #ReadytoCode

planet code4lib - Wed, 2017-06-07 14:28

Are you tracking what’s going on with coding in libraries? OITP’s Libraries #ReadytoCode initiative is in full swing, and if you haven’t heard, you can find out more at Annual in Chicago.

The Ready to Code team is building on our Phase I project report to address its recommendations about the support, resources and capacity that school and public libraries need to get Ready to Code.

Get Your Library #ReadytoCode (Sunday, June 25, 1-2:30 p.m.)
Get a taste of what we heard from the field and hear from librarians who have youth coding programs up and running in their libraries. Join us on Sunday, June 25, from 1 to 2:30 p.m. Play “Around the World” and talk with library staff from different backgrounds and experiences who will share the ups and downs and ins and outs of designing coding activities for youth. Table experts will cover topics like community and family engagement, analog coding, serving diverse youth, evaluating your coding programs and more!

Learn how to get started. Hear about favorite resources. Build computational thinking facilitation skills. Discuss issues of diversity and inclusion. Visit each table and get your #ReadytoCode passport stamped with one-of-a-kind stamps. Share your own examples for a bonus stamp.

Start your library’s coding club with Google’s CS First and other free resources (Saturday, June 24, 1 – 2:30 p.m.)
Interested in offering a computer science program at your library? Join a team from Google to learn about free resources to support librarians in facilitating activities for youth, including how to set up and run free CS First clubs, designed to help youth learn coding in fun and engaging ways through interest-based modules like storytelling, design, animation and more. Speakers include Hai Hong, program manager of CS Education; Nicky Rigg, program manager of CS Education; and Chris Busselle, program manager of CS First.

Libraries as change agents in reducing implicit bias: Partnering with Google to support 21st Century skills for all youth (Saturday, June 24, 3 – 4 p.m.)
As our economy shifts, digital skills, computer science and computational thinking are becoming increasingly essential for economic and social mobility, yet access to these skills is not equitable. Join Hai Hong and Nicky Rigg from Google to learn about recent research on addressing implicit biases in education, and be ready to work as we discuss how libraries and Google can partner to increase the diversity of youth who are prepared to participate in the digital future.

Tech education in libraries: Google’s support for digital literacy and inclusion (Sunday, June 25, 10:30 – 11:30 a.m.)
How can we better support our youth to participate in and benefit from the digital future? Join Google’s Connor Regan, associate product manager of Be Internet Awesome, and others from Google to learn about the range of free resources available to help librarians, families and communities to promote digital literacy and the safe use of the internet.

 

Want to know more? Follow the Libraries #ReadytoCode conference track on the Conference Scheduler and stock up on ideas to design awesome coding programs when you get back home!


HangingTogether: Institutional researchers and librarians unite!

planet code4lib - Tue, 2017-06-06 19:51

Institutional research information management requires the engagement and partnership of numerous stakeholders within the university or research institution. Institutional researchers are a critical stakeholder group on any campus, and I encourage greater collaboration between university libraries and institutional research professionals to support research information management.

Last week I had the opportunity to present a poster at the annual meeting of the Association for Institutional Research (AIR), the primary professional organization for US institutional research (or IR) professionals. The IR professionals I spoke with expressed frustration with how difficult it is to collect high-quality, reliable information about the research productivity of their institutions. They need this information for many different reasons:

  • They are increasingly being asked to report on faculty research activities as a component of institutional decision support and strategic planning.
  • They support institutional and disciplinary accreditation activities which require extensive accounting of research activities.
  • They are asked to support cyclical internal reviews of undergraduate and graduate degree programs (typically called program review). While program review emphasizes student academic activities and outcomes, quantitative and qualitative information about faculty research is needed.
  • They aggregate information that may support institutional competitiveness in national and international rankings and conduct benchmarking against peer institutions.
  • They may be asked to support or lead annual academic progress review workflows (called faculty activity reporting or FAR in the US), in which faculty self-report research, teaching, and service activities to support promotion and tenure evaluation as well as annual reviews.

IR professionals, who usually report directly to senior academic leadership, are keen to discover better ways to collect and interpret campus research productivity. While European institutions have been collecting and managing research information for some time, as demonstrated by the maturity of international organizations like EuroCRIS and the maintenance of database models like CERIF to support Current Research Information Systems (CRIS), this is still fairly new in the United States, and US research information management practices are developing quite differently from European CRIS models. As Amy Brand articulated in her excellent 2015 blog post, US RIM adoption lags in great part because no single campus unit “owns” interoperability; instead, system development takes place in a decentralized and uncoordinated way. This was visible within the AIR community, where conversations about collecting program review and benchmarking data were usually separate from faculty activity reporting (FAR) workflows. Completely absent from the conversation were the RIM components that libraries are usually keen to address, including public researcher profiles to support expertise discovery, linkages to open access content and repositories, and reuse in faculty web pages, CVs, and biosketches. Different components of the RIM landscape are being developed and supported in siloed communities. This isn’t good for anyone.

I see complementary goals and potential alliances between libraries and institutional research professionals. Collecting and managing the scholarly record of an institution is a challenging endeavor requiring enterprise collaboration. By working together and with other institutional stakeholders, I believe institutional researchers and librarians can collect and preserve quality metadata about the institutional scholarly record, and they can support a variety of activities, including public researcher profiles, faculty activity review workflows, linkages to open access content, and reporting and assessment activities, all parts of a rich, complex research information management infrastructure. By working together to “enter once and reuse often,” researchers also win, as improved systems can save them time by reducing multiple requests for the same information, accelerate CV and biosketch creation, and automatically update other systems and web pages through APIs and plug-ins.

IR professionals are obviously data savvy, but publications metadata is usually outside their experience. They understand that there are significant challenges to collecting the publications and scholarly record of their institutions, but they are largely unfamiliar with the specific challenges of person, object, and institutional name ambiguity in bibliographic records, or with why sources and coverage may vary by discipline. Because they may not have previously collaborated with libraries, it’s easy for institutional researchers to miss the knowledge and expertise the library can offer in addressing these challenges, as well as its knowledge of bibliographic standards, identifiers, and vocabularies.

Libraries have complementary perspectives on research and researcher information to offer cross-campus institutional research colleagues. For example, while libraries (and OCLC Research) are paying close attention to the evolving scholarly record and the growing importance of research data sets, grey literature, and preprints, these are largely unfamiliar and unimportant to institutional researchers. For the immediate future, publications remain the primary, measurable intellectual currency for benchmarking and reporting at US universities, as are traditional, article-level citation metrics. And unlike the library community, institutional reporting offices have little interest or experience in supporting open access, discoverability, expertise identification, and content preservation.

I think it’s equally important for libraries to ask what they can learn from the institutional research community. IR professionals are the experts about institutional data, and they hold the keys to demystifying campus information, including institutional hierarchies and affiliations that need to be addressed in any RIM implementation. They provide leadership and support for departmental, disciplinary, and institutional data aggregation efforts like accreditation, which provides them with a unique and powerful view of challenges and opportunities for improving data, systems, workflows, and collaborations. They are familiar with institutional and national policies, like FERPA, that ensure personal privacy, and they also have well-established communities of practice to support data sharing, such as the Association of American Universities Data Exchange (AAUDE).

OCLC Research and working group members from OCLC Research Library Partnership institutions are working together to understand rapid changes in institutional research information management and the role of the library within it. Stay tuned for upcoming research reports this fall, as well as conference presentations this summer at the LIBER Annual Conference in Patras, Greece and the 8th annual VIVO conference in New York City.

About Rebecca Bryant

Rebecca Bryant is Senior Program Officer at OCLC where she leads research and initiatives related to research information management in research universities.


Archival Connections: Installing Social Feed Manager Locally

planet code4lib - Tue, 2017-06-06 18:12
The easiest way to get started with Social Feed Manager is to install Docker on a local machine, such as a laptop or (preferably) desktop computer with a persistent internet connection. Running SFM locally for anything other than testing purposes is NOT recommended. It will not be sufficient for a long-term documentation project and would […]

DPLA: DPLA Board Call: June 15, 2017, 3:00 PM Eastern

planet code4lib - Tue, 2017-06-06 17:50

The next DPLA Board of Directors call is scheduled for Thursday, June 15 at 3:00 PM Eastern. The agenda and dial-in information are included below. This call is open to the public, except where noted.

Agenda

  • [Public] Welcome and Introduction of Board Members and Michele Kimpton, Interim Executive Director – Board Chair, Amy Ryan
  • [Public] Updates on Executive Director Search – Amy Ryan
  • [Public] DPLA Update – Michele Kimpton
  • [Public] Questions/comments from the public

Executive Session to follow public portion of call.

Dial-in

Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/173812951

Or Telephone: +1 408-638-0968 (US Toll)

Meeting ID: 173 812 951

Islandora: Islandora CLAW Install: Call for Stakeholders

planet code4lib - Tue, 2017-06-06 16:47

Have you ever installed Islandora yourself? Do you think it could be a better experience? Would you like to spare yourself, the community, and all the potential adopters out there the difficulties of installing an entire repository solution from scratch? Then the Islandora Foundation is calling on you to help make that possible.

Now that the core of CLAW is shaping up, we plan on holding a series of sprints to implement and document a modular installation process that will benefit everyone. We know that there is a deep well of knowledge and experience out there in the community, and we're hoping motivated individuals and organizations will step forward and commit to being part of this process. Identifying as a stakeholder will give you influence over the direction that this effort takes if you're willing to put in the time to make it happen.

Work will commence in July, but in a different format than before. Before any programming or documenting gets started, we're asking stakeholders to be involved in a series of planning meetings to identify the scope of work to be done for each sprint.

So if you're interested in being involved with the creation of what will be one of the greatest features of CLAW, please respond to this doodle poll for an initial informative meeting about being a stakeholder. At the meeting, we will be discussing the new sprint format in detail, what it means to be a stakeholder, as well as prior efforts to give people the context they need to decide if they want to be involved. So if you're curious, please feel free to stop by. And if you don't feel like participating in conversation but just want to listen in, that's okay. As always, lurkers are welcome.

Meredith Farkas: Framework Freakout presentation and Questions Answered

planet code4lib - Tue, 2017-06-06 04:19

Last week, I gave an online presentation about the ACRL Framework for Information Literacy for the ACRL Student Learning & Information Literacy Committee. It was entitled Framework Freakout: How to Stop Worrying and Learn to Live with the Framework. Way more people attended than I’d expected (you know how webinars go) and it ended up being a lot of fun with a plethora of good questions. You can check out my slides or watch my archived talk embedded below.

[Embedded video: archived recording of the webinar]

I wasn’t able to get to all of the questions, so I thought I could answer some of them here (I believe this was Rhonda’s off-hand suggestion and a good one!). I want to preface this by saying that I am not “she who has all the answers.” I am not an expert. I am not the most knowledgeable about the Framework by a long-shot. I’m just a fellow-traveller on this journey to improve our teaching and student information literacy. I have engaged with the Framework some and have integrated it into my teaching where it makes sense. I would like to do more in the future, but, to me, the focus for all of us should be open-minded engagement with the Framework and incremental improvements to our teaching. Making people feel like their teaching is not Framework-y enough or that they need a philosophy degree to really do anything with the Framework is counterproductive. As Zoe Fisher says in her great post about critical information literacy (which I would argue has barriers similar to the Framework in terms of engagement and implementation), “Do Your Best and Fuck The Rest.”

 

Can you give the full citation for the Schroeder article?

There are actually two articles by Schroeder and Cahoy about affective components of information literacy:

Schroeder, Robert, and Ellysa Stern Cahoy. “Valuing information literacy: Affective learning and the ACRL standards.” portal: Libraries and the Academy 10.2 (2010): 127-146.

Cahoy, Ellysa Stern, and Robert Schroeder. “Embedding affective learning outcomes in library instruction.” Communications in Information Literacy (2012).

 

How does student self-reflection work when your contact with students is very limited?

I kind of answered this in the talk, but I think I can expand on it a little. Reflection doesn’t need to take much time, and you can definitely incorporate it into a one-shot. You can have students share (anonymously) at the beginning what aspects of doing research they feel confident in and what areas they’d like to improve. You can ask them what they hope to learn from the session. You can use this as formative assessment data to tailor your session (that’s especially easy if you get the instructor to have students do this in advance). You can have them reflect in the middle of class on things that confused them, that they have questions about, or that they’d hoped you’d cover that you haven’t yet. This again is a reflection that you can act upon in the session. Finally, you can have them reflect at the end of class in the form of a minute paper or similar. I usually ask them about something useful they learned, something they’re confused about or have questions about, and how they feel now about their ability to complete their assignment. I probably wouldn’t kludge three reflections into a one-shot, but you could absolutely do one or perhaps two of these in your teaching (I usually just have one).

You could also collaborate with the faculty member to have students reflect later on what they learned in your session that was useful in completing the assignment and how they’ve grown as a researcher over the course of the class. That could even be built into the assignment for students to do at the end, which will help to cement the lessons learned for students, and will help the librarian to see what value they provided and consider what they might want to do differently next time.

 

If a class is coming in for one single session, how do you prioritize content? The framework ideas are interesting and likely most beneficial long-term … but aren’t we doing them a disservice by not focusing on the task at hand, i.e., how to find/evaluate/select the sources needed for a particular assignment?

There were a few questions about teaching content vs. teaching the Framework. I don’t see these two things as distinct, nor are they an either/or choice. When I am designing learning outcomes and experiences for a class, I first look at their assignment and see what they need to do. My feeling is that the content in the Framework IS what students need to learn to be successful at their assignments. Teaching “information creation as a process” is absolutely relevant to a class where students need to find and evaluate a variety of types of sources. How will students successfully evaluate sources if they don’t know what goes into their creation? How will students be able to use sources rhetorically if they haven’t considered that “authority is constructed and contextual?” Integrating aspects of the latter frame into a one-shot may be as simple as having students do a pair-share about how people become experts and then facilitating a discussion around different types of expertise and how different audiences might be swayed by different kinds of expertise. Integrating the former might be as simple as using process cards (which I discussed in the webinar), discussing what students discovered, and then showing them how to find those different types of sources. It doesn’t matter if students know how to use a library database if they haven’t learned how important language is to searching and how to brainstorm keywords and related terms. Depending on the assignment and learning outcomes, I might spend more or less time on mechanical skills. Embracing the Framework doesn’t require you to never again show a student how to use a database. It just might mean that instead of just showing them how to use it, you explain what it is, what goes into the creation of the different kinds of sources in it, or what to consider when determining which sources to select. I’ve never found it a big departure from how I taught before the Framework was adopted, though it is a big departure from how I taught when I was fresh out of library school (ugh).

Also, I cover mechanics less by having students view videos before class that cover the mechanics. Sometimes instructors show them in class, or sometimes students complete pre-assignments I design in Google Forms or Qualtrics where they watch videos and then do the things described in the videos (brainstorm keywords on their topic, search the database, etc.). The great thing about the pre-assignment, beyond freeing up my time to focus less on mechanics, is that I have all of this formative assessment data I can use to focus on where students are struggling. You can see some examples and specifics on my pre-assignments in my slides from a preconference workshop I gave last year: The Mindful Instruction Librarian and the One-Shot (see slides 16-27).

 

How can I find the page that holds PDX’s All Learning Objects?

Not sure here if you meant PCC’s instructional videos or if you meant the Information Literacy Toolkit. Our tutorials are all housed on our Handouts and Tutorials page, but the toolkit is still in development and won’t be released until shortly before Fall.

 

Has anyone implemented Poll Everywhere for student reflection/assessment in your sessions? Hoping to start doing that this semester!

I have not used Poll Everywhere, but I have used Padlet, which is somewhat similar in that all students can anonymously post content to a collaborative whiteboard. I’ve used it more for research question and keyword brainstorming than for reflection, but both certainly could be used for student reflection. I find it easy enough for students to post to a Google Form or a collaborative Google Doc, so that’s what I use.

 

What did you call the short sessions you did at Portland State in advance of the full one-shot session?

Warmth sessions, a term borrowed shamelessly from the work of Dale Vidmar (Southern Oregon University) and Constance Mellon.

Vidmar, Dale J. “Affective change: Integrating pre-sessions in the students’ classroom prior to library instruction.” Reference Services Review 26.3/4 (1998): 75-95.

Mellon, Constance A. “Library anxiety: A grounded theory and its development.” College & Research Libraries 47.2 (1986): 160-165.

 

Also, do you have suggestions about how to explain the Framework in easy-to-understand terms? I have faculty who tell me they don’t care about lingo such as “information literacy”; what they want to know is what you can do for their students.

What about non-librarian faculty Framework freakout? As in: “just teach them how to use the databases…”

As to the first question, I never use info lit jargon with faculty. As a liaison, I work with a diverse portfolio of departments, each of which requires different approaches to outreach. With our developmental education faculty, we have shown them aspects of the framework because they are a population that groks reading apprenticeship, metacognition, and the idea that helping students develop positive habits of mind is as important (if not more so) as mechanical skill development. In my more content-focused disciplines, a different approach is needed.

In this era of declining budgets and neoliberalism, colleges and universities are more focused than ever on ensuring that students are developing the skills employers are looking for. I think the Project Information Literacy research around how students navigate information in the workplace as well as additional research I’ve seen here and there about the information skills employers want can be very persuasive in framing information literacy as workplace readiness. But really, the key is to know your audience and what they’d find persuasive.

In terms of pushing back on faculty who want us to focus solely on “how to search databases,” this isn’t a tension I encounter much at PCC, and maybe that’s the community college difference (I don’t know for sure). I think it helps to let faculty know that learning how to use a library database does not mean students will be able to brainstorm keywords from their topic that will help them find relevant sources, or that they will be able to evaluate the sources they find and select ones that are both relevant and of sufficient quality for the task at hand. Databases aren’t magic. If we only focus on searching, students will not be able to do research effectively, as there is so much more that goes into the process. Framing things in terms of workplace readiness may also be persuasive, but it really all depends on the instructor and what you think will persuade them. But, as I said, at PCC, our faculty for the most part respect our expertise in this area and trust us to make good decisions about what to teach.

 

We’re in the process of creating tutorials for faculty to embed into their online classes. I’m curious whether you have created assessments for your faculty to use with your tutorials.

Not assessments per se, but we are in the process of creating (and have already created a few) suggested activities that faculty can either have students do in class or have them do as homework around the learning outcome covered in the video. This will give the faculty member the opportunity to see how well students have internalized/mastered the lessons in the video so they’ll know whether further emphasis/teaching is required. They will live in the Information Literacy Toolkit.

 

Can Meredith comment on the applicability of the Framework in the community college context, versus the other contexts she also has work experience in? Does it apply “equally yet differently”?

It’s hard for me to know what is specific to my community college vs. what is specific to community colleges in general, because I’ve only worked at one. So your mileage may vary. I think having a lot of non-traditional students and being a conduit either to a trade or to a four-year institution helps faculty think differently about what we do here. I think the Framework and teaching in a Framework-y way is an easier sell at my community college. I’ve already talked about the developmental education faculty and how they are very much on the same page as the Framework, but faculty here in general do recognize that there are a lot of things that go into learning that go beyond content and mechanical skills. We see students who are being held back by their lack of self-efficacy, lack of good studentship skills/dispositions, etc. We know that building student self-efficacy and good studentship are as important (if not more) than the content we teach in a particular course. There is also a strong focus on making students employable, which encourages a focus on what students need beyond the content and after school is over. At PCC, one of our core outcomes is self-reflection so that metacognitive work is respected here at the College. I feel very lucky to work where I do.

Harvard Library Innovation Lab: LIL Talks: Synthesizer

planet code4lib - Mon, 2017-06-05 18:45

This week Ben Steinberg took us on a strange and magical trip through the world of synthesizers.

Ben wasn’t talking about those Casio keyboards we all had as kids:

No, he was talking about this kind of thing … a self-built modular synthesizer:

Before showing off the hardware, Ben first asked us to ponder: what is sound? For Ben, it’s the neurological, psychological, cultural, social phenomenon that occurs when waves of compression and rarefaction hit the insides of your ears.

And then the sounds began, starting with a simple sine wave. Ben showed us what happens as you adjust the frequency on the sine wave, and explained that humans can hear frequencies ranging from 20 to 20,000 Hz.
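(A sine wave like the one Ben demoed is easy to synthesize yourself. Here is a minimal Python sketch, not something from the talk, that writes a two-second 440 Hz tone to a WAV file; the filename and parameters are arbitrary:)

    import math
    import struct
    import wave

    RATE = 44100     # samples per second
    FREQ = 440.0     # pitch in Hz; try other values across the 20-20,000 Hz range
    SECONDS = 2

    with wave.open("sine.wav", "w") as w:
        w.setnchannels(1)       # mono
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(RATE)
        frames = b"".join(
            struct.pack("<h", int(32767 * math.sin(2 * math.pi * FREQ * i / RATE)))
            for i in range(RATE * SECONDS)
        )
        w.writeframes(frames)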

(Did you know? There’s a device businesses (like malls) can use to emit “a high-pitched sound that drives teens crazy but can’t be heard by most adults over 25.” It’s called “The Mosquito.”)

Ben then introduced us to control voltage, envelopes, voltage-controlled oscillators (VCOs), voltage-controlled amplifiers (VCAs), and low frequency oscillators (LFOs). Look them up if you want to know more, but here are the kinds of sounds they make:

https://lil.law.harvard.edu/blog/wp-content/uploads/2017/06/ben-synths.mp3

 

https://lil.law.harvard.edu/blog/wp-content/uploads/2017/06/bsynths5.mp3

All of these effects conspire to produce the “timbre,” which is the quality of the sound produced by the distribution of frequencies within it. And you can add filters – like “low-pass,” “high-pass,” “bandpass,” and “notch” filters – to create even more interesting sound effects, like the effect created by a wah-wah pedal. And if that’s not enough, Ben showed us a cool, minimalistic interface called a monome grid, which you can use to trigger different sound patterns and effects:

https://lil.law.harvard.edu/blog/wp-content/uploads/2017/06/monome1.mp4

Ben wrapped up with a discussion of the “most interesting” sounds. To him, these are the ones that aren’t simulating other sounds and don’t sound like anything else. They sound like “machines playing themselves.”

For a sample, visit Ben’s own sound machine on the web: http://partytronic.com/.

Thanks Ben!

HangingTogether: Cast your vote! Society of American Archivists and DLF Forum / Digital Preservation

planet code4lib - Mon, 2017-06-05 15:11


Voting is in the air! Ballots are open both for DLF Forum / Digital Preservation 2017 and for the Society of American Archivists “Pop Up” sessions. As usual, there are many excellent options.

Team OCLC Research wants to share what we are learning and to discuss both opportunities and challenges with the broader community, and we would like to ask for your vote!

2017 DLF Forum / Digital Preservation 2017

The Realities of Research Data Management
The Realities of Research Data Management project explores the context and choices research universities face in building or acquiring RDM capacity. Findings are derived from case studies of four research universities in four national contexts, and address scoping decisions, the role of incentives, and sourcing and scaling choices.

Surveying global practices in research information management
OCLC Research and EuroCRIS will provide a progress report on a survey of research information management practices worldwide.

Evangelizing for Digital Preservation Across your Organization: Reaching out to IT
How do you bridge the divide that often exists between those focused on permanent preservation of inactive digital records and those charged with the day-to-day operation of active business systems? The focus of the panel discussion will be on success stories and lessons learned.

“SAA Pop Up Sessions” (vote by June 9th)

Digitization Matters: 10 Years Later
In 2007 OCLC Research and the Society of American Archivists held a seminal meeting to explore barriers preventing institutions from scaling up digitization of archives and special collections. Inspired at the time by book scanning projects spearheaded by Google and the Internet Archive, participants examined what was preventing libraries from doing more to get collections into the hands of users. A report from that meeting, “Shifting Gears: Gearing Up to Get Into the Flow,” summarized these (sometimes contradictory) ideas for making digitized special collections more ubiquitously available.

Ten years later, digitization of archives and special collections has moved across the spectrum from boutique scanning and carefully curated online exhibits to massive digitization projects that have converted millions of pages of documents and microfilm for online access. There have been major advancements in access to digitized materials through state-wide digital libraries, the partnerships that formed HathiTrust, and the emergence of the Digital Public Library of America as an aggregator. Legally there is growing support for digitization as a fair use and a value added contribution. Once only accessible to the most privileged of users, archives and special collections are now available to diverse populations around the world.

Yet, are we any closer to reaching the scale we imagined ten years ago? Are the challenges and solutions to large-scale digitization any different? How has the landscape changed, or remained the same, in special collections and archives? How should special collections and archives approach digitization in the future? What opportunities lie ahead?

Moderated by Merrilee Proffitt, OCLC Research, this panel discussion will include Erik Moore from the University of Minnesota and Michelle Light from UNLV, and will encourage audience feedback.

Michelle Light
University of Nevada, Las Vegas

Merrilee Proffitt
OCLC Research

Erik Moore
University of Minnesota

About Merrilee Proffitt


HangingTogether: On Librarians and RDM …

planet code4lib - Mon, 2017-06-05 15:08

Discussions about libraries often elevate their subject to the status of an animate being, à la “libraries are expanding their range of research support services” or “libraries must do a better job of demonstrating their value to the campus community.” Of course, such references are merely shorthand for the sum total of activities taking place within the library. Still, it is worth making the point that libraries don’t make decisions or carry them out.

Librarians do.

This observation is prompted by a recent note from Linda Salem of San Diego State University, who, after reading the first report in our Realities of Research Data Management series, wrote to us with a question that cuts right to the heart of the matter (we take the liberty of paraphrasing): “What is the librarianship of RDM work? In other words, what RDM-related activities will librarians do?”

Precisely. When we speak about the role of libraries in RDM, we are, fundamentally, speaking about the work of librarians. In our Realities of Research Data Management reports, we are looking at RDM as an institutional (university) capacity; in this sense, we tend to speak of the university – or some unit of the university like the library – as the primary agent in building or acquiring RDM capacity. But as a practical matter, it is the librarians, data specialists, technical personnel, and other staff that bring a university’s RDM capacity to life.

In A Tour of the Research Data Management Service Space, we divide RDM services into three components:

  • Education services: raising awareness, skill-building, disclosing RDM resources
  • Expertise services: RDM decision support and customized solutions
  • Curation services: technical infrastructure and services for data management

So in response to Linda’s question, we can say that librarians will make their contributions to RDM in some or all of these areas. But that is not a very tidy answer. RDM is an emerging service space, and the role of libraries and librarians in its provision is likely to shift and transform as the area matures. However, we can supply a few general observations that might help shape future thinking about librarians and RDM.

  1. In some cases, librarians won’t do anything …

The fabric of RDM services and infrastructure extends well beyond those provided by the academic library, or the university itself. Think, for example, of the Dryad data repository in the bio-sciences (an independent 501(c)(3) non-profit), the DMPonline tool (provided by the Digital Curation Centre, a UK national centre of expertise in digital curation), or figshare (a commercial service from Digital Science). Many researchers navigate these resources without any mediation by librarians or indeed any local university staff.

Even when researchers avail themselves of local RDM services through their university, it is not necessarily true that those resources will be sourced in the library and staffed by librarians. Instead, they may be provided by other campus units, without library involvement. For example, Northwestern University’s Research Data Storage Service is provided through the campus Information Technology unit.

  2. … yet they will still need to be conversant in RDM and the RDM service space.

One area where academic librarians have already demonstrated leadership is in the Education component of campus RDM services. In particular, librarians often assume the responsibility of facilitating access to RDM resources provided through the library, other campus units, or extra-institutional organizations. For example, many academic librarians compile LibGuides, like this one provided by the University of North Carolina Greensboro, to assist researchers in navigating the RDM service space, pointing them to resources available on campus or elsewhere. Training is another area where librarians focus effort, educating researchers on the benefits of good data management practices, as well as the basic workflows involved. Librarians at the University of York, for example, regularly offer RDM workshops for faculty and students, covering the basics of RDM, university and funder data policies, data management planning, active data management, and long-term preservation and sharing.

So even if librarians do not direct a full spectrum of RDM services sourced within the library, we expect that they will be increasingly regarded as subject experts in this area, helping researchers to understand the topography of the RDM service space, and connecting them to the RDM resources that are available either locally or externally. Indeed, RDM may become an essential part of broader digital literacy curricula implemented by academic librarians on their campuses.

  3. The librarianship of RDM will become more concrete as roles mature and solidify.

Academic librarians are often shouldering major new RDM responsibilities. In some cases, this involves being asked to do things which are at present beyond their usual expertise. At many universities, a new role – data librarian – is emerging, but there can be ambiguity over the meaning of this designation: does it embody a librarian with a particular skill set, or is it more a description of new duties? In the case of the latter, a data librarian may be someone thrust into a new role who must acquire a range of new skills on the job.

We see the emergence of efforts to codify the basic skill sets needed for librarians and other university staff tasked with new RDM responsibilities. “Train the trainer” programs are a good example, such as the “Essentials 4 Data Support: the Train the Trainer version” workshop at the recent IDCC conference. Similarly, the MANTRA project maintains an RDM training kit specifically for librarians. As new training programs and even certifications appear, the librarianship of RDM, and the skill set needed to support it, will become both more solidified and consistent – and will complement a growing emphasis on digital curation and research support as essential skills in the librarian’s toolkit.

  4. Cooperation is needed.

The need for librarians to acquire new skills to support research data management boosts the incentive to develop programs for cooperatively developing and sharing RDM expertise. In some countries, such programs take the form of national centers, such as Data Archiving and Networked Services (DANS) in the Netherlands, or the Digital Curation Centre in the UK. Other cooperative programs adopt a more distributed approach – the recently launched Data Curation Network is a collaboration between six US universities, and aims to create a “network of expertise” for RDM, enabling libraries to act collectively to curate a greater variety of data (type, format, discipline) than a single institution could manage.

Collaboration around developing skills and sharing expertise is likely to be particularly important for academic libraries, as they assume new data curation responsibilities at the same time that library budgets are static or shrinking. The availability of a shared pool of expertise and other resources amplifies librarians’ ability to support RDM at their local institution.

  5. The librarianship of RDM is the librarianship of the scholarly record.

RDM is an emerging – and fast-developing – feature of 21st century scholarship, re-shaping researcher practices and, ultimately, the scholarly record. The scholarly record is, of course, fundamental to the mission of academic libraries, and therefore provides a good starting point for thinking about the librarianship involved in managing research data. Librarians are the traditional stewards of the scholarly record, collecting and preserving the outputs of scholarly inquiry. Historically, those outputs tended to be print books and journals; today, the scholarly record is evolving to encompass a much wider range of materials, including research data. So the role librarians play in RDM – however that role comes to be defined – is a natural extension of librarians’ commitment to securing the ongoing availability of the scholarly record in its manifold forms.

Thanks to Rebecca Bryant and Constance Malpas for their input in writing this post, and to Linda Salem for inspiring it!

 

About Brian Lavoie

Brian Lavoie is a Research Scientist in OCLC Research. He has worked on projects in many areas, such as digital preservation, cooperative print management, and data-mining of bibliographic resources. He was a co-founder of the working group that developed the PREMIS Data Dictionary for preservation metadata, and served as co-chair of a US National Science Foundation blue-ribbon task force on economically sustainable digital preservation. Brian's academic background is in economics; he has a Ph.D. in agricultural economics. Brian's current research interests include stewardship of the evolving scholarly record, analysis of collective collections, and the system-wide organization of library resources.


District Dispatch: Washington Office at Annual 2017: Report from the swamp

planet code4lib - Mon, 2017-06-05 14:45

“We live in interesting times” has never been truer in the realm of national policy and libraries. There are a lot of headlines in the news, but what’s going on specifically with respect to library interests? How can we separate the wheat from the policy chaff? And what’s happening (or likely to happen) in the swamp that’s not widely reported?

Experts in the OITP session “Report from the swamp” at ALA Annual 2017 will share insights on national policy issues that matter to library professionals. Photo credit: eskipaper.com

Report from the swamp: Policy developments from Washington (Sunday, June 25, 3:00 – 4:00 p.m.)

Come to this session to learn about what’s happening in Washington and what you and ALA can do about it. Experts will address diverse issues from net neutrality and the status of threatened federal agencies to E-rate, infrastructure, small business, health care and more. The session will point you toward resources from ALA and elsewhere to help you and your library patrons learn more about national policy issues—and some of these resources translate to the state and local contexts as well.

Beltway insider Ellen Satterwhite, a vice president with Glen Echo Group (and former staffer at the Federal Communications Commission), will provide her insights alongside OITP policy experts Larra Clark and Alan Inouye. Marc Gartler, chair of OITP’s Advisory Committee, will moderate.

We will also provide suggestions for action. You can make a difference, for example, by submitting comments on net neutrality to the Federal Communications Commission. We will explain how. And, of course, bring your questions!


Islandora: Do You Want To Host an iCamp?

planet code4lib - Mon, 2017-06-05 12:40

The Islandora Foundation is seeking locations for our slate of 2018 Islandora Camps. We usually look to do three or four each year, in locations serving the east and west coasts of North America and in Europe.

The Islandora Foundation pays for all catering, supplies, and instructor travel costs for the event. The only thing we ask from our host is for the venue to be provided free of charge (including wifi and projectors). Benefits for the host include:

  • One free registration to the event. Additionally, we try to have at least one of our workshop instructors come from the host institution, for another free registration.
  • Convenient access to Islandora training for local staff. Islandora Camp has one day of general sessions about Islandora, one day of more specific sessions on tools and sites, and one day of hands-on training: either site building via the front end in the admin track, or working on code in the developers track.
  • A gathering of local/regional Islandorians at your institution, with all of the attendant networking and collaboration opportunities that can bring.
  • Promotion of your host institution among Islandora community and social networks (Google groups, Twitter, Facebook, website, and other listservs); your logo will be displayed on our website via the camp page.

What kind of space does an iCamp need?

  • Projectors and screens. One per room is fine; two is awesome.
  • Seating for 30 - 40 people, appropriate for using laptops.
  • A second room with seating for half that number, for the workshop day.
  • Wifi and power.
  • Reasonable accommodations nearby or with easy transportation (hotels and/or student housing).
  • Reasonable travel to the institution (nearby airport, or regional airport with good ground transportation).

Interested? Drop us a line at community@islandora.ca to let us know about where you'd like to host the camp and what time of year works best at your institution.

Open Knowledge Foundation: Impact Series: Improving Data Collection Capacity in Non-Technical Organisations

planet code4lib - Mon, 2017-06-05 10:00

Open Knowledge International is a member of Open Data for Development (OD4D), a global network of leaders in the open data community, working together to develop open data solutions around the world. In this post, David Opoku of Open Knowledge International talks about how the OD4D programme’s Africa Open Data Collaboration Fund and Embedded Fellowships are helping build the capacity of civil society organisations (CSOs) in Africa to explore the challenges and opportunities of becoming alternative public data producers.

Nana Baah Gyan was an embedded fellow who worked with Advocates for Community Alternatives (ACA) in Ghana to help with their data needs.

Context 

Because many governments in Africa struggle to provide open data, civil society organisations (CSOs) have begun to emerge as alternative data producers. The value these CSOs bring includes familiarity with the local context or the specific domain where data may be of benefit. In some cases, this new role serves to provide additional checks and verification for data that is already available; in others, CSOs provide entire sets of data where none exist. CSOs now face the challenge of building their own skills to effectively produce public data that will benefit its users. For most CSOs in low-income areas, building this capacity can be a long, logistically intensive, and expensive process.

Figure 1: CSOs are evolving from traditional roles as just data intermediaries to include producers of data for public use. Original image by Schalkwyk, Francois; Canares, Michael; Chattapadhyay, Sumandro; Andrason, Alexander (2015): Open Data Intermediaries in Developing Countries.

Through the Open Data for Development (OD4D) program, Open Knowledge International (OKI) sought to learn more about what it takes to enable CSOs to become capable data collectors. Using the Africa Open Data Collaboration (AODC) Fund and the OD4D embedded fellowship programmes, we have been exploring the challenges and opportunities for CSO capacity development to collect relevant data for their work.

Our Solution

The AODC Fund provided funding ($15,000 USD) and technical support to the Women Environmental Programme (WEP) team in Abuja, Nigeria, which was working on a data collection project aimed at transparency and accountability in infrastructure and services for local communities. WEP was supported through the AODC Fund in learning how to design the entire data collection process, including recruiting and training the data collectors, selecting the best data collection tool, analysing and publishing the findings, and documenting the entire process.

Figure 2: Flowchart of a data collection process. Data collection usually requires several components or stages that make it challenging for non-technical CSOs to easily implement without the necessary skills and resources.

In addition, the embedded fellowship programme allowed us to place a data expert in the Advocates for Community Alternatives (ACA) team for 3 months to build their data collection skills. ACA, which works on land issues in Ghana, has been collecting data on various community members and their land. Their challenge was building an efficient system for data collection, analysis and use. The data expert has been working with them to design and test this system and train ACA staff members in using it.

Emerging Outcomes

Through this project, there has been an increased desire within both WEP and ACA to educate their staff members about open data and its value in advocacy work. Both organisations have learned the value of data and now understand the need to develop an organisational data strategy. This is coupled with an acknowledgement of the need to strengthen organisational infrastructure capacity (such as better emailing systems, data storage, etc.) to support this work.

The hope is that both organisations will have greater knowledge going forward on the importance of data, and have gained new skills in how to apply it in practice. WEP, for instance, has since collected and published their dataset from their project and are now making use of the Kobo Toolbox along with other newly acquired skills in their new projects. ACA, on the other hand, is training more of its staff members with the Kobo Toolbox manual that was developed, and are exploring other channels to build internal data capacity.

Lessons

These two experiences have shed some more light on the growing needs of CSOs to build their data collection capacity. However, the extent of the process depicted in Figure 2 shows that more resources need to be developed to enhance the learning and training of CSOs. A great example of a beneficial resource is the School of Data’s Easy Guide to Mobile Data Collection. This resource has been crucial in providing a holistic view of data collection processes to interested CSOs.

Another example is the development of tools such as the Kobo Toolbox, which has simplified a lot of the technical challenges that would have been present for non-technical and low-income data collectors.

Figure 3: CSO-led data collection projects should be collaborative efforts with other data stakeholders.

We are also learning that it is crucial to foster collaborations with other data stakeholders in a CSO-led data collection exercise. Such stakeholders could include academic institutions for methodology research and design, national statistics offices for data verification and authorisation, civic tech hubs for technical support and equipment, telecommunication companies for internet support, and other CSOs for contextualised experience in data collection.

Learn more about this project:

DuraSpace News: HykuDirect Pilot Program and Hyku Beta Testing Underway

planet code4lib - Mon, 2017-06-05 00:00

The Hydra-In-A-Box team is pleased to announce that beta testing for local installs of the Hyku repository has begun! Read all the details on our wiki and learn how your institution can participate through June 23.

Hyku is a digital content repository that provides robust deposit workflows, standards-based metadata management, convenient collection organization, preservation support, as well as integrated search, discovery, and access capabilities.

LITA: Hack to Learn: Two days of data munging in our nation’s capital

planet code4lib - Sun, 2017-06-04 19:29

In May, I was one of the 50+ fortunate participants in Collections as Data: Hack-to-Learn, a two-day workshop organized by the Library of Congress, George Washington University, and George Mason University.

Four datasets and five tools to work with the data were provided in advance. On the first day, we met at the Library of Congress and were introduced to all the datasets, as well as three of the tools.

The first of the datasets was the Phyllis Diller Gag File. The Smithsonian’s National Museum of American History and Smithsonian Transcription Center have been conducting a crowdsourcing project to transcribe the 52,000 original index cards of the comedian’s jokes; in March of this year the transcriptions were made available. Each card contains a joke, often a date, and sometimes an attribution (if someone gave Diller the joke), and the cards are organized by subjects that appear at the top of the cards. Some prep work had been done on the data, and we got comma-separated files (.csv) and folders of individual text files to work with.
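To give a sense of how approachable the transcribed cards are, here is a minimal Python sketch that tallies jokes per subject; the filename and the "subject" column name are assumptions about the released CSV, so check the actual header first:

    import csv
    from collections import Counter

    subjects = Counter()
    with open("phyllis-diller-gag-file.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # "subject" is an assumed column name
            subjects[row["subject"].strip().lower()] += 1

    for subject, count in subjects.most_common(10):
        print(f"{count:5d}  {subject}")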

On May 16th, the day before the workshop, the Library of Congress released 25 million MARC records for free bulk download, making this, the second of the datasets, available for workshop participants, along with the rest of the world. The MARC files had been converted to .csv using MarcEdit, placing the MARC fields into separate columns.
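The workshop CSVs were prepped with MarcEdit, but the raw records are also easy to explore in Python with the pymarc library; a minimal sketch, with the filename hypothetical:

    from pymarc import MARCReader  # pip install pymarc

    with open("loc-records.mrc", "rb") as f:
        for record in MARCReader(f):
            title = record["245"]          # MARC 245 = title statement
            if title and title["a"]:
                print(title["a"])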

Our third pile o’ data was Eleanor Roosevelt’s “My Day” columns, consisting of 8,000 transcribed documents representing her nationally syndicated newspaper column, provided by George Washington University, where there’s a longstanding historical editing project of which “My Day” is only a small part. The columns have been encoded in TEI, one column per XML file. To prep the data for the workshop, both Python and R were used to extract text. Participants got XML and txt files as well as the Python extractor and a strip-XML R script.
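The text extraction itself is only a few lines. Here is a rough Python equivalent of that stripping step (the filename, and the assumption that the column text lives under the TEI <body> element, are mine, not the project’s):

    import xml.etree.ElementTree as ET

    # The TEI namespace is standard; where the column text sits is an assumption.
    NS = {"tei": "http://www.tei-c.org/ns/1.0"}

    tree = ET.parse("my-day-1936-01-06.xml")      # hypothetical filename
    body = tree.getroot().find(".//tei:body", NS)
    text = " ".join(body.itertext())
    print(" ".join(text.split()))                 # collapse runs of whitespace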

Possibly the most complicated dataset was the End of Term tumblr posts, consisting of text and metadata from 56,864 tumblr blog posts from 72 federal tumblr blogs harvested as part of the End of Term Archive Project. This data was also made available in a variety of file formats: JSON, .csv, and plain text. The reason this data was more complex, at least to my way of thinking, is that we only had the text and the metadata (the tags) from the tumblr posts, while much of the meaning in a tumblr post is visual. It was like getting only the caption, without the picture. The dates in the data were a mixed bag as well, because the federal agencies posting to tumblr included the National Archives and Smithsonian museums, which often posted artifacts from their collections, resulting in a large number of 19th-century (and earlier!) dates.

The five tools were OpenRefine, Voyant, MALLET, Gephi, and Carto. Day one was OpenRefine, MALLET, and Voyant, while day two was Gephi and Carto. Most attendees (including me) had some experience with OpenRefine, but the other tools were new to the majority of attendees. We were sent good instructions for downloading and installing software in advance of the workshop. Mac users had a bit of an advantage with Java-based applications like Voyant and MALLET, since we could download a .jar file and run it locally.

In the afternoons, we divided up into teams based on our interests in working with particular data, or tools, or both. On the first day, I worked with a team using Voyant to try to see patterns in Eleanor Roosevelt’s “My Day”. We thought we might be able to detect differences in the terminology Mrs. Roosevelt used pre-WWII versus during and after the war. We also thought about comparing the columns written while FDR was living with those written after his death, but FDR’s death date (April 12, 1945) coincides closely with the wind-down of the war in Europe, such as the end of the Battle of Berlin on May 2. We decided to use the .txt files and divided them into chunks of a size that Voyant could ingest (a sketch of this prep step appears below). We identified stop words to take out of our analysis; for example, every “My Day” entry closes with a copyright statement, Copyright [year] United Features Syndicate, Inc. We ended up with two graphs at the end of our work time:

[Graph: 1936-1941]
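Incidentally, the prep work is simple enough to sketch in Python. The directory name, chunk size, and the exact copyright wording matched by the regular expression are all assumptions here.

    import re
    from pathlib import Path

    # Concatenate the "My Day" .txt files, strip the closing copyright line,
    # and emit fixed-size chunks that Voyant can ingest comfortably.
    COPYRIGHT = re.compile(r"Copyright \d{4} United Features? Syndicate, Inc\.?", re.I)
    CHUNK_WORDS = 50_000

    words = []
    for path in sorted(Path("my_day_txt").glob("*.txt")):
        text = COPYRIGHT.sub("", path.read_text(encoding="utf-8"))
        words.extend(text.split())

    for i in range(0, len(words), CHUNK_WORDS):
        chunk = " ".join(words[i:i + CHUNK_WORDS])
        Path(f"chunk_{i // CHUNK_WORDS:02d}.txt").write_text(chunk, encoding="utf-8")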

On day two, we moved to George Washington University to work with Gephi, a network analysis tool, and Carto, a geographic analysis tool. Jen Stevens, Humanities Liaison Librarian at George Mason University, presented on Gephi, and I was initially pretty charmed with it. Stevens mentioned that at George Mason, they’re using Gephi to analyze ingredient use in the digitized version of the Fairfax Family Cookbook, which contains recipes popular in 17th and 18th century England. I thought maybe I could do a network analysis of which food bloggers link to which other food bloggers, and I still think that could be a good Gephi project. However, Gephi is pretty picky when it comes to prepping the data you want to analyze. In the afternoon, I worked with a team trying to use Gephi to analyze the tags from the End of Term tumblr blogs, which kind of devolved into each of us scowling at OpenRefine, trying to come up with queries to extract the data in a usable form, pondering how to connect the tags to the posts, and generally trying to get the data into shape so that Gephi would paint a graph for us.
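In hindsight, what we were groping toward was an edge list: Gephi will import a CSV whose header row is Source,Target. A hedged sketch, reusing the assumed json format from the tag tally above:

    import csv
    import json

    # Build a Gephi-importable edge list linking each post to its tags.
    # "eot_tumblr.json" and its fields are the same assumptions as before.
    with open("eot_tumblr.json", encoding="utf-8") as fh:
        posts = json.load(fh)

    with open("edges.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["Source", "Target"])
        for post in posts:
            post_id = post.get("id") or post.get("url", "")
            for tag in post.get("tags", []):
                writer.writerow([post_id, tag.lower()])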

Overall, what I learned at the workshop begins with a bit of personal insight: I’m pretty good at tossing a small dataset into one of these tools, but I still have not been successful at getting meaningful results from a larger body of data. For example, I was able to get Voyant to create a nice word cloud from a set of text files that all include a title, URL, and 200-250 word abstract, and I got Carto to map a spreadsheet of locations where I have students in internships this summer. These look nice but don’t tell us much. Second, regular expressions rule! Finally, and more universally important, you really have to have some kind of idea of what questions your data might be able to answer before you start using one of these tools. You may find yourself, in the words of Jeff Edmonds, “OpenRefining a lot of crap“. Almost ten years ago, Chris Anderson predicted that the data deluge meant the end of scientific theory. Rather than starting research by creating a hypothesis and looking for proof, Anderson said we’d be able to start by looking for patterns in the data itself. With all due respect to Anderson, after spending two long days looking at data and data analysis tools, I think forming the hypothesis is still a critical step in the research process, even in the age of massive data.

Evergreen ILS: Evergreen 3.0 development update #8

planet code4lib - Sat, 2017-06-03 01:13

Yellow-billed Pintail (Anas georgica) by Ron Knight on Flickr (CC-BY)

Since the previous update, another 4 patches have been committed to the master branch.

Sounds like an exceptionally quiet week on the development front, right?

Not really. During the previous week, 66 updates were made to Evergreen and OpenSRF bugs in Launchpad, including:

  • reports of new bugs or wishlist items
  • discussions about the bugs and/or proposed fixes
  • folks assigning bugs to themselves to indicate that they are actively working on them

This is part of the normal ebb and flow of development: sometimes a problem, once identified, has a clear path to resolution — and sometimes it requires debate. However, it can be difficult to track what is going on. If, like me, you subscribe to the firehose and have Launchpad send you an email every time a bug report gets updated, it’s possible to have an overall sense of where things are going, but it takes time to read every update.
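Much of that raw material is available programmatically, which is what makes a dashboard feasible. As a rough illustration (a sketch, not anything we are running), launchpadlib can pull the open bug tasks for a project in a few lines; the consumer name and the status filter below are arbitrary choices.

    from launchpadlib.launchpad import Launchpad

    # Anonymous, read-only session against the production Launchpad API.
    lp = Launchpad.login_anonymously("eg-dashboard-sketch", "production")
    evergreen = lp.projects["evergreen"]

    # Show a handful of open bug tasks; the status filter is an arbitrary choice.
    open_statuses = ["New", "Confirmed", "Triaged", "In Progress"]
    for task in evergreen.searchTasks(status=open_statuses)[:10]:
        print(task.status, task.bug.title)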

So let’s talk about dashboards. I’m a fan of Koha’s development dashboard, which displays things like the five most recently pushed features, the ten oldest bugs in need of signoff, and a list of bugs that need QA. It also has leaderboards; for example, I can tell at a glance who signed off on the most patches in 2017. This is perhaps a double-edged sword: on the one hand, some people are motivated to increase the number of their contributions in order to climb higher on the lists, and in moderation, that’s fine. On the other hand, one tends to get what one measures; picking metrics carefully matters.

As I mentioned in my proposal to be release manager for 3.0, I would like to help create a development dashboard for Evergreen as a way of increasing visibility for various aspects of the development process and to help guide folks who are looking for things to do. But I have questions:

  • If you’re a frequent Evergreen contributor, what could be included in a dashboard to make your work easier?
  • If you use or administer Evergreen, what summary information about the state of development would be useful to you?
  • Do folks know of other good examples of dashboards for free software projects?
  • Who wants to help build a dashboard?
  • What other questions should I be asking?

I look forward to hearing from folks.

Duck trivia

There are no duck species that breed in Antarctica, but the yellow-billed pintail will occasionally vacation there.

Submissions

Updates on the progress to Evergreen 3.0 will be published every Friday until general release of 3.0.0. If you have material to contribute to the updates, please get them to Galen Charlton by Thursday morning.

District Dispatch: Washington Office at Annual 2017: Copyright

planet code4lib - Fri, 2017-06-02 22:13

Copyright is just as complex as ever, and librarians are expected to be knowledgeable and to manage their institutions’ copyright issues. The Office for Information Technology Policy (OITP) presents three programs at Annual 2017 designed to help librarians new to the copyright specialist role find professional guidance:

At Annual 2017, look for the life-size “Lola Lola” character in the McCormick Place Exhibit Hall to find the “Ask A Copyright Question” booth. Source: Carrie Russell

“Ask a Copyright Question” Booth (Saturday and Sunday, June 24-25, 10:00 AM – 4:00 PM)

Is copyright getting in the way of effective teaching and learning? Do you have rogue teachers copying textbooks? Can you show a Netflix film in the classroom? Copyright experts at the “Ask a Copyright Question” booth in the McCormick Place Exhibit Hall will be on hand to answer the questions that come up when you address copyright issues at your public, school, college or university library. Stop on by while you’re checking out the exhibits and pick up new (and free!) copyright education tools. Our experts are ready to give you an opinion on anything.

You Make the Call Copyright Game (Sunday, June 25, 1:00-2:30 PM)

Don’t miss this interactive copyright program in game show format, where panelists will respond to questions on fair use, pop culture and potent potables. Panelists include Sandra Enimil, director of the Copyright Resources Center at Ohio State University; Eric Harbeson, music special collections librarian at the University of Colorado; and Lindsey Weeramuni, from the Office of Digital Learning at MIT. Kyle Courtney from the Office for Scholarly Communication at Harvard University will keep us laughing, and Marty Brennan, copyright and licensing librarian at UCLA, will be channeling bygone game show hosts like Gene Rayburn of the Match Game. Remember those long microphones? And yes, attendees will have their own buzzers!

Another view from the swamp: copyright policy update (Monday, June 26,  10:30-11:30 AM)

The U.S. Copyright Office is a unit of the Library of Congress, and now that a librarian has been appointed Librarian of Congress, rights holders have some concerns. Why are they so worried, and how is Congress going to respond? A panel of policy experts will address the House Judiciary copyright review, including legislation that would make the Register of Copyrights more independent from the Library of Congress – and that proposes the next Register be appointed by President Trump rather than by the Librarian of Congress. (Really?!?) Adam Eisgrau, managing director of ALA’s Office of Government Relations, and Krista Cox, director of Public Policy Initiatives at the Association of Research Libraries, will discuss U.S. copyright policy with special guest and international copyright expert Stephen Wyber from the International Federation of Library Associations. (What in the world is the EU up to?!?)

All of this brought to you by the OITP Copyright Education Subcommittee, which strives to make copyright entertaining.

The post Washington Office at Annual 2017: Copyright appeared first on District Dispatch.

Dan Scott: Wikidata, Canada 150, and music festival data

planet code4lib - Fri, 2017-06-02 17:15

Following my workshop at the Wikipedia/Canadian Music preconference, I had the opportunity to present with Stacy Allison-Cassin on the subject of Wikidata, Music, and Community: Leveraging Local Music Festival Data. This session was aimed at a more general audience of music librarians--most of whom had never heard of Wikidata--and made the case for why we were advocating the use of Wikidata as one of the repositories of data about Canadian music festivals.

Our central argument was that, rather than focusing on directly enhancing our own local data repository silos (for example, library catalogues, digital exhibits), libraries and archives should invest their limited resources in enriching Wikidata, a centralized data repository, to maximize the visibility of those entities and the reusability of that data in the world at large… and then pull that data back into our local repositories to enrich our displays and integration with the broader world of data.

Having heard from colleagues at the Evergreen conference in April that they were tired of hearing about the promise of linked data and wanted to see some actual demonstrable value for users, I showed a proof of concept that I had implemented for Laurentian University's catalogue. Any record recognized as a "music album" adds a musical note to the primary contributor's name; clicking that note queries Wikidata for a band or musician with a matching name and displays a subset of available data, such as a description, an image, a link to their website, etc. In the following image you can see the result of pulling up a record for the fine Canadian band Men Without Hats and clicking on the musical note:

[Image: catalogue record for Men Without Hats with the Wikidata-powered details displayed]

It is a simple example: the user experience could be greatly improved, and it would be far better if we used the Wikidata entity ID as the authority control value in the underlying records, to avoid ambiguity when bands or musicians have identical names. But for a quick hack put together over a few hours, I'm pretty happy with the results. The code is available, of course :)
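To give a flavour of the kind of lookup involved (a sketch under assumptions, not the actual code linked above), here is a label-based query against the Wikidata SPARQL endpoint in Python; I believe Q215380 is the "musical group" class and P856 the official-website property, but treat those identifiers as assumptions to verify.

    import requests

    # Look up a musical group by its English label on the Wikidata SPARQL endpoint.
    QUERY = """
    SELECT ?item ?itemDescription ?website WHERE {
      ?item rdfs:label "Men Without Hats"@en ;
            wdt:P31 wd:Q215380 .
      OPTIONAL { ?item wdt:P856 ?website . }
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    """

    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "wikidata-catalogue-sketch/0.1"},
    )
    resp.raise_for_status()
    for row in resp.json()["results"]["bindings"]:
        print(row["item"]["value"],
              row.get("itemDescription", {}).get("value", ""),
              row.get("website", {}).get("value", ""))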

Stacy and I began with a high-level overview of Wikidata, noting that it is:

  • Like Wikipedia, but machine & human-readable & writable
  • Focused on entities, with statements of fact about those entities backed up by references
  • Open for participation: no organizational barriers such as having to be an OCLC member to contribute to LCNAF
  • Open for use: all data is CC0 licensed (dedicated to the public domain) thus requiring no special acknowledgements, etc on the part of the user of the data

As an example of how Wikidata supports Wikipedia, I highlighted how authority control used to be accomplished in Wikipedia articles via manually-coded lists of authority references for a given person, but now that job can be delegated to the Wikidata entity counterpart via the {{Authority control}} macro to dynamically generate an authority list, helping both humans & machines. The multilingual nature of the data means that those lists no longer need to be manually updated in every language variant! But of course there is still plenty of labour-saving development to be done: for example, the Infobox musical artist in Wikipedia is still maintained manually.

Stacy discussed some comparisons of the musical genres in Wikidata versus the Library of Congress vocabulary (in short: the quantity of genres is certainly there, but some work linking the vocabularies would be beneficial), and highlighted how we have been experimenting with structuring music festivals in Wikidata. Even with a recent example like the Northern Lights Festival Boréal 2015, we found only 25% of the performers already had corresponding entities in Wikidata. That leaves a lot of room for us to improve the visibility of Canadian musicians during the Canada 150 Wikipedia edit-a-thons--and with Wikidata's notability policy that allows new entities to be added if they fulfill a structural need by making statements made in other items more useful, we believe this is a positive way forward.

In the hopes that others may find our presentation useful, Stacy and I offer our slides under the CC-BY-SA 4.0 license.

Hugh Cayless: Reminder

planet code4lib - Fri, 2017-06-02 13:07
In the midst of the ongoing disaster that has befallen the country, I had a reminder recently that healthcare in the USA is still a wreck. When I had my episode of food poisoning (or whatever it was) in Michigan recently, my concerned wife took me to an urgent care. We of course had to pay out-of-pocket for service (about $100), as we were way outside our network (the group of providers who have agreements with our insurance company). I submitted the paperwork to our insurance company when we got home (Duke uses Aetna), to see if they would reimburse some of that amount. Nope. Rejected, because we didn't call them first to get approval—not something you think of at a time like that. Thank God I waved off the 911 responders when my daughter called them after I first got sick and almost passed out. We might have been out thousands of dollars. And this is with really first-class insurance, mind you. I have great insurance through Duke. You can't get much better in this country.

People from countries with real healthcare systems find this kind of thing shocking, but it's par for the course here. And our government is actively trying to make it worse. It's just one more bit of dreadful in a sea's worth, but it's worth remembering that the disastrous state of healthcare in the US affects all of us, even the lucky ones with insurance through our jobs. And again, our government is trying its best to make it worse. You can be quite sure it will be worse for everyone.

Open Knowledge Foundation: How participatory budgeting can transform community engagement – An interview with Amir Campos

planet code4lib - Fri, 2017-06-02 13:00

For most municipalities, participatory budgeting is a relatively new approach to including citizens directly in decision making about new investments and developments in their community. Fundación Civio is a civic tech organisation based in Madrid, Spain, that develops tools for citizens which both reveal the civic value of data and promote transparency. The organisation has developed an online platform for participatory budgeting processes, both for voting on and monitoring incoming proposals, that is currently being tested in three Spanish municipalities.

Diana Krebs (Project Manager for Fiscal Projects at OKI) talked with Amir Campos, project officer at Fundación Civio, on how tech solutions can help to make participatory budgeting a sustainable process in communities and what is needed beyond from a non-tech point of view.

Amir Campos, Project officer at Fundación Civio

Participatory budgeting (PB) is a relatively new way for municipalities to engage with their citizens. You developed an online platform to make the participatory process easier. How can this help turn PB into an integral part of community life?

Participatory budgets are born of the desire to democratise power at the local level, to “municipalise the State”, with a clear objective: that these actions at the local level serve as an example at the regional and national levels and foster change in State participation and investment policies. This drive to democratise power also represents a struggle for a better distribution of wealth, giving voice to the citizens, taking them out of political anonymity every year, and making local investment needs visible much faster than any traditional electoral process does. Participatory budgeting is, in effect, a demanding yearly report card that citizens give their local representatives.

The tool we have designed is powerful but easy to use, because we deliberately avoided building something that only technical people would use. Users are able to upload their own data (submitting or voting on proposals, comments, feedback, etc.) in order to generate discussions, voting processes, announcements and visualisations. Its more visual approach clearly differentiates our solution from existing ones and gives it further value. Our tool is targeted at administrators, users and policy makers without advanced technical skills, and it is delivered online as Software as a Service (SaaS), avoiding the need for users to download or install any special software.

All in all, our tool will bring the experience of taking part in a participatory budgeting process closer to all citizens. Once registered, its user-friendliness and visual features will keep users connected, not only to vote on proposals but also to monitor and share them, while exercising effective decision-making and redistributing available resources in their municipality. Along with offline participatory processes, this platform gives citizens a voice and a vote, and it gives them the possibility of making their public representatives more accountable through its monitoring capabilities. The final aim is to enable real participatory experiences, providing solutions that are easy for all stakeholders involved to implement, thus strengthening the democratic process.

Do you think that participatory budgeting is a concept that will be more successful in small communities, where daily business is ruled less by political parties’ interests and more by consensus about what the community needs (like new playgrounds or sports parks)? Or can it work in bigger communities such as Madrid as well?

Of course! The smaller the community, the better the decision-making process, not only at the PB level but at all levels. Wherever there is a “feeling” of community, it is much easier to reach agreements oriented towards the common good. That is why in large cities there is always more than one PB process running at the same time: one at the neighbourhood level and another at the municipal level (the whole city), to engage people in their neighbourhoods and push them to vote at the city level. Cities such as Paris and Madrid, which use online and offline platforms, follow that division; small town halls, such as Torrelodones, open just a single process for the whole municipality. All processes need commitment from municipal representatives and engagement from citizens, connected to a culture of participation, to harvest successful outcomes.

Do you see a chance that PB might increase fiscal data literacy if communities are more involved in deciding on what the community should spend tax money on?

Well, I am not sure about an improvement in fiscal data literacy, but I am absolutely convinced that citizens will better understand the budget cycle, its concepts and the overall approval process. Currently, in most cases, budget preparation and approval has been a closed-door process within administrations. Municipal PB implementations will act as enabling processes for citizens to influence budget decisions, becoming actual stakeholders in decision making, auditing committed versus actual spending, and giving feedback to the administrations.

Furthermore, projects implemented thanks to a PB will last longer, since citizens take on a commitment to the implemented project, to their representatives and to the peers with whom they reached agreement, an agreement that will be easily renewed.

The educational resources available to citizens on the platform will also help improve the degree of literacy. They provide online materials for better understanding the budget period and the terms used, and for learning how to influence and monitor the budget.

What non-tech measures and commitments does a municipal council or parliament need to take so that participatory budgeting becomes a long-term, integral part of citizens’ engagement?

They will have to agree as a government. One of the key steps in maintaining a participatory budgeting initiative over time is to legislate on it, so that regardless of the party that governs the municipality, the participatory budgeting processes keep running and achieve long-lasting prevalence. Porto Alegre (Brazil) is a very good example of this; they have been redistributing their resources at the municipal level for the last 25 years.

Fundación Civio is part of the EU H2020 project openbudgets.eu, where it collaborates with 8 other partners around topics of fiscal transparency.

 
