District Dispatch: The Copyright Office doesn’t need a small claims court

planet code4lib - Mon, 2017-10-23 20:57

The American Library Association (ALA), through the Library Copyright Alliance (LCA), provided cautionary feedback on the Copyright Alternative in Small-Claims Enforcement Act of 2017 (CASE Act), introduced by Representatives Tom Marino (R-PA 10th) and Hakeem Jeffries (D-NY 8th). Co-signers of the LCA letter included the R Street Institute, the Authors Alliance, and Public Knowledge. The bill calls for the establishment of a small claims court to handle copyright infringement claims when rights holders lack the funds to bring an infringement suit in federal court. For several years, visual artists (including photographers), the Authors Guild, the Copyright Alliance and others have called for an alternative judicial system that would resolve copyright disputes, encourage licensing, and be cost effective for the stakeholders involved. Small businesses, independent creators and authors do not have the resources to bring infringement suits and end up stymied by infringement they are unable to stop, license, or monetize.

Knox County (Neb.) Courthouse, Photo credit: Wikimedia

Congress asked the U.S. Copyright Office to study the issue and in 2013, after soliciting public comments and convening public roundtables, the Copyright Office published a report recommending that “a centralized tribunal within the Copyright Office” be created as an alternative to federal court. The CASE Act is based on those recommendations.

While the LCA understands “the challenges low-value infringement cases pose to individual artists,” it does not believe that the CASE Act would be an effective solution. Because participation in a small claims system would be voluntary, defendants would be unlikely to take part, especially without the independent judicial review guaranteed in federal court. The LCA also argues that a voluntary claims process is already available under the Federal Rules of Civil Procedure; that existing process could be tested before establishing an additional system that people would be unlikely to use.

The post The Copyright Office doesn’t need a small claims court appeared first on District Dispatch.

Cynthia Ng: Article: A Practical Guide to Improving Web Accessibility

planet code4lib - Mon, 2017-10-23 15:55
What’s that? Why yes, it’s another article! An open-access, peer-reviewed article, this time written more for the content creator (as opposed to the developer). Check it out in issue no. 7 of Weave: Journal of Library User Experience. Copy of abstract: This article is intended to provide guidance on making library websites and other digital content accessible … Continue reading Article: A Practical Guide to Improving Web Accessibility

LITA: Discover Altmetrics and Reproducibility at 2 new LITA events

planet code4lib - Mon, 2017-10-23 14:49

Don’t miss out on either of these two new LITA continuing education opportunities. First comes a webinar on Altmetrics and then a web course on Reproducibility. Sign up before it’s too late.

The Webinar

Taking Altmetrics to the Next Level in Your Library’s Systems and Services

Instructor: Lily Troia, Engagement Manager, Altmetric
October 31, 2017, 1:00 pm – 2:30 pm Central time

Register here, courses are listed by date

This 90-minute webinar will bring participants up to speed on the current state of altmetrics, focusing on changes across the scholarly ecosystem. Through use cases, tips, and open discussion, this session will help participants develop a nuanced, strategic framework for incorporating and promoting wider adoption of altmetrics throughout the research lifecycle at their institution and beyond.

View details and Register here.

The Web Course

Building Services Around Reproducibility & Open Scholarship

Instructor: Vicky Steeves, Librarian for Research Data Management and Reproducibility, a dual appointment between New York University Division of Libraries and NYU Center for Data Science
November 1 – November 22, 2017

Register here, courses are listed by date

This course will examine and discuss:

  • The discourse around open scholarship.
  • Best practices around use of open source tools, creating an open web presence, preparing research output for publication, and linking those outputs to more traditional publications.
  • The tools that both researchers and librarians are using to engage in open work.

View details and Register here.

Discover upcoming LITA webinars

Introduction to Schema.org and JSON-LD
Offered: November 15, 2017

Diversity and Inclusion in Library Makerspace
Offered: December 6, 2017

Digital Life Decoded: A user-centered approach to cyber-security and privacy
Offered: December 12, 2017

Questions or Comments?

For all other questions or comments related to the courses, contact LITA at (312) 280-4268, or Mark Beatty.

Alf Eaton, Alf: A single-user blog

planet code4lib - Sun, 2017-10-22 21:43

As an exercise, I’ve been trying to make the simplest possible blog, using the best possible tools.

It’s a single-user blog, so there’s no need to worry about protecting against malicious users, and the permissions are quite straightforward.


Authentication

The app uses Google for authentication, via Firebase. There’s a button to sign in, the OAuth2 callback is handled by react-redux-firebase, and there’s a button to sign out.

Database and permissions

The app uses Firebase’s Realtime Database to store the posts, divided into two collections: public, which anyone can read but only the authenticated user can write, and private, which only the authenticated user can read or write.

Each collection contains two further collections: content, which contains the HTML of each post, and metadata, which contains information about each post.
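Concretely, the database tree described above might look like this (the post IDs and field values are illustrative, not taken from the app):

```json
{
  "public": {
    "content":  { "-Kx1": "<p>Hello, world.</p>" },
    "metadata": { "-Kx1": { "title": "Hello", "published": 1508705000000 } }
  },
  "private": {
    "content":  { "-Kx2": "<p>A draft post.</p>" },
    "metadata": { "-Kx2": { "title": "Draft", "created": 1508704000000 } }
  }
}
```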

The authenticated user object contains a verified email address, which we use for permissions:

{
  "rules": {
    "public": {
      ".read": true,
      ".write": "auth.token.email_verified == true && auth.token.email == ''",
      "metadata": {
        ".indexOn": "published"
      }
    },
    "private": {
      ".read": "auth.token.email_verified == true && auth.token.email == ''",
      ".write": "auth.token.email_verified == true && auth.token.email == ''",
      "metadata": {
        ".indexOn": "created"
      }
    }
  }
}

A post is created in the private collection, then written to the public collection on publication.
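The publish step can be sketched as a single multi-location update, assuming the Firebase Web SDK; buildPublishUpdate is a hypothetical helper, not taken from the app’s source:

```javascript
// Build one multi-location update that copies a post's content and
// metadata from the private tree to the public tree. Passing the result
// to ref.update() makes both writes land atomically.
function buildPublishUpdate(id, content, metadata, publishedAt) {
  return {
    ["public/content/" + id]: content,
    ["public/metadata/" + id]: Object.assign({}, metadata, { published: publishedAt }),
  };
}
```

With the SDK this would be applied as something like firebase.database().ref().update(buildPublishUpdate(id, html, meta, Date.now())).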

User interface

The client interface uses react-redux-firebase for communication with the database, and Material UI for the app elements.

The following components are inside an App container, which handles the routing:

  • Displays the user’s profile picture, or a “Sign in” button if they haven’t yet signed in.
  • Displays a loading bar until the user is authenticated.
  • Displays a list of posts, with 3 possible actions: creating a new post; a menu of actions (“Unpublish”, “Remove”) for each post; and following each item’s link to the editor, a rich-text WYSIWYG editor built using the browser’s ContentEditable API.

The content format is HTML, so if the features provided by this editor aren't sufficient it's easy to switch to a different one (e.g. something built on ProseMirror, Slate or Draft.js).

The editor contains a “Publish” button, which writes the content and metadata to the public collection, where anyone can read it via the public JSON API.
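The public JSON API here is the Realtime Database’s REST interface: appending .json to any path returns that subtree as JSON. A small sketch (the database hostname is a placeholder):

```javascript
// Build the REST URL for reading part of the public tree anonymously.
// Appending ".json" to a Realtime Database path returns it as JSON.
function publicUrl(host, path) {
  return "https://" + host + "/public/" + path + ".json";
}
```

For example, fetching publicUrl("example-blog.firebaseio.com", "metadata") would return the metadata of all published posts.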

Source code

The source code for this app is hosted on GitHub.

Ed Summers: Appraisal Talk

planet code4lib - Sun, 2017-10-22 12:07

This is a draft of a talk I’m giving at SIGCIS on October 29, 2017. It’s part of a larger article that I will hopefully publish shortly or drop in a pre-print archive.

As the World Wide Web has become a prominent, if not the predominant, form of global communications and publishing over the last 25 years we have seen the emergence of web archiving as an increasingly important activity. The web is an immensely large and constantly changing information landscape that fundamentally resists the idea of “archiving it all” (Masanès, 2006). The web is also a site for constant breakdown in the form of broken links, failed business models, unsustainable infrastructure, obsolescence and general neglect. Web archiving projects work in varying measures to stem this tide of loss–to save what is deemed worth saving before it is 404 Not Found. In many ways you can think of web archiving as a form of repair or maintenance work that is conducted by archivists in collaboration with each other, as well as tools and infrastructures (Graham & Thrift, 2007 ; Jackson, 2014).

In this presentation I will describe some research I’ve been doing into how web archives are assembled, and why I think this matters for historians of technology. What follows is essentially what Brügger (2012b) calls a web historiography, where the focus is on the web as a particular technology of history rather than a particular history of web technology. The web, and by extension web archives, provide a singular view of life and culture since the web’s inception 25 years ago. Understanding how and why web archives are assembled is an important task for the scholars who are attempting to use them (Maemura, Becker, & Milligan, 2016). As we will see, it is the network of relationships and connections that a web archive is involved with that makes it an archive.

By web archives I specifically mean archives of web content, not necessarily archives that are on the web. Brügger distinguishes between three types of content that can be found on the web:

  • digitized: content that has been converted to digital format by some means (image scanning, transcription, etc) and then placed on the web.
  • born-digital: content that is created digital (word processor files, blog posts, social media, digital photographs, etc) and can be naturally found on the web.
  • reborn-digital: is digitized or born-digital content that has been collected and preserved from the web, and then re-presented as part of a web archive.

It is this third category of reborn-digital content that I’m concerned with here. A prime example is the Internet Archive, which I imagine some of you have used as a source of material in your own research. There are now thousands of organizations around the world collecting web content for a variety of archival purposes.

The question of what and how web content ends up in an archive is of historiographical significance, because history is necessarily shaped by the evidence of the past that survives into the present. Since it is physically impossible to archive everything, archives have always contained gaps or silences. Trouillot (1995) provides a framework for thinking about these moments in which these silences enter the archive:

Silences enter the process of historical production at four crucial moments: the moment of fact creation (the making of sources); the moment of fact assembly (the making of archives); the moment of fact retrieval (the making of narratives); and the moment of retrospective significance (the making of history in the final instance).

Given the significance of the making of archives to the making of history, and the abundance of material on the web, how do archivists decide what to save?

Archivists have traditionally used the term appraisal to describe the process of determining the value of records, in order to justify their inclusion in the archive. While notions of value, and the methods for measuring it, differ, the activity of appraisal is central to the work of the archivist. To further specify the moment in which content becomes archival, Ketelaar (2001) introduced the neologism archivalization as

the conscious or unconscious choice (determined by social and cultural factors) to consider something worth archiving. Archivalization precedes archiving. The searchlight of archivalization has to sweep the world for something to light up in the archival sense, before we can proceed to register, to record, to inscribe it, in short before we archive it.

In order to better understand this process of lighting up web content in web archives I conducted 30 interviews with web archivists, software developers, researchers and activists to discover how they decide to preserve web content. Inspired by the work of Suchman (1995), Star (1999) and Kelty (2008) these were ethnographic interviews that aimed to develop a thick description of how practitioners enact appraisal in their particular work environments.

In the first pass at analysis I coded the jottings and field notes generated. These provided a detailed picture of the sociotechnical environment in which appraisal work is being performed (Summers & Punzalan, 2017). However questions still remained about the particular psychological or social context for the decision making process around moments of archivalization in web archives.

On a second pass I performed a critical discourse analysis on the interview transcripts themselves. I selected critical discourse analysis (CDA) because it offers a theoretical framework for analyzing the way in which participants’ use of language reflects identity formation, figured worlds and communities of practice, while also speaking to the larger sociocultural context that web archiving work is taking place within.

A Discourse is a socially accepted association among ways of using language, of thinking, feeling, believing, valuing, and of acting that can be used to identify oneself as a member of a socially meaningful group or ‘social network’, or to signal (that one is playing) a socially meaningful ‘role’. (J. Gee, 2015, p. 143)

CDA provides a theoretical framework for empirically studying the way that form and function operate in language, and how this analysis can provide insight into social practices. One of CDA’s key proponents is James Gee, whose 7 building tasks provided me with a guide for analyzing my interview transcripts to gain insight into practices of appraisal in web archives (J. P. Gee, 2014). The 7 building tasks include:

  • Significance: how is language used to foreground and background certain things?
  • Activities: how is language being used to enact particular activities?
  • Identity: how is language being used to position specific identities and make them recognizable?
  • Relationships: what relationships are signaled in the use of language?
  • Politics: how are notions of value and norms established in the use of language?
  • Connections: how is language used to connect and disconnect ideas, activities, objects?
  • Sign systems and knowledge: how does language position (privilege or disprivilege) particular sign systems, or ways of knowing and believing?

There’s not enough time for me to get into all the details of my findings here, but I would like to share a brief look at the analysis itself, as a way of introducing my key findings. All the names used in the transcriptions are pseudonyms, which allows the participants to be themselves as much as possible.

Line  Speaker  Utterance
41    Jim      Well Alex helped me get in contact with the employees /
42             Alex was already on the ground with it.
43    Ed       Oh okay //
44    Jim      and Alex /
45             KNEW /
46             that it was going to be a lot of data /
47             and was like /
48             ok so [be a little more] /
49    Ed       [ahhhh]
50    Jim      careful with this

Here I am interviewing Jim, who works at a non-profit web archiving organization. I selected this snippet because it highlights how discourse reflects the relationships that are involved in the appraisal process. Just before this snippet Jim is talking about how he wasn’t sure whether a particular video streaming site could be archived because of the amount of data involved. He sought the advice of his immediate supervisor Ariana, who then brought in Alex, who is the Director of the archive. It turned out that the Director had a connection with a staff person who was working at the video streaming company, who could provide key information about the amount of data that needed to be archived. Here Jim is using the hierarchical, chain-of-command relationships to lend weight and formality to what is actually a much richer set of circular relationships within the organization. The relationships also extended outside the archive and into the organization that had created the video content.

We see this pattern reflected in another interview with Jack, who is an archivist at a large university, who has been working to document the activities of the fracking industry within his state.

Line  Speaker  Utterance
1     Jack     I really see like one of / my next curatorial responsibilities being /
2              not really more crawling or more selecting /
3              but using the connections I’ve made here /
4              to get more contact and more dialogue going with /
5              with the actual communities I’ve been documenting //
6              And I’m a little nervous about how it’s gonna go /
7              because I went ahead and crawled a bunch of stuff /
8              without really doing that in advance //

Here Jack is explicitly describing “connections” or relationships as an essential part of his job as an archivist. Just before this snippet he had finished describing how he got the idea to document fracking from a web archivist at another institution, who was already engaged in documenting fracking in his state. Jack’s interest in documenting environmental issues had developed while working with a mentor at a previous university. Jack wanted to collaborate with this archivist to better document fracking activity as it extends across geopolitical boundaries. He sought approval from the Associate Dean of the Library, who was very supportive of the idea. However, as this snippet illustrates, Jack sees these professional relationships as necessary but not sufficient for doing the work. He sees dialogue with the communities being documented, in this case activist communities, as an important dimension of the work of web archiving.

In addition to focusing on relationships, Gee’s Making Strange Tool is a discourse analysis technique for foregrounding what might otherwise slip into the background:

In any communication, listeners/readers should try to act as if they were outsiders.

The use of crawling and selecting on line 2 is a phrase that Jack uses several times in the interview. Crawling refers to the behavior of the software used to collect content from the web. This software was originally referred to as a web spider because of the way it automatically and recursively follows links in web content for some period of time. But web spiders need to be told by a person where to begin crawling, which is the process of selection.
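The consequence of selection can be sketched in a few lines: a breadth-first crawler only ever archives pages reachable from its seed list, so the human act of choosing seeds bounds the archive. The link graph and URLs below are invented for illustration:

```javascript
// Depth-limited breadth-first crawl over a toy link graph.
// `links` maps each URL to the URLs it links out to; the returned set
// contains everything the crawler would archive from the given seeds.
function crawl(seeds, links, maxDepth) {
  const archived = new Set(seeds);
  let frontier = [...seeds];
  for (let depth = 0; depth < maxDepth; depth++) {
    const next = [];
    for (const url of frontier) {
      for (const out of links[url] || []) {
        if (!archived.has(out)) {
          archived.add(out);
          next.push(out);
        }
      }
    }
    frontier = next;
  }
  return archived;
}
```

Pages with no link path from a seed never enter the archive, however valuable they might be: the silence is built in at the moment of selection.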

If you are thinking that selection and appraisal sound similar that’s because they are practically synonyms for each other. Both terms are concerned with identifying material that is of enduring value for preservation in an archive. Appraisal speaks to the theory, method or framework that is used for performing the activity of selection.

In physical archives, boxes of paper manuscripts, files, diskettes or hard drives change hands. A retiring researcher donates their personal papers or workstation to an archive. Or a particular business unit transfers a set of material to an archive according to a previously agreed upon record retention program. In either case a relationship between the record creator or owner and the archive is established. This relationship is intrinsic to the appraisal process.

But in web archives this material transaction is not necessary or it is transformed almost beyond recognition. The architecture and infrastructure of the web, as well as the underlying Internet, allow content to be instantly retrieved across vast distances. You only need to know the URL for the resource and to instruct your web client (be it a browser or a crawler) to retrieve it. When it is all working. As noted by Brügger (2012a) the reliability of archived copies of web content is not a given. Features of the HTTP protocol, such as cookies (Barth, 2011) and caching (Fielding, Nottingham, & Reschke, 2014) combined with the rendering capabilities of the client software mean that the idea of a single idealized, canonical representation of a web resource retreats from view. This seeming immateriality of web content is an illusion generated by the very real assemblage of physical networks, computing machinery, storage devices, electrical grids and cooling units that must operate in concert to deliver access.

Berners-Lee (1990)

As we saw with Jack, there is no need to enter into a conversation with a website owner to start archiving web content. When the content is on the web an archivist can start the archiving software, give it a URL, configure the crawling behavior (how far, how long, how much, etc) and let it do its work. The decision of what to crawl is detached from the relationships that have traditionally guided appraisal. But like a phantom limb, Jack still felt the significance of these connections between the archive and the content creators for doing archival work. He wanted to establish them, even if they were not technically necessary. The links of relationships between people have effectively been replaced by hypertext links that provide discoverability and access.

In many ways what this analysis seems to point to is an evolving practice of web archiving where traditional concepts of appraisal are being unbracketed from one context and reapplied in another. Focusing on the objects, be they paper files, boxes, or representations of HTTP transactions, is less at issue than the practices that involve those objects, and their network of interactions. This shift in attention recalls the work of ethnographer and philosopher Annemarie Mol, whose work studying the treatment of atherosclerosis highlights the importance of practice:

It is possible to refrain from understanding objects as the central focus of different people’s perspectives. It is possible to understand them instead as things manipulated in practices. If we do this–if instead of bracketing the practices in which objects are handled we foreground them–this has far reaching effects. Reality multiplies. (Mol, 2002, p. 5)

The web archive is situated among these multiple record realities involving the creators of records with the preservers of records with the users of records.

But to return to the question I started with: what does all this tell us about how content is appraised for websites, and historiography of the web? I think these brief examples highlight just how important it is to maintain the manifold of relationships between record creators and the archive. Appraisal, as it is embodied in the practices of archivists, and encoded into software tools, is a social enterprise that shapes the historical record. Just as the infrastructure of the web enables communication across great geographic distances, it also simultaneously moves to obscure the relationship between the archive and the archived. Further research is needed to discover practices that help bridge this gap and make it legible, while allowing for new conceptions of appraisal to develop and be translated.

If you’re a scholar who uses archives of web content, I encourage you to reach out to the archivists you know, and to work with them to help build these practices and ensure that they are collecting the things you value. If you work as part of an organization and want to ensure that your web content is being collected and archived, try reaching out to an archivist to let them know of your interest. And of course if you are an archivist, and you are stymied by thinking about archiving web content, there are good reasons for that. The web is a big place, and it’s hard to know what to collect. Focusing on the relationships you have with the communities you document can help make the work more manageable and meaningful.


Barth, A. (2011). HTTP state management mechanism (RFC 6265). Internet Engineering Task Force.

Berners-Lee, T. (1990). Information management: A proposal. CERN.

Brügger, N. (2012a). Web historiography and internet studies: Challenges and perspectives. New Media & Society.

Brügger, N. (2012b). When the present web is later the past: Web historiography, digital history, and internet studies. Historical Social Research/Historische Sozialforschung, 102–117.

Fielding, R., Nottingham, M., & Reschke, J. (2014). Hypertext transfer protocol (HTTP/1.1): Caching (RFC 7234). Internet Engineering Task Force.

Gee, J. (2015). Social linguistics and literacies: Ideology in discourses (Fifth). Routledge.

Gee, J. P. (2014). How to do discourse analysis: A toolkit. Routledge.

Graham, S., & Thrift, N. (2007). Out of order understanding repair and maintenance. Theory, Culture & Society, 24(3), 1–25.

Jackson, S. J. (2014). Rethinking repair. In P. Boczkowski & K. Foot (Eds.), Media technologies: Essays on communication, materiality and society. MIT Press.

Kelty, C. M. (2008). Two bits: The cultural significance of free software. Duke University Press.

Ketelaar, E. (2001). Tacit narratives: The meanings of archives. Archival Science, 1(2), 131–141.

Maemura, E., Becker, C., & Milligan, I. (2016). Understanding computational web archives research methods using research objects. In IEEE Big Data: Computational archival science. IEEE.

Masanès, J. (2006). Web archiving methods and approaches: A comparative study. Library Trends, 54(1), 72–90.

Mol, A. (2002). The body multiple: Ontology in medical practice. Duke University Press.

Star, S. L. (1999). The ethnography of infrastructure. American Behavioral Scientist, 43(3), 377–391.

Suchman, L. (1995). Making work visible. Communications of the ACM, 38(9), 56–64.

Summers, E., & Punzalan, R. (2017). Bots, seeds and people: Web archives as infrastructure. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 821–834). New York, NY, USA: ACM.

Trouillot, M.-R. (1995). Silencing the past: Power and the production of history. Beacon Press.

LITA: LITA Forum 2017 Preview

planet code4lib - Fri, 2017-10-20 18:41

The LITA Membership Development Committee will be moderating another Twitter #LITAchat on October 27, 2017 12:00pm CDT. This month we invite you to participate in discussing the upcoming LITA Forum November 9-12 in Denver. Joining us will be Aimee Fifarek, LITA Past President, to answer questions and talk about some of the exciting speakers and events.

To participate, launch your favorite Twitter client and check out the #LITAchat hashtag. On the web client, just search for #LITAchat and then click “LIVE” to follow along. Ask questions using the hashtag #LITAchat, add your own comments, and even answer questions posed by other participants.

Hope to see you there!

Archival Connections: SIA Workshop Links

planet code4lib - Fri, 2017-10-20 11:07
Just sharing a few links for use during the SIA workshop I’ll be teaching later today: Google Form for Exercises SIA Workshop Slides

Evergreen ILS: Evergreen 2.12.7 and 3.0.1 released

planet code4lib - Fri, 2017-10-20 01:01

The Evergreen community is pleased to announce two maintenance releases of Evergreen: 2.12.7 and 3.0.1.

Evergreen 3.0.1 has the following changes improving on Evergreen 3.0.0:

  • Fixes a bug in the web staff client that prevented initials from being stored with copy notes.
  • Adds billing types that may have been missed by systems that were running Evergreen prior to the 1.4 release.
  • Fixes a web staff client bug with the CSV export option available from the Import Queue in the MARC Batch Import/Export interface.
  • Adds the missing copy alert field in the web client’s volume/copy editor.
  • Fixes a bug where the setting to require date of birth in patron registration was not being honored in the web staff client.
  • Fixes a bug in the web staff client patron registration form where the password wasn’t generating from the last four digits of the patron’s phone number.
  • Fixes an issue in the web staff client where the complete barcode did not display in some interfaces when partial barcodes were scanned.
  • Fixes an HTML error in the new copy tags that display on the record summary page.
  • Fixes a web staff client bug where recording a large number of in-house uses at one time did not display a confirmation dialog once it hit the configured threshold.
  • Adds a Print Full Grid action to the web staff client holds pull list, allowing staff to print the entire pull list as it displays on the screen. This change also converts the Export CSV action to an Export Full CSV option.
  • Fixes an issue with the Patron Messages interface that prevented it from saving column configuration changes in the web staff client.
  • Fixes a bug in the web staff client where a billing prompt did not correctly display after marking an item damaged in those systems that have enabled the setting to bill for damaged items.
  • Adds an option to the specific due date feature that allows saving that due date until logout. This allows all circulations from a given workstation to be due on the same date.

Evergreen 2.12.7 has the following changes improving on 2.12.6:

  • Fixes a bug in the web staff client that prevented initials from being stored with copy notes.
  • Adds billing types that may have been missed by systems that were running Evergreen prior to the 1.4 release.
  • Fixes a web staff client bug with the CSV export option available from the Import Queue in the MARC Batch Import/Export interface.
  • Adds the missing copy alert field in the web client’s volume/copy editor.
  • Fixes a bug where the setting to require date of birth in patron registration was not being honored in the web staff client.
  • Fixes a bug in the web staff client patron registration form where the password wasn’t generating from the last four digits of the patron’s phone number.
  • Fixes an issue in the web staff client where the complete barcode did not display in some interfaces when partial barcodes were scanned.

Please visit the downloads page to view the release notes and retrieve the server software and staff clients.

Archival Connections: Scaling Machine-Assisted Description of Historical Records

planet code4lib - Fri, 2017-10-20 00:54
One of the questions I’ve been grappling with as part of the Archival Connections research project is simple: Is there a future for the finding aid?  I’m inclined to think not, at least not in the form we are used to. Looking to the future, I recently had the chance to propose something slightly different, and … Continue reading Scaling Machine-Assisted Description of Historical Records

Open Knowledge Foundation: Leveraging the fight for stronger openness in education

planet code4lib - Thu, 2017-10-19 14:21

This blog has been jointly written by Muriel Poisson (IIEP-UNESCO) and Javiera Atenas (Open Education Working Group): their full bio’s can be found below this post.

Education and corruption: these two themes tend to come up in every discussion about development, yet there is little discussion of corruption within educational systems, or of how to teach students and teachers about corruption, ethics and governance. Open school data and open education resources are two distinct areas, but both contribute actively to improving transparency and accountability within education systems:

  • Open school data constitute a powerful tool to promote citizen control over the transfer and use of financial, material and human resources. Their publication allows the users of the system to better know their rights and to stand up for them.
  • Open education resources, the development of open textbooks, and the adoption of open-source software (OSS) also contribute to promoting more transparent practices across the educational sector, with the support of the Open Education Community.
Improving accountability via open school data

The UNESCO International Institute for Educational Planning (IIEP-UNESCO) maintains ETICO, an online platform which provides resources to fight corruption in education and gives the community instruments to understand what corruption may mean. As summarised on ETICO, “corruption may be found in all areas of educational planning and management – school financing, recruitment, promotion and appointment of teachers, building of schools, supply and distribution of equipment and textbooks, admission to universities, and so on”.

The tool has recently been relaunched with improved features and updated content on ethics and corruption in education. It provides open, easy access to all of IIEP’s research and training materials on the subject, a media library, a global agenda, and over 1,000 press articles on corruption in education.

In addition to a multitude of other resources, ETICO brings to the forefront new initiatives around the use of open data. Its users can access IIEP's in-depth review of 14 school report card (SRC) initiatives from around the world, published as a book in late 2016. It shows that school report cards can be powerful tools to engage communities and hold schools accountable for providing students with a high-quality education. If the process is inclusive and participatory, SRCs can serve as a unique channel allowing education stakeholders to make more informed decisions based on school-level data.


Improving accountability via open education resources

When the open education and science communities talk about transparency, they often have in mind developing open resources; opening up their academic, teaching and research practices; promoting data sharing; publishing in open access journals; and collaborating to widen participation in the sciences and humanities. This extends to the school level, where students and teachers are engaged in co-creating knowledge through open resources and practices.

The Open Education Working Group, in partnership with other organisations such as the Latin American Open Data Initiative (ILDA), Núcleo REA, Abriendo Datos Costa Rica and Giggap, has been working to promote openness and the use of open data for teaching and learning, giving workshops for academics in Uruguay and Costa Rica. In Italy, A Scuola di Open Coesione has been training secondary school teachers and students to use open data to teach citizenship skills, and Monithon works with higher education students to help them develop and understand policy using open data.

As a community, we need to start thinking about fighting corruption in educational systems: promoting more transparent and open governance, supporting the adoption of the Open Contracting Partnership and its standards, and working towards policies that promote not only the sharing and openness of resources, data and research papers, but also fair contracting, transparent governance and accountability of educational institutions. Governments, too, need to be more transparent about their budgets, how they finance schools and universities, and how public funds are administered to improve education and research.

How to contribute to the ETICO online resource platform

ETICO serves anti-corruption specialists working in ministries, international organizations and agencies, non-governmental organizations, universities, and research institutions as well as policy makers and others. Its main features are available in English, French and Spanish and include:

  • a resource base of over 650 items including case studies, analytical tools and country-specific documents,
  • over 1,000 press articles on corruption in education going back to 2001,
  • a media library presenting short films on the subject from around the world,
  • a global agenda of all related events,
  • a quarterly bulletin about ethics and corruption in education (subscribe here),
  • a blog featuring innovative initiatives designed to tackle corruption.

Users now have more opportunities to get involved, share resources on the subject and contribute to the blog. The enhanced search function can also scan thousands of national and international documents, media articles, and IIEP's training materials and research findings spanning more than 15 years.

How to contribute to the Open Education Working Group Initiatives in Open Data

If you have a case study, if you are using open data at school or HE level, or if you are interested in organising a training session on open data for academics, management or policy makers, email us at You can read more about the Open Education Working Group at



Muriel Poisson (@etico_iiep) is the task manager of the IIEP-UNESCO’s project on Ethics and Corruption in Education. She is responsible for research and training activities dealing with a variety of topics on the issue, such as the use of open education data, public expenditure tracking surveys, teacher codes of conduct, and academic fraud.

In this capacity, she trained more than 2,000 people on how to design and implement diagnostic tools aimed at assessing distorted practices in the use of education resources; and on how to design and implement strategies to improve transparency and accountability in education. She also provides technical assistance in the area of transparency and integrity planning, for instance to national teams in charge of the development of an integrity risk assessment, a PETS, or a code of conduct. Finally, she is managing the ETICO resource platform, a dynamic platform for all information and activities related to transparency and accountability issues in education.

Muriel has authored and co-authored a number of articles and books, including: ‘Corrupt Schools, Corrupt Universities: What Can Be Done?’ (UNESCO Press).


Javiera Atenas has a PhD in Education and is the co-coordinator of the Open Education Working Group and the Education Lead of the Latin American Open Data Initiative. She is responsible for the Open Data agenda, with a focus on capacity building across the higher education sector to support the adoption of Open Educational Practices and policy development.

She works with the OpenMed project for capacity building in South Mediterranean countries and is an associate lecturer at the University of Barcelona, Spain. She has also authored a series of papers and studies about Open Education and Open Data.



Library of Congress: The Signal: Announcing the Library of Congress Congressional Data Challenge

planet code4lib - Thu, 2017-10-19 14:03

Today we launch a Congressional Data Challenge, a competition asking participants to leverage legislative data sets on and other platforms to develop digital projects that analyze, interpret or share congressional data in user-friendly ways.

“There is so much information now available online about our legislative process, and that is a great thing,” said Librarian of Congress Carla Hayden. “But it can also be overwhelming and sometimes intimidating. We are asking citizen coders to explore ways to analyze, interpret or share this information in user-friendly ways. I hope this challenge will spark an interest in the legislative process and also a spirit of information sharing by the tech-savvy and digital humanities pioneers who answer the call. I can’t wait to see what you come up with.” 

The Congressional Data Challenge will run from October 19, 2017, through April 2, 2018.

Your submission could take the form of an interactive visualization, a mobile or desktop application, a website or another digital creation. Entries will be evaluated on three criteria: usefulness, creativity and design. Entries are due April 2, 2018, and must be submitted through the platform. The final submission should include a two-minute demonstration video, the data sources used and a statement of benefits. Source code must be published and licensed as CC0. For rules and additional information, visit the Library of Congress Labs Experiments page.

The Library of Congress will award $5,000 for first prize and $1,000 for the best high school project, with honorable mentions under consideration. For inspiration, look at the NEH "Chronicling America" data challenge winners to see how innovators of all ages have looked at data in new ways.

To inspire thinking, Library staff envisioned outcomes such as:

  • A visualization of the legislative process using legislative data;
  • Tools that could be embedded on congressional and public websites;
  • A legislative matching service to identify members of Congress with similar legislative interests;
  • Tools to improve accessibility of legislative data;
  • A web-based display connecting Library digital collection items with related legislative activities.

“We are expecting submissions from a range of groups – from journalists to those working in civic tech, as well as high school teams,” Library of Congress Chief Information Officer Bud Barton explained. “This challenge is an opportunity to match creativity with impact, using the data made available from” is the official source for federal legislative information. A collaboration among the Library of Congress, the U.S. Senate, the U.S. House of Representatives and the Government Publishing Office, is a free resource that provides searchable access to bill status and summary, bill text, member profiles, the Congressional Record, committee reports, direct links from bills to cost estimates from the Congressional Budget Office, legislative process videos, committee profile pages and historic access reaching back as far as 1973.

You can also get your feet wet with bulk data sets from on the recently launched Library of Congress Labs LC for Robots page. LC Labs hosts a changing selection of experiments, projects, events and resources designed to encourage creative use of the Library’s digital collections.

District Dispatch: On budgets and appropriations

planet code4lib - Thu, 2017-10-19 13:44

If anyone thought passing a bill was as easy as a Saturday morning cartoon, one need only look at the budget and appropriations processes in Congress to realize just how complex legislation is in real time. Whether we’re talking about funding for libraries, student loans or other programs, fiscal decision-making is as puzzling as it gets, even to the most seasoned Washington insiders.

The U.S. Senate is slated to take up the 2018 Budget Resolution this week

This week, the U.S. Senate is slated to take up the 2018 Budget Resolution, which provides a target framework for congressional spending. Meanwhile, the FY 2018 appropriations process is on hold until a temporary “continuing resolution” expires on December 8, as reported last month in District Dispatch. So, where does this leave direct library funding?

To understand how library funding – or any federal funding – decisions are made, it is important to differentiate between budgets and appropriations. Appropriations address annual optional (discretionary) spending, while budget resolutions address mandatory spending (entitlements). Appropriations bills contain the annual decisions Congress makes about how the federal government spends money on programs such as the Library Services and Technology Act, Innovative Approaches to Literacy, the Library of Congress and the Department of Education – all of which the federal government considers discretionary. Budget bills address mandatory spending, such as Medicare and Social Security, which are considered entitlements.

Another important piece of the puzzle is that the budget resolution may contain "reconciliation instructions" directing congressional committees to find cuts in mandatory programs in order to reach spending targets. The House budget passed on October 5 included specific instructions to cut mandatory education spending by $211 billion – which will likely come, in part, from student aid programs such as Pell Grants and Public Service Loan Forgiveness. Many library students rely on these popular programs, and any cuts would likely make higher education less affordable for some.

The Senate budget will apparently not contain reconciliation instructions for specific committees, relying instead on broad targeted cuts. Senate rules allow unlimited amendments to a budget resolution but limit debate to 20 hours. Once the 20 hours have expired, the Senate moves to vote on amendments in what is called a "vote-a-rama," which often runs late into the evening.

After the Senate passes its budget, as is expected, a conference committee will be needed to produce a final version of the budget that both chambers must pass – which may take months. The final reconciliation instructions may target student loan programs that impact library students, and ALA will continue to work to oppose such cuts.


The post On budgets and appropriations appeared first on District Dispatch.

Peter Sefton: DataCrate Formalising ways of packaging research data for re-use and dissemination

planet code4lib - Wed, 2017-10-18 22:00

[Update: 2017-10-20 Fixed a few typos and some formatting.]

This is a presentation I gave at eResearch Australasia 2017-10-18 about the new Draft (v0.1) Data Crate Specification for data packaging I’ve just completed, with lots of help from others (credits at the end).


In 2013, Peter Sefton and Peter Bugeia presented at eResearch Australasia on a format for packaging research data (1) using standards-based metadata, with one innovative feature: instead of including metadata in a machine-readable format only, each data package came with an HTML file containing both human- and machine-readable metadata via RDFa, which allows semantic assertions to be embedded in a web page.

Variations of this technique have been included in various software products over the last few years, but there was no agreed standard for which vocabularies to use for metadata, nor a specification of how the files fit together.


This presentation will describe work in progress on the DataCrate specification (2), illustrated with examples, including a tool to create DataCrates. We will also discuss other work in this area, including Research Object Bundles (3) and Data Conservancy (4) packaging.

We will be seeking feedback from the community on this work: should it continue? Is it useful? Who can help out? The DataCrate spec:

  • Has both human and machine readable metadata at a package (data set/collection) level as well as at a file level

  • Allows for and encourages the inclusion of contextual metadata, such as descriptions of organisations, facilities, experiments and people, linked to files with meaningful relationships (e.g. to say a file was created by a particular machine, as part of a particular experiment, at an organisation).

  • Is a BagIt profile(5). BagIt(6) is a simple packaging standard for file-based data.

  • Has a README.html tag file at the root, with BagIt-style metadata about the distribution (contact details etc.) and a link to:

  • a CATALOG.html file in RDFa, describing the files in the payload (data) dir in detail, plus a redundant CATALOG.json in JSON-LD format

  • Is easily extensible, as it is based on RDF.
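To make the file layout concrete, here is a minimal sketch in Python of a directory following the structure described above. The file names (README.html, CATALOG.html, CATALOG.json, the data/ payload dir) come from the text; bagit.txt is required by the BagIt spec; the catalog files are created empty here as placeholders, not as valid DataCrate metadata.

```python
import os
import tempfile

def make_datacrate_skeleton(root):
    """Lay out the files the draft spec describes: BagIt tag files at
    the root, human/machine-readable catalogs, and a data/ payload dir."""
    os.makedirs(os.path.join(root, "data"), exist_ok=True)
    # BagIt declaration, as required by the BagIt specification
    with open(os.path.join(root, "bagit.txt"), "w") as f:
        f.write("BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n")
    # Empty placeholders for the human- and machine-readable catalogs
    for name in ("README.html", "CATALOG.html", "CATALOG.json"):
        open(os.path.join(root, name), "w").close()

root = tempfile.mkdtemp()
make_datacrate_skeleton(root)
print(sorted(os.listdir(root)))
```

A real crate would also carry manifest files with checksums, discussed later in the talk.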


Sefton P, Bugeia P. Introducing next year’s model, the data-crate; applied standards for data-set packaging. In: eResearch Australasia 2013 [Internet]. Brisbane, Australia; 2013. Available from:

datacrate: Bagit-based data packaging specification for dissemination of research data with useful human and machine readable metadata: “Make Data Crate Again!” [Internet]. UTS-eResearch; 2017 [cited 2017 Jun 29]. Available from:

Research Object Bundle [Internet]. [cited 2017 Jun 16]. Available from:

Data Conservancy Packaging Specification Home [Internet]. [cited 2017 Jun 29]. Available from:

Ruest N. BagIt Profiles Specification [Internet]. 2017 Jun. Available from:

Kunze J, Boyko A, Vargas B, Madden L, Littman J. The BagIt File Packaging Format (V0.97) [Internet]. [cited 2013 Mar 1]. Available from:

Slide notes

This is a presentation I gave at eResearch Australasia 2017-10-18.

Slide notes

Peter Bugeia and I talked about this four years ago. This year I got around to leading the effort to standardise what we did back then.

Slide notes

This presentation is structured as a story.

Back in June, Cameron Neylon was annoyed.

Slide notes

When I saw this cry for help I contacted Cameron and offered to work with him.

Slide notes

More from Cameron.

Slide notes

But actually, there are no simple examples of how to organise “long-tail” data sets for publication. Research data management books will tell you about various metadata standards, but how do you enter the metadata and associate it with your data?

Slide notes

The dataset is available from Zenodo, an open data repository hosted by CERN.

Slide notes

This is a human-readable catalog that lists all the files in the data set.

Slide notes

And has information about their context and the relationships between them.

Slide notes

For example, it shows that Cameron is the creator of the dataset. Note that Cameron is identified by his ORCID ID: Using URLs to identify things such as people is one of the key principles of Linked Data.

Slide notes

Here’s an example of a relationship between two of the files - one is a translation of another.

Slide notes

The HTML contains embedded RDFa metadata. RDFa is a standard way of embedding semantics in a web page.

Slide notes

RDFa, using the metadata vocabulary, is widely used by search engines.

Slide notes

Movie times, opening times, recipes - these are all some of the things that search engines understand.

Slide notes

This package also has JSON metadata.

Slide notes

The JSON is easily usable by programmers; getting the contact for this dataset, for example, is a simple operation.
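As a sketch of what that operation looks like, here is a much-simplified CATALOG.json fragment read with Python. The key names and structure below are illustrative only, not the exact draft-spec vocabulary.

```python
import json

# Illustrative, heavily trimmed CATALOG.json: a JSON-LD context plus a
# graph containing one Dataset entity with a Contact.
catalog = json.loads("""
{
  "@context": {"Contact": ""},
  "@graph": [
    {"@id": "./",
     "@type": "Dataset",
     "Contact": {"name": "Cameron Neylon"}}
  ]
}
""")

# Getting the contact is a simple traversal: find the Dataset entity,
# then read its Contact property.
dataset = next(e for e in catalog["@graph"] if e.get("@type") == "Dataset")
print(dataset["Contact"]["name"])  # -> Cameron Neylon
```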

Slide notes

But if needed, the simple "Contact" can be turned into a URI, as per Linked Data principles.

Slide notes

You can look up Contact in the DataCrate JSON-LD context and see that it maps to schema:accountablePerson

Slide notes

Then you can map schema:accountablePerson to

Slide notes

There are also checksums for all the data files.

Slide notes

There’s a Bagit manifest file.

Slide notes

Which lists all the files and their checksums, so the validity of the bag can be checked.
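A sketch of that validity check in Python, assuming a BagIt-style manifest whose lines each pair a checksum with a payload path (the payload file name below is invented for the example):

```python
import hashlib
import os
import tempfile

def verify_manifest(bag_root, manifest_name="manifest-sha256.txt"):
    """Check every 'checksum path' manifest line against the actual file
    contents; return the list of payload paths that fail."""
    failures = []
    with open(os.path.join(bag_root, manifest_name)) as manifest:
        for line in manifest:
            expected, path = line.strip().split(None, 1)
            with open(os.path.join(bag_root, path), "rb") as data:
                if hashlib.sha256(data.read()).hexdigest() != expected:
                    failures.append(path)
    return failures

# Build a tiny one-file bag to check the round trip.
bag = tempfile.mkdtemp()
os.makedirs(os.path.join(bag, "data"))
payload = os.path.join(bag, "data", "observations.csv")
with open(payload, "w") as f:
    f.write("sample,value\nA,1\n")
digest = hashlib.sha256(open(payload, "rb").read()).hexdigest()
with open(os.path.join(bag, "manifest-sha256.txt"), "w") as f:
    f.write(f"{digest}  data/observations.csv\n")

print(verify_manifest(bag))  # -> []
```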

Slide notes

This package is like a gift from Cameron, to his collaborators, to other researchers and to his future self.

Slide notes

.. to do this work …

Slide notes

We used an experimental tool called Calcyte

Slide notes

… I ran Calcyte on Cameron’s Google Drive share to create CATALOG.xlsx files …

Slide notes

Calcyte is experimental, early-stage open source software written by my group (mainly me) at UTS.

Slide notes

Calcyte created spreadsheets which functioned as metadata forms that Cameron could fill out.

Slide notes

The spreadsheets are multi-sheet workbooks, giving us scope to describe not only data entities like files, but metadata entities such as people, licenses and organisations.

Slide notes

We spent a couple of months working on this intermittently, it will be quicker next time, but this level of data description will always involve a fair bit of care and work, at least a few hours for this scale of project. It’s also important to proofread the result, just as with publishing articles.

Slide notes

The advantages of this approach are that the package has human- AND machine-readable, web-native, linked-data metadata, not just string values in XML.

Slide notes

This slide is a reminder of what the CATALOG.html file looks like, complete with its DataCite citation, which, when people start citing this, will add to Cameron’s academic capital.

Slide notes

This work is based on previous efforts

  • Cr8it - now being looked after by (via Western Sydney and Intersect)

  • HIEv

  • Mike Lake’s CAVE repository.

Cr8it and HIEv are covered in our 2013 presentation at eResearch Australasia

It builds on other standards:

  • BagIt:


Slide notes

The format used in this demo is described in a draft specification.

Slide notes

- Use at UTS for our data repository, and for export from various services

Slide notes

I’ll leave it with this slogan from our UTS data librarian and friend of eResearch, Liz Stokes.

Thanks to: - Cameron Neylon for being customer zero

  • Liz Stokes for working on metadata crosswalking/mapping

  • Mike Lake for coding and ideas

  • Conal Tuohy and Duncan Loxton for commenting on the draft spec

  • Amir Aryani for discussions about metadata

And the mainly Sydney-based metadata group who met in the leadup to this work Piyachat Ratana, Sharyn Wise, Michael Lynch, Craig Hamilton, Vicki Picasso, Gerry Devine, Katrin Trewin, Ingrid Mason, Peter Bugeia

District Dispatch: Keep your Wi-Fi signal strong: defend E-rate

planet code4lib - Wed, 2017-10-18 19:44

We had been anticipating that changes would take place after Chairman Pai took the helm of the Federal Communications Commission (FCC), and wondered where he might take the E-rate program. During his time as Commissioner, while supportive of the intent and goals of the program, he was less than enthusiastic about many of the changes made as a result of the Modernization.

At the end of September, the FCC issued a Public Notice asking for input about Category 2 (C2) funding. Specifically, the Commission wants to know whether libraries are using their allotted budgets and whether those budgets meet their needs. Since the FCC's E-rate Modernization in 2014, library applicants have been doing their darndest to receive their share of the $3.9 billion available for libraries and to take advantage of the program changes that were put in place. These changes have helped libraries increase broadband capacity and improve Wi-Fi access in their buildings.

While we know there are many reasons why libraries do or do not request funding for C2, what we want to make crystal clear to the FCC is that having funds available is critical for libraries, ensuring they can maintain and upgrade their Wi-Fi connectivity. How much are we talking about? In 2016, libraries requested more than $22 million for C2 through the E-rate program.

The deadline to submit comments is October 23, 2017, and we are calling on you to tell the FCC that libraries need secure funding for E-rate.

Here’s how to submit a comment:

  • Format your response as a PDF document. Don’t forget to use your library’s letterhead!
  • Go to
  • For the Proceeding Number, enter the following proceeding number: 13-184
  • Complete the rest of the information on the form.
  • Upload your comments at the bottom of the form.

Not sure what to write? Use this template to tell the FCC how your patrons depend on the library to connect to the internet. We encourage you to edit the template to add specifics that are important to your library and your community. Does your library offer special programs that depend on Wi-Fi? Do you know a patron who comes in to use your Wi-Fi to look for jobs, or have you seen a student doing homework on a tablet? These stories and examples are critical for the FCC to know about!

ALA will be submitting our own comments on October 23 based on input from our E-rate task force, which advises the Washington Office on all things E-rate, and on feedback from the E-rate state coordinators. Your voices will amplify our message to the FCC and illustrate the difference E-rate makes in local communities. We expect further action in the coming weeks and will be calling on libraries to share their stories and step up their support of the E-rate program. There's more to come.

Read more about ALA’s work on E-rate here.

The post Keep your Wi-Fi signal strong: defend E-rate appeared first on District Dispatch.

LITA: Jobs in Information Technology: October 18, 2017

planet code4lib - Wed, 2017-10-18 18:44

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Hunterdon County, County Library Director, Flemington, NJ

Western Michigan University, Web Developer Content Strategist, Kalamazoo, MI

California Historical Society, Project Manager – Teaching California, San Francisco, CA

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Bohyun Kim: From Need to Want: How to Maximize Social Impact for Libraries, Archives, and Museums

planet code4lib - Wed, 2017-10-18 16:50

At the NDP at Three event organized by IMLS yesterday, Sayeed Choudhury, on the "Open Scholarly Communications" panel, suggested that libraries think about return on impact in addition to return on investment (ROI). He elaborated by proposing a possible description of such impact: when an object or resource created through scholarly communication efforts is used by someone we don't know and interpreted correctly without contacting us (that is, libraries, archives, museums, etc.), that is an impact; to push that further, if someone uses the object or resource in a way we didn't anticipate, that's an impact; and if it is integrated into someone's workflow, that's also an impact.

This emphasis on impact as a goal for libraries, archives, and museums (or non-profit organizations in general to apply broadly) resonated with me particularly because I gave a talk just a few days ago to a group of librarians at the IOLUG conference about how libraries can and should maximize their social impact in the context of innovation in the way many social entrepreneurs have been already doing for quite some time. In this post, I would like to revisit one point that I made in that talk. It is a specific interpretation of the idea of maximizing social impact as a conscious goal for libraries, archives, and museums (LAM). Hopefully, this will provide a useful heuristic for LAM institutions in mapping out the future efforts.

Considering that ROI is a measure of cost-effectiveness, I believe impact is a much better goal than ROI for LAM institutions. We often think that the goal of a library, an archive, or a museum is to collect, organize, provide equitable access to, and preserve information, knowledge, and cultural heritage. But doing that well doesn't mean simply doing it cost-effectively. Our efforts no doubt aim at better-collected, better-organized, better-accessed, and better-preserved information, knowledge, and cultural heritage. Our ultimate end-goal, however, is attained only when that information, knowledge, and cultural heritage is better used by our users. Not simply better accessed, but better used in the sense that a person gets to leverage it to succeed in whatever endeavor they are pursuing, whether career success, advanced education, personal fulfillment, or private business growth. In my opinion, that is the true impact LAM institutions should aim at. If that kind of impact were a destination, cost-effectiveness would simply be one mode of transportation: a preferred one, maybe, but not comparable to the destination itself in importance.

But what does "better used" exactly mean? "Integrated into people's workflow" is a hint; "unanticipated use" is another clue. If you are like me and need to create and design that kind of integrated or unanticipated use at your library, archive, or museum, how will you go about it? This is the same question we ask over and over again: how do you plan and implement innovation? Yes, we will go talk to our users, ask what they would like to see, meet with our stakeholders to find out their interests and concerns, discuss among ourselves what we can do to deliver the things our users want, and go from there to another wonderful project we work hard on. Then, after all that, we reach a stage where we stop and wonder where that "greater social impact" went in almost all our projects. And we frantically look for numbers: how many people accessed what we created? How many downloads? What does the satisfaction survey say?

In those moments, how does the "impact" verbiage help us? How does it help us chart our actual path to creating and maximizing social impact any better than the old-fashioned "ROI" verbiage? At least ROI is quantifiable and measurable. This, I believe, is why we need a more concrete heuristic to translate the lofty "impact" into everyday "actions" we can take. It need not be so specific as to dictate exactly what those actions are at each project level, but it should be specific enough to let us frame the value we attempt to create and deliver at our LAM institutions beyond cost-effectiveness.

I think the heuristic we need is the conversion of need to demand. What is an untapped need that people are not even aware of in the realm of information, knowledge, and cultural heritage? When we can identify any such need in a specific form and successfully convert that need to a demand, we make an impact. By "demand," I mean the kind of user experience that people will desire and subsequently fulfill by using the object, resource, tool, or service we create at our library, archive, or museum. (One good example of such desirable UX that comes to my mind is NYPL Photo Booth.) When we create a demand out of such an untapped need, and when the fulfillment of that demand effectively creates, strengthens, and enriches our society in the direction of information, knowledge, evidence-based decisions, and truth being more valued, promoted, and equitably shared, I think we get to maximize our social impact.

In the last "Going Forward" panel, where information discovery was discussed, Loretta Parham pointed out that in the corporate sector, information finds consumers, not the other way around. By contrast, we (by which I mean all of us working at LAM institutions) still frame our value in terms of helping and supporting users in accessing and using our materials, resources, and physical and digital objects and tools. This is a mistake, in my opinion, because it is a self-limiting value proposition for libraries, archives, and museums.

What is the point of LAM institutions working so hard to get the public to use their resources and services? The end goal is to maximize our social impact through such use. The rhetoric of "helping and supporting people to access and use our resources" does not adequately convey that. Businesses want their clients to use their goods and services, of course, but their real target is the profit made from those uses, aka purchases.

Similarly, but far more importantly, the real goal of libraries, archives, and museums is to move society forward, closer to a state where knowledge, evidence-based decisions, and truth are more valued, promoted, and equitably shared. One person at a time, yes, but with an ultimate goal reaching far beyond individuals. The end goal is maximizing our impact on this side of the public good.


ACRL TechConnect: Got an interest? There’s a group for that: A DLF Group Primer for the 2017 DFL Forum

planet code4lib - Wed, 2017-10-18 15:14

The 2017 Digital Library Federation (DLF) Forum will take place October 23-25 in Pittsburgh, and throughout the program there are multiple opportunities to interact with several of the DLF Groups. For those who are new to DLF, or have never been to a Forum before, it may be hard to know what to expect or how these Groups are different from other associations’ interest groups or committees.

It can be helpful to remember that DLF is an institutional member organization. You don’t need a personal membership to belong to a working group of DLF. Actually, you don’t even need to belong to an institution to sign up to work with a group. DLF practices a very welcoming and inclusive approach to community. Membership does grant discounts on the Forum or other programs, like the eResearch Network, but more importantly, it signals an institution’s commitment to the work that DLF supports and coordinates – such as these groups.

DLF’s groups are not just interest groups or working groups. They are essentially communities that drive a conversation around a topic, or have a particular focus, and usually have some kind of an output. Here is the current list of active groups, with a brief description from their website – those that have programming at this year’s Forum are noted with an asterisk:

  • DLF Assessment Interest Group*
    • The DLF Assessment Interest Group (DLF AIG) was formed in 2014 as an informal interest group within the larger DLF community. The group meets during the DLF Forum to share problems, ideas, and solutions [related to digital library assessment]. The group also has a dedicated Google Group, DLF-supported wiki, and project documentation available in the Open Science Framework.
  • DLF Digital Library Pedagogy Group*
    • The DLF Digital Library Pedagogy group is an informal community within the larger DLF community that was formed thanks to practitioner interest following the 2015 DLF Forum. The group, which has a dedicated Google Group, is open to anyone interested in learning about or collaborating on digital library pedagogy.
  • DLF eResearch Network
    • The DLF eResearch Network brings together teams from research-supporting libraries to strengthen and advance their data services and digital scholarship roles within their organizations. The core of the 2017 network is a working curriculum that guides participants through 6 monthly webinars that address current topics and strategic methods for supporting and facilitating data services and digital scholarship locally.
  • DLF Forum Mentors*
    • DLF has created a new framework for establishing mentoring relationships among our community members, centered on face-to-face interaction at our annual Forum. The program is meant to be lightweight, collegial, and mostly focused on the annual DLF Forum.
  • DLF Interest Group on Records Transparency/Accountability*
    • A new DLF interest group oriented around the topic of records transparency, open data, and accountability.
  • DLF Liberal Arts Colleges*
    • In 2015, a volunteer planning committee from within our Liberal Arts College community organized the first one-day Liberal Arts Colleges Pre-conference, created specifically for those who work with digital libraries and/or digital scholarship at teaching-focused institutions and held before the DLF Forum in Vancouver. Both this event and the one that followed in Milwaukee (2016) were huge successes, including concurrent sessions of presentations and panels on pedagogical, organizational, and technological approaches to the digital humanities and digital scholarship, data curation, digital collections, and digital preservation.
  • DLF Museums Cohort
    • All DLF practitioners with museum interests or who engage in college and university museum-based projects are welcome to join. Likewise, current DLF member institutions with museums, galleries, and museum libraries are invited to participate in Museums Cohort conversations.
  • DLF Project Managers Group*
    • The DLF Project Managers group is an informal community within the larger DLF community. They meet at the annual DLF Forum and also have a dedicated listserv. The DLF PM Group was formed in 2008 to acknowledge the intersection of the discipline of project management and library technology. The group provides a forum for sharing project management methodologies and tools, alongside broader discussions that consider issues such as portfolio management and cross-organizational communication. The group also maintains an eye towards keeping pace with the dynamic digital library landscape, by bringing new and evolving project management practices to the attention and mutual benefit of our colleagues.
  • DLF Working Group on Labor in Digital Libraries*
    • A new DLF group, looking for all levels of commitment, from willingness to be a co-leader of the Working Group to dropping in to point out a good article/blog post/someone-doing-this-already we may not have seen. A Google Group is used for coordination of meetings and work.
  • Linked Open Data
  • Born-Digital Access Group*
  • DLF Metadata Support Group*
    • Metadata is hard. The Metadata Support Group aims to help. This is a place to share resources, strategies for working through some common metadata conundrums, and reassurances that you’re not the only one who has no idea how that happened. If you’re coming here with a problem, we hope you’ll find a solution or a strategy to move you toward one!

These groups are excellent ways to learn more about a topic, contribute to problem-solving strategies, and network with others who share your interests. As you can see, some of these groups have been around for nearly a decade, while others just started this year. Several groups have also been sunsetted, reflecting the strength of DLF groups as responsive, current communities driven by need and interest.

If you are at the 2017 Forum, consider learning more by joining a group’s working lunch or presentation. And remember, these groups are based on need and interest: if you don’t see your passion reflected in the current DLF community, consider proposing something!

Code4Lib Journal: Editorial: The Economics of Not Being an Organization

planet code4lib - Wed, 2017-10-18 15:00
Our successes have caught up with us. Now we get to choose the next step in our evolution.

Code4Lib Journal: Usability Analysis of the Big Ten Academic Alliance Geoportal: Findings and Recommendations for Improvement of the User Experience

planet code4lib - Wed, 2017-10-18 15:00
The Big Ten Academic Alliance (BTAA) Geospatial Data Project is a collaboration among twelve member institutions of the consortium that works to provide discoverability of and access to geospatial data, scanned maps, and web mapping services. Usability tests and heuristic evaluations were chosen as methods of evaluation, as they have long been used to measure and manage website engagement and are essential to iterative design. The BTAA project hopes to give back to the community by publishing the results of our usability findings, with the hope that they will benefit other portals built with GeoBlacklight.
