Feed aggregator

FOSS4Lib Recent Releases: pycounter - 0.15.1

planet code4lib - Fri, 2016-09-09 12:12

Last updated September 9, 2016. Created by wooble on September 9, 2016.

Package: pycounter
Release Date: Wednesday, August 31, 2016

Open Knowledge Foundation: OpenSpending collaborates with Mexico’s Ministry of Finance to standardise and visualise government budget data

planet code4lib - Fri, 2016-09-09 09:30


On September 8, 2016, Mexico became the first country to formally adopt the Open Fiscal Data Package, an international open data standard promoted by the Global Initiative for Fiscal Transparency (GIFT), in collaboration with Open Knowledge International and the World Bank, with the support of Omidyar Network. This collaboration is a pioneering step for publishing fiscal information in open formats. Mexico has set an example for the OpenSpending community, which intends to make use of the Open Fiscal Data Package and the new tools.

The announcement was made during an event hosted by the Ministry of Finance of Mexico to present the Executive’s Budget Proposal for 2017. The Ministry also revealed that it published the 2008-2016 Federal Budget on its website. The data was prepared using the OpenSpending Viewer, a tool which allows users to upload and analyze data and create visualizations.

One of Open Knowledge International’s core projects is OpenSpending, a free and open platform looking to track and analyse public fiscal information globally. The OpenSpending community is made up of citizens, organisations and government partners interested in using and sharing public fiscal data like government budget and spending information. The OpenSpending project is also involved in the creation of tools and standards to ensure this public information is more comparable and useful for a wide range of users.

For the past few months, OpenSpending, in collaboration with the Global Initiative for Fiscal Transparency and WB-BOOST initiative team, has been working with the Ministry of Finance of Mexico to pilot the OpenSpending tools and the Open Fiscal Data Package (OFDP). The OFDP powers the new version of the OpenSpending tools used to publish Mexico’s Federal Budget data. The OFDP helps make data releases more comparable and useful.

The data package, embedded on the Ministry of Finance’s web page, enables users to analyse the 2008-2016 budget, to create visualizations on all or selected spending sectors, and to share their personalized visualizations. All data is available for download in open format, while the API allows users to create their own apps based on this data.

Explore the visualization here.

In the next few months, the OpenSpending team will pilot the OFDP specification in a number of other countries. The specification and the OpenSpending tools are free and available for any interested stakeholder to use. To find out more, get in touch with us on the discussion forum.

Upload financial data, browse datasets and learn more about public finances from around the world by visiting OpenSpending – let’s work together to build the world’s largest fiscal data repository.

Ed Summers: Practice Theory

planet code4lib - Fri, 2016-09-09 04:00

This semester I’m going to be doing an independent study with Professor Andrea Wiggins to research and apply Practice Theory in my own work. I’ve included part of my research proposal (which is still a bit in flux at the moment) below. The plan is to use this web space to write up notes about my reading and work as I go. I thought I’d share it in case it shows up in your feed reader–hey I’m told people still use them. Writing here also puts a little bit of pressure on myself to stick to the plan as best I can. I’ll tag all the posts with practice.


The purpose of this independent study is for me to perform a detailed analysis of interview transcripts obtained during a previous study of appraisal practices among web archivists. The goal is to use practice theory as a critical lens for coding the transcripts. Weekly readings will introduce the field of practice theory. Each week I will write up brief summaries of the readings on my blog. The data analysis will be packaged up and results will be written up as a paper that could serve as an initial draft submission for a conference or a journal.


For the past year I have been researching how Web archivists go about doing their work to better understand existing and potential designs for Web archiving technology. Specifically, I’ve been interested in how archivists decide what to collect from the Web, and how they perform these actions using automated agents (software). The goal in this work is to help inform the design of archival systems for collecting and preserving Web content.

While the archivist’s work is sometimes guided by institutional collection development policies, not all organizations have them; and even when they do, considerable interpretive work often needs to be done when putting these policies into practice. The situation is further complicated by the fact that the tools available to the archivist for Web archiving work, and the material of the Web itself, are changing rapidly. This churn makes it very difficult to add specific detail to collection development policies without it quickly becoming out of date. By necessity they must remain at a fairly high level, which leaves the archivist with quite a bit of room for experimentation and practice. The practice of Web archiving is relatively young compared with the longer established archival science. As a result there remain large questions about how the materiality of the Web, and the tools for working with it, impact archival science at a theoretical and practical level.

Existing survey work done by the International Internet Preservation Consortium (Marill, Boyko, & Ashenfelder, 2004) and the National Digital Stewardship Alliance (Bailey et al., 2013; NDSA, 2012) described high level characteristics of Web archives work, particularly at the level of national libraries and universities. However, these surveys intentionally did not provide a very rich picture of the day to day work of Web archivists. In order to better understand how these appraisal decisions are being enacted in Web archives I decided to conduct a series of unstructured interviews with active Web archivists to see what common themes and interaction patterns emerged from descriptions of their work. I employed a grounded theory research methodology which allowed me to explore theoretical perspectives that emerged during iterative data collection and analysis. Through coding of my field notes I was able to observe a set of high level themes, which I reported on in a paper I will be presenting at CSCW 2017.

One overarching theme that emerged in this work was the ways in which the archivists and their software agents both worked together to produce the Web archive. I became increasingly interested in ways of viewing this interaction, which led me to reflect on the use of sociotechnical theory as a possible lens for further analysis of the interviews. After some consultation with Professor Wiggins I decided to spend some time exploring the sociotechnical theory literature in order to build a list of readings and a work plan for taking another look at my interview data using a sociotechnical theoretical lens, and more detailed coding of the actual interview transcripts.

I found an excellent overview from Sawyer & Jarrahi (2014) about the application of sociotechnical theory in Information Systems. This led to the realization that sociotechnical theory, while seemingly narrow, was in fact a large intellectual space that had many different branches and connections into IS and ICT. In fact it felt like such a broad area that I wouldn’t have time to thoroughly review the literature while also doing data analysis and writing.

In order to further refine my focus I decided to read Geiger (2015) and Ford (2015), two recent dissertations that have looked at Wikipedia as a sociotechnical system. I was drawn to their work because of the parallels between studying a collaboratively built encyclopedia and studying archives of Web content. Both Geiger and Ford examine a medium or artifact that predates the Internet, the encyclopedia, but which has subsequently been transformed by the emergence of the Web as a sociotechnical artifact. Their ethnographic approach led them to the use of participant observation as a method, which aligned nicely with the first phase of my study. While there were certainly theoretical angles (the study of algorithms) that I could draw on, increasingly it was their focus on participation that I found compelling for my own work.

Last spring Cliff Lampe came to UMD to give a talk about citizen interaction design. While describing his work Lampe stressed what he saw as a turn toward practice in the HCI community. He recommended a series of resources for further exploration of the subject, including Kuutti & Bannon (2014). Since I had been having some difficulty in focusing my exploration of sociotechnical theory, and Ford and Geiger also seemed to point towards the importance of practice in their ethnographic work, I decided to focus my independent study on three texts that came up many times in the literature I reviewed. I wanted to read books instead of articles because it seemed like a broad and deep area that would benefit from a few deep dives rather than a survey approach to the literature.


Nicolini, D. (2012). Practice theory, work, and organization: An introduction. Oxford University Press.

This text was recommended by Kuutti & Bannon (2014) for providing an overview of the field of practice theory, and its theoretical and philosophical foundations in phenomenology, ethnomethodology and activity theory. I’m hopeful that this text will provide a useful and current picture of the field, which can be useful in diving off into other readings later in the semester.

Dourish, P. (2004). Where the Action Is: The Foundations of Embodied Interaction. MIT Press.

Dourish is a heavily cited figure in HCI and sociotechnical literature. Where the Action Is in particular helped establish the theoretical foundations for incorporating social practices into system design. I’m particularly interested in how Dourish grounds HCI in the philosophical work of Heidegger and Wittgenstein. It’s arguable whether Dourish belongs in the practice theory camp; I guess I’ll know more after reading this book. I really wanted to make sure I connected the dots between practice theory and information technology.

Suchman, L. (1986). Plans and situated actions. Cambridge University Press.

This book by Suchman is constantly referenced in HCI literature as helping to establish a theoretical focus on the social and material properties of computer systems. As an anthropologist, her use of ethnographic analysis is of particular interest to me. I wanted to read it first hand instead of just citing it as a touchstone.


I’ve left some breathing room in the reading schedule near the end of the semester to allow for additional reading encountered during the reading of the main texts, and also for additional ideas from Professor Wiggins. I also wanted to leave time for coding, analysis and writing since the goal of this independent study is a paper.

Week 1

Nicolini, chapters 1-3

Week 2

Nicolini, chapters 4-5

Week 3

Nicolini, chapters 6-7

Week 4

Nicolini, chapters 8-9

Week 5

Dourish, chapters 1-2


Week 6

Dourish, chapters 3-4


Week 7

Dourish, chapters 5-7


Week 8

Suchman, chapters 1-4


Week 9

Suchman, chapters 5-8

Paper Outline

Week 10

Data Analysis

Week 11

Data Analysis


Week 12


Week 13


Week 14


Week 15

Final paper due.


Bailey, J., Grotke, A., Hanna, K., Hartman, C., McCain, E., Moffatt, C., & Taylor, N. (2013). Web archiving in the United States: A 2013 survey. National Digital Stewardship Alliance. Retrieved from

Ford, H. (2015). Fact factories: Wikipedia and the power to represent (PhD thesis). University of Oxford. Retrieved from

Geiger, R. S. (2015). Robots.txt: An ethnographic investigation of automated software agents in user-generated content platforms (PhD thesis). University of California at Berkeley.

Kuutti, K., & Bannon, L. J. (2014). The turn to practice in HCI: Towards a research agenda. In Proceedings of the 32nd annual ACM Conference on Human Factors in Computing Systems (pp. 3543–3552). Association for Computing Machinery. Retrieved from

Marill, J., Boyko, A., & Ashenfelder, M. (2004). Web harvesting survey. International Internet Preservation Consortium. Retrieved from

NDSA. (2012). Web archiving survey report. National Digital Stewardship Alliance. Retrieved from

Sawyer, S., & Jarrahi, M. H. (2014). Sociotechnical approaches to the study of information systems. In A. Tucker & H. Topi (Eds.), CRC handbook of computing. Chapman & Hall.

Galen Charlton: A small thought on library and tech unions in light of a lockout

planet code4lib - Fri, 2016-09-09 01:12

I’ve never been a member of a union. Computer programmers — and IT workers in general — in the U.S. are mostly unorganized. Not only that, they tend to resist unions, even though banding together would be a good idea.

It’s not necessarily a matter of pay, at least not at the moment: many IT workers have decent to excellent salaries. Of course not all do, and there are an increasing number of IT job categories that are becoming commoditized. Working conditions at a lot of IT shops are another matter: the very long hours that many programmers and sysadmins work are not healthy, but it can be very hard to be the first person in the office to leave at a reasonable quitting time each day.

There are other reasons to be part of a union as an IT worker. Consider one of the points in the ACM code of ethics: “Respect the privacy of others.” Do you have a qualm about writing a web tracker? It can be hard to push back all by yourself against a management imperative to do so. A union can provide power and cover: what you can’t resist singly, a union might help forestall.

The various library software firms I’ve worked for have not been exceptions: no unions. At the moment, I’m also distinctly on the management side of the table.

Assuming good health, I can reasonably expect to spend another few decades working, and may well switch from management to labor and back again — IT work is squishy like that. Either way, I’ll benefit from the work — and blood, and lives — of union workers and organizers past and future. (Hello, upcoming weekend! You are literally the least of the good things that unions have given me!)

I may well find myself (or more likely, people representing me) bargaining hard with or against a union. And that’s fine.

However, if I find myself sitting, figuratively or literally, on the management side of a negotiation table, I hope that I never lose sight of this: the union has a right to exist.

Unfortunately, the U.S. has a long history of management and owners rejecting that premise, and doing their level best to break unions or prevent them from forming.

The Long Island University Faculty Federation, which represents the full time and adjunct faculty at the Brooklyn campus of LIU, holds a distinction: it was the first union to negotiate a collective bargaining agreement for faculty at a private university in the U.S.

Forty-four years later, the administration of LIU Brooklyn seems determined to break LIUFF, and has locked out the faculty. Worse, LIU has elected not to continue the health insurance of the LIUFF members. I have only one word for that tactic: it is an obscenity.

As an aside, this came to my attention last week largely because I follow LIU librarian and LIUFF secretary Emily Drabinski on Twitter. If you want to know what’s going on with the lockout, follow her blog and Twitter account as well as the #LIUlockout hashtag.

I don’t pretend that I have a full command of all of the issues under discussion between the university and the union, but I’ve read enough to be rather dubious that the university is presently acting in good faith. There’s plenty of precedent for university faculty unions to work without contracts while negotiations continue; LIU could do the same.

Remember, the union has a right to exist. Applies to LIUFF, to libraries, and hopefully in time, to more IT shops.

If you agree with me that lockouts are wrong, please consider joining me in donating to the solidarity fund for the benefit of LIUFF members run by the American Federation of Teachers.

DuraSpace News: TRY IT OUT: DSpace 6.0 Release Candidate #3 Available

planet code4lib - Fri, 2016-09-09 00:00

From Tim Donohue, DSpace tech lead, on behalf of the DSpace committers team

Austin, TX  The third release candidate of 6.0 is now available for download and testing. 6.0-RC3 (Release Candidate #3) is a pre-release of 6.0, and we hope that the 6.0 final release will follow closely in its footsteps.

LITA: Social Media For My Institution – a new LITA web course

planet code4lib - Thu, 2016-09-08 20:29

Social Media For My Institution: from “mine” to “ours”

Instructor: Dr. Plamen Miltenoff
Wednesdays, 9/21/2016 – 10/12/2016
Blended format web course

Register Online, page arranged by session date (login required)

This course is for librarians who want to explore the institutional application of social media. It is based on the established academic course “Social Media in Global Context” at St. Cloud State University (more information at ). A theoretical introduction will assist participants to detect and differentiate the private use of social media from the structured approach to social media for an educational institution. Legal and ethical issues will be discussed, including future trends and management issues. The course will include hands-on exercises on the creation and dissemination of textual and multimedia content and on patron engagement. Participants will brainstorm strategies suitable for their institution regarding resources (human and technological), workload sharing, storytelling, and branding.

This is a blended format web course:

The course will be delivered as 4 separate live webinar lectures, one per week on Wednesdays, September 21, 28, October 5, and 12 at 2pm Central. You do not have to attend the live lectures in order to participate. The webinars will be recorded and distributed through the web course platform, Moodle, for asynchronous participation. The web course space will also contain the exercises and discussions for the course.

Details here and Registration here


By the end of this class, participants will be able to:

  • Move from personal use of social media (SM) to contemplating an institutional approach
  • Gain hands-on experience with finding and selecting multimedia resources and applying them to the branding of the institution
  • Acquire the foundational structure of the elements which constitute meaningful institutional social media

Dr. Plamen Miltenoff is an information specialist and Professor at St. Cloud State University. His education includes several graduate degrees in history, Library and Information Science, and education. His professional interests encompass social Web development and design, gaming and gamification environments. For more information see

And don’t miss other upcoming LITA fall continuing education offerings:

Online Productivity Tools: Smart Shortcuts and Clever Tricks
Presenter: Jaclyn McKewan
Tuesday September 20, 2016
11:00 am – 12:30 pm Central Time
Register Online, page arranged by session date (login required)

Questions or Comments?

For questions or comments, contact LITA at (312) 280-4268 or Mark Beatty,

FOSS4Lib Recent Releases: veraPDF - 0.22

planet code4lib - Thu, 2016-09-08 19:16

Last updated September 8, 2016. Created by Peter Murray on September 8, 2016.

Package: veraPDF
Release Date: Wednesday, September 7, 2016

SearchHub: Third Annual Solr Developer Survey

planet code4lib - Wed, 2016-09-07 17:03

It’s that time of the year again – time for our third annual survey of the Solr marketplace and ecosystem. Every day, we hear from organizations looking to hire Solr talent. Recruiters want to know how to find and hire the right developers and engineers, and how to compensate them accordingly.

Lucidworks is conducting our annual global survey of Solr professionals to better understand how engineers and developers at all levels of experience can take advantage of the growth of the Solr ecosystem – and how they are using Solr to build amazing search applications.

This survey will take about 2 minutes to complete. Responses are anonymized and confidential. Once our survey and research is completed, we’ll share the results with you and the Solr community.

As a thank you for your participation, you’ll be entered in a drawing to win one of our blue SOLR t-shirts plus copies of the popular books Taming Text and Solr in Action. Be sure to include your t-shirt size in the questionnaire.

Take the survey today

Past survey results: 2015, 2014


ACRL TechConnect: A High-Level Look at an ILS Migration

planet code4lib - Wed, 2016-09-07 16:00

My library recently performed that most miraculous of feats—a full transition from one integrated library system to another, specifically Innovative’s Millennium to the open source Koha (supported by ByWater Solutions). We were prompted to migrate by Millennium’s approaching end-of-life and a desire to move to a more open system where we feel in greater control of our data. I’m sure many librarians have been through ILS migrations, and plenty has been written about them, but as this was my first I wanted to reflect upon the process. If you’re considering changing your ILS, or if you work in another area of librarianship & wonder how a migration looks from the systems end, I hope this post holds some value for you.


No migration is without its problems. For starters, certain pieces of data in our old ILS weren’t accessible in any meaningful format. While Millennium has a robust “Create Lists” feature for querying & exporting different types of records (patron, bibliographic, vendor, etc.), it does not expose certain types of information. We couldn’t find a way to export detailed fines information, only a lump sum for each patron. To help with this post-migration, we saved an email listing of all itemized fines that we can refer to later. The email is saved as a shared Google Doc which allows circulation staff to comment on it as fines are resolved.

We also discovered that patron checkout history couldn’t be exported in bulk. While each patron can opt-in to a reading history & view it in the catalog, there’s no way for an administrator to download everyone’s history at once. As a solution, we kept our self-hosted Millennium instance running & can login to patrons’ accounts to retrieve their reading history upon request. Luckily, this feature wasn’t heavily used, so access to it hasn’t come up many times. We plan to keep our old, self-hosted ILS running for a year and then re-evaluate whether it’s prudent to shut it down, losing the data.

While some types of data simply couldn’t be exported, many more couldn’t emigrate in their exact same form. An ILS is a complicated piece of software, with many interdependent parts, and no two are going to represent concepts in the exact same way. To provide a concrete example: Millennium’s loan rules are based upon patron type & the item’s location, so a rule definition might resemble

  • a FACULTY patron can keep items from the MAIN SHELVES for four weeks & renew them once
  • a STUDENT patron can keep items from the MAIN SHELVES for two weeks & renew them two times

Koha, however, uses patron category & item type to determine loan rules, eschewing location as the pivotal attribute of an item. Neither implementation is wrong in any way; they both make sense, but are suited to slightly different situations. This difference necessitated completely reevaluating our item types, which didn’t previously affect loan rules. We had many, many item types because they were meant to represent the different media in our collection, not act as a hook for particular ILS functionality. Under the new system, our Associate Director of Libraries put copious work into reconfiguring & simplifying our types such that they would be compatible with our loan rules. This was a time-consuming process & it’s just one example of how a straightforward migration from one system to the next was impossible.
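To make the difference concrete, here is a minimal Python sketch of the two keying schemes; the rule values and item types are invented for illustration, not actual Millennium or Koha configuration:

    # Millennium-style rules: keyed by patron type and item *location*
    millennium_rules = {
        ("FACULTY", "MAIN SHELVES"): {"loan_weeks": 4, "renewals": 1},
        ("STUDENT", "MAIN SHELVES"): {"loan_weeks": 2, "renewals": 2},
    }

    # Koha-style rules: keyed by patron category and item *type*
    koha_rules = {
        ("FACULTY", "BOOK"): {"loan_weeks": 4, "renewals": 1},
        ("STUDENT", "BOOK"): {"loan_weeks": 2, "renewals": 2},
    }

    # Same policy intent, but a different item attribute does the work,
    # which is why the item types had to be rethought during the migration.
    print(millennium_rules[("FACULTY", "MAIN SHELVES")])
    print(koha_rules[("FACULTY", "BOOK")])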

While some data couldn’t be exported, and others needed extensive rethinking in the new ILS, there was also information that could only be migrated after much massaging. Our patron records were a good example: under Millennium, users logged in on an insecure HTTP page with their barcode & last name. Yikes. I know, I felt terrible about it, but integration with our campus authentication & upgrading to HTTPS were both additional costs that we couldn’t afford. Now, under Koha, we can use the campus CAS (a central authentication system) & HTTPS (yay!), but wait…we don’t have the usernames for any of our patrons. So I spent a while writing Python scripts to parse our patron data, attempting to extract usernames from institutional email addresses. A system administrator also helped us use unique identifying information (like phone number) to find potential patron matches in another campus database.
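As a rough illustration, the username extraction could look something like the sketch below; the domain, file names, and column names are assumptions, not the actual script:

    import csv
    import re

    INSTITUTIONAL_DOMAIN = "example.edu"  # hypothetical campus domain

    def username_from_email(email):
        """Return the local part of an institutional address, or None."""
        pattern = r"^([A-Za-z0-9._-]+)@" + re.escape(INSTITUTIONAL_DOMAIN) + r"$"
        match = re.match(pattern, email.strip(), re.IGNORECASE)
        return match.group(1).lower() if match else None

    # Assumes a patron export with 'barcode' and 'email' columns.
    with open("patrons.csv", newline="") as infile, \
         open("patron_usernames.csv", "w", newline="") as outfile:
        reader = csv.DictReader(infile)
        writer = csv.writer(outfile)
        writer.writerow(["barcode", "username"])
        for row in reader:
            username = username_from_email(row.get("email", ""))
            if username:
                writer.writerow([row["barcode"], username])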

A more amusing example of weird Millennium data was active holds, which are stored in a single field on item records & look like this:


Can you tell what’s going on here? With a little poking around in the system, it became apparent that letters like “NNB” stood for “date not needed by” & that other fields were identifiers connecting to patron & item records. So, once again, I wrote scripts to extract meaningful details from this silly format.
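The original field isn’t reproduced here, so the sketch below assumes an invented, simplified layout; the point is simply splitting labeled segments (like NNB for “not needed by”) out of one packed string:

    from datetime import datetime

    # Invented stand-in for the packed hold field; the real Millennium
    # format differed, but the idea is the same: one string holding a
    # patron identifier, an item identifier, and an NNB date.
    raw_hold = "P=p1234567;I=i7654321;NNB=2016-12-01"

    def parse_hold(field):
        """Split a packed hold string into a dict of labeled values."""
        parts = dict(segment.split("=", 1) for segment in field.split(";"))
        return {
            "patron_record": parts.get("P"),
            "item_record": parts.get("I"),
            "not_needed_by": (datetime.strptime(parts["NNB"], "%Y-%m-%d").date()
                              if "NNB" in parts else None),
        }

    print(parse_hold(raw_hold))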

I won’t lie, the data munging was some of the most enjoyable work of the migration. Maybe I’m weird, but it was both challenging & interesting as we were suddenly forced to dive deeper into our old system and understand more of its hideous internal organs, just as we were leaving it behind. The problem-solving & sleuthing were fun & distracted me from some of the more frustrating challenges detailed above.

Finally, while we had a migration server where we tested our data & staff played around for almost a month’s time, when it came to the final leap things didn’t quite work as expected. The CAS integration, which I had so anticipated, didn’t work immediately. We started bumping into errors we hadn’t seen on the migration server. Much of this is inevitable; it’s simply unrealistic to create a perfect replica of our live catalog. We cannot, for instance, host the migration server on the exact same domain, and while that seems like a trivial difference it does affect a few things. Luckily, we had few summer classes so there was time to suffer a few setbacks & now that our fall semester is about to begin, we’re in great shape.

Difference & Repetition

Koha is primarily used by public libraries, and as such we’ve run into a few areas where common academic library functions aren’t implemented in a familiar way or are unavailable. Often, it’s that our perspective is so heavily rooted in Millennium that we need to think differently to achieve the same effect in Koha. But sometimes it’s clear that what’s a concern to us isn’t to other libraries.

For instance, bib records for serials with large numbers of issues are an ongoing struggle for us. We have many print periodicals where we have extensive holdings, including bound editions of past issues. The holdings display in the catalog is more oriented towards recent periodicals & displaying whether the latest few issues have arrived yet. That’s fine for materials like newspapers or popular magazines with few back issues, and I’ve seen a few public libraries using Koha that have minimalistic periodical records intended only to point the patron to a certain shelf. However, we have complex holdings like “issues 1 through 10 are bound together, issue 11 is missing, issues 12 through 18 are held in a separate location…” Parsing the catalog record to determine if we have a certain issue, and where it might be, is quite challenging.

Another example of the public versus academic functions: there’s no “recall” feature per se in Koha, wherein a faculty member could retrieve an item they want to place on course reserve from a student. Instead, we have tried to simulate this feature with a mixture of adjustments to our loan rules & internal reports which show the status of contested items. Recall isn’t a huge feature & isn’t used all the time, so it’s not something we thought to research when selecting our new ILS, but it’s a great example of a minute difference that ended up creating a headache as we adapted to a new piece of software.

Moving from Millennium to Koha also meant we were shifting from a closed source system where we had to pay additional fees for limited API access to an open source system which boasts full read access to the database via its reporting feature. Koha’s open source nature has been perhaps the biggest boon for me during our migration. It’s very simple to look at the actual server-side code generating particular pages, or pull up specific rows in database tables, to see exactly what’s happening. In a black box ILS, everything we do is based on a vague adumbration of how we think the system operates. We can provide an input & record the output, but we’re never sure about edge cases or whether strange behavior is a bug or somehow intentional.

Koha has its share of bugs, I’ve discovered, but thankfully I’m able to jump right into the source code itself to determine what’s occurring. I’ve been able to diagnose problems by looking at open bug reports on Koha’s bugzilla tracker, pondering over perl code, and applying snippets of code from the Koha wiki or git repository. I’ve already submitted two bug patches, one of which has been pulled into the project. It’s empowering to be able to trace exactly what’s happening when troubleshooting & submit one’s own solution, or just a detailed bug report, for it. Whether or not a patch is the best way to fix an issue, being able to see precisely how the system works is deeply satisfying. It also makes it much easier for me to design JavaScript hacks that smooth over issues on the client side, be it in the staff-facing administrative functions or the public catalog.

What I Would Do Differently

Set clearer expectations.

We had Millennium for more than a decade. We invested substantial resources, both monetary & temporal, in customizing it to suit our tastes & unique collections. As we began testing the new ILS, the most common feedback from staff fell along the lines of “this isn’t like it was in Millennium”. I think that would have been a less common observation, or perhaps phrased more productively, if I’d made it clear that a) it’ll take time to customize our new ILS to the degree of the old one, and b) not everything will be or needs to be the same.

Most of the customization decisions were made years ago & were never revisited. We need to return to the reason why things were set up a certain way, then determine if that reason is still legitimate, and finally find a way to achieve the best possible result in the new system. Instead, it’s felt like the process was framed more as “how do we simulate our old ILS in the new one” which sets us up for disappointment & failure from the start. I think there’s a feeling that a new system should automatically be better, and it’s true that we’re gaining several new & useful features, but we’re also losing substantial Millennium-specific customization. It’s important to realize that just because everything is not optimal out of the box doesn’t mean we cannot discover even better solutions if we approach our problems in a new light.

Encourage experimentation, deny expertise.

Because I’m the Systems Librarian, staff naturally turn to me with their systems questions. Here’s a secret: I know very little about the ILS. Like them, I’m still learning, and what’s more I’m often unfamiliar with the particular quarters of the system where they spend large amounts of time. I don’t know what it’s like to check in books & process holds all day, but our circulation staff do. It’s been tough at times when staff seek my guidance & I’m far from able to help them. Instead, we all need to approach the ongoing migration as an exploration. If we’re not sure how something works, the best way is to research & test, then test again. While Koha’s manual is long & quite detailed, it cannot (& arguably should not, lest it grow to unreasonable lengths) specify every edge case that can possibly occur. The only way to know is to test & document, which we should have emphasized & encouraged more towards the start of the process.

To be fair, many staff had reasonable expectations & performed a lot of experiments. Still, I did not do a great job of facilitating either of those as a leader. That’s truly my job as Systems Librarian during this process; I’m not here merely to mold our data so it fits perfectly in the new system, I’m here to oversee the entire transition as a process that involves data, workflows, staff, and technology.

Take more time.

Initially, the ILS migration was such an enormous amount of work that it was not clear where to start. It felt as if, for a few months before our on-site training, we did little but sit around & await a whirlwind of busyness. I wish we had a better sense of the work we could have front-loaded such that we could focus efforts on other tasks later on. For example, we ended up deleting thousands of patron, item, and bibliographic records in an effort to “clean house” & not spend effort migrating data that was unneeded in the first place. We should have attacked that much earlier, and it might have obviated the need for some work. For instance, if in the course of cleaning up Millennium we delete invalid MARC records or eliminate obscure item types, those represent fewer problems encountered later in the migration process.


As we start our fall semester, I feel accomplished. We raced through this migration, beginning the initial stages only in April for a go-live date that would occur in June. I learned a lot & appreciated the challenge but also had one horrible epiphany: I’m still relatively young, and I hope to be in librarianship for a long time, so this is likely not the last ILS migration I’ll participate in. While that very thought gives me chills, I hope the lessons I’ve taken from this one will serve me well in the future.

LITA: LITA Personas Task Force Survey

planet code4lib - Wed, 2016-09-07 15:11

The LITA Personas Task Force seeks your help in developing personas in order to identify who is a natural fit for LITA. We invite everyone who works in the overlapping space between libraries and technology, whether or not you belong to LITA, to participate. This survey is designed to assess your needs and identify how you interact with LITA.

We anticipate this survey will take approximately 10 – 15 minutes to complete. Data will be gathered anonymously and kept confidential. You may be offered the opportunity to participate in a virtual interview at a later date. This is optional and will require you to provide your contact information if you are interested. Names and emails will not be associated with your survey responses. The Survey closes on Friday, Sept. 30th, 2016, so don’t delay!

If you have any questions regarding LITA personas, please contact either

Hong Ma at
Yoo Young Lee at

We thank you in advance for your time and support.

LITA Personas Task Force Members:

Callan Bignoli
Lynne Edgar
Eric Frierson
Isabel Gonzalez-Smith
Amanda L. Goodman
TJ Lamanna
Yoo Young Lee
Hong Ma
Frank Skornia
Nadaleen Tempelman-Kluit

LITA: The President’s Post – #1

planet code4lib - Wed, 2016-09-07 14:16

Hello fellow LITAns!  For those of you who don’t know me my name is Aimee Fifarek and I will be serving as your fearless leader for the coming year. I have been a LITA member since I joined ALA in 1997 when I started my first professional job as the Louisiana State University Libraries System Administrator.  It’s hard to believe nearly 20 years have passed since I was a baby librarian running NOTIS in a mainframe environment. So many people in LITA-land have helped me over the course of my career, and I am happy to be able to repay those favors, in part, by serving as your President.

My plan is to do monthly posts during my tenure to share information about what is happening at the LITA Board level and about new and upcoming initiatives.  Communication is always an issue with an organization of our size, and the wonkier bits of association business don’t always get communicated widely even though they are often news you can use.  Feel free to contact me – online or off – about anything LITA-related and I will do my best to respond in a timely fashion.

First, some old business, at least for me: committee appointments.  It’s what I spent my tenure as LITA VP doing and I’m happy to say appointments have been fully transitioned to our new VP Andromeda Yelton.  She has gotten off to an excellent start by coding an interface for the appointments database that she and her new Appointments Committee can use to manage all of those volunteer forms you submit.  Between the new committee and Andromeda’s app we are well on our way to defeating the traditional “black hole” nature of the appointments process.

Although it is tempting to think of Committee Appointments as an annual process, it really happens year round as people need to drop off committees for one reason or another or as new committees and task forces are formed.  If you are looking to get more involved with LITA, add some professional experience to your resume, or just want to give back, please do consider volunteering for a committee.  You get to meet new people, go in depth on issues and processes, and have the chance to make the Association that much better.  Check out the options on the LITA Committee Page and don’t be shy about letting us know about your prior experience and special skills.  The more info you put into the volunteer form the better we will be at matching you up with an excellent opportunity.

Speaking of new committees, did you know that as of last year LITA has a Diversity and Inclusion Committee?  With the volume of issues being discussed within the realm of technology in general and librarianship in particular, it was well past time for LITA to make a formal commitment to establishing Diversity as a fundamental principle of LITA.  Thanks to Carli Spina who has agreed to be the committee’s first chair and to Evvivia Weinraub for being the first Board Liaison.  Their work will be fundamental to the committee’s ongoing success.

Before I leave the topic of committees I’d just like to send a big thank you to Michelle Frisque and Margaret Heller, our newest Interest Group and Committee Chair Coordinators.  If you are not familiar with this role, these are the folks who make sure the IG and Committee Chairs get the info they need to have successful meetings throughout the year.  We are happy to have them on board.  I would be remiss if I didn’t thank the outgoing inhabitants of those roles, David Lee King and Lauren Pressley, who did an admirable job.

Now, onto some new business.  The first LITA Board Meeting is TODAY September 7th at 11am Pacific.  I encourage everyone to tune in at, and not just the fans of parliamentary procedure snafus (you know who you are!).  We will be discussing, and hopefully adopting, the new LITA Strategic Plan.  Once adopted, this document will stay as is over the next two years and help guide LITA’s activities, specifically helping us to decide how to spend our most valuable commodity:  our time.  The document has four major focus areas:  Member Engagement, Organizational Sustainability, Education and Professional Development, and Advocacy and Information Policy. You can check out the final draft of the Strategic Plan, along with a very preliminary draft of the tactical plan, at the ALA Connect Node 256917.

Advocacy and Information Policy is definitely a growth area for us, and we will be starting out in this plan with some baby steps.  Although LITA will always be the home for library technologists within ALA, we have to think critically about what LITA’s purpose is in a world where everyone does technology.  This strategic plan item formalizes the idea that, as the group that has been thinking about and working with technology for the longest time, we are in an excellent position to guide the development of policies surrounding technology for our libraries and our world.  In the coming year we will be working on building a closer relationship with the units within ALA that are currently working in this area, like the Office for Information Technology Policy. Our goal is not to duplicate efforts already being made, but rather to lend our expertise to the policy decisions that affect all of us.

So that’s my update for September.  But before I go I want to extend hearty thanks to Brianna Marshall who is stepping down as LITA’s first Blog Editor.  She did an amazing job assembling a team and creating policies to bring you the quality content you get regularly through the LITA Blog.  Being first at something is always a challenge and Brianna met that challenge head on.  She is leaving the Blog in the capable hands of Lindsay Cronk, who has big ideas of her own and has been most helpful to me in my first post.  Brianna and Lindsay are just two more examples of the dedication and expertise that has made LITA a great place to be for the last 50 years.  More on that in my next post.

— Aimee

DuraSpace News: VIVO Updates for Sept 4–Woods Hole, Tech Docs, Wiki Improvements, Modeling Fellowships

planet code4lib - Wed, 2016-09-07 00:00

Woods Hole VIVO launched  The Marine Biology Laboratory Woods Hole Oceanographic Institution in Woods Hole, Massachusetts, has a new VIVO, and it’s beautiful!  See  Congratulations to the Library at MBLWHOI for creating this wonderful new site!

DuraSpace News: NEW DEMO from the Hydra-in-a-Box Tech Team

planet code4lib - Wed, 2016-09-07 00:00

From Mike Giarlo, software architect, Stanford University Libraries, on behalf of the Hydra-in-a-Box tech team

Palo Alto, CA  Development on the Hydra-in-a-Box repository application continues, and here's our latest demo. Thanks to the Chemical Heritage Foundation and Indiana University for contributing to these sprints!

DPLA: DPLA Board Call: Thursday, September 15, 3:00 PM Eastern

planet code4lib - Tue, 2016-09-06 18:34

The next DPLA Board of Directors call is scheduled for Thursday, September 15 at 3:00 PM Eastern. Agenda and dial-in information is included below. This call is open to the public, except where noted.

  • [Public] Welcome, Denise and Mary
  • [Public] General updates from Executive Director
  • [Public] DPLAfest 2017
  • [Public] Questions/comments from the public
  • Executive Session to follow public portion of call

FOSS4Lib Recent Releases: Hydra - 10.3.0

planet code4lib - Tue, 2016-09-06 16:44

Last updated September 6, 2016. Created by Peter Murray on September 6, 2016.

Package: Hydra
Release Date: Friday, September 2, 2016

Islandora: Life on the Bleeding Edge: Why some Islandora sites run on HEAD

planet code4lib - Tue, 2016-09-06 16:00

'Running on HEAD,' in the Islandora context, means to run Islandora code from the actively developed 7.x branch in GitHub instead of running a stable release. 

Islandora has a twice-yearly release cycle, with a new version coming out roughly every April and October (in fact, we're getting started on the next one right now!). A lot of Islandora sites run on those releases, updating when the new code is out, or sticking with an older version until there's some new bug fix or feature they have to have. Others... don't wait for those bug fixes and features. They pick them up as soon as they are merged, by running Islandora on HEAD in production.

It's an approach that has a lot of benefits (and a few pitfalls to be aware of if you're considering it for your installation). I talked with a few members of the Islandora community who take the HEAD approach, to ask them why they're on the bleeding edge:

  • TJ Lewis is the COO of discoverygarden, Inc, which is the longest-running Islandora service company and has a deep history with the project. They are big supporters of running on HEAD and do so for most of their clients. 
  • Mark Jordan is the Head of Library Systems at Simon Fraser University, which has been running on HEAD since they launched Islandora in April 2016.
  • Jared Whiklo is a Developer with Digital Initiatives at the University of Manitoba Libraries, and has made the move to HEAD quite recently.
  • Jennifer Eustis is the Digital Repository Content Administrator at the University of Connecticut, which supports Islandora for several other institutions. They started running on HEAD in the Fall of 2014 and now operate on a quarterly maintenance schedule.
  • Brad Spry handles Infrastructure Architecture and Programming at the University of North Carolina at Charlotte and works his Islandora updates around official releases while still taking advantage of the fixes and improvements available from HEAD.

Why run Islandora on HEAD instead of using a release?

Main advantage: Getting bug fixes, features, and improvements faster, without having to resort to manual patching of release code. It's also handy when developing custom modules, since you do not have to worry about developing against an outdated version of Islandora. Running on HEAD puts you in a position to adopt security fixes more easily, without waiting for backports.

Islandora's GitHub workflows and integrated Travis testing mean that updates to the code are well reviewed and tested before being merged, making HEAD quite stable.

Bottom line: it's safe, it's useful, and it gets you fast access to the latest Islandora goodies.

What are the drawbacks?

Not running on releases can be perceived as more risky - with some justification. New features may not be entirely complete, as Brad found with the Solr-powered collection display introduced in a recent release:

We badly needed the ability to sort certain collections, like photographs, in the exact chronological order they were taken in. To do that, we determined we needed to sort the collection by mods_titleInfo_partNumber_int. The new Solr-powered collection display enables just that, the ability to sort collections using Solr fields. However, as powerful as the new Solr-powered collection display was for solving our photo collection sorting issues, it was not feature-parity with the SPARQL (Legacy) powered collection display. The Solr-powered display omitted collection description display on each collection, and our Archivists spotted the discrepancy immediately... Given the choice of Solr-powered photo collection sorting vs. informative collection descriptions, our Archivists chose collection descriptions. So we had to revert back to SPARQL (Legacy) until the Solr-powered collection description functionality was fully realized.

Adopting new features outside of a release may also mean adopting them without complete documentation, as Jennifer discovered:

Some new features are not always explained in ways that a non developer might understand especially in terms of the consequences of turning on a new module. For us, the batch report is a good example. The batch set report once enabled doesn't really provide information that our users understand. It does store a batch set report in a temporary directory. Because our content users are busy bees ingesting night and day, our batch queue kept growing resulting in a temporary directory that didn't have any more space. This meant we had to find a way to batch delete these set reports. We decided to disable this module. 

Dealing with these issues does have a bright side, as Jared noted: There is an inherent risk that code has not been fully tested for your use cases/setup and something may misbehave. With more experience, this means you can fix that issue and submit the fix back to the community to save someone else the problem.

What tools or tricks can you use to manage running on HEAD?

There are a variety of approaches to make running on HEAD safer and easier. At discoverygarden, TJ and his team manage everything with Puppet, to ensure consistent development environments and staging environments that mimic production, where they can develop and enable QA to happen prior to pushing to production. Brad uses a tool designed for managing Islandora releases, called Islandora Release Manager Helper Scripts, as a basis for his own scripts: one for backing up modules, another for removing them, and a third to install them again fresh from HEAD. Mark and his team at SFU run a bash command to switch to 7.x on all modules and run git pull.
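The SFU approach is described as a bash one-liner; a minimal Python equivalent of the same idea (the module path and branch name are assumptions) might look like:

    import os
    import subprocess

    MODULES_DIR = "/var/www/drupal/sites/all/modules"  # assumed location of module checkouts
    BRANCH = "7.x"

    for name in sorted(os.listdir(MODULES_DIR)):
        repo = os.path.join(MODULES_DIR, name)
        if not os.path.isdir(os.path.join(repo, ".git")):
            continue  # skip anything that isn't a git checkout
        print("Updating", name)
        subprocess.run(["git", "checkout", BRANCH], cwd=repo, check=True)
        subprocess.run(["git", "pull"], cwd=repo, check=True)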

At the University of Connecticut, where Jennifer is managing a multisite, she's found it helpful to create a common theme and module library shared by each of their Drupal instances. When they build and test new functionality independent of running on HEAD, they add those to their core module library that gets updated during their maintenance schedule. This also ensures that those extra modules work with the newest and best Islandora. They also have a dedicated maintenance schedule, complete with rigorous testing at development, staging, and production to ensure that they work out all of the wrinkles.

The University of Manitoba is too new to HEAD to have much procedure worked out, but Jared is working on some small scripts to ensure that backend and display instances of Islandora have all the same commit points for the various modules as they share a Fedora repository.
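A sketch of the kind of comparison script Jared describes could simply record each module's current commit so the two instances can be diffed; the path is an assumption:

    import os
    import subprocess

    MODULES_DIR = "/var/www/drupal/sites/all/modules"  # assumed path

    def module_commits(modules_dir):
        """Map each git-managed module directory to its current commit hash."""
        commits = {}
        for name in sorted(os.listdir(modules_dir)):
            repo = os.path.join(modules_dir, name)
            if os.path.isdir(os.path.join(repo, ".git")):
                result = subprocess.run(["git", "rev-parse", "HEAD"], cwd=repo,
                                        capture_output=True, text=True, check=True)
                commits[name] = result.stdout.strip()
        return commits

    # Run this on the backend and display servers and diff the output to
    # confirm both instances are on the same commit for every module.
    for module, commit in sorted(module_commits(MODULES_DIR).items()):
        print(module, commit)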

What do you think someone considering a move to HEAD should know before they begin? (or, what do you wish you had known?)

Brad: Learn how to watch modules on Github for changes. Learn how to file and watch issues on Islandora's JIRA. Join the Google Group, Islandora Interest Groups, and conference calls, especially the Committer's Call. Login to IRC (Editor's Note: #islandora irc channel on freenode). Participate as much as you can, build relationships, get to know people; they're human after all (at least most of them are :-) You'll want to have your finger on the pulse of Islandora. And you don't have to be a supreme programmer, you just need to be able to represent and effectively communicate your organization's needs and desires in order to make Islandora better.

Mark: It might be a bit riskier than running on releases, but Islandora development and bug fixing happens at such a fast pace that waiting for releases (even though they are frequent and well tested) seems more appropriate for a commercial product.

Jennifer: When we didn't run on HEAD and decided to run on HEAD, almost 2 years had passed. There had been so many changes. It was pretty much an entirely new Islandora that we were working on when we did our update and decided to 1st run at HEAD. The transition was an extremely difficult one for those who had to perform the update and users who had to get to know an entirely new system. Now that we're running at HEAD, these transitions are easier.

What do you need to know? We're still figuring that out but here are some in the middle thoughts:

  • Ensure your administration is on board with maintenance as being required.
  • Develop a maintenance schedule with a core team.
  • Determine if running on head also means updating Drupal, security patches, server applications, database patches or the like.
  • If you don't have a dedicated person in charge of monitoring when systems need patches, then think of a support contract.
  • Realize that things change and this is OK.
  • Test.
  • Get involved with the community. Try testing out the new releases as this will get you a first hand view of the changes.
  • Have fun. 

Jared: Open source code has no warranty, even code that has been “released” does not mean bug free. So a lot of the HEAD versus release discussion is about your comfort. I think your institution should have some comfort with the code; you might encounter a bug and being able to either correct it or help to diagnose it clearly means that it can be resolved, and (running on HEAD) you can update and benefit from those changes.

David Rosenthal: Memento at W3C

planet code4lib - Tue, 2016-09-06 15:00
Herbert van de Sompel's post at the W3C's blog Memento and the W3C announces that both the W3C's specifications and their Wiki now support Memento (RFC7089):
The Memento protocol is a straightforward extension of HTTP that adds a time dimension to the Web. It supports integrating live web resources, resources in versioning systems, and archived resources in web archives into an interoperable, distributed, machine-accessible versioning system for the entire web. The protocol is broadly supported by web archives. Recently, its use was recommended in the W3C Data on the Web Best Practices, when data versioning is concerned. But resource versioning systems have been slow to adopt. Hopefully, the investment made by the W3C will convince others to follow suit.This is a very significant step towards broad adoption of Memento. Below the fold, some details.

The specifications and the Wiki use different implementation techniques:
Memento support was added to the W3C Wiki pages by deploying the Memento Extension for MediaWiki. Memento support for W3C specifications was realized by installing a Generic TimeGate Server for which a handler was implemented that interfaces with the versioning capabilities offered by the W3C API.

Herbert, Harihar Shankar and Shawn M. Jones also have a much more detailed blog post covering many of the technical details, and the history leading up to this, starting in 2010 when:
Herbert Van de Sompel presented Memento as part of the Linked Data on the Web Workshop (LDOW) at WWW. The presentation was met with much enthusiasm. In fact, Sir Tim Berners-Lee stated "this is neat and there is a real need for it". Later, he met with Herbert to suggest that Memento could be used on the W3C site itself, specifically for time-based access to W3C specifications.

Even for its inventor, getting things to happen on the Web takes longer than it takes! They conclude by stressing the importance of Link headers, a point that relates to the Signposting proposal discussed in Signposting the Scholarly Web and Improving e-Journal Ingest (among other things):
Even though the W3C maintains the Apache server holding mementos and original resources, and LANL maintains the systems running the W3C TimeGate software, it is the relations within the Link headers that tie everything together. It is an excellent example of the harmony possible with meaningful Link headers. Memento allows users to negotiate in time with a single web standard, making web archives, semantic web resources, and now W3C specifications all accessible the same way. Memento provides a standard alternative to a series of implementation-specific approaches.

Both posts are well worth reading.
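To make the datetime negotiation concrete, here is a minimal sketch using Python's requests library; the TimeGate URI is hypothetical (in practice the original resource advertises its TimeGate via a Link header with rel="timegate"):

    import requests

    # Memento (RFC 7089) datetime negotiation: ask a TimeGate for the
    # version of a resource closest to a desired date.
    timegate = "https://example.org/timegate/https://www.w3.org/TR/webarch/"  # hypothetical
    headers = {"Accept-Datetime": "Tue, 01 Mar 2011 00:00:00 GMT"}

    response = requests.get(timegate, headers=headers)

    # The negotiated memento identifies itself and its relations in headers.
    print("Resolved URL:", response.url)
    print("Memento-Datetime:", response.headers.get("Memento-Datetime"))
    print("Link:", response.headers.get("Link"))  # rel="original", "timemap", "memento", ...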

LITA: There’s A (Digital) Outcome For That!

planet code4lib - Tue, 2016-09-06 15:00

The more I work with faculty and students on integrating new technologies such as 3D printing and virtual reality into the curriculum, the more I think about ways we can measure learning for non-Information Literacy related competencies.

How do we know that students know how to use a 3D printer successfully? How can we measure the learning that occurred when they designed a file for upload into a visualization software package? While the Association of College and Research Libraries (ACRL) has taken the lead on delineating national standards for Information Literacy, and more recently updated them to the Framework for Information Literacy, there isn’t quite as much information available about designing and assessing assignments that are less traditional than the ubiquitous 3-5 page research paper. I’m not sure that we will find one set of competencies to rule them all, simply because there are so many dimensions to these areas. In one seemingly straightforward activity such as creating an online presentation, you might have elements of visual literacy, creativity, and communication, to name a few. But it would be interesting to try, so here goes!

What might an actual competency look like? Measurable learning outcomes are structured similarly no matter what the context. They have to explain:

  1. What the learner is able to do
  2. How the learner does it
  3. To what degree of success

ACRL has a great tool for developing these types of outcomes.

Applying that to a digital competency might work like this. Students will be able to create effective online presentations utilizing various free web tools by:

  • Selecting appropriate images and visual media aligned with the presentation’s purpose
  • Integrating images into projects purposefully, considering meaning, aesthetic criteria, visual impact, and audience
  • Editing images as appropriate for quality, layout, and display (e.g., cropping, color, contrast)
  • Including textual information as needed to convey an image’s meaning (e.g., using captions, referencing figures in a text, incorporating keys or legends)
  • Adapting writing purpose, style, content and format to the appropriate digital context

A sample assignment that includes those competencies might be to create a 1-3 minute presentation on a given topic, consisting of the following elements:

  1. Must use one of these presentation tools
  2. Content must be relevant to the theme
  3. Visual design must contain at least 3-5 images or video elements. Color scheme, layout and overall design must be consistent with the guidelines mentioned above
  4. All material created by someone other than the student is given attribution in citations and used according to ethical and legal best practices

A rubric could then be developed to measure how well the presentation integrates the various elements involved:

Goal: Create effective online presentations utilizing various free web tools
Outcome: Select appropriate images and visual media aligned with the presentation’s purpose
Level 0 (does not meet competency): Visual elements do not lend any value to the content and there is no overarching purpose or structure to their inclusion
Level 1 (meets competency): Some images and media elements are integrated well into the presentation and align with its content and purpose
Level 2 (exceeds competency): Images and visual media significantly support the content presented and are effectively integrated into the overall presentation
Benchmark: At least 75% of students score a 1 or above

As we continue to forge new digital paths, we are constantly challenged to re-define the notions of instruction, authorship and intellectual property in our ever-shifting landscape of learning. I’m excited at the possibilities that digital literacy brings to student learning in this new environment, and I can only imagine the power and complexity these various assignments entail, and how much fun students (and faculty) would have in developing them.

Some additional standards to consider are:

Meredith Farkas: Choose your own professional involvement adventure

planet code4lib - Tue, 2016-09-06 14:52

Last month, I had lunch with two friends who are also in academia. We talked a lot about professional ambitions and “extracurricular” professional involvement. One of them is starting a new book and the other is thinking about doing consulting as a side-job. In every job I’ve had (even before librarianship), I’ve been focused on moving up in my career, whether that was new responsibilities, a promotion, or a job elsewhere, so I was always doing things that might help me get there. When asked about my current professional ambitions, I realized that I didn’t have any. Or, more accurately, I didn’t have any that were not related to my current job.

The fact is, I love my job. I love what I do every day. I love the people I work with in the library, the collegial atmosphere, and their dedication to the students and faculty here. I love the academic community I’m a part of at PCC. I feel a sense of fit that’s uncanny. My major professional ambitions now center around progressing in the work I’m doing, building stronger relationships with faculty, and doing work that really helps our students be successful.

Not having ambitions toward moving up or out has, at times, made me feel weirdly adrift, especially as someone who has always felt like I wasn’t doing enough in any area of my life. I was so engaged professionally over my first decade in the profession — starting with blogging and social media, then professional writing and national service. At Norwich, that kind of engagement wasn’t required, but I did it to connect with other wonderful librarians around the world, to support things I believed in, and to build a professional network. I did a lot of unorthodox things like creating Five Weeks to a Social Library and the ALA Unconference with some amazing partners-in-crime, because I wasn’t hamstrung by a specific vision of what being professionally involved should look like. All that helped me build the professional network I have today.

Then, at Portland State, I was on the tenure track, and was required to contribute to the profession. While there wasn’t a specific list of what we should or should not do to get tenure, the assumption was that ideal involvement included publishing peer-reviewed articles, presenting at major national conferences, and serving on state or national committees. I did all of those things and enjoyed some of what I did, but I kept asking myself what I really would do if I had the freedom to choose.

And then, suddenly, I did again. And it was hard to start saying no to opportunities because, for so long, that was what made me feel good about myself: speaking at conferences, getting published, etc. I based so much of my happiness and self-esteem on things that were not very meaningful in the big picture. And I was so focused on my career to the detriment of other aspects of my life. The past year has reminded me of what was important. This year has been soul-crushingly hard for me and my family, and I’m lucky that I could step away from a lot of my outside-of-work engagement without repercussions. I think we’re lucky to be in a profession where most librarians are understanding of people’s needs to step away and focus on their family/spouse/child/parent/health. We often have a more difficult time letting ourselves off the hook, I think. I’m working on that myself.

When I came to PCC, what I did stay engaged with was the Oregon Library Association. I love my service at the state level — the librarians in Oregon are so positive and passionate and have such an ethic of sharing and collaboration. They also are very open to new ideas, like when another librarian and I proposed creating a mentoring program. I’ve been administering the OLA mentoring program for the past three years (and this year we launched a resume review program!) and it has been really rewarding and fun. I’ve stepped away from my leadership role in the organization for the coming year, and I feel lucky that I can continue to contribute in a more limited capacity.

I have friends who are engaged professionally in many different ways. Some are loyal committee members in state, national, or international organizations. Some have taken on leadership roles in those organizations. Some are more focused on contributing to the profession through publishing and presenting. I have friends for whom writing is a passion and have published one or more books. I have friends who are annoyed by the poor quality of library research and want to produce more solid evidence-based literature. I have friends who are fantastic speakers and have engaged and inspired so many librarians by sharing their insights. Many do a combination of all these things. Some do big, visible, shiny things and others do vital work that will never get them national recognition. Some do just a little and others do more than seems possible for one person to do. The key is that they do what is a good fit for them; what makes them feel fulfilled. For many, professional involvement ebbs and flows at different points in their career, depending on other priorities. And that’s a good thing. We sometimes need to step back from things to focus on other priorities in our lives and we shouldn’t feel badly about that.

I also have friends who are not professionally involved beyond their day jobs. Many of them are active in other things, like service to their communities, and even if they’re not, that is a reasonable choice. I am involved in service to the profession because I find the work satisfying, not because I feel like it’s my obligation. Finding the things that make us happy in this life can be hard when we are bombarded with the expectations and assumptions of others. I feel like the past 12 years of my professional life have been spent trying to figure out what makes me happy, and untangling that from what I think will make others think well of me.

My advice to new librarians: ask yourself what makes you feel like a good librarian. What gives you satisfaction? Don’t feel like you have to follow the same path as your boss or someone you admire, or that you have to join the same organizations and serve on similar committees. Find your tribe. Find your happy place. The opportunities for connecting with other librarians and giving back to the profession are only limited by your imagination. If you don’t see the sort of thing you’d like to contribute to (a conference, a service, a publication, etc.), find some like-minded people and create it! I’ve seen so many librarians do just that. If you’re tenure track, you may have to do things that aren’t a perfect fit for you, but, even then, you can usually tailor your service to the profession to things that make you feel fulfilled. I’ve been on too many committees with people who contribute nothing and are clearly only there to say that they served on x committee. Service without engagement is meaningless.

Life is so short that spending time trying to fit a mould or live up to other people’s expectations seems like a tremendous waste of time and energy. Be the professional you want to be.

