
DuraSpace News: Welcome Erin Tripp: Business Development Manager for DuraSpace

planet code4lib - Wed, 2017-04-19 00:00

Austin, TX: DuraSpace is pleased to announce that Erin Tripp will join DuraSpace as the new Business Development Manager on May 1, 2017. Her duties will include pursuing new partnerships, grants, and strategies to grow business for the DuraSpace-hosted services DuraCloud, DSpaceDirect, and ArchivesDirect. Working with staff and our community, Mrs. Tripp will help cultivate new business opportunities for DuraSpace.

DuraSpace News: PARTICIPATE: Beyond the Repository Survey On Distributed Preservation

planet code4lib - Wed, 2017-04-19 00:00

From the "Beyond the Repository" Team: Laura Alagna, Carolyn Caizzi, Brendan Quinn, Sibyl Schaefer, Evviva Weinraub

District Dispatch: How to participate in #NLLD17 from home

planet code4lib - Tue, 2017-04-18 18:59

As library supporters from across the United States prepare to go to Washington, D.C. to participate in National Library Legislative Day, don’t forget that you can participate from home!

All week long (May 1-5th), we’re asking library supporters to email, call, and tweet their Members of Congress about federal library funding and other key library issues. Register now, and you will receive an email on May 1st reminding you to take action, along with a link to the livestream from National Library Legislative Day, so you can hear our keynote speaker and the issue briefings live.

This year’s keynote speaker will be Hina Shamsi, Director of the ACLU National Security Project, and the issue briefings will be provided by the staff of the ALA Washington Office. Check out our earlier post to see the full list of panels at National Library Legislative Day this year.

This year, we’re asking Congress to:

House: Save IMLS; Fully Fund LSTA & IAL
Senate: Sign LSTA & IAL “Dear Appropriator” Letters
House/Senate: Reauthorize MLSA (incl. LSTA)

We’ll have talking points and background information available on the Action Center starting May 1st, to help you craft your message. You can use the event tag #NLLD17 to join in the conversation.

Looking for other ways to participate? Facebook, Twitter and Tumblr users can sign up to participate in our Thunderclap.

Questions? Email llindle@alawash.org

The post How to participate in #NLLD17 from home appeared first on District Dispatch.

LITA: Universal Design for Libraries and Librarians a popular repeat LITA web course

planet code4lib - Tue, 2017-04-18 15:06

Don’t miss your chance to participate in the repeat of this popular web course. Register now for this LITA web course:

Universal Design for Libraries and Librarians

Instructors:

  • Holly Mabry, Digital Services Librarian, Gardner-Webb University; and
  • Jessica Olin, Director of the Library, Robert H. Parker Library, Wesley College

Offered May 15 to June 19, 2017.

A Moodle-based web course with asynchronous weekly content lessons, tutorials, assignments, and group discussions.

Register Online, page arranged by session date (login required)

Universal Design is the idea of designing products, places, and experiences to make them accessible to as broad a spectrum of people as possible, without requiring special modifications or adaptations. This course will present an overview of universal design as a historical movement, as a philosophy, and as an applicable set of tools. Students will learn about the diversity of experiences and capabilities that people have, including disabilities (e.g. physical, learning, cognitive, resulting from age and/or accident), cultural backgrounds, and other abilities. The class will also give students the opportunity to redesign specific products or environments to make them more universally accessible and usable.

Takeaways

By the end of this class, students will be able to…

  • Articulate the ethical, philosophical, and practical aspects of Universal Design as a method and movement – both in general and as it relates to their specific work and life circumstances
  • Demonstrate the specific pedagogical, ethical, and customer service benefits of using Universal Design principles to develop and recreate library spaces and services in order to make them more broadly accessible
  • Integrate the ideals and practicalities of Universal Design into library spaces and services via a continuous critique and evaluation cycle

Here’s the Course Page

Holly Mabry

Holly Mabry received her MLIS from UNC-Greensboro in 2009. She is currently the Digital Services Librarian at Gardner-Webb University where she manages the university’s institutional repository, and teaches the library’s for-credit online research skills course. Since finishing her MLIS, she has done several presentations at local and national library conferences on implementing universal design in libraries with a focus on accessibility for patrons with disabilities.

Jessica Olin

Jessica Olin is the Director of the Library, Robert H. Parker Library, Wesley College. Ms. Olin received her MLIS from Simmons College in 2003 and an MAEd, with a concentration in Adult Education, from Touro University International. Her first position in higher education was at Landmark College, a college that is specifically geared to meeting the unique needs of people with learning differences. While at Landmark, Ms. Olin learned about the ethical, theoretical, and practical aspects of universal design. She has since taught an undergraduate course for both the education and the entrepreneurship departments at Hiram College on the subject.

Dates:

May 15 – June 19, 2017

Costs:

  • LITA Member: $135
  • ALA Member: $195
  • Non-member: $260

Technical Requirements:

Moodle login info will be sent to registrants the week prior to the start date. The Moodle-developed course site will include weekly new content lessons and is composed of self-paced modules with facilitated interaction led by the instructor. Students regularly use the forum and chat room functions to facilitate their class participation. The course web site will be open for about a week prior to the start date for students to have access to Moodle instructions and set their browser correctly. The course site will remain open for 90 days after the end date for students to refer back to course material.

Registration Information:

Register Online, page arranged by session date (login required)
OR
Mail or fax form to ALA Registration
OR
call 1-800-545-2433 and press 5
OR
email registration@ala.org

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

Open Knowledge Foundation: Open Research event in Yaoundé, Cameroon

planet code4lib - Tue, 2017-04-18 09:02

This blog is part of the event report series on International Open Data Day 2017. On Saturday 4 March, groups from around the world organised over 300 events to celebrate, promote and spread the use of open data. 44 events received additional support through the Open Knowledge International mini-grants scheme, funded by SPARC, the Open Contracting Program of Hivos, Article 19, Hewlett Foundation and the UK Foreign & Commonwealth Office. This event was supported through the mini-grants scheme under the Open Research theme.

On 6th April 2017, I was finally able to organise an Open Research Data event in Yaoundé, Cameroon to train the young, next generation of social scientists on transparency and reproducibility tools to enhance the openness of their research. From a pool of 80 applicants, about 40 participants were carefully selected based on their gender, field of study, and previous knowledge of and interest in research replicability and openness.

In spite of the heavy rainfall that preceded the opening ceremony, about 30 participants from various Cameroonian universities, representing disciplines ranging from economics and political science to psychology, were able to attend the event. We were also lucky to have among the attendees four participants originally from Benin. The event kicked off with an introduction of the topics to be covered.

The first part of the presentation focused on sensitising participants to the different forms of academic research misconduct, with concrete examples of research falsification in economics and psychology over the last decade.

We also discussed the various types of academic research misconduct, such as publication bias, p-hacking, failure to replicate, unreproducible workflows, and the lack of sharing and openness in research. At the end of this first part of the workshop, a lively discussion arose among participants, especially about the difficulty young PhD students face in deviating from the traditional “hidden” and “non-sharing” behaviour inherited from their senior mentors.

Some attendees also mentioned bottlenecks in accessing data from National Statistical Offices (NIS), which are meant to be open and freely accessible to the academic research community, as one of the key impediments to pursuing their research. They also raised the difficulty they face in getting access even to publications (let alone raw or cleaned datasets) from their peers and colleagues.

The second half of the day centred on introducing participants to different solutions that could enhance the openness of their research, such as pre-registration, pre-analysis plans, data sharing, the construction of a reproducible and transparent workflow, dynamic documents, etc. An example of how to pre-register a Randomized Controlled Trial (RCT), or research undertaken with secondary data, was demonstrated using the American Economic Association (AEA) Social Science Registry as well as the Open Science Framework (OSF). A compelling presentation of what a Pre-Analysis Plan (PAP) is was given by the BITSS (Berkeley Initiative for Transparency in the Social Sciences) catalyst Dief Reagen Nochi Faha.

Dief Reagen Nochi Faha of Berkeley Initiative for Transparency in the Social Sciences leading a presentation.

After that, Stata do-files and R Markdown code, along with R, RStudio, and Stata 13 setup files, were distributed to participants with helpful assistance from Mr Cyrille Moise Touk and Mr Dief Reagen Nochi Faha.

The internet connection was a bit of a challenge, especially when it came to downloading some of the R packages needed to build dynamic documents in R (rmarkdown, foreign, stargazer, sandwich) and Stata (markdoc). The practical sessions, however, went very well, and almost all the participants were able to successfully run the code and produce their dynamic documents in either R or Stata.
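The workshop built its dynamic documents with R Markdown and Stata's markdoc, but the underlying idea is language-agnostic: narrative text and computed results live in one script, so rerunning it against updated data regenerates the report. A minimal sketch of the same idea in Python, with made-up numbers purely for illustration:

```python
# A dynamic document keeps narrative and computed results together,
# so rerunning the script regenerates the report from the raw data.
from statistics import mean, stdev

incomes = [410, 520, 390, 610, 480, 550]  # hypothetical survey data

report = f"""# Household income summary

Observations: {len(incomes)}
Mean income: {mean(incomes):.1f}
Std. deviation: {stdev(incomes):.1f}
"""

# Writing the rendered report out; in R Markdown this step is "knitting".
with open("report.md", "w") as fh:
    fh.write(report)

print(report)
```

If the numbers in `incomes` change, rerunning the script updates every figure in the report, which is exactly the reproducibility property the workshop emphasised.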

At the end of the workshop, students were encouraged to apply for the forthcoming OpenCon2017 conference to learn more about Scholarly Publishing and Altmetrics and also apply to the BITSS summer institute of UC Berkeley.

The views of two participants:

“I really wish I knew about all those bottlenecks to research openness (Publication bias, P-hacking, failure to replicate, unreproducible workflow, lack of data sharing and transparency) at the very beginning of my PhD, I would have been more cautious. However, now that the workshop has raised my awareness on the necessity to be more transparent and open in research, I could use the knowledge acquired to enhance the quality of my current and forthcoming publications.” – Mr Armand Mboutchouang Kountchou; Final year PhD Student in economics, University of Yaounde II-SOA and African Economic and Research Consortium (AERC)

“Research transparency, reproducibility and openness tools should be integrated into the academic curriculum of our universities from the undergraduate level. This could enable the next generation of African economic researchers to embrace a different path in order to enhance the credibility and quality of their research outputs.” – Mr Nochi Faha Dief Reagen; PhD Student in economics, University of Yaounde II-SOA and University of Rennes 1, France.

 

ACRL TechConnect: Hosting a Coding Challenge in the Library

planet code4lib - Mon, 2017-04-17 19:39

In Fall of 2016, the city of Los Angeles held a 2-week “Innovate LA” event intended to celebrate innovation and creativity within the LA region.  Dozens of organizations around Los Angeles held events during Innovate LA to showcase and provide resources for making, invention, and application development.  As part of this event, the library at California State University, Northridge developed and hosted two weeks of coding challenges, designed to introduce novice coders to basic development using existing tutorials. Coders were rewarded with digital badges distributed by the application Credly.

The primary organization of the events came out of the library’s Creative Media Studio, a space designed to facilitate audio and video production as well as experimentation with emerging technologies such as 3D printing and virtual reality.  Users can work with computers and recording equipment in the space, and can check out media production devices, such as camcorders, green screens, GoPros, and more.  Our aim was to provide a fun, very low-stress way to learn about coding, provide time for new coders to get hands-on help with coding tutorials, and generally celebrate how coding can be fun.  While anyone was welcome to join, our marketing efforts focused specifically on students, with coding challenges distributed daily throughout the Innovate LA period via Facebook.

The Challenges

The coding challenges were sourced from existing coding tutorial sites such as Free Code Camp, Learn Ruby, and Codecademy.  We wanted to offer a mix of front-end and server-side coding challenges, starting with HTML, CSS, and JavaScript and ramping up to PHP, Python, and Ruby.  We tested several free tutorials and chose those with the most straightforward instructions that provided immediate feedback on incorrect code. We also tried to keep the interfaces consistent, using Free Code Camp most frequently so participants could get used to the interface and focus on coding rather than on the tutorial mechanism itself.

Here’s a list of the challenges and their corresponding badges earned:

  • HTML Ninja: Say Hello to the HTML Elements; Headline with the H2 Element; Inform with the Paragraph Element
  • CSS Ninja: Change the Color of Text; Use CSS Selectors to Style Elements; Use a CSS Class to Style an Element
  • Bootstrapper: Use Responsive Design with Bootstrap Fluid Containers; Make Images Mobile Responsive; Center Text with Bootstrap
  • JavaScript Hacker: Comment your JavaScript Code; Declare JavaScript Variables; Storing Values with the Assignment Operator
  • jQuery Ninja: Learn how Script Tags and Document Ready Work; Target HTML Elements with Selectors Using jQuery; Target Elements by Class Using jQuery
  • HTML Master: Uncomment HTML; Comment out HTML; Fill in the Blank with Placeholder Text
  • CSS Master: Style Multiple Elements with a CSS Class; Change the Font Size of an Element; Set the Font Family of an Element
  • Bootstrap Master: Create a Bootstrap Button; Create a Block Element Bootstrap Button; Taste the Bootstrap Button Color Rainbow
  • JS Game Maker: Getting Started; Cat/Dog
  • jQuery Master: Target Elements by ID Using jQuery; Delete your jQuery Functions; Target the same element with multiple jQuery selectors
  • Python Ninja: Hello World; Variables and Types; Lists
  • PHP Ninja: Hello World; Variables and Types; Simple Arrays
  • Ruby Ninja: Hello World; Variables and Types; Math
  • API Ninja: How to Use APIs with JavaScript (complete through Step 9: Authentication and API Keys)
  • WikiWiz: Edit or create a Wikipedia page. You may join in at the Wikipedia Edit-a-thon or do the editing remotely. The Citation Hunt tool is a cool/easy way to go about editing a Wikipedia page; narrow it to a topic that interests you and make your edit.
  • 3D Designer: Create a 3D model for an original animated character. You may use TinkerCAD or Blender as free options, or feel free to use SolidWorks or AutoCAD if you are familiar with them. If you don’t know where to begin, TinkerCAD has step-by-step tutorials to bring your ideas to life.
  • VR Explorer: Get a selfie with a Google Cardboard or any virtual reality goggles

Note that the final three challenges – editing a Wikipedia page, creating a 3D model, and experimenting with Google Cardboard or other virtual reality (VR) goggles – are not coding challenges, but we wanted to use the opportunity to promote some of the other services the Creative Media Studio provides.  Conveniently, the library was hosting a Wikipedia Edit-A-Thon during the same period as the coding challenges, so it made sense to leverage both events as part of our Innovate LA programming.
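For a sense of the scale involved, the three tutorials behind the “Python Ninja” badge (Hello World, Variables and Types, Lists) cover roughly this much code. This is a sketch of the concepts, not the tutorial site’s exact exercises:

```python
# Hello World
print("Hello, World!")

# Variables and Types
books = 3            # int
title = "Moby-Dick"  # str
overdue = False      # bool

# Lists
checkouts = ["Moby-Dick", "Dune"]
checkouts.append("Beloved")
print(len(checkouts))  # → 3
```

Each tutorial step gives immediate pass/fail feedback on snippets like these, which is why participants could earn a badge in a single short sitting.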

The coding challenges and instructions were distributed via Facebook, and we also held “office hours” (complete with snacks) in one of the library’s computer labs to provide assistance with completing the challenges.  The office hours were mostly informal, with two library staff members available to walk users through completing and submitting the challenges.  One special office hours session was planned, bringing in a guest professor from our Cinema and Television Arts program to help users with a web-based game-making tutorial he had designed.  This partnership was very successful, and that particular session had the highest attendance of any we offered.  In future iterations of this event, more advance planning would enable us to partner with additional faculty members and feature tutorials they already use effectively with students in their curriculum.

Credly

We needed a way both to accept submissions documenting completion of coding challenges and to award digital badges.  Originally we had investigated distributing digital badges through our campus learning management system, as some learning management systems, like Moodle, are capable of awarding digital badges.  There were a couple of problems with this: 1) we wanted the event to be open to anyone, including members of the community who wouldn’t have access to the learning management system, and 2) the digital badge capability hadn’t been activated in our campus’ instance of Moodle.   Another route we considered for accepting completed challenges was the university’s Portfolium application, which has a fairly robust ability to accept submissions of completed work, but again, it wouldn’t let anyone from outside the university participate. Credly seemed like an easy, efficient way both to accept submissions and to award badges that could also be embedded in third-party applications, such as LinkedIn.  Since we hosted the competition in 2016, the capability to integrate Credly badges in Portfolium has been made available.

Credly enables you to either design your badges using Credly’s Badge Builder or upload your own badge designs.  Luckily, we had access to amazing student designers Katie Pappace, Rose Rieux, and Eva Cohen, who custom-created our badges using Adobe Illustrator.  A Credly account for the library’s Creative Media Studio was created to issue the badges, and Credly “Credits” were defined using the custom-created badge designs for each of the coding skills for which we wanted to award badges.

When you design a credit in Credly and enable others to claim it, you have several options.  You can require a claim code, which requires users to submit a code in order to claim the credit.  Claim codes are useful if you want to award badges based not on evidence (like a file submission) but on participation or attendance at an event, where you distribute the claim code to attendees.  When claim codes are required, you can also set approval of submissions to be automatic, so that anyone with a claim code automatically receives their badge.  We didn’t require a claim code, and instead required evidence to be submitted.

When requiring evidence, you can configure what types of evidence are appropriate to receive the badge. Choices for evidence submission include a URL, a document (Word, text, or PDF), an image, an audio file, a video file, or just an open text submission.  As users completed code challenges, we asked for screenshots (images) as evidence of completion for most challenges.  We reviewed all submissions to ensure they were correct, but by requiring screenshots, we could easily see whether or not the tutorial itself had “passed” the code submission.

Awards

Credly makes it easy to count the number of badges earned by each participant. From those numbers, we were able to determine the top badge earners and award them prizes. All participants, even those with a single badge, received buttons of each of their earned badges. In addition to the virtual and physical badges, the participants with the greatest number of earned badges received prizes: the top five won gift cards, and the grand prize winner also got a 3D-printed trophy designed in Tinkercad, with their photo incorporated as a lithophane. A low-stakes award ceremony was held for all contestants and winners. The top awards were in high demand, and it was a good opportunity for students to meet others interested in coding and STEM.

Lessons Learned

Our first attempt at hosting coding challenges in the library taught us a few things.  First, taking a screenshot is definitely not a skill most participants started out with – the majority of initial questions we received from participants were not related to coding, but rather involved how to take a screenshot of their completed code to submit to Credly.  For future events, we’ll definitely make sure to include step-by-step instructions for taking screenshots on both PC and Mac with each challenge, or consider an alternative method of collecting submissions (e.g., copying and pasting code as a text submission into Credly).  It’s still important to not assume that copying and pasting text from a screen is a skill that all participants will have.

As noted above, planning ahead would enable us to more effectively reach out and partner with faculty, and possibly coordinate coding challenges with curriculum.  A few months before the coding challenges, we did reach out to computer science faculty, cinema and television arts faculty, and other faculty who teach curriculum involving code, but if we had reached out much earlier (e.g., the semester before) we likely would have been able to garner more faculty involvement.  Faculty schedules are jam-packed and often set far in advance, so at least six months of notice is definitely appreciated.

Only about 10% of coding challenge participants came to coding office hours regularly, but that enabled us to provide tailored, one-on-one assistance to our novice coders.  A good portion of understanding how to get started with coding and application development is not related to syntax, but involves larger questions about how applications work:  if I wanted to make a website, where would my code go?  How does a URL figure out where my website code is?  How does a browser understand and render code?  What’s the difference between JavaScript (client-side code) and PHP (server-side code), and why are they different?  These were the types of questions we really enjoyed answering with participants during office hours.  Having fewer, more targeted office hours — where open questions are certainly encouraged, but where participants know the office hours are focused on particular topics — makes attending the office hours more worthwhile, and I think gives novice coders the language to ask questions they may not know they have.

One small bit of feedback that was personally rewarding for the authors:  at one of our office hours, a young woman came up to us and asked if we were the planners of the coding challenges.  When we said yes, she told us how excited she was (and a bit surprised) to see women involved with coding and development.  She asked us several questions about our jobs and how we got involved with careers relating to technology.  That interaction indicated to us that future outreach could potentially focus on promoting coding to women specifically, or hosting coding office hours that enable mentoring for women coders on campus, modeling (or joining up with) Women Who Code networks.

If you’re interested in hosting support for coding activities or challenges in your library, a great resource to get started with is Hour of Code, which promotes holding one-hour introductions to coding and computer science, particularly during Computer Science Education Week.  Hour of Code provides tutorials, resources for hosts, promotional materials and more.  This year, Hour of Code week / Computer Science Education Week will be December 4-10, 2017, so start planning now!

District Dispatch: ALA announces Google Policy Fellow for 2017

planet code4lib - Mon, 2017-04-17 17:05


I am pleased to announce that Alisa Holahan will serve as ALA’s 2017 Google Policy Fellow. She will spend ten weeks in Washington, D.C. working on technology and internet policy issues through the library lens. As a Google Policy Fellow, Holahan will explore diverse areas of information policy, such as copyright law, information access for underserved populations, telecommunications policy, digital literacy, online privacy, the future of libraries and others. Google, Inc. pays the summer stipends for the fellows and the respective host organizations determine the fellows’ work agendas.

Holahan is a candidate for the Master of Science in Information Science degree at the School of Information at the University of Texas, Austin. Previously, she completed her J.D. at the University of Texas Law School where she graduated with honors and served as Associate Editor of the Texas Law Review. Holahan also completed her undergraduate degree at the University of Texas.

Since September 2015, Holahan has served as a Tarlton Fellow at the Tarlton Law Library at the University of Texas. She has interned twice in Washington, D.C., at the U.S. Department of Justice and U.S. Department of Health and Human Services. Holahan is licensed to practice law in Texas.

ALA is pleased to participate once again in the Google Policy Fellowship program, as it has since the program’s inception in 2007-08. We look forward to working with Alisa Holahan on information policy topics that leverage her strong background, and to fighting for library interests with the Trump Administration and U.S. Congress.

Find more information about the Google Policy Fellowship Program.

The post ALA announces Google Policy Fellow for 2017 appeared first on District Dispatch.

Harvard Library Innovation Lab: LIL Talks: Parsing Caselaw

planet code4lib - Mon, 2017-04-17 16:41

In last week’s LIL talk, expert witness Adam Ziegler took the stand to explain the structure of legal opinions and give an overview of our country’s appellate process.

First on the docket was a general overview of our country’s judicial structure, specifically noting the similarities between our federal and state systems, which both progress from district courts, to appellate courts, to supreme courts.

Next, we dissected several cases which would eventually be heard by the US Supreme Court. While some elements, such as a list of attorneys and the opinion text, are standard in all cases, each court individually decides how their cases will be formatted. They are, however, often forced to work within the guidelines and workflows specified by their contracted publishers.

In our Caselaw Access Project, we’re working on friendlier, faster, totally open, and more data-focused systems for courts to publish opinions. For more information, please send an email to: lil@law.harvard.edu

Terry Reese: Can my ILS be added to MarcEdit’s ILS Integration?

planet code4lib - Mon, 2017-04-17 14:32

This question has shown up in my inbox a number of times over the past couple of days.  My guess is that it’s related to the YouTube videos recently posted demonstrating how to set up and use MarcEdit directly with Alma.

  1. Windows Version: https://youtu.be/8aSUnNC48Hw
  2. Mac Version: https://youtu.be/6SNYjR_WHKU

 

Folks have been curious how this work was done, and whether it would be possible to do this kind of integration with their local ILS system.  As I was answering these questions, it dawned on me that others may be interested in this information as well — especially if they are planning to speak to their ILS vendor.  So, here are some common questions currently being asked, and my answers.

How are you integrating MarcEdit with the ILS?

About 3 years ago, the folks at Koha approached me.  A number of their users make use of MarcEdit, and they wondered if it would be possible to have MarcEdit work directly with their ILS system.  I love the folks over in that community — they are consistently putting out great work, and had just recently developed a REST-based API that provided read/write operations into the database.   Working with a few folks (who happen to be at ByWater Solutions, another great group of people), I was provided with documentation, a testing system, and a few folks willing to give it a go — so I started working to see how difficult it would be.  And the whole time I was doing this, I kept thinking: it would be really nice if I could do this kind of thing with our Innovative Interfaces (III) catalog.  While III didn’t offer an API at the time (and for the record, as of 4/17/2017, they still don’t offer a viable API for their product outside of some toy API for dealing primarily with patron and circulation information), I started to think beyond Koha and realized that I had an opportunity not just to create a Koha-specific plugin but to use this integration as a model for an integration framework in MarcEdit.  And that’s what I did.  MarcEdit’s integration framework can potentially handle the following operations (assuming the system’s API provides them):

  1. Bibliographic and Holdings Records Search and Retrieval — search can be via API call, SRU or Z39.50
  2. Bibliographic and Holdings Records creation and update
  3. Item record management
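Of the retrieval options above, SRU is the simplest to show concretely: it is a plain HTTP GET protocol, so a bibliographic search is just a URL. A minimal sketch in Python (the endpoint below is hypothetical; substitute your catalog’s actual SRU base URL):

```python
from urllib.parse import urlencode

def build_sru_search(base_url, cql_query, max_records=10):
    """Build an SRU 1.1 searchRetrieve URL requesting MARCXML records."""
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": cql_query,            # a CQL query string
        "maximumRecords": max_records,
        "recordSchema": "marcxml",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint; a real client would fetch this URL with
# urllib.request.urlopen() and parse the MARCXML response.
url = build_sru_search("https://catalog.example.edu/sru",
                       'dc.title = "open data"')
print(url)
```

Because the whole exchange is just HTTP and XML, a tool like MarcEdit can support SRU-capable systems without any vendor-specific client library; the vendor-specific work is in the write path.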

 

I’ve added tooling directly into MarcEdit that supports the above functionality, allowing me to plug and play an ILS based on the API that they provide.  The benefit is that this code is available in all versions of MarcEdit, so once the integration is created, it works in the Windows version, the Linux version, and the Mac version without any additional work.  If a community was interested in building a more robust integration client, then I/they could look at developing a plugin — but this would be outside of the integration framework, and takes a significant amount of work to make cross-platform compatible (given the significant differences in UI development between Windows, the MacOS, and Linux).

This sounds great, what do you need to integrate my ILS with MarcEdit?

This has been one of the most common questions I’ve received this weekend.  Folks have watched or read about the Alma integration and wondered if I can do it with their ILS.  My general answer, and I mean this, is that I’m willing to integrate any ILS system with MarcEdit, so long as it provides the API endpoints that make it possible to:

  1. Search for bibliographic data (holdings data is a plus)
  2. Allow for the creation and update of bibliographic data
  3. Utilize an application-friendly authentication process that, ideally, allows the tool to determine user permissions

 

This is a pretty low bar.  Basically, an API just needs to be present; and if there is one, then integrating the ILS with MarcEdit is pretty straightforward.

OK, so my ILS system has an API, what else do I need to do?

This is where it gets a bit trickier.  ILS vendors tend not to work well with folks who are not their customers, or who are not other corporations.  I’m generally neither, and for the purposes of this type of development, I’ll always be neither.  This means that making this work happen generally requires a local organization within a particular ILS community to champion the development; by that, I mean either providing introductions to the necessary people at the vendor or providing access to a local sandbox so that development can occur.  This is how the Alma integration was first initiated.  Some interested folks at the University of Maryland spent a lot of time working with me and with Ex Libris to make this integration work possible.  Of course, after we got started and the work gained some interest, Ex Libris reached out directly, which ultimately made this a much easier process.  In fact, I’m rarely impressed by our ILS community, but I’ve been impressed by the individuals at Ex Libris on this specifically.  While it took a little while to get the process started, they have open documentation and, once we got started, have been very approachable in answering questions.  I’ve never used their systems, and I’ve had other dealings with the company that were less positive, but in this, Ex Libris’s open approach to documentation is something I wish other ILS vendors would emulate.

I’ve checked, we have an API and our library would be happy to work with you…but we’ll need you to sign an NDA because the ILS API isn’t open

Ah, I neglected to mention above one of my deal-breakers, and why I have not, at present, worked with the APIs that I know are available in systems like Sirsi: I won’t sign an NDA.  In fact, in most cases, I’ll likely publish the integration code for those that are interested.  More importantly, and I can’t stress this enough, I will not build an integration into MarcEdit for an ILS where the API must be purchased as an add-on service, or where an organization must purchase a license to “unlock” API access.  API access is a core part of any system, and the ability to interact, update, and develop new workflows should be available to every user.

I have no problem with ILS vendors shipping closed-source systems (MarcEdit is closed source, even though I release large portions of the components into the public domain to simplify supporting the tool), but if you are going to develop a closed-source tool, you have a responsibility to open up your APIs and provide meaningful gateways into the application to enable innovation.  And let’s face it, ILS systems have sucked at this, much to the library community’s detriment.  This really needs to change.  While the ability to integrate with a tiny, insignificant tool like MarcEdit won’t make an ILS more open, I get to make my own choices, and I have chosen to put development time only into integration efforts for ILS systems that understand that their communities need choices and that actively embrace their communities’ ability to innovate.

What this means, in practical terms: if your ILS requires you or me to sign an NDA to work with the API, I’m out.  If your ILS requires you or its customers to pay for API access through an additional license, training, or a system add-on (and this one particularly annoys me), I’m out.
As an individual, you are welcome to develop the integrations yourself as a MarcEdit plugin, and I’m happy to answer questions and help individuals through that process, but I will not do the integration work in MarcEdit itself.

I’ve checked, my ILS system API meets the above requirements, how do we proceed?

Get in touch with me at reeset@gmail.com.  The actual integration work is pretty insignificant (I’m just plugging things into the integration framework); usually, the most time-consuming part is getting access to a test system and documenting the process.

Hopefully, that answers some questions.

–tr


Islandora: Report from a release stance: Islandora 7.x-1.9RC2 VM available and only 14 days left for release

planet code4lib - Mon, 2017-04-17 13:10

Spring is here, and so is our Islandora 7.x-1.9 Release Candidate 2 machinery.

A wonderful scented bouquet of colourful Islandora modules was updated to the 7.x-1.9 RC2 version, and I'm happy to announce that an incredible (and even) number of 48 bugs have been fixed since this release process started, which of course speaks well of you people. That's not counting community participation, documentation work, and general engagement, which would make that number close to infinite. Also, no dangerous, critical, or even medium-risk bugs are open or in an irresolute state, which could mean that even small fixes could still get into this release, because we don't know what to do with so much free time =)

Before: give these a look again (by now you should already dream of them):
https://github.com/Islandora/islandora/wiki/How-To-Audit,-Document,-or-Test-an-Islandora-Release
https://github.com/Islandora/islandora/wiki/Release-Team-Roles

How to use the testing machine: passwords, which URL, and other questions are answered here.

VirtualBox or VMware

The virtual machine is available and ready to be tested at https://s3.amazonaws.com/islandoravm/7.x-1.9/Islandora_7.x-1.9_RC2-Development-VM.ova

The etag is 4947dada98abb576c9968be1d0db96f3 and the MD5 is 4947dada98abb576c9968be1d0db96f3 (huh, they match!)

Old good terminal action:

git clone -b 7.x-1.9 https://github.com/Islandora-Labs/islandora_vagrant
cd islandora_vagrant
vagrant up

wait for it...

vagrant ssh

This particular VM is reduced in cholesterol but has the same old flavour, which means you will be downloading 1.3 GB less of 1s and 0s than before, leaving more time for easter egg hunts, eating easter eggs, or any other social/outdoor activity you prefer this weekend (who are we to impose what to do with your free time?).

Need help? Did I say something wrong? The VM does not work? Too informal? Write to me directly at dpino@metro.org. For general Q&A, don't hesitate to reach out; contact us/me/or Melissa Anez (Project & Community Manager) if you have questions (email, IRC, or Skype).

Nothing more to say than thanking you all for your feedback, pull requests, and help. Enjoy this community code. Test, explore, and find/document bugs.

Thanks again for making Islandora happen, over and over, at least twice a year.

Diego Pino Navarro / Islandora 7.x-1.9 release manager
Metro.org
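Since an MD5 is quoted for the .ova above, here is a quick way to verify the download before importing it, sketched with Python's standard hashlib (the expected digest is the one from the announcement):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Stream the file so a multi-gigabyte .ova never has to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "4947dada98abb576c9968be1d0db96f3"  # value quoted in the announcement
# Safe to import once md5_of("Islandora_7.x-1.9_RC2-Development-VM.ova") == EXPECTED
```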

HangingTogether: New skill sets for metadata management

planet code4lib - Mon, 2017-04-17 12:00

That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Jennifer Baxmeyer of Princeton, Dawn Hale of Johns Hopkins University and MJ Han of University of Illinois at Urbana-Champaign. Educating and training catalogers has been at the forefront of many discussions in the metadata community. Today’s changing landscape calls for skill sets needed by both new professionals entering the field and seasoned catalogers to successfully transition to the emerging linked data and semantic web environment. Catalogers are learning about and experimenting with BIBFRAME while remaining responsible for traditional bibliographic control of collections. Metadata specialists utilize tools for metadata mapping, remediation, and enhancement. They identify and map semantic relationships among assorted taxonomies to make multiple thesauri intelligible to end users. For the more technical aspects of metadata management, we increasingly see competition for talent from other industries. This may intensify as metadata becomes more central to various areas of government, non-profit, and private enterprise.

Managers want to focus less on specific schema and more on metadata principles that can be applied to a range of different formats and environments. Desired soft skills included problem solving, effective collaboration, willingness—even eagerness—to try new things, understanding researchers’ needs, and advocacy. Although some metadata specialists have always enjoyed experimenting with new approaches, they lack the time to learn new tools or methodologies while keeping up with their routine work assignments. We should promote metadata as an exciting career option to new professionals in venues such as library schools and ALA’s New Members Roundtable. Emphasizing that metadata encompasses much more than library cataloging can increase its appeal, for example: entity identification, descriptive standards used in various academic disciplines, and describing born-digital, archival and research data that can interact with the semantic Web. As one participant noted, “We bring order out of a vacuum.”

Metadata is increasingly being created outside the library by academics and students who receive minimal training, leading to a need for more catalogers with record maintenance skills. Participants noted the need for technical skills such as simple scripting, data remediation, and identity management to reconcile equivalents across multiple registries. Frequently mentioned sources of instruction include Library Juice Academy, MarcEdit tutorials, Lynda.com, Library of Congress training webinars, ALCTS webinars, Codecademy, Software Carpentry, and conferences such as Code4Lib and Mashcat. W3C’s recently published Data on the Web Best Practices and Semantic Web for the Working Ontologist were recommended reading. Crucial to the success of such training is the ability to quickly apply what has been learned; if new skills are not used, people forget them. Staff feel frustrated when they have invested the time to learn something they cannot use regularly in their daily work.

We’ve seen a big shift from relying on instructions from the Library of Congress to self-education from multiple sources. Some approaches mentioned by participants:

  • Emphasize continuity of metadata principles when introducing an expanded scope of work.
  • Take advantage of the Library Workflow Exchange, a site designed to help librarians share workflows and best practices across institutions, including scripts.
  • From the recent Electronic Resources & Libraries Conference: “Don’t wait; iterate!” In other words, rather than waiting until staff have all the required skills, let them do tasks iteratively, learning as they go, so they are ready for new tasks when the time comes.
  • Have small groups of metadata specialists take programming courses together, after which they can continue to meet and discuss ways to apply their new skills to automate routine tasks.
  • Send staff to events such as OCLC’s DEVCONNECT, the OCLC Developer Conference being held 8-9 May 2017, to learn from libraries using OCLC APIs to enhance their operations and services.
  • Create reading and study groups that include cross-campus or cross-divisional staff.
  • Expand the scope of current work to enable metadata specialists to apply their skills to new domains or terminology, such as using Dublin Core for digital collections. Involve staff in digital projects from the conceptual stage to developing project specifications, quality assurance practices and tool selection. As an added bonus, this fosters collaborative teamwork relationships.
  • Hire graduate students in computer science for short-term tasks such as creating scripts. The students need money and the library needs their skills.

The extent of collaboration with IT or systems staff varies among institutions. Such collaboration is necessary for many reasons, including managing data that is outside the library’s control. Some noted that “cultural differences” exist between the professions: developers tend to be more dynamic and focus on quick prototyping and iteration, while librarians focus first on documenting what is needed and are more “schematic.”  Which is more likely to be successful: teaching metadata specialists IT skills or teaching IT staff metadata principles?  The “holy grail” is to recruit someone with an IT background interested in metadata services. Retaining staff with IT skills is difficult—if they are really good, they can find higher-paying jobs in the private sector. Ideally, metadata managers would like a few staff who have the technical skills to take batch actions on data, or at least know how to use the external tools available to automate as many tasks as possible.
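As a toy illustration of the "batch actions on data" mentioned above, a few lines of scripting can normalize inconsistent date strings in an exported metadata file. The input formats, and the DD/MM/YYYY assumption, are hypothetical; real remediation targets whatever the export actually contains:

```python
import re

def normalize_date(value):
    """Map '05/03/1999' (assumed DD/MM/YYYY) or a bare year to ISO-style
    dates; leave anything unrecognized untouched for human review."""
    value = value.strip()
    m = re.fullmatch(r"(\d{1,2})/(\d{1,2})/(\d{4})", value)
    if m:
        day, month, year = m.groups()
        return f"{year}-{int(month):02d}-{int(day):02d}"
    if re.fullmatch(r"\d{4}", value):
        return value                  # a bare year is already acceptable
    return value                      # flag everything else for review
```

Run over a few thousand exported rows, a function like this replaces hours of hand-editing, which is exactly the payoff of pairing metadata knowledge with a little scripting.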

About Karen Smith-Yoshimura

Karen Smith-Yoshimura, senior program officer, works on topics related to creating and managing metadata with a focus on large research libraries and multilingual requirements.

Mail | Web | Twitter | More Posts (76)

Terry Reese: MarcEdit Updates Posted

planet code4lib - Sun, 2017-04-16 21:12

Change log below:

 

Mac Updates: 2.3.12

* Update: Alma Integration: new Create Holdings Record template
* Update: Integration Framework refresh; corrects issues where folks were getting undefined function errors.
* Update: The #xx field syntax will be available in the Edit Field and Edit Indicator functions. This means users will be able to edit all 6xx fields using the Edit Field function by entering 6xx in the field textbox.
* Update: SRU library updates to provide better error checking (specific to Windows XP)
* Update: Added support for the Export Settings command. This will let users export and import settings when changing computers.

Windows Updates: 6.3.2

* Update: Alma Integration: new Create Holdings Record template
* Update: Integration Framework refresh; corrects issues where folks were getting undefined function errors.
* Update: The #xx field syntax will be available in the Edit Field and Edit Indicator functions. This means users will be able to edit all 6xx fields using the Edit Field function by entering 6xx in the field textbox.
* UI Updates: All comboboxes that include 0-999 field numbers in the Edit Field, Edit Subfield, Swap Field, Copy Field, etc. have been replaced with textboxes. Having the dropdown boxes just didn't seem like good UX design.
* Enhancement: RunAs32-bit mode on 64-bit systems (for using Connexion) has been updated. Also, I'll likely be adding a visual cue (like adding *32 to the main window title bar) so that users know the program is running in 32-bit mode on a 64-bit system.
* Enhancement: MarcEdit 7 Update Advisor
* Update: SRU library updates to provide better error checking (specific to Windows XP)

–tr

Terry Reese: MarcEdit 7 Upgrade Advisor

planet code4lib - Sun, 2017-04-16 19:12

This post is related to the: MarcEdit and the Windows XP Sunsetting conversation

I’ll be updating this periodically, but I wanted to make it available now.  One of the biggest changes related to MarcEdit 7 is that I’m interested in building against an updated version of the .NET Framework.  Tentatively, I’m looking to build against the 4.6 framework, but I would be open to building against 4.5.2.  To allow users to check their local systems and provide me with feedback, I’m including an upgrade advisor.  This will be updated periodically as my plans for MarcEdit 7 take shape.

You can find the upgrade advisor under the Help menu item on the Main MarcEdit Window.

Upgrade Advisor Window:


As I’ve noted, my plan is to build against the .NET Framework 4.6, which is supported on Windows Vista through Windows 10.

–tr

 

Library of Congress: The Signal: Identity Crisis: The Reality of Preparing MLS Students for a Competitive and Increasingly Digital World

planet code4lib - Fri, 2017-04-14 12:57

This is a guest post by Mary Kendig, a student of the Master of Information Science program and the research coordinator for the DCIC Center at the University of Maryland.

The Problem

With the explosive emergence of computers and information technology since the 1960s, electronic records have overwhelmed librarians and archivists. Federal agencies have responded in kind, as evidenced by 30 years of investments in research partnerships and e-records: over $11 million and 90 projects have been counted from the National Historical Publications and Records Commission, and tens of millions more from NARA, NEH, IMLS, the Library of Congress, Mellon, and others.

As access portals are built and information infrastructure is constructed, it is vital for librarians, archivists, and curators to collaborate in the design of digital archives and repositories, in concert with computer engineers, data scientists, and programmers. However, when digital software or information system projects are required to sustain online collections, programmers and computer scientists are at the helm to update, migrate, and build these storage systems.

The trend of not hiring librarians and archivists for libraries and archives is not limited to information infrastructure. Upper management and project leader positions are filled by business majors and holders of project management institute certificates, regardless of their experience with libraries or MLS education. Even simple website modifications to increase online traffic and digital record use are offered to social media coordinators and basic programmers rather than public outreach librarians. The data and computational social science librarian for Stanford University Libraries is Dr. Ken Nakao, a Stanford graduate with a chemical engineering degree. The research data manager for Newcastle Libraries in the United Kingdom, Dr. Chris Emmerson, gained his doctorate in Transportation Engineering.

Greg Jensen, software architect, working with UMD’s Cyberinfrastructure Center to store records gifted by NARA.

I attribute this recent trend to the current education offered in Master of Library Science (MLS) programs. Despite our awareness in the 60s and efforts in the 90s to maintain electronic records, MLS programs have been slow to enact the major modifications that would train students for the future. Interview current MLS students and those who received their degrees in the last five years, and they quietly confess their degree did not adequately prepare them for the electronic record influx. Monitor any MLS program in the United States or abroad, and one will notice name modifications as well as slow yearly program revisions.

In the College of Information Studies at the University of Maryland, the MLS program has undergone multiple iterations through their “re-envisioning the MLS” efforts; this is reflected in the program’s Fall 2016 name change from Master of Library Science to Master of Library and Information Science (MLIS). While previous coursework centered on traditional archiving, the program now embraces more electives in digital curation and data management for libraries, offering a specialization in Archives and Digital Curation. Several universities have dropped “library” or “archival” from the name all together; for instance, the University of Iceland now offers the Master of Information Science with specialization options such as Electronic Records Management. When asked about the removal, Dr. Jóhanna Gunnlaugsdóttir admitted that employers outside libraries were confused or uninformed regarding the degree. Furthermore, graduates could only attain small reference positions within their own library institutions and failed to gain upward mobility.

Students attending an ArchivesSpace lecture by Dr. Adam Kriesberg outside of normal coursework

While universities attempt to rebrand their programs to give students a competitive advantage, course revisions arrive more slowly. For many MLS programs, database or information system design is not a mandatory requirement, even though catalog records and digital collections are managed through these systems. Programming is an afterthought, despite many repositories and catalogs now providing Application Programming Interfaces (APIs). With probabilistic methods and algorithms, researchers in the Traces through Time project at the National Archives (UK) are attempting to connect people within genealogical records across collections and assign confidence that a connection is accurate, which requires major coursework in data science. Only recently have MLS programs strived to embed technology- and data-intensive skills into their programs, and many students elect to enroll out of their own desire to attain well-paying jobs following graduation.
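For a sense of what such probabilistic matching involves (this is an illustrative sketch, not the Traces through Time project's actual method), a record-linkage step can score candidate name pairs and attach a confidence value:

```python
from difflib import SequenceMatcher

def match_confidence(name_a, name_b):
    """Crude similarity score in [0, 1] between two recorded name strings."""
    return SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()

# Rank candidate pairs by confidence; real systems add dates, places,
# occupations, and learned weights on top of string similarity.
candidates = [("Jon Smyth", "John Smith"), ("Jon Smyth", "Mary Jones")]
scored = sorted(candidates, key=lambda pair: match_confidence(*pair), reverse=True)
```

Even this toy version shows why the work needs data-science coursework: choosing thresholds, weighting evidence, and validating matches are statistical problems, not cataloging ones.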

The opportunity to work with medieval manuscripts in rare book special collections is limited to a select few. It is time for MLS students to fully embrace available digital jobs and data management positions. As educators and industry professionals, it is time we admit that across the United States, Europe, South America, and Asia, the Master of Library and Information Science program is facing an identity crisis amid the digital revolution, and students are facing the consequences.

Please take a moment to catch your breath and reflect on my message, however controversial you feel it is!

Good. I am aware that stating a problem is vastly different from solving one, both in simplicity and in the bureaucratic politics across the profession. With this in mind, I will reflect on existing and potential solutions over the next several paragraphs, organized around three major statements and explanations.

1. We need to offer technology-intensive courses and programs, and encourage students to enroll in them.

There was a time when textual processing and special collections courses supported students entering libraries and archives. However, as budgets are cut and libraries go digital, the path to sustainable and well-paying careers involves co-developing infrastructure to hold, curate, and provide access to online collections and data. To qualify for these careers, job listings require various programming languages, experience in information system design or web-enabled databases, automation techniques, and data analysis.

There is an emerging coalition of librarians, archivists, and computer scientists, composed of researchers and educators from Canada, the UK, and the US, who are responding to these technological challenges by introducing computational methods to libraries and archives. Under the moniker of Computational Archival Science (CAS), the coalition is promoting an interdisciplinary field concerned with the application of computational methods and resources to sustain large-scale records/archives processing, analysis, storage, long-term preservation, and access. In this vein, the Library of Congress has recently explored the theme of “collections as data,” as seen in its September 27, 2016 conference titled Collections as Data: Stewardship and Use Models to Enhance Access. In the past two years, this coalition has strived to develop novel coursework to sustain MLS students in electronic records management and information careers.

Based on present research and the problems faced by modern institutions, the coursework ranges from computational linguistics and network analysis to graph databases and big data infrastructure. To better equip students, MLS programs must introduce the theory and practice of managing born-digital records and information objects at scale. The courses must expose students to the technology, software, and techniques utilized by computer engineers and data scientists to sustain large record collections. Exposure includes physically working with these tools on existing collections and repositories at scale. A semester-long practicum with institutional collections may be necessary to give students hands-on experience with electronic record accession, processing, maintenance, migration, and storage.

In addition to offering more technologically intensive courses, MLS programs must mandate basic online information infrastructure courses. At a minimum, relational database or information system design should be a core requirement, and electronic records management in digital repositories should take its place among the introductory courses. Even if students are uninterested in building infrastructure for online collections, they must be exposed to the technology.
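As a minimal illustration of the relational-design basics being argued for, here is a two-table catalog schema (bibliographic records and their items) built in SQLite. The schema is a teaching sketch, not any production ILS layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bib_record (
    bib_id INTEGER PRIMARY KEY,
    title  TEXT NOT NULL,
    author TEXT
);
CREATE TABLE item (
    item_id  INTEGER PRIMARY KEY,
    bib_id   INTEGER NOT NULL REFERENCES bib_record(bib_id),
    barcode  TEXT UNIQUE,
    location TEXT
);
""")
conn.execute("INSERT INTO bib_record VALUES (1, 'Moby Dick', 'Melville, Herman')")
conn.execute("INSERT INTO item VALUES (1, 1, '31234000001', 'Main Stacks')")

# One bib record, many items: the join is the whole point of the design.
row = conn.execute("""
    SELECT b.title, i.location
    FROM bib_record b JOIN item i USING (bib_id)
""").fetchone()
```

Even an exercise this small forces the core ideas, keys, relationships, and constraints, that catalog and repository systems are built on.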

2. We need research organizations/projects for students to gain digital skills and hands-on experience

With coursework reflecting digital provenance theory, appraisal techniques, and OAIS standards, students need time to work through the motions of physically implementing digital projects and electronic preservation. This must include exposure to existing software and the computer skills necessary to move electronic objects through the record lifecycle, including both born-digital records and paper records digitized for preservation purposes. For optimum experience, students must be involved from project conception through completion and lessons learned, and must have the opportunity to lead the project or make major project decisions.

I work for the Digital Curation Innovation Center (DCIC) at the University of Maryland, an iSchool center dedicated to integrating research and education through Big Record and Archival Analytic partnerships. We have over 50 student volunteers who work on these projects to gain hands-on experience with curating digital collections, both born-digital and digitized, or with building infrastructure to maintain records at scale. Students volunteer for the DCIC because they are able to experiment with industry software and techniques on projects provided by library and archival institutions. For example, in the Mapping Inequality Project, students digitize historical maps and stretch them across modern Google Maps to understand geographical and societal changes. These maps were collected from the US National Archives through team digitization efforts. In the Overseas Pension Project, students digitally reunify US Civil War letters from foreign soldiers attempting to collect their veteran pensions, health records describing their various conditions, and state pension tables through graph and relational databases to improve genealogical services and surface important economic data. In the St. Louis Voyage Project, the US Holocaust Memorial Museum supplied data from patrons and users so that students could visualize the experience of the SS St. Louis and its passengers. Finally, in the DRAS-TIC Project (Digital Repository at Scale That Invites Computation), students are engaged in developing and testing innovative cyberinfrastructure that scales to billions of records and leverages distributed, scalable NoSQL frameworks. In each project, MLS students work with institutional professionals, fellow data scientists, and programmers to curate these historical collections and build infrastructure to maintain them. Our institution is not unique in its endeavors; the Digital Curation Centre in Edinburgh, for example, explores data curation and management in academic libraries.

Visualization analyzing archival data from US Holocaust men and women in 1940 France, created by Yuting Lao, PhD, Information Science at University of Maryland

With realistic research organizations and projects, students are equipped to handle the problems faced in implementing digital projects and preserving electronic records within their institutions. Furthermore, students can point to physical project deliverables and say, “I designed that, and I can design it for you.”

3. We need to collaborate with institutions to provide beneficial learning environments for students

The MLS field study can be the core foundation of student success in locating employment opportunities following graduation. Sadly, too many students view the field study as another program checkbox rather than the opportunity of a lifetime. This is likely due to the common practice of pairing students with institutions that give them the “busy work” of our profession or that are unable to support a student for the semester.

If MLS programs are demanding that students enroll in semester-long courses requiring 120 hours of onsite institution work, then the environment must be rich and beneficial to the student, especially if the work is unpaid. Ideally, the environment should include technological elements and an introduction to systems for maintaining records. Furthermore, the student must work with employees facing modern institutional problems, such as budget cuts or locating resources for funding. These experiences must instill leadership and decision-making skills in students so they are equipped to handle electronic record influxes with a diminishing budget.

St. Louis Voyage Project students meeting with USHMM coordinators and archivists

In addition to a rich internship experience, institutions must actively engage with MLS programs through coursework, projects, and funding. The DCIC actively works with the National Archives, the US Holocaust Memorial Museum, the Library of Congress, and the National Park Service to provide collections, experience, and funding for MLS students. The Michigan State Archives actively engages with the School of Information at the University of Michigan. In the summer, students at the University of Iceland can enroll in a course that involves visiting and researching libraries across the country. The need for collaboration was further echoed by US National Archives specialist Mark Conrad at the IEEE Big Data conference in December. In his presentation, Collaboration is the Thing, Conrad encourages researchers and institutions to “kick the tires” on new technology and notes several examples of collaboration in action with NARA. When institutions interact with academic programs, the active learning benefits both organizations. Students are exposed to the modern problems facing archives and libraries and are equipped to tackle them, while institutions gain access to advantageous research and work pools.

Final Thoughts

I am not afraid to admit that my analysis could very well be incorrect or jaded. In the end, the separation between librarians/archivists and computer/data scientists might have value. Recently, while attending the 12th International Digital Curation Conference in Edinburgh, my attention was directed to applying digital curation workflows to data science, academic libraries, and STEM research. Following several presentations on data management plans, text mining, and archival software storage for biological data, I commented to one of my student researchers, “I wish these types of courses were required in MLS programs so students could learn to work with this data and feel comfortable with such advanced techniques.” I was shocked when they did not agree, responding that students did not necessarily join MLS programs to build infrastructure for historical records or work in STEM-driven libraries. After the conference, a different student admitted that they might drop the archives and digital curation specialization altogether because the presentations had greatly modified their perception of working in modern libraries and archives.

Overseas Pension Project students presenting digital archive research at the International Digital Curation Conference

As a previous NARA employee, federal librarian, and one-semester MLS student, I know that student complaints of ill-preparedness for library and archival careers in digitally motivated institutions haunt educators and research coordinators. I stand by my analysis: if we continue to separate librarians and archivists from technology, we put students at a severe disadvantage as archives, libraries, and museums increasingly become digital, both through the influx of born-digital records and through the digitization of existing analog collections.

It is time for library educators, archival professors, and program advisors to break from the past and modify their courses to include hands-on experience with technology and project management. MLS students urgently need to go beyond the theoretical study of provenance theory, OAIS standards, and the management of textual records in cultural institutions. MLS programs must swiftly incorporate information systems, database design, and big data infrastructure courses, or at least offer a more technologically driven path. If current practice continues, we are knowingly setting students up for failure despite our awareness of what the future holds.

As I read about the digital lab projects and library consortia discussed in the Signal, I know universities and institutions are arming themselves for the digital revolution. I look forward to educators, industry professionals, and administrators collaborating to better train and equip their students for the fight. Librarians' and archivists' identity will remain constant, as their mission is to preserve and provide access to information. The identity crisis lies in the ability to continue that mission.

Evergreen ILS: Evergreen 3.0 development update #1

planet code4lib - Fri, 2017-04-14 12:30

1905 woodcut by Bertha Lum, retrieved from the Library of Congress

We have ambitious plans for Evergreen 3.0. Not only will it mark the first release where the community will fully support production use of the web staff client, a number of new features are in the works, including copy tags, batch patron editing, and support for performing ebook circulation transactions directly in the public catalog. A full list of the planned features can be found on the roadmap. (And if you have any other features in the works, please add them to the roadmap by the end of the day on 14 April.)

In addition, some things will be going away. The open-ils.permacrud service, a Perl predecessor to open-ils.pcrud that is barely used, will be removed outright. The XUL staff client will still be present in 3.0, but it will be deprecated, and is slated to be removed in the Fall 2018 release.

Some changes to the project’s development infrastructure may happen as well during the 3.0 cycle. In particular, there was a discussion at the hackfest during the Evergreen International Conference last week about possibly replacing Launchpad for bug tracking with something that manages both our Git repositories and issue tracking. If you have thoughts on the matter, please add them to the wiki page where we’re discussing this.

Speaking of the conference, several presentations touched on development, documentation, and translation matters. Here’s a list of the ones for which slides are available as of this writing:

As of this writing, 13 patches have been committed to the master branch since 6 April 2017. It may be useful to mention how I arrived at this number. I ran git log --pretty="%cd %s" --date=short --since '2017-04-06 23:59:59' origin/master. Breaking this down, git log is the command that lists the history of Git commits in a branch; --pretty="%cd %s" says to output the commit date (%cd) and subject line (%s); --date=short says to format the date as YYYY-MM-DD; --since '2017-04-06 23:59:59' says to include only commits applied after that time; and origin/master is the branch to report on. (I did a git fetch origin first.) The output ended up being:

2017-04-12 LP#1670425: RTL improvements to new advanced search limiter block
2017-04-12 LP#1670425: Adjust the release notes entry to reflect changes
2017-04-12 LP#1670425: New responsive design for advanced search limiters block
2017-04-12 LP#1670425: Moving display of advanced search limiters on search results page
2017-04-12 LP#1665933: describe the new -x option when running -h
2017-04-12 LP 1665933: Skip XUL staff client build in make_release.
2017-04-11 LP#1680624 Remove bower packaging bits
2017-04-11 LP#1680624 angular-ui-bootstrap stopped shipping minified files
2017-04-11 LP#1680624 Consolidate package dependencies into package.json
2017-04-11 LP#1680312: Fix IDs for 950.data.seed-values.sql for i18n
2017-04-11 LP#1680312 Ensure oils_i18n_gettext keys are unique
2017-04-10 LP#1677416: unbreak use of egOrgSelector by egEditFmRecord
2017-04-10 LP#1167541: Use Patron home org for pickup lib instead of staff's

Duck trivia

For 22 years, Cincinnati (across the river from where the 2017 Evergreen International Conference was held in Covington, Kentucky) has held a Rubber Duck Regatta benefiting a local food bank.
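Back to the commit report: the git log flags described above can be tried safely without touching a real Evergreen checkout. Here is a minimal sketch using a throwaway repository (the repository path, author identity, and commit message are all illustrative; it only assumes git is on your PATH):

```shell
# Create a throwaway repo with one commit, then run the same report.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
# Illustrative author identity and commit message.
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "LP#0000: example fix"
# %cd = commit date, %s = subject; --date=short gives YYYY-MM-DD.
out=$(git log --pretty="%cd %s" --date=short --since '2017-04-06 23:59:59')
echo "$out"
```

Each output line has the same shape as the listing above (date, then subject), so piping the result through wc -l is one way to arrive at a patch count.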

Submissions

Updates on the progress to Evergreen 3.0 will be published every Friday until the general release of 3.0.0. If you have material to contribute to the updates, please get it to Galen Charlton by Thursday morning.

Cynthia Ng: COSUGI 2017: Walk like a SQLian

planet code4lib - Thu, 2017-04-13 22:10
Got a session on SQL by Jeremy Newville. Introduction SQL is what allows direct work with the Horizon database, useful both inside and outside of the client. Inside the Client Many mq view reports allow good search options, but can’t always get the views you want. Example: want to see list of borrowers where barcodes … Continue reading COSUGI 2017: Walk like a SQLian

Cynthia Ng: COSUGI 2017: Horizon System Administrator Sharing Session

planet code4lib - Thu, 2017-04-13 21:07
Moderated by Kay Dunker, Systems Librarian, Valley Library Consortium Sybase: upgrade to 16, went fine. Clients do not need to be updated. Windows 64-bit: Horizon fresh client install has issues, Sybase chokes. No concrete documentation yet. Windows 10 Anniversary edition: client locks up if tab away. Tentative workaround, click on taskbar to minimize and re-open. … Continue reading COSUGI 2017: Horizon System Administrator Sharing Session

Cynthia Ng: COSUGI 2017: BlueCloud Circulation for Horizon

planet code4lib - Thu, 2017-04-13 19:49
Taking a look at BlueCloud Circulation with a demo. Basics newest addition to the “family” 35 pilot sites first users: academic (load patron records), special, K-12 (records loading, check in/out), outlets (community centres, etc.), consortia members requirements: Horizon 7.5.3+, Web services 2017.01+, BlueCloud Central (institution, users, circulation role, profiles), ILS policies for circulation certificate for … Continue reading COSUGI 2017: BlueCloud Circulation for Horizon

Cynthia Ng: COSUGI 2017: Horizon Lightning Talks

planet code4lib - Thu, 2017-04-13 19:41
Lightning talks specifically on Horizon. Let’s learn some stuff! Using the Horizon Debugger client comes with built-in debugger that records all database transactions log can provide relevant table and column names, if task can be done with SQL, help troubleshoot performance issues invoke debugger with Ctrl+Alt+Shift+D Options DbCommand: make sure this box is checked so … Continue reading COSUGI 2017: Horizon Lightning Talks

Brown University Library Digital Technologies Projects: Ivy Plus Discovery Day

planet code4lib - Thu, 2017-04-13 18:48

On June 4-5, 2017 the Library will host the third annual Ivy Plus Discovery Day. “DiscoDay”, as we like to call it, is an opportunity for staff who work on discovery systems (like Blacklight Josiah) to share an update of their work in progress and discuss common issues.

On Sunday, June 4, we will have a hackathon on two topics:

  • StackLife — integrating virtual browse in discovery systems
  • Linked Data Authorities — leveraging authorities to provide users with another robust method for exploring our data and finding materials of interest

On Monday, June 5 there will be a full day of sharing and unconference discussion sessions.

We expect about 40 staff from the 13 Ivy Plus Libraries. We've initially limited participation to three staff from each institution, and we hope to have a good mix of developers, metadata specialists, user experience librarians, and others whose work is closely tied to their institution's discovery system.

For more information about Discovery Day see: https://library.brown.edu/create/discoveryday/
