
Mark E. Phillips: UNT Libraries’ Digital Collections 2016 in Review: Items

planet code4lib - Mon, 2017-01-23 17:29

This post is an overview of 2016 for the UNT Libraries’ Digital Collections.  I have wanted to do one of these for a number of years now but never got around to it.  So here we go.

I plan to look at three areas of activity for the digital collections: content added, usage, and metadata curation activities.  This first post focuses on items added.

Items added

From January 1, 2016 until December 31, 2016 we added a total of 295,077 new items to the UNT Libraries’ Digital Collections.  The UNT Libraries’ Digital Collections encompasses The Portal to Texas History, the UNT Digital Library, and the Gateway to Oklahoma History.  The graphic below shows the number of records added to each of the systems throughout the year.

Items Added by System

The Portal to Texas History (PTH in the chart) had the most items added at 145,268 new items.  This was followed by the UNT Digital Library (DC in the chart) with 124,402 items and finally the Gateway to Oklahoma History (OK in the chart) with 25,809 new items.

If you look at files (often ‘pages’) instead of items, the graph changes a bit.

New Pages by System

While we added the most items to The Portal to Texas History, we added the most pages of content to the UNT Digital Library.  In total we added 5,704,046 files to the Digital Collections in 2016.

Added by Date

The number of items added per month is a good way of getting an overview of activity across the year.  The graphic below presents that data.

New Items By Month

The average number of items added per month is 24,590, which is a very respectable number. When you look at the number of items added on a given day during the year, the graph is a bit harder to read, but you can see some days that had quite a bit of data loading going on.

New Items Added Per Day

As you can see, it is a bit harder to tell what is going on.  Some days of note include May 19th, with 19,858 items processed and uploaded, March 19th with 16,649, and January 13th with 13,338 new items added.  There are at least six other days with over 10,000 items processed and added to the Digital Collections.

If you take the number of items and spread them across the entire year, you get an average of 808 items loaded into the system per day.  Not bad at all. There were actually 165 days during 2016 on which no items were added to the Digital Collections, which leaves an impressive 200 days on which new content was being processed and loaded. When you remove weekends, that works out to content being added almost four days a week.

Another fun number to think about: at an average of 808 items added per day during 2016, that works out to about 33.7 items added per hour, or roughly one item created and added every two minutes.
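To spell out the arithmetic behind those figures (using the yearly total from above and treating the year as 365 days; the numbers are rounded):

\[
\frac{295{,}077\ \text{items}}{365\ \text{days}} \approx 808\ \text{items/day},
\qquad
\frac{808}{24} \approx 33.7\ \text{items/hour},
\qquad
\frac{3{,}600\ \text{s}}{33.7} \approx 107\ \text{s},
\]

or roughly one new item every two minutes around the clock.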

Items by Type

Next up is a look at what kinds of items were added throughout the year.  I’m going to base these numbers on the resource type field of each record.  If for some reason an item doesn’t have a resource type set, then it will have a value of None.

Resource Type          Item Count   % of Total
text_newspaper            124,662       42.25%
text_report                56,279       19.07%
image_photo                42,203       14.30%
text_article               31,129       10.55%
video                      12,238        4.15%
text_script                 7,230        2.45%
sound                       4,956        1.68%
image_drawing               4,097        1.39%
text_etd                    2,763        0.94%
text                        2,365        0.80%
text_leg                    1,433        0.49%
image_postcard              1,193        0.40%
text_journal                  886        0.30%
text_book                     858        0.29%
text_pamphlet                 778        0.26%
text_letter                   541        0.18%
None                          523        0.18%
text_clipping                 174        0.06%
physical-object               144        0.05%
image_presentation            125        0.04%
text_legal                    111        0.04%
text_review                   107        0.04%
image_poster                   89        0.03%
text_yearbook                  47        0.02%
text_paper                     37        0.01%
dataset                        29        0.01%
image_map                      22        0.01%
website                        11        0.00%
image                          11        0.00%
image_score                    11        0.00%
image_artwork                   8        0.00%
text_chapter                    7        0.00%
collection                      5        0.00%
text_poem                       3        0.00%
interactive-resource            2        0.00%

I’ve taken the ten most commonly added item types, which account for over 97% of the items added to the system, and made a little pie chart of them below.

Item by Type

As you can see, the Digital Collections added a large number of newspapers over the past year.  Newspapers accounted for 124,662 new items, or about 42% of the items added to the system.  There were a large number of reports, photographs, and articles added as well.  Coming in as the fifth most-added type are videos, of which we added 12,238 new video items.

Items by Partner

Because we work with a number of partners here at UNT, across Texas, and into Oklahoma, each item we upload into the system is associated with a partner. Throughout the year we added items to 154 different partner collections in the UNT Libraries’ Digital Collections.  Below are the ten partners that contributed the most content to the collections in 2016.

Partner                                         Partner Code   Item Count   Item Percentage
UNT Libraries Government Documents Department   UNTGD              90,393            30.63%
UNT Libraries’ Special Collections              UNTA               32,263            10.93%
Oklahoma Historical Society                     OKHS               25,786             8.74%
Texas Historical Commission                     THC                25,222             8.55%
UNT Libraries                                   UNT                15,319             5.19%
Cuero Public Library                            CUERPU              5,901             2.00%
Nellie Pederson Civic Library                   CLIFNE              5,881             1.99%
Coleman Public Library                          CLMNPL              5,729             1.94%
Gladys Johnson Ritchie Library                  GJRL                4,850             1.64%
Abilene Christian University Library            ACUL                4,359             1.48%

You can see that we had a strong year for the UNT Libraries’ Government Documents Department, which added over 90,000 items to the system.  We have been ramping up digitization activities for the UNT Libraries’ Special Collections, and you can see the results in the over 32,000 new items added to the UNT Digital Library.

Closing

I think that’s just about it for the year overview of new content added to the UNT Libraries’ Digital Collections.  Next up I’m going to dig into some usage data that was collected from 2016 and see what that can tell us about last year.

I’m quite impressed with the amount of content that we added in 2016.  Adding 295,077 items to the Digital Collections brought us to 1,751,015 items and 26,326,187 files (pages) of content in the systems.  I’m looking forward to 2017 and what it has in store for us.  At the rate we added content in 2016, I have a strong feeling that we will pass the 2 million item mark.

If you have questions or comments about this post,  please let me know via Twitter.

Terry Reese: MarcEdit: Networked Task Folders and network latency

planet code4lib - Mon, 2017-01-23 17:21

I had a really interesting question make it into my email the other day.  A user had configured MarcEdit to use a networked task folder, and in general, it was working.  But then, it wouldn’t.  The folder was there, the tasks were there, but the program simply wouldn’t see the network.  Maybe that has happened to you – you’ve selected a network task folder, uploaded the changes, and then had MarcEdit fall back into offline mode.  So what’s happening?

Well, the culprit here is most likely network latency.  Here’s the problem – Windows, by default, will keep trying and trying and trying to connect to a networked folder.  By default, the timeout to reconnect to a networked device is over 100 seconds.  When you are offline, that would make performance simply unacceptable, because the areas where networked task directories need to be resolved would freeze, locking the program. To solve that issue, I have a small function in the application that checks to see if a directory (the networked task directory) exists, and it sets a timeout.  By default, I’ve set the timeout to 300 milliseconds.  This doesn’t sound like a very long time, but it’s ages in network time.  All the network has to do is respond to a ping.  To support this, I use a function that looks something like this:

private bool VerifyDirectoryExists(Uri uri, int timeout)
{
    // Check for the directory on a separate task so the check can be abandoned
    // after a short timeout rather than waiting out Windows' default
    // (100+ second) network reconnect behavior.
    var task = new System.Threading.Tasks.Task<bool>(() =>
    {
        var fi = new System.IO.DirectoryInfo(uri.LocalPath);
        return fi.Exists;
    });
    task.Start();

    // If the task doesn't complete within the timeout, report the folder as offline.
    return task.Wait(timeout) && task.Result;
}

If you look at the code, you’ll see I have a timeout that can end a specific thread and thus, allow the program to continue.  This is how MarcEdit determines if the network folder is offline.  It works well, but if there is significant latency on the network, 300 milliseconds may be too small. 
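To make that concrete, here is a rough, hypothetical sketch of how such a check might be wired in to decide whether a networked task folder is reachable. The method and constant names below are illustrative rather than MarcEdit's actual internals, and the 300-millisecond figure is just the default described above.

// Hypothetical caller (not MarcEdit's real code): gate access to the networked
// task folder behind the latency-bounded existence check instead of relying on
// Windows' long default reconnect timeout.
private const int NetworkTimeoutMilliseconds = 300; // default discussed in the post

public bool IsTaskFolderOnline(string taskFolderPath)
{
    // UNC paths (e.g. \\server\share\tasks) are accepted by the Uri class, and
    // Uri.LocalPath hands the path back for DirectoryInfo to check.
    var uri = new Uri(taskFolderPath);
    return VerifyDirectoryExists(uri, NetworkTimeoutMilliseconds);
}

If the call returns false, the program can drop into offline mode immediately rather than blocking while Windows keeps retrying.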

To support users that may run into this problem, I’ve added a new preference.  In the Locations tab, I’ve added the ability to change the latency timeout. 

By default, this value will remain at 300 milliseconds, but users have the option to change it.  Users also need to keep in mind that the timeout is set in milliseconds, and there is a maximum value of 100 seconds (the Windows default timeout, which is controlled via the registry).  Personally, I would recommend against setting this value above 1 second (1,000 milliseconds), because you will notice program freezing when truly offline.  What’s more, I’d argue that if your network’s latency requires this kind of setting, using the networking options likely isn’t the best choice given your environment.  But these options are now available for users.  This feature was rolled into the Windows version as of Update build 6.2452 and will be moved into the MacOS version of MarcEdit later this week.

–tr

Islandora: Islandora 7.x-1.9 Call for Volunteers

planet code4lib - Mon, 2017-01-23 15:25

Islandora is released twice-yearly, at the end of April and October.

We are now looking for volunteers to join the team for the April release of Islandora 7.x-1.9, including a Release Manager, if anyone wants to try their hand at the wheel (support from former release managers is readily available!). We also welcome co-managers if you prefer to tackle it as a team.

The roles are described in detail here, but in short we are seeking:

* Release Manager 
* Testing Manager 
* Documentation Manager
* Auditing Manager 
* Component Managers
* Testers
* Documenters 
* Auditors 

If you have been a Tester, Documenter, or Auditor for a previous Islandora Release, please consider taking on a little more responsibility and being a mentor to new volunteers by managing a role!

Details on exactly how to Audit, Test, and Document an Islandora release are listed here.

SIGN UP HERE

Why join the 7.x-1.9 Release Team?

* Give back to Islandora. This project survives because of our volunteers. If you've been using Islandora and want to contribute back to the project, being a part of a Release Team is one of the most helpful commitments you can make.

* There's a commitment to fit your skills and time. Do you have a strong grasp of the inner workings of a module and want to make sure bugs, improvements, and features are properly managed in its newest version? Be a Component Manager. Do you work with a module a lot as an end user and think you can break it? Be a Tester! Do you want to learn more about a module and need an excuse to take a deep dive? Be a Documenter! Do you have a busy few months coming up and can't give a lot of time to the Islandora release?  Be an Auditor (small time commitment - big help!). You can take on a single module or sign up for several. 

* Credit. Your name goes into the release announcement and Release notes for posterity.

* T-Shirts. Each member of an Islandora Release Team gets a t-shirt unique to that release. They really are quite nifty.

Tentative schedule for the release:

* Code Freeze:  Wednesday, March 1, 2017
* Release Candidate: Monday, March 13, 2017
* Release: Friday, April 28, 2017

OCLC Dev Network: Image Open Access: Implementing IIIF in CONTENTdm

planet code4lib - Mon, 2017-01-23 14:00

The latest release of CONTENTdm introduced support for the International Image Interoperability Framework (IIIF) Image API version 2.1.

LibUX: Common Data Mining Tools

planet code4lib - Mon, 2017-01-23 13:55

One of my favorite quotes, one that I refer to a lot in the field of User Experience, comes from Sherlock Holmes in A Scandal in Bohemia.

“It’s a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.”

The quote describes a problem I run into a lot in my User Experience work, so I wanted to write about it and champion data mining and the use of analytical data. For a User Experience professional, data mining is very useful for generating new information from large amounts of data. This article lists common analytical data applications that can provide user data and should inform further user research.

Over the years I’ve come to understand that everyone’s experience with analytical data varies greatly. To that end, enterprise software suites, 3rd-party software applications, and simple browser extensions – some of which you may currently use, some not – can provide valuable data and parameters when conducting audits, redesigns, research, or usability tests. Broadly speaking, most analytical data applications are stand-alone 3rd-party tools, part of Adobe’s suite, built into a web browser, or offered by Google.

Crazy Egg’s Heatmaps and Visitor Insights

3rd-Party Applications

The following are various popular data mining tools used to accomplish many different tasks:

  • CrazyEgg *
    Through Crazy Egg’s heat map and scroll map reports you can get an understanding of how your visitors engage with your website so you can boost your conversion rates. In CrazyEgg you create experiments that run for a certain amount of users, days, or both.
  • ForSee
    ForSee turns customer insights into an action plan – with embedded polls, questionnaires, and surveys – with multichannel customer experience analytics for web, mobile, and contact centers.
  • Hodes
    Redefining how brands and talent connect, Hodes is a full-service employer brand agency that uniquely connects companies to talent. Hodes combines analytics with spending to calculate CPH (cost per hire) and conversions.
  • PageFair
    Adblocking has gone mainstream, and PageFair’s goal is to protect the future of the free web by re-establishing a fair deal between web users and the content creators they want to support. Adblocking can disable external fonts, social media iconography, pop-up windows, and more. PageFair detects what percentage of your visitors are using adblocking.
  • QR Codes
    QR Codes (abbreviated from Quick Response Codes) let you track the scan statistics – how many times, when, where and with what devices the codes have been scanned – allowing you to notice any changes in performance immediately, and gauge real world and app integration.
  • Qualtrics
    With Qualtrics survey software you can capture, analyze, and act on insights. Qualtrics makes it easy for you to build and share a survey with peers inside or outside your organization.
  • ShareThis
    The ShareThis button is an all-in-one widget that lets people share any content on the internet with friends via e-mail, social media, instant messenger, or text message. The ShareThis Social Optimization Platform affords A/B testing and viral prediction. The ShareThis box and integration is 100% free, but analytics require a paid subscription.
  • Survey Monkey
    SurveyMonkey is an online survey development cloud-based software that allows you to create surveys, publish online surveys in minutes, and view results graphically and in real time.
  • Webalizer (or other server-statistics)
    Webalizer is a website traffic analysis server-side application, produced by grouping and aggregating various data items. These data items are captured by the web server in the form of log files, while the website visitor is browsing the website. Comparing server-side statistics against both Adobe and Google Analytics identifies the number of real humans versus bots.

Adobe Marketing Cloud’s eight applications

Adobe Marketing Cloud

Adobe Marketing Cloud consists of the following eight data resources:

  • Analytics *
    Adobe Analytics is a set of tools for predictive and real-time analytics that can be integrated into third-party sources. It includes the Marketing Reports and Analytics (formerly SiteCatalyst), Ad hoc analysis (formerly Adobe Discover) and Data Workbench (formerly Insight) applications to help create a holistic view of business activities by transforming customer interactions into insights.
  • Audience Manager
    Adobe Audience Manager is a data management platform that can be used to create profiles of audience segments. These profiles can then be used for targeted ad campaigns.
  • Campaign
    Adobe Campaign is an analytics tool that helps users build a personalized experience based on customer habits and preferences. It plans, manages and executes campaigns from a unique environment. You can now have an intuitive, automated way to deliver messages over marketing channels. The new Adobe Campaign, formerly Neolane, is now being integrated with Adobe Experience Manager to help predict customers’ needs.
  • Experience Manager
    Adobe Experience Manager is a web content management system for organizing, managing, and delivering creative assets. The user can use templates to create targeted content and publish them securely in the cloud. It is derived from a product called CQ by Day Software, which Adobe acquired in 2010.
  • Media Optimizer
    Adobe Media Optimizer is a tool that manages, forecasts and optimizes media. It provides a consolidated view of how media is performing together with tools to accurately forecast user media. Media Optimizer helps you manage search engine marketing, display, and social campaigns.
  • Primetime
    Adobe Primetime is a video platform that can be used to create and monetize video content, and make it available across multiple types of devices. A strategic partnership with comScore, announced in March 2016, will promote the collection and interpretation of viewing metrics across a range of non-traditional TV devices.
  • Social
    Adobe Social is a tool for managing social content and social campaigns. It’s a comprehensive solution for building stronger connections through data-driven content. It deals with relevant posts, insightful conversations, measurable results, and social activities connected to business. Adobe Social is about the discovery of precise content, social networks and business results.
  • Target
    Adobe Target is a tool for testing and targeting digital experiences. It includes a user interface, built-in best practices, and robust optimization tools for following site visitors. With its self-learning algorithmic approach it is able to increase conversion and filter results precisely. Adobe Target also uses factorial testing to understand elements for real-time targeted content. Adobe Target uses automated behavioral targeting with acquired data such as IP addresses, time of day, referral URLs and brand affinity.

Browser Tools

Apple Safari, Google Chrome, Microsoft Internet Explorer/Edge, and Mozilla Firefox all have tools that help with JavaScript debugging/errors, performance load times, performance audits under different speeds, DOM inspection, CSS changes, and storage issues.

  • Chrome DevTools
    To access the DevTools, open a web page or web app in Google Chrome. Then either select the Chrome menu at the top-right of your browser window and choose Tools > Developer Tools, or right-click on any page element and select Inspect Element.
  • Internet Explorer Developer Tools
    On any site you want to debug, open the Developer Tools and switch to the Script tab, then click Start Debugging. When starting the debugging process, the Developer Tools will Refresh the page and Unpin the tools if it is pinned.
  • Firefox Developer Tools
    There are a few different ways to open the Toolbox: select “Toggle Tools” from the Web Developer menu (under “Tools” on OS X and Linux, or “Firefox” on Windows), or click the wrench icon in the main toolbar or under the Hamburger menu and then select “Toggle Tools”.
  • Safari Web Development Tools
    To access them, enable the “Show Develop menu in menu bar” setting found in Safari’s preferences under the Advanced pane. You can then open Web Inspector through the Develop menu that appears in the menu bar, or by pressing Command-Option-I.

Google AdWords

Google AdWords is an advertising service for those wanting to display ads on Google and its advertising network. AdWords enables businesses to set a budget for advertising and only pay when people click the ads. The ad service is largely focused on keywords, and consists of the following eight tools:

  • Change History
    Your AdWords account contains a history of changes that shows what you’ve done in the past. This change history can help you better understand what events may have triggered changes in your campaigns’ performance. You can then filter the changes to see only the ones you’re interested in. You could filter by date range, campaign, ad group, or user, for example.
  • Conversions
    Conversion tracking is a free tool that shows you what happens after a customer clicks on your ads – whether they purchased a product, signed up for your newsletter, called your business, or downloaded your app.
  • Attribution
    You can use the Model Comparison Tool to compare how different attribution models impact the valuation of your marketing channels. In the tool, the calculated Conversion Value (and the number of conversions) for each of your marketing channels will vary according to the attribution model used.
  • Analytics  *
    Google Analytics Solutions offer free and enterprise analytics tools to measure website, app, digital, and offline data to gain customer insights. By enabling your Advertising Features, Google Analytics will collect additional data about your traffic (you may need to update your privacy policy before enabling Advertising Features).
  • Google Merchant Center
    Google Merchant Center is a tool which helps you to upload your product listings for use with Google Shopping, Google Product Ads, and Google Commerce Search.
  • Keyword Planner
    Keyword Planner is a free AdWords tool that helps you build Search Network campaigns by finding keyword ideas and estimating how they may perform.
  • Display Planner
    An AdWords tool that provides ideas and estimates to help you plan a Display Network campaign that you can add to your account or download. Display Planner generates ideas for all the ways you can target the Display Network. Targeting ideas are based on your customers’ interests or your landing page.
  • Ad Preview and Diagnosis
    A tool in your account that helps identify why your ad or ad extension might not be appearing. The tool also shows a preview of a Google search result page for a specific term.

Even though each User Experience professional has experience with different software suites, 3rd-party software, or even browser extensions, hopefully this article has provided you with some additions or alternatives that prove to be valuable. This list of software and applications is by no means complete, so please comment and share any other applications or tools you use to get analytical data from your users.

* Adobe and Google Analytics don’t track external links by default. External links can be tracked by Google Analytics Solutions, but a line of JavaScript must be added to each external link. Adblocking software disables ShareThis and JavaScript external link detection. Heatmaps are the only way to determine external link clicking without JavaScript detection.

Terry Reese: MarcEdit Update (Windows and Linux)

planet code4lib - Mon, 2017-01-23 06:14

Couple updates, couple bug fixes.  Change log below.

6.2.452
* Bug fix/Behavior Change: Export Tab Delimited Records: Second delimiter insertion should be standardized with all regressions removed.
* New Feature: Linked Data Tools: Service Status options have been included so users can check the status of the currently profiled linked data services.
* New Feature: Preferences/Networked Tasks: MarcEdit uses a short timeout (0.3 seconds) when determining if a network is available.  I’ve had reports from folks using MarcEdit whose networked folder was dropped by MarcEdit.  This is likely because their network has more latency.  In the preferences, you can modify this value.  I would never set it above 500 milliseconds (0.5 seconds) because it will cause MarcEdit to freeze when off network, but this will give users more control over their network interactions.
* Bug Fix: Swap Field Function: The new enhancement in the swap field function added with the last update didn’t work in all cases.  This should close that gap.

The update can be found on the download page (http://marcedit.reeset.net/downloads) or via the automatic update tool.

–tr

DuraSpace News: CALL for Participation in DSpace 7 Development

planet code4lib - Mon, 2017-01-23 00:00

From Tim Donohue, DSpace Tech Lead, on behalf of the DSpace 7 team

DuraSpace News: VIVO Updates–VIVO Camp, Triple Pattern Fragments

planet code4lib - Mon, 2017-01-23 00:00

From Mike Conlon, VIVO Project Director

DPLA: ALA Midwinter 2017 Update

planet code4lib - Sat, 2017-01-21 17:44

2016 was another exciting and busy year at the Digital Public Library of America, with extensive growth of our national network and the launch of important projects to standardize rights statements, provide greater access to ebooks, curate our materials for education, and improve the technical systems our community relies upon.

DPLA continues to expand our network and collections rapidly. At present, we have over 15 million resources from 2,200 contributing institutions throughout the country. Our network is currently comprised of 17 Content Hubs and 25 Service Hubs. This year we accepted applications for new Service Hubs from Mississippi, Oklahoma, Florida, Montana and Ohio. We also announced an important partnership with the Library of Congress as a Content Hub, and saw the first of many Library of Congress collections go live in DPLA.

In addition to partnership and collection growth, DPLA was pleased to be a key partner in the launch of RightsStatements.org, standardized statements to express copyright status for digital objects. With the launch of these statements, we have asked the DPLA network to begin implementing RightsStatements.org statements in their metadata. In the coming weeks, you will begin to see these standardized statements appear in the DPLA portal and through our API. In the coming months, we will add features that will allow users to find materials based on their rights status, a critical step toward greater use and reuse.

DPLA has been thrilled with the major impact of its partnership to help children in need gain access to ebooks. The Open eBooks initiative announced in the fall that over 1 million books had been read by those children in just the first nine months, and hundreds of thousands of additional books have been read since then. We recently received a major grant from the Alfred P. Sloan Foundation to accelerate our efforts to provide broad access to ebooks. That work will involve a significant increase in our activities along with our partners in 2017 and beyond, something that we have begun to discuss here at ALA Midwinter.

The education-oriented part of our site has grown substantially over the last year, as has its impact. We now have 100 primary source sets of curated materials drawn from our thousands of contributing institutions, and hundreds of thousands of students and teachers have taken advantage of those sets over the past year.

Another partnership, with Stanford and DuraSpace, has pushed forward on a next-generation repository and aggregation service that should help our Hubs and many others working with cultural heritage materials. Formerly called Hydra-in-a-Box and now with the snazzier name Hyku, this Hydra-based software will see important milestones in 2017, including the first pilots with our partners.

To assist our Hubs in the on-boarding process and in continuing partnership with DPLA, Kelcy Shepherd joined the DPLA staff in August. Also in the fall Michael Della Bitta joined us as our new Director of Technology, and he has already started to streamline and accelerate some of the core technologies in our ingest process and platform. And just this week, Arielle Perry became our program assistant, helping our organization and community with logistics, communication, and event planning.

Finally, speaking of events, registration for our annual meeting of the DPLA community and those who care about maximizing access to our shared culture, DPLAfest 2017, is now open. This fourth major gathering will take place on April 20-21, 2017 in Chicago at Chicago Public Library’s Harold Washington Library Center. The hosts for DPLAfest 2017 include Chicago Public Library, the Black Metropolis Research Consortium, Chicago Collections, and the Reaching Across Illinois Library System (RAILS). We do hope you join us in Chicago on the fourth anniversary of our launch!

Tara Robertson: Trying to track the changes to the PDF of the Women’s March’s Unity Principles

planet code4lib - Fri, 2017-01-20 22:49

From the title of this post you have probably already figured out that I wasn’t successful in tracking when the PDFs on the Women’s March Unity Principles page changed. It’s always less fun to document when something doesn’t work the way you wanted, but I’m doing this in case it’s useful for anyone else.

These words of wisdom have helped me through this week:

Your feminism is either intersectional or it is garbage

— Lorelei Lee (@MissLoreleiLee) January 18, 2017

Why was I even trying to do this?

It was easy to set up Versionista to track changes to the Women’s March Unity Principles webpage. On this page there’s a link to a longer PDF document. I wanted to be able to save the various versions of the full PDF statement and then compare the different versions to see what changes happened. I know that this document has also changed because people have screenshots of various versions. Also, this document used to be 5 pages and now it’s 6.

This started as a place for me to put my anger around sex workers being thrown under the bus by the Women’s March. In watching the changes to the website I also saw how “disabled women” was added to the first paragraph of that page. To me, the changes in language (additions, deletions, changes) illustrate power struggles within this movement. I’m so curious about the politics behind each edit.

Library technology colleagues are awesome

I’m really lucky to work with library technology colleagues who are smart, curious and generous. A big thank you to Peter Binkley for his time tweaking a script he had written to email him updates to the bus schedule when the PDF schedule was changed. Peter made some changes to his script to email both of us changes to the PDFs on the Women’s March site. Unfortunately that didn’t work, as the name of the PDF and the location of the file kept changing.
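For anyone curious what that kind of monitoring involves, here is a minimal sketch in the same spirit (it is not Peter’s actual script): it fetches a PDF from a fixed URL and compares a hash of the bytes against the last copy seen. The URL and file names are placeholders, and as noted above, the approach falls apart when the document’s name and location keep changing.

// Minimal sketch, not Peter's actual script: detect whether a PDF at a fixed
// URL has changed since the last check by comparing SHA-256 hashes (.NET 5+).
using System;
using System.IO;
using System.Net.Http;
using System.Security.Cryptography;

class PdfChangeCheck
{
    static void Main()
    {
        // Placeholder URL: in practice the document's name and location kept
        // changing, which is exactly why this approach failed here.
        var url = "https://example.org/unity-principles.pdf";
        var lastHashFile = "last-hash.txt";

        using var client = new HttpClient();
        byte[] pdfBytes = client.GetByteArrayAsync(url).Result;
        string newHash = Convert.ToHexString(SHA256.HashData(pdfBytes));

        string oldHash = File.Exists(lastHashFile) ? File.ReadAllText(lastHashFile) : "";
        if (newHash != oldHash)
        {
            // Keep a dated copy so the versions can be compared later, then
            // record the new hash for the next run.
            File.WriteAllBytes($"unity-principles-{DateTime.UtcNow:yyyyMMdd}.pdf", pdfBytes);
            File.WriteAllText(lastHashFile, newHash);
            Console.WriteLine("PDF changed; new copy saved.");
        }
        else
        {
            Console.WriteLine("No change detected.");
        }
    }
}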

Coming out as a former sex worker is the scariest thing I’ve done professionally. My big fear is that the people I work with (both at my workplace and in the Access and code4lib communities) would dismiss or shun me and the work that I do. These communities are really important to me, and it’s been amazing to have colleagues offer their technical smarts and support. When Christina Harlow suggested I could put the PDFs in GitHub and that she and others would help run comparisons and share the change outputs, I found myself crying on the bus.

Positionality

Being clear that I am a former sex worker (and a feminist and a librarian) positions me in a unique place to be making these critiques of the Women’s March. Librarianship is not neutral, and neither are the changes to Women’s March Unity Principles. Being out is also necessary to be trusted by some sex work activists–I’m not a researcher who wishes to study sex workers, I have this lived experience. While I have experience doing feminist activism, I have very little experience doing sex worker activism. It’s felt good to put my librarian skills to use in service of sex worker rights and supporting sex worker activists.

How to see what has changed in 2 versions of a PDF

There were 3 excellent suggestions from colleagues:

Juxta Commons

For a free, web based tool Juxta Commons does a lot and is easy enough to use.

Juxta Commons walkthrough from NINES on Vimeo.

According to the 4-year-old video, Juxta Commons can only accept plain text or XML, but according to the documentation it now accepts more file types: HTML files, Microsoft Word DOCX, Open Office, EPUB and PDF. I didn’t realize this, so I did the unnecessary step of converting the PDFs to text files using Omnipage.

I liked the different comparison tools. The heatmap shows where changes have happened, and there are icons to identify things that have been added, deleted or changed. For me the side by side comparison was the most useful. The histogram was also useful to see all of the changes on more of a macro level. This is how I realized that I was comparing different copies of the same version of the PDF.

Adobe Acrobat Pro – Compare Documents

I’m glad Carmen reminded me of this as I had forgotten it was there. This was pretty straightforward. You tell Adobe Acrobat which PDF is the newer one and which is the older one, tell it which pages you want to compare, and then pick from 3 different document layout types: 1) reports, spreadsheets, magazine layouts; 2) presentation decks, drawings, illustrations; 3) scanned documents.

Again, I was unknowingly comparing 2 copies of the same PDF and it found no changes.

Juxta Commons is way more useful, but most people already have Adobe Acrobat on their computer. If I had a bunch of documents to compare or was going to do this more than once I’d recommend using Juxta Commons.

Today Trump was inaugurated as the US President. Already his government is making radical changes to what information is on the White House website, including removing the LGBT rights page, and removing pages on civil rights, health care and climate change. As librarians we have some useful skills that we need to use to resist fascism and foster the social change we want to see.

Be careful with each other so we can be dangerous together.

 

Harvard Library Innovation Lab: Awesome Box was an Amazing Experiment. Thank you!

planet code4lib - Fri, 2017-01-20 19:08

Awesome Box was a highly successful experiment that helped LIL explore new ways of enabling peer to peer reading recommendations in libraries.

 

 

The Awesome Box was a physical box that a library would set next to the library’s regular returns box, and if you thought the book was mind blowing, you dropped it in the Awesome Box instead of the regular returns box. The librarian then had the option to scan the book into the Awesome Box website to enable digital sharing of lists of awesome items. Or, the librarian could keep things no-tech and put the item on a shelf labelled Community Recommendations.

Annie Cain and I created the Awesome Box after hearing about a similar idea functioning in a European library. In 2013, we developed the web app, received a little grant funding from Harvard’s Library Lab and the Arcadia Foundation, and started collaborating with libraries at Harvard, Somerville Public (first Awesome Box in the wild!!) , Cambridge Public, and Brookline Public here in the Boston area.

Annie and I (with Annie doing the lion’s share) worked hard to develop the Awesome Box community by quickly replying with advice when emails arrived and talking about Awesome Box at several conferences and gatherings of librarians.

I learned a ton about product development and adoption with the Awesome Box, but two big things stick out after much reflection — make the thing you’re building fit with the patterns of the folks that will use the thing (people are returning books anyway, they just need to choose a box), and you have to sell, sell, sell! Awesome Box is fun and free (as in open source and as in no money) and we still constantly talked it up and pushed it for three years. I’ve found that it’s hard to find success with a project if you just dump it on the web and expect people to use it — you’ve got to wire people to your project.

Awesome Box is certainly one of the most successful projects I’ve been lucky enough to be part of. And, arguably, one of the most successful projects to roll out of LIL. Thank you so much to all the libraries that joined together to make Awesome Box so much fun! If you’re a library and you didn’t have a chance to export your Awesome items, please drop me an email and I’ll get your data to you.

Awesome Box was an experiment. It’s done and the servers have been powered down. During its glorious run, the Awesome Box supported 512 private, public, and academic libraries across the US. The members of those libraries dropped 104,715 items in the Awesome Box from 2013 to 2016.

Thank you. Thank you. Thank you.

LITA: Fostering Digital Literacy at the Reference Desk

planet code4lib - Fri, 2017-01-20 15:00

Computing and digital literacy initiatives aren’t new in the library — planned programs and educational offerings that support digital citizenship exist in nearly every library in the nation. But digital literacy is developed not only via programs and classes; learning is supported by informal interactions between library staff and patrons. It’s important not to overlook instruction that occurs on a one-to-one basis.

Informal instruction is a concept in education that can be useful in libraries as well. Formal instruction takes place in the classroom, during a scheduled educational program. By contrast, predetermined learning outcomes are not built into informal instruction — from the learner’s point of view, what’s happening isn’t education, but experience: learning by doing.

Libraries are most effective at fostering digital literacy when staff take the same care during casual educational encounters as we do in the classroom. If patrons’ worst fears about their lack of knowledge are confirmed by staff attitudes during at-the-device instructional sessions, this acts as a wall to future teaching interactions, blocks patrons from asking questions, and makes them feel unmotivated to pursue the classes and programs that may be helpful to them.

At every library where I’ve worked since 2008, instructional questions far outweigh reference questions at the public service desks. Most of these instructional queries occur in the realm of computing and web help. In fact, I decided that I wanted to become a librarian because of an experience I had as a circulation supervisor, guiding a patron through the navigation tools for an online job application when the librarian was off-desk. A few weeks later, this patron returned to the library to tell me that he’d gotten a job after filling out a few more web applications on his own. The satisfaction of helping someone learn a skill that they found useful hooked me on public services.

This gentleman may never have attended the library’s computing classes, but he felt comfortable in the one-on-one environment, asking for help with his specific, task-oriented question. In many libraries, this kind of informal instruction comprises the bulk of our direct interactions with patrons. These are some of the practices I’ve developed over the years when providing on-demand computer help at the reference desk, with the ambition that a few of these educational opportunities morph into a-ha moments.

1) Legitimize the question, and begin by indicating a starting point for the process of solving for the x of the patron’s query — even before you’ve reached the computer.

“I’m having trouble. Can you show me how to find some information in JSTOR?”

“Of course! Tell me what you’d like to find and we can use the search tools to look for an article.”

2) Reassure the patron that their lack of knowledge is not unique.

“I feel so dumb for not knowing how to delete emails from my trash folder!”

“No way. I’ve seen this question before. You’re not alone in not knowing how it works.”

3) By default, give the patron the wheel, letting them find, drag, and click while you guide them to the controls they need. Describe areas of a screen with location language (upper right of the screen, at the bottom of the window, etc) and let the learner find the option they need by offering clues to its location and visual representation, e.g., “The button looks like a file folder.” This helps patrons build spatial relationships with the tasks they’re learning — letting them drive builds muscle memory for the task so that next time, it’ll be incrementally easier for them to remember the process.

4) If the patron signals that they’re more comfortable watching you perform the task, narrate your actions. Explain how you’re selecting an object (double-click, right-click, etc), what you’re doing with it…

“I click and hold to ‘drag’, and then let go where I want to ‘drop’,”

…and why.

“This will open up a file we need to download in order to install the update.”

Narration slows the process, which allows the patron to ask questions and absorb the steps they’re seeing unfold — which can go a long way toward helping them feel confident enough to try the task on their own as you remind them of the steps the next time they need assistance with it.

5) Throughout the process, ask the patron whether everything makes sense; recap what you’re doing as you go, and pay attention to the learner’s body language so you don’t move too quickly or past something they don’t understand — frowning, a shaking head, looking at the keyboard rather than the screen, furrowing the brow — each of these is a sign that the patron may not understand something, but isn’t quite sure how to frame a question about it.

6) Before ending the interaction, ask again — “Does this make sense?” — and check in to see if the patron has any additional questions. If it seems like nothing further is needed, congratulate them on a completed task and/or invite further questions in future.

“It looks like you’ve got it! Let me know if you need a refresher some other time, or if you run into anything else you want help with.”

Informal training has the potential to be even more effective than program-based learning because it’s task-based: the learner has a specific goal in mind, which provides an intrinsic motivation to master the skills shared with them. Sometimes patrons feel intimidated by a formal instructional setting — they don’t want to ask “dumb” questions in front of a group; they find that some of what’s covered is either over or under their current knowledge level, so they zone out; they may not see how they can apply the skills in a practical way. With informal interactions, the task is meaningful, so the process becomes almost secondary; patrons barely notice that they’re gradually building skills, in 5-minute increments every few days with librarian coaches at their computer stations. It’s important to make sure that we set the same tone of openness, exploration, and engagement whether we’re teaching a 3-week workshop on web basics, a 1-session class on email etiquette, or a 5-minute tutorial on how to fill out a job application at a public computer station.

 

Do you have any tips on successful one-on-one instructional interactions? Any challenges you’ve overcome or are facing now? How do you ensure that staff is on the same page when it comes to providing consistent computing help?

Evergreen ILS: Join Us At MidWinter!

planet code4lib - Fri, 2017-01-20 13:07

Join us at the American Library Association Midwinter Meeting in Atlanta for the Evergreen Community Meetup. The meetup is an opportunity for Evergreen users, enthusiasts, and potential future users to learn about Evergreen, see what’s up and coming in the software, hear how open source software empowers libraries, and find out about the vibrant community supporting Evergreen.

The meetup is scheduled for 3 to 4 p.m. Saturday, January 21 in Room Dogwood B at the Omni Hotel. All ALA attendees interested in learning about Evergreen are invited to attend.

LITA: Save the Date: LITA AdaCamp

planet code4lib - Fri, 2017-01-20 13:00

Save the date for this exciting LITA preconference at the upcoming ALA Annual conference in Chicago, IL.

LITA AdaCamp
Friday June 23, 2017, 9:00 am – 4:00 pm
Northwestern University campus in Evanston, IL

Women in library technology face numerous challenges in their day-to-day work. If you would like to join other women in the field to discuss topics related to those challenges, AdaCamp is for you. This one-day LITA preconference at ALA Annual in Chicago will allow women employed in various technological industries an opportunity to network with others in the field and to collectively examine common barriers faced. This day will follow the unconference model allowing attendees the power to choose topics most relevant to their work and their lives. Watch for more program details and registration information following ALA Midwinter!

Find out more about AdaCamp.

Open Knowledge Foundation: Danish Energinet.dk will use CKAN to launch Energy DataStore – a free and open portal for sharing energy data

planet code4lib - Fri, 2017-01-20 09:00

For immediate release

Open data service provider Viderum is working with Energinet.dk, the gas and electricity transmission system operator in Denmark, to provide near real-time access to Danish energy data. Using CKAN, an open-source platform for sharing data originally developed by Open Knowledge International, Energinet.dk’s Energy DataStore will provide easy and open access to large quantities of energy data to support the green transition and enable innovation.

Image credit: Jürgen Sandesneben, Flickr CC BY

What is the Energy DataStore?

Energinet.dk holds the energy consumption data from Danish households and businesses as well as production data from windmills, solar cells and power plants. All this data will be made available in aggregated form through the Energy DataStore, including electricity market data and near-real-time information on CO2 emissions.

The Energy DataStore will be built using open-source platform CKAN, the world’s leading data management system for open data. Through the platform, users will be able to find and extract data manually or through an API.
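To give a feel for what API access to a CKAN portal typically looks like, here is a small sketch that calls CKAN’s standard package_search action. The portal URL and search term are placeholders, since the Energy DataStore’s actual address and dataset names aren’t given here.

// Sketch of a query against a CKAN portal's Action API. The base URL and
// search term are placeholders, not the real Energy DataStore endpoint.
using System;
using System.Net.Http;

class CkanSearchExample
{
    static void Main()
    {
        var baseUrl = "https://data.example.dk"; // hypothetical CKAN instance
        var query = "electricity consumption";

        using var client = new HttpClient();
        // package_search is CKAN's built-in dataset search action; the response
        // is JSON with a "result" object listing the matching datasets.
        var url = $"{baseUrl}/api/3/action/package_search?q={Uri.EscapeDataString(query)}";
        string json = client.GetStringAsync(url).Result;
        Console.WriteLine(json);
    }
}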

“The Energy DataStore opens the next frontier for CKAN by expanding into large-scale, continuously growing datasets published by public sector enterprises”, writes Sebastian Moleski, Managing Director of Viderum, “We’re delighted Energinet.dk has chosen Viderum as the CKAN experts to help build this revolutionary platform. With our contribution to the success of the Energy DataStore, Viderum is taking the next step in fulfilling our mission: to make the world’s public data discoverable and accessible to everyone.”

Open Knowledge International’s commercial spin-off, Viderum, is using CKAN to build a responsive platform for Energinet.dk that publishes energy consumption data for every municipality in hourly increments with a look to provide real-time in future. The Energy DataStore will provide consumers, businesses and non-profit organizations access to information vital for consumer savings, business innovation and green technology.

As Pavel Richter, CEO of Open Knowledge International explains, “CKAN has been instrumental over the past 10 years in providing access to a wide range of government data. By using CKAN, the Energy DataStore signals a growing awareness of the value of open data and open source to society, not just for business growth and innovation, but for citizens and civil society organizations looking to use this data to address environmental issues.”

Energinet.dk hopes that by providing easily accessible energy data, citizens will feel empowered by the transparency and businesses can create new products and services, leading to more knowledge sharing around innovative business models.

Editor’s Notes:

Energinet.dk
Energinet.dk owns the Danish electricity and gas transmission system – the ‘energy’ motorways. The company’s main task is to maintain the overall security of electricity and gas supply and create objective and transparent conditions for competition on the energy markets.

CKAN
CKAN is the world’s leading open-source data portal platform. It is a complete out-of-the-box software solution that makes data accessible – by providing tools to streamline publishing, sharing, finding and using data. CKAN is aimed at data publishers (national and regional governments, companies and organizations) wanting to make their data open and available. A slide-deck overview of CKAN can be found here.

Viderum
Viderum is an open data solutions provider spun off from Open Knowledge, an internationally recognized non-profit working to open knowledge and see it used to empower and improve the lives of citizens around the world.

Open Knowledge International
Open Knowledge International is a global non-profit organisation focused on realising open data’s value to society by helping civil society groups access and use data to take action on social problems. Open Knowledge International does this in three ways: 1) we show the value of open data for the work of civil society organizations; 2) we provide organisations with the tools and skills to effectively use open data; and 3) we make government information systems responsive to civil society.

For more information, please contact Sierra Williams sierra.williams@okfn.org +44 07807 869884

William Denton: Politics in the Library

planet code4lib - Fri, 2017-01-20 04:31

Last month I read Sam Popowich’s post Gramsci and Library Neutrality, where he said he’d been “interviewed along with University of Alberta School of Library and Information Studies professor Michael McNally on the CJSR radio show Shout for Libraries.” I started following CJSR, and they must have broadcast the show a couple of days ago because it showed up on SoundCloud. I recommend it to anyone interested in libraries and politics (bearing in mind that if you don’t like it when people recommend reading Marx and Gramsci then the interview will rub you the wrong way).

Sam digs into Gramsci in his blog post:

We began by discussing the age-old question of library neutrality. Neither Michael nor I support the idea of library neutrality and, while I have met rank-and-file librarians who hold this position, I find it mostly part of the discourse and value system of library administrators. When Michael and I were asked why we think the idea of library neutrality continues to be so strongly held, we mentioned things like reification of social relations and hegemony. But the question made me start wanting to dig a little deeper into this: why has library neutrality continued to be a bone of contention ever since at least the 1970s debates around social responsibility and professionalism, if not before.

In the show Sam says this issue and others can come to a head when there’s a labour problem happening. Things get real. I can tell you that’s my experience where I work. I’m one of two librarian union stewards in the York University Faculty Association (Patti Ryan is the other), and we’ve been dealing with a variety of issues over the last few years. We haven’t had any trouble with “neutrality,” but other things have come up, and they’ve moved from being theoretical issues discussed in the abstract to being real things discussed in real terms because they are problems that need to be solved, and there is a framework—the collective agreement—we can use to do that.

Moss on a tree—not a metaphor.

At the end of the interview, Sam recommends reading Gramsci’s Notebooks and The Myth of the Neutral Professional. Michael McNally recommends In Solidarity: Academic Librarian Labour Activism and Union Participation in Canada, edited by Jennifer Dekker and my colleague Mary Kandiuk, which I think every Canadian academic librarian should have a look at, as well as any others interested in academic librarianship and labour issues.

LITA: #LITAchat “What is IoT”

planet code4lib - Thu, 2017-01-19 20:52

What is IoT, the Internet of Things, and how can you leverage these new “things” for your library?  From sensors to AI, IoT devices are springing up everywhere.  Join a conversation with Lauren Di Monte from North Carolina State to hear how they have leveraged IoT technologies in their makerspace and discuss general issues related to IoT and cyber-physical systems.  

LITA’s Membership Development Committee invites you to join in the Friday February 24, 2017 #LITAchat at 12pm (Central)

What: February #LITAchat – “What is IoT”

When: Friday, February 24th, 12pm-1pm (Central)

Where: Twitter

To participate, fire up your favorite Twitter client and check out the #LITAchat hashtag. On the web client, just search for #LITAchat and then click “LIVE” to follow along. Ask questions using the hashtag #LITAchat, add your own comments, and even answer questions posed by other participants. Hope to see you there!

District Dispatch: Want to double-down on fixing the Copyright Law? Fix EULAs.

planet code4lib - Thu, 2017-01-19 20:35

The American Library Association is taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of the law and addressing what’s at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Oh, EULAs! End-user license agreements, that is. Don’t you hate them? In addition to causing a lot of confusion for consumers and forcing them to give up fair use rights (as explained here), libraries and educational institutions have troubles as well. A growing number of resources purchased by libraries for their collections are only available in digital formats from consumer entertainment sites like iTunes. If the library buys a song from iTunes, under the EULA, all of the exceptions and limitations to copyright that libraries enjoy under the federal law do not exist. Instead, the use of the resource is limited to “personal, non-commercial use.” Libraries cannot lend the resource (Section 109), they cannot preserve or replace the resource (Section 108), they cannot make an accessible copy for people with disabilities (Section 121) and if used in the classroom even for face-to-face teaching (Section 110), educators cannot publicly perform the work. The only thing that can be done with the EULA-imprisoned resource is to add it to the collection!

The Music Library Association led an effort to address licensing and ownership issues of download-only music back in 2013. Recognizing that it would be unlikely that Congress would pass an amendment that would ensure that exceptions cannot be waived by licenses, the music librarians met with rights holders and music collecting societies to see if accommodations for non-profit, cultural institutions like libraries could be made. Libraries suggested an institutional license with a higher purchase price that would allow for library uses, but no dice.

Since librarians feel a responsibility to demonstrate ethical copyright behavior, most are unwilling to do the unthinkable – ignore the license. Is fair use an option? Probably not, because the license terms circumvent fair use rights as well. More than a few librarians have contacted staff at Amazon, Netflix, and iTunes, and, more often than not, the staff person says “we don’t care” what the library does with the resource other than making infringing copies. One could assume that the staff person has the authority to make that decision, document the conversation, and move forward. (I would.) Educators unaware of the license implications when buying consumer-licensed resources for classroom and research purposes end up using the materials just as they would any other resource. Let them remain uninformed. Let behavior demonstrate what is reasonable. It is not acceptable that willing library buyers cannot do what libraries do because of a click-on license.

The post Want to double-down on fixing the Copyright Law? Fix EULAs. appeared first on District Dispatch.

David Rosenthal: The long tail of non-English science

planet code4lib - Thu, 2017-01-19 16:00
Ben Panko's English Is the Language of Science. That Isn't Always a Good Thing is based on Languages Are Still a Major Barrier to Global Science, a paper in PLOS Biology by Tatsuya Amano, Juan P. González-Varo and William J. Sutherland. Panko writes:
For the new study, Amano's team looked at the entire body of research available on Google Scholar about biodiversity and conservation, starting in the year 2014. Searching with keywords in 16 languages, the researchers found a total of more than 75,000 scientific papers. Of those papers, more than 35 percent were in languages other than English, with Spanish, Portuguese and Chinese topping the list.

Even for people who try not to ignore research published in non-English languages, Amano says, difficulties exist. More than half of the non-English papers observed in this study had no English title, abstract or keywords, making them all but invisible to most scientists doing database searches in English. Below the fold, how this problem relates to work by the LOCKSS team.

It has long been a problem that the resources for preserving e-journal content were almost exclusively devoted to providing post-cancellation access rather than to preserving the academic record (both links are from 2007). In other words, resources went to preserving content that, because it was expensive, was not at risk. Estimates of how much of the record was being preserved ranged from about half downward. It is clear that the expensive, low-risk content is almost exclusively in English.

Over the years, the LOCKSS team have made several explorations of the long tail. Among these were a 2002 meeting of humanities librarians that identified high-risk content such as World Haiku Review and Exquisite Corpse, and work funded by the Soros Foundation with South African librarians that identified fascinating local academic journals in fields such as dry-land agriculture and AIDS in urban settings. Experience leads to two conclusions:
  • Both subject and language knowledge are important for identifying worthwhile long-tail content.
  • Long-tail content in English is likely to be open access; in other languages much more is subscription.
Both were part of the motivation behind the LOCKSS Program's efforts to implement National Hosting networks. Librarians in-country are far more capable of identifying, and negotiating with the publishers of, worthwhile long-tail content than we are. An example is Brazil's Cariniana, established by consortia of libraries to preserve their national open access academic literature, mostly in Portuguese.

Despite its importance, as shown by Amano et al., few if any individual libraries have the resources to collect and preserve their national-language academic literature. The collaborative networks of libraries engendered by the LOCKSS technology can operate at a national scale to address this problem more effectively and affordably.

LITA: LITA Highlights at ALA Midwinter

planet code4lib - Thu, 2017-01-19 15:00

ALA and LITA are heading to Atlanta for ALA Midwinter 2017. Whether or not you will be attending, there are plenty of opportunities to check out what’s happening at the conference. All the LITA highlights are on the LITA at Midwinter webpage.

You can find the whole LITA schedule in the Midwinter Scheduler. Most committee meetings are open to anyone, whether or not you’re on the committee, so feel free to stop by and check out what’s going on. There’s even a page showcasing the discussions managed by the LITA Interest Groups.

Make sure you don’t miss the following:

LITA Diversity and Inclusion Committee – Kitchen Table Conversation
Saturday, January 21 from 4:30 to 5:30 PM

LITA’s Diversity & Inclusion Committee is thrilled to give ALA and LITA members an opportunity to provide substantial feedback on developing inclusive programming and member services, as well as on meaningful membership outreach efforts over the coming years. LITA is dedicated to offering an inclusive community for our members and others attending our programs. This conversation series will be anchored by questions that will help us gauge how to improve in each of these areas: Where are our problems? What opportunities are we missing? How can we better support all of our members and attract and retain a more diverse membership?

LITA Open House
Sunday, January 22, 4:30-5:30 pm

All are welcome to meet LITA leaders, committee chairs, and interest group participants. We will share information about our recent and upcoming activities, build professional connections, and discuss issues in library and information technology. Whether you are considering LITA membership for the first time, a long-time member looking to engage with others in your area, or anywhere in between, take part in great conversation and learn more about volunteer and networking opportunities at this meeting.

LITA Happy Hour
Sunday, January 22, 6:00 to 8:00 PM

Please join the LITA Membership Development Committee and LITA members and friends from around the country for networking, good cheer, and great fun! We’re celebrating our 50th Anniversary as a division – don’t miss it. You can “Buy LITA a Drink” by filling up the LITA tip jar at the bar. Location: Gordon Biersch at 848 Peachtree Street NE, Atlanta, Georgia 30308 (404-870-0805).

LITA Top Technology Trends
Sunday, January 22, 1:00 pm – 2:30 pm

LITA’s premier program on changes and advances in technology. Top Technology Trends features our ongoing roundtable discussion about trends and advances in library technology by a panel of LITA technology experts and thought leaders. The panelists will describe changes and advances in technology that they see having an impact on the library world, and suggest what libraries might do to take advantage of these trends. This conference’s panelists and their suggested trends include:

  • Ken Varnum, Session Moderator, Senior Program Manager for Discovery, Delivery, and Learning Analytics, University of Michigan
  • Cynthia Hart, Emerging Technologies Librarian, Virginia Beach Public Library
  • Bill Jones, Creative Technologist, IDS Project
  • Gena Marker, Teacher-Librarian, Centennial High School Library (Boise, ID)
  • Meredith Powers, Senior Reference Librarian, Brooklyn Public Library

LITA Town Meeting
Monday, January 23, 8:30 to 10:00 AM

Even if you’re not going to be in Atlanta for ALA Midwinter, you can still participate in the LITA Town Meeting on Monday, January 23. Tune in at 8:50am EST to catch LITA VP Andromeda Yelton reviewing the results of the Personas Task Force study and to brainstorm how LITA can effectively serve our different types of members. This event will be streamed on Facebook Live. Make sure to like the LITA Facebook page to get a notification when streaming begins.

Join your fellow LITA members for breakfast and a discussion about LITA’s strategic path. We will focus on how LITA’s goals–collaboration and networking; education and sharing of expertise; advocacy; and infrastructure–help our organization serve you and the broader library community. This Town Meeting will help us turn those goals into plans that will guide LITA going forward.

We hope you’ll join us at some of these events in Atlanta, or follow #alamw17 on social media to join the conversation online.
