Feed aggregator

Jonathan Rochkind: Hash#map ?

planet code4lib - Wed, 2017-03-22 02:15

I have frequently griped that Hash didn’t have a useful map/collect function, something allowing me to transform the hash keys or values (usually values) into another transformed hash. I even go looking for it in ActiveSupport::CoreExtensions sometimes, surely they’ve added something, everyone must want to do this… nope.

Thanks to a realization triggered by an example in BigBinary’s blog post about the new ruby 2.4 Enumerable#uniq… I realized, duh, it’s already there!

olympics = {1896 => 'Athens', 1900 => 'Paris', 1904 => 'Chicago', 1906 => 'Athens', 1908 => 'Rome'}

olympics.collect { |k, v| [k, v.upcase] }.to_h
# => {1896=>"ATHENS", 1900=>"PARIS", 1904=>"CHICAGO", 1906=>"ATHENS", 1908=>"ROME"}

Just use ordinary Enumerable#collect, with two block args — it works to get key and value. Return an array from the block, to get an array of arrays, which can be turned to a hash again easily with #to_h.

It’s a bit messy, but not really too bad. (I somehow learned to prefer collect over its synonym map, but I think maybe I’m in the minority? collect still seems more descriptive to me of what it’s doing. But this is one place where I wouldn’t have held it against Matz if he had decided to give the method only one name so we were all using the same one!)

(Did you know Array#to_h turned an array of duples into a hash?  I am not sure I did! I knew about Hash(), but I don’t think I knew about Array#to_h… ah, it looks like it was added in ruby 2.1.0.  The equivalent before that would have been more like Hash( hash.collect {|k, v| [k, v]}), which I think is too messy to want to use.)
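For reference, a quick sketch comparing the two forms — Array#to_h versus the bracket-style Hash[] constructor, which accepts an array of [key, value] pairs and was the usual pre-2.1 idiom:

olympics = {1896 => 'Athens', 1900 => 'Paris', 1904 => 'Chicago'}

# Ruby 2.1+: Array#to_h turns an array of [key, value] pairs back into a hash
olympics.collect { |k, v| [k, v.upcase] }.to_h
# => {1896=>"ATHENS", 1900=>"PARIS", 1904=>"CHICAGO"}

# Before 2.1, the usual equivalent was the Hash[] class method
Hash[ olympics.collect { |k, v| [k, v.upcase] } ]
# => {1896=>"ATHENS", 1900=>"PARIS", 1904=>"CHICAGO"}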

I’ve been writing ruby for 10 years, and periodically thinking “damn, I wish there was something like Hash#collect” — and didn’t realize that Array#to_h was added in 2.1, and makes this pattern a lot more readable. I’ll def be using it next time I have that thought. Thanks BigBinary for using something similar in your Enumerable#uniq example that made me realize, oh, yeah.

 


Filed under: General

FOSS4Lib Recent Releases: Sufia - 7.3.0

planet code4lib - Tue, 2017-03-21 21:21

Last updated March 21, 2017. Created by Peter Murray on March 21, 2017.

Package: Sufia
Release Date: Tuesday, March 21, 2017

Library of Congress: The Signal: Collecting Digital Content at the Library of Congress

planet code4lib - Tue, 2017-03-21 20:30

This is a guest post by Joe Puccio, the Collection Development Officer at the Library of Congress.

Joe Puccio. Photo by Beth Davis-Brown.

The Library of Congress has steadily increased its digital collecting capacity and capability over the past two decades. This is the product of numerous independent efforts directed toward the same goal – acquire as much selected digital content as technically possible and make that content as broadly accessible to users as possible. At present, over 12.5 petabytes of content – both acquired material and content produced by the Library itself through its digitization program – are under management.

In January, the Library adopted a set of strategic steps related to its future acquisition of digital content. Further expansion of the digital collecting program is seen as an essential part of the institution’s strategic goal to: Acquire, preserve, and provide access to a universal collection of knowledge and the record of America’s creativity.

The scope of the newly-adopted strategy is limited to actions directly involved with acquisitions and collecting. It does not cover digitization nor does it cover other actions that are critical to a successful digital collections program, including:

  • Further development of the Library’s technical infrastructure
  • Development of various access policies and procedures appropriate to different categories of digital content
  • Preservation of acquired digital content
  • Training and development of staff
  • Eventual realignment of resources to match an environment where a greater portion of the Library’s collection building program focuses on digital materials

It must also be emphasized that the strategy is aspirational, since not all of the resources required to accomplish it are yet in place.

Current Status of Digital Collecting and Vision for the Future

In the past few years, much progress has been made in the Library’s digital collecting effort, and an impressive amount of content has been acquired.  As the eDeposit pilot began the complex process of obtaining digital content via the Copyright Office, additional efforts made great strides toward the goal of acquiring and making accessible other content.  Digital collecting has also been integrated into a range of special collections acquisitions.

The adopted strategy is based on a vision in which the Library’s universal collection will continue to be built by selectively acquiring materials in a wide range of formats – both tangible and digital.  Policies, workflows and an agile technical infrastructure will allow for the routine and efficient acquisition of desired digital materials. This type of collection building will be partially accomplished via collaborative relationships with other entities. The total collection will allow the Library to support the Congress in fulfilling its duties and to further the progress of knowledge and creativity for the benefit of the American people.

Assumptions and Principles

The strategy is based on a number of assumptions, most significantly that the amount of available digital content will continue to grow at a rapid rate and that the Library will be selective regarding the content it acquires. An additional primary assumption is that there will continue to be much duplication in the marketplace, with the same content being available both in tangible and digital formats.

Likewise, there are a number of principles that support the strategy, including the fact that the Library is developing one interdependent collection that contains both its traditional physical holdings and materials in digital formats. Other major principles are that the Library will ensure that the rights of those holding intellectual property will be respected and that appropriate methods will be put in place to ensure that rights-restricted digital content remains secure.

Plan for Digital Collecting

Over the next five years, the Library intends to follow a strategic framework categorized into six objectives:

Strategic Objective 1 – Maximize receipt and addition to the Library’s collections of selected digital content submitted for copyright purposes

Strategic Objective 2 – Expand digital collecting via routine modes of acquisitions (primarily purchase, exchange and gift)

Strategic Objective 3 – Focus on purchased and leased electronic resources

Strategic Objective 4 – Expand use of web archiving to acquire digital content

Strategic Objective 5 – Develop and implement an acquisitions program for openly available content

Strategic Objective 6 – Expand collecting of appropriate datasets and other large units of content

More Information

Much more detail is available in Collecting Digital Content at the Library of Congress.  Any questions or comments about this strategy or any aspect of the Library’s collection building program may be directed to me, jpuc@loc.gov.

LITA: Library Life with Universal Translators

planet code4lib - Tue, 2017-03-21 20:02

As a lifetime science fiction watcher, I’ve been patiently waiting for current science to catch up to the futures I saw on the screen. Tiny computer in my pocket? Check. Hovercraft? All good. Commercial space flight? Almost there.

But when I saw the Indiegogo campaign for Mymanu CLIK – wireless earbud translators – I looked at them through my former public librarian’s eyes. My mediocre Spanish fluency would be replaced by effortless, instant, two-way translation, smoothing out frustrations and improving customer service.

Imagine: A library staffer wearing an earpiece and holding a smartphone (translation app installed) asks the patron to speak into the microphone, then hears the translation in their earpiece in real time. The conversation goes quickly, and the patron is more likely to get the information they need, even if the materials are still mostly in English.

Of course, there are concerns and questions. The key to most real-time translation is the computing power of servers hosted…somewhere…owned by…someone. As the person you’re listening to speaks, their words are streamed to these computers, analyzed, translated, and the translation is streamed back to you. Is that content saved, are people identifiable, what happens to patron privacy and library liability in the age of livestreamed translation? We collectively threw a fit when we discovered Adobe was sending patron information in the clear through their ebook reading service. Are we willing to ask for less in the name of better customer service?

More immediately, how accurate are the translations? Google Translate is good, and getting better all the time, but if we’re using these services to help patrons find medical or legal information, we can’t risk misunderstandings. Again, which is worse: to struggle along with no translation at all, or to risk the inaccurate information that comes with a bad translation?

Both the CLIK and the Pilot earpiece from Waverly Labs are coming soon. What questions do we need to remember to ask before we’re mediating our interactions through these devices?

Tim Ribaric: Tweak to the Bot

planet code4lib - Tue, 2017-03-21 19:24

Made a change to the bot.

EDIT: Yes I know typo in the image, changed it and not gonna screen cap again.


Terry Reese: MarcEdit Update Notes

planet code4lib - Tue, 2017-03-21 15:28

MarcEdit Update: All Versions

Over the past several weeks, I’ve been working on a wide range of updates related to MarcEdit. Some of these updates have dealt with how MarcEdit handles interactions with other systems, some have dealt with integrating the new bibframe processing into the toolkit, and some have been related to adding more functionality around the program’s terminal programs and SRU support. In all, this is a significant update that required the addition of ~20k lines of code to the Windows version, and almost 3x that to the MacOS version (as I was adding SRU support). In all, I think the updates provide substantial benefit. The updates completed were as follows:

MacOS:

* Enhancement: SRU Support — added SRU support to the Z39.50 Client
* Enhancement: Z39.50/SRU import: Direct import from the MarcEditor
* Enhancement: Alma/Koha integration: SRU Support
* Enhancement: Alma Integration: All code needed to add Holdings editing has been completed; TODO: UI work.
* Enhancement: Validator: MacOS was using older code — updated to match Windows/Linux code (i.e., moved away from original custom code to the shared validator.dll library)
* Enhancement: MARCNext: Bibframe2 Profile added
* Enhancement: BibFrame2 conversion added to the terminal
* Enhancement: Unhandled Exception Handling: MacOS handles exceptions differently — I created a new unhandled exception handler to make it so that if there is an application error that causes a crash, you receive good information about what caused it.

Couple of specific notes about changes in the Mac Update.

Validation – the Mac program was using an older set of code that handled validation. The code wasn’t incorrect, but it was out of date. At some point, I’d consolidated the validation code into its own namespace and hadn’t updated these changes on the Mac side. This was unfortunate. Anyway, I spent time updating the process so that all versions now share the same code and will receive updates at the same pace.

SRU Support – I’m not sure how I missed adding SRU support to the Mac version, but I had. So, while I was updating ILS integrations to support SRU when available, I added SRU support to the MacOS version.

BibFrame2 Support – One of the things I was never able to get working in MarcEdit’s Mac version was the Bibframe XQuery code. There were some issues with how URI paths resolved in the .NET version of Saxon. Fortunately, the new bibframe2 tools don’t have this issue, so I’ve been able to add them to the application. You will find the new option under the MARCNext area or via the command-line.

Windows/Linux:

* Enhancement: Alma/Koha integration: SRU Support
* Enhancement: MARCNext: Bibframe2 Profile added
* Enhancement: Terminal: Bibframe2 conversion added to the terminal.
* Enhancement: Alma Integration: All code needed to add Holdings editing has been completed; TODO: UI work.

Windows changes were specifically related to integrations and bibframe2 support. On the integrations side, I enabled SRU support when available and wrote a good deal of code to support holdings record manipulation in Alma. I’ll be exposing this functionality through the UI shortly. On the bibframe front, I added the ability to convert data using either the bibframe2 or bibframe1 profiles. Bibframe2 is obviously the default.

With both updates, I made significant changes to the Terminal and wrote up some new documentation. You can find the documentation, and information on how to leverage the terminal versions of MarcEdit at this location: The MarcEdit Field Guide: Working with MarcEdit’s command-line tools

Downloads can be picked up through the automated updating tool or from the downloads page at: http://marcedit.reeset.net/downloads

David Rosenthal: The Amnesiac Civilization: Part 5

planet code4lib - Tue, 2017-03-21 15:00

Part 2 and Part 3 of this series established that, for technical, legal and economic reasons, there is much Web content that cannot be ingested and preserved by Web archives. Part 4 established that there is much Web content that can currently be ingested and preserved by public Web archives that, in the near future, will become inaccessible. It will be subject to Digital Rights Management (DRM) technologies which will, at least in most countries, be illegal to defeat. Below the fold I look at ways, albeit unsatisfactory, to address these problems.

There is a set of assumptions that underlies much of the discussion in Rick Whitt's "Through A Glass, Darkly" Technical, Policy, and Financial Actions to Avert the Coming Digital Dark Ages. For example, they are made explicit in this paragraph (page 195):
Kirchhoff has listed the key elements of a successful digital preservation program: an independent organization with a mission to carry out preservation; a sustainable economic model to support preservation activities over targeted timeframes; clear legal rights to preserve content; relationships with the content owners, and the content users; a preservation strategy and supporting technological infrastructure; and transparency about the key decisions.

The assumption that there is a singular "independent organization with a mission to carry out preservation" to which content is transferred so that it may be preserved is also at the heart of the OAIS model. As in almost all discussions of digital preservation, it is not surprising to see it here.

There are three essential aspects: the singular organization, its independence, and the transfer of content. They are related to, but not quite the same as, the three options Whitt sets out on page 209:
Digital preservation should be seen not as a commercial threat, but as a new marketplace opportunity, and even advantage. Some voluntary options include persuading content owners to (1) preserve the materials in their custody, (2) cede the rights to preserve to another entity; and/or (3) be willing to assume responsibility for preservation, through "escrow repositories" or "archives of last resort."

Let's look at each in turn.
Not Singular

If the preservation organization isn't singular, at least some of it will be independent and there will be transfer of content. The LOCKSS system was designed to eliminate the single point of failure created by the singular organization. The LOCKSS Program provided software that enabled the transfer of content to multiple independent libraries, each taking custody of the content they purchased. This has had some success in the fairly simple case of academic journals and related materials, but it is fair to say that there are few other examples of similarly decentralized preservation systems in production use (Brian Hill at Ars Technica points to an off-the-wall exception).

Not singular solutions have several disadvantages to set against their lack of a single point of failure. They still need permission from the content owners, which except for the special case of LOCKSS tends to mean individual negotiation between each component and each publisher, raising costs significantly. And managing the components into a coherent whole can be like herding cats.
Not Independent

The CLOCKSS Archive is a real-world example of an "escrow repository". It ingests content from academic publishers and holds it in a dark archive. If the content ever becomes unavailable, it is triggered and made available under Creative Commons licenses. The content owners agree up-front to this contingency. It isn't really independent because, although in theory publishers and libraries share equally in the governance, in practice the publishers control and fund it. Experience suggests that content owners would not use escrow repositories that they don't in practice control.

"Escrow repositories" solve the IP and organizational problems, but still face the technical and cost problems. How would the "escrow repositories" actually ingest the flow of content from the content owners, and how would they make it accessible if it were ever to be triggered? How would these processes be funded? The CLOCKSS Archive is economically and technically feasible only because of the relatively limited scale of academic publishing. Doing the same for YouTube, for example, would be infeasible.
No Transfer

I was once in a meeting with major content owners and the Library of Congress at which it became clear to me that (a) hell would freeze over before these owners would hand a copy of their core digital assets to the Library, and (b) even after hell froze the Library would lack the ability or the resources to do anything useful with them. The Library's handling of the feed that Twitter donated is an example of (b). Whitt makes a related point on page 209:
In particular, some in the content community may perceive digital obsolescence not as a flaw to be fixed, but a feature to be embraced. After all, selling a single copy of content that theoretically could live on forever in a variety of futuristic incarnations does not appear quite as financially remunerative as leasing a copy of content that must be replaced, over and over, as technological innovation marches on.

In the Web era, only a few cases of successful pay-per-view models are evident. Content that isn't advertiser-supported, ranging from academic journals to music to news and TV programs, is much more likely to be sold as an all-you-can-eat bundle. The more content available only as part of the bundle, the more valuable the bundle. Thus the obsession of content owners with maintaining control over the only accessible version of each item of content (see, for example, Sci-Hub), no matter how rarely accessed.

The scale of current Web publishing platforms, the size and growth rates of their content, and the enormous cash flow they generate all militate against the idea that their content, the asset that generates the cash flow, would be transferred to some third party for preservation. In this imperfect world the least bad solution may be some form of "preservation in place". As I wrote in The Half-Empty Archive discussing ways to reduce the cost of ingest, which is the largest cost component of preservation:
It is becoming clear that there is much important content that is too big, too dynamic, too proprietary or too DRM-ed for ingestion into an archive to be either feasible or affordable. In these cases where we simply can't ingest it, preserving it in place may be the best we can do; creating a legal framework in which the owner of the dataset commits, for some consideration such as a tax advantage, to preserve their data and allow scholars some suitable access. Of course, since the data will be under a single institution's control it will be a lot more vulnerable than we would like, but this type of arrangement is better than nothing, and not ingesting the content is certainly a lot cheaper than the alternative.

This approach has many disadvantages. It has a single point of failure. In effect preservation is at the whim of the content owner, because no-one will have standing, resources and motivation to sue in case the owner fails to deliver on their commitment. And note the connection between these ideas and Whitt's discussion of bankruptcy in Section III.C.2:
Bankruptcy laws typically treat tangible assets of a firm or individual as private property. This would include, for example, the software code, hardware, and other elements of an online business. When an entity files for bankruptcy, those assets would be subject to claims by creditors. The same arguably would be true of the third party digital materials stored by a data repository or cloud services provider. Without an explicit agreement in place that says otherwise, the courts may treat the data as part of the estate, or corporate assets, and thus not eligible to be returned to the content "owner."

But to set against these disadvantages there are two major advantages:
  • As the earlier parts of this series show, there may be no technical or legal alternative for much important content.
  • Preservation in place allows for the survival of the entire publishing system, not just the content. Thus it mitigates the multiple version problem discussed in Part 3. Future readers can access the versions they are interested in by emulating the appropriate browser, device, person and location combinations.
I would argue that an urgent task should be to figure out the best approach we can to "preservation in place". A place to start might be the "preservation easement" approach taken by land trusts, such as the Peninsula Open Space Trust in Silicon Valley. A viable approach would preserve more content at lower cost than any other.

Evergreen ILS: OpenSRF 2.5.0 released

planet code4lib - Tue, 2017-03-21 13:55

We are pleased to announce the release of OpenSRF 2.5.0, a message routing network that offers scalability and failover support for individual services and entire servers with minimal development and deployment overhead.

New features in OpenSRF 2.5.0 include:

  • Support for message chunking, i.e., breaking up large OpenSRF messages across multiple XMPP envelopes.
  • The ability to detect the time zone of client applications and include it in messages passed to the server.
  • Dispatch mode for method_lookup subrequests.
  • Example configuration files for using NGINX or HAProxy as a reverse proxy for HTTP, HTTPS, and WebSockets traffic. This can be useful for Evergreen systems that wish to use port 443 for both HTTPS and secure WebSockets traffic.

OpenSRF includes various other improvements as detailed in the release notes.

OpenSRF 2.5.0 will be the minimum version of OpenSRF required for the upcoming release of Evergreen 2.12.

To download OpenSRF, please visit the downloads page.

We would also like to thank the following people who contributed to the release:

  • Ben Shum
  • Bill Erickson
  • Chris Sharp
  • Dan Scott
  • Galen Charlton
  • Jason Etheridge
  • Jason Stephenson
  • Jeff Davis
  • Kathy Lussier
  • Mike Rylander
  • Remington Steed

DPLA: Announcing the DPLAfest 2017 Travel Award Recipients

planet code4lib - Tue, 2017-03-21 13:45

We are thrilled to officially introduce the five talented and diverse members of the extended DPLA community who will be attending DPLAfest 2017 in Chicago as recipients of the travel awards announced last month! We received a tremendous response to the call from many excellent members of our field and are grateful that in addition to the three travel awards initially announced, we are also able to welcome two members of the Greater Chicago cultural heritage community to the fest.

The selected awardees represent a broad cross-section of the DPLA community including graduate students and established professionals studying and working in public libraries, government institutions, and local colleges. Together, this group also serves diverse communities across the country, from Los Angeles to North Carolina.

Here are the folks to look for at DPLAfest:

Tommy Bui
Los Angeles Public Library

At the Los Angeles Public Library, Tommy Vinh Bui works to promote literacy and to bridge the information gap in his community. He encourages utilizing emerging technologies and guides stakeholders to become critical and self-aware consumers of information and teaches good information literacy. He holds an MLIS in Library and Information Science with an emphasis on Digital Assets Management. He previously worked with the Los Angeles County Metropolitan Transportation Authority in the Art and Design Department organizing their image collection and served in the Peace Corps abroad. Tommy Vinh Bui is enthused to be attending the conference and avers, “Attending DPLAFest allows me an ideal opportunity to network and collaborate with like-minded professionals and peers who are passionate about digital public libraries and the increasingly significant role they’ll play in creating a verdant and informed society.”

 

Amanda Davis
Charlotte Mecklenburg Library

Amanda H. Davis is an Adult Services Librarian at Charlotte Mecklenburg Library in North Carolina. She received her MLIS from Valdosta State University and is a proud ALA Spectrum Scholar and ARL Career Enhancement Program Fellow. Her professional interests include diversity in LIS, public librarianship, community building, and creative writing. She is excited about attending DPLAfest because of her interest in making sure her city’s diverse perspectives are meaningfully and sustainably recorded.

 

Raquel Flores-Clemons
Chicago State University

Raquel Flores-Clemons is the University Archivist and Director of Archives, Records Management, and Special Collections at Chicago State University. In this role, she manages over thirty collections that reflect the history of CSU as well as capture the historical narratives of South Side communities of Chicago. Raquel maintains a deep commitment to capturing the historical narratives of communities of color and has a strong research interest in hip hop and its use as a platform for social justice and change, as well as the use of hip hop pedagogy to enhance information literacy. Raquel is interested in attending DPLAfest to share the unique archival collections and digital projects happening at Chicago State University, as well as to connect with and learn from other LIS professionals to expand collaboration and techniques in preserving historical materials through digital means. Raquel also looks forward to engaging with other professionals who are working to elevate unknown histories.

 

Valerie Hawkins
Prairie State College

Prior to her current position at Prairie State College, Valerie Hawkins served as Library Reference Specialist in the ALA (American Library Association) Library at its headquarters in downtown Chicago, answering most of the questions that came in to its reference desk, from member and non-member libraries as well as from the public, for nearly twenty years. Valerie was on the front lines of the transition within librarianship to electronic and online communications, publications, resources, and tools. Valerie is also deeply interested in pop culture, performing arts, and media representations of African American history. She writes, “It’s greatly informed my deliberate moves to increase the visibility of works by people of color and other marginalized communities, including the disabled and LGBTQ, in a public e-newsletter I curate called ‘Diverse Books and Media.’” Of her interest in attending DPLAfest, Valerie says, “The past, present, and future of librarianship is digital. Once materials are digitized, the work has actually just begun, not ended.” At DPLAfest, she looks forward to engaging in discussion around questions of organizing and providing maximal access to digital collections as well as user experiences.

 

Nicole Umayam
Digital Arizona Library

Nicole Umayam works as a content and metadata analyst for the Digital Arizona Library. She also works as a corps member of the National Digital Inclusion Alliance to engage tribal and rural community stakeholders in Arizona in increasing home and public broadband access and digital literacy skills. She worked previously with tribal communities in Oklahoma on various endangered language revitalization projects, including building a digital community language archive and training community members in using technology for language documentation. Nicole holds an MLIS and an MA in Applied Linguistic Anthropology from the University of Oklahoma. Nicole says, “I am eager to attend DPLAfest to learn more about creating inclusive and culturally relevant metadata, increasing discoverability, and forging digital library partnerships. I hope to contribute to future efforts of providing equitable access to digitized cultural heritage resources for diverse communities.” Learn more about Nicole’s work at the Arizona Memory Project in this DPLAfest session.

 

Congratulations to all – we look forward to meeting you in Chicago next month!

 

OCLC Dev Network: Code4Lib 2017 and The Cobra Effect

planet code4lib - Tue, 2017-03-21 13:00

I have recently returned from Code4Lib 2017 in Los Angeles. This was my first national Code4Lib, and I have brought back much more than a great t-shirt to our Dublin, Ohio, office.

Open Knowledge Foundation: Scientific Publisher Celebrates Open Data Day

planet code4lib - Tue, 2017-03-21 11:00

This blog is part of the event report series on International Open Data Day 2017. On Saturday 4 March, groups from around the world organised over 300 events to celebrate, promote and spread the use of open data. 44 events received additional support through the Open Knowledge International mini-grants scheme, funded by SPARC, the Open Contracting Program of Hivos, Article 19, Hewlett Foundation and the UK Foreign & Commonwealth Office.  This event was supported through the mini-grants scheme under the Open Research theme.

The Electrochemical Society (ECS) hosted an Open Data Day event with assistance from Open Knowledge International and SPARC-Science.

The Electrochemical Society, a nonprofit, international, scientific publisher, communicated with over 27,000 scientists about the importance of open data in the scientific disciplines between 2 and 4 March. ECS encouraged the researchers contacted to take a survey to assess the interest and need for open data services in the scientific community, the knowledge gaps which existed, and responsiveness to open data tools.

Participants from 33 institutions, 14 countries, and six continents gave the scholarly publisher information about what they felt was necessary for open data to be successful in their field and what they didn’t know about open data concepts.

The Electrochemical Society has been exploring open research policies, including open access for their flagship journal, and other open science practices. Eager to contribute to the world of open data and data science, the scientific society has been making strides to incorporate research projects which implement open data and data science practices in their publications.

In order to determine the next steps for socialising open data in the community, questions asked on the survey included:

  1. How often do you access data via repositories?
  2. How often do you deposit data into repositories?
  3. Do you feel there are enough open notebook tools for this specific field of science?
  4. Did you know what open data was before today?
  5. What concentrated areas of open data do you most contribute to?

ECS’s PRiME Meeting Hall, where scientists from around the world came to openly discuss and share the results of their research.

In October, ECS will host their annual Fall meeting which will introduce symposia on energy data science and open science, including open data and open access practices.

Outcomes

The event successfully enabled The Electrochemical Society to determine the needs of their constituents in the electrochemical and solid state science community in terms of open data and open science platforms.

The Society randomly selected survey participants and issued 20 open access article credits to allow 20 scholarly papers to be published completely free of charge and completely free to read.

The event led to the announcement of their contribution to an open research repository and the launch of a new open science tool.

ECS’s celebration of Open Data Day helped to identify gaps in knowledge in the field, assess the need for more open data tools, and plan next steps for open science and open data within the organization; it also led to the anticipated publication of 20 new research papers and, most importantly, an increased understanding of open data within their community.

ECS’s Open Data Day celebration is part of a larger initiative to incorporate open science practices into scientific and scholarly communications. You can learn more about the Free the Science initiative and why open research and open data are critical to the advancement of science here. Below is also a short video on the New Model for Scientific Publishing, #FreetheScience!

Open Knowledge Foundation: Open data hackathon brings evidence on the right to health care

planet code4lib - Tue, 2017-03-21 09:54

This blog is part of the event report series on International Open Data Day 2017. On Saturday 4 March, groups from around the world organised over 300 events to celebrate, promote and spread the use of open data. 44 events received additional support through the Open Knowledge International mini-grants scheme, funded by SPARC, the Open Contracting Program of Hivos, Article 19, Hewlett Foundation and the UK Foreign & Commonwealth Office. This event was part of the Human rights theme. 

This is an English translation of the Latvian blog at http://www.datuskola.lv/2017/03/05/atverto-datu-hakatona-laika-atklaj-vairakus-datos-balstitus-pieradijumus-sabiedribas-veselibas-nozare/

On Saturday 4th March, during Open Data Day, six new data projects were started at a Data Hackathon. Three of them were dedicated to human rights and access to good healthcare, looking for new solutions to improve the accessibility of health care. Teams also focused on the right to education for people with special needs, to understand whether inclusive education actually works in Riga.


 

The participants of the hackathon chose two projects as winners. One of the winning teams created a prototype of a tool that would allow patients to enter their diagnosis and see all compensated medicines and the compensation amounts in the selected country. The new tool draws attention to the support each individual could get in the case of a rare disease.

Translation: 1st Graphic: Sum for one person
2nd Graphic: Number of patients
3rd Graphic: Sum for one person
Table – 1) Disease 2) Date 3) Number of patients 4) The amount of compensation 5) The sum for one person

 

The second winning team addressed social security as a human right. The team analysed yields and bank fees across all Baltic states to compare investment profitability, and concluded that from a 1,000 euro salary, an individual gets the least in Estonia and the most in Lithuania.

Translation: If gross salary is 1k euros, how much will 2nd pension pillars contributions earn in financial markets (720 euros in a year)?
Latvia: 22 euros (for you 22 euros + for bank 11 euros)
Lithuania: 32 euros (for you 25 euros + for bank 7 euros)
Estonia: 21 euros (for you 13 euros + for bank 8 euros)

 

Another team focused on health check availability, an important topic for the public health sector. They concluded with a map of pharmacies in Riga, the capital of Latvia, where there should be stations for men between 20 and 40 years old to get tested for HIV.

This year one team also researched the defence budget. The team created a detailed view of how the state of Latvia has tried to reach a defence budget of 2% of gross domestic product (GDP).

The right to inclusive education has also been a topic this year, viewed from a data perspective. Even though 34 million euros of funding has been allocated for inclusive education for the period 2014-2020, more and more children in Riga who are registered as having mental and learning disorders are being separated from their peers and taught in special schools.

Besides these projects, the participants of the hackathon had a chance to see a presentation of a data project that was not created during the hackathon, which gathered data about electromagnetic radiation in the centre of Riga. This project identified spots on a map where the radiation exceeds the norm, by as much as 5 times, and harms people’s health.

Work on the projects started during the Open Data hackathon continues: results will be prepared for publication in the media and for use in NGO projects.

This is already the second Open Data hackathon. We thank the visualisation company Infogram, the Nordic Council of Ministers’ office in Latvia, Open Knowledge International and the UK Foreign & Commonwealth Office for their support in creating this event.

LibUX: Listen: Personas, Jobs to be Done, and LITA (18:08)

planet code4lib - Tue, 2017-03-21 05:10

Recently, LITA embarked on a big persona-making project in order to better align their services to the needs of their members, the results of which they just revealed. This provides a solid talking point for discussing conceptual problems with personas and introducing a potentially better-suited approach: jobs to be done.

  • 00:43 – LITA created a bunch of personas
  • 2:14 – What does LITA actually want?
  • 3:39 – Personas are more noise than signal
  • 5:37 – Personas are best as a demographic snapshot
  • 6:05 – The User Story
  • 7:35 – The Job Story
  • 8:04 – Jobs to be Done
  • 11:36 – So what jobs do LITA personas need done?
  • 14:04 – What should LITA do, then?
  • 15:44 – Support Metric: https://patreon.com/libux
  • 16:42 – How to enter for our giveaway: a copy of Practical Design Discovery by Dan Brown.

You can also download the MP3 or subscribe to Metric: A UX Podcast on Overcast, Stitcher, iTunes, YouTube, Soundcloud, Google Music, or just plug our feed straight into your podcatcher of choice.

Library Tech Talk (U of Michigan): Sounded to bits: Digital preservation of U-M Library’s audio collections

planet code4lib - Tue, 2017-03-21 00:00

The Audio/Moving Image Team has been digitizing audio since 2009. Read more to find out why, how, what we've done, what we're going to do, and what others are doing!

District Dispatch: #ALAWO is tracking #SaveIMLS and collecting your stories

planet code4lib - Mon, 2017-03-20 22:36

Since 11 a.m. last Thursday (and as of 5 p.m. this afternoon), there have been 3,838 tweets under the #saveIMLS hashtag on Twitter. That is over 767 tweets a day. Or, sliced another way, there are currently 1,800 people who are participating in the conversation on Twitter. Any way you dice it, we need this momentum to continue.

Right now, the ALA Washington Office is collecting your tweets and stories via TAGS, the Twitter Archiving Google Sheet. You can see the conversation as it has unfolded via this afternoon’s snapshot:

#SaveIMLS conversation on Twitter from March 17 through March 20. The Washington Office is collecting your stories. View and explore the live version here.

As we march towards the next phase of the appropriations process, we need to keep IMLS at the center of the conversation. We need you to keep beating the drum and sharing your stories.

How can you tell an impactful story?

  • First, look up what IMLS does for you specifically. Search their database to see what they have funded in your zip code.
  • Then, pick a project (from the database or one you already know about) and tweet about its impact with the hashtag #saveIMLS. (Bonus points: Enter your zip code into GovTrack so you can find and tag your Senator or representative; their social media information is listed.)

While your “numbers” — how many computers, how many programs, how many books, how many patrons — are very important, the best kind of stories talk about how IMLS or LSTA funding has helped you to contribute to the “big picture.” A powerful story from your Congressional district can and will move mountains.

Here are some examples, from the 3,838 tweets, that we thought were great. Keep it coming!

Stay tuned for more information, particularly as it pertains to the upcoming advocacy campaign around “Dear Appropriator” letters. Meanwhile, subscribe to our action alerts to ensure you receive the latest updates on the budget process.

The post #ALAWO is tracking #SaveIMLS and collecting your stories appeared first on District Dispatch.

LITA: LITA @ ALA Annual 2017 – Chicago

planet code4lib - Mon, 2017-03-20 18:01

Early bird registration closes at noon, Wednesday March 22 central time. Start making your plans for ALA Annual now by checking out all the great LITA events.

Go to the LITA at ALA Annual conference web page.

Attend the LITA President’s Program featuring Kameron Hurley
Sunday June 25, 2017 from 4:30 pm – 5:30 pm
Program Title: We are the Sum of our Stories

LITA President Aimee Fifarek welcomes Kameron Hurley, author of the essay collection The Geek Feminist Revolution, as well as the award-winning God’s War Trilogy and The Worldbreaker Saga. Hurley has won the Hugo Award, Kitschy Award, and Sydney J. Bounds Award for Best Newcomer. She was also a finalist for the Arthur C. Clarke Award, the Nebula Award, and the Gemmell Morningstar Award. Her short fiction has appeared in Popular Science Magazine, Lightspeed Magazine, and many anthologies. Hurley has written for The Atlantic, Entertainment Weekly, The Village Voice, Bitch Magazine, and Locus Magazine. She posts regularly at KameronHurley.com.

Register for ALA Annual and Discover Ticketed Events.

Sign up for the LITA AdaCamp preconference

Friday, June 23, 2017, 9:00 am – 4:00 pm
Northwestern University Libraries, Evanston, IL
Facilitators: Margaret Heller, Digital Services Librarian, Loyola University Chicago; Evviva Weinraub, Associate University Librarian for Digital Strategies, Northwestern University.

Women in technology face numerous challenges in their day-to-day work. If you would like to join other women in the field to discuss topics related to those challenges, AdaCamp is for you. This one-day LITA preconference during ALA Annual in Chicago will allow female-identifying individuals employed in various technological industries an opportunity to network with others in the field and to collectively examine common barriers faced.

Other Featured LITA Events Include

Top Technology Trends
Sunday, June 25, 2017, 1:00 pm – 2:30 pm

LITA’s premier program on changes and advances in technology. Top Technology Trends features our ongoing roundtable discussion about trends and advances in library technology by a panel of LITA technology experts and thought leaders. The panelists will describe changes and advances in technology that they see having an impact on the library world, and suggest what libraries might do to take advantage of these trends. This conference’s panelists and their suggested trends include:

  • Margaret Heller, Session Moderator, Digital Services Librarian, Loyola University Chicago
  • Emily Almond, Director of IT, Georgia Public Library Service
  • Marshall Breeding, Independent Consultant and Founder, Library Technology Guides
  • Vanessa Hannesschläger, Researcher, Austrian Centre for Digital Humanities/Austrian Academy of Sciences
  • Jenny Jing, Manager, Library Systems, Brandeis University Library
  • Veronda Pitchford, Director of Membership and Resource Sharing, Reaching Across Illinois Library System (RAILS)
  • Tara Radniecki, Engineering Librarian, University of Nevada, Reno

LITA Imagineering: Generation Gap: Science Fiction and Fantasy Authors Look at Youth and Technology
Saturday June 24, 2017, 1:00 pm – 2:30 pm

Join LITA, the Imagineering Interest Group, and Tor Books as a panel of Science Fiction and Fantasy authors discuss how their work can help explain and bridge the interests of generational gaps, as well as what it takes for a literary work to gain crossover appeal for both youth and adults. This year’s line up is slated to include:

  • Cory Doctorow
  • Annalee Newitz
  • V.E. Schwab
  • Susan Dennard

LITA Conference Kickoff
Friday June 23, 2017, 3:00 pm – 4:00 pm

Join current and prospective LITA members for an overview and informal conversation at the Library Information Technology Association (LITA) Conference Kickoff. All are welcome to meet LITA leaders, committee chairs, and interest group participants. Whether you are considering LITA membership for the first time, a long-time member looking to engage with others in your area, or anywhere in between, take part in great conversation and learn more about volunteer and networking opportunities at this meeting.

LITA Happy Hour
Sunday, June 25, 2017, 6:00 pm – 8:00 pm

This year the LITA Happy Hour continues the year long celebration of LITA’s 50th anniversary. Expect anniversary fun and games. Make sure you join the LITA Membership Development Committee and LITA members from around the country for networking, good cheer, and great fun! There will be lively conversation and excellent drinks; cash bar.

Find all the LITA programs and meetings using the online conference scheduler.

More Information about LITA conference events and Registration

Go to the LITA at ALA Annual Conference web page.

Open Knowledge Foundation: Open Data Durban celebrates Open Data Day building an Arduino weather station

planet code4lib - Mon, 2017-03-20 13:00

This blog is part of the event report series on International Open Data Day 2017. On Saturday 4 March, groups from around the world organised over 300 events to celebrate, promote and spread the use of open data. 44 events received additional support through the Open Knowledge International mini-grants scheme, funded by SPARC, the Open Contracting Program of Hivos, Article 19, Hewlett Foundation and the UK Foreign & Commonwealth Office. This event was supported through the mini-grants scheme under the Open Environment theme.

This post was first published on Open Data Durban website: https://opendata.durban

Open Data Durban in partnership with The MakerSpace Foundation hosted International Open Data Day in a dual-charged effort to ignite openness and participation in the Durban community. Together with environmentalists, ecologists, data wranglers, techies and active citizens we built an Arduino weather station. According to Arduino.cc, “Arduino is an open-source electronics platform based on easy-to-use hardware and software.”

How did we promote diversity on the day?

On arrival, participants had to select different coloured stickers representing their interests and skill sets, choosing between data wrangler, maker, environmentalist, techie and, more importantly, learner. The latter was an obvious choice for Mondli, Nolwazi and Nosipho, three learners from Umkhumbane Secondary School, located in Chesterville, a township on the periphery of Durban CBD. We invited the learners as part of our data club’s programme, in which learners will also be building an Arduino weather station, rolling out soon.

It was essential that the teams were made up of each of the skill sets above to ensure:

  • the project speaks to the broader theme of an informed decision-making through the micro-weather station data;
  • participants are assisted in assembling the electronics;
  • the IoT device is programmed through code;
  • participants gain critical environmental insights towards the practical use of the tool;

and more importantly to enable and create a guild of new-age active citizens and evangelists of open knowledge.

Each team was provided with an Arduino weather kit consisting of dust, gas, temperature and rainfall sensors and all other relevant components to build the weather station. We did not provide the teams with step by step instructions for the build. Instead, we challenged them to google search the build instructions and figure out the steps. Within minutes, the teams were busy scouring for instructions from various websites such as Instructables. This emphasised the openness of sharing knowledge and introduced the learners to open knowledge and how someone from another place in the world can share their expertise with you.

What were some of the insights from the environmentalists?

Bruce and Lee, a retired ecologist and environmentalist respectively, were charming in their approach to problem-solving and tinkering with the electronic parts. Although not well-versed in the Arduino toolkit, their gallant efforts saw them learning and later tutoring the learners on building the weather station.

Their insights into the environmental status of Durban were unmatched and painted a grim picture of the Durban community’s awareness of the problems that exist.

What were some of the insights from the techies?

Often at our events, we have a number of techies come in who are brilliant at coding but have no concept of data science or of how coding can be used to address economic, social and environmental issues. This event helped to introduce such techies to how coding Arduino boards and sensors can be used to gather weather data, and how that data can in turn be used to monitor the weather conditions in a given area.

This data then allows the public to be aware of their weather conditions such as the concentrations of harmful gases in the air. The city can also map out pollution hotspots and identify trends which aid in decision making to eliminate or manage the air quality.

How did the learners participate in the session? What were some of their learnings?

There were many different languages spoken by the participants, which made communication across all groups a challenge. However, the learners’ confidence and enthusiasm to learn prompted them to ask the group members some captivating questions, most notably in their pursuit of understanding how things work in the space.

All the attendees were attempting to build the Arduino weather for the first time. The adult attendees were quite hesitant at first to share what they were doing with the learners because they were not certain if what they were doing was correct or not and did not want to confuse the learners. Once the adult attendees were confident with the method of building, they then began to communicate more with the learners.

Outcomes of the day

We eventually saw one complete weather station, built by Sphe Shandu, who stayed behind after some team members tinkered with other goods in the MakerSpace. It was complete minus the LCD component (no team figured this out).

Learnings
  1. Lend an extra hand to students engaging with maker spaces for the first time in an urban setting; they have a natural, innate understanding of the moving parts in the MakerSpace (3D printers, laser cutters, electronics, etc.) but not necessarily the context of new-age manufacturing, practicality and potential outputs.
  2. After lunch, the teams became quite weary. Progress dived down but the teams managed to pull through and complete as much as they could. Long events tend to be vigorous at the beginning and hit a stall towards the end. A possible lesson learnt is to host much shorter events.
  3. Teachers need to be incentivised to attend the programme outside formal school learning.
  4. Parents prove to be the most difficult stakeholders to engage – although involved in their children’s learning, they still need to be encouraged to attend such functions.
  5. For community events on Saturday it is most difficult to rally large attendance numbers.

 

 

Open Knowledge Foundation: Celebrating International Open Data Day in Chicago

planet code4lib - Mon, 2017-03-20 10:39

This blog is part of the event report series on International Open Data Day 2017. On Saturday 4 March, groups from around the world organised over 300 events to celebrate, promote and spread the use of open data. 44 events received additional support through the Open Knowledge International mini-grants scheme, funded by SPARC, the Open Contracting Program of Hivos, Article 19, Hewlett Foundation and the UK Foreign & Commonwealth Office. This event was supported through the mini-grants scheme under the Open Research theme. 

This post was originally published on Teodora’s webpage: http://bit.ly/2nn6q0m

Open Data Day Chicago 2017 was a great experience for everyone.

I am so glad to be the organiser of the celebration of open data in Chicago this year. 

We were about 30 people, working on 6 projects. There were participants from UChicago (RCC, ImLab, Becker Friedman Institute), Smart Chicago Collaborative, Cook County, Google NYC, Open Source Policy Center, Cornell University, and others. Also, the background experience of the participants was very diverse: economics, software engineering, genomics, humanities, web development, and others. While the hackathon was scheduled from 10 am to 5 pm, we worked on some projects until 8 pm.

More information about the event: http://wiki.opendataday.org/Chicago2017

As with any good hackathon, the day started with coffee and sandwiches. Thanks to Research Computing Center and SPARC-Science for sponsoring. 

With the opening festivities done, we started the day by presenting the 6 registered projects:

  1. Visualising Open Data of Chicago
  2. Taxbrain Web Application
  3. Open Data and Virtual Reality  
  4. Reddit Data Visualizer
  5. Computational methods for analysing large amounts of genomic data
  6. Exploring the data portal of Cook County

A member of each project team presented their project to the participants so anyone interested in the project could join the project team.

Kathleen Lynch, the Information Technology Communications Manager of Cook County Government, presented their data portal and the work they are doing to support open data. She also presented the Chicago Hack Night event that takes place every Tuesday evening for people interested in supporting open data to meet, build, share, and learn about civic tech. After that, Josh Kalov, Consultant at Smart Chicago Collaborative, presented the work they are doing to support Cook County Government, with a focus on access, skills, and data. Their work is currently focused on health, education, and justice.

Work on the projects then started. Using mainly datasets from the City of Chicago Data Portal, we analysed Red Light Traffic Violations in Chicago, and also the Beach Weather Stations. Here you can see a map we created in Python with the main locations in Chicago where the most red light traffic violations occur. Of course, the next step will be to label the locations.

We also created an application to visualise charts on Virtual Reality devices like Google Cardboard. We used Three.js and D3 to create the 3D charts and Google Chrome VR.

We designed some graphical widgets for the TaxBrain web application, a platform for accessing the open-source tax models. Also, we learned about Tax-Calculator, a tool that computes federal individual income taxes and Federal Insurance Contribution Act (FICA) taxes for a sample of tax filing units in years beginning with 2013.

We also discussed how we can integrate the Reddit Data Visualizer with other open datasets.

Professor Hae Kyung Im from Im-Lab at UChicago led a discussion on the Genomic Data Commons Data Portal and the prediction models offered by the MetaXcan tool.

The projects that we worked on are all available on OpenDataDayChicago2017‘s Github.

After all the hard work during the hackathon, we decided to continue working after hours on some of the projects. The projects we worked on were later presented at Chicago Hack Night.

All in all, the day was productive, entertaining, and educational. We celebrated open data in a pleasant way, and good friendships were formed and strengthened.
