Planet Code4Lib - http://planet.code4lib.org

Jonathan Rochkind: rubyland infrastructure, and a modest sponsorship from honeybadger

Fri, 2017-02-24 16:41

Rubyland.news is my hobby project, a Ruby RSS/Atom feed aggregator.

Previously it was run entirely on free heroku resources — free dyno, free postgres (limited to 10K rows, which dashes my dreams of a searchable archive, oh well). The only thing I had to pay for was the domain. Rubyland doesn’t take many resources because it is mostly relatively ‘static’ and cacheable content, so it could get by fine on one dyno. (I’m caching whole pages with Rails “fragment” caching and an in-process memory-based store, not quite how Rails fragment caching was intended to be used, but it works out pretty well for this simple use case, with no additional resources required.)
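
For the curious, here is a minimal sketch of that caching approach (the store size, cache key, and expiry are illustrative, not taken from rubyland’s actual code):

    # config/environments/production.rb: an in-process, memory-based cache store
    config.cache_store = :memory_store, { size: 32.megabytes }

    <%# app/views/feeds/index.html.erb: wrap the whole rendered page body in a fragment cache %>
    <% cache "home-page", expires_in: 10.minutes do %>
      <%= render partial: "entry", collection: @entries %>
    <% end %>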

But the heroku free dyno doesn’t allow SSL on a custom hostname.  It’s actually pretty amazing what one can accomplish with ‘free tier’ resources from various cloud providers these days.  (I also use a free tier mailgun account for an MX server to receive @rubyland.news emails, and SMTP server for sending admin notifications from the app. And free DNS from cloudflare).  Yeah, for the limited resources rubyland needs, a very cheap DigitalOcean droplet would also work — but just as I’m not willing to spend much money on this hobby project, I’m also not willing to spend any more ‘sysadmin’ type time than I need — I like programming and UX design and enjoy doing it in my spare ‘hobby’ time, but sysadmin’ing is more like a necessary evil to me. Heroku works so well and does so much for you.

With a very kind sponsorship gift of $20/month for 6 months from Honeybadger, I used the money to upgrade to a heroku hobby-dev dyno, which does allow SSL on custom hostnames. So now rubyland.news is available at https, via letsencrypt.org, with cert acquisition and renewal fully automated by the letsencrypt-rails-heroku gem, which makes it incredibly painless, just set a few heroku config variables and you’re pretty much done.
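
Roughly, the setup amounts to a handful of heroku config:set calls. Note that the variable names below are assumptions from memory rather than copied from the gem, so check the letsencrypt-rails-heroku README for the exact keys it expects:

    # Illustrative only: variable names may differ from the gem's documented ones
    heroku config:set ACME_EMAIL=you@example.com \
                      ACME_DOMAIN=rubyland.news \
                      HEROKU_APP=your-heroku-app-name \
                      HEROKU_TOKEN=$(heroku auth:token)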

I still haven’t redirected all http to https, and am not sure what to do about https on rubyland. For one, if I don’t continue to get sponsorship donations, I might not continue the heroku paid dyno, and then wouldn’t have custom domain SSL available. Also, even with SSL, since the rubyland.news feed often includes embedded <img> tags with their original src, you still get browser mixed-content warnings (which browsers may be moving to give you a security error page on?).  So not sure about the ultimate disposition of SSL on rubyland.news, but for now it’s available on both http and https — so at least I can do secure admin or other logins if I want (haven’t implemented yet, but an admin interface for approving feed suggestions is on my agenda).
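
For what it’s worth, the redirect itself is the easy part in a standard Rails app; a minimal sketch (this handles the http-to-https redirect and HSTS headers, but does nothing for the mixed-content issue described above):

    # config/environments/production.rb
    # Redirect all http requests to https, mark cookies secure, and send HSTS headers
    config.force_ssl = true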

Honeybadger

I hadn’t looked at Honeybadger before myself. I have used bugsnag on client projects before, and been quite happy with it. Honeybadger looks like basically a bugsnag competitor — its main feature set is about capturing errors from your Rails apps (or apps on other platforms, including non-Ruby ones), and presenting them well for your response, with grouping, notifications, status disposition, etc.

I’ve set up honeybadger integration on rubyland.news, to check it out. (Note: “Honeybadger is free for non-commercial open-source projects”, which is pretty awesome, thanks honeybadger!) Honeybadger’s feature set and user/developer experience are looking really good. It’s got much more favorable pricing than bugsnag for many projects: pricing is just per-app, not per-event-logged or per-seat. It’s got a pretty similar feature set to bugsnag; in some areas I like how honeybadger does things a lot better, in others I’m not sure.

(I’ve been thinking for a while about wanting to forward all Rails.logger error-level log lines to my error monitoring service, even though they aren’t fatal exceptions/500s. I think this would be quite do-able with honeybadger, might try to rig it up at some point. I like the idea of being able to put error-level logging in my code rather than monitoring-service-specific logic, and have it just work with whatever monitoring service is configured).
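
A minimal sketch of that idea, purely illustrative and not something rubyland.news currently does (it assumes the honeybadger gem is installed and relies on Honeybadger.notify accepting a plain message string):

    require "logger"
    require "honeybadger"

    # A logger that forwards error-level lines to the error-monitoring service
    # (Honeybadger here) in addition to writing them to the normal log.
    class NotifyingLogger < ::Logger
      def error(message = nil, &block)
        text = message || (block && block.call)
        Honeybadger.notify(text.to_s) if text
        super
      end
    end

    # In a Rails app this could then be wired up with something like:
    #   config.logger = NotifyingLogger.new($stdout)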

So I’d encourage folks to check out honeybadger — yeah, my attention was caught by their (modest, but welcome and appreciated! $20/month) sponsorship, but I’m not being paid to write this specifically, all they asked for in return for sponsorship was a mention on the rubyland.news about page.

Honeybadger also includes some limited uptime monitoring.   The other important piece of monitoring, in my opinion, is request- or page-load time monitoring, with reports and notifications on median and 90th/95th percentile. I’m not sure if honeybadger includes that in any way. (for non-heroku deploys, disk space, RAM, and CPU usage monitoring is also key. RAM and CPU can still be useful with heroku, but less vital in my experience).

Is there even a service that will work well for Rails apps that combines error, uptime, and request time monitoring, with a great developer experience, at a reasonable price? It’s a bit surprising to me that there are so many services that do just one or two of these, and few that combine all of them in one package.  Anyone had any good experiences?

For my library-sector readers, I think this is one area where most library web infrastructure is not yet operating at professional standards. In this decade, a professional website means you have monitoring and notification to tell you about errors and outages without needing to wait for users to report them, so you can get them fixed as soon as possible. Few library services are operated this way, and it’s time to get up to speed. While you can run your own monitoring and notification services on your own hardware, in my experience few open source packages are up to the quality of current commercial cloud offerings — and when you run your own monitoring/notification, you run the risk of losing notice of problems because of misconfiguration of some kind (it’s happened to me!), or a local infrastructure event that takes out both your app and your monitoring/notification (that too!). A cloud commercial offering makes a lot of sense. While there are many “reasonably” priced options these days, they are admittedly still not ‘cheap’ for a library budget (or lack thereof) — but it’s a price worth paying; it’s what it means to run web sites, apps, and services professionally.


Filed under: General

Tim Ribaric: Frankenstein for Fun and Profit

Fri, 2017-02-24 14:30

It's Alive.


District Dispatch: Top 5 myths about National Library Legislative Day

Thu, 2017-02-23 18:36

Originally published by American Libraries in Cognotes during ALA Midwinter 2017.

The list of core library values is a proud one, and a long one. For the past 42 years, library supporters from all over the country have gathered in Washington, D.C. in May with one goal in mind – to advance libraries’ core values and communicate the importance of libraries to Members of Congress. They’ve told their stories, shared data and highlighted pressing legislation impacting their libraries and their patrons.

Photo Credit: Adam Mason Photography

This year, Congressional action may well threaten principles and practices that librarians hold dear as never before. That makes it more important than ever that National Library Legislative Day 2017 be the best attended ever. So, let’s tackle a few of the common misconceptions about National Library Legislative Day that often keep people from coming to D.C. to share their own stories:

  1. Only librarians can attend.
    This event is open to the public and anyone who loves libraries – students, business owners, stay-at-home moms, just plain library enthusiasts – has a story to tell. Those firsthand stories are critical to conveying to members of Congress and their staffs just how important libraries are to their constituents.
  2. Only policy and legislative experts should attend.
    While some attendees have been following library legislative issues for many years, many are first time advocates. We provide a full day of training to ensure that participants have the most up-to-date information and can go into their meetings on Capitol Hill fully prepared to answer questions and convey key talking points.
  3. I’m not allowed to lobby.
    The IRS has developed guidelines so that nonprofit groups and private citizens can advocate legally. Even if you are a government appointee, there are ways you can advocate on issues important to libraries and help educate elected officials about the important work libraries do.
    Still concerned? The National Council of Nonprofits has resources to help you.
  4. My voice won’t make a difference.
    From confirming the new Librarian of Congress in 2016 to limiting mass surveillance under the USA FREEDOM Act in 2015 to securing billions in federal support for library programs over many decades, your voice combined with other dedicated library advocates’ has time and again defended the rights of the people we serve and moved our elected officials to take positive action. This can’t be done without you!
  5. I can’t participate if I don’t go to D.C.
    Although having advocates in D.C. to personally visit every Congressional office is hugely beneficial – and is itself a powerful testimony to librarians’ commitment to their communities – you can participate from home. During Virtual Library Legislative Day you can help effectively double the impact of National Library Legislative Day by calling, emailing or tweeting Members of Congress using the same talking points carried by onsite NLLD participants.

Legislative threats to core library values are all too real this year. Don’t let myths prevent you from standing up for them on May 1-2, 2017. Whether you’ve been advocating for 3 months or 30 years, there’s a place for you in your National Library Legislative Day state delegation, either in person or online.

For more information, and to register for National Library Legislative Day, please visit ala.org/nlld.

The post Top 5 myths about National Library Legislative Day appeared first on District Dispatch.

David Rosenthal: Poynder on the Open Access mess

Thu, 2017-02-23 16:00
Do not be put off by the fact that it is 36 pages long. Richard Poynder's “Copyright: the immoveable barrier that open access advocates underestimated” is a must-read. Every one of the 36 pages is full of insight.

Briefly, Poynder is arguing that the mis-match of resources, expertise and motivation makes it futile to depend on a transaction between an author and a publisher to provide useful open access to scientific articles. As I have argued before, Poynder concludes that the only way out is for Universities to act:
As it happens, the much-lauded Harvard open access policy contains the seeds for such a development. This includes wording along the lines of: “each faculty member grants to the school a nonexclusive copyright for all of his/her scholarly articles.” A rational next step would be for schools to appropriate faculty copyright all together. This would be a way of preventing publishers from doing so, and it would have the added benefit of avoiding the legal uncertainty some see in the Harvard policies. Importantly, it would be a top-down diktat rather than a bottom-up approach. Since currently researchers can request a no-questions-asked opt-out, and publishers have learned that they can bully researchers into requesting that opt-out, the objective of the Harvard OA policies is in any case subverted.

Note the word "faculty" above. Poynder does not examine the issue that very few papers are published all of whose authors are faculty. Most authors are students, post-docs or staff. The copyright in a joint work is held by the authors jointly, or, if some are employees working for hire, jointly by the faculty authors and the institution. I doubt very much that the copyright transfer agreements in these cases are actually valid, because they have been signed only by the primary author (most frequently not a faculty member), and/or have been signed by a worker-for-hire who does not in fact own the copyright.

District Dispatch: Look Back, Move Forward: network neutrality

Thu, 2017-02-23 15:44

Background image is from the ALA Archives.

With news about network neutrality in everyone’s feeds recently, let’s TBT to 2014 at the Annual Conference in Las Vegas, Nevada, where the ALA Council passed a resolution “Reaffirming Support for National Open Internet Policies and Network Neutrality.” And in 2006—over a decade ago!—our first resolution “Affirming Network Neutrality” was approved.

You can read both resolutions from 2006 and 2014 in ALA’s Institutional Repository. While you are here, be sure to sign up for the Washington Office’s legislative action center for more news and opportunities to act as the issue evolves.

2014 Resolution Reaffirming Support for National Open Internet Policies and “Network Neutrality”

Citations
• Resolution endorsed by ALA Council on June 28, 2006. Council Document 20.12.
• Resolution adopted by ALA Council on July 1, 2014, in Las Vegas, Nevada. Council Document 20.7.

The post Look Back, Move Forward: network neutrality appeared first on District Dispatch.

LibUX: WordPress could be libraries’ best bet against losing their independence to vendors

Thu, 2017-02-23 13:17

Stephen Francouer: Interesting play by EBSCO. I’m going to guess that it’s optimized to work with EDS and other EBSCO products. “When It Comes To Improving Your Library Website, Not All Web Platforms Are Created Equal” https://libraryux.slack.com/archives/what-to-read/p1487376220000478

Stephen’s linking to an article where Ebsco announces Stacks:

Stacks is the only web platform created by library professionals for library professionals. Stacks understands the challenges librarians face when it comes to the library website and has built a web platform and native mobile apps that lets you get back to doing what you do best; curating excellent content for your users. Learn more about how Stacks and the New Library Experience.

I haven’t had any hands-on opportunity with Stacks, so I can’t comment on the product – it might be good. My contention, however, is that it is probably worse for libraries if it’s good.

Ebsco is not the first in this space. I think, probably, Springshare has the leg up – so far. Ebsco won’t be the last in this space, either. I know of two vendors who are poised to announce their product.

The opportunity for library-specific content management systems is huge, though. Open source is still such an incredibly steep hill for libraries that installing, maintaining, and customizing a superior platform like WordPress requires too much involvement (and I am going to say this without any first-hand experience with Stacks, but I can’t believe Ebsco will break free of the vendor-wide pattern). So, because library websites fail to convert and library professionals lack the expertise to solve that problem themselves, the market is ripe for the picking.

This is part of a trend I’ve warned about in my last few posts, the last podcast (called “Your front end is doomed”), and so on all the way back to my once optimistic observation of the Library as Interface: libraries are losing control of their most important asset – the gate.

Libraries are so concerned with being help-desk level professionals that they are ignoring the in-house opportunity for design and development expertise, and are unable to comprehend the role that expertise plays in libraries’ independence.

I titled this post “WordPress could be libraries’ best bet against losing their independence to vendors” because WordPress — more so than Drupal — is the easiest platform through which to learn how to develop custom solutions. There are more developers, cheap conferences worldwide, and ubiquitous meetups; more sites run WordPress than any other platform on the internet; and it is easy-ish to use out of the box while still capable of scaling for complexity.

These in-house skills are crucial for the libraries’ ability to say “no” over the long term.

Open Knowledge Foundation: Measuring the openness of government data in southern Africa: the experience of a GODI contributor

Thu, 2017-02-23 10:22

The Global Open Data Index (GODI) is one of our core projects at Open Knowledge International. The index measures and benchmarks the openness of government data around the world. As we complete the review phase of the audit of government data, we are soliciting feedback on the submission process. Tricia Govindasamy shares her experience submitting to #GODI16.

Open Data Durban (ODD), a civic tech lab based in Durban, South Africa, received the opportunity from Open Knowledge International (OKI) to contribute to the Global Open Data Index (GODI) 2016 for eight (8) southern African countries. OKI defines GODI as “an annual effort to measure the state of open government data around the world.” With a fast approaching deadline, I was eager to take up the challenge of measuring the openness of specified datasets as made available by the governments of South Africa, Botswana, Namibia, Malawi, Zambia, Zimbabwe, Mozambique and Lesotho.

This intense data wrangling consisted of finding the state of open government data for the following datasets: National Maps, National Laws, Government Budget, Government Spending, National Statistics, Administrative Boundaries, Procurement, Pollutant Emissions, Election Results, Weather Forecast, Water Quality, Locations, Draft Legislation, Company Register, Land Ownership. A quick calculation: 15 datasets multiplied by 8 individual countries results in 120 surveys! As you can imagine, this repetitive task took hours of Google searches until late hours of the night (the best and most productive time for data wrangling, I reckon), resulting in my sleep pattern being completely messed up. Nonetheless, I got the task done. Here are some of the findings.

Part of the survey for Pollutant Emissions in South Africa

Trends

The African Development Bank developed Open Data Portals for most of the 8 countries. At first sight, these portals are quite impressive with data visualisations and graphics; however, they are poorly organised and rarely updated. For most countries, the environmental departments are lagging, as there are barely any records on Pollutant Emissions or Water Quality. Datasets on Weather Forecasts and Land Ownership are only available for half of the countries. In some situations, sections of the datasets were not available. For example, while both South Africa and Malawi had data on land parcel boundaries, there was no data on property value or tenure type.

It was quite shocking to note that Company Register, an important dataset that can help monitor fraud as it relates to trade and industry, was unavailable for all the countries with the exception of Lesotho.

The National Laws dataset was found for all countries with the exception of Mozambique, whereas Draft Legislation data was not available in Mozambique, Namibia and Botswana. I believe the availability of data on National Laws for almost all the countries can in part be attributed to the African Legal Information Institute, which has contributed to making legislation open and has created websites for South Africa, Lesotho, Malawi and Zambia. Also, while Government Budget and Expenditure data are available, important detailed information such as transactions is lacking for most countries.

On a more positive note, Election Results compiled by independent electoral commissions were the easiest data to find and were generally up to date for all countries except Mozambique, for which I found no results.

It is important to note that none of the datasets for any of the 8 countries are openly licensed or in the public domain, which points to the need for more education on the importance of the matter.

Challenges

OKI has a forum in which Network members from around the world discuss projects and also ask and resolve questions. I must admit, I took full advantage of this since I am a new member of the community with my training wheels still on. The biggest challenge I faced during this process was searching for Mozambique’s government data. I had to resort to using Google Translate to find relevant data sources since all the data are published in Portuguese, Mozambique’s national language.

Due to the language barrier, I felt certain things were lost in translation, thus not providing a fair depiction of the survey. Luckily, OKI members from Brazil will be reviewing my submission to verify the data sources.

Tricia Govindasamy submitting to GODI on behalf of 8 countries in southern Africa.

Being South African and having prior knowledge of available government data made the process much easier when I submitted for South Africa. I already knew where to find the data sources even though many of the sources did not show up on simple Google searches. I do not have experience with government data from the 7 other countries, so I relied solely on Google searches, which may or may not have surfaced every available source of data in the first few pages of results.

The part of the survey where I felt my efforts really did not provide much insight into the Index was the situations where I found no datasets. If no datasets are found, the survey asks to “provide the reason that the data are not collected by the government”. I did not have any evidence to sufficiently substantiate an answer, and contacting government departments in a variety of countries to get an answer was simply not practical at the time.

I would like to thank OKI for giving Open Data Durban the opportunity to contribute to GODI. It was a fulfilling experience, as it is a volunteer-based programme for people around the world. It is always great to know that the open data community extends beyond just Durban or South Africa and is an international community that is always collaborating on projects with a joint objective of advocating for open data.

LibUX: Listen: Your Front End is Doomed (33:10)

Wed, 2017-02-22 21:56

Metric alum Emily King @emilykingatcsn swings by to chat with me about conversational UI and “interface aggregation” – front ends other than yours letting users connect with your service without ever actually having to visit your app. We cover a lot: API-first, considering the role of tone in voice user interfaces, and — of course — predicting doom.

You can also download the MP3 or subscribe to Metric: A UX Podcast on Overcast, Stitcher, iTunes, YouTube, Soundcloud, Google Music, or just plug our feed straight into your podcatcher of choice.

LITA: Jobs in Information Technology: February 22, 2017

Wed, 2017-02-22 20:09

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Yale University, Sterling Memorial Library, Workflow Analyst/Programmer, New Haven, CT

Penn State University Libraries, Nursing and Allied Health Liaison Librarian, University Park, PA

St. Lawrence University, Science Librarian, Canton, NY

Louisiana State University, Department Head/Chairman, Baton Rouge, LA

Louisiana State University, Associate Dean for Special Collections, Baton Rouge, LA

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Evergreen ILS: Evergreen 2.12 beta is released

Wed, 2017-02-22 18:55

The Evergreen community is pleased to announce the beta release of Evergreen 2.12 and the beta release of OpenSRF 2.5. The releases are available for download and testing from the Evergreen downloads page and from the OpenSRF downloads page. Testers must upgrade to OpenSRF 2.5 to test Evergreen 2.12.

This release includes the implementation of acquisitions and booking in the new web staff client in addition to many web client bug fixes for circulation, cataloging, administration and reports. We strongly encourage libraries to start using the web client on a trial basis in production. All functionality is available for testing with the exception of serials and offline circulation.

Other notable new features and enhancements for 2.12 include:

  • Overdrive and OneClickdigital integration. When configured, patrons will be able to see ebook availability in search results and on the record summary page. They will also see ebook checkouts and holds in My Account.
  • Improvements to metarecords that include:
    • improvements to the bibliographic fingerprint to prevent the system from grouping different parts of a work together and to better distinguish between the title and author in the fingerprint;
    • the ability to limit the “Group Formats & Editions” search by format or other limiters;
    • improvements to the retrieval of e-resources in a “Group Formats & Editions” search;
    • and the ability to jump to other formats and editions of a work directly from the record summary page.
  • The removal of advanced search limiters from the basic search box, with a new widget added to the sidebar where users can see and remove those limiters.
  • A change to topic, geographic and temporal subject browse indexes that will display the entire heading as a unit rather than displaying individual subject terms separately.
  • Support for right-to-left languages, such as Arabic, in the public catalog. Arabic has also become a new officially-supported language in Evergreen.
  • A new hold targeting service supporting new targeting options and runtime optimizations to speed up targeting.
  • In the web staff client, the ability to apply merge profiles in the record bucket merge and Z39.50 interfaces.
  • The ability to display copy alerts when recording in-house use.
  • The ability to ignore punctuation, such as hyphens and apostrophes, when performing patron searches.
  • Support for recognition of client time zones,  particularly useful for consortia spanning time zones.

With release 2.12, minimum requirements for Evergreen have increased to PostgreSQL 9.3 and OpenSRF 2.5.

For more information about what will be available in the release, check out the draft release notes.

Many thanks to all of the developers, testers, documenters, translators, funders and other contributors who helped make this release happen.

DPLA: Michele Kimpton to Lead Business Development Strategy at DPLA

Wed, 2017-02-22 16:00

The Digital Public Library of America is pleased to announce that Michele Kimpton will be joining its staff as Director of Business Development and Senior Strategist beginning March 1, 2017.

In this critical role, Michele will be responsible for developing and implementing business strategies to increase the impact and reach of DPLA. This will include building key strategic partnerships, creating new services and exploring new opportunities, expanding private and public funding, and developing community support models, both financial and in-kind. Together these important activities will support DPLA’s present and future.

“We are truly fortunate to have someone of Michele’s deep experience, tremendous ability, and stellar reputation join DPLA at this time,” said Dan Cohen, DPLA’s Executive Director. “Along with the rest of the DPLA staff, I look forward to working with Michele to strengthen and expand our community and mission.”

Prior to joining DPLA, Michele Kimpton worked as Chief Strategist for LYRASIS and CEO of DuraSpace, where she developed several new cloud-based managed services for the digital library community, and developed new sustainability and governance models for multiple open source projects. Kimpton is a founding member of both the National Digital Stewardship Alliance (NDSA) and the IIPC (International Internet Preservation Consortium). In 2013, Kimpton was named a Digital Preservation Pioneer by the NDIIPP program at the Library of Congress. She holds an MBA from Santa Clara University, and a Bachelor of Science in Mechanical Engineering from Lehigh University. She can now be reached at michele dot kimpton at dp dot la.

Welcome, Michele!

DuraSpace News: INTRODUCING Fedora 4 Ansible

Wed, 2017-02-22 00:00

From Yinlin Chen, Software Engineer, Digital Library Development, University Libraries, Virginia Tech

LITA: Only a week left to sign up for the joint LITA and ACRL supercomputing webinar

Tue, 2017-02-21 20:48

What’s so super about supercomputing? A very basic introduction to high performance computing

Presenters: Jamene Brooks-Kieffer and Mark J. Laufersweiler
Tuesday February 28, 2017
2:00 pm – 3:30 pm Central Time

Register Online, page arranged by session date (login required)

This 90 minute webinar provides a bare-bones introduction to high-performance computing (HPC) or supercomputing. This program is a unique attempt to connect the academic library to introductory information about HPC. Librarians who are learning about researchers’ data-intensive work will want to familiarize themselves with the computing environment often used to conduct that work. Bibliometric analysis, quantitative statistical analysis, and geographic data visualizations are just a few examples of computationally-intensive work underway in humanities, social science, and science fields.

Covered topics will include:

  • Why librarians should care about HPC
  • HPC terminology and working environment
  • Examples of problems appropriate for HPC
  • HPC resources at institutions and nation-wide
  • Low-cost entry-level programs for learning distributed computing

Details here and Registration here

Jamene Brooks-Kieffer brings a background in electronic resources to her work as Data Services Librarian at the University of Kansas.

Dr. Mark Laufersweiler has, since the Fall of 2013, served as the Research Data Specialist for the University of Oklahoma Libraries.


Look here for current and past LITA continuing education offerings

Questions or Comments?

contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org
contact ACRL at (312) 280-2522 or Margot Conahan, mconahan@ala.org

District Dispatch: Papers AND passwords, please…

Tue, 2017-02-21 19:27

The Department of Homeland Security is increasingly demanding without cause that non-citizens attempting to lawfully enter the U.S. provide border officials with their electronic devices and the passwords to their private social media accounts. Today, ALA is pleased to join 50 other national public interest organizations – and nearly 90 academic security, technology and legal experts in the US and abroad – in a statement condemning these activities and the policy underlying them.

Source: shutterstock

Linked below, the statement calls the policy (first articulated by DHS Secretary John Kelly at a February 7 congressional hearing) a “direct assault on fundamental human rights.” It goes on to warn that the practice also will violate the privacy of millions of U.S. citizens and persons in their social networks and will encourage the governments of other nations to retaliate against Americans in kind.

For the statement’s signatories, the literal bottom line is: “The first rule of online security is simple: do not share your passwords. No government agency should undermine security, privacy, and other rights with a blanket policy of demanding passwords from individuals.”

Click here to read the full statement.

Additional Resources:

Tech, advocacy groups slam DHS call to demand foreign traveler’s passwords
By Ali Breland Feb 21, 17

Electronic Media Searches at Border Crossings Raise Worry
By The Associated Press Feb. 18, 2017

What Are Your Rights if Border Agents Want to Search Your Phone?
By Daniel Victor  Feb. 14, 2017

‘Give Us Your Passwords’
The Atlantic by Kaveh Waddell  Feb 10, 2017

The post Papers AND passwords, please… appeared first on District Dispatch.

DPLA: DPLAfest 2017 Program Now Available

Tue, 2017-02-21 18:00
The DPLAfest 2017 program of sessions, workshops, lightning talks, and more is now available!

Taking place at Chicago Public Library’s Harold Washington Library Center on April 20 and 21, DPLAfest 2017 will bring together librarians, archivists, and museum professionals, developers and technologists, publishers and authors, educators, and many others to celebrate DPLA and its community of creative professionals.

We received an excellent array of submissions in response to this year’s call for proposals and are excited to officially unveil the dynamic program that we have lined up for you. Look for opportunities to engage with topics such as social justice and digital collections; public engagement; library technology and interoperability; metadata best practices; ebooks; and using digital collections in education and curation projects.

DPLAfest 2017 presenters represent institutions across the country – and as far away as Europe – but also include folks from some of our host city’s premier cultural and educational institutions, including the Art Institute of Chicago, the Field Museum and Chicago State University. We are also grateful for the support and collaboration of DPLAfest hosting partners Chicago Public Library, the Black Metropolis Research Consortium, Chicago Collections, and the Reaching Across Illinois Library System (RAILS).

View the DPLAfest 2017 program and register to reserve your spot today.

District Dispatch: Changes to copyright liability calculus counterproductive

Tue, 2017-02-21 16:24

ALA – as part of the Library Copyright Alliance (LCA) – submitted a second round of comments in the Copyright Office’s study on the effectiveness of the notice and takedown provisions of Section 512. In its comments, LCA argues that the effectiveness of federal policies to improve access to information and enhance education (such as the National Broadband Plan adopted by the FCC in 2010, ConnectEd and the expansion of the E-rate program) would have been seriously compromised without Section 512. Accordingly, LCA again opposes changes to Section 512 that are not required by the DMCA and that could upset the present balance the statute attempts to strike between the protection of copyrighted information and its necessary free flow and access over the internet.

Photo credit: Pixabay

Last year the U.S. Copyright Office initiated separate inquiries into several aspects of copyright law relevant to libraries, their users and the public in general. One such important proceeding asked for comment on the part of the Digital Millennium Copyright Act (DMCA) that provides internet service providers (ISPs) and others with a “safe harbor” from secondary copyright liability if they comply with a process that’s become known as “notice and takedown.”

Specifically, Section 512 protects online service providers from liability for the infringing actions of others who use online networks. Libraries are included in this safe harbor because they offer broadband and open access computing to the public. Because of the safe harbor, libraries have been able to provide broadband services to millions of people without the fear of being sued for onerous damages because of infringing user activity.

The Copyright Office has not yet announced a timeline for publication of its findings or recommendations regarding Section 512.

 

The post Changes to copyright liability calculus counterproductive appeared first on District Dispatch.

Open Knowledge Foundation: Using data.world to collaborate on Open Data Day and to showcase work after the event

Tue, 2017-02-21 16:05

March 4th is Open Data Day! Open Data Day is an annual celebration of open data all over the world. For the fifth time in history, groups from around the world will create local events on the day where they will use open data in their communities. Here is a look at how groups can use the data.world platform to identify data sources and collaborate with open data users.

Although the data.world team will only be present at our local Open Data Day in Austin, Texas, everyone at data.world is proud to support the groups that will participate in events all over the world. The platform will make it easier to collaborate on your data projects, connect with the community, and preserve your work for others to build upon after Open Data Day.

For those of you that don’t know us yet, this is central to our vision as a B Corp and Public Benefit Corporation. By setting data.world up in this way, we commit to considering our impact on stakeholders – not only on shareholders – and allow ourselves to publicly report on progress towards our mission in the same way companies report on finances. Our mission is to:

  1. build the most meaningful, collaborative and abundant data resource in the world in order to maximize data’s societal problem-solving utility,
  2. advocate publicly for improving the adoption, usability, and proliferation of open data and linked data, and
  3. serve as an accessible historical repository of the world’s data.

When I reached out to OKI about supporting the event, they suggested that I write some tips on how groups could benefit most from using the platform on Open Data Day and I prepared this short list:  

  • Data discovery and organization: before Open Data Day, search the platform and identify other data sources that are relevant to a project you hope to work on during the event. Create a dataset that includes hypotheses, questions, or goals for your project as well as data and related documentation
  • Explore and query data: as soon as you find a data file, understand its shape and descriptive statistics to determine if the data has the right characteristics for your project as well as query the file directly on data.world using SQL
  • Use the API: interact with data via RStudio or Python programs using the data.world API, or link a Google Sheet to a dataset (if you prefer working locally in a spreadsheet you can do that too); see the Python sketch after this list
  • Communicate effectively: as you work on your project, use discussion threads in the project’s dataset as well as annotate data within the platform so group members have maximum context
  • Showcase your work: including data, notebooks, analysis, and visualizations in a single workspace to preserve what was achieved and permit the community to build on it without unnecessarily repeating the data prep and analysis completed during the event
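
For the API point above, here is a minimal Python sketch (the dataset slug and table name are made-up placeholders; it assumes the datadotworld package is installed and an API token has been configured):

    import datadotworld as dw

    # Download and cache a dataset locally, then inspect its tables as dataframes
    dataset = dw.load_dataset('my-org/open-data-day-demo')
    print(list(dataset.dataframes))

    # Or run a SQL query directly against data.world without downloading everything
    results = dw.query('my-org/open-data-day-demo',
                       'SELECT * FROM some_table LIMIT 10')
    print(results.dataframe.head())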

If you’d like to see some relevant examples on data.world I would suggest looking at this dataset from the Anti-defamation League, this analysis of Cancer Clinical Trials, and this Data for Democracy project around Drug Spending.

I’d love to see your projects on data.world so tag @len in a discussion on your dataset or invite me to be a read-only contributor. If you have questions, email help@data.world and you’ll get the attention of our whole team as your feedback goes right into our company Slack.

Hopefully data.world helps your group be more productive on Open Data Day and also sustain momentum from the event afterwards.

Open Knowledge Foundation: Europe in the age of Tr… Transparency

Tue, 2017-02-21 10:01

For the past few years, the USA has been an example of how governments can manage open government initiatives and open data particularly. They have done this by introducing positions like federal chief information officer and chief data officer. Opening datasets on a massive scale in a standardised format laid the ground for startups and citizen apps to flourish. Now, when referring to the example of the US, it is common to add ‘under Obama’s administration’ with a sigh. Initiatives to halt data collection put the narrative on many sensitive issues such as climate change, women’s rights or racial inequality under threat. Now, more than ever, the EU should take a global lead with its open data initiatives.

One of these initiatives just took place last week: developers of civic apps from all over Europe went on a Transparency Tour of Brussels. Participants were the winners of the app competition that was held at TransparencyCamp EU in Amsterdam last June. In the run-up to the final event, 30 teams submitted their apps online while another 40 teams were created in a series of diplohacks that Dutch embassies organised in eight countries. If you just asked yourself ‘what is a diplohack?’, let me explain.

ConsiliumVote team pitching their app at TCampEU, by EU2016NL

Diplohacks are hackathons where developers meet diplomats – with initial suspicion from both sides. Gradually, both sides come to understand how they can benefit from this cooperation. As much as the word ‘diplohack’ itself brings two worlds together, the event was foremost an ice breaker between the communities. According to the survey of participants, direct interaction is what both sides enjoyed the most. Diplohacks helped teams to find and understand the data, and also enabled data providers to see points for improvement, like a better interface or adding relevant data fields to their datasets.

Experience the diplohack atmosphere by watching this short video:

All winners of the app competition were invited last week for the transparency tour at the EU institutions. The winning teams were Citybik.es, which makes use of bike data; Harta Banilor Publici (Public Spending Map) in Romania; and ConsiliumVote, a visualization tool for the votes in the Council of the EU. Developers were shown the EU institutions from the inside, but the most exciting part of it was a meeting with the EU open data steering committee.

Winners of the app competition at the Council of EU, by Open Knowledge Belgium

Yet again, it proved how important it is to meet face to face and discuss things. Diplomats encouraged coders to use their data more. Tony Agotha, a member of the cabinet of First Vice-President Frans Timmermans, praised coders for the social relevance of their work and reminded them of it. Developers, in turn, provided feedback with both specific comments, like making the search on the Financial Transparency website possible across years, and general ideas, such as making the platform of the European Data Portal open source so that regional and municipal portals can build on it.

‘Open data is not a favour, it’s a right,’ said one of the developers. To use this right, we need more meetings between publishers and re-users, we need community growth, we need communication of data and ultimately, more data. TransparencyCamp Europe and last week’s events in Brussels were good first steps. However, both EU officials and European citizens using data should keep the dialogue going if we want to take up the opportunity for the EU to lead on open data. Your comments and ideas are welcome. Join the discussion here.


Terry Reese: MarcEdit Mac Update

Tue, 2017-02-21 06:34

It seems like I’ve been making a lot of progress wrapping up some of the last major features missing from the Mac version of MarcEdit.  The previous update introduced support for custom/user defined fonts and font sizes which I hope went a long way towards solving accessibility issues.  Today’s update brings plugin support to MarcEdit Mac.  This version integrates the plugin manager and provides a new set of templates for interacting with the program.  Additionally, I’ve migrated one of the Windows plugins (Internet Archive to HathiTrust Packager) to the new framework.  Once the program is updated, you’ll have access to the current plugins.  I have 3 that I’d like to migrate, and will likely be doing some work over the next few weeks to make that happen.

Interested in seeing what the plugin support looks like? See: https://youtu.be/JM-0i5KLm74

You can download the file from the downloads page (http://marcedit.reeset.net/downloads) or via the automatic updating tool in the program.

Questions?  Let me know.

–tr

DuraSpace News: VIVO Updates for Feb 19–Camp, Wiki, Ontology

Tue, 2017-02-21 00:00

From Mike Conlon, VIVO Project Director
