Feed aggregator

District Dispatch: Net neutrality, e-rate hot topics again in Washington

planet code4lib - Mon, 2017-07-31 14:00

Telecommunications policy has figured prominently in the Washington Office’s work recently. Most visibly, ALA participated actively with scores of other organizations, companies and trade associations in a nationwide “Day of Action” on July 12 to let the Federal Communications Commission know that we strongly oppose its pending anti-net neutrality proposal, and filed initial comments (joined by the American Association of Law Libraries and COSLA) with the FCC to that effect. Recently, both the House and Senate held committee hearings at which we anticipated ALA priority issues – most notably net neutrality and potential changes in the E-rate program – would be prominently discussed, as they were. We worked with key members of Congress serving on these committees to submit questions and background material ahead of the hearings to be placed in their official records. More on these strategic committee meetings follows:

Senate Holds Nominations Hearings for Three FCC Commissioners

The Senate Commerce committee recently held a hearing on three nominees to the FCC who would fill out the current vacancies at the Commission. Two of those tapped, FCC Chairman Ajit Pai and former FCC Commissioner Jessica Rosenworcel, are well known to the Senate and ALA. The third nominee, Brendan Carr, has not previously served as a Commissioner though he has been an attorney at the Commission since 2012. All three nominees are expected to be confirmed by the Senate.

ALA noted with interest the dialogue surrounding E-rate and net neutrality at the hearing. While all three nominees agreed that E-rate continues to be an important conduit for affordable broadband to libraries and schools, Chairman Pai and nominee Carr declined to commit to maintaining its present funding level or to taking a “hands-off” approach to changing E-rate modernization orders just adopted in 2015 and not yet fully implemented. Rosenworcel, a longtime supporter of the E-rate program, noted that “the future belongs to the connected. No matter who you are or where you live in the country, you need access to modern communication for a fair shot at 21st century success.”

Chairman Pai also declined to commit to any firm position on net neutrality, as the Commission has only just begun reviewing the millions of public comments submitted on his proposal to effectively reverse current law assuring net neutrality, a law strongly backed by ALA.

FCC Oversight and Reauthorization at House Subcommittee

The three sitting FCC Commissioners – Chairman Ajit Pai, Commissioners Mignon Clyburn and Michael O’Rielly – appeared last week before the House Energy and Commerce Subcommittee on Communications and Technology. They addressed a range of telecommunications issues, with net neutrality figuring especially prominently in the hearing. The Commissioners received numerous questions on the issue from both Republicans and Democrats on the Subcommittee. As noted above, ALA continues to oppose any legislation that would reverse the 2015 FCC Open Internet Order.

At the hearing, full Committee Chairman Greg Walden (R-OR) expressed interest in bi-partisan legislation to address net neutrality. Chairman Walden noted that “it’s time for Congress to call a halt on the back-and-forth and set clear net neutrality ground rules for the internet.” There appears, however, to be very little interest among Democratic members in joining the Chairman.

Several senior Subcommittee Democrats criticized the FCC proposed rule to reverse the 2015 Order. Rep. Peter Welch (D-VT) questioned “Why change the existing regime where everyone agrees that there is an open internet?” Rep. Mike Doyle (D-PA) criticized Chairman Pai for proceeding on “an agenda that is anti-consumer, anti-small business, anti-competition, anti-innovation, and anti-opportunity.” Also echoing ALA’s position on net neutrality were Senior Democrats Rep. Frank Pallone (D-NJ) and Rep. Anna Eshoo (D-CA).

Congresswoman Eshoo recently hosted a net neutrality roundtable in her California district. Director of Redwood City Public Library Derek Wolfgram joined the panel to discuss the importance of net neutrality for libraries.

The House Subcommittee also questioned the FCC commissioners on a discussion draft of legislation that would reauthorize the Commission. The Republican draft, not yet introduced, would reauthorize the FCC through 2022 and implement procedural changes at the Commission. The FCC was last reauthorized in 1990.

ALA will continue to work with E-rate and net neutrality supporters in the House and Senate over the coming months. Stay tuned as these issues develop.

The post Net neutrality, e-rate hot topics again in Washington appeared first on District Dispatch.

Hugh Rundle: Identify. Not too much. Mostly collections.

planet code4lib - Sun, 2017-07-30 04:17

This week I’ve been using one of the tools I learned about at VALA Tech Camp to clean up membership data as part of a migration we’re doing at MPOW from the aging Amlib system to Koha ILS. OpenRefine is a powerful data cleaning tool that can be thought of as Excel with regex. As we’ve been working on getting data migration right, we hit a bit of a snag. In Koha, each discrete part of a member’s address is in a different field:

  • street number
  • street address1
  • street address2
  • city (suburb)
  • state
  • postcode

etc...

This is quite typical of most databases storing address data. Amlib, however, is a fairly old system with some data architecture that is ...interesting. For whatever reason, the address data coming out of Amlib is all stored in a single field. Combine that with multiple possibilities for how addresses can appear, and literally twenty years of data entry into a single free text field, and I’m sure you can imagine how consistent the data is. Working out how to split it out in a sane way that minimises work later has been time consuming. Part of the problem is having to consider all the possible ways an address may have been entered. Even when library staff have followed procedures correctly, the data is still inconsistent - for example some records list a phone number first, then the street address, whereas others include no phone number. Consider all of the following possibilities, which are all ‘correct’:

0401 234 567, 1 Main St, Booksville, 3020

0401234567, 1 Main St, Booksville, 3020

1 Main St, Booksville, 3020

90123456, 1 Main St, Booksville, 3020

9012 3456, 1 Main St, Booksville, 3020

9012-3456, 1 Main St, Booksville, 3020

There are thousands of examples of all these types of records within our 130,000 member records. Initially, it looked like these were the major differences. Urban and suburban Australia tends to have very few street numbers above 999, partially because streets often change their name when they hit a new suburb. I wrote a quick regex query in OpenRefine to find every record where the first four characters didn’t include a space, and created a new column with the part before the first comma for records matching that query. That was fine until I realised that “P.O. Box 123” would appear to be a phone number under this rule, so I adjusted to exclude anything with a space or a full stop. That was the easy bit. Addresses aren’t as simple as you might think:

1 Main St, Booksville, 3020

Unit 1, 10 Main St, Booksville, 3020

1/10 Main St, Booksville, 3020

F1/10 Main St, Booksville, 3020

F1 10 Main St, Booksville, 3020

Unit 1, The Mews, 1 Main St, Booksville, 3020

1 Main St, Booksville, Vic, 3020

Welcome to regex hell. After a bit of trial and error, I eventually split out the ‘number’ from each address. There are some edge cases - where the address information somehow ended up with no commas at all, or was incomplete - that we will need to clean up manually, but that’s probably about 4,000 out of 130,000, which isn’t so bad. I’ll post something on GitHub at some point with some of the formulas I used to clean the data up for import - for when all you Amlib libraries move over to Koha, amirite?
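
In the meantime, here is a rough Python sketch of the kind of rules involved. It is not the actual GREL I used in OpenRefine - the patterns and the digit threshold are simplified assumptions rather than the real migration logic - but it shows the general shape of the phone-number rule and the ‘number’ split described above:

import re

# Very rough pattern for the leading 'number' portion of an address. It copes
# with "1", "1/10", "F1/10", "F1 10" and "Unit 1, 10 ..." style prefixes, but
# deliberately gives up on oddities like "Unit 1, The Mews, 1 Main St" -
# those fall into the manual clean-up pile.
NUMBER = re.compile(
    r'^(?P<number>(?:Unit\s+\d+,\s*)?[A-Za-z]?\d+[A-Za-z]?'
    r'(?:\s*/\s*[A-Za-z]?\d+[A-Za-z]?|\s+\d+)?)\s+(?P<street>.+)$'
)

def strip_phone(raw):
    # The rule from the post: if the first four characters contain neither a
    # space nor a full stop, treat everything before the first comma as a
    # phone number. The digit count is an extra guard (my own addition) so
    # that "Unit 1, ..." is not mistaken for a phone number.
    head = raw[:4]
    if ' ' not in head and '.' not in head:
        first, _, rest = raw.partition(',')
        if sum(ch.isdigit() for ch in first) >= 6:
            return first.strip(), rest.strip()
    return None, raw

def split_address(raw):
    phone, address = strip_phone(raw)
    match = NUMBER.match(address)
    if match:
        return phone, match.group('number'), match.group('street')
    return phone, None, address  # no luck - needs manual attention

for raw in ('0401 234 567, 1 Main St, Booksville, 3020',
            'Unit 1, 10 Main St, Booksville, 3020',
            'F1/10 Main St, Booksville, 3020',
            'Unit 1, The Mews, 1 Main St, Booksville, 3020'):
    print(split_address(raw))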

A need to know basis

Going through this process has helped me to keep top of mind something that all librarians, indeed everyone working with any database of personal information, needs to constantly question:

What data are we storing about people, and do we need to store it?

For example, public libraries generally record a member’s sex or gender (no distinction is generally made, though usually it’s labelled as ‘sex’ but actually means ‘gender’). Why? Do I need to know the gender of a member in order to provide information, advice or assistance? The only real argument I’ve heard for this is that it assists in finding members in the database when they do not have their membership card, but that seems to be a fairly weak argument for storing a sometimes intensely personal data point that isn’t always readily ascertained, and can change over time. Of course, most public libraries, in Australia at least, aren’t necessarily able to make decisions like this alone. National, state and local government standards about ‘minimum data sets’ define what must or at least should be collected, sometimes seemingly in contradiction of privacy standards. Once we ask this question of whether we need to store certain data at all, however, another one pops up, in some ways just as important.

How are we storing this data about people?

I don’t mean databases vs index cards here. What was frustrating me about migrating user address data was the process of normalising it. Koha wants address data to be chopped into discrete data points - street number, street name, city/suburb, etc. Amlib just stores it as one field, so I need to ‘normalise’ the Amlib data to fit into Koha’s database model. These questions of course feed into each other. Why you want the data affects how you record it. How you record it affects how it can be used. In the case of postal addresses this is pretty innocuous. The fact that Koha chops it up like this makes it much easier to correctly format postal addresses on library notices, and allows the system to conform to many different postal service standards in terms of whether the street number is listed first, or the state before the postcode or after it, for example.

But normalising, by definition, smooths out inconvenient differences in how information is turned into data. Consider the gender data point - the overwhelming majority of systems (and the Australian national government data standards) allow, at most, three options - male, female, or ‘not known’. O'Reilly's Introduction to SQL book even uses gender as an example of a data point that only has two possible options. Note that the assumption here is that if someone’s gender is known, then it must be binary - either male or female - so if it is known that you identify as something else it has to be recorded incorrectly. This is why Tim Sherratt cautioned in A Map and Some Pins that even “open data” needs to be viewed critically and its biases interrogated:

...open data must always, to some extent, be closed. Categories have been determined, data has been normalised, decisions made about what is significant and why. There is power embedded in every CSV file, arguments in every API. This is inevitable. There is no neutral position.

There is no neutral position. This is the case whether we are describing people in our user databases or people within the books that line our shelves. Under pressure from librarians and the ALA, the Library of Congress decided in March 2016 to replace the term “Illegal aliens” with the terms “Noncitizens” and “Unauthorized immigration”. In the middle of a nasty Presidential election campaign, this was inevitably controversial.

When we classify items in our collections, we are deciding as much what terms not to use as we are deciding what terms to use. When Sherratt says we are determining categories, he is also pointing out that we have determined what categories are not used, not appropriate, not valid. When we decide what is significant, we also decide what is insignificant. Every act of classification is an act of erasure. Every ‘finding aid’ also helps to hide things.

Never normalise

Discussions about and changes in how people are described in library collections - whether due to their sexuality and gender, their ethnicity, or their religion - are important, but insufficient. The terms we use to classify people within our collections can affect the broader discourse. But it isn’t just in our collections that we classify, catalogue, and define. Every piece of data recorded about library users is a potential landmine. “People who are Jewish”, “People whose gender identity doesn't match their biological sex”, and “People who read books about socialism” are all identities that have been a death sentence at various times and places. As former NSA chief, General Michael Hayden, put it so clearly: “We kill people based on metadata”. If you’re keeping data about users, you need to think about the worst case scenario, and mitigate against it.

Jarrett M Drake, addressing the British Columbia Library Association earlier this year and seeing the danger, had a simple piece of advice: “Never normalize”:

...the rising tide of fascism should offer pause regarding the benefits of normalized data that can easily be piped from one system to another. Local languages, taxonomies, and other forms of knowledge that only people within specific communities can decipher might well be a form of resistance in a country where a president not only advocates for a Muslim database but also for “a lot of systems… beyond databases.” In other words, a world of normalized data is a best friend to the surveillance state that deploys the technologies used to further fascist aspirations.

Identities can be exciting, empowering, and comforting. But they can also be stifling, exclusive, or dangerous. An identity can be something you embrace, accept, or create, but it can just as easily be something that is given to or stamped upon you - sometimes literally. Identity is Pride March and St Patrick’s Day, but it’s also the Cronulla riots and Auschwitz tattoos. In libraries, as well as other cultural and memory institutions like archives and museums, we must take care in how we identify objects and people.

In these public institutions there is no neutral position. Every identity is dangerous. Every database is a site of erasure. Every act is a political act.

District Dispatch: What’s NAFTA got to do with it?

planet code4lib - Sat, 2017-07-29 16:45

Many may not realize that trade treaties can impact copyright law by not including exceptions that are important for library services, research, user access, and fair use. So, when the U.S. Trade Representative (USTR) asked for comments before negotiations to re-write the North American Free Trade Agreement (NAFTA) get underway, the Library Copyright Alliance (LCA) took the opportunity to provide our perspective in a letter. Our message hasn’t changed—Congress put exceptions in the copyright law for a reason, so trade negotiators, don’t mess around with our copyright law, even when interested parties urge you to do so.

“The Trading Post” by Clinton Steeds is licensed under CC BY 2.0

During Trans-Pacific Partnership (TPP) negotiations in 2012, LCA was happy to see that balanced copyright was recognized as a desirable element of the treaty through the inclusion of library exceptions, including fair use, in the treaty language:

“Each Party shall endeavor to achieve an appropriate balance in its copyright and related rights system, inter alia by means of limitations or exceptions that are consistent with Article 18.65 (Limitations and Exceptions), including those for the digital environment, giving due consideration to legitimate purposes such as, but not limited to: criticism; comment; news reporting; teaching, scholarship, research, and other similar purposes; and facilitating access to published works for persons who are blind, visually impaired, or otherwise print disabled.” (TPP Article 18.66).

The Library Copyright Alliance recommended to NAFTA negotiators that this same language be included in the treaty. In addition, LCA asked that first sale or “exhaustion” be addressed. This is the U.S. exception that allows librarians to lend books, and more broadly allows consumers with lawfully acquired copies of a work the right to distribute that work without authorization. Without exhaustion, there would be no eBay, no Salvation Army collection centers and no second-hand book stores. If included in the treaty, we would advance first sale policy into the international realm which would be interesting because many countries do not have first sale in their respective copyright laws. Of course, that would be a baby step.

LCA also submitted comments on intermediary safe harbors that ensure libraries will not be held liable for the actions of library users. Additionally, LCA addressed copyright term, the public domain, and DRM (digital rights management).

This is just the beginning of a trade negotiation process that will be hidden from the public—unless parts of the treaty are leaked (which often occurs). Only private sector players can negotiate, so it is extremely important to have library concerns that represent the public interest on record. Once the treaty is approved, it will still have to pass the Senate by a two-thirds vote. The Senate’s option will be “take it or leave it,” because the treaty cannot be modified without going back to the drawing board to seek each country’s approval of any changes. Because the current administration has made trade a priority, we may see a trade treaty negotiated more quickly than usual. LCA will follow its developments.

The post What’s NAFTA got to do with it? appeared first on District Dispatch.

Tara Robertson: UBC’s Open Dialogues Series: How to make open content accessible

planet code4lib - Fri, 2017-07-28 19:23

A couple of months ago I had the pleasure of chatting with the folks from the Centre for Teaching and Learning at UBC about accessibility, universal design for learning and inclusion. I’m really happy with how this video turned out. I love that captioning is now part of their production workflow, and not an afterthought. Yay born accessible content!

I’m also thrilled that the Accessibility Toolkit I co-wrote with Sue Doner and Amanda Coolidge has been remixed by UBC  for their guide on creating accessible resources.

Evergreen ILS: Evergreen 2.11.7 and 2.12.4 released

planet code4lib - Fri, 2017-07-28 19:05

The Evergreen community is pleased to announce two maintenance releases of Evergreen: 2.11.7 and 2.12.4.

Evergreen 2.12.4 has the following changes improving on Evergreen 2.12.3:
  • A fix to a web client bug where adding copies through the Add Volumes and Copies menu item could fail silently.
  • A fix to a bug that allowed users to access some web client admin interfaces without a login.
  • A fix to the display of the loan duration and fine level fields in the web client Item Status Detail view.
  • A fix to the display of duplicate data on the bib record View Holds page when toggling between the holds and OPAC view.
  • A fix to a bug that prevented the web client patron registration page from loading.
  • Support for Org Unit Includes alert text, notice text, event text, header text, and footer text in the web client print templates.
  • A fix to make the web client MARC Editor’s flat text editor selection sticky.
  • A fix to make the Patron Search library selector sticky.
  • A fix to a bug in the web client that prevented the user from saving a new copy after using the MARC Edit Add Item option.
  • A fix to a patron registration bug that did not require the entry of a required user statistical category for stat cats that do not allow free-text entries.
  • The addition of the bad barcode image file in the web client.
  • An improvement to the MARC Batch Edit progress indicator to reduce the likelihood of system backlogs.
  • Downloading checkout history as a CSV from My Account has been fixed for users with a large circulation history. Previously, this would time out for patrons with more than 100 or so circulations.
  • A fix to syntax in the Spanish lang.dtd file that was creating an error when using the Closed Date Editor.
  • Improvements to CSS to silence some Mozilla extension warnings.
  • A fix to a failure to update targeted circulations when utilizing the recall functionality.
  • The addition of text wrapping in the copy details table on the bib record to prevent contents from falling off the page.
  • A fix to the adjust to zero option so that it can be applied correctly to multiple billings.
  • A fix to the “Hold/Copy Ratio per Bib and Pickup Library (and Descendants)” data source so that it will now include counts of eligible copies at locations that are not a pickup library for bib’s holds.
  • A fix to the XUL client Item Status → Alternate View → Holds / Transit tab so that it properly refreshes all data when switching between copies.

Note that any report templates using the “Hold/Copy Ratio per Bib and Pickup Library (and Descendants)” reporting source will need to be recreated for the change to be effective.

Evergreen 2.11.7 includes the following changes improving on 2.11.6:
  • Improvements to CSS to silence some Mozilla extension warnings.
  • A fix to a failure to update targeted circulations when utilizing the recall functionality.
  • The addition of text wrapping in the copy details table on the bib record to prevent contents from falling off the page.
  • A fix to the adjust to zero option so that it can be applied correctly to multiple billings.
  • A fix to the “Hold/Copy Ratio per Bib and Pickup Library (and Descendants)” data source so that it will now include counts of eligible copies at locations that are not a pickup library for the bib’s holds.

Please visit the downloads page to view the release notes and retrieve the server software and staff clients.

Brown University Library Digital Technologies Projects: Python 2 => 3

planet code4lib - Fri, 2017-07-28 17:58

We’ve recently been migrating our code from Python 2 to Python 3. There is a lot of documentation about the changes, but these are changes we had to make in our code.

Print

First, the print statement had to be changed to the print function:

print 'message'

became

print('message')

Text and bytes

Python 3 changed bytes and unicode text handling, so here are some changes related to that:

json.dumps required a unicode string, instead of bytes, so

json.dumps(xml.serialize())

became

json.dumps(xml.serialize().decode('utf8'))

basestring was removed, so

isinstance("", basestring)

became

isinstance("", str)

This change to explicit unicode and bytes handling affected the way we opened files. In Python 2, we could open and use a binary file, without specifying that it was binary:

open('file.zip')

In Python 3, we have to specify that it’s a binary file:

open('file.zip', 'rb')

Some functions couldn’t handle unicode in Python 2, so in Python 3 we don’t have to encode the unicode as bytes:

urllib.quote(u'tëst'.encode('utf8'))

became

urllib.quote('tëst')

Of course, Python 3 reorganized parts of the standard library, so the last line would actually be:

urllib.parse.quote('tëst')

Dicts

There were also some changes to Python dicts. The keys() method now returns a view object, so

dict.keys()

became

list(dict.keys())

dict.iteritems()

also became

dict.items()

Virtual environments

Python 3 has virtual environments built in, which means we don’t need to install virtualenv anymore. There’s no activate_this.py in Python 3 environments, though, so we switched to using django-dotenv instead.
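
For example, here is a minimal sketch (not taken from our actual code) of creating an environment with the standard library's venv module, which is the programmatic equivalent of running python3 -m venv env from the shell:

import venv

# Create ./env with pip installed, like "python3 -m venv env" on the command line.
venv.create('env', with_pip=True)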

Miscellaneous

Some more changes we made include imports:

from base import * => from .base import *

function names:

func.func_name => func.__name__

and exceptions:

exception.message => str(exception)

except Exception, e => except Exception as e

Optional

Finally, there were optional changes we made. Python 3 uses UTF-8 encoding for source files by default, so we could remove the encoding line from the top of files. Also, the unicode u” prefix is allowed in Python 3, but not necessary.

District Dispatch: Email privacy protection measures introduced in Senate

planet code4lib - Fri, 2017-07-28 15:06

Nearly six months ago, the Email Privacy Act (H.R. 387) was approved overwhelmingly in the House. Now, bipartisan legislation just introduced in the Senate goes further. It fully incorporates and significantly expands the protections laid out in H.R. 387 to comprehensively update the 1986 Electronic Communications Privacy Act (ECPA). The “ECPA Modernization Act of 2017” was co-authored by Sens. Mike Lee (R-UT) and Patrick Leahy (D-VT). It will be referred to the Senate Judiciary Committee on which both serve.

ALA has long been a staunch supporter of comprehensive ECPA reform, which has been proposed but failed to pass in the past several Congresses. President James Neal greeted the milestone introduction with this public statement:


“No freedoms are more vital, and important to librarians, than those of inquiry and speech. Without real privacy, Americans effectively have neither. Current law that allows our government to get and view the full content of our most private electronic communications without a search warrant isn’t just outdated, it’s dangerous in a democracy. ALA strongly supports the bipartisan Lee/Leahy “ECPA Modernization Act” to finally and fully bring the Electronic Communications Privacy Act – and with it our fundamental rights to privacy, inquiry and speech – into the modern era.”

Like the House’s bill, the ECPA Modernization Act will for the first time require a warrant for authorities to access the content of many forms of electronic communications not now protected. It also goes further to impose a similar requirement for “geo-location” information from cell phones. In addition, among other important new measures outlined on Sen. Lee’s website, the bill puts “teeth” in the cell phone location clause by permitting courts to suppress such evidence if acquired in an illegal warrantless search.

No action in the Judiciary Committee is anticipated on the bill before the Senate recesses for its August break. ALA and fellow public and private sector members of the Digital Due Process coalition collectively will be pushing hard in the fall, however, for adoption of this potentially landmark legislation. (You can read many of their statements of support for the bill here.)

As a hedge against this ambitious reform package stalling, supporters also introduced a second bill identical to H.R. 387 as adopted by the House in February. Were the Senate to pass that more limited but still valuable measure, it would move directly to the President’s desk for signature. The broader ECPA Modernization Act, if passed by the Senate, would require further consideration and approval by the House. Its currently broader scope could make that difficult.

The post Email privacy protection measures introduced in Senate appeared first on District Dispatch.

District Dispatch: New legislation would protect your right to research

planet code4lib - Thu, 2017-07-27 20:53

ALA applauds the introduction of the Fair Access to Science and Technology Research Act (FASTR). Reps. Mike Doyle (D-PA), Kevin Yoder (R-KS), and Zoe Lofgren (D-CA) introduced the bipartisan legislation as H.R. 3427 yesterday.

FASTR would ensure that, when taxpayers fund scientific research, they are able to freely access the results of that research. Every federal agency that significantly funds research would have to adopt a policy to provide for free, online public access to research articles resulting from that public funding.

As our colleagues at SPARC explain:

The government funds research with the expectation that new ideas and discoveries resulting from that research will advance science, stimulate innovation, grow the economy, and improve the lives and welfare of Americans. The Internet makes it possible to advance these goals by providing public online access to federally funded research and has revolutionized information sharing by enabling prompt sharing of the latest advances with every scientist, physician, educator, entrepreneur and citizen.

FASTR would build on the law, first signed by then-President George W. Bush, that created the National Institutes of Health’s Public Access Policy. Subsequently, the White House Office of Science and Technology Policy under then-President Barack Obama directed other agencies to adopt similar plans to make their research transparent. FASTR would codify and strengthen that directive and speed up public access to this important information.

ALA welcomes the growing bipartisan recognition that public access to information accelerates innovation and encourages Congress to “move FASTR.”

The post New legislation would protect your right to research appeared first on District Dispatch.

LITA: Technical Debt: that escalated quickly

planet code4lib - Thu, 2017-07-27 19:00

If you’re not familiar with the term “technical debt”, it’s an analogy coined by Ward Cunningham[1], used to describe what happens when, rather than following best practices and standards, we take shortcuts on technical projects to get a quick fix. Debt occurs when we take on a long-term burden in order to gain something in the short term.

I want to note that inevitably we will always take on some sort of debt, often unknowingly and usually while learning; the phrase “hindsight is 20/20” comes to mind - we see where we went wrong after the fact. There is also inherited technical debt, the bit that you can’t control. In all of my jobs, current and past, I’ve inherited technical debt; it is out of my control, it happens, and I still need to learn how to deal with it. This piece aims to give some guidelines and bits I’ve learned over the years in dealing with technical debt and doing my best to manage it, because really, it’s unavoidable and ignoring it doesn’t make it go away. Believe me, I’ve tried.

Technical debt can refer to many different things including, but not limited to: infrastructure, software, design/UX, or code. Technical debt reduces the long term agility of a team; it forces us to rely on short term solution thinking and make trade-offs for short term agility. When done haphazardly and not managed, technical debt can shut down a team’s ability to move forward on a project - its long term agility.

It accrues quickly and often we don’t realize just how quickly. For example, I’d been tasked with implementing single sign-on (SSO) for a multitude of applications in our library. In the process of mapping out the path of action, I learned that in order to implement the bits we needed for SSO, most of the applications needed to be updated; the newer versions weren’t compatible with the version of PHP running on our servers; and to use a compatible version of PHP we needed to upgrade our server, which was a major upgrade that meant a full server upgrade and migration. Needless to say, SSO has not yet been implemented. This technical debt accrued from a previous admin’s past decisions to not stay on top of the upgrades for many of our applications, because short term hacks were put in place and the upgrades would break those hacks. These decisions to take on technical debt ultimately caught up with us and halted the ability to move forward on a project. Whether the debt is created under your watch or inherited, it will eventually need to be addressed.

The decisions that are made which result in technical debt should be made with a strategic engineering perspective. Technical debt should only be accrued on purpose, when it enables some business goal; in practice, though, it is taken on both intentionally and unintentionally. Steve McConnell’s talk on Managing Technical Debt [2] does a good job of laying out the business and technical aspects of taking on technical debt. Following that, ideally there should be a plan in place on how to reasonably reduce the debt down the road. If technical debt is left unaddressed, at some point the only light at the end of the tunnel is to declare bankruptcy - analogically, just blow it up and start over.

Technical debt is always present, it’s not always bad either but it’s always on the verge of getting worse. It is important to have ways of hammering through it, as well as having preventative measures in place to keep debt to a minimum and manageable for as long as possible.

So how do you deal with it?

Tips for dealing with inherited technical debt:

  • Define it. What counts as technical debt? Why is it important to do something about it?
  • Take inventory, know what you’re working with.
  • Prioritize your payoffs. Pick your technical battles carefully, which bits need addressing NOW and which bits can be addressed at a later date?
  • Develop a plan on what and how you’re going to address and ultimately tidy up the debt.
  • Track technical debt. However you track it, make sure you capture enough detail to identify the problem and why it needs to be fixed.

Preventative tips to avoiding technical debt (as much as you can):

  • Before taking on debt ask yourself…
    • Do we have estimates for the debt and non-debt options?
    • How much will the quick & dirty option cost now? What about the clean options?
    • Why do we believe that it is better to incur the effort later than to incur it now? What is expected to change to make taking on that effort more palatable in the future?
    • Have we considered all the options?
    • Who’s going to own the debt?
  • Define initial requirements in a clear and consistent style. A good example of this is Gherkin: https://cucumber.io/docs/reference
  • Create best practices. Some examples:  KISS (Keep It Simple Stupid), DRY (Don’t Repeat Yourself), YAGNI (You Aren’t Gonna Need it)
  • Have a standard, an approved model of taking shortcuts, and stick to it. Remember to also reevaluate that standard periodically, what once was the best way may not always be the best way.
  • Documentation. A personal favorite: the “why-and” approach. If you take a temporary (but necessary) shortcut, make note of it and explain why you did what you did and what needs to be done to address it (see the short sketch after this list). Your goal is to avoid having someone look at your code/infrastructure/digital records/etc and asking “why is it like that?” Also for documentation, a phenomenal resource (and community) is Write The Docs (http://www.writethedocs.org/guide)
  • Allow for gardening. Just as you would with a real garden you want to tidy up things in your projects sooner rather than later. General maintenance tasks that can be done to improve code/systems/etc now rather than filed on the low priority “to-do” list.
  • TESTS! Write/use automated tests that will catch bugs and issues before your users. I’m a fan of using tools like Travis CI (https://travis-ci.org/), Cucumber (https://cucumber.io/docs), Fiddler (http://www.telerik.com/fiddler) and Nagios (https://www.nagios.org/)  for testing and monitoring. Another resource recommended to me (thanks Andromeda!)  is Obey the Testing Goat (http://www.obeythetestinggoat.com/pages/book.html#toc)
  • Remember to act slower than you think. Essentially, think through how something should be done before actually doing it.
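
To make the “why-and” idea concrete, here is a tiny sketch - the endpoint, ticket number and names are invented for illustration, not taken from a real project:

# WHY: the vendor's v2 patron API (a hypothetical example) rejects requests
# from the older PHP proxy still running on our server, so we are pinned to
# the v1 endpoint for now.
# AND: once the server upgrade is done, point LOOKUP_URL at the v2 endpoint
# and delete this workaround (tracked in the made-up ticket LIB-123).
LOOKUP_URL = "https://example.org/api/v1/patrons"  # not a real endpoint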

And my final thought, commonly referred to as the boy scout rule, when you move on from a project or team and someone else inherits what you leave behind, do your best to leave it better than when you found it.

Footnote:
  1. Ward Cunningham, Explains Debt Metaphor [Video] http://wiki.c2.com/?WardExplainsDebtMetaphor
  2. Managing Technical Debt by Steve McConnell (slides) http://2013.icse-conferences.org/documents/publicity/MTD-WS-McConnell-slides.pdf
Extra Reading/Tools:

How to deal with technical debt? by Vlad Alive https://m.vladalive.com/how-to-deal-with-technical-debt-33bc7787ed7c

Obey the Testing Goat by Harry Percival  http://www.obeythetestinggoat.com/pages/book.html#toc

How to write a good bug report? Tips and Tricks http://www.softwaretestinghelp.com/how-to-write-good-bug-report/

Tools & Services list https://www.stickyminds.com/tools-guide

Don’t take the technical debt metaphor too far http://swreflections.blogspot.com/2012/12/dont-take-technical-debt-metaphor-too.html 

David Rosenthal: Decentralized Long-Term Preservation

planet code4lib - Thu, 2017-07-27 16:46
Lambert Heller is correct to point out that:
name allocation using IPFS or a blockchain is not necessarily linked to the guarantee of permanent availability, the latter must be offered as a separate service.

Storage isn't free, and thus the "separate services" need to have a viable business model. I have demonstrated that increasing returns to scale mean that the "separate service" market will end up being dominated by a few large providers just as, for example, the Bitcoin mining market is. People who don't like this conclusion often argue that, at least for long-term preservation of scholarly resources, the service will be provided by a consortium of libraries, museums and archives. Below the fold I look into how this might work.

These institutions would act in the public interest rather than for profit, and thus somehow be exempt from the effects of increasing returns to scale. Given the budget pressures these institutions are under, I'm skeptical. But lets assume that they are magically exempt.

The whole point of truly decentralized peer-to-peer systems is that they cannot be centrally managed; for example by a consortium of libraries. A system of this kind needs management that arises spontaneously by the effect of its built-in incentives on each individual participant. Among the functions that this spontaneous management needs to perform for a long-term storage service is to ensure that:
  • the storage resources needed to meet the demand are provided,
  • they are replaced as they fail or become obsolete,
  • each object is adequately replicated to ensure its long-term viability,
  • the replicas maintain suitable geographic and organizational diversity,
  • the software is maintained to fix the inevitable vulnerabilities,
and that the software is upgraded as the computing infrastructure evolves through time. Note that these are mostly requirements on the network as a whole rather than on individual peers. The SEC's report on Initial Coin Offerings recognizes similar needs:
Investors in The DAO reasonably expected Slock.it and its co-founders, and The DAO’s Curators, to provide significant managerial efforts after The DAO’s launch. The expertise of The DAO’s creators and Curators was critical in monitoring the operation of The DAO, safeguarding investor funds, and determining whether proposed contracts should be put for a vote. Investors had little choice but to rely on their expertise.

By contract and in reality, DAO Token holders relied on the significant managerial efforts provided by Slock.it and its co-founders, and The DAO’s Curators, as described above.

Even in the profit-driven world of crypto-currencies, the incentive from profit doesn't always lead to consensus (see the issue of increasing the Bitcoin block size, and the DAO heist), or to the provision of resources to meet the demand (see Bitcoin's backlog of unconfirmed transactions). Since we have assumed away the profit motive, and all we have left is a vague sense of the public interest, the built-in incentives powering the necessary functions will be weak.

This lack of effective governance is a problem in the short-term world of crypto-currency speculation (see the surplus GPUs flooding the market as Ethereum miners drop out). It is a disaster in digital preservation, where the requirement is to perform continuously and correctly over a time-scale of many technology generations. Human organizations can survive much longer time-scales; 8 years ago my University celebrated its 800-th birthday. Does anybody believe we'll be using Bitcoin or Ethereum 80 years from now as it celebrates its 888-th?

We have experience in these matters. Seventeen years ago we published the first paper describing the LOCKSS peer-to-peer digital preservation system. At the software level it was, and has remained through its subsequent evolution, a truly decentralized system. All peers are equal, no peer trusts any other, peers discover others through gossip-style communication. At the management and organizational level, however, formal structures arose such as the LOCKSS Alliance, the MetaArchive and the CLOCKSS Archive to meet real-world demand for the functions above to be performed in a reliable and timely fashion.

Trying by technical means to remove the need to have viable economics and governance is doomed to fail in the medium- let alone the long-term. What is needed is a solution to the economic and governance problems. Then a technology can be designed to work in that framework. Blockchain is a technology in search of a problem to solve, being pushed by ideology into areas where the unsolved problems are not technological.

District Dispatch: The 2017 Congressional App Challenge is live!

planet code4lib - Thu, 2017-07-27 14:55

The 2017 Congressional App Challenge is live!

The App Challenge is an annual congressional initiative to encourage student engagement in coding and computer science through local events hosted by the Members of Congress.

Between now and November 1, high school students from across the country will be busy creating an app for mobile, tablet or computer devices.

This year, there are over 165 Members of Congress signed up to participate in the launch! Check to see if your district is participating. If not, we encourage you to connect with your Representative to make sure that s/he does sign up. The App Challenge website also has a library letter template you can use to send to your Member of Congress.

How does it work?
Students work solo and in teams to turn a personal interest or social issue into an app that solves a problem or adds another layer to something they are interested in. In past years students developed apps that help reduce the impact of disease in developing countries; guide you through choosing the best soccer cleats online; allow chemistry students to learn the history of atoms in virtual reality; translate American sign language into other languages; monitor allergies by scanning product barcodes; and organize your recipe collection.

Every participating district has a winner who is recognized by their Member of Congress and many come to Washington to exhibit their winning app and meet with their Member during the #HouseofCode celebration. The Challenge is sponsored by the Internet Education Foundation and supported by ALA as part of our Libraries Ready to Code (RtC) initiative.

Why code at the library?
Through the Libraries Ready to Code work, we have heard from libraries all over the country about the variety of ways they facilitate coding programs for youth. The programs are as varied as the libraries and the communities they serve. What we have learned (and what our current RtC Phase III grant program is now promoting!) is that library coding programs should incorporate basic RtC concepts. The App Challenge is a perfect way to bring coding into your library and expose kids to the opportunities coding can open up.

Whether you already have coding programs at your library or not, you can get teens excited about the App Challenge. In addition to building an app, the Challenge introduces teens to the idea of connecting with their elected officials in a fun and creative way. Participating in the Challenge can pave the way for future civic engagement on issues that matter to the teens you work with. At last year’s #HouseofCode event, three young men had designed a climate change strategy game, Code Carbon, and were very excited to talk to their Representative about where she stands on climate change.

Interested?
There are lots of ways libraries can encourage students to participate in the Challenge! Host an App Challenge event, an “app-a-thon,” a game night for teens to work on their apps, or start an app building club. Students wishing to participate work through their Member of Congress who must sign up.

Again, check to see if your district is participating and connect with your Representative to make sure that s/he does sign up.

If you do participate we want to hear about it! Share using the App Challenge hashtag #CAC17 and ALA’s hashtag #readytocode. The App Challenge runs through November 1.

The post The 2017 Congressional App Challenge is live! appeared first on District Dispatch.

Open Knowledge Foundation: Open Data for Tax Justice design sprint: building a pilot database of public country-by-country reporting

planet code4lib - Thu, 2017-07-27 14:36

Tax justice advocates, global campaigners and open data specialists came together this week from across the world to work with Open Knowledge International on the first stages of creating a pilot country-by-country reporting database. Such a database may enable anyone to understand the activities of multinational corporations and uncover potential tax avoidance schemes. 

This design sprint event was part of our Open Data for Tax Justice project to create a global network of people and organisations using open data to improve advocacy, journalism and public policy around tax justice in line with our mission to empower civil society organisations to use open data to improve people’s lives. In this post my colleague Serah Rono and I share our experiences and learnings from the sprint. 

 

What is country-by-country reporting?

Image: Financial Transparency Coalition

Country-by-country reporting (CBCR) is a transparency mechanism which requires multinational corporations to publish information about their economic activities in all of the countries where they operate. This includes information on the taxes they pay, the number of people they employ and the profits they report. Publishing this information can bring to light structures or techniques multinational corporations might be using to avoid paying tax in certain jurisdictions by shifting their profits or activities elsewhere.

In February 2017, Open Knowledge International published a white paper co-authored by Alex Cobham, Jonathan Gray and Richard Murphy which examined the prospects for creating a global public database on the tax contributions and economic activities of multinational companies as measured by CBCR.

The authors found that such a public database was possible and concluded that a pilot database could be created by bringing together the best existing source of public CBCR information – disclosures made by European Union banking institutions in line with the Capital Requirements Directive IV (CRD IV) passed in 2013.  The aim of our design sprint was to take the first steps towards the creation of this pilot database.

 

What did we achieve?

From left to right: Tim Davies (Open Data Services), Jonathan Gray (University of Bath/Public Data Lab), Tommaso Faccio (University of Nottingham/BEPS Monitoring Group), Oliver Pearce (Oxfam GB), Elena Gaita (Transparency International EU), Dorcas Mensah (University of Edinburgh/Tax Justice Network – Africa) and Serah Rono (Open Knowledge International). Photo: Stephen Abbott Pugh

A design sprint is intended to be a short and sharp process bringing together a multidisciplinary team in order to quickly prototype and iterate on a technical product.

On Monday 24th and Tuesday 25th July 2017, Open Knowledge International convened a team of tax justice, advocacy, research and open data experts at Friends House in London to work alongside developers and a developer advocate from our product team. This followed three days of pre-sprint planning and work on the part of our developers. All the outputs of this event are public on Google Drive, Github and hackmd.io.

To understand more from those who had knowledge of trying to find and understand CRD IV data, we heard expert presentations from George Turner of Tax Justice Network on the scale of international tax avoidance, Jason Braganza of Tax Justice Network – Africa and Financial Transparency Coalition on why developing countries need public CBCR (see report for more details) and Oliver Pearce of Oxfam Great Britain on the lessons learned from using CRD IV data for the Opening the vaults and Following the money reports. These were followed by a presentation from Adam Kariv and Vitor Baptista of Open Knowledge International on how they would be reusing open-source tech products developed for our Open Spending and OpenTrials projects to help with Open Data for Tax Justice.

Next we discussed the problems and challenges the attendees had experienced when trying to access or use public CBCR information before proposing solutions to these issues. This led into a conversation about the precise questions and hypotheses which attendees would like to be able to answer using either CRD IV data or public CBCR data more generally.

From left to right: Georgiana Bere (Open Knowledge International), Adam Kariv (Open Knowledge International), Vitor Baptista (Open Knowledge International). Photo: Stephen Abbott Pugh

As quickly as possible, the Open Knowledge International team wanted to give attendees the knowledge and tools they needed to be able to answer these questions. So our developers Georgiana Bere and Vitor Baptista demonstrated how anyone could take unstructured CRD IV information from tables published in the PDF version of banks’ annual reports and follow a process set out on the Github repo for the pilot database to contribute this data into a pipeline created by the Open Knowledge International team.

Datapackage-pipelines is a framework – developed as part of the Frictionless Data toolchain – for defining data processing steps to generate self-describing Data Packages. Once attendees had contributed data into the pipeline via Github issues, Vitor demonstrated how to write queries against this data using Redash in order to get answers to the questions they had posed earlier in the day.
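
To give a flavour of what such a query looks like, here is a purely illustrative Python sketch - the table and column names are invented and the figures are made up, so this only shows the shape of the kind of per-jurisdiction aggregation a Redash query against the pilot database might perform:

import sqlite3

# Hypothetical, simplified CBCR rows: (bank, jurisdiction, profit, tax_paid).
rows = [
    ('Example Bank', 'Luxembourg', 120.0, 2.0),
    ('Example Bank', 'Germany', -15.0, 0.0),
    ('Example Bank', 'Kenya', 8.0, 2.5),
]

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE crd_iv (bank TEXT, jurisdiction TEXT, profit REAL, tax_paid REAL)')
conn.executemany('INSERT INTO crd_iv VALUES (?, ?, ?, ?)', rows)

# The sort of aggregation a dashboard query might run: where does this bank
# report its profits, and how much tax does it pay there?
query = '''
    SELECT jurisdiction, SUM(profit) AS profit, SUM(tax_paid) AS tax_paid
    FROM crd_iv
    WHERE bank = 'Example Bank'
    GROUP BY jurisdiction
    ORDER BY profit DESC
'''
for row in conn.execute(query):
    print(row)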

 

Storytelling with CRD IV data

Evidence-based, data-driven storytelling is an increasingly important mechanism used to inform and empower audiences, and encourage them to take action and push for positive change in the communities they live in. So our sprint focus on day two shifted to researching and drafting thematic stories using this data.

Discussions around data quality are commonplace in working with open data. George Turner and Oliver Pearce noticed a recurring issue in the available data: the use of hyphens to denote both nil and unrecorded values. The two spent part of the day thinking about ways to highlight the issue and guidelines that can help overcome this challenge so as to avoid incorrect interpretations.

Open data from a single source often has gaps, so combining it with data from additional sources often helps with verification and to build a stronger narrative around it. In light of this, Elena Gaita, Dorcas Mensah and Jason Braganza narrowed their focus to examine a single organisation to see whether or not this bank changed its policy towards using tax havens following a 2012 investigative exposé by a British newspaper. They achieved this by comparing data from the investigation with the bank’s 2014 CRD IV disclosures. In the coming days, they hope to publish a blogpost detailing their findings on the extent to which the new transparency requirements have changed the bank’s tax behaviour.

 

Visual network showing relation between top 50 banks and financial institutions who comply with Capital Requirements Directive IV (CRD IV) and countries in which they report profits. Image: Public Data Lab

To complement these story ideas, we explored visualisation tools which could help draw insights and revelations from the assembled CRD IV data. Visualisations often help to draw attention to aspects of the data that would have otherwise gone unnoticed. Oliver Pearce and George Turner studied the exploratory visual network of CRD IV data for the EU’s top 50 banks created by our friends at Density Design and the Public Data Lab (see screengrab above) to learn where banks were recording most profits and losses. Pearce and Turner quickly realised that one bank in particular recorded losses in all but one of its jurisdictions. In just a few minutes, the finding from this visual network sparked their interest and encouraged them to ask more questions. Was the lone profit-recording jurisdiction a tax haven? How did other banks operating in the same jurisdiction fare on the profit/loss scale in the same period? We look forward to reading their findings as soon as they are published.

 

What happens next?

The Open Data for Tax Justice network team are now exploring opportunities for collaborations to collect and process all available CRD IV data via the pipeline and tools developed during our sprint. We are also examining options to resolve some of the data challenges experienced during the sprint like the perceived lack of an established codelist of tax jurisdictions and are searching for a standard exchange rate source which could be used across all recorded payments data.

In light of the European Union Parliament’s recent vote in favour of requiring all large multinational corporations to publish public CBCR information as open data, we will be working with advocacy partners to join the ongoing discussion about the “common template” and “open data format” for future public CBCR disclosures which will be mandated by the EU.

Having identified extractives industry data as another potential source of public CBCR to connect to our future database, we are also heartened to see the ongoing project between the Natural Resource Governance Institute and Publish What You Pay Canada so will liaise further with the team working on extracting data from these new disclosures.

Please email contact@datafortaxjustice.net if you’d like to be added to the project mailing list or want to join the Open Data for Tax Justice network. You can also follow the #OD4TJ hashtag on Twitter for updates.

 

Thanks to our partners at Open Data for Development, Tax Justice Network, Financial Transparency Coalition and Public Data Lab for the funding and support which made this design sprint possible.

 

             

 

In the Library, With the Lead Pipe: Editorial: Recent Reads

planet code4lib - Wed, 2017-07-26 16:17

It’s summer in the northern hemisphere, and your editors at In the Library with the Lead Pipe are busy keeping up with the influx of patrons, with improving our instruction programs, and with other joys of summer. As always, we’re also thinking of ways librarians can improve our profession.

Here’s a few recent articles that we’ve been revisiting and think you might also enjoy reading or revisiting. If you have other reading recommendations, feel free to suggest them in the comments.

 

Terry Reese: MarcEdit Updates (all)

planet code4lib - Wed, 2017-07-26 16:09

I’ve posted updates for all versions. Windows and Linux updates for 6.3.x went out Sunday evening, and updates to MacOS for 2.5.x on Wednesday morning. Change log below:

Windows/Linux:

* Bug Fix: MarcEditor: Convert clipboard content to….: The change in control caused this to stop working – mostly because the data container that renders the content is a rich object, not plain text like the function was expecting.  Missed that one.  I’ve fixed this in the code.
* Enhancement: Extract Selected Records:  Connected the exact match to the search by file
* Bug Fix: MarcEditor: Right to left flipping wasn’t working correctly for Arabic and Hebrew if the codes were already embedded into the file.
* Update: Cleaned up some UI code.
* Update: Batch Process MarcXML: respecting the native versus the XSLT options.

MacOS Updates:

* Bug Fix: MarcEditor: Right to left flipping wasn’t working correctly for Arabic and Hebrew if the codes were already embedded into the file.
* Update: Cleaned up some UI code.
* Update: Batch Process MarcXML: respecting the native versus the XSLT options.
* Enhancement: Exact Match searching in the Extract, Delete Selected Records tool
* Enhancement: Exact Match searching in the Find/Replace Tool
* Enhancement: Work updates in the Linked data tool to support the new MAC proposal

–tr

David Rosenthal: Initial Coin Offerings

planet code4lib - Tue, 2017-07-25 15:00
The FT's Alphaville blog has started a new series, called ICOmedy, looking at the insanity surrounding Initial Coin Offerings (ICOs). The blockchain hype has created an even bigger opportunity to separate the fools from their money than the dot-com era did. To motivate you to follow the series, there are some extracts and related links below the fold.

So far the series includes:
  • ICOs and the money markets:
    how can you determine fair relative value or what the no-arbitrage condition for a multitude of crypto currencies should be if they bear no income potential whatsoever? They have no time value of money in the ordinary sense.

    If and when they do bear interest it is derived not from lending to a productive industry but to short sellers — and this is done at heterogeneous rates across varying exchanges and at varying risk. There is no uniform base lending rate. Everything is arbitrary. Worse than that, the lack of income equates the whole thing to a casino-style game of chance, with ongoing profits entirely dependent on ongoing capital inflows from external sources.
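    (A minimal present-value sketch after this list of extracts makes the “no income” point concrete.)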
  • In the crypto world, you can get something for nothing:
    you have some cash, and I have a “token”. The token is worthless. It has no purpose or function. There’s a big label on the token that says, “this token cannot be used for anything”. And we exchange the two, and so I end up with your cash, and you end up with nothing, and for some reason you’re happy with the transaction. ... This is a pretty accurate description of an “initial coin offering” (ICO) that has raised $200m worth of cryptocurrency. The company behind it is called block.one ... In an earlier post, we likened an initial coin offering to a Kickstarter campaign. Investors hand over their money, and in return get some sort of access to the product when it’s finished. The access is granted by a token that can be used with the software being developed. Block.one’s initial coin offering is different. There’s a token, but it can’t actually be used for anything. This is from the FAQs:
    The EOS Tokens do not have any rights, uses, purpose, attributes, functionalities or features, express or implied, including, without limitation, any uses, purpose, attributes, functionalities or features on the EOS Platform.

    You might want to read that over a couple of times, keeping in mind that investors have spent over $200m buying these “EOS Tokens”.
  • From dot.comedy to ICOmedy…:
    mainstream media coverage of the crypto phenomenon has all focused on the similarities with the dotcom mania of the late 90s, which came to a head in the Spring of 2000. ... Sure, there was a mania, and stocks went to comical valuations, and thousands and thousands of people thought they had become overnight millionaires, only to discover they weren’t. Yes, it was tech-related and people were making fabulous predictions about how the world was going to change. ... But during the dotcom era it was clear that the world was changing, for real. Old skool, analogue businesses like Barnes & Noble were getting Amazon-ed. It was clear that all forms of business were already being revolutionised as the digital age dawned. The trouble was that greed and a herd-like mentality sent the public markets potty for a time.

    The crypto craze is different. It has grown from fringe libertarian philosophy, preaching that any and all government is a bad thing, and that all our current systems where society is organised centrally will soon be replaced by loose ‘non-trusting’ digital networks and protocols that transcend the nation state. ... State sovereignty is not going to disappear. Democratic government is generally a good way for nations to organise their affairs. Dollars will buy you food and energy for the foreseeable.
  • What does a crypto startup do with $230m?:
    You’ve probably never heard of Tezos before. It’s a “new decentralized blockchain” that’s apparently better than all the other blockchains, and last week, it completed a $230m fundraising. ... If the sum of money raised was a guarantor of success, then Tezos would now be a sure bet. It’s the biggest ICO to-date. The platform is the brainchild of Kathleen and Arthur Breitman, who previously worked at Accenture and Goldman Sachs respectively. They have been developing it through their venture Dynamic Ledger Solutions since 2014 and if they can get the Tezos blockchain running for three months “substantially as described” in their marketing, they and the other investors in DLS like venture capitalist Tim Draper will make $20m. What they will do with nearly a quarter of a billion dollars isn't clear. Ideas include "Acquire mainstream print and TV media outlets to promote and defend the use of cryptographic ledger in society"!
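
As flagged in the first extract above, the “no income potential” objection can be stated precisely with the standard discounted-cash-flow identity; this is the generic textbook formula, not anything specific to these posts:

```latex
P_0 = \sum_{t=1}^{\infty} \frac{\mathbb{E}[CF_t]}{(1+r)^t}
% A token that is contractually barred from paying out anything has
% \mathbb{E}[CF_t] = 0 for every t, so its fundamental value P_0 is zero.
```

Any positive market price therefore rests entirely on the expectation of reselling to a later buyer at a higher price, which is exactly the casino-style dynamic the first extract describes.
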
[Chart: Ether price]

Leaving aside the daily multi-million dollar heists, of which last Sunday's was $8.4M from Veritaseum, there is the opinion of one of Ethereum's co-founders that the speculative frenzy in Initial Coin Offerings is dangerous:
Initial coin offerings, a means of crowdfunding for blockchain-technology companies, have caught so much attention that even the co-founder of the ethereum network, where many of these digital coins are built, says it’s time for things to cool down in a big way.

“People say ICOs are great for ethereum because, look at the price, but it’s a ticking time-bomb,” Charles Hoskinson, who helped develop ethereum, said in an interview. “There’s an over-tokenization of things as companies are issuing tokens when the same tasks can be achieved with existing blockchains. People are blinded by fast and easy money.”

Firms have raised $1.3 billion this year in digital coin sales, surpassing venture capital funding of blockchain companies and up more than six-fold from the total raised last year, according to Autonomous Research. Ether, the digital currency linked to the ethereum blockchain, surged from around $8 after its ICO at the start of the year to just under $400 last month. It’s since dropped by about 50 percent.

The frenzy around ICOs using Ethereum was so intense that it caused a worldwide shortage of GPUs, but:
Over the past few months, there has been a GPU shortage, forcing the prices of mid-range graphics cards up as cryptocurrency miners from across the world purchased hardware in bulk in search for quick and easy profits.

This has forced the prices of most modern AMD and certain Nvidia GPUs to skyrocket, but now these GPUs are starting to saturate the used market as more and more Ethereum miners sell up and quit mining. Some other miners are starting to look at other emerging Cryptocurrencies, though it is clear that the hype behind Ethereum is dying down.

Earlier this week Ethereum's value dropped below $200, as soon as the currency experienced a new difficulty spike, making the currency 20% harder to mine and significantly less profitable. This combined with its decrease in value has made mining Ethereum unprofitable for many miners, especially in regions with higher than average electricity costs.

As I write, it is back around $225. If you are minded to invest, the FT's Alphaville blog just announced a great opportunity.
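
To see why the difficulty spike plus the price drop squeezes marginal miners, here is a rough back-of-the-envelope sketch; the hashrate, power draw, difficulty, block reward, prices and electricity rates are illustrative assumptions, not measured figures.

```python
SECONDS_PER_DAY = 86_400


def daily_profit_usd(hashrate_hs, difficulty, block_reward_eth,
                     eth_price_usd, power_watts, electricity_usd_per_kwh):
    """Rough expected daily profit for a single mining rig.

    Difficulty is (approximately) the expected number of hashes per block,
    so a rig's expected blocks per day is hashrate * seconds / difficulty.
    """
    expected_eth = hashrate_hs * SECONDS_PER_DAY / difficulty * block_reward_eth
    revenue = expected_eth * eth_price_usd
    power_cost = power_watts / 1000 * 24 * electricity_usd_per_kwh
    return revenue - power_cost


# Illustrative six-GPU rig: ~180 MH/s drawing ~900 W; 5 ETH block reward.
peak = daily_profit_usd(180e6, 1.9e15, 5, 400, 900, 0.15)
squeezed = daily_profit_usd(180e6, 1.9e15 * 1.2, 5, 200, 900, 0.15)
high_power_cost = daily_profit_usd(180e6, 1.9e15 * 1.2, 5, 200, 900, 0.30)
print(f"peak: ${peak:.2f}/day, squeezed: ${squeezed:.2f}/day, "
      f"high electricity cost: ${high_power_cost:.2f}/day")
```

The exact break-even point depends entirely on these assumed inputs, but the direction is clear: the difficulty spike alone trims revenue by roughly a sixth, the price drop does most of the rest, and miners paying above-average electricity rates end up at roughly break-even or below.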


LITA: Please evaluate LITA events @ ALA Annual 2017

planet code4lib - Tue, 2017-07-25 14:50

If you attended the recent 2017 ALA Annual conference in Chicago, thank you for coming to LITA events.

Please do us the big favor of completing our LITA conference program evaluation survey at:

http://bit.ly/litaatannual2017

We hope you had the best ALA Annual conference, and that attending useful, informative and fun LITA programs was an important part of your conference experience. If so, please take a moment to complete our evaluation survey. Your responses are very important to the colleagues planning programming for next year's ALA Annual, as well as LITA's year-round continuing education sessions.

To complete your survey, it might also help to check back at the

Full schedule of LITA programs and meetings

And recall other details at the LITA @ ALA Annual page.

Thank you and we hope to see you at the

LITA Forum in Denver, CO, November 9 – 12, 2017

Questions or Comments?

Contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

Library of Congress: The Signal: Watch Collections as Data: IMPACT Today

planet code4lib - Tue, 2017-07-25 12:59

 

This is a friendly reminder that our 2nd annual Collections as Data event will be livestreamed TODAY starting at 9:30am.

Watch it on the Library of Congress YouTube channel and Facebook page and follow #AsData on Twitter.

Click here for the full agenda, including talks from Ed Ayers, Paul Ford, Sarah Hatton, Tahir Hemphill and Geoff Haines-Stiles.

We’ll see you there!

Terry Reese: MarcEdit 7 Wireframes–XML Functions

planet code4lib - Mon, 2017-07-24 23:05

In this set of wireframes, you can see one of the concepts that I’ll be introducing with MarcEdit 7: wizards. Each wizard is designed to encapsulate a reference interview, with the aim of making tasks like adding new functions to the tool easier. You will find these throughout MarcEdit 7.

XML Functions Window:

XML Functions Wizard Screens:

You’ll notice that one of the options is the new XML/JSON Profiler. This is a new tool that I’ll wireframe later, likely sometime in August 2017.

–tr

Islandora: CLAW Install Sprints: Call for stakeholders

planet code4lib - Mon, 2017-07-24 18:30

The Islandora Foundation is seeking volunteers to serve as stakeholders in our first community sprint, geared towards creating an Ansible-based installation for CLAW. Please see this short document for more information outlining what we hope to accomplish during the sprint and what is expected of stakeholders.

We're scheduling the work for the weeks of August 21st and 28th, just before Labour Day.  If you or your organization is interested in helping us offer an improved installation process while gaining valuable experience working with Islandora CLAW, please add your name to the signup sheet.
