planet code4lib

Planet Code4Lib - http://planet.code4lib.org

David Rosenthal: Initial Coin Offerings

Tue, 2017-07-25 15:00
The FT's Alphaville blog has started a new series, called ICOmedy, looking at the insanity surrounding Initial Coin Offerings (ICOs). The blockchain hype has created an even bigger opportunity to separate the fools from their money than the dot-com era did. To motivate you to follow the series, below the fold there are some extracts and related links.

So far the series includes:
  • ICOs and the money markets:
    how can you determine fair relative value or what the no-arbitrage condition for a multitude of crypto currencies should be if they bear no income potential whatsoever? They have no time value of money in the ordinary sense.

    If and when they do bear interest it is derived not from lending to a productive industry but to short sellers — and this is done at heterogeneous rates across varying exchanges and at varying risk. There is no uniform base lending rate. Everything is arbitrary. Worse than that, the lack of income equates the whole thing to a casino-style game of chance, with ongoing profits entirely dependent on ongoing capital inflows from external sources.
  • In the crypto world, you can get something for nothing:
    you have some cash, and I have a “token”. The token is worthless. It has no purpose or function. There’s a big label on the token that says, “this token cannot be used for anything”. And we exchange the two, and so I end up with your cash, and you end up with nothing, and for some reason you’re happy with the transaction. ... This is a pretty accurate description of an “initial coin offering” (ICO) that has raised $200m worth of cryptocurrency. The company behind it is called block.one ... In an earlier post, we likened an initial coin offering to a Kickstarter campaign. Investors hand over their money, and in return get some sort of access to the product when it’s finished. The access is granted by a token that can be used with the software being developed. Block.one’s initial coin offering is different. There’s a token, but it can’t actually be used for anything. This is from the FAQs:
    The EOS Tokens do not have any rights, uses, purpose, attributes, functionalities or features, express or implied, including, without limitation, any uses, purpose, attributes, functionalities or features on the EOS Platform.

    You might want to read that over a couple of times, keeping in mind that investors have spent over $200m buying these “EOS Tokens”.
  • From dot.comedy to ICOmedy…:
    mainstream media coverage of the crypto phenomenon has all focused on the similarities with the dotcom mania of the late 90s, which came to a head in the Spring of 2000. ... Sure, there was a mania, and stocks went to comical valuations, and thousands and thousands of people thought they had become overnight millionaires, only to discover they weren’t. Yes, it was tech-related and people were making fabulous predictions about how the world was going to change. ... But during the dotcom era it was clear that the world was changing, for real. Old skool, analogue businesses like Barnes & Noble were getting Amazon-ed. It was clear that all forms of business were already being revolutionised as the digital age dawned. The trouble was that greed and a herd-like mentality sent the public markets potty for a time.

    The crypto craze is different. It has grown from fringe libertarian philosophy, preaching that any and all government is a bad thing, and that all our current systems where society is organised centrally will soon be replaced by loose ‘non-trusting’ digital networks and protocols that transcend the nation state. ... State sovereignty is not going to disappear. Democratic government is generally a good way for nations to organise their affairs. Dollars will buy you food and energy for the foreseeable.
  • What does a crypto startup do with $230m?:
    You’ve probably never heard of Tezos before. It’s a “new decentralized blockchain” that’s apparently better than all the other blockchains, and last week, it completed a $230m fundraising. ... If the sum of money raised was a guarantor of success, then Tezos would now be a sure bet. It’s the biggest ICO to date. The platform is the brainchild of Kathleen and Arthur Breitman, who previously worked at Accenture and Goldman Sachs respectively. They have been developing it through their venture Dynamic Ledger Solutions since 2014 and if they can get the Tezos blockchain running for three months “substantially as described” in their marketing, they and the other investors in DLS like venture capitalist Tim Draper will make $20m. What they will do with nearly a quarter of a billion dollars isn't clear. Ideas include "Acquire mainstream print and TV media outlets to promote and defend the use of cryptographic ledger in society"!
Ether price

Leaving aside the daily multi-million dollar heists, of which last Sunday's was $8.4M from Veritaseum, there is the opinion of one of Ethereum's co-founders that the speculative frenzy in Initial Coin Offerings is dangerous:
Initial coin offerings, a means of crowdfunding for blockchain-technology companies, have caught so much attention that even the co-founder of the ethereum network, where many of these digital coins are built, says it’s time for things to cool down in a big way.

“People say ICOs are great for ethereum because, look at the price, but it’s a ticking time-bomb,” Charles Hoskinson, who helped develop ethereum, said in an interview. “There’s an over-tokenization of things as companies are issuing tokens when the same tasks can be achieved with existing blockchains. People are blinded by fast and easy money.”

Firms have raised $1.3 billion this year in digital coin sales, surpassing venture capital funding of blockchain companies and up more than six-fold from the total raised last year, according to Autonomous Research. Ether, the digital currency linked to the ethereum blockchain, surged from around $8 after its ICO at the start of the year to just under $400 last month. It’s since dropped by about 50 percent.

The frenzy around ICOs using Ethereum was so intense that it caused a worldwide shortage of GPUs, but:
Over the past few months, there has been a GPU shortage, forcing the prices of mid-range graphics cards up as cryptocurrency miners from across the world purchased hardware in bulk in search for quick and easy profits.

This has forced the prices of most modern AMD and certain Nvidia GPUs to skyrocket, but now these GPUs are starting to saturate the used market as more and more Ethereum miners sell up and quit mining. Some other miners are starting to look at other emerging Cryptocurrencies, though it is clear that the hype behind Ethereum is dying down.

Earlier this week Ethereum's value dropped below $200, as soon as the currency experienced a new difficulty spike, making the currency 20% harder to mine and significantly less profitable. This combined with its decrease in value has made mining Ethereum unprofitable for many miners, especially in regions with higher than average electricity costs.

As I write, it is back around $225. If you are minded to invest, the FT's Alphaville blog just announced a great opportunity.


LITA: Please evaluate LITA events @ ALA Annual 2017

Tue, 2017-07-25 14:50

If you attended the recent 2017 ALA Annual conference in Chicago, thank you for attending LITA events.

Please do us the large favor of completing our LITA conference programs survey evaluation at:

http://bit.ly/litaatannual2017

We hope you had the best ALA Annual conference, and that attending useful, informative and fun LITA programs was an important part of your conference experience. If so, please take a moment to complete our evaluation survey. Your responses are very important to your colleagues who are planning programming for next year's ALA Annual, as well as LITA's year-round continuing education sessions.

To complete your survey it might also help to check back at the

Full schedule of LITA programs and meetings

And recall other details at the LITA @ ALA Annual page.

Thank you and we hope to see you at the

LITA Forum in Denver, CO, November 9 – 12, 2017

Questions or Comments?

Contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

Library of Congress: The Signal: Watch Collections as Data: IMPACT Today

Tue, 2017-07-25 12:59

This is a friendly reminder that our 2nd annual Collections as Data event will be livestreamed TODAY starting at 9:30am.

Watch it on the Library of Congress YouTube channel and Facebook page and follow #AsData on Twitter.

Click here for the full agenda including talks from Ed Ayers, Paul Ford, Sarah Hatton, Tahir Hemphill and Geoff Haines-Stiles.

We’ll see you there!

Terry Reese: MarcEdit 7 Wireframes–XML Functions

Mon, 2017-07-24 23:05

In this set of wireframes, you can see one of the concepts that I’ll be introducing with MarcEdit 7…wizards. Each wizard is designed to encapsulate a reference interview, with the aim of making it easier to add new functions to the tool. You will find these throughout MarcEdit 7.

XML Functions Window:

XML Functions Wizard Screens:

You’ll notice one of the options is the new XML/JSON Profiler.  This is a new tool that I’ll wireframe later; likely sometime in August 2017.

–tr

Islandora: CLAW Install Sprints: Call for stakeholders

Mon, 2017-07-24 18:30

The Islandora Foundation is seeking volunteers to serve as stakeholders in our first community sprint, geared towards creating an Ansible-based installation for CLAW. Please see this short document for more information outlining what we hope to accomplish during the sprint and what is expected of stakeholders.

We're scheduling the work for the weeks of August 21st and 28th, just before Labour Day.  If you or your organization is interested in helping us offer an improved installation process while gaining valuable experience working with Islandora CLAW, please add your name to the signup sheet.

District Dispatch: Applications for Libraries Ready to Code grants now open

Mon, 2017-07-24 15:50

Those of you who signed up for email updates for the Libraries Ready to Code grant program got a sneak peek of the application last Friday. Today, it’s out for the world! Check out the Libraries Ready to Code website for all the details you’ll need to apply. We are accepting applications now through August 31, 2017.

Libraries interested in applying should read the request for proposals (RFP) linked on the website and fill out the application based on the criteria described in detail in the RFP. Be sure to check out the resources about the Ready to Code work we’ve been doing so you are familiar with the overall Ready to Code objectives and report findings. You should also read over the eligibility requirements carefully to make sure you are eligible before working on the application. Public, K-12 school, and Native and tribal libraries from rural to urban areas are encouraged to apply.

A cohort of 25-50 libraries will be selected to receive grants of up to $25,000 to design and implement youth coding programs that incorporate Ready to Code concepts. Through these programs, the library cohort will collaboratively develop, pilot and rapidly iterate a “Ready to Code” toolkit containing a selection of CS resources for libraries and an implementation guide.

The selection committee that will review the applications and select the grant recipients will be made up of members from the three youth divisions – the American Association of School Librarians (AASL), the Association for Library Service to Children (ALSC) and the Young Adult Library Services Association (YALSA). YALSA will administer the grant program.

Questions? Sign up today for an informational webinar on Tuesday, August 1st at 2:30 Eastern. You will learn what it takes to put together a successful application and you will have a chance to ask the Ready to Code project team questions. The webinar will be archived on the website but we encourage you to attend!

The post Applications for Libraries Ready to Code grants now open appeared first on District Dispatch.

Eric Lease Morgan: Freebo@ND

Mon, 2017-07-24 13:58

This is the initial blog posting introducing a fledgling website called Freebo@ND — a collection of early English print materials and services provided against them. [1]

For the past year a number of us here in the Hesburgh Libraries at the University of Notre Dame have been working on a grant-sponsored project with others from Northwestern University and Washington University in St. Louis. Collectively, we have been calling our efforts the Early English Print Project, and our goal is to improve on the good work done by the Text Creation Partnership (TCP). [2]

“What is the TCP?” Briefly stated, the TCP is/was an organization that set out to make freely available the content of Early English Books Online (EEBO). The desire is/was to create & distribute thoroughly & accurately marked up (TEI) transcriptions of early English books printed between 1460 and 1699. Over time the scope of the TCP project seemed to wax & wane, and I’m still not really sure how many texts are in scope nor where they can all be found. But I do know the texts are being distributed in two phases. Phase I texts are freely available to anybody. [3] Phase II texts are only available to institutions who sponsored the Partnership, but they too will be freely available to everybody in a few years.

Our goals — the goals of the Early English Print Project — are to:

  1. improve the accuracy (reduce the number of “dot” words) in the TCP transcriptions
  2. associate page images (scans/facsimiles) with the TCP transcriptions
  3. provide useful services against the transcriptions for the purposes of distant reading

While I have had my hand in the first two tasks, much of my time has been spent on the third. To this end I have been engineering ways to collect, organize, archive, disseminate, and evaluate our Project’s output. To date, the local collection includes approximately 15,000 transcriptions and 60,000,000 words. When the whole thing is said & done, they tell me I will have close to 60,000 transcriptions and 2,000,000,000 words. Consequently, this is by far the biggest collection I’ve ever curated.

My desire is to make sure Freebo@ND goes beyond “find & get” and towards “use & understanding”. [4] My goal is to provide services against the texts, not just the texts themselves. Locally collecting & archiving the original transcriptions has been relatively trivial. [5] After extracting the bibliographic data from each transcription, and after transforming the transcriptions into plain text, implementing full text searching has been easy. [6] Search even comes with faceted browse. To support “use & understanding” I’m beginning to provide services against the texts. For example, it is possible to download — in a computer-readable format — all the words from a given text, where each word from each text is characterized by its part-of-speech, lemma, given form, normalized form, and position in the text. Using this output, it is more than possible for students or researchers to compare & contrast the use of words & types of words across texts. Because the texts are described in both bibliographic as well as numeric terms, it is possible to sort search results by date, page length, or word count. [7] Additional numeric characteristics are being implemented. The use of “log-likelihood ratios” is a simple and effective way to compare the use of words in a given text with an entire corpus. Such has been implemented in Freebo@ND using a set of words called the “great ideas”. [8] There is also a way to create one’s own sub-collection for analysis, but the functionality is meager. [9]
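
For readers who want a concrete sense of the “log-likelihood ratio” comparison mentioned above, here is a minimal sketch of the standard Dunning-style calculation. The counts, the example word, and the function name are illustrative only; they are not taken from Freebo@ND itself.

```python
import math

def log_likelihood(word_count_text, total_text, word_count_corpus, total_corpus):
    """Dunning-style log-likelihood (G2) comparing a word's frequency in one
    text against its frequency in a reference corpus. Higher scores mean the
    word's usage in the text differs more from the corpus as a whole."""
    a, b = word_count_text, word_count_corpus
    c, d = total_text, total_corpus
    e1 = c * (a + b) / (c + d)   # expected count in the text
    e2 = d * (a + b) / (c + d)   # expected count in the corpus
    g2 = 0.0
    if a > 0:
        g2 += a * math.log(a / e1)
    if b > 0:
        g2 += b * math.log(b / e2)
    return 2 * g2

# e.g. a word appearing 120 times in a 60,000-word text and
# 9,000 times in a 60,000,000-word corpus
print(round(log_likelihood(120, 60_000, 9_000, 60_000_000), 2))
```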

I have had to learn a lot to get this far, and I have had to use a myriad of technologies. Some of these things include: getting along sans a fully normalized database, parallel processing & cluster computing, “map & reduce”, responsive Web page design, etc. This being the initial blog posting documenting the whys & wherefores of Freebo@ND, more postings ought to be coming; I hope to document here more thoroughly my part in our Project. Thank you for listening.

Links

[1] Freebo@ND – http://cds.crc.nd.edu/

[2] Text Creation Partnership (TCP) – http://www.textcreationpartnership.org

[3] The Phase I TCP texts are “best” gotten from GitHub – https://github.com/textcreationpartnership

[4] use & understanding – http://infomotions.com/blog/2011/09/dpla/

[5] local collection & archive – http://cds.crc.nd.edu/freebo/

[6] search – http://cds.crc.nd.edu/cgi-bin/search.cgi

[7] tabled search results – http://cds.crc.nd.edu/cgi-bin/did2catalog.cgi

[8] log-likelihood ratios – http://cds.crc.nd.edu/cgi-bin/likelihood.cgi

[9] sub-collections – http://cds.crc.nd.edu/cgi-bin/request-collection.cgi

Harvard Library Innovation Lab: AALL 2017: The Caselaw Access Project + Perma.cc Hit Austin

Mon, 2017-07-24 06:53

Members of the LIL team, including Adam, Anastasia, Brett and Caitlin, visited Texas this past weekend to participate in the American Association of Law Libraries Conference in Austin. Tacos were eaten, talks were given (and attended) and friends were made over additional tacos.

Brett and Caitlin had the chance to meet dozens of law librarians, court staff and others while manning the Perma.cc table in the main hall:

.@permacc is rocking the booth at #aall17 (thanks @mkmaes)! Come say hi, ask Q’s and hear about Perma’s new commercial option- coming soon! pic.twitter.com/yYO44g9DxT

— perma.cc (@permacc) July 16, 2017

.@CaitlinLaughlin engages an #aall17 attendee- come say hi to us and grab a @permacc pin at table 819! pic.twitter.com/kpxa12eUbs

— perma.cc (@permacc) July 16, 2017

On Monday Adam and Anastasia presented “Case Law as Data: Making It, Sharing It, Using It”, discussing the CAP project and exploring ways to use the new legal data the project is surfacing.

After their presentation they asked those who attended for ideas on ways to use the data and received an incredible response: over 60 ideas were tossed out!

This year’s AALL was a hot spot of good ideas, conversation and creative thought. Thanks AALL and inland Texas!

John Miedema: Ten Years of the OpenBook plugin for WordPress

Sat, 2017-07-22 13:51

Ten years ago I was writing book reviews online and liked to insert a book cover image in the webpage. I would download a cover image from Amazon and link back to the Amazon page. This practice was encouraged by Amazon; it was good for sales. Amazon was quickly becoming the central repository of book data. One could see a time when all online book catalogs became advertising for Amazon.

I decided to create an easy way for people to link to an alternate source of book cover images and data. I built the OpenBook plugin. The Open Library repository of the Internet Archive was selected as a data source because it was a non-profit that used open source practices including open data. WordPress was the content management platform. I published a technical article in the Code4Lib journal. The article generated a lot of interest in the library community. At the time, libraries were paying to insert book data into their online catalogs, even though it promoted the sales of books.
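
OpenBook itself is a PHP plugin running inside WordPress, so the Python sketch below is only an illustration of the kind of Open Library lookups the plugin depends on: building a cover-image URL and fetching basic bibliographic data for an ISBN. The function names and the example ISBN are mine, not OpenBook's.

```python
import requests

def open_library_cover_url(isbn, size="M"):
    """Open Library cover-image URL for an ISBN; sizes are S, M, or L.
    A tiny placeholder image comes back if no cover exists."""
    return f"https://covers.openlibrary.org/b/isbn/{isbn}-{size}.jpg"

def open_library_record(isbn):
    """Fetch basic bibliographic data for an ISBN from the Open Library
    Books API; returns an empty dict if the ISBN is unknown."""
    resp = requests.get(
        "https://openlibrary.org/api/books",
        params={"bibkeys": f"ISBN:{isbn}", "format": "json", "jscmd": "data"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get(f"ISBN:{isbn}", {})

isbn = "9780141439600"  # an arbitrary example ISBN
record = open_library_record(isbn)
print(record.get("title"), open_library_cover_url(isbn))
```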

Three major version upgrades were performed, adding features such as automatic links to related book websites, HTML templates and a stylesheet to standardize the appearance, a WordPress ‘wizard’ to preview the display, and COinS to integrate with external book services like Zotero and OpenURL resolver. I published a second article (pdf) in NISO.

As an open source product, OpenBook enjoyed lively growth in new directions. A Drupal version was created. I was contracted by BookNet Canada to develop a similar plugin for their book repository; BNC BookShare continues to be maintained today. The OpenBook code was posted to GitHub and has been branched for enhancement.

OpenBook has had influence outside the technical sphere. In my initial design I considered using OCLC’s WorldCat as a data source. OCLC is a non-profit serving the library community, so it seemed a good fit. I hesitated because only librarians could add or edit records. As I dug further, I found that OCLC’s business model appeared to rest on owning the data; it was not an open data source like Open Library. My assessment was correct. In 2009 OCLC updated its data license to tighten its ownership. The library community exploded. An article in the Guardian asked why you cannot find a library book in your search engine, and explained that it had much to do with OCLC’s closed approach to library records. The article contrasted the closed approach of OCLC with the open approach of Open Library, and mentioned “a plug-in for WordPress that lets bloggers automatically integrate a link to the Open Library page of any book.” <blush>

An online search shows that OpenBook has been cited in three books for librarians:

  • Jones and Farrington (2013). Learning from Libraries that Use WordPress: Content-Management System Best Practices and Case Studies.
  • Jones and Farrington (2011). Using WordPress as a Library Content Management System.
  • Stuart (2011). Facilitating Access to the Web of Data: A Guide for Librarians.

In a moment of inspiration a few years ago I envisioned a cloud service evolution of OpenBook, with adapters to multiple content management platforms and data sources. This new OpenBook cloud service would remove the tight coupling with WordPress and Open Library, truly liberating book data. There was an immediate positive response when I blogged about the idea. Alas, time.

I decided to sunset OpenBook. After two years of inactivity, the plugin was automatically dropped from the WordPress search index. Recently I have been writing on the subject of book covers and peeked at OpenBook’s status. WordPress reports 600+ active installs. Nice. I took a few minutes to test the plugin’s compatibility with the current version of WordPress. Everything tested positive. I updated the plugin’s version numbers and republished the code. OpenBook is again available in the WordPress plugin search index.

Harvard Library Innovation Lab: A Million Squandered: The “Million Dollar Homepage” as a Decaying Digital Artifact

Fri, 2017-07-21 16:56

In 2005, British student Alex Tew had a million-dollar idea. He launched www.MillionDollarHomepage.com, a website that presented initial visitors with nothing but a 1000×1000 canvas of blank pixels. At the cost of $1/pixel, visitors could permanently claim 10×10 blocks of pixels and populate them however they’d like. Pixel blocks could also be embedded with URLs and tooltip text of the buyer’s choosing.

The site took off, raising a total of $1,037,100 (the last 1,000 pixels were auctioned off for $38,100). Its customers and content demonstrate a massive range of variation, from individuals bragging about their disposable income to payday loan companies and media promoters. Some purchased minimal 10×10 blocks, while others strung together thousands of pixels to create detailed graphics. The biggest graphic on the page, a chain of pixel blocks purchased by a seemingly defunct domain called “pixellance.com”, contains $10,800 worth of pixels.

The largest graphic on the Million Dollar Homepage, an advertisement for www.pixellance.com

While most of the graphical elements on the Million Dollar Homepage are promotional in nature, it seems safe to say that the buying craze was motivated by a deeper fixation on the site’s perceived importance as a digital artifact. A banner at the top of the page reads “Own a Piece of Internet History,” a fair claim given the coverage that it received in the blogosphere and in the popular press. To buy a block of pixels was, in theory, to leave one’s mark on a collective accomplishment reflective of the internet’s enormous power to connect people and generate value.

But to what extent has this history been preserved? Does the Million Dollar Homepage represent a robust digital artifact 12 years after its creation, or has it fallen prey to the ephemerality common to internet content? Have the forces of link rot and administrative neglect rendered it a shell of its former self?

The Site

On the surface, there is little amiss with www.MillionDollarHomepage.com. Its landing page retains its early 2000’s styling, save for an embedded twitter link in the upper left corner. The (now full) pixel canvas remains intact, saturated with the eye-melting color palettes of an earlier internet era. Overall, the site’s landing page gives the impression of having been frozen at the time of its completion.

A screenshot of the Million Dollar Homepage captured in July of 2017

However, efforts to access the other pages linked on the site’s navigation bar return unformatted 404 messages. The “contact me” link redirects to the creator’s Twitter page. It seems that the site has been stripped of its functional components, leaving little but the content of the pixel canvas itself.

Still, the canvas remains a largely intact record of the aesthetics and commercialization patterns of the internet circa 2005. It is populated by pixelated representations of clunky fonts, advertisements for sketchy looking internet gambling sites, and promises of risqué images. Many of the pixel blocks bear a familial resemblance to today’s clickbait banner ads, with scantily clothed models and promises of free goods and content. Of course, this eye-catching pixel art serves a specific purpose: to get the user to click, redirecting to a site of the buyer’s choosing. What happens when we do?

The Links

Internet links are not always permanent. As pages are deleted or renamed, backends are restructured, and domain namespaces change hands, previously reachable content and resources can be replaced by 404 pages. This “link rot” is the target of the Library Innovation Lab’s Perma.cc project, which allows individuals and institutions to create archived snapshots of webpages hosted at trustable, static URLs.

Over the decade or so since the Million Dollar Homepage sold its last pixel, link rot has ravaged the site’s embedded links. Of the 2,816 links embedded on the page (accounting for a total of 999,400 pixels), 547 are entirely unreachable at this time. A further 489 redirect to a different domain or to a domain resale portal, leaving 1,780 reachable links. Most of the domains to which these links correspond are for sale or devoid of content.
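
The post does not say how the link survey was run, but a minimal sketch of the three-way triage used above (unreachable, redirected to another domain, reachable) might look like the following Python. The helper names are mine, and a real survey would also need to detect parked domains that return 200 with no meaningful content.

```python
import requests
from urllib.parse import urlparse

def hostname(url):
    """Hostname with any leading 'www.' removed, for comparing domains."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def classify_link(url, timeout=10):
    """Rough triage of one embedded link: 'unreachable', 'redirected'
    (it now resolves on a different domain), or 'reachable'."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        return "unreachable"
    if resp.status_code >= 400:
        return "unreachable"
    if hostname(resp.url) != hostname(url):
        return "redirected"
    return "reachable"

# Example: triage the largest advertiser's domain from the pixel canvas
print(classify_link("http://www.pixellance.com"))
```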

A visualization of link rot in the Million Dollar Homepage. Pixel blocks shaded in red link to unreachable or entirely empty pages, blocks shaded in blue link to domain redirects, and blocks shaded in green are reachable (but are often for sale or have limited content) [Note: this image replaces a previous image which was not colorblind-safe]

The 547 unreachable links are attached to graphical elements that collectively take up 342,000 pixels (face value: $342,000). Redirects account for a further 145,000 pixels (face value: $145,000). While it would take a good deal of manual work to assess the reachable pages for content value, the majority do not seem to reflect their original purpose. Though the Million Dollar Homepage’s pixel canvas exists as a largely intact digital artifact, the vast web of sites which it publicizes has decayed greatly over the course of time.

The decay of the Million Dollar Homepage speaks to a pressing challenge in the field of digital archiving. The meaning of a digital artifact to a viewer or researcher is often dependent on the accessibility of other digital artifacts with which it is linked or otherwise networked – a troubling proposition given the inherent dynamism of internet links and addresses. The process of archiving a digital object does not, therefore, necessarily end with the object itself.

What, then, is to be done about the Million Dollar Homepage? While it has clear value as an example of the internet’s ever-evolving culture, emergent potential, and sheer bizarreness, the site reveals itself to be little more than an empty directory upon closer inspection. For the full potential of the Million Dollar Homepage as an artifact to be realized, the web of sites which it catalogues would optimally need to be restored as it existed when the pixels were sold. Given the existence of powerful and widely accessible tools such as the Wayback Machine, this kind of restorative curation may well be within reach.

LITA: Lucy Flamm Awarded 2017 LITA/Christian Larew Memorial Scholarship

Fri, 2017-07-21 15:23

Lucy Flamm has been selected to receive the 2017 LITA/Christian Larew Memorial Scholarship ($3,000) sponsored by the Library and Information Technology Association (LITA) and Baker & Taylor. Flamm will be attending the University of Texas at Austin starting in Fall 2017 to earn a Master of Science in Information Studies and Master of Arts in Middle Eastern Studies.

This Scholarship is for master’s level study, with an emphasis on library technology and/or automation, at a library school program accredited by the American Library Association. Criteria for the Scholarship include previous academic excellence, evidence of leadership potential, and a commitment to a career in library automation and information technology.

The Committee noted that “Flamm’s award-winning undergraduate research on Middle Eastern history and year spent working in a library in the West Bank have prepared her well for Middle Eastern studies librarianship. We hope that this scholarship will help her to fulfill her vision for using technology to create online platforms to preserve Middle Eastern materials and make them more accessible.”

Flamm currently works as a freelance archival researcher and bibliographer. Prior to this she spent a year coordinating undergraduate and graduate access to print and digital materials, and developing and delivering workshops concerning academic resources and research methods at the first Palestinian liberal arts college. Her perspective has been shaped through her involvement with the Boston Center for Refugee Health and Human Rights and her experience as an intern for CyArk, where she supported ongoing efforts to digitally preserve tangible and intangible cultural heritage. During her undergraduate studies at Bard College, Flamm worked for the Bard Prison Initiative digitizing materials to be available to incarcerated individuals pursuing academic degrees, and her undergraduate thesis utilizing archival materials was awarded Bard College’s Marc Bloch Prize. Her original research concerning archives as sites of politics in the Middle East has received international recognition, granting her the opportunity to present at conferences held by Smolny College (St. Petersburg, Russia) and the British Society of Middle Eastern Studies.

When notified she had won, Flamm said, “Experiences to date have led me to firmly believe that librarians are agents of change for communities both local and global. Information accessibility dictates how one interacts with the world, and it is an honor to receive the LITA/Christian Larew Memorial Scholarship which itself promotes the power of knowledge sharing. I am grateful to LITA, Baker & Taylor, and the ALA for supporting me in working towards piloting digital initiatives for Middle Eastern materials.”

Members of the 2017 LITA/Christian Larew Memorial Scholarship Committee are: Julia Bauder (Chair), Matthew Carruthers, Erin Grant, Cole Hudson, Soo-yeon Hwang, and Amber Seely.

Thank you to Baker & Taylor for sponsoring this scholarship.

FOSS4Lib Recent Releases: ArchivesSpace - 2.1.0

Fri, 2017-07-21 12:35

Last updated July 21, 2017. Created by Peter Murray on July 21, 2017.

Package: ArchivesSpace
Release Date: Tuesday, July 18, 2017

Ed Summers: Post Custodial Logics

Fri, 2017-07-21 04:00

I love how Kelleher (2017) positions the radical? idea of funding the development of archival infrastructure where it is actually needed using such a logical appeal to the status quo:

One strategy that UTL employed in collaboration with project partners to address challenges of agency, differential access to resources, and the most direct application of benefit was very deliberate transactional use of project funding. Rather than assume transfer of documentation to UTL — either through donation or purchase — as required under the custodial paradigm, UTL instead helped to arrange and purchased negotiated access to documentation that remained in the custody or control of the partner organization. Project funds were put toward the arrangement, description, preservation, and digitization of documentation, just as they would have been if the archival materials were at UTL. But the investments were made not in Texas, but locally with the partner organizations. In this way, the partner organizations and in some cases communities were able to build infrastructure and skills in digitization, metadata, software development, and preservation appropriate to the context of their organizational goals and uses of the documentation. And in two cases at least, the human rights organization developed significant local expertise that served them well beyond their partnership with UTL. Additionally, rather than acquire the original records themselves — as called for under the custodial paradigm — UTL sometimes purchased digitized copies of documentation or gained non-exclusive access to documentation as they and partners made it available online. Though somewhat unusual for a custodial archival repository, this system was very familiar and comfortable for UTL as an academic library that annually spent hundreds of thousands of dollars for access to databases. Partner organizations, with funds earned in this manner, could and did hire and train, or otherwise provide direct humanitarian aid to individuals documented in the records, so at least some saw benefit from participation in the project.

Kelleher, C. (2017). Archives without archives: (Re)locating and (re)defining the archive through post-custodial praxis. Journal of Critical Library and Information Studies, (2). Retrieved from http://libraryjuicepress.com/journals/index.php/jclis/article/view/29

Terry Reese: MarcEdit 7 Keycode Documentation

Thu, 2017-07-20 21:31

Something that comes up a lot is the lack of key combinations or pathways to using functions in MarcEdit. I’ll admit, the program is very mouse heavy. So, as part of the accessibility work in MarcEdit 7, I’m taking a long look at how access to all functions can be accommodated via the keyboard. This means that for MarcEdit 7, I’m mapping out all keycode combinations (the ALT+[KEY] paths and the more traditional shortcut key combinations) for each window in MarcEdit. When it’s finished, I’ll make this part of the application documentation. Before I get too far along, I wanted to show what this looks like. Please see: http://marcedit.reeset.net/software/MarcEdit7_KeycodeMap.pdf

Does this look like it will be helpful? 

–tr

FOSS4Lib Recent Releases: CollectionSpace - 4.5

Thu, 2017-07-20 16:17

Last updated July 20, 2017. Created by Peter Murray on July 20, 2017.

Package: CollectionSpace
Release Date: Thursday, July 20, 2017

LITA: Apply for a Scholarship to attend the 2017 LITA Forum

Thu, 2017-07-20 15:47

Do you want to attend and participate in the LITA Forum?

The three-day, technology-focused conference for everyone who cares about libraries, archives, and other information services? The 2017 LITA Forum will be held November 9 – 12, 2017 in Denver, CO. Would travel funding help you to attend? As a result of the successful LITA 50th Anniversary Scholarship Campaign, LITA is offering six $1,500 travel scholarships to support new librarians, or technologists new to ALA/LITA, in attending the 2017 LITA Forum.

Complete the application form.

Scholarships will be awarded competitively based on the committee’s ranking of applications received by the deadline, August 20, 2017.

Scholarship Eligibility

Selection criteria for the LITA Forum scholarship:

  • Work with library technology in any role
  • Provide library services to underrepresented groups
  • Be a new librarian or new to LITA, from a diverse range of backgrounds and types of libraries, and reflective of the breadth of librarianship
  • Have NOT previously received a LITA scholarship award

Scholarship applicants will be ranked highly if they:

  • Belong to a group not well-represented in LITA, including but not limited to: people of color, people with disabilities, LGBTQ+
  • Show interest in actively contributing to the mission and goals of LITA.

The scholarships are intended for people who couldn’t otherwise attend LITA Forum. LITA can’t assess your financial need and we trust you to self-identify accurately.

How to apply

Please fill out the application form

Applications are due August 20, 2017.
We will notify you by September 8, 2017.

Scholarship selections will be made by the LITA Forum Scholarship sub-committee.

Sponsors

Thanks for the generous support of all who contributed to the LITA 50th Anniversary Scholarship Campaign.

Questions or Comments?

Contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

David Rosenthal: Patting Myself On The Back

Thu, 2017-07-20 15:00
Cost vs. Kryder rate

I started working on economic models of long-term storage six years ago, and quickly discovered the effect shown in this graph. It plots the endowment, the money which, deposited with the data and invested at interest, pays for the data to be stored "forever", as a function of the Kryder rate, the rate at which $/GB drops with time. As the rate slows below about 20%, the endowment needed rises rapidly. Back in early 2011 it was widely believed that 30-40% Kryder rates were a law of nature; they had been that way for 30 years. Thus, if you could afford to store data for the next few years you could afford to store it forever.
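
A toy version of the endowment calculation behind this graph, assuming a constant Kryder rate and interest rate and ignoring media replacement and the other real-world factors a full model needs, might look like this; the starting cost, rates, and horizon are arbitrary.

```python
def endowment(annual_cost_now, kryder_rate, interest_rate, years=100):
    """Present value of paying to store a fixed amount of data for `years`
    years, when the yearly storage cost falls by `kryder_rate` and the
    endowment earns `interest_rate`. Deliberately simplified."""
    total = 0.0
    cost = annual_cost_now
    for t in range(years):
        total += cost / (1 + interest_rate) ** t  # discount year t's cost to today
        cost *= (1 - kryder_rate)                 # media gets cheaper each year
    return total

# The endowment needed rises sharply as the Kryder rate slows
for k in (0.40, 0.30, 0.20, 0.10, 0.05):
    print(f"Kryder rate {k:.0%}: endowment = {endowment(100.0, k, 0.03):.0f}")
```

Even in this toy version, slowing the Kryder rate from 40% to 5% multiplies the required endowment roughly five-fold.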

2014 cost/byte projection

As it turned out, 2011 was a good time to work on this issue. That October, floods in Thailand destroyed 40% of the world's disk manufacturing capacity, and disk prices spiked. Preeti Gupta at UC Santa Cruz reviewed disk pricing in 2014 and we produced this graph. I wrote at the time:
The red lines are projections at the industry roadmap's 20% and a less optimistic 10%. [The graph] shows three things:
  • The slowing started in 2010, before the floods hit Thailand.
  • Disk storage costs in 2014, two and a half years after the floods, were more than 7 times higher than they would have been had Kryder's Law continued at its usual pace from 2010, as shown by the green line.
  • If the industry projections pan out, as shown by the red lines, by 2020 disk costs per byte will be between 130 and 300 times higher than they would have been had Kryder's Law continued.
Backblaze average $/GB

Thanks to Backblaze's admirable transparency, we have 3 years more data. Their blog reports on their view of disk pricing as a bulk purchaser over many years. It is far more detailed than the data Preeti was able to work with. Eyeballing the graph, we see a 2013 price around 5c/GB and a 2017 price around half that. A 10% Kryder rate would have meant a 2017 price of 3.2c/GB, and a 20% rate would have meant 2c/GB, so the out-turn lies between the two red lines on our graph. It is difficult to make predictions, especially about the future. But Preeti and I nailed this one.

This is a big deal. As I've said many times:
Storage will be
Much less free
Than it used to be

The real cost of a commitment to store data for the long term is much greater than most people believe, and there is no realistic prospect of a technological discontinuity that would change this.

Andrew Pace: Being a Better Ally: First, Believe

Thu, 2017-07-20 15:00

Warning: I might make you uncomfortable. I’m uncomfortable. But it comes from an earnest place.

I was recently lucky enough to participate with my OCLC Membership & Research Division colleagues in DeEtta Jones & Associates’ Cultural Competency Training. This day-long session has a firm spot in the top 5 of my professional development experiences. (Not coincidentally, one of the others in that top 5 was DeEtta’s management training I took part in when she was with the Association of Research Libraries). A week later, I’m still processing this incredible experience. And I’m very grateful to OCLC for sponsoring the workshop!

Cultural competence, equity, diversity, and inclusion are uncomfortable topics for me because I carry my straight, married, able-bodied, white, male privilege with me everywhere I go. And in library-land, despite a female majority, men still dominate leadership positions; despite our bully pulpits on inclusion and diversity, our profession has too few people of color; despite our progressive stances on sexual orientation and gender identity, we struggle with our support for those constituents in our public spaces and workplaces.

DeEtta taught me that I must unlearn so many of the things that we’ve been taught for decades—like denying cultural differences, or not talking about race. She taught me that if being marginalized at work doesn’t feel good, then I should imagine being a diverse workforce member on top of that feeling. And she taught me that culture, by its very nature, seeks to discriminate, so I need to be more aware of de-biasing systems, and purposefully embark on a journey that takes me from a place of tolerance and sensitivity to a place of true cross-cultural competence.

DeEtta taught me some very new things, too. For example, research has shown that multicultural teams perform more effectively when there’s a leader leveraging the team’s diversity. And that the leader does not have to be from a diverse demographic. That is, stepping back from opportunities to lead or manage diverse teams doesn’t necessarily make them more effective. Put even better, stepping up as a culturally competent leader will make diverse teams more effective.

But most importantly, I learned one of the first steps in being an ally when carrying around all that privilege. First, believe. I must believe the stories that people tell. And I must be mindful of the marginalized position from which they sometimes come. Vital to being an ally, I can believe you when you tell a story even when it isn’t grounded in my own experience. As a good ally, I should believe your story especially under such circumstances.

My cultural mosaic might not look very diverse, but I can gain and develop the skills necessary to be a better ally—mindfulness, integrity, humility, hardiness, and listening with cultural intelligence. I can turn off my liberal cruise control and activate the lenses through which I consciously and unconsciously view diversity issues and acknowledge the layers (both obvious and not so obvious) that make me who I am. And I can express these values at every turn. That is the only way to change culture.

Finally, I learned that “doing diversity” means that we all do it. And we do it all the time. One of the most important parts about being an ally means not only doing so when everyone is watching. It’s something I must do all the time. As I move forward in this process of gaining cultural competence and practicing equity, diversity, and inclusion, I will need a lot of help, especially from those further along in this journey than I am. I promise to be more discerning of the parts of my life in which I have privilege. I will even tap into them to become a better ally. But most importantly, I will start with believing.

District Dispatch: FY 2018 library funding remains uncut by House Appropriations Committee

Thu, 2017-07-20 13:44

Yesterday evening, the House Appropriations Committee confirmed its support for federal library funding by voting to approve the same funding levels passed by the Labor-HHS Subcommittee last week. Yesterday’s action was another significant step toward ensuring FY 2018 funding of $231 million for the Institute of Museum and Library Services (IMLS)—including $183.6 million for Library Services and Technology Act (LSTA) programs—and $27 million for the Department of Education’s Innovative Approaches to Literacy (IAL) program. These sums equal FY 2017 levels.

In addition, as the Subcommittee did last week, the full Committee today also approved $413.9 million for the National Library of Medicine, an increase of $20 million over FY 2017. The Committee also approved appropriations for other significant funding programs in which libraries are eligible to participate. Their levels of support relative to last year are shown here (note: the chart is in thousands of dollars). The Subcommittee and full Committee made cuts to some programs, most notably the elimination of the Department of Education’s Striving Readers program. ALA will continue to work in coalition to restore these funds.

At yesterday’s full committee markup session, the Committee debated and voted on several hours of amendments covering a range of issues, none of which addressed direct library funding.

The Labor-HHS funding bill now heads to the floor for consideration by the full House and a vote, the timing of which is increasingly uncertain. House leaders had floated the possibility of voting on a compiled package of multiple appropriations bills (a.k.a., an omnibus) before the August recess. The prospects of that appear to be fading, which means consideration of the Labor-HHS funding bill approved yesterday in Committee is likely to slip to September or even later in the fall.

The Senate has not moved yet on a Labor-HHS funding measure and is expected to take this bill up after the August recess. The Senate’s shortened recess could provide it time to begin acting on funding measures, but finishing work on the Labor-HHS bill could take the Senate well into the fall. Congress must send 12 appropriations bills to the President before the October 1 start of the fiscal year to avoid a government shutdown. In the past, Congress has failed to do that and instead passed a Continuing Resolution, which is a temporary funding measure that allows the government to operate until an agreement can be reached on the appropriations bills.

Yesterday’s successful and extremely important full Appropriations Committee vote is another major milestone in ALA’s Fight for Libraries! campaign, but there are many more challenges to come.

ALA will continue to lead the fight as the FY 2018 appropriations process moves forward. After tens of thousands of library advocates’ emails, tweets, and calls, Congress has heard the library community’s support for IMLS, LSTA and IAL funding loudly and clearly. While the news is good today, the game is certainly not over and we will continue to need your help.

If you have been fighting with us, thank you! If you haven’t yet had a chance to join the fray, today would be a great day to sign up.

The post FY 2018 library funding remains uncut by House Appropriations Committee appeared first on District Dispatch.

FOSS4Lib Recent Releases: Hydrax - 1.0.3

Thu, 2017-07-20 12:15

Last updated July 20, 2017. Created by Peter Murray on July 20, 2017.

Package: Hydrax
Release Date: Wednesday, July 19, 2017
