Planet Code4Lib - http://planet.code4lib.org

Jonathan Rochkind: “Registered clinical trials make positive findings vanish”

Tue, 2015-08-18 18:06

via nature.com, Registered clinical trials make positive findings vanish

The launch of the clinicaltrials.gov registry in 2000 seems to have had a striking impact on reported trial results, according to a PLoS ONE study that many researchers have been talking about online in the past week.

A 1997 US law mandated the registry’s creation, requiring researchers from 2000 to record their trial methods and outcome measures before collecting data. The study found that in a sample of 55 large trials testing heart-disease treatments, 57% of those published before 2000 reported positive effects from the treatments. But that figure plunged to just 8% in studies that were conducted after 2000….

…Irvin says that by having to state their methods and measurements before starting their trial, researchers cannot then cherry-pick data to find an effect once the study is over. “It’s more difficult for investigators to selectively report some outcomes and exclude others,” she says….

“Loose scientific methods are leading to a massive false positive bias in the literature,”


Filed under: General

David Rosenthal: Progress in solid-state memories

Tue, 2015-08-18 15:00
Last week's Storage Valley Supper Club provided an update on developments in solid state memories.

First, the incumbent technology, planar flash, has reached the end of its development path at the 15nm generation. Planar flash will continue to be the majority of flash bits shipped through 2018, but the current generation is the last.

Second, all the major flash manufacturers are now shipping 3D flash, the replacement for planar. Stacking the cells vertically provides much greater density; the cost is a much more complex manufacturing process and, at least until the process is refined, much lower yields. This has led to much skepticism about the economics of 3D flash, but it turns out that the picture isn't as bad as it appeared. The reason is, in a sense, depressing.

It is always important to remember that, at bottom, digital storage media are analog. Because 3D flash is much denser, there are a lot more cells. Because of the complexity of the manufacturing process, the quality of each cell is much worse. But because there are many more cells, the impact of the worse quality is reduced. More flash controller intelligence adapting to the poor quality or even non-functionality of the individual cells, and more of the cells used for error correction, mean that 3D flash can survive lower yields of fully functional cells.

The advent of 3D means that flash prices, which had stabilized, will resume their gradual decrease. But anyone hoping that 3D will cause a massive drop will be disappointed.

Third, the post-flash solid state technologies such as Phase Change Memory (PCM) are increasingly real but, as expected, they are aiming at the expensive, high-performance end of the market. HGST has demonstrated a:

PCM SSD with less than two microseconds round-trip access latency for 512B reads, and throughput exceeding 3.5 GB/s for 2KB block sizes.

which, despite the near-DRAM performance, draws very little power.

But the big announcement was Intel/Micron's 3D XPoint. They are very cagey about the details, but it is a resistive memory technology that is 1000 times faster than NAND, has 1000 times the endurance, and is 100 times denser. They see the technology initially being deployed, as shown in the graph, as an ultra-fast but non-volatile layer between DRAM and flash, but it clearly has greater potential once it gets down the price curve.

FOSS4Lib Upcoming Events: Open Source with OPF: JHOVE Stewardship

Tue, 2015-08-18 13:22
Date: Wednesday, August 26, 2015 - 09:00 to 10:00
Supports: JHOVE


From the announcement:

During March and April 2015 the OPF assumed stewardship of JHOVE after the existing maintainer, Gary McGath, expressed his wish to step down. The OPF’s initial aims were to take ownership of the JHOVE resources and establish a sustainable home for the project on GitHub. Following this, we’ve updated the build, testing and distribution process for the project.

LITA: Interacting with patrons through their mobile devices

Tue, 2015-08-18 13:00

Mobile technologies, specifically smartphones, have become a peripheral appendage to our everyday experience. We often see individuals oblivious to their current surroundings, exhibiting dedicated attention to their mobile devices. This behavior is often viewed in a negative light; however, with the level of global media engagement people are able to achieve with these devices, it can be hard to blame them. The ability to participate in social media, send quick messages to friends, listen to music, watch videos, surf the web, fact-check information, or even read a great book is all right in your hand.

When attempting to interact with patrons through technology, building on their familiarity with their mobile devices can help achieve a more positive experience. This is when “Let’s build an app” is often heard. Although that can be a great idea, app development is a complex process, and there are a number of ways to achieve interactive experiences without developing a new mobile application.

Over the course of the next several blog posts, I will be discussing various methods of interacting with patrons’ mobile devices to enhance their experiences through the use of QR codes, NFC (Near Field Communication) tags, and BLE (Bluetooth Low Energy) Beacons. Each of these technologies allows for a different experience and has areas where it excels and falters, but when incorporated appropriately they can together create a comprehensive interactive experience that enhances information seeking.

SearchHub: Solr Developer Survey 2015

Mon, 2015-08-17 20:09
Every day, we hear from organizations looking to hire Solr talent. Recruiters want to know how to find and hire the right developers and engineers, and how to compensate them accordingly. Lucidworks is conducting our annual global survey of Solr professionals to better understand how engineers and developers at all levels of experience can take advantage of the growth of the Solr ecosystem – and how they are using Solr to build amazing search applications. This survey will take about 2 minutes to complete. Responses are anonymized and confidential. Once our survey and research is completed, we’ll share the results with you and the Solr community. As a thank you for your participation, you’ll be entered in a drawing to win one of our blue SOLR t-shirts plus copies of the popular books Taming Text and Solr in Action. Be sure to include your t-shirt size in the questionnaire. We’d appreciate your input by Wednesday, Sept 9th. Click here to take the survey. Thanks so much for your participation!

The post Solr Developer Survey 2015 appeared first on Lucidworks.

Erin White: Recruiting web workers for your library

Mon, 2015-08-17 16:32

In the past few years I’ve created a couple of part-time, then full-time, staff positions on the web team at VCU Libraries. We now have a web designer and a web developer who’ve both been with us for a while, but for a few years it was a revolving door of hires. So let’s just say I’ve hired lots of folks in just a few years as a manager.

A colleague from another library emailed a few weeks ago asking for tips on how to recruit talented web workers for a library web developer position. Here are some things I’ve done to get people in the door.

  1. Advertise on jobs.code4lib.org – these jobs are automatically forwarded to the code4lib listserv. Those listserv subscribers tend to tweet interesting jobs out as well.
  2. Advertise on non-library job websites including Craigslist (lots of spam but talented people too); and consider paying to advertise on LinkedIn and other tech job sites.
  3. Post the salary, both on the code4lib site and on your organization’s jobs site – even if it’s just a range or a minimum.
  4. Indicate some of the big projects you’d like the person to work on – where you would see this person contributing right away. Whet the appetite: “How can I grow? How can I help this organization grow?”
  5. Note your current tech stack. Are you developing your own web applications? Managing your own server? Using PHP, Ruby on Rails, Ember, Node?
  6. Sell the non-salary benefits. Advocate for and advertise soft benefits that tend to go a long way with digital folks:
    • telecommuting – a day a week minimum;
    • dedicated time and administrative support for working on innovative projects – bonus if it’s built into the official job description;
    • support for travel/training;
    • flexible hours;
    • 40-hour workweek – sadly, in the U.S. this is a perk;
    • all other non-salary benefits of working for a higher ed, government or nonprofit institution: retirement, tuition remission, gym membership, etc.?
  7. Sell the mission. Some people are tired of working for the bottom line and want to do work that matters. Libraries help people. Our work matters.
  8. Longer-term: get out there. If you are a web worker yourself, get involved in local web meetups, professional groups, etc., and meet people in your community of practice. This serves a couple of purposes beyond helping with your own learning: it expands the network of people you can reach out to when you’re hiring, and it gives your library some cred as a place to work.

Related: I gave a talk at the Code4Lib conference earlier this year about recruiting and retaining – it’s a repeat of some of the info above but may be helpful.

SearchHub: Securing Solr with Basic Authentication

Mon, 2015-08-17 16:16
Until version 5.2, Solr did not include any specific security features. If you wanted to secure your Solr installation, you needed to use external tools and solutions which were proprietary and maybe not so well known by your organization. A security API was introduced in Solr 5.2, and Solr 5.3 will have full-featured authentication and authorization plugins that use Basic authentication and “permission rules” which are completely driven from ZooKeeper.

Caveats
  • Basic authentication sends credentials in plain text. If the communication channels are not secure, attackers can know the password. You should still secure your communications with SSL.
  • ZooKeeper is the weakest link in this security. Ensure that write permissions to ZooKeeper is granted only to appropriate users.
  • It is still not safe to expose your Solr servers to an unprotected network.
Enabling Basic Authentication

Step 1: Save the following JSON to a file called security.json:

{
  "authentication":{
    "class":"solr.BasicAuthPlugin",
    "credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
  },
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "user-role":{"solr":"admin"},
    "permissions":[{"name":"security-edit", "role":"admin"}]
  }
}

The above configuration does the following:
  • Enable authentication and authorization
  • A user called 'solr' is created with a password 'SolrRocks'
  • The user 'solr' is assigned a role 'admin'
  • The permission to edit security is now restricted to the role 'admin'. All other resources are unprotected, and the user should configure more rules to secure them.
Step 2: Upload the file to ZooKeeper:

server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd putfile /security.json security.json

All Solr nodes watch /security.json, and this change is immediately reflected in the nodes' behavior. You can verify the operation with the following commands:

curl http://localhost:8983/solr/admin/authentication
curl http://localhost:8983/solr/admin/authorization

These calls should return the corresponding sections of the JSON we uploaded.

BasicAuthPlugin

The BasicAuthPlugin authenticates users using HTTP's Basic authentication mechanism. Authentication is done against the user name and salted SHA-256 hash of the password stored in ZooKeeper.

Editing credentials

There is an API to add, edit or remove users. Please note that the commands shown below are tied to this specific Basic authentication implementation, and the same set of commands are not valid if the implementation class is not solr.BasicAuthPlugin.

Example 1: Adding a user and editing a password

curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 'Content-type:application/json' -d '{ "set-user": {"tom" : "TomIsCool" , "harry":"HarrysSecret"}}'

Example 2: Deleting a user

curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 'Content-type:application/json' -d '{ "delete-user": ["tom","harry"]}'

RuleBasedAuthorizationPlugin

This plugin relies on the configuration stored in ZooKeeper to determine if a particular user is allowed to make a request. The configuration has two sections:

  • A mapping of users to roles. A role can be any user defined string.
  • A set of permissions and the rules on who can access what.
What is a Permission?

A permission specifies the attributes of a request and also specifies which roles are allowed to make such a request. The attributes are all multivalued. The attributes of a request are listed below; a sketch of a permission that combines them follows the list:
  • collection: The name of the collection for which this rule should be applied to. If this value is not specified, it is applicable to all collections.
  • path: This is the handler name for the request. It can support wildcard prefixes as well. For example, /update/* will apply to all handlers under /update/.
  • method: HTTP methods valid for this permission. Allowed values are GET, POST, PUT, DELETE, and HEAD.
  • params: These are the names and values of request parameters. For example, "params":{"action":["LIST","CLUSTERSTATUS"]} restricts the rule to be matched only when the values of the parameter "action" is one of "LIST" or "CLUSTERSTATUS".
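
For illustration only, here is a sketch of what a permission combining several of these attributes might look like. The name is made up, and the collection, path and role are reused from examples elsewhere in this post; a definition like this would be added with the set-permission command described under "Editing permissions" below:

{
  "name": "gettingstarted-read",
  "collection": "gettingstarted",
  "path": "/select",
  "method": ["GET"],
  "role": "dev"
}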
How is a Permission Matched?

For an incoming request, the permissions are tested in the order in which they appear in the configuration. The first permission to match is applied. If you have multiple permissions that can match a given request, put the strictest permissions first. For example, if there is a generic permission that says anyone with a 'dev' role can perform write operations on any collection, and you wish to restrict write operations on the .system collection to admins only, add the permission for the latter before the more generic permission. If there is no permission matching a given request, then it is accessible to all.

Well Known Permissions

There are a few convenience permissions which are commonly used. They have fixed default values for certain attributes. If you use one of the following permissions, just specify the roles that can access them. Trying to specify other attributes for these permissions will give an error.

  • security-edit : Edit security configuration
  • security-read : Read security configuration
  • schema-edit : Edit schema of any collection
  • schema-read :  Read schema of any collection
  • config-read : Read solrconfig of any collection
  • config-edit : Edit config of any collection
  • collection-admin-read : Read operations performed on /admin/collections such as LIST, CLUSTERSTATUS
  • collection-admin-edit : All operations on /admin/collections which can modify the state of the system.
  • update : Update operation on any collection
  • read : Any read handler such as /select, /get in any collection
Editing permissions

There is an API to edit the permissions. Please note that the following commands are valid only for the RuleBasedAuthorizationPlugin. The commands for managing permissions are:
  • set-permission: create a new permission, overwrite an existing permission definition, or assign a pre-defined permission to a role.
  • update-permission: update some attributes of an existing permission definition.
  • delete-permission: remove a permission definition.
Example 1: add or remove roles

curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 'Content-type:application/json' -d '{ "set-user-role": {"tom":["admin","dev"]}, "set-user-role": {"harry":null} }'

Example 2: add or remove permissions

curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 'Content-type:application/json' -d '{ "set-permission": { "name":"a-custom-permission-name", "collection":"gettingstarted", "path":"/handler-name", "before": "name-of-another-permission", "role": "dev" }, "delete-permission":"permission-name" }'

Please note that "set-permission" replaces your permission fully. Use the "update-permission" operation to partially update a permission. Use the 'before' property to re-order your permissions.

Example 4: Restrict collection admin operations (writes only) to be performed by an admin only

curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 'Content-type:application/json' -d '{ "set-permission" : {"name":"collection-admin-edit", "role":"admin"}}'

This ensures that the write operations can be performed only by an admin. Trying to perform a secured operation in a browser without credentials will now prompt for a user name and password.

Example 5: Restrict all writes to all collections to the dev role

curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 'Content-type:application/json' -d '{ "set-permission" : {"name":"update", "role":"dev"}}'

Example 6: Restrict writes to the .system collection to admin only

curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 'Content-type:application/json' -d '{ "set-permission" : {"name":"system-coll-write", "collection": ".system", "path":"/update/*", "before":"update", "role":"admin"}}'

Please note the 'before' attribute: it ensures that this permission is inserted right before the "update" permission we added in example 5.

Example 7: Update the above permission to add the /blob/* path

curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization -H 'Content-type:application/json' -d '{ "update-permission" : {"name":"system-coll-write", "path": ["/update/*","/blob/*"]}}'

Securing Inter-node calls

BasicAuthPlugin uses Solr's own mechanism for securing inter-node calls using PKI infrastructure. But the same users, roles and permissions work across the nodes because the user name in the original request is carried forward in inter-node requests. (That is a topic for another blog.)

The post Securing Solr with Basic Authentication appeared first on Lucidworks.

Islandora: Islandora Community Sprint 001

Mon, 2015-08-17 15:05

Starting on August 31st, we're going to try our very first community sprint (August 31-September 11). This is going to be a maintenance sprint, and its focus is going to be resolving issues marked as bugs, code tasks, or documentation tasks in JIRA. There is a whole lot there, and it might be daunting, but Jared Whiklo and I will be there to help. We have a wide range of issues, from easy to, well, not so easy. 

If you'd like to join us in this two week sprint, please sign up here. We'll be coordinating, and checking in each day on irc (#islandora on freenode). 

Hope to see your name on the list, and on August 31 in #islandora. 

LITA: A couple of not totally useless things you can do on the command line [written for beginners]

Mon, 2015-08-17 15:00

As a librarian who has been very engaged in the movement to demystify programming, I’ve really focused on teaching and sharing tools that people can use in daily life, since the most common question I get when teaching is, “When will I use this?” This post has been heavily influenced by my work in teaching programming to non-programmers and teaching something that can be applied beyond the classroom.

With the start of school nearly upon us (and for some, already come and gone), I wanted to throw something not totally useless out there for you to tuck away for a rainy day, or to use now if you’d like.

These have been written with the assumption that you may have some experience with the command line, but very little. I apologize to more experienced users if this is a bit dense in explanation.

I’ve run these successfully on both Mac OS X Yosemite and Ubuntu Linux. This is my first attempt at providing documentation on something like this, so please feel free to critique it.

If you are a Windows user, I recommend downloading Console2 [http://sourceforge.net/projects/console/], which is a terminal emulator that will allow you similar access to the commands used in Linux and Mac OS X.

For this documentation, anything following the $ is what you will type into your command prompt. The $ denotes a new command to be entered on a new line; some commands wrap, but do not hit enter until you’ve typed the entire command. For the most part, you can copy and paste the command directly into the terminal, but make sure you make the necessary changes.

Use Find and Exiftool to gather & organize all of your pictures by creation date into folders by year and month

This will walk you through full install of Perl, Exiftool, directory creation and processing of files. Exiftool is a really handy tool for reading, writing and editing metadata in a significant range of file types, so it is a really great tool to have in general.

First you’ll need to install Perl and exiftool. There is a high possibility that your computer will already have Perl installed, but in the case that it doesn’t you will need to install it.

To check to see if you have Perl installed use this command:

$ perl -v

If it is installed, you will get information on the version of Perl you have installed and you can skip the next command.

If it is not installed you will need to install it.

$ curl -L http://xrl.us/installperlosx | bash

Installing exiftool

For the full Perl distribution, download the Image-ExifTool distribution from http://owl.phy.queensu.ca/~phil/exiftool/index.html to your desktop (if you do not specify where to download it, then cut & paste the download from the Downloads folder to your Desktop) and then run the following in your terminal. **You will be using sudo on one command; please be VERY careful with this, as it can do some major damage if not used properly.**

$ cd ~/Desktop
$ tar -xzf Image-ExifTool-9.99.tar.gz
$ cd Image-ExifTool-9.99
$ sudo cp -r exiftool lib /usr/local/bin
(enter your account password when sudo prompts for it)

Don’t want to use the terminal to install this? Go to http://www.sno.phy.queensu.ca/~phil/exiftool/index.html and download the version you need and install it as a normal package.

Anytime you want to run exiftool, you call it up by typing exiftool into the command line.
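
To confirm the install worked, you can ask exiftool to print its version number; it should report the version you installed (9.99 in the steps above):

$ exiftool -ver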

Now we need to make the folder to compile all the images we want to sort into one place.

$ cd Documents
$ mkdir "newfoldername"
$ pwd
$ cd ~

Replace “newfoldername” with the name of the folder you want to create and use.

pwd will give you the directory pathway for Documents/newfoldername, which you will need in the next few commands, so make note of it, copy it, or write it down.

We are going to find all of the JPG files on your computer and put them into that folder you just created. The tilde (~) denotes your home directory, so the search will cover everything under it; if you have all of your photos in another directory you can use the pathway for that instead.

This command will find all files on your computer with the extensions .JPG and .jpg and copy them to the newly specified folder, retaining the original files & their modification information. It is important to compile them in one folder so you can run exiftool much more quickly.

$ cd ~
$ find ~ -iname '*.jpg' -print -exec cp -pr '{}' Documents/newfoldername \;

If you want to search other file types like .png, then replace the JPG with png.

If you need to be case sensitive on the extension, remove the i from -iname.

Replace “Documents/newfoldername” with the pathway to the directory you just created (noted from the pwd command).
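
If you want to grab more than one file type in a single pass, find can combine name patterns with -o ("or"). As a sketch, this variant copies both JPG and PNG files into the same folder (the same note about replacing Documents/newfoldername applies):

$ find ~ \( -iname '*.jpg' -o -iname '*.png' \) -print -exec cp -pr '{}' Documents/newfoldername \;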

Now use exiftool to organize all the files into folders by year and month with this command:

$ exiftool '-Directory<CreateDate' -d Documents/newfoldername/%y/%y%m -r Documents/newfoldername

Replace both instances of Documents/newfoldername directory with the directory you created.

Use Exiftool to sort all of your files by create date and then into folders by year and month

If you just want to copy and sort all of your files into folders without specifying file types, run this command instead.

$ exiftool -o . '-Directory<CreateDate' -d Documents/createfolder/%y/%y%m -r ~

This will search your entire home directory, and any file types that can be copied will be copied and organized by creation date in the folder you specify.

Keep in mind that Exiftool is not limited to moving only image files so you can play around with this as you want.
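
If you would like a preview before anything gets reorganized, exiftool can simply list the creation dates it finds. For example, this sketch (assuming the folder from the earlier steps) prints the file name and creation date for the first handful of files:

$ exiftool -r -T -FileName -CreateDate Documents/newfoldername | head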

Send a text message from your command line with this script:

$ curl http://textbelt.com/text -d number=########## -d "message= your text message goes here"

Here ########## is your 10-digit phone number, and your message goes after message=, closed with a double quote.

If you are going to send notifications to your phone often, you can add it as a quick command:

$ SendText () { curl http://textbelt.com/text -d number=########## -d "message=$*"; echo message sent; }

Now, anytime you want to send a text to that number, enter this into the command line:

$ SendText your message goes here
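
Note that the SendText function above only exists in your current terminal session. To keep it around, you can append it to your shell startup file (this assumes bash; adjust for your shell):

$ echo 'SendText () { curl http://textbelt.com/text -d number=########## -d "message=$*"; echo message sent; }' >> ~/.bashrc
$ source ~/.bashrc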

You can also run this as a module or a standalone server; see the GitHub source here: https://github.com/whitni/textbelt

This can be used to send notifications to your phone when running a program. Currently, I just use it to send the grocery list to myself, cause you know there is an app for that.
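
For example, a minimal way to have a long-running job ping your phone when it finishes (assuming SendText is defined as above) is to chain it onto the command:

$ tar -czf backup.tar.gz Documents && SendText backup finished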

Here is a write up of an example of something you might want to receive text notifications on: http://adambuchanan.me/post/29018724579/fun-with-textbelt-public-sms-api

Library of Congress: The Signal: Cooking Up a Solution to Link Rot

Mon, 2015-08-17 14:40

This post is cross posted on the blog of the Law Library of Congress, In Custodia Legis, which is an excellent source of information on current legal trends and materials from the Library’s collections pertaining to the law. It is a guest post by the Law Library’s managing editor, Charlotte Stichter. When Charlotte is not at her day job she loves to cook, and is currently on a quest to find the perfect recipe for clafouti.

Vivian Jarrell’s canned goods, produced from her garden, including tomato juice, pickles, grape juice, and beans. (Photo by Terry Eiler, 1997) (Source: Coal River Folklife Collection, American Folklife Center, Library of Congress, http://hdl.loc.gov/loc.afc/afccmns.tec03805 )

For those with vivid imaginations, the terms “link rot” and “reference rot” might conjure images of moldy fruit in the back of the office refrigerator or a pungent bag of something unidentifiable pulled from under a car seat weeks after its “use by” date. But the food analogy can only go so far. What the terms are really referring to is the all-too-common problem of hyperlinked web addresses — in legal and academic writing or on web pages, for example — that fail to lead the reader to the consumable content desired, either because the link is rotten (not working at all) or because the particular item sought from the Web’s vast menu has been modified or changed.

The problem stems from the Web’s impermanence, the effects of which have been documented by a number of researchers: Websites can be redesigned or shut down, content can be moved, or service can be restricted without advance notice, making the Web a fluid environment ideal for the fermentation of creative ideas, but also uniquely susceptible to decay. The ubiquitous “404 – File Not Found” error message, among other error messages, alerts the user to link rot. Reference rot can be more difficult to spot, as it concerns modifications to the original ingredients, but might be indicated by a “last modified” message at the bottom of a web page, if noted at all.

During a 2014 internal quality assurance review of recent foreign, comparative, and international law reports prepared by the Law Library’s Global Legal Research Directorate and available on Law.gov, we found that a significant number of linked references in our reports no longer work. The results of our “taste test” were not surprising: studies by other legal entities have found that more than half of linked webpages in law journal and court opinion footnotes don’t work as intended, which is especially problematic in the legal world, where research documentation and reliable access to historic precedents are paramount. A study that appeared in the Harvard Law Review Forum last year found, for example, that about 66-73 percent of web addresses in the footnotes of three Harvard law journals and nearly 50 percent of web addresses in U.S. Supreme Court decisions from 1996 to 2012 suffered from reference rot. Link rot figures were close behind, and both problems were found to increase dramatically over time.

Our dyspepsia-inducing discovery led us to consider archiving solutions that would allow readers to access linked content in real time, while eating . . . er, reading, without having to jump out of the report to search a database of archived material. This quest ultimately led to a solution known as perma.cc, which was developed for the legal community by the Harvard Library Innovation Lab. A plan for implementing perma.cc in the Law Library’s Global Legal Research Directorate is now being cooked up, with a target implementation date of October 1 this year — the beginning of the new fiscal year. This means that hyperlinked footnote references in new reports by the Directorate will also contain a link to an archived version of the referenced web page, allowing readers permanent access to key legal materials. Bon appétit!

District Dispatch: Time to vote for libraries at SXSW

Mon, 2015-08-17 13:53

From Flickr

Believe it or not, the annual interdisciplinary fete known as South by Southwest (SXSW) is once again around the corner – and, as in years past, we need your help to make sure libraries are well represented. Last year, with your help, OITP’s Larra Clark participated in the Austin-based event (which consists of four separate convenings – SXSW Interactive, SXSW Edu, SXSW Music and SXSW Film) with D.C. Public Library’s Nick Kerelchuk and start-up MapStory’s Jonathan Marino. Larra, Nick and Jonathan’s SXSW Interactive panel described how hundreds of U.S. libraries meet the needs of this country’s growing cohort of self-employed, temp and freelance workers by providing workspaces and programming that foster entrepreneurship and creativity.

This year, ALA once again hopes to make an impression at SXSW. The Office for Information Technology Policy proposed two programs, one for Interactive and one for EDU:

Technology Adoption as Policy Linchpin
As technology innovation speeds forward, the gap between early and late adopters is growing to the detriment of individuals and communities. Digital adoption is central to addressing a range of policy woes from underperforming schools to unemployment to housing security. Home broadband adoption took policy center stage in 2015 with President Obama’s Broadband Opportunity Council, the FCC’s Lifeline proceeding and HUD’s public-private ConnectHome effort. This session will discuss the gap, how to consistently link access and adoption across sectors, critically explore policy options, share exemplary examples and look to the future of continuous digital adoption in relationship to innovation.

Improving 3D Printing Workflow to Boost Learning
3D printing is taking off in libraries, schools and universities, expanding opportunities for creative learning and expression. But one of the biggest obstacles to helping all people benefit from this trend is a lack of capacity in these institutions – in terms of physical space, equipment, technical know-how, broadband capacity and person power. How can these learning centers lead everyone onto the 3D printing on-ramp without creating a logjam? It’s possible! Hear from a panel of burning souls from across the 3D printing world who have dedicated blood, sweat and tears to advancing the 3D revolution.

But wait, there’s more. Re:Create, a new copyright coalition of which ALA is a founding member, proposed this program:

Copyright & Creators: 2026
What does the future hold for copyright? Who are the gatekeepers and how does this power structure need to change to meet not only the needs of today’s digital age, but also the needs of future creativity and innovation? The Copyright & Creators: 2026 panel will speculate on where the innovations and advancements will be in 2026. Will our laws keep pace with the times or fall behind? And how will people continually interact with copyright? Moderated by a veteran reporter, panelists include a respected academic, a noted futurist and a fan fiction leader who will debate the trajectory of copyright law and where some of the future conversations and conflicts will be a decade from now.

And…Benetech, a non-profit social enterprise organization, proposed the following program for Edu to highlight its establishment of a new 3D printing coalition between libraries, museums and schools, in which ALA is involved:

No More Yoda Heads: 3D printing 4 diverse learners
Research suggests that 3D objects are important for learning and reinforcing complex spatial concepts that are difficult to convey or explore in any other way (e.g., cells and DNA). Although many schools have access to 3D printing technology, many machines are underutilized and used to print novelty items. In this session, learn about new collaborations with libraries and museums to help support teachers in providing multi-modal access to complex STEM topics as well as utilizing student talent to create innovative learning tools.

SXSW received more than 4,000 submissions this year—an all-time record—so we need your help to make the cut. Public voting counts for 30 percent of SXSW’s decision to pick a panel, so please support these great programs. It’s easy: Become a “registered voter” in the Panel Picker process by signing up for a free account here, and get your votes in before Friday, Sept. 4. Supportive comments are even more helpful in making one proposal stand out from another.

ALA also is a member of the SXSW library “team” that connects through the lib*interactive Facebook group and #liblove. Join the group and learn more about library proposals around the country.

Please share far and wide! Selected panels for SXSW Interactive will be announced starting Monday, Oct. 19, 2015. Those for SXSW Edu will be announced starting Wednesday, Oct. 21, 2015. Thanks!

The post Time to vote for libraries at SXSW appeared first on District Dispatch.

Islandora: Announcing Individual Membership in the Islandora Foundation

Mon, 2015-08-17 13:48

One of the biggest changes to come out of our Annual General Meeting on August 6th (aside from changing our Chairman from one Mark to another) was the creation of a new tier of membership, designed for individuals who want to show their support for the Islandora project on their own, outside of institutional membership. We evaluated several options, but the one voted in is a tiered model that allows you to select the amount you want to donate, with benefits varying based on the bracket. Memberships are yearly. The Individual levels and their benefits are as follows:

  1. $10 - $50
    1. Acknowledgement on islandora.ca
  2. $50 - $150
    1. Acknowledgement on islandora.ca
    2. e-badge
    3. Tuque Tuque (Not included in Annual Renewal)
  3. $150 - $250
    1. Acknowledgement on islandora.ca
    2. e-badge
    3. Tuque Tuque (Not included in Annual Renewal)
    4. 10% discount for Islandora events 
  4. $250 +
    1. Acknowledgement on islandora.ca
    2. e-badge
    3. t-shirt (Not included in Annual Renewal)
    4. 25% discount for Islandora events
    5. Tuque Tuque (Not included in Annual Renewal)

To join as an Individual member of the Islandora Foundation, please donate today.

Of course, we still welcome (and rely on) the support of our institutional members, so if you are part of an organization that is (or should be) considering membership in the Islandora Foundation, these levels still apply and you should contact us to get on board:

Member - $2,000 / year

  • 2 Islandora Community Supporter T-Shirts (Not included in Annual Renewal)
  • e-badge for organization website
  • 50% discount for 1 Camp registration per year 
  • 25% discount Online Training 
  • Link to organization website 

Collaborator - $4,000 / year

  • 3 Islandora Community Supporter T-Shirts (Not included in Annual Renewal)
  • e-badge for organization website 
  • 1 free Camp registration per year 
  • 50% discount Online Training 
  • Appointment of 1 representative to IF Roadmap Committee 
  • Links to organization sites/collections 

Partner - $10,000 / year

  • 10 Islandora Community Supporter T-Shirts (Not included in Annual Renewal)
  • e-badge for organization website
  • 2 free Camp registrations per year
  • Free access to Training
  • Appoint 1 to IF Board of Directors
  • Links to organization sites/collections
  • Camp booth

Cynthia Ng: Accessible Format Production Part 6: DAISY Book

Mon, 2015-08-17 03:35
Finally, here is the last part of the series, talking about creating DAISY books from edited e-text. Just a reminder that the following assumes you completed step 5 and you have an accessible RTF document to work from. Software Options and Which to Choose The option you choose depends on the kind of work you … Continue reading Accessible Format Production Part 6: DAISY Book

Patrick Hochstenbach: Brush Inking exercises

Sat, 2015-08-15 06:13
She didn’t know I was secretly drawing her with fast brush sketches. Filed under: Comics, Figure Drawings Tagged: brush, comic, figure drawing, inking, sketch

William Denton: Unreasonably, Ridiculously Long

Fri, 2015-08-14 20:03

Coursera sends me emails every now and then suggesting courses I might want to try. The emails are filled with stupidly long URLs like this, which I present in 20-character lines:

https://eventing.coursera.org/redirect/ _WtLvKefAb4HnpCdGvrQ 2Bzme2yv6Vbcd4MdVvIA AHedEK3IwHDRNzGbV5R1 1c43qg-ohDi3F4H4lL1K qvDPBg.DVNpoJbn4hljT 4VLZbW1Cg.Uz-IOZ2YNh Fi1RAvvYlFfrp8yzjINi 9_AfhUNf7kYjg2ZfF36n fY8EiibmNlPznFJVn4ue 8qRs0SM_mWqyoNNQk9Dx qFeymHudQYCQs2xX3nAT hQ0lbLN_Rxz1ht8k3aTk Il3FUWpTfVhkkViMBsvn y7kkgpArIc9gF1TC8DeU p9Y8iCfRJfFT5pEkJUF3 VcBWKGUxQ1wZ88i79cCX xH6WPX11pb-gQgDSSUMN gK1ZMuZweIsc4tLbjXqS rUpx8Ot672hFol7a5YSM 5DUBaFhO_5bdEEmIgN9D J0YzrZuMqDmdzdlZqhVl UrQbkMHAedLOJPhOXOWm IMzJZH-KYx-DDys6jsSb swOemmenthal7dMVIceI 98sB285q1GMrIyZYM2Vq telTYNMWkperOLU9y7nW cvg-kNp2cBtiXFV-Lu8z c_wHdBdHUNd9IS0NdqG1 l0J0CQIIhyCVTlF81agA B2IrOF0_XPjXNoETLRcv whOf4OQ-ZUJdHGWUvXiW sfWqenNfCfFHNanfsaet vom-h43cK-oVlYMxSk1y 61YKrNZWhGFS4Vll1SO2 jRASohdxl-bEv2dz3YNW kzlr-PW-KpBYqtUVxe3T l69PUWmCiPOJ1Aji1zt7 LTCtooooastlt8tBO8gM xiST4k6qLRxbpkChl6vZ TWmvTEky58_duy-wibto 3pa_-aVfrdpn2TTEHd73 D76Ageve48W7hS8UG7eP raJ1EItRnW3K3V5VwMMr _UU5YVNeuED1Pq9GYXTG VRbZp071g-iiOP5EE_W2 vKipOa1YnwO2S-LN7Lvc uBF_nOexEE0daZlKXbjiZi

That is 942 characters long. With lower- and upper-case letters, numbers, underscores and hyphens available, each character in the string can be one of 64 choices, so there are 64^942 possible strings, which Wolfram Alpha says is on the order of 10^1701.
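
The exponent checks out: 64^942 = 10^(942 × log10 64) ≈ 10^1701.4, which you can confirm from the command line if you have Python handy:

$ python -c "import math; print(942 * math.log10(64))"

That prints roughly 1701.42, so 64^942 is indeed on the order of 10^1701.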

What is going on in Coursera’s notification system?

William Denton: Genius at Play

Fri, 2015-08-14 15:47

Genius at Play: The Curious Mind of John Horton Conway, by Siobhan Roberts, is the best biography I’ve read in a while, and it’ll be in my top ten favourite books of 2015. Conway is a mathematician, an unruly digressive eccentric fascinating genius mathematician, and this is an unruly digressive eccentric fascinating biography, because no normal narrative structure (like Roberts used for her fine biography of straitlaced geometer Donald Coxeter) could get across what Conway is like. Conway’s such an unstoppable force he gets his own typeface in the book, so he can explain mathematics or tell a story or just interject.

Here’s a trailer Roberts did for the book:

Everyone in mathematics and computer science knows Conway, for the game of Life (which he grew to hate), combinatorics, games, surreal numbers (which inspired Donald Knuth to write a novel), group theory, the Doomsday algorithm (a great trick, and part of Conway’s regular shtick), the free will theorem, and much more. He’s done major work in many different areas of mathematics.

One of the delights of the book is how well it gets across Conway’s unceasing desire to know everything, especially mathematics, and his absolute excitement and delight in numbers and geometry and groups and games. He’s a genius. The way he is in this world is not the way that other people are.

Conway is quoted extensively in the book. I especially like this one, from near the end:

Richard Dawkins wrote a book before he wrote The God Delusion called Unweaving the Rainbow. Now, this title is taken from a few lines from Keats. He says, “Shalt thou unweave the rainbow?” And it’s a vaguely unscientific theme. He’s saying if you explain the rainbow, it is somehow making it less beautiful, by taking away the mystery from it. But everybody who knows anything about anything knows that the more you know, the more beautiful it is.

This is the same point Richard Feynman made:

Feynman said:

I have a friend who’s an artist and has sometimes taken a view which I don’t agree with very well. He’ll hold up a flower and say, “Look how beautiful it is,” and I’ll agree. Then he says, “I as an artist can see how beautiful this is, but you as a scientist take this all apart and it becomes a dull thing,” and I think that he’s kind of nutty.

First of all, the beauty that he sees is available to other people and to me too, I believe, although I might not be quite as refined aesthetically as he is, I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean, it’s not just beauty at this dimension, at one centimetre; there’s also beauty at smaller dimensions, the inner structure, also the processes. The fact that the colours in the flower evolved in order to attract insects to pollinate it is interesting: it means that insects can see the colour. It adds a question: does this aesthetic sense also exist in the lower forms? Why is it aesthetic? All kinds of interesting questions which the science knowledge only adds to the excitement, the mystery and the awe of a flower. It only adds. I don’t understand how it subtracts.

Conway was talking about rainbows because he’s fascinated with them and how they work—which, of course, most people don’t, or why you can sometimes see two rainbows, or that third and fourth rainbows are also possible. The quote continues, in classic Conway style:

And I think that is the theme of Dawkins’s book—he’s referring to Keats and saying, “No, it’s a good idea to unweave the rainbow.” I keep on meaning to catch Dawkins one day and interrogate him on how the rainbow is formed. Because I think if he’s written a book called Unweaving the Rainbow, he should actually succeed in unweaving the rainbow. Maybe he does, I don’t know, I haven’t read his book. So maybe he knows how the rainbow is formed, but it’s really quite conceivable that he doesn’t, because so very few people do.

This is an unusual book, and the only one I can think of that’s similar is Willeford by Don Herron, his biography of Charles Willeford, the great American novelist. (The Burnt Orange Heresy is the finest novel about modern art ever written.) When I first read it I didn’t appreciate how good it was. Herron knew Willeford. Willeford was a supreme storyteller (and bullshitter, in the best sense), and the usual biographical approach wouldn’t work with him, so Herron did it differently, with the kind of approach Roberts takes with Conway. Willeford still deserves a serious academic biography, but you wouldn’t get to know the man in that book like you do in Herron’s.

(By the way, if you’re ever in San Francisco, take Don Herron’s Dashiell Hammett walking tour. I went out with him, just the two of us, in 2008, one of the most memorable days of my life. He took me all over Hammett’s San Francisco, including where Brigid O'Shaughnessy shot Miles Archer, and most amazingly of all he was able to show me the apartment where Hammett lived when he wrote The Maltese Falcon—the apartment is the exact model for Spade’s. I’ll never forget that.)

Back to Genius at Play. I highly recommend it. Even if you’re not too interested in mathematics, it’s worth reading. Roberts and Conway do a fine job of explaining the math, but what’s most important is Conway himself, and his utter joy and complete involvement in what he does, same as a composer or painter might have, or, perhaps, that we all seek in our own lives. (Though probably with fewer marriages and affairs.)

Finally, here’s a Numberphile video with more from Roberts, where Conway goes to McMaster University so Sandra Witelson can run an fMRI on his brain. The grumpy visit is also described in the book.

William Denton: Minty fresh

Fri, 2015-08-14 14:31

I listen to Rdio a lot, and I hooked up an old laptop to my stereo with a FiiO E10 USB digital-to-analog converter (it’s great, and priced low) for maximum home listening pleasure. The laptop is a Lenovo Thinkpad X120e, running Ubuntu. I like Thinkpads (I’m writing this on an X240), and they wear well, but the battery on it is pretty much dead, and I spilled a glass of red wine on the keyboard and Page Down sticks, but still, if you spill wine on an advanced computing device, that’s a small price to pay. It was cheap Argentine malbec, so no major loss there either.

A few days ago I ran the updater. One of the updates was to the kernel, which might have been where the problem arose: the machine wouldn’t boot! It started up, detected the hard drives, the screen flashed … and then instead of the login screen showing up in a few seconds, the screen stayed black. If I booted into recovery mode and then rebooted it would work, but at low graphical resolution, and that’s a stupid fix anyway.

After some fiddling I decided Ubuntu just wouldn’t work on it, so I tried to install Debian. The Thinkpad requires a non-free driver for the wifi to work, so I installed with the thing plugged into my router with a cable, got it going, made sure it would reboot properly, added the firmware-realtek package … and it just wouldn’t see the wifi device. After more fiddling I decided Debian wasn’t the thing either.

Next I tried Linux Mint, which is based on Debian and Ubuntu, and (philosophically troublingly, but installationally pleasingly) includes non-free wifi drivers, so it all worked pretty much out of the box. (Debian’s great on servers, but Ubuntu and Mint have made installing on a personal machine much, much easier.) All I’ll ever do on it is use Firefox or ssh in from the other side of the room, so I don’t care what it looks like. I got the sound configured to use the FiiO E10, and all is well. I logged into Rdio to find Iron Maiden have released “Speed of Light,” a song from Book of Souls, which comes out next month, so I cranked that up and got back to work. Up the Irons!

Harvard Library Innovation Lab: Link roundup August 14, 2015

Fri, 2015-08-14 13:19

Friday Fun Day

Use the words normal people use

Medieval Sword contains Cryptic Code. British Library appeals for help to crack it. | Ancient Origins

A library, a sword and a cryptic code

PomPom Mirror

PomPoms as pixels. The fluidity is beautiful.

The Last Kings Of Kong

“a player using optimal strategy and getting as many lucky breaks as possible would score 1,265,000 points.”

Old graph paper

“Specialty graph paper was a big deal before computers took over all of our plotting chores.”

LITA: Taming the beast: a case for task-driven projects

Fri, 2015-08-14 13:00

Have you ever been assigned to a project? If so, you know that projects can be daunting, sometimes overwhelming creatures that seem challenging to overcome. Where do you begin? What next? Before you know it you’re lost in the jungle with no clear way out. So, how do you tame the beast? How do you get through a project without getting lost along the way? In this post I’ll be making a case for tasks.

Paving the way through the jungle

Tasks are the real, tangible steps taken to accomplish a goal, in this case, a project. Together, they build the roadmap that helps you get from point A to point Z. So, how do you come up with tasks for a large, sometimes abstract project? First, you need to understand what the end goal is. Second, you need to understand where you are currently at. Then you start plotting the tasks. Begin with high-level, somewhat tangible tasks (I sometimes call these objectives). From there, break down each of those tasks into smaller, more refined tasks. Continue that process until you feel you have a solid map to begin with.

Many times tasks are evolutionary. You come across something you didn’t expect, or one of your tasks falls through. Just keep plotting forward towards the end goal. Below is a real-world example from a project I’m currently working on.

The real-world example

I began employment at Iowa State University (ISU) back in June. A month into the job I met with my fellow amazing metadata librarian, Kelly Thompson, to be assigned my first legit metadata project. She tells me that she’d like for me to analyze ISU’s digital collection metadata for data cleanup purposes and to come up with a core set of metadata fields to use for all of the digital collections with the end goal of contributing to consortia like the Open Archives Initiative (OAI) and Digital Public Library of America (DPLA).

So there I was with my first big-boy project. How was I supposed to tackle this project when I had very limited knowledge of ISU’s digital asset management system (an OCLC-hosted ContentDM instance), in addition to the fact that I had no in-depth understanding of their metadata model? Luckily, my task-driven instincts kicked in. First, I needed to figure out the end goal: prepare ISU’s digital collection metadata for outside sharing through OAI and DPLA. Then, I needed to understand my current standing: ground zero. From there, I began paving the way.

The initial tasks I came up with, seen on the sticky note above, gave me enough fuel to get the engine running. Eventually some of these tasks fizzled, while others have exploded into multi-step mini-projects. I’m almost two months in now, and the project has grown exponentially. But I am not stressed out, because I have tasks to keep me grounded.

Concluding thoughts

Reflecting on the project thus far, I do have a couple of thoughts and tips. I have the files for this project organized in a hierarchical folder structure, which helps me keep related files neatly together. They are divided into categories like “Data dictionary”, “Metadata fields to be cleaned”, and “Data cleanup workflows”.  As you can see from my sticky note, my tasks are not as organized. For future projects I would like to arrange my tasks to reflect how I’ve organized my folders/files to better pair the two. This would make the tasks easier for me to keep track of. It would also increase clarity when I meet with colleagues to discuss a project.

One recommendation I would make is to flesh out your tasks and plan ahead as much as you can before the project begins. The more you can prepare beforehand the easier it will be to keep the beast tamed. I have been on past projects where the group did very little preparation beforehand, and it showed. It was very difficult to get the project going and to keep everybody on the same page.


Cynthia Ng: Tips on Recording Audiobooks with Audacity

Thu, 2015-08-13 22:49
While there is a ton of documentation out there for using Audacity, recently, I’ve been recording some audiobooks in Audacity and struggled sometimes to find how to do specific things that are less to do with mixing tracks and more to do with editing a single track. So, here are a few tips and list … Continue reading Tips on Recording Audiobooks with Audacity
