
Feed aggregator

LITA: Nominations Sought for Prestigious Kilgour Research Award

planet code4lib - Wed, 2014-09-17 19:17

Nominations are invited for the 2015 Frederick G. Kilgour Award for Research in Library and Information Technology, sponsored by OCLC, Inc. and the Library and Information Technology Association (LITA), a division of the American Library Association (ALA). The deadline for nominations is December 31, 2014.

The Kilgour Research Award recognizes research relevant to the development of information technologies, in particular research showing promise of having a positive and substantive impact on any aspect of the publication, storage, retrieval and dissemination of information or how information and data are manipulated and managed. The Kilgour award consists of $2,000 cash, an award citation and an expense paid trip (airfare and two nights lodging) to the ALA Annual Conference.

Nominations will be accepted from any member of the American Library Association. Nominating letters must address how the research is relevant to libraries; is creative in its design or methodology; builds on existing research or enhances potential for future exploration; and/or solves an important current problem in the delivery of information resources. A curriculum vitae and a copy of several seminal publications by the nominee must be included. Preference will be given to completed research over work in progress. More information and a list of previous winners can be found at

http://www.ala.org/lita/awards/kilgour

Currently-serving officers and elected officials of LITA, members of the Kilgour Award Committee and OCLC employees and their immediate family members are ineligible.

Send nominations by December 31, 2014, to the Award jury chair:

Tao Zhang
Purdue University Libraries
504 W State St
West Lafayette, IN 47907-4221
or zhan.1022@purdue.edu

The Kilgour Research Award will be presented at the LITA President’s Program on June 29th during the 2015 ALA Annual Conference in San Francisco.

About OCLC

Founded in 1967, OCLC is a nonprofit, membership, computer library service and research organization dedicated to the public purposes of furthering access to the world’s information and reducing library costs. More than 72,000 libraries in 170 countries have used OCLC services to locate, acquire, catalog, lend, preserve and manage library materials. Researchers, students, faculty, scholars, professional librarians and other information seekers use OCLC services to obtain bibliographic, abstract and full-text information when and where they need it. For more information, visit www.oclc.org.

About LITA

LITA is the leading organization reaching out across types of libraries to provide education and services for a broad membership including systems librarians, library administrators, library schools, vendors and many others interested in leading edge technology and applications for librarians and information providers. For more information, visit www.lita.org, or contact the LITA office by phone, 800-545-2433, ext. 4268; or e-mail: lita@ala.org

For further information, contact Mary Taylor at LITA, 312-280-4267.

LITA: LITA/Ex Libris Seeking LIS Student Authors

planet code4lib - Wed, 2014-09-17 18:54

The Library and Information Technology Association (LITA), a division of the American Library Association (ALA), is pleased to offer an award for the best unpublished manuscript submitted by a student or students enrolled in an ALA-accredited graduate program. Sponsored by LITA and Ex Libris, the award consists of $1,000, publication in LITA’s refereed journal, Information Technology and Libraries (ITAL), and a certificate. The deadline for submission of the manuscript is February 28, 2015.

The purpose of the award is to recognize superior student writing and to enhance the professional development of students. The manuscript can be written on any aspect of libraries and information technology. Examples include digital libraries, metadata, authorization and authentication, electronic journals and electronic publishing, telecommunications, distributed systems and networks, computer security, intellectual property rights, technical standards, desktop applications, online catalogs and bibliographic systems, universal access to technology, library consortia and others.

At the time the unpublished manuscript is submitted, the applicant must be enrolled in an ALA-accredited program in library and information studies at the master's or PhD level.

To be eligible, applicants must follow the detailed guidelines and fill out the application form at:

http://www.ala.org/lita/sites/ala.org.lita/files/content/involve/committees/exlibris/ExLibrisAwardApplication.pdf

Send the signed, completed forms by February 27, 2015 to the Award Committee Chair,

Sandra Barclay
Kennesaw State University
1200 Chastain Rd NW MD# 0009
Kennesaw, GA 30144-5827.

Submit the manuscript to Sandra electronically at

sbarclay@kennesaw.edu

by February 28, 2015.

The award will be presented at the LITA President’s Program during the 2015 ALA Annual Conference in San Francisco.

About Ex Libris

Ex Libris is a leading provider of automation solutions for academic libraries. Offering the only comprehensive product suite for electronic, digital, and print materials, Ex Libris provides efficient, user-friendly products that serve the needs of libraries today and will facilitate their transition into the future. Ex Libris maintains an impressive customer base consisting of thousands of sites in more than 80 countries on six continents. For more information about Ex Libris Group visit www.exlibrisgroup.com.

About LITA

Established in 1966, LITA is the leading organization reaching out across types of libraries to provide education and services for a broad membership including systems librarians, library administrators, library schools, vendors and many others interested in leading edge technology and applications for librarians and information providers. For more information, visit www.lita.org, or contact the LITA office by phone, 800-545-2433, ext. 4268; or e-mail: lita@ala.org

For further information, please contact Mary Taylor at LITA, 312-280-4267.

Harvard Library Innovation Lab: Link roundup September 17, 2014

planet code4lib - Wed, 2014-09-17 17:01

Looks like Matt’s been spamming the roundup.

Stack independent magazine subscription service

A different magazine delivered every month? Sounds cool.

Innovation and the Bell Labs Miracle

The Idea Factory is the best thing I’ve read about the organization and process required to pump out innovation.

Internet Archive – a short film about accessing knowledge

A short video doc on the @internetarchive. Love the IA culture and work.

There’s Finally A Modern Typeface For Programmers

A typeface for development work. Refined and monospaced.

Libraries trusted to keep manuscripts hidden and safe

The library is trusted. Deeply.

Library of Congress: The Signal: Welcoming the Newest Member of the Viewshare Team to the Library

planet code4lib - Wed, 2014-09-17 15:49

The following is a guest post by Patrick Rourke, an Information Technology Specialist and the newest member of the Library’s Viewshare team.

I made my first forays into computing on days when it was too cold, wet or snowy to walk in the woods behind our house, in a room filled with novels, atlases and other books.  Usually those first programming projects had something to do with books, or writing, or language – trying to generate sentences from word lists, or altering the glyphs the computer used for text to represent different alphabets.

After a traumatic high school exposure to the COBOL programming language (Edsger Dijkstra once wrote that “its teaching should be regarded as a criminal offense” (pdf)), in college I became fascinated with the study of classical Greek and Roman history and literature. I was particularly drawn to the surviving fragments of lost books from antiquity – works that were not preserved, but of which traces remain in small pieces of papyrus, in palimpsests, and through quotations in other works. I spent a lot of my free time in the computer room, using GML, BASIC and ftp on the university’s time sharing system.

My first job after graduation was on the staff of a classics journal, researching potential contributors, proofreading, checking references. At that time, online academic journals and electronic texts were being distributed via email and the now almost-forgotten medium of Gopher. It was an exciting time, as people experimented with ways to leverage these new tools to work with books, then images, then the whole panoply of cultural content.

This editorial experience led to a job in the technical publications department of a research company, and my interest in computing to a role as the company webmaster, and then as an IT specialist, working with applications, servers and networking. In my spare time, I stayed engaged with the humanities, doing testing, web design and social media engagement for the Suda On Line project, which publishes a collaborative translation and annotation of the 10th-century Byzantine lexicon in which many of those fragments of lost books are found.

My work on corporate intranets and my engagement with SOL motivated me to work harder on extending my programming skills, so before long I was developing web applications to visualize project management data and pursuing a master’s degree in computer science.  In the ten years I’ve been working as a developer, I’ve learned a lot about software development in multiple languages, frameworks and platforms, worked with some great teams and been inspired by great mentors.

I join the National Digital Information Infrastructure and Preservation Program as an Information Technology Specialist, uniting my interests in culture and computing. My primary project is Viewshare, a platform the Library makes available to cultural institutions for generating customized visualizations – including timelines, maps, and charts – of digital collections data. We will be rolling out a new version of Viewshare in the near future, and then I will be working with the NDIIPP team and the Viewshare user community on enhancing the platform by developing new features and new ways to view and share digital collections data. I’m looking forward to learning from and working with my new colleagues at the Library of Congress and everyone in the digital preservation community.

Islandora: Using IntelliJ IDEA Ultimate as a Dev Environment for Islandora.

planet code4lib - Wed, 2014-09-17 14:48

In the past I used NetBeans as my preferred environment for developing Islandora code, and I periodically tried Eclipse and others to see if they had any new must-have features. At DrupalCon Portland in 2013 I noticed many of the presenters were using PHPStorm and developers spoke highly of it, so I thought I should give it a try.

Most of the code for Islandora is PHP, but some of the open source projects we rely on are written in Java or other languages, so instead of trying out PHPStorm I downloaded a trial of IntelliJ IDEA Ultimate Edition, which has the functionality of PHPStorm (via a plugin) plus support for many other languages and frameworks.

My first impressions of IDEA Ultimate Edition were good. It was quick to load (compared to NetBeans) and the user interface was snappy, with no lag for code completion. I also really liked the Darcula theme, which is easy on the eyes. My first impression of IDEA was enough to make me think it was worthwhile to spend a bit more time using it. The more I used it, the more I liked it! I have been using IDEA as my main IDE for a year now.

IDEA has many plugins and supports many frameworks for various languages, so initial configuration can take some time, but once you have things configured it works well and runs smoothly. Islandora has strict coding standards, and IDEA is able to help with this: we can point it at the same CodeSniffer configuration that the Drupal Coder module uses. IDEA then highlights anything that does not conform to the configured coding standards. It will also fix a lot of the formatting errors if you choose to reformat the code. The PHP plugin also has support for Mess Detector, Composer, etc.

I also like the PHP debugger in IDEA. You can have several different configurations set up for various projects. While the debugger is a useful tool, I have run into some situations where it opens a second copy of a file in the editor, which can cause issues if you don't notice.

You can also open an SSH session within IDEA, which is great for running things like Git commands. The editor does have built-in support for Git, Subversion and other version control systems, but I prefer to use the command line for this, and in IntelliJ I can do so while still in the IDE.

IDEA has good support for editing XML files and running/debugging transforms within the IDE.

Overall, IntelliJ IDEA Ultimate is definitely worth trying! It is a commercial product, so be prepared to buy a license after your trial. However, they do have a free Community Edition; be sure to check whether it supports PHP. Most of the functionality discussed here is also available in PHPStorm, which is cheaper but doesn't support languages other than PHP, HTML and the like. If you are part of an open source project you can apply for an open source license (Islandora has one); if you qualify, you may get a free license.

District Dispatch: It must be “FCC month” at the ALA

planet code4lib - Wed, 2014-09-17 14:26

Well, yes, almost any month could be “FCC month” with the number of proceedings that affect libraries and our communities, but September has been particularly busy. Monday we entered the next round of E-rate activity with comments in response to the Federal Communications Commission’s Further Notice of Proposed Rulemaking (emphasis on “Further”), and closed out a record-setting public comment period on promoting and protecting the Open Internet with two public filings.

I’ll leave it to Marijke to give the low-down on E-rate, but here’s a quick update on the network neutrality front:

ALA and the Association of College & Research Libraries (ACRL) filed “reply” comments with a host of library and higher education allies to further detail our initial filing in July. We also joined with the Center for Democracy & Technology (CDT) to re-affirm that the FCC has legal authority to advance the Open Internet through Title II reclassification or a strong public interest standard under Section 706. This work is particularly important as most network neutrality advocates agree the “commercially reasonable” standard originally proposed by the FCC does not adequately preserve the culture and tradition of the internet as an open platform for free speech, learning, research and innovation.

For better or worse, these filings are just the most recent milestones in our efforts to support libraries’ missions to ensure equitable access to online information. Today the FCC is beginning to hold round tables related to network neutrality (which you can catch online at www.fcc.gov/live). ALA and higher education network neutrality counsel John Windhausen has been invited to participate in a roundtable on October 7 to discuss the “Internet-reasonable” standard we have proposed as a stronger alternative to the FCC’s “commercially reasonable” standard.

The Senate will take up the issue in a hearing today, with witnesses including CDT President and CEO Nuala O’Connor. And a library voice will again be included in a network neutrality forum—this time with Sacramento Public Library Director Rivkah Sass speaking at a forum convened by Congresswoman Doris Matsui on September 24. Vermont State Librarian Martha Reid testified at a Senate field hearing in July, and Multnomah County Library Director Vailey Oehlke discussed network neutrality with Senator Ron Wyden as part of an event in May.

This month ALA also filed comments in support of filings from the Schools, Health and Libraries Broadband (SHLB) Coalition, the State E-rate Coordinators Alliance (SECA) and NTCA—The Rural Broadband Association calling for eligible telecommunications carriers (ETCs) in the Connect America Fund to connect anchor institutions at higher speeds than those delivered to residents. Going further, ALA proposes that an ETC receiving CAF funding must serve each public library in its service territory at connection speeds of at least 50 Mbps download and 25 Mbps upload. Access and affordability are the top two barriers to increasing library broadband capacity, so both the Connect America Fund and the E-rate program are important components of increasing our ability to meet our public missions. AND we presented at the Telecommunications Policy Research Conference! Whew.

Buckle your seat belts and stay tuned, because “FCC Month” is only half over!

The post It must be “FCC month” at the ALA appeared first on District Dispatch.

DPLA: More than 148,000 items from the U.S. Government Printing Office now discoverable in DPLA

planet code4lib - Wed, 2014-09-17 13:50

We were pleased to share yesterday that nearly 60,000 items from the Medical Heritage Library have made their way into DPLA, and we’re now doubly pleased to share that more than 148,000 items from the Government Printing Office’s (GPO) Catalog of U.S. Government Publications (CGP) are now also available via DPLA.

To view the Government Printing Office in DPLA, click here.

Notable examples of the types of records now available from the GPO include the Federal Budget, laws such as the Patient Protection and Affordable Care Act, Federal regulations, and Congressional hearings, reports, and documents. GPO continuously adds records to the CGP which will also be available through DPLA, increasing the discoverability of and access to Federal Government information for the American public.

“GPO’s partnership with DPLA will further GPO’s mission of Keeping America Informed by increasing public access to a wealth of information products available from the Federal Government,” said Public Printer Davita Vance-Cooks. “We look forward to continuing this strong partnership as the collection of Government information accessible through DPLA continues to grow”.

GPO is the Federal Government’s official, digital, secure resource for producing, procuring, cataloging, indexing, authenticating, disseminating, and preserving the official information products of the U.S. Government. The GPO is responsible for the production and distribution of information products and services for all three branches of the Federal Government, including U.S. passports for the Department of State as well as the official publications of Congress, the White House, and other Federal agencies in digital and print formats. GPO provides for permanent public access to Federal Government information at no charge through our Federal Digital System (www.fdsys.gov), partnerships with approximately 1,200 libraries nationwide participating in the Federal Depository Library Program, and our secure online bookstore. For more information, please visit www.gpo.gov.

To read the full GPO press release announcing its partnership with DPLA, click here.

All written content on this blog is made available under a Creative Commons Attribution 4.0 International License. All images found on this blog are available under the specific license(s) attributed to them, unless otherwise noted.

Andromeda Yelton: jQuery workshop teaching techniques, part 2: techniques geared at affective goals

planet code4lib - Wed, 2014-09-17 13:30

I’m writing up what I learned from teaching a jQuery workshop this past month. I’ve already posted on my theoretical basis and pacing. Today, stuff I did to create a positive classroom climate and encourage people to leave the workshop motivated to learn more. (This is actually an area of relative weakness for me, teaching-wise, so I really welcome anyone’s suggestions on how to cultivate related skills!)

Post-it notes

I distributed a bunch of them and had students put them on their laptops when they needed help. This lets them summon TAs without breaking their own work process. I also had them write something that was working and something that wasn’t on post-its at the end of Day 1, so I could make a few course corrections for Day 2 (and make it clear to the students that I care about their feedback and their experience). I shamelessly stole both tactics from Software Carpentry.

Inclusion and emotion

The event was conducted under the DLF Code of Conduct, which I linked to at the start of the course material. I also provided Ada Initiative material as background. I talked specifically, at the outset, about how learning to code can be emotionally tough; it pushes the limits of our frustration tolerance and often (i.e. if we’re not young, white men) our identity – “am I the kind of person who programs? do people who program look like me?” And I said how all that stuff is okay. Were I to do it over again, I’d make sure to specifically name impostor syndrome and stereotype threat, but I’ve gotten mostly good feedback about the emotional and social climate of the course (whose students represented various types of diversity more than I often see in a programming course, if less than I’d like to see), and it felt like most people were generally involved.

Oh, and I subtly referenced various types of diversity in the book titles I used in programming examples, basically as a dog-whistle that I’ve heard of this stuff and it matters to me. (Julia Serano’s Whipping Girl, which I was reading at the time and which interrogated lots of stuff in my head in awesome ways, showed up in a bunch of examples, and a student struck up a conversation with me during a break about how awesome it is. Yay!)

As someone who’s privileged along just about every axis you can be, I’m clueless about a lot of this stuff, but I’m constantly trying to suck less at it, and it was important to me to make that both implicit and explicit in the course.

Tomorrow, how ruthless and granular backward design is super great.

LITA: Browser Developer Tools

planet code4lib - Wed, 2014-09-17 13:00

Despite what the name may imply, browser developer tools are not only useful for developers. Anyone who works with the web (and if you are reading this blog, that probably means you) can find value in browser developer tools because they use the browser, the tool we all use to access the riches of the web, to deconstruct the information that makes up the core of our online experience. A user who has a solid grasp on how to use their browser’s developer tools can see lots of incredibly useful things, such as:

  • Dynamic views of a page’s HTML elements & data
  • CSS rules being applied to any given element
  • The effects of new user-supplied CSS rules
  • Margin & padding boundaries around elements
  • External files being loaded by a page (CSS & JS)
  • JavaScript errors, right down to the line number
  • The speed with which JavaScript files are loaded
  • An interactive JavaScript console (great for learning!)

The first step in understanding your browser’s developer tools is knowing that they exist. If you can only get to this step, you are far ahead of most people. Every browser has its own set of embedded developer tools, whether you are using Internet Explorer, Safari, Firefox, Chrome, or Opera. There’s no special developer version of the browser to install or any add-ons or extensions to download, and it doesn’t matter if you are on Windows, Mac or Linux. If a computer has a browser, it already has developer tools baked in.

The next step on the journey is learning how to use them. All browser developer tools are pretty similar, so skills gained in one browser translate well to others. Unfortunately the minor differences are substantial enough to make a universal tutorial impossible. If you have a favorite browser, learn how to activate the various developer tools, what each one can do, how to use them effectively, and how to call them with their own specific keyboard shortcut (learning to activate a specific tool with a keyboard shortcut is the key to making them a part of your workflow). Once you have a solid understanding of the developer tools in your favorite browser, branch out and learn the developer tools for other browsers as well. After you have learned one, learning others is easy. By learning different sets of developer tools you will find that some are better at certain tasks than others. For instance, (in my opinion) Firefox is best-in-class when dealing with CSS issues, but Chrome takes first place in JavaScript utilities.
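
If you want a quick taste before committing to a full tutorial, the interactive JavaScript console mentioned earlier is the friendliest place to start. Here are a few harmless lines to paste in and run; they only read information from whatever page you currently have open, and nothing about them is specific to any one site:

    // Count the images on the current page
    console.log(document.querySelectorAll('img').length);

    // List the external stylesheets the page has loaded
    var sheets = document.querySelectorAll('link[rel="stylesheet"]');
    for (var i = 0; i < sheets.length; i++) {
      console.log(sheets[i].href);
    }

    // See whether (and which version of) jQuery the page is using
    console.log(window.jQuery ? jQuery.fn.jquery : 'jQuery not loaded');

Watching the answers change as you move from site to site is a painless way to get comfortable with the console before tackling the other tools.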

Google search results using Firefox’s 3D view mode, which shows a web page’s nested elements as stacks of colored blocks. This is incredibly helpful for debugging CSS issues.

Another great reason to learn developer tools for different browsers has to do with the way browsers work. When most people think of web programming, they think of the server-side versions of files because this is where the work is done. While it’s true that server-side development is important, browsers are the real stars of the show. When a user requests a web page, the server sends back a tidy package of HTML, CSS and JavaScript that the browser must turn into a visual representation of that information. Think of it like a Lego kit; every kid buys the same Lego kit from the store, which has all the parts and instructions in a handy portable package, but it’s up to the individual to actually make something out of it, and often the final product varies slightly from person to person. Browsers are the same way: they all put the HTML, CSS and JavaScript together in a slightly different way to render a slightly different web page (this causes endless headaches for developers struggling to make a consistent user experience across browsers). Browser developer tools give us insight into both the code that the browser receives and the way that the individual browser is putting the web page together. If a page looks a bit different in Internet Explorer than it does in Chrome, we can use each browser’s respective developer tools to peek into the rendering process and see what’s going on in an effort to minimize these differences.

Now that you know browser developer tools exist and why they are so helpful, the only thing left to do is learn them. Teaching you to actually use browser developer tools is out of the scope of this post since it depends on what browser you use and what your needs are, but if you start playing around with them I promise you will find something useful almost immediately. If you are a web developer and you aren’t already using them, prepare for your life to get a lot easier. If you aren’t a developer but work with web pages extensively, prepare for your understanding of how a web page works to grow considerably (and as a result, for your life to get a lot easier). I’m always surprised at how few people are aware that these tools even exist (and what happens when someone stumbles upon them without knowing what they are), but someone with a solid grasp of browser developer tools can expose a problem with a single keyboard shortcut, even on someone else’s workstation. A person who can leverage these tools to figure out problems no one else can often acquires the mystical aura of an internet wizard with secret magic powers to their relatively mortal coworkers. Become that person with browser developer tools.

Ed Summers: Google’s Subconscious

planet code4lib - Wed, 2014-09-17 11:50

Can a poem provide insight into the inner workings of a complex algorithm? If Google Search had a subconscious, what would it look like? If Google mumbled in its sleep, what would it say?

A few days ago, I ran across these two quotes within hours of each other:

So if algorithms like autocomplete can defame people or businesses, our next logical question might be to ask how to hold those algorithms accountable for their actions.

Algorithmic Defamation: The Case of the Shameless Autocomplete by Nick Diakopoulos

and

A beautiful poem should re-write itself one-half word at a time, in pre-determined intervals.

Seven Controlled Vocabularies by Tan Lin.

Then I got to thinking about what a poem auto-generated from Google’s autosuggest might look like. Ok, the idea is of dubious value, but it turned out to be pretty easy to do in just HTML and JavaScript (low computational overhead), and I quickly pushed it up to GitHub.

Here’s the heuristic:

  1. Pick a title for your poem, which also serves as a seed.
  2. Look up the seed in Google’s lightly documented suggestion API.
  3. Get the longest suggestion (text length).
  4. Output the suggestion as a line in the poem.
  5. Stop if more than n lines have been written.
  6. Pick a random substring in the suggestion as the seed for the next line.
  7. GOTO 2

The initial results were kind of light on verbs, so I found a list of verbs and randomly added them to the suggested text, occasionally. The poem is generated in your browser using JavaScript so hack on it and send me a pull request.
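
For the curious, here is a rough, standalone sketch of that loop (not the code from the GitHub repository). The suggestqueries.google.com endpoint and its [query, [suggestions]] response shape are unofficial assumptions, and a browser may block the cross-origin request, so a recent Node with fetch is the safer place to run it:

    // Sketch of the heuristic above. suggest() assumes the unofficial
    // suggestqueries.google.com endpoint returning [query, [suggestions]].
    async function suggest(seed) {
      const url = 'https://suggestqueries.google.com/complete/search?client=firefox&q='
        + encodeURIComponent(seed);
      const resp = await fetch(url);
      const data = await resp.json();
      return data[1] || [];               // the array of suggestion strings
    }

    async function poem(title, maxLines) {
      let seed = title;
      const lines = [title];
      for (let i = 0; i < maxLines; i++) {
        const suggestions = await suggest(seed);
        if (suggestions.length === 0) break;
        // take the longest suggestion as the next line (step 3)
        const line = suggestions.reduce((a, b) => (b.length > a.length ? b : a));
        lines.push(line);
        // reseed from a random substring of that line (step 6)
        seed = line.slice(Math.floor(Math.random() * line.length));
      }
      return lines.join('\n');
    }

    // poem('the subconscious of search', 10).then(console.log);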

Assuming that Google’s suggestions are personalized for you (if you are logged into Google) and your location (your IP address), the poem is dependent on you. So I suppose it’s more of a collective subconscious in a way.

If you find an amusing phrase, please hover over the stanza and tweet it — I’d love to see it!

Library Tech Talk (U of Michigan): Image Class Update

planet code4lib - Wed, 2014-09-17 00:00
The last visual refresh to the DLPS Image Class environment updated the layout and styles, but the interface mostly worked the same way. Starting this year, we've been making more drastic changes. These updates were based on what our analytics showed about browser use (larger, wider screens and, of course, mobile use) and conversations with collection managers.

District Dispatch: I left the E-rate Order on my desk last night

planet code4lib - Tue, 2014-09-16 19:47

After schlepping the 176 pages of the E-rate modernization Order around since July 23 (when the Commission released the Order, voted on July 11), my bag is remarkably empty today. While I didn’t continually refer to it over the last month and a half, it has been a constant companion as we prepared our comments to the Commission on the Further Notice of Proposed Rulemaking (FNPRM) that accompanied the July Order. I can unabashedly leave it behind since we filed our comments (pdf) last night.

E-rate may be the “other” proceeding with comments due yesterday, but for ALA these comments represent a milestone of sorts. True to form, the Commission asks many detailed questions in the FNPRM, but two issues stand out for us. First, the Commission opened the door to talk about the long-term funding needs of the program. Second, it’s now time for the Commission to take up a concern that has followed ALA since this proceeding began a year ago, and really since ALA started tracking the broadband capacity of libraries. We reopen the call to immediately address the broadband gap among the majority of libraries. With 98% of libraries below the 1 gigabit capacity goal asserted in the National Broadband Plan and adopted by the Commission, we have a long way to go before we can comfortably say we have made a dent in the gap.

In looking to the next order (hopefully sometime this fall) we have heard from our members that while having access to more funding for Wi-Fi (the heart of the July Order) is important, if the library only has a 3 or even 10 Mbps connection to the door, the patron trying to upload a résumé, or stream an online certification course, or download a homework assignment is still going to have a marginal experience.

Our comments therefore focus on these two primary issues—adequate funding to sustain the program and closing the broadband gap for libraries. Among other recommendations we ask the Commission to increase and improve options for special construction where libraries do not have access to affordable, scalable high-capacity broadband by:

  • Clarifying the amortization rules;
  • Eliminating the ban on special construction for dark fiber;
  • Allowing longer term contracts where there is special construction involved; and
  • Requiring service providers to lock in affordable prices for a significant number of years for agreements involving special construction.

As to the overall funding question, ALA is engaged with partners to gather data that will give us an understanding of the costs necessary for libraries to achieve the Commission’s capacity goals. We plan to submit information to the Commission in the next several weeks.

For more details on our comments you can certainly read the whole thing. Or, we prepared a summary (pdf) to start with. With reply comments due at the end of the month, it’s time to get started reading other submissions and picking up where we left off (and with the FCC filing system intermittently down—all those net neutrality filers, no doubt). We will continue connecting with our library colleagues and will begin more meetings at the Commission. More to come!

The post I left the E-rate Order on my desk last night appeared first on District Dispatch.

David Rosenthal: Two Sidelights on Short-Termism

planet code4lib - Tue, 2014-09-16 17:26
I've often referred to the empirical work of Haldane & Davies and the theoretical work of Farmer and Geanakoplos, both of which suggest that investors using Discounted Cash Flow (DCF) to decide whether an investment now is justified by returns in the future are likely to undervalue the future. This is a big problem in areas, such as climate change and digital preservation, where the future is some way off.
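
For readers who have not run the numbers recently, DCF values an investment by shrinking each future cash flow by a discount factor before summing; the steeper the rate, the less a distant payoff counts today. A minimal sketch, with cash flows and rates invented purely for illustration:

    // Net present value of a stream of annual cash flows at a given discount rate.
    function npv(rate, cashFlows) {
      return cashFlows.reduce(
        (sum, cf, year) => sum + cf / Math.pow(1 + rate, year + 1),
        0
      );
    }

    // The same $100 a year for 30 years, valued under two different rates.
    const flows = new Array(30).fill(100);
    console.log(npv(0.02, flows).toFixed(2));  // roughly 2240
    console.log(npv(0.10, flows).toFixed(2));  // roughly 943

Raising the rate from 2% to 10% cuts the present value of the same stream by more than half, with the most distant years losing the most; that is the arithmetic behind the worry about undervaluing the future.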

Now Harvard's Greenwood & Shleifer, in a paper entitled Expectations of Returns and Expected Returns, reinforce this:
We analyze time-series of investor expectations of future stock market returns from six data sources between 1963 and 2011. The six measures of expectations are highly positively correlated with each other, as well as with past stock returns and with the level of the stock market. However, investor expectations are strongly negatively correlated with model-based expected returns.

They compare investors’ beliefs about the future of the stock market, as reported in various opinion surveys, with the outputs of various models used by economists to predict the future based on current information about stocks. They find that when these models, all enhancements to DCF of one kind or another, predict low performance, investors expect high performance, and vice versa. If they have experienced poor recent performance and see a low market, they expect this to continue and are unwilling to invest. If they see good recent performance and a high market, they expect this to continue. Their expected return from investment will be systematically too high, or in other words they will suffer from short-termism.

Yves Smith at Naked Capitalism has a post worth reading critiquing a Washington Post article entitled America’s top execs seem ready to give up on U.S. workers. It reports on a Harvard Business School survey of its graduates entitled An Economy Doing Half Its Job. Yves writes:
In the early 2000s, we heard regularly from contacts at McKinsey that their clients had become so short-sighted that it was virtually impossible to get investments of any sort approved, even ones that on paper were no-brainers. Why? Any investment still has an expense component, meaning some costs will be reported as expenses on the income statement, as opposed to capitalized on the balance sheet. Companies were so loath to do anything that might blemish their quarterly earnings that they’d shun even remarkably attractive projects out of an antipathy for even a short-term lowering of quarterly profits.

Note "Companies were so loath". The usually careful Yves falls into the common confusion between companies (institutions) and their managers (individuals). Managers evaluate investments not in terms of their longer-term return to the company, but in terms of their short-term effect on the stock price, and thus on their stock-based compensation. It’s the IBGYBG (I'll Be Gone, You'll Be Gone) phenomenon, which amplifies the underlying problems of short-termism.

Galen Charlton: Libraries, the Ada Initiative, and a challenge

planet code4lib - Tue, 2014-09-16 16:28

I am a firm believer in the power of open source to help libraries build the tools we need to help our patrons and our communities.

Our tools focus our effort. Our effort, of course, does not spring out of thin air; it’s rooted in people.

One of the many currencies that motivates people to contribute to free and open source projects is acknowledgment.

Here are some of the women I’d like to acknowledge for their contributions, direct or indirect, to projects I have been part of. Some of them I know personally, others I admire from afar.

  • Henriette Avram – Love it or hate it, where would we be without the MARC format? For all that we’ve learned about new and better ways to manage metadata, Avram’s work at the LC started the profession’s proud tradition of sharing its metadata in electronic format.
  • Ruth Bavousett – Ruth has been involved in Koha for years and served as QA team member and translation manager. She is also one of the most courageous women I have the privilege of knowing.
  • Karen Coyle – Along with Diane Hillmann, I look to Karen for leadership in revamping our metadata practices.
  • Nicole Engard – Nicole has also been involved in Koha for years as documentation manager. Besides writing most of Koha’s manual, she is consistently helpful to new users.
  • Katrin Fischer – Katrin is Koha’s current QA manager, and has performed, and continues to perform, a very difficult job with grace and less thanks than she deserves.
  • Ruth Frasur – Ruth is director of the Hagerstown Jefferson Township Public Library in Indiana, which is a member of Evergreen Indiana. Ruth is one of the very few library administrators I know who not only understands open source, but actively contributes to some of the nitty-gritty work of keeping the software documented.
  • Diane Hillmann – Another leader in library metadata.
  • Kathy Lussier – As the Evergreen project coordinator at MassLNC, Kathy has helped to guide that consortium’s many development contributions to Evergreen.  As a participant in the project and a member of the Evergreen Oversight Board, Kathy has also supplied much-needed organizational help – and a fierce determination to help more women succeed in open source.
  • Liz Rea – Liz has been running Koha systems for years, writing patches, maintaining the project’s website, and injecting humor when most needed – a true jill of all trades.

However, there are unknowns that haunt me. Who has tried to contribute to Koha or Evergreen, only to be turned away by a knee-jerk “RTFM” or simply silence? Who might have been interested, only to rightly judge that they didn’t have time for the flak they’d get? Who never got a chance to go to a Code4Lib conference while her male colleague’s funding request got approved three years in a row?

What have we lost? How many lines of code, pages of documentation, hours of help have not gone into the tools that help us help our patrons?

The ideals of free and open source software projects are necessary, but they’re not sufficient to ensure equal access and participation.

The Ada Initiative can help. It was formed to support women in open technology and culture, and runs workshops, assists communities in setting up and enforcing codes of conduct, and promotes ensuring that women have access to positions of influence in open culture projects.

Why is the Ada Initiative’s work important to me? For many reasons, but I’ll mention three. First, because making sure that everybody who wants to work and play in the field of open technology has a real choice to do so is only fair. Second, because open source projects that are truly welcoming to women are much more likely to be welcoming to everybody – and happier, because of the effort spent on taking care of the community. Third, because I know that I don’t know everything – or all that much, really – and I need exposure to multiple points of view to be effective building tools for libraries.

Right now, folks in the library and archives communities are banding together to raise money for the Ada Initiative. I’ve donated, and I encourage others to do the same. Even better, several folks, including Bess Sadler, Andromeda Yelton, Chris Bourg, and Mark Matienzo, are providing matching donations up to a total of $5,120.

Go ahead, make a donation by clicking below, then come back. I’ll wait.

Money talks – but whether any given open source community is welcoming, both of new people and of new ideas, depends on its current members.

Therefore, I would also like to extend a challenge to men (including myself — accountability matters!) working in open source software projects in libraries. It’s a simple challenge, summarized in a few words: “listen, look, lift up, and learn.”

Listen. Listening is hard. A coder in a library open source project has to listen to other coders, to librarians, to users – and it is all too easy to ignore or dismiss approaches that are unfamiliar. It can be very difficult to learn that something you’ve poured a lot of effort into may not work well for librarians – and it can be even harder to hear that you are stepping on somebody’s toes or thoughtlessly stomping on their ideas.

What to do? Pay attention to how you communicate while handling bugs and project correspondence. Do you prioritize bugs filed by men? Do you have a subtle tendency to think to yourself, “oh, she’s just not seeing the obvious thing right in front of her!” if a woman asks a question on the mailing list about functionality she’s having trouble with? If so, make an effort to be even-handed.

Are you receiving criticism? Count to ten, let your hackles down, and try to look at it from your critic’s point of view.

Be careful about nitpicking.  Many a good idea has died after too much bikeshedding – and while that happens to everybody, I have a gut feeling that it’s more likely to happen if the idea is proposed by a woman.

Is a woman colleague confiding in you about concerns she has with community or workplace dynamics? Listen.

Look. Look around you — around your office, the forums, the IRC channels, and Stack Exchanges you frequent. Do you mostly see men who look like yourself?  If so, do what you can to broaden your perspective and your employer’s perspective. Do you have hiring authority? Do you participate in interview panels? You can help who surrounds you.

Remember that I’m talking about library technology here — even if 70% of the employees of the library you work for are women, if the systems department only employs men, you’re missing other points of view.

Do you have no hiring authority whatsoever? Look around the open source communities you participate in. Are there proportionally far more men participating openly than the gender ratio in librarianship as a whole?  If so, you can help change that by how you choose to participate in the community.

Lift up. This can take many forms. In some cases, you can help lift up women in library technology by getting out of the way – in other words, by removing or not supporting barriers to participation such as sexist language on the mailing list or by calling out exclusionary behavior by other men (or yourself!).

Sometimes, you can offer active assistance – but ask first! Perhaps a woman is ready to assume a project leadership role or is ready to grow into it. Encourage her – and be ready to support her publicly. Or perhaps you may have an opportunity to mentor a student – go for it, but know that mentoring is hard work.

But note — I’m not an authority on ways to support women in technology.  One of the things that the Ada Initiative does is run Ally Skills workshops that teach simple techniques for supporting women in the workplace and online.  In fact, if you’re coming to Atlanta this October for the DLF Forum, one is being offered there.

Learn. Something I’m still learning is just the sheer amount of crap that women in technology put up with. Have you ever gotten a death threat or a rape threat for something you said online about the software industry? If you’re a guy, probably not. If you’re Anita Sarkeesian or Quinn Norton, it’s a different story entirely.

If you’re thinking to yourself that “we’re librarians, not gamers, and nobody has ever gotten a death threat during a professional dispute with the possible exception of the MARC format” – that’s not good enough. Or if you think that no librarian has ever harassed another over gender – that’s simply not true. It doesn’t take a death threat to convince a woman that library technology is too hostile for her; a long string of micro-aggressions can suffice. Do you think that librarians are too progressive or simply too darn nice for harassment to be an issue? Read Ingrid Henny Abrams’ posts about the results of her survey on code of conduct violations at ALA.

This is why the Ada Initiative’s anti-harassment work is so important – and to learn more, including links to sample policies, a good starting point is their own conference policies page. (Which, by the way, was quite useful when the Evergreen Project adopted its own code of conduct). Another good starting point is the Geek Feminism wiki.

And, of course, you could do worse than to go to one of the ally skills workshops.

If you choose to take up the challenge, make a note to come back in a year and write down what you’ve learned, what you’ve listened to and seen, and how you’ve helped to lift up others. It doesn’t have to be public – though that would be nice – but the important thing is to be mindful.

Finally, don’t just take my word for it – remember that I’m not an authority on supporting women in technology. Listen to the women who are.

Update: other #libs4ada posts

DuraSpace News: REGISTER: ADVANCED DSPACE TRAINING

planet code4lib - Tue, 2014-09-16 00:00
Winchester, MA  In response to overwhelming community demand, we are happy to announce the dates for an in-person, 3-day Advanced DSpace Course in Austin October 22-24, 2014. The total cost of the course is being underwritten with generous support from the Texas Digital Library and DuraSpace. As a result, the registration fee for the course for DuraSpace Members is only $250 and $500 for Non-Members (meals and lodging not included). Seating will be limited to 20 participants.  

DuraSpace News: AVAILABLE: The April-June 2014 Quarterly Report from Fedora

planet code4lib - Tue, 2014-09-16 00:00

From The Fedora Steering Group

The Quarterly Report from Fedora
April-June 2014

Fedora Development - In the past quarter, the development team released one Alpha and three Beta releases of Fedora 4; detailed release notes are here:

Roy Tennant: I’m So Very Sorry

planet code4lib - Mon, 2014-09-15 21:56

Two different but very related things happened last week which brought my own fallibility into painful focus for me.

One is that I blogged in support of the work of the Ada Initiative. They do great work to advance women in open technology and culture. If you are not familiar with their work, then by all means go and find out.

The other is that I discovered I had acted badly in exactly the kind of situation where I should have known better. The wake-up call came in the form of a blog post where the writer was kind enough not to call me out by name. But I will. It was me. Go ahead, read it. I’ll wait.

This, from someone who had fancied himself a feminist. I mean, srlsy. To me this shows just how deeply these issues run.

I was wrong, for which I am now apologizing. But allow me to be more specific. What am I sorry about?

  • I’m sorry that I shoved my way into a conversation where I didn’t belong. 
  • I’m sorry that I was wrong in what I advocated.
  • I’m sorry that my privilege and reputation can be unwittingly used to silence someone else.
  • I’m sorry that ignorance of my innate privilege has tended to support ignorance of my bad behavior.

I can’t change the past, but I can change the future. My slowly growing awareness of the effects of my words and actions can only help reduce my harmful impacts, while hopefully reinforcing my positive actions.

Among the things that the Ada Initiative lists as ways that they are making a difference is this:

Asking men and influential community members to take responsibility for culture change.

I hear you, and I’m trying, as best as I can, to do this. It isn’t always quick, it isn’t always pretty, but it’s something. Until men stand up and own their own behavior and change it, things aren’t going to get better. I know this. I’m sorry for what I’ve done to perpetuate the problem, and I’m taking responsibility for my own actions, both in the past and in the future. Here’s hoping that the future is much brighter than the past.

 

Photo by butupa, Creative Commons License Attribution 2.0 Generic.

District Dispatch: Libraries, E-rate, and ALA featured at TPRC

planet code4lib - Mon, 2014-09-15 20:59

The scene at the 2014 Telecommunications Policy Research Conference. Photo by TPRC.

Last Friday, the American Library Association (ALA) made its first appearance (and through a whole panel, no less) at the Telecommunications Policy Research Conference (TPRC), the most prestigious conference in information policy. The panel tackled the telecommunications policy topic that, not surprisingly, has dominated our time for the past year: E-rate.

The panel “900 Questions: A Case Study of Multistakeholder Policy Advocacy through the E-rate Lens” was moderated by Larra Clark, director of the Program on Networks for ALA’s Office for Information Technology Policy (OITP). The panel featured Jon Peha, professor of Engineering and Public Policy at Carnegie Mellon University and former chief technologist of the Federal Communications Commission (FCC); and Tom Koutsky, chief policy counsel for Connected Nation and a former Attorney-Advisor at the FCC. Rounding out the panel were Marijke Visser, ALA’s own Empress of E-rate, and OITP Director Alan S. Inouye.

The panel served as a great opportunity for ALA to cohesively consider the extensive effort on the current proceeding that we’ve expended since June 2013. Of course, it was rather a challenge to pack it into 90 minutes!

Marijke Visser, Larra Clark, and Alan S. Inouye focused on the multiple key tradeoffs that arose in the past year. One example: supporting the FCC proposal that led to the first order, even though it focused on Wi-Fi—important, but not ALA’s top priority, which is broadband to libraries (and schools)—based on the promise of a second order focusing on broadband to the building. We worked hard to stand with our long-standing coalitions, while not in full agreement with some coalition positions. The panel also explored tensions around school versus library interests and the importance of both differentiation and collaboration; rural versus urban concerns; near-term versus long-term considerations; and the risks and rewards of creative disruption.

Tom Koutsky and Jon Peha provided context and analysis beyond the library lens. The E-rate proceeding emanated from a multi-year process that began with the National Broadband Plan and investments in the Broadband Technology Opportunities Program (BTOP). Koutsky and Peha illuminated the oft-hidden complexity behind advocate groups, who on the surface may seem to represent similar interests or organizations, but in fact engage in considerable conflict and compromise among themselves. They also discussed the challenges with new stakeholder entrants and their competing interests, both in the short run and long run.

This TPRC session is an important milestone for OITP. The Policy Revolution! Initiative is predicated upon reaching decision makers and influencers outside of the library community who affect critical public policies of interest to our community. Thus, increasing the ALA and library presence at key venues such as TPRC represents important progress for us as we continue to work through re-imagining and re-engineering national public policy advocacy. Also in the September-October timeframe, OITP representatives will present at the conferences of the International City/County Management Association (ICMA), NTCA—the Rural Broadband Association, and the National Association of Telecommunications Officers and Advisors (NATOA).

The E-rate saga continues: ALA will submit comments in the most recent round—due tonight (September 15th)—and will submit further comments in the weeks ahead, as well as continue our discussions with the commissioners and staff of the FCC and our key contacts on Capitol Hill.

The post Libraries, E-rate, and ALA featured at TPRC appeared first on District Dispatch.

Manage Metadata (Diane Hillmann and Jon Phipps): Who ya gonna call?

planet code4lib - Mon, 2014-09-15 19:31

Some of you have probably noted that we’ve been somewhat quiet recently, but as usual, it doesn’t mean nothing is going on, more that we’ve been too busy to come up for air to talk about it.

A few of you might have noticed a tweet from the PBCore folks on a conversation we had with them recently. There’s a fuller note on their blog, with links to other posts describing what they’ve been thinking about as they move forward on upgrading the vocabularies they already have in the OMR.

Shortly after that, a post from Bernard Vatant of the Linked Open Vocabularies project (LOV) came over the W3C discussion list for Linked Open Data. Bernard is a hero to those of us toiling in this vineyard, and LOV (lov.okfn.org/dataset/lov/) is one of the go-to places for those interested in what’s available in the vocabulary world and the relationships between those vocabularies. Bernard was criticizing the recent release of the DBpedia Ontology, having seen the announcement and, as is his habit, going in to try to add the new ontology to LOV. His gripes fell into a couple of important categories:

* the ontology namespace was dereferenceable, but what he found there was basically useless (his word)
* finding the ontology content itself required making a path via the documentation at another site to get to the goods
* the content was available as an archive that needed to be opened to get to the RDF
* there was no versioning available, thus no way to determine when and where changes were made

I was pretty stunned to see that a big, important ontology was released in that way – and so was Bernard, apparently, although since that release there has been a meeting of the minds, and the DBpedia Ontology is now resident in LOV. But as I read the post and its critique my mind harkened back to the conversation with PBCore. The issues Bernard brought up were exactly the ones we were discussing with them – how to manage a vocabulary, what tools were available to distribute the vocabulary to ensure easy re-use and understanding, the importance of versioning, providing documentation, etc.
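
Bernard's first complaint (a namespace that dereferences but returns nothing useful) is also the easiest to check for yourself: ask the namespace for RDF via content negotiation and see what comes back. A minimal sketch, assuming an environment with fetch and using a placeholder URI rather than any real ontology:

    // Ask a vocabulary namespace for RDF via content negotiation and report
    // what comes back. The URI below is a placeholder, not a real ontology.
    async function checkNamespace(uri) {
      const resp = await fetch(uri, {
        headers: { Accept: 'text/turtle, application/rdf+xml;q=0.9' },
        redirect: 'follow'
      });
      const type = resp.headers.get('content-type') || '';
      console.log(resp.status, type);
      // "dereferenceable" should mean an RDF serialization, not an HTML splash page
      return resp.ok && /turtle|rdf\+xml|ld\+json/.test(type);
    }

    // checkNamespace('http://example.org/ontology/').then(ok => console.log('RDF?', ok));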

These were all issues we’d been working hard on for RDA, and are still working on behind the RDA Registry. Clearly, there are a lot of folks out there looking for help figuring out how to provide useful access to their vocabularies and to maintain them properly. We’re exploring how we might do similar work for others (so ask us!).

Oh, and if you’re interested on our take on vocabulary versioning, take a look at our recent paper on the subject, presented at the IFLA satellite meeting on LOD in Paris last month.

I plan on posting more about that paper and its ideas later this week.
