Planet Code4Lib - http://planet.code4lib.org
I wanted to mess around with Google's new Custom Search Engine feature and in casting about for a list of URLs to feed it I thought I'd try the list of blogs at Planet Code4Lib. As it turns out, this might be a modestly useful search if you remember reading …
I have been a fan of Amber Case’s work for quite a while and was excited to see her talking at TEDWomen on cyborg anthropology. As we move towards a world where mobile is the norm in any urban environment, her conceptualization of what this means can open doors to interesting insights for librarians as we deal with this change, seeking the balance between space for reflection and silence (a traditional role for libraries) and time for information gathering and our online “second-selves”.
I was interested in how many users of our existing interfaces used a ‘saved records’ or ‘bookmark’ feature, to inform how much development time should be put into such for our Blacklight application.
In particular, I was interested in how many records the users who save records actually save, because unless you’re saving more than 20 or 30, a ‘tagging’ feature or a ‘search within my saved records’ feature isn’t all that useful.
So I went about trying to find out by looking directly at the underlying RDBMS for two systems:
1) HIP, our legacy in-production Horizon OPAC, which offers a ‘my list’ feature. This feature works in some odd ways I don’t entirely understand, and additionally I had to reverse engineer the Firebird RDBMS schema and may not entirely be understanding the data, but it seems decent. (Thanks RazorSQL for connecting to Firebird, woo!). The My List feature is in some ways kind of hard to use, but I know at least some people use it.
2) Xerxes, the front-end for Metalib used for federated searching of mostly article databases. The saved record interface is pretty polished, although it’s saving a different sort of thing from HIP, so not entirely comparable.
In both cases, I’m not actually sure how big the total number of possible users is (this is harder to find out in our enterprise environment than you might think), so I can’t express the number of users who have used the feature as a percentage of all users.

HIP, the OPAC
3611 users currently have a ‘My List’ existing. Actually a pretty decent number. [Some of these may be old lists for users no longer active, not easy to say. The oldest list in the system was created in 2003].
Oddly, around 100 of them don’t actually have any items in their ‘My List’; they probably had some at one time (causing ‘My List’ to be lazily created) but deleted them.
(Updated, 10 minutes later). While HIP has no tagging feature, it DOES theoretically allow the user to create more than one ‘my list’, which could be a similar way of dealing with lots of records. However, this feature is VERY confusing in HIP; not only can I never remember how to use it myself, but I think in some circumstances you can accidentally create a second ‘My List’ without meaning to (I have actually done that before), and in some cases your first list may ‘expire’ forcing you to create a second ‘my list’, so I’m not entirely sure what the number of users with more than one ‘my list’ tells us.
However, there are 457 users with more than one ‘my list’, or 12% of all users who have used ‘my list’ at all.

Xerxes (Metalib, Federated Search)
3787 users have at some time saved records associated with their account. Interestingly this is about the same number as in HIP. (On the one hand, Xerxes saved record feature is a lot easier to use; on the other hand I’m not sure if total usage of HIP is greater than Xerxes or not).
[Note to Xerxes developers -- remember to filter out usernames like 'local%' from such queries, to avoid counting temporary users, which may really be one of your logged-in users not yet logged in, just saving records to the session].
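A query of this kind might be sketched as follows, using an in-memory SQLite database as a stand-in for the real RDBMS; the table and column names here are my invention, not the actual Xerxes schema:

```python
import sqlite3

# In-memory stand-in for the saved-records table; the schema here
# is hypothetical, not the real Xerxes database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE saved_records (username TEXT, record_id INTEGER)")
conn.executemany(
    "INSERT INTO saved_records VALUES (?, ?)",
    [("alice", 1), ("alice", 2), ("bob", 3),
     ("local_a1b2", 4),   # temporary session user -- should not be counted
     ("local_c3d4", 5)],
)

# Count distinct users with saved records, excluding the 'local%'
# temporary-session usernames the note above warns about.
(count,) = conn.execute(
    "SELECT COUNT(DISTINCT username) FROM saved_records "
    "WHERE username NOT LIKE 'local%'"
).fetchone()
print(count)  # 2
```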
So interestingly, Xerxes users save somewhat more records than HIP users, although still a distinct minority save very many.
Xerxes already allows the use of tagging. Could that be encouraging more record saving? Not sure; it doesn’t seem like you NEED tags just to save 10-20 records, and even saving 10-20 is significantly more common in Xerxes than HIP. I think it’s just that Xerxes’ better interface encourages a bit more saving, but still mostly moderate.
1855 users have applied at least one tag in Xerxes. 48% of all users who have saved records, somewhat surprising. Again, Xerxes interface makes it very easy and quick to add tags. (Nice job David Walker).
I’m not sure that necessarily means tags are useful; many of those users have fewer than 10 saved records, so why do they need tags to distinguish them? Tagging may just be a habit from other applications, or because it’s so easy to do they just tried it out, or perhaps they erroneously believe that their tags will be shared publicly in a ‘social’ way. Or they don’t realize what they are doing when using the tagging interface. Or they have a use for tagging even with fewer than 10 saved records; I just don’t know what it is!
In Xerxes, which supports tagging through a very nice interface, a surprisingly high minority use the tagging feature.
However, in both HIP and Xerxes, not very many users save more than a couple dozen records — only a distinct minority. The existence of tagging in Xerxes does not lead to a significant number of users saving lots of records (only 10% of all record-saving users with more than 30, only 5% with more than 50).
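The threshold percentages above are simple to recompute once you have a per-user count of saved records. A minimal sketch, using made-up counts rather than the actual HIP/Xerxes data:

```python
# Hypothetical per-user saved-record counts (illustrative only,
# not the real HIP or Xerxes numbers).
saved_counts = [2, 5, 8, 12, 18, 25, 31, 34, 40, 55]

def share_over(counts, threshold):
    """Fraction of record-saving users with more than `threshold` saved records."""
    return sum(1 for c in counts if c > threshold) / len(counts)

print(f"> 30 records: {share_over(saved_counts, 30):.0%}")  # 40%
print(f"> 50 records: {share_over(saved_counts, 50):.0%}")  # 10%
```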
It is unclear to me what the purpose of tagging records is if you only have a couple dozen. My guess is that Xerxes interface makes tagging so easy that people just go ahead and do it even though it doesn’t in fact end up being particularly useful to them. However, it would be interesting to find some of those users and ask them why they tag.
Enough users use ‘saved records’ in both interfaces that I think we DO need a saved records feature in new interfaces, such as our Blacklight implementation.
However, barring more information on why Xerxes users tag, the numbers of saved records in both HIP and Xerxes lead me to suggest that we should not prioritize features which are really only necessary (so far as we know) if you have a large number of saved records, such as tagging and searching within your saved record collection.
Some users (a distinct minority, but not a vanishingly small one) do save enough records that more complicated organizational features may be necessary. We should serve these users by making sure our systems export as cleanly as possible to existing bibliographic management applications like EndNote, RefWorks, or Zotero, which offer more sophisticated organizational features than we are likely to, and to which we already need to support good export for other use cases as well.
Filed under: General
The following post is from Dirk Heine, Project Team Yourtopia.
Development economics has long recognised the deficiency of GDP as an indicator of human development, but with little reception in policy circles. Recently, however, the debate changed, and now no month passes without a high-level report on “Development beyond GDP”.
OKFN’s new Open Economics Group has now constructed an application to test two solutions to primary problems in this debate, and it is participating in the World Bank’s competition “Applications for Development“. We are writing to you to introduce the main ideas and also to ask for support since we are quite urgently looking for colleagues for certain programming tasks.
Measures of human progress beyond GDP either use so-called dashboards of indicators (e.g. WDI) or composite indices (e.g. HDI or MPI). An openness problem with the first approach has been that dashboards were so complex that the public was de facto excluded from the debate. The second approach tried to simplify by combining different dimensions into a single index, but then suffered from arbitrary assumptions about the weights applied to indices and the choice of proxies for different development dimensions. These problems were a significant setback, and so OKFN created Yourtopia.
Yourtopia is the first application that produces a composite index of human development (OpenHDI) without arbitrary choices of indicator weights or proxies.
We circumvent these problems simply: by letting the user participate. Rather than the researcher selecting proxies and indicator-weights we let the user choose. The resulting index of human progress is then personalised and contains no arbitrary assumptions by construction.
While the constructors of the HDI, for example, were always attacked for their assumption that human progress depends only on education, health and income, and that these each carry the same importance, we now let the user decide which dimensions of progress are important and how they compare to each other.
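The core idea reduces to a weighted average of normalised indicators, with the weights supplied by the user rather than fixed by the researcher. A minimal sketch, assuming indicators already normalised to a 0-1 scale (the dimensions and figures below are invented for illustration, not real country data):

```python
def personal_index(indicators, weights):
    """Composite development index: weighted average of indicators
    normalised to a 0-1 scale. Weights reflect the user's own
    priorities and need not sum to 1; they are normalised here."""
    total = sum(weights.values())
    return sum(indicators[dim] * w / total for dim, w in weights.items())

# Invented example values, normalised 0-1 (not real data).
indicators = {"health": 0.8, "education": 0.6, "income": 0.5}

# Equal weights reproduce an HDI-style index...
equal = personal_index(indicators, {"health": 1, "education": 1, "income": 1})
# ...while a user who prioritises health gets a personalised one.
health_first = personal_index(indicators, {"health": 3, "education": 1, "income": 1})

print(round(equal, 3))         # 0.633
print(round(health_first, 3))  # 0.7
```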
We urgently need more programming support and would very much appreciate your help. The application is currently participating in the World Bank’s competition “Applications for Development”, and there are only a few hours left to finalise some things. You can find more information here, and to participate, join the open economics list.
Sorry for the radio silence this week; I thought it might be a good idea to finish my syllabus for this spring’s digital-curation course, seeing as how class starts next week and all. It’s pretty much done, finally; I’m working on stuff in the course-management system now. I do intend to post the syllabus online [...]
A great opportunity to join us at the Center for History and New Media:
Do you get as excited about clean mark-up as you do about the latest Photoshop effect? Do you want to be on the cutting edge of web design and digital humanities, and design websites that inform and engage end users?
If so, the Center for History and New Media wants to hear from you.
CHNM, known for innovative work in digital media, is seeking an energetic, well-organized, and creative web designer with front-end development skills or experience to work on a variety of innovative, web-based history projects.
This position is particularly appropriate for someone with a combined interest in technology and history or humanities. The successful applicant will be able to create mockups and wireframes for historical, cultural, and educational websites and bring those ideas to fruition using the latest and highest web development standards.
We are looking for a combination of the following skills:
CHNM offers a casual, collaborative work environment, with excellent opportunities for professional growth and development.
This is a grant-funded, two-year position at the Center for History and New Media (http://chnm.gmu.edu), located in Fairfax, Virginia. CHNM is 15 miles from Washington, DC, and accessible by public transportation. Apply online (including resume, three references, links to prior web work, and a cover letter describing technology background and any interest in history) at http://jobs.gmu.edu for position #10376z. We will review applications as they arrive; the job closes on January 31, 2011.
If you have questions, contact us at email@example.com with subject line “Web Designer.”
Farkas, Meredith: Collaborative tech, virtual participation, and what is an “open meeting” anyways?
Let me say this first. I am not an expert in ALA or LITA (or even ACRL) bylaws regarding participation, open meetings, etc. I’m sure a lot of very experienced and awesome people like Jason Griffey, Aaron Dobbs and Cindi Trainor could speak to these issues from the standpoint of someone who is immersed in this world. I am speaking to these issues as someone who has neither the funding nor the inclination to attend both Midwinter and Annual (since those would likely be the only things I’d do all year), but still wants to contribute to her membership organization and is willing to put in the time and effort. I’m also speaking as someone who has dedicated her professional development work over the years to improving access to professional development opportunities for librarians who cannot physically attend conferences. In fact, I even got an award from LITA for my work in this area.
I first heard about the LITA Board shutting down Jason Griffey’s live stream of their meeting through Michelle Boule’s excellent post on the subject (so nice to see a post like this from you Michelle! You’ve been missed). Jason is not just some rabble-rouser who is trying to subvert authority; he’s an elected member of the LITA Board who has dedicated his time in LITA to making the organization more transparent and responsive to the needs of its members. He has had a part in creating most of the best new things to come out of LITA in the past 4 years. I’ve been to and participated in a number of events and meetings that Jason has streamed to make them accessible to people who were unable to attend and I think it’s wonderful that it extended the reach of and conversation about events at ALA Annual/LITA/Midwinter beyond the physical room. I do agree that Jason should have broached the subject of streaming the meeting with the other members of the LITA Board prior to the meeting, but I’d bet that he’d have been turned down and we’d never have heard about it. Maybe it was important for him to do this and be turned down publicly so that we’d know how open our “open meetings” really are.
What I really couldn’t understand was the argument that “we paid a consultant to talk to a Board, not hundreds of people.” First of all, that consultant was paid with money that came from our dues. Why we are any less deserving of access to that report is beyond me. Second of all, the LITA Board meeting was not “closed doors.” It was an open meeting — open to anyone attending ALA Midwinter, so the report couldn’t have had any confidentiality tied to it. There legally could have been hundreds of people in the room who weren’t even LITA members, and they would have been allowed to hear the report; members of the organization who could not attend physically were not. This doesn’t make sense to me other than that it’s the way they’ve done business since before these collaborative technologies existed.
While I do think these meetings should be streamed, I don’t think it should happen in the way that Jason has been doing things. I think this speaks to a bigger issue — that all of the efforts to make these LITA meetings and events more open have been spearheaded by individuals. That does not a sustainable project make. If Jason Griffey and other individuals like him suddenly couldn’t attend LITA, ALA and Midwinter, would we suddenly not have any more streaming? This sort of access should happen, but it should be a regular part of how LITA does business. But the way it is now is doomed to failure because it’s seen by most people as something extraneous, or even as “entertainment.” If LITA wants to be responsive to its membership, when fewer and fewer people can attend conferences but still have not lost their passion for contributing to the profession, then it needs to look at how it can accommodate participation and keeping-up from afar. Jason’s done a beautiful job of bringing these issues to the fore, but now it’s time to either make it a part of the way LITA does business or make it clear that this is not the way LITA does business.
Several years ago, I decided that I wanted to get more involved in ALA. I was asked to be on Jim Rettig’s Presidential Initiatives Committee and the ACRL Annual Conference Virtual Conference Committee, so I thought I’d do both. Working with the diverse and impressive group involved in making Jim’s presidency awesome was truly a pleasure, but it was the ACRL committee that really changed my view of participation in ALA (or at least in ACRL). I had always heard that virtual participants were never treated like full citizens on committees and it was one of the big reasons why I hadn’t previously wanted to get involved. With this committee, at least, that could not have been further from the truth. Around that time I was getting funded by ALA for my travel to Annual and Midwinter as I was covering the exhibit hall for American Libraries, so I was actually able to attend all of the meetings for my committee (until I got too pregnant to do so). However, there were other members of the committee who could only attend a few, one or none of the meetings. At every meeting I attended, we had webinar software set up and were able to have a hybrid virtual/physical meeting. This was more than just streaming what went on at the meeting — the people online were just as active participants as those physically in the room. We also met several times synchronously online to catch up, make decisions and conduct other business. It was nice to feel like I could still be helpful and involved when I was too pregnant to go anywhere. Heck, I was able to give a talk for the virtual conference when I was 9 months pregnant! That whole experience gave me new hope that I could make a real contribution to ACRL; that virtual participants didn’t have to be second class citizens.
I would have gotten more involved in ACRL immediately after my experience with the Virtual Conference Committee, but I had a baby a month after ACRL’s National Conference and have been just a tad bit busy with that bundle of energy and moxie since. Now that he’s nearly two, I’ve decided to volunteer with ACRL again and am eager to see what committees I end up on this time around. I hope that I’ll be able to participate through a mixture of virtual and physical participation, since I neither can afford to nor want to attend two ALA conferences each year. I hope that I’ll be given the opportunity to do good things for ACRL, because I’m certainly willing to put in the time and energy. And LITA? I decided to let my membership to LITA lapse. From what I’ve seen, I feel like that division is languishing and that those who want to innovate and make LITA more relevant and accessible are facing one brick wall after another. ACRL has responded in many ways to the needs of its membership (Cyber Zed Shed, OnPoint Chats, Virtual Institute, online classes, National Virtual Conference, etc.), making professional development experiences and participation more interesting and accessible to those who can’t physically attend conferences. I feel like I can find a home at ACRL, because I believe that the organization is moving in the right direction (they’re not there yet, but I believe they will be). I know there are a lot of really fantastic people working to make LITA better (take a look at the EParticipation Task Force Recommendations), but I get the sense that they are swimming against the tide.
ALA, LITA and ACRL are not organizations that embrace or are even structured for radical change, but I think the age that we are in (where people have less funding, more job stress, and more opportunities to participate in professional development, network and make professional contributions online) requires radical change to ensure the survival of the organizations. Enabling more people to participate virtually is not going to kill ALA. People do not just attend ALA and Midwinter because of committee responsibilities and to hear what a Board has to say. They also attend because there is still nothing that holds a candle to attending a conference, learning from someone standing in front of you, seeing old friends, and having long talks with like-minded librarians over sushi and beer. Offering more opportunities to benefit from and make contributions to the organization virtually will increase overall participation and will likely attract members who wouldn’t otherwise have joined because they didn’t feel like ALA/LITA/ACRL represented their needs.
But don’t just read my views on this. Here are some other interesting perspectives:
How Much Is Enough? at ACRLog
Towards the end of 2010, Wikileaks generated many headlines as it published information on the web, causing controversy and leading to talk about politicians hiding information from the public. Reporters and commentators expressed shock or admiration when telling the story of a rogue organisation making governmental information public. What has not been as mainstream is that for the past year or more, governments around the world have been doing something very similar themselves: publishing information online.
Big names like President Obama, Sir Tim Berners-Lee and the headliners at big events like the International Open Government Data Conference favour publishing public data for transparency and benefits to society. This all finally began to take off in 2010. Governments from around the world have been developing their public information strategies, with the launches of data.gov, data.gov.uk, and data.govt.nz.
This is all taking place at a time of economic restraint. Dr Martin Read from the UK Cabinet Office’s Efficiency Reform Board explained in a recent interview: “If you are going to improve the efficiency of something, making that change involves risk and innovation … If they get it wrong, they’re hauled up in front of a committee for interrogation.” (moderngov, November 2010) It may seem tricky to justify the expense of big projects like data.gov.uk, and there certainly seems to be a huge amount of pressure.
Nevertheless, governments are proving themselves committed to prioritising data publishing. Towards the end of last year, the UK Prime Minister announced that every item of governmental spending over £25,000 will be published online, and updated monthly. He emphasised the importance of this publication in terms of transparency, inviting the public to scrutinise the data. Interestingly, he also said: “This scrutiny will act as a powerful straightjacket on spending, saving us a lot of money.” So, not only is data publishing seen as a benefit to democracy, but also as a useful way to “flag up waste”.
While that press conference was taking place, developers and civil servants were gathered together elsewhere at the Open Government Data Camp (disclosure, Talis was a sponsor). At the event, much was made of the modelling and tools which have been developed with open data in mind: particularly the Linked Data API, which allows developers from just about any web background to work with data.gov.uk’s data very quickly. Visualisations demonstrated what can be done with well-structured data.
One of the things this high-level data publishing has done is raise the standard for what can be published and developed. Last year, we built a proof-of-concept app for the Department of Business Innovation and Skills (BIS) to illustrate the potential of applications of this data. A few minutes spent on DEFRA’s UK Climate Projections site shows what can happen when raw data is matched with a plan, and is designed with a citizen in mind. Anyone can check the primary source for their government’s climate policy, and it doesn’t take a climatologist to understand it. A little further development allows fully-fledged applications to be built that are instantly useful: one available on the front page of data.gov.uk lets me download an app that helps me plan my cycle route!
Open government data is probably good for transparency. But it’s also got plenty of potential to seed ideas that add value to this information. Innovators know that there are more people with better ideas outside our organisations than could possibly be inside them, so sharing means those ideas can be developed into products and services that benefit everyone. The web industry routinely works with open-source software that’s been at least partly built by others, and this open-source mentality might just be an incredibly useful piece in the public-sector machinery. Open business models work very well with ideas.
2011 promises to be the year when all this data gets put to use. I was recently invited to a press conference at which the Deputy Prime Minister confirmed the UK’s commitment to published data as a priority and even a recognised civil liberty. The story will shift to more local applications of big public data tools. January will see the publication of local authorities’ spending data, and public bodies will be looking to add value to this data, bringing the headlines of open data to life in the places we live.
With a bit of thought into how data is published in the first place, and a plan for encouraging people with good ideas to work with this information, this investment in data publishing could be more than just a tick-box exercise for a political transparency agenda. I hope that this year, it won’t be Wikileaks-level events that get people talking about open data publishing. We should notice it improving services we use, and see whole new applications for the bits and pieces of information that make up our public lives.
The video for Saturday’s interview with noted science fiction author Vernor Vinge is now available on the LITA Ustream Channel. The complete interview runs for about two hours and is available in part 1 and part 2.
The work of Vernor Vinge pushes information and technology to its incredible, but possible, conclusions. In A Fire Upon the Deep and A Deepness in the Sky, Vinge examines the concept of the technological singularity, a theoretical point where machine intelligence overtakes human intelligence, and does so in ways that play with information systems and processes. In Rainbows End, Vinge explores one potentially very real future for libraries in which we live in a world of complete information immersion. Jason Griffey interviews Vernor Vinge: futurist, author, thinker, and visionary. This program was recorded live on Saturday, January 8th 2011 at 1pm in the San Diego Convention Center. Sponsored by LITA’s Imagineering Interest Group.
The archived video of Top Tech Trends is now available on the LITA Ustream Channel.
The bi-annual gathering of library technology practitioners and thinkers, hosted by the Library and Information Technology Association, convened in San Diego to ponder topics ranging from WikiLeaks to Angry Birds, from cloud-based information systems to mobile services. The program was recorded live on January 9th, 2011 at 8am in the San Diego Convention Center Room 26 A/B.
The San Diego panelists include:
~Lorcan Dempsey, Vice President and Chief Strategist, OCLC
The program is moderated by Jason Vaughn, Director of Library Technologies at University of Nevada Las Vegas.
Due to network difficulties beyond our control, the recorded program joins the panelists already in progress. Our apologies for the few minutes that are lost. Enjoy the show!
OCLC Innovation Lab held a public demonstration of a project with the working title, “A Web Presence for Small Libraries.” It is a templated website that could serve as a library’s barest bones presence on the web. The target audience is small and/or rural libraries that may not have the technological infrastructure — human knowledge, equipment, and/or money — to host their own web presence. If it comes to fruition, the basic service would give a library four pages on the web that can be customized by the library staff plus dynamic areas of content that would be generated by OCLC algorithms and optionally placed on each library’s site. A more advanced version of the service could include a light-weight book inventory and circulation option.
They created a sample library called Loremville, TN public library to demonstrate key aspects of the service. I did not ask them how long that particular example will be around, so you may follow that link at a later date and not find it.
This “Library Website in a Box” is a concept that has been around for many years, and the latest trigger to try something was a resolution from the last OCLC Members Council to make scaled-down versions of OCLC services for “small and rural libraries.” The Innovation Lab group conducted some research about the existing state of public library web presences by sorting the IMLS-reported data from 2008 by number of volumes held and number of library staff. They looked at the lowest quartile (roughly 20,000 volumes or less and staff size in the low single digits) and found that there was generally no web presence for these libraries. In the second quartile there were instances of library websites, but they did the library no credit (outdated, poorly constructed, incomplete information — as was said at the meeting, these libraries had a presence on the web but probably shouldn’t). Some had automation systems supplied by large groups, but others didn’t have evidence of an automation system. So the project charter was to find an easy and inexpensive way for a library in these quartiles to create a desktop and mobile device web presence.
One of the unique aspects of the project development was to first set the approximate lower ($5/month) and upper ($40/month) price boundaries and find a way to provide the highest level of service possible at those price points. The Innovation Lab team tried techniques such as a farm of WordPress sites but found they couldn’t make the revenue-versus-cost equation work. In the end they constructed a custom database-driven content system in PHP. Institutional data is initially pulled from sources such as the WorldCat Registry, and there will be some process for a library to “claim” its site. There might also be a way for a library to create a site for itself if no registry data yet exists.
There are four levels of site authorization: public (unauthenticated) viewing, registered patron, staff member, and administrator. Content on the pages is edited by the administrator at the subscribing library with a WYSIWYG editor. There are content boxes for the library’s location, staff/volunteers listing, events calendar and news, hours and phone number, policies, and service information. The library’s address is fed into a Google Maps service to display a map of the area surrounding the library.
The dynamic parts of each library’s website could have a list of books from various sources like the New York Times and Oprah’s book group. The service also envisions offering a “default digital collection” using public domain works in text, PDF, and mobile reading device formats from sources such as Project Gutenberg and the Internet Archive.
The inventory and circulation module is simple and straightforward. Each item has only eight full-text fields, with the intention that the description will likely be done by a volunteer without professional library training. Cataloging can be done by typing in the information or scanning the ISBN with an app on a mobile device; item information is pulled from WorldCat if found. The cataloging application does not attach holdings to WorldCat, but the OCLC number is kept and might be used to facilitate exporting MARC records in cases where a library outgrows this simple circulation module and moves to a more functional integrated library system. The circulation functions are check-in, check-out, renew, place hold, and cancel hold. There are no financial functions in the system.
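A stripped-down circulation module along these lines can be sketched in a few dozen lines. This is purely illustrative: the field names and logic below are my guesses at what such a module might contain, not OCLC's actual design:

```python
# Hypothetical sketch of a minimal inventory/circulation record;
# field names are invented, not OCLC's actual design.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Item:
    title: str
    author: str = ""
    isbn: str = ""
    oclc_number: str = ""   # kept to ease a later MARC export/migration
    checked_out_to: Optional[str] = None
    holds: list = field(default_factory=list)

    def check_out(self, patron: str) -> None:
        if self.checked_out_to is not None:
            raise ValueError("item already checked out")
        self.checked_out_to = patron

    def check_in(self) -> None:
        self.checked_out_to = None

    def place_hold(self, patron: str) -> None:
        if patron not in self.holds:
            self.holds.append(patron)

    def cancel_hold(self, patron: str) -> None:
        if patron in self.holds:
            self.holds.remove(patron)

item = Item(title="Walden", author="Thoreau", isbn="9780691096124")
item.check_out("patron42")
item.place_hold("patron7")
item.check_in()
print(item.checked_out_to, item.holds)  # None ['patron7']
```

Renewal could be modeled as a check-in followed by a check-out to the same patron; notably absent, matching the description above, is any financial (fines/fees) handling.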
At these price levels, the system needs to be highly automated and self-supporting. The cost to OCLC of one call to a customer support phone number could easily run through all the revenue OCLC would receive from the subscribing library in a year. A widely adopted implementation at the targeted price points means that OCLC could dedicate one or two technical staff to support and upgrade the system in addition to the hardware and support service amortization.
One unresolved issue is domain names — what is the URL that will be used for each library’s site. OCLC is investigating options such as partnering with a domain registrar company (someone like GoDaddy), becoming a domain registrar themselves, or putting all the sites under one domain. The economics of each of these options will be a factor. The sometimes cumbersome aspects of migrating domain names from one service to another may make that activity cost-prohibitive as well.
Mike Teets noted that this was at a “project” stage, not a “product” stage. My paraphrasing of what this distinction means is that the technology to create a product is largely done, but the decisions on the final formative pieces of the technology and the surrounding support infrastructure are not yet made (and might never be). There isn’t even a formal name for it yet; it is being called “A Web Presence for Small Libraries.” One desired additional feature is to add e-mail boxes for library staff/functions to the site. Those in the room, including me but more importantly others more closely aligned with the target “small and rural” public library population, were excited about it and wanted to talk further about whether what we saw could be made a reality.
Personal Impressions
Like others in the room, I came away impressed by the project demonstration. It definitely fits the bill as a basic library website and even a starter inventory and circulation management system. Libraries could start with something like this for the cost of a couple of books added to the collection (roughly estimated at $60/year). At this price level, OCLC thinks it could sustain the costs of operations and still have some left over for investment in incremental improvements. I also think that because a library would pay a minimal fee for it, it would feel a tangible sense of ownership over the site and would keep it up to date. From this a library could “graduate” to another service: its own Drupal or WordPress site, a shared ILS, or Webscale Management Services.
The content areas seemed the most appropriate for the target audience. The web page design is modern, and I could see options for future enhancements as time and revenue permit, such as limited options to personalize the template (changing colors, adding pictures, or adding links to pictures stored on services such as Flickr).
One attendee at the session suggested that rather than prepopulating the digital content library with all available public domain content, OCLC limit the set to the most-downloaded titles so as not to overwhelm the user with a lot of unused (and/or unusable) digital content. That sounds like a good suggestion to me.
OCLC Staff are looking for feedback on this project. They say that the system is “production ready” with all the software controls and data recovery features of OCLC behind it. What they think is missing is community support to have local engagement with the targeted libraries to show them what is possible. That is an area where OCLC needs help. (I can only imagine the shocked silence of a volunteer at a small library to get a call from OCLC — if they even knew what OCLC was — with an offer to create a website for the library. “Only $5/month…sign up now and we’ll throw in a second one for free!”) There is an e-mail address — firstname.lastname@example.org — that goes to all the OCLC Innovation Lab members, and a WebJunction group for public discussion.
About the Innovation Lab
For this project, the Innovation Lab sought out cooperation from OCLC staff to build prototypes and components of the service in their spare time. Each Monday the self-selected group would get together to show what had been built and discuss ways to move the project forward in the following week. In this way they rapidly iterated over ideas to come up with what was ultimately proposed.
The text was modified to update a link from http://expreimental.worldcat.org/ to http://experimental.worldcat.org/ on January 12th, 2011.
The text was modified to update a link from http://www.webjunction.org/923 to http://www.webjunction.org/923 on January 12th, 2011.
Footnotes
I really enjoyed being on the Kojo Nnamdi Show today talking about digital humanities for an hour with Kojo, the NEH‘s Brett Bobley, and UVA‘s Bill Ferster. Kojo’s show is produced at Washington’s NPR station, WAMU, and syndicated nationally. It’s also available as an audio stream and a podcast.
Having done podcasts for four years now, I’ve come to understand how difficult it is to do a radio show—to ask the right questions, to not um and er a lot, and to stimulate informative conversation. Kojo really makes it look easy, which is even more impressive given the wide variety of topics he covers. As I left the studio today he immediately prepped to do a show on Eisenhower and the military-industrial complex.
Brett, Bill, and I talked about how to define digital humanities, the use of text mining, visualization, and digital mapping, problems associated with the abundant digital record, collaboration in the digital humanities, and questions of publishing, open access, and tenure. We also took numerous questions from callers. I thought the show had a good vibe.
So, worth a listen: The Kojo Nnamdi Show: “History Meets High-Tech: Digital Humanities”
Today was also a moment to reflect on the fact that the last time I was on the Kojo Nnamdi Show was exactly five years ago, with Roy Rosenzweig. Our book Digital History had just come out. It was just before Roy got sick. I probably said a lot on the broadcast today that Roy would have said.
I thought it was time to revive the Tuesday Tech Links on my blog, and since it is the New Year I’ve decided to focus on technologies that allow us to work smarter and give us the extra time needed to achieve some of those non-work-related resolutions, or at least time to talk about them.
1) Dropbox is a free web-based service that synchronizes your files automatically. When you sign up, Dropbox gives you 2 GB of secure storage that can be shared between computers. This “access anywhere” folder saves you from all those “I’ll email it to myself” moments. It includes automatic backup of your files, allows you to restore previous versions, and offers 30 days of undo history so you don’t have to recreate a document because you were too hasty with the delete button. It is available for Windows, Mac, and Linux, and also has a mobile app (for iPhone, iPad, BlackBerry, and Android). You can get more space by participating in their incentive-based program or by paying a monthly fee.
2) RescueTime is for all those people who can’t figure out where their time vanishes to online. It is a service that installs software to track every task you perform while on your computer and returns a detailed report of your activity, allowing you to manage your time effectively. You can, for example, voluntarily block distracting sites for any period you wish, and even track your time offline when you are taking a call or in a meeting. RescueTime offers a free bare-bones version, or you can pay a monthly fee for the full suite of tools. Download the 14-day free trial and find out how you really spend your time.
3) Instapaper is a free service that is great for catching up on all those articles that you find online, bookmark, but never get back to. It allows you to save online articles and view them in a text-only, reformatted version that can be read offline. It is ideal for making effective use of those free moments during the day, when you are waiting for a bus or grabbing a quick coffee, by reading an article or two on your mobile.
Wildcard) Real paper won’t be going anywhere soon. While it is often associated with inefficiency, Scott Belsky, author of the book Making Ideas Happen, has some interesting insights into the positive role that “analog rituals”, such as physically writing to-do lists, can have on productivity:
The manual labor involved with productivity is valuable. Repetitive rituals will make you pause. You will feel burdened, but you will also catch a glimpse of just how busy you are and what you should prioritize.
In light of this, at the beginning of the year I began an experiment involving a Moleskine and a very lightweight version of GTD. I’ll let you know later in the year how it goes.
So, reorganize your computer storage, track your internet usage, bookmark those articles you always wanted to read, make your to-do lists, and let’s see what we can get done.
As you may well have heard, in December there were rumours that the Delicious social bookmarking service may be discontinued. This has caused a flurry of activity in the online world to back up bookmarks and to look for alternative similar services.
Hence we’ve started a page for a new social bookmarking web service which would be 100% open, so that anyone could use the code or the data for any purpose:
Would you like to work on this? Would you support it? Do you think it is a good idea? Do you know of similar initiatives that are out there already or underway? Got a better idea? If you have any ideas, suggestions or links, or if you’d like to volunteer to help build such a service, we’d be grateful if you could drop a note on the idea page via the link above!
Talis is delighted to be one of the sponsors of the 8th European Summer School on Ontological Engineering and the Semantic Web (SSSW 2011). There will be more about this in coming posts, but just to start off:
We are sponsoring it for a very simple reason. The mix of theoretical, practical and collaboration skills used by all the students involved from across Europe directly corresponds to how we work at Talis. It’s an environment of support and challenge, contribution and connection that has proved beneficial for all involved over the years. Talis is proud to contribute and participate to further the aims of the community.
Talis is a small and ambitious company of like-minded, motivated people. A phrase we often use here is “Human Scale”. Culturally, what we mean by that is that we like working closely with people we all know, whether as employees of Talis or (more likely) collaborating over time as partners in joint endeavours.
We want to grow our company and contribute to the communities we belong to. We know that it is by fostering relationships with others driven by the same passion to collaborate and learn that we can build on the ambitions we have for ourselves and for the communities we belong to. One particular aspect of the Summer School is this same notion of social connectedness, a personal network of trusted relationships that challenge and enhance the experience for everyone.
If you use Ibis Reader, you will have seen the “Get Books” link. This allows you to view OPDS catalogs (lists of web-accessible ebooks). The Feedbooks catalogs are pre-installed, and some of you may have set up a Calibre/Dropbox OPDS catalog of your own library.
On a whim, I used the “Add Your Own Catalog” link to add the WebScription Stanza link,
It worked! I can access the Baen Free Library directly from Ibis Reader! In “Get Books” I click “Baen” (or whatever you named the OPDS link), then “Baen Free Library”, then the “Read” button next to the book I want to add to my Ibis Reader library. Yay!
Edited the above URL to go straight to the free books, bypassing the top-level catalog.
The Stanza catalog is an earlier form of OPDS, so some features like cover images won’t work in Ibis or other OPDS readers. If Baen updates their catalog to OPDS 1.0 then those features will be enabled, and we’d definitely consider adding it as a built-in catalog like the Feedbooks ones.
Please share any other OPDS catalogs that you’d like to see added (or at least listed as optional catalogs users can add themselves).
Add the Open Publication Distribution System (OPDS) to the slew of metadata acronyms to be aware of. Based on the widely implemented Atom Syndication Format, OPDS Catalogs have been developed since 2009 by a group of ebook developers, publishers, librarians, and booksellers interested in providing a lightweight, simple, and easy-to-use format for catalogs of digital books, magazines, and other content. How this compares to OAI-PMH I'll have to investigate. When would one or the other be most appropriate? What tools are there to create and use it?
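As a first answer to the "what tools" question: since OPDS catalogs are ordinary Atom feeds, the XML tools already in most languages can read them. Below is a minimal sketch in Python using only the standard library; the sample feed and its book entry are invented for illustration, but the Atom namespace and the `http://opds-spec.org/acquisition` link relation are the real identifiers OPDS uses.

```python
# Sketch: listing acquirable books from an OPDS catalog (an Atom feed).
# The SAMPLE_CATALOG below is a made-up example, not a real catalog.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace

SAMPLE_CATALOG = """<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Free Library</title>
  <entry>
    <title>An Example Novel</title>
    <link rel="http://opds-spec.org/acquisition"
          type="application/epub+zip"
          href="/books/example-novel.epub"/>
  </entry>
</feed>"""

def list_books(catalog_xml):
    """Return (title, href) pairs for each entry's acquisition links."""
    root = ET.fromstring(catalog_xml)
    books = []
    for entry in root.findall(ATOM + "entry"):
        title = entry.findtext(ATOM + "title")
        for link in entry.findall(ATOM + "link"):
            # OPDS marks downloadable content with acquisition link relations
            if link.get("rel", "").startswith("http://opds-spec.org/acquisition"):
                books.append((title, link.get("href")))
    return books

print(list_books(SAMPLE_CATALOG))
# prints [('An Example Novel', '/books/example-novel.epub')]
```

In a real client you would fetch the catalog URL over HTTP and follow `subsection`/navigation links recursively, but the core parsing is no more than this, which is part of OPDS's appeal compared to a full OAI-PMH harvester.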