
Feed aggregator

Islandora: iCampFL: Instructors Announced

planet code4lib - Mon, 2016-01-18 17:46

Islandora Camp is going to Fort Myers, FL from May 4 - 6. We'll be holding our traditional three day camp, with two days of sessions sandwiching a day of hands-on training from experienced Islandora instructors. We are very pleased to announce that those instructors will be:

Developers:

Nick Ruest is the Digital Assets Librarian at York University, a cornerstone of the Islandora community, and one of its most experienced instructors, with five camps and the Islandora Conference under his belt. Nick has also been Release Manager for four Islandora releases, is the author of two solutions packs and several tools, and is Project Director of the Islandora CLAW project. 

Diego Pino is an experienced Islandora developer and an official Committer. Although this is his inaugural Islandora Camp as an instructor, he has been helping folks learn how to get the most out of Islandora on our community listserv since he joined up. Diego started with Islandora in the context of handling biodiversity data for REUNA Chile and has transitioned over to develop and support the many Islandora sites of the Metropolitan New York Library Council.

Administrators:

Melissa Anez has been working with Islandora since 2012 and has been the Community and Project Manager of the Islandora Foundation since it was founded in 2013. She has been a frequent instructor in the Admin Track and developed much of the curriculum, refining it with each new Camp.

Melissa VandeBurgt is the Head of Archives, Special Collections, and Digital Initiatives at Florida Gulf Coast University. She cut her teeth as an instructor during the Islandora Conference, co-leading a workshop on Building Collections.

Sound like a team you'd like to hear from? Registration is open, with an Early Bird discount until February 15th. You could win a free registration if you design a t-shirt for the camp. You can also submit a proposal to do a session of your own on Day One or Day Three.

David Rosenthal: Bitcoin's Death Spiral

planet code4lib - Mon, 2016-01-18 16:00
More than two years ago in my first post on Bitcoin I wrote about the difficulty of maintaining its decentralized nature. Nearly a year later I wrote Economies of Scale in Peer-to-Peer Networks, a detailed explanation of why peer-to-peer currencies could not maintain decentralization for long. In a long and fascinating post Mike Hearn, one of the original developers of the Bitcoin software, has now announced that The resolution of the Bitcoin experiment is that it has failed.

The fundamental reasons for the failure are lack of decentralization at both the organizational and technical levels. You have to read Mike's post to understand the organizational issues, which would probably have doomed Bitcoin irrespective of the technical issues. They prevented Bitcoin responding to the need to increase the block size. But the block size is a minor technical issue compared to the fact that:
the block chain is controlled by Chinese miners, just two of whom control more than 50% of the hash power. At a recent conference over 95% of hashing power was controlled by a handful of guys sitting on a single stage.

As Mike says:

Even if a new team was built to replace Bitcoin Core, the problem of mining power being concentrated behind the Great Firewall would remain. Bitcoin has no future whilst it’s controlled by fewer than 10 people. And there’s no solution in sight for this problem: nobody even has any suggestions. For a community that has always worried about the block chain being taken over by an oppressive government, it is a rich irony.

Mike's post is a must-read. But reading it doesn't explain why "nobody even has any suggestions". For that you need to read Economies of Scale in Peer-to-Peer Networks.

A summary of this post made it to Dave Farber's IP list, drawing a response from Tony Lauck, which you should read. Tony argues that Bitcoin is not a peer-to-peer system. In strict terms I would agree with him, but my argument does not depend on its being a strict P2P system. I would also point out two flaws in what Tony says here:
Decentralized control of the network depends on the rational behavior of the owners of the hashing power, but this is not concentrated for protocol reasons, rather it is an historical artifact of the evolution of the network due to ASIC supply chain issues and geography (low cost electricity and cold climates for inexpensive cooling). The highly concentrated operators of mining pools serve as representatives of the hash power, who can switch to other pools in less than a minute if they believe the operators are misbehaving.

First, I have never argued that the failure of decentralization was for "protocol reasons". It is for economic reasons, namely the inevitable economies of scale. Tony in effect agrees with me when he assigns "ASIC supply chain issues and geography (low cost electricity and cold climates for inexpensive cooling)" as the cause. Economies of Scale in Peer-to-Peer Networks addresses both low costs:

If there is even one participant whose rewards outpace their costs, Brian Arthur's analysis shows they will end up dominating the network.

and supply chain issues:

Early availability of new technology acts to reduce the costs of the larger participants, amplifying their economies of scale. This effect must be very significant in Bitcoin mining, as Butterfly Labs noticed.

Second, while in principle the "owners of the hashing power" can switch pools, it is a fact that the mining power has been controlled by a small number of pools, each much larger than needed to provide stable income to miners, for a long time. Mike Hearn's argument that "Bitcoin has no future whilst it’s controlled by fewer than 10 people." is sound at both the technical and organizational levels.

LibUX: Only 75% of Millennials Have Broadband

planet code4lib - Mon, 2016-01-18 14:19

A new Pew report identifies a decline in in-home broadband connections among lower- to middle-income, rural, and minority households compared with just two years ago. Even millennials, who are part borg, aren’t as tethered as we might assume.

This isn’t to say millennials aren’t plugged in: “80% of American adults have either a smartphone or a home broadband connection,” an increasing number of which are mobile-only, particularly where you see broadband adoption declining.

The increase in the “smartphone-only” phenomenon largely corresponds to the decrease in home broadband adoption over this period. The rise in “smartphone-only” adults is especially pronounced among low-income households (those whose annual incomes are $20,000 or less) and rural adults. African Americans, who saw a marked decline in home broadband adoption, also exhibited a sharp increase in “smartphone-only” adoption (from 10% to 19%), as did parents with school-age children (from 10% in 2013 to 17% in 2015).
(John B. Horrigan and Maeve Duggan, Pew)

I suspect we have seen in-home broadband adoption peak. The implication is that while more people have access to the web, the speed and quality of that access are diminished. Unfortunately, smartphone-only plans are often roped in by data caps, meaning that just as metrics suffer from services that neglect mobile performance, increased page weight has a literal cost to users.

Home broadband adoption: Modest decline from 2013 to 2015

This post is part of a nascent library UX data collection we hope you can use as a reference to make smart decisions. If you’re interested in more of the same, follow @libuxdata on Twitter, or continue the conversation on our Facebook group. You might also think about signing up for the Web for Libraries newsletter.



LITA: Express Your Shelf

planet code4lib - Mon, 2016-01-18 14:00

This won’t be the first time I ever admit this, nor will it be the last, but boy am I out of touch.

I’m more than familiar with the term “selfie”, which is when you take a photo of yourself. Heck, my profile pictures on Facebook, Twitter, and even here on LITA Blog are selfies. As much as I try to put myself above the selfie fray, I find myself smack in the middle of it. (I vehemently refuse to get a selfie stick, though. Just…no.)

But I’d never heard of this “shelfie” phenomenon. Well, I have, but apparently there’s more than one definition. I had to go to Urban Dictionary, that proving ground for my “get off my yard”-ness, to learn it’s a picture of your bookshelf, apparently coined by author Rick Riordan. But I was under the impression that a shelfie is where you take a picture of yourself with a book over your face. Like so:

Promo poster for bookstore Mint Vinetu

But apparently that’s called “book face”, so I’m still wrong.

Also, I just found out there’s an app called Shelfie, which lets you take a picture of your bookshelf and matches your books with free or low-cost digital versions (an e-ternative, if you will).

All along, you see, I thought a shelfie was when you took a picture of yourself with your favorite book in front of your bookshelf (because selfie + shelf = shelfie?), but it’s just of your bookshelf, not you. Apparently I’m vainer than I thought.

Here’s my version of a shelfie:

I never could get the hang of Thursdays.

Regardless, it’s a cool idea to share our books with our friends, to find out what each other is reading, or just to show off how cool our bookshelves look (and believe me, I’m jealous of a few of you). There are other ways to be social about your books – Goodreads and Library Thing come to mind – but this is a unique way to do it if you don’t use either one.

What does your shelfie look like?

State Library of Denmark: Faster grouping, take 1

planet code4lib - Mon, 2016-01-18 13:13

A failed attempt of speeding up grouping in Solr, with an idea for next attempt.

Grouping at a Statsbiblioteket project

We have 100M+ articles from 10M+ pages belonging to 700K editions of 170 newspapers in a single Solr shard. It can be accessed at Mediestream. If you speak Danish, try searching for “strudsehest”. Searches are at the article level, with the results sorted by score and grouped by edition, with a maximum of 10 articles / edition. Something like this:

q=strudsehest&group=true&group.field=edition&group.limit=10

This works well for most searches. But for the heavier ones, response times creep into seconds, sometimes exceeding the 10 second timeout we use. Not good. So what happens in a grouped search that is sorted by document score?

  1. The hits are calculated
  2. A priority queue is used to find the top-X groups with the highest scores
    1. For each hit, calculate its score
    2. If the score is > the lowest score in the queue, resolve the group value and update the priority queue
  3. For each of the top-X groups, a priority queue is created and filled with document IDs
    1. For each hit, calculate its score and resolve its group value (a BytesRef)
    2. If the group value matched one of the top-X groups, update that group’s queue
      1. Updating the queue might involve resolving multiple field values for the document, depending on in-group sorting
  4. Iterate the top-X groups and resolve the full documents

Observation 1: Hits are iterated twice. This is hard to avoid if we need more than 1 entry in each group. An alternative would be to keep track of all groups until all the hits have been iterated, but this would be extremely memory costly with high-cardinality fields.

Observation 2: In step 3.1, score and group resolving is performed for all hits. It is possible to use the same logic as step 2.1, where the group is only resolved if the score is competitive.
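
In code the idea is tiny. A minimal sketch of the delayed resolving (hits, scoreOf, isCompetitive, groupValueOf and updateGroup are invented names for this post, not Solr's actual collector API):

    // Second pass over the hits, with delayed group value resolving.
    // All helper names here are hypothetical; only the ordering matters.
    for (int docID : hits) {
        float score = scoreOf(docID);             // the score is always needed
        if (isCompetitive(score)) {               // cheap test first...
            BytesRef group = groupValueOf(docID); // ...expensive lookup only when it can matter
            updateGroup(group, docID, score);
        }
    }

The point is simply the ordering: the cheap score test guards the expensive group value lookup, mirroring what step 2.1 already does.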

Attempt 1: Delayed group resolving

The idea in observation 2 has been implemented as a kludge-hacky-proof-of-concept. Code is available at the group_4_10 branch at GitHub for those who like hurt.

When the hits are iterated the second time, all scores are resolved but only the group values for the documents with competitive scores are resolved. So how well does it work?

Lazy group value resolving for Solr

Observation: Optimized (aka lazy group value resolving) grouping is a bit slower than vanilla Solr grouping for some result sets, probably the ones where most of the group values have to be resolved. For other result sets there is a clear win.

It should be possible to optimize a bit more and bring the overhead of the worst-case optimized groupings down to near-zero. However, since there are so few best-case result sets and since the win is just about a third of the response time, I do not see this optimization attempt as being worth the effort.

Idea: A new level of lazy

Going back to the algorithm for grouping we can see that “resolving the value” occurs multiple times. But what does it mean?

With DocValued terms, this is really a two-step process: The DocValue ordinal is requested for a given docID (blazingly fast) and the ordinal is used to retrieve the term (fast) in the form of a BytesRef. You already know where this is going, don’t you?

Millions of “fast” lookups accumulate to slow, and we don’t really need the terms as such. At least not before we have to deliver the final result to the user. What we need is a unique identifier for each group value and the ordinal is exactly that.

But wait. Ordinals are not comparable across segments! We need to map the segment ordinals to a global structure. Luckily this is exactly what happens when doing faceting with facet.method=fc, so we can just scrounge the code from there.
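
Schematically, the per-document work then shrinks to a couple of integer lookups. In the sketch below, ordinalMap and its globalOrd method stand in for the segment-to-global mapping the facet code builds (Lucene's MultiDocValues.OrdinalMap); the exact API differs between Lucene versions, so those names are illustrative only, while getOrd is the real per-segment SortedDocValues call:

    // segmentValues is the group field's SortedDocValues for the current segment.
    int segmentOrd = segmentValues.getOrd(docID);                    // blazingly fast
    long globalOrd = ordinalMap.globalOrd(segmentIndex, segmentOrd); // comparable across segments
    // Only for the handful of winning groups, and only at the very end:
    // BytesRef term = lookupTerm(globalOrd);

Nothing but that final lookup ever has to touch a BytesRef.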

With this in mind, the algorithm becomes

  1. The hits are calculated
  2. A priority queue is used to find the top-X groups with the highest scores
    1. For each hit, calculate its score
    2. If the score is > the lowest score in the queue, resolve the group value ordinal and update the priority queue
  3. For each of the top-X groups, a priority queue is created and filled with document IDs
    1. For each hit, resolve its group value segment-ordinal and convert that to global ordinal
    2. If the group value ordinal matches one of the top-X groups, update that group’s queue
      1. Updating the queue might involve resolving the document score or resolving multiple field value ordinals for the document, depending on in-group sorting
  4. Iterate the top-X groups and resolve the Terms from the group value ordinals as well as the full documents

Note how the logic is reversed for step 3.1, prioritizing value ordinal resolving over score calculation. Experience from the facet code suggests that ordinal lookup is faster than score calculation.
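
A sketch of that reversed second pass, again with invented helper names (topGroupQueues would map the global ordinals of the top-X groups to their per-group queues, for example via a long-keyed hash map):

    // Second pass of the proposed algorithm: ordinal first, score only if needed.
    for (int docID : hits) {
        int segmentOrd = segmentValues.getOrd(docID);
        long globalOrd = ordinalMap.globalOrd(segmentIndex, segmentOrd); // see the sketch above
        GroupQueue queue = topGroupQueues.get(globalOrd); // cheap membership test on a long key
        if (queue != null) {                              // the hit belongs to a top-X group
            queue.insert(docID, scoreOf(docID));          // only now pay for the score
        }
    }

If the in-group sort is not by score, scoreOf would be replaced by whatever sort value ordinals the queue needs, as the algorithm above notes.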

This idea has not been implemented yet. Hopefully it will be done Real Soon Now, but no promises.


Ariadne Magazine: Editorial: Happy 20th Birthday Ariadne!

planet code4lib - Sun, 2016-01-17 22:26

Ariadne hits its 20th birthday, and its 75th issue.

Back in 1994 the UK Electronic Libraries Programme (eLib) was set up by the JISC, paid for by the UK's funding councils. One of the many projects funded by eLib was an experimental magazine that could help document the changes under way and give the researchers working on eLib projects a means to communicate with one another and their user communities. That magazine was called Ariadne. Originally produced in both print and web versions, it outlived the project that gave birth to it. We are now at the point where we can celebrate 20 years of the web version of Ariadne. Read more about Editorial: Happy 20th Birthday Ariadne!

Issue number: 75. Date published: Sun, 01/17/2016. http://www.lboro.ac.uk/issue75/editorial

Karen Coyle: Sub-types in FRBR

planet code4lib - Sun, 2016-01-17 19:25
One of the issues that plagues FRBR is the rigidity of the definitions of work, expression, and manifestation, and the "one size fits all" nature of these categories. We've seen comments (see from p. 22) from folks in the non-book community that the definitions of these entities are overly "bookish" and that some non-book materials may need a different definition of some of them. One solution to this problem would be to move from the entity-relation model, which does tend to be strict and inflexible, to an object-oriented model. In an object-oriented (OO) model one creates general types with more specific subtypes, which allows the model both to extend as needed and to accommodate specifics that apply to only some members of the overall type or class. Subtypes inherit the characteristics of the super-type, whereas there is no possibility of inheritance in the E-R model. By allowing inheritance, you avoid both redundancy in your data and the rigidity of E-R and the relational model that it supports.

This may sound radical, but the fact is that FRBR does define some subtypes. They don't appear in the three high-level diagrams, so it isn't surprising that many people aren't aware of them. They are present, however, in the attributes. Here is the list of attributes for FRBR work:
title of the work
form of work
date of the work
other distinguishing characteristic
intended termination
intended audience
context for the work
medium of performance (musical work)
numeric designation (musical work)
key (musical work)
coordinates (cartographic work)
equinox (cartographic work)

I've placed in italics those that are subtypes of work. There are two: musical work, and cartographic work. I would also suggest that "intended termination" could be considered a subtype of "continuing resource", but this is subtle and possibly debatable.

Other subtypes in FRBR are:
Expression: serial, musical notation, recorded sound, cartographic object, remote sensing image, graphic or projected image
Manifestation: printed book, hand-printed book, serial, sound recording, image, microform, visual projection, electronic resource, remote access electronic resource

These are the subtypes that are present in FRBR today, but because sub-typing probably was not fully explored, there are likely to be others.

Object-oriented design was a response to the need to be able to extend a data model without breaking what is there. Adding a subtype should not interfere with the top-level type nor with other subtypes. It's a tricky act of design, but when executed well it allows you to satisfy the special needs that arise in the community while maintaining compatibility of the data.

Since we seem to respond well to pictures, let me provide this idea in pictures, keeping in mind that these are simple examples just to get the idea across.


The above picture models what is in FRBR today, although using the inheritance capability of OO rather than the E-R model where inheritance is not possible. Both musical work and cartographic work have all of the attributes of work, plus their own special attributes.
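
For readers who prefer code to boxes, here is the same idea as a minimal object-oriented sketch. The class and field names are simply the FRBR attribute labels from the list above turned into identifiers for illustration; they are not part of any published model:

    // Work carries the attributes shared by every work.
    class Work {
        String titleOfTheWork;
        String formOfWork;
        String dateOfTheWork;
        String otherDistinguishingCharacteristic;
        String intendedTermination;
        String intendedAudience;
        String contextForTheWork;
    }

    // Subtypes inherit everything above and add only their own specifics.
    class MusicalWork extends Work {
        String mediumOfPerformance;
        String numericDesignation;
        String key;
    }

    class CartographicWork extends Work {
        String coordinates;
        String equinox;
    }

Code that only cares about works in general handles a MusicalWork or a CartographicWork through the Work super-type; code that needs the musical or cartographic specifics works with the subtype.
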
If it becomes necessary to add other attributes that are specific to a single type, then another sub-type is added. This new subtype does not interfere with any code that is making use of the elements of the super-type "work". It also does not alter what the music and maps librarians must be concerned with, since they are in their own "boxes." As an example, the audio-visual community did an analysis of BIBFRAME and concluded, among other things, that the placement of duration, sound content and color content in the BIBFRAME Instance entity would not serve their needs; instead, they need those elements at the work level.*

This just shows work, and I don't know how/if it could or should be applied to the entire WEMI thread. It's possible that an analysis of this nature would lead to a different view of the bibliographic entities. However, using types and sub-types, or classes and sub-classes (which would be the common solution in RDF) would be far superior to the E-R model of FRBR. If you've read my writings on FRBR you may know that I consider FRBR to be locked into an out-of-date technology, one that was already on the wane by 1990. Object-oriented modeling, which has long replaced E-R modeling, is now being eclipsed by RDF, but there would be no harm in making the step to OO, at least in our thinking, so that we can break out of what I think is a model so rigid that it is doomed to fail.

*This is an over-simplification of what the A-V community suggested, modified for my purposes here. However, what they do suggest would be served by a more flexible inheritance model than the model currently used in BIBFRAME.

Ariadne Magazine: FIGIT, eLib, Ariadne and the Future.

planet code4lib - Sun, 2016-01-17 13:50

Marieke Guy, Philip Hunter, John Kirriemuir, Jon Knight and Richard Waller look back at how Ariadne began 20 years ago as part of the UK Electronic Libraries Programme (eLib), how some of the other eLib projects influenced the web we have today and what changes have come, and may yet come, to affect how digital libraries work.


Ariadne is 20 years old this week and some members of the current editorial board thought it might be useful to look back at how it came to be, how digital library offerings have changed over the years, and maybe also peer into the near future. To do this, we’ve enlisted the help of several of the past editors of Ariadne who have marshalled their memories and crystal balls. Read more about FIGIT, eLib, Ariadne and the Future.

Marieke Guy, Philip Hunter, John Kirriemuir, Jon Knight, Richard Waller

Issue number: 75. Date published: Sun, 01/17/2016. http://www.lboro.ac.uk/issue75/editorsreview

Terry Reese: MarcEdit and OpenRefine

planet code4lib - Sun, 2016-01-17 02:27

There have been a number of workshops and presentations that I’ve seen floating around that talk about ways of using MarcEdit and OpenRefine together when doing record editing. OpenRefine, for folks that might not be familiar, used to be known as Google Refine, and is a handy tool for working with messy data. While there is a lot of potential overlap between the types of edits available in MarcEdit and OpenRefine, the strength of OpenRefine is that it allows you to access your data via a tabular interface to easily find variations in metadata, relationships, and patterns.

For most folks working with MarcEdit and OpenRefine together, the biggest challenge is moving the data back and forth. MARC binary data isn’t supported by OpenRefine, and MarcEdit’s mnemonic format isn’t well suited for OpenRefine’s import options either. And once the data has been put into OpenRefine, getting it back out and turned into MARC can be difficult for first-time users as well.

Because I’m a firm believer that users should use the tool that they are most comfortable with, I’ve been talking to a few OpenRefine users trying to think about how I could make the process of moving data between the two systems easier. And to that end, I’ll be adding to MarcEdit a toolset that will facilitate the export and import of MARC (and MarcEdit’s mnemonic) data formats into formats that OpenRefine can parse and easily generate. I’ve implemented this functionality in two places – one as a standalone application found on the Main MarcEdit Window, and one as part of the MarcEditor – which will automatically convert or import data directly into the MarcEditor Window.

Exporting Data from MarcEdit

As noted above, there will be two methods of exporting data from MarcEdit into one of two formats for import into OpenRefine.  Presently, MarcEdit supports generating either json or tab delimited format.  These are two formats that OpenRefine can import to create a new project.


OpenRefine Option from the Main Window


OpenRefine Export/Import Tool.

If I have a MARC file and I want to export it for use in OpenRefine – I would use the following steps:

  1. Open MarcEdit
  2. Select Tools/OpenRefine/Export from the menu
  3. Enter my Source File (either a marc or mnemonic file)
  4. My Save File – MarcEdit supports export in json or tsv (tab delimited)
  5. Select Process

This will generate a file that can be used for importing into OpenRefine. A couple of notes about that process. When importing via tab delimited format – you will want to unselect the options that do number interpretation. I’d also uncheck the option to turn blanks into nulls and make sure the option is selected that retains blank rows. These are useful on export and reimport into MarcEdit. When using Json as the file format – you will want to make sure after import to order your columns as TAG, Indicators, Content. I’ve found OpenRefine will mix this order, even though the json data is structured in this order.
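
As a purely hypothetical illustration of that three-column layout (not literal MarcEdit output, which may differ in details such as delimiters and record separators), a tab-delimited export ordered TAG, Indicators, Content might look something like the following, with the columns padded here for readability where the real file would use tab characters:

    TAG     Indicators    Content
    245     10            $aA sample title :$bwith a subtitle /$cby a sample author.
    650      0            $aSample subject heading.

Each row would then correspond to one MARC field, so edits made in OpenRefine stay aligned with the record structure when the file comes back into MarcEdit.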

Once you’ve made the changes to your data – select the export option in OpenRefine and choose the tab-delimited export option. This is the file format MarcEdit can turn back into either MARC or the mnemonic file format. Please note – I’d recommend always going back to the mnemonic file format until you are comfortable with the process, to ensure that the import process worked like you expected.

And that’s it.  I’ve recorded a video on YouTube walking through these steps – you can find it here:

This of course just shows how to move data between the two systems. If you want to learn more about how to work with the data once it’s in OpenRefine, I’d recommend one of the many excellent workshops that I’ve been seeing put on at conferences and via webinars by a wide range of talented metadata librarians.

DuraSpace News: VIVO Updates for January 18, 2016

planet code4lib - Sun, 2016-01-17 00:00

The VIVO Committers Group.  The VIVO project now has a committers group!

Ariadne Magazine: Lost Words, Lost Worlds.

planet code4lib - Sat, 2016-01-16 10:04

Emma Tonkin discusses how the words we use, and where we use them, change over time, and how this can cause issues for digital preservation.


      'Now let's take this parsnip in.'
      'Parsnip?'
      'Parsnip, coffee. Perrin, Wellbourne. What does it matter what we call things?'
      – David Nobbs, The Fall And Rise of Reginald Perrin

Introduction

Read more about Lost Words, Lost Worlds.

Emma Tonkin

Issue number: 75. Date published: Sat, 01/16/2016. http://www.lboro.ac.uk/issue75/tonkin

ACRL TechConnect: #1Lib1Ref

planet code4lib - Fri, 2016-01-15 19:24

A few of us at Tech Connect participated in the #1Lib1Ref campaign that’s running from January 15th to the 23rd. What’s #1Lib1Ref? It’s a campaign to encourage librarians to get involved with improving Wikipedia, specifically by citation chasing (one of my favorite pastimes!). From the project’s description:

Imagine a World where Every Librarian Added One More Reference to Wikipedia.
Wikipedia is a first stop for researchers: let’s make it better! Your goal today is to add one reference to Wikipedia! Any citation to a reliable source is a benefit to Wikipedia readers worldwide. When you add the reference to the article, make sure to include the hashtag #1Lib1Ref in the edit summary so that we can track participation.
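
For anyone who has not made this kind of edit before, the wikitext involved is small. A minimal, hypothetical example (the cited work is a placeholder): put a <ref> tag directly after the statement being supported, and Wikipedia renders it in the article's reference list:

    Some statement in the article.<ref>{{cite book |last=Author |first=A. |title=An Example Title |publisher=Example Press |year=2015 |page=42}}</ref>

Then, as the campaign asks, add the #1Lib1Ref hashtag to the edit summary when saving.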

Below, we each describe our experiences editing Wikipedia. Did you participate in #1Lib1Ref, too? Let us know in the comments or join the conversation on Twitter!

 

I recorded a short screencast of me adding a citation to the Darbhanga article.

— Eric Phetteplace

 

I used the Citation Hunt tool to find an article that needed a citation. I selected the second one I found, which was about urinary tract infections in space missions. That is very much up my alley. I discovered after a quick Google search that the paragraph in question was plagiarized from a book on Google Books! After a hunt through the Wikipedia policy on quotations, I decided to rewrite the paragraph to paraphrase the quote, and then added my citation. As is usual with plagiarism, the flow was wrong, since there was a reference to a theme in the previous paragraph of the book that wasn’t present in the Wikipedia article, so I chose to remove that entirely. The Wikipedia Citation Tool for Google Books was very helpful in automatically generating an acceptable citation for the appropriate page. Here’s my shiny new paragraph, complete with citation: https://en.wikipedia.org/wiki/Astronautical_hygiene#Microbial_hazards_in_space.

— Margaret Heller

 

I edited the “Library Facilities” section of the “University of Maryland Baltimore” article in Wikipedia. There was an outdated link in the existing citation, and I also wanted to add two additional sentences and citations. You can see how I went about doing this in my screen recording below. I used the “edit source” option to get the source first in the Text Editor and then made all the changes I wanted in advance. After that, I copy/pasted the changes I wanted from my text file to the Wikipedia page I was editing. Then, I previewed and saved the page. You can see that I also had a typo in my text and had to fix that again to make the citation display correctly. So I had to edit the article more than once. After my recording, I noticed another typo in there, which I fixed using the “edit” option. The “edit” option is much easier to use than the “edit source” option for those who are not familiar with editing Wiki pages. It offers a menu bar on the top with several convenient options.

The menu bar for the “edit” option in Wikipedia

The recording of editing a Wikipedia article:

— Bohyun Kim

 

It has been so long since I’ve edited anything on Wikipedia that I had to make a new account and read the “how to add a reference” link, which is to say, if I could do it in 30 minutes while on vacation, anyone can. There is a WYSIWYG option for the editing interface, but I learned to do all this in plain text and it’s still the easiest way for me to edit. See the screenshot below for a view of the HTML editor.

I wondered what entry I would find to add a citation to…there have been so many that I’d come across but now I was drawing a total blank. Happily, the 1Lib1Ref campaign gave some suggestions, including “Provinces of Afghanistan.” Since this is my fatherland, I thought it would be a good service to dive into. Many of Afghanistan’s citations are hard to provide for a multitude of reasons. A lot of our history has been an oral tradition. Also, not insignificantly, Afghanistan has been in conflict for a very long time, with much of its history captured from the lens of Great Game participants like England or Russia. Primary sources from the 20th century are difficult to come by because of the state of war from 1979 onwards and there are not many digitization efforts underway to capture what there is available (shout out to NYU and the Afghanistan Digital Library project).

Once I found a source that I thought would be an appropriate reference for a statement on the topography of Uruzgan Province, I did need to edit the sentence to remove the numeric values that had been written, since I could not find a source that quantified the area. It’s not a precise entry, to be honest, but it does give the opportunity to link to a good map with other opportunities to find additional information related to Afghanistan’s agriculture. I also wanted to choose something relatively uncontroversial, like geographical features rather than historical or person-based topics, for this particular campaign.

— Yasmeen Shorish

Edited area delineated by red box.

Villanova Library Technology Blog: Martin Luther King, Jr. at Villanova University, January 20, 1965

planet code4lib - Fri, 2016-01-15 19:08

In commemoration of Martin Luther King Day (January 18, 2016), the Rev. Dennis Gallagher, OSA, PhD, University archivist, collaborated with Joanne Quinn, graphic designer and Communication and Service Promotion team leader, to create this exhibit, “Martin Luther King, Jr., at Villanova University, January 20, 1965.” The exhibit fills two cases and features materials from the University Archives which are located in Falvey Memorial Library. All materials were selected by Father Gallagher; he also wrote the captions that accompany the objects. Quinn created the graphics and arranged the exhibit.

The first case displays two large black and white photographs from Martin Luther King’s visit to Villanova on January 20, 1965, and a typewritten copy of the speech he delivered that day. Father Gallagher describes the first photograph thus, “Reverend Doctor Martin Luther King, Jr., civil rights leader and Nobel Prize winner gives an address at Villanova on January 20, 1965.” The second photograph shows the Reverend John A. Klekotka, OSA, University president, Dr. King and Thomas J. Furst, student body president. In the center is the manuscript of King’s talk, “Challenges of the New Age.” This copy of the speech was donated to the Villanova University Archives by Thomas Bruderle in 2015. Dr. King’s speech was part of the Villanova Forum Series.

Three objects fill the second case: a 1965 Belle Air yearbook, a Villanova pennant containing an image of the Wildcat, and a January 8, 1965 issue of the Villanovan newspaper. The yearbook is opened to display four photographs of Dr. King as he gave his speech. The bound volume of the Villanovan (vol. 40, no. 11, p. 1, Jan. 8, 1965) shows the feature story, “Forum Features Dr. Martin Luther King,” and his photograph.

Dig Deeper

If this small exhibit whets your curiosity, Falvey has a multitude of books for you. The sources listed below represent just a small part of the library’s holdings.

Martin Luther King, Jr.: The Making of a Mind (1982) John J. Ansbro.

Martin Luther King  (2010) Godfrey Hodgson.

The Speeches of Martin Luther King  (Video) (1988) Martin Luther King.

My Life with Martin Luther King, Jr. (1994) Coretta Scott King (Martin Luther King’s wife).

The Autobiography of Martin Luther King, Jr. (1998) Martin Luther King.

King:  A Biography (2012) David L. Lewis.

 



Villanova Library Technology Blog: Library Trials to Routledge Handbooks Online and Taylor & Francis eBooks

planet code4lib - Fri, 2016-01-15 19:01

From January 11 to March 11, the library will be running a trial of two major e-book platforms from Taylor & Francis: Routledge Handbooks Online, and Taylor & Francis eBooks (which contains mostly Routledge titles). Both collections are strong in a wide range of humanities and social science disciplines.

Routledge Handbooks Online contains collections of scholarly review articles on commonly researched topics. More than 600 volumes (about 18,000 chapters) are included. The articles are useful for getting a general overview of a topic, and make good jumping-off points for further investigation. Each chapter can be viewed in HTML or downloaded as a PDF.

Taylor & Francis eBooks contains more than 50,000 ebooks—both single-author texts and edited collections. Many of these are recent publications, but the collection contains works spanning the last century as well. The majority of them are DRM-free, with no time limits or print limitations, though some do have restrictions.

Your feedback about these resources is valuable to us. Please send your comments to Nik Fogle at nikolaus.fogle@villanova.edu. We’d particularly like to know what you found useful about them, what was lacking, and to what extent you would use this material in the future. Please be sure to tell us which of the two platforms your comments are about.



Villanova Library Technology Blog: ‘Caturday: Service ‘Cats

planet code4lib - Fri, 2016-01-15 16:58

(Left to right) Maleah Bradley, Christina Sebastiao, Cordesia Pope

Thanks to Fiona Chambers, a student leader on the Martin Luther King Jr. Day of Service Committee, library staff did their part to draw attention to the MLK Day of Service by wearing t-shirts provided by the committee.

The Library also served as one of the MLK Day of Service Coat Drive locations on campus after being contacted by Rebecca Lin, another student leader on the MLK Day of Service Committee.

The Library will be closed on Monday, Jan. 18, to honor Dr. Martin Luther King Jr. and to allow library staff and students to participate in MLK Day of Service events.



Villanova Library Technology Blog: The Highlighter: Is the Library Hiring Student Employees?

planet code4lib - Fri, 2016-01-15 16:53

This video shows how to find library jobs for student employees:

To access the library’s “How to” videos, click the “Help” button on Falvey’s homepage.



David Rosenthal: The Internet is for Cats

planet code4lib - Fri, 2016-01-15 16:00
It is a truth universally acknowledged that, after pr0n, the most important genre of content on the Internet is cat videos. But in the early days of the Web, there was no video. For sure, there was pr0n, but how did the Internet work without cat videos? Follow me below the fold for some research into the early history of Web content.

Page via Wayback Machine

In those early days, the Web may not have had cat videos, but that didn't mean it lacked cats. Cats colonized the Web very early. Among the leaders were the twins Nijinsky and Pavlova, sadly now deceased. 21 years ago this month, on Jan 11th 1995, the late Mark Weiser created Nijinsky and Pavlova's Web page. A year later they were featured in the book Internet for Cats by Judy Heim. On Dec 1st 1998 the Internet Archive's crawler visited the page, then 1,421 days old and thus a veteran among Web pages. This was the first of what would be 39 captures of the page over the next decade, the last being on May 11th 2008.

The page achieved a Methuselah-like longevity of at least 4,870 days, or over 13 years. Fortunately, the last capture shows that Mark never updated the images. Nijinsky and Pavlova remain immortalized in all their kitten-cuteness and similarity. Nijinsky, who inherited all the brains and the energy of the twins, remained svelte and elegant to the end of his days, but in later life his lethargic sister became full-figured. Although we cannot know the precise date the web page vanished, it appears that both cats outlived their page by a year or more.

Page via oldweb.today

Thanks to Ilya Kreymer's oldweb.today, we can view the page using Mosaic 2.2, a contemporary browser. Note the differences in the background and the fonts, and the fact that all the resources oldweb.today loaded came from the Internet Archive. This is expected with pages as old as Nijinsky and Pavlova's. In those days the Internet Archive was pretty much alone in collecting and preserving the Web.

Over the last two decades the Internet Archive has become an essential resource. Please support their work by making a donation.


Villanova Library Technology Blog: Content Roundup – Second Week – January 2016

planet code4lib - Fri, 2016-01-15 14:28

Front cover, The secret of the kidnapped heir : a strange detective narrative / by Old Sleuth

As January brings a chill, sit by the fireside and read a newly digitized work! This week brings a host of new titles including:

American Catholic Historical Society

Records of the ACHS (11 articles added)
[http://digital.library.villanova.edu/Item/vudl:441543]

Catholica

Newspapers

[1]p., I.C.B.U. Journal, v. 13, no. 163, April 1, 1885

I.C.B.U. Journal (33 issues added)
[http://digital.library.villanova.edu/Item/vudl:428063]


Contributions from Augustinian Theologians and Scholars

Saint Augustine : monk, priest, bishop / by Luc Verheijen, O.S.A. (campus access only)
[http://digital.library.villanova.edu/Item/vudl:431316]

Dime Novel and Popular Literature


Fiction

Front cover, The hotel tragedy; or, Manfred’s great detective adventures : a strange and wierd detective narrative / by “Old Sleuth.”

Old Sleuth Weekly (9 issues added)
[http://digital.library.villanova.edu/Item/vudl:439284]
[http://digital.library.villanova.edu/Item/vudl:439322]
[http://digital.library.villanova.edu/Item/vudl:439794]
[http://digital.library.villanova.edu/Item/vudl:440379]
[http://digital.library.villanova.edu/Item/vudl:440417]
[http://digital.library.villanova.edu/Item/vudl:440455]
[http://digital.library.villanova.edu/Item/vudl:440493]
[http://digital.library.villanova.edu/Item/vudl:440531]
[http://digital.library.villanova.edu/Item/vudl:440598]


Periodicals

[1] p., Chicago Ledger, v. XLIV, no. 2, January 8, 1916

Chicago Ledger (1 issue added)
[http://digital.library.villanova.edu/Item/vudl:439832]

New York Weekly (2 issues added)
[http://digital.library.villanova.edu/Item/vudl:441314]
[http://digital.library.villanova.edu/Item/vudl:441324]

New York Saturday Journal (1 issue added)
[http://digital.library.villanova.edu/Item/vudl:441334]

Great War

Frederick William Walter Papers (1 item added, set completed)
[http://digital.library.villanova.edu/Item/vudl:405665]

Joseph McGarrity Collection

Newspaper

The Shan Van Vocht (5 issues added)
[http://digital.library.villanova.edu/Item/vudl:291027]

Villanova Digital Collection

Falvey Memorial Library

Doodle, June 7, 2013, “Louise Erdrich” birthday

Daily Doodles (2013: 29 doodles added)
[http://digital.library.villanova.edu/Item/vudl:441442]



DuraSpace News: Telling DSpace Stories at Izmir Institute of Technology with Gultekin Gurdal

planet code4lib - Fri, 2016-01-15 00:00

The Izmir Institute of Technology Library team.
