The Global Open Data Index measures and benchmarks the openness of government data around the world, and presents this information in a way that is easy to understand and easy to use. Each year, Open Knowledge and the open data community produce an annual ranking of countries, peer reviewed by our network of local open data experts. The Index launched in 2012 as a tool to track the state of open data around the world. More and more governments were beginning to set up open data portals and make commitments to release open government data, and we wanted to know whether those commitments were really translating into the release of actual data.
The Index focuses on 15 key datasets that are essential for transparency and accountability (such as election results and government spending data), and those vital for providing critical services to citizens (such as maps and water quality). Today, we are pleased to announce that we are collecting submissions for the 2015 Index!
The Global Open Data Index tracks whether this data is actually released in a way that is accessible to citizens, media and civil society, and is unique in that it crowdsources its survey results from the global open data community. Crowdsourcing this data provides a tool for communities around the world to learn more about the open data available in their respective countries, and ensures that the results reflect the experience of civil society in finding open information, rather than accepting government claims of openness. Furthermore, the Global Open Data Index is not only a benchmarking tool, it also plays a foundational role in sustaining the open government data community around the world. If, for example, the government of a country does publish a dataset, but this is not clear to the public and it cannot be found through a simple search, then the data can easily be overlooked. Governments and open data practitioners can review the Index results to locate the data, see how accessible the data appears to citizens, and, in the case that improvements are necessary, advocate for making the data truly open.
Methodology and Dataset Updates
After four years of leading this global civil society assessment of the state of open data around the world, we have learned a few things, and have updated both the datasets we are evaluating and the methodology of the Index itself to reflect those lessons! One of the major changes was running a massive consultation of the open data community to determine the datasets we should be tracking. As a result of this consultation, we have added five datasets to the 2015 Index. This year, in addition to the ten datasets we evaluated last year, we will also be evaluating the release of water quality data, procurement data, health performance data, weather data and land ownership data. If you are interested in learning more about the consultation and its results, you can read more on our blog!
How can I contribute?
2015 Index contributions open today! We have done our best to make contributing to the Index as easy as possible. Check out the contribution tutorial in English and Spanish, ask questions in the discussion forum, reach out on Twitter (#GODI15) or speak to one of our 10 regional community leads! There are countless ways to get help, so please do not hesitate to ask! We would love for you to be involved. Follow #GODI15 on Twitter for more updates.
The Index team is hitting the road! We will be talking to people about the Index at the African Open Data Conference in Tanzania next week and will also be running Index sessions at both AbreLATAM and ConDatos in two weeks! Mor and Katelyn will be on the ground so please feel free to reach out!
Contributions will be open from August 25th, 2015 through September 20th, 2015. After the 20th of September we will begin the arduous peer review process! If you are interested in getting involved in the review, please do not hesitate to contact us. Finally, we will be launching the final version of the 2015 Global Open Data Index Ranking at the OGP Summit in Mexico in late October! This will be your opportunity to talk to us about the results and what that means in terms of the national action plans and commitments that governments are making! We are looking forward to a lively discussion!
Four weeks to go! Yes, Hydra Connect 2015 is just four weeks away. The Connect 2015 wiki page has full details of the program and other aspects of the event. As I write this there are only 15 tickets left so, if you haven’t booked already, you really ought to do so very soon! All our discounted hotel rooms are sold out, but apparently the discount travel sites can still find you a good deal.
Journal of Web Librarianship: An Exploration of Indexed and Non-Indexed Open Access Journals: Identifying Metadata Coding Variations
Ethan J. Allen
Journal of Web Librarianship: Database Mobile Accessibility Assessment at Adelphi University Libraries
Journal of Web Librarianship: LIBRARY 3.0: INTELLIGENT LIBRARIES AND APOMEDIATION. Kwanya, Tom, Christine Stilwell, and Peter Underwood. Waltham, MA: Chandos, 2015, 174 pp., $80.00, ISBN: 978-1-84334-718-7.
Journal of Web Librarianship: FUNDAMENTALS FOR THE ACADEMIC LIAISON. Moniz, Richard, Jo Henry, and Joe Eshleman. London: Facet Publishing, 2014, 200 pp., $75.07, ISBN: 978-1-78330-005-1.
Journal of Web Librarianship: BUYING AND SELLING INFORMATION: A GUIDE FOR INFORMATION PROFESSIONALS AND SALESPEOPLE TO BUILD MUTUAL SUCCESS. Gruenberg, Michael. NJ: Information Today, Inc., 2014, 195 pp., $49.50, ISBN: 978-1-57387-478-6.
Kristen L. Young
Jason Paul Michel
Join us on CopyTalk in September to hear about the leading legal cases affecting Fair Use and our ability to access, archive and foster our common culture. Our presenter on this topic will be Corynne McSherry, Legal Director at the Electronic Frontier Foundation.
CopyTalk will take place on Thursday, September 3rd at 11am Pacific/2pm Eastern time. After a brief introduction, Corynne will present for 50 minutes, and we will end with a Q&A session (questions will be collected during the presentation).
Please join us at http://ala.adobeconnect.com/r7ivg4sga0f/
We are limited to 100 concurrent viewers, so we ask you to watch with others at your institution if at all possible. The presentations are recorded and will be available online soon after the presentation. Audio is provided online via the webinar software only, so you will need speakers for your computer; there is no call-in number for audio.
I wrote this plugin way back in BL 2.x days, but I think many don’t know about it, and I don’t think anyone but me is using it, so I thought I’d take the opportunity, having updated it, to advertise it.
blacklight_cql gives your BL app the ability to take CQL queries as input. CQL is a query language for writing boolean expressions (http://www.loc.gov/standards/sru/cql/); I don’t personally consider it suitable for end-users to enter manually, and don’t expose it that way in my BL app.
But I do use it as an API for other internal software to make complex boolean queries against my BL app, like “format = ‘Journal’ AND (ISSN = X OR ISSN = Y OR ISBN = Z)”. Paired with the BL Atom response, it’s a pretty powerful query API against a BL app.
Both direct Solr fields, and search_fields you’ve configured in Blacklight are available in CQL; they can even be mixed and matched in a single query.
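As an illustrative sketch of how a client might send such a query: the host, path, and parameter names below are my assumptions for illustration, not taken from the plugin's documentation, so check your own blacklight_cql configuration for the actual ones.

```python
from urllib.parse import urlencode

# A boolean CQL expression like the one described above, mixing a
# configured Blacklight search_field with direct index fields.
cql = 'format = "Journal" AND (issn = "0028-0836" OR isbn = "9780262033848")'

# Hypothetical catalog URL and parameter names -- assumptions only.
base = "https://catalog.example.edu/catalog.atom"
url = base + "?" + urlencode({"search_field": "cql", "q": cql})
print(url)
```

The Atom response mentioned above would then be parsed by the calling software like any other feed.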
The blacklight_cql plug-in also provides an SRU/ZeeRex EXPLAIN handler, for a machine-readable description of what search fields are supported via CQL. Here’s “EXPLAIN” on my server: https://catalyst.library.jhu.edu/catalog/explain
The plug-in does NOT provide a full SRU/SRW implementation — but as it does provide some of the hardest parts of an SRW implementation, it would probably not be too hard to write a bit more glue code to get a full implementation. I considered doing that to make my BL app a target of various federated search products that speak SRW, but never wound up having a business case for it here. (Also, it may or may not actually work out, as SRW tends to vary enough that even if it’s a legal-to-spec SRW implementation, that’s no guarantee it will work with a given client).
Even though the blacklight_cql plugin has been around for a while, it’s perhaps still somewhat immature software (or maybe it’s that it’s “legacy” software now?). It’s worked out quite well for me, but I’m not sure anyone else has used it, so it may have edge case bugs I’m not running into, or bugs that are triggered by use cases other than mine. It’s also, I’m afraid, not very well covered by automated tests. But I think what it does is pretty cool, and if you have a use for what it does, starting with blacklight_cql should be a lot easier than starting from scratch.
Feel free to let me know if you have questions or run into problems.
The Islandora community has seen a lot of growth since the Islandora Foundation got its start in 2013. The growth of our user and institutional community has been easy to see, but there has been another layer of growth in a vital part of the community that isn't always as visible: Islandora developers. Modules, bug fixes, and other commits to the Islandora codebase are coming from a much wider variety of sources than in the early days of Islandora.
Today, we are going to learn more about one of those community developers. Jared Whiklo is an Applications Developer at the University of Manitoba. He has also been an integral part of the Islandora 7.x-2.x development team and will be co-leading Islandora's first Community Sprint at the end of the month. Jared has authored some handy Islandora tools of his own, including Islandora Custom Solr, which replaces SPARQL queries with Solr queries where possible for speed improvements. You can learn more about how he runs Islandora from the University of Manitoba's entry in the Islandora Deployments Repo.
Please tell us a little about yourself. What do you do when you’re not at work?
I am a self-taught programmer from days past (like Turbo Pascal on 14 disks, past). I am married with two young kids. I like to build, fix things, camp (in a tent), bike, skate and run the occasional marathon.
How long have you been working with Islandora? How did you get started?
Over the past three years in my current position I have slowly become more deeply involved in Islandora. Our institution had invested early in the Islandora project; we liked the flexibility as we were moving away from about three different legacy products.
Sum up your area of expertise in three words:
Master of none
What are you working on right now?
We are migrating content from various systems into our Islandora instance, as well as bringing other groups on campus on board to store their data.
What contribution to Islandora are you most proud of?
I am proud of each little contribution. Every little bit helps to move the community forward.
What new feature or improvement would you most like to see?
What’s the one tool/software/resource you cannot live without?
Git. When you swing between work for different interests it makes it vital.
If you could leave the community with one message from reading this interview, what would it be?
Don't get discouraged.
A friend asked the internet:
Can anyone recommend a mirrorless camera? I have some travel coming up and I’m hesitant to lug my DSLR around.
Of course I had an opinion:

I go back and forth on this question myself. My current travel camera is a Sony RX100 mark 3 (the mark 4 was recently released). Some of my photos with that camera are on Flickr. If I decide to get a replacement for my bigger cameras, I’ll probably go with a full-frame Sony A7 of some sort. The Fuji X system APS-C cameras, and the Olympus and Panasonic Micro 4/3 cameras, look great, but they don’t offer enough improvement over the RX100 to excite me much.

One of the biggest issues for me is sensor size. The smallest camera with the largest sensor is usually the winner for me. Other compact cameras I like include the Panasonic LUMIX LX100 and Canon PowerShot G1 X Mark II. Both have bigger sensors for shallower depth of field. If the Panasonic supported remote shutter release I would definitely have picked it instead of the Sony (I have a predecessor to the LX100, the LX3, that I loved). If you don’t care to do timelapse like I do, then remote shutter release might not be a requirement for you.

Back to my RX100: it’s my go-to digital. I shoot raw, sometimes with auto-bracketing, to maximize dynamic range. Even without bracketing, the raw files have great dynamic range – much more than my Canon bodies. The only reason I’ve used my Canon bodies recently is when I needed a hot shoe for strobist work (which I’d like to do more of).

To give context to my rambling: I offered my camera history up to mid-2014 previously. After that, I got deep into film, including instant and celluloid. My darling wife agreed to let me buy a Hasselblad in March if I promised not to say a word about buying another camera for a full year. That lasted about a month, but at least (most) film cameras are cheap. I’m easy to find on Flickr and Instagram.
Last week, I posted an update that included the early implementation of the Validate Headings tool. After a week of testing, feedback and refinement, I think that the tool now functions in a way that will be helpful to users. So, let me describe how the tool works and what you can expect when the tool is run.
The Validate Headings tool was added as a new report to the MarcEditor to enable users to take a set of records and get back a report detailing how many records had corresponding Library of Congress authority headings. The tool was designed to validate data in the 1xx, 6xx, and 7xx fields. The tool has been set to only query headings and subjects that utilize the LC authorities. At some point, I’ll look to expand to other vocabularies.
How does it work
Presently, this tool must be run from within the MarcEditor – though at some point in the future, I’ll extract it from the MarcEditor and provide a stand-alone function and an integration with the command line tool. Right now, to use the function, you open the MarcEditor and select the Reports/Validate Headings menu.
Selecting this option will open the following window:
Options – you’ll notice three options available to you. The tool allows users to decide what values they would like to have validated. They can select names (1xx, 600, 610, 611, 7xx) or subjects (6xx). Please note, when you select names, the tool also looks up the 600, 610, and 611 fields as part of the process, because validation of these subjects occurs within the name authority file. The last option deals with the local cache. As MarcEdit pulls data from the Library of Congress, it caches the data it receives so that it can be reused on subsequent headings validation checks. The cache is used until it expires after 30 days; however, a user can check this option at any time and MarcEdit will delete the existing cache and rebuild it during the current data run.
A couple of things you’ll also note on this screen. There is an Extract button, and it’s not enabled. Once the validation report is run, this button becomes enabled if any records are identified as having headings that could not be validated against the service.
Running the Tool:
A couple of notes about running the tool. When you run it, you are asking MarcEdit to process your data file and query the Library of Congress for information related to the authorized terms in your records. As part of this process, MarcEdit sends a lot of data back and forth to the Library of Congress via the http://id.loc.gov service. The tool attempts to use a light touch, pulling down only the headings for a specific request – but do realize that a lot of data requests are generated through this function. You can estimate approximately how many requests will be made for a specific file by using the following formula: (number of records x 2) + (number of records), assuming that most records will have 1 name and 1 subject to authorize per record. So a file with 2,500 records would generate ~7,500 requests to the Library of Congress. Now, this is just a guess; in my tests, some sets generated as many as 12,000 requests for 2,500 records and as few as 4,000 requests for 2,500 records – but 7,500 tended to be within 500 requests for most test files.
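The rule of thumb above can be written out as a small sketch; the function name is mine, not anything in MarcEdit, and it only restates the estimate from the text.

```python
def estimate_lc_requests(num_records):
    # Rule of thumb from the text: (records x 2) + records,
    # assuming roughly one name and one subject heading per
    # record, plus per-heading resolution overhead. Real files
    # vary widely (4,000-12,000 requests per 2,500 records).
    return (num_records * 2) + num_records

# A 2,500-record file works out to roughly 7,500 requests.
print(estimate_lc_requests(2500))  # -> 7500
```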
So why do we care? Well, this report has the potential to generate a lot of requests to the Library of Congress’s identifier service – and while I’ve been told that there shouldn’t be any issues with this, I think we won’t really know until people start using it. At the same time, this function won’t come as a surprise to the folks at the Library of Congress, as we’ve spoken a number of times during development. At this point, we are all waiting to see how popular this function might be, and whether MarcEdit usage will create any noticeable uptick in the service’s usage.
When you run the validation tool, the program will go through each record, making the necessary validation requests of the LC ID service. When the service has completed, the user will receive a report with the following information:

Validation Results:
Process completed in: 121.546001431667 minutes.
Average Response Time from LC: 0.847667984420415
Total Records: 2500
Records with Invalid Headings: 1464
**************************************************************
1xx Headings Found: 1403
6xx Headings Found: 4106
7xx Headings Found: 1434
**************************************************************
1xx Headings Not Found: 521
6xx Headings Not Found: 1538
7xx Headings Not Found: 624
**************************************************************
1xx Variants Found: 6
6xx Variants Found: 1
7xx Variants Found: 3
**************************************************************
Total Unique Headings Queried: 8604
Found in Local Cache: 1001
**************************************************************
This represents the header of the report. I wanted users to be able to quickly, at a glance, see what the Validator determined during the course of the process. From here, I can see a couple of things:
- The tool queried a total of 2500 records
- Of those 2500 records, 1464 had at least one heading that was not found
- Within those 2500 records, 8604 unique headers were queried
- Within those 2500 records, there were 1001 duplicate headings across records (these were not duplicate headings within the same record, but for example, multiple records with the same author, subject, etc.)
- We can see how many Headings were found by the LC ID service within the 1xx, 6xx, and 7xx blocks
- Likewise, we can see how many headings were not found by the LC ID service within the 1xx, 6xx, and 7xx blocks.
- We can see the number of variants as well. Variants are defined as names that resolved, but where the preferred name returned by the Library of Congress didn’t match what was in the record. Variants will be extracted as part of the records that need further evaluation.
After this summary, the validation report returns information related to the record # (the record number count starts at zero) and the headings that were not found. For example:

Record #0
Heading not found for: Performing arts--Management--Congresses
Heading not found for: Crawford, Robert W
Record #5
Heading not found for: Social service--Teamwork--Great Britain
Record #7
Heading not found for: Morris, A. J
Record #9
Heading not found for: Sambul, Nathan J
Record #13
Heading not found for: Opera--Social aspects--United States
Heading not found for: Opera--Production and direction--United States
The current report format includes specific information about the heading that was not found. If the value is a variant, it shows up in the report as:

Record #612
Term in Record: bible.--criticism, interpretation, etc., jewish
LC Preferred Term: Bible. Old Testament--Criticism, interpretation, etc., Jewish
URL: http://id.loc.gov/authorities/subjects/sh85013771
Heading not found for: Bible.--Criticism, interpretation, etc
Here you see – the report returns the record number, the normalized form of the term as queried, the current LC Preferred term, and the URL to the term that’s been found.
The report can be copied and placed into a different program for viewing or can be printed (see buttons).
To extract the records that need work, minimize or close this window and go back to the Validate Headings Window. You will now see two new options:
First, you’ll see that the Extract button has been enabled. Click this button, and all the records that have been identified as having headings in need of work will be exported to the MarcEditor. You can now save this file and work on the records.
Second, you’ll see the new link – save delimited. Click on this link, and the program will save a tab delimited copy of the validation report. The report will have the following format:
Record ID [tab] 1xx [tab] 6xx [tab] 7xx [new line]
Within each column, multiple headings will be delimited by a colon; so if two 1xx headings appear in a record, the current process creates a single column with the headings separated by a colon, like: heading 1:heading 2.
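A row of that delimited report could be parsed along these lines; this is a sketch under the format described above (the parser and dictionary keys are mine, and the actual report may differ in details such as a header row).

```python
def parse_validation_row(line):
    # Columns are tab-delimited: Record ID, 1xx, 6xx, 7xx.
    # Multiple headings within one column are colon-separated.
    # (Note: a colon delimiter is ambiguous if a heading itself
    # contains a colon, so inspect your data before trusting it.)
    record_id, h1, h6, h7 = line.rstrip("\n").split("\t")

    def cell(col):
        return col.split(":") if col else []

    return {"id": record_id, "1xx": cell(h1),
            "6xx": cell(h6), "7xx": cell(h7)}

row = parse_validation_row("612\theading 1:heading 2\t\tSmith, John")
print(row["1xx"])  # -> ['heading 1', 'heading 2']
```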
This function required making a number of improvements to the linked data components – and because of that, the linking tool should work better and faster now. Additionally, because of the variant work I’ve done, I’ll soon be adding code that will give the user the option to update headings for variants as this report or the linking tool is running – and I think that is pretty cool. If you have other ideas or find that this is missing a key piece of functionality – let me know.
From Andrew Woods, on behalf of the Fedora Committers and Leadership Team
Winchester, MA – The Fedora Committers and Leadership Teams are pleased to welcome Jared Whiklo, Web Application Developer at the University of Manitoba, to the Fedora Committers team.
From Tim Donohue, DSpace Tech Lead, DuraSpace
Winchester, MA – A reminder that the second meeting of the DSpace UI Working Group is TOMORROW (Tues, Aug 25) at 15:00 UTC (11:00am EDT). Connection information is below.
Anyone is welcome to attend and join this new working group. A working group charter, with deliverables, is available at https://wiki.duraspace.org/display/DSPACE/DSpace+UI+Working+Group
Today I found the following resources and bookmarked them on Delicious.
- MediaGoblin MediaGoblin is a free software media publishing platform that anyone can run. You can think of it as a decentralized alternative to Flickr, YouTube, SoundCloud, etc.
- The Architecture of Open Source Applications
- A web whiteboard A Web Whiteboard is a touch-friendly online whiteboard app that lets you use your computer, tablet or smartphone to easily draw sketches, collaborate with others and share them with the world.
Tomorrow is my first convocation at my new university. For my international readers, a convocation in this part of the world is usually a ceremony in the autumn where faculty, students, and the schools that serve them are welcomed into the new academic year. (Although sometimes “convocation” is a graduation, which I suppose makes it a contronym, and it is also the collective noun for eagles).
At Holy Names, convocation was a student-centered event, and began with the university community, dressed in its finest, climbing up the 100-plus stairs to the dining hall for speeches and a lunch. I do not know entirely what to expect from tomorrow’s event (except there is no lunch, and it is held in the largest theater on campus, and relatively few students will be present), but I know that it will be different and that in its difference I will learn new meanings, symbols, and ways of being.
All weekend I have had the last four lines of Yeats’ “A prayer for my daughter” running through my mind:
How but in custom and in ceremony
Are innocence and beauty born?
Ceremony’s a name for the rich horn,
And custom for the spreading laurel tree.
There is a saying on the Internet, “do not read the comments,” and when it comes to major poems, I extend this to “do not read the commentary.” I made the mistake of browsing discussions of this poem, only to discover that rather than the sky-wide reflection on chaos versus order I know it to be, it is actually, among other flaws, a poem advocating the oppression of women. The idea that the poem is a product of its time, or that a father would want to be protective of his daughter, or that there is something to be said for the sanity of a well-ordered home life, is pushed aside in favor of squeezing this poem through a highly specific modern sensibility, then finding it wanting.
Higher education has been described as irrelevant, in a crisis, in need of great change, overpriced, stodgy, out of touch with the world, a waste of effort, and most of all, in need of disruption. And yet every fall universities around the country unite the stewards of academia in a ceremony that is anything but disruptive (convocation: convene, come together) and reminds us that the past, however conflicted and flawed, is the inevitable set of struts for building the future. Convocation reminds us that the work of summer is done, and now it is time for students to matriculate, spend a few days having fun and learning the campus culture, then settle down to work. The clock is wound, and begins to tick: professors teaching, administrators administrating, and librarians librarying and otherwise being their bad (as in good) information-professional selves.
When I think about the harsh words tossed at higher education, I am reminded not only of the dishonoring of great poems by forcing them through a chemist’s retort of present-day sensibility, but also how some leaders–and I have been guilty of this myself–are in such a rush to embrace new ideas (particularly our own new ideas) and express our pride in our forward-looking stance that we forget that many times, things were the way they were for a good reason that made sense at the time; and we also forget that in a decade or two our own ideas will be found ill-suited for the way things are done in that new era. When we do that we hurt feelings and body-block the gradual changing of minds, and for what purpose? We can and should continue the hard work of making higher education better, but we should also honor and embrace the past. Give the past its due, because for all of its failings, it birthed the present.
I see now that part of the thrill of convocation for me is how it fills a necessary void: the honoring of my own conflicted past (and all human pasts are conflicted), as well as my commitment to movement into the future. We have events honoring our own birth and also the calendar year, but too many cultures lack a Yom Kippur or Ramadan to help us reset and recommit. Lent comes close, but it is now nearly ruined by Secular Easter and muddy symbolism; as Sandy observes, it is strange behavior to celebrate the Lamb of God, and then roast him for Easter dinner. I am also impressed by how many clueless people schedule ordinary events for Good Friday, which is the religious observance that makes Easter Easter.
So onward into the academic year. The spreading laurel tree of academic custom, framed by convocation in early autumn and graduation in spring, gives my life well-framed pauses for introspection and inventory, pausing the slipstream of dailiness, stirring memories, reflection, atonement, and even where warranted, a little quiet praise. Births and deaths, broken friendships and promises, things (to borrow from the Book of Common Prayer) done and left undone, achievements big and small, harsh words and kind actions, frustrations and triumphs, times of fear and times of fearlessness, critical moments of thoughtlessness and those of careful consideration: tomorrow morning, dressed as one does for signature moments, I will tag along behind librarians as they wend their way to a place I have never visited and yet will come to know well, and learn a new way of coming together, in this autumn that closes one book and starts another.
An archived copy of the CopyTalk webinar “University Copyright Services” is now available. Originally webcasted on August 6th by the Office for Information Technology Policy’s Copyright Education Subcommittee, presenters were Sandra Enimil, Program Director, University Libraries Copyright Resources Center from the Ohio State University, Pia Hunter, Visiting Assistant Professor and Copyright and Reserve Librarian from the University of Illinois at Chicago, and Cindy Kristof, Head of Copyright and Document Services from Kent State University. They described the copyright services they offer to faculty, staff, and students at their respective institutions.
Plan ahead! One hour CopyTalk webinars occur on the first Thursday of every month at 11am Pacific/2 pm Eastern Time. It’s free!
The post Archived webinar on university copyright services now available appeared first on District Dispatch.