Feed aggregator

Nicole Engard: IL2014: Driving Our Own Destinies

planet code4lib - Mon, 2014-10-27 16:41

Brendan Howley opened up the Internet Librarian conference this year. Brendan designs stories that incite people to “do something”. He’s here to talk to us about the world of media and desired outcomes – specifically the desired outcomes for our libraries. Brendan collected stories from local library constituents to find out what libraries needed to do to get to the next step. He found (among other things) that libraries should be hubs for culture and should connect community media.

Three things internet librarians need to know:

  1. why stories work and what really matters
  2. why networks form (power of the weak not the strong)
  3. why culture eats strategy for lunch (Peter Drucker)

“The internet means that libraries are busting out of their bricks and mortars”

Brendan shared with us how stories are not about dumping data; they’re about sharing data and teachable moments.

Data is a type of story, and where data and stories meet is where change is found. If you want to speak to your community you need to keep in mind that we’re in a society of “post-everything” – there is only one appetite left in terms of storytelling – “meaning”. People need to find it relevant and find meaning in the story. The most remarkable thing about librarians is that we give “meaning” away every day.

People want to know what we stand for and why – values are the key piece to stories. People want to understand why libraries still exist. People under the age of 35 want to know how to find the truth out there – the reliable sources – they don’t care about digital literacy. It’s those who are scared of being left behind – those over 35 (in general) who care about digital literacy.

The recipe for a successful story is: share the why of the how of what you do.

The sharing of stories creates networks. Networks lead to the opportunity to create value – and when that happens you’ve proved your worth as a civic institution. Networks are the means by which those values spread. They are key to the future of libraries.

A Pattern Language by Christopher Alexander is a must-read for anyone designing systems/networks.

You need to understand that it’s the weak ties that matter. Strong ties are really quite rare – this sounds a lot like the long tail to me.

Libraries are in the business of giving away context – that means that where stories live, breathe, gather and cause people to do things is in the context. We’re in a position where we can give this context away. Libraries need to understand that we’re cultural entrepreneurs. Influencers fuel culture – and that’s the job description for librarians.

The post IL2014: Driving Our Own Destinies appeared first on What I Learned Today....

Related posts:

  1. IL2014: More Library Mashups Signing/Talk
  2. IL2012 Keynote: Library as Platform
  3. Open Source Culture

Islandora: Wagging the Long Tail Again

planet code4lib - Mon, 2014-10-27 12:17

It has been a while since our last foray into the Long Tail of Islandora. Some of those modules have moved all the way from the tail to the head and become part of our regular release. We have been quietly gathering them in our Resources section, but it's more than time for another high level review of the awesome modules that are out there in the community, just waiting to make your repo better.

Islandora XQuery

The ability to batch edit has long been the impossible dream in Islandora. Well, with this little module from discoverygarden, Inc., the dream has arrived. With a basic knowledge of XQuery, you can attack the metadata in your Fedora repository en masse. 

Putting Islandora XQuery into production should be approached with caution for the same reason that batch editing has been so long elusive: if you mass-edit your data, you can break things. That said, the module does come with a helpful install script, so getting it working in your Islandora installation may be the easiest part!

Islandora Entity Bridge

Much like Islandora Sync, Ashok Modi's Islandora Entity Bridge endeavours to build relationships between Fedora objects and Drupal so you can apply a wider variety of Drupal modules to the contents of your repository without recreating your objects as nodes.

Ashok presented on this module at the recent Islandora Camp in Denver, so you can learn more from his slides here.

Islandora Plupload

This simple but very effective module has been around a while. It makes use of the Plupload library to allow you to exceed PHP file limits when uploading large files.

Islandora Feeds

Mark Jordan has created this tool so you can use the Feeds contrib module to create Islandora objects. This module is still in development, so you can help it to move forward by telling Mark your use cases.

Islandora Meme Solution Pack

The latest in Islandora demo/teaching modules, developed at Islandora Camp Colorado by dev instructors Daniel Lamb and Nick Ruest to help demonstrate the joys of querying Solr. This module is not meant to be used in your repo, but rather to act as a learning tool, especially when used in combination with our Islandora VM.

LITA: Are you an iPad or a laptop?

planet code4lib - Mon, 2014-10-27 11:00

I’ve never been a big tablet user. This may come as a surprise to some, given that I assist patrons with their tablets every day at the public library. Don’t get me wrong, I love my Nexus 7 tablet. It’s perfect for reading ebooks, using Twitter, and watching Netflix; but the moment I want to respond to an email, edit a photo, or work my way through a Treehouse lesson, I feel helpless. Several library patrons have asked me if our public computers will be replaced by iPads and tablets. It’s hard to say where technology will take us in the coming years, but I strongly believe that a library without computers would leave us severely handicapped.

One of our regular library patrons, let’s call her Jane, is a diehard iPad fan. She is constantly on the hunt for the next great app and enjoys sharing her finds with me and my colleagues. Jane frequently teases me about preferring computers and whenever I’m leading a computer class she’ll ask “Can I do it on my iPad?” She’s not the only person I know who thinks that computers are antiquated and on their way to obsolescence, but I have plenty of hope for computers regardless of the iPad revolution.

In observing how patrons use technology, and reflecting on how I use technology in my personal and professional life, I find that tablets are excellent tools for absorbing and consuming information. However, they are not designed for creation. Nine times out of ten, if you want to make something, you’re better off using a computer. In a recent Wired article about digital literacy, Ari Geshner poses the question “Are you an iPad or are you a laptop? An iPad is designed for consumption.” He explains that literacy “means moving beyond a passive relationship with technology.”

So Jane is an iPad and I am a laptop. We’ve managed to coexist and I think that’s the best approach. Tablets and computers may both fall under the digital literacy umbrella, but they are entirely different tools. I sincerely hope that public libraries will continue to consider computers and tablets separately, encouraging a thirst for knowledge as well as a desire to create.

Lorcan Dempsey: Research information management systems - a new service category?

planet code4lib - Mon, 2014-10-27 03:02

It has been interesting watching Research Information Management or RIM emerge as a new service category in the last couple of years. RIM is supported by a particular system category, the Research Information Management System (RIMs), sometimes referred to by an earlier name, the CRIS (Current Research Information System).

For reasons discussed below, this area has been more prominent outside the US, but interest is also now growing in the US. See for example, the mention of RIMs in the Library FY15 Strategic Goals at Dartmouth College.

Research information management

The name is unfortunately confusing - a reserved sense living alongside more general senses. What is the reserved sense? Broadly, RIM is used to refer to the integrated management of information about the research life-cycle, and about the entities which are party to it (e.g. researchers, research outputs, organizations, grants, facilities, ..). The aim is to synchronize data across parts of the university, reducing the burden to all involved of collecting and managing data about the research process. An outcome is to provide greater visibility onto institutional research activity. Motivations include better internal reporting and analytics, support for compliance and assessment, and improved reputation management through more organized disclosure of research expertise and outputs.

A major driver has been the need to streamline the provision of data to various national university research assessment exercises (for example, in the UK, Denmark and Australia). Without integrated support, responding to these is costly, with activities fragmented across the Office of Research, individual schools or departments, and other support units, including, sometimes, the library. (See this report on national assessment regimes and the roles of libraries.)

Some of the functional areas covered by a RIM system may be:

  • Award management and identification of award opportunities. Matching of interests to potential funding sources. Supporting management of and communication around grant and contracts activity.
  • Publications management. Collecting data about researcher publications. Often this will be done by searching in external sources (Scopus and Web of Science, for example) to help populate profiles, and to provide alerts to keep them up to date.
  • Coordination and publishing of expertise profiles. Centralized upkeep of expertise profiles. Pulling of data from various systems. This may be for internal reporting or assessment purposes, to support individual researchers in providing personal data in a variety of required forms (e.g. for different granting agencies), and for publishing to the web through an institutional research portal or other venue.
  • Research analytics/reporting. Providing management information about research activity and interests, across departments, groups and individuals.
  • Compliance with internal/external mandates.
  • Support of open access. Synchronization with institutional repository. Managing deposit requirements. Integration with sources of information about Open Access policies.

To meet these goals, a RIM system will integrate data from a variety of internal and external systems. Typically, a university will currently manage information about these processes across a variety of administrative and academic departments. Required data also has to be pulled from external systems, notably data about funding opportunities and publications.

Products

Several products have emerged specifically to support RIM in recent years. This is an important reason for suggesting that it is emerging as a recognized service category.

  • Pure (Elsevier). "Pure aggregates your organization's research information from numerous internal and external sources, and ensures the data that drives your strategic decisions is trusted, comprehensive and accessible in real time. A highly versatile system, Pure enables your organization to build reports, carry out performance assessments, manage researcher profiles, enable expertise identification and more, all while reducing administrative burden for researchers, faculty and staff." [Pure]
  • Converis (Thomson Reuters). "Converis is the only fully configurable research information management system that can manage the complete research lifecycle, from the earliest due diligence in the grant process through the final publication and application of research results. With Converis, understand the full scope of your organization's contributions by building scholarly profiles based on our publishing and citations data--then layer in your institutional data to more specifically track success within your organization." [Converis]
  • Symplectic Elements. "A driving force of our approach is to minimise the administrative burden placed on academic staff during their research. We work with our clients to provide industry leading software services and integrations that automate the capture, reduce the manual input, improve the quality and expedite the transfer of rich data at their institution." [Symplectic]

Pure and Converis are parts of broader sets of research management and analytics services from, respectively, Elsevier (Elsevier research intelligence) and Thomson Reuters (Research management and evaluation). Each is a recent acquisition, providing an institutional approach alongside the aggregate, network level approach of each company's broader research analytics and management services.

Symplectic is a member of the very interesting Digital Science portfolio. Digital Science is a company set up by Macmillan Publishers to incubate start-ups focused on scientific workflow and research productivity. These include, for example, Figshare and Altmetric.

Other products are also relevant here. As RIM is an emerging area, it is natural to expect some overlap with other functions. For example, there is definitely overlap with back-office research administration systems - Ideate from Consilience or solutions from infoEd Global, for example. And also with more publicly oriented profiling and expertise systems on the front-office side.

With respect to the latter, Pure and Symplectic both note that they can interface to VIVO. Furthermore, Symplectic can provide "VIVO services that cover installation, support, hosting and integration for institutions looking to join the VIVO network". It also provides implementation support for the Profiles Research Networking Software.

As I discuss further below, one interesting question for libraries is the relationship between the RIMs or CRIS and the institutional repository. Extensions have been written for both DSpace and EPrints to provide some RIMs-like support. For example, DSpace-CRIS extends the DSpace model to cater for the CERIF entities. This is based on work done for the Scholar's Hub at Hong Kong University.

It is also interesting to note that none of the three open source educational community organizations - Kuali, the DuraSpace Foundation, or the Apereo Foundation - has a directly comparable offering, although there are some adjacent activities. In particular, Kuali Coeus for Research Administration is "a comprehensive system to manage the complexities of research administration needs from the faculty researcher through grants administration to federal funding agencies", based on work at MIT. DuraSpace is now the organizational home for VIVO.

Finally, there are some national approaches to providing RIMs or CRIS functionality, associated with a national view of research outputs. This is the case in South Africa, Norway and The Netherlands, for example.

Standards

Another signal that this is an emerging service category is the existence of active standards activities. Two are especially relevant here: CERIF (Common European Research Information Format) from EuroCRIS, which provides a format for exchange of data between RIM systems, and the CASRAI dictionary. CASRAI is the Consortia Advancing Standards in Research Administration Information.

Libraries

So, what about research information management (in this reserved sense) and libraries? One of the interesting things to happen in recent years is that a variety of other campus players are developing service agendas around digital information management that may overlap with library interests. This has happened with IT, learning and teaching support, and with the University press, for example. This coincides with another trend, the growing interest in tracking, managing and disclosing the research and learning outputs of the institution: research data, learning materials, expertise profiles, research reports and papers, and so on. The convergence of these two trends means that the library now has shared interests with the Office of Research, as well as with other campus partners. As both the local institutional and public science policy interest in university outputs grows, this will become a more important area, and the library will increasingly be a partner. Research Information Management is a part of a slowly emerging view of how institutional digital materials will be managed more holistically, with a clear connection to researcher identity.

As noted above, this interest has been more pronounced outside the US to date, but will I think become a more general interest in coming years. It will also become of more general interest to libraries. Here are some contact points.

  • The institutional repository boundary. It is acknowledged that Institutional Repositories (IRs) have been a mixed success. One reason for this is that they are to one side of researcher workflows, and not necessarily aligned with researcher incentives. Although also an additional administrative overhead, Research Information Management is better aligned with organizational and external incentives. See for example this presentation (from Royal Holloway, U of London) which notes that faculty are more interested in the CRIS than they had been in the IR, 'because it does more for them'. It also notes that the library no longer talks about the 'repository' but about updating profiles and loading full-text. There is a clear intersection between RIMs and the institutional repository and the boundary may be managed in different ways. Hong Kong University, for example, has evolved its institutional repository to include RIMs or CRIS features. Look at the publications or presentations of David Palmer, who has led this development, for more detail. There is a strong focus here on improved reputation management on the web through effective disclosure of researcher profiles and outputs. Movement in the other direction has also occurred, where a RIMs or CRIS is used to support IR-like services. Quite often, however, the RIMs and IR are working as part of an integrated workflow, as described here.
  • Management and disclosure of research outputs and expertise. There is a growing interest in researcher and research profiles, and the RIMs may support the creation and management of a 'research portal' on campus. An important part of this is assisting researchers to more easily manage their profiles, including prompting with new publications from searches of external sources. See the research portal at Queen's University Belfast for an example of a site supported by Pure. Related to this is general awareness about promotion, effective publishing, bibliometrics, and management of online research identity. Some libraries are supporting the assignment of ORCIDs. The presentations of Wouter Gerritsma, of Wageningen University in The Netherlands, provide useful pointers and experiences.
  • Compliance with mandates/reporting. The role of RIMs in supporting research assessment regimes in various countries was mentioned earlier: without such workflow support, participation was expensive and inefficient. Similar issues are arising as compliance with institutional or national mandates needs to be managed. Earlier this year, the California Digital Library announced that it had contracted with Symplectic "to implement a publication harvesting system in support of the UC Open Access Policy". US Universities are now considering the impact of the OSTP memo "Increasing Access to the Results of Federally Funded Scientific Research," [PDF] which directs funding agencies with an annual R&D budget over $100 million to develop a public access plan for disseminating the results of their research. ICPSR summarises the memo and its implications here. It is not yet clear how this will be implemented, but it is an example of the growing science and research policy interest in the organized disclosure of information about, and access to, the outputs of publicly funded research. This drives a university-wide interest in research information management. In this context, SHARE may provide some focus for greater RIM awareness.
  • Management of institutional digital materials. I suggest above that RIM is one strand of the growing campus interest in managing institutional materials - research data, video, expertise profiles, and so on. Clearly, the relationship between research information management, whatever becomes of the institutional repository, and the management of research data is close. This is especially the case in the US, given the inclusion of research data within the scope of the OSTP memo. The library provides a natural institutional partner and potential home for some of this activity, and also expertise in what Arlitsch and colleagues call 'new knowledge work', thinking about the identifiers and markup that the web expects.

Whether or not Research Information Management becomes a new service category in the US in quite the way I have discussed it here, it is clear the issues raised will provide important opportunities for libraries to become further involved in supporting the research life of the university.


DuraSpace News: Registration Open: SHARE Hot Topics Community Webinar Series

planet code4lib - Mon, 2014-10-27 00:00

DuraSpace invites you to attend our tenth Hot Topics: The DuraSpace Community Webinar Series, "All About the SHared Access Research Ecosystem (SHARE)."  

Curated by Greg Tananbaum, Product Lead, SHARE

DuraSpace News: New Open Source Preservation Solution—Run Archivematica 1.3.0 Locally or in DuraCloud

planet code4lib - Mon, 2014-10-27 00:00

Archivematica 1.3.0 Features Full DuraCloud Integration

Cynthia Ng: Mozilla Festival Day 2: Webmaking in Higher Education

planet code4lib - Sun, 2014-10-26 15:08
We had a short session looking at how we might use Webmaker in a higher education context. Facilitator: Helen Lee. Open (free) tools we use: GitHub, Firefox, Webmaker, Drupal, WordPress/BuddyPress, Linux, Arduino, Pinterest and other social media platforms, FLAC. Why they are awesome: content is shareable, reusable/remixable; easy to use, quick to do; creates online […]

Cynthia Ng: Mozilla Festival Day 2: Notes from Having Fun and Sharing Gratitude in Distributed Online Communities

planet code4lib - Sun, 2014-10-26 13:02
Interesting session on Having Fun and Sharing Gratitude in Distributed Online Communities. Here are some notes. Facilitators: J. Nathan Matias (MIT Media Lab / Awesome Knowledge Foundation), @natematias, research on gratitude; Vanessa Gennarelli (P2PU), builds communities online. Fewer options to celebrate things together in distributed communities. Examples: Yammer Praise; KudoNow (performance review); Wikipedia Thanks (for […]

Open Knowledge Foundation: Let’s imagine a creative format for Open Access

planet code4lib - Sun, 2014-10-26 10:34

This post is part of our Open Access Week blog series to highlight great work in Open Access communities around the world. It is written by Celya Gruson-Daniel from Open Knowledge France and reports from “Open Access XSprint”, a creative workshop held on October 20 in the biohackerspace La Paillasse in Paris – as announced here.

More and more information is available online about Open Access. However, it’s difficult to process all this content when one is a busy PhD student or researcher. Moreover, people who are already informed and convinced are often the main spectators. The question thus becomes: how do we spread the word about Open Access to a large audience (researchers and students, but also people who are not directly concerned)? With the HackYourPhD community, we have been developing initiatives to invent new creative formats and to raise curiosity and interest about Open Access. Open Access Week was a perfect occasion to propose workshops to experiment with these kinds of formats.

An Open Access XSprint at La Paillasse

During Open Access Week, HackYourPhD and Sharelex designed a creative workshop called the Open Access XSprint (the X standing for media). The evening was held on October 20 in the biohackerspace La Paillasse in Paris with the financial support of a Generation Open grant (Right to Research Coalition).

The main objective was to produce appealing guidelines about the legal aspects and issues of Open Access through innovative formats such as live-sketching and comics. HackYourPhD has been working with Sharelex on this topic for several months. Sharelex aims to provide access to the law for everyone through collaborative workshops and forums. Initial content had been produced in French and was used during the Open Access XSprint.

One evening to invent creative formats about Open Access

The session brought together illustrators, graphic designers, students, and researchers. After a short introduction to get to know each other, the group discussed the meaning of Open Access and its definition. The first live-sketches and illustrations emerged.

Then, two groups were formed. One group worked on the different meanings of Open Access, with a focus on the Creative Commons licences.

The other group discussed the development of the different Open Access models and their evolution (Green Open Access, 100% Gold Open Access, hybrid journals, Diamond, Platinum). The importance of evaluation was raised; it appears to be one of the obstacles in the Open Access transition.

After an open buffet, each group presented its work. A future project was proposed: it will consist of personalizing a scientific article and inventing its different “lives” – an ingenious way to present the different Open Access models.

Also explore our Storify, “Open Access XSprint”.

Next Step: Improvisation Theatre and Open Access

To conclude the Open Access Week, another event will be organized on October 24 in a science center (Espace Pierre Gilles de Gennes) with HackYourPhD and Sharelex, and the financial support of Couperin/FOSTER.

This event aims at exploring new formats to communicate about Open Access. An improvisational theatre company will take part in the event. The presentations of different speakers about Open Access will be interspersed with short improvisations. The main topic of the evening will be the stereotypes and false ideas about Open Access. Bringing an entertaining and original view is a way to discuss Open Access with a large public, and perhaps a starting point to help people become curious and continue exploring this topic, which is crucial for researchers and all citizens.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Open Knowledge Foundation: Nature-branded journal goes Open Access-only: Can we celebrate already?

planet code4lib - Sun, 2014-10-26 10:23

This post is part of our Open Access Week blog series to highlight great work in Open Access communities around the world. It is written by Miguel Said from Open Knowledge Brazil and is a translated version of the original that can be found on the Brazilian Open Science Working Group's blog.

Nature Publishing Group reported recently that in October, its Nature Communications journal will become open access only: all articles published after this date will be available for reading and re-using, free of charge (by default they will be published under a Creative Commons Attribution license, allowing virtually every type of use). Nature Communications was a hybrid journal, publishing articles with the conventional, proprietary model, or as open access if the author paid a fee; but now it will be exclusively open access. The publishing group that owns Science recently also revealed an open access only journal, Science Advances – but with a default CC-NC license, which prevents commercial usages.

So we made it: the greatest bastions of traditional scientific publishing are clearly signaling support for open access. Can we pop the champagne already?

This announcement obviously has positive aspects: for example, lives can be saved in poor countries where doctors may have access to the most up-to-date scientific information – information that was previously behind a paywall, unaffordable for most of the Global South. Papers published under open access also tend to achieve more visibility, and that can benefit the research in countries like Brazil, where I live.

The overall picture, however, is more complex than it seems at first sight. In both cases, Nature and Science adopt a specific model of open access: the so-called "gold model", where publication in journals is usually subject to a fee paid by authors of approved manuscripts (the article processing charge, or APC). In this model, access to articles is thus open to readers and users, but access to the publication space is closed, in a sense, being only available to the authors who can afford the fee. In the case of Nature Communications, the APC is $5000, certainly among the highest in any journal (in 2010, the largest recorded APC was US $ 3900 – according to the abstract of this article… which I cannot read, as it is behind a paywall).

This amounts to two months of the net salary of a professor in state universities in Brazil (those in private universities would have to work even longer, as their pay is generally lower). Who is up for spending 15%+ of their annual income to publish a single article? Nature reported that it will waive the fee for researchers from a list of countries (which does not include Brazil, China, India, Pakistan and Libya, among others), and for researchers from elsewhere on a "case by case" basis – but they did not provide any further objective information about this policy. (I suspect it is better not to count on the generosity of a publisher that charges us $32 to read a single article, or $18 for a single piece of correspondence [!] from its journals.)

On the other hand, the global trend seems to be that the institutions with which researchers are affiliated (the universities where they work, or the scientific foundations that fund their research) bear part of these charges, partly because of the value these institutions attach to publishing in high-impact journals. In Brazil, for example, FAPESP (one of the largest research foundations in Latin America) provides a specific line of funding to cover these fees, and also considers them as eligible expenses for project grants and scholarships. As it happens, however, the funds available for this kind of support are limited, and in general they are not awarded automatically; in the example of FAPESP, researchers compete heavily for funding, and one of the main evaluation criteria is – as in so many situations in academic bureaucracy today – the researcher's past publication record:

Analysis criteria [...] a) Applicant's Academic Record a.1) Quality and regularity of scientific and / or technological production. Important elements for this analysis are: list of publications in journals with selective editorial policy; books or book chapters [...]

For this reason, the payment of APCs by institutions has a good chance of feeding the so-called "cumulative advantage" feedback loop, in which researchers who are already publishing in major journals get more money and more chances to publish, while the underfunded remain that way.

The advancement of open access via the gold model also involves another risk: the proliferation of predatory publishers. They are the ones that make open access publishing (with payment by authors or institutions) a business where profit is maximized through the drastic reduction of quality standards in peer review – or even the virtual elimination of any review: if you pay, you are published. The risk is that on the one hand, predatory publishing can thrive because it satisfies the productivist demands imposed on researchers (whose careers are continually judged under the light of the publish or perish motto); and on the other hand, that with the gold model the act of publishing is turned into a commodity (to be sold to researchers), marketable under high profit rates - even without the intellectual property-based monopoly that was key to the economic power mustered by traditional scientific publishing houses. In this case, the use of a logic that treats scientific articles strictly as commodities results in pollution and degradation of humankind's body of scientific knowledge, as predatory publishers are fundamentally interested in maximizing profits: the quality of articles is irrelevant, or only a secondary factor.

Naturally, I do not mean to imply that Nature has become a predatory publisher; but one should not ignore that there is a risk of a slow corruption of the review process (in order to make publishing more profitable), particularly among those publishing houses that are "serious" but do not have as much market power as Nature. And, as we mentioned, on top of that is the risk of proliferation of bogus journals, in which peer review is a mere facade. In the latter case, unfortunately this is not a hypothetical risk: the shady "business model" of predatory publishing has already been put in place in hundreds of journals.

Are there no alternatives to this commodified, market-oriented logic currently in play in scientific publishing? Will this logic (and its serious disadvantages) always be dominant, regardless of whether the journal is "proprietary" or open access? Well, not necessarily: even within the gold model, there are promising initiatives that do not adhere strictly to this logic – that is the case of the Public Library of Science (PLOS), an open access publishing house that charges for publication, but works as a nonprofit organization; because of that, it has no reason to eliminate quality criteria in the selection of articles in order to obtain more profits from APCs. Perhaps this helps explain the fact that PLOS has a broader and more transparent fee waiver policy for poor researchers (or poor countries) than the one offered by Nature. And finally, it is worth noting that the gold model is not the only open access model: the main alternative is the "green model", based on institutional repositories. This model involves a number of challenges regarding coordination and funding, but it also tends not to follow a strictly market-oriented logic, and to be more responsive to the interests of the academic community. The green model is hardly a substitute for the gold one (not least because it is not designed to cover the costs of peer review), but it is important that we join efforts to strengthen it and avoid a situation where the gold model becomes the only way for scientists and scholars in general to release their work under open access.

(My comments here are directly related to my PhD thesis on commons and commodification, where these issues are explored in a bit more detail – especially in the Introduction and in Chapter 4, pp. 17-20 and 272-88; unfortunately, it's only available in Portuguese as of now. This post was born out of discussions in the Brazilian Open Science Working Group's mailing list; thanks to Ewout ter Haar for his help with the text.)

Cynthia Ng: Mozilla Festival Day 1: Closing Keynotes

planet code4lib - Sat, 2014-10-25 17:35
We ended the first day with a closing plenary featuring numerous people. Mark Surman was back on stage to help set the context of the evening talks: 10 five-minute talks, relay race. Mobile and the Future: Emerging Markets and Adoption, Chris Locke. Emerging markets, explosion of adoption of mobile; social good example: mobile to […]

Karen Coyle: Citations get HOT

planet code4lib - Sat, 2014-10-25 17:07
The Public Library of Science research section, PLOSLabs (ploslabs.org) has announced some very interesting news about the work that they are doing on citations, which they are calling "Rich Citations".

Citations are the ultimate "linked data" of academia, linking new work with related works. The problem is that the link is human-readable only and has to be interpreted by a person to understand what the link means. PLOS Labs have been working to make those citations machine-expressive, even though they don't natively provide the information needed for a full computational analysis.

Given what one does have in a normal machine-readable document with citations, they are able to pull out an impressive amount of information:
  • What section the citation is found in. There is some difference in meaning whether a citation is found in the "Background" section of an article, or in the "Methodology" section. This gives only a hint to the meaning of the citation, but it's more than no information at all.
  • How often a resource is cited in the article. This could give some weight to its importance to the topic of the article.
  • What resources are cited together. Whenever a sentence ends with "[3][7][9]", you at least know that those three resources equally support what is being affirmed. That creates a bond between those resources.
  • ... and more
As an open access publisher, they also want to be able to take users as directly as possible to the cited resources. For PLOS publications, they can create a direct link. For other resources, they make use of the DOI to provide links. Where possible, they reveal the license of cited resources, so that readers can know which resources are open access and which are pay-walled.

This is just a beginning, and their demo site, appropriately named "alpha," uses their rich citations on a segment of the PLOS papers. They also have an API that developers can experiment with.
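
For the curious, here is a minimal sketch of what experimenting with that API might look like. The host, path, and response field names below are assumptions for illustration only; check the documentation on the demo site before relying on any of them.

    # Hypothetical sketch of querying the Rich Citations API (Python).
    # The endpoint and field names are assumptions, not the documented API.
    import json
    from urllib.parse import quote
    from urllib.request import urlopen

    API_BASE = "http://api.richcitations.org"  # assumed host

    def rich_citations(doi):
        """Fetch rich-citation metadata for one paper, identified by DOI."""
        resp = urlopen(API_BASE + "/papers?doi=" + quote(doi, safe=""))
        return json.loads(resp.read().decode("utf-8"))

    paper = rich_citations("10.1371/journal.pone.0000000")  # placeholder DOI
    for ref in paper.get("references", []):
        # e.g. which DOI is cited and in which sections/groups it appears
        print(ref.get("doi"), ref.get("citation_groups"))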

I was fortunate to be able to spend a day recently at their Citation Hackathon where groups hacked on ongoing aspects of this work. Lots of ideas floated around, including adding abstracts to the citations so a reader could learn more about a resource before retrieving it. Abstracts also would add search terms for those resources not held in the PLOS database. I participated in a discussion about coordinating Wikidata citations and bibliographies with the PLOS data.

Being able to datamine the relationships inherent in the act of citation is a way to help make visible and actionable what has long been the rule in academic research, which is to clearly indicate upon whose shoulders you are standing. This research is very exciting, and although the PLOS resources will primarily be journal articles, there are also books in their collection of citations. The idea of connecting those to libraries, and eventually connecting books to each other through citations and bibliographies, opens up some interesting research possibilities.

Open Knowledge Foundation: Open Access Week in Nepal

planet code4lib - Sat, 2014-10-25 16:23

This post is part of our Open Access Week blog series to highlight great work in Open Access communities around the world.

Open Access Week was celebrated for the first time in Nepal during its first two days, October 20 and 21. The event, which was led by the newly founded Open Access Nepal and supported by EIFL and R2RC, featured a series of workshops, presentations, and peer-to-peer discussions, plus training by country leaders in Open Access, Open Knowledge, and Open Data, including a three-hour workshop on Open Science and Collaborative Research by Open Knowledge Nepal on the second day.

Open Access Nepal is a student-led initiative that mostly includes MBBS students. Hence, most of the audience at the Open Access Week celebrations here were medical students, but engineering students, management students, librarians, professionals, and academics were also well represented. Participants discussed open access developments in Nepal and their roles in promoting and advancing open access.

EIFL and the Right to Research Coalition provided financial support for Open Access Week in Nepal. EIFL Open Access Program Manager Iryna Kuchma attended the conference as a speaker and workshop facilitator.

Open Knowledge Nepal hosted an interactive session on Open Science and Collaborative Research on the second day. The session was led by Kshitiz Khanal, Team Leader of Open Access / Open Science for Open Knowledge Nepal, with support from Iryna Kuchma and Nikesh Balami, Team Leader of Open Government Data. About 8-10 of the country's Open Access experts were present in the hall to assist participants. The session began half an hour before lunch: participants were first asked to brainstorm, until lunch was over, about what they think Open Science and Collaborative Research are, and about the challenges relevant to Open Access that they have faced or might face in their research endeavors. The participants were seated at round tables in groups of 7-8 persons, making a total of 5 groups.

After lunch, one team member from each group took a turn at the front to present a summary of their brainstorming on colored chart paper. Participants came up with nearly exact definitions and reflected the troubles researchers in the country have been facing regarding Open Access. As one can expect of industrious students, some groups impressed the session hosts and experts with interesting graphical illustrations.

Iryna followed the presentations with her own, in which she introduced the concept, principles, and examples related to Open Science. Kshitiz followed Iryna with his presentation on Collaborative Research.

The session on Collaborative Research featured industry-academia collaborations facilitated by government. Collaborative research needs more attention in Nepal, as World Bank data shows that the country's total R&D investment is equivalent to only 0.3% of GDP. The Lambert Toolkit, created by the Intellectual Property Office of the UK, was also discussed. The toolkit provides sample agreements for industry-university collaborations and multi-party consortia, plus a few decision guides for such collaborations. The session also introduced version control and discussed simple web-based tools for collaborative research like Google Docs, Etherpad, Dropbox, Evernote, and Skype.

On the same day, Open Nepal also hosted a workshop about open data, and the organizers hosted a session on the Open Access Button. The previous day's sessions introduced the audience to Open Access, Open Access repositories, and the growing Open Access initiatives all over the world.

This event dedicated to Open Access in Nepal was well received in the open communities of Nepal, which have mostly concerned themselves with Open Data, Open Knowledge, and Open Source software. A new audience became aware of the philosophy of Open. This author believes the event was a success story.

Nicole Engard: IL2014: More Library Mashups Signing/Talk

planet code4lib - Sat, 2014-10-25 14:03

I’m headed to Monterey for Internet Librarian this weekend. Don’t miss my talk on Monday afternoon followed by the book signing for More Library Mashups.

From Information Today Inc:

This October, Information Today, Inc.’s most popular authors will be at Internet Librarian 2014. For attendees, it’s the place to meet the industry’s top authors and purchase signed copies of their books at a special 40% discount.

The following authors will be signing at the Information Today, Inc. booth on Monday, October 27 from 5:00 to 6:00 P.M. during the Grand Opening Reception.

The post IL2014: More Library Mashups Signing/Talk appeared first on What I Learned Today....

Related posts:

  1. Heading to Internet Librarian!
  2. Call for Chapters: More Library Mashups
  3. Information Today Inc. Book Sale

Cynthia Ng: Mozilla Festival Day 1: CC Tools for Makers

planet code4lib - Sat, 2014-10-25 13:39
Creative Commons folks hosted a discussion on barriers and possible solutions to publishing and using CC-licensed content. Facilitators: Ryan Merkley (CEO), Matt Lee (Tech Lead), Ali Al Dallal (Mozilla Foundation). Our challenge: our tech is old, user needs are unmet (it can be confusing, people don’t know how to do attribution), focus on publishing vs. sharing […]

Cynthia Ng: Mozilla Festival Day 1: Notes from Opening Plenery

planet code4lib - Sat, 2014-10-25 09:16
Start of Mozilla Festival 2014 with the opening circle. CoderDojo: Mary Moloney, Global CEO, @marydunph, @coderdojo. A global community of free programming clubs for young people. Each one of you is a giant, because you understand technology and the options it can give people. How can I reach down to a young person and put them on […]

Galen Charlton: Testing Adobe Digital Editions 4.0.1, round 2

planet code4lib - Fri, 2014-10-24 21:04

Yesterday I did some testing of version 4.0.1 of Adobe Digital Editions and verified that it is now using HTTPS when sending ebook usage data to Adobe’s server adelogs.adobe.com.

Of course, because the HTTPS protocol encrypts the datastream to that server, I couldn’t immediately verify that ADE was sending only the information that the privacy statement says it is.

Emphasis is on the word “immediately”.  If you want to find out what a program is sending via HTTPS to a remote server, there are ways to get in the middle.  Here’s how I did this for ADE:

  1. I edited the hosts file to refer “adelogs.adobe.com” to the address of a server under my control.
  2. I used the CA.pl script from openssl to create a certificate authority of my very own, then generated an SSL certificate for “adelogs.adobe.com” signed by that CA.
  3. I put the certificate for my new certificate authority into the trusted root certificates store on my Windows 7 desktop.
  4. I put the certificate in place on my webserver and wrote a couple of simple CGI scripts to emulate the ADE logging data collector and capture what got sent to them (a minimal sketch of such a script appears below).
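
For illustration, here is a minimal sketch of what such a capture script could look like. This is an assumption-laden reconstruction (Python CGI, an arbitrary log path), not the actual scripts I used:

    #!/usr/bin/env python
    # Hypothetical capture endpoint for the faked adelogs.adobe.com host.
    # It appends whatever ADE POSTs to a local file, then replies with an
    # empty 200 so the client believes the upload succeeded.
    import os
    import sys
    import datetime

    # Read exactly as many bytes as the client declared.
    length = int(os.environ.get("CONTENT_LENGTH") or 0)
    body = sys.stdin.read(length)

    # Append a timestamped copy of the raw payload for later inspection.
    with open("/var/tmp/ade-capture.log", "a") as log:
        log.write(datetime.datetime.utcnow().isoformat() + " " + body + "\n")

    sys.stdout.write("Status: 200 OK\r\n")
    sys.stdout.write("Content-Type: text/plain\r\n\r\n")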

I then started up ADE and flipped through a few pages of an ebook purchased from Kobo.  Here’s an example of what is now getting sent by ADE (reformatted a bit for readability):

"id":"F5hxneFfnj/dhGfJONiBeibvHOIYliQzmtOVre5yctHeWpZOeOxlu9zMUD6C+ExnlZd136kM9heyYzzPt2wohHgaQRhSan/hTU+Pbvo7ot9vOHgW5zzGAa0zdMgpboxnhhDVsuRL+osGet6RJqzyaXnaJXo2FoFhRxdE0oAHYbxEX3YjoPTvW0lyD3GcF2X7x8KTlmh+YyY2wX5lozsi2pak15VjBRwl+o1lYQp7Z6nbRha7wsZKjq7v/ST49fJL", "h":"4e79a72e31d24b34f637c1a616a3b128d65e0d26709eb7d3b6a89b99b333c96e", "d":[ { "d":"ikN/nu8S48WSvsMCQ5oCrK+I6WsYkrddl+zrqUFs4FSOPn+tI60Rg9ZkLbXaNzMoS9t6ACsQMovTwW5F5N8q31usPUo6ps9QPbWFaWFXaKQ6dpzGJGvONh9EyLlOsbJM" }, { "d":"KR0EGfUmFL+8gBIY9VlFchada3RWYIXZOe+DEhRGTPjEQUm7t3OrEzoR3KXNFux5jQ4mYzLdbfXfh29U4YL6sV4mC3AmpOJumSPJ/a6x8xA/2tozkYKNqQNnQ0ndA81yu6oKcOH9pG+LowYJ7oHRHePTEG8crR+4u+Q725nrDW/MXBVUt4B2rMSOvDimtxBzRcC59G+b3gh7S8PeA9DStE7TF53HWUInhEKf9KcvQ64=" }, { "d":"4kVzRIC4i79hhyoug/vh8t9hnpzx5hXY/6g2w8XHD3Z1RaCXkRemsluATUorVmGS1VDUToDAvwrLzDVegeNmbKIU/wvuDEeoCpaHe+JOYD8HTPBKnnG2hfJAxaL30ON9saXxPkFQn5adm9HG3/XDnRWM3NUBLr0q6SR44bcxoYVUS2UWFtg5XmL8e0+CRYNMO2Jr8TDtaQFYZvD0vu9Tvia2D9xfZPmnNke8YRBtrL/Km/Gdah0BDGcuNjTkHgFNph3VGGJJy+n2VJruoyprBA0zSX2RMGqMfRAlWBjFvQNWaiIsRfSvjD78V7ofKpzavTdHvUa4+tcAj4YJJOXrZ2hQBLrOLf4lMa3N9AL0lTdpRSKwrLTZAFvGd8aQIxL/tPvMbTl3kFQiM45LzR1D7g==" }, { "d":"bSNT1fz4szRs/qbu0Oj45gaZAiX8K//kcKqHweUEjDbHdwPHQCNhy2oD7QLeFvYzPmcWneAElaCyXw+Lxxerht+reP3oExTkLNwcOQ2vGlBUHAwP5P7Te01UtQ4lY7Pz" } ]

In other words, it’s sending JSON containing… I’m not sure.

The values of the various keys in that structure are obviously Base64-encoded, but when run through a decoder, the result is just binary data, presumably the result of another layer of encryption.
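
As a quick illustration, that check is easy to reproduce with Python's standard library (the sample below is the first "d" value from the capture above):

    # Decode one captured "d" value and inspect the result; it comes out
    # as opaque, high-entropy bytes rather than readable text or JSON.
    import base64

    sample = ("ikN/nu8S48WSvsMCQ5oCrK+I6WsYkrddl+zrqUFs4FSOPn+tI60Rg9Zk"
              "LbXaNzMoS9t6ACsQMovTwW5F5N8q31usPUo6ps9QPbWFaWFXaKQ6dpzG"
              "JGvONh9EyLlOsbJM")
    raw = base64.b64decode(sample + "=" * (-len(sample) % 4))  # pad defensively
    print(len(raw), repr(raw[:16]))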

Thus, we haven’t actually gotten much further towards verifying that ADE is sending only the data they claim to.  That packet of data could be describing my progress reading that book purchased from Kobo… or it could be sending something else.

That extra layer of encryption might be done as protection against a real man-in-the-middle attack targeted at Adobe’s log server — or it might be obfuscating something else.

Either way, the result remains the same: reader privacy is not guaranteed. I think Adobe is now doing things a bit better than they were when they released ADE 4.0, but I could be wrong.

If we as library workers are serious about protecting patron privacy, I think we need more than assurances — we need to be able to verify things for ourselves. ADE necessarily remains in the “unverified” column for now.

Nicole Engard: Bookmarks for October 24, 2014

planet code4lib - Fri, 2014-10-24 20:30

Today I found the following resources and bookmarked them:

  • Klavaro: Klavaro is just another free touch-typing tutor program. We felt like doing it because we became frustrated with the other options, which relied mostly on a few specific keyboards. Klavaro intends to be keyboard- and language-independent, saving memory and time (and money).

Digest powered by RSS Digest

The post Bookmarks for October 24, 2014 appeared first on What I Learned Today....

Related posts:

  1. My new keyboard
  2. Learn a New Language
  3. Track Prices on Amazon with RSS

CrossRef: CrossRef and Inera Recognized at New England Publishing Collaboration Awards Ceremony

planet code4lib - Fri, 2014-10-24 19:53

On Tuesday evening, 21 October 2014, Bookbuilders of Boston named the winners of the first New England Publishing Collaboration (NEPCo) Awards. From a pool of ten finalists, NEPCo judges October Ivins (Ivins eContent Solutions), Eduardo Moura (Jones & Bartlett Learning), Alen Yen (iFactory), and Judith Rosen of Publishers Weekly selected the following:

  • First Place: Inera, Inc., collaborating with CrossRef

  • Second Place (Tie): Digital Science, collaborating with portfolio companies; and NetGalley, collaborating with the American Booksellers Association

  • Third Place: The Harvard Common Press, collaborating with portfolio companies

Based on an embrace of disruption and the need to transform the traditional value chain of content creation, the New England Publishing Collaboration (NEPCo) Awards showcase results achieved by two or more organizations working as partners. Other companies short-listed for the awards this year were Cenveo Publisher Services, Firebrand Technologies, Focal Press (Taylor & Francis), Hurix Systems, The MIT Press, and StoryboardThat.

Criteria for the awards included results achieved, industry significance, depth of collaboration, and presentation.

An audience voting component was included--Digital Science was the overall winner among audience members.

Keynote speaker David Weinberger, co-author of Cluetrain Manifesto and senior researcher at the Harvard Berkman Center, was introduced by David Sandberg, co-owner of Porter Square Books.

Source: Bookbuilders of Boston http://www.nepcoawards.com/

Eric Lease Morgan: Doing What I’m Not Suppose To Do

planet code4lib - Fri, 2014-10-24 18:09

I suppose I’m doing what I’m not supposed to do. One of those things is writing in books.

I’m attending a local digital humanities conference. One of the presenters described and demonstrated a program from MIT called Annotation Studio. Using this program a person can upload some text to a server, annotate the text, and share the annotations with a wider audience. Interesting!?

I then went for a walk to see an art show. It seems I had previously been to this art museum. The art was… art, but I did not find it beautiful. The themes were disturbing.

I then made it to the library where I tried to locate a copy of my one and only formally published book — WAIS And Gopher Servers. When I was here previously, I signed the book’s title page, and I came back to do the same thing. Alas, the book had been moved to remote storage.

I then proceeded to find another book in which I had written something. I was successful, and I signed the title page. Gasp! Considering the fact that no one had opened the book in years, and the pages were glued together I figured, “What the heck!”

Just as importantly, my contribution to the book — written in 1992 — was a short story called “A day in the life of Mr. D”. It is an account of how computers would be used in the future. In it, a young boy uses a computer to annotate a piece of text, and he gets to see the text of previous annotators. What is old is new again.

P.S. I composed this blog posting using an iPad. Functional but tedious.
