Planet Code4Lib - http://planet.code4lib.org

Open Knowledge Foundation: Announcing the Frictionless Data Tool Fund: Apply for a mini-grant to progress and improve our specifications

Wed, 2017-03-01 14:20

Today, Open Knowledge International is launching the Frictionless Data Tool Fund, a mini-grant scheme offering grants of $5,000 to help extend implementations of software libraries — code that is used to develop software programs and applications — for the Frictionless Data specifications by developing them in a range of programming languages.

The focus of the Frictionless Data project is on building tools that help people remove the friction in working with data. These library implementations will support the development of this suite. We are looking for individual developers and organizations to help us improve the specifications and implement further work. The fund will be accepting submissions from now until 31st July 2017 for work which will be completed by the end of the year.

Last year a working group was set up to progress the specifications to our first 1.0 release, and we now have an excellent foundation to add further implementations to complement our core libraries. The Tool Fund is part of the Frictionless Data project at Open Knowledge International, where we are addressing issues related to frictions that occur when working with data. We are doing this by developing a set of tools, standards, and best practices built around the Data Package standard, a containerisation format for any kind of data based on existing practices for publishing open-source software.
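To make the Data Package idea concrete, below is a minimal sketch of the descriptor file at the heart of the standard. The file name, datapackage.json, is fixed by the specifications, but the package name, resource path, and schema fields here are invented for illustration:

{
  "name": "example-package",
  "resources": [
    {
      "name": "example-data",
      "path": "data/example.csv",
      "schema": {
        "fields": [
          { "name": "id", "type": "integer" },
          { "name": "title", "type": "string" }
        ]
      }
    }
  ]
}

An implementation in any language is expected to read, validate, and write descriptors like this one; that is the kind of core functionality the reference libraries demonstrate.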

Currently, Open Knowledge International maintains reference implementations for the specifications in two programming languages: Python and Javascript. In conjunction with the specifications themselves, these implementations exhibit the core functionality desired for working with data and demonstrate the type of functionality we expect implementations in other programming languages to have. We also have implementations in R and Ruby, stewarded by our technical partners. We want to build on these solid implementations of the 1.0 specifications across a range of programming languages and work with implementers to further improve the specifications and how they are implemented, getting us closer to our ideal of “frictionless” data transport across a diversity of data applications.

This is an initial list of languages we think are going to be of most benefit to the project. However, we welcome suggestions of languages which are not found here.

  • Go
  • PHP
  • Java
  • C#
  • Swift
  • C++
  • Perl
  • Matlab
  • Clojure
  • R

For more information and a full understanding of what is required, take a close look at our implementation reference documentation and the v1.0 specifications on the dedicated Tool Fund Page on the Frictionless Data site.

Applications can be made from today by submitting this form.

The Frictionless Data team will select one implementer per language and notify all applicants of the outcome by the end of July at the latest.

For any questions about the fund, speak directly to us on our forum or on our Gitter chat.

Access Conference: Call for proposals open

Wed, 2017-03-01 11:30

The call for proposals for Access 2017 in Saskatoon is open!

Submit your proposal by April 5th. We are looking for ideas for:

  • 20 min presentations (15 min presentation, ~5 min questions)
    • These could be demos, theory or practice, case studies, original research, etc.
    • These submissions will be double blind peer-reviewed
  • 30 min panel sessions
  • 5 min lightning talks

Questions? Contact us at accesslibcon@gmail.com

Open Knowledge Foundation: Presenting the value of open data to NGOs: A gentle introduction to open data for Oxfam in Kenya

Wed, 2017-03-01 10:50

Open Knowledge International is a member of OD4D, a global network of leaders in the open data community, working together to develop open data solutions around the world. Here, Simeon Oriko talks about his work carrying out an embedded data fellowship with Oxfam in Kenya as part of the Open Data for Development (OD4D) programme. 

For years, traditional non-governmental organisations (NGOs) in Kenya have focused on health, humanitarian and aid initiatives. Most still engage in this type of work but with an increasing shift toward adopting modern approaches that respect historical experience and consider future emerging constraints. Modern approaches such as social innovation models and using solution-based processes such as design thinking are increasingly commonplace in NGOs. Open data practices are also being adopted by some NGOs with the aim of mobilizing all their resources for transformative change.

Image credit: Systems thinking about the society by Marcel Douwe Dekker CC BY 3.0

As part of an ongoing embedded data fellowship programme with Open Knowledge International, I had the opportunity late last year to meet with some Oxfam in Kenya staff and talk to them about their priorities and their organizational shift towards ‘influencing’ and ‘advocacy’. We spoke about the strategies they would like to adopt to make this shift happen. These meetings helped identify key areas of need and opportunities for the organization to adopt open data practices.

I learned from my conversations that the organization collects and uses data but most of it is not open. It was also made plain that staff faced significant challenges in their data workflow; most lack the skills or technical know-how to work with data and often heavily rely on the Monitoring and Evaluation team to handle data analysis and other key processes.

Oxfam in Kenya expressed interest in building their knowledge and capacity to work with data. One tangible outcome from my conversations was a plan to hold workshops to sensitize them to the value of open data. We held two workshops as a result: one with Oxfam in Kenya staff and the other with some of their partner organizations.

Open Data Workshop with Oxfam in Kenya Partners – January 26th, 2017

On January 13th 2017, Oxfam in Kenya hosted me for a 3-hour workshop on open data. The training focused on 2 things:

  • a basic understanding of what open data is and what it does, and,
  • helping them understand and map out its potential strategic value in their organization.
The training emphasized 3 key characteristics of open data:
  • Availability and Access: We spoke on the need for data to be available as a whole and at no more than a reasonable reproduction cost. We emphasized the need for data to be available in a convenient and modifiable form. It became clear that providing an easy way to download data over the internet was the preferred way of making data easily available and accessible.
  • Reuse and Redistribution: The training emphasized the use of licenses or terms under which the data must be provided. Licenses and terms that permit reuse and redistribution including the intermixing with other datasets were recommended. We briefly explored using Creative Commons Licenses to license the data openly.
  • Universal Participation: The idea that anyone must be able to use, re-use and redistribute the data was emphasized as important in both principle and practice.

Part of the training showcased tools and visualizations to show the strategic value of open data. This was helpful in making concrete the concept of open data to the majority in the room who either had a vague idea of open data, or were learning about it for the very first time.

In follow-up meetings, there was an express interest in sharing this knowledge with some of Oxfam in Kenya’s partner organizations. This conversation led to the second workshop, hosted by Oxfam’s tax justice team for some of their partner organizations: Inuka ni Sisi, Caritas Lodwar, National Taxpayers Association (NTA) and ALDEF. This training took place in Nairobi on January 26th 2017. The majority of participants in this workshop ran or worked in grassroots organizations in various parts of Kenya, including Wajir and Turkana.

The training followed the same format and highlighted the same themes as the one carried out for staff. We also took time during this training to help participants identify data sources and data sets that were relevant to their programs. Some participants were keen to find revenue and expenditure data from their local County Governments. A few more were interested in comparing this data with population distribution data in their regions. Most of this updated data is available on Kenya’s national Open Data portal. This granted us the opportunity to explore data publishing tools.

From conducting these trainings, I learned of a clear need to offer very simple, entry-level workshops to expose traditional NGOs and their local partner organizations to open data and its value. Many of these organizations are already working with data in one way or another but few understand and reap the benefits that data can accord them due to a lack of knowledge and capacity.

In the interest of helping these organizations mobilize their resources for transformative change, I believe actors in the global open data community should seek out local development actors and create programs and tools that easily onboard them into the open data ecosystem, with a focus on building their capacity to work with data and collaborating with them to drive increased adoption of open data practices at the grassroots and community level.

In collaboration with the OD4D program, Open Knowledge International coordinates the embedded fellowship programme, which places an open data expert in a CSO for 3 months to provide support in thinking through and working with open data.

LibUX: Zuboff’s Law and the Knowledge Factory

Wed, 2017-03-01 06:12

Friends of mine are in the very early stages of planning a sort of cooperative intended to create professional opportunities for beginning developers while getting local non-profits and small businesses in underserved communities online. Our channel in Slack is #code-op – you know, co-op but with “code.”

Anyway, this idea’s been kicking around for a couple years, in part inspired by Laurie Voss’s article posted in 2012 titled “Blue-collar knowledge workers will save the economy.” It’s compelling.

The fundamental question is: if there’s 15% unemployment in one industry and 3% in another, why aren’t people switching jobs? One problem is that knowledge work requires high levels of education. A lot has been said about America’s failure to educate its children in math and sciences, and those points are all valid: a huge increase in investment in education at all stages is necessary, and a refocusing of priorities towards the sciences is a good idea.

 What’s talked about less is the obvious fact that not everyone can be a manager, a programmer, a doctor or an accountant. … So that’s my first point: people aren’t switching jobs because the jobs available are too specialized and complicated for them to do.

 Think about how a physical factory worked. The reason unskilled jobs in manufacturing, say, cars existed is because some very highly skilled people first got together and looked at the process of building a car and said “okay, we can automate this bit and this bit and this bit, but making machines to do this bit is way too hard”. The blue collar workers of Detroit didn’t know how to build cars: they knew how to operate one particular bit of machinery. The only people who knew how to build cars were the guys who put the machines together. …

It’s time to build knowledge factories. Where are the website factories? Obviously there are dev shops and agencies that employ hundreds of people and reap economies of scale and specialization, but those aren’t factories as I just described them. If you wanted to follow the model of a factory, then a few very skilled developers would get together and design really good, generic websites: heavily researched, carefully constructed, configured to work with minimal maintenance for long periods under high loads. Then they’d train a bunch of significantly less skilled people to do the “final assembly”: slap the images and text into place, move pages around, turn features on and off. All via highly specialized tools requiring no programming expertise whatsoever and maybe a week’s training. …

 And the analogy holds true, to lesser or greater degrees, across much of the software industry. We need to stop building software for each customer and start building software assembly lines: harder, less fun, but hundreds of times more productive — and profitable. And once we’ve built the assembly lines, a new generation of blue-collar knowledge workers will be able to step up, doing the things that robots can’t do, just like they did before.

Voss’s politics bleed through, but the concept of “knowledge factories” captures my fancy.

We already have to look past the daydream of web design by assembly line. In these few short years, we have already crossed the software requirement: services like SquareSpace, Shopify, or Wix have so democratized and lowered the barrier to entry that their websites often look indistinguishable from custom jobs, and we are also toying with AI that lays websites out to best suit their content.

We leapfrogged “blue-collarization.”

The landscape rapidly changes. Cars were mostly built the same way for decades, but software is fluid. We can train a bunch of people to build pieces of applications, but the way applications will be built in the future will change.

It is not sufficient to know React because React won’t always be here. You need to know how to learn. You need to know what to learn.

To that end, I am not sure in our internet world “blue collar” translates. You are either a self-starter, self-learner, gig-economist – or you are not. We may be able to create jobs by training people to write code that is so modular it doesn’t require much adaptation to circumstances, but that is fleeting – an ever diminishing skill that will leave people behind.

Not to mention, what can be automated – will.

Zuboff’s Laws: 1.) Everything that can be automated will be automated. 2.) Everything that can be informated will be informated.

The reality is that although we can keep moving the bar to describe different kinds of software development, it becomes easier and easier, to the point that value and attraction — thus, the money — declines. We must wonder whether “knowledge factories,” like factories, are always on the downward slope toward automation.

If we decide that what’s common to the definition of “blue collar” is repetitive work, there will come a time when there is no repetitive work – at least in software.

Instead, the question then is whether it is possible to create this same kind of “knowledge factory” supplying work that proves to be difficult to automate.

I think the answer could be in the humanities.

District Dispatch: Happy Anniversary, Open Internet Order

Tue, 2017-02-28 20:09

Federal policymaking is not exclusively about regulatory minutiae; sometimes we celebrate milestones. Yesterday, Office for Information Technology Policy (OITP) Director Alan Inouye and I joined dozens of Congressional staffers, policy analysts and advocates in the Cannon House Office Building in honor of the second anniversary of the Open Internet Order.

The star speaker was FCC Commissioner Mignon Clyburn, who highlighted the role of public support in 2015 in making the Order possible. Fortunately, The Hill captured verbatim the crux of Commissioner Clyburn’s comments in a quote that I (despite my furious notetaking) couldn’t quite get:

FCC Commissioner Mignon Clyburn speaks to net neutrality advocates on the second anniversary of the Open Internet Order  (Photo Credit: Alan Inouye)

“For me it can be summed up in this way: How do we ensure that one of most inclusive, enabling, empowering platforms of our time continues to be one where our applications, products, ideas and diverse points of view have the exact same chance of being seen and heard by everyone, regardless of our class, race, economic status or where we live?”

The Open Internet Order allows the internet to remain the platform for equal access to information that it is. Everyone who uses the internet benefits from the fair playing field provided by the Order. But it holds special significance for minorities and people in low-income or underserved communities because it vests in the FCC the authority to enforce rules against anti-consumer practices that can lead to discrimination.

Imagine the internet without net neutrality. Information collected by internet service providers could be used as a proxy to enable racial profiling and target specific communities, leading to price gouging in underserved areas, predatory lending to certain groups and a host of other abuses. More importantly, a commodified internet could stifle the voices of minorities and groups who historically have struggled to be heard. Without network neutrality, this platform for a free exchange of ideas is vulnerable to a form of censorship by a handful of companies who would profit from it.

To undermine network neutrality is to undercut libraries’ core values.

Hosted by four of ALA’s advocacy allies – the Center for Media Justice, Color of Change, Free Press and National Hispanic Media Coalition – the event focused on the social justice enabled by net neutrality. ALA joined these and many other organizations last December in signing the Technology Rights and Opportunity principles, “advocating for policies that ensure freedom of speech and equality of opportunity for all, while expanding the ability of the internet to drive economic opportunity and education.” These principles clearly echo the core values of libraries. To undermine network neutrality is to undercut our core values.

Actions taken and statements made by FCC Chairman Ajit Pai confirm fears about threats to network neutrality identified by Larra Clark, Krista Cox and Kara Malenfant in a District Dispatch post earlier this year. “Free speech and free expression ensured by net neutrality will not be easy to defend,” said Commissioner Clyburn, and it will take a groundswell of grassroots support to maintain the protections of the 2015 Open Internet Order. ALA will play a big role in that groundswell through strengthening ties with other advocacy groups to defend net neutrality and our core values.

The internet is now a necessity for everyone, not a luxury for a few, and as Commissioner Clyburn put it, consumers expect there to be “a cop on the beat” – somebody protecting their interests in this broadband world. The FCC, says Clyburn, should be that “cop.” Whether or not the FCC will enforce network neutrality, one thing is for certain: librarians worked hard to gain the equal access to online information ensured by the Open Internet Order, and we’re not about to give it up without a fight.

The post Happy Anniversary, Open Internet Order appeared first on District Dispatch.

Open Knowledge Foundation: Why you should take 10 minutes to look at the Open Data Roadmap this Open Data Day

Tue, 2017-02-28 17:00

March 4th is Open Data Day! Open Data Day is an annual celebration of open data all over the world. For the seventh time in history, groups from around the world will create local events on the day where they will use open data in their communities.


For me, Open Data Day is a special day. This is not because I am busy organising it, but because I am always inspired by the different activities that we can all pull off as a community one weekend every year. Let’s be fair, while International Pancake day, which is celebrated today, is delicious, Open Data Day is important. It shows our strength as a community and brings new people to the discussions.

Open Data Day in Peru 2016

We all know, however, that open data is not only a one-day thing. It is a 365-day effort. Don’t get me wrong: even if you have done one event this year, and it is Open Data Day, you are fabulous! I do think, however, that this is a time to mention others in the community working all year round to make progress on different international topics, whether by promoting the International Open Data Charter, working on standards for contracting, or creating the world’s biggest open data portal for humanitarian crises. At the regional level, we see great examples in initiatives like AbreLatam/ConDatos or the African Open Data Conference.


Open Data Day, whether celebrated locally or on a global scale, is a good time to reflect on what happens in other places, and on how you (yes, you!) can help shape this open data ecosystem where we work. I believe that if it’s open, everyone should have a right to express their opinions.

Lucky for us, there is a tool that tries to look at the community’s burning topics and set the way forward. It is called the International Open Data Conference Roadmap, and it is waiting for you to interact with and shape further.

Before you leave this post and read something else, I know what you might be thinking. It goes somewhere along the lines of “Mor, but who cares about my opinion when it comes to such high-level strategy?” Well, the whole community cares! I wrote this blog about the IODC just a year and a bit ago, and look, now I can actually help shape this event. And who am I really? I am not a CEO of anything or a government official. I don’t think that only the noisy people (like me…) should be the ones shaping the future. This is why your written opinion matters to us, the authors of the roadmap. Without it, this whole movement will stand still, and without people understanding and working with the roadmap, we will not go anywhere.


I am aware that this post might come too late for some of you: your schedule for Open Data Day is full, you need more time to get organised, etc. Got 30 minutes? Here is my suggested activity with the report, and I would love to get comments on it on our forum! Got only 10 minutes? Pick a topic from the roadmap, the one you feel most connected to, read about it, and write a comment about it on our forum.

Activity suggestion: International Open Data Roadmap – what are we missing?

Time: 30 minutes

Accessories: Laptops, post-its, pens, good mood.  

Number of participants: 2-15

Activity:

Step 1: Read the Roadmap’s main actions to the group:

  • Open Data principles – Broaden political commitment to open data principles
  • Standards – Identify and adopt user-centric open standards
  • Capacity building – Build capacity to produce and effectively use open data
  • Innovation – Strengthen networks to address common challenges
  • Measurement – Make action on open data more evidence-based
  • SDG – Use open data to support the sustainable development agenda


Step 2: Choose one action. If you have more than 4 people, divide into groups of up to 4 people.


Step 3: Read about the actions and what they mean in the report (pages 33-43). Discuss the meaning of the action in your group. Do you understand it? If not, what are you missing to understand it better? If yes, do you agree with it?


Step 4: On a post-it, write what you think can help us act on and complete the actions, or what is missing.


Step 5: Take a picture of your post-it and upload it to the forum with an explanation. You are also welcome to share it on Twitter using the hashtag #IODCRoadmap.


I will run this session at the London Open Data Day Do-a-thon. If you are around, ping me at mor.rubinstein@okfn.org or on Twitter – @morchickit

Have a great Open Data Day event! Don’t forget to tweet about it with #opendataday and send us your posts!

LibUX: First look at Primo’s new user interface

Tue, 2017-02-28 06:21

Ex Libris recently did a complete overhaul of the user interface for their Primo discovery layer. They did a great job and developed a UI that is responsive, modern, and (mostly) easy to use.

In this article I’d like to first look at some key innovations of Primo’s new UI and then discuss the process of implementing local customizations and some of the challenges involved.

The New UI

I’m not a big fan of tabbed interfaces and was always bothered by the distribution of item information across tabs in the old Primo interface (e.g., Locations, Details, and Browse). As a user, I found that even when I saw exactly what I wanted in a list of results, I still had to spend a second thinking about where I needed to click next (assuming I wanted more than just a call number).

Old UI item view. Location info not immediately visible. Actions are under a menu.

The new UI displays everything in a single panel, so anything you want to know about an item is only one click away from the result list. Actions are arrayed across the top rather than being listed in a menu.

New UI item view. Location info is visible. Actions are visible without activating a menu.

With the new UI, Ex Libris made huge strides toward mobile usability. The old UI was usable, but not very pretty. Some features, most notably facets, were not usable. The new UI is a tremendous improvement. I’m especially fond of the facets menu as a sticky footer, a design choice that will keep the possibility of refinement in the user’s mind as she scrolls through results.

Old UI on iPhone 6s, Safari.

New UI on iPhone 6s, Safari.

As with any complex UI, not everything is perfect. The product is still in development and Ex Libris is actively making improvements, many based on input from librarians via the Primo listserv and the Ex Libris Ideas platform. They recently ditched the unpopular “infinite scroll” results list in favor of paginated results, based on accessibility concerns from customers. They’re still addressing a problem with the new UI in “private browsing” mode on iOS.

The speed of the new UI is not good. It seems to have improved marginally with the most recent release, but it still feels slow. This slowness is especially apparent when a facet is applied to a search, requiring a complete redraw of the screen. If you accidentally click on the wrong facet, then have to remove that facet and choose another one, that’s three redraws, and probably at least one expletive.

Possibly related to the speed problem is the number of animations. For the most part, these are subtle, professional, and not distracting. They make the interface feel interactive and modern. But there are an awful lot of them and I wonder what cost they exact on performance.

Customization

For those of us who like to customize and add functionality to library systems, a new interface can mean a lot of work. The new Primo UI was no exception.

Angular?

Very early on, the Primo community learned that the new interface would be based on Angular JS, a JavaScript framework mainly maintained by Google. This seemed like an odd decision on the part of Ex Libris. It isn’t surprising that the Ex Libris developers wanted to work in Angular — it’s a popular framework. For developers.

What surprises me is that they expected librarians to work in Angular. An “average” library, whatever that is, almost certainly has some internal expertise in CSS, basic JavaScript, and jQuery. But Angular? Not as likely.

At my library, there was no internal expertise. So it was off to the Internet. There are many great tutorials out there on Angular JS, and the framework itself has excellent documentation. But the tutorials are generally project based: “Build an X with Angular JS,” where X is some variety of search/display interface on top of some JSON data.

So despite the knowledge I’d gained from the tutorials, I still had no idea how to make even simple modifications to the new Primo UI.

It turns out that the key to inserting content into Primo is something called an Angular “component,” a concept barely touched upon in most tutorials and books. Ex Libris liberally sprinkled their new UI with Angular “directives” (i.e., custom HTML elements) to which developers can add code by creating a corresponding component. The component, in turn, may optionally refer to a “controller,” which is where you can insert your own programming logic. For most things, it’s not terribly difficult once you get used to that model, but it feels very restrictive. What if I want to do something in a location where there isn’t a pre-defined directive? And what if I want to alter existing content rather than create something new?

Editor: Ron gave me the okay to duck in for a moment to elaborate. An Angular component can be a little weird if you’re predominantly coming from jQuery. Think of a component like a custom HTML element that has styles and functionality baked into it. For instance, you could make a carousel using an arrangement of <div>s and images, and wherever you want to place a carousel you must copy all that markup. Or, you could write it just once and tell Angular to — from now on — recognize the <slider> element instead. Components are useful both for substituting a little code for a lot of code and as a way to sanitize and streamline how people — like librarians customizing Primo — use it.  — @schoeyfield
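To make that model concrete, here is a minimal sketch of a Primo customization component in AngularJS. The module name viewCustom and the prmSearchBarAfter hook follow conventions from the Ex Libris development package, but treat both names as assumptions to check against your own view:

// A minimal sketch of a new-UI customization (AngularJS 1.5+).
// The module and hook names below are assumptions based on the
// Ex Libris development package conventions; verify them locally.
var app = angular.module('viewCustom', []);

// The component name matches the <prm-search-bar-after> hook
// directive, so Angular renders this template wherever that
// directive appears in the UI.
app.component('prmSearchBarAfter', {
  template: '<div class="custom-banner">{{$ctrl.message}}</div>',
  controller: function () {
    // The controller is where your own programming logic lives.
    this.message = 'Need help? Ask a librarian.';
  }
});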

The Development Environment

Ex Libris was nice enough to provide a development environment via GitHub so that developers can have a local sandbox for the new interface. This includes a ready-made Gulp workflow to deal with things like minification, concatenation, and bundling the whole thing up into a package that can be uploaded to the Primo Back Office for live implementation.

Again, this is nice if you’re familiar with the technologies, but I get the impression that not all librarians are finding this as approachable as its creators intended.

The Customizations

Ithaca College Library has implemented a few additions in the current version of Primo: a “text this call number” feature, a “not on shelf” (trace request) link, a “notify me on arrival” feature for on-order items, and a map highlighting a physical item’s location.

We’ve managed to implement all of these features in the new UI, but it has been a long process, with a lot of false starts. Once I figured out that I could often drop existing code into an Angular controller with only minor modifications, things went a lot faster.

Lessons Learned

The work we’ve done at Ithaca College so far would have been impossible without the support of the Primo listserv and the Slack group that formed following an Ex Libris-sponsored webinar. There are people in these groups who are far better programmers than I am, and I’ve learned a lot from them. I can’t think of another project I’ve worked on where success has been so dependent on the kindness of strangers.

While I enjoy extolling the generosity and creativity of the many Primo-using librarians out there, the dependence on informal information sharing points to the inadequacy of Ex Libris’s documentation. As with any product, quality documentation in a single location goes a long way toward meeting the needs of customers.

Access Conference: Access 2017 – Runnin’ back to Saskatoon

Tue, 2017-02-28 04:13


It has been 19 years since Access was last hosted in Saskatoon, but we just couldn’t wait for our 20 year anniversary to make that Guess Who joke. Join us September 27-29th, 2017 for the Access conference in downtown Saskatoon, Saskatchewan at the Sheraton Cavalier Hotel overlooking the scenic South Saskatchewan River for three days of cutting-edge library technology, collaboration, and conversation.

The hackathon will take place on Wed., Sept. 27th and the conference presentations on Thurs., Sept. 28th & Fri., Sept. 29th. Watch this site or follow our Twitter channel (@accesslibcon) for updates.

Important dates:

  • call for proposals will be posted on March 1st
  • registration will open April 15th

Check out the FAQ for more details.

Access 2017 is hosted by the University of Saskatchewan Library and your local organizing committee Craig Harkema, Shannon Lucky, and Jaclyn McLean. Please contact us with any questions or suggestions at accesslibcon@gmail.com or on Twitter @accesslibcon


DuraSpace News: Subscribe to the Hydra-In-A-Box Update

Tue, 2017-02-28 00:00

This month's issue of Hydra-In-A-Box Update is now available with news and information about community progress, plans, and pilots: 

DuraSpace News: CATCH UP with DSpace 7: Web Seminar Recording Available

Tue, 2017-02-28 00:00

Austin, TX  Did you miss today's Hot Topics webinar—Introducing DSpace 7?  Claire Knowles, The University of Edinburgh, Art Lowel, Atmire, Andrea Bollini, 4Science, and Tim Donohue, DuraSpace, highlighted key aspects of DSpace 7 development with an overview of the Rest API and Angular UI. Listen to the recording and download the slides now available here: http://duraspace.org/hot-topics

DuraSpace News: Curtin University Trades Digitool for DSpace

Tue, 2017-02-28 00:00

From Bram Luyten, Atmire

Heverlee, Belgium  Atmire migrated Curtin's institutional repository espace from Digitool to DSpace 5. Read this article to learn more about the project and the new espace features!

DuraSpace News: Managing Cultural Heritage with DSpace–Yes We Can!

Tue, 2017-02-28 00:00

From Michele Mennielli, International Business Developer 4Science  

Rome, Italy  DSpace provides many of the features of a Digital Library Asset Management System (DAMS); indeed, it is already used as a DAMS by hundreds of institutions all around the world. DSpace stores, preserves and disseminates digital cultural heritage content, fulfilling the four main tasks required of any Digital Library System:

1. ingestion of digital objects together with their metadata;

DuraSpace News: Oslo and Akershus University College (HiOA) Launch DSpace IRs on KnowledgeArc Platform

Tue, 2017-02-28 00:00

From Michael Guthrie, KnowledgeArc

Hove, UK Oslo and Akershus University College (Høgskolen i Oslo og Akershus or HiOA) have deployed their two new institutional repositories, HiOA Open Digital Archive and HiOA Fagarkivet on the KnowledgeArc managed, hosted DSpace platform.

DuraSpace News: VIVO Updates Feb 26–Camp, Ontologies, Strategy, Outreach

Tue, 2017-02-28 00:00

VIVO Camp Registration extended.  We have a great group signed up to learn about VIVO at VIVO Camp, April 6-8 in Albuquerque, New Mexico.  Registration has been extended and will remain open until we're full.  Register today for Camp here.

LibUX: Listen: Trey Gordner and Stephen Bateman from Koios (23:08)

Mon, 2017-02-27 22:26

In this episode, Trey Gordner and Stephen Bateman from Koios join Amanda and Michael to chat about startup opportunities in this space, the opportunity and danger of “interface aggregation,” the design of their new service Libre, and more.

These two were super fun to interview.

You can also download the MP3 or subscribe to Metric: A UX Podcast on Overcast, Stitcher, iTunes, YouTube, Soundcloud, Google Music, or just plug our feed straight into your podcatcher of choice.

Open Knowledge Foundation: 7 ways the ROUTE-TO-PA project has improved data sharing through CKAN

Mon, 2017-02-27 10:01

Data sharing has come a long way over the years. With open source tools, improvements and new features are always quickly on the horizon. Serah Rono looks at the improvements that have been made to open source data management system CKAN through the course of the ROUTE-TO-PA project. 

In the present day, 5MB worth of data would probably be a decent photo, a three-minute song, or a spreadsheet. Nothing worth writing home about, let alone splashing across front pages of mainstream media. This was not the case in 1956, though – in September of that year, IBM made the news by creating a 5MB hard drive. It was so big, a crane was used to lift it onto a plane. Two years later, in 1958, the World Data Centre was established to allow users open access to scientific data. Over the years, data storage and sharing options have evolved to be more portable, secure, and, with the blossoming of the Internet, virtual, too.

One such virtual data sharing platform, CKAN, has been up and running for ten years now. CKAN is a powerful data management system that makes data accessible – by providing tools to streamline publishing, sharing, finding and using data. CKAN is aimed at data publishers (national and regional governments, companies and organizations) wanting to make their data open and available.

It is no wonder then that ROUTE-TO-PA, a Horizon2020 project pushing for transparency in public administrations across the EU, chose CKAN as a foundation for its Transparency Enhancing Toolset (TET). As one of ROUTE-TO-PA’s tools, the Transparency Enhancing Toolset provides data publishers with a platform on which they can open up data in their custody to the general public.

So, what improvements have been made to the CKAN base code to constitute the Transparency Enhancing Toolset? Below is a brief list:

1. Content management system support

Integrating CKAN with a content management system makes it easy for publishers to publish content related to datasets and to post updates about the portal. The TET WordPress plugin integrates seamlessly with TET-enabled CKAN, providing publishers with rich content publishing features and an elegantly organized entry point to the data portal.

2. PivotTable

The CKAN platform has limited data analysis capabilities, which are essential for working with data. ROUTE-TO-PA added a PivotTable feature to allow users to view, summarize and visualize data. From the data explorer in this example, users can easily create pivot tables and even run SQL queries. See source code here.
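As an aside, the same kind of SQL query can be issued directly against CKAN’s DataStore API through the datastore_search_sql action. Here is a minimal sketch in JavaScript; the portal URL and resource id are placeholders rather than real ROUTE-TO-PA values:

// Query a DataStore resource with SQL via CKAN's Action API.
// The portal URL and resource id are placeholders for illustration.
var portal = 'https://demo.ckan.org';
var sql = 'SELECT * FROM "resource-id-goes-here" LIMIT 5';

fetch(portal + '/api/3/action/datastore_search_sql?sql=' + encodeURIComponent(sql))
  .then(function (response) { return response.json(); })
  .then(function (data) {
    // CKAN wraps results as {success: true, result: {records: [...]}}.
    console.log(data.result.records);
  });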

3. OpenID

ROUTE-TO-PA created an OpenID plugin for CKAN, which enables OpenID authentication on CKAN. See source code here.

4. Recommendation for related datasets

With this feature, the application recommends related datasets a user can look at based on the current selection and other contextual information. The feature guides users to find potentially useful and relevant datasets. See example in this search result for datasets on bins in Dublin, Ireland.

5. Combine Datasets Feature

This feature allows users to combine related datasets from their search results within TET into one ‘wholesome’ dataset. Along with the Refine Results feature, the Combine Datasets feature is found in the top right corner of the search results page, as in this example. Please note that only datasets with the same structure can be combined at this point. Once combined, the resulting dataset can be downloaded for use.

6. Personalized search and recommendations

The personalized search feature allows logged-in users to get personalized search results based on details provided in their profile. In addition, logged-in users are provided with personalized recommendations based on their profile details.

7. Metadata quality check/validation

Extra validations were added to the dataset entry form to prevent data entry errors and ensure consistency.

You can find, borrow from and contribute to CKAN and TET code repositories on Github, join CKAN’s global user group or email serah.rono@okfn.org with any/all of your questions. Viva el open source!

John Miedema: Dusting off the OpenBook WordPress Plugin. There will be Code.

Sun, 2017-02-26 14:41

This morning I dusted off my OpenBook WordPress plugin. WordPress sensibly suppresses old plugins from its search directory, so I had to manually install it from its website. WordPress warns that the plugin has not been updated in over two years and may have compatibility issues. I was fully expecting it to break, if not during the installation then during the book search or preview function. To my surprise, everything worked fine!

Some of you will remember that during my days in library school I programmed the OpenBook plugin to insert rich book data from Open Library into WordPress posts/pages. The plugin got a bit of attention. I wrote an article in Code4Lib. I was asked to write another for NISO (pdf). I did some presentations at library conferences. I got commissioned to create a similar plugin for BookNet Canada; it is still available. OpenBook even got a favourable mention in the Guardian newspaper during the Google Books controversy. 

OpenBook went through a few significant upgrades. Templates allowed users to customize the look. COinS were added for integration with third-party applications like Zotero. An OpenURL resolver option allowed libraries to point webpages directly to entries in their online catalogues. I’m still proud of it. I had plenty of new ideas. Alas, code projects are never finished, merely abandoned.

Today, I am invested in publishing my After Reading web series. I am not announcing the resurrection of OpenBook. The web series will, however, use technical diagrams and code to illustrate important bookish concepts. For example, a couple of years ago I had an idea for a cloud-based web service that could pull book data from multiple sources. I was excited about this use of distributed data. Today that concept has a name, distributed ledger, and it could be applied to book records. I will not be developing that technology in this series, but you can count on at least one major code project. There will be code.

The After Reading series will be posting book reviews, so I figured what the heck, dust off OpenBook. Maybe a small refresh will make it active in the WordPress search directory again. 


District Dispatch: ALA in NYC (Part 2): ‘More than a collection of books’

Fri, 2017-02-24 20:46

How many times have you heard that phrase? A visit to the Central Library of the Brooklyn Public Library system proved to make that statement undoubtedly true. I work for ALA, I am a librarian, I worked for a library, I go to my branch library every week, so I know something about libraries. But this visit to this library was the kind of experience when you want to point, jump and exclaim, “See what is going on here!”

Photo credit: Carrie Russell

Indeed, much more than a collection of books. And more than shiny digital things. And more than an information commons. All are welcome here because the library is our community.

A visit to Brooklyn Public was first on the agenda of ALA’s Digital Content Working Group (DCWG), the leadership contingent that meets periodically with publishers in New York City. (Read more about the publisher meetings here.)

On arrival, several FBI agents were assembled outside of the building, which gave us momentary pause until we learned that they were cast members of the television series Homeland, one of the many film crews that use the Central Library for scenic backdrop. Look at the front of the building. Someone said “it’s like walking into a church.” Very cool if you can call this your library home.

Brooklyn is making a difference in its community. Heck, in 2016 it won the Institute of Museum and Library Services’ National Medal, the nation’s highest honor for museums and libraries. Brooklyn was recognized in part for its Outreach Services Department, which provides numerous social services, including a creative aging program, oral history project for veterans, free legal services in multi-languages and a reading program called TeleStory. (TeleStory connects families to loved ones in prison via video conference so they can read books together. Who knew a public library did that?!) All people are welcome here.

The Adult Learning Center, operating at Central and five Brooklyn Library branches under the direction of Kerwin Pilgrim, provides adult literacy training for new immigrants, digital literacy, citizenship classes, job search assistance and an extensive arts program that takes students to cultural centers where some see their first live dance performance! The active program depends on volunteer tutors. “Our tutors are the backbone of our adult basic education program serving students reading below the fifth grade reading level. Without our trained and dedicated volunteers, our program would suffer,” said Pilgrim.

Photo credit: Jim Neal

“Our volunteers are also strong advocates for our students and ambassadors of literacy because many of them share their experiences tutoring and inspire others to join as well. In many of our receptions over the years when we’ve asked tutors to provide reflections on their experiences we’ve heard that volunteering with BPL’s adult literacy programs not only opened their eyes, but also informed their perspective about what it means to be illiterate in a digital world. They empathize and empower our students to go beyond their comfort zones. Tutors have helped students progress to higher level groups and even to our Pre-HSE program. Students have achieved personal and professional goals like passing the citizenship and driver’s license exams, and completing applications for various city agencies. We help students with their stated goals as well as other aspects of their development and growth.”

During our tour of the library, we caught the tail end of story time, and I have never seen so many baby carriages lined up in the hallway. Another crowd scene when we were leaving the building – a huge line of people waiting to apply for their passports. Yes, the library has a passport office. Brooklyn also partners with the City of New York by providing space for the municipal ID program called IDNYC. All New Yorkers are eligible for a free New York City Identification Card, regardless of citizenship status. Already, 10 percent of the city’s population has an IDNYC Card! This proof of identity card provides one with access to city services. With the ID, people can apply for a bank or credit union account, get health insurance from the NY State Health Insurance Marketplace and more.

In the Business and Career Center, Brooklyn provides numerous online databases and other resources, many helpful for entrepreneurs looking to start their own business. The biggest barrier to starting a new business is inadequate start-up funds. Brooklyn Public tries to mitigate this problem with their “Power Up” program, funded by Citibank. Budding entrepreneurs take library training to write a business plan. After classes on business plan writing, marketing, and discovering sources for financing, participants enter a “best business plan” competition with a first prize of $15,000. Nearly one-third of Brooklyn residents do not have sufficient access to the Internet, so the information commons and dedicated staff available to help—open to all residents—are critical.

I know this is just the tip of the iceberg of all the work that Brooklyn Public does. The library—more than a collection of books—is certainly home to Brooklyn’s 2.5 million residents. And all people are welcome here.

The post ALA in NYC (Part 2): ‘More than a collection of books’ appeared first on District Dispatch.

Brown University Library Digital Technologies Projects: Solr LocalParams and dereferencing

Fri, 2017-02-24 18:03

A few months ago, at the Blacklight Summit, I learned that Blacklight defines certain settings in solrconfig.xml to serve as shortcuts for a group of fields with different boost values. For example, in our Blacklight installation we have a setting for author_qf that references four specific author fields with different boost values.

<str name="author_qf">
  author_unstem_search^200
  author_addl_unstem_search^50
  author_t^20
  author_addl_t
</str>

In this case author_qf is a shortcut that we use when issuing searches by author. By referencing author_qf in our request to Solr we don’t have to list all four author fields (author_unstem_search, author_addl_unstem_search, author_t, and author_addl_t) and their boost values; Solr is smart enough to use those four fields when it notices author_qf in the query. You can see the exact definition of this field in our GitHub repository.

Although the Blacklight project talks about this feature in their documentation page and our Blacklight instance takes advantage of it via the Blacklight Advanced Search plugin I had never really quite understood how this works internally in Solr.

LocalParams

Turns out Blacklight takes advantage of a feature in Solr called LocalParams. This feature allows us to customize individual values for a parameter on each request:

LocalParams stands for local parameters: they provide a way to “localize” information about a specific argument that is being sent to Solr. In other words, LocalParams provide a way to add meta-data to certain argument types such as query strings. https://wiki.apache.org/solr/LocalParams

The syntax for LocalParams is p={! k=v } where p is the parameter to localize, k is the setting to customize, and v the value for the setting. For example, the following

q={! qf=author}jane

uses LocalParams to customize the q parameter of a search. In this case it forces the query field qf parameter to use the author field when it searches for “jane”.

Dereferencing

When using LocalParams you can also use dereferencing to tell the parser to use an already defined value as the value for a LocalParam. For example, the following example shows how to use the already defined value (author_qf) when setting the value for the qf in the LocalParams. Notice how the value is prefixed with a dollar-sign to indicate dereferencing:

q={! qf=$author_qf}jane

When Solr sees the $author_qf it replaces it with the four author fields that we defined for it and sets the qf parameter to use the four author fields.

You can see how Solr handles dereferencing if you pass debugQuery=true to your Solr query and inspect the debug.parsedquery in the response. The previous query would return something along the lines of

(+DisjunctionMaxQuery(
    (
      author_t:jane^20.0 |
      author_addl_t:jane |
      author_addl_unstem_search:jane^50.0 |
      author_unstem_search:jane^200.0
    )~0.01
  )
)/no_coord

Notice how Solr dereferenced (i.e. expanded) author_qf to the four author fields that we have configured in our solrconfig.xml with the corresponding boost values.

It’s worth noting that dereferencing only works if you use the eDisMax parser in Solr.
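Putting the pieces together, a request that exercises dereferencing might look like the following sketch. It assumes a local Solr instance with a core named catalog (adjust the host and core to your setup), selects eDisMax explicitly in the LocalParams, and turns on debugQuery so you can inspect debug.parsedquery; remember to URL-encode the braces and spaces when issuing the request:

http://localhost:8983/solr/catalog/select
    ?q={!edismax qf=$author_qf}jane
    &debugQuery=true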

There are several advantages to using this Solr feature. One is that your queries are a bit shorter, since you pass an alias (author_qf) rather than all four fields and their boost values, which makes the query easier to read. Another is that you can change the definition of author_qf on the server (say, to include a new author field in your Solr index) and client applications will automatically use the new definition when they reference author_qf.

Open Knowledge Foundation: Announcing the 2017 International Open Data Day Mini-Grant Winners!

Fri, 2017-02-24 17:05

This blog was co-written by Franka Vaughan and Mor Rubinstein, OKI Network team.

This is the third year of the Open Knowledge International Open Data Day mini-grants scheme, our best one yet! Building on last year’s lessons, and in the spirit of Open Data Day, we are trying to make the scheme more transparent. We aspire to email every mini-grant applicant a response with feedback about their application. This blog is the first in a series where we look at who has received grants, how much has been given, and our criteria for deciding who to fund (more about that next week!).

Our selection process took more time than expected, for the best of reasons – the UK Foreign & Commonwealth Office joined the scheme last week and is funding eight more events! Adding that to the support we got from SPARC, the Open Contracting Program of Hivos, Article 19 and our grant from the Hewlett Foundation, a total of $16,530 worth of mini-grants is being distributed. This is $4,030 more than what we committed to initially.

The grants are divided into six categories: Open Research, Open Data for Human Rights, Open Data for Environment, Open Data Newbies, Open Contracting and FCO special grantees. The Newbie category was not planned, but we decided to be flexible and accommodate places hosting an Open Data Day event for the first time: these events didn’t necessarily fit our criteria, but showed potential. Two of these events will get special assistance from our Open Data for Development Africa Lead, David Opoku.

So without further ado, here are the grantees:  

Open Research

  1. Open Knowledge Nepal’s ODD event will focus on “Open Access Research for Students” to highlight the conditions of Open Access and Open Research in Nepal, showcasing research opportunities and the direction of research trends. Amount: $350
  2. Open Switch Africa will organise a workshop to encourage open data practices in academic and public institutions, teach attendees how to create and utilize open data sheets and repositories, and build up an open data community in Nigeria. Amount: $400
  3. The Electrochemical Society’s ODD event in the USA will focus on informing the general public about their mission to Free the Science and make scientific research available to everyone, and also share their plans to launch their open access research repository through Research4Life in March. Amount: $400
  4. Wiki Education Brazil aims to create and build structures to publish Brazilian academic research on Wikipedia and WikiData. They will organise a hackathon and edit-a-thon in partnership with pt.wiki and wikidata communities with support from Wikimedia Foundation research team to create a pilot event, similar to https://meta.wikimedia.org/wiki/WikiCite. Amount: $400
  5. Kyambogo University, Uganda will organise a presentation on how open data and the library promote open access. They will host an exhibition on open access resources and organise a library tour to acquaint participants with the available open access resources in the University’s library. Amount: $400
  6. Kirstie Whitaker will organise a brainhack to empower early career researchers at Cambridge University, England, on how to access open neuroimaging datasets already in existence for new studies and add their own data for others to use in the future. Amount: $375
  7. The University of Kashmir’s ODD event in India will target scholars, researchers and the teaching community and introduce them to Open Data Lab projects available through Open Knowledge Labs and open research repositories through re3data.org. Amount: $400
  8. The Research Computing Centre of the University of Chicago will organise a hackathon that will introduce participants to public data available on different portals on the internet. Amount: $300
  9. Technarium hackerspace in Lithuania will organise a science cafe to open up the conversation, in an otherwise conservative Lithuanian scientist population, about the benefits of open data and ways to share the science that they do. Amount: $400
  10. UNU-MERIT/BITSS-YAOUNDE in Cameroon will organise hands-on practical training courses on GitHub, OSF, Stata dynamic documents, R Markdown, advocacy campaigns, etc., targeting 100 people. Amount: $400
  11. Open Sudan will organise a high level conference to discuss the current state of research data sharing in Sudan, highlight the global movement and its successes, shed light on what could be implemented on the local level that is learned from the global movement and most importantly create a venue for collaboration. Amount: $400

Open Data for Human Rights

  1. Dag Medya’s ODD event will raise awareness of worker deaths in Turkey by structuring and compiling raw data in a tabular format and opening it to the public for the benefit of open data lovers and data enthusiasts. Amount: $300
  2. Election Resource Centre Zimbabwe will organise a training to build the capacity of project champions who will use data to tell human rights stories, analysis, visualisation, reporting, stimulating citizen engagement and campaigns. Amount: $350
  3. PoliGNU’s ODD event in Brazil will be a discussion on women’s participation in the development of public policies, guided by open data collection and visualizations. Amount: $390
  4. ICT4Dev Research Center will organise a press conference to launch their new website [ict4dev.ma] which highlights their open data work, a panel discussion about the relationship between Human Rights and Open Data in Morocco.  Amount: $300
  5. Accountabilitylab.org will train and engage Citizen Helpdesk volunteers from four earthquake-hit districts in Nepal (Kavre, Sindhpalchowke, Nuwakot and Dhading) who are working as interlocutors, problem solvers and advocates on migration-related problems, to codify citizen feedback using qualitative data from the ground and amplify it using open data tools. Amount: $300
  6. Abriendo Datos Costa Rica will gather people interested in human rights activism and accountability, teach them open data concepts and the context of Open Data Day, and assess how open the available human rights data is. Amount: $300

  1. SpaceClubFUTA will use OpenStreetMap, the TeachOSM tasking manager, remote sensing and GIS tools to map garbage sites in Akure, Nigeria, tracking their exact locations and the size and type of garbage. The data collected will be handed over to the agency in charge of clean-up to help them organise the necessary logistics. Amount: $300
  2. Open Data Durban will initiate a project on the impacts of open data in society by engaging its network of labs and open data school clubs (wrangling data from an IoT weather station) in Durban, South Africa. Amount: $310
  3. Data for Sustainable Development’s ODD event in Tanzania will focus on using available information from opendata.go.tz to create a thematic visualization map showing how data can be used in the health sector to track the spread of infectious diseases, monitor demand, or use demographic factors to identify opportunities for opening new health facilities. Amount: $300
  4. SubidiosClaros / Datos Concepción will create an interactive map of floods on the Argentine and Uruguayan coasts of the Uruguay River using 2000-2015 data. This will serve as an outline for implementing warning systems during water emergencies. Amount: $400
  5. Outbox Hub Uganda will teach participants how to tell stories using open data on air quality from various sources and their own open data project. Amount: $300
  6. Lakehub will use data to highlight the effects of climate change and deforestation on Lake Victoria, Kenya. Amount: $300
  7. Tupale.co will create the basis for a generic data model to analyze air quality in the city of Medellín over the last five years. This initial “scaffolding” will serve as the go-to basis for engaging more city stakeholders while demonstrating the need for more open datasets in Colombia. Amount: $300
  8. Beog Neere will develop an action plan to open up extractives’ environmental impact data and build data skills for key stakeholders in government and civil society. Amount: $300

  1. East-West Management Institute’s Open Development Initiative (EWMI-ODI) will build an open data community in Laos, and promote and localise the Open Data Handbook. Amount: $300
  2. Mukono District NGO Forum will use OpenCon resource repositories and make a presentation on Open Data, Open Access, and Open Data for the Environment. Amount: $350
  3. The Law Society of the Catholic University of Malawi will advocate for sexual and reproductive health rights by visiting secondary schools and disseminating information to young women on their rights and how to report when they have been victimized. Amount: $350

  1. LabHacker will take their Hacker Bus to a small city near São Paulo, run a hack day/workshop there and create a physical tool to visualize the city budget, which will be made available to local citizens. They will document the process and share it online so others can copy and modify it. Amount: $400
  2. Anti-Corruption Coalition Uganda will organize a meetup of 40 people drawn from civil society, media, government and the general public and educate them on the importance of open data in improving public service delivery. Amount: $400
  3. Youth Association for Development will hold a discussion on the current government’s [Pakistan] policies on open data. The discussions will cover open budgets, open contracting, open bidding, open procurement, open tendering, open spending, cooking budgets, the Panama Papers, Municipal Money and more. Amount: $400
  4. DRCongo Open Data initiative will organise a conference to raise awareness of the role of open data and mobile technologies in enhancing transparency and promoting accountability in the management of revenues from extractive industries in DR Congo. Amount: $400
  5. Daystar University in Kenya will organise a seminar to raise awareness among student journalists about using public data to cover public officers’ use of taxpayer money. Amount: $380
  6. Centre for Geoinformation Science, University of Pretoria in South Africa will develop a web-based application that uses gamification to encourage the local community (school learners specifically) to engage with open data on public funds and spending. Amount: $345
  7. Socialtic will host a data expedition, workshops, panel and lightning talks, and an Open Data BBQ to encourage groups like NGOs and journalists to use data in their work. Amount: $350
  8. OpenDataPy and Girolabs will show civil society organizations Paraguay’s public contract data, along with visualizations and apps made with that data. Their goal is to use all the available data and generate a debate on how this information can help achieve transparency. Amount: $400
  9. Code for Ghana will bring together data enthusiasts, developers, CSOs and journalists to work on analysing and visualising the previous government’s expenditure to bring out insights that would educate the general public. Amount: $400
  10. Benin Bloggers’ Association will raise awareness of the need for Benin to have an effective access-to-information law that obliges elected and public officials to publish their assets and revenues. Amount: $400

  1. Red Ciudadana will organize a presentation on open data and human rights in Guatemala. They aim to show the importance of opening up data linked to the Sustainable Development Goals and human rights, and the impact this has on people’s quality of life. Amount: $400
  2. School of Data – Latvia is organizing a hackathon, inviting journalists, programmers, data analysts, activists and members of the general public interested in data-driven opportunities. Their aim is to create real projects that draw out data-based arguments and help solve issues that are important for society. Amount: $280
  3. Code for South Africa (Code4SA)’s event will introduce participants to what open data is, why it is valuable and how it is relevant in their lives. They are choosing *not* to work directly with raw data, but rather to use an interface on top of census and IEC data to create a more inclusive event. Amount: $400
  4. Code for Romania will use the “Vote Monitoring” App to build a user-friendly repository of open data on election fraud in Romania and Moldova. Amount: $400
  5. Albanian Institute of Science – AIS will organize a workshop on Open Contracting & the Red Flag Index and present some of their instruments and databases, with the purpose of encouraging the use of facts in journalistic investigations and citizens’ advocacy. Amount: $400
  6. TransGov Ghana will clean data on public expenditure on development projects [2015 to 2016] to meet open data standards, show how projects are distributed in the Greater Accra Metropolis (data from the Accra Metropolitan Assembly), and deploy the data on the Ghana Open Data Initiative (GODI) platform. Amount: $400


For those who were not successful on this occasion, we will be providing further feedback and would encourage you to try again the next time the scheme is available. We look forward to seeing, sharing and participating in your successful events. We invite you all to register your event on the ODD website.

Wishing you all a happy and productive Open Data Day! Follow #OpenDataDay on Twitter for more!
