
Feed aggregator

LITA: #NoFilter: Designing Social Media Content in Canva

planet code4lib - Tue, 2017-09-12 16:52

Following my last post in the #NoFilter series, I received feedback that it would be helpful to walk through the actual process of using Canva to create compelling visuals for social media posts. While I again encourage you to take some time to complete Canva’s immensely helpful Design Essentials tutorials, I will use this entry to describe some of the techniques I have developed for using Canva efficiently. Canva in this discussion refers to the free version of the service; there is also a Canva for Work option as well as a Canva Enterprise option for groups of over 30 members. You can compare the different versions on Canva’s Pricing page.

Before delving into any graphic design project for your library, it is important to check if your institution adheres to any style guidelines for social media content. These guidelines may be a simple list of recommended fonts and colors, or an extensive document detailing the fonts, font size(s), color palettes, tone, logos, etc. that are to accompany different types of social media content. As regards fonts in Canva, the following are some suggestions that have been given to me: Roboto Condensed, Bold, size 28 for headings; Roboto Condensed, Bold, size 21 for sub-headings; and Open Sans, size 16 for body text.

Onto the actual matter of designing content. Where do you start? One of the most valuable and time-saving aspects of using Canva is having access to thousands of free design templates. The set of templates that Canva offers for social media posts includes:

  • Twitter Post
  • Instagram Post
  • Tumblr Graphic
  • Pinterest Graphic
  • Facebook Post
  • Facebook App
  • Social Graphic

For our purposes, let’s say we want to design an eye-catching graphic for Tumblr. I open Canva, select the Tumblr Graphic option, and browse through some of the free templates. I select the following free template to begin my project.

An example of a free Tumblr Graphic template available on Canva.

Canva allows you to upload your own photos and use them in your designs. Canva provides details on this process on its Support Page. From my previous work in Canva, I have already built a storehouse of photos including images of my library and specific items from the library’s collections. I select one of these images and drag it into the template where the original photo is located. I then change the text in the template to a quote from author Madeleine L’Engle, “A book, too, can be a star, a living fire to lighten the darkness, leading out into the expanding universe.” I adjust the font for the quote to Open Sans, Bold, size 18, and the font for the author’s name to Roboto Condensed, Bold, size 21.

And voilà, I’m done:

I download either a PNG or JPG of my finished design from Canva and I’m ready to post it on Tumblr. You can see the responses to this graphic on my library’s Tumblr blog.

All told, this project took less than ten minutes to create. There are many other methods for making efficient use of Canva for one’s social media content. Are there any tips and tricks you have developed for your Canva projects, specifically ones for social media? Share them in the comments below!

David Rosenthal: The Internet of Things is Haunted by Demons

planet code4lib - Tue, 2017-09-12 15:00
This is just a quick note to get you to read Cory Doctorow's Demon-Haunted World. We all know that the Internet of Things is infested with bugs that cannot be exterminated. That's not what Doctorow is writing about. He is focused on the non-bug software in the Things that makes them do what their manufacturer wants, not what the customer who believes they own the Thing wants.

In particular, Doctorow looks at examples such as Dieselgate, in which the manufacturer wants to lie to the world about what the Thing does:
All these forms of cheating treat the owner of the device as an enemy of the company that made or sold it, to be thwarted, tricked, or forced into conducting their affairs in the best interest of the company’s shareholders. To do this, they run programs and processes that attempt to hide themselves and their nature from their owners, and proxies for their owners (like reviewers and researchers).

Increasingly, cheating devices behave differently depending on who is looking at them. When they believe themselves to be under close scrutiny, their behavior reverts to a more respectable, less egregious standard.

Doctorow's piece provides many examples, but a week later he provided another, seemingly benign one. Tesla provided some of their cars with an over-the-air temporary range upgrade to help their owners escape hurricane Irma. They could do this because:

Tesla sells both 60kWh and 75kWh versions of its Model S and Model X cars; but these cars have identical batteries -- the 60kWh version runs software that simply misreports the capacity of the battery to the charging apparatus and the car's owner.

And it would be a crime to upgrade yourself to use the battery you bought:

[Tesla] has to rely on the Computer Fraud and Abuse Act (1986), which felonizes violating terms of service. It has to rely on Section 1201 of the DMCA, which provides prison sentences of 5 years for first offenders who bypass locks on the devices they own.

It is easy to see that the capability Tesla used could be used for other things:

The implications of this are grim. A repo depot could brick your car over the air (and it would be a felony to write code to unbrick it). Worse, hackers who can successfully impersonate Tesla, Inc. to your car will have the run of the device: it is designed to allow remote parties to override the person behind the wheel, and contains active countermeasures to prevent you from reasserting control.

Doctorow concludes:
The software in gadgets makes it very tempting indeed to fill them with pernicious demons, but these laws criminalize trying to exorcise those demons.

There’s some movement on this. A suit brought by the ACLU attempts to carve some legal exemp­tions for researchers out of the Computer Fraud and Abuse Act. Another suit brought by the Electronic Frontier Foundation seeks to invalidate Section 1201 of the Digital Millennium Copyright Act.

Getting rid of these laws is the first step towards restoring the order in which things you own treat you as their master, but it’s just the start. There must be anti-trust enforcement with the death penalty – corporate dissolution – for companies that are caught cheating. When the risk of getting caught is low, then increasing penalties are the best hedge against bad action. The alternative is toasters that won’t accept third-party bread and dishwashers that won’t wash unauthorized dishes.

Just go read both of his pieces.
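To make the pattern concrete, here is a minimal toy sketch of a “defeat device” in Python. This is purely illustrative and comes from neither Doctorow nor Rosenthal; the environment-variable check stands in for whatever heuristic a real Thing might use to decide it is being watched:

    import os

    def under_scrutiny() -> bool:
        # Stand-in heuristic: a real defeat device might detect a test rig,
        # a dyno, or a reviewer's network rather than an environment variable.
        return os.environ.get("EMISSIONS_TEST") == "1"

    def reported_emissions(actual_g_per_km: float) -> float:
        # Report honestly only when the device suspects someone is checking;
        # otherwise serve the manufacturer's interests, not the owner's.
        if under_scrutiny():
            return actual_g_per_km
        return actual_g_per_km * 0.25

    print(reported_emissions(400.0))  # prints 100.0 unless EMISSIONS_TEST=1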

Open Knowledge Foundation: This is what Europe can do to stimulate Text and Data Mining

planet code4lib - Tue, 2017-09-12 13:49

This press release has been reposted from the FutureTDM website

Text and data mining – using algorithms to analyse content in ways that would be impossible for humans – is shaping up to be a vital research tool of the 21st century. But Europe lags behind other parts of the world in adopting these new technologies. The FutureTDM project has just concluded its two-year EC-funded research investigating what’s holding Europe back. The project consortium, consisting of 10 European partners led by SYNYO, met with stakeholders and experts from all over Europe, gathering input and carrying out research to understand how Europe can take steps to support the uptake of TDM. Open Knowledge International together with ContentMine led the work on communication, mobilisation and networking, and undertook research into best practices and methodologies.
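For readers new to the technique, “using algorithms to analyse content” can be as simple as tallying term frequencies across a corpus of documents. The following Python sketch is a generic illustration of that idea, not a FutureTDM deliverable:

    import re
    from collections import Counter

    corpus = [
        "Text and data mining finds patterns at a scale no human reader can.",
        "Mining research text at scale raises copyright questions in Europe.",
    ]

    # Tokenise each document and tally word frequencies across the corpus.
    counts = Counter()
    for document in corpus:
        counts.update(re.findall(r"[a-z]+", document.lower()))

    print(counts.most_common(5))  # e.g. [('text', 2), ('mining', 2), ...]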

The potential benefits – and risks – are huge. According to the project’s economic analysis, TDM technologies could have an impact of as much as USD 110 billion on the European economy by 2020. If Europe is not ready to foster and support the use of TDM, it risks seeing talent and economic benefits go elsewhere.

Legal barriers are a big problem. TDM processes often involve copying content for analysis, so applications of TDM may fall foul of copyright laws. The EU has a fragmented landscape of restrictive, often unclear laws that can restrict re-use of content for TDM. Skills and education in this area also need a boost. Data analysis is fast becoming “the new IT”, and people in all fields, from fashion to finance, could benefit from an education in fundamental data literacy and computational thinking skills. Lack of infrastructure and economic incentives are lesser concerns. More information on these barriers is available from the FutureTDM report Policies and Barriers of TDM in Europe.

FutureTDM put together real, practical proposals to support the uptake of TDM in Europe. These are summarised in a Roadmap for the EU which focuses on three key phases of support:

  1. Content Availability: making sure content is legally and practically discoverable and re-usable for TDM. Since rights clearance can be practically impossible for many TDM applications, this almost certainly means copyright reform to allow re-use of content in ways that don’t trade on the original creative expression.
  2. Support Early Adopters: there is a need for initiatives that will connect TDM practitioners across domains and sectors, helping them share best practices and learn from each other’s experiences.
  3. The Next Generation: it is important to build a ‘data-savvy’ culture, where all Europeans have a fundamental awareness of the potential uses and benefits of data analytics.

The platform at www.futuretdm.eu brings together all the results of the FutureTDM project. As well as databases of TDM projects, experts, methods and tools, the Knowledge Base includes a series of practical guidelines for stakeholders in the TDM landscape. These are resources offering straightforward, plainly-worded advice on legal, licensing, and data management issues – as well as on how universities in particular can play a key role in supporting the uptake of TDM in Europe. All outcomes are also summarised in the awareness sheet Outcomes of FutureTDM.

 

 

The FutureTDM project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 665940.
For further questions please contact: office@futuretdm.eu / Tel +43 1 9962011

District Dispatch: Copyright in music webinar now available

planet code4lib - Tue, 2017-09-12 13:08

An archived copy of the CopyTalk webinar “Copyright in music and sound recordings: introduction and legislative update” is now available. Originally broadcast on September 7, 2017, presenter Eric Harbeson, music special collections librarian at the University of Colorado-Boulder, introduced us to the singular and confounding aspects of music copyright—compulsory licensing, state protection of sound recordings, how one song recording can carry numerous copyrights, how some copyright exceptions don’t apply to music, and more. Eric also discussed recent litigation and pending legislation, including the CLASSICS Act.

All CopyTalk webinars are archived and available for free via AdobeConnect.

A big thanks to Laura Quilter, who stepped in as moderator while Patrick Newell, our regular host, took a long-deserved vacation. And thanks to Julianna Kloeppel at ALA-HQ for running the show while I was, as it turned out, not out of town. (I did not take a long-deserved vacation as planned because of Irma—oh well, another vacation some other time.)

The post Copyright in music webinar now available appeared first on District Dispatch.

District Dispatch: Library licensing concerns: beyond legal details

planet code4lib - Mon, 2017-09-11 21:38

In August, the International Federation of Library Associations and Institutions (IFLA) released a “Literature review on the use of licenses in the library context, and the limitations this creates to access to knowledge,” authored by Svetlana Yakovleva. This 40-page report is recommended reading for those interested in gaining an understanding of library licensing concerns that go beyond legal details. Many understand that a private license agreement with terms and conditions determined by the rights holder can sidestep the user rights elements of public copyright law, which for the United States is determined by Congress. But licensing cannot be escaped—this is the business model used by rights holders to make digital resources available. Today, licenses are omnipresent: we agree to licenses when we purchase an airline ticket, when we buy an appliance that includes software, when we buy digital music, when we subscribe to Pandora, when we sign up for cable television. Licensing is here to stay.

A new literature review from IFLA sheds light on library licensing concerns that go beyond legal details. Photo credit: Wikicommons

The IFLA report summarizes how licensing is further complicated by the economic concentration of the publishing industry, pricing models, and the transaction costs libraries incur in managing and complying with multiple licenses. Not surprisingly, Yakovleva also notes the lack of literature on licensing content for public libraries, including a lack of evidence that “public libraries equally suffer from increasing financial burden of rising licensing fees.” (I know a few public librarians who would argue with that.)

For public libraries, understanding the problems with licensing seemed to hit home only in the last several years, when libraries began to acquire e-books. For academic libraries, discussions regarding “access over ownership” – i.e., licensing over copyright – began more than 30 years ago, fueled in part by journal price inflation that decimated acquisitions budgets. Academic libraries had to act quickly, learn from one another, and develop and support new collection policies. Eventually, they formed the Scholarly Publishing and Academic Resources Coalition (SPARC), now a global coalition committed to open access, authors’ rights, open sharing of research data and open educational resources.

A similar trajectory seems less likely with public libraries, for several reasons:

  • While a growing percentage of the collections budget is spent on digital resources, public libraries still have tremendous print collections with circulation stats unheard of in academic libraries.
  • Public libraries collect resources from the trade book market, which is profoundly different from the academic market.
  • Public libraries focus less on collection development than on providing a broad array of user services to their local communities.

The differences between public and academic libraries are numerous, but most licensing issues centered on ownership of library resources are common to both. More research on public library funding, licensing and collection development may identify new commonalities and strategies to advance learning and equitable access to information, and help us better understand the publishing ecosystem.

The post Library licensing concerns: beyond legal details appeared first on District Dispatch.

Evergreen ILS: OpenSRF 2.5.1 released

planet code4lib - Mon, 2017-09-11 21:28

We are pleased to announce the release of OpenSRF 2.5.1, a message routing network that offers scalability and failover support for individual services and entire servers with minimal development and deployment overhead.

OpenSRF 2.5.1 is a major bugfix release, and all users of OpenSRF 2.5.0, including testers of the Evergreen 3.0 beta release, are advised to upgrade as soon as possible.  In particular, 2.5.1 fixes an issue where messages could sometimes exceed the ejabberd limit on the maximum stanza size and get dropped.

OpenSRF 2.5.1 includes various other improvements as detailed in the release notes.

To download OpenSRF, please visit the downloads page.

We would also like to thank the following people who contributed to the release:

  • Bill Erickson
  • Chris Sharp
  • Galen Charlton
  • Graham Billiau
  • Jason Stephenson
  • Mike Rylander

Terry Reese: ASIS&T Midwest Regional Panel

planet code4lib - Sat, 2017-09-09 23:05

I had the opportunity to speak on a panel with a group of fantastic library people.  Here’s the panel description: https://www.asist.org/evolving-landscapes-in-academic-and-public-libraries/

I believe that all the presentation slides will be available soon from the site, but I’ve posted mine to slideshare at: https://www.slideshare.net/reese_terry/rejoining-the-information-access-landscape

–tr

Terry Reese: MarcEdit 7: Harvest OAI UI changes

planet code4lib - Sat, 2017-09-09 22:56

In evaluating the UI in the MarcEditor, the OAI Harvester window seemed a little crowded to me, so I’m breaking it up a bit. As of right now, the new updated UI will be the following:

–tr

Terry Reese: MarcEdit 7: Add/Delete Field Changes

planet code4lib - Sat, 2017-09-09 22:52

I’m starting to think about the global editing functions in the MarcEditor – and one of the first things I’m trying to do is flesh out a few confusing options related to the interface. This is the first update in thinking about these kinds of changes.

The idea here is to make it clear which options belong to which editing group, as sometimes folks aren’t sure which options are add field options and which are delete field options. Hopefully, this will make the form easier to decipher.

–tr

District Dispatch: Senate boosts funding for IMLS, LSTA thanks to ALA grassroots

planet code4lib - Fri, 2017-09-08 17:30

Congress delivered good news for library funding after returning from its August recess this week. Yesterday, the Senate Appropriations Committee approved an increase of $4 million in funding for the Institute of Museum and Library Services (IMLS), all of which would go to the formula-based Grants to States program.

Following months of intensive Hill lobbying by ALA Washington Office staff and the emails, phone calls and visits to Congress by ALA advocates, these gains are a win for libraries. According to a key Senate staffer, ALA’s ongoing grassroots campaign to save direct library funding launched last March – and the significant increase in the number of Senators and Representatives signing “Dear Appropriator” letters this year that it produced – played a major role in the gains for IMLS and Grants to States in the Senate Committee’s bill.

The Senate Committee’s bill, approved by the Labor-HHS Subcommittee on Wednesday, would boost IMLS funding to $235 million. Grants to States would receive $160 million. The bill also includes increased funding in FY 2018 for a number of other library-related programs.

  • National Library of Medicine: $420.9 million total ($21 million increase)
  • Title IV Student Support and Academic Enrichment Grants: $450 million total ($50 million increase)
  • Title I Grants to Local Educational Agencies: $15.5 billion total ($25 million increase)
  • Innovative Approaches to Literacy: $27 million total (level funding)
  • Title II Supporting Effective Instruction State Grants: $2.1 billion total (level funding)
  • Career and Technical Education State Grants: $1.1 billion total (level funding)

Overall, education funding in the Senate bill decreased $1.3 billion, but libraries remain a clear priority in Congress. These increases in direct library funding would not be possible without sustained advocacy by ALA staff and members!

The Committee’s funding measure now heads to the full Senate for consideration. If passed, it must eventually be reconciled with House legislation that proposes to fund IMLS and Grants to States for FY2018 at FY2017’s levels of $231 million and $156 million, respectively. While yesterday’s vote does not guarantee increased direct library funding, Senate approval of the Appropriations Committee’s bill would leave libraries in a very strong position to avoid any cuts for FY2018 – in spite of the Administration’s proposals (reiterated again this week in a “Statement of Administration Position”) to effectively eliminate IMLS and federal library funding.

While library funding is on track to remain level through the standard appropriations process, final passage of legislation by both chambers of Congress by the end of the 2017 Fiscal Year on September 30 is unlikely.  Congressional staff tells ALA that Congress will not be able to pass most, if any, of its 12 individual appropriations bills by the end of this month. Congress will likely need to enact a Continuing Resolution (CR), which would fund the government at current levels, to avert a government shutdown on October 1.

Thanks to you, the outlook for library funding in FY2018 is promising, but it’s not close to being a done deal. Right now, we must be patient; but please be ready to participate in one last grassroots push this fall when your voice is most needed to maintain – and possibly increase – library funding. We will keep you updated.

The post Senate boosts funding for IMLS, LSTA thanks to ALA grassroots appeared first on District Dispatch.

Archival Connections: Arrangement and Description in the Cloud: A Preliminary Analysis

planet code4lib - Fri, 2017-09-08 16:16
I’m posting a preprint of some early work related to the Archival Connections project. This work will be published as a book chapter/proceedings by the Archivschule in Marburg. In the meantime, here is the preprint: Arrangement and Description in the Cloud: A Preliminary Analysis.

LITA: Announcing the LITA Blog Editors

planet code4lib - Fri, 2017-09-08 14:21

We are pleased to announce that Cinthya Ippoliti and John Klima will serve as joint editors of the LITA Blog. Each is an accomplished writer and library tech leader, and we are confident that their perspectives and skill will benefit the Blog and its readership.

John Klima

Cinthya Ippoliti

Cinthya is Associate Dean for Research and Learning Services at Oklahoma State University where she provides administrative leadership for the library’s academic liaison program as well as services for undergraduate and graduate students and community outreach. As a blogger, she has covered a slew of topics including technology assessment.

John is the Assistant Director of the Waukesha Public Library, where one of his many hats is maintaining, upgrading, and innovating technology within the library. He wrote a number of articles on steampunk for Library Journal. As a blogger, he often provides a public library technology perspective.

Look for updates from our Editors on how you can get involved and contribute to the LITA Blog!

Lucidworks: The Search for Search at Reddit

planet code4lib - Thu, 2017-09-07 19:08

Today, Reddit announced their new search for ‘the front page of the internet’ built with Lucidworks Fusion.

Started back in the halcyon Web 2.0 days of 2005, Reddit has become the fourth most popular site in the US and ninth in the world, with more than 300 million users every month posting links, commenting, and voting across its 1.1 million communities (called ‘sub-reddits’). Sub-reddits can orbit around broad mainstream topics such as /r/politics, /r/bitcoin, and /r/starwars or topics as obscure as /r/bunnieswithhats, /r/grilledcheese, and /r/animalsbeingjerks. Search is a key way for users to find more information on their favorite topics and hobbies across the entire universe of communities.

As the site has grown, the search function has been rebuilt on five different stacks over the years: Postgres, PyLucene, Apache Solr, IndexTank, and Amazon’s CloudSearch. Each time, performance got better but didn’t keep up with the pace of the site’s growth, and relevancy wasn’t where it should be.

“When you think about the Internet, you think about a handful of sites — Facebook, Google, Youtube, and Reddit. My personal opinion is that Reddit is the most important of all of these,” explained Lucidworks CEO Will Hayes. “It connects strangers from all over the world around an incredibly diverse group of topics. Content is created at a breakneck pace and at massive scale. Because of this, the search function becomes an incredibly important piece of the UX puzzle. Lucidworks Fusion allows Reddit to tackle the scale and complexity issues and provide the world-class search experience that their users expect.”

The team chose Lucidworks Fusion for its best-in-class search capabilities, including efficient scaling, monitoring, and improved search relevance.

“Reddit relies heavily on content discovery, as our primary value proposition is giving our people a home for discovering, sharing, and discussing the things they’re most passionate about,” said Nick Caldwell, Vice President of Engineering at Reddit. “As Reddit has grown, so have our communities’ expectations of the experience we provide, and improving our search platform will help us address a long-time user pain point in a meaningful way. We expect Fusion’s customization and machine learning functionality will significantly elevate our search capabilities and transform the way people discover content on the site.”

Here are just a few of the results from the new search, which has now been rolled out to all users (see the query sketch after this list):

  • ETL indexing pipelines reduced to just 4 Hive queries, which led to a 33% increase in posts indexed
  • Full re-index of all Reddit content slashed from 11 hours to 5, with constant live updates and errors down by two orders of magnitude
  • Amount of hardware/machines reduced from 200 to 30
  • 99% of queries served search results within 500ms
  • Comparable relevancy to the old search (without any fine-tuning yet!)
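Lucidworks Fusion is built on Apache Solr, so behind numbers like the 500ms figure above sits ordinary Solr query traffic. As a rough sketch of what a client-side query can look like (the host, collection, and field names here are hypothetical, not Reddit’s actual setup):

    import requests

    params = {
        "q": "title:grilledcheese",  # find posts about a favorite topic
        "rows": 10,                  # first page of results
        "wt": "json",                # ask Solr for a JSON response
    }
    response = requests.get("http://localhost:8983/solr/posts/select", params=params)
    for doc in response.json()["response"]["docs"]:
        print(doc.get("title"))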

That’s just a little bit of the detail in the full post over on the Reddit blog: The Search for Better Search at Reddit.

Don’t miss their keynote at the Lucene/Solr Revolution next week in Las Vegas.

Coverage in TechCrunch and KMWorld. More on the way!

Read the full press release.

Go try out the search on Reddit right now!

The post The Search for Search at Reddit appeared first on Lucidworks.

Evergreen ILS: On the Road to 3.0: Small Enhancements to Improve the Staff Experience

planet code4lib - Thu, 2017-09-07 16:33

The upcoming Evergreen 3.0 release, scheduled for October 3, 2017, will bring along a lot of improvements for staff and patrons at Evergreen libraries. Over the next few weeks, we’ll highlight some of our favorite new features in the On the Road to 3.0 video series.

In the first installment of the series, we look at small feature enhancements that will improve the staff experience in Evergreen.

Do you want to be the first to know of new videos in this series as they are added? Be sure to subscribe to our new EvergreenILS YouTube channel.

District Dispatch: Latino Cultures platform from Google is a new library resource

planet code4lib - Thu, 2017-09-07 16:02

This post originally appeared on The Scoop.

Libraries across the country are working in a variety of ways to improve the full spectrum of library and information services for the approximately 58.6 million Spanish-speaking and Latino people in the US and build a diverse and inclusive profession.

In honor of National Hispanic Heritage Month, which begins on September 15, Google Cultural Institute has collaborated with more than 35 museums and institutions to launch a new platform within Google Arts & Culture: Latino Cultures. The platform brings more than 2,500 Latino cultural artifacts online and—through immersive storytelling, 360-degree virtual tours, ultra-high-resolution imagery, and visual field trips—offers first-hand knowledge about the Latino experience in America.

American Library Association (ALA) President-Elect Loida Garcia-Febo is excited about this new resource, which she believes will help libraries continue to draw attention to the rich legacy of Latinos and Latinas across America.

“Nationwide, libraries are celebrating Latino cultures by offering programs that highlight our music, cuisine, art, history, and leadership,” says Garcia-Febo. “I know this platform will be a great springboard as we continue to reshape our library collections to include Spanish-language and Latino-oriented materials.”

Latino Cultures pulls from a wide variety of collections to recognize people and events that have influenced Hispanic culture in the US. For example, it highlights the Voces Oral History Project’s interviews with Latinos and Latinas of the World War II, Korean War, and Vietnam War generations. Likewise, the platform showcases luminaries like Mari-Luci Jaramillo, the first Latina Ambassador of the US to Honduras, and civil rights activist and labor leader Dolores Huerta, who co-founded the United Farm Workers with Cesar Chavez in the 1960s.

According to the latest research, Hispanic Americans reached a record 17% of the US population in 2017. As this segment of the population grows, it is increasingly important for educators, hospitals, civil services, and other institutions to have more information about the diverse experiences and backgrounds of Latino Americans.

“Libraries must make sure that more than the basic services are available to Latino Americans,” says Garcia-Febo. “We have to provide respectful spaces for Latino voices and perspectives.”

Google Cultural Institute aims to inspire Americans to learn more about the cultures of Latinos and Latinas in the US. As a complement to the platform, they are creating lesson plans that support bringing content into classrooms, afterschool programs, and other organizational programming.

Office for Information Technology Policy Director Alan Inouye considers it ALA’s responsibility to bring these resources to the attention of all libraries.

“We are especially excited about this new resource in terms of our policy work,” says Inouye. “Issues of race, ethnicity, and immigration are front and center on the nation’s policy agenda, and diversity and inclusion are central to ALA’s strategic priorities. No doubt, the Latino Cultures platform will be a wonderful resource for libraries to leverage in their programs and services.”

In honor of National Hispanic Heritage Month, Garcia-Febo also gives credit to her personal cultural inheritance as a librarian.

“My mother, Doña Febo, was a librarian who taught me the importance of intellectual freedom and the right of everyone to access information. I always celebrate this month with her in mind.”

The post Latino Cultures platform from Google is a new library resource appeared first on District Dispatch.

Open Knowledge Foundation: Openbudgets.eu: the new platform for financial transparency in Europe

planet code4lib - Thu, 2017-09-07 12:28

Today, OpenBudgets officially launches its fiscal transparency platform. Using OpenBudgets.eu, journalists, civil servants, and data scientists can process, analyse, and explore the nature and relevance of fiscal data.

The platform offers a toolbox to everyone who wants to upload, visualise and analyse fiscal data. From easy-to-use visualisations and high-level analytics to fun games, accessible explanations of public budgeting and corruption practices, and participatory budgeting tools, it caters to the needs of journalists, activists, policy makers and civil servants alike.
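As a generic illustration of the kind of analysis such a toolbox supports (this is not the platform’s own API; the table and column names are hypothetical), a few lines of Python suffice to summarise spending by category:

    import pandas as pd

    # A toy budget table; real data would come from an uploaded fiscal dataset.
    budget = pd.DataFrame({
        "category": ["education", "transport", "education", "culture"],
        "amount_eur": [1_200_000, 850_000, 400_000, 150_000],
    })

    # Total spending per category, largest first: the starting point for
    # most budget visualisations.
    totals = budget.groupby("category")["amount_eur"].sum().sort_values(ascending=False)
    print(totals)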

The first successful implementations and projects have been developed in Thessaloniki, Paris, and Bonn, where civil society organisations and civil servants have together built budget visualisations for the general public. The cooperation between IT and administration resulted in three local instances of OpenBudgets.eu, setting an example for future implementations around Europe.

On the EU level, the project has campaigned for transparency in MEP expenses and for better quality data on European subsidies. The OpenBudgets.eu project subsidystories has uncovered how almost €300 billion in EU subsidies is spent. The MEP expenses campaign has led to the President of the European Parliament committing to introduce concrete proposals for reform of the MEPs’ allowance scheme by the end of the year.

Finally, the project has created tailor-made tools for journalists, as our research showed a lack of contextual knowledge and of knowledge of the basics of accounting. ‘Cooking budgets’ presents the basics of accounting in a satirical website, and the successful game ‘The good, the bad and the accountant’ simulates the struggle of a civil servant to retain their integrity.

The three approaches and audiences have resulted in a holistic platform that caters to a wider public seeking more insight into their local, regional, national and even EU budgets. With the launch of OpenBudgets.eu, the field of financial transparency in Europe is enriched by new tools, data, games and research for journalists, civil society organisations and civil servants alike, resulting in a valuable resource for a broad audience.

OpenBudgets.eu has received funding from the European Union’s H2020 EU research and innovation programme under grant agreement No 645833 and is implemented by an international consortium of nine partners (including Open Knowledge International and Open Knowledge Foundation Germany) under the coordination of Fraunhofer IAIS.

William Denton: Denton Declaration

planet code4lib - Thu, 2017-09-07 01:35

I state, for the record, openly and proudly, that I am in full support of the Denton Declaration.

Evergreen ILS: Evergreen 3.0 first beta release available

planet code4lib - Thu, 2017-09-07 00:18

The first beta release of Evergreen 3.0 is now available for testing from the downloads page.

Evergreen 3.0 will be a major release that includes:

  • community support of the web staff client for production use
  • serials and offline circulation modules for the web staff client
  • improvements to the display of headings in the public catalog browse list
  • the ability to search patron records by date of birth
  • copy tags and digital bookplates
  • batch editing of patron records
  • better support for consortia that span multiple time zones
  • and numerous other improvements

For more information on what’s available in the beta release, please read the initial draft of the release notes.

Users of Evergreen are strongly encouraged to use the beta release to test new features and the web staff client; bugs should be reported via Launchpad. A second beta release, which will include bugfixes and support for Debian Stretch, is scheduled for 20 September.

Evergreen admins installing the beta or upgrading a test system to the beta should be aware of the following:

  • The minimum version of PostgreSQL required to run Evergreen 3.0 is PostgreSQL 9.4.
  • The beta release will work on OpenSRF 2.5.0, but OpenSRF 2.5.1 is expected to be released over the next few days and will be recommended for further testing of the Evergreen beta.  In particular, if you run into difficulties retrieving catalog search results, please see OpenSRF bug 1709710 for some workarounds.
  • Evergreen 3.0 requires that the open-ils.qstore service be active.
  • SIP2 bugfixes in Evergreen 3.0 require an upgrade of SIPServer to be fully effective.

Evergreen 3.0.0 will be a large, ambitious release; testing during the beta period will be particularly important for a smooth release on 6 October.

District Dispatch: Rep. Pallone talks net neutrality at N.J. library

planet code4lib - Wed, 2017-09-06 22:39

Guest post by Tonya Garcia, director of Long Branch (New Jersey) Public Library

The Long Branch Public Library recently hosted a meeting with its representative, Congressman Frank Pallone (D-NJ6), to discuss net neutrality and its importance to libraries. As the most senior minority-party member of the House Energy and Commerce Committee, he is a strong advocate for net neutrality (the principle that internet service providers should not pick winners and losers among the content and services offered to consumers). The library community is grateful for his interest in how libraries and the people they serve will be affected should rules preserving net neutrality be weakened or completely eliminated, as the Chairman of the Federal Communications Commission (FCC) has proposed.

Left to right: Tonya Garcia, director of Long Branch Public Library; Patricia A. Tumulty, executive director of the New Jersey Library Association; U.S. Representative Frank Pallone (NJ6); Eileen M. Palmer, director of the Libraries of Middlesex Automation Consortium. Photo credit: Eileen Palmer

Following a tour of the library, Congressman Pallone met with library advocates to discuss net neutrality and how important he believes it is to maintain rules protecting access to high-speed broadband. He invited us to share our concerns with him.

Patricia A. Tumulty, executive director of the New Jersey Library Association (NJLA), told the Congressman that in their comments filed with the FCC, the NJLA noted:

“The current net neutrality rules promote free speech and intellectual expression. The New Jersey Library Association is concerned that changes to existing net neutrality rules will create a tiered version of the internet in which libraries and other noncommercial enterprises are limited to the internet’s ‘slow lanes’ while high-definition movies and corporate content obtain preferential treatment.

People who come to the library because they cannot afford broadband access at home should not have their choices in information shaped by who can pay the most. Library sites—key portals for those looking for unbiased knowledge—and library users could be among the first victims of slowdowns.”

The availability of affordable high-speed internet has meant that public libraries now serve as incubators for local entrepreneurs, noted James Keehbler, director of the Piscataway Public Library. The makerspace and maker programs within the Piscataway library play a central role in supporting its residents. Without access to high-speed internet, the makerspace, for example, could not have been used by local entrepreneurs to develop prototypes that were used in successful crowd-sourced funding efforts to start a local business.

New Jersey State Librarian Mary Chute also discussed the significant current investment in digital resources by the state’s library that are then made available to all New Jersey residents. These expensive resources are relied on by small businesses, students, job seekers and lifelong learners throughout the state. A “slow lane” internet in libraries would hamper access to bandwidth-heavy visual content such as training videos used by those seeking certifications for employment and many others.

Eileen M. Palmer, director of the Libraries of Middlesex Automation Consortium and member of the ALA’s Committee on Legislation, added concerns that the loss of net neutrality rules could negatively impact the many local digital collections housed in public and academic libraries. She also spoke about the potential loss of access to government information, such as the NASA high-speed video feeds used just recently by many libraries to host eclipse programs and viewing events for students and the public.

This was a wide-ranging discussion. Attendees were appreciative of Congressman Pallone’s leadership on this issue and his interest in better understanding how libraries and our patrons will be impacted should we lose rules protecting net neutrality. It also was a conversation that the Congressman was eager to have with his constituents in a library in his congressional district.

Who’ll be writing the next blog about their representative’s visit to their library, I wonder?

The post Rep. Pallone talks net neutrality at N.J. library appeared first on District Dispatch.
