
Feed aggregator

LITA: 2018 LITA Election Results

planet code4lib - Thu, 2018-04-12 14:29

Please join us in congratulating our newly elected LITA officers:

We thank everyone who stood for office in this election. Full ALA election results are available on the ALA website.

Dan Cohen: Help Snell Library Help Others

planet code4lib - Thu, 2018-04-12 13:05

I am extremely fortunate to work in a library, an institution that is designed to help others and to share knowledge, resources, and expertise. Snell Library is a very busy library. Every year, we have two million visits. On some weekdays we receive well over 10,000 visitors, with thousands of them in the building at one time. It’s great to see a library so fully used and appreciated.

Just as important, Snell Library fosters projects that help others in our Boston community and well beyond. Our staff has worked alongside members of the Lower Roxbury community to record, preserve, and curate oral histories of their neighborhood; with other libraries and archives to aggregate and make accessible thousands of documents related to school desegregation in Boston; and with other institutions and people to save the personal stories and images of the Boston Marathon bombing and its aftermath.

Our library is the home of the archives of a number of Boston newspapers, including The Boston Phoenix, the Gay Community News, and the East Boston Community News, with more to come. The Digital Scholarship Group housed in the library supports many innovative projects, including the Women Writers Project and the Early Caribbean Digital Archive. We have a podcast that explores new ideas and discoveries, and tries to help our audience better understand the past, present, and future of our world.

It’s National Library Week, and today is Northeastern’s Giving Day. So I have a small request of those who read my blog and might appreciate the activities of such a library as Snell: please consider a modest donation to my library to help us help others. And if at least 50 students, parents, or friends donate today—and I’d really love that to be 100, even at $10—I’ll match that with $1,000 of my own. Thank you. 

>> NU Giving Day – Give to the Library <<

HangingTogether: Are distributed models for vocabulary maintenance viable?

planet code4lib - Thu, 2018-04-12 12:30
Mason-ontology.png, CC BY-SA 3.0 (migrated), via Wikimedia Commons

That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Steven Folsom of Cornell University and Stephen Hearn of the University of Minnesota. Metadata practitioners can cite many examples of established vocabularies or datasets that have become outdated or do not provide for local needs or sensibilities. Slow or unresponsive maintenance models for established vocabularies have tempted some of us to consider distributed models, and the high training thresholds for participating in current models have added to the desire for alternatives. The new PCC Strategic Directions 2018-2021 document likewise points to a more diverse vocabulary landscape.

In theory, linked data would provide the means for local communities to prefer a different label than an established vocabulary’s preferred term for a concept or entity. One might also want to reference a local description of a concept or entity that is not represented (or not represented satisfactorily) in established vocabularies or linked data sources. If these kinds of amendments and additions are possible in a linked data environment, then others can agree (or not) with that point of view by linking to the new resource. Such a distributed model for managing both terminology and entity description raises issues around metadata stability expectations, metadata interoperability, and metadata maintenance.
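As a toy illustration of that idea, the sketch below keeps a community’s own preferred label keyed to an established vocabulary’s URI and falls back to the source label otherwise. It is a plain R sketch, and every URI and label in it is an invented placeholder, not a real vocabulary entry:

# Invented placeholder URIs and labels, for illustration only.
source_labels <- c("http://example.org/vocab/0001" = "Established preferred term",
                   "http://example.org/vocab/0002" = "Another established term")

# A local community's overlay: its own preferred label for the first concept.
local_labels <- c("http://example.org/vocab/0001" = "Community-preferred term")

# Prefer the local label when one exists; otherwise fall back to the source's.
display_label <- function(uri) {
  if (uri %in% names(local_labels)) local_labels[[uri]] else source_labels[[uri]]
}

display_label("http://example.org/vocab/0001")  # "Community-preferred term"
display_label("http://example.org/vocab/0002")  # "Another established term"

Because both parties keep pointing at the same URI, disagreement over labels stays visible instead of forking the concept itself.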

We noted that there are social aspects to this issue, not just technical ones. Numerous vocabularies were created for specific projects and, once funding stopped, remained frozen in time. Observed one discussant: “Nothing is sadder than a vocabulary that someone invented that was left to go stale.”

OCLC Research Library Partners metadata managers discussed whether the impediments to a distributed model were low enough for it to be viable, and whether the benefits would outweigh the challenges. The most common concerns raised about a distributed model were:

  • Stability and versioning
  • Notifications of changes
  • Semantics and their alignments
  • Redundancy—how to prevent people spending time working on the same entity?
  • How to feed local vocabularies into the general environment?

The requirements for distributed vocabulary maintenance converged around:

  • Communities of practice need a hub to aggregate and reconcile terms within their own domains. It was noted that different communities of practice might use terms that conflict with others’ terminologies, or mean different things. The Cornell-led IMLS grant on shared vocabularies is producing a white paper and models for reconciliation and aggregation, describing the different pieces that need to be put in place for linked data to proceed. However, this focused on names, which we all agreed are much easier to deal with than reconciling concepts. Similar ground was covered by the PCC Linked Data Advisory Committee’s Linked Data Infrastructure Models: Areas of Focus for PCC Strategies.
  • Support syndetic relationships among different vocabularies.
  • Replacing text strings with stable, persistent identifiers would facilitate using different labels depending on context. This would accommodate both different languages and scripts (and different spellings within a language, such as American vs. British English), as well as terms that are more respectful to marginalized communities. We referred to the OCLC Research Works in Progress webinar on Decolonizing Descriptions: Finding, Naming and Changing the Relationship between Indigenous People, Libraries, and Archives which described the process launched by the Association for Manitoba Archives and the University of Alberta Libraries to examine subject headings and classification schemes and consider how they might be more respectful and inclusive of the experiences of indigenous people.
  • Communicate the history of changes and the provenance of each new or modified term. Such transparency would contribute to the trustworthiness of the source. The edit history and discussion pages included in each Wikidata item are a possible model to follow.
  • The model must be both scalable and extensible. The model needs to accommodate the proliferation of new topics and terms symptomatic of the humanities and sciences, and facilitate contributions by the researchers themselves. It needs to be flexible enough to co-exist with other vocabularies.

Wikidata is an example of a successful model of contributions from a wide variety of communities. Wikidata is derived from data in Wikipedias, and it currently lacks conceptual models for creative works, organizations, and concepts. But it handles person entities well, providing a variety of labels in different languages and scripts and aggregating multiple identifiers that refer to the same entity. See, for example, its entry for Jane Austen. However, some Wikidata attributes, such as gender, date of birth, and contact information, might need to be kept local rather than shared to protect the privacy of living persons.
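As a small illustration of that multilingual labeling, the R sketch below (assuming the jsonlite package is installed, and that Q36322 is still the Wikidata item for Jane Austen) pulls a few of the labels Wikidata serves for that single entity:

library(jsonlite)

# Fetch the full entity document for Jane Austen (item Q36322) from Wikidata.
ent <- fromJSON("https://www.wikidata.org/wiki/Special:EntityData/Q36322.json")
labels <- ent$entities$Q36322$labels

# One preferred label per language code; English, Japanese, and Russian shown here.
sapply(labels[c("en", "ja", "ru")], function(l) l$value)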

Open questions around distributed vocabularies included:

  • Who would take ownership and responsibility to provide stability?
  • How could you verify the provenance?
  • If we no longer rely on governmental bodies for vocabulary management, what alternatives would there be to measure stability?

Expanding vocabularies to include those used in other communities requires building trust relationships. Our discussions converged around the need for a model of “community contribution” for new terms and community voting. If a concept or term becomes controversial, an authorized editorial group would need to step in and mediate. We also need to acknowledge that our current “consensus environment” excludes a lot of people. Requiring provenance as part of a distributed vocabulary model may help us in creating an alternative environment.

Terry Reese: MarcEdit 7 Update: Regular Expression Store, Thread Pooling, and Task Manager Updates

planet code4lib - Thu, 2018-04-12 05:28

I’ve posted a new MarcEdit 7 update. This includes the following changes:

  • Verify URLs – there is now an option to manage the number of threads used. This is the first time this tool utilizes a thread pool to provide faster queries. I wouldn’t recommend using more than 10 threads (3-5 is a good number), as you could start to look like a denial-of-service attack to those you are checking.
  • Verify URLs – I’ve updated the stylesheet that generates the results. It’s easier to read, and the record #s can now be copied, so you could put these in a file and select those records through the Extract Selected Records Tool.
  • Build Links – I’m still experimenting with this, but I’m making it available because it works. This tool now uses threads to build links. I’m generally seeing a 15-20% improvement in speed when processing files. Not a big change, but I’m happy with it. I have some ideas I’m working on to improve speed further – they might show up in tonight’s release.
  • Regular Expression Store – it’s been updated. You can now add new metadata and search across multiple fields of metadata.
  • Replace Function: when using external file criteria, the tool had trouble if your files included BOM values. I’ve added code to filter these out.
  • Task Management – I’ve added an option in the Task Manager that allows a task to override the broker’s assessment and run the task using the older task method. Generally, the broker’s assessment results in significant speed gains, but when a task will touch every record, it may be faster to use the other method. This gives users control to use either.
  • Component updates: I’ve updated core components for the linked data tooling, the Saxon XSLT processor, and the JSON processing tools.


Regular Expression Store changes:


One of the new features added to MarcEdit 7 is the Regular Expression Store. I’ve significantly enhanced this feature. Users can now add significant metadata around their stored expressions, as well as search for these resources by metadata or expression. This update also includes all the client-side work necessary to enable public sharing of expressions…I’m currently putting together the server-side components to make this a reality. I’ll start by putting my library of expressions into the public share. Hopefully, folks will find these useful.

New Expression window:

Notice the Actions button – this is a drop-down action button that provides access to options for creating new expressions or saving an expression.

Thread Pooling

One of the new enhancements in MarcEdit 7 is the introduction of a thread pool. This has been implemented in the Verify URLs tool and the Build Links tool. The Verify URLs tool provides an interface for users who want to customize the number of threads used to check URLs:

The default is set to 3 – but users can make this as small or large as they like. I would caution, however, against setting this value above 10, as your requests will start to look like a denial-of-service attack if you are querying the same domain over and over.
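MarcEdit is a .NET application, so the sketch below is not its code; it is only a rough R illustration, using the base parallel package, of the pattern a small pool of URL-checking workers follows. The URLs are placeholders standing in for the links a real check would read from records:

library(parallel)

# Placeholder URLs; a real run would pull these from the records being checked.
urls <- c("https://example.org/a", "https://example.org/b", "https://example.org/c")

cl <- makeCluster(3)  # three workers, mirroring the tool's default thread count
statuses <- parSapply(cl, urls, function(u) {
  # curlGetHeaders() is base R; its "status" attribute holds the HTTP status code.
  tryCatch(attr(curlGetHeaders(u), "status"), error = function(e) NA_integer_)
})
stopCluster(cl)

statuses  # named vector of HTTP status codes; NA marks an unreachable URL

Keeping the pool small spreads requests out, which is exactly why the post advises staying at 10 threads or fewer.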

Task Management Changes:

The last change I want to highlight is in the Task Manager. Occasionally, folks will provide me with files and tasks that the task broker has difficulty profiling. In these cases, the new profiled process may be slightly slower than the older record-by-record approach. To give users more control over the process, I’ve added the ability to override the task broker’s recommendations and push data through the legacy task processing method. You’ll see this option on the task editor window.


Please note, if a task list is embedded into another task list, the tool will respect the override request of any of the combined tasks and use that option to process all items in the list.

The update is available at the MarcEdit website: http://marcedit.reeset.net/downloads or via the automatic update mechanism. If you have questions, let me know.

–tr

District Dispatch: Watch Senate hearing on Marrakesh Treaty Implementation Act April 18

planet code4lib - Thu, 2018-04-12 01:36
Watch the hearing on the Marrakesh Treaty Implementation Act on April 18th at 10:30 a.m. (Eastern)

There was a time when I thought the Marrakesh Treaty to increase access to information for people with print disabilities would never make its way to the Senate.

The Senate Foreign Relations Committee will hold a hearing on the Marrakesh Treaty Implementation Act on April 18th at 10:30 a.m. (Eastern). Photo: Dirksen Senate Office Building

Marrakesh is an international treaty, after all, and Congress tends to shy away from treaty ratifications. This is the United States, and we don’t need an international body to tell us what to do. (The threatened loss of sovereignty was the argument when the Senate rejected the treaty that proposed to ban discrimination against disabled people, even after retired Senator Bob Dole, frail and confined to a wheelchair, came to the Senate imploring them to ratify it.) Could this treaty be different? When the Senate has not ratified the American Convention on Human Rights, does a treaty about advancing opportunities for persons with print disabilities have a chance?

We will get a good idea when the Senate Foreign Relations Committee holds a hearing on the Marrakesh Treaty Implementation Act (S.2559) on April 18th at 10:30 a.m. (Eastern). You can watch the live proceedings and hear the testimony from the Library Copyright Alliance. Here’s to keeping our fingers crossed!

The post Watch Senate hearing on Marrakesh Treaty Implementation Act April 18 appeared first on District Dispatch.

DuraSpace News: Join Fedora at IASSIST & CARTO 2018 Conference

planet code4lib - Thu, 2018-04-12 00:00

If you will be traveling to Montreal, Quebec for the IASSIST & CARTO Conference, May 29 to June 1, please join David Wilcox for a half-day workshop on May 29, “Supporting data storytellers with Fedora”. The workshop will provide an overview of Fedora – the flexible, extensible, open source repository platform for managing, preserving, and providing access to digital content.

In this workshop both new and existing Fedora users will learn about current Fedora features and functionality first-hand.

Cynthia Ng: Reflection: 18 months as a Manager of Technology and Technical Services

planet code4lib - Wed, 2018-04-11 19:47
It’s taking me a lot longer than usual to write this reflection piece since I’ve been busy with unpacking and settling into a new place. As usual, this piece is meant to be reflective of my experience. No criticism is meant of my former work place as every organization has its quirks and ways it … Continue reading "Reflection: 18 months as a Manager of Technology and Technical Services"

Access Conference: Mark your calendars

planet code4lib - Wed, 2018-04-11 16:07

Join us in fabulous Hamilton, Ontario – Wednesday, October 10 to Thursday, October 11, 2018 at Liuna Station. Hackfest will be held on Friday, October 12, 2018 – venue to be announced shortly.

Terry Reese: MarcEdit 7 Update and bug fix notes

planet code4lib - Wed, 2018-04-11 14:46

(Posting this on my blog as well)

I have a necessary MarcEdit update planned for tonight: it fixes a problem introduced in the last update that I don’t believe anyone has seen (I hadn’t) until working with really large files. I found it yesterday and was up till around 3:30 this morning trying to pinpoint what was happening (because it didn’t make any sense to me).

Here’s the issue – in the last update, I added some code to ensure that the temporary files MarcEdit creates get cleaned up. The code works great, but what I’m finding is that it is almost impossible to determine when the garbage collector will finalize the disposal of the object in certain cases. This means that on very large files, the garbage collector is removing the modified file from the MarcEditor – so a change might not stick, or worse, only a partial file will be loaded (which is really obvious). This is how I noticed the issue. I was working with files over 400 MB testing the thread pooling added to the linked data process and couldn’t figure out why only partial results were being loaded. What added to the confusion was that a single message box anywhere in the file loading process would enable the process to finish and work as expected. It took a long time of poking around to find the problem and fix it. Unfortunately, it was too late to get everything done.

What I am doing – I’m issuing an intermediary fix for anyone who might be running into this problem with MarcEdit. The links are below. Tonight, I’ll have the formal build (which will have gone through the unit testing process, etc.) that I’ll post through the normal update process. Really sorry this one slipped by me – but my normal testing process is to work on sets of files up to 10,000 records, and unfortunately that didn’t appear to be a large enough set to reliably see this issue. And given that garbage collection works differently based on your system, it’s hard to know when (or if) anyone would see this issue.

To correct the problem, the meedit7.dll – the file that handles all global editing – now sets a preservation bit on files that need to exist outside of the calling process. This removes the problem. It may mean that a temp file or two stays on the machine – but honestly, Windows 10 and current OSes like macOS provide options that let the operating system manage temp files, and MarcEdit provides a tool in the Help section to automatically clean all MarcEdit temp files. So, at this point, I think this is the best option going forward.

Since I’m releasing this early, I didn’t get an opportunity to formalize my change log and write the notes on what’s changed – there is a lot. You’ll see the following:

  • Verify URLs – there is now an option to manage the number of threads used. This is the first time this tool utilizes a thread pool to provide faster queries. I wouldn’t recommend using more than 10 threads (3-5 is a good number), as you could start to look like a denial-of-service attack to those you are checking.
  • Verify URLs – I’ve updated the stylesheet that generates the results. It’s easier to read, and the record #s can now be copied, so you could put these in a file and select those records through the Extract Selected Records Tool.
  • Build Links – I’m still experimenting with this, but I’m making it available because it works. This tool now uses threads to build links. I’m generally seeing a 15-20% improvement in speed when processing files. Not a big change, but I’m happy with it. I have some ideas I’m working on to improve speed further – they might show up in tonight’s release.
  • Regular Expression Store – it’s been updated. You can now add new metadata and search across multiple fields of metadata.
  • Replace Function: when using external file criteria, the tool had trouble if your files included BOM values. I’ve added code to filter these out.
  • Task Management – I’ve added an option in the Task Manager that allows a task to override the broker’s assessment and run the task using the older task method. Generally, the broker’s assessment results in significant speed gains, but when a task will touch every record, it may be faster to use the other method. This gives users control to use either.
  • Component updates: I’ve updated core components for the linked data tooling, the Saxon XSLT processor, and the JSON processing tools.

This intermediary update is version: 7.0.126 and is found in the normal download links:

I’m providing this download now for users who may be experiencing this issue (as it will interrupt current work) or who would like to test the new functionality (I would love some feedback). Tonight, the update will become official, roll over to version 7.0.127 (or 7.0.128), and initiate through the automated updating mechanism.

If you are unsure which version of MarcEdit 7 you have installed, you can check in two ways:

  1. Click on Help/System Information

    In the System Information box, you’ll see the Install type:
  2. The second method – open the Windows Control Panel and look at the program list. You will see the installed version listed in the program title:

It is important that you pick the version currently installed. While each version of MarcEdit shares the same components, registry entries vary by the version installed.

Let me know if you have questions.

–tr

Open Knowledge Foundation: Advancing in consolidating an open data community and practitioners in South America

planet code4lib - Wed, 2018-04-11 10:38
The case of ARTIGO 19 in Brazil and Datalat in Ecuador

Authors: Paulina Bustos (Artigo 19) and Julio López (Datalat)

This blog is part of the event report series on International Open Data Day 2018. On Saturday 3 March, groups from around the world organised over 400 events to celebrate, promote and spread the use of open data. 45 events received additional support through the Open Knowledge International mini-grants scheme, funded by Hivos, SPARC, Mapbox, the Hewlett Foundation and the UK Foreign & Commonwealth Office. The events in this blog were supported through the mini-grants scheme under the Open Science and Equal Development themes.

After almost 5 years of working in the open data movement, it feels like we have come to a crossroads. We are now wondering whether we should continue working on creating more general open data or whether we need to start opening data with specific topics in mind. The reality is that we need to do both. Very good examples of these two approaches are the Open Data Day events that happened in São Paulo and Quito this year. Here we describe both events, highlighting our learnings and outcomes.

Dados e Feminicídios (Data and Femicides)

Femicide is a serious problem in Latin America, and in Brazil the numbers are worrisome. According to a 2015 study, Brazil has the fifth-highest rate of femicides in the world. Because of this, we decided to work on open data from a femicides perspective. We chose Open Data Day to present our work and kick off a collaboration with publishers and users of data relating to femicides.

The event took place at MobiLab, a mobility lab in São Paulo. The objective of the event was to improve the quantity and quality of data related to femicides in Brazil. It was a private event that united people from government, civil society, and journalism working on the topic in Brazil. We had two main activities: presenting our research on data and femicides (the event included people who took part in the research), and an exercise to understand the barriers and problems around the usage and consumption of this data. As our next steps, we will work with these institutions to improve the quantity and quality of open data related to femicides.

Open Data Day Quito

Working towards consolidating an active community interested in open data was the goal for this year in Quito. Datalat and Medialab have organised this event together for 3 years; it usually includes workshops and talks, and mainly serves as a networking space for the community. This year, around 130 people got together at CIESPAL to celebrate open data. This blog post details what happened that day (in Spanish).

An opening panel set the tone for the event, with speeches from the national institute of statistics and local experts, including, for the first time, data-driven journalists. Our main insight is that open data had momentum in government in 2015, with many directives and regulations being implemented; however, that momentum slowly vanished. To regain it, it is necessary to promote and educate more about the benefits of using open data and, above all, to incentivize people to participate more in this public debate.

On the skills side, 4 workshops were run during the event by local organisations in their fields of expertise, including data mapping, open budgets, SDGs, and open research data. On this last topic, Datalat has advanced in creating a local chapter to organise the first OpenCon in Ecuador, which later this year will gather academics and professionals interested in open access, open data, and open education. A survey is available in Spanish for those who wish to join this effort.

Insights and outcomes

In the last couple of years we have worked on improving open data in general, but with this project and event we have realized the importance of an open data movement that works in parallel with thematic projects and research. Thematic events allow us to involve a greater range of people who can make a difference for open data.

Education is a huge part of an effective open data movement. We will not be able to advance towards effective use of open data in our region if we do not start educating a greater range of people from diverse sectors. We can all do our part to build an open data ecosystem in our communities. With this message, Datalat invites everyone to join efforts and work collaboratively towards a stronger and more diverse open data movement. What started as an effort of 5 people in Quito has turned into an effort of 28 organisations to hold a local event that raises the visibility of open data on the public agenda.

As we move open data forward in Latin America, we will keep working to advance both specific topics and general practices, always with education in mind.

Fiona Bradley: Hello world!

planet code4lib - Wed, 2018-04-11 08:25

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

William Denton: Sorting LCC call numbers in R

planet code4lib - Wed, 2018-04-11 03:02

Here’s the easiest way to sort Library of Congress Classification call numbers in R:

call_numbers <- c("QA 7 H3 1992", "QA 76.73 R3 W53 2015", "QA 90 H33 2016", "QA 276.45 R3 A35 2010")

library(gtools)

mixedsort(call_numbers)
## [1] "QA 7 H3 1992" "QA 76.73 R3 W53 2015" "QA 90 H33 2016" "QA 276.45 R3 A35 2010"

gtools is a package on CRAN (install it with install.packages("gtools")). The docs say about mixedsort and mixedorder:

These functions sort or order character strings containing embedded numbers so that the numbers are numerically sorted rather than sorted by character value. I.e. “Asprin 50mg” will come before “Asprin 100mg”. In addition, case of character strings is ignored so that “a”, will come before “B” and “C”.

(I don’t know why “Aspirin” is misspelled.)

If you have a data frame (df) with column call_number then you would use mixedorder to sort the whole thing by call number thusly:

df[mixedorder(df$call_number), ]
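For instance, with a small made-up data frame (the titles are invented; the call numbers reuse the examples above):

# Sample data; sorting by call number puts QA 7 before QA 76.73 and QA 276.45.
df <- data.frame(
  title = c("A statistics text", "An algebra text", "An R text"),
  call_number = c("QA 276.45 R3 A35 2010", "QA 7 H3 1992", "QA 76.73 R3 W53 2015"),
  stringsAsFactors = FALSE
)

df[mixedorder(df$call_number), ]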

I asked about this on Stack Overflow and on the Code4Lib mailing list last July, then I went on vacation and sort of forgot about it. Nine months later, I thanked Li Kai, who pointed me to a Stack Overflow answer that solved my problem and let me then answer my own question.

Unrelated library sign.

DuraSpace News: Register Today: “Making DSpace Your Own” webinar

planet code4lib - Wed, 2018-04-11 00:00

There are a few spots left for the DuraSpace community webinar, register today!

“Making DSpace Your Own” on Tuesday, April 24, 2018 at 12:00 p.m. EST.
Presented by Terry Brady, Georgetown University Library, Applications Programmer Analyst, Department of Library Information Technology

While there are many exciting changes under development with the future DSpace 7 User Interface, this presentation will describe how to get started TODAY using DSpace 6, the current version in production.

LITA: #LITAchat – ITAL March Issue and Submitting to the Journal

planet code4lib - Tue, 2018-04-10 20:52

The March issue of ITAL is out now!

Information Technology and Libraries (ITAL) publishes material related to all aspects of information technology in all types of libraries. Learn more about submitting to the journal.

Join LITA members and colleagues on

Friday, April 13, 1:00-2:00pm EST

on Twitter to discuss the March issue of ITAL with editor Ken Varnum and the issue’s authors, and ask questions about submitting to the journal and the publication process.

To participate, launch your favorite Twitter mobile app or web browser, search for the #LITAchat hashtag, and select “Latest” to follow along and reply to questions asked by the moderator or other participants. When replying to the discussion or asking questions, add the hashtag #LITAchat.

See you there!

Dan Cohen: What’s New, Episode 14: Privacy in the Facebook Age

planet code4lib - Tue, 2018-04-10 20:44

On the latest What’s New Podcast from Northeastern University Library, I interview Woody Hartzog, who has a new book just out this week from Harvard University Press entitled Privacy’s Blueprint: The Battle to Control the Design of New Technologies. We had a wide-ranging discussion over a half-hour, including whether (and if so, how) Facebook should be regulated by the government, how new listening devices like the Amazon Echo should be designed (and regulated), and how new European laws that go into effect in May 2018 may (or may not) affect the online landscape and privacy in the U.S.

Woody provides a plainspoken introduction to all of these complicated issues, with some truly helpful parallels to ethical and legal frameworks in other fields (such as accounting, medicine, and legal practice), and so I strongly recommend a listen to the episode if you would like to get up to speed on this important aspect of our contemporary digital lives. Given Mark Zuckerberg’s testimony today in front of Congress, it’s especially timely.

[Subscribe to What’s New on iTunes or Google Play]

LITA: LITA Education Call for Proposals

planet code4lib - Tue, 2018-04-10 16:49
What library technology topic are you passionate about? Have something to teach?

The Library and Information Technology Association (LITA) invites you to share your expertise with a national audience! For years, LITA has offered online learning programs on technology-related topics of interest to LITA members and the wider American Library Association audience.

Submit a proposal by April 30th, 2018
to teach a webinar, webinar series, or online course for Summer/Fall 2018.

We seek and encourage submissions from underrepresented groups, such as women, people of color, the LGBTQ+ community, and people with disabilities.

All topics related to the intersection of technology and libraries are welcomed. Possible topics include, but are not limited to:

  • Visualization
  • Privacy and analytics
  • Data librarianship
  • Technology spaces
  • Ethics and access
  • Project management
  • Augmented and virtual reality
  • Data-driven decision-making
  • Tech design for social justice
  • Diversity in library technology
  • Collection assessment metrics beyond CPU
  • Government information and digital preservation

Instructors receive a $500 honorarium for an online course or $150 for webinars, split among instructors. For more information, access the online submission form. Check out our list of current and past course offerings to see what topics have been covered recently.

Proposals will be evaluated by the LITA Education Committee, and each will be assigned a committee liaison, who is responsible for contacting you no later than 30 days after your submission to provide feedback.

We’re looking forward to a slate of compelling and useful online education programs this year!

Questions or Comments?

For all other questions or comments related to the Education Call for Proposals, contact LITA at (312) 280-4268 or Mark Beatty, mbeatty@ala.org

District Dispatch: Libraries Ready to Code to release beta toolkit at 2018 Annual Conference

planet code4lib - Tue, 2018-04-10 16:07

The Libraries Ready to Code (RtC) cohort announced in October 2017 is going strong. The 27 participating school and public libraries are in the midst of implementing their projects. In part, these projects were developed to test an RtC library program framework that fosters computational thinking (CT) literacies among youth. The framework grew from ALA’s ongoing RtC initiative and is the basis for a collection of tools and resources to support any library that wants to offer these types of youth programs. ALA continues to collaborate with Google on the initiative.

Release of the RtC beta toolkit is now scheduled for June 2018 during ALA’s Annual Conference. Photo credit: Code.org

Cohort projects range from programs that engage preschoolers and their families in computational thinking literacy activities to teen-centered projects where teen facilitators teach others about computers and coding. In addition to facilitating their youth programs, the cohort meets weekly with the RtC team to delve into RtC concepts that are integral to designing and facilitating successful CT literacy activities. These concepts, a result of the RtC team’s previous work, are:

  • providing and creating inclusive learning environments;
  • connecting youth interests and emphasizing youth voice;
  • engaging with communities and families;
  • demonstrating impact through outcomes.

Now the cohort and the RtC team are compiling what they’ve learned, along with feedback collected weekly, into a toolkit that will be widely available this fall. Originally, a beta version of the toolkit was planned for launch during National Library Week. However, to be responsive to cohort feedback and project data, the RtC team is rethinking strategies for toolkit design so that it successfully reflects the work happening on the ground with the cohort. As a result, the beta launch is now scheduled for June, during the ALA Annual Conference.

The RtC team wants to make sure the materials are presented in a way that supports the needs of library staff and meets the newly developed mission and vision of the project.

Mission
RtC supports library staff to facilitate computational thinking opportunities for youth in ways that are grounded in research, aligned with library core values and can broaden participation.

Vision
All youth have access to high-quality informal and formal opportunities from preK-12 to engage in computational thinking as a critical literacy, developing knowledge, skills and dispositions that enable them to take advantage of and make informed decisions about their future.

While the beta launch is in the works, cohort members still have a lot to share. That’s why starting with National Library Week, this blog will publish at least one RtC post a week. These posts, written by cohort members, will include videos of projects, articles and photos about what library staff in the cohort are learning, and audio interviews with cohort members. The posts will run until Annual Conference.

When the beta toolkit launches, there will be a feedback mechanism enabling anyone who tests the materials to let the RtC team know what works, what doesn’t work, and what’s missing. That feedback will guide revisions and ensure that the final product provides relevant and useful resources and support materials for the library community.

The RtC team and cohort members are excited about what the final toolkit will include. Stay tuned for updates, and don’t miss the weekly cohort posts starting today.

The post Libraries Ready to Code to release beta toolkit at 2018 Annual Conference appeared first on District Dispatch.

Open Knowledge Foundation: Local open mapping initiatives in Rwanda and Nicaragua

planet code4lib - Tue, 2018-04-10 10:32

This blog is part of the event report series on International Open Data Day 2018. On Saturday 3 March, groups from around the world organised over 400 events to celebrate, promote and spread the use of open data. 45 events received additional support through the Open Knowledge International mini-grants scheme, funded by Hivos, SPARC, Mapbox, the Hewlett Foundation and the UK Foreign & Commonwealth Office. The events in this blog were supported through the mini-grants scheme under the Open Mapping theme.

Local open mapping initiatives are the ones that give flavor to the world of mapping, perhaps because you end up with a product and generate data that directly influence your community. Two chapters of the YouthMappers network embraced this feeling and carried out two events during this year’s celebration of Open Data Day: two projects at two different sites, with themes that converge on the same goal, the development of their local communities.

Let’s start with the YouthMappers at INES-Ruhengeri. They created open data for the Kangondo slum neighborhood in the city of Kigali, Rwanda. Kangondo is the largest slum in Rwanda, located in the Grade A (upper) area of the city of Kigali. The houses in the area are not well planned, and the crowded houses lack basic services such as potable water, adequate sanitation, and adequate sewerage. The open data created will be used for the neighborhood’s slum-upgrading process. For the chapter, this activity was a good opportunity to share with the attending authorities the importance of open data for the development of the local community, and also a time to discuss the use of open data to address local development challenges.

During the event, participants were shown how to create open data using the OpenStreetMap online mapping platform. Participatory mapping was identified as a powerful way to demonstrate community development challenges through evidence. However, it was also revealed that a big gap remains in obtaining open data. The YouthMappers of INES-Ruhengeri were commended for their initiative in creating open data, and the representatives of the authorities agreed to use that data to make evidence-based decisions.

As a result, the YouthMappers at INES-Ruhengeri created 1,374 data points, covering slum homes, roads, and sidewalks.


On the other side of the world we find the YEKA Street MGA YouthMappers chapter of the Faculty of Architecture at the National University of Engineering in Managua, Nicaragua. They organized a mapathon to finish the mapping for one of their projects: the categorization and inventory of houses with vernacular construction systems in the north of the country, specifically within the Municipality of Condega in the department of Estelí. The project was led by the Asociación Mujeres Constructoras de Condega (Condega Women Builders Association), who have worked over the years to restore these techniques’ reputation and their importance within the culture and history of Nicaragua, a reputation that suffered after the earthquake that struck the country in December 1972.

The purpose of this project is to obtain a count of the buildings in the area with earth-based construction systems, a classification of those buildings by constructive typology, and the identification of families or people in the area engaged in traditional construction with these systems.

During the mapathon they touched on topics relevant to the day, such as: what is open data and why is it important? How do OpenStreetMap and Mapbox, together with organizations such as YouthMappers and YEKA Street MGA, contribute to this ideal? There was also an explanation of what the project consisted of and its purpose, and training for the participants in the use of the OpenStreetMap platform.

The call for the event was very well received, and participation was incredible. Compared with other events in their context, organizing this kind of event is stressful because of the difficulty of raising awareness among the academy and students about the importance of volunteering and open data. That is why it felt incredible to see such participation that day and to add two new members to the YouthMappers chapter. The purpose of the mapathon was not completed in its entirety, due to technical problems and the poor satellite imagery they had to work with. But they were satisfied to have opened a space within the academy where open data, and the programs that support it, can take hold.

District Dispatch: Ready to Code library prioritizes community engagement

planet code4lib - Tue, 2018-04-10 07:45

Today’s guest post comes from Susan P. Baier, Director of McCracken County Public Library. This post is the first in a series by Libraries Ready to Code cohort participants, who will release their beta toolkit at ALA’s 2018 Annual Conference.

A key focus of RtC is community engagement, and our library made this a priority as we designed and facilitated coding classes for youth. Increased engagement leads to increased understanding of and support for the project and achieves buy-in from staff, library administration, and the community as a whole.

Lea Wentworth, McCracken Public Library Children’s Librarian, meets with community members at neighborhood makerspace Sprocket.

Our library is governed by a five-member Board of Trustees, and obtaining their buy-in and generating their enthusiasm about the project was critical. In addition to serving as policy and budgetary decision makers, they are among our staunchest community advocates. My project partner, Youth Services Librarian Lea Wentworth, spoke at a recent Board of Trustees meeting about our coding classes and RtC concepts. She also showed the trustees Dot and Dash, the robots we use in some of our programs. The trustees got to practice coding and computational thinking of their own! The trustees had fun, but they also left with a better understanding of the purpose and impact of our project. They are now better equipped to advocate for the value of library coding programs.

Partnerships and outreach are core components of our project, and our classes were designed to go outside the library and into the community. Besides bringing the classes to where our target audience is already gathered, outreach allows the library to create new partnerships and strengthen and reimagine existing ones.

The first phase of our coding classes took place at our local Boys and Girls Club, allowing us to reach a diverse group of youth from communities underrepresented in computer science. In the second phase we are facilitating classes at Sprocket, a newly opened community makerspace in our city. The Paducah Area Chamber of Commerce recently held its Business and Education Partnership meeting at Sprocket, and we were there to speak to attendees from businesses and schools about our project and to demonstrate coding lessons we facilitate with participating youth. Workforce development and entrepreneurship are high-interest topics for these stakeholders. Our message, that free library coding classes for youth are one step in developing a local talent pipeline for the future, resonated with them.

Our community engagement efforts also resulted in mentors and guest speakers for our youth. A local woman employed as a computer programmer with Computer Services, Inc., was a volunteer facilitator at several of our classes at the Boys and Girls Club. Next month, the director of information technology for the St. Louis Cardinals will speak to our young coders via videoconference.

I recently had a community member tell me that our participation in RtC caused him to look at the library with a new, fresh perspective. To me, that was the ultimate compliment. RtC granted us the opportunity to reinvent and reposition ourselves in the community, and to be viewed as an innovative partner in education and workforce development.

The post Ready to Code library prioritizes community engagement appeared first on District Dispatch.

