
Feed aggregator

Terry Reese: 2017 Philmont Experience

planet code4lib - Wed, 2017-07-05 05:18


Since Summer 2016, troop 73 has known that we would have an opportunity to go to the Philmont Ranch in New Mexico.  We had won the lottery, and had 24 spots available to our troop.  This means that we had the potential to take 20 kids and 4 adults in two crews out to New Mexico in 2017.  And from the moment I found out that we would have the opportunity to travel, I knew that I would go.  As one of the Assistant Scout Masters in our troop, I knew that I had the certifications that would be needed to attend…but more importantly, I have a son that would be just the right age to make the trip.  At 15, going on 16, my oldest would be the perfect age to really enjoy what this experience had to offer.

Now, I should note, as excited as my son was to make this trip (and he was – he fund-raised nearly all of the $800 camp registration), I was probably more so.  Since leaving Oregon, I think I’ve had the hardest time out of my family adjusting to Ohio.  I love being outdoors, and while Ohio does have some very nice areas to camp and hike (really, it does), they aren’t the same.  I miss the mountains, I miss the forest, I miss the towering fir trees that keep the forests green year round.  That was my childhood…it represents some of my favorite memories with my family, with my father.  And these were some of the memories that I hoped to build with my son…and hoped to relive a little bit while I spent some time in the mountains.

Preparation for the trip

Over the year, there was a lot of preparation that had to be undertaken.  Equipment to be purchased, plans to be made.  Some of the preparation was getting the kids ready to carry a backpack for eleven days.  Some of the preparation was getting ready to hike for 5 hours every day.  Some of the preparation was learning skills that would be required when camping in the back country for eleven days.  Lots of preparations.

Then there was personal preparation.  I turned 40 this year, and one of the things that I got into my head was that for Philmont, I was going to grow out my beard and hair.  Why?  Well, it was fun.  I haven’t grown a beard in close to 20-25 years, so it would be something different.  But it was also somewhat practical.  With my hair long, and my beard full, I wouldn’t have to worry about the sunburns that everyone else in my Crew would end up worrying about.  So how did it go?  Quite nicely.  I kept a photo record from Feb. 2017 when I started.

February 2017

March 2017

April 2017

May 2017

June 2017

The last picture is in the Chicago Union Station with my son.  I was quite pleased with my final Philmont beard.

The Trip

The trip to and from Philmont ended up taking about 15 days.  We travelled to Philmont via the train, travelling by bus from Columbus, OH to Toledo, and then by train from Toledo, OH to Raton, NM.  For many of the kids, this represented the first time that they’d been on a train, crossed the Mississippi River, seen the plains of Kansas…it’s a great way to see the country…particularly the fly-over country.  On the train, we saw fields of corn, a tremendous lightning storm near Topeka, the snow-covered mountains in Colorado, and a bear as we neared Raton.  The kids spent a lot of time going to and from the observation car, and generally enjoying the ride.  I took a few pictures of the train trip across the country.


For us, the trip officially started in Raton, NM.  This is where the Philmont buses picked us up.  When you get to Philmont, everything is crazy.  To start with, you have to get registered, there is equipment to pick up…lots of things to get done, including a shakedown with the ranger to make sure that everyone has everything that they will need for the trip.  For the kids, this part of the trip is probably the most boring.  We spend a lot of time sitting, a lot of time talking to the ranger about bears, bear protocols, snakes, water purification, etc.  The ranger spends their time telling us all the terrible things that could happen out in the woods (which is fun, because some of the kids are already worried about bears and snakes) and the adults spend our time trying to keep them from going crazy. 

You spend one day in base camp, and then you are on the bus.

For our Philmont trip, we hiked itinerary 9.  This would take us through Old Abreu, Crags, Beaubien, Black Mountain, the Red Hills, Cyphers Mine, Cimarroncito, Upper Clarks Fork, and then over the Tooth of Time to Base Camp.  In all, it was a 61-mile itinerary, though my Fitbit with GPS clocked us way over that mileage.  I actually journalled the trip, in part because I wanted to remember what it was like, and in part because I wanted to be able to give the parents of the kids in my Crew a taste of what the trip was like for the kids.  And it was glorious.  We climbed multiple peaks, including Red Hill and Mount Phillips.  For almost all the kids, every day represented a new tallest mountain.  We camped over 9,000 ft four times, and over 10,000 ft once.  We danced on the Tooth of Time.  For many of the kids, it was the first time that they rode a horse, or had an opportunity to rock climb, or walk through an old gold mine.  For eleven days, we watched the boys grow, mature, and wonder at the beauty of the New Mexico countryside.  And I got to do this with my son – to make stories that only the two of us share through this very unique experience and bond.  I know that I’ve been told by every kid in our troop that has done Philmont that it’s a life-changing event.  It almost has to be…you are forced to push yourself in ways that you might not have thought possible, and bond with your Crew through this shared experience.  But I think that as adults, we get just as much.  You can’t help but be transported back to your youth.  For me, it took me back to camping with my family, hunting with my dad…it let me slow down and appreciate how lucky I was to be spending this time with my own son.

We took a lot of pictures throughout the trip (hundreds).  I pulled a few of our time at the ranch.

Probably my two favorite pictures though happened off the trail.  The first is of my crew…

We’d just come off the Tooth of Time, down the Ridge Trail, and into basecamp.   We were tired, dehydrated, and excited to be home.  We were also a little sad that it was all over.  While the kids couldn’t talk enough about what they wanted to eat (trail food definitely gets old and hard to stomach), there was also a realization that we were done and would be going home in a couple of days.  It was bittersweet for me as well.  While it was nice to have a cot to sleep in, and some real coffee to drink…I really wasn’t ready to be done.  Even today, as I write this, I wish more than anything that I could get back out on the trail and just walk in the woods.

The other photo is this one:

This is a picture of me and my son, as soon as we got off the trail.  We sent it to my wife…our picture as 2017 Philmont finishers.  I’m incredibly proud of him, and what he’s accomplished.

And that pretty much wrapped up our trip.  Of course, I’m leaving a lot of things out.  I didn’t talk about the poison oak that I got into, and the rash that covered almost my entire body (that was fun), or the numerous trips our crews had to the trail doctors (I did mention this trip is hard), or the logistics of digging cat holes, or eating trail food and drinking sketchy water for days.  No doubt – it’s a challenging trip.  I’ve done this kind of hiking before (in the Pacific Northwest), and while Philmont is easier (more controlled), it’s still no joke.  But if anyone asks – it is so very worth it.  And I’ll be back.  I have a date with Philmont in 2020, when I’ll take my youngest son, when he’s 15.  And I’m sure the experience will be just as challenging, just as enjoyable, and completely different.  And you know what, I can’t wait.


Cynthia Ng: Learning to be a Systems Administrator for Horizon ILS

planet code4lib - Wed, 2017-07-05 05:01
This is one of those presentations that never was, but I thought it would be interesting to write up anyway as a reflective piece. Interestingly, I didn’t find out that I would be the library’s ILS administrator until after I started the job. It didn’t really make any difference, and if anything, I was glad … Continue reading Learning to be a Systems Administrator for Horizon ILS

Tara Robertson: digital or “inclusive” doesn’t always mean accessible

planet code4lib - Tue, 2017-07-04 16:59

Rajiv Jhangiani’s post Just how inclusive are “inclusive access” e-textbook programs? points out the problems with mandatory course fees for all students to lease access to online textbooks. This so-called “inclusive access” model has been piloted at Algonquin College with the e-textbook platform provider Texidium.

Too often we conflate digital with being accessible. Here are my thoughts on the accessibility of e-textbooks for students with print disabilities. I left this as a comment on Rajiv’s post.

When talking about inclusion and accessibility we can’t forget about students with print disabilities. I’ve seen two major accessibility problems with proprietary “inclusive access” models like Texidium.

First, sometimes the platform isn’t accessible. This is more problematic than a print textbook, as there are established workflows for format-shifting print content for students with print disabilities. What does an accessible format look like for an online “book” that’s on an inaccessible platform? A whole new accessible website? Also, there’s really no excuse for publishers who are building inaccessible web platforms in 2017.

Second, sometimes the content isn’t fully accessible. Many of the online publisher textbooks I’ve seen don’t have image descriptions, have math content that’s not in MathML (and therefore cannot be read by a screenreader), or have videos that lack captions. Again, there’s really no excuse for publishers producing content on the web that is not accessible.
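Some of these content gaps can be caught automatically. As a minimal sketch (not from the original post; the page markup and file names below are invented for illustration), Python’s standard-library HTML parser can flag images that ship without alt text:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # An absent or empty alt attribute means a screen reader
            # has nothing to announce for this image.
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "<no src>"))

# Hypothetical textbook page: one image described, one not.
page = """
<p>Chapter 1</p>
<img src="figure1.png" alt="Bar chart of enrolment by year">
<img src="figure2.png">
"""

checker = AltTextChecker()
checker.feed(page)
print(checker.missing_alt)  # the images with no description
```

A check like this only catches the absence of a description, not its quality; a human still has to judge whether the alt text is meaningful, and captions and MathML need their own checks.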

A couple of years ago I thought that publishers might not be aware of accessibility, but now I believe that they don’t care. I believe they don’t care because it cuts into their profits and they are not responsible for the cost of remediating inaccessible platforms and inaccessible content to provide full access to students with print disabilities.

When we talk about accessibility and open textbooks we usually mean financial accessibility, which is important. It’s also important that we make choices that don’t disable students in our classrooms.

If your college or university is going down this path it is critical to put clear language around accessibility (like WCAG 2.0 compliance) in the procurement documents and in the contracts with vendors. Benetech has some great resources on creating or purchasing content that is born accessible. Their checklist on what to look for in e-books is particularly useful.

It’s also important to include clear information about what the publisher will do if the content is not accessible. Who is responsible for the costs of making this content accessible? If the Disability Service Office, or a service provider like CAPER-BC, needs to do work to make the content accessible who do they contact for the publisher files? What is the turnaround time for this?

Moving to e-textbooks is not necessarily an improvement for students with print disabilities. Digital or “inclusive” doesn’t always mean accessible.

Open Knowledge Foundation: Half of the world languages are dying really fast – how you can save yours

planet code4lib - Tue, 2017-07-04 10:39

Languages are a gateway to knowledge. How can digital tools be used to help native language speakers access and contribute knowledge? In this blog, Subhashish Panigrahi shows how endangered languages can be documented and preserved using open standards and tools.

The world’s knowledge, accumulated and encoded over the ages in different languages, is valuable for learning about other cultures, traditions, and ways of life. But not every language is privileged to be a language of knowledge and governance.

Almost half of the world’s 6,909 living languages will vanish within a century. The most linguistically diverse places, like Papua New Guinea, are also the most dangerous places for languages. Every two weeks a language dies, and with it a wealth of knowledge is lost forever. In my home country India alone, there exist more than 780 languages. The rate at which languages are dying here is extremely high: over 220 languages from India have died in the last 50 years, and 197 languages from the country are identified as endangered by UNESCO.

Word cloud depicting several Indian languages in their native scripts

With these languages dying, all the knowledge preserved in them dies as well.

Languages that lack tools for everyone to access knowledge and contribute to them often go out of use. India, for example, is home to the highest number of visually impaired and illiterate people in the world: more than 15 million Indians are visually impaired and 30% are illiterate. Yet few digital accessibility tools exist for either web or mobile, even though there are about 450-465 million internet users and 60% of them are mobile users. In fact, accessibility tools for most Indian languages are unaffordable and proprietary in nature.

There have been some efforts by the Indian government—like the Central Institute of Indian Languages (CIIL)—to grow the 22 officially recognized languages and some indigenous languages. Founded in 1969, CIIL has been working to deepen research on Indian languages, and a program called “Protection and Preservation of Endangered Languages of India” was introduced in 2014 to help CIIL specifically begin several projects for the conservation of endangered languages.

Only 10-30% of India’s population can understand English, which is predominantly the language of the Internet. A recent report published by Google and KPMG states that more than 70% of India’s Internet users trust content in their native language over English. The lack of native language content and of electronic accessibility tools therefore plays an important role in stopping a large number of people from accessing information and contributing to the knowledge commons.

When confronted with a problem of this magnitude, there are a few vital things that must be done to preserve and grow dying languages. Creating audio-visual documentation of some of the most important socio-cultural aspects of the language, such as storytelling, folk literature, oral culture and history, is a start. When done by native language speakers, along with annotations in a widely-spoken language such as English or Hindi, it is one way of creating digital resources in a language. These resources can be used to create content and linguistic tools to grow the languages’ reach.

Sadly, there is little focus from the central government on many of these languages, but there are some efforts from several organisations to document native languages.

There is something every single individual who speaks a less-spoken language, or is in contact with a native speaker of an endangered or indigenous language, can do. Languages that are dying need digital activism to grow educational and accessibility tools. That can happen when more public and open repositories like dictionaries, pronunciation libraries, and audio-visual content are created.

Wiki Weekend Tirana 2016 (photo: Anxhelo Lushka)

However, not many people know how to contribute in a form that can be used by others to grow resources in a language. Especially in India, contributing to a language is largely skewed by the notion of producing and promoting literature. But in a country where more than 30% of the population is illiterate and a large number of languages are spoken languages (without a written counterpart), it is important that language content is predominantly audio-visual and not just text-based. More importantly, there is a need for openness so that the whole idea of growing languages does not get jeopardized by proprietary methods and standards.

There are plenty of ways anyone can contribute to documenting a language, depending on their skillset.

Every language has a wealth of oral literature, which is the most crucial thing to document for a dying language. Several cultural aspects like folk storytelling, folk songs, other narratives like cooking, local festival celebration, performing art forms and so on can be documented in audio-visual forms.

Thanks to cheaper smartphones and an ocean of free and open source software, anyone can now record audio, take pictures and shoot videos in really good quality without spending anything on gear. There are open toolkits that aggregate open source tools, educational resources and sample datasets that one can modify and use for their own language.

A home recording setup for the Kathabhidhana project (photo: Subhashish Panigrahi)

In the age of AI and IoT, one can indeed build resources that will make their languages more user friendly. As explained earlier, much of the screen reader software that visually impaired or illiterate people would use does not exist because of the lack of good quality text-to-speech engines. Creating pronunciation libraries of words in a language can help a lot in building both text-to-speech and speech-to-text engines, which can eventually improve screen readers and other electronic accessibility solutions. Cross-language open source tools like LinguaLibre, Kathabhidhana, and Pronuncify help record large numbers of pronunciations. Similarly, for languages with an alphabet, educational resources for language learning can be created with open source tools like Poly and OpenWords.
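For illustration only (the Odia words, file names, and column layout here are my own assumptions, not the actual format used by Kathabhidhana or Pronuncify), a pronunciation library can start as nothing more than a CSV manifest pairing each word with a recording:

```python
import csv
import io

# A hypothetical pronunciation-library manifest: one row per recorded
# word, naming the language, the audio file, and the speaker. A plain
# CSV like this is an open, tool-agnostic starting point for building
# text-to-speech or speech-to-text datasets.
manifest_csv = """word,language,audio_file,speaker
ଘର,odia,ghara.ogg,speaker01
ପାଣି,odia,pani.ogg,speaker01
"""

manifest = list(csv.DictReader(io.StringIO(manifest_csv)))
print(len(manifest), "recordings;", "first file:", manifest[0]["audio_file"])
```

Keeping the manifest in plain text and an open audio format means anyone can add rows, and downstream tools can consume the collection without proprietary software.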

Building these resources might not transform the state of many endangered languages quickly, but it will certainly help gradually improve the way many people access knowledge in their language.

The work of some of the groundbreaking initiatives like the Global Language Hotspots by the Living Tongues Institute for Endangered Languages and National Geographic can be used to start language documentation projects. But it is always recommended to make the work output available with open standards so that others can build solutions on top of existing interventions.

However, there is not much information about the actual outcomes of government-led endangered language documentation activities, and especially about whether there is any open access to the published works. The “People’s Linguistic Survey of India” (PLSI), a non-governmental survey, was conducted during 2012-13 under the leadership of Ganesh Devy.

A few years back, Gregory Anderson, founder of Living Tongues, and Prof. K. David Harrison, associate professor at Swarthmore College in Pennsylvania, US, discovered a hidden language called Koro, spoken in Arunachal Pradesh. In 2014, Marie Wilcox, the last fluent speaker of Wukchumni, a North American language, created a dictionary to keep her language alive. Imagine where these languages would have ended up if Anderson, Harrison, and Marie had not taken these baby steps back then.

Hydra Project: Samvera Virtual Connect – 18th July

planet code4lib - Tue, 2017-07-04 09:00

Don’t forget that this year’s Samvera Virtual Connect is coming up soon!

Tuesday, July 18, 2017

11:00 AM – 2:00 PM EDT / 8:00 AM – 11:00 AM PDT / 16:00-19:00 BST / 15:00-18:00 UTC

Samvera Virtual Connect is an opportunity for Samvera Community participants to gather online to touch base on the progress of community efforts at a roughly halfway point between face-to-face Samvera Connect meetings. Samvera is a growing, active community with many initiatives taking place across interest groups, working groups, local and collaborative development projects, and other efforts, and it can be difficult for community members to keep up with all of this activity on a regular basis. SVC will give the Samvera Community a chance to come together to catch up on developments, make new connections, and re-energize itself towards Samvera Connect 2017 in Evanston in November.

For more information, and to register, go to SVC’s wiki page.

The post Samvera Virtual Connect – 18th July appeared first on Samvera.

LITA: The Lost Art of Conversation

planet code4lib - Tue, 2017-07-04 00:44

Technology is often viewed as a double-edged sword: it makes life easier, but it also has the power to threaten jobs, privacy, and human connections.  Yale University & Oxford’s Future of Humanity Institute polled industry experts in 2016 and found that machine intelligence (A.I.) could replace all human jobs by 2136.  Despite statistics like this, I’d like to make the case that there are many ways that technology actually revitalizes communication.  The next few posts will explore tech tools, like podcasts, that encourage rather than diminish human connections.

Podcasts are everywhere; even TV legend LeVar Burton recently announced his upcoming podcast: “LeVar Burton Reads.” That’s right, the host of Reading Rainbow who encouraged us to read as children is back, 2.0 style, with a podcast where he will read a “piece of short fiction.” A very brief history of the term “podcast”: Ben Hammersley, a British journalist, coined the word in 2004 by combining “pod,” as in Apple’s portable music player the iPod, and “cast” from broadcast. Unlike a radio broadcast, podcasts are electronic files that can be downloaded or streamed at any time. As a tech tool, podcasts are affordable and can be used for either instruction or entertainment.

A public library can use them to interview authors and tackle issues important to the community.  The New York Public Library podcasts have featured interviews with NBA star and author Kareem Abdul-Jabbar and tackled the issue of death with Werner Herzog.  Of course most libraries cannot attract such stars, but they can interview community leaders and local authors.  Law firms can use podcasts to attract clients or explore legal issues.  It’s a no-brainer that academic libraries can offer podcast lectures for students.

So podcasts are versatile but what tools are required to bring them from conception to fruition? Assuming your organization has established a theme (for help on this check out this In-Depth Guide), the required technology is actually very simple.

Some basics:

  1. Storage. Find a home for your podcasts. Third party sites can be used to host your podcast (Soundcloud, Libsyn) but most libraries already have a website and a podcast page (or section) can easily be added.
  2. Microphone. Many computers come with a built-in microphone but audio quality is very important, so it’s worth it to invest in a mic. Amazon sells mics for as low as $8; look for microphones described as “condenser” or “dynamic” and those that plug directly into your computer’s USB port.
  3. Headphones. Most experts agree that sound quality is better when the headphone and mic are separate.
  4. Software. The actual recording requires software. Free options include Audacity and Avid’s Pro Tools First. Test them out and see which sounds best. If the funds exist to purchase software, consider Adobe Audition or Magix Sound Forge Audio Studio. Don’t want to invest in software? Many public libraries have a recording studio that you can use free of charge, and an added bonus: knowledgeable staff to assist in the production.
  5. Extras! These are not essential but cheap and can enhance the recording- pop filter & suspension boom.
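Once episodes are recorded and stored, podcast apps discover them through an RSS 2.0 feed whose items carry an enclosure tag pointing at each audio file. As a minimal sketch (the show name, URLs, and file sizes below are hypothetical), such a feed can be generated with Python’s standard library:

```python
import xml.etree.ElementTree as ET

def build_podcast_feed(title, link, episodes):
    """Build a minimal RSS 2.0 feed; each episode needs an
    <enclosure> element so podcast apps can locate the audio file."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for ep in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep["title"]
        # url, length (bytes), and type are required on <enclosure>
        ET.SubElement(item, "enclosure",
                      url=ep["url"], length=str(ep["bytes"]),
                      type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")

feed = build_podcast_feed(
    "Library Podcast",                # hypothetical show name
    "https://example.org/podcast",
    [{"title": "Episode 1: Local Authors",
      "url": "https://example.org/ep1.mp3", "bytes": 123456}],
)
print(feed[:60])
```

In practice a hosting service or CMS plugin usually generates this feed for you; the point is only that the feed, not the webpage, is what listeners’ apps subscribe to.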

A man wearing a radio hat

The Pew Research Center’s 2016 State of the News Media found that one in five Americans (ages 12 and older) has listened to a podcast in the past month, a 12% increase from six years ago.  When I was writing this I came across an article about “radio hats,” a functional fashion statement that allowed commuters to follow their favorite radio program while traveling.  Eventually this trend was replaced by transistor radios and, much later, today’s podcasts.

Much like a radio broadcast, podcasts cannot be automated; they rely on the audience feeling connected with the host, guests, and subject matter.  And you don’t need a huge budget or a recording studio; just a few basic tools can spread your organization’s message, develop community relations, and promote education.

Are you using podcasts? What have you found works or doesn’t? Any best practices to share?

OCLC Dev Network: CI and Dependency Management in Ruby

planet code4lib - Mon, 2017-07-03 18:00

Learn about continuous integration and dependency management in the Ruby programming language

Open Knowledge Foundation: Frictionless Data: Introducing our Tool Fund Grantees

planet code4lib - Mon, 2017-07-03 10:49

Frictionless Data is an Open Knowledge International project which started over 10 years ago as a community-driven effort of Open Knowledge Labs. Over the last 3 years, with funding from partners like the Sloan Foundation and Google, the Frictionless Data team has worked tirelessly to remove ‘friction’ from working with data. A well-defined set of specifications and standards have been published and used by organizations in the data space, our list of data packages has grown and our tools are evolving to cater to some of the issues that people encounter while working with data.
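As a small illustration of those specifications (the package name, file path, and fields below are invented for the example), a tabular Data Package is described by a datapackage.json whose resources can each carry a Table Schema:

```python
import json

# A minimal Data Package descriptor following the v1 specification:
# a package has a name and a list of resources, and a tabular
# resource can carry a Table Schema describing its fields.
descriptor = {
    "name": "country-gdp",              # hypothetical package name
    "resources": [
        {
            "name": "gdp",
            "path": "data/gdp.csv",     # hypothetical data file
            "schema": {
                "fields": [
                    {"name": "country", "type": "string"},
                    {"name": "year", "type": "integer"},
                    {"name": "gdp_usd", "type": "number"},
                ]
            },
        }
    ],
}

# The descriptor is plain JSON, so any language can read it.
print(json.dumps(descriptor, indent=2)[:80])
```

Because the descriptor travels alongside the data as plain JSON, tools in any of the Tool Fund languages (R, Go, Java, PHP) can read the same package without friction.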

Owing to our growing community of users and the need to extend implementation of the version 1.0 specifications to additional programming languages, we launched the Frictionless Data Tool Fund in March 2017. The one-time $5,000 grant per implementation language has so far attracted 70+ applications from exceptional developers all over the world.

At this time, we have awarded four grants for the implementation of Frictionless Data libraries in R, Go, Java and PHP. We recently asked the grantees to tell us about themselves, their work with data and what they will be doing with their grants. These posts are part of the grantee profile series, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved.


Ori Hoch: Frictionless Data Tool Fund PHP grantee

“I would also love to have PHP developers use the core libraries to write some more high-level tools. With the availability of the PHP libraries for Frictionless Data the task of developing such plugins will be greatly simplified”

From juggling fire clubs and working with data at The Museum of The Jewish People, to how he envisions use of the Frictionless Data PHP libraries to develop high level tools, read Ori Hoch’s profile to find out what he will be working on and how you can be a part of it.

Daniel Fireman: Frictionless Data Tool Fund Go grantee

“I hope to use the Tool Fund grant I received to bring Go’s performance and concurrency capabilities to data processing, and to have a set of tools distributed as standalone and multi-platform binaries.”

Read Daniel Fireman’s profile and find out more about his work on social network platforms, how he’s used Go to improve data transparency in Brazil, the challenges he has encountered while working with data, and how he intends to alleviate those challenges using his Go grant.

Open Knowledge Greece: Frictionless Data Tool Fund R grantee

“We are going to implement two Frictionless Data libraries in R – Table Schema for publishing and sharing tabular-style data and Data Package for describing a coherent collection of data in a single package, keeping the frictionless data specifications.”

Read Open Knowledge Greece’s profile and find out more about Open Knowledge festival that they will be hosting in Thessaloniki, Greece in 2018, plus, why they are excited to be implementing Frictionless data libraries in R and how you can contribute to their efforts.

Georges Labrèche: Frictionless Data Tool Fund Java Grantee

“Data is messy and, for developers, cleaning and processing data from one project to another can quickly turn an awesome end-product idea into a burdensome chore. Data packages and Frictionless Data tools and libraries are important because they allow developers to focus more on the end-product itself without having to worry about heavy lifting in the data processing pipeline.”

From travel to physics and astronautics, read Georges Labrèche’s profile and find out more about his interests and work at Open Data Kosovo, as well as how you can follow and contribute to his work around Frictionless Data’s Java libraries.


Frictionless Data is a project of Open Knowledge International. Interested in knowing more? Read about the standards, data and tools, and find out how to get involved. All code developed in support of this project is hosted on the frictionlessdata organization on GitHub. Contributions are welcome.

Have a question or comment?  Let us know on our Discuss forum or on our Gitter chat.

Evergreen ILS: Evergreen 3.0 development update #12

planet code4lib - Fri, 2017-06-30 21:39

Ducks army marching by Radoslaw Ziomber (CC-BY-SA)

Another 34 patches have made their way into Evergreen’s master branch since the previous update. This week’s batch includes some significant bug fixes and improvements, including:

  • The public catalog includes a feature to allow patrons to download a list of their past loans to a CSV file. This feature will now no longer time out for patrons who have hundreds of historical loans.
  • egGrids in the web staff client now know how to sort their rows client-side. This can be very useful for grids that are populated by user input (e.g., by scanning item barcodes into the Item Status page) or that are populated using APIs that do not currently support server-side sorting. To enable client-side sorting for a particular grid, a developer can turn on the clientsort grid feature like this: <eg-grid id-field="id" features="clientsort" items-provider="gridDataProvider" persist-key="circ.patron.holds" dateformat="{{$root.egDateAndTimeFormat}}">
  • Under some circumstances, the web staff client’s volume and copy editor could silently fail to add a copy record to an existing volume. This is now fixed.

Duck trivia

The fountain at the Peabody Hotel in Memphis, Tennessee, is visited by ducks every day. As the legend goes:

How did the tradition of the ducks in The Peabody fountain begin? Back in the 1930s Frank Schutt, General Manager of The Peabody, and a friend, Chip Barwick, returned from a weekend hunting trip to Arkansas. The men had a little too much Tennessee sippin’ whiskey, and thought it would be funny to place some of their live duck decoys (it was legal then for hunters to use live decoys) in the beautiful Peabody fountain. Three small English call ducks were selected as “guinea pigs,” and the reaction was nothing short of enthusiastic. Thus began a Peabody tradition which was to become internationally famous.
In 1940, Bellman Edward Pembroke, a former circus animal trainer, offered to help with delivering the ducks to the fountain each day and taught them the now-famous Peabody Duck March. Mr. Pembroke became The Peabody Duckmaster, serving in that capacity for 50 years until his retirement in 1991.
Nearly 90 years after the inaugural march, ducks still visit the lobby fountain at 11 a.m. and 5 p.m. each day.

Many thanks to Jeanette Lundgren for contributing this bit of trivia!


Updates on the progress to Evergreen 3.0 will be published every Friday until general release of 3.0.0. If you have material to contribute to the updates, please get them to Galen Charlton by Thursday morning.

Tara Robertson: How to organize an inclusive and accessible conference

planet code4lib - Fri, 2017-06-30 17:31

I was asked by Brady Yano to offer feedback on the awesome OpenCon Diversity, Equity and Inclusion report that will be published as a PDF document in the second week of July.

I love that OpenCon is making their values explicit and transparent and connecting them to how they do their work:

Central to advancing Open Access, Open Data, and Open Education is the belief that information should be shared in an equitable and accessible way. It is important to us that OpenCon reflects these values—equity, accessibility, and inclusion—both in our communities and in the design of our conference. We recognize that although the Open movements are global in nature, privileged voices are typically prioritized in conferences while marginalized ones are excluded from the conversation. To avoid creating an environment that replicates power structures that exist in society, OpenCon does its best to design a meeting that (1) is accessible and inclusive, (2) meaningfully engages diverse perspectives, and (3) centers conversations around equity.

I also love that they’re being transparent about their process and self assessment publicly. I’d love to see more organizations do this.

In preparing feedback on this document I found myself referencing other documents that I’ve found useful. April Hathcock recently seeded a list of women who work in “open” and put it out for the wider community to add to.

Inspired by April’s approach I’ve put some resources for event organizers on inclusion and accessibility together in a Google Doc. This is open for editing, so please add other resources, beef up the annotations or organize the content in a more useful way.


DuraSpace News: Announcement: Fedora API Specification Initial Public Working Draft

planet code4lib - Fri, 2017-06-30 00:00

From Andrew Woods, Fedora Tech Lead, on behalf of the Specification Editors and the Fedora Leadership

After much discussion and iteration, the initial public working draft of the Fedora API Specification is now available for broader public review.

DuraSpace News: LAST CHANCE to Save on VIVO Conference Registration

planet code4lib - Fri, 2017-06-30 00:00

Registration prices for the VIVO Conference go up from $275 to $375 after today, June 30.

David Rosenthal: "to promote the progress of useful Arts"

planet code4lib - Thu, 2017-06-29 15:00
This is just a quick note to say that anyone who believes the current patent and copyright systems are working "to promote the progress of useful Arts" needs to watch Bunnie Huang's talk to the Stanford EE380 course, and read Bunnie's book The Hardware Hacker. Below the fold, a brief explanation.

Carefully and in detail Bunnie explains "gongkai", the Chinese approach to intellectual property in the technology space. He shows how a focus on capabilities rather than products, on embodiment rather than licensing, has led to a technology ecosystem that is faster and more customer-focused than the Western system. Because it doesn't have the Western focus on legal means to exclude competition, it is much more competitive. It is capable of supporting not only large companies, such as Xiaomi, Tencent, Alibaba and Baidu, but also a vibrant mass of smaller and smaller companies, down to one-man garage shops, all making money. Bunnie uses many examples, including:
  • The fact that it is essentially impossible for a small Western company to build a cheap smartphone because of the IP licensing involved, whereas in China a complete smartphone motherboard costs $12 in quantity one from any number of manufacturers, ready for the case of your dreams. He compares this with a $29 Arduino, with only a fraction of the capability.
  • The comparison between Alibaba’s Alipay ($700B in 2015) system and Apple Pay ($11B in 2015). Note that Alipay is an open platform, Apple Pay is a walled garden.
  • The difficulty Western companies have in monetizing consumer technology products, because it takes only a few weeks from the product becoming available on Amazon to its being swamped by similar, but cheaper, products from Chinese companies. See, for example, the hoverboard:
    Shane Chen patented a device of this type in January 2013 but in 2015 stated that he had not earned anything from sales and would litigate. Separately Segway Inc. sued various manufacturers for infringement of their patents in 2014, before itself being acquired by one of them, Ninebot, in 2015. Note that patent litigation was filed just as the product died in the market; it was basically irrelevant.
This reminds me of John Boyd's OODA loop; observation leads to action in the Chinese ecosystem so much faster than in the Western ecosystem. No need to negotiate for IP, and no exclusion, mean that the gongkai ecosystem is far more competitive, and thus values fast response much more. Note also how much better suited it is to a world of 3D printing.

The function of systems based on legal exclusion, such as patents and copyrights, is to prevent competition and implement monopolies. No system of this kind can survive against a truly competitive system because it cannot respond fast enough. A 20-year patent or a 120-year copyright on technology is guaranteed to be obsolete long before it expires.

pinboard: LODLAM Challenge Winners

planet code4lib - Thu, 2017-06-29 14:06
RT @LODLAM: #LODLAM Challenge prize winners congrats to DIVE+ (Grand) & WarSampo (Open data) teams #DH #musetech #code4lib

Open Knowledge Foundation: Updates from Open Knowledge Portugal

planet code4lib - Thu, 2017-06-29 10:35

This blog post is part of our summer series featuring updates from local groups across the Open Knowledge Network and was submitted by Open Knowledge Portugal team.

Here is a run-down of our recent activities:

Open Data Day 2017

In March, we joined the international community and organised a local Open Data Day. Unlike the previous two years, we decided to forgo an application to the generous OKI mini-grant scheme since we felt that none of the areas of focus would fit our current practice, and we didn’t want to squander the initiative by shoehorning a subject that we hadn’t developed so far.

Instead, we invited speakers from OpenStreetMap Portugal, Wikimedia and the Lisbon City Hall Open Data Initiative to allow us to touch many facets of open culture. The event was divided into two parts: a Mapping Party and a Discussion & Quiz session. The Mapping Party was aimed at OpenStreetMap newbies (most of us!) who wanted to learn how to contribute to the OSM community mapping initiative. The afternoon session was focused on the nitty-gritty of open data, featuring talks from guests such as Jorge Gustavo Rocha [OpenStreetMap Portugal], João Tremoceiro [Lisbon City Council] and André Barbosa [editor and administrator of Wikipedia] and a conversation between the guest speakers and audience. The day ended on a lighter note with a Quiz dedicated to open culture subjects.

Participants at the Open Data Day event organised by Open Knowledge Portugal

The event was in our view a great success, having served its main purpose of strengthening the national network around open knowledge and open data and cementing OKI-PT’s role in that field. You can read a machine-translated write-up of the details of the event here; for some reason, the photos are missing from the translated version, so those fluent in Portuguese can read the original post here.

Other Updates…

As showcased in the Open Data Day post, we also had some interesting developments on our projects; our data-package related project Datacentral was adopted by the folks at Open Knowledge Switzerland for their Open Food initiative. We also launched a central location to provide Portuguese-language information about what exactly open data is, a resource that we had been lacking for years.

We have also been maintaining Central de Dados, our independent data portal built on Datacentral and the data package standard developed by Open Knowledge Labs, and have been assessing ways to move to a more community-centered management for this resource.
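For readers unfamiliar with the data package standard mentioned above: a data package is described by a small JSON file, `datapackage.json`, that lists a dataset's files and their schema. A minimal, hypothetical descriptor might look like this (the names and fields here are illustrative, not taken from Central de Dados):

```json
{
  "name": "exemplo-dados",
  "title": "Example data package",
  "resources": [
    {
      "name": "municipios",
      "path": "data/municipios.csv",
      "format": "csv",
      "schema": {
        "fields": [
          { "name": "codigo", "type": "string" },
          { "name": "populacao", "type": "integer" }
        ]
      }
    }
  ]
}
```

Keeping the descriptor alongside the CSV files is what lets tools built on the standard, like Datacentral, discover and validate datasets automatically.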

We’ve been keeping up as well with our monthly Date With Data meetups, which are dedicated to the collective development of civic tech tools, apps and sites around Portuguese public information and open data.

We have also started a new monthly initiative, OKcafé (where the OK naturally stands for Open Knowledge ;-) ) which we intend to build into a meetup which, unlike the Date With Data meetups, is less focused on hands-on development and more about higher-level discussion and exchange between people interested in open data, and who might want to get closer to OKI. We’re hoping to get the interest of people who can help us develop efforts on the side of advocacy and local/national policy related to open data, which is a field that we haven’t had the manpower to develop properly over the recent years.

Finally, we took part in a debate, representing Open Knowledge Portugal, about the potential and perils of data mining and machine learning in an initiative promoted by the local Google Developers & Users Group in Porto.

Follow OK Portugal’s Twitter page for more information about the team and their projects. For anything specific concerning the team, contact the group leads Ricardo Lafuente and Marta Pinto.

Information Technology and Libraries: Privacy and User Experience in 21st Century Library Discovery

planet code4lib - Thu, 2017-06-29 00:10

Over the last decade, libraries have taken advantage of emerging technologies to provide new discovery tools to help users find information and resources more efficiently. In the wake of this technological shift in discovery, privacy has become an increasingly prominent and complex issue for libraries. The nature of the web, over which users interact with discovery tools, has substantially diminished the library’s ability to control patron privacy. The emergence of a data economy has led to a new wave of online tracking and surveillance, in which multiple third parties collect and share user data during the discovery process, making it much more difficult, if not impossible, for libraries to protect patron privacy. In addition, users are increasingly starting their searches with web search engines, diminishing the library’s control over privacy even further.

While libraries have a legal and ethical responsibility to protect patron privacy, they are simultaneously challenged to meet evolving user needs for discovery. In a world where “search” is synonymous with Google, users increasingly expect their library discovery experience to mimic their experience using web search engines. However, web search engines rely on a drastically different set of privacy standards, as they strive to create tailored, personalized search results based on user data. Libraries are seemingly forced to make a choice between delivering the discovery experience users expect and protecting user privacy. This paper explores the competing interests of privacy and user experience, and proposes possible strategies to address them in the future design of library discovery tools.

Information Technology and Libraries: An Evidence-Based Review of Academic Web Search Engines, 2014-2016: Implications for librarians’ practice and research agenda

planet code4lib - Thu, 2017-06-29 00:10

Academic web search engines have become central to scholarly research. While the fitness of Google Scholar for research purposes has been examined repeatedly, Microsoft Academic and Google Books have not received much attention. Recent studies have much to tell us about the coverage and utility of Google Scholar, its coverage of the sciences, and its utility for evaluating researcher impact. But other aspects have been woefully understudied, such as coverage of the arts and humanities, books, and non-Western, non-English publications. User research has also tapered off. A small number of articles hint at the opportunity for librarians to become expert advisors concerning opportunities of scholarly communication made possible or enhanced by these platforms. This article seeks to summarize research concerning Google Scholar, Google Books, and Microsoft Academic from the past three years with a mind to informing practice and setting a research agenda. Selected literature from earlier time periods is included to illuminate key findings and to help shape the proposed research agenda, especially in understudied areas.

Information Technology and Libraries: Up Against the Clock: Migrating to LibGuides v2 on a Tight Timeline

planet code4lib - Thu, 2017-06-29 00:10

During Fall semester 2015, librarians at the United States Naval Academy were faced with the challenge of migrating to LibGuides version 2 and integrating LibAnswers with LibChat into their service offerings. Initially, the entire migration process was anticipated to take almost a full academic year, giving guide owners considerable time to update and prepare their guides. However, with the acquisition of the LibAnswers module, library staff shortened the migration timeline considerably to ensure both products went live on the version 2 platform at the same time. The expedited implementation timeline forced the ad hoc implementation teams to prioritize the tasks that were necessary for the system to remain functional after the upgrade. This paper provides an overview of the process the staff at the Nimitz Library followed for a successful implementation on a short timeline and highlights transferable lessons learned during the process. Consistent communication of expectations with stakeholders and prioritization of tasks were essential to the successful completion of the project.

Information Technology and Libraries: Picture Perfect: Using Photographic Previews to Enhance Realia Collections for Library Patrons and Staff

planet code4lib - Thu, 2017-06-29 00:10

Like many academic libraries, the Ferris Library for Information, Technology, and Education (FLITE) acquires a range of materials, including learning objects, to best suit our students’ needs. Some of these objects, such as the educational manipulatives and anatomical models, are common to academic libraries but others, such as the tabletop games, are not. After our liaison to the School of Education, Kristy Motz, discovered some accessibility issues with Innovative Interfaces' Media Manager module, we decided to examine all three of our realia collections to determine what our goals in providing catalog records and visual representations would be. Once we concluded that we needed photographic previews to both enhance discovery and speed circulation service, choosing processing methods for each collection became much easier. This article will discuss how we created enhanced records for all three realia collections including custom metadata, links to additional materials, and photographic previews. 

Information Technology and Libraries: Editorial Board Thoughts: Developing Relentless Collaborations and Powerful Partnerships

planet code4lib - Thu, 2017-06-29 00:10

