Planet Code4Lib - http://planet.code4lib.org

Ed Summers: Flawed Humans

Sat, 2017-01-28 05:00
From the readings this week in [Documentation and Appraisal](/tag/lbsc785), here is the concluding paragraph to Richard Cox's entry for Archivists and Collecting in the Encyclopedia of Library and Information Science [@Cox:2010]:

> Archival collecting is not just a mindless exercise in sweeping up old records or sitting back and waiting for the important records to appear for maintenance by an archives; rather, collecting is a process enmeshed with political, theoretical, psychological, and historical elements. More research is needed about the nature of archival collecting. More understanding by society about how archives and historical manuscripts repositories are formed is needed as well. The image of an archivist as an Indiana Jones-type character, hunting out the treasures of the past in exciting pursuits, is romantic but inaccurate; rather, *archivists are flawed humans* trying to develop clear and reliable methods for identifying records that should be acquired by archives. Much of merit has been accomplished by archivists and manuscripts curators gathering records, but more reflection and experimentation needs to be done on this topic. It seems that the new archival hunters and gatherers will be using very different techniques to sleuth about in the sophisticated record-keeping technologies of the twenty-first century.

It strikes me that this recognition that archivists are flawed humans is key to the archival enterprise. While we may have standards, methodologies, processes and tools to assist us, archival work is ultimately guided by our interests as members of communities, cultures and societies. Rather than seeing this as a problem, or a frailty, that needs to be systematically eradicated, I wonder if a more satisfying way forward is to celebrate and dignify this aspect of archival work. What are the ways in which we can celebrate the subjectivity that is always present in the archive? In what ways can our tools reflect this orientation?

District Dispatch: Midwinter field trip to Cobb County libraries

Fri, 2017-01-27 23:36

After four days of productive committee meetings and sessions at ALA’s Midwinter Meeting in Atlanta, the opportunity to see real libraries in action was a welcome change of scenery. On Monday afternoon, Office for Information Technology Policy (OITP) staff (plus me) were given a tour of several libraries in the Cobb County Public Library System (CCPLS), hosted by CCPLS branch manager (and longtime OITP Advisory Committee member) Pat Ball and CCPLS Director Helen Poyer.

OITP Director Alan Inouye tries out the VR equipment with User Engagement Manager Brazos Price; Photo credit: Tom Brooks for Cobb County Public Library System

The 17 branches of the CCPLS offer a wide gamut of services. Staff at the county’s main location, Switzer Library, enthusiastically described programs ranging from job skills training (in partnership with the local Jewish Family Services) to virtual reality technology (thanks to an IMLS grant) to falls prevention workshops for older adults (in collaboration with Wellstar Health System). A month of daily blog posts wouldn’t suffice to recount the many ways that CCPLS successfully engages other organizations to serve the needs of people in their communities. But the CCPLS program that captured my attention most was their “Girls Who Code” club.

Over the past year OITP has been working to promote coding and other programs designed to foster computational thinking in youth, particularly through the Libraries Ready to Code project. More than half of OITP’s sessions at ALA Midwinter were related to coding. Lucky for us, CCPLS’s “Girls Who Code” club meets every Monday evening, so we had a chance to meet some of its members as they worked on their original project. As Stratton Library volunteer Ambrey McWilliams explained to us, the girls started by brainstorming issues of concern in their community and then came up with a way to build awareness of one issue through a coding project.

The issue they chose: texting while driving. The tool: a game hosted on an original website that requires players to resist various distractions while “driving.”

As we chatted with the girls, several aspects of the project struck me:

Several members of Stratton Library’s “Girls Who Code” club

  1. The girls involved range in age from 12 to 17 and come from public, private and home schools. One girl – a home-schooler – traveled an hour each way to be part of this diverse group because it was the coding club closest to her home. Through coding (and eating pizza) together, a sense of community is forming amongst these girls leaning over each other’s computer screens.
  2. Their project emerged from a genuine conversation among this diverse group of girls about the needs they identified in the wider community. Theirs is a mission-driven endeavor. (In addition to the issue of texting while driving, they had considered problems like bus safety and animal treatment.)
  3. The many phases of the project (building the website, creating a PSA, designing characters, writing distractors) require teamwork and scaffolding, so having a committed volunteer to help guide the project is key. Stratton’s “Girls Who Code” are fortunate to have a volunteer who codes professionally and also has the skills to break the task down into manageable parts and facilitate the group’s discovery of solutions to the challenges of completing it – which is a key element of computational thinking.

Thanks to the dedicated professionals at Stratton Library and the county that funds their innovative programs, these girls are learning skills that will serve them well in their future careers – not only in tech, but in any profession. As a recent OITP report states, as libraries get ready to code, “communities will see young people who are ready to take on their futures, who have robust career options, and who guarantee the economic and social vitality of the cities, towns and reservations in which they live.”

The post Midwinter field trip to Cobb County libraries appeared first on District Dispatch.

FOSS4Lib Recent Releases: Hydra - 10.4.0

Fri, 2017-01-27 13:10

Last updated January 27, 2017. Created by Peter Murray on January 27, 2017.

Package: Hydra
Release Date: Wednesday, January 25, 2017

FOSS4Lib Recent Releases: Fedora Repository - 4.7.1

Fri, 2017-01-27 13:08

Last updated January 27, 2017. Created by Peter Murray on January 27, 2017.

Package: Fedora Repository
Release Date: Tuesday, January 24, 2017

Eric Hellman: Policy-based Privacy is Over

Fri, 2017-01-27 04:18

Yesterday, President Donald Trump issued an executive order to enhance "Public Safety in the Interior of the United States".

Of interest here is section 14:
> Sec. 14. Privacy Act. Agencies shall, to the extent consistent with applicable law, ensure that their privacy policies exclude persons who are not United States citizens or lawful permanent residents from the protections of the Privacy Act regarding personally identifiable information.

What this means is that the executive branch, including its websites, libraries and information systems, may not use privacy policies to protect users other than US citizens and green card holders. Since websites, libraries and information systems typically don't keep track of users' citizenship status, this makes it very difficult to have any privacy policy at all.

Note that this executive order does not apply to the Library of Congress, an organ of the legislative branch of the US government. Nevertheless, it demonstrates the vulnerability of policy-based privacy. Who's to say that Congress won't enact the same restrictions for the legislative branch? Who's to say that Congress won't enact the same restrictions on any website, library or information system that operates in multiple states?

Lawyering privacy won't work any more. Librarianing privacy won't work any more. We need to rely on engineers to build privacy into our websites, libraries and information systems. This is possible. Engineers have tools such as strong cryptography that allow privacy to be built into systems without compromising functionality. It's not that engineers are immune from privacy-breaking mandates, but it's orders of magnitude more difficult to outlaw privacy engineering than it is to invalidate privacy policies. A system that doesn't record what a user does can't produce user activity records. Some facts are not alternativable. Math trumps Trump.
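To make that last point concrete, here is a minimal sketch of the "don't record it" principle. This is an illustration, not something from Hellman's post: the file paths are hypothetical and it assumes a standard combined-format web access log.

    # Illustrative sketch: blank out the client IP (field 1) and authenticated
    # user (field 3) of a combined-format access log as lines arrive, so the
    # stored copy can never identify users. Paths are hypothetical.
    tail -F /var/log/nginx/access.log \
      | awk '{ $1 = "-"; $3 = "-"; print }' \
      >> /var/log/nginx/access-anon.log

A log scrubbed this way still supports traffic counts and error monitoring, but it cannot be turned into a record of what any individual user did.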

William Denton: Worst Little Free Library I've ever seen

Fri, 2017-01-27 01:07

My downtown colleague Jane Schmidt, a librarian at Ryerson, has some strong opinions about Little Free Libraries. Ever since she told me about the project she was working on I’ve been keeping an extra close eye out for ones in Toronto. They rarely seemed much good, full of ratty old paperbacks, an out-of-date cookbook, a James Patterson thriller not by James Patterson that’s slightly water-damaged (or at least you hope it’s water), a half-completed Sudoku puzzle book, someone’s old eco and poli sci undergrad readings, and some children’s books the children outgrew but that aren’t good enough to keep or pass on to friends. I don’t think I’ve ever seen a nice old green or orange Penguin in one.

Jane’s work on Little Free Libraries is much more nuanced and political than just that, though. The LFL Project is the home for it all, and last year’s The Trouble with Twee is a good introduction:

For the past year, I’ve been giving a lot of thought to neighbourhood book exchanges, in particular those branded with the Little Free Library (LFL®) trademark. When I announced that I would be doing this research, I received many links to articles about them, each article much like the last.

“Person/group installs book exchange. Usually a brief history of the LFL® organization. Neighbourhood agrees it’s lovely. A blurb about building community and encouraging literacy.”

And that’s about it. The narrative is maddeningly homogeneous (some notable exceptions; see the Reading List below) and almost unfailingly obsequious – a sure sign that a critical eye is needed. Here’s where I come in.

Now, not all LFLs are like this. My mother runs one and it is wonderful. It’s carefully tended and has monthly themes. I think it’s everything an LFL should be, and I’m not just saying that because she’s my mother.

At the other extreme, today I found the worst Little Free Library I’ve ever seen. Here it is from the sidewalk:

In front of the LFL.

I looked inside.

Inside the LFL.

That’s no library. That’s nothing. It’s one cold day away from people throwing their little bags of dog poo inside. Would you touch that book with your bare fingers? Would you let a child anywhere near it?

District Dispatch: Upcoming webinar on uneasy sharing (aka piracy)

Thu, 2017-01-26 22:38

Our February 2 CopyTalk is going to be quite interesting. Its title is “Open Access Piracy: Sci-Hub and #icanhazpdf as Resource Sharing.”

Many of you have probably heard about the infamous Sci-Hub website that provides free access to costly scholarly journals that libraries often buy. Our speakers will discuss the use of popular resource-sharing methods like Sci-Hub that may violate copyright and database terms of service, including what these users think of the potentially copyright-infringing actions they take. This webinar will include a review of some empirical evidence that places these non-library resource-sharing methods in context with their legal library counterparts. What motivates people who engage in this resource sharing? Do they have access through libraries? And what are the implications for libraries?

Information about our speakers:

  • Carolyn Caffrey Gardner is the Information Literacy Coordinator at California State University, Dominguez Hills. After completing her MLS from Indiana University Bloomington, she worked as an instructional librarian at University of Wisconsin-Superior and the University of Southern California, where she focused on the intersections of first-year writing programs and information literacy instruction. Her research interests include critical pedagogy and assessment, peer-to-peer scholarly resource sharing through social media, and information literacy collaborations in higher education.
  • Gabriel J. Gardner is the librarian for criminal justice, linguistics, and Romance, German, and Russian languages & literatures at California State University Long Beach. He has been working in libraries since 2008 with experience in public, college, and archival settings. His research interests include scholarly communication in the broadest sense, intellectual freedom, and the effectiveness of library privacy efforts.

Mark your calendars and set aside some time for this free webinar. You deserve it!

Date: Thursday, February 2, 2017
Time: 2:00 p.m. Eastern / 11:00 a.m. Pacific
Duration: One hour

Go to http://ala.adobeconnect.com/copytalk and sign in as a guest.

This program is brought to you by OITP’s copyright education subcommittee.

The post Upcoming webinar on uneasy sharing (aka piracy) appeared first on District Dispatch.

Library of Congress: The Signal: Lots of Transfer Collectives Keep Cultural Memory Safe: The Importance of Community Audio/Visual Archiving

Thu, 2017-01-26 17:31

This is a guest post collectively written by the XFR Collective (pronounced “transfer collective”), a grass-roots digitization and digital-preservation organization. They work with artists and media creators to rescue and preserve digital works, utilizing open, free platforms — such as the Internet Archive — for long-term preservation and access. We featured them in two previous Signal posts, May 12, 2014 and July 29, 2014.

Some members of the XFR Collective in 2016. Photo by Yvonne Ng.

This year, XFR Collective is expanding its efforts to develop and make available a reproducible community archiving training and access model. We recognize that XFR’s resources alone will not be enough to provide low-cost services to all those in possession of potentially deteriorating media that falls outside of the attention of large collecting institutions. The model we are developing will be constructed in a way that enables individuals and groups with little-to-no technical or archival expertise to easily understand core principles behind audio/visual transfer technology, determine a preservation strategy within their own means and implement it quickly.

Creating an effective training model is a challenge, and developing communities around preservation requires familiarity with a range of ideas and practices that fall outside the scope of traditional archival practice. XFR members, for instance, draw on tenets of community organizing, critical pedagogy, intersectional feminism and non-hierarchical ways of working in collaboration. In the spirit of community building and collaborative knowledge, we would like to share a few lessons we learned in 2016 and how they will steer the direction of our collective framework in 2017.

Listening Builds Confidence
In April 2016, we hosted a three-hour workshop for the Asian American Oral History Collective, where we learned some valuable lessons on how to better connect with individuals who are managing digital and analog collections but are unfamiliar with audio/visual preservation strategies. The agenda for the day was:

  • Participants introduce themselves, describe their collections and needs.
  • XFR presents a slideshow on the basic tenets of archival theory.
  • XFR presents a tutorial on using Google Forms to collect descriptive metadata.
  • XFR walks through the process of entering data into a form.

XFR Collective member Yvonne Ng at the AAOHC workshop. Photo by Rachel Mattson.

We realized that much of what we prepared – mainly, core principles of archival and preservation theory presented in an hour – can inadvertently alienate audiences with limited means and resources. We discovered this during the Q&A period, which turned into an engaging discussion about the participants’ hopes and anxieties for their collections in the face of what often feels like an insurmountable task. We learned that for future workshops we need to designate more time upfront to allow participants to articulate their needs. From there, workshop hosts and participants can work together to find the best path toward making decisions about their archival collections.

In addition, we have made the following improvements to our basic agenda structure:

  • Participants answer: What do I have? What do I need or want to come from this collection?
  • The archivist provides feedback and strategies
  • Question and answer period
  • Participants are invited to a follow-up workshop where tailored solutions are presented.

In recent blog posts, some XFR members have explored the broader issue of changing how archival concepts are communicated. Check out Ethan Gates’s reflection on how conversation is the best way towards solving highly technical problems and Rachel Mattson’s takeaways from time spent at two national conferences that inspired her to re-imagine alternative archival-community centered frameworks.

From a screening of “XFR Collective Selects: As Seen on TV” at Ulterior Gallery on January 6, 2017. Photo by Mary Kidd.

Make It Seen
In addition to leading workshops, we organize public screenings of transferred material, followed by a post-screening discussion with the audience to build awareness of matters related to media archiving. Preservation measures can be sustained by engaging the public, who in turn may find renewed value and meaning in resurrected analog works. We will continue to plan these events in order to bring greater public visibility to our partners’ works, foster a historical/intergenerational sense of community in a shared space and highlight the output of our efforts as a collective.

Share Your Workflows
In the latter half of 2016, we began migrating our workflow documentation onto the open-source platform GitHub. This has enabled us to further connect with virtual communities, gain their valuable input on our processes and share those processes with those looking to learn and/or adapt them. Prior to this we maintained internal documentation in our Google Drive and shared our documents only with those who knew to ask, which was an obstacle to making our workflows as accessible as possible.

Adopting GitHub as our documentation platform also opens up a learning and teaching opportunity by and for XFR members. We, as a collective, are always looking to grow and share our skills so that we can refine and re-imagine how we approach community archiving. This feeds the idea of what we like to call “horizontal mentorship,” a concept that fosters non-hierarchical knowledge sharing.

Build a Transfer Network
XFR Collective is just one relatively new model of a community archiving effort that arose out of the needs of the specific communities we are a part of and serve. There are other a/v-centric groups such as the Community Archiving Workshop, Moving Image Preservation of Puget Sound, the Canadian Lesbian and Gay Archive, the Memory Lab at the Washington D.C. Public Library and the Personal Archiving Lab at the Madison Public Library – just to name a few – who are working to provide similar services. Their existence demonstrates that the issue of unaffordable audio/visual transfer services extends across borders but also creates an incredible opportunity for these groups to share their experiences and create new learning networks.

Go Forward
Our dream is to provide the tools that support individuals and communities in working with one another to care for their media by sharing documentation and workflows, workshop syllabi, conference presentation notes, and other resources on a variety of social media and learning platforms. By sharing our experiences and lessons learned, we hope to highlight the idea that personal archiving challenges are collective archiving challenges.

What are some ways that you have addressed or overcome personal archiving obstacles? Please share in the comments!

LITA: Interview with Scott Walter, Candidate for ALA President 2018-2019

Thu, 2017-01-26 17:24

Scott Walter

What changes do you foresee in ALA’s divisional structure over the next 5-10 years?

ALA is a complex and dynamic association, and the “lines” between different organizational units (Divisions, Chapters, Round Tables) are not always clear. Indeed, “lines” are precisely what we don’t want, and there will be continued emphasis in the coming years on efforts to improve communication across these units and to promote shared initiatives. In my “home” division of ACRL, I have seen this change dynamic in action as we have seen new units (Discussion Groups, Interest Groups) draw deep and immediate engagement around well-defined topics, and traditional units (Sections) evolve, merge, etc.

Precisely because divisions are often seen as one’s “home” in ALA, their continued support, and their leadership in ALA-wide issues, will be critical to the continued recruitment (and retention) of engaged members. We may see changes in the scope of existing divisions, or the rise of a new division that reflects a critical area of emphasis for the future of the field. We may see opportunities to align the work of Round Tables with Divisions, and we may see continued growth of the divisional “presence” within the Chapters (especially if members continue to seek to make their mark on the profession through work closer to home). The questions that must guide any changes are: how are divisions able to bring their areas of expertise to the strategic goals and initiatives of ALA; and, how does affiliation with ALA bring demonstrable benefit to divisions in terms of their ability to recruit, retain, and support the continuing professional education and leadership development of their members?

 

What are three things ALA should be doing to improve virtual participation?

Assuming you are referring to participation in the work of the Association (as opposed to participation in programs), the first thing ALA must do is to allow all levels of service, including participation on ALA Council, to be open to members who can only participate virtually. Second, ALA must continue to pay special attention in future contracts with convention centers and hotels on the conference “campus” to costs associated with promoting virtual participation, whether this participation is through audio or video means. The ability to support this sort of participation should be considered as high a priority in negotiation as other “basics” have been in the past (the availability of high-quality and consistent wifi access on a crowded convention center network comes to mind as one “infrastructure” piece that has proven problematic). Third, ALA should provide training on a routine basis in the effective management of virtual meetings, including provision of meeting materials to members participating at a distance, suggestions on how to improve virtual engagement and to ensure the ability of members participating at a distance to contribute to real-time discussion, and checklists of available technology, e.g., conference phones, Skype, other virtual meeting platforms. Bonus answer: ALA should promote a greater degree of sharing across divisions regarding “what works,” comparative costs, and, if possible, shared platforms (to promote a more consistent experience of virtual participation across one’s ALA experience, which often encompasses participation in more than one division).

 

As ALA shifts from in-person collaboration to other forms of participation and, thus, revenue, how do we make up for this lost revenue?

That’s the question in front of any professional or scholarly association, and the one that placed an obstacle for so long in front of open-access initiatives, as scholarly associations expressed concern about what moving a journal to OA might mean for revenues. How do we do things differently without upsetting traditional revenue models and budget planning, and how do we do it in real-time? There is no easy answer, but there are some common ones, e.g., diversify the revenue streams so that there is less dependence on the 2-conference-per-year model, improve the quality of conference programs to make ALA the “don’t miss” event on the professional development calendar that will ensure growing attendance, and make changes that will lower costs (e.g., the potential that the current “conference re-model” proposal has for opening up additional cities as potential ALA conference hosts).

A more radical approach would involve a full-scale re-thinking of the role, scope, and focus of the Midwinter Meeting in the Association, e.g., Midwinter could be reconfigured from another “national” meeting to a “regional” meeting where the program could be attuned to the needs in the host region in any given year. This would allow us to further consider the question you posed earlier regarding virtual participation in Association business, as well as the concern I suggested regarding the need to better understand (and build upon) the relationship of ALA and its Chapters. A focused program like this might actually draw equal (or better) numbers than the more broadly pitched Midwinter meeting does now (and might allow for less direct competition for scarce travel funds in any year when there is also a “can’t miss” national program like the biennial divisional meetings of ALA, PLA, AASL, etc.).

 

How will you encourage library students to get involved, and to take leadership positions, in ALA?

The most important thing we can do to encourage LIS students to get involved in the Association is to show them that membership will be a benefit to them throughout their careers. Promoting the Association through LIS programs (and LIS faculty support for student chapters) is one important initiative. Another is active involvement by practitioners in the development of programming for those student chapters so that students are introduced, from the beginning, to the idea of ALA membership as a path to building a network of colleagues who will support them throughout their careers. Making ALA membership affordable to students is also critical, e.g., encouraging LIS programs to directly fund memberships so that all LIS students are able to join ALA (or, to be fair, another, relevant LIS professional association) during their student year(s) at no cost to them. Continuing to support (and enhance) the work of the New Members Round Table is critical, as is the continued development of ALA and NMRT initiatives that connect new professionals with specific projects and opportunities in the divisions. We can look at projects like Emerging Leaders for the ways in which they reflect what the research tells us about how to engage new members in ways that are more likely to encourage them to make an ongoing commitment to the association.

Member Engagement Continuum

This image from the American Society of Association Executives shows the continuum of engagement that begins with new membership and can, with attention to opportunities, lead to the career-long engagement we would love to see from members. Finally, we cannot encourage LIS students to take leadership positions in ALA if we do not offer such positions, and this is similarly true for new professionals and other new members. A deep look at the way in which those opportunities are (or are not) offered to, and placed within reach of, students and new members (especially those without professional development funds) is a critical step in making any plan that addresses this question.

 

Is ALA a place for MLS-degreed professionals who do not work in libraries? Should it be? Why, or why not?

Of course it is. Next question?

Seriously, though, the ALA mission statement makes clear that the mission of the Association is “to provide leadership for the development, promotion and improvement of library and information services and the profession of librarianship.” Since one may provide library and information services outside the framework of a library (and, in fact, this is far more the case than it was when this mission statement was first adopted), it stands to reason that anyone doing our work, in any context, should find a home and a network of colleagues in ALA.

The more difficult question that you did not ask is whether or not ALA is a place for people providing these services, and sharing our work both inside and outside of libraries, who do not hold an ALA-accredited degree. The answer to that question is also “Yes!” I once worked in an academic library where the AD for Facilities was a licensed architect, but not a librarian; I certainly think he would have found a home in the LLAMA Buildings and Equipment Section. Likewise, there are many academic librarians who find a home in the Society for College and University Planning, given how important libraries (and librarians) are to institutional planning efforts. Our libraries have become the home for professionals from different backgrounds who come together to provide the highest quality collections, technology, resources, facilities, and services for our communities. If ALA is to be the home for all those whose work contributes directly to “the development, promotion and improvement of library and information services,” we need to welcome all who work in libraries, and to the benefit of library and information services, to the Association.

 

With librarians of all types using technology as part of their everyday work, what specific leadership and expertise do you see LITA bringing to ALA?

LITA has played a critical role in the Association for years in terms of trend-spotting and in helping librarians to see how developments in the broader realm of information technology have relevance to their work. LITA members, often in complementary roles as members of other divisions, have also played a critical role in designing and delivering continuing professional education to ALA members, and in creating resources that introduce the membership at large to emerging technologies (e.g., through the LITA publishing program). As the use of technology has become ubiquitous in our personal lives, as well as our professional lives, LITA members have provided important guidance to ALA members without as strong a background in technology on the impact of technology on library collections, services, management, etc., as in Sarah Houghton’s presentation during this month’s Symposium on the Future of Libraries on “21st Century Library Ethics.” Finally, LITA members can play an important role in LIS education, both in terms of helping to evaluate the ways in which LIS programs introduce pre-service professionals to technology skills needed for professional success, and in terms of teaching the courses that bring those technologies into the LIS curriculum, e.g., Meredith Farkas’s “Information Technology Tools and Applications” at San Jose State.

As the technology environment in libraries continues to expand, and as the borders between the technology found in different types of libraries blur, LITA can play an important role in helping us to understand how these changes foster new opportunities for collaboration across library types. In Chicago, for example, the earliest adoption of digital media services and maker spaces probably came through Chicago Public Library, but we now see these services provided routinely in academic libraries, high schools, and specialized environments like technology incubators. The work we have been doing at DePaul to launch our new maker space has opened my eyes to a number of new partnership opportunities across campus, and with other libraries and museums across Chicago. I won’t lie – I was also slapped in the face by what’s coming when I learned that my daughter may begin doing “big data analytics” as early as her first year Computer Science course in high school (!). LITA members will be “out front” in considering the collaboration opportunities that will be increasingly possible across ALA divisions as library technology becomes more diverse and ubiquitous in our work.

 

 How can ALA help LITA help everyone?

 

As I said at the start, ALA must “bring demonstrable benefit to divisions in terms of their ability to recruit, retain, and support the continuing professional education and leadership development of their members.” I might now add to that, “and to other members of the Association.” I’ve already mentioned one way in which this happens, i.e., the collaboration between LITA Publishing and ALA Publishing to bring LITA expertise to all (and this need not be limited to traditional publications, but can encompass webinars or other e-learning opportunities). ALA can also highlight cross-cutting programs like the just-concluded Symposium on the Future of Libraries that brought expertise born in the division to the attention of a much wider member audience. ALA can also pursue high-level partnerships with other professional associations in the technology sector that would provide opportunities for LITA members to work with other technology experts, and to bring that broader perspective back to the Association, not just in areas such as information retrieval or user experience testing, but also in information ethics, management of digital identity (and protection of one’s own privacy and security in the digital environment), and assessment of K-20 student learning in the area of information technology. Because technology has become a pervasive influence and experience in our lives, and in the lives of the members of our communities, it is critical for the Association to think creatively about the areas of LIS work that may now be informed by the expertise housed in LITA.


LITA: Interview with Terri Grief, Candidate for ALA President 2018-2019

Thu, 2017-01-26 16:12

What changes do you foresee in ALA’s divisional structure over the next five to ten years?

I don’t think there will be any huge changes. I hope that we will be more willing to share with one another, with less of a feeling of silos and more of a feeling of collaboration. I expect our Retired Members Round Table and New Members Round Table will grow, and other round tables will pop up as there is need.

 

What are three things ALA should be doing to improve virtual participation?

  1. Ask the experts–you folks in LITA probably have the best ideas of anyone.
  2. Promote low cost and easy to use ideas across the divisions.
  3. Advertise the virtual opportunities virtually–use social media instead of relying on publications with offerings.

 

As ALA shifts from in person collaboration (Midwinter, ahem) to other forms of participation and thus revenue, how do we make up for the lost revenue?

I don’t feel like Midwinter is going to go away. At least I hope not. The ALA Annual Conference Remodel report suggests there are going to be great changes that will make the Midwinter conference more appealing, more affordable and more revenue-generating. I love face-to-face interactions and I really hope they never go away.

 

How will you encourage library students to get involved/take leadership roles in ALA?

I want to see those students become part of committees, roundtables and divisions not just as student members but as full participating members. I guess from being in a high school, I see students as powerful, thoughtful, and amazingly perceptive, and these are high school students! I intend to appoint students to committees.

 

Is ALA a place for MLS-degreed professionals who do not work in libraries? Should it be? Why or why not?

Of course it is a place for those people. A person with an MLS degree is most likely interested in the same things we are: access to information, technology, intellectual freedom, etc. I know that they could find a niche in our association.

 

With librarians of all types using technology as part of their everyday work, what specific leadership and expertise do you see LITA bringing to ALA?

I was using LITA as an example of how we can work together in my campaign speech, but I had to shorten it by about 30 seconds, so I took that sentence out. What I said is this: “I plan to use my presidential funds to help you join with other units and divisions across the association to work on issues, develop programs and strengthen internal relationships. For example, if your group is dealing with issues around technology, who better to assist than LITA?” This association has to come together to survive, and we have to be willing to ask for help from each other and be willing to give help when someone asks. If we have an open mind, all kinds of ideas bubble up. I’d love to see LITA and AASL do more programming on coding, for example.

 

How can ALA help LITA help everyone[/raise the bar/train the masses/etc]?

One of the things we have to do is to take the ALA strategic areas to heart. ALA has to share those strategic areas in a way that includes more than Council if we really believe that those four areas are what our decisions should be based on. Where does LITA fit in with the four areas of Advocacy, Information Policy, Educational and Professional Development and (more than likely approved at this Midwinter) Diversity? I believe that LITA can be at the table in every one of those areas. We might naturally see that the membership especially needs you for Educational and Professional Development, since you are the ones on the cutting edge of technology and could keep the rest of us in the loop. I remember seeing the first pair of Google glasses on Jenny Levine! But what about Information Policy? ALA needs to remember to include LITA when these issues come up. I know that your group has traditionally not written policy, but I am pretty sure you have ideas when OITP presents something. I’d like to see you there before the policy is written.

We all have to accept that our diversity will not change unless we all make a concerted effort to recruit. I know that many of your members are in the field of academia, and what better place to find candidates who might be interested in your field? We all have to be advocates, and that includes LITA, ASCLA, LLAMA, and RUSA, among others, instead of just the traditional thinking of PLA, ACRL and the youth divisions.

My platform is about strengthening relationships within ALA. The territorial feeling that we can’t share our expertise because it isn’t our territory has to stop if we want to become the strong and powerful association we need to survive. If I am elected, I want to emulate Courtney Young in her treatment of her co-divisional presidents and take it one step further. I’d like the division presidents-elect to start as soon as I am elected to brainstorm ideas for working together. I will use my presidential funds to support these efforts and watch our relationships get stronger across the association, and then outside the association.


LITA: Interview with Loida Garcia-Febo, Candidate for ALA President 2018-2019

Thu, 2017-01-26 15:05

Loida Garcia-Febo

What changes do you foresee in ALA’s divisional structure over the next five to ten years?

It will be of great benefit for Divisions to continue to strive to be more member-driven organizations. I see an increase in virtual participation and a restructuring of the committee system, all led by what works best for the members and their professional and job needs; mechanisms to include newer librarians and students in the Divisions’ work; and a systematic process for Divisions to provide tailored conferences and educational opportunities that keep members engaged and bring a steady stream of revenue.

In recent years ACRL, PLA, and LLAMA have restructured their committees and I am watching them very closely. I believe all members and committees are important; they bring expertise and experience that enrich our Association.

What are three things ALA should be doing to improve virtual participation?

As Chair of the Membership Meetings Committee, I led efforts to present ALA’s first Virtual Town Hall and two subsequent annual virtual meetings for ALA’s 60,000 members. The committee and I worked as a team with the ALA Membership Development Office, ALA Publications, and ALA Information Technology staff members to establish the virtual meetings ALA continues to hold annually. These meetings include mechanisms to submit resolutions, to poll attendees about topics of interest, and to provide opportunities for attendees to choose top topics for discussion. The whole process is archived in the ALA Institutional Repository. Currently, I am co-chairing REFORMA’s National Conference and we are in the process of increasing our conference’s virtual programs following this methodology.

Based on my experience, in order to improve virtual participation, I recommend:
-Survey membership to identify user-friendly technology, online formats preferred by members, and professional development areas needed by members.
-Based on the results, acquire technology and plan to produce programs in the areas members indicate, in the formats they use.
-Strategize to provide opportunities for participation and programming in multiple formats.
-Partner with LITA.

As ALA shifts from in person collaboration (Midwinter, ahem) to other forms of participation and thus revenue, how do we make up for the lost revenue?

Providing products in multiple formats could bring more revenue into the Association. An assessment of high-demand formats will help identify, for example, which products to offer as podcasts, webinars, and videos. These will contribute to increasing the Association’s General Fund, along with revenue from publishing, advertising, and conferences.

How will you encourage library students to get involved/take leadership roles in ALA?

I invite library students to be bold, dare to take action. Volunteer, show up and say yes! Be passionate about what you do, care. We need you. If you would like to see something that is not in place, create it! I’ve done this. In 2004, two colleagues and I noticed that there wasn’t a forum for new librarians and students within IFLA. We developed the concept, approached the President of IFLA, secured her support, and established the group, IFLA New Professionals.

Within ALA, library students can join their ALA Student Chapter at their school or if there is none, they can form their own Student Chapter. ALA’s Office of Chapter Relations provides help to make this happen. They are available on the phone and have a very informative website for Student Chapters.

The New Members Round Table of ALA is a great space for new librarians to get involved or take leadership roles within ALA. Many of the ALA Divisions, such as LLAMA, the Library Leadership and Management Association, have groups for new librarians. In sum, my advice is to connect, share ideas, and follow up. Great things can happen! We need library students and new librarians.

Is ALA a place for MLS-degreed professionals who do not work in libraries? Should it be? Why or why not?

 

Based on my experience as a library consultant, I can say that ALA is an association for MLS-degreed professionals who work in libraries, as consultants, professors, or vendors, or in tech or start-up companies, museums, archives, and public and private organizations. We all serve communities. ALA provides professional development and educational opportunities, facilitates networking among members, and supports advocacy to place libraries on local and national agendas. These are key elements needed by all professionals, whether librarians working in libraries or those working in other areas, all serving different types of communities. That said, I support increasing opportunities for MLS-degreed professionals who do not work in libraries. We need to do more to meet the needs of these colleagues. We need to identify those needs and strategize on how ALA’s different groups can serve them. For instance, ASCLA is a great Division where many librarians working in places that are not libraries converge. LITA is an excellent example of how an association can serve these librarians.

With librarians of all types using technology as part of their everyday work, what specific leadership and expertise do you see LITA bringing to ALA?

I believe in ALA’s core values and intellectual freedom, which I have promoted as Chair of ALA’s Intellectual Freedom Round Table and as Expert Resource Person for the Free Access to Information and Freedom of Expression Core Activity of IFLA. In this area, the expertise of LITA members is valuable for advising on strategy related to patron privacy, confidentiality, copyright, usability, accessibility, and encryption. Additionally, LITA could be a key collaborator in helping ALA’s technology stay on par with consumer tech and the emerging technologies librarians use as part of their everyday work. LITA has the expertise to advise, for instance, on effective training design to help librarians acquire the skills required to operate in the virtual world. There are many possibilities! The LITA Guide Series is extremely helpful for everyday work, covering topics including mobile technology in libraries, introductions to programming languages, and visual storytelling for libraries.

 

I believe LITA members’ contributions, expertise, backgrounds and experiences will help ALA in many ways to raise the bar and train the masses. Based on requests I have heard and conversations with many groups, I believe a next logical step is for ALA to increase virtual engagement. For this, we need to train our leaders and members to operate in the virtual arena. We have got to work together to empower our librarians and information workers. This teamwork will benefit our profession and the communities we serve. Together, we can bring change!


William Denton: Notifying the Internet Archive when a new post is published

Thu, 2017-01-26 02:00

I saw a mention of the IndieWeb idea of notifying the Internet Archive when a new page is posted.

> Trigger an Archive
>
> You can tell archive.org to crawl and archive a specific URL immediately.
>
>     $ curl -I -H "Accept: application/json" http://web.archive.org/save/{url to archive} | grep Content-Location
>
> and you'll get a response like:
>
>     Content-Location: /web/20160715203015/http://indieweb.org
>
> The response includes the path to the archived page on web.archive.org. Append this path to http://web.archive.org to build the final URL for the archived page.

I use Jekyll for this site, and I manage building and publishing with a Makefile. I added this trigger to it, and now the publish part looks like:

    publish:
    	rsync --archive --compress --itemize-changes /var/www/miskatonic/production/ myhostingsite:public_html/miskatonic.org/
    	curl --head --silent --header "Accept: application/json" http://web.archive.org/save/www.miskatonic.org/ | grep Content-Location
    	notify-send "Web site is now live"

Now, this just tells the Internet Archive to get my site’s home page. It doesn’t specify which pages have been added and/or updated. That would require keeping track of all the site’s content and checking for differences every time I publish, which is certainly possible, but would require making a new plugin. Adding one line to the Makefile is far easier and gets 95% of the work done.
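That said, rsync’s --itemize-changes output already names every file it transfers, so a shell-only approximation is possible without a plugin. A hypothetical sketch, untested, and assuming the production directory maps one-to-one onto www.miskatonic.org paths:

    # Submit each changed HTML page to the Internet Archive, not just the
    # home page. Lines starting with "<f" in rsync's itemized output are
    # files sent to the remote side; field 2 is the relative path.
    rsync --archive --compress --itemize-changes \
        /var/www/miskatonic/production/ myhostingsite:public_html/miskatonic.org/ \
      | awk '/^<f/ { print $2 }' | grep '\.html$' \
      | while read path; do
          curl --head --silent --header "Accept: application/json" \
            "http://web.archive.org/save/www.miskatonic.org/$path" \
            | grep Content-Location
        done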

Evergreen ILS: Evergreen 2.10.9 and 2.11.2 released

Thu, 2017-01-26 01:08

We are pleased to announce the release of Evergreen 2.10.9 and 2.11.2, both bugfix releases.

Evergreen 2.10.9 fixes the following issues:

  • A fix to the web client patron interface that changed the holds count in the patron summary from total / available to available / total.
  • A fix to an issue where the Closed Dates Editor was displaying an extra day of closure.
  • A fix to the Closed Dates Editor so that it now displays “All Day” when the library is closed for the entire day.
  • A fix to properly format LC Call numbers in spine label printing.
  • A fix to a bug that was causing intermittent search failures.

Evergreen 2.11.2 fixes the same issues fixed in 2.10.8, and also fixes the following:

  • A fix to a bug that was causing search failures for Copy Location Group searches.
  • A fix for significantly increased slowness when transferring holds.
  • The addition of an index to the action.aged_circulation table to resolve a problem with long-running queries.
  • A fix to redirects for one-hit metarecord searches on systems that have enabled the setting to immediately jump to a bib record on one-hit searches.
  • A fix to the new acquisitions cost field available in the copy editor to resolve an issue where accidentally clearing out the value in the field resulted in an error.
  • A fix to a bug that broke the Alternate Printable Hold Pull List and Vandelay uploads on systems that were running OpenSRF 2.5.

Please visit the downloads page to retrieve the server software and staff clients.

District Dispatch: ALA hosts second delegation of librarians from Belarus

Wed, 2017-01-25 22:03

This afternoon, the American Library Association (ALA) was pleased to host six librarians from Belarus. This was our second delegation from Belarus in the last year; the first group visited this past August. These visitors are invited to the United States under the auspices of the U.S. Department of State’s International Visitor Leadership Program.

Librarians from Belarus (four of the six pictured here) visit the ALA Washington Office.

The delegation included: Ms. Maryia Liatsiaha, the Director of the Molodechno Central District Library; Ms. Larysa Iotysh, head of the Foreign Literature Department at the E. Karski Grodno Regional Scientific Library; Ms. Maryna Pshybytka, head of the Library Science Department at the National Library of Belarus; Mr. Pavel Ustinov, chief librarian at the Republican Scientific and Technical Library; Ms. Volha Yakubovich, librarian at the Baranovichi Central City Library; and Mr. Mikalai Yatsevich, dean of the Information and Document Communication Department at Belarusian State University of Culture and Arts. The delegation was accompanied by two simultaneous interpreters.

The Department of State outlined the following specific objectives for this visit:

  • Familiarize Belarusian librarians with the educational, social, and economic impact of public libraries in American society;
  • Provide an overview of a variety of library-based, community-tailored programs and services that improve the quality of life of Americans and contribute to their personal and professional growth;
  • Demonstrate how libraries play a central role in the social, professional, and academic activities of individuals and communities;
  • Organize visits to a full range of libraries, including public, academic, research, special, integrated, merged, and others to explore typical U.S. management and operation practices;
  • Discuss how to strengthen the competencies and leadership potential of librarians through professional learning, advocacy, and networking initiatives; and
  • Acquaint the visitors with best practices in library facility design and space planning to better meet the demands of targeted community groups and clients.

The group had a chance to attend a day of sessions during last week’s Midwinter conference in Atlanta. After seeing ALA in action, the Belarusian librarians were interested in learning more about our membership, our organizational structure, and how state chapters conduct their business and advocacy—both independently of and in tandem with ALA. They asked for details about ALA Council proceedings and about our vote to amend the ALA strategic plan in order to add a fourth strategic direction focused on equity, diversity, and inclusion.

The conversation also touched on ALA’s advocacy work, broadly, in support of access to free and open information and on the evolving discourse around copyright law, both in America and in Belarus. Before departing to meet staff at the Institute of Museum and Library Services, our visitors shared some information about their success in advocating for changes in their copyright laws and encouraged us to be optimistic as we move forward.

The ALA representatives in this meeting were Alan S. Inouye, Jessica McGilvray, and Emily Wagner. It is thrilling to meet and learn from librarians and information professionals from around the globe. We look forward to future opportunities to represent ALA and U.S. libraries with international delegations.

The post ALA hosts second delegation of librarians from Belarus appeared first on District Dispatch.

Open Knowledge Foundation: Announcing the 2017 International Open Data Day Mini Grants Scheme

Wed, 2017-01-25 09:48

The year is 2017! Some of you (like my fellow Ghanaian citizens) may have just voted in an election that you hope will bring with it the promise of socio-economic growth. You believe that having a better understanding of how government works will foster better engagement and efficiency. Others are exploring new ideas in research that could change the lives of millions, if not billions. A new business idea is in the making, and you would like to explore a little more about your target demographics. Others may have just realised the magnitude of the refugee crisis across the world and want to do something practical to help. You can see where I am going with this. If your main challenge at the moment is exactly where to go from here, why not start by organising an event on International Open Data Day this year and join hundreds of events around the world?
For the benefit of those of you who are new to open data, one definition is: data that can be freely used, re-used and redistributed by anyone – subject only, at most, to the requirement to attribute and share alike. With this comes another avenue to explore the many insights, innovations and collaborations that can enhance the social issues we care about as societies. This year’s Open Data Day will take place on Saturday, 4th March, and with funding from SPARC, the Open Contracting Program of Hivos and Article 19, and OKI, we will distribute $12,500 worth of mini-grants to support your event ideas.

I got your attention now, right? So what exactly are mini-grants?

A mini-grant is a grant of between $200 and $400 for groups to create Open Data Day events. In past years, we gave grants to groups based on location. This year, we want to take ODD up a notch and focus on problems that open data can solve. There are four categories of grant: Open Research Data, Open Contracting and tracking public money flows, Open Data for the Environment, and Open Data for Human Rights.

I hope this has gotten you excited and ready to apply. But before you do, there are a few important things to be aware of:

  1. To all grants: we cannot fund government applications, whether federal or local, since we support civil society action. We encourage governments to participate in the events themselves!
  2. For Human Rights or Environment: groups based in the US cannot apply for funding, due to restrictions from our funders.
  3. For Tracking public money flows: only groups from low/medium-income countries (based on this OECD DAC list) may apply.

Event organisers can only apply once and for just one category, so choose well.

Open Data Day 2015, Buenos Aires, Argentina

Writing A Successful Application

Now that that’s out of the way, here are some tips for a successful grant application. Open Data Day is a great opportunity to reach out to new stakeholders and show off our great work. However, we want people to work and think on open data as part of their work year-round, not only on one day. Successful applications will be those that show how Open Data Day is connected to other future activities, not a one-off event in the community. Here are some guidelines for successful applications:

 

  • Think of a concrete output – Open Data Day is one day, so we don’t expect you to solve global warming in less than 24 hours. Think of tangible outputs like a network map, a small prototype or even a video.
  • Less is more – we prefer to see one good, well-thought-through output rather than many that are not realistic in this timeframe.
  • Part of a process, not standalone – show us how ODD fits into the grand scheme of things in your community.
  • For the human rights and environment categories, priority will be given to:
    • Events connected to current datasets – replication is not a must, but we want to see how these projects connect to open data projects that have already been done, not just reinvent the wheel. In terms of human rights, any event that will use HDX will get priority. In terms of the environment, any event that will use existing datasets (like EU or local open datasets).

OR

    • Connected to current OK Labs projects – If you can’t find a dataset that is connected to your work, we will give priority to groups that will use, test or contribute to one of our OK Labs projects.


What is the timeline for the mini-grants?

Applications are open now through Monday, 13th February 2017, and the selected grantees will be announced on Monday, 20th February 2017. It is important to note that all payments will be made to the teams after ODD, once they submit their blog reports and a copy of their expenses. Payment before the event will be considered on a case-by-case basis.


Need some inspiration for your Open Data Day event? OKI staff curated some ideas for you!

If you are all set and ready to organise an ODD event, apply for a mini-grant HERE.


DuraSpace News: NOW AVAILABLE: Fedora 4.7.1 Release

Wed, 2017-01-25 00:00

From David Wilcox, Fedora Product Manager, on behalf of the Fedora Team

The Fedora Team is proud to announce the release of Fedora 4.7.1 on January 24, 2017. Full release notes are available on the wiki. The Fedora 4.7.1 release is a backwards compatible refinement of the previous release, focused on improvements and bug fixes to the REST-API and the core code base.

Jonathan Rochkind: Heroku auto-scaling, and warning, ask first about load testing

Tue, 2017-01-24 17:07

Heroku auto-scaling looks like a pretty sweet feature, well-implemented as expected from Heroku. (I haven’t tried it out myself yet; this is just from the docs.)

But…

“We strongly recommend that you simulate the production experience with load testing, and use Threshold Alerting in conjunction with autoscaling to monitor your app’s end-user experience. If you plan to conduct significant load testing, you will need to request written consent from Heroku in advance to prevent being flagged as a denial of service attacker.”

They strongly recommend something that requires written consent from Heroku? That’s annoying, and very un-Heroku-like.

I actually recently did some automated load testing of rubyland.news, as well as of a different app I was working on for a client, in order to determine the proper number of puma workers and threads. I hadn’t seen these docs and it hadn’t occurred to me I should notify Heroku first.

My load testing was brief, but who knows what is considered ‘significant’ by Heroku’s automated DoS defenses. Glad I seem to have escaped being flagged. Next time I’ll make sure to request written consent… by filing a support ticket, I guess, since the text links to the support area.
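For what it’s worth, a load test at its simplest is just many concurrent requests with some timing around them. Here’s a minimal Python sketch (the URL and request counts are placeholders, and this is a toy illustration rather than the tooling I actually used; real load testing tools do much more):

```python
# Minimal load-test sketch: fire concurrent GET requests at an endpoint
# and report latency percentiles. Placeholder URL and counts -- a toy
# illustration, not a substitute for a real load testing tool.
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"   # placeholder target
REQUESTS = 200
CONCURRENCY = 20

def timed_get(_):
    start = time.monotonic()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    return time.monotonic() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_get, range(REQUESTS)))

print(f"median: {statistics.median(latencies) * 1000:.0f} ms")
print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1000:.0f} ms")
```

And per the docs above, remember to get Heroku’s written consent before pointing anything like this at an app hosted there.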


Filed under: General

Open Knowledge Foundation: Project PiMaa is building low-cost, open-source data stations to support environmental monitoring in Kampala

Tue, 2017-01-24 13:15

PiMaa is an Internet of Things project in Kampala, Uganda that seeks to build low-cost environment monitoring stations and open-up any data collected. PiMaa is an initiative under Outbox, supported by Open Knowledge International through the Africa Open Data Collaboration Fund.

Kampala is experiencing a lot of growing pains. The current administration is doing its utmost to improve the living conditions of the inhabitants. Public spaces are beautified, public transport is reformed, roads have been improved and tarmacked, and there is a phone number and contact point where citizens can report noise pollution.

Kampala City Council Authority (KCCA) is really modernising the way the city is being managed. Still, to the naked eye, it is quite obvious Kampala’s environment is suffering from a lot of different challenges, mostly caused by human activity. Globally, there is an urgent call to combat climate change under Sustainable Development Goal (SDG) 13 and to improve the resilience of cities under SDG 11.

In Kampala, there is no way to determine air quality: there is no infrastructure to support environmental monitoring, no air quality standards in place and no data on air quality. The growing reliance on diesel fuels for power generation, increased congestion on roads, indoor pollution due to poor connectivity to the electricity grid, and noise all lead to increasing pollution.

Image credit: Elevated view of Nakasero Market, Kampala (Public Domain)

In a study on the state of ambient air quality conducted by Dr. Bruce Kirenga, air pollution in the Ugandan cities of Kampala and Jinja was found to be 5.3 times above the standards set by the World Health Organisation (WHO). Further to that, the State of the Environment report in 2010 highlights the lack of air pollution data.

There is a need for a robust, extendable system for measuring the environment in the city of Kampala, Uganda that is low cost, modular and open source.

As a group of enthusiastic technology fellows, we have decided to embark on a project to build low-cost environment stations that will enable us to open up data to track our environment. We called it “Pimaa” — literally from the local Luganda word “Okupima”, which means “to measure”. We want to gather data on our environment to enable us to track its quality and measure progress towards the fulfilment of SDG 13 and SDG 11. The Pimaa project aims to use open data to expose challenges faced and create awareness of the state of the urban environment, allowing organisations like KCCA, the National Environment Management Authority (NEMA) and other stakeholders to measure the impact of the balance between an ever-growing population, policy and project implementations.

Pimaa is an Internet of Things (IoT) project that seeks to build a low-cost environmental platform that can be easily deployed to public outposts and sites using small monitoring devices that collect data on air quality using sensors. The data collected will be transformed and stored on the Pimaa platform against open data standards that will ease the accessibility and dissemination of collected data.

Pimaa’s data collection hardware (the station) will use a Raspberry Pi at its core, fitted with air quality sensors to measure various environmental pollutants, including carbon monoxide (CO), sulfur dioxide (SO2), nitrogen dioxide (NO2), ozone (O3) and particulate matter (PM2.5). Although they are not classified as air pollutants, we shall also have sensors to measure ambient temperature and humidity. Noise levels will also be recorded due to the impact they have on the psychological well-being of the general public.
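To make the data flow concrete, here is a minimal sketch of the kind of read-and-buffer loop such a station might run. The sensor-reading functions are hypothetical placeholders standing in for real hardware drivers; this is an illustration of the structure, not the project’s actual code (which is on GitHub):

```python
# Sketch of a PiMaa-style station loop: read sensors, timestamp the
# readings and buffer them for upload. The read_* functions below are
# hypothetical placeholders; real code would talk to actual sensors.
import json
import time
import random

def read_pm25_ugm3():
    return round(random.uniform(5, 150), 1)    # placeholder PM2.5 reading

def read_temperature_c():
    return round(random.uniform(18, 32), 1)    # placeholder temperature

def take_reading(station_id):
    return {
        "station": station_id,
        "timestamp": int(time.time()),
        "pm25_ugm3": read_pm25_ugm3(),
        "temperature_c": read_temperature_c(),
    }

buffer = []
for _ in range(3):          # a real station would loop on a schedule
    buffer.append(take_reading("pimaa-kampala-001"))
    time.sleep(1)

# Over LoRaWAN the payload would be packed far more tightly; JSON is
# used here only to make the record structure visible.
print(json.dumps(buffer, indent=2))
```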

The purpose is to develop a live prototype of the low-cost environment station that can then be adopted by any interested parties to scale.

How can the project scale?

The “Project PiMaa” platform is designed around the Raspberry Pi (a credit card-sized single-board computer) as the core of a system that measures several environmental conditions, ranging from temperature, humidity, light/UV and air pressure to air quality (nitrogen dioxide and carbon dioxide levels, etc.). For such a system to scale, cost becomes a major issue.

In addition, we can’t afford the cost of mobile data across all these locations, but we have a solution to that: LoRaWAN, a networking protocol for the Internet of Things. We plan to roll out a very cheap IoT network across Kampala based on 100% open source solutions, in which the stations upload their data to one central base station covering a radius of up to 10 kilometres. The network operates in free spectrum, eliminating the exorbitant traffic costs that the mobile network operators would have charged us.

A live prototype of PiMAA

We have another challenge: we won’t be able to pay the electricity bills if we are to scale that much and cover the whole city. A solar panel combined with the IoT network would bring the operational costs of a station to zero (cleaning and maintenance labour excluded). These additional technologies will make the stations much more attractive to roll out in neighbourhoods with less stable power and no wireless Internet.

Also, the implementation needs to be based on open source technologies, and modular, reconfigurable sensors that can be distributed across wide areas are a plus. Finally, the data needs to be fully accessible remotely anywhere on the web and visualised in a format that can be easily interpreted to make informed decisions on policies, planning and progress towards achievement of the SDGs.

How might the data collected and insights developed be useful?

Data is only as good as its usage: its conversion into insights and the informed decisions made from it. Showing the raw data alone will not be enough; we will also need to do research and explain what the data means. What is bad air quality? What does it lead to? How does urban air quality affect the GDP of the country? What happens when a child gets too little sleep because there is a noisy night-church or nightclub keeping her awake every night? What is the effect of the weather on the data we find?

We do understand the gap in skill sets needed to make this happen. We shall have our trained data fellows work closely with the partner organisations on a day-to-day basis to put together insights from the data coming out of these stations. The data will be pushed to any open data portals in the country, but we will also develop a website that shows our data and other relevant data (wind, noise pollution, air quality, etc.) on a map and over time.

Our desire is to build a proof of concept that can be adopted to influence policy.

We want to be able to answer these questions and use these measurements to influence policy:

  1. How bad is air quality in Kampala?
  2. How does Kampala improve the quality and timeliness of data gathered on the urban environment to accelerate implementation and action?
  3. Where are the most polluted areas of town, and where is this pollution coming from?
  4. How can the data help proponents of alternative cooking solutions make their case based on actual measurements?
Our learnings and next steps

It has been a challenge identifying open-source air quality sensors that work outdoors; most air quality sensors are built to work indoors. When we initially started out on the project, we were under the impression it would be easy to find all the sensors we needed. For the rapid prototype, we are opting to test with the indoor sensors and later identify outdoor air sensors for the live prototype.

We need partners on this project. We are interested in working with the national environmental regulatory Authority, the Kampala Capital City Authority division responsible for environmental monitoring and assessment, providers of solar energy equipment to power our stations, students or professors from university departments focused on environmental research and clean energy CSOs/NGOs. Volunteers who would offer to host our stations on their buildings are also welcome.

Lastly, we are interested in talking with people that are implementing or have done similar projects before to share experiences. You can contribute to our project on GitHub. You can also follow us on Twitter @projectpimaa.

This piece originally appeared on the Outbox research Medium blog and is reposted with the author’s permission.

Ed Summers: Facts as Annotations

Tue, 2017-01-24 05:00
You may have noticed back in December that the Washington Post [released] a fact checking plugin for Chrome that provides inline context for Trump's tweets. A few days later an equivalent [Firefox extension] was released as well. At the time I looked at the plugin source that was installed (I couldn't find it on GitHub) to see how it was gathering facts:

The WaPo browser extension to provide context to Trump’s tweets has a local store of facts https://t.co/gg6RYxYsKj https://t.co/DMsStcr0Ks

— Ed Summers (@edsu) December 17, 2016

The plugin comes bundled with a set of facts: specifically [23 facts] about 28 of Trump's tweets, stored as JavaScript. I thought this was significant at the time because tweets can spread quickly, and any lag time between when the tweet is published, when the fact checking is performed, and when the plugin is updated is highly significant. The plugin would need to be fully updated before anyone could see the new facts.

Just a few days ago the Washington Post updated their story to indicate that the extension will now fact check tweets from the [POTUS] account as well (thanks for the heads up [Neil]). I took the opportunity to look under the hood again and can see that it now fetches the facts dynamically from this URL:

> [https://www.pbump.net/files/post/extension/core/data.php](https://www.pbump.net/files/post/extension/core/data.php)

Now there are 73 facts about 98 tweets, which is very cool. I put a snapshot I created this afternoon up as [a gist] if you want to take a look at them pretty-printed. But it's not just that there are more facts that's exciting here. The big improvement in my opinion is that the plugin loads the facts *dynamically*, so as new fact checking is performed the plugin can respond in near real time, without needing to be updated to get the new facts in front of people.

This raises the question of what workflow is producing the facts. I think it would be interesting to know a little bit more about how the facts end up in this JSON data being served up at pbump.net. Presumably there are journalists watching what Trump is tweeting and somehow adding facts to a database that is being used to serve up the data. It feels like there could be an opportunity to formalize the data structure for the facts, and bootstrap a mini-ecosystem for the sharing of facts by trusted authorities. Having used the [Hypothesis Annotation plugin] for a few years I can't help but wonder if it might make sense to share these facts using something like [Web Annotations], which provide a distributed data model and format for annotating web content (see the sketch at the end of this post). What if there were a plugin that could be configured to display facts from the Washington Post and the New York Times, or any other authority that wants to put in the work and that the user chooses to trust?

As much as I loathe the idea of *alternative facts* and how they are being used politically at the moment, I still recognize that facts are based on trust, and trust is fundamentally a social problem. Truth be told I don't think using Web Annotation as a technology is in itself going to solve the problem of trust. Been there, done that. What's actually needed is *openness* about the architecture underpinning fact checking, and letting others participate in it. This obviously isn't a fully baked thought, but more of a provocation for further thought. One small, perhaps risky but practical step forward would be for the Washington Post to publish their plugin on GitHub, and to start a dialogue about what a small ecosystem for fact checking could look like.
[released]: https://www.washingtonpost.com/news/the-fix/wp/2016/12/16/now-you-can-fact-check-trumps-tweets-in-the-tweets-themselves/?tid=sm_tw&utm_term=.1ebfcaa9e0c7
[Firefox extension]: https://addons.mozilla.org/en-US/firefox/addon/real-donald-context/
[POTUS]: https://twitter.com/POTUS
[Neil]: https://twitter.com/fraistat
[a gist]: https://gist.github.com/edsu/fa5b62a800be8bb9a68e84ef632f4ed7
[Hypothesis Annotation Plugin]: https://hypothes.is/
[Web Annotations]: http://w3c.github.io/web-annotation/model/wd2/
[23 facts]: https://gist.github.com/edsu/8adb6edd07304b89125710c13fd9e40e
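As a concrete sketch of what I mean, a single fact about a tweet could be expressed as a Web Annotation along these lines. This is hand-written against the W3C model; the Post's actual data.php feed doesn't use this format, and the IDs and URLs here are invented:

```python
# Sketch: a fact check expressed as a W3C Web Annotation (JSON-LD).
# Hand-written illustration of the model; the Post's actual feed does
# not use this format, and the IDs and URLs below are invented.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "https://example.org/annotations/1",        # invented ID
    "type": "Annotation",
    "motivation": "commenting",
    "creator": "https://www.washingtonpost.com/",     # the asserting authority
    "body": {
        "type": "TextualBody",
        "value": "Fact check: [correction text would go here]",
        "format": "text/plain"
    },
    # The target is the tweet being annotated (placeholder status URL).
    "target": "https://twitter.com/realDonaldTrump/status/0000000000"
}

print(json.dumps(annotation, indent=2))
```

A plugin could fetch a feed of annotations like this from each authority the user trusts, and match the targets against tweets in the page.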

Library of Congress: The Signal: Using Three-Dimensional Modeling to Preserve Cultural Heritage

Mon, 2017-01-23 18:54

This is a guest post by Elizabeth England.

The Temple of Bel 3D model from #NEWPALMYRA, http://www.newpalmyra.org/models/temple-of-bel/. Model released under CC0 Public Domain.

In recent years, a few news stories have focused on the use of digital tools in preserving three-dimensional cultural heritage objects, stories such as the printed reconstruction of the Arch of Triumph in Palmyra, Syria and the construction of a facsimile of King Tutankhamun’s Egyptian tomb. These two examples recreate something physical but, increasingly, ancient artifacts and monuments are becoming accessible both as physical replicas and as digital objects.

Both physical and digital reconstructions offer the opportunity to experience objects and sites that might not otherwise be possible due to their far-away location, their fragility or because they’ve been destroyed. 3D modeling can preserve a replica of a site or object in its current state or restore it to an earlier state.

Creating 3D digital reconstructions is no easy task and there is more than one method. For example, the heritage-site preservation organization CyArk uses laser scanning. This post is about an example of the Library of Congress’s method: point-cloud generation from photographs.

Recently, the Library of Congress’s 2016-17 National Digital Stewardship Residents and their mentors met with Library of Congress staff member John W. Hessler, whose research focuses on the use of computation, computer vision (teaching the computer to “see” and process visual data) and virtual reality in archaeology. Hessler is the Curator of the Jay I. Kislak Collection of the Archaeology and History of the Early Americas at the Library of Congress; he also teaches courses at Johns Hopkins University’s Graduate School of Advanced Studies on the topic of mathematical and algorithmic foundations of computer vision and archaeological imaging.

“Monmouth castle point cloud, created with Photosynth” by John Cummings. CC BY-SA 3.0 via Wikimedia Commons

We started our session with an overview of a 3D modeling process that uses computer vision to create models from 2D photographs. Measuring an object from photographs is known as photogrammetry. Reconstructing a destroyed site is often made possible through crowdsourcing photographs of the site, such as through the #NEWPALMYRA project.

The workflow begins with taking multiple images of the object or site from various angles. The photographs should cover all parts of the object, which will help the computer calculate the distance between points on the object (essentially what our eyes do with depth perception). Precision and quantity are key.

The next step is “feature extraction and matching,” a process that locates common points across images using the Scale-Invariant Feature Transform (the SIFT algorithm, developed by David Lowe). From this data, a point cloud is derived. Simply put, a point cloud is a collection of points in three dimensions, each point having X, Y and Z coordinates.
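As an illustration of this step, here is a minimal sketch of SIFT feature extraction and matching using OpenCV. The library choice and file names are assumptions for the example; the post does not specify the actual toolchain used:

```python
# Minimal sketch: SIFT feature extraction and matching between two photos.
# Assumes opencv-python >= 4.4 (where SIFT is in the main module) and two
# overlapping photographs of the same object; an illustration only, not
# the Library's actual pipeline.
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints + 128-dim descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only unambiguous matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

print(f"{len(good)} candidate point correspondences")
# A structure-from-motion tool would triangulate correspondences like these
# across many photos into X, Y, Z points: the point cloud.
```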

The images are then related to one another using their features and points and molded into a rough 3D model. Self-calibration calculations and density matching follow, to correct errors and further shape the three-dimensional depth. Lastly, the final 3D model is built and texture is added.

Access is a key component of digital preservation and the Kislak Collection’s 3D-modeled objects are accessible through Sketchfab, a platform for publishing and sharing 3D content. Sketchfab supports over 30 3D file formats, but some formats are more desirable than others because they’re more widely supported and compatible across platforms, formats such as .PLY (polygon file) or .OBJ (object file). File-format support and longevity are major considerations for digital preservation.

Close-up of a 3D model from the Kislak Collection, Small Vase with High-Relief “Diving God,” Postclassic Maya, 1200-1400 CE. Model is free on Sketchfab

I asked Hessler about preferred file formats for 3D models, or more generally, if best practices exist for the long-term preservation of 3D models. He said that, because the field is still so new, standards are yet to be determined. Hessler foresees the field being in a constant state of flux for many years due to ongoing changes and advancements brought about by technologies such as drones, robotics and computer vision research.

While standards and best practices are needed, Hessler also sees larger considerations. “At some point, we have to start thinking deeply about the purpose of all of this 3D modeling,” Hessler said. “It is one thing to say that it is useful in bringing to life cultural heritage for those who cannot travel to sites or museums. The question of preservation becomes tricky however. We do not want to start thinking that these digital copies, which are easier and easier to make, are actual substitutes for the objects and sites they are meant to record. I can imagine situations where scarce preservation resources are withheld with the idea that ‘we have a copy and why preserve the original?’ The real things are what are most important and their preservation should be our goal.”

Additional concerns have been raised about the loss of cultural context and authority, or as Patty Gerstenblith wrote in Technology and Cultural Heritage Preservation, “This also leads to questions about who has the right to re-create and determine the authenticity of the past.”

For all the affordances of 3D modeling to preserve cultural heritage, there is still much to be addressed in this growing field, from best practices to guiding principles. Establishing standards will be necessary as institutions increasingly become responsible for the long-term preservation of 3D models, which are becoming integral parts of the cultural record.
