You are here

Feed aggregator

District Dispatch: Senate boosts funding for IMLS, LSTA thanks to ALA grassroots

planet code4lib - Fri, 2017-09-08 17:30

Congress delivered good news for library funding after returning from its August recess this week. Yesterday, the Senate Appropriations Committee approved an increase of $4 million in funding for the Institute of Museum and Library Services (IMLS), all of which would go to the formula-based Grants to States program.

Following months of intensive Hill lobbying by ALA Washington Office staff and the emails, phone calls and visits to Congress by ALA advocates, these gains are a win for libraries. According to a key Senate staffer, ALA’s ongoing grassroots campaign to save direct library funding launched last March – and the significant increase in the number of Senators and Representatives signing “Dear Appropriator” letters this year that it produced – played a major role in the gains for IMLS and Grants to States in the Senate Committee’s bill.

The Senate Committee’s bill, approved by the Labor-HHS Subcommittee on Wednesday, would boost IMLS funding to $235 million. Grants to States would receive $160 million. The bill also includes increased funding in FY 2018 for a number of other library-related programs.

Institution/Program                                      | Total          | Increase
National Library of Medicine                             | $420.9 million | $21 million
Title IV Student Support and Academic Enrichment Grants  | $450 million   | $50 million
Title I Grants to Local Educational Agencies             | $15.5 billion  | $25 million
Innovative Approaches to Literacy                        | $27 million    | level
Title II Supporting Effective Instruction State Grants   | $2.1 billion   | level
Career and Technical Education State Grants              | $1.1 billion   | level

Overall, education funding in the Senate bill decreased by $1.3 billion, but libraries remain a clear priority in Congress. These increases in direct library funding would not be possible without sustained advocacy by ALA staff and members!

The Committee’s funding measure now heads to the full Senate for consideration. If passed, it must eventually be reconciled with House legislation that proposes to fund IMLS and Grants to States for FY2018 at FY2017’s levels of $231 million and $156 million, respectively. While yesterday’s vote does not guarantee increased direct library funding, Senate approval of the Appropriations Committee’s bill would leave libraries in a very strong position to avoid any cuts for FY2018 – in spite of the Administration’s proposals (reiterated this week in a “Statement of Administration Position”) to effectively eliminate IMLS and federal library funding.

While library funding is on track to remain level through the standard appropriations process, final passage of legislation by both chambers of Congress by the end of the 2017 Fiscal Year on September 30 is unlikely. Congressional staff tell ALA that Congress will not be able to pass most, if any, of its 12 individual appropriations bills by the end of this month. Congress will likely need to enact a Continuing Resolution (CR), which would fund the government at current levels, to avert a government shutdown on October 1.

Thanks to you, the outlook for library funding in FY2018 is promising, but it’s not close to being a done deal. Right now, we must be patient; but please be ready to participate in one last grassroots push this fall when your voice is most needed to maintain – and possibly increase – library funding. We will keep you updated.

The post Senate boosts funding for IMLS, LSTA thanks to ALA grassroots appeared first on District Dispatch.

Archival Connections: Arrangement and Description in the Cloud: A Preliminary Analysis

planet code4lib - Fri, 2017-09-08 16:16
I’m posting a preprint of some early work related to the Archival Connections project. This work will be published as a book chapter/proceedings by the Archivschule in Marburg. In the meantime, here is the preprint: Arrangement and Description in the Cloud: A Preliminary Analysis

LITA: Announcing the LITA Blog Editors

planet code4lib - Fri, 2017-09-08 14:21

We are pleased to announce that Cinthya Ippoliti and John Klima will serve as joint editors of the LITA Blog. Each is an accomplished writer and library tech leader, and we are confident that their perspectives and skills will benefit the Blog and its readership.

John Klima

Cinthya Ippoliti

Cinthya is Associate Dean for Research and Learning Services at Oklahoma State University where she provides administrative leadership for the library’s academic liaison program as well as services for undergraduate and graduate students and community outreach. As a blogger, she has covered a slew of topics including technology assessment.

John is the Assistant Director of the Waukesha Public Library, where one of his many hats is maintaining, upgrading, and innovating technology within the library. He wrote a number of articles on steampunk for Library Journal. As a blogger, he often provides a public library technology perspective.

Look for updates from our Editors on how you can get involved and contribute to the LITA Blog!

Lucidworks: The Search for Search at Reddit

planet code4lib - Thu, 2017-09-07 19:08

Today, Reddit announced their new search for ‘the front page of the internet’ built with Lucidworks Fusion.

Started back in the halcyon Web 2.0 days of 2005, Reddit has become the fourth most popular site in the US and ninth in the world, with more than 300 million users every month posting links, commenting, and voting across its 1.1 million communities (called ‘sub-reddits’). Sub-reddits can orbit around topics as broad and mainstream as /r/politics, /r/bitcoin, and /r/starwars or as obscure as /r/bunnieswithhats, /r/grilledcheese, and /r/animalsbeingjerks. Search is a key part of how users find more information on their favorite topics and hobbies across the entire universe of communities.

As the site has grown, search has been rebuilt on five different stacks over the years: Postgres, PyLucene, Apache Solr, IndexTank, and Amazon’s CloudSearch. Each time, performance got better but couldn’t keep up with the pace of the site’s growth, and relevancy wasn’t where it should be.

“When you think about the Internet, you think about a handful of sites — Facebook, Google, YouTube, and Reddit. My personal opinion is that Reddit is the most important of all of these,” explained Lucidworks CEO, Will Hayes. “It connects strangers from all over the world around an incredibly diverse group of topics. Content is created at a breakneck pace and at massive scale. Because of this, the search function becomes an incredibly important piece of the UX puzzle. Lucidworks Fusion allows Reddit to tackle the scale and complexity issues and provide the world-class search experience that their users expect.”

The team chose Lucidworks Fusion for its best-in-class search capabilities, including efficient scaling, monitoring, and improved search relevance.

“Reddit relies heavily on content discovery, as our primary value proposition is giving our people a home for discovering, sharing, and discussing the things they’re most passionate about,” said Nick Caldwell, Vice President of Engineering at Reddit. “As Reddit has grown, so have our communities’ expectations of the experience we provide, and improving our search platform will help us address a long-time user pain point in a meaningful way. We expect Fusion’s customization and machine learning functionality will significantly elevate our search capabilities and transform the way people discover content on the site.”

Here are just a few of the results from the new search, which has now been rolled out to 100% of users:

  • ETL indexing pipelines reduced to just 4 Hive queries, which led to a 33% increase in posts indexed (a hypothetical sketch of one such query follows this list)
  • Full re-index of all of Reddit content slashed from 11 hours to 5 with constant live updates and errors down by two orders of magnitude
  • Amount of hardware/machines reduced from 200 to 30
  • 99% of queries served search results in under 500ms
  • Comparable relevancy to the old search (without any fine-tuning yet!)
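
For readers curious what a Hive-driven indexing step can look like, here is a minimal sketch in shell. To be clear: this is a hypothetical illustration only, not Reddit’s actual pipeline; the table and column names (posts, selftext, created_utc) are invented for the example.

    # Hypothetical sketch of one incremental extraction step; NOT Reddit's
    # real pipeline. Table and column names are invented for illustration.
    hive -e "
      INSERT OVERWRITE DIRECTORY '/tmp/search_export/posts'
      SELECT id, subreddit, title, selftext, score, created_utc
      FROM posts
      WHERE created_utc >= unix_timestamp() - 86400  -- last 24 hours of posts
        AND deleted = FALSE;
    "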

That’s just a taste of the detailed post over on the Reddit blog: The Search for Better Search at Reddit.

Don’t miss their keynote at the Lucene/Solr Revolution next week in Las Vegas.

Coverage in TechCrunch and KMWorld. More on the way!

Read the full press release.

Go try out the search on Reddit right now!

The post The Search for Search at Reddit appeared first on Lucidworks.

Evergreen ILS: On the Road to 3.0: Small Enhancements to Improve the Staff Experience

planet code4lib - Thu, 2017-09-07 16:33

The upcoming Evergreen 3.0 release, scheduled for October 3, 2017, will bring along a lot of improvements for staff and patrons at Evergreen libraries. Over the next few weeks, we’ll highlight some of our favorite new features in the On the Road to 3.0 video series.

In the first installment of the series, we look at small feature enhancements that will improve the staff experience in Evergreen.

Do you want to be the first to know of new videos in this series as they are added? Be sure to subscribe to our new EvergreenILS YouTube channel.

District Dispatch: Latino Cultures platform from Google is a new library resource

planet code4lib - Thu, 2017-09-07 16:02

This post originally appeared on The Scoop.

Libraries across the country are working in a variety of ways to improve the full spectrum of library and information services for the approximately 58.6 million Spanish-speaking and Latino people in the US and build a diverse and inclusive profession.

In honor of National Hispanic Heritage Month, which begins on September 15, Google Cultural Institute has collaborated with more than 35 museums and institutions to launch a new platform within Google Arts & Culture: Latino Cultures. The platform brings more than 2,500 Latino cultural artifacts online and—through immersive storytelling, 360-degree virtual tours, ultra-high-resolution imagery, and visual field trips—offers first-hand knowledge about the Latino experience in America.

The American Library Association’s (ALA) President-Elect Loida Garcia-Febo is excited about this new resource, which she believes will help libraries continue to draw attention to the rich legacy of Latinos and Latinas across America.

“Nationwide, libraries are celebrating Latino cultures by offering programs that highlight our music, cuisine, art, history, and leadership,” says Garcia-Febo. “I know this platform will be a great springboard as we continue to reshape our library collections to include Spanish-language and Latino-oriented materials.”

Latino Cultures pulls from a wide variety of collections to recognize people and events that have influenced Hispanic culture in the US. For example, it highlights the Voces Oral History Project’s interviews with Latinos and Latinas of the World War II, Korean War, and Vietnam War generations. Likewise, the platform showcases luminaries like Mari-Luci Jaramillo, the first Latina Ambassador of the US to Honduras, and civil rights activist and labor leader Dolores Huerta, who co-founded the United Farm Workers with Cesar Chavez in the 1960s.

According to the latest research, America’s Hispanic population reached a record 17% of the US population in 2017. As this segment of the population grows, it is increasingly important for educators, hospitals, civil services, and other institutions to have more information about the diverse experiences and backgrounds of Latino Americans.

“Libraries must make sure that more than the basic services are available to Latino Americans,” says Garcia-Febo. “We have to provide respectful spaces for Latino voices and perspectives.”

Google Cultural Institute aims to inspire Americans to learn more about the cultures of Latinos and Latinas in the US. As a complement to the platform, they are creating lesson plans that support bringing content into classrooms, afterschool programs, and other organizational programming.

Office for Information Technology Policy Director Alan Inouye considers it ALA’s responsibility to bring these resources to the attention of all libraries.

“We are especially excited about this new resource in terms of our policy work,” says Inouye. “Issues of race, ethnicity, and immigration are front and center on the nation’s policy agenda, and diversity and inclusion are central to ALA’s strategic priorities. No doubt, the Latino Cultures platform will be a wonderful resource for libraries to leverage in their programs and services.”

In honor of National Hispanic Heritage Month, Garcia-Febo also gives credit to her personal cultural inheritance as a librarian.

“My mother, Doña Febo, was a librarian who taught me the importance of intellectual freedom and the right of everyone to access information. I always celebrate this month with her in mind.”

The post Latino Cultures platform from Google is a new library resource appeared first on District Dispatch.

Open Knowledge Foundation: Openbudgets.eu: the new platform for financial transparency in Europe

planet code4lib - Thu, 2017-09-07 12:28

Today, OpenBudgets officially launches its fiscal transparency platform. Using OpenBudgets.eu, journalists, civil servants, and data scientists can process, analyse, and explore the nature and relevance of fiscal data.

The platform offers a toolbox to everyone who wants to upload, visualise, and analyse fiscal data. From easy-to-use visualisations and high-level analytics to fun games and accessible explanations of public budgeting and corruption practices, along with participatory budgeting tools, it caters to the needs of journalists, activists, policy makers, and civil servants alike.

The first successful implementations and projects have been developed in Thessaloniki, Paris, and Bonn, where civil society organisations and civil servants have together built budget visualisations for the general public. The cooperation between IT and administration resulted in three local instances of OpenBudgets.eu, setting the example for future implementations around Europe.

On the EU level, the project has campaigned for transparency in MEP expenses and better-quality data on European subsidies. The OpenBudgets.eu project Subsidystories has uncovered how almost €300 billion in EU subsidies is spent. The MEP expenses campaign has led the President of the European Parliament to commit to introducing concrete proposals for reform of the MEPs’ allowance scheme by the end of the year.

Finally, the project has created tailor-made tools for journalists, as our research showed a lack of contextual knowledge and of the basics of accounting. ‘Cooking Budgets’ presents the basics of accounting on a satirical website, and the successful game ‘The Good, the Bad and the Accountant’ simulates the struggle of a civil servant to retain their integrity.

These three approaches and audiences to public budgeting have resulted in a holistic platform that caters to a wider public who want more insight into their local, regional, national, and even EU budgets. With the launch of OpenBudgets.eu, the field of financial transparency in Europe is enriched by new tools, data, games, and research for journalists, civil society organisations, and civil servants alike, resulting in a valuable resource for a broad target audience.

OpenBudgets.eu has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 645833 and is implemented by an international consortium of nine partners (including Open Knowledge International and Open Knowledge Foundation Germany) under the coordination of Fraunhofer IAIS.

William Denton: Denton Declaration

planet code4lib - Thu, 2017-09-07 01:35

I state, for the record, openly and proudly, that I am in full support of the Denton Declaration.

Evergreen ILS: Evergreen 3.0 first beta release available

planet code4lib - Thu, 2017-09-07 00:18

The first beta release of Evergreen 3.0 is now available for testing from the downloads page.

Evergreen 3.0 will be a major release that includes:

  • community support of the web staff client for production use
  • serials and offline circulation modules for the web staff client
  • improvements to the display of headings in the public catalog browse list
  • the ability to search patron records by date of birth
  • copy tags and digital bookplates
  • batch editing of patron records
  • better support for consortia that span multiple time zones
  • and numerous other improvements

For more information on what’s available in the beta release, please read the initial draft of the release notes.

Users of Evergreen are strongly encouraged to use the beta release to test new features and the web staff client; bugs should be reported via Launchpad. A second beta release, which will include bugfixes and support for Debian Stretch, is scheduled for 20 September.

Evergreen admins installing the beta or upgrading a test system to the beta should be aware of the following (a rough environment pre-check sketch follows this list):

  • The minimum version of PostgreSQL required to run Evergreen 3.0 is PostgreSQL 9.4.
  • The beta release will work on OpenSRF 2.5.0, but OpenSRF 2.5.1 is expected to be released over the next few days and will be recommended for further testing of the Evergreen beta. In particular, if you run into difficulties retrieving catalog search results, please see OpenSRF bug 1709710 for some workarounds.
  • Evergreen 3.0 requires that the open-ils.qstore service be active.
  • SIP2 bugfixes in Evergreen 3.0 require an upgrade of SIPServer to be fully effective.
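
Before filing bugs, it may save time to verify that a test system meets these requirements. Below is a minimal pre-flight sketch, assuming a conventional install with Evergreen’s configuration under /openils and a database user named evergreen; adjust paths and credentials for your site.

    # Check the PostgreSQL server version (9.4 or later is required).
    psql -U evergreen -h localhost -d evergreen -c 'SHOW server_version;'

    # Check the installed OpenSRF version (2.5.0 works; 2.5.1 will be
    # recommended once released). Assumes OpenSRF installed its pkg-config file.
    pkg-config --modversion opensrf

    # Confirm open-ils.qstore is present in the OpenSRF configuration; the
    # service must be active for Evergreen 3.0.
    grep -n 'open-ils.qstore' /openils/conf/opensrf.xml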

Evergreen 3.0.0 will be a large, ambitious release; testing during the beta period will be particularly important for a smooth release on 6 October.

District Dispatch: Rep. Pallone talks net neutrality at N.J. library

planet code4lib - Wed, 2017-09-06 22:39

Guest post by Tonya Garcia, director of Long Branch (New Jersey) Public Library

The Long Branch Public Library recently hosted a meeting with its representative, Congressman Frank Pallone (D-NJ6), to discuss net neutrality and its importance to libraries. As the senior minority member of the House Energy and Commerce Committee, he is a strong advocate for net neutrality (the principle that internet service providers should not be allowed to pick winners and losers among content and services offered to consumers). The library community is grateful for his interest in how libraries and the people they serve will be affected should rules preserving net neutrality be weakened or completely eliminated, as the Chairman of the Federal Communications Commission (FCC) has proposed.

Left to right: Tonya Garcia, director of Long Branch Public Library; Patricia A. Tumulty, executive director of the New Jersey Library Association; U.S. Representative Frank Pallone (NJ6); Eileen M. Palmer, director of the Libraries of Middlesex Automation Consortium. Photo credit: Eileen Palmer

Following a tour of the library, Congressman Pallone met with library advocates to discuss net neutrality and how important he believes it is to maintain rules protecting access to high-speed broadband. He invited us to share our concerns with him.

Patricia A. Tumulty, executive director of the New Jersey Library Association (NJLA), told the Congressman that in its comments filed with the FCC, the NJLA noted:

“The current net neutrality rules promote free speech and intellectual expression. The New Jersey Library Association is concerned that changes to existing net neutrality rules will create a tiered version of the internet in which libraries and other noncommercial enterprises are limited to the internet’s ‘slow lanes’ while high-definition movies and corporate content obtain preferential treatment.

People who come to the library because they cannot afford broadband access at home should not have their choices in information shaped by who can pay the most. Library sites—key portals for those looking for unbiased knowledge—and library users could be among the first victims of slowdowns.”

The availability of affordable high-speed internet has meant that public libraries now serve as incubators for local entrepreneurs, noted James Keehbler, director of the Piscataway Public Library. The makerspace and maker programs within the Piscataway library play a central role in supporting its residents. Without access to high-speed internet, the makerspace, for example, could not have been used by local entrepreneurs to develop prototypes that were used in successful crowd-sourced funding efforts to start a local business.

New Jersey State Librarian Mary Chute also discussed the State Library’s significant current investment in digital resources, which are then made available to all New Jersey residents. These expensive resources are relied on by small businesses, students, job seekers, and lifelong learners throughout the state. A “slow lane” internet in libraries would hamper access to bandwidth-heavy visual content such as training videos used by those seeking certifications for employment and many others.

Eileen M. Palmer, director of the Libraries of Middlesex Automation Consortium and member of the ALA’s Committee on Legislation, added concerns that the loss of net neutrality rules could negatively impact the many local digital collections housed in public and academic libraries. She also spoke about the potential loss of access to government information, such as the NASA high-speed video feeds used just recently by many libraries to host eclipse programs and viewing events for students and the public.

This was a wide-ranging discussion. Attendees were appreciative of Congressman Pallone’s leadership on this issue and his interest in better understanding how libraries and our patrons will be impacted should we lose rules protecting net neutrality. It also was a conversation that the Congressman was eager to have with his constituents in a library in his congressional district.

Who’ll be writing the next blog about their representative’s visit to their library, I wonder?

The post Rep. Pallone talks net neutrality at N.J. library appeared first on District Dispatch.

LITA: Jobs in Information Technology: September 6, 2017

planet code4lib - Wed, 2017-09-06 20:41

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Purdue University Northwest, Reference/Instruction Librarian, Westville, IN

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

Islandora: CLAW Install Sprint Recap

planet code4lib - Wed, 2017-09-06 15:09
We've just wrapped up our first community sprint on providing a more modular and flexible installer using Ansible, and it was totally awesome. Our team of active community volunteers tackled Ansible head on and nearly brought claw-playbook to feature parity with claw_vagrant. This means that once a few outstanding issues are resolved, claw-playbook will not only replace claw_vagrant for development environments, but can also be used to install CLAW on bare metal (a rough sketch of what that might look like follows the list of contributors)! This is what we've all been waiting for! This giant leap forward could not have happened without our talented and dedicated community volunteers. We'd like to thank each and every one of you (and your bosses!) for generously donating your time and talents to our cause:
  • Bryan Brown (Florida State University)
  • Jared Whiklo (University of Manitoba)
  • Adam Soroka (Smithsonian Institution)
  • Natkeeran Kanthan (University of Toronto Scarborough)
  • Marcus Barnes (University of Toronto Scarborough)
  • Jonathan Green (LYRASIS)
  • Diego Pino (Metropolitan New York Library Council)
  • Rosie Le Faive (University of Prince Edward Island)
  • Brian Woolstrum (Carnegie Mellon)
  • Yamil Suarez (Berklee College of Music)
  • Gavin Morris (Born-Digital)
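
For anyone wondering what a bare-metal run might look like once those outstanding issues are resolved, here is a rough sketch, run from a checkout of claw-playbook. The inventory path, playbook filename, and remote user below are placeholders rather than the project's documented invocation; consult the repository README for the real steps.

    # Rough sketch only; inventory path, playbook name, and remote user are
    # placeholders, not claw-playbook specifics. See the repo README.
    cd claw-playbook
    ansible-playbook -i inventory/production playbook.yml \
        -u ubuntu --become --ask-become-pass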
And remember folks, we still have more devops goals to reach for, like multi-server setups and containerization. So be on the lookout for another call for stakeholders soon!

In the Library, With the Lead Pipe: From AASL Standards to the ACRL Framework: Higher Education Shifts in Pedagogical Strategies

planet code4lib - Wed, 2017-09-06 15:00
In Brief

How does the Framework for Information Literacy for Higher Education function in relation to the information literacy standards used with students in K-12 schools and how does it inform academic librarians’ pedagogical strategies? While these documents are strongly related, there are large differences in their theoretical approach to information literacy, which are revealed in their definitions, treatment of dispositions, and approach to measurement. This leaves gaps in instructional approaches and student learning. Understanding these differences enables librarians in higher education to leverage the Framework to teach all students and fill in instructional gaps, regardless of how much information literacy instruction they have received in the past.

Introduction

I became an academic librarian in August 2016, shortly after the ACRL formally adopted the Framework for Information Literacy for Higher Education. The Framework is a fundamentally different document from the AASL Standards for 21st Century Learners, the standards from which I worked as a library media specialist in a college preparatory high school. While the student population in that high school was fairly homogeneous (racially, socio-economically, etc.), the students in my college classes were from all over the world, had any background imaginable, and were at different skill levels. They had unique K-12 experiences and vastly varying degrees of information literacy instruction prior to coming to campus. I needed to work effectively with the Framework in planning lessons and assessing student learning to properly support all of these students. To better understand a new guiding document, I did close readings, comparing and contrasting the two documents, and delved into articles discussing their usage and teaching philosophy; however, I found no literature on how the two functioned together. In this paper, I use the final draft of the Framework for Information Literacy for Higher Education (2015) and the current 2009 version of the AASL Standards for 21st Century Learners and examine them independently of how they are utilized in classrooms. I set out to compare and contrast the theory behind each of the documents, the AASL Standards relying on behaviorist theory and the Framework relying on critical librarianship and social constructivism. These theories embody very different ideas about how students learn about information; the differences between them create learning gaps and affect our pedagogical practices in the classroom.

Theoretical Approaches

Each document presents a set of beliefs and a definition of information literacy that provides groundwork for the formation of their objectives (the Standards for AASL and the Knowledge Practices for ACRL). The AASL Standards’ Common Beliefs are a series of statements placed at the beginning of the document without any introduction as to their meaning or purpose. I am writing with the assumption that these present an underlying philosophy for the standards. The Common Beliefs include statements like “ethical behavior in the use of information must be taught,” “school libraries are essential to the development of learning skills,” and “technology skills are crucial for future employment needs” (AASL, 2009, pg. 2-3). The AASL Standards’ definition of information literacy, placed within the Common Beliefs and further explained directly after, states that “Information literacy has progressed from the simple definition of using reference resources to find information. Multiple literacies, including digital, visual, textual, and technological, have now joined information literacy as crucial skills for this century.” Each of the Standards begins with “Learners use skills, resources, and tools to: [action word here]” (AASL, 2009, pg. 3).

This definition and standard set is skills-focused, emphasizing the use of tools and technological skills to find information. This approach, like the older ACRL Standards, views information as “a commodity external to the learner, which can be sought out, possessed, and used” and portrays students as individuals who acquire information skills through practice (Foasberg, 2015, pg. 702). These foundational principles are much more reflective of behaviorist theory, a teaching theory which holds that learning is “typified by rote learning, drill-and-practice” and “manifests itself through changed behaviours such as the acquisition of new practical abilities” (Elliot, 2009, pg. 1). This approach typically views the instructor as the ultimate authority within the classroom and treats teaching and learning as sequential. What this means for information literacy is that if a student acquires the skills to access information through a variety of avenues, in a particular order, then they will have achieved information literacy. The behaviorist focus on sequence is seen in the Standards’ structure. They approach each measure in a highly structured, nested fashion that provides a clear order in which students are to approach research and information. Standard 1.1.1 states students should “follow an inquiry-based process in seeking knowledge in curricular subjects, and make the real-world connection for using this process in own life” (AASL, 2009, p. 3). The rest of the standards in section 1 then present an ordered, linear list of actions students are to take for this “inquiry-based process.”

The Standards for the 21st-Century Learner in Action defines dispositions as “learning behaviors, attitudes, and habits of mind that transform a learner from one who is able to learn to one who actually does learn”, states that they can be taught through assignment structure (i.e., building in activities that require persistence), and that they “can be assessed through documentation that proves the student has followed the behavior during the learning process” (2009, p. 8). This is reiterated in the Standards document itself, which defines dispositions as “ongoing beliefs and attitudes that guide thinking and intellectual behavior that can be measured through actions taken” (AASL, 2009, pg. 8). But does a student’s action and behavior truly reflect an inward attitude? While the actions prescribed in the AASL Standards should not be ignored or undervalued, it is problematic to assume that an attitude can be measured through action or that learning can only occur with the “right” attitude. Many students learn to simply act the way they think they are supposed to act (I was one of these students!). The Standards also have a tendency to hide additional physical, observable skills within the Dispositions sections, further confusing inward attitudes and outward behavior.

While the AASL Standards mention social context, working together, and thinking skills that launch independent learning, they do not place a lot of focus on student reflection. The Framework’s beliefs explicitly address both the faculty and student roles and responsibilities, reflecting on their own behaviors and actions within the academic community, and how information functions in a cultural landscape. Each of the frames within the Framework offers a concise statement and a list that begins with “Learners who are developing their information literate abilities [action word here]” (ACRL, 2015). The Standards focus on the student using a tool or someone else’s expertise to complete their tasks; the Framework places the focus on what the student is doing and thinking. The difference is subtle and simple, but significant.

The Framework defines information literacy as a social practice, emphasizing “dynamism, flexibility, individual growth, and community learning.” It reaches beyond skills in locating information and addresses how information is created, how students use information to create their own knowledge, and students’ responsibilities to participate in the learning community. This is strongly reflective of critical librarianship and of social constructivist pedagogy, a theory that emphasizes how a student’s language and culture, as well as context (of the information and of the student’s role in society), affect learning. Critical librarianship is the “inflection of critical theory in library and information science” (Garcia, 2015, par. 6) and has been defined as “a movement of library workers dedicated to bringing social justice principles into our work in libraries” (Critlib, n.d., par. 1). The Framework states that “critical self-reflection is crucial in becoming more self-directed in the rapidly changing ecosystem.” Critical self-reflection depends heavily on student engagement and requires librarians to create lessons that focus more on the nature of information as opposed to lessons that focus on extracting information (articles and resources) from a system. The frames, or threshold concepts (ideas that enlarge ways of thinking), are arranged in alphabetical order, providing no sequence and implying that each concept is equal in importance. While some order of research process is provided within the Research as Inquiry and Searching as Strategic Exploration frames, there are major language differences. Instead of emphasizing order (there are no numbers in the frames’ lists), the Framework breaks down the cognitive skills that are at play in searching:

Comparison between the AASL Standards and the ACRL Framework

AASL Standard 1.1.8:
  • “Learners demonstrate mastery of tech tools for accessing information and pursuing inquiry.”

ACRL Framework, Searching as Strategic Exploration:
  • “Match information needs and strategies to appropriate search tools.”
  • “Design and refine needs and search strategies as necessary, based on search results.”
  • “Understand how information systems (i.e. collections of recorded information) are organized in order to access relevant information.”
  • “Use different types of searching language (e.g., controlled vocabulary, keywords, and natural language) appropriately.”

Staying true to its focus on self-reflection, the Framework defines dispositions quite differently than the Standards do. Dispositions are a “tendency to act or think in a particular way. More specifically, a disposition is a cluster of preferences, attitudes, and intentions, as well as a set of capabilities that allow the preferences to become realized in a particular way” (Salomon, 1994). The Framework views dispositions as a dimension of learning, implying that the student’s attitudes and values are present and active at all times (ACRL, 2015, pg. 2). Because the Framework acknowledges that these attitudes are ongoing, the dispositions set forth are to be born out of, or at the very least influenced by, the knowledge practices. They are not meant to be a set of measurable outcomes but a set of “good ideas” that instructors are trying to grow within students’ mental landscape during instruction.

Instructional and Learning Gaps

These differences in approach leave gaps within student learning. These gaps include the issue of authority, age-appropriate cognitive awareness, creating personal connections with information, information privilege, awareness of the creation process, and how information is used.

Perhaps the largest gap in focus between the documents is the issue of authority, which is largely absent in the Standards. It is briefly mentioned in Standard 1.1.5: learners “evaluate information found in selected sources on the basis of accuracy, validity, appropriateness for needs, importance, and social and cultural context.” There is no information on how students should determine accuracy or validity, and so it implies that sources should be chosen prior to identifying where authority comes from in their chosen research subject. Because the Standards are largely behavior based, this pedagogical approach relies on the instructor. This is not a comment on the professional knowledge of the instructor but on the tradition of placing authority solely in instructors and in academic resources found through library databases. In addition, this leaves very little room for choosing an informally packaged source, such as social media.

The Framework provides an entire frame on authority, discusses how students will define different types of authority, and acknowledges that different disciplines have accepted authorities. It highlights how the social nature of the information ecosystem affects where researchers go to discuss findings and connect with each other. The Framework also invites students to consider their own views, giving authority to them in the research process as well as to those who guide them through it. Teaching concepts of authority could lead students to first identify authority figures prior to beginning research; however, the Framework does not prescribe an order in which research should be approached in this area.

Another gap between these documents is the issue of age-appropriate cognitive processes. Standard 2 states that “learners use skills, resources, and tools to: draw conclusions, make informed decisions, apply knowledge to new situations, and create new knowledge.” This standard is all about drawing conclusions and synthesis, and it is one of the most difficult to address with younger students. Bloom’s Taxonomy, published in 1956 and revised in 2001, is a framework created to classify thinking into six cognitive levels of complexity. This framework begins at a rote level of thinking (remembering) and works its way up through understanding, applying, analyzing, and evaluating, up to the highest level (creating) (Anderson, et al., 2001, p. 31). K-12 pedagogy still relies heavily on Bloom’s Taxonomy, and while most of the Standards focus heavily on the bottom layers of cognition (using tools, retrieving information, following procedures, etc.), Standard 2 primarily focuses on higher levels of cognition. While this provides one of the best parallels between the Standards and the Framework, and is one of the best transitions between high school and college level thinking, this standard can be challenging when working with younger students. Because the Standards are meant to be applied to a 13-year span of students, librarians must be fully aware of child and adolescent development at multiple stages and be able to apply it with full flexibility—something not expected of classroom teachers. Unless a school librarian has a full pedagogical background in K-12, this can be very difficult.

To make personal connections and organize knowledge in useful ways, students must confront their own thoughts and ideas about how they will use their resources. This reinforces the idea that authority does not lie only within information itself but also within those who created it and how they fit into scholarship. This also means that individuals must make decisions about what types of information they are receiving and how to deal with it. The Information Has Value frame calls students to recognize issues of access and their own information privilege, something that is felt long before college (ACRL, 2015, pg. 6). The Common Beliefs of the Standards state that “all children deserve equitable access to books and reading, to information, and to information technology in an environment that is safe and conducive to learning,” but this statement is only presented to the professional and does not directly appear in the standards list. The Framework takes this further, stating that the professional must “acknowledge biases that privilege some sources of authority over others, especially in terms of others’ worldviews, gender, sexual orientation, and cultural orientations” (ACRL, 2015, pg. 4). This is supported in targeted objectives stating that learners “understand how and why some individuals or groups of individuals may be underrepresented or systematically marginalized within the systems that produce and disseminate information,” that students should “develop awareness of the importance of assessing content…with a self-awareness of their own biases,” and that they should “examine their own information privilege” (ACRL, 2015, pg. 6). While these discussions may be happening in K-12 schools, there is no expectation of them, as they are absent from the actual set of standards. This is a large step away from a behaviorist approach and toward critical library pedagogy. In thinking about what voices may or may not be represented and the ability (or lack of ability) of some groups to access information, students can begin to understand that they are only finding information that represents certain ideas. This could encourage them to dig deeper when researching and find alternative points of view. Middle and high school students not only deal with academic literature but must deal with the high influence of social media and need to be able to recognize any manipulation that may be occurring in information they encounter outside the classroom.

For students to understand how information can be manipulative, they must understand the nature of how different resources are created and for what purpose. The Framework specifically addresses information creation. It says that learners must “articulate the traditional and emerging processes of information creation and dissemination in a particular discipline” and “recognize that information may be perceived differently based on the format in which it is packaged” (ACRL, 2015, pg. 5). Students are often told that they cannot use resources on the internet, or that they can only use resources found through the library’s databases, and so rely on a “model of information literacy instruction which universally praises scholarly research and devalues alternative venues of information dissemination” (Seeber, 2015, pg. 162). This is often done for the sake of guiding students only toward information that is accurate and reliable; however, this assumption must be resisted. Library databases (K-12 database packages included) access information that comes from news sources, websites, non-peer reviewed journals, and portions of peer-reviewed journals that are not refereed. The Standards state that students should “make sense of information gathered from diverse sources” (AASL, 2009, pg. 3), but the Framework explicitly points to the information creation process, not where information appears, as a guide for students to determine accuracy. In terms of learning, students are at a distinct advantage if they know, explicitly, that “information is made in different ways, valued for different reasons, and used to achieve different ends” (Seeber, 2015, p. 161). Identifying how a resource was created lets students see bias, holes in arguments, and author agendas more readily than those who rely on prescribed sources. This also means that students can more easily justify using resources they have found on social media if they can determine that the author and the process of creation were sound. It shifts the authority of the information away from its packaging and onto the actual process of creation.

Closely akin to how information is created is how information is used. Most teenagers have social media accounts and are actively sharing information (personal or otherwise) on these platforms. Information Has Value directs students to “understand how the commodification of their personal information and online interactions affects the information they receive and the information they produce or disseminate online,” an issue wholly absent from the Standards. Research has demonstrated how impactful our emotions can be on social media. A 2014 study detailed how Facebook’s control of the emotional nature of a person’s newsfeed impacted their interaction with the media platform and how this control affected the tone of the information that the individual would share (Kramer, Guillory, & Hancock, 2014). This and similar research (Detecting emotional contagion in massive social networks by Coviello et al., 2014, and Measuring emotional contagion in social media by Ferrara & Yang, 2015) shows the large-scale impact that social media can have on people, particularly young individuals. The impact of this type of manipulation may be tempered by a systematic, curricular awareness built through information literacy standards.

Effects on Pedagogy

Skills-focused teaching is the simplest way to create lessons from the Standards, since they provide point-by-point, measurable actions. This style of teaching relies heavily on database demonstrations, point-and-click skills practice, checklist-style website tests, and worksheets. While some activities can be built to push students to think critically, most of the standards point to abilities and dispositions that are visible. Assessments in skills-focused lessons often focus on whether the student was able to find a source for a paper or project that the instructor or librarian deems “reliable,” or whether the student was able to cite the source correctly. The end result is king; anything else is rarely assessed, even the steps in the middle. If those steps are assessed, it is typically as a yes-or-no check that students performed particular actions in the “right” order.

The following is an example of a skills-focused activity that was embedded into a 65-minute workshop I created in the fall of 2014 for a middle school Social Studies class:

Lecture & Activity:
  • Go through the research process using the 7 (altered) steps of research:
    • identify your topic
    • create a keyword list
    • find background info
    • find resources (websites) on your topic
    • evaluate your findings
    • publish your project
    • cite your resources
  • After an explanation of each step, the students will do an activity to respond to, thus completing that step of their project. These include:
    • a keyword building activity
    • locating an encyclopedia article
    • finding websites and applying a checklist-style evaluation
    • recording information about the site and how they will use it, and finally,
    • creating citations for any resources they plan to use.

This workshop was built on a scaffolding method referred to as the “I Do, We Do, You Do” method. The idea is that students see a concept first, then practice it with guidance, and finally do it on their own. I embedded this method into each step with the exception of publishing. This method is useful for all age levels, but especially for a class of 6th grade students who are, developmentally, operating heavily in levels 2 and 3 of Bloom’s Taxonomy (understanding and applying). There are some higher-level elements, but the lesson relies heavily on observable actions and the hands-on skills that students need to practice. As a high school librarian, I struggled with providing enough higher-level activities for my students while still following the behaviorist standards. I often had to discreetly file lesson plans that did not reflect the activities I was actually using in the classroom.

Some high school students receive basic instruction in database use only, or no information literacy instruction at all. According to a 2013 study done by the National Center for Education Statistics, only around 62% of reporting traditional public schools and 16% of reporting charter schools in the United States employed a full-time, paid, state-certified library media specialist (Bitterman, Gray, & Goldring, 2013, pg. 3). We do not know whether these professionals are providing information literacy instruction according to the AASL Standards. This creates a necessity for higher education librarians to teach information literacy “from scratch.” While academic librarians cannot change gaps in the Standards or the fact that many of our students have never met with a librarian in a classroom setting, we can build a bridge between skills-focused instruction and student-centered activities to meet the needs of young adults and adults, particularly in their first year.

The use of a First Year or Foundational Experiences program is an example of how some universities support the transition between high school and college information literacy. These programs typically focus on students who are transitioning from high schools and community colleges and those who are first-generation students in their families. Another method of supporting this transition is through programmatic assessments of basic information skills. These assessments provide insight to librarians and faculty about the nature and level of student information literacy. The Framework calls librarians to a greater responsibility in “identifying core ideas within their own knowledge domain” and “collaborating more extensively with faculty” (ACRL, 2015, pg. 2). Moving away from the traditional point-of-service model for First Year classes and toward a programmatic approach not only increases collaboration with subject faculty, but also ensures better library exposure for students who may not be inclined to walk through the doors, which may result in greater library use through their college years.

Because of the swiftly changing nature of information and how we interact with it, academic librarians have a greater responsibility to teach skills that can be applied outside the institutional walls, particularly with regard to issues of information creation, access, and motives within the publishing process. While it is safe to assume that incoming students know how to perform basic searches using internet search engines, they may not know what to do with that information or be able to distinguish between scholarly and popular materials. The Framework, supported by social constructivist ideas, moves beyond skills-based instruction and requires us to ask students to think critically, to utilize their own experiences, and to use resources that could have been previously prohibited, such as social media.

Including more hands-on, directly applicable activities and incorporating more critical information literacy theory into my lesson plans demanded that I refocus lessons on the “why” instead of the “what.” This effectively shifted my role from “expert” to “guide.” This is supported by andragogy, one of the most well-known adult learning theories. Andragogy, or the methods and principles of adult learning, leans on the principles that adult learners are self-directed and responsible for learning, work best under problem solving and hands-on practice, and seek information that has direct application to their immediate situations (Knowles, 1980, p. 44). The Research Process was a very common request for lesson topics in my university teaching. Below is how I restructured the K-12 Research Process activity to focus on the Framework.

Framework-Focused Research Process Activity
  • Activity: Students are given 2 popular articles and 2 scholarly articles. They are tasked with creating two short lists of common features for each of the categories. Each group has a student come up to the board and record their list (the result will be a large list for each category).
  • Discussion: Librarian calls the class together to discuss the lists. Questions could include:
    • Who writes scholarly articles? Popular articles?
    • What kind of credentials (degrees, jobs, etc.) do they have to have to write these?
    • Who do they write for? Why do they write these?
    • How long does it take for a scholarly article to be written? A popular article?
  • Short video on peer-review process (creation, purpose, etc.)
    • Librarian draws a timeline on the board for the peer-review process
    • Discussion: Let’s talk about the timeline for peer-reviewed articles.
      • What is peer-review? Who are the author’s peers?
      • What are reviewers looking for when they read an article?
      • How long does this process take?
      • How might a peer-reviewed article from one discipline look different from one in another discipline?
    • Add to the timeline by including popular articles, books, and other types of resources
      • How does the length of time each resource takes to create affect how you use it for your paper/project?
  • Demo of main database search tool:
    • Using one of the students’ topics, demonstrate the usage of the library’s discovery tool, its filters, and its citation tool, pointing out the difference in search terms with the database vs. Google. This should only take 5-10 minutes.
  • Student searching:
    • Students are directed to search either the main database tool or the internet (or both) for a source they may want to use.
    • Remind them to think about HOW the information was created, WHO created it, and WHY, when determining whether they will use the source.

This activity takes students through the principles of information creation and forces them to consider how information fits into the cultural landscape of higher education and society, leveraging the highest levels of Bloom’s Taxonomy at almost every step. One of the ending measurements is the same as in my original K-12 lesson (a useful source for their project), but the cognitive demands and understanding of information are vastly different. Instead of being told to simply evaluate information for accuracy, students must consider the context in which the writer’s authority resides. From this, students can then reason why different points of view (academic and non-academic) matter in research. If students do not understand the nature of authority and focus only on the skills of retrieval, they run the risk of devaluing a professional’s knowledge simply because that person does not work in an academic field or because the information is coming from a website. While I do recognize that this lesson is built for college students, it is not beyond the cognitive levels of high school students.

Structurally, the Framework alleviates obstacles in lesson planning in terms of targeting specific objectives. For example, Standard 3.2.3 states that a student should “demonstrate teamwork by working productively with others” (AASL, 2009, pg. 5). While working productively with others is a good social skill, it has no specific placement in information literacy without bringing in at least one other Standard for context. The Framework, focused on principles, reworks the idea of cooperation into students seeing themselves as “contributors to the information marketplace/scholarship rather than only consumers of it” (ACRL, 2015, pg. 8). This can be a teachable objective on its own without having to be paired with another standard or objective. The practices and dispositions within the Framework can be utilized and targeted alone and can be taught within the context of school or outside of it. The Framework “contains ideas that are relevant for anyone interacting with information in a contemporary society” (Seeber, 2015, pg. 159). Moving from a statement of action (behaviorist) to a statement of metacognition (constructivist) moves the standards out of isolation and into a larger context. It allows me to take a frame as a whole or a particular Knowledge Practice and embed it into a course assignment because I am not teaching a discipline—I am teaching ideas that nest within any circumstance, both academically and personally. This results in net gains for our students, empowering them in their research, in their interactions on social networks, and in their encounters with media.

While the Framework does not provide point-by-point measurable activities, it does not run counter to measurable assessments; it can be situated within any library’s mission and goals, and it lends itself to working with inclusive populations. It allows one lesson to be applied to multiple classes regardless of the students who make up that particular class, that particular day. Support levels can be changed, students can “drive” more or less, and activities and tools can be exchanged, all while teaching to the same frame. Because the Framework addresses almost every single area of the Standards (the exception being some of the standards pertaining to personal growth), even students who have never had information literacy instruction during K-12 aren’t “behind” students who have. All students benefit from thinking about why and how they, and others, make information choices.

Discussion

While the observable, measurable skills in the AASL Standards are positive skills to have, instruction based on a behaviorist style (lecture, point-and-click demonstrations, and students’ ability to simply find information) does not properly prepare students for modern university-level instruction. By drawing on social constructivist and critical librarian pedagogy, the Framework for Information Literacy for Higher Education encourages students to self-reflect and to examine how information functions in a greater context beyond an assignment. It pushes librarians to create learner-centered and authentic activities through which finding information becomes a cognitive process, not just a physical one. By being more aware of how information is used socially, politically, and culturally, students are empowered to understand how articles are created, how to evaluate arguments, and how to apply new knowledge to their own scholarly work. Skills like these are vital in our everyday, technology-driven, socially connected world.

Examining these documents and their theories brings to light a number of issues that I simply cannot address in one paper: whether students in grades 9-12 could benefit from a separate, higher-level set of Standards; how the American Association of School Librarians and the Association of College & Research Libraries are failing to work together to create cohesive standards; how much pressure is put on K-12 librarians to have a more thorough knowledge of child and adolescent development than classroom teachers; how high school students should also receive the benefit of being taught in a constructivist manner through the lens of social equality; and so on. While these issues have been brought to the surface through this process, we can and should take immediate steps in our pedagogical practices to help alleviate the strain of the K-12-to-college transition for our students. By understanding how our students have been taught, we can build on that foundation to create lessons that take information literacy further.

How these two documents function together will change soon. AASL is currently in the process of revising the Standards for the 21st Century Learner. The new Standards are forecasted to be launched in the fall of 2017. I am very interested to see the changes and how they might affect our K-12 colleagues and their students and, a few years down the line, academic librarians who work with these same students in higher learning.

Many thanks to my publishing editor, Ian Beilin, and my peer-reviewers, Amy Koester and Kyle Harmon, for working so hard to put out great material. A special thanks as well to Kevin Seeber for all of your advice and guidance, particularly in the beginning stages of this publication journey. I greatly appreciated all the wonderful feedback from all of you!

References

American Association of School Librarians. (2007). Standards for the 21st-century learner. Retrieved from http://www.ala.org/aasl/standards/learning

American Association of School Librarians. (2009). Standards for the 21st-century learner in action. Chicago: ALA.

Anderson, L.W. & Krathwohl, D. R. (Eds.) (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Addison Wesley Longman.

Association of College & Research Libraries (2015). Framework for information literacy for higher education. Retrieved from http://www.ala.org/acrl/standards/ilframework

Bitterman, A., Gray, L., & Goldring, R. (2013). Characteristics of public elementary and secondary school library media centers in the United States: Results from the 2011-12 schools and staffing survey (NCES 2013-315). U.S. Department of Education. Washington, DC: National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubs2013/2013315.pdf

Coviello, L., Sohn, Y., Kramer, A. D. I., Marlow, C., Franceschetti, M., Christakis, N. A., & Fowler, J. H. (2014). Detecting emotional contagion in massive social networks. PLoS One, 9(3), 1-6. https://doi.org/10.1371/journal.pone.0090315

Critlib. (n.d.) About/Join the conversation. Retrieved from http://critlib.org/about/

Elliot, B. (2009). E-pedagogy: Does e-learning require a new approach to teaching and learning? Scottish Qualifications Authority, January 2009. Retrieved from https://www.scribd.com/document/932164/E-Pedagogy

Ferrara, E., & Yang, Z. (2015). Measuring emotional contagion in social media. PLoS One, 10(10), 1-14. https://doi.org/10.1371/journal.pone.0142390

Foasberg, N. (2015). From standards to frameworks for IL: How the ACRL framework addresses critiques of the standards. portal: Libraries and the Academy, 15(4), 699-717. doi:10.1353/pla.2015.0045

Garcia, K. (2015). Keeping up with…critical librarianship. Keeping up with…, June 2015. Retrieved from http://www.ala.org/acrl/publications/keeping_up_with/critlib

Knowles, M. S. (1980). The modern practice of adult education: From pedagogy to andragogy. Chicago: Association Press.

Kramer, A. D. I., Guillory, J. E., & Hancock, J. (2014). Experimental evidence of massive-scale emotional contagion through social networks. PNAS, 111(24), 8788-8790. http://www.pnas.org/content/111/24/8788.full

Salomon, G. (1994). To be or not to be (mindful). Paper presented at the American Educational Research Association Meetings, New Orleans, LA.

Seeber, K. P. (2015). This is really happening: Criticality and discussions of context in ACRL’s framework for information literacy. Communications in Information Literacy, 9(2), 157-163. Retrieved from http://www.comminfolit.org/index.php?journal=cil&page=article&op=view&path%5B%5D=v9i2p157&path%5B%5D=218

Open Knowledge Foundation: Research call: Mapping the impacts of the Global Open Data Index

planet code4lib - Wed, 2017-09-06 07:35

Note: The deadline for proposal submission has been extended until Sunday, 17 September, 21:00 UTC.

The Global Open Data Index (GODI) is a worldwide assessment of open data publication in more than 90 countries. It provides evidence of how well governments perform in open data publication. This call invites interested researchers and organisations to systematically study the effects of the Global Open Data Index on open data publication and the open data ecosystem. The study will identify the different actors engaged around GODI and how the information provided by GODI has helped advance open data policy and publication. It will do so by investigating a sample of three countries with different degrees of open data adoption. The work will be conducted in close collaboration with Open Knowledge International’s (OKI) research department, which will provide guidance, review and assistance throughout the project.

We invite interested parties to send their costed proposal to research@okfn.org. In order to be eligible, the proposal must include research background, a short description of why they are interested in the topic and how they want to research it (300 words maximum), a track record demonstrating knowledge of the topic, as well as a written research sample around open data or related fields. Finally, the proposal must also specify how much time will be committed to the work and at what cost (in GBP or USD). Due to the nature of the funding supporting this work, we unfortunately cannot accept proposals from US-based people or organisations. Please make sure the submission is made before the extended proposal deadline of Sunday 17 September, 21:00 UTC.

Outline

Background

The Global Open Data Index (GODI) is a worldwide assessment of open data publication in more than 90 countries. It provides evidence of how well governments perform in open data publication. This includes mapping accessibility and access controls, findability of data, key data characteristics, as well as open licensing and machine-readability.

At the same time, GODI provides a venue for open data advocates and civil servants to discuss the production of open data. Evidence shows that governance indicators drive change when they embrace dialogue and mutual ownership between those who are assessed and those who assess. This year we wanted to use the launch of GODI to spark that dialogue and provide a venue for the ensuing discussions.

Through this dialogue, governments learn about key datasets and data quality issues, while also receiving targeted feedback to help them improve. Furthermore, every year many interactions happen outside of the GODI process, without involving GODI staff or public discussions; instead, results are discussed within public institutions, or among civic actors and public institutions. Some scattered evidence of GODI’s outcomes is available, yet a systematic understanding of its diverse effects is missing to date.

Scope of research

This research is intended to get a systematic understanding of the effects of the Global Open Data Index on open data publication and the open data ecosystem. It addresses three research questions:

  1. In what ways does the Global Open Data Index process mobilize support for open data in countries with different degrees of open data policy and publication? How does this support manifest itself?
  2. How does the Global Open Data Index influence open data publication in governments both in terms of quantity and quality of data?
  3. How do different elements of the Global Open Data Index help governments and civil society actors to drive the progress described in questions 1 and 2?

GODI’s effects can tentatively be grouped into high-level policy and strategy development as well as strategy implementation and ongoing publication. This research will assess how different actors such as civil servants, high-level government officials, open data advocates and communities engage with different elements of GODI, and how this helps advance open data policy and publication. The research should also, wherever applicable, provide a critical account of GODI’s adverse effects. These can include ‘ceiling effects’, tunnel vision and reactivity, or other effects. The research will assess these effects in three countries. These may include Argentina, Colombia, Ukraine, South Africa, Thailand, or others. It is possible to propose alternative countries if the researcher has strong experience in those or if it would help in gathering data for the research. Proposals should specify which three countries would be assessed. If alternative countries are proposed, they should meet the following criteria:

  1. One country without a national open data policy, one country with a recent open data policy (in effect between 3 months and 2 years), and one country with an established open data policy older than 2 years
  2. A mix of countries with different levels of endorsement of GODI, including countries that have publicly announced an intention to improve their ranking (high importance) and countries where no public commitments to open data improvement are documented
  3. Presence of the country in the past two GODI editions
  4. May include members of the Open Government Partnership and Open Data Charter adopters, as well as non-members

Deliverables

The work will provide a written report of between 5,000 and 7,000 words addressing each of the research questions. The report must include a clearly written methodology section and country sampling approach. The desired format is a narrative report in English. A qualitative, critical assessment of GODI’s effects on open data policy and publication is expected. It needs to describe the actors using GODI, how they interacted with different aspects of GODI, and how this helped to drive change around the first two research questions outlined above. Furthermore, the following deliverables are expected:

  • Interviews with at least four interviewees per country
  • A semi-structured interview guide
  • A draft report by 15 October, structured around country portraits for the three sample countries
  • Weekly catch-ups with the research team at OKI
  • A final report by 1 November

Methods and data sources

The researcher can draw from several sources to start this research, including OKI’s country contacts, Global Open Data Index scores, etc. Suggested methodology approaches include interviews with government officials and GODI contributors, as well as document analysis. Alternative research approaches and data sources shall be discussed with OKI’s research team. The research team will provide assistance in sampling interviewees in the initial phase of the research.

Activities

This work is expected to be conducted in close contact with OKI’s research department. We will arrange a kick-off meeting to discuss your approach and will have weekly calls to discuss activity and progress on the work. Early drafts will be shared with the OKI team, which will provide comments and discuss them with you. In addition, we will have a final reflection call. Remote availability is expected (via email, Skype, Slack, or other channels). The overall research outline and goals will be discussed and agreed upon with the research lead of GODI, who will help in sampling countries and will review project progress.

Decision criteria

We will base our selection of a research party on the following criteria:

  • Evidence of an understanding of open data assessments and indicators, and their influence on policy development and implementation.
  • Track record in the field of open data assessment and measurement.
  • Clarity and feasibility of the methodology you propose to follow.

Due to the nature of the funding supporting this work, we unfortunately cannot accept proposals from US-based people or organisations. Please make sure the submission is made before the extended proposal deadline of Sunday 17 September, 21:00 UTC.

ACRL TechConnect: Working with a Web Design Firm

planet code4lib - Tue, 2017-09-05 15:01

As I’ve mentioned in the previous post, my library is undergoing a major website redesign. As part of that process, we contracted with an outside web design and development firm to help build the theme layer. I’ve done a couple major website overhauls in the course of my career, but never with an outside developer participating so much. In fact, I’ve always handled the coding part of redesigns entirely by myself as I’ve worked at smaller institutions. This post discusses what the process has been like in case other libraries are considering working with a web designer.

An Outline

To start with, our librarians had already been working to identify components of other library websites that we liked. We used Airtable, a more dynamic sort of spreadsheet, to collect our ideas and articulate why we liked certain peer websites, some of which were libraries and some not (mostly museums and design companies). From prior work, we already knew we wanted a few different types of page templates. We organized our ideas around how they fit into these templates, such as a special collections showcase, a home page with a central search box, or a text-heavy policy page.

Once we knew we were going to work with the web development firm, we had a conference call with them to discuss the goals of our website redesign and show the contents of our Airtable. As we’re a small art and design library, our library director was actually the one to create an initial set of mockups to demonstrate our vision. Shortly afterwards, the designer had his own visual mockups for a few of our templates. The mockups included inline comments explaining stylistic choices. One aspect I liked about their mockups was that they were divided into desktop and mobile; there wasn’t just a “blog post” example, but a “blog post on mobile” and “blog post on desktop”. This division showed that the designer was already thinking ahead towards how the site’s theme would function on a variety of devices.

With some templates in hand, we could provide feedback. There was some push and pull: the designer thought some of our initial ideas were unimportant or ran against best practices, while we had strong opinions of our own. The discussion was interesting for me, as someone who is a librarian foremost but empathetic to usability concerns and to following web conventions. It was good to have a designer who didn’t mindlessly follow our every request; when he felt a stylistic choice was counterproductive, he could articulate why, and that changed a few of our ideas. However, on some principles we were insistent. For instance, we wanted to avoid multiple search boxes on a single page, such as a central catalog search plus a site search in the header. I find that users are easily confused when confronted with two search engines and struggle to distinguish the different purposes and domains of each. The designer thought it was a common enough pattern to be familiar to users, but our experiences led us to insist otherwise.

Finally, once we had settled on agreeable mockups, a frontend developer turned them into code with an impressive turnaround; about 90% of the mockups were implemented within a week and a half. We weren’t given something like Drupal or WordPress templates; we received only frontend code (CSS, JavaScript) and some example templates showing how to structure our HTML. It was all in a single git repository complete with fake data, Mustache templates, and instructions for running a local Node.js server to view the examples. I was able to get the frontend repo working easily enough, but it was a bit surprising to me to work with code completely decoupled from its eventual destination. If we had had more funds, I would have liked to have the web design firm go all the way to implementing their theme in our CMS, since I did struggle in a few places when combining the two (more on that later). But, like many libraries, we’re frugal, and it was a luxury to get this kind of design work at all.

The final code took a few months to deliver, mostly due to a single user interface bug we pointed out that the developer struggled to recreate and then fix. I was ready to start working with the frontend code almost exactly a month after our first conversation with the firm’s designer. The total time from that conversation to signing off on the final templates was a little under two months. Given our hurried timeline for rebuilding our entire site over the summer, that quick delivery was a serious boon.

Code Quirks

I’ve a lot of opinions about how code should look and be structured, even if I don’t always follow them myself. So I was a bit apprehensive working with an outside firm; would they deliver something highly functional but structured in an alien way? Luckily, I was pleasantly surprised with how the CSS was delivered.

First of all, the designer didn’t use plain CSS; he used SASS, which Margaret wrote about previously on Tech Connect. SASS adds several nice tools to CSS, from variables to darken and lighten functions for adjusting colors. But perhaps most importantly, it gives you much more control when structuring your stylesheets, using imports, nested selectors, and mixins. Basically, SASS is the antithesis of having one gigantic CSS file with thousands of lines. Instead, the frontend code we were given was about fifty files neatly divided among our different templates and some reusable components. Here’s the directory tree of the SASS files:

components
    about-us
    blog
    collections
    footer
    forms
    header
    home
    misc
    search
    services
fonts
reset
settings
utilities

Other than the uninformative “misc”, these folders all have meaningful names (“about-us” and “collections” refer to styles specific to particular templates we’d asked for) and it never takes me more than a moment to locate the styles I want.

Within the SASS itself, almost all styles (excepting the “reset” portion) hinge on class names. This is a best practice for CSS since it doesn’t couple your styles tightly to markup; whether a particular element is a <div>, <section>, or <article>, it will appear correctly if it bears the right class name. When our new CMS output some HTML in an unexpected manner, I was still able to utilize the designer’s theme by applying the appropriate class names. Even better, the class names are written in BEM “Block-Element-Modifier” form. BEM is a methodology I’d heard of before and read about, but never used. It uses underscores and dashes to show which high-level “block” is being styled, which element inside that block, and what variation or state the element takes on. The introduction to BEM nicely defines what it means by Block-Element-Modifier. Its usage is evident if you look at the styles related to the “see next/previous blog post” pagination at the bottom of our blog template:

.blog-post-pagination {
  border-top: 1px solid black(0.1);

  @include respond($break-medium) {
    margin-top: 40px;
  }
}

.blog-post-pagination__title {
  font-size: 16px;
}

.blog-post-pagination__item {
  @include clearfix();
  flex: 1 0 50%;
}

.blog-post-pagination__item--prev {
  display: none;
}

Here, blog-post-pagination is the block, __title and __item are elements within it, and the --prev modifier affects just the “previous blog post” item element. Even in this small excerpt, other advantages of SASS are evident: the respond mixin and $break-medium variable for writing responsive styles that adapt to differing device screen sizes, the clearfix include, and these related styles all being nested inside the brackets of the parent blog-post-pagination block.

Trouble in Paradise

However, as much as I admire the BEM class names and the structure of the styles given to us, of course I can’t be perfectly happy. As I’ve started building out our site, I’ve run into a few obvious problems. First of all, while all the components and templates we’d asked for are well designed with clearly written code, there’s no generic framework for adding on anything new. I’d hoped, and to be honest simply assumed, that a framework like Bootstrap or Foundation would be used as the basis of our styles, with more specific CSS for our components and templates. Instead, apart from a handful of minor utilities like the clearfix include referenced above, everything that we received is intended only for our existing templates. That’s fine up to a point, but as soon as I went to write a page with an HTML table in it, I noticed there was no styling whatsoever.

Relatedly, since the class names are so focused on distinct blocks, when I want to write something similar but slightly different I end up with a bunch of misleading class names. So, for instance, some of our non-blog pages have templates littered with class names bearing a .blog- prefix. The easiest way for me to build them was to co-opt the blog styles, but now the HTML looks misleading. I suppose if I had more time I could write new styles that simply copy the blog ones under new names, but that also seems unideal in that it is a) a lot more work and b) a source of a lot of redundant code.

Lastly, the way our CMS handles “rich text” fields (think: HTML edited in a WYSIWYG editor, not coded by hand) has caused numerous problems for our theme. The rich text output is always wrapped in a <div class="rich-text">, which made translating some of the HTML templates from the frontend code a bit tricky. The frontend styles also included a “reset” stylesheet which erased all default styles for most HTML tags. That’s fine, and a common approach for most sites, but many of the styles for elements available in the rich text editor ended up being reset. As content authors went about creating lower-level headings and unordered lists, they discovered that they appeared just as plain text.

Reflecting on these issues, I think they boil down primarily to insufficient communication on our part. When we first asked for design work, it was very much centered around the specific templates we wanted to use for a few different sections of our site. I never specifically outlined a need for a generic framework that could encompass new, unanticipated types of content. While there was an offhand mention of Bootstrap early on in our discussions, I didn’t make it explicit that I’d like it, or something similar, to form the backbone of the styles we wanted. I should have also made it clearer that the styles should specifically anticipate working within our CMS and alongside rich text content. Instead, by the time I realized some of these issues, we had already approved much of the frontend work as complete.

Conclusion

For me, as someone who has worked at smaller libraries for the duration of my professional career, working with a web design company was a unique experience. I’m curious: has your library contracted for design or web development work? Was it successful or not? As tech-savvy librarians, we’re often asked to do everything, even if some of the tasks are beyond our skills. Working with professionals was a nice break from that and a learning experience. If I could do anything differently, I’d be more assertive about requirements in our initial talks. Outlining expectations that the styles include a generic framework and anticipate working with our particular CMS would have saved me some time and headaches later on.

LITA: The Lost Art of Creation

planet code4lib - Tue, 2017-09-05 14:29

While technology is helpful, it also contributes to people becoming, well, more robotic. Siri can define the word “banausic,” eliminating the need to pull out a dictionary, while Google Maps can help you navigate to the closest ramen bowl spot, eliminating the need to look at an actual map. This series looks at technology that counteracts this trend: tools that help spark conversation, create 3-D designs, and encourage creativity. This month’s post explores YouTube and, specifically, the video/blog combination known as the “vlog.”

Launched in 2005, YouTube is a free video sharing website where users can easily upload videos and subscribe to channels. Many libraries and library associations have dedicated YouTube channels, and it’s the 3rd most popular media outlet used by digital natives (behind Facebook and Twitter). Some people might dismiss the site as just a repository of silly videos, but its wide reach has sparked careers and even business ventures.

Salman Khan speaking at a TED Conference

Sal Khan, the creator of Khan Academy, got started by posting tutoring videos for his cousins on YouTube. Discussing the appeal of this tech tool at a recent TED Conference, Khan explained that his cousins preferred watching videos over face-to-face tutoring sessions. This is a major benefit of YouTube: people can watch instructional videos at their own pace and revisit them whenever they need a refresher.

Within the YouTube world there is a specific type of video known as the vlog. Just how popular are video blogs? PewDiePie, a Swedish comedian, has over 56 million subscribers, and the teaching vlog AsapScience has over 6 million subscribers. The subject matter isn’t what defines them: vlogs are simply video productions with a host (or hosts) that might incorporate music, animation, or memes. Video blogging has been around for at least 10 years; the launch of YouTube, combined with the increased use of social media and smartphones, created the perfect atmosphere for vlogs to thrive.

Skeptical about vlogs, their popularity, or their ability to be an effective marketing tool? So was I. Most of the YouTube library vlogs I found feature trips to the library or studying at the library as part of a personal video blog, rather than a series published by a library. That being said, some major organizations use vlogs.

The American Library Association (ALA) has a vlog series, and a major reason they started it is that they love watching YouTube videos. ALA also recognizes the potential for connecting with members by answering questions, promoting ALA initiatives, and introducing staff members. Other practical uses can be found on David Lee King’s list of vlogging ideas for libraries.

How can you get started vlogging? As with starting a blog or podcast, the first step is to decide on a focus and an audience. A major factor that separates vlogs from other social media is that the host needs to be comfortable talking in front of the camera to an imaginary audience. If you want to explore vlogging, below are some things to consider:

  1. Equipment – Basic videos can be created on a smartphone, allowing you to vlog from anywhere. For a more polished broadcast you’ll need a camera, lighting, a mic, and a tripod. Amazon’s top-rated vlogging products and reviews are a good place to start.
  2. Software – Simple editing can be done right on YouTube, but more advanced editing requires software. Camtasia, ScreenFlow, Final Cut, and Sony Vegas Studio are a few options.
  3. Extras – YouTube offers a Free Music channel with “uncopyrighted music for commercial use.” Free Music Archive and Bensound also offer “free” music. Another option is to make and record your own music. Software such as Camtasia comes with animations, annotations, media, and other fancy features that can be incorporated into the video.
  4. Publishing/Storage – YouTube is the most popular option, but those averse to YouTube can check out Vimeo, Streamable, Vidme, or Dailymotion.

Are you vlogging? What has the experience been like? Share some of your favorite vlogs.

I am very excited to announce the next series, which will star tech librarians. Have you ever looked at a job title and wondered how the person got there? I virtually interviewed librarians working in academic, public, and special libraries to learn more about their journey. Stay tuned to hear from Digital Services, Library Systems, and Innovation Librarians from all over the U.S.


Library of Congress: The Signal: Using data from historic newspapers

planet code4lib - Tue, 2017-09-05 13:57

This post is derived from a talk David Brunton, current Chief of Repository Development at the Library of Congress, gave to a group of librarians in 2015. 

I am going to make a single point this morning, followed by a quick live demonstration of some interfaces. I have no slides, but I will be visiting the following addresses on the web:

The current Chronicling America API: http://chroniclingamerica.loc.gov/about/api/

The bulk data endpoint: http://chroniclingamerica.loc.gov/batches/

The text only endpoint: http://chroniclingamerica.loc.gov/ocr/

As you can probably tell already, there is a theme to my talk this morning, which is old news. I’ve participated in some projects at the Library of Congress that involve more recent publications, but this one is my favorite. I will add, at this point, that I am not offering you an official position of the Library of Congress, but rather some personal observations about the past ten years of this newspaper website.

For starters, complicated APIs are trouble.

You may be surprised to hear that from a programmer, but it’s true. They’re prone to breakage, misunderstanding, small changes breaking existing code, backward-incompatible changes (or forward-incompatible features), and they inevitably leave out something that researchers will want.

I don’t mean to suggest that nobody has gotten good use out of our complicated APIs, many people have. But over time it has been my unscientific observation that researchers are, in general, subject to at least three constraints that make simplification of APIs a priority:

  • Most researchers are gathering data from multiple sources
  • Most researchers don’t have unlimited access to a group of professional developers
  • Most researchers already possess a tool of choice for modeling or visualization

I’m not going to belabor the point, because I think anyone in this room who is a researcher will probably agree immediately. There is an even more important constraint in the case where researchers are using data as a secondary or corollary source, which is that they may not be able to pay for it, and they may (or may not) be able to agree to any given licensing terms of the data. But I digress.

Multiple sources, no professional developers, and a preferred tool for modeling and visualization.

Interestingly, there is some research about that last point that we may come back to if there is time. So on to the demonstration.

The first URL, the API. This is an extremely rich set of interfaces. As you can tell from the length of this help page (which is far from exhaustive), we have spent a lot of effort creating a set of endpoints that can provide a very rich experience. You can’t blame us, right? We’re programmers, so we made something for programmers to love!

Chronicling America API

Now, lest anyone misconstrue my description of this application programming interface, I want to stress this point: it is a truly wonderful Application Programming Interface. Unfortunately, an Application Programming Interface isn’t exactly what researchers want most of the time. This is not to say that folks haven’t written some lovely applications with this API as a backend, because they have. But any time there is lots of network latency between their servers and ours, or any time our site is (gasp!) slow, it slows down those applications.
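To make this concrete, here is a minimal sketch in Python of what a search against this API can look like. The endpoint and the format=json parameter are described on the API help page linked above; the response field names used here (totalItems, items, and the per-item date, title, and id) are my reading of that documentation, so verify them against the help page before relying on them.

import requests

# Search the page-level OCR text for a keyword, asking for JSON back.
resp = requests.get(
    "http://chroniclingamerica.loc.gov/search/pages/results/",
    params={"andtext": "suffrage", "format": "json", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
results = resp.json()

print(results["totalItems"], "matching pages")
for item in results["items"]:
    # Each item describes one newspaper page that matched the query.
    print(item["date"], item["title"], item["id"])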

Over time, it has been my unscientific observation that when it is an option, it’s generally better for all parties involved to simply have their own copy of the data. This lets them do at least three cool things:

  • Mix the data with data from other sources.
  • Interact with the data without intermediation of custom software.
  • Use their tools of choice for modeling the data.

Sound familiar?

I’ll continue by directing everyone’s attention to the next two endpoints, which seem to be getting an increasingly large share of our use. The first is the place where someone can simply download all our data in bulk.

Chronicling America batch data download

So, the only problem we’ve discovered about this particular endpoint is that researchers would just as soon not pore through everything, which leads me to the next one, where researchers can download the newspaper text only, but still in bulk.

It’s perfectly reasonable to go to these pages and tell some poor undergraduate to click on all the links and download the files, maybe put them on a thumb drive. But we’ve also made a very minimal layer on top of this, which makes them available as a feed. Since I’ve just finished saying how important it is to keep these things simple, I won’t belabor this addition too much, but I will point out that there is support in nearly every platform for the “feed” format.

Chronicling America text download
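Because nearly every platform supports feeds, consuming this layer takes only a few lines. Here is a sketch using the third-party feedparser package; the exact feed URL below is an assumption for illustration, so check the batches and OCR pages above for the canonical feed address.

import feedparser

# Parse the feed that lists bulk OCR bundles as they are released.
feed = feedparser.parse("http://chroniclingamerica.loc.gov/ocr/feed/")

for entry in feed.entries:
    # Each entry points at one downloadable bundle of newspaper text.
    print(entry.title, "->", entry.link)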

The last point I will make is that for a library, in particular, these three points are critical: when was it made, where was it obtained, and has it maintained fixity?

Mark E. Phillips: Metadata Interfaces: Search Dashboard

planet code4lib - Tue, 2017-09-05 13:26

This is the next blog post in a series that discusses some of the metadata interfaces that we have been working on improving over the summer for the UNT Libraries Digital Collections.  You can catch up on those posts about our Item Views, Facet Dashboard, and Element Count Dashboard if you are curious.

In this post I’m going to talk about our Search Dashboard. This dashboard is really the bread and butter of our whole metadata editing application. About 99% of the time, a user who is doing some metadata work will log in and work with this interface to find the records they need to create or edit. The records that they see and can search are only the ones that they have privileges to edit. In this post you will see what I see when I log in to the system: the nearly 1.9 million records that we are currently managing in our systems.

Let’s get started.

Search Dashboard

If you have read the other posts you will probably notice quite a bit of similarity between the interfaces; all of those other interfaces were based off of this search interface. You can divide the dashboard into three primary sections. On the left side there are facets that allow you to refine your view in a number of ways. At the top of the right column is an area where you can search for a term or phrase in a record you are interested in. Finally, under the search box there is a result set of items and various ways to interact with those results.

By default all the records that you have access to are viewable if you haven’t refined your view with a search or a limiting facet.

Edit Interface Search Dashboard

The search section of the dashboard lets you find a specific record or set of records that you are interested in working with.  You can choose to search across all of the fields in the metadata record or just a specific metadata field using the dropdown next to where you enter your search term.  You can search single words, phrases, or unique identifiers for records if you have those.  Once you hit the search button you are on your way.

Search and View Options for Records
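Behind a dashboard like this, the dropdown choice typically becomes a fielded query against a search index. As a purely hypothetical sketch (this post doesn’t name our backend here, and the core and field names below are invented for illustration), a fielded title search sent to a Solr index might look like this:

import requests

# Hypothetical fielded search against a Solr index backing the dashboard.
params = {
    "q": 'title:"Denton County"',  # the dropdown selects which field to search
    "rows": 25,
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/metadata/select", params=params, timeout=30)
resp.raise_for_status()
print(resp.json()["response"]["numFound"], "records match")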

Once you have submitted your search you will get back a set of results. I’ll go over these in more depth in a little bit.

Record Detail

You can sort your results in a variety of ways. By default they are returned in Title order, but you can sort them by the date they were added to the system, the date the original item was created, the date the metadata record was last modified, the ARK identifier, and finally by a completeness metric. You also have the option to change your view from the default list view to the grid view.

Sort Options

Here is a look at the grid view.  It presents a more visually compact view of the records you might be interested in working with.

Grid View

The image below is a detail of a record view. We tried to pack as much useful information into each row as we  could.  We have the title, a thumbnail, several links to either the edit or summary item view on the left part of the row.  Following that we have the system, collection, and partner that the record belongs to. We have the unique ARK identifier for the object, the date that it was added to the UNT Libraries’ Digital Collections, and the date the metadata was last modified.  Finally we have a green check if the item is visible to the public or a red X if the item is hidden from the public.

Record Detail

Facet Section

There are a number of different facets that a user can use to limit the records they are working with to a smaller subset. The list is pretty long, so I’ll first show it to you in a single image and then go over some of the specifics in more detail below.

Facet Options

The first three facets are the system, collection and partner facets. We have three systems that we manage records for with this interface: The Portal to Texas History, the UNT Digital Library, and the Gateway to Oklahoma History.

Each digital item can belong to multiple collections and generally belongs to a single partner organization. If you are interested in just working on the records for the KXAS-NBC 5 News Collection, you can limit your view of records by selecting that value from the Collections facet area.

System, Collections and Partners Facet Options

Next are the Resource Type and Visibility facets. It is often helpful to limit to a specific resource type, like maps, when you are doing your metadata editing so that you don’t see things you aren’t interested in working with. Likewise, for some kinds of metadata editing you want to focus primarily on items that are already viewable to the public and don’t want the hidden records to get in the way. You can do this with the Visibility facet.

Resource Type and Visibility Facet Options

Next we start getting into the new facet types that we added this summer to help identify records that need some metadata uplift.  We have the Date Validity, My Edits, and Location Data facets.

Date Validity is a facet that allows you to identify records that have dates in them that are not valid according to the Extended Date Time Format (EDTF). Two different fields in a record are checked: the date field and the coverage field (which can contain dates). If any of these values aren’t valid EDTF strings, then we mark the whole record as having Invalid Dates. You can use this facet to identify these records and go in and correct those values.
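To give a feel for what that check involves, here is a deliberately simplified sketch. It accepts only the plainest EDTF level-0 date patterns; real EDTF validation, like the check behind this facet, also covers intervals, seasons, and uncertain or approximate dates.

import re

# Simplified stand-in for EDTF validation: accept only YYYY, YYYY-MM,
# or YYYY-MM-DD. Real EDTF is far richer than this pattern.
EDTF_LEVEL0 = re.compile(r"^\d{4}(-\d{2}(-\d{2})?)?$")

def has_invalid_dates(date_values):
    # A record is flagged if any date or coverage value fails the check.
    return any(not EDTF_LEVEL0.match(value) for value in date_values)

print(has_invalid_dates(["1943-05-12"]))  # False: valid EDTF
print(has_invalid_dates(["05/12/1943"]))  # True: flagged as an invalid date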

Next up is a facet for just the records that you have edited in the past. This can be helpful for a number of reasons. I use it from time to time to see if any of the records that I’ve edited have developed any issues, like dates that are no longer valid, since I last edited them. It doesn’t happen often, but the facet can be helpful.

Finally there is a section for Location Data. This set of facets is helpful for identifying records that have, or don’t have, a Place Name, Place Point, or Place Box in the record. It is useful if you are working through a collection trying to add geographic information to the records.

Date Validity, My Edits, and Location Data Facet Options

The final set of facets covers Recently Edited Records and Record Completeness. The first, Recently Edited Records, is pretty straightforward: it is just a listing of how many records have been edited in the past 24h, 48h, 7d, 30d, 180d, and 365d in the system. One note that causes a bit of confusion here is that these are records edited by anyone in the given period of time. The facet is often misunderstood as “your edits” in a given period, which isn’t true. It is still very helpful, but it can get you into some strange results if you think about it the other way.

The last facet is for Record Completeness. We really have two categories: records that have a completeness of 1.0 (Complete Records) and records that are less than 1.0 (Incomplete Records). This metric is calculated when the item is indexed in the system and is based on our notion of a minimally viable record.

Recently Edited Records and Record Completeness Facet Options
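As a rough illustration of how a metric like this can work, here is a hypothetical sketch. The field list and the equal weighting are invented for the example; the actual metric is based on our notion of a minimally viable record and may differ in its details.

# Hypothetical completeness metric: the fraction of "minimally viable"
# fields that a record actually fills in. Field names are invented.
REQUIRED_FIELDS = ("title", "description", "language", "resourceType", "collection")

def completeness(record):
    filled = sum(1 for field in REQUIRED_FIELDS if record.get(field))
    return filled / len(REQUIRED_FIELDS)

record = {"title": "Map of Denton County", "language": "eng"}
print(completeness(record))  # 0.4 -> faceted as an Incomplete Record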

This finishes this post about the Search Dashboard for the UNT Libraries Digital Collections. We have been working to build out this metadata environment for about the last eight years and have slowly refined it to support the metadata creation and editing workflows that seem to work for the widest number of folks here at UNT. There are always improvements we can make, and we have been steadily chipping away at them over time.

There are a few other things that we’ve been working on over the summer that I will post about in the next week or so, so stay tuned for more.

If you have questions or comments about this post,  please let me know via Twitter.

District Dispatch: ALA appoints Alisa Holahan OITP Research Associate

planet code4lib - Tue, 2017-09-05 13:10

Alisa Holahan will serve as a Research Associate in its Office for Information Technology Policy.

Today, the American Library Association (ALA) announced that Alisa Holahan will serve as a Research Associate in its Office for Information Technology Policy (OITP). In that role, Alisa will provide policy research assistance on copyright, telecommunications (including E-rate), and other issues within the OITP portfolio.

Alisa just completed her term as ALA’s Google Policy Fellow for 2017, during which she explored copyright, telecommunications (especially the E-rate program) and other policy topics. Other activities as a fellow ranged from attending briefings on Capitol Hill to meeting with librarians from Kazakhstan. Google pays the summer stipends for the fellows and the respective host organizations determine the fellows’ work agendas.

During her fellowship, Alisa published “Lessons From History: The Copyright Office Belongs in the Library of Congress,” a report that explains how Congress repeatedly has considered the best locus for the U.S. Copyright Office and consistently reaffirmed that the Library of Congress is its most effective and efficient home. She also completed background research on the stakeholders and influencers of the E-rate program and prospective champions we may wish to cultivate.

Alisa Holahan is a second-year master’s candidate at the School of Information at the University of Texas at Austin, where she serves as a Tarlton Fellow in the law library. Previously, she completed her J.D. at the University of Texas Law School where she graduated with honors and served as associate editor of the Texas Law Review. Holahan also completed her undergraduate degree at the University of Texas.

She has interned twice in Washington, D.C., at the U.S. Department of Justice and U.S. Department of Health and Human Services. Holahan is licensed to practice law in Texas.

We look forward to Alisa’s contributions to our work in the coming year.

The post ALA appoints Alisa Holahan OITP Research Associate appeared first on District Dispatch.

FOSS4Lib Recent Releases: YAZ - 5.23.1

planet code4lib - Tue, 2017-09-05 12:19

Last updated September 5, 2017. Created by Peter Murray on September 5, 2017.

Package: YAZ
Release Date: Monday, September 4, 2017
