planet code4lib

Planet Code4Lib - http://planet.code4lib.org

Miedema, John: Pier Gerlofs Donia, “Grutte Pier”: Legendary warrior, video game hero, my ancestor?

Fri, 2014-04-04 12:35

Pier Gerlofs Donia was a sixteenth century warrior in Friesland, best known as Grutte Pier (Big Pier).

A tower of a fellow as strong as an ox, of dark complexion, broad shouldered, with a long black beard and moustache. A natural rough humorist, who through unfortunate circumstances was recast into an awful brute. Out of personal revenge for the bloody injustice that befell him (in 1515) with the killing of kinsfolk and destruction of his property he became a freedom fighter of legendary standing. (Pier Gerlofs Donia).

Grutte Pier just might be an ancestor. He fought for his home in Friesland, the northern Dutch province where my folks came from. I visited Friesland in 2000. It is a pastoral province, with rolling fields and cows and churches, a lot like Prince Edward Island in Canada. It is natural that Pier began his life as a farmer. His home was destroyed and his family killed by the Black Band, a violent military regiment. Pier led a rebellion: “Leaver Dea as Slaef” (rather dead than slave) (Battle of Warns). Pier was legendary for his height and strength, wielding a massive long sword that could take down many enemies with a single stroke.

Braveheart. Gladiator. It is time to tell the story of Grutte Pier. Cross of the Dutchman is an upcoming video game by Triangle. Amused by my remote ancestral connection and sharing an interest in warrior sports, I made a small donation to the project. My Friesian name, Miedema, will appear in the end credits.

Grimmelmann, James: It's All About Class

Thu, 2014-04-03 17:46

Willa Paskin’s Slate review of Mike Judge’s Silicon Valley quotes a line from the show’s opening:

Rarely has a show had to do so little to find so much to mock. The series opens with a group of nerdy techies attending the massive soiree of a newly minted multimillionaire. Kid Rock performs as no one listens, and then the host climbs onstage and shrieks, “We’re making the world a better place … through constructing elegant hierarchies for maximum code reuse and ostensibility.”

That doesn’t ring true. “Ostensibility” is not a programming term. “Extensibility” is, and goes hand-in-glove with “code reuse.” Sure enough, that’s exactly how the line actually read on the series. And sure enough, that’s exactly what they say about Scala:

Object-Oriented Meets Functional:

Have the best of both worlds. Construct elegant class hierarchies for maximum code reuse and extensibility, implement their behavior using higher-order functions. Or anything in-between.

I suppose plagiarism is one way to get the jargon right.

Open Knowledge Foundation: Skillshares and Stories: Upcoming Community Sessions

Thu, 2014-04-03 17:37

We’re excited to share with you a few upcoming Community Sessions from the School of Data, CKAN, Open Knowledge Brazil, and Open Access. As we mentioned earlier this week, we aim to connect you to each other. Join us for the following events!

What is a Community Session? These online events can take a number of forms: a scheduled IRC chat, a community Google Hangout, a technical sprint, or a hackpad editathon. The goal is to connect the community to learn and share their stories and skills.

We held our first Community Session yesterday (see our Community Session notes on the wiki). The remaining April events will be held online via G+ as public Hangouts on Air. The video will be available on the Open Knowledge YouTube channel after each event. Questions are welcome via Twitter and G+.

All of these sessions take place on Wednesdays, 10:30–11:30 am ET / 14:30–15:30 UTC.

Mapping with Ketty and Ali: a School of Data Skillshare (April 9, 2014)

Making a basic map from spreadsheet data: we’ll explore tools like QGIS (a free and open-source geographic information system) and TileMill (a tool for designing beautiful interactive web maps). Our guest trainers are Ketty Adoch and Ali Rebaie.

To join the Mapping with Ketty and Ali Session on April 9, 2014

Q & A with Open Knowledge Brazil Chapter featuring Everton (Tom) Zanella Alvarenga (April 16, 2014)

Around the world, local groups, Chapters, projects, working groups and individuals connect to Open Knowledge. We want to share your stories.

In this Community Session, we will feature Everton (Tom) Zanella Alvarenga, Executive Director.

Open Knowledge Foundation Brazil is a newish Chapter. Tom will share his experiences growing a chapter and community in Brazil. We aim to connect you to community members around the world. We will also open up the conversation to all things Community. Share your best practices!

Join us on April 16, 2014 via G+

Take a CKAN Tour (April 23, 2014)

This week we will give an overview and tour of CKAN – the leading open source open data platform used by the national governments of the US, UK, Brazil, Canada, Australia, France, Germany, Austria and many more. This session will cover why data portals are useful, what they provide and showcase examples and best practices from CKAN’s varied user base! Our special guest is Irina Bolychevsky, Services Director (Open Knowledge Foundation).

Learn and share your CKAN stories on April 23, 2014

(Note: We will share more details about the April 30th Open Access session soon!)

Resources

  • About the Community Sessions programme
  • Community Sessions Schedule
  • Add your Community Session Ideas

ALA Equitable Access to Electronic Content: Jim Neal represents libraries at House Judiciary subcommittee copyright hearing

    Thu, 2014-04-03 16:45

    Yesterday, the U.S. House Judiciary Subcommittee on Courts, Intellectual Property and the Internet held a hearing entitled, “Preservation and Reuse of Copyrighted Works.” The hearing convened a panel of witnesses representing both the content and user communities to discuss a variety of copyright issues, including orphan works, mass digitization and specific provisions of the Copyright Act that concern preservation and deteriorating works. Representing the library community on the panel was Jim Neal, Columbia University librarian and vice president for Information. Neal’s statement discussed fair use in the context of library preservation, the relationship between fair use and the library exceptions language of Section 108 of the Copyright Act, and the issue of orphan works. His statement was endorsed by the Library Copyright Alliance (LCA), which includes ALA, the Association of Research Libraries and the Association of College and Research Libraries. LCA also submitted a statement to the Subcommittee.


    The Importance of Fair Use to Library Preservation Efforts

    In his statement, Neal used examples of some of the preservation efforts currently underway in the Columbia University Library System to illustrate how fair use is essential to helping libraries confront preservation challenges specific to the digital age. He argued that without fair use, libraries would not be able to digitize information stored in antiquated formats or salvage content from now-defunct websites.

    “Digital resources are not immortal,” said Neal. “In fact, they are in formats that are more likely to cease to exist, and must be transferred to new digital formats repeatedly as technology evolves. Libraries charged with this work require robust applications of flexible exceptions such as fair use so that copyright technicalities do not interfere with their preservation mission.”

    The Relationship Between Fair Use and Section 108 of the Copyright Act

    In his written testimony, Neal argued that the specific library exceptions language contained in Section 108 of the Copyright Act provides additional certainty to libraries as they work to preserve their collections:

Library exceptions in Section 108 of the Copyright Act supplement, and do not supplant, the fair use right under Section 107. … Congress enacted Section 108 in 1976 to provide libraries and archives with a set of clear exceptions with regard to the preservation of unpublished works; the reproduction of published works for the purpose of replacing a copy that was damaged, deteriorating, lost, or stolen; and the making of a copy that would become the property of a user. Over the past 38 years, Section 108 has proven essential to the operation of libraries. It has guided two core library functions: preservation and inter-library loans.

    The existing statutory framework, which combines the specific library exceptions in Section 108 with the flexible fair use right, works well for libraries, and does not require amendment.

Neal used the example of the Authors Guild v. HathiTrust case to rebut arguments that the protections provided to libraries under Section 108 represent the totality of copyright exemptions and privileges for which libraries may qualify. He asserted that Judge Baer’s district court decision in favor of the HathiTrust Digital Library, as well as the plain language of Section 108 itself, suggest that just because a specific exemption or privilege is not listed under Section 108 does not mean it cannot be claimed under the doctrine of fair use. Neal devoted an entire section of his written testimony to praising the digitization efforts of HathiTrust and expressing his hope that the Second Circuit would uphold Judge Baer’s decision.

    Neal also argued against the need for additional orphan works legislation. He suggested that recent judicial decisions clarifying the scope of fair use and eliminating the automatic injunction rule, as well as the lack of legal challenges to recent library efforts to engage in mass digitization of orphan works, illustrate that current law is sufficient to address the orphan works issue.

Neal was a passionate and articulate voice for libraries at yesterday’s hearing. When asked whether there was any hope of resolving the most hot-button copyright issues of the day, he expressed hope that libraries and rights holders could engage in a substantive, cordial discussion on fair use, mass digitization, orphan works and other matters moving forward.

    The post Jim Neal represents libraries at House Judiciary subcommittee copyright hearing appeared first on District Dispatch.

    Open Knowledge Foundation: Coding da Vinci – Open GLAM challenge in Germany

    Thu, 2014-04-03 16:18

The following blog post is by Helene Hahn, Open GLAM coordinator at Open Knowledge Germany. It is cross-posted from the Open GLAM blog.

More and more galleries, libraries, archives and museums (GLAMs) are digitizing their collections to make them accessible online and to preserve our heritage for future generations. By January 2014, over 30 million objects had been made available via Europeana, of which over 4.5 million records were contributed by German institutions.

Through the contribution of open data and content, cultural institutions provide tools for the thinkers and doers of today, no matter what sector they’re working in; in this way, cultural heritage brings not just aesthetic beauty, but also wider cultural and economic value beyond initial estimations.

Coding da Vinci, the first German open cultural data hackathon, will take place in Berlin to bring together both cultural heritage institutions and the hacker & designer community to develop ideas and prototypes for the cultural sector and the public. It will be structured as a 10-week challenge running from April 26th until July 6th under the motto “Let them play with your toys!”, coined by Jo Pugh of the UK National Archives. All projects will be presented online for everyone to benefit from, and prizes will be awarded to the best projects at the end of the hackathon.

    The participating GLAMs have contributed a huge range of data for use in the hackathon, including highlights such as urban images (including metadata) of Berlin in the 18th and 19th centuries, scans of shadow boxes containing insects and Jewish address-books from the 1930s in Germany, and much more! In addition, the German Digital Library will provide their API to hackathon participants. We’re also very happy to say that for a limited number of participants, we can offer to cover travel and accommodation expenses – all you have to do is apply now!

All prizes, challenges and datasets will soon be presented online.

    This hackathon is organized by: German Digital Library, Service Centre Digitization Berlin, Open Knowledge Foundation Germany, and Wikimedia Germany.

    ALA Equitable Access to Electronic Content: How libraries are expanding the frontier of digital technology

    Thu, 2014-04-03 15:51


The American Library Association’s (ALA) Program on America’s Libraries for the 21st Century (AL21C) monitors and evaluates technological trends with a view to helping libraries identify ways to better serve their patrons in the digital world. In the interest of supporting AL21C’s mission to scour the technology horizon and share these trends in a library context, I recently attended a panel discussion at the Bipartisan Policy Center in Washington, D.C. on the future of the innovation economy. The panel, which comprised six tech industry leaders, spent a great deal of time talking about the transformative power of “smart” technology. Their discussions highlighted the fact that everywhere we look, some ordinary human tool is being “animated” by digital processes. This trend is important because it means that a growing number of tools we have traditionally used to interact with the world can now also help us make sense of the world.

    As all of us in the library community know, libraries are getting “smarter.” New broadband-enabled video equipment in libraries can virtually transport students to museums and other educational institutions located in other cities, states and countries; new printing technology can help innovators bring their designs to life; and new computer software can provide jobseekers with interactive skills training.

    The longer the digital frontier continues to expand, the more tempted we may feel to embrace the notion that it is our manifest destiny to live in an ever “smarter” world. In reality, however, we can only sustain our current rate of progress if we take steps to ensure that young Americans are being furnished with the skills they need to become the digital innovators of tomorrow.

During this week’s discussion at the Bipartisan Policy Center, Weili Dai, president and co-founder of Marvell Technology Group, suggested that the key to preparing our students to take the reins of the “smart” revolution is to rethink the traditional roles of computer science and math in American education. She called high-level computer code “smart English” and “the language that facilitates our lives,” and advocated for making computer science education universal.

    As policymakers debate the merits of curricular reforms, libraries are creating more opportunities for our patrons to gain coding skills, in addition to other 21st Century digital literacy competencies. Recently, the Denver Public Library’s Community Technology Center participated in the Hour of Code, a nationwide program that offers instruction in JavaScript, Puzzlescript, Arduino and more. The Chattanooga Public Library ran a four-week summer camp last year which offered students an introduction to HTML, Python, CSS and the science of robotics. Children’s and school librarians also are exploring ways to bring coding skills to ever-younger audiences.

    The ALA is excited about the role of libraries in America’s digital future. If, as Dai says, computer code is the language of 21st Century progress, then libraries are already taking steps to ensure America’s continued leadership in the global innovation economy.

    The post How libraries are expanding the frontier of digital technology appeared first on District Dispatch.

    OCLC Dev Network: Join us for the WorldCat Registry API Workshop on April 18

    Thu, 2014-04-03 15:30

    Please join us for the WorldCat Registry API Workshop at 11am ET on Friday, April 18, 2014

    Open Knowledge Foundation: The School of Data Journalism 2014!

    Thu, 2014-04-03 15:07

    We’re really excited to announce this year’s edition of the School of Data Journalism, at the International Journalism Festival in Perugia, 30th April – 4th May.

It’s the third time we’ve run it (how time flies!), together with the European Journalism Centre, and it’s amazing seeing the progress that has been made since we started out. Data has become an increasingly crucial part of any journalist’s toolbox, and its rise is only set to continue. The Data Journalism Handbook, which was born at the first School of Data Journalism in Perugia, has become a go-to reference for all those looking to work with data in the news, a fantastic testament to the strength of the data journalism community.

    As Antoine Laurent, Innovation Senior Project Manager at the EJC, said:

    “This is really a must-attend event for anyone with an interest in data journalism. The previous years’ events have each proven to be watershed moments in the development of data journalism. The data revolution is making itself felt across the profession, offering new ways to tell stories and speak truth to power. Be part of the change.”

    Here’s the press release about this year’s event – share it with anyone you think might be interested – and book your place now!

    PRESS RELEASE FOR IMMEDIATE RELEASE

    April 3rd, 2014

    Europe’s Biggest Data Journalism Event Announced: the School of Data Journalism

The European Journalism Centre, Open Knowledge and the International Journalism Festival are pleased to announce the 3rd edition of Europe’s biggest data journalism event, the School of Data Journalism. The 2014 edition takes place in Perugia, Italy, from 30th April to 4th May as part of the International Journalism Festival.

    A team of about 25 expert panelists and instructors from New York Times, The Daily Mirror, Twitter, Ask Media, Knight-Mozilla and others will lead participants in a mix of discussions and hands-on sessions focusing on everything from cross-border data-driven investigative journalism, to emergency reporting and using spreadsheets, social media data, data visualisation and mapping techniques for journalism.

Entry to the School of Data Journalism panels and workshops is free. Last year’s edition featured a stellar team of panelists and instructors, attracted hundreds of journalists and was fully booked within a few days. The year before saw the launch of the seminal Data Journalism Handbook, which remains the go-to reference for practitioners in the field.

    Antoine Laurent, Innovation Senior Project Manager at the EJC said:

    “This is really a must-attend event for anyone with an interest in data journalism. The previous years’ events have each proven to be watershed moments in the development of data journalism. The data revolution is making itself felt across the profession, offering new ways to tell stories and speak truth to power. Be part of the change.”

    Guido Romeo, Data and Business Editor at Wired Italy, said:

    “I teach in several journalism schools in Italy. You won’t get this sort of exposure to such teachers and tools in any journalism school in Italy. They bring in the most avant garde people, and have a keen eye on what’s innovative and new. It has definitely helped me understand what others around the world in big newsrooms are doing, and, more importantly, how they are doing it.”

The full description and the (free) registration for the sessions can be found at http://datajournalismschool.net. You can also find all the details on the International Journalism Festival website: http://www.journalismfestival.com/programme/2014

    ENDS

Contacts: Antoine Laurent, Innovation Senior Project Manager, European Journalism Centre: laurent@ejc.net; Milena Marin, School of Data Programme Manager, Open Knowledge Foundation: milena.marin@okfn.org

    Notes for editors

    Website: http://datajournalismschool.net Hashtag: #DDJSCHOOL

    The School of Data Journalism is part of the European Journalism Centre’s Data Driven Journalism initiative, which aims to enable more journalists, editors, news developers and designers to make better use of data and incorporate it further into their work. Started in 2010, the initiative also runs the website DataDrivenJournalism.net as well as the Doing Journalism with Data MOOC, and produced the acclaimed Data Journalism Handbook.

    About the International Journalism Festival (www.journalismfestival.com) The International Journalism Festival is the largest media event in Europe. It is held every April in Perugia, Italy. The festival is free entry for all attendees for all sessions. It is an open invitation to listen to and network with the best of world journalism. The leitmotiv is one of informality and accessibility, designed to appeal to journalists, aspiring journalists and those interested in the role of the media in society. Simultaneous translation into English and Italian is provided.

    About Open Knowledge (www.okfn.org) Open Knowledge, founded in 2004, is a worldwide network of people who are passionate about openness, using advocacy, technology and training to unlock information and turn it into insight and change. Our aim is to give everyone the power to use information and insight for good. Visit okfn.org to learn more about the Foundation and its major projects including SchoolOfData.org and OpenSpending.org.

About the European Journalism Centre (www.ejc.net) The European Journalism Centre is an independent, international, non-profit foundation dedicated to maintaining the highest standards in journalism in particular and the media in general. Founded in 1992 in Maastricht, the Netherlands, the EJC closely follows emerging trends in journalism and watchdogs the interplay between media economy and media culture. It also hosts more than 1,000 journalists each year in seminars and briefings on European and international affairs.

    Morgan, Eric Lease: Digital humanities and libraries

    Thu, 2014-04-03 15:02

    This posting outlines a current trend in some academic libraries, specifically, the inclusion of digital humanities into their service offerings. It provides the briefest of introductions to the digital humanities, and then describes how one branch of the digital humanities — text mining — is being put into practice here in the Hesburgh Libraries’ Center For Digital Scholarship at the University of Notre Dame.

(This posting and its companion one-page handout were written for the Information Organization Research Group, School of Information Studies at the University of Wisconsin Milwaukee, in preparation for a presentation dated April 10, 2014.)

    Digital humanities


For all intents and purposes, the digital humanities is a newer rather than older scholarly endeavor. A priest named Father Busa is considered the “Father of the Digital Humanities”; in 1965 he worked with IBM to evaluate the writings of Thomas Aquinas. With the advent of the Internet, ubiquitous desktop computing, an increased volume of digitized content, and sophisticated markup languages like TEI (the Text Encoding Initiative), the processes of digital humanities work have moved away from a fad towards a trend. While digital humanities work is sometimes called a discipline, this author sees it as more akin to a method. It is a process of doing “distant reading” to evaluate human expression. (The phrase “distant reading” is attributed to Franco Moretti, who coined it in a book entitled Graphs, Maps, Trees: Abstract Models for a Literary History. Distant reading is complementary to “close reading”, and is used to denote the idea of observing many documents simultaneously.) The digital humanities community has grown significantly in the past ten or fifteen years, complete with international academic conferences, graduate school programs, and scholarly publications.

Digital humanities work is a practice where digitized content of the humanist is quantitatively analyzed as if it were the content studied by a scientist. This sort of analysis can be done against any sort of human expression: written and spoken words, music, images, dance, sculpture, etc. Invariably, the process begins with counting and tabulating. This leads to measurement, which in turn provides opportunities for comparison. From here patterns can be observed and anomalies perceived. Finally, predictions, theses, and judgements can be articulated. Digital humanities work does not replace the more traditional ways of experiencing expressions of the human condition. Instead it supplements the experience.

This author often compares the methods of the digital humanist to the reading of a thermometer. Suppose you observe an outdoor thermometer and it reads 32° (Fahrenheit). This reading, in and of itself, carries little meaning. It is only a measurement. In order to make sense of the reading it is important to put it into context. What is the weather outside? What time of year is it? What time of day is it? How does the reading compare to other readings? If you live in the Northern Hemisphere and the month is July, then the reading is probably an anomaly. On the other hand, if the month is January, then the reading is perfectly normal and not out of the ordinary. The processes of the digital humanist make it possible to make many measurements from a very large body of materials in order to evaluate things like texts, sounds, images, etc. It makes it possible to evaluate the totality of Victorian literature, the use of color in paintings over time, or the rhythmic similarities & differences between various forms of music.

    Digital humanities centers in libraries

As the more traditional services of academic libraries become more accessible via the Internet, libraries have found the need to necessarily evolve. One manifestation of this evolution is the establishment of digital humanities centers. Probably one of the oldest of these centers is located at the University of Virginia, but they now exist in many libraries across the country. These centers provide a myriad of services including combinations of digitization, markup, website creation, textual analysis, speaker series, etc. Sometimes these centers are akin to computing labs. Sometimes they are more like small but campus-wide departments staffed with scholars, researchers, and graduate students.

The Hesburgh Libraries’ Center For Digital Scholarship at the University of Notre Dame was recently established in this vein. The Center supports services around geographic information systems (GIS), data management, statistical analysis of data, and text mining. It is located in a 5,000 square foot space on the Libraries’ first floor and includes a myriad of computers, scanners, printers, a 3D printer, and collaborative work spaces. Below is an annotated list of text mining projects the author has worked on in the Center. It is intended to give the reader a flavor of the types of work done in the Hesburgh Libraries:

    • Great Books – This was almost a tongue-in-cheek investigation to calculate which book was the “greatest” from a set of books called the Great Books of the Western World. The editors of the set defined a great book as one which discussed any one of a number of great ideas both deeply and broadly. These ideas were tabulated and compared across the corpus and then sorted by the resulting calculation. Aristotle’s Politics was determined to be the greatest book and Shakespeare was determined to have written nine of the top ten greatest books when it comes to the idea of love.
    • HathiTrust Research Center – The HathiTrust Research Center is a branch of the HathiTrust. The Center supports a number of algorithms used to do analysis against reader-defined worksets. The Center For Digital Scholarship facilitates workshops on the use of the HathiTrust Research Center as well as a small set of tools for programmatically searching and retrieving items from the HathiTrust.
• JSTOR Tool – Data For Research (DFR) is a freely available alternative interface to the bibliographic index called JSTOR. DFR enables the reader to search the entirety of JSTOR through faceted querying. Search results are tabulated, enabling the reader to create charts and graphs illustrating the results. Search results can be downloaded for more detailed investigations. JSTOR Tool is a Web-based application allowing the reader to summarize and do distant reading against these downloaded results.
• PDF To Text – Text mining almost always requires the content of its investigation to be in the form of plain text, but much of the content used by people is in PDF. PDF To Text is a Web-based tool which extracts the plain text from PDF files and provides a number of services against the result (readability scores, ngram extraction, concordancing, and rudimentary parts-of-speech analysis).
    • Perceptions of China – This project is in the earliest stages. Prior to visiting China students have identified photographs and written short paragraphs describing, in their minds, what they think of China. After visiting China the process is repeated. The faculty member leading the students on their trips to China wants to look for patterns of perception in the paragraphs.
• Poverty Tourism – A university senior believes they have identified a trend: the desire to tour poverty-stricken places. They identified as many as forty websites advertising “Come visit our slum”. Working with the Center they programmatically mirrored the content of the remote websites. They programmatically removed all the HTML tags from the mirrors. They then used Voyant Tools as well as various ngram tabulation tools to do distant reading against the corpus. Their investigations demonstrated the preponderant use of the word “you”, and they posit this is because the authors of the websites are trying to get readers to imagine being in a slum.
• State Trials – In collaboration with a number of other people, transcripts of the State Trials dating between 1650 and 1700 were analyzed. Digital versions of the Trials were obtained, and a number of descriptive analyses were done. The content was indexed and a timeline was created from search results. Ngram extraction was done as well as parts-of-speech analysis. Various types of similarity measures were done based on named entities and the over-all frequency of words (vectors). A stop word list was created based on additional frequency tabulations. Much of this analysis was visualized using word clouds, line charts, and histograms. This project is an excellent example of how much of digital humanities work is collaborative and requires the skills of many different types of people.
• Tiny Text Mining Tools – Text mining is rooted in the counting and tabulation of words. Computers are very good at counting and tabulating. To that end a set of tiny text mining tools has been created enabling the Center to perform quick & dirty analysis against one or more items in a corpus. Written in Perl, the tools implement a well-respected relevancy ranking algorithm (term frequency/inverse document frequency, or TFIDF) to support searching and classification, a cosine similarity measure for clustering and “finding more items like this one”, a concordancing (keyword in context) application, and an ngram (phrase) extractor. (A small illustrative sketch of TFIDF and cosine similarity appears after this list.)
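
To make the counting and weighting behind these tools a little more concrete, here is a minimal sketch of TFIDF weighting and a cosine similarity measure written in Perl, the language mentioned above. It is only an illustration with an invented three-document corpus, not the Center's actual code; in real use the texts would be read from plain text files.

```perl
#!/usr/bin/env perl

# A minimal, hypothetical sketch of TFIDF weighting and cosine similarity.
# It illustrates the ideas named above; it is not the Center's actual code,
# and the tiny three-document "corpus" below is invented.

use strict;
use warnings;

# split a string into lowercase word tokens
sub tokenize {
    my ($text) = @_;
    return map { lc } $text =~ /[A-Za-z']+/g;
}

# given a hash of (document id => text), return a hash of
# (document id => { term => TFIDF weight })
sub tfidf_vectors {
    my (%corpus) = @_;
    my ( %tf, %df );
    for my $id ( keys %corpus ) {
        $tf{$id}{$_}++ for tokenize( $corpus{$id} );   # term frequencies
        $df{$_}++ for keys %{ $tf{$id} };              # document frequencies
    }
    my $n = scalar keys %corpus;
    my %vectors;
    for my $id ( keys %tf ) {
        for my $term ( keys %{ $tf{$id} } ) {
            $vectors{$id}{$term} = $tf{$id}{$term} * log( $n / $df{$term} );
        }
    }
    return %vectors;
}

# cosine similarity between two sparse term vectors (hash references)
sub cosine {
    my ( $x, $y ) = @_;
    my ( $dot, $nx, $ny ) = ( 0, 0, 0 );
    $dot += $x->{$_} * ( exists $y->{$_} ? $y->{$_} : 0 ) for keys %$x;
    $nx  += $_**2 for values %$x;
    $ny  += $_**2 for values %$y;
    return 0 unless $nx && $ny;
    return $dot / ( sqrt($nx) * sqrt($ny) );
}

# a toy corpus; real use would read whole documents from disk
my %corpus = (
    politics => 'man is by nature a political animal and the state is prior to the individual',
    ethics   => 'moral virtue is a mean between two vices one of excess and one of deficiency',
    physics  => 'nature is a principle of motion and change and motion is the subject of our inquiry',
);

my %vectors = tfidf_vectors(%corpus);
printf "politics ~ ethics  : %.3f\n", cosine( $vectors{politics}, $vectors{ethics} );
printf "politics ~ physics : %.3f\n", cosine( $vectors{politics}, $vectors{physics} );
```

The same weights can drive both relevancy-ranked searching (score each document by the summed weights of the query terms) and the “find more items like this one” function (rank documents by their cosine similarity to a seed document).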
    Summary


Text mining, and digital humanities work in general, is simply the application of computing techniques to the content of human expression. Their use is similar to Galileo’s use of the magnifying glass. Instead of turning it down to count the number of fibers in a cloth (or to write an email message), it is being turned up to gaze at the stars (or to analyze the human condition). What he finds there is not so much truth as new ways to observe. The same is true of text mining and the digital humanities. They are additional ways to “see”.

    Links

    Here is a short list of links for further reading:

    • ACRL Digital Humanities Interest Group – This is a mailing list whose content includes mostly announcements of interest to librarians doing digital humanities work.
• asking for it – Written by Bethany Nowviskie, this is a thorough response to the OCLC report, below.
• dh+lib – A website amalgamating things of interest to digital humanities librarianship (job postings, conference announcements, blog postings, newly established projects, etc.)
    • Digital Humanities and the Library: A Bibliography – Written by Miriam Posner, this is a nice list of print and digital readings on the topic of digital humanities work in libraries.
    • Does Every Research Library Need a Digital Humanities Center? – A recently published, OCLC-sponsored report intended for library directors who are considering the creation of a digital humanities center.
• THATCamp – While not necessarily library-related, THATCamp is an organization and process for facilitating informal digital humanities workshops, usually in academic settings.

    Morgan, Eric Lease: Digital humanities and libraries

    Thu, 2014-04-03 15:02

    This posting outlines a current trend in some academic libraries, specifically, the inclusion of digital humanities into their service offerings. It provides the briefest of introductions to the digital humanities, and then describes how one branch of the digital humanities — text mining — is being put into practice here in the Hesburgh Libraries’ Center For Digital Scholarship at the University of Notre Dame.

    (This posting and its companion one-page handout was written for the Information Organization Research Group, School of Information Studies at the University of Wisconsin Milwaukee, in preparation for a presentation dated April 10, 2014.)

    Digital humanities


    For all intents and purposes, the digital humanities is a newer rather than older scholarly endeavor. A priest named Father Busa is considered the “Father of the Digital Humanities” when, in 1965, he worked with IBM to evaluate the writings of Thomas Aquinas. With the advent of the Internet, ubiquitous desktop computing, an increased volume of digitized content, and sophisticated markup languages like TEI (the Text Encoding Initiative), the processes of digital humanities work has moved away from a fad towards a trend. While digital humanities work is sometimes called a discipline this author sees it more akin to a method. It is a process of doing “distant reading” to evaluate human expression. (The phrase “distant reading” is attributed to Franco Moretti who coined it in a book entitles Graphs, Maps, Trees: Abstract Models for a Literary History. Distant reading is complementary to “close reading”, and is used to denote the idea of observing many documents simultaneously.) The digital humanities community has grown significantly in the past ten or fifteen years complete with international academic conferences, graduate school programs, and scholarly publications.

    Digital humanities work is a practice where digitized content of the humanist is quantitatively analyzed as if it were the content studied by a scientist. This sort of analysis can be done against any sort of human expression: written and spoken words, music, images, dance, sculpture, etc. Invariably, the process begins with counting and tabulating. This leads to measurement, which in turn provides opportunities for comparison. From here patterns can be observed and anomalies perceived. Finally, predictions, thesis, and judgements can be articulated. Digital humanities work does not replace the more traditional ways of experiencing expressions of the human condition. Instead it supplements the experience.

    This author often compares the methods of the digital humanist to the reading of a thermometer. Suppose you observe an outdoor thermometer and it reads 32° (Fahrenheit). This reading, in and of itself, carries little meaning. It is only a measurement. In order to make sense of the reading it is important to put it into context. What is the weather outside? What time of year is it? What time of day is it? How does the reading compare to other readings? If you live in the Northern Hemisphere and the month is July, then the reading is probably an anomaly. On the other hand, if the month is January, then the reading is perfectly normal and not out of the ordinary. The processes of the digital humanist make it possible to take many measurements from a very large body of materials in order to evaluate things like texts, sounds, images, etc. They make it possible to evaluate the totality of Victorian literature, the use of color in paintings over time, or the rhythmic similarities & differences between various forms of music.
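
    As a small illustration of the counting, tabulating, and comparing described above, here is a minimal sketch (in Python, chosen simply for convenience and not necessarily what any particular center uses) that counts how often a word appears in each text of a tiny corpus and then compares each text's reading against the corpus average, the textual equivalent of asking whether 32° is normal for the time of year. The texts and the word of interest are made-up placeholders.

        # Count word frequencies in each text, then put one measurement into
        # context by comparing it against the rest of the corpus.
        # The corpus and the word of interest are illustrative placeholders.
        import re
        from collections import Counter

        def tokenize(text):
            """Lowercase a text and split it into word tokens."""
            return re.findall(r"[a-z']+", text.lower())

        def per_thousand(tokens, word):
            """Return how often a word occurs per 1,000 tokens."""
            counts = Counter(tokens)
            return 1000.0 * counts[word] / len(tokens) if tokens else 0.0

        corpus = {
            "text-a": "the whale and the sea and the whale again",
            "text-b": "a quiet pastoral scene with cows and churches and fields",
            "text-c": "the sea and the ship and the long voyage home by sea",
        }

        word = "sea"
        readings = {name: per_thousand(tokenize(text), word)
                    for name, text in corpus.items()}

        # The comparison step: is this text's reading an anomaly, or is it normal?
        baseline = sum(readings.values()) / len(readings)
        for name, reading in sorted(readings.items(), key=lambda x: -x[1]):
            note = "above the corpus average" if reading > baseline else "at or below it"
            print(f"{name}: {reading:.1f} occurrences per 1,000 tokens ({note})")

    The same pattern (count, normalize, compare) scales from three toy sentences to the totality of Victorian literature; only the corpus and the counting machinery change.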

    Digital humanities centers in libraries

    As the more traditional services of academic libraries become increasingly accessible via the Internet, libraries have found it necessary to evolve. One manifestation of this evolution is the establishment of digital humanities centers. Probably one of the oldest of these centers is located at the University of Virginia, but they now exist in many libraries across the country. These centers provide a myriad of services, including combinations of digitization, markup, website creation, textual analysis, speaker series, etc. Sometimes these centers are akin to computing labs. Sometimes they are more like small but campus-wide departments staffed with scholars, researchers, and graduate students.

    The Hesburgh Libraries’ Center For Digital Scholarship at the University of Notre Dame was recently established in this vein. The Center supports services around geographic information systems (GIS), data management, statistical analysis of data, and text mining. It is located in a 5,000 square foot space on the Libraries’ first floor and includes a myriad of computers, scanners, printers, a 3D printer, and collaborative work spaces. Below is an annotated list of text mining projects the author has worked on through the Center. It is intended to give the reader a flavor of the types of work done in the Hesburgh Libraries:

    • Great Books – This was almost a tongue-in-cheek investigation to calculate which book was the “greatest” from a set of books called the Great Books of the Western World. The editors of the set defined a great book as one which discussed any one of a number of great ideas both deeply and broadly. Occurrences of these ideas were tabulated and compared across the corpus, and the books were then sorted by the resulting scores. Aristotle’s Politics was determined to be the greatest book, and Shakespeare to have written nine of the top ten greatest books when it comes to the idea of love.
    • HathiTrust Research Center – The HathiTrust Research Center is a branch of the HathiTrust. The Center supports a number of algorithms used to do analysis against reader-defined worksets. The Center For Digital Scholarship facilitates workshops on the use of the HathiTrust Research Center as well as a small set of tools for programmatically searching and retrieving items from the HathiTrust.
    • JSTOR Tool – Data For Research (DFR) is a freely available alternative interface to the bibliographic index called JSTOR. DFR enables the reader to search the entirety of JSTOR through faceted querying. Search results are tabulated, enabling the reader to create charts and graphs illustrating the results. Search results can also be downloaded for more detailed investigation. JSTOR Tool is a Web-based application allowing the reader to summarize and do distant reading against these downloaded results.
    • PDF To Text – Text mining almost always requires the content under investigation to be in the form of plain text, but much of the content people use is in PDF. PDF To Text is a Web-based tool which extracts the plain text from PDF files and provides a number of services against the result (readability scores, ngram extraction, concordancing, and rudimentary parts-of-speech analysis).
    • Perceptions of China – This project is in the earliest stages. Prior to visiting China, students identify photographs and write short paragraphs describing what they think China is like. After visiting China, the process is repeated. The faculty member leading the students on their trips wants to look for patterns of perception in the paragraphs.
    • Poverty Tourism – A university senior believes they have identified a trend: the desire to tour poverty-stricken places. They identified as many as forty websites advertising “Come visit our slum”. Working with the Center, they programmatically mirrored the content of the remote websites and then programmatically removed all the HTML tags from the mirrors. They then used Voyant Tools as well as various ngram tabulation tools to do distant reading against the corpus. Their investigations demonstrated the preponderant use of the word “you”, and they posit this is because the authors of the websites are trying to get readers to imagine being in a slum.
    • State Trials – In collaboration with a number of other people, transcripts of the State Trials dating between 1650 and 1700 were analyzed. Digital versions of the Trials were obtained, and a number of descriptive analyses were done. The content was indexed and a timeline was created from search results. Ngram extraction was done as well as parts-of-speech analysis. Various types of similarity measures were computed based on named entities and the over-all frequency of words (vectors). A stop word list was created based on additional frequency tabulations. Much of this analysis was visualized using word clouds, line charts, and histograms. This project is an excellent example of how much of digital humanities work is collaborative and requires the skills of many different types of people.
    • Tiny Text Mining Tools – Text mining is rooted in the counting and tabulation of words. Computers are very good at counting and tabulating. To that end, a set of tiny text mining tools has been created, enabling the Center to perform quick & dirty analysis against one or more items in a corpus. Written in Perl, the tools implement a well-respected relevancy ranking algorithm (term-frequency inverse document frequency, or TFIDF) to support searching and classification, a cosine similarity measure for clustering and “finding more items like this one”, a concordancing (keyword in context) application, and an ngram (phrase) extractor. (A minimal sketch of the TFIDF and cosine similarity calculations appears after this list.)
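
    For readers curious about the calculations behind such tools, below is a minimal sketch of TFIDF weighting and cosine similarity. It is written in Python for illustration rather than the Perl of the Center's actual tools, and the toy corpus, function names, and smoothed IDF formula are assumptions made for this sketch, not the Center's code.

        # A sketch of two core text mining calculations: TF-IDF weighting
        # (used for relevancy ranking and classification) and cosine similarity
        # (used for clustering and "find more items like this one").
        # Python is used here for illustration; the Center's tools are in Perl.
        import math
        import re
        from collections import Counter

        def tokenize(text):
            """Lowercase a text and split it into word tokens."""
            return re.findall(r"[a-z']+", text.lower())

        def tfidf_vector(tokens, all_documents):
            """Weight each word by term frequency times inverse document frequency."""
            n_docs = len(all_documents)
            vector = {}
            for word, count in Counter(tokens).items():
                tf = count / len(tokens)
                df = sum(1 for doc in all_documents if word in doc)
                idf = math.log((1 + n_docs) / (1 + df)) + 1  # a smoothed IDF
                vector[word] = tf * idf
            return vector

        def cosine_similarity(a, b):
            """Cosine of the angle between two sparse word-weight vectors."""
            dot = sum(a[w] * b[w] for w in set(a) & set(b))
            norm_a = math.sqrt(sum(v * v for v in a.values()))
            norm_b = math.sqrt(sum(v * v for v in b.values()))
            return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

        # A toy corpus; any set of plain text documents would do.
        documents = {
            "politics": "the city and the citizen and the good life of the city",
            "love": "love and the lover and the beloved and love requited",
            "voyage": "the ship and the sea and the long voyage home",
        }

        tokenized = {name: tokenize(text) for name, text in documents.items()}
        vectors = {name: tfidf_vector(tokens, list(tokenized.values()))
                   for name, tokens in tokenized.items()}

        # "Find more items like this one": rank the other documents by similarity.
        query = "love"
        for name in documents:
            if name != query:
                score = cosine_similarity(vectors[query], vectors[name])
                print(f"{query} vs {name}: {score:.3f}")

    A concordance (keyword in context) and an ngram extractor are even simpler: the former is a windowed scan over the token list, the latter a count of consecutive token tuples.
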
    Summary


    Text mining, and digital humanities work in general, is simply the application of computing techniques to the content of human expression. Their use is similar to Galileo’s use of the lens. Instead of turning it down to count the number of fibers in a cloth (or to write an email message), it is being turned up to gaze at the stars (or to analyze the human condition). What he finds there is not so much truth as new ways to observe. The same is true of text mining and the digital humanities. They are additional ways to “see”.

    Links

    Here is a short list of links for further reading:

    • ACRL Digital Humanities Interest Group – This is a mailing list whose content includes mostly announcements of interest to librarians doing digital humanities work.
    • asking for it – Written by Bethany Nowviskie, this is a thorough response to the OCLC report, below.
    • dh+lib – A website amalgamating things of interest to digital humanities librarianship (job postings, conference announcements, blog postings, newly established projects, etc.)
    • Digital Humanities and the Library: A Bibliography – Written by Miriam Posner, this is a nice list of print and digital readings on the topic of digital humanities work in libraries.
    • Does Every Research Library Need a Digital Humanities Center? – A recently published, OCLC-sponsored report intended for library directors who are considering the creation of a digital humanities center.
    • THATCamp – While not necessarily library-related, THATCamp is an organization and a process for facilitating informal digital humanities workshops, usually in academic settings.

    Grimmelmann, James: A DVR in the Cloud Is Still a DVR, and Still Legal

    Thu, 2014-04-03 03:39

    Today, David Post and I filed an amicus brief for ourselves and thirty-four other law professors, arguing that Aereo should win its Supreme Court case. Its service, which lets users record live TV and stream the recordings back to themselves, is functionally identical to a VCR, and it has been settled law for three decades that consumers have a fair use right to use VCRs.

    The broadcasters suing Aereo have tried to portray it as being more like a cable network; they argue that it is in the business of retransmitting live TV. But if Aereo retransmits anything, then so do you every time you hit play on your Streambox or your uploaded Dropbox videos. Streambox and Dropbox aren’t cable networks, and neither is Aereo. The broadcasters’ entire theory—that Aereo directly infringes the public performance right—is a gerrymander, an attempt to hide the ball and obscure the fact that it helps consumers record live TV for their own personal consumption.

    My long-time readers may remember that I’m not personally a fan of Aereo. But when the Supreme Court took the case, I stepped forward to write an amicus because the issues are much larger than any one company. Others have emphasized the danger the broadcasters’ theory poses to consumer products and cloud computing. Our brief notes the danger the case poses to the integrity and coherence of copyright law itself: the broadcasters’ theory mixes up direct and secondary infringement, confuses the reproduction and public performance rights, and disregards consumers’ fair use rights. It is a “dangerous shortcut” through copyright law.

    I am particularly grateful to my co-author, David Post, for his immense effort in pulling the brief together. (Of the two of us, only he is admitted to the Supreme Court bar, so it bears his name as counsel of record.) I am at once proud of and humbled by the many distinguished copyright experts who signed on. And I hope the brief does some good in helping the Supreme Court shape healthy copyright law for our digital future.

    ALA Equitable Access to Electronic Content: 404 Day: Stopping excessive internet filtration

    Wed, 2014-04-02 21:58

    Every day, libraries across the country block far more content than is necessary under the law in order to comply with the Children’s Internet Protection Act (CIPA), the law that requires public libraries and K-12 schools to employ internet filtering software in exchange for certain federal funding. This week, patrons and students will get the chance to call attention to banned websites and excessive Internet filtration in libraries.

    Library advocates will have the opportunity to participate in a no-cost educational event about internet filtering. Join the Electronic Frontier Foundation, the MIT Center for Civic Media and the National Coalition Against Censorship on Friday, April 4, 2014, at 3:00pm EST, when they collaborate to host a digital teach-in that will include discussions with top researchers and librarians working to push back against the use of Internet filters on library computers. The digital teach-in will be archived.

    Digital Teach-in
    When: Friday, April 4, 2014
    Time: 3:00pm EST
    Watch event live

    Speakers:

    • Moderator: April Glaser, activist at the Electronic Frontier Foundation
    • Deborah Caldwell-Stone, Deputy Director of the American Library Association’s Office for Intellectual Freedom. She has written extensively about CIPA and blocked websites in libraries.
    • Chris Peterson, a research affiliate at the Center for Civic Media at the MIT Media Lab. He is currently working on the Mapping Information Access Project.
    • Sarah Houghton, Director for the San Rafael Public Library in Northern California. She has also blogged as the Librarian in Black for over a decade.

    The post 404 Day: Stopping excessive internet filtration appeared first on District Dispatch.