“It was a foul, rainy, muddy, sloppy morning, without a glimmer of sun, with that thick, pervading, melancholy atmosphere which forces for the time upon imaginative men a conviction that nothing is worth anything,” —Anthony Trollope, Ralph the Heir (1871), chapter XXIX.
An archive of the CopyTalk webinar on fan fiction and copyright issues originally broadcast on Thursday, November 5, 2015 is available.
Fan-created works are, in general, broadly available to people at the click of a link. Fan fiction has rarely been the subject of litigation, but it plays an increasing role in literacy as its creation and consumption have skyrocketed. Practice on the ground can matter as much as court cases, and the explosion of noncommercial creativity is a big part of the fair use ecosystem. This presentation touched on many of the ways in which creativity has impacted recent judicial rulings on fair use, from Google Books, to putting a mayor’s face on a T-shirt, to copying a competitor’s ad for a competing ad. Rebecca Tushnet, legal scholar and counsel to the Organization for Transformative Works, enlightened us.
This was a really interesting webinar. Do check it out!
Rebecca Tushnet clerked for Chief Judge Edward R. Becker of the Third Circuit Court of Appeals in Philadelphia and Associate Justice David H. Souter of the United States Supreme Court and spent two years as an associate at Debevoise & Plimpton in Washington, DC, specializing in intellectual property. After two years at the NYU School of Law, she moved to Georgetown, where she now teaches intellectual property, advertising law, and First Amendment law.
Her work currently focuses on the relationship between the First Amendment and false advertising law. She has advised and represented several fan fiction websites in disputes with copyright and trademark owners. She serves as a member of the legal team of the Organization for Transformative Works, a nonprofit dedicated to supporting and promoting fanworks, and is also an expert on the law of engagement rings.
Our next CopyTalk is December 3rd at 2pm Eastern/11am Pacific. Our topic will be the 1201 rulemaking and this year’s exemptions. Get ready for absurdity!
The mission of Open Knowledge International is to open up all essential public interest information and see it utilized to create insight that drives change. To this end we work to create a global movement for open knowledge, supporting a network of leaders and local groups around the world; we facilitate coordination and knowledge sharing within the movement; we build collaboration with other change-making organisations both within our space and outside; and, finally, we prototype and provide a home for pioneering products.
A decade after its foundation, Open Knowledge International is ready for its next phase of development. We started as an organisation that led the quest for the opening up of existing data sets – and in today’s world most of the big data portals run on CKAN, an open source software product developed first by us.
Today, it is not only about opening up data; it is about making sure that this data is usable, useful and – most importantly – used to improve people’s lives. Our current projects (OpenSpending, OpenTrials, School of Data, and many more) all aim towards giving people access to data, the knowledge to understand it, and the power to use it in our everyday lives.
Now, we are looking for an enthusiastic Project Assistant
(flexible location, part time)
to join the team to help deliver our projects around the world. We are seeking people who care about openness and have the commitment to make it happen.
We do not require applicants to have experience of project management – instead, we would like to work with motivated self-starters, able to demonstrate engagement with initiatives within the open movement. If you have excellent written and verbal communication skills, are highly organised and efficient with strong administration and analytical abilities, are interested in how projects are managed and are willing to learn, we want to hear from you.
The role includes the following responsibilities:
- Monitoring and reporting of ongoing work progress to Project Managers and on occasion to other stakeholders
- Research and investigation
- Coordination of, and communication with, the project team, wider organisation, volunteers and stakeholders
- Documentation, including creating presentations, document control, proof-reading, archiving, distributing and collecting
- Meeting and event organisation, including scheduling, booking, preparing documents, minuting, and arranging travel and accommodation where needed
- Project communication and promotion, including by email, blog, social media, networking online and in person
- Liaising with staff across the organisation to offer and receive support, eg public communication and finance
This role requires someone who can be flexible and comfortable with remote working, able to operate in a professional environment and participate in grassroots activities. Experience working as and with volunteers is advantageous.
You are comfortable working with people from different cultural, social and ethnic backgrounds. You are happy to share your knowledge with others, and you find working in transparent and highly visible environments interesting and fun.
Personally, you have a demonstrated commitment to working collaboratively, with respect and a focus on results over credit.
The position reports to the Project Manager and will work closely with other members of the project delivery team.
The role is part-time at 20 hours per week, paid by the hour. You will be compensated with a market salary, in line with the parameters of a non-profit organisation.
This would particularly suit recent graduates who have studied a subject complementary to Open Knowledge International's work and who are looking for some experience in the workplace.
Successful applicants must have excellent English language skills in both speaking and writing.
You can work from home, with flexibility offered and required. Some flexibility around work hours is useful, and there may be some (infrequent) international travel required.
We offer employment contracts for residents of the UK with valid permits, and services contracts to overseas residents.
Interested? Then send us a motivational letter and a one page CV via https://okfn.org/about/jobs/. Please indicate your current country of residence, as well as your salary expectations (in GBP) and your earliest availability.
Early application is encouraged, as we are looking to fill the positions as soon as possible. These vacancies will close when we find a suitable candidate.
If you have any questions, please direct them to jobs [at] okfn.org.
The publisher will argue that this one-sided agreement, often transferring all possible rights to the publisher, is absolutely necessary in order that the article be published. Despite their better-than-average copyright policy, ACM's claims in this regard are typical. I dissected them here.
The SPARC addendum was written by a lawyer, Michael W. Carroll of Villanova University School of Law, and is intended to be attached to, and thereby modify, the publisher's agreement. It performs a number of functions:
- Preserves the author's rights to reproduce, distribute, perform, and display the work for non-commercial purposes.
- Acknowledges that the work may already be the subject of non-exclusive copyright grants to the author's institution or a funding agency.
- Imposes as a condition of publication that the publisher provide the author with a PDF of the camera-ready version without DRM.
Of course, many publishers will refuse to publish, and many authors at that point will cave in. The SPARC site has useful advice for this case. The more interesting case is when the publisher simply ignores the author's rights as embodied in the addendum. Publishers are not above ignoring the rights of authors, as shown by the history of my article Keeping Bits Safe: How Hard Can It Be?, published both in ACM Queue (correctly, with a note that I retained copyright) and in CACM (incorrectly claiming ACM copyright). I posted analysis of ACM's bogus justification of their copyright policy based on this experience. There is more here.
So what will happen if the publisher ignores the author's addendum? They will publish the paper. The author will not get a camera-ready copy without DRM. But the author will make the paper available, and the "kicker" above means they will be on safe legal ground. Not only did the publisher constructively agree to the terms of the addendum, they also failed to deliver on their side of the deal. So any attempt to haul the author into court, or send takedown notices, would be very risky for the publisher.
2012 data from Alex Holcombe

Publishers don't need anything except permission to publish. Publishers want the rights beyond this to extract the rents that generate their extraordinary profit margins. Please use the SPARC addendum when you get the chance.
Open Knowledge project The Public Domain Review is very proud to announce the launch of its second book of selected essays! For nearly five years now we’ve been diligently trawling the rich waters of the public domain, bringing to the surface all sorts of goodness from various openly licensed archives of historical material: from the Library of Congress to the Rijksmuseum, from Wikimedia Commons to the wonderful Internet Archive. We’ve also been showcasing, each fortnight, new writing on a selection of these public domain works, and this new book picks out our very best offerings from 2014.
All manner of oft-overlooked histories are explored in the book. We learn of the strange skeletal tableaux of Frederik Ruysch, pay a visit to Humphry Davy high on laughing gas, and peruse the pages of the first ever picture book for children (which includes the excellent table of Latin animal sounds pictured below). There are also fireworks in art, petty pirates on trial, brainwashing machines, truth-revealing diseases, synesthetic auras, Byronic vampires, and Charles Darwin’s photograph collection of asylum patients. Together the fifteen illustrated essays chart a wonderfully curious course through the last five hundred years of history — from sea serpents of the 16th-century deep to early-20th-century Ouija literature — taking us on a journey through some of the darker, stranger, and altogether more intriguing corners of the past.

Order by 18th November to benefit from a special reduced price and delivery in time for Christmas
If you want to get the book in time for Christmas (and we do think it’d make an excellent gift for that history-loving relative or friend!), then please make sure to order before midnight on Wednesday 18th November. Orders placed before this date will also benefit from a special reduced price!
Please visit the dedicated page on The Public Domain Review site to learn more and also buy the book!
Thanks for the opportunity to participate in this panel today. I’m really looking forward to the panel conversation so I will try to keep my remarks brief. A little over a year ago I began working as a software developer at the Maryland Institute for Technology in the Humanities (MITH). MITH has been doing digital humanities (DH) work for the last 15 years. Over that time it has acquired a rich and intertwingled history of work at the intersection of computing and humanities disciplines, such as textual studies, art history, film and media studies, music, electronic literature, games studies, digital forensics, the performing arts, digital stewardship, and more. Even after a year I’m still getting my head around the full scope of this work.
To some extent I think MITH and similar centers, conferences and workshops like ThatCamp have been so successful at infusing humanities work with digital methods and tools that the D in DH isn’t as necessary as it once was. We’re doing humanities work that necessarily involves now pervasive computing technology and digitized or born digital collections. Students and faculty don’t need to be convinced that digital tools, methods and collections are important for their work. They are eager to engage with DH work, and to learn the tools and skills to do it. At least that’s been my observation in the last year. For the rest of my time I’d like to talk about how MITH does its work as a DH center, and how that intersects with material saved from the Web.
Traditionally, DH centers like MITH have been built on the foundation of faculty fellowships, which bring scholars into the center for a year to work on a particular project, and spread their knowledge and expertise around. But increasingly MITH has been shifting its attention to what we call the Digital Humanities Incubator model. The incubator model started in 2013 as a program to introduce University Library faculty, staff and graduate assistants to digitization, transcription, data modeling and data exploration through their own projects. Unfortunately, there’s not enough time here to describe the DH incubator in much more detail, but if you are interested I encourage you to check out Trevor Muñoz and Jennifer Guiliano’s Making Digital Humanities Work, where they talk about the development of the incubator. In the last year we’ve been experimenting with an idea that grew out of the incubator, which Neil Fraistat (MITH’s Director) has been calling the data first approach. Neil described this approach earlier this year at DH 2015 in Sydney using this particular example:
This past year, we at MITH experimented with digital skills development by starting not with a fellow or a project, but with a dataset instead: an archive of over 13 million tweets harvested … concerning the shooting of Michael Brown by a police officer in Ferguson, Missouri and the protests that arose in its wake. Beginning with this dataset, MITH invited arts and humanities, journalism, social sciences, and information sciences faculty and graduate students to gather and generate possible research questions, methods, and tools to explore it. In response to the enthusiastic and thoughtful discussion at this meeting, MITH created a series of five heavily attended workshops on how to build social media archives, the ethics and rights issues associated with using them, and the tools and methods for analyzing them. The point here was not to introduce scholars to Digital Humanities or to enlist them in a project, but to enable them through training and follow up consultations to do the work they were already interested in doing with new methods and tools. This type of training seems crucial to me if DH centers are actually going to realize their potential for becoming true agents of disciplinary transformation. And with something less than the resources necessary to launch and sustain a fellowship project, we were able to train a much larger constituency.
13 million tweets might sound like a lot. But really it’s not. It’s only 8GB of compressed, line-oriented JSON. The 140 characters that make up the text of each tweet are actually only 2% of the structured data that is made available from the Twitter API for each tweet. The five incubator workshops Neil mentioned were often followed with a sneakernet style transfer of data onto a thumb drive, accompanied by a brief discussion of the Twitter Terms of Service. The events in Ferguson aligned with the interests of the student body and faculty. The data collection led to collaborations with Bergis Jules at UC Riverside to create more Twitter datasets for Sandra Bland, Freddie Gray, Samuel Dubose and Walter Scott as awareness of institutionalized racism and police violence grew. The Ferguson dataset was used to create backdrops in town hall meetings attended by hundreds of students who were desperate to understand and contextualize Ferguson in their lives as students and citizens.
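To give a concrete sense of that format, here is a minimal sketch (assuming, as was the case here, gzipped line-oriented JSON where each line is one tweet object with a "text" field; the function names are mine, not from the workshops):

```python
import gzip
import json

def texts_from_lines(lines):
    """Extract the 'text' field from an iterable of JSON-encoded tweet lines."""
    for line in lines:
        line = line.strip()
        if line:  # skip blank lines
            yield json.loads(line).get("text", "")

def tweet_texts(path):
    """Yield tweet texts from a gzipped, line-oriented JSON file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        yield from texts_from_lines(f)
```

Because the file is processed one line at a time, millions of tweets can be streamed this way without ever loading the whole dataset into memory.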
For me, this experience wasn’t about the power of Big Data. Instead it was a lesson in the necessity and utility of Small Data. Small data that is collected for a particular purpose, and whose provenance can fit comfortably in someone’s brain. Small data that intervened in the business as usual, collection development policies to offer new perspectives, inter or anti-disciplinary engagement, and allegiances.
I think we’re still coming to understand the full dimensions of this particular intervention, especially when you consider some of the issues around ethics, privacy, persistence and legibility that it presents. But I think DH centers like MITH are well situated to be places for creative interventions such as this one around the killing of Michael Brown. We need more spaces for creative thinking, cultural repair, and assembling new modes of thinking about our experience in the historical moments that we find ourselves living in today. Digital Humanities centers provide a unique place for this radically interdisciplinary work to happen.
I’d be happy to answer more questions about the Ferguson dataset or activities around it, either now or after this session. Just come and find me, or email me…I’d love to hear from you. MITH does have some work planned over the coming year for building capacity specifically in the area of Digital Humanities and African American history and culture. We’re also looking at ways to help build a community of practice around the construction of collections like the Ferguson Twitter dataset, especially with regards to how they inform what we collect from the Web. In addition to working at MITH I’m also a PhD student in the iSchool at UMD, where I’m studying what I’m calling computer assisted appraisal, a strand of ongoing work in its own right. I’d be happy to talk to you about that too, but that would be a different talk in itself. Thanks!
New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.
New This Week:
Visit the LITA Job Site for more available jobs and for information on submitting a job posting.
Recently in a conversation with one of my daughters I remarked that the only issue I thought might be more devastating to humanity than the status of girls and women in society is global warming. Upon reflection, I now feel that I was wrong. There is nothing more devastating to humanity than the status of girls and women in society.
There are a few reasons for this, but I will simply cite an essential one. The essential reason is that for us to be able to collectively solve the biggest problems we face as humanity, we need ALL of our assets brought to bear, and that certainly includes the more than half of the planet who are female. The fact that many girls and women are marginalized, abused, and denied their full rights as human beings means that we are collectively severely crippled. And it has to stop.
Meanwhile, although I am in a female dominated profession, there are still a disproportionate number of men in positions of power and in generally higher-paying technical positions. For years the tech community Code4Lib has struggled to diversify and make women more comfortable in joining in, both virtually and at the annual conference. Thankfully, it appears that progress is being made.
But it is just a beginning — both for Code4Lib and for society more generally. So these are things I pledge to do to help:
- Shut up. As a privileged white male, I’ve come to realize that my voice is the loudest in the room. And I don’t mean that in actual fact, although it is often true in that sense. I mean it figuratively. People pay attention to what I have to say just by the mere fact of my placement in the power hierarchy. The fact that I am speaking means a lesser-heard voice remains lesser-heard. So I will strive to not speak in situations where doing so can allow space for lesser-heard voices to speak.
- Listen up. Having made space for lesser-heard voices, I need to listen to what they have to say. That means actively engaging with what they are saying, thinking carefully about it, and finding points of relevance to my situation.
- Speak up. As someone in a position of power I know that it can be used for good or evil. Using it for evil doesn’t necessarily mean I knowingly cause harm, mind you, but I can cause harm nonetheless. Using my power for good may mean, at times, speaking up to others in power positions to create more inviting and inclusive situations for those with less power in social situations.
- Step down. As someone who is often offered the podium at a conference or meeting, I’m trying to do better about not accepting offers until or unless there is at least equity in gender representation. This means sometimes walking away from gigs, which I have done and which I will continue to do until this female-dominated profession gives women their due.
- Step up. Whether Edmund Burke said this or someone else, I nonetheless hold it to be true: “All that is necessary for the triumph of evil is that good men do nothing.” So sometimes I will need to spring into action to fight the evil of misogyny, whether it is overt and intended or subtle and unintentional.
There are no doubt other ways in which I can help, and I look forward to learning what those are. It’s a journey, I’ve found, in trying to understand what being on the top of the societal heap means and how it has shaped my perceptions and, unfortunately, actions.
I decided to secure the gitenberg.org website as my test example. It's still being developed, and it's not quite ready for use, so if I screwed up it would be no disaster. Gitenberg.org is hosted using Elastic Beanstalk (EB) on Amazon Web Services (AWS), which is a popular and modern way to build scalable web services. The servers that Elastic Beanstalk spins up have to be completely configured in advance; you can't just log in and write some files. And EB does its best to keep servers serving. It's no small matter to shut down a server and run some temporary server, because EB will spin up another server to handle rerouted traffic. These characteristics of Elastic Beanstalk exposed some of the present shortcomings and future strengths of the Let's Encrypt project.
Here's the mission statement of the project:
“Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit.”

While most of us focus on the word "free", the more significant word here is "automated":
“Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.”

Note that the objective is not to make it painless for website administrators to obtain a certificate, but to enable software to get certificates. If the former is what you want, in the near term, then I strongly recommend that you spend some money with one of the established certificate authorities. You'll get a certificate that isn't limited to 90 days, as the LE certificates are, you can get a wildcard certificate, and you'll be following the manual procedure that your existing web server software expects you to be following.
The real payoff for Let's Encrypt will come when your web server applications start expecting you to use the LE methods of obtaining security certificates. Then, the chore of maintaining certificates for secure web servers will disappear, and things will just work. That's an outcome worth waiting for, and worth working towards today.
So here's how I got Let's Encrypt working with Elastic Beanstalk for gitenberg.org.
The key thing to understand here is that before Let's Encrypt can issue me a certificate, I have to prove to them that I really control the hostname that I'm requesting a certificate for. So the Let's Encrypt client has to be given access to a "privileged" port on the host machine designated by DNS for that hostname. Typically, that means I have to have root access to the server in question.
In the future, Amazon should integrate a Let's Encrypt client with their Beanstalk Apache server software so all this is automatic, but for now we have to use the Let's Encrypt "manual mode". In manual mode, the Let's Encrypt client generates a cryptographic "challenge/response", which then needs to be served from the root directory of the gitenberg.org web server.
Even running Let's Encrypt in manual mode required some jumping through hoops. It won't run on Mac OS X. It doesn't yet support the flavor of Linux used by Elastic Beanstalk, so it does no good configuring Elastic Beanstalk to install it there. Instead I used the Let's Encrypt Docker container, which works nicely, running it in a docker-machine VM inside VirtualBox on my Mac.
Having configured Docker, I ran
docker run -it --rm -p 443:443 -p 80:80 --name letsencrypt \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
quay.io/letsencrypt/letsencrypt:latest -a manual -d www.gitenberg.org \
--server https://acme-v01.api.letsencrypt.org/directory auth
(the --server option requires your domain to be whitelisted during the beta period.) After paging through some screens asking for my email address and permission to log my IP address, the client responded with
Make sure your web server displays the following content at http://www.gitenberg.org/.well-known/acme-challenge/8wBDbWQIvFi2bmbBScuxg4aZcVbH9e3uNrkC4CutqVQ before continuing:
8wBDbWQIvFi2bmbBScuxg4aZcVbH9e3uNrkC4CutqVQ.hZuATXmlitRphdYPyLoUCaKbvb8a_fe3wVj35ISDR2A

To do this, I configured a virtual directory "/.well-known/acme-challenge/" in the Elastic Beanstalk console, mapped to a "letsencrypt/" directory in my application. I then made a file named "8wBDbWQIvFi2bmbBScuxg4aZcVbH9e3uNrkC4CutqVQ" with the specified content in my letsencrypt directory, committed the change with git, and deployed the application with the Elastic Beanstalk command line interface. After waiting for the deployment to succeed, I checked that http://www.gitenberg.org/.well-known/acme-challenge/8wBD... responded correctly, and then hit <enter>. (Though the LE client tells you that the MIME type "text/plain" MUST be sent, Elastic Beanstalk sets no MIME header, which is allowed.)
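Before hitting <enter>, it's worth double-checking programmatically that the challenge is actually being served. A quick sketch (the domain, token, and key-authorization values are whatever the LE client printed for you; the function names are mine):

```python
from urllib.request import urlopen

def challenge_url(domain, token):
    """Build the ACME http-01 challenge URL for a given domain and token."""
    return "http://%s/.well-known/acme-challenge/%s" % (domain, token)

def challenge_ok(domain, token, key_authorization):
    """Fetch the challenge URL and verify the body matches the expected key authorization."""
    body = urlopen(challenge_url(domain, token)).read().decode("utf-8").strip()
    return body == key_authorization
```

For example, challenge_ok("www.gitenberg.org", "8wBD...", "8wBD....hZuA...") should return True only once the deployment has gone live.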
IMPORTANT NOTES: - Congratulations! Your certificate and chain have been saved at /etc/letsencrypt/live/www.gitenberg.org/fullchain.pem. Your cert will expire on 2016-02-08. To obtain a new version of the certificate in the future, simply run Let's Encrypt again.

...except since I was running Docker inside VirtualBox on my Mac, I had to log into the docker machine and copy three files out of that directory (cert.pem, privkey.pem, and chain.pem). I put them in my local <.elasticbeanstalk> directory. (See this note for a better way to do this.)
The final step was to turn on HTTPS in elastic beanstalk. But before doing that, I had to upload the three files to my AWS Identity and Access Management Console. To do this, I needed to use the aws command line interface, configured with admin privileges. The command was
aws iam upload-server-certificate \
  --server-certificate-name gitenberg-le \
  --certificate-body file://<.elasticbeanstalk>/cert.pem \
  --private-key file://<.elasticbeanstalk>/privkey.pem \
  --certificate-chain file://<.elasticbeanstalk>/chain.pem

One more trip to the Elastic Beanstalk configuration console (network/load balancer section), and gitenberg.org was on HTTPS.
Given that my sys-admin skills are rudimentary, the fact that I was able to get Let's Encrypt to work suggests that they've done a pretty good job of making the whole process simple. However, the documentation I needed was non-existent, apparently because the LE developers want to discourage the use of manual mode. Figuring things out required a lot of error-message googling. I hope this post makes it easier for people to get involved to improve that documentation or build support for Let's Encrypt into more server platforms.
(Also, given that my sys-admin skills are rudimentary, there are probably better ways to do what I did, so beware.)
If you use web server software developed by others, NOW is the time to register a feature request. If you are contracting for software or services that include web services, NOW is the time to add a Let's Encrypt requirement into your specifications and contracts. Let's Encrypt is ready for developers today, even if it's not quite ready for rank and file IT administrators.
We wrapped up the 2015 Hack-A-Way this past Friday, and after a few days to reflect I wanted to write about the event and about future events. Saying what the impact of a coding event will be can be difficult. Lines of code written can be a misleading metric, and a given patch may not make it into production. However, a casual discussion could have major consequences years down the road. Indeed, Bill Erickson’s presentation on web sockets at the first Hack-A-Way had no immediate impact, but over the next year it was a key component in the decision to go to a web based staff client. Still, I’m going to venture into saying that there are impacts both immediate and long term.
I won’t go into detail on each thing that was worked on; you can read the collaborative notes here for that: https://docs.google.com/document/d/1_wbIiU47kSElcg0hG2o-ZJ9hNmyM8v7rOWyi8jGvA38/edit?usp=sharing
But, some highlights, for me, included:
The Web Based staff client – Bill Erickson has done a lot of work on the Windows installer for Hatch and Galen will help with the OS X version. Both PINES and SCLENDS have libraries looking forward to getting the web based client into production to do real world tests. I’m very excited about this.
Galen Charlton was confirmed as the release manager for Evergreen 3.0 (or whatever version number is selected).
Syrup – A fair bit of bandwidth was spent on how Syrup could be more tightly integrated into Evergreen for course reserves. I’m always excited to see academics get more support with Evergreen even if it’s not my personal realm.
Sqitch – Bill Erickson presented on this, a tool for managing SQL scripts. Sqitch plans let you specify dependencies between SQL scripts, avoiding the need to number them so that they run in a particular order, and encourage the creation of deploy, revert, and verify scripts. This may be a good tool to use during development, though production deployments are likely to still use the traditional upgrade scripts.
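As a rough sketch of how that works (the project and change names below are hypothetical, not from the Hack-A-Way discussion), a sqitch.plan file lists each change with its dependencies in square brackets, so Sqitch can order deployments without numbered filenames:

```
%syntax-version=1.0.0
%project=evergreen_example

base_schema 2015-11-10T15:00:00Z Jane Dev <jane@example.org> # create base schema
add_holds_table [base_schema] 2015-11-11T10:00:00Z Jane Dev <jane@example.org> # holds table; requires base_schema
```

Each change then gets a matching script in the deploy/, revert/, and verify/ directories.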
Twenty patches got merged during the Hack-A-Way with more following over the weekend that were tied to work done then.
Ken Cox, an Evergreen user, joined us as a developer and showed us the great work he has done on an Android app for Evergreen.
We discussed the steps to becoming a core committer, and several ideas were thrown around about how to encourage growth among potential developers via a mentoring program. No firm consensus came about in terms of what that program should look like, but I’m glad to say that in an epilogue to the discussion Kathy Lussier has been made a core committer! Kathy has been a long time consistent contributor to Evergreen and bug reviewer, so I’m excited to have seen this happen.
Search speed continues to be a contentious issue in the Evergreen community. Search relies on a lot of layers, from hardware to Apache to Postgres to SQL queries to the web site rendering, and on things beyond Evergreen like the speed between server and client. As a result, comparisons need discipline and controls in place. Using external indexing and search products was discussed, but it’s a hard discussion to have. Frankly, it’s very easy to end up comparing apples to oranges even between projects with similar tasks and goals. For example, Solr was referenced as a very successful product that is used commercially and with library products, but research and exploration will be needed before we can have a fuller discussion about it (or other products). MassLNC shared their search vision – http://masslnc.org/search_vision – which was a good starting place for the dialogue. Many systems administrators shared their best practices. We also discussed creating a baseline for searching, taking into account variables such as system setups and record sizes, and then creating metric goals. Even possible changes to Postgres to accommodate our needs were thrown out for consideration.
Related to the core committer discussion we did an overview of the technical layout of Evergreen and common paths in and out of the system for data. https://docs.google.com/drawings/d/17aNEr8vLen5wBjCAP4NPnjL7fYT3VxK6_9wVArR9VII/edit?usp=sharing
Now, as wonderful as all this work was, it’s still an incomplete picture. It doesn’t capture the conversations, the half patches, the bug testing, or the personal growth of participants that happened as well. Nor does it capture the kind hosting we received from MassLNC and NOBLE, who helped ferry us about, sent staff to participate, arranged hotels, kept coffee flowing, and in general were as kind a host as we could hope for. I feel like I should write far more about the hosts, but I can’t thank each one individually since I’m sure I don’t know everything each of them did. Hosting the Hack-A-Way is always a big task, and the folks at MassLNC and NOBLE did a wonderful job that we are all very thankful for.
Now, about the operational side of the Hack-A-Way. There were some discussions about the future of the event and managing remote participation. Remote participation in the Hack-A-Way has always been problematic. When the Hack-A-Way began, remote participation largely amounted to people updating IRC with what was going on. Then we tried adding a camera and using Google Hangouts. Then the limitations of Google Hangouts became apparent. We tried a FLOSS product the next year, and that didn’t work well at all. Through all of this, the number of people wanting to participate remotely has steadily grown. So, during the event this year, I created a list of things I want to do next year. Tragically, this will put more bandwidth strain on the hosting site, but we always seem to push the bandwidth to its limit (and sometimes beyond).
- Ensure that the main presentation computer for group events has a microphone that can be used, is set up as part of the group event, and has its screen shared.
- Have the secondary station with the microphone / camera be mobile instead of on a stationary tripod. This will mean a dedicated laptop for this purpose. If I have time, I may set up a Raspberry Pi with a script to watch IRC and allow IRC users to control the camera movement remotely, which might be fun.
- Move to a more robust commercial product that has presentation controls (the needs this year showed that was necessary). We also occasionally need to break into small groups with remote presence, which this won’t solve, so Google Hangouts will still probably have a use. We are going to try out a commercial product next year, while looking at the options that support our community as best we can, namely ones with native Chrome support via an HTML5 client.
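The Raspberry Pi idea in the list above could start with something like the following sketch. To be clear, this is hypothetical: the `!cam` command syntax, the channel, and the command set are all invented for illustration, and a real bot would still need the IRC connection handshake and actual pan/tilt hardware control.

```python
# Hypothetical sketch: turn raw IRC lines into camera commands.
# The "!cam <direction>" syntax and the command set are invented;
# a real bot would also log in to the IRC server and drive hardware.

CAMERA_COMMANDS = {"left", "right", "up", "down", "center"}

def parse_camera_command(irc_line):
    """Return a camera command ('left', 'right', ...) or None.

    Expects a raw IRC line such as:
    ':nick!user@host PRIVMSG #evergreen :!cam left'
    """
    parts = irc_line.split(" ", 3)
    # Only channel messages (PRIVMSG) can carry commands.
    if len(parts) < 4 or parts[1] != "PRIVMSG":
        return None
    message = parts[3].lstrip(":").strip()
    if not message.startswith("!cam "):
        return None
    command = message.split(" ", 1)[1].strip().lower()
    return command if command in CAMERA_COMMANDS else None
```

The parsing is kept as a pure function so it can be tested without a network connection; the main loop would just read lines from the socket and hand each one to it.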
Beyond that, we discussed the frequency of the Hack-A-Way and locations. Next year Evergreen Indiana is kind enough to host us and already has a submission in place for 2017. Several ideas were floated: extending the conference by one to three days for a hacking event there, or even having a second Hack-A-Way each year, situated to break the year into even segments of Conference / Hack-A-Way / Hack-A-Way rather than the Hack-A-Way falling midyear between conferences as it does now. No decision was made except to continue the conversation and try to come to some decisions by the time of the conference in Raleigh.
The only sure thing is that those months will pass very quickly between now and then. I felt the Hack-A-Way was very successful with a lot of work done and a lot of good conversations started, which is part of the function of gathering into one spot so many of us that are spread out and used to only communicating via IRC and email (with occasional Facebook postings thrown in).
MARC is an acronym for Machine Readable Cataloging. It was designed in the 1960s, and its primary purpose was to ship bibliographic data on tape to libraries that wanted to print catalog cards. Consider the computing context of the time. There were no hard drives. RAM was beyond expensive. And the idea of a relational database had yet to be articulated. Consider the idea of a library’s access tool — the card catalog. Consider the best practice of catalog cards: “Generate no more than four or five cards per book. Otherwise, we will not be able to accommodate all of the cards in our drawers.” MARC worked well, and considering the time, it represented a well-designed serial data structure complete with multiple layers of redundancy.
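To make the serial nature of MARC concrete, here is a minimal sketch of walking raw (“traditional”) MARC with nothing but the standard library. The first five characters of the leader give the record length in octets, and each record ends with the record terminator byte 0x1D; the toy record in the example is hand-made, not real cataloging data, and real code would more likely use a library such as pymarc.

```python
# A minimal sketch of reading raw MARC: leader positions 0-4 hold the
# record length as five ASCII digits, and 0x1D terminates each record.

RECORD_TERMINATOR = b"\x1d"

def record_lengths(marc_bytes):
    """Return the declared length (from the leader) of each record."""
    lengths = []
    for raw in marc_bytes.split(RECORD_TERMINATOR):
        if len(raw) >= 5:  # skip the empty tail after the last terminator
            lengths.append(int(raw[:5]))
    return lengths

# Hand-made toy "record": a 5-digit length, filler, and the terminator.
fake_record = b"00030" + b"x" * 24 + b"\x1d"
print(record_lengths(fake_record * 2))  # two concatenated records
```

Counting how many lengths come back is one way to count the bibliographic items in a raw MARC file, and the five-digit length field is also exactly why a record can never exceed 99,999 octets.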
Someone then got the “cool” idea to create an online catalog from MARC data. The idea was logical but grew without a balance of library and computing principles. To make a long story short, library principles sans any real understanding of computing principles prevailed. The result was a bloating of the MARC record to include all sorts of administrative data that never would have made it on to a catalog card, and this data was delimited in the MARC record with all sorts of syntactical “sugar” in the form of punctuation. Moreover, as bibliographic standards evolved, the previously created data was not updated, and sometimes people simply ignored the rules. The consequence has been disastrous, and even Google can’t systematically parse the bibliographic bread & butter of Library Land.* The folks in the archives community — with the advent of EAD — are so much better off.
Soon after XML was articulated, the Library of Congress specified MARCXML — a data structure designed to carry MARC forward. For the most part it addressed many of the necessary issues, but since it insisted on making the data in a MARCXML file 100% transformable into a “traditional” MARC record, MARCXML falls short. For example, without knowing the “secret codes” of cataloging — the numeric field names — it is very difficult to determine the authors, titles, and subjects of a book.
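A short sketch illustrates the problem. To pull the title out of a MARCXML record you must already know that tag 245, subfield a, holds the title proper (and that 100 $a is the main author); the markup itself offers no clue. The record below is a hand-made toy example.

```python
# Extracting "human" fields from MARCXML requires knowing the numeric
# tag codes (245 = title, 100 = main author); the markup doesn't say so.
import xml.etree.ElementTree as ET

MARCXML = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam a2200000 a 4500</leader>
  <datafield tag="100" ind1="1" ind2=" ">
    <subfield code="a">Thoreau, Henry David,</subfield>
  </datafield>
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">Walden /</subfield>
  </datafield>
</record>"""

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def get_subfield(root, tag, code):
    """Return the first matching subfield's text, or None."""
    path = f"marc:datafield[@tag='{tag}']/marc:subfield[@code='{code}']"
    node = root.find(path, NS)
    return node.text if node is not None else None

root = ET.fromstring(MARCXML)
title = get_subfield(root, "245", "a")   # "Walden /"
author = get_subfield(root, "100", "a")  # "Thoreau, Henry David,"
```

Note, too, the trailing ISBD punctuation ("Walden /") carried along inside the data itself — the syntactical “sugar” mentioned above.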
The folks at the Library of Congress understood these limitations almost from the beginning, and consequently they created an additional bibliographic standard called MODS — the Metadata Object Description Schema. This XML-based metadata schema goes a long way toward addressing both the computing practices of the day and the need for rich, full, and complete bibliographic data. Unfortunately, “traditional” MARC records are still the data structure ingested and understood by the profession’s online catalogs and “discovery systems”. Consequently, without a wholesale shift in practice, the profession’s intellectual content is figuratively stuck in the 1960s.
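By contrast, a MODS record names things in plain English, so no lookup table of numeric codes is needed. A sketch, using the same toy book (a hand-made example, not real Library of Congress data):

```python
# MODS uses self-describing element names: titleInfo/title, name/namePart,
# subject/topic. No numeric "secret codes" are needed to read it.
import xml.etree.ElementTree as ET

MODS = """<mods xmlns="http://www.loc.gov/mods/v3">
  <titleInfo><title>Walden</title></titleInfo>
  <name type="personal"><namePart>Thoreau, Henry David</namePart></name>
  <subject><topic>Natural history</topic></subject>
</mods>"""

NS = {"m": "http://www.loc.gov/mods/v3"}
root = ET.fromstring(MODS)

title = root.find("m:titleInfo/m:title", NS).text
author = root.find("m:name/m:namePart", NS).text
subjects = [t.text for t in root.findall("m:subject/m:topic", NS)]
```

Compared with the MARCXML version, the element paths read like the bibliographic questions themselves, which is much of MODS’s appeal.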
* Consider the hodgepodge of materials digitized by Google and accessible in the HathiTrust. A search for Walden by Henry David Thoreau returns a myriad of titles, all exactly the same.

Readings
- MARC (http://www.loc.gov/marc/bibliographic/bdintro.html) – An introduction to the MARC standard
- leader (http://www.loc.gov/marc/specifications/specrecstruc.html#leader) – All about the leader of a traditional MARC record
- MARC Must Die (http://lj.libraryjournal.com/2002/10/ljarchives/marc-must-die/) – An essay by Roy Tennant outlining why MARC is not a useful bibliographic format. Notice when it was written.
- MARCXML (https://www.loc.gov/standards/marcxml/marcxml-design.html) – Here are the design considerations for MARCXML
- MODS (http://www.loc.gov/standards/mods/userguide/) – This is an introduction to MODS
This is much more of an exercise than it is an assignment. The goal of the activity is not to get correct answers but instead to provide a framework for the reader to practice critical thinking against some of the bibliographic standards of the library profession. To the best of your ability, and in the form of a written essay between 500 and 1,000 words long, answer and address the following questions based on the contents of the given .zip file:
- Measured in characters (octets), what is the maximum length of a MARC record? (Hint: It is defined in the leader of a MARC record.)
- Given the maximum length of a MARC record (and therefore a MARCXML record), what are some of the limitations this imposes when it comes to full and complete bibliographic description?
- Given the attached .zip file, how many bibliographic items are described in the file named data.marc? How many records are described in the file named data.xml? How many records are described in the file named data.mods? How did you determine the answers to the previous three questions? (Hint: Open and read the files in your favorite text and/or XML editor.)
- What is the title of the book in the first record of data.marc? Who is the author of the second record in the file named data.xml? What are the subjects of the third record in the file named data.mods? How did you determine the answers to the previous three questions? Be honest.
- Compare & contrast the various bibliographic data structures in the given .zip file. There are advantages and disadvantages to all three.
We are excited to announce that the first face-to-face Mashcat event in North America will be held on January 13th, 2016, at Simmons College in Boston, Massachusetts. We invite you to view the schedule for the day as well as register at http://www.mashcat.info/2016-event/. We have a strict limit on the number of participants we can accept for this inaugural North American Mashcat face-to-face event, so register early! If you run into any issues with registering, you can email signup AT mashcat.info.
York University, where I work, meets the international institutional cleanliness standard of Moderate Dinginess. I won’t show you pictures.
APPA: Leadership in Educational Facilities (formerly the Association of Physical Plant Administrators, then “APPA: The Association of Higher Education Facilities Officers”) is the body that sets these standards. Their documentation seems to be behind a subscription wall, and I don’t have access to Custodial Staffing Guidelines for Educational Facilities (where this seems to originate) in print, but searching for ‘moderate dinginess’ turns up lots of presentations from universities, and detailed information on the Asset Insights site that covers all five cleanliness categories, from Orderly Spotlessness down to Unkempt Neglect.
About Moderate Dinginess, it says:
- Floors are swept or vacuumed clean, but are dull, dingy, and stained.
- There is a noticeable buildup of dirt and/or floor finish in corners and along walls.
- There is a dull path and/or obviously matted carpet in walking lanes.
- Base molding is dull and dingy with streaks or splashes. All vertical and horizontal surfaces have conspicuous dust, dirt, smudges, fingerprints, and marks.
- Lamp fixtures are dirty, and some lamps (up to 5 percent) are burned out.
- Trash containers have old trash and shavings. They are stained and marked.
- Trash containers smell sour.
This accurately describes all of the libraries and almost all the other buildings on campus, exceptions being the brand new engineering building, the building where the president’s office is, and the building where marketing and fundraising are.
Meanwhile, improving the student experience is one of the key priorities of the university.
I felt really sad when I read Kyle Shockey’s post on the Librarian Burnout blog about feeling burnout after library school and being in the midst of the job hunt. By all indications, he is one of those rare recent grads who followed the advice so many of us give to LIS students — don’t rely solely on your LIS program to prepare you for the profession. He published, presented, worked, volunteered, and even won awards while still in library school! How many of you did all that?? And yet he found that not only was it all mentally and physically exhausting and not encouraged by his LIS program (shame on you, LIS faculty!), but also that it didn’t lead to the job he thought he’d get if he did all the right things. Horrible.
I remember my own first library job search like it was yesterday. A lot of it was chronicled on this blog, but I tried to stay upbeat in my writing because I didn’t want to hurt my chances of getting a job by being negative. By Spring of 2005 (I graduated in December 2004), I was starting to think that I needed to look for jobs back in my previous field (social work) because I was clearly not seen as a promising librarian by anyone. Thinking about that time in my life even now gives me a sick feeling in the pit of my stomach. It was really that traumatic. So I feel viscerally bad for anyone going through that.
Unlike Kyle, I didn’t find library school exhausting (probably because, with a few small exceptions, I did rely on my LIS program to prepare me for the profession), but I found the job hunt isolating and demoralizing. The whole process feels like it is designed to make people feel like there is something inherently wrong with them, and some have suggested that the system supports the idea that you’re out of work and searching for a job because you are flawed. For folks who already have depressive tendencies, it is all too easy to believe that the problem is you and that the problem is unfixable.
It’s easy to just say “wow, that sucks” and move on with your professional life, but there are things each and every one of us can do to make library school and/or the first professional job hunt less of a soul-crushing experience for others:
- If your library is hiring for a position that doesn’t strictly require professional experience for the person in the position to be successful, don’t require it. It doesn’t mean you’ll definitely hire a fresh-out-of-school librarian, but it opens you up to finding an extraordinary candidate who doesn’t have professional experience. I remember at my last job, we were going to be hiring some one-year temporary reference and instruction librarians to cover a bunch of people leaving and a retirement. They seemed like perfect positions for just-graduated librarians to earn a little experience doing instruction, reference, and collection development, but one of our administrators insisted that we ask for several years of experience. His main argument was that, because we’re in Portland, we’ll get plenty of applicants with the requisite experience. So depressing, but sadly not uncommon. When you’re thinking of requiring a few years of experience because it will give you fewer cover letters to read, remember that you are also limiting the options for people who need library jobs most.
- When someone is interviewing for a position at your library, be humane, be kind, and remember that the impression they get from your actions is at least as important as the impression you get from them. You are representing the library and, unless you hate working there, you should work to give the candidates a positive impression of your workplace. Treat the job candidate as you would any valued member of our profession. I remember interviewing for my first professional position at a well-respected library that treated me terribly during my interview. In addition to many small things, they made me prepare and give THREE separate presentations during my day and made me choose the restaurants at which we had lunch and dinner (which felt very much a test of my coolness or interest in the cuisines of other cultures). During dinner, they pretty much just talked amongst themselves and didn’t include me in the conversation or ask me anything about myself. Years later, when I had built a positive reputation in the profession, someone there suggested I apply for a job they had open. Based on my experience, there is NO WAY I WOULD EVER BE WILLING TO WORK THERE. I even know and like people who work there now, and yet I still think of the place as a toxic snake pit based on my experience 11 years ago. First impressions matter, even when it might seem like the person you’re interviewing really doesn’t matter.
- If you’re an experienced librarian (and I don’t mean a decade, even a little experience is great!) mentor or micro-mentor a new librarian. The support and encouragement I got from a few librarians towards the end of my job hunt saved my bacon. One person basically tore apart my cover letter and resume and helped me rebuild them so they didn’t suck and played up the value of my previous experience as a social worker. Suddenly I was getting second interviews at just the sorts of places I’d hoped to work. Soon I had my first professional job. But it was more than just the feedback on my resume and cover letters that helped. That successful people in the field believed in me and were willing to help me (even in small ways) was hugely encouraging. Why would they want to help me unless they saw something worthwhile in me? As someone who has served as a mentor, I can tell you that it doesn’t take a lot to be one. You just have to care about people and maybe have a little more experience (in some areas, not all) than the person you’re mentoring.
- Develop programs locally that support new librarians (because not everyone can afford to attend ALA). Does your local, state, or regional library association have a resume review program or an early-career librarian mentoring program? If not, maybe it’s worth building one. When I asked at my first Oregon Library Association Conference if there was a mentoring program for new librarians, I quickly found myself swept up into the Membership Committee and creating a mentoring program from scratch with another interested librarian. Our program matched its first pair of mentors in May 2013 and has matched around 60 mentoring pairs since. It is one of the things I’m most proud of in my career. In collaboration with the head of the OLA New Member Round Table, we’re expanding the program to offer a Resume and Cover Letter Review program to meet the needs of OLA members who just need short-term mentoring focused on the job hunt. ALA/NMRT offers a great online resume review service, but I really like the idea of having people interact with folks who know the local library scene (since most people in Oregon seem to want to stay here). While I’m a big believer in informal mentoring, there are so many people who don’t have the political or social capital to find a mentor themselves, and I want to make sure those folks have just as much of an opportunity for mentoring support as I did way back when.
- If you have an LIS intern, you really should focus on making the internship a good learning and growth experience for the student. I had an archives practicum where I was basically given boxes of (really freaking boring) university records and told to process the collection and create finding aids. I didn’t get any support on how to do it and was pretty much left to my own devices the entire time. The experience convinced me that I didn’t want to be an archivist, but, for all I know, I might have loved the work in a less sucky setting. Supervising an intern is about more than just giving them work to do. It’s about teaching them about the setting you work in, giving them meaningful experiences and interactions, and mentoring them as they learn the role they’re in. Erin at Constructive Summer suggests that we pay our LIS interns, but that isn’t always possible. The least we can do, though, is make it an amazing learning and networking experience for them.
- If you work in a library school, you should constantly remind yourself that your goal is to help your students get jobs. You should make sure your curriculum is helping students develop the skills and real-life knowledge that will help them be successful in the field. Keep what you’re teaching up-to-date and focused on real-life problem-based learning. If students do the sorts of extra things Kyle did, you should encourage them and give them reasonable work extensions. In a professional program, what will make a student marketable really should take precedence. As an LIS instructor, I can’t imagine penalizing or not offering flexibility to a student who is presenting at a conference!
- In the big picture, we should advocate to decrease the number of people going into LIS programs. It’s obvious that there are way more people graduating with the degree than there are jobs, even when you consider positions outside of libraries in which the MLS is a valuable credential. Instead of discouraging potential graduate students, we should find ways to push programs that do not have exceptionally strong placement stats into decreasing the number of students they accept. ALA isn’t going to do this work because it goes against their interests, so it’s deeply problematic that accreditation of LIS programs happens through ALA. Maybe that needs to change. I don’t have answers to all this, but I know what we have here is deeply problematic.
Individually we probably can’t change these big systemic problems, but anyone can help individual librarians and I’m proof that little things people do to help do matter. It doesn’t take much to be kind, share our experiences, be encouraging, and be helpful to a new librarian. That the first job hunt is demoralizing and painful shouldn’t be seen as a normal rite-of-passage for librarians. We can make things better.
And Kyle, where you are now sucks epically, but things usually do get better. And, given what I’ve seen from you on Twitter, you are absolutely meant to be a professional librarian.
This is not a faithful recording of the CanUX conference from Nov. 7-8, 2015, but rather the things that I most wanted to remember for further action or reflection. Two presentations in particular really resonated with me. Happily, they were the two speakers I was most looking forward to.
Shelley Bernstein is the Manager of Information Systems at the Brooklyn Museum and talked about the visitor experience to both the online and physical museum space. She explained that the museum had changed its mission statement to be more visitor-friendly and committed to serving their immediate community.
She mentioned that in the mid-2000s (pre-smartphone) they noticed that people wanted to take pictures of the art so they removed their “no photography” policy. This reminded me of libraries trying so hard to enforce so many rules. What are people doing in our spaces? Are there very good reasons to prohibit this behaviour? If not, why not let people do what they’re already doing without feeling like they’re breaking rules?
She talked about an exhibit they created called “Click!” where people submitted their own photographs on the theme “Changing Faces of Brooklyn” and these photographs were then evaluated by anyone who visited the online forum. Bernstein noted that they didn’t have a “like” button but had people rate the photographs on a sliding scale (which takes more thought), and you couldn’t skip photographs but had to go through one by one. People still evaluated most of the 300+ photographs. Her comment on this was “The harder you make it, the deeper the engagement.”
I find this fascinating. Obviously, context is everything here. We try our best to make it easy to use our library systems because they tend to be needlessly complicated. We want to get our users to the content they want as quickly as possible so that they can engage deeply with that content. But are there occasions where it would make sense to actually make things a little more difficult, to slow people down a bit? The obvious answer would be where people are engaging with our digitized collections. But are there others? Would it ever make sense to slow down the research process itself? Not to needlessly complicate it, but to consciously add decision points or interactions beyond click-to-content?
Bernstein went on to talk about efforts to improve engagement within the walls of the museum. They put more staff on the floor, wearing (in her words) “hideous vests” to identify them. Visitors LOVED this, asking lots of questions. However (and this should sound very familiar), this solution simply could not scale across the museum’s many galleries over five floors. Visitors would not always be able to find a staff member when they had a question about the work they were looking at. So the museum bought a bunch of iPhones and had them available for visitors to use. They created an app, Ask Brooklyn Museum, that visitors could use to ask museum staff questions. They installed iBeacons around the museum to show staff where people were and what exhibits were nearby in order to provide proper context for their answers. Another great aspect of this is that museum staff now have a huge amount of data about the questions people are asking. They can use this information to make decisions about placement of signage, curatorial notes, etc. That’s really a side benefit though; the main positive aspect is that museum staff now have a way into visitors’ conversations and can use that opportunity to provide a richer experience. A question about the dim lighting around an exhibit provides an opportunity to talk about preservation, for example. Awesome! Oh, and one other takeaway: visitors were as happy to ask questions through the app as they were to ask people on the floor; they still felt the personal touch of a real person responding to their question in real time.
So this made me think a lot about libraries and reference. It’s a different environment for sure. The people in our spaces are not engaging with content that we have created and/or understand deeply; for the most part we don’t interpret content for our users. However, there may be ways we can increase the personal touch of our reference services without having to put our staff all over the library in hideous vests.
Leisa Reichelt was the other speaker whom I found pretty amazing. She was the Head of User Research for the Government Digital Service (known for their great work on GOV.UK) and is now doing similar work for the Australian government. She started off talking about how a lot of organizations – even GOV.UK – talk the talk about being user-focused but often still rely on analytics and “thinking about users” or “thinking like users” rather than actually doing the work of talking directly to (and testing with) users themselves.
She had some examples that were perhaps more relevant to people working in a project-based environment, but still interesting:
- Have a rule that a researcher has to be embedded in a team at least 3 days per week (so teams can’t share a single researcher).
- User researchers should spend about 30% of their time on research (learning about users) and 70% making sure their team knows about and understands that research (helping their team learn about users).
- If you’ve got hard problems, you need more researchers. (Leisa mentioned a project she was on that had more researchers than developers for a while.)
- For project budgeting, budget for 5 people in a lab doing usability testing every two weeks. (This will be a placeholder in the budget; if you hire a smart researcher they will then have the budget to do something better. If you hire a researcher who’s not so smart, at the very least you get usability testing done.)
- Jared Spool has advice about “user exposure hours” that everyone on a team needs to have; if you haven’t spent x amount of time directly engaged in user testing – or at least watching user testing – then you’re not doing part of your job.
She talked about how a measure of engagement (traffic + time on page) can often mask experiences of confusion and anxiety as people spend more time on a page if they don’t know what to do. I know I look for very short amounts of time on page for most of our web content.
This may have been my favourite slide of hers: [slide: Leisa Reichelt at CanUX]
She showed a video of a user struggling mightily with a drop-down box and reminded us that just because certain interface elements are ubiquitous doesn’t make them easy to use. Test test test.
She spoke about the discovery phase of research and the importance of figuring out the real problem we are trying to solve. I took that very much to heart, perhaps because that’s the essence of my next research project – taking a step back and looking at students’ research processes. I will try to keep in mind that I don’t know what problem(s) the library is solving. I will try to banish preconceptions about what we do, or what we try to do, and try to focus on what students do. It was a nice and timely reminder for me.
In talking about her own transition from GOV.UK to the Australian government, Leisa said she will continue to use (steal) the GDS Design Principles, the Digital by Default Service Standard, and Design Patterns since these were based on a lot of research and continue to have relevance. I’ve read them before but will make a point of revisiting them.
Peter Merholz’s presentation (slides) on organizations made me think about the organization of libraries, not so much about the UX work I do specifically:
- All design is service design (in libraries, absolutely everything we do is a service)
- It’s important to do capability assessment, not just in terms of skills but in terms of how people are thinking. (This reminded me of conversations about job descriptions and expecting people to do all the strategy stuff plus the detail work and everything in between; I think we have to decide what level is most important — the 10,000-foot view or the 1-foot view — and focus people’s efforts there. They might do all of it, but their strengths should be at the level the organization needs most. If the organization needs all of it, they have to hire or assign people to cover all four levels. Expecting one person to be the strategy person AND the details person and do both well is a recipe for failure. I think Peter’s talk makes that point even more clear.)
- Something about leverage and power…
Brent Marshall’s talk was really fun, probably because he was talking about the element of play in design, and creating playful experiences. He talked about helping to create Molson’s Canadian Anthem Fridge (which made an appearance at the CanUX after party) and other interactive installations. He said that play creates memories (reminiscent, I think, of what Shelley Bernstein said about deep engagement). While I’m not a big proponent of the gamification of libraries, I did wonder about what we can do to bring a sense of play, or to enhance a sense of wonder, in our library spaces both physical and virtual.
Shannon Lee and Rob Rayson gave a delightful presentation about their award-winning work to build a prosthetic hand for a boy in Ottawa. It was obviously a labour of love for the two engineering students from UofO, and spoke volumes about how hard we can work to get something right when we can tangibly see the good it does and the difference it makes.
Ann Marie Lesage talked about her research into the UX of an office chair. Lesage spoke about the aesthetic experience of the chair, where you become aware of the experience and then get rewarded for being aware (it’s not just a nice chair to sit in, but you notice that you’re enjoying sitting in it, and that makes it more enjoyable). This reminded me of Shelley Bernstein’s comment about deep engagement and Brent Marshall’s talk about play creating memories. Experience plus awareness of the experience can create a better experience.
Jennifer Hogan from Getty Images said some interesting things about watching where our users are hacking our stuff. They may be creating functional prototypes that we can then develop further. It would be interesting to see what our students or faculty have done with our stuff, although I suspect they are more likely to hack other things (see #icanhazpdf).
Steve Hillenius had the most awe-inspiring job title of the conference: he is a UX Manager and Designer at NASA. Yes, NASA. A lot of what he said was quite amazing, but not so applicable to life in the library. (I just go downstairs to recruit users for testing; Steve can’t test directly on the International Space Station so they validate designs during NEEMO simulation missions on the ocean floor. Pretty similar really.) However, a few things stood out for me:
- Only show the possibilities to the user; don’t show them what they can’t use or don’t need to care about
- Seeing what people actually do (hello ethnography!) not only shows us current pain points but can help us see emerging user needs
- With the time lag of space-to-Earth communications (8–48 minutes between Mars and Earth, depending on positioning), it’s important to tell the astronauts how long it will take until someone sees their message and the earliest response time they can count on. We provide generic information about response times, but being more specific about the actual time lag of student-to-library communications would be useful.
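The arithmetic behind that last point can be sketched quickly. This is a toy illustration, not anything from the talk itself: the function name and the `handling` parameter are mine, and the 4–24 minute one-way figures are just half of the 8–48 minute round trip quoted above.

```python
def earliest_reply_minutes(one_way_delay, handling=0):
    """Earliest possible reply: the message travels out, someone reads
    and acts on it, and the reply travels back."""
    return 2 * one_way_delay + handling

# One-way Mars-Earth light delay is roughly 4 to 24 minutes (8-48 for
# the round trip), depending on where the planets are in their orbits.
for delay in (4, 24):
    print(f"{delay} min one-way -> earliest reply in {earliest_reply_minutes(delay)} min")
```

The same shape applies to student-to-library communications: the earliest reply is the delivery delay both ways plus however long it takes a human to pick up the message.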
Cennydd Bowles gave a talk on how he sees the UX industry changing in the next 5 years or so. I’m not the biggest fan of “future” talks but he raised some interesting points. He said that he doesn’t see another OS rivaling iOS and Android, but made the point that these systems are trying harder to keep their users away from the Web. I hadn’t thought of Siri actively discouraging people from interacting with the Web, but it’s true. I would be interested to know if students or faculty try to use Siri (or Cortana or Google Now) to access library content. Cennydd also talked about an increased role of motion and sound in interfaces, though his examples (beyond Final Fantasy) were largely about branding and not function.
Boon Sheridan’s talk was a highly entertaining account of his process of examining what he knows, what he may no longer know, what he needs to rethink, and so on. He talked about how best practices can change over time, but also how opinion often masquerades as best practice, which led to this fabulous slide:
[Photo: Boon Sheridan at CanUX]
His talk started with a really great story about a deaf cat that I won’t be able to do justice to here. The moral was that sometimes the new way of doing things is expensive and complicated and no better than the old way of doing things, but my notes summed it up like so:
Sometimes you just need to clap behind the cat.
Derek Featherstone spoke about designing for context and how our content and layouts can change with the context of time and/or location to provide better UX. He recommended designing to provide users the right content in the right context.
Carine Lallemand summarized a range of current research in HCI and UX, challenging us to change our methods to reflect it. One thing that stood out for me was her point about UX happening over time: there is anticipated UX before the interaction, then episodic UX as people reflect on the interaction afterward, and cumulative UX when people recall many interactions. She said that “the memory of the experience can matter more than the experience itself.” Have a look at that again: “the memory of the experience can matter more than the experience itself.” This seems so wrong to me. We do user testing and not user interviews because what people say they do is not what they actually do. What they do is more important than what they say, right? But how they remember an experience will be a good predictor of whether they seek to repeat it.
Maybe yet again this ties back to “the harder you make it, the deeper the engagement.” Maybe, if you want to provide a great experience it really is important to go beyond what people do. But again, in a library context not everything has to be a great experience. Renewing a book doesn’t have to be the highlight of someone’s day. No one has to be deeply engaged when they’re booking a group study room. But I’d love to start thinking about where we can create and enable playful experiences in the library, where we can encourage deep engagement, the aesthetic UX that slows people down and provides a great memory.
Thanks to CanUX for once again providing great food for thought.
I had the pleasure of being on Circulating Ideas with Steve Thomas. We talked about a bunch of things including open textbooks, accessibility, alternate formats, and being a systems librarian. He’s a great host and an interesting person to chat with. The interview went up last week.
Without a transcript a podcast isn’t accessible to Deaf and some Hard of Hearing people. It felt strange to be talking about accessibility and universal design and have it be in an audio-only format. So I decided to produce a transcript.
I heard the folks from Pop Up Archive present at code4lib in Portland. Pop Up Archive makes sound searchable using speech-to-text technology. Their clients are mostly public radio broadcasters who are looking to make their sound archives searchable. I remember thinking at code4lib that this could be an interesting tool to help make politics more accessible and transparent. For example, transcripts could be made available fairly quickly after a municipal (or provincial or federal) committee met. The transcript is almost a byproduct of this process.
I was curious how it could be used to produce a transcript, how accurate the machine transcript would be, and how long it would take me to clean up. First, you upload the sound file. Next, you can add metadata about the file you uploaded. Then Pop Up Archive processes your sound file; processing takes about as long as the audio itself, 39 minutes in my case. The machine transcript was about 80% accurate. Finally, you can edit the machine transcript on their platform. It took me about 2 hours to clean up a 39-minute interview.
I like the interface. It was intuitive, and once I’d learned the keyboard shortcuts I was able to clean up the file more quickly. On my work monitor I couldn’t see the highlighting of the line being played, but it’s much clearer on my laptop. I would’ve appreciated a global find-and-replace feature. It’s possible to export in various formats: audio file, text without timestamps, text with timestamps, SRT (captions), XML (W3C transcript), or JSON. I grabbed the text with timestamps and plopped it into Word to use spellcheck to catch misspelt words. Steve spent another hour editing it to make it easier to read (I say “like” and “so…” quite a bit) and formatting it so it’s clear who was saying what. He also added links, which took another 30 minutes.
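If your tool only gives you text with timestamps, you can get to SRT captions yourself. Here’s a minimal sketch; the bracketed `[HH:MM:SS]` input format and the helper names are my assumptions for illustration, not Pop Up Archive’s actual export layout. Each line’s timestamp becomes the cue start, and the next line’s timestamp becomes its end.

```python
import re

# Assumed input format: each line starts with a [HH:MM:SS] timestamp,
# e.g. "[00:01:23] So today we're talking about open textbooks."
STAMP = re.compile(r"\[(\d{2}):(\d{2}):(\d{2})\]\s*(.*)")

def srt_time(seconds):
    """Format a second count as the SRT timestamp HH:MM:SS,mmm."""
    h, rest = divmod(seconds, 3600)
    m, s = divmod(rest, 60)
    return f"{h:02d}:{m:02d}:{s:02d},000"

def to_srt(lines, last_cue_seconds=5):
    """Convert timestamped transcript lines into numbered SRT cues."""
    cues = []
    for line in lines:
        match = STAMP.match(line)
        if match:
            h, m, s, text = match.groups()
            cues.append((int(h) * 3600 + int(m) * 60 + int(s), text))
    blocks = []
    for i, (start, text) in enumerate(cues):
        # A cue ends where the next one starts; the final cue gets a
        # fixed tail since we have no closing timestamp for it.
        end = cues[i + 1][0] if i + 1 < len(cues) else start + last_cue_seconds
        blocks.append(f"{i + 1}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")
    return "\n".join(blocks)
```

With timestamps only at line granularity the cue timing is coarse, but it’s enough to get searchable, roughly synced captions from a cleaned-up transcript.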
I’m sure there’s a more efficient workflow, but I was really impressed with the machine transcript that Pop Up Archive generated. According to this company, it takes a professional transcriber 1 hour to transcribe 15 minutes of clearly recorded audio, and then additional time to proofread.
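A quick back-of-envelope comparison using the numbers above (a sketch only; it assumes hands-on editing time is the fair thing to compare, since the machine processing run is unattended):

```python
AUDIO_MINUTES = 39  # length of the interview

# Professional transcription: ~1 hour per 15 minutes of clear audio,
# before proofreading (the figure quoted above).
professional_hours = AUDIO_MINUTES / 15

# Machine-assisted workflow from this post: ~2 hours of hands-on cleanup
# of the ~80%-accurate machine transcript.
hands_on_hours = 2.0

print(f"Professional: ~{professional_hours:.1f} hours, plus proofreading")
print(f"Machine-assisted: ~{hands_on_hours:.1f} hours hands-on")
```

The raw hours come out close, but the machine-assisted cleanup is lighter work than transcribing from scratch, and the gap should widen as the speech-to-text accuracy improves.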
With improvements in speech-to-text technology and machine transcripts I think tools like this can make it easier for podcasters to produce transcripts. I can also see this being used (along with human editors) as a faster way to produce transcripts for audio and video as part of a disability accommodation in education.
Here’s the final transcript.