Journal of Web Librarianship: Seeing Library Data: A Prototype Data Visualization Application for Librarians
DuraSpace News: VIVO Updates Nov 21–New Member Stanford U, Theming task force, Conference Web Site–Workshop
From Mike Conlon, VIVO project director
Stanford University becomes a member. Stanford University has joined DuraSpace as a member supporting VIVO. Stanford is very active in the Fedora project and in Linked Data for Libraries.
PLA’s service awards and grants highlight the best in public library service and honor those bringing innovation, creativity, and dedication to public libraries. The deadline to apply for PLA 2017 Service Awards and Grants is December 5, 2016 at 11:59 PM Central.
The Baker & Taylor Entertainment Audio Music / Video Product Award is designed to provide a public library the opportunity to build or expand a collection of either or both formats in whatever proportion the library chooses. The grant consists of $2,500 of Audio Music or Video Products. Sponsored by Baker & Taylor.
The John Iliff Award honors the life and accomplishments of John Iliff, early adopter and champion of technology in public libraries, and recognizes the contributions of a library worker, librarian, or library that has used technology and innovative thinking as a tool to improve services to public library users. The award provides a $1,000 honorarium, a plaque and a bouquet of roses for the workplace. Sponsored by Innovative.
Nominate yourself, a colleague, or your library today!
For more information, or to submit an application or nomination, please visit
If you ask Greg Arnette whether the cloud is more secure than on-premises infrastructure, he’ll say “absolutely yes.” Arnette is CTO of cloud archive provider Sonian, which is hosted mostly in AWS’s cloud. The public cloud excels in two critical security areas, Arnette contends: information resiliency and privacy. But even if the cloud provider's infrastructure were completely secure, using the cloud does not free the user from all responsibility for security. In Lurking Malice in the Cloud: Understanding and Detecting Cloud Repository as a Malicious Service, a team from Georgia Tech, Indiana University Bloomington, and UCSB reports on the alarming results of a survey of the use of cloud services to store malware components. Many of the malware stashes they found were hosted in cloud storage rented by legitimate companies, presumably the result of inadequate attention to security details by those companies. Below the fold, some details and comments.
The team discovered:
694 malicious or compromised repositories, involving millions of files, ... These buckets are hosted by the most reputable cloud service providers. For example, 13.7% of Amazon S3 repositories and 5.5% of Google repositories that we inspected turned out to be either compromised or completely malicious. Among those compromised are popular cloud repositories such as Groupon’s official bucket. Altogether, 472 such legitimate repositories were considered to be contaminated, ... infecting 1,306 legitimate websites, including Alexa top 300 sites like groupon.com, Alexa top 5,000 sites like space.com, etc.
The details are in Section 4.2 of the paper. Briefly, many of the compromised repositories had:
a misconfiguration flaw ... which allows arbitrary content to be uploaded and existing data to be modified without proper authorization.
Because the legitimate renters of the bucket had not been sufficiently careful to fully define the bucket's access policy:
by default, ... the cloud only checks whether the authorization key (i.e., access key and secret key) belongs to an S3 user, not the authorized party for this specific bucket: in other words, anyone, as long as she is a legitimate user of the S3, has the right to upload/modify, delete and list the resources in the bucket and download the content.
This problem has been exploited for a long time:
Groupon’s official bucket, was apparently compromised five times between 2012 and 2015 ... according to the changes to the bucket we observed from the bucket historical dataset we collected from archive.org.
Because, like other cloud providers, Amazon’s S3 charges for storage, requests and bytes transferred out, the legitimate renters of the compromised buckets were paying much of the cost of the malware attacks.
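From the renter’s side, this kind of over-broad policy can at least be audited. A minimal sketch, assuming the ACL structure that AWS APIs return (e.g. via boto3’s get_bucket_acl); the example ACL itself is invented:

```python
# Flag grants that give *any* AWS account (or the whole internet)
# access to a bucket -- the misconfiguration described above.
# The dict shape mirrors what boto3's get_bucket_acl returns.
BROAD_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    "http://acs.amazonaws.com/groups/global/AllUsers",
}

def broad_grants(acl):
    """Return (permission, group URI) pairs granted to catch-all groups."""
    hits = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in BROAD_GROUPS:
            hits.append((grant["Permission"], grantee["URI"]))
    return hits

# Invented example: a bucket whose owner kept FULL_CONTROL but also
# (carelessly) granted WRITE to every authenticated AWS user.
example_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"},
         "Permission": "WRITE"},
    ]
}
print(broad_grants(example_acl))
# The WRITE grant to AuthenticatedUsers is exactly the
# "anyone with an AWS key can upload/modify" hole.
```

A WRITE or FULL_CONTROL hit for either group is the "any S3 user can upload/modify" condition the paper describes.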
Paul Kunert at The Register reports on Canalys estimates of the cloud services market:
Amazon’s cloud subsidiary turned over $3.23bn, up 55 per cent growth on the year ago period, and held 32 per cent market share.
Microsoft Azure hauled in $1.736bn, up 116 per cent, giving it a 17 per cent share of the spoils; Google Cloud hauled in $764m, up 80 per cent year on year, giving it an eight per cent share.
Over at Big Blue, IBM Software sold $654m worth of services, up 51 per cent, and this handed it a seven per cent slice of total market sales, while Chinese outfit Alibaba sold $221m, up 128 per cent and taking a market share of two per cent.
Much like in the world of on-premise tech, the [five] cloud giants keep getting bigger and bigger, equating to 66 per cent of all money splashed on IaaS and PaaS.
These five giant services are thus very high-value targets. Finding a weakness, like the propensity for careless setting of access policies described above, gives access to huge resources for spreading malware, and is thus extremely valuable.
As the paper points out, the cloud providers:
are bound by their privacy commitments and ethical concerns, they tend to avoid inspecting the content of their customers’ repositories in the absence of proper consent. Even when the providers are willing to do so, determining whether a repository involves malicious content is by no means trivial: nuts and bolts for malicious activities could appear perfectly innocent before they are assembled into an attack machine; ... even for the repository confirmed to serve malicious content like malware, today’s cloud providers tend to only remove that specific content, instead of terminating the whole account, to avoid collateral damage (e.g., compromised legitimate repositories).
Thus the compromise is unlikely to be rapidly detected and, even if detected, only treated symptomatically. This research should give advocates of cloud-based preservation plenty to think about.
Islandora: Dispatches from the User List: Islandora Managed Access, OCR from PDF Books, and Multiple Batch Loads at Once
Time again to highlight some great conversations from our listserv, with tips and tricks that you might want for your Islandora. Next up, Pat Dunlavey from Common Media is looking for the best way to extract OCR text from PDF-based books, so as to maintain the IA BookReader's text search feature while using the embedded OCR text already present in the PDF. Giancarlo Birello from CNR-Ceris has a solution:
I manage PDF/A (scanned PDF + OCR text) as books, and I use these steps:
- pdftk + ImageMagick to generate TIFFs, 1 file per page
- the docsplit utility to extract text from the PDF pages, 1 file per page
- prepare the directory structure needed by book batch ingesting (1 directory per page with OBJ.tif, OCR.txt, DC.xml, ...)
- batch ingest (see the Islandora book ingest module)
While OCR.txt is indexed by Solr and used by the simple and advanced search blocks, the IA BookReader uses the HOCR datastream, which at the moment is generated by tesseract during derivative generation at ingest. I searched but didn't find any way to generate HOCR from PDF/A directly,
so I have full-text search based on the OCR datastream while the IA BookReader search is based on the HOCR datastream; at the moment this is OK for me.
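The directory layout in Giancarlo's third step can be sketched like this (a hypothetical Python version of the one-directory-per-page convention he describes; paths, page count, and placeholder files are illustrative):

```python
# Lay out one directory per page, as book batch ingest expects:
# <book>/<page>/OBJ.tif, <book>/<page>/OCR.txt, ...
from pathlib import Path
import tempfile

def prepare_book_dirs(book_root, n_pages):
    """Create a numbered directory per page with placeholder datastream files."""
    book_root = Path(book_root)
    for page in range(1, n_pages + 1):
        page_dir = book_root / str(page)
        page_dir.mkdir(parents=True, exist_ok=True)
        # In the real workflow, OBJ.tif comes from pdftk + ImageMagick
        # and OCR.txt from docsplit; here they are empty placeholders.
        (page_dir / "OBJ.tif").touch()
        (page_dir / "OCR.txt").touch()
    return sorted(p.name for p in book_root.iterdir())

book = Path(tempfile.mkdtemp()) / "mybook"
print(prepare_book_dirs(book, 3))   # ['1', '2', '3']
```

For books with ten or more pages, the page directories would need zero-padding to keep lexicographic order matching page order.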
And finally, a stop on the islandora-dev listserv, with an update to a question from back in April, when SFU's Mark Jordan asked:
Has anybody tried running multiple (say 2 or 3) Islandora Batch loads via drush at one time? Or would that be a Dumb Thing To Do? Would love to hear if anyone has any experience.
Back then, UNCC's Brad Spry noted that it could cause ingest failures, but that there was a possible solution he would explore. Last week, he updated the community with his work and some promising prospects:
I've been working to implement a cool server-side book batch pre-processing workflow this week, so I've been working on our nifty ingest scripts.
I ran into the "collision" issue I wrote about previously... After wrestling with it for days, my current theory is issues can be caused by multiple simultaneous or near-simultaneous execution(s) of islandora_batch_scan_preprocess.
I have one error documented so far:<ASSERT>Datastream must have a datastream id. (foxml:datastream: value of ID is missing)</ASSERT>
The cause of that error is still eluding me; I've even been disassembling BLOBs created by islandora_batch_scan_preprocess in search of answers :-)
I had to keep moving forward though, so I implemented a locking mechanism and precision set ingest. All of my ingest-ready objects and related directories pass through here:
batch_set_id=$(/usr/local/bin/drush -c /usr/local/drush/drushrc.php -v --user=user --uri=https://server islandora_batch_scan_preprocess --namespace=$1 --content_models=$2 --parent=$3 --parent_relationship_pred=isMemberOfCollection --type=directory --target=$4 2>&1 | sed -E '/^SetId:/! d; s/^SetId: ([0-9]+).*/\1/')
# ready_for_ingest
/usr/local/bin/drush -c /usr/local/drush/drushrc.php vset islandora_bagit_create_on_modify '0'
/usr/local/bin/drush -c /usr/local/drush/drushrc.php -v --user=user --uri=https://server islandora_batch_ingest --ingest_set=$batch_set_id >> /mnt/islandora-loadingdock/ingest_log/ingest.log
/usr/local/bin/drush -c /usr/local/drush/drushrc.php vset islandora_bagit_create_on_modify '1'
After implementing the locking mechanism and precision set ingest, I've seen no "collisions". My testbed has been 2-3 books, audio, and images all trying to ingest simultaneously. I no longer allow them to fight each other; objects now form a single filed line.
I intend to keep pushing it and see how it holds up!
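Brad doesn't show the locking mechanism itself. One minimal way to make concurrent ingests "form a single filed line" is an exclusive file lock held around each drush run; this sketch is my own, and the lock path and wrapped command are hypothetical:

```python
# Serialize batch ingests: whoever holds the lock runs; everyone
# else blocks until it is released, so runs cannot collide.
import fcntl
import subprocess

LOCKFILE = "/tmp/islandora_ingest.lock"   # hypothetical path

def ingest_exclusively(cmd):
    """Run cmd (e.g. a drush islandora_batch_ingest call) under an exclusive lock."""
    with open(LOCKFILE, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # queue behind any running ingest
        try:
            return subprocess.run(cmd, check=True)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

# e.g.:
# ingest_exclusively(["drush", "-c", "/usr/local/drush/drushrc.php",
#                     "islandora_batch_ingest", "--ingest_set=42"])
```

The same effect can be had in plain shell with flock(1) wrapping the drush pipeline.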
The Open Government Data from around the world session is back at the OGP summit, this time with a twist! Come and join as active participants and share open data updates from your country on Thursday, December 8th at 12 pm!
What is Open Government Data from around the world session?
In this one-hour session, we are trying to connect the open data community and to get as many updates as we can from all over the world. It is a rapid session, where each participant can speak for 2 minutes and give a quick update about their country's status. This year, celebrating 5 years of OGP, we will also ask you to share the good, the bad and the ugly of OGP in your country.
There is no session without your participation, so we encourage you to sign up and take part in it! There is no right or wrong, just a time limit, and you must have an update about a country (i.e., a geographical place). Government officials, CSOs and others are welcome to present! We can potentially host up to 60 different speakers!
Why should I come to this session?
- Learn about other initiatives in the world in one hour!
- It is fun and informal
- Great place to network
- Good place to get your OGD initiative known
What will come out of this session?
Daniel and Mor will tweet and use Facebook Live during the event, and will summarise it for you in a blog post, so we can keep collaborating after the OGP summit.
So how can I participate?
Learn more about this session in this doc – https://docs.google.com/document/d/1rlE–j9lNhyEHSUcYTdSL4Kit-sUuvlCdQxOugCevlg/edit#heading=h.jstk65wkq7f0
If you have more questions, just reach out to us – Daniel Dietrich – firstname.lastname@example.org and Mor Rubinstein – email@example.com
Today I happened to come across three very good articles which to me all seemed to form a theme: Ethical and political considerations of information and information technology.
Consider contexts and who is driving the data: The problem of people who are not from the affected communities making decisions for those who are is very prevalent in our field, and the work around data is no exception. Who created the data? Was the right mix of people involved? Who interpreted the data? The rallying cry among marginalized communities is “Stop talking about us without us,” and this applies to data collection and interpretation.
I think there are deeper things to be said about ‘weaponized data’ too, things that have been rattling around in my brain for a while; this essay is a useful contribution to the mix.
For more on measurement and data as a form of power and social control, and not an ‘objective’ or ‘neutral’ thing at all, see James C. Scott’s Seeing Like a State, and the works of Michel Foucault.
Second, from Business Insider, Programmers are having a huge discussion about the unethical and illegal things they’ve been asked to do by Julie Bort.
I’m not sure I buy the conclusion that “what developers really need is an organization that governs and regulates their profession like other industries have” — professional licensure for developers, so you can’t pay someone to write a program unless they are licensed? I don’t think that’s going to work, and it’s kind of the opposite of the democratization of making software that I think is actually important.
But requiring pretty much any IT program anywhere to include 3 credits of ethics would be a good start, and is something academic credentialing organizations can easily do.
“We rule the world,” he said. “We don’t know it yet. Other people believe they rule the world but they write down the rules and they hand them to us. And then we write the rules that go into the machines that execute everything that happens.”
I don’t think that means we “rule the world”. It means we’re tools. But increasingly important and powerful ones. Be careful whose rule you are complicit with.
Thirdly and lastly but not leastly, a presentation by Tara Robertson, Not all information wants to be free. (Thanks for the link Sean Hannan via facebook).
I can’t really find a pull quote to summarize this one, but it’s a really incredible lecture you should go and read. Several case studies in how ‘freeing information’ can cause harm, to privacy, safety, cultural autonomy, and dignity.
This is not a topic I’ve spent a lot of time thinking about, and Robertson provides a very good entry to it.
The original phrase “information wants to be free” was not of course meant to say that people wanted information to be free. Quite the opposite, it was that many people, especially people in positions of power did not want information to be free — but it is very difficult to keep information under wraps, it tends toward being free anyway.
But yes, especially people in positions of power — the hacker assumption was that the digital-era acceleration of information’s tendency toward unrestricted distribution would be a net gain for freedom and popular power. Sort of the “wikileaks thesis”, eh? I think the past 20 years have definitely dashed the hacker-hippy techno-utopianism of Stewart Brand and Mondo 2000 in a dystopian world of state panopticon, corporate data mining (see the first essay on data as a form of power, eh?), information-overload distraction and information-bubble ignorance.
Information may want to be free, but the powerful aren’t the only ones that are harmed when it becomes so.
Still, while it perhaps makes sense for a librarian’s conference closing lecture, I can’t fully get behind Robertson’s conclusion:
I’d like to ask you to listen to the voices of the people in communities whose materials are in the collections that we care for. I’d also like to invite you to speak up where and when you can. As a profession we need to travel the last mile to build relationships with communities and listen to what they think is appropriate access, and then build systems that respect that.
Yes, and no. “Community’s” ideas of “appropriate access” can be stifling and repressive too, as the geeks and queers and weirdos who grew up to be hackers and librarians know well too. Just because “freeing” information can do and has done real harm to the vulnerable, it doesn’t mean the more familiar story of censorship as a form of political control by the powerful isn’t also often true.
In the end, all three of these essays I encountered today, capped off by Robertson’s powerful one, remind us that information is power, and, like all power, its formation, expression, and use are never neutral; it has real consequences, for good and ill, intended and unintended. Those who work with information need to think seriously about their ethical responsibilities with regard to the power they wield.
Filed under: General
In March 2015, Michigan Publishing was awarded a grant from the Andrew W. Mellon Foundation for a project entitled “Building a Hosted Platform for Managing Monographic Source Materials.” In a nutshell, Fulcrum, as the platform is now called, is about building an online platform using the Hydra/Fedora framework to publish media-rich scholarship. The core team consists of a project lead, project manager, data librarian, UI/UX specialist and three developers. Below is one of our stories, boldly told through the lens of the project manager. No developers were seriously harmed in the writing of this post.
Open Knowledge Foundation: What’s next for the open data community in Latin America: How to take Abrelatam 2016 discussion forward
Work that strives to make a lasting social impact is a lot like a marathon. You need to learn how to cover long distances in spite of the exhaustion and how to manage your energy; you need to know when to go faster or slower, etc. But in order to get to the finish line, you need some sort of company. This is the case with the work undertaken to make Latin America a more open region. Fortunately, we have the right tool for this: Abrelatam|Condatos, the regional open data meeting, which gathers activists, journalists, public servants, designers and anyone interested in and working on the use of open data and the change it can generate in the region.
This was the event’s fourth edition, which took place in the beautiful city of Bogotá. I was happy to see that the subjects we talked about have evolved since past editions, and we moved from “How do we open data?” to talking about gender and (the lack of) data, entrepreneurship based on open data, business models and standards, and much, much more!
This doesn’t mean everything is perfect in Latin America. In fact, as the discourse evolves, it is vital that we can name and know what to do when our governments use openness as a screen without generating deep changes or promoting accountability or entrepreneurship inside and outside of government.
These challenges we identified were put on paper as eight specific points that will help us understand where we, as a region, want to go in the coming years.
- Create awareness among public servants and civil society about how to make use of data and the regulations around it.
- Build capacities for structuring, transforming, capturing, liberating and using data
- Have relevant data that responds to the needs of society and governments
- Have data with local impact, based on context and sub national needs
- Have data for social change and inclusion, which will help fulfill the SDGs, human rights and the principles of open government.
- Promote entrepreneurship around data through financing and incentives that seek more sustainable business models
- Develop communities for knowledge exchange, experiences and good practices from journalism, startups, academia, private sector, CSOs and other initiatives
- Create strategies for opening, liberating and using data with clear regulations, that grow success cases and care about privacy.
In addition to these challenges, I’d like to discuss three other subjects I identified during a bunch of hall chats with members of the community.
We need to balance the discussion before the conference
Every year we have new participants in Abrelatam and Condatos. This doesn’t mean they’re new to their field. In spite of this, we still spend a lot of time providing background to new participants. If we immersed everyone in previous conclusions before the conference, we could probably advance the conversations instead of spending time giving context.
We spoke enough, let’s take action
I don’t think that this conference should become a hackathon, but we need to find ways to get our hands dirty. We have attendees from three or four editions of the conference who could set realistic goals with tangible results, and we could work on them in little time, resulting in something useful for the region during the year between each event.
Let’s create space for new people
As much as I love meeting people I’ve known for years and being able to continue conversations that we left pending in previous years, it’s important to have new voices in this discussion. The selection committee, which also awarded the travel support grants, did a great job at this, but we, as individuals, need to start identifying and bringing new people into these conversations, even if that means we stay home for the next Abrelatam.
There are many other points to talk about, but I’d like to know what other people think. If you attended Abrelatam 2016, don’t forget to continue talking about it (like commenting on this post, speaking in the Telegram group or writing a response post to this one) and don’t forget to include the hashtag #retos2017, since these conversations will help us shape the agenda for the coming months and years.
P.S. I’d like to thank all those who, during these four years, have been part of hallway chats, with whom I have learned and shared. I’d also like to give public recognition to the great Fabrizio Scrollini, an always inspiring person, for all he has done and keeps doing to create the great community we have.
What do you prefer: to click a link and have it open in a new tab, or for it to open in the same page? Is there a best practice? Sarah Dauterive
There is. The best practice is to leave the default link behavior alone.
Usually, this means that the link on a website will open in that same window or tab. Ideas about what links should do are taken for granted, and “best practices” that favor links opening new windows – well, aren’t exactly.
There are two recurring themes in arguments favoring opening links in new windows:
- we don’t want users to leave the website
- users find new tabs or windows convenient
But these claims don’t tend to be substantiated by much more than a gut feeling. By all means, go with it, but the counterargument relies on the power of convention and smart defaults: subverting those — going against the grain — might involve more complexity, confusion, and cost than you might expect.
Browsers bake in consistency
Best-in-show user experience researchers Nielsen Norman Group write that “links that don’t behave as expected undermine users’ understanding of their own system,” where unexpected external linking is particularly hostile.
See, one of the benefits of the browser itself is that it frees users “from the whims of particular web page or content designers.” For as varied and unique as sites can be, browsers bake in consistency. Consistency is crucial.
Jakob’s Law of the Internet User Experience: users spend most of their time on other websites.
Design conventions are useful. The menu bar isn’t at the top of the website because that’s the most natural place for it; it’s at the top because that is where every other website puts it. The conventions set by the sites where users spend the most time – Facebook, Google, Amazon, Yahoo, and so on – are conventions users expect to be adopted everywhere.
These conventions give users control
[A] user-friendly and effective user interface places users in control of the application they are using. Users need to be able to rely on the consistency of the user interface and know that they won’t be distracted or disrupted during the interaction.
Users … may be search-navigators or link-clickers, but they all have additional mental systems in place that keep them aware of where they are on the site map. That is, if you put the proper markers in place. Without proper beacons to home in on, users will quickly become disoriented.
A link is a promise
This is all to stress the point that violating conventions, such as the default behaviors of web browsers, is a dangerous play. The default behavior of hyperlinks is that they open within the same page.
Kara Pernice — the managing director at Nielsen Norman Group — wrote in December 2014 about the importance of confirming the person’s expectation of what a link is and where the link goes. A link is a promise that if broken endangers the trust and credibility of the brand.
The absolute worst is when some links are target=_blank and others aren’t, all on the same website, usually because of multiple authors and lack of style guidelines. Library websites are especially guilty of this. I want to think about your content, not get confused and irritated by your inconsistent linking behaviour! Ruth Collings
A Comment on Links Should Open in the Same Window
Say you’re reading In the Library with the Lead Pipe, where long-form articles can get pretty long. There are links of interest and further reading peppered throughout the content, and choosing one — especially if you’re ten minutes into reading — that bounced you off the page could definitely be distracting. Sometimes, having a link open in a new tab or window makes sense.
@schoeyfield I don’t disbelieve you, but I do find it difficult to comprehend. If I’m reading something I want to look at the refs later.
— Hugh Rundle (@HughRundle) January 6, 2015
But hijacking default behavior isn’t a light decision. Chris Coyier shows how to use target attributes in hyperlinks to force link behavior, but gives you no less than six reasons why you shouldn’t. Consider this: deciding that such-and-such link should open in a new window ultimately reduces the number of navigation options available to the user.
If a link is just marked up without any frills, like <a href="http://link.com">, users’ assumed behavior of that link is that it will open in the same tab/window, but by right-clicking, using a keyboard command, or a lingering touch on a mobile device, the user can optionally open it in a new window. When you add target=_blank to the mix, those alternate options are mostly unavailable.
Opening reference links in new windows midway through long content is a compelling use case, but it’s worth considering whether the inconvenience of the default link behavior is greater than the interaction cost and the otherwise downward drag on the overall user experience.
Exit intent
What is even more nefarious than poor content strategy is this notion that we don’t want users to leave our website. This feels, … mm — gross. Marketing folks say this sort of thing, those who would use exit intent as an opportunity to convert. It works; there’s no debate. It is the success on which popular WordPress plugins that pop up bullshit are built. There are, however, design strategies wherein the user experience and the sales department run parallel – but “we don’t want users to leave the page” is not that kind of strategy.
Thankfully, data shows that tricks like this are self-defeating — at least when poorly implemented: gross user experiences negatively impact conversion rates and the bottom line.
Accessibility
Newer screen readers alert the user when a link opens a new window, though only after the user clicks on the link. Older screen readers do not alert the user at all. Sighted users can see the new window open, but users with cognitive disabilities may have difficulty interpreting what just happened.
Compatibility with WCAG 2.0 involves an “Understanding Guideline” which suggests that the website should “provide a warning before automatically opening a new window or tab.” Here is the technique. It’s not in wide use.
Exceptions, but opinions vary
Still, it is a good idea to use target=_blank when opening the link would interrupt an ongoing process:
- the user is filling out a form and needs to click on a link to review, say, terms of service
- the user is watching video or listening to audio
The point is that, however trivial-seeming, changing the default link behavior isn’t a light decision. Personal preference, which often informs what appears to be general behavior, isn’t actually general. Unless the interaction cost of opening a link in the same tab is too great, we shouldn’t betray the promises we make.
Thank you so much for reading this far. I recently updated and read-aloud this post in a show I do. You might have seen it at the top. It would be really nice of you if you gave it a try. You can download the MP3 or subscribe to Metric: A UX Podcast on Stitcher, iTunes, YouTube, Soundcloud, Google Music, or just plug our feed straight into your podcatcher of choice.
Let’s chat on Twitter.
I feel a great deal of gratitude towards many people who really have my back.
I’m one of those annoying extroverts who needs to think out loud. I appreciate the generosity that all of these people have extended to me. These people are friends, colleagues, comrades, librarians, sex worker activists, academics, feminists, queers, artists and pornographers. I think it’s important for me to acknowledge all of these people as extended feminist citation practice, but also because without them I wouldn’t have the courage to speak today. I’m standing on the shoulders of these giants:
Dan Goodin writes in New attack reportedly lets 1 modest laptop knock big servers offline that Danish security company TDC has identified "BlackNurse", a relatively low-bandwidth attack that uses ICMP type 3 code 3 packets. TDC reports (PDF) that the attack causes firewall CPU saturation:
BlackNurse is based on ICMP with Type 3 Code 3 packets. We know that when a user has allowed ICMP Type 3 Code 3 to outside interfaces, the BlackNurse attack becomes highly effective even at low bandwidth. Low bandwidth is in this case around 15-18 Mbit/s. This is to achieve the volume of packets needed which is around 40 to 50K packets per second. It does not matter if you have a 1Gbit/s Internet connection. The impact we see on different firewalls is typically high CPU loads. When an attack is ongoing, users from the LAN side will no longer be able to send/receive traffic to/from the Internet. All firewalls we have seen recover when the attack stops.
ICMP type 3 code 3 means "port unreachable"; the attack is effective with "port unreachable" packets but not with "net unreachable" or "host unreachable" ones. Why would handling "net unreachable" and "host unreachable" be cheap but "port unreachable" expensive? According to Johannes Ullrich:
this is likely due to the firewall attempting to perform stateful analysis of these packets. ICMP unreachable packets include as payload the first few bytes of the packet that caused the error. A firewall can use this payload to determine if the error is caused by a legit packet that left the network in the past. This analysis can take significant resources.
Again we see that expensive operations triggered by cheap requests create a vulnerability that requires mitigation. In this case, rate limiting the ICMP type 3 code 3 packets that get checked is perhaps the best that can be done.
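The suggested mitigation can be sketched as a token bucket in front of the expensive stateful check. This is an illustration of the idea, not firewall code, and all numbers are made up; since BlackNurse needs roughly 40-50K packets per second, a budget of a few hundred checks per second starves the attack while leaving headroom for legitimate errors:

```python
# Token-bucket rate limiter: only packets that obtain a token get the
# expensive stateful inspection; the rest are simply dropped.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst   # tokens per second, bucket size
        self.tokens, self.last = float(burst), 0.0

    def allow(self, now):
        """Refill by elapsed time, then spend one token if available."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over budget: drop instead of doing the stateful check

# A burst of 1000 "port unreachable" packets arriving at the same instant:
# only the bucket's burst allowance reaches the expensive inspection.
bucket = TokenBucket(rate=100, burst=10)
passed = sum(bucket.allow(now=0.0) for _ in range(1000))
print(passed)   # 10
```

With a budget of 100 checks per second, an attacker sending 40K packets per second gets 99.75% of their packets dropped before they can burn firewall CPU.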
My library hosts several book groups; last year, I facilitated 10 groups, with members reading everything from graphic novels to Iranian literature, at an average attendance of 7 members per group meeting. I arrange reading groups with an eye to what might appeal to a wide range of patrons, whether groups are led by experts in their fields, librarians, or patron volunteers.
Last year, I conducted a book group survey, and the respondents indicated that the main barriers to attending book groups at our library included the inability to attend at the dates or times of the scheduled meetings, as well as significant geographical distance from the library. I’m always thinking about how tech tools might assist in improving public services, so I decided to try something I hadn’t seen in libraries: an online book group.
The first decision was the reading focus. I chose non-fiction because several survey respondents had requested a non-fiction group, and many of those same respondents were among the geographically distant or scheduling-constrained members I hoped to reach.
The second decision was where to host the book group site. I scoured the web for examples of library online book groups; the few I found operated via Goodreads. Since our library is a public subscription library, we needed to limit participation to our membership. We also needed to hold communication among members to the standards of our library's anti-harassment policy, which meant I would need to be able to block members who violated it. The Electronic Services Librarian, who is also our webmaster, created a page on the library's website for me, and I learned the basics of Drupal to kit it out.
I penned an etiquette and conduct policy to link at the landing page where members would log in to the site or find instructions on how to obtain a login and starter password if they didn’t yet have one. Interested members contacted me and I manually added them to the site’s user list; this was feasible for my library because we have 4500 members, which in San Francisco is a relatively small service community; about 125 members (3%) use library services on a given day, with an average of 70 members attending at least one book group during the month. Depending on the size of your library, you might prefer to run login through your ILS and let any library member sign in using their card number.
My idea about design was that basic is better; a simple UI would foster a focus on the material, so members wouldn’t have to learn how to use too many bells & whistles in order to contribute. Once members logged in, they’d see three tabs: This Month’s Book, Discussion, and Past Book Selections.
- This Month’s Book mirrored the introductory material leaders begin with in an in-person book group: a brief author bio, a bit of background on the book, reviewer quotes, and any other relevant material.
- Discussion encompassed two major categories: Group Info and Book Talk.
- Group Info was where we’d discuss book group “business”, e.g., choosing next books, discussing any in-person meetups, or posting optional reader bios.
- In the Book Talk area of the discussion board, I’d post four or five starter questions each month to get the conversational ball rolling.
- Past Book Selections collected all of the previous This Month’s Book entries as a linked list. In the spirit of an in-person book group, and in service to library privacy standards, i.e., non-retention of patron records, I wanted to keep the discussion portion ephemeral. I didn’t preserve past discussions, clearing everything when a new book was posted. The reading list was the only material that persisted on the site after a discussion month had ended.
I tested the design with librarians who were familiar with discussion forum interaction, as well as some who were not, and used their feedback to tweak the particulars of the site, trying to strike a balance between "too complicated for beginning users" and "not functional enough for experienced users." The launch was publicized in the library's book group brochure, in the monthly newsletter, on our website, and with a special poster for each of the first six books on the reading list. I also hosted two hands-on "introduction to the online book group" tutorial classes.
As you may have intuited from my past-tense verbs, this book group has now folded. In the launch month, 13 members requested login credentials, but many of them never discussed the book in the forum. By the ninth month, when we decided to fold, active discussion had dwindled from 5 members to 1; my policy for librarian-led groups requires an average of at least 4 attendees in months 6-9 to continue past the incubation period. This group discontinued after the September 2016 meeting.
Since then, I’ve been gathering feedback from members who participated in discussion at least once, and have found that book selection and site design matter a lot. Some members found one of the early books too dense, and gave up on the group altogether. Other members said that after the first month, they forgot they’d signed up and the login page was a deterrent because they couldn’t remember their login credentials. I’ve also touched base with a couple of members who signed up but never got around to participating in discussion. A majority of them said that they were confused about how to post, or felt anxious because what they had to say wasn’t “important” enough.
Although this group didn’t resonate with my library’s membership in its first iteration, I think it’s important to reach library members where they are — and where they are may be online. When planning library services, it’s worth remembering this contingent of library patrons: those who are homebound, distant, or have work schedules or life responsibilities that make a midnight book group their ideal time, and the internet their ideal meeting place.
Have you tried anything like this at your library? How did it go? Any tips you’d like to share with librarians who may be interested in starting an online book group for their service communities? Share in the comments!
The Digital Public Library of America (DPLA) seeks a Business Development Director to implement and grow revenue opportunities for the organization. It is expected that the Business Development Director will forge extensive new partnerships and relationships to further expand DPLA’s visibility, impact, and financial resources to pursue its social mission. A passion for that mission of widespread access to the contents of America’s libraries, archives, and museums is essential.
The Business Development Director will be responsible for business strategic planning, client development and relationships, and retention and growth of accounts over time. The Director will also head research initiatives to understand existing and new markets and client needs, and will use quantitative and qualitative methods to identify promising markets to enter and the best approaches to those markets. She or he must also develop and implement a stewardship program aimed at cultivating deeper ties with clients, and monitor and report regularly on the progress of revenue programs.
The Business Development Director will report to the Executive Director, and work with the Executive Director and DPLA’s fiscal manager and accountants on budgeting and revenue projections. The Business Development Director is expected to have an MBA or equivalent business training and experience. Ten-plus years of professional experience is desired, as is experience with implementing large-scale digital programs, such as the ones that DPLA currently has in data management, digital repositories, and ebook delivery. Existing connections to the library, archive, and museum communities are also helpful. The Director must also have strong interpersonal and marketing skills and a record of success building sustainability models.
This position is full-time and ideally based in DPLA’s Boston headquarters. Expressions of interest for remote arrangements will be considered, although priority will be given to those who can arrange to work closely in or with the Boston office.
Like its collection of materials from across the United States, DPLA is strongly committed to diversity in all of its forms. We provide a full set of benefits, including health care, life and disability insurance, and a retirement plan. Starting salary is commensurate with experience.
The Digital Public Library of America strives to contain the full breadth of human expression, from the written word, to works of art and culture, to records of America’s heritage, to the efforts and data of science. Since launching in April 2013, it has aggregated more than 14 million items from over 2,000 institutions. DPLA is a registered 501(c)(3) non-profit.
To apply, send a letter of interest detailing your qualifications, a resume, and a list of three references in a single PDF to firstname.lastname@example.org, with the subject line “Business Development Director.” First preference will be given to applications received by December 1, 2016, and the review will continue until the position is filled.
The reason the media covered Trump so extensively is quite simple: that is what users wanted. And, in a world where media is a commodity, to act as if one has the editorial prerogative to not cover a candidate users want to see is to face that reality squarely, absent the clicks that make the medicine easier to take.
Indeed, this is the same reason fake news flourishes: because users want it. These sites get traffic because users click on their articles and share them, because they confirm what readers already think to be true. Confirmation bias is a hell of a drug — and, as Techcrunch reporter Kim-Mai Cutler so aptly put it on Twitter, it’s a hell of a business model.

No feet on the street

But, as I pointed out in Open Access and Surveillance using this graph (via Yves Smith, base from Carpe Diem), there is another problem. Facebook, Google et al. have greatly increased the demand for "news" while sucking the advertising dollars away from the companies that generated actual news. The result has to be a reduction in the quality of news: the invisible hand of the market ensures that a supply of news-like substances arises from low-cost suppliers to fill the gap.
I am well aware of the problematic aspects of Facebook’s impact; I am particularly worried about the ease with which we sort ourselves into tribes, in part because of the filter bubble effect noted above (that’s one of the reasons Why Twitter Must Be Saved). But the solution is not the reimposition of gatekeepers done in by the Internet; whatever fixes this problem must spring from the power of the Internet, and the fact that each of us, if we choose, has access to more information and sources of truth than ever before, and more ways to reach out and understand and persuade those with whom we disagree. Yes, that is more work than demanding Zuckerberg change what people see, but giving up liberty for laziness never works out well in the end.

It’s hard to disagree, but I think Thompson should acknowledge that the idea that "each of us ... has access to more information and sources of truth than ever before" is imperiled by the drain of resources away from those whose job it is to seek out the "sources of truth" and make them available to us.