Today I found the following resources and bookmarked them on Delicious.
- Meerkat Tweet live video
- 23andMe Genetic Testing for Ancestry
- Find My Past Trace your Family Tree Online – Genealogy & Ancestry
Digest powered by RSS Digest
- SxSW: We’re All Related. The (Big) Data Proves It
- 10 Ancestry.com Search Tips
- Search all U.S. Censuses free
This is one of our periodic messages sent to all LITA members. This update provides:
- Election details
- An urgent call to action from the Washington Office
- Current Online Learning Opportunities
ALA Candidates who are LITA members include:
- Presidential candidate:
- Joseph Janes
- Council candidates:
- Brett Bonfield
- Megan Drake
- Henry Mensch
- Colby Mariva Riggs
- Jules Shore
- Eric Suess
- Joan Weeks
LITA Division Candidates include:
- President Candidates:
- Aimee Fifarek
- Nancy Colyar
- Director-at-large candidates:
- Ken Varnum
- Susan Sharpless Smith
- Martin Kalfatovic
- Frank Cervone
“Voting will begin at 9 a.m. Central time on March 24. Between March 24 and March 26, ALA will notify voters by email, providing them with their unique passcodes and information about how to vote online. To ensure receipt of their ballot, members should watch for emails from ALA Election Coordinator, firstname.lastname@example.org. The subject line will be “ALA 2015 election login information below.” The polls will close on Friday, May 1, at 11:59 p.m. Central time.
For the seventh year in a row, ALA is holding its election exclusively online. To be eligible to vote, individuals must be members in good standing as of January 31, 2015. Although the election is being conducted online, there remains one exception: Members with disabilities and without internet access may obtain a paper ballot by contacting ALA customer service at 1-800-545-2433, ext. 5. Those without internet access at home or work can easily access the election site by visiting their local public (or in many instances academic or school) library.
Voters will receive email reminders on April 7 and April 14. Voting may be completed in one sitting, or an individual may park their ballot and return at a later date; however, a ballot is not cast until the “submit” button is clicked. Anyone with a parked ballot will receive an email reminder to complete the voting process before May 1.”
Please take 60 seconds to help libraries by March 20, 2015
Emily Sheketoff, director, ALA Washington Office
“Millions in federal funding for libraries are currently hanging in the balance. In order to save library funding from the chopping block – particularly the Library Services and Technology Act (LSTA) and Innovative Approaches to Literacy (IAL) programs – library supporters need to contact the offices of their Representative and Senators and ask them to show support for continued library funding by signing the “Dear Appropriator” letters about LSTA and IAL that three Members of Congress who are huge library champions have drafted to the Appropriations Committees in the House and Senate. The more Members of Congress we can get to sign these “Dear Appropriator” letters, the better the chance of preserving and securing real money for libraries so that they can continue to do all the great work they do in their communities. The only way we can achieve this is through grassroots efforts. Members of Congress need to hear from as many voters as we can rally to action.
Please email or phone your members of Congress and ask them to sign the Dear Appropriator letter supporting LSTA and IAL, then ask all other library supporters you know to do the same by no later than March 20th.
Contact info is here:
http://cqrcengage.com/ala/home (just put in your zip code in the box on the lower right side).
You are welcome to forward this email to local, state or regional library listservs.
To see whether your Members of Congress signed the letters last year, view the FY 2015 Funding Letter Signees document (pdf). If they did, please be sure to thank them and remind them of that when you email or call! More information can be found on District Dispatch, and here’s some helpful background information:
BACKGROUND INFORMATION for “DEAR APPROPRIATOR” LETTERS
LSTA is the only source of funding for libraries in the federal budget. The bulk of this funding is returned to states through a population-based grant program through the Institute of Museum and Library Services (IMLS). Libraries use LSTA funds to, among other things, build and maintain 21st century collections that facilitate employment and entrepreneurship, community engagement, and individual empowerment. For more information on LSTA, check out the LSTA Background and Ask document (pdf).
HOUSE STAFF/ CHAMPION Norma Salazar (Representative Raul Grijalva)
SENATE STAFF/ CHAMPION Elyse Wasch (Senator Jack Reed)
IAL is the only federal program supporting literacy for underserved school libraries and has become the primary source for federal funding for school library materials. Focusing on low income schools, these funds help many schools bring their school libraries up to standard. For more information on IAL, view School Libraries Brief (pdf).
HOUSE STAFF/ CHAMPION Don Andres (Representative Eddie Bernice Johnson)
SENATE STAFF/ CHAMPION James Rice (Senator Charles Grassley)
Current Online Learning Opportunities
Beyond Web Page Analytics: Using Google tools to assess user behavior across web properties
Presenters: Ed Sanchez, Rob Nunez and Keven Riggle
Offered: March 31, 2015
Currently sold out. To be placed on the wait list send an email to email@example.com
Yes, You Can Video: A how-to guide for creating high-impact instructional videos without tearing your hair out
Presenters: Anne Burke and Andreas Orphanides
Offered: May 12, 2015
Register Online page arranged by session date (login required)
I encourage you to connect with LITA by:
- Exploring our web site.
- Subscribing to LITA-L email discussion list.
- Visiting the LITA blog and LITA Division page on ALA Connect.
- Connecting with us on Facebook and Twitter.
- Reaching out to the LITA leadership at any time.
Please note: the Information Technology and Libraries (ITAL) journal is available to you and to the entire profession. ITAL features high-quality articles that undergo rigorous peer-review as well as case studies, commentary, and information about topics and trends of interest to the LITA community and beyond. Be sure to sign up for notifications when new issues are posted (March, June, September, and December).
If you have any questions or wish to discuss any of these items, please do let me know.
All the best,
Mary Taylor, Executive Director
Library and Information Technology Association (LITA)
50 E. Huron, Chicago, IL 60611
312-280-4267 (direct line)
mtaylor (at) ala.org
Join us in Minneapolis, November 12-15, 2015 for the LITA Forum.
So, I missed writing this for Open Access Week, or Fair Use Week, or Open Education Week, but I think these are topics we should be focusing on every day of our professional lives, not just three weeks of the year.
Imagine for a moment that you’re doing an ego search (not that I would ever do that) and you find that someone is selling an article you wrote (with your name on it) as part of a book or journal that you never contracted with. Sure, you published the article, but for a completely different publisher. Now you find that some random company is making money off your work. You contact them and demand that they remove your article because what they’re doing is illegal, but they insist they are in the right.
Sound implausible? Did you sign away your copyright to the publisher? Or is your book chapter or article licensed under Creative Commons CC-BY? Then what they might be doing is perfectly legal based on what you agreed to. You, like most of us, just didn’t understand the implications of what you were signing.
Making your work open access is a fantastic thing to do. Our goals as faculty should be to promote and share knowledge as widely as possible, and the fruits of our research will be much more likely to benefit society when they are freely available for anyone to access. Too many authors completely sign away any rights to their work, often forcing libraries at their institutions to pay for students and faculty to use something the institution has essentially already funded. Sometimes they feel they have to in order to get tenure, but some are just unwilling or ill-equipped to read the terms of their contract. I still remember an instructor being annoyed with me that we didn’t have access to her Springer book chapter because PSU had “already paid for my research.” Sorry, I wasn’t the one who blindly signed a contract that didn’t even allow for depositing a preprint into an institutional repository.
When we were getting PDXScholar off the ground at Portland State, I talked to many faculty in my disciplines about getting their work into the repository. So many had no idea what the contract they signed did or did not allow them to do with their work, and what it did or did not allow the publisher to do. I think we focus so much on the research and then the writing, and then we spend months going through the peer-review process, so that by the time we receive an author agreement we kind of mentally feel like our work is already done. I’ll admit that I wasn’t too savvy about this myself early on, but I’ve always made sure that I could make my articles and book chapters freely available online in some way, shape, or form. I wrote about this back in 2013 when I was on the tenure track at PSU.
Now, I think the work we do in terms of negotiating the contract is as important as all the work that came before it. If few people can access your research, what was the point of doing it in the first place?
But it is also short-sighted to only think about whether or not we can make our work open to the public. We should also be concerned with what the publisher can do with our work. We usually think that once the work is published in the journal the publisher is done with it, but we are sometimes signing contracts that allow them to do much, much more with our intellectual output. I once signed a contract for a book chapter that essentially said I could do anything I wanted with the work (in terms of republication), but so could the publisher. It was better than the first contract I was offered, which gave me no rights to do anything, not even putting the chapter in our repository; still, it gave the publisher the ability to republish my chapter in any other publication in the future.
How would you feel if you found your work published in a book that you knew nothing about? How would you feel to know that random people were making money off work you didn’t see a dime for (even originally)?
Several articles from The Scholarly Kitchen blog have made the point that just because something is published OA doesn’t necessarily mean that it can’t be reproduced for profit:
- CC-Bye Bye! Some Consequences of Unfettered Reproduction Rights Become Clearer
- More Creative Commons Confusion: When Does NC Really Mean “Non-Commercial”?
- Getting Beyond “Post and Forget” Open Access
Many OA journals make their work available through the Creative Commons CC-BY license, which allows for the maximum reuse, including the creation of derivative works and selling the work commercially, so long as the creator is given credit. I could take a bunch of articles with CC-BY licenses, package them into an anthology, and sell that anthology. All I’d have to do to stay within the license is to credit each author. But I could sell their work and not give them a penny of the profit. So could any publisher.
I would be deeply uncomfortable with the idea of licensing any of my work under a Creative Commons CC-BY license. I’m not ok with random people with whom I have no relationship making money off my work. I would guess that many people feel that way.
I’ve published in two open access journals in the past 18 months, both of which had Creative Commons non-commercial licenses. In Collaborative Librarianship, they license the work under a CC-BY-NC-ND, which allows people to share the work, with credit, so long as they don’t make money off it, but people also can’t make derivative works. I’m ok with that. The idea of someone being able to change and republish my article in some way I hadn’t intended does make me slightly uncomfortable, though I have no problem with my work being open to anyone to read, share, and benefit from. College and Research Libraries’ default license is CC-BY-NC (which does allow for derivative works), but I love that C&RL allows the author to specify a different license for their work in the author agreement, giving the ultimate freedom to the author to define how their work can be used.
While I’d rather my articles be CC-BY-NC-ND, there are other materials I create of which I would be happy to allow derivative works to be created. Those include tutorials, presentation slides, LibGuides, and perhaps some course materials. For those, a CC-BY-NC license should do the trick.
My blog is licensed under a CC-BY-NC-SA license (SA=Share Alike), which seriously limits the use of my content. People who use it must not only be non-commercial entities, but must license what they create from my work using the same license. That means that whatever they add to my work must be licensed in the exact same way. I feel ok about having that requirement with my blog content.
When we were looking at what license to use for our LibGuides at PCC, we toyed with the idea of a share-alike license in the spirit of “we want everyone to share their stuff.” Ultimately, we went with a CC-BY-NC license because we know that many libraries do not have the ability to put any sort of license on their LibGuides (due to college/university rules) and this would limit their use more than having no license at all. We want to make it clear that we welcome other librarians grabbing our content. Why reinvent the wheel?
But we need to consider more than just under what license our content is being released. If the publisher retains copyright of your work, they can ultimately do whatever they want with it. Just because it has a non-commercial license doesn’t mean that the publisher can’t allow another publisher to use your work for their profit. The Creative Commons license just tells people what they can do without needing to ask permission. Ultimately the copyright holder has the right to do what they want with the content unless your contract specifically spells out limitations. With the exception of the two articles I published in Emerald Journals, which I’m pretty sure I dropped the ball on, I’ve retained copyright on all of my publications, including my book Social Software in Libraries.
I’m not a lawyer or any kind of expert on contracts, but, ultimately, there are four key things I look for in any contract these days:
1. Who will hold the copyright?
2. What rights are we giving to the publisher and what could they consequently do with our work in a worst case scenario?
3. What rights do we have to the work and can we make it available, in some way, freely online (if it isn’t already through the journal)?
4. If the work is open access, under what license is it made available to the public?
Whether a contract says it or not, this is our intellectual property. It came from our minds and our considerable efforts. We should work to make sure we have some agency over how our work is made available and who benefits financially from it.
What success stories have you had in dealing with publishers? What frustrations? Any tips for those new to the universe of scholarly publishing?
Update: Just after posting this, Micah Vandergrift shared with me Bethany Nowviskie’s post which came to a very different conclusion about Creative Commons licensing. I think that’s great! Whatever decision we come to as individuals about how we’d like our work to be used, let it be well-considered.
The exponential growth in the number of scientific papers makes it increasingly difficult for researchers to keep track of all the publications relevant to their work. Consequently, the attention that can be devoted to individual papers, measured by their citation counts, is bound to decay rapidly. ... The decay is ... becoming faster over the years, signaling that nowadays papers are forgotten more quickly. However, when time is counted in terms of the number of published papers, the rate of decay of citations is fairly independent of the period considered. This indicates that the attention of scholars depends on the number of published items, and not on real time.

Below the fold, some thoughts.
Their analysis is similar to many earlier analyses of the attention decay of general online content, except that their statistics aren't as good:
one cannot count on the high statistics available for online contents: the number of tweets posted on a single popular topic may exceed the total number of scientific publications ever made.

Nevertheless, the similarity between the attention decay of papers and that of online content in general is striking. They argue:
Hence, the process of attention gathering needs to take into account the increasing competition between scientific products. With the increase of the number of journals and increasing number of publications in each journal ..., a scientist inevitably needs to filter where to allocate its attention, i.e. which papers to cite, among an extremely broad selection. This may also question whether a scientist is actually fully aware of all the relevant results available in scientific archives. Even though this effect is partially compensated by the increase of the average number of references, one needs to consider the impact of increasing publication volume on the attention decay.

They conclude:
The existence of many time-scales in citation decay and our ability to construct an ultrametric space to represent this decay, leads us to speculate that citation decay is an ultradiffusive process, like the decay of popularity of online content. Interestingly, the decay is getting faster and faster, indicating that scholars “forget” more easily papers now than in the past. We found that this has to do with the exponential growth in the number of publications, which inevitably accelerates the turnover of papers, due to the finite capacity of scholars to keep track of the scientific literature. In fact, by measuring time in terms of the number of published works, the decay appears approximately stable over time, across disciplines, although there are slight monotonic trends for Medicine and Biology.

Clearly, the response to this problem should not be for publishers to return to their role as gatekeepers, publishing only the good stuff. Research has conclusively shown that they are unable to recognize the good stuff well enough. Rather, in a world where everything gets published, the only question is where it gets published, and the where is not a reliable indicator of quality. We need to stop paying publishers vast sums for minimal value add, and devote the funds to better search, annotation and reputation tools.
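The paper's central claim, that decay looks stable in "paper time" but accelerates in real time, is easy to illustrate with a toy simulation. All numbers below are invented for illustration; none come from the paper itself:

```python
import math

# Toy model: attention decays at a fixed rate per *published paper*.
# If the publication rate grows exponentially, the same per-paper decay
# translates into an ever-shorter real-time half-life of attention.

GROWTH = 0.05           # hypothetical annual growth rate of publications
PAPERS_YEAR_0 = 1000.0  # hypothetical publication volume in the base year

def papers_published(years):
    """Cumulative papers published `years` years after the base year."""
    return PAPERS_YEAR_0 * (math.exp(GROWTH * years) - 1) / GROWTH

def real_time_half_life(start_year, decay_per_paper=1e-4):
    """Years until a paper published at `start_year` loses half its attention,
    when attention decays exponentially in cumulative-paper 'time'."""
    target = math.log(2) / decay_per_paper   # papers needed for halving
    base = papers_published(start_year)
    # Invert the cumulative count: find T with papers(T) = base + target.
    end_year = math.log((base + target) * GROWTH / PAPERS_YEAR_0 + 1) / GROWTH
    return end_year - start_year

early = real_time_half_life(0)
late = real_time_half_life(30)
print(f"half-life at year 0:  {early:.2f} years")
print(f"half-life at year 30: {late:.2f} years")  # shorter: faster forgetting
```

With these made-up parameters, the per-paper decay rate never changes, yet the real-time half-life shrinks from roughly six years to under two over three decades of publication growth, which is the qualitative pattern the authors report.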
March is Women’s History Month and the sky’s the limit with our new exhibition, “Women With Wings: American Aviatrixes.” The exhibition spotlights the groundbreaking and courageous women who took to the skies as barnstormers and record breakers in the 1910s and 1920s, daredeviled their way across the US in the 1930s, and supported the American military in WWII in the 1940s, inspiring generations of women pilots and astronauts along the way.
The exhibition was created by students Megan DeArmond, Diana Moronta, and Laurin Paradise as part of Professor Debbie Rabina’s course “Information Services and Sources” in the School of Information and Library Science at Pratt Institute. It highlights the rich collections held by institutions throughout the US including the Boston Public Library, California Historical Society, Missouri History Museum, National Archives and Records Administration, The New York Public Library, The Portal to Texas History, and more.
Check out the exhibition.
Featured image credit: “Women pilots of the All-Woman Transcontinental Air Race,” 1929. Courtesy of St. Louis University via the Missouri Hub.
All written content on this blog is made available under a Creative Commons Attribution 4.0 International License. All images found on this blog are available under the specific license(s) attributed to them, unless otherwise noted.
I’d love to see more of this at the other conferences I attend!
When working on a new website it is so easy to want to jump right in and start coding, or at least start storyboarding. I have been working on a new website with the marketing department at my university, and before we could start any work they asked me to first define my audience.
Once I clearly articulated who I was trying to reach I was asked to provide information for specific pages. As a team we decided to include the following pages:
- Landing Page
- Supplemental Pages
- Contact us
- Event Slider
Then, I was asked to identify the sub-groups of my broadly defined audience that each page should target. We only started writing once each page had a specified population. The words used for each page needed to align with both the broad audience and the targeted sub-group.
The process of planning a website before creating the website has been a great learning experience. I have been forced to articulate my goals and specify end users. I’ve been more thoughtful about this project than I have been in previous instances when many decisions were left up to me. Our marketing team has been an amazing resource and I hope to apply their thinking to future projects.
Does anyone else ever feel like they need public relations training to be a librarian?
Also, I’d love to hear any advice you have for me as I plan websites.
List of changes below:
** Bug Fix: Delimited Text Translator: Constant data, when used on a field that doesn’t exist, is not applied. This has been corrected.
** Bug Fix: Swap Field Function: Swapping control field data (fields below 010) using position + length syntax (example 35:3 to take 3 bytes, starting at position 35) not functioning. This has been corrected.
** Enhancement: RDA Helper: Checked options are now remembered.
** Bug Fix: RDA Helper: Abbreviation Expansion timing was moved in the last update. Moved back to ensure expansion happens prior to data being converted.
** Enhancement: Validator error message refinement.
** Enhancement: RDA Helper Abbreviation Mapping table was updated.
** Enhancement: MarcEditor Print Records Per Page — program will print one bib record (or bib record + holdings records + authority records) per page.
** Bug Fix: Preferences Window: If MarcEdit attempts to process a font that isn’t compatible, then an error may be thrown. A new error trap has been added to prevent this error.
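For readers unfamiliar with the position + length syntax mentioned in the Swap Field fix, the idea is a zero-based byte slice of a fixed-length control field. A minimal sketch of how such a spec could be interpreted (this illustrates the syntax only; the helper name and sample record are made up, and this is not MarcEdit's actual code):

```python
def extract_fixed_field(control_field: str, spec: str) -> str:
    """Slice a MARC control field using "position:length" syntax.

    A spec of "35:3" means: take 3 bytes starting at zero-based position 35.
    """
    position, length = (int(part) for part in spec.split(":"))
    return control_field[position:position + length]

# In the 008 field for books, bytes 35-37 hold the language code.
field_008 = "190816s2019    nyu           000 1 eng d"
print(extract_fixed_field(field_008, "35:3"))  # -> eng
```

Because control fields (those below 010) have no subfields, positional slicing like this is the standard way to address their data.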
You can get the new download either through MarcEdit’s automatic update tool, or by downloading the program directly from: http://marcedit.reeset.net/downloads
Here at the U-M Library, we’re committed to identifying opportunities for engagement between Library staff and students. But identifying these opportunities can be difficult for our Library’s IT unit since we’re not involved with students as part of our day-to-day work. How do we as tech professionals engage with the student community?
From Michael Metcalf, Communications Officer, Symplectic
London, UK – Symplectic has been a supporter of the VIVO project for many years. Back in 2011, we developed an open-source VIVO harvester to allow the feeding of information into VIVO profiles using the rich data captured by our flagship software, Elements. It’s available to fork on our GitHub page.
Are you letting the opportunity to defend library and literacy funding slip by? Are you expecting others to call on Congress to fund our vital programs?
Now is the time for ALA members to make the case for continued funding of the Library Services and Technology Act (LSTA) and Innovative Approaches to Literacy (IAL). Strong support at the beginning of the Appropriations discussion on Capitol Hill sets the tone, as Appropriators are now looking for programs to cut or eliminate.
We cannot let this happen. Several letters are circulating among Members of Congress supporting these programs and we need to ensure as many Members of Congress as possible add their names to these letters. It is imperative this week that Members of Congress show support for continued funding for these important programs.
ALA members must contact their Senators and Representative this week and ask them to sign these letters. Staff from your Congressional offices must contact the staff members listed above to let them know their boss will sign the letters. The deadline for House letters is Friday, while Senate letters are due early next week.
Please take a few minutes and contact your Members of Congress. You can find their contact information and talking points at our action center.
- LSTA Letter – The Library Services and Technology Act is the primary source of federal funding for libraries in the federal budget. The bulk of the program is distributed to each state as a population-based grant through the Institute of Museum and Library Services.
- IAL Letter – The Innovative Approaches to Literacy program ensures children and schools in underserved communities have access to books and other reading materials and services. Through IAL programs, children are better prepared to succeed in high school, college, and 21st century jobs.
Dr. Astro Teller was our final keynote of the conference.
Astro created his second company in 1999 to take advantage of the future of wearables: BodyMedia. They designed a vest that would be used as an EKG. As an afterthought, they brought people in to ask what they thought of it. The interviews did not go well, and so we never saw this vest. The mistake they made was asking people what they thought last instead of first. The longer you work on something, the more you don’t really want to know what the world is going to tell you; you have to rush out there as fast as you can, and as often as you can, to get that info. In addition to getting this painful news from reality, you have to find a way to use it.
The lesson of failing at the beginning is what he has taken with him to Google X, and the bumps and scrapes it takes to improve are something we all share as life experiences. So today Astro is going to tell us what he’s learned, and how he’s learned it, at Google X.
Moonshots mean that they are shooting for things that are 10x better – not incremental improvement. They are trying to produce items with real value – they embraced failure before deciding if they were going to take a moonshot. You have to make a ton of mistakes if you want to make great progress. All the bumps and scrapes have been well worth it. Some of the examples that he’s about to give us can be used to help us all with our experiments.
One of the projects out of Google X is Project Loon. “Project Loon looks to use a global network of high-altitude balloons to connect people in rural and remote areas who have no Internet access at all.” The problem in the beginning was that they didn’t want these balloons to fly into territory they weren’t allowed in. So they designed their first balloons to fail, so that they didn’t have to deal with that issue. That allowed them to work on other areas of the project without having to worry about creating an international incident. Now they can steer their balloons to within 10-20 kilometers of where they want them to go; up until then, they had to teach the balloons how to sail, and how to destroy themselves so that if they went somewhere they weren’t supposed to go, they self-destructed.
Another example is the self-driving car. If you could build a car that was safer than humans, it would solve so many problems. When they started, they didn’t have the list of 10 thousand things they’d need to make this car actually work. Making that list was the hardest part of this process, and there was no way to make it other than by going out into the world. One example: their car came across a lady in a motorized wheelchair in the middle of the road with a broom, trying to shoo a duck out of the street. The car needed to stop for this event, and there is no way they would have envisioned that scenario if they hadn’t gone out into the world.
Google Glass is another example – they wanted to get the prototypes out quickly to a wide range of people so they could see how it was used and where it was used. They made one great decision and one not so great decision with Google Glass. The great one is that they got the glass out there. What they did not do well was that they got too much attention on the product – what they wanted to say was that this was an early prototype and was out there as a beta test. The problem is that the world saw it as a finished product when it was not that. They needed to prevent it from getting to be as loud a conversation as it got.
The key from these stories is that you have to prototype your tool – beta test it – you’re never going to get the right answer sitting in a conference room.
Project Wing is Google’s project for delivering things by self-flying vehicles. They tried to find an existing vehicle they could use, but they couldn’t find one, so they decided to build their own device. Even though 80% of the team knew it was the wrong answer after a year and a half, they didn’t want to admit it. They wanted to get out into the real world and test things. They were able to do 9 deliveries to Googlers and got proof that the device was wrong. That meant they could come back to the office and start designing a new vehicle.
Next, a story of failing to fail: Project Makani. It’s an energy kite used to harness the wind. The higher up you are, the more wind you have and the more power you get. The problem is that the wind towers you see are huge, and you have to spend a ton to build, move, and put one up. The new kite tower weighs only 1% of the towers we’re used to seeing. So for the testing, the team picked the windiest place possible, where the speed and the direction of the wind can change in seconds. While they learned a ton, they did not fail; they never crashed one device like they thought they should. The team felt that they failed because they didn’t fail; there is magic in that.
Going back to the car: they thought they were ready in 2012, when they had Google employees using the cars. That car required human interaction to do things like exit and pay attention. They learned pretty quickly, though, that the only way was to have the car drive itself, because the theory that humans can be a reliable backup for the system is a total fallacy. People do really stupid things when driving, and once they started to trust the system they really just let go of all reason.
All that said, Astro doesn’t feel that he could have avoided the mistakes; he wishes they had made them faster so that they could have learned faster. And he hopes that we can take lessons from his stories and set ourselves up for creative/productive failure.
- SxSW: Fixing Transportation with Humanity & Technology
- SxSW: Magical UX and the Internet of Things
- Self Driving Cars
Pete Cashmore from Mashable was here to update us on tech.
Pete says that there is no excuse for media companies to not be involved in tech – enter Mashable Velocity. This is a tool in the Mashable app that predicts what’s trending in news.
While this technology is cool, it’s not going to replace the humans in news; we still need people to go out and get the news stories. We use AI to learn about what’s going on on the web; it can learn patterns and it can learn from language, but it can’t write the story for you yet. Being successful on the web today requires that you’re “pro-human”. Human expression and creativity are things that you can’t automate. Velocity can start the conversation, but it can’t tell the story.
Mashable wants to be your most faithful Facebook friend and your most trusted social network connection – accessible and approachable, yet still reliable. Their target audience is the digital generation. The bulk of their audience is in the millennial group – but they see “millennial” as more of an attitude than an age group! I love that – I’m not a millennial but I do relate to a lot of the characteristics of a millennial.
Pete is a huge fan of Meerkat – live-tweeted video (an article about Meerkat at SxSW). This is a new side of reporting. It’s important not to focus on making it perfect, though – the beauty of it is that it feels human. Snapchat is another tool that Mashable uses a bunch, and now that Facebook is joining the video realm, Mashable is using that tool too. They keep in mind that there are different ways to tell stories on each of these platforms.
We’re in an era of video – people want to see more video now and are producing more of it, and Mashable is going to head in that direction. With video it’s way more obvious who you are – they want their videos to be fun and sharable. In 3 to 5 years we’ll probably see most media companies producing a majority of their content as video – it’s going to be a huge trend because it’s an easy medium to view on our mobile devices. (I’d argue that I don’t always have my headphones with me, so I don’t watch video when I’m mobile – I prefer to read.)
When asked who the competition for Mashable was, Pete said it depends on the platform. On Facebook they’re competing with your friends’ baby photos, and on Twitter they’re competing with other news outlets. Media has changed so much – you used to consume media only by watching the news at night, but now you can read news on your phone while in line at the store. People are consuming news in so many new places – and it’s all because of mobile. Mobile is making media companies tell their stories more succinctly – writing/producing for the medium.
- Social Media Policies
- SxSW: Behind the Social at WGBH
- SxSW: End to Brogramming: How Women are Shaping Tech
This is the good stuff.
Zipper bots. Want. Could be handy on the ski slopes or on the bike.
A good companion to the article on phonetically balanced sentences.
It’s easy to tell the depth of a well.
This would have been perfect on campus this winter.
An easy online tool for creating simple typefaces
AJ Jacobs, Joanna Mountain and Katarzyna Bryc were on the panel this morning titled “We’re All Related. The (Big) Data Proves It.” Joanna and Katarzyna are from 23andMe, the makers of a genetic ancestry DNA test. AJ is not a scientist like his fellow panelists, but he is an author interested in research.
AJ got into this because he got an email one day saying, “You don’t know me, but I’m your 12th cousin.” He of course thought it was a hoax, but he looked into it and found that this gentleman was actually part of a group creating the biggest family tree there is – right now there are 270 million people linked together on the tree. This got him into genealogy, and it’s a thrilling era for this type of research, for two reasons: DNA testing and the Internet. I can confirm this – it’s how I found most of my family, between sites like Ancestry, Geni and FamilySearch, and then social networks. With these sites it seems likely that in a few years we’ll have one giant tree connecting us to everyone on earth.
AJ found that he’s linked to Gwyneth Paltrow as his 16th cousin. Judge Judy is AJ’s 6th cousin 4 times removed. He’s even related to Barack Obama in some insane way that I couldn’t write down and George Bush and Daniel Radcliffe and so many others.
23andMe is a service where you can send them your DNA and they will show you the breakdown of where you came from, along with a list of hundreds of people who are related to you. They will also tell you what branch of the huge family tree you are on. Unfortunately AJ found that he is related to his wife – but in reality we’d all find the same thing.
We’re only 10 minutes in and I want to get one of these kits for everyone in my family and see what it turns up!!
AJ feels that we treat our family better than strangers and this research will show that we’re all related and will be bad news for bigots. AJ has decided to throw the biggest family reunion in history – it’s in New York on June 6th. There will be speakers, classes, and games for all the cousins. If you can’t attend in New York you can participate in branch parties or contribute to the IndieGoGo campaign.
Joanna was up next to talk to us about what 23andMe does. Joanna showed us how DNA is passed down within a family and explained how they use this technology to find out how much DNA we all share with each other. 850,000 people have been tested, and your DNA is compared to all of those people. You can also allow 23andMe to share your info with your cousins, if you want, so that you can contact each other.
Other than learning that we have new cousins – what can we learn from all of this? Sometimes we don’t have our family stories due to a loss or a lie or a disease, and Joanna shared stories with us of people who got connected to their families, who had stories of their own to share.
Katarzyna was up next to talk to us about the numbers and the big data. She started with a lot of math! If every family had just 2 children, you would have 1,024 5th cousins. As you go back in time you have more and more cousins, some of whom you may or may not share DNA with. The typical 23andMe customer has 2,556 cousins in the database.
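That 1,024 figure can be reproduced with a rough sketch of the arithmetic. This is my own back-of-the-envelope model (not Katarzyna’s actual slides): it assumes every couple has exactly two children and that no family lines overlap.

```python
def nth_cousins(n, c=2):
    """Rough count of nth cousins, assuming every couple has exactly
    c children and no overlap between family lines (no pedigree collapse)."""
    ancestor_couples = 2 ** n     # couples n+1 generations back (2 parents -> 1 couple, etc.)
    siblings_per_couple = c - 1   # that couple's children not on your direct line
    descendants_each = c ** n     # each sibling's descendants at your own generation
    return ancestor_couples * siblings_per_couple * descendants_each

print(nth_cousins(5))  # 1024 fifth cousins, matching the talk's figure
```

With two children per family this collapses to 4^n, which is why the number of cousins explodes as you go back in time.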
So, how much DNA do you need to share to be cousins? You have to have a certain segment of DNA in common for a match – but people who really are cousins may not share enough DNA to prove it. DNA is quite clear about who our first cousins are, but it gets harder the further out you go. The best way around this is to get a bunch of your cousins tested.
I had a chance to ask the panel about data sharing. While I don’t want my DNA to be open sourced – I do want it shared across multiple genealogy DNA services. I pay for Ancestry.com for example to do research and I was going to use their DNA service, but it sounds like 23andMe has a better service that won’t be linked to my Ancestry account. The answer was that these services are all silos at this time and each offers a different service. Apparently people will submit their DNA to multiple services (which for me seems like it would get expensive). I’d love to see some service that shared information across services in a secure fashion.
After this talk I think I’m going to start a savings account just to get this info, and I’m going to share this story with my family, because I would love for us to share as much info as possible so that maybe I can connect to my family in Italy – as of now I’ve hit a wall with the paper-only research method. Right now most of 23andMe’s customers live in the US, so I might hit a wall in genetic research as well.
- SxSW: New Social Networks Are Changing Entire Industries
- SxSW: Building the Open Source Society
- Library Related Conferences
Join us in San Francisco during the ALA Annual Conference at the LITA Awards Presentation & LITA President’s Program, when LITA President Rachel Vacek welcomes Lou Rosenfeld to present on the latest cutting edge issues of concern to technology librarians.
Sunday, June 28, 2015, 3:00 – 4:00pm
Location to be announced on the ALA Schedule soon
You’ve heard a lot about user experience, and maybe you’ve even tried to do something about it. And naturally, you have questions. Do librarians have an edge when it comes to UX, or are we behind UX’s other feeder disciplines? Why is UX research so important for libraries? Can libraries even afford to provide good experiences these days? Lou Rosenfeld sits squarely at the intersection of UX and librarianship (and hopes not to get run over). He is a former librarian who many consider one of the “fathers of information architecture” and who now publishes books on user experience. He’ll tackle your questions with moderation from LITA President Rachel Vacek—and may even answer some.
Lou Rosenfeld has been instrumental in helping establish the fields of information architecture and user experience, and in articulating the role and value of librarianship within those fields. Lou is co-author of Information Architecture for the World Wide Web (O’Reilly; 4th edition to be published in 2015) and Search Analytics for Your Site (Rosenfeld Media, 2011), co-founder of the Information Architecture Institute and the Information Architecture Summit and Enterprise UX conferences, and a former columnist for Internet World, CIO, and Web Review magazines.
Lou founded the ground-breaking information architecture consultancy Argus Associates in the early 1990s. As an independent consultant, he has helped a variety of large and highly-political enterprises make their information more findable, including Caterpillar, PayPal, Ford, AT&T, the Centers for Disease Control, Accenture, and the NCAA. Lou now manages Rosenfeld Media, which publishes some of the best-loved books in user experience, produces UX events, and equips UX teams with coaching and training.
We’re pleased to announce that hydra-head 9.1.0 and 9.1.1 have been released.
Version 9.1.0 brings support for Blacklight 5.10. See the upgrade guide here: https://github.com/projecthydra/hydra-head/releases/tag/v9.1.0
Version 9.1.1 fixes policy based access controls. They were not working in hydra-head 9 due to using incorrect Solr queries.
Thanks to Justin Coyne for the work.
- V = 1
- H = 2
- P = 3
- I = 4
- C = 5
- T = 6
- U = 7
- R = 8
- E = 9
- S = 0
That should work for both bodies and film magazines, though there are some exceptions noted in the comments at Blue Moon Camera:
Keep in mind if you have an EL or EL/M there will be a third letter, indicating motor driven. (“E” for electric)…I believe this is true up to 1978 or so….and they may have used a W for the superwide bodies.
Dating the Zeiss lenses is a different matter, though. For lenses prior to 1980, a three or four digit date code is stamped at the back of the lens, just inside the mount. The last two digits represent the month, the first one or two digits represent the number of years after 1957. Hasselblad Historical has more detail on how to find the number and how to read lenses manufactured in 1980 and later.
According to all that, my kit was assembled from parts as old as 1963:
- Body, UPE = 1973
- Magazine, TT = 1966
- Lens, 604 = April 1963
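Putting the two schemes together, here’s a small sketch of the decoding as I understand it (my own code, not an official Hasselblad tool; it assumes 20th-century dates and pre-1980 lens stamps):

```python
# Body/magazine date codes: the letters of "VHPICTURES" stand for the
# digits 1234567890, and the first two letters give the year's last two digits.
CODE = {letter: str((i + 1) % 10) for i, letter in enumerate("VHPICTURES")}

def body_year(code):
    """Decode a body or magazine code, e.g. 'UP' -> 1973.
    Any trailing third letter (like the 'E' on EL bodies) is ignored."""
    return 1900 + int(CODE[code[0]] + CODE[code[1]])

def lens_date(stamp):
    """Decode a pre-1980 Zeiss lens stamp: the last two digits are the
    month, the leading digit(s) are years after 1957. '604' -> (1963, 4)."""
    return 1957 + int(stamp[:-2]), int(stamp[-2:])

print(body_year("UPE"))  # 1973 (body)
print(body_year("TT"))   # 1966 (magazine)
print(lens_date("604"))  # (1963, 4) -> April 1963 (lens)
```

Running it against my kit reproduces the dates above: the UPE body decodes to 1973, the TT magazine to 1966, and the 604 lens to April 1963.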