planet code4lib

Planet Code4Lib - http://planet.code4lib.org

Eric Hellman: Google's "Crypto-Cookies" are tracking Chrome users

Sat, 2017-01-14 17:16
Ordinary HTTP cookies are used in many ways to make the internet work. Cookies help websites remember their users. A common use of cookies is authentication: when you log into a website, you stay logged in because of a cookie that contains your authentication info. Every request you make to the website includes this cookie; the website then knows to grant you access.
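
That login flow can be sketched with Python's standard library: the server issues a session cookie with its security flags set, and the browser echoes it back on every request. The cookie name and value here are made up for illustration:

```python
from http.cookies import SimpleCookie

# Server side: issue a session cookie after a successful login.
# The name "sessionid" and the value "abc123" are illustrative only.
cookie = SimpleCookie()
cookie["sessionid"] = "abc123"
cookie["sessionid"]["secure"] = True    # only ever sent over HTTPS
cookie["sessionid"]["httponly"] = True  # hidden from page JavaScript

# This string goes out in the Set-Cookie response header; on every
# later request the browser sends "Cookie: sessionid=abc123" back,
# which is how the site knows to keep you logged in.
header = cookie["sessionid"].OutputString()
print(header)
```

The Secure flag matters: without it the cookie also travels over plain HTTP, which is exactly the theft scenario described next.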

But there's a problem: someone might steal your cookies and hijack your login. This is particularly easy for thieves if your communication with the website isn't encrypted with HTTPS. To address the risk of cookie theft, the security engineers of the internet have been working on ways to protect these cookies with strong encryption. In this article, I'll call these "crypto-cookies", a term not used by the folks developing them. The Chrome user interface calls them Channel IDs.


Development of secure "crypto-cookies" has not been a straight path. A first approach, called "Origin Bound Certificates", has been abandoned. A second approach, "TLS Channel IDs", has been implemented, then superseded by a third approach, "TLS Token Binding" (nicknamed "TokBind"). If you use the Chrome web browser, your connections to Google web services take advantage of TokBind for most, if not all, Google services.

This is excellent for security, but might not be so good for privacy; 3rd party content is the culprit. It turns out that Google has not limited crypto-cookie deployment to services like GMail and Youtube that have log-ins. Google hosts many popular utilities that aren't tracked by conventional cookies. Font libraries such as Google Fonts, javascript libraries such as jQuery, and app frameworks such as Angular, are all hosted on Google servers. Many websites load these resources from Google for convenience and fast load times. In addition, Google utility scripts such as Analytics and Tag Manager are delivered from separate domains so that users are only tracked across websites if so configured. But with Google Chrome (and Microsoft's Edge browser), every user who visits any website using Google Analytics, Google Tag Manager, Google Fonts, jQuery, Angular, etc. is subject to tracking across websites by Google. According to Princeton's OpenWPM project, more than half of all websites embed content hosted on Google servers.
Top 3rd-party content hosts. From Princeton's OpenWPM.
Note that most of the hosts labeled "Non-Tracking Content"
are at this time subject to "crypto-cookie" tracking.

While using 3rd party content hosted by Google was always problematic for privacy-sensitive sites, the impact on privacy was blunted by two factors – caching and statelessness. If a website loads fonts from fonts.gstatic.com, or style files from fonts.googleapis.com, the files are cached by the browser and only loaded once per day. Before the rollout of crypto-cookies, Google had no way to connect one request for a font file with the next – the request was stateless; the domains never set cookies. In fact, Google says:
Use of Google Fonts is unauthenticated. No cookies are sent by website visitors to the Google Fonts API. Requests to the Google Fonts API are made to resource-specific domains, such as fonts.googleapis.com or fonts.gstatic.com, so that your requests for fonts are separate from and do not contain any credentials you send to google.com while using other Google services that are authenticated, such as Gmail.

But if you use Chrome, your requests for these font files are no longer stateless. Google can follow you from one website to the next, without using conventional tracking cookies.
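
Both mitigating factors are visible in the HTTP response itself: a one-day cache lifetime (86400 seconds) and the absence of a Set-Cookie header. The headers below are a sketch of what such a response might look like, not a capture from Google's servers:

```
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: public, max-age=86400
Access-Control-Allow-Origin: *

/* font-face declarations follow; note there is no Set-Cookie
   header, so the request carries no conventional tracking state */
```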

There's worse. Crypto-cookies aren't yet recognized by privacy plugins like Privacy Badger, so you can be tracked even though you're trying not to be. The TokBind RFC also includes a feature called "Referred Token Binding" which is meant to allow federated authentication (so you can sign into one site and be recognized by another). In the hands of the advertising industry, this will get used for sharing of the crypto-cookie across domains.

To be fair, there's nothing in the crypto-cookie technology itself that makes the privacy situation any different from the status quo. But as the tracking mechanism moves into the web security layer, control of tracking is moved away from application layers. It's entirely possible that the parts of Google running services like gstatic.com and googleapis.com have not realized that their infrastructure has started tracking users. If so, we'll eventually see the tracking turned off.  It's also possible that this is all part of Google's evil master plan for better advertising, but I'm guessing it's just a deployment mistake.

So far, not many companies have deployed crypto-cookie technology on the server side. In addition to Google and Microsoft, I've found a few advertising companies that are using it. Chrome and Edge are the only client-side implementations I know of.

For now, web developers who are concerned about user privacy can no longer ignore the risks of embedding third party content. Web users concerned about being tracked might want to use Firefox for a while.

Notes:

  1. This blog is hosted on a Google service, so assume you're being watched. Hi Google!
  2. OS X Chrome saves the crypto-cookies in an SQLite file at "~/Library/Application Support/Google/Chrome/Default/Origin Bound Certs". 
  3. I've filed bug reports/issues for Google Fonts, Google Chrome, and Privacy Badger. 
  4. Dirk Balfanz, one of the engineers behind TokBind has a really good website that explains the ins and outs of what I call crypto-cookies.
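
The database from note 2 can be poked at with a few lines of Python. This is only a sketch: the path is the one given in note 2, the schema is Chrome's internal detail and may change between versions, so the code just lists whatever tables are present rather than assuming their names:

```python
import os
import sqlite3

def list_tables(path):
    """Return the names of the tables in an SQLite database file."""
    with sqlite3.connect(path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall()
    return [name for (name,) in rows]

# Path from note 2 (OS X Chrome); adjust for your own profile.
# The file only exists where Chrome has stored Channel IDs.
db_path = os.path.expanduser(
    "~/Library/Application Support/Google/Chrome/Default/Origin Bound Certs"
)
if os.path.exists(db_path):
    print(list_tables(db_path))
```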


DPLA: DPLA to Expand Access to Ebooks with Support from the Alfred P. Sloan Foundation

Fri, 2017-01-13 15:45

The Digital Public Library of America is thrilled to announce that the Alfred P. Sloan Foundation has awarded DPLA $1.5 million to greatly expand its efforts to provide broad access to widely read ebooks. The grant will support improved channels for public libraries to bolster their ebook collections, and for millions of readers nationwide to access those works easily.

DPLA will leverage its extensive connections to America’s libraries through its national network to pilot new ways of acquiring ebook collections. In the same way that DPLA has worked with its hubs in states from coast to coast to improve access to digitized materials from America’s archives, museums, and libraries, DPLA will collaborate with other institutions to improve access to ebooks through market-based methods.

As part of the grant, DPLA will also develop an expansive, open collection of popular ebooks, in EPUB format for smartphones and tablets, and curated so that readers can find works of interest. Together, these programs will substantially increase the number of ebooks that are readable by all Americans, on the devices that are now broadly held throughout society.

“From its inception, DPLA has sought to maximize access to our shared culture,” Dan Cohen, DPLA’s Executive Director, said at the announcement of the new Sloan grant. “Books are central to that culture, and the means through which everyone can find knowledge and understanding, multiple viewpoints, history, literature, science, and enthralling entertainment. We deeply appreciate the Sloan Foundation’s support to help us connect the most people with the most books, which are now largely in digital formats.”

“The Sloan Foundation is delighted to support the Digital Public Library of America’s efforts to create new channels for better ebook access,” said Doron Weber, Vice President and Program Director at the Alfred P. Sloan Foundation. “Sloan was the founding funder of DPLA and its mission, enabling a nationwide, grassroots and non-profit collaboration that to date has provided access to over 15 million digitized items from over 2,000 cultural heritage institutions across the U.S. With its timely new focus on ebooks, DPLA will leverage its national network to expand reading opportunities for thousands of schools and libraries and millions of students, scholars, and members of the public.”

The Sloan grant will help DPLA build upon its existing successful ebook work, such as in the Open eBooks Initiative, which has provided thousands of popular and award-winning books to children in need. Recently, DPLA announced with its Open eBooks partners the New York Public Library, First Book, Baker & Taylor, and Clever that well over one million books were read through the Sloan-supported program in 2016.

Galen Charlton: Truth-seeking institutions and strange bedfellows

Fri, 2017-01-13 14:19

I was struck just now by the confluence of two pieces that are going around this morning. One is Barbara Fister’s Institutional Values and the Value of Truth-Seeking Institutions:

Even if the press fails often, massively, disastrously, we need it. We need people employed full-time to seek the truth and report it on behalf of the public. We need to defend the press while also demanding that they do their best to live up to these ethical standards. We need to call out mistakes, but still stand up for the value of independent public-interest reporting.

Librarians . . . well, we’re not generally seen as powerful enough to be a threat. Maybe that’s our ace in the hole. It’s time for us to think deeply about our ethical commitments and act on them with integrity, courage, and solidarity. We need to stand up for institutions that, like ours, support seeking the truth for the public good, setting aside how often they have botched it in the past. We need to apply our values to a world where traditions developed over years for seeking truth – the means by which we arrive at scientific consensus, for example – are cast aside in favor of nitpicking, rumor-mongering, and self-segregation.

The other is Eric Garland’s Twitter thread on how the U.S. intelligence community gathers and analyzes information:

<THREAD> I've been an intelligence practitioner for 20 years. What we're seeing is the *process* of intel in public. It's without precedent.

— Eric Garland (@ericgarland) January 12, 2017

In particular,

This is actually what I love about this work. You aggressively attack your own intellectual weakness. Assume it's wrong. Because it matters.

— Eric Garland (@ericgarland) January 12, 2017

Of course, if it is easy nowadays to be cynical about the commitment of the U.S. press to truth-seeking, such cynicism is an even easier pose to adopt towards the intelligence community. At the very least, spreading lies and misinformation is also in the spy’s job description.

But for the purpose of this post, let’s take the latter tweet at face value, as an expression of an institutional value held by the intelligence community (or at least by its analysts).

I’m left with a couple of inchoate observations. First, a hallmark of social justice discourse at its best is a radical commitment to centering the voices of those who hitherto have been ignored. Human nature being what it is, at least a few folks who understood this during their college days will end up working for the likes of the CIA. On the one hand, that sort of transition feels like a betrayal. On the other hand, I’m not Henry L. Stimson: not only is it inevitable that governments will read each other’s mail, my imagination is not strong enough to imagine a world where they should not. More “Social Justice Intelligence Analysts” might be a good thing to have — as a way of mitigating a certain kind of intellectual weakness.

However, one of the predicaments we’re in is that the truth alone will not save us; it certainly won’t do so quickly, not for libraries, and not for the people we serve. I wonder if the analyst side of the intelligence community, for all their access to ways of influencing events that are not available to librarians, is nonetheless in the same boat.

Ed Summers: Tracking Changes With diffengine

Fri, 2017-01-13 05:00
Our most respected newspapers want their stories to be accurate, because once the words are on paper, and the paper is in someone's hands, there's no changing them. The words are literally fixed in ink to the page, and mass produced into many copies that are near impossible to recall. Reputations can rise and fall based on how well newspapers are able to report significant events. But of course physical paper isn't the whole story anymore. News on the web can be edited quickly as new facts arrive, and more is learned. Typos can be quickly corrected--but content can also be modified for a multitude of purposes. Often these changes instantly render the previous version invisible. Many newspapers use their website as a place for their first drafts, which allows them to craft a story in near real time, while being the first to publish breaking news. News travels *fast* in social media as it is shared and reshared across all kinds of networks of relationships. What if that initial, perhaps flawed version goes viral, and it is the only version you ever read? It's not necessarily fake news, because there's no explicit intent to mislead or deceive, but it may not be the best, [most accurate] news either. Wouldn't it be useful to be able to watch how news stories shift in time to better understand how the news is produced? Or as Jeanine Finn memorably put it: how do we understand the news [before truth gets its pants on]?

---

As part of [MITH]'s participation in the [Documenting the Now] project we've been working on an experimental utility called [diffengine] to help track how news is changing. It relies on an old and quietly ubiquitous standard called [RSS]. RSS is a data format for syndicating content on the Web. In other words it's an automated way of sharing what's changing on your website. News organizations use it heavily, and if you've ever subscribed to a podcast, you've used RSS.
If you have a blog or write on [Medium], an RSS feed is quietly generated for you whenever you write a new post. So what diffengine does is really quite simple. First it subscribes to one or more RSS feeds, for example the Washington Post, and then it watches to see if any articles change their content over time. If a change is noticed, a representation of the change, or a "[diff]", is generated, archived at the [Internet Archive] and (optionally) tweeted. We've been experimenting with an initial version of diffengine by having it track the Washington Post, the Guardian and Breitbart News, which you can see on the following Twitter accounts: [wapo_diff], [guardian_diff] and [breitbart_diff]. Here's an example of what a change looks like when it is tweeted:

Deportation force is ‘not happening,’ Paul Ryan tells undocumented family - The Washi… https://t.co/OQEpG1Inj3 -> https://t.co/NsDNI5Dflt pic.twitter.com/t0Q6iuG2qX

— Editing the Wapo (@wapo_diff) January 13, 2017

The text highlighted in red has been deleted and the text highlighted in green has been added. But you can't necessarily take diffengine's word for it that the text has been changed, right? Bots are [sending] all kinds of fraudulent and intentionally misleading information out on the web, and in particular in social media. So when diffengine notices new or changed content it uses Internet Archive's [save page now] functionality to take a snapshot of the page, which it then references in the tweet so you can see the original and changed content there. You can see those links in the tweet above.

---

diffengine draws heavily on the inspiration of two previous projects, [NYTDiff] and [NewsDiffs], which did very similar things. [NYTdiff] is able to create presentable diff images and [tweet them] for the New York Times. But it was designed to work specifically with the NYTimes API. NewsDiffs provides a comprehensive framework for watching changes on multiple sites (Washington Post, New York Times, CNN, BBC, etc). But you need to be a programmer to add a [parser module](https://github.com/ecprice/newsdiffs/tree/master/parsers) for a website that you want to monitor. It is also a fully functional web application, which requires some commitment to install and run. With the help of [feedparser] diffengine takes a different approach of working with any site that publishes an RSS feed of changes. This covers many news organizations, but also personal blogs and organizational websites that put out regular updates. And with the [readability] module diffengine is able to automatically extract the primary content of pages, without requiring special parsing to remove boilerplate material. To do its work diffengine keeps a small database of feeds, feed entries and version histories that it uses to notice when content has changed. If you know your way around a sqlite database you can query it to see how content has changed over time.
The database could be a valuable source of research data if you are studying the production of the news, or the way organizations or people communicate online. One possible direction we are considering is creating a simple web frontend for this database that allows you to navigate the changed content without requiring SQL chops. If this sounds useful please get in touch with the DocNow project, by joining our [Slack] channel or emailing us at info@docnow.io. [Installation] of diffengine is currently a bit challenging if you aren't already familiar with installing Python packages from the command line. If you are willing to give it a try let us know how it goes over on [GitHub]. Ideas for sites for us to monitor as we develop diffengine are also welcome!

---

*Special thanks to [Matthew Kirschenbaum] and [Gregory Jansen] at the University of Maryland for the initial inspiration behind this idea of showing rather than telling what news is. The [Human-Computer Interaction Lab] at UMD hosted an informal workshop after the recent election to see what possible responses could be, and diffengine is one outcome from that brainstorming.*

[tweet them]: https://twitter.com/nyt_diff
[NYTDiff]: https://github.com/j-e-d/NYTdiff
[NewsDiffs]: http://newsdiffs.org/
[feedparser]: https://pythonhosted.org/feedparser/
[readability]: https://github.com/buriy/python-readability
[Medium]: https://help.medium.com/hc/en-us/articles/214874118-RSS-Feeds-of-publications-and-profiles
[wapo_diff]: https://twitter.com/wapo_diff
[guardian_diff]: https://twitter.com/guardian_diff
[breitbart_diff]: https://twitter.com/breitbart_diff
[diff]: http://catb.org/jargon/html/D/diff.html
[Internet Archive]: https://archive.org
[Documenting the Now]: https://www.docnow.io
[save page now]: https://archive.org/about/faqs.php#1050
[most accurate]: http://www.forbes.com/sites/kalevleetaru/2017/01/01/fake-news-and-how-the-washington-post-rewrote-its-story-on-russian-hacking-of-the-power-grid/#780dc24e291e
[before truth gets its pants on]: https://jeaninefinn.me/2016/11/15/understanding-fake-news-in-2016-before-the-truth-gets-its-pants-on/
[MITH]: http://mith.umd.edu
[diffengine]: https://github.com/docnow/diffengine
[RSS]: https://en.wikipedia.org/wiki/RSS
[sending]: http://firstmonday.org/ojs/index.php/fm/article/view/7090/5653
[Installation]: https://github.com/docnow/diffengine/#Install
[Slack]: https://docs.google.com/forms/d/e/1FAIpQLSf3E7PAXPoT-XoedpEy9UCTpDPS8kPj5JkMwpaWbuqVP0bTrQ/viewform
[GitHub]: https://github.com/docnow/diffengine
[Matthew Kirschenbaum]: https://twitter.com/mkirschenbaum
[Gregory Jansen]: https://twitter.com/gregj
[Human-Computer Interaction Lab]: http://www.cs.umd.edu/hcil/
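
diffengine itself involves more machinery (feed polling, content extraction, archiving, tweeting), but its core comparison step can be sketched with Python's standard difflib. The two article texts below are made up for illustration, not fetched from any feed:

```python
import difflib

# Two versions of an article's text, as diffengine might see them
# on successive fetches of the same RSS entry (illustrative text).
old = "Deportation force is happening, a campaign aide says.".split()
new = "Deportation force is not happening, Paul Ryan says.".split()

# Word-level comparison: entries starting with "- " were deleted
# and entries starting with "+ " were added between the versions,
# which diffengine renders in red and green respectively.
changes = [d for d in difflib.ndiff(old, new)
           if d.startswith(("- ", "+ "))]
for change in changes:
    print(change)
```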

Equinox Software: Equinox Transitions to NonProfit to Benefit Libraries

Thu, 2017-01-12 17:48

Equinox Transitions to Nonprofit to Benefit Libraries

FOR IMMEDIATE RELEASE

Duluth, Georgia, January 12, 2017 – On January 1, 2017, Equinox Software, Inc., the premier support and service provider for the Evergreen Integrated Library System, became Equinox Open Library Initiative Inc., a nonprofit corporation serving libraries, archives, museums, and other cultural institutions. This change comes after several years of consideration, evaluation of community needs, planning, and preparation. The change allows Equinox to better serve its customers and communities by broadening its mission of bringing more open source technology to a wide array of institutions dedicated to serving the public good.

About the conversion from for-profit to nonprofit, Mike Rylander, president of the new Equinox Open Library Initiative said, “Everyone at Equinox is dedicated to the mission of helping libraries of all types adopt and use open source software.  We have been involved in this work for ten years now, and our move to become a nonprofit helps us further that mission.  Importantly, this change also matches more closely the cooperative, community-focused ethos of the open source technologies with which we work.  We could not be more excited to move forward in this new direction.”

Jason Etheridge, an Equinox founder, added, “In 2009, we wrote an open letter to the community called the Equinox Promise, where we pledged to adhere to ideas such as transparency, code sharing, maintaining a single code set, and, in general, working with and within the Evergreen and Koha communities.  This built on the original vision of Evergreen as software that should be open source for both philosophical and pragmatic reasons.  Equinox becoming a nonprofit is another promise, one with legal teeth, where our charitable purpose is put front and center.  I see no better way to participate in the gift culture known as open source, and in our Evergreen and Koha communities.”

While daily operations at Equinox will not change, company leaders highlight that going forward there will be new opportunities for service expansion and enhancement, as well as creative funding options for projects that enhance library services. Grace Dunbar, Equinox Vice President, pointed out, “By becoming a nonprofit organization, Equinox will actually be able to do more and grow our service offerings to the library community. I think it’s important to note we’re not changing our services—we still offer a complete suite of services for seamless migration, support, and development for open source library software. However, by making the change to nonprofit we will be able to grow in a way that does not require a merger or acquisition with a proprietary software company and will allow us to integrate more resources into our mission.”

For more information, please visit our FAQ.

About Equinox Open Library Initiative Inc.
Equinox Open Library Initiative Inc. is a nonprofit company engaging in literary, charitable, and educational endeavors serving cultural and knowledge institutions.  As the successor to Equinox Software, Inc., the Initiative carries forward a decade of service and experience with Evergreen and other open source library software.  At Equinox OLI we help you empower your library with open source technologies.

Open Knowledge Foundation: CSV,Conf is back in 2017! Submit talk proposals on the art of data collaboration.

Thu, 2017-01-12 15:47

CSV,Conf,v3 is happening! This time the community-run conference will be in Portland, Oregon, USA on the 2nd and 3rd of May 2017. It will feature stories about data sharing and data analysis from science, journalism, government, and open source. We want to bring together data makers/doers/hackers from backgrounds like science, journalism, open government and the wider software industry to share knowledge and stories.

csv,conf is a non-profit community conference run by people who love data and sharing knowledge. This isn’t just a conference about spreadsheets. CSV Conference is a conference about data sharing and data tools. We are curating content about advancing the art of data collaboration, from putting your data on GitHub to producing meaningful insight by running large scale distributed processing on a cluster.

Talk proposals for CSV,Conf close Feb 15, so don’t delay, submit today! The deadline is fast approaching and we want to hear from a diverse range of voices from the data community.

Talks are 20 minutes long and can be about any data-related concept that you think is interesting. There are no rules for our talks; we just want you to propose a topic you are passionate about and think a room full of data nerds will also find interesting. You can check out some of the past talks from csv,conf,v1 and csv,conf,v2 to get an idea of what has been pitched before.

If you are passionate about data and the many applications it has in society, then join us in Portland!

Speaker perks:

  • Free pass to the conference
  • Limited number of travel awards available for those unable to pay
  • Did we mention it’s in Portland in the Spring????

Submit a talk proposal today at csvconf.com

Early bird tickets are now on sale here.

If you have colleagues or friends who you think would be a great addition to the conference, please forward this invitation along to them! CSV,Conf,v3 is committed to bringing a diverse group together to discuss data topics.

For questions, please email csv-conf-coord@googlegroups.com, DM @csvconference or join the public slack channel.

– the csv,conf,v3 team

Archival Connections: Preserving Email Report Summary

Wed, 2017-01-11 22:20
Earlier today, I provided a summary of Preserving Email, a Technology Watch Report I wrote back in 2011. I'll leave it to others to judge how well that report holds up, but I had the following takeaways when re-reading it:

LITA: Jobs in Information Technology: January 11, 2017

Wed, 2017-01-11 19:43

New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Mid-Hudson Library System, Technology Operations Manager, Poughkeepsie, NY

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

OCLC Dev Network: Update to WMS Circulation API

Wed, 2017-01-11 19:15

The latest release of the WMS Circulation API includes new operations for forwarding holds, pulling items, inventory, and in-house check-ins of items.


District Dispatch: ALA urges Senators to probe Sessions on privacy

Wed, 2017-01-11 15:31

ALA, together with a baker’s dozen of allied organizations, has written to the members of the Senate Judiciary Committee on the eve of its hearings on the confirmation of Sen. Jeff Sessions (R-AL) to serve as the nation’s next Attorney General. Detailing concerns about Sen. Sessions’ record on a host of issues – including expressly his opposition to the special protection of library patron records – the letter calls on Committee members to use the hearings to “carefully investigate Senator Sessions’ record on privacy and seek assurances that he will not pursue policies that undermine Americans’ privacy and civil liberties.”

Orchestrated by the Center for Democracy & Technology, the American Association of Law Libraries and Association of Research Libraries also signed the letter, as did other prominent national groups, including: Access Now, Amnesty International USA, the Constitutional Alliance and Electronic Frontier Foundation.

The post ALA urges Senators to probe Sessions on privacy appeared first on District Dispatch.
