Keywords: big data, privacy, algorithms, ideological bubbles, selfies, news feeds
This week's readings focus on the implications of big data for privacy and authenticity. Oremus's article examines Facebook and its news feed algorithm, particularly how the algorithm is developed and why it has so many shortcomings. Baruh and Popescu's article focuses on big data and its effects on industries and our consumerism. The Silverman article develops the idea that selfies serve as a tool for facial recognition software, leading to complex questions about law and privacy.
“privacy understood as an individual good, which decrees that consumers should be free to negotiate their acceptable levels of privacy, and privacy understood as indivisible collective value that can be enjoyed by the society only if a similar ‘minimum’ level is afforded to every member” (2)
“Facebook’s news feed algorithm can be tweaked to make us happy or sad; it can expose us to new and challenging ideas or insulate us in ideological bubbles.”
“At a time when Facebook and other Silicon Valley giants increasingly filter our choices and guide our decisions through machine-learning software…”
In the article from Oremus, we are forced to consider the way algorithmic data affects our daily lives. Oremus is invited to meet the team that develops the many-layered, complex algorithm that compiles our news feeds. In that meeting, the team discusses the idea that the main reason the algorithm fails is the information we feed it: Facebook's shortcomings stem from the fact that the data it mines is fundamentally human. Further, the algorithm takes many factors into account. According to Alison, the Facebook employee in charge of the algorithm, it weighs hundreds of features. This shows that nothing is static; the algorithm constantly changes to adapt.
Further illustrating this point is the article from Baruh and Popescu. The writers use the example of cars and insurance to show how this kind of data collection is reflected in market practices. As Baruh and Popescu explain, the development of sensors, Global Positioning Systems, and wireless communication moved the insurance industry from risk-based models to habit-based models. However, privacy concerns are very apparent here, because the way these habits are coded relies on data mining and the collection of big data. More specifically, insurers need to gather particular kinds of information in order to define the habits they deem good or bad.
Moreover, this issue of privacy arises again in the reading from Silverman about selfies and facial recognition software. Silverman articulates, “We’re increasingly using data about our face to authenticate our identities to our smartphones and user accounts. That’s reason enough to be skeptical of widespread deployment of facial recognition technologies and the proliferation of name-face databases. Like passwords, faceprints can be compromised. They’re a data security risk.” As such, all of these technologies, from GPS to the practice of taking selfies, incur added risks to our authenticity, our privacy, and, more importantly, our safety.
What do you think about the practice of facial recognition? Do you think you should be notified when you enter a space where it is in use? If so, how do you think the law could address this privacy concern?
Do you think that your right to privacy does, can, or should extend to all aspects of social media? If so, what do you think this would look like?
What does privacy look like to you on social media, in cars with added technologies, and in the digital realm?