Thesis update

Hi everyone, just a small update on my thesis (actually a big one, if you look at it). I finally managed to get in contact with my thesis coordinator. While my topic is still Algorithmic Awareness, my focus has shifted a bit: instead of looking at the consumer side, I'll turn towards the producer side. I'm taking a look at how news producers think about the algorithms behind Facebook and how they try to circumvent them. The central theoretical framework will be gatekeeping theory, whereby personalisation through Facebook can be seen as a second gatekeeper on top of the news organisations. The bulk of my literature review remains the same, since I'm still talking about personalisation on the web, how Facebook works, the filter bubble and Algorithmic Awareness. The only difference is that in my last part I'll be focusing on the producer side instead of the consumer side.

So of all the things I talked about during my presentation on Thursday, a few have fundamentally shifted. For that reason I'm actually not going to upload it, since I feel it no longer represents the structure or the goal of my thesis well enough.


The Silent Majority

I read Nicholas' readings about the silent majority and found them interesting!

It was fun to get an analyst's view of how to reach this silent majority, for example by using anonymous surveys when dealing with subjects you'd rather not be too public about. I think the term "Silent Majority" generally has a somewhat bad rep in this day and age, probably stemming from Trump supporters claiming the term by saying that "The Silent Majority stands with Trump" over and over, often putting it on signs at protests or posting about it on social media, which is a bit ironic.

I've generally thought of the term as a way of saying that you, for example, disagree with current immigration laws but don't want to be vocal about it because of the backlash that often follows, even from people who are, in my opinion, otherwise sane.

Bearing this in mind, Nicholas' readings showed me that from a data-mining/analytical perspective, the Silent Majority can be anything related to people "lurking" rather than engaging in the same manner as the more vocal participants of, say, a message board.

On old message boards, before Reddit pretty much decimated them, you could always see how many people were online at any given moment, as "lurkers" or logged-in users, which I think helped you get a picture of how vast the Silent Majority was.

Maybe something like that should be implemented on Facebook and similar platforms. Whenever you browse a comment field, you could get an estimate of how many people were lurking and how many were contributing. I'm sure Facebook already has algorithms for this (it's the kind of thing they earn money from), but it would be nice for vocal contributors to see that people are reading their comments, so they don't feel they're "shouting into nothing", so to speak. It could counter the tide of disenchantment with online discussion that I feel is growing. Of course, it could also just make it worse.
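As a purely hypothetical sketch of what such a counter could compute (Python; the view and comment logs here are invented, and nothing below reflects Facebook's actual data or APIs):

```python
# Hypothetical view/comment logs for a single comment thread.
viewers = {"anna", "bjorn", "clara", "dina", "erik", "frida"}  # opened the thread
commenters = {"bjorn", "erik"}                                 # actually posted

# Lurkers are viewers who never contributed.
lurkers = viewers - commenters
print(f"{len(commenters)} contributors, {len(lurkers)} lurkers "
      f"({len(lurkers) / len(viewers):.0%} of viewers stayed silent)")
```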


Facebook and Hecking Algorithms

I read: Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed by Emilee Rader and Rebecca Gray.

It's a research paper that looks at how people perceive their Facebook News Feed and how they think it works. Interesting stuff!

What stood out to me was this little piece of information:
“Respondents indicate they believe an entity, characterized as Facebook or as an algorithm, prioritizes posts for display in the News Feed. Also, which posts they see depends on what the system knows about their preferences and characteristics, post popularity, and past interaction with other users. 80% No, 20% Maybe/Yes”

I thought it was common knowledge that the News Feed and similar algorithms cherry-pick what is presented to you. If I google the word "Horse", I will get a completely different list of hits than somebody else would. It's interesting to see that the people in the survey are unaware of the extent to which Facebook tracks them.

Everything from your IP address, to analysis of your pictures, to following your location even when Facebook, or your phone, is switched off, is used, along with how you comment, what you comment on, what you share and so on, to better direct ads your way and to show you posts you might be interested in interacting with. A good ol' ad blocker does wonders for most of this, coupled with a VPN, but I guess those things have yet to seep into the mainstream consciousness.


Filter Bubble

This week's blog will be on algorithms: how they work, how they shape our movement on the web and what's available to us, and how to break the cycle.
A fellow student's thesis revolves around algorithms and the filter bubble, so this week it's his readings I've been looking into, and it will be his thesis in the crosshairs in the days to come. If the readings are anything to go by, we have a most productive session in store.

The term "filter bubble" was coined by Eli Pariser around 2010, and here is the Wikipedia definition of what it is:
"A filter bubble is a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history. As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. The choices made by these algorithms are not transparent. Prime examples include Google Personalized Search results and Facebook's personalized news-stream. The bubble effect may have negative implications for civic discourse, according to Pariser, but contrasting views regard the effect as minimal and addressable."

In the opening pages of chapter two of his book The Filter Bubble: What the Internet Is Hiding from You, Eli Pariser talks about how the news press and published journals lost their advertisement revenue because the same content became available online. Those who used to purchase ads in newspapers turned to websites instead. Anyone who has spent time online over the past few years will have noticed the evolution of online advertisement. At first it was "pay to be on the site", and you got the same ads on the same pages because that's what companies paid for. Then it evolved into more regional ads: suddenly they were in your native language, and for stores and companies in your country. This evolved again into IP-targeted commercials, where your IP address was used to give you ads from local stores and businesses. Lastly, this evolved into the data-mining algorithms that tailor online ads especially for you by looking at your search history, the websites you visit and the links you've clicked on other sites. Algorithms now drive most online advertisement, and they are uncannily accurate.
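To make that last step concrete, here is a minimal sketch (my own illustration in Python, with invented topics and ads, not any ad network's actual code) of how a click-history profile might steer ad selection:

```python
from collections import Counter

# Invented click history: each page visit tagged with a topic.
click_history = ["fishing", "fishing", "camping", "fishing", "news"]
ads = {"fishing": "Rod sale!", "camping": "Tent deals!", "cars": "New SUV!"}

# Build an interest profile by counting topics, then serve the ad
# matching the strongest interest. Real systems weigh thousands of
# signals, but the feedback principle is the same.
profile = Counter(click_history)
best_topic, _ = profile.most_common(1)[0]
print(ads.get(best_topic, "generic ad"))  # -> "Rod sale!"
```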

It is hard not to leave traces behind when traversing the web, but if you manage to stay somewhat under the radar, the algorithms will have a hard time targeting you. They will instead show you commercials of interest to the populace in your general area or town.
Some easy steps are to clear your web history and make sure to delete cookies as well, since this is where most of the algorithms gather their information. You can also make sure not to be logged in to sites like YouTube or your Google account when doing searches. This will prevent them from linking and storing information about you on their servers as well as in your cookies.
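As a small illustration of why cookies matter here (Python, using the third-party requests library; the URL is just a placeholder), the snippet below visits a page with a completely fresh session and lists whatever cookies the server hands back. Each one is state that clearing your cookies would wipe out:

```python
import requests  # third-party: pip install requests

# A brand-new session: no stored history, no prior cookies.
with requests.Session() as fresh:
    fresh.get("https://www.example.com")  # placeholder URL
    for cookie in fresh.cookies:
        print(cookie.name, "set for", cookie.domain)
```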
Ad-blocking extensions for web browsers used to be very popular, but websites soon learned how to withhold their content if you had such an extension. Sites like YouTube took this a step further and deliberately gave users with ad blockers the longest commercials, removing the "skip" function that commercials lasting more than 30 seconds normally have.
Ad blockers still work, though more and more websites are getting better at blocking the blocker, literally.

Pariser later talks about how the future of online news will be personally tailored, with a few major events present and the rest all local news, tailored to meet your specific interests and likes. The danger of having such a personalized news filter is that the odds of missing out on a major event become all the more present. By filtering in only a few global events, plenty of cases might be ignored and left out, cases that you might find interesting and important. The algorithms won't take this into account, though; they will only report to you what their parameters tell them to. Today, at least, you can get varied news by visiting different major news sites and local sites, but when you read articles like this one, http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/, where a scarily high number of people state social media as their main source of news, things get complicated.

These algorithms are affecting all our lives, whether we are aware of them or not, and it can be an increasingly difficult task to circumvent, break or reset them.
Reading the work of Emilee Rader and Rebecca Gray on algorithmic curation in the Facebook News Feed, it is apparent that we share a concern: concern at people's ignorance of what algorithms actually produce.
The algorithms are biased, the information they filter and show you on your feed is biased, and in the end, if you do not realize this, those supposedly "objective and impartial" pieces of information will give you a false sense of neutrality.
Knowing the information you receive is biased is one thing, but doing something to change that is nigh on impossible, at least when it comes to Facebook.
There are ways to increase the diversity of what you are shown, and that is simply by pressing like on a lot of different and unique things. The more stuff you like, the more diverse (or not) your Facebook wall will become, or at least that is the thought behind the algorithm. So keeping in mind what you give a thumbs up can make a big difference in the long run.

One issue that Rader and Gray point out is that in Facebook's privacy settings you can select who can and cannot see your posts, and you have no real way of telling if someone has elected to put you on such a list. From the questionnaire they ran, 73% of respondents believed they were not shown all of their friends' posts. This could be for different reasons, like the one mentioned above: people electing to stop a person from viewing their posts. Another issue brought up by the questionnaire was that some respondents felt Facebook filled their wall with posts that the algorithm "thought" they would find interesting. In effect, the algorithms take choices away from us.

My personal issues with, and use of, algorithms.
Firstly I must say that I am a victim of these algorithms as much as the next person, but I am fully aware of them, and I actually go to great lengths to throw them off balance.
I have both a Netflix and a YouTube account, where algorithms are hard at work tailoring films, series, streamers and content just for me.
The way I break the Netflix algorithm is by creating multiple profiles. I have my own, which I use for movies and series that I like, namely sci-fi and crime, but I also have a profile that I share with my wife. On this profile, we watch series together: comedies, stand-up shows and the odd documentary. I also share the Netflix account with a friend of mine, who in return shares her ViaPlay account. We have vastly different tastes in both films and series, and by letting her use my account, she looks up stuff I would never consider. Or so I thought. It turns out we have a few interests in common, films and series I would not have found if not for my friend using my account.
As for YouTube, I have channels I subscribe to, musicians I look up and my favorite streamers. This gives me basically the same content every time I log on; my "recommended" tab is always the same. Not the same videos or songs, but the same in terms of content: gaming, music and British panel shows.
The way I break this cycle is that once or twice a month, I have friends over for a "YouTube night".
It basically consists of my friends and me looking up all sorts of stuff, showing each other certain gems we've found in the course of our YouTube browsing. What happens is that in the week or so after my friends have been over, my "recommended" tab is full of new and unique content. Suddenly I have a ton of new stuff to explore, or not, if I so choose, but at least I have fresh content and new things to view.

How do you break or interact with the algorithms affecting your time online? Please leave a comment if you have any thoughts on the issue.

Until next time.


The Filter Bubble and the News

The filter bubble is a technological phenomenon, where one’s opinions are amplified by algorithms that recommend content that one is more likely to be interested in, while filtering out all other content (Flaxman, Goel and Rao 2016, 299). If, for example, Google’s search algorithms have learned that you are a liberal person, the results of political search queries may be more likely to be liberal than conservative. And if you watch a lot of horror movies on Netflix, you are more likely to see suggestions for these types of movies in the future.
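As a minimal sketch of that mechanism (my own toy illustration in Python, not any platform's actual algorithm): score each story against what the user has engaged with before, show the top matches, and silently drop the rest. The dropped items are the bubble's blind spot.

```python
user_interests = {"liberal", "climate", "film"}

# Invented stories, each tagged with topics.
stories = [
    ("Climate summit reaches deal", {"climate", "liberal"}),
    ("New horror film breaks records", {"film", "horror"}),
    ("Conservative tax plan explained", {"conservative", "economy"}),
]

def score(tags, interests):
    """Predicted interest: how many of the story's tags the user shares."""
    return len(tags & interests)

ranked = sorted(stories, key=lambda s: score(s[1], user_interests), reverse=True)
shown, hidden = ranked[:2], ranked[2:]
print("Shown:", [title for title, _ in shown])
print("Never seen:", [title for title, _ in hidden])  # the bubble's blind spot
```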

In his book The Filter Bubble (2011), Eli Pariser tells the story of how journalism has gone from the passive receiving of information from a few publishers to an overwhelming wealth of articles produced by both professionals and amateurs. This creates the problem of how articles are to be presented to the reader. Since no one is capable of reading every article being produced, some filtration has to take place. The problem is that when this filtration is based on algorithms filtering information according to what they think we like, people are less likely to be exposed to new ideas and challenging information.

I believe that the filter bubble is potentially a serious problem for democracy and public debate. I also believe, however, that it is necessary and a result of the natural development of the digital world. In his book The Googlization of Everything (2011), Siva Vaidhyanathan argues that the amount of available information online leads to information overload. The very title of one of his chapters, "The Googlization of Memory", hints at how very human and biological processes – such as memory – are being digitally expanded. If one accepts the wealth and availability of information online as an extension of our memory, then there must also be an extension of our biological filtration processes and working memory that – just like the algorithms of the filter bubble – filter information based on what is believed to be in our interest.

It is difficult to find a balance between the necessary algorithmic filtration systems and the democratic dangers of the filter bubble. For starters, I miss the option to turn off filtration for a while – an exploration mode where the information presented is not based on any guesses about what I might like. And hopefully, awareness of the filter bubble will help people become more critical of their news sources.


Sources

Flaxman, Seth, Sharad Goel, and Justin M. Rao. 2016. "Filter Bubbles, Echo Chambers, and Online News Consumption." Public Opinion Quarterly 80 (S1): 298–320.

Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. London: Penguin Books.

Vaidhyanathan, Siva. 2011. The Googlization of Everything. Berkeley: University of California Press.


Exams…

In this week’s blog, I will quickly go over what I would like to do for my exam for DIKULT 303: Digital Media Aesthetics.


As you may know, my thesis research deals with the extent to which people are aware that what they see on Facebook is filtered to their needs and interests. Pariser called this phenomenon 'the filter bubble'. As I started reading more and more about the topic, I came to realize that it is interwoven with, or rather involves, a lot of other topics as well. Algorithmic Awareness, online news, content diversity, recommender systems, Google and Facebook are just a few of them.


As more and more people start to consider Facebook a news source, we need to make sure that the platform offers as many different stances on issues, and as much conflicting information, as the traditional media. If not, we risk falling into a vicious circle where our view of the world and our interests keep being reinforced because of our lack of exposure to conflicting information and ideologies.


For the exam assignment, I would like to write a sort of mini literature review, like those you find in an academic article. It would start with why the topic of filter bubbles is important in today's society, thus giving the scientific and societal value of the research, followed by an overview of the implications, critiques and past research and findings on the filter bubble and online news consumption.

At the moment my idea is to link this issue to so-called cognitive dissonance theory in social psychology. This states that we don't like seeing information that conflicts with our view of the world, and that we feel uncomfortable when our actions go against our ideas. This is closely linked to confirmation bias, the tendency to seek out information that reinforces our ideology.


In summary, the structure of my paper would be the following: start by giving the societal value of research on the topic, which is closely tied to recent media-use statistics. Next would be the academic value, giving an overview of what has already been said and done on the topic and where there are still gaps that need attention. These two parts would in essence comprise the introduction to the literature review. The second part, the actual literature review, would be made up of three sections, starting with cognitive dissonance theory, as this is the theoretical framework through which I would look at the issue. The remaining two sections would first explain the filter bubble (what has been written about it) and then look at how aware people are of it, drawing on the literature about algorithmic awareness.


This is roughly how I would like to organise the assignment. A lot of it still depends on my discussion with my professor. I am also still not sure about the theoretical framework I would use in my thesis. On the one hand, cognitive dissonance theory says a lot about how people handle their view of the world and information opposing it; on the other hand, gatekeeping theory says a lot more about news dissemination and what information makes it through. In the latter case, algorithms could be seen as a form of gatekeeper that decides what information the user will see.


As you can see, the last part is still a mystery even to me. In general, though, this is how I would like to do the assignment.


Social Media and its Impact on the Music Industry

So for my contribution to the e-book, I was thinking of submitting a paper that looks at how social media has changed the way the music industry works.

I've done some similar work before, looking at how Spotify has affected musicians compared to the old model of the pre-Napster days. In this case I think I would keep it simple and look at how Facebook has changed the game when it comes to reaching your audience, building said audience and connecting with it. Before social media came along, there was much more mystery surrounding bands and musicians, and most of what you could find out about them came from tabloids or music magazines. I happen to know a band that managed to have an impact riiiiight before the great age of piracy came about, so I would like to interview them and get their opinion on how they feel the industry has changed and how it has affected them.

I would obviously also connect this to factual statistics, as well as relevant readings where they can be found. There hasn't been too much written about this yet, as far as I can find, but I'm sure there's good stuff out there.

The paper would be presented in a sort of investigative manner, I think.
I know that, right now, the music industry is making more money than it ever did back in the heyday of vinyl and CDs, but peripheral artists seem to be suffering more than they used to. One of the reasons behind this, I believe, is social media. At the same time, it also seems to be a great way of communicating with your fans and reaching a broader audience, so there's a paradox here that would be fun to explore.

I'm also a big fan of making webpages, so if possible, it would be fun to present the paper as a sort of narrative webpage where you click your way forward to the conclusion, much like several articles on the web do. Or perhaps I could create a YouTube video where I outline my findings, backed by some visual design to complement what I'm saying.


This week in science

Today my post will be on the subject of my Master's thesis and how I aim to present it at the end of this semester. Not my full thesis, but my layout, my ideas and the thoughts I've had on how I'm going to present it once it reaches full scale.

My topic, for those of you who are wondering, is going to be The Silent Majority.
Now, what is this silent majority, you say? It is the hundreds of millions of internet users who, during their time online, seldom or never participate in a productive way. When I say productive, I mean participating in online discussion on open forums, so that it is available to those who would like to see it. It is those who only use Instagram/Twitter/Pinterest/Imgur to browse pictures or look up people, without engaging with the community. Perhaps some will leave a comment, but I do not count #cool as a means of participating in a productive way.

My thesis will focus on three main aspects (these might change):
1. What is the Silent Majority? How do I define it in my thesis, and how do I separate meaningful participation from meaningless participation?
2. How do you reach the Silent Majority, and how can you increase the "want" to participate?
3. How do you increase participation in a group that avoids just that, and what would it take to make this happen? Also, what would be the potential consequences of a vastly increased amount of participation in online communities?

For the end of this semester, I will focus on the first topic: what is the Silent Majority? I will use the introduction and the definition to build anticipation and interest in my thesis and the work I have ahead of me.
It will be comparable to the first chapters of a book, where you get to know the protagonists and where they come from, and get the gist of how the plot will develop.

For my research, I will be reading on the topic of participatory culture, since it is my goal to have the Silent Majority engage in just that. I will also look at those I call the Vocal Minority: the group of people who are very active and not only produce a great deal of public content but actively engage each other in discussion and in creating communities where they thrive.
I will look at why they are active and what they stand to gain from creating public content, e.g. guides for computer games, or learning guides on forums and/or streaming sites. The reason I will look at this group as well is to get a basis for comparison with my findings on the Silent Majority.
I feel it is of great importance to represent both sides of a matter, even though my focus will be mostly on the Silent Majority.

My biggest fear for this thesis is the lack of previous work on the silent majority and participatory culture, which means I will have a hard time finding relevant literature. That will not discourage me from pursuing this topic, though.

A short post from me this time, but that is basically all the insight I can offer on my thesis at the moment. I am just in the opening phase of my writing, so there are bound to be changes along the way, but my topic will very much remain the same.

As always, leave a comment if you have one.