Category Archives: Student Blogs

How having more music has made me less interested in it

As Facebook makes a play for the music industry, commentators are speculating about how social media is affecting artists and music producers. Digby Pearson argues that social media is making music fans more fragmented, and that being a fan of an artist has gone from going to concerts to clicking “like” on Facebook. Vince Neilstein takes the opposite view in his article, claiming that social media has helped artists reach more listeners (Source).

The arguments by Pearson and Neilstein are typical of the debate about music in a social media age. On the one side there are those who praise social media as a way to reach a larger audience. And on the other side there are those who think that social media belittles music by changing and simplifying the relationships between artists and fans. But reading Neilstein’s article made me think about another issue: how does the modern music industry, with Spotify as the main source of music for a lot of people, change our relationship to music as an art form? I have no answers to this question other than to reflect on how my own relationship to music has changed over the years.

 

The first music I can remember owning was a vinyl record by Steve Harley and Cockney Rebel, handed down to me from my father. Later, when my childhood bedroom was updated with a CD player, most of the music I listened to was collections: movie soundtracks and rock and pop compilations. I never had a lot of music, and some of it was bad. But I listened to the music I had again and again, until I knew most of the songs by heart. And I appreciated every song, good or bad.

When I got my first computer at age 14, at a time of dial-up modems and a painfully slow internet connection, CDs (and occasionally a floppy disk) with MP3 files were traded amongst my friends. I remember having a collection of about 150 MP3 files, including rock, rap, pop and some comedy songs. Just like with my earlier CD collection, I listened to these few songs so many times that I can still remember the lyrics to many of them.

As internet speeds improved, and I discovered torrent sites, my music collection started to increase. For the first time I couldn’t listen to all my music in a day. I had to start organizing my files into folders. My collection of music, although not dramatically large, became something I had to manage. And even though all the music in the world was now easily obtainable, I built a carefully selected collection of music – I only wanted to have music I liked.

From my first vinyl record to my collection of less than legally obtained music, one thing was always true: I knew my music. I knew what music I had, what I liked, and I knew some of it by heart. Today, I don’t own any vinyl records. My CD collection is very limited. And I don’t have a collection of downloaded MP3 files, because I eventually grew up and wanted to get my music legally.

 

Today I have a Spotify subscription, and all the music in the world has never been so easily available to me. But what does that mean for my relationship to music? Unlike before, I no longer know my music. Instead of CDs or folders with MP3 files, I now have a collection of playlists on Spotify, many of which are labeled “something something – check out later”. Ironically, I felt more of an ownership of the music I previously downloaded from torrent sites. I at least had to work for that music – I had to battle sleazy ads for magic pills and dating sites, search for and find the right files, and risk getting a computer virus or a Scientology documentary instead of music (yeah, that really happened once).

On Spotify I don’t have to do anything. And everything is there. And yet, I never feel like there’s anything to listen to (talk about a first-world problem). There’s too much music to browse through, too much to feel any kind of ownership over. And Spotify is filter-bubbling me the same music suggestions all the time, so even when I do try finding something new, it’s still the same.

Of course I enjoy Spotify, and I’m not going to end my subscription anytime soon. But I can’t help feeling that, with the massive music library Spotify offers, something has been lost. And yes, I am spoiled – complaining about having too much music that is too easy to find. I guess if I have to find some sort of moral to this rambling, it’s that the more you have of something, the less it is worth.

Presentation time

As I’ve been following Mia Zamora’s class, it’s now my turn to take over the class for parts of tomorrow and present my thesis.

My thesis will focus on participatory culture within game development and gaming communities. I will present my practical project, the Twine game I’ve been blogging about, my positive and negative experiences with it, and how I connect it to participatory culture. I will also talk about what my plans are for the rest of my master’s program, what I plan to do and how to go about it.

I hope to get a fruitful discussion going in class about participatory culture, the reading I assigned for them (chapter 5 of Participatory Culture in a Networked Era and a video with Henry Jenkins), and any input they might have on the topic.


My turn

This week it’s my turn to provide the other students with readings, so I won’t be commenting on those. Rather, I will do a short post on what my thoughts for class will be, since I’m going to be in charge of parts of it, and then I’ll write a bit on what I hope to achieve, both with my master’s and in class.

Firstly, my thesis will be on the silent majority and participatory culture. What I hope to achieve, my “end goal” so to speak, will be to identify reasons why people want to participate in the online discourse, how to generate an interest in participating, and lastly reasons why people avoid participating.
I hope to produce something akin to a book, or a guide to participatory culture, and I think the key to success here is to identify why people do and do not want to partake in this. By reading my thesis, people would gain a greater understanding of what it is to participate and the benefits one can reap from it.

I will have to divide my focus into two groups, the silent majority and the vocal minority. Hopefully by identifying key reasons why people participate, I will be able to come up with a sort of guide or rule of thumb on how to increase participation. My thoughts are that this will be useful in any scenario where one is dependent on the crowd and their feedback.
I aim to look at participatory culture in a few distinct areas, each with a different form of participation. The ideas I have at the moment are the gaming community – specifically those who produce content made to benefit others (guides, lore and tactics on forums and bulletin boards) and those who stream or produce video content and engage their audience that way.
I will also look at other forms of participation, like those who produce and/or correct information on sites like Wikipedia, and lastly I will look at participation, and the lack thereof, as a whole.

One of the biggest issues I have encountered so far will be defining participation and the quality of contributions. Do I need to split them into different categories or genres? Will it suffice to call something useful or useless? An example would be someone who has spent 50 hours creating a game guide for no other reason than to help others vs. someone who posts a picture of food on a website or social media and just types #dinner #food.
Creating these definitions will be a challenge, as will trying to avoid bias when labeling contributions. We all have biases, but being aware of them and hopefully being considerate while working might help me avoid the bigger issues, or so I hope.

The second large problem I know I will encounter is how to reach out to the silent majority. By posting on different forums, by using Amazon Mechanical Turk or by actively engaging with streamers, won’t I just be reaching the vocal minority? So how do I reach the counterpart then? One idea I have would be to create an anonymous questionnaire and hopefully have the faculty spread it to students at UiB; going by unconfirmed statistics, most of the answers I get would then be from the silent majority. I can also post it on open forums and take my chances that, seeing as it’s anonymous and does not require a login or giving up credentials to answer it, I might get a few lurkers there as well.
Who knows? And that is the hard part of trying to research the silent majority: they are silent… and therefore hard to reach, and harder to research.

I’m thinking that my research will be part case study, part process of elimination: by eliminating factors as I go, I will hopefully end up with a few key factors that play an important role in whether people participate or not. These factors will then be easier to research once they are narrowed down.
One topic I will also look at, which is more theoretical and academic, will be the consequences of participation. I will use the 90–9–1 rule as a basis here. This translates into 90% lurkers, 9% vocal but less engaged, and 1% being the most vocal and those who regularly produce content.
Going by these numbers, it would mean that EVERYTHING we see online today – all the websites, all the forums, all the blogs and all the user-created content you can think of – is created by 10% of the internet users we’ve had since its origins… Digest that for a minute.
Now, imagine we could bump that number up to, say, 15 or even 20%. How would that change the web as we know it? We already have an incredible amount of information online, and we live in a society of total and utter information overload. What, then, will be the consequences of increased participation? Would it cause a collapse, seeing how incredibly much content could be produced? Would sites like Reddit and Wikipedia soar to new heights and in turn become major online economies, like others have before them – Facebook, YouTube and Google, to name a few?
Will crowdsourcing become the new way of getting things done? Will crowdfunding replace traditional investors? If 15% of those who have access to the web each gave you $0.10, you would have $55,500,000. That is insane, and surely more than enough money for any startup business to get on its feet.
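A quick back-of-the-envelope sketch can check these numbers (the user count is an assumption on my part – roughly 3.7 billion internet users, which is what the $55,500,000 figure implies):

```python
# Back-of-the-envelope check of the 90-9-1 split and the crowdfunding figure.
# Assumption: roughly 3.7 billion internet users (a circa-2017 estimate).
INTERNET_USERS = 3_700_000_000

# 90-9-1 rule: lurkers / occasionally vocal / regular content creators
lurkers = round(INTERNET_USERS * 0.90)
vocal = round(INTERNET_USERS * 0.09)
creators = round(INTERNET_USERS * 0.01)
print(f"lurkers: {lurkers:,}  vocal: {vocal:,}  creators: {creators:,}")

# Crowdfunding thought experiment: 15% of users give $0.10 each
raised = round(INTERNET_USERS * 0.15 * 0.10)
print(f"raised: ${raised:,}")  # raised: $55,500,000
```

Even under these rough assumptions the point stands: a tiny sliver of users produces nearly everything, and a tiny contribution from a modest fraction of them adds up very fast.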

So there you have it, that’s what I have planned for my thesis, as of now at least, and parts of what I have in mind for my session in class.
So if there are any lurkers out there, which I know there are, give me some feedback, write me a comment, or even better, tell me why you don’t want to or like to participate!
In advance, thank you.

 


Digital Literacy and The Silent Majority

This week’s blog will be divided into two separate parts. Two of my classmates will be presenting their thesis, each dealing with a different topic.

 

First of all, I’ll be talking about Digital Literacy. In my eyes this concept signifies the capability someone has to deal with digital media. A digitally literate person knows how to make use of online tools, what the implications behind them are, what the dangers are, etc. A topic that gets brought up a lot in this context is privacy on social media. It is often stated that many people don’t stop to think about the implications or possible consequences when they post something online. Various people say that this is something schools need to adopt in their curriculum. I do agree that schools can play an important role in teaching the skills needed to foster a good handling of online media, but I think that you will never be able to do this solely through education. A lot of it, I feel, comes down to trial and error. You can tell children all about the dangers of something, but if they don’t experience it themselves or through someone close to them, they will never truly learn it. The majority of it depends on themselves and their direct environment as well. It is a very important skill, one that will be fundamental in the future.

 

The second topic is that of the silent majority in participatory culture. I had never thought of a concept like this in a digital context; my understanding of it was always through a political lens. An interesting statement was that big data could potentially lead us to knowing what the general line of thought of the silent majority is. While this could indeed be a big development, a few concerns can be brought up. First of all, that of privacy: many people have already voiced their discontent with data gathering used to make advertising more personalised. Is the use of this data for political reasons then so much better? If anything, abuse of personal data in this context could have some very dangerous implications. What I do find interesting is ‘the 90-9-1 principle’ that was proposed in one of the articles. This states that 90% of a community are lurkers, 9% are sporadically vocal and 1% are incredibly vocal. I’ve always thought online communities were more defined by a sort of 80-20 or Pareto principle, whereby 80% of all the content comes from 20% of the community. I will look further into it, though.
All in all, I’m curious what my classmates will bring tomorrow.


Lurkers and the Silent Majority

“The Silent Majority” is, according to Julia Kirby, a phrase that President Nixon used to describe the people who were not against the Vietnam war, whom Nixon believed to be in the majority, but who were less vocal than the anti-war protesters. And during the 2016 presidential election, then-Republican candidate Donald Trump claimed that he would win the election, despite the polls saying the opposite. Trump justified his claim by referring to the silent majority – claiming that there were far more Trump voters than the polls suggested.

The idea behind the silent majority is simple: the most vocal are not necessarily the majority. Kelly McNamara writes about the 90-9-1 rule for online communities, which states that 90% tend to lurk without contributing, 9% tend to be more vocal by commenting and sharing, and 1% tend to be the most vocal by creating new content. While the numbers may not be exactly 90, 9 and 1, the idea is simply that most engaged people don’t contribute. These are often referred to as lurkers.

Whether you call them the silent majority or lurkers, I can’t help thinking that someone is making a big deal about something that is actually quite simple: not everyone has a desire to expose themselves by contributing online, and we can’t know what everyone is thinking about something. The silent majority is not some organized, underground revolutionary force. It’s a statistical blind spot. It’s not knowing everything about everyone (thankfully).

Of course it’s interesting to look into why some people don’t wish to contribute much online. And it’s interesting to ask: how would things look if they did? If the internet is to be a democratic tool, then everyone should have the same opportunities to contribute. So if lurkers are not contributing because of some external factors such as fear of internet trolling or low digital literacy, then that is a problem. And it should be addressed.

Thesis update

Hi everyone, just a small update on my thesis (a big one actually, if you look at it). I finally managed to get into contact with my thesis coordinator. While my topic is still Algorithmic Awareness, my focus has shifted a bit. Instead of looking at the consumer side, I’ll turn towards the producer side. I’m taking a look at how news producers think about the algorithms behind Facebook and how they try to circumvent them. The central theoretical framework in this will be gatekeeping theory, whereby personalisation through Facebook can be seen as a second gatekeeper on top of the news organisations. The bulk of my literature review still remains the same, since I’m still talking about personalisation on the web, how Facebook works, the filter bubble and Algorithmic Awareness. The only difference is that in my last part I’ll be focusing on the producer side instead of the consumer side.

So from all the things I talked about during my presentation on Thursday, a few things have fundamentally shifted. For that reason I’m actually not going to upload it, since I feel it no longer represents the structure nor the goal of my thesis well enough.


The Silent Majority

I read Nicholas’ readings about the silent majority and found them interesting!

It was fun to get an analyst’s view of how to reach this silent majority, for example by having anonymous surveys when dealing with subjects you’d rather not be too public about. I think that generally the term “Silent Majority” has a somewhat bad rep in this day and age, probably stemming from Trump supporters claiming the term by saying that “The Silent Majority stands with Trump” over and over, often putting this on signs at protests or posting about it on social media… which is a bit ironic.

I’ve generally thought of the term as a way of saying that you, for example, disagree with current immigration laws etc. but don’t want to be vocal about it because of the backlash that often follows from, in my opinion, sane people.

So bearing this in mind, Nicholas’ readings showed me that from a data mining/analytical perspective the Silent Majority can be anything related to people “lurking” and not necessarily engaging in the same manner as the more vocal participants of, say, a message board.

On old message boards, before Reddit pretty much decimated them, you could always see how many people were online right now as “lurkers” or logged in, which I think maybe helped you get a picture of how vast the Silent Majority was.

Maybe something like that should be implemented on Facebook etc.? So that whenever you’re browsing a comment field you could get an estimate of how many people were lurking and how many were contributing. I’m sure Facebook already has algorithms for this – I mean, this is the kind of thing they earn money from – but it would be nice, I think, for vocal contributors to see that people are reading their comments, so that the contributors don’t feel that they’re “shouting into nothing”, so to speak. It could prevent the tide of disenchantment with online discussion that I feel is growing – of course, it could just make it worse.


Facebook and Hecking Algorithms

I read: Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed by Emilee Rader and Rebecca Gray.

It’s a research paper that looks at how people perceive their Facebook News Feed, and how they think it works. Interesting stuff!

What stood out to me was this little piece of information:
“Respondents indicate they believe an entity, characterized as Facebook or as an algorithm, prioritizes posts for display in the News Feed. Also, which posts they see depends on what the system knows about their preferences and characteristics, post popularity, and past interaction with other users. 80% No, 20% Maybe/Yes”

I thought it was common knowledge that the News Feed and other similar algorithms cherry-pick what is presented to you. Like, if I google the word “Horse”, I will get a completely different list of hits than somebody else. It’s interesting to see that the people in the survey are unaware of the extent to which Facebook tracks them.

Everything from your IP address, to analyzing your pictures, to following your location even when Facebook, or your phone, is switched off, is used. As well as how you comment, what you comment on, what you share etc., to better direct ads your way, and also show you posts you might be interested in interacting with. A good ol’ ad blocker does wonders for most of this, coupled with a VPN, but I guess those things have yet to seep into the mainstream consciousness.


Filter bubble.

This week’s blog will be on algorithms: how they work, how they shape our movement on the web and what’s available to us, and how to break the cycle.
A fellow student’s thesis revolves around algorithms and the filter bubble, so this week it’s his readings I’ve been looking into, and it will be his thesis in the crosshairs in the days to come. If the readings are anything to go by, it will be a most productive session we have in store.

The term filter bubble was first coined by Eli Pariser around 2010, and here is the Wikipedia definition of what it is:
“A filter bubble is a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history. As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. The choices made by these algorithms are not transparent. Prime examples include Google Personalized Search results and Facebook’s personalized news-stream. The bubble effect may have negative implications for civic discourse, according to Pariser, but contrasting views regard the effect as minimal and addressable.”

In the opening pages of chapter two of his book “The Filter Bubble: What the Internet Is Hiding from You”, Eli Pariser talks about how the news press and published journals lost their advertisement revenue because the same content became available online. Those who used to purchase ads in newspapers now turned to websites instead. Anyone who has spent time online over the past few years will have noticed the evolution of online advertisement. At first it was “pay to be on the site”, and you got the same ads on the same pages because that’s what companies paid for. Then it evolved into more regional ads: suddenly they were in your native language, and for stores and companies in your country. This again evolved into the stage of IP-targeted commercials, where your IP address was used to give you ads from local stores and businesses. Lastly, this evolved into the data-mining algorithms that tailor online ads especially for you, by looking at your search history, website visits and what links you’ve clicked on other websites. Algorithms are now in charge of all online advertisement, and they are uncannily accurate.

It is hard not to leave any traces behind when traversing the web, but if you manage to stay somewhat under the radar, the algorithms will have a hard time targeting you. They will instead show you commercials of interest to the populace in your general area or town.
Some easy steps you can take are to clear your web history, and make sure to delete cookies as well, since this is where most of the algorithms gather their information. You can also make sure not to be logged in to sites like YouTube or your Google account when doing searches. This will prevent them from linking and storing information about you on their servers as well as in your cookies.
Ad-block extensions for your web browser used to be very popular, but websites soon learned how to block their content from being shown if you had such an extension. Sites like YouTube took this a step further and deliberately gave users with ad-blockers the longest commercials and removed the “skip” function that commercials lasting more than 30 seconds have.
Ad-blockers still work, though more and more websites are getting better at blocking the blocker, literally.

Pariser later talks about how the future of news online will be personally tailored, with a few major events being present and the rest being all local news, tailored to meet your specific interests and likes. The danger of having such a personalized news filter is that the odds of missing out on a major event become all the more present. By filtering in only a few global events, there are plenty of cases that might be ignored and left out, cases that you might find interesting and of importance. The algorithms won’t take this into account, though; they will only report to you that which they have parameters for. Today, at least, you can get varied news by visiting the different major news sites and local sites, but when you read articles like this http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/ – where a scarily high number of people state social media as their main source of news – then things get complicated.

These algorithms are affecting all our lives, whether we are aware of it or not, and it can be an increasingly difficult task to circumvent, break or reset them.
When reading the work of Emilee Rader and Rebecca Gray on algorithmic curation in the Facebook news feed, it is apparent that we share a concern: concern at people’s ignorance of what algorithms actually produce.
The algorithms are biased, the information they filter and show you on your feed is biased, and in the end, if you do not realize this, those “objective and impartial” pieces of information you are given will give you a false sense of neutrality.
Knowing the information you receive is biased is one thing, but doing something to change that is nigh on impossible, at least when it comes to Facebook.
There are ways to increase the diversity of what you are shown, and that is simply by pressing like on a lot of different and unique things. The more stuff you like, the more diverse (or not) your Facebook wall will become, or at least that is the thought behind the algorithm. So keeping in mind what you give a thumbs up, and what not, can make a big difference in the long run.

One issue that Rader and Gray point out is that in the privacy settings on Facebook, you can select who can and cannot see your posts, and you have no real way of telling if someone has elected to put you on such a list. From the questionnaire they ran, 73% of those who answered believed that they were not shown all of their friends’ posts. This could be due to different reasons, like mentioned above: people electing to remove a person from viewing their posts. An issue that was also brought up in the questionnaire was the fact that some of those who answered felt that Facebook filled their wall with posts that the algorithm “thought” they would find interesting – in effect, the algorithms taking away choices from us.

My personal issues with, and use of, algorithms.
Firstly, I must say that I am a victim of these algorithms as much as the next person, but I am fully aware of them, and I actually go to great lengths to throw them off balance.
I have both a Netflix and a YouTube account, where algorithms are hard at work tailoring films, series, streamers and content just for me.
The way I break the Netflix algorithm is that I have created multiple profiles. I have my own, which I use for the movies and series that I like, namely sci-fi and crime, but I have another profile that I share with my wife. On this profile we watch series together: comedies, stand-up shows and the odd documentary. I also share the Netflix account with a friend of mine, who, in return, shares her ViaPlay account. We have vastly different tastes in both films and series, and by letting her use my account, she looks up stuff I would never consider. Or so I thought. It turns out we have a few interests in common, films and series I would not have found if not for my friend using my account.
As for YouTube, I have channels I subscribe to, I have my musicians that I look up and I have my favorite streamers. This gives me basically the same content every time I log on; my “recommended” tab is always the same. Not the same videos or songs, but the same in terms of content. It’s gaming, music and British panel shows.
The way I break this cycle is that once or twice a month, I have friends over for a “YouTube night”.
It basically consists of my friends and me looking up all sorts of stuff, showing each other certain gems we’ve found in the course of our browsing of YouTube. What happens is that in the week or so after my friends have been over, my “recommended” tab is full of new and unique content. Suddenly I have a ton of new stuff to explore, or not, if I so choose, but at least I have fresh content and new stuff to view.

How do you break or interact with the algorithms affecting your time online? Please leave a comment if you have any thoughts on the issue.

Until next time.


The Filter Bubble and the News

The filter bubble is a technological phenomenon, where one’s opinions are amplified by algorithms that recommend content that one is more likely to be interested in, while filtering out all other content (Flaxman, Goel and Rao 2016, 299). If, for example, Google’s search algorithms have learned that you are a liberal person, the results of political search queries may be more likely to be liberal than conservative. And if you watch a lot of horror movies on Netflix, you are more likely to see suggestions for these types of movies in the future.
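The “selective guessing” described above can be sketched as a toy example (the topics, click history and scoring rule here are invented for illustration; real recommenders at Google or Netflix are vastly more complex):

```python
# Toy illustration of filter-bubble-style personalization (hypothetical data):
# rank candidate items by how well they match a user's past clicks,
# then show only the best matches, filtering out the rest.
from collections import Counter

def personalize(candidates, click_history, top_n=2):
    """Score each candidate topic by how often it appears in the click
    history, then keep only the top_n highest-scoring topics."""
    counts = Counter(click_history)
    ranked = sorted(candidates, key=lambda topic: counts[topic], reverse=True)
    return ranked[:top_n]

# Hypothetical history: the user mostly clicks liberal articles and horror movies
history = ["liberal", "liberal", "horror", "liberal", "horror"]
feed = personalize(["conservative", "liberal", "horror", "sports"], history)
print(feed)  # ['liberal', 'horror'] -- conservative and sports never get shown
```

The bubble arises from the feedback loop: the user can only click on what is shown, which reinforces the same scores on the next round.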

In his book, “The Filter Bubble” (2011), Eli Pariser tells the story of how journalism has gone from a passive receiving of information from a few publishers to an overwhelming wealth of articles produced by both professionals and amateurs. This creates a problem of how articles are presented to the reader. Since no one is capable of reading every article being produced, some filtration has to take place. The problem is that when this filtration is based on algorithms filtering information based on what they think we like, people are less likely to be exposed to new ideas and challenging information.

I believe that the filter bubble is potentially a serious problem for democracy and public debate. I also believe, however, that it is necessary and a result of the natural development of the digital world. In his book, “The Googlization of Everything” (2011), Siva Vaidhyanathan argues that the amount of available information online leads to information overload. The very title of one of his chapters, “The Googlization of Memory”, hints at how our very human and biological processes – such as memory – are being digitally expanded. If one accepts the wealth and availability of information online as an extension of our memory, then there must also be an extension of our biological filtration processes and working memory that – just like the algorithms of the filter bubble – filter information based on what is believed to be in our interest.

It is difficult to find a balance between the necessary algorithmic filtration systems and the democratic dangers of the filter bubble. For starters, I do miss the option to turn off filtration for a while – an exploration mode where the information presented is not based on any guesses about what I might like. And hopefully, awareness of the filter bubble will help people become more critical of their news sources.

 

Sources

Flaxman, Seth, Sharad Goel, and Justin M. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption”. Public Opinion Quarterly 80, no. S1: 298–320.

Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. London: Penguin Books.

Vaidhyanathan, Siva. 2011. The Googlization of Everything. Berkeley: University of California Press.