The post is worth a read. But this, and many of the other stories about fake news on Facebook, reminded me of this from back in 2009: Scamville: The Social Gaming Ecosystem Of Hell. It also reminded me that I started writing about this back in 2009, but it looks like I never published it. Here's a snippet from that draft:
What I see is not necessarily what you get...
It's no secret that the core technology of the Internet enables one party to personalize a message for another. From the moment your client computer sends a request for content to a web server, that server can shape the content it sends back based on who you are, what you requested, and where you came from. Modern internet marketing uses this capability all the time, and futurists suggest that everything we experience will only become more personalized.
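That request-driven shaping is mechanically trivial, which is part of the point. Here's a minimal sketch; the request fields mirror real HTTP signals (the Referer and User-Agent headers), but the function and the variant names are hypothetical, not any particular server's code.

```python
# Sketch: a server choosing a content variant from request attributes.
# The branching keys (referrer, user agent) are the same signals real
# servers use; the variant names are made up for illustration.

def select_content(request):
    """Return a content variant shaped by who is asking and how they arrived."""
    if "partner-ad.example" in request.get("referer", ""):
        return "landing-page-from-ad"   # visitor arrived via an ad partner
    if "Mobile" in request.get("user_agent", ""):
        return "mobile-landing-page"    # device-specific variant
    return "default-landing-page"       # direct visitors see the plain page

# Two users requesting the same URL can receive different pages:
direct = select_content({"referer": "", "user_agent": "Mozilla/5.0"})
via_ad = select_content({"referer": "https://partner-ad.example/click",
                         "user_agent": "Mozilla/5.0"})
```

Nothing here is exotic; every modern web framework exposes these request attributes, and personalization is just conditional logic built on top of them.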
The problem is that, even for those of us who get it, it can be difficult to fully grasp what "what I'm seeing may not be what you are seeing" really means. The issue is magnified when organizations deliberately use these capabilities to hide their worst practices:
From the Video Professor post:
What you see when you first hit the site depends on how you got there -- directly or via an advertising partner. The least scammy version is what you see if you go to videoprofessor.com directly. On the home page in very small font is a statement that you are going to be charged $290 if you engage in a transaction with them. But that’s the only on-screen disclosure you’ll see.

From the How To Spam Facebook Like A Pro post:
Cloaking: This is when you show a different page based on IP address. We and most other ad networks would geo-block northern California -- showing different ads to Facebook employees than to other users around the world. One of the largest Facebook advertisers (I’m not going to out you, but you know who you are) employs this technique to this day, using a white-listed account. Our supposition is that it makes too much money for Facebook to stop him. Believe me, we have brought this to Facebook’s attention on several occasions. Here’s what this fellow does -- he submits tame ads for approval, and once approved, redirects the url to the spammy page. To be fair, players like Google AdWords have had years more experience in this game to close such loopholes.

The thing is, compliance and auditing are all about third-party perspective -- the assumption that the reviewer sees what you or I see. What happens when the regulator doesn't see the same thing the customer sees? From restaurant reviews to personalized customer experiences, on some level, people expect that the average customer experience will not equal the 'reviewer' experience. VIPs often get special treatment. But if that VIP experience is built around circumventing rules or laws, what kind of label do you put on it?
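The cloaking the quote describes takes only a few lines to implement, which is why it's so hard to stamp out. The sketch below is a hedged illustration, assuming a hypothetical reviewer network range standing in for "northern California"; the names and return strings are made up, not any real ad network's code.

```python
# Sketch of IP-based cloaking: serve the approved creative to the
# reviewer's network, and the real page to everyone else.
import ipaddress

# Hypothetical stand-in for the reviewer/ad-network IP range.
REVIEWER_NETWORK = ipaddress.ip_network("203.0.113.0/24")

def serve_ad(client_ip):
    """Cloak: reviewers see the tame ad; ordinary users see the real one."""
    if ipaddress.ip_address(client_ip) in REVIEWER_NETWORK:
        return "approved-tame-ad"
    return "spammy-landing-page"

# The ad reviewer and an ordinary user get different content for the same URL:
reviewer_view = serve_ad("203.0.113.10")
user_view = serve_ad("198.51.100.7")
```

This is exactly why auditing-by-sampling fails here: the audit traffic itself is the signal the cloaker keys on.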
The same can be said of experiences shaped to increase the likelihood of a transaction. Remember the movie The Sting? The scam is all about creating an illusory experience for the mark, shaping reality into an environment favorable for a transaction. So where does optimization end and scam begin? To quote from Arrington's post:
Here’s an easy way to determine if something is a scam – would users pay for it if they knew exactly what they were buying? In Video Professor’s case, the answer is no, and the company has to resort to tricking the user into paying nearly $300 for a bunch of CDs.

As we move down the path of personalized experiences, using technology to manipulate consumers through shaped reality is getting easier. Even with today's technology, it's possible for two computer users sitting right next to each other to visit a site (or series of sites) and receive completely different content experiences. In The Sting, the mark is convinced of the manipulated reality by a host of actors who endorse the experience; on the computer, it's easy to get sucked into the idea that what you are seeing is the same thing everyone else sees.
What's more, most of our defenses against this revolve around the idea that for our reality to be manipulated, we need to be in a 'closed' environment that prevents third-party validation. You might think, "If I Google the Video Professor guy, perhaps I can find out if it's legitimate." Or perhaps you take it one step further and Google "Video Professor Scam", expecting to find a series of top-ranked pages detailing customer complaints or other news. Instead, the top result from my most recent search returns a link to a press-release archive site that includes a link to a 50% discount on the Video Professor product.
So back to the election and fake news -- later this morning, I came across this post on Recode, Let’s get real. Facebook is not to blame for Trump, by Joshua R. Williams. Okay, so here's the emphasis in this one:
Much of the coverage and outrage has been directed toward social media, its echo chambers, and specifically those of the Facebook platform. While, to be sure, much of the fake or inaccurate news is found and circulated on Facebook, Facebook is not a news outlet; it is a communication medium to be utilized as its users so choose. It is not the job of Facebook’s employees, or its algorithms, to edit or censor the content that is shared; in fact it would be more detrimental to do so. This is for two very good reasons:
One, either human editors, or artificial intelligence editors, by removing one item or another will appear to introduce bias into the system. The group whose content is being removed or edited will feel targeted by the platform and claim, rightly or wrongly, it is biased against their cause. Even if the content is vetted and found to be true or false.
Two, censorship in any form is bad for the national discourse.
So rather than blaming Facebook or other platforms for the trouble in which we find ourselves, let’s give credit where credit is due: The American people.

The emphasis has been added by me, because this point is fundamentally wrong. Facebook is a marketing platform that makes the majority of its revenue connecting businesses with promotional goals to the "users" on the platform. To quote from this post, Why You Should Sponsor Your Social Media Posts (emphasis added by me)...
Because of all this, Facebook is usually the first place business marketers turn to for the distribution, promotion, and amplification of their ad content and campaigns, which makes it hard for businesses, especially new or small businesses, to find a place among all the clamor and competition for their posts to find an audience. Keep in mind, Facebook is no longer a good source for organic marketing outreach. It is now a pay-to-play network, or a network that gives special preference and advertising priority to businesses that can pay the most to be the first result viewers see on their social media pages, which can be bad for small or new businesses trying to find sponsorship for their own social media posts.

Well, it's pay-to-play unless you can work your way organically into the feed. In broader terms, you might consider that "native advertising." To quote from Wikipedia:
Native advertising is a type of disguised advertising, usually online, that matches the form and function of the platform upon which it appears. In many cases, it manifests as either an article or video, produced by an advertiser with the specific intent to promote a product, while matching the form and style which would otherwise be seen in the work of the platform's editorial staff. The word "native" refers to this coherence of the content with the other media that appears on the platform.

The thing is, Facebook has already built marketing personas for these demographics, through the content that's delivered in the feed. And similar to the Scamville-era issue, if you can target the gullible and the likely-to-be-scammed, you can significantly increase your ROI.
The reality is that there are some broader fundamental problems with Facebook and the manipulation of its "user" base. Much like the happy-feeds-make-people-post-more-happy-stuff, sad-feeds-make-them-post-more-sad-stuff tests, the real impact of the "feed" is probably far more frightening than most would choose to admit. At some level, review and regulation should probably be considered -- but that probably won't happen as a result of an election that was itself probably manipulated in some part by the platform.