Facebook declared war on 'fake news' after several media outlets found that a group of young people in a Macedonian town was earning large sums by exploiting the platform to spread false stories favorable to Donald Trump.
A few days earlier, Google had suspended the AdSense ads with which Boris, one of the protagonists of this peculiar industry, had made around $16,000 in just four months. "We can't make money here with real work," he told Wired.
Earning revenue is one of the two great incentives for creators of fake news. The other is to influence people ideologically, rallying support for or opposition to positions in a given political arena, as Madhav Chinnappa, director of strategic relations and news at Google Europe, recalled at a meeting of the Association of Information Media in Madrid.
A few meters from the venue of that meeting, in the Spanish Congress, fake news had already entered the deputies' agenda, and the Government had included it in its national security strategy. Interest in and concern about disinformation are growing in Spain, while some media outlets push the limits to profit from it.
The profitable scale of lies
It is relatively easy to make money with fake news thanks to the lack of oversight that comes with automating large volumes of information and transactions. A user who creates a website hosting this kind of content can, within minutes, surround it with advertising managed through services such as MediaFem, AdSense, Infolinks or Popunder. Control in these cases is often limited to routine checks ensuring there is no sensitive content such as pornography or weapons.
Once the site's monetization structure is in place, traffic is needed, and Facebook and Google are the best allies here. Investing a minimal advertising budget on these platforms can attract well-segmented users whose visits, besides possible clicks and revenue, provide further organic, free distribution through sharing and linking. All that remains is to sit back and wait for results.
If a fake story takes off and reaches more and more people, the cents each ad click earns can add up to very respectable figures. Dimitri, another of the young Macedonian creators of fraudulent content, told NBC he had made $60,000 in six months thanks to clicks from Trump supporters. That same audience provided the creators of libertywriters.com with up to $40,000 a month in the run-up to the US elections.
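The arithmetic behind these figures is simple: visits times click-through rate times cost per click. A minimal sketch, where the traffic, click-through and payout numbers are hypothetical illustrations, not figures from the article:

```python
# Back-of-the-envelope model of fake-news ad revenue.
# All numbers below are hypothetical, for illustration only.

def monthly_revenue(daily_visits: int, ctr: float, cpc: float) -> float:
    """Revenue = daily visits * click-through rate * payout per click, over 30 days."""
    return daily_visits * ctr * cpc * 30

# A viral site: 200,000 visits/day, 2% of visitors click an ad,
# each click pays the publisher $0.10 on average.
revenue = monthly_revenue(daily_visits=200_000, ctr=0.02, cpc=0.10)
print(f"${revenue:,.0f} per month")  # $12,000 per month
```

Even modest assumptions yield thousands of dollars a month once a story goes viral, which is why scale, not accuracy, is the only variable that matters to these publishers.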
Therefore, the responsibility of the big internet players cuts across the whole problem. Google provides the means to monetize fake news (even if it withdraws that support once a site is reported and verified), and it also earns money from promoting this content. Facebook advertising is likewise used to spread it, besides serving as a zero-cost organic amplifier, even through pages with respectable audiences.
Zuckerberg's platform announced months ago that it would block ads from pages where false news is repeatedly detected, although this experiment calls its effectiveness into question. For its part, Google has also announced that it is working to improve its controls, although a few weeks ago it was found to be showing ads for fraudulent content on sites dedicated precisely to combating it, and one report puts the search engine's revenue from fake news in the millions.
What does my brand do there?
The problem of lack of control is accentuated by the rise of programmatic advertising: automated real-time bidding for ad slots that in practice offers poor control to both parties, the advertiser who pays for its brand to appear somewhere and the publisher who reserves space on its website for ads whose content it doesn't really know. It is impossible to monitor the content of transactions this fast and numerous, except after they have already run and produced results.
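The core of the problem can be sketched as a toy auction (brand names and bid amounts are hypothetical): each impression goes to the highest bidder in milliseconds, and nothing in the transaction inspects the page the ad will appear on.

```python
# Toy model of a programmatic ad auction (hypothetical data).
# Note what is absent: no step examines the content of the page
# where the winning ad will be shown.

from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # bid in dollars per thousand impressions (CPM)

def run_auction(bids: list[Bid]) -> Bid:
    """Highest bid takes the ad slot; no content review happens here."""
    return max(bids, key=lambda b: b.amount)

bids = [Bid("BrandA", 2.10), Bid("BrandB", 3.45), Bid("BrandC", 1.80)]
winner = run_auction(bids)
print(winner.advertiser)  # BrandB
```

Real exchanges add targeting data, floors and fraud filters, but the essential asymmetry is the same: the decision is made on price and audience, not on the page's content.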
That is why the concept of 'brand safety' has taken hold in recent times: more and more companies want to make sure they will not appear on sites harmful to their image and will not fund websites contrary to their values, fraudulent content among them.
Tools such as ads.txt (a public file in which a publisher lists the parties authorized to sell its ad space) help brands avoid appearing on inappropriate fake-news sites, which often pose as respectable media. Sometimes the creators of this class of website choose domains reminiscent of well-known publications, or mimic their look, so as to capture money that would otherwise go to those outlets.
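An ads.txt file is just a plain-text list served at the root of a publisher's domain (e.g. example.com/ads.txt); buyers check it to verify that whoever is offering the inventory is actually authorized. A minimal example, with placeholder account IDs for a hypothetical publisher:

```text
# ads.txt for a hypothetical news site
# Format: ad system domain, publisher account ID, relationship, certification authority ID (optional)
google.com, pub-1234567890123456, DIRECT, f08c47fec0942fa0
appnexus.com, 7890, RESELLER
```

A fake-news site impersonating a known outlet cannot alter the real outlet's ads.txt file, so unauthorized resellers of its spoofed inventory can be detected and refused.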
Those weird stories under the news
Another essential engine for generating fake-news revenue is the link-recommendation services many pages now use: modules usually shown at the end of articles, built on clickbait headlines (deceptive, twisted, sensationalist) designed to attract the reader's click, and served by companies such as Outbrain, Taboola or RevContent.
Everyone wins with these suggested-content networks except users. Pablo Reyes, creator of several fake-news pages interviewed by BuzzFeed, said that "as long as the ads are served to real people, there is no problem," alluding to the fact that these networks don't mind working with fraudulent content. The problem arises for reliable media that use these recommendation systems, since beneath credible, carefully produced content appear other stories that are anything but.
How to dismantle the disinformation business
Facebook reached agreements with fact-checking organizations a year ago to signal to users whether content was more or less credible, but the system's operation did not go unquestioned. Some of the journalists who worked on it told The Guardian about conflicts of interest and mechanics in need of improvement, and a study warned that unlabeled false news then enjoys a greater presumption of truthfulness, which creates a new problem of scale.
The platform has therefore decided to stop flagging disputed items and instead offer other perspectives on the same topic, so that users have context to judge reliability for themselves, an approach it had already been testing since April. The Facebook News Feed product design team explains the change on Medium, citing among other arguments research linking the disputed flag to a reinforcement of belief among those who share something false. Another study suggests that providing several points of view on the same issue is effective in neutralizing false news.
From the economy of attention to the economy of trust
Beyond each company's own initiatives, Google, Facebook and Twitter have recently joined forces around a project built on 'trust indicators', which let users gauge the credibility of what they find based on who publishes it.
This concept also underlies the improved visibility of logos and publisher information attached to content on Facebook. The platform wants users to be able to judge content by the credibility its authorship inspires. The change helps correct the fact that roughly half of users could not remember who had published a story they found on the platform, which increased the risk of unwittingly spreading false news.
The purpose of these initiatives is to reduce the visibility of this type of content, which in turn would hit its creators' traffic and their ability to generate income at scale from it. They also aim to lessen the possible consequences for any country's domestic politics, a fear that has led Ireland to propose fines of up to 10,000 euros and prison sentences of up to five years for spreading false news.
Also published on Medium.