What are digital media giants like Google, Facebook and Twitter doing to combat the rise of fake news? And what steps can your agency take now to protect your brand and your media investment?

Rowdie Erwin, Digital Media Strategist

With the explosive growth of the internet over the last 20 years, the breadth of available information and shared knowledge has never been greater. Like any repository of information, it spans every manner of subject matter and an endless number of use cases.

That said, for every website that is fact-checked, credible and trustworthy, there are ten (and likely orders of magnitude more) that are the opposite. User Generated Content (UGC) has become king in the last ten years; Millennials report trusting information received through UGC 50% more than information from other media sources, including TV, newspapers and magazines (Ipsos Millennial Social Influence Study, 2014).

Unfortunately, UGC also includes mountains of misleading, dubious, and often damaging content that goes unchecked into the free ether of the web, available for anyone to interpret and spread as fact. The issue has been increasingly recognized since the 2016 election cycle, most popularly under the “Fake News” moniker.

Though “Fake News” (verifiably false and potentially damaging stories or articles written to discredit an opposing viewpoint) was born as a political issue, UGC has developed and contorted itself into a whole new slew of health risks and societal burdens for the general public, the tech companies responsible for hosting the content, and the political infrastructure responsible for those companies’ governance.

The spread of existing conspiracy theories and the rise of new ones have picked up significant momentum in recent years. This has resulted in irreversible damage at the micro level (“Trapped in a hoax: survivors of conspiracy theories speak out,” Ed Pilkington, The Guardian) and has proven deadly at the macro level (“Drowned out by the algorithm: Vaccination advocates struggle to be heard online”).

To aid in combating the rise of these issues, on January 25th Google subsidiary YouTube announced plans to improve its recommendation algorithm by “reducing recommendations of borderline content and content that could misinform users in harmful ways – such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.”

Facebook has also been in a very public position of damage control, and under increased regulatory pressure, since its implication in the Cambridge Analytica scandal, which raised a mountain of red flags about user privacy and security, as well as who can purchase ads on the platform (and how easily; looking at you, Russia). Like Google, Facebook has been criticized from all directions for the mismanagement, or complete lack, of moderation on its platform:

“…by conservatives for what they perceive is a liberal bias, by liberals for allowing white nationalism and Holocaust denial on the platform, by governments and news organizations for allowing fake news and disinformation to flourish, and by human rights organizations for its use as a platform to facilitate gender-based harassment and livestream suicide and murder. Facebook has even been blamed for contributing to genocide.” – Jason Koebler & Joseph Cox

Facebook’s initial (publicized) steps toward a solution were to hire 7,500 moderators and to expand the platform’s AI capabilities, but given the scale of the issue (roughly 7 billion posts a week), progress has proven quite difficult.

Twitter is an important part of this conversation as well. The platform has completely reconfigured the geopolitical and civic landscape over the last 10 years, changing the way important issues are discussed and progress at the local, city, state, national and international levels. The hashtag began as a way for brands to engage customers but has since been used to amplify the conversation around social issues, trigger political engagement and protests, and even help overthrow dictators.

Twitter faces many of the same difficulties that Facebook does, though, and has drawn massive criticism over the methodology used to moderate its content. While bullying and blatantly racist or homophobic attacks fall neatly under a violation of Twitter’s platform usage policy, the conversation becomes a grey area, and increasingly difficult to police, when it comes to the expression of social and political ideologies, most notably white nationalism, which Twitter CEO Jack Dorsey has been slow to condemn or speak to directly.

An entire article could be written on each of the three companies and associated issues mentioned above: How do you moderate content posted by the general public in a way that doesn’t compromise the principle of free speech that our country was founded upon? Should companies with as much influence on the everyday lives of the public be responsible for making decisions on universal “truths”, especially in instances where a high percentage of the population might believe in an opposing viewpoint? If a government can indirectly exert this much control over the thoughts, opinions, and beliefs of its constituents, where is the line drawn?

These issues are layered and multi-faceted and will likely take years to truly address in full, but brands need to advertise today. So, what can agencies do to protect their clients from the negative impact of unregulated user generated content in the meantime?

At KC, we address brand safety at every step of the media buying process:

  1. Rigorous third-party (3P) vetting process – Before adding any 3rd-party vendor to a media plan, we ensure that the partner in question is willing to provide complete transparency into where all ads will be served via domain-level reporting. Any hesitation on the matter is an immediate point of disqualification from our media plan.
  2. Partner Solutions for Increased Brand Safety – Each media plan is subject to a unique combination of IAS, DoubleVerify, Peer39, Grapeshot and MOAT brand safety parameters that ensures every impression served will be viewable (by humans) in contextually relevant, brand-safe locations.
  3. The Trade Desk and White Ops MediaGuard integration – Our media buying platform is integrated with White Ops, an industry-leading invalid traffic and fraud detection platform credited for its partnership with the US government in foiling several multi-million dollar international ad-fraud schemes, including the storied Methbot and 3ve operations. This integration scans impressions in real time across participating supply-side platforms, blocking fraudulent impressions before they’re purchased. KC’s media plans only serve impressions against White Ops-verified SSPs.
  4. Ongoing inventory management – Media buyers at KC monitor domain-level reporting on a weekly basis for opportunities to blacklist sites that appear suspicious or do not meet our standard of quality.
  5. Transparent Client Communications – KC will always make site lists available to clients for inspection, at any time. Simple as that.
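To make step 4 concrete, the weekly domain-level review can be partly automated. The sketch below is a minimal, hypothetical example (the report format, the `viewability` field, the 40% floor, and the example domains are all assumptions, not KC's actual tooling): it flags domains from a domain-level report that either appear on a manual blocklist or fall below a viewability threshold, producing candidates for a buyer to review before blacklisting.

```python
# Hypothetical weekly domain-level report scan. In practice the rows would
# come from a CSV export out of the buying platform; here they are inlined.

BLOCKLIST = {"fake-news-site.biz", "example-clickfarm.info"}  # manually curated
VIEWABILITY_FLOOR = 0.40  # assumed minimum acceptable viewability rate

def flag_domains(report_rows):
    """Return domains that warrant human review for blacklisting."""
    flagged = []
    for row in report_rows:
        domain = row["domain"]
        viewability = float(row["viewability"])
        # Flag anything already blocklisted or underperforming on viewability.
        if domain in BLOCKLIST or viewability < VIEWABILITY_FLOOR:
            flagged.append(domain)
    return flagged

rows = [
    {"domain": "reputable-news.com", "viewability": "0.72"},
    {"domain": "fake-news-site.biz", "viewability": "0.65"},
    {"domain": "low-quality-arbitrage.net", "viewability": "0.18"},
]

print(flag_domains(rows))  # → ['fake-news-site.biz', 'low-quality-arbitrage.net']
```

A script like this doesn't replace human judgment; it simply narrows the weekly review to the handful of domains worth a closer look.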

It is the responsibility of agencies and their media buyers to take ownership of where their clients’ ads are served, using the tools that are available. At KC, we’ve placed a high level of importance on this by taking the steps detailed above. If your agency isn’t willing to talk about the steps it’s taking to address brand safety, give us a call.
