
Humanness Score – Open-Standard, Peer-Reviewed for Digital Advertising


Having spent the last 20 years in New York’s Silicon Alley and seen the entire story arc of how the Internet has changed digital advertising forever, I have the privilege and the duty to publish, and submit for peer review, a Humanness Score™ for Digital Advertising. It was born of the need to unify the industry in the war against all forms of digital ad fraud, and against an enemy that operates remotely, from the shadows.

It is my conviction that digital advertising is the most effective form of all marketing activity and is poised to become the central and dominant means by which advertisers reach and interact with human customers, something not possible with historic, one-way media. To expeditiously reach this ambitious future state, every actor in the digital ecosystem must act with firm resolve to eliminate fraud and malpractices and to earn and retain the trust of peers.

An ecosystem that persists must balance the needs and motivations of all of its members — in our case, the users, the publishers, and the advertisers — and the value-added practitioners that serve each of these. Any imbalance will fail; any selfish gains will be short-lived. In digital advertising, there are countless inputs, metrics, and things to optimize. But none is more important than ensuring the ads are viewed by humans, and not by bots — the fraudulent agents used by criminals to create ad impressions and clicks.

Why We Need the Humanness Score

The reason for the Humanness Score (HT @BradBerens) is that there is no central, standard database of verified humans that can be used in digital advertising. We simply have users who visit websites, in most cases anonymously, because they are not required to log in — think weather, news, sports, and magazine sites. We already derive data points that yield better targeting — e.g., the list of sites users visit, social media chatter, search keywords, and even items added to shopping carts and then abandoned. But because of the prolific activity of bots over the entire history of digital advertising, none of these parameters alone can guarantee “humanness”; every single one has been documented to be faked by bots. With the Humanness Score, advertisers can choose to optimize their ad spend by favoring those entities (publishers, exchanges, etc.) that show higher humanness.

The Humanness Score™ for Digital Advertising

The Humanness Score is an indexed number from 0 (non-human) to 100 (human) which shows the relative “humanness” of the users that visit websites and cause ad impressions to load. It can be measured on-site (on a website) and in-ad (in an ad impression). The score is indexed relative to peers, using then-current data for the calculations. For example, publishers are indexed against other publishers, while ad impressions (e.g. from ad networks) are indexed against each other. This ensures that as the entire “sea level” rises, each certified member must continue to innovate to maintain higher scores.
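The peer-relative indexing described above can be sketched as a percentile rank within the peer group, recomputed each scoring period. The percentile approach and the function below are illustrative assumptions of mine; the article specifies only that peers are indexed against one another using then-current data.

```python
def index_against_peers(raw_scores: dict) -> dict:
    """Map each entity's raw humanness measurement (e.g., the fraction of
    verified-human impressions) to a 0-100 rank within its peer group.

    Uses the midpoint percentile rank so that ties share a rank and the
    whole group re-indexes as the "sea level" rises.
    """
    values = sorted(raw_scores.values())
    n = len(values)

    def percentile(v: float) -> float:
        below = sum(1 for x in values if x < v)
        equal = sum(1 for x in values if x == v)
        return 100.0 * (below + 0.5 * equal) / n

    return {entity: round(percentile(v)) for entity, v in raw_scores.items()}

# Example: three publishers indexed against each other.
print(index_against_peers({"A": 0.95, "B": 0.60, "C": 0.80}))
```

Because the index is relative, an entity whose raw measurement stays flat while peers improve will see its score fall — which is exactly the incentive to keep innovating that the paragraph above describes.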

A higher Humanness Score means a higher proportion of confirmed humans and good policies and disclosures that go along with ensuring a human audience. A higher Humanness Score should also lead to higher premium CPM or CPC — this rewards premium publishers for their good work and “playing by the rules” and rewards advertisers with better performance for their ad spending.

Score Syntax: < J9NGT6H3 | 65 (i^9) >

Definitions:

  • J9NGT6H3 = client/entity identifier
  • 65 = Humanness Score, from 0 (bots) to 100 (human)
  • i = in-ad | o = on-site
  • ^9 = order of magnitude of the data set (e.g., 9 = billions)
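The score syntax above is regular enough to parse mechanically. The sketch below is my own illustrative parser, not part of the published standard; the field names (`entity_id`, `score`, `context`, `magnitude`) are labels I chose.

```python
import re

# Matches the documented syntax, e.g. "< J9NGT6H3 | 65 (i^9) >".
SCORE_RE = re.compile(
    r"<\s*(?P<entity_id>\w+)\s*\|\s*(?P<score>\d{1,3})\s*"
    r"\((?P<context>[io])\^(?P<magnitude>\d+)\)\s*>"
)

def parse_humanness_score(text: str) -> dict:
    """Parse a Humanness Score string into its component fields."""
    m = SCORE_RE.match(text.strip())
    if not m:
        raise ValueError(f"not a valid Humanness Score string: {text!r}")
    return {
        "entity_id": m.group("entity_id"),
        "score": int(m.group("score")),  # 0 (bots) .. 100 (human)
        "context": "in-ad" if m.group("context") == "i" else "on-site",
        "magnitude": 10 ** int(m.group("magnitude")),  # data-set size
    }

print(parse_humanness_score("< J9NGT6H3 | 65 (i^9) >"))
```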

The Inputs for Calculating the Humanness Score

The score is based on three categories of inputs, specified below, and takes into account policies, continuously audited data, and the willingness to disclose data for verification. The Humanness Scores of entities of the same type (e.g., ad exchanges) can be directly compared.

Policies – 10% of score

  • Does the entity purchase impressions or source traffic of any kind?
  • Does the entity have published policies protecting users’ privacy, and does it consistently act according to those policies (see the EFF’s Privacy Badger initiative)?
  • Does the entity sell data (e.g., cookie matching, cookie profiles, or collected or derived data)?

Disclosures – 20% of score

  • whether the entity provides full transparency to peers by providing access to visit-level data, so that peers can verify parameters such as placement, viewability, and other metrics
  • whether the entity provides auditors with access to the sites on which the ads were run, the sources of traffic, and the recipients of media payments
  • whether the entity generates and shares threat data with peers

Data – 70% of score

  • continuously measured data points on ad impressions and visits to websites – a minimum of 1 billion in-ad data points is required for certification, and X million on-site data points for websites (depending on natural traffic volumes)
  • how often data like website and cookie blacklists are updated
  • whether the appropriate anti-fraud vendors are used and how the technology is deployed
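A minimal sketch of how the three category weights combine, assuming each category has first been normalized to a 0–100 sub-score. The article specifies only the weights (10%/20%/70%); the sub-score normalization and the helper function are illustrative assumptions.

```python
# Category weights as published: Policies 10%, Disclosures 20%, Data 70%.
WEIGHTS = {"policies": 0.10, "disclosures": 0.20, "data": 0.70}

def humanness_score(policies: float, disclosures: float, data: float) -> float:
    """Combine three 0-100 category sub-scores into a 0-100 composite score."""
    subscores = {"policies": policies, "disclosures": disclosures, "data": data}
    for name, value in subscores.items():
        if not 0 <= value <= 100:
            raise ValueError(f"{name} sub-score must be in [0, 100], got {value}")
    return sum(WEIGHTS[name] * value for name, value in subscores.items())

# Example: strong data signals can dominate weaker policy marks.
print(humanness_score(policies=80, disclosures=50, data=90))
```

Note how the 70% data weight makes the continuously measured data points the dominant term, consistent with the breakdown above.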


Example of Initial Scores

Different ad networks show vastly different…
