Ban Online Behavioral Advertising

Tech companies earn staggering profits by targeting ads to us based on our online behavior. This incentivizes all online actors to collect as much of our behavioral information as possible, and then sell it to ad tech companies and the data brokers that service them. This pervasive online behavioral surveillance apparatus turns our lives into open books—every mouse click and screen swipe can be tracked and then disseminated throughout the vast ad tech ecosystem. Sometimes this system is called “online behavioral advertising.”

The time has come for Congress and the states to ban the targeting of ads to us based on our online behavior. This post explains why and how.

The harms of online behavioral advertising

The targeting of ads to us based on our online behavior is a three-part cycle of track, profile, and target (a brief code sketch of the cycle follows the list below).

  1. Track: A person uses technology, and that technology quietly collects information about who they are and what they do. Most critically, trackers gather online behavioral information, like app interactions and browsing history. This information is shared with ad tech companies and data brokers.
  2. Profile: Ad tech companies and data brokers that receive this information try to link it to what they already know about the user in question. These observers draw inferences about their target: what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, attending, or voting for.
  3. Target: Ad tech companies use the profiles they’ve assembled, or obtained from data brokers, to target advertisements. Through websites, apps, TVs, and social media, advertisers use data to show tailored messages to particular people, types of people, or groups.
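To make the cycle concrete, here is a minimal sketch of it in TypeScript. Everything in it (the event fields, the keyword-matching “inference,” and the example sites) is an assumption for illustration only; real tracking, profiling, and targeting pipelines are vastly more elaborate.

```typescript
// Conceptual sketch only: these types, events, and the inference rule are
// illustrative assumptions, not any real ad platform's data model or API.
type TrackedEvent = { userId: string; site: string; action: string };
type Profile = { userId: string; interests: Set<string> };

// 1. Track: raw behavioral events collected as a person browses and uses apps.
const events: TrackedEvent[] = [
  { userId: "u1", site: "babygear.example", action: "viewed strollers" },
  { userId: "u1", site: "lender.example", action: "searched payday loans" },
];

// 2. Profile: link the events back to one user and treat them as inferred interests.
function buildProfile(userId: string, log: TrackedEvent[]): Profile {
  const interests = new Set(
    log.filter((e) => e.userId === userId).map((e) => e.action)
  );
  return { userId, interests };
}

// 3. Target: an advertiser describes an audience, and the system returns every
// profile whose inferred interests match that description.
function selectAudience(profiles: Profile[], keyword: string): Profile[] {
  return profiles.filter((p) => [...p.interests].some((i) => i.includes(keyword)));
}

const profiles = [buildProfile("u1", events)];
console.log(selectAudience(profiles, "payday")); // finds the financially stressed user
```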

This business has proven extremely lucrative for the companies that participate in it: Facebook, Google, and a host of smaller competitors turn data and screen real estate into advertiser dollars at staggering scale. Some companies do all three of these things (track, profile, and target); others do only one or two.

The industry harms users in concrete ways. First, online behavioral targeting is almost single-handedly responsible for the worst privacy problems on the internet today. Behavioral data is the raw fuel that powers targeting, but it isn’t just used for ads. Data gathered for ad tech can be shared with or sold to hedge funds, law enforcement agencies, and military intelligence. Even when sensitive information doesn’t leave a company’s walls, that information can be accessed and exploited by people inside the company for personal ends.

Moreover, online behavioral advertising has warped the development of technology so that our devices spy on us by default. For example, mobile phones come equipped with “advertising IDs,” which were created for the sole purpose of enabling third-party trackers to profile users based on how they use their phones. Ad IDs have become the lynchpin of the data broker economy, and allow brokers and buyers to easily tie data from disparate sources across the online environment to a single user’s profile. Likewise, while third-party cookies were not explicitly designed to be used for ads, the advertising industry’s influence has ensured that they remain in use despite years of widespread consensus about their harms.
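To illustrate why third-party cookies and advertising IDs are so valuable to trackers, the sketch below stands up a toy tracking-pixel server in TypeScript (the tracker.example domain and the uid cookie name are hypothetical). Any page that embeds a resource from the tracker makes the browser send back the same cookie along with a Referer header, which is all the tracker needs to stitch visits to unrelated sites into one profile.

```typescript
// Toy third-party tracking pixel: assumes pages embed something like
// <img src="https://tracker.example/pixel.gif"> served by this process.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

const visitsByUser = new Map<string, string[]>(); // cookie ID -> sites seen

createServer((req, res) => {
  // Reuse the visitor's existing cookie ID, or mint a new one on first contact.
  const uid = req.headers.cookie?.match(/uid=([\w-]+)/)?.[1] ?? randomUUID();

  // The Referer header reveals which publisher page embedded the pixel.
  const site = req.headers.referer ? new URL(req.headers.referer).hostname : "unknown";
  const visits = visitsByUser.get(uid) ?? [];
  visits.push(site);
  visitsByUser.set(uid, visits);

  // "SameSite=None; Secure" is what keeps the cookie usable in third-party contexts.
  res.setHeader("Set-Cookie", `uid=${uid}; SameSite=None; Secure; Max-Age=31536000`);
  res.end(); // a real tracker would return a 1x1 transparent GIF here
}).listen(8080);
```

An advertising ID on a phone plays the same role as the uid cookie here: a stable identifier that lets data from disparate apps and brokers be joined into a single profile.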

Targeted advertising based on online behavior doesn’t just hurt privacy. It also contributes to a range of other harms.

Such targeting supercharges the efforts of fraudulent, exploitative, and misleading advertisers. It allows peddlers of shady products and services to reach exactly the people who, based on their online behavior, the peddlers believe are most likely to be vulnerable to their messaging. Too often, what’s good for an advertiser is actively harmful for their targets.

Many targeting systems start with users’ behavior-based profiles, and then perform algorithmic audience selection, meaning advertisers don’t need to specify who they intend to reach. Systems like Facebook’s can run automatic experiments to identify exactly which kinds of people are most susceptible to a particular message. A 2018 exposé of the “affiliate advertiser” industry described how Facebook’s platform allowed hucksters to make millions by targeting credulous users with deceptive ads for modern-day snake oil. For example, this technology helps subprime lenders target the financially vulnerable and directs investment scams to thousands of seniors. Simply put, tracking amplifies the impact of predatory and exploitative ads.

Furthermore, ad targeting based on online behavior has discriminatory impacts. Sometimes advertisers can directly target people based on their gender, age, race, religion, and the like. Advertisers can also use behavior-based profiles to target people based on proxies for such demographic characteristics, including “interests,” location, purchase history, credit status, and income. And with lookalike audiences, advertisers can specify a set of people they want to reach, then deputize Facebook or Google to find people who, based on their behavior profiles, are “similar” to that initial group. If the advertiser’s list is discriminatory, the “similar” audience will be, too. As a result, targeted advertising systems (even those that only use behavioral data) can enable turnkey housing discrimination and racist voter suppression, and they can have discriminatory impacts even when the advertiser does not intend to discriminate.
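To see how a lookalike audience can inherit a seed list’s biases, consider a toy model in which “similarity” is just Jaccard overlap between inferred interest sets. Facebook’s and Google’s actual lookalike systems use far richer data and undisclosed models, so this is only a sketch of the general idea.

```typescript
// Toy "lookalike" expansion: assume each user is reduced to a set of inferred
// interests, and "similar" means high Jaccard overlap with someone in the seed.
type BehaviorProfile = { userId: string; interests: Set<string> };

function jaccard(a: Set<string>, b: Set<string>): number {
  const overlap = [...a].filter((x) => b.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : overlap / union;
}

// Expand an advertiser-supplied seed list into everyone whose behavior profile
// is "similar enough" to at least one seed member.
function lookalike(
  seed: BehaviorProfile[],
  everyone: BehaviorProfile[],
  threshold = 0.5
): BehaviorProfile[] {
  return everyone.filter(
    (candidate) =>
      !seed.some((s) => s.userId === candidate.userId) &&
      seed.some((s) => jaccard(s.interests, candidate.interests) >= threshold)
  );
}
```

Nothing in this selection step mentions race, age, or income directly, yet the expanded audience mirrors whatever skew the seed list carried.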

How to…

Read the full article at EFF
