Marketing Automation: Utopia or Dystopia?
Businesses need to consider consumer psychology and resist the temptation to maximize short-term profits at the expense of consumers.
From segmentation to pricing, virtually any process involved in marketing can now be automated. The ability to track individual online behavior and merge data sources increasingly allows marketers to target consumers at a granular level. Using algorithms based on machine learning, individuals can receive tailored product offers and advertisements, all in real time.
Such precise targeting increases the profitability of businesses, while allowing consumers to take advantage of amenities and offers tailored to their needs. However, it can also lead to negative economic and psychological consequences for consumers. The question becomes: how do you make sure that marketing automation doesn’t create a dystopia?
Maximizing profits
Businesses maximize their profits when they sell their product or service at the highest price each customer is willing to pay. In the past, marketers could not easily determine willingness to pay (WTP), a situation that often left consumers with good value for money. Today, machine learning-based prediction algorithms can provide ever more accurate estimates of a consumer’s WTP.
In one experiment, the recruiting company ZipRecruiter.com found that it could increase profits by over 80% by adopting individualized, algorithm-based pricing built on more than a hundred consumer variables. Uber reportedly used machine learning to set prices specific to the route and time of day, and it could easily use customer trip histories and other personal data to personalize prices further.
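To make the mechanism concrete, here is a minimal, hypothetical sketch of individualized pricing: estimate a customer's WTP from the observed WTP of similar past customers (a simple nearest-neighbour predictor standing in for the far richer models firms actually use), then quote a price just below the estimate. All feature names and numbers are invented for illustration.

```python
import math

# Toy purchase history: (features, observed WTP in dollars).
# Features are illustrative: (income_index, past_purchases, minutes_on_site).
HISTORY = [
    ((0.8, 5, 120), 95.0),
    ((0.3, 1, 30), 40.0),
    ((0.6, 3, 75), 70.0),
]

def predict_wtp(features):
    """Estimate WTP as that of the most similar past customer."""
    _, wtp = min(HISTORY, key=lambda rec: math.dist(rec[0], features))
    return wtp

def personalized_price(features, margin=0.95):
    """Quote a price at 95% of estimated WTP, capturing most of the surplus."""
    return round(predict_wtp(features) * margin, 2)

# A customer resembling the first record gets a price near that record's WTP.
print(personalized_price((0.8, 5, 118)))  # → 90.25
```

The point of the sketch is the economics, not the model: the more behavioral variables the predictor sees, the closer the quoted price creeps toward each individual's WTP, transferring consumer surplus to the seller.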
These developments can be alarming for consumers. While personalized pricing can benefit consumers with a lower WTP who might otherwise be excluded from the market, many consumers will likely end up paying prices closer to their WTP.
Low remuneration for personal data
As a rule, consumers freely give away the information needed to infer their preferences and WTP. But shouldn’t they be compensated for the drawbacks of personalization? For their part, companies claim that consumers are rewarded with better offers and free services such as YouTube videos and social media.
In research I conducted with INSEAD’s Daniel Walters and Geoff Tomaino, we found that consumers routinely undervalue their private data when exchanging it for goods or services rather than selling it for cash. Take users of social media platforms: they “pay” for these services with private data, which the platforms use to generate advertising profits. Our experiments suggest that consumers undervalue their private data in such non-cash exchanges, even though they know how profitable social media platforms are. This unequal exchange of value likely contributes to the extraordinary valuations of dominant tech companies.
We all appreciate being autonomous in our choices, free from outside influence. But such autonomy requires privacy. Without privacy, we become predictable: algorithms can then easily predict anything from our risk of credit default to our likelihood of buying certain products.
Further experiments I conducted with Wharton’s Rom Schrift and Yonat Zwebner showed that consumers act as if their autonomy is threatened when they understand that algorithms can predict their choices. When participants learned that an algorithm could predict what they would choose, they picked less preferred options to restore their sense of autonomy. To maximize acceptance of prediction algorithms, marketers will need to frame them in a way that does not threaten consumers’ perceived autonomy.
Algorithms as a black box
The complexity of algorithms often makes them difficult to explain. In addition, many cannot be made transparent for competitive reasons. Regulators worry – and consumers get angry – when they can’t understand why an algorithm is doing what it is doing, such as when it blocks a desired financial transaction or grants a specific credit limit.
Articles 13 to 15 of the GDPR require companies to provide customers with “meaningful information about the logic involved” in such automated decisions. In another set of experiments, explaining the goals of an algorithm mattered as much to rejected consumers as explaining how it arrived at its negative evaluation. Consumers derived a sense of fairness from understanding the purpose of the algorithm.
How to alleviate the dystopia associated with automated marketing
Preventing dystopian outcomes is usually the job of regulators, but companies also need to put policies in place to address consumer concerns. Marketing automation poses complex challenges that require a range of solutions. These include data privacy regulations, mechanisms to ensure efficient pricing of personal data, and fair privacy policies deployed by businesses. The following measures should also have a mitigating effect.
Regulation to support both privacy and competition
To improve market efficiency (by preventing the collection of personal data without adequate compensation for consumers), regulators must both protect consumer privacy and encourage competition. This poses a conundrum: policymakers need to protect innovation and competition among data-driven companies so that no firm can monopolize its market too easily. But fostering competition requires sharing consumers’ personal data between businesses, which implies less privacy (witness Apple’s iOS requirement that apps obtain user permission before tracking them across other applications, which affects, among other things, Facebook’s targeting ability). This paradox requires a fine balance. One solution could be to give consumers legal ownership of their data and create mechanisms for them to sell or rent it, which would promote competition.
Instead of opposing the efforts of regulators, companies should give consumers more say over their own data. Transparency about the collection and use of personal data can help restore consumer confidence in automated marketing. Ceding some control over consumer data may limit opportunities for price discrimination, but it will protect brands and profits in the long run.
Frame algorithms in a positive light
Although algorithms sometimes breed mistrust, they can be more efficient and accurate than humans and improve our lives. However, companies must take the concerns of consumers and regulators into account when designing them; otherwise, they risk triggering strong resistance. Rather than stressing that algorithms can predict what a consumer will do, marketers should present them as tools that help consumers make choices consistent with their preferences. Algorithmic transparency can further reduce skepticism. Where full transparency is not possible, explaining the objectives of algorithms can go a long way toward reducing fears associated with AI-based decisions.
Avoiding a marketing automation dystopia is in the best interests of everyone in the market, at least in the long run. With this horizon in mind, companies must take into account the psychology of the consumer and resist the temptation to maximize their short-term profits at the expense of consumers.
This article is adapted from a piece originally published in the NIM Marketing Intelligence Review.
Klaus Wertenbroch is the Novartis Chaired Professor of Management and the Environment and Professor of Marketing at INSEAD. He directs the Strategic Marketing Programme, one of INSEAD’s executive education programmes.