How can marketers cash in without becoming enemies of the people?
July 12, 2015
From Boston to Beijing, municipalities and governments across the world are pledging billions to create “smart cities”—urban areas covered with Internet-connected devices that control citywide systems, such as transit, and collect data. Although the details can vary, the basic goal is to create super-efficient infrastructure, aid urban planning and improve the well-being of the populace.
A byproduct of this tech utopia will be a prodigious amount of data collected on its inhabitants. For instance, at the company I head, we recently undertook an experiment in which some staff volunteered to wear devices around the clock for 10 days. We monitored more than 170 metrics reflecting their daily habits and preferences—including how they slept, where they traveled and how they felt (a fast heart rate combined with little or no movement can indicate excitement or stress).
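The heart-rate heuristic above can be sketched in a few lines. This is a toy illustration only, not our actual analysis; the metric names and thresholds are invented for the example.

```python
# Toy illustration (not an actual wearable-analytics method): flag
# moments where a fast heart rate with little movement may indicate
# excitement or stress rather than physical exercise.

def flag_state(heart_rate_bpm, steps_per_minute,
               hr_threshold=100, movement_threshold=20):
    """Return a coarse label for one minute of wearable data.

    The thresholds are arbitrary placeholders for illustration.
    """
    if heart_rate_bpm >= hr_threshold:
        if steps_per_minute < movement_threshold:
            return "possible excitement or stress"
        return "likely physical activity"
    return "resting"

readings = [
    (72, 0),     # calm at a desk
    (130, 110),  # out for a run
    (115, 3),    # fast heart rate, barely moving
]
for hr, steps in readings:
    print(hr, steps, "->", flag_state(hr, steps))
```

The point of the sketch is that even a crude rule over two metrics yields an inference about a person's emotional state—multiply that by 170 metrics and the inferential power grows quickly.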
If the Internet age has taught us anything, it’s that where there is information, there is money to be made. With so much personal information available and countless ways to use it, businesses and authorities will be faced with a number of ethical questions.
In a fully “smart” city, every movement an individual makes can be tracked. The data will reveal where she works, how she commutes, her shopping habits, places she visits and her proximity to other people. You could argue that this sort of tracking already exists via various apps and on social-media platforms, or is held by public-transport companies and e-commerce sites. The difference is that with a smart city this data will be centralized and easy to access. Given the value of this data, it’s conceivable that municipalities or private businesses that pay to create a smart city will seek to recoup their expenses by selling it.
By analyzing this information using data-science techniques, a company could learn not only the day-to-day routine of an individual but also his preferences, behavior and emotional state. Private companies could know more about people than they know about themselves.
For marketers, this is a dream come true. Imagine the scenario: A beverage company knows a particular individual’s Friday or Saturday night routine. The company knows what he drinks, when he drinks, who he drinks with and where he goes. It also knows how the weather affects what beverage the individual chooses and how changes in work patterns influence how much alcohol he consumes. By combining this information with the individual’s social-media profile, the company could send marketing messages to the person when he is most susceptible to the suggestion to buy a drink.
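One way to picture how such signals could be combined is a simple weighted "susceptibility score." Everything here—the feature names, the weights, the very idea of a linear score—is a hypothetical sketch of the scenario described above, not any company's real method.

```python
# Hypothetical sketch: combining smart-city signals into a score a
# marketer might use to time an ad. All features and weights are
# invented for illustration.

WEIGHTS = {
    "is_friday_or_saturday_night": 2.0,
    "near_usual_bar": 1.5,
    "with_usual_friends": 1.0,
    "hot_weather": 0.5,
    "long_work_week": 0.8,
}

def susceptibility_score(signals):
    """Sum the weights of whichever signals are currently active."""
    return sum(WEIGHTS[name] for name, active in signals.items() if active)

signals = {
    "is_friday_or_saturday_night": True,
    "near_usual_bar": True,
    "with_usual_friends": False,
    "hot_weather": True,
    "long_work_week": True,
}
print(susceptibility_score(signals))  # 4.8
```

An advertiser would presumably send the message only when the score crosses some threshold—which is exactly the kind of automated judgment about individuals that raises the ethical questions discussed below.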
Businesses could market divorce services to couples who, through data analysis, are shown to exhibit behavior that indicates that their relationship could be in trouble—things like unusual travel patterns, and changes in work-life balance, such as a rapid increase in the amount of time both individuals spend at work or in separate bars. Individuals who are shown to lead very unhealthy lifestyles could be deliberately targeted by brands selling fatty foods.
The scenarios are endless, ranging from the genuinely useful to the potentially terrifying. But what will moderate how a smart city works and how brands can use data?
Recent history—issues of privacy and security on social networks and chatting apps, and questions about how intellectual-property regulations apply online—has shown that the law has been slow to catch up with digital innovations. So businesses that can purchase smart-city data will be presented with many strategic and ethical concerns.
What degree of targeting is too specific and violates privacy? Should businesses limit the types of goods or services they offer to certain individuals? Is it ethical for data—on an employee’s eating habits, for instance—to be sold to employers or to insurance companies to help them assess claims? Do individuals own their own personal data once it enters the smart-city system?
With or without stringent controlling legislation, businesses in a smart city will need to craft their own policies and procedures regarding the use of data. A large-scale misuse of personal data could provoke a consumer backlash that could cripple a company’s reputation and lead to monster lawsuits. An additional problem is that businesses won’t know which individuals might welcome the convenience of targeted advertising and which will find it creepy—although data science could solve this equation eventually by predicting where each individual’s privacy line is.
A smart city doesn’t have to be as Orwellian as it sounds. If businesses act responsibly, there is no reason why what sounds intrusive in the abstract can’t change the way people live for the better: by offering services that anticipate their needs, by designing ultraefficient infrastructure that makes commuting a (relative) dream, or by taking a revolutionary approach to how energy is generated and used by businesses and the populace at large.
Mr. Weston is the CEO of the London- and Dubai-based data-science consultancy Profusion.
Reader comments:
The next growth industry? Privacy apps that counter the expanding intrusiveness of government and business. Any computer program designed to monitor and track you can be defeated. In the meantime, pay with cash, opt out of social media, disable your smartphone’s GPS tracking, and insist that e-tailers guarantee your online privacy. Please let me know when the average American city manages to computerize its traffic-control system so that I’m not sitting at a red light at two o’clock in the morning. The current system hasn’t changed in 100 years.
“A smart city doesn’t have to be as Orwellian as it sounds.” And human nature isn’t human nature. Get a grip, Mike.
George Orwell is smiling and patting himself on the back, somewhere.