By now, most companies have taken long strides toward automating their business processes, with the goals of making operations more efficient, saving money, and generating the analytical insights that can inform decision-making. But what if the underlying data is incorrect? Or if the processes it runs through, the transformations applied to it, or the algorithms acting on it are faulty? Businesses are only now coming to grips with this risk.
These issues can be built into a process from the start, but they often arise from overreliance on automation—overreliance on technology, that is. At Procter & Gamble (P&G), for example, predictive models built for the company’s customer relationship management (CRM) systems appeared to grow increasingly accurate. But the data behind those models came largely from automated voicemails that P&G employees had sent in an effort to make their outreach feel more personal, not from genuine customer interactions. Only later were the models trained on the actual phone calls and email exchanges that those messages produced.
The consequences of such failures trace back to the automated processes themselves, which evolved without sufficient testing and validation. But technology can also damage data directly, especially when key decisions rest on false or inaccurate information. At Credit Suisse, for example, a 2016 cyberattack affected up to nine million clients and one million employees worldwide. Beyond exposing confidential data to hackers, it compromised important customer information that had been used in making credit decisions, information that should otherwise have remained private.
The underlying data can also be damaged as a by-product of how it’s analyzed. At one insurance company, for example, a team used a state-of-the-art tool to analyze big data on past heart-attack claims and the associated risk factors. The data suggested that underwriters could accurately predict which applicants would suffer heart attacks by looking at a few key conditions, such as age and cholesterol levels. Having reduced sophisticated modeling and predictive analysis to its most basic elements, the team ignored other factors that were just as relevant, such as race and family history, on the assumption that these weren’t statistically significant.
The result: The recommendations generated by the system were biased against women and African-Americans. When the findings were published, the team and the company faced a barrage of lawsuits that cost hundreds of millions of dollars.
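To make the mechanism concrete, here is a minimal, entirely synthetic sketch of how dropping a relevant factor (here, family history) concentrates a model’s misses in one group. Every name, weight, and threshold below is invented for illustration; this is not the insurer’s actual model.

```python
import random

random.seed(0)

def make_person():
    """Synthetic applicant: true risk depends on age, cholesterol, AND family history."""
    age = random.randint(30, 70)
    chol = random.randint(150, 280)
    family_history = random.random() < 0.3
    # The "true" risk includes family history; the reduced model below ignores it.
    risk_score = 0.02 * age + 0.01 * chol + (2.0 if family_history else 0.0)
    had_event = risk_score > 4.0
    return age, chol, family_history, had_event

people = [make_person() for _ in range(10_000)]

def reduced_model(age, chol):
    """Underwriting rule trained only on the 'key conditions'."""
    return 0.02 * age + 0.01 * chol > 4.0

def miss_rate(group):
    """Share of actual events the reduced model failed to flag."""
    events = [p for p in group if p[3]]
    missed = [p for p in events if not reduced_model(p[0], p[1])]
    return len(missed) / max(len(events), 1)

with_fh = [p for p in people if p[2]]
without_fh = [p for p in people if not p[2]]
print(f"miss rate, family history:    {miss_rate(with_fh):.2f}")
print(f"miss rate, no family history: {miss_rate(without_fh):.2f}")
```

Because the reduced rule coincides with the true rule only for applicants without the omitted factor, virtually every miss lands on the group the model cannot see.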
These examples illustrate some of the risks of using unvalidated technology to guide business decisions. But what about those times when people—not algorithms—are the key culprits? Data scientists at one organization, for example, developed a highly accurate algorithm for predicting which customers were likely to churn based on behavioral and social factors such as their spending habits and their online presence on social networks. The company used this information to target these customers with marketing campaigns designed to keep them from leaving.
But, as with the heart-attack claims above, the algorithms were only as good as the data used to train them. The data scientists found that their predictions about customer behavior were increasingly thrown off by a growing number of bots and cyborgs: automated accounts, and human accounts whose identities had been hijacked, generating useless signals. The result: the company promoted the wrong messages to the wrong customers and wasted valuable marketing budget.
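One common mitigation is to screen out suspected automated accounts before training. The sketch below uses a hypothetical heuristic (the `Account` fields and thresholds are invented for illustration), not the company’s actual method:

```python
from dataclasses import dataclass

@dataclass
class Account:
    msgs_per_day: float   # posting rate on the social network
    distinct_hours: int   # how many different hours of the day it posts in
    churned: bool

def looks_automated(a: Account) -> bool:
    """Hypothetical rule: extreme, round-the-clock posting suggests a bot or cyborg."""
    return a.msgs_per_day > 200 or a.distinct_hours >= 24

accounts = [
    Account(3.5, 9, churned=True),
    Account(450.0, 24, churned=False),   # bot traffic polluting the training set
    Account(1.2, 6, churned=False),
    Account(300.0, 24, churned=False),
]

# Train the churn model only on accounts that pass the screen.
training_set = [a for a in accounts if not looks_automated(a)]
print(f"kept {len(training_set)} of {len(accounts)} accounts for training")
```

In practice the screening rule itself needs validation, since an overly aggressive filter discards real customers along with the bots.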
The examples we’ve just cited involve both technology and people at their core. IBM’s research focuses on what it calls X-risks: emerging threats that companies should identify and manage now, before they become disruptive liabilities. X-risks include climate change, human error in complex systems (such as nuclear power plants and air traffic control), and systemic financial failure. But technology itself, and how it’s used, is also an X-risk. We call this issue system blindness: the inability of people to see the limitations of the systems and algorithms they rely on.
For example, Google inadvertently demonstrated system blindness through one of its “smart replies.” To the query “What’s up?” the feature offered three short responses: “Kittens!” “Nothing much.” and “Cats.” But the company was blind to what happened next: using machine learning, it fed those replies into a system that identified and tagged photos on its users’ photo-sharing network and used them for targeted advertising.
The problem: the system had been trained on a curated set of photos that mixed human uploads with images automatically generated by Google’s own algorithms. As a result, it had to correct itself after erroneously tagging some real people as cats, and those corrections then became the basis for new ads. The ads didn’t appear alongside the photos themselves, but they did appear in users’ news feeds. That created problems for users who had perfectly delightful cat photos to share with friends, only to find their cats misidentified as kittens.
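The underlying failure mode, machine-generated items mislabeled inside the training set, can be shown with a toy, fully synthetic classifier. All numbers below are invented, and this is in no way Google’s pipeline; it only illustrates how contaminated labels degrade an otherwise sound model.

```python
import random

random.seed(1)

# Toy 1-D "images": cats cluster near 0.0, people near 1.0.
def sample(label, n):
    center = 0.0 if label == "cat" else 1.0
    return [(random.gauss(center, 0.2), label) for _ in range(n)]

clean = sample("cat", 500) + sample("person", 500)

# Contaminated set: auto-generated examples labeled "cat" that actually look like people.
noisy = clean + [(random.gauss(1.0, 0.2), "cat") for _ in range(400)]

def centroid(data, label):
    xs = [x for x, y in data if y == label]
    return sum(xs) / len(xs)

def accuracy(train, test_data):
    """Nearest-centroid classifier: assign each point to the closer class center."""
    c_cat, c_person = centroid(train, "cat"), centroid(train, "person")
    def predict(x):
        return "cat" if abs(x - c_cat) < abs(x - c_person) else "person"
    return sum(predict(x) == y for x, y in test_data) / len(test_data)

test_data = sample("cat", 500) + sample("person", 500)
print(f"clean-trained accuracy: {accuracy(clean, test_data):.2f}")
print(f"noisy-trained accuracy: {accuracy(noisy, test_data):.2f}")
```

The contaminated labels drag the “cat” class center toward the “person” cluster, so the model trained on the noisy set misclassifies more genuine people, exactly the kind of silent degradation that system blindness leaves undetected.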
The problem with system blindness is twofold. First, an organization can lose sight of who its customers are and what motivates them. Second, it doesn’t know how to solve or prevent problems when they arise. Oil giant ConocoPhillips, for example, reportedly ran into this in 2015, when one of its oil wells grew too quickly for its systems and processes to keep up.
Intelligence can also fail without being outdated or malfunctioning outright. A system can be clever and creative yet pursue no long-term goal; it performs well in the short term while making mistakes that may go uncorrected for years.
Interested in reading more about how technology can benefit your business? Manufacturing Made Smarter Through MRP discusses how MRP systems can make manufacturing operations more efficient.
If you would like TSVMap™ to assist your business with assessing your essential systems and applying the TSVMap methodology to ERP Systems, MRP Systems, Cyber Security, IT Structure, Web Applications, Business Operations, and Automation, please contact us at 864-991-5656 or firstname.lastname@example.org.