AI explains itself to humans. And it pays off

LinkedIn, a subsidiary of Microsoft Corp, increased its subscription revenue by 8% after arming its sales team with artificial intelligence software that not only predicts which customers are likely to cancel, but also explains how it arrived at that conclusion.

The system, introduced last July and described in a LinkedIn blog post on Wednesday, marks a breakthrough in getting AI to “show its work” in a useful way.

While AI scientists have no problem designing systems that make accurate predictions about all sorts of business outcomes, they are discovering that to make these tools more effective for human operators, the AI may need to explain itself through another algorithm.

The emerging field of Explainable AI, or XAI, has sparked heavy investment in Silicon Valley, where start-ups and cloud computing giants are vying to make opaque software more understandable, and has fueled discussions in Washington and Brussels, where regulators want to ensure that automated decisions are made in a fair and transparent manner.

AI technology can perpetuate societal biases such as those related to race, gender and culture. Some AI scientists see explanations as a crucial part of mitigating these problematic outcomes.

US consumer protection regulators, including the Federal Trade Commission, have warned for the past two years that AI that is unexplainable could be investigated. The EU could pass the Artificial Intelligence Act next year, a comprehensive set of requirements including that users be able to interpret automated predictions.

Proponents of Explainable AI say it has helped increase the effectiveness of AI applications in areas such as healthcare and retail. Google Cloud sells Explainable AI services that, for example, tell customers trying to fine-tune their systems which pixels and soon which training examples mattered the most in predicting a photo’s subject.
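
Google has not published the internals of that service, but the underlying idea, attributing a prediction to the input pixels that drove it, can be sketched with plain gradient saliency. The snippet below is a minimal illustration in PyTorch; the toy model, random image and class choice are all stand-ins, not Google’s API.

```python
import torch
import torch.nn as nn

# Toy stand-in for an image classifier; Google's actual model is not public.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),  # 10 hypothetical classes
)
model.eval()

# A dummy "photo"; in practice this would be a real preprocessed image.
image = torch.rand(1, 3, 64, 64, requires_grad=True)

# Gradient of the top class score with respect to each pixel: large
# magnitudes mark the pixels that mattered most to the prediction.
score = model(image).max()
score.backward()
saliency = image.grad.abs().max(dim=1).values  # one heatmap per image

print(saliency.shape)  # torch.Size([1, 64, 64])
```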

But critics say explanations of why an AI predicted what it did are too unreliable because the technology for interpreting the machines isn’t good enough.

LinkedIn and other explainable AI developers recognize that every step of the process — analyzing predictions, generating explanations, confirming their accuracy, and making them actionable for users — can still be improved.

But after two years of trial and error in a relatively low-stakes application, LinkedIn says its technology has delivered practical value. The proof: an 8% increase in renewal bookings in the current fiscal year beyond normally expected growth. LinkedIn declined to quantify the gain in dollars but described it as considerable.

Previously, LinkedIn salespeople relied on their own intuition and a few ad hoc automated alerts about customer adoption.

Now, AI quickly handles the research and analysis. Dubbed CrystalCandle by LinkedIn, it flags trends that would otherwise go unnoticed, and its reasoning helps salespeople hone their tactics to retain at-risk customers and pitch upgrades to others.

LinkedIn says recommendations based on the explanation have been extended to more than 5,000 of its sales employees, covering recruiting, advertising, marketing and education deals.

“It helped experienced salespeople by arming them with specific insights to navigate conversations with prospects. It also helped new salespeople get started right away,” said Parvez Ahammad, LinkedIn’s director of machine learning and head of data science applied research.

TO EXPLAIN OR NOT TO EXPLAIN?

In 2020, LinkedIn first provided predictions without explanations. A score that was accurate about 80% of the time indicated the likelihood that a client soon up for renewal would upgrade, hold steady or cancel.
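
LinkedIn has not released CrystalCandle’s internals, but the setup described here, a single score over upgrade, hold-steady and cancel outcomes, is a standard three-class classifier. A minimal sketch on synthetic data, with invented feature names, might look like this:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical account features; LinkedIn's real inputs are not public.
# Columns: headcount growth, applicant responsiveness, tool usage index.
X = rng.normal(size=(500, 3))
y = rng.integers(0, 3, size=500)  # 0 = cancel, 1 = hold steady, 2 = upgrade

model = GradientBoostingClassifier().fit(X, y)

account = np.array([[2.4, 1.46, 0.25]])  # one client soon up for renewal
for label, p in zip(["cancel", "hold steady", "upgrade"],
                    model.predict_proba(account)[0]):
    print(f"{label}: {p:.0%}")
```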

Salespeople were not entirely won over. The team selling LinkedIn’s Talent Solutions recruiting and hiring software wasn’t sure how to adapt its strategy, especially when a client’s odds of not renewing were judged no better than a coin toss.

Last July, they started seeing a short auto-generated paragraph that highlights the factors influencing the score.

For example, the AI determined that a client was likely to upgrade because it had grown by 240 workers over the past year and applicants had become 146% more responsive in the previous month.

In addition, an index measuring the client’s overall success with LinkedIn recruiting tools had jumped 25% in the prior three months.
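
LinkedIn has not disclosed how CrystalCandle composes such paragraphs. A common pattern, sketched below with made-up feature names and a toy logistic model, is to rank each feature’s contribution to the score and template the top factors into sentences; this illustrates the pattern, not LinkedIn’s pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["headcount growth", "applicant responsiveness", "tool usage index"]

# Toy upgrade-vs-not model fit on synthetic data (stand-in for the real one).
X = rng.normal(size=(300, 3))
y = (X @ np.array([1.0, 0.8, 0.5]) + rng.normal(scale=0.5, size=300) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def explain(account):
    # Rank each feature's contribution (coefficient * value) to the score
    # and template the top two into a short, human-readable sentence.
    contrib = model.coef_[0] * account
    top = np.argsort(contrib)[::-1][:2]
    reasons = " and ".join(f"its {features[i]} pushed the score up" for i in top)
    prob = model.predict_proba(account.reshape(1, -1))[0, 1]
    return f"Likely to upgrade ({prob:.0%}) because {reasons}."

print(explain(np.array([2.4, 1.46, 0.25])))
```

In production, the contributions would come from the deployed model itself, for tree ensembles typically via SHAP values, and the sentence templates from people who know the sales workflow.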

Lekha Doshi, vice president of global operations at LinkedIn, said that based on the explanations, sales reps are now directing customers to training, support and services that improve their experience and keep them spending.

But some AI experts question whether explanations are necessary. They could even do harm, instilling a false sense of security in the AI or prompting design sacrifices that make predictions less accurate, researchers say.

Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered Artificial Intelligence, said people use products like Tylenol and Google Maps whose inner workings aren’t fully understood. In such cases, rigorous testing and monitoring have dispelled most doubts about their effectiveness.

Similarly, AI systems as a whole could be deemed fair even if individual decisions are impenetrable, said Daniel Roy, associate professor of statistics at the University of Toronto.

LinkedIn asserts that the integrity of an algorithm cannot be assessed without understanding its reasoning.

The company also argues that tools like CrystalCandle could help AI users in other fields. Doctors could learn why an AI predicts that someone is at higher risk of a disease, and people could learn why an AI recommended they be denied a credit card.

The hope is that the explanations reveal whether a system aligns with the concepts and values one wants to promote, said Been Kim, an AI researcher at Google.

“I see interpretability as ultimately enabling a conversation between machines and humans,” she said.
