“We focus heavily on process change management as the application of AI always has a significant impact on the ways of working.”
Please tell us a little bit about your journey and what inspired you to start at BCG?
I’m currently a Scientific Advisor on NLP for BCG, developing NLP solutions in finance and spend data, aerospace non-quality, gender bias analysis, contract excellence, employee survey analysis and customer service analysis. I studied AI back in the early days of its existence. At the time, AI tried to model expert knowledge, with little success.
I began in a technology start-up which was acquired by big tech player Oracle, and I moved to Silicon Valley. Then, through customer projects in the Sensor Networks (IoT) and Big Data fields, I joined the advisory network of a large aerospace company, Airbus. After helping define their AI strategy, I joined Airbus to lead their AI strategy, roadmap, and implementation.
I joined BCG GAMMA four years ago to help build up a fantastic group of Data Scientists who generate high-impact, high-value AI solutions for our clients. I’ve now been working on NLP for six years and have also developed a patented solution that analyses companies’ financial spend data using NLP models for sparse data.
Tell us a little bit about BCG and its recent foray into smart technologies. What do your data science projects look like?
At BCG we are very focussed on value generation. Our Data Scientists continuously work on the cutting edge, and we partner with various organisations to understand the direction and maturity of new technologies.
We have a scientific network community with leaders in different topics. These communities work with partners and universities and provide input and help to our broader BCG teams and clients. We try to make sure we don’t see technology as a solution in itself, but as a means to unlock client value. In most cases, the value comes through the combination of smart algorithms and AI, data engineering and process change management.
Sometimes a new technology approach provides a competitive advantage along with a solution to a problem; sometimes it provides an extra couple of percentage points of value generation.
GAMMA is focussed on making sure that technology is used to maximize the value in a pragmatic way.
What is the most contemporary definition of Natural Language Processing? Why has NLP emerged as an important technology to business success?
Traditionally Language Technologies were defined as an umbrella term for computational linguistics and NLP. But more frequently, those terms are being diluted – and today, I would put them all under NLP.
NLP enables computers to recognize, analyse, understand (NLU), produce (NLG), modify, or respond to human text and speech. Today’s NLP solutions are increasingly powered by Large Language Models (LLMs), trained on massive amounts of curated and uncurated documents, with models encompassing billions of parameters.
How does NLP help with speech recognition software? Could you highlight some of the top applications of NLP?
Speech recognition (both through OCR and audio) has to tackle a lot of noise: bad visual quality, handwriting, background noise, strong accents, jargon, and so on. We have found that a pre-trained NLP model can vastly help disambiguate the upstream speech recognition by leveraging context (e.g. pilot discourse in the cockpit) and domain knowledge (jargon and process encodings from pilot handbooks) to increase the accuracy of understanding (e.g. identifying when a plane goes from cruising to approach by virtue of verbal pilot instructions).
Even if not every word is correctly understood, deciphering enough key elements is sufficient to rule out wrong interpretations.
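As a toy illustration of that disambiguation idea (not Ronny’s actual system), here is a minimal sketch in Python. The corpus, phrases, and scoring are invented for illustration: a tiny “domain language model” built from bigram counts over an assumed jargon corpus is used to rescore acoustically similar speech-recognition hypotheses, so the candidate best supported by the domain context wins.

```python
from collections import Counter

# Toy domain "language model": bigram counts from an assumed jargon corpus,
# e.g. phrases a pilot handbook might contain. Purely illustrative data.
corpus = ("cleared for approach runway two seven "
          "cleared for takeoff reduce speed for approach").split()
bigrams = Counter(zip(corpus, corpus[1:]))
total = sum(bigrams.values())

def score(hypothesis: str) -> float:
    """Score a hypothesis by how strongly the domain model supports its bigrams."""
    words = hypothesis.split()
    return sum(bigrams[b] for b in zip(words, words[1:])) / max(total, 1)

def rescore(nbest: list[str]) -> str:
    """Pick the recognition hypothesis best supported by the domain model."""
    return max(nbest, key=score)

# Two acoustically similar candidates from a hypothetical n-best list:
print(rescore(["cleared four of coach", "cleared for approach"]))
# prints "cleared for approach"
```

A production system would use a neural language model rather than bigram counts, but the principle is the same: context probability breaks ties that acoustics alone cannot.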
Top applications of NLP include, to name a few: triage of customer requests, predictive maintenance, fraud detection, customer service analysis, uncovering hidden problems, trend monitoring, and many more.
Tell us more about AI ML projects as part of GAMMA. How do you work with AI ethics and responsible AI concepts?
At GAMMA, we pioneered a responsible AI framework to help assess any possible ethical or bias issues which might arise during a project. We put training and processes in place to monitor the responsible and transparent development of AI for our clients. We also focus heavily on process change management, as the application of AI always has a significant impact on the ways of working.
We train our client teams on the intricacies of the models and data to avoid the possible development of bias through the lifetime of the project. For example, even ‘gold standard’ AI models, where a human first validates an AI outcome before a decision is made, can develop bias when humans start trusting the machine so much that occasional false positives are wrongly confirmed as positives. For such cases, we typically set up process monitoring that alerts when the model accuracy, as confirmed by those human operators, appears too high, possibly reflecting a confirmation bias.
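A minimal sketch of such a monitor, with invented numbers and thresholds: it flags cases where the accuracy confirmed by human reviewers sits implausibly far above the model’s independently validated accuracy, which may indicate reviewers are rubber-stamping the machine’s output.

```python
def confirmation_bias_alert(confirmed: int, reviewed: int,
                            validated_accuracy: float = 0.90,
                            tolerance: float = 0.05) -> bool:
    """Alert when human-confirmed accuracy is implausibly far above the
    model's independently validated accuracy (a possible confirmation bias).
    All thresholds here are illustrative, not recommendations."""
    observed = confirmed / reviewed
    return observed > validated_accuracy + tolerance

# Reviewers confirm 99% of outputs, but validation says the model is ~90% accurate:
print(confirmation_bias_alert(confirmed=990, reviewed=1000))  # prints True
```

In practice such a check would run continuously over review logs and trigger a sample re-audit rather than a hard stop.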
Please help us understand why there are issues of bias in NLP and speech recognition?
As mentioned above, pre-trained NLP models can greatly increase the understanding of speech. However, as described in a previous set of articles, language models trained on internet-based documents such as news or social media also exhibit internet-scale bias (i.e. the bias reflected in society). Pre-trained NLP models help disambiguate speech recognition by predicting the probability of word sequences; if that prediction is based on a biased model, the bias can be transmitted into the speech recognition.
How does AI bias manifest itself in speech recognition software, and what are the setbacks of NLP and speech recognition due to this bias?
Speech recognition relies on NLP and speech models that are trained on available data. If we have bias in that data, it will be similarly reflected in the speech recognition. The result is that we risk losing nuances in the language, such as sentiment or finesse, which in certain situations can make a significant difference in interpretation.
From an industry perspective, what’s needed to overcome and regulate bias in NLP technology?
One way of overcoming bias is through the use of AI for AI – whereby an AI algorithm is developed to detect possible bias in its own models. While it can’t correct the bias, it can be trained to monitor drop-off levels of specific bias indicators and so pinpoint the stage of the algorithmic pipeline that may be introducing the bias.
We have recently used such NLP and AI-for-AI techniques to detect societal bias – for example by analysing the language differences in feedback managers give about their employees. While these biases often reflect underlying societal biases and are therefore often unconscious, AI can help reduce them and make language more inclusive and actionable by bringing them to awareness in real time.
Your take on the future of AI and NLP operations and how big the industry will become by 2025 for the consumer markets:
NLP has seen an incredible development in the last 3 years, and large language models such as GPT-3 have shown the incredible potential these models hold. We know that 80% of the data in companies today is unstructured, and most of that is in textual form. To date, the success of AI has typically relied on the 20% of data that is structured. With NLP technologies, we now have a way to access and address the 80%, which typically holds a lot of crucial knowledge.
NLP has in the past been mainly focussed on efficiency use-cases such as RPA and triaging. We have now entered a phase where NLP supports and drives decisions with business insights such as topic extraction, question answering, semantic similarity, and categorization. We are increasingly moving towards the use of NLP as a strategic differentiator, with automated, data-driven decision making in key business processes.
This also means that companies will increasingly have to consider retaining, or even developing, large portions of these NLP and AI solutions in-house, as they leverage strategic knowledge and processes and therefore represent a competitive advantage.
Advice to every data science / AI professional looking to start in this space?
NLP has gone from statistical, frequency-based analysis to (semantic) similarity analysis and is increasingly moving to tackle contextual analysis. The majority of NLP use-cases generating substantial impact have to understand the context in order to understand jargon and ambiguity.
Pre-trained large language models are very powerful, but generally lack such context awareness for most business problems. One has to tailor for domain specificity and leverage human-in-the-loop approaches to help NLP deliver the precision needed for business decision support.
For example, if one analyses customer service transcripts for topics to find problem clusters, straight NLP topic extraction might yield irrelevant clusters such as “have a nice day” as a frequent topic, since almost every transcript contains such a phrase. One can leverage the customer service context to first narrow down to the portion of the transcript where the problem is being described, and do semantic topic clustering around that portion.
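A toy illustration of that narrowing step, with made-up transcripts and a deliberately naive frequency-based “topic” extractor in place of real semantic clustering: filtering out courtesy boilerplate first keeps phrases like “have a nice day” from dominating the extracted topics.

```python
import re
from collections import Counter

# Hypothetical courtesy boilerplate that fills call-centre transcripts;
# stripping it first keeps pleasantries out of the topic clusters.
BOILERPLATE = {"have a nice day", "thank you for calling", "how can i help you"}

def problem_portion(transcript: str) -> list[str]:
    """Keep only the sentences that are not pure courtesy boilerplate."""
    sentences = [s.strip().lower() for s in re.split(r"[.!?]", transcript) if s.strip()]
    return [s for s in sentences if s not in BOILERPLATE]

def top_topics(transcripts: list[str], n: int = 2) -> list[str]:
    """Naive frequency-based topic extraction over the filtered portions only."""
    words = Counter()
    for t in transcripts:
        for s in problem_portion(t):
            words.update(w for w in s.split() if len(w) > 4)  # crude stop-word filter
    return [w for w, _ in words.most_common(n)]

calls = [
    "Thank you for calling. My invoice shows a double charge. Have a nice day.",
    "How can I help you? The invoice amount is wrong. Have a nice day.",
]
print(top_topics(calls))  # "invoice" ranks first; the pleasantries never appear
```

A real pipeline would substitute embedding-based semantic clustering for the word counts, but the narrowing step works the same way.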
Thank you, Ronny! That was fun and we hope to see you back on itechnologyseries.com soon.
[To participate in our interview series, please write to us at firstname.lastname@example.org]
Ronny believes that a competitive advantage exists at the intersection of data science, technology, people, and deep business expertise. He prides himself on being able to align organisational vision with technical solutions in order to achieve business growth and impact. His strength is working with global companies across industries to create value through AI, to disrupt themselves before they get disrupted. He guides businesses to achieve growth through vision and technical leadership in these uncharted territories with a relentless focus on impact.
The gap between digital thinking and traditional transaction models is widening. It is altering how consumers shop, entertain, socialize, learn, and do business. Companies must harness the data that contextualizes their business, customers, and partners to increase personalisation and analytical capabilities. It’s an exciting time to align technology with business objectives and processes to deliver market leadership and value.
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we work closely with clients to embrace a transformational approach aimed at benefiting all stakeholders—empowering organizations to grow, build sustainable competitive advantage, and drive positive societal impact.
Our diverse, global teams bring deep industry and functional expertise and a range of perspectives that question the status quo and spark change. BCG delivers solutions through leading-edge management consulting, technology and design, and corporate and digital ventures. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, fueled by the goal of helping our clients thrive and enabling them to make the world a better place.