Artificial intelligence (AI) has been a hot topic of discussion in technical circles for years, but 2023 marks the year it captured the attention and imagination of the mainstream. ChatGPT and similar technologies made AI accessible to the layperson, prompting reactions ranging from euphoria to despair, as well as executive orders intended to ensure AI is safe, secure, and trustworthy.
In August, Gartner placed generative AI at the Peak of Inflated Expectations on its Hype Cycle for Emerging Technologies, which raises the question: where will AI be in 2024?
AI predictions for 2024
In 2024, we can expect to see even more progress in AI. We will likely see the development of even more powerful AI systems, as well as the widespread application-specific adoption of AI in a variety of industries. AI will continue to be a topic of discussion and debate as we grapple with the implications of this powerful technology.
Here are five predictions on how AI will change the technology landscape in the coming year:
- Expect and prepare for mistakes
- “Centaur teams” will reduce turnover
- Mainstream applications will embed AI
- No single winner will be declared
- Predictive AI will bring generative AI to life through automation
1. Expect and prepare for mistakes
As IT leaders, we want to move fast, but not necessarily break things in catastrophic and highly visible ways. Paul Virilio, the French cultural theorist, urbanist, and aesthetic philosopher, once noted that with every innovation we also invent the “integral accident” – the accidents and mistakes that arrive alongside the innovation itself.
“When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash; and when you invent electricity, you invent electrocution…
Every technology carries its own negativity, which is invented at the same time as technical progress.”
– Paul Virilio
The industry is still learning about AI and how to best apply it. As we learn, we’ll make mistakes, and some of the more spectacular mistakes will undoubtedly make front-page headlines. For example, a lawyer used ChatGPT to generate a legal brief that cited cases that didn’t exist – a failure with several contributing factors:
- It’s likely that the prompt the lawyer used wasn’t crafted to rely exclusively on actual case law and to avoid making things up. The problem? A lack of proper prompt design.
- The lawyer didn’t use a third-party source to fact-check the content. He did ask ChatGPT if the cases were legit, but ChatGPT hallucinated and said yes – something the lawyer didn’t account for.
- ChatGPT is based on data scraped from all over the internet. There are new models coming out that are trained and/or refined exclusively on case law, medical data, etc., to minimize this kind of hallucination.
- In lieu of training your own model, ground it in real data and craft a prompt to “error out” if it can’t cite real data instead of making things up.
Generative AI tends to tell you what you want to hear rather than giving you factual answers you may not want to hear. Crafting a prompt that bases the output on actual data is key to better results. Third-party fact-checking is also needed: trust, but verify.
So what are the best ways to move fast, minimize risk, and make headlines in a good way? For generative AI, in particular, grounding prompts in real-world data minimizes hallucination. One way to do that is to use APIs that extract data from your systems of record to build your prompts. By grounding your prompt in real-world data, generative AI models will be less inclined to fill in the blanks with plausible but made-up content. Additionally, you and your vendors should have an auditable AI trust model to identify and eliminate bias, toxicity, and sensitive data leakage.
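As a rough illustration – assuming a hypothetical CRM REST API and a generic chat-completion endpoint that returns an OpenAI-style response, neither of which is named in this article – the Python sketch below grounds the prompt in a real record and instructs the model to error out rather than invent facts:

```python
import requests

# Hypothetical endpoints – substitute your actual system of record and model provider.
CRM_API = "https://crm.example.com/api/accounts"
LLM_API = "https://llm.example.com/v1/chat/completions"

def fetch_account_record(account_id: str) -> dict:
    """Pull the real record from the system of record via its REST API."""
    resp = requests.get(f"{CRM_API}/{account_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()

def grounded_summary(account_id: str) -> str:
    """Build a prompt grounded in real data and tell the model to refuse rather than guess."""
    record = fetch_account_record(account_id)
    prompt = (
        "Summarize this customer's open issues using ONLY the data below. "
        "If the data does not support an answer, reply exactly 'INSUFFICIENT DATA' "
        "instead of guessing.\n\n"
        f"DATA:\n{record}"
    )
    resp = requests.post(
        LLM_API,
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

The specifics will vary by vendor, but the pattern holds: the model only sees data you retrieved from a trusted source, and the prompt gives it an explicit off-ramp instead of an invitation to improvise.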
2. “Centaur teams” will reduce turnover
When IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, many thought the machines had won for good. That hasn’t turned out to be the case – though not in the way many would have expected. Yes, computers typically win in human vs. machine chess matches. Still, studies have shown that pairing humans with computers – marrying human intellect to machine technology – creates a “centaur” team that performs better than either humans or computers alone.
Fast forward to today: Harvard Business School professor Karim Lakhani recently said, “AI is not going to replace humans, but humans with AI are going to replace humans without AI.” Further, many toil-laden jobs are prone to high turnover; for customer service reps, Gartner benchmarks put the median attrition rate at 25%.
By building a customer service centaur team that pairs humans with integration-fueled AI, manual swivel-chair integration and hand-written case summaries can be replaced by AI-generated integrations and summaries – grounded in real customer data and supervised for quality by the reps themselves. The result is faster case resolution, better rep utilization, improved customer service, and happier reps who are less likely to quit over frustrating manual toil.
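A minimal sketch of that pairing (purely illustrative – the model call is stubbed and the helper names are hypothetical): AI drafts the case summary from real case data, and a service rep edits or approves it before anything is written back.

```python
from dataclasses import dataclass

@dataclass
class CaseSummary:
    case_id: str
    draft: str
    approved: bool = False

def draft_summary(case_record: dict) -> str:
    """Stand-in for a generative AI call grounded in the real case record."""
    return (f"{case_record['customer']} reported '{case_record['issue']}' "
            f"(status: {case_record['status']}).")

def centaur_review(case_record: dict, reviewer_edit: str | None = None) -> CaseSummary:
    """AI drafts, the human rep supervises: nothing is saved without sign-off."""
    summary = CaseSummary(case_id=case_record["id"], draft=draft_summary(case_record))
    if reviewer_edit:                 # the rep corrects anything the model got wrong
        summary.draft = reviewer_edit
    summary.approved = True           # explicit human approval before write-back
    return summary

case = {"id": "C-1042", "customer": "Acme Corp", "issue": "billing mismatch", "status": "open"}
print(centaur_review(case))
```

The value isn’t in the code – it’s in the division of labor: the machine handles the toil, the human keeps accountability for what gets saved.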
3. Mainstream applications will embed AI
Services like ChatGPT, Bard, and others have shown mainstream users the power and magic of generative AI. However, this approach is limited: these tools aren’t integrated with systems of record, leaving workers with yet another screen to swivel-chair between and reconcile manually. To address this, software providers are embedding AI in their products through copilots.
In-house-developed software can take advantage of AI, too. Developers have a wealth of AI models to choose from, and they, too, can augment their home-grown applications with API calls to open source and proprietary models that run on-premise and in the cloud. AI can breathe new life, exciting capabilities, and efficiencies into existing applications, but data privacy and security remain concerns. Companies need to ensure that developers can safely use and implement AI without sharing proprietary information with external models.
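One simple guardrail – sketched below with hypothetical field names – is to redact proprietary or personal fields before a prompt ever leaves your environment for an externally hosted model:

```python
import copy

# Hypothetical list of fields that must never leave the building.
SENSITIVE_FIELDS = {"email", "account_number", "internal_pricing"}

def redact(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    safe = copy.deepcopy(record)
    for field in SENSITIVE_FIELDS.intersection(safe):
        safe[field] = "[REDACTED]"
    return safe

customer = {
    "name": "Acme Corp",
    "email": "ops@acme.example",
    "internal_pricing": "17% discount tier",
    "open_issue": "API rate limits too low",
}

# Only the redacted copy is used to build prompts for externally hosted models.
prompt = f"Draft a renewal check-in email for this customer:\n{redact(customer)}"
print(prompt)
```

Redaction is only one layer, of course; keeping sensitive workloads on models you host yourself is the stronger option when the data warrants it.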
4. No single winner will be declared in 2024
Given the run-up to the Peak of Inflated Expectations in 2023, many of 2023’s most exciting AI companies and models may not exist in 2024 and beyond as the market matures. On the positive side, cloud AI services are embracing regulated industries, receiving government certifications like FedRAMP, and appearing in regions where data sovereignty is required. In either case, IT leaders should embrace a loosely coupled, API-led approach to AI adoption.
For example, a government agency can experiment today with AI in non-government-certified cloud regions. When the desired AI capabilities are available in a government-certified region, they can change the API endpoint once instead of refactoring dozens of point-to-point integrations, which can be time-consuming, error-prone, and fraught with security risks. Further, they can change endpoints as often as they like as models continuously improve to better suit their needs, including the possibility of running the models on-premise in secure enclaves.
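Here is a minimal sketch of that loose coupling (the endpoints and response shape are assumptions, not any particular vendor’s API): every workflow calls one wrapper that reads the endpoint from configuration, so moving to a certified region – or on-premise – means changing a single setting rather than refactoring each integration.

```python
import os
import requests

# Hypothetical endpoints: today's commercial region vs. a future government-certified one.
LLM_ENDPOINT = os.getenv(
    "LLM_ENDPOINT",
    "https://llm.commercial-region.example/v1/chat/completions",
    # later, swap to: "https://llm.gov-region.example/v1/chat/completions"
)

def ask_model(prompt: str) -> str:
    """Every workflow goes through this wrapper, so the endpoint changes in exactly one place."""
    resp = requests.post(
        LLM_ENDPOINT,
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

The wrapper is the point: as long as callers never hard-code a provider, swapping models or regions stays an operational change instead of a development project.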
5. Predictive AI will bring generative AI to life through automation
2023 was the year of generative AI, so it’s easy to overlook other uses of AI, like predictive AI – and the value of pairing the two. Predictive analytics lets organizations anticipate what is likely to happen by looking for patterns in the data they already have. By integrating and analyzing sales, service, marketing, and other systems of record, organizations can gain insight into customer and constituent behavior.
With automation, workflows can be triggered that engage customers with generative AI content grounded in real-world customer engagements to drive desirable outcomes. So, when considering AI, consider the full gamut of what AI can offer – predictive and generative alike – and use automation to connect them.
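To make that concrete, here is a minimal, self-contained sketch (the scoring rule and the generative call are illustrative stubs, not real models): a predictive score decides which customers to engage, and generative content grounded in their real data is drafted automatically.

```python
def churn_risk(customer: dict) -> float:
    """Stand-in for a predictive model scoring churn risk from systems-of-record data."""
    score = 0.0
    score += 0.4 if customer["open_cases"] > 3 else 0.0
    score += 0.4 if customer["days_since_login"] > 30 else 0.0
    score += 0.2 if not customer["renewed_last_year"] else 0.0
    return score

def draft_outreach(customer: dict) -> str:
    """Stand-in for a generative model call grounded in the customer's real engagement data."""
    return (f"Hi {customer['name']}, we noticed {customer['open_cases']} open cases and want "
            "to help resolve them ahead of your renewal.")

def run_retention_workflow(customers: list) -> None:
    """Automation glue: predictive AI decides who to engage, generative AI drafts what to say."""
    for customer in customers:
        if churn_risk(customer) >= 0.6:        # threshold is illustrative
            print(draft_outreach(customer))

run_retention_workflow([
    {"name": "Acme Corp", "open_cases": 5, "days_since_login": 45, "renewed_last_year": False},
    {"name": "Globex", "open_cases": 1, "days_since_login": 3, "renewed_last_year": True},
])
```

In production the score would come from a trained model and the outreach from a grounded generative call, but the shape is the same: prediction decides who and when, generation decides what, and automation stitches them together.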
2023 was the year of exploration; 2024 is the year of advancement
The coming year is poised to be full of significant advancement – and surprises – in AI. We can expect to see the development of even more powerful AI systems as well as the widespread adoption of AI in a variety of industries. However, going in with the knowledge that mistakes will happen presents us with an opportunity to learn, teach, and expand our existing AI education.
To mitigate risk and ensure successful AI implementation, organizations should focus on grounding prompts in real-world data, building centaur teams, embedding AI into mainstream applications, and adopting a loosely coupled API-led approach. Predictive AI will also bring generative AI to life through automation. As AI continues to evolve, we can expect to see even more transformative applications emerge in the years to come.