The AI tsunami of 2024

“In the forty years that Forrester has existed, we have rarely advised clients to immediately embrace new technology. Cautious experimentation until the technology matures and the associated vendor landscape rationalizes was the motto. Those rules are now being discarded with generative artificial intelligence (genAI). You need to act now,” said George Colony, CEO of the global market analyst firm. According to him, the impact of genAI is difficult to overestimate. That promises an eventful new year. At the very least, the IT manager need not worry that 2024 will be boring.

The ups and downs in the world of AI have long been a hot topic. The OpenAI affair, in which CEO Sam Altman was fired, was watched with jaws dropped. Within days, news spread that he, along with hundreds of employees, would move to Microsoft, only for him to return to OpenAI after all. It resembled a reality soap opera, but one with significant consequences: billions of dollars are at stake, and we are living through a global AI experiment in which a large mass of OpenAI users can no longer do without the platform. It is just a taste of the turmoil awaiting us in the AI landscape of 2024.

In 2024, AI will further establish itself as a consumer product, as the 100 million weekly users of ChatGPT underscore. The smart content generator is burrowing deeper into organizations through employees’ smartphones and laptops. A simple custom GPT, for instance, can now be created in about ten minutes: give it a name, an icon, and a task, and upload a specific knowledge base. The resulting AI assistant answers questions about that particular project or can even help with writing and programming. It is reminiscent of the time when the first smartphones and laptops were brought to the office, causing widespread panic because they disrupted IT policies that had not accounted for consumer tastes. Even IT managers without AI projects of their own will find that employees turn to AI anyway, bringing questions, risks, and changes, but also opportunities. Risk managers, meanwhile, are puzzling over the hallucinatory nature of generative AI. Perhaps they can breathe a sigh of relief at the thought that next year the first hallucination insurance policies will enter the market.

Voice assistants like Alexa had a false start years ago. These voice-activated devices fell short because their responses were too limited: each question required a separately built answer application. That labor-intensive approach is now a thing of the past thanks to so-called Large Language Models (LLMs). These models answer questions based on their training data, and to the user it seems as if they can reason like a human. They can even process current information.

The transformation of the user interface is now genuinely underway. Consider OpenAI’s ChatGPT chat window, or the voice interface now also available to users of the free ChatGPT version. Our ways of building, disseminating, and consuming knowledge are changing as a result. There is talk of interactive search and interactive learning assignments: the passive podcast gives way to a conversation with the content, and collaborating with an AI tool on programming is becoming commonplace.

The significant growth in AI users also means considerable energy consumption. Until recently, attention focused primarily on the energy cost of training a model: the CO2 emissions of one ChatGPT training round are estimated at about 500 tons, roughly the emissions of 1,000 cars each driving 1,000 kilometers. Yet that is nothing compared to the impact of mass adoption of AI tools. Asking ChatGPT a question is estimated to consume ten to twenty times more energy than asking Google the same question. Last October, a prediction by PhD candidate Alex de Vries of the VU University of Amsterdam made global news: by 2027, the AI sector could use as much energy as the entire Netherlands. De Vries previously researched the energy consumption of blockchain and crypto, and has now turned his attention to AI. Associate professor Luís Cruz of TU Delft, meanwhile, aims to train the next generation of IT professionals in green AI. His advice: do not use all available data simply because you can, and do not needlessly retrain models that are already finished. Cruz wants to help companies build a green AI pipeline with the relevant knowledge. The sustainable and cost-conscious IT manager would do well to invest in that green AI knowledge now.
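The per-query comparison above lends itself to a quick back-of-envelope calculation. The sketch below assumes a commonly cited ballpark of roughly 0.3 Wh per Google search — an assumption for illustration, not a figure from this article — combined with the ten-to-twenty-times multiplier and the 100 million weekly users mentioned earlier:

```python
# Back-of-envelope estimate, not a measurement.
# Assumption: ~0.3 Wh per Google search (commonly cited ballpark figure).
GOOGLE_SEARCH_WH = 0.3
CHATGPT_MULTIPLIER = (10, 20)  # range quoted in the article

# Energy per ChatGPT query under these assumptions
low_wh = GOOGLE_SEARCH_WH * CHATGPT_MULTIPLIER[0]   # 3.0 Wh
high_wh = GOOGLE_SEARCH_WH * CHATGPT_MULTIPLIER[1]  # 6.0 Wh
print(f"Per ChatGPT query: {low_wh:.1f}-{high_wh:.1f} Wh")

# Scale to 100 million weekly users, each asking one question per day
queries_per_week = 100_000_000 * 7
weekly_gwh_low = queries_per_week * low_wh / 1e9
weekly_gwh_high = queries_per_week * high_wh / 1e9
print(f"Weekly total at that scale: {weekly_gwh_low:.1f}-{weekly_gwh_high:.1f} GWh")
```

Even under these rough assumptions, the usage side quickly dwarfs a single 500-ton training round, which illustrates why researchers like De Vries shifted their focus from training to adoption.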

For AI scientists, the beginning of the current AI era is one big adventure, precisely because so much happens unexpectedly. The easiest prediction is therefore to expect the unexpected. And that demands even more adaptability from the IT manager.

Thijs Pepping is a trend analyst at the Verkenningsinstituut Nieuwe Technologie (VINT) of IT service provider Sogeti.

A version of this article was originally published at ICT Magazine for IT Managers.