It may be the topic du jour, but AI is much more than a flash in the pan. In 2024, the evolution of artificially-intelligent software will continue to rock the boat – for organisations, governments, and the public alike.
So, what’s coming down the line? How will the technology evolve over the next 12 months? And what might this mean for businesses that need to work with and alongside AI?
We’ve identified five AI predictions for 2024 – plus the steps your organisation can take to raise the bar with artificial intelligence.
Five AI predictions for 2024
1. Responsible AI becomes a business imperative with the AI Act
At the core of the EU’s fledgling AI Act is a set of rules and processes designed to stop more sinister uses of the technology – like biometric categorisation systems, behavioural manipulation, and social scoring.
The Act requires any business using AI to self-declare the risk levels of those systems, with fines for those who misrepresent their products.
With no dedicated regulatory body to guide them, businesses must get to grips with AI regulation quickly and start codifying what responsible AI means for their organisation.
Of course, the timetable for compliance will be staggered. Some organisations will be expected to comply sooner than others based on categories and risk levels laid out in the draft Act.
‘The incoming AI Act brings much needed guidance, but businesses must move quickly to codify what responsible AI means for their organisation’.
The Act’s impact on business innovation will also vary widely from one industry to the next. For highly regulated sectors like medtech, where extensive safety measures are already the norm, the AI Act won’t mean a great deal of change. And the likes of embedded systems in medical devices will reportedly have much longer to comply.
Ultimately, the EU AI Act brings much needed guidance around responsible AI development. It provides more clarity on the subject than we’ve ever had before, helping organisations ensure the AI systems they’re using and developing don’t have unintended, harmful consequences.
The regulation should help rather than hinder AI innovation for organisations that adopt responsible AI frameworks and bake transparency and ethical practice into their innovation and AI development processes.
Best practice here will be to fully audit your AI use and put processes in place that adhere to the AI Act at every step of the development and production of AI-based applications.
At Zühlke, we’ve been helping clients cement responsible AI practices. You can explore how to develop and scale your platforms, products, and processes in a human-centred and responsible way with our four-part responsible AI framework.
2. Generative AI rewrites the rulebook on software development
Fire up the latest version of ChatGPT and it’s hard not to marvel at just how far we’ve come with large language models (LLMs) in the space of a year – and at the fact that these models are publicly available.
But while it seems a bit trite to say ‘this is just the beginning’, there’s one field where that’s precisely the case: the field of software development.
Generative AI is already getting good at spitting out basic code. But 2024 will be the year in which AI truly redefines how software development works. Smarter, more robust LLMs – built directly into commercial products like Microsoft’s Copilot – will reshape the entire software development field, along with redefining how it’s taught.