Explorative and creative AI

Written by Peter Grindrod CBE, CEO, Astut Limited

AI needs to become creative and explorative, and to take on reasoning tasks at which humans naturally excel, yet with greater capacity for complexity, bandwidth, and background knowledge. In turn, this both opens up and responds to (a “push” and a “pull”) radical applications where there is little or no relevant data, yet where inferences, decision options and hypotheticals must be sought, valued, and put into action.

This AI will need to be accessible, transparent and yet intuitive to a wide range of application-domain experts, who share much common knowledge and many constraints, as well as having individual blind spots and baggage. This field is quite separate from routine data-driven decision-making within data-rich applications: grunt, high-frequency decision-making that has already been applied in many areas. It is time for AI to move up to strategic, radical, novelty-seeking challenges.

Many distinct sectors are deeply interested, because there is almost never any prior experience or relevant case data on the table when they have to make high-stakes calls. Yet many existing AI players and users are conflating the straightforward issues with the hard ones; believing (or wishing) that the present data-driven AI paradigm is the only game in town; or else simply focussing on doing what everybody else is doing (but, hopefully, better): transferring supervised and unsupervised decision-making, anomaly detection, object recognition, and all types of data-driven inference into different fields of application.

They are all merely swimming, while the next generation of AI, working on no-data crises, will be “flying”.

And, as Nietzsche said, “He who would learn to fly one day must first learn to stand and walk and run and climb and dance; one cannot fly into flying.”


What is under the hood?

Such AI must be a hybrid, combining a generative layer with a logical-foraging-evolutionary layer. Ideally, this accomplishes two things in response to any given challenge.

  1.         Iteratively finding hypotheses (response options), while prizing possible “novelty”. This is termed “imaginative” or “creative” because such hypothesis generation is not merely regurgitating and recombining the incremental hypotheses that are presently available.
  2.         Refuting or validating each hypothesis, depending on whether or not it contradicts any sector knowledge, constraints, or accepted logic. Rejected hypotheses are termed “fallacious”; accepted hypotheses are termed “irrefutable”.

This process can maintain a growing archive of all (so far) irrefutable hypotheses, which is useful in successively defining the “novelty” measures that drive various types of novelty search. There are many possible alternative approaches to the elements within both steps, but the principal aim is clear at a high level.
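The two-step loop above can be sketched in a few lines of code. This is a minimal, illustrative toy, not any particular system's implementation: hypotheses are stood in for by real-valued vectors, the generative layer by a random proposer that prefers novel candidates, sector knowledge by a list of constraint functions, and novelty by the mean distance to the nearest archived hypotheses. All names and representations here are hypothetical choices for the sketch.

```python
import random

def novelty(candidate, archive, k=3):
    """Mean distance to the k nearest archived hypotheses (higher = more novel)."""
    if not archive:
        return float("inf")
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(candidate, h)) ** 0.5 for h in archive
    )
    return sum(dists[:k]) / min(k, len(dists))

def generate(archive, dim=2, pool=16):
    """Stand-in for a generative layer: propose candidates, keep the most novel."""
    candidates = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pool)]
    return max(candidates, key=lambda c: novelty(c, archive))

def irrefutable(hypothesis, constraints):
    """'Fallacious' if it contradicts any constraint; otherwise irrefutable (so far)."""
    return all(ok(hypothesis) for ok in constraints)

def search(constraints, steps=200, seed=0):
    """Iterate: generate a novel hypothesis, refute or archive it."""
    random.seed(seed)
    archive = []  # growing archive of all so-far-irrefutable hypotheses
    for _ in range(steps):
        h = generate(archive)
        if irrefutable(h, constraints):
            archive.append(h)
    return archive

# Example: domain knowledge says both coordinates must be non-negative.
archive = search([lambda h: h[0] >= 0, lambda h: h[1] >= 0])
```

Note that the archive itself defines what counts as novel on the next iteration, so the search is continually pushed away from regions it has already covered, which is the essential mechanism of novelty search.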

Hybrid AIs such as these have attracted some recent attention during 2025: for example, the Exploration in AI Today workshop at ICML 2025 in Vancouver, which considered how AI might move from exploitation (emulating intelligent functions through its ability to recombine, paraphrase, and simulate, yet rarely discovering novel things) towards the exploration of new ideas and knowledge discovery.


The role of government and public funding

Whilst there are obvious radical and entrepreneurial elements in developing next-generation AI, the process by which the UK Government supports such innovation is hobbled by adherence to consensus-seeking peer review, or else by pre-defining fields of interest and application (thinking inside the box). At a high level, governments desire R&D of radical, distinctive AI concepts and applications within “Sovereign AI”, yet the machinery has little appetite for risk. They should instead champion controversial R&D: as a taxpayer, I want every AI programme to have a policy of investing, say, 25% of its resources in ideas that do not command an expert consensus (thus avoiding groupthink), ideas so controversial that they would start a fist fight in a pub full of experts.

It is a problem of framing. Taxpayers want risk and growth from potentially high-impact, fail-fast investments, yet programme managers want to avoid failures and to remove project and business risks. Consequently, the Government confounds its own strategic mission by top-slicing the ambition and the risks, and very often deploys expert consensus to justify investment (as “excellence”) to HMT. Furthermore, by pre-defining strategic and focus areas of interest, it entrenches groupthink and eschews radical ideas. The Government addresses only the known unknowns; yet the only thing we know about the unknown unknowns is that they are out there. No wonder that most disruptive paradigm changes, and most next-generation AI, emerge from the venture space.

