Generative artificial intelligence has achieved a remarkable milestone, surpassing average human performance on specific creativity tests, according to a recent study. This extensive research, involving over 100,000 human participants, reveals that advanced AI models like GPT-4 now demonstrate a surprising capacity for original thought and idea generation, as detailed by a report on ScienceDaily published on January 25, 2026.
The findings mark a significant turning point in the ongoing debate about AI’s creative potential. While AI’s ability to generate novel content has been evident in various domains, its direct comparison against human ingenuity on such a large scale provides concrete evidence of its evolving capabilities. This shift challenges previous assumptions about the unique nature of human creativity, pushing the boundaries of what machines can achieve.
Despite this impressive leap, the study also highlights a crucial distinction: the most imaginative human minds still maintain a clear and consistent advantage over even the most sophisticated AI systems. Specifically, the top 10% of human participants consistently outperformed AI, particularly in more complex creative endeavors such as poetry and intricate storytelling. This suggests a ceiling for current AI models, indicating that while they can mimic and even exceed average human output, they have yet to replicate the depth and nuanced originality of peak human creativity.
AI reaches average human creativity levels
Researchers evaluated several leading large language models, including ChatGPT, Claude, and Gemini, comparing their results with those from more than 100,000 human participants. The study, led by Professor Karim Jerbi from the Université de Montréal and involving renowned AI researcher Yoshua Bengio, used the Divergent Association Task (DAT) as a primary metric. This widely recognized psychological test measures divergent creativity, or the ability to generate diverse and original ideas from a single prompt.
Professor Jerbi noted, “Our study shows that some AI systems based on large language models can now outperform average human creativity on well-defined tasks.” This outcome, he added, might be unsettling, yet the research also underscored that even the best AI systems currently fall short of the levels achieved by the most creative humans. The gap widened significantly when comparing AI performance to the top half, and especially the top 10%, of creative individuals.
This performance threshold illustrates that while AI excels at pattern recognition and at generating variations within established parameters, it struggles to produce the truly novel, paradigm-shifting ideas that characterize elite human creativity. Understanding this distinction matters both for guiding AI development and for recognizing the enduring value of distinctly human cognitive abilities; research published in Scientific Reports continues to explore exactly where these cognitive boundaries lie.
Measuring the nuances of creative thought
To ensure a fair comparison between humans and machines, the research team employed multiple methods, with the Divergent Association Task (DAT) at its core. Developed by study co-author Jay Olson, the DAT asks participants to list ten words that are as semantically unrelated to one another as possible. A highly creative response might include words like “galaxy, fork, freedom, algae, harmonica, quantum, nostalgia, velvet, hurricane, photosynthesis.”
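The DAT is typically scored by measuring how far apart the submitted words sit in semantic space. As an illustration only, the sketch below computes an average pairwise distance between word embeddings for a candidate list; the specific embedding model (a small GloVe variant loaded via gensim) and the ×100 scaling are assumptions made for this example, not the study’s published scoring pipeline.

```python
# Illustrative, simplified DAT-style scoring: average pairwise cosine
# distance between word embeddings, scaled by 100. The choice of model
# ("glove-wiki-gigaword-50") and the scaling are assumptions for this
# sketch, not the researchers' exact implementation.
from itertools import combinations

import gensim.downloader as api          # pip install gensim
from scipy.spatial.distance import cosine

embeddings = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe model

def dat_style_score(words):
    """Average pairwise cosine distance between word vectors, times 100."""
    vectors = [embeddings[w.lower()] for w in words if w.lower() in embeddings]
    pairs = list(combinations(vectors, 2))
    return 100 * sum(cosine(a, b) for a, b in pairs) / len(pairs)

diverse = ["galaxy", "fork", "freedom", "algae", "harmonica",
           "quantum", "nostalgia", "velvet", "hurricane", "photosynthesis"]
related = ["dog", "cat", "puppy", "kitten", "leash",
           "bone", "fur", "paw", "bark", "meow"]

print(f"diverse list: {dat_style_score(diverse):.1f}")
print(f"related list: {dat_style_score(related):.1f}")
```

Under this kind of scoring, a scattered list like the example above lands well above a list of closely related words, which is the intuition the test relies on.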
Performance on the DAT correlates strongly with other established creativity tests, including those used in writing and problem-solving. While language-based, the task engages broader cognitive processes involved in creative thinking across many domains. This methodological grounding means the assessment of AI creativity is not merely superficial: it probes the capacity for original association that underlies creative work more broadly, providing a robust framework for comparison and feeding into wider discussions about the future of creative work.
Beyond simple word lists, the researchers extended their evaluation to more complex creative challenges, such as composing haikus, writing movie plot summaries, and crafting short stories. Here, the pattern held: AI systems demonstrated competence, sometimes exceeding average human output, but they consistently failed to match the richness, emotional depth, and intricate narrative structure produced by the most gifted human storytellers. While AI can generate coherent text, the subjective and deeply human elements of creativity remain largely out of reach for current models, a point frequently debated in technology research circles.
The study offers a nuanced view of AI’s growing creative capabilities, affirming its capacity to augment human efforts and handle routine creative tasks while reaffirming the irreplaceable role of exceptional human imagination. As AI continues to evolve, future research will likely focus on closing this gap, but for now the pinnacle of creative genius remains a distinctly human domain, prompting us to rethink how we value and cultivate unique human skills in an increasingly automated world.