- October 31, 2024
- allix
- Research
Large Language Models (LLMs) are advanced AI systems capable of understanding and generating human language in response to written prompts. Around the world, they are used for a wide range of tasks: composing work emails, drafting reports and lists, and writing articles, essays, poems, stories, screenplays, and even song lyrics.
People often turn to LLMs for inspiration or brainstorming, sometimes adapting the AI-generated ideas into their own words, while others use the generated text as is. Although these tools are generally effective aids in creative work, their impact on human creativity remains poorly understood.
Researchers from the University of Toronto investigated how LLMs affect human creativity. Their findings, posted on the arXiv preprint server, suggest that these models can inhibit human creative thinking, leading to less diverse and innovative outcomes.
As co-author Harsh Kumar explained to Tech Xplore, “Generative AI tools like ChatGPT are increasingly being used for creative tasks, from email writing to brainstorming. However, there is concern about their long-term effects on human creativity, an issue that remains under-researched.” The hypothesis is that frequent use of LLMs may reduce our capacity for independent creative thinking, even if these tools boost performance while they are in use, much as steroids temporarily improve athletic performance. Kumar and his team aimed to assess the long-term effects of LLM use on human creativity in a controlled experimental setting. They conducted two experiments focusing on divergent and convergent thinking, two key components of creativity.
“For each experiment, the tasks were adapted from well-known psychological studies,” Kumar noted. In the divergent-thinking experiment, participants were asked to think of alternative uses for a particular object, whereas in the convergent-thinking experiment, they were given three words and asked to identify a fourth word that relates to all three (e.g., “book” for “shelf,” “magazine,” and “worm”).
The experiments were divided into two phases: an exposure phase and a testing phase. During the exposure phase, participants in the experimental group received GPT-4o responses tailored to the tasks: for example, GPT-4o suggested relevant ideas for the divergent-thinking task and the connecting word for the convergent-thinking task.
Kumar explained: “We also explored the model’s potential to act as a coach, offering a structured framework for thinking instead of direct answers.” Participants in the control group, by contrast, completed the tasks without LLM assistance. During the testing phase, all participants performed the tasks independently, allowing the researchers to assess the lingering effects of LLM use.
The researchers observed that while GPT-4o improved participants’ performance during the exposure phase, those who had not used the model outperformed those who had during the test phase. “This finding means that developers of LLM-based creativity tools should not only focus on immediate benefits, but also consider possible long-term cognitive consequences for users,” Kumar said. “Otherwise, these tools could potentially lead to cognitive decline over time. We also found, consistent with the existing literature, that ideas generated by LLMs during exposure can homogenize the ideas produced within a group.” The team was particularly surprised to find that this homogenization effect persisted even after participants stopped using GPT-4o, especially when they had been given a structured framework to guide their thinking during exposure.
Kumar and colleagues’ findings offer valuable guidance for the future development of LLMs and creative AI tools. Their experiments have so far taken place in controlled laboratory settings where participants had limited exposure to GPT-4o, but the team plans to extend the research to more realistic environments. “Real-world settings are more complex and involve longer exposure periods,” Kumar added. “We aim to do field research with more natural tasks, such as writing creative stories. We also believe that the homogenization of ideas is a serious problem with long-term cultural and societal consequences. As a next step, we intend to explore the development of LLM agents that can mitigate this homogenization effect.”