AI – Artificial Intelligence, or Automated Idleness?

As AI platforms become more capable, we are becoming more complacent

Is AI making us lazy? 

As artificial intelligence becomes more capable, we risk unknowingly becoming more complacent. New research from MIT suggests that over-reliance on large language models such as ChatGPT may weaken our neural connectivity and memory recall, leading to automated idleness. This article explores how AI can dull creativity and critical thinking when used passively, and how to reclaim its power as a tool for smarter, more active engagement.

The facts behind the fearmongering 

Have you found yourself feeling uncomfortable after using AI for a creative task? Do you feel that you’ve relied on it a little more than you should have? Are you worried that AI is making you lazy? Well, it seems your concerns may be justified. A new MIT study used EEG brain scans to show that relying on LLMs such as ChatGPT can negatively affect neural connectivity and memory skills.

Researchers tested three groups who wrote essays either using an LLM (large language model), a search engine, or no supporting technology at all, and found that, of the three, the group who relied entirely on the LLM had the weakest neural connectivity and markedly reduced memory recall.

Furthermore, in post-task interviews, 83.3% of the LLM user group were unable to quote a single sentence from the essay they had produced. And of the 16.7% who could recall what had been written, not a single user produced a correct quote. By comparison, 88.9% of search engine and brain-only users could recall sentences from what they had produced. When the three groups were then tasked with writing a second essay without any assistance, the LLM group showed weaker memory recall and signs of cognitive adaptation towards passivity; the group’s neural activity remained below baseline levels even after relinquishing AI. 

What are the implications? 

The worrying implication here is that overuse of AI can reduce our capacity for critical thinking, engagement, and creativity. AI is already notorious for producing bland, samey output; many people report being able to spot ‘AI phrasing’, those clichéd and hackneyed terms that signal to readers that the copy they’re reading has been generated by an LLM.

Furthermore, if we’re not critically reviewing what AI produces for us, we are contributing to the publication of bland output with no meaningful connection or insight. In today’s oversaturated market this is especially dangerous: not only does bland content make your brand forgettable, it also feeds the content saturation that is causing fatigue among so many clients and consumers.

What’s the best way forward? 

So, should you ban ChatGPT from your workplace entirely? Should you denounce AI as a threat to humanity and intelligent thinking? No, not yet anyway. LLMs offer undeniable efficiency benefits and can be great for bouncing ideas around. The answer lies not in what we’re using, but in how we’re using it. We need to strike a balance and know when to offload tasks to AI and when to engage genuine creativity and critical thinking.

Research 

AI is great for researching and gathering source material, summarising documents, and discovering new topics and insights. However, stay engaged and critical when evaluating its responses, and don’t just copy and paste its output into your own work.

Some key tips to remember when using AI to research a topic: 

  • Gather background information and surface relevant sources quickly

  • Summarise complex topics or long articles for rapid understanding 

  • Spot connections or trends across multiple pieces of information 

  • Verify the accuracy of cited sources by clicking through to the original source material rather than assuming AI’s summary is correct

  • Check for AI hallucinations – these are plausible-sounding but incorrect facts – by cross-referencing with secondary data 

  • Re-prompt or refine questions to encourage deeper insights 

  • Use it as a springboard, not a substitute – let AI do the legwork so you can focus on analysis and interpretation 

Creative 

Using AI for creative output is one of the areas that most polarises opinion. Creativity is innately human, and many argue that using an LLM for creative copy demeans the work of truly talented writers. However, there is an argument for using AI as a creative collaborator rather than a ghostwriter.

Here are four tasks your chosen AI can help with when producing copy:

  • Generate first drafts or document outlines to overcome creative blocks  

  • Explore tone and voice variations to find phrases that best suit your brand 

  • Experiment with phrasing and structure to refine a message 

  • Check for clarity, flow and grammar before you refine it into a final draft

Ideation 

There’s nothing more intimidating than a blank page. AI can be a fantastic tool for breaking that initial creative block, and for brainstorming and strategy planning. 

Try asking your LLM platform to: 

  • Generate fresh ideas for campaigns, headlines or content topics 

  • Explore alternative perspectives or audience angles that you might have overlooked 

  • Combine unrelated concepts to spark a new idea

  • Stress-test or critique your ideas 

  • Provide loose structures from your notes for you to further refine 

  • Prioritise ideas or projects and produce timelines and critical paths

The way we use AI in our lives is changing and adapting as fast as the technology itself, but there’s no denying that, when used actively rather than passively, it can be an invaluable tool to help us reach greater potential.  

 
