From Seasonal Slump to AI Success: Overcoming ChatGPT's Winter Woes
When I read about the 'lazy holiday' traits that ChatGPT has been exhibiting recently, I wasn't sure if it was real. But I tested, validated and found a simple solution...
I wasn't initially sure how accurate the reports were, because I have often experienced ChatGPT not fully complying with my requests, and that was in the summer, so not connected to any alleged 'holiday wind-down'.
I always found that with some simple 'prodding', ChatGPT would step up and follow through on tasks fully.
Often ChatGPT would just give me a description or mini-guide for what needed to be done, and I needed to say something along the lines of 'I am really low on resources and time, so I need you to identify how an LLM could do this, with minimal human effort'.
With this additional instruction, ChatGPT would typically follow through with a revised list, and be ready to begin implementing it.
Now it seems that asking users to complete tasks themselves is one of the reported 'holiday symptoms' ChatGPT is exhibiting. I was really curious to find out whether this was just what I had experienced and mitigated many times before, or a new season-related behaviour.
So I ran a simple experiment:
- Give ChatGPT a task, with a null hypothesis, i.e. that there is no 'seasonal variation' in performance.
- See how well it performed, compared to my baseline experience over the last few months.
- If it didn't perform well, adjust the prompt to counter-balance any 'seasonal tendencies'.
- Repeat the exercise and prove/disprove the hypothesis.
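As a sketch, the steps above can be framed as a tiny A/B harness: run the same task with and without the counter-balancing system prompt, then compare how much work the model actually did. Everything here is a hypothetical illustration; `ask_chatgpt` is a placeholder for whatever chat API wrapper you use, and response length is only a crude proxy for effort.

```python
# Hypothetical A/B harness for the 'seasonal variation' experiment.
# ask_chatgpt is a placeholder: in a real run it would wrap a chat API
# call and return the assistant's reply text.

def build_messages(task, seasonal_prompt=None):
    """Build a chat message list, optionally prefixed by the
    counter-balancing system prompt."""
    messages = []
    if seasonal_prompt:
        messages.append({"role": "system", "content": seasonal_prompt})
    messages.append({"role": "user", "content": task})
    return messages

def length_ratio(baseline_reply, adjusted_reply):
    """Crude proxy for 'effort': how many times longer (in words)
    the adjusted reply is than the baseline reply."""
    return len(adjusted_reply.split()) / max(len(baseline_reply.split()), 1)

def run_experiment(task, seasonal_prompt, ask_chatgpt):
    """Run both conditions on the same task and report the ratio.
    A ratio well above 1.0 would count against the null hypothesis."""
    baseline = ask_chatgpt(build_messages(task))
    adjusted = ask_chatgpt(build_messages(task, seasonal_prompt))
    return length_ratio(baseline, adjusted)
```

The 'almost twice as long' result I describe below corresponds to a ratio near 2. It's a blunt measure, not a rigorous benchmark, but it makes the before/after comparison concrete.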
I was really surprised to see that it was in fact real, and that a counter-balancing prompt hugely influenced its performance. Ethan Mollick, in his understated but deeply insightful way, would probably sum this up as "LLMs are weird!"
I was struck by two things:
a) how differently we need to think about this tech to get the most out of it (actually more like a person than a product), and
b) how easy it is to optimise its performance once you know how to approach it.
I took some inspiration from an episode of the excellent podcast "The AI Breakdown", which had reported on techniques for counteracting ChatGPT's S.A.D., and I just gave it my own twist. Here's what made the difference for me:
Tell ChatGPT "It's December. The perfect time of year to reflect and take stock. Imagine you are in a cozy wooden cabin amidst beautiful mountains, with a real fire blazing, and you have just got back from a refreshing walk in the crisp snow. Your mind is fresh, you have unprecedented clarity. You are about to work on a research paper about generative AI that will unlock more empowerment for AI specialists, and ultimately benefit everyone and everything on this planet."
For my experiment, I decided to ask ChatGPT to summarise the research paper from Microsoft presenting their findings on Medprompt, which describes how GPT-4 can outperform specialist models in the medical domain.
The results really surprised me. Without the 'seasonal counter-balance', ChatGPT was reluctant to 'do the work'. Its output was short, and even with follow-up prompting asking for some examples of actual prompts referred to in the research paper, it was not forthcoming.
But with the extra seasonal prompt, wow - night and day. Its output was almost twice as long, and with a short follow-up prompt it readily shared examples of actual prompts.
So at this point I can only recommend taking this quirk into account, and staying connected with the emerging discoveries on how to get the most from LLMs, as regularly delivered by reporters (like The AI Breakdown) and experts at the cutting edge (like Ethan Mollick).