At this year’s HELOA National Conference, Gecko’s Jonny AW ran a hands-on workshop called Prompt Perfect: Using AI to Power Your Work and Life. The goal was straightforward: help people move AI from “interesting but intimidating” to “actually useful in my day-to-day work”.
Rather than dwelling on theory or future possibilities, the session focused on real tasks HE professionals handle every week. Student emails. Open day comms. School visits. Reports. Policies. Workshop delivery. The kind of work that quietly eats up time and energy.
Based on the response in the room, it worked. Here’s what happened – and the cheat sheet from the session if you want to jump straight in.
From curiosity to confidence
The strongest theme from attendee feedback was confidence.
Many people arrived unsure where to start with AI, or worried about using it wrong. Some had tried tools like ChatGPT before but felt like they were only scratching the surface or getting inconsistent results.
By the end of the workshop, that had shifted. Attendees left feeling more confident about:
- Choosing the right AI model for the task
- Writing prompts that actually produce usable output
- Refining and improving results rather than starting from scratch
- Understanding where human judgement still matters
As one attendee put it:
“After being on maternity leave for a year, I’ve come back and felt like I’d missed a whole revolution. This session really helped me start to catch up.”
That reassurance mattered. The workshop wasn’t about replacing expertise. It was about showing how AI can act like an enthusiastic placement student: fast, tireless and helpful, but one that needs clear instructions and human oversight.
The four parts of a prompt that actually work
A key part of the session was introducing a simple framework for prompting AI effectively. Rather than vague instructions like “write me an email”, attendees learned to structure prompts around four elements:
- Purpose – What’s the goal? Inform, persuade, summarise, analyse?
- Persona – Who’s writing and who’s reading? Tone matters, especially in HE.
- Parameters – Length, format, must-haves, things to avoid.
- Polish – What makes it sound like you and not generic AI output?
This framework gave people a repeatable way to improve results quickly. The difference between a basic prompt and a well-structured one was obvious when tested live, and it helped explain why AI sometimes feels hit and miss.
Just as importantly, it showed that good outputs come from conversation and iteration, not one-shot perfection.
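For readers who like to see the framework laid out concretely, here is a minimal, hypothetical sketch of how the four elements could be assembled into a single structured prompt. The function name and example wording are illustrative, not taken from the workshop materials:

```python
# Hypothetical sketch: assembling the four-part prompt framework.
# The element names mirror the framework; everything else is illustrative.

def build_prompt(purpose: str, persona: str, parameters: str, polish: str) -> str:
    """Combine the four elements into one structured prompt string."""
    return "\n".join([
        f"Purpose: {purpose}",
        f"Persona: {persona}",
        f"Parameters: {parameters}",
        f"Polish: {polish}",
    ])

prompt = build_prompt(
    purpose="Summarise this open day report into three social media posts.",
    persona="You are a friendly HE recruitment officer writing for Year 12 students.",
    parameters="Each post under 280 characters; avoid jargon; end with a call to action.",
    polish="Match our warm, informal institutional tone; no generic AI phrasing.",
)
print(prompt)
```

The point is not the code itself but the habit it encodes: stating all four elements explicitly before hitting send, rather than hoping the model guesses them.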
Real work, real AI
The most energy in the room came from the practical exercises.
Attendees worked on tasks like:
- Turning long open day reports into social media content
- Analysing school visit data to spot patterns and insights
- Rewriting formal policy language for Year 12 students
- Researching schools to generate conversation starters for visits
These weren’t hypothetical examples. They were the exact kinds of jobs people needed to get done the following week.
Several attendees commented that they finally understood how AI could support their workload without feeling like yet another thing to learn or manage.
One piece of feedback summed this up well:
“I really enjoyed the session and now understand how to use different models to achieve different goals, for example using Claude to enhance and refine workshop delivery.”
Training matters more than tools
Another clear takeaway was that many HE professionals have access to AI tools but lack structured guidance on how to use them safely and effectively.
Before the workshop, some attendees admitted they felt nervous about:
- Data privacy
- Saying the wrong thing to students
- Trusting AI outputs
- Not knowing where the boundaries are
By covering simple rules around anonymisation, institutional policy and human review, the workshop helped remove that anxiety. People left feeling empowered rather than overwhelmed.
That empowerment matters. AI doesn’t increase capacity on its own. Confidence, clarity and good habits do.
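To make the anonymisation rule tangible, here is a minimal sketch of stripping obvious identifiers from text before pasting it into an AI tool. This is an illustrative assumption, not part of the workshop materials, and real institutional policies need far more than two regex patterns (names, student IDs, contextual details), so treat it as a starting point only:

```python
# Hypothetical sketch: redact obvious identifiers before sending text to an
# AI tool. Covers only emails and UK-style mobile numbers -- a real policy
# would cover much more (names, IDs, context).
import re

def redact(text: str) -> str:
    """Replace email addresses and UK-style mobile numbers with placeholders."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    text = re.sub(r"\b07\d{3}\s?\d{6}\b", "[PHONE]", text)
    return text

print(redact("Contact sam.lee@school.ac.uk or 07123 456789 about the visit."))
# -> Contact [EMAIL] or [PHONE] about the visit.
```

Even a simple habit like this, combined with checking institutional policy and keeping a human in the review loop, goes a long way towards the safe-use boundaries the session covered.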
From prompt to product
The session closed by connecting these individual skills back to how AI is being used in practice at Gecko. The same principles of prompting, refinement and human oversight underpin:
- AI-powered chatbots answering student questions 24/7
- Automated insights for recruitment and outreach teams
- Personalised email journeys at scale
- Tools that free staff up for the work that genuinely needs a human
The difference is that the prompt engineering has already been done.
Try it for yourself
The challenge we set attendees was simple: pick one prompt from the cheat sheet, use it in real work over the following week, then reflect on what happened – good or hilariously bad.
That small step is where change actually starts.
You can do the same. The cheat sheet includes the four-part prompt framework, practical examples and the prompts from the session. Download it, pick one to test in your work this week, and see what happens.
The feedback from HELOA made one thing clear. When AI training is practical, relevant and grounded in real HE work, people stop fearing it and start using it. And when that happens, capacity grows without losing the human touch that matters most in higher education.