Asking AI a good question makes the difference between a useful answer and a generic response you can't use. AI systems like ChatGPT, Claude and Gemini respond literally to what you type: the more concrete your question, the more useful the output.
Many professionals type a short sentence and expect a ready-made result. That doesn't work. A query to AI requires structure, context and clear expectations, just as you would provide when briefing a colleague.
Why does the way you ask a question to AI matter so much?
The way you ask a question to AI determines the quality of the answer, because AI models do not interpret intent like a human does. They recognise patterns in the words you use and generate a likely text based on them. You can find out more about this mechanism in our article on generative AI.
This means that vague questions lead to vague answers. “Help me with marketing” gives the AI no direction. The system fills in itself whether you want a campaign plan, a social media post or an audience analysis. The choice it makes is rarely what you intended.
A structured question forces the AI to stay within your framework. You decide the topic, format, target audience and depth. The AI fills in, but you steer. This is also exactly the principle on which the LearnLLM approach is built: first a clear framework, then output.
How do you build a good query to AI?
A good question to AI contains four elements: a purpose, context, specifications and constraints. Together, they form the briefing the AI needs to respond in a targeted way.
The purpose describes what you want to achieve. Not “write something about onboarding” but “write a checklist for the first working week of new employees”. The difference is that in the first variant, the AI can write an essay, a presentation or a policy document. In the second variant, the system knows exactly what format you expect.
Context tells the AI who it is for and in what situation. “It is for an HR department of an accounting firm with 80 employees” gives the model enough background to make relevant choices in tone, terminology and examples.
Specifications determine the format and scope. How many words, what structure, in what language, with or without enumerations. The more you specify, the less the AI fills in itself.
Constraints indicate what you don't want. “No lists longer than five points” or “no English terms where a Dutch alternative exists”. This saves you from having to make corrections afterwards.
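The four elements above can be captured in a small template. This is a minimal sketch for illustration, not an official API; the function name `build_prompt` and the example values are hypothetical.

```python
def build_prompt(purpose, context, specifications, constraints):
    """Assemble a structured AI prompt from the four briefing elements."""
    return "\n".join([
        f"Purpose: {purpose}",
        f"Context: {context}",
        f"Specifications: {specifications}",
        f"Constraints: {constraints}",
    ])

prompt = build_prompt(
    purpose="Write a checklist for the first working week of new employees",
    context="For the HR department of an accounting firm with 80 employees",
    specifications="Maximum 400 words, as a numbered list",
    constraints="No lists longer than five points; no unexplained jargon",
)
print(prompt)
```

Pasting the assembled text into any chat interface works just as well; the point is that every prompt answers all four questions before you send it.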
What mistakes do professionals make when questioning AI?
The biggest mistake when asking AI a question is combining multiple tasks in one prompt. “Write a summary of this report, provide areas for improvement and make a presentation” forces the AI to perform three tasks at once, and none of the three gets done well. Ask one question per prompt and build on the answer.
A second common mistake is the lack of a target audience. The question “explain machine learning” yields a different answer from “explain machine learning to a marketing manager with no technical background”. Without a target audience, the AI chooses an arbitrary level of knowledge, which is rarely the right one.
The third mistake is not checking AI output. AI models present incorrect information with the same certainty as correct facts. Modern models with web search (such as ChatGPT, Claude, Gemini and Perplexity) can cite real sources, but in pure chat mode without search they can invent links and references. Always verify factual claims yourself through reliable sources, even when a source citation is included. You can read more about the risks in our article on the risks of AI.
How do you use roles when posing a question to AI?
A question to AI becomes more precise if you give the model a role. A role controls the tone, level of knowledge and terminology of the answer. If you ask “explain the new labour law” without a role, you will get a general summary with a lot of context. Specify “you are an employment lawyer advising an HR manager”, and you will get legal implications and points of attention, without an introductory basic explanation.
Roles work best when you combine them with a concrete scenario. “You are a financial controller. A colleague asks you to summarise the quarterly figures for the management team. Write a summary of up to 300 words in a businesslike tone.” This gives the AI a clear framework within which it operates.
Be realistic in the role you choose. “You are the best marketer in the world” adds nothing. “You are a B2B marketer with experience in SaaS companies” does, because it forces the AI to stay within a specific domain. The role should be informative, not flattering.
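In chat-style APIs, this role typically goes into a separate system message that precedes your question. The sketch below shows that convention; the helper name `with_role` is hypothetical, and no actual API call is made.

```python
def with_role(role_description, user_question):
    """Build a chat-style message list: the system message carries the role,
    the user message carries the actual question."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_question},
    ]

messages = with_role(
    "You are an employment lawyer advising an HR manager.",
    "Explain the implications of the new labour law for our contracts.",
)
```

In a regular chat window, you get the same effect by simply opening your prompt with the role sentence.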
How do you refine a question to AI via follow-up questions?
You refine your question to AI by building the conversation in steps, not by writing one perfect prompt. The first answer is rarely the final result. Further questioning is a normal part of the work process, similar to giving feedback on a colleague's first draft.
Start broadly and refine step by step. First, ask for a structure for your document. Assess whether the outline is correct. Then ask to work out the first section. Give feedback on what could be improved. Then have the rest completed with the same adjustments.
AI systems like ChatGPT remember the context of your conversation as long as the window is open. That means each follow-up question builds on what was discussed earlier. Use that: refer to previous answers, indicate what you want to keep and what needs to change. Building the conversation this way produces better results than starting over and over again. In the ChatGPT course, you learn how to turn this iterative process into a repeatable workflow.
What are concrete examples of good and bad AI questions?
An example of a query to AI that doesn't work: “Write an e-mail about the new workflow.” The AI does not know what workflow, who the e-mail is for, how long it should be or what tone is desired. The result is a generic e-mail that you have to completely rewrite.
Same question, phrased effectively: “Write an e-mail of no more than 200 words to the sales team (15 people) about the new CRM working method that goes into effect on Monday. The tone is informal but clear. Name the three most important changes and conclude with where they can go for questions.”
Another example. Weak: “Make a presentation on AI.” Effective: “Create an outline for a 15-minute presentation on AI applications in customer service, aimed at team leaders at an insurance company. Focus on three concrete use cases with expected time savings per case.”
The pattern is the same every time: target, audience, format, constraints. The AI tools you use for this, such as Google Gemini or Claude, all work better with structured input.
How do you apply questions to AI to your own work?
The AI query that delivers the most value is the one you link to a task you do regularly. Not a loose prompt for a one-off job, but a workflow for a task that recurs: a weekly report, client mailings, meeting summaries, policy texts. That way, you build a repeatable workflow rather than a series of separate experiments.
Judge the result not on “is this perfect” but on “how much work does this save me, and which errors do I need to catch”. Build checkpoints into your workflow: which claims you verify, which assumptions the model must not make, which nuances must be preserved. The AI is never responsible for the output you send; you are.
Document your working methods: a fixed structure for emails, a template for meeting summaries, a format for customer communications, including the control questions you go through each time. Reuse what works and adjust where necessary. This is exactly the approach LearnLLM uses: framework, output, control, in that fixed order.
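A documented working method can be as simple as a stored template plus its control questions. The sketch below is a hypothetical example for one recurring task; the template text and question list are illustrative, not prescribed.

```python
# Hypothetical template for a recurring task: the weekly team update.
# Keeping the control questions next to the template makes checking
# the output a fixed step rather than an afterthought.
WEEKLY_UPDATE_TEMPLATE = (
    "Purpose: write the weekly update for the sales team.\n"
    "Context: {context}\n"
    "Specifications: max 200 words, informal but clear.\n"
    "Constraints: name at most three changes."
)

CONTROL_QUESTIONS = [
    "Are all figures, names and dates correct?",
    "Did the model invent any sources or details?",
    "Is the tone right for this audience?",
]

prompt = WEEKLY_UPDATE_TEMPLATE.format(
    context="New CRM working method starts Monday; 15 recipients."
)
```

Only the context changes from week to week; the framework and the control step stay the same.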
Want to master this method structurally? Sign up for Working professionally with ChatGPT. In this e-learning, you will build three complete AI workflows based on your own work, learn to systematically control output with five fixed control questions and finish with a personal AI work file and a certificate.


