The Anatomy of a Well-Formed Prompt

A prompt is not simply a question or an instruction. A well-formed prompt is a structured input containing several distinct components, each of which contributes to the model's ability to produce a useful response. Identifying these components and understanding their function is the foundation of systematic prompt construction.

Component 1: Task Instruction

The task instruction is the core directive. It tells the model what you want it to do. This is the component most users treat as the entire prompt, and doing so is the most common source of underperformance.

An effective task instruction is explicit about the action required, the object of that action, and the purpose or intended use of the output. Compare:

Weak: "Summarise this report."

Strong: "Produce a five-sentence executive summary of the following report. The summary will be included in a board briefing. Prioritise financial implications and strategic risks over operational detail."

The second instruction specifies the action (produce a summary), the form (five sentences), the audience (board level), the intended use (briefing), and the content priorities. None of this information requires special knowledge. It simply requires the discipline of making implicit intentions explicit.
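The decomposition above can be sketched as a small template. This is an illustrative sketch only: the function and field names (action, form, audience, and so on) are inventions for this example, not part of any model API.

```python
# Illustrative sketch: assemble an explicit task instruction from the
# parts identified above. All names here are hypothetical.
def build_task_instruction(action, audience, use, priorities):
    """Combine action, audience, intended use, and content priorities."""
    return (
        f"{action} of the following report. "
        f"The summary will be included in a {audience} {use}. "
        f"Prioritise {priorities}."
    )

instruction = build_task_instruction(
    action="Produce a five-sentence executive summary",
    audience="board",
    use="briefing",
    priorities="financial implications and strategic risks over operational detail",
)
```

The point is not that prompts must be built in code, but that each part of the strong instruction is a separate, deliberate choice.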

Component 2: Context

Context is information the model needs to understand the situation surrounding the task. It is distinct from the input data (the thing you want the model to act on) and from the task instruction (what you want done). Context answers the question: what does the model need to know about the broader situation to perform this task well?

Relevant context might include:

  • The user's role and expertise level

  • The intended audience for the output

  • Prior decisions or constraints that should be respected

  • The relationship between this task and a broader project or workflow

  • Organisational or domain-specific conventions

Context is particularly important when the same task instruction might yield different appropriate responses depending on circumstances. "Write a response to this complaint" means something very different for a frontline customer service agent versus a senior legal counsel. Without context, the model defaults to a generalised interpretation.
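The complaint example can be made concrete with a small sketch. The helper below is hypothetical; it simply shows how the same task instruction produces two quite different prompts once context is attached.

```python
# Hypothetical sketch: one task instruction, two context blocks.
TASK = "Write a response to this complaint."

def with_context(task, role, audience):
    """Prepend a context block describing role and audience to a task."""
    return f"Context: You are a {role}, writing for {audience}.\n\nTask: {task}"

agent_prompt = with_context(
    TASK, "frontline customer service agent", "an upset customer"
)
counsel_prompt = with_context(
    TASK, "senior legal counsel", "opposing counsel"
)
```

Without such a block, both prompts would be identical, and the model would fall back on a generalised interpretation of the task.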

Component 3: Input Data

Input data is the material the model is being asked to act on. It might be a document to summarise, a dataset to analyse, a piece of code to review, a draft to edit, or a set of facts to synthesise. In longer prompts, clearly delineating the input data from the task instruction and context is essential for model comprehension.

A reliable technique is to use explicit delimiters. Triple backticks, XML-style tags, or clearly labelled sections all work well. The goal is to ensure the model does not confuse your instructions with the content it is being asked to process.

Example using XML-style delimiters:
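A minimal sketch, shown here as a Python string; the tag names (instructions, context, document) are illustrative conventions rather than a requirement of any particular model.

```python
# Illustrative XML-style delimiters separating instructions, context,
# and input data. Tag names are a convention, not a fixed schema.
prompt = """<instructions>
Produce a five-sentence executive summary of the document below.
</instructions>

<context>
The summary will be included in a board briefing.
</context>

<document>
...report text goes here...
</document>"""
```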

Delimiters become especially important in longer prompts where multiple pieces of input data are present, or where the input data itself contains text that resembles instructions.

Component 4: Output Specification

Output specification tells the model exactly what the response should look like. This component is frequently absent from prompts written by users who assume the model will infer an appropriate format. Sometimes it does. But consistent, predictable formatting requires explicit specification.

Output specification can address:

  • Format (prose, list, table, JSON, markdown, code)

  • Length (word count, sentence count, number of items)

  • Structure (specific sections, labels, or headings)

  • Tone and register (formal, technical, conversational)

  • What to include and what to exclude

The more precisely you specify the output, the more directly usable the response will be. This is especially important in GLBNXT Workspace workflows where model output feeds into downstream documents, communications, or automated processes.
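When output feeds an automated process, a precise specification also makes the response checkable. The sketch below assumes a JSON output specification; the schema and the validator are illustrative, not part of any product API.

```python
import json

# Hypothetical output specification: the schema is illustrative.
output_spec = (
    "Respond with JSON only, using exactly these keys: "
    '"summary" (string) and "risks" (list of strings).'
)

def is_valid(response_text):
    """Check a model response against the specification above."""
    try:
        data = json.loads(response_text)
    except json.JSONDecodeError:
        return False
    return (
        set(data) == {"summary", "risks"}
        and isinstance(data["summary"], str)
        and isinstance(data["risks"], list)
    )
```

A response that fails the check can be rejected or retried before it reaches a downstream document.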

Component 5: Constraints and Negative Instructions

Constraints define what the model should not do. For certain requirements, negative instructions are more precise than positive ones. "Do not include implementation details" is more precise than "keep it high level." "Do not use passive voice" is more actionable than "write clearly."

A common oversight is to omit constraints entirely, then be frustrated when the model includes content you did not want. Every assumption you make about what the model will obviously not do is an opportunity for a constraint.

Putting the Components Together

Not every prompt requires all five components. A short, simple request to a well-configured model may need only a task instruction and an output specification. But for complex, high-stakes, or repeated tasks, assembling all five components is the most reliable path to consistent output.

A useful drafting habit is to run through the five components as a checklist before submitting a prompt, asking: have I specified the task, provided the necessary context, delimited the input data, defined the output format, and identified the constraints? Any absent component is a potential source of variance in the response.
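The checklist habit can be sketched as a simple assembler. The function and component labels below follow this article's five components; nothing here is a model API, and omitting a component is flagged rather than forbidden.

```python
# Illustrative sketch of the five-component checklist. Absent
# components are reported as potential sources of variance.
def assemble_prompt(task, context=None, input_data=None,
                    output_spec=None, constraints=None):
    """Join the supplied components and report any that are missing."""
    parts = [
        ("Task", task),
        ("Context", context),
        ("Input", input_data),
        ("Output", output_spec),
        ("Constraints", constraints),
    ]
    missing = [label for label, value in parts if not value]
    prompt = "\n\n".join(
        f"{label}:\n{value}" for label, value in parts if value
    )
    return prompt, missing

prompt, missing = assemble_prompt(
    task="Produce a five-sentence executive summary of the report below.",
    output_spec="Five sentences of prose, board-level register.",
)
```

Here the assembler would report Context, Input, and Constraints as missing, prompting the drafter to decide whether each omission is deliberate.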
