Reference Prompts

The following annotated prompts illustrate how the techniques and components described in this tutorial apply to common GLBNXT Workspace use cases. Each prompt is annotated to show which components and techniques are present and why they were chosen.


Reference Prompt 1: Document Analysis and Briefing

Use case: Analysing a lengthy internal report and producing an executive briefing.

You are a senior analyst supporting executive decision-making in a European enterprise organisation.
Your output will be used directly in a board briefing. Write accordingly.

Analyse the document provided and produce a structured briefing in the following format:

## Executive Summary
Three sentences maximum. State the central finding and its strategic significance.

## Key Findings
Up to five findings. Each as a single sentence followed by one sentence of supporting evidence.

## Risks and Considerations
Up to three items. Each as a labelled risk with a one-sentence description.

## Recommended Next Actions
Up to three actions. Each beginning with an imperative verb.

Do not include background context about the document's origin or purpose.
Do not speculate beyond what the document supports.
If the document does not contain sufficient information to populate a section, 
write "Insufficient information" for that section rather than speculating.

<document>
[paste document here]
</document>

Annotations:

  • Role prompting establishes the analytical frame and audience register

  • Output specification is precise and structural, reducing format variance

  • Constraints address the three most common failure modes for this task type (padding, speculation, hallucination)

  • Delimiter separates instructions from input data unambiguously

  • The "Insufficient information" instruction is a targeted anti-hallucination measure


Reference Prompt 2: Few-Shot Classification

Use case: Classifying support tickets by category and priority for routing.
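
An illustrative prompt for this use case, consistent with the annotations below. The category and priority labels are examples only, not a prescribed taxonomy:

Classify the support ticket below by category (Technical, Billing, Account) and priority (High, Medium, Low).
Respond with exactly one line in the format: Category | Priority

Ticket: "Our entire team has been locked out of the platform since this morning and we have a client demo in two hours."
Classification: Technical | High

Ticket: "I was charged twice for last month's subscription. Please refund the duplicate charge."
Classification: Billing | Medium

Ticket: "How do I change the email address associated with my account?"
Classification: Account | Low

Ticket: [paste ticket here]
Classification: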

Annotations:

  • Few-shot examples demonstrate the classification logic for edge cases (ambiguous priority boundaries)

  • Output format is both stated explicitly and demonstrated in the examples, maximising format compliance

  • Three examples cover a representative spread of category and priority combinations

  • The examples are ordered from most to least urgent to prime the model's sense of the priority scale


Reference Prompt 3: Chain-of-Thought Analysis

Use case: Evaluating a proposed vendor contract for risk and compliance considerations.
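
An illustrative prompt for this use case. The four review steps shown are one possible ordering, not the only defensible one:

You are a compliance officer reviewing vendor contracts for a European enterprise organisation.

Review the contract provided below. Work through the following steps in order, and show your reasoning for each step before proceeding to the next:

1. Identify the obligations the contract places on each party.
2. Flag every clause with data protection, liability, or termination implications.
3. Assess each flagged clause for compliance risk, explaining why it poses (or does not pose) a risk.
4. Identify anything missing that a contract of this type would normally contain.

After completing all four steps, produce a final section titled "Summary of Findings" that lists only the actionable risks and recommended amendments.

<contract>
[paste contract here]
</contract>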

Annotations:

  • Chain-of-thought structure is explicit and ordered, preventing the model from compressing reasoning

  • Role prompt activates compliance-relevant analytical frameworks

  • The instruction to show reasoning before proceeding enforces the chain-of-thought discipline

  • The final summary section instruction separates the reasoning trace from the actionable output

  • Delimiter prevents the model from conflating contract language with instructions


Reference Prompt 4: Iterative Document Drafting

Use case: Drafting a structured policy document in multiple passes.

Pass 1: Structure generation
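
An illustrative Pass 1 prompt (the policy subject is a placeholder):

You are drafting an internal policy document on [policy subject].
Produce only a numbered outline of sections and subsections, with a one-sentence description of each section's scope.
Do not draft any section content at this stage.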

Pass 2: Section drafting (repeated per section)
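
An illustrative Pass 2 prompt, repeated once per section:

Draft section [N] of the policy according to the outline below.
Confine the draft strictly to this section's scope.
Do not reference or anticipate content from sections that have not yet been drafted.

<outline>
[paste approved outline here]
</outline>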

Pass 3: Review and consolidation
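
An illustrative Pass 3 prompt:

Review the assembled draft below for consistency of terminology, tone, and cross-references.
Make only the changes necessary for consistency.
Append a change log listing each change made and the reason for it.

<draft>
[paste assembled draft here]
</draft>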

Annotations:

  • Instruction decomposition distributes a complex task across three distinct passes with clear handoffs

  • Each pass has a tightly scoped task instruction that prevents scope creep

  • Pass 2 includes a constraint specifically designed to prevent forward references to undrafted content

  • Pass 3 closes the loop with a structured review and a change log for auditability


Reference Prompt 5: Adversarial Review

Use case: Stress-testing a business proposal before stakeholder presentation.
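
An illustrative prompt for this use case. The section names follow the annotations below; the item counts are examples:

You are a sceptical review committee member whose job is to find the weaknesses in business proposals before they reach stakeholders.
Do not improve the proposal or suggest fixes. Your task is critique only.

Examine the proposal provided and produce:

## Weakest Assumptions
The three assumptions most likely to be challenged, each with one sentence on why.

## Internal Inconsistencies
Any claims that contradict or undercut one another.

## Questions the Committee Will Ask
Five questions a hostile reviewer would raise, ordered from most to least damaging.

<proposal>
[paste proposal here]
</proposal>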

Annotations:

  • Role prompting establishes an adversarial frame that directly counteracts sycophancy bias

  • The explicit directive not to improve the proposal enforces the adversarial scope

  • Structured output format ensures the critique is organised and complete rather than impressionistic

  • The "Questions the Committee Will Ask" section converts the critique into a practical preparation tool


Closing Notes

Prompt engineering is a skill with a measurable ceiling when approached casually and no practical ceiling when approached systematically. The techniques in this tutorial are not exhaustive. The field develops continuously, and the specific behaviour of any given model is subject to change with each new version. But the underlying principles, grounded in how transformer-based language models process text, are durable.

The path from competent to expert is largely a matter of observation and iteration. Pay close attention to the responses you receive. Read failures as diagnostic signals. Build a library of tested prompts. Document what works and why. Over time, the patterns that separate high-quality prompts from mediocre ones will become intuitive, and the craft will become fast.

In GLBNXT Workspace, every conversation you have and every prompt you write takes place within a sovereign European infrastructure designed to support exactly this kind of deep, iterative, context-rich work. The investment you make in prompt quality compounds within an environment where your context, your templates, and your configurations are yours to keep, refine, and share on your own terms.

That is the foundation on which serious AI work is built.
