# A Self-Referencing Tutorial with Live Examples

*Meta-note: This tutorial demonstrates prompt engineering techniques by using them in its own construction. Each technique is explained and immediately exemplified in the tutorial's structure itself.*
## Foundation: Why Structure Matters

When crafting prompts, unclear instructions lead to inconsistent outputs. This tutorial itself follows structured markup principles. Notice how I'm using:

- **Clear headers** to separate sections
- **Bold emphasis** for key concepts
- **Consistent formatting** throughout

**Poor prompt:** "Write something about cats make it good"

**Better prompt:** "Write a 200-word informative article about domestic cats, focusing on their behaviour and care requirements."
## Level 1: Basic Markdown Structure

### Headers Create Hierarchy

```markdown
# Primary Instruction (H1)
## Task Breakdown (H2)
### Specific Requirements (H3)
```

**Live example:** Notice how this tutorial uses exactly this header structure; each section builds logically on the previous one.
### Emphasis and Lists

**Bold text** highlights critical requirements:

**Required:** Response must be exactly 500 words
**Optional:** Include relevant examples
**Forbidden:** Do not use technical jargon

**Numbered lists** create clear sequences:

1. Analyse the provided text
2. Identify key themes
3. Summarise findings
4. Provide recommendations
**Self-reference:** This tutorial follows this exact pattern; each section builds systematically on the previous one.
## Level 2: Content Boundaries and Examples

### Code Blocks for Clarity

Triple backticks separate different content types:

```
The quick brown fox jumps over the lazy dog.
```

Analysis: This pangram contains all 26 letters of the English alphabet.

- Word count: 9 words
- Character count: 43 (including spaces, excluding the full stop)
- Unique letters: 26
**Tutorial demonstration:** I'm using this exact technique throughout; each code block clearly separates example content from instructional text.
### Quotation Marks for Literal Text

When you need exact reproduction:

```
Respond with exactly: "Analysis complete. Results below."
Then provide your analysis.
```
## Level 3: XML-Style Semantic Tags

### Creating Logical Sections

```xml
<input_document>
[Document content goes here]
</input_document>

<evaluation_criteria>
- Clarity of argument
- Evidence quality
- Citation accuracy
- Writing style
</evaluation_criteria>

<output_requirements>
Provide a structured review with ratings (1-10) for each criterion.
</output_requirements>
```
**Meta-demonstration:** This tutorial could be restructured using XML tags:

```xml
<tutorial_section id="xml_tags">
  <learning_objective>Understand semantic markup in prompts</learning_objective>
</tutorial_section>
```
### Referencing Sections

XML tags allow you to reference specific parts:

```
Follow the guidelines in <evaluation_criteria>.
Format your response according to <output_requirements>.
If the document doesn't meet standards in <evaluation_criteria>, explain why.
```
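In application code, these tagged sections are usually assembled programmatically rather than typed by hand. Below is a minimal Python sketch of that idea; the function name and criteria list are illustrative assumptions, not part of any particular library:

```python
# Build a review prompt with XML-style semantic sections.
# All names here are illustrative, not a fixed API.
def build_review_prompt(document: str, criteria: list[str]) -> str:
    criteria_block = "\n".join(f"- {c}" for c in criteria)
    return (
        f"<input_document>\n{document}\n</input_document>\n\n"
        f"<evaluation_criteria>\n{criteria_block}\n</evaluation_criteria>\n\n"
        "<output_requirements>\n"
        "Provide a structured review with ratings (1-10) for each criterion.\n"
        "</output_requirements>"
    )

prompt = build_review_prompt(
    "Sample essay text...",
    ["Clarity of argument", "Evidence quality", "Citation accuracy", "Writing style"],
)
```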
## Level 4: Advanced JSON Schema

### Structured Data Specification

```json
{
  "task": "content_analysis",
  "input": {
    "type": "blog_post",
    "word_count": 1500,
    "topic": "sustainability"
  },
  "analysis_framework": {
    "readability": {
      "metrics": ["flesch_reading_ease", "sentence_length", "paragraph_structure"],
      "target_score": "60-70"
    },
    "engagement": {
      "elements": ["hook", "storytelling", "call_to_action"],
      "rating_scale": "1-10"
    },
    "seo_factors": {
      "keyword_density": "1-3%",
      "meta_elements": ["title", "description", "headers"]
    }
  },
  "output_schema": {
    "summary": "string (max 100 words)",
    "scores": {
      "readability": "number",
      "engagement": "number",
      "seo": "number"
    },
    "recommendations": {
      "immediate": ["array of strings"],
      "long_term": ["array of strings"]
    },
    "confidence_level": "number (0-100)"
  }
}
```
**Self-referential example:** This tutorial itself could be analysed using this schema; it has clear structure (readability), engaging examples (engagement), and organised headers (SEO factors).
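Specifying an `output_schema` is only half the job; the consuming code should check that the model actually honoured it. Here is a minimal Python sketch of that check, assuming the model's reply arrives as a JSON string; the helper name and key list are illustrative:

```python
import json

# Keys promised by the "output_schema" above (illustrative, not a fixed API).
REQUIRED_KEYS = {"summary", "scores", "recommendations", "confidence_level"}

def validate_response(raw_reply: str) -> dict:
    """Parse a model reply and fail fast if the schema was not honoured."""
    data = json.loads(raw_reply)  # raises json.JSONDecodeError on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Response missing keys: {sorted(missing)}")
    if not 0 <= data["confidence_level"] <= 100:
        raise ValueError("confidence_level must be between 0 and 100")
    return data
```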
### Template Variables in JSON

```json
{
  "task": "{{TASK_TYPE}}_analysis",
  "target_audience": "{{USER_EXPERTISE_LEVEL}}",
  "constraints": {
    "word_limit": "{{MAX_WORDS}}",
    "tone": "{{REQUIRED_TONE}}",
    "format": "{{OUTPUT_FORMAT}}"
  }
}
```
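The `{{VARIABLE}}` placeholders are typically filled in by code before the prompt is sent. A minimal Python sketch, assuming the double-brace convention above (the `fill` helper is illustrative):

```python
import re

def fill(template: str, values: dict[str, str]) -> str:
    """Replace each {{NAME}} placeholder; unknown names are left intact
    so missing values stand out in the rendered prompt."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values.get(m.group(1), m.group(0)), template)

prompt = fill(
    '{"task": "{{TASK_TYPE}}_analysis", "target_audience": "{{USER_EXPERTISE_LEVEL}}"}',
    {"TASK_TYPE": "readability", "USER_EXPERTISE_LEVEL": "intermediate"},
)
# -> {"task": "readability_analysis", "target_audience": "intermediate"}
```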
## Level 5: Multi-Layered Hybrid Approaches

### Combining All Techniques

```markdown
# Document Enhancement System

## Core Instructions
<primary_task>Enhance provided content for maximum impact</primary_task>

## Input Processing
Document type: {{DOCUMENT_TYPE}}
Current word count: {{CURRENT_LENGTH}}
Target audience: {{AUDIENCE_PROFILE}}
Distribution channel: {{PLATFORM}}

## Enhancement Framework
{
  "content_analysis": {
    "structure": "assess logical flow and organisation",
    "clarity": "evaluate language accessibility",
    "engagement": "measure hook effectiveness and retention"
  },
  "optimisation_strategy": {
    "IF platform == 'social_media'": "prioritise brevity and visual appeal",
    "ELSE IF platform == 'academic'": "focus on rigour and citations",
    "ELSE": "balance accessibility with authority"
  }
}

## Quality Assurance
- Mandatory: Preserve author's voice and intent
- Enhanced: Improve clarity without changing meaning
- Optimised: Adapt for specified platform requirements

## Output Specification
<enhanced_content>
[Revised content here]
</enhanced_content>

<change_log>
- List of modifications made
- Rationale for each change
- Impact assessment
</change_log>
```
**Tutorial meta-analysis:** This tutorial demonstrates this hybrid approach; it combines markdown headers, XML-style tags, JSON schemas, and conditional logic within a coherent structure.
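In practice, a hybrid prompt like this is usually composed from named parts so each layer can be edited independently. A minimal Python sketch of that composition, with illustrative section names and variables:

```python
# Compose a hybrid prompt from independent layers: markdown headers,
# an XML-style task tag, and runtime variables. All names are illustrative.
def build_enhancement_prompt(document_type: str, platform: str, content: str) -> str:
    instructions = (
        "# Document Enhancement System\n\n"
        "## Core Instructions\n"
        "<primary_task>Enhance provided content for maximum impact</primary_task>\n"
    )
    context = (
        "## Input Processing\n"
        f"Document type: {document_type}\n"
        f"Distribution channel: {platform}\n"
    )
    payload = f"<input_document>\n{content}\n</input_document>"
    return "\n".join([instructions, context, payload])
```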
## Advanced Techniques: Conditional Logic and Context Management

### Conditional Processing

```
CONTEXT_CHECK:
IF user_expertise == "beginner":
    Use simple language and provide background explanations
ELSE IF user_expertise == "intermediate":
    Assume basic knowledge but explain advanced concepts
ELSE IF user_expertise == "expert":
    Use technical terminology and focus on nuanced insights
ENDIF

CONTENT_ADAPTATION:
IF content_type == "tutorial":
    Include step-by-step examples
ELSE IF content_type == "reference":
    Provide comprehensive but concise information
ENDIF
```
### Context Variables and State Management

```json
{
  "conversation_context": {
    "previous_topics": ["{{TOPIC_HISTORY}}"],
    "user_preferences": {
      "detail_level": "{{PREFERRED_DEPTH}}",
      "example_types": ["{{EXAMPLE_PREFERENCES}}"],
      "format_style": "{{OUTPUT_STYLE}}"
    },
    "session_state": {
      "current_focus": "{{ACTIVE_TOPIC}}",
      "completed_sections": ["{{FINISHED_TOPICS}}"]
    }
  }
}
```
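On the application side, this context usually lives in a small state object that is re-rendered into each prompt. A minimal Python sketch under that assumption (the class and field names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class ConversationContext:
    """Session state mirrored from the JSON structure above."""
    previous_topics: list[str] = field(default_factory=list)
    detail_level: str = "standard"
    current_focus: str = ""
    completed_sections: list[str] = field(default_factory=list)

    def to_prompt_block(self) -> str:
        # Render the state as a short context block prepended to each prompt.
        return (
            f"Previous topics: {', '.join(self.previous_topics) or 'none'}\n"
            f"Detail level: {self.detail_level}\n"
            f"Current focus: {self.current_focus or 'not set'}"
        )
```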
## Expert-Level: Self-Modifying and Adaptive Prompts

### Dynamic Prompt Evolution

```
## Adaptive Learning Framework

<initial_assessment>
Analyse user's first response to determine:
- Technical knowledge level
- Preferred communication style
- Specific interests within the topic
</initial_assessment>

<prompt_modification_rules>
BASED ON initial_assessment:
    UPDATE explanation_depth TO match user_knowledge_level
    ADJUST example_complexity TO user_comprehension_signals
    MODIFY interaction_style TO user_communication_preferences
</prompt_modification_rules>

<continuous_calibration>
THROUGHOUT conversation:
    MONITOR user_engagement_indicators
    ADAPT response_length BASED ON user_attention_signals
    REFINE example_selection USING user_feedback_patterns
</continuous_calibration>
```
### Self-Referencing Validation

```
## Prompt Quality Assurance

<self_check_protocol>
BEFORE responding:
- Verify all <instructions> are internally consistent
- Ensure <examples> actually demonstrate stated principles
- Confirm <output_format> matches <output_requirements>
- Validate that conditional logic in IF/ELSE statements is complete
</self_check_protocol>

<meta_validation>
This tutorial demonstrates self-referencing by:
- Using the markup techniques it teaches
- Providing live examples within its own structure
- Following the progressive complexity it advocates
- Implementing the quality assurance it recommends
</meta_validation>
```
## Practical Application: Building Your Own Structured Prompts

### Template Framework

```markdown
# {{YOUR_TASK_NAME}}

## Context Setting

## Input Specification
Input type: {{DATA_TYPE}}
Format: {{INPUT_FORMAT}}
Quality requirements: {{QUALITY_STANDARDS}}

## Processing Instructions
{
  "analysis_steps": ["{{STEP_1}}", "{{STEP_2}}", "{{STEP_3}}"],
  "quality_checks": {
    "accuracy": "{{ACCURACY_CRITERIA}}",
    "completeness": "{{COMPLETENESS_STANDARDS}}",
    "consistency": "{{CONSISTENCY_REQUIREMENTS}}"
  }
}

## Output Requirements

## Validation Protocol
- Self-check: Does output meet all specifications?
- Quality assurance: Are examples relevant and accurate?
- Completeness: Have all requirements been addressed?
```
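A cheap safeguard before sending any filled template: scan for placeholders that were never substituted. A minimal Python sketch using the same `{{NAME}}` convention (the helper name is illustrative):

```python
import re

def unfilled_placeholders(prompt: str) -> set[str]:
    """Return any {{NAME}} placeholders that survived template filling."""
    return set(re.findall(r"\{\{(\w+)\}\}", prompt))

rendered = "# Summarise report\nInput type: csv\nFormat: {{INPUT_FORMAT}}"
assert unfilled_placeholders(rendered) == {"INPUT_FORMAT"}  # caught before sending
```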
## Common Pitfalls and Solutions

### Inconsistent Markup

**Problem:** Mixing different markup styles without clear boundaries

❌ Poor approach:

```
# Main heading
`code style` **emphasis**
{json: "mixed with markdown"}
```

**Solution:** Choose a consistent markup hierarchy

✅ Better approach:

```markdown
# Main Instructions
## Task Breakdown
- Use consistent formatting
- Maintain clear hierarchy
- Separate different content types
```
### Over-Specification

**Problem:** Too many constraints that conflict with each other

❌ Problematic:

```json
{
  "length": "exactly 500 words",
  "requirements": ["comprehensive coverage", "detailed examples", "brief summary"],
  "constraints": ["no technical jargon", "expert-level analysis"]
}
```

**Solution:** Prioritise requirements and resolve conflicts

✅ Improved:

```json
{
  "primary_requirement": "comprehensive analysis in 500 words",
  "secondary_goals": ["clear examples", "accessible language"],
  "flexibility": "adjust detail level to maintain word count"
}
```
## Tutorial Conclusion: Meta-Learning Check

### Self-Assessment Questions

This tutorial has demonstrated its own principles. Can you identify:

- **Markdown usage:** How many header levels were used consistently?
- **XML tags:** Which semantic sections were created?
- **JSON schemas:** What structured data was specified?
- **Code blocks:** How were examples separated from instructions?
- **Self-reference:** Where did the tutorial analyse its own structure?
### Implementation Checklist

When creating your own structured prompts:

- **Clear hierarchy:** Headers progress logically from general to specific
- **Consistent formatting:** Same markup style throughout
- **Semantic boundaries:** Different content types are clearly separated
- **Self-validation:** The prompt includes quality checks
- **Adaptive elements:** Accounts for different user needs
- **Live examples:** Demonstrates rather than just describes
### Next Steps

1. **Immediate application:** Take an existing prompt you use and restructure it using these techniques.
2. **Progressive enhancement:** Start with basic markdown, then add XML tags, then incorporate JSON schema as needed.
3. **Continuous improvement:** Test your structured prompts and refine based on output quality.

This tutorial has demonstrated advanced prompt engineering by using every technique it teaches within its own construction: a practical example of self-referencing documentation that serves as both instruction and implementation guide.