Here's something nobody tells you when you're starting out with AI: you don't need JSON to write great prompts. I spent weeks wrestling with brackets and commas before I realized I'd been overcomplicating everything. The truth? Most of the time, a simple, well-structured text prompt outperforms a fancy JSON setup.
Think about it. When you talk to a friend, you don't format your words like {"greeting": "hello", "question": "how are you"}. You just... talk. The same principle applies to AI. Sure, JSON has its place—particularly in API integrations and structured data exchanges—but for everyday prompt engineering? You're probably making your life harder than it needs to be.
This guide will show you how to ditch the JSON training wheels and craft powerful AI prompts using formats you already know: plain text, Markdown, XML, and YAML. Whether you're using ChatGPT, Claude, or Gemini, these techniques will help you communicate more clearly with AI while spending less time debugging syntax errors.
Let's cut through the jargon. Non-JSON AI prompt formats are simply ways of structuring your instructions to AI models without using JavaScript Object Notation. Instead of wrapping everything in {"key": "value"} pairs, you're using:
- Plain natural language
- Markdown headers and bullet points
- XML tags like `<instruction>do this</instruction>`
- YAML key-value pairs

These alternatives aren't just simpler to write—they're often more intuitive, easier to debug, and can actually improve the AI's understanding of what you want.
The difference isn't just cosmetic. JSON was designed for machines to talk to machines. It's rigid, unforgiving of typos, and requires you to escape special characters with backslashes. One missing comma can break everything.
Non-JSON formats, especially plain text and Markdown, were designed for humans. They're forgiving, readable, and natural. You can spot errors immediately. You can edit them in any text editor. Most importantly, modern AI models like GPT-4, Claude, and Gemini are trained on massive amounts of plain text—which means they're actually better at understanding natural language than structured formats.
Here's the kicker: when you send a JSON prompt to an AI, it still has to convert it into tokens and interpret the meaning. You're adding an extra layer of translation that doesn't always add value.
I'll be honest—when I first heard someone suggest using Markdown instead of JSON for complex prompts, I was skeptical. But after experimenting with both approaches across hundreds of prompts, the advantages became undeniable.
Six months from now, when you need to modify a prompt, which would you rather decipher: a nested JSON structure with escaped quotes, or a clean Markdown document with headers and bullet points? Markdown wins every time. Your future self will thank you.
No more counting brackets or debugging mysterious parsing errors. With text-based prompts, you write naturally and move on. I've cut my prompt development time in half simply by switching from JSON to Markdown for most tasks.
Try explaining a JSON prompt to a non-technical stakeholder. Now try showing them a Markdown document. Which conversation goes more smoothly? Non-JSON formats are inherently more collaborative because anyone can read and understand them.
Here's something that affects your bottom line: JSON syntax itself consumes tokens. All those curly braces, quotes, and commas add up. A well-structured text prompt can convey the same information using 15-20% fewer tokens, which means lower API costs and faster responses.
| Format | Token Count | Characters | Readability Score |
|---|---|---|---|
| JSON | 487 | 2,340 | Low |
| Markdown | 392 | 1,875 | High |
| Plain Text | 378 | 1,820 | Very High |
Not every task benefits equally from ditching JSON. Through trial and error, I've found that certain use cases absolutely shine with text-based prompts.
When you're generating blog posts, stories, or marketing copy, natural language prompts work beautifully. They give the AI context and tone in the same format you want the output. Using ChatGPT Plus or Jasper AI with plain text prompts feels like collaborating with a writing partner rather than programming a machine.
Building dialogue systems? Text prompts capture personality and nuance far better than JSON. You can show examples, demonstrate tone, and provide context naturally. Anthropic Claude API excels with conversational prompts that read like actual conversations.
When you need the AI to explain concepts, teach, or break down complex topics, structured text with headers works wonders. I use Markdown extensively with Google Gemini API for creating educational materials—the results are consistently more coherent than JSON-prompted alternatives.
If you're testing ideas quickly, the last thing you want is to wrestle with JSON syntax. Plain text lets you iterate fast. Services like Co:here AI API support flexible text inputs that make experimentation painless.
To be fair, JSON shines in specific scenarios:

- API integrations and machine-to-machine data exchange
- Programmatically generating prompts at scale
- Systems that validate inputs against strict schemas
The key is knowing when you actually need that structure versus when you're just following convention.
Here's where things get interesting. You might be thinking: "Sure, simple prompts work in plain text, but what about complex instructions with multiple components?"
I get it. I had the same concern. But here's what I've learned: clarity beats structure every time. Let me show you some techniques that work.
Use Markdown headers to organize different sections of your prompt:
```markdown
# Task
Write a product description for eco-friendly water bottles

# Context
- Target audience: environmentally conscious millennials
- Tone: enthusiastic but not preachy
- Length: 150-200 words

# Key Points to Include
- BPA-free materials
- Keeps drinks cold for 24 hours
- 10% of profits go to ocean cleanup

# Style Examples
[Insert 2-3 example sentences showing desired tone]
```
This structure is immediately scannable. You can edit any section without worrying about breaking syntax. The OpenAI GPT API handles this beautifully.
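Here's a minimal sketch of passing a Markdown prompt through the official OpenAI Python SDK. The model name is just a placeholder for whichever chat model you use:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The Markdown prompt goes in verbatim as the user message; no JSON
# wrapping of the instructions themselves is needed.
markdown_prompt = """\
# Task
Write a product description for eco-friendly water bottles

# Context
- Target audience: environmentally conscious millennials
- Tone: enthusiastic but not preachy
- Length: 150-200 words
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model works
    messages=[{"role": "user", "content": markdown_prompt}],
)
print(response.choices[0].message.content)
```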
Use clear separators to segment your instructions:
```text
===INSTRUCTION===
Analyze the sentiment of customer reviews

===INPUT===
[paste reviews here]

===OUTPUT FORMAT===
For each review, provide:
- Overall sentiment (positive/negative/neutral)
- Confidence score
- Key phrases supporting the assessment

===CONSTRAINTS===
- No reviews longer than 500 words
- Focus on explicit statements, not implications
```
This approach works exceptionally well with Microsoft Azure OpenAI Service for enterprise applications.
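If you build delimiter-based prompts often, a small helper keeps the sections consistent. This is just a sketch; the function name and section names are mine, not any library's:

```python
def build_delimited_prompt(**sections: str) -> str:
    """Join named sections into one prompt using ===NAME=== separators."""
    return "\n\n".join(
        f"==={name.upper().replace('_', ' ')}===\n{body.strip()}"
        for name, body in sections.items()
    )

prompt = build_delimited_prompt(
    instruction="Analyze the sentiment of customer reviews",
    input="[paste reviews here]",
    output_format="For each review: overall sentiment, confidence score, key phrases",
    constraints="Focus on explicit statements, not implications",
)
print(prompt)
```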
Structure prompts as a series of questions and answers:
```text
What do you need to do?
Create a weekly meal plan

Who is this for?
A vegetarian athlete training for a marathon

What are the constraints?
- 2,800 calories per day
- High protein (120g+)
- Budget: $80/week
- Prep time: under 30 minutes per meal

What should the output include?
Daily breakdown with recipes, shopping list, and macro calculations
```
I've used this successfully with Claude API for complex planning tasks.
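For reference, here's roughly how that call looks with the Anthropic Python SDK; treat the model string as a placeholder:

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

qa_prompt = """\
What do you need to do?
Create a weekly meal plan

Who is this for?
A vegetarian athlete training for a marathon

What are the constraints?
- 2,800 calories per day
- High protein (120g+)
- Budget: $80/week
- Prep time: under 30 minutes per meal
"""

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model name
    max_tokens=2048,
    messages=[{"role": "user", "content": qa_prompt}],
)
print(message.content[0].text)
```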
Let me save you some headaches by sharing mistakes I've made—and seen others make—when overusing JSON.
JSON requires escaping quotes and special characters. This becomes a mess fast:

```json
{"instruction": "Say \"hello\" and explain it's \"great\" to meet them"}
```

Versus plain text:

```text
Say "hello" and explain it's "great" to meet them
```

Which would you rather write and maintain?
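You can watch the escaping pile up with nothing more than Python's standard-library json module:

```python
import json

instruction = 'Say "hello" and explain it\'s "great" to meet them'

# Serializing forces every embedded double quote to be escaped.
print(json.dumps({"instruction": instruction}))
# -> {"instruction": "Say \"hello\" and explain it's \"great\" to meet them"}

# The plain-text version is just the string itself, no escapes to maintain.
print(instruction)
```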
I've reviewed prompts where someone created elaborate JSON schemas for tasks like "summarize this paragraph." That's like using a sledgehammer to crack a nut. Writesonic and Copy.ai prove that simple text prompts work perfectly fine for straightforward tasks.
Developers love structure, so they default to JSON assuming it's "more professional" or "more precise." But unless you're programmatically generating prompts or integrating with strict APIs, you're optimizing for the wrong thing. Optimize for clarity and iteration speed instead.
Try tracking changes in a 50-line JSON prompt in Git. Now try the same with a Markdown document. JSON diffs are ugly and hard to review. Text-based prompts create clean, readable version histories.
Are text-based prompts actually easier to work with? Short answer: absolutely, yes.
Long answer: The ease comes from several factors. First, you're writing in a format your brain naturally processes. You don't need to mentally translate between "what I want to say" and "how to structure this in JSON."
Second, debugging is visual and immediate. In a text prompt, if something's wrong, you can see it. In JSON, you might have a subtle syntax error that takes ten minutes to locate. I've wasted hours hunting for misplaced commas in JSON prompts—time I'll never get back.
Third, iteration is faster. Want to add a new instruction? Just type it. Want to rearrange sections? Cut and paste. No worrying about whether your commas and brackets still match up.
Tools like Stable Diffusion WebUI demonstrate this perfectly. Their text-based prompt system for image generation is intuitive and powerful precisely because it doesn't force unnecessary structure on users.
Are there useful structured formats besides JSON? Definitely, and each has its sweet spot.
XML offers a middle ground between JSON's rigidity and plain text's flexibility. It's especially useful when you have nested, hierarchical information:
```xml
<prompt>
  <task>Translate the following</task>
  <source language="French">
    Bonjour, comment allez-vous?
  </source>
  <target language="English"/>
  <style>Formal</style>
</prompt>
```
XML is self-documenting—the tags tell you what each element represents. It's particularly effective with AI21 Studio for structured generation tasks.
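When you do generate XML prompts programmatically, Python's built-in ElementTree handles the escaping and nesting for you. A minimal sketch:

```python
import xml.etree.ElementTree as ET

# Build the prompt as a tree; ElementTree escapes special characters for us.
prompt = ET.Element("prompt")
ET.SubElement(prompt, "task").text = "Translate the following"
source = ET.SubElement(prompt, "source", language="French")
source.text = "Bonjour, comment allez-vous?"
ET.SubElement(prompt, "target", language="English")
ET.SubElement(prompt, "style").text = "Formal"

ET.indent(prompt)  # pretty-print (Python 3.9+)
print(ET.tostring(prompt, encoding="unicode"))
```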
YAML combines structure with readability. It uses indentation instead of brackets, making it cleaner than JSON but more structured than plain text:
```yaml
task: Generate product descriptions
products:
  - name: Wireless Earbuds
    features:
      - noise cancellation
      - 8-hour battery
      - water resistant
    tone: professional yet approachable
  - name: Smart Watch
    features:
      - fitness tracking
      - heart rate monitor
      - sleep analysis
    tone: technical and precise
```
YAML shines for configuration-style prompts where you have multiple items with consistent properties. The Hugging Face Inference API works beautifully with YAML-structured inputs.
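A nice side effect of YAML prompts is that you can sanity-check them with PyYAML before sending anything to a model. A quick sketch:

```python
# pip install pyyaml
import yaml

yaml_prompt = """\
task: Generate product descriptions
products:
  - name: Wireless Earbuds
    features:
      - noise cancellation
      - 8-hour battery
      - water resistant
    tone: professional yet approachable
"""

# safe_load raises a YAMLError on bad indentation, catching mistakes early.
data = yaml.safe_load(yaml_prompt)
for product in data["products"]:
    print(product["name"], "-", ", ".join(product["features"]))
```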
This is where I need to challenge conventional wisdom. Many prompt engineering tutorials push JSON as the "professional" or "advanced" approach. But effectiveness isn't about complexity—it's about results.
In my testing across different models and tasks, natural language prompts consistently deliver:

- More coherent, on-target outputs
- Fewer syntax-related failures
- Faster iteration when requirements change
- Lower token counts, and therefore lower costs
The exception? When you're programmatically generating thousands of prompts or integrating with systems that require strict schemas. Then JSON makes sense as an interchange format.
But for human-written prompts? Natural language wins on nearly every metric that matters in real-world usage.
The OpenAI Codex API and DALL·E 3 API demonstrate this principle perfectly. Both accept descriptive text prompts and produce exceptional results without requiring JSON formatting.
Let's talk money. If you're using AI APIs at scale, tokens directly impact your budget. And here's an uncomfortable truth: JSON syntax wastes tokens.
Every curly brace, quotation mark, colon, and comma counts as characters that get tokenized. Over thousands of API calls, this adds up significantly.
Consider this simple instruction:

JSON version (86 characters):

```json
{"task":"summarize","input":"article text","length":"100 words","tone":"professional"}
```

Text version (76 characters):

```text
Summarize this article in 100 words with a professional tone:
[article text]
```

That's roughly a 12% reduction in characters for identical instructions, and the token savings are often larger still, because every brace, quote, and colon gets tokenized too. Multiply that across thousands of requests, and you're looking at substantial cost savings.
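Don't take my counts on faith; OpenAI's tiktoken library lets you measure both versions yourself (exact numbers vary by tokenizer):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer for GPT-4-era models

json_version = '{"task":"summarize","input":"article text","length":"100 words","tone":"professional"}'
text_version = "Summarize this article in 100 words with a professional tone:\n[article text]"

# Compare character and token counts side by side.
for label, prompt in [("JSON", json_version), ("Text", text_version)]:
    print(f"{label}: {len(prompt)} chars, {len(enc.encode(prompt))} tokens")
```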
Token Optimization Strategies:

- Default to plain text or Markdown for prompts a human writes and maintains
- Cut structural punctuation that carries no meaning for the model
- Keep section labels short and consistent
- Measure before and after with a tokenizer (as in the snippet above) so savings are verified, not assumed
Retrieval pipelines built around Pinecone Vector Database benefit from token-efficient prompts too, since they often push large volumes of queries through a model. Every token saved is latency reduced and money kept in your pocket.
Theory is great, but let's get practical. Here are templates I use regularly that deliver consistent results:
```text
CONTENT TYPE: Blog post introduction
TOPIC: Sustainable fashion trends 2025
AUDIENCE: Environmentally conscious consumers, ages 25-40
TONE: Conversational, optimistic, informative
LENGTH: 200-250 words

KEY POINTS:
- Growing consumer awareness
- Innovation in recycled materials
- Cost competitiveness improving

HOOK STYLE: Start with a surprising statistic or question
```
```text
ANALYZE THIS DATA:
[paste data]

FOCUS ON:
• Trends over time
• Anomalies or outliers
• Correlations between variables

PROVIDE:
1. Executive summary (3-4 bullets)
2. Detailed findings
3. Actionable recommendations

FORMAT: Business report style, assume audience has no statistical background
```
```text
EXPLAIN THIS CODE:
[paste code]

AUDIENCE: Junior developers with basic Python knowledge

BREAK DOWN:
- What the code does overall
- How each section works
- Why certain approaches were chosen
- Potential gotchas or edge cases

USE: Simple analogies and avoid jargon when possible
```
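To keep templates like these reusable, I fill in the variable parts with Python's string.Template; the field names here are just examples:

```python
from string import Template

# Reusable template; $code and $audience are the fill-in slots.
code_explainer = Template("""\
EXPLAIN THIS CODE:
$code

AUDIENCE: $audience

BREAK DOWN:
- What the code does overall
- How each section works
- Potential gotchas or edge cases
""")

prompt = code_explainer.substitute(
    code="def add(a, b):\n    return a + b",
    audience="Junior developers with basic Python knowledge",
)
print(prompt)
```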
These templates work across platforms—EleutherAI GPT-NeoX, Inferkit Text Generation API, and commercial services alike—because they prioritize clarity over format.
After thousands of prompts, these principles consistently produce the best results:
Don't assume the AI knows what you know. Provide background information naturally within your text prompt. This is easier in prose than in JSON where you'd need to create artificial "context" fields.
Show, don't just tell. Include 2-3 examples of what you want—or what you don't want. Plain text makes this intuitive:
```text
Write headlines like these:
✓ "The Hidden Cost Nobody Mentions"
✓ "Why Experts Are Changing Their Minds"

Not like these:
✗ "10 Amazing Tips You Won't Believe"
✗ "This One Weird Trick"
```
Even without JSON, create visual hierarchy. Use line breaks, headers, bullet points, and emoji markers (sparingly) to make your prompt scannable. The LlamaIndex framework encourages this approach for document indexing.
Start simple. Add complexity only when needed. One of plain text's advantages is how easy it is to add or remove elements without restructuring everything.
Create a personal library of prompt templates that work for you. Unlike JSON schemas, these are readable enough that you'll actually reference them months later.
Ready to move beyond JSON? Here's how to transition smoothly:
Week 1: Experiment. Take two or three prompts you currently maintain in JSON, rewrite them in plain text or Markdown, and compare the outputs side by side.

Week 2: Build Templates. Turn the versions that worked into reusable text templates for your most common tasks.

Week 3: Optimize. Tighten wording and structure, and check token counts on your highest-volume prompts.

Week 4: Scale. Roll the winning templates out to the rest of your prompts and share them with your team.
The transition isn't all-or-nothing. I still use JSON when integrating with Narrative Science Quill or other systems that expect it. The goal isn't to eliminate JSON entirely—it's to use the right tool for each job.
Here's my prediction: as AI models improve, the distinction between "prompting" and "conversing" will blur further. We're moving toward systems that understand context, remember preferences, and respond to natural language with minimal structure required.
JSON made sense in the early days of AI when models needed rigid guidance. But GPT-4, Claude 3, and Gemini already demonstrate sophisticated understanding of unstructured input. Future models will be even better.
This means the skills that matter most aren't about mastering JSON schemas—they're about clear communication, logical thinking, and understanding how to provide effective context. These are inherently human skills that translate directly into better prompting.
Tools like LangChain are already embracing this shift by offering flexible template systems that work with natural language. The trend is clear: simplicity and clarity are winning over complexity and structure.
As we wrap up, here are the essential tools and platforms that work excellently with non-JSON prompts:
For Content Creation: ChatGPT Plus, Jasper AI, Writesonic, and Copy.ai all thrive on plain-text prompts.

For Development: the OpenAI GPT API, Anthropic Claude API, Google Gemini API, and Co:here AI API all accept flexible text inputs.

For Specialized Tasks: Stable Diffusion WebUI, the DALL·E 3 API, Hugging Face Inference API, and AI21 Studio handle descriptive and lightly structured prompts well.

For Enterprise: Microsoft Azure OpenAI Service and Pinecone-backed retrieval pipelines work cleanly with token-efficient text prompts.
Each of these platforms proves that sophisticated AI applications don't require JSON complexity—they require clarity, purpose, and thoughtful prompt design.
Let me leave you with this: the best prompt format is the one that helps you think clearly and iterate quickly. For most people, most of the time, that's not JSON.
Plain text, Markdown, and YAML offer the perfect blend of structure and flexibility. They're easier to write, simpler to debug, more collaborative, and often more effective. They save tokens, reduce costs, and make prompt engineering accessible to everyone—not just developers comfortable with data structures.
I'm not suggesting you abandon JSON entirely. Use it when it makes sense. But question the default. Challenge the assumption that structure equals quality. In my experience, the prompts that get the best results are the ones that communicate most clearly—and clarity rarely requires curly braces.
So go ahead. Open a text editor. Write what you mean in plain English (or Markdown). You might be surprised how well it works.
What's your experience with different prompt formats? Have you found non-JSON approaches that work particularly well? I'd love to hear what's working for you—drop your insights in the comments below.