Two layers people confuse constantly. ChatGPT's Custom Instructions handle who you are and how you want responses formatted. A voice prompt handles how you write. They sit in different parts of the system and produce different output. Here's the clear separation, when to use each, and how to layer them.
Different layers of the same system. Custom Instructions captures user context (who you are, response preferences) in two short ChatGPT fields. A voice prompt captures writing style (voice essence, mechanical rules, banned words, signature moves) in 500-800 words. They are not interchangeable. Most serious users have both. Custom Instructions affects every ChatGPT conversation by default. Voice prompts go inside Custom GPTs, Claude Projects, or pasted at conversation start.
Custom Instructions is a ChatGPT feature, accessed via the user menu, with two text fields:

1. "What would you like ChatGPT to know about you to provide better responses?"
2. "How would you like ChatGPT to respond?"
Once filled in, the contents apply to every ChatGPT conversation by default. Users can opt out per conversation but rarely do. The fields are general-purpose: role, audience, response length preferences, formatting choices, recurring topics.
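A hedged illustration of a filled-in pair, built from the examples used later in this piece (every detail is invented for demonstration):

```
What would you like ChatGPT to know about you?
B2B SaaS founder. I write for technical buyers.
Recurring topics: onboarding, pricing, churn.

How would you like ChatGPT to respond?
UK English. No headers unless I ask for them. Skip the
disclaimers. Never use: leverage, delve, "in this
fast-paced world".
```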
A voice prompt is a 500-800 word document, built from analysing 10-20 samples of the writer's existing content. It contains five sections (the canonical structure used at Syxo): voice essence, mechanical rules, tone by context, banned words, and signature moves.
How to build a voice prompt covers the construction process. How to reverse engineer your own voice covers the discovery phase that should precede construction.
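A skeleton of the five-section structure (the section names come from this piece; the one-line glosses are illustrative, not the canonical definitions):

```
1. Voice essence — what the voice sounds like, in a few sentences
2. Mechanical rules — sentence length, punctuation, paragraph habits
3. Tone by context — how the voice shifts: LinkedIn vs sales page vs email
4. Banned words — words and phrases that never appear
5. Signature moves — recurring structures the writer reaches for
```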
| Dimension | Custom Instructions | Voice Prompt |
|---|---|---|
| Scope | User context + response preferences | Writing style (voice patterns) |
| Length | ~3,000 characters total (two fields) | 500-800 words (~3,000-5,000 characters) |
| Built from | User describing themselves | Analysis of existing writing samples |
| Specificity | General descriptors | Structural patterns with examples |
| Application | Every ChatGPT conversation by default | Where you place it (Custom GPT, Claude Project, conversation start) |
| Tool availability | ChatGPT only (similar features in Claude/Gemini) | Plain text, runs anywhere |
| Iteration cycle | Revised whenever needs change | 12-18 month refresh cycle |
| Replaces | Pasting "I am a X who writes for Y" each time | Pasting your writing samples each time |
Three things Custom Instructions is genuinely good at:
1. User context that doesn't change. Your role, your audience, the topics you typically work on. ChatGPT no longer needs to be told you are a B2B SaaS founder writing for technical buyers. The context is permanent.
2. Response formatting preferences. "Don't use headers unless I ask for them. Skip the disclaimers. Default to UK English." These instructions reduce friction across hundreds of small tasks per week.
3. Default-mode banned words. A short list of words you never want to see in any ChatGPT response, regardless of task. "Leverage", "delve", "in this fast-paced world". The list compresses well into the response-preferences field.
Three structural limits — things Custom Instructions cannot do:
1. Capture voice depth. ~1,500 characters per field is too short to encode mechanical rules, signature moves, and tone-by-context shifts at the resolution a voice prompt requires. The format is also wrong: free-text descriptors, not structured patterns. The output is "follows generic best practices" rather than "writes like this specific person".
2. Apply outside ChatGPT. The feature is ChatGPT-specific. Equivalents exist in Claude (custom-style instructions) and Gemini (similar setting), but each tool has its own. You manage three separate sets if you use three tools.
3. Differentiate by context. Custom Instructions applies to every conversation in the same way. A voice prompt's tone-by-context section explicitly shifts behaviour by content type (LinkedIn vs sales page vs email). Custom Instructions does not.
Three things a voice prompt is built for:
1. Voice fidelity. Output that sounds like the specific writer. The 12-point AI content audit measures this; voice-prompt-driven content typically scores 80%+, while content driven by Custom Instructions alone typically scores 50-60%.
2. Cross-tool portability. The voice prompt is plain text. Paste into ChatGPT Custom GPT, Claude Project, Pressmaster, Copy.ai. Same asset, multiple deployments.
3. Structural rigour. Five sections force clarity. The mechanical rules and signature moves cannot survive vagueness. The output of building one is a more accurate self-understanding than free-text Custom Instructions usually produces.
The two layers complement each other. A working setup looks like this:
In Custom Instructions: role, audience, recurring topics, response formatting preferences, and the default banned-word list (~800-1,200 characters).
In a Custom GPT instructions field: the full 500-800 word voice prompt.
Custom Instructions handles the user context. The Custom GPT (with the voice prompt loaded) handles voice-critical writing tasks. ChatGPT applies both: the Custom Instructions context plus the voice prompt structure. Output reflects both layers.
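A sketch of how the two layers might read side by side (all content hypothetical):

```
Custom Instructions (account level, applies everywhere):
B2B SaaS founder writing for technical buyers. UK English.
No headers unless I ask.

Custom GPT instructions (voice layer, applies in this GPT):
[The full 500-800 word voice prompt: voice essence,
mechanical rules, tone by context, banned words,
signature moves.]
```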
"Should I just put my voice prompt in Custom Instructions?" No. The character limit truncates it. The fields are also user-facing rather than instruction-facing, which changes how the model interprets them. Custom GPT instructions or Claude Project instructions are the right home for a voice prompt.
"Do Custom Instructions and Custom GPT instructions stack?" Yes. When using a Custom GPT, both apply. The Custom Instructions sets your user context; the Custom GPT instructions sets the GPT-specific behaviour and voice. The combination is more powerful than either alone.
"What about Claude's equivalent?" Claude has style and project instructions. The same logical separation applies: account-level user context vs project-level voice. The voice prompt goes in the project instructions; the user context goes in account-level style preferences.
"Is a voice prompt just a long Custom Instructions?" No. The structure is fundamentally different. Custom Instructions is descriptive text. A voice prompt is structural rules with examples. The former describes you. The latter encodes how you write. The output difference shows up immediately on the first generation.
If you are starting from zero, the setup order is:
1. Fill in Custom Instructions: role, audience, response formatting preferences, default banned words.
2. Gather 10-20 samples of your existing writing and build the 500-800 word voice prompt from them.
3. Load the voice prompt into a Custom GPT or Claude Project, or paste it at conversation start.
Total time: 2-4 hours if you DIY, 2-3 working days if you use the DFY Voice System.
Three layers, in order from broad to specific:
1. Custom Instructions — user context, applied to every conversation by default.
2. Voice prompt — writing style, applied where you place it.
3. Task prompt — the specific request, typed per conversation.
The three layers compose. Each adds context the previous layer did not have. Output reflects all three. Most generic AI output happens because users ship task prompts on default models with no voice or user context — the AI defaults take over by absence of instruction.
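As a concrete (hypothetical) stack, a single generation might be shaped by:

```
Layer 1 — Custom Instructions: "B2B SaaS founder, technical
buyers, UK English, no headers."
Layer 2 — Voice prompt, loaded in the Custom GPT: 500-800
words of voice rules.
Layer 3 — Task prompt: "Write a 150-word LinkedIn post
announcing the pricing change."
```

Remove any one layer and the defaults for that layer take over.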
DFY Voice System builds the writing-style layer (voice prompt, Custom GPT, Claude Project, hook library) so it pairs cleanly with whatever Custom Instructions you already have set. £497 founder pricing. Delivered in 2-3 working days. The Voice Build methodology, applied to your existing writing.
See The Voice Build.

"What is the difference in one sentence?" Voice prompt = writing style asset, 500-800 words, structural patterns. Custom Instructions = ChatGPT feature with user context and response preferences in two short fields.
"Can Custom Instructions carry content production on its own?" Not for serious content production. Character limit too short, format too descriptive. Output typically audits at 50-60% versus 80%+ for voice-prompt-driven content.
"Where does a voice prompt live?" Inside a Custom GPT's instructions field, inside a Claude Project's instructions, or pasted as the first message of a conversation.
"Do I need a voice prompt if I already have Custom Instructions?" Yes. Different layer. Custom Instructions handles user context and response formatting. Voice prompt handles writing style. Most serious users have both.
"What goes in Custom Instructions?" Role, audience, recurring topics, response formatting preferences, default-mode banned words. ~800-1,200 characters total.
"Do the two conflict?" No. Custom Instructions applies by default to every conversation. Voice prompts apply where placed. Both layer together with no conflict.