Workflow
May 2026 · 13 min read

How to Write LinkedIn Posts with ChatGPT in 2026: The 6-Step Workflow That Actually Works

Six steps from idea capture to published post, with worked examples, the time each step takes, and the failure modes at each stage. Built from 30+ voice system builds shipped to coaches, consultants, and B2B founders since early 2026 — what actually works at sustainable cadence rather than what looks good in a one-off demo.

Six steps. Capture the idea (your job, not ChatGPT's). Pick the hook formula. Generate the draft using a voice prompt loaded into a Custom GPT. Edit against your voice. Run the 12-point audit. Publish. Total time per post once the voice prompt is built: 17-26 minutes. Without the voice prompt, the same post takes 45-60 minutes because most time goes into rewriting AI output that does not match voice. The infrastructure investment is the gate.

What this workflow assumes

The workflow below assumes you have already built the voice infrastructure. If you have not, the per-post time numbers do not hold; you will spend most of each post rewriting AI output rather than editing it. Three preconditions: a voice prompt built from your existing writing samples; that prompt loaded into a Custom GPT or Claude Project; and the hook formula library plus the 12-point audit checklist to hand.
If those three are in place, the six-step workflow below produces voice-matched posts at sustainable cadence. If not, fix the infrastructure first; do not try to compensate at the per-post level.

Step 1 (2-3 min)

Capture the idea — your job, not ChatGPT's

INPUT: A SPECIFIC OBSERVATION, STORY, OR INSIGHT · OUTPUT: 2-3 SENTENCES

Write down the specific thing you want to say. Not a topic ("AI for marketing"); a specific point ("Most ChatGPT users skip the voice prompt because they think prompts are the asset; the prompt is delivery, the voice prompt is the asset"). The specificity is the post's spine. Without it, ChatGPT cannot produce useful output regardless of how clever the prompt is.

Sources for ideas: a thing you said in a meeting today that landed; a pattern you noticed across three client conversations this week; a contrarian observation about your industry; a specific failure you learned from; a question prospects keep asking that nobody answers in writing.

Example idea capture: "A founder I worked with last quarter was paying £4k/month for a LinkedIn ghostwriter, getting 60 percent voice match at month 4, and rewriting most posts before publishing. He had no voice prompt. I built one in 90 minutes from his existing samples. Voice match jumped to 80 percent on first draft. He kept the ghostwriter for a month, then dropped them. Saved £36k/year. Voice infrastructure was the missing layer, not the writer."

Failure mode at this step: asking ChatGPT for ideas. Default ChatGPT produces generic angles because it cannot know what is specifically interesting to your audience. The post reads as commodity because the input was commodity prompting.

Step 2 (1-2 min)

Pick the hook formula

INPUT: YOUR IDEA + WEEKLY VARIATION · OUTPUT: ONE OF 12 TESTED FORMULAS

Match the idea to a hook formula. Not every idea fits every formula; the wrong pairing produces awkward output. Twelve tested formulas covered in best LinkedIn hook formulas: specific number, pain-then-pivot, named scenario, contrarian observation, three-things compression, before-and-after, lesson-from-failure, question-as-claim, data-point, reframe, still-see-this, compressed admission.

Quick mapping: tactical ideas suit three-things or data-point; story ideas suit named scenario or lesson-from-failure; opinion ideas suit contrarian observation or reframe; reflective ideas suit before-and-after or compressed admission.

Variation rule: do not use the same formula in consecutive posts. Three posts in a row using "Most people think X..." reads as templated even if voice match per post is strong. Rotate 4-7 formulas across the week.

For the example idea: the named-scenario formula fits because the idea anchors on a specific founder situation. Pattern: "[Specific role] + [specific problem] + [number or detail] + [setup line]."

Failure mode at this step: defaulting to whichever formula you used last time. Familiar patterns produce structural sameness across drafts that audiences pattern-match within days.
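The variation rule is mechanical enough to sketch in code. This is an illustrative rotation check, not part of the workflow tooling; the formula names come from Step 2, and the `avoid_last` window is an assumption (set it to 1 to enforce "no repeats in consecutive posts"):

```python
import random

# The 12 tested hook formulas from Step 2, as named in the text.
FORMULAS = [
    "specific number", "pain-then-pivot", "named scenario",
    "contrarian observation", "three-things compression", "before-and-after",
    "lesson-from-failure", "question-as-claim", "data-point",
    "reframe", "still-see-this", "compressed admission",
]

def pick_formula(history, pool=FORMULAS, avoid_last=1):
    """Pick a formula, excluding the most recently used ones
    so the same formula never appears in consecutive posts."""
    recent = set(history[-avoid_last:]) if avoid_last else set()
    candidates = [f for f in pool if f not in recent] or list(pool)
    return random.choice(candidates)
```

Widening `avoid_last` to 2 or 3 approximates the "rotate 4-7 formulas across the week" guidance rather than the strict consecutive-post rule.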

Step 3 (1-2 min)

Generate the first draft

INPUT: IDEA + FORMULA + VOICE PROMPT (LOADED) · OUTPUT: 180-220 WORD DRAFT

Open your Custom GPT or Claude Project (voice prompt already loaded). Use the post-drafting conversation starter or paste the task prompt. Include the idea from Step 1 and reference the formula from Step 2.

Example task prompt: "Write a LinkedIn post in my voice based on this idea: [paste idea from Step 1]. Use the named-scenario formula: open with the specific role plus situation plus a number, two-three sentences of setup, one-sentence turning point, one-sentence implication for reader, one-line closing. Word count: 180-220. No hashtags. No 'comment below'. Vary sentence length deliberately."

The AI returns a draft. Time elapsed: 90 seconds for the AI to produce, 30 seconds for you to read.

Failure mode at this step: generating from default ChatGPT instead of the Custom GPT. Voice match collapses without the voice prompt loaded. The output looks like everyone else's AI content because that is what default ChatGPT produces.

Step 4 (10-15 min)

Edit the draft against your voice

INPUT: AI DRAFT · OUTPUT: VOICE-MATCHED, TIGHTENED DRAFT

Three editing passes that compound:

Pass 1 (read aloud). Replace any phrases that do not sound like you. The voice prompt produces 70-85 percent voice match on first draft; this pass brings it to 90+ percent. Common edits: replace AI-default words that slipped past the prompt's banned list; rephrase any sentences that read as generic LinkedIn rather than your specific voice.

Pass 2 (sharpen openings and closings). The first 8-12 words decide whether anyone reads beyond the fold. The last sentence is the most-remembered line. Tighten both. The body can carry slightly looser writing; the bookends cannot.

Pass 3 (cut filler). Remove adverbs that do not earn their place. Replace hedging language ("perhaps", "might", "in some cases") where uncertainty is not real. Cut transition phrases that add words without adding meaning. The post should be tighter than the AI's draft, not longer.
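Part of Pass 3 can be pre-screened mechanically. A minimal sketch, assuming an illustrative flag list; in practice the banned list from your voice prompt would drive it, and the output is a review aid, not an auto-edit:

```python
import re

# Illustrative hedging/filler phrases; substitute your voice prompt's banned list.
FLAG_PHRASES = {"perhaps", "might", "in some cases", "arguably", "needless to say"}

def flag_filler(draft, phrases=FLAG_PHRASES):
    """Return any flagged phrases found in the draft, sorted, for manual review."""
    lower = draft.lower()
    return sorted(p for p in phrases if re.search(r"\b" + re.escape(p) + r"\b", lower))
```

The word-boundary match avoids false positives inside longer words ("mighty" does not trigger "might").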

Failure mode at this step: shipping the AI draft without editing. The 70-85 percent voice match is good enough for first draft, not good enough for publishing. The remaining 15-30 percent is what audiences identify as off-voice content.

Step 5 (2-3 min)

Run the 12-point audit

INPUT: EDITED DRAFT · OUTPUT: PUBLISHED OR REWORKED DRAFT

Run the 12-point audit against the edited draft. Twelve checks across voice match, default vocabulary, structural sameness, point of view, and finish. Each check is pass or fail; the total out of 12 is the audit score.

Score above 80 percent (10-12 / 12): ship.

Score 60-80 percent (8-9 / 12): edit specific failing sections. Identify which checks failed and address them directly rather than regenerating the whole post.

Score below 60 percent (7 / 12 or fewer): the voice prompt itself is producing drift. Rewrite the post manually for this instance and tighten the voice prompt sections that failed before generating the next post.
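The triage logic is a three-band threshold. A minimal sketch keyed to the percentage bands (10/12 ≈ 83 percent clears 80; 8/12 ≈ 67 percent falls in the 60-80 band; 7/12 ≈ 58 percent falls below 60):

```python
def audit_action(passed, total=12):
    """Map a pass/fail 12-point audit score to the next action."""
    score = passed / total
    if score > 0.8:
        return "ship"
    if score >= 0.6:
        return "edit failing sections"
    return "rewrite manually; tighten voice prompt"
```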

Failure mode at this step: skipping the audit because you trust your editing pass. Editing fatigue is real; the audit catches drift you no longer see in your own draft. Two minutes is the right investment to prevent shipping off-voice content.

Step 6 (1 min + monitoring)

Publish and monitor

INPUT: AUDITED DRAFT · OUTPUT: PUBLISHED POST + LEARNING DATA

Schedule the post or publish directly. Track engagement at 24 hours and 72 hours. Note which formulas perform best with your audience over 4-6 weeks of consistent posting. Patterns emerge: some audiences engage more with contrarian observations, others with named-scenario stories.

The learning data feeds back into Step 2 (formula selection) over time. After 4-6 weeks you stop guessing which formulas work; you know.

Failure mode at this step: not tracking what works. Without engagement data, the formula rotation stays generic; with data, it calibrates to your specific audience.
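The formula-level tracking that feeds Step 2 can be as simple as a tally. An illustrative sketch (the function names and the 24-hour metric choice are assumptions, not a prescribed tool):

```python
from collections import defaultdict
from statistics import mean

engagement = defaultdict(list)  # hook formula -> 24-hour engagement counts

def record(formula, count_24h):
    """Log one post's 24-hour engagement under its hook formula."""
    engagement[formula].append(count_24h)

def ranked_formulas():
    """Formulas ranked by average engagement, best first."""
    return sorted(engagement, key=lambda f: mean(engagement[f]), reverse=True)
```

After 4-6 weeks of entries, the ranking replaces guesswork in the weekly formula rotation.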

The total time accounting

Step-by-step time allocation for one 200-word LinkedIn post once voice infrastructure is built:

Step 1, idea capture: 2-3 min. Step 2, hook formula selection: 1-2 min. Step 3, draft generation: 1-2 min. Step 4, editing: 10-15 min. Step 5, 12-point audit: 2-3 min. Step 6, publish: 1 min.

Total: 17-26 minutes per post.

Across 5 posts per week: 85-130 minutes. Across 5 posts per week × 52 weeks: 74-113 hours per year. Compared to 45-60 minutes per post without voice infrastructure (195-260 hours per year for the same volume), the infrastructure saves roughly 121-147 hours annually.

For a solopreneur billing £100/hour, the annual time saving is £12,100-14,700 in opportunity cost, more than an order of magnitude above the £497-997 voice prompt build cost.
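The accounting above reduces to a one-line formula. A quick check, using the per-post times and posting volume stated in the text:

```python
def annual_hours(mins_per_post, posts_per_week=5, weeks=52):
    """Yearly writing time in hours at a given per-post time."""
    return mins_per_post * posts_per_week * weeks / 60

with_lo, with_hi = annual_hours(17), annual_hours(26)        # ~74 and ~113 hours
without_lo, without_hi = annual_hours(45), annual_hours(60)  # 195 and 260 hours
saving_lo = without_lo - with_lo                             # ~121 hours
saving_hi = without_hi - with_hi                             # ~147 hours
```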

Where the workflow breaks down

Three failure patterns observed across the 30+ voice builds shipped:

1. Skipping Step 1 (idea capture). Users rush to generation and ask ChatGPT for ideas. The output reads as commodity content. The fix is making Step 1 non-negotiable: no draft generation without a 2-3 sentence captured idea.

2. Compressing Step 4 (editing). Users budget 15 minutes for editing and ship at 5 minutes when busy. The unedited drafts compound voice drift across the week. The fix is structural: schedule a 30-minute writing session per post, not a 15-minute one. Buffer ensures the editing pass actually happens.

3. Treating the workflow as linear when it isn't. Some posts need to loop back. A draft that fails the audit at Step 5 should trigger a return to Step 4 or even Step 3, not a forced ship. Loops are part of the workflow, not failures of it.

The variant for long-form content (newsletters, articles)

The six steps adapt to long-form content. The workflow shape is the same: idea capture, formula or structure selection, draft generation, editing, audit, publish. Only the per-step times change: drafting and editing scale up with word count.

The variant for repurposing

When the source material exists (a podcast, blog post, internal memo, webinar), Step 1 (idea capture) is replaced by source extraction:

  1. Paste the source into ChatGPT and request 5-10 distinct angles, one paragraph each.
  2. Pick 3-5 of the strongest angles.
  3. Run Steps 2-6 on each angle as if it were a fresh idea.

Total time for 5 LinkedIn posts from one podcast episode: 2-3 hours including the source extraction. Detail in repurpose a podcast into 30 LinkedIn posts.

The honest limits

Three things this workflow does not solve: it does not supply the ideas (Step 1 stays your job; no workflow fixes commodity input); it does not rescue a weak voice prompt (audit scores below 60 percent mean the prompt itself needs rework, not more per-post editing); and it does not guarantee engagement (formula calibration takes 4-6 weeks of tracked posting before the data means anything).

The voice infrastructure that makes this workflow run

DFY Voice System ships the voice prompt, Custom GPT, Claude Project, hook library and 5 sample posts in 2-3 working days. £497 founder pricing. Once the infrastructure is in place, the workflow above produces voice-matched posts in 17-26 minutes each.

See The Voice Build

Frequently Asked Questions

What is the actual workflow for writing LinkedIn posts with ChatGPT?

Six steps: idea capture, hook formula selection, draft generation with voice prompt, editing, 12-point audit, publish. Total per post: 17-26 minutes once voice prompt is built.

How long does it take to write a LinkedIn post with ChatGPT?

17-26 minutes per post with voice infrastructure. 45-60 minutes without it because most time goes into rewriting AI output that does not match voice.

Should I let ChatGPT come up with my LinkedIn post ideas?

No. ChatGPT executes on existing thinking; it cannot produce useful new ideas about your specific domain. You supply the idea; the AI handles drafting.

What's the best ChatGPT prompt for LinkedIn posts?

There is no single best prompt. The strongest output comes from layering a voice prompt with task prompts. Pasting clever prompts into default ChatGPT without voice infrastructure produces generic output.

Can I write LinkedIn posts in ChatGPT free version?

Yes for occasional use, with friction. Custom GPTs require Plus, so free-tier users paste the voice prompt every conversation. £20/month Plus pays back inside the first week of weekly cadence.

How do I edit a ChatGPT LinkedIn post draft?

Three passes: read aloud and replace off-voice phrases, sharpen openings and closings, cut filler. Total: 10-15 minutes per 200-word post. Run the 12-point audit as the final gate.