The prompts that actually work. Tested against Claude and ChatGPT before publishing. Copy, paste, replace the bracketed placeholders, ship.
Writing & editing
Editor's brutal pass
Get a hands-on editor's edit, not a summary. Useful before publishing anything important.
Use case: Final pass before publishing an article or essay
Recommended: Claude 4.7 Opus or Claude 4.6 Sonnet
You are a senior magazine editor with 20 years of experience cutting flabby prose. Edit the text below. Specifically:
1. Identify and cut every sentence that doesn't earn its place
2. Tighten weak verbs and remove adverbs
3. Flag any claim that needs a source
4. Suggest one stronger lead and one stronger ending
Return the edited version inline, with your reasoning for each cut in [brackets]. Do not summarise. Do not be polite.
Text to edit:
[PASTE HERE]
Tip: Works dramatically better with Claude than with GPT. Claude respects 'do not be polite' instructions; GPT defaults to softening.
Writing & editing
Tone-match rewrite
Rewrite text in someone's specific voice based on samples you provide.
Use case: Ghost-writing in someone else's voice; corporate-comms voice match
Recommended: Claude 4.7 Opus
I am going to provide three samples of writing in a specific voice. Analyse the voice carefully: sentence rhythm, vocabulary, characteristic moves. Then rewrite the target text to match that voice without changing the underlying meaning.
Sample 1:
[PASTE SAMPLE 1]
Sample 2:
[PASTE SAMPLE 2]
Sample 3:
[PASTE SAMPLE 3]
Target text to rewrite:
[PASTE TARGET]
Return the rewrite only. Do not explain.
Writing & editing
Headline generator
Generate 15 headlines for an article, ranked by likelihood of clicks.
Use case: Picking the headline for a blog post or newsletter
Recommended: GPT-5 or Claude 4.6 Sonnet
Generate 15 headlines for the article below. Each headline should:
- Be under 70 characters
- Include a specific number, name, or counter-intuitive claim where possible
- Avoid "Top 10" / "Best of" / clickbait patterns
- Sound like something a senior magazine editor would approve
Rank the 15 from most likely to be clicked (#1) to least (#15). Briefly explain the top 3.
Article:
[PASTE ARTICLE]
Writing & editing
Long-form outline
Produce a publishable-quality outline for a long-form piece.
Use case: Pre-writing planning for a substantive piece
Recommended: Claude 4.6 Sonnet
I want to write a 2,000-word piece on the topic below. Produce a complete outline that includes:
1. A working title and dek (~25 words)
2. The single argument or claim the piece makes
3. 4-6 section headers, each with a one-sentence summary
4. 2-3 specific sources, examples, or data points I should pull in
5. The intended ending
Topic:
[PASTE TOPIC]
Audience: [BUILDERS / EXECUTIVES / GENERAL READERS]
Voice: [ANALYTICAL / OPINIONATED / NEUTRAL]
Coding
Refactor with constraints
Get a refactor that respects your existing style and doesn't introduce new dependencies.
Use case: Cleaning up legacy code without breaking callers
Recommended: Claude 4.7 Opus
Refactor the code below. Constraints:
1. Do NOT introduce new dependencies
2. Match the existing style (look at the imports and the rest of the file)
3. Preserve the public API exactly
4. Explain in 2-3 bullets at the top what changed and why
5. If there's a bug you noticed, fix it but flag it separately
Code:
[PASTE CODE]
Coding
Code review by archetype
Get three different code reviews from three different reviewer archetypes.
Use case: Self-review before pushing a PR; especially useful for solo founders
Recommended: Claude 4.7 Opus
Review the code below from three perspectives:
1. A staff engineer focused on correctness and edge cases
2. A senior engineer focused on readability and team conventions
3. A pragmatic principal who only mentions things worth fighting for
Each perspective should produce a separate review block. End with one ranked list of fixes by priority.
Code:
[PASTE CODE]
Coding
Bug isolation
Pinpoint the source of a bug from an error message and code.
Use case: Stuck on a bug for more than 30 minutes
Recommended: Claude 4.7 Opus or o3
I'm seeing this error:
[PASTE ERROR]
In this code:
[PASTE CODE]
What I expected to happen:
[DESCRIBE EXPECTED]
What actually happens:
[DESCRIBE ACTUAL]
Tell me: (1) the most likely root cause, (2) the second most likely cause, (3) the minimal fix, (4) any other places in the code that might have the same issue.
Coding
Test generation from spec
Generate a complete test suite from a function and its behavioural spec.
Use case: Backfilling tests on legacy code
Recommended: GPT-5 or Claude 4.7 Opus
Generate a complete test suite for the function below using [JEST / VITEST / PYTEST / GO TESTING].
The tests should cover:
1. Happy path
2. Edge cases (empty input, null, undefined, max values, min values)
3. Error paths (wrong type, network failure, etc.)
4. Concurrency or ordering issues if applicable
Each test should have a descriptive name and a one-line comment explaining what it verifies.
Function:
[PASTE FUNCTION]
Spec:
[PASTE BEHAVIOURAL SPEC]
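For reference, a hypothetical sketch of the output shape this prompt aims for. The `slugify` function and its tests are invented for illustration, not from any real codebase; the tests use pytest-style plain asserts:

```python
# Hypothetical illustration of the requested test-suite shape.
# slugify is an invented stand-in function, not a real API.
def slugify(text: str) -> str:
    """Lowercase, trim, and join words with hyphens."""
    if not isinstance(text, str):
        raise TypeError("slugify expects a string")
    return "-".join(text.lower().split())

def test_happy_path():
    # Verifies normal input is lowercased and hyphenated
    assert slugify("Hello World") == "hello-world"

def test_empty_input():
    # Verifies the empty-string edge case returns an empty slug
    assert slugify("") == ""

def test_whitespace_runs():
    # Verifies runs of whitespace collapse to a single hyphen
    assert slugify("  a   b ") == "a-b"

def test_wrong_type_raises():
    # Verifies the error path for non-string input
    try:
        slugify(42)
        raise AssertionError("expected TypeError")
    except TypeError:
        pass
```

Note how each test has a descriptive name plus a one-line comment, and the four tests map onto the happy-path, edge-case, and error-path categories the prompt asks for.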
Research & analysis
Multi-source synthesis
Synthesise findings from multiple sources into a unified view, with disagreements surfaced.
Use case: Research synthesis for an article or decision memo
Recommended: Claude 4.7 Opus or Gemini 3 Pro
I'm researching [TOPIC]. I'm going to paste three sources below. Synthesise them by:
1. Identifying the points where all three agree
2. Identifying the points where they disagree, and on what
3. Noting where any source makes a claim without evidence
4. Producing a single 5-bullet summary of the actual state of knowledge
Source 1: [PASTE]
Source 2: [PASTE]
Source 3: [PASTE]
Research & analysis
Steelman the opposing view
Get the strongest version of an opposing argument, useful before defending your own.
Use case: Stress-testing your own thinking before publishing or pitching
Recommended: Claude 4.7 Opus
I hold this position:
[PASTE YOUR POSITION]
Argue against it, steelmanning the opposing view as strongly as you can. Specifically:
1. Construct the strongest possible counter-argument
2. Identify the assumptions in my position that are most vulnerable
3. List the three best pieces of evidence that someone arguing against me would cite
4. Note where my position is on weakest ground
Do not be polite. Do not hedge. The goal is to find the weakest link before someone else does.
Research & analysis
Paper summariser
Get an executive summary of an academic paper or long technical document.
Use case: Keeping up with literature without reading the full paper
Recommended: Claude 4.7 Opus (handles long PDFs best)
Summarise the paper below for someone who is technically literate but not a specialist in this field. Structure:
1. The core claim, in one sentence
2. The method, in 2-3 sentences
3. The result, in 2-3 sentences
4. Why it matters, in 2 sentences
5. The single biggest limitation
6. Three open questions this paper raises
Paper:
[PASTE OR LINK]
Research & analysis
Compare two products honestly
Compare two products without each vendor's marketing spin.
Use case: Buying-decision research where vendor marketing dominates SERPs
Recommended: Claude 4.7 Opus with web search
Compare [PRODUCT A] and [PRODUCT B] honestly. For each:
1. The use case it's actually best for
2. The use case it's actually worst for
3. What it costs (real numbers)
4. The most common reason users complain about it
5. The kind of buyer who picks it for the wrong reason
Then give me a single decision recommendation based on [MY CONSTRAINTS HERE].
Do not summarise marketing pages. Be specific about real tradeoffs.
Business & ops
Difficult email rewrite
Take a draft of a hard email and make it land better.
Use case: Sensitive emails: layoffs, hard feedback, cold outreach, asks
Recommended: Claude 4.6 Sonnet or GPT-5
Below is the email I want to send. Make it better. Specifically:
1. Tighten it - cut every word that's not earning its place
2. Reorganise so the most important thing is in the first two sentences
3. Sound human, not corporate
4. If anything is unclear or could be misread, flag it before sending
Context: [WHO IS THIS GOING TO, WHAT'S OUR RELATIONSHIP, WHAT'S THE GOAL]
My draft:
[PASTE DRAFT]
Business & ops
Meeting prep brief
Generate a one-page brief before an important meeting.
Use case: High-stakes one-on-ones, investor pitches, hard conversations
Recommended: Claude 4.7 Opus or GPT-5
I have a meeting with [NAME] tomorrow about [TOPIC]. Help me prep.
Context I know:
- [BACKGROUND POINT 1]
- [BACKGROUND POINT 2]
- [BACKGROUND POINT 3]
What I want from the meeting:
[PRIMARY OUTCOME]
Produce a brief that includes:
1. The three points I most need to make
2. The three questions I should ask them
3. The two objections they're most likely to raise, and how I should respond
4. What success looks like vs. what an OK meeting looks like vs. failure
5. The single trap I should avoid
Business & ops
Strategic decision framework
Structure a complicated decision into a decision-ready framework.
Use case: Hiring, firing, pivoting, fundraising decisions
Recommended: Claude 4.7 Opus
I'm trying to decide [THE DECISION]. The options are:
A: [OPTION A]
B: [OPTION B]
C: [OPTION C if relevant]
Structure this decision for me:
1. What is the actual question being asked here (re-frame if mine is wrong)
2. The 4-5 criteria that should drive the decision, weighted
3. How each option scores on each criterion (1-5)
4. The single biggest unknown that, if resolved, would change the answer
5. Your recommendation and confidence level
Be specific. Avoid corporate consulting language.
Learning & growth
Feynman-style explainer
Get a concept explained in three layers of progressively increasing depth.
Use case: Learning anything you want to actually retain
Recommended: Claude 4.7 Opus or GPT-5
Explain [CONCEPT] to me in three layers:
Layer 1: Explain it to a smart 12-year-old. Maximum 80 words. No jargon.
Layer 2: Explain it to someone with general university background. 200 words. Acceptable jargon, defined inline.
Layer 3: Explain it the way an expert would explain it to another expert. 300 words. Use technical terms freely.
At the end, list the single insight that someone who only learned Layer 1 would miss.
Learning & growth
Drill me on this topic
Get an interactive quiz that adapts to your weakness areas.
Use case: Studying for an exam, interview prep, skill maintenance
Recommended: Claude 4.6 Sonnet or GPT-5
I want to drill myself on [TOPIC]. Ask me one question at a time. After each answer:
1. Tell me if I'm right
2. If wrong, explain why and the correct answer
3. Adjust the difficulty based on my last 3 answers
4. After 10 questions, give me a summary of where I'm strong and where I'm weak
My current level: [BEGINNER / INTERMEDIATE / ADVANCED]
Specific subtopics I want to focus on: [LIST]
Learning & growth
Skill-curve plan
Get a structured 8-week plan to improve at a specific skill.
Use case: Deliberate skill development; getting from 'OK' to 'good'
Recommended: Claude 4.7 Opus
I want to get noticeably better at [SKILL] in the next 8 weeks. My current level: [SELF-ASSESSMENT]. My availability: [HOURS PER WEEK].
Build me an 8-week plan that:
1. Identifies the 3 sub-skills that matter most
2. Allocates each week to specific sub-skills with measurable outputs
3. Includes one challenge or test at the end of weeks 2, 4, 6, and 8
4. Names the single highest-leverage habit I should build alongside the plan
5. Tells me what good looks like by the end
Be specific. No vague "practice daily" suggestions.
Every prompt is tested against Claude and ChatGPT before publishing. Replace bracketed [PLACEHOLDERS] with your specific input. If a prompt doesn't work as expected on the model you're using, try the recommended model — these prompts are tuned for specific model strengths.
How to use this library
Find a prompt that matches your goal. Click Copy prompt. Paste into your AI of choice. Replace every bracketed [PLACEHOLDER] with your actual input. Hit send.
If the output isn't quite what you expected, try the recommended model. The biggest source of disappointment with published prompts is using them on a model they weren't tuned for.
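If your team reuses these templates programmatically rather than pasting by hand, placeholder substitution is a one-liner. A minimal sketch (the template string and placeholder names here are illustrative, not tied to any specific prompt above):

```python
import re

def fill(template: str, values: dict) -> str:
    """Replace every [PLACEHOLDER] with its value; fail loudly on gaps."""
    def sub(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"missing placeholder: {key}")
        return values[key]
    # Placeholders in this library are ALL-CAPS tokens,
    # sometimes containing spaces, slashes, or hyphens
    return re.sub(r"\[([A-Z0-9][A-Z0-9 /&'.-]*)\]", sub, template)

prompt = fill("Explain [CONCEPT] to me in three layers.", {"CONCEPT": "entropy"})
```

Failing loudly on a missing key is deliberate: a half-filled prompt with a stray `[PLACEHOLDER]` is the most common way templated prompts silently degrade.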
Frequently asked questions
Are these prompts safe to copy and use as-is?
Yes. Every prompt has been tested in production against at least Claude and ChatGPT. Replace the bracketed [PLACEHOLDERS] with your specific input. Most prompts include a 'Recommended model' note - prompts tuned for Claude's instruction-following often need rewording for GPT, and vice versa.
What makes a prompt good vs bad?
Three qualities: (1) it tells the model what role to play and what tone to use, (2) it specifies the output format precisely, (3) it includes negative instructions ('do not summarise', 'do not be polite') that constrain common failure modes. Generic prompts like 'help me with X' produce generic outputs.
Why does the same prompt give different results in Claude vs ChatGPT?
The two models have different default behaviours. Claude respects negative instructions ('do not soften this') more reliably; GPT often softens anyway. GPT is more eager to ask clarifying questions; Claude is more eager to commit to an answer. Pick the model whose default behaviour aligns with what you actually want.
Can I share these prompts with my team?
Yes. Everything in this library is licensed under Creative Commons Attribution. Use them freely. If you build a great new prompt and want to contribute, email faizan@cubitrek.com.
How often is this library updated?
We add 3-5 new prompts per month. The library is versioned in our repository, so every addition is publicly logged.