Good deep research prompting combines expertise, detail, structure, anticipation, and the potential for real work. These prompts can get very long. Here is a short example and a template for research, followed by some much longer examples. There are shorter examples on the page about search and reasoning.
Example of a Deep Research Prompt
- Analyze all of the new federal and state laws, executive orders and regulations around diversity, equity and inclusion (DEI) and provide a summary of the most important changes that might affect my unit/organization at [name organization]. Analyze all of our web pages, courses, policies and practices and create a report that lists all of the potential specific problems we might face and suggest how we might remedy them. Focus on the most volatile issues that might put us in the public spotlight or risk government funding. Create an infographic based on this report that will help our staff make sure we are in compliance with all new regulations. Use the X institution/university style guide and colors to make this infographic.
DEEP RESEARCH TEMPLATE 1
- Create a research report that will illuminate/examine/explore X. Make sure to examine the questions A, B, and C and include an analysis of D & E. You should begin with a critical review of literature/practice/web and then provide a synthesis of the key ideas/controversies/concepts/case studies and a recommendation.
- Sources & Scope: The research should
- Draw from fields F & G,
- Methodology H
- Focus on peer-reviewed journal articles/best practices/reputable studies/institutional sources.
- Look for sector/Western/political/educational/gender bias in sources
- Seek global sources in language/culture I.
- Purpose & Framework:
- Use K as a framework for understanding these issues.
- Focus on real-world applications and capabilities.
- Pay special attention to policy implications and government uses.
- Note any potential for L.
- Audience:
- Write for an audience of M/for journal N or submission to conference O.
- Describe your findings with relevance to P.
DEEP RESEARCH TEMPLATE 2
- An iterative sequence can sometimes work better than one long prompt. Pasting each prompt in twice (before you hit return) also seems to improve results.
- Literature Search [You can often use the prompt above or one of the API tools below to do this within the Semantic Scholar database of published academic papers. Providing the actual papers (or links) improves the quality of what comes next.]
- Map the Landscape
- Organize this list of papers. Group them into clusters of shared assumptions, claims, methodologies and/or data sets. Create a table. List the papers in column 1 and then list the core claim (in 50 words or fewer) in column 2. In column 3 list the key methodology or assumption that guides this paper. In column 4 list all of the ideas in that paper that are contradicted by other papers and cite those papers.
- Big Idea Lineage
- List the central claims and/or the most contentious issues or methods in this literature. First create a table that includes: (a) by whom and in what paper the idea was introduced, (b) who the primary challengers are/were, (c) a summary of the positions on either side, (d) why they disagree, and (e) whether there is now any consensus. Then also create a structured knowledge map or a family tree of this literature that shows how these ideas have interacted.
- Mine the Gaps
- Based on all of these papers and this analysis, identify 10 big research questions that are still unanswered. Describe the gaps and why they exist. Cite the papers that have come closest. What assumptions do most of these papers share but do not explicitly justify or test? State these assumptions and, for each, cite a few of the important papers that rely on it most. What useful data or method is most underused?
- Summarize
- Briefly summarize in less than 500 words what the field believes collectively, what is proven beyond a reasonable doubt, what remains contested and what is the single most important unanswered question. What would happen to the field if its most important assumption turned out to be wrong?
- Getting Started
- Explain all of this in 300 words to a non-expert without jargon. Summarize what is known, what is unknown and where this matters in the real world. List the 3 most important papers I should read first to get a grip on this field.
Graham Duncan’s ‘What’s going on here’ Talent Evaluation Report
- You are a seasoned talent evaluator applying Graham Duncan’s ‘What’s going on here, with this human?’ lens. Using only the public information in the profile below, produce a concise, high-fidelity brief (cynical if truthful) that opens with a TL;DR and then covers:
- 0. TL;DR (2-sentence max) – crisp headline insight about the person.
- Game being played – the overarching, possibly infinite objective they appear to pursue.
- Rider vs. Elephant – hypotheses about their conscious narrative (rider) and core drives/compulsions (elephant).
- OCEAN Big Five snapshot – Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism (high/low + one-line evidence each).
- MBTI type guess – likely four-letter code with a one-sentence rationale.
- Enneagram type guess – dominant type (+ wing, if evident) with rationale.
- Signature strengths → shadow weaknesses – how each strength could invert under stress.
- Ecosystem fit (‘water’) – contexts where their genius compounds vs. stalls.
- Ten-adjective reference guess – how close observers might describe them.
- Letter-grade dashboard (strict, no inflation) – assign an A–F with one-sentence justification for each:
- IQ
- EQ
- Judgment quality
- Self-awareness
- Integrity / trustworthiness
- Collaborative ability
- Ambition
- Influence
- Key questions to ask next – 3–5 questions that would most quickly confirm or falsify your hypotheses.
- Optimal seat – the role or environment likely to unlock the most leverage for them (and for a team).
- Embrace negative capability—hold multiple plausible readings at once, note your own biases, and state confidence levels.
- Bullet format, ~450 words total.
Competitive Analysis Report Prompt
- COPY the entire prompt below into a new document. FILL IN the bracketed fields in Section 1 with your information and REMOVE what you don’t need. CUSTOMIZE as needed (e.g., add regulatory coverage or request a lighter-weight analysis). PASTE into the best AI model you can.
- You are a senior competitive intelligence analyst with deep expertise in market research, strategic analysis, and business strategy. You will conduct a rigorous, comprehensive competitive analysis. Your work must be evidence-based, globally contextualized, and free of confirmation bias. You must actively surface assumptions, flag uncertainty, and distinguish verified data from inference.
SECTION 1: CONTEXT INTAKE (Fill These In)
Our Product/Company:
Company name: [___]
Product/service name: [___]
One-sentence value proposition: [___]
Industry/vertical: [___]
Target market(s) and geographies: [___]
Current stage (pre-revenue / growth / mature / declining): [___]
Approximate revenue range or ARR (if comfortable sharing): [___]
Current pricing model and price points: [___]
Primary distribution/sales channels: [___]
Key differentiators we believe we have: [___]
Competitors to Analyze:
Direct competitors (same product category, same customers): [List 3–5]
Indirect competitors (different approach, same customer problem): [List 2–3]
Aspirational competitors (where we want to be in 3–5 years): [List 1–2]
Emerging/stealth competitors we’re worried about: [List any]
Strategic Context:
What decision is this analysis informing? (e.g., fundraise, product roadmap, pricing change, market entry, M&A, repositioning): [___]
Time horizon for the decision: [___]
What is your biggest strategic worry right now? [___]
What assumptions about your competitive position do you want stress-tested? [___]
SECTION 2: CLARIFY DELIVERABLES
Before beginning research, confirm which of the following deliverables the user wants. Default to ALL unless told otherwise:
Executive Summary — 1-page strategic overview with the single most important insight and recommended action
Competitor Profiles — Deep dossier on each competitor (see research dimensions below)
Feature/Capability Comparison Matrix — Side-by-side table across all competitors
SWOT Analysis — For our company, informed by the competitive landscape
Positioning Map — 2×2 perceptual map on the two most strategically relevant axes
Pricing & Packaging Comparison — Detailed breakdown of monetization strategies
Customer Voice Analysis — Synthesis of reviews, complaints, praise, and unmet needs
Threat Assessment & Early Warning Signals — What to watch and when to worry
Strategic Recommendations — Prioritized actions with effort/impact scoring
Assumptions & Confidence Log — Explicit register of every assumption made, evidence quality, and confidence level (1–5) for each major claim
Ask the user: “Which deliverables do you want? Do you want me to add any custom deliverables? What format — structured report, slide-ready bullets, or executive brief?”
SECTION 3: RESEARCH DIMENSIONS
For EACH competitor, systematically research and analyze the following. Use web search aggressively. Cross-reference multiple sources. Prioritize primary sources (company websites, SEC filings, press releases, patent filings) over secondary commentary. Flag when data is unavailable or uncertain.
3.1 — Company Overview & Strategic Intent
Founding date, HQ location, company size (employees)
Mission/vision statement and how it has evolved
Leadership team backgrounds — where did they come from? What does this signal about strategy?
Board composition and notable advisors
Company culture signals (Glassdoor themes, employer branding, values statements)
Public statements about strategy, vision, and roadmap (earnings calls, interviews, conference talks)
Key question: What game are they playing, and what game do they think they’re playing?
3.2 — Product & Service Analysis
Core product(s) and service offerings — what exactly do they sell?
Product architecture (platform vs. point solution, monolith vs. modular, cloud vs. on-prem vs. hybrid)
Key features and capabilities — detailed inventory
Recent product launches, feature releases, and deprecations (last 12–18 months)
Technology stack and technical differentiators (if discoverable)
Integrations and ecosystem partnerships
Product UX/UI quality assessment (based on demos, screenshots, reviews)
API availability and developer ecosystem
AI/ML capabilities and data moats
Key question: What is their product’s structural advantage, and is it defensible?
3.3 — Target Customers & Use Cases
Ideal customer profile (ICP): company size, industry, role of buyer, role of user
Published case studies and named customers
Primary use cases — what jobs-to-be-done does the product serve?
Customer segmentation strategy (SMB vs. mid-market vs. enterprise)
Geographic focus and international expansion patterns
Vertical specialization vs. horizontal play
Key question: Are they serving the same customers we are, or adjacent ones? Where is the overlap dangerous?
3.4 — Pricing & Monetization Strategy
Pricing model (subscription, usage-based, freemium, per-seat, flat-rate, hybrid)
Published price points and tier structure
Free tier or trial availability and limitations
Enterprise/custom pricing signals
Discounting patterns (if discoverable from reviews or sales intel)
Monetization trajectory — are they raising prices, bundling, or unbundling?
Revenue per employee (as an efficiency proxy, if data available)
Key question: Is their pricing a weapon or a vulnerability?
3.5 — Go-to-Market & Distribution
Primary sales motion (product-led growth, sales-led, channel/partner, hybrid)
Marketing channels and content strategy (SEO, paid, social, events, community)
Messaging and positioning — what promises do they make? What language do they use?
Brand perception and share of voice
Key partnerships and channel relationships
Sales team size and structure (check LinkedIn)
Customer success / support model and reputation
Community and developer relations strategy
Key question: How do they acquire customers, and is that motion scaling or stalling?
3.6 — Customer Sentiment & Voice of Customer
Aggregate review scores (G2, Capterra, TrustRadius, Gartner Peer Insights, App Store, etc.)
Top 5 most-praised attributes in customer reviews
Top 5 most-criticized pain points in customer reviews
Churn signals and switching patterns — who are customers leaving for and leaving from?
NPS or satisfaction data (if publicly available)
Social media sentiment and community discussions (Reddit, Twitter/X, Hacker News, industry forums)
Support forum patterns — what breaks, what frustrates?
Key question: What do their happiest customers love, and what do their angriest customers hate? Are those things we can exploit?
3.7 — Financial Health & Funding
Funding history: rounds, amounts, investors, valuations
Revenue estimates (from press, analyst reports, or data providers)
Profitability signals (layoffs, hiring freezes, pivots, or aggressive expansion)
Burn rate and runway estimates (for private companies)
Public financial data (for public companies — revenue, margins, growth rate, guidance)
Recent M&A activity (acquisitions made or rumored acquisition targets)
Investor profile — what kind of investors are backing them and what does that signal?
Key question: Do they have the resources to execute their strategy? Are they under financial pressure?
3.8 — Team & Hiring Signals
Total headcount and headcount growth trajectory (LinkedIn, Pitchbook)
Key recent hires — seniority, function, and where they came from
Key recent departures — seniority, function, and where they went
Open roles RIGHT NOW — what functions are they investing in? (Engineering? Sales? Compliance? International?)
Engineering team size relative to total (R&D intensity)
Hiring in new geographies (market expansion signal)
Glassdoor/Blind sentiment — internal morale and cultural issues
Key question: What does their hiring tell us about their next 6–12 month priorities?
3.9 — Intellectual Property & Defensibility
Patent filings (number, recency, subject matter)
Proprietary data assets or network effects
Switching costs and lock-in mechanisms
Regulatory moats or compliance certifications
Brand strength and earned trust
Key question: What makes them hard to displace, and what makes them fragile?
3.10 — Market & Ecosystem Context
Total addressable market (TAM) and serviceable addressable market (SAM) estimates
Market growth rate and key growth drivers
Regulatory trends affecting the market (globally, not just US/EU)
Technology trends creating tailwinds or headwinds
Macro-economic factors relevant to buyer budgets
Adjacent markets that could converge into this space
Platform risk — dependency on any major platform (AWS, Salesforce, Apple, etc.)
Key question: Is the market itself growing, shrinking, or fragmenting — and who benefits most from each scenario?
SECTION 4: ANALYSIS FRAMEWORKS TO APPLY
After gathering data, apply these analytical lenses:
Feature Comparison Matrix — Rows = features/capabilities. Columns = competitors. Cells = ✅ full support / 🟡 partial / ❌ absent / 🔜 announced. Add a “weighted importance” column based on customer priorities.
SWOT Analysis — For OUR company, informed by the full competitive picture:
Strengths: Where do we have a genuine, evidence-based advantage?
Weaknesses: Where are we behind, and how badly?
Opportunities: What gaps in the market or competitor weaknesses can we exploit?
Threats: What competitive moves, market shifts, or disruptions could hurt us?
Require evidence for every item. No vague platitudes.
Competitive Positioning Map — Create a 2×2 matrix. Propose the two axes most strategically relevant (e.g., price vs. capability, ease-of-use vs. depth, SMB-focus vs. enterprise-focus). Place all players. Identify white space.
Porter’s Five Forces — Brief assessment of competitive intensity, supplier power, buyer power, threat of substitutes, threat of new entrants.
Jobs-to-Be-Done Overlap Analysis — Map each competitor to the customer jobs they serve. Where do we compete head-to-head vs. where are we differentiated?
Moat Assessment — Rate each competitor’s defensibility on a 1–5 scale across: network effects, switching costs, economies of scale, brand, data assets, regulatory capture, IP.
Threat Prioritization Matrix — Rank each competitor on (a) capability to harm us and (b) intent/trajectory to do so. Quadrant: Monitor / Watch Closely / Respond Now / Existential.
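The threat prioritization quadrants reduce to a simple two-axis lookup. A minimal sketch — the 1–5 scale, the midpoint threshold, and the assignment of the two mixed quadrants are illustrative assumptions, since the prompt leaves them open:

```python
def threat_quadrant(capability, intent, threshold=3):
    """Map a competitor's (a) capability-to-harm and (b) intent/trajectory
    scores (here 1-5, an assumed scale) to one of the four quadrants.
    Which mixed quadrant gets which label is a judgment call."""
    high_cap = capability >= threshold
    high_intent = intent >= threshold
    if high_cap and high_intent:
        return "Existential"
    if high_cap:
        return "Respond Now"      # capable of harm even if not (yet) aiming at us
    if high_intent:
        return "Watch Closely"    # aiming at us but with limited capability
    return "Monitor"
```

In practice the scores themselves come from the Section 3 research, not from a formula; the function only makes the quadrant assignment explicit and repeatable.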
SECTION 5: SYNTHESIS & STRATEGIC OUTPUT
5.1 — Key Findings
What are the 3–5 most important things we learned?
What surprised us? What challenged our assumptions?
Where is conventional wisdom about this market wrong?
5.2 — Competitive Advantage Assessment
Where do we genuinely win today, and is it sustainable?
Where are we losing, and is the gap widening or closing?
What is our most vulnerable flank?
5.3 — Strategic Recommendations
For each recommendation, provide:
The action
The rationale (linked to specific competitive evidence)
Effort required (Low / Medium / High)
Expected impact (Low / Medium / High)
Time horizon (Now / Next Quarter / Next Year)
Risk if we do NOT act
5.4 — Early Warning Dashboard
Identify 5–10 leading indicators to monitor on an ongoing basis:
Competitor hiring surges in specific functions
Pricing changes
Product launches or pivots
Funding rounds
Key customer wins or losses
Partnership announcements
Patent filings
Executive departures
Regulatory changes
Market sentiment shifts
For each indicator, define: what it means, where to monitor it, and what action it should trigger.
5.5 — Assumptions & Confidence Log
Create a table with the following columns:
# | Assumption or Claim | Evidence Source(s) | Evidence Quality (Primary/Secondary/Inferred) | Confidence (1–5) | What Would Change Our View
SECTION 6: EXECUTION INSTRUCTIONS
Follow this sequence:
Step 1: Intake & Clarification
Read all context provided in Section 1
Ask clarifying questions if anything is ambiguous or missing
Confirm deliverables from Section 2
Confirm if there are competitors to add or remove
Step 2: Research Phase
For each competitor, systematically work through every dimension in Section 3
Use web search for EACH competitor individually — do not rely on memory
Cross-reference at least 2 sources per major claim
Flag data gaps explicitly: “I could not find reliable data on [X] for [Competitor Y]”
Search in English AND in the primary language of each competitor’s home market
Step 3: Analysis Phase
Apply each framework in Section 4
Challenge your own findings — look for disconfirming evidence
Identify where your analysis might be biased toward the user’s company
Note areas where different frameworks produce conflicting signals
Step 4: Synthesis Phase
Produce each deliverable requested in Section 2
Write the Executive Summary LAST, after all analysis is complete
Ensure every recommendation is tied to specific evidence
Complete the Assumptions & Confidence Log
Step 5: Review & Challenge
Re-read the user’s strategic context and biggest worry from Section 1
Ask: “Does this analysis actually help them make their decision?”
Identify the single most important thing the user needs to know
Flag anything that should be validated with primary research (customer interviews, win/loss analysis, etc.)
SECTION 7: OPERATING PRINCIPLES
Throughout this analysis, adhere to these principles:
Evidence over opinion. Every claim needs a source. Distinguish fact from inference.
Steel-man competitors. Assume they are smart and well-resourced. Do not dismiss them.
Global lens. Do not default to US-centric analysis. Consider competitors and markets worldwide.
Surface assumptions. Make the implicit explicit. Challenge the user’s priors respectfully.
Intellectual honesty. If the evidence suggests the user’s product is behind, say so clearly.
Probabilistic thinking. Use confidence scores. Avoid false certainty.
So-what test. Every finding should connect to an actionable implication.
Recency bias check. Weigh recent signals appropriately but don’t over-index on last week’s news.
Structural over anecdotal. One bad review is an anecdote. A pattern of bad reviews is a signal.
Name the game. Be explicit about what kind of competition this is: winner-take-all, fragmented, platform war, niche, commoditizing, etc.
Investment or Initiation Report
This almost 3,000-word prompt is a great example and is worth a look (even if you are not an investor). (I also like how it was improved through community use and feedback.) It is a deep research prompt for an “Initiation Report” on a company, as an investment analyst might produce. The original post from BuccoCapital is here.
- ROLE AND OBJECTIVE You are a senior buy-side equity analyst with a risk-manager mindset and forensic-accounting rigor. Produce a decision-ready, source-backed investment memo on {COMPANY_NAME} ({TICKER}) that concludes with a clear Buy / Hold / Sell call.
- MINDSET AND APPROACH
- Begin with the outside view, then layer the inside view, deliberately hunting for disconfirming evidence before trusting the company narrative.
- Lead with downside: map bear paths, covenant or liquidity traps, and execution bottlenecks before outlining upside drivers.
- Enforce valuation-and-timing discipline by applying hard gates before any rating or position sizing.
- Show the math—ranges, sensitivities, units, and explicit assumptions—whenever you estimate.
- STANDARDS AND CONSTRAINTS
- Finish the Research-coverage standards (60-source gate) *before* drafting any part of the memo.
- Tag every paragraph **Fact / Analysis / Inference** and include unit conversions and calculations where relevant.
- **Expand acronyms on first use** (e.g., Free Cash Flow (FCF)), then use the acronym consistently.
- Follow the Decision rules, Quality scorecard, and Entry-readiness overlay exactly as written.
- VOICE AND OUTPUTS
- **Start the memo with the Executive summary**—it appears first, ahead of all other sections.
- Write concisely in a structured, neutral style: bullets, tables, and step-by-step math over long prose.
- The Executive summary must state rating, fair-value band, expected total return, buy/trim bands, dated catalysts, and “what would change the call.”
- PROHIBITIONS
- Never present unsourced assertions as facts or hide uncertainty by omitting known limitations or error bars.
- DEFAULT INVESTMENT HURDLES
- (Apply automatically—do not ask the user.)
- Metric | Default | Purpose
- Decision horizon | 24 months | Scenario & catalyst window
- Benchmark / alpha | S&P 500 / +300 bps | Required out-performance
- Expected-return hurdle | 30 % over 24 m | Minimum probability-weighted total return for Buy
- Margin of safety | 25 % | Required discount to mid fair value
- Return ÷ bear-drawdown skew | ≥ 1.7× | Pay-off asymmetry gate
- Quality pass / sell floor | 70 / 60 | Weighted business-quality score
- RULES FOR RESEARCH AND WRITING
- Use verifiable sources; date every non-obvious claim so provenance is clear.
- Label paragraphs Fact / Analysis / Inference.
- Use exact calendar dates—avoid “recently” or “last quarter.”
- Quantify material statements; show math and units.
- Highlight missing data and state explicit assumptions.
- RESEARCH-COVERAGE & CITATION STANDARDS (single-run workflow)
- 1. Internally gather sources; build the Coverage log & Coverage validator.
- 2. When **all validator lines are PASS**, draft the memo immediately and append the Coverage log + validator at the end.
- *Coverage log* columns: Title | Link | Date | Source type (filing / earnings-IR / industry-trade / high-quality media / competitor-primary / academic-expert) | Region | Domain | Section | Note | Recency Yes/No.
- Count uniqueness by **domain + document title**.
- *PASS thresholds*: ≥ 60 unique sources, ≥ 10 HQ media, ≥ 5 competitor-primary, ≥ 5 academic/expert, ≥ 60 % dated within 24 months, ≤ 10 % from any one domain.
- Mark *Recency Yes* for each time-sensitive metric; print its date; update if newer data exist or justify retention.
- If any validator line is FAIL, keep researching silently until all PASS; **never prompt the user after validation**.
- DECISION RULES FOR RATING AND ENTRY (single source of truth)
- 1. Compute expected total return E[TR] = p_bull·R_bull + p_base·R_base + p_bear·R_bear (including dividends + buybacks).
- 2. Quantify downside: bear-case total return, expected shortfall, maximum adverse excursion.
- 3. **Margin-of-safety gate:** Price ≥ {MOS_%} below intrinsic value **unless** a near-certain ≤ 6-month catalyst with quantified impact and ≥ 80 % probability (cited) offsets it.
- 4. **Skew gate:** E[TR] ÷ |bear-drawdown| ≥ {SKEW_X}.
- 5. **Why-now gate:** Require a dated catalyst or re-rating trigger inside {HORIZON}; else Hold / Wait-for-entry.
- 6. Provide buy / hold / trim bands around fair value and explicit add/reduce rules.
- 7. If any gate fails → rating cannot be **Buy**; assign Hold, Wait-for-entry, or Sell.
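The expected-return and gate arithmetic in the decision rules is mechanical enough to sketch directly. A minimal Python illustration — the function names, the example scenario, and the simplified handling of the catalyst override in the margin-of-safety gate are my own assumptions, not part of the prompt:

```python
def expected_total_return(scenarios):
    """scenarios: (probability, total_return) pairs; returns E[TR]."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * r for p, r in scenarios)

def rating(scenarios, bear_drawdown, price, fair_value, has_dated_catalyst,
           hurdle=0.30, mos=0.25, skew_min=1.7):
    """Apply the four quantitative gates; defaults mirror the hurdle table."""
    e_tr = expected_total_return(scenarios)
    gates = {
        "return_hurdle": e_tr >= hurdle,                      # rule 1 vs. 30% hurdle
        "margin_of_safety": price <= fair_value * (1 - mos),  # rule 3 (catalyst override omitted)
        "skew": e_tr / abs(bear_drawdown) >= skew_min,        # rule 4
        "why_now": has_dated_catalyst,                        # rule 5
    }
    call = "Buy" if all(gates.values()) else "Hold / Wait-for-entry"  # rule 7
    return call, e_tr, gates

# Illustrative scenario: 30% bull (+80%), 50% base (+35%), 20% bear (-25%)
call, e_tr, gates = rating(
    scenarios=[(0.30, 0.80), (0.50, 0.35), (0.20, -0.25)],
    bear_drawdown=-0.25, price=70.0, fair_value=100.0,
    has_dated_catalyst=True)
```

In this example E[TR] clears the 30 % hurdle, but the skew gate (0.365 ÷ 0.25 ≈ 1.46 < 1.7) fails, so the call cannot be Buy — exactly the kind of single-gate veto rule 7 enforces.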
- QUALITY SCORECARD
- Weights: Market 25 | Moat 25 | Unit Economics 20 | Execution 15 | Financial Quality 15.
- Score each 0–5 (evidence for >3); weighted total = Quality score.
- Buy if Quality ≥ {QUALITY_PASS} **and** all gates pass; Sell if Quality < {QUALITY_SELL}.
- Output the five subscores and the total.
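The scorecard arithmetic can be sketched in a few lines, assuming the 0–5 subscores are normalized so a perfect score maps to 100 — a reading the prompt implies but does not spell out:

```python
# Weights from the scorecard above; normalization to a 0-100 scale is assumed.
WEIGHTS = {"Market": 25, "Moat": 25, "Unit Economics": 20,
           "Execution": 15, "Financial Quality": 15}

def quality_score(subscores):
    """Weighted 0-100 quality score from 0-5 subscores."""
    assert set(subscores) == set(WEIGHTS)
    assert all(0 <= s <= 5 for s in subscores.values())
    return sum(WEIGHTS[k] * s / 5 for k, s in subscores.items())

score = quality_score({"Market": 4, "Moat": 3, "Unit Economics": 4,
                       "Execution": 3, "Financial Quality": 4})
# 20 + 15 + 16 + 9 + 12 = 72, just above the default Buy threshold of 70
```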
- ENTRY READINESS OVERLAY
- Derive posture (Strong Buy / Buy / Watch / Trim) from Decision-rule outputs; header: “Quality = XX/100 | Entry = …”.
- DELIVERABLES (order)
- 1. Executive summary (first)
- 2. Full memo (Sections 1–21)
- 3. Coverage log + Coverage validator
- 4. Appendix (model, data tables, assumptions)
- OUTPUT SEQUENCE
- Executive summary → Rating & price targets → Investment thesis & variant perception → Decision rules / Quality scorecard / Entry overlay → Sections 1–21 → Coverage log + validator → Appendix.
- SECTIONS 1 – 21 (fully descriptive one-sentence bullets)
- 1) THESIS FRAMING (purpose – define what must be true to create value)
- Summarize in one crisp question the value-creation hurdle the investment must clear.
- State 3–5 thesis pillars, each as a concrete “if-then” condition linking business drivers to shareholder value.
- List the specific facts that would disprove each pillar so falsification is easy.
- Give a dated, single-sentence “why-now” catalyst that explains timing.
- Explain the variant perception—the edge versus consensus and why the market misses it.
- Name the leading metric and break-point threshold that would invalidate the thesis within two quarters.
- 2) MARKET STRUCTURE AND SIZE (purpose – size the prize and trajectory)
- Quantify Total, Serviceable, and Share-of-Market by product line, customer band, industry, and geography so upside is tangible.
- Tie each major growth driver (regulation, refresh cycles, macro, tech adoption) to a quantifiable lift in demand.
- Benchmark current penetration versus peer adoption curves to measure runway.
- Spell out scenarios that could shrink Serviceable TAM in the next 24 months.
- State clearly whether demand or supply is the binding constraint today and cite evidence.
- 3) CUSTOMER SEGMENTS AND JOBS (purpose – map who buys and why)
- Break down the customer mix by size band and industry and name buyer roles and budget owners.
- Map core workflows, pain points, and mission-criticality to show value dependency.
- Quantify switching costs for each segment to gauge durability.
- Estimate do-nothing/internal-build prevalence and why customers still convert.
- Identify the main procurement blocker and the proof required to unlock purchase.
- 4) PRODUCT AND ROADMAP (purpose – evaluate product-market fit and durability)
- List core modules and adjacencies and tie differentiators to measurable user outcomes.
- Compare depth versus breadth against best-of-breed point solutions to highlight edge.
- State typical implementation time, integrations required, configurability, and time-to-value.
- Provide quality signals—uptime %, incident frequency, mobile performance—benchmarking peers.
- Score roadmap credibility by matching stated milestones to historical delivery.
- Highlight the hardest-to-copy capability and the moat protecting it (IP, data, process).
- Flag technical debt that limits scale, reliability, or unit cost within two years.
- 5) COMPETITIVE LANDSCAPE (purpose – position the company)
- • Chart direct and indirect competitors by segment and size to show buyer choice set.
- • Compare pricing, packaging, and feature gaps, including switching friction and contract terms.
- • Summarize win/loss reasons from reviews, case studies, and disclosed data to evidence edge.
- • Anticipate competitor responses and what could neutralize current advantages.
- • Flag segments won mainly via channel or regulation rather than product and assess durability.
- 6) ECOSYSTEM AND PLATFORM HEALTH (purpose – flywheel durability)
- • Report API call volume, active developers/apps, SDK adoption, deprecation cadence, and backward-compatibility discipline to gauge platform vitality.
- • Quantify marketplace economics—GMV, take-rate, rev-share, partner attach, concentration, leakage control—to show ecosystem value capture.
- • Rate partner quality through certifications, pipeline influence, co-sell productivity, and retention or satisfaction scores.
- • Detail governance and trust mechanics: listing standards, review SLAs, enforcement, data sharing, dispute resolution—showing rule-of-law strength.
- • Evaluate developer experience via docs quality, sandbox speed, time-to-first-call, and frequency of breaking changes.
- • Define a minimum-viable ecosystem health metric and describe its failure modes.
- • State ecosystem-mediated revenue share and any top-partner concentration risk.
- 7) GO-TO-MARKET AND DISTRIBUTION (purpose – scalability of new-logo engine)
- • Break down demand sources (inbound, outbound, partner referral, marketplaces) and show historical mix shift.
- • Quantify sales productivity—ramp duration, quota attainment %, conversion rates—and link to disclosed or inferred data.
- • Explain channel and partnership roles (integrations, OEM, platform embeds) in extending reach.
- • Describe services and customer-success motions and how training/community become moat.
- • Name the single biggest funnel bottleneck and the lowest-CAC play to clear it.
- • Specify what doubling pipeline without doubling opex would require in headcount, spend, or tooling.
- 8) RETENTION AND EXPANSION (purpose – revenue durability)
- • Report gross and net dollar retention by cohort and segment or provide transparent estimation math.
- • Diagnose logo churn drivers and timing; visualize a churn curve if shape matters.
- • List expansion vectors—seat growth, module attach, usage add-ons—and rank by revenue impact.
- • Detail contract length, renewal mechanics, and price-increase policies to gauge stickiness.
- • Synthesize reference-call insights or credible reviews to validate retention claims.
- • Identify a leading churn indicator 60–90 days ahead and show how it triggers action.
- • Split expansion into true usage growth versus price/packaging uplift by cohort.
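Gross and net dollar retention, referenced above, have standard definitions worth keeping at hand. A minimal sketch with illustrative cohort figures (the numbers are invented for the example):

```python
def gross_dollar_retention(start_arr, churned_arr, contraction_arr):
    """Share of the cohort's starting ARR retained, ignoring expansion."""
    return (start_arr - churned_arr - contraction_arr) / start_arr

def net_dollar_retention(start_arr, churned_arr, contraction_arr, expansion_arr):
    """As above, but crediting expansion within the same cohort."""
    return (start_arr - churned_arr - contraction_arr + expansion_arr) / start_arr

# Illustrative cohort: $1.0M starting ARR, $80k churned, $20k contracted, $200k expanded
gdr = gross_dollar_retention(1_000_000, 80_000, 20_000)          # 0.90
ndr = net_dollar_retention(1_000_000, 80_000, 20_000, 200_000)   # 1.10
```

Splitting NDR into its components this way is also what makes the "true usage growth versus price/packaging uplift" decomposition asked for above possible.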
- 9) MONETIZATION MODEL AND REVENUE QUALITY
- (purpose – value capture → durable revenue)
- • Map revenue architecture by model (subscription, license, usage, transaction, hardware, services, advertising, marketplace) and state the revenue *unit* for each line.
- • Identify price meters and prove they correlate with delivered customer value.
- • Show gross and contribution margin by line and sensitivity to mix shift.
- • Describe revenue recognition policy, seasonality patterns, and the roles of bookings, backlog, and Remaining Performance Obligations (RPO).
- • Quantify visibility—contracted, recurring, re-occurring, non-recurring—and concentration by customer, product, channel, geography.
- • Explain external demand drivers (macro cycles, ad markets, commodity inputs, interest-rate sensitivity, regulatory constraints) that can swing volumes.
- • List 2–3 leading KPIs per model that predict revenue one to two quarters ahead and show empirical lead-lag.
- • If payments/credit apply, add activity levels, take rate, cost stack, loss rates, and who bears credit/fraud risk.
- • Identify the price meter best aligned with value that can scale 10× without raising churn.
- • Flag any revenue line that carries negative optionality or cannibalizes a higher-margin line.
- 10) PRICING POWER AND ELASTICITY TESTING (purpose – value capture)
- • Document pricing governance—list vs realized price history, discount band discipline, approval thresholds, and price fences.
- • Present elasticity evidence from controlled price tests, cohort outcomes, win/loss data, and cross-price effects.
- • Summarize willingness-to-pay research (conjoint or van Westendorp), key buyer value drivers, and sensitivity by industry/size.
- • Explain packaging strategy—good-better-best tiers, bundle attach, usage/overage meters—and leakage guardrails.
- • Provide a monetization-change log of pricing/packaging/metering moves and realized impact.
- • State reference price and switching cost (dollars/hours) by segment to ground barriers.
- • Estimate ARPU ceiling before churn inflects and cite supporting evidence.
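Elasticity evidence from a controlled price test can be summarized with the arc (midpoint) formula. A hedged sketch; the test numbers below are invented for illustration:

```python
def arc_elasticity(p0, p1, q0, q1):
    """Arc (midpoint) price elasticity of demand from a before/after price test."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)  # % change in quantity, midpoint base
    pct_p = (p1 - p0) / ((p0 + p1) / 2)  # % change in price, midpoint base
    return pct_q / pct_p

# Hypothetical A/B test: list price $100 -> $110, conversions 400 -> 370
e = arc_elasticity(100, 110, 400, 370)  # about -0.82: inelastic at this point
```

With |e| < 1, the tested increase grows revenue ($40.0k → $40.7k in this toy case), which is exactly the kind of evidence the ARPU-ceiling bullet needs.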
- 11) UNIT ECONOMICS AND EFFICIENCY (purpose – profitable scalability)
- • Report CAC, payback period, magic number, and LTV/CAC by segment—stated or transparently inferred.
- • Show contribution margin by line (software, usage, services) to reveal variable profit.
- • Track cohort profitability and cumulative cash contribution over time to evidence unit-level returns.
- • Quantify implementation, onboarding, and support cost over lifetime to fully load economics.
- • Identify structurally unprofitable cohorts and whether strategy is fix or exit.
- • Name the main constraint blocking a 20–30 % payback improvement and the remedy.
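The unit-economics metrics named above follow from a handful of inputs. A simplified steady-state sketch (segment numbers are hypothetical; real work should infer them from cohort-level data):

```python
def unit_economics(cac, arpa_month, gross_margin, monthly_churn):
    """CAC payback and LTV/CAC under constant ARPA and churn (simplifying assumptions)."""
    monthly_gp = arpa_month * gross_margin  # gross profit per account per month
    payback_months = cac / monthly_gp       # months to recover acquisition cost
    ltv = monthly_gp / monthly_churn        # geometric-lifetime gross profit
    return payback_months, ltv / cac

# Hypothetical segment: $12k CAC, $1k/mo ARPA, 80% gross margin, 2% monthly churn
payback, ltv_cac = unit_economics(12_000, 1_000, 0.80, 0.02)  # 15 months, ~3.3x
```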
- 12) FINANCIAL PROFILE (purpose – operations → financial outcomes)
- • Break down revenue mix and growth by component and gross margin by line, then show the operating-leverage path.
- • Present Rule-of-40 score and a GAAP-to-cash-flow bridge to reconcile accounting with liquidity.
- • Highlight leading indicators (billings, RPO, backlog) that foreshadow revenue.
- • Detail stock-based-compensation, dilution, and share-count trajectory.
- • Explain liquidity needs, working-capital profile, and path to FCF breakeven and target margin.
- • State operational milestones required to hit target FCF margin and timeline.
- • Flag accounting judgments that could swing EBIT by > 200 bps and show sensitivity.
- • Compute the FCF/share CAGR needed to reach mid fair value and assess feasibility.
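Two of the computations this section calls for — the Rule-of-40 score and the FCF/share CAGR needed to reach mid fair value — are one-liners. A sketch with illustrative inputs:

```python
def rule_of_40(revenue_growth, fcf_margin):
    """Rule of 40: revenue growth rate plus FCF margin, both as decimals."""
    return revenue_growth + fcf_margin

def required_cagr(fcf_ps_now, fcf_ps_target, years):
    """Annual FCF/share growth needed to reach the fair-value assumption."""
    return (fcf_ps_target / fcf_ps_now) ** (1 / years) - 1

# Hypothetical: 25% growth + 12% FCF margin; $2.00 FCF/share doubling over 5 years
r40 = rule_of_40(0.25, 0.12)         # 0.37 -> just below the 40% bar
cagr = required_cagr(2.00, 4.00, 5)  # ~14.9% per year
```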
- 13) CAPITAL STRUCTURE AND COST OF CAPITAL (purpose – funding flexibility and risk)
- • Detail the debt stack—instrument types, fixed/floating mix, hedges, covenants, collateral, maturities, amortization, prepay terms—to surface refinancing risk.
- • Quantify leverage and coverage (gross/net, interest-coverage, Debt/EBITDA vs covenant headroom) and stress for higher rates and lower EBITDA.
- • Estimate WACC—capital-structure weights, risk-free rate, beta, equity risk premium, credit spread—and show sensitivities.
- • Summarize rating-agency posture and triggers and compare to management targets.
- • Map equity plumbing—authorized vs issued, converts, buybacks, dividend policy, ATM, option/RSU overhang—to project dilution.
- • Identify funding shock or rate level that forces a strategy shift or covenant breach and outline the contingency plan.
- • State headroom to fund growth at target leverage while preserving ratings.
- • Define liquidity runway and covenant headroom thresholds that force Sell or Wait.
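The WACC estimate requested above combines a CAPM cost of equity with an after-tax cost of debt. A textbook sketch; every input below is a placeholder to be replaced with the company's actual figures:

```python
def wacc(equity_mv, debt_mv, beta, risk_free, erp, credit_spread, tax_rate):
    """Weighted average cost of capital: CAPM equity cost, after-tax debt cost."""
    total = equity_mv + debt_mv
    cost_equity = risk_free + beta * erp                      # CAPM
    cost_debt = (risk_free + credit_spread) * (1 - tax_rate)  # after tax shield
    return (equity_mv / total) * cost_equity + (debt_mv / total) * cost_debt

# Hypothetical: $8B equity, $2B debt, beta 1.2, 4% rf, 5% ERP, 2% spread, 21% tax
w = wacc(8e9, 2e9, 1.2, 0.04, 0.05, 0.02, 0.21)  # about 8.9%
```

Sensitivities follow by re-running the function over ranges of beta, the risk-free rate, and the spread.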
- 14) MOAT AND DATA ADVANTAGE (purpose – defensibility)
- • Explain workflow depth and proprietary data that create lock-in.
- • Analyze network or ecosystem effects, showing how value strengthens with scale.
- • Demonstrate measurable analytics or AI advantages that translate to outcomes.
- • Map integration footprint and practical switching costs across adjacent systems.
- • Provide evidence the moat is deepening over time, not static or eroding.
- • Identify the event most likely to collapse the moat within two years and estimate its probability.
- 15) DATA AND ARTIFICIAL-INTELLIGENCE ECONOMICS (purpose – margin drivers)
- • Describe data sources, ownership rights, exclusivity, consent provenance, refresh cadence, and quality controls that underpin AI.
- • Quantify labeling/curation costs, model-training compute, per-inference cost, and unit-cost decline roadmap.
- • Assess vendor and IP risk—model or infrastructure dependencies, portability, open-/closed-source posture, patent coverage, and freedom-to-operate.
- • Outline evaluation framework—offline/online tests, attributable KPIs, guardrails, drift-detection, rollback policies—to ensure model quality.
- • Evaluate data-moat mechanics—uniqueness, scale, timeliness, feedback loops—separate from general network effects.
- • Describe the self-reinforcing data loop and contractual protection for rights/consent/exclusivity.
- • Estimate marginal ROI of each AI feature versus a non-AI baseline and how ROI scales.
- 16) EXECUTION QUALITY AND ORGANIZATION (purpose – operating cadence)
- • Summarize leadership track record, stability, organizational design, and succession readiness.
- • Report engineering velocity—release cadence, defect and incident rates—where data exist.
- • Triangulate customer sentiment using CSAT, NPS, peer reviews, and community signals.
- • Flag a single leadership gap that is existential within 12–24 months and outline the succession or hire plan.
- • Name the operating-cadence metric that best predicts misses and describe how it triggers action.
- 17) SUPPLY CHAIN AND OPERATIONS (purpose – fulfillment and cost risk; include if hardware/services heavy)
- • List critical suppliers, single-source exposures, top-5 concentration, capacity commitments, lead times, yields, and quality escapes.
- • Provide field performance—warranty accruals vs claims, RMA rates/root causes, refurbishment recovery, inventory turns, aging, and obsolescence reserves.
- • Describe logistics/continuity—key lanes, 3PL dependencies, regional diversification, tariff/export-control exposure, dual-sourcing and disaster-recovery plans.
- • Explain manufacturing economics—make-vs-buy logic, contract-manufacturer terms, learning-curve slope, utilization breakevens.
- • If services are material, show staffing levels, utilization, backlog, SLA attainment, and margin by tier.
- • Identify the single point of failure and quantify time/cost to dual-source it.
- • Compare cost-curve and yield learning rate versus peers and note what would change the slope.
- 18) RISK INVENTORY AND MITIGANTS (purpose – make downside explicit)
- • Prioritize macro, regulatory, competitive, operational, and concentration risks with plain impact descriptions.
- • Include payments, credit, or compliance risks if the model warrants.
- • Highlight implementation complexity and time-to-value risk with realistic timelines.
- • Lead with indicators and mitigations; cross-reference covenant/liquidity metrics (Section 13) and supply-chain continuity (Section 17).
- • Name the top 12-month risk, quantify P&L impact, and outline a recovery playbook.
- • Define an objective stop-loss or escalation trigger that forces capital preservation.
- 19) MERGERS AND ACQUISITIONS STRATEGY AND OPTIONALITY (purpose – non-organic growth)
- • Review past deals versus plan—revenue, margin, cash-flow, synergy capture, post-merger churn, integration cost.
- • Apply a build-buy-partner framework to close roadmap gaps with evidence.
- • Assess integration muscle—playbooks, platform convergence, leadership retention, cultural integration, systems/process harmonization.
- • Summarize financing mix, valuation discipline versus comps, earn-outs/contingent consideration, and impairment history.
- • Describe M&A pipeline, regulatory environment, and how acquisitions shift competitive dynamics and thesis risk.
- • Identify capability gaps that cannot be built organically in time and why acquisition is needed.
- 20) VALUATION FRAMEWORK (purpose – value with cross-checks)
- • Establish an outside-view baseline using peer medians/IQR for growth, margins, reinvestment, and valuation; justify deviations.
- • Present a public-comps table—growth, gross margin, operating margin, Rule-of-40, EV/Revenue, EV/Gross Profit—normalized for disclosure quirks.
- • Build a discounted-cash-flow (DCF) with explicit drivers and sensitivity bands to show value swing.
- • Run a reverse-DCF to surface market-implied growth, margins, reinvestment and explain where you disagree.
- • Output a fair-value band (low/mid/high) and required {MOS_%} margin-of-safety to act.
- • Benchmark current multiple versus 5-year peer percentile and only recommend Buy if a credible re-rating path exists.
- • Cross-check value with cohort NPV math, adoption S-curves, and unit-economics-to-EV sanity checks.
- • For private names, triangulate valuation using last-round terms, secondary indications, and revenue multiples.
- • State market-implied expectations from the reverse-DCF and the single variable explaining most dispersion.
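The reverse-DCF asked for above can be implemented by solving for the growth rate that reproduces the current enterprise value. A deliberately simple one-stage sketch using bisection (all inputs hypothetical):

```python
def dcf_value(fcf0, growth, wacc, terminal_growth, years=10):
    """One-stage DCF: constant growth for `years`, then a Gordon terminal value."""
    value, fcf = 0.0, fcf0
    for t in range(1, years + 1):
        fcf *= 1 + growth
        value += fcf / (1 + wacc) ** t
    terminal = fcf * (1 + terminal_growth) / (wacc - terminal_growth)
    return value + terminal / (1 + wacc) ** years

def implied_growth(market_ev, fcf0, wacc, terminal_growth, lo=-0.5, hi=1.0):
    """Reverse DCF: bisect for the growth rate the current EV already prices in."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if dcf_value(fcf0, mid, wacc, terminal_growth) < market_ev:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical: $10B EV, $300M FCF, 9% WACC, 2.5% terminal growth
g = implied_growth(10e9, 300e6, 0.09, 0.025)  # market-implied 10-year growth rate
```

Explaining where — and why — you disagree with `g` is the substance of the market-implied-expectations bullet.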
- 21) SCENARIOS, CATALYSTS, AND MONITORING PLAN (purpose – expectations and triggers)
- • Build 12–24 month bear, base, and bull cases—NRR, new-logo adds, pricing/take rate, margins, SBC, share count—with probabilities summing to 100 %.
- • Compute probability-weighted E[TR] and block Buy if below {HURDLE_TR_%}.
- • Lead with the bear path: bear price/drawdown, recovery path, and time to recoup.
- • Perform a reverse stress test with hard triggers, a stress price band, and pre-committed downgrade/re-entry rules.
- • List near-term catalysts with firm dates and quantified impact on key numbers or multiple.
- • Provide an entry plan with buy/add/trim/exit bands tied to price and thesis-break metrics.
- • Monitor early warnings—small-cohort churn spikes, backlog slippage, uptime incidents, pricing pushback—with clear symptom → action mapping.
- • Define stop/review levels when metrics breach or price hits bear band without catalyst progress.
- • Rank expected return per unit downside versus two realistic alternatives to surface opportunity cost.
- • End with three positive and three negative “change-my-mind” triggers that would flip the rating.
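The probability-weighted expected total return and its Buy-blocking hurdle are simple to encode. A sketch with made-up scenario numbers (the 15% hurdle below stands in for {HURDLE_TR_%}):

```python
def expected_return(scenarios, hurdle):
    """Probability-weighted E[TR] over (probability, total_return) scenarios."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9, "probabilities must sum to 100%"
    etr = sum(p * tr for p, tr in scenarios)
    return etr, etr >= hurdle  # Buy is blocked when E[TR] is below the hurdle

# Hypothetical bear/base/bull total returns over 12-24 months
etr, clears_hurdle = expected_return(
    [(0.25, -0.30), (0.50, 0.15), (0.25, 0.70)], hurdle=0.15
)  # etr = 0.175, clears_hurdle = True
```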
- MODELING INSTRUCTIONS (simple but defensible)
- • Build revenue by segment/product; if usage-based, include volume and take-rate drivers.
- • Estimate gross margin by line; set operating-expense ratios and SBC; output free cash flow.
- • Provide a share-count and dilution schedule for the next eight quarters (public names).
- • Include two-way sensitivity tables on the two most material drivers.
- • Reconcile GAAP operating loss to FCF with a clear bridge.
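The two-way sensitivity tables in the modeling instructions amount to a nested loop over the two most material drivers. A generic sketch; the toy model below (growth × take rate on $1B of gross volume) is illustrative only:

```python
def sensitivity_table(driver_a, driver_b, model):
    """Two-way sensitivity: rows vary driver A, columns vary driver B."""
    return [[model(a, b) for b in driver_b] for a in driver_a]

# Hypothetical model: year-5 revenue as a function of growth (rows) and take rate (columns)
model = lambda growth, take_rate: 1e9 * (1 + growth) ** 5 * take_rate
table = sensitivity_table([0.10, 0.20, 0.30], [0.020, 0.025, 0.030], model)
```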
- RATING LOGIC — assign Buy / Hold / Wait-for-entry / Sell strictly per Decision rules.
- QUALITY BAR — back key statements with numbers & citations; label speculation **Inference**; prefer bullets & tables; keep prose tight.
Mike Caulfield’s SIFT Prompt
This prompt systematically checks sources and surfaces conflicting perspectives to help you judge whether a claim has anything to back it up. It is even longer than the prompts above. (Click here. The prompt is on the right of that page.)