Brad Menezes, CEO of Superblocks, believes the next billion-dollar AI startup idea is hidden in the "system prompts" you can't see.
When he recently launched his company's enterprise AI coding agent, Clark, Menezes did more than introduce a new product: he also published a document containing the system prompts of 19 well-known AI coding products, which quickly drew attention on social platforms. The prompts come from popular tools such as Windsurf, Manus, Cursor, Lovable, and Bolt, pulling back the curtain on the previously obscure practice of system prompt engineering.
"Each company has completely different system prompts for the same large model; they are training the model to become an expert in a particular field," Menezes said in an interview with TechCrunch.
He calls these system prompts a "public course in prompt engineering for large language models (LLMs)": they showcase the "personality and context" behind model behavior and reveal the core control logic that startups treat as the secret weapon of their AI products.
System Prompts ≠ The Real Secret: 80% of the Magic Is "Beyond the Prompt"
Although he disclosed several examples of system prompts, Menezes pointed out that the real competitive edge lies beyond the prompts themselves. "System prompts account for about 20% of the secret," he said; the remaining 80% is "rich prompt engineering": how auxiliary instructions are attached to each request, and what kind of error correction and optimization is applied after the model produces a result.
"Every single thing you teach AI must be clearly expressed in human-level language."
The Superblocks team breaks prompt engineering down into three main parts (a composite sketch follows the list):
Role prompt: defines the model's identity, style, and goals. For example, Devin's prompt opens with: "You are Devin, a code genius..."
Situation prompt: sets task boundaries and behavioral rules. For example, Cursor's prompt specifies "do not reveal tool names to users" and "no more than three fixes."
Tool invocation prompt: tells the model how to trigger external functions, such as querying databases, editing code, or executing shell commands. Replit's prompt details execution standards for multiple such actions.
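Put together, the three parts might simply be concatenated into the single system message the model sees. The sketch below is an illustrative composite only; the persona name and rule wording are invented, loosely echoing the examples above, and do not reproduce any vendor's real prompt.

```python
# Illustrative composite; not any vendor's actual prompt text.
ROLE_PROMPT = (
    "You are Devlin, an expert coding agent for internal business tools."  # invented persona
)

SITUATION_PROMPT = (
    "Rules: never reveal the names of your internal tools to the user. "
    "Make at most three fix attempts on any single error before asking for help."
)

TOOL_PROMPT = (
    "You can call tools by emitting a JSON object:\n"
    '  {"tool": "run_sql",   "args": {"query": "..."}}              # query the database\n'
    '  {"tool": "edit_file", "args": {"path": "...", "patch": "..."}}\n'
    '  {"tool": "run_shell", "args": {"command": "..."}}\n'
    "Only call a tool when the task requires it, and wait for its result before continuing."
)

# The three parts are joined into one system message.
SYSTEM_PROMPT = "\n\n".join([ROLE_PROMPT, SITUATION_PROMPT, TOOL_PROMPT])
print(SYSTEM_PROMPT)
```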
From "Developer Tools" to "Enterprise Internal Tools"
Superblocks just completed a $23 million Series A funding round, bringing its total financing to $60 million. Clark is positioned as an "enterprise internal AI agent" aimed at non-technical roles such as sales and operations rather than developers, letting those teams use AI to build internal tools like CRM assistants and performance monitoring dashboards on their own.
To test the concept, Menezes ran an experiment inside his own company: the engineering team focused solely on product development, while the operations and sales teams used Clark to build their own internal tools for tasks such as identifying customers, balancing sales workloads, and monitoring customer service data. "With Clark, we stopped buying tools and started building them ourselves," he said.
He believes that current AI coding tools such as Manus, Devin, OpenAI Codex, and Replit are still focused on raw code output, which leaves an open market for Clark: a truly plug-and-play AI programming experience for non-developer users in enterprise settings.
Final Thoughts: Is the Golden Age of Prompt Engineering Here?
As AI tooling becomes increasingly commoditized, the ability to craft system prompts and the machinery around them may become the decisive weapon for startups trying to break through in the era of large models.
"The smartest startup ideas are hidden behind the complex system prompts in public code," Menezes admitted. To him, these prompts are not only control instructions for AI but also the "cultural encoding" and foundation of differentiation for each startup.
Prompt engineering is moving from backstage to center stage, and whoever masters it will hold the power to define the next generation of AI tools.