The OpenSymbolicAI Manifesto
How do we build non-deterministic intelligence on top of a deterministic foundation, without compromising either?
Determinism Was the Deal
Before large language models, software engineering was built on a simple but powerful premise: determinism. Given the same inputs, software produced the same outputs: reliably, predictably, and repeatedly. A program behaved the same whether it ran on a laptop or in a data center, today or years from now.
This determinism made software dependable. It enabled testing, debugging, auditing, security reviews, scalability, and long-term maintenance. In short, it made software something that could be engineered and trusted.
LLMs Broke the Deal
With the rise of large language models, a new class of applications has emerged: systems built primarily around LLM calls, where the model itself drives most of the behavior. While undeniably powerful, these systems break the assumptions that traditional software relied on.
LLMs generate tokens auto-regressively, guided by probability distributions rather than explicit logic. The same input can yield different outputs. Small changes can cause cascading failures. Behavior drifts over time.
Consider: a prompt that works today may fail after a model update. A test that passes once may fail the next run. Even the best models hallucinate on 1-3% of queries; in domains like law or medicine, rates can exceed 50%. These aren't bugs to be fixed. They're fundamental properties of probabilistic systems.
The result? Systems that are fragile, hard to test, and difficult to trust. We have traded decades of hard-won engineering guarantees for convenience and raw capability. The current state of AI applications represents a step backward from the reliability that defined classical software.
The Fundamental Question
This leads to a more foundational question:
How do we build non-deterministic intelligence on top of a deterministic foundation, without compromising either?
LLMs are inherently probabilistic, and that is precisely where their power comes from. Traditional software derives its strength from determinism, explicit structure, and repeatability. Treating one as a substitute for the other produces fragile systems. Rejecting either leaves enormous potential unrealized.
The answer is not to make intelligence deterministic, nor to abandon determinism in favor of intelligence.
The answer is to compose the two correctly.
Our Thesis
LLMs should not be the system. They should be components within it.
Use LLMs for what they excel at: understanding human intent, resolving ambiguity, translating natural language into structured representations, and assisting with reasoning.
Use deterministic code for everything else: execution, validation, orchestration, state management, and control flow.
Isolate uncertainty. Constrain it. Surface it explicitly. Treat non-deterministic behavior as a first-class concern rather than an emergent side effect.
The result is a system that remains reliable, testable, auditable, and secure, even as it incorporates powerful, probabilistic intelligence.
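The composition pattern above can be sketched in a few lines. This is an illustrative example, not OpenSymbolicAI's actual API: `call_llm` and the `Order` type are hypothetical stand-ins. The point is the boundary: the model's output crosses into the rest of the system only after deterministic validation.

```python
# Sketch of the thesis: the LLM resolves ambiguity, deterministic code
# validates and executes. All names here (call_llm, Order, parse_order)
# are hypothetical, not part of any real library.
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    item: str
    quantity: int

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call that translates natural language
    # into a structured JSON representation.
    return '{"item": "widget", "quantity": 3}'

def parse_order(text: str) -> Order:
    """Deterministic boundary: validate the LLM's output before it
    enters the rest of the system. Bad output fails loudly here."""
    data = json.loads(text)  # raises on malformed JSON
    if not isinstance(data.get("item"), str):
        raise ValueError("item must be a string")
    qty = data.get("quantity")
    if not isinstance(qty, int) or qty < 1:
        raise ValueError("quantity must be a positive integer")
    return Order(item=data["item"], quantity=qty)

# Uncertainty is isolated in call_llm; everything downstream of
# parse_order is typed, tested, and repeatable.
order = parse_order(call_llm("I need three widgets"))
print(order)  # Order(item='widget', quantity=3)
```

Note that the non-deterministic call is a single, explicit step; a failed validation surfaces as an ordinary exception rather than silently corrupting state.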
What This Looks Like
OpenSymbolicAI is our answer to the fundamental question.
Deterministic operations form the foundation: typed, tested, and predictable. They do exactly what they say, every time.
Symbolic structure makes reasoning explicit. Plans, workflows, and decisions are represented as data: inspectable, replayable, and verifiable.
The runtime orchestrates execution, maintains state, and enforces contracts. It traces every step. It validates every output. It provides the guarantees.
The LLM understands intent. The system executes.
This is not a wrapper around an API. It is a framework and runtime designed from the ground up to restore engineering guarantees while using LLMs intentionally where they add the most value.
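A minimal sketch of the idea that plans are data: a plan is an inspectable list of steps, and a deterministic runtime executes it against a registry of allowed operations, tracing every step. The names (`plan`, `OPS`, `run`) are invented for illustration and do not reflect OpenSymbolicAI's actual interfaces.

```python
# Hypothetical sketch: a symbolic plan represented as plain data,
# executed by a deterministic runtime that enforces contracts and
# records a trace. Names are illustrative only.
from typing import Callable

# A plan is data: a list of (operation, argument) pairs that can be
# inspected, logged, and replayed before anything runs.
plan = [("fetch", "report.csv"), ("summarize", 5)]

# Registry of deterministic operations the runtime is allowed to call.
OPS: dict[str, Callable] = {
    "fetch": lambda name: f"<contents of {name}>",
    "summarize": lambda n: f"summary in {n} sentences",
}

def run(plan):
    trace = []
    for step, arg in plan:
        if step not in OPS:  # contract enforcement: unknown ops fail fast
            raise ValueError(f"unknown operation: {step}")
        result = OPS[step](arg)
        trace.append((step, arg, result))  # every step is traced
    return trace

trace = run(plan)
for step, arg, result in trace:
    print(step, arg, "->", result)
```

Because the plan is data rather than opaque model behavior, the same run can be replayed, diffed against a previous trace, or rejected before execution if it references an operation outside the registry.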
Why This Matters
As AI systems move from experiments to infrastructure, the cost of fragility grows rapidly. Production systems must be understandable, governable, and dependable. They must support debugging, compliance, security reviews, and long-term maintenance.
Without these properties, AI cannot become a true foundation for critical software.
The era of prompt-driven behavior must give way to designed systems. The era of probabilistic hacks must give way to software you can trust.
If you can't test it, debug it, and trust it, it's not software. It's a demo.
We are building AI systems that are powerful and reliable. Systems you can test, debug, and trust.
Systems finally worthy of being called software.