Quantum game theory on classical compute. The LLM proposes. The Hamiltonian disposes.
Classical optimization accepts trade-offs as axioms. It finds the Pareto frontier — the best trade-offs — but cannot find solutions beyond it. That limit is mathematical, not physical.
Mathematical structures transfer without the physical substrate:
Metallurgy math → no molten metal
Biology math → no DNA
Quantum game theory → no qubits needed
Full quantum mechanical mathematical structures on classical compute:
Complex amplitudes · Normalized states
H = H† verified at runtime · Real eigenvalues
e^(-iHt) · Schrödinger equation
Parallel evaluation across compound space
Phase: e^(iφ) · Constructive / Destructive
Non-separable states · Multi-agent correlation
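The structures listed above can be made concrete in a few lines. A minimal numpy sketch (toy 3×3 matrix, not COGNISYN's actual operators) showing the runtime H = H† check, real eigenvalues, and e^(-iHt) evolution via the spectral theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (a + a.conj().T) / 2                    # Hermitian by construction
assert np.allclose(H, H.conj().T)           # H = H† verified at runtime

w, v = np.linalg.eigh(H)                    # eigenvalues of a Hermitian matrix are real
t = 0.5
U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T   # e^(-iHt) via the spectral theorem

psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)     # normalized initial state
psi_t = U @ psi0                                    # Schrödinger evolution

assert np.allclose(U.conj().T @ U, np.eye(3))       # unitary: norm is preserved
assert np.isclose(np.linalg.norm(psi_t), 1.0)
```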
Core platform: The full quantum-game-theoretic and quantum-mechanical structure runs on classical compute. Measurement is argmax over Born projection values: deterministic, reproducible, no qubits required.
Born Rule measurement OPTIONAL — For quantum physics applications, an adapter can send COGNISYN's amplitudes to real quantum processors for |Ψ|² measurement. This adds probabilistic sampling from the quantum state — useful for applications where probability distributions matter (e.g., quantum computing materials). The adapter is backend-agnostic: any gate-based QPU.
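The core-platform measurement described above is a few lines of deterministic numpy; a sketch (hypothetical state, not production code):

```python
import numpy as np

def born_values(psi: np.ndarray) -> np.ndarray:
    """Born projection values |psi_i|^2 of a complex state (normalized first)."""
    psi = psi / np.linalg.norm(psi)
    return np.abs(psi) ** 2

def measure_argmax(psi: np.ndarray) -> int:
    """Core-platform 'measurement': deterministic argmax over Born values.
    Reproducible on classical compute; no sampling, no qubits."""
    return int(np.argmax(born_values(psi)))

psi = np.array([0.6, 0.8j, 0.0])      # |0.6|^2 = 0.36, |0.8j|^2 = 0.64
print(measure_argmax(psi))            # -> 1 (the dominant basis state)
```

The optional QPU adapter replaces the argmax with probabilistic sampling from the same |Ψ|² distribution; the amplitudes themselves are unchanged.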
The Hamiltonian: Htotal = Hquantum + Hclassical + Hcoupling + Hcare. Hcare is built from four components (E, H, S, G) — collective intelligence principles from experimental biology (Levin, 2022). The E/H/S/G structure is universal across every deployment. Below: its mapping for Layer 1 (Host Materials Discovery) of the 7-layer trapped ion stack application.
| Principle | Meaning | Layer 1 (Host Materials) Mapping |
|---|---|---|
| E | Energy-directed effort | Host quality |
| H | Homeostatic regulation | Coherence |
| S | Support for other agents | Optical transparency |
| G | Goal alignment | Synergy — E, H, S cooperating simultaneously |
Each of the four components is a multi-layered evaluation with element-specific corrections and literature-validated correlations. When all four score high simultaneously, a Care equilibrium emerges. Hcare reshapes the energy landscape so cooperation is the ground state — not something agents negotiate, but something the mathematics produces.
These mappings aren't arbitrary: E — effort measures how well each agent does its job. H — coherence is literally how long a quantum state holds together. S — optical compatibility is how you read and control the qubit; without it, other properties are stranded. G — only scores high when all three agents do well together.
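One way to see the structure: Htotal stays Hermitian because each term is Hermitian, and an Hcare term can reward states where all four component scores are simultaneously high. A toy numpy sketch (random stand-in matrices, a min-over-scores care bonus as a hypothetical illustration of "all four high at once"):

```python
import numpy as np

rng = np.random.default_rng(1)

def hermitian(n: int) -> np.ndarray:
    """Random Hermitian stand-in for one Hamiltonian term."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

n = 8
h_quantum, h_classical, h_coupling = (hermitian(n) for _ in range(3))

# Hypothetical Hcare: lower (better) energy on states where E, H, S, G
# are simultaneously high -- min over the four scores captures "all at once".
scores = rng.uniform(size=(n, 4))                    # per-state E/H/S/G in [0, 1]
care_bonus = scores.min(axis=1)                      # high only if ALL four are high
h_care = np.diag(-care_bonus).astype(complex)        # diagonal, hence Hermitian

h_total = h_quantum + h_classical + h_coupling + h_care   # sum of Hermitians

assert np.allclose(h_total, h_total.conj().T)        # H = H† still holds
print(np.linalg.eigvalsh(h_total))                   # real eigenvalues
```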
The Care operator's E/H/S/G structure is universal — only the scoring curves change per layer. The engine doesn't know which layer it's evaluating. Layer-blindness is what makes the engine domain-general by design. Below: Layers 1 and 2 of the 7-layer trapped ion stack — the same engine scales to all seven layers.
| Principle | Meaning | Layer 1: Host Materials | Layer 2: Crystal Prototyping |
|---|---|---|---|
| E | Energy-directed effort | Host quality — thermodynamic stability | Crystal quality — structural perfection |
| H | Homeostatic regulation | Coherence — quantum state balance against spin bath | Doping — lattice balance under Yb³⁺ |
| S | Support for others | Optical — photon access to qubit | Synthesizability — crystal growth feasibility |
| G | Goal alignment | Always the same: measures whether E, H, S are simultaneously high — cooperative goal achievement | Same as Layer 1: goal alignment is layer-invariant |
Agents are mathematical physics operators, not chatbots — they create rules, Htotal computes. Agents have no direct communication channel. Coordination emerges from two mathematical mechanisms: Hcare makes cooperation the energy ground state, so cooperative outcomes are dynamically favored; and the shared three-layer interference memory surfaces successful patterns between agents. Agents can read what worked. They cannot tell each other what to do.
Grammar verbs (examples)
Named mechanisms
The grammar isn't designed top-down — it emerges from agents discovering what works. Each rule accesses Htotal, a Hermitian Hamiltonian computed on real data.
Per the Care operator mapping above, Layer 1 maps to three components that must cooperate simultaneously: E (host quality), H (spin coherence), and S (optical transparency). Each is grounded in peer-reviewed physics literature — the same scientific basis high-throughput screening efforts like AFLOW and the materials-informatics community use to justify their proxies. The novelty isn't the proxies — it's finding where all three properties cooperate simultaneously, not just individually.
Literature Grounding
Fraval et al., PRL 2004: Removed Y-89 by isotopic enrichment → coherence extended from ms to 30 seconds. Proved nuclear spin bath is THE limiting factor.
Zhong et al., PRL 2018: CaWO₄ achieves 0.15s T₂ because Ca (96% I=0), W (86% I=0), O (99.96% I=0) — magnetically quiet.
Kindem et al., Nature 2020: YVO₄ has excellent optics but V-51 (I=7/2, 99.75%) destroys coherence. The nuclear spin IS the problem.
Thiel et al., J. Lumin. 2011: Comprehensive review establishing nuclear spin bath as dominant decoherence mechanism in RE-doped crystals at mK temperatures.
Thorpe et al., Phys. Rev. B 2018: Inhomogeneous linewidth primarily determined by structural quality.
Ortu et al., Nature Materials 2018: Structural perfection correlates with narrow optical linewidths.
Data provenance: Materials Project (CC BY 4.0) — canonical open inorganic-compound database. Layer 1 host candidates read directly from Materials Project; Htotal computes on real crystal structure data.
Every token IS a mathematical operation, not a description of one. What SQL did for databases, Baba is Quantum does for mathematical physics.
Deterministic operations that compile to hardware. Software → FPGA → SoC for real-time cooperative optimization.
Agents create new rules through strategic necessity — testing them against the Hamiltonian. The language grows with every discovery.
LLM agents apply the scientific method — composing Baba is Quantum expressions, evaluating against Htotal, consolidating patterns in three-layer memory.
1. OBSERVE
Agents compose expressions. Htotal returns real eigenvalues. Results accumulate in three-layer memory.
2. HYPOTHESIZE
Agents form hypotheses — the LLM's domain knowledge and reasoning guiding pattern-recognition over accumulated eigenvalue data in three-layer memory.
3. TEST
Agents create novel Baba is Quantum rules to test each hypothesis. Htotal evaluates — the hypothesis lives or dies on the eigenvalue.
4. CONCLUDE
Successful rules consolidate into the grammar, growing it at all positions — subjects, verbs, properties. The compositional language extends through discovery.
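The four-stage loop above can be sketched in a few lines. In this toy version (random proposals stand in for LLM-guided hypotheses; the Rayleigh quotient of a fixed Hermitian matrix stands in for Htotal evaluation; all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H_TOTAL = (a + a.conj().T) / 2                 # fixed Hermitian ground truth

def evaluate(rule: np.ndarray) -> float:
    """TEST: score a proposed rule by the Rayleigh quotient <psi|H|psi>.
    It is bounded below by H_total's smallest (ground-state) eigenvalue."""
    psi = rule / np.linalg.norm(rule)
    return float(np.real(psi.conj() @ H_TOTAL @ psi))

memory: list[float] = []       # OBSERVE: results accumulate
grammar = []                   # CONCLUDE: successful rules consolidate
best = np.inf

for _ in range(200):
    # HYPOTHESIZE: a random proposal stands in for LLM reasoning.
    rule = rng.normal(size=4) + 1j * rng.normal(size=4)
    score = evaluate(rule)     # the hypothesis lives or dies on this value
    memory.append(score)
    if score < best:           # lower energy wins; keep the rule
        best = score
        grammar.append(rule)

ground = np.linalg.eigvalsh(H_TOTAL)[0]
print(best, ground)            # best approaches the ground-state eigenvalue
assert best >= ground - 1e-9   # Rayleigh bound: never below the ground state
```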
At every stage: the LLM's creativity is the engine; Htotal's computation is the ground truth. The models are not retrained — no weight updates, no fine-tuning. Operational prompts calibrate agents per layer, scale, and domain, but the underlying LLMs stay unchanged. Learning lives in external memory and the growing grammar.
LLM agents don't just call mathematical functions — they learn which mathematics to apply and when, through a compositional learning loop grounded in real computation.
Previously discovered rules route instantly — checked before exploration. Successful patterns compound in strategic memory; failed strategies aren't wasted — destructive interference suppresses them without erasing. The grammar grows with every session.
Agents don't follow instructions — they discover which mathematical operations solve which problems. The agent's creativity is in hypothesis formation (writing rules). The truth is in the math (Htotal computes). This separation is why results are reproducible, auditable, and transferable across domains.
RL: Scalar reward, predefined actions
COGNISYN: Rich mathematical feedback — real eigenvalues from Htotal's full quantum-mechanical structure, not a scalar reward. Agents invent new rules via compositional grammar, not predefined action spaces. Three-layer interference memory — no catastrophic forgetting.
ML: Gradient descent, training/inference split
COGNISYN: No gradients — Htotal computes directly. No training split — learns during operation. Compositional rules transfer across domains. Full audit trail, not a black box.
RLHF: Human labels, weight adjustment
COGNISYN: Htotal labels what's good via eigenvalue verification — no human labels, no weight updates. LLMs bring training-informed hypothesis formation; the Hamiltonian verifies. No hallucination of results.
Not ephemeral sessions — learning that persists. LLM weights stay frozen. Learning lives in external memory consolidated through interference.
"What happened" — experiences with amplitudes + phases
↓ Constructive: matching patterns reinforce ↓
"What works" — successful patterns amplified
↓ Destructive: conflicting patterns cancel ↓
"What it means" — only coherent abstractions persist
Learning: patterns amplify
No Catastrophic Forgetting: conflicts cancel amplitude, not memory
Generalization: only consistent abstractions survive
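The interference mechanics are simple complex arithmetic; a single-layer sketch (hypothetical pattern names, not COGNISYN's memory schema):

```python
import cmath
from collections import defaultdict

# Each pattern accumulates a complex amplitude. Aligned phases add
# constructively; opposite phases cancel, suppressing the pattern
# WITHOUT deleting its entry -- no catastrophic forgetting.
memory = defaultdict(complex)

def record(pattern: str, phase: float, weight: float = 1.0) -> None:
    memory[pattern] += weight * cmath.exp(1j * phase)

record("low-spin-host", 0.0)            # success
record("low-spin-host", 0.0)            # matching pattern reinforces
record("high-spin-host", 0.0)           # apparent success...
record("high-spin-host", cmath.pi)      # ...contradicted: opposite phase

print(abs(memory["low-spin-host"]))     # -> 2.0  (amplified: "what works")
print(abs(memory["high-spin-host"]))    # -> ~0.0 (cancelled, not erased)
print("high-spin-host" in memory)       # -> True (the entry survives)
```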
Baba is Quantum rules trigger computation, not generation
Compositional grammar maps to operators
Mathematical operations on real data
Not generated — computed from physics
Anti-hallucination in action: Agents never see the DFT-computed properties that determine scores — only chemical formulas and mathematical state feedback. Every score comes from Htotal eigenvalue computation on real crystallographic data. The LLM proposes operations; the Hamiltonian computes results. The math can't be lobbied.
Creativity lives in the LLM. Truth lives in the eigenvalue.
Dimension reduction: Baba is Quantum rules project exponential quantum state spaces onto tractable subspaces. The Care operator C_λ further constrains search to synergistic equilibria — not exploring all 2^n states, just the ones where cooperation emerges.
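A toy numpy illustration of this kind of projection (the subspace rule here, "keep basis states with exactly two set bits", is purely hypothetical):

```python
import numpy as np

n = 10
dim = 2 ** n                              # full space: 2^10 = 1024 basis states
rng = np.random.default_rng(3)

# Hypothetical rule-defined subspace: only indices with exactly two set
# bits survive -- a tractable slice instead of all 2^n states.
subspace = [i for i in range(dim) if bin(i).count("1") == 2]

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

projected = np.zeros(dim, dtype=complex)
projected[subspace] = psi[subspace]       # projection: zero amplitude elsewhere
projected /= np.linalg.norm(projected)    # renormalize within the subspace

print(len(subspace), "of", dim)           # -> 45 of 1024
```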
Optional: Born Rule measurement. For quantum physics applications, an adapter sends amplitudes to quantum processors for |Ψ|² measurement — adding probabilistic sampling where probability distributions matter. The core platform needs no qubits.
For the full capability overview, see the home page. This section is scoped to Layer 1 (Host Materials Discovery) of the 7-layer trapped ion proof-of-concept stack.
No peer framework at the axis level — cross-property, cross-layer, cross-scale, cross-domain cooperation on classical compute is a new scaling axis for AI. The comparison below is scoped to Layer 1.
Within Layer 1, existing approaches address stability, property prediction, or electronic-structure simulation — not cooperation across competing properties.
| Player | Approach | Scope Relative to Layer 1 Cooperation |
|---|---|---|
| GNoME (DeepMind) | Graph neural networks for stable-crystal discovery at scale — 2.2M candidates, ~380K–520K within 1 meV/atom of the convex hull | Stability discovery on the convex hull — not cooperation across host quality, optical transparency, and spin coherence |
| GenMat (Comstock) | Physics-based AI platform (ZENOMDP / AGPI); classical ML + hybrid quantum methods for materials discovery | Property prediction and simulation; no cooperative-equilibria framework across competing properties |
| Microsoft Azure Quantum Elements | GA platform: AI-Accelerated DFT (100x–1000x), Generative Chemistry, Chemistry Copilot domain agent, Cloud-to-Lab robotic synthesis | Accelerated classical/quantum chemistry pipelines; no multi-agent cooperative equilibria across property axes |
| Phasecraft | Near-term quantum algorithms for Hamiltonian simulation (THRIFT, 10x efficiency); partners include Johnson Matthey and Oxford PV | Electronic-structure simulation of single materials; no cross-property cooperation framework |
| OTI Lumionics | Quantum-inspired classical algorithms (iQCC) on NVIDIA GPU; OLED materials the commercial anchor | Single-material ground-state simulation; no cooperation across competing properties or layers |
| COGNISYN | Quantum game theory on classical compute — multi-agent discovery of cooperative ground states across competing properties, coupled to Layers 2–7 via the scale coupling tensor | Cross-property, cross-layer cooperation is the native framework — not a post-hoc reconciliation |
GNoME finds the neighborhood. COGNISYN finds the house. These approaches are complementary — GNoME's expanding database of stable materials becomes a growing input to COGNISYN's cooperative evaluation at Layer 1.