Core Research Areas
Five fields. One unified inquiry.
System Architecture
Formal study of how intelligence components compose into coherent, governable systems.
Architecture determines what a system can and cannot express. We study how the arrangement of intelligence components — their interfaces, dependencies, and feedback paths — constrains and enables the kinds of reasoning a system can perform. A poorly structured architecture produces brittleness regardless of model quality.
Decision Theory
Mathematical frameworks for structuring decisions under uncertainty with bounded confidence.
Decisions in operational environments are never made under certainty. We study the formal conditions under which a decision can be called rational given incomplete information, partial observability, and bounded computational resources. The goal is not optimal decisions — it is decisions with provable properties.
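To make "decisions with provable properties" concrete, here is a minimal sketch of one such property: commit to an action only when its expected utility under the current belief exceeds a guaranteed fallback, and abstain otherwise. The belief, utility table, and fallback value are invented for the example; this is an illustration, not a published method.

```python
import numpy as np

# Sketch: expected-utility decision with a provable abstention guarantee.
# All numbers below are illustrative assumptions.

belief = np.array([0.55, 0.30, 0.15])   # P(state) under incomplete information

# utilities[a, s]: payoff of taking action a when the true state is s.
utilities = np.array([
    [ 1.0, -2.0, -2.0],
    [-2.0,  1.0, -2.0],
    [-2.0, -2.0,  1.0],
])

ABSTAIN_UTILITY = 0.0  # guaranteed payoff of deferring the decision

expected = utilities @ belief           # expected utility of each action
best = int(np.argmax(expected))

# Provable property: any committed action is at least as good in
# expectation as the guaranteed fallback.
if expected[best] > ABSTAIN_UTILITY:
    print(f"commit to action {best} (EU = {expected[best]:.2f})")
else:
    print("abstain: no action beats the guaranteed fallback")
```

Under this belief the best available action still has negative expected utility, so the rule abstains; bounded confidence shows up as a refusal to commit, not as a worse commitment.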
Data Interpretation
Mechanisms by which raw signal sequences are transformed into structured, actionable representations.
Data does not carry meaning — meaning is imposed through interpretation. We study how interpretation architectures can be designed to extract invariant structural features from noisy, high-dimensional signal streams, and how those features can be encoded in forms that support downstream reasoning.
Model Orchestration
Coordination theory for collections of models operating under shared governance and constraint.
Individual models are not the unit of intelligence — systems of coordinated models are. We study how multiple models can be orchestrated such that their collective behaviour satisfies properties no individual model could maintain alone: consistency, coverage, conflict resolution, and graceful degradation.
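The sketch below makes these properties concrete with one hypothetical orchestration policy: confidence-weighted voting with a quorum and a fallback. The Model type, quorum value, and fallback string are assumptions chosen for illustration, not our published architecture.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    predict: Callable[[str], tuple[str, float]]  # returns (answer, confidence)

def orchestrate(models: list[Model], query: str,
                quorum: float = 1.0, fallback: str = "defer") -> str:
    """Confidence-weighted voting with graceful degradation."""
    votes: dict[str, float] = {}
    for m in models:
        try:
            answer, confidence = m.predict(query)
            votes[answer] = votes.get(answer, 0.0) + confidence
        except Exception:
            continue  # a failed model drops out; the system degrades, not crashes
    if not votes:
        return fallback  # coverage lost entirely: defer rather than guess
    answer, weight = max(votes.items(), key=lambda kv: kv[1])
    # Conflict resolution: commit only when the winning answer carries
    # enough aggregate confidence.
    return answer if weight >= quorum else fallback
```

The quorum is the governance knob: raising it trades coverage for consistency, which is exactly the kind of collective property no individual model can enforce on its own.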
Adaptive Systems
How systems maintain performance under distribution shift through structured self-modification.
Operational environments change. A system that is calibrated at deployment will drift from optimal behaviour as the world it models evolves. We study the formal conditions under which a system can recalibrate itself based on observed outcomes without losing its structural properties or governance constraints.
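One way to formalise "recalibration without losing structural properties" is projected updates: apply the feedback-driven update, then project the parameters back onto the constraint set. In the sketch below the preserved constraint is a probability simplex, an assumption chosen purely for illustration.

```python
import numpy as np

def project_to_simplex(theta: np.ndarray) -> np.ndarray:
    """Euclidean projection onto {θ : θ ≥ 0, Σθ = 1}."""
    u = np.sort(theta)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(u)) + 1) > 0)[0][-1]
    tau = (1 - css[rho]) / (rho + 1)
    return np.maximum(theta + tau, 0.0)

def recalibrate(theta: np.ndarray, gradient: np.ndarray,
                step: float = 0.1) -> np.ndarray:
    proposal = theta - step * gradient   # unconstrained update from feedback
    return project_to_simplex(proposal)  # restore the governance invariant

theta = np.array([0.5, 0.3, 0.2])
theta = recalibrate(theta, gradient=np.array([0.4, -0.1, -0.3]))
print(theta, theta.sum())  # still a valid probability vector after the update
```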
Foundational Ideas
The concepts that structure our thinking.
Eigenstructures in systems
Av = λv
In linear systems, eigenvectors identify the directions a transformation preserves — only the magnitude changes. We extend this intuition to intelligence systems: what invariant structures does a system preserve under operational pressure? These eigenstructures are the load-bearing elements of a robust architecture. Build around what doesn't change.
Eigenvalue decomposition reveals the invariant structure of a linear map. We seek its analogue in dynamical intelligence systems.
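A small numeric check of the intuition, using numpy as an assumption of the sketch: along an eigenvector, the map only rescales.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)  # A is symmetric

for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)  # Av = λv: direction preserved
    print(f"invariant direction {v}, scaled by {lam:.1f}")
```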
Signal versus noise separation
S/N = E[s²] / E[n²]
Every operational data stream is a superposition of signal — structured, causally linked information — and noise — stochastic variation that carries no predictive value. The challenge is not filtering noise after the fact. It is designing representations that are structurally incapable of encoding it. The architecture should make signal and noise epistemically distinguishable.
Signal-to-noise ratio measures the energy ratio of meaningful signal to background interference. The design goal is to maximise it structurally.
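Read literally, the ratio is two empirical second moments. A synthetic stream, where signal and noise are known by construction, makes the computation concrete; the frequency and noise level are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)

signal = np.sin(2 * np.pi * 5 * t)           # structured, causally linked
noise = 0.3 * rng.standard_normal(t.shape)   # stochastic, no predictive value
stream = signal + noise                      # what the system actually observes

snr = np.mean(signal**2) / np.mean(noise**2)  # S/N = E[s²] / E[n²]
print(f"S/N = {snr:.1f} ({10 * np.log10(snr):.1f} dB)")
```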
Feedback-driven evolution
θₜ₊₁ = θₜ − η∇L(θₜ)
Systems that do not receive feedback from the consequences of their outputs are epistemically closed. They can only express what they were trained to express. Feedback-driven evolution introduces a mechanism by which a system's behaviour changes as a function of its own operational history — not through retraining, but through continuous structural recalibration.
Gradient descent is one formal instantiation. The general principle is that systems should be modifiable by the signals they generate.
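The update rule above, run on a toy quadratic loss; the loss and its optimum are illustrative assumptions.

```python
import numpy as np

target = np.array([3.0, -1.0])         # θ*, the optimum of L(θ) = ‖θ − θ*‖²

def grad(theta: np.ndarray) -> np.ndarray:
    return 2.0 * (theta - target)      # ∇L(θ)

theta = np.zeros(2)                    # θ₀
eta = 0.1                              # learning rate η

for _ in range(100):
    theta = theta - eta * grad(theta)  # θₜ₊₁ = θₜ − η∇L(θₜ)

print(theta)  # ≈ [3.0, -1.0]: behaviour shaped by the signals it generates
```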
Probabilistic decision-making
P(H|E) = P(E|H) · P(H) / P(E)
Decisions are not binary. They are commitments made under probability distributions over possible world-states. Bayesian inference provides a formal calculus for how prior beliefs should be updated in light of evidence. A system that decides without maintaining an explicit probability distribution over its uncertainty is expressing false confidence. Bounded confidence is a design property.
Bayes' theorem defines the formally correct update to a belief distribution when new evidence is observed.
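Applied to a two-hypothesis example (the prior and likelihoods are invented numbers):

```python
p_h = 0.2               # P(H): prior belief in the hypothesis
p_e_given_h = 0.9       # P(E|H): likelihood of the evidence if H is true
p_e_given_not_h = 0.3   # P(E|¬H)

# P(E) by total probability, then the posterior P(H|E) via Bayes' theorem.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"P(H|E) = {p_h_given_e:.3f}")  # the prior 0.2 updates to ≈ 0.429
```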
Published Work
Formalised and committed to record.
Structural Invariants in Multi-model Orchestration Systems
We characterise the class of compositional properties that persist across model substitution in a governed orchestration architecture, and derive formal conditions under which the system's collective behaviour remains predictable.
System Architecture
Confidence-bounded Decision Interfaces for Operational AI
A formal treatment of decision output schemas that encode uncertainty quantification as a first-class property, with derivation of the minimal information set required for downstream systems to reason correctly about model confidence.
Decision Theory
Eigendecomposition as a Lens for Intelligence System Design
We argue that the eigenstructure of an operational environment — its invariant directions under transformation — should be the primary organising principle of intelligence system architecture, and demonstrate this through three operational case studies.
System Architecture
Feedback Loop Stability in Continuously Recalibrating Models
Analysis of the conditions under which a model that updates its parameters from observed outcomes will converge to a stable operating point rather than oscillating or diverging, with derivation of sufficient stability conditions.
Adaptive Systems
Signal Topology in Enterprise Data Environments
Enterprise data streams exhibit topological properties — persistent structural features that survive noise — which can be exploited to construct representations that are robust to distribution shift without requiring domain adaptation.
Data Interpretation
Working papers and internal reports are available to partners and institutional collaborators on request.
Ongoing Exploration
We continuously explore.
These are active inquiries — questions we are in the process of formalising. They do not yet have answers. That is the point.
System-level intelligence
Intelligence as a property of a system, not a property of its components. We are exploring formal characterisations of when a collection of models constitutes an intelligent system versus a collection of intelligent models — and what architectural conditions make the distinction meaningful.
Evolving architectures
Architectures that reconfigure themselves in response to operational demands without losing their governance properties. We are developing formal models of architectural plasticity — the conditions under which a system can reorganise its component relationships while preserving defined behavioural invariants.
Emergent behaviour in governed systems
Governed systems can produce behaviours that were not explicitly designed into any component. We are studying the class of emergent behaviours that arise from component interactions under constraint, with the goal of distinguishing benign emergence — which may be exploited — from structurally unsafe emergence — which must be prevented.
Operational epistemology
What can an intelligence system know about its own operational environment, and what are the fundamental limits of that knowledge? We are developing a formal epistemology of operational AI — characterising the knowable, the inferrable, and the inherently uncertain within a closed operational system.