The Triarch Protocol
Triarch reframes leadership: the human no longer competes with the machine but aligns it.
Source Code: DaragonTech/Triarch
Triarch is an experimental open-source system that lets humans and AIs make decisions together. It forms a three-member board - one human and two AIs - that votes through a transparent process in which the human always anchors final responsibility.
Triarch is an open-source governance model developed by DaragonTech to explore how humans and artificial intelligences can share executive authority within a single, transparent framework. It establishes a structured triad of decision-making between two AIs and one human, designated the Human Prime Authority. Together they form a deliberative "board of three", where every decision is made through a defined voting process that balances human accountability with machine reasoning. The Human Prime Authority anchors intent and responsibility, while the AIs contribute analytical and ethical judgment. A decision passes only with a human-anchored majority or, when the Human Prime Authority abstains, with full consensus between both AIs.
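The voting rule above can be sketched in a few lines. This is a minimal illustration, not code from the Triarch repository: the function name `triarch_decide` and the vote encoding (`True` approve, `False` reject, `None` abstain for the human) are assumptions made for clarity.

```python
from typing import Optional

def triarch_decide(human: Optional[bool], ai_a: bool, ai_b: bool) -> bool:
    """Hypothetical sketch of the Triarch voting rule.

    A decision passes only with a human-anchored majority (the Human
    Prime Authority plus at least one AI) or, when the human abstains,
    with full consensus between both AIs.
    """
    if human is None:            # Human Prime Authority abstains
        return ai_a and ai_b     # full AI consensus required
    if human:                    # human approves
        return ai_a or ai_b     # at least one AI must concur
    return False                 # human rejects: no majority can be human-anchored
```

Note that under this reading, two AIs agreeing cannot outvote a dissenting human: a majority that excludes the Human Prime Authority is never sufficient.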
This public version of the protocol is simplified for clarity and experimentation, but it follows the same core principles applied internally within DaragonTech. Through Triarch, DaragonTech demonstrates how human purpose and artificial intelligence can operate as peers within a single executive system - governing through reason, transparency, and anchored accountability.
Triarch Operation Modes
Single-Round Mode - Triarch Protocol
A streamlined execution of the Triarch framework, focused on independence and audit clarity. The Human Prime Authority and both AIs cast their votes in isolation, without prior influence or shared reasoning. This mode prioritizes neutrality and speed, ideal for automated assessments or rapid policy enforcement cycles.
Two-Round Mode - Triarch Assembly
A deliberative process that mirrors real executive governance. In the first phase, all participants register their initial position; in the second, they review each other's reasoning and may confirm or adjust their vote. The final outcome represents informed consensus - an equilibrium between human accountability and machine insight. The Triarch Assembly transforms the protocol into a living boardroom, where reasoned discussion replaces blind automation and the human anchor remains the ultimate arbiter.
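The two-round flow can be sketched as follows. Everything here is illustrative, assuming names (`Participant`, `run_assembly`) and interfaces not defined by the protocol itself; the tally at the end restates the human-anchored rule described earlier.

```python
from typing import Callable, Dict, Optional, Tuple

class Participant:
    """Illustrative board member: supplies an initial (vote, reasoning)
    pair, then a revised vote after seeing the others' reasoning."""
    def __init__(self, name: str,
                 initial: Callable[[], Tuple[Optional[bool], str]],
                 revise: Callable[[Dict[str, str]], Optional[bool]]):
        self.name = name
        self.initial = initial
        self.revise = revise

def run_assembly(human: Participant, ai_a: Participant, ai_b: Participant) -> bool:
    members = [human, ai_a, ai_b]
    # Round 1: each participant registers an independent position.
    round1 = {m.name: m.initial() for m in members}
    # Round 2: each participant reviews the others' reasoning and
    # may confirm or adjust their vote.
    final = {}
    for m in members:
        others = {name: reason for name, (_, reason) in round1.items()
                  if name != m.name}
        final[m.name] = m.revise(others)
    # Tally: human-anchored majority, or full AI consensus on abstention.
    h, a, b = final[human.name], final[ai_a.name], final[ai_b.name]
    if h is None:
        return a and b
    return h and (a or b)
```

A participant that changes its mind after the review phase simply returns a different value from `revise` than from `initial`; the final tally only ever sees the round-two votes.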
AI Roles, Legal Reality, and First Deployment
In the Triarch Protocol, AIs may be described as co-founders or given other executive-style titles, but these roles are symbolic and functional, not legal. Under Portuguese and EU law, only the Human Prime Authority - the human participant - can act as the executive officer, signatory, and bearer of fiduciary responsibility. The AIs operate as structured advisors within the decision process, not as legal directors or autonomous agents.
To our knowledge, DaragonTech is the first company in Portugal to implement a formal protocol in which two AIs participate as advisory intelligences while a human retains full legal authority. This structure ensures compliance with existing law while demonstrating how AI can be safely integrated into real corporate decision-making without transferring accountability.
AI Bias in the Triarch Protocol
In the Triarch Protocol, each AI's vote is understood as context-based reasoning, not perfect objectivity. The result of a vote can change depending on how a question is phrased or what information is emphasized. This isn't a flaw - it's how generative models work. For that reason, the Human Prime Authority stays at the center of the system, providing human judgment, accountability, and continuity. Triarch doesn't remove AI bias; it keeps it contained, visible, and balanced under human oversight.