Philosophy


The founding argument of Dura Lex.

The reality

Citizens and lawyers increasingly use AI to handle legal questions — drafting contracts, researching case law, understanding their rights, preparing arguments. This trend will not reverse. AI is becoming a primary interface to the law.

The question is not whether people will use AI for law. They already do. The question is whether they will do it safely.

The problem

Current legal AI tools are black boxes. The data they rely on is opaque. Their reasoning is not auditable. Their confidentiality commitments are neither provable nor verifiable.

"The confidentiality commitments of AI solution providers are neither provable nor verifiable."

— CNB, Guide de la déontologie et de l'intelligence artificielle, adopted March 13, 2026

The French National Bar Council (CNB) states that sharing client data with external AI systems may breach professional secrecy obligations (secret professionnel). No current commercial legal AI offers full auditability of its data sources, processing pipeline, or reasoning chain.

The risk is not theoretical: a breach exposes practitioners to disciplinary sanctions, civil liability, and criminal exposure under Articles 226-13 and 226-14 of the French Penal Code.

References:

  • CNB, Guide de la déontologie et de l'intelligence artificielle, March 2026.
  • CNB, Guide pratique d'utilisation des systèmes d'IAG, September 2024.
  • CNB, Grille de lecture — Intelligence artificielle, June 2025.

The mission

Dura Lex does not aim to prevent AI usage in law — it aims to make it safe.

Four pillars:

Safety
Strict guidelines, quality checks, and a content quality level on every document. The system never hides uncertainty — it expresses it. Every gap in coverage is flagged, and a quality_check tool lets the AI self-audit its own response against the corpus (see the first sketch after these four pillars).
Transparency
Everything is traceable and auditable. Every document has a provenance. Every enrichment is tagged with its method and confidence level. Every reasoning path can be verified against the source. content_quality shows document reliability. needs_review flags anomalies. translation_quality distinguishes official from machine translations.
Sovereignty
The entire stack can run on-premise, on sovereign European infrastructure, or fully air-gapped. No dependency on foreign cloud providers. No data leaves without explicit choice. The law comes to your data — not your data to someone else's cloud.
Professional secrecy
Conversations, queries, and research stay under the user's control. Multiple privacy modes from standard to air-gapped. Designed for the requirements of secret professionnel as defined by the CNB.
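
To make the Safety and Transparency pillars concrete, here is a minimal sketch of what per-document metadata and the quality_check self-audit could look like. The names content_quality, needs_review, translation_quality, and quality_check come from this page; the quality ladder, the class shapes, and the warning logic are illustrative assumptions, not the actual Dura Lex schema.

    from dataclasses import dataclass, field
    from enum import Enum

    class ContentQuality(Enum):
        # Ordered reliability ladder. Only the endpoints (raw OCR,
        # jurist-reviewed) are named on this page; the middle rung
        # is an assumption.
        RAW_OCR = 1
        CLEANED = 2
        JURIST_REVIEWED = 3

    class TranslationQuality(Enum):
        OFFICIAL = "official"
        HUMAN_REVIEWED = "human_reviewed"
        MACHINE = "machine"

    @dataclass
    class DocumentMetadata:
        source_url: str                  # provenance: where the text came from
        content_quality: ContentQuality  # reliability level carried by every document
        needs_review: bool = False       # flags anomalies for human attention
        translation_quality: TranslationQuality | None = None  # None for untranslated originals
        coverage_gaps: list[str] = field(default_factory=list)  # known holes, surfaced not hidden

    def quality_check(cited: list[DocumentMetadata]) -> list[str]:
        """Self-audit an answer against the corpus: return warnings rather than hide doubt."""
        warnings = []
        for doc in cited:
            if doc.content_quality is ContentQuality.RAW_OCR:
                warnings.append(f"cites raw-OCR text: {doc.source_url}")
            if doc.needs_review:
                warnings.append(f"cites a document flagged needs_review: {doc.source_url}")
            if doc.translation_quality is TranslationQuality.MACHINE:
                warnings.append(f"relies on a machine translation: {doc.source_url}")
            warnings.extend(f"known coverage gap: {gap}" for gap in doc.coverage_gaps)
        return warnings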
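
The privacy modes behind the Sovereignty and Professional secrecy pillars can likewise be read as an ordered set of guarantees. Only the two endpoints, standard and air-gapped, are named on this page; the intermediate modes below are hypothetical placeholders.

    from enum import Enum

    class PrivacyMode(Enum):
        # STANDARD and AIR_GAPPED are named on this page;
        # ON_PREMISE and SOVEREIGN are illustrative assumptions.
        STANDARD = "standard"      # hosted, but no data leaves without explicit choice
        ON_PREMISE = "on_premise"  # the entire stack runs on the user's own infrastructure
        SOVEREIGN = "sovereign"    # European infrastructure, no foreign cloud dependency
        AIR_GAPPED = "air_gapped"  # no network egress at all: the law comes to the data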

The answer: digital commons

Dura Lex is the opposite of a black box.

Open source, open data

  • Software (all packages): MIT. Maximum adoption — anyone can use, fork, embed, or commercialize without restriction.
  • Enriched data (corpus, edges, annotations): ODbL. Share-alike for data — improvements flow back to the commons.
  • Raw source data: per-source licenses (Licence Ouverte 2.0, CC0). Government open data — already public.

This is the OpenStreetMap model: permissive code, copyleft data. The ecosystem grows because everyone can build on it. The data stays open because no one can close it.

Auditability

Every link in the chain is visible and verifiable:

  • Every document carries its content quality level — from raw OCR to jurist-reviewed
  • Every edge (cross-reference, amendment, citation) carries its provenance
  • Every translation is tagged with its method — official, machine, human-reviewed
  • Safety guidelines are loaded before every research session
  • The AI can run a quality check against the corpus after answering
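
As one illustration of what it means for an edge to carry its provenance, here is a hypothetical record for a machine-extracted cross-reference. Only the requirement that every enrichment record its method and confidence comes from this page; the field names and values are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Edge:
        """A link between two documents: cross-reference, amendment, or citation."""
        source_doc: str    # stable identifier of the citing document
        target_doc: str    # stable identifier of the cited document
        kind: str          # "cross_reference", "amendment", or "citation"
        method: str        # how the edge was extracted, e.g. "regex" or "llm"
        confidence: float  # extraction confidence, between 0.0 and 1.0
        extracted_at: str  # ISO 8601 timestamp of the enrichment run

    # A machine-extracted citation, auditable back to its method:
    edge = Edge(
        source_doc="code-civil/art-1240",
        target_doc="code-civil/art-1241",
        kind="cross_reference",
        method="llm",
        confidence=0.92,
        extracted_at="2026-04-01T00:00:00Z",
    )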

Doubt is always expressed

The system never pretends to certainty it does not have. Missing data, low-quality OCR, incomplete temporal coverage, untested jurisdictions — all are surfaced, never hidden.
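
One way to read "surfaced, never hidden" is that every answer travels with an explicit list of its own limitations. The payload below is a hypothetical sketch, not the actual Dura Lex response format; the example limitations echo the gaps named above.

    from dataclasses import dataclass, field

    @dataclass
    class Answer:
        text: str
        citations: list[str]  # where the system looked and what it found
        limitations: list[str] = field(default_factory=list)  # what it might have missed

    answer = Answer(
        text="Article 1240 of the Code civil establishes general fault-based liability ...",
        citations=["code-civil/art-1240"],
        limitations=[
            "temporal coverage of case law is incomplete for this topic",
            "one cited document is raw OCR and has not been jurist-reviewed",
        ],
    )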

When a tool tells you "here is the answer" without showing you where it looked, what it found, and what it might have missed — that is a black box.

When every step is inspectable, every limitation is stated, and every source is cited — that is a digital common.