
How It Works

Architecture

Axiomatic is composed of three layers:

CLI (user interface)
 └── Core (agent loop, tools, caching)
      └── LLM Provider (Anthropic, OpenAI)

The CLI parses commands, loads configuration, discovers test files, and renders output. It delegates all analysis work to the Core.

The Core orchestrates the agent loop. For each test, it constructs a system prompt from the condition, provides the agent with tools, and manages the conversation until the agent submits a verdict.

The LLM Provider layer handles API communication with the configured provider (Anthropic or OpenAI), including authentication, request formatting, and response parsing.
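The layer boundaries above can be sketched as a small TypeScript program. This is an illustrative outline only — the interface and class names below are hypothetical, not Axiomatic's actual API.

```typescript
// Hypothetical sketch of the three-layer split; all names are illustrative.

// LLM Provider layer: API communication with the configured provider.
interface LLMProvider {
  complete(systemPrompt: string, userMessage: string): Promise<string>;
}

// Core layer: orchestrates the agent loop for a single test.
class Core {
  constructor(private provider: LLMProvider) {}
  async runTest(condition: string): Promise<string> {
    const systemPrompt = `Verify this condition: ${condition}`;
    // The real loop would iterate tool calls; one round is shown here.
    return this.provider.complete(systemPrompt, "begin analysis");
  }
}

// CLI layer: parses arguments, then delegates all analysis to the Core.
async function cliMain(core: Core, condition: string): Promise<void> {
  const verdict = await core.runTest(condition);
  console.log(verdict);
}

// Demo with a stubbed provider standing in for Anthropic/OpenAI.
const stub: LLMProvider = {
  complete: async () => "pass",
};
cliMain(new Core(stub), "no eval() usage");
```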

The Agent Loop

When Axiomatic runs a test, the following sequence occurs:

  1. System prompt - the Core builds a system prompt that includes the condition text, the file scope (the on globs), and instructions for how to explore the codebase and submit findings.

  2. Tool use - the agent calls tools to read files, search for patterns, and navigate the project structure. Each tool call returns results that the agent uses to build its understanding.

  3. Iteration - the agent continues calling tools and reasoning until it has gathered enough evidence. Most tests complete in 3 to 10 tool-call rounds.

  4. Verdict - the agent calls submit_verdict with its conclusion: pass or fail, a confidence score, and a list of violations (if any) with file paths and line numbers.

You can watch the agent's reasoning in real time with axm run --verbose.
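The verdict in step 4 can be modeled as a small record. The field names below are illustrative — the source only describes what the verdict contains (pass/fail, a confidence score, and violations with file paths and line numbers), not its exact shape.

```typescript
// Hypothetical shape of a submit_verdict payload; field names are illustrative.
interface Violation {
  filePath: string; // where the condition is violated
  line: number;     // line number of the violation
  note: string;     // short explanation of the finding
}

interface Verdict {
  result: "pass" | "fail";
  confidence: number;      // e.g. 0.0 to 1.0
  violations: Violation[]; // empty when the test passes
}

const example: Verdict = {
  result: "fail",
  confidence: 0.92,
  violations: [
    { filePath: "src/db/users.ts", line: 42, note: "string-concatenated SQL" },
  ],
};
console.log(example.result, example.violations.length);
```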

Agent Tools

The agent has access to these sandboxed, read-only tools during analysis:

Tool             Description
read_file        Read the contents of a file by path
glob             Find files matching a glob pattern
grep             Search file contents using regex patterns
list_dir         List the contents of a directory
tree             Show a recursive directory tree
submit_verdict   Submit the final pass/fail verdict with evidence

The agent has read-only access and cannot modify files. It uses these tools strategically: typically starting with glob or tree to understand the project structure, then grep to find patterns, and read_file to examine specific code in detail.
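That typical investigation order — structure first, then search, then detail — can be sketched as a sequence over the read-only toolset. This is an illustrative outline, not the real agent loop.

```typescript
// Illustrative outline of the sandboxed, read-only toolset.
type ToolName =
  | "read_file"
  | "glob"
  | "grep"
  | "list_dir"
  | "tree"
  | "submit_verdict";

// A typical investigation: structure first, then search, then detail.
const typicalSequence: ToolName[] = [
  "tree",           // understand the project structure
  "grep",           // find candidate patterns
  "read_file",      // examine specific code in detail
  "submit_verdict", // conclude with evidence
];

// No write tools exist in the type, so mutation is ruled out by construction.
console.log(typicalSequence.join(" -> "));
```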

How the on Field Guides the Agent

The on globs serve as entry points. The agent is told to start its investigation with these files, but it can read any file in the repository from there. This means a test scoped to src/api/**/*.ts can still follow imports to check how a utility function in src/lib/auth.ts works.

Agent Memory

The agent maintains navigation plans — persistent memory of which files matter and how to verify each condition. This is the key to Axiomatic's cost efficiency and accuracy.

How It Works

First run: The agent thoroughly explores your codebase, discovering relevant files and developing a verification strategy for each condition.

Subsequent runs: The agent loads its navigation plan and goes directly to the relevant files, skipping cold-start exploration. This significantly reduces token usage, cost, and runtime.

Shared Agent Memory

When authenticated, navigation plans are automatically synced to the cloud and shared across your team. A teammate running the same test on a fresh clone — or a CI runner — benefits from navigation plans built during previous runs, skipping exploration entirely.

Plans are matched by model: a navigation plan created with Claude Sonnet is available to all team members using Claude Sonnet. Shared agent memory is available on both Free and Cloud tiers.
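Matching by model can be thought of as a lookup keyed on both the test and the model. A minimal sketch, assuming a hypothetical key of (condition, model) — the actual matching scheme is not specified in the source:

```typescript
// Minimal sketch: shared navigation plans keyed by condition + model.
// A plan built with one model is only served to runs using the same model.
const plans = new Map<string, string>();

function planKey(condition: string, model: string): string {
  return `${model}::${condition}`;
}

plans.set(planKey("no raw SQL", "claude-sonnet"), "check src/db first");

// Same model: the plan is found. Different model: cold-start exploration.
console.log(plans.has(planKey("no raw SQL", "claude-sonnet"))); // true
console.log(plans.has(planKey("no raw SQL", "claude-opus")));   // false
```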

What Gets Recorded

Each navigation plan stores:

  1. Relevant files — the key files needed to verify a specific condition
  2. Verification strategy — how the agent approached verification
  3. Key findings — important observations from the investigation
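The three recorded items map naturally onto a small record. The field names here are illustrative, not the actual storage schema.

```typescript
// Hypothetical navigation-plan record; field names are illustrative.
interface NavigationPlan {
  relevantFiles: string[];      // key files needed to verify the condition
  verificationStrategy: string; // how the agent approached verification
  keyFindings: string[];        // important observations from the run
}

const plan: NavigationPlan = {
  relevantFiles: ["src/db/pool.ts", "src/db/users.ts"],
  verificationStrategy: "grep for query builders, then read call sites",
  keyFindings: ["all queries go through a single pool.query wrapper"],
};
console.log(plan.relevantFiles.length);
```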

Storage

Agent memory is stored locally in .axiomatic/cache.db and synced to the cloud when authenticated. Local files are:

  • Auto-generated — you never need to create or edit them manually
  • Safe to gitignore — they are machine-specific and rebuild automatically

Caching

Axiomatic caches test results in a local SQLite database at .axiomatic/cache.db to avoid redundant LLM calls.

Cache Keys

Each cache entry is keyed on:

  • A hash of the test condition and configuration (provider, model, severity)
  • Content hashes of all files matching the on glob pattern
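A key built from those two parts can be sketched with Node's crypto module. This is a minimal illustration of content-addressed keying, not Axiomatic's actual hashing scheme; the function and field names are hypothetical.

```typescript
import { createHash } from "node:crypto";

// Minimal sketch of a content-addressed cache key; not the real scheme.
function sha256(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

function cacheKey(
  condition: string,
  config: { provider: string; model: string; severity: string },
  fileContents: Map<string, string>, // path -> contents of files matching `on`
): string {
  const configHash = sha256(JSON.stringify({ condition, ...config }));
  // Sort by path so the key is stable regardless of discovery order.
  const fileHashes = Array.from(fileContents.entries())
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([path, body]) => `${path}:${sha256(body)}`);
  return sha256([configHash, ...fileHashes].join("\n"));
}

const files = new Map([["src/db/users.ts", "export const q = 1;"]]);
const cfg = { provider: "anthropic", model: "claude-sonnet", severity: "error" };
const k1 = cacheKey("no raw SQL", cfg, files);

// Editing any matched file changes its content hash, invalidating the entry.
files.set("src/db/users.ts", "export const q = 2;");
const k2 = cacheKey("no raw SQL", cfg, files);
console.log(k1 !== k2); // true
```

Because the key hashes file contents rather than timestamps, the invalidation rules in the next section fall out automatically: any change to the condition, configuration, or matched files produces a different key.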

Invalidation

The cache is automatically invalidated when:

  • The condition text changes
  • Any file matching the on glob is modified, added, or deleted
  • The provider or model configuration changes

You can manually bypass the cache:

axm run --no-cache

Storage

The cache database is lightweight (typically under 1 MB) and should be added to .gitignore. The axm init command does this automatically.

Cost Management

LLM API calls have associated costs. Axiomatic provides several mechanisms to manage spending.

Token Budgets

Each test run has an implicit token budget based on the model. If the agent approaches the budget limit, it submits a verdict with the evidence gathered so far rather than continuing exploration.
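A budget guard of this kind can be sketched as a check inside the agent loop. The numbers and the 90% threshold below are made up for the sketch; the source does not specify actual budget values.

```typescript
// Illustrative budget guard: stop exploring and submit once near the limit.
// Threshold and token counts are invented for this sketch.
function runLoop(budgetTokens: number, costPerRound: number): string {
  let used = 0;
  let rounds = 0;
  while (used + costPerRound <= budgetTokens * 0.9) {
    // ...a tool call and a round of reasoning would happen here...
    used += costPerRound;
    rounds += 1;
  }
  // Near the budget: submit a verdict with the evidence gathered so far.
  return `submitted after ${rounds} rounds (${used} tokens)`;
}

console.log(runLoop(10_000, 2_000)); // submitted after 4 rounds (8000 tokens)
```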

Model Selection

Choose models based on the importance of the test:

Model           Cost per test   Best for
Claude Haiku    $0.01-0.05      Most tests, fast iteration
Claude Sonnet   $0.05-0.20      Standard accuracy tests
Claude Opus     $0.20+          Critical security and architecture audits

You can set model overrides per test:

# Use the best model for security-critical tests
condition: >
  All SQL queries use parameterized queries, never string concatenation.
on:
  - "src/db/**/*.ts"
severity: error
model: claude-opus-4-20250514

Caching

The cache is the most effective cost control. Tests that pass against unchanged code are served from the cache at zero cost, so keep caching enabled. Invalidation is content-based: entries stay valid until the condition, configuration, or matched files change.

Scoping with on

Narrow the on glob to limit how many files the agent needs to examine. Scoping to src/api/**/*.ts is cheaper than scanning the entire src/ tree.

Axiomatic vs. Other Tools

Axiomatic enforces standards that other tools can't check.

Linters (ESLint, golangci-lint, Pylint)

Linters operate on syntactic patterns within individual files. They are fast and catch surface-level issues, but they cannot reason about behavior across files. A linter can detect an eval() call, but only Axiomatic can reason across modules to confirm that unsanitized user input never reaches it.

Semgrep

Semgrep performs structural AST pattern matching - syntactically intelligent text searching. Axiomatic operates at a higher abstraction level, reasoning about behavioral intent regardless of implementation variations.

ArchUnit / ArchUnitNET

ArchUnit is the closest comparable tool, but it is limited to Java/C# and import-graph rules. Axiomatic works with any language and handles broader behavioral and security properties described in plain English.

Unit Tests

Unit tests validate individual function behavior; Axiomatic verifies cross-cutting properties spanning packages and files. They are complementary.

Code Review

Axiomatic automates the repeatable, mechanical aspects of code review - verifying consistent application of known architectural invariants. Human review remains essential for nuanced design decisions.

When to Use Which

The rule of thumb: if an existing tool can verify the property deterministically, instantly, and at zero cost, use that tool. Axiomatic is for standards that require understanding intent and behavior across files — things no syntactic tool can enforce.

Property                              Best tool                      Why?
Code formatting                       prettier, gofmt, black         Deterministic, auto-fixable
Banned syntax patterns                Semgrep, linter rules          Exact pattern matching, zero cost
Type correctness                      Compiler / type checker        Sound and complete
Single function behavior              Unit tests                     Deterministic, millisecond-fast
Import graph rules                    dependency-cruiser, ArchUnit   Purpose-built, exhaustive
Cross-cutting standards               Axiomatic                      Requires reasoning about behavior across files
Security standards spanning modules   Axiomatic                      Linters see syntax, not cross-module data flow
Architectural boundary enforcement    Axiomatic                      Requires understanding what code does, not just what it imports
