
Troubleshooting

Common Issues

"No API key found"

Set the appropriate environment variable for your provider:

# Anthropic (default)
export ANTHROPIC_API_KEY=sk-ant-...

# OpenAI
export OPENAI_API_KEY=sk-...

Or reference it in your axiomatic.yml config using ${VAR_NAME} syntax:

# axiomatic.yml
api_key: ${ANTHROPIC_API_KEY}

Run axm doctor to verify your setup.

"No tests found"

Ensure you have .yml files in your axiomatic/ directory. Create tests with:

# Initialize with an example test
axm init

# Or create one interactively
axm add

Tests Are Slow

  1. Narrow the on glob. src/api/**/*.ts is much faster than src/**/*.ts or omitting on entirely.
  2. Enable caching (on by default). Unchanged tests are served from cache at zero cost.
  3. Use a faster model. claude-sonnet-4-6 is significantly faster than claude-opus-4-20250514 for non-critical tests.
  4. Reduce concurrency if you are hitting rate limits: axm run --parallel 2
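Combining the first three points, a sped-up test might look like the sketch below. The `model` field name is an assumption based on the model advice above; adjust it to match your config schema.

```yaml
# axiomatic/api-error-handling.yml
condition: >
  Functions in src/api/ that call external services must
  handle timeout errors.
model: claude-sonnet-4-6      # faster model for a non-critical test
on:
  - "src/api/**/*.ts"         # narrow glob: only the files that matter
```

A narrow `on` glob also makes caching effective: the test re-runs only when a matching file changes.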

Agent Exhausts Iterations Without a Verdict

If the agent exhausts its iteration budget without reaching a verdict, the condition likely requires more exploration than the budget allows. Fix this by:

  1. Making the condition more specific. Instead of "the codebase handles errors correctly," write "functions calling external APIs in src/services/ must handle timeout errors."
  2. Narrowing the on glob. Give the agent a smaller set of files to examine.
  3. Increasing max_iterations in the test or config file if the condition genuinely requires deep exploration.
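Putting points 1 and 3 together, the vague condition might be rewritten like this (a sketch; field names follow the examples elsewhere in these docs):

```yaml
# Before: too broad -- the agent must explore the whole codebase
# condition: The codebase handles errors correctly.

# After: specific scope, with extra iteration budget if needed
condition: >
  Functions calling external APIs in src/services/ must
  handle timeout errors.
on:
  - "src/services/**/*.ts"
max_iterations: 30   # raise only if the condition genuinely needs deep exploration
```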

YAML Parse Errors

Run axm validate to lint all test files and surface syntax issues before running:

axm validate

Common YAML mistakes:

  • Missing condition field
  • Unquoted glob patterns containing * or { characters (always wrap in quotes)
  • Indentation errors in multi-line conditions (use > for folded blocks)
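For reference, a test that avoids all three mistakes looks like:

```yaml
condition: >          # folded block: safe for multi-line conditions
  All exported functions in src/utils/ must have
  unit tests covering their error paths.
on:
  - "src/utils/**/*.{ts,tsx}"   # quoted: bare * and { would trip the YAML parser
```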

Unexpected Results

  1. Inspect the agent's reasoning: axm show <test> displays the cached reasoning, referenced files, and evidence.
  2. Watch in real time: axm run --verbose shows every tool call and reasoning step.
  3. Check if the condition is falsifiable. Ask yourself: "Can I picture the agent pointing at a specific file and line saying 'this violates the condition'?" If not, the condition may be too vague.
  4. Check for compound conditions. If a condition tests two properties at once, the agent may focus on one and miss the other. Split them into separate tests.
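Point 4 in practice: a compound condition split into two focused tests. The file names and conditions below are illustrative.

```yaml
# Before: one test, two properties -- the agent may verify only one
# condition: All routes require authentication and log failed attempts.

# After: two separate, falsifiable tests

# axiomatic/routes-require-auth.yml
condition: >
  All routes in src/routes/ require authentication.

# axiomatic/routes-log-failures.yml
condition: >
  All routes in src/routes/ log failed authentication attempts.
```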

Rate Limiting

Axiomatic automatically retries with exponential backoff (5s, 15s, 30s, 60s) when hitting provider rate limits. If you are consistently rate-limited:

  • Reduce parallel test execution: axm run --parallel 2
  • Use caching to reduce total API calls
  • Spread tests across multiple CI runs using tags

Cache Not Working

Tests without on globs cannot be cached -- they run every time. Add file patterns to enable caching:

# Before: runs every time
condition: >
  All routes require authentication.

# After: cached when matching files are unchanged
condition: >
  All routes require authentication.
on:
  - "src/routes/**/*.ts"
  - "src/middleware/**/*.ts"

Run axm validate to identify tests missing on patterns.

Debugging

Verbose Output

See the complete agent reasoning, including every tool call:

axm run --verbose

Inspect Cached Results

View reasoning from previous runs without re-running:

# View reasoning for a specific test
axm show auth-required

# Show status change info
axm show auth-required --diff

Dry Run

Preview which tests would run and estimate costs without making API calls:

axm run --dry-run

Validate Configuration

Check your entire setup for issues:

# Full diagnostic
axm doctor

# Validate test files only
axm validate

FAQ

How much does it cost?

A typical test costs $0.01--0.05 with Claude Haiku, $0.05--0.20 with Sonnet, and $0.20+ with Opus. Caching means you only pay when relevant code changes. A project with 10 tests typically costs a few cents per CI run after the initial analysis.

Can the agent modify my code?

No. The agent has read-only access. It can read files, search with grep, and list directories, but it cannot write, delete, or execute anything.

Is my code sent to the LLM provider?

Yes. File contents are sent to the configured LLM provider (Anthropic or OpenAI) for analysis. The agent reads files on demand -- only files it needs to inspect are sent, not your entire codebase. Review your provider's data policies if this is a concern.

How is this different from a linter?

Linters match syntactic patterns within individual files. Axiomatic reasons about behavioral properties across files. A linter can catch eval() usage; Axiomatic can verify that user input never reaches eval() through any code path across multiple modules. See How It Works for a detailed comparison.
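As an illustration of a cross-file property no linter rule can express, a test for the eval() example might look like this (a hypothetical test file; field names follow the examples above):

```yaml
# axiomatic/no-user-input-in-eval.yml
condition: >
  User-supplied input must never reach eval(), directly or
  through intermediate functions, on any code path.
on:
  - "src/**/*.ts"
```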

Can I use Axiomatic with any programming language?

Yes. Since Axiomatic uses an LLM that understands code, it works with any language the model supports: TypeScript, JavaScript, Python, Go, Java, Rust, C#, Ruby, PHP, and more.

What happens when the LLM makes a mistake?

The confidence score indicates how certain the agent is. For high-stakes tests, use claude-opus-4-20250514 for the highest accuracy. You can also inspect the agent's reasoning with axm show <test> to verify its logic. Caching means that once a correct result is established, it persists until the code changes.

Can I run Axiomatic offline?

No. Axiomatic requires an internet connection to reach the LLM provider's API. The caching system minimizes API calls, but the initial analysis and any re-analysis after code changes require connectivity.

How do I handle flaky tests?

Use --retries to automatically re-run tests that produce inconsistent results:

axm run --retries 2

If a test is consistently flaky, the condition may be ambiguous. Make it more specific and falsifiable.
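For example, an ambiguous condition tightened into a falsifiable one (the specifics are illustrative):

```yaml
# Flaky: "good" is a judgment call, so verdicts vary between runs
# condition: Error messages are good.

# Stable: the agent can point at a specific violation
condition: >
  Error messages returned by handlers in src/api/ must include
  an error code and must not expose stack traces.
```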
