Describe your task in plain language — oxo-call fetches the tool's documentation, grounds the request with a built-in skill, and asks your LLM backend to generate the exact flags you need.
oxo-call wraps the full command-generation workflow — from fetching live documentation to grounding the LLM with expert skills — into a single two-word command.
Describe what you want in plain English. The model generates the right flags, grounded in the actual tool documentation.
Grabs --help output and optionally fetches remote docs or man pages. Works even if the tool isn't installed locally.
Preview the exact command that would be executed before committing. Never accidentally overwrite files again.
Expert knowledge for 159 tools baked in as skill files — bioinformatics, HPC schedulers, containers, and more — with real examples and common pitfalls ready for practical use.
Run .oxo.toml pipelines natively — DAG parallelism, wildcard expansion, output caching. Export to Snakemake or Nextflow for HPC compatibility.
GitHub Copilot (default), OpenAI, Anthropic, or a local Ollama instance — switch with one config key.
Every run is logged as JSONL with exit code, timestamp, and generated command. Filter by tool for quick lookup.
Tool help is fetched and cached automatically on first use. Enrich with remote URLs, local files, or directories via oxo-call docs add.
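Enriching the documentation cache might look like the following sketch. The argument order and the sample URL are illustrative assumptions based on the description above, not taken from the official CLI reference:

```shell
# Illustrative usage of `oxo-call docs add` — argument order is an assumption
oxo-call docs add samtools https://www.htslib.org/doc/samtools.html
oxo-call docs add bwa ./notes/bwa-tips.md
```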
Register workstations and HPC clusters via SSH. Generate commands for remote execution with login-node safety checks and scheduler integration (Slurm, PBS, SGE, LSF).
Built-in skills for Slurm, PBS, SGE, LSF, HTCondor, and Kubernetes — job templates, resource queries, and array job patterns for quick HPC script generation.
Save named shell shortcuts (oxo-call job add), attach cron schedules, track run history and exit codes, and generate jobs from plain-English descriptions via the LLM. 25+ built-in templates for common ops, HPC, and bioinformatics tasks.
Use --var KEY=VALUE for custom placeholders, {item} for batch input, and -j N for parallel execution. Process hundreds of files with a single command.
Click an example to watch oxo-call ground the task in documentation, load the matching skill, and produce the exact command arguments.
Each built-in skill encodes domain-specific concepts, common pitfalls, and
worked examples so the LLM is guided beyond a raw --help dump.
oxo-call job is your personal library of named shell commands.
Add any command once, then run it by name — locally or on a remote SSH server.
Track execution history, set cron schedules, and let the LLM generate commands
from plain-English descriptions.
Save any shell command with oxo-call job add. Attach descriptions and tags. Run it anywhere with oxo-call job run <name>.
Every run is recorded with timestamp, exit code, and duration. job status shows a live dashboard; job history shows the full run log.
Attach a cron expression with job schedule <name> "0 * * * *" and wire it to your crontab for automated execution.
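Wiring a scheduled job into your crontab might look like the following sketch; the job name, binary path, and log location are illustrative assumptions:

```shell
# Illustrative crontab entry (edit with `crontab -e`): run the saved job hourly.
# Paths and the job name are assumptions, not from the oxo-call docs.
0 * * * * /usr/local/bin/oxo-call job run disk-check >> "$HOME/.oxo-call/cron.log" 2>&1
```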
Describe a task in plain English — job generate "check disk and alert if over 90%" — and the LLM generates and saves the shell command for you.
Browse ready-made jobs for SLURM, PBS, LSF, Kubernetes, Docker, Git, and bioinformatics with oxo-call job list --builtin. Import with one command.
Run jobs locally or on any registered SSH server with --server. Integrates with the server subcommand for HPC cluster management.
# Import a built-in SLURM template and run it
$ oxo-call job list --builtin --tag slurm
$ oxo-call job import squeue-me
$ oxo-call job run squeue-me

# Generate a job from a plain-English description
$ oxo-call job generate "show Docker containers over 1 GB memory"

# View execution history and status
$ oxo-call job status
$ oxo-call job history squeue-me
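The run history described above is stored as JSONL (one JSON object per line), so ordinary line-oriented tools can filter it by tool name. A minimal sketch, assuming hypothetical field names (the actual schema may differ):

```shell
# Hypothetical run-history file — field names are illustrative, not the real schema
cat > history.jsonl <<'EOF'
{"timestamp":"2025-01-01T12:00:00Z","tool":"samtools","command":"samtools sort input.bam","exit_code":0}
{"timestamp":"2025-01-01T12:05:00Z","tool":"bwa","command":"bwa mem ref.fa reads.fq","exit_code":1}
EOF

# One record per line means plain grep works: filter entries by tool
grep '"tool":"samtools"' history.jsonl
```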
oxo-call workflow runs bioinformatics pipelines directly with a
lightweight built-in engine. No Snakemake, Nextflow, or Conda needed.
Export to either format for HPC environments that require them.
Run .oxo.toml workflows directly — no Snakemake or Nextflow required. DAG-based parallelism, wildcard expansion, output caching.
fastp QC → STAR alignment → featureCounts quantification → MultiQC report. Run natively or export to Snakemake/Nextflow DSL2.
fastp → BWA-MEM2 → GATK MarkDuplicates → BQSR → HaplotypeCaller GVCF. GATK best-practices out of the box.
fastp → Bowtie2 → Picard deduplication → blacklist filtering → MACS3 peak calling. Ready for downstream footprinting.
fastp → host read removal (Bowtie2) → Kraken2 classification → Bracken species abundance estimation.
Describe any pipeline in plain language and get a complete, runnable .oxo.toml. One prompt → one deployable workflow.
Every template can be exported with workflow export --to snakemake or --to nextflow for HPC environments.
Automatically skip steps whose outputs exist and are newer than inputs. Re-run only what changed — perfect for iterative pipeline development.
Steps with gather = true run once after all wildcard instances complete. Ideal for MultiQC reports and downstream aggregation tasks.
# List all built-in templates
$ oxo-call workflow list

# Preview the RNA-seq pipeline (no execution)
$ oxo-call workflow dry-run rnaseq

# Run a workflow (native engine — no Snakemake/Nextflow needed)
$ oxo-call workflow run my_rnaseq.toml

# Export to Snakemake for HPC submission
$ oxo-call workflow export wgs.toml --to snakemake -o Snakefile

# Generate a custom ChIP-seq workflow with LLM
$ oxo-call workflow generate \
    "ChIP-seq H3K27ac: QC → Bowtie2 → MACS3 peaks vs input control" \
    -o chipseq.toml
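A minimal .oxo.toml might look like the following sketch. The section and key names here are assumptions inferred from the features described above (wildcards, output caching, gather steps), not the official schema:

```toml
# Hypothetical .oxo.toml — section and key names are illustrative
[workflow]
name = "qc_demo"

[steps.trim]
command = "fastp -i {sample}.fastq -o {sample}.trimmed.fastq"
inputs  = ["{sample}.fastq"]
outputs = ["{sample}.trimmed.fastq"]  # cached: skipped when outputs are newer than inputs

[steps.report]
command = "multiqc ."
gather  = true  # runs once after all wildcard instances complete
```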
All results below are generated by oxo-bench, the standalone benchmarking
crate included in this repository. Every table is backed by a static CSV file in
docs/ — re-run cargo run -p oxo-bench -- export-csv to
refresh with your own hardware timings.
| Workflow | Tasks (expanded) | Parse (µs) | Expand (µs) | Cycle-free? |
|---|---|---|---|---|
| Loading bench_workflow.csv… | | | | |
Parse and expand timings are averaged over 500 runs on a single core.
Source: docs/bench_workflow.csv · generated by
cargo run -p oxo-bench -- export-csv
| Scenario ID | Assay | Samples | Read length (bp) | Reads / sample | Error rate | Total reads |
|---|---|---|---|---|---|---|
| Loading bench_scenarios.csv… | | | | | | |
All reads are generated deterministically with a fixed seed — re-running
oxo-bench simulate always produces byte-identical FASTQ files.
Source: docs/bench_scenarios.csv
| Category | Tool | Task description | Required patterns |
|---|---|---|---|
| Loading bench_eval_tasks.csv… | | | |
These 12 tasks form the canonical evaluation suite used to measure model
accuracy, format-validity rate, and self-consistency. Run
oxo-bench eval-models --list to explore the full suite.
Source: docs/bench_eval_tasks.csv
# Install the latest release from crates.io
cargo install oxo-call
# Download pre-built binary (Linux x86_64 example)
curl -LO https://github.com/Traitome/oxo-call/releases/latest/download/oxo-call-VERSION-x86_64-unknown-linux-gnu.tar.gz
tar xzf oxo-call-*-x86_64-unknown-linux-gnu.tar.gz
sudo mv oxo-call /usr/local/bin/

# Available for: Linux (x86_64/aarch64), macOS (Intel/Apple Silicon), Windows, WASM
# Clone and build from source (latest development version)
cargo install --git https://github.com/Traitome/oxo-call oxo-call
# GitHub Copilot (default — free for Copilot subscribers)
oxo-call config set llm.api_token YOUR_GITHUB_TOKEN

# OpenAI
oxo-call config set llm.provider openai
oxo-call config set llm.api_token sk-...

# Local Ollama (no token needed)
oxo-call config set llm.provider ollama
oxo-call config set llm.model llama3.2
# Preview — no execution
oxo-call dry-run samtools "sort input.bam by coordinate and output to sorted.bam"

# Execute immediately
oxo-call run bwa "align reads.fastq to reference.fa using 8 threads, output SAM"

# Confirm before running
oxo-call run --ask bcftools "call variants from my.bam against ref.fa"
# Variable substitution with --var
oxo-call run --var SAMPLE=NA12878 samtools "index {SAMPLE}.bam"
# → samtools index NA12878.bam

# Batch processing with {item} placeholder
oxo-call run samtools "flagstat {item}" \
    --input-items s1.bam,s2.bam,s3.bam --jobs 4

# Process files from a list with path placeholders
# {stem} = filename without extension, {ext} = extension
oxo-call run samtools "sort {item} -o {stem}.sorted.{ext}" \
    --input-list bam_files.txt --jobs 8

# Combine variables and batch input
oxo-call run bwa "mem -t {THREADS} {REF} {item} > {stem}.sam" \
    --var THREADS=16 --var REF=hg38.fa \
    --input-list samples.txt --jobs 4
# Download the .wasm binary from the GitHub Releases page, then:
wasmtime oxo-call.wasm -- dry-run samtools "sort input.bam by coordinate"
# Step 1: Copy this JSON and save it as license.oxo.json
{
  "schema": "oxo-call-license-v1",
  "license_id": "6548e181-e352-402a-ab72-4da51f49e7b5",
  "issued_to_org": "Public Academic Test License (any academic user)",
  "license_type": "academic",
  "scope": "org",
  "perpetual": true,
  "issued_at": "2026-03-12",
  "signature": "duKJcISYPdyZkw1PbyVil5zTjvLhAYsmbzRpH0n6eRYJET90p1b0rYiHO0cJ7IGR6NLEJWqkY1wBXUkfvUvECw=="
}

# Step 2: Move the file to the config directory
# Linux:
mkdir -p ~/.config/oxo-call && mv license.oxo.json ~/.config/oxo-call/
# macOS:
mkdir -p ~/Library/Application\ Support/io.traitome.oxo-call && mv license.oxo.json ~/Library/Application\ Support/io.traitome.oxo-call/
# Windows (PowerShell):
New-Item -ItemType Directory -Force -Path $env:APPDATA\oxo-call; Move-Item license.oxo.json $env:APPDATA\oxo-call\

# Step 3: Verify the license works
oxo-call license verify

# Step 4: Try your first command
oxo-call dry-run samtools "sort input.bam by coordinate"
The quick test license is for evaluation only: it lets you try oxo-call immediately without any setup. For continued use, academic users should request a free formal license, and commercial users should contact w_shixiang@163.com to purchase a license (USD 200 per organization).
One-time payment, perpetual license covering all employees and contractors within your organization.