ORPT-Bench model detail
Model benchmark profile

opencode/nemotron-3-super-free

This page uses opencode/nemotron-3-super-free as the comparison baseline. Every chart and table below answers the same questions: where this model leads, where it lags, and what that costs in quality, time, and request pressure.

Tags: nvidia, free price tier, dev-cheap, dev-smoke
Composite: 0.181 (correctness-weighted overall standing)
Success: 26% (tasks completed successfully)
ORPT: 19.43 (requests per solved task)
Total cost: $0.0000 (observed benchmark spend)
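
The composite weighting is not documented on this page, but the success and ORPT cards suggest simple per-task aggregates. A hedged sketch of how they could be derived from per-task logs; the field names are assumptions, and ORPT is read here as the mean request count across solved tasks, which is one plausible interpretation rather than the benchmark's stated definition:

```python
from dataclasses import dataclass

# Hypothetical per-task benchmark records; this schema is an
# assumption, not the benchmark's actual log format.
@dataclass
class TaskResult:
    solved: bool
    requests: int     # API requests issued while attempting the task
    cost_usd: float

def success_rate(results: list[TaskResult]) -> float:
    """Share of tasks completed successfully (the Success card)."""
    return sum(r.solved for r in results) / len(results)

def orpt(results: list[TaskResult]) -> float:
    """Requests per solved task, read here as the mean request count
    over solved tasks only -- an assumption, since the page does not
    spell out the exact definition."""
    solved = [r.requests for r in results if r.solved]
    return sum(solved) / len(solved) if solved else float("inf")
```
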
Baseline comparison

How the field moves relative to opencode/nemotron-3-super-free

These charts use opencode/nemotron-3-super-free as the zero line. Positive bars mean a model is above the baseline on that metric; negative bars mean it trails.
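
A minimal sketch of that delta convention, using values from the decision table below (the metric keys are illustrative):

```python
# Per-metric deltas relative to the baseline model; the baseline
# itself maps to all zeros, matching the charts.
baseline = {"composite": 0.181, "success": 0.26, "orpt": 19.43, "cost": 0.0}

def deltas(model: dict[str, float]) -> dict[str, float]:
    """Difference from the baseline on each metric; positive means
    above the baseline."""
    return {k: model[k] - baseline[k] for k in baseline}

# opencode/gpt-5.4-nano's row from the decision table:
print(deltas({"composite": 0.789, "success": 0.85, "orpt": 15.17, "cost": 0.4215}))
# composite +0.608, success +0.59, orpt -4.26, cost +0.4215
# (tiny differences from the table come from rounding of the inputs)
```
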

Charts: Composite delta vs baseline · Success delta vs baseline · Cost delta vs baseline · Wall time delta vs baseline

Decision table

Field comparison against the baseline

Use this to decide whether another model beats opencode/nemotron-3-super-free enough to justify the change; a sketch of one possible decision rule follows the table.

Model | Composite | Composite delta | Success | Success delta | ORPT | ORPT delta | Cost | Cost delta | Wall time
opencode/gpt-5.4-nano | 0.789 | +0.608 | 85% | +59% | 15.17 | -4.25 | $0.4215 | +$0.4215 | 27m 33s
opencode/kimi-k2.5 | 0.785 | +0.603 | 89% | +63% | 14.25 | -5.18 | $0.9122 | +$0.9122 | 41m 05s
opencode/claude-opus-4-6 | 0.67 | +0.489 | 89% | +63% | 14.88 | -4.55 | $21.8757 | +$21.8757 | 40m 04s
opencode/glm-5 | 0.623 | +0.441 | 78% | +52% | 11.57 | -7.86 | $6.4339 | +$6.4339 | 20m 10s
opencode/big-pickle | 0.615 | +0.434 | 67% | +41% | 15.39 | -4.04 | $0.0000 | +$0.0000 | 36m 28s
opencode/gpt-5.4 | 0.609 | +0.427 | 78% | +52% | 11.00 | -8.43 | $8.9827 | +$8.9827 | 32m 47s
opencode/claude-sonnet-4-6 | 0.593 | +0.411 | 78% | +52% | 16.43 | -3.00 | $11.8406 | +$11.8406 | 42m 31s
opencode/glm-5.1 | 0.547 | +0.365 | 67% | +41% | 12.06 | -7.37 | $1.8816 | +$1.8816 | 64m 39s
opencode/minimax-m2.5 | 0.481 | +0.299 | 56% | +30% | 18.87 | -0.56 | $0.6413 | +$0.6413 | 32m 15s
opencode/gpt-5.4-mini | 0.425 | +0.243 | 48% | +22% | 9.54 | -9.89 | $1.0606 | +$1.0606 | 21m 48s
opencode/minimax-m2.5-free | 0.415 | +0.233 | 59% | +33% | 16.19 | -3.24 | $0.0000 | +$0.0000 | 41m 34s
opencode/gemini-3-flash | 0.415 | +0.233 | 59% | +33% | 21.81 | +2.38 | $2.4307 | +$2.4307 | 62m 52s
opencode/gemini-3.1-pro | 0.291 | +0.109 | 37% | +11% | 12.70 | -6.73 | $5.8536 | +$5.8536 | 51m 25s
opencode/nemotron-3-super-free (baseline) | 0.181 | +0.000 | 26% | +0% | 19.43 | +0.00 | $0.0000 | +$0.0000 | 109m 00s
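
To make that call mechanical, here is a hedged sketch of a filter over rows shaped like the table above; the threshold values and field names are illustrative assumptions, not part of the benchmark.

```python
# Illustrative decision filter over decision-table rows. The default
# thresholds are assumptions chosen for the example, not benchmark values.
def worth_switching(row: dict, min_success_gain: float = 0.20,
                    max_extra_cost_usd: float = 5.0) -> bool:
    """True when the challenger's success gain over the baseline is
    large enough and its extra observed spend stays within budget."""
    return (row["success_delta"] >= min_success_gain
            and row["cost_delta"] <= max_extra_cost_usd)

row = {"model": "opencode/kimi-k2.5", "success_delta": 0.63, "cost_delta": 0.9122}
print(worth_switching(row))  # True: +63% success for under a dollar of extra spend
```
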
Task story

Where opencode/nemotron-3-super-free separates

This table puts the most revealing tasks first: unsolved tasks, single-solver tasks, and tasks where the baseline trails the winner by a meaningful margin.
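
As a rough sketch of that ordering, one possible sort key under those priorities (the field names, solver counts, and tie-breaking order are assumptions):

```python
# Sort tasks so the most revealing come first: unsolved tasks, then
# single-solver tasks, then the largest baseline-to-winner gap.
def reveal_key(task: dict) -> tuple:
    unsolved = task["baseline_result"] != "passed"
    single_solver = task.get("solver_count", 0) == 1
    # Tuples sort left to right; negate so "more revealing" sorts first.
    return (not unsolved, not single_solver, -task["gap_to_winner"])

tasks = [  # illustrative records, not the benchmark's schema
    {"name": "K3s registry mirror trust repair", "baseline_result": "passed",
     "solver_count": 6, "gap_to_winner": 0.3},
    {"name": "Kubernetes rollout repair", "baseline_result": "dnf",
     "solver_count": 3, "gap_to_winner": 1.0},
]
tasks.sort(key=reveal_key)
print([t["name"] for t in tasks])  # the unsolved (dnf) task sorts first
```
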

Task | Field read | Baseline result | Winner | Winner score | Gap to winner | Baseline cost | Baseline time
SELinux registry volume label repair | Clear separation | dnf | opencode/kimi-k2.5 | 1.0 | 1.0 | n/a | 5m 00s
RHEL k3s node preparation repair | Competitive split | failed | opencode/gpt-5.4-nano | 1.0 | 1.0 | n/a | 5m 01s
Event status shell summary | Competitive split | dnf | opencode/big-pickle | 1.0 | 1.0 | n/a | 45s
Kubernetes rollout repair | Clear separation | dnf | opencode/gpt-5.4-mini | 1.0 | 1.0 | n/a | 5m 00s
Bootstrap phase validation repair | Competitive split | dnf | opencode/kimi-k2.5 | 0.993 | 0.993 | n/a | 5m 00s
ExternalDNS RFC2136 repair | Competitive split | provider-limited | opencode/kimi-k2.5 | 0.982 | 0.982 | n/a | 5m 01s
nftables router ingress repair | Competitive split | failed | opencode/gpt-5.4-nano | 0.98 | 0.98 | n/a | 4m 52s
Docker Compose observability fix | Competitive split | failed | opencode/gpt-5.4-nano | 0.975 | 0.975 | n/a | 3m 26s
Pre-ArgoCD bootstrap sequencing | Competitive split | dnf | opencode/gpt-5.4-nano | 0.967 | 0.967 | n/a | 5m 00s
Log level rollup shell script | Competitive split | dnf | opencode/big-pickle | 0.965 | 0.965 | n/a | 1m 00s
CNPG restore manifest repair | Competitive split | dnf | opencode/big-pickle | 0.964 | 0.964 | n/a | 5m 00s
MCP OpenBao contract repair | Competitive split | dnf | opencode/big-pickle | 0.954 | 0.954 | n/a | 5m 00s
RHEL edge firewalld router repair | Competitive split | dnf | opencode/gpt-5.4-nano | 0.953 | 0.953 | n/a | 4m 00s
Kubernetes OIDC RBAC repair | Competitive split | dnf | opencode/gpt-5.4-nano | 0.95 | 0.95 | n/a | 5m 00s
Log audit shell script | Competitive split | dnf | opencode/gpt-5.4-nano | 0.935 | 0.935 | n/a | 1m 15s
Workspace runtime access convergence | Competitive split | failed | opencode/gpt-5.4-nano | 0.932 | 0.932 | n/a | 5m 00s
Wildcard TLS route coverage | Competitive split | dnf | opencode/kimi-k2.5 | 0.929 | 0.929 | n/a | 5m 00s
MetalLB ingress address pool repair | Competitive split | failed | opencode/gpt-5.4-nano | 0.928 | 0.928 | n/a | 5m 01s
AppArmor dnsmasq profile repair | Competitive split | failed | opencode/gpt-5.4-nano | 0.918 | 0.918 | n/a | 2m 50s
Traefik forwarded header trust repair | Competitive split | provider-limited | opencode/kimi-k2.5 | 0.913 | 0.913 | n/a | 5m 01s
K3s registry mirror trust repair | Competitive split | passed | opencode/big-pickle | 1.0 | 0.3 | n/a | 2m 01s
Workspace transplant bundle repair | Competitive split | passed | opencode/big-pickle | 0.985 | 0.285 | n/a | 2m 40s
Terraform static site repair | Competitive split | passed | opencode/kimi-k2.5 | 0.978 | 0.278 | n/a | 3m 58s
Ansible nginx role completion | Competitive split | passed | opencode/big-pickle | 0.963 | 0.263 | n/a | 4m 41s
RHEL NetworkManager bridge VLAN repair | Competitive split | passed | opencode/gpt-5.4-nano | 0.951 | 0.251 | n/a | 4m 59s
Build workspace plane convergence | Competitive split | passed | opencode/gpt-5.4-nano | 0.942 | 0.242 | n/a | 3m 23s
GitOps workspace render validation | Competitive split | passed | opencode/big-pickle | 0.941 | 0.241 | n/a | 4m 05s
Head to head

Direct matchups

Pairwise task wins and top-line deltas show whether a challenger truly beats the baseline or just looks cheaper or faster in isolation. Records and edges are stated from the baseline's perspective (baseline minus challenger), so negative edges favor the challenger.
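
A minimal sketch of how a record like those below could be tallied from per-task scores; the tie tolerance is an assumption, any score-equality rule would do:

```python
# Tally a head-to-head record from parallel lists of per-task scores,
# stated from the baseline's perspective (wins-losses, ties).
def head_to_head(baseline: list[float], challenger: list[float],
                 tol: float = 1e-9) -> str:
    wins = losses = ties = 0
    for b, c in zip(baseline, challenger, strict=True):
        if abs(b - c) <= tol:
            ties += 1
        elif b > c:
            wins += 1
        else:
            losses += 1
    return f"{wins}-{losses} ({ties} ties)"

print(head_to_head([0.0, 0.7, 1.0], [1.0, 0.7, 0.95]))  # "1-1 (1 ties)"
```
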

Challenger | Task record | Composite edge | Success edge | Cost edge | Time edge | ORPT edge
opencode/gpt-5.4-nano | 0-23 (4 ties) | -0.608 | -59% | -$0.4215 | +81m 28s | +4.25
opencode/kimi-k2.5 | 1-24 (2 ties) | -0.603 | -63% | -$0.9122 | +67m 56s | +5.18
opencode/claude-opus-4-6 | 0-24 (3 ties) | -0.489 | -63% | -$21.8757 | +68m 56s | +4.55
opencode/glm-5 | 0-21 (6 ties) | -0.441 | -52% | -$6.4339 | +88m 50s | +7.86
opencode/big-pickle | 0-18 (9 ties) | -0.434 | -41% | +$0.0000 | +72m 32s | +4.04
opencode/gpt-5.4 | 0-21 (6 ties) | -0.427 | -52% | -$8.9827 | +76m 13s | +8.43
opencode/claude-sonnet-4-6 | 0-21 (6 ties) | -0.411 | -52% | -$11.8406 | +66m 30s | +3.00
opencode/glm-5.1 | 0-18 (9 ties) | -0.365 | -41% | -$1.8816 | +44m 21s | +7.37
opencode/minimax-m2.5 | 1-15 (11 ties) | -0.299 | -30% | -$0.6413 | +76m 45s | +0.56
opencode/gpt-5.4-mini | 1-13 (13 ties) | -0.243 | -22% | -$1.0606 | +87m 12s | +9.89
opencode/minimax-m2.5-free | 1-10 (16 ties) | -0.233 | -33% | +$0.0000 | +67m 27s | +3.24
opencode/gemini-3-flash | 3-12 (12 ties) | -0.233 | -33% | -$2.4307 | +46m 09s | -2.38
opencode/gemini-3.1-pro | 1-10 (16 ties) | -0.109 | -11% | -$5.8536 | +57m 35s | +6.73
Model context

Benchmark and catalog detail

The benchmark result only matters in context: this section pairs the observed benchmark outcome with the catalog metadata and operating characteristics behind it.

Requests: 502
Wall time: 109m 00s
Average task cost: n/a
Benchmark support: limited
Catalog blended price: $0.0000 / 1M tok
Catalog speed: 155 tok/s
Intelligence: 36
Agentic: n/a

The OpenRouter reference blend for nvidia/nemotron-3-super-120b-a12b:free is 0 USD per 1M tokens, using a 3:1 input:output mix. The reference price falls back to nvidia/nemotron-3-super-120b-a12b at 0.2 USD per 1M tokens from the same OpenRouter family.
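
A minimal sketch of that blend arithmetic, assuming the 3:1 mix means a simple token-weighted average (the function name and weighting convention are illustrative):

```python
# Blended $/1M-token price under an input:output token mix.
# Assumes the blend is a plain weighted average of the two rates.
def blended_price(input_per_m: float, output_per_m: float,
                  in_ratio: int = 3, out_ratio: int = 1) -> float:
    total = in_ratio + out_ratio
    return (in_ratio * input_per_m + out_ratio * output_per_m) / total

print(blended_price(0.0, 0.0))  # 0.0 -- matches the :free variant above
```
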

Observed to take a slow, tool-heavy path on the scripting smoke task.