OpenJudge rl-reward

Install

Source · Clone the upstream repo
git clone https://github.com/agentscope-ai/OpenJudge

Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/agentscope-ai/OpenJudge "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/rl-reward" ~/.claude/skills/agentscope-ai-openjudge-rl-reward && rm -rf "$T"

Manifest: skills/rl-reward/SKILL.md

Source content

RL Reward Construction with OpenJudge

Build reward signals for reinforcement learning from human feedback (RLHF) and reinforcement learning from AI feedback (RLAIF) using the `openjudge` library.

When to Use This Skill

  • Building scalar rewards for GRPO / REINFORCE rollout scoring
  • Generating (chosen, rejected) preference pairs for DPO / IPO
  • Best-of-N candidate selection
  • Multi-dimensional reward shaping (correctness + safety + format)
  • Replacing or bootstrapping a reward model with LLM-as-judge

Step 1 — Choose Your Reward Strategy

Use this decision tree before writing any code:

RL Algorithm + Task type?
│
├── GRPO / REINFORCE — Verifiable task (math, code, structured output)
│   └── → POINTWISE  ✅  (FunctionGrader, exact score, zero LLM cost)
│
├── GRPO / REINFORCE — Subjective task (instruction following, dialogue, summarization)
│   └── → PAIRWISE TOURNAMENT  ✅  (compare each rollout vs all others in group,
│                                    reward = net win rate within group)
│
├── DPO / IPO / SLiC — need (chosen, rejected) pairs
│   └── → PAIRWISE  ✅  (two-way comparison, return winner/loser)
│
└── Best-of-N / reranking — rank N candidates
    └── → LISTWISE  ✅  (single call ranks all N at once)

Cost constraint?
│
├── Low budget
│   └── FunctionGrader (free) → pointwise; or pairwise with small judge model
│
├── Medium budget
│   ├── Pointwise: 2–3 LLM graders + WeightedSumAggregator
│   └── Pairwise tournament: 1 LLM judge, N*(N-1)/2 comparisons per group
│
└── High quality / no cost limit
    └── Pointwise voting (3–5 calls) or pairwise with strong judge + debiasing
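
To make the pairwise-tournament branch above concrete, here is a small, library-independent sketch of how a net-win-rate reward falls out of all-pairs comparisons. The `judge` callable is a stand-in for any pairwise grader; OpenJudge's `GRPOTournamentEvaluationStrategy` (shown later) does this for you.

from itertools import combinations

def tournament_rewards(responses: list[str], judge) -> list[float]:
    """Net win rate per response from all-pairs comparisons.

    judge(a, b) returns +1 if a wins, -1 if b wins, 0 for a tie.
    Reward = (wins - losses) / (N - 1), so each reward lands in [-1.0, 1.0].
    """
    n = len(responses)
    net = [0] * n
    for i, j in combinations(range(n), 2):  # N*(N-1)/2 comparisons
        outcome = judge(responses[i], responses[j])
        net[i] += outcome
        net[j] -= outcome
    return [score / (n - 1) for score in net]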

Sub-documents — Read When Relevant

| Topic | File | Read when… |
| --- | --- | --- |
| Pointwise multi-dim reward | pointwise.md | GRPO on verifiable tasks; multi-dimension scoring |
| Pairwise reward | pairwise.md | GRPO on subjective tasks (tournament); DPO/RLAIF preference pairs |

Read the relevant sub-document before writing any code.

Install

pip install py-openjudge

Strategy Comparison

| Strategy | Output | Reward signal | Typical use | Cost |
| --- | --- | --- | --- | --- |
| Pointwise | scalar per response | direct reward r(x, y) | GRPO on verifiable tasks, filtering | Low–Medium |
| Pairwise Tournament | net win rate per response | relative reward within group | GRPO on subjective tasks | Medium (N²/2 calls) |
| Pairwise | winner/loser pair | implicit preference y+ > y- | DPO, IPO, RLAIF preference data | Medium |
| Listwise | rank over N responses | ordinal reward / reranking | Best-of-N, reranking | Medium–High |
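
For the pointwise row on verifiable tasks, no LLM is needed at all. A minimal rule-based reward function might look like the sketch below; the `\boxed{}` answer convention is an assumption for illustration, and the exact `FunctionGrader` wiring lives in pointwise.md.

import re

def math_reward(response: str, gold_answer: str) -> float:
    """Exact-match reward for a verifiable math rollout: 1.0 or 0.0."""
    # Assumed answer convention: the rollout ends with \boxed{...}
    match = re.search(r"\\boxed\{([^}]*)\}", response)
    predicted = match.group(1).strip() if match else ""
    return 1.0 if predicted == gold_answer.strip() else 0.0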

Score Normalization

All graders return scores on different scales. Always normalize before feeding into RL:

def normalize(score: float, min_score: float, max_score: float) -> float:
    """Map [min_score, max_score] → [0.0, 1.0]."""
    if max_score == min_score:
        return 0.0
    return (score - min_score) / (max_score - min_score)

# LLM graders (common/*) return 1–5 → normalize to 0–1
reward = normalize(result.score, min_score=1, max_score=5)

# FunctionGrader / text graders already return 0–1 → no normalization needed
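
Many GRPO trainers additionally standardize rewards within each rollout group, so the group mean acts as a baseline. If your trainer does not do this for you, a library-independent sketch:

def group_normalize(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Standardize one GRPO rollout group to mean 0 / unit std."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]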

Evaluation Strategies

Evaluation strategies control how many times a grader is called and how results are aggregated. They are independent of the grader itself.

Choose Your Strategy

Grader type?
│
├── Deterministic (FunctionGrader, StringMatch, CodeExecution, etc.)
│   └── → Direct  (zero variance, no need for aggregation)
│
├── LLM grader — Pointwise scoring
│   │
│   ├── Budget limited / speed critical
│   │   └── → Direct  (accept variance, 1× cost)
│   │
│   ├── Discrete scores (1–5 integer, pass/fail, binary)
│   │   └── → Voting  (majority vote, robust to outliers, N× cost)
│   │
│   └── Continuous / fine-grained scores (need precise ranking)
│       └── → Average  (mean, preserves signal, N× cost)
│
└── LLM grader — Pairwise GRPO tournament
    └── → GRPOTournament  (all-pairs comparison, net win rate)
| Strategy | Aggregation | Best for | Cost |
| --- | --- | --- | --- |
| DirectEvaluationStrategy | none | Deterministic graders; low budget | 1× |
| VotingEvaluationStrategy | majority vote | Discrete / integer LLM scores | N× |
| AverageEvaluationStrategy | mean | Continuous LLM scores | N× |
| GRPOTournamentEvaluationStrategy | net win rate | Pairwise GRPO on subjective tasks | N²/2× |

All strategies are imported from `openjudge.evaluation_strategy`.
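
Because strategies are orthogonal to graders, switching the cost/variance trade-off is a one-line change. A sketch, reusing the `CorrectnessGrader`/`model` pair from the voting example below and assuming `DirectEvaluationStrategy` takes no constructor arguments (only `VotingEvaluationStrategy`'s signature is confirmed in this document):

from openjudge.evaluation_strategy import (
    DirectEvaluationStrategy,
    VotingEvaluationStrategy,
)

# Same grader, different strategy: 1x cost vs. Nx cost with lower variance
fast_grader = CorrectnessGrader(model=model, strategy=DirectEvaluationStrategy())
robust_grader = CorrectnessGrader(
    model=model,
    strategy=VotingEvaluationStrategy(num_votes=5, tie_breaker="closest_to_mean"),
)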

Pointwise — Noise Reduction with Voting / Average

For high-variance LLM judges, wrap any grader with `VotingEvaluationStrategy` to run N calls and take the majority vote:

from openjudge.evaluation_strategy import VotingEvaluationStrategy

# CorrectnessGrader and model come from the pointwise setup (see pointwise.md)
grader = CorrectnessGrader(
    model=model,
    strategy=VotingEvaluationStrategy(num_votes=3, tie_breaker="closest_to_mean"),
)
# Each call now runs 3 LLM evaluations internally and returns the most common score

Use an odd `num_votes` (3 or 5) to avoid ties.

Pairwise — GRPO Tournament

For GRPO on subjective tasks, use `GRPOTournamentEvaluationStrategy` to run an all-pairs comparison and compute a net win rate per rollout:

from openjudge.evaluation_strategy import GRPOTournamentEvaluationStrategy

# pairwise_grader is any pairwise grader instance (see pairwise.md)
strategy = GRPOTournamentEvaluationStrategy(debiased=False)
results = await strategy.execute(
    pairwise_grader.aevaluate,
    query="Write a haiku about the ocean.",
    responses=["rollout_1", "rollout_2", "rollout_3", "rollout_4"],
)
rewards = [r.score for r in results]  # net win rates in [-1.0, 1.0]

Set `debiased=True` to run each pair in both orders and count only consistent results (this doubles LLM calls but mitigates position bias).
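
If your trainer expects rewards in [0, 1], rescale the net win rates with the `normalize` helper from the Score Normalization section:

# Net win rates are in [-1.0, 1.0]; map onto [0.0, 1.0] for trainers that expect it
rewards_01 = [normalize(r, min_score=-1.0, max_score=1.0) for r in rewards]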