Standard RL methods collapse per-reward information when optimizing multiple rewards, since the objectives are folded together before the learning signal is computed. GDPO fixes this by normalizing each reward independently, enabling stable, balanced multi-objective learning across tasks.
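To make the contrast concrete, here is a minimal sketch of the idea, not GDPO's actual implementation: the function names, the group-statistics (mean/std) normalization, and the equal weighting are illustrative assumptions. The baseline sums rewards into one scalar before normalizing, so the highest-variance reward dominates; the decoupled version normalizes each reward stream independently and only then combines them.

```python
import numpy as np

def scalarized_advantages(rewards: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Baseline: fold rewards into one scalar, then normalize.
    The reward with the largest scale dominates the signal."""
    total = rewards @ weights                        # (num_samples,)
    return (total - total.mean()) / (total.std() + 1e-8)

def decoupled_advantages(rewards: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Decoupled normalization (sketch of the GDPO idea): normalize each
    reward column independently, then combine, so every objective
    contributes on a comparable scale."""
    mean = rewards.mean(axis=0, keepdims=True)
    std = rewards.std(axis=0, keepdims=True) + 1e-8
    per_reward_adv = (rewards - mean) / std          # (num_samples, num_rewards)
    return per_reward_adv @ weights

# Toy example (hypothetical rewards): two objectives on very different scales.
rng = np.random.default_rng(0)
rewards = np.column_stack([
    rng.normal(0.0, 100.0, 8),   # e.g. a large-scale length penalty
    rng.normal(0.0, 0.1, 8),     # e.g. a small-scale correctness score
])
weights = np.array([0.5, 0.5])

print(scalarized_advantages(rewards, weights))  # dominated by the first reward
print(decoupled_advantages(rewards, weights))   # both rewards contribute
```

Running the toy example shows the scalarized advantages tracking only the large-scale reward, while the decoupled advantages reflect both objectives, which is the balance the summary above refers to.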