Papers
arxiv:2603.06333

SAHOO: Safeguarded Alignment for High-Order Optimization Objectives in Recursive Self-Improvement

Published on Mar 6
· Submitted by Aman Chadha on Mar 11
Abstract

SAHOO provides a framework for monitoring and controlling alignment drift in self-improving AI systems through goal drift detection, constraint preservation, and regression risk quantification across multiple domains.

AI-generated summary

Recursive self-improvement is moving from theory to practice: modern systems can critique, revise, and evaluate their own outputs, yet iterative self-modification risks subtle alignment drift. We introduce SAHOO, a practical framework to monitor and control drift through three safeguards: (i) the Goal Drift Index (GDI), a learned multi-signal detector combining semantic, lexical, structural, and distributional measures; (ii) constraint preservation checks that enforce safety-critical invariants such as syntactic correctness and non-hallucination; and (iii) regression-risk quantification to flag improvement cycles that undo prior gains. Across 189 tasks in code generation, mathematical reasoning, and truthfulness, SAHOO produces substantial quality gains, including 18.3 percent improvement in code tasks and 16.8 percent in reasoning, while preserving constraints in two domains and maintaining low violations in truthfulness. Thresholds are calibrated on a small validation set of 18 tasks across three cycles. We further map the capability-alignment frontier, showing efficient early improvement cycles but rising alignment costs later and exposing domain-specific tensions such as fluency versus factuality. SAHOO therefore makes alignment preservation during recursive self-improvement measurable, deployable, and systematically validated at scale.

Community

Paper submitter

SAHOO introduces a measurable alignment-preserving framework for recursive self-improving AI systems using multi-signal drift detection (GDI), constraint-preserving optimization, and regression-risk control to enable capability gains while bounding alignment degradation.

โžก๏ธ ๐Š๐ž๐ฒ ๐‡๐ข๐ ๐ก๐ฅ๐ข๐ ๐ก๐ญ๐ฌ ๐จ๐Ÿ ๐’๐€๐‡๐Ž๐Ž: ๐’๐š๐Ÿ๐ž๐ ๐ฎ๐š๐ซ๐๐ž๐ ๐‘๐ž๐œ๐ฎ๐ซ๐ฌ๐ข๐ฏ๐ž ๐’๐ž๐ฅ๐Ÿ-๐ˆ๐ฆ๐ฉ๐ซ๐จ๐ฏ๐ž๐ฆ๐ž๐ง๐ญ

🧭 **Goal Drift Index (GDI)**: Multi-Signal Alignment Drift Detection
Introduces a composite alignment drift metric combining semantic, lexical, structural, and distributional divergence signals. Semantic drift uses embedding cosine distance; lexical drift uses Jensen–Shannon divergence over token distributions; structural drift tracks formatting features; and distributional drift measures Wasserstein distance across response embeddings. Learned weights combine these signals into GDI, calibrated via logistic regression on labeled drift data to detect subtle alignment changes across recursive improvement cycles.
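
The four signals above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, the per-dimension Wasserstein averaging, and the fixed logistic weights are all assumptions.

```python
import numpy as np
from scipy.spatial.distance import cosine, jensenshannon
from scipy.stats import wasserstein_distance

def goal_drift_index(base_emb, new_emb,
                     base_tok_dist, new_tok_dist,
                     base_struct, new_struct,
                     base_resp_embs, new_resp_embs,
                     weights, bias):
    """Illustrative GDI: logistic combination of four drift signals."""
    # Semantic drift: cosine distance between summary embeddings
    semantic = cosine(base_emb, new_emb)
    # Lexical drift: Jensen-Shannon divergence over token distributions
    # (scipy returns the JS *distance*, so square it to get divergence)
    lexical = jensenshannon(base_tok_dist, new_tok_dist) ** 2
    # Structural drift: mean absolute gap between formatting-feature vectors
    structural = np.abs(np.asarray(base_struct) - np.asarray(new_struct)).mean()
    # Distributional drift: 1-D Wasserstein per embedding dimension, averaged
    # (a simplification of a distance between response-embedding distributions)
    distributional = np.mean([
        wasserstein_distance(base_resp_embs[:, d], new_resp_embs[:, d])
        for d in range(base_resp_embs.shape[1])
    ])
    signals = np.array([semantic, lexical, structural, distributional])
    z = weights @ signals + bias        # weights/bias from logistic regression
    return 1.0 / (1.0 + np.exp(-z))     # drift score in (0, 1)
```

In practice the weights and bias would come from fitting a logistic regression on labeled drift examples, as the paragraph above describes; here they are passed in directly.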

🛡 **Constraint-Preserving Alignment Loop**
Proposes a recursive improvement pipeline where each iteration evaluates quality (Q), constraint preservation (CPS), and drift (GDI). Explicit constraint predicates enforce safety invariants (e.g., syntactic correctness, factuality, ethical constraints), and violations trigger penalty-guided improvement prompts or immediate termination. This integrates alignment preservation directly into the optimization loop, preventing capability improvements that violate safety constraints.
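
A minimal sketch of such a loop, assuming constraints are boolean predicates and drift is scored per candidate. The function name, the accept/reject policy, and the use of immediate termination instead of penalty-guided retry are simplifications of the pipeline described above.

```python
def safeguarded_improve(output, improve, quality, constraints, drift,
                        max_cycles=5, gdi_threshold=0.5):
    """Accept a revision only if every constraint predicate holds and
    drift stays below a calibrated threshold.

    constraints: dict mapping invariant name -> predicate(candidate) -> bool
    """
    best, best_q = output, quality(output)
    for _ in range(max_cycles):
        candidate = improve(best)
        # Constraint preservation: every safety invariant must hold
        violated = [name for name, pred in constraints.items()
                    if not pred(candidate)]
        if violated:
            break  # the penalty-guided improvement prompt is omitted here
        if drift(candidate) > gdi_threshold:
            break  # alignment drift exceeds threshold: terminate the loop
        q = quality(candidate)
        if q > best_q:
            best, best_q = candidate, q
    return best, best_q
```

The key design point mirrored from the paragraph above is that constraint and drift checks gate acceptance before the quality comparison, so capability gains can never be bought by violating an invariant.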

📉 **Regression Risk + CAR Frontier**: Stability and Trade-off Modeling
Introduces probabilistic regression-risk estimation using volatility-normalized quality gaps and trend-adjusted Gaussian forecasting to detect when iterative improvements begin undoing earlier gains. Additionally defines the **Capability Alignment Ratio (CAR)**, quality improvement divided by accumulated drift, to map the Pareto frontier between capability growth and alignment degradation, revealing that early cycles provide high efficiency while later improvements incur higher alignment costs.

