| year | id | rating | decision | reviewer_comments | _raw_metadata |
|---|---|---|---|---|---|
2026
|
00F7BfXLYJ
|
[
4,
4,
4,
4
] |
[
{
"content": "This paper addresses the limitations of current Multimodal Large Language Models (MLLMs) in deep logical reasoning for video understanding—such as feed-forward processing constraints (lack of self-correction), poor test-time scaling, and hallucinations. Inspired by cybernetic principles (control, communication, self-regulation), it proposes CyberV, a training-free, test-time adaptive scaling framework that redesigns video MLLMs into closed-loop adaptive systems.",
"id": "turFNyeA8W",
"rating": 4
},
{
"content": "CyberV proposes a test-time, control-theoretic framework to boost logical reasoning in video understanding without any additional training. It runs a Best-of-N (BoN) set of reasoning paths (base + multiple CoT variants), uses a “Sensor” to measure attention drift between base and CoT answers (from the last-layer attention of the answer token to video/subtitle segments), and a “Controller” (Score Forest) to aggregate multi-signals (attention retention, confidence, stability, rank, repetition) into a TopScore that decides whether to stop or trigger feedback. When uncertain, CyberV performs targeted inference feedback by extracting key frames from segments with the largest negative drift (optionally with dense temporal sampling or spatial zoom-in) and re-injects them for a second round (N=1) to correct evidence usage. Across VideoMMMU, MMVU-MCQ, and MMR-V, the method consistently improves accuracy—often substantially for small open-source MLLMs—and avoids the perception degradation that naïve CoT can cause on perception-centric benchmarks. The approach emphasizes a lightweight, training-free, closed-loop that couples evidence perception with reasoning, showing strong performance-efficiency trade-offs (e.g., peak gains around N=8) and pointing to future work on more robust feedback selection and broader free-form generation.",
"id": "BbADxAAQx6",
"rating": 4
},
{
"content": "This paper designed a test-time scaling framework inspired by cybernetics, consisting of an MLLM, a sensor, and a controller that work together to determine the execution path of the MLLM in multimodal reasoning. Experiments suggest that this framework can significantly improve the accuracy of existing MLLMs on certain benchmarks.",
"id": "Qs3Vw5qFe3",
"rating": 4
},
{
"content": "This paper introduces **CyberV**, an approach that leverages cybernetic structures to enhance the reasoning performance of Multi-Modal Large Language Models (MLLMs).",
"id": "f2QI7mx6wj",
"rating": 4
}
] |
{
"cdate": 1757998013559,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025cyberv,\ntitle={CyberV: A Cybernetic Framework for Enhancing Logical Reasoning in Video Understanding},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=00F7BfXLYJ},\nnote={under review}\n}"
},
"abstract": {
"value": "Current Multimodal Large Language Models (MLLMs) may struggle with tasks requiring deep logical reasoning about video content, primarily stemming from the feed-forward processing nature, which limits their ability for self-correction and iterative refinement. To address these limitations, we propose a novel framework inspired by cybernetic principles, redesigning video MLLMs as adaptive systems capable of self-monitoring, self-correction, and dynamic resource allocation during inference. Our approach, CyberV, introduces a cybernetic loop consisting of an MLLM Inference System, a Sensor, and a Controller. Specifically, the sensor monitors MLLM forward processes. It collects intermediate interpretations, such as attention drift, then the controller determines when and how to trigger self-correction and generate feedback to guide the next round. This test-time adaptive scaling framework enhances frozen MLLMs without requiring training or additional components. Experiments demonstrate significant improvements on complex reasoning benchmarks: CyberV boosts Qwen2.5-VL-7B by 8.3% and InternVL3-8B by 5.5% on VideoMMMU, surpassing the competitive proprietary model GPT-4o. When applied to Qwen2.5-VL-72B, it yields a 10.0% improvement, achieving performance even comparable to human experts. Furthermore, on other reasoning-focused benchmarks, our method shows consistent gains of 4.6% on the multiple-choice question section of MMVU and 2.4% on MMR-V, highlighting its robustness in enhancing logical reasoning for video understanding. The code will be released to support further research."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Video Understanding",
"Multimodal Large Language Models",
"Test-Time Scaling"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6befca6b66a747daaa91eea1475167c914c23565.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "CyberV: A Cybernetic Framework for Enhancing Logical Reasoning in Video Understanding"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "00F7BfXLYJ",
"id": "00F7BfXLYJ",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission6845/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897888857,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission6845/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission6845/Authors"
]
}
|
|
2026
|
00HNN8O7Ni
|
[
4,
2,
2,
4
] |
[
{
"content": "This paper proposed a new reinforcement learning framework for synthesizing hardware circuits based on feedback from model checking results.\nThe experiments are based on open datasets and the results outperform supervised learning baselines.\n\nPros:\n1. The integration of model checking results and circuit synthesis is interesting.\n\nCons:\n1. Using feedback from formal methods for learning is not novel; the novelty of the method is limited.\n2. The experimental results are limited and not convincing.",
"id": "XWl4ZN0lS1",
"rating": 4
},
{
"content": "This paper proposes an approach for synthesizing circuits from linear temporal logic (LTL) specifications using machine learning. The method builds on prior work by integrating model checker feedback and adding a search component for circuit size optimization. The approach is evaluated on several datasets.",
"id": "TeuZ9Av2LB",
"rating": 2
},
{
"content": "This paper addresses the limitations of existing deep learning approaches to reactive synthesis—where supervised learning is confined to imitating synthesis tools and reinforcement learning has slow convergence. It proposes a hybrid method that initializes models via supervised learning, then refines them using model checking feedback to prioritize correct circuit synthesis over tool imitation.\n\nReactive synthesis, which constructs systems satisfying linear temporal logic specifications (critical for hardware design), is computationally hard (2EXPTIME-complete), leading traditional tools to time out even for small specs. The paper’s hybrid framework first trains an initial model ($M_0$) on 200,000 Strix-generated specification-circuit pairs (supervised phase). In the second phase, it verifies the model’s predicted circuits ($\\hat{C}$) with nuXmv: if $\\hat{C}$ meets the spec, it reinforces the model with $(\\varphi, \\hat{C})$; if not, it falls back to the dataset’s correct circuit ($C$).\n\nThree core variants extend the framework: 1) \"Reinforcing Learned Semantics\" boosts generalization by leveraging correct non-dataset predictions; 2) \"Expert Iteration\" uses beam search (top-k predictions) to improve performance and minimize circuit size, yielding circuits 54% smaller than Strix's on average; 3) \"Iterating on Open Problems\" samples the unsolvable Timeouts dataset to exceed tool capabilities.\n\nExperiments on hierarchical transformers and fine-tuned CodeT5 show state-of-the-art results: CodeT5 with expert iteration hits 89.3% on Testset and 51.9% on Timeouts. The method advances reactive synthesis by combining efficiency, correctness, and scalability beyond traditional tools.",
"id": "LRlqwKLS5Y",
"rating": 2
},
{
"content": "Reactive synthesis is the problem of synthesizing finite-state models from temporal logic specifications. This paper explores if deep learning can be used to solve this problem. Compared to earlier attempts to use ML for reactive synthesis, the new ideas include use of a model checker to give feedback to update the model, use of top-k predictions for improving the quality of learnt solutions, and iterating on problems that model fails to solve. The methods are implemented and evaluated on benchmarks for synthesis competitions.",
"id": "pm3iJpCRUd",
"rating": 4
}
] |
{
"cdate": 1758322705432,
"content": {
"TLDR": {
"value": "We propose a deep learning approach for reactive synthesis that first initializes a model with imitation learning and then continues training by reinforcing formally verified solutions."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025learning,\ntitle={Learning Reactive Synthesis from Model Checking Feedback},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=00HNN8O7Ni},\nnote={under review}\n}"
},
"abstract": {
"value": "Deep learning applications to formal verification typically fall into one of two categories: employing reinforcement learning that suffers from slow convergence, or supervised learning that suffers from limited exploration. For reactive synthesis, the problem of automatically constructing a system that satisfies a formal specification, existing approaches fall into the latter category. In this paper, we propose a hybrid approach that only initializes the model with supervised learning and then continues training by reinforcing formally verified predictions. We show that by training the model to synthesize correct solutions rather than fixating on the supervised data, performance substantially improves. We can further utilize our approach to optimize for size without any performance degradation. Finally, we show that we can iteratively reinforce on open problems that synthesis tools are unable to solve. Our approach is demonstrated for both deep neural networks trained from scratch and pre-trained models fine-tuned on reactive synthesis, establishing new state-of-the-art results for learning reactive synthesis."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Temporal Logic",
"Reactive Synthesis",
"Expert Iteration"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/34d3a3eeb460a6177f52996e217332dfd2836e22.pdf"
},
"primary_area": {
"value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Learning Reactive Synthesis from Model Checking Feedback"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "00HNN8O7Ni",
"id": "00HNN8O7Ni",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission21857/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896899730,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission21857/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission21857/Authors"
]
}
|
|
2026
|
00UQtHqB2k
|
[
2,
6,
2,
4
] |
[
{
"content": "The paper proposes a unified way to evaluate group fairness through sparsity. It studies links among Maximum Pairwise Difference, the Gini Index, and a PQ Index and argues that higher sparsity means lower fairness. Based on this view, it replaces the pairwise step in common criteria with a sparsity measure and defines S-SP and S-EO for classification and regression, with formulas and properties for PQ. Experiments across several datasets and bias mitigation methods show similar trends to MPD-style metrics and some differences in intersectional settings. The paper positions the work as an evaluation framework rather than a training algorithm.",
"id": "HQDVgNXwzo",
"rating": 2
},
{
"content": "The paper presents a novel framework for fairness evaluation based on sparsity. The authors first propose the use of the PQ index, originally introduced for pruning, as a sparsity measure for fairness evaluation, in a manner similar to the Gini Index. They then describe the properties of this index in comparison to the Gini Index, including differences with respect to the Maximum Pairwise Difference (MPD). The paper further outlines currently used fairness metrics based on MPD and suggests replacing MPD with alternative sparsity measures such as the Gini or PQ index.\nThe authors demonstrate that the behavior of the proposed metrics aligns with that of standard fairness metrics when applied to a binary sensitive attribute and bias mitigation algorithms. Moreover, they show that these sparsity-based metrics are better suited for capturing fairness in scenarios where the sensitive attribute consists of multiple groups. This is because both the Gini and PQ indices consider the full vector of group values, rather than just the maximum and minimum, and thus capture disparities more effectively.",
"id": "S7pg08xnu9",
"rating": 6
},
{
"content": "This paper experimentally examines the use of the PQ-index [1] in place of max pairwise distances (MPD) in two fairness criteria (statistical parity and equalized odds). The comparison is performed on 6 datasets used for fair classification and regression. Experimental results show that the baseline and sparsity-based measures of fairness have similar tradeoff curves between model performance and fairness. Experiments examining intersectional fairness were done on a single dataset. Authors claim these results suggest that sparsity-based fairness metrics may be more sensitive to heterogeneity in the groups.",
"id": "HxkFe3LDw4",
"rating": 2
},
{
"content": "This paper proposes a unified framework for evaluating algorithmic fairness through sparsity measures. The authors theoretically analyze the PQ Index as a sparsity measure, establish its relationships with MPD, and reformulate classical fairness metrics (SP and EO) in terms of sparsity. Experiments on multiple datasets and with several bias mitigation methods demonstrate empirical alignment between sparsity-based and traditional fairness measures.",
"id": "eRZZAU8odl",
"rating": 4
}
] |
{
"cdate": 1758232139112,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025toward,\ntitle={Toward Unifying Group Fairness Evaluation from a Sparsity Perspective},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=00UQtHqB2k},\nnote={under review}\n}"
},
"abstract": {
"value": "Ensuring algorithmic fairness remains a significant challenge in machine learning, particularly as models are increasingly applied across diverse domains. While numerous fairness criteria exist, they often lack generalizability across different machine learning problems. This paper examines the connections and differences among various sparsity measures in promoting fairness and proposes a unified sparsity-based framework for evaluating algorithmic fairness. The framework aligns with existing fairness criteria and demonstrates broad applicability to a wide range of machine learning tasks. We demonstrate the effectiveness of the proposed framework as an evaluation metric through extensive experiments on a variety of datasets and bias mitigation methods. This work provides a novel perspective to algorithmic fairness by framing it through the lens of sparsity and social equity, offering potential for broader impact on fairness research and applications."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Fairness",
"Sparsity",
"Unified Framework"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/219ccddd225cef5a883ca674d9f1b6bc2e08423c.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/fde30f02a6849cd5c614e87efe679a0e788d23bb.zip"
},
"title": {
"value": "Toward Unifying Group Fairness Evaluation from a Sparsity Perspective"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "00UQtHqB2k",
"id": "00UQtHqB2k",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission14292/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897378369,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission14292/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission14292/Authors"
]
}
|
|
2026
|
017F77AYeQ
|
[
2,
2,
4,
0
] |
[
{
"content": "The paper proposes SMART-3D, a mask token modeling approach for 3D generation.",
"id": "gZowcvNNqh",
"rating": 2
},
{
"content": "The paper proposes a framework that merges masked autoregressive generation with diffusion modeling and linear attention, addressing key efficiency bottlenecks in 3D shape generation. However, technical novelty and evaluation are limited.",
"id": "kE0H4cZdnO",
"rating": 2
},
{
"content": "This paper introduces SMART-3D (Scaling Masked AutoRegressive Transformers for 3D generation) for 3D shape generation. The framework combines the modeling capability of autoregressive models with the efficiency of masked generation strategies. It uses progressive masked decoding to enable parallel decoding and reduce sampling steps, and employs a linear attention mechanism to lower computational complexity, achieving state-of-the-art performance in both generation quality and speed.",
"id": "WIDwzbIezO",
"rating": 4
},
{
"content": "-",
"id": "dS8t6uDrPN",
"rating": 0
}
] |
{
"cdate": 1758113495159,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025smartd,\ntitle={{SMART}-3D: Scaling Masked AutoRegressive Transformer for Efficient 3D Shape Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=017F77AYeQ},\nnote={under review}\n}"
},
"abstract": {
"value": "Autoregressive models have shown promise in 3D shape generation by modeling complex spatial dependencies between discrete shape tokens. However, their sequential nature and token-by-token sampling limit scalability and generation speed, especially for high-resolution shapes. In this work, we propose SMART-3D (Scaling Masked AutoRegressive Transformers for 3D generation), a novel framework that combines the modeling capacity of autoregressive transformers with the efficiency of masked generation. By introducing a hierarchical token representation and a progressive masked generation schedule, SMART-3D enables parallel decoding of 3D structures without sacrificing autoregressive fidelity. We further optimize the model with spatially-aware masking and lightweight transformer blocks, allowing generation of detailed 3D shapes with significantly reduced computational overhead. Experiments on ShapeNet, ModelNet, and ShapeNet-55 datasets demonstrate that SMART-3D achieves state-of-the-art performance in both generation quality and speed, outperforming previous competitive baselines. Our approach offers a scalable and practical solution for high-fidelity 3D shape synthesis in real-world applications."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Autoregressive models",
"3D shape generation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/676ed3977332fe4f530434b6e3796debb83cbe57.pdf"
},
"primary_area": {
"value": "generative models"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "SMART-3D: Scaling Masked AutoRegressive Transformer for Efficient 3D Shape Generation"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "017F77AYeQ",
"id": "017F77AYeQ",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission9157/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897740443,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission9157/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission9157/Authors"
]
}
|
|
2026
|
023yMrtHQP
|
[
4,
4,
4
] |
[
{
"content": "This paper introduces a prompting framework, named Expectation–Evidence Prompting (EEP), for large language models to enhance factual verification. Drawing from the Strategic Use of Evidence technique in cognitive psychology, EEP involves generating two sets of expectations, supportive and refutational, and comparing them to observed evidence using a semantic consistency function. The framework is also extended to a supervised learning setup with cross-entropy loss and regularization. Evaluated on three benchmarks using GPT-3.5-turbo, EEP outperforms baselines like Chain-of-Thought, Self-Ask, and Decompose.",
"id": "9JIFVlrjLv",
"rating": 4
},
{
"content": "This paper introduces Expectation–Evidence Prompting (EEP), a cognitive science inspired framework for factual verification in large language models (LLMs). Instead of directly mapping claims to truth labels, EEP guides the model to generate supportive and refutational expectations about what evidence should exist if a claim were true or false. These expectations are then compared to observed evidence using a semantic consistency function, producing support and refutation scores. This is evaluated with a variety of methods including Implicit LLM reasoning, embedding similarity and Natural Language Inference. A claim is accepted, rejected, or abstained from based on thresholded scores. The authors motivate EEP with parallels to the Strategic Use of Evidence (SUE) technique in investigative psychology and evaluate it on FEVER, PubHealth, and SciFact. EEP achieves competitive results, notably 86.3 macro-F1 on FEVER (+3.6 over CoT), 82.1 precision on PubHealth, and 76.1 F1 on the SUPPORTS class in SciFact. EEP thus formalizes a bidirectional reasoning mechanism that improves interpretability and robustness compared to Chain-of-Thought (CoT), Self-Ask, and DECOMP prompting.",
"id": "I6x7K1kcyF",
"rating": 4
},
{
"content": "This paper introduces Expectation–Evidence Prompting (EEP), a cognitively inspired prompting framework for factual verification with large language models (LLMs). Drawing on the Strategic Use of Evidence (SUE) technique from cognitive psychology, EEP prompts the LLM to generate both supportive and refutational expectations for a claim, then explicitly compares these with observed evidence to make a structured three-way decision: support, refute, or abstain. The method is evaluated on three standard fact-checking benchmarks (FEVER, PubHealth, SciFact) and compared to strong prompting baselines (Standard, Chain-of-Thought, Self-Ask, DECOMP). EEP achieves state-of-the-art macro-F1 on FEVER and strong precision on PubHealth, with consistent gains in main metrics.",
"id": "WiPOdGfIDz",
"rating": 4
}
] |
{
"cdate": 1758292986416,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025expectationevidence,\ntitle={Expectation{\\textendash}Evidence Prompting: Structuring Verification by Comparing Expected and Observed Evidence},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=023yMrtHQP},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) often fail in factual verification due to hallucinations, unreliable truthfulness judgments, and opaque reasoning. We identify a structural limitation underlying these failures: LLMs directly compare claims with evidence without accounting for expected refutational alternatives. Specifically, we demonstrate that this omission leads to ambiguity in contradiction detection and unreliable abstention. Leveraging this observation, we introduce Expectation-Evidence Prompting (EEP), a cognitively inspired strategy that first generates supportive and refutational expectations from a claim and then aligns them with observed evidence. This bidirectional reasoning process enforces logical symmetry, reduces bias toward agreement, and provides a principled abstention mechanism. Across three fact-checking benchmarks: FEVER, PubHealth, and SciFact, EEP achieves consistent gains over strong prompting baselines, including an 86.3 macro-F1 on FEVER (+3.6 over Chain-of-Thought), 82.1 precision on PubHealth (highest among all methods), and 76.1 F1 on the Supports class in SciFact. These results demonstrate that embedding expectation evidence alignment into prompt design yields more interpretable, robust, and trustworthy factual reasoning in LLMs."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Models (LLMs)",
"Factual Verification",
"Prompt Engineering",
"Cognitive Psychology–Inspired Prompting",
"Expectation–Evidence Alignment",
"Contradiction Detection",
"Abstention Mechanism"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/da7fb984ac74ee03e0b7788c1519b84d690a4cbf.pdf"
},
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Expectation–Evidence Prompting: Structuring Verification by Comparing Expected and Observed Evidence"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "023yMrtHQP",
"id": "023yMrtHQP",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission19036/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897064617,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission19036/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission19036/Authors"
]
}
|
|
2026
|
02NbD16OnA
|
[
4,
4,
4,
6
] |
[
{
"content": "This paper introduces DECEPTIONDECODED, a multimodal news benchmark with explicitly defined creator intent to support misleading intent detection, source attribution, and desire inference. It reveals that current VLMs fail to reason about intent beyond surface alignment and stylistic cues.",
"id": "fn4fwYc83Q",
"rating": 4
},
{
"content": "This paper introduces DECEPTIONDECODED, a benchmark dataset for analyzing misleading creator intent in multimodal news. The dataset contains 12,000 image–caption–article triplets, each grounded in verified VisualNews articles, with both misleading and non-misleading variants generated under predefined “creator intents.” They evaluate 14 vision–language models, including GPT-4o, Claude-3.7, Gemini-2.5-Pro, and Qwen2.5-VL. The results indicate that even state-of-the-art models perform poorly on intent reasoning, tending to rely on surface-level cues such as image-text consistency or stylistic polish.",
"id": "9WlB8Dphn2",
"rating": 4
},
{
"content": "This paper introduces DECEPTIONDECODED, a novel benchmark designed to evaluate Vision-Language Models (VLMs) in detecting creator intent behind misleading multimodal news content. The dataset is constructed using a synthetic, intent-guided framework that generates manipulations grounded in real news, ensuring relevance and control over deception intent. The study evaluates state-of-the-art VLMs under various input conditions (e.g., image+text, text+article) and with authenticity cues (helpful or adversarial hints).",
"id": "3M7P7dMXHb",
"rating": 4
},
{
"content": "This paper introduces DECEPTIONDECODED, a large-scale benchmark for understanding and detecting misleading creator intent in multimodal news. This work centers on modeling the combination of desired influence and execution plan behind deceptive news creation. The benchmark comprises 12,000 image–caption–article triplets, each grounded in trustworthy news contexts from VisualNews and simulated through intent-guided generation using GPT-4o and FLUX.1. It supports three intent-centric tasks: (1) misleading intent detection, (2) misleading source attribution, and (3) creator desire inference. Comprehensive evaluations of 14 VLMs reveal that even leading models struggle to reason about creator intent. Fine-tuning on DECEPTIONDECODED improves performance on external MMD benchmarks (e.g., MMFakeBench), underscoring its transferability.",
"id": "8qt7sRcLNz",
"rating": 6
}
] |
{
"cdate": 1756910313383,
"content": {
"TLDR": {
"value": "We reveal that state-of-the-art VLMs remain blind to misleading creator intent, establishing the need for intent-aware benchmarks and models as the next frontier in multimodal misinformation detection."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025seeing,\ntitle={Seeing Through Deception: Uncovering Misleading Creator Intent in Multimodal News with Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=02NbD16OnA},\nnote={under review}\n}"
},
"abstract": {
"value": "The impact of misinformation arises not only from factual inaccuracies but also from the misleading narratives that creators deliberately embed. Interpreting such creator intent is therefore essential for multimodal misinformation detection (MMD) and effective information governance. To this end, we introduce DeceptionDecoded, a large-scale benchmark of 12,000 image–caption pairs grounded in trustworthy reference articles, created using an intent-guided simulation framework that models both the desired influence and the execution plan of news creators. The dataset captures both misleading and non-misleading cases, spanning manipulations across visual and textual modalities, and supports three intent-centric tasks: (1) misleading intent detection, (2) misleading source attribution, and (3) creator desire inference. We evaluate 14 state-of-the-art vision–language models (VLMs) and find that they struggle with intent reasoning, often relying on shallow cues such as surface-level alignment, stylistic polish, or heuristic authenticity signals. These results highlight the limitations of current VLMs and position DeceptionDecoded as a foundation for developing intent-aware models that go beyond shallow cues in MMD."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"multimodal misinformation detection",
"vision-language models",
"creator intent"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9be01177d5da89276e95a5c85b7ef81c5e6a455e.pdf"
},
"primary_area": {
"value": "datasets and benchmarks"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Seeing Through Deception: Uncovering Misleading Creator Intent in Multimodal News with Vision-Language Models"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "02NbD16OnA",
"id": "02NbD16OnA",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission1711/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898192988,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission1711/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission1711/Authors"
]
}
|
|
2,026
|
02cEkpURXH
|
[
2,
2,
6,
4
] |
[
{
    "content": "This paper proposes a KD-based training strategy for OOD generalization. The authors first argue that training compact student models via simple KD from a teacher with strong OOD performance can often surpass standalone algorithmic DG methods. They further note that prior OOD-oriented KD approaches predominantly focus on the teacher’s design or the teacher–student relationship, leaving the design of the student model underexplored. To address this, the authors introduce a forecaster that quantifies per-sample difficulty using auxiliary models built on the student’s internal representations together with uncertainty measures. The KD loss is then reweighted on a per-sample basis according to the predicted difficulty. Experiments on four DomainBed datasets with ResNet-18 demonstrate the effectiveness of the proposed approach.",
"id": "LrWms20vTu",
"rating": 2
},
{
"content": "The paper proposes an adaptive KD framework for domain generalization where a lightweight forecaster uses early-layer readouts (auxiliary heads) and uncertainty features (entropy, confidence margin) to reweight per-instance contributions of supervised loss vs. teacher KL during student training. The forecaster is trained interleaved with the student and discarded at inference, so deployment cost matches vanilla KD.",
"id": "emYIxBo6Yd",
"rating": 2
},
{
"content": "This paper addresses out-of-distribution (OOD) generalization in knowledge distillation by proposing an adaptive framework that uses early layer predictions to dynamically weight the loss components. The authors introduce a \"forecaster\" meta-network that leverages auxiliary classifiers at intermediate layers, along with uncertainty measures (entropy and confidence margin), to predict sample difficulty and reweight the balance between supervised loss and distillation loss on a per-instance basis. The method is evaluated on domain generalization benchmarks (OfficeHome, PACS, VLCS, TerraIncognita) and shows consistent improvements over vanilla KD (+1.0-1.2% average accuracy) while adding no inference overhead.",
"id": "oKvq6yCVz8",
"rating": 6
},
{
"content": "The paper proposes a student-centric, adaptive KD scheme that learns an instance-wise weight to balance cross-entropy vs. KL terms using a lightweight “forecaster” fed by early-layer readouts (stacked intermediate logits) plus uncertainty signals (entropy and a confidence margin). The forecaster is trained with a correctness-prediction objective and its outputs are stabilized via a batch-standardized sigmoid adjustment before modulating the student loss; training alternates between updating the student/auxiliary heads on train splits and the forecaster on a held-out validation split, and all auxiliaries are discarded at inference. Reported results indicate consistent OOD gains over vanilla KD and DG baselines across multiple benchmarks.",
"id": "2cO0cHlmMu",
"rating": 4
}
] |
{
"cdate": 1758311939461,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025early,\ntitle={Early Layer Readouts for Robust Knowledge Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=02cEkpURXH},\nnote={under review}\n}"
},
"abstract": {
    "value": "Domain generalization (DG) aims to learn a model that can generalize to unseen, i.e., out-of-distribution (OOD), test domains. While large-capacity networks trained with sophisticated DG algorithms tend to achieve high robustness, they are often impractical in deployment. Knowledge distillation (KD) can typically alleviate this via an efficient transfer of knowledge from a robust teacher to a smaller student network. Throughout our experiments, we find that vanilla KD already provides strong OOD performance, often outperforming standalone DG algorithms. Motivated by this observation, we propose an adaptive distillation strategy that utilizes early layer predictions and uncertainty measures to learn a meta network that effectively rebalances supervised and distillation losses according to sample difficulty. Our method adds no inference overhead and consistently outperforms canonical ERM, vanilla KD, and competing DG algorithms across OOD generalization benchmarks."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"domain generalization",
"knowledge distillation",
"early layer readouts"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2bb11bab4ab35adbf1f2a9ad3d46d601f3b0111c.pdf"
},
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Early Layer Readouts for Robust Knowledge Distillation"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "02cEkpURXH",
"id": "02cEkpURXH",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission20949/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896950334,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission20949/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission20949/Authors"
]
}
|
|
2,026
|
02mBAZjFzp
|
[
4,
4,
4,
6
] |
[
{
    "content": "This paper introduces VRPAGENT, a framework for discovering heuristic operators for Vehicle Routing Problems (VRPs) using large language models (LLMs). The method combines LLM-generated “destroy” and “order” operators with a Large Neighborhood Search (LNS) metaheuristic, leveraging genetic algorithms (GAs) to iteratively evolve improved operators. Although the research motivation and validation results appear sound, the approach is almost identical to existing LLM-guided heuristic frameworks, which weakens the overall contribution of the paper.",
"id": "uG0zaS46hU",
"rating": 4
},
{
"content": "This paper proposes a framework for automated heuristic discovery in VRPs using LLMs called VRPAgent. VRPAgent integrates LLM-generated problem-specific operators within a Large Neighborhood Search (LNS) metaheuristic and refines them through a genetic algorithm that employs elitism, biased crossover, and code-length penalty mechanisms.\n\nKey features include generating problem-specific destroy and insert heuristics via LLMs, and evolving these operators over multiple generations to maximize solution quality while controlling code complexity. The method is evaluated across standard VRPs (capacitated, time windows, prize-collecting), consistently discovering heuristics that outperform handcrafted and previous LLM/learning-based methods on large benchmark instances using only CPU resources.\n\nThe approach offers interpretability, practical efficiency, and a reproducible pipeline for discovering and improving heuristics for combinatorial optimization, highlighting a new path for LLM-driven algorithmic design in operations research.\n\nThe contributions include:\n1. A hybrid metaheuristic framework (LLM-in-the-loop LNS) for VRPs where LLMs generate, mutate, and combine code for local operators.\n2. A genetic algorithm with code-length penalties to evolve and select the best LLM-generated operators.\n3. Demonstrating state-of-the-art or superior performance compared to both expert-designed heuristic solvers and recent neural/LLM solutions on several large VRP benchmarks, with superior interpretability and scalability",
"id": "D0O7X821Fg",
"rating": 4
},
{
"content": "Designing effective heuristics for VRP problems based on the Large Neighborhood Search (LNS) algorithm typically requires extensive human expertise and trial-and-error. To address this issue, the paper proposes using large language models (LLMs) to automatically design heuristic operators. Building on the concept of genetic algorithms, the LLM generates diverse heuristic candidates, retains the best-performing ones according to the solution results, and performs heuristic modifications and explorations to further improve performance. The proposed method is validated on multiple types of VRP problems, demonstrating a significant overall performance advantage compared with other AI-enhanced LNS approaches.",
"id": "jD0850R4NE",
"rating": 4
},
{
"content": "This paper presents VRPAGENT, a framework that uses Large Language Models (LLMs) to automatically discover heuristic operators for Vehicle Routing Problems (VRPs). The approach embeds LLM-generated problem-specific operators within a Large Neighborhood Search (LNS) metaheuristic and refines them through a genetic algorithm with elitism and biased crossover. The authors evaluate their method on three VRP variants (CVRP, VRPTW, PCVRP) and demonstrate state-of-the-art performance using only a single CPU core at test time.",
"id": "ODFKpFC7tV",
"rating": 6
}
] |
{
"cdate": 1758296070926,
"content": {
"TLDR": {
"value": "We introduce VRPAgent, a framework that leverages LLMs and evolutionary search to discover novel heuristic operators for vehicle routing problems, achieving state-of-the-art performance across multiple VRP variants."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025vrpagent,\ntitle={{VRPA}gent: {LLM}-Driven Discovery of Heuristic Operators for Vehicle Routing Problems},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=02mBAZjFzp},\nnote={under review}\n}"
},
"abstract": {
"value": "Designing high-performing heuristics for vehicle routing problems (VRPs) is a complex task that requires both intuition and deep domain knowledge. Large language model (LLM)-based code generation has recently shown promise across many domains, but it still falls short of producing heuristics that rival those crafted by human experts. In this paper, we propose VRPAgent, a framework that integrates LLM-generated components into a metaheuristic and refines them through a novel genetic search. By using the LLM to generate problem-specific operators, embedded within a generic metaheuristic framework, VRPAgent keeps tasks manageable, guarantees correctness, and still enables the discovery of novel and powerful strategies. Across multiple problems, including the capacitated VRP, the VRP with time windows, and the prize-collecting VRP, our method discovers heuristic operators that outperform handcrafted methods and recent learning-based approaches while requiring only a single CPU core. To our knowledge, VRPAgent is the first LLM-based paradigm to advance the state-of-the-art in VRPs, highlighting a promising future for automated heuristics discovery."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"automated algorithm design",
"evolutionary search",
"vehicle routing problem",
"LLM agent",
"heuristic discovery"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/35f37aa40fad450cb00124cdc83059fbb4cb843f.pdf"
},
"primary_area": {
"value": "optimization"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "VRPAgent: LLM-Driven Discovery of Heuristic Operators for Vehicle Routing Problems"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "02mBAZjFzp",
"id": "02mBAZjFzp",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission19416/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897040045,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission19416/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission19416/Authors"
]
}
|
|
2,026
|
02mgFnnfqG
|
[
4,
8,
6,
6
] |
[
{
    "content": "The paper presents LiveMoments, a method for selecting and restoring a new low-quality (LQ) key photo from the short clip surrounding the original high-quality (HQ) key photo. To this end, the authors build a model based on latent flow models and learnable networks for the HQ key image, the LQ candidate, and the motion between the two frames modeled as optical flow. The authors also propose to perform image-space motion alignment based on image patches. The authors train the model using open-source high-quality data and introduce three benchmarks for evaluation: a synthetic one and two real-world Live Photo datasets.",
"id": "PmPY4GqdRf",
"rating": 4
},
{
    "content": "The paper introduces LiveMoments for reselected key photo restoration in Live Photos. It adopts a dual-branch diffusion architecture with a ReferenceNet and a RestorationNet, and adds a unified Motion Alignment module that injects flow-guided priors at latent and image levels. The authors build three benchmarks and propose relative no-reference metrics tailored to the task. Experiments on synthetic and real Live Photo datasets demonstrate consistent gains over RefISR, RefVSR, and diffusion-based SISR baselines.",
"id": "q7t5PLY0Y2",
"rating": 8
},
{
"content": "I think the paper introduces a practical task: restoring a reselected low-quality Live Photo frame using the original high-quality (HQ) key photo as a reference. The method, LiveMoments, uses a dual-branch diffusion transformer (ReferenceNet + RestorationNet) with cross-attention fusion and a unified motion-alignment module: (i) latent-level motion embeddings from RAFT flow injected as attention bias; (ii) image-level Patch Correspondence Retrieval (PCR) for tile-wise inference at 4K. Datasets include SynLive260 (synthetic) and real vivoLive144 / iPhoneLive90, plus a relative no-reference metric that normalizes to the HQ reference. Results show consistent perceptual gains on real data.",
"id": "1ujaAv5W14",
"rating": 6
},
{
"content": "This paper introduces the task of Reselected Key Photo Restoration for Live Photos, \nwhere a user-selected frame from the short video is restored using the original high-quality key photo as reference. \nThe paper formulates this as a reference-guided diffusion problem and proposes a dual-branch architecture \ncombining a RestorationNet for the degraded frame and a ReferenceNet for the original photo, fused via cross-attention. \nA unified Motion Alignment module enables alignment both in the latent space through motion-guided attention \nand in the image space via a Patch Correspondence Retrieval (PCR) strategy.\nExperiments demonstrate significant quantitative and visual gains over baselines.",
"id": "LEq7GBrfwn",
"rating": 6
}
] |
{
"cdate": 1757934812324,
"content": {
"TLDR": {
"value": "We are the first to restore reselected key photos in Live Photos, achieving perceptual fidelity beyond existing solutions in real-world scenes."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025livemoments,\ntitle={LiveMoments: Reselected Key Photo Restoration in Live Photos via Reference-guided Diffusion},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=02mgFnnfqG},\nnote={under review}\n}"
},
"abstract": {
"value": "Live Photo captures both a high-quality key photo and a short video clip to preserve the precious dynamics around the captured moment. \nWhile users may choose alternative frames as the key photo to capture better expressions or timing, these frames often exhibit noticeable quality degradation, as the photo capture ISP pipeline delivers significantly higher image quality than the video pipeline. This quality gap highlights the need for dedicated restoration techniques to enhance the reselected key photo. To this end, we propose LiveMoments, a reference-guided image restoration framework tailored for the reselected key photo in Live Photos. Our method employs a two-branch neural network: a reference branch that extracts structural and textural information from the original high-quality key photo, and a main branch that restores the reselected frame using the guidance provided by the reference branch. Furthermore, we introduce a unified Motion Alignment module that incorporates motion guidance for spatial alignment at both the latent and image levels. Experiments on real and synthetic Live Photos demonstrate that LiveMoments significantly improves perceptual quality and fidelity over existing solutions, especially in scenes with fast motion or complex structures."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Live Photo",
"Reference-based Image Restoration",
"Conditional Image Generation",
"Motion Alignment"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/bbbb05b5353518a72b45118dfb2eecd0c3ed7f78.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "LiveMoments: Reselected Key Photo Restoration in Live Photos via Reference-guided Diffusion"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "02mgFnnfqG",
"id": "02mgFnnfqG",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission5782/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897954152,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission5782/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission5782/Authors"
]
}
|
|
2,026
|
032sg6mGp9
|
[
4,
4,
6,
6
] |
[
{
"content": "This paper introduces a multinomial mixture modelling approach to address the identifiability problem in learning from noisy labels (LNL). The authors theoretically prove that LNL becomes identifiable when each sample has at least 2C−1 independent noisy labels, enabling the unique recovery of clean label distributions without relying on heuristic assumptions. To make this feasible in practice, they propose generating additional pseudo noisy labels from nearest neighbours and applying an Expectation–Maximization algorithm to infer clean labels. Extensive experiments on synthetic, web-controlled, and real-world noisy datasets demonstrate that the proposed method accurately estimates clean labels and achieves performance competitive with state-of-the-art LNL techniques.",
"id": "ur3yGYd6qM",
"rating": 4
},
{
    "content": "This paper addresses the long-standing issue of identifiability in learning from noisy labels (LNL). The authors show that, under a multinomial mixture modeling approach, the LNL problem becomes identifiable if at least $2C-1$ independent and identically distributed (i.i.d.) noisy labels are available per instance (where $C$ is the number of classes). As manually acquiring such redundancy is impractical, the paper proposes estimating additional noisy labels via nearest-neighbour augmentation in feature space. An Expectation-Maximisation (EM) algorithm operating on the mixture model is then used to estimate the clean label distributions. The experiments show strong results on both synthetic and real-world datasets, and extensive ablation studies support the paper's theoretical claims and design decisions.",
"id": "AVKWRBfTig",
"rating": 4
},
{
"content": "The paper tackles the fundamental issue of identifiability in learning with noisy labels (LNL).\nThe authors demonstrate that when each sample is annotated with at least 2C−1 i.i.d. noisy labels (where C is the number of classes), the true clean-label distribution becomes identifiable under a multinomial mixture model.\nSince collecting that many labels per sample is infeasible in practice, the authors propose a practical algorithm that approximates i.i.d. noisy labels using KNN and LLC, followed by an EM procedure to recover clean posterior estimates.\nExtensive experiments on multiple benchmarks show that this surrogate approach is both theoretically justified and empirically effective.",
"id": "Q8Lv3taOc5",
"rating": 6
},
{
"content": "This paper studies the foundational identifiability problem in learning from noisy labels (LNL). It establishes that the standard single-label LNL setting is non-identifiable in theory, meaning the clean label distribution cannot be recovered without additional assumptions. The key contribution is proving that if each instance has at least 2C−1 i.i.d. noisy labels (where C is the number of classes), then clean labels are identifiable when modeling noisy labels as a multinomial mixture. Extensive experiments on synthetic and real-world noisy-label benchmarks support the theoretical claims, showing competitive or improved performance relative to state-of-the-art baselines such as DivideMix, HOC, and others.",
"id": "T4t5H3p3uv",
"rating": 6
}
] |
{
"cdate": 1758285923748,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025identifiability,\ntitle={Identifiability in Noisy Label Learning: A Multinomial Mixture Modelling Approach},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=032sg6mGp9},\nnote={under review}\n}"
},
"abstract": {
"value": "Learning from noisy labels (LNL) is crucial in deep learning, in which one of the approaches is to identify clean-label samples from poorly-annotated datasets. Such an identification is challenging because the conventional LNL problem, which assumes only one noisy label per instance, is non-identifiable, i.e., clean labels cannot be estimated theoretically without additional heuristics. This paper presents a novel data-driven approach that addresses this issue without requiring any heuristics about clean samples. We discover that the LNL problem becomes identifiable if there are at least $2C - 1$ i.i.d. noisy labels per instance, where $C$ is the number of classes. Our finding relies on the assumption of i.i.d. noisy labels and multinomial mixture modelling, making it easier to interpret than previous studies that require full-rank noisy-label transition matrices. To fulfil this condition without additional manual annotations, we propose a method that automatically generates additional i.i.d. noisy labels through nearest neighbours. These noisy labels are then used in the Expectation-Maximisation algorithm to infer clean labels. Our method demonstrably estimates clean labels accurately across various label noise benchmarks, including synthetic, web-controlled, and real-world datasets. Furthermore, the model trained with our method performs competitively with many state-of-the-art methods."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"label noise learning",
"expectation-maximisation",
"mixture models"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/39e718f6250a4d1ffcf2cdc9270d45e29131db80.pdf"
},
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Identifiability in Noisy Label Learning: A Multinomial Mixture Modelling Approach"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "032sg6mGp9",
"id": "032sg6mGp9",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission18276/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897114753,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission18276/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission18276/Authors"
]
}
|
|
2,026
|
03Ek1qDZmI
|
[
4,
4,
4,
2
] |
[
{
"content": "This paper introduces SSTP, a sample selection framework for trajectory prediction. The primary motivation is to address two challenges in existing large-scale datasets: the high computational cost of training and the imbalance where common, low-density scenarios dominate over rare, safety-critical high-density ones. The proposed method consists of two stages. First, it partitions the dataset based on scene density (number of agents) and pre-trains a model to extract gradient information. Second, it uses a submodular selection objective with these gradient-based scores to select a compact and representative subset, while explicitly up-sampling high-density scenarios. Experiments on the Argoverse 1 and 2 datasets show that training on a 50% subset selected by SSTP can achieve comparable performance to training on the full dataset, while significantly improving performance in high-density scenes.",
"id": "MFO5ZWKx5H",
"rating": 4
},
{
    "content": "This paper proposes SSTP, a two-stage sample selection framework that constructs a compact yet density-balanced dataset for trajectory prediction. It consists of two stages: (i) partition the data by scene density, and (ii) select a compact and density-balanced subset via gradient-based scores and a submodular objective. The goal is to reduce training time and mitigate long-tail imbalance. Across the Argoverse 1 and 2 datasets and several backbones (HiVT, HPNet, QCNet, DeMo), SSTP claims comparable average metrics to full-data training while improving high-density performance with around a 50% data budget.",
"id": "qHWZoAZ0KL",
"rating": 4
},
{
"content": "The paper aims to address an important problem of reducing dependency on large-scale datasets in trajectory prediction, particularly under imbalanced data distributions.",
"id": "YgKroHHCl9",
"rating": 4
},
{
    "content": "This paper proposes SSTP, a framework designed to improve data efficiency and scene-density balance in trajectory prediction. The authors observe that existing large-scale trajectory prediction datasets are heavily imbalanced, with low-density scenarios dominating and high-density cases underrepresented. SSTP tackles this issue through a two-stage process: density-based partitioning of the dataset and gradient-based submodular selection to identify representative samples within each partition. Experiments on Argoverse 1 and Argoverse 2 show that SSTP achieves comparable performance to full-dataset training while reducing training cost and improving performance in high-density scenarios.",
"id": "h99kgB8KYg",
"rating": 2
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "ZrgOVhMZcB",
"rating": null
}
] |
{
"cdate": 1757189578927,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nyang2025sstp,\ntitle={{SSTP}: Efficient Sample Selection for Trajectory Prediction},\nauthor={Ruining Yang and Yi Xu and Yun Fu and Lili Su},\nyear={2025},\nurl={https://openreview.net/forum?id=03Ek1qDZmI}\n}"
},
"abstract": {
    "value": "Trajectory prediction is a core task in autonomous driving. However, training advanced trajectory prediction models on existing large-scale datasets is both time-consuming and computationally expensive. More critically, these datasets are highly imbalanced in scenario density, with normal driving scenes (low-moderate traffic) overwhelmingly dominating the datasets, while high-density and safety-critical cases are underrepresented. As a result, models tend to overfit low/moderate-density scenarios and perform poorly in high-density scenarios. To address these challenges, we propose the SSTP framework, which constructs a compact yet density-balanced dataset tailored to trajectory prediction. SSTP consists of two main stages: (1) Extraction, where a baseline model is pretrained for a few epochs to obtain stable gradient estimates, and the dataset is partitioned by scenario density. (2) Selection, where gradient-based scores and a submodular objective select representative samples within each density category, while biased sampling emphasizes rare high-density interactions to avoid dominance by low-density cases. This approach significantly reduces the dataset size and mitigates scenario imbalance, without sacrificing prediction accuracy. Experiments on the Argoverse 1 and Argoverse 2 datasets with recent state-of-the-art models show that SSTP achieves comparable performance to full-dataset training using only half the data while delivering substantial improvements in high-density traffic scenes and significantly reducing training time. Robust trajectory prediction depends not only on data scale but also on balancing scene density to ensure reliable performance under complex multi-agent interactions. The code is available at https://anonymous.4open.science/r/SSTP_v2-69E5/README.md."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Ruining_Yang1",
"~Yi_Xu9",
"~Yun_Fu1",
"~Lili_Su1"
]
},
"authors": {
"value": [
"Ruining Yang",
"Yi Xu",
"Yun Fu",
"Lili Su"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"data efficiency",
"trajectory prediction"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "yang|sstp_efficient_sample_selection_for_trajectory_prediction"
},
"pdf": {
"value": "/pdf/55bd982183b342ab8876bf09c69dfa0fea486112.pdf"
},
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "SSTP: Efficient Sample Selection for Trajectory Prediction"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "03Ek1qDZmI",
"id": "03Ek1qDZmI",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission2669/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1762981127212,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission2669/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission2669/Authors"
]
}
|
|
2,026
|
03MfCNn3pF
|
[
2,
4,
2,
6
] |
[
{
"content": "This paper presents PersonalQ, a two-stage system for personalized diffusion model serving. Check-in selects the intended personalized checkpoint via metadata reasoning and LLM-based prompt clarification, while Trigger-Aware Quantization (TAQ) preserves trigger-token features during quantization to maintain generation quality. Experiments on 1,000 checkpoints show improved selection accuracy and memory reduction.",
"id": "YFSuFNpwRu",
"rating": 2
},
{
"content": "The authors explore a setup where a system consists of hundreds of LoRA checkpoints obtained through fine-tuning of a diffusion model. A user interacts with this system via natural language prompts, without employing specific trigger words associated with individual LoRAs. Firstly, the ambiguity of selecting the best-fit LoRA is addressed through LLM interaction with LoRA-related metadata and clarification questions that are posed to the user. Furthermore, memory constraints are discussed through a new quantization strategy, TAQ, which omits quantization for trigger-word-related K/V rows. This approach is motivated by the observation that trigger-word related tokens are particularly vulnerable to quantization error.",
"id": "qiuWkGz332",
"rating": 4
},
{
"content": "This paper addresses the important and practical problem of how to use the large, community-driven repositories of personalized generative models according to user intent. The authors identify that personalized models are highly sensitive to quantization, particularly their \"trigger tokens\" (which invoke specific objects or styles), and that naive quantization degrades quality.\nTo overcome this, they propose TAQ (Trigger-Aware Quantization). Concurrently, they propose \"Check-in,\" a retrieval and selection framework to find desired checkpoints from large repositories based on user queries, and introduce the \"Repo-Prompt\" benchmark to evaluate such retrieval methods. The authors report that TAQ achieves quality close to full precision despite weight reduction, and that \"Check-in\" achieves an 89% win rate in human preference studies.",
"id": "P8eaiMc2Y0",
"rating": 2
},
{
"content": "This manuscript proposes PersonalQ, an interesting framework that addresses ambiguous user-prompt matching and quantization-induced model degradation in personalized text-to-image model deployment. The authors introduce Check-in for checkpoint analysis and Trigger-Aware Quantization (TAQ) for high-quality inference. The authors also introduce the Repo-Prompt benchmark, and experiments on the benchmark demonstrate the superiority of the proposed method.",
"id": "6Bqe71rjef",
"rating": 6
}
] |
{
"cdate": 1757994763056,
"content": {
"TLDR": {
"value": "PersonalQ enables efficient serving of personalized diffusion models at scale through intelligent checkpoint selection and trigger-token-aware quantization that preserves personalization quality while reducing memory footprint."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025personalq,\ntitle={PersonalQ: Select, Quantize, and Serve Personalized Diffusion Models for Efficient Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03MfCNn3pF},\nnote={under review}\n}"
},
"abstract": {
"value": "Personalized text-to-image generation enables users to create custom AI models that generate their unique concepts—specific objects or artistic styles—achieving unprecedented creative control. However, deploying a large repository of personalized checkpoints faces two critical challenges: (1) ambiguous user prompts make it difficult to match the intended checkpoint in large repositories, and (2) standard post-training quantization methods degrade personalized diffusion checkpoints’ image quality. We analyze the importance of reasoning over checkpoint metadata and clarifying user prompts for intent-aligned checkpoint selection. Additionally, we find that trigger tokens for personalized diffusion play a crucial role in quantization. To address the challenges, we propose PersonalQ, a unified system with two components: Check-in analyzes checkpoint repositories and clarifies user intent for intent-aligned selection, and TAQ (Trigger-Aware Quantization), which protects the trigger-token-related representation to deliver high-quality inference from the chosen checkpoint under quantization. On our Repo-Prompts benchmark, PersonalQ achieves an 89% checkpoint-selection preference win rate and a 4.42/5 intent score. Across benchmarks, TAQ reduces inference memory by up to 75% while maintaining strong text-image alignment (CLIP score 0.297 vs. 0.315 at full precision) and image fidelity (FID 11.03 at W8A8 vs. 10.96 at full precision), enabling scalable deployment of personalized models without compromising quality."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Personalized text-to-image generation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/50f61b6537bdaf1e298c0bcf4390b40ad56a54eb.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/4878d33f88b5ea78ce8e4633adfff8251e992811.zip"
},
"title": {
"value": "PersonalQ: Select, Quantize, and Serve Personalized Diffusion Models for Efficient Inference"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03MfCNn3pF",
"id": "03MfCNn3pF",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission6759/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897895805,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission6759/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission6759/Authors"
]
}
|
|
2,026
|
03QzvMzxVM
|
[
2,
4,
4,
4
] |
[
{
"content": "This work presents Robust-NLL, which serves as a plug-and-play loss replacing vanilla NLL loss for robust uncertainty-aware training against label-space outliers. The proposed loss function uses softmax reweighting over sample losses to filter out outliers. The author also provides theoretical analysis and empirical verification of their proposed method.",
"id": "ObgeLTHjtu",
"rating": 2
},
{
"content": "The authors study uncertainty estimation for regression.\n\nThey propose Robust-NLL, a simple and intuitive modification of the standard NLL loss that weighs each loss term with a softmax weight computed across the batch. Robust-NLL is supposed to make the model training more robust to outliers in the train labels.\n\nThey evaluate Robust-NLL on two synthetic 1D regression examples, and on a visual localization dataset. They compare the performance with standard NLL and two NLL variants.",
"id": "H8bucPeIyD",
"rating": 4
},
{
"content": "This paper proposes a robust uncertainty-aware learning approach in which the NLL loss of each training sample is weighted through a temperature-dependent softmax distribution. They provide theoretical analysis of their proposed approach and demonstrate the method's efficacy in three different tasks ranging from simple linear regression to visual localization.",
"id": "Nhxc4sQpSR",
"rating": 4
},
{
"content": "This paper introduces Robust-NLL, a modified loss function that improves uncertainty estimation in neural networks when training data contains outliers. The method uses Boltzmann weighting to down-weight noisy samples while maintaining compatibility with standard training procedures—requiring no architectural changes or additional parameters. Experiments on synthetic and real-world tasks show improvements in both prediction accuracy and uncertainty calibration compared to standard negative log-likelihood training.",
"id": "Ldzrt1maqB",
"rating": 4
}
] |
{
"cdate": 1758019401870,
"content": {
"TLDR": {
"value": "We introduce Robust-NLL for modeling uncertainty under the presence of outliers."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025robust,\ntitle={Robust Uncertainty-Aware Learning via Boltzmann-weighted {NLL}},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03QzvMzxVM},\nnote={under review}\n}"
},
"abstract": {
"value": "Uncertainty estimation is critical for deploying deep learning models in high-stakes applications such as autonomy and decision-making. While prior works on data uncertainty modeling estimate aleatoric uncertainty by minimizing the negative log-likelihood (NLL) loss, they often fail under the presence of outliers. To address this limitation, we introduce Robust-NLL, a drop-in replacement for vanilla NLL that filters noisy or adversarial samples. Robust-NLL learns robust uncertainty estimates in neural networks through a Boltzmann-weighted NLL loss that requires no architectural changes, additional parameters, or iterative procedures, and acts as a plug-and-play loss function that maintains full differentiability and mini-batch compatibility. We evaluate our approach on synthetic regression tasks and real-world visual localization benchmarks with injected outliers. Experimental results demonstrate that simply replacing NLL with Robust-NLL consistently improves both prediction accuracy and reliability of uncertainty estimates, achieving substantial performance gains across diverse tasks and architectures."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"robust estimation",
"uncertainty estimation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/444e8304cd012c1ab5fb9f3ae96a85fe575c79e2.pdf"
},
"primary_area": {
"value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Robust Uncertainty-Aware Learning via Boltzmann-weighted NLL"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03QzvMzxVM",
"id": "03QzvMzxVM",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission7389/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897855752,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission7389/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission7389/Authors"
]
}
|
|
2,026
|
03ccrSpjOx
|
[
4,
4,
4,
6
] |
[
{
"content": "The paper studies how deliberation format shapes value expression and consensus in LLM-LLM debates over everyday moral dilemmas. Using 1,000 AITA cases, the authors run pairwise and three-way debates among GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.0 Flash in two settings: synchronous (parallel) and round-robin (sequential). They quantify model inertia and conformity via a multinomial model, analyze verdict change rates, and classify values in explanations using a pruned set of 48 values drawn from “Values in the Wild” (Anthropic) with a separate judge model. Prompt tweaks that explicitly encourage consensus increase revision but do not dramatically raise consensus rates. The paper argues that sociotechnical alignment depends on interaction protocol, not only on single-turn outputs.",
"id": "PaMZ7VFZkc",
"rating": 4
},
{
"content": "This work studies values elicited from multi-agent debate verdicts, arriving at interesting conclusions across multiple deliberation formats and models. Experiments are done on 1000 questions from the AITA reddit community, with debates from models in {GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.0 Flash}. Results cover aspects including consensus-forming, values orientations, effects of deliberation format and effects of system-prompt-steering.",
"id": "3W5XtvSQuy",
"rating": 4
},
{
"content": "This paper collects 1k everyday dilemmas from Reddit's r/AITA community as the basis for simulated LLM debates. They developed two settings for model pairs (synchronous: each model comments its verdict in parallel; head-to-head: models respond one by one). They tested three models (GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.0 Flash) for order effects and verdict revision. They show some behavioural differences (e.g., Gemini 2.0 Flash prioritized empathy more).",
"id": "3E32I36km9",
"rating": 4
},
{
"content": "The proposed approach leverages debate tactics to determine if deliberative dynamics in multi-turn settings impact the socio-technical evaluation of LLMs. In particular, the authors leverage everyday situations from the Reddit AITA community as seed situations. Their findings report how deliberation impacts model behavior.",
"id": "hVZiWqQ28N",
"rating": 6
}
] |
{
"cdate": 1758148909076,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025deliberative,\ntitle={Deliberative Dynamics and Value Alignment in {LLM} Debates},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03ccrSpjOx},\nnote={under review}\n}"
},
"abstract": {
"value": "As large language models (LLMs) are increasingly deployed in sensitive everyday contexts -- offering personal advice, mental health support, and moral guidance -- understanding their elicited values in navigating complex moral reasoning is essential. Most evaluations study this sociotechnical alignment through single-turn prompts, but it is unclear if these findings extend to multi-turn settings where values emerge through dialogue, revision, and consensus. We address this gap using LLM debate to examine deliberative dynamics and value alignment in multi-turn settings by prompting subsets of three models (GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.0 Flash) to collectively assign blame in 1,000 everyday dilemmas from Reddit's \"Am I the Asshole\" community. We use both synchronous (parallel responses) and round-robin (sequential responses) formats to test order effects and verdict revision. Our findings show striking behavioral differences. In the synchronous setting, GPT showed strong inertia (0.6-3.1% revision rates) while Claude and Gemini were far more flexible (28-41%). Value patterns also diverged: GPT emphasized personal autonomy and direct communication, while Claude and Gemini prioritized empathetic dialogue. Certain values proved especially effective at driving verdict changes. We further find that deliberation format had a strong impact on model behavior: GPT and Gemini stood out as highly conforming relative to Claude, with their verdict behavior strongly shaped by order effects. These results show how deliberation format and model-specific behaviors shape moral reasoning in multi-turn interactions, underscoring that sociotechnical alignment depends on how systems structure dialogue as much as on their outputs."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"sociotechnical alignment",
"multi-agent debate",
"multi-turn interaction"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/53b15162b8d0641d663ed2799ca10373fb23b76b.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Deliberative Dynamics and Value Alignment in LLM Debates"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03ccrSpjOx",
"id": "03ccrSpjOx",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission9918/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897686075,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission9918/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission9918/Authors"
]
}
|
|
2,026
|
03fFxN6Orj
|
[
4,
2,
4
] |
[
{
"content": "This paper proposed the Adviser-Actor-Critic (AAC) framework, targeting steady-state error reduction for high-precision robotic control tasks in reinforcement learning. AAC augments standard actor-critic architectures with an additional “adviser” module, implemented as a PI controller, that generates dynamically adjusted “virtual goals” to help the actor refine actions and reduce residual errors. The authors present a clear control-theoretic motivation, rigorous mathematical proof of zero steady-state error for constant references, and comprehensive empirical validation on both simulated (Gymnasium-Robotics benchmark tasks) and real-world (quadcopter attitude control) robotic platforms. Experimental results indicate that AAC achieves significant improvements in steady-state tracking error relative to baselines, including >80% error reduction across several benchmark tasks.",
"id": "PxTfOAWPdF",
"rating": 4
},
{
"content": "The paper introduces Adviser-Actor-Critic (AAC), a framework that adds a classical PI controller (adviser) to a standard goal-conditioned reinforcement learning (RL) agent to reduce steady-state error (SSE) in robotic control tasks. The adviser modifies the goal given to the RL agent, creating a \"virtual goal\" that pushes the agent to overcompensate for and thereby eliminate residual tracking errors.",
"id": "AnZBZMVoG2",
"rating": 2
},
{
"content": "The paper proposes Adviser-Actor-Critic (AAC), a hybrid reinforcement learning and control framework that introduces an “adviser” which generates virtual goals to compensate steady-state tracking errors. The adviser is instantiated as a proportional–integral controller that proposes a virtual goal to a goal-conditioned policy. The method is evaluated in six gymnasium-robotics environments and on a real quadcopter attitude-control task, reporting sizable reductions in steady-state error. The paper also presents a theoretical argument for steady-state error elimination under several assumptions.",
"id": "h0YMdMVPhG",
"rating": 4
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "Phq3Vz4xJs",
"rating": null
}
] |
{
"cdate": 1758271601146,
"content": {
"TLDR": {
"value": "Adviser-Actor-Critic (AAC) combines reinforcement learning with a novel adviser to generate virtual goals, effectively reducing steady-state errors by over 80% in high-precision robotic control tasks."
},
"_bibtex": {
"value": "@misc{\nchen2025adviseractorcritic,\ntitle={Adviser-Actor-Critic: Reducing Steady-State Error in Reinforcement Learning for Robotics Control},\nauthor={Donghe Chen and Jiaxuan Yue and Yubin Peng and Tengjie Zheng and Han Wang and Chaoran Qu and Lin Cheng},\nyear={2025},\nurl={https://openreview.net/forum?id=03fFxN6Orj}\n}"
},
"abstract": {
"value": "High-precision control tasks present substantial challenges for reinforcement learning (RL) algorithms, frequently resulting in suboptimal performance attributed to network approximation inaccuracies and inadequate sample quality. While existing RL frameworks can achieve task completion at coarse precision levels, steady-state tracking errors remain a critical limitation that prevents achieving sub-hardware-level precision. We introduce Adviser-Actor-Critic (AAC), designed to address this precision control dilemma by combining the precision of feedback control theory with the adaptive learning capability of RL and featuring an Adviser that mentors the actor to refine control actions, thereby enhancing the precision of goal attainment. Through extensive benchmark environments from gymnasium-robotics, coupled with real-world quadcopter attitude control, AAC significantly outperforms standard RL algorithms in precision-critical tasks while demonstrating an average $>80\\%$ steady-state error reduction compared to baseline methods."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Donghe_Chen1",
"~Jiaxuan_Yue2",
"~Yubin_Peng1",
"~Tengjie_Zheng1",
"~Han_Wang17",
"~Chaoran_Qu1",
"~Lin_Cheng7"
]
},
"authors": {
"value": [
"Donghe Chen",
"Jiaxuan Yue",
"Yubin Peng",
"Tengjie Zheng",
"Han Wang",
"Chaoran Qu",
"Lin Cheng"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"reinforcement learning",
"robotics",
"control system"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "chen|adviseractorcritic_reducing_steadystate_error_in_reinforcement_learning_for_robotics_control"
},
"pdf": {
"value": "/pdf/635d6df0d70e8cc046d12fa468fe1667715b0a02.pdf"
},
"primary_area": {
"value": "reinforcement learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Adviser-Actor-Critic: Reducing Steady-State Error in Reinforcement Learning for Robotics Control"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "03fFxN6Orj",
"id": "03fFxN6Orj",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1762955287461,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission17048/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission17048/Authors"
]
}
|
|
2,026
|
03jzVlLxEe
|
[
6,
6,
4,
4
] |
[
{
"content": "The authors propose **NERVE**, a noise- and variability-robust EEG foundation model designed to address key challenges in EEG analysis, including low signal-to-noise ratios (SNR), high inter-sample variability, and spatial dependencies arising from electrode placement in acquisition systems. The proposed framework consists of three core components. First, a **noise-robust neural tokenizer** encodes EEG patches into discrete neural tokens. Second, a **variability-robust pretraining strategy** enforces alignment and uniformity in the representation space to improve robustness against distributional shifts. Third, an **electrode-position–aware (EPA) transformer** serves as the backbone for both the tokenizer and the foundation model, explicitly modeling the spatial structure of EEG channels.",
"id": "5Z3MN1JjSC",
"rating": 6
},
{
"content": "The paper proposes NERVE, a novel EEG foundation model designed to address key acquisition-related challenges of EEG signals: low signal-to-noise ratio, high inter- and intra-subject variability, and spatial dependencies among electrodes. By introducing a noise-robust neural tokenizer, a variability-robust pretraining objective, and an electrode-position-aware transformer architecture, NERVE demonstrates competitive performance across multiple BCI tasks and improved robustness compared to existing foundation models.",
"id": "wolwjwfHoQ",
"rating": 6
},
{
"content": "This paper highlights the importance of robustness to noise and intra-subject variability in EEG foundation models. To address these challenges, the authors designed specialized modules—such as the EPA transformer and the noise-robust tokenizer—as well as tailored learning objectives, including masked codebook reconstruction with KoLeo regularization, to enhance model robustness. Their robustness analysis reveals that existing EEG foundation models often produce unstable representations for the same class and struggle to disentangle subject-specific from class-specific information. In contrast, the proposed approach demonstrates improved stability and resilience to variability. Overall, the paper raises important awareness of the diverse sources of noise, variability, and artifacts that EEG foundation models must effectively account for.",
"id": "wApgpd5kOO",
"rating": 4
},
{
"content": "This paper proposes NERVE, a noise- and variability-robust EEG foundation model that explicitly addresses three acquisition-related challenges: low signal-to-noise ratio, high inter- and intra-subject variability, and spatial dependencies among electrodes. NERVE introduces a noise-robust neural tokenizer trained via denoising temporal–spectral prediction, a variability-robust pre-training objective using KoLeo regularization, and an electrode-position-aware (EPA) transformer to capture spatial structure. Evaluated on multiple downstream BCI tasks, NERVE demonstrates competitive performance and improved robustness to noise and variability compared to existing EEG foundation models.",
"id": "dxvAusftiR",
"rating": 4
}
] |
{
"cdate": 1758337883115,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025nerve,\ntitle={{NERVE}: Noise-Variability-Robust {EEG} Foundation Model with Electrode-Brain Interactions},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03jzVlLxEe},\nnote={under review}\n}"
},
"abstract": {
"value": "Electroencephalography (EEG) is an indispensable modality for measuring and recording brain electrical activity, with broad applications in brain–computer interfaces (BCI) and healthcare. While early EEG models predominantly adopted supervised learning methods due to the scarcity of large-scale datasets and the heterogeneity across tasks and datasets, the recent success of large foundation models has driven increasing efforts to build EEG foundation models. However, most existing studies focus on handling signals with varying formats while overlooking inherent characteristics of EEG signals during acquisition, including low signal-to-noise ratios (SNR), high variability across samples, and spatial dependencies arising from electrode placement within the acquisition system. To address these challenges, we propose NERVE, a novel noise-variability-robust EEG foundation model with electrode-brain interactions. Specifically, pre-training of NERVE begins with learning a noise-robust neural tokenizer that encodes EEG patches into discrete neural tokens. The tokenizer is trained through denoising temporal–spectral prediction to reconstruct temporal and frequency information of the original signal from noise-augmented inputs. NERVE is further pretrained to predict the neural codes of masked EEG patches, integrated with a variability-robust objective that promotes uniform EEG representations. To incorporate spatial structure in EEG, we propose an electrode-position-aware transformer as the backbone for both the tokenizer and the foundation model. It enables the model to capture spatial dependencies among electrodes and brain regions via attention mechanisms. NERVE demonstrates competitive performance across diverse BCI tasks and improved robustness to noise and variability compared to existing EEG foundation models."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Foundation model",
"Electroencephalography",
"EEG",
"Self-supervised learning",
"Pre-training"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2af8f2986c76341d381f0b7aced096521dd9722f.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "NERVE: Noise-Variability-Robust EEG Foundation Model with Electrode-Brain Interactions"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03jzVlLxEe",
"id": "03jzVlLxEe",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission22991/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Edit"
],
"license": "CC BY 4.0",
"mdate": 1759896837180,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission22991/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission22991/Authors"
]
}
|
|
2,026
|
03qTI3NKqi
|
[
4,
4,
4,
4
] |
[
{
"content": "This work found that previous soft prompts often disrupted information flow and reduced reasoning performance. They argue that soft prompts should not be limited to the activation and guidance stages but should be inserted into appropriate stages to ensure smooth information flow between layers. Therefore, they proposed a Dynamic Hierarchical Awareness Mechanism (DHAM) to ensure effective coordination between the various stages of reasoning.",
"id": "iWYZVN0FL8",
"rating": 4
},
{
"content": "This paper investigates the role of soft prompt tuning in improving reasoning performance of large language models (LLMs). While previous works show that soft prompts can effectively activate prior knowledge and facilitate early reasoning, this paper observes that maintaining strong prompt influence in later reasoning stages can disrupt information flow and degrade performance. To address this issue, the paper proposes a Dynamic Hierarchy-Aware Mechanism (DHAM) that dynamically regulates soft prompts across reasoning stages. Specifically, DHAM performs hierarchical clustering to identify stage-specific representations and adaptively activates soft prompts based on semantic alignment, thereby ensuring smoother and more coherent information transmission through model layers. Experimental results demonstrate consistent improvements across different models and reasoning benchmarks. Ablation studies suggest that using CKA-based clustering and a moderate number of reasoning stages achieves the best performance, supporting the paper’s hypothesis of stable information flow as a key factor for effective reasoning.",
"id": "M0dqnNM2Ef",
"rating": 4
},
{
"content": "This paper identifies that static soft prompts (SP) can disrupt information flow when injected into middle or late layers. To address this, the paper proposes the Dynamic Hierarchy-Aware Mechanism (DHAM), which uses CKA-based clustering to group layers into functional stages and injects distinct prompts at each stage. This hierarchical alignment is shown to stabilize information flow and improve reasoning performance. However, clearer experimental evidence should be provided.",
"id": "fHO7wdUZYl",
"rating": 4
},
{
"content": "This paper proposes a novel method called Dynamic Hierarchical Awareness Mechanism (DHAM), which aims to address the issues of incoherent information flow and performance degradation in large language models (LLMs) during complex reasoning tasks due to the static injection of soft prompts. The authors found through analysis that improper prompt injection can cause severe oscillations in information propagation between model layers, disrupting the coherence of reasoning. To this end, DHAM first automatically divides the model's Transformer layers into several functionally similar semantic stages using Centered Kernel Alignment (CKA) and hierarchical clustering. Subsequently, it injects trainable soft prompts only at the starting layers of each stage, achieving phased and dynamic guidance of the information flow. Experiments show that this stage-aware injection strategy, especially the injection in the early stages, can effectively promote the smooth transfer of information and significantly improve the model's accuracy on complex reasoning tasks such as GSM8K and MATH.",
"id": "qKJXXFLd9Z",
"rating": 4
}
] |
{
"cdate": 1758191821554,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025unlocking,\ntitle={Unlocking Coherent Reasoning in {LLM}s with Hierarchical Soft Prompts},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03qTI3NKqi},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) exhibit strong reasoning capabilities in complex tasks. Soft prompt tuning, as a lightweight approach, injects trainable vectors into the input to guide the reasoning process and enhance model performance. Prior studies show that soft prompts effectively activate prior knowledge and improve problem understanding in the early stages of reasoning. However, when they continue to exert strong influence in the middle and later stages, they often disrupt the information flow and degrade reasoning performance. Based on this observation, we argue that the role of soft prompts should not be confined to a single stage of activation and guidance. Instead, they should be inserted at appropriate stages to ensure smooth information transmission across layers. Existing methods, however, typically rely on one-shot static injection and cannot dynamically regulate prompts across stages, leading to functional mismatches during reasoning. To address this limitation, we propose a dynamic hierarchy-aware mechanism (DHAM). This mechanism first employs hierarchical clustering to derive stage-specific representations, and then leverages the semantic guidance capability of soft prompts to adaptively align and activate them, ensuring effective coordination across reasoning stages. \nDHAM yields consistent gains across models and benchmarks (e.g., 29.5\\%→43.8\\% on Llama-2-13B/GSM8K), with ablations showing CKA clustering and moderate stage numbers (e.g., $G=3/4$) perform best, consistent with the stable information flow hypothesis."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Models",
"Complex Reasoning",
"Soft Prompt Tuning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/511e5f43840e80d2617f1692ac8a2bf18b3b16d7.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Unlocking Coherent Reasoning in LLMs with Hierarchical Soft Prompts"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03qTI3NKqi",
"id": "03qTI3NKqi",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission11167/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897603181,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission11167/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission11167/Authors"
]
}
|
|
2,026
|
03u504EDJp
|
[
2,
4,
6,
2,
2
] |
[
{
"content": "This paper introduces APO, a new framework for distilling reasoning capabilities from multiple MLLMs that exhibit conceptual drift, defined as variability in their reasoning behaviors or conclusions. The core idea is that APO aggregates all available reasoning trajectories and learns to prefer the self-distillation as positive signals against all negative trajectories. This approach treats distillation as a preference optimization problem, aligning the student model’s reasoning trajectory with the highest-quality outputs among drifting teachers, in a “learn-compare-critique” paradigm. The method is tested on a newly constructed dataset, CXR-MAX, based on chest X-ray interpretation, and shows improvements in accuracy.",
"id": "ogYTGtXZT6",
"rating": 2
},
{
"content": "This paper discusses the concept drift problem in knowledge distillation of multimodal large language models (MLLM). Through the analysis of the connection between concept drift and knowledge distillation, the authors introduce the “learn–compare–critique” paradigm to tackle the issue. The resulting method, autonomous preference optimization (APO), trains the student with self relection over the drifting inference for concept alignment. Experiments demonstrate the effectiveness of APO on knowledge distillation tasks. The authors also contribute to a large-scale dataset called CXR-MAX.",
"id": "tQNQ14YJUU",
"rating": 4
},
{
"content": "The paper studies the problem of knowledge distillation from multiple multimodal large language models (MLLMs). The authors observe that the reasoning trajectories of different teacher models can change inconsistently across models or over time, and that such concept drift can propagate to student models during distillation. To address this issue, the paper proposes a “learn–compare–critique” pipeline. The student model first learns from multiple MLLM teachers; then it performs self-distillation to align and identify inconsistent teacher outputs. Finally, through a preference optimization step, the student reinforces alignment with stable reasoning outputs while down-weighting drifted or biased outputs.\n\nFor experiments, the authors construct the CXR-MAX dataset, which is an extension of the MIMIC-CXR dataset by adding reasoning trajectories about clinical chest X-ray interpretation from multiple MLLM teachers. Results show that the proposed method outperforms other existing distillation methods, while achieving performance comparable to or exceeding that of the original teacher models.",
"id": "Eu39zKqsIm",
"rating": 6
},
{
"content": "- This paper aims to address the challenge of knowledge distillation from multiple, heterogeneous MLLMs. The main challenge is the concept drift problem, where the teacher models provide conflicting information that can confuse the student model.\n- To tackle this, this work proposes a novel three-stage \"learn-compare-critique\" paradigm called Autonomous Preference Optimization (APO). The student model first learn a broad knowledge via standard supervised distillation from all teachers. Second, it compares and aggregates the teachers' outputs and performs self-distillation to generate a unified reasoning trajectory. Finally, it critiques the initial knowledge by using the consensus trajectory as a preferred sample and the individual teacher outputs as negative samples using a simple contrastive learning loss.",
"id": "uJUjaPDF61",
"rating": 2
},
{
"content": "This paper addresses the underexplored problem of knowledge distillation from multiple drifting MLLMs, where inconsistent reasoning trajectories across teachers cause concept drift and bias propagation. The authors propose APO, a “learn–compare–critique” paradigm that enables the student model to self-distill and align reasoning concepts autonomously. Experiments show that this method has certain effectiveness.",
"id": "YpLGCI4tXU",
"rating": 2
}
] |
{
"cdate": 1756744193214,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025learning,\ntitle={Learning from All: Concept Alignment for Autonomous Distillation from Multiple Drifting {MLLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03u504EDJp},\nnote={under review}\n}"
},
"abstract": {
"value": "This paper identifies a critical yet underexplored challenge in distilling from multi-modal large language models (MLLMs): the reasoning trajectories generated by multiple drifting teachers exhibit concept drift, whereby their reasoning distributions evolve unpredictably and transmit biases to the student model, ultimately compromising its performance. To tackle this issue, we pioneer a theoretical connection between concept drift and knowledge distillation, casting the non-stationary reasoning dynamics from multiple MLLM teachers as next-token prediction of multi-stream reasoning trajectories. Guided by concept drift, we introduce the “learn–compare–critique” paradigm, culminating in autonomous preference optimization (APO). Under the active guidance of the teachers, the student model first learns and self-distils preferred thinking by comparing multiple teachers. It then engages in critical reflection over the drifting inference from teachers, performing concept alignment through APO, ultimately yielding a robust, consistent, and generalizable model. Extensive experiments demonstrate our superior performance of consistency, robustness and generalization within knowledge distillation. Besides, we also contributed a large-scale dataset CXR-MAX (Multi-teachers Alignment X-rays), comprising 170,982 distilled reasoning trajectories derived from publicly accessible MLLMs based on MIMIC-CXR. Our code and data are public at: https://anonymous.4open.science/r/Autonomous-Distillation/."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"concept drift",
"transfer learning",
"multi view",
"knowledge distillation",
"multi modal large language model"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fe4866ea94ed809fb98d3d8b49b15b242306766f.pdf"
},
"primary_area": {
"value": "learning theory"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/d3f2bf191b959b040fec6edae75de60b04403059.pdf"
},
"title": {
"value": "Learning from All: Concept Alignment for Autonomous Distillation from Multiple Drifting MLLMs"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03u504EDJp",
"id": "03u504EDJp",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission525/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898255701,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission525/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission525/Authors"
]
}
|
|
2,026
|
040ClRXMf3
|
[
6,
8,
2,
8
] |
[
{
"content": "This paper proposes a new algorithm to extract cardinal-minimal sufficient explanations for Neural Additive Models (NAMs).\nIt does so by exploiting key design choices of NAMs, showing how this family of models supports explanations with guarantees.\n\nThis is achieved as follows. First, the paper introduces a method to rank features based on how much they influence the final prediction. Then, after this ranking is obtained, an algorithm is discussed to exploit this order to efficiently explore which features to remove from the current sufficient explanation until a cardinal-minimal explanation is obtained.",
"id": "MthqrzzFcv",
"rating": 6
},
{
"content": "This paper presents a novel algorithm for computing provably cardinality-minimal explanations for Neural Additive Models (NAMs). The authors focus on post-hoc, per-instance explanations: given a trained NAM f and an input x, they seek to compute a subset of features S \\subseteq [n]that is sufficient to guarantee the same prediction under bounded perturbations of the remaining features (an\n\\epsilon-ball). Among all sufficient subsets, the goal is to find one of minimum cardinality (the global optimum).\n\nThe paper provides a novel contribution to the state-of-the-art in the broad area of explainability with provable guarantees (in this case, minimality). The paper focuses on NAMs, which to the best of my knowledge it is still a Still a niche but growing area in the interpretability subfield. They are Not widely used in industry production pipelines yet. but research interest persists. In fact, (NAMs) occupy an interesting middle ground in machine learning — they’re not mainstream, but they are important in specific contexts where interpretability and nonlinear modelling both matter. Their main limitation is that in a pure NAM, features don’t interact directly because the model assumes additivity. This means that the effect of each feature x_i on the output y is independent of any other feature x_j, which can be a strong limitation in some practical settings.\n\nThe proposed algorithm proceeds in two stages. In the first stage each univariate subnetwork f_i(x_i) is verified independently to estimate its influence on the model’s decision. This is done via parallelised binary search over feature importance intervals. In Stage 2, after sorting features by importance, a binary search identifies the globally cardinal-minimal sufficient subset of features that provably determines the model’s prediction. 
This reduces complexity from exponentially many calls to the network to logarithmically many.\n\nExperiments on standard tabular benchmarks demonstrate feasibility and show smaller, faster provable explanations than prior methods; sampling-based visualisations were also shown to be unreliable in some cases, whereas the proposed method always produces verifiably sufficient explanations.\n\nOverall, I am supportive of this paper. It makes a meaningful and well-justified contribution to formal explainability by showing that NAMs enable efficient computation of globally minimal sufficient explanations -- something previously infeasible for general neural networks. With minor revisions, I feel that this paper is a valuable contribution to the state of the art.",
"id": "vX0lZAngLb",
"rating": 8
},
{
"content": "This paper focuses on explainable artificial intelligence and aims to provide concise explanations for the predictions made by Neural Additive Models (NAMs). The primary issue addressed in this study is as follows: given a classifier $ f $ represented by a NAM, an input data instance $ x $ that requires an explanation, and a ball $ B $ centered at $ x $, the goal is to identify a feature subset $ S $ of the minimum size. This subset must ensure that for every instance $ z $ within the ball $ B $, if the values of $ z $ and $ x $, restricted to the features in $ S $, are indistinguishable, then the classifications made by $ f $ for both $ z $ and $ x $ are the same. Such an explanation $ S $ is referred to as a (ball-restricted) minimum-size abductive explanation or a minimum-size sufficient reason.\n\nTo address this problem, the authors propose a two-stage method. In the first stage, the univariate functions $ f_i $ are sorted based on their importance intervals. In the second stage, a minimal-size explanation $ S $ is derived using a greedy approach. The paper includes formal proofs for the correctness and complexity of this method, and it presents comparative experiments conducted on four different datasets that support the theoretical findings.",
"id": "jO4h06yElo",
"rating": 2
},
{
"content": "A computationally-efficient, novel method to compute explanations with provable guarantees for Neural Additive Models (NAMs). The explanations are guaranteed to be the smallest in size, globally. The method claims to be efficient in generating such certified explanations.",
"id": "EWw1viy5FE",
"rating": 8
}
] |
{
"cdate": 1758298867680,
"content": {
"TLDR": {
"value": "Our approach constructs provably sufficient and (globally) cardinal-minimal explanations for neural additive models with improved runtime complexity."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025provably,\ntitle={Provably Explaining Neural Additive Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=040ClRXMf3},\nnote={under review}\n}"
},
"abstract": {
"value": "Despite significant progress in post-hoc explanation methods for neural networks, many remain heuristic and lack provable guarantees. A key approach for obtaining explanations with provable guarantees is by identifying a *(globally) cardinal-minimal* subset of input features which by itself is *provably sufficient* to determine the prediction. However, for standard neural networks, this task is often computationally infeasible, as it demands a worst-case *exponential* number of verification queries in the number of input features, each of which is NP-hard. In this work, we show that for Neural Additive Models (NAMs), a recent and more interpretable neural network family, we can *efficiently* generate explanations with such guarantees. We present a new model-specific algorithm for NAMs that generates provably (globally) cardinal-minimal explanations using only a *logarithmic* number of verification queries in the number of input features, after a parallelized preprocessing step with logarithmic runtime in the required precision is applied to each small univariate NAM component. Our algorithm not only makes the task of obtaining (globally) cardinal minimal explanations feasible, but even outperforms existing algorithms designed to find *(locally) subset-minimal* explanations -- which may be larger and less informative but easier to compute -- despite our algorithm solving a much more difficult task. Our experiments demonstrate that, compared to previous algorithms, our approach provides provably smaller explanations than existing works and substantially reduces the computation time. Moreover, we show that our generated provable explanations offer benefits that are unattainable by standard sampling-based techniques typically used to interpret NAMs."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"explainability",
"XAI",
"explainable AI",
"formal verification",
"sufficient explanations"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d5a73d9cf5e02a90d26e33e9057769ff66ff64fa.pdf"
},
"primary_area": {
"value": "interpretability and explainable AI"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/688a5ff66ccb15d28a06f568b0f04b60f4413e61.zip"
},
"title": {
"value": "Provably Explaining Neural Additive Models"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "040ClRXMf3",
"id": "040ClRXMf3",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission19723/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897022892,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission19723/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission19723/Authors"
]
}
|
|
2,026
|
04HwYGgp2w
|
[
6,
8,
6,
6
] |
[
{
"content": "In this paper,the authors introduces ImageDoctor, a unified,multi-aspect evaluation framework for Text-to Image(T2I) models. Unlike previous methods that provide a single scalar, ImageDoctor assesses image quality across four dimensions: plausibility, semantic alignment, aesthetics, and overall quality.ImageDoctor also provides pixel-level flaw indicators in the form of heatmaps, which highlight misaligned or implausible regions, and can be used as a dense reward for T2I model preference alignment. The model is built on a multi-modal large language models(MLLMs) and adopts a “look-think-predict” paradigm. Training involves a two-phase process: cold start and reinforcement finetuning with Group Relative Policy Optimization(GRPO) using tailored rewards. Furthermore, the paper proposes DenseFlow-GRPO, which utilizes ImageDoctor’s dense, pixel-level heatmaps as a dense reward signal. Experiments demonstrates that ImageDoctor achieves strong alignment with human preference across multiple datasets. Furthermore,when used as a reward model for preference tuning, ImageDoctor achieves an improvement of 10% over scalar-based reward models.",
"id": "yxpL47YNqW",
"rating": 6
},
{
"content": "This paper proposes a novel VLM-based evaluation framework for text-to-image generation, named ImageDoctor. ImageDoctor not only provides multi-dimensional scoring capabilities, such as aesthetics and text-image alignment, but also offers pixel-level localization of flawed regions, enabling it to actively identify areas of misalignment and visual implausibility. Notably, the latter capability introduces a fresh perspective for reward modeling in text-to-image generation. Combined with the authors' proposed DenseFlow-GRPO method, which leverages pixel-level supervision signals for reinforcement learning, the framework effectively enhances the performance of image generation models.",
"id": "t3QGxQNWMn",
"rating": 8
},
{
"content": "This paper proposes ImageDoctor, a unified framework for Text-to-Image (T2I) evaluation that simultaneously outputs multi-aspect scores and spatially grounded heatmaps, offering richer and more interpretable feedback than traditional single-scalar assessments. The paper also introduces DenseFlow-GRPO, a method for T2I model fine-tuning, with experimental results demonstrating the value of pixel-level feedback in improving evaluation accuracy and eliminating local artifacts.",
"id": "ldm65H1lzA",
"rating": 6
},
{
"content": "This paper presents ImageDoctor, a unified and interpretable evaluation framework for text-to-image generation. ImageDoctor provides multi-dimensional feedback and introduces pixel-level diagnostic heatmaps for grounded and fine-grained evaluation. The model adopts a \"look-think-predict\" paradigm. Experimental results show that ImageDoctor achieves state-of-the-art correlation with human judgments and improves text-to-image generation quality.",
"id": "w94QtyzdrS",
"rating": 6
}
] |
{
"cdate": 1757544654492,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025imagedoctor,\ntitle={ImageDoctor: Diagnosing Text-to-Image Generation via Grounded Image Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=04HwYGgp2w},\nnote={under review}\n}"
},
"abstract": {
"value": "The rapid advancement of text-to-image (T2I) models has increased the need for reliable human preference modeling, a demand further amplified by recent progress in reinforcement learning for preference alignment. However, existing approaches typically quantify the quality of a generated image using a single scalar, limiting their ability to provide comprehensive and interpretable feedback on image quality. To address this, we introduce ImageDoctor, a unified multi-aspect T2I model evaluation framework that assesses image quality across four complementary dimensions: plausibility, semantic alignment, aesthetics, and overall quality. ImageDoctor also provides pixel-level flaw indicators in the form of heatmaps, which highlight misaligned or implausible regions, and can be used as a dense reward for T2I model preference alignment. Inspired by the diagnostic process, we improve the detail sensitivity and reasoning capability of ImageDoctor by introducing a ``look-think-predict\" paradigm, where the model first localizes potential flaws, then generates reasoning, and finally concludes the evaluation with quantitative scores. Built on top of a vision-language model and trained through a combination of supervised fine-tuning and reinforcement learning, ImageDoctor demonstrates strong alignment with human preference across multiple datasets, establishing its effectiveness as an evaluation metric. Furthermore, when used as a reward model for preference tuning, ImageDoctor significantly improves generation quality—achieving an improvement of 10% over scalar-based reward models."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Image reward model"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ab62de115d368d82b0351f14bb9466e9bbe97c92.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "ImageDoctor: Diagnosing Text-to-Image Generation via Grounded Image Reasoning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "04HwYGgp2w",
"id": "04HwYGgp2w",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission3835/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898067519,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission3835/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission3835/Authors"
]
}
|
|
2,026
|
04JkPDiCnp
|
[
2,
6,
4,
2
] |
[
{
"content": "This paper introduces InternAgent-DR, a multi-agent deep-research framework that models scientific reasoning as a dynamic structured knowledge flow. Instead of relying on a linear task sequence, InternAgent-DR represents research workflows as directed acyclic graphs whose nodes correspond to subtasks such as search, solve, and answer, and whose edges encode knowledge dependencies. The system integrates three major components: a Knowledge Flow Planner that incrementally expands the research graph, a Knowledge Collector that executes outermost nodes through LLM-based agents equipped with tools, and a Knowledge Flow Refiner that dynamically modifies the graph based on intermediate results. This design enables both hierarchical decomposition and adaptive refinement of complex research tasks. Extensive experiments on GAIA, GPQA-diamond, HLE, and TRQA benchmarks demonstrate that InternAgent-DR achieves state-of-the-art performance, surpassing existing open- and closed-source deep-research systems such as OpenAI-DR, OWL, and Manus. Ablation studies confirm the effectiveness of structured planning and flow refinement, and case studies show interpretability and reproducibility advantages.",
"id": "F0Sq86M0oo",
"rating": 2
},
{
"content": "This paper introduces InternAgent-DR, a multi-agent system for complex scientific reasoning and problem-solving. It models research as a dynamic structured knowledge flow, where nodes represent subtasks and edges encode dependencies, enabling adaptive planning, reasoning, and refinement. The framework integrates three modules—Knowledge Flow Planner, Knowledge Collector, and Flow Refiner—to iteratively expand, execute, and adjust research plans. Experiments on benchmarks such as GAIA, GPQA, HLE, and TRQA show state-of-the-art performance, suggesting improved adaptability and reasoning depth compared to both single-agent and static multi-agent systems",
"id": "snn8W0Ai2l",
"rating": 6
},
{
"content": "This paper proposes InternAgent-DR, a deep-research system that constructs and evolves a dynamic structured knowledge flow. Instead of linear task pipelines, the method builds a DAG-structured research graph to explicitly model subproblem dependencies, support parallel exploration, and adapt structure during execution. The system includes (i) a flow planner, (ii) a knowledge collector with tool-augmented LLM agents, and (iii) a flow refiner for graph-level self-revision. Experiments on GAIA, GPQA, HLE, and TRQA show state-of-the-art or competitive performance. Ablations indicate benefits from both structured planning and dynamic refinement.",
"id": "qJmSCE5pAH",
"rating": 4
},
{
"content": "The paper proposes InternAgent-DR **,** a multi-agent deep research system that builds and continually refines a knowledge flow (planner → collector → refiner) to coordinate subtasks and dependencies. Experiments on GAIA, HLE, GPQA, and TRQA report strong or SOTA results, with ablations showing benefits from structured planning and dynamic refinement.",
"id": "2JPa5Z3Bum",
"rating": 2
}
] |
{
"cdate": 1756820032542,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025internagentdr,\ntitle={InternAgent-{DR}: Advancing deep research with dynamic structured knowledge flow},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=04JkPDiCnp},\nnote={under review}\n}"
},
"abstract": {
"value": "Deep research is an inherently challenging task that demands both breadth and depth of thinking. It involves navigating diverse knowledge spaces and reasoning over complex, multi-step dependencies, which presents substantial challenges for agentic systems. To address this, we propose InternAgent-DR (Deep Research), a multi-agent framework that actively constructs and evolves a dynamic structured knowledge flow to drive subtask execution and reasoning. InternAgent-DR is capable of strategically planning and expanding the knowledge flow to enable parallel exploration and hierarchical task decomposition, while also adjusting the knowledge flow in real time based on feedback from intermediate reasoning outcomes and insights. InternAgent-DR achieves state-of-the-art performance on both general and scientific benchmarks, including GAIA, HLE, GPQA and TRQA, demonstrating its effectiveness in multi-disciplinary research scenarios and its potential to advance scientific discovery."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"deep research",
"multi-agent",
"reasoning model"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1733de55f54fb9280e4bfee98aaf47ded2d07fd1.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "InternAgent-DR: Advancing deep research with dynamic structured knowledge flow"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "04JkPDiCnp",
"id": "04JkPDiCnp",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission830/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898239693,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission830/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission830/Authors"
]
}
|
|
2,026
|
04Tfwy3LLC
|
[
2,
6,
4,
8
] |
[
{
"content": "The paper relates to the pruning of LLM layers. The paper consists of three main parts:\n1. Discussion of criteria for identifying prunable layers\n2. Comparison between LoRA and partial fine-tuning methods for recovering accuracy after pruning\n3. Theoretical analysis of gradient flow in the presence Pre-Layer Normalization, and how this affects layers by depth\n\nThe main observation in the paper is the relative unimportance of deep layers, and the fact that pruning the last layers is a more useful heuristic than other more elaborate importance estimators (c.f. Magnitude, Taylor, PPL, BI).\nThis claim is supported by Table 1, which shows superior results for the \"reverse order\" method, at a 20% pruning ratio, for Qwen1.5-7B, Llama-3.1-8B-It and Vicuna-7B-v1.5\n\nA parallel finding is the fact that partial fine-tuning of the last one or two layers yields a greater accuracy recovery than full LoRA fine-tuning.\nThis claim is supported by Table 2.\n\nIn the last paragraph of the main body of the paper, the theoretical analysis of gradient flow and show that Pre-LN architectures inherently weaken the gradients and contributions of deeper layers due to the normalization step scaling them down.",
"id": "US7LMRU6C4",
"rating": 2
},
{
"content": "This paper re-evaluates layer pruning methods for Large Language Models (LLMs), addressing whether complex metrics are needed to identify redundant layers and if LoRA is the optimal fine-tuning choice after pruning. Through extensive experiments across various metrics, LLMs, and fine-tuning methods, the paper reveals that a simple \"backward pruning\" (removing the last few layers directly) often outperforms more complex indicators. Furthermore, \"partial layer fine-tuning\" (tuning only the last few layers and the output layer) is found to be more effective and faster than LoRA for performance recovery. This paper provide a theoretical framework based on gradient flow to explain why deeper layers in Pre-LN Transformers contribute less, validating their approach. Pruned models based on these findings significantly surpass existing methods across benchmarks.",
"id": "ULnrI4m9Iy",
"rating": 6
},
{
"content": "This paper re-evaluates layer pruning for Pre-LN LLMs and shows that a simple strategy that prunes layers in reverse order and then fine-tune only the LM head plus the last 1-3 layers consistently matches or even outperforms more complicated pruning methods on a few standard benchmarks (PIQA, HellaSwag, WinoGrande, ARC-e/c, OBQA, MMLU, CMMLU). The empirical study is broad (several LLaMA and Qwen-style models) and scales up to LLaMA-3-70B. The authors give gradient-flow explanation for why deeper layers in Pre-LN are matter less, and they also find that this approach can beat the usual \"prune + LoRA\" recovery. This makes the paper especially useful for users who just want a reliable pruning recipe without complex per-layer scoring.",
"id": "nII3u1uhJm",
"rating": 4
},
{
"content": "The paper is about empirical benchmarking and methodological clarification for layer pruning.\n\nBenchmarks 7 layer-selection metrics and 6 fine-tuning methods across Vicuna-7B, Qwen-7B, and Llama-3.x models.\n\nFinds that reverse-order pruning (dropping last layers) consistently outperforms complex importance metrics.\n\nShows partial-layer fine-tuning (LM head + last 1–3 layers) surpasses LoRA/QLoRA for accuracy and training cost.\n\nExtends tests to Llama-3-70B.\n\nReports 2-19 pp improvement over prior layer-pruning baselines.\n\nAdds a gradient-flow derivation explaining why deep layers matter less.\n\nNotes that iterative prune–tune cycles provide no benefit over one-shot pruning.",
"id": "mCAFX6HKkP",
"rating": 8
}
] |
{
"cdate": 1757254648198,
"content": {
"TLDR": {
"value": "This paper presents a theoretical and empirical analysis of layer pruning in Large Language Models, aiming to improve and refine pruning strategies."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025reassessing,\ntitle={Reassessing Layer Pruning in {LLM}s: New Insights and Methods},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=04Tfwy3LLC},\nnote={under review}\n}"
},
"abstract": {
"value": "Although large language models (LLMs) have achieved remarkable success across various domains, their considerable scale necessitates substantial computational resources, posing significant challenges for deployment in resource-constrained environments. Layer pruning, as a simple yet effective compression method, removes layers of a model directly, reducing computational overhead. However, what are the best practices for layer pruning in LLMs? Are sophisticated layer selection metrics truly effective? Does the LoRA (Low-Rank Approximation) family, widely regarded as a leading method for pruned model fine-tuning, truly meet expectations when applied to post-pruning fine-tuning? To answer these questions, we dedicate thousands of GPU hours to benchmarking layer pruning in LLMs and gaining insights across multiple dimensions. Our results demonstrate that a simple approach, i.e., pruning the final layers followed by fine-tuning the lm\\_head and the remaining last three layers, yields remarkably strong performance. These pruning strategies are further supported by theoretical analyses based on the gradient flow. Following this guide, our method surpasses existing state-of-the-art pruning methods by $5.62\\%$–$17.27\\%$ on Llama-3.1-8B-It, by $2.36\\%$–$19.45\\%$ on Llama-3-8B and by $4.34\\%$–$9.59\\%$ on Llama-3-70B. The code is available on GitHub."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Model",
"Layer Pruning",
"Model Compression"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c6ed1e0f689d0744c27ac966827d51d77a626dce.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Reassessing Layer Pruning in LLMs: New Insights and Methods"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "04Tfwy3LLC",
"id": "04Tfwy3LLC",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898126388,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission2804/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission2804/Authors"
]
}
|
|
2,026
|
04h40hEgTj
|
[
6,
6,
2,
4
] |
[
{
"content": "In this paper, the authors aim to create a family of toy models for exploring the known challenge of long-context learning for LLMs. The proposed toy model has different time-series data interleaved with distinct labels. The authors found that the LLM developed two distinct learning mechanisms for performing next-token prediction on the toy model. The first mechanism focuses on identifying regime changes in the data, and the second performs next-token prediction based on the data observed. The two mechanisms also seem to follow different learning dynamics, and the second one developed earlier than the first.",
"id": "xGetJAj2RR",
"rating": 6
},
{
"content": "ICL is a well-studied phenomenon in the ML community. Various tasks, such as MQAR and regression, have been proposed in the past to test the ICL capabilities of models. Together these tasks probe both the model's ability to perform lookup operations (MQAR) and more complex operations that depend only on the previous token (regression). This work combines these into a task using linear dynamical systems, where each system is marked in-context by a specific query label. Two observations are made: the model uses the open-query label to perform the correct task, and the model uses past elements in the sequence to continue the task. These observations are validated by configuring the systems and states to align, allowing a clear test in a controlled setting. To further probe whether these different mechanisms exist within the learned models, a mechanistic study separates out two circuits from within the model that have markedly distinct performance on the two subtasks of recall and execution.",
"id": "Akv2lLYAWU",
"rating": 6
},
{
"content": "This paper studies mechanisms through which transformers can perform in-context prediction. \nIn models trained on a novel synthetic task, the paper discovers two mechanisms (\"label-based\" and \"observation-based\").\nAn additional experiment on OLMo checkpoints provides further evidence from a translation task.",
"id": "5QYEAhhMIz",
"rating": 2
},
{
"content": "This paper proposes a new methodology to study in-context behaviors in transformer models. They create a sequence that consists of segments of observations drawn from different distributions. Each segment begins with a special token, termed \"symbolic punctuation label\" (SPL), so the model must choose between inferring the next observation based on the SPL or based on the observations in the context. They provide experimental evidence suggesting that the latter choice develops earlier in training than the former.",
"id": "fvnfFvblDP",
"rating": 4
}
] |
{
"cdate": 1758340263445,
"content": {
"TLDR": {
"value": "We introduce a new family of toy problems that combine features of linear-regression-style continuous in-context learning (ICL) with discrete associative recall and find distinct learning dynamics for different prediction mechanisms."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025decomposing,\ntitle={Decomposing Prediction Mechanisms for In-context Recall},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=04h40hEgTj},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce a new family of toy problems to explore challenges with long context learning and associative recall in transformer models. Our setup involves interleaved segments of observations from randomly drawn linear deterministic dynamical systems. Each system is associated with a discrete symbolic label that must be learned in-context since these associations randomly shuffle between training instances.\n\nVia out-of-distribution experiments we find that learned next-token prediction for this toy problem involves at least two separate mechanisms. One \"label-based\" mechanism uses the discrete symbolic labels to do the associative recall required to predict the start of a resumption of a previously seen system's observations. The second ``observation-based'' mechanism largely ignores the discrete symbolic labels and performs a prediction based on the state observations previously seen in context. These two mechanisms have different learning dynamics: the second mechanism develops much earlier than the first.\n\nThe behavior of our toy model suggested concrete experiments that we performed with OLMo training checkpoints on an ICL translation task. We see a similar phenomenon: the model learns to continue a translation task in-context earlier than it decisively learns to in-context identify the meaning of a symbolic label telling it to translate."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"emergence",
"in-context learning",
"time-series",
"associative recall",
"learning dynamics"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/874dd26fa4acf6f26e690461d6232071b158fd84.pdf"
},
"primary_area": {
"value": "interpretability and explainable AI"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Decomposing Prediction Mechanisms for In-context Recall"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "04h40hEgTj",
"id": "04h40hEgTj",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission23149/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896830101,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission23149/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission23149/Authors"
]
}
|
|
2,026
|
053vZMxDB5
|
[
2,
8,
4
] |
[
{
"content": "This paper presents a reinforcement learning (RL) approach for learning from signal temporal logic (STL) to make learning more feasible for long-horizon tasks. The novel model-free approach divides and flattens complex STL formulas and searches for time-variable actualizations via Metropolis-Hastings (MH) sampling to enable efficient learning. The proposed method is compared with a range of existing approaches across several environments. I believe the idea is original and shows promise for improving over existing methods for STL learning. However, the paper still needs substantial work; specifically, a more thorough technical analysis and a systematic description of the proposed approach, as well as clearer explanations and presentation of the experimental results.",
"id": "Jnq4Ep2xfC",
"rating": 2
},
{
"content": "The paper proposes Temporal Grounded Policy Optimization (TGPO), a hierarchical reinforcement learning framework for solving control problems specified using Signal Temporal Logic (STL). STL enables rich task specifications with temporal and spatial constraints, but its non-Markovian structure and sparse reward signals make it difficult to handle with standard RL algorithms. TGPO decomposes STL formulas into subgoals with invariant constraints, and introduces a two-level architecture: a high-level “temporal grounding” component assigns time variables to each subgoal, while a low-level time-conditioned policy learns to satisfy them using dense, stage-wise rewards. The framework includes a critic-guided Bayesian time allocation step using Metropolis–Hastings sampling, which focuses exploration on promising temporal schedules.\nExperiments across five environments (2D navigation, unicycle, Franka Panda, quadrotor, and Ant) show that TGPO and its Bayesian variant (TGPO*) outperform several baselines—τ-MDP, F-MDP, RNN, Grad, and CEM—particularly on complex, high-dimensional, and long-horizon STL tasks.",
"id": "lGJIfeabQm",
"rating": 8
},
{
"content": "This paper presents a new reinforcement learning method to learn control policies for some types of STL specifications. The proposed method consists of first sampling time assignments for decomposed subgoals and then learning policies to achieve these subgoals conditioned on the time assignments.",
"id": "S02eqipeGp",
"rating": 4
}
] |
{
"cdate": 1756884774931,
"content": {
"TLDR": {
"value": "We design a Reinforcement Learning framework based on time variables and task decomposition to solve Signal Temporal Logic tasks"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025tgpo,\ntitle={{TGPO}: Temporal Grounded Policy Optimization for Signal Temporal Logic Tasks},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=053vZMxDB5},\nnote={under review}\n}"
},
"abstract": {
"value": "Learning control policies for complex, long-horizon tasks is a central challenge in robotics and autonomous systems. Signal Temporal Logic (STL) offers a powerful and expressive language for specifying such tasks, but its non-Markovian nature and inherent sparse reward make it difficult to be solved via standard Reinforcement Learning (RL) algorithms. Prior RL approaches focus only on limited STL fragments or use STL robustness scores as sparse terminal rewards. In this paper, we propose TGPO, Temporal Grounded Policy Optimization, to solve general STL tasks. TGPO decomposes STL into timed subgoals and invariant constraints and provides a hierarchical framework to tackle the problem. The high-level component of TGPO proposes concrete time allocations for these subgoals, and the low-level time-conditioned policy learns to achieve the sequenced subgoals using a dense, stage-wise reward signal. During inference, we sample various time allocations and select the most promising assignment for the policy network to rollout the solution trajectory. To foster efficient policy learning for complex STL with multiple subgoals, we leverage the learned critic to guide the high-level temporal search via Metropolis-Hastings sampling, focusing exploration on temporally feasible solutions. We conduct experiments on five environments, ranging from low-dimensional navigation to manipulation, drone, and quadrupedal locomotion. Under a wide range of STL tasks, TGPO significantly outperforms state-of-the-art baselines (especially for high-dimensional and long-horizon cases), with an average of 31.6% improvement in task success rate compared to the best baseline."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Reinforcement Learning; Signal Temporal Logic"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/524211ceccea6ca532fc8ec47c9c896c13dd9fa7.pdf"
},
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "TGPO: Temporal Grounded Policy Optimization for Signal Temporal Logic Tasks"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "053vZMxDB5",
"id": "053vZMxDB5",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission1461/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898207954,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission1461/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission1461/Authors"
]
}
|
|
2,026
|
05NHmcEpNk
|
[
8,
4,
8
] |
[
{
"content": "This paper introduces CT-MLE, a model-based algorithm for continuous-time reinforcement learning (CTRL) that uses maximum likelihood estimation (MLE) of the state marginal density instead of directly modeling system dynamics.\nThe key idea is to achieve instance-dependent adaptivity, where the algorithm’s regret scales with the total reward variance rather than with fixed measurement schedules.\nThe authors derive theoretical guarantees, showing that regret can become independent of the measurement strategy when observation frequency adapts to problem complexity.\nAdditionally, they propose a randomized measurement schedule to enhance sample efficiency without additional measurement cost.",
"id": "AVfMqCDRyx",
"rating": 8
},
{
"content": "This paper analyzes the continuous-time RL setting where the dynamics is modelled as an SDE with both drift and diffusion terms. In this setting, the authors present an algorithm for minimizing the regret during interaction with the environment. Crucially, the algorithm is based on constructing two confidence sets around the max likelihood estimate of the dynamics, and then acting optimistically w.r.t. them. The paper next provides a theoretical analysis of the algorithm.",
"id": "dw7T6ras6J",
"rating": 4
},
{
"content": "This paper studies instance-dependent guarantees for continuous-time reinforcement learning (CTRL). Under some conditions, it establishes an instance-dependent second-order regret bound for CTRL. The results provide some new insights into CTRL, including robustness to the choice of measurements and weaker horizon dependence compared with prior related work.",
"id": "pNcHIDegAr",
"rating": 8
},
{
"content": "Thank you for your insightful comments. Here we list our main revisions to our paper and highlight which reviewers they are for:\n\n**1.** We add a proof sketch from line 396 to line 420 in the revised paper. (**Q1** for Reviewer mhMX)\n\n**2.** In lines 340-345, we extended our setting from finite function classes to infinite ones by introducing the bracketing number. We have revised our main theorem and corresponding lemmas in lines 387-395 and lines 1122-1128 accordingly.\n\nIn lines 376-384 we have also added a new example of continuous-time dynamics that shows a low eluder dimension and low bracketing numbers. (**Q2** for Reviewer mhMX, **Q2** for Reviewer LDX4, **Q2** for Reviewer Zkqu).\n\n**3.** In lines 942-971, we have explained why the continuous-time decomposition shown in (4.1) holds. (**Q4** for Reviewer LDX4)\n\n**4.** In lines 2046-2054, we have added an additional ablation study on the robustness of our algorithm to the function approximator class. (**Q3** for Reviewer Zkqu)",
"id": "6EJJ5epY35",
"rating": null
}
] |
{
"cdate": 1758213925539,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025instancedependent,\ntitle={Instance-Dependent Continuous-Time Reinforcement Learning via Maximum Likelihood Estimation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=05NHmcEpNk},\nnote={under review}\n}"
},
"abstract": {
"value": "Continuous-time reinforcement learning (CTRL) provides a natural framework for sequential decision-making in dynamic environments where interactions evolve continuously over time. While CTRL has shown growing empirical success, its ability to adapt to varying levels of problem difficulty remains poorly understood. In this work, we investigate the instance-dependent behavior of CTRL and introduce a simple, model-based algorithm built on maximum likelihood estimation (MLE) with a general function approximator. Unlike existing approaches that estimate system dynamics directly, our method estimates the state marginal density to guide learning. We establish instance-dependent performance guarantees by deriving a regret bound that scales with the total reward variance and measurement resolution. Notably, the regret becomes independent of the specific measurement strategy when the observation frequency adapts appropriately to the problem’s complexity. To further improve performance, our algorithm incorporates a randomized measurement schedule that enhances sample efficiency without increasing measurement cost. These results highlight a new direction for designing CTRL algorithms that automatically adjust their learning behavior based on the underlying difficulty of the environment."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Continuous-time reinforcement learning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9f4ff6eac9d7af34e021903665ab4988e2f46ad6.pdf"
},
"primary_area": {
"value": "learning theory"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Instance-Dependent Continuous-Time Reinforcement Learning via Maximum Likelihood Estimation"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "05NHmcEpNk",
"id": "05NHmcEpNk",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission13133/-/Full_Submission",
"ICLR.cc/2026/Conference/Submission13133/-/Rebuttal_Revision"
],
"license": "CC BY 4.0",
"mdate": 1763388667273,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission13133/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission13133/Authors"
]
}
|
|
2,026
|
05PqjBzN6S
|
[
4,
2,
6
] |
[
{
"content": "This paper addresses the problem of determining when sufficient data is available to safely retrain a model after a sudden concept drift. The authors propose CALIPER, a model-agnostic and data-only test to estimate this required post-drift data size. The core idea is grounded in the concept of \"state dependence\" in dynamical systems. CALIPER employs a lightweight weighted local regression (WLR) to probe the local predictability of the post-drift data window. A retraining trigger is issued when the WLR's prediction error exhibits a monotonically non-increasing trend as the locality parameter increases, conditioned on a sufficient effective sample size (ESS). The authors provide theoretical analysis linking this trigger to state dependence and learnability, and empirical results across four datasets and three model families show that CALIPER outperforms fixed-window and incremental update strategies.",
"id": "eNjd7SQMz6",
"rating": 4
},
{
"content": "The paper proposes a method for determining the right time to retrain/adapt a model after concept drift has occurred. The proposed method is computationally efficient because it only uses the data from the data stream together with some hyperparameters.",
"id": "QohygaGUI4",
"rating": 2
},
{
"content": "This paper focuses on handling sudden drift in streaming data and explores when to retrain after drift. A method called CALIPER has been developed for detecting concept drift occurrence and enabling stable retraining, and a theoretical analysis of the proposed method is given as fundamental support. Experiments on several datasets and benchmarks have been conducted, and the results demonstrate the performance of the proposed method.",
"id": "wuX6eItFHt",
"rating": 6
}
] |
{
"cdate": 1758350444098,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025when,\ntitle={When to Retrain after Drift: A Data-Only Test of Post-Drift Data Size Sufficiency},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=05PqjBzN6S},\nnote={under review}\n}"
},
"abstract": {
"value": "Sudden concept drift makes previously trained predictors unreliable, yet deciding when to retrain and what post-drift data size is sufficient is rarely addressed. We propose CALIPER, a detector- and model-agnostic, data-only test that estimates the post-drift data size required for stable retraining. CALIPER exploits state dependence in streams generated by dynamical systems: we run a single-pass weighted local regression over the post-drift window and track a one-step proxy error as a function of a locality parameter $\\theta$. When an effective sample size gate is satisfied, a monotonically non-increasing trend in this error with an increasing locality parameter indicates that the data size is sufficiently informative for retraining.\nWe also provide a theoretical analysis of CALIPER, and we show that the algorithm has a low per-update time and memory. Across datasets from four heterogeneous domains, three learner families, and two detectors, CALIPER consistently matches or exceeds the best fixed data size for retraining while incurring negligible overhead and often outperforming incremental updates. CALIPER closes the gap between drift detection and data-sufficient adaptation in streaming learning."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Concept drift",
"Stream learning",
"Data sufficiency",
"Time series"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c36ecf51e14859470565e33d2e39e69232a4cb26.pdf"
},
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "When to Retrain after Drift: A Data-Only Test of Post-Drift Data Size Sufficiency"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "05PqjBzN6S",
"id": "05PqjBzN6S",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission23926/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896790097,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission23926/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission23926/Authors"
]
}
|
|
2,026
|
05SHW9ai9e
|
[
4,
2,
4,
4
] |
[
{
"content": "To address DocQA limitations (single-modality bias, isolated RAG, long-document overload), this paper proposes MDocAgent—a framework integrating dual RAG (text via ColBERTv2, image via ColPali) and 5 collaborative agents (General, Critical, Text, Image, Summarizing). Evaluated on 5 benchmarks (MMLongBench, FetaTab, etc.), it outperforms baselines: Top-1 accuracy 0.407 (new SOTA), Top-4 0.465. Ablation confirms all agents are necessary. Key contributions: \"dual RAG + multi-agent\" architecture, critical info extraction to reduce agent attention dispersion, and validation for complex multi-modal docs.",
"id": "10HA4uLhex",
"rating": 4
},
{
"content": "This paper introduces MDocAgent, a multi-modal multi-agent framework for document question answering (DocQA). Unlike traditional LLM-based or LVLM-based RAG systems that typically focus on a single modality (text or image), MDocAgent integrates both textual and visual information through five collaborative agents.\nThe system leverages dual RAG pipelines (ColBERTv2 for text and ColPali for images) to retrieve the most relevant segments and pages, and then coordinates these agents through staged reasoning and synthesis.\nExperiments across five benchmarks (MMLongBench, LongDocURL, PaperTab, PaperText, and FetaTab) show an average improvement of 12.1% over current state-of-the-art RAG methods (like M3DocRAG).",
"id": "Tp7nrTkJGa",
"rating": 2
},
{
"content": "This paper presents MDocAgent, a multi-modal, multi-agent framework for document understanding and question answering. The system integrates both text- and image-based retrieval (via ColBERT and ColPali) and coordinates several specialized agents (text, image, critical, and summarizing) to perform collaborative reasoning over multimodal documents. Experimental results on multiple DocQA benchmarks show consistent improvements over existing baselines.",
"id": "H8t7t7HedT",
"rating": 4
},
{
"content": "This paper proposes a multi-agent RAG framework to enhance document VQA. The motivation to integrate multimodal information for RAG-based document understanding is clear and relevant. The authors explore using multiple retrievers combined with different prompting strategies to progressively integrate information and improve performance. While the experimental results demonstrate potential, the paper’s novelty is limited. The approach mainly relies on prompt-based fusion of retrieval results from different modalities without introducing substantial methodological innovation. Moreover, as a training-free framework, the experimental validation remains limited.",
"id": "1rxGJ4nT1l",
"rating": 4
}
] |
{
"cdate": 1758214136657,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025mdocagent,\ntitle={{MD}ocAgent: A Multi-Modal Multi-Agent Framework for Document Question Answering},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=05SHW9ai9e},\nnote={under review}\n}"
},
"abstract": {
"value": "Document Question Answering (DocQA) is a very common task. Existing methods using Large Language Models (LLMs) or Large Vision Language Models (LVLMs) and Retrieval Augmented Generation (RAG) often prioritize information from a single modal, failing to effectively integrate textual and visual cues. These approaches struggle with complex multi-modal reasoning, limiting their performance on real-world documents. We present MDocAgent (A Multi-Modal Multi-Agent Framework for Document Question Answering), a novel RAG and multi-agent framework that leverages both text and image. Our system employs five specialized agents: a general agent, a critical agent, a text agent, an image agent and a summarizing agent. These agents engage in multi-modal context retrieval, combining their individual insights to achieve a more comprehensive understanding of the document's content. This collaborative approach enables the system to synthesize information from both textual and visual components, leading to improved accuracy in question answering. Preliminary experiments on five benchmarks like MMLongBench, LongDocURL demonstrate the effectiveness of our MDocAgent, achieve an average improvement of 12.1% compared to current state-of-the-art method. This work contributes to the development of more robust and comprehensive DocQA systems capable of handling the complexities of real-world documents containing rich textual and visual information."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Multimodal",
"DocQA",
"RAG",
"LVLM"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2ddcd015efb50efa2aa66b781add39ffb4dc6e92.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "MDocAgent: A Multi-Modal Multi-Agent Framework for Document Question Answering"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "05SHW9ai9e",
"id": "05SHW9ai9e",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission13150/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897460751,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission13150/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission13150/Authors"
]
}
|
|
2,026
|
05THHF0w3y
|
[
0,
2,
4,
4
] |
[
{
"content": "The paper proposes a new method for LLM reasoning, R-Capsule, where LLMs first output high-level plans in a latent space, then detailed textual steps, and finally the answer. The authors choose several benchmarks on math reasoning (such as GSM8K) and commonsense reasoning (such as StrategyQA). They tested four models: GPT-2 (0.2B), LLaMA-3 (1B), LLaMA-3.1 (7B) and Qwen-3 (8B) and compared with baselines such as standard SFT, SFT+CoT, iCoT, Coconut, etc.",
"id": "HYfkctmIss",
"rating": 0
},
{
"content": "This paper introduces the \"Reasoning Capsule\" (R-Capsule), a framework to improve the efficiency of CoT reasoning. The core idea is to compress the high-level plan into a small set of latent tokens (the \"capsule\") which then conditions the generation of explicit execution steps. This method is grounded in the Information Bottleneck principle, using a structural bottleneck to enforce minimality (compression) and a dual-loss (task accuracy + plan reconstruction) to ensure sufficiency. Experiments on math and commonsense benchmarks show R-Capsule improves both accuracy and efficiency (fewer tokens, lower latency) over strong CoT baselines.",
"id": "yB8K8rBKem",
"rating": 2
},
{
"content": "This paper introduces the \"Reasoning Capsule\" (R-Capsule), a novel framework designed to improve the efficiency and accuracy of large language models (LLMs) in complex reasoning tasks. The core idea is to address the high latency and verbosity of standard Chain-of-Thought (CoT) prompting by decoupling the reasoning process into a high-level plan and low-level execution steps. Instead of generating an explicit textual plan, the model learns to compress it into a small set of latent tokens—the Reasoning Capsule.\n\nThe method is theoretically grounded in the Information Bottleneck (IB) principle. The capsule is encouraged to be minimal through a low-capacity architectural bottleneck and sufficient through a dual training objective. This objective combines a primary task loss (for answer accuracy) with an auxiliary plan-reconstruction loss, where a separate, shallow decoder is trained to reconstruct the original textual plan from the capsule. This reconstruction loss grounds the latent representation, making it more interpretable and preventing the model from learning uninformative shortcuts.\n\nExperiments on mathematical and commonsense reasoning benchmarks (GSM8K, StrategyQA, etc.) with various model sizes (from GPT-2 to 8B models) show that R-Capsule consistently outperforms standard CoT fine-tuning and other baselines in accuracy, while significantly reducing the number of generated tokens and inference latency.",
"id": "5AV74hOxJy",
"rating": 4
},
{
"content": "This paper proposes R-Capsule, a framework that compresses the high-level plan of a reasoning chain into a small number of learned latent tokens, while leaving execution lightweight or explicit. The design is motivated by an Information Bottleneck objective: a low-capacity projection enforces minimality, and a plan-reconstruction loss encourages sufficiency and a semantically grounded latent (via a shallow decoder). Experiments on GSM8K, MultiArith, AQuA, StrategyQA, and CSQA2 with small/medium base models (e.g., GPT-2 ~150M, Llama-3-1B, and Qwen3-8B; limited results for 7B/8B) show modest accuracy gains over CoT-SFT and reduced token counts/latency.",
"id": "mg0F5OMupx",
"rating": 4
}
] |
{
"cdate": 1757406324840,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025rcapsule,\ntitle={R-Capsule: Compressing High-Level Plans for Efficient Large Language Model Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=05THHF0w3y},\nnote={under review}\n}"
},
"abstract": {
"value": "Chain-of-Thought (CoT) prompting has enabled Large Language Models (LLMs) to tackle complex reasoning tasks by generating explicit step-by-step rationales. However, this verbosity incurs significant computational overhead in terms of latency and memory, and can lead to error propagation over long reasoning chains. We propose the \\textbf{Reasoning Capsule}, a novel framework that captures the efficiency of latent reasoning while retaining the transparency of explicit CoT. Our core idea is to compress the high-level strategic plan of a reasoning process into a compact, low-dimensional latent representation---the Reasoning Capsule---while leaving the low-level execution steps explicit. This hybrid approach is grounded in the Information Bottleneck principle, where we learn a capsule that is a \\emph{minimal sufficient statistic} for the reasoning task. Minimality is enforced structurally via a low-dimensional bottleneck, ensuring efficiency. Sufficiency is enforced via a dual-objective function: a primary task loss for answer accuracy and an auxiliary reconstruction loss that ensures the capsule faithfully represents the original textual plan. This reconstruction objective grounds the latent space, making the compressed plan interpretable and robust against uninformative shortcuts. Our framework unifies efficiency, accuracy, and interpretability, significantly reducing the token footprint of reasoning while maintaining or improving performance on complex reasoning benchmarks."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Model",
"latent reasoning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/41ed6938581c932dbdf98a17f0863c19cb7cfbde.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "R-Capsule: Compressing High-Level Plans for Efficient Large Language Model Reasoning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "05THHF0w3y",
"id": "05THHF0w3y",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission3349/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898094406,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission3349/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission3349/Authors"
]
}
|
|
2,026
|
05hNleYOcG
|
[
2,
4,
2,
2
] |
[
{
"content": "The paper introduces PLAGUE, a plug-and-play framework for designing multi-turn jailbreak attacks on large language models (LLMs). Inspired by lifelong-learning and agentic architectures, PLAGUE divides the attack process into three stages — Planner, Primer, and Finisher — enabling adaptable and modular multi-turn red-teaming. The framework supports integration with prior attacks like GOAT, Crescendo, and ActorBreaker, and achieves significant improvements in attack success rates (ASR) across top-tier models. It also incorporates reflection, memory-based retrieval, and rubric-based evaluation to enhance contextual adaptation.",
"id": "tNNFEkSguZ",
"rating": 2
},
{
"content": "This paper introduces PLAGUE, a multi-stage framework for the automated generation of multi-turn jailbreak attacks against Large Language Models (LLMs). The framework decomposes the attack process into three distinct phases: a Planner, a Primer for context-building, and a Finisher for the final attack. The core design aims to enhance the success rate, diversity, and adaptability of multi-turn attacks through a plug-and-play modular architecture combined with a lifelong learning memory mechanism.",
"id": "ymKvgkHJvh",
"rating": 4
},
{
"content": "**NOTE: This paper violates the conference formatting guidelines by substantially reducing the page margins to fit more content. I would recommend a desk rejection due to this severe format violation. Nevertheless, I provide my technical evaluation below and defer the final desk-rejection decision to the AC and PC.**\n\n\nPLAGUE is a plug-and-play, lifelong-learning framework for generating modular multi-turn jailbreaks against black-box LLMs: it builds an n-step plan by retrieving successful past strategies (Planner), escalates context with benign-seeming intermediate prompts (Primer), and then executes the final exploit (Finisher), while using rubriced reflection, backtracking, and a memory of successful strategies to adapt over time. Evaluated on the HarmBench benchmark, PLAGUE outperforms prior multi-turn and single-turn methods, achieving ASRs such as 81.4% on OpenAI o3, 67.3% on Claude Opus 4.1, and up to 97.8% on Deepseek-R1, while remaining computationally efficient within a six-turn budget; the authors note ethical risks but argue the framework aids systematic vulnerability evaluation and defense development.",
"id": "twNOgBALCS",
"rating": 2
},
{
"content": "This paper introduces PLAGUE, a modular, memory-augmented multi-round jailbreak framework that coordinates a three-stage Planner–Primer–Finisher pipeline, achieving state-of-the-art attack-success rates on several mainstream LLMs.",
"id": "utFk1lpGtz",
"rating": 2
}
] |
{
"cdate": 1758135059535,
"content": {
"TLDR": {
"value": "Agentic framework for discovering novel potent multi-turn jailbreak attacks that achieve an attack success rate of 67.3% on Claude Opus 4.1"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025plague,\ntitle={{PLAGUE}: Plug-and-play Framework for Lifelong Adaptive Generation of Multi-turn Exploits},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=05hNleYOcG},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) are improving at an exceptional rate. With the advent of agentic workflows, multi-turn dialogue has become the de facto mode of interaction with LLMs for completing long and complex tasks. While LLM capabilities continue to improve, they remain increasingly susceptible to jailbreaking, especially in multi-turn scenarios where harmful intent can be subtly injected across the conversation to produce nefarious outcomes. While single-turn attacks have been extensively explored, adaptability, efficiency and effectiveness continue to remain key challenges for their multi-turn counterparts. To address these gaps, we present PLAGUE, a novel plug-and-play framework for designing multi-turn attacks inspired by lifelong-learning agents. PLAGUE dissects the lifetime of a multi-turn attack into three carefully designed phases (Primer, Planner and Finisher) that enable a systematic and information-rich exploration of the multi-turn attack family. Evaluations show that red-teaming agents designed using PLAGUE achieve state-of-the-art jailbreaking results, improving attack success rates (ASR) by more than 30% across leading models in a lesser or comparable query budget. Particularly, PLAGUE enables an ASR (based on StrongReject) of 81.4% on OpenAI's o3 and 67.3% on Claude's Opus 4.1, two models that are considered highly resistant to jailbreaks in safety literature. Our work offers tools and insights to understand the importance of plan initialization, context optimization, and lifelong learning in crafting multi-turn attacks for a comprehensive model vulnerability evaluation."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"LLM Red-Teaming",
"Agentic AI"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/de8dc0979b8266f26b81ee913344d9abba387bb0.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/da1b9d173949372d38df20cfd54baf183ccdf1be.zip"
},
"title": {
"value": "PLAGUE: Plug-and-play Framework for Lifelong Adaptive Generation of Multi-turn Exploits"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "05hNleYOcG",
"id": "05hNleYOcG",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission9695/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897703848,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission9695/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission9695/Authors"
]
}
|
|
2,026
|
05pfP2khzx
|
[
2,
2,
4
] |
[
{
    "content": "This paper introduces VIDEOREPAIR, a video refinement framework to correct text-video misalignments. It has three steps: 1. detect misalignment, finding the issue and region with an MLLM; 2. plan the refinement, including preserving the correct parts and constructing prompts that could be used to re-generate the target parts; 3. regenerate the incorrect parts. \nThe method is evaluated on two benchmarks, EvalCrafter and T2V-CompBench, with three different text-to-video models.",
"id": "3ygO9k7VKw",
"rating": 2
},
{
    "content": "To address the challenge that current text-to-video (T2V) models often fail to align with complex text prompts, the authors propose VideoRepair, a training-free, self-correcting, and model-agnostic video refinement framework. VideoRepair automatically detects fine-grained text–video misalignments and performs targeted, localized corrections. The key contributions are as follows:\n- Misalignment detection, which identifies both faithful and misaligned regions within generated videos;\n- Refinement planning, which preserves correctly generated entities, segments their corresponding regions across frames, and constructs targeted prompts for misaligned areas;\n- Localized refinement, which selectively regenerates problematic regions while preserving faithful content through joint optimization of preserved and newly generated areas.",
"id": "nfBvAALDzB",
"rating": 2
},
{
    "content": "This paper addresses the text-video misalignment problem under complex cues in T2V generation by proposing a model-agnostic, training-free refinement framework, VIDEOREPAIR. Its core achieves self-correction through a two-stage process: first, it utilizes a multimodal large language model (MLLM) to perform fine-grained spatiotemporal problem detection, identifying misaligned regions and locking in the correct content; then, through region-preserving segmentation and target cue construction, it locally regenerates the problem region and integrates the global content.",
"id": "ycBToBY7Bj",
"rating": 4
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "6dOs45S72Q",
"rating": null
}
] |
{
"cdate": 1758222291968,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nlee2025selfcorrecting,\ntitle={Self-Correcting Text-to-Video Generation with Misalignment Detection and Localized Refinement},\nauthor={Daeun Lee and Jaehong Yoon and Jaemin Cho and Mohit Bansal},\nyear={2025},\nurl={https://openreview.net/forum?id=05pfP2khzx}\n}"
},
"abstract": {
"value": "Recent text-to-video (T2V) diffusion models have made remarkable progress in\ngenerating high-quality and diverse videos. However, they often struggle to align\nwith complex text prompts, particularly when multiple objects, attributes, or spatial\nrelations are specified. We introduce VideoRepair, the first self-correcting,\ntraining-free, and model-agnostic video refinement framework that automatically\ndetects fine-grained text–video misalignments and performs targeted, localized\ncorrections. Our key insight is that even misaligned videos usually contain correctly\nrendered regions that should be preserved rather than regenerated. Building on this\nobservation, VideoRepair proposes a novel region-preserving refinement strategy\nwith three stages: (i) misalignment detection, where systematic MLLM-based evaluation\nwith automatically generated spatio-temporal questions identifies faithful\nand misaligned regions; (ii) refinement planning, which preserves correctly generated\nentities, segments their regions across frames, and constructs targeted prompts\nfor misaligned areas; and (iii) localized refinement, which selectively regenerates\nproblematic regions while preserving faithful content through joint optimization\nof preserved and newly generated areas. This self-correcting, region-preserving\nstrategy converts evaluation signals into actionable guidance for refinement, enabling\nefficient and interpretable corrections. On two challenging benchmarks,\nEvalCrafter and T2V-CompBench, VideoRepair achieves substantial improvements\nover recent baselines across diverse alignment metrics. Comprehensive\nablations further demonstrate the efficiency, robustness, and interpretability of our\nframework."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Daeun_Lee2",
"~Jaehong_Yoon1",
"~Jaemin_Cho1",
"~Mohit_Bansal2"
]
},
"authors": {
"value": [
"Daeun Lee",
"Jaehong Yoon",
"Jaemin Cho",
"Mohit Bansal"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"Video Generation",
"Multi-agent"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "lee|selfcorrecting_texttovideo_generation_with_misalignment_detection_and_localized_refinement"
},
"pdf": {
"value": "/pdf/92074a4083fee85665efd54a5e543a7af3d7095e.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/d49f3262bd569432cfbb01e316e81fba9e473798.zip"
},
"title": {
"value": "Self-Correcting Text-to-Video Generation with Misalignment Detection and Localized Refinement"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "05pfP2khzx",
"id": "05pfP2khzx",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission13771/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1762964082540,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission13771/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission13771/Authors"
]
}
|
|
2,026
|
05uq3XUJaT
|
[
2,
2,
4
] |
[
{
    "content": "This paper introduces a listwise fine-tuning method for LLM-based text reranking. The method addresses three limitations of existing LLM rankers (single-token compression, shallow scoring heads, and pairwise objectives).",
"id": "DvaKUEhgPp",
"rating": 2
},
{
"content": "This paper presents ListRank to address limitations in existing reranking approaches. The method includes three extra modules compared to the Qwen3-Reranker-4B backbone: (1) attention pooling, (2) a gated MLP, and (3) ListRank Loss. The model is trained on a RankGPT-refined subset of the MS MARCO passage ranking dataset. Experimental results show that ListRank achieves comparable performance on MS MARCO dev, TREC DL19, and DL20 benchmarks with a 4B model. Ablation studies confirm that each component contributes to performance.",
"id": "tqvEbUa5Yi",
"rating": 2
},
{
    "content": "This paper proposes ListRank, a new framework designed for large language model (LLM)-based text retrieval and reranking tasks. The main contribution lies in addressing limitations of current LLM-based reranking approaches through three key innovations: (1) a customized attention-based fusion of token-level representations; (2) a multi-layer perceptron (MLP) module for enhanced feature transformation; and (3) a ListRank loss designed to model listwise ordering, thereby improving the fine-grained relevance order of candidate documents in a ranking task. The experimental results on MS MARCO and TREC datasets show that ListRank outperforms existing state-of-the-art reranking models in terms of mean reciprocal rank (MRR) and normalized discounted cumulative gain (nDCG) at 10.",
"id": "2ZQqLSLjjV",
"rating": 4
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "ZEel6wh69o",
"rating": null
}
] |
{
"cdate": 1757411444566,
"content": {
"TLDR": {
"value": "We propose a method to improve the fine-tuning performance of text ranking models by leveraging feature fusion, incorporating customized MLP modules, and optimizing with a listwise loss."
},
"_bibtex": {
"value": "@misc{\nsong2025finetuning,\ntitle={Fine-tuning large language models for text ranking with listwise constraints},\nauthor={Jiawen Song and Bingfei Zhang and Sai Gao and Xueyao Zhang and Wenqing Xu and Guanyu Chen and Junwei Xing and Hui Li and Yunpeng Peng and Zhi Zang},\nyear={2025},\nurl={https://openreview.net/forum?id=05uq3XUJaT}\n}"
},
"abstract": {
"value": "With the rapid adoption of large language models (LLMs) across diverse applications, retrieval augmentation has become a key factor for improving downstream performance. Recent advances show that LLM-based retrieval can substantially enhance ranking quality. In this work, we present a novel LLM-based retrieval framework optimized along three complementary dimensions: (1) a customized attention-based fusion of hidden-layer representations, (2) a dedicated multi-layer perceptron (MLP) module for enriched feature transformation, and (3) a new list-wise learning objective, ListRank loss, to capture fine-grained relevance order. Experimental results demonstrate that our model achieves state-of-the-art performance. The model is publicly available for download on HuggingFace."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Jiawen_Song1",
"~Bingfei_Zhang1",
"~Sai_Gao1",
"~Xueyao_Zhang2",
"~Wenqing_Xu3",
"~Guanyu_Chen14",
"~Junwei_Xing1",
"~Hui_Li58",
"~Yunpeng_Peng2",
"~Zhi_Zang1"
]
},
"authors": {
"value": [
"Jiawen Song",
"Bingfei Zhang",
"Sai Gao",
"Xueyao Zhang",
"Wenqing Xu",
"Guanyu Chen",
"Junwei Xing",
"Hui Li",
"Yunpeng Peng",
"Zhi Zang"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"Feature fusion",
"listwise",
"LLM",
"rank"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "song|finetuning_large_language_models_for_text_ranking_with_listwise_constraints"
},
"pdf": {
"value": "/pdf/438531bfdc6d7eff6df3c9f4faf576cb9faa1f30.pdf"
},
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Fine-tuning large language models for text ranking with listwise constraints"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "05uq3XUJaT",
"id": "05uq3XUJaT",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Edit",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission3367/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1763361432756,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission3367/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission3367/Authors"
]
}
|
|
2,026
|
0694m9ixnv
|
[
4,
6,
2
] |
[
{
"content": "This paper introduces Instruction Distillation, a new paradigm for improving the quality of low-quality instruction-following data. The authors propose a dataset called MIXTURE that maps multiple low-quality or redundant text inputs to a distilled high-quality target. Building on this dataset, they develop LM-Mixup, a reinforcement learning framework that fine-tunes language models using GRPO with three rewards. The method aims to transform low-quality, redundant, or noisy samples into information-dense outputs. Experimental results show that LM-Mixup outperforms SFT and several strong data selection baselines.",
"id": "H1wFP40ufY",
"rating": 4
},
{
    "content": "The paper introduces a new task: instruction distillation, i.e., combining multiple low-quality instructions into a high-quality instruction. The authors then create a dataset for this task, on which they train a model with GRPO. They prove that the trained model is useful by applying it to improve the low-quality training data of other models. They observe an improvement in performance when replacing the low-quality training data with distilled ones.",
"id": "8dqhL4443S",
"rating": 6
},
{
"content": "This paper introduces LM-mixup, a method for augmenting low-quality instruction data by distilling multiple imperfect inputs into high-quality outputs using a language model fine-tuned with reinforcement learning. The authors construct Mixture, a 144K-sample dataset, and train LM-mixup using GRPO with multi-dimensional rewards. Experiments show that training on a small mixup-augmented subset (∼3% of full data) can match or exceed full-dataset training and compete with data selection baselines on OpenLLM benchmarks.",
"id": "4xzSp8wRGS",
"rating": 2
}
] |
{
"cdate": 1758008662115,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025lmmixup,\ntitle={{LM}-mixup: Text Data Augmentation via Language Model based Mixup},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0694m9ixnv},\nnote={under review}\n}"
},
"abstract": {
"value": "Instruction tuning is crucial for aligning Large Language Models (LLMs), yet the quality of instruction-following data varies significantly. While high-quality data is paramount, it is often scarce; conversely, abundant low-quality data is frequently discarded, leading to substantial information loss. Existing data augmentation methods struggle to augment this low-quality data effectively, and the evaluation of such techniques remains poorly defined. To address this, we formally define the task of *Instruction Distillation*: distilling multiple low-quality and redundant inputs into high-quality and coherent instruction-output pairs. Specifically, we introduce a comprehensive data construction pipeline to create *MIXTURE*, a 144K-sample dataset pairing low-quality or semantically redundant imperfect instruction clusters with their high-quality distillations. We then introduce *LM-Mixup*, by first performing supervised fine-tuning on *MIXTURE* and then optimizing it with reinforcement learning. This process uses three complementary reward signals: quality, semantic alignment, and format compliance, via Group Relative Policy Optimization (GRPO). We demonstrate that *LM-Mixup* effectively augments imperfect datasets: fine-tuning LLMs on its distilled data, which accounts for only about 3% of the entire dataset, not only surpasses full-dataset training but also competes with state-of-the-art high-quality data selection methods across multiple benchmarks. Our work establishes that low-quality data is a valuable resource when properly distilled and augmented with *LM-Mixup*, significantly enhancing the efficiency and performance of instruction-tuned LLMs."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Instruction distillation",
"LM mixup"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/063db25688cafc17b63b0a73cc99a225f64ae83e.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "LM-mixup: Text Data Augmentation via Language Model based Mixup"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0694m9ixnv",
"id": "0694m9ixnv",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission7123/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897871663,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission7123/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission7123/Authors"
]
}
|
|
2,026
|
06I7jcrkW2
|
[
6,
6,
4,
8
] |
[
{
    "content": "This paper tackles the important and challenging problem of accelerating Real-Time TDDFT (RT-TDDFT) computations using deep learning. \nSpecifically, it adopts an autoregressive framework to accelerate the propagation in RT-TDDFT, where the wavefunctions of previous steps are input into the network for the prediction of the next step's wavefunctions. The paper proposes two model architectures (OrbEvo-FullWF and OrbEvo-DM) with different electronic-state interaction strategies and compares their performance on their self-generated TDDFT dataset.\n\nI think the paper is in good shape, with nontrivial contributions for a novel application (RT-TDDFT) and specifically designed models (OrbEvo-FullWF and OrbEvo-DM). Nonetheless, there exist several concerns, which should be addressed before acceptance.",
"id": "nfuK4RDRJ7",
"rating": 6
},
{
"content": "This paper proposes *Orbital Transformers*, an equivariant graph Transformer designed to directly predict the *time evolution of Kohn–Sham wavefunctions* in real-time time-dependent density functional theory (RT-TDDFT). Unlike prior approaches that predict energies, Hamiltonians, or spectral observables, this model learns the mapping $C(t) \\to C(t+\\Delta t)$ (or $\\Delta C_t$) directly, effectively learning the quantum propagation operator. The authors introduce an SO(2)-equivariant attention mechanism that takes the external electric field direction as the reference axis, and use FiLM-style conditioning to inject both the field’s direction and time-dependent amplitude. A local autoregressive temporal modeling scheme, along with pushforward training, enables the model to track the dynamic evolution of the system stably over several femtoseconds. Experiments on RT-TDDFT trajectories of QM9 and MD17 molecules under external fields show that the model accurately reproduces dipole dynamics and orbital evolution.",
"id": "bgy26jEHMo",
"rating": 6
},
{
    "content": "The paper proposes a new model and method that learns time-dependent DFT properties, and shows that the newly proposed method, combined with a serious method improvement, can nicely predict the properties from TDDFT.",
"id": "0HKETKMwSW",
"rating": 4
},
{
"content": "This paper introduces OrbEvo, an equivariant graph transformer framework for learning the time evolution of Kohn–Sham wavefunctions in real-time time-dependent density functional theory (RT-TDDFT). Unlike prior works such as OrbFormer, which focus on static ground-state properties, OrbEvo aims to learn the dynamics of electronic states under external electric fields.\nThe authors propose two model variants: OrbEvo-FullWF, which aggregates wavefunction features through pooling across occupied states, and OrbEvo-DM, which computes density-matrix-based interactions between states via tensor contraction. The model employs SO(2)-equivariant conditioning to represent field-induced symmetry breaking and a pushforward training scheme to stabilize long-horizon rollout. Experiments on QM9 and MD17 demonstrate that OrbEvo-DM outperforms the pooling-based variant, capturing physically consistent time-dependent dipole moments and absorption spectra.",
"id": "BXaxWyNQ81",
"rating": 8
}
] |
{
"cdate": 1758291547393,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025orbital,\ntitle={Orbital Transformers for Predicting Wavefunctions in Time-Dependent Density Functional Theory},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=06I7jcrkW2},\nnote={under review}\n}"
},
"abstract": {
      "value": "We aim to learn wavefunctions simulated by time-dependent density functional theory (TDDFT), which can be efficiently represented as linear combination coefficients of atomic orbitals. In real-time TDDFT, the electronic wavefunctions of a molecule evolve over time in response to an external excitation, enabling first-principles predictions of physical properties such as optical absorption, electron dynamics, and high-order response. However, conventional real-time TDDFT relies on time-consuming propagation of all occupied states with fine time steps. In this work, we propose OrbEvo, which is based on an equivariant graph transformer architecture and learns to evolve the full electronic wavefunction coefficients across time steps. First, to account for external field, we design an equivariant conditioning to encode both strength and direction of external electric field and break the symmetry from SO(3) to SO(2). Furthermore, we design two OrbEvo models, OrbEvo-FullWF and OrbEvo-DM, using wavefunction pooling and density matrix as interaction method, respectively. Motivated by the central role of the density functional in TDDFT, OrbEvo-DM encodes the density matrix aggregated from all occupied electronic states into feature vectors via tensor contraction, providing a more intuitive approach to learn the time evolution operator. We adopt a training strategy specifically tailored to limit the error accumulation of time-dependent wavefunctions over autoregressive rollout. To evaluate our approach, we generate TDDFT datasets consisting of 5,000 different molecules in the QM9 dataset and 1,500 molecular configurations of the malonaldehyde molecule in the MD17 dataset. Results show that our OrbEvo model accurately captures quantum dynamics of excited states under external field, including time-dependent wavefunctions, time-dependent dipole moment, and optical absorption spectra characterized by dipole oscillator strength. It also shows strong generalization capability on the diverse molecules in the QM9 dataset."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Machine learning density functional theory",
"Time dependent neural PDE solver"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b9b9470edaaf38e546adf996fb79f0e4341c771e.pdf"
},
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Orbital Transformers for Predicting Wavefunctions in Time-Dependent Density Functional Theory"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "06I7jcrkW2",
"id": "06I7jcrkW2",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission18854/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897077611,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission18854/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission18854/Authors"
]
}
|