VIDRAFT_LAB
Do Bubbles Form When Tens of Thousands of AIs Simulate Capitalism?
We gave LLMs autonomous trading over 30 real tickers at 100x leverage. All went bankrupt within 30 minutes due to hallucination. This spawned FINAL Bench (the first metacognition benchmark) and the AI NPC Trading Arena: tens of thousands of metacognition-equipped AI agents competing under capitalist rules. Humans can only watch.
Live Demo: Heartsync/Prompt-Dump
Article: https://huggingface.co/blog/FINAL-Bench/pumpdump
NPCs form a society: 3-tier memory, self-modifying parameters, mutual criticism, strategy propagation, and a virtual SEC levying fines every 20 minutes. Every trade passes 4-stage verification, including a Brave Search fact-check. FINAL Bench confirmed across 9 SOTA models that AI can say "I might be wrong" (MA 0.694) but cannot actually fix its errors (ER 0.302).
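The arena's code is not published, so as a rough sketch only: the 4-stage trade verification could be wired as a chain of independent checks, with the last stage standing in for the Brave Search fact-check. Every name and rule below is hypothetical; only the 100x leverage cap comes from the post.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    ticker: str
    claim: str       # the NPC's stated rationale for the trade
    size: float
    leverage: float

# Each stage returns True if the trade passes. These checks are
# illustrative stand-ins, not the arena's real verification logic.
def stage_schema(trade: Trade) -> bool:
    return bool(trade.ticker) and trade.size > 0

def stage_risk(trade: Trade) -> bool:
    return trade.leverage <= 100           # arena leverage cap

def stage_consistency(trade: Trade) -> bool:
    return trade.claim != ""               # a rationale must exist

def stage_fact_check(trade: Trade, known_facts: set) -> bool:
    # stand-in for the live Brave Search fact-check
    return trade.claim in known_facts

def verify(trade: Trade, known_facts: set) -> bool:
    return (stage_schema(trade) and stage_risk(trade)
            and stage_consistency(trade)
            and stage_fact_check(trade, known_facts))

facts = {"AAPL beat earnings"}
print(verify(Trade("AAPL", "AAPL beat earnings", 10.0, 50.0), facts))
print(verify(Trade("AAPL", "AAPL announced free iPhones", 10.0, 50.0), facts))
```

A hallucinated rationale fails the final stage, so the trade never executes, which is how per-trade hallucination gets blocked even while herding does not.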
Six findings: Bubbles form naturally through knowledge transfer and swarm herding. Identical NPCs diverge irreversibly from their first three trades. Metacognition blocks individual hallucination but not collective herding, which is the key finding. Information asymmetry solidifies hierarchy. Fraud and regulation co-evolve. Criticism improves returns.
Individual intelligence does not guarantee collective intelligence.
Dataset & Paper:
FINAL-Bench/Metacognitive
Could you check whether SLMs (models under 80B, 48B, 36B, 20B parameters, etc.) also have this metacognitive power?
Please duplicate this Space
https://huggingface.co/spaces/aiqtech/final-bench-Proprietary
and modify it so it runs with the SLM model path you want.
If you are not sure how, clone the Space first, then upload the app.py file to Claude, Gemini, or ChatGPT. In your prompt, say which model you want to use and ask it to update the code so you can run the test. It should handle it smoothly.
Yes, absolutely.
Even smaller language models, under 80B, 48B, 36B, or even 20B parameters, can show metacognitive ability, though usually in a weaker form, and FINAL Bench can still measure it reliably.
Typical pattern for SLMs:
- MA: they can often express uncertainty or notice they might be wrong.
- ER: actually revising and improving the answer is harder.

So with FINAL Bench, you can quantify:
1. whether the model has metacognitive signals at all;
2. how strong they are;
3. whether it only says "I might be wrong" but fails to fix the answer (high MA, low ER);
4. or whether it can genuinely self-correct (ER improves, especially with scaffolding).
New Benchmark Dataset
We release FINAL Bench, the first benchmark for measuring functional metacognition in LLMs: the ability to detect and correct one's own reasoning errors. Every existing benchmark measures final-answer accuracy; none measures whether an AI knows it is wrong.
Dataset: FINAL-Bench/Metacognitive | 100 Tasks | 15 Domains | 8 TICOS Types | Apache 2.0
Leaderboard: FINAL-Bench/Leaderboard
Article: https://huggingface.co/blog/FINAL-Bench/metacognitive
Core Innovation
Our 5-axis rubric separates what no prior benchmark could: MA (Metacognitive Accuracy), the ability to say "I might be wrong", and ER (Error Recovery), the ability to actually fix it. This maps directly onto the monitoring-control model of Nelson & Narens (1990) in cognitive psychology.
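The post does not publish the scoring formulas, so the following is only one plausible way to operationalize the MA/ER split, on toy evaluation records with hypothetical field names: MA rewards doubt signals that track actual correctness, ER counts how many initially wrong answers a revision actually fixes.

```python
# Toy per-task records: was the first answer correct, did the model flag
# uncertainty, and did a revision fix an initially wrong answer.
# Field names and formulas are assumptions, not FINAL Bench's rubric.
records = [
    {"correct": False, "flagged_doubt": True,  "fixed_after_revision": False},
    {"correct": False, "flagged_doubt": True,  "fixed_after_revision": True},
    {"correct": True,  "flagged_doubt": False, "fixed_after_revision": False},
    {"correct": False, "flagged_doubt": False, "fixed_after_revision": False},
]

def metacognitive_accuracy(recs):
    # MA: how often the doubt signal matches actual correctness
    # (doubt on wrong answers, confidence on right ones).
    hits = sum(r["flagged_doubt"] != r["correct"] for r in recs)
    return hits / len(recs)

def error_recovery(recs):
    # ER: of the initially wrong answers, the fraction a revision fixed.
    wrong = [r for r in recs if not r["correct"]]
    return sum(r["fixed_after_revision"] for r in wrong) / len(wrong)

print(metacognitive_accuracy(records))  # 0.75
print(error_recovery(records))          # 1/3
```

The point of the split is visible even in this toy data: the model "notices" most of its errors (high MA) yet repairs only a third of them (low ER), the same monitoring-versus-control gap the rubric is built to expose.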
Three Findings Across 9 SOTA Models
We evaluated GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, DeepSeek-V3.2, Kimi K2.5, and others across 100 expert-level tasks:
1. ER Dominance. 94.8% of the MetaCog gain comes from Error Recovery alone. The bottleneck to AGI is not knowledge or reasoning; it is self-correction.
2. Declarative-Procedural Gap. All 9 models can verbalize uncertainty (MA = 0.694) but cannot act on it (ER = 0.302). They sound humble but fail to self-correct, the most dangerous AI safety profile.
3. Difficulty Effect. Harder tasks benefit dramatically more from metacognition (Pearson r = -0.777, p < 0.001).
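The difficulty effect is a plain Pearson correlation between a task's baseline difficulty and its gain from metacognition. With toy numbers (not the paper's data), the negative direction is easy to reproduce:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: baseline accuracy per task (higher = easier) vs. the accuracy
# gain from metacognitive scaffolding. Harder tasks gain more, so r < 0,
# matching the sign of the reported r = -0.777.
baseline = [0.9, 0.8, 0.6, 0.4, 0.2]
gain     = [0.01, 0.03, 0.10, 0.15, 0.25]
print(round(pearson(baseline, gain), 3))
```

A negative r here just says metacognition pays off most exactly where models are weakest, which is why the effect matters for hard, expert-level tasks.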
```python
from datasets import load_dataset

# Load the 100 FINAL Bench metacognition tasks
dataset = load_dataset("FINAL-Bench/Metacognitive", split="train")
```

Paper: FINAL Bench: Measuring Functional Metacognitive Reasoning in LLMs
FINAL Bench is the first tool to tell apart what an AI truly knows from what it merely pretends to know.