Economic Index - Structured & Cleaned Dataset
This dataset is a cleaned, structured version of the Anthropic Economic Index, organized for easy integration with persona-based scenario generation pipelines.
Dataset Description
The Anthropic Economic Index tracks how people use Claude AI for work-related tasks. This structured version extracts and organizes the key information into easy-to-use tables.
Original Data Period: August 4-11, 2025
Source: Anthropic Economic Index Release 2025-09-15
Processing: Extracted from enriched_claude_ai.csv with comprehensive structuring
Dataset Structure
This dataset contains 5 splits:
1. tasks (2,616 rows)
All unique tasks people do with Claude AI, with usage metrics.
Columns:
- task_name (string): Description of the task
- onet_task_count (float): Number of conversations using this task
- onet_task_pct (float): Percentage of total usage
- onet_task_pct_index (float): Specialization index
- automation_pct (float): Automation percentage (where available)
- augmentation_pct (float): Augmentation percentage (where available)
- has_automation_data (bool): Whether automation data exists
- has_augmentation_data (bool): Whether augmentation data exists
- has_usage_data (bool): Whether usage data exists
Example:

```python
from datasets import load_dataset

ds = load_dataset("anna-sarvam/economic-index-structured")
print(ds['tasks'][0])
# {'task_name': 'write new programs or modify existing programs...',
#  'onet_task_count': 6618.0, 'onet_task_pct': 0.52, ...}
```
2. collaboration_patterns (5 rows)
How users interact with Claude AI.
Patterns:
- directive (38.8%) - Direct instructions
- task iteration (22.2%) - Step-by-step refinement
- learning (20.3%) - Educational assistance
- feedback loop (10.3%) - Iterative improvement
- validation (4.5%) - Verification
Columns:
- pattern_name (string): Name of collaboration pattern
- collaboration_count (float): Number of uses
- collaboration_pct (float): Percentage of total
3. task_collaboration_intersections (4,528 rows)
Which collaboration patterns are used for which tasks.
Columns:
- task_name (string): Task description
- collaboration_pattern (string): Pattern used
- onet_task_collaboration_count (float): Count for this combination
- onet_task_collaboration_pct (float): Percentage within the task
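For instance, you can look up the pattern mix for a single task. This is a minimal sketch using a toy DataFrame with the documented column names; the task names, counts, and percentages are invented for illustration, not taken from the real split:

```python
import pandas as pd

# Toy rows mirroring the task_collaboration_intersections schema
# (values are made up for illustration)
inter = pd.DataFrame({
    'task_name': ['debug code', 'debug code', 'write essay'],
    'collaboration_pattern': ['directive', 'learning', 'feedback loop'],
    'onet_task_collaboration_count': [120.0, 40.0, 15.0],
    'onet_task_collaboration_pct': [75.0, 25.0, 100.0],
})

# Pattern breakdown for one task, largest share first
mix = (inter[inter['task_name'] == 'debug code']
       .sort_values('onet_task_collaboration_pct', ascending=False))
print(mix[['collaboration_pattern', 'onet_task_collaboration_pct']])
```

With the real split, replace the toy frame with `ds['task_collaboration_intersections'].to_pandas()`.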
4. occupations (22 rows)
SOC (Standard Occupational Classification) occupation groups.
Top Occupations:
- Computer and Mathematical (35.9%)
- Educational Instruction and Library (12.3%)
- Arts, Design, Entertainment, Sports, and Media (8.2%)
Columns:
- soc_group (string): Occupation group name
- percentage (float): Percentage of classified tasks
- facet (string): Data facet
5. india (65 rows)
India-specific usage patterns and top tasks.
Columns:
- data_type (string): Type of data (overall_metric, top_task, collaboration_pattern)
- metric_name (string): Name of metric
- value (float): Metric value
- item_name (string): Task or pattern name (if applicable)
Key Statistics
- Total Tasks: 2,616 unique tasks
- Collaboration Patterns: 5 main types
- Occupation Groups: 22 SOC categories
- Task-Pattern Combinations: 4,528
- Geographic Coverage: 201 countries (including India)
Usage Examples
Load the entire dataset:

```python
from datasets import load_dataset

ds = load_dataset("anna-sarvam/economic-index-structured")
```

Get the top 10 tasks by usage:

```python
tasks = ds['tasks'].to_pandas()
top_10 = tasks.nlargest(10, 'onet_task_count')
print(top_10[['task_name', 'onet_task_count']])
```

Find education-related tasks:

```python
tasks = ds['tasks'].to_pandas()
education_tasks = tasks[tasks['task_name'].str.contains('education', case=False)]
```

Get India-specific top tasks:

```python
india = ds['india'].to_pandas()
india_top_tasks = india[india['data_type'] == 'top_task']
top_5_india = india_top_tasks.nlargest(5, 'value')
```

Find tasks relevant to software developers:

```python
tasks = ds['tasks'].to_pandas()
software_tasks = tasks[tasks['task_name'].str.contains('software|program|code', case=False)]
```

Analyze collaboration patterns:

```python
patterns = ds['collaboration_patterns'].to_pandas()
print(patterns[['pattern_name', 'collaboration_pct']]
      .sort_values('collaboration_pct', ascending=False))
```
India-Specific Insights
Usage Statistics
- Total Conversations: 1,831
- Global Percentage: 0.88%
- Automation: 45.5%
- Augmentation: 54.5%
Top 5 Tasks in India
- Write/modify programs (6,618 uses)
- Fix software errors (5,118 uses)
- Adapt software to new hardware (3,594 uses)
- Debug and correct errors (2,663 uses)
- Build/maintain websites (2,661 uses)
Top 3 Collaboration Patterns in India
- directive (44.7%) - Higher than global average
- task iteration (23.4%)
- learning (14.5%)
Use Cases
1. Persona-Scenario Matching
Match tasks from this dataset to expanded personas based on occupation:
```python
# Load the tasks split
tasks = ds['tasks'].to_pandas()

# Filter for teaching-related tasks
education_tasks = tasks[tasks['task_name'].str.contains('educat|teach|tutor', case=False)]

# Match the filtered tasks to teacher personas
```
2. Realistic Collaboration Patterns
Use actual collaboration patterns in scenario generation:
```python
patterns = ds['collaboration_patterns'].to_pandas()

# Sample one pattern, weighted by its observed share of usage
sampled_pattern = patterns.sample(1, weights='collaboration_pct')
```
3. India-Specific Scenarios
Generate scenarios using India's actual usage patterns:
```python
india = ds['india'].to_pandas()
india_tasks = india[india['data_type'] == 'top_task'].nlargest(20, 'value')
```
Data Processing
This dataset was created from the Anthropic Economic Index through:
- Download: Extracted enriched_claude_ai.csv (137K rows)
- Filtering: Selected global-level data (geo_id='GLOBAL')
- Structuring: Organized by facets (tasks, collaboration, occupations)
- Flattening: Converted nested metrics to flat tables
- India Extraction: Isolated India-specific patterns (3,874 rows)
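The filtering and structuring steps above can be sketched in pandas. This is illustrative only: the `geo_id` and `facet` column names follow the descriptions in this card, and the rows are invented stand-ins for the real enriched_claude_ai.csv data:

```python
import pandas as pd

# Invented stand-in for a few source rows (not real data)
raw = pd.DataFrame({
    'geo_id': ['GLOBAL', 'GLOBAL', 'IN', 'IN'],
    'facet': ['onet_task', 'collaboration', 'onet_task', 'collaboration'],
    'value': [0.52, 38.8, 0.9, 44.7],
})

# Filtering: keep global-level rows
global_rows = raw[raw['geo_id'] == 'GLOBAL']

# Structuring: one flat table per facet
by_facet = {facet: df.reset_index(drop=True)
            for facet, df in global_rows.groupby('facet')}

# India extraction: isolate India-specific rows
india_rows = raw[raw['geo_id'] == 'IN']

print(sorted(by_facet), len(india_rows))  # ['collaboration', 'onet_task'] 2
```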
Automation vs Augmentation
Global Averages:
- Automation: 51.1% (AI does the task)
- Augmentation: 48.9% (AI assists human)
India:
- Automation: 45.5%
- Augmentation: 54.5%
India shows more augmentation-focused usage compared to global patterns.
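Because the tasks split carries per-task `automation_pct` values, averages like these can be recomputed as a usage-weighted mean. A minimal sketch with invented numbers (the real global figure of 51.1% comes from the full data, not from this example):

```python
import pandas as pd

# Invented rows with the tasks-split columns relevant to this calculation
tasks = pd.DataFrame({
    'onet_task_count': [100.0, 300.0],
    'automation_pct': [60.0, 40.0],
    'has_automation_data': [True, True],
})

# Usage-weighted automation share across tasks with automation data
sub = tasks[tasks['has_automation_data']]
weighted = ((sub['automation_pct'] * sub['onet_task_count']).sum()
            / sub['onet_task_count'].sum())
print(weighted)  # 45.0
```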
Limitations
- Data from only one week (Aug 4-11, 2025)
- Filtered for privacy (>200 conversations per country)
- "not_classified" and "none" categories removed for clarity
- Some tasks may not have automation/augmentation data
Citation
If you use this dataset, please cite both the structured version and the original:
```bibtex
@dataset{economic_index_structured,
  title     = {Economic Index - Structured & Cleaned Dataset},
  author    = {Your Name},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/anna-sarvam/economic-index-structured}
}

@dataset{anthropic_economic_index,
  title     = {Anthropic Economic Index},
  author    = {Anthropic},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/Anthropic/EconomicIndex}
}
```
License
Same as the original Anthropic Economic Index dataset (MIT License).
Maintenance
This is a snapshot of the Economic Index as of September 2025. For the most up-to-date data, refer to the original dataset.