
Reddit Data Archive & Analysis Pipeline

πŸ“Š Overview

A comprehensive pipeline for archiving, processing, and analyzing Reddit data from 2005 to 2025. This repository contains tools for downloading Reddit's historical data, converting it to optimized formats, and performing detailed schema analysis and community studies.

πŸ—‚οΈ Repository Structure

β”œβ”€β”€ scripts/                           # Analysis scripts
β”œβ”€β”€ analysis/                          # Analysis methods and visualizations
β”‚   β”œβ”€β”€ original_schema_analysis/      # Schema evolution analysis
β”‚   β”‚   β”œβ”€β”€ comments/                  # Comment schema analysis results
β”‚   β”‚   β”œβ”€β”€ figures/                   # Schema analysis visualizations
β”‚   β”‚   └── submissions/               # Submission schema analysis results
β”‚   β”‚       β”œβ”€β”€ analysis_report_2005.txt
β”‚   β”‚       β”œβ”€β”€ analysis_report_2006.txt
β”‚   β”‚       └── ...
β”‚   └── parquet_subreddits_analysis/   # Analysis of Parquet-converted data
β”‚       β”œβ”€β”€ comments/                  # Comment data analysis
β”‚       β”œβ”€β”€ figures/                   # Subreddit analysis visualizations
β”‚       └── submissions/               # Submission data analysis
β”‚
β”œβ”€β”€ analyzed_subreddits/               # Focused subreddit case studies
β”‚   β”œβ”€β”€ comments/                      # Subreddit-specific comment archives
β”‚   β”‚   └── RC_funny.parquet          # r/funny comments (placeholder, currently empty)
β”‚   β”œβ”€β”€ reddit-media/                  # Media organized by subreddit and date
β”‚   β”‚   β”œβ”€β”€ content-hashed/           # Deduplicated media (content addressing)
β”‚   β”‚   β”œβ”€β”€ images/                   # Image media
β”‚   β”‚   β”‚   └── r_funny/              # Organized by subreddit
β”‚   β”‚   β”‚       └── 2025/01/01/      # Daily structure for temporal analysis
β”‚   β”‚   └── videos/                   # Video media
β”‚   β”‚       └── r_funny/              # Organized by subreddit
β”‚   β”‚           └── 2025/01/01/      # Daily structure
β”‚   └── submissions/                   # Subreddit-specific submission archives
β”‚       └── RS_funny.parquet          # r/funny submissions (placeholder, currently empty)
β”‚
β”œβ”€β”€ converted_parquet/                 # Optimized Parquet format (year-partitioned)
β”‚   β”œβ”€β”€ comments/                      # Comments 2005-2025
β”‚   β”‚   └── 2005/ ── 2025/            # Year partitions for efficient querying
β”‚   └── submissions/                   # Submissions 2005-2025
β”‚       └── 2005/ ── 2025/            # Year partitions
β”‚
β”œβ”€β”€ original_dump/                     # Raw downloaded Reddit archives
β”‚   β”œβ”€β”€ comments/                      # Monthly comment archives (ZST compressed)
β”‚   β”‚   β”œβ”€β”€ RC_2005-12.zst ── RC_2025-12.zst  # Complete 2005-2025 coverage
β”‚   β”‚   └── schema_analysis/           # Schema analysis directory
β”‚   └── submissions/                   # Monthly submission archives
β”‚       β”œβ”€β”€ RS_2005-06.zst ── RS_2025-12.zst  # Complete 2005-2025 coverage
β”‚       └── schema_analysis/           # Schema evolution analysis reports
β”‚           β”œβ”€β”€ analysis_report_2005.txt
β”‚           └── ...
β”‚
β”œβ”€β”€ subreddits_2025-01_*               # Subreddit metadata (January 2025 snapshot)
β”‚   β”œβ”€β”€ type_public.jsonl              # 2.78M public subreddits
β”‚   β”œβ”€β”€ type_restricted.jsonl          # 1.92M restricted subreddits
β”‚   β”œβ”€β”€ type_private.jsonl             # 182K private subreddits
β”‚   └── type_other.jsonl               # 100 other/archived subreddits
β”‚
β”œβ”€β”€ .gitattributes                     # Git LFS configuration for large files
└── README.md                          # This documentation file

πŸ“ˆ Dataset Statistics

Subreddit Ecosystem (January 2025)

  • Total Subreddits: 21,865,152
  • Public Communities: 2,776,279 (12.7%)
  • Restricted: 1,923,526 (8.8%)
  • Private: 182,045 (0.8%)
  • User Profiles: 16,982,966 (77.7%)

Content Scale (January 2025)

  • Monthly Submissions: ~39.9 million
  • Monthly Comments: 500+ million (estimated)
  • NSFW Content: 39.6% of submissions
  • Media Posts: 34.3% hosted on Reddit media domains

Largest Communities (by type)

  • Largest public: r/funny, 66.3M subscribers
  • Largest private: r/announcements, 305.6M subscribers
  • Largest restricted: r/XboxSeriesX, 5.3M subscribers

πŸ› οΈ Pipeline Stages

Stage 1: Data Acquisition

  • Download monthly Pushshift/Reddit archives
  • Compressed ZST format for efficiency
  • Complete coverage: 2005-2025
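
A minimal sketch of the download step in Python with requests; the BASE_URL below is a placeholder for whichever mirror hosts the dumps, not a real endpoint:

import requests

BASE_URL = "https://example.org/reddit/submissions"  # placeholder mirror, not a real endpoint

def download_month(name, dest):
    """Stream one monthly archive (e.g. RS_2005-06.zst) to disk in 1 MiB chunks."""
    with requests.get(f"{BASE_URL}/{name}", stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)

download_month("RS_2005-06.zst", "original_dump/submissions/RS_2005-06.zst")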

Stage 2: Schema Analysis

  • Field-by-field statistical analysis
  • Type distribution tracking
  • Null/empty value profiling
  • Schema evolution tracking (2005-2018 complete; later years in progress)
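
A minimal sketch of the per-field profiling pass, assuming one JSON object per line; the names and the tracking cap are illustrative rather than the shipped analyzer:

import json
from collections import Counter, defaultdict

def profile_jsonl(path, top_k=5, max_tracked=1000):
    """Collect per-field occurrence, type, and string-value statistics."""
    total = 0
    occurrence = Counter()          # lines in which each field appears
    types = defaultdict(Counter)    # field -> type name -> count
    values = defaultdict(Counter)   # field -> string value -> count
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            total += 1
            record = json.loads(line)
            for field, value in record.items():
                occurrence[field] += 1
                types[field][type(value).__name__] += 1
                if isinstance(value, str) and len(values[field]) < max_tracked:
                    values[field][value] += 1
    for field in sorted(occurrence):
        pct = 100.0 * occurrence[field] / total
        print(f"FIELD: {field}  Occurrence: {occurrence[field]}/{total} ({pct:.1f}%)")
        print("  Types:", dict(types[field]))
        if values[field]:
            print("  Top values:", values[field].most_common(top_k))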

Stage 3: Format Conversion

  • ZST β†’ JSONL decompression
  • JSONL β†’ Parquet conversion
  • Year-based partitioning for query efficiency
  • Columnar optimization for analytical queries
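
A compact sketch of the conversion using the zstandard and pyarrow packages; for brevity it materializes one month in memory, so full-size dumps would need batched ParquetWriter writes:

import io
import json
import pyarrow as pa
import pyarrow.parquet as pq
import zstandard

def zst_to_parquet(src, dst):
    """Decompress a monthly .zst dump and rewrite it as a Parquet file."""
    rows = []
    # The dumps use a large compression window; allow up to 2 GiB on decode.
    dctx = zstandard.ZstdDecompressor(max_window_size=2**31)
    with open(src, "rb") as fh, dctx.stream_reader(fh) as reader:
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            rows.append(json.loads(line))
    pq.write_table(pa.Table.from_pylist(rows), dst, compression="snappy")

zst_to_parquet("original_dump/submissions/RS_2005-06.zst",
               "converted_parquet/submissions/2005/RS_2005-06.parquet")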

Stage 4: Community Analysis

  • Subreddit categorization (public/private/restricted/user)
  • Subscriber distribution analysis
  • Media organization by community
  • Case studies of specific subreddits
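
A streaming sketch of the type split, assuming each metadata record carries Reddit's subreddit_type field; the combined input filename is illustrative:

import json
from collections import Counter

counts = Counter()
outputs = {}  # one JSONL output file per subreddit type
with open("subreddits_2025-01.jsonl", encoding="utf-8") as fh:  # illustrative input name
    for line in fh:
        kind = json.loads(line).get("subreddit_type") or "other"
        counts[kind] += 1
        if kind not in outputs:
            outputs[kind] = open(f"type_{kind}.jsonl", "w", encoding="utf-8")
        outputs[kind].write(line)
for out in outputs.values():
    out.close()
print(counts.most_common())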

πŸ”¬ Analysis Tools

Schema Analyzer

  • Processes JSONL files at 6,000-7,000 lines/second
  • Tracks 156 unique fields in submissions
  • Monitors type consistency and null rates
  • Generates comprehensive statistical reports

Subreddit Classifier

  • Categorizes 21.8M subreddits by type
  • Analyzes subscriber distributions
  • Identifies community growth patterns
  • Exports categorized datasets

Media Organizer

  • Content-addressable storage for deduplication
  • Daily organization (YYYY/MM/DD)
  • Subreddit-based categorization
  • Thumbnail generation
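
One plausible shape for the layout above, assuming SHA-256 content addressing with symlinks from the daily tree into the deduplicated store; store_media is a hypothetical helper, not a shipped script:

import hashlib
import shutil
from datetime import datetime
from pathlib import Path

def store_media(src, subreddit, posted: datetime, kind="images"):
    """File one downloaded media object under both content-hash and date paths."""
    src = Path(src)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    hashed = Path("reddit-media/content-hashed") / f"{digest}{src.suffix}"
    if not hashed.exists():  # identical bytes are stored exactly once
        hashed.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, hashed)
    daily = Path("reddit-media") / kind / f"r_{subreddit}" / posted.strftime("%Y/%m/%d")
    daily.mkdir(parents=True, exist_ok=True)
    link = daily / src.name
    if not link.exists():  # point the daily view at the deduplicated copy
        link.symlink_to(hashed.resolve())

Hashing before copying means a re-download of the same image resolves to the existing file, while the per-day symlinks keep the temporal view cheap.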

πŸ’Ύ Data Formats

Original Data

  • Format: ZST-compressed JSONL
  • Compression: Zstandard (high ratio)
  • Structure: Monthly files (RC/RS_YYYY-MM.zst)

Processed Data

  • Format: Apache Parquet
  • Compression: Snappy/GZIP (columnar)
  • Partitioning: Year-based (2005-2025)
  • Optimization: Column pruning, predicate pushdown
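
For example, scanning the year-partitioned tree with pyarrow exercises both optimizations: only the requested columns are decoded, and the filter is checked against partition directories and row-group statistics before rows are read (a sketch assuming the directory layout above):

import pyarrow as pa
import pyarrow.dataset as ds

part = ds.partitioning(pa.schema([("year", pa.int32())]))  # bare 2005/ ── 2025/ directories
dataset = ds.dataset("converted_parquet/submissions", format="parquet", partitioning=part)
table = dataset.to_table(
    columns=["author", "domain", "created_utc"],                            # column pruning
    filter=(ds.field("year") == 2005) & (ds.field("domain") == "cnn.com"),  # predicate pushdown
)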

Metadata

  • Format: JSONL
  • Categorization: Subreddit type classification
  • Timestamps: Unix epoch seconds
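
Epoch-second timestamps convert directly to timezone-aware datetimes; the value below is just a sample:

from datetime import datetime, timezone

created = datetime.fromtimestamp(1119552233, tz=timezone.utc)  # sample created_utc value
print(created.isoformat())  # 2005-06-23T18:43:53+00:00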

🎯 Research Applications

Community Studies

  • Subreddit lifecycle analysis
  • Moderation pattern tracking
  • Content policy evolution
  • NSFW community dynamics

Content Analysis

  • Media type evolution (2005-2025)
  • Post engagement metrics
  • Cross-posting behavior
  • Temporal posting patterns

Network Analysis

  • Cross-community interactions
  • User migration patterns
  • Community overlap studies
  • Influence network mapping

πŸ“Š Key Findings (Preliminary)

Subreddit Distribution

  • Long tail: 89.4% of subreddits have 0 subscribers
  • Growth pattern: Most communities start as user profiles
  • Restriction trend: 8.8% of communities are restricted
  • Private communities: Mostly large, established groups

Content Characteristics

  • Text dominance: 40.7% of posts are text-only
  • NSFW prevalence: 39.6% of content marked adult
  • Moderation scale: 32% removed by Reddit, 36% by moderators
  • Media evolution: Video posts growing (3% in Jan 2025)

πŸ“„ License & Attribution

Data Source

  • Reddit Historical Data via Pushshift/Reddit API
  • Subreddit metadata from Reddit API
  • Note: Respect Reddit's terms of service and API limits

Code License

MIT License - See LICENSE file for details

Citation

If using this pipeline for research:

Reddit Data Analysis Pipeline. (2025). Comprehensive archive and analysis 
tools for Reddit historical data (2005-2025). GitHub Repository.

πŸ†˜ Support & Contributing

Issue Tracking

  • Data quality issues: reference the relevant schema analysis report
  • Processing errors: Check file integrity and formats
  • Performance: Consider partitioning and compression settings

Contributing

  1. Fork repository
  2. Add tests for new analyzers
  3. Document data processing steps
  4. Submit pull request with analysis validation

Performance Tips

  • Use SSD storage for active processing
  • Enable memory mapping for large files
  • Consider Spark/Dask for distributed processing
  • Implement incremental updates for new data
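
As a sketch of the distributed option, assuming the Parquet tree above and Reddit's over_18 flag in the schema, Dask restricts the scan before any partition is loaded:

import dask.dataframe as dd

# Year partitions map onto Dask partitions; the filter is pushed down to
# Parquet row groups instead of being applied after a full load.
df = dd.read_parquet(
    "converted_parquet/submissions/",
    columns=["subreddit"],
    filters=[("over_18", "==", True)],
)
print(df.groupby("subreddit").size().nlargest(10).compute())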

πŸ“š Related Research

  • Social network analysis
  • Community detection algorithms
  • Content moderation studies
  • Temporal pattern analysis
  • Cross-platform comparative studies
