JSONL FILES ANALYSIS REPORT
Directory: /mnt/test/decompressed/submissions/2005
Analysis Started: 2026-01-15 06:08:33
Total files to process: 7
================================================================================

================================================================================
FILE: RS_2005-06.jsonl
Analysis Time: 2026-01-15 06:08:33
================================================================================
Total lines: 103
Processed lines: 103
Total unique fields: 58
Max unique values tracked per field: 1,000
================================================================================

────────────────────────────────────────────────────────────
FIELD: archived    Occurrence: 103/103 (100.0%)
  Types: bool:103
  Booleans: true:103 (100.0%), false:0 (0.0%)
────────────────────────────────────────────────────────────
FIELD: author    Occurrence: 103/103 (100.0%)
  Types: str:103
  String length avg: 7.4
  Unique strings tracked: 29
  Top 5 string values:
    'kn0thing': 30 (29.1%)
    'spez': 8 (7.8%)
    'chickenlittle': 8 (7.8%)
    'tyrtle': 8 (7.8%)
    'r0gue': 7 (6.8%)
────────────────────────────────────────────────────────────
FIELD: author_flair_background_color    Occurrence: 103/103 (100.0%)
  Types: NoneType:101, str:2
  Null/Empty: null:101, empty_str:2
────────────────────────────────────────────────────────────
FIELD: author_flair_css_class    Occurrence: 103/103 (100.0%)
  Types: NoneType:103
  Null/Empty: null:103
────────────────────────────────────────────────────────────
FIELD: author_flair_text    Occurrence: 103/103 (100.0%)
  Types: NoneType:103
  Null/Empty: null:103
────────────────────────────────────────────────────────────
FIELD: author_flair_text_color    Occurrence: 103/103 (100.0%)
  Types: NoneType:101, str:2
  Null/Empty: null:101
  String length avg: 4.0
  Unique strings tracked: 1
  String values distribution:
    'dark': 2 (100.0%)
────────────────────────────────────────────────────────────
FIELD: brand_safe    Occurrence: 103/103 (100.0%)
  Types: bool:103
  Booleans: true:103 (100.0%), false:0 (0.0%)
────────────────────────────────────────────────────────────
FIELD: can_gild    Occurrence: 103/103 (100.0%)
  Types: bool:103
  Booleans: true:101 (98.1%), false:2 (1.9%)
────────────────────────────────────────────────────────────
FIELD: contest_mode    Occurrence: 103/103 (100.0%)
  Types: bool:103
  Booleans: true:0 (0.0%), false:103 (100.0%)
────────────────────────────────────────────────────────────
FIELD: created_utc    Occurrence: 103/103 (100.0%)
  Types: int:103
  Numeric values: 103 total
  Numeric range: min:1,119,552,233, max:1,120,172,562, avg:1119886598.3
  Numeric std dev: 215205.7
  Unique numbers tracked: 103
────────────────────────────────────────────────────────────
FIELD: distinguished    Occurrence: 103/103 (100.0%)
  Types: NoneType:103
  Null/Empty: null:103
────────────────────────────────────────────────────────────
FIELD: domain    Occurrence: 103/103 (100.0%)
  Types: str:103
  String length avg: 12.6
  Unique strings tracked: 60
  Top 5 string values:
    'cnn.com': 14 (13.6%)
    'news.bbc.co.uk': 8 (7.8%)
    'nytimes.com': 6 (5.8%)
    'news.yahoo.com': 5 (4.9%)
    'wired.com': 4 (3.9%)
────────────────────────────────────────────────────────────
...
Reddit Data Archive & Analysis Pipeline
Overview
A comprehensive pipeline for archiving, processing, and analyzing Reddit data from 2005 to 2025. This repository contains tools for downloading Reddit's historical data, converting it to optimized formats, and performing detailed schema analysis and community studies.
Repository Structure
├── scripts/                               # Analysis scripts
├── analysis/                              # Analysis methods and visualizations
│   ├── original_schema_analysis/          # Schema evolution analysis
│   │   ├── comments/                      # Comment schema analysis results
│   │   ├── figures/                       # Schema analysis visualizations
│   │   └── submissions/                   # Submission schema analysis results
│   │       ├── analysis_report_2005.txt
│   │       ├── analysis_report_2006.txt
│   │       └── ...
│   └── parquet_subreddits_analysis/       # Analysis of Parquet-converted data
│       ├── comments/                      # Comment data analysis
│       ├── figures/                       # Subreddit analysis visualizations
│       └── submissions/                   # Submission data analysis
│
├── analyzed_subreddits/                   # Focused subreddit case studies
│   ├── comments/                          # Subreddit-specific comment archives
│   │   └── RC_funny.parquet               # r/funny comments (empty as of now)
│   ├── reddit-media/                      # Media organized by subreddit and date
│   │   ├── content-hashed/                # Deduplicated media (content addressing)
│   │   ├── images/                        # Image media
│   │   │   └── r_funny/                   # Organized by subreddit
│   │   │       └── 2025/01/01/            # Daily structure for temporal analysis
│   │   └── videos/                        # Video media
│   │       └── r_funny/                   # Organized by subreddit
│   │           └── 2025/01/01/            # Daily structure
│   └── submissions/                       # Subreddit-specific submission archives
│       └── RS_funny.parquet               # r/funny submissions (empty as of now)
│
├── converted_parquet/                     # Optimized Parquet format (year-partitioned)
│   ├── comments/                          # Comments 2005-2025
│   │   └── 2005/ ... 2025/                # Year partitions for efficient querying
│   └── submissions/                       # Submissions 2005-2025
│       └── 2005/ ... 2025/                # Year partitions
│
├── original_dump/                         # Raw downloaded Reddit archives
│   ├── comments/                          # Monthly comment archives (ZST compressed)
│   │   ├── RC_2005-12.zst ... RC_2025-12.zst  # Complete 2005-2025 coverage
│   │   └── schema_analysis/               # Schema analysis directory
│   └── submissions/                       # Monthly submission archives
│       ├── RS_2005-06.zst ... RS_2025-12.zst  # Complete 2005-2025 coverage
│       └── schema_analysis/               # Schema evolution analysis reports
│           ├── analysis_report_2005.txt
│           └── ...
│
├── subreddits_2025-01_*                   # Subreddit metadata (January 2025 snapshot)
│   ├── type_public.jsonl                  # 2.78M public subreddits
│   ├── type_restricted.jsonl              # 1.92M restricted subreddits
│   ├── type_private.jsonl                 # 182K private subreddits
│   └── type_other.jsonl                   # 100 other/archived subreddits
│
├── .gitattributes                         # Git LFS configuration for large files
└── README.md                              # This documentation file
Dataset Statistics
Subreddit Ecosystem (January 2025)
- Total Subreddits: 21,865,152
- Public Communities: 2,776,279 (12.7%)
- Restricted: 1,923,526 (8.8%)
- Private: 182,045 (0.83%)
- User Profiles: 16,982,966 (77.7%)
Content Scale (January 2025 Example)
- Monthly Submissions: ~39.9 million
- Monthly Comments: ~500 million (estimated)
- NSFW Content: 39.6% of submissions
- Media Posts: 34.3% hosted on Reddit media domains
Largest Communities
- Largest public: r/funny, 66.3M subscribers
- Largest private: r/announcements, 305.6M subscribers
- Largest restricted: r/XboxSeriesX, 5.3M subscribers
Pipeline Stages
Stage 1: Data Acquisition
- Download monthly Pushshift/Reddit archives
- Compressed ZST format for efficiency
- Complete coverage: 2005-2025
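A minimal sketch of the download step, assuming a placeholder archive URL (the real mirror location is not pinned down here); streaming in chunks keeps multi-GB archives out of memory:

```python
# Sketch: stream one monthly archive to disk. ARCHIVE_URL is a hypothetical
# placeholder, not a real endpoint; substitute your actual mirror.
import requests

ARCHIVE_URL = "https://example.org/reddit/submissions/RS_2005-06.zst"  # hypothetical

def download(url: str, dest: str) -> None:
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
                fh.write(chunk)

download(ARCHIVE_URL, "original_dump/submissions/RS_2005-06.zst")
```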
Stage 2: Schema Analysis
- Field-by-field statistical analysis
- Type distribution tracking
- Null/empty value profiling
- Schema evolution tracking (2005-2018 complete; later years coming soon; see the profiling sketch below)
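A minimal sketch of this kind of field profiling (function and variable names are ours, not the repository's actual script); it reproduces the occurrence, type, and top-value lines of the report excerpt above:

```python
# Sketch: per-field occurrence, type, and value profiling for a JSONL file.
import json
from collections import Counter, defaultdict

def profile_fields(path: str, max_tracked: int = 1000) -> None:
    total = 0
    occurrences = Counter()        # field -> lines containing it
    types = defaultdict(Counter)   # field -> type name -> count
    values = defaultdict(Counter)  # field -> tracked value -> count
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            total += 1
            for field, value in json.loads(line).items():
                occurrences[field] += 1
                types[field][type(value).__name__] += 1
                # Track up to max_tracked distinct values per field.
                if isinstance(value, (str, bool)) and (
                    value in values[field] or len(values[field]) < max_tracked
                ):
                    values[field][value] += 1
    for field in sorted(occurrences):
        pct = 100.0 * occurrences[field] / total
        print(f"FIELD: {field}    Occurrence: {occurrences[field]}/{total} ({pct:.1f}%)")
        print(f"  Types: {dict(types[field])}")
        for val, n in values[field].most_common(5):
            print(f"    {val!r}: {n} ({100.0 * n / total:.1f}%)")

profile_fields("RS_2005-06.jsonl")
```

Run against RS_2005-06.jsonl, this prints lines in the same shape as the report preview, e.g. `FIELD: author    Occurrence: 103/103 (100.0%)`.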
Stage 3: Format Conversion
- ZST → JSONL decompression
- JSONL → Parquet conversion (sketched after this list)
- Year-based partitioning for query efficiency
- Columnar optimization for analytical queries
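A hedged sketch of the conversion for one monthly file, using the zstandard and pyarrow packages (paths follow the repo layout; a production version would stream record batches rather than hold a whole month in memory):

```python
# Sketch: ZST -> JSONL -> Parquet for a single monthly archive.
import io
import json

import pyarrow as pa
import pyarrow.parquet as pq
import zstandard  # pip install zstandard pyarrow

def zst_to_parquet(src: str, dest: str) -> None:
    rows = []
    with open(src, "rb") as fh:
        # Pushshift-era archives need a large decompression window.
        reader = zstandard.ZstdDecompressor(max_window_size=2**31).stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            rows.append(json.loads(line))
    # Columnar write; Snappy keeps decoding cheap for analytical scans.
    pq.write_table(pa.Table.from_pylist(rows), dest, compression="snappy")

zst_to_parquet("original_dump/submissions/RS_2005-06.zst",
               "converted_parquet/submissions/2005/RS_2005-06.parquet")
```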
Stage 4: Community Analysis
- Subreddit categorization (public/private/restricted/user)
- Subscriber distribution analysis
- Media organization by community
- Case studies of specific subreddits
Analysis Tools
Schema Analyzer
- Processes JSONL files at 6,000-7,000 lines/second
- Tracks 156 unique fields in submissions
- Monitors type consistency and null rates
- Generates comprehensive statistical reports
Subreddit Classifier
- Categorizes 21.8M subreddits by type
- Analyzes subscriber distributions
- Identifies community growth patterns
- Exports categorized datasets
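A minimal sketch of the classification pass, assuming the metadata carries Reddit's `subreddit_type` field (values `public`, `restricted`, `private`, `user`); the input filename is illustrative:

```python
# Sketch: split a subreddit metadata dump into per-type JSONL files.
import json
from collections import Counter

counts: Counter = Counter()
outputs = {}  # type name -> open file handle

with open("subreddits_2025-01.jsonl", encoding="utf-8") as src:
    for line in src:
        kind = json.loads(line).get("subreddit_type") or "other"
        if kind not in ("public", "restricted", "private", "user"):
            kind = "other"
        counts[kind] += 1
        if kind not in outputs:  # open each output file exactly once
            outputs[kind] = open(f"type_{kind}.jsonl", "w", encoding="utf-8")
        outputs[kind].write(line)

for fh in outputs.values():
    fh.close()
print(dict(counts))  # the tallies behind the ecosystem statistics above
```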
Media Organizer
- Content-addressable storage for deduplication
- Daily organization (YYYY/MM/DD)
- Subreddit-based categorization
- Thumbnail generation
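A minimal sketch of the content-addressing idea (the two-character fan-out is our choice, not necessarily the repo's exact layout): the SHA-256 of the bytes names the file, so a re-downloaded image lands on the same path and is stored once:

```python
# Sketch: store a media file under its SHA-256 digest for deduplication.
import hashlib
import shutil
from pathlib import Path

def store_content_addressed(src: Path, root: Path) -> Path:
    # read_bytes() is fine for a sketch; stream and update the hash for huge files.
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    dest = root / "content-hashed" / digest[:2] / digest  # fan out by hash prefix
    if not dest.exists():                                 # identical content: skip copy
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)
    return dest

# Usage: store_content_addressed(Path("cat.jpg"), Path("analyzed_subreddits/reddit-media"))
```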
Data Formats
Original Data
- Format: ZST-compressed JSONL
- Compression: Zstandard (high ratio)
- Structure: Monthly files (RC/RS_YYYY-MM.zst)
Processed Data
- Format: Apache Parquet
- Compression: Snappy/GZIP (columnar)
- Partitioning: Year-based (2005-2025)
- Optimization: Column pruning, predicate pushdown
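For example, with pyarrow the year directories become a filterable column, so a query for 2005 submissions touches one partition and three columns (a sketch; paths follow the repo layout):

```python
# Sketch: column pruning + partition pushdown over the year-partitioned tree.
import pyarrow as pa
import pyarrow.dataset as ds

# Directory names like converted_parquet/submissions/2005/ parse into a "year" column.
part = ds.partitioning(pa.schema([("year", pa.int32())]))
dataset = ds.dataset("converted_parquet/submissions", format="parquet", partitioning=part)

table = dataset.to_table(
    columns=["author", "domain", "created_utc"],  # column pruning
    filter=ds.field("year") == 2005,              # only the 2005/ partition is read
)
print(table.num_rows)
```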
Metadata
- Format: JSONL
- Categorization: Subreddit type classification
- Timestamps: Unix epoch seconds
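For instance, the minimum created_utc in the RS_2005-06 report above decodes straightforwardly:

```python
# created_utc is plain Unix epoch seconds (UTC).
from datetime import datetime, timezone

print(datetime.fromtimestamp(1_119_552_233, tz=timezone.utc))
# -> 2005-06-23 18:43:53+00:00
```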
Research Applications
Community Studies
- Subreddit lifecycle analysis
- Moderation pattern tracking
- Content policy evolution
- NSFW community dynamics
Content Analysis
- Media type evolution (2005-2025)
- Post engagement metrics
- Cross-posting behavior
- Temporal posting patterns
Network Analysis
- Cross-community interactions
- User migration patterns
- Community overlap studies
- Influence network mapping
Key Findings (Preliminary)
Subreddit Distribution
- Long tail: 89.4% of subreddits have 0 subscribers
- Growth pattern: Most communities start as user profiles
- Restriction trend: 8.8% of communities are restricted
- Private communities: Mostly large, established groups
Content Characteristics
- Text dominance: 40.7% of posts are text-only
- NSFW prevalence: 39.6% of content marked adult
- Moderation scale: 32% removed by Reddit, 36% by moderators
- Media evolution: Video posts growing (3% in Jan 2025)
License & Attribution
Data Source
- Reddit Historical Data via Pushshift/Reddit API
- Subreddit metadata from Reddit API
- Note: Respect Reddit's terms of service and API limits
Code License
MIT License - See LICENSE file for details
Citation
If using this pipeline for research:
Reddit Data Analysis Pipeline. (2025). Comprehensive archive and analysis
tools for Reddit historical data (2005-2025). GitHub Repository.
Support & Contributing
Issue Tracking
- Data quality issues: Report in schema analysis
- Processing errors: Check file integrity and formats
- Performance: Consider partitioning and compression settings
Contributing
- Fork repository
- Add tests for new analyzers
- Document data processing steps
- Submit pull request with analysis validation
Performance Tips
- Use SSD storage for active processing
- Enable memory mapping for large files
- Consider Spark/Dask for distributed processing
- Implement incremental updates for new data
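As a sketch of the distributed option, Dask can scan the year partitions lazily and parallelize the per-file work (paths are illustrative):

```python
# Sketch: out-of-core scan over all submission years with Dask.
import dask.dataframe as dd  # pip install "dask[dataframe]"

df = dd.read_parquet("converted_parquet/submissions/*/*.parquet",
                     columns=["author", "domain"])  # read only what the query needs
top_domains = df["domain"].value_counts().nlargest(10).compute()
print(top_domains)
```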
Related Research
- Social network analysis
- Community detection algorithms
- Content moderation studies
- Temporal pattern analysis
- Cross-platform comparative studies