Multi-Window EEG Models
This repository hosts pre-trained PyTorch models (.pth files) for temporal analysis of EEG signals in object-category decoding. The models were trained on the Alljoined/05_125 dataset, which uses COCO 2017 images as stimuli. Each model processes a specific time window after stimulus onset to capture a different stage of visual processing (a sketch mapping these windows to sample indices follows the list):
- EarlyVisual (50-150ms): Early visual features (e.g., edges, basic shapes). AUROC: ~0.59
- MidFeature (150-250ms): Mid-level object parts (e.g., N170-like responses). AUROC: ~0.97
- LateSemantic (250-350ms): Late semantic integration (e.g., N400/P300). AUROC: ~0.67
- EarlyCombined (50-250ms): Combined early + mid processing. AUROC: ~0.97
- FullWindow (50-350ms): Full baseline window. AUROC: ~0.97
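The model card does not state the recording's sampling rate, so the window-to-sample mapping below is a minimal sketch assuming ~512 Hz, chosen because inclusive endpoints then yield the 52 timepoints the MidFeature model expects (see Usage); `WINDOWS_MS` and `slice_window` are illustrative names, not repo code.

```python
import torch

# Sketch: slice a stimulus-locked EEG epoch into one of the windows above.
# Assumes ~512 Hz sampling and an epoch tensor of shape [channels, samples]
# whose first sample falls at stimulus onset (t = 0).
WINDOWS_MS = {
    "EarlyVisual":   (50, 150),
    "MidFeature":    (150, 250),
    "LateSemantic":  (250, 350),
    "EarlyCombined": (50, 250),
    "FullWindow":    (50, 350),
}

def slice_window(epoch: torch.Tensor, name: str, sfreq: float = 512.0) -> torch.Tensor:
    start_ms, end_ms = WINDOWS_MS[name]
    start = int(round(start_ms / 1000 * sfreq))
    end = int(round(end_ms / 1000 * sfreq))
    return epoch[:, start:end + 1]  # inclusive slice: 52 samples for 150-250ms at 512 Hz
```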
These models share a hybrid CNN-Transformer architecture and perform multi-label classification over 38 COCO categories (animals, vehicles, food, outdoor objects), detecting weak category-specific signals in noisy EEG data.
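The checkpoints store weights under a "model_state_dict" key (see Usage), so the repo's HybridCNNTransformer class definition is required to restore them. Purely to illustrate the hybrid pattern, and not the shipped class, such a model pairs a 1D-convolutional front end over the 64 channels with a transformer encoder over time and a 38-way sigmoid head:

```python
import torch
import torch.nn as nn

# Illustrative sketch only -- NOT the repo's HybridCNNTransformer.
# Its weights will not load into this class; it only shows the pattern.
class ToyCNNTransformer(nn.Module):
    def __init__(self, n_timepoints: int, n_channels: int = 64,
                 n_classes: int = 38, d_model: int = 128):
        super().__init__()
        # n_timepoints kept only to mirror the real constructor's signature
        # CNN front end: mix 64 EEG channels into d_model features per timestep
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=5, padding=2),
            nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)  # one logit per COCO category

    def forward(self, x):                    # x: [batch, 64, n_timepoints]
        x = self.conv(x)                     # [batch, d_model, n_timepoints]
        x = self.encoder(x.transpose(1, 2))  # self-attention over timepoints
        return self.head(x.mean(dim=1))      # mean-pool over time -> [batch, 38]
```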
Usage
Load a model with PyTorch:
```python
import torch

# Example: load the MidFeature (150-250ms) model.
# HybridCNNTransformer is this repo's architecture class; it must be
# defined or imported before the state dict can be restored.
checkpoint = torch.load("model_150_250ms_MidFeature.pth", map_location="cpu")
model = HybridCNNTransformer(n_timepoints=52)  # 52 samples in the 150-250ms window
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()

# Inference on one EEG window; eeg_tensor has shape [1, 64, n_timepoints]
with torch.no_grad():
    logits = model(eeg_tensor)
probs = torch.sigmoid(logits)                     # independent per-category probabilities
top_categories = torch.topk(probs, k=20).indices  # 20 highest-scoring categories
```
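Before wiring in real data, a hypothetical smoke test with a random tensor of the expected shape confirms the forward pass and output size; because this is multi-label classification, thresholding each probability independently (rather than taking an argmax) gives the predicted category set:

```python
# Hypothetical smoke test -- random input shaped like one MidFeature window.
eeg_tensor = torch.randn(1, 64, 52)   # [batch, channels, timepoints]
with torch.no_grad():
    probs = torch.sigmoid(model(eeg_tensor))
print(probs.shape)                    # expected: torch.Size([1, 38])
preds = probs > 0.5                   # one common multi-label decision rule
```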