---
license: cc-by-4.0
size_categories:
  - 100K<n<1M
pretty_name: VocSim
tags:
  - audio
  - audio-similarity
  - zero-shot-learning
  - representation-learning
  - embedding-evaluation
  - unsupervised-learning
  - speech
  - environmental-sounds
  - animal-vocalizations
  - benchmark
paperswithcode_id: audiosim
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: subset
      dtype: string
    - name: speaker
      dtype: string
    - name: label
      dtype: string
  splits:
    - name: train
      num_bytes: 5452179735
      num_examples: 114641
  download_size: 5500616162
  dataset_size: 5452179735
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# VocSim: A Training-Free Benchmark for Content Identity in Single-Source Audio Embeddings

GitHub Repository · Paper: arXiv · License: CC BY 4.0

VocSim evaluates how well neural audio embeddings generalize to zero-shot audio similarity: can a frozen embedding recognize fine-grained acoustic content identity without any similarity-specific training?
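To make the training-free protocol concrete, here is a minimal sketch of nearest-neighbor evaluation over frozen embeddings. The `embed` function and the precision-at-1 metric are illustrative assumptions, not necessarily the benchmark's exact protocol:

```python
import numpy as np

def precision_at_1(embeddings: np.ndarray, labels: list) -> float:
    """Fraction of clips whose cosine nearest neighbor shares their label."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = x @ x.T                   # pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)  # exclude self-matches
    nearest = sims.argmax(axis=1)    # index of each clip's closest neighbor
    return float(np.mean([labels[i] == labels[j] for i, j in enumerate(nearest)]))

# embeddings = np.stack([embed(clip) for clip in clips])  # `embed` = any frozen model
```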


## Key Features

- **Diverse Sources:** Human speech (phones, words, utterances), birdsong, otter calls, and environmental sounds.
- **Varied Conditions:** Spans clean to noisy recordings, short (<100 ms) to long durations, and few to many classes per subset.
- **Standardized:** All audio is 16 kHz mono.

## Data Format

```python
{
  'audio': {'array': array([...], dtype=float32), 'sampling_rate': 16000},
  'subset': 'HW1',      # Origin identifier
  'speaker': 'spk_id',  # Speaker/Animal/Source ID or 'N/A'
  'label': 'hello'      # Ground-truth class for similarity
}
```
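A minimal loading sketch with the Hugging Face `datasets` library (the repository id below is a placeholder; substitute the dataset's actual Hub id):

```python
from datasets import load_dataset

# NOTE: placeholder repository id; replace with the dataset's actual Hub id.
ds = load_dataset("anonymous-submission000/vocsim", split="train")

example = ds[0]
waveform = example["audio"]["array"]  # float32 array, 16 kHz mono
print(example["subset"], example["speaker"], example["label"])
```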

**Train split:** 114,641 public examples drawn from 15 subsets, used for evaluation.
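To work with one subset at a time, filter on the `subset` column (a sketch continuing from the `ds` loaded above; `'HW1'` is the identifier shown in the example record):

```python
# Keep only clips from a single origin, e.g. the 'HW1' subset.
hw1 = ds.filter(lambda ex: ex["subset"] == "HW1")
print(f"HW1: {len(hw1)} examples")
```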

**Blind test sets:** 4 additional subsets are held out privately.

## Citation

```bibtex
@inproceedings{vocsim_authors_2025,
  title={VocSim: A Training-Free Benchmark for Content Identity in Single-Source Audio Embeddings},
  author={Anonymous Authors},
  booktitle={Conference/Journal},
  year={2025},
  url={[Link to paper upon DOI]}
}
```

## License

CC BY 4.0 - Creative Commons Attribution 4.0 International.