A dataset of roughly 500 podcasts, chopped into 60-second segments and then tokenized into discrete tokens. It is meant for autoregressive training. A variant with 30-second segments is also available. In total the dataset contains 320 million tokens (0.32B), hopefully enough to train a medium-sized model.
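
As a minimal sketch of how a dataset like this could be consumed for next-token prediction: the repo id and the `tokens` column name below are placeholders, not confirmed by this card, and since the dataset is gated you need an authenticated Hugging Face session to load it.

```python
# Minimal sketch: load the gated dataset and form autoregressive
# (next-token-prediction) input/target pairs from one segment.
# "user/podcast-tokens" and the "tokens" column are hypothetical names.
from datasets import load_dataset

ds = load_dataset("user/podcast-tokens", split="train", token=True)

segment = ds[0]["tokens"]  # assumed: list of discrete token IDs for one 60s segment
inputs = segment[:-1]      # model sees tokens 0..n-2
targets = segment[1:]      # model predicts tokens 1..n-1
```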

By using this dataset, you agree to the following:

  • You may not use it commercially
  • You must credit me
  • You must use this license

If you would like to arrange different terms, please contact me.
