A dataset of about 500 podcasts chopped into 60-second segments and tokenized into discrete tokens, intended for autoregressive training. A variant with 30-second segments is also available. In total the dataset contains roughly 320 million (0.32B) tokens, hopefully enough to train a medium-sized model.
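Below is a minimal sketch of how the segments could be packed into fixed-length blocks for next-token training. The repository ID (`user/podcast-tokens`) and the column name (`tokens`) are placeholders, not the dataset's actual identifiers; substitute the real values from this card.

```python
# Sketch only: assumes the dataset is on the Hugging Face Hub and exposes a
# column of discrete token IDs per segment. Names below are placeholders.
from datasets import load_dataset
import torch

def pack_batches(dataset, context_length=1024):
    """Concatenate per-segment token lists and yield fixed-length (input, target) pairs."""
    buffer = []
    for example in dataset:
        buffer.extend(example["tokens"])  # hypothetical column of token IDs
        while len(buffer) >= context_length + 1:
            block = buffer[:context_length + 1]
            buffer = buffer[context_length + 1:]
            # Shift by one token so the model predicts the next token (autoregressive).
            yield torch.tensor(block[:-1]), torch.tensor(block[1:])

ds = load_dataset("user/podcast-tokens", split="train")  # placeholder repo ID
for inputs, targets in pack_batches(ds):
    ...  # feed (inputs, targets) to your autoregressive model
    break
```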
By using this dataset you agree that:
- You can't use it commercially
- You have to credit me
- You have to use this license
If you want to work something out, please contact me.