Request to open source the training code.
Please open source the training source code. Without it, the project is neither transparent nor a healthy open source project. If you want to encourage community developers to participate, please truly open source it.
Hi, we are finalizing the documentation and testing for the training scripts and related materials, and we will open source them on GitHub. We will notify you at that time and update the link in the README.
I may have spoken a little too harshly. Still, without an official training code library to reference, it is difficult to do secondary development or to migrate the architecture of existing models.
Hi there, we’ve built dllm-trainer, a lightweight finetuning framework for diffusion language models built on top of the 🤗 Transformers Trainer. Give it a try if you’d like to finetune LLaDA / LLaDA-MoE or Dream.