# Model Card for SynCodonLM
- This model replicates the version trained with a species token type ID, but is trained without any token type IDs.
- This model is fully protein-agnostic, whereas the species token type model may still retain a small amount of spurious statistical signal.
## Installation
```bash
git clone https://github.com/Boehringer-Ingelheim/SynCodonLM.git
cd SynCodonLM
pip install -r requirements.txt  # may not be necessary depending on your environment :)
```
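Since the `pip install` step may be unnecessary in an environment that already has the dependencies, a minimal check like the sketch below can help you decide before reinstalling. The package names `torch` and `transformers` are assumptions about the core dependencies, not the authoritative list from `requirements.txt`:

```python
# A minimal sketch for checking core dependencies before (re)installing.
# torch and transformers are assumed here; consult requirements.txt for
# the authoritative list.
import importlib.util

for pkg in ("torch", "transformers"):
    if importlib.util.find_spec(pkg) is None:
        print(f"{pkg} not found -- run `pip install -r requirements.txt`")
    else:
        print(f"{pkg} is available")
```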
## Embedding a Coding DNA Sequence Using our Model Trained without Token Type ID
```python
from SynCodonLM import CodonEmbeddings

model = CodonEmbeddings(model_name='jheuschkel/SynCodonLM-V2-NoTokenType')  # loads the model & tokenizer using our built-in functions

seq = 'ATGTCCACCGGGCGGTGA'

mean_pooled_embedding = model.get_mean_embedding(seq)
# returns --> tensor of shape [768]

raw_output = model.get_raw_embeddings(seq)
raw_embedding_final_layer = raw_output.hidden_states[-1]  # treat this like a typical Hugging Face dictionary-style model output
# returns --> tensor of shape [batch size (1), sequence length, 768]
```
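As a follow-up to the snippet above, the sketch below shows one way to work with these outputs: pooling the raw per-token hidden states yourself, and comparing two sequences via cosine similarity of their mean-pooled embeddings. It continues from the variables defined above, assumes the raw output follows the standard Hugging Face layout shown there, and uses a plain mean over all token positions for illustration, which may differ slightly from how `get_mean_embedding` handles special tokens internally. The second sequence is a hypothetical synonymous variant added here purely for demonstration.

```python
import torch.nn.functional as F

# Continuing from the example above: raw_embedding_final_layer has shape
# [1, sequence length, 768]. A simple mean over the token dimension gives a
# sequence-level vector (this plain mean may differ from get_mean_embedding
# if the library excludes special tokens when pooling).
manual_mean = raw_embedding_final_layer.mean(dim=1).squeeze(0)  # shape [768]

# Mean-pooled embeddings can be compared across sequences, e.g. two
# synonymous coding sequences for the same short peptide.
seq_b = 'ATGAGCACAGGCAGATGA'  # hypothetical synonymous variant of seq, for illustration
emb_a = model.get_mean_embedding(seq)
emb_b = model.get_mean_embedding(seq_b)
similarity = F.cosine_similarity(emb_a.unsqueeze(0), emb_b.unsqueeze(0)).item()
print(f"cosine similarity: {similarity:.3f}")
```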
## Citation
If you use this work, please cite:
```bibtex
@article{Heuschkel2025.08.19.671089,
    author = {Heuschkel, James and Kingsley, Laura and Pefaur, Noah and Nixon, Andrew and Cramer, Steven},
    title = {Advancing Codon Language Modeling with Synonymous Codon Constrained Masking},
    elocation-id = {2025.08.19.671089},
    year = {2025},
    doi = {10.1101/2025.08.19.671089},
    publisher = {Cold Spring Harbor Laboratory},
    abstract = {Codon language models offer a promising framework for modeling protein-coding DNA sequences, yet current approaches often conflate codon usage with amino acid semantics, limiting their ability to capture DNA-level biology. We introduce SynCodonLM, a codon language model that enforces a biologically grounded constraint: masked codons are only predicted from synonymous options, guided by the known protein sequence. This design disentangles codon-level from protein-level semantics, enabling the model to learn nucleotide-specific patterns. The constraint is implemented by masking non-synonymous codons from the prediction space prior to softmax. Unlike existing models, which cluster codons by amino acid identity, SynCodonLM clusters by nucleotide properties, revealing structure aligned with DNA-level biology. Furthermore, SynCodonLM outperforms existing models on 6 of 7 benchmarks sensitive to DNA-level features, including mRNA and protein expression. Our approach advances domain-specific representation learning and opens avenues for sequence design in synthetic biology, as well as deeper insights into diverse bioprocesses. Competing Interest Statement: The authors have declared no competing interest.},
    URL = {https://www.biorxiv.org/content/early/2025/08/24/2025.08.19.671089},
    eprint = {https://www.biorxiv.org/content/early/2025/08/24/2025.08.19.671089.full.pdf},
    journal = {bioRxiv}
}
```