---
tags:
- language-model
- transformer-decoder
- tiny-shakespeare
license: mit
datasets:
- tiny_shakespeare
---

This is a small autoregressive language model based on the Transformer architecture, trained on the Tiny Shakespeare dataset.

## Model Description

The model is a custom implementation of a `TransformerDecoderModel`, which uses a decoder-only architecture similar to GPT-2. It was trained on the Tiny Shakespeare dataset to generate text in the style of William Shakespeare.
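
For readers unfamiliar with the decoder-only layout, the sketch below shows what one such block typically looks like in PyTorch: masked self-attention followed by a feed-forward network, with residual connections. The layer sizes and module names are illustrative assumptions, not the exact implementation behind this checkpoint.

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """One GPT-style decoder block: masked self-attention + feed-forward.

    Hyperparameters here are illustrative, not this checkpoint's settings.
    """
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        # Causal mask: each position may only attend to earlier positions
        seq_len = x.size(1)
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device),
            diagonal=1,
        )
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        x = x + attn_out
        x = x + self.ff(self.ln2(x))
        return x
```

Stacking several such blocks on top of a token embedding with positional information, and projecting the final hidden states to vocabulary logits, gives a GPT-style autoregressive model.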

## Training Details

The model was trained and tracked using [Weights & Biases](https://wandb.ai/honcharova-de-hannover/LanguageModel_Project?nw=nwuserhoncharovade).
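
As a rough illustration of what that tracking involves, here is a minimal sketch of logging training loss to W&B. The config values are placeholders, and `train_loader` and `training_step` are hypothetical stand-ins for the actual training code, which is not shown in this card.

```python
import wandb

# Placeholder config: not the actual hyperparameters of this run
wandb.init(project="LanguageModel_Project", config={"lr": 3e-4, "batch_size": 64})

for step, batch in enumerate(train_loader):  # train_loader: hypothetical DataLoader
    loss = training_step(batch)              # training_step: hypothetical helper
    wandb.log({"train/loss": loss, "step": step})

wandb.finish()
```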

## How to Use

To generate text with this model, load the model and tokenizer as follows:

```python
from transformers import AutoTokenizer, GPT2LMHeadModel

# Load the model and tokenizer from the Hub
model = GPT2LMHeadModel.from_pretrained('NataliaH/TransformerDecoderModel')
tokenizer = AutoTokenizer.from_pretrained('NataliaH/TransformerDecoderModel')

# Encode a prompt and generate a continuation
input_text = 'To be or not to be'
inputs = tokenizer(input_text, return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
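
For more varied output, you can enable sampling in `generate`. The values below are illustrative defaults, not tuned settings for this model:

```python
# Sample instead of greedy decoding; temperature and top_k are illustrative
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.8,
    top_k=50,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```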