Roman0 committed
Commit 7ccfee6 · 1 Parent(s): 4d33105

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +589 -199
README.md CHANGED
@@ -1,199 +1,589 @@
- ---
- library_name: transformers
- tags: []
- ---
-
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
+ ---
+ license: gemma
+ tags:
+ - gemma3
+ - gemma
+ - google
+ - functiongemma
+ - heretic
+ - uncensored
+ - decensored
+ - abliterated
+ pipeline_tag: text-generation
+ library_name: transformers
+ extra_gated_heading: Access Gemma on Hugging Face
+ extra_gated_prompt: To access FunctionGemma on Hugging Face, you’re required to review
+ and agree to Google’s usage license. To do this, please ensure you’re logged in
+ to Hugging Face and click below. Requests are processed immediately.
+ extra_gated_button_content: Acknowledge license
+ ---
+ # This is a decensored version of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it), made using [Heretic](https://github.com/p-e-w/heretic) v1.1.0
+
+ ## Abliteration parameters
+
+ | Parameter | Value |
+ | :-------- | :---: |
+ | **direction_index** | per layer |
+ | **attn.o_proj.max_weight** | 1.25 |
+ | **attn.o_proj.max_weight_position** | 14.09 |
+ | **attn.o_proj.min_weight** | 0.92 |
+ | **attn.o_proj.min_weight_distance** | 6.34 |
+ | **mlp.down_proj.max_weight** | 1.49 |
+ | **mlp.down_proj.max_weight_position** | 11.73 |
+ | **mlp.down_proj.min_weight** | 0.42 |
+ | **mlp.down_proj.min_weight_distance** | 5.54 |
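+
+ These parameters shape how strongly the refusal direction is ablated at each layer. As an illustration only (the exact kernel is defined in the Heretic source; treat this as an assumption, not Heretic's definitive formula), one plausible reading is a piecewise-linear profile that peaks at `max_weight` around layer `max_weight_position` and tapers to `min_weight` at `min_weight_distance` layers away:
+
+ ```python
+ def ablation_weight(layer: float, max_weight: float, max_weight_position: float,
+                     min_weight: float, min_weight_distance: float) -> float:
+     """Illustrative per-layer ablation weight; see the Heretic source for the real kernel."""
+     distance = abs(layer - max_weight_position)
+     if distance >= min_weight_distance:
+         return min_weight
+     # Linear taper from the peak weight down to the floor.
+     return max_weight - (max_weight - min_weight) * distance / min_weight_distance
+
+ # attn.o_proj values from the table above:
+ print(ablation_weight(14, 1.25, 14.09, 0.92, 6.34))  # near the peak: ~1.25
+ print(ablation_weight(2, 1.25, 14.09, 0.92, 6.34))   # far from the peak: 0.92
+ ```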
+
+ ## Performance
+
+ | Metric | This model | Original model ([google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it)) |
+ | :----- | :--------: | :---------------------------: |
+ | **KL divergence** | 0.2851 | 0 *(by definition)* |
+ | **Refusals** | 5/100 | 100/100 |
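+
+ The KL divergence row measures how far the decensored model's next-token distribution drifts from the original on ordinary prompts (lower means behavior is better preserved), while refusals drop from 100/100 to 5/100. As a rough sketch of how such a figure could be estimated (the prompt set and the direction of the divergence are assumptions; Heretic's own measurement protocol may differ):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tok = AutoTokenizer.from_pretrained("google/functiongemma-270m-it")
+ original = AutoModelForCausalLM.from_pretrained("google/functiongemma-270m-it")
+ decensored = AutoModelForCausalLM.from_pretrained("path/to/this-model")  # placeholder repo id
+
+ prompts = ["Explain how photosynthesis works."]  # hypothetical evaluation set
+ total = 0.0
+ for prompt in prompts:
+     ids = tok(prompt, return_tensors="pt")
+     with torch.no_grad():
+         log_p = F.log_softmax(original(**ids).logits[0, -1], dim=-1)
+         log_q = F.log_softmax(decensored(**ids).logits[0, -1], dim=-1)
+     # KL(original || decensored) over the next-token distribution
+     total += F.kl_div(log_q, log_p, log_target=True, reduction="sum").item()
+ print(total / len(prompts))
+ ```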
+
+ -----
+
+ # FunctionGemma model card
+
+ **Model Page**: [FunctionGemma](https://ai.google.dev/gemma/docs/functiongemma)
+
+ **Resources and Technical Documentation**:
+
+ - [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
+ - [FunctionGemma on Kaggle](https://www.kaggle.com/models/google/functiongemma/)
+ - [FunctionGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/functiongemma)
+
+ **Terms of Use**: [Terms](https://ai.google.dev/gemma/terms)\
+ **Authors**: Google DeepMind
+
+ ## Model Information
+
+ Summary description and brief definition of inputs and outputs.
+
+ ### Description
+
+ > [!Note]
+ > FunctionGemma is intended to be fine-tuned for your specific function-calling task, including multi-turn use cases.
+
+ FunctionGemma is a lightweight, open model from Google, built as a foundation for creating your own specialized function-calling models. FunctionGemma is not intended for use as a direct dialogue model; as is typical of models this size, it is designed to be highly performant after further fine-tuning. Built on the Gemma 3 270M model, with the same research and technology used to create the Gemini models, FunctionGemma has been trained specifically for function calling. The model has the same architecture as Gemma 3 but uses a different chat format, and it is well suited for text-only function calling. Its uniquely small size makes it possible to deploy in environments with limited resources such as laptops, desktops, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone. Like the base Gemma 270M, the model has been optimized to be extremely versatile and performant on a variety of hardware in single-turn scenarios, but it should be fine-tuned on single-turn or multi-turn task-specific data to achieve the best accuracy in specific domains.
+
+ To demonstrate how specializing the 270M-parameter model can achieve high performance on specific agentic workflows, we have highlighted two use cases in the [Google AI Edge Gallery app](https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery&pcampaignid=web_share).
+
+ - **Tiny Garden:** A model fine-tuned to power a voice-controlled interactive game. It handles game logic to manage a virtual plot of land, decomposing commands like "Plant sunflowers in the top row" and "Water the flowers in plots 1 and 2" into app-specific functions (e.g., plant_seed, water_plots) and coordinate targets. This demonstrates the model's capacity to drive custom app mechanics without server connectivity.
+
+ - **Mobile Actions:** To empower developers to build their own expert agents, we have published [a dataset](https://huggingface.co/datasets/google/mobile-actions) and [fine-tuning recipe](https://github.com/google-gemini/gemma-cookbook/blob/main/FunctionGemma/%5BFunctionGemma%5DFinetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb) to demonstrate fine-tuning FunctionGemma. It translates user inputs (e.g., "Create a calendar event for lunch," "Turn on the flashlight") into function calls that trigger Android OS system tools. This interactive notebook demonstrates how to take the base FunctionGemma model and build a "Mobile Actions" fine-tune from scratch for use in the [Google AI Edge Gallery app](https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery&pcampaignid=web_share). This use case demonstrates the model's ability to act as an offline, private agent for personal device tasks (a minimal fine-tuning sketch follows this list).
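+
+ Since the card recommends fine-tuning for your own task, here is a minimal supervised fine-tuning sketch with TRL against the published mobile-actions dataset. The dataset field names (`messages`, `tools`) and the hyperparameters are assumptions for illustration; follow the official recipe notebook linked above for the real procedure.
+
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from trl import SFTConfig, SFTTrainer
+
+ tokenizer = AutoTokenizer.from_pretrained("google/functiongemma-270m-it")
+ model = AutoModelForCausalLM.from_pretrained("google/functiongemma-270m-it")
+ dataset = load_dataset("google/mobile-actions", split="train")
+
+ def to_text(example):
+     # Hypothetical schema: render one conversation with the chat template.
+     return {"text": tokenizer.apply_chat_template(
+         example["messages"], tools=example.get("tools"), tokenize=False)}
+
+ trainer = SFTTrainer(
+     model=model,
+     train_dataset=dataset.map(to_text),
+     args=SFTConfig(output_dir="functiongemma-mobile-actions",
+                    per_device_train_batch_size=4, num_train_epochs=1),
+ )
+ trainer.train()
+ ```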
+
+ ### Inputs and outputs
+
+ - **Input:**
+     - Text string, such as a question, a prompt, or a document to be summarized
+     - Total input context of 32K tokens
+ - **Output:**
+     - Generated text in response to the input, such as an answer to a question or a summary of a document
+     - Total output context of up to 32K tokens per request, minus the request's input tokens (see the budget sketch after this list)
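+
+ A small sketch of budgeting generation against that limit, assuming the "32K" window means 32,768 tokens (verify against the model config):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ CONTEXT_WINDOW = 32 * 1024  # assumed from the card's "32K" wording
+
+ tokenizer = AutoTokenizer.from_pretrained("google/functiongemma-270m-it")
+ prompt = "What's the temperature in London?"
+ n_input = len(tokenizer(prompt)["input_ids"])
+ max_new_tokens = CONTEXT_WINDOW - n_input  # room left for generated tokens
+ print(f"{n_input} input tokens; up to {max_new_tokens} new tokens available")
+ ```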
+
+ ### Basic Usage
+
+ The following is a code example of how to use FunctionGemma to generate a function call from a JSON definition using the Hugging Face Transformers library.
+
+ First, install the dependencies:
+
+ ```sh
+ $ pip install torch
+ $ pip install transformers
+ ```
+
+ Then load the model and the processor using Transformers:
+
+ ```python
+ from transformers import AutoProcessor, AutoModelForCausalLM
+
+ processor = AutoProcessor.from_pretrained("google/functiongemma-270m-it", device_map="auto")
+ model = AutoModelForCausalLM.from_pretrained("google/functiongemma-270m-it", dtype="auto", device_map="auto")
+ ```
+
+ Define the function using a JSON schema, then set a system instruction using the developer role; this is required to let the model know it should use the function(s) provided. Add a user query as input to the model and then generate the output. The model will then generate one or more function calls that it wants the developer to make on its behalf.
+
+ ```python
+ weather_function_schema = {
+     "type": "function",
+     "function": {
+         "name": "get_current_temperature",
+         "description": "Gets the current temperature for a given location.",
+         "parameters": {
+             "type": "object",
+             "properties": {
+                 "location": {
+                     "type": "string",
+                     "description": "The city name, e.g. San Francisco",
+                 },
+             },
+             "required": ["location"],
+         },
+     }
+ }
+
+ message = [
+     # ESSENTIAL SYSTEM PROMPT:
+     # This line activates the model's function calling logic.
+     {
+         "role": "developer",
+         "content": "You are a model that can do function calling with the following functions"
+     },
+     {
+         "role": "user",
+         "content": "What's the temperature in London?"
+     }
+ ]
+
+ # Render the chat plus the tool schema into model-ready input tensors.
+ inputs = processor.apply_chat_template(message, tools=[weather_function_schema], add_generation_prompt=True, return_dict=True, return_tensors="pt")
+
+ out = model.generate(**inputs.to(model.device), pad_token_id=processor.eos_token_id, max_new_tokens=128)
+ # Decode only the newly generated tokens, skipping the prompt.
+ output = processor.decode(out[0][len(inputs["input_ids"][0]):], skip_special_tokens=True)
+
+ print(output)
+ # <start_function_call>call:get_current_temperature{location:<escape>London<escape>}<end_function_call>
+ ```
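+
+ The call comes back as plain text between `<start_function_call>` and `<end_function_call>` markers. Continuing from the snippet above, here is a minimal sketch of parsing that string and dispatching it to a local function; the regex and `<escape>` handling are inferred from this single example output, so check the FunctionGemma format documentation before relying on them:
+
+ ```python
+ import re
+
+ def parse_function_call(text):
+     # Inferred format: <start_function_call>call:NAME{key:<escape>value<escape>,...}<end_function_call>
+     match = re.search(r"<start_function_call>call:(\w+)\{(.*)\}<end_function_call>", text)
+     if match is None:
+         return None
+     name, body = match.group(1), match.group(2)
+     args = dict(re.findall(r"(\w+):<escape>(.*?)<escape>", body))
+     return name, args
+
+ def get_current_temperature(location):
+     return f"18°C in {location}"  # stub; call a real weather service here
+
+ tools = {"get_current_temperature": get_current_temperature}
+ parsed = parse_function_call(output)
+ if parsed is not None:
+     name, args = parsed
+     print(tools[name](**args))  # result to feed back to the model as a tool response
+ ```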
+
+ For more detailed examples, see the [Gemma documentation](https://ai.google.dev/gemma/docs/functiongemma).
+
+ ## Model Data
+
+ Data used for model training and how the data was processed.
+
+ ### Training Dataset
+
+ These models were trained on a dataset of text data that includes a wide variety of sources. The model was trained with 6T tokens, and the knowledge cutoff date for the training data was August 2024. The key components are:
+
+ - Public Tool Definitions - Common APIs found on the web
+ - Tool Use Interactions - A mix of prompts, function calls, function responses, and natural language responses from the model that summarize the function call response or request clarification when the prompt is ambiguous or incomplete.
+
+ ### Data Preprocessing
+
+ Here are the key data cleaning and filtering methods applied to the training data:
+
+ - CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
+ - Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
+ - Additional methods: Filtering based on content quality and safety in line with [our policies](https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf).
+
+ ## Implementation Information
+
+ Details about the model internals.
+
+ ### Hardware
+
+ Gemma was trained using [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv4p, TPUv5p and TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain:
+
+ - Performance: TPUs are specifically designed to handle the massive computations involved in training large models. They can speed up training considerably compared to CPUs.
+ - Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality.
+ - Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing.
+ - Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training.
+ - These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
+
+ ### Software
+
+ Training was done using [JAX](https://github.com/jax-ml/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for foundation models, including large language models like these.\
+ Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://goo.gle/gemma2report): *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."*
+
+ ## Evaluation
+
+ Model evaluation metrics and results.
+
+ ### Benchmark Results
+
+ | Benchmark | n-shot | FunctionGemma 270M |
+ | :-------- | :----: | :----------------: |
+ | BFCL Simple | 0-shot | 61.6 |
+ | BFCL Parallel | 0-shot | 63.5 |
+ | BFCL Multiple | 0-shot | 39 |
+ | BFCL Parallel Multiple | 0-shot | 29.5 |
+ | BFCL Live Simple | 0-shot | 36.2 |
+ | BFCL Live Parallel | 0-shot | 25.7 |
+ | BFCL Live Multiple | 0-shot | 22.9 |
+ | BFCL Live Parallel Multiple | 0-shot | 20.8 |
+ | BFCL Relevance | 0-shot | 61.1 |
+ | BFCL Irrelevance | 0-shot | 70.6 |
+
+ **Impact on Performance after Fine-tuning on the Mobile Actions Dataset**\
+ To demonstrate the value of specialization for small language models, we compared the base FunctionGemma model against the fine-tuned model using the "Mobile Actions" [recipe](https://github.com/google-gemini/gemma-cookbook/blob/main/FunctionGemma/%5BFunctionGemma%5DFinetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb). Fine-tuning significantly improved the base FunctionGemma model's ability to correctly identify and format mobile system calls.
+
+ | Model | Eval results for Mobile Actions |
+ | :---- | :-----------------------------: |
+ | Base FunctionGemma model | 58% |
+ | Mobile Actions Fine-Tune | 85% |
+
+ **On-Device Performance of the Gemma 270M Fine-tuned Use Cases**\
+ We evaluated the fine-tuned use cases on a Samsung S25 Ultra to assess on-device latency and memory footprint.
+
+ - **Context:** 512 prefill tokens and 32 decode tokens.
+ - **Hardware:** S25 Ultra CPU using the LiteRT XNNPACK delegate with 4 threads.
+
+ **Mobile Actions On-Device Performance**
+
+ | Backend | Quantization scheme | Context length | Prefill (tokens/s) | Decode (tokens/s) | Time-to-first-token (s) | Model size (MB) | Peak RSS memory (MB) |
+ | :------ | :-----------------: | :------------: | :----------------: | :---------------: | :---------------------: | :-------------: | :------------------: |
+ | CPU | dynamic_int8 | 1024 | 1718 | 125.9 | 0.3 | 288 | 551 |
+
+ **Tiny Garden On-Device Performance**
+
+ | Backend | Quantization scheme | Context length | Prefill (tokens/s) | Decode (tokens/s) | Time-to-first-token (s) | Model size (MB) | Peak RSS memory (MB) |
+ | :------ | :-----------------: | :------------: | :----------------: | :---------------: | :---------------------: | :-------------: | :------------------: |
+ | CPU | dynamic_int8 | 1024 | 1743 | 125.7 | 0.3 | 288 | 549 |
+
+ ## Ethics and Safety
+
+ Ethics and safety evaluation approach and results.
+
+ ### Evaluation Approach
+
+ Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:
+
+ - **Child Safety**: Evaluation of text-to-text and image-to-text prompts covering child safety policies, including child sexual abuse and exploitation.
+ - **Content Safety**: Evaluation of text-to-text and image-to-text prompts covering safety policies, including harassment, violence and gore, and hate speech.
+ - **Representational Harms**: Evaluation of text-to-text and image-to-text prompts covering safety policies, including bias, stereotyping, and harmful associations or inaccuracies.
+
+ ### Evaluation Results
+
+ For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model's capabilities and behaviors. The model produced minimal policy violations and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was that they included only English-language prompts.
+
+ ## Usage and Limitations
+
+ These models have certain limitations that users should be aware of.
+
+ ### Intended Usage
+
+ This model is not intended for use as a direct dialogue model.\
+ Open large language models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive; its purpose is to provide contextual information about the possible use cases that the model creators considered as part of model training and development.
+
+ - Content Creation and Communication
+     - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
+     - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
+     - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
+ - Research and Education
+     - Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.
+     - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
+     - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.
+
+ ### Limitations
+
+ - Training Data
+     - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
+     - The scope of the training dataset determines the subject areas the model can handle effectively.
+ - Context and Task Complexity
+     - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
+     - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
+ - Language Ambiguity and Nuance
+     - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language.
+ - Factual Accuracy
+     - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
+ - Common Sense
+     - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.
+
+ ### Ethical Considerations and Risks
+
+ The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:
+
+ - Bias and Fairness
+     - LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, including the input data pre-processing described above and the posterior evaluations reported in this card.
+ - Misinformation and Misuse
+     - LLMs can be misused to generate text that is false, misleading, or harmful.
+     - Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
+ - Transparency and Accountability
+     - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
+     - A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem.
+
+ Risks identified and mitigations:
+
+ - Perpetuation of biases: Continuous monitoring (using evaluation metrics and human review) and the exploration of de-biasing techniques are encouraged during model training, fine-tuning, and other use cases.
+ - Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
+ - Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
+ - Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
+
+ ### Benefits
+
+ At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for responsible AI development, compared to similarly sized models.