Gemma 2 2B JPN IT (GGUF / Q3_K_M)

This repository contains the GGUF quantized version of the google/gemma-2-2b-jpn-it model.


Model Details

  • Base Model: google/gemma-2-2b-jpn-it
  • Quantization: Q3_K_M (3-bit k-quant, medium)
  • Format: GGUF
  • Architecture: gemma2
  • Quantized with: llama.cpp
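
A minimal way to try the quantized model locally is with llama.cpp's `llama-cli`. The GGUF filename below is an assumption based on the repository name; check the repository's file list for the actual name.

```shell
# Download the quantized weights from the Hub
# (filename assumed to follow the <model>-<quant>.gguf convention).
huggingface-cli download wamo2351/gemma-2-2b-jpn-it-Q3_K_M \
  gemma-2-2b-jpn-it-Q3_K_M.gguf --local-dir .

# Start an interactive chat session; -cnv applies the model's
# built-in (Gemma 2) conversation template from the GGUF metadata.
llama-cli -m gemma-2-2b-jpn-it-Q3_K_M.gguf -cnv
```

Q3_K_M trades some output quality for a much smaller memory footprint, so it is suited to CPU-only or low-VRAM machines.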

License

This model is based on Gemma 2 by Google and is subject to the Gemma Terms of Use. For more details, please refer to the original model card and license.


Disclaimer

The uploader is not responsible for any damages caused by the use of this model. This model is provided "as is" without warranty of any kind.

