This model is a fine-tuned version of google/gemma-3-12b-pt. Since it is intended solely for text generation, only the Gemma3ForCausalLM component was extracted from the original architecture. Unlike our previous EEVE models, this model does not feature an expanded tokenizer.

Base Model: google/gemma-3-12b-pt

This model is a 12-billion-parameter, decoder-only language model built on the Gemma3 architecture and fine-tuned by Yanolja NEXT. It is specifically designed to translate structured data (JSON format) while preserving the original data structure.

The model was trained on a multilingual dataset covering the following languages in equal proportion:
- Arabic
- Bulgarian
- Chinese
- Czech
- Danish
- Dutch
- English
- Finnish
- French
- German
- Greek
- Gujarati
- Hebrew
- Hindi
- Hungarian
- Indonesian
- Italian
- Japanese
- Korean
- Persian
- Polish
- Portuguese
- Romanian
- Russian
- Slovak
- Spanish
- Swedish
- Tagalog
- Thai
- Turkish
- Ukrainian
- Vietnamese
While optimized for these languages, it may also perform effectively on other languages supported by the base Gemma3 model.
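To illustrate the structure-preserving translation task described above, here is a minimal sketch of how a caller might wrap a JSON payload in a translation prompt and verify that the model's output keeps the original keys and shape. The prompt wording and the helper functions (`build_translation_prompt`, `same_structure`) are hypothetical illustrations, not part of this model's documented interface.

```python
import json

# Hypothetical prompt format: the model card does not specify one,
# so this instruction wording is an assumption for illustration.
def build_translation_prompt(data: dict, target_language: str) -> str:
    """Wrap a JSON payload in a translation instruction."""
    return (
        f"Translate the values of the following JSON into {target_language}, "
        "keeping the keys and structure unchanged.\n"
        f"{json.dumps(data, ensure_ascii=False)}"
    )

def same_structure(a, b) -> bool:
    """Check that two JSON values share the same keys/shape (leaf values may differ)."""
    if isinstance(a, dict) and isinstance(b, dict):
        return a.keys() == b.keys() and all(same_structure(a[k], b[k]) for k in a)
    if isinstance(a, list) and isinstance(b, list):
        return len(a) == len(b) and all(same_structure(x, y) for x, y in zip(a, b))
    # Both must be scalar leaves at this point.
    return not isinstance(b, (dict, list))

source = {"title": "Hello", "tags": ["greeting"]}
prompt = build_translation_prompt(source, "Korean")

# A faithful model response should parse back to JSON with identical structure:
translated = {"title": "안녕하세요", "tags": ["인사"]}
print(same_structure(source, translated))  # True
```

In practice the prompt would be passed to the model through the standard `transformers` generation API, and the returned text parsed with `json.loads` before a structural check like the one above.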