We Open-Source the Chinese LLaMA-2 Foundation Model and Alpaca-2 Instruction-Following Model

A New Foundation Model

This work builds on Llama 2, a collection of pretrained and fine-tuned large language models, and extends LLaMA's existing vocabulary with an additional 20,000 Chinese tokens. To address the base models' limited ability to handle Chinese, this project open-sources the Chinese LLaMA and Alpaca large models and proposes a method to augment LLaMA with capabilities for understanding and generating Chinese content.
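As a rough illustration of what vocabulary extension involves, the sketch below adds a few placeholder Chinese tokens to a base LLaMA tokenizer and resizes the model's embedding matrices so the new token IDs have trainable rows. The base checkpoint name and the token list are assumptions for illustration only; the released project may use a different procedure, such as merging a separately trained Chinese tokenizer.

```python
# Minimal sketch of vocabulary extension, assuming a gated Llama-2 base checkpoint
# and a hypothetical list of new Chinese tokens (the project reports ~20,000).
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint (requires access)
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(base_model)

# Placeholder tokens standing in for the full extended Chinese vocabulary.
new_chinese_tokens = ["你好", "世界", "模型"]
num_added = tokenizer.add_tokens(new_chinese_tokens)

# Grow the input and output embeddings to match the enlarged vocabulary;
# the new rows are then learned during continued pretraining / fine-tuning.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")
```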

We release the Chinese versions of LLaMA and Alpaca under the Apache License 2.0. We provide access to the models via our API, along with code for training and inference with Hugging Face Transformers and a demo for interacting with the models.
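For example, inference with Hugging Face Transformers might look like the sketch below. The repository id, prompt, and generation settings are placeholders rather than the project's documented usage; substitute the actual released checkpoint name.

```python
# Minimal inference sketch with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chinese-alpaca-2-7b"  # placeholder id, not a confirmed Hub path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # requires the accelerate package
)

prompt = "请用一句话介绍中文大语言模型。"  # "Introduce Chinese LLMs in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```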

Conclusion

We believe that the open-sourcing of these models will foster the development of Chinese-language AI applications and research. Furthermore, we hope that our work will contribute to the advancement of AI as a whole.

