LLaMA is a family of large language models developed by Meta (formerly Facebook). The models were trained entirely on publicly available datasets, and their weights were released for research use. The LLaMA models range from 7B to 65B parameters, trained on up to 1.4 trillion tokens. Despite their relatively modest size, they perform exceptionally well: LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, while LLaMA-65B is competitive with leading models such as Chinchilla-70B and PaLM-540B.