

Meta Llama
info

For detailed information about using models with APIpie, check out our Models Overview and Completions Guide.

Description

The Llama Series represents Meta's family of state-of-the-art large language models. These open-source models, developed by Meta AI, leverage cutting-edge technology to deliver exceptional performance in natural language processing, instruction-following tasks, and extended-context interactions. The models are available through various providers integrated with APIpie's routing system.

Key Features

  • Extended Token Capacity: Models support context lengths from 4K to 131K tokens for handling various text processing needs.
  • Multi-Provider Availability: Accessible across platforms like OpenRouter, EdenAI, Together, Deepinfra, Monster, and Amazon Bedrock.
  • Diverse Applications: Optimized for chat, instruction-following, code generation, vision, and moderation tasks.
  • Scalability: Models ranging from efficient 1B parameter configurations to powerful 405B parameter versions.

Model List in the Llama Series

The model list updates dynamically; see the Models Route for the current list of available models.

info

For information about model performance and benchmarks, see the Llama Technical Report.
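Because the table below is a snapshot, you can also pull the current list programmatically from the Models Route. The call below is a minimal sketch assuming the route is served at /v1/models and accepts the same API key as the completions endpoint; see the Models Overview for the exact path and any filtering parameters.

curl -L -X GET 'https://apipie.ai/v1/models' \
-H 'Accept: application/json' \
-H 'Authorization: Bearer <YOUR_API_KEY>'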

Model Name | Max Tokens | Response Tokens | Provider | Type
nous-hermes-llama2-13b | 4,096 | 4,096 | openrouter | llm
llama-2-13b-chat | 4,096 | 4,096 | openrouter | llm
llama2-13b-chat-v1 | 4,096 | 3,996 | bedrock | llm
llama2-70b-chat-v1 | 4,096 | 3,996 | bedrock | llm
Llama-2-70b-chat-hf | 4,096 | 4,096 | deepinfra | llm
CodeLlama-70b-Instruct-hf | 4,096 | 4,096 | deepinfra | code
CodeLlama-34b-Instruct-hf | 16,384 | 16,384 | deepinfra | code
Phind-CodeLlama-34B-v2 | 16,384 | 16,384 | deepinfra | code
Llama-2-7b-chat-hf | 4,096 | 4,096 | deepinfra | llm
Llama-2-13b-chat-hf | 4,096 | 4,096 | deepinfra | llm
Meta-Llama-3-70B-Instruct | 8,192 | 8,192 | deepinfra | llm
Meta-Llama-3-8B-Instruct | 8,192 | 8,192 | deepinfra | llm
llama-3-lumimaid-8b | 24,576 | 2,048 | openrouter | llm
llama-guard-2-8b | 8,192 | 8,192 | openrouter | llm
llama-3-lumimaid-70b | 8,192 | 2,048 | openrouter | llm
llama3-8b-instruct-v1 | 8,192 | 8,092 | bedrock | llm
llama3-70b-instruct-v1 | 8,192 | 8,092 | bedrock | llm
Meta-Llama-3-8B-Instruct | 8,192 | 8,192 | monster | llm
llama-3-8b-instruct | 8,192 | 8,192 | openrouter | llm
llama-3-70b-instruct | 8,192 | 8,192 | openrouter | llm
llama-3.1-sonar-large-128k-online | 127,072 | 127,072 | openrouter | llm
llama-3.1-sonar-large-128k-chat | 131,072 | 131,072 | openrouter | llm
llama-3.1-sonar-small-128k-online | 127,072 | 127,072 | openrouter | llm
llama-3.1-sonar-small-128k-chat | 131,072 | 131,072 | openrouter | llm
llama-3.1-sonar-huge-128k-online | 127,072 | 127,072 | openrouter | llm
llama-3.1-lumimaid-8b | 32,768 | 2,048 | openrouter | llm
meta-llama-3.1-8b-instruct | 8,192 | 8,192 | openrouter | llm
llama-3.2-3b-instruct | 131,000 | 131,000 | openrouter | llm
llama-3.2-1b-instruct | 131,072 | 131,072 | openrouter | llm
llama-3.2-90b-vision-instruct | 131,072 | 131,072 | openrouter | vision
llama-3.2-11b-vision-instruct | 131,072 | 4,096 | openrouter | vision
llama-3.1-nemotron-70b-instruct | 131,000 | 131,000 | openrouter | llm
llama-3.1-lumimaid-70b | 16,384 | 2,048 | openrouter | llm
llama-3.3-70b-instruct | 131,072 | 131,072 | openrouter | llm
llama3-1-405b-instruct-v1:0 | - | - | edenai | llm
llama3-1-70b-instruct-v1:0 | - | - | edenai | llm
llama3-1-8b-instruct-v1:0 | - | - | edenai | llm
llama3-70b-instruct-v1:0 | - | - | edenai | llm
llama3-8b-instruct-v1:0 | - | - | edenai | llm
eva-llama-3.33-70b | 16,384 | 4,096 | openrouter | llm
Llama-3.1-Nemotron-70B-Instruct-HF | 32,768 | 32,768 | together | llm
Llama-3.3-70B-Instruct-Turbo | 131,072 | 131,072 | together | llm
Llama-3.2-11B-Vision-Instruct-Turbo | 131,072 | 131,072 | together | llm
Llama-3-8b-chat-hf | 8,192 | 8,192 | together | llm
Llama-Guard-3-11B-Vision-Turbo | 131,072 | 131,072 | together | moderation
Llama-3-70b-chat-hf | 8,192 | 8,192 | together | llm
Llama-3.2-3B-Instruct-Turbo | 131,072 | 131,072 | together | llm
Meta-Llama-3.1-405B-Instruct-Turbo | 130,815 | 130,815 | together | llm
scb10x-llama3-typhoon-v1-5x-4f316 | 8,192 | 8,192 | together | llm
Meta-Llama-3.1-8B-Instruct-Turbo-128K | 131,072 | 131,072 | together | llm
Meta-Llama-3.1-8B-Instruct-Turbo | 131,072 | 131,072 | together | llm
Llama-2-13b-chat-hf | 4,096 | 4,096 | together | llm
Llama-3.2-90B-Vision-Instruct-Turbo | 131,072 | 131,072 | together | llm
Meta-Llama-3.1-70B-Instruct-Turbo | 131,072 | 131,072 | together | llm
Meta-Llama-3-8B-Instruct-Turbo | 8,192 | 8,192 | together | llm
Meta-Llama-3-8B-Instruct-Lite | 8,192 | 8,192 | together | llm
scb10x-llama3-typhoon-v1-5-8b-instruct | 8,192 | 8,192 | together | llm
Llama-2-70b-hf | 4,096 | 4,096 | together | llm
LlamaGuard-2-8b | 8,192 | 8,192 | together | moderation
Llama-Guard-7b | 4,096 | 4,096 | together | moderation
Meta-Llama-3-70B-Instruct-Turbo | 8,192 | 8,192 | together | llm
Meta-Llama-3-70B-Instruct-Lite | 8,192 | 8,192 | together | llm
Llama-Rank-V1 | 8,192 | 8,192 | together | llm
Llama-2-7b-chat-hf | 4,096 | 4,096 | together | llm
Meta-Llama-Guard-3-8B | 8,192 | 8,192 | together | moderation
llama3-1-8b-instruct-v1 | 128,000 | 128,000 | bedrock | llm
llama3-1-70b-instruct-v1 | 128,000 | 128,000 | bedrock | llm
llama3-2-11b-instruct-v1 | 128,000 | 128,000 | bedrock | llm
llama3-2-90b-instruct-v1 | 128,000 | 128,000 | bedrock | llm
llama3-2-1b-instruct-v1 | 128,000 | 128,000 | bedrock | llm
llama3-2-3b-instruct-v1 | 8,192 | 8,092 | bedrock | llm
llama3-3-70b-instruct-v1 | 8,192 | 8,092 | bedrock | llm

Example API Call

Below is an example of how to use the Chat Completions API to interact with a model from the Llama Series, such as llama-3-70b-instruct.

curl -L -X POST 'https://apipie.ai/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Authorization: Bearer <YOUR_API_KEY>' \
--data-raw '{
  "provider": "openrouter",
  "model": "llama-3-70b-instruct",
  "max_tokens": 150,
  "messages": [
    {
      "role": "user",
      "content": "Can you explain how photosynthesis works?"
    }
  ]
}'

Response Example

The response from a Llama Series model might look like this:

{
  "id": "chatcmpl-12345example12345",
  "object": "chat.completion",
  "created": 1729535643,
  "provider": "openrouter",
  "model": "llama-3-70b-instruct",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Photosynthesis is the process by which green plants, algae, and some bacteria convert light energy into chemical energy. Here's how it works:\n\n1. **Light Absorption**: Plants capture light energy using a pigment called chlorophyll, which is found in chloroplasts.\n\n2. **Water and Carbon Dioxide**: They absorb water through their roots and carbon dioxide from the air.\n\n3. **Glucose Production**: The light energy is used to convert water and carbon dioxide into glucose (a sugar) and oxygen. The equation is:\n \n 6CO2 + 6H2O + light energy → C6H12O6 + 6O2\n\nThis process provides energy for the plant and releases oxygen into the atmosphere."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 125,
    "total_tokens": 140,
    "prompt_characters": 45,
    "response_characters": 520,
    "cost": 0.002250,
    "latency_ms": 3100
  },
  "system_fingerprint": "fp_123abc456def"
}
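The generated text lives in choices[0].message.content. If you have jq installed, a minimal sketch for extracting just that field from the call above looks like this:

curl -sL -X POST 'https://apipie.ai/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <YOUR_API_KEY>' \
--data-raw '{"provider": "openrouter", "model": "llama-3-70b-instruct", "max_tokens": 150, "messages": [{"role": "user", "content": "Can you explain how photosynthesis works?"}]}' \
| jq -r '.choices[0].message.content'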

API Highlights

  • Provider: Specify the provider or leave blank for automatic selection.
  • Model: Use any model from the Llama Series, such as llama-3-70b-instruct or others suited to your task. See Models Guide.
  • Max Tokens: Set the maximum response token count (e.g., 150 in this example).
  • Messages: Format your request with a sequence of messages, including user input and system instructions (see message formatting and the system-message sketch below).

This example demonstrates how to query Llama Series models for conversational or instructional tasks.
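To steer tone or behavior, you can prepend a system message to the messages array. The request below is a minimal sketch; the system prompt wording and the question are illustrative only.

curl -L -X POST 'https://apipie.ai/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Authorization: Bearer <YOUR_API_KEY>' \
--data-raw '{
  "provider": "openrouter",
  "model": "llama-3-70b-instruct",
  "max_tokens": 150,
  "messages": [
    {
      "role": "system",
      "content": "You are a concise tutor. Answer in three sentences or fewer."
    },
    {
      "role": "user",
      "content": "Why is the sky blue?"
    }
  ]
}'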


Applications and Integrations

  • Conversational AI: Powering chatbots, virtual assistants, and other dialogue-based systems. Try it with LibreChat or OpenWebUI.
  • Code Generation: Using CodeLlama variants for programming tasks, code completion, and technical documentation.
  • Content Moderation: Leveraging LlamaGuard models for content filtering and safety checks.
  • Vision Tasks: Using vision-enabled models for image understanding and multimodal applications (see the vision sketch after this list).
  • Extended Context Tasks: Processing long documents with models supporting up to 131K tokens. Learn more in our Models Guide.
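For the vision models listed above (for example llama-3.2-11b-vision-instruct), images are typically passed as part of the message content. The sketch below assumes APIpie accepts the OpenAI-style multimodal content array used by providers such as OpenRouter; the image URL is a placeholder, and the exact schema may vary by provider.

curl -L -X POST 'https://apipie.ai/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Authorization: Bearer <YOUR_API_KEY>' \
--data-raw '{
  "provider": "openrouter",
  "model": "llama-3.2-11b-vision-instruct",
  "max_tokens": 300,
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "Describe what is in this image." },
        { "type": "image_url", "image_url": { "url": "https://example.com/sample.jpg" } }
      ]
    }
  ]
}'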

Ethical Considerations

The Llama models are powerful tools that should be used responsibly. Users should implement appropriate safeguards and consider potential biases in model outputs. For guidance on responsible AI usage, see Meta's AI Responsibility Guidelines.


Licensing

The Llama Series is available under Meta's community license agreement, which allows for both research and commercial use under specific terms. For detailed licensing information, consult the official Llama repository, model cards, and respective hosting providers.

tip

Try out the Llama models in APIpie's various supported integrations.