# Perplexity AI (pplx-api)
## API Key

```python
import os

# env variable
os.environ['PERPLEXITYAI_API_KEY']
```
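If you'd rather not rely on ambient environment variables, `litellm.completion()` also accepts an `api_key` argument, so the key can be passed per call. A minimal sketch (the key value is a placeholder):

```python
from litellm import completion

response = completion(
    model="perplexity/llama-3.1-sonar-small-128k-online",
    messages=[{"role": "user", "content": "Hello"}],
    api_key="pplx-...",  # placeholder: your Perplexity API key
)
```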
## Sample Usage

```python
from litellm import completion
import os

os.environ['PERPLEXITYAI_API_KEY'] = ""

messages = [{"role": "user", "content": "Hey, how's it going?"}]

response = completion(
    model="perplexity/llama-3.1-sonar-small-128k-online",
    messages=messages,
)
print(response)
```
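The return value follows the OpenAI chat-completions shape, so the generated text can be read from the first choice:

```python
# Works on the `response` object from the call above.
print(response.choices[0].message.content)
```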
## Sample Usage - Streaming

```python
from litellm import completion
import os

os.environ['PERPLEXITYAI_API_KEY'] = ""

messages = [{"role": "user", "content": "Hey, how's it going?"}]

response = completion(
    model="perplexity/llama-3.1-sonar-small-128k-online",
    messages=messages,
    stream=True,
)
for chunk in response:
    print(chunk)
```
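If you want the assembled answer rather than raw chunks, concatenate the streamed deltas. A small sketch (a fresh streaming call, since the loop above has already consumed the generator):

```python
from litellm import completion

response = completion(
    model="perplexity/llama-3.1-sonar-small-128k-online",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    stream=True,
)

full_text = ""
for chunk in response:
    # Each chunk is an OpenAI-style delta; content may be None (e.g. on the final chunk).
    delta = chunk.choices[0].delta.content
    if delta:
        full_text += delta
print(full_text)
```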
## Supported Models

All models listed in the [Perplexity model cards](https://docs.perplexity.ai/docs/model-cards) are supported. Just set `model=perplexity/<model-name>` (see the sketch after the table below).
| Model Name | Function Call |
|---|---|
| llama-3.1-sonar-small-128k-online | `completion(model="perplexity/llama-3.1-sonar-small-128k-online", messages)` |
| llama-3.1-sonar-large-128k-online | `completion(model="perplexity/llama-3.1-sonar-large-128k-online", messages)` |
| llama-3.1-sonar-huge-128k-online | `completion(model="perplexity/llama-3.1-sonar-huge-128k-online", messages)` |
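Because any model card name just needs the `perplexity/` prefix, the model string can also be built programmatically. A sketch (the card name below is one of the examples from the table):

```python
from litellm import completion

model_card = "llama-3.1-sonar-large-128k-online"  # any name from Perplexity's model cards
response = completion(
    model=f"perplexity/{model_card}",
    messages=[{"role": "user", "content": "Hello"}],
)
```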
## Return citations

Perplexity returns citations for the generated answer. Prior to November 2024 this required setting `return_citations=True`; citations are now returned by default, and the parameter has no effect. See the Perplexity docs.

If Perplexity returns citations, LiteLLM passes them straight through.

> **info**
> For passing more provider-specific parameters, go here.
**SDK**

```python
from litellm import completion
import os

os.environ['PERPLEXITYAI_API_KEY'] = ""

messages = [{"role": "user", "content": "Who won the world cup in 2022?"}]

response = completion(
    model="perplexity/llama-3.1-sonar-small-128k-online",
    messages=messages,
    return_citations=True,  # no longer required; citations are on by default
)
print(response)
```
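To read the citations back from the `response` object above: Perplexity's raw API returns them as a top-level `citations` array, and LiteLLM passes provider-specific fields through. The exact attribute can vary by LiteLLM version, so this sketch probes defensively rather than assuming one location:

```python
# `citations` as a direct attribute is an assumption based on Perplexity's
# raw response shape; fall back to any extra fields on the response model.
citations = getattr(response, "citations", None)
if citations is None:
    citations = (getattr(response, "model_extra", None) or {}).get("citations")
print(citations)
```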
**PROXY**

- Add Perplexity to your `config.yaml`

```yaml
model_list:
  - model_name: "perplexity-model"
    litellm_params:
      model: "perplexity/llama-3.1-sonar-small-128k-online"
      api_key: os.environ/PERPLEXITYAI_API_KEY
```
- Start proxy

```bash
litellm --config /path/to/config.yaml
```
- Test it!

```bash
curl -L -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
  "model": "perplexity-model",
  "messages": [
    {
      "role": "user",
      "content": "Who won the world cup in 2022?"
    }
  ],
  "return_citations": true
}'
```
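Since the proxy exposes an OpenAI-compatible endpoint, the same request can be made with the `openai` Python SDK. A sketch mirroring the curl call above (base URL, model alias, and key are the same placeholder values):

```python
import openai

client = openai.OpenAI(
    api_key="sk-1234",              # proxy key from the curl example
    base_url="http://0.0.0.0:4000"  # address the proxy is serving on
)

response = client.chat.completions.create(
    model="perplexity-model",  # model_name alias from config.yaml
    messages=[{"role": "user", "content": "Who won the world cup in 2022?"}],
)
print(response.choices[0].message.content)
```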