ChatAnthropic
This notebook provides a quick overview for getting started with Anthropic chat models. For detailed documentation of all ChatAnthropic features and configurations, head to the API reference.
Anthropic has several chat models. You can find information about the latest models, including their costs, context windows, and supported input types, in the Anthropic docs.
Note that certain Anthropic models can also be accessed via AWS Bedrock and Google VertexAI. See the ChatBedrock and ChatVertexAI integrations to use Anthropic models via these services.
Overview
Integration details
| Class | Package | Local | Serializable | JS support |
| --- | --- | --- | --- | --- |
| ChatAnthropic | langchain-anthropic | ❌ | beta | ✅ |
Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
Setup
To access Anthropic models you'll need to create an Anthropic account, get an API key, and install the langchain-anthropic
integration package.
Credentials
Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable:
import getpass
import os
if "ANTHROPIC_API_KEY" not in os.environ:
    os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter your Anthropic API key: ")
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Installation
The LangChain Anthropic integration lives in the langchain-anthropic
package:
%pip install -qU langchain-anthropic
Instantiation
Now we can instantiate our model object and generate chat completions:
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(
model="claude-3-5-sonnet-20240620",
temperature=0,
max_tokens=1024,
timeout=None,
max_retries=2,
# other params...
)
Invocation
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", response_metadata={'id': 'msg_018Nnu76krRPq8HvgKLW4F8T', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 11}}, id='run-57e9295f-db8a-48dc-9619-babd2bedd891-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40})
print(ai_msg.content)
J'adore la programmation.
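Token-level streaming is also supported (see the features table above) via the standard .stream() interface. A minimal sketch; note that, depending on the langchain-anthropic version, streamed chunk contents may be plain strings or lists of content blocks:

```python
# Stream the reply token by token instead of waiting for the full message.
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)
```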
Chaining
We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)
chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
AIMessage(content="Here's the German translation:\n\nIch liebe Programmieren.", response_metadata={'id': 'msg_01GhkRtQZUkA5Ge9hqmD8HGY', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 23, 'output_tokens': 18}}, id='run-da5906b4-b200-4e08-b81a-64d4453643b6-0', usage_metadata={'input_tokens': 23, 'output_tokens': 18, 'total_tokens': 41})
Content blocks
One key difference between Anthropic models and most others is that the content of a single Anthropic AI message can be either a single string or a list of content blocks. For example, when an Anthropic model invokes a tool, the tool invocation is part of the message content (as well as being exposed in the standardized AIMessage.tool_calls):
from pydantic import BaseModel, Field
class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
llm_with_tools = llm.bind_tools([GetWeather])
ai_msg = llm_with_tools.invoke("Which city is hotter today: LA or NY?")
ai_msg.content
[{'text': "To answer this question, we'll need to check the current weather in both Los Angeles (LA) and New York (NY). I'll use the GetWeather function to retrieve this information for both cities.",
'type': 'text'},
{'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A',
'input': {'location': 'Los Angeles, CA'},
'name': 'GetWeather',
'type': 'tool_use'},
{'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP',
'input': {'location': 'New York, NY'},
'name': 'GetWeather',
'type': 'tool_use'}]
ai_msg.tool_calls
[{'name': 'GetWeather',
'args': {'location': 'Los Angeles, CA'},
'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A'},
{'name': 'GetWeather',
'args': {'location': 'New York, NY'},
'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP'}]
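To complete the tool-calling loop, you execute the requested tools yourself and send the results back as ToolMessages so the model can compose a final answer. A minimal sketch; the hard-coded weather strings below are illustrative stand-ins for a real weather API:

```python
from langchain_core.messages import HumanMessage, ToolMessage

# Stand-in tool results; in practice you would call a real weather service.
fake_weather = {"Los Angeles, CA": "85°F and sunny", "New York, NY": "72°F and cloudy"}

history = [HumanMessage("Which city is hotter today: LA or NY?"), ai_msg]
for tool_call in ai_msg.tool_calls:
    history.append(
        ToolMessage(
            content=fake_weather[tool_call["args"]["location"]],
            tool_call_id=tool_call["id"],
        )
    )

final_response = llm_with_tools.invoke(history)
print(final_response.content)
```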
Citations
Anthropic supports a citations feature that lets Claude attach context to its answers based on source documents supplied by the user. When document content blocks with "citations": {"enabled": True}
are included in a query, Claude may generate citations in its response.
Simple example
In this example we pass a plain text document. In the background, Claude automatically chunks the input text into sentences, which are used when generating citations.
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-5-haiku-latest")
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "The grass is green. The sky is blue.",
                },
                "title": "My Document",
                "context": "This is a trustworthy document.",
                "citations": {"enabled": True},
            },
            {"type": "text", "text": "What color is the grass and sky?"},
        ],
    }
]
response = llm.invoke(messages)
response.content
[{'text': 'Based on the document, ', 'type': 'text'},
{'text': 'the grass is green',
'type': 'text',
'citations': [{'type': 'char_location',
'cited_text': 'The grass is green. ',
'document_index': 0,
'document_title': 'My Document',
'start_char_index': 0,
'end_char_index': 20}]},
{'text': ', and ', 'type': 'text'},
{'text': 'the sky is blue',
'type': 'text',
'citations': [{'type': 'char_location',
'cited_text': 'The sky is blue.',
'document_index': 0,
'document_title': 'My Document',
'start_char_index': 20,
'end_char_index': 36}]},
{'text': '.', 'type': 'text'}]
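Each cited claim arrives as its own text block with a citations list, so rendering the answer with inline source quotes is a matter of walking the blocks. A small helper sketch (render_with_citations is hypothetical, not part of langchain-anthropic):

```python
# Append each block's cited source text in brackets after the claim it supports.
def render_with_citations(content_blocks: list[dict]) -> str:
    parts = []
    for block in content_blocks:
        parts.append(block["text"])
        for citation in block.get("citations") or []:
            parts.append(f' [source: "{citation["cited_text"].strip()}"]')
    return "".join(parts)

print(render_with_citations(response.content))
```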
Using with text splitters
Anthropic also lets you specify your own splits using custom document types. LangChain text splitters can be used to generate meaningful splits for this purpose. In the example below, we split the LangChain README (a Markdown document) and pass it to Claude as context:
import requests
from langchain_anthropic import ChatAnthropic
from langchain_text_splitters import MarkdownTextSplitter
def format_to_anthropic_documents(documents: list[str]):
    return {
        "type": "document",
        "source": {
            "type": "content",
            "content": [{"type": "text", "text": document} for document in documents],
        },
        "citations": {"enabled": True},
    }
# Pull readme
get_response = requests.get(
"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text
# Split into chunks
splitter = MarkdownTextSplitter(
    chunk_overlap=0,
    chunk_size=50,
)
documents = splitter.split_text(readme)
# Construct message
message = {
    "role": "user",
    "content": [
        format_to_anthropic_documents(documents),
        {"type": "text", "text": "Give me a link to LangChain's tutorials."},
    ],
}
# Query LLM
llm = ChatAnthropic(model="claude-3-5-haiku-latest")
response = llm.invoke([message])
response.content
[{'text': "You can find LangChain's tutorials at https://python.langchain.com/docs/tutorials/\n\nThe tutorials section is recommended for those looking to build something specific or who prefer a hands-on learning approach. It's considered the best place to get started with LangChain.",
'type': 'text',
'citations': [{'type': 'content_block_location',
'cited_text': "[Tutorials](https://python.langchain.com/docs/tutorials/):If you're looking to build something specific orare more of a hands-on learner, check out ourtutorials. This is the best place to get started.",
'document_index': 0,
'document_title': None,
'start_block_index': 243,
'end_block_index': 248}]}]
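Because the splits were supplied by us, the content_block_location indices map directly back to the documents list produced by the splitter. For example (the block indices come from the response above and will vary from run to run):

```python
# Recover the chunks Claude cited from the original list of splits.
citation = response.content[0]["citations"][0]
cited_chunks = documents[citation["start_block_index"] : citation["end_block_index"]]
print(cited_chunks)
```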
API reference
For detailed documentation of all ChatAnthropic features and configurations, head to the API reference: https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html
Related
- Chat model conceptual guide
- Chat model how-to guides