
ChatContextual

This will help you get started with Contextual AI's Grounded Language Model chat models.

To learn more about Contextual AI, please visit our documentation.

This integration requires the contextual-client Python SDK. Learn more about it here.
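If you ever need the SDK on its own, it can be installed directly from PyPI (a minimal install command, assuming the distribution name matches the SDK name above):

%pip install -qU contextual-client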

Overview

This integration invokes Contextual AI's Grounded Language Model.

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| --- | --- | --- | --- | --- | --- | --- |
| ChatContextual | langchain-contextual | ❌ | beta | ❌ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |

Setup

To access Contextual models you'll need to create a Contextual AI account, get an API key, and install the langchain-contextual integration package.

Credentials

Head to app.contextual.ai to sign up for Contextual AI and generate an API key. Once you've done this, set the CONTEXTUAL_AI_API_KEY environment variable:

import getpass
import os

if not os.getenv("CONTEXTUAL_AI_API_KEY"):
    os.environ["CONTEXTUAL_AI_API_KEY"] = getpass.getpass(
        "Enter your Contextual API key: "
    )

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain Contextual integration lives in the langchain-contextual package:

%pip install -qU langchain-contextual

Instantiation

Now we can instantiate our model object and generate chat completions.

The chat client can be instantiated with the following additional settings:

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| temperature | Optional[float] | The sampling temperature, which affects the randomness in the response. Note that higher temperature values can reduce groundedness. | 0 |
| top_p | Optional[float] | A parameter for nucleus sampling, an alternative to temperature which also affects the randomness of the response. Note that higher top_p values can reduce groundedness. | 0.9 |
| max_new_tokens | Optional[int] | The maximum number of tokens that the model can generate in the response. Minimum is 1 and maximum is 2048. | 1024 |

from langchain_contextual import ChatContextual

llm = ChatContextual(
    model="v1",  # defaults to `v1`
    api_key="",
    temperature=0,  # defaults to 0
    top_p=0.9,  # defaults to 0.9
    max_new_tokens=1024,  # defaults to 1024
)

Invocation

The Contextual Grounded Language Model accepts additional kwargs when calling the ChatContextual.invoke method.

These additional inputs are:

| Parameter | Type | Description |
| --- | --- | --- |
| knowledge | list[str] | Required: a list of strings of knowledge sources the grounded language model can use when generating a response. |
| system_prompt | Optional[str] | Optional: instructions the model should follow when generating responses. Note that we do not guarantee that the model follows these instructions exactly. |
| avoid_commentary | Optional[bool] | Optional (defaults to False): flag to indicate whether the model should avoid providing additional commentary in responses. Commentary is conversational in nature and does not contain verifiable claims; therefore, commentary is not strictly grounded in available context. However, commentary may provide useful context which improves the helpfulness of responses. |

# include a system prompt (optional)
system_prompt = "You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability."

# provide your own knowledge from your knowledge base here as an array of strings
knowledge = [
    "There are 2 types of dogs in the world: good dogs and best dogs.",
    "There are 2 types of cats in the world: good cats and best cats.",
]

# create your message
messages = [
    ("human", "What type of cats are there in the world and what are the types?"),
]

# invoke the GLM, providing the knowledge strings and the optional system prompt;
# to turn off the GLM's commentary, pass True to the `avoid_commentary` argument
ai_msg = llm.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)

print(ai_msg.content)

Chaining

We can chain the Contextual Model with output parsers.

from langchain_core.output_parsers import StrOutputParser

chain = llm | StrOutputParser()

chain.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)
API Reference: StrOutputParser
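If the model sits deeper in a chain, for instance behind a prompt template, one option is to bind the GLM-specific kwargs to the model with the standard Runnable .bind() method so they are supplied on every call. The sketch below assumes this pattern works here as it does for other LangChain chat models; the {question} template variable and the prompt wording are illustrative, not part of the integration:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# bind the GLM-specific kwargs so every invocation passes them automatically
grounded_llm = llm.bind(knowledge=knowledge, avoid_commentary=True)

# `question` is an illustrative template variable, not part of the integration
prompt = ChatPromptTemplate.from_messages([("human", "{question}")])

chain = prompt | grounded_llm | StrOutputParser()
chain.invoke({"question": "What types of cats are there in the world?"})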

API reference

For detailed documentation of all ChatContextual features and configurations, head to the GitHub page: https://github.com/ContextualAI/langchain-contextual

