
Contextual AI

Contextual AI is a platform that offers state-of-the-art Retrieval-Augmented Generation (RAG) technology for enterprise applications. Our platform and models help innovative teams build production-ready AI applications that can process millions of pages of documents with exceptional accuracy.

Grounded Language Model (GLM)

The Grounded Language Model (GLM) is specifically engineered to minimize hallucinations in RAG and agentic applications. The GLM achieves:

  • State-of-the-art performance on the FACTS benchmark
  • Responses strictly grounded in provided knowledge sources

Using Contextual AI with LangChain


This integration allows you to easily incorporate Contextual AI's GLM into your LangChain workflows. Whether you're building applications for regulated industries or security-conscious environments, Contextual AI provides the grounded and reliable responses your use cases demand.

Get started with a free trial today and experience the most grounded language model for enterprise AI applications.

import getpass
import os

from langchain_contextual import ChatContextual

# Set credentials
if not os.getenv("CONTEXTUAL_AI_API_KEY"):
    os.environ["CONTEXTUAL_AI_API_KEY"] = getpass.getpass(
        "Enter your Contextual API key: "
    )

# initialize the Contextual llm
llm = ChatContextual(
    model="v1",
    api_key="",  # optional if CONTEXTUAL_AI_API_KEY is set in the environment
)

# include a system prompt (optional)
system_prompt = "You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability."

# provide your own knowledge from your knowledge base here as a list of strings
knowledge = [
    "There are 2 types of dogs in the world: good dogs and best dogs.",
    "There are 2 types of cats in the world: good cats and best cats.",
]

# create your message
messages = [
    ("human", "What type of cats are there in the world and what are the types?"),
]

# invoke the GLM, providing the knowledge strings and the optional system prompt;
# pass `avoid_commentary=True` to turn off the GLM's commentary
ai_msg = llm.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)

print(ai_msg.content)
According to the information available, there are two types of cats in the world:

1. Good cats
2. Best cats
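Because the `knowledge` argument is just a list of plain strings, it can be built from the output of any retrieval pipeline. The sketch below shows the idea using only the standard library and no API call; the `Document` class here is a hypothetical stand-in for LangChain's, which similarly exposes a `page_content` attribute:

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    """Minimal stand-in for a retrieved document (LangChain's exposes `page_content`)."""

    page_content: str
    metadata: dict = field(default_factory=dict)


# Pretend these came back from a retriever.
docs = [
    Document("There are 2 types of dogs in the world: good dogs and best dogs."),
    Document("There are 2 types of cats in the world: good cats and best cats."),
]

# Flatten the retrieved documents into the list of strings the GLM expects.
knowledge = [doc.page_content for doc in docs]

print(knowledge)
```

You would then pass this `knowledge` list to `llm.invoke(...)` exactly as in the example above.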
