Create a Custom Slackbot LLM Agent with NVIDIA NIM and LangChain

In the dynamic world of modern business, where communication and efficient workflows are crucial for success, AI-powered solutions have become a competitive advantage.  

AI agents, built on cutting-edge large language models (LLMs) and powered by NVIDIA NIM, provide a seamless way to enhance productivity and information flow. NIM, part of NVIDIA AI Enterprise, is a suite of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing across clouds, data centers, and workstations.

This post is part of the NVIDIA Chat Labs series, which shares insights and best practices developed from the internal generative AI projects that we create to help others navigate AI adoption.

By harnessing the power of NIM microservices, businesses can leverage models from the API Catalog and quickly build intelligent Slackbots that go far beyond simple automation. These Slackbots become valuable virtual assistants, capable of handling a wide array of tasks, from answering basic queries to solving complex problems and even generating creative content. This not only saves time and resources but also fosters a more collaborative and productive work environment.

In this post, I guide you step-by-step through the process of creating a custom Slackbot agent tailored to specific use cases using NVIDIA NIM and LangChain. 

Slackbot capabilities and architecture

The initial implementation of the Slackbot supports interactions through Slack channels, threads, and direct messages with the chatbot. The main model supporting this interaction is llama-3_1-405b-instruct, which can access external tools for enhanced responses. These tools involve calling external endpoints and preprocessing their responses.

Key features of Slackbot include the following:

  • Multi-channel support: The Slackbot can be invited to any channel and answer queries relevant to the context of that channel.
  • Interaction through tagging: To start a conversation, users tag the bot and ask a comprehensive question. The bot replies in a thread, tagging the user in the same channel.
  • Customizable responses: The Slackbot may follow up with a clarifying question or use external tools to generate responses. It also supports private messages.

For the architecture (Figure 1), Amazon EC2 is used as the primary host for the project with Amazon Aurora PostgreSQL as the database for tracking human-AI Slack interactions. Other cloud providers like Microsoft Azure or Google Cloud can be used as alternatives. 

For memory management, DynamoDB is combined with LangChain’s DynamoDBChatMessageHistory to keep track of the previous user interactions.

Step-by-step guide to creating a Slackbot agent 

Here are the steps to deploy the Slackbot on AWS:

  • Install required libraries
  • Define the main agent
  • Set up DynamoDB for memory management
  • Configure conversational memory
  • Define keyword-based tool usage
  • Finalize the agent
  • Save interactions in Amazon Aurora PostgreSQL

Prerequisites

Before you begin building the Slackbot, make sure that you have the required libraries and resources in place.

The required libraries for the installation include the following:

openai
boto3
slack_bolt
slack-sdk
langchain
python-dotenv
langchain-community
langchain-nvidia-ai-endpoints
langchainhub

You also need the following resources:

  • An API key from the NVIDIA API Catalog
  • AWS account (for Amazon EC2, Amazon Aurora, Amazon DynamoDB, Amazon ElastiCache, and so on) or similar cloud services
  • Jupyter Lab notebook for initial testing
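
Credentials are loaded from a .env file later in this post. The exact variable names below are assumptions based on the identifiers used in the code snippets (SLACK_BOT_TOKEN, DB_NAME, and so on); adjust them to your setup. A minimal .env file might look like the following:

```shell
# Slack credentials (SLACK_APP_TOKEN is only needed for Socket Mode)
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_APP_TOKEN=xapp-your-app-token

# NVIDIA API Catalog key (read by langchain-nvidia-ai-endpoints)
NVIDIA_API_KEY=nvapi-your-key

# Amazon Aurora PostgreSQL connection details
DB_NAME=slackbot
DB_USER=your-user
DB_PASSWORD=your-password
DB_HOST=your-aurora-endpoint
DB_PORT=5432
```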

Install required libraries

Before setting up the agent, ensure that the necessary libraries are installed, such as LangChain, LangChain NVIDIA AI endpoints, Slack SDKs, and so on:

pip install openai boto3 slack_bolt slack-sdk langchain python-dotenv langchain-community langchain-nvidia-ai-endpoints langchainhub

Define the main agent

Next, define the primary Slack features for user interaction and integrate the NIM model as the main agent:

import os

import boto3
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
from dotenv import load_dotenv
from langchain.agents import AgentExecutor, create_react_agent
from langchain.memory import ConversationBufferWindowMemory
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_core.prompts import PromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA

def agent(user_id, thread_id, channel_name, message_text):
    llm = ChatNVIDIA(
        model="meta/llama-3.1-405b-instruct",
        temperature=0.1,
        top_p=1,
        max_tokens=2000,
    )

This uses Meta’s Llama 3.1 405B Instruct model, which provides support for agent tasks. In the same function, declare the agent tools, which are external resources that AI agents use to complete tasks beyond their inherent abilities. The tools can have any scope, as long as they have a working API endpoint:

tools = [
    CalculatorTool(),
    CopilotTool(),
    BrowsingTool(),
    ...  # Other tools
]
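
The tool classes above are placeholders defined elsewhere in the project. As an illustration of what a tool's core logic might look like, here is a hypothetical safe arithmetic evaluator that a CalculatorTool could wrap; the function name and structure are assumptions, not part of the original project:

```python
import ast
import operator

# Operators the evaluator is allowed to apply
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str):
    """Evaluate a basic arithmetic expression without calling eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression}")
    return _eval(ast.parse(expression, mode="eval"))
```

Parsing the expression with the ast module, rather than calling eval, keeps the tool from executing arbitrary code that an LLM might pass to it.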

Set up DynamoDB for memory management

To keep track of agent interactions, initialize the DynamoDB table and configure session memory:

# Specify the region where the table is created
boto3.setup_default_session(region_name='us-east-2')

# Initialize DynamoDBChatMessageHistory
session_id = channel_name  # Use channel_name as session ID
history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id=session_id)

Configure conversational memory

Integrate chat message history into the agent’s conversational memory:

conversational_memory = ConversationBufferWindowMemory(
    memory_key='chat_history',
    k=10,
    return_messages=True,
    input_key='input',
    output_key='output',
    chat_memory=history
)

Define keyword-based tool usage

You can add keyword-based triggers to prompt the bot to use specific tools:

# Check for specific hashtags and use the corresponding tool
if "%intro" in message_text.lower():
    result = INTRO_MESSAGE
elif "%gtc24" in message_text.lower():
    selected_tool = GTCFAQTool()
    result = selected_tool._run(message_text)
...
else:
    # Read the template from the prompt.txt file
    with open('prompt.txt', 'r') as file:
        template = file.read()

    prompt = PromptTemplate.from_template(template)

For prompt templates suited to your specific use case, check LangChain Hub.

Finalize the agent

ReAct is a framework in which LLMs combine reasoning with actions, solving tasks based on provided examples. Create the ReAct agent and the agent executor with the predefined variables:

# LLM is the NIM agent, with ReACT prompt and defined tools
react_agent = create_react_agent(
       llm=llm,
       tools=tools,
       prompt=prompt
)

# Connect to DB for memory, add react agent and suitable exec for Slack
agent_executor = AgentExecutor(
        agent=react_agent,
        tools=tools,
        verbose=True,
        handle_parsing_errors=True,
        return_intermediate_steps=True,
        memory=conversational_memory
)

Save interactions in Amazon Aurora PostgreSQL

Save the interaction in a predefined function for the Amazon Aurora PostgreSQL database:

save_conversation(user_id, thread_id, channel_name, message_text, result)
return result

There are various error-handling mechanisms you can add to the agent flow based on the given use case:

  • Add an exception handler for a custom message when tools fail 
  • Timeout message for when the tool is taking too long
  • Failure message that lets the user know that the tool is failing

Configuring Slack interactions

After setting up permissions and installing the required libraries, load environment variables and initialize the Slack app:

# Load environment variables
load_dotenv(".env")

# Initialize Slack app
app = App(token=os.getenv('SLACK_BOT_TOKEN'))

For any predefined messages, constants, and long prompts, store them as text files and load them separately:

# Constant examples
MAX_BLOCK_CHARS = 3000 #Slack char limit
MESSAGE_FOLDER = "messages"

def load_message(file_name):
    file_path = os.path.join(MESSAGE_FOLDER, file_name)
    with open(file_path, 'r') as file:
        return file.read()

#Example loading automatic message
INTRO_MESSAGE = load_message('greeting_message.txt')

There are several ways in which communication can be handled, including custom events for app mention, posting and sending messages to Slack, and splitting messages into multiple blocks if they exceed the character limit. 
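
For example, splitting can be handled with a small helper that respects Slack's per-block character limit. This is a sketch; it reuses the MAX_BLOCK_CHARS constant defined earlier, and the function name is an assumption:

```python
MAX_BLOCK_CHARS = 3000  # Slack's per-block character limit

def split_message_into_blocks(text, limit=MAX_BLOCK_CHARS):
    """Split text into chunks that each fit in one Slack block,
    preferring to break at newlines so formatting survives."""
    blocks = []
    while len(text) > limit:
        # Break at the last newline before the limit when possible
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        blocks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        blocks.append(text)
    return blocks
```

Each returned chunk can then be posted as its own block in the chat.postMessage payload.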

To enable the agent to handle direct messages, set up an event listener:

# Add event listener for direct messages
@app.event("message")
def handle_direct_message_events(body, say, logger):
    event = body['event']
    if event.get('channel_type') == 'im':
        user_id = event['user']
        thread_id = event.get('ts')
        channel_name = event['channel']  # Use the DM channel ID as the session name
        text = event['text'].strip()

        response = agent_with_timeout(user_id, thread_id, channel_name, text)

        if response is not None:
            post_message_to_slack(channel_name, response, thread_id, user_id)

Creating custom tools for the agent

To add custom tools, extend the LangChain BaseTool class by providing a clear name and detailed description for each tool:

from langchain.tools import BaseTool

class CustomTool(BaseTool):
    name = "Custom Tool"
    description = """
    Tool used to fetch information
    ...
    """

Ensure that the description is thorough and includes an example for use within prompts. Afterward, append the tool to the agent’s set of tools. 

You can also customize tool behavior for various scenarios, such as using regex patterns to match Slack’s interface layout for coding blocks. 
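
As an example of such a regex-based adjustment, the following hypothetical helper strips language hints from fenced code blocks before posting, since Slack's code formatting does not use them; the function name and exact pattern are illustrative:

```python
import re

# Matches an opening code fence with a language hint, e.g. a fence
# immediately followed by "python" and a newline
FENCE_WITH_LANG = re.compile(r"`{3}[A-Za-z0-9_+-]+\n")

def to_slack_code_blocks(text: str) -> str:
    """Remove language hints from opening fences so code renders
    correctly in Slack's interface."""
    return FENCE_WITH_LANG.sub("`" * 3 + "\n", text)
```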

This approach ensures that each tool is tailored to specific needs and enhances the agent’s versatility.

Managing agent interactions and memory

To store the agent-user interactions, connect to the Amazon Aurora PostgreSQL database instance:

DB_NAME = os.getenv('DB_NAME')
DB_USER = os.getenv('DB_USER')
DB_PASSWORD = os.getenv('DB_PASSWORD')
DB_HOST = os.getenv('DB_HOST')
DB_PORT = os.getenv('DB_PORT')

Several core functions help manage database functionality. You can manually create a database or automate the process using a script. The following code example shows a function that saves the agent-human conversations: 

# Function to save a conversation
def save_conversation(user_id, thread_id, channel_name, message_text, response):
    try:
        conn = psycopg2.connect(
            dbname=DB_NAME,
            user=DB_USER,
            password=DB_PASSWORD,
            host=DB_HOST,
            port=DB_PORT
        )
        c = conn.cursor()
        
        c.execute("""INSERT INTO history (user_id, thread_id, channel_name, message_text, response)
                     VALUES (%s, %s, %s, %s, %s)""",
                  (user_id, thread_id, channel_name, message_text, response))
        
        conn.commit()
        conn.close()
        print("Conversation saved successfully")
    except psycopg2.Error as e:
        print("Error saving conversation:", e)

In this function, key interaction details—such as user IDs, channel names, and messages—are stored in the database for future reference. 
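
If you automate table creation, the schema only needs the columns that save_conversation writes. The DDL below is an assumption reconstructed from that INSERT statement; the id and created_at columns are additions you may not need:

```python
# Hypothetical schema matching the INSERT in save_conversation
HISTORY_TABLE_DDL = """
CREATE TABLE IF NOT EXISTS history (
    id SERIAL PRIMARY KEY,
    user_id TEXT NOT NULL,
    thread_id TEXT NOT NULL,
    channel_name TEXT NOT NULL,
    message_text TEXT NOT NULL,
    response TEXT NOT NULL,
    created_at TIMESTAMPTZ DEFAULT now()
);
"""

def ensure_history_table(conn):
    """Create the history table if it does not exist yet.
    Expects a psycopg2 connection like the one in save_conversation."""
    with conn.cursor() as cur:
        cur.execute(HISTORY_TABLE_DDL)
    conn.commit()
```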

For memory management, use DynamoDB to track conversation sessions and maintain context:

# Initialize the DynamoDB resource using the IAM role credentials in the Ohio region
dynamodb = boto3.resource('dynamodb', region_name='us-east-2')

# Create the DynamoDB table
table = dynamodb.create_table(
    TableName='SessionTable',
    KeySchema=[
        {
            'AttributeName': 'SessionId',
            'KeyType': 'HASH'  # Partition key
        }
    ],
    AttributeDefinitions=[
        {
            'AttributeName': 'SessionId',
            'AttributeType': 'S'
        }
    ],
    BillingMode='PAY_PER_REQUEST'
)

Next steps and enhancements

To further optimize the Slackbot, consider enhancements such as caching frequent queries with Amazon ElastiCache, expanding the set of custom tools, and refining the error-handling and timeout logic.

Keep exploring beyond custom Slackbots

AI agents are transforming enterprise applications by automating tasks, optimizing processes, and boosting productivity. NVIDIA NIM microservices offer a seamless way to integrate multiple agents and tools, enabling businesses to create tailored AI-driven solutions.

In this post, I demonstrated how to use NIM AI endpoints to create an end-to-end Slackbot agent with custom tools. This solution enhances a simple Slack interface, enabling it to handle more complex tasks and solve unique challenges. 

For more examples, explore the official /NVIDIA/GenerativeAIExamples GitHub repo. For more information about building with NIM microservices and LangChain, see NVIDIA AI LangChain endpoints.
