
Building Intelligent Chatbots with LangChain and GPT-4

Space2Code Team
December 28, 2023
15 min read
AI

Introduction

Chatbots have evolved from simple rule-based systems to sophisticated AI-powered assistants capable of understanding context and generating human-like responses. LangChain, combined with GPT-4, makes it easier than ever to build intelligent chatbots.

In this comprehensive guide, we'll walk through building a production-ready chatbot from scratch.

What is LangChain?

LangChain is a framework for developing applications powered by language models. It provides:

  • Chains - Combine multiple AI calls
  • Memory - Maintain conversation context
  • Agents - AI that can use tools
  • RAG - Retrieve and use your data

Project Setup

Let's start by setting up our project:

# Create project
mkdir ai-chatbot && cd ai-chatbot
npm init -y

# Install dependencies
npm install langchain @langchain/openai @langchain/community
npm install express cors dotenv
npm install -D typescript @types/node @types/express

// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "outDir": "./dist"
  }
}
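
The code below reads configuration from environment variables via dotenv. A minimal `.env` file (the variable names match the ones used later in this guide) might look like:

```shell
# .env — keep this file out of version control
OPENAI_API_KEY=sk-...
PORT=3001
```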

Basic Chatbot Implementation

Start with a simple conversational chatbot:

// src/chatbot.ts
import { ChatOpenAI } from '@langchain/openai';
import { ConversationChain } from 'langchain/chains';
import { BufferWindowMemory } from 'langchain/memory';
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
  MessagesPlaceholder,
} from '@langchain/core/prompts';

// Initialize the LLM
const llm = new ChatOpenAI({
  modelName: 'gpt-4-turbo-preview',
  temperature: 0.7,
  openAIApiKey: process.env.OPENAI_API_KEY,
});

// Create a prompt template
const prompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate(
    `You are a helpful AI assistant for Space2Code, a software development company.
    
    Your responsibilities:
    - Answer questions about our services (mobile apps, web development, AI solutions)
    - Help users understand our process
    - Collect information for project inquiries
    - Be friendly, professional, and concise
    
    If asked about pricing, explain that it depends on project scope and suggest scheduling a consultation.`
  ),
  new MessagesPlaceholder('history'),
  HumanMessagePromptTemplate.fromTemplate('{input}'),
]);

// Memory to store conversation history (last 10 exchanges — k counts interactions, not individual messages)
const memory = new BufferWindowMemory({
  k: 10,
  returnMessages: true,
  memoryKey: 'history',
});

// Create the conversation chain
export const chatbot = new ConversationChain({
  llm,
  prompt,
  memory,
});

export async function chat(message: string): Promise<string> {
  const response = await chatbot.call({ input: message });
  return response.response;
}

Adding RAG for Knowledge Base

Make your chatbot smarter with your own data:

// src/rag-chatbot.ts
import { OpenAIEmbeddings } from '@langchain/openai';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { Document } from '@langchain/core/documents';
import { RetrievalQAChain } from 'langchain/chains';
import { ChatOpenAI } from '@langchain/openai';

// Your knowledge base
const documents = [
  new Document({
    pageContent: `Space2Code offers mobile app development using React Native and Flutter.
    We build cross-platform apps that work on both iOS and Android.
    Our mobile development process includes: discovery, design, development, testing, and deployment.`,
    metadata: { topic: 'mobile-development' },
  }),
  new Document({
    pageContent: `Our web development services include Next.js, React, and Node.js.
    We build fast, SEO-friendly web applications with modern architectures.
    We specialize in e-commerce, SaaS platforms, and enterprise applications.`,
    metadata: { topic: 'web-development' },
  }),
  new Document({
    pageContent: `Space2Code provides AI/ML integration services.
    We can add chatbots, recommendation systems, and predictive analytics to your apps.
    We use OpenAI, LangChain, TensorFlow, and PyTorch for AI development.`,
    metadata: { topic: 'ai-development' },
  }),
];

// Create embeddings; build the vector store lazily, since top-level
// await is not available with the CommonJS module setting above
const embeddings = new OpenAIEmbeddings();
let vectorStorePromise: Promise<MemoryVectorStore> | null = null;

function getVectorStore(): Promise<MemoryVectorStore> {
  if (!vectorStorePromise) {
    vectorStorePromise = MemoryVectorStore.fromDocuments(documents, embeddings);
  }
  return vectorStorePromise;
}

const llm = new ChatOpenAI({ modelName: 'gpt-4-turbo-preview' });

export async function ragChat(question: string): Promise<string> {
  const vectorStore = await getVectorStore();
  const ragChain = RetrievalQAChain.fromLLM(llm, vectorStore.asRetriever());
  const response = await ragChain.call({ query: question });
  return response.text;
}
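
The three hand-written documents above each fit comfortably in a single embedding, but a real knowledge base needs to be split into overlapping chunks before embedding. LangChain ships text splitters for this; the core idea can be sketched in plain TypeScript (the chunk size and overlap values here are illustrative):

```typescript
// Split text into fixed-size chunks with overlap, so content that
// straddles a boundary still appears intact in at least one chunk.
function chunkText(text: string, chunkSize = 500, overlap = 100): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap;
  }
  return chunks;
}
```

Each chunk would then become one `Document` passed to `MemoryVectorStore.fromDocuments`.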

Building the API

Create an Express API for your chatbot:

// src/server.ts
import express from 'express';
import cors from 'cors';
import { chat } from './chatbot';
import { ragChat } from './rag-chatbot';

const app = express();
app.use(cors());
app.use(express.json());

// Simple chat endpoint
app.post('/api/chat', async (req, res) => {
  try {
    const { message, sessionId } = req.body;
    
    if (!message) {
      return res.status(400).json({ error: 'Message is required' });
    }

    // Note: chat() uses a single shared memory, so sessionId is echoed
    // back but not yet used to separate conversations
    const response = await chat(message);
    res.json({ response, sessionId });
  } catch (error) {
    console.error('Chat error:', error);
    res.status(500).json({ error: 'Failed to generate response' });
  }
});

// RAG chat endpoint
app.post('/api/chat/knowledge', async (req, res) => {
  try {
    const { question } = req.body;
    
    if (!question) {
      return res.status(400).json({ error: 'Question is required' });
    }

    const response = await ragChat(question);
    res.json({ response });
  } catch (error) {
    console.error('RAG error:', error);
    res.status(500).json({ error: 'Failed to generate response' });
  }
});

const PORT = process.env.PORT || 3001;
app.listen(PORT, () => {
  console.log(`Chatbot API running on port ${PORT}`);
});
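
As written, `/api/chat` passes `sessionId` through, but the chain holds a single shared `BufferWindowMemory`, so every caller shares one conversation. A common fix is to keep a message window per session. The windowing logic itself is simple; here is a plain TypeScript sketch, independent of LangChain's types (the class and method names are illustrative):

```typescript
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

// Keeps the last `k` messages per session, mirroring what a
// windowed memory does for a single conversation.
class SessionStore {
  private sessions = new Map<string, ChatMessage[]>();

  constructor(private k = 10) {}

  append(sessionId: string, message: ChatMessage): void {
    const history = this.sessions.get(sessionId) ?? [];
    history.push(message);
    // Trim to the most recent k messages
    this.sessions.set(sessionId, history.slice(-this.k));
  }

  history(sessionId: string): ChatMessage[] {
    return this.sessions.get(sessionId) ?? [];
  }
}
```

In a real deployment you would back this with Redis or a database rather than an in-process `Map`, so history survives restarts and scales across instances.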

Adding Streaming Responses

Make your chatbot feel more responsive with streaming:

// src/streaming-chat.ts
import { ChatOpenAI } from '@langchain/openai';

const streamingLLM = new ChatOpenAI({
  modelName: 'gpt-4-turbo-preview',
  streaming: true,
});

export async function* streamChat(message: string) {
  const stream = await streamingLLM.stream(message);
  
  for await (const chunk of stream) {
    yield chunk.content;
  }
}

// In src/server.ts: expose the stream over Server-Sent Events
import { streamChat } from './streaming-chat';

app.post('/api/chat/stream', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  const { message } = req.body;

  for await (const chunk of streamChat(message)) {
    res.write(`data: ${JSON.stringify({ chunk })}\n\n`);
  }

  res.write('data: [DONE]\n\n');
  res.end();
});
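
On the client side, each streamed chunk arrives as an SSE `data:` line followed by a blank line. Before wiring this into React, it helps to have a small parser for the raw event text (a sketch; it assumes the `{ chunk }` payload and `[DONE]` sentinel used by the endpoint above):

```typescript
// Extract the text chunks from a raw SSE buffer produced by the
// /api/chat/stream endpoint.
function parseSSEChunks(raw: string): string[] {
  const chunks: string[] = [];
  for (const line of raw.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') break; // end-of-stream sentinel
    chunks.push(JSON.parse(payload).chunk);
  }
  return chunks;
}
```

In the browser you would feed this from `response.body.getReader()` (or use the `EventSource` API) and append each chunk to the current assistant message as it arrives.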

Frontend Integration

Connect your React frontend to the chatbot:

// components/Chatbot.tsx
'use client';

import { useState, useRef, useEffect } from 'react';

interface Message {
  role: 'user' | 'assistant';
  content: string;
}

export default function Chatbot() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  const sendMessage = async () => {
    if (!input.trim()) return;

    const userMessage = { role: 'user' as const, content: input };
    setMessages((prev) => [...prev, userMessage]);
    setInput('');
    setIsLoading(true);

    try {
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message: input }),
      });

      const data = await response.json();
      setMessages((prev) => [
        ...prev,
        { role: 'assistant', content: data.response },
      ]);
    } catch (error) {
      console.error('Error:', error);
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <div className="chatbot-container">
      <div className="messages">
        {messages.map((msg, i) => (
          <div key={i} className={`message ${msg.role}`}>
            {msg.content}
          </div>
        ))}
        {isLoading && <div className="loading">Thinking...</div>}
      </div>
      <div className="input-area">
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && sendMessage()}
          placeholder="Ask me anything..."
        />
        <button onClick={sendMessage} disabled={isLoading}>
          Send
        </button>
      </div>
    </div>
  );
}

Best Practices

  1. Rate Limiting - Protect your API from abuse
  2. Error Handling - Gracefully handle API failures
  3. Conversation History - Store and retrieve past conversations
  4. Moderation - Filter inappropriate content
  5. Analytics - Track usage and improve responses
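
For rate limiting, packages like `express-rate-limit` are the usual choice, but the underlying idea is just a counter per client per time window. A minimal fixed-window sketch in plain TypeScript (the limit and window length are illustrative):

```typescript
// Fixed-window rate limiter: allow at most `limit` requests per
// `windowMs` milliseconds for each key (e.g. an IP address).
class RateLimiter {
  private hits = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit = 20, private windowMs = 60_000) {}

  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New window: reset the counter
      this.hits.set(key, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

Wired into Express, this becomes a middleware that keys on `req.ip` and returns `429 Too Many Requests` when `allow` is false.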

Conclusion

Building intelligent chatbots with LangChain and GPT-4 is straightforward and powerful. Start with a basic implementation and gradually add features like RAG, streaming, and tool use.

Need help building your AI chatbot? Contact Space2Code for expert AI development services.

Tags

#Chatbot #LangChain #GPT-4 #AI #NLP
