LLM-driven insights refer to using advanced AI language models, like OpenAI's GPT, to understand, interpret, and generate meaningful information from complex datasets. Instead of manually writing queries or code, you can ask the AI questions in natural language, for example which product category brought in the most revenue last month.
The LLM understands your request, analyzes the sales data, and provides clear, actionable answers — sometimes even generating charts or summaries to visualize the findings.
We’ll create a powerful yet simple data analysis chatbot using Next.js and OpenAI’s language model. This chatbot will let users interact with sales data through natural language queries, with no complex coding, SQL knowledge, or BI platform installation required.
Choosing the right framework is essential when building a chatbot that’s both powerful and user-friendly. Next.js stands out as an excellent choice thanks to its robust features tailored for modern web applications. Here’s why Next.js is perfect for our data analysis chatbot:
Open your command-line interface and run: `npx create-next-app@latest`. After the installation is done, change into the project directory with `cd llm-insights` and start the development server with `npm run dev`.
Visit http://localhost:3000. You should see the default Next.js start page.
In the command line, stop the development server and install Vercel's AI SDK by running: `npm install ai @ai-sdk/openai`
Now, go to the OpenAI platform, open the API keys section in the left sidebar, and create an API key. Copy the key, create a `.env` file in the root of the project, and add `OPENAI_API_KEY="YourKey"` to it.
Now, let's install the frontend libraries and dependencies. Install Heroicons by running: `npm install @heroicons/react`
In the project root directory, create a folder called `lib`, and inside it create a file called `openai.ts`.
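The original post supplies that file's contents; as a minimal sketch, it could simply export a configured provider from the @ai-sdk/openai package we just installed (the exact contents in the original may differ):

```ts
// lib/openai.ts: a minimal sketch; the original file's contents may differ
import { createOpenAI } from '@ai-sdk/openai';

// An explicitly configured provider instance. The default `openai` export from
// '@ai-sdk/openai' would also pick up OPENAI_API_KEY from the environment on its own.
export const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```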
Then open `app/globals.css`, remove everything, and add only the base styles needed for the chat UI.
In the `app` folder, add a `components` directory; this is where the chatbot's component files used in the rest of the tutorial will live.
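The original post lists the exact files to create; the steps below rely at least on a ChatBot.tsx (and later a PieChart.tsx). A throwaway placeholder, replaced in the following sections, might look like this:

```tsx
// app/components/ChatBot.tsx: temporary placeholder, replaced later in the tutorial
'use client';

export default function ChatBot() {
  return (
    <div className="rounded border p-4">
      <p>Chat UI coming soon.</p>
    </div>
  );
}
```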
In the `app` directory, change `page.tsx` so that it renders the chatbot.
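The original post shows the full page; a minimal sketch that simply renders the ChatBot component (the heading text and layout classes are assumptions) could be:

```tsx
// app/page.tsx: a minimal sketch; the original page may include more layout
import ChatBot from './components/ChatBot';

export default function Home() {
  return (
    <main className="mx-auto max-w-2xl p-8">
      <h1 className="mb-4 text-2xl font-semibold">Sales Data Chatbot</h1>
      <ChatBot />
    </main>
  );
}
```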
Here is what you should see on your screen
In our example, we are going to create a file with some sample sales data, but in a real-world application you would probably pull data from an API or a database. In the project root, create a new folder called `data` and add a new file called `sales.js` in it.
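The original post ships its own sample dataset; as a placeholder, a handful of rows with an assumed category/product/revenue shape is enough to follow along (the tool we build later filters on `category` and charts `revenue` per `product`):

```js
// data/sales.js: placeholder sample data; the shape used here is an assumption
export const sales = [
  { month: 'Jan', category: 'Electronics', product: 'Laptop', revenue: 12000 },
  { month: 'Jan', category: 'Electronics', product: 'Headphones', revenue: 3400 },
  { month: 'Feb', category: 'Furniture', product: 'Desk', revenue: 5600 },
  { month: 'Feb', category: 'Furniture', product: 'Chair', revenue: 2300 },
  { month: 'Mar', category: 'Electronics', product: 'Monitor', revenue: 4100 },
];
```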
The Vercel AI SDK chatbot setup has two main components: a server-side route handler that calls the model and streams its response, and the client-side useChat hook that sends messages and renders the stream in the UI.
Let's implement our API endpoint: it receives the chat messages and uses streamText to pass them to the model and generate a streamed LLM response. In the `app` directory, create `api/ai/chat/route.tsx`.
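The original post shows the full route handler; a minimal sketch using the AI SDK's v4-style API (method names vary slightly between SDK versions, and the model name here is an assumption) looks like this:

```ts
// app/api/ai/chat/route.tsx: a minimal sketch (AI SDK v4-style API)
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Forward the chat history to the model and stream the reply back to the client
  const result = streamText({
    model: openai('gpt-4o-mini'), // model choice is an assumption
    messages,
  });

  return result.toDataStreamResponse();
}
```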
Now we can wire up our ChatBot component to the endpoint to send messages and receive a streamed response.
First, let's install a Markdown parser, because the LLM returns its answers in Markdown by default. In the command prompt, run: `npm i react-markdown remark-gfm`. Next, replace the ChatBot content with the following.
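The original component includes its own markup and styling; this is a minimal sketch of the same wiring, using the v4-style useChat hook (newer SDK versions export it from '@ai-sdk/react' instead of 'ai/react'):

```tsx
// app/components/ChatBot.tsx: a sketch; markup and styling are assumptions
'use client';

import { useChat } from 'ai/react'; // newer SDK versions: '@ai-sdk/react'
import ReactMarkdown from 'react-markdown';
import remarkGfm from 'remark-gfm';

export default function ChatBot() {
  // useChat defaults to /api/chat, so we point it at our custom route
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/ai/chat',
  });

  return (
    <div className="space-y-4">
      {messages.map((m) => (
        <div key={m.id} className={m.role === 'user' ? 'text-right' : ''}>
          {/* The model answers in Markdown, so render it with react-markdown */}
          <ReactMarkdown remarkPlugins={[remarkGfm]}>{m.content}</ReactMarkdown>
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          className="w-full rounded border p-2"
          value={input}
          onChange={handleInputChange}
          placeholder="Ask about the sales data..."
        />
      </form>
    </div>
  );
}
```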
useChat by default uses the `api/chat` endpoint, but since we changed our endpoint to `api/ai/chat`, we need to set it in the hook's options. Let's give it a try.
Prompt engineering is the steering wheel of Large Language Models (LLMs): it lets us guide the model to produce accurate and relevant results. Since LLMs generate responses based on the prompt context, how you phrase questions or instructions directly impacts the quality of the insights you get.
For our sales data chatbot, prompt engineering helps the AI understand exactly what data to analyze, how to interpret it, and how to present the results, whether as text summaries or chart specifications.
If you want to learn more about prompt engineering and its advanced techniques, feel free to read The Definitive Guide to LLM Prompt Engineering and Prompt Debugging Techniques
streamText can receive a system prompt, which helps us give our model a persona (Expert Data Analyzer). This steers the model toward the relevant parts of its training, defines its tasks, provides constraints (our sales data), and gives it a few examples to make sure it produces accurate answers.
In `route.tsx`, let's add a system prompt and pass it to streamText.
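A sketch of what that could look like; the wording of the system prompt and the '@/' import alias are assumptions, and the original post's prompt is more elaborate:

```ts
// app/api/ai/chat/route.tsx: sketch with a system prompt (exact wording is an assumption)
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { sales } from '@/data/sales'; // '@/' path alias is the create-next-app default

const system = `You are an expert data analyst.
Answer only questions about the sales data provided below, and be concise.
Sales data (JSON): ${JSON.stringify(sales)}
Example: if asked "Which category earned the most revenue?",
sum revenue per category and name the top one with its total.`;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),
    system, // persona, constraints, and a worked example all live here
    messages,
  });

  return result.toDataStreamResponse();
}
```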
Let's see how it works
Tool calling is a powerful feature that allows a language model to interact with external tools or APIs during a conversation. Instead of just generating text, the model can “call” specialized functions to perform tasks like fetching data, running calculations, or creating visualizations.
For our sales data chatbot, tool calling enables the AI to request the generation of charts based on user queries. When a user asks for a sales trend or comparison, the model can produce a structured chart specification (like Vega-Lite JSON) and then call a chart-rendering tool to display the visualization directly in the app.
Let's create a tool called generateRevenueChart, which generates a pie chart for a given category, in `route.tsx`.
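A sketch of the tool definition follows; it uses the AI SDK v4 field names (the schema option is called parameters there, inputSchema in later major versions) and assumes the sample data shape sketched in data/sales.js above. zod provides the parameter schema.

```ts
// app/api/ai/chat/route.tsx (excerpt): a sketch of the tool definition
import { streamText, tool } from 'ai';
import { z } from 'zod'; // install with `npm i zod` if it is not already present
import { sales } from '@/data/sales';

const generateRevenueChart = tool({
  description: 'Return revenue-per-product data for a pie chart, for one product category',
  parameters: z.object({
    category: z.string().describe('The product category to chart, e.g. "Electronics"'),
  }),
  execute: async ({ category }) => {
    // Pull the sales JSON and build pie-chart slices for the requested category
    const rows = sales.filter((row) => row.category === category);
    const data = rows.map((row) => ({ name: row.product, value: row.revenue }));
    return { category, data };
  },
});

// Register the tool when calling streamText inside POST:
// const result = streamText({ model, system, messages, tools: { generateRevenueChart } });
```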
Here we imported the tool helper from the Vercel AI SDK and created our tool with a category parameter of type string (the parameter is what the model passes to the function the tool executes). The tool's function pulls the sales JSON and, using the passed category, generates the revenue data for our pie chart.
In the `components` folder, create a new file `PieChart.tsx` with the following content.
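A sketch of a Recharts-based pie chart; the colors and sizing are assumptions, and the original component may differ:

```tsx
// app/components/PieChart.tsx: a sketch; styling and colors are assumptions
'use client';

import {
  PieChart as RePieChart,
  Pie,
  Cell,
  Tooltip,
  Legend,
  ResponsiveContainer,
} from 'recharts';

type Slice = { name: string; value: number };

const COLORS = ['#6366f1', '#22c55e', '#f59e0b', '#ef4444', '#06b6d4'];

export default function PieChart({ data }: { data: Slice[] }) {
  return (
    <ResponsiveContainer width="100%" height={300}>
      <RePieChart>
        {/* Each slice is one product's revenue within the requested category */}
        <Pie data={data} dataKey="value" nameKey="name" outerRadius={100} label>
          {data.map((_, i) => (
            <Cell key={i} fill={COLORS[i % COLORS.length]} />
          ))}
        </Pie>
        <Tooltip />
        <Legend />
      </RePieChart>
    </ResponsiveContainer>
  );
}
```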
Since we are using Recharts, let's install it: `npm install recharts`
When our tool gets invoked, the response part type will be `tool-invocation` instead of `text`. Let's add a case for that, so that when an LLM response part is a `tool-invocation`, we take the tool result for the revenue chart and pass it to our PieChart component.
Let's replace the ChatBot.tsx content with the following.
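A sketch of that final component; the message parts / tool-invocation layout shown here follows AI SDK v4, so adjust the field names if you are on a different SDK version, and the markup and styling are assumptions:

```tsx
// app/components/ChatBot.tsx: a sketch of the final version (AI SDK v4-style message parts)
'use client';

import { useChat } from 'ai/react'; // newer SDK versions: '@ai-sdk/react'
import ReactMarkdown from 'react-markdown';
import remarkGfm from 'remark-gfm';
import PieChart from './PieChart';

export default function ChatBot() {
  // useChat defaults to /api/chat, so point it at our custom route
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/ai/chat',
  });

  return (
    <div className="space-y-4">
      {messages.map((message) => (
        <div key={message.id} className={message.role === 'user' ? 'text-right' : ''}>
          {message.parts?.map((part, i) => {
            // Plain text answers are rendered as Markdown
            if (part.type === 'text') {
              return (
                <ReactMarkdown key={i} remarkPlugins={[remarkGfm]}>
                  {part.text}
                </ReactMarkdown>
              );
            }
            // Tool calls: once the result arrives, hand what execute() returned to the chart
            if (
              part.type === 'tool-invocation' &&
              part.toolInvocation.toolName === 'generateRevenueChart' &&
              part.toolInvocation.state === 'result'
            ) {
              return <PieChart key={i} data={part.toolInvocation.result.data} />;
            }
            return null;
          })}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          className="w-full rounded border p-2"
          value={input}
          onChange={handleInputChange}
          placeholder="Ask about the sales data..."
        />
      </form>
    </div>
  );
}
```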
Throughout this tutorial, we’ve seen how combining Next.js with OpenAI’s powerful language models enables us to build an intuitive, AI-driven data analysis chatbot. From setting up the project and integrating sample sales data to crafting effective prompts and leveraging the tool calling for dynamic chart generation, each step contributes to a seamless user experience. This approach not only simplifies complex data querying but also enhances decision-making by delivering clear, actionable insights in real time.
Building an LLM-driven data analysis chatbot with Next.js and OpenAI unlocks a new level of accessibility and efficiency in working with complex datasets. By harnessing prompt engineering and tool calling, we empower users to interact with data naturally, asking questions in plain language and receiving insightful answers along with visualizations. This method reduces reliance on specialized technical skills and accelerates data-driven decisions.
As AI continues to evolve, integrating such intelligent chatbots into your applications can transform how your teams explore and understand data. Whether you’re analyzing sales, marketing, or operational metrics, the combination of Next.js and LLMs offers a scalable, flexible foundation to build smarter analytics tools.
Ready to take your data analysis capabilities further? Experiment with expanding your chatbot’s dataset, refining prompts, or adding new tools to generate richer insights. The future of AI-powered analytics is at your fingertips.
You can find the project repo here