Building an AI-Powered Search Bar - A Guide with OpenAI, Supabase, and NuxtContent - Part 2

By Tobias Reich

Let's dive into the world of AI-powered web development. We're combining Nuxt Content, Supabase, and OpenAI's embeddings to craft an advanced AI content search system. Together, we'll step through a process that not only enhances search functionality but also brings a new level of intelligence to user interactions. By the end of this tutorial, you'll have a fully functional AI search bar that you can integrate into your own application. In Part 2 we will code the search functionality. If you want to follow along easily, I recommend checking out Part 1, where we set up our environment together.

Now it's finally time to create the search functionality. Our first step is to create a function that generates the embeddings for our articles.

Generating the embeddings

To keep the API keys for OpenAI and Supabase on the server, we create an API server route in the server folder: /server/api/generateEmbeddings.ts. It receives the article content and route as arguments so we can create an embedding for each section of the article. Generating multiple embeddings per article is important if we want precise search results. After creating them, we insert them into our Supabase table.
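Before we look at the route, it helps to picture the data we send to it: article.value?.body from Nuxt Content is a nested AST in which text lives in leaf nodes under a value key. Here is a simplified, illustrative sketch only; the real AST carries more fields such as type, props, and toc:

// Illustrative only – a trimmed-down version of what Nuxt Content's
// parsed article body looks like. Container nodes carry "children",
// leaf text nodes carry "value"; the recursion below walks exactly this.
const exampleArticleContent = {
  children: [
    {
      tag: "p",
      children: [{ type: "text", value: "First paragraph of the article..." }],
    },
    {
      tag: "h2",
      children: [{ type: "text", value: "A section heading" }],
    },
  ],
};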

generateEmbeddings.ts
import { OpenAI } from "openai";
import { serverSupabaseClient } from "#supabase/server";

export default defineEventHandler(async (event) => {
  const openai = new OpenAI();
  const { articlePath, articleContent } = await readBody(event);
  const client = await serverSupabaseClient(event);

  // split the article body into plain-text sections by walking the
  // Nuxt Content AST and collecting the value of every leaf text node
  const sections: string[] = [];
  const getData = (content: any) => {
    content.children?.forEach((element: any) => {
      if (element.children) {
        getData(element);
      } else if (element.value) {
        sections.push(element.value);
      }
    });
  };
  getData(articleContent);

  // create one embedding per section in a single API call
  const embeddingRequest = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: sections,
  });
  const embeddings = embeddingRequest.data;

  // save each embedding to the database and wait until all inserts are done
  await Promise.all(
    embeddings.map((embedding) =>
      client.from("blog_chunk_embeddings").insert({
        article_title: articlePath,
        embedding: embedding.embedding,
      }),
    ),
  );
  return;
});

Now we have to call this function. For simplicity, I will add a button below the blog article's title in [article].vue that calls a generateEmbedding function:

[article].vue
<button @click="generateEmbedding">generateEmbedding</button>
[article].vue
// "route" and "article" are the route and fetched article from Part 1
const generateEmbedding = async () => {
  await useFetch("/api/generateEmbeddings", {
    method: "POST",
    body: JSON.stringify({
      articlePath: route.params.article,
      articleContent: article.value?.body,
    }),
  });
};

When you click the button, the freshly generated embeddings will appear in the Supabase table.
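If you would rather verify this from code than in the Supabase dashboard, a quick sketch could look like the following. It assumes the Nuxt Supabase module's useSupabaseClient composable is available on the client; the checkEmbeddings helper is purely illustrative and not part of the tutorial:

// optional sanity check in [article].vue – counts the stored section
// embeddings for the current article via the Supabase JS client
const checkEmbeddings = async () => {
  const client = useSupabaseClient();
  const { data: rows, error } = await client
    .from("blog_chunk_embeddings")
    .select("id, article_title")
    .eq("article_title", route.params.article);

  if (error) console.error(error);
  console.log(`Stored section embeddings for this article: ${rows?.length ?? 0}`);
};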

Create a vector search function in Supabase

We now want to utilize the new embeddings. The goal of the next step is to create a Supabase function that returns all article titles whose stored embeddings match a given query embedding. This is what it looks like:

sql
create or replace function match_blog_sections(embedding vector(1536), match_threshold float)
returns table (id bigint, article_title text, similarity float)
language plpgsql
as $$
#variable_conflict use_variable
begin
  return query
  select
    blog_chunk_embeddings.id,
    blog_chunk_embeddings.article_title,
    (blog_chunk_embeddings.embedding <#> embedding) * -1 as similarity
  from blog_chunk_embeddings
  where (blog_chunk_embeddings.embedding <#> embedding) * -1 > match_threshold
  order by blog_chunk_embeddings.embedding <#> embedding;
end;
$$;

The <#> operator is pgvector's negative inner product, which is why the result is multiplied by -1 to turn it into a similarity score; since OpenAI's embeddings are normalized to length 1, this inner product is equivalent to cosine similarity. The next step on our roadmap is to generate a search embedding and call this Supabase function with it.

As we will use API keys again, we put the search function into an API route: /server/api/searchBlog.ts

searchBlog.ts
import { OpenAI } from "openai";
import { serverSupabaseClient } from "#supabase/server";

export default defineEventHandler(async (event) => {
  const openai = new OpenAI();
  const { query } = await readBody(event);
  const client = await serverSupabaseClient(event);

  // generate the embedding for the search term
  const embeddingResponse = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: query,
  });
  const queryEmbedding = embeddingResponse.data[0].embedding;

  // call the Supabase function with the embedding;
  // pgvector expects the vector serialized as a string like "[0.1,0.2,...]"
  const { data, error } = await client.rpc("match_blog_sections", {
    embedding: "[" + queryEmbedding.toString() + "]",
    match_threshold: 0.8,
  });
  if (error) throw error;

  // return the matching article titles, deduplicated
  if (data) {
    const articles = data
      .map((article) => article.article_title)
      .filter((value, index, self) => self.indexOf(value) === index);
    return articles;
  }
});

This route will be used in a custom search composable: /composables/ai-blog-search.ts

ai-blog-search.ts
export default async function customSearchContent(search: string, articles: any[]) {
  // if the search term is empty, return all articles unfiltered
  if (search === "") {
    return articles;
  }

  // call the API route with the search term
  const { data } = await useFetch("/api/searchBlog", {
    method: "POST",
    body: JSON.stringify({ query: search }),
  });

  // keep only the articles whose path was returned by the search
  const filteredArticles = articles?.filter((article) =>
    data.value?.includes(article._path.replace("/", "")),
  );

  return filteredArticles;
}

Last step: Frontend

Phew, that was quite a lot. Let's wrap things up and wire the whole functionality into our frontend, index.vue. First, we add a search bar above the articles.

index.vue
<input type="search" name="Search" v-model="search" />

Now we add the search variable to our script and call the search function, debounced, every time something is typed into the search box:

index.vue
const search = ref<string>("");

function debounce<T extends (...args: any[]) => any>(
  func: T,
  delay: number,
): (...args: Parameters<T>) => void {
  let timer: ReturnType<typeof setTimeout> | null = null;
  return function (...args: Parameters<T>): void {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => func(...args), delay);
  };
}

const performSearch = async () => {
  filteredArticles.value = await customSearchContent(
    search.value,
    articles.value || [],
  );
};

const debouncedSearch = debounce(performSearch, 500);

watch(search, () => {
  debouncedSearch();
});
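For completeness, here is a minimal sketch of how the surrounding pieces of index.vue could look. The articles and filteredArticles refs are the ones used in performSearch above; the queryContent call and the template loop are assumptions based on a typical Nuxt Content setup, so adjust them to whatever you built in Part 1:

index.vue
// assumed setup – fetch all articles once and start with the unfiltered list;
// performSearch overwrites filteredArticles whenever the search term changes
const { data: articles } = await useAsyncData("articles", () =>
  queryContent("/").find(),
);

const filteredArticles = ref(articles.value || []);

index.vue
<article v-for="article in filteredArticles" :key="article._path">
  <NuxtLink :to="article._path">{{ article.title }}</NuxtLink>
</article>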

Bye!

And that's it! We've covered the essential steps to create an AI-powered content search application with Nuxt Content, Supabase, and OpenAI. We hope this tutorial has given you valuable insights into integrating AI in web development. Remember, the key to mastery is practice and experimentation. Until next time, happy coding!
