There’s no magic bullet when it comes to marketing, but when you can generate inbound leads in perpetuity, identify new qualified prospects, and sign new customers passively, SEO sure comes close.
Search engine optimization, or SEO, is the process of creating and optimizing content (normally text content) that drives clicks (or even better, conversions).
But SEO isn’t easy. I’m on my third (maybe even fourth or fifth, depending on how you count them) startup now, and I’ve always viewed SEO as an enigmatic black box — some people just know how to do it well, and the rest of us are left to figure things out in low-traffic purgatory.
But now that I’ve started Locusive, I’ve decided to learn, once and for all, how to implement an optimized SEO strategy that can lead to new customers.
Over the past year, I’ve learned enough to build a decent foundation, and as I’ve started to implement a cohesive strategy, I’ve seen it generate an increasing number of clicks and conversions. And with the massive growth in AI that we’ve seen over the past year, I’ve been able to cut down how much time it takes to generate a new piece of content from days to hours.
In the rest of this post, I’ll provide details about my SEO automation playbook (which you can find here), detailing the exact strategies I use, and even providing the prompts I use to generate high-quality content that’s personalized to my customers and my business.
Before I jump in, I should point out that I’m not a professional SEO and I’ve only learned the foundations over the past 12 months. In providing this guide, my hope is that you’ll be able to learn a little bit more about how to use Generative AI to grow your own business, and that you’ll be able to save lots of time doing so by re-using the prompts, strategies, and tactics that work for me. It’s also worth noting that when I mention ChatGPT in the article below, I’m referring to ChatGPT 4 specifically. All that said, let’s jump in.
Creating a new piece of content vs. long-term strategy
The first thing to keep in mind is that SEO is a long-term practice that requires dedication and consistency. I’ve approached it with this thought in mind, preparing to wait at least six months before seeing results. While it may not always take that long, the point is that if you’re going to really invest in SEO as a lead gen and conversion strategy, you’re going to need to take a long-term view of the process.
For me, that means that every month, I produce several pieces of high-quality content and audit my existing content to update it, add cross-links, and clean it up. I also hope to start earning more backlinks soon.
In this post, I’ll focus on the first piece of that — how I use AI to generate a single, new piece of content at Locusive.
An overview of the approach
At the moment I’m the only one working on marketing for Locusive, so my goal is to generate one new long-form article a week. As we grow, I’d love to incorporate new forms of media and a larger number of weekly articles, but for now, the process below should work at most scales.
When I approach generating new content, I use the following 10-step process. As you can see, I do a lot of this by using AI. Unfortunately, not all of it can be automated (and that’s why I don’t use automatic writing tools or services), but the manual piece of it isn’t that bad since AI has taken up a lot of the tedious and taxing tasks.
The process I follow:
- Determine the topic of the article
- Identify the keywords to target
- Find the existing top pieces of content for the keyword(s)
- Teach ChatGPT about my company
- Use ChatGPT to extract the structure of the competitors’ articles
- Use ChatGPT to create an optimized outline for my article
- Use Claude to iteratively generate the content for each section of the article
- Use Claude to create an optimized title and description
- Create a hero image using Claude and ChatGPT + DALL-E
- Publish the article on key channels
I’ll dive deeper into each of these steps below.
1. Determine the topic of the article
The first, and probably most important, step in the process is to determine the topic of the new article. Most folks generally do this by looking for keywords that might be worth targeting, but, according to the fine folks over at Grow and Convert, the better approach is to identify pain points that your customers are facing, create an article to address those pain points, and then target the keywords that they’re most likely to search for when looking for a solution.
This approach makes a lot of sense to me, as it optimizes for what I really care about: conversions. In general, I find that this approach, which targets bottom-of-the-funnel users, has lower volume, but will convert at a higher rate than top-of-funnel searches.
At Locusive, I’ve combined this philosophy with the hub-and-spoke model, where I maintain a “hub” page for a particular topic and then create “spoke” pieces of content that link to and from the hub. In my approach, our hubs focus on high-level user problems, rather than keywords, which is what you’ll see most other guides focus on.
Mapping our content
To facilitate this approach, I maintain a Content Map that identifies all of our hubs and spokes, along with the keywords that they’re targeting and the cross-links contained within each article.
This allows me to quickly see which user problems I want to cover, what articles I’ve already created, and what keywords I’m targeting. I periodically update this content map with new ideas, as it serves as the basis for how I choose the next topic to write about.
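For illustration, here’s a minimal sketch of how a content map like this could be represented in code instead of a spreadsheet or Notion doc. The hub names, field names, and entries are hypothetical examples, not my actual map.

```python
# A minimal, illustrative content map: one hub per high-level user problem,
# with spokes that record target keywords and cross-links. The field names
# and entries here are hypothetical examples.
content_map = {
    "business-chatbots": {                     # hub topic (a user problem)
        "hub_url": "/blog/business-chatbots",
        "spokes": [
            {
                "title": "How to use a chatbot with your Google Drive documents",
                "keywords": ["google drive chatbot", "best google drive chatbots"],
                "links_to": ["/blog/business-chatbots"],  # cross-links back to the hub
                "status": "published",
            },
            {
                "title": "Chatbots that can answer questions from Salesforce",
                "keywords": ["salesforce chatbot"],
                "links_to": ["/blog/business-chatbots"],
                "status": "idea",               # gaps show up as unwritten spokes
            },
        ],
    },
}

# Picking the next topic is then just a matter of listing the unwritten spokes.
next_topics = [
    spoke["title"]
    for hub in content_map.values()
    for spoke in hub["spokes"]
    if spoke["status"] == "idea"
]
print(next_topics)
```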
Selecting the topic
To select the actual topic of my next article, I review this document and identify any gaps in terms of what users might be trying to do and what articles I’ve already created.
For example, one of our hub topics focuses on how users can implement chatbots specific to their business. I’ll maintain a list of the articles I’ve already created specific to this topic, and then analyze what our product does and what our users have asked us, and search for any potential topics that we haven’t written about yet. For example, if we just created a new Google Drive integration for our chatbot (which we did a few months ago), I might create an article on how to use chatbots with your Google Drive documents.
Choosing an angle
While it might be enough to select a high-level topic, I find it best to go one step further and select a specific angle to use when covering that topic — ideally an angle that covers bottom-of-the-funnel needs so that I can drive more conversions to our product.
Following the same example above, if I choose to write about chatbots for Google Drive, I’ll generally use one of the high-intent, low-funnel approaches that Grow and Convert recommends, specifically, category keywords (i.e. “google drive chatbots”), or even better, comparison/alternative keywords (i.e. “best google drive chatbots”). As mentioned above, this approach won’t drive as much traffic as focusing on higher-funnel, informational keywords, but it will lead to more conversions from the users who do click on your articles, since they’re more likely to be ready to try a product that solves their pain.
Summary
To recap, in order to choose the best topic for a new piece of content, you should:
- Know what your customers are asking and have a running list of topics
- See where your existing content library falls short in terms of covering important topics your customers care about
- Select a topic that you haven’t covered yet
- Identify an angle to cover that topic that uses low-funnel, high-intent keywords
Once you’ve got your topic down, you can then select the keywords that you want to target for that topic.
2. Select your keywords
Once you know what you want to write about, you want to maximize the chances of your prospective customers seeing it. One way to do this is to search for the keywords and search terms that your customers are using to find similar content.
Google’s Keyword Planner is a useful tool to do this. By typing in some of the keywords from the topic that you’re writing about, it can show you a list of relevant keywords that people are searching for, along with an estimate of the number of monthly searches that keyword is seeing.
In general, I like to target keywords that get between 10–100 searches a month, assuming they’re low-funnel. These searches tend to drive a consistent number of conversions, and unlike higher-volume terms, which are nearly impossible to rank for unless you’re a big corporation, they’re still within reach.
Once I’ve found 2–3 keywords I want to target, I’ll write them down for later use. I also maintain a running list of keywords that I’m targeting (or that are already driving traffic to our website) so that I can track how my efforts are going.
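If you export your keyword ideas from Keyword Planner as a CSV, a short script can filter out the candidates in that 10–100 range. This is just a sketch; the column names (“Keyword”, “Avg. monthly searches”) are assumptions based on a typical export, so adjust them to match your own file.

```python
import csv

# Sketch: filter a Keyword Planner CSV export down to the 10-100 searches/month
# range described above. The column names ("Keyword", "Avg. monthly searches")
# are assumptions -- check the headers in your own export and adjust.
def low_volume_candidates(path, low=10, high=100):
    candidates = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            try:
                volume = int(row["Avg. monthly searches"].replace(",", ""))
            except (KeyError, ValueError):
                continue  # skip rows without a usable search-volume number
            if low <= volume <= high:
                candidates.append((row["Keyword"], volume))
    return sorted(candidates, key=lambda pair: pair[1])

if __name__ == "__main__":
    for keyword, volume in low_volume_candidates("keyword_ideas.csv"):
        print(f"{volume:>5}  {keyword}")
```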
3. Find the existing top performers for my keywords
Fortunately, the next step of the process is pretty easy. I’ll simply type the keywords I’ve recorded into Google and record the top 5–10 URLs that come up in the results. I’ll use them later.
4. Provide context on your company and products to ChatGPT
For AI to be able to create high-quality, personalized content, it needs a lot of context — context about your business, your products, your goals and objectives, your customers, and what you’re trying to do, not to mention context about the problems your customers are facing.
If you ask AI to write you an article about a topic without giving it context, it’s going to come up with a lot of generic garbage that isn’t useful to anyone. But with a lot of guidance and additional information, it will be able to create a highly targeted, personalized article that focuses on your customers and your solution to their problems.
That’s why the next step in my process is to provide ChatGPT with the context it needs about Locusive, our products, our offerings, and our journey thus far. Because this context is written down and becomes part of the message history, ChatGPT will draw on it when generating the outline for our article.
Below, I’ve provided the exact prompt I give to ChatGPT to teach it more about Locusive. You’ll need to create your own prompt for your company, but you can use the below as an example (note: this is a lot of text, which is why I paste it as a separate message into ChatGPT rather than using its “custom instructions” field):
The prompt I give to ChatGPT to teach it about Locusive:
I’d like you to help me create a new long-form, SEO-optimized article. However, in order to create that article, you’re going to need to know some things about my company. Below, I’ve provided context for you to learn about my company, and later I’ll give you details about the article I’d like to create.
=== Details about my company ===
I’ve started a new startup called Locusive. Over the long run, my goal is to create a billion-dollar+ company that changes the way businesses and individuals interact with software by providing an AI-assistant that automates virtually all aspects of their digital lives.
But it’s still early and I’m heavily focused on finding problem-solution fit and product-market fit right now (because in the past I tried to start a company by just using “cool new technology” to build some interesting new things and it was really hard to sell that). Over the past year, I’ve done a lot of work to try to identify a problem area that enough businesses are having that it makes sense for me to dedicate my entire life to solving. My goal is to find a big problem that people are willing to pay to solve using a custom AI-solution that I’ve built, while also having it be something I’m excited and passionate to work on.
In order to try to find a big enough problem to solve, I’ve done several different things. First, I started by doing paid software consulting, which helped me identify an area of focus, and landed me in the high-level area of helping companies connect their internal data to large language models.
As I started to get into this area, I also built out two initial products to get myself familiar with the core technologies:
# 1. Chatbot for Slack
We’ve created a chatbot for Slack that allows organizations and their employees to connect data sources from the apps they already use (like Google Drive, file uploads, the Salesforce API, etc.) and responds to their requests using a retrieval augmented generation (RAG) framework, or by using an autonomous agent model to retrieve information on demand (if, for example, they’re asking about something in Salesforce that couldn’t be indexed into a vector database ahead of time).
## Key features
Here are the key features of the chatbot for Slack:
- Chat interface: Offers a natural way to interact with data
- Time-saving: Reduces time spent searching for information
- Team collaboration: Enable the entire team to access and query company data
- Increased automation: Automates tasks by interpreting user requests and fetching the required information
- Higher efficiency: Acts as a knowledgeable team member with comprehensive data awareness
- Centralized access: Provides a single interface for accessing all company data
- Slack integration: Seamlessly integrates with Slack, avoiding the need for additional apps
- Integrations: Compatible with commonly used apps and data sources, including the Salesforce API, PDFs in Google Drive folders, Google Sheets, Notion, and the ability to upload PDFs, .docx, and .txt files from the website
- Search or ask: Allows users to find documents or get trusted answers directly
- Trusted answers: Ensures reliability by showing sources for answers and admitting when it cannot provide an answer by using the user’s trusted knowledge base
## Customer success story
I have a recorded testimonial from a customer, John Mantia, the CEO of PARCO, a financial services firm that helps government employees plan for their retirement, who used the Slackbot with his team to help them work more efficiently. Here’s a transcription of the testimonial:
— — Start of John Mantia’s testimonial transcription — —
Transcript:
(00:01) I’m John Mantia, I’m the co-founder and CEO of PARCO and we do retirement and financial planning for government employees. Our customers all work for the federal government — and so the federal government’s a huge organization it has millions and millions of people that work for it and there’s hundreds of thousands of different documents that relate to our clients employment and the rules that go into their different employment planning processes so having all that stored in our minds is really inefficient, and what Locusive
(00:42) allows us to do is upload and have all that data and create a library allowing us to access all these different source documents in real time. It’s great for when one of our teammates, especially a new person, comes on and let’s say they’re working with a client that asks a question that you know an individual on our team doesn’t know but we know as a hive mind — we’re able to ask that question in the slackbot and quickly get an answer that Locusive produces, so it’s contextual but then importantly it also
(01:16) provides the source document so that way if the client or if our teammate needs that document to do more research they actually have that handy. [Music] The labor and the efforts of our team to do this for our teammates, so if a teammate was on a sales call and was talking to a client working with a client and they didn’t know the answer to a question they would message our team-wide thread and it was all hands on deck to answer that question for a teammate but that required a lot of effort from our humans our team Locusive
(01:52) was almost like having another super teammate on our on our side who knows all the things that we’ve uploaded, so it’s made it a lot faster for us to get that information and saves us an awful lot of time. When we started PARCO I was thinking man we’re going to have to have a team of you know 15 to 20 researchers just given the amount of customers in the scale we’re trying to build, and I think with Locusive we’re not going to need to — we’re going to totally rethink that so it’s making us a whole lot more
(02:18) efficient and a whole lot more consistent. [Music] Locusive has been really responsive to our feedback and what’s nice too is it seems like they’re really trying to understand the business that we have and the problem that we have. I think what’s always been challenging about working you on projects with technology companies is that they sometimes don’t give you that extra layer of polish and Locusive’s team does and that that’s been really important because you know just as important as it
(02:48) is for us to know what we want, whoever we’re working with is helping them to understand what we want so they can build it a big part of our philosophy here and values of PARCO’s we want to always give a consistent experience for our customers regardless of who they’re working with or regardless of you know the people that are supporting that customer on the back end, and these LLMs are really going to allow us to do that. They can’t do your business for you but you can give it the rules and and logic to help you do your
(03:21) business faster in the way you’d want it to um I don’t think it’s going to mean we’re gonna need to be able to be a different type of team so we’re going to use Locusive a lot
— — End of John Mantia’s testimonial transcription — —
# 2. Data and chat management API
After creating the chatbot for Slack, it was easy to create a public-facing API that allows users to store (index) their data, search for the most relevant document snippets, and chat with their data all using our own infrastructure, so that they don’t have to maintain their own vector database, worry about their embeddings, or create jobs to update their data periodically.
The API allows customers to create their own chatbots without having to use our chatbot for Slack. But it also gives them the benefits of a vector database without having to maintain their own vector databases, and this is something a lot of my customers are using us to do. For example, one customer, GrantExec, stores several thousand grants (federal, state, or private funding opportunities) in our system and uses our API to find the best 30–40 grants for every new customer that signs up to their system by querying our API with that customer’s profile. Another customer stores hundreds of thousands of customer audiences (segments) in our system and uses our API to find the best audiences for any new customer that signs up by querying our API with the contents of that customer’s website.
## Recorded testimonial
I have a recorded testimonial from the owner of GrantExec, Ryan Alcorn, on how he uses our API. Here’s the testimonial below:
— — Start of GrantExec’s testimonial — —
Speaker: Ryan Alcorn, Founder & CEO of GrantExec
Ryan: Hi there. My name is Ryan Alcorn. I’m the founder and CEO of GrantExec. We’re a startup dedicated to streamlining the grant seeking process and eliminating the need to search what we do is we centralize the universe of live grant opportunities. And we’ve developed a matching algorithm to help nonprofits, small businesses researchers, local governments and schools identify opportunities faster. Now, before working with Locusive, we struggled to find the best way to match opportunities to client profiles at speed. We were particularly interested in a generative A I solution given the data that we have on our client profiles and our grant opportunities. We experimented with a little bit of generative A I on our own, but we were limited by our technical capacity and deep knowledge in the field. So we were very excited to partner with Locusive to change that.
How did Locusive help you serve your clients better?
Locusive helped us improve the process of matching grant opportunities for our clients by taking the code that I had initially written, which was pulled together with glue and tape and by no means professional and transformed and optimized that process that, that script into something much more compelling, much faster and, and much more accurate. We started this, this process just a couple of months ago and with Locusive’s help, our process has now been cut down by at least 80, 85%. So we’re incredibly happy with the results and the accuracy and professionalism with which they were delivered.
How did you use the Locusive chatbot for Slack?
Locusive’s Slack bot is particularly impressive because when we were able to upload our live data set to the Slack bot, we were able to query it in a conversational manner, just like we would with ChatGPT. But we were able to identify opportunities just about in real time. Uh So that was an incredibly exciting development one that we had not even initially anticipated delivering to our clients. But because of this new opportunity that Locusive’s Slack bot trained on our data has presented, we’re now exploring new opportunities to provide value to our clients and potentially build out a larger user user interface with this exciting new technology.
What was your experience like working with the team at Locusive?
Ryan: My experience working with the Locusive of team was incredible. Both of these guys that I interacted with were excellent. I can’t say enough good things about them. Also, I would definitely draw folks attention to their customer service, their willingness to develop custom solutions and keep in close communication throughout the project. Whenever there was a question that I had or something I was uncertain about or even wanted to check in, I would always hear back nearly instantaneously from one of the project managers and I’m incredibly glad to have that kind of relationship with them. Um I was a new client when we came in and we intend to do much more work with them because of our relationship that we’ve built and the way they’ve treated us as a client.
Would you work with Locusive again?
Ryan: I intend to work with Locusive again and I’ve already recommended others to do the same. I look forward to a long term fruitful partnership with their team.
— — End of GrantExec’s testimonial — —
In creating these products, I was able to get familiar with the nuances of working with RAG-based applications, large language models, and autonomous agents. I also started marketing these products to the world. The usage data I get from them allows me to get quantitative metrics on what people are doing with their data, and it also allows me to set up interviews, demos, and sales calls with businesses who are interested in using AI to solve some core problems, allowing me to gather additional qualitative data on areas I should be focusing on.
As I was creating these products, I doubled down on customer research, and identified 9 different hypotheses I wanted to test about what problems different customers were experiencing. I did lots of customer discovery calls and identified one key area in the world of SaaS apps. Rather than describe the final hypothesis that we landed on, I’ve pasted a social media post below around this topic, and you can use that to gain an understanding of the problem we’re trying to solve:
— — Social media post below — —
CSMs, CEOs, Heads Of Customer Success — I’m looking for your help.
We’re building a new AI assistant at Locusive whose goal is to reduce the amount of time CSMs spend on tedious, low-value tasks, while also giving customers a faster, more delightful experience when they need support.
We’ve had dozens of conversations with folks like yourself who have exclaimed how much time they spend on low-value tasks like pulling custom reports for customers, helping them find answers within the existing app you offer, answering hundreds of support questions while still having thousands in the backlog, and more.
You may want to spend more time talking with customers but you can’t do so because you’re crunching numbers in Excel or sending the same email twenty times with slightly different messaging.
These issues prevent you from spending time on more strategic tasks like driving renewals or upsells, researching new prospects, and generating revenue.
You can’t ignore these requests from customers, because you need to provide good service and make sure they’re feeling heard, but until now, you haven’t been able to automate away a lot of these tasks because they still required some amount of human intervention to complete properly.
We’re in a world where AI can handle a lot of the tedious and repetitive tasks that you’ve been forced to do so far, and it can do so in a way that gets customers their answers faster and at all hours of the day, something humans haven’t been able to match.
If this sounds interesting to you, then we’d love to chat with you about becoming a customer development partner or an early beta tester for Locusive’s new Concierge product — a system designed to let customers self-serve while you get more of your time back.
We’re in the earliest days of building this product, utilizing a lot of the learnings and techniques we’ve picked up from our chatbot and API products, and we’re 100% focused on building the best, most valuable product we can.
We’re looking for folks like yourself to provide feedback, help us understand common requests, plug in your data sources, and try out our product as we build it. Our earliest partners will get heavily discounted access to the product while also being able to influence its direction.
If you’re interested in chatting more, contact us on our website, or sign up for our beta list at https://www.locusive.com/waitlist.
— — End of social media post — —
From here, you can see that we’re attempting to create a new type of embedded app assistant, something that sits in a SaaS app and allows users to use it for the same types of requests that they would have normally messaged their human customer success manager for (e.g. pulling one-off reports, gathering data, explaining why the dashboard says X). The end result should be that users get responses and data faster, and CSMs are freed up for more strategic work, which should lead to more renewals for the company.
We’re currently in the very early stages of building this assistant, which, as you can see, we’re calling the Concierge. In order to make sure we’re building the right thing, we’re currently trying to put together a customer development partner program that consists of 5 early companies who will help us by providing the top types of requests they get, access to the data sources they use to solve those requests, early testing, design feedback, and more. These customers are paying a small refundable deposit up front and committing to a monthly fee at a large discount to the normal price (once we have built the product to their liking).
=== That’s all you need to know about the company for now ===
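I do all of this by pasting the prompt into the ChatGPT interface, but if you’d rather script this step, here’s a minimal sketch using the OpenAI Python SDK. The model name and file name are placeholders, and this scripted path won’t replicate the WebPilot browsing I describe in the next step.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The full "Details about my company" prompt above, saved to a local text file.
company_context = open("locusive_context.txt", encoding="utf-8").read()

# Seed the conversation with the company context; later prompts (outline
# extraction, outline creation) get appended to this same message list.
messages = [{"role": "user", "content": company_context}]

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever GPT-4-class model you have access to
    messages=messages,
)
messages.append({"role": "assistant", "content": response.choices[0].message.content})
print(response.choices[0].message.content)  # usually just an acknowledgement
```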
5. Use ChatGPT to extract the structure of the top-ranked articles for the keywords I’m targeting
The next step is to create a high-level outline of the article I’ll be writing. This approach works better than just having a large language model create an article from scratch, as it provides structure, allows me to work iteratively with the AI to write each section (otherwise it would go off the rails), and allows me to add a human layer of supervision more easily than if I didn’t have an outline to work from.
Fortunately, creating an optimal outline at this point is simply a matter of taking what we’ve already done and using it to prompt ChatGPT correctly. At this stage, I provide the following prompt to ChatGPT (note, this is where we’ll have to manually replace a few things in the prompt, primarily, the keyword and the URLs of competing articles).
In order for this to work, I use the WebPilot plugin with ChatGPT, which allows it to access the web, and more specifically, the competitor URLs.
The prompt I give to ChatGPT to have it pull information from competing articles:
You are an SEO expert that will help me create the outline for a long-form article focused on the keyword “<insert keyword here>”. We’re going to take an iterative approach, where we’ll first use the current top-performing results for this keyword to identify an optimal outline and areas of focus for our own article. Next, using those outlines and the information you have about my company, we’ll create an SEO-optimized outline for a new article that’s specific to my company.
To begin, please pull the H1, H2, H3 from the following URLs, include all keywords, LSI keywords, FAQ, table of contents from these pages, write your results after accessing each page:
- <url 1>
- <url 2>
- <url 3>
- <url 4>
- <url 5>
At this point, ChatGPT, using the WebPilot plugin, will go browse the competing articles you gave it and will list out the key headings and FAQs that it finds from those articles. This is useful for the next step, as it allows ChatGPT to come up with an “optimal” outline for our own article.
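If you don’t have access to a browsing plugin like WebPilot, a rough local alternative is to scrape the headings yourself and paste them into the chat. Here’s a sketch, assuming the competitor pages are publicly accessible and don’t block simple scrapers:

```python
import requests
from bs4 import BeautifulSoup

# Placeholders for the top-ranking URLs recorded in step 3.
COMPETITOR_URLS = [
    "https://example.com/competitor-article-1",
    "https://example.com/competitor-article-2",
]

def extract_headings(url):
    """Pull the H1/H2/H3 structure from a competitor article."""
    html = requests.get(url, timeout=15, headers={"User-Agent": "Mozilla/5.0"}).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        (tag.name.upper(), tag.get_text(strip=True))
        for tag in soup.find_all(["h1", "h2", "h3"])
    ]

for url in COMPETITOR_URLS:
    print(f"\n=== {url} ===")
    for level, text in extract_headings(url):
        print(f"{level}: {text}")
```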
6. Use ChatGPT to create an optimized outline for my article
At this point, ChatGPT now knows about the top performing articles on Google and has background information on my company, which means it’s ready to start creating the outline for our article.
Getting it to do so requires a simple prompt (keep track of the placeholders in the prompt below; you’ll need to replace them with the items you found above):
Let’s now start to work on an SEO optimized blog post. Using the outlines that you found above from my competitors, along with all the information you have about my company, I’d like you to start by writing an in-depth, comprehensive SEO outline for the keyword “<insert keyword here>”, but one that’s optimized for my company, Locusive.
This will be the basis for a long-form article that should persuade people to try my product, so be sure to include details and short descriptions on what should be included in each section. Focus on creating an in-depth blog post outline with every single question or topic a person would have for this topic. The article should be targeted towards non-technical business executives who want to use an existing, off-the-shelf product to boost their growth.
Keep in mind that we are trying to rank for the target keyword, so it is important for us to include variations of the keyword in the H1, H2, H3, and throughout the article. Include all LSI keywords, a table of contents, frequently asked questions, and ideas for tables, charts, graphs, or statistics that would make this the top ranking post for the keyword. Ensure you also target “<other keyword 1>” and “<other keyword 2>” as related keywords and include those in the outline as well.
7. Use Claude to iteratively generate each section of the article
At this point, we’ve done a lot of work with ChatGPT to get us to a point where we can begin writing our article, but unfortunately, I’ve found that ChatGPT itself isn’t great at producing natural-sounding, persuasive text in the same tone of voice and style as most humans would produce.
Fortunately, we can use Claude 2, a competing large language model from Anthropic, to handle this task. I’ve found Claude 2 to be highly effective at producing high-quality, educational, conversational content.
Copy and paste the information about Locusive into Claude so it also has the background it needs
Unfortunately, since I’m starting a brand new thread with a new LLM, I have to copy and paste a lot of the information that I gathered from the previous step into Claude.
The first step of this process is to copy and paste the entire context about Locusive into Claude, generally with the following context to begin:
I’d like you to help me create a new long-form, SEO-optimized article. However, in order to create that article, you’re going to need to know some things about my company. Below, I’ve provided context for you to learn about my company, and later I’ll give you details about the article I’d like to create, along with instructions on how to write the article.
<rest of the context about Locusive goes here>
Please process the above and let me know when you’re ready for me to provide you with an outline that targets the primary keyword “<primary keyword here>” and also the secondary keywords “<other keyword 1>” and “<other keyword 2>”.
Copy and paste the outline that ChatGPT created
This step is simple — once Claude is ready, copy and paste the outline from ChatGPT into Claude.
Provide Claude with a writing sample for it to mimic
At this point, I’m almost ready to have Claude start writing some content for me, but I like to provide it with some final instructions to ensure it sounds more human than it would otherwise.
In order to do so, the first thing I do is provide it with a sample piece of writing from another one of my articles that converts well, and then ask it to analyze that sample and understand its writing style so it can mimic it later.
Here’s the prompt I give it:
We’ll now begin to create the content of this article. But before we jump in, it’s important for you to write using the same tone of voice, style, and format as the articles that are already producing conversions on my site. In order to do so, I’ll provide you with a series of instructions that you must follow. First, I’ve provided a small sample of a high-performing article from my site below; you must analyze it and then identify all notable attributes of the writing style so you can replicate that style going forward.
— — Sample of high-performing article is below — -
## LLMs Create New Opportunities For Businesses
The release of ChatGPT has demonstrated the immense potential of large language models (LLMs) to help businesses increase productivity. Companies can now develop better customer support tools, create better internal search engines, answer common questions automatically, create ETL jobs that incorporate smart and automated decision-making, decrease research time, and much more, all with the help of LLMs. But in order to enable these business-specific use cases, LLMs require access to proprietary company data — documents, conversations, databases, product docs, and other sources of information that can help provide accurate, trustworthy information to handle user queries and requests.
This is where vector databases come in. These specialized systems can organize your textual data so that it’s easy to retrieve relevant contextual information for a given topic. They can find paragraphs in documents, rows in a Google Sheet, or files in a folder that are highly likely to contain the information needed by an LLM to answer a user’s question without making up (or hallucinating) an answer. By using vector databases efficiently, you can ensure that any answer your users get from your chatbots is trustworthy and true to your company’s policies and offerings. Unfortunately, while vector databases are crucial for operating A.I. chatbots that can accurately answer questions from your team or your customers, they’re not as easy to run and operate as you might think.
### The Hidden Headaches of Vector Databases
On the surface, vector databases seem like a clean solution for easily creating a custom chatbot for your company’s data. But the reality is that using a vector database isn’t as easy as calling an API when you need to answer a question. They require a significant investment in time, personnel, and infrastructure to maintain and use in a chat-enabled application.
### Data Integration Requires Constant Engineering Work
To function properly, vector databases need access to source content like Google Drive docs and internal databases. But they lack native connectors or pipelines, which means that engineering teams need to build custom scrapers or ETL jobs that can transfer data to them in a properly structured format and on a continuous basis. As new sources are added, more dev work is needed to continually integrate new types of data and data sources.
### What does continuous engineering work look like?
For example, let’s say that your company uses data from a Google Sheets file as the source of trusted information for a customer support chatbot. But your CSM team has started to keep track of FAQs in PDFs stored in Google Drive, and you now need to make sure your chatbot has access to that content to properly respond to your users. Even if your company was using a vector database to store data from Google Sheets, you’ll need to write and maintain new software to access data from content in Google Drive.
You’ll first need to create the ability for your chatbot to read in data from saved PDFs, which usually means implementing a new OAuth mechanism and reading in data from a new API. Then you’ll need to create text embeddings from your content and index all of those embeddings into your vector database using its native API — and since many vector databases don’t let you query for a list of the documents or document snippets that you’ve added to your index, you’ll also need to build an entirely separate database just to keep track of which files you’ve actually added.
Finally, you’ll need to ensure that you keep your index up to date, adding new content any time your source data changes, and removing old content whenever files are removed or changed. This constant updating of information means you’ll need to implement scheduling infrastructure or webhooks that can track changes to your data and update your vector database’s index appropriately. Because if you don’t update your indexes, your vectors will become stale as your company’s documents evolve, which means your users will get old or outdated information when they ask their questions.
### Security and Access Controls Are Not Included
Vector databases also operate at the raw text layer without awareness of users or permissions. To support data access controls, you’ll need to architect custom identity and authorization systems leveraging OAuth and SAML, and tightly integrate them with vector indexes. This complex task risks exposing sensitive information if not properly implemented.
This means that your engineering team now needs to incorporate existing permissions infrastructure, or create entirely new systems to track and manage who should have access to query and update your vector database.
### Beefy Infrastructure And Ongoing Optimization
Running vector databases can also require provisioning sizable hardware clusters for indexing your content. And if you don’t want to run your own hardware, you’ll need to pay for a third party tool, but because most vector database providers still expect you to manage your index’s size and fullness, you’ll still need to monitor and optimize usage in your paid vector database service.
[Screenshot of an image of the Pinecone dashboard]
Image caption: Pinecone, one of the leading vector database providers, still requires you to manage your indexes and infrastructure
### No Turnkey LLM Integrations
Finally, vector databases just return text snippets. Additional infrastructure must be built to orchestrate interactions with downstream LLMs. So if you’re building your own A.I. chatbot for your company, you’ll not only need to integrate a vector database, but you’ll also need to write the software that orchestrates data from your vector database with your LLM.
So while vector databases play a crucial role in ensuring your company can create a chatbot on its own data, you’ll generally need to pour a lot of engineering resources into doing so, unless you go with an alternative solution that lets you combine the power of vector databases with the automation capabilities of a system that can automatically connect into your data, refresh it, maintain permissions, and integrate with an LLM.
### APIs That Act As Vector Databases
While vector databases provide important functionality for integrating business data with ChatGPT, managing them involves a lot of annoying heavy lifting. What if you could get all the benefits of vector search without the headaches? That’s where purpose-built APIs come in. Leading API platforms are emerging that handle the messy details of connecting to data sources, managing indexing and infrastructure, and orchestrating LLM queries on your behalf. These APIs act as “vector databases on steroids” by automating the challenging aspects of working with vector databases while also delivering greater functionality. Here are a couple of ways that purpose-built APIs can help:
— — End of sample — -
Provide Claude with final writing instructions
Next, I’ll provide Claude with a set of instructions that it needs to keep in mind as it’s writing out the content of the article:
Here’s the prompt I give it:
Here are your final instructions to keep in mind before we begin:
1. Follow the writing style you just analyzed
2. Limit adjectives and adverbs: Minimize the use of adjectives and adverbs. This can make the writing more concise and to-the-point.
3. Avoid jargon and complex words: Use simple, everyday language instead of technical jargon or complex vocabulary, unless it’s necessary for the topic.
4. Use a few analogies and metaphors: Encourage the use of analogies and metaphors to explain complex ideas. This makes the content more relatable and easier to understand.
5. Vary sentence length: Use a mix of short and long sentences to keep the rhythm of the writing engaging and easy to read.
6. Use first or second person narrative: Use “I,” “we,” or “you” to create a conversational tone.
7. Limit passive voice: Minimize passive voice constructions in favor of active voice, making the content more direct and lively.
8. Avoid repetition: Avoid repeating the same points or phrases, ensuring each part of the content is unique and adds value.
9. Incorporate storytelling elements: Use storytelling techniques, like setting up a problem and resolving it, to make the content more engaging.
10. Use specific examples and case studies: Include real-life examples, case studies, or anecdotes to illustrate points more vividly, but only when relevant and do not make these up, if you don’t have anything relevant, don’t use them.
11. Balance SEO and readability: While SEO is important, the primary focus should be on readability and providing value to the reader.
12. Avoid overuse of keywords: Integrate keywords naturally, avoiding forced or excessive use that could disrupt the flow of the content.
13. Incorporate questions: When relevant or possible, use rhetorical or direct questions to engage the reader and make the content more interactive.
14. Incorporate light humor when possible
15. Use an educational and friendly tone when possible
16. Do not ramble
17. Include aesthetically pleasing placeholders that provide additional information (like screenshots), highlight key pieces of information (like blockquotes), provide statistics or context (like infographics), and similar such elements. Place all such placeholders in [bold square brackets, ensuring you provide a description of why you want that placeholder and what it should be]
Use Claude to iteratively write each section of the article using the outline
Finally, we can now get to the meat and potatoes of the whole task — creating the content. I’ve found that when I ask Claude to produce the content of each section one by one, it produces higher-quality content, and I’m better able to manage it.
At this point, I then ask it to create each section using a prompt similar to the below, iteratively copying and pasting it into Webflow, where I host my website and blog:
Now please create me the content of the introduction, as a reminder, the outline for the introduction is below:
“<insert outline for section here>”
Remember to use placeholders for visuals (when applicable) in [bold square brackets, ensuring you provide a description of why you want that placeholder and what it should be]
At this point, I’ll read every section that Claude produces, frequently providing it with feedback and asking it to make tweaks or clarify certain things, but eventually I’ll get through the final section and will have a complete article optimized with the keywords I want to target.
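For reference, this same section-by-section loop can be approximated against the Anthropic API rather than the Claude web interface. This is only a sketch of the idea, not my actual workflow; the model name and file names are placeholders, and every section still needs a human review pass before it goes into Webflow.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The setup prompts from the steps above (company context, the ChatGPT outline,
# the writing sample, and the final writing instructions), saved to a file.
history = [
    {"role": "user", "content": open("claude_setup_prompt.txt", encoding="utf-8").read()},
    {"role": "assistant", "content": "Understood. Ready for the first section."},
]

# One outline snippet per section, copied from the ChatGPT outline.
section_outlines = [
    "<outline for the introduction>",
    "<outline for section 1>",
]

article_sections = []
for outline in section_outlines:
    history.append({
        "role": "user",
        "content": "Now please create the content for the next section. "
                   "As a reminder, the outline for this section is below:\n\n" + outline,
    })
    response = client.messages.create(
        model="claude-2.1",  # placeholder; use whichever Claude model you have access to
        max_tokens=2000,
        messages=history,
    )
    section_text = response.content[0].text
    history.append({"role": "assistant", "content": section_text})
    article_sections.append(section_text)  # review and edit each section by hand

print("\n\n".join(article_sections))
```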
8. Use Claude to create an optimized title and description
Whew!
Fortunately all the hard work is done, and we can now get through a few checklist items and then publish our article.
The first thing I like to do is to have Claude create a new, optimized title for me, along with an optimized description of the article.
To get it to do so, I provide it the following prompt:
We’ve now written the entire article.
Here it is from top to bottom: “<insert article here>”
Using this, please now create an SEO-optimized title and a 2-sentence meta description.
9. Create a hero image using Claude and ChatGPT + DALL-E
At this point, we’re ready to get a hero image. In the past, I used to have to browse all sorts of royalty-free image sites, looking for something remotely relevant to the topic I just wrote about. This was frequently an effort in futility.
Now, I use AI to generate a targeted image based on the content I just produced.
First, I’ll ask Claude to provide me with a prompt that I can then feed into ChatGPT + DALL-E. The prompt for this can be extremely simple:
Now please provide a prompt that I can put into DALL-E to create a good hero image for this article.
I’ll take the resulting prompt that it provides me and paste that into DALL-E, at which point I might have DALL-E provide a few different images for me until I find something that I like. From there, I can take the image and paste it into the article as the hero image.
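If you’d rather skip the ChatGPT + DALL-E interface, the same step can be scripted with the OpenAI images API. Here’s a minimal sketch; the model name and size are the standard DALL-E 3 options at the time of writing, so treat them as placeholders:

```python
from openai import OpenAI

client = OpenAI()

# The image prompt that Claude produced in the previous step, saved to a file.
image_prompt = open("hero_image_prompt.txt", encoding="utf-8").read()

result = client.images.generate(
    model="dall-e-3",
    prompt=image_prompt,
    size="1792x1024",  # a wide aspect ratio works well for a blog hero image
    n=1,
)
print(result.data[0].url)  # download this URL and add it to the article as the hero image
```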
10. Publish the article on the website, Medium, Substack, and LinkedIn.
The last step is to publish the article I just wrote. This is easy and straightforward, but there are a few nuances to keep in mind. First, it’s important to choose the right distribution channels. Obviously you’re going to want to publish on your own website, but I’ve found that publishing on LinkedIn and Medium is also effective at generating conversions. Medium even lets you set the canonical URL for an article, which tells search engines that the version on your website is the original and helps drive users there.
And that’s it. If you’ve gotten this far, my guess is that you’re dedicated to implementing a high-quality, high-converting SEO strategy that can help your business grow.
The ten-step process I’ve outlined above, while it may seem long, is one of the most efficient processes I’ve found to generate high-performing content in a short amount of time.
It does take a little bit of work to ensure you’re getting something relevant and personalized for your users, but using this process, you can now generate lots of high-quality content in a fraction of the time that you did before.
If this was all too much to read for you at once, you can always visit the SEO Playbook that I’ve created for myself on Notion, which is easier to use when you’re looking to process each step one by one.
And I’d be remiss if I didn’t say that if you’ve got a SaaS application or website that’s taking up a lot of time for your customer success team to handle customer requests, we’re building out an AI-assistant called the Locusive Concierge to help your customers self-service and free up your CSMs for more strategic tasks — get in touch if that’s something that you’d like to learn more about!