saas.unbound is a podcast for and about founders who are working on scaling inspiring products that people love, brought to you by https://saas.group/, a serial acquirer of B2B SaaS companies.

In episode #16 of season 6, Anna Nadeina talks with Leo Goldfarb, who co-founded Albato — a 100% bootstrapped automation and integration platform with 200,000 users and 80 people — after years at Microsoft, IBM, HP, and Booking.com. This episode gets into what it actually takes to build an AI-first product when you’re competing with Zapier and nobody on your team has formal AI experience.

The Wrong Way to Bring AI Into Your SaaS

Adding AI to a SaaS product sounds straightforward. Pick a model, add a chat interface, and ship a new feature. In practice, that approach often fails.

The biggest mistake is treating AI like a shortcut that can cover product gaps, messy APIs, weak onboarding, or unclear user workflows. It cannot. AI tends to amplify whatever foundation it sits on. If the product is structured well, AI can improve activation, automation, and productivity. If the foundation is weak, AI usually makes the experience more confusing, expensive, and unreliable.

This guide explains how to bring AI into a SaaS product the right way, what usually goes wrong, how to measure ROI, and how to avoid common traps when building AI features inside an existing platform.

What “bringing AI into your SaaS” actually means

For most SaaS companies, AI adoption falls into two broad categories:

  • AI inside the product, such as copilots, natural language workflow builders, assistants, or agentic automation features.
  • AI inside the company, such as internal agents for content, support, operations, research, or repetitive team workflows.

Both can create value. Both can also waste time if the goal is vague.

A practical AI initiative starts with a specific problem, such as:

  • Users drop off because setup is too complex
  • Customers need help translating plain-language requests into system actions
  • Teams spend too much time on repetitive internal processes
  • Manual work slows delivery, support, or onboarding

That is very different from adding AI because competitors are doing it.

Why many AI projects fail in SaaS

A major reason AI projects fail is that teams measure the wrong thing.

It is easy to celebrate activity metrics like:

  • How many people used an AI feature
  • How many prompts were sent
  • How many tokens were consumed
  • How many internal teams experimented with AI tools

Those are not business outcomes.

If AI is worth building, it should improve a meaningful metric such as:

  • Activation rate
  • Paid conversion
  • Customer adoption
  • Retention
  • Time saved
  • Headcount efficiency
  • Revenue from customers who need the feature

Without that connection, teams can spend heavily on AI and still have no clear return.

The biggest mistake: using AI to patch a broken product

If your UX is confusing, fix the UX. If your API is hard to work with, fix the API. If your onboarding is unclear, fix the onboarding.

Do not expect AI to magically smooth over structural problems.

This matters even more with large language models because they are probabilistic. They work best when the environment around them is structured, predictable, and well-defined. When the underlying product is messy, the model has to interpret chaos, and the output becomes less reliable.

A simple rule helps here:

Improve the foundation before adding intelligence on top of it.

What this looks like in practice

Imagine a SaaS company building an AI copilot meant to translate user requests into actions through the platform’s API. On paper, it sounds perfect.

But if the API was originally designed for a different use case and returns oversized or badly structured responses, the AI layer may struggle to process them efficiently. The result can be crashes, poor outputs, or inconsistent behavior.

In that situation, the real work is not prompt tuning. It is redesigning the API so the AI system can operate reliably.
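One simple version of that redesign is projecting bulky legacy responses down to the few fields the AI layer actually needs before they ever reach the model. A minimal sketch, with entirely hypothetical field names:

```python
# Sketch: shrink a legacy API response to a small, predictable payload
# before passing it to an AI layer. Field names are illustrative only.

def project(record: dict, fields: list[str]) -> dict:
    """Keep only whitelisted fields so the model sees a compact,
    well-defined object instead of the full legacy response."""
    return {k: record[k] for k in fields if k in record}

legacy_response = {
    "id": 42,
    "name": "Invoice sync",
    "status": "active",
    "audit_log": ["..."] * 500,          # bulk the model never needs
    "internal_flags": {"legacy": True},  # internal detail, not intent
}

slim = project(legacy_response, ["id", "name", "status"])
print(slim)  # {'id': 42, 'name': 'Invoice sync', 'status': 'active'}
```

The point is not the helper itself but the contract it enforces: the AI layer consumes a stable, minimal shape, so model behavior stops depending on how messy the legacy response happens to be.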

That is why “AI-first” does not mean “skip the infrastructure.” It often means the opposite.

How to know if your SaaS is ready for AI

Before building an AI feature, ask these questions:

  • Is the user problem clearly defined?
  • Do we know where users get stuck today?
  • Can we map plain-language intent to structured actions?
  • Are our APIs or internal systems clean enough for an AI layer to use?
  • Do we have a measurable success metric?
  • Will this reduce friction or just add novelty?

If several of those answers are unclear, the product may not be ready yet.

AI is especially useful when no-code still feels too technical

Many no-code tools promise accessibility, but they still require users to understand concepts like triggers, actions, workflows, schemas, and data mapping.

That creates a gap:

  • The product is technically no-code
  • But it still demands a technical mindset

This is where AI can genuinely improve the experience.

Instead of forcing users to build from scratch inside a workflow editor, an AI copilot can let them describe what they want in plain language. The system can then translate that request into structured automation logic behind the scenes.

For SaaS products in automation, integration, or workflow-heavy categories, this is one of the most promising AI use cases: reducing setup friction for non-technical users.
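Under the hood, “translate plain language into automation logic” usually means the copilot emits a structured workflow that the platform validates before creating anything. A minimal sketch of that target format and validation step — the trigger and action names below are invented for illustration, not any real product’s catalog:

```python
# Hypothetical catalog of triggers and actions the platform supports.
KNOWN_TRIGGERS = {"new_form_submission", "new_crm_contact"}
KNOWN_ACTIONS = {"send_slack_message", "create_crm_deal", "add_sheet_row"}

def validate_workflow(workflow: dict) -> list[str]:
    """Return a list of problems; an empty list means the workflow
    maps cleanly onto actions the platform can actually execute."""
    problems = []
    if workflow.get("trigger") not in KNOWN_TRIGGERS:
        problems.append(f"unknown trigger: {workflow.get('trigger')!r}")
    for step in workflow.get("actions", []):
        if step.get("type") not in KNOWN_ACTIONS:
            problems.append(f"unknown action: {step.get('type')!r}")
    if not workflow.get("actions"):
        problems.append("workflow has no actions")
    return problems

# What a copilot might emit for: "When someone fills out my form,
# post it to Slack and add them to the CRM."
candidate = {
    "trigger": "new_form_submission",
    "actions": [
        {"type": "send_slack_message", "params": {"channel": "#leads"}},
        {"type": "create_crm_deal", "params": {"stage": "new"}},
    ],
}

print(validate_workflow(candidate))  # [] -> safe to create
```

Because the model’s output is checked against a known catalog rather than executed directly, a probabilistic translation step stays safe: bad generations are rejected with a concrete reason instead of producing a broken automation.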

How to choose the right AI use case

Not every AI idea deserves to be built. The best candidates usually meet three conditions:

  • They remove a major bottleneck
  • They connect directly to a business metric
  • They fit the product’s real audience

Good AI use cases for SaaS

  • Helping new users complete setup faster
  • Turning plain language into structured workflows
  • Automating repetitive internal processes
  • Supporting customer-facing automation inside the product
  • Reducing manual work across sales, CRM, or operations

Weak AI use cases for SaaS

  • Adding chat just because competitors have chat
  • Masking poor UX with an assistant
  • Launching a feature with no success metric
  • Building for “everyone” instead of a clear ideal customer profile
  • Chasing hype without a delivery plan

Should you build a separate AI innovation team?

Many SaaS companies create a dedicated AI or innovation team. That can work, especially early on. But there is a tradeoff.

When AI experimentation sits in one isolated group, that team can become disconnected from:

  • Real customer pain points
  • Everyday team workflows
  • Practical implementation constraints
  • The product areas where AI could create immediate value

An alternative is a more decentralized model:

  • Encourage each team to identify useful AI applications
  • Let product, marketing, sales, and operations experiment
  • Keep engineering involved for code review, QA, and safe deployment
  • Share learnings across the organization

This approach can help a company become truly AI-first, rather than AI-centralized.

The key is balance. AI should be broadly adopted, but not loosely governed.

Why team mindset matters more than “AI experience”

Formal AI expertise is still relatively rare. In many cases, the better question is not “Who has years of AI experience?” but “Who is willing to learn fast, test fast, and adapt?”

When building AI products, teams often operate without a mature playbook. Models change quickly. Tools change quickly. Best practices change quickly. That makes attitude unusually important.

Look for these traits

  • Curiosity
  • Comfort with experimentation
  • Willingness to fail and iterate
  • Belief that AI can be useful when applied correctly
  • Practical focus on outcomes instead of hype

Be careful with excessive skepticism

Healthy skepticism is valuable. It prevents bad decisions. But deep resistance can slow an AI initiative to a crawl.

If a core contributor fundamentally does not believe AI should be used, the team may stall, overanalyze, or avoid experimentation altogether. That is especially risky in research-heavy AI work where progress depends on repeated testing and iteration.

The strongest teams are usually not blindly optimistic. They are simply willing to engage with the shift instead of rejecting it.

How to measure AI ROI in SaaS

If you want AI adoption to survive beyond experimentation, define success before launch.

A simple framework for measuring AI ROI

1. Start with the problem

What friction or inefficiency are you trying to remove?

2. Pick one primary metric

Choose the main number that should move if the AI feature works.

3. Track supporting metrics

These help explain why the primary metric moved or failed to move.

4. Include cost

Token usage, inference spend, and tool costs matter. AI savings are not real if model costs erase the gain.
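The cost step can be reduced to rough arithmetic. Every number below — hours saved, labor rate, token volume, blended model price, tooling spend — is an illustrative assumption, not a benchmark:

```python
# Rough monthly ROI arithmetic for an internal AI agent.
# All inputs are illustrative assumptions.

def monthly_ai_roi(hours_saved, hourly_cost, tokens_per_month,
                   price_per_million_tokens, tooling_cost):
    """Return (net value, value-to-cost ratio) for one month."""
    value = hours_saved * hourly_cost
    model_cost = tokens_per_month / 1_000_000 * price_per_million_tokens
    total_cost = model_cost + tooling_cost
    ratio = value / total_cost if total_cost else float("inf")
    return value - total_cost, ratio

net, ratio = monthly_ai_roi(
    hours_saved=120,                 # labor the agent replaces
    hourly_cost=40,                  # fully loaded hourly rate
    tokens_per_month=50_000_000,     # usage at scale
    price_per_million_tokens=3.0,    # blended model price
    tooling_cost=500,                # vendor and tool subscriptions
)
print(net, round(ratio, 2))  # 4150.0 7.38
```

Even a back-of-the-envelope calculation like this forces the comparison the framework asks for: if the ratio hovers near 1, the workflow is creating activity, not savings.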

Examples of useful AI metrics

  • Product AI: activation rate, trial-to-paid conversion, successful workflow creation, onboarding completion
  • Internal AI: hours saved, output quality, task throughput, reduced manual effort, fewer hires needed for repetitive work
  • Customer-facing automation: feature adoption, retention impact, expansion revenue, reduced support load

What not to use as your main KPI

  • Number of prompts
  • Number of people who opened the AI feature
  • Volume of model usage without outcome context
  • Internal excitement alone

Do not ignore the cost side of AI

AI can reduce manual work, but it can also create a new operating expense that grows fast.

That is why every AI workflow should be judged on both sides:

  • What value does it create?
  • What does it cost to run reliably?

For internal agents, that might mean comparing output against labor savings. For product features, it might mean comparing model costs against improved activation, revenue, or retention.

If an AI workflow is expensive and only produces marginal gains, it may not be worth scaling.

How to roll out AI across teams without creating chaos

A strong rollout is usually decentralized in ideas, but structured in execution.

A practical model

  • Let teams propose use cases based on their daily bottlenecks
  • Require a business metric for each AI experiment
  • Keep engineering oversight for security, QA, and deployment standards
  • Review costs regularly including token spend and tooling
  • Share what works so experiments do not stay trapped in silos

This keeps AI practical and aligned with company goals.

Common mistakes when adding AI to a SaaS product

1. Competing on AI alone

AI features are easier to copy than core strategic advantages. A better long-term position usually comes from solving a sharper problem for a clearer market segment.

2. Building for everyone

If your product serves both technical and non-technical users, AI may need to support very different jobs to be done. A vague audience leads to vague product decisions.

3. Treating no-code as universally easy

No-code often still assumes technical thinking. AI can help close that gap, but only if the product is designed for plain-language intent from the start.

4. Over-indexing on hype

An AI-first mindset is useful. AI-for-its-own-sake is not. Every initiative needs a concrete reason to exist.

5. Letting skepticism block progress

Teams need realism, not paralysis. If experimentation stops before learning begins, the initiative is dead on arrival.

6. Ignoring product plumbing

Bulky APIs, unclear data structures, and legacy workflows become serious blockers when AI depends on them.

7. Spreading effort too thin

As AI possibilities multiply, product backlogs can balloon quickly. Choosing what not to build becomes a strategic skill.

A simple checklist before you add AI to your SaaS

  • Define the user or business problem clearly
  • Choose one success metric that matters
  • Audit the product foundation first
  • Confirm your APIs and data structures are usable by the AI layer
  • Select a team that is open to experimentation
  • Plan for QA and technical review
  • Track model and tooling costs from day one
  • Limit scope so the project can ship and be evaluated
  • Focus on a real customer segment, not everybody
  • Decide in advance what outcome would count as success

What an effective AI-first SaaS strategy looks like

An effective strategy usually has these traits:

  • It solves a real problem, not just a trend-driven one
  • It improves an important metric, not just feature usage
  • It fits the product’s actual audience
  • It is built on a clean foundation
  • It is adopted across the company where useful
  • It is measured against cost as well as output

In other words, successful AI adoption in SaaS is less about adding a flashy layer and more about combining structure, discipline, and clear product thinking.

The wrong way to bring AI into your SaaS is to use it as a patch for deeper product problems.

The right way is to start with friction that already exists, fix the underlying structure where needed, and then use AI to make the product easier, faster, or more valuable in a measurable way.

If the foundation is solid, AI can unlock powerful outcomes such as higher activation, better automation, and more efficient internal operations. If the foundation is weak, AI usually adds cost and complexity faster than it adds value.

Before building your next AI feature, ask one question: What exact problem will this solve better than a non-AI solution? That question filters out most bad ideas and points toward the ones worth shipping.

Head of Growth, saas.group