saas.unbound is a podcast for and about founders who are working on scaling inspiring products that people love, brought to you by https://saas.group/, a serial acquirer of B2B SaaS companies.

In episode #6 of season 6, Anna Nadeina talks with Belma Ibrahimovic, Head of AI at saas.group, a founder-friendly acquirer of B2B SaaS companies and a global team of passionate SaaS operators building the future of software.

This episode addresses common fears about artificial intelligence and its impact on work. We discuss how AI productivity tools like ChatGPT and Claude can enhance your workflow rather than replace your job. The conversation highlights the changing landscape of work and how individuals can adapt to avoid disruption.

AI is the single biggest change to knowledge work in years—and it scares people. The fear often shows up as questions about job security, creative ownership, and whether teams should restructure around autonomous agents. The right response isn’t to hide from AI or to adopt it recklessly. It’s to treat AI as a powerful tool and design a practical, people-first plan for adoption.

Why people are afraid—and why fear is the wrong starting point

Fear stems from uncertainty: What does AI mean for my job? Will my creative work disappear? Will teams be replaced by autonomous agents? Those are valid concerns, but they become harmful if they stop people from experimenting.

AI is not a magic eraser for judgment, and it is not an all-or-nothing threat. It is a toolkit of models and systems you can use to make work faster, less repetitive, and more impactful. The real risk today is not that AI will replace people, but that people who refuse to learn AI will be left behind.

Practical guardrail: don’t outsource your judgment

Do not outsource your judgment.

AI models are probabilistic next-token predictors. They can surface expert knowledge and help you think through complex problems, but they are not a substitute for human judgment—especially for high-stakes decisions. Use AI to augment thinking and speed up low-value tasks, not to make critical choices for you.

Will agents replace teams?

Short answer: no—at least not today. Agentic systems are excellent at automating repetitive workflows and orchestrating tasks across tools, which means they can remove boring, manual work. That frees humans for the parts of the job that need domain expertise, creativity, and judgment.

What will change is who wins: the person or team that integrates AI into their workflow will outperform those that don’t. So the competitive advantage belongs to teams that learn how to use AI meaningfully.

How to roll out AI across a SaaS organization

Adoption is a change management problem first and a technology problem second. Here’s a practical rollout framework that scales across brands and product teams.

1. Remove fear and reframe purpose

  • Message: AI empowers people rather than replaces them.
  • Focus: Align on the question “How can AI help me do this better?”

2. Lead by example

  • Managers and leaders must use AI daily and show how they use it.
  • When leadership demonstrates usage, adoption follows faster and with less resistance.

3. Give people permission and time

  • Allocate regular learning time in the calendar. Treat it as an investment.
  • Provide guided, role-specific sessions rather than a single generic course.

4. Create shared learning paths

  • Centralize vetted resources, templates, and examples (not a million newsletters).
  • Run internal hackathons and cross-brand knowledge sharing so teams don’t reinvent the wheel.

5. Start with individual productivity, then scale to teams

  • Encourage people to find their own “aha” with AI on day-to-day tasks.
  • Next ask: how can AI help my team? Then: how can it help my product?

Deciding whether to put AI into your product

Build with impact in mind. Don’t add AI features for the sake of novelty. Ask: what problem can AI uniquely solve that wasn’t possible before? If the answer is meaningful, proceed. If not, don’t force it.

Some product integrations are obvious: conversational search, automated summarization, or customer support assistants that handle routine queries. Other cases might be better served by improved workflows or analytics powered by simpler models.

Engineering discipline matters: evaluate, version, and test

AI in product is still software engineering. Two critical operational practices:

  • Evaluation (evals): Build test sets (train, validation, and a final held-out test set) so you understand real-world behavior and failure modes. Test the edge cases, not just happy paths.
  • Model versioning: Pin model versions when using third-party models. That prevents breaking changes when a vendor updates their backend.
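
An eval set can start very small. Here is a minimal sketch of the idea, assuming a keyword-match pass criterion and a stubbed model function (`stub_model` stands in for your real, pinned model call; your own pass criteria will likely be richer):

```python
# Minimal eval harness sketch: run a labeled test set through the model
# and collect failures. model_fn and the cases below are hypothetical
# placeholders to illustrate the pattern.

def run_evals(model_fn, cases):
    """Check each (prompt, expected_keyword) pair; return pass count and failures."""
    failures = []
    for prompt, expected in cases:
        output = model_fn(prompt)
        if expected.lower() not in output.lower():
            failures.append((prompt, expected, output))
    return len(cases) - len(failures), failures

# Edge cases belong in the set, not just happy paths.
cases = [
    ("Refund policy for annual plans?", "refund"),
    ("", "sorry"),  # empty input: the model should degrade gracefully
]

def stub_model(prompt):
    # Stand-in for the real (pinned) model call.
    return "Sorry, I need more detail." if not prompt else "Our refund policy is ..."

passed, failures = run_evals(stub_model, cases)
```

The point is the loop, not the stub: once failures are captured per case, you can track regressions across model versions instead of eyeballing chat transcripts.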

Designing robust AI features requires thinking beyond the model: data pipelines, retrieval systems, rate limits, monitoring, and recovery strategies all matter.

Practical playbook: where to start tomorrow

  1. Ask the right questions: “How can AI help me do this better?” and then “How can AI help my team?”
  2. Run a small experiment: Pick one boring, repetitive workflow and automate it with an agent or a simple model.
  3. Set evaluation criteria: Create an evaluation dataset and test for edge cases before shipping.
  4. Pin versions: Lock the model version or run a staged rollout to avoid surprises.
  5. Allocate time: Give each team 1–2 hours per week to learn and experiment together.
  6. Share results: Publish short writeups or demos internally so other teams can copy or improve.
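
Version pinning (step 4) can be enforced mechanically. A small sketch, assuming the date-suffixed snapshot naming some vendors use (the alias list and helper name here are illustrative, not any specific SDK's API):

```python
# Hypothetical deploy-time guard: reject floating model aliases so a
# vendor-side update can't silently change production behavior.

FLOATING_ALIASES = {"gpt-4o", "gpt-4o-mini", "claude-latest"}  # example aliases

def assert_pinned(model_name: str) -> str:
    """Allow only dated snapshot names, e.g. 'gpt-4o-2024-08-06'."""
    if model_name in FLOATING_ALIASES or model_name.endswith("-latest"):
        raise ValueError(f"Unpinned model '{model_name}': pin a dated snapshot instead")
    return model_name

model = assert_pinned("gpt-4o-2024-08-06")  # dated snapshot: accepted
```

Running this check in CI, next to your eval suite, turns "we forgot to pin" from a production surprise into a failed build.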

A few concrete hacks and tips

  • Use projects in AI tools: Create projects (or workspaces) where you store files, custom instructions, and connectors. This becomes your AI assistant that already knows your context.
  • Connect company data: Hook enterprise Notion, Google Drive, or docs into projects so the assistant can act on real company knowledge.
  • Talk, don’t just type: For many workflows, voice input speeds up iteration and helps you give richer context to the model.
  • Reduce noise: Unsubscribe from duplicate newsletters. Follow a few trusted voices and curate your feed.
  • Build in safe spaces: Side projects are low-risk ways to discover new AI workflows and get those aha moments.

Common pitfalls to avoid

  • Assuming AI can make high-stakes decisions without human oversight.
  • Testing only the easy examples. Real users will probe weak spots.
  • Sprinkling AI features into a product without a clear impact hypothesis.
  • Expecting agents to replace domain expertise overnight.

Head of Growth, saas.group