Dario Bianchi is the Chief Product Officer at Mindvalley, leading AI-driven innovation across one of the world’s top edtech and wellness platforms. He also serves as Board Member and Chair of the Technology Committee at Affinity Africa, a digital bank advancing financial inclusion. With over 20 years of experience across Europe, Africa, Latin America, and Asia, Dario combines deep product, fintech, and AI expertise to drive growth and transformation globally.
Here’s what Dario shared with us on scaling conversational AI, aligning cross-functional teams, and delivering meaningful customer outcomes.
It was never a master plan. I started as a telecom engineer in Italy, and what kept pulling me forward was the question of what technology could actually do for people, not just how it worked. Moving to the UK and joining Orange, then Vodafone, gave me my first real exposure to building digital products at
scale. The real acceleration came at Digicel, where I was responsible for digital channels across 24 markets in the Caribbean, Latin America, and the South Pacific. That kind of scope forces you to grow fast: wildly different customer behaviours, infrastructure constraints, and market dynamics all at once.
Then MTN Ghana changed everything. I built the digital division from scratch, scaled a super-app to close to two million active users, launched one of the first chatbots in the MTN Group, and grew digital revenues by over 200% in three years. That is where I fell properly in love with conversational AI and the
idea that a well-designed interaction could replace an entire customer service operation. Moving to Mindvalley as CPO brought it full circle: a global platform, AI at the core, and the challenge of making technology feel genuinely human.
When I joined Mindvalley, AI was already part of the conversation but not yet embedded in the product in any meaningful way. The first thing I did was anchor the entire roadmap around it. We launched Eve, an AI coach that now lives across all touchpoints in the Mindvalley app, and it became the flagship feature almost immediately. Alongside that, I restructured how the product team works: automated PRDs, AI-assisted development across all squads, standardised skills to handle recurring workflows. The goal was not to add AI on top of what already existed but to rebuild how we build.
The principle behind all of it is that the real leverage of AI is not in any single feature. It is in the speed and quality of everything you ship when the whole organisation is built around it. That means embedding AI into the process of building products, not just into the products themselves. When your team thinks in those terms, the innovation stops being a project and becomes the default way of operating.
The biggest challenge is always the gap between what the technology can do in a demo and what it actually delivers in the hands of real users at scale. At MTN Ghana, when we launched Zigi, our chatbot, the first version handled the happy path well but fell apart the moment a customer deviated from the expected flow. In a market like Ghana, where digital literacy varies enormously and customers mix English with local languages mid-sentence, that failure was constant. We had to go back to basics: study actual interaction logs obsessively, redesign the conversation flows around how people really communicate rather than how we assumed they would, and build in graceful fallbacks that kept the experience from feeling broken when the AI hit its limits.
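The "graceful fallback" idea Dario describes can be sketched in code. This is purely illustrative, not MTN's actual implementation: the names (`classify_intent`, `FALLBACK_THRESHOLD`) and the toy keyword matcher are hypothetical stand-ins for a real NLU model; the point is the pattern of offering structured choices and a path to a human agent instead of failing silently.

```python
# Illustrative sketch of a graceful-fallback pattern for one chatbot turn.
# All names here are hypothetical; a real system would use a trained NLU
# model rather than keyword matching.

from dataclasses import dataclass

FALLBACK_THRESHOLD = 0.6  # below this confidence, do not guess

@dataclass
class IntentResult:
    intent: str
    confidence: float

def classify_intent(utterance: str) -> IntentResult:
    # Stand-in for a real intent classifier; here a toy keyword match.
    keywords = {"balance": "check_balance", "data": "buy_data"}
    for word, intent in keywords.items():
        if word in utterance.lower():
            return IntentResult(intent, 0.9)
    return IntentResult("unknown", 0.2)

def handle_turn(utterance: str) -> str:
    result = classify_intent(utterance)
    if result.confidence >= FALLBACK_THRESHOLD:
        return f"routing to handler: {result.intent}"
    # Graceful fallback: offer structured options and a human escape hatch
    # so the experience never feels broken when the AI hits its limits.
    return ("Sorry, I didn't quite get that. You can try: "
            "1) Check balance  2) Buy data  3) Talk to an agent")
```

The essential design choice is that the low-confidence branch is a first-class part of the flow, designed with the same care as the happy path.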
The other challenge is organisational, and in some ways it is harder. Conversational AI cuts across product, engineering, customer service, and commercial teams, and each of them has a different idea of what success looks like. Getting alignment on the right metrics, and keeping everyone pointed at the customer outcome rather than their own function’s KPIs, requires constant work. The way I have approached it is to make the conversation data visible to everyone, so the evidence of what is and is not working becomes the common language across all the teams involved.
Metrics are everything, but the wrong metrics will kill a good AI initiative faster than bad technology will. The default instinct when deploying conversational AI is to measure cost reduction: fewer calls to the call centre, lower support headcount, faster resolution times. Those are real and worth tracking. At MTN Ghana, Zigi reduced call centre volume by 25% in 18 months, and that mattered. But if cost reduction is the only lens, you end up optimising for deflection rather than experience, and customers feel it. The question that unlocks more value is what the AI is doing for the customer, not just what it is saving the business.
The shift happens when you connect AI interactions to downstream commercial outcomes. At Mindvalley, the frame is retention and lifetime value: is the AI coach keeping members engaged, is it surfacing the right content at the right moment, is it reducing the signals that predict cancellation. When you instrument your AI that way, the conversation with leadership changes completely. You stop defending the cost of the investment and start showing its contribution to revenue. That reframing is as much a communication challenge as it is a technical one, and it is something product leaders have to drive
deliberately.
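Instrumenting AI interactions against retention can be as simple as watching for the absence of engagement. The sketch below is a hedged illustration of that idea, not Mindvalley's actual analytics: the data shape, the 14-day window, and the function names are all assumptions chosen to show the pattern of turning interaction logs into a churn-risk signal.

```python
# Hypothetical sketch: flag members whose last AI-coach interaction is
# older than a window, as a crude stand-in for "signals that predict
# cancellation". Data shape and window are illustrative assumptions.

from collections import defaultdict
from datetime import date, timedelta

interaction_log = [
    # (member_id, day of interaction) — in practice, from product analytics
    ("m1", date(2026, 1, 2)), ("m1", date(2026, 1, 20)),
    ("m2", date(2026, 1, 2)),
]

def churn_risk_flags(log, today, window_days=14):
    """Return members with no AI interaction inside the window."""
    last_seen = defaultdict(lambda: date.min)
    for member, day in log:
        last_seen[member] = max(last_seen[member], day)
    cutoff = today - timedelta(days=window_days)
    return {m for m, d in last_seen.items() if d < cutoff}

print(churn_risk_flags(interaction_log, today=date(2026, 1, 25)))  # → {'m2'}
```

Feeding a flag like this into lifecycle campaigns is what turns the AI from a cost line into a measurable contributor to retention, which is exactly the reframing described above.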
The most effective thing I have done consistently is start from the customer problem rather than the technology. It sounds obvious but it is surprisingly rare. Most AI initiatives I have seen start with a capability: “we have this model, what can we do with it?” That approach produces demos that impress
internally and disappoint externally. The projects that actually moved metrics started with a specific friction point in the customer journey, something painful and measurable, and worked backwards to the simplest AI intervention that could remove it. At MTN Ghana that was the call centre queue. At
Mindvalley it was the drop-off in engagement after a member’s first week on the platform.
The other thing that separates initiatives that scale from ones that stall is how quickly you get real interaction data and act on it. The first version of any conversational AI will be wrong in ways you cannot predict from a design document. The teams that win are the ones that treat launch as the beginning of the learning cycle, not the end of the build cycle. That means short feedback loops, someone whose job it is to read the conversation logs every week, and the organisational permission to change things fast. Governance and measurement frameworks matter too, but only if they are lightweight enough that they do not slow down the iteration.
When we launched Zigi at MTN Ghana, the early numbers looked promising on the surface. Engagement was growing and the deflection rate from the call centre was heading in the right direction. But when we dug into the actual conversation logs, we found something uncomfortable: a significant portion of interactions were ending in what I can only describe as polite abandonment. Customers were not complaining, they were just stopping mid-conversation and calling the call centre anyway. The AI was technically functioning but it was not actually solving the problem. Users were asking questions in ways we had not anticipated, mixing languages, using shorthand, referencing account details in formats the system did not recognise.
The fix was not a technology upgrade. It was a process change. We set up a weekly ritual where the team reviewed a sample of failed or abandoned conversations together, product, customer service, and engineering in the same room. That practice surfaced patterns that no dashboard would have caught, and it built a shared sense of ownership across functions that had previously treated the chatbot as each other’s problem. Within a few months the completion rate improved meaningfully and the call centre reduction numbers started reflecting what the system was actually capable of. The lesson I took from it is that conversational AI failure is almost always a data and process problem before it is a model problem.
The clearest example I can point to is the myMTN super-app in Ghana. When I took over the digital division, the app existed but it was not really living up to its potential as a platform. We rebuilt the product strategy around making Mobile Money the gravitational centre of the experience, integrating payments, peer-to-peer transfers, and value-added services into a single coherent journey rather than a collection of disconnected features. We then layered in Zigi, the AI chatbot, as the primary support and navigation layer, so customers who got stuck or needed help never had to leave the app. The combination of a cleaner product architecture and an AI layer that actually worked pushed active users past 1.8 million, a growth of over 300% in two years, with 70% of those users actively transacting through Mobile Money.
What made it a genuine success rather than just a vanity metric story was the commercial outcome. The app became a meaningful contributor to data revenues and created a digital engagement habit that strengthened customer loyalty across the whole MTN relationship. The AI component was not the headline but it was load-bearing: it absorbed support volume, reduced friction at critical moments in the transaction flow, and kept the experience feeling responsive even when the underlying telco infrastructure had its moments. That combination of strong product architecture and well-placed AI is the pattern I have tried to replicate in every role since.
The framework I keep coming back to is deceptively simple: every AI initiative has to answer three questions before it gets resourced. What specific customer problem does it solve? What is the measurable outcome we are committing to? And what does good look like at 90 days? That last question is the one most teams skip, and it is the one that separates initiatives that stay accountable from ones that drift into perpetual pilots. I am not dogmatic about methodology: I have worked with Agile, SAFe, OKRs, and various hybrid approaches depending on the organisation. What matters more than the framework is whether the team can articulate the link between what they are building and why it matters to the business.
For prioritisation specifically, I use a lens I think of as impact per unit of trust. AI initiatives are not just resource bets, they are trust bets. Every time an AI interaction goes wrong, it costs you something with the customer that is harder to recover than a feature bug. So the highest priority initiatives are the ones where the potential business impact is large, the failure mode is recoverable, and the feedback loop is short enough that you can learn and correct quickly. That tends to push you toward high-frequency, lower-stakes interactions first, building credibility with customers and internal stakeholders before you deploy AI in moments where the cost of getting it wrong is high.
The structural change that made the biggest difference for me was eliminating the handoff model. In most organisations, AI initiatives die in the gap between the team that designs them and the team that builds them, or between the team that builds them and the team that operates them. At Mindvalley I replaced the traditional product manager and designer setup with builder-led squads that own the full lifecycle from discovery to delivery. No sprints, no handoffs, no separate business requirements document that engineering then interprets. That single change meaningfully accelerated our time to market and, more importantly, it meant that when something was not working, there was one team accountable for fixing it rather than three teams pointing at each other.
For cross-market deployments the key is accepting early that what works in one context will not simply transfer to another. At Digicel I was coordinating across 24 operating companies simultaneously, and the instinct to build one solution and roll it out everywhere is almost always wrong. What I found works is building a strong core that is genuinely reusable, then giving local teams real authority to adapt the last mile. That requires trust in both directions: headquarters trusting local teams to know their markets, and local teams trusting that the core platform is solid enough to build on. Getting that balance right is more a leadership challenge than a technical one, and it has to be maintained actively because the pressures in both directions never go away.
The tension between moving fast and maintaining trust is real, and I do not think you resolve it by finding some perfect middle ground. You resolve it by being very deliberate about where you experiment and where you do not. There are parts of the product where speed is the priority and failure is cheap, and there are parts where consistency and reliability are non-negotiable because the cost of getting it wrong is too high. With Eve, Mindvalley’s AI coach, this became very concrete very quickly. When you build an AI that has meaningful conversations with people about their personal growth, their fears, and their aspirations, you have to accept that some of those conversations will surface something more serious. Someone might not just be asking about productivity habits: they might be signalling distress. We built a safety mode into Eve specifically to detect those moments and respond in a way that is responsible, signposting professional support rather than trying to handle something the AI has no business handling.
That decision shaped how I think about trust in conversational AI more broadly. Trust is not just about accuracy or uptime. It is about knowing where your product’s responsibility ends and having the humility to design for that boundary explicitly. Users who encounter an AI that recognises its own limits and
responds with genuine care rather than a confident wrong answer come away with more trust, not less. Building those guardrails from the beginning, treating the edge cases and the sensitive failure modes as first-class design problems rather than afterthoughts, is what separates a platform that scales with user trust intact from one that eventually does serious damage to it.
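The safety-mode boundary described here can be made concrete as a gate that runs before the normal coaching reply. This is a minimal sketch under stated assumptions: in production the detector would be a trained classifier, not a keyword list, and the marker phrases and response text below are purely illustrative.

```python
# Hedged sketch of a "safety mode" gate in front of an AI coach.
# The keyword list is illustrative only; a real system would use a
# trained distress classifier with human review of edge cases.

DISTRESS_MARKERS = {"hopeless", "can't go on", "hurt myself"}

SAFETY_RESPONSE = (
    "It sounds like you're going through something serious. "
    "I'm not equipped to help with this, but a trained professional is. "
    "Please consider reaching out to a local support line."
)

def detect_distress(message: str) -> bool:
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def respond(message: str, coach_reply: str) -> str:
    # The safety check runs before the coaching reply is returned, so
    # sensitive messages are signposted to professional support rather
    # than handled by the generative path.
    if detect_distress(message):
        return SAFETY_RESPONSE
    return coach_reply
```

The structural point is that the boundary is explicit in the architecture: the sensitive path is designed, not left to whatever the model happens to generate.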
The most important thing I did at Mindvalley was make AI fluency a job requirement rather than a nice-to-have. That meant rebuilding the product organisation around the expectation that every person on the team, not just engineers, would use AI as a core part of their daily work. We standardised AI skills for recurring workflows, automated the more mechanical parts of product management like PRDs and status updates, and created the space for people to experiment without needing permission every time. The shift happened when AI stopped being a tool people reached for occasionally and became the default way the team thought about problems. That does not happen through training programmes alone. It happens when the structure of the work itself demands it.
For leadership specifically, the bar I hold is higher. An AI-fluent product leader is not someone who can use the tools. It is someone who can ask the right questions about what the AI is actually doing, spot when a metric is being gamed by the model rather than genuinely improved, and make sound judgements about where AI adds real value versus where it creates a false sense of progress. Building that kind of critical literacy requires exposure to failure as much as success. I make a point of reviewing AI initiatives that did not deliver as a team, publicly and without blame, because that is where the real learning lives. The competitive advantage is not in having the most AI features. It is in having a team that can tell the difference between good AI and impressive-looking AI.
Honestly, the most valuable signal I get is not from reading but from doing. Running a product organisation that is actively deploying AI means I am confronted with what works and what does not in real time, with real users, and that is a faster feedback loop than any newsletter or conference can provide. That said, I do pay close attention to what is happening at the frontier: I follow the research coming out of the major labs, I watch how consumer AI products are evolving in terms of interaction design, and I stay close to the fintech and telco worlds through my board work in Africa, which keeps me honest about how these technologies land in markets that look very different from Silicon Valley assumptions.
The other thing I invest in deliberately is my network. Being CPO at a global company in Kuala Lumpur, with a background spanning Europe, Africa, Latin America, and Southeast Asia, means I have access to perspectives that are genuinely diverse. A conversation with a founder building in Accra and a conversation with a product leader in Singapore will surface completely different problems and solutions, and the pattern recognition you build from that kind of range is hard to replicate by staying in one ecosystem. Speaking at events like CACES is part of that too: the preparation forces clarity, and the
conversations that happen around the edges of the stage are often more useful than the formal programme.
In this conversation, Dario Bianchi, Chief Product Officer at Mindvalley, offers a refreshingly practical perspective on why most conversational AI initiatives fall short and what it truly takes to make them work at scale. His insights go beyond the surface of the technology, emphasising the importance of choosing the right metrics, aligning teams, and focusing relentlessly on real customer value.
As highlighted during the Conversational AI & Customer Experience Summit Asia 2026, success in conversational AI is not defined by implementation alone, but by its impact on engagement, retention, and overall experience. By embedding AI into both products and processes, and fostering AI fluency across teams, organisations can move from experimentation to meaningful transformation.
“This conversation offered meaningful takeaways on building AI-driven experiences that balance innovation with real customer impact. We look forward to sharing more such in-depth insights and perspectives from industry leaders across our Asia edition.”
Join us at Altrusia Global Events as we embark on a transformative journey
Copyright © 2026 Altrusia Global Events Pvt Ltd