AI & ML

Unpacking the Complexities of AI Implementation

Apr 10, 2026 · 5 min read

Unearthing the Realities of AI Adoption

As companies race to integrate AI into their operations, the journey is far from straightforward. Despite the buzz surrounding artificial intelligence, organizations often grapple with complex issues like pipeline sprawl and shadow AI. In a recent discussion, Ryan Donovan speaks with Hema Raghavan, co-founder and engineering lead at Kumo.ai, to unpack these challenges and explore governance strategies essential for AI success.

Implementing AI isn’t merely about the technology; it necessitates a cohesive approach to managing data flow and monitoring activity. Raghavan highlights that as businesses push to become AI-first, various departments from engineering to sales are keen to exploit AI solutions—often outside the IT department's purview. This trend isn't without risk. For instance, executives are increasingly concerned about sensitive corporate data inadvertently being sent to unapproved AI services during routine tasks, such as crafting a sales presentation. The lack of oversight leads to untracked data movement, raising alarms among CISOs and CIOs tasked with protecting company information.

Kumo.ai attempts to address these governance issues by implementing different operational models, such as deploying AI within approved platforms like Snowflake’s Snowpark Container Services. This model allows organizations to utilize AI while keeping data securely within their controlled environments. Furthermore, Raghavan advocates for the use of monitored gateways that track data in and out of these systems, thus reinforcing security measures.

However, the conversation takes an intriguing turn as Raghavan reflects on the convoluted nature of traditional data pipelines. She recounts her experiences at LinkedIn, where an intricate web of pipelines often complicated the development and maintenance of AI models. A malfunction in one pipeline could send shockwaves through multiple models, making troubleshooting a considerable challenge.
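The monitored-gateway idea mentioned above can be sketched as an allowlist plus an audit log sitting between internal users and external AI services. This is a minimal illustration of the pattern, not Kumo.ai's implementation; the `APPROVED_SERVICES` allowlist, the host names, and the `route_request` function are all hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Hypothetical allowlist of AI endpoints the organization has approved.
APPROVED_SERVICES = {"internal-llm.example.com"}

def route_request(service_host: str, payload: dict) -> dict:
    """Gate an outbound AI request: permit only approved hosts,
    and record every attempt so data movement is never untracked."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "host": service_host,
        "bytes_out": len(json.dumps(payload)),
    }
    if service_host not in APPROVED_SERVICES:
        record["action"] = "blocked"
        log.warning("outbound AI request blocked: %s", record)
        return {"status": "blocked", "reason": "unapproved AI service"}
    record["action"] = "allowed"
    log.info("outbound AI request allowed: %s", record)
    # A real gateway would forward the payload here; omitted in this sketch.
    return {"status": "allowed"}
```

In this sketch, a prompt sent to an unapproved host (say, a consumer chatbot used to draft a sales deck) is blocked and logged, while traffic to an approved in-house endpoint passes through—giving CISOs the visibility the discussion calls for.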
Raghavan’s vision for Kumo.ai emerged from these frustrations, as she sought to simplify AI model architecture by reducing reliance on extensive feature engineering and the resulting pipeline sprawl. Instead, Kumo.ai proposes a single foundation model that interfaces directly with various databases, eliminating the need for numerous interconnected pipelines. Each request can dynamically query a database in real time, thus ensuring that maintenance is streamlined and reducing the potential for errors.

This is more significant than it may appear. The ability to manage AI deployment with such efficiency could revolutionize how companies leverage AI. If firms adopt this model, they can potentially sidestep the pitfalls of intricate pipeline ecosystems, allowing for more reliable and scalable AI solutions. Those venturing into the AI space need to reconsider not just how they implement models, but also how they architect their data environments.

As Raghavan emphasizes, “What you give your AI access to is critical.” By establishing clear governance practices alongside architectural strategies, organizations can work toward reducing the chaos that often accompanies AI integration—a necessary step toward realizing the full potential of AI technologies. For companies exploring effective AI applications, the lessons drawn from this discussion with Hema Raghavan could prove indispensable. The intersection of data governance and AI strategy holds the key to unlocking operational efficiency in the digital age.

If you want to learn more or connect with Hema Raghavan, you can find her on LinkedIn or reach out via email at [email protected].

Navigating the Future of Engineering in AI

As we wrap up this discussion, it’s clear that we stand at a pivotal moment in engineering, especially within the AI realm.
The conversations surrounding design choices, the emergence of new tools, and the challenges of maintaining system coherence highlight not just technical hurdles but also deeper cultural shifts in engineering teams. If you're in this field, you’ll want to pay attention to how these transitions unfold.

One of the most significant points made is the tension between experimentation and the need for structured governance. Hema Raghavan emphasized that while innovation is essential, the proliferation of different databases and coding paradigms can quickly lead to chaos—a phenomenon she referred to as "pipeline sprawl." This is more than just a logistical headache; it’s a fundamental issue that affects the speed and quality of product releases. The observation that organizations must have a framework for managing their architecture is spot on. As experimentation grows, so does the need for tools and governance models that can provide visibility into operations, especially as new AI capabilities come into play.

A recurring theme in the dialogue is the changing nature of what it means to be a senior engineer today. This isn’t just about being hands-on with code; it's about guiding junior engineers as they navigate new technologies and ensuring they not only trust the AI outputs but also understand the underlying decisions being made—even if they come from an agent or an AI system. This mentorship will be crucial as the landscape becomes increasingly complex. The expectation that junior engineers will engage critically with generated solutions raises the bar for everyone on the team.

Looking ahead, organizations will likely need to strike a balance. Yes, fostering an experimental culture is invaluable, but without a strategic approach to governance, the very innovations we seek may become sources of inefficiency or, worse, liabilities.
CTOs and engineering leads will have to be proactive in establishing standards and practices that can adapt to new discoveries without losing operational integrity.

So, what’s next for engineers and companies diving into AI? It’s essential to embrace the excitement of this new age while applying lessons from previous experiences. As Raghavan aptly stated, it’s about making new mistakes rather than repeating the old ones. If the engineering community can maintain that delicate balance of innovation and caution, the future looks promising.

Let’s embrace this journey together, making strides that push boundaries while keeping our eyes open to the pitfalls of past decisions.