In 2025, AI agent-based systems have moved from a cutting-edge concept to a disruptive force, commanding the spotlight and sparking discussion across tech communities worldwide. Fueled by massive investments and tangible success stories, these systems are now poised to reshape how enterprises operate and innovate.
Join us for a forward-thinking discussion with industry trailblazers as we explore the next frontier of AI-driven enterprise—where human ingenuity and intelligent machines collaborate for unparalleled transformation.
Meet the Speakers
Steve Nouri: CEO and Co-founder of GenAI Works, a global AI community leader, and advocate for responsible AI.
Assaf Elovic: Head of AI at Monday.com and creator of GPT Researcher, a leading open-source agent.
Rotem Alaluf: CEO of Wand AI and pioneer in large-scale AI systems across defense, finance, and tech.
Key Topics Discussed
In this webinar, the speakers discuss several critical topics shaping the future of artificial intelligence (AI):
- AI in Enterprises: Explore how AI agents are transforming enterprises by automating tasks and speeding up decision-making—yet still facing adoption hurdles around security, data governance, and culture change. Speakers share lessons on bridging knowledge gaps, overcoming quality concerns, and refocusing teams on higher-level innovation.
- Human-AI Collaboration: Understand why trust—and not technology—remains the biggest obstacle in AI adoption. Learn how human oversight builds confidence in AI outputs and how organizations strike a balance between partial automation (with humans in the loop) and the push toward fully autonomous systems.
- Transparency and Explainability: See how multi-agent systems can mirror human workflows and produce more transparent, auditable processes, and why explainability tools are essential for compliance, user confidence, and error handling—particularly when AI inevitably makes mistakes.
- Scaling AI in Enterprises: Discover how companies like Wand.ai and Monday.com are approaching AI rollouts step by step. Hear real examples of how well-managed AI can save teams “tens of hours” each day while respecting data privacy and trust boundaries.
- The Emergence of Fully Autonomous Companies: Learn about the pioneering work in building entirely AI-run organizations—especially in areas like algorithmic trading, marketing, or legal services—and why they’re poised to compete against traditional models. The conversation addresses open questions about speed, iterative learning, and human “blocking” of AI progress.
Introduction
Steve:
Hello everyone! Welcome back to another great discussion about AI. Today, we’re diving into the future of AI agents, with a special focus on the B2B landscape in Enterprise AI. We’re at a point where AI agent-based systems are no longer just cool demos—they’re rapidly becoming central to enterprise strategy. There’s plenty of buzz out there about massive AI initiatives and huge investments, and we want to go deeper, beyond the headlines.
I’m thrilled to be joined by two leaders in this space. We’ll focus on AI’s role in the enterprise, trust and transparency in multi-agent systems, and the journey toward fully autonomous companies. Let’s dive in and learn how organizations are tackling adoption hurdles—and where the technology might take us in the next few years.
AI Adoption in the Enterprise—What’s Slowing Things Down?
Steve:
Although AI is everywhere in the news, enterprise adoption can lag behind the hype. Many large organizations already have access to powerful models, but they aren’t always moving as quickly as smaller startups. Assaf, why do you think that is?
Assaf:
Sure, I’ll break this down quickly, but I think the first, and maybe most obvious, reason is that enterprises tend to move slower by nature. It’s just how priorities, processes, and scale work in larger organizations. That’s kind of the default challenge.
But there are more interesting barriers I’ve noticed, especially from working with or consulting for enterprise companies.
First, security and data issues. Enterprises face much stricter regulations around how and when they can share data or work with large language models (LLMs). Even now, with solutions like OpenAI offering private cloud LLMs or open-source options, we still see this as a major roadblock.
Second, quality. Startups are often more willing to take risks and work at smaller scales, but for enterprises, scaling up AI introduces a whole new set of edge cases. Ensuring LLMs perform well at scale is tough and requires more time to deliver a reliable, high-quality product.
Finally, knowledge gaps. Some enterprises got ahead of the curve and built AI teams early on. But for many, AI caught them by surprise. They woke up one day and realized, We need AI now. Moving from that realization to actually building a dedicated, capable AI team that can deliver meaningful value takes time. That’s why we’ve seen such a massive demand for service providers over the last few years—companies outsourcing their AI development.
But even that approach has its limits, and I think enterprises will need to catch up internally to fully unlock the potential of AI.
Steve:
Makes sense. Rotem, what would you like to add?
Rotem:
Assaf made some excellent points. I’d like to add a few thoughts. While large language models have shown immense promise, significant questions remain about their readiness for enterprise adoption.
Technically, these models are capable—they allow us to ask questions and receive meaningful answers, and their outputs can take many forms, whether it’s reporting, modeling, or other applications. However, the key question is whether their outputs are consistent and reliable enough for enterprise processes.
Over the last year or two, the answer has often been: “it’s promising, but not yet ready.” While the technology is impressive, especially for B2C use cases, it hasn’t yet matured in terms of quality, reliability, or consistency to replace core processes in enterprises.
That said, the past year has seen progress, particularly with advancements in agentic systems. Companies like ours are pushing the boundaries, and other organizations are contributing as well, collectively moving the needle toward creating more mature, scalable solutions that can truly transform enterprise operations.
I’d also emphasize the change management challenge. Even if the technology were perfect, large-scale transformation inside an enterprise isn’t trivial. Training people, modifying workflows, ensuring regulatory compliance—it’s a lot of moving parts. Smaller companies or startups can move faster because they don’t have the same bureaucratic layers or legacy infrastructures.
The Role of Trust—Human in the Loop vs. Full Autonomy
Steve:
Thanks, Rotem. Let’s talk trust—often the biggest roadblock to AI adoption. Some folks argue that to unlock AI’s full value, you need a fully autonomous setup. Others say we must keep humans in the loop. Where do you each stand?
Assaf:
My take is that the biggest barrier is not technology; it’s trust. Let’s take autonomous vehicles as an example. Even if they were proven to work flawlessly, would you go out tomorrow and buy a car that is one hundred percent autonomous? Would you let it drive everywhere, pick up your kids, or handle tasks without supervision? For many, the answer is still no, and that illustrates the core issue: trust.
The problem isn’t about creating fully autonomous AI capable of doing everything on its own. Instead, the focus should be on identifying high-risk areas where human involvement is essential to maintain control and build trust around AI.
In some cases, full autonomy might be acceptable, but for many applications, people want—and may always want—some level of control. That’s why I advocate for keeping humans in the loop, even as AI advances. Not only does this build trust, but it also improves AI performance through reinforcement learning and constant feedback.
By designing systems with human involvement from the start, we ensure adaptability to changing environments and maintain oversight, allowing AI to consistently perform as intended. It’s a strategic approach that strengthens both trust and effectiveness.
Translating that back to the enterprise, I don’t think we should be going for complete autonomy across everything right away. Instead, identify which tasks and workflows require human oversight, at least initially, so people feel comfortable. Over time, as AI systems prove themselves reliable, we may see less and less human intervention. But pushing for immediate, 100% autonomy might scare people off.
Steve:
Great points, Assaf. Rotem, what do you think? Should we push for full autonomy?
Rotem:
Some companies—like certain algorithmic trading firms—are already near-autonomous in practice. But building a brand-new, fully autonomous company is simpler than converting an established enterprise. If you start from scratch with AI at the core, you don’t have to upend existing teams or legacy processes.
For enterprises, the hybrid model—humans plus AI—will dominate for a while. Humans handle the high-risk decisions or strategic judgments, and the AI manages large-scale tasks that benefit from speed and consistency.
However, I do believe we’ll see fully autonomous companies within the next one or two years in certain domains—like investment, legal, or marketing. They’ll be small at first, but they’ll move incredibly fast compared to traditional organizations.
Assaf:
That analogy almost convinced me—humans can be blockers to fully autonomous AI. Eventually, we may reach a point where trust develops through use cases like an AI agent finding the cheapest flight. If human involvement risks missing the opportunity, people will say, “Go ahead,” and trust the AI to handle it.
On a broader level, the idea of human oversight as the next step in AI progress is exciting. For example, in customer service, AI often resolves issues but hands off to humans when needed. In the future, we might see fully autonomous systems managing tickets, showing sentiment for tasks, and allowing humans to step in only when necessary. It’ll be fascinating to see how different markets evolve.
Steve:
Great points. I think peer pressure will eventually push companies toward full autonomy as they see humans as bottlenecks and fear losing their edge to smaller, more agile, and risk-tolerant competitors. It’ll be fascinating to see how this evolves.
Transparency & Explainability—Why It Matters
Steve:
Let’s move to transparency. We hear a lot about “explainability” whenever people voice concerns about AI. How important is it for enterprise use?
Rotem:
It really depends on the use case. If it’s purely profit-driven—like an investment algorithm—and it’s making you money, maybe you don’t care so much about why. But for most enterprise workflows, explainability is crucial. Stakeholders want to know how a conclusion was reached, and regulators care about potential biases or discriminatory outcomes.
One advantage of multi-agent systems is how they mirror human processes in their interactions. These systems decompose tasks, collaborate, and show how they distribute and solve problems, providing transparency in their execution. While we may not know exactly how a specific agent generated a particular token or decision, this mirrors the lack of detailed understanding we have about human decision-making processes.
Multi-agent systems still offer outputs that are fairly transparent, similar to human execution. Traditional machine learning systems have focused on explaining feature importance for regulation and bias control. This extra layer of transparency—explaining parameters used in decisions—remains important for compliance and trust. Overall, multi-agent systems provide a level of transparency comparable to humans when executing tasks, while still meeting regulatory needs.
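To make this idea concrete, here is a minimal sketch of how a multi-agent run might record its own decomposition so each step can be audited after the fact. It is an illustration only, not code from Wand AI or Monday.com; the data structures and agent handlers are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class StepRecord:
    agent: str
    input_text: str
    output_text: str


@dataclass
class AuditTrail:
    steps: List[StepRecord] = field(default_factory=list)

    def log(self, agent: str, input_text: str, output_text: str) -> None:
        self.steps.append(StepRecord(agent, input_text, output_text))

    def explain(self) -> str:
        # Human-readable summary of which agent handled which step, in order.
        return "\n".join(
            f"{i + 1}. [{s.agent}] {s.input_text!r} -> {s.output_text!r}"
            for i, s in enumerate(self.steps)
        )


def run_workflow(request: str, agents: List[Tuple[str, Callable[[str], str]]]) -> AuditTrail:
    # Each agent receives the running context, transforms it, and has its
    # contribution recorded so the whole chain can be reviewed later.
    trail = AuditTrail()
    context = request
    for name, handler in agents:
        output = handler(context)
        trail.log(name, context, output)
        context = output
    return trail
```

A reviewer could then call trail.explain() to see which agent produced which intermediate result, which is the kind of step-by-step visibility Rotem describes.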
Steve:
That’s fascinating. Assaf, do you see similar patterns at Monday.com?
Assaf:
Yes, definitely. To add to what Rotem said, at Monday we observed that adding a layer of explainability—showing how AI reaches its results—leads to a slight increase in adoption. Is this due to trust issues? As Rotem mentioned, trust may become less critical over time, but there are still two key areas where explainability adds value.
First, it builds trust, especially when the AI gets things wrong. When it performs as expected, users are satisfied and forgiving. But when mistakes happen, the lack of explanation can feel like flipping a coin—you have no idea what went wrong or why. However, if users can see the reasoning behind the mistake, even if they disagree, it gives them a sense of control and understanding. They can see the logic, even if flawed, and often empathize with the system’s error.
Second, explainability improves the user experience. In traditional, non-AI products, bugs are often easy to trace. With AI, explainability lets users refine their prompts or adjust their input after seeing why the result turned out the way it did. This iterative process helps users fine-tune their experience and achieve better outcomes.
In short, beyond trust, explainability offers practical benefits for diagnosing errors and enhancing overall UX. It’s a worthwhile investment in AI product development, even if we believe user trust will eventually become second nature.
Scaling AI at Wand.ai
Steve:
Let’s shift gears a bit. Rotem, tell us how Wand.ai tackles enterprise-grade challenges like consistency, security, and data privacy, especially given the complexities you both just mentioned.
Rotem:
Sure. We began by automating small, well-defined tasks for large clients—always with enterprise-grade concerns like security, consistency, and regulatory compliance front and center. Then we gradually expanded to bigger workflows, often spanning multiple departments. Now, we’re working to automate entire functions—HR, marketing, certain operations—and exploring how to connect them so you can effectively run an entire division with minimal human input.
Ultimately, I believe we’re only a few steps away from being able to orchestrate entire autonomous companies in certain fields. People assume that’s a decade off, but it’s likely sooner. Once you can string together multiple departmental automations under a unifying logic or agent-based system, you’ve got the core of a company that runs by itself—humans just handle strategic oversight or special exceptions.
Surprising AI Initiatives at Monday.com
Steve:
Very cool. Assaf, Monday.com has a massive user base, from solo freelancers to Fortune 500 giants. Any AI project or rollout that truly shocked you with unexpected results?
Assaf:
Well, I’m constantly amazed by how users interact with AI. One example that ties into trust and human-in-the-loop concepts is our AI Blocks feature, which can automate entire boards—for instance, pulling data from emails or auto-filling CRM leads. Users report that it’s been a game-changer, saving them hours daily.
But here’s the surprising part: even with a near-perfect track record, some users hesitate to let the AI run fully unsupervised. A single mistake—like assigning the wrong name to a lead—can feel catastrophic if it embarrasses them in front of a client.
This experience has been a real eye-opener. It’s made us think deeply about risk tolerance in specific workflows and how that impacts AI adoption. For some teams, 95% accuracy is fine. For others, even 99.9% might not be enough if a mistake has dire consequences. That’s where we decided to focus more on explainability and human verification loops, so users know exactly where they can step in and correct the AI if necessary. Building trust is an ongoing process, and we’re continually innovating to address these challenges. It’s a fascinating journey.
From Prompt Engineers to Strategy Engineers
Steve:
We hear a lot about prompt engineering, but Rotem, you’ve mentioned a bigger challenge around strategy engineering—defining rules, objectives, and guardrails. Can you elaborate?
Rotem:
Sure. Right now, everyone is obsessed with “prompt engineering” for large language models, but we’re quickly moving toward a world of multi-agent or composite AI systems. Telling a single agent “Increase revenue” might lead to unexpected or unethical actions if the AI is not constrained or given the right context.
We’ll need “strategy engineers” who set overarching rules: Don’t exceed budget X, comply with Y regulation, don’t violate these brand values, etc. Instead of micromanaging each AI decision, we define the bigger boundaries and let the AI figure out how to optimize within them. This is reminiscent of human corporate governance, except more systematic because the AI always follows the coded rules—if the code is done correctly.
Assaf:
Exactly. We’re experimenting with “soft rules” that AI can override under certain conditions, and “hard rules” that are never to be broken (like regulatory or ethical constraints). Being explicit about these guardrails is key to building comfort and trust. If the AI can demonstrate it respects these boundaries, teams are more willing to let it automate higher-stakes tasks.
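As a rough illustration of the hard-versus-soft distinction, the sketch below checks a proposed agent action against both kinds of rules. The rules, field names, and thresholds are invented for illustration and are not how either company actually encodes its guardrails.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str
    check: Callable[[Dict], bool]  # returns True if the proposed action passes this rule
    hard: bool                     # hard rules can never be overridden


# Illustrative rules only; real guardrails would encode the organization's
# own regulatory, budgetary, and brand constraints.
RULES: List[Rule] = [
    Rule("within_budget", lambda a: a["cost"] <= a["budget"], hard=True),
    Rule("no_pii_shared", lambda a: not a["contains_pii"], hard=True),
    Rule("approved_vendor", lambda a: a["vendor"] in a["approved_vendors"], hard=False),
]


def is_allowed(action: Dict, allow_soft_override: bool = False) -> bool:
    for rule in RULES:
        if not rule.check(action):
            if rule.hard:
                return False   # a hard rule always blocks the action
            if not allow_soft_override:
                return False   # a soft rule blocks unless an override is permitted
    return True
```

In this framing, low-risk workflows could run with soft overrides enabled, while hard rules keep regulatory and ethical constraints non-negotiable.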
Audience Q&A—Incentivizing AI & Fear of the Unknown
Steve:
We received an audience question: “How do we incentivize AI?” Typically we incentivize employees with bonuses, promotions, and social recognition, but AI is different.
Rotem:
Yes, it’s all about reward functions and clearly defined system prompts. For instance, you might have an AI “finance agent” that’s rewarded for keeping costs below a certain threshold and a “marketing agent” rewarded for increasing conversions. They have to negotiate if marketing wants to spend more. If we see them clashing or producing undesired outcomes, we tweak their instructions or adjust the weighting of their objectives.
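A toy sketch of what such reward functions might look like in code follows; the objectives, thresholds, and weights are invented for illustration and are not Wand AI's implementation.

```python
def finance_reward(spend: float, budget: float) -> float:
    # Rewarded for staying at or under budget, penalized in proportion to any overrun.
    return 1.0 if spend <= budget else -(spend - budget) / budget


def marketing_reward(conversions: int, target: int) -> float:
    # Rewarded for approaching or exceeding the conversion target, capped so the
    # agent cannot chase conversions without limit.
    return min(conversions / target, 1.5)


def joint_objective(spend: float, budget: float, conversions: int, target: int,
                    w_finance: float = 0.5, w_marketing: float = 0.5) -> float:
    # Adjusting the weights is one way an operator "re-incentivizes" the agents
    # when their behavior clashes or drifts from the desired outcome.
    return (w_finance * finance_reward(spend, budget)
            + w_marketing * marketing_reward(conversions, target))
```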
Assaf:
At Monday, we do something similar with guidelines that specify “What does success look like?” and “What is off-limits?” We also track user feedback as a “soft” reward signal: if users revert the AI’s changes or file a negative rating, that’s a sign the AI needs a different approach. It’s not a perfect science, but it’s surprisingly effective—especially with iterative refinement.
Wrap-Up—Key Takeaways & Future Outlook
Steve:
We’ve covered a broad range: from enterprise adoption roadblocks and trust issues to multi-agent systems and strategic guardrails for AI. Let’s finish with one key takeaway from each of you.
Rotem’s Final Word:
I expect to see fully autonomous companies in specific fields—like investments, marketing, or legal—within the next year or two. They’ll be built from scratch with AI as the foundation. For existing enterprises, I think we’ll see a hybrid approach dominate for a while: partial automation, with humans still steering the strategic or high-risk decisions. But as technology matures and trust builds, more tasks will move under AI control.
Assaf’s Final Word:
Agreed. For companies with legacy processes, it’s all about incremental trust-building. If the AI can reliably handle routine work, people start letting it do more. Eventually, humans may realize they’re bottlenecks on speed or efficiency and willingly hand over greater autonomy. But that process won’t happen overnight, and robust explainability will remain important—both for internal buy-in and for regulatory needs.
Steve’s Closing Remarks
Thank you both for an enlightening conversation. We’ve seen how AI’s promise can stall without the right trust, transparency, and strategic guardrails in place. For everyone tuning in, we hope this sparks ideas on how to integrate AI agents responsibly and effectively within your own organizations.
If you want to dive deeper, check out our executive course on generative AI from GenAI Works—tailored for leaders looking to harness these technologies without getting blindsided by pitfalls.
And remember, be kind to your AI—you never know when that positivity might come in handy in our increasingly autonomous future!
Thanks again, and see you next time.