The 17th edition of the World Policy Conference (WPC), founded by Thierry de Montbrial, Chairman of the French Institute of International Relations (Ifri), took place from December 13 to 15, 2024, in Abu Dhabi, United Arab Emirates. This marks the fourth consecutive year that Abu Dhabi has hosted the prestigious event, a gathering of political, economic, academic, and media leaders from across the globe.
Since its inception in 2008, the WPC has been dedicated to addressing the pressing challenges of a rapidly evolving world and fostering lasting connections among its participants. By bringing together voices from major powers, emerging nations, and smaller countries across five continents, the conference ensures a rich diversity of perspectives. Its goal remains clear: to inspire thoughtful dialogue and practical solutions for the global issues of our time.
Panel Session: Exploring How Generative AI Can Transform Business
Speaker
Rotem Alaluf
CEO, Wand AI
Panel Chair
Lucia Sinapi-Thomas
Managing Director, Capgemini Ventures
Rotem: Thanks a lot for inviting me here, and thanks for the introduction. The core thing we are working on is what we call artificial workforces. We have the human workforce today, and we are building an artificial version for companies, enabling those agents to collaborate among themselves and with humans. If we think about the first generation of generative AI, the focus was on creating a single artificial brain that could take in information and generate responses based on it. The human brain, in that context, hasn’t changed much over the last several thousand years; it is society, and the flow of information between humans, that has enabled us to step up and create most of the things we have today.
The AI Workforce
There is still a lot of work to do on the artificial brain. We call these cognitive language models, with improved reasoning, planning, and self-awareness capabilities; they work somewhat differently from today’s transformers, which have several gaps. One of those gaps is the need for a lot of data. As mentioned before, it’s much easier to reach human capabilities when you have execution capabilities that feed results back into the system. Coding is one of those, because you can run the program, see whether it works, and return the output to the model, creating a feedback loop. But there are other ways to build such loops as well. So there is still a lot of work in creating this artificial brain, but just as humanity evolved through social interaction and working together, we believe the next generation of technology will build a society of agents: a group of agents that can collaborate and evolve over time.
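To make the coding feedback loop concrete, here is a minimal sketch. The `generate` callable stands in for any code-generating model; it is a hypothetical interface, not a real API, and the loop simply runs each candidate program and feeds failures back:

```python
import subprocess
import tempfile

def run_candidate(code: str) -> tuple[bool, str]:
    """Write generated code to a temp file, run it, and capture the result."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=30
    )
    return result.returncode == 0, result.stdout + result.stderr

def refine_until_working(generate, task: str, max_rounds: int = 5) -> str:
    """Ask the model for code, execute it, and feed the outcome back.

    `generate(task, feedback)` is a hypothetical stand-in for any
    code-generating model call, not a real API.
    """
    feedback = ""
    for _ in range(max_rounds):
        code = generate(task, feedback)
        ok, output = run_candidate(code)
        if ok:
            return code  # the program ran cleanly; the loop converged
        feedback = f"The previous program failed with:\n{output}"
    raise RuntimeError("no working program within the round budget")
```

The execution result is the feedback signal: a failing run becomes the next prompt’s context, which is why coding is such a natural domain for this kind of loop.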
Let’s take the example of a company today, like Capgemini. It runs and executes many tasks, and when it identifies gaps in execution, it hires new people so the tasks are done better next time. We want to enable agents to follow a similar process: they execute tasks assigned by humans, and if the group believes a task could be done better next time, we want it to self-evolve and autonomously “hire” a new agent to run it better. This connects to what was discussed before, because the task was executed and we got feedback from humans. Ideally, at the end of this process, we can compare the group before and after it evolved. So a group executes tasks, checks for inefficiencies in execution, or at least hypothesizes about the inefficiencies and their solutions, and creates a new group of agents with new functions and roles. This process continues until we have an artificial workforce that self-evolves to solve more and more complex tasks. This core technology relies heavily on reinforcement learning, not just on generative models.
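As a toy illustration of that hiring loop, under the assumption that skills can be matched as plain strings (a real system would use learned policies and reinforcement learning rather than string lookups), a group might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str
    skills: set[str]

    def can_handle(self, task: str) -> bool:
        return task in self.skills

@dataclass
class AgentGroup:
    agents: list[Agent] = field(default_factory=list)

    def execute(self, task: str) -> str:
        for agent in self.agents:
            if agent.can_handle(task):
                return f"{agent.role} completed: {task}"
        # Gap identified: no agent covers this task, so the group
        # "hires" a new specialist, much as a company recruits when
        # it finds an execution gap.
        new_agent = Agent(role=f"{task}-specialist", skills={task})
        self.agents.append(new_agent)
        return f"hired {new_agent.role}; the task can be retried"
```

Calling `execute` on an uncovered task grows the group, so the same task succeeds on the retry, which is the hire-and-retry cycle described above in miniature.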
One thing is to build this artificial workforce; the second is how humans will best collaborate with it. We need a lot of feedback to bring these artificial workforces to a level of execution similar to or better than humans, and the best ones to give us this feedback are humans. The collaboration between the human workforce and the artificial workforce is crucial for enabling systems to automate and execute more complex tasks over time. Ideally, we want every agent in our system to be able to approach any person in the organization when it needs to ask a question or when it gets stuck. We want a complete two-sided collaboration between humans and the agentic system. So that’s what we are working on, and as part of it we are also working on the fundamental technology of the artificial brain itself, but mostly on how these artificial workforces work together with the human workforce.
Lucia: That’s fascinating. I am sure you share my view that this is also futuristic. We hear you talk about collaboration between digital agents and employees. What is the role of the employee in your vision?
Human-AI Agent Collaboration
Rotem: That’s a great question. One of the mistakes happening in the companies, enterprises, and governments we work with today is that they look at the individual employee and ask how AI can help that individual do the same job better. That’s a problematic way to look at it, because the employee won’t necessarily keep doing the same thing; they may do something completely different. Take the industrial revolution as an example: people who hand-made t-shirts moved on to managing production lines. Their roles evolved into something different, and together they achieved a hundred or a thousand times the productivity they had before. We believe we’ll see a similar process. In the first 5, 10, or 20 years, we’ll see a lot of feedback loops coming from humans.
We want the agents to be proactive in asking for feedback when they don’t know how to solve a problem, and we want humans to help them solve it. Agents will learn and be able to do more things, but eventually people will shift their focus to strategy, creativity, and decisions that are more novel and for which less data exists. So again, connecting to what was mentioned before, there is less data on how to make decisions in those situations, and humans will focus more on them. But most execution problems will eventually transition to agents, especially in areas with a lot of data; with human feedback, we will collect more data and improve.
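One simple way to picture this proactive escalation, assuming a hypothetical `agent` that returns an answer with a confidence score and an `ask_human` channel (neither is a real API), is a confidence-gated handoff whose human replies are kept as future training data:

```python
from typing import Callable

# (task, human answer) pairs kept as data for the next training round.
FEEDBACK_LOG: list[tuple[str, str]] = []

def answer_or_escalate(
    task: str,
    agent: Callable[[str], tuple[str, float]],
    ask_human: Callable[[str], str],
    threshold: float = 0.8,
) -> str:
    """Answer directly when the agent is confident; otherwise ask a person."""
    answer, confidence = agent(task)
    if confidence >= threshold:
        return answer
    # The agent is stuck, so it proactively asks a human and records
    # the reply, closing the feedback loop described above.
    human_answer = ask_human(f"I am stuck on: {task!r}. My best draft: {answer}")
    FEEDBACK_LOG.append((task, human_answer))
    return human_answer
```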
Lucia: Thank you. So the more impact we expect from these solutions, the more important trusted AI becomes at the enterprise level.
Rotem: As mentioned, adoption and trust come first in enterprises. We also offer deployments that run in the customer’s own environment, in government clouds, virtual private clouds (VPCs), and on-premise, to reduce the trust gaps we see in enterprises today. The second thing I want to mention is that people took large language models and, as we still see in enterprises now, tried to push them at specific tasks, then realized many components were missing. For example, building technology that can say “I don’t know how to solve this” is a real challenge, because large language models tend to hallucinate instead. Identifying that the model doesn’t know how to solve a problem is one issue. Then there is role-based access control: how agents access different data, how they route questions they can’t answer to humans, how everything runs in on-premise environments, data migration, and many other problems, each requiring its own components. That’s why a large language model can be thought of as the engine of a car, but it’s not a car, and enterprises don’t need to build a car if they’re not car manufacturers.
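Two of the components listed here, abstention (“I don’t know”) and role-based access control, can be sketched in a few lines. Everything in this sketch, the roles, the permission table, and the confidence threshold, is illustrative, not Wand AI’s actual design:

```python
from dataclasses import dataclass

# Which data sources each agent role may read; purely illustrative.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "hr-agent": {"employee_records"},
    "finance-agent": {"ledger", "invoices"},
}

@dataclass
class GuardedAgent:
    role: str

    def fetch(self, source: str) -> str:
        # Role-based access control: deny any read outside the role's scope.
        if source not in ROLE_PERMISSIONS.get(self.role, set()):
            raise PermissionError(f"{self.role} may not read {source}")
        return f"contents of {source}"

    def answer(self, question: str, confidence: float) -> str:
        # Abstention: below the threshold, admit uncertainty and escalate
        # instead of hallucinating an answer.
        if confidence < 0.7:
            return f"I don't know how to answer {question!r}; escalating to a human."
        return f"answer to {question!r}"
```

The point of the car analogy is that the language model would be only the `answer` call here; the permission checks, escalation paths, and deployment plumbing are the rest of the vehicle.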
We want them to take a car and customize it, or, even better in the case of artificial workforces, have the car evolve automatically based on the enterprise’s needs and questions so that it executes in the best way. In summary, AI, especially in enterprise, government, and mission-critical systems, is not a single-component language model or any other foundation model. It’s a much more complex platform that requires dozens of components. Enterprises should not build everything end to end; tech companies need to push harder to create enterprise-grade solutions that provide the trust and all the components enterprises need to use these systems well. The last thing I want to say is that the tech is difficult.
The tech is difficult, but it will be solved; many smart people are working on today’s technical barriers. In my opinion, we’ll reach human capabilities in almost every task within the next five to ten years. What is harder today, even harder than the tech, is change management: how we bring the technology to people and how we create trust. There is a lot of work for consulting firms and other companies to close the gap between the current situation, what we might call the old software world, and the new world of AI workforces collaborating with humans. I would say that this is even more difficult than the tech today.
Lucia: And that definitely explains the pace of adoption; it’s what drives it. Of course, we know this is not only about tech solutions. At Capgemini, we’re well placed to understand that it is also about training people, cultural change, processes, and organization. It’s a transformation across the board.
Audience: Thank you. My name is Krista Murti, from Indonesia. I’d like to take advantage of this panel to raise a question related to tomorrow’s panel on food security and hunger. You mentioned the workforce and the importance of the AI-human interface. The workforce I think is most important for hunger and food security is farmers. In Africa and Asia, farmers face many limitations: they are not well educated, they don’t have money, and they don’t have computers. My question is for both of you: will AI help the communication between AI and farmers? Will we see something that can help farmers produce more with this technology? Because at the end of the day, we have to eat the food that farmers plant. We will not eat AI; we have to eat the food. So how can AI help farmers produce more? Thank you.
Rotem: Great question. There are many problems with current technology, and both liquid foundation models and much of our own work address the problem of sustainability in running very large models. We’ve been in touch with one of the largest countries on this topic, and one thing we saw is that with today’s large models, it doesn’t make sense, from a compute-cost and salary perspective, to bridge the gaps. So the value of small models that are as smart as or even smarter than large models is very significant in these fields. Small models also let us reach fields that weren’t feasible with large models due to sustainability issues.
From a farmer’s perspective, optimizing crop management and gaining insights about soil, weather problems, alerts, and other factors can help produce more with less effort and waste. One of the biggest problems today is waste due to missing or hard-to-get data, for example on coming weather changes or ideal soil and treatment conditions. Agents can help a lot in these areas, but we need to make these models smaller, more efficient, and more sustainable so they are economically feasible in these places. This is something we and other companies are working on. It is hugely important: there are countries with more than a hundred million farmers. It’s a huge opportunity, and we need to push for advancements in this field.
Interested in learning more about the future of artificial intelligence? Watch Rotem Alaluf’s on-demand webinar, “The Future of Artificial Intelligence,” to gain deeper insights into how AI workforces are transforming businesses and the world.
Watch Now >