On-demand Webinar: The Future of Artificial Intelligence

September 11, 2024

The future of intelligence is a topic that has garnered significant attention in recent years. As artificial intelligence (AI) technologies continue to evolve, their impact on various industries and our daily lives becomes increasingly profound. Generative AI, the largest AI community, recently hosted a webinar to provide a comprehensive overview of what lies ahead in the realm of artificial intelligence.

Meet the Speakers

Ramesh Raskar: Associate Professor at MIT Media Lab, directing the Camera Culture research group. His work focuses on machine learning and imaging for health and sustainability. At Facebook, he led innovation teams in digital health, satellite imaging, VR/AR, and more.

Steve Nouri: CEO and Co-founder of GenAI Works, the largest AI community. He is a renowned AI leader and Australia’s ICT Professional of the Year. Steve has revolutionized AI perspectives while championing responsible and inclusive AI, founding a global non-profit initiative.

Rotem Alaluf: AI expert, serial entrepreneur, and Co-Founder & Tech Advisor of GenAI Works and CEO of Wand AI. As a pioneer in artificial intelligence, Rotem has led the development of multiple large-scale systems across diverse sectors, including defense, intelligence, finance, and technology.

In this webinar, the speakers discuss several critical topics shaping the future of AI:

  • AI in Enterprises: Discover how AI agents are transforming enterprises by performing multiple tasks, making decisions, and using various tools to enhance productivity and efficiency.
  • Core AI and Applied AI: Understand the distinction between core AI (like chips and cloud services) and applied AI, which tailors models to specific domains and adds layers like privacy and payments.
  • Collaborative Artificial General Intelligence: Explore the concept of collaborative AGI, where systems of AI agents work together and with humans to outperform human capabilities, enhancing compute efficiency, sustainability, security, and control.
  • Human-AI Interaction: Examine the evolving interactions between humans and AI agents, focusing on natural language communication, strategic decision-making, and the role of humans as escalation points for agents.
  • Decentralized AI: Investigate the shift towards decentralized AI, where power is distributed away from large tech players, enabling more localized and specialized AI applications across various sectors.

Steve: We have two awesome guests, one from academia and one from industry, and both have extensive experience in each. Our first guest, Ramesh Raskar, is an AI professor at the MIT Media Lab who has made significant contributions to the field of artificial intelligence. Among other topics, Ramesh led the development of automated machine learning and of private machine learning with split learning. His work focuses on improving the efficiency and privacy of machine learning, making AI more accessible and secure for broader applications in health and sustainability. Throughout his career, Ramesh has led research efforts at major tech companies like Google, Facebook, and Apple, where he applied his research to real-world challenges. His pioneering efforts in machine learning have earned him widespread recognition and numerous awards. Great to have you, Ramesh. Thanks for joining us.

Ramesh: Wonderful to be here. Thank you Steve.

Steve: And our second guest, Rotem Alaluf, is a seasoned AI expert, serial entrepreneur, and CEO of Wand AI, renowned as a pioneer in artificial intelligence. He has developed and scaled multiple AI initiatives across various sectors, including defense, intelligence, finance, and technology. In addition to founding multiple companies, Rotem serves as an advisory board member for global companies and is an AI investor and thought leader. He regularly contributes his expertise at international conferences, universities, and industry forums. He is also my co-founder at Generative AI. Thanks for joining us, Rotem.

Rotem: Thanks a lot Steve and great to be here.

What are AI Agents?

Steve: We are going to talk about an interesting topic, the future of artificial intelligence, and AI agents are going to be the main part of our discussion. So let's jump into it. Rotem, could you start by describing what AI agents are, what they can do for enterprises, and why it's critical for enterprises to adopt AI technologies at this stage?

Rotem: Sounds great. I'll actually start with why enterprises, and why artificial intelligence is important for enterprises. The current wave of AI systems we are seeing today has already created significant value for enterprises, mostly in text-rich environments like customer support, and we are already seeing its impact in fields like legal and healthcare. As AI, and specifically AI agents, get more and more sophisticated, we are seeing them enter more fields and industries and start to influence more sectors and roles. If we look at enterprises in the future, those that don't adopt business transformations are at huge risk of becoming completely obsolete. There are several core reasons for this.

AI and AI agent systems can significantly boost productivity, sometimes by dozens or even hundreds of percent today. In the future, this gap will only widen, potentially reaching thousands or even tens of thousands of percent. Companies that fail to adopt these advancements won’t be able to compete with those that do.

From a decision-making perspective, we are living in an era where we are analyzing huge amounts of data, and in order to make good decisions, we need to analyze tens of thousands, hundreds of thousands, or millions of data points. As humans, we are not really capable of doing this efficiently; AI systems and AI agents are. And, of course, new players entering those sectors with the opportunity to create new types of organizations pose risks to current enterprises. This is one of the core reasons that enterprises must start now to think very deeply about business transformation and how they adopt agent frameworks and agent systems.

Regarding the question of what an AI agent is, there are a lot of different definitions, but we can think of it as an entity in the organization that is based on AI, capable of performing multiple tasks, receiving information, making decisions, and using different tools to do so. Today, we are seeing more and more agents specialized in specific fields, like a legal expert agent, or GitHub Copilot, which you can think of as an AI agent for coding, or a data scientist agent that helps solve data science problems. Each AI agent has different expertise, different tools, different fine-tuning, and different data and problems, ideally helping us perform more complex tasks than we can with general AI technologies or large language models trained on general data, ultimately creating more productivity and impact in enterprises.
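To make that definition concrete, here is a minimal sketch of the receive-decide-act loop in Python. The tools, the rule-based decide() policy, and all the names are hypothetical stand-ins for what would be LLM calls and real integrations in a production agent, not any specific product's API.

```python
# A minimal sketch of the agent pattern described above: an entity that
# receives a task, decides which tool to use, and acts. Everything here
# (tool names, the rule-based decide()) is illustrative, not a real API.

from dataclasses import dataclass, field

def search_docs(query: str) -> str:
    return f"[stub] top document snippet for: {query}"

def run_code(snippet: str) -> str:
    return f"[stub] execution result of: {snippet}"

TOOLS = {"search": search_docs, "code": run_code}

@dataclass
class Agent:
    name: str
    expertise: str
    memory: list = field(default_factory=list)

    def decide(self, task: str) -> str:
        # Stand-in for an LLM call that picks a tool for the task.
        return "code" if "data" in task else "search"

    def act(self, task: str) -> str:
        tool = self.decide(task)
        observation = TOOLS[tool](task)
        self.memory.append((task, tool, observation))
        return observation

legal_agent = Agent(name="legal-expert", expertise="contract law")
print(legal_agent.act("summarize the indemnity clause"))
```

In a real system, decide() would be a model call, and each specialized agent would carry its own toolset, data access, and fine-tuned model.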

Steve: It’s a good intro. We all know that artificial intelligence (AI) is going to be very helpful. It’s just that we have seen it in many different cases, and specifically as practitioners, we are excited about technologies and how they can help us going forward. But there are always challenges and lots of reasons why some of these companies are not leveraging AI systems. What are the challenges they might face if their adoption pace lags behind?

Rotem: I can start, and Ramesh, feel free to add to this. In general, I believe the risks are existential for enterprises that lag behind: the risk is going out of business in the coming years, because of the productivity gains and better decisions available to companies that are deeply using AI systems. So, companies that don't go down this path in a very strategic way face a real existential risk.

Ramesh: It's definitely an exciting time but also a time when organizations can fall behind. In addition to what Rotem said about agentic AI, it's very important to realize the distinction between what I call core AI and applied AI. There's so much excitement about core AI, whether that's chips or cloud services, but going forward, it's going to be a very small piece of the AI puzzle. Applied AI sits on top of core AI, tailoring the models to your domain application and adding other layers like privacy, payments, data pipes, and tooling. Finally, there's the application layer itself, right? So there's a distinction between core AI and applied AI, and I would say even within applied AI, there's a distinction between screen AI and dimensional AI.

I like to call screen AI something you would just add as a chatbot or summarization tool, as opposed to dimensional AI, where you’re really looking at your data, involving the processes of your organization. You might be in the three-dimensional world, dealing with real businesses outside your organization. Dimensional AI is really the place where most companies are going to thrive, as opposed to screen AI. Sometimes it’s a joke; you meet organizations and ask, “What’s your AI strategy?” and they say, “Oh, we have a chatbot on our website for our customers, so we are in the AI business.” I think that would be a risk for a lot of organizations unless they deeply embed AI in their own workflow. It’s a lost opportunity.

Coming to AI agents, I agree with Rotem that this is a very fast-moving space; any definition that anyone can propose today is going to change over time. There are two dimensions of AI agents that I think are very interesting. One is how intelligent an individual AI agent is getting, and the other is how often AI agents are talking to each other. You can plot them on two axes: the intelligence axis and the network axis.

As much progress as we’re making on making individual agents smarter, I think there’s a limit to how far that will go. But if the AI agents are networked and they talk to each other, they’ll get smarter together. This is also true for us human beings. We come with some hardware when we are born and a little bit of software, but as we grow, the software is what really makes us more intelligent as we interact with other human beings. We go to good universities, good institutions, interesting places, and meet interesting people, and that’s how we become better together, even as humans. So I think these are the two axes for agents: either intelligent or networked.

I wouldn’t be surprised if we take a dimensional route to build an agentic world where, in the beginning, the agents are very simple. They do travel, finance, understand the world a little bit, have some personal information about myself or my organization, and they all start talking to each other. If you take the financial world or health, the agent for an insurance company will talk to the AI agent from the pharmaceutical company, the hospital company, and so on. As they start talking to each other, they get better together. Those kinds of AI agents that are networked and talk to each other have the possibility of becoming even smarter than AI agents that are trying to become smarter on their own.

I think that’s where a lot of low-code, no-code platforms are going to come in because I would like to run my own agent and configure it the way I want. It’s like buying a phone in the beginning. If a company markets, “Hey, we have the most amazing phone for you, but we won’t let you do anything else with it other than what the hardware and software come with,” that’s a very different phone than one that says, “You know what, we’ll give you something that has reasonable hardware and a little bit of software, but as you start interacting with more apps and services, your life will become richer because of that.”

So I just want to make a distinction between this notion of screen AI versus dimensional AI and also AI agents that are getting better on their own versus getting better together.

Rotem: I would like to add one point on top of that: the path to intelligence. The way we are looking at it as a company, and I believe the industry is starting to get there too, is a lot more collaborative. We call it collaborative artificial general intelligence: taking a system of agents that together outperform human capabilities, rather than just individual agents. It has huge benefits from a compute and sustainability perspective, of course, but also for crucial topics like security and the ability to control those types of systems. When the interactions between AI agents happen in human language, it is much easier to control them and to ensure that the agents are serving us and not eventually controlling us, in scenarios that we are far from today but that could pose existential risks to humanity if not dealt with properly once the technology becomes much smarter than it is now.

So, I believe the industry needs to push more and more towards what we term collaborative artificial general intelligence: systems of networked agents that by themselves bring intelligence and learning to the AI agents. Agents today learn from human feedback; we want them to learn from agent feedback as well. Using reinforcement learning on feedback from other AI agents, so that agents learn from each other rather than only from humans, can speed up a lot of what we are seeing today.
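As a toy illustration of agents learning from agent feedback, the sketch below has a critic agent score a worker agent's answers, producing (prompt, answer, reward) triples of the kind a reinforcement-learning fine-tuning step could consume. Both the worker and the critic are heuristic placeholders standing in for real models; nothing here is a specific framework's API.

```python
# Toy sketch of agent-feedback learning: a critic agent scores another
# agent's answers, producing (prompt, answer, reward) triples that an RL
# fine-tuning step could consume. Both functions are placeholders.

def worker_answer(prompt: str) -> str:
    return f"draft answer to: {prompt}"          # stand-in for an LLM call

def critic_score(prompt: str, answer: str) -> float:
    # Placeholder critique: reward answers that reference the prompt.
    return 1.0 if prompt in answer else 0.0

def collect_feedback(prompts):
    triples = []
    for p in prompts:
        a = worker_answer(p)
        r = critic_score(p, a)
        triples.append((p, a, r))
    return triples

# In a real system these triples would feed an RL update (for example a
# policy-gradient step); here we just print them.
for triple in collect_feedback(["explain split learning"]):
    print(triple)
```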

How should enterprises integrate AI agents into their operations?

Steve: That’s a great point. I want to go back to enterprises and how they can leverage this. I would like to know what the most critical changes in enterprise operations will be if they fully integrate these AI agents.

Ramesh: I'll start and then hand it to Rotem. Any transformative technology, artificial intelligence included, goes through three phases: improve, transform, and then disrupt.

In the improve phase, which is the first phase, you take what you have—your own workflows, whether they are financial workflows, people workflows (HR), or supply chain workflows—and you try to make them better. You improve efficiency, reduce costs, improve safety, reduce fraud, and all those things. That’s the first phase, and that’s where we are at.

The second phase is kind of transform, where we say we don’t want a faster horse but we need a car. These are some of the things that we cannot see right now, but in the AI world, we can kind of already see it. Rotem kind of picked the phrase “collaborative AI,” right? That’s where the world is really going to move.

And then there's disrupt, which involves more futuristic scenarios where a single-person entity could run multi-billion-dollar businesses. Or I could create five digital twins of myself, or of every employee, for example, and they'd interact with each other. We'll have a flipped employment model where we don't sit there for eight hours a day looking at the screen and working; it's flipped because most of the time the agents are doing the work for you, and we come in almost like air traffic controllers, as employees, to check that they're doing the right things.

Again, this flipped employment model means I’m not just working for one company but I might be working for 10 different companies because agents are doing most of the work and I’m doing the air traffic control. So I would say right now we’re in the improve phase, the first phase of any emerging technology, and this itself is worth trillions of dollars. That will play out over the next three to five years, but very quickly we’ll switch to the next two phases.

Rotem: So those are great points, and I would just like to add another layer which is crucial in order to run all the execution for AI agents, especially when we are in collaborative networks of agents, which is data. How are we enabling those agents to have proper access to data? If we look at enterprises today, they have the data but not necessarily the connectivity between the data or the relationships between the data—how each data point and piece influences the other.

So, this part of how we are bringing all the data needed for the agents to execute into the same operating system or execution system is also crucial for us to be able to gain huge productivity gains. I think we will see a bit more centralization of data in enterprises. Software will get more centralized, and this human-AI collaborative platform will become kind of a next operating system or another operating system on top of the computer’s operating system. All the data, execution, and workflows we are running will actually happen in the same place.

That’s, I believe, an ideal situation to start seeing a decrease in the number of databases and data sources in organizations, enabling agents to run and execute in a much more efficient way.
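As a rough sketch of that "same execution system" idea, the hypothetical DataLayer below registers data sources once, records the relationships between them, and lets every agent query through one interface instead of its own silo. The class, names, and structure are illustrative assumptions, not an existing product.

```python
# Illustrative sketch of a single data-access layer for agents: sources
# are registered once, relationships between them are explicit, and every
# agent queries through the same interface. Names are hypothetical.

class DataLayer:
    def __init__(self):
        self.sources = {}        # name -> lookup function
        self.links = {}          # (source_a, source_b) -> relationship

    def register(self, name, lookup_fn):
        self.sources[name] = lookup_fn

    def link(self, a, b, relationship):
        self.links[(a, b)] = relationship

    def query(self, source, key):
        return self.sources[source](key)

layer = DataLayer()
layer.register("crm", lambda k: {"customer": k, "tier": "gold"})
layer.register("billing", lambda k: {"customer": k, "balance": 120.0})
layer.link("crm", "billing", "joined on customer id")

# Any agent now reads both former silos through one interface.
print(layer.query("crm", "acme"), layer.query("billing", "acme"))
```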

How will humans interact with AI agents?

Steve: That's a great point. I mean, I do agree; I can see the same trends when I talk with my peers and friends from industry and academia. But one of the questions I get all the time is, how do you see the interactions between humans and AI agents? Because you'll have a lot of agents talking to each other, and hopefully they're talking in human language; as Eric Schmidt mentioned, if they stop talking in our language, we should just pull the plug. But anyway, assuming they do, how are we going to be involved, and what will our interactions look like?

Rotem: So I definitely agree that interactions will move more towards natural language interaction between AI agents, and also humans and AI agents, of course. I think we will start to see a shift with people moving more towards strategic decisions. As AI agents become more sophisticated and able to execute more tasks by themselves, people will focus more on making strategic decisions and helping the agents improve.

In this network, one important topic to develop for AI agents is the ability to self-criticize their results or have other AI agents criticize them. In those situations, we will want to involve a person who can help the agents make the best decision. This will help the agent execute a task and improve for future tasks, reducing the need for human involvement over time.

I believe we will see several shifts over the next five years. First, people will move more towards strategy. Second, people will become escalation points for agents during execution. AI agents will call on human experts when they get stuck. For example, if Steve asks a question and several agents collaborate to provide an answer, but one of them gets stuck, it might go to Ramesh, the expert in that specific area. Ramesh will work with the agent to get the answer, and then the agent will continue with its task. Over time, with a feedback loop in place, AI agents will need less and less human involvement and escalation points, allowing people to focus more on strategy.
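A minimal sketch of this escalation pattern might look like the following: the agent attempts a task, escalates to a human expert when its confidence falls below a threshold, and stores the resolution so that similar tasks need no escalation later. The confidence estimate and the expert routing are stand-ins, not a real implementation.

```python
# Sketch of the escalation pattern described above: attempt, escalate on
# low confidence, and record the expert's answer as feedback so future
# escalations decrease. Confidence and expert lookup are stubs.

resolved_cases = {}   # task -> expert-approved answer (the feedback loop)

def attempt(task: str):
    if task in resolved_cases:                  # learned from past escalation
        return resolved_cases[task], 1.0
    return f"tentative answer to: {task}", 0.4  # stand-in confidence

def escalate_to_expert(task: str) -> str:
    # Stand-in for routing the task to the right human expert.
    return f"expert-verified answer to: {task}"

def run(task: str, threshold: float = 0.7) -> str:
    answer, confidence = attempt(task)
    if confidence < threshold:
        answer = escalate_to_expert(task)
        resolved_cases[task] = answer           # fewer escalations over time
    return answer

print(run("classify this contract clause"))    # escalates the first time
print(run("classify this contract clause"))    # answered from stored feedback
```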

Steve: Can we say that human in the loop is dead, or is it still alive? Ramesh, would you like to take that?

Ramesh: I would say exactly the opposite. Rotem talked about collaborative AI, and that’s happening at multiple scales. He mentioned two different types of collaborative AI: agents collaborating with each other and agents being able to decide when to bring a human expert into the loop. This will occur at different layers: simple tasks are taken over by agents, more complex tasks are handled by collaborative agents, and even more complex tasks involve humans in the loop, and so on.

I think we, as humans, have to move away from our classic roles of solving day-to-day tasks and start adopting a system thinking mindset, which I like to call the air traffic controller mindset. Just as it doesn’t make sense to aspire to be a librarian or a travel agent today, the same applies to our human careers. The most important thing to remember is that for individuals to thrive, we have to take control of our own AI. Not because we fear someone else will misuse it, but just as we take care of our own email and calendar today, we will need to take control of our own AI.

We need to learn how to use no-code and low-code tools to configure our AI, determine what gets triggered, and decide what our agents should collect and from which sources. This agency will still remain with humans, and we all need to become very familiar with these digital systems, just as we are familiar with calendars and email today.

What are some of the limitations with large language models (LLMs)?

Steve: That's a great point. Specifically about LLMs, which are the enablers of AI agents at this point, I would like to talk about some of the limitations we are seeing. As much as we are all excited and using them for different tasks, like summarization and finding insights in huge documents, we are also seeing many people sharing tweets about their limitations and the things they get wrong. Every now and then, I see somebody trick an LLM with a riddle or something complex that it cannot solve, and then claim these models are probably not ready to be used.

So my question here is to you, Ramesh, actually because of your extensive background in the research area: What are some of the limitations of the current LLM technologies, and what breakthroughs are necessary in order to overcome these?

Ramesh: That’s a great question. As you know, my group at MIT invented AutoML, the notion that software can write software or AI models can create better AI models. We invented that almost nine, ten years ago now. We also invented the notion of privacy-preserving machine learning, which is how AI can work across data silos. These are very critical problems. We have made a lot of progress in these fields, but we need to go much beyond that.

I would also say that LLMs today, such as ChatGPT or BERT, are magical. At the same time, they are mimicking something that humans created: language. Language is what we created with our two-pound, 20-watt brain. It's kind of okay; it's not the most complex system in the world. But as we move forward, the language of physics, the language of biology, and the language of populations are where we need to go.

So, it looks like we have almost cracked the language of humans with today's language models, but not the language of all these other things. Of course, I'm using "language" to mean something like the semantics or the rules. For example, if I'm driving down a highway and I see a truck with a mattress on top, and the mattress is falling, I have a mental model, a world model, of how this mattress is going to bounce on the highway and how I should avoid it by driving straight, going around it, or stopping. This is the language of physics. We would have to encode the physical interaction of every object with every other object, and that encoding is not easy.

If you go even further, think about finding a cure for a disease or developing drugs. We just don’t have the language of biology and the language of biochemistry yet. I think that’s where it goes back to having to control our own AI. I’m not going to call up OpenAI or Anthropic and say, “Here’s all my data, can you please build something better for me?” because we are going to be in this interim regime where we have to fiddle around with it ourselves, with our best employees, and see what we can do to realize the fact that we don’t have a world model.

That’s why I’m very excited about the world of artificial intelligence (AI) that I can program myself and control myself. So, I would say as amazing as LLMs are, they are mimicking human knowledge or human language, but most of the world’s knowledge, most of science, is not encoded in pure text. The stuff that we need to invent, the stuff that we need to deploy, we cannot describe with any straightforward language right now. Just think about driving; you can’t just write a book about how to drive because there are so many scenarios, so many combinatorial scenarios that cannot be described with language. So, as amazing and exciting as language models are, there’s so much more to do when it comes to customer behavior, financial implications, physical interactions, biochemistry, and so on.

Rotem: Just adding on top of those great points: if we're looking at intelligence, eventually we are trying to create intelligence, and knowledge or language is only part of it. The human brain has many other capabilities beyond encapsulating knowledge. These are things that current language models are not exceptional at, such as logic, understanding, reasoning, planning, and critical thinking, all topics at which the human brain excels today. These are some of the limitations we are seeing: the models capture only the very narrow part of encapsulating knowledge, which is an exceptional and valuable thing to have, but still not a comprehensive intelligence system.

I like to look at it from the perspective that the human brain is one of the most interesting concepts we can compare things to. The human brain has different lobes, each responsible for different tasks. There is a lobe responsible for language or knowledge, but there are others responsible for decision-making, planning, reasoning, sensations, etc. From an efficiency perspective, we believe that we can create more capable models by decomposing these tasks in a similar way to the human brain. We can have different components that excel in different topics, whether it’s knowledge, reasoning, planning, or self-awareness. This allows agents to improve and diverge from each other over time, addressing a lot of different problems.

A multi-component system, or multi-component intelligence, has huge advantages when addressing problems like hallucinations. There is a mechanism of mutual compensation between different components, as in engineering more broadly, and there are self-criticism processes in which different components can criticize each other. This also mitigates other problems of large language models that we're seeing today, such as weaknesses in reasoning and planning and divergence on very long tasks.
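To illustrate the multi-component idea, here is a hedged sketch of a pipeline with separate planner, solver, and critic components, where the critic can reject a draft and force a revision. All three components are stubs standing in for specialized models; the point is only the cross-criticism loop, not any particular architecture.

```python
# Sketch of a multi-component system in the spirit of the brain-lobe
# analogy above: separate planner, solver, and critic components, where
# the critic can reject a draft and trigger a retry. All three are stubs.

def planner(problem: str) -> list:
    return [f"step 1 of {problem}", f"step 2 of {problem}"]

def solver(step: str, attempt: int) -> str:
    return f"solution to {step} (attempt {attempt})"

def critic(step: str, solution: str) -> bool:
    # Placeholder check: accept only revised attempts, to show the loop.
    return "attempt 2" in solution

def solve(problem: str, max_attempts: int = 3) -> list:
    results = []
    for step in planner(problem):
        for attempt in range(1, max_attempts + 1):
            draft = solver(step, attempt)
            if critic(step, draft):        # cross-component criticism
                results.append(draft)
                break
    return results

print(solve("reconcile the quarterly ledger"))
```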

What AI strategies should enterprises adopt to ensure they stay competitive?

Steve: I agree, and I believe that having said all of that, we are seeing a lot of value that AI is bringing to the market. It’s just obvious that companies leveraging artificial intelligence (AI) are showcasing some really good results, not only from a productivity perspective for individuals but also for the business in general. What strategies should enterprises adopt to make sure that in this very competitive market they not only survive but also thrive and are able to leverage the full potential of AI?

Rotem: I can start with that, and Ramesh can continue. As Ramesh shared, enterprises often say, "Okay, we've got a chatbot on the website, we have our AI strategy, we are an AI-first company, everything is AI-driven," but actually, it's not that simple. It's easy to get very tiny percentage gains in productivity that way. But if we want to stay competitive over time, the change needs to be much deeper. We need to start acting on the topics we've talked about, like collaborative systems, how agents are embedded in our day-to-day life, and how, over time, we shift more tasks to agents for execution power. We are changing the ratios between people and agents, because that's how companies become competitive. If in the future a company can run a ratio of one person to 10,000 agents, it will necessarily be more productive than a company with a one-to-one or one-to-ten ratio, provided the agents are good enough.

And it's not a simple process. It's not a simple solution, or even a solution where you can just take some small thing, implement it, and run it. It's a deep transformation path. We almost need a change, in my opinion, on a scale similar to the digital transformation when we moved to computers. The current structure and infrastructure in enterprises are not necessarily ideal for the future of collaborative human-AI systems. There need to be deep transformations: in data and centralization, in how people collaborate with agents, in how to automate more and more workflows, in how to run those escalation processes, and in how to improve agents so they become better at what we need as organizations. So there is a lot to do.

When we want to reach the productivity levels that will be available in the future, I always tell customers to start prioritizing their use cases and things that can create value for them. Don’t start with the most difficult thing; start with a use case, then another use case, and another. In parallel, think strategically about which structural changes need to be made in the enterprise to enable human-AI collaboration at scale.

Steve: Awesome. You almost tackled two questions in one: how enterprises are going to use it and how to start, which is a very important question. Many companies, and even smaller businesses, have these questions: where should they start? So, Ramesh, I would love to hear your take.

Ramesh: I think Rotem covered most of it. I’ll just rephrase it by saying that it’s basically people, process, product. You heard him: make your people AI-first, then your process AI-first, and then worry about making your product AI-first.

For people, make sure—especially if you’re a non-tech business, whether you’re in healthcare, energy, ports, logistics, or whatever—that at least one person reporting to the CEO is the head of AI. They shouldn’t be somewhere down the line. This is a very common problem, for example, in healthcare, where there’s a CIO who understands SaaS and so on, but the head of artificial intelligence (AI) is much further down the line. You need the head of AI reporting straight to the CEO, and most of the organization has to be trained to use low-code, no-code, do hackathons, use Python code, use other software platforms, and get people ready. So, people first.

Then there’s the process, as Rotem said. Where’s your data? Is it centralized? Is it homogenized? What do you want to access? Do you want to collaborate?

Finally, it comes to the product. Product is the most difficult part to make AI-first, so don’t worry about that initially. Start by making your whole company AI-first from a human resources point of view, and then make your process AI-first.

What is the future of AI in the next three to five years?

Steve: Awesome. With that note, I would like to get to our final thoughts about the future. How does it look? I mean, that’s always my favorite question, and when I get it, I always try to dodge it, but I would love to hear your thoughts about the future of AI in maybe the next three to five years.

Ramesh: Go for it, Rotem.

Rotem: That’s the part where you have the most to lose because if your prediction is completely off…

Steve: We’ll keep both of you accountable. Five years from now, I’ll do this live again.

Rotem: I will say it like this: In order to get to collaborative systems at scale and to improve the capabilities of different agents, multiple things need to be done in both applied AI and fundamental AI, as well as in the engineering part of enterprises and how to structure solutions. But I believe that, especially if we are aiming for what I termed before as Collaborative Artificial General Intelligence, we will be able to achieve human-level performance in many different topics and fields.

In the first three to five years, I believe agents will collaborate a lot with people in order to learn from them. Over time, they will become more and more independent, requiring fewer and fewer escalations. We need to remember that as people, we also escalate questions when we don’t know the answers. So, agents will escalate more at the beginning, but eventually, I believe they will reach a level of escalation similar to humans.

Therefore, I can predict that in five years, we will have types of systems in enterprises that collaboratively will be able to perform in a way similar to humans.

When will we achieve Artificial General Intelligence?

Steve: How about Artificial General Intelligence? Are we going to get to Artificial General Intelligence in five years?

Rotem: I like the collaborative way to get to AGI. So, a system of dozens or hundreds of thousands of AI agents together outperforming human capabilities. For this question, I believe we will be in the ballpark of those capabilities in five years or so.

Steve: Alright, awesome. Ramesh.

Ramesh: I guess I definitely agree with what Rotem said, but I would add that a lot of our research here at MIT is about decentralized AI. The philosophy there is that there is too much power right now with large tech players, which may be fine for well-understood modalities like text, speech, and video. But as we start thinking about creating intelligence in every sector, is there a foundation model for MRI? Is there a foundation model for the healthcare system of Indonesia? Is there a better model for how the United Nations works, and so on? I don’t think it’s going to happen by one company centralizing all the data, centralizing talent, centralizing models, and also reluctantly centralizing the governance of it. I think this is going to happen in a decentralized manner.

There are two reasons why this will happen. One is that the world of AI PCs is coming, and they are going to be very capable. You'll have compute on the edges, and that's where exciting things are happening now, as opposed to on centralized servers; we'll be doing things on our home office PCs. The second big trend is this notion of agents. All of us will have agents, and they'll talk to each other peer-to-peer, as opposed to through a hub-and-spoke model. Because of these two reasons, I think decentralization is inevitable.

Those are the inputs and reasons why it will happen, but the way it will happen and what we need to solve are things like being able to do AI across data silos, making sure the artificial intelligence (AI) you’re using is responsible, and ensuring it’s real and verifiable. There should be some way of creating data markets where we can pay for models just like we pay for apps and their services. Finally, having some kind of air traffic control board or dashboard that allows me to interact with all these things is crucial.

We think decentralization has these four big elements: privacy, verifiability, markets, and exchanges. We believe this will take us to a world where there will be dozens of agents working on my behalf, doing interesting things. I’m not so worried about AGI because there has to be some market force for something to become intelligent, and unfortunately, humans are the ones who care about money and market forces. So, I think any elements of really intelligent AI will still be at the service of humans.
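As a toy contrast with the hub-and-spoke model mentioned above, the sketch below has agents hold direct references to their peers and exchange messages without any central server mediating. The classes and message format are hypothetical.

```python
# Toy sketch of peer-to-peer agent messaging, in the spirit of the
# decentralization argument above: peers hold direct references to each
# other, and no central hub mediates the exchange. Names are illustrative.

class PeerAgent:
    def __init__(self, name):
        self.name = name
        self.peers = {}                 # direct, peer-to-peer links
        self.inbox = []

    def connect(self, other):
        self.peers[other.name] = other
        other.peers[self.name] = self

    def send(self, peer_name, message):
        self.peers[peer_name].inbox.append((self.name, message))

insurer = PeerAgent("insurer-agent")
hospital = PeerAgent("hospital-agent")
insurer.connect(hospital)               # no hub in between
insurer.send("hospital-agent", "request: claim records for case 42")
print(hospital.inbox)
```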

I’m more concerned about how we get a whole population of 8 billion people to start thinking about becoming the AI CEO of their own lives, as opposed to AI taking over. It’s such an exciting time. I remember when I was in college, my family said you have to learn typewriting. Then, as we got older, people said, “Oh, you have to learn this thing called the web.” Now, I would say everybody has to learn this thing called AI and make sure you can manage your own AI.

Steve: Thank you very much, Ramesh and Rotem. Maybe at some point, we won’t need to learn anything. AI is going to take over jobs, and we’ll enjoy our lives on the beach.

Rotem: I think that, as Ramesh said, we will become CEOs of AI. When we discussed people moving to strategy, eventually people will become CEOs of huge AI groups, or ideally that's what will happen if we build the system properly. Just very quickly, two points from the audience messages. First, AGI, or artificial general intelligence, is the ability of AI systems to achieve capabilities similar to humans across broad topics. Second, a very important question was raised about security, which we've discussed a bit. The collaborative network of agents creates a lot of opportunities to make the system safer, because we can control, see, and understand the interactions between the different agents, since they happen in human language. When we have different systems, each an expert in a different field, interacting with each other, it's much easier to provide another efficient layer of monitoring. This will enable us to put more guardrails on these systems and push AI safety to the next level, ensuring that they remain safe over time.

Steve: That’s great. Safety and privacy are all important issues, and it probably needs another hour of conversation, maybe another live session. But as a wrap-up, Ramesh, what is an interesting project that you’re working on that you would like to share with our audience, and where can they find you and reach out?

Ramesh: Absolutely. I’m Ramesh Raskar at MIT. I encourage all of you to join our weekly webinars on decentralized AI. Just search for MIT decentralized AI. Actually, there’s one tomorrow at noon Eastern time. I’m working on two main things. One is this notion of decentralized AI, and the other is this notion of AI as a scientist. Right now, AI is more like a librarian or a knowledge base with ChatGPT, but we need to convert that into AI as an engineer and eventually AI as a scientist. So, those are the two main things we’re working on: how to create new inventions, new imaging systems, and new medical devices using AI as a scientist and decentralized AI. It’s such an exciting time to be in this space. When I meet entrepreneurs who are building really amazing tools that we use in our classes and in our work at MIT, we just hope there are many of you in the audience who are building various kinds of tools to get us to this utopian world of highly collaborative AI and also a world where AI can actually be our partner in inventing new things and solving societal problems.

Steve: Awesome. That’s super exciting. Ramesh and Rotem, I know that each of you is doing some great stuff. Could you share a little bit about it and where people can use those features or products?

Rotem: Sure. So in general, at Wand AI, we’re working on several layers because we believe there are several core problems to solve in order to achieve a scalable, collaborative human-AI system that works properly and reaches the productivity levels of tens of thousands of percent that we want to achieve eventually.

These different layers include, first of all, the fundamental part: how we create models that are smarter, know how to plan better, reason better, and criticize themselves better. We have an exceptional group of researchers working on this part. Then, how we are able to create specific agents that are experts in specific fields. Ramesh created AutoML with his group. Another direction we’re working on is auto AI agents—how we can say, “I want a perfect data scientist,” and eventually get a data scientist. This is another exciting topic of research we are working on in the company.

Then, there are a lot of processes in collaboration between agents: how we scale this technology, and how we enable dozens, hundreds, and thousands of agents to collaborate efficiently with each other. Sometimes, when we interact with each other, we forget how many processes we carry out in a very natural way. We can be 30 people in a conversation and, in general, we know when to speak, when to respond, and when not to respond. Those problems are not solved yet for agents. We are also working on how agents collaborate with humans efficiently, and on how we centralize the databases and the information to give agents the opportunity to execute properly. Some of the data can also sit in other places, but we want to enable the agents to access it in a very efficient and convenient way, so they can build answers and make decisions based on this information.

These are just some of the topics that we are working on. We have multiple versions, and one of them is also available online at One. A lot of exciting things are happening in the research part currently.

Steve: Awesome, that sounds good. I’m really excited about all of it. At Generative AI, we are also building the largest and most interesting generative AI community in the world. We are currently raising funds through crowdfunding. Please check it out, join us, and help us do this together. Thank you very much, everyone, for tuning in. Looking forward to another session very soon. Thank you.

Watch the LinkedIn Live session here.
