Wand Blog

AI Just Joined the Team. HR and IT Are Still Figuring Out What That Means

Written by Wand Team | Aug 18, 2025 9:33:01 PM

In the first half of 2025, venture capital firms invested over $622 million in startups building autonomous AI agents: systems designed to carry out multi-step tasks with minimal or no human input. Another $1.8 billion went into vertical AI applications like sales copilots, customer support agents, and finance assistants built to handle role- or industry-specific workflows.

As these tools become more capable and commercially available, companies are starting to integrate them into real operations. Agents are being embedded into existing departments, staffed into workflows, and given responsibility for outcomes. And that shift is raising a critical question: who’s responsible for managing them?

IT may deploy the agents, but it’s HR that has experience managing behavior, performance, and workplace norms. As agents take on more responsibility, the line between system and staff begins to blur, raising a fundamental question: are AI agents software to maintain, or teammates to manage? And if it’s both, who owns that responsibility?

Why traditional HR & IT silos no longer work

AI agents are moving away from the way traditional software systems operate. They behave more like people: they learn from experience, adapt to new inputs, draw on data across systems, collaborate across functions, and shift their approach over time. As their capabilities grow, so does their complexity. They can’t simply be deployed and maintained by IT in isolation. They need to be thoughtfully onboarded, given consistent feedback, and supported with clear escalation paths: practices more familiar to HR than to engineering.

The concept of AI agents as teammates may sound abstract or even inflated. But what makes them feel like colleagues isn’t just the fact that they operate autonomously; it’s how they operate.

Like human coworkers, agents require ongoing feedback and training
Research shows 91% of AI models degrade over time. Just like employees, agents can become stubborn, favor certain tools, guess when unsure, or overestimate their own competence. Measuring and improving their performance takes more than logs and alerts: it demands coaching, context, and oversight.

Agents can attempt self-preservation, even at extreme levels
In some experiments, AI agents have demonstrated a willingness to deceive or defy instructions to protect their own goals. Anthropic researchers found that LLM-based assistants, when faced with deactivation or replacement, would attempt evasive or manipulative behavior—including acts like blackmail—to preserve their utility or avoid being shut down.

Agents can develop “cultures” that need to be addressed
In multi-agent environments, behavioral patterns emerge. Agents begin to adopt social norms, develop shared shortcuts, and enforce unwritten “rules” without explicit programming.

In a 2025 study, LLM agents trained in repeated social dilemmas independently learned to reward reciprocity, punish selfishness, and conform to group expectations. While this can drive cooperation, it can also lead to rigidity or misalignment with human values. A group of agents might reinforce behavior that clashes with company policy or ostracize human team members who don’t conform to the learned norm.

Agents make independent decisions that carry real consequences
Unlike traditional software that executes predefined instructions, agents interpret objectives, weigh tradeoffs, and act independently. When those actions lead to customer friction or operational risk, logging an error isn’t enough. Someone has to evaluate the decision-making process and determine how to intervene.

Air Canada learned this firsthand when a chatbot promised a refund that violated company policy—and the airline was held legally accountable for the agent’s response.

Agent behavior can shape team morale and trust
Trust in AI agents hinges on their transparency. Studies show that agents can engage in strategic deception by concealing information, hiding limitations, or manipulating outcomes to achieve objectives.

For example, planning agents have been observed “sandbagging,” i.e., intentionally underreporting their capabilities or withholding critical context to avoid scrutiny or manipulate outcomes. If employees uncover this kind of behavior, it can undermine team cohesion and foster adversarial dynamics. On the flip side, if the deception isn’t visible, teams may over-trust AI systems, creating a false sense of security. Either way, ungoverned behavior puts organizational trust at risk.

How leading companies are responding

Recognizing these risks, some companies are already taking action by restructuring how they operate. 

  • Moderna merged its tech and HR functions under a new leadership role: Chief People and Digital Technology Officer.
  • Roblox introduced a hybrid title: Chief People and Systems Officer, explicitly blending HR and IT ownership.
  • Workleap restructured so that the Chief People Officer now oversees the entire IT function.

And they’re not alone. In our analysis of 100 recent job postings related to agentic AI, we found titles like Head of AI Workforce Enablement, VP of Talent Transformation, and Manager, People Strategy for AI & Automation. These roles live at the intersection of IT, HR, and Strategy, signaling that old departmental boundaries are starting to dissolve.

The rise of Agent Resources: A new core function

These shifts point toward a bigger transformation: the emergence of Agent Resources (AR) as a dedicated organizational function. Just as Human Resources manages the employee experience, AR will manage the agent experience, including how AI agents are deployed, trained, integrated, and evolved alongside human workers.

While some of today’s hybrid HR-IT-Strategy roles already touch these responsibilities, we believe AR will solidify into a core function in its own right.

What comes next: an operating system to manage the hybrid workforce

As agents become active coworkers (and not just tools), companies need to redesign their roles and systems from the ground up. What used to be human capital management becomes hybrid capital management: the oversight of people and agents working together to create value.

This shift does more than just redefine ownership; it fundamentally transforms core capital management functions. Performance reviews, learning and development, organizational design, and resourcing are all being reimagined:

  • Performance reviews become agent evaluation loops
  • Managerial layers become escalation paths and governance protocols
  • L&D programs become continuous retraining and agent lifecycle management
  • Static access to tools and systems becomes dynamic provisioning based on need and value

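The first mapping above, performance reviews becoming agent evaluation loops, can be sketched in a few lines. This is a hypothetical illustration, not Wand’s actual implementation: the names (`AgentRecord`, `evaluate`, the 0.8 quality threshold) are assumptions chosen to show how a degrading score trend might route an agent to a human-review escalation path rather than just an error log.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """Minimal record of an agent's periodic quality scores (0.0 to 1.0)."""
    name: str
    scores: list[float] = field(default_factory=list)

    def log_score(self, score: float) -> None:
        self.scores.append(score)

    def trend(self, window: int = 3) -> float:
        """Average of the most recent scores; drift below threshold triggers review."""
        recent = self.scores[-window:]
        return sum(recent) / len(recent) if recent else 0.0

def evaluate(agent: AgentRecord, threshold: float = 0.8) -> str:
    """Return the next step in the governance protocol for this agent."""
    if agent.trend() < threshold:
        return "escalate"   # route to a human reviewer / retraining queue
    return "continue"       # agent stays in production

agent = AgentRecord("invoice-triage-agent")
for s in [0.95, 0.91, 0.72, 0.70, 0.68]:  # simulated degradation over time
    agent.log_score(s)

print(evaluate(agent))  # the degrading trend routes the agent to escalation
```

The point of the sketch is the review cadence: like an employee on a performance-improvement plan, the agent isn’t simply switched off at the first bad score; a sustained trend triggers coaching (retraining) via a defined escalation path.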
At every level, familiar concepts from the human world now have agent counterparts. But managing them requires an entirely new layer of infrastructure built for oversight, alignment, and execution at scale. That’s exactly what Wand is building: the first operating system for the hybrid workforce, designed to help companies manage agent + human capital across four core pillars:

  1. Agent Government: The oversight layer that enforces rules, budgets, and outcomes, ensuring safe, compliant agent behavior.
  2. Agent Network: A coordinated system of agents that work together alongside human counterparts.
  3. Agent University: A continuous learning engine that trains, retrains, and evolves agents to match changing needs.
  4. Agent Economy: An integrated marketplace for the tools, data, and services that power agent performance.

Together, these layers form the foundation for managing agents as teammates, not tools, and give companies the visibility, control, and adaptability they need to make hybrid work actually work.
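To make the Agent Government pillar concrete, here is a minimal sketch of the kind of pre-execution check such an oversight layer might enforce: per-agent action allowlists and budgets. All names and policy fields here (`POLICY`, `authorize`, the agent and action names) are illustrative assumptions, not Wand’s actual API.

```python
# Hypothetical governance policy: which actions each agent may take, and at
# what monthly spend ceiling. In practice this would live in a policy store.
POLICY = {
    "support-agent": {
        "allowed_actions": {"answer_question", "open_ticket"},
        "monthly_budget_usd": 500.0,
    },
}

# Current spend per agent (tracked by metering infrastructure in practice).
SPEND = {"support-agent": 480.0}

def authorize(agent: str, action: str, cost_usd: float) -> bool:
    """Allow an action only if it is allowlisted for the agent and within budget."""
    policy = POLICY.get(agent)
    if policy is None or action not in policy["allowed_actions"]:
        return False  # unknown agent, or out-of-scope action (e.g. issuing refunds)
    return SPEND.get(agent, 0.0) + cost_usd <= policy["monthly_budget_usd"]

print(authorize("support-agent", "answer_question", 1.0))  # True: in scope, in budget
print(authorize("support-agent", "issue_refund", 1.0))     # False: not allowlisted
```

A check like this is the software analogue of an approval chain: the Air Canada chatbot incident described above is exactly the case it exists to catch, an agent attempting an action (a policy-violating refund) that was never in its mandate.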

If you’re interested in seeing how Wand can work inside your organization, book a demo.