“Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on.”
–Ross Dawson
About Ross Dawson
Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and founder of the Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of five books, most recently Thriving on Overload.
What you will learn
- How human-AI teams outperform human-only teams in productivity and efficiency
- The crucial role of understanding AI strengths and limitations when designing collaborative workflows
- Ways AI collaboration can lead to output homogenization and strategies to preserve human creativity
- Key principles of intelligent delegation within multi-agent AI systems, including dynamic assessment and trust
- Understanding accountability, transparency, and auditability in decision-making with autonomous AI agents
- How user intent and ‘machine fluency’ impact the effectiveness of AI agents in economic and organizational contexts
- The emergence of an ‘agentic economy’ and its implications for fairness, capability gaps, and representation
- Counterintuitive findings on AI-mediated negotiation, particularly advantages for women, and what it reveals about AI-human interaction
Ross Dawson: This episode is a little bit different. Instead of doing an interview with somebody remarkable, as usual, today I’m going to share a bit of an update and then share insights from three recent research papers that dig into something I think is exceptionally important: how humans work with AI agentic systems. We’ll look at a few different layers of that, from how small human-plus-agent teams work, through how we can delegate decisions to AI, to some of the broader implications.
But first, a bit of an update.
2026 seems to be moving exceptionally fast. It’s a very interesting time to be alive, and I think it’s pretty hard even to see what the end of this year is going to look like.
So for me, I am doing my client work as usual. I’ve got keynotes around the world, usually on various things related to AI, the future of AI, humans plus AI, and so on, with a few industry-specific ones in financial services and the like.
I’m also doing some work as an advisor on AI transformation programs, helping organizations and their leaders frame the pathways, drawing on my AI roadmap framework: how you look at the phases, map those out, work out the issues, and guide and coach the leaders to do that effectively.
But the rest of my time is focused on three ventures, and I’ll share some more about these later on. But these are fairly evidently tied to my core interests.
Fractious is our AI-for-strategy app. It’s really about building a way to capture the detailed nuance of the strategic thinking of an organization’s leaders, to disambiguate and clarify it, and to enable that to be built into strategic options and strategic hypotheses, and to evolve effectively.
So that’ll be in beta soon. Please reach out if you’re interested in being part of the beta program, and then it’ll go to market. I’m deeply involved in that.
We also have our Thought Weaver software, rebuilding previous software that was already built on AI-augmented thinking workflows. That’s more of an individual tool, and it will be going into beta in the next few weeks.
So again, go to Thought Weaver. Actually, don’t just yet, as the website isn’t updated, but I’ll let you know when it’s out, or stay posted for updates on that.
And also building an enterprise course on humans plus AI teaming. It’s my fundamental belief that we’ve kind of been through the phase of augmentation of individuals, and we still need to work hard at doing that better. But the next phase for organizations is to focus on teams.
How do you work with teams that have both human members and AI agentic members? That creates a whole different set of dynamics and calls for new skills and capabilities: how to participate in a humans plus AI team, and how to lead one.
And that is again going into the first few test organizations in the next month or so. So again, just let me know.
So today what we’re going to look at is this theme: teams of humans working with AI agents. Not individual AI as in chat, but agents with various degrees of autonomy, and agentic systems where these agents interact with each other as well as with humans.
So there are three papers which I want to just talk about, just give you a quick overview, and please go and check out the papers in more detail if you’re interested. There’ll be links in the show notes.
First is Collaborating with AI Agents: A Field Experiment on Teamwork, Productivity and Performance, by Harang Ju at Johns Hopkins and Sinan Aral at MIT.
This was an experiment with over 2,300 participants working on creating advertisements. They had a whole array of configurations, human-human teams and human-AI teams, quite small, often just duos, creating ads that were then assessed for quality and for how the teams worked.
So a few particularly interesting findings from that.
First, human-AI teams significantly outperformed human-only teams: they moved faster, completed more of their tasks, and the quality was strong.
But there’s a phrase commonly used about the jagged frontier of AI capability, and it was quite clear that there were some domains where AI does very well and others where it doesn’t.
This is where the design of the tasks, the design of the human-AI systems, and the human users’ understanding of what AI is and isn’t good at become fundamental. In some domains, such as image quality, using AI actually decreased quality. So we need to understand where and how to apply AI across this jagged frontier, and design the systems around that.
This changes the role of the humans, of course: humans tend to delegate more. One of the things they tested was whether people behave differently when they know their teammate is an AI, as opposed to not knowing whether it is a human or an AI.
And it changes. People become more task-oriented, use fewer social cues, and essentially become more efficient.
But some of the social cues that are valuable in human-human collaboration started to disappear.
And this shift towards automation meant that, in the end, there was not as much creative diversity.
Now, I’ve often pointed to the role of AI in creativity tasks. It depends fundamentally on the architecture: where the AI sits in the process, for instance generating initial ideas that are then filtered by humans.
But in this particular structure, they found that humans plus AI teams started to create more and more similar-type outputs.
This homogenization of outputs in human-AI teams was notable and significant. So it again creates a design factor: how do we build human-AI systems that do not lead to homogeneous output, and make sure that human diversity is maintained?
Often that can be done by having humans produce outputs first, so that AI doesn’t blunt or narrow the breadth of their creative output.
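As an illustration of that human-first design principle, here is a minimal sketch, my own and not from the paper, that orders the workflow so humans draft before any AI refinement, and uses a crude word-overlap measure to flag when a team’s outputs are converging. All names, data, and the 0.5 threshold are hypothetical.

```python
# Hypothetical sketch: human-first drafting with a simple diversity check.
# Jaccard similarity over word sets is a crude stand-in for whatever
# similarity measure a real system would use.
def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def mean_pairwise_similarity(outputs: list[str]) -> float:
    pairs = [(i, j) for i in range(len(outputs))
             for j in range(i + 1, len(outputs))]
    return sum(jaccard(outputs[i], outputs[j]) for i, j in pairs) / len(pairs)

# Humans draft first; AI refines afterwards, so the AI does not
# narrow the initial breadth of ideas.
human_drafts = ["bold colours, street photography, young audience",
                "minimalist layout, single product shot, quiet tone",
                "retro type, humour, nostalgia angle"]
refined = human_drafts  # placeholder for an AI refinement step

# Flag homogenization if the refined outputs converge too closely.
if mean_pairwise_similarity(refined) > 0.5:
    print("warning: outputs are converging")
else:
    print("diversity preserved")  # prints this for the drafts above
```

The design choice to measure diversity after the AI step is the point: it turns homogenization from an invisible side effect into something the team can monitor.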
The second paper I’d like to point to is called Intelligent AI Delegation, from a team at Google DeepMind.
This addresses the point where we now have not just single AI agents to delegate decisions or problems to, but systems of AI. And that creates a different challenge.
The key point is that when you are delegating tasks, it’s more than just deciding which agent gets the task. You have to understand responsibility.
Where does accountability reside? Who is responsible? You need clarity around the roles of the agents and the boundaries of what they can and cannot do, clarity of intent and how it is communicated and cascaded through the agents, and the critical role of trust, in appropriate degrees, in the systems.
So this means we have to define the different characteristics of the task. The paper goes through quite a few; one of the critical ones is the degree of uncertainty around the task.
Obviously, if a task is very clear it can be appropriately delegated, but many tasks and problems are uncertain, and that creates a different dynamic.
Others include verifiability, whether you have high-quality information to check outcomes against, whether decisions are reversible, and the degree of subjectivity, because not everything is data-driven.
Assessing these task characteristics starts to define where human judgment plays a role, how you create those checks, and how you build that.
So intelligent delegation is not just how the humans delegate, but in turn how that structure cascades down through the agents.
So this requires this idea of dynamic assessment.
So you’re not just setting and forgetting. You are continuously reassessing the context, what is changing in the stakes, any uncertainty.
You come back to ensure there isn’t just a single delegation structure; you change it over time.
And you continue to adapt as you execute: monitor, replan, and adjust.
So transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on in those structures.
You also need to be able to scale how you coordinate the systems. If it’s just small scale, that’s fine, but you want to build something that can work across many agents.
So this requires a way to discover which agents are most appropriate and to establish the delegation of a particular task to them, again on a dynamic basis.
And finally there’s the principle of systemic resilience: you have to expect that things will go wrong.
So there’s continuous monitoring, understanding that these systems can be attacked in various ways, and being able to recover.
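To make these principles concrete, here is a minimal sketch, my own interpretation rather than the paper’s algorithm, that scores the task characteristics above (uncertainty, verifiability, reversibility, subjectivity) to pick a delegation mode, records every decision in an audit trail, and reassesses dynamically when the context changes. All names and thresholds are hypothetical.

```python
# Illustrative sketch of intelligent delegation principles, not the
# DeepMind paper's method. Scores are hypothetical 0-1 values.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    uncertainty: float    # 0 = fully specified, 1 = highly uncertain
    verifiability: float  # 0 = unverifiable, 1 = easily checked
    reversibility: float  # 0 = irreversible, 1 = fully reversible
    subjectivity: float   # 0 = data-driven, 1 = pure judgment call

def delegation_mode(t: Task) -> str:
    """Map task characteristics to a delegation mode."""
    risk = t.uncertainty + t.subjectivity + (1 - t.reversibility)
    if risk < 1.0 and t.verifiability > 0.7:
        return "full_delegation"   # agent acts autonomously
    if risk < 2.0:
        return "human_checkpoint"  # agent acts, human approves
    return "human_decides"         # agent assists only

@dataclass
class Delegator:
    audit_log: list = field(default_factory=list)

    def assess(self, t: Task) -> str:
        mode = delegation_mode(t)
        # Transparency: record where the decision is made and what
        # authorization is given, so it is always auditable.
        self.audit_log.append((t.name, mode))
        return mode

    def reassess(self, t: Task, **updates) -> str:
        # Dynamic assessment: context changes, so re-score the task
        # rather than setting and forgetting.
        for k, v in updates.items():
            setattr(t, k, v)
        return self.assess(t)

d = Delegator()
ad_copy = Task("draft ad copy", uncertainty=0.2, verifiability=0.9,
               reversibility=0.9, subjectivity=0.3)
print(d.assess(ad_copy))                     # full_delegation
print(d.reassess(ad_copy, uncertainty=0.9))  # risk rose: human_checkpoint
print(len(d.audit_log))                      # 2 decisions on record
```

The point of the sketch is that the delegation mode is not fixed per agent but recomputed per task and over time, with the audit log making every authorization visible.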
So, a very solid paper, quite deep, but giving some very good principles for how we can delegate to AI systems.
So the final of the three papers goes to a bit of a higher level.
It’s called Agentic Interactions, and it’s from Alex Imas, Sanjog Misra of the University of Chicago, and Kevin Lee at the University of Michigan.
And what they’re looking at is what happens on a macro scale when increasingly decisions are delegated to AI agents.
So this is the agent economy that I’ve been talking about for a very long time, which is now very much coming to the fore.
And so what they do is they look at what happens when we start to delegate more and more economic decisions, such as buying and selling decisions.
So what they found is extraordinarily interesting.
They found that the AI agents in fact do behave very similarly to their human creators.
In fact, you can observe differences among the agents from which you can infer the gender and personality of the person delegating to the agent.
Even though the agent is given no such information, and doesn’t know the gender or personality of its principal, those traits flow through anyway.
So agents can represent us in the market, as it were, potentially very accurately.
This goes directly to the second point: the idea of machine fluency.
AI fluency is very much a term in vogue at the moment.
The authors talk about machine fluency as how well a user can express their intent to an agent so that the agent is aligned with them.
They found very significant differences in machine fluency across people.
And those who were better at getting their agents to express their wishes saw their economic outcomes amplified.
Related to that, they showed a correlation with education: higher educational levels meant you were better able to delegate to AI, and your AI agents performed better and gave you better returns.
So again, this points to ways we may see an aggravation of differences in the agentic economy, when the agents who act for us in the economy start to reflect, among other things, educational differences, or differences in our capability to express our goals and intentions through AI.
There was one very interesting and I suppose counterintuitive result.
Women get better outcomes in negotiation when using AI agents than they do in human-to-human interactions.
Again this is without the AI agents knowing that they are representing a woman or not.
This suggests that, in terms of machine fluency, the style and way in which women instruct and put their intent into AI agents was, in this study, superior to that of men.
And in the real world there is, of course, unfortunately a bias towards male performance in negotiation.
That was inverted in the study.
So exceptionally interesting.
So just pulling back some of the common themes of these three papers.
We are increasingly moving into a world where humans have relationships with agents.
We are starting to work with them in teams and systems.
And we’re starting to build economies where humans are represented by agents.
And essentially our relationship to those agents, and our ability to delegate effectively, drives value, to the individual of course, but also across these emerging agentic systems.
These are early days; the realities of these human-agent systems are still nascent.
But this starts to point to some of the potential, some of the challenges, some of the opportunities, and some of the work that we have to do.
So I will be sharing more on these kinds of topics in my interviews with people and also of course on the Humans Plus AI website.
So just go to humansplus.ai.
Actually, to be frank, it hasn’t been updated much recently, but we will be sharing a lot more there.
Or LinkedIn is where I share the most actually, and getting back on Twitter as well if you’re interested.
But I’ll be diving deep and trying to share what I find is useful as well as interesting in helping us to create a world where humans are first.
AI complements us.
The reality is we are moving to humans plus AI systems.
And if we design that well with the right intentions we can make this absolutely one which drives human value first.
So glad to have you on the journey.
Have a wonderful rest of your day.
The post Ross Dawson on Humans + AI Agentic Systems (AC Ep34) appeared first on Humans + AI.