Today, we manage multiple agents, but as their performance improves, will a single agent's capabilities surpass human cognition, making it hard for one person to track and manage?
In other words, will we evolve to the point where we need teams to manage individual agents?
Just a random thought.
I'm not referring to this happening in the short term.
I'm actually wondering whether we'll ever reach a point where models are "good enough" that we no longer need to improve them, and what that point would look like.
This leads me to imagine that once a model has delivered all the utility a single person can extract from it, getting further benefit naturally requires a better model, and making use of that better model would require collaboration among multiple people.
The above is purely hypothetical and not rigorous.