That's a very interesting analysis. I certainly like thinking about the mental models that people build of their higher-ups in the course of doing their jobs. I'm glad you're asking "is any of this a good idea?" I think that's the most important question. I find the assertion in your last sentence troubling, though. This isn't so much a criticism of your ideas as of the way we're talking about AI as a society right now.
"If it is more effective then it will probably happen." Yeah, as things stand now, that's completely true. But why have we as a society decided that a corporation can, should, and will take any move that is cost effective? Society does not have to work this way, and I would argue we have a responsibility to regulate companies to give them more constraints than "is it profitable?"
Would this be effective? That depends entirely on how you define "effective," and that's the problem. LLMs are good at simulating a person, but there's "nobody home." From a metrics perspective, they WILL look equivalent or better, practically by definition, because metrics only capture what's measurable. The losses will be in terms of relationships, autonomy, accountability, compassion, strategic thinking, and all sorts of fuzzy human stuff like that: the stuff we can't measure. Introducing more metrics would just cloud the issue. You can't measure both sides fairly and consistently, so there's no objective way to compare their "effectiveness."
And that's why that framing is deeply problematic. Given the setup above, OF COURSE this will look "effective" to somebody. The numbers will be on their side. It will save lots of money, so any negative consequences would have to be HUGE to offset that. So, by your logic, this is inevitable. But that's absurd! Of course we have a choice, and it's an important one that will redefine the role of human beings in society. This is not a decision we can make with a simple cost/benefit analysis, and it would be dangerous to try. It's a bad idea even to present the dilemma that way!
This is the central problem with much of the discussion of AI right now. In a nutshell: "Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should." Yes, we can. Yes, it might be "effective." But do we really want to replace human judgment with literally the first technology that looks like it could plausibly do the trick? I think we need to flip the conversation. What do we want the role and rights of people to be going forward? How should that inform our use of LLMs? That has to come FIRST, before we roll this technology out and make mistakes we can't undo.
You definitely make some good points.
Regarding whether "we should": I suspect it will depend on what kind of "simulation" we have the language models run. If it's for things like defining metrics and doing quick question lookups to unblock people, then it's probably net positive. If it's about completely removing the human bonds in an organization, and the value that middle managers provide in nurturing them, then it's almost certainly bad.
I wonder how many managers would be happy to say: "I'm so confident this model of me is good that you can ask it any question. If it gives you a direct answer, I will stand by whatever it said; if it tells you to check with the real me, I will not consider that request a waste of time." It's a risk, but perhaps somebody will try to take it.
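For concreteness, here's a minimal sketch of what that contract might look like in code. It's purely illustrative: ask_manager_model is a made-up stand-in for whatever LLM call produces the simulated manager's reply, and the confidence score and threshold are assumptions, not anything a real system exposes.

    # Hypothetical sketch of the "stand by it or escalate" contract.
    # ask_manager_model is a stub for whatever LLM call simulates the
    # manager; the confidence score and threshold are assumptions.
    from dataclasses import dataclass

    ESCALATION_THRESHOLD = 0.8  # assumed cutoff; choosing it is the manager's risk

    @dataclass
    class Reply:
        text: str
        confidence: float  # model's own estimate that the real manager would agree

    def ask_manager_model(question: str) -> Reply:
        # Stub in place of the real LLM call, so the sketch runs as-is.
        return Reply(text="Ship it, but flag it in the weekly update.",
                     confidence=0.55)

    def answer_or_escalate(question: str) -> str:
        reply = ask_manager_model(question)
        if reply.confidence >= ESCALATION_THRESHOLD:
            # Direct answer: under the proposed deal, the manager stands by it.
            return reply.text
        # Otherwise the model defers, and checking with the real manager is
        # explicitly not treated as a waste of their time.
        return "Check with the real me -- this one is worth my time."

    print(answer_or_escalate("Can I extend the deadline by two days?"))

The interesting design choice is the threshold: set it high and the model escalates constantly; set it low and the manager is on the hook for more answers they never actually gave.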
To a large extent, people already do that. They delegate decisions to other people whom they trust to decide as they would. Or they suggest that people follow the principles in a document they wrote, rather than asking them directly. It simply isn't practical for a CEO to make every decision, so they have to find mechanisms that can simulate them.
People already do that today. The only question is the extent to which LLMs will be added to the set of ways the CEO gets simulated.
Looking at this from the other side, people are already asking ChatGPT to tell them what to do. I ask ChatGPT for advice regularly and find it very helpful. If it could give advice specific to my organization, it would be even more useful.