4 Comments
May 2, 2023

That's a very interesting analysis. I certainly like thinking about the mental models that people make of their higher-ups in the course of doing their jobs. I'm glad you're asking "is any of this a good idea?" I think that's the most important question. I find the assertion in your last sentence troubling, though. This isn't so much a criticism of your ideas as of the way we're talking about AI right now as a society.

"If it is more effective then it will probably happen." Yeah, as things stand now, that's completely true. But why have we as a society decided that a corporation can, should, and will take any move that is cost effective? Society does not have to work this way, and I would argue we have a responsibility to regulate companies to give them more constraints than "is it profitable?"

Would this be effective? That depends entirely on how you define "effective," and that's the problem. LLMs are good at simulating a person, but there's "nobody home." From a metrics perspective, they WILL look equivalent or better, by definition. The losses will be in terms of relationships, autonomy, accountability, compassion, strategic thinking, and all sorts of fuzzy human stuff like that. The stuff we can't measure. Introducing metrics would just cloud the issue. You can't measure both sides fairly and consistently, so there's no objective way to compare their "effectiveness."

And that's why that framing is deeply problematic. Given the setup above, OF COURSE this will look "effective" to somebody. The numbers will be on their side. It will save lots of money, so any negative consequences would have to be HUGE to offset that. So, by your logic, this is inevitable. But that's absurd! Of course we have a choice, and it's an important one that will redefine the role of human beings in society. This is not a decision we can make with a simple cost/benefit analysis, and it would be dangerous to do so. It's a bad idea to even present the dilemma in that way!

This is the central problem with much of the discussion of AI right now. In a nutshell: "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should." Yes, we can. Yes, it might be "effective." Do we really want to replace human judgment with literally the first technology that looks like it could plausibly do the trick? I think we need to flip the conversation. What do we want the role and rights of people to be going forward? How should that inform our use of LLMs? That has to come FIRST, before we roll this technology out and make mistakes we can't undo.


I wonder how many managers would be happy to say: "I'm so confident this model of me is good that you can ask it any question. If it gives you a direct answer, I will stand by whatever it said; if it tells you to check with the real me, I will not consider that request a waste of time." It's a risk, but perhaps somebody will try to take it.
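Concretely, that commitment amounts to an answer-or-defer contract. Here's a minimal Python sketch of the idea, assuming a hypothetical manager-model API that can report its own confidence; every name in it (Reply, manager_model_answer, CONFIDENCE_THRESHOLD) is illustrative, not anything from the post:

```python
# Sketch of the answer-or-defer contract described above. The model either
# answers directly (and the real manager stands by the answer) or explicitly
# defers to the real manager (and that escalation is sanctioned).

from dataclasses import dataclass


@dataclass
class Reply:
    text: str
    confidence: float  # model's self-reported confidence, in [0, 1]


# Below this threshold, the model defers rather than answering.
CONFIDENCE_THRESHOLD = 0.9


def manager_model_answer(question: str) -> Reply:
    """Stand-in for an actual LLM call; returns a canned reply here."""
    return Reply(text="Ship it after the security review.", confidence=0.95)


def ask_manager_model(question: str) -> str:
    reply = manager_model_answer(question)
    if reply.confidence >= CONFIDENCE_THRESHOLD:
        # Direct answer: the real manager has committed to standing by it.
        return reply.text
    # Deferral: checking with the real manager is explicitly not a waste
    # of time under this contract.
    return "Please check with the real me."


print(ask_manager_model("Can we ship the release this Friday?"))
```

The part doing the work is the explicit deferral path: the manager only has to stand by answers the model gives confidently, and escalating to the real person is legitimate by construction rather than a nuisance.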
