I cannot help thinking of Richard Brautigan's poem "All Watched Over by Machines of Loving Grace".
How feasible do you think it will be for people to tune GPTs to share their varying moral intuitions, and thereby reduce some of the creepiness? Outsourcing some of these judgments seems least creepy if one thinks of the LLM as "a trusted elder who shares my values," and perhaps we could get closer to that ideal if LLMs were both more tunable and better at eliciting values clarification.
A good question. My guess is "probably somewhat".
I explored this idea a bit in my "simulate the CEO" post, but there is definitely more meat on that bone.