ygolo
My termites win
I keep converging on the same set of ideas the more I think. I am not sure if I am looping or if I am seeing something fundamental.
1) AI needs to be an impetus to replace the bosses instead of the workers.
2) AI ethics vs AI "safety". Depending on the type of regulations put in place, one or the other will be encouraged. The "AGI will kill us all" camp of "safety" researchers tends to want regulations that would end up replacing the workers, while the "AI has biases and inequities built in that need to be fought" camp tends to favor regulations aimed at bringing about more equity.
3) Encoding human needs vs human values into the utility functions that AI optimizes. A lot of AI researchers have been harping on for decades about making sure we are wise as well as smart, and about the need to encode human values into the AI. There hasn't been a lot of progress on this as a means to "preserve humanity" and impart "humanity" into the AI. Part of the reason for this failure, I think, is that we treat "wisdom" as purely a faculty of the mind that we plan to transfer to the AI by some mechanism. But we miss the fact that a lot of the thrust behind what makes someone wise is emotional. We also miss that what we find valuable is ultimately connected to human needs. Yet there is little, if any, research on encoding human needs into AI utility functions in general terms, despite the fact that there would be far more convergence and objectivity around what counts as a "human need" than around a "human value".
In systems engineering, however, needs take a rather central role. Why aren't AI researchers drawing on the decades of work on eliciting and encoding needs from systems engineering and making it part of utility functions?
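To make the contrast concrete, here is a toy sketch of what treating needs the way a systems-engineering requirements spec does (as thresholds that must be met, not preferences that trade off freely) might look like inside a utility function. The specific needs, thresholds, and weights are invented for illustration; this isn't any published framework, just a sketch of the idea.

```python
# Toy sketch: needs as requirements-style thresholds inside a utility function.
# The need names, thresholds, and weights are made up for illustration only.
from dataclasses import dataclass


@dataclass
class Need:
    name: str
    threshold: float  # minimum acceptable satisfaction level, in [0, 1]
    weight: float     # relative importance of surplus above the threshold


NEEDS = [
    Need("physical_safety",   threshold=0.9, weight=1.0),
    Need("subsistence",       threshold=0.8, weight=1.0),
    Need("autonomy",          threshold=0.5, weight=0.5),
    Need("social_connection", threshold=0.4, weight=0.5),
]


def needs_utility(satisfaction: dict[str, float]) -> float:
    """Score a state described by need-satisfaction levels (each in [0, 1]).

    Any need below its threshold dominates the score with a large penalty,
    mimicking a hard requirement; only once every threshold is cleared does
    the weighted surplus matter. Contrast with a plain weighted sum over
    "values", which will happily trade one dimension away for another.
    """
    shortfall = sum(
        max(0.0, n.threshold - satisfaction.get(n.name, 0.0)) for n in NEEDS
    )
    if shortfall > 0:
        # An unmet need cannot be compensated by surplus elsewhere.
        return -100.0 * shortfall
    return sum(n.weight * (satisfaction[n.name] - n.threshold) for n in NEEDS)


if __name__ == "__main__":
    all_needs_met = {"physical_safety": 0.95, "subsistence": 0.85,
                     "autonomy": 0.7, "social_connection": 0.6}
    safety_sacrificed = {"physical_safety": 0.3, "subsistence": 0.95,
                         "autonomy": 0.95, "social_connection": 0.95}
    print(needs_utility(all_needs_met))      # small positive: thresholds met
    print(needs_utility(safety_sacrificed))  # large negative: shortfall dominates
```

The point of the sketch is the structure, not the numbers: needs behave like requirements with pass/fail thresholds, while values behave like weights that only come into play after the requirements are satisfied.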
So am I looping or seeing something more fundamental?