
Future of Life Institute - Pause on AI more powerful than GPT 4?

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998
I keep converging on the same set of ideas the more I think. I am not sure if I am looping or if I am seeing something fundamental.

1) AI needs to be an impetus to replace the bosses instead of workers.

2) AI ethics vs. AI "safety". Depending on the type of regulations put in place, one or the other will be encouraged. The "AGI will kill us all" camp of "safety" researchers tends to want regulations that will, in effect, replace the workers, while the "AI has built-in biases and inequities that need to be fought" camp tends to favor regulations aimed at bringing about more equity.

3) Encoding human needs vs. human values into the utility functions that AI optimizes. AI researchers have been harping for decades on making sure we are wise as well as smart, and on the need to encode human values into the AI. There hasn't been a lot of progress on this as a means to "preserve humanity" and impart "humanity" into the AI. Part of the reason for this failure, I think, is that we treat "wisdom" as purely a faculty of the mind that we plan to transfer to the AI by some mechanism, and we miss that much of what makes someone wise is emotional. We also miss that what we find valuable is ultimately connected to human needs. Yet there is little, if any, research on encoding human needs into AI utility functions, even though there would be far more convergence and objectivity around what counts as a "human need" than around what counts as a "human value".

In systems engineering, however, needs take a rather central role. Why aren't AI researchers drawing on the decades of research on capturing needs from systems engineering and making it part of utility functions?
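To make concrete what I mean by the distinction, here is a toy sketch in Python. Everything in it is hypothetical (the need thresholds, value weights, and field names are illustrations I made up, not anything from published work): needs act as hard constraints that gate the utility, while values act as soft preferences layered on top.

```python
# Toy sketch: needs as hard constraints, values as soft preferences.
# All names, thresholds, and weights are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Outcome:
    safety: float        # 0..1, degree to which physical safety is preserved
    sustenance: float    # 0..1, access to food/water/shelter
    autonomy: float      # 0..1, a softer "value"-like dimension
    fairness: float      # 0..1, another value-like dimension

NEED_THRESHOLDS = {"safety": 0.9, "sustenance": 0.8}   # hard requirements
VALUE_WEIGHTS = {"autonomy": 0.6, "fairness": 0.4}     # soft trade-offs

def utility(outcome: Outcome) -> float:
    """Return -inf if any need is violated, else a weighted sum of values."""
    for need, threshold in NEED_THRESHOLDS.items():
        if getattr(outcome, need) < threshold:
            return float("-inf")   # needs are not tradable against values
    return sum(weight * getattr(outcome, value)
               for value, weight in VALUE_WEIGHTS.items())

# An outcome that trades safety away scores -inf, no matter how much
# "value" it produces; one that meets the needs is scored on values.
print(utility(Outcome(safety=0.95, sustenance=0.9, autonomy=0.7, fairness=0.8)))
print(utility(Outcome(safety=0.5, sustenance=0.9, autonomy=1.0, fairness=1.0)))
```

The point of the sketch is only that needs and values can be given structurally different roles in a utility function, which is closer to how systems engineering handles requirements vs. preferences.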

So am I looping or seeing something more fundamental?
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998
One of the things I really like to do is prove myself wrong.

Yann LeCun's Objective-Driven AI is a great step forward in the direction that I saw as not being followed in the previous post.
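My toy reading of the idea (this is only my own illustrative sketch, not LeCun's actual architecture) is: a world model predicts the consequences of candidate actions, a cost/objective module scores those predictions, and the agent picks the action that minimizes predicted cost rather than just imitating data.

```python
# Toy sketch of "objective-driven" action selection: plan by minimizing a
# cost over predicted future states. Purely illustrative; the world model
# and cost here are stand-ins I made up, not anything from LeCun's papers.
import numpy as np

def world_model(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Hypothetical learned dynamics: predict the next state."""
    return state + 0.1 * action          # placeholder dynamics

def cost(state: np.ndarray, goal: np.ndarray) -> float:
    """Hypothetical objective module: squared distance to a goal state."""
    return float(np.sum((state - goal) ** 2))

def plan(state, goal, candidate_actions, horizon=5):
    """Pick the action whose simulated rollout has the lowest total cost."""
    best_action, best_cost = None, float("inf")
    for action in candidate_actions:
        s, total = state.copy(), 0.0
        for _ in range(horizon):
            s = world_model(s, action)
            total += cost(s, goal)
        if total < best_cost:
            best_action, best_cost = action, total
    return best_action

state = np.zeros(2)
goal = np.ones(2)
actions = [np.array([1.0, 1.0]), np.array([-1.0, 0.0]), np.array([0.5, 1.5])]
print(plan(state, goal, actions))
```

What I find appealing is that the objective sits in an explicit, inspectable module rather than being smeared implicitly across training data, which is exactly where an encoding of needs could live.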

 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998
Yann LeCun's optimistic take on objective-driven AI is pretty compelling:

Like reading a book, you can skip around, go back and forth, or just get the transcript.
 

kevinjdalton

New member
Joined
Dec 28, 2023
Messages
1
The Future of Life Institute's "Pause on AI" proposal is indeed a thought-provoking initiative. Considering the rapid advancements in AI, it makes sense to pause and carefully assess the potential risks associated with more powerful systems like HAL 9000. While the development of AI brings about incredible opportunities, it also raises ethical concerns and challenges. Striking a balance between innovation and responsible development is crucial. Personally, I believe a temporary pause could provide an opportunity to address safety and ethical considerations, ensuring that as we move forward with more advanced AI, we do so with a well-thought-out framework that prioritizes human well-being.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998
I agree that being thoughtful and responsible needs to be balanced with advancing the potential benefits.

Had it been adopted, the proposed pause would have nearly run its course by now. And just because some people pause doesn't mean everyone else will. Also, the letter's demarcation between what should be paused and what shouldn't was horribly ambiguous and problematic. Unlike other moratoriums (in biotech in particular) that were clear and scientifically sound, this letter used words like "loyal" and other language that could be read as coded eugenics.

The letter mentioned nothing about real and present harms, like coded bias or unexpected failure modes in things like self-driving.

The problem with this pause as an approach to AI safety is that it did nothing (as evidenced by the last 4 of the 6 months of the "pause") as far as this technology is concerned.
1) There are too many potential applications for good. The lack of a clear demarcation between what should be allowed and what shouldn't (as was done in biotech) is the main reason the pause was so ineffectual.
2) There is basically no way to learn about AI safety purely by sitting in an armchair and thinking about what would make things safe. That is why it is mainly theoretical physicists, longtermists, utilitarian philosophers, and their ilk who propose this sort of solution. They have a bias against reality and practicality.

The way you learn how to make things safe in AI at present is by doing very contained experiments. Arguments can and should be made for improving the containment. But any delusion that armchair thinking will make AI technology safer is beyond unconvincing.

The other main way to make AI safer is to make it more transparent. There is a reason why Linux is so much more secure than the Microsoft OSes. There is a reason why Democracy functions under a system of open debate. Transparency is key to safety.

There is another open letter that is much more focused, inclusive, and reality-based than the one before.

We need to focus on stopping the creation of the equivalent of Cyberdyne Systems. It is clear that the older letter in the OP of this thread was mainly driven by people with a bias against reality, plus a handful who really just wanted to stop the people ahead of them from getting there first.
 