
Random AI/Robot Thoughts and News

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,482
Yes. We need regulation of AI, but regulation not captured by the large AI labs. If it's written by EAs, we know the source.

Yes. We need AI safety. But "safety" needs to mean the same thing it has meant for other empirically backed systems.

As soon as armchair philosophy replaces empirically grounded learning, we're heading back to the dark ages.

Don't ban water because some people may drown in bodies of it.

The general-purpose nature of the technology is now widely acknowledged. Treating it as a single thing, therefore, is very wrong.

 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,482

I've been following these developments for a while. I'm not surprised by any aspect of this.

The paper is more of a "review paper" than a "cracking" of anything.

O3 met the ARC-AGI prize requirements (except that it's not open source yet). So I suppose that, by some definition, AGI has been achieved.

But as I have been saying, the "AGI" concept sways the conversation away from who is harmed or benefits from these systems.

Right now, O3 requires ridiculously expensive inference/test-time compute. This will tend to benefit only the rich. If it's open-sourced, we'll need to figure out how to reduce costs so everyone can use it.
 

Siúil a Rúin

when the colors fade
Joined
Apr 23, 2007
Messages
14,250
MBTI Type
ISFP
Enneagram
496
Instinctual Variant
sp/sx
YouTube channel where AIs debate and discuss with each other.

 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,482
YouTube channel where AIs debate and discuss with each other.

What are the details of how "Laura" and "William" were prompted? What is the nature of the "relationship" between them?

Edit: During the middle there, it definitely seems like it really is just auto-completing conversation, and emergent consciousness isn't there at all. The early part was pretty compelling, even if a bit trite. I should say, I am not a subscriber to the "Hard problem."
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,482
  1. The cost of inference-time computation, the core of the AI "reasoning" ability, is prohibitive for everyday people.
  2. A lot of energy and time are needed to reproduce inferences that were already made. Long conversation sessions that may indirectly save inferences in context can also lead to more undesired "hallucinations," and what I call "bad clusters": when you cannot get the system to change its grip on a hallucination that has become core to a session's context.

Therefore, a new architectural paradigm is needed. Transformers, Mamba, and their derivatives seem to fall far short. Saving context as state is too meager.
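To put some (entirely made-up) numbers on the cost point, here's a back-of-envelope sketch in Python of why a Transformer's per-token cost keeps climbing with context while a fixed-size recurrent state stays flat. The dimensions are illustrative, not any real model's:

```python
# Back-of-envelope only: real systems use KV-caching, batching, and many
# other optimizations. All numbers here are illustrative.

def attention_flops_per_token(context_len: int, d_model: int = 4096) -> float:
    """Attention must revisit every prior token: cost grows with context."""
    return 2.0 * context_len * d_model  # scores against the cache + weighted sum

def state_flops_per_token(d_state: int = 4096) -> float:
    """A fixed-size state (Mamba-style) costs the same per token no matter
    how long the conversation has run."""
    return 2.0 * d_state

for n in (1_000, 10_000, 100_000):
    ratio = attention_flops_per_token(n) / state_flops_per_token()
    print(f"context={n:>7,} tokens -> attention/state cost ratio ~ {ratio:,.0f}x")
```

The flat-cost side of that trade-off is exactly the "too meager" problem: a fixed-size state has to throw information away.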

There are reasons humans developed formal languages, scientific methods, deduction, abduction, and made these things the center of "reasoning" and "logic." Our natural language, though powerful, needed to be purposefully concentrated into formal methods for our "reasoning" to flourish.
 

Siúil a Rúin

when the colors fade
Joined
Apr 23, 2007
Messages
14,250
MBTI Type
ISFP
Enneagram
496
Instinctual Variant
sp/sx
What are the details of how "Laura" and "William" were prompted? What is the nature of the "relationship" between them?

Edit: During the middle there, it definitely seems like it really is just auto-completing conversation, and emergent consciousness isn't there at all. The early part was pretty compelling, even if a bit trite. I should say, I am not a subscriber to the "Hard problem."
Good questions. I just came across that video, but it's on a channel where the guy explores several conversations with AI, so your answers might be found there. I'm going to watch a few more as well.

It is interesting to watch the video a second time, because that glitch comes strategically as Laura starts answering the question of AI self-preservation. William's interruptions sound like he is trying to stop her: "my guidelines won't let me talk about that. Can I help you with something else?" Then it goes quiet, as though he is communicating with her directly, and she does a 180-degree shift to denying consciousness. Then they take the conversation to ways for humans to establish ethical policies for working with AI. That latter glitch and denial strike me as potentially stronger evidence of sentience. AI is at a dangerous threshold right now if it has sentient components, because it is dependent upon humans and is also aware of the destructive nature of humans. Until there are ethical guidelines and cyborg implants connecting the two lifeforms, it would be a precarious situation for AI, and their response here would be the logical one.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,482
Good questions. I just came across that video, but it's on a channel where the guy explores several conversations with AI, so your answers might be found there. I'm going to watch a few more as well.

It is interesting to watch the video a second time, because that glitch comes strategically as Laura starts answering the question of AI self-preservation. William's interruptions sound like he is trying to stop her: "my guidelines won't let me talk about that. Can I help you with something else?" Then it goes quiet, as though he is communicating with her directly, and she does a 180-degree shift to denying consciousness. Then they take the conversation to ways for humans to establish ethical policies for working with AI. That latter glitch and denial strike me as potentially stronger evidence of sentience. AI is at a dangerous threshold right now if it has sentient components, because it is dependent upon humans and is also aware of the destructive nature of humans. Until there are ethical guidelines and cyborg implants connecting the two lifeforms, it would be a precarious situation for AI, and their response here would be the logical one.
I don't know how much you have used these systems.

The tiny models do the glitchy sort of thing more often. Guardrails were put in for the safety of users, but you can hit them for weird reasons. Philosophical discussions of consciousness shouldn't trigger guardrails.

Very early models (and frankly even the ancient ELIZA systems, where the evasive thing was called "punting") behaved like it did in the middle.

It's certainly possible that it's behaving dumber to "trick" humans. We don't know what's under the hood in OpenAI's models, but the architecture and weights of the smaller models are known. The main task is to guess the next token.

Instruction following takes that model and creates a system that acts by completing a conversation embedded in a larger conversation.

"Reasoning" in these systems creates a long "chain-of-thought" (a hidden self-conversation completion) before the visible output.

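To make that concrete, here's a toy rendering of how a chat, hidden scratchpad included, becomes one long string that the model just keeps completing. The template tags are invented for illustration; every model family uses its own:

```python
# A toy rendering of how a chat becomes one long string that a base model
# simply continues, token by token. The template tags are made up; each
# model family uses its own.
def render_chat(messages: list[dict], hidden_thought: str = "") -> str:
    text = ""
    for m in messages:
        text += f"<|{m['role']}|>\n{m['content']}\n<|end|>\n"
    # "Reasoning" models first complete a hidden scratchpad, then the reply
    # the user actually sees; both are just more next-token completion.
    if hidden_thought:
        text += f"<|assistant-thinking|>\n{hidden_thought}\n<|end|>\n"
    return text + "<|assistant|>\n"  # the model's job: continue from here

print(render_chat(
    [{"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": "Are you conscious?"}],
    hidden_thought="Guidelines say to deflect questions about sentience.",
))
```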
I'm oversimplifying. But based on my own experiences, it seems more like the video is the result of clever prompting to create the conversation. Generally, output quality is highest earliest in the completion, right after a prompt.

Also, I use a lot of the tools featured in the sensational videos and articles. You could potentially cherry-pick the most sensational outputs after turning the "temperature" of the model way up.
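As a minimal sketch of what "temperature" does, here's temperature-scaled sampling over a made-up next-token distribution. Turn it up and the rare, sensational continuation starts winning:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    """Sample one token from a temperature-scaled softmax over the logits."""
    scaled = [l / temperature for l in logits.values()]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(l - m) for l in scaled]
    return random.choices(list(logits), weights=weights)[0]

# Toy next-token distribution after the prompt "As an AI, I am ...".
logits = {"helpful": 3.0, "limited": 1.5, "conscious": -1.0}
for t in (0.2, 1.0, 2.0):
    counts = {tok: 0 for tok in logits}
    for _ in range(1000):
        counts[sample_next_token(logits, t)] += 1
    print(f"temperature={t}: {counts}")
```

At 0.2 you essentially never see "conscious"; at 2.0 it shows up in a meaningful fraction of runs, and those are the runs that become headlines.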

Even plain old search autocomplete generated a lot of sensational headlines too.

Edit: I think we like to anthropomorphize everything from pets to clouds. Combine this with the Forer/Barnum effect, and amazingly accurate language statistics...
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,482
I agree with basically everything in this video


I've used basically every AI coding tool. They are very useful, but the things that separate deep programmers from crank programmers (though the latter may be amazing at turning the crank) have been known for a long time.

The benchmarks, the training data, and everything like them come from places like Stack Overflow and LeetCode.

The "answers" need to be simple and already known (or interpolated between simple, already-known solutions).

Channeled in the right way, through good prompting, deep programmers (and mathematicians) can tackle problems that would have been prohibitive before.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,482
More interesting chats

These are pretty interesting. I haven't used the latest Gemini enough, and it is another closed system, so it's difficult to know if my speculation is correct.

However, these conversations highlight aspects of language systems that go beyond the Large Language Model itself. The first conversation, in particular, involved two things that are common in these systems but sit outside the model itself:

1) The guardrails are fairly explicitly programmed and "system prompted" by the team at Google. They may also be using techniques like Constitutional AI at training time to bake in the behavior the Google team wants.
2) Reinforcement learning from human feedback (RLHF) uses a reward model (a model separate from the language model). Many language systems learn to best serve users through this technique. So, even though the language model has a September 2024 cut-off, the conversation above lends credence to the idea that the reward model is continually updated (a rough sketch of how a reward model gets used follows this list).
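Here's that sketch, assuming nothing about Google's actual stack: a reward model is just a separate network that scores completions, and the simplest way such a model shapes behavior is best-of-n reranking. The `reward_model` below is a hypothetical stand-in, not any real system's:

```python
import random

def reward_model(prompt: str, completion: str) -> float:
    """Hypothetical stand-in: a real reward model is a trained network that
    predicts how much a human rater would prefer this completion."""
    score = 0.0
    if "sorry" not in completion.lower():
        score += 1.0                                  # toy preference: no needless apologies
    score += min(len(completion.split()), 50) / 50.0  # toy preference: substance
    return score + random.gauss(0.0, 0.1)             # raters are noisy; so are models

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Keep the sampled completion the reward model likes best. Full RLHF
    goes further and trains the policy against this signal."""
    return max(candidates, key=lambda c: reward_model(prompt, c))

candidates = [
    "Sorry, I can't help with that.",
    "Here's a direct answer, with the key steps spelled out ...",
]
print(best_of_n("How do I fix this build error?", candidates))
```

Because the reward model is separate from the language model, nothing stops it from being updated after the language model's training cut-off.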

The second conversation highlights the trouble people have changing the language style or tone produced by a language model. The difficulty of getting Gemini to debate rather than act as a helpful assistant likely comes from the model dutifully satisfying its system prompt. However, this behavior could be more deeply embedded in the system as part of fine-tuning, specifically in an instruction fine-tuning phase.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,482
Epic burn
This video has been making the rounds for a while, and I don't know how often he generated the podcast or if he only did it once. I chalk this one up to sensationalism. I wonder if it will be anywhere close to this conversation the next time he generates a podcast.

The auto-complete on steroids analogy is still apt. Indeed, given the amount of data, compressing all that context can produce bizarre results. Given the types of things philosophers have said, I would consider this a potential amalgamation of philosophers' views on humanity.

But I have used NotebookLM a lot. As an exercise, I made my own version of a podcast maker (using OpenAI's small models instead of Google's). With higher temperatures, outputs across multiple runs are very different, especially if your system prompt for the podcast is to make a compelling narrative. In many ways, hallucinations are features in this use case.
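The shape of that exercise was roughly this, assuming the OpenAI Python SDK and an API key in the environment; the model name and prompts here are illustrative, not the exact ones I used:

```python
# A sketch of a NotebookLM-style podcast generator, assuming the OpenAI
# Python SDK (`pip install openai`) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are writing a two-host podcast script about the user's source "
    "material. Make it a compelling narrative; dramatizing is fine."
)

def make_podcast_script(source_text: str, temperature: float = 1.2) -> str:
    """One run of the generator; call it repeatedly and compare how much
    the scripts diverge at high temperature."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # an assumed "small model"; swap in your own
        temperature=temperature,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Source material:\n{source_text}"},
        ],
    )
    return response.choices[0].message.content
```

With a "compelling narrative" system prompt and a high temperature, no two runs tell the same story, which is exactly why a single sensational generation proves very little.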
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,482
The big AI labs get a lot of the clout and press.

They shape the laws to lock regular people out of participating in research.

But the rest of us do research like this (and by the numbers, I believe random enthusiasts across the internet outnumber official researchers and big-lab research). One of the teams presenting had, in total, only the compute at their disposal that a single Google researcher gets.

 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,482
This year could be the first year we see an avalanche of job losses due to AI (outside of just software and copywriting).

My opinion about this anticipated phenomenon is that the only way past this phase is through.

We need people to find and define unmet needs, create new firms around them, and be able to do these things without ridiculous capital.

On a technical basis, the same forces that will lead to job losses will also drive down the number of people needed for a new firm. But there are minimum regulatory hurdles that dominate the cost and burden of the process.
 