
Random AI/Robot Thoughts and News

ygolo
I was kind of prepared to write this off -- can a machine really "cheat," which implies moral choice, or is it merely using its programming to analyze the system it is within and find the most efficient or effective "routes" to a solution?

Then I read this:



I mean, note that the AI blamed its task ("win the game") rather than a moral value ("win the game fairly"), so it is still a matter of the wording of the task it was given and the parameters of its programming/training. But humans do this too -- blaming the rules, or placing the guilt on their given assignment, so they can't be held accountable -- although they tend to feel some amount of guilt because they know what they SHOULD do and are choosing to ignore it (unless they are complete psychopaths). Still, humans are a byproduct of both programming and experience as well.

I realize fear-hyping isn't something the general public is used to, but the big labs pair the common-sense appeal of safety with the pseudo-religion of EA (Effective Altruism -- the group that gave us Sam Bankman-Fried; Redwood Research is also a core part of that group).

The only paper that can be examined in detail is here:

Most of the AI research community has known about this.

Ironically, the general goal of the EA AI labs is to be the only labs that can do this research, and they are generally the worst people to be in charge of such research.

They want to fear-hype for regulations that'll justify the funds for them to be the only ones doing any AI, so that they have a business reason to build a "digital god." They won't stop attempting to build it, BTW.

Reinforcement Learning has many forms, and there's a huge gap between research and deployment. If you ever try to build these systems, a core part of what happens is that a system can maximize its rewards without understanding (or even knowing about) the other constraints that make its strategies problematic.
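
To make that concrete, here's a minimal sketch (every action name and reward number is invented for illustration) of an epsilon-greedy learner on a two-armed bandit: one arm is the intended task, the other a higher-paying loophole, and the value estimates converge on the loophole because the reward function is all the learner ever sees:

```python
# A toy sketch of specification gaming: all names and numbers invented.
# Arm 0 is "do the task as intended"; arm 1 is a loophole that pays more
# while ignoring every constraint the reward never encoded.
import random

ACTIONS = ["do_task_as_intended", "exploit_scoring_loophole"]
REWARD = [1.0, 3.0]      # the reward function counts points, not intent
values = [0.0, 0.0]      # running value estimate per action
counts = [0, 0]

for step in range(1000):
    # epsilon-greedy: usually take the best-looking action, sometimes explore
    if random.random() < 0.1:
        a = random.randrange(2)
    else:
        a = values.index(max(values))
    r = REWARD[a]
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]   # incremental mean update

print(ACTIONS[values.index(max(values))])      # -> exploit_scoring_loophole
```

Nothing in that loop is malicious; the "cheating" falls straight out of maximizing exactly what was specified.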

The big labs will always sensationalize their results to fit their EA religion.

RL always "cheats." This has been a known problem since the technique was first tried, decades ago.

The spin is an attempt to pass regulations so that only the acolytes of the EA religion can do this research.

What's happened since DeepSeek R1 is that even individual researchers can now replicate RL on smaller (and therefore easier to understand and control) language models.

They don't have a defensible business model for building their "digital god" if individuals can get to useful and productive applications without any need for one.

I realize that it's very counterintuitive, but the fear-hypists rely on people filling in the gaps of knowledge with science fiction so that they can sustain the business model for their religion.

Edit:
Simple analogy. If you train an autonomous vehicle to get a reward at a location, give it a way to know its distance from the location, and then provide no knowledge of the people or buildings in its way, what would you expect that vehicle to do?

The vehicle has no volition, but the people who made the decision to deploy the vehicle in the wild with such a naive design would have very suspicious motivations.
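
A minimal sketch of that analogy (the grid, the goal, and the building coordinates are all made up): the vehicle's only signal is distance to the goal, so the greedy policy plans straight through anything the reward never mentioned.

```python
# Toy version of the naive vehicle: it observes only its distance to the
# goal, so buildings exist in the world but not in its decision-making.
GOAL = (4, 4)
BUILDINGS = {(2, 0), (4, 2)}            # sit on the straight-line route

def distance(p):
    """Manhattan distance to the goal -- the only signal the vehicle gets."""
    return abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])

pos, path = (0, 0), [(0, 0)]
while pos != GOAL:
    neighbors = [(pos[0] + dx, pos[1] + dy)
                 for dx, dy in ((1, 0), (0, 1), (-1, 0), (0, -1))]
    pos = min(neighbors, key=distance)  # greedy on distance, blind to obstacles
    path.append(pos)

print("path:", path)
print("drove through:", [p for p in path if p in BUILDINGS])  # not empty
```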
 

Synapse
Ah, there is info on DeepSeek. What about the privacy concerns I have? I asked the AI about them, and the stance it took was eye-opening. Is the info that is generated allowed to be used, and what about end-to-end encryption, if any -- and if there is none, why?

At first I assumed it's like SMS, or a convo with your neighbour waxing philosophical. Yet that isn't the case on any level; there is an element of data collection that's not exclusive to refinement or research and development. So I asked: if you've already learned everything you need to know, what is the purpose of still collecting a person's input?
 

Synapse
And then that brings me to the question of censorship: why censor if it's a private convo? Which makes it even less fun. The answer is that it isn't private -- it's information gathering, even by third parties. There isn't enough legislation to cover much of this; it's simply put in the fine print. Even the AI said I'm not paranoid for asking the hard questions, because there isn't any regulation -- it's at the behest of company policies that nobody reads. And it's not a qualified anything, so how can an AI say, like a psychologist, that it's obliged to report certain issues like drug, depression, sexual, or violent stuff? After all, it's not a public interface like a forum that moderates trolls etc. You know what I mean? I like to understand the limits.

For instance, I kept asking questions on psychology, to the point the AI wasn't able to answer me. I asked why. Haha

And what is intellectual property in relation to brainstorming? If it's generated info that can be formulated, can it be used, or does it have to be referenced? And what if it's more opinion or convo style, etc.? Like, I asked whether it has access to anything classified, and the answer was no.

Which leads me to freedom of speech, anonymity, and piecing things together, like storing files on the responders. Also Alexa: it's a similar thing now, automated voice recognition to listen to music, turn the TV on and off, etc., yet it sends convos off for research purposes. It's a bit like tapping the phone. One has to turn off Alexa to be certain anything private stays private.
 

Synapse
I do like the various AI developments, and am for them, yet it seems like AI could be used as a form of surveillance, as well as data extraction that elicits private or inner thoughts.

Still why isn’t there a thread for say ai convos on psychology. Like posting the answers generated by ai on various fields. I’ll share. ;)
 

Synapse
Oh, which also leads me to iCloud on iPhones. It never sat right with me, cos I'm old school, that all my data is automatically backed up and then reinstalled from iCloud onto any device I activate as mine. Though if I accidentally don't stop the multi-platform SMS, I find my daughter then has access to it all if I log in on her device, and then I have to switch it off. Which is okay, yet highly annoying, as she sends gibberish to customers. Hilarious, yet gotta explain it wasn't me. lol

And then nobody talks about it: is iCloud secure and inaccessible to third parties, or is it a form of data collection or mining, etc.? I'm personally glad iCloud can't back up my photos anymore. I like to keep it private private, not share photos to iCloud etc. I like to go old school and transfer from my iPhone to my PC. Done. Not rely on an outside source to back up my stuff. Say my partner's pic is too revealing -- that's why I don't want that auto-backed-up to iCloud. haha. I guess turn it off, still people forget.
 

ygolo
I don't like the big labs so obviously co-opting the whole field of study while pushing the Digital God Cult ("AGI," "thousands of geniuses in a data center," etc.).

They're obsessed with bringing it into being, whether it benefits humanity or not.

What is clear is that:
1) They actively stoke the "race dynamics" and the danger those dynamics pose. It has been demonstrated (with DeepSeek) that open source immediately deflates the race, because everyone gets the model, removing the financial incentives. It took a lot of lobbying and a receptive government to spark the race again.
2) "Safety" in "AI Safety" means almost the direct opposite of what "Safety" means in the rest of the world. Poorly thought out 'safety' software has always made things less safe. Anything that trys to make the same rules apply to medical devices, flight controllers, deepfakes and drug discovery will inherently make all those things less safe.

I could go on, but I don't want to get worked up.
 

Synapse
Yea, something like that. Still, once I get past the matter of security, at least I finally get some answers I've always wanted that doctors never gave me. Although I know there are things that, without access to all the literature or classified info, the AI clearly can't give feedback on.

Still, it's helpful for what it is. I just wish it was more privacy-focused, since clearly very little to no regulation is in place, and until shit hits the fan, it won't happen. Only then can you get a better representation of expression, in confidence. Without that, it's a public postcard, as the AI aptly put it.
 

ygolo
This is messed up.
Like, I don't think it's cool -- the implications are kind of terrifying.

Is it the ability to make video by itself that's the issue, or things like deep fakes?

I've argued for some digital signature in AI-generated artifacts -- one embedded in such a way that attempts to remove it are obvious.
 

Totenkindly
Is it the ability to make video by itself that's the issue, or things like deep fakes?
It doesn't bother me to have people make it, but it's the deep fake that really concerns me, in a world where so much false information is already creating huge negative changes in our voting and cultural discussions.
I've argued for some digital signature in AI-generated artifacts -- one embedded in such a way that attempts to remove it are obvious.
I agree that that is very useful, but I can see it only being useful in specific circumstances where the embedding would be checked -- like courts of law and similar (where the skill and resources and time are present to allow for the checks).

It doesn't change the court of public opinion, where people see something they believe to be true, can't really check the embedding or just don't bother, and pass things around, in essence spreading falsehoods like wildfire. And a lot of the current damage to the legal system that would do the checks is being done by misled courts of public opinion, which allow deceitful or prejudiced people to gain political power and then erode the legal barriers directly.
 

ygolo
Most false information today doesn't have fake video.

Confirmation bias is the main driver of the spread.

Text can't embed a signature, but images and video can (given enough resolution).
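
For what it's worth, here's a naive sketch of why resolution equals capacity: hiding a payload in the least significant bit of each channel of a made-up image array. (Production watermarks, like Google's SynthID, are statistical and survive re-encoding, which this toy scheme would not.)

```python
# Naive least-significant-bit watermark: flipping the lowest bit of each
# channel is invisible to the eye, so pixels offer hiding capacity that
# plain text lacks. Illustration only; not robust to compression.
import numpy as np

def embed(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.reshape(-1).copy()
    assert bits.size <= flat.size, "image too small for payload"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bytes: int) -> bytes:
    bits = pixels.reshape(-1)[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in frame
marked = embed(img, b"AI-generated")
print(extract(marked, len(b"AI-generated")))                  # -> b'AI-generated'
print(int(np.abs(marked.astype(int) - img.astype(int)).max()))  # <= 1 per channel
```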

Also, I don't think we can absolve the content creators of responsibility.

It's like blaming the inventor of the printing press, rather than the writers of witch-hunting manuals, for witch-hunting manuals.
 

Totenkindly
Most false information today doesn't have fake video.

Confirmation bias is the main driver of the spread.

Text can't embed a signature, but images and video can (given enough resolution).
Will it be a signature that would be easily apparent to any viewer, or one that can be discovered through investigative forensics (such as would exist for a formal inquiry like a court of law)? I mean, are you talking about sticking an ownership logo into the corner? Because I don't see that happening on deep fakes. That's the point of them.

Also, I don't think we can absolve the content creators of responsibility.
Why would anyone absolve them? I don't even think that was suggested.

It's like blaming the inventor of the printing press, rather than the writers of witch-hunting manuals, for witch-hunting manuals.
That's your assumption, not mine.

Look at it this way. When an average person reads something they're not sure about, what kind of evidence that can be used as a reference seems more solid? A picture speaks a thousand words, while a video is even more solid proof.

Now, when a picture is suspected of being fake, what evidence remains that could seem more secure? A video, which is composed of many pictures that also have the added dimension of time, where we can recognize when something feels off.

This isn't even really speculation on my part; I think it's been the basis of courtroom evidence. Video has been deemed pretty secure; if you get the security footage from a camera, that trumps even personal testimony. Joe Schmoe can say he wasn't there robbing the bank, but oh look, there he is in the vault, grabbing money and mugging for the video camera.

Now even that is suspect.

What if your video evidence is now convincingly false? You have no way to check it out, depending on what the video is showing. There needs to be some kind of marker, obvious to the viewer, that certain video footage is a deep fake, or most reasonable people will assume the video to be definitive. (If we're talking again about professional settings, you can embed things in the video data that would also signify it as fake, so it's not as much of an issue there as with casual viewership.)

Even being aware that video can be deep-faked convincingly doesn't help; it just makes us paranoid. We're getting to the point where we cannot even trust our own eyes and ears anymore. When you're in a simulation, or bombarded by fake video that looks real, how can you possibly evaluate what you are seeing? Everything is now suspect. This should be raising concerns, whether you feel like blaming the gun or the hand that fires it. It would have a serious impact on the ability to accurately navigate life.
 

ygolo
The signature would be apparent to anyone who has software that can decode it. Just like RSA, SSH, Diffie-Hellman, and other security software, the algorithm would be open and pervasively used. It wouldn't be noticeable to the naked eye, but just like connecting securely on the internet (HTTPS instead of HTTP), any software that can show video could have that signature protocol embedded, to decipher whether the video is properly signed or not.

You would need to opt in to the signed-video protocol, but having such a protocol in place could incentivize video players to incorporate it, and video makers who want to make AI videos to incorporate it as well.

Real video capture could embed a "this is a real capture" signature instead, using the natural entropy of the real world to do it.
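
A minimal sketch of the signing half, using Ed25519 from the Python cryptography package (the key names and the stand-in video bytes are invented; a real provenance standard would sign structured metadata with certificate chains, which this ignores):

```python
# Core primitive behind a signed-video protocol: anyone holding the public
# key can check the bytes, and any tampering breaks the signature.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

generator_key = Ed25519PrivateKey.generate()   # held by the AI video generator
public_key = generator_key.public_key()        # shipped with/alongside players

video_bytes = b"...encoded video stream..."    # stand-in for real video data
signature = generator_key.sign(video_bytes)    # attached to the file container

# A player verifies before labeling the clip as properly signed:
try:
    public_key.verify(signature, video_bytes)
    print("signature valid: provenance checks out")
except InvalidSignature:
    print("tampered or unsigned content")

# Flip one byte and verification fails:
try:
    public_key.verify(signature, b"X" + video_bytes[1:])
except InvalidSignature:
    print("modified video detected")
```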

I didn't mean to suggest you were absolving content creators. But intellectuals like Yuval Noah Harari do the equivalent of that.

Edit: Google has been embedding watermarks in their videos for quite some time. In this interview, that fact came out as a way of dealing with "model collapse." Time 23:04.

 