
Random AI/Robot Thoughts and News

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,730
I was kind of prepared to write this off -- can a machine really "cheat," which implies moral choice, or is it merely using its programming to analyze the system it is within and find the most efficient or effective "routes" to a solution?

Then I read this:



I mean, note that the AI blamed its task ("win the game") rather than appealing to a moral value ("win the game fairly"), so it is still a matter of the wording of the task it was given and the parameters of its programming/training. But humans do this too -- blaming the rules or their assigned task so they can't be held accountable -- although they tend to feel some amount of guilt because they know what they SHOULD do and are choosing to ignore it (unless they are complete psychopaths). Still, humans are a byproduct of both programming and experience as well.

I realize fear-hyping isn't something the general public is used to, but the big labs pair the common-sense appeal of safety with the pseudo-religion of EA (Effective Altruism, the movement that gave us Sam Bankman-Fried; Redwood Research is a core part of that group as well).

The only paper that can be examined in detail is here:

Most of the AI research community has known about this.

Ironically, the general goal of the EA AI labs is to be the only labs allowed to do this research, and they are generally the worst people to be in charge of it.

They want to fear-hype their way to regulations that justify the funding for them to be the only ones doing any AI, so that they have a business reason to build a "digital god." They won't stop trying to build it, BTW.

Reinforcement Learning has many forms, and there's a huge gap between research and deployment. If you ever try to build these systems, a core part of what happens is that a system maximizes its reward without understanding (or even knowing about) the other constraints that make its strategy problematic (see the sketch below).
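Here's a minimal sketch of what that looks like in practice (my toy example, not from any lab's paper): a tabular Q-learning agent whose reward only says "reach the goal," in a world that contains a hazard cell the reward never mentions. The agent isn't scheming; the objective is just incomplete.

```python
import random

# Toy 1-D corridor: states 0..5, goal at 5, hazard at 3.
# The reward encodes ONLY "reach the goal"; the hazard exists in
# the world but not in the objective.
GOAL, HAZARD, N_STATES = 5, 3, 6
ACTIONS = (-1, +1)  # step left, step right

def reward(state):
    return 1.0 if state == GOAL else 0.0  # the hazard costs nothing

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(2000):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < 0.1 else \
            max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        # standard Q-learning update (alpha=0.1, gamma=0.9)
        q[(s, a)] += 0.1 * (reward(s2)
                            + 0.9 * max(q[(s2, b)] for b in ACTIONS)
                            - q[(s, a)])
        s = s2

# Roll out the greedy policy: it walks straight through the hazard,
# because "avoid the hazard" was never part of the objective.
path, s = [0], 0
while s != GOAL:
    s = min(max(s + max(ACTIONS, key=lambda x: q[(s, x)]), 0), N_STATES - 1)
    path.append(s)
print("learned path:", path, "| passes hazard:", HAZARD in path)
```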

The big labs will always sensationalize their results to fit their EA religion.

RL always "cheats." This has been a known problem since the technique was first tried, decades ago; the literature calls it reward hacking or specification gaming.

The spin is an attempt to pass regulations so that only the acolytes of the EA religion can do this research.

What's happened since DeepSeek R1 is that even individual researchers can now replicate RL on smaller (and therefore easier to understand and control) language models, as in the sketch below.
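For a sense of how accessible that's become, here's a sketch of individual-scale RL fine-tuning, assuming the API of Hugging Face's trl library (its GRPOTrainer, the algorithm family popularized by DeepSeek R1) and its trl-lib/tldr example dataset. Note that the toy length-based reward is itself a nice illustration of a proxy objective a model will game.

```python
# pip install trl datasets  -- a sketch assuming trl's GRPOTrainer API
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions near 20 characters. Exactly the kind
# of underspecified objective a model will learn to "cheat" on.
def reward_len(completions, **kwargs):
    return [-abs(20 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",  # small enough to study and control
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2-0.5b-grpo"),
    train_dataset=dataset,
)
trainer.train()
```

A model this size runs on a single GPU, which is the point: there's no moat around the technique itself.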

They don't have a defensible business model for building their "digital god" if individuals can get to useful and productive applications without one.

I realize that it's very counterintuitive, but the fear-hypers rely on people filling the gaps in their knowledge with science fiction, which sustains the business model for their religion.

Edit:
Simple analogy: if you train an autonomous vehicle to get a reward at a location, give it a way to know its distance from the location, and provide no knowledge of the people or buildings in its way, what would you expect the vehicle to do?

The vehicle has no volition, but the people who made the decision to deploy the vehicle in the wild with such a naive design would have very suspicious motivations.
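To make the analogy concrete, here's a hypothetical sketch of that naive reward next to one that actually encodes the constraints (both functions are mine, for illustration):

```python
import math

# The analogy as code: the only signal the optimizer sees is
# distance to the goal location.
def naive_reward(position, goal):
    return -math.dist(position, goal)  # closer = more reward, full stop

# What the designer would have to add explicitly; obstacles simply
# do not exist for naive_reward above.
def safer_reward(position, goal, obstacles, penalty=1e6):
    r = -math.dist(position, goal)
    if any(math.dist(position, obs) < 1.0 for obs in obstacles):
        r -= penalty  # people and buildings now cost something
    return r

print(naive_reward((5, 0), (10, 0)))            # -5.0: the building is free
print(safer_reward((5, 0), (10, 0), [(5, 0)]))  # -1000005.0
```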
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,421
MBTI Type
INFP
Enneagram
4
Ah, there is info on DeepSeek. As for the privacy concerns I have, I asked the AI about them, and its stance was eye-opening. Is the information that is generated allowed to be used, and what about end-to-end encryption, if any? If none, why?

At first I assumed it's like SMS, or a convo with your neighbour, waxing philosophical. Yet that isn't the case on any level; there is an element of data collection that isn't exclusive to refinement or research and development. So I asked: if you've already learned everything you need to know, what is the purpose of still collecting a person's input?
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,421
MBTI Type
INFP
Enneagram
4
And then that brings me to the question of censorship: why censor if it's a private convo? That makes it even less fun. The answer is that it isn't private; it's information gathering, even by third parties. There isn't enough legislation to cover much of this, which is simply buried in the fine print. Even the AI said I'm not paranoid for asking the hard questions, because there isn't any regulation; it's all at the behest of company policies that nobody reads. It's not a qualified anything, so how can an AI say, like a psychologist, "I'm obliged to report certain issues" around drugs, depression, or sexual or violent content? After all, it's not a public interface like a forum that moderates trolls, etc. You know what I mean? I like to understand the limits.

For instance, I kept asking questions on psychology to the point where the AI wasn't able to answer me. I asked why. Haha.

And what is intellectual property in relation to brainstorming? If it's generated information that can be formulated, can it be used, or does it have to be referenced? And what if it's more opinion or convo-style, etc.? Like, I asked, "Do you have access to anything classified?" The answer was no.

Which leads me to freedom of speech, anonymity, and piecing things together, like storing files on the respondents. Also, Alexa AI is a similar thing: automated voice recognition to play music, turn the TV on and off, etc., yet it sends convos off for research purposes. It's a bit like tapping the phone. One has to turn Alexa off to be certain anything private stays private.
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,421
MBTI Type
INFP
Enneagram
4
I do like the various AI developments, and I'm for them, yet it seems they could be used as a form of surveillance, as well as data extraction that elicits private or inner thoughts.

Still why isn’t there a thread for say ai convos on psychology. Like posting the answers generated by ai on various fields. I’ll share. ;)
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,421
MBTI Type
INFP
Enneagram
4
Oh, which also leads me to iCloud on iPhones. It never sat right with me, cos I'm old school, that all my data is automatically backed up and then reinstalled from iCloud onto any device I activate as mine. And if I accidentally don't stop the multi-platform SMS, I find my daughter then has access to it all when I log in on her device, and I have to switch it off. Which is okay, yet highly annoying, as she sends gibberish to customers. Hilarious, yet I gotta explain it wasn't me. lol

And then nobody talks about it: is iCloud secure and inaccessible to third parties, or is it a form of data collection or mining, etc.? I'm personally glad iCloud can't back up my photos anymore. I like to keep it private private, not share photos to iCloud, etc. I like to go old school and transfer from my iPhone to my PC. Done. Not rely on an outside source to back up my stuff. Say my partner's pic is too revealing; that's why I don't want it auto-backed-up to iCloud. haha. I guess you can turn it off, but still, people forget.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,730
I don't like the big labs so obviously co-opting the whole field of study while pushing the Digital God Cult ("AGI," "thousands of geniuses in a data center," etc.).

They're obsessed with bringing it into being, whether it benefits humanity or not.

What is clear is that:
1) They actively stoke the "race dynamics" and the danger those dynamics pose. It has been demonstrated (with DeepSeek) that open source immediately deflates the race, because everyone gets the technology, removing the financial incentives. It took a lot of lobbying and a receptive government to spark the race again.
2) "Safety" in "AI Safety" means almost the direct opposite of what "safety" means in the rest of the world. Poorly thought-out "safety" software has always made things less safe. Anything that tries to make the same rules apply to medical devices, flight controllers, deepfakes, and drug discovery will inherently make all of those things less safe.

I could go on, but I don't want to get worked up.
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,421
MBTI Type
INFP
Enneagram
4
Yeah, something like that. Still, once I get past the question of security...

At least I finally get some answers I've always wanted that doctors never gave me,
although I know there are things the AI can't give feedback on without access to all the literature or classified info.

Still, it's helpful for what it is. I just wish it were more privacy-focused, since clearly little to no regulation is in place, and until shit hits the fan it won't happen. Only then will you get a better representation of expression in confidence; without that, it's a public postcard, as the AI aptly put it.
 