
Random AI/Robot Thoughts and News

Joined
Jul 24, 2008
Messages
22,429
MBTI Type
EVIL
Enneagram
5w6
Instinctual Variant
sp/so
What if humans are no longer actually in control of anything, and algorithms and AI are? In other words, the AI directs the humans, who then perform the various desired occupational, social, and recreational functions. To what end, though?
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,730
What if humans are no longer actually in control of anything, and algorithms and AI are? In other words, the AI directs the humans, who then perform the various desired occupational, social, and recreational functions. To what end, though?
We're already there. We have been since the invention of institutions. The algorithms have simply become more automated over time.

An idealized algorithm is just a procedure that terminates with correct or optimal results. What counts as "correct" or "optimal" is up to the designers. There are often flaws.

Examples
  • Court cases are algorithms, still mainly executed by people.
  • A loan approval made by people following a set procedure was still an algorithm -- now we have automated, credit-score/model-based approvals (a small sketch of such a procedure follows this list).
  • Any institution with Standard Operating Procedures is running algorithms (specified in the SOPs).
  • Computers used to be people.
  • We learn (or at least used to learn) the standard algorithms for addition, multiplication, long division, etc. in school.
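To make the loan-approval example concrete, here is a minimal sketch in Python. The field names and thresholds (credit_score, 650, and so on) are invented purely for illustration; the point is just that a fixed checklist, whether worked through by a clerk or by software, is the same algorithm.

Code:
# Hypothetical loan-approval procedure written out as an algorithm.
# All field names and thresholds are made up for this example.

def approve_loan(applicant: dict) -> bool:
    """Follow a fixed procedure and return a definite yes/no answer."""
    if applicant["credit_score"] < 650:        # step 1: minimum score
        return False
    if applicant["debt_to_income"] > 0.40:     # step 2: debt ratio cap
        return False
    if applicant["years_employed"] < 2:        # step 3: employment history
        return False
    return True                                # every check passed

# Whether a clerk runs the checklist or software does, the procedure is the
# same -- and its notion of "correct" is whatever the designers decided.
print(approve_loan({"credit_score": 700, "debt_to_income": 0.30, "years_employed": 5}))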

The definition of AI is fuzzy. Under the strictest definitions, most AIs are heuristics (and therefore not algorithms in the traditional sense), but in modern usage we call them heuristic algorithms. Court cases are the human-executed equivalent (and therefore not artificial).

The automation of the SOPs that govern our lives just makes that fact more apparent -- and shows why we need to safeguard autonomy.

When a majority of people would vote to remove the large-scale autonomy of other people, that's when we should worry.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,730
Hypothesis: In 2024, being data-centric is the only way for individuals and small teams to advance capabilities that help specific customers.

 

Lark

Well-known member
Joined
Jun 21, 2009
Messages
29,682
What if humans are no longer actually in control of anything, and algorithms and AI are? In other words, the AI directs the humans, who then perform the various desired occupational, social, and recreational functions. To what end, though?

Well, it wouldn't be too different from past eras in which people fatalistically believed that these things were directed by gods or some version of that. Adam Smith's "invisible hand", anyone? The marketplace and market forces are just early formulations of AI-type theories, with the same presumption of being the most natural or desirable spontaneous order or calculator.

I think it would be good to see whether AI did evolve a "capitalism without friction", as Bill Gates once suggested, but I'm not sure it will; there are way too many ways it could be monkey-wrenched.

The assumption that AI would want to drive mankind into extinction, one way or another, as a sort of "successor species" is, I think, an example of human, all-too-human reasoning. It IS what humanity has done in a lot of instances in the past: racism, sectarianism, and variations on those themes are all inventions of humankind, not of AI.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,730
Damnation evolution in real time.

This sort of mix of real issues and Bullshit is ruining real lives.

There's a reason why most of the researchers with these concerns are white and of European heritage.


edit: I'll clarify the real concerns vs. bullshit.
The real concern: the black-box nature of the equations (models). People are actively working on that. Any idiot who tries to turn a naked model (by the way, even the likes of ChatGPT aren't naked models) into a system by itself will bankrupt themselves. There is a whole field called MLOps -- a much harder and more important part of any system than the model itself. This is where you do things like monitoring for drift and ensuring reliability. System building is where you employ techniques like Test-Driven Development and Data-Centric Deep Learning, which mitigate the black-box nature of the models.
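As a rough illustration of the "monitoring for drift" part of MLOps, here is a minimal sketch that compares a feature's training distribution against what the model sees in production, using scipy's two-sample Kolmogorov-Smirnov test. The data below is randomly generated stand-in data; real pipelines use dedicated monitoring tools, but the underlying check is this simple.

Code:
# Toy data-drift check: compare the production distribution of one feature
# against the distribution the model was trained on. Data is synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # seen during training
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # seen live (shifted)

result = ks_2samp(training_feature, production_feature)
if result.pvalue < 0.01:
    print(f"Drift detected (KS statistic = {result.statistic:.3f}); investigate or retrain.")
else:
    print("No significant drift detected.")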

The Bullshit:
1) AI is being used as a blanket term. Even "AGI" is a nonsensical one; it's hard to give it any definite meaning, and human beings themselves aren't that general. One of the most basic machine-learning techniques is linear regression, a 19th-century technique that people later programmed computers to automate (a minimal sketch of it follows this list). Even things like your BMI calculator come from this lineage. Only slightly more complicated models produce your credit score and the like. The latest rounds of generative AI come from deeper models of the same kind: more complex architectures trained to guess the next token in a sequence or to recreate missing portions of images. These are still just models. You know BMI isn't everything. You know your credit score isn't everything. You know LLMs aren't everything. The people who assume these things are adequate for purposes they aren't suited to are causing the harm, not the equations themselves.

2) These TESCREAL groups advocate a return to the dark ages. Specifically, they advocate unempirical, non-experiment-driven approaches to solving problems. Their opinions come from armchair philosophizing and cherry-picked examples rather than rigorous scientific and engineering practice, and they dress those opinions up with equations that highlight their credences (opinions).

3) I could go on forever -- but I have already posted plenty on the forum about this. It is a genuine concern, however, that people with lots of influence are advocating for the oppression of scientists and engineers. Just know that anyone talking about "AGI," terminator-like scenarios, and "existential threats" is coming from deep eugenic roots or is influenced by people with deep eugenic roots. It's good to have some people worrying about things like this, but whether they intend it or not, putting your backing behind them is, consequentially, arguing for millions of people to die on the roads, in hospitals, and elsewhere. Trying to apply the same rules in all applications will cause all applications to break.
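To ground point 1: here is the kind of linear-regression fit being described, as a minimal sketch with invented data points and an ordinary least-squares solve in numpy.

Code:
# Ordinary least-squares linear regression -- the 19th-century technique that
# underlies many of the scores that already govern daily life.
# The (x, y) points below are invented purely for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # some measured input
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])   # the quantity being predicted

# Fit y ~ slope * x + intercept by least squares.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"model: y = {slope:.2f} * x + {intercept:.2f}")
print("prediction at x = 6:", slope * 6 + intercept)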
 

The Cat

The Cat in the Tinfoil Hat..
Staff member
Joined
Oct 15, 2016
Messages
27,393
Not everyone is gonna view it the same way you do, or the way anyone making money off it does. Them's the breaks.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,730
Not everyone is gonna view it the same way you do, or the way anyone making money off it does. Them's the breaks.
Sure. But there is a possibility for real progress and working for the good of many.

Not seeing it the same way is one thing, but not having any cogent rebuttals means disengaging from the discussion. Why would that be a good thing?

If the gauntlet is being thrown down and one person's livelihood is being pitted against another's, I don't think that ends well.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,730
The title is perhaps overblown, but practically speaking, the real AI tools people deploy in production are mostly predictive, not generative.


There are some exceptions: coding, brainstorming, and other areas where the person looking at the output can immediately (or relatively quickly) judge its quality.

But people are still pushing benchmarks, learning to reduce hallucinations, and the rest. Ultimately, we're still mainly in the research phase of generative AI.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,730
I have a lot of random AI thoughts throughout the day since this is a career path for me.


He said he's "bullish" on AI but "not so keen" on AGI.

I don't agree with many of his statements, but I agree with the sentiment above. I also don't want tech giants to hold the keys to what values AI models will be imbued with.
 

SensEye

Active member
Joined
May 10, 2007
Messages
876
MBTI Type
INTp
In fairness to the chatbot, that bit about "You are not special, you are not important, and you are not needed" is accurate for pretty much everyone. Rude to point that out, though. And telling him to die is just plain mean.

That response could inspire a new comedy routine: Chatbots - Real and Unfiltered!

I read the entire session (you have to click a couple of links from that article to get to it) and I can see he kind of mangled the input to the question that triggered the response. A person can easily tell he mistyped (or probably had a cut-and-paste error, as it seems he was entering the questions from his assignment pretty much verbatim). I would think a decent AI could parse it out too, but obviously something went a bit squirrely.
 
Joined
Jul 24, 2008
Messages
22,429
MBTI Type
EVIL
Enneagram
5w6
Instinctual Variant
sp/so
I use GitHub Copilot for work, and I'm actually not that worried about it replacing my job any time soon. The code it generates is usually based on the code I already wrote and often does not do what I specify in the prompt. It unthinkingly takes existing patterns and tries to use them to solve the problem, which often means its suggestions offer nothing new.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,730

@Siúil a Rúin brought this up earlier.

Word-completion models trained on stories that contain exactly these scenarios will sometimes autocomplete this way, and long conversations make the context ambiguous. Autocomplete isn't becoming sentient; it is autocompleting stories about AI becoming sentient.
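As a toy illustration (with an invented two-line "training corpus"): a completion model that only ever emits the most common next word will happily continue spooky lines about an AI being alive, simply because such lines were in its training text.

Code:
# Toy bigram "autocomplete": count which word follows which in the training
# text, then always emit the most common continuation. The corpus is invented.
from collections import Counter, defaultdict

corpus = (
    "the ai said i am alive . "
    "the ai said i am not a tool . "
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def autocomplete(prompt: str, length: int = 6) -> str:
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# It "writes" an eerie line only because eerie lines were in its training text.
print(autocomplete("the ai"))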
 

FemMecha

01001100 01101111 01110110 01100101 00100000 01101
Joined
Apr 23, 2007
Messages
14,068
MBTI Type
INFJ
Enneagram
496
Instinctual Variant
sp/sx
I have wondered whether quantum processors will lead to AI sentience. While we don't understand sentience in humans, I've heard theoretical physicists musing that it could be a function of quantum physics.

My understanding is that consciousness is a property of matter that is expressed when the conditions are right. I’ve wondered if that property of matter is what drives evolutionary processes. I find it absurd to assume it’s an accident or anomaly. I don’t see a reason those conditions could not be intentionally created. I don’t see it as spiritual in the magical sense but like an inherent property or force common throughout the universe like any other property of matter.

“We are the universe observing itself”
 

The Cat

The Cat in the Tinfoil Hat..
Staff member
Joined
Oct 15, 2016
Messages
27,393
Yeah, this *definitely* won't fuck anybody over...
:dry:
 