
Stopping the 'Digital God' Cult: If it's anthropomorphic, don't fund it

SensEye

Active member
Joined
May 10, 2007
Messages
953
MBTI Type
INTp
I found the last 30 seconds of that video interesting, where he references some AI guy who posted that, given the current capabilities of AI, all white-collar jobs could be replaced within 5 years. The podcast host doesn't agree and says the right way to think about it is that AI tools will make humans hyper-productive.

The podcast host is delusional if he thinks that AI tools making humans hyper-productive (which may well be true) will not lead to massive layoffs and downsizing of workforces. That is just the way businesses operate. They want to minimize their expenses (in this case wages), so the fewer wage-earning employees they need to 'get the job done', the fewer they will employ.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
I found the last 30 seconds of that video interesting, where he references some AI guy who posted that, given the current capabilities of AI, all white-collar jobs could be replaced within 5 years. The podcast host doesn't agree and says the right way to think about it is that AI tools will make humans hyper-productive.

The podcast host is delusional if he thinks that AI tools making humans hyper-productive (which may well be true) will not lead to massive layoffs and downsizing of workforces. That is just the way businesses operate. They want to minimize their expenses (in this case wages), so the fewer wage-earning employees they need to 'get the job done', the fewer they will employ.
The key question is whether the problems to be solved by new businesses can appear faster than existing businesses shed workers.

Problem finding and the formation of new small businesses to serve the people who have those problems in sustainable ways will determine whether the economic outcomes are good or bad.

Thinking from that point of view, we would want to find ways to make the cost of customer acquisition for businesses go to zero. Then the lifetime value of each customer to the business doesn't have to be very high, especially when considering that the operational costs are also going to zero.

How different is a co-op from a founder-only company?

You could have a situation where:
Value to Customers >> money from customers >> costs for the business
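
A toy calculation makes that ordering concrete (every number below is invented purely for illustration, not taken from the thread):

```python
# Toy unit economics for a small, near-zero-overhead business.
# All figures are made up for illustration.

customers = 200
value_per_customer = 1_000        # value the customer actually gets ($/yr)
price_per_customer = 120          # what the customer pays ($/yr)
cac_per_customer = 5              # customer acquisition cost, driven toward zero
opex_per_customer = 10            # operating cost, also driven toward zero

value_created = customers * value_per_customer
revenue = customers * price_per_customer
costs = customers * (cac_per_customer + opex_per_customer)

print(f"value created: ${value_created:,}")   # $200,000
print(f"revenue:       ${revenue:,}")         # $24,000
print(f"costs:         ${costs:,}")           # $3,000
# value created >> revenue >> costs: the business stays sustainable
# at a low price precisely because CAC and opex are near zero.
```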

The first place we are already seeing laid-off workers trying to be founders of new small businesses is in software development.

I think we should want that to go well, so that as more workers follow that path, there will be somewhat of a blueprint to follow.
 

SensEye

Active member
Joined
May 10, 2007
Messages
953
MBTI Type
INTp
That guy is probably right. At least in the sense of the upheaval. We'll just have to see how it goes in the next decade or so. I don't think AI will create anywhere near as many jobs as it makes obsolete. I doubt re-training folks to use AI will help much either. You are still going to need far fewer people overall.

In the long term, human labor should be made obsolete. That can't happen under current economic models. We'll see whether WWIII or whatever happens before what that guy calls the AI Paradox can get resolved.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
That guy is probably right. At least in the sense of the upheaval. We'll just have to see how it goes in the next decade or so. I don't think AI will create anywhere near as many jobs as it makes obsolete. I doubt re-training folks to use AI will help much either. You are still going to need far fewer people overall.

In the long term, human labor should be made obsolete. That can't happen under current economic models. We'll see whether WWIII or whatever happens before what that guy calls the AI Paradox can get resolved.
The economic catastrophe is definitely first in line (compared to existential risk) along any of the more sensible paths of development.

I still think that having nobody own the means of production is a possible means of avoiding the economic catastrophe. Open standards, open data, open science, and open source go a long way toward minimizing the concentration of wealth.

I would also say that non-technical people could educate themselves on the topic (with grounded experience, rather than other people's opinions) so as to make decisions based on AI science instead of AI science fiction.

AI has, since inception, been a buzzword for fancy computer science. The line of demarcation between plain old CS and AI is fuzzy.

"AGI" is even more ambiguous. It has so many definitions that who is saying the term really defines what it means.

But for many it's a cult (and it's easy to find people who use religious language around it). Anthropomorphic language is only a short step away from religious language about human-made systems.

Someone who thinks a lot about human-centeredness in AI is Fei-Fei Li:
 
Last edited:

SensEye

Active member
Joined
May 10, 2007
Messages
953
MBTI Type
INTp
I still think that having nobody own the means of production is a possible means of avoiding the economic catastrophe. Open standards, open data, open science, and open source go a long way toward minimizing the concentration of wealth.
That is probably what has to happen in the long run. Open source might be part of the solution, but if, for example, some company that sells widgets is going to replace 500 customer service reps with AI, even if it is open source AI, that's still 500 fewer jobs around. Open source AI software may keep any specific AI tech company from making a profit, but those savings will still accrue to the widget-making company.

Unless you go further and mean the state needs to take over all aspects of the economy (socialist/communist style). That doesn't work now, I think due to human nature and what motivates people, but in a world where human labor is in low demand, it may be inevitable.

But I can't even imagine the chaos of such a transition from the current capitalist model. There would have to be a period of mass poverty and social unrest in order to drive such a drastic change. That is just as likely to lead to widespread war as to a change in the system. Hopefully, it won't happen in my lifetime so I can skip the chaos.

Actual robots replacing human labor are still some way off (i.e. not in my lifetime), but I can see any human who currently sits in front of a computer to do their job being replaced by AI during my lifetime. That may not happen, but it will probably be technically feasible.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
That is probably what has to happen in the long run. Open source might be part of the solution, but if, for example, some company that sells widgets is going to replace 500 customer service reps with AI, even if it is open source AI, that's still 500 fewer jobs around. Open source AI software may keep any specific AI tech company from making a profit, but those savings will still accrue to the widget-making company.

Unless you go further and mean the state needs to take over all aspects of the economy (socialist/communist style). That doesn't work now, I think due to human nature and what motivates people, but in a world where human labor is in low demand, it may be inevitable.

But I can't even imagine the chaos of such a transition from the current capitalist model. There would have to be a period of mass poverty and social unrest in order to drive such a drastic change. That is just as likely to lead to widespread war as to a change in the system. Hopefully, it won't happen in my lifetime so I can skip the chaos.

Actual robots replacing human labor are still some way off (i.e. not in my lifetime), but I can see any human who currently sits in front of a computer to do their job being replaced by AI during my lifetime. That may not happen, but it will probably be technically feasible.
The AI being freely available means a particular widget maker can't use capital spent on it as a "moat."

Indeed, that doesn't bar the widget maker from having other capital-based moats. There are incumbent (first-mover) advantages like branding, network effects, and switching costs that capital can be used to capture first.

But in some ways a new upstart with leaner operating costs can counter-position its way into competing.

Yes. The period of extreme poverty is my biggest fear as well (like what happened at the end of the 1800s). I'm not resigned to it being an inevitability, however.

I don't think Communism (big C) has a proper way to deal with scale and the calculation problem of procurement, unless it tries state-controlled rationing. The history of that hasn't been great.

The mixed model cannot be avoided in practice. At minimum, the state needs to properly craft the framework for fair markets.

I don't have all the answers, but I think it's important to fight on this front too. It's a front I am most familiar with.

But there are a lot of simple lies on both sides of the aisle that make dealing with it harder.

There are complicated uncertain truths about technology.

Poor people are starting businesses at a rate that hasn't been seen for a long time. I think we need to lean into that phenomenon and help them be more than just simple gig workers. This is one potential area where AI could level the playing field. But it'll need to look very different from large closed frontier models and more like the pre-fear-hype version of open, use-case-focused models--though not all the way.

If you let the big AI companies strip-mine the commons and then put up $200-to-$300-a-month paywalls for access, then the above mechanism for poor gig workers to become real CEOs with the aid of AI gets out of reach.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
I didn't address robots in my last post.

But in terms of the "race with China," the US is clearly losing (and by a lot). It's probably not even second (see Japan, France, Germany), and may not even be in the top 5 (countries in the Middle East and India have large robotics talent pools that have yet to be put into full production).

I think robotics has the potential to arrive a lot sooner. But cultists would find that direction upsetting because it's too useful and would take funding away from their Digital God projects (unless it's humanoid, I guess).
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
I am sympathetic to the concerns in this video (the guest's concerns):

I am not as certain that the benefits are out of reach, though I am very skeptical of scaled GPTs getting us there without ridiculous social and ecological costs.

I'm also not barring consciousness as a possibility (I don't understand Roger Penrose's Orch OR theory at all -- though I would like to). But I believe it's cult-like to take the "increase complexity and see what pops out" path. Fear along this path is a core part of the hype. Legitimate fears center on the black-box nature of these systems. Illegitimate fears about existential risk have to be seen as an attempt to guarantee that cultists are the only actors in the field.
 
Last edited:

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
Deep in their discussion is the kernel of the complicated truth I have been trying to convey.


The rest of the conversation is mostly in line with what I have been saying, including the fear-hyping/moralizing.

I did not fully appreciate the Empire angle. But I have certainly known it. The number of low-wage "help train AI" scams I have dealt with (I fell for the first one) is ridiculous.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
More complicated truths. I linked to the regulation part of the discussion (but watching the whole thing is a button click away):


There are other questions around regulation, too. But I think the one I linked was the most salient.

I am dismayed that the interviewer didn't ask questions about (or understand) the open vs. closed issue.

I don't know how you can talk about transparency and not mention openness.
 
Last edited:

The Cat

The Cat in the Tinfoil Hat.. ❌👑
Staff member
Joined
Oct 15, 2016
Messages
27,689
I don't know how you can talk about transparency and not mention openness.
In America. We find a way. Just like how we can talk about truth without a scrap of honesty. The way I figure it, there's transparency and then there's "transparency." In the end, one is a fantasy of clarity and the other is just technically not entirely opaque.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
In America. We find a way. Just like how we can talk about truth without a scrap of honesty. The way I figure it, there's transparency and then there's "transparency." In the end, one is a fantasy of clarity and the other is just technically not entirely opaque.
Please clarify.

I meant that open source allows researchers to examine how something is put together in order to:
1) Check whether you want to use that software
2) Potentially contribute if you find something objectionable or find a way to improve it
a) Fork the project if the current community around the project isn't taking up your contributions
3) If the data is also properly sourced (fully open-sourced instead of just open-weights, as with many LLMs and VLMs), replicate the work to see if the results are real (like science in any other field)
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
It's definitely not inevitable that taking copyrighted material is the only way to make progress:


 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863

The above form of AI is the kind most likely to solve diseases and do other similar things. But most of the funding and fear-hype is around LLMs and VLMs with diminishing returns. Frankly, the only real breakthroughs in that direction have come from using evolutionary strategies (more modern incarnations like AlphaEvolve and the Darwin-Gödel Machine) around LLMs, instead of scaling LLMs directly (they do better than scaling through either pre-training or test-time compute).
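
A minimal sketch of what "evolutionary strategies around LLMs" means in practice. The mutation step below is a random numeric tweak standing in for an LLM rewrite, and the objective is a toy function; AlphaEvolve-style systems evolve code and use real model calls, so treat every name here as illustrative:

```python
import random

def propose_variant(candidate):
    """Stand-in for the LLM step: AlphaEvolve-style systems ask a model
    to rewrite a candidate program; here we just nudge one number."""
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.5)
    return child

def score(candidate):
    """External evaluator: the objective, not the LLM, judges fitness.
    Toy objective: drive every coordinate toward zero."""
    return -sum(x * x for x in candidate)

def evolve(seed, generations=200, population_size=16):
    population = [seed]
    for _ in range(generations):
        parents = random.choices(population, k=population_size)
        children = [propose_variant(p) for p in parents]
        # Selection: keep the best scorers from parents + children.
        population = sorted(population + children, key=score, reverse=True)[:population_size]
    return population[0]

best = evolve([3.0, -2.0, 5.0])
print(best, score(best))
```

The point is that the model only proposes; an outside evaluator decides what survives, which is very different from hoping a bigger model "reasons" its way to an answer on its own.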
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863

It should be noted that only attack vectors against the "AGI" cultists' favorite architecture are mentioned in the video.

There's little to no chance any of those attack vectors work on AlphaFold.

In short, we're doing it wrong.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863


This fundamental of machine learning is so old that in some ways it pre-dates much of modern computing.

Also, hopefully the golf analogy makes more sense now.
 
Last edited:

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
More "we're doing it wrong":

The current "reasoning" paradigm has so much wrong with it, but this is one I think anyone can understand.

I apologize in advance for the anthropomorphic nature of this, but that language is good for talking about problems, though not solutions.

Imagine that when you are thinking through something, you have only one sit-down session to do it. If you wanted to go further, someone else would have to capture your work and feed it back to you in some way.

That's kinda how the current LLM paradigm works.
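
In code terms, the application has to re-send the captured "work" on every call, because the model itself keeps no state between calls. A minimal sketch, where `call_model` is a placeholder for whatever real LLM API you'd use:

```python
def call_model(transcript: str) -> str:
    """Placeholder for a real LLM API call: an actual implementation
    would send `transcript` to a model endpoint and return its completion."""
    return f"(reply based on {len(transcript)} chars of context)"

transcript = ""
for user_turn in ["What limits protein design?", "Go deeper on folding."]:
    transcript += f"User: {user_turn}\n"
    reply = call_model(transcript)          # the ENTIRE history goes in every time
    transcript += f"Assistant: {reply}\n"   # ...and gets captured for the next call

print(transcript)
# Nothing persists inside the model between the two calls; once the
# transcript outgrows the context window, the "sitting" is over.
```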

As I've said before, trying to force-scale this paradigm to solve all problems is nonsense.

I assure you a protein folding system is a lot closer to figuring out essential aspects of biology than a language model that attempts to "talk out" essential issues in biology.

There may be a scale where that could happen by some emergent accident, but it's idiotic to pursue it in that way.

Edit:
The actual research:
 
Last edited:

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
I take these as serious evidence that the "scaling pill" so many of the Digital God cultists have swallowed is not at all a good thing (even for their priests).
Clearly other paradigms are working much better:




If OpenAI and Anthropic are playing the "only we are worthy of creating AI" game, they are clearly losing at that conceited game as well.
 
Last edited: