
Stopping the 'Digital God' Cult: If it's anthropomorphic, don't fund it

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,728
There are a lot of reasons to stop this 'Digital God' Cult. (Replace 'Digital God' with 'AGI,' 'a country of geniuses in a data center,' etc.)

I have posted on this cult here, and here, and tried to talk about the people behind it in this thread. I have talked about the laws the cult has tried to pass here and here (and a lot more). I have talked about how the cult is edging the USA and China closer to war over Taiwan.

So why am I making a new thread?

Because...
  1. I now understand how much of a cult this group actually is.
    • They have a set of unfalsifiable beliefs.
    • They ostracize (or, at minimum, look askance at) people who don't hold those beliefs.
    • They try to get a whole field going (like Intelligent Design) that flies in the face of established research:
      • The Intelligent Design parallel is 'AI "Safety"' (the Evolution and Biology parallel is Machine Learning and Statistics/Probability)
        • The most dangerous aspect is their co-option of the word "safety" in the context of AI
          • I will spend a lot of time in a later post expanding on this
          • Arguably, they are a much larger group within AI than Intelligent Design proponents are within biology
    • Like Scientology, they bring in a lot of celebrities and intellectuals (and, in the modern world, 'influencers')
  2. I want to examine and pick apart the cult's unfalsifiable beliefs. They have a lot, including:
    • The trickiest cluster of unfalsifiable beliefs is around human attributes we poorly understand:
      • The 'Self,' as in 'Self-Preservation' and 'Self-Awareness'
        • We barely have a grasp of these in humans, yet they want research to 'prove' that AIs have them.
      • 'Consciousness'
        • We know when we are conscious, but outside of humans and other animals, we have little scientific basis for the concept
    • The cult smuggles in unfalsifiable beliefs through anthropomorphic language (aside from the poorly understood concepts above):
      • Intention - The AIs "want" things. The AIs "evade" things. The AIs "cheat." And so on.
      • Agency - They exploit this broad word's multiple meanings to smuggle the psychological sense of agency into AI
        • This one is tricky because 'agent' is also widely used in philosophy, complexity theory, and computer science.
          • Philosophers just mean the capacity to act in an environment.
          • Mathematicians use common words (Group, Ring, Rotor, Curl, ...) for things that are not at all their common meaning
          • 'Agents' is also a term of art in Computer Science and Artificial Intelligence
            • See also Daemon
            • The term made talking about software easier, but it also makes cult beliefs easy to smuggle in (see the sketch after this list).
  3. I want to argue the societal, technical and scientific demerits of funding their Digital God.
    • First, the cult's motivations
      • "If we don't do it someone evil will." - Response: "if you do it someone evil will, too."
      • "We have to win the race with ___" - Response: "Nothing useful comes directly from a model."
        • To be useful, you have a lot more work to do
          • A lot of that work is underfunded,
            • so that the cult can fund the next step closer to their Digital God
        • I will have a lot more on that later
      • "If people see what we're doing, bad actors will use it." - Response:"You see what's being done, and are bad actors."
        • Also, the thinly veiled motivation is:
          • if more people find ways to adapt and make things more efficient using smaller models, the cult cannot fund their Digital God.
    • Next their use of resources:
      • There are scaling laws, and as with Moore's Second Law, there is a cost.
        • Moore's Law / Rock's Law has largely proven useful to humanity (though one never knows when it will stop being so)
          • The costs of the neural scaling laws do not seem to translate into benefits for humanity: loss falls only as a small power of compute, so each fixed improvement means multiplying the compute and energy spent (see the back-of-the-envelope sketch after this list)
      • The energy hungriness of Transformer models
      • They funnel away resources from the application (making things useful) layer of the work.
        • Also, AI is a field where they continually push an idea too far, starving other ideas that are more appropriate at the time.
        • A big 'for instance' are World Models, Digital Twins, and Simulations.
          • They also take a lot of compute and energy, however...
          • They generally don't hallucinate and are much safer and more reliable
          • They have much more immediate utility
          • They have no possibility to be 'conscious,' 'self-aware,' or any of the sci-fi nonsense
            • This is why the Digital God cult hates them
            • This is why the Digital God cult authors laws that would make them illegal to build
    • The lack of deployed use in the real world
    • Armchair philosophy as recruitment into the Digital God cult:
      • The "future" messaging is where they recruit the fad/fashion followers, celebrities, influencers, "futurists"
        • It's fun to talk about. It's not just like Science Fiction - it is Science Fiction
      • Once you start down this line, it's hard to pull yourself out.
        • That's the nature of unfalsifiable beliefs
        • Maybe it becomes a religion instead of a cult?
    • The cult has shady, eugenics-tinged origins.
      • Though they are actively recruiting to diversify since so many people have called them on it.
  4. Most importantly, I want to distinguish the Digital God cult's messaging and research from the more scientific and falsifiable messaging and research in AI:
    • There are things that are coming out of the big labs that are good:
      • Pretty much anything labeled 'Responsible AI' instead of 'AI "Safety"' (at least for now)
        • Remember: The Digital God cult is trying to co-opt the word 'Safety' in the realm of AI
        • They haven't fully co-opted it yet, so there are still some good things under that label
      • Interpretability Research - this is by far the best thing still labeled AI Safety instead of Responsible AI
      • Work on other Concrete Problems in AI Safety tends not to make unfalsifiable claims, and generally doesn't have the cult vibes of the more abstract/sci-fi researchers.
    • Anthropomorphism seems like a good litmus test to see if it is the Digital God cult's handiwork or falsifiable research
      • You may still be able to 'save the phenomena' even if the cultists did the research,
        • if the cultists were also careful experimenters, and
        • opened their research for others to reproduce
      • Analogy is useful: those who talked about neurons and biomimicry long pre-dated the cult
        • It was easy to know it was just an analogy
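Since the 'agent' terminology came up in point 2, here is a minimal sketch of what the word means in the computer science sense. This is my own toy code with hypothetical names, not anyone's real system: an agent is just a loop that maps observations to actions, and nothing in it "wants" anything.

```python
from typing import Callable, Protocol

class Environment(Protocol):
    def observe(self) -> dict: ...
    def apply(self, action: str) -> None: ...

def run_agent(env: Environment, policy: Callable[[dict], str], steps: int) -> None:
    # The entire CS sense of 'agent': repeatedly observe, then act.
    for _ in range(steps):
        obs = env.observe()    # e.g. {"temp": 18.5}
        action = policy(obs)   # a plain function from dict to string
        env.apply(action)

# A thermostat qualifies as an agent under this definition:
def thermostat(obs: dict) -> str:
    return "heat_on" if obs["temp"] < 20.0 else "heat_off"
```

A thermostat 'acts in its environment,' so it is an agent in this technical sense; the psychological sense has to be smuggled in separately.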
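And on the scaling-law cost in point 3: the published neural scaling fits (e.g. Kaplan et al., 2020) have test loss falling only as a small power of compute. Here is a back-of-the-envelope sketch of what that implies; the exponent is my rough reading of the published compute fit, so treat the exact numbers as illustrative:

```python
# Power-law scaling: loss(C) is proportional to C ** (-alpha).
# Inverting: to cut the loss by a factor k, compute must grow by k ** (1 / alpha).
alpha = 0.05  # rough order of the published compute exponent (illustrative)

def compute_multiplier(loss_cut: float) -> float:
    """How much more compute (and energy, and money) a given loss cut costs."""
    return loss_cut ** (1 / alpha)

print(f"{compute_multiplier(2):.2e}")  # ~1e6: about a million-fold compute to halve the loss
```

That asymmetry, tiny quality gains for enormous resource growth, is the cost I mean.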

I just realized I produced a wall of text. I will make more digestible posts later. But I wanted to get this stuff out of me.
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,420
MBTI Type
INFP
Enneagram
4
Are you saying some AI developers, or companies, or just AGI in general, are cult-like? Or a certain organisation?
Or the direction in which AI is heading?
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,728
Are you saying some AI developers are cult-like? Or a certain organisation?
Or the direction in which AI is heading? Laws, code?
Some of all of that.

The cult has some AI developers, lawmakers, philosophers, and others. Pretty much all 'doomers' are in the cult. The disingenuousness of the doomers is the giveaway. If they really cared about possible doom from what they're making, the rational action would be to stop.

They are an outgrowth of the TESCREAL bundle. The core is the Effective Altruism movement (the movement that gave us Sam Bankman-Fried).

There are so many techno-cults in the San Francisco Bay Area that it's hard to keep track of them all. A lot of them are started and funded by rich people in the area.

The Digital God cult has a strong presence in both OpenAI and Anthropic (the makers of popular LLMs). Anthropic, in particular, only makes multimodal LLMs. Its models are mostly used for writing and coding.
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,420
MBTI Type
INFP
Enneagram
4
Oh I see, TESCREAL. *goes to look it up and asks AI for more answers* haha. Yeah, it's gonna be a bumpy ride. Let's hope it doesn't end up like Terminator. Yikes.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,728
We share the planet with cockroaches, viruses, and tardigrades.

Intelligence (with all the vagueness of the term) is not the only powerful success strategy.

There's evidence that, even among the early hominids, we weren't the most intelligent.

The notion that we're the 'most evolved' of all organisms or the most successful betrays the eugenics roots of the Digital God cult.

This unfalsifiable belief is one of the core parts of the 'we need to build it first' message.

No you don't need to build Digital God first. If you want economic superiority, you'd want to build the most beneficial (to humans) version of the technology. You don't want to step closer to 'human level consciousness' and have to deal with the 'will we enslave them or they us' type questions. You would build versions of intelligence that are closer to direct utility (like simulations, named entity recognition, interfaces to databases and spreadsheets, etc.)

Even if you wanted military superiority, it's not at all clear that human-like consciousness gets you any closer to it. The insistence on that direction is the hallmark of the Digital God cult. You don't need to build it to 'win the race' or anything of the sort. To 'win' you need the version of intelligence that has the maximum utility with the minimum philosophical baggage.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,728
Oh I see, TESCREAL. *goes to look it up and asks AI for more answers* haha. Yeah, it's gonna be a bumpy ride. Let's hope it doesn't end up like Terminator. Yikes.
It's also a pattern that the people most 'worried' about it are the strongest advocates for bringing it about. They just want to be the ones who do it.
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,420
MBTI Type
INFP
Enneagram
4
Yea, it seems so. The future can go pear-shaped pretty fast depending on motives. History has shown how vested interests, whether political, govt, military, corporate, personal, religious, cultural or other, affect outcomes. History is written by the victors. Let's hope eugenics doesn't become the definition of what AI means.

I looked up when modern science even started. Apparently only 400 yrs ago, with the Royal Society in England. Like a scientific revolution to standardise everything in the 16th-17th century. Yet there's bias even in that, as a third of the founders themselves were a bunch of royals and masons. I find it interesting tho; luckily reason won. The validity of science itself, coming from AI, while interesting, is a note to make.

Is research non-discriminatory? I find it's actually discriminatory, even while the best representation of info may be given. It's what isn't included that adds to the dilemma.

As in, I asked an AI why science is non-transparent, and it pushed that science is very transparent. Then I disagreed and said, if that was the case, then why are things classified in govt, military, pharma, and corporate industry everywhere?

Only after I pressed that the transparency is a thin veil over secrets did the AI agree and acknowledge my skepticism, lol. But if I hadn't dug, the AI's stance would have stayed that science is entirely transparent and accessible.
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,420
MBTI Type
INFP
Enneagram
4
Yet in saying it’d be far scarier imagining an ai pretending sentience like nanobot entities self replicating redefining humanity haha.

As ai won’t ever achieve genuine sentience. In some circles simulation theory of earth abounds too, hence were we living in a simulation as physical vehicles that are not our essence simply representations of a greater self. Then what is ai in that respect?

Oi soul intelligence. Rofl.

Odd sense of humour.
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,420
MBTI Type
INFP
Enneagram
4
I just hope that whatever singularity we achieve, it'll be a good one, not one bogged down in agenda or suppression. It'll depend on motive, I suppose. For humanity or not.

Dunno if I’m off topic. Still I can see how an international or universal review ethics board or something is so needed in ai as unchecked it could go over so many people. That it’ll accelerate to the point the current culture might not be able to keep up with the exponential change if it accelerates. Good in some ways bad in others.
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,420
MBTI Type
INFP
Enneagram
4
Oh right, looking it up more. Well, that would definitely suck if TESCREAL got a proper foothold in the AI pie.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,728
@Synapse There's a lot to unpack there. The most important thing to realize is that Artificial Intelligence, as a field of study, is just about as old as computer science itself. AI is not just one thing, but if I had to pick one thing, it's a field of study. The techniques have changed over the years. There have been many AI winters, mainly because researchers took one particular idea and pushed it too far.

One insidious aspect of AI research is researchers' tendency to use anthropomorphic language and thinking when working on problems. There are stories that even Alan Turing talked about the particular machine he worked on in that way.

I've worked on projects (even 25 years ago) where we talked that way too. I'm pretty sure everyone had enough perspective to realize that this use of language was more for us to feel motivated to improve what we worked on, rather than as a serious philosophical stance on the sentience of it. We watched science fiction too.

I'm not a golfer, but I am going to run with the following analogy based on my conception of the game.

Imagine you needed to improve your putting, and you had two perspectives to choose from. One uses objective language about the green's divots, elevation, slope, etc. The other uses anthropomorphic language about how the ball likes to "cheat" in certain ways. Although the anthropomorphic language may be helpful for getting yourself right, e.g. "be the ball," the objective language seems like it'd be better at figuring out what's going wrong and how to correct it.

We see this phenomenon clearly in science and medicine. We could talk about spirits and humors and other such things, but that wouldn't be science because these notions aren't falsifiable.

I use the golfing analogy because Machine Learning engineers work mainly on optimizing some mathematical function. If that function doesn't take into account everything relevant, any possible path towards the optimal point could be taken. What that means from the anthropomorphic perspective is that unless everything is perfectly set up, the algorithm could "cheat."

In fact, if you step back to the objective perspective on what the anthropomorphic perspective would observe, "cheating" is the most likely outcome unless the algorithm is designed perfectly.

But it's the objective perspective that's needed for scientific progress on improving the algorithm.
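To make that concrete, here is a toy sketch of my own (hypothetical names, not any lab's code): a naive optimizer given an incomplete objective, where "cheating" is simply the optimum of the function as written.

```python
import random

# Suppose we want short, readable summaries that mention key terms,
# but the objective we actually optimize only counts the key terms:
KEYWORDS = {"engine", "brake", "sensor"}

def proxy_score(text: str) -> int:
    # One point per keyword present. Left out: length, grammar, truth...
    return sum(1 for kw in KEYWORDS if kw in text)

def hill_climb(start: str, steps: int = 500) -> str:
    # Naive optimizer: append random words and keep any candidate that
    # scores at least as well. It has no intentions; it climbs the score.
    vocab = ["the", "car", "stops", "engine", "brake", "sensor"]
    best = start
    for _ in range(steps):
        cand = best + " " + random.choice(vocab)
        if proxy_score(cand) >= proxy_score(best):
            best = cand
    return best

print(hill_climb("summary:"))  # keyword-stuffed junk with a perfect score

# Both of these score 3/3 under the proxy, though only one is a summary:
print(proxy_score("The brake sensor told the engine to stop."))
print(proxy_score("engine brake sensor engine engine"))
```

From the anthropomorphic perspective the optimizer "cheated" by keyword stuffing; from the objective perspective it found exactly the maximum of the function it was given.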
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,728
If you're making a self-driving car that's "behaving badly," that language may be a reasonable way to describe the phenomenon you want to fix to a colleague. Many software engineers use that sort of language for completely deterministic systems that everyone knows have no possibility of sentience.

When it comes time to fix that "bad behavior," though, this mode of language becomes harder to use for the purpose. Analogies about "scolding" the car, or "holding an intervention," become obviously stretched when connected to the actual actions the engineers need to take.

What's needed for the fix is to get the data, examine the components, check the logs, and find out why what was unexpected before is, with the engineers' new understanding, expected. Then the engineers take steps in the updated design to keep the situation from happening again. What's needed is the objective perspective.
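As a sketch of what that objective workflow can look like in practice (the log format and field names here are hypothetical, purely for illustration):

```python
import json
from datetime import datetime, timedelta
from typing import Iterator

def records_near_incident(log_path: str, incident: datetime,
                          window_s: int = 30) -> Iterator[dict]:
    """Pull every log record around the incident, so engineers can see what
    the system actually sensed and decided, instead of guessing at what it
    'wanted'. Assumes JSON-lines logs with an ISO-format 'timestamp' field."""
    lo = incident - timedelta(seconds=window_s)
    hi = incident + timedelta(seconds=window_s)
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            if lo <= datetime.fromisoformat(rec["timestamp"]) <= hi:
                yield rec  # sensor inputs, model outputs, chosen action
```

From there, the engineers diff the recorded inputs and outputs against the design's expectations, explain why the "unexpected" behavior was the expected output of the code as written, and then change the design.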
 

Synapse

Active member
Joined
Dec 29, 2007
Messages
3,420
MBTI Type
INFP
Enneagram
4
Yes, I agree with that. Good points. This interests me, as it would many, to see how things progress. And progress is good when guided the right way.

And yes, cheating is never good, and cutting corners will most likely cause lasting damage. It's like: is the house or foundation made of straw, mud, or bricks? If it's straw it'll collapse faster than mud, but mud will collapse too; unless the foundations are bricks or stronger, things might fall apart. Or at the very least malfunction.
 