ygolo
My termites win
- Joined
- Aug 6, 2007
- Messages
- 6,730
There are a lot of reasons to stop this 'Digital God' cult. (Replace 'Digital God' with 'AGI,' 'a country of geniuses in a data center,' etc.)
I have posted on this cult here, and here, and tried to talk about the people behind it in this thread. I have talked about the laws the cult has tried to pass here and here (and a lot more). I have talked about how the cult is edging the USA and China closer to war over Taiwan.
So why am I making a new thread?
Because...
- I now understand how much of a cult this group actually is.
  - They have a set of unfalsifiable beliefs.
  - They ostracize (or, at minimum, look askance at) people who don't hold those beliefs.
  - They try to get a whole field going (like Intelligent Design) that flies in the face of established research:
    - The Intelligent Design parallel is 'AI "Safety"' (the Evolution and Biology parallel is Machine Learning and Statistics/Probability).
    - The most dangerous aspect is their co-option of the word "safety" in the context of AI.
      - I will spend a lot of time in a later post expanding on this.
    - Arguably, they are a much larger group within AI than Intelligent Design is relative to Evolution.
  - Like Scientology, they bring in a lot of celebrities and intellectuals (in the modern world, 'influencers' too).
- I want to examine and pick apart the cult's unfalsifiable beliefs. They have a lot, including:
  - The trickiest cluster of unfalsifiable beliefs are those around human attributes we poorly understand:
    - The 'Self,' as in 'Self-Preservation' and 'Self-Awareness.'
      - We have barely a grasp of these in humans, yet they want research to 'prove' that AIs have them.
    - 'Consciousness'
      - We know when we are conscious, but outside of animals, we have little scientific basis for the concept.
  - The cult smuggles in unfalsifiable beliefs through anthropomorphic language (aside from the poorly understood concepts above):
    - Intention - the AIs "want" things, the AIs "evade" things, the AIs "cheat," and so on.
    - Agency - they use this broad word with multiple meanings, conflating them to smuggle the psychological sense of agency into AI.
      - This one is tricky because 'agent' is also widely used in philosophy, complexity theory, and computer science.
        - Philosophers just mean the capacity to act in an environment.
        - Mathematicians use common words (Group, Ring, Rotor, Curl, ...) for things that are not at all their common meaning.
        - 'Agents' are also a term of art in Computer Science and Artificial Intelligence.
          - See also: Daemon.
        - 'Agents' made talking about software easier, but it also makes the cult's beliefs easy to smuggle in.
- I want to argue the societal, technical, and scientific demerits of funding their Digital God.
  - First, the cult's motivations:
    - "If we don't do it, someone evil will." - Response: "If you do it, someone evil will, too."
    - "We have to win the race with ___." - Response: "Nothing useful comes directly from a model."
      - To be useful, you have a lot more work to do.
        - A lot of that work is underfunded,
          - so that the cult can fund the next step closer to their Digital God.
        - I will have a lot more on that later.
    - "If people see what we're doing, bad actors will use it." - Response: "You see what's being done, and you are bad actors."
      - Also, the thinly veiled motivation:
        - If more people find ways to adapt and make things more efficient using smaller models, the cult cannot fund their Digital God.
  - Next, their use of resources:
    - There are scaling laws, and like Moore's Second Law, there is a cost.
      - Moore's Law / Rock's Law has largely proven useful to humanity (though one never knows when it will stop being so).
      - The costs of the neural scaling laws do not seem to translate into benefit to humanity.
    - The energy hungriness of Transformer models.
    - They funnel resources away from the application layer of the work (making things useful).
    - Also, AI is a field where they continually push one idea too far, starving other ideas that are more appropriate at the time.
      - A big 'for instance' is World Models, Digital Twins, and Simulations.
        - They also take a lot of compute and energy, however...
        - They generally don't hallucinate, and are much safer and more reliable.
        - They have much more immediate utility.
        - They have no possibility of being 'conscious,' 'self-aware,' or any of the sci-fi nonsense.
          - This is why the Digital God cult hates them.
          - This is why the Digital God cult authors laws to make them illegal to build.
    - The lack of deployed use in the real world.
      - Some claim that these bigger models will bring more use, but:
        - Diminishing returns are already seen.
        - There is no 'competitive moat' in Frontier Pre-trained Transformer Models.
          - That isn't to say AI products and businesses can't have a 'moat' - just not Frontier Pre-trained Transformer Models,
            - which is the current design of their Digital God.
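The 'diminishing returns' point can be made concrete with a quick sketch, assuming a Kaplan-style power-law fit of loss against parameter count (the exponent and constant below are illustrative placeholders, not measurements from any particular lab):

```python
# Sketch: why power-law scaling implies diminishing returns.
# Assumes loss follows L(N) ~ (N_c / N)**alpha; alpha and N_c here
# are illustrative values, not fitted numbers.
alpha = 0.076   # illustrative scaling exponent
N_c = 8.8e13    # illustrative constant (parameter count scale)

def loss(n_params: float) -> float:
    """Power-law loss as a function of parameter count."""
    return (N_c / n_params) ** alpha

# Each 10x increase in parameters buys a smaller absolute loss drop:
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

Under any fit of this shape, each order of magnitude of extra parameters (and the compute and energy bill that comes with it) shrinks the loss by less than the previous one did.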
  - Armchair philosophy as recruitment into the Digital God cult:
    - The "future" messaging is where they recruit the fad/fashion followers, celebrities, influencers, and "futurists."
      - It's fun to talk about. It's not just like Science Fiction - it is Science Fiction.
      - Once you start down this line, it's hard to pull you out.
        - That's the nature of unfalsifiable beliefs.
        - Maybe it becomes a religion instead of a cult?
  - The cult has shady/eugenics origins.
    - Though they are actively recruiting to diversify, since so many people have called them on it.
- Most importantly, I want to distinguish the Digital God cult's messaging and research from the more scientific and falsifiable messaging and research in AI:
  - There are things coming out of the big labs that are good:
    - Pretty much anything labeled 'Responsible AI' instead of 'AI "Safety"' (at least for now).
      - Remember: the Digital God cult is trying to co-opt the word 'Safety' in the realm of AI.
      - They haven't fully co-opted it yet, so there are still some good things under that label.
    - Interpretability research - by far the best thing still labeled AI Safety instead of Responsible AI.
    - Other Concrete Problems in AI Safety tend not to rely on unfalsifiable claims, and generally don't have the cult vibes of the more abstract/Sci-Fi researchers.
  - Anthropomorphism seems like a good litmus test for whether something is the Digital God cult's handiwork or falsifiable research.
    - You may still be able to 'save the phenomena' even if cultists did the research,
      - if the cultists were also careful experimenters, and
      - opened their research for others to reproduce.
    - Analogy is useful, and those who came long before, talking about neurons and biomimicry, far pre-dated the cult.
      - It was easy to know it was just an analogy.
I just realized I produced a wall of text. I will make more digestible posts later. But I wanted to get this stuff out of me.