
Stopping the 'Digital God' Cult: If it's anthropomorphic, don't fund it

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
More thoughts countering the "scale it and benefits will come" cult.

There are many forms of reasoning. But for one form, deductive reasoning, we have developed extremely sharp forms of quality control (usually involving symbols):
1) Mathematics - often called the science of patterns. The proof (a mechanism refined over centuries) is a means of producing high-quality deductive reasoning about patterns. There are attempts to automate and formalize it, called theorem provers, the most popular being Lean and Rocq (a minimal Lean example is sketched just after this list).
Here is one video of Terence Tao (arguably the world's best living mathematician) using LLMs with Lean (there are others):

2) Computers and software - initially just an offshoot of mathematics, but now used everywhere. We've gone from humans being the computers, to tapes, to punch cards, to assemblers, to compilers, to code generators, to no-code tools, and now to AI. This is an amazing match for AI, and it's an evolution of things that have been happening for a long time.
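
To make "quality control" concrete, here is a minimal Lean 4 sketch (my own illustration, not taken from the video above). Lean only accepts the file if the proofs actually check, so once verified the result never needs to be re-derived:

```lean
-- A trivially small, machine-checked statement: a stored copy of this can be
-- trusted because the proof assistant has already verified it.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b

-- An even smaller example: definitional computation closes the goal.
theorem two_plus_two : 2 + 2 = 4 := rfl
```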

In both of these cases, you really don't need or want to regenerate results in math or computer science. Regenerating is incredibly wasteful - what you want is a copy. There's an old joke about reducing such problems to already-solved problems.

But when you look at the most energy-inefficient and most heavily used areas for these large-scale AIs, they are computing and math. This, to me, suggests that building an appropriate "database" of deductive arguments (say, in Lean or Rocq) and of code snippets for standard problems, which then gets served to the whole public, is the most societally energy-efficient way to build a system for this purpose.
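
As a rough sketch of that idea (the names and structure below are my own, not an existing system): the expensive generation step runs at most once per problem, results are kept only if an independent checker verifies them, and everyone else is served the stored copy.

```python
# Lookup-before-regenerate: serve verified artifacts from a store; only fall
# back to an expensive generator (e.g. an LLM plus proof search) on a miss.
from typing import Callable, Dict, Optional

class VerifiedResultStore:
    def __init__(self, generate: Callable[[str], str],
                 verify: Callable[[str, str], bool]):
        self._cache: Dict[str, str] = {}   # canonical problem statement -> verified artifact
        self._generate = generate          # expensive step (e.g. LLM generation)
        self._verify = verify              # cheap, trusted check (e.g. Lean type-checking)

    def get(self, problem: str) -> Optional[str]:
        if problem in self._cache:             # hit: serve the stored copy, no regeneration
            return self._cache[problem]
        candidate = self._generate(problem)    # miss: pay the generation cost once
        if self._verify(problem, candidate):   # only verified artifacts are stored
            self._cache[problem] = candidate
            return candidate
        return None                            # verification failed; nothing is cached

# Usage with toy stand-ins for the generator and the verifier:
store = VerifiedResultStore(
    generate=lambda p: "theorem two_plus_two : 2 + 2 = 4 := rfl",
    verify=lambda p, proof: proof.endswith(":= rfl"),
)
print(store.get("2 + 2 = 4"))   # generated, verified, and cached
print(store.get("2 + 2 = 4"))   # served from the cache the second time
```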
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
Fine-tuning for particular goals still seems to yield results:
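
For readers unfamiliar with the term, the sketch below shows what goal-specific fine-tuning typically looks like in practice. It is a generic illustration using the Hugging Face transformers/datasets libraries, not the setup behind the result referenced above; the model and dataset names are placeholders.

```python
# A generic fine-tuning sketch: adapt a small pretrained model to one narrow
# task (binary sentiment classification, as a stand-in for "a particular goal").
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "distilbert-base-uncased"          # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")                  # placeholder for the narrow-goal dataset

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetuned-model",
                         num_train_epochs=1,
                         per_device_train_batch_size=8)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"].shuffle(seed=0).select(range(2000)))
trainer.train()                                  # updates only this model, for this goal
```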

 

chubber

failed poetry slam career
Joined
Oct 18, 2013
Messages
4,423
MBTI Type
INTP
Enneagram
4w5
Instinctual Variant
sp/sx
The ontological power of the elite is inevitable. I always come back to Albert Einstein's rumoured reading of Isaac Newton, who was heavily into secret societies and predicted the end of the world in 2060, and I have always interpreted that as the world as we know it coming to an end. A new world will emerge, but I think those humans are not going to be humans like us. My off-the-cuff interpretation is that they will be bio-engineered humanoids that comply with and are complacent about their environment, similar to how the elite would play with mechanical humanoids at first but would come to long for biological ones instead. It's almost like how, in the era of slavery, a change in lifestyle, society, caste, or rank was considered "up" if you moved from the plantation to working in the house.

Seems like the believers are simply preparing and molding themselves to be worthy of such a placement.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
The ontological power of the elite is inevitable. I always come back to Albert Einstein's rumoured reading of Isaac Newton, who was heavily into secret societies and predicted the end of the world in 2060, and I have always interpreted that as the world as we know it coming to an end. A new world will emerge, but I think those humans are not going to be humans like us. My off-the-cuff interpretation is that they will be bio-engineered humanoids that comply with and are complacent about their environment, similar to how the elite would play with mechanical humanoids at first but would come to long for biological ones instead. It's almost like how, in the era of slavery, a change in lifestyle, society, caste, or rank was considered "up" if you moved from the plantation to working in the house.

Seems like the believers are simply preparing and molding themselves to be worthy of such a placement.
We can still fight that power, however.

 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
I hadn't checked on it for a while.

The definition of a foundation model in CA SB 53 now seems much more reasonable, and less presumptive that AGI is the only thing that will happen in AI.



(e) Foundation model means an artificial intelligence model that is all of the following:

(1) Trained on a broad dataset.
(2) Designed for generality of output.
(3) Adaptable to a wide range of distinctive tasks.

Hopefully, this opens the path so that people coming up with new ways to simulate biology or physics, or to do math, narrowly focused on important but difficult problems, can do so without worrying about lawyers and legal fees before proceeding.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
This gets to the core of the jaggedness of "Artificial Jagged Intelligence".

 
Last edited:

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
A nuanced take on artificial intelligence that's fairly convincing:
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
6,863
This gets to the core of the jaggedness of "Artificial Jagged Intelligence".

People have been fawning over the amazing results of Grok 4 (on benchmarks).

But this is precisely the problem with the current paradigm. You can't test your way to high-quality software; engineers know this inherently. The prior assumption that makes thorough testing lead to higher-quality, more reliable results is the underlying understanding of the engineer doing the testing.
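
A toy illustration of that point (mine, not from any benchmark discussion): a function can pass every test in its suite while still being wrong almost everywhere, because the tests only encode what their author thought to check.

```python
# This absolute-value function passes all of its tests, yet it is incorrect
# for every negative input except the single one the tests happen to cover.
def abs_value(x: int) -> int:
    if x == -3:          # special-cased to satisfy the one known negative test
        return 3
    return x             # wrong for all other negative numbers

def test_abs_value():
    assert abs_value(5) == 5
    assert abs_value(0) == 0
    assert abs_value(-3) == 3    # the only negative case the suite checks

test_abs_value()
print("all tests passed")        # green build, broken function
```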


Just as a predictor built with no hypothesis ought to be treated as post-hoc hypothesizing, a pure predictor that is supposed to yield world models suffers the same fate.


Attention is not all you need. You also need curiosity, with proper hypothesis creation for extrapolation, not just interpolation.
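
A small numerical illustration of the interpolation/extrapolation gap (my own example): a purely curve-fit predictor tracks its training range well and falls apart outside it.

```python
# Fit a polynomial to samples of sin(x) on [0, 2*pi], then evaluate it both
# inside (interpolation) and outside (extrapolation) the sampled range.
import numpy as np

x_train = np.linspace(0.0, 2.0 * np.pi, 30)
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=7)   # a pure pattern-fitting "predictor"

for x in [1.0, 3.0, 5.0, 8.0, 10.0]:           # first three inside, last two outside
    pred, true = np.polyval(coeffs, x), np.sin(x)
    print(f"x={x:5.1f}  predicted={pred:10.3f}  true={true:7.3f}")

# Inside the training range the predictions stay close to sin(x); outside it
# they drift far off, because the fit carries no hypothesis about unseen regions.
```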
 
Last edited: