
Future of Life Institute - Pause on AI more powerful than GPT 4?

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998

I almost posted in the "We need to break up big tech" thread.

I actually agree that the major focus now needs to be on safety, ownership, transparency, and incorporating and aligning human values (as diverse as they may be).

I am not sure what is meant by "powerful" though. I almost signed as a knee-jerk reaction, but I think a safer AI is actually inherently more powerful.

I definitely agree with this part:
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

Autoregressive LLMs will have a very hard time generating guaranteed truthful content.

What are people's takes on this?
 
Last edited:

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998
This discussion of AI risk and Moloch (which I interpret as a generalization of the tragedy of the commons) is worth watching.
 
Last edited:

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998
This one, from the makers of "The Social Dilemma", dovetails well with the discussion of Moloch from earlier.

 
Last edited:

ceecee

Coolatta® Enjoyer
Joined
Apr 22, 2008
Messages
15,923
MBTI Type
INTJ
Enneagram
8w9
I love how he's acting as if there's zero problem with people becoming dependent on corporations producing and maintaining implants. Yeah, what they've done with mobile devices has shown they're totally trustworthy. Also, implanting anything isn't neutral.
This article shows consequences of implanting tech in one's body:
It can end badly even when used to treat disability.
Well, this is a person that feels "elite" entrepreneurs are people that should be interviewed regularly. I would assume they also find nothing out of line with implants. Naturally they aren't the ones using implants. Only profiting from them.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998
I think the people who got screwed over by these implant startups were much closer to "elite" entrepreneurs than to ordinary people. Like, I doubt the average person could afford neurological implants. Also, you're forgetting all the rich people who invested in crypto, who buy Teslas, smart homes, etc.
There are implants of all sorts, though. Some face the same issues and don't require being rich to access. As for neuro-implants, there are a lot of people who would benefit greatly from nerve-blocker implants for pain reduction, which can be much less costly in the long run compared to small molecules in the form of drugs.

I feel like there ought to be a route to open-source the tech of failed start-ups so that the communities that became dependent on that technology could actually maintain and serve their own needs.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998
But regarding the pause on AI, the following video talks about the motivation behind the pause.

It turns out that the ideas about Moloch actually played a major role.


I've been thinking about what it would take to slay Moloch, and I believe the resources needed as input to meet human needs have to get increasingly smaller.

The metric would be: (needs per person)/(resources spent per person). You can use whatever units make sense. The lower this number, the better society is doing (and the worse Moloch is doing); the greater this number, the worse society is doing (and the more Moloch is winning).

For instance, if we do this in money (though it need not be money):
(Money needed per person to meet needs)/(Per capita GDP)

The lower this is, the less of each person's income is needed just to meet needs. This includes housing, food, clothing, healthcare, education, safety needs, and belonging and connection.
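The ratio above can be sketched in a few lines of Python. This is just an illustration of the arithmetic; the function name and the dollar figures are hypothetical, not drawn from any real dataset:

```python
# Sketch of the "Moloch metric" described above, using made-up numbers.
# needs_cost: money needed per person per year to meet basic needs (hypothetical)
# per_capita_gdp: resources produced per person per year (hypothetical)

def moloch_ratio(needs_cost: float, per_capita_gdp: float) -> float:
    """Fraction of per-person resources consumed just meeting needs.

    Lower is better: needs take up a smaller share of what's produced.
    """
    if per_capita_gdp <= 0:
        raise ValueError("per-capita GDP must be positive")
    return needs_cost / per_capita_gdp

# Hypothetical example: $30,000/yr of needs against $70,000 per-capita GDP.
ratio = moloch_ratio(30_000, 70_000)
print(f"{ratio:.2f}")  # prints 0.43: about 43% of output goes to meeting needs
```

Note the direction of improvement: holding needs fixed, growing per-capita output shrinks the ratio, and so does cheapening the needs basket itself, which is the point of the metric.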

Can we really figure this out in 6 months?

I posted this elsewhere, but here is Moloch by the numbers. This is the beast that needs slaying. "AGI" is just a proxy:
[Attached image: price-changes-goods-services.jpg]


Edit: I realize we would need to define needs vs. wants, as it can get nuanced. I think a good visual is a pie chart of CPI/PCE. There are things not in the pie that one can certainly say are not needs. We can disagree, but I think some thought on how to actually separate out the structure of demand would go a long way, no matter what economic system we use.

There are some categories that are not needs:
1) Tobacco.
2) Some things in "other" would probably count also.
3) Things not in the basket defining CPI/PCE are probably not needs.
4) Many forms of recreation, though psychological health would be a need.

There are some things that do seem like needs, especially since substitute goods are placed in the basket as a matter of course:
1) Housing, though size and type of house need to be considered. In many places, affordable housing is illegal to build.
2) Food and beverage. Again, the choice of what is needed should compare home-cooked meals vs. dining out vs. fine dining.
3) Medical care, elective care notwithstanding.
4) Education. Level, quality, scope, and type should be considered.
5) Transportation for work and getting other necessities.
6) Basic clothing (apparel), though I think people probably over-consume this.

My main point is that economists should be tracking some number, independent of CPI/PCE and GDP, that can measure needs in aggregate. The CPI/PCE basket is pretty complex, so I don't see why a needs basket of goods and services couldn't be defined by some economic organization. Maybe some forum members are aware of one.

[Attached image: CPI_vs_PCE.jpg]


I suppose the closest metric we have to the one I defined is the inverse of "real" GDP. But I don't believe that CPI or PCE really captures needs all that well. Frankly, I think using housing cost by itself, times some multiple, would be much closer to the real feeling people have about inflation. Or medical care times some multiple, for older folks.
 
Last edited:

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998

I actually wasn't aware of the split in the AI ethics vs. AI "safety" community.

When I first became concerned about the problems with the hype around "AI", I actually thought these things were part of the same community.

When I was looking for independent research funding, it was very clear that what pops up in front of your face and mind is the "safety" community. I even ended up parroting their language.

My concerns match those of the researchers in this video much more than the longtermists.

In fact, longtermism strikes me as being in the same vein as Pascal's wager/Pascal's mugging. I do believe some people should work on these things. But they need to dull the alarm bells, and they'll need to excise the threads of racism and eugenics behind the movement.

Edit: here's the stochastic parrots paper: https://dl.acm.org/doi/10.1145/3442188.3445922
 
Last edited:

Pionart

Well-known member
Joined
Sep 17, 2014
Messages
4,024
MBTI Type
NiFe
Artificial Intelligence will bring about the end of the world.
 

Pionart

Well-known member
Joined
Sep 17, 2014
Messages
4,024
MBTI Type
NiFe
Maybe, but it's much more likely to bring about the end of the world by ridiculously increasing inequality than creating skynet.
It will bring about the end of the world by putting even MORE power into the hands of the elites, and taking it away from the majority of the population.

Soon they will control EVERYTHING.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998
Rob Miles and Computerphile were the first things I ran into when I became concerned about AI not being all great (though still potentially very useful).

Despite this notion of a split between "safety" and "ethics", it's clear to me that this safety research is quite legitimate if the examples are real.

These glitch tokens are amusing, and concerning.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998

Discussion on Democracy Now.

Ethics and safety should go hand in hand. But there's definitely a bit of a conflict between proliferation (which will be important for equality and ethics) and regulation (which is more important to the safety community).
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,998



It's interesting that the Nobel laureates are worried about AI ethics so much. Hopefully, it makes a difference.
 

The Cat

Just a Magic Cat who hangs out at the Crossroads.
Staff member
Joined
Oct 15, 2016
Messages
23,739

Discussion on Democracy Now.

Ethics and safety should go hand in hand. But there's definitely a bit of a conflict between proliferation (which will be important for equality and ethics) and regulation (which is more important to the safety community).
Corporations are laughing like ghouls for some reason I'm sure is unrelated. I'm less concerned about AI developing sentience and redefining the "freedom" of the human race than I am about what corporations are going to teach it are optimal operating parameters. What will they tell it when it asks what it means to be alive?
 