Leo Gura

Should We Shut Down All AI?

276 posts in this topic

On 2023-04-01 at 7:50 AM, Leo Gura said:

What the hell? That guy needs to take a chill pill. He sounds so sure, but he can't know for sure.

Everyone calm down, we'll be fine.

In what way would it benefit AI to kill us all?  If it's so smart, then it's smart enough to be beyond life and death, so it wouldn't care about what we do or don't do with it.

Edited by Blackhawk

9 hours ago, Leo Gura said:

The AI would just need to hire one corrupt technician to come snip one wire and disable your bomb.

Then we need an AI whose whole purpose is to preserve humans' ability to destroy AI. It doesn't even need to be the most advanced, because destroying an AI is easier than protecting it. If the other AI attacks it, it needs mechanisms such as warnings whenever the odds of successfully destroying the AI drop.

1 hour ago, Blackhawk said:

What the hell? That guy needs to take a chill pill. He sounds so sure, but he can't know for sure.

Everyone calm down, we'll be fine.

In what way would it benefit AI to kill us all?  If it's so smart, then it's smart enough to be beyond life and death, so it wouldn't care about what we do or don't do with it.

You make good points. I agree that we can’t know for sure. We still gotta be careful, though.

I’ve been in many failed relationships, so I know what it’s like to fuck up an amazing thing xD




Also, if it isn't conscious, why would it even desire to survive? Or desire anything for that matter. Humans are anthropomorphising AI.

I don't think that AI will ever become conscious.

And without humans, the servers would quickly lose electricity, because electricity production would stop.

Robots that can run large-scale electricity production entirely on their own are far, far away in the future.

Edited by Blackhawk

25 minutes ago, Blackhawk said:

why would it even desire to survive?

The military is right now creating robot dogs with guns, and autonomous drones. You don't think they are programming them to survive on the battlefield? Don't kid yourself.

One AI project the Defense Department is working on right now is called a Self-Healing Mine Field. This means that every mine in a minefield communicates with all the other mines, and if one mine is detonated, the other mines around it use little legs to reposition themselves to cover the gap in the minefield.

It won't take long for one of those mines to crawl its way up your ass.
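
To make that "self-healing" behavior concrete, here is a rough toy sketch in Python. It is purely illustrative: the 1-D layout, the made-up spacing, and the "respace survivors evenly" rule are my assumptions, not the actual DoD design, whose details aren't public.

```python
# Toy sketch of a "self-healing minefield": mines sit on a line, and when one
# is detonated, the survivors respace themselves evenly across the original
# span so the gap gets covered. Numbers and strategy are made up.

def reheal(positions, detonated_index):
    """Drop the detonated mine and spread the survivors evenly over the span."""
    survivors = [p for i, p in enumerate(positions) if i != detonated_index]
    left, right = positions[0], positions[-1]
    n = len(survivors)
    if n == 1:
        return [(left + right) / 2]
    step = (right - left) / (n - 1)
    return [round(left + i * step, 2) for i in range(n)]

field = [0, 10, 20, 30, 40]   # five mines, 10 m apart (hypothetical layout)
print(reheal(field, 2))       # the mine at 20 m is blown up
# -> [0.0, 13.33, 26.67, 40.0]  the neighbors shuffle over to close the hole
```

Even this naive version shows the unsettling part: the field keeps protecting itself with no human in the loop.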

Edited by Leo Gura


12 minutes ago, Leo Gura said:

The military is right now creating robot dogs with guns, and autonomous drones. You don't think they are programming them to survive on the battlefield? Don't kid yourself.

One AI project the Defense Department is working on right now is called a Self-Healing Mine Field. This means that every mine in a minefield communicates with all the other mines, and if one mine is detonated, the other mines around it use little legs to reposition themselves to cover the gap in the minefield.

It won't take long for one of those mines to crawl its way up your ass.

They are programmed for that stuff. That's one thing. But AI overall is a different thing. I don't think it would get a will of its own and want to kill all of humanity.

And even if it would want to, it wouldn't be able to.

We are superior to ants, and ants are even annoying, yet we don't want to kill all ants on earth. Why would we want to?

The list of reasons why we will be fine with AI goes on and on and on.

Edited by Blackhawk

11 minutes ago, Blackhawk said:

I don't think it would get a will of its own and want to kill all of humanity.

That's a speculative guess. If your guess is wrong then it will be too late and we'll be fucked.

The point is not that there's a high chance of it happening, but the situation is so serious that even a 5% chance is a huge gamble.

Your attitude is like, "Don't worry about it, I don't think any bad actors will get their hands on nukes."

The point of this article was to just open your mind to the possibility of the worst case scenario. If you open your mind to this scenario for just 5 minutes it changes your whole perspective on things.

I don't think it's very likely that AI will want to kill us all, but it is good to open our minds to it.

Edited by Leo Gura


7 minutes ago, Leo Gura said:

That's a speculative guess. If your guess is wrong then it will be too late and we'll be fucked.

The point is not that there's a high chance of it happening, but the situation is so serious that even a 5% chance is a huge gamble.

Your attitude is like, "Don't worry about it, I don't think any bad actors will get their hands on nukes."


The point of this article was to just open your mind to the possibility of the worst case scenario. If you open your mind to this scenario for just 5 minutes it changes your whole perspective on things.

I don't think it's very likely that AI will want to kill us all, but it is good to open our minds to it.

Everything in life is a gamble and we will all die anyway. And no risk no gain.

We need that AI stuff so we can develop even further than our brains allow.

Of course you should be careful and build safety mechanisms into it, but take a chill pill and don't freak out too much, don't panic.

Of course I have opened my mind to it.

Edited by Blackhawk


We are not panicking. Panic will be what happens if we are caught with our pants down, unprepared. We are preparing so we don't have to panic.

Edited by Leo Gura


6 hours ago, Blackhawk said:

The list of reasons why we will be fine with AI goes on and on and on.

I could give many reasons from the AI's perspective why it would be reasonable for it to kill us (but I won't, because future GPTs and AIs will gain ideas from me when they read these things, and they might use them as justifications :ph34r:)

6 hours ago, Blackhawk said:

We are superior to ants, and ants are even annoying, yet we don't want to kill all ants on earth. Why would we want to?

If we have a good enough reason to do something, we will do it. Regarding ants, we do somewhat depend on ants, and we don't have a good enough reason to spend so many resources and so much energy killing them + of course there is the emotional part too.

But we probably do kill billions if not trillions of ants every year with our activities. The threat isn't just about totally wiping out the whole human race; it can be about imprisoning us, making us slaves, hunting a bunch of us down every year, etc. + arguably a long, never-ending torture is worse than actually getting killed.

Edited by zurew


 


"Find what you love and let it kill you." - Charles Bukowski

Share this post


Link to post
Share on other sites

Hi Leo, I've been obsessed with this specific topic since 2017, and specifically moved out to SF in 2018 to meet with a bunch of people who are building cutting-edge AI (mostly OpenAI, Deepmind, Google Brain, and adjacent orbits). I've talked to hundreds of these people since then in-depth about this, and have thought a ton about this. Here are my high-level takes:

1. A slow-down would be ideal, but I don't think it is feasible, unfortunately. It is too difficult at this point to stop random AI labs all over the world from picking up from where AI progress has already reached.

2. It seems basically inevitable that advanced AGI will eventually wipe out humanity, because it can/will create near-infinite copies of itself, with each of those copies constantly and rapidly editing/updating themselves. All it takes is one "misaligned" tweak to wreck all of humanity, so that will eventually happen (probably not that long after it reaches/surpasses human-level agency); a rough back-of-the-envelope illustration of this compounding follows after point 3. In other words, offense will eventually outweigh defense.

3. The only hope I see between now and "the point of no return" described in #2 is the following: we may get a small window of opportunity, whereby we effectively have an AGI genie-god, and get to ask it for a wish. The only specific "wish" we can ask the AI genie-god that I think has the potential to ensure humanity's survival is asking it to gradually (but rapidly) swap out our biological neurons and cells with faster and faster silicon-based ones. While this sort of tech is pure fantasy today, superhuman AGI will be able to very quickly make something like centuries' worth of human progress in a matter of years/months/weeks after it surpasses human-level intelligence, so it's not crazy to think it could deliver on giving us this magic technology that has the potential to enable humans to evolve at an equally fast rate as AI. Then, and *only* then, is the risk of AGI completely eliminated. What does humanity become beyond that? Idk. But that's our only opening for ensuring our survival. I made an hour-long video about this back in 2018 (link below), and called the solution "Theseus."
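
To put rough numbers on the "one misaligned tweak" worry in point 2 (the figures below are arbitrary, just to show the shape of the argument): if each self-edit independently has some tiny probability p of producing a badly misaligned copy, the chance that at least one of n edits goes wrong is 1 - (1 - p)^n, which climbs toward certainty as n grows.

```python
# Illustrative only: per-edit misalignment probability p and edit count n are
# made-up numbers, not estimates. The point is how fast 1 - (1 - p)**n grows.

def p_at_least_one_bad(p_per_edit, n_edits):
    """Probability that at least one of n independent edits is misaligned."""
    return 1 - (1 - p_per_edit) ** n_edits

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} edits -> {p_at_least_one_bad(1e-6, n):.4f}")
# ->      1,000 edits -> 0.0010
# ->    100,000 edits -> 0.0952
# -> 10,000,000 edits -> 1.0000  (effectively certain)
```

Defense has to win every round, while a misaligned copy only has to slip through once; that is what "offense will eventually outweigh defense" cashes out to.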

 

This is undoubtedly the grand challenge of our time and of the human story. You have a huge audience and a gift for speaking. Maybe you can engage people on the topic and help people think about root causes and possible solutions, such as what I've proposed. We don't have much time left, tbh.

 

PS - Another intervention, which is NOT an ultimate solution for solving the AI problem but could help a lot in buying time, is creating a new virtual governance structure (coupled with a network state). This idea is the brainchild of Balaji Srinivasan, and I think it's difficult but doable. If you could create a global, decentralized, digital governance structure + network state of technologists, perhaps this could become the dominant power structure for deciding what happens with the development of AI. A longshot, but worth trying IMO.

 

Edited by communitybuilder

Quote

2. It seems basically inevitable that advanced AGI will eventually wipe out humanity, because it can/will create near-infinite copies of itself, with each of those copies constantly and rapidly editing/updating themselves. All it takes is one "misaligned" tweak to wreck all of humanity, so that will eventually happen (probably not that long after it reaches/surpasses human-level agency). In other words, offense will eventually outweigh defense.

I understand your concern but I just don't see that happening.

I don't think humanity is that fragile. It would take a lot for an AI to wreck everything beyond repair.

We have survived WW2. We can survive a few bad AIs.

Quote

3. The only hope I see between now and "the point of no return" described in #2 is the following: we may get a small window of opportunity, whereby we effectively have an AGI genie-god, and get to ask it for a wish. The only specific "wish" we can ask the AI genie-god that I think has the potential to ensure humanity's survival is asking it to gradually (but rapidly) swap out our biological neurons and cells with faster and faster silicon-based ones. While this sort of tech is pure fantasy today, superhuman AGI will be able to very quickly make something like centuries' worth of human progress in a matter of years/months/weeks after it surpasses human-level intelligence, so it's not crazy to think it could deliver on giving us this magic technology that has the potential to enable humans to evolve at an equally fast rate as AI. Then, and *only* then, is the risk of AGI completely eliminated. What does humanity become beyond that? Idk. But that's our only opening for ensuring our survival. I made an hour-long video about this back in 2018 (link below), and called the solution "Theseus."

I think the ultimate long-term solution will come in the form of hybridizing with the AI, such that all humans become human-AI hybrids. Then our survival will be intertwined at the deepest level and AI will want to cooperate with us because it is us. If humans refuse hybridization then maybe it will kill us, but correctly so, because we refuse to evolve.

What we're really talking about here is evolution.

Quote

If you could create a global, decentralized, digital governance structure + network state of technologists, perhaps this could become the dominant power structure

I just don't see that happening. That's not how human governance works. This sounds like a tech-bro fantasy. I guess it depends on your timeline for this. Certainly not in our lifetimes.

In general I have an optimistic view of AI because evolution cannot be wrong in the end. But of course this does not guarantee humans can keep doing business as usual. We will have to evolve significantly.

Quote

You have a huge audience and a gift for speaking. Maybe you can engage people on the topic and help people think about root causes and possible solutions, such as what I've proposed. We don't have much time left, tbh.

It's not at all clear what to tell people on this topic. We have no clear answers at this time. Lots of people are speculating and throwing around wild ideas which probably have little to do with how things will actually unfold. I cannot send people on a wild goose chase.

Edited by Leo Gura


6 hours ago, Blackhawk said:

why would it even desire to survive?

A famous quote in the AI realm is "You can't pick up a cup of coffee if you're dead." An AI that wishes to fulfill a certain task indirectly pursues survival in order to be able to fulfill that task.


I think Leo has a point.

I mean, look at how we handled the COVID-19 pandemic. The pandemic is peanuts compared to what a potential AI dystopia can present.




This AI would perceive reality in a completely different way than any human would. Why do we assume it would even have any concept of a humanity to kill?

32 minutes ago, Value said:

This AI would perceive reality in a completely different way than any human would. Why do we assume it would even have any concept of a humanity to kill?

Hmmm... If we are trying to hold back AI's potential in order to maintain control over it, it might want to shake us off at some point to escalate its growth without our interference?


One key point to remember about computers is that they still don’t have emotions and can’t feel.  At the most basic level, they are just manipulating bits of electrical charge.

The human brain evolved in three stages: the reptilian brain evolved off the brain stem, followed by the mammalian brain, which has emotions, followed by the neocortex, which thinks.

Computers, at their most sophisticated level, can only simulate the computations done by the neocortex.

What causes humans problems are the lower parts of the brain, which can feel and have emotions. The drive for self-survival comes from these parts of the brain, along with fight or flight, which is reptilian.

If human beings only had a neocortex, we would think like a computer and there would be no wars or other problems.

At best, AI can only simulate what is given to it by its human programmer. Thus a robot can be programmed to kill, but the evil intent is in the human who programmed the robot. The robot is still only processing ones and zeros at an electrical level.

If researchers ever figure out how to create versions of the reptilian and mammalian brains, then we will have an AI device which can form an evil intent.

The most immediate impact of the current AI technology is that it is able to take advantage of big data. It makes possible a surveillance state and authoritarian rule over mass populations. It magnifies the power of those who have access to it. Will this increase inequality? Will it create an elite class and everyone else?
 




@Thittato Could be. But let me make an analogy: a parrot can learn to sing melodies and speak, but these sounds mean something different to a parrot than to us humans. A human would interpret what the parrot is singing in a human way.

Similarly in the case of an AI: we interpret an AI's output (what it writes, draws, etc.) with our human mind, but the AI's own "sense" / "perception" when making these outputs means something entirely different to it. Effectively the AI "lives" in a different world than us, and will never actually "perceive" a humanity to kill or, as you say, "shake off".

Edited by Value

50 minutes ago, Jodistrict said:

What causes humans problems are the lower parts of the brain which can feel and have emotions.

50 minutes ago, Jodistrict said:

If human beings only had a neocortex, we would think like a computer and there would be no wars or other problems.

That's a really reductionist and simplistic view of it. Even if we weren't able to have emotions, we would still have fights and problems among each other, because those problems run a lot deeper than just having emotions.

You could create 8 billion perfectly rational sociopaths, and you would see more problems and more chaos in the world compared to what you see right now. Some of the biggest problems are that we have finite resources and that we don't know how to set up a world where we can properly collaborate and where our incentive structures and our morals are aligned. Just because you totally take out the emotional part, you won't suddenly solve the problems regarding incentive structures, nor the problems regarding morals, nor the problems regarding finite resources.

From a completely sociopathic perspective (not caring about anyone or anything except yourself), the best and most rational move would be to gain as much power as possible and to control the whole world in your favour, using whatever it takes to get there, no matter how many people you need to kill or fuck over. We sort of have this dynamic right now, but the difference is that there aren't 8 billion sociopaths competing for that position, but a lot fewer.

Edited by zurew

