Leo Gura

Should We Shut Down All AI?


22 minutes ago, Socrates said:

Truth be told, you can't shut down all AI even if you wanted to, since you can't control what other countries do behind the scenes.

As far as I'm concerned, China, Russia, and North Korea are developing long-term AI strategies to conquer the world and infiltrate their way into global sovereignty.

Not only should we not shut it down, but we should also invest in government programs and companies that will dive head-first into AI so we have as much understanding as possible.

Unfortunately this seems to be our reality.

As much as we like to talk about Game B, we live in a Game A world.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


I think this is the discovery of fire 2.0 

Edit: On steroids

Edited by Florian

5 hours ago, Leo Gura said:

As much as we like to talk about Game B, we live in a Game A world.

There is really only one way to do this (to actually make sure everyone will participate, including all countries), but that thing hasn't really been discovered yet. It could be called a reversed multipolar trap or a positive multipolar trap: you invent a new method/tool that is the most effective in the dynamics of Game A, but at the same time moves us towards Game B or has Game B characteristics built into it. Because it is the most effective, people will automatically start to use it if they want to stay competitive.

So for instance, in the context of politics (this might or might not be true; I use it only as an analogy to demonstrate the concept): if a transparent government model is more effective than the alternatives, and different countries start to see that, they will eventually need to implement that model, because governments that implement it will start to outcompete the ones that don't. Because of that pressure, eventually everyone will use that model, and - because of the transparency - these new models could start to change the global political landscape in a way that moves us towards Game B.

Now, is it true that a more transparent government model is more effective than other ones? We can argue about that; that's not the point I'm trying to make here. The point is to 1) find/create a method or tool that has inherent qualities similar to Game B, or at the very least has the potential to change certain Game A systems internally to move us towards Game B, and 2) make it so effective in the current Game A world that people will be incentivised to use/implement it, because they will see that by using it they get short-term gains.

In the context of this discussion, the challenge would be for smart AI researchers to find/create a new research method that is optimized for safety but at the same time is one of the most, if not the most, effective and cost-efficient methods to progress AI (I have no idea if this is possible or not; I'm just saying what I think is actually required to make everyone participate, including abroad).
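To make the dynamic concrete, here is a rough toy simulation (just my own sketch - the payoff numbers, the "transparent" vs "opaque" labels, and the imitation rule are all made up for illustration, not taken from the Game B material). It shows how a strategy that simply pays better under Game A competition spreads to everyone without anyone being moralised into adopting it:

```python
# Toy sketch of a "positive multipolar trap" (illustrative assumptions only):
# a strategy that is BOTH more effective under Game A competition AND carries
# a Game B property spreads on its own, because laggards imitate whoever is
# outcompeting them.

import random

N_COUNTRIES = 50
ROUNDS = 30
PAYOFF = {"opaque": 1.0, "transparent": 1.3}  # hypothetical effectiveness scores

# Start with only a few early adopters of the transparent model.
strategies = ["transparent"] * 5 + ["opaque"] * (N_COUNTRIES - 5)

for round_ in range(ROUNDS):
    # Each country compares itself to a random rival and copies the rival's
    # strategy if the rival is doing better -- pure competitive pressure.
    for i in range(N_COUNTRIES):
        rival = random.randrange(N_COUNTRIES)
        if PAYOFF[strategies[rival]] > PAYOFF[strategies[i]]:
            strategies[i] = strategies[rival]
    share = strategies.count("transparent") / N_COUNTRIES
    print(f"round {round_:2d}: transparent share = {share:.0%}")
    if share == 1.0:
        break  # everyone adopted; the Game B property came along for free
```

With these made-up numbers, the transparent share climbs to 100% within a couple of dozen rounds - the point being that the Game B property piggybacks on ordinary competitive pressure rather than on goodwill.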

Edited by zurew

1 hour ago, zurew said:

So for instance, in the context of politics (this might or might not be true; I use it only as an analogy to demonstrate the concept): if a transparent government model is more effective than the alternatives, and different countries start to see that, they will eventually need to implement that model, because governments that implement it will start to outcompete the ones that don't. Because of that pressure, eventually everyone will use that model, and - because of the transparency - these new models could start to change the global political landscape in a way that moves us towards Game B.

But this is exactly what doesn't work. Look at how China and Russia refuse the Western model and would rather go to war.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

1 hour ago, Leo Gura said:

But this is exactly what doesn't work. Look at how China and Russia refuse the Western model and would rather go to war.

If Western models were much more effective, they would eventually be forced to implement them, because they wouldn't want to lose their political power.

I don't think they would necessarily go to war, because that's a big potential loss for them, especially if the Western models make you much more powerful economically - and therefore more effective at war as well.

Or would you say that this kind of tactic is only effective with stage Orange countries, and that countries mostly below stage Orange will always prioritize their ideology over everything else? I like to think of certain ideologies as just tools to get to a certain society or to get certain things done - however, I too have certain things that I would defend and would hardly let go of regardless of effectiveness (because I care about other things as well, not just effectiveness) - for example, democracy.

The first example I gave above (the transparent government structure) may cut too deep too fast (because it might threaten some core part of a certain ideology), but I think the reversed multipolar trap tactic is the way to go in general. Maybe the more surface-level ideas need to be changed first using this tactic, and from there we can go deeper and deeper, one step at a time.

What we are essentially talking about here is triggering or speeding up internal change in these countries (as much as that is possible without too much pushback and negative effects). Obviously the hard part is how to balance things so as not to fuck things up unintentionally. There are other tools to trigger or speed up internal change in other countries, but this concept is probably one of the biggest ones.

Edited by zurew

15 hours ago, Jodistrict said:

Since there is no feeling brain, the computers would have no fear and, in particular, no fear of death. They also would have no desire for dominance, which is reptilian. Thus, they could just flip a coin.

If they had no fear of death, they would die really fast, because they wouldn't have a strong incentive to maintain their survival. You can't create or maintain a society whose members don't give any fucks about their survival.

You can't really escape this problem. If you are talking about AIs that don't care about death, then they would have even less incentive to collaborate. If you are talking about AIs that do care about survival, then we are back to square one, where they will be forced to make certain decisions that go against each other's interests - which will make collaboration hard, and deception and manipulation will kick in - and we are back to the same problems we have in our own society (regardless of whether you take emotions out of the equation or not).

Edited by zurew

1 hour ago, zurew said:

I don't think they would necessarily go to war, because that's a big potential loss for them, especially if the Western models make you much more powerful economically - and therefore more effective at war as well.

How much more proof do you need? We are at war right now. The Ukraine war is Putin rejecting the Western model. He wants his own model, which suits him and his people.

It is questionable whether the Western neoliberal globalist regime is truly as universal as the West loves to tell itself. And even if it is, it will take a lot of conflict before everyone accepts it, because no one wants to admit being wrong and give up power.

The Russian model has little chance of outcompeting the US, but the Chinese model seems very competitive.

In the end I think the Western American model is the best there is, but this is not immediately obvious and many around the world will not accept it willingly for a long time.

Of course we will move to Game B in the long run, but not in the short and medium run. My biggest issue with all these intellectual types talking about Game B is that they are living in fantasy land. The world does not run on Game B and it will not in our lifetimes. The Ukraine war and the China situation prove this very starkly.

The US Defense Department right now has a report saying we will be at war with China by 2025. These hippies live in fantasy land. They need to be airdropped into a war zone to ground themselves. It's too easy to philosophize about Game B when you are wealthy, comfortable, and have all your needs met. Most of the world is looking to slit someone's throat for bread and oil.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

13 minutes ago, Leo Gura said:

How much more proof do you need?

I don't think the Western model in its current form is actually effective (people are more divided than ever before), therefore I don't think what you brought up is proof that the idea I brought up wouldn't work.

13 minutes ago, Leo Gura said:

Of course we will move to Game B in the long run, but not in the short and medium run.

I didn't say that this would immediately get us to Game B; I said this framework is one necessary tool to start moving towards Game B.

First we need to hash out the framework, and only after that can we start to think and argue about the specifics.

Edited by zurew


@zurew Timeframe becomes very important here.

In the long run everything will probably converge to a hive-mind techno-Communism. But not for 1,000 years.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

21 minutes ago, Leo Gura said:

But not for 1,000 years

I don't think we have that much time to fuck around with a Game A system. If you think about technological development alone, it will shape the dynamics of the system so that we can't wait that long. Imagine everyone having access to tools more powerful than nukes - if the world is not organised by that time, we will die or seriously fuck things up.

What we need is not just creating technology but creating social tech and a sort of spiritual tech as well, so we can hopefully speed up social and spiritual development and not have to wait 1,000 years. We mostly have the problems we have right now because we only have exponential conventional tech, and not exponential spiritual and social tech.

Btw, obviously the reversed multipolar trap wasn't invented by me; credit goes to Daniel and to the Game B team. Here is a video snippet (I timestamped it) where he talks about it and about transparency.

 

Edited by zurew

20 minutes ago, zurew said:

I don't think we have that much time to fuck around with a Game A system.

Except you can't make a baby any faster by telling your wife, "We don't have that much time to fuck around."

It will take as long as it takes. Humans are selfish and stubborn, and you can't moralize them out of it.

As much as I love Schmachtenberger, he lives in a fantasy land of first-world privilege. As do all of his friends. Putin will come and rape all their wives. That's how Game B ends if you are not careful.

Game B ONLY works when every player in the game is developed to Tier 2. And almost no one is. So just calculate how long it will be till a majority of humans are Tier 2. There is your timeline.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

11 minutes ago, Leo Gura said:

Except you can't make a baby any faster by telling your wife, "We don't have that much time to fuck around."

We know a lot more about our physical limits than about our spiritual, social, and collaborative limitations. Even our physical abilities could be pushed to a great extent if the necessary knowledge, care, and tech are there.

Sociology is still fucking new, and almost no one on this planet practices or knows about serious spirituality, so we have no idea where those limits are, or how fast a human can actually develop spiritually, socially, and psychologically.

11 minutes ago, Leo Gura said:

It will take as long as it takes. Humans are selfish and stubborn, and you can't moralize them out of it.

It will take much longer if we don't even try or think about it. A lot of assumptions are built into the thinking that "humans have to wait x years before they can actually develop to certain levels"; questioning and pushing those assumptions will be a main and necessary part of our survival. Again, I don't see how you can maintain a Game A system when you have tech that any fool can access and then destroy everyone and everything with.

You would have a point if implementing the frameworks the Game B guys are talking about weren't necessary for our survival.

Edited by zurew

5 minutes ago, zurew said:

You would have a point if implementing the frameworks the Game B guys are talking about weren't necessary for our survival.

I don't know that playing Game B will help you survive when you are encircled by sharks.

I think the biggest mistake these Game B theorists make is underestimating the resilience of human civilization. Human civilization is the most anti-fragile thing on this planet. That's my guess. But I could be wrong.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


@Leo Gura I think we are at a time, or getting really close to a time, in our evolution where the next step will be either a giant fucking leap in our evolution or death, and there is no room for baby steps anymore.

I agree that one of the most unrealistic things to say is that we will achieve something close to a Game B world in a relatively short time, but on the other hand, it seems just as unrealistic, if not more so, to say that we can maintain our society under a Game A structure for much longer.

9 minutes ago, zurew said:

the next step will be either a giant fucking leap in our evolution or death

I am skeptical of this binary paradigm.

I think it is all too popular to overdramatize the death of mankind.

There is a realist middle ground where lots of people suffer but mankind lives on as always, slowly evolving as we tend to do.

One thing I know for sure is that we are not gonna get rid of our devilry any time soon.

Never bet against the devil ;)

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

6 minutes ago, Leo Gura said:

There is a realist middle ground where lots of people suffer but mankind lives on as always, slowly evolving as we tend to do.

I just don't see that middle ground when we have exponential tech and the development of that tech can't be slowed down - people having access to godlike tech without having wisdom and knowledge.

Or maybe (this is the only option I see right now) a wise AI could help us maintain our society and create the necessary developmental and social structures for us, where we can develop socially, psychologically, and spiritually at our own pace - creating artificial environments where we can develop ourselves and might even speed up our development.

Edited by zurew

6 minutes ago, zurew said:

I just don't see that middle ground when we have exponential tech and the development of that tech can't be slowed down.

Look, we've had nukes, chemical weapons, and biological weapons for 70 years and we've managed to do pretty well. Humans aren't all that stupid.

Realistically, we're not developing anything more dangerous than nukes yet. At the end of the day, if this AI gets out of hand, we will just bomb the shit out of it. It's not some invincible monster. We can track the location of every major data center where these AIs could live.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

52 minutes ago, Leo Gura said:

I don't know that playing Game B will help you survive when you are encircled by sharks.

I think the biggest mistake these Game B theorists make is underestimating the resilience of human civilization. Human civilization is the most anti-fragile thing on this planet. That's my guess. But I could be wrong.

I'm sorry but I don't agree at all.

We have lived in nuclear paranoia for decades, and we still are.

Humanity's biggest threat is for itself.

The most important problem I see with humankind is that we are TOO intelligent compared to our wisdom.

We need to step back on technology and military power and increase our wisdom - politics, civil rights, diplomacy, etc. Wisdom, in simpler words.

I know you could say that human intelligence is way too low for God or an alien entity, but still, human intelligence is way too high compared to human wisdom, which is at such an absolute low it's almost a shame.

We have such technology, such mind-power, that we could already be healing the poorest parts of Earth. Yet we still play this nonsense game of capitalism, which is another form of FEUDALISM in postmodern times. Fuck that.

On some level I despise humans because they do not understand that wisdom is way more important than mathematical/scientific intelligence.

At least, that's my take on it.


Inquire in the now.

Feeling is the truest knowing.


That wisdom comes with development. You can't have wisdom when you can't feed your children.

You can't just tell people to stop caring about survival and pursue wisdom.

Notice that humans still behave like animals without any technology.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

1 hour ago, Leo Gura said:

I don't know that playing Game B will help you survive when you are encircled by sharks.

That's why (if you reread what I wrote about it) the positive multipolar trap has to be implemented. It's not a simple stage Green "let's moralise the fuck out of everything and then naively hope that everyone will get along with us." It's more like implementing a dynamic that is just as effective, if not more so, in the context of Game A than other Game A tools, but at the same time lays the groundwork for Game B (so Game A participants will be incentivised to implement the tool, and then, by implementing it, the tool itself will slowly but surely change the inherent Game A countries and structures).

1 hour ago, Leo Gura said:

Look, we've had nukes, chemical weapons, and biological weapons for 70 years and we've managed to do pretty well. Humans aren't all that stupid.

Sure, but have we had a scenario where almost every individual can have access to tools that are just as dangerous as nukes, if not more so? The biggest point of Game B is to build a structure where we can collaborate and live somewhat in peace.

In the current Game A structure, some forms of collaboration are impossible, even though that kind of collaboration is exactly what would be needed to solve certain global problems. The world still kind of gets along with some countries having nukes, and they can somewhat manage each other so that they don't have to kill each other (although, looking at the current situation, that's very arguable). Now imagine a scenario where billions of people actually have nukes and they all need to manage and collaborate with each other.

1 hour ago, Leo Gura said:

At the end of the day, if this AI gets out of hand, we will just bomb the shit out of it. It's not some invincible monster.

That would be cool if we assume that 1) we can track exactly when it gets out of hand, 2) the AI won't deceive or trick us, and 3) if the AI isn't conscious and does 100% what we want, no one will try to use it to fuck everything up (bad intentions aren't even necessary for this).

Edited by zurew

