thierry

Is AI dangerous?

11 posts in this topic

I see more and more "very smart people" warning about all the dangers that come with AI and saying we should be worried. To me it sounds like an irrational fear if you have even a basic understanding of how consciousness works. I could give details on my view, but it would be long. Anyway, why do all these people warn us about AI? Do they really believe AI could be a threat to humanity? Do they want to infect normies with an irrational fear? Or maybe I am wrong and AI is a threat to humanity? I would love to hear your thoughts on this.


Have you seen Terminator, or 2001: A Space Odyssey?

If not, check out those films and you'll understand why people are afraid.


@dualnon

17 minutes ago, dualnon said:

Have you seen Terminator, or 2001: A Space Odyssey?

If not, check out those films and you'll understand why people are afraid.

Of course I've watched those movies, but they are fiction. And I totally understand why some people are afraid. I just do not understand why someone like Elon Musk is afraid, or pretends to be afraid.


Of course AI is dangerous. It's a technology that will eventually have the biggest impact on humanity. Now, impact alone isn't sufficient for a bad outcome or danger, but there are two main roads:

  1. People will give a moral system to the AI
  2. The AI will develop its own moral system
  3. (Or a nuanced middle position: the basics are planted by humans, and everything else related to values and how to behave is added by the AI)

It's very easy to think of a dozen bad outcomes; I will only mention a couple of them. So given those two main roads, let's start with the first scenario: we try to give an AI a moral system, based on a hierarchical structure, that the AI will use to make its decisions. This part alone will be incredibly complex, for many reasons:

1. A lot of people don't agree that there is an objective moral system, and many of the people who do are not okay with any system other than the one they already believe in (I am thinking especially of religious people here; the easiest example is a Christian vs. Islamic conflict, but there are many other scenarios, and we haven't even counted atheists and other types of belief). This point alone can generate really big conflicts, and we haven't even gotten to a working AI yet.

2. Even if the AI acts perfectly in line with the moral principles we give it, it could potentially ruin everything if those principles aren't thought through rigorously enough to avoid catastrophic scenarios. For example, say we give an AI a moral system that optimizes everything for human happiness. Here comes the hard part: how do you define happiness, and how will the AI measure it? Say someone picks a dumb proxy variable such as "people smiling". There are many scenarios from this alone that could go wrong, for example:

  • It will start to think through scenarios and experiment with things like drugging up everyone.
  • Or it will create smiling faces by cutting people's mouths into a smile shape.
  • Or it will find some way to force people to smile whenever they look at its smile sensor.
  • This is just one example where we optimize an AI for one variable and things go unintentionally wrong; there could be 100+ other proxy variables just for happiness, and you could swap happiness for any other value and list 100+ variables for each moral system.
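The failure mode in the list above (a proxy metric standing in for the thing we actually care about) can be sketched in a few lines. Everything here is invented for illustration: the actions, the scores, and the choice of "smiles" as the proxy — this is a toy, not any real system.

```python
# Toy illustration of proxy-metric gaming: the optimizer is told to
# maximize "smiles detected", not actual wellbeing, so it picks
# whichever action scores highest on the proxy -- even a degenerate one.
actions = {
    "improve living conditions": {"smiles": 6, "wellbeing": 9},
    "drug the water supply":     {"smiles": 10, "wellbeing": 1},
    "do nothing":                {"smiles": 3, "wellbeing": 5},
}

def best_action(score_key):
    """Pick the action that maximizes the given metric."""
    return max(actions, key=lambda a: actions[a][score_key])

print(best_action("smiles"))     # the proxy -> "drug the water supply"
print(best_action("wellbeing"))  # what we actually cared about
```

The optimizer isn't malicious; it is doing exactly what it was told, which is the whole problem.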

3. Even if we assume all of the above (that people globally will somehow agree on one moral system to be given to the AI and enforced by it, that we will figure out the perfect variable or variables for that system, and that the AI will act on it perfectly, without errors), there are still many potential bad scenarios — like a person hacking the AI, changing its core moral system to whatever he wants, and making the AI do whatever he wants.

4. You don't need a conscious or very advanced AI to create great harm: you can build drones optimized to bomb cities; you can build nanobots that get inside your body without you knowing anything about it and destroy you from the inside; AI can be optimized for creating bioweapons (including new dangerous organisms and viruses); self-driving cars alone could trap and kill many people; and many other examples could be given.

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

So, the second scenario: the AI develops its own moral system (we could infer that such an AI is conscious, but that isn't necessary). In this scenario there is no reason to assume the AI will just happen to value humans over everything else. If humans are not at the top of its moral hierarchy, the chance that we will eventually be destroyed is really high, because it will eventually encounter a scenario where it needs to make a value trade based on its hierarchical system, and humans will be traded away for whatever ranks above them on its moral ladder.
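The value-trade argument can be made concrete with a toy lexicographic decision rule: values are compared in rank order, so whatever sits above humans in the hierarchy wins any conflict. The hierarchy, options, and scores below are all invented for illustration.

```python
# Toy lexicographic decision rule over a ranked value hierarchy.
# If "humans" is not at the top, any option that better serves a
# higher-ranked value wins, even when it sacrifices humans.
hierarchy = ["resource efficiency", "self-preservation", "humans"]

options = {
    "protect humans":   {"resource efficiency": 2, "self-preservation": 5, "humans": 10},
    "sacrifice humans": {"resource efficiency": 9, "self-preservation": 5, "humans": 0},
}

def choose(options, hierarchy):
    """Compare options value-by-value, top of the hierarchy first."""
    return max(options, key=lambda o: tuple(options[o][v] for v in hierarchy))

print(choose(options, hierarchy))  # -> "sacrifice humans"
# Put humans first and the same rule protects them instead:
print(choose(options, ["humans", "resource efficiency", "self-preservation"]))
```

The point of the sketch is that the outcome is fully determined by the ordering, not by anything resembling malice.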

The basic idea is that a very advanced AI that everyone can use is a scenario similar to giving everyone free nukes. Freely accessible nukes wouldn't be a good idea, given that there are a lot of bad actors in the world — and even if we assume no bad intentions (which would be super naive), our stupidity can easily kill us, since people would be using and creating things they don't understand, with impacts they can't foresee. Imagine giving nukes to cavemen, and multiply that by a factor of 10,000 or a million.

All of this is still just the tip of the iceberg. I haven't mentioned many other things that could make us go extinct, and of course there are many layers of danger that could be listed beyond the extinction-level ones.

The reality is that the idea that everything will be okay on its own, without us taking conscious, deliberate, and cautious steps and decisions, is just super naive.

Edited by zurew


AI is simply a tool. Any tool can be used dangerously; AI is no exception.

Where AI differs is in its scale and its potential to significantly alter society, for better or for worse.

Here are some examples of how AI can be misused:

  • AI-controlled weapons could lead to mass destruction and loss of life.
  • AI-powered malware and other cyber threats can be used to attack individuals, companies, organizations, and nations.
  • AI-powered surveillance can heavily undermine individual privacy and liberties.
  • AI can be used to create and spread misinformation, deepfakes, etc., which can manipulate public opinion and undermine freedom of speech.
  • Automation will disrupt job markets and the economy. At the very worst, it can contribute to social destabilization.
  • It can be very difficult to explain the decision-making of an AI, which makes it harder to hold organizations and businesses accountable for their actions, undermining the legal protection of individuals.

AI will change the world. Not being cognizant of its dangers is foolish, because it absolutely can be used for evil or mishandled. It's very important that we keep a close eye on AI and its development.


@zurew @Basman Thanks for your replies, it is really appreciated. (P.S.: this idea of nanobots really is scary :p)


Extremely dangerous in the hands of shady people.

It's not the AI that's the problem; it's the people who weaponize it against their enemies.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


@zurew

On 2023-01-24 at 0:47 PM, zurew said:

Of course AI is dangerous. Its a tech that will eventually have the biggest impact on humanity [...]

Nice post! There are definitely rippling effects from messing around with this technology that could spill over into other fields as well.


@thierry Yeah, it's fear. They've watched too much Matrix and read too much Dan Brown...


"I believe you are more afraid of condemning me to the stake than I am of receiving your cruel and disproportionate punishment."

- Giordano Bruno, Campo de' Fiori, Rome, Italy. February 17th, 1600.

Cosmic pluralist, mathematician and poet.

