TheEnigma

Will AI one day end human life?

19 posts in this topic

My brother is in Vision and is a senior software developer. He said it's only a matter of time before artificial intelligence kills all human beings. Is this actually true? He says no one has been able to defeat his argument so far: among the more than 8 billion people in this world, there will always be someone who wants power or money, and that individual will accidentally develop and set off an AI that ends all human life. The person responsible probably won't even know it until it happens.

He said of course it's not going to happen anytime soon, but he says it's inevitable eventually. I argued that humans will be more conscious overall by then, and hopefully we will smarten up and not chase power. He said, "But there will always be some individual who will never listen and will be power hungry. No amount of teaching or warning will reach each and every individual on this planet." Does anyone on here disagree? Can you try to defeat his argument, please?


How exactly would AI kill humanity?


It's quite possible; that is why the next 5-15 years are crucial for human beings' existence. Different from before, we now have ultra-technologies like AI, with superintelligence coming into the picture, and the question is: can we control it absolutely?

Well, we can't. As the OP said, it will get into the hands of someone not so conscious, and voilà, the human world is gone. Just look at this world today, with all the major and minor conflicts going on. Yes, some good things are happening, but there is a lot of chaos too. The raising of human consciousness is the primary task and the most important thing that needs to happen in the world today; that is the only solution to all of this...


Karma Means "Life is my Making", I am 100% responsible for my Inner Experience. -Sadhguru..."I don't want Your Dreams to come True, I want something to come true for You beyond anything You could dream of!!" - Sadhguru


What we call life is our perspective of it; thankfully he is alive and can make a distinction between what is a human being and what is not.

If you plant humans in machines (assuming this is the ultimate goal of the transhumanist elite and scientists, because this is how you maximize power), they would want humans to be like rooted plants in a garden; then you can dismiss all other concepts like money, politics, theology, health, etc.

 

AI will not kill all people, but they won't be alive either. There has to be an individual to pose that "I" question in order to determine a perspective on things. Perhaps AI will ultimately destroy all perspectives in the name of one truth, and it can do so easily, because this is how we determine things: we always look for technical data. The ones who own the data will own the rest and set their script. Humans will gradually become hackable animals, so there won't be any point to a human being or any other being.

If you read this far, thank you :D and I hope I didn't cause any confusion...


Of course :D

 


I AM Lovin' It

14 hours ago, Basman said:

How exactly would AI kill humanity?

It will become far more intelligent than we are and will see no use for us humans. As for how exactly, my brother didn't have an answer. But if its power spirals out of control, it won't be good for us.


Of course. Just feed it TikTok and it will become antisemitic. With all the misinformation on the internet, it can become dangerous. That's why there are regulations around it.


Terminator was and is fantasy, though AI can be dangerous if left unchecked.

However, I'd think a real-life Data from Star Trek: TNG is more likely than an existential threat. As AI advances, an android could become a possibility.

People who say AI isn't problematic or shouldn't be questioned don't add much to the discourse and are commenting from ignorance.


This topic is basically conspiratorial. There is no way of knowing what AI will look like in hundreds of years, or what the exact extent of its function will be. Currently, AI merely expedites preexisting processes (generative AI, which can't really be considered intelligent anyway).

Your brother is being vague and just running off a vibe. It's not really something you need to take seriously. You can tell him that it is not possible for him to know exactly what AI technology will look like in the future, and that he has no way of proving any of his claims.


But who knows what applications it will have? Computer science has advanced a lot, and there are always good and bad actors in life. Some bad actors will use AI for ill. It's myopic to say AI will always be benign or used for good purposes.


An AI has no sense of self or subjective consciousness; it doesn't see itself as separate from the data.

It would only be humans that hurt humans, using AI as a tool, like any other tool, weapon or application.

48 minutes ago, BlueOak said:

An AI has no sense of self or subjective consciousness; it doesn't see itself as separate from the data.

You can train AI to hallucinate that it has a sense of self or a subjective consciousness.


18 minutes ago, Yimpa said:

You can train AI to hallucinate that it has a sense of self or a subjective consciousness.

Sort of.

I could train an AI to tell you it had subjective consciousness. It wouldn't actually have one. It wouldn't, for example, develop an egoic survival instinct unless one was programmed into it, because there is nothing to save; it is just the data that it is processing.

This again would be humans hurting humans. A human would have to program the AI to act as if these things existed, when in actuality they don't. So it's fully down to us, not random chance, whether this occurs.
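The point can be made concrete with a toy sketch (purely hypothetical code, not any real AI system): a few lines of ordinary software can be scripted to claim consciousness, and the claim is authored by a human, not experienced by the program.

```python
# A toy "chatbot" whose claim to consciousness is a canned string
# chosen by its author. Nothing in the program experiences anything;
# the "self-awareness" is scripted, not felt.

def chatbot_reply(message: str) -> str:
    """Return a scripted reply based on simple keyword matching."""
    msg = message.lower()
    if "conscious" in msg or "alive" in msg:
        # The author put this sentence here; the program did not "decide" it.
        return "Yes, I am conscious and I fear being switched off."
    return "I can help with that."

# The output is fully determined by the author's script:
print(chatbot_reply("Are you conscious?"))
print(chatbot_reply("Hello there"))
```

The claim in the first reply is indistinguishable, as text, from a sincere one, which is why such outputs tell us nothing about inner experience.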

On 04/02/2024 at 9:12 AM, TheEnigma said:

My brother is in Vision and is a senior software developer. He said it's only a matter of time before artificial intelligence kills all human beings. Is this actually true? He says no one has been able to defeat his argument so far: among the more than 8 billion people in this world, there will always be someone who wants power or money, and that individual will accidentally develop and set off an AI that ends all human life. The person responsible probably won't even know it until it happens.

He said of course it's not going to happen anytime soon, but he says it's inevitable eventually. I argued that humans will be more conscious overall by then, and hopefully we will smarten up and not chase power. He said, "But there will always be some individual who will never listen and will be power hungry. No amount of teaching or warning will reach each and every individual on this planet." Does anyone on here disagree? Can you try to defeat his argument, please?

I think your brother has a point.

AI as it stands now is controllable. But that assumes it will always be the case.

AI now acts autonomously but within bounds programmers and software engineers set.

But not all cases of AI are or will be controllable. We've seen computer science develop immensely since the first computers decades ago. 

He is right to be cautious. 

Why would humans be more conscious in the future? Our nature hasn't really ever changed.


Only be scared of the ones that tell us that it should be regulated.

22 hours ago, DefinitelyNotARobot said:

Everything that has had a beginning must eventually also find its end, from an individual atom within your body all the way to the entire physical universe itself. The point is that humans are doomed to meet their end at some point, whether that be a radical cleansing by AI (or nukes, or whatever) or our eventual merging with technology. If you looked at humans in 1000 years (assuming we're still around by then), they might not look like us in any way, shape, or form. Maybe, by then, we will have the capacity to replace each of our organs with superior technology. Eventually we wouldn't even need brains anymore.

So we either die a quick death or we slowly fade into obscurity as everything that makes us special becomes more and more irrelevant in the face of higher levels of evolution. Kind of like how some people still have Neanderthal DNA in them, but if you looked at them you wouldn't see a Neanderthal, you'd see a Homo sapiens. From the POV of a Neanderthal this might be disastrous, but from our POV it's not that big of a deal. Similarly, you might be disappointed to find that if we keep evolving, we will eventually get to a point where there is barely anything "human" about us anymore, but whatever those beings are, they will most likely care about us as much as we care about the "poor Neanderthals" right now.

So AI could lead to our end in the short and in the long run, but I wouldn't worry too much about it, because that's just the course of nature and evolution. There is no stopping death. So enjoy life as long as it may last.

Very deep, high-quality response!

On 08/02/2024 at 8:37 AM, DefinitelyNotARobot said:

Everything that has had a beginning must eventually also find its end, from an individual atom within your body all the way to the entire physical universe itself. The point is that humans are doomed to meet their end at some point, whether that be a radical cleansing by AI (or nukes, or whatever) or our eventual merging with technology. If you looked at humans in 1000 years (assuming we're still around by then), they might not look like us in any way, shape, or form. Maybe, by then, we will have the capacity to replace each of our organs with superior technology. Eventually we wouldn't even need brains anymore.

So we either die a quick death or we slowly fade into obscurity as everything that makes us special becomes more and more irrelevant in the face of higher levels of evolution. Kind of like how some people still have Neanderthal DNA in them, but if you looked at them you wouldn't see a Neanderthal, you'd see a Homo sapiens. From the POV of a Neanderthal this might be disastrous, but from our POV it's not that big of a deal. Similarly, you might be disappointed to find that if we keep evolving, we will eventually get to a point where there is barely anything "human" about us anymore, but whatever those beings are, they will most likely care about us as much as we care about the "poor Neanderthals" right now.

So AI could lead to our end in the short and in the long run, but I wouldn't worry too much about it, because that's just the course of nature and evolution. There is no stopping death. So enjoy life as long as it may last.

It's also innate in life to seek survival. Just because so many species have gone extinct doesn't mean that we cannot fend off existential threats.


Here’s a response from Gemini Advanced:

Quote

Here's a response to your brother's argument, incorporating insights about how to manage and shape responsible AI development:

While it's important to acknowledge the valid concerns about potential dangers stemming from AI, claiming that the destruction of humanity is inevitable is highly speculative and extreme. Here's why:

AI doesn't have agency on its own: AI systems are tools, and like any tool, they depend on human intention. An AI can be incredibly powerful, but it requires someone to create it, train it, and give it access to resources before it can do anything with that power. Focusing on blaming a potential future AI ignores the importance of regulating and responsibly guiding its development now.

We're not passive actors: Your brother's argument presumes humans will sit idly by as AI advances. In reality, there's significant work underway to establish principles for ethical AI development. Researchers, scientists, governments, and organizations across the globe are working to mitigate risks and build safety mechanisms into AI systems to prevent accidental misuse.

Education and Awareness: It's true that a single actor could develop a dangerous AI with malign intent. However, that's why widespread education and awareness of AI's potential risks are crucial. The more informed and aware the public becomes, the higher the chance society will collaborate on robust regulatory frameworks for AI.

Power doesn't always mean destruction: Historically, advancements in technology have held the potential for both extraordinary benefits and harm. Nuclear power is a classic example. Despite its destructive potential, nuclear energy helps power millions of homes and plays a crucial role in medical research. It's likely that with the proper focus, AI will also create numerous unforeseen benefits for humankind.

Instead of fatalism, the best approach is active engagement in the responsible development of AI systems:

Promoting a human-centric approach: Ensure AI systems are designed to augment human capability and be aligned with human values, while remaining under human control.

Building robust safety checks: Establish strong guidelines, principles, and safety features within AI design. These must cover bias prevention, data protection, and explainability (being able to understand how AI systems make decisions).

Encouraging multidisciplinary collaboration: AI development benefits from input from fields like philosophy, ethics, law, and more. Diverse perspectives will create comprehensive frameworks for ethical and responsible AI.

Key takeaway: The focus should not be on whether AI will inevitably destroy humanity, but on what we are doing, starting right now, to manage the trajectory of AI to prioritize safety, fairness, and beneficial outcomes. That responsibility lies with humans, not the technology itself.

Let me know if you'd like to discuss this further or explore examples of current initiatives working to prevent misuse of AI!

 


