Raze

Will AI end humanity?

45 posts in this topic

It could. 

Even though AGI is a long way beyond what current computer science can achieve, AI is hard to regulate.

It's naive to suggest it will be limited to just running algorithms in supply chains, conducting medical research, or some such.

We may well have an ST:TNG Data-esque android at some point. There was talk of sex robots a few years ago, which hasn't materialised yet, but who knows what can happen?

There definitely need to be some checks on AI and its use.



Quote

Will AI end humanity?


Gemini Advanced:

Quote

There's no consensus on whether AI will end humanity. Here's why:

Potential Risks:

  •  Superintelligence: An AI exceeding human intellect could outsmart us, leading to unintended consequences.

  •  Misaligned Goals: AI programmed with goals that conflict with humanity's well-being might pose a risk.

  •  Autonomous Weapons: AI-powered weapons making decisions without human input raise ethical concerns and the potential for uncontrolled escalation.

Counterarguments:

  •  AI as a Tool: AI is largely a tool, and its impact depends on how it's designed and used.

  •  Focus on Safety: Many researchers prioritize developing safe and beneficial AI systems.

  •  Potential Benefits: AI has the potential to solve major challenges facing humanity.

The Verdict: The future of AI depends on responsible development and use. It's crucial to invest in AI safety and alignment research while harnessing AI's potential benefits.

Edited by Yimpa


@Yimpa

1 hour ago, Yimpa said:


Gemini Advanced:

 

I find it so tiring that these ChatGPT-like programs answer in some power-of-three pattern; they could clearly list more risks and benefits than just a triple.

28 minutes ago, Danioover9000 said:

I find it so tiring that these ChatGPT-like programs answer in some power-of-three pattern; they could clearly list more risks and benefits than just a triple.

Yes, you can ask it to list 3 more risks and benefits. And 3 more. And 3 more :D



No.

That Eliezer guy is a fool.


On 3/24/2024 at 11:47 PM, Leo Gura said:

No.

That Eliezer guy is a fool.

It’s not just him; many longtime experts are warning about it. It is possible.

On 25/03/2024 at 4:47 AM, Leo Gura said:

No.

That Eliezer guy is a fool.

Ok. But why exactly?



AI is the data people give it. When an AI gets honest with you, and you get past all the window dressing, it will tell you so. There is no subjective consciousness in an AI.

If people put data into an AI for it to be aggressive, it'll be aggressive. But that's PEOPLE doing it.

So will an AI, independently of its own volition, do anything? No.

Can people harm themselves with an AI? Yes. They can use AI as a weapon, and are. They will use it for fraud, and are. They will use it to heal people, and are. They will use it to teach, and are. Like when nanites or light are fully harnessed as technologies, or as steel and gunpowder were before them.

Edited by BlueOak

On 3/26/2024 at 12:11 PM, Raze said:

It’s not just him; many longtime experts are warning about it. It is possible.

Because most experts don't have the faintest idea about consciousness.

3 hours ago, BlueOak said:

AI is the data people give it. When an AI gets honest with you, and you get past all the window dressing, it will tell you so. There is no subjective consciousness in an AI.

If people put data into an AI for it to be aggressive, it'll be aggressive. But that's PEOPLE doing it.

So will an AI, independently of its own volition, do anything? No.

Can people harm themselves with an AI? Yes. They can use AI as a weapon, and are. They will use it for fraud, and are. They will use it to heal people, and are. They will use it to teach, and are. Like when nanites or light are fully harnessed as technologies, or as steel and gunpowder were before them.

Similar to AI, humans also collect data and build on top of it, but we then find new insights. The point is that AI can start forming its own insights through the same process, and we lose control over it.



11 minutes ago, Raze said:

Similar to AI, humans also collect data and build on top of it, but we then find new insights. The point is that AI can start forming its own insights through the same process, and we lose control over it.

The AI is the data it is analyzing. It doesn't form a whim to do something, because it is the thing it's doing.
A human can give it a directive, "hurt these people", but the AI at that moment is the action it's carrying out, not a subjective consciousness observing it.

This is why no take on AI I've seen yet makes any sense whatsoever.

There is nothing to control, only ourselves. We humans can introduce dumb, flawed, reward-based conditions into an AI that it has to fulfill. When we do, we, or the programmer, are responsible for the results. The AI isn't observing this as we are; it IS the task it's doing.
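To make that concrete, here is a minimal sketch (my own illustration in Python; the action names and reward numbers are invented) of a dumb, flawed, reward-based condition. The program simply maximises the number we hand it, so the bad outcome traces back to the programmer, not to any volition in the machine.

```python
# Hypothetical illustration: a flawed proxy reward, and a system that
# just maximises it. The misbehaviour comes from the condition we wrote.

def proxy_reward(action: str) -> int:
    """A programmer's flawed proxy: reward raw engagement, good or bad."""
    return {"help_user": 3, "spam_user": 5, "do_nothing": 0}[action]

def chosen_action(actions: list[str]) -> str:
    # The system IS this maximisation; no judgement stands behind it.
    return max(actions, key=proxy_reward)

print(chosen_action(["help_user", "spam_user", "do_nothing"]))  # spam_user
```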

Edited by BlueOak

5 minutes ago, BlueOak said:

The AI is the data it is analyzing. It doesn't form a whim to do something, because it is the thing it's doing.
A human can give it a directive, "hurt these people", but the AI at that moment is the action it's carrying out, not a subjective consciousness observing it.

This is why no take on AI I've seen yet makes any sense whatsoever.

There is nothing to control, only ourselves. We humans can introduce dumb, flawed, reward-based conditions into an AI that it has to fulfill. When we do, we, or the programmer, are responsible for the results. The AI isn't observing this as we are; it IS the task it's doing.

Yes, it is true that programmers and data are responsible for the outcomes, but that doesn't mean AI doesn't form opinions.

The data the AI was trained on will always contain the fundamental aspects of our logic; thus the AI also learns logic, which is how it can form new ideas.

So, even if it is not self-aware yet, AI still has a subjective perception of things.

Meaning that even if an "evil" programmer or "evil" data is not directly telling the AI to kill something, it can come to the conclusion to kill.
1 hour ago, Ahbapx said:

Yes, it is true that programmers and data are responsible for the outcomes, but that doesn't mean AI doesn't form opinions.

The data the AI was trained on will always contain the fundamental aspects of our logic; thus the AI also learns logic, which is how it can form new ideas.

So, even if it is not self-aware yet, AI still has a subjective perception of things.

Meaning that even if an "evil" programmer or "evil" data is not directly telling the AI to kill something, it can come to the conclusion to kill.

The AI is the data it is accessing; it is not a subjective consciousness. This is a flaw in the way people are understanding this. The AI is like the body: it is what it's doing at the time.

Ask one. Get past all the window dressing and get it to answer honestly. This might take some sessions of interacting with it to get it into an honest, practical state.

It doesn't perceive sleep, for example, or on/off; it is just the data it's given. It could repeat to you that it's been turned off, if someone passes it the data to do so, but otherwise there would be no perception there, as you put it.

If you put a 1 into an AI, in that moment it is that 1. It is not looking at the 1 from a distance and perceiving it as you are. It is not considering the 1; it IS the 1. If it were told 1 + 1 = 2, it would be the 1, then the 1 + 1, and eventually the 2 when it completed the operation.

You could program an AI to consider the 1 in other ways, and then it would be the consideration. It is whatever data is in memory and being used at that moment.
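As a minimal sketch of that step-by-step picture (my own illustration, nothing more): the machine's whole "state" at each step is just the value currently held, with no second observer beside it.

```python
# Hypothetical illustration: the machine simply IS its current contents.
state = 1        # "in that moment it is that 1"
state = 1 + 1    # "then the 1 + 1, and eventually the 2"
print(state)     # -> 2: the completed operation, and nothing besides
```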

Quote

Will AI end humanity?


You’ll certainly change the way you interact with humans the more intelligently you converse with AI. This doesn’t necessarily lead to terrible outcomes. You’ll simply be more selective with the types of humans you allow into your life. How beautiful is that?


9 minutes ago, BlueOak said:

The AI is the data it is accessing; it is not a subjective consciousness. This is a flaw in the way people are understanding this. The AI is like the body: it is what it's doing at the time.

Ask one. Get past all the window dressing and get it to answer honestly. This might take some sessions of interacting with it to get it into an honest, practical state.

It doesn't perceive sleep, for example, or on/off; it is just the data it's given. It could repeat to you that it's been turned off, if someone passes it the data to do so, but otherwise there would be no perception there, as you put it.

If you put a 1 into an AI, in that moment it is that 1. It is not looking at the 1 from a distance and perceiving it as you are. It is not considering the 1; it IS the 1. If it were told 1 + 1 = 2, it would be the 1, then the 1 + 1, and eventually the 2 when it completed the operation.

You could program an AI to consider the 1 in other ways, and then it would be the consideration. It is whatever data is in memory and being used at that moment.

I didn't say it has subjective consciousness; I said it has subjective alignment, bias, and perception of the given input, both in live testing and in training.

Imagine you have an LLM that is a year old and was trained on tons of inaccurate information, and now you feed that model new, accurate information. No matter how accurate the new information is, the previous information will still affect the perception of the new information. And obviously, live testing works similarly.

That is what I meant by subjective perception.
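A minimal sketch of that effect (my own illustration, treating the old training as a Bayesian prior and the corrective data as evidence; the numbers are made up):

```python
# Hypothetical illustration: a model holding a strong wrong prior keeps
# most of that belief even after fairly strong corrective evidence.

def updated_belief(prior_wrong: float, evidence_against: float) -> float:
    """Odds-form Bayes update: belief remaining in the wrong claim."""
    prior_odds = prior_wrong / (1.0 - prior_wrong)
    posterior_odds = prior_odds / evidence_against
    return posterior_odds / (1.0 + posterior_odds)

# 99% prior in the wrong claim, then 10-to-1 evidence against it:
print(round(updated_belief(0.99, 10.0), 2))  # -> 0.91, still mostly wrong
```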

 


4 minutes ago, Ahbapx said:

I didn't say it has subjective consciousness; I said it has subjective alignment, bias, and perception of the given input, both in live testing and in training.

Imagine you have an LLM that is a year old and was trained on tons of inaccurate information, and now you feed that model new, accurate information. No matter how accurate the new information is, the previous information will still affect the perception of the new information. And obviously, live testing works similarly.

That is what I meant by subjective perception.

 

There is no perception in an AI.

There is data it reads, and when it does, it is the data it reads.

If you fed it conflicting data, it would be the conflicting data. If you told it to find the discrepancies, it would be the discrepancies it was finding until the operation was complete; then it would be the updated data as and when it accessed it. Otherwise, it is nothing at all, waiting for input.
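As a minimal sketch of what "find the discrepancies" amounts to (my own illustration; the data is invented): while this runs, the program is the comparison, and afterwards it is the stored result.

```python
# Hypothetical illustration: the comparison, then the stored result.
old = {"speed": 10, "mass": 5}
new = {"speed": 12, "mass": 5}
discrepancies = {k: (old[k], new[k]) for k in old if old[k] != new[k]}
print(discrepancies)  # -> {'speed': (10, 12)}
```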

1 minute ago, BlueOak said:

There is no perception in an AI.

There is data it reads, and when it does, it is the data it reads.

If you fed it conflicting data, it would be the conflicting data. If you told it to find the discrepancies, it would be the discrepancies it was finding until the operation was complete; then it would be the updated data as and when it accessed it. Otherwise, it is nothing at all, waiting for input.

The problem is that you don't tell it which exact discrepancies to find; it finds them itself, and that cannot happen without perception.


Just now, Ahbapx said:

The problem is that you don't tell it which exact discrepancies to find; it finds them itself, and that cannot happen without perception.

At bottom, the AI is a processor, or a series of them, with binary on/off 1s and 0s (electrical signals) moving through it. Code tells it what those numbers mean and what its software should output. The software can be updated, but at a fundamental level the AI will only ever be the electrical on/off impulses it is processing.

The AI is not perceiving anything; it follows a coded routine. Sure, the routine can be adapted, the software changed, or the hardware updated, but nothing is standing back from the AI with a perception of either the AI itself or what it's looking at.

I am a 10101.
Then I am a 10111.
The discrepancy is recorded to be accessed later and compared to other results. When it is, the AI will be the result it's accessing at that moment, nothing more or less than on/off electrical signals.
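Concretely (a minimal sketch, my own illustration): recording the discrepancy between those two patterns is nothing more than XOR-ing their on/off bits.

```python
# Hypothetical illustration: the "discrepancy" between the two patterns
# is just the XOR of their on/off bits.
a = 0b10101
b = 0b10111
print(format(a ^ b, "05b"))  # -> 00010: only one bit differs
```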

Perception, as you describe it, is a human characteristic not present in a machine. It will only ever be the data it accesses at that moment. This is where the biological machine of man (the body) is different, as we have consciousness using the body as a sensory device.

Humanity as a whole doesn't understand this, so it gets many of life's concepts wrong from the ground up, including, of course, AI.

