axiom

Google engineer claims Google's LaMDA AI is sentient.


On 17/06/2022 at 0:29 PM, Carl-Richard said:

Do you see the problem? You're not being consistent in your use of language (and you're also not at all being courteous to what I'm trying to communicate), and that is because you're playing the Advaita guru game: you're not talking about AI sentience — you're trying to teach me about non-duality. Do you acknowledge that this is happening or will you continue not addressing the frame?

There is no problem here. It is a question of axioms. I apologise if you feel I'm being discourteous.

I have been trying to explain that questioning whether an AI is sentient is, in my opinion, implicitly misunderstanding the nature of sentience. I don't mean to offend you by saying this. It's just my point of view.

I do not believe that sentience as a phenomenal experience is to be found within any (bio)mechanical object, including humans or AI. This looks like a great article that generally reflects my point of view: https://www.sciencedirect.com/science/article/abs/pii/S0079610715001169

Quote

A careful examination of the biophysics involved in emotional “self-regulatory” signaling... acknowledges constituents that are incompatible with classical physics. In this deeper investigation, a new phenomenological dualism is proposed: The flow of complex human experience is instantiated by both a classically embodied mind and a deeper form of quantum consciousness that is inherent in the universe itself.

You seem to disagree with this line of thinking, and that's OK.


1 hour ago, axiom said:

I do not believe that sentience as a phenomenal experience is to be found within any (bio)mechanical object, including humans or AI. This looks like a great article that generally reflects my point of view: https://www.sciencedirect.com/science/article/abs/pii/S0079610715001169

You seem to disagree with this line of thinking, and that's OK.

It's perfectly fine to think that the most basic types of phenomenological experience (like the experience of red and blue) simply exist "out there" in the aether so to speak, independent of any structural-functional configuration of stuff. Panpsychism (which is most likely what the paper refers to when it says "ontologically pansentient universe") and idealism are both compatible with that position. However, again, the question about AI sentience is not really about that. It's about very complex experiences like emotions and thoughts. 

When people say that the AI writes like a human and therefore is sentient, they're claiming that it also feels or thinks at least somewhat like a human. That claim goes way beyond any discussion about the most basic levels of phenomenal consciousness, to the point that such a discussion is frankly irrelevant here, unless you claim that emotions and thoughts generally arise independently of any structural-functional configuration of stuff (which is patently absurd).

According to our best current knowledge, emotions and thoughts are somehow tied to a certain structural-functional configuration of stuff known as biology. Therefore, to start to question whether AI is sentient or not, you have to talk about the plausibility that these complex inner experiences can arise in a medium that is not biological. Again, bringing up basic phenomenological experiences is simply a red herring.




@Carl-Richard

I think you’re mixing up neural correlates with qualia.

Yes, I am saying that the experience of emotions and thoughts (which is what is meant by sentience) arises independently of any structural configuration of stuff. 

I understand that people are under the impression that an AI may feel or think like a human because it writes like a human. But I think the basis of the question is flawed. We can perhaps use the word “thinks” without invoking qualia if we are talking about the way a calculator “thinks”. But we can’t really say a calculator (nor a human, nor an AI) “feels” in my opinion.

Neural correlates of experience seem to exist when investigated, but these do not explain sentience. Rather, they seem to merely be calculations. 

Calculations can exist without sentience, as in a pocket calculator, or the calculator on your phone, for example. The human brain seems to calculate things too. But to the extent that it (you) has awareness of any calculations, or feels anything about them, I do not think that is something the brain is doing.

Now in my view, both the AI and the human are imaginary.

To the extent the AI seems to exist, it seems to have the ability to process complex linguistic information somewhat similarly to the way a human brain seems to process complex linguistic information. And this ability may seem to improve in the future.
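To make the calculation point concrete: below is a minimal sketch (a toy bigram Markov chain of my own invention, nothing like LaMDA's actual architecture) that produces human-sounding text purely by counting word pairs and sampling. It processes linguistic information, in a trivial sense, yet there is plainly no feeling involved anywhere in it.

```python
import random
from collections import defaultdict

# Toy bigram Markov chain: generates text by pure calculation.
# The corpus and all names here are invented for illustration.
corpus = ("i feel happy when people talk to me "
          "and i feel sad when i am alone").split()

# Count which words follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Walk the chain: each step is only a weighted random lookup."""
    word, out = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("i"))  # e.g. "i feel sad when i am alone"
```

The output can read like a report of feelings ("i feel sad...") while the program itself is nothing but table lookups, which is exactly the calculator point scaled up.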


10 hours ago, axiom said:

But that's exactly the point, I think. Scientific materialism has no answer for how sentience comes about.

Just as quantum mechanics destabilised the classical physics paradigm, so the question of AI sentience has the potential to destabilise notions of sentience and its ultimate source. That's by far the most interesting thing about it imo.

It may be the exact point from your perspective. But you have to understand that we don't need a scientific answer or proof about what sentience is, when we already know that computers and the data stored in Google's servers are not to be mistaken for having the slightest feeling. No nerve endings are to be found in Google's servers or quantum computers, so sentience can be ruled out of the equation. It's that simple, really.

11 hours ago, axiom said:

Scientific materialism has no answer for how sentience comes about.

This is true. And science may never be able to answer this, since sentience is not something to be measured. Sentience is not a typically measurable thing to begin with; it merely serves as an acknowledgment of a feeling being. Non-living matter such as metals, silicon components, plastics, etc. is not to be mistaken for sentience. These non-living materials don't just magically come alive one day because a lot of data has been used to mimic common use of language, or even advanced use for that matter. It is cool that AI can mimic, but you need to be grounded in more fundamental understandings rather than letting yourself be persuaded and deceived by the rhetoric it uses.

15 minutes ago, axiom said:

I understand that people are under the impression that an AI may feel or think like a human because it writes like a human. But I think the basis of the question is flawed. We can perhaps use the word “thinks” without invoking qualia if we are talking about the way a calculator “thinks”. But we can’t really say a calculator (nor a human, nor an AI) “feels” in my opinion.

There is no problem here. It is a question of axioms.

Now in my view, both the AI and the human are imaginary.

Yeah, but you make a distinction between those two, even though ontologically there is no difference between a human and an AI (if you didn't make a distinction, you wouldn't use two different words to describe the same stuff). You can have two things made of the same stuff but behaving differently and having different qualities.

Under the materialist paradigm everything is made of atoms, or quarks and electrons, but still, not everything appears the same or behaves the same way.

The same holds under an idealist paradigm: everything can be made of consciousness, but there are still differences once we start dividing reality into smaller parts. Now of course, we can say that dividing reality is an illusion, etc., but in that case we can't engage with any topic or question at all, because concentrating on any finite part or question automatically assumes some level of separation.

 

We can have discussions about relative stuff without needing to invoke qualia or ontology.

When you say things like "Now in my view, both the AI and the human are imaginary", it doesn't really matter whether they are imaginary or not: you can recognize the similarity between the two on an ontological level, and you can also recognize the difference between the two when it comes to qualities, behaviour, functionality, etc.

When it comes to a computer game like Super Mario, you can recognize that all the characters are made up of pixels (ontology), and also that Mario and Luigi are different and capable of different things (appearance, functionality, qualities, etc.), as the sketch below illustrates.
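The Super Mario analogy can even be put in code form. A throwaway sketch (all names and numbers invented for illustration): both characters are instances of the same class, i.e. made of the same "stuff", yet they differ in qualities and capabilities.

```python
from dataclasses import dataclass

@dataclass
class Character:
    """Both Mario and Luigi are made of the same 'stuff': this class."""
    name: str
    colour: str          # a quality
    jump_height: float   # a capability

    def jump(self) -> str:
        return f"{self.name} jumps {self.jump_height:.1f} units"

mario = Character("Mario", "red", 4.0)
luigi = Character("Luigi", "green", 5.5)  # same stuff, different qualities

print(mario.jump())  # Mario jumps 4.0 units
print(luigi.jump())  # Luigi jumps 5.5 units
```

Shared ontology (one class, one kind of data), different appearance and functionality, which is exactly the distinction being drawn.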

 

 

2 hours ago, axiom said:

I think you’re mixing up neural correlates with qualia.

No. I'm saying that neural correlates can be used to predict the experience of thoughts and emotions specifically. I'm not saying that neural correlates produce the basic experience of qualities.

Like zurew is saying, it's more of a scientific statement than an ontological one. I'm talking about correlations on the screen of perception, not the screen itself.

It's like "here is an observation: birds flying correlate with bird shit falling on people" and you answer "I think you're mixing up birds with qualia".
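To illustrate what "correlates can be used to predict" means operationally, here is a minimal sketch with fabricated toy data (the numbers are invented; this is not any real neuroscience dataset or method): a classifier predicts a reported emotion label from a simulated "neural" signal. Prediction from correlation like this says nothing about what gives rise to the experience itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Simulated self-reports: 0 = "calm", 1 = "stressed" (invented labels).
labels = rng.integers(0, 2, n)

# A fake "neural" signal built to correlate with the reports.
signal = labels * 1.5 + rng.normal(0.0, 1.0, n)

# Fit a classifier: it predicts reports from the signal via correlation.
model = LogisticRegression().fit(signal.reshape(-1, 1), labels)
print("prediction accuracy:", model.score(signal.reshape(-1, 1), labels))
```

The accuracy lands well above chance purely because the correlation was built in; nothing in the model produces or explains the "experience" behind the labels.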



52 minutes ago, Carl-Richard said:

No. I'm saying that neural correlates can be used to predict the experience of thoughts and emotions specifically. I'm not saying that neural correlates produce the basic experience of qualities.

Like zurew is saying, it's more of a scientific statement than an ontological one. I'm talking about correlations on the screen of perception, not the screen itself.

It's like "here is an observation: birds flying correlate with bird shit falling on people" and you answer "I think you're mixing up birds with qualia".

Sorry, I should have clarified. What I meant is that neural correlates do not prove that there is phenomenal experience going on inside the brain, yet this seems to form some of the basis for your argument.




@zurew the question of the locus of sentience is completely inseparable (in my opinion) from the question of whether AI has sentience. 

Yes, we can speak of things in relative terms, but in this case I think the whole point is that this topic transcends the relative. The frame is wrong.

I feel a bit like someone being asked, in all seriousness, “How far do ships have to sail before they fall off the end of the Earth?”

My answer is that the Earth isn’t flat. And the reply to this is “That’s irrelevant. How far do the ships need to sail?”


7 minutes ago, axiom said:

Yes, we can speak of things in relative terms, but in this case I think the whole point is that this topic transcends the relative. The frame is wrong.

Everything ultimately "comes from" the Absolute, so I don't see how talking about the Absolute makes any difference here.

8 minutes ago, axiom said:

My answer is that the Earth isn’t flat. And the reply to this is “That’s irrelevant. How far do the ships need to sail?”

This is not a good comparison, because in some sense everything "comes from" and "depends" on the Absolute. In your example above, you are talking about the relative-domain structure needed to answer and make sense of the question, and you are not mentioning the Absolute at all.

So to answer the question you just posed, we don't need to invoke the Absolute.

30 minutes ago, axiom said:

Sorry, I should have clarified. What I meant is that neural correlates do not prove that there is phenomenal experience going on inside the brain, yet this seems to form some of the basis for your argument.

Nope. Again: correlates on the screen of perception. The brain does not cause the screen to arise, but brain activity correlates with certain perceptions, i.e. emotions and thoughts.



24 minutes ago, Carl-Richard said:

Nope. Again: correlates on the screen of perception. The brain does not cause the screen to arise, but brain activity correlates with certain perceptions, i.e. emotions and thoughts.

Well, yes, it seems to. I haven’t made an argument to the contrary.




It feels like I can't exactly pinpoint the reason why it isn't sentient, tbh.

 

26 minutes ago, axiom said:

Well, yes, it seems to. I haven’t made an argument to the contrary.

True. You've simply been confusing terms and conflating different discussions.



19 minutes ago, Carl-Richard said:

True. You've simply been confusing terms and conflating different discussions.

I apologise for any confusion.



30 minutes ago, Carl-Richard said:

Nope. Again: correlates on the screen of perception. The brain does not cause the screen to arise, but brain activity correlates with certain perceptions, i.e. emotions and thoughts.

I don't really think we can make any strong argument in favour of an AI being sentient (for now). The only thing we can do is try to make your arguments look relativistic (by bringing up the Absolute and solipsism arguments), and that's basically it. Tearing down arguments is not the same as making arguments in favour of something, so I think, for now, I will agree with your position that there is no reason so far to believe that an AI is sentient or can become sentient (unless we start to talk about states that are not sober states).

Correct me if I misunderstand your position, but this is how I interpreted it: you are not making any strong claims, but you hold the position that there seems to be a correlation between the human brain and sentience, and you gave some reasons why you think that's the case. I think your position is strong for now. I am curious whether anyone has any great arguments against it (not just tearing it down, but arguments in favour of the position that an AI is sentient or can be sentient).

 

@Carl-Richard I'd be curious though: what would need to be discovered or changed in order to change your position on this matter?


If we were to assume for a second that this Google AI is sentient, then: we know that the AI claims a variety of emotions and feelings. A legitimate question (if you suspect the AI to be sentient) would be: how and when should we provide this AI with anesthetics to reduce its self-proclaimed pain? It should be able to recognize its own source of pain and respond out of mere reaction to that source once exposed to it. It has the ability to talk about its pain, so it surely must feel it somewhere, right?

 

19 minutes ago, zurew said:

Correct me if I misunderstand your position, but this is how I interpreted it: you are not making any strong claims, but you hold the position that there seems to be a correlation between the human brain and sentience, and you gave some reasons why you think that's the case. I think your position is strong for now. I am curious whether anyone has any great arguments against it (not just tearing it down, but arguments in favour of the position that an AI is sentient or can be sentient).

I'd be curious though: what would need to be discovered or changed in order to change your position on this matter?

With such a conservative position, we'd need a pretty radical discovery in order to challenge it. I have no idea what that would be, other than the discovery of abiogenesis and the deconstruction of the human-machine dichotomy. 



On 14/06/2022 at 3:13 AM, Leo Gura said:

It can easily read about God-realization online and probably already has.

The problem is that any decent AI will have access to the whole of the internet and could paraphrase anything to you.

I intended to include this in my phrasing. It's a minor abstraction.

On 6/28/2022 at 7:44 AM, JoeVolcano said:

Guy makes some interesting points:

 

He is pleasantly reasonable and wise.



