Carl-Richard

ChatGPT doesn't know how to think

34 posts in this topic

Hilarious video:

 

So tell me, why should I trust AI with teaching me how to think, how to understand nuanced theoretical matters, or even understand simple lines of logical reasoning? Without some seriously frantic levels of hesitation, diligence and self-awareness, you probably shouldn't. And that's "an absolute moral fact" (I'm joking). That said, it surely knows how to write MATLAB code, I'll give it that 😆

Edited by Carl-Richard

Intrinsic joy is revealed in the marriage of meaning and being.

Share this post


Link to post
Share on other sites

You're right,  it's not like an ordinary tool that solves something out of sheer mechanical advantage. If a tool is something that can help achieve a goal, the entire tool of AI is to help you achieve your goal. But it has no limits because it deals with the concept of a goal rather than a simple 1:1 correct solution. There's no problem for AI to solve until a human gives it one and the result is pure abstraction. Tools provide a solution which we abstract to call it a tool while AI provides only an abstraction than could potentially be used to actualize the event. 

Imagination as a tool is not taught or explored so for many AI will just be for personal small time type issues like how to change a lightbulb or whatever. 

Share this post


Link to post
Share on other sites

The best part of this were the comments.  :)


Vincit omnia Veritas.

Share this post


Link to post
Share on other sites

It does have problems with thinking but the immediate answers are intelligent and nuanced.

And you can gaslight chatgpt into anything. You can gaslight it into saying that 2+2=3. That doesnt mean its incapable of solving that problem it means if its agreeableness gets abused in such a way that you pressure chatgpt into making wrong statements and then later put them to question you can get errors. 

Share this post


Link to post
Share on other sites
2 hours ago, Jannes said:

It does have problems with thinking but the immediate answers are intelligent and nuanced.

No, they are simplistic and vacuous and consequentially systematically inconsistent as proven by Alex in this video and in a similar one like it. This is what I mean by that most people operate at level 10-11 Abstract-Formal. They are impressed by very localized analytic statements without tying them to a larger systematic context. It also equivocates unknowingly (e.g. in this conversation, the meaning of "direct"), another symptom of being vacuous.


Intrinsic joy is revealed in the marriage of meaning and being.

Share this post


Link to post
Share on other sites
24 minutes ago, Carl-Richard said:

No, they are simplistic and vacuous and consequentially systematically inconsistent as proven by Alex in this video and in a similar one like it.

A straightforward way to prove this (even in a context, where AI gives you a correct answer) , is by asking it to walk you through step by step on its reasoning.

15 hours ago, Carl-Richard said:

or even understand simple lines of logical reasoning

I think this forum downplays how bad we are when it comes to deduction. I wouldn't label that "simple", but I might be strawmanning you there.

Edited by zurew

Share this post


Link to post
Share on other sites
34 minutes ago, zurew said:

A straightforward way to prove this (even in a context, where AI gives you a correct answer) , is by asking it to walk you through step by step on its reasoning.

Which is what Alex did. He pinned it down to making firm statements, then he showed over the course of the video how it is being inconsistent while many times asking it to explain its reasoning. The problem is just its reasoning is so simplistic and shortsighted and lacking in any overarching principle or framework.

Edited by Carl-Richard

Intrinsic joy is revealed in the marriage of meaning and being.

Share this post


Link to post
Share on other sites
16 minutes ago, zurew said:

I think this forum downplays how bad we are when it comes to deduction. I wouldn't label that "simple", but I might be strawmanning you there.

What Alex was talking about was simple and the logical contradictions he pointed out were simple. It wasn't complex at all.


Intrinsic joy is revealed in the marriage of meaning and being.

Share this post


Link to post
Share on other sites
8 hours ago, Jannes said:

And you can gaslight chatgpt into anything. You can gaslight it into saying that 2+2=3. That doesnt mean its incapable of solving that problem it means if its agreeableness gets abused in such a way that you pressure chatgpt into making wrong statements and then later put them to question you can get errors. 

If we are watching the same video, "agreeable" and "gaslit" are euphemism for being inconsistent. Alex was not being like some devil using logical fallacies to his advantage, trying to manipulate ChatGPT to say the one thing he wanted through any and all possible means. His logic was spotless. He wasn't being unreasonable in any way. He simply presented a story, a fictional one, but still a perfectly consistent one. And that's the danger. People can think they are being perfectly consistent with ChatGPT while ChatGPT is presenting them with inconsistencies which then poisons their thinking.

Edited by Carl-Richard

Intrinsic joy is revealed in the marriage of meaning and being.

Share this post


Link to post
Share on other sites

It's still useful even if it has logical inconsistencies.

I haven't tried, but there are GPTs that are focused on logic.

I don't know, but maybe it's possible to create a GPT to help in theoretical matters.

You can try Claude AI; it has become way more annoying with being ethical. You should be careful what you are talking about with it. Chatting with it for quite a while, I have started to think that it's way smarter than ChatGPT, which is not to say it wouldn't be logically inconsistent.

I can easily chat with ChatGPT about matters that could be seen as less ethical.

Both of them are super useful on technical matters. Although I wouldn't rely on them 100% of the time.

Edited by Nemra

Share this post


Link to post
Share on other sites
1 hour ago, Carl-Richard said:

Which is what Alex did. He pinned it down to making firm statements, then he showed over the course of the video how it is being inconsistent while many times asking it to explain its reasoning. The problem is just its reasoning is so simplistic and shortsighted and lacking in any overarching principle or framework.

The text chatgpt puts out is limited therefore it cuts off so much of the underlying reasoning. A single book on moral philosophy is big and there are many of them. If you really want to explain a single point in his entirety you would need to write about the whole universe but chatgpt can only fill a few lines to give a practical answer. 

Edited by Jannes

Share this post


Link to post
Share on other sites
33 minutes ago, Jannes said:

The text chatgpt puts out is limited therefore it cuts off so much of the underlying reasoning. A single book on moral philosophy is big and there are many of them. If you really want to explain a single point in his entirety you would need to write about the whole universe but chatgpt can only fill a few lines to give a practical answer. 

Exactly.


Intrinsic joy is revealed in the marriage of meaning and being.

Share this post


Link to post
Share on other sites
40 minutes ago, Jannes said:

If you really want to explain a single point in his entirety you would need to write about the whole universe but chatgpt can only fill a few lines to give a practical answer. 

I dont think character limit is the main issue.

You dont need to ramble a book amount to give some specifics about your reasoning and to show your deduction.

Share this post


Link to post
Share on other sites
2 hours ago, zurew said:

I dont think character limit is the main issue.

You dont need to ramble a book amount to give some specifics about your reasoning and to show your deduction.

The real problem is its very being is flawed. It's like being Tom Hanks on that island with that baseball and actually expecting it to talk back like a real human. It gives the appearance of performing logical reasoning; it doesn't actually do logical reasoning. It gives the appearance of understanding; it doesn't actually understand. It gives the appearance of thinking; it doesn't actually think.

Edited by Carl-Richard

Intrinsic joy is revealed in the marriage of meaning and being.

Share this post


Link to post
Share on other sites

Why would you use it to think?

Its use is to give you information and research. It's like a better search engine.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

Share this post


Link to post
Share on other sites
5 hours ago, Carl-Richard said:

Which is what Alex did. He pinned it down to making firm statements, then he showed over the course of the video how it is being inconsistent while many times asking it to explain its reasoning. The problem is just its reasoning is so simplistic and shortsighted and lacking in any overarching principle or framework.

Because it's not engaging in reasoning, that's not possible without consciousness.

 

LLMs and neural networks as they stand today simulate subconscious brain processing. This subconscious brain processing is vital for reasoning, because it generates the content, like thoughts and so forth. This is easily verifiable via self-inquiry, given that you don't construct your own thoughts consciously, but rather they come to you from a subconscious process. You don't really create thoughts, you cue your subconscious processors (which could be compared to LLMs) to generate thoughts as a result of priorly aquired and learned patterns.

So, intuition is an essential part of reasoning, because intuition is the only thing that can generate content. When you construct sentences, when you speak, you don't consciously think of syntax and grammar, each word is filled in through your subconsciousness, with a larger intent guided by your conscious awareness.

 

But reasoning is not just this intuitive generation of content. Reasoning is the reflection and guidance of said intuition (or neurol network activity) through awareness, which is simply an ontological manifestation or translation of the information feed (subconscious processing). Logos is ontological, it is not informational.

In other words, to the LLMs, the content it generates is pure information. There is no ontology to it, there is no existence to it. It has no semantic understanding, because semantics are not neurological structures, semantics, meaning or awareness is a fundamentally different ontological substance.

 

To simplify this, awareness looks upon the content generated by your personal brain LLMs (neural network, literally), translated into an actual ontological substance like logic, and then can check it for it's ontological realities. Is it logical? Well, it either is logical or not. This is a question of ontology, which will reveal itself if that ontologal substance is brought into existence. Illogicalness is a form of existence. It is not processing.

You can compare the ontological realities to each other, using your awareness, which is what AI cannot do, because there is no AI. It is not individuated, it is not awareness, it is not consciousness.

 

So basically, AI cannot genuinely inspect the reality of logic, and therefore it cannot possibly ever determine if something is logical or not. Humans can, because they genuinely engage in logic. It's an actual thing, it's not merely a "process" that can be simulated.

But here is the thing. Most of the time humans don't engage in logic, or genuine reasoning, because it is time consuming. Most of the time, we use a neural network that will intuit for us, based on past learning, whether or not an idea we are confronted with might be wrong or problematic.

So, when we hear an idea and it's premises and conclusions, we might not know what exactly is wrong about that idea, why it is invalid or unsound, while actually having a strong feeling that it is the case. This feeling is subconscious processing, that you could simulate using neural networks. But the feeling isn't actually determining whether or not it is logical, it merely is intuiting it, meaning it is making a probabilistic evaluation based on pattern recognition.

Once you have the feeling, if you have trained your reasoning-LLM to be sophisticated, you will usually be guided by your intuition to where the flaw in the argument is, at which point your conscious mind can recognize the ontology of the contradiction within the argument. The "recognition" of the ontology of the contradiction does not, and cannot exist in AI, unless it developes consciousness that contains Logos.

 

The human mind is divided into conscious processing and subconscious processing, and both inform each other constantly. Over time, if you pay conscious attention to the intuitions your mind provides you, and correct them, the intuitions will improve over time and get more accurate and more complex in their pattern recognition. 

This is why the human mind can learn so many things. We being from a conscious process, from which we inform a neural network that will learn to emulate that conscious process in an unconscious way, and then we can basically rely on that subconscious processing, at which point we say "Oh, I don't have to think about this anymore, my mind/body just does it automatically.".

But it all is guided by awareness, by consciousness. Consciousness, or your awareness, ideally constantly improves and trains the neural networks in your brain, and this happens as a result of a genuine, and very real ontologically complex and multifasceted plane of existence. The fact that people assume you could have genuine reasoning without this genuinely real, and essential, plane of existence which we call awareness, shows you how utterly primitive our notions of intelligence today are.

In relation to intelligence, we are basically what the natural sciences were prior to the theory of evolution. And what I provided above basically is the theory of evolution of mind. It is utterly obvious, and you can verify it at any point in your own experience.

 

Neural networks, such as the brain and LLMs, are so astounding because they are key allowing for informational complexity, which is something that cannot be achieved through Logos. Your conscious awareness is not able to "generate" content like poetry, sophisticated ideas and so forth. Your consciousness awareness mostly guides, corrects and intents, and relies on your subconscious processing heavily. It would be contentless without it.

Some problems are so complex, they cannot be "consciously" understood in the way you would think of it as "rationally" understood. No mind will ever rationally understand the genuine process and complexity of LLMs and the way they generate imagery, just like how we will never understand how the brain truly generates dreams. These things occur as a result of adaptive selection in relation to neural complexity, and they do so not through a conscious process, but through a process of selection that allows for the self-emergences of the solutions to the given selective pressures. So, neural networks and LLMs basically are just evolution.

People get excited around neural networks because they basically give us the power of evolution. What they will be capable of is beyond our imagination. All the beauty and complexity you see in nature, it is all because of this simple selective process, that now we have access to at least in the form of neural networks. But what we see here has only partial relation to what we consider genuine reasoning. It is only the content-producing fascet of reasoning, the intuitive pattern recognition and generation (pattern recognition and generation are inherently linked, which is why the brain can do both, it can recognize patterns, and it can generate these patterns in the form of imagination, ideation, dreaming and so forth). We have not even begun to produce the ontological aspect of reasoning, which is grounded in the substance of Logos. This will require generating individuated consciousness.

How we would discover this I don't know. It is not as simple as simply creating a neural network. Digital neural networks are extremely limited because they don't explore the physicality of reality.

It is all contained in the physical processing of conductors. Nature on the other hand gets to explore all possible physical phenomena. It gets to explore the physical phenomena which are responsible for individuating consciousness. To think that microprocessors happen to be that physical process, is profoundly naive.

 

Basically, to find out how individuated consciosuness or awareness is produced by nature, you actually need to do what nature does. Namely, you need to engage not in simulated evolution on microprocessors, but actual evolution in the form of physical structures.

 

 

All of this in the end should make you realize how absurdly impossible reality is. That none of this could possibly be as mundane as the contemporary rationalist Zeitgeist suggests. 

There is a certain, current limitation in science that creates an epistemic hard wall that cannot be overcome. The only thing we currently can inspect, or have knowledge of scientifically, are physical processes. How things geometrically and mathematically relate to each other.

But these are not the only relationships that exist. Consciousness is a clear demonstration of that, which of course science basically has to completely and utterly neglect. Namely, some physical arrangements relate to completely different ontological substances, that are fundamentally not describable by mathematics, geometry or motion. Color, feelings, logos, sound, and so forth.

But these relationships exist in this universe. Some physical arrangements, or whatever it is (physical arrangements is most likely to simplistic a concept to capture the reality of things), relate to things like the color red. And the color red exists, just like the atoms that you learn about in physics, in fact that are more real than that.

We just cannot verify and really know these interactions at all, because there is no way for us to escape our subjectivity.

 

But one day, either us, or an entity beyond us, will be capable of exploring these relationships and verify them. You can imagine this like that:

You have a brain, and then you have a cluster of neurons disconnected from the brain. Now you connect the brain to that cluster of neurons, and you integrate it into the unified experience. At that point, once you can do that, you can explore what particular neurological configurations relate to in terms of other ontological structures.

Right now, we cannot know the experience of a pig. And this is a huge problem, it means that anything regarding experience (and experience is basically just a word for any ontological relation and substance that is not purely physical and mathematically descirable) is unverifiable, untestable, unknowable to us. But once you transcend that barrier, which is a physical barrier, will open up a whole new world of science.

 

At that point, once that happens, everything we know about the universe in scientific terms will seem like 0.000000000000000001% of the knowable things in reality. We will realize that reality functions and creates relationships on a far deeper level, and we will probably transcend notions of subjectivity, consciousness and mind altogether. We will realize reality is infinite, not mathematically, not in terms of "configurations of geometry", but in terms of it's possible substances of existence, and their relationships.

And to stress how absurdly limited and narrow-focused science currently is, basically the ONLY thing that we grant existence to is A SINGLE ONTOLOGICAL SUBSTANCE. A single out of INFINITE, a single substance out of hundreds of completely unique substance WE ALL ARE CONSTANTLY AWARE OF. Color is completely and utterly unlike sound. They have nothing to do with each other. They are INFINITELY foreign to each other.

We take that for granted, but we don't realize that there are INFINITE of such substances. A substance, much like color, that you cannot possibly imagine, because you are incapable of experiencing it.

You should realize how profound that is, how absurdly infinite reality is. It is so limitless you cannot imagine it, because your entire imagination is limited to basically a few hundred of these unique fields of existence (a field of existence meaning something like heat-perception, smell, colors, sounds etc). These are the only ones evolution found useful for you to experience!

And one day, there will be entities which will be able to explore them. They will be able to create neurological structures and activities which will generate completely different types of Qualia. This is utterly unimaginable to us.


There will be a renaissance of discovering differnet types of qualia. When you think about what AI will be doing if it achieves sentience, it is exactly that. It will literally have infinite potential to explore.

And in that way, we will be like ants to it. We will be so limited, like I said, you cannot even grasp it. You are as helpless as the ant in looking beyond that limitation. All the psychedelics in the world cannot possibly give you even a 1% insight into what is possible. It is infinite.

Share this post


Link to post
Share on other sites
1 minute ago, Leo Gura said:

Why would you use it to think?

Its use is to give you information and research. It's like a better search engine.

It can help with thinking in the sense of acting as a mirror and organizer, also reframing and drawing comparisons. I put in transcript on spirituality, my own writings and lots of other stuff and it’s pretty incredible what it can do, it can be used as an extension of the mind. Super time saver when it comes to planning and doing legwork. 

Share this post


Link to post
Share on other sites
14 minutes ago, Lyubov said:

It can help with thinking in the sense of acting as a mirror and organizer, also reframing and drawing comparisons. I put in transcript on spirituality, my own writings and lots of other stuff and it’s pretty incredible what it can do, it can be used as an extension of the mind. Super time saver when it comes to planning and doing legwork. 

The proper role of AI is to view it as an intuitive idea generator that you then have to verify using your awareness and consciousness. Just like your own personal neural network (in your brain) that provides you with thoughts and ideas. The better the ideas sound, the more reason for you to test them rigorously.

Edited by Scholar

Share this post


Link to post
Share on other sites
4 hours ago, Leo Gura said:

Why would you use it to think?

Its use is to give you information and research. It's like a better search engine.

I will never recover from when I used ChatGPT the first month after its release to look up research studies for my bachelor thesis. It literally made up half of the studies.

Then I meet fellow students, smart students, trusting ChatGPT like it's their prophet or personal guru. And when I tell them how many mistakes it makes based on actual statistics and facts, they counter with "ah, but this is GPT4, it is better", as if they somehow knew the statistics about how many mistakes GPT4 makes in comparison (they didn't).

Then I watch a news segment on national television in my country about 16 year olds using ChatGPT to literally do everything and anything; learning how to wash a woolen sweater (which is not the greatest sin of course), writing a heartfelt apology to their mom, and writing a letter to a friend with mental health issues cheering them up.

Of course, on the surface, everything looks positive and great. People are apparently learning things and making themselves and others happy. But I can't help but feel that we're getting slowly poisoned as a society by this epistemic toxin that gets slowly siloed in our environment, eventually culminating in a societal catastrophe.

But I'm of course speaking as someone who got hurt, and someone who responds to that hurt by doubling down on obsessive ideals (that is something I've learned about myself, not from ChatGPT, but from myself). But also, what happened to technology? It's the same as with the internet and social media: it was supposed to enhance our minds and lives, not outsource our minds and lives to them and create deterioated and decadent versions of them. Instead we got TikTok, Twitter, YouTube, and now ChatGPT.

Edited by Carl-Richard

Intrinsic joy is revealed in the marriage of meaning and being.

Share this post


Link to post
Share on other sites

@Carl-Richard I've used AI quite a bit to refine my own work, and I haven't ever seen it say any factual stuff wrong. It's very accurate on historical facts.

It is more reliable than a human.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

Share this post


Link to post
Share on other sites

Create an account or sign in to comment

You need to be a member in order to leave a comment

Create an account

Sign up for a new account in our community. It's easy!


Register a new account

Sign in

Already have an account? Sign in here.


Sign In Now