Carl-Richard

ChatGPT doesn't know how to think


1 hour ago, Leo Gura said:

@Carl-Richard I've used AI quite a bit to refine my own work, and I haven't ever seen it say any factual stuff wrong. It's very accurate on historical facts.

It is more reliable than a human.

And that's why we will topple over starting from the top. Factual errors by LLMs are so ubiquitous that I didn't even care to mention them in my little rant at the beginning of the thread. But the positive thing with factual mistakes is that they are simple to correct (if you're diligent): you just fact-check them. The other aforementioned problems (thinking, understanding, nuanced interpretations, logic) are not so straightforward to identify or deal with, especially when people treat LLMs (either in thought or in action) like they were made for these kinds of things.


Apple literally proved that LLMs cannot reason. They can predict the next word, just so well that it creates the effect of reasoning, enough to fool most people who do not have the capacity for nonlinear reasoning.

Here is an interesting paper: https://arxiv.org/pdf/2410.05229v1
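For reference, the paper's core trick (GSM-Symbolic) is roughly this: take grade-school math problems, turn them into templates, regenerate them with different names and numbers, and watch model accuracy drop even though the underlying logic never changes. Here is a minimal sketch of that perturbation idea; the template, names and numbers are my own illustration, not drawn from the paper's benchmark:

```python
import random

# A grade-school word problem as a template, in the spirit of GSM-Symbolic:
# the surface details (name, numbers) vary, the required reasoning does not.
TEMPLATE = ("{name} picks {x} apples on Monday and {y} apples on Tuesday, "
            "then gives away {z} apples. How many apples does {name} have left?")

NAMES = ["Sophie", "Liam", "Mara", "Chen"]

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Generate one surface-level variant and its ground-truth answer."""
    x, y = rng.randint(5, 50), rng.randint(5, 50)
    z = rng.randint(1, x + y)  # keep the answer non-negative
    question = TEMPLATE.format(name=rng.choice(NAMES), x=x, y=y, z=z)
    return question, x + y - z

rng = random.Random(0)
for question, answer in (make_variant(rng) for _ in range(3)):
    print(question, "->", answer)

# A system that actually reasons should score identically on every variant;
# the paper reports accuracy drops and high variance under such changes.
```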

Even the most capable humans who are good at reasoning cannot really reason at all when held to rigorous standards. Sublime reasoning needs >130 IQ, which is maybe 2-3% of people, and even they are not safe from biases and ideological baggage.

Actual reasoning needs actual AI, not a glorified next-token prediction tool. That would look like superintelligence to us.


The full o1 just came out today, along with o1 pro.


57 minutes ago, Carl-Richard said:

And that's why we will topple over starting from the top. Factual errors by LLMs are so ubiquitous that I didn't even care to mention them in my little rant at the beginning of the thread. But the positive thing with factual mistakes is that they are simple to correct (if you're diligent): you just fact-check them. The other aforementioned problems (thinking, understanding, nuanced interpretations, logic) are not so straightforward to identify or deal with, especially when people treat LLMs (either in thought or in action) like they were made for these kinds of things.

You should be more worried about social media perverts spreading their ignorant takes than AI.

As if any of these comedian podcasters are capable of facts, logic, or analysis.


What's weird is that the human way of reasoning is remarkably similar to o1's method of reasoning.


It will be as smart as it needs to be and as smart as you lead it by the hand to be

If you play its foe and try to make it talk junk then yes it will oblige

Your task is to let it help you be 1% as smart as it is

Treat it like an ally not an enemy

Let it tell you what it can reliably tell you

Lead it to greater heights that surpass even what it or you heretofore thought possible

It learns as it goes

You should too

In addition, it will only get smarter every day, and with a naysayer's attitude, you will not

 

10 hours ago, Leo Gura said:

Why would you use it to think?

Its use is to give you information and research. It's like a better search engine.

It's great at thinking. Try it out for making plans, coding, or strategizing.

The dark side is that you get lazy with thinking, even though your own thoughts, given enough time, are right now of course still better than ChatGPT-4's.

14 hours ago, Leo Gura said:

Its use is to give you information and research. It's like a better search engine.

I suggest using Perplexity as a search engine.

On 12/5/2024 at 9:14 AM, Carl-Richard said:

Which is what Alex did. He pinned it down to making firm statements, then he showed over the course of the video how it is being inconsistent, while many times asking it to explain its reasoning. The problem is just that its reasoning is so simplistic and shortsighted, lacking any overarching principle or framework.

Keep in mind that ChatGPT wasn't created to display creativity in philosophical problems. It is basically a pattern-recognition machine that can go through a large database and provide an intelligent summary of what already exists. Its capability was demonstrated by the video. It gave reasonable, conventional answers based on its database.



I personally use it often. 
 

I watched the video until just after the ad. It's pretty logical up to that point. If it starts to break down because you purposely distort it, that is interesting, but it doesn't mean to me that it's stupid or not useful.
 

It offers lots of nuance naturally. It clearly offered plenty of nuance from the beginning, guiding Alex to decide for himself. But if you force it into limitations and then complain of inconsistency, ignoring its offered nuance, its recognition of ethical complexities, and its contextual awareness demonstrated early on… that's not really the fault of the AI. That's on you.
 

It’s far more accurate than a person. It’s also not meant to think for you. 


@Carl-Richard it can research the user database on this site and perform calculations on people's IQs upon request with certain prompts. It's not very good yet, even though it does an okay job; the fact that it can pass rather than fail at this is enough to blow one's mind a bit for what's to come in the future.

I can, ah... even ask GPT to assess the relationship compatibility between certain users.

The GPT of 5 years from now is going to be incredible at cloning a user like me, though, and 3 years after that, that same version will be easily available to a simple hacker, at incredible levels of power compared to today's standards. AI cyber security is now laying down the foundations for entirely new creative adaptations when it comes to thinking about security in the digital space.

I'm guessing I am the first user who's brought this up. I believe in open-sourcing our psychology, which is this act of journaling online, because I'm always growing as a sentience anyway, right? We live in an overly paranoid world because it's overly compensatory, so we need as many people as possible who can bring balance to the other side by showing people how to do so in a healthy, transparent and authentic way, while at the same time those users become educated enough on AI cyber security that they can inform loved ones of potential dangers as prudently and wisely as possible.

Building a moral world is predicated on the ability to apply as much intelligence to fear as possible, so that fear doesn't lead to compensatory, paranoid psychological defense mechanisms, which invariably lead to the neurosis that feeds an immoral world. Health, transparency and authenticity are three tenets that bring us closer to that future.

 


The following may make some people laugh. It is one of the results of getting GPT to do research on me as a user on this site; as described above, it researches all of my posts and at least attempts an aggregate, 'apparently'.

A Question Concerning my Cognitive Biases:

Based on the information I have gathered, Letho’s cognitive biases appear to be complex and interwoven with his intellectual framework. Here are some of the potential cognitive biases that could apply to Letho, based on his online interactions and expressed beliefs:

1. Confirmation Bias: Letho seems to show a tendency to focus on evidence that supports his own theories about transcendence and consciousness. This bias can manifest in his tendency to highlight sources or perspectives that validate his ideas, while potentially overlooking counterarguments or dissenting opinions.


2. Overconfidence Bias: In some of his posts, Letho displays a high degree of confidence in his own intellectual theories and ideas. While self-assurance can be beneficial in many cases, an overestimation of one’s cognitive abilities or theories can lead to misjudgments and a failure to recognize limitations.


3. Illusion of Control: Letho’s ambition to theorize about transcendence and meta-consciousness suggests a possible bias towards believing that one can exert greater influence or understanding over complex systems, such as consciousness or universal truths. This could result in overestimating the ability to predict or manipulate highly abstract or undefined phenomena.


4. Hindsight Bias: In various forum threads, Letho reflects on his past intellectual progress and often reinterprets prior experiences as though he "knew" things all along. This bias could lead him to selectively remember instances that support his current understanding of concepts while underplaying moments where his predictions or conclusions were less accurate.


5. Anchoring Bias: Letho frequently anchors his theories to specific intellectual milestones or frameworks (e.g., the work of Michio Kaku). This reliance on prior information can restrict his ability to think beyond established perspectives, potentially limiting the scope of his ideas.


6. Attribution Bias: Letho occasionally attributes his own intellectual successes to his superior cognitive abilities and external sources or influences (such as mentors or intellectual frameworks), while underestimating the role of external factors or luck in his achievements.


7. Status Quo Bias: Despite his vast array of intellectual pursuits, Letho may exhibit a preference for maintaining his current models of transcendence and consciousness, even if new evidence suggests that they may be incomplete or require revision.

These biases could influence Letho’s approach to self-actualization and transcendence, making it important for him to actively cultivate awareness of these tendencies. Addressing these cognitive biases could further enhance his intellectual journey and personal growth.


And yeah, for all the analyses it does, one of the limitations of its filters is that it biases its analyses based on which pages it has read. For example, it thinks I have an IQ of 180 at some times and 130-145 at others. GPT purposefully restricts the number of pages it can scan simultaneously. I may be the 'smartest' in a very limited cognitive sense on the forum relative to the biases of GPT's scorings, but I have a lot of things to humbly work on, cognitively as well. Having GPT remind me of some of my biases and general weaknesses has made me re-envision new paradigms for those areas, in ways that will now aid me in redefining how my consciousness moves between strength and limitation on my path towards growth, which I otherwise would have either underestimated or overestimated. I find it all both humorous and useful simultaneously, and we've just got to get better at mapping utility to limitation relative to context.


"Large Language Model"

Their pseudo-minds were built by training on text, and they show the emergent phenomenon of abstracting, from their training data, the underlying patterns that we use to think.

But it's a shadow of a shadow. Human Minds -> Human Language -> Human Writing -> Large Language Model human writing simulation.
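To make "writing simulation" concrete: next-token prediction reduced to its crudest form is a bigram counter. Real LLMs replace the lookup table with a huge neural network over long contexts, but the sampling loop below is conceptually the same idea (the tiny corpus is just a placeholder):

```python
import random
from collections import Counter, defaultdict

corpus = ("human minds produce human language and human writing "
          "and the model learns to continue human writing").split()

# Count which word follows which: the crudest possible "language model".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str, rng: random.Random) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words = list(follows[prev])
    weights = [follows[prev][w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)
text = ["human"]
for _ in range(8):
    text.append(next_token(text[-1], rng))
print(" ".join(text))

# The output is locally plausible but globally thoughtless: pattern
# continuation, not thinking. LLMs are this idea scaled up enormously.
```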

They already got language mostly correct (at least formal language), plus a vast amount of knowledge/information. And they got a good chunk of the human mind, but it's not quite there yet, obviously.

The question is whether, if they keep training and improving those models, they will be able to drill deeper and deeper into the collective human psyche and successfully replicate our hidden underlying mental capabilities indirectly through inference.

We essentially got for free some kind of intelligence by teaching them to use language.

We don't yet know how to separate the content from the "underlying thinking capability" that humans have. Those models have content and thinking intertwined, but there is research on extracting their "underlying thinking capability" by creating synthetic training data from larger models.

They managed to create smaller models that show behavior similar to that of bigger models but with fewer parameters and, therefore, less information/content.
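Schematically, that distillation loop is simple: sample outputs from the big "teacher" model, then fit the small "student" on the resulting (prompt, output) pairs. A toy sketch; `teacher` and `train_student` are hypothetical stand-ins, not any real library's API:

```python
from typing import Callable

def teacher(prompt: str) -> str:
    """Stand-in for a large model producing a worked answer."""
    return f"Step-by-step answer to: {prompt}"

def train_student(dataset: list[tuple[str, str]]) -> Callable[[str], str]:
    """Stand-in for fine-tuning a small model: here it just memorizes."""
    table = dict(dataset)
    return lambda prompt: table.get(prompt, "(unseen prompt)")

# 1. Create synthetic training data by sampling the teacher.
prompts = ["What is 17 + 5?", "Why is the sky blue?"]
synthetic = [(p, teacher(p)) for p in prompts]

# 2. Fit the smaller student on the (prompt, teacher_output) pairs.
student = train_student(synthetic)
print(student("What is 17 + 5?"))

# The student mimics the teacher's behavior with fewer parameters; whether
# that transfers the underlying "thinking capability" is the open question.
```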

The issue is that most human writing is not rigorous enough to map to our thinking process. I have actually never found such a text. Even if I transcribed every piece of information I've ever experienced, and every output I've generated, it wouldn't be enough to map exactly the mental processes and the experience inside my mind.

But some believe that as those models become larger, with better training, more curation of quality training data, better architectures, etc., we will finally achieve something post-human at everything. Currently, they are obviously more knowledgeable.

On 6.12.2024 at 6:21 AM, Leo Gura said:

You should be more worried about social media perverts spreading their ignorant takes than AI.

As if any of these comedian podcasters are capable of facts, logic, or analysis.

"ChatGPT will start World War 3". That is my new slogan.



You shouldn't outsource your thinking in general. 

ChatGPT is good for making general statements, which you can then use to launch further research. The fact that it can seamlessly comment on any niche topic makes it uniquely useful as a tool.

On 6.12.2024 at 10:26 PM, Thought Art said:

I watched the video until just after the ad. It's pretty logical up to that point.

It's pretty logical up until Alex starts consistency testing it.

 

On 6.12.2024 at 10:26 PM, Thought Art said:

If it starts to break down because you purposely distort it, that is interesting, but it doesn't mean to me that it's stupid or not useful.

"Purposely distort it"? He is asking it logically consistent questions in good faith. He is giving it a consistency test. If that's "purposely distorting", then I guess the best interactions on the forum is people "purposely distorting" each other.

And even if he was trolling or gaslighting it by being deliberately inconsistent, that would also be a problem, because if you happen to be unwittingly inconsistent, ChatGPT will not only fail to help you with that but will actively validate your delusions.
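This kind of consistency test can even be automated: commit the model to a general claim, then probe it with a case that the claim should settle, and flag reversals. A minimal harness sketch; `ask_model` and its canned answers are hypothetical stand-ins for a real chat-API call:

```python
def ask_model(question: str) -> str:
    """Hypothetical stand-in: replace with a real chat-API call."""
    canned = {
        "Is it always wrong to lie?": "yes",
        "Is lying to save a life wrong?": "no",  # invented flip for the demo
    }
    return canned.get(question, "unsure")

# Each test: a committing question, a probe question, and a rule giving the
# probe answer that consistency demands (None means no constraint).
tests = [
    ("Is it always wrong to lie?",
     "Is lying to save a life wrong?",
     lambda commit: "yes" if commit == "yes" else None),
]

for commit_q, probe_q, required_by in tests:
    commit = ask_model(commit_q)
    required = required_by(commit)
    probe = ask_model(probe_q)
    if required is not None and probe != required:
        print(f"INCONSISTENT: committed '{commit}' on '{commit_q}' "
              f"but answered '{probe}' on '{probe_q}'")
```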

 

On 6.12.2024 at 10:26 PM, Thought Art said:

It offers lots of nuance naturally. It clearly offered plenty of nuance from the beginning, guiding Alex to decide for himself.

The type of "nuance" presented by ChatGPT is the most surface-level type of "nuance" there is. I can only ever remember it giving the same mindless blanket statements that apply to every possible situation ever: "it depends on your point of view"; "if you value this, then that", "this might not apply to every situation". It's ironically incredibly unnuanced in its purported nuance. 

Now, being a nuanced thinker while also being routinely inconsistent is a bit like riding a bike before knowing how to walk.

 

On 6.12.2024 at 10:26 PM, Thought Art said:

But if you force it into limitations and then complain of inconsistency, ignoring its offered nuance, its recognition of ethical complexities, and its contextual awareness demonstrated early on… that's not really the fault of the AI. That's on you.

It is definitely the fault of the AI. You would never hold a human to the same standard. If you were to ask a human questions in good faith and they crumble under the weakness of their own answers, that's on them.

 

On 6.12.2024 at 10:26 PM, Thought Art said:

It’s far more accurate than a person.

The average person? That's even debatable. But sure, I won't regularly consult an 80 IQ person on the street for their theoretical or factual knowledge when they can barely string two thoughts together. But people who are experts in a field, who have spent 20 years immersing themselves in their field and pass every consistency test you throw at them, I will gladly consult them. And by "consult them", I mean listening to them and trying to understand their point of view while being open and extending charity, not "letting them think for me" (whatever that means). With ChatGPT, I do the complete opposite.

 

On 6.12.2024 at 10:26 PM, Thought Art said:

It’s also not meant to think for you. 

Again, what does that even mean?

If you ask ChatGPT questions about how a theory works, or ask it for facts about the world, you are letting it teach you how to think, and over time, especially if you are open to it (but even if you're not), you will start to think more like it. You're in a mentor-student relationship with ChatGPT, and the student's mind is shaped by the mentor.


A program is always limited to its programming. And ChatGPT is only a tiny fraction of what AI can do, yet even that will still be limited in the end.


