Carl-Richard

Conflating knowledge with intelligence

109 posts in this topic

10 hours ago, Nilsi said:

This is like saying, "the faster you move through an art museum, the faster you can go do other things."

There is actually a rather fitting Drake lyric about this mindset of spectacle:

"I know a girl whose one goal was to visit Rome
Then she finally got to Rome
And all she did was post pictures for people at home
'Cause all that mattered was impressin' everybody she's known"

For once, I managed to decode one of @Reciprocality's posts. He is not giving a prescription that you should try to be as generalistic as possible and not engage your mind in detailed, concrete knowledge. He is basically re-phrasing what I said in his own words: intelligence is a generalistic thing, and the more generalistic you are, the more you're able to generalize.

 

4 hours ago, Nilsi said:

To be fair, you’re not making it easy to grasp what you’re getting at

I agree. He is a tough nut to crack. I generally (using my generalistic abilities) avoid trying to understand his posts, but today I had the impulse, stamina and luck to try and succeed. That is the strength of the "neuroticism" by the way. It sometimes throws you a curve ball that you manage to deliver right in the corner of the net.

On that note, I'm about to talk to my potential advisor about a potential project on mindfulness and mind-wandering/"neuroticism" (and many other ideas). She is coincidentally a researcher on mindfulness and the leader of the institute where I got most of my education, so that's fun :D


Intrinsic joy is revealed in the marriage of meaning and being.


@Leo Gura

On 2024-05-18 at 2:10 AM, Leo Gura said:

Yes, I recently realized this myself.

Most people simply aren't interested in metaphysical or philosophical questions, but they apply intelligence in other domains like business.

Yes, I also agree. Philosophy, sometimes metaphysics, and mathematics (especially equations and formulas) I find mostly boring and unimportant. But when it comes to language and the arts, like drawing and music, I'm pretty intelligent, and visualizing stuff is easy and interesting to me.

Maybe I draw what I see and know, not just what I see and don't know.


@Carl-Richard

On 2024-05-18 at 0:54 AM, Carl-Richard said:

Bill Burr has said something like "some people think you're dumb just because you don't share the same interests as them". The concept of conflating knowledge with intelligence has become really clear to me over the last year or so.

There have been many times where someone else didn't seem to understand what I was talking about, and it somehow contributed to them thinking I'm smart. Conversely, I tend to feel the same way when I don't understand somebody else. I think there is a mental heuristic that tells you "if you don't understand something, it must be due to your lack of innate abilities", while in reality, it's probably much more about your lack of experience in a certain area; contextual factors. It has really opened my mind about how I view "smart people" and how much of it probably boils down to experience.

While this mostly applies to general knowledge, you can also observe it on a micro level in single conversations. For example, if you're talking to a group of people and you zone out for a few seconds, you might find yourself not understanding what is being said, and you might feel quite dumb for the rest of the conversation. But the moment you regain attention and immersion in the conversation, you understand it and you no longer feel like a dunce. Here, the knowledge about that specific conversation was lacking. But again, it also applies to general knowledge.

As for general knowledge, there is one particular example that sticks out. So I'm currently taking a statistics class, and I attend as many lectures as I can. I'm in a group project with five other people, and it's generally just me and another person who attend the lectures needed to understand the assignments. Not surprisingly, the other people are seemingly amazed that we're able to understand this stuff, thinking we're so much smarter than them and that this is why we're carrying the group. But in reality, the true difference is that we went to the lectures and they didn't.

Now, you can argue that we're the ones attending the lectures because we have the innate abilities to understand what is being taught, while the others don't and therefore stopped attending for that reason. While this could be true, it could also just be that they never attended many lectures and therefore never built up the momentum of a continuous progression in knowledge, and if they had done so, they would've gotten a better understanding. After all, they admit that attending the lectures helps them understand it at least a little better. And it's not like me and the other person understand everything 100% either. When we're working in the group, we're constantly learning new things, making mistakes, getting stuck, having insights, making adjustments. We feel stupid all the time, but we still work through it.

Truly, if you want to point to an innate factor that maybe differs significantly between us, it's conscientiousness, especially the industriousness aspect (how much you're willing to work). But even that can be learned to a large extent. I had to consciously learn to be this conscientious, or at least how to manifest it in my actions to this extent. Regardless, at least in this situation, it suggests the main deciding factor is how much you work and the experience you gain rather than innate abilities. And according to this mathematician, if you're behind when comparing yourself to another person in your class, it only takes two weeks to catch up. In other words, when taking into account that you're in the same class, which requires a certain level of skill to get into (and which is especially true for graduate-level classes), the differences in outcome are virtually only a question of time and effort (attention, attendance).

 

 

This is somewhat related to how sophistry works. When somebody makes you believe that they understand something but you don't understand them, you go by their level of conviction and other superficial markers like fluency and verbal richness to determine if they're actually being coherent. It's basically like a back-up plan for when you don't understand someone but you still need to determine if the person can be trusted or not. And this is a very necessary thing to do, because it's often the case that you simply don't understand someone but they're being 100% coherent.

In fact, this assumption is a prerequisite of most learning. You need to trust in what you're learning before you actually learn it, and if you stop at the first sign of incoherence/conflict/friction, you won't learn much about anything at all. So ironically, you need to be somewhat complacent with sophistry in order to actually become knowledgeable and to be able to spot sophistry when it's truly happening. Knowledge is a catch-22. And ironically, the people in my group who don't attend the lectures need to become complacent with sophistry when it truly matters (during the lectures), instead of when they're in the group listening to those who attended the lectures.

I agree. This is basically the idea behind the 9 types of intelligence, and the difference between specialized and generalized intelligence. There's also attention span, which IMO plays a factor as well. I can easily zone out of very technical, equation-heavy lectures, but if the lectures are highly visual and align with my biases and interests, I tend not to zone out too much. Even in areas where I'm pretty competent, say chess or drawing, I have enough experience that I can zone out and the output and process aren't affected at all.


Posted (edited)

45 minutes ago, Carl-Richard said:

He is a tough nut to crack

Use an AI.

I'm using AI to better understand you all. 😁

Edited by Nemra


By the way, a quick tip for anyone who wants to "increase" their intelligence, or more practically improve their work: start high-intensity cardio (e.g. sprint training). If you want proof, just re-read the thread and see how much easier it is to read now (I revised it after my sprint training :D). (Of course, a confounding factor is that I slept really badly the day I wrote it and ate really bad food the day before; thank you, 17th of May, our national holiday ^_^).

I might as well drop this one in here as well:

 


Intrinsic joy is revealed in the marriage of meaning and being.

11 minutes ago, Nemra said:

Use an AI.

I'm using AI to better understand you all. 😁

I might try that actually. Even if it provides misinformation, it's better than nothing xD


Intrinsic joy is revealed in the marriage of meaning and being.


Posted (edited)

@Carl-Richard

If the AI provides you with misinformation, say to it: Please be as unbiased as you can be. 😄

Some AIs may give you misinformation, some may be limited, and some may give you too much. I'm still learning and testing its potential.

I think the most important and amazing thing is its comprehensive ability.

Edited by Nemra


Posted (edited)

28 minutes ago, Nemra said:

If the AI provides you with misinformation, say to it: Please be as unbiased as you can be. 😄

Hehe, well it's not just a problem of bias, but of incoherence, irrelevance and delusion ("hallucinations"). It's like it's building sentences out of wooden blocks with words on them, and sometimes the words it chooses are from the wrong bucket, but the overall sentence looks relatively fine. So there is an additional deceptive element to the misinformation, which is scary. You can't use the back-up plan for determining coherence in humans on an AI, because it is always just as fluent and verbally rich as when it's providing accurate information.
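To make the wooden-blocks metaphor a bit more concrete, here's a toy sketch (nothing like a real language model; the words and probabilities are completely made up) of the underlying mechanism: the model samples each next word from a probability distribution, so a factually wrong word with decent probability still produces a perfectly fluent-looking sentence.

```python
import random

# Toy illustration: at each step the "model" assigns probabilities to
# candidate next words and samples one. A wrong word drawn from the
# distribution still yields a smooth sentence, which is why
# hallucinations read just as fluently as accurate output.

def sample_next(candidates, rng):
    """Sample one word from (word, probability) pairs."""
    words, probs = zip(*candidates)
    return rng.choices(words, weights=probs, k=1)[0]

rng = random.Random(0)
candidates = [("Paris", 0.6), ("Lyon", 0.25), ("Rome", 0.15)]  # made-up numbers
sentence = "The capital of France is " + sample_next(candidates, rng)
# Most draws give "Paris", but "Rome" comes out some of the time, and
# the sentence is equally fluent either way.
```

The point is that fluency is a property of the sentence template, not of the fact, so fluency can't serve as a truth signal the way it (partially) does with humans.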

Edited by Carl-Richard

Intrinsic joy is revealed in the marriage of meaning and being.


Posted (edited)

On 5/18/2024 at 3:16 PM, Carl-Richard said:

Interesting example. It brings up an interesting topic, and I'm curious what you think: Tyler1 the streamer recently hit 1960 elo in Chess, and he only started playing under a year ago (infamously at 200 elo lol), which is outright insanity. How do we explain such an amazing feat?

Chess is a weird sport in that it's often associated with raw intelligence, but there is also the notion that you only get really good if you started playing when you were really young (implying that experience is crucial), and there is also tons of concrete knowledge involved (openings, remembering games of other players, etc.).

That said, these notions might be somewhat outdated due to Chess becoming increasingly digitalized, where you can endlessly play games over and over, practice Chess puzzles, analyze your games, etc. And that is partially what I think Tyler1 has capitalized on: he is a video game streamer who is used to grinding games for multiple hours a day, so when he started fixating on Chess, it's not surprising that he would experience some great results compared to an average person with a job or who came up during the pre-digitalized era. But 1960 elo in 9 months? Surely he must have some intellectual gift, right?

So it raises the question: is his incredible 1960 elo in 9 months mostly due to his intelligence, or mostly due to his massive grinding schedule and clever use of skill-improving online technology (experience, knowledge)? I don't remember ever hearing Tyler1 described as an intellectual genius; if anything, quite the contrary. Could anybody else achieve something similar if they put in the same number of hours and the same ferocious attention?

 

 

Well, there are some people who can see the entire board at once and think many moves ahead. In chess, even if you can think 3 moves ahead, you have a strong advantage. Do you think it's possible that anyone can learn to think several moves ahead? Yes. But then you get to the next question: how many moves ahead can you think... 4, 5... infinite? In order to see an infinite number of moves ahead, you would have to have infinite intelligence.
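In game-programming terms, "thinking N moves ahead" is search depth, and the work grows roughly exponentially with each extra move, which is one way to see why unbounded look-ahead is out of reach. A minimal minimax sketch over a hypothetical toy game tree (not a chess engine; positions and scores are invented):

```python
# Minimal minimax sketch. Each node is either a numeric leaf score or a
# list of child nodes. Searching N plies deep examines on the order of
# branching_factor ** N positions, so each extra move of look-ahead is
# exponentially more expensive.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: the position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A 2-ply tree with branching factor 2: the maximizer picks a branch,
# then the minimizer picks the worst leaf (for us) inside it.
tree = [[3, 5], [2, 9]]
best = minimax(tree)  # max(min(3, 5), min(2, 9)) = max(3, 2) = 3
```

Chess has a branching factor of roughly 30-40 legal moves per position, so even a few extra plies multiply the search space enormously; humans and engines alike cope by pruning, not by raw depth.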

Edited by Inliytened1

 

Wisdom.  Truth.  Love.

5 hours ago, Inliytened1 said:

In order to see an infinite number of moves ahead you would have to have infinite intelligence. 

What would be the difference between a person who has infinite memory vs a person who has infinite intelligence?

12 hours ago, LordFall said:

That's a good point. I think those factual mistakes will be fixed but I think the even greater fool's trap is it will remove the need to think entirely. Already AI has read every book released so you could ask it for book summaries of all the books on Leo's list for example. 

I don't think it will remove the need to think and to understand things.

Your level of understanding of a certain thing will determine the quality of questions you can ask the AI, and if your questions are vague or meaningless, the AI won't be able to provide the answers you might be looking for, or you won't even realize you're asking the wrong questions in the first place.

The more understanding you have and the more nuance you can recognize, the better and more specific questions you will be able to ask the AI, and that will elevate the AI's usefulness and efficiency.


Posted (edited)

Didn't read the whole thread but here's my take. 

Knowledge = memory and data

Intellect = how good you are at using the data in your memory to your advantage; the ability to analyze and rationalize.

Intelligence = is beyond memory and intellect. It's the ability to know what is true and right without any use of reasoning or data. Without intelligence intellect can be used in very stupid ways. 

______________

It's impossible for AI to develop the third kind of intelligence, imo. A computer will always run on data and analysis of that data only.

Edited by Salvijus

I simply am. You simply are. We are The Same One forever. Let us join in Glory. 


Posted (edited)

10 hours ago, Carl-Richard said:

So there is an additional deceptive element to the misinformation

10 hours ago, Carl-Richard said:

You can't use the back-up plan for determining coherence in humans on an AI, because they're always as fluent and verbally rich as they are when providing accurate information.

Can you trust humans to be accurate?

It's not that I believe in AI; rather, I see it as a reflection of human biases, which, interestingly, makes it more unbiased than humans.

Edited by Nemra


Posted (edited)

3 hours ago, Nemra said:

Can you trust humans to be accurate?

At this stage, at least more than an AI. Look up the hallucination rates in AI language models. They're staggering. And again, you have more ways to uncover inaccuracies or untruthfulness in humans. Humans generally care about being truthful; AIs don't (they simply happen to be generally truthful if they're built and trained well). And when humans aren't being truthful, you have many ways to uncover the untruthfulness. A person might stumble over their words, make awkward pauses, blush, avert their gaze, change their posture in a weird way, start fidgeting, become restless or uneasy, become blunt or defensive, change their vocal tone, or become emotional or insecure. An AI doesn't do that. I already mentioned markers like variations in fluency and verbal richness (untruthfulness often decreases these); AI doesn't have such variations. Additionally, you can check which biases and incentives a person has (e.g. ideological affiliations, professional affiliations, economic incentives), and you can judge their character and past actions (e.g. positions of authority which require trust, general reputation, times caught lying). AI doesn't have that (except for past actions).

Edited by Carl-Richard

Intrinsic joy is revealed in the marriage of meaning and being.


Posted (edited)

10 minutes ago, Carl-Richard said:

Humans generally care about being truthful

Ahahahahahahha...

AI is more truthful than humans.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


Posted (edited)

2 hours ago, Leo Gura said:

Ahahahahahahha...

AI is more truthful than humans.

 

Quote

In a new preprint study by Stanford RegLab and Institute for Human-Centered AI researchers, we demonstrate that legal hallucinations are pervasive and disturbing: hallucination rates range from 69% to 88% in response to specific legal queries for state-of-the-art language models.

https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive

I hope you're not using it to learn about law 😝

Edited by Carl-Richard

Intrinsic joy is revealed in the marriage of meaning and being.


Posted (edited)

2 hours ago, Carl-Richard said:

A person might stumble over their words, make awkward pauses, blush, avert their gaze, change their posture in a weird way, start fidgeting, become restless or uneasy, become blunt or defensive, change their vocal tone, or become emotional or insecure.

That doesn't automatically mean a person doesn't care about the truth. Being defensive and/or blunt can drain your energy so much that caring about the truth stops being a reality for that person. Also, a person can be emotionless and still not care about the truth.

2 hours ago, Carl-Richard said:

AI doesn't have such variations. Additionally, you can check which biases and incentives the person has (e.g. ideological affiliations, professional affiliations, economic incentives) and you can judge their character and past actions (e.g. positions of authority which require trust, general reputation, times caught lying).

AIs are exposed to multiple perspectives compared to humans. I agree that AIs don't "experience" the world like humans, but that doesn't mean they are somehow untruthful because of that. Loads of information are literally injected into them. AI is a non-self-aware, data-driven entity. It's like an actor, but in this case, the actor has forgotten that he's acting.

The control that humans can have over the information an AI sees can be problematic. If AIs are created in such a way that truth is an important thing, then it's our problem if we don't understand it, or if it's presented in a way that sounds like gibberish to us.

Edited by Nemra


Posted (edited)

9 minutes ago, Nemra said:

That doesn't automatically make a person not care about the truth. Although being defensive and/or blunt can drain your energy so much, caring about the truth might not be a reality for a person. Also, a person can be emotionless and not care about the truth.

They're signs, not proofs. AI doesn't have those signs.

 

9 minutes ago, Nemra said:

AIs are exposed to multiple perspectives compared to humans. I agree that AIs don't "experience" the world like humans, but that doesn't mean they are somehow untruthful because of that. Loads of information are literally injected into them. AI is a non-self-aware, data-driven entity. It's like an actor, but in this case, the actor has forgotten that he's acting.

Look up the hallucination rates.

Edited by Carl-Richard

Intrinsic joy is revealed in the marriage of meaning and being.


@Carl-Richard

Me:

Give me AI hallucination rates

Gemini:

AI hallucination rates vary depending on the model, the complexity of the task, and the quality of the training data. However, recent studies have shown that even the most advanced models can still hallucinate a significant portion of the time.

Here are some estimates of AI hallucination rates:

  • OpenAI's GPT-4: Has the lowest hallucination rate among the tested models, estimated at around 3%.
  • Other large language models: Hallucination rates can range from 3% to 10% of responses.
  • Chatbots: Analysts estimate that chatbots hallucinate as much as 27% of the time, with factual errors present in 46% of their responses.

It's important to note that these are just estimates, and the actual hallucination rate can vary depending on the specific use case. However, it's clear that AI hallucinations are a significant issue that needs to be addressed in order to improve the reliability and trustworthiness of AI systems.
