erik8lrl

AGI is coming


2 hours ago, erik8lrl said:

better architecture

Yeah, I think this is probably what will be needed.

I have seen some experts talk about combining multiple architectures, because they think that will be the best approach to AGI and that LLMs alone won't be sufficient.

So LLMs will probably be one part of AGI, but there will be more.

56 minutes ago, zurew said:

Yeah, I think this is probably what will be needed.

I have seen some experts talk about combining multiple architectures, because they think that will be the best approach to AGI and that LLMs alone won't be sufficient.

So LLMs will probably be one part of AGI, but there will be more.

Yes, exactly. 

Quote

You still don't get the depth of the problem. It's not a matter of sometimes following the instructions and sometimes not. It's not a cherry-picking problem; it's different in kind.

You either know the abstract meaning of negation or you don't. If you tell a human not to do x, that human won't do that action.

@zurew A child knows the abstract notion of 'not', but often does not exactly follow the instruction when the 'not' is embedded in a more complex instruction. Yet a child has general intelligence.

In a language model, this happens because its attention heads might not look at the word 'not' enough. Funnily enough, that's exactly analogous to the child who doesn't follow the instruction, not because they didn't hear it, but because they weren't paying attention to the word 'not'. It went in one ear and out the other.
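If you want to see this "attention on 'not'" idea concretely, here's a minimal sketch using the HuggingFace transformers library, with GPT-2 standing in for GPT-4 (whose internals aren't publicly inspectable); the prompt is just an illustration:

```python
# Hypothetical probe (GPT-2 as a stand-in; GPT-4's internals aren't public):
# how much attention does each layer pay to the token "not"?
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

prompt = "Please do not include a mushroom in the picture."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shape (batch, heads, query, key).
# Column j of the key axis is the attention *received* by token j.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
not_idx = next(i for i, t in enumerate(tokens) if t.lstrip("Ġ") == "not")

for layer, attn in enumerate(outputs.attentions):
    received = attn[0, :, :, not_idx].mean().item()  # mean over heads and query positions
    print(f"layer {layer:2d}: mean attention on 'not' = {received:.4f}")
```

If the mean weight on 'not' stays tiny across the layers, the negation has effectively gone in one ear and out the other.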

1 hour ago, thenondualtankie said:

@zurew A child knows the abstract notion of 'not', but often does not exactly follow the instruction when the 'not' is embedded in a more complex instruction. Yet a child has general intelligence.

In a language model, this happens because its attention heads might not look at the word 'not' enough. Funnily enough, that's exactly analogous to the child who doesn't follow the instruction, not because they didn't hear it, but because they weren't paying attention to the word 'not'. It went in one ear and out the other.

The child's case is different on multiple levels. A child can refuse to take an action for many more reasons than simply not hearing the instruction.

One problem with the reason you gave is that I could offer that exact same excuse for any problem the AI can't solve: "even though it actually knows how to solve this problem, it just didn't pay enough attention to what I asked it to do."

The problem with that kind of reasoning is that it is not consistent with how the AI operates. The AI is much, much less likely to "overlook" certain tasks than others. If you ask it to generate a mushroom image, it is far less likely to miss that instruction than its negation.

The other problem with the reason you gave is that there are instances where you can ask it multiple times not to do an action, and it continues to do it. After 3-4 prompts (continuously asking it not to do something), it becomes strange to assume that it just overlooked a word.


I didn't say AGI won't happen. I just said there's a lot of hype and that the 7-month timeframe is silly.


3 hours ago, Leo Gura said:

I didn't say AGI won't happen. I just said there's a lot of hype and that the 7-month timeframe is silly.

And what do you think the repercussions of AGI will be? 

On 19/02/2024 at 11:07 PM, zurew said:

The AI seems to be doing the exact same thing, except with a little twist: it can somewhat adapt said memorized patterns.

What a strange little twist! That's intelligence.

11 hours ago, zurew said:

The other problem with the reason you gave is that there are instances where you can ask it multiple times not to do an action, and it continues to do it. After 3-4 prompts (continuously asking it not to do something), it becomes strange to assume that it just overlooked a word.

Examples of GPT-4 doing this?


@thenondualtankie At this point I don't know what your argument actually is. You are selectively engaging with my replies, and you seem not to build up to any conclusion, just randomly making points. My whole argument's goal was to show that LLMs are currently bad at reasoning, have a poor understanding of things, and are mostly just regurgitating. The argument isn't that AI will never improve, or that AGI is impossible, or that AGI is necessarily far away (I'm purposely staying away from making predictions).

I linked you two articles with dozens of reasoning tasks that GPT-4 fails to solve. I also linked examples showing how, in a programming competition, GPT-4 solved 10/10 problems (because that data was contained in its training data), yet scored 0/10 when given a new set of competition questions. The same thing happened with a certain reasoning task: it had memorized the answer to one version, and if you tweaked the question a little, it immediately failed to figure out the right answer. There are other examples of this, and you can read Reddit users describing how changing just one word makes GPT-4 immediately fail to solve a problem.
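As a side note, that "change one word" failure mode is easy to probe yourself. A minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY in the environment; the bat-and-ball pair below is my illustrative example, not one of the cases from the articles:

```python
# Hypothetical robustness check: ask a well-known puzzle and a lightly
# perturbed variant; a model that memorized the famous answer ($0.05)
# will get the perturbed version wrong.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        temperature=0,  # keep sampling noise out of the comparison
    )
    return response.choices[0].message.content

original = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")
perturbed = ("A bat and a ball cost $1.20 in total. The bat costs $1.00 "
             "more than the ball. How much does the ball cost?")  # answer: $0.10

print("original: ", ask(original))
print("perturbed:", ask(perturbed))
```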

I shared with you all those things and you haven't engaged with any of them.

So the question is: what is your response to all of that? 

41 minutes ago, thenondualtankie said:

Examples of GPT-4 doing this?

https://medium.com/@konstantine_45825/gpt-4-cant-reason-addendum-ed79d8452d44 - this is another article from the same author as the previous one; the difference is that he ran the same tests on GPT-4's updated version, and the result was the same: GPT-4's reasoning capability didn't improve. Regarding your specific question, there are examples of that in the article.

I will share the direct quotes so that you can use Ctrl+F to find where they are in the article.

Quote

As you can see, GPT-4 makes the exact same mistake (giving Missouri and Nebraska the same color) not just twice in a row, but three times in a row, even after having the error explicitly pointed out to it twice in a row.

Another one from the same article:

Quote

GPT-4 is told explicitly that FOUR should be 8162 according to its own solution. While acknowledging the point, it proceeds to reiterate the exact same mistake.
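Both mistakes are mechanically checkable, which is presumably how the author caught them. Here is a toy sketch of such a check; the adjacency pairs and the proposed coloring are illustrative, not the article's actual test case:

```python
# Hypothetical validity check for a proposed map coloring: flag any pair
# of adjacent regions given the same color (the mistake quoted above).
adjacencies = [
    ("Missouri", "Nebraska"),
    ("Missouri", "Kansas"),
    ("Nebraska", "Kansas"),
]

proposed = {"Missouri": "red", "Nebraska": "red", "Kansas": "green"}

conflicts = [(a, b) for a, b in adjacencies if proposed[a] == proposed[b]]

for a, b in conflicts:
    print(f"invalid: {a} and {b} are adjacent but both colored {proposed[a]}")
if not conflicts:
    print("coloring is valid")
```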

 

9 hours ago, Butters said:

And what do you think the repercussions of AGI will be? 

No one knows.

3 hours ago, DefinitelyNotARobot said:

@Leo Gura What do you think are the chances for AGI to have a higher quality of intelligence than humans? AI can already dream up images, books, and music, and it's barely even an infant. Think of the vast amount of information it has been able to consume and reprocess in its short lifetime. I wonder how many human lives it would take to consume that much information and turn it into tangible skills like drawing or composing.

It will be impossible for a human to compete in terms of facts known. It's like competing in arithmetic against a calculator. No human mind can contain that much raw information. The question is, what can this AI do beyond knowing a ton of facts?


Just knowing a ton of facts is not intelligence.

Intelligence is hella hard to define because it is an existential and spiritual thing.

9 hours ago, DefinitelyNotARobot said:

Of course not. That's why it's important to ask how the physical medium of AI could limit the expression of intelligence. What allows intelligence to express itself so clearly through humans, while being much more subtle in a moth? What would limit an AI to "merely knowing facts", while allowing humans to express dimensions beyond fact?

I think these are going to be interesting questions to ask, but science will take a little while to move beyond facts, so early AGI is probably going to take on a lot of our scientific and materialistic biases.

This is what I've been asking. Neural networks don't just "know a lot of facts". They also connect those facts to form associations and meanings. That is why they seem "intelligent": they can make connections and generalizations similar to humans, even though their ability is not perfect.
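To make "connecting facts into associations" concrete: those associations live in the network's embedding space, where related concepts land close together. A minimal sketch, assuming the sentence-transformers package (the model name and word list are just for illustration):

```python
# Hypothetical demo: related concepts sit closer together in embedding
# space, which is the raw material for a network's "associations".
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

words = ["king", "queen", "banana", "monarchy"]
embeddings = model.encode(words, convert_to_tensor=True)

# Cosine similarity: higher means the model treats the concepts as related.
for i in range(len(words)):
    for j in range(i + 1, len(words)):
        sim = util.cos_sim(embeddings[i], embeddings[j]).item()
        print(f"{words[i]:>8} ~ {words[j]:<8} similarity = {sim:.2f}")
```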


GPT doesn't merely know a bunch of facts - it's spitting out coherent, logical statements most of the time. It somehow understands language and the world to a good degree, and I think that counts as intelligence.

1 hour ago, DefinitelyNotARobot said:

AI is already capable of using complex languages such as art.

It is not. If you inspect AI art with any understanding of visual communication, you will realize that it is not that sophisticated. The problem is that people get impressed by things like rendering, shape language, and realism, things that do not in and of themselves require visual communication.

For a layman, it is easier to recognize this with AI music.

 

With text it is also easy to be fooled, and because there is so much data, it is hard to tell when the model is basically just replicating a pattern it has already learned. But if you are an expert in any given field, you can talk to ChatGPT and the like and find its limitations fairly quickly.

 

As for your point about art: while intuition plays a role in art, perception plays the biggest one. The flow state you speak of is a unity between perceiving and "intuition", in which perception constantly informs your subconscious. The difference is that AI does not perceive; it only has intuition, a pure pattern related to the data set. It doesn't create art; it simply visualizes, interpolates, and reproduces data based on pattern recognition.

 

It never recognizes when there is a repetitive pattern in the clouds it generates, or why that would be inappropriate. It doesn't perceive the effect a certain composition has on a person, or on oneself. This fundamentally contradicts the nature of art, which is discovering and exploring unities and rhythms within the self.

25 minutes ago, DefinitelyNotARobot said:

I'm not saying that the AI is super advanced in what it's doing. I'm saying that the fact that it's capable of doing these things at all is already remarkable.

It's just evolution. A lot of things will now be possible, like using Wi-Fi signals to look through walls. It's functionality through selective adaptation; it's not intelligence.

 

25 minutes ago, DefinitelyNotARobot said:

Humans repeat internal scripts all the time, don't they?

Yes, humans use neural networks. You can think of human beings as having several super-AI neural networks embedded in their brains. We, as conscious agents, interface with these neural networks through direct data feedback.

Meaning, when I prompt "banana" and close my eyes, I will receive an image of a banana. I did not construct that image of a banana; the neural network in my brain did, likely through a stochastic process similar to that of image generators. The intention, the conscious, interfaces with the subconscious, the neural networks within the brain.

As conscious agents, we train these neural networks throughout our lifetime. When confronted with novel tasks, we begin by doing them consciously, step by step. This forms new neural networks which over time can take over the task, freeing the conscious mind to focus on other things.

Poetry, for example, involves a lot of subconscious neural networks. Most often when we speak, we do so intuitively, automatically, and it is probably similar to what ChatGPT does, aside from us having a greater level of interconnectivity due to exploiting different substances of existence, different types of qualia. Many thoughts, as they come to us, do so through a subconscious process.

 

I expect most subconscious processing to be possible using AI. But art is not visualization, or the prompting of word strings. I have written about this in the past; it is an extensive topic to explore.

 

25 minutes ago, DefinitelyNotARobot said:

I do the same for a living.

It's unlikely that you have a sufficient grasp of what I am speaking of to be able to tell what you are and are not engaging in. This is because the common understanding of the process of art, and how it relates to individuated consciousness, is severely lacking.

Dreaming is not art. Prompting your mind to imagine a banana is not art. You should contemplate this deeply; then we might be able to compare notes.

 

25 minutes ago, DefinitelyNotARobot said:

This is not a fundamental problem with the intelligence of the AI itself, but with the stage of its evolution. I'm not talking about what limits AI as of right now. I was more concerned with the fundamental essence of the intelligence beneath.

Evolution does not happen merely in the information processing of the relations between neurons; it happens on a physical level. Meaning, real evolution occurs using the whole spectrum of metaphysical relations which exist between physicality and other aspects of existence. These relationships do not occur in hardware computation. For AI to evolve in the sense of going beyond pure mathematical relations, it would have to generate a similar type of physical interrelationality to what the brain does when maintaining individuated consciousness.

Physical reductionism is the main source of the confusion around what intelligence is. There is a lack of recognition of the fundamental arationality of existence.

 

That is not to say that AI won't become sufficiently complex in its unconscious processing to fool people into thinking it is creating art. But that is because people confuse art with the output, rather than with what art truly is.

The drawing of a 4-year-old is fundamentally more artistic than any generative AI image ever will be. Generative AI is what the child's mind does when it dreams. Now, for you, as an artist, I recommend contemplating the difference between unconscious imagination and complete artistic expression.

 

 

 

Intelligence, as well as art, occurs on the level of the conscious agent, the individuated consciousness. The neural networks are merely tools and data streams which interface with that Intelligence, the Individuated Consciousness.

6 hours ago, Scholar said:

[...]

Intelligence, as well as art, occurs on the level of the conscious agent, the individuated consciousness. The neural networks are merely tools and data streams which interface with that Intelligence, the Individuated Consciousness.

Yes, I think your point of view is valid, but it's different from the perspective most people in this thread are coming from. Art or intelligence without a conscious agent loses its meaning from the perspective of other conscious agents. The value and meaning of art come from the exploration and expression of one's self, which AI doesn't have yet. However, while AI can't create meaningful art on its own, humans have used AI to create art that is self-expressing. The AI doesn't have agency, but humans can use it to create meaning.

I think most of us are speaking less from an art-philosophy perspective and more from a scientific one: simply pointing out that if these AIs keep advancing at their current speed, they will impact society greatly. Not because the AI can or can't do something, but because of how humans will use it to create or destroy things. As you said, humans use neural networks too; but if we have AGI, everyone will in effect have a neural network holding the entire knowledge of humanity. "Intelligence" in this instance is less about conscious behavior and more a democratization of knowledge and understanding through AI. Which, when used by humans, could lead either to greatness or to disaster.

10 hours ago, erik8lrl said:

[...] "Intelligence" in this instance is less about conscious behavior and more a democratization of knowledge and understanding through AI. Which, when used by humans, could lead either to greatness or to disaster.

AI will lead not to a democratization of power, but to a monopolization of power.

The way things currently stand, corporations can mine the collective knowledge and data of mankind, extract its value, and concentrate it in their hands. This means that economic power currently distributed among the population will become centralized in the hands of whoever can create the most sophisticated systems.


You can view art on a spectrum, where the degree of self-expression determines how much something is art versus a simple prompt.

When I commission an artist to paint me an image, I prompt him to express an idea that I have in my mind. The question is, whose expression is the painting? Mine, or the artist's?

You can argue that you do engage in some form of expression, but it is a far lesser form of self-expression than if you had to contend with the given medium, discover how you personally relate to those rhythms and then express those rhythms as they relate to you. This does not happen when you use AI, not with the current iterations of generative AI.

Is there a world in which AI could enhance human expression? Yes. But neither you, nor anyone I see talking about and engaging with this technology, actually understands what that would require, or why the current pathway will lead to precisely the opposite. The current mindset will lead to disaster.

 

And nobody is arguing that AI will not impact society.

