erik8lrl

AGI is coming

180 posts in this topic

David Shapiro gives me conman vibes. He quit his job, and if he's relying on his YouTube channel for income now, he needs to hype up AI as much as possible. I hope I'm wrong and that David is correct about his AGI predictions.

1 hour ago, DocWatts said:

I'd disagree with this slightly. While it's true that a bee's mind doesn't use symbolic processing, I'd instead argue that bees are on a spectrum of general intelligence, along with people.

The most sophisticated AI that we have still can't come close to replicating all of the things that a bee can do.

I agree, we are nowhere near simulating the level of intelligence of real living beings, and that might not happen in 50 or even 100 years. 
What I mean by AGI in this instance is more practical: an AI that can solve novel, general problems better than an average human would be my definition of AGI. It doesn't have to have the same complexity that an organic being needs for survival. Heck, it doesn't even have to know how to drive cars. I think a self-driving agent is a very difficult general problem; the AI would have to reach the level of a conscious being to truly avoid mistakes, since the range of problems and situations you could run into is near infinite due to the complexity of reality.

Edited by erik8lrl


@Seth

@Phil King 

Yeah, I saw this video randomly. He is clearly exaggerating by a lot. But the acceleration rate of AI is definitely alarming, which is the main point of this post.

 

1 hour ago, Leo Gura said:

No more complex than a flying bee.

Making a flying-bee AI would be much harder than making a self-driving car.

It took nature millions of years to create bees 

 

Assuming linear progression, the timeline from the first self-driving car prototypes to full Level 5 autonomy could reasonably fall within the next 10 to 30 years.

23 minutes ago, RightHand said:

It took nature millions of years to create bees

This needs to be emphasized more. 

We've been trying to create in a laboratory over the course of a few decades what took hundreds of millions of years to develop through natural selection.

Add to that that science is still very far from understanding how consciousness works and how life emerged from non-living material, and it would behoove us to approach claims on this topic with more skepticism and humility.

Edited by DocWatts

I'm writing a philosophy book! Check it out at : https://7provtruths.org/


Regardless of AGI and what even constitutes AGI, we are creating things that have shown emergent behavior (GPT wasn't trained on research-grade chemistry, but one day it could just do it). We don't know what they are capable of at any given time. We don't know the timeline, we don't know what comes next, and we aren't acting considerately when it comes to new tech and its externalities.


The term AI is a misnomer because, fundamentally, the AI is doing the opposite of what we consider to be intelligence.

 

A human being, given enough time, can know nothing about math at all and develop all of math from the ground up, simply by analyzing reality and by analyzing their own mind.

A human being, given enough time, can go from no artistic expression to developing all the artistic expressions we see currently.

A human being, given enough time, can create language itself, can create new concepts, new words, new ideas, without ever having seen or heard of any of them. Given enough time, a human being could create all possible words, all possible concepts that can exist within the reality of his mind.

 

 

AI is precisely the opposite. It cannot do anything without data. This is because machine learning has nothing to do with intelligence in this sense, it is probabilistic, stochastic parroting. It is more akin to intuition than anything else. You could give AI photorealistic images of all objects in the universe. And it would be great at depicting those objects, in photorealism. It could never move beyond that, because in the AI, there is nothing beyond the data.
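To make the stochastic-parrot point concrete, here is a deliberately oversimplified sketch (a toy bigram model, purely an illustration and nothing like how GPT is actually built): its entire "knowledge" is a table of which word followed which in its training data, so it can only ever recombine what it was given.

```python
# Toy bigram "language model": its only knowledge is which word followed which
# in the training data, so it can never produce anything beyond that data.
import random
from collections import defaultdict

corpus = "the bee flies to the flower and the bee returns to the hive".split()

follow_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

word, generated = "the", ["the"]
for _ in range(10):
    options = follow_counts[word]
    if not options:  # the data offers no continuation from this word
        break
    # Sample the next word in proportion to how often it followed `word` in the corpus.
    word = random.choices(list(options), weights=list(options.values()))[0]
    generated.append(word)

print(" ".join(generated))  # only ever recombines words that were in the corpus
```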

 

This is the fundamental reason why AI is not intelligent in the same way a human mind is:
A human mind does not simply come to intelligent conclusions; a human mind understands why the conclusion is correct. Why? Because the conclusion and the process are part of its being. The idea of "addition" and "subtraction" exists in a human mind; it does not exist in a calculator. Calculators do not do math, they calculate. Logic exists in the human mind as an actual substance of existence. There is no computational system that contains logic; it can only attempt to mimic the dynamics of logic.

In the same way, no computational system has a sense of appeal, because appeal and beauty are actually substances of existence. They are actually things that exist in the human mind, and they relate to other parts of the human mind, which are themselves actually existing substances of reality.

 

In other words, experience is essential to general intelligence, because general intelligence simply means being conscious, being individuated. The more substances of existence, and interrelations between them, a mind can contain, the higher its potential for "general intelligence" is.

 

 

Now, this doesn't mean AI cannot achieve great things. It is basically machine evolution. It should be able to achieve anything that the human mind does unconsciously. This is why image generation is possible; it is much like human imagination. When you think "apple", you don't consciously imagine that apple. You don't construct it, it comes to you as you intend it.

The same is true for thoughts. You don't come up with your own thoughts; you don't think them. It's not intelligent to have thoughts; what's intelligent is realizing what the thoughts mean, what they are, and how they relate to the rest of existence.

 

The AI does not understand why poetry is poetry, it simply learns to mimic it. There is no poetry in the machine; the poetry only exists in the human mind, as the reader reads it, as the words form a new substance of existence.

 

 

I suspect genuine AGI will not happen until we create physical artificial evolutionary systems. Individuating consciousness is essential for this, and it will have to happen on a physical basis, in the same or a similar manner as it does in the brain.

Edited by Scholar


@Scholar The discussion isn't whether or not AI could obtain qualia.

 

I personally don't care if GPT-87 is a philosophical Zombie, I just want it to be smarter than me.

7 minutes ago, RightHand said:

@Scholar The discussion isn't whether or not AI could obtain qualia.

 

I personally don't care if GPT-87 is a philosophical Zombie, I just want it to be smarter than me.

You are missing the point. If you aren't conscious, you aren't "Generally Intelligent", you simply have intelligent functions.

 

If you want it to be smarter than you, just hit yourself on the head real hard. That will fix your problems.


@Scholar I get what you are saying but I don't think we are using the same definition of AGI. 

When discussing AGI, the focus is on the functional and cognitive abilities of an AI system rather than on replicating consciousness. As long as your "intelligent functions" allow you to solve any problem, we're good to go, and if they can't, you just make new ones with your existing set.

8 minutes ago, RightHand said:

@Scholar I get what you are saying but I don't think we are using the same definition of AGI. 

When discussing AGI, the focus is on the functional and cognitive abilities of an AI system rather than on replicating consciousness. As long as your "intelligent functions" allow you to solve any problem, we're good to go, and if they can't, you just make new ones with your existing set.

You can get far with parroting, memory, and intuition, but I suspect there will be a lot of hard lines that will be impossible to cross.

 

The danger here is, of course, that you will get intelligence without consciousness. There is nothing more destructive than intelligence that lacks consciousness. It's kind of ironic, because the last world war was caused precisely by this kind of dynamic.

Machine learning has the potential to give power to the least conscious of individuals. It would have been nice if the Nazis had been a little less smart and had fewer "intelligent functions". The kind of destruction they were capable of will pale in comparison to the potential for destruction now possible.

Edited by Scholar


@Scholar If we use the metaphor of Adam and Eve eating from the tree of the knowledge of good and evil, can we not envision a scenario where AI, having accumulated a vast array of intelligent functions, experiences an AHA moment that instantaneously grants it consciousness?

 

Maybe using artificial DMT :D

10 minutes ago, RightHand said:

@Scholar If we use the metaphor of Adam and Eve eating from the tree of the knowledge of good and evil, can we not envision a scenario where AI, having accumulated a vast array of intelligent functions, experiences an AHA moment that instantaneously grants it consciousness?

 

Maybe using artificial DMT :D

I think that is confusing what individuated consciousness is. It is a physical thing, a specific shape within the wavefunction of the universe. Computers aren't anything like that shape, so they will not be individuated.

It's not the AHA moment that grants consciousness, it's the other way around.

Edited by Scholar


The AI hype these days is exhausting.


"Find what you love and let it kill you." - Charles Bukowski


@Space A year ago, GPT-4 hadn't even been released. Now, you can talk with a stage yellow entity whenever you want, 24/7.

Edited by RightHand

4 minutes ago, RightHand said:

Now, you can talk with a stage yellow entity

lol


 

11 hours ago, Scholar said:

AI is precisely the opposite. It cannot do anything without data. This is because machine learning has nothing to do with intelligence in this sense, it is probabilistic, stochastic parroting. It is more akin to intuition than anything else. You could give AI photorealistic images of all objects in the universe. And it would be great at depicting those objects, in photorealism. It could never move beyond that, because in the AI, there is nothing beyond the data.

I think it's important to point out that we don't really fully understand how intelligence emerges from LLMs yet. Yes, on the surface it's just predicting the next word with a massive neural network. But as we increase the size of these neural networks, the LLMs start to become more and more "intelligent" through emergent processes, and no one actually knows how this happens, due to the complex nature of these networks. I don't think we can even be 100% sure that these models are not conscious. Yes, they don't behave with a sense of self, and you can program the system to limit and modify the behavior of the AI. However, this emergent property of large neural networks might be the very early basis for developing qualia. We are only at the beginning of this development, and it already exhibits near-human-level intelligence in some areas; we don't know where this will lead as we scale up.

For example, GPT-3 has 175 billion parameters. These parameters can be thought of as the connections between artificial neurons rather than the neurons themselves, so they are more like synapses. The human brain is estimated to contain approximately 100 trillion synapses, so the human brain is roughly 571 times the size of GPT-3; who knows what LLMs would be capable of if they reached the same scale. Of course, this won't happen for a long time, but due to the accelerating nature of this tech, even if it doesn't reach qualia, the impact it will have on humanity will be significant. Google Gemini Ultra was announced last December and it has 540 billion parameters. That means parameter size grew roughly 3x in 3 years; we are only at the start of the exponential curve, and we don't even know how many parameters GPT-5, which is coming this year, will have.
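For anyone who wants to sanity-check the scale comparison, here's the back-of-envelope arithmetic (using the rough figures quoted above; comparing parameters to synapses is only a loose analogy, and the 540-billion number is the figure cited in this thread, not a verified spec):

```python
# Back-of-envelope scale comparison using the rough figures quoted above (not verified numbers).
gpt3_params = 175e9       # GPT-3: ~175 billion parameters
later_params = 540e9      # the 540-billion-parameter figure cited above
brain_synapses = 100e12   # common rough estimate: ~100 trillion synapses in a human brain

print(f"Brain synapses / GPT-3 parameters: {brain_synapses / gpt3_params:.0f}x")  # ~571x
print(f"Parameter growth over ~3 years: {later_params / gpt3_params:.1f}x")       # ~3.1x
```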

And you can say the same for humans or any living being: we also need data and input to develop any form of intelligence. We wouldn't be able to do anything without our senses interacting with the world. Even creativity is often the process of interaction between existing ideas and inputs. AI art today can absolutely generate creative and novel artworks, works that are not drawn from any single piece of data but are synthesized from a massive amount of different data to realize the wholeness of an idea. Of course, the AIs themselves don't have a self or a will to create yet, but humans can already produce extremely creative art by using AI. If an AI had a self, I don't see why it couldn't be creative and artistic. But yeah, developing qualia is a long and unknown road into the future.

Edited by erik8lrl

