erik8lrl

AGI is coming

180 posts in this topic

Hmm, I don't know.
A recent survey of over 2,000 leading AI experts put the chance that we get "high-level machine intelligence" by 2050 at roughly 50%. Here is the paper: https://arxiv.org/abs/2401.02843

Personally (and I am clueless), I think that AI development will hit a really tough roadblock sooner or later.
But we will see what happens :) 


1 hour ago, undeather said:

I think that AI development will hit a really tough roadblock sooner or later.

I agree.

The assumption that this pace of development will stay on an exponential curve is a big one, and I haven't heard a strong argument yet that would properly ground it.

There also seems to be an assumption that if we just scale up these models and use more computing power, general intelligence will eventually emerge. But there are certain problems that can't simply be solved with more computing power - for example, relevance realization and other moral and philosophical problems.

It feels similar to saying, "Well, with more technological development we will make something that can travel faster than light." Well, no - it's not a matter of lacking technological development, it's a problem with the laws of physics, and until you can get around that, you can have as much technological development as you want and there will still be certain limits you can't cross.

 


1 hour ago, zurew said:

I agree.

The assumption that this pace of development will stay on an exponential curve is a big one, and I haven't heard a strong argument yet that would properly ground it.

There also seems to be an assumption that if we just scale up these models and use more computing power, general intelligence will eventually emerge. But there are certain problems that can't simply be solved with more computing power - for example, relevance realization and other moral and philosophical problems.

It feels similar to saying, "Well, with more technological development we will make something that can travel faster than light." Well, no - it's not a matter of lacking technological development, it's a problem with the laws of physics, and until you can get around that, you can have as much technological development as you want and there will still be certain limits you can't cross.

 

It won't stay on an exponential curve forever, but because we are still so early in this development, it will be a while before the curve slows down.
In this context I define intelligence more as logic, reasoning, and problem-solving capability. Moral and philosophical problems are often relative and paradoxical; they are a matter of wisdom rather than intelligence. For an AI to have wisdom it would need consciousness, a perspective onto which it can project intelligence. That consciousness might emerge from AGI - we don't know.
But at least for now, it seems that simply scaling up the quality and size of the data lets the models increase their intelligence regardless of domain. This is why AGI might happen sooner than people expect.
Honestly, I didn't expect AGI within 5-10 years, but the Sora release changed my perspective. I work in AI video production, and the leap in quality from every other model on the market to Sora is unbelievable - like jumping from 10 to 100. It single-handedly solved almost all the problems with video generation models. I didn't expect this quality for another two years, but it is here now.
The thing with scaling intelligence this way is that there is no hard limit on how large you can scale. As long as you have the resources, you can scale as much as you want, which means the speed at which this tech develops is constrained not by technical barriers but by scale alone.
I'm not an AI researcher, but it seems that if the scaling approach works for both language and image training, it will likely work for other domains and functions as well. An AGI is likely a combination of models trained on every domain possible, and that seems entirely feasible. The 8-trillion-dollar ask makes a lot of sense from this perspective.
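
For anyone who wants a concrete handle on what "scaling" means here: empirical scaling laws model a network's test loss as a smooth power law in model size and training data. Below is a minimal sketch assuming the Chinchilla-style functional form from Hoffmann et al. (2022); the constants are rough published fits, and the snippet is only meant to illustrate the shape of the curve, not any particular model:

# Hedged illustration of a Chinchilla-style scaling law: predicted loss falls
# as a power law in parameter count N and training tokens D.
# Constants are approximate published fits, used here purely for illustration.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Approximate loss L(N, D) = E + A / N**alpha + B / D**beta."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Each 10x jump in parameters and data shaves a bit more off the predicted loss,
# and nothing in the formula itself imposes a hard wall.
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss ~ {predicted_loss(n, d):.2f}")

Whether ever-lower loss actually adds up to general intelligence is, of course, exactly the question being debated in this thread.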


Sora is another example of a domain-specific AI, but it's not clear how we are advancing towards AGI (where you can connect all the domain-specific AIs together under one framework that actually works the way we want it to work).

It's like we keep pushing back the problem of how the pieces need to be connected so that AGI can emerge, while pretending that we are making real progress on it. Merely creating more and more advanced domain-specific AIs won't be sufficient - you need to connect them in a specific way.

It's like we take the progress of domain-specific AI and mistake it for progress towards AGI.


They can't even get cars to drive themselves.

Beware of AI hype.



@zurew Language is the connector between different domains. The way we interface with these models is through language. Even an image/video model like Sora is fundamentally an LLM as well. Multimodal LLMs are the foundation of AGI; when the number of domains and the level of intelligence reach a certain point, it will develop general intelligence.
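
To make the "language/token space as connector" idea a bit more concrete: a common multimodal pattern is to encode non-text inputs with a modality-specific encoder and project the result into the same embedding space the language model reads, so image patches become extra "tokens" in the sequence. The sketch below is a hypothetical toy (the module name, dimensions, and shapes are made up), not the internals of Sora or any specific model:

import torch
import torch.nn as nn

class VisionToLLMAdapter(nn.Module):
    """Toy adapter: project vision-encoder patch features into an LLM's embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)  # learned projection into the LLM's token space

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim)
        # returns pseudo-tokens the language model can attend to alongside text embeddings
        return self.proj(patch_features)

adapter = VisionToLLMAdapter()
image_tokens = adapter(torch.randn(1, 256, 1024))  # (1, 256, 4096)
text_tokens = torch.randn(1, 32, 4096)             # stand-in for an embedded text prompt
llm_input = torch.cat([image_tokens, text_tokens], dim=1)
print(llm_input.shape)  # torch.Size([1, 288, 4096])

Whether stacking enough of these adapters across domains amounts to general intelligence is the open question zurew is pressing on.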

1 minute ago, Leo Gura said:

They can't even get cars to drive themselves.

Beware of AI hype.

Self-driving agents are difficult to develop precisely because they require high levels of intelligence in multiple domains. Robotics and robotic agents will likely come after AGI is achieved digitally.

19 minutes ago, erik8lrl said:

@zurew Language is the connector between different domains. The way we interface with these models is through language. Even an image/video model like Sora is fundamentally an LLM as well. Multimodal LLMs are the foundation of AGI; when the number of domains and the level of intelligence reach a certain point, it will develop general intelligence.

Maybe, or maybe not. Maybe they will eventually abandon LLMs because they hit a roadblock - we have no idea. Making confident statements and predictions is useless, because even the experts are shooting in the dark and making wildly different statements about AGI.

It seems to be the case that it gets the syntax of things (the rules) but doesn't really get the semantics (the abstract meaning of things) - this can be demonstrated with any GPT or other LLM.

And again, there is still the problem of how you need to connect the domain-specific AIs together.

 

There is also a big problem with self-deception as you increase intelligence. If you scale things up a lot, it will become harder for the AI to introspect, and we probably want the AI to be able to develop its own self - and a prerequisite for that is the ability to introspect.



Is it though?

The idea that AGI is just around the corner is akin to thinking that one is making progress towards reaching the moon because they've managed to climb halfway up a very tall tree.


4 minutes ago, DocWatts said:

The idea that AGI is just around the corner is akin to thinking that one is making progress towards reaching the moon because they've managed to climb halfway up a very tall tree.

Yeah, this is a good metaphor.

Is progress in climbing trees a good and reliable metric for tracking progress towards reaching the moon?


If anyone is interested, I did a write-up on the subject of AGI for a book I'm working on, where I delve into some of the substantial barriers to creating AGI, owing to fundamental differences between living minds and machine intelligence.

(Apologies if these pages get uploaded out of order).

[Four screenshots of the relevant book pages are attached.]


40 minutes ago, zurew said:

It seems to be the case that it gets the syntax of things (the rules) but doesn't really get the semantics (the abstract meaning of things) - this can be demonstrated with any GPT or other LLM.

This is because it is not intelligent enough yet. It will improve over time - and soon.

40 minutes ago, zurew said:

Maybe, or maybe not. Maybe they will eventually abandon LLMs because they hit a roadblock - we have no idea. Making confident statements and predictions is useless, because even the experts are shooting in the dark and making wildly different statements about AGI.

To be fair, yes, no one knows for sure. I'm simply posting this to bring awareness to the acceleration of this development. What will happen remains to be seen. But given the sheer impact this could have on humanity, I would start preparing for it to become a reality and pay attention to this space.

 

40 minutes ago, zurew said:

There is also a big problem with self-deception as you increase intelligence. If you scale things up a lot, it will become harder for the AI to introspect, and we probably want the AI to be able to develop its own self - and a prerequisite for that is the ability to introspect.

I think our definitions of AGI are different. From what I'm reading, I feel you are thinking of AGI as the creation of a conscious intelligence. I do not know whether that will happen. My definition of AGI is more like what we have now, but far better and more intelligent at solving problems and providing understanding across different domains.
I think Sam Altman's definition of AGI is when AI can help solve novel physics problems.


1 hour ago, erik8lrl said:

Self-driving agents are difficult to develop precisely because they require high levels of intelligence in multiple domains.

Not true.

A bee has no problem navigating in 3D and it needs no symbolic general intelligence.




Reminds me of chess masters saying chess computers will never beat human chess masters.

39 minutes ago, Leo Gura said:

Not true.

A bee has no problem navigating in 3D and it needs no symbolic general intelligence.

I'd disagree with this slightly. While it's true that a bee's mind doesn't use symbolic processing, I'd argue that bees are on a spectrum of general intelligence, along with people.

The most sophisticated AI that we have still can't come close to replicating all of the things that a bee can do.



@erik8lrl I like David Shapiro, but he is a techno-optimist. He literally said last year, when ChatGPT came out, that we would have AGI by mid-2023, end of 2023 at the latest. Like Leo said, we don't even have a car that can properly drive itself yet. AGI is 10+ years away, if it is even possible.

39 minutes ago, Leo Gura said:

Not true.

A bee has no problem navigating in 3D and it needs no symbolic general intelligence.

I agree with your point, but I don't think the metaphor is very fitting.
A bee's locomotor abilities are the product of an evolutionary process stretching over hundreds of millions of years.
There is an insane amount of intelligence encoded in this dynamic itself.
Bees don't need to deal with the same spatial constraints (streets, other vehicles...), nor with the moral conditions (e.g. the trolley problem) that are inherent to the conceptual framework of vehicle transportation.

The decision tree underlying just one very standard traffic situation is practically infinite - and I don't see how computing power will deal with this problem even theoretically. But then again, I have no idea what I am talking about in this area lol.
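
A back-of-the-envelope sketch of that blow-up: with b candidate actions per time step and a planning horizon of d steps, exhaustive enumeration has to consider on the order of b**d action sequences. The numbers below are made up purely to show the growth rate:

# Made-up branching factors and horizons, only to illustrate combinatorial explosion.
for branching in (5, 20, 50):        # hypothetical candidate actions per time step
    for depth in (5, 10, 20):        # hypothetical planning horizon in steps
        paths = branching ** depth
        print(f"b={branching:>2}, d={depth:>2}: ~{paths:.2e} action sequences")

Which is presumably why real driving stacks lean on sampling, pruning, and learned heuristics rather than brute-force enumeration - raw compute alone doesn't dissolve the combinatorics.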


34 minutes ago, undeather said:

The decision tree underlying just one very standard traffic situation is practically infinite - and I don't see how computing power will deal with this problem even theoretically.

No more complex than a flying bee.

Making a flying-bee AI would be much harder than a self-driving car.


