Bobby_2021

AI has plateaued!

67 posts in this topic

1 hour ago, Bobby_2021 said:

An AGI should be able to generate controls for flying a fighter jet that can refuel in mid-air, dodge missiles...

and the SAME AI should also be able to play chess endgames to perfection...

And that SAME AI should be able to do supercomputer calculations...

False.

All humans have GI but cannot do most of those things.



10 hours ago, Leo Gura said:

False.

All humans have GI but cannot do most of those things.

It's certainly plausible for a human. It's just that we have limited time and energy to learn all of those, and most of it isn't even spent on learning; it's spent on survival.

Someone with 150+ IQ could easily do all of those. It wouldn't be crazy hard. 

All of those distinct tasks are mathematical in nature, but mathematics cannot unify all those different tasks in a single model. You need multiple specialised models.

A human mind can hold multiple specialised models in its head and switch between them holistically.

A machine cannot have this holistic part, not to mention higher orders of complexity in terms of holism.

That's what General Intelligence is, and what it should be able to do. The more a mathematical model generalises, the more power it loses.


The bottleneck for humans is mathematical/intellectual/calculation ability. An AI, or any normal computer, can outsmart a human in terms of pure mathematical ability in a FINITE domain.

A finite domain is at most a countable infinity. (The natural numbers are the canonical countable infinity.)

A computer would crush a human in chess for this reason. Chess is a countable infinity.

But reality isn't a countable infinity. Reality is an uncountable infinity, an infinity of infinities. To select the infinity of interest, you need holistic evaluation across infinities. That's where general intelligence would help you. A mathematical or linguistic model can never have general intelligence.
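For reference, the countable/uncountable distinction being leaned on here is standard set theory (Cantor); a minimal sketch:

```latex
% Countable infinity: any set that can be put in bijection with the naturals.
% A discrete game like chess (finitely many squares and pieces, discrete moves)
% lives at this level at most:
|\mathbb{N}| = |\mathbb{Q}| = \aleph_0

% Uncountable infinity: strictly larger, by Cantor's diagonal argument.
% The continuum of real numbers is the usual example:
|\mathbb{R}| = 2^{\aleph_0} > \aleph_0
```

Whether physical reality is best modelled as a continuum is, of course, the contested part of the argument.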

You need emotions for post-linguistic general intelligence, paired with a high IQ for mathematical ability or linguistic intelligence.

----

Also, would someone explain why we would need an actual AGI to do everything when it's probably easier, more efficient, and more effective to build specialised models?

I don't get the hype about AGI. To be frank, you don't need it.

The AI for playing chess (Leela) is unfathomably more intelligent than even computation-based chess engines like Stockfish.


This could be neural-net-based learning hitting a plateau.

On 2024-08-20 at 1:10 AM, Bobby_2021 said:

I don't get the hype about AGI. To be frank, you don't need it.

The AI for playing chess (Leela) is unfathomably more intelligent than even computation-based chess engines like Stockfish.

The Leela chess model was able to reach such a high level of performance because it could play against itself. Chess has a black-and-white evaluation: win or lose. It's very easy to evaluate whether something is good or bad in chess, and this is why the AI was able to reach unbelievable levels of intelligence.

But for language, and for uncertain open systems like the real world, it is very hard to determine a good evaluation function. How do you know if the AI used language to solve a math problem effectively? There's no clear way to prove it or evaluate it, so how can you train the AI to do it? And math is one of the easy ones. Imagine things like "is this poem beautiful?" Those become subjective evaluations.

So because of this we need huge models that are thousands of times larger than Leela.
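The contrast integral is pointing at can be sketched in a few lines of Python (a toy illustration with made-up function names, not Leela's actual training code): a board game admits an exact, objective evaluation function, while an open-ended task like judging a poem has no ground-truth label at all.

```python
def evaluate_tictactoe(board):
    """Exact evaluation of a 9-char board string: +1 if X wins, -1 if O wins, 0 otherwise."""
    lines = [board[0:3], board[3:6], board[6:9],      # rows
             board[0::3], board[1::3], board[2::3],   # columns
             board[0::4], board[2:7:2]]               # diagonals
    for line in lines:
        if line == "XXX":
            return +1
        if line == "OOO":
            return -1
    return 0  # draw or game still in progress

def evaluate_poem(poem):
    """No objective evaluation exists; any score here would be a design choice."""
    raise NotImplementedError("'Is this poem beautiful?' has no ground-truth label")

print(evaluate_tictactoe("XXXOO    "))  # → 1 (top row is XXX)
```

Tic-tac-toe stands in for chess here only because its win check fits on a few lines; the point is that a self-play loop can score itself against the first function but not against the second.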

On the video you just posted: it's possible an AI will figure out how to break below the trend line.

Maybe they will figure out how to make an AI train against another AI, like in chess. The AI will battle or play against itself, the winner being the AI that is more intelligent, and it will keep doing this for trillions of years of simulated time until it reaches superhuman levels of intelligence.
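That self-play loop can be caricatured in a few lines (a hypothetical sketch: a single "strength" number stands in for a whole neural network, and random mutation stands in for training updates; nothing here is any lab's real code):

```python
import random

def play_game(a_strength, b_strength):
    """Stronger agents win more often; returns the winner, 'A' or 'B'."""
    p_a_wins = a_strength / (a_strength + b_strength)
    return "A" if random.random() < p_a_wins else "B"

def self_play(generations=100):
    """Repeatedly pit the champion against a mutated copy of itself; keep the winner."""
    champion = 1.0
    for _ in range(generations):
        challenger = champion * random.uniform(0.9, 1.2)  # perturbed copy
        # A best-of-11 match decides which version survives.
        wins = sum(play_game(challenger, champion) == "A" for _ in range(11))
        if wins > 5:
            champion = challenger
    return champion

random.seed(0)
print(self_play())  # champion strength after 100 generations of self-play
```

The loop only works because `play_game` gives an unambiguous winner; replace it with "which poem is more beautiful?" and the whole scheme has nothing to select on, which is integral's point.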



14 hours ago, integral said:

very hard to determine a good evaluation function

Exactly why AGI isn't possible.

Humans have GI because we have survival as our evaluation function, trained in the open world over hundreds of millions of years.

On 19/08/2024 at 7:54 PM, Leo Gura said:

False.

All humans have GI but cannot do most of those things.


A machine ultimately has far more computational resources, higher throughput, and more bandwidth to reason across all domains.
Humans are constrained by time, energy, and genetics and cannot scale their learning and thinking the way an AI does.
Therefore we hold AI's capabilities and accuracy to much higher standards than a human's.

Pre-GPT, we always defined AGI as a virtuoso AI that is a master of any intellectual task. This is how DeepMind and most of the established AI labs approached the ultimate AGI objective. Now, with OpenAI stealing the show, Sam Altman claims that AGI should basically be "a median human". We've degraded the term AGI to the 50th percentile of humans, while for decades it represented a virtuoso, multidisciplinary master.

All these new AI labs using this non-standard definition of AGI do so to attract more investor money, which keeps the hype circus going. If you redefine the bar and lower it, then yeah, sure, we can reach AGI in 1 or 2 years. But when it comes to an AGI that can do any intellectual task a human can, we are still a couple of major breakthroughs away.

