Bobby_2021

AI has plateaued!


Didn't take long. 😀

If you haven't been replaced by now, congrats. You are not going to be replaced for a long time.

The AI you have now is pretty much the best you can get. Use that to do more of what you were already doing. 


Way too premature to dance on their grave.


You are God. You are Truth. You are Love. You are Infinity.

2 hours ago, Bobby_2021 said:

The AI you have now is pretty much the best you can get. 

This is just false.

However, it is true that we are heading into an AI winter, since the current scaling paradigm has already hit the two fundamental bottlenecks of data and compute/energy. I think it is necessary to sober up. I foresee a year or two until the markets correct for all the burned money that hasn't seen returns (check out Sequoia's article on AI's $600B Question).

We need fundamentally different approaches to architectures, employing all the scientific knowledge we have through a functorial bridge that is yet to be built (a paper by Google DeepMind and others suggests precisely that).


Chaos, Entropy, Order


In a way it's good news: a lot of people feared replacement. Still, I think there will be more unique uses of AI in the future.

 


My name is Reena Gerlach and I'm a woman of few words. 

 

12 hours ago, Bobby_2021 said:

Didn't take long. 😀

If you haven't been replaced by now, congrats. You are not going to be replaced for a long time.

The AI you have now is pretty much the best you can get. Use that to do more of what you were already doing. 

See you in 3 years bro, I would like to know your opinion then.


What you're saying is not true though, specifically because of open-source models like Llama 3.1. Everyone and their moms are working on startups and applications built on these AI models, and they will soon start having commercial viability.

I was just at the Toronto tech fest, and one company was working on AI video generation with a built-in avatar based purely on text. So you type in something like "I want a video of a woman with brown hair explaining the difference between Bitcoin and Ethereum while she walks in a park near a river" and it'll generate it for you. That's one startup in video generation, and there's also Sora coming out soon with customizable generated video. I also recently got a call from a fully automated AI sales rep, so those will start rolling out soon too.

Even if we've reached the current peak of raw model performance, the actual apps built on these models are just about to come out. Broadly speaking, the only two mainstream AI apps we have so far are ChatGPT and image generation.

I'm personally really looking forward to a full-on AI personal assistant/life coach type of thing that will make life easier to manage.


<3

Owner of creatives community all around Canada

 Instagram is @Kylegfall


No no. AI still has a lot of exponential growth. The bottleneck of power consumption will be solved in time by dedicated silicon; it's already happening. And AI will get deployed into everything, and that's only going to carry on accelerating.

The problem of data for training AI is real, as there's only a finite supply of it, but that only applies to top-of-the-range, cutting-edge AI. Lesser AI just won't need that quantity of training data. Also, new, more effective techniques will come along for training AI, so more will be squeezed from the data we do have. I don't believe synthetic data will fix the supply problem, though.


57% paranoid


my_head_on_properly = False
AI_training_data = "nonsense"

if not my_head_on_properly:
    if AI_training_data == "nonsense":
        outcome = "we are so screwed"


I AM a goy 

35 minutes ago, LastThursday said:

No no. AI still has a lot of exponential growth.

Not within the current paradigm, i.e. Transformers/LLMs. Mira Murati, the CTO of OpenAI, has already stated they don't have anything in production that is much better. Check the MMLU benchmark plateau in the second image: you can see only marginal improvements. LLMs are already at their maximum, since they have exhausted pretty much all available data, and synthetic data doesn't work, as yesterday's article in Nature shows.
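As a toy illustration of the collapse mechanism that Nature article describes (the Gaussian and the sample sizes below are mine, purely for illustration): fit a model to samples generated by the previous generation's model, repeat, and the learned distribution degenerates.

import numpy as np

# Toy sketch of model collapse: each generation is "trained" (here, a
# Gaussian fit) only on data sampled from the previous generation.
rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # generation 0: the real data distribution
for gen in range(1, 51):
    synthetic = rng.normal(mu, sigma, size=20)     # small synthetic corpus
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on own output
    if gen % 10 == 0:
        print(f"generation {gen}: sigma = {sigma:.3f}")

The learned sigma drifts toward zero: the tails of the distribution are lost first, which is the same qualitative failure mode the article reports at LLM scale.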

38 minutes ago, LastThursday said:

The bottleneck of power consumption will be solved in time by dedicated silicon; it's already happening.

What you are referring to is ASICs, or dedicated chips, such as those my classmates built for etched.ai. The problem with them is that they work only for inference, not for training. You still need GPUs for that, and even on the latest H100s, training Llama 3 or GPT-4 is estimated to have cost in excess of $1B, and that is training alone. So no, dedicated silicon will not solve this problem.
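For intuition on where a number of that order comes from, a back-of-envelope sketch; every input below is an assumption made up for illustration, not a reported figure for any specific model:

# Back-of-envelope training-cost sketch; all inputs are assumptions.
gpus = 25_000            # assumed H100-class accelerator count
days = 100               # assumed length of the training run
usd_per_gpu_hour = 4.0   # assumed effective cost per GPU-hour
compute_cost = gpus * days * 24 * usd_per_gpu_hour
print(f"${compute_cost / 1e6:.0f}M for compute alone")  # ~$240M

And compute rental is only part of it: hardware amortization, failed runs, data acquisition, and staff push the all-in figure far higher.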

41 minutes ago, LastThursday said:

Also, new, more effective techniques will come along for training AI, so more will be squeezed from the data we do have.

I agree with this, but you seem to take for granted the invention of 'new more effective techniques'. The last innovation was the Transformer, and that was 7 years ago. We haven't had any vertical/architectural jumps since then; it has only been scaling. And even that doesn't prevent representational collapse or incorrect reasoning.


Chaos, Entropy, Order

8 hours ago, Vibes said:

@Leo Gura Is it you who reads the reported posts, or the mods?

Both


You are God. You are Truth. You are Love. You are Infinity.


@Ero Isn't the whole problem that it needs this much compute because it doesn't actually think on its own and just predicts outputs from inputs? You would think that they need to find a way to reverse engineer the human thinking process and apply it to these models as well.


<3

Owner of creatives community all around Canada

 Instagram is @Kylegfall

8 hours ago, Vibes said:

@Leo Gura Is it you who reads the reported posts, or the mods?

It’s an AI. ;) 


“Our most valuable resource is not time, but rather it is consciousness itself. Consciousness is the basis for everything, and without it, there could be no time and no resource possible. It is only through consciousness and its cultivation that one’s passions, one’s focus, one’s curiosity, one’s time, and one’s capacity to love can be actualized and lived to the fullest.” - r0ckyreed

3 minutes ago, r0ckyreed said:

It’s an AI. ;) 

It’s an Alien Intelligence!

16 minutes ago, Leo Gura said:

It’s Both!!

 


I AM a goy 

1 hour ago, Ero said:

The last innovation was the Transformer, and that was 7 years ago.

Don't forget AI research has been going on since the 1960s. Seven years is no time in the scheme of things; another big innovation will come along soon enough, maybe even helped by AI itself.

1 hour ago, Ero said:

You still need GPUs for that, and even on the latest H100s,

The fact that there are dedicated GPUs coming onto the market shows that optimisation for AI has already started; it's not the end of the process. The whole pipeline for AI will be optimised in hardware in time, and reduction in power consumption will be part of the mix.

1 hour ago, Ero said:

Not within the current paradigm,

Perhaps not, but new paradigms come along. In the end AI will seek and suck in its own data autonomously, most likely with embodied AI.

In any case, in the end widespread usage and a large ecosystem of smaller AIs are going to have more impact than single monolithic AIs. The usage of AI in all its forms is still increasing exponentially and hasn't reached saturation.


57% paranoid



9 hours ago, LastThursday said:

Don't forget AI research has been going on since the 1960s. Seven years is no time in the scheme of things; another big innovation will come along soon enough, maybe even helped by AI itself.

How could I? I am writing a thesis, referencing many of those papers.

9 hours ago, LastThursday said:

The fact that there are dedicated GPUs coming onto the market shows that optimisation for AI has already started; it's not the end of the process.

Nowhere did I say it is the end of the process. GPUs will always help, but what I was pointing at is that they alone won't be enough. There are physics- and biology-related systems that are provably unsolvable by Transformers (part of what I am working on for my thesis). That is what I meant by it not being sufficient, no matter how much compute you throw at it.

9 hours ago, LastThursday said:

Perhaps not, but new paradigms come along. In the end AI will seek and suck in its own data autonomously, most likely with embodied AI.

Again, you seem to take for granted 'new paradigms' coming along. As someone who works on the categorical/reasoning side of AI, I can tell you that their construction will be painstakingly slow, utilizing all the domain-specific knowledge we have. An example is Lie-Poisson networks, part of the broader class of physics-informed neural networks (PINNs).
Embodied AI would definitely be a revolutionary technology, but you won't achieve it without first having some breakthroughs in the PINN and reasoning paradigms.

9 hours ago, LastThursday said:

In any case, in the end widespread usage and a large ecosystem of smaller AIs are going to have more impact than single monolithic AIs. The usage of AI in all its forms is still increasing exponentially and hasn't reached saturation.

I agree with the AI-ecosystem vision, more so than with singular foundational models. Local, domain-optimized models would be far more effective across the wide range of scenarios that would cause the disruption you are foreseeing. The thing is, when you say exponential, you assume a multiplicative improvement year over year, similar to the 'AGI by 2027' wacko, whereas I am predicting more of a cycle-like trend of general improvements, with two important notes: first, the current peak is not the final one; and second, the 'intelligence explosion'/'run-off' scenario is almost impossible given the underlying energy constraints. With current energy production, there is just not enough to handle such a scenario (Wells Fargo predicts AI power demand will surge 550% by 2026, to 52 TWh). We gotta scale up nuclear 5-10x and work towards fusion to have even a remote chance of satiating the upcoming energy demand.
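To unpack the cited figure: a 550% surge means 6.5x some baseline, so you can back out the implied starting point.

projected_twh = 52          # Wells Fargo's 2026 projection, as cited
multiple = 1 + 550 / 100    # a "550% surge" means 6.5x the baseline
baseline_twh = projected_twh / multiple
print(f"implied current AI power demand: {baseline_twh:.0f} TWh")  # ~8 TWh

Going from roughly 8 TWh to 52 TWh in about two years is why I don't see how an unconstrained 'run-off' scenario fits inside the existing grid.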


Chaos, Entropy, Order

9 hours ago, LordFall said:

@Ero Isn't the whole problem that it needs this much compute because it doesn't actually think on its own and just predicts outputs from inputs? 

Predicting outputs from inputs is what all ML models do (that's why they are called models). They need this much compute because they are 'searching' a parametric space of trillions of parameters; the cost comes from the sheer size of the models.
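To make the 'sheer size' concrete, here is the standard rough parameter count for a GPT-style decoder; the configuration is GPT-3's published one, and the formula is the usual approximation, not any lab's exact accounting:

# Rough parameter count for a GPT-style decoder (standard approximation).
def transformer_params(d_model, n_layers, vocab_size):
    attn = 4 * d_model * d_model       # Q, K, V and output projections
    mlp = 2 * d_model * (4 * d_model)  # up- and down-projections
    return n_layers * (attn + mlp) + vocab_size * d_model  # + embeddings

# GPT-3's published configuration: 96 layers, d_model = 12288
print(f"{transformer_params(12_288, 96, 50_257) / 1e9:.0f}B")  # ~175B

Frontier models are reported to be an order of magnitude larger still, which is where the 'trillions' come from.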

10 hours ago, LordFall said:

You would think that they need to find a way to reverse engineer the human thinking process and apply it to these models as well. 

Your intuition is on point IMO. Similar to Gary Marcus and, more recently, Yann LeCun, I am a proponent of 'embedding' the representations of human thinking/knowledge to achieve immense efficiency speed-ups. I am quoting the Lie-Poisson paper from my previous post on this:

'The advantage of PINNs is their computational efficiency: speedups of more than 10,000x for evaluations of solutions of complex systems like weather have been reported', referencing Accurate medium-range global weather forecasting with 3D neural networks and FourCastNet: A global data-driven high-resolution weather model using adaptive Fourier neural operators.
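For a sense of what 'physics-informed' means mechanically, a minimal sketch of a PINN-style loss on a toy 1-D harmonic oscillator u'' + u = 0; the tiny basis model and the finite-difference derivative are simplifications on my part, as real PINNs use neural networks and automatic differentiation:

import numpy as np

# PINN-style loss: fit scarce data while penalizing violation of the
# governing equation u'' + u = 0 at collocation points.
def model(t, w):
    # Stand-in for a neural network: a tiny parametric basis.
    return w[0] * np.sin(t) + w[1] * np.cos(t) + w[2] * t

def pinn_loss(w, t_data, u_data, t_col, lam=1.0, h=1e-3):
    data_mse = np.mean((model(t_data, w) - u_data) ** 2)
    u = model(t_col, w)
    u_pp = (model(t_col + h, w) - 2 * u + model(t_col - h, w)) / h ** 2
    physics_mse = np.mean((u_pp + u) ** 2)  # PDE residual
    return data_mse + lam * physics_mse

# Example: three noisy observations plus dense collocation points.
w0 = np.array([0.8, 0.1, 0.05])
t_obs = np.array([0.0, 1.0, 2.0])
print(pinn_loss(w0, t_obs, np.cos(t_obs), np.linspace(0, 6, 50)))

The physics term is what buys the efficiency: the equation constrains the hypothesis space, so far less data and compute are needed than for a purely data-driven model.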


Chaos, Entropy, Order

2 hours ago, Ero said:

We gotta scale up nuclear 5-10x and work towards fusion to have even a remote chance of satiating the upcoming energy demand. 

Can you tell I'm on the optimistic side of things? I don't deny your domain knowledge; hopefully you'll be part of the future of AI going in the "right" direction. I know that by exponential I really mean an S-shaped curve; it's not exponential forever. I just don't believe the ceiling of AI has been reached yet in a lot of areas; it really is early days. Some ceilings might be power consumption, transistor density and availability of data, but ingenuity knows no bounds, and some of these blocks will be bypassed one way or another. Shifts and paradigms have an uncanny ability to "come out of nowhere", the Transformer architecture being exactly one of those shifts.

What it might take is something akin to what Turing did for computation: ideas on what reasoning is and how to model it successfully, something way beyond propositional logic, say. We want AI to actually understand (reason about) what it is doing, i.e. self-reflection and context awareness.


57% paranoid

15 hours ago, Yimpa said:

It’s an Alien Intelligence!

 

That’s what AI stands for these days. ;) 


“Our most valuable resource is not time, but rather it is consciousness itself. Consciousness is the basis for everything, and without it, there could be no time and no resource possible. It is only through consciousness and its cultivation that one’s passions, one’s focus, one’s curiosity, one’s time, and one’s capacity to love can be actualized and lived to the fullest.” - r0ckyreed


The IMO result is extremely impressive. I am waiting for the paper to dig a little deeper into their exact method, outside of what we already know about the Lean-based RL. 

7 hours ago, nuwu said:

I wonder if AI solves all with more time, at least, theoretically necessary as long as solution expressions are included in proof language, specifically Lean. 

This is an interesting inquiry. Having worked with Lean in an academic setting, my general impression is similar to that of the following paper about proof as a social process. There is a disconnect between the human-centric proofs we write and the computationally verified proofs in Lean. The transfer in both directions is non-trivial, and I wonder whether at a certain scale it would become possible. (I can see, for example, from the AlphaProof IMO files that the annotations are human-made.)

Most of the time the approach/framework used is far more important than the proof itself, as was the case, for example, with Grothendieck's étale cohomology, which had an impact far wider than just its use for the Weil conjectures. I am not yet certain how much of that can be supplemented by formal verification tools. Lean itself is also very definition-dependent, and sometimes it is precisely the reworking of the definitions that is needed, rather than procedural generation that treats them as axioms.
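For anyone who hasn't seen it, this is the kind of artifact Lean accepts, a fully machine-checked proof; a trivial standard-library example, nothing to do with AlphaProof's actual files:

-- Lean 4: a machine-checked proof of a trivial lemma.
-- The checker verifies the proof term against the stated type;
-- there is no human-readable argument unless someone writes one.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

The gap I am describing is between terms like this and the prose proofs mathematicians actually exchange.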

 

 



Chaos, Entropy, Order

