Bobby_2021

AI has plateaued!

67 posts in this topic

Posted (edited)

6 hours ago, LastThursday said:

Can you tell I'm on the optimistic side of things?

Hah, I can. I am sure you are responsible, and I am not saying otherwise; I am just giving a little pushback so you don't fall prey to the current FOMO, which will undeniably wipe out a lot of wealth and investments if you are not careful.

6 hours ago, LastThursday said:

I know by exponential I really mean the S-shaped curve, it's not exponential forever

Good intuition. A technological innovation is normally an agglomeration of separate S-shaped curves that build on top of each other until all the vertical and horizontal growth is exhausted. AI is supposedly different due to the fundamentally different end-case scenario, the so-called 'superintelligence explosion'. We are undeniably headed there; however, imo it will take a lot longer than Musk or Leopold are currently predicting.

6 hours ago, LastThursday said:

 I just don't believe that in a lot of areas the ceiling of AI has been reached yet, it really is early days. Some ceilings might be power consumption, transistor density and availability of data, but ingenuity knows no bounds and some of these blocks will be bypassed one way or another. Shifts and paradigms have an uncanny ability to "come out of nowhere", the transformer architecture being exactly one of those shifts.

Absolutely. AI has a lot of room to grow, but for that to happen, LLMs and Transformers need to stop sucking all the air (talent and investments) out of the room, which is why I think a bursting of the bubble will be useful in the long run. 

Edited by Ero

On 7/25/2024 at 10:35 PM, Applegarden8 said:

see you in 3 years bro, i would like to know your opinion then

AGI, or whatever, will always remain 3 years away. For Silicon Valley tech bros it will be two weeks away, and some already claim to have it.

There are real structural and social problems with AI.

Your best shot is human + AI.

AI capacity is fixed, so it's the good old human vs. human competition that's going to make all the difference. It's other humans who use AI effectively who are going to replace you.

 


@Bobby_2021 Even from a spiritual context it would make sense that you're right. What is the whole point of humanity self-actualizing if some AGI just comes in and solves all our problems? 

Unless it somehow serves as a mechanism for spiritual and character growth, it's just gonna end up with a large % of the population living in a VR porn metaverse.


<3


The crux of the matter is just overinflated expectations, in my opinion. Tech bros spin up hype to garner investment, and we are still learning how exactly to use this technology in practice. The possibilities are not actually endless, but they can appear so with a little imagination.

We might be starting to cross the Peak of Inflated Expectations right now.

(image: the Gartner hype cycle curve)


Posted (edited)

Let’s wait for ChatGPT 5 and Sora to be released.

Edited by integral

How is this post just me acting out my ego in the usual ways? Is this post just me venting and justifying my selfishness? Are the things you are posting in alignment with principles of higher consciousness and higher stages of ego development? Are you acting in a mature or immature way? Are you being selfish or selfless in your communication? Are you acting like a monkey or like a God-like being?

22 hours ago, LordFall said:

self-actualizing if some AGI just comes in

There will never be an AGI.

Even an AGI will not help you actualize.

AI will get unfathomably good at specific domains that are well defined, like chess.

The more general it gets, the more it deteriorates in quality.

---

I don't see anyone solving the energy or data problem.

At the end of the day, AI is only as good as the data you feed it. Garbage in, garbage out.

And energy: nuclear fusion is always a couple of decades away.

A bold move would be to solve either of these, but we are nowhere close. I'd love to see someone break the deadlock.


Every AI I use is still improving in usefulness to me, so for me personally this is objectively false, though I understand my sample size is only 1.

I use ChatGPT 4 (free), AI Dungeon (currently subbed to Mythic on the Mixtral model), and NovelAI occasionally.

What you have to remember is that, as with Y2K supposedly ending all computer systems, AI will not end all jobs; it's just what people do, they make things overly dramatic.


Posted (edited)

15 hours ago, Bobby_2021 said:

At the end of the day, AI is only as good as the data you feed it.

The data problem cannot be true.

There obviously exists enough data on Earth to create an Einstein or any level of human genius. So the problem is not the data but the ineffective way current LLMs use it.

The current LLM approach is brute force. This approach will definitely plateau. But a new kind of neural network will be invented which works more like a human brain.

The human brain does not brute-force billions of data points.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


Posted (edited)

1 hour ago, Leo Gura said:

The data problem cannot be true.

There obviously exists enough data on Earth to create an Einstein or any level of human genius. So the problem is not the data but the ineffective way current LLMs use it.

The current LLM approach is brute force. This approach will definitely plateau. But a new kind of neural network will be invented which works more like a human brain.

The human brain does not brute-force billions of data points.

Do you think AGI could happen before 2030?

You posted a great interview on your blog where he says scaling up will not increase the intelligence of the AI one bit; it's just memory.

So changing the architecture is critical. I wonder whether there are things in the pipeline which could make AGI possible in the near future.

Edited by OBEler


Posted (edited)

1 hour ago, OBEler said:

Do you think AGI could happen before 2030?

Very unlikely. Silicon Valley techbros are clearly blowing smoke up their own asses. They are foaming at the mouth with greed and FOMO.

They cannot even get a car to drive itself.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


That's only if you conflate all of AI with Large Language Models (like ChatGPT).

While the idea that LLMs will somehow result in AGI is laughable (a bit like thinking that you're making tangible progress towards reaching the moon because you've managed to climb halfway up a very tall tree), it's completely reasonable to expect the continued proliferation of AI systems hyper-specialized to a specific domain. These will almost certainly be a disruptive technology that brings massive changes to everyday life, as many types of jobs that people work today will be automated.


I'm writing a philosophy book! Check it out at : https://7provtruths.org/

7 hours ago, Leo Gura said:

The data problem cannot be true.

There obviously exists enough data on Earth to create an Einstein or any level of human genius. So the problem is not the data but the ineffective way current LLMs use it.

The current LLM approach is brute force. This approach will definitely plateau. But a new kind of neural network will be invented which works more like a human brain.

The human brain does not brute-force billions of data points.

If you are talking about AGI, then I can assure you that it will not happen in a million years.

If you are talking about something better than the current models but not an AGI, there are stark problems with it as well.

At the end of the day, all these algorithms or models or whatever are glorified mathematics and statistics. That right there is the bottleneck.

An algorithm is well defined, finite, and hence dumb.

A finite algorithm cannot predict or give rise to something that could work with the infinite, which is what a brain like Einstein's does.

That should be pretty obvious.

No finite mathematical model can make sense of all the data available to us.

TL;DR

The bottleneck for models is mathematics and symbols. 

Humans have a sense making capacity that transcends symbols and mathematics.

---------

Which is why the boldest move is to feed in high-quality, dense data.

Feed the model wise sayings and shit. Then there might be a chance that the models give out high-quality output.

Even then, don't expect AGI. AGI is a lost cause. We should be building AI for specific use cases with tailor made data. 

We will soon have no option but to go for it once we have exhausted the theoretical limits for what an algorithm can do.


@Bobby_2021 I'm so confused as to why you specifically claim that AGI is not possible. Can you elaborate on why you believe that? A human mind basically works like a pattern-recognition algorithm as well, so I don't see why we couldn't make an artificial one.


<3


Posted (edited)

3 hours ago, Bobby_2021 said:

If you are talking about AGI, then I can assure you that it will not happen in a million years.

I disagree.

It should be possible to copy the structures of the brain and produce AGI.

How long that will take is completely unclear. I would guess 20-50 years.

A neural network is not really an algorithm. It is a near infinitely complex system. And it's not like human minds have anything close to infinite requisite variety. Even Einstein's mind was quite limited and predictable.

If you think about it, DNA is a finite algorithm which produces the Einstein brain. So obviously it is possible. Your DNA is not infinitely long.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


Posted (edited)

2 hours ago, Leo Gura said:

I disagree.

It should be possible to copy the structures of the brain and produce AGI.

How long that will take is completely unclear. I would guess 20-50 years.

A neural network is not really an algorithm. It is a near infinitely complex system. And it's not like human minds have anything close to infinite requisite variety. Even Einstein's mind was quite limited and predictable.

If you think about it, DNA is a finite algorithm which produces the Einstein brain. So obviously it is possible. Your DNA is not infinitely long.

Assuming for the sake of argument that a 1:1 algorithmic recreation of the human brain is 50 years away, that doesn't necessarily mean that AGI will be right around the corner.

As I'm sure you're aware, human intelligence is inherently embodied, meaning that it extends beyond the brain and is tied in important ways to how our brains are holistically integrated with a living body. (Or that you're aware of this perspective, at any rate, even if you don't entirely agree with the premise.)

The fundamental problem as I see it is that AI doesn't have any 'skin in the game' for what it 'reasons' about. It doesn't have a capacity for Care, because Reality doesn't have any consequences for a computer algorithm.

Access to food and socialization and self actualization opportunities aren't abstractions that we relate to in a disconnected way - when we're deprived of these things, we end up suffering in real ways.

Which is to say, living minds operate on axiomatically different principles than that of digital computing. The human brain literally changes its physiological structure as it learns - it's not clear how you would create an analogue for this, even in principle, on a digital computer (without external human input).

While I'm open to having my mind changed on this topic, I've yet to see these inherent difficulties substantively addressed, without hand-waving them away.

Edited by DocWatts

I'm writing a philosophy book! Check it out at : https://7provtruths.org/


Posted (edited)

1 hour ago, DocWatts said:

As I'm sure you're aware, human intelligence is inherently embodied, meaning that it extends beyond the brain and is tied in important ways to how our brains are holistically integrated with a living body. (Or that you're aware of this perspective, at any rate, even if you don't entirely agree with the premise.)

The fundamental problem as I see it is that AI doesn't have any 'skin in the game' for what it 'reasons' about. It doesn't have a capacity for Care, because Reality doesn't have any consequences for a computer algorithm.

I don't see this as a problem at all.

I'm confident it's possible to make a brain in a vat with no body.

Of course it won't be identical to an embodied human, but it will still have AGI.

Quote

Which is to say, living minds operate on axiomatically different principles than that of digital computing. The human brain literally changes its physiological structure as it learns

Current LLMs also change as they learn. I don't see that as a problem.

All that is needed to create AGI is to copy the abstract structure of the human neural network. The problem is that this structure is not yet known or understood.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


Posted (edited)

8 minutes ago, Leo Gura said:

I'm confident it's possible to make a brain in a vat with no body.

While a brain in a vat is of course possible (since we hypothetically have the technology to clone an entire human being), the point is that a disembodied brain wouldn't have the type of intelligence or reasoning faculties that an actual human being has.

Our brain is designed (not literally designed, but you catch my meaning) to work holistically with a human body. It's a bit like expecting an engine on a table to be able to do all of the things that a car can do.

 

 

Edited by DocWatts

I'm writing a philosophy book! Check it out at : https://7provtruths.org/


Posted (edited)

14 minutes ago, Leo Gura said:

Current LLMs also change as they learn. I don't see that as a problem.

LLMs are fine-tuned and adjusted through external human input, by actors who have a context for understanding the abstract symbols they're manipulating.

Symbol manipulation isn't in any way meaningful to a disembodied LLM. It's meaningful to people because of our embodied interactions with the everyday world, which computer algorithms are incapable of.

Edited by DocWatts

I'm writing a philosophy book! Check it out at : https://7provtruths.org/

12 minutes ago, Bandman said:

If we can't even make an artificial mosquito yet, why would we be able to make an artificial human brain?

Hence I said 20-50 years.


You are God. You are Truth. You are Love. You are Infinity.


AI's input:

Timeline Estimates

Estimating a timeline for achieving a 1:1 mapping of the human brain with AI is challenging and speculative. Here are some perspectives based on current trends and expert opinions:

Near-Term (Next 10-20 years): Significant progress in specific areas, such as understanding neural networks in certain brain regions and improving AI's ability to simulate these networks. However, a complete 1:1 mapping remains unlikely due to current technological and knowledge constraints.

Mid-Term (20-50 years): Possible breakthroughs in computational power (e.g., quantum computing), data collection technologies, and neuroscience. This period might see more comprehensive models that can simulate more complex brain functions, but a full-scale mapping of the entire brain at the neuronal level is still uncertain.

Long-Term (50+ years): Depending on advancements in multiple scientific and technological domains, this is a more realistic timeframe for potentially achieving a 1:1 mapping. It would require revolutionary advances not only in computational power but also in our understanding of the brain and data-acquisition capabilities.

Conclusion

While the idea of fully mapping the human brain with AI is a compelling and ambitious goal, it is a multi-decade, if not multi-century, project. The challenges are immense, and while progress is being made, the complete replication of the human brain's complexity in AI remains a distant goal. Advances in neuroscience, computer science, and related fields will determine the actual timeline, but significant breakthroughs are still needed before this goal can be realized.

