Bobby_2021

AI has plateaued!

67 posts in this topic

Posted (edited)

We don't need a complete mapping of the brain. Just an understanding of core features of neural networks.

Transformers and back-propagation are two such features. We just need to discover more like that.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


Posted (edited)

4 hours ago, Leo Gura said:

We don't need a complete mapping of the brain. Just an understanding of core features of neural networks.

A complete mapping of the brain would be of more benefit to medicine and cognitive science than it would be to artificial intelligence research.

A 'neural network' is a metaphor, not a description of what large language models are actually doing under the hood.

David Chapman is an AI researcher who left the field to become a monk, and he had this to say about neural networks:

"Everyone working in the field knows "neural networks" are almost perfectly dissimilar to biological ones, but the language persists  "Yes, of course, everyone knows that, so it's harmless". No, it's not. And it's not just that is reliably confuses people outside of the field [it's also misleading, and impedes AI safety measures]". 

Edited by DocWatts

I'm writing a philosophy book! Check it out at : https://7provtruths.org/


Posted (edited)

 

18 hours ago, Leo Gura said:

The data problem cannot be true.

There obviously exists enough data on Earth to create an Einstein or any level of human genius. So the problem is not the data but the ineffective way current LLMs use it.

The current LLM approach is brute force. This approach will definitely plateau. But a new kind of neural network will be invented which works more like a human brain.

The human brain does not brute force billions of data points.

Look into the biocomputing developments from the past couple of months.

Actual brain tissue is grown from stem cells and incorporated into computing.

It has cut energy usage and improved overall processing, while enabling further study into more organic algorithms and data compression.

@Leo Gura

Edited by yetineti

27 minutes ago, yetineti said:

Look into the biocomputing developments from the past couple of months.

Actual brain tissue is grown from stem cells and incorporated into computing.

It has cut energy usage and improved overall processing, while enabling further study into more organic algorithms and data compression.

I don't know what effect this has on AGI.


You are God. You are Truth. You are Love. You are Infinity.


@nuwu

25 minutes ago, nuwu said:

Wetware is not science, this is slavery.

Actually, slavery is what was used to make the device you’re typing on.

Biocomputing uses lab-grown brain tissue from stem cells.

Your fear, however, is not unwarranted. This technology would be the foundation for, yes, harvesting brains or lifeforms for computational power.


Posted (edited)

@Leo Gura

AGI is a misnomer.

AGI would just be all of the complex systems of a human, but man made.

Not going to happen.

Realistically, it will be a combination of the useful systems; pissing and shitting, for example, will be excluded because they're not useful for the type of intelligence the people building this stuff care about. Nobody is going to program a robot to think it is peeing when it is not. But all of that would be required to get 'AGI.'

Nobody is trying to create AGI or humanoids, and if they say they are, they are mistaken. They are just creating robots, and the robots just need to be 'relatable.'

We maximize specific fields, such as language and vision, and are now moving to more of the physical abilities, such as walking, grabbing, etc.

All of these require new types of computing, mass data input, mass data retention, high levels of energy expenditure, etc.

All of these things biocomputing will do better.

I don’t know what else to tell you.

Whether or not we agree on what 'AGI' is or how we get there, high levels of computing will be needed, and biocomputing is clearly going to have certain advantages.

Edited by yetineti


Posted (edited)

@yetineti All AGI means to me is a chatbot which can do everything a human mind can do with language.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


Posted (edited)

@nuwu You chimed in saying wetware was slavery, and you want to talk to me about gaslighting and straw-manning?

Yeah, what you mentioned is not revolutionary, or what I was talking about.

Edit: It was what I was talking about. I didn't know Leo's exact 'AGI definition,' and I did not ask. Instead, I just poorly explained what I don't think it is and what it is, plus a bunch of nonsense beside the point.

Also, wetware can be slavery.

Lastly, the tone I had is abnormally defensive for an internet interaction.

❤️

 

Edited by yetineti


@Leo Gura Biocomputing will help with your chatbot view of AGI, humanoid robots: all of the above.


Posted (edited)

@nuwu I am sorry for my defensiveness.

You are correct. I did state very obvious things, centered around qualia, as you mentioned in a well-put, instructional manner. Thank you.

I did not understand how Leo did not see how biocomputing would have an effect on AGI. Or maybe that was just his way of probing me to share what I know. My solution was not to ask him what his definition of AGI was, but to tell him mine, and poorly.

I do not know what is gaslighty about that or what ‘sentiment to infinity’ means, but I am sorry for my defensiveness.

Edited by yetineti


AGI by 2030 might not be off the table imo

 

On 30/07/2024 at 10:40 AM, Leo Gura said:

The data problem cannot be true.

There obviously exists enough data on Earth to create an Einstein or any level of human genius. So the problem is not the data but the ineffective way current LLMs use it.

The current LLM approach is brute force. This approach will definitely plateau. But a new kind of neural network will be invented which works more like a human brain.

The human brain does not brute force billions of data points.

@Leo Gura Can I share another perspective?

1. The human brain DOES require a ton of data. Think of all the terabytes of sensory input we took in as babies and children.

2. LLMs CAN indeed learn without billions of data points!!! It's called in-context learning. If you showed GPT a picture of a new kind of animal, then later on gave it a multiple-choice question asking it to pick out the new animal, it would get it right.

In-context learning is obviously limited, but the point is, it exists. Google's Gemini managed to learn an entire language through in-context learning: it was given a dictionary plus grammar rules.
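To make that concrete, here's a rough sketch of what in-context learning looks like in code. Nothing is trained or fine-tuned; the "learning" happens entirely inside the prompt. Everything here is hypothetical, including the query_llm function (a stand-in for whatever chat API you use) and the made-up animal:

```python
# Minimal sketch of in-context learning: the model "learns" a new animal
# purely from text placed in its context window, with no weight updates.
# query_llm() is a hypothetical stand-in for any chat-completion API.

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder; wire up your actual LLM API call here."""
    raise NotImplementedError

# 1. The "training data" is just part of the prompt.
context = (
    "A 'glimmerfox' is a small nocturnal animal with silver fur, "
    "a bushy split tail, and large reflective eyes.\n\n"
)

# 2. Later in the same context, ask a multiple-choice question about it.
question = (
    "Which of the following describes a glimmerfox?\n"
    "A) A scaled animal with a flat tail\n"
    "B) A small silver-furred animal with a split tail and reflective eyes\n"
    "C) A bright green bird\n"
    "Answer with a single letter."
)

prompt = context + question
print(prompt)  # feed this to query_llm(prompt); a capable model answers "B"
```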

19 hours ago, Leo Gura said:

A neural network is not really an algorithm. It is a near infinitely complex system. And it's not like human minds have anything close to infinite requisite variety. Even Einstein's mind was quite limited and predictable.

A human mind can **deal** with infinite data. It can make sense of infinite data in no time, no matter how imperfect it may be.

A neural net cannot deal with infinite data, only countably infinite data that can be organized, like in chess.

[ I know chess is not infinite, but we are speaking relatively here ]

20 hours ago, LordFall said:

@Bobby_2021 I'm so confused as to why you specifically claim that AGI is not possible. Can you elaborate on why you believe that? A human mind basically works like a pattern recognition algorithm as well, so I don't see why we couldn't make an artificial one.

Human minds make sense of things with emotions. We are scared of death, so we had to make sense of things. This emotion is post-linguistic, which is a bottleneck for machines. All machines do is process digital information.

Humans have intellect and intuition.  To make sense of something, you need both in the right balance. 

Now, neural nets can mimic emotions to some degree, but they will miss out on the intellect part.

This is why AI always messes up the fingers on hands. You cannot give AI a rule that human hands have five fingers. It does not take rules. It loses out on intellect.

To strike the right balance of intuition and intellect, you need intuition and intellect. 

The correct rules are determined by intuition. But those same rules might take precedence over the intuition.

In AI chess that uses neural nets, there is an objective. The AI plays chess with itself and learns how to reach the objective. This is fine for a specific AI. How would you **teach** a general AI? What is its objective? Humans have emotions and the objective of survival.
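For what it's worth, a bare-bones version of that self-play loop looks something like this (a sketch only; the game and policy interfaces are hypothetical placeholders, not a real library):

```python
# Bare-bones self-play loop: the AI improves by playing against itself
# toward one unambiguous objective (winning). Every name here (new_game,
# policy, etc.) is a hypothetical placeholder.

def self_play_training(policy, new_game, num_games=10_000):
    """Train `policy` by repeated self-play on games created by `new_game()`."""
    for _ in range(num_games):
        game = new_game()
        history = []
        while not game.is_over():
            move = policy.choose_move(game.state())  # same net plays both sides
            history.append((game.state(), move))
            game.play(move)
        # The objective is well-defined: +1 win, -1 loss, 0 draw.
        outcome = game.result()
        policy.update(history, outcome)  # reinforce moves that led to wins
    return policy
```

Notice the whole loop only works because the objective is a single scalar the game itself provides. That is exactly what is missing for a general AI.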

For AI it is a complicated mess. 

But who knows, quantum computing and analog computing may be able to mimic emotions, though I am skeptical even then.


Good breakdown on how AI is gonna reach AGI, by doing automated AI research on itself

The researcher mentioned, Leopold Aschenbrenner, has a good point that a human thinks at most at about 100 tokens per minute. Even if you believe that AIs are incredibly stupid, that doesn't really matter when they're gonna be able to compute stuff at millions of tokens per minute and above, and basically reverse-engineer the universe.

Incredibly fascinating stuff. This is God having pity on us and giving us a handout; humans were never gonna figure out much about infinity fast enough to solve our issues. 


<3


Posted (edited)

  • The responses a human produces are somewhat predictable and follow an ever-evolving pattern.
  • The input a human receives is infinite.

Our reality uses both these things to be what it is.

Infinite complexity, infinite subjects, infinite connections, infinite variables, infinite calculations, infinite ways to structure the data, etc. Yes, the output a human makes is also infinite, even if you can recognize a familiar tone or way of being.

Receiving an infinite amount of something is not going to be possible in a fixed form. You are not a fixed form. 

Until they can make the AI actually adapt itself physically to receive infinity, just like your body, life, cells, social groups, mind, distinctions, consciousness, etc. do, it cannot receive infinity.

Vortex maths is far superior to regular math; it may require using life's building blocks to better imitate life. Imitate, because that can still be patterned, modeled, and improved. The process of improving AI is the closest we'll get to an AGI, imho.

Edited by BlueOak


Posted (edited)

@Bobby_2021 I think you would be terribly disappointed by how quantifiable and mechanical human emotions are. 

Robert Plutchik has one of the most scientifically backed emotional models out there, and it's based on just 8 main emotions that can be further divided into more precise ones. This one here has 96 different ones:

[Image: Plutchik's wheel of emotions]

From there, all you have to do is run experiments with weights and predictions to fine-tune the model and probably predict human behaviour. For example, you could do it with exams and test how much each emotion affects testing scores. Or use something like Neuralink to track emotional responses in brains in real time over weeks/months, and at that point humans will basically be reverse-engineered.
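As a rough sketch of what those "weights and predictions" could look like in practice (only the emotion labels come from Plutchik's model; the students, scores, and model choice below are all made up for illustration):

```python
# Rough sketch: predict exam scores from self-reported intensities of
# Plutchik's 8 primary emotions. The data below is hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

EMOTIONS = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

# Each row: one student's emotion intensities (0-1) before an exam.
X = np.array([
    [0.8, 0.7, 0.2, 0.3, 0.1, 0.0, 0.1, 0.6],
    [0.2, 0.3, 0.9, 0.4, 0.6, 0.1, 0.3, 0.2],
    [0.5, 0.6, 0.5, 0.2, 0.3, 0.0, 0.2, 0.5],
    [0.1, 0.2, 0.8, 0.5, 0.7, 0.2, 0.5, 0.1],
    [0.7, 0.5, 0.3, 0.6, 0.2, 0.1, 0.0, 0.7],
])
y = np.array([88, 54, 71, 47, 82])  # exam scores (hypothetical)

model = LinearRegression().fit(X, y)

# The fitted weights estimate how much each emotion moves the score.
for emotion, weight in zip(EMOTIONS, model.coef_):
    print(f"{emotion:>12}: {weight:+.1f}")
```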


@BlueOak I would argue that AI models today can already receive infinity more so than humans can. GPT-4 can read at about 3,000 WPM, for example, whereas humans read at only 200-400 WPM.


Also worth noting that AI can "cheat" by having the smartest, most well-funded humans on earth label the data for it and help it with cognition. After doing more research, I would now argue that not only is AGI inevitable, but there's a good chance it will happen around 2027.

Edited by LordFall

<3


Posted (edited)

9 hours ago, LordFall said:

 

@BlueOak I would argue that AI models today can already receive infinity more so than humans can. GPT-4 can read at about 3,000 WPM, for example, whereas humans read at only 200-400 WPM.

Also worth noting that AI can "cheat" by having the smartest, most well-funded humans on earth label the data for it and help it with cognition. After doing more research, I would now argue that not only is AGI inevitable, but there's a good chance it will happen around 2027.

This touches on subjects that are hard to quantify (obviously :)), like infinity, and puts them in a way we've not discussed much before. So there'll be obvious gaps here in my communication, but that's fun too.

1) Part of me wants to say all that exists is the moment you are having, and if you are reading text, that is the universe. Along with the page, what it evokes inside you, the air around you, your emotional/energetic state, the feel of the paper, how and where it unfolds in life, etc. I experience that all that exists is the observation of the current moment; if that's 1's and 0's, then that is what it is.

2) Part of me wants to say that the potential of infinity is infinitely greater than the written word or an electronic impulse.

If we consider AGI to be the process of adapting the AI itself, which we are doing to it, then I can consider that we've reached it now, because we are altering the shape of the machine internally and externally in its entirety in the ongoing process of receiving infinity. However, there is no 'superior' in an infinite state; the term doesn't make sense.

If we consider AGI to be a fixed state that you are trying to describe to me, then no, because no fixed form is able to receive infinitely. I've been wanting a way of wording this for a long time, and this is the best I have: in order to receive infinity, the form has to be infinitely changing.

Edited by BlueOak


I think I understand what you mean. This infinity that we experience, though, can be sliced into pretty small pieces. For example, we can cut it with time and talk about what you experience within the timespan of the next 10 seconds of your life. You, I guess, argue that's infinite, but cognitively your brain can only experience and remember so much. So let's say after 10 seconds you write down everything that you remember about those 10 seconds.

You could argue that not only could an AI experience (or at least interpret/take in) infinitely larger amounts of data than we could, depending on its iterations, but it could also basically multiply its consciousness: for example, experiencing 10 seconds within the data it's getting fed from, let's say, documents being uploaded to it, video data it's getting from cameras, and eventually literally Neuralink human brain data. In terms of time you could say it's the same 10 seconds, or that it manages to multiply its time, depending on your perspective.

But I think that is precisely why AI is gonna be able to receive infinite amounts of data way before we can.


<3


Posted (edited)

@LordFall

We already do receive infinity. Not slices.

Try not to think purely in terms of size; infinity is also tiny. This moment is all that exists in your reality; you are experiencing it in its entirety right now. Which means you are experiencing infinity (yourself), and that is all that exists.

Then we invent things in our mind that are possible or potential, but these don't exist because they are outside of our reality; they are concepts. I'll touch on this more in the ending below.

Are there states of being where you can recall more information? Yes. Are there states of being where you can perceive more? Yes. But the receiving of infinity you are doing in its entirety right now. It is altering your body, your mind, your cells, your relationships, your choices, your distinctions, etc., as we do to the AI (and as it does to itself in its self-learning).

If we want to fixate on size or complexity (which is perhaps more helpful), it's true there is much stored outside of memory to recall. I've touched on some things: the unconscious has a vast amount of data for you to access, and how and where things occur in life is crucial to remember as a factor, along with all the things that make your mind and your reality yours.

So while I agree we can create anything, it doesn't make it more or less towards the absolute, because infinity doesn't have a size, dimension, form, amount, or nature in its totality. Humanity is part absolute, part self; these are the two things we are talking about. The self can be fashioned in many ways, but the absolute half remains unquantifiable.

Edited by BlueOak


An AGI should be able to generate controls for flying a fighter jet that can refuel in mid-air, dodge missiles...

and the SAME AI should also be able to play chess endgames to perfection...

And that SAME AI should be able to do supercomputer calculations...

Can the same model, which is mathematical in nature, do all of this?

I am not saying that we cannot build specialised AIs that could do either one of those. That's certainly possible.

But one mathematical model that can address (make sense of) any number of distinctions will not be able to do all of those tasks. 

Human beings use post-linguistic sense-making that is holistic, while AI's is not.

