erik8lrl

AGI is coming


If you want to make some money this year, remember: the people who made the most during the gold rush were the ones selling the shovels.

2 hours ago, erik8lrl said:

Very cool! Yeah, it's not perfect, we'll see how it develops. 

It's not about perfection. Humans aren't perfect either.

 

You are missing that this is fundamentally doing something different from what humans do when they use their intelligence. We don't inform intelligence through unreasonable amounts of data; we inform intuition through unreasonable amounts of data.

1 hour ago, Scholar said:

It's not about perfection. Humans aren't perfect either.

 

You are missing that this is fundamentally doing something different from what humans do when they use their intelligence. We don't inform intelligence through unreasonable amounts of data; we inform intuition through unreasonable amounts of data.

Yes, if you want to word it this way. It's true that we don't inform intelligence through data alone; we inform it through connection and pattern recognition, which is also what neural networks do beyond data collection.
My point is that we don't know for sure that it is fundamentally different; it's still too early to tell. We don't know enough about neural networks to definitively rule out that they could be the starting point that leads to qualia.



@Leo Gura They can; the problem is simply that they can't fit powerful enough chips into a car.

 

I believe I read an article saying that Waymo already has fewer crashes per mile than the average driver. Of course that doesn't cover all terrains and road conditions, but the models now are the worst they will ever be from here on out; they will only improve. If clunky biological humans can drive a car, then precise machines will be able to one day too.



I love how everyone in here is acting like their prediction that AI is just a hype cycle and overblown is true, because y'all are clearly the experts on AI. Dunning-Kruger effect on full display.


I'm sure the experts in the field, nearly all of whom are saying we will have AGI by the end of the decade, are just wrong, because you all are obviously top executives at the largest AI companies with access to their best models. There was a leak over a year ago that Sora was developed in March 2023 but its release was delayed until now due to public fear over AI. They have far better models behind closed doors, but you all aren't ready for that conversation.

3 hours ago, Shodburrito said:

I'm sure the experts in the field, nearly all of whom are saying we will have AGI by the end of the decade, are just wrong, because you all are obviously top executives at the largest AI companies with access to their best models. There was a leak over a year ago that Sora was developed in March 2023 but its release was delayed until now due to public fear over AI. They have far better models behind closed doors, but you all aren't ready for that conversation.

Totally. It's possible they already have AGI or are getting really close.

6 hours ago, Shodburrito said:

I love how everyone in here is acting like their prediction that AI is just a hype cycle and overblown is true, because y'all are clearly the experts on AI. Dunning-Kruger effect on full display.

I love how you were incapable of engaging directly with anything that was said, had to strawman everyone, and had to use one of the worst arguments to make a case for your position.

Most of us didn't say that AGI definitely won't come in the next 5-10 years; most of us just tried to point out that the arguments used to make a case for such claims are weak and have a lot of holes in them, and that some of you have unreasonably high confidence in those claims.

 

If you really want to make the "but experts said this and you reject that" argument, then:

First, show us a survey where there is a high consensus among AI experts on when AGI will emerge (but it has to be a survey where AGI has a precise definition, so that those experts have the same definition in mind when they answer the question).

Secondly, you will have to make a case for why this time the experts' predictions will come true (because there is a clear inductive counterargument to this survey argument, namely that most of the AI experts' predictions in the past have failed miserably).


21 hours ago, erik8lrl said:

The world simulation model is exactly what OpenAI, Google, and even Tesla are working on right now. We can already see it with Sora; GPT5 will likely include Sora in its model understanding. Similarly with Google Gemini going multi-modal.

Will look forward to GPT5.

I will give one more thing that they will eventually need to solve. Right now it seems to be the case that AI doesn't have an abstract understanding of most things (I already said this); more specifically, it doesn't have an abstract understanding of negation or the negative. The proof of this is that in a lot of cases, when you tell it not to do something, it will still do it. Yes, in some cases it might get it right, but this is a problem of principle: you either have a real understanding of negation or you don't.

This includes instances where you want to create an image and say 'please don't include x in the image', or where you want it not to include a specific thing in its answer. This problem seems to be a tough one, given that you can't show it a clear pattern of what negation is just by using a dataset with a finite set of negating things. In fact, I would say the whole category of 'abstract understanding of things' can't be learned purely by trying to find the right pattern among a finite set of things; it seems to require an approach that is different in kind. For instance, if I have the prompt 'don't generate a mushroom in the image', I would need to show it an infinite number of images that don't include a mushroom, and even then the prompt's full meaning wouldn't be fully captured.

I will grant, though, that AI won't necessarily need a real abstract understanding of things to do a lot of tasks. But still, to make it as reliable as possible, some solution to this problem will eventually need to be proposed.
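
If anyone wants to reproduce this kind of negation test, here is a minimal sketch, assuming the OpenAI Python SDK (v1.x) and DALL-E 3 access; the prompt and the "mushroom" example are purely illustrative, and checking whether a mushroom shows up anyway is still done by eye:

```python
# Minimal sketch, assuming the OpenAI Python SDK (v1.x) and DALL-E 3 access.
# The prompt and the "mushroom" example are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for an image while explicitly negating one concept.
response = client.images.generate(
    model="dall-e-3",
    prompt="A photorealistic forest floor covered in moss. Do not include any mushrooms.",
    n=1,
    size="1024x1024",
)

# The API returns a URL; whether a mushroom appears anyway has to be checked by eye.
print(response.data[0].url)
```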


4 hours ago, zurew said:

I will give one more thing that they will eventually need to solve. Right now it seems to be the case that AI doesn't have an abstract understanding of most things (I already said this); more specifically, it doesn't have an abstract understanding of negation or the negative. The proof of this is that in a lot of cases, when you tell it not to do something, it will still do it. Yes, in some cases it might get it right, but this is a problem of principle: you either have a real understanding of negation or you don't.

Yeah, I think this is mainly caused by the size of the context window and the architecture. Right now, even with GPT4, the context window is small, meaning it can't remember long-term conditions naturally. You can sometimes get around this with prompting, something like "Don't do (this) from now on when answering my questions". You can also use the new memory feature that was released recently; it basically allows you to add a system prompt to the GPT so that it remembers it and applies it globally.
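
As a rough illustration, pinning a standing instruction as a system message looks something like this; a minimal sketch assuming the OpenAI Python SDK (v1.x) and a made-up "no mushrooms" rule:

```python
# Minimal sketch, assuming the OpenAI Python SDK (v1.x); the "no mushrooms"
# rule is a hypothetical standing condition used only for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    # The standing "don't do this" condition lives in the system prompt,
    # so it applies to every turn instead of depending on chat memory.
    {"role": "system", "content": "Never mention mushrooms in any of your answers."},
    {"role": "user", "content": "List five things you might find on a forest floor."},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```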
 

5 hours ago, zurew said:

This includes instances where you want to create an image and say 'please don't include x in the image', or where you want it not to include a specific thing in its answer. This problem seems to be a tough one, given that you can't show it a clear pattern of what negation is just by using a dataset with a finite set of negating things. In fact, I would say the whole category of 'abstract understanding of things' can't be learned purely by trying to find the right pattern among a finite set of things; it seems to require an approach that is different in kind. For instance, if I have the prompt 'don't generate a mushroom in the image', I would need to show it an infinite number of images that don't include a mushroom, and even then the prompt's full meaning wouldn't be fully captured.

I think if you are using GPT4/Dalle-3 to generate images, you won't have a good time with customization. Dalle-3 is really good at text interpretation, but it can't really do inpainting or negative prompts. The way it changes images is by reverse-prompting a text description from your images and then using that prompt to generate similar results; it's not editing the images, so you won't have consistency. GPT4/Dalle-3 is really only good for generating initial images that require complex text interpretation. For image generation, I think it's best to use Midjourney, SD, or, if you can, learn ComfyUI. These give you far more customization ability, and negative prompting works great with them. With ComfyUI you can even customize the lower-level processes of the models to fit your needs, and there are endless workflows shared by other users for specific image generation tasks. GPT4/Dalle-3 is really not designed for image generation; GPT5, however, will be a different story.
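
For what negative prompting looks like on the SD side, here is a minimal sketch, assuming the Hugging Face diffusers library, a CUDA GPU, and the runwayml/stable-diffusion-v1-5 checkpoint as an example model:

```python
# Minimal sketch, assuming the Hugging Face diffusers library, a CUDA GPU, and
# the runwayml/stable-diffusion-v1-5 checkpoint as an example model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a forest floor covered in moss, photorealistic",
    negative_prompt="mushroom, fungus",  # concepts the sampler is steered away from
).images[0]

image.save("forest_floor.png")
```

The negative prompt is applied at the sampling level (guidance away from those concepts), which is why it tends to be more reliable than phrasing a negation inside the text prompt itself.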

5 hours ago, zurew said:

The proof of this is that in a lot of cases, when you tell it not to do something, it will still do it.

Because it is intuitive. It is for this same reason that, when you think "Don't think about an elephant", your subconscious mind will provide you with an elephant.

The subconscious and unconscious mind are basically neural networks, and they aren't intelligent because they aren't conscious.

 

We are on actualized.org; how can people be so confused about the nature of intelligence? Just go sit down and meditate for a while. You can directly see what it is, and how absurd the notion is that somehow complexity will lead to individuated consciousness.

 


36 minutes ago, erik8lrl said:

Yeah, I think this is mainly caused by the size of the context window and the architecture. Right now, even with GPT4, the context window is small, meaning it can't remember long-term conditions naturally. You can sometimes get around this with prompting, something like "Don't do (this) from now on when answering my questions". You can also use the new memory feature that was released recently; it basically allows you to add a system prompt to the GPT so that it remembers it and applies it globally.

I disagree that it can be solved by a better context window. The nature of the problem is much deeper than that.  

It's not a matter of forgetting something; it's a matter of not doing it in the first place. The examples I mentioned were ones where you give it a prompt and it immediately fails to do what the prompt says (not cases where it fails to maintain a long-term condition you gave it a few responses ago). It's like: User: "Hey GPT, don't do x." GPT4: does x.



Zurew, I'd like you to provide some examples of that.

You have no idea what you're talking about.

GPT-4 does not have perfect reliability in following instructions, including negations. This also applies to humans. It says nothing about its general ability to understand.

If it doesn't understand, then how do you explain its ability to solve unseen problems? For example, as I mentioned above, it can solve a wide array of programming problems it has never come across. You could probably google more examples of its problem solving.

1 hour ago, thenondualtankie said:

GPT-4 does not have perfect reliability in following instructions, including negations. This also applies to humans. It says nothing about its general ability to understand.

You still don't get the depth of the problem. It's not a matter of "sometimes I can follow the instructions and sometimes I don't". It's not a cherry-picking problem; it's different in kind.

You either know the abstract meaning of negation or you don't. If you tell a human not to do x, that human won't do that action, given that he/she understands what that 'x' is. If you do the same thing with an AI, it will still do the action even if it knows what x is. And not just that, but in its response it will give the bullshit memorized take of "Yeah, I apologize for my mistake, I won't put x on the image anymore", and then right after that it generates an image that has x on it.

I will apply this to a programming problem, because it perfectly illustrates what's wrong with saying "sometimes it can work and sometimes it can't". If you know programming you will understand this: I can write a function that asks for an integer input and then prints out that input. Now, given that the condition is met correctly (that I give an integer as input, which the computer can read), it wouldn't make any sense to say that this function will only work 40% or 30% of the time. No: once all the required conditions are met, it either can perform the function or it can't.
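
To make that concrete, here is a minimal sketch of the kind of function I mean (Python, purely for illustration):

```python
# Minimal sketch of the function described above: once the precondition is met
# (the input parses as an integer), it either performs the task or it doesn't;
# there is no "works 40% of the time".
def echo_integer() -> None:
    value = int(input("Enter an integer: "))
    print(value)

if __name__ == "__main__":
    echo_integer()
```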

Another way to put it is the difference between deductive and inductive arguments. With deductive arguments the conclusion always follows from the premises, but with inductive ones the conclusion won't necessarily follow 100% of the time.

Another way to put it is in terms of math proofs. It would make zero sense to say that a given math proof only works 90% of the time.

 

1 hour ago, thenondualtankie said:

If it doesn't understand, then how do you explain its ability to solve unseen problems? For example, as I mentioned above, it can solve a wide array of programming problems it has never come across. You could probably google more examples of its problem solving.

I already gave a response to this:

Quote

Right now you could do this with me:

Give me a foreign language that I understand literally nothing about, in terms of the meaning of sentences and words, and then give me a question in that foreign language with the right answer below it. If I memorize the syntax (meaning, if I can recognize which symbol comes after which other symbol), then I will be able to give the right answer to said question even though I semantically understand nothing about the question or the answer; I can just use the memorized patterns.

The AI seems to be doing the exact same thing, except with the little twist that it can somewhat adapt said memorized patterns. If it sees a pattern that is very similar to another pattern it already encountered in its training data, then, in the context of answering questions, it will assume the answer must be the exact same or very similar, even though changing one word or adding a comma to a question might change its meaning entirely.

https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523 - this article breaks down pretty thoroughly the problems with GPT4's reasoning capability and understanding of things.


8 hours ago, zurew said:

You still don't get the depth of the problem. It's not a matter of "sometimes I can follow the instructions and sometimes I don't". It's not a cherry-picking problem; it's different in kind.

You either know the abstract meaning of negation or you don't. If you tell a human not to do x, that human won't do that action, given that he/she understands what that 'x' is. If you do the same thing with an AI, it will still do the action even if it knows what x is. And not just that, but in its response it will give the bullshit memorized take of "Yeah, I apologize for my mistake, I won't put x on the image anymore", and then right after that it generates an image that has x on it.

I will apply this to a programming problem, because it perfectly illustrates what's wrong with saying "sometimes it can work and sometimes it can't". If you know programming you will understand this: I can write a function that asks for an integer input and then prints out that input. Now, given that the condition is met correctly (that I give an integer as input, which the computer can read), it wouldn't make any sense to say that this function will only work 40% or 30% of the time. No: once all the required conditions are met, it either can perform the function or it can't.

Another way to put it is the difference between deductive and inductive arguments. With deductive arguments the conclusion always follows from the premises, but with inductive ones the conclusion won't necessarily follow 100% of the time.

Another way to put it is in terms of math proofs. It would make zero sense to say that a given math proof only works 90% of the time.

 

I already gave a response to this:

https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523 - this article breaks down pretty thoroughly the problems with GPT4's reasoning capability and understanding of things.

Yeah, I think current models definitely have this problem, especially for complex logical reasoning. It might be solved with more parameters or better architecture; we do see that "intelligence" and reasoning ability increase as we scale. I think the new Gemini 1.5 scored 80-90% on reasoning benchmarks. We'll see if newer models like GPT5 improve in this area; it might or might not be a problem related to scale, since we still don't fully know what emergent properties can appear as we scale larger and larger. For me, a lot of the time when GPT4 doesn't do what I ask, it's mostly because my prompt isn't specific or detailed enough; most of the time I find that improving my prompting helps, but not always.
I think context window size does help for coding. For me, most of the code GPT4 gives me is out of context, so it's often the most common implementation of something. If Gemini can generate code in the context of the whole system, then it should produce better results.



This shows how AI experts' predictions of the AGI timeline keep getting shorter as the exponential curve progresses.


AI chip breakthroughs are also happening; hardware is accelerating as well.


