zurew

Member
  • Content count: 2,815
Everything posted by zurew

  1. I have read somewhere that this is a sort of safety-check on AI, because the standard model would produce racist images (but this might be total bullshit). All I know is that I have seen some wild shit generated with some of these models. Like:
  2. @thenondualtankie At this point I don't know what your argument actually is. You are selectively engaging with my replies and you don't seem to build up to any conclusion - you're just randomly making points. My whole argument's goal was to show that LLMs are currently bad at reasoning, have a poor understanding of things, and are mostly just regurgitating. The argument isn't that AI will never improve, or that AGI is impossible, or that AGI is necessarily far away (I'm purposefully staying away from making predictions). I linked you two articles that contain dozens of reasoning tasks that GPT-4 fails to solve. I also linked examples showing how, even in the case of a programming competition, GPT-4 was able to solve 10/10 problems (because that data was contained in its training data), and when it was given a new set of competition questions it scored 0/10. The same thing happened with a certain reasoning task: it had memorized the answer to one version, and if you tweaked the question a little bit, it immediately failed to figure out the right answer. There are other examples of this, and you can read reddit users describing how changing just one word makes GPT-4 immediately fail to solve a problem. I shared all of those things with you and you haven't engaged with any of them. So the question is: what is your response to all of that? https://medium.com/@konstantine_45825/gpt-4-cant-reason-addendum-ed79d8452d44 - this is another article from the same author as the previous one; the difference is that he ran the same tests on GPT-4's updated version, and the result was the same: GPT-4's reasoning capability didn't improve. Now, regarding your specific question - there are examples of that in the article. I will share the direct quotes so that you can use ctrl + f to find where they are in the article. Another one from the same article:
  3. This is a much, much deeper and more nuanced topic than how it is usually framed and how most non-philosophers try to debate it. A lot of philosophy knowledge is needed to properly understand and ground most of the arguments. If you are interested, I would suggest diving deep into sources like: https://iep.utm.edu/foreknow/#H2 https://iep.utm.edu/freewill/ https://plato.stanford.edu/entries/determinism-causal/ https://plato.stanford.edu/entries/compatibilism/ I like Robert Sapolsky as a biologist, but I think he is pretty much out of his depth here when it comes to philosophy.
  4. It seems to lack some fundamental physics knowledge, or at the very least seems confused about it (based on the last reddit post I sent), but the time is probably not far away when they can give it all the necessary physics equations, so that it can get a more fundamental understanding of how this world works and won't be limited to learning just by observing. All that being said, I agree that it doesn't need to be perfect in order to destroy the current market. It will 100% be used in multiple fields, especially given that you can talk to it and use it to edit (as many times as you want) the videos that you feed to it.
  5. A child's case is different on multiple levels. A child can refuse to take an action for many more reasons than simply not hearing the suggestion to do something. The problem with the reason you gave is, first, that I could give that exact same reason for any problem the AI can't solve: "even though it actually knows how to solve this problem, it just didn't pay enough attention to what I asked it to do." The problem with that kind of reasoning is that it is not consistent with how the AI operates. The AI will be much, much less likely to "overlook" certain tasks compared to others. If you ask it to generate a mushroom image, the likelihood that it won't pay sufficient attention to that is much lower than for its negation. The other problem with the reason you gave is that there are instances where you can ask it multiple times not to do an action, and it continues to do it. So after 3-4 prompts (continuously asking it not to do something), it becomes strange to assume that it just overlooked a word.
  6. Well, yeah, we are getting closer to that time, although they will need to solve things like this first: https://www.reddit.com/r/OpenAI/comments/1arrqpz/funny_glitch_with_sora_interesting_how_it_looks/ What's interesting, though, is that Sora generated a lot of negative feelings and comments towards AI in general. A little bit of a social panic of some sort (155k likes on a hate tweet).
  7. https://www.reddit.com/r/singularity/comments/1avk2hr/sora_style_transfer/
  8. Yeah, I think this is probably what will be needed. I have seen some experts talk about combining multiple architectures together, because they think that will be the best approach to AGI and that LLMs alone won't be sufficient. So LLMs will probably be one part of AGI, but there will be more.
  9. You still don't get the depth of the problem. It's not a matter of "sometimes it can follow the instructions and sometimes it can't". It's not a cherry-picking problem; it's different in kind. You either know the abstract meaning of negation or you don't. If you tell a human not to do x, that human won't do that action, given that he/she understands what that 'x' is. If you do the same thing with an AI, it will still do the action even if it knows what x is. And not just that, but in its response it will give the bullshit memorized take of "Yeah, I apologize for my mistake, I won't put x on the image anymore" and then right after that it generates an image that has x on it. I will apply this to a programming problem, because it illustrates perfectly what's wrong with saying "sometimes it can work and sometimes it can't" (if you know programming you will understand this; see the sketch below): I can write a function that asks for an integer input and then prints that input out. Given that the condition is met correctly (that I give an integer as an input that the computer can read), it wouldn't make any sense to say that this function will only work 40% or 30% of the time - no, once all the required conditions are met, it either can perform the function or it can't. Another way to put it is to contrast inductive and deductive arguments: in a deductive argument the conclusion always follows from the premises, but in an inductive one the conclusion won't necessarily follow 100% of the time. Yet another way to put it is to talk about math proofs: it would make zero sense to say that a math proof only works 90% of the time. I already gave a response to this: https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523 - this article breaks down pretty thoroughly the problems with GPT-4's reasoning capability and understanding of things.
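     A minimal sketch, in Python, of the kind of deterministic function described above (the function name and prompt text are just illustrative choices, not from the post):

     ```python
     # Minimal sketch of the deterministic function described in the post above.
     # Once the precondition is met (the input parses as an integer), it either
     # performs its job or it doesn't -- there is no "works 40% of the time".

     def echo_integer() -> None:
         """Read an integer from stdin and print it back."""
         raw = input("Enter an integer: ")  # precondition: must parse as an integer
         value = int(raw)
         print(value)

     if __name__ == "__main__":
         echo_integer()
     ```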
  10. I disagree that it can be solved by a better context window. The nature of the problem is much deeper than that. It's not a matter of forgetting something; it's a matter of not doing it in the first place. The examples I mentioned were ones where you give it a prompt and it immediately fails to do what the prompt says (not ones where it fails to maintain a long-term condition that you gave it a few responses ago). It's like - User: "Hey GPT, don't do x." GPT-4: does x.
  11. Will look forward to GPT-5. I will give one more thing that they will eventually need to solve: right now it seems to be the case that AI doesn't have an abstract understanding of most things (I already said this), and more specifically, it doesn't have an abstract understanding of negation or the negative. The proof of this is the fact that in a lot of cases, when you tell it not to do something, it will still do it. Yes, in some cases it might get it right, but this is a principled problem, where you either have a real understanding of negation or you don't. This includes instances where you want to create an image and say "please don't include x on the image", or where you want it not to include a specific thing in its answer (a rough sketch of how one might probe this follows below). This problem seems to be a tough one, given that you can't just show it a clear pattern of what negation is by using a dataset that contains a finite set of negating examples. In fact, I would say the whole category of "abstract understanding of things" can't be learned purely by trying to find the right pattern between a finite set of things - it seems to require an approach that is different in kind. Because, for instance, if I have a prompt of "don't generate a mushroom on the image", I would need to show an infinite number of images that don't include a mushroom, and even then the prompt's full meaning wouldn't be fully captured. I will grant, though, that AI won't necessarily need a real abstract understanding of things to do a lot of tasks, but still, eventually - to make it as reliable as possible - some solution will need to be proposed to this problem.
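     A rough sketch (not the poster's own test) of how one might probe the text-side version of this negation claim, assuming the `openai` Python client (>= 1.0) and an API key in the environment; the forbidden word and the prompt are made up for illustration:

     ```python
     # Rough sketch of a negation-compliance probe: ask the model NOT to use a
     # word, then check whether the word shows up anyway.
     # Assumes the `openai` package and OPENAI_API_KEY set in the environment.

     from openai import OpenAI

     client = OpenAI()

     FORBIDDEN_WORD = "mushroom"  # illustrative choice

     response = client.chat.completions.create(
         model="gpt-4",
         messages=[{
             "role": "user",
             "content": (
                 "Describe a forest floor in two sentences. "
                 f"Do not mention the word '{FORBIDDEN_WORD}' at all."
             ),
         }],
     )

     answer = response.choices[0].message.content
     print(answer)
     print("Instruction violated." if FORBIDDEN_WORD in answer.lower()
           else "Instruction respected.")
     ```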
  12. Yeah, pretty impressive. We are probably not very far from AI being able to make sense of and extract meaning from videos (beyond just using context clues from a given prompt or reading transcripts).
  13. It's interesting how it can combine videos: https://www.reddit.com/r/OpenAI/comments/1arztj9/sora_can_combine_videos/
  14. I love how you were incapable of engaging directly with anything that was said, had to strawman everyone, and had to use one of the worst arguments to make a case for your position. Most of us didn't say that AGI definitely won't come in the next 5-10 years; most of us just tried to point out that the arguments used to make a case for such claims are weak and have a lot of holes in them, and that some of you have unreasonably high confidence in such claims. If you really want to make the "but experts said this and you reject that" argument, then first show us a survey where there is high consensus among AI experts on when AGI will emerge (but it has to be a survey where AGI has a precise definition, so that those experts have the same definition in mind when they answer the question). Secondly, you will have to make a case for why the experts' predictions will come true this time (because there is a clear inductive counterargument to this survey argument - namely, that most AI experts' predictions in the past have failed miserably).
  15. Yeah, I agree that it can be useful for multiple different things, but I tried to show some of the things that I and others have recognized regarding its semantic understanding and its reasoning capability. The article I linked shows many examples, but here is another one with many examples: https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523 I recommend checking this one as well: https://amistrongeryet.substack.com/p/gpt-4-capabilities This is also interesting:
  16. Yes, and it fails to answer trivially easy questions that a kid in elementary school could answer. It makes zero sense to say that it has a semantic understanding of things while at the same time it fails to give the right answer to trivial questions. Yes, sometimes it can provide the right answer to more complex questions, but if it actually had a semantic understanding, it wouldn't fail to answer the trivial questions - therefore, I will say it again: it only deals with patterns and doesn't understand the meaning of anything. Right now you could do this with me: give me a foreign language that I understand literally nothing about - in terms of the meaning of sentences and words - and then give me a question in that foreign language with the right answer below it. If I memorize the syntax (meaning, if I can recognize which symbol comes after which other symbol), then I will be able to give the right answer to that question even though I semantically understand nothing about the question or the answer - I can just use the memorized patterns. The AI seems to be doing the exact same thing, except with the little twist that it can somewhat adapt those memorized patterns: if it sees a pattern that is very similar to another pattern it already encountered in its training data, then - in the context of answering questions - it will assume the answer must be the exact same or very similar, even though changing one word or adding a comma to a question might change its meaning entirely. Here is one example that demonstrates this problem (and a toy sketch of the memorized-pattern idea is below):
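     Not the example the post originally pointed to - just a toy illustration, with made-up question/answer strings, of answering "correctly" by pure pattern lookup: the matcher knows nothing about meaning, so changing a single word leaves it with nothing to fall back on.

     ```python
     # Toy illustration of pattern matching without understanding: a memorized
     # question/answer lookup in a language the matcher doesn't comprehend.
     # The Q/A strings are made up for illustration.

     MEMORIZED = {
         "mennyi ketto meg ketto?": "negy",  # memorized pair; the matcher has no idea what it means
     }

     def answer(question: str) -> str:
         # Return the memorized answer only if the exact pattern was seen before.
         return MEMORIZED.get(question.lower().strip(), "<no matching pattern>")

     print(answer("Mennyi ketto meg ketto?"))   # "negy" -- looks like understanding
     print(answer("Mennyi harom meg ketto?"))   # "<no matching pattern>" -- one word changed
     ```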
  17. The stage yellow AI needs to be exceptionally and intelligently prompted? --------- But yeah, I know it can sometimes do good things (if you figure out what prompt can work). But that proves my earlier point about the problem that it currently doesn't really know the semantics of things - it only remembers patterns - and once you change that pattern (in this case the prompt) a little bit, in a way where the meaning stays essentially the same, it falls apart and fails to apply the right pattern.
  18. @Yousif - see what @Raze did there? He actually made points and directly engaged with the question that was asked of you. Once you grow out of your non-dual-rambling-wannabe-guru phase, maybe you will be capable of doing that too, but until then I will stop engaging with you, because it's a total waste of everyone's time.
  19. @Yousif Yeah, as I thought, you have nothing of substance to contribute regarding the conflict - you are here to give platitudes (that everyone knows) and to virtue signal. The problem with what you are doing is that you are derailing the thread and stopping other people from having a substantive conversation.
  20. Yeah, this is virtue signaling - literally everyone knows this. But war comes with the killing of innocent people, and what you don't take into account is that being passive can sometimes bring more innocent deaths than engaging in war. That's why you need to drop the platitudes, come back to real life, and try to actually analyze and engage with the situation, so that you can come up with the best strategy - according to your knowledge - that can actually minimize global suffering and death long term.
  21. @Yousif Without virtue signaling and rambling about non-duality, can you give an exact, concise plan for what Israel should do?
  22. Feel free to suggest something other than eliminating Hamas.
  23. @Danioover9000 If you want to engage productively, stop posturing and virtue signaling - literally no one cares. Everyone has emotions and feelings around this topic, so I don't see how that engages with or contradicts anything that was said. So many new and novel things being said there, good job. "You are biased, therefore I won't directly engage with anything that was said" - a very intelligent and productive way to argue. All of your points are stupid, because they can be used for both sides. I thought you weren't in favour of relativising the morality of both sides. Waiting for another of your schizo-rants.
  24. Why would you use the absolute number of civilians killed to establish intent? I already made posts about why that's a much worse metric to go by compared to relative risk.