zurew

Everything posted by zurew

  1. Maybe with Google's future chatbot it will be different, because none of the current AIs were optimized to be factually correct. I think this is only half right. Yes, it can make some really dumb mistakes that no other human would make, but I have tested its ability for deductive reasoning (of course just on a superficial level, with just a few examples), giving it prompts like this: "If all dogs have 4 legs, and we know that a rottweiler is a dog, then how many legs does a rottweiler have?", and it always knew the right answer. I gave it a different example as well, where I fed it factually incorrect information: first it recognized that the information was factually incorrect, then I asked it to "regardless of how factual the given information is, please use deductive reasoning to find the conclusion", and its answer was correct. After that I tried to trick it with an example like this: "All swans are white. Jane is a human and white. What's the conclusion?", and it gave an answer like this: "The conclusion cannot be drawn that Jane is a swan based on the information given, because the statement only says that all swans are white, not that all white things are swans." I gave it similar tasks and examples to test its inductive reasoning, and it always knew the right conclusion. So given that it has some knowledge about entities (it can differentiate concepts from each other and has some information about them [after seeing a word like "dog" in a thousand different contexts, it can recognize the "dog" entity and hold some information about it]), it should be able to use its deductive and inductive reasoning to find the right conclusions. I wouldn't claim that it has an internal understanding of things, because then it wouldn't make any dumb reasoning mistakes, but at the same time I wouldn't claim that it has nothing going on either, because it doesn't seem to be 100% random. If it has more than a 50% chance of being right on a given reasoning task (of course I don't know whether that is true; I just assume it based on my super limited testing and interaction with it), then it has something going on, regardless of whether that something is simulated or not. A minimal scripted version of this kind of prompt test is sketched below.
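Below is a minimal sketch of how these prompt-based reasoning checks could be scripted instead of typed by hand. It assumes the `openai` Python package (v1-style client) and an `OPENAI_API_KEY` set in the environment; the model name and the prompt list are illustrative assumptions, not a fixed benchmark.

```python
# Minimal sketch: run a few hand-written deduction prompts against a chat model.
# Assumption: the `openai` v1 client is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A couple of syllogism-style prompts, including a trick one whose correct
# answer is that no conclusion about Jane being a swan can be drawn.
PROMPTS = [
    "If all dogs have 4 legs, and we know that a rottweiler is a dog, "
    "then how many legs does a rottweiler have?",
    "All swans are white. Jane is a human and white. What's the conclusion?",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system", "content": "Answer using strict deductive reasoning only."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,  # reduce sampling randomness so repeated runs are comparable
    )
    print(prompt)
    print("->", response.choices[0].message.content.strip())
    print()
```

Running something like this over a larger prompt list, and counting how often the answers match the expected conclusions, would be one way to put a rough number on the "more than 50% right" guess instead of relying on a handful of manual chats.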
  2. Yeah, I think you are right. Your advice about sex positions is much better, and a lot can be done in bed that more than makes up for dick size. @Someone here Develop your game in bed, and find girls who are okay with your penis size. You have an average dick size, so you shouldn't be worried about it.
  3. Don't take this advice for granted, because it might be total bullshit, so do your own research on this topic. What about using a penis pump right before sex? Pumping more blood into your dick should make it temporarily bigger (at least, that's what I would expect from it). But again, if you want to try this out, do your own research on this topic and keep this in mind:
  4. Not so fast. We don't know yet how far we can push the current AI tech, and we don't know what the limitations are. Sure, we might reach a point in the future where almost all jobs will be replaced by AI, but we don't know when that time will come, and there is no guarantee that it will happen in the next one or two decades. Replacing most human workers overall (so that there is no need for any human attention, knowledge, free will, or thinking) seems like an incredibly hard task. We still have a very poor understanding of our own intelligence, so it seems very unlikely that we could intentionally create (without luck) an intelligence that has all the main capacities a human has. Sure, it might be the case that we could achieve it by randomly experimenting with stuff, but that is still taking shots in the dark without knowing what we are doing. It might also be the case that a different kind of structure will be needed to build a superintelligence, and that a multi-layer transformer network (the architecture ChatGPT is built on) is limited in a structural way that can't be overcome by adding more computational power to it.
  5. I think since they made ChatGPT public, they have made some updates on it.
  6. Maybe copywriters and maybe some others, but mostly it will automate some parts of certain jobs. Jordan Peterson is hyping it up a lot in that video. I would suggest almost never taking at face value any information coming from people who are not experts in the field they are talking about (even experts can sometimes say dumb or outrageous shit). You should be very sceptical about predictions regarding AI, because it is all so unreliable and everything is changing so fast that there is no way anyone can make predictions and assumptions that will be true with high probability.
  7. Yeah, I have seen this idea (to combine ChatGPT with the Midjourney AI) and I think this combo is very fascinating. Although Midjourney is not the best at illustrating long texts with images, this combo can be used to show or illustrate complex, hard concepts that would otherwise be hard to explain with words alone; a rough sketch of how such a text-to-image pipeline could be wired up is below.
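A rough sketch of that combo, with heavy caveats: Midjourney has no official public API, so this example substitutes OpenAI's image endpoint for the image half. The `openai` v1 client, the model names, and the example concept are all assumptions for illustration only.

```python
# Sketch of the "text model writes the prompt, image model draws it" pipeline.
# Assumption: `openai` v1 client installed, OPENAI_API_KEY set; Midjourney is
# replaced here by OpenAI's image endpoint because Midjourney has no public API.
from openai import OpenAI

client = OpenAI()

concept = "the difference between deductive and inductive reasoning"  # example concept

# Step 1: ask the chat model to compress the concept into a short visual prompt.
chat = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{
        "role": "user",
        "content": (
            "Write one sentence describing a single clear illustration, with "
            f"concrete visual details only, for this concept: {concept}"
        ),
    }],
)
image_prompt = chat.choices[0].message.content.strip()

# Step 2: feed that generated prompt to the image model.
image = client.images.generate(prompt=image_prompt, n=1, size="1024x1024")

print("Image prompt:", image_prompt)
print("Image URL:", image.data[0].url)
```

The same two-step idea carries over to Midjourney by pasting the generated prompt into its Discord bot by hand, since there is no official API to call from code.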
  8. In most cases, unfortunately, victims can't meet those proof standards. You don't know whether they have enough evidence or not. There were allegedly leaked audio files and chat logs; those, combined with Tate's own statements (a bunch of videos where he snitches on himself), plus a copy of his past website where he sold a course about how to manipulate and use women for your own purposes, together with other victim statements, might be enough.
  9. You call it "throwing away energy lol" because you don't value the things that some people value, but your values are just as subjective as anyone else's. If your definition of smart is to never do anything for fun, for pleasure, or to release stress, then I guess you can call it dumb.
  10. Fun can be very important to people, pleasure can be important to people, and releasing stress can be important to people. Obviously people are doing this for a specific purpose; once that purpose is met, you can't call it dumb or a waste of time anymore.
  11. That's okay, but this claim is totally different from your starting implication that masturbation is actually objectively bad. For some, semen retention is a net negative. It's up to the individual to decide what they want to use to release stress and why.
  12. Ejaculation will serve whatever purpose one wants to give it; there is no universal purpose here. You can do it to feel pleasure, to release stress, etc. If you recognize that your case is not universal, then why do you try to make arguments from your special case to say that masturbation is bad?
  13. this: Or just simply give a copy to people who you think are qualified enough to give a quality analysis or response.
  14. The thing is that you don't really know whether what he said is true, or whether it is 100% true without any stretches. Is it a possibility? Sure, but we need more than just one source to confirm his claims, because the source that "leaked" this is known for being anti-mainstream. If we are all searching for the truth, then we shouldn't take any single source as 100% true, because that just shows an incredible bias, and that's the opposite of trying to be objective. Even if it's all taken at face value, this still isn't a big deal. If they actually did gain of function research in an illegal way, sure, sue them. But the implications of the currently leaked information are not that big.
  15. The best lie is a half lie, because that makes it seem much more legit.
  16. There are ways to explain this without assuming he actually leaked something true here, but of course all of this is speculation, just like assuming that he told the truth. The thing is that he was on a date, so there is a chance that he just wanted to impress the other guy on the date by telling and "leaking" secrets.
  17. Because according to that conspiracy, the gain of function research was done by an American company, so China could have dunked on the US massively and made massive political gains if they had exposed them. So the idea that no one would have leaked anything about it seems very unlikely to me. But the reality is that we haven't seen any tangible evidence to prove it, and I have not seen any leaks by any institution, country, or company. Blame alone doesn't mean anything; they didn't show or leak any tangible evidence about it. But again, even if I take all that for granted, we are still very far from proving bad intentions and bad planning behind the scenes.
  18. I don't think anyone here who criticizes him wants him to be a doormat or to be super kind; just don't be a dick, that's all. Be normal, act normal. There is no need to be sweet or to lie, and there is also no need to be a dick or a bully. That's a different case. If someone is clearly a dick towards you and acts toxic, then it's definitely justified to act back, but that's not what most of us have a problem with. What we have a problem with is him assuming in his videos that his viewers are just dumb and toxic, and also sometimes on this forum assuming a lot of bad things about a lot of people and then using that to justify his behaviour.
  19. Why wouldn't China or Russia have leaked anything about it?
  20. Yeah sure, this is why I am against gain of function research. But again, keep in mind that this "lab leak hypothesis" hasn't been proven yet; it is just a theory. Trust them for what? Speaking of trusting, why do you 100% trust a single person telling you stuff about a company?
  21. You make it sound like sometimes it is necessary to be a dick, but I don't think you have established a compelling argument for it yet. Do you think anyone in this thread has changed their mind after you turned on your "raw" mode? Honesty may be necessary to be an effective teacher, but honesty doesn't require being a dick. If you don't want to change your teaching style, that's fine, but don't act like it is necessary to teach this way.
  22. I think it's possible that covid originated from gain of function research, but I haven't seen any compelling evidence that would prove it, and even if I take that for granted, that alone is far from proving bad intentions. There is no leak yet that could be considered significant. This "leak", assuming it is true, only proves that they are doing gain of function research. I don't need to trust any single company. I can look at the overall body of research about a topic (done on a multinational scale by different companies, countries, and institutions) and go with that.
  23. I am personally against gain of function research, because I find it very risky. On the other hand, if it's done safely, it can be very useful for learning more about viruses and taking preemptive measures before new variants infect everyone.
  24. Gain of function research is not a new thing. Nothing new here.