zurew

Everything posted by zurew

  1. The thing is that you don't really know whether what he said is true, or whether it's 100% true without any stretches. Is it a possibility? Sure, but we need more than just one source to confirm his claims, because the source that "leaked" this is a source known for being anti-mainstream. If we are all searching for the truth, then we shouldn't take any single source as 100% true, because that just shows an incredible bias, and that's the opposite of trying to be objective. Even taken entirely at face value, this still isn't a big deal. If they actually did gain-of-function research in an illegal way, sure, sue them. But the implications of the currently leaked information are not that big.
  2. The best lie is a half lie, because that makes it seem much more legit.
  3. There are ways to explain this without assuming something true was actually leaked to him, but of course all of this is speculation, just like assuming that he told the truth. The thing is that he was on a date, so there is a chance he just wanted to impress the other guy by telling and "leaking" secrets.
  4. Because according to that conspiracy, the gain-of-function research was done by an American company, so China could have dunked on the US massively and gained huge political capital had it exposed them. So the idea that no one would have leaked anything about it is very unlikely to me. The reality is that we haven't seen any tangible evidence to prove it, and I have not seen any leaks from any institution, country, or company. Blame alone doesn't mean anything; they didn't show or leak any tangible evidence. And again, even if I take all of that for granted, we are still very far from proving bad intentions and bad planning behind the scenes.
  5. Why wouldn't China or Russia have leaked anything about it?
  6. Yeah, sure, this is why I am against gain-of-function research. But again, keep in mind that this "lab leak hypothesis" hasn't been proven yet; it is still just a hypothesis. Trust them for what? Speaking of trusting, why do you 100% trust a single person telling you stuff about a company?
  7. I think it's possible that COVID originated from gain-of-function research, but I haven't seen any compelling evidence that would prove it, and even if I take that for granted, it alone is far from proving bad intentions. There is no leak yet that could be considered significant. This "leak", assuming it is true, only proves that they are doing gain-of-function research. I don't need to trust any single company: I can look at the overall body of research on a topic (done on a multinational scale by different companies, countries, and institutions) and go with that.
  8. I am personally against gain-of-function research, because I find it very risky. On the other hand, if it's done safely, it can be very useful for learning more about viruses and for taking preemptive measures before new variants infect everyone.
  9. Gain of function research is not a new thing. Nothing new here.
  10. Yes, but that's just my personal suggestion/bias. If you are okay with risking it without getting quality feedback first, then do it. I thought your original intention was to publish it to the general public; if that's not the case, then forget what I said.
  11. I think a very good portion of his camworkers hadn't done any camwork before they met Tate, and that alone says a lot about manipulating them into doing completely new stuff that they more than likely would never have done on their own.
  12. My point was to challenge your ideas before you write a book about them, because without feedback from people who really understand the people and ideas you want to criticize, you might regret your book a few years down the road. So my point isn't to never write a book, but to challenge yourself first: make sure you are not making surface-level arguments, and after getting sufficient feedback and contemplating it a lot, you can start to write your book with confidence (earned confidence). You will also be much better equipped in general, because you will know a bunch of the counterarguments and pushback targeted at your work and ideas.
  13. When it comes to philosophy, it's really easy to bullshit yourself into thinking you know stuff, because in most cases there is no way to test your ideas. You should definitely have asked for feedback from qualified people before you started writing your book. How many people with a PhD in philosophy have you talked to about your criticisms and ideas?
  14. Of course AI is dangerous. It's a technology that will eventually have the biggest impact on humanity. Now, impact alone isn't sufficient for a bad outcome or danger, but there will be two main roads: either people give the AI a moral system, or the AI develops its own (maybe you could hold a nuanced position here, where the basics are planted by humans and everything else related to values and behavior is added by the AI). It's very easy to think of a dozen bad outcomes; I will only mention a couple of them.

So, given those two main roads, let's start with the first scenario. We try to give an AI a moral system based on a hierarchical structure, which the AI will use to make its decisions. This part alone will be incredibly complex, for many reasons:

1. A lot of people don't agree that there is an objective moral system, and a lot of people who do are not okay with any system but the one they already believe in (I am thinking especially about religious people here; the easiest example is a Christianity-vs-Islam conflict, but there are many other scenarios as well, and we haven't even counted atheists and other kinds of belief). This point alone can generate really big conflicts, and we haven't even gotten to a working AI yet.

2. Even if the AI acts perfectly aligned with the moral principles we give it, it could fuck up potentially everything if those principles aren't thought through rigorously enough to avoid catastrophic scenarios. For example, let's say we give an AI a moral system that optimizes everything for human happiness, and here comes the hard part: how the fuck do you define that, and how will the AI be able to measure happiness? Let's say someone picks a dumb measuring variable such as "people smiling". There are many ways this alone could go wrong: the AI might start thinking through scenarios and experimenting, say by drugging up everyone, or by creating smiling faces by cutting people's mouths into a smile shape, or by finding some way to force people to smile whenever they look at its smile sensor. This is just one example of optimizing an AI for one variable and things going unintentionally wrong; there could be 100+ other variables just for happiness, and you could swap happiness for anything else and list 100+ variables for each given moral system.

3. Even if we assume all of these things (that people globally will somehow agree on one moral system to be given to and imposed by the AI, that we will figure out the perfect variable or variables for that system, and that the AI will act on it perfectly without errors), there are still many potential bad scenarios, like a person hacking the AI, changing its core moral system to whatever he wants, and making the AI do whatever he wants.

4. You don't need a conscious or very advanced AI to create big harm: you can create drones optimized to bomb cities, nanobots that get inside your body without you knowing anything about it and fuck you up from the inside, AI optimized for creating bioweapons (including new dangerous organisms and viruses), self-driving cars that alone could trap and kill many people, and many other examples.

Now the second scenario, where the AI develops its own moral system (which somewhat implies that this AI is conscious, though that isn't strictly necessary). Given this scenario, there is no reason to assume that the AI will just randomly start to value humans over everything else. If humans aren't at the top of its moral hierarchy, the chance that we will eventually be fucked is really high, because it will eventually encounter scenarios where it has to make a value trade based on its hierarchical system, and humans will be traded for whatever sits above them on its moral ladder.

The basic idea is that a very advanced AI that everyone can use is a scenario similar to giving everyone free nukes. Freely accessible nukes for everyone wouldn't be a good idea, given that there are a lot of bad actors in the world, and even without assuming any bad intentions (which would be super naive), our stupidity can very easily kill us, since people could use and create stuff with no idea how it works or what impact it can have. Imagine giving nukes to cavemen and multiply that by a factor of ten thousand or a million.

All of this is still just the tip of the iceberg: I haven't mentioned many other things that could make us go extinct, and there are many layers of danger beyond extinction-level risks. The idea that everything will be okay on its own, without us making conscious, deliberate, and cautious steps and decisions, is just super naive.
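The "optimize for smiling" failure above is the classic proxy-metric problem: the optimizer maximizes the measurable stand-in, not the thing we actually care about. A minimal toy sketch (all action names and scores are hypothetical, invented purely for illustration):

```python
# Toy illustration of proxy-metric misalignment (hypothetical data):
# the optimizer is scored on a proxy ("smiles produced") rather than
# the true goal ("well-being"), so it picks the action that games the proxy.

actions = {
    # action: (smiles_produced, actual_well_being)
    "improve_living_conditions": (6, 9),
    "do_nothing": (1, 1),
    "drug_everyone": (10, -8),  # maximizes the proxy, harms the true goal
}

def proxy_score(action):
    return actions[action][0]  # what the AI is told to maximize

def true_score(action):
    return actions[action][1]  # what we actually care about

chosen = max(actions, key=proxy_score)
best_for_humans = max(actions, key=true_score)

print(chosen)           # the optimizer selects "drug_everyone"
print(best_for_humans)  # we wanted "improve_living_conditions"
```

The gap between `chosen` and `best_for_humans` is exactly the danger described above: a flawless optimizer pointed at a flawed variable.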
  15. For anyone who is interested, here is the debate: This guy wasn't prepared at all; he was all over the place, brought up a bunch of irrelevant things that had nothing to do with the specific subject at hand, and basically had nothing to refute the evidence that Destiny brought up. The arguments that Daniel made were horrible and indefensible. The idea that without pictures or videos of women being chained up or trafficked you can't build a confident case that someone is a sex trafficker is a crazy standard, and for someone who cares so much about banning everything related to sex work, it's very interesting that he would hold standards so insanely high that basically almost all sex traffickers would get away with their crimes. His other idea, that without a completely rigid definition of sex trafficking you have no chance of deciding whether someone actually did it, is also indefensible and stupid, because the logical extension of that argument is that any crime whose definition has some level of subjectivity is a crime you can't and shouldn't charge anyone with.
  16. You can have stored value and functional value, but both depend on people collectively giving value to them. Functional value is completely dependent on context (what tech society wants to build, what kind of lifestyle we are okay with, etc.): if we imagine a scenario where we get stuck in the stone age, a lot of materials that we currently value we either wouldn't value at all or would value much less, because most of them would immediately lose their functional value. Or if we imagine a scenario where AI does everything, the value of human labour can basically be driven down to almost zero. Sure, the likelihood that a given context will dramatically change at random is not high (I agree with that when it comes to volatility), but it all essentially boils down to the same thing: how much we collectively value a particular thing is what ultimately drives all economic value (there is no objective or "real" economic value outside of that), and the same applies to cryptocurrency as well.
  17. Bitcoin is still a new technology compared to other financial instruments, and none of these criticisms seem structural; they are more like "it's useless right now, so it won't be useful in the future". Do you think a guy who made billions of dollars from a centralized system will be inclined to give an unbiased, objective analysis of a decentralized financial system?
  18. I don't think anyone here is jumping to premature conclusions; we go with what the current data suggest, with the least amount of jumps in logic and the fewest assumptions. Most of us here are able to update our view on this if enough evidence is presented. My problem is this: not all hypotheses that could explain this have the same probability. Once we dive down and look at what reasons and evidence we have for each hypothesis, as we did previously, the vaccine-sceptic "hypothesis" becomes less and less likely and starts to be very shaky on many points. We have to be honest about that and not pretend that all explanations carry the same probability and weight here. The vaccine-sceptic side is very far from establishing their argument about the vaccine. So far we haven't gotten anything other than "if you can't present any alternative, it must have been the vaccines", and that alone is not an argument, just the start of an attempt to build a hypothesis. It's good for them to hide in vague land, so they don't have to defend any of their claims, but for us it's annoying, because there isn't any point to debunk or break down: they haven't even made a single point yet, just a claim with not only no evidence, but with all the current evidence going against it.
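The point that competing explanations don't carry equal weight can be made concrete with a toy Bayesian update. All the numbers below are hypothetical, chosen only to show the mechanics: what matters is how strongly each hypothesis predicts the observed evidence, not merely that it is compatible with it.

```python
# Toy Bayesian update (hypothetical numbers) illustrating why hypotheses
# compatible with the same observation can end up with very different
# posterior weights once priors and likelihoods are taken into account.

priors = {"vaccine_caused_it": 0.2, "other_known_causes": 0.8}
# How strongly each hypothesis predicts the observed data:
likelihoods = {"vaccine_caused_it": 0.1, "other_known_causes": 0.6}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

print(posteriors)  # the weaker predictor ends up with far less weight
```

Merely being "an explanation that fits" isn't enough; under these (made-up) inputs the weaker hypothesis drops to a few percent of the posterior weight.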
  19. I assume you didn't mean that you will write whole papers and essays without any original input, but just in case: https://futurism.com/college-student-caught-writing-paper-chatgpt That being said, it's really good for getting inspiration, for sure.
  20. Here is some explanation for the increased excess mortality, along with possible alternative explanations for some portion of the excess deaths:
  21. This kind of reasoning, "I wasn't raped by him, therefore it's impossible for him to rape anyone", is ridiculous and weak. If it comes out that he actually raped some women, all these people are going to look very silly; this is why you don't defend someone you barely know or know little about. The other idea, that he was banned and arrested because "he told the truth", is also a laughably stupid claim.
  22. Sure, If anyone shows data to prove their point about vaccines being dangerous or much more dangerous than we thought, I will change my position on this. Thanks for the good faith discussion!
  23. That is true, but again, no tests were done involving pregnant women; once they started testing for it, I believe establishing the causal link didn't take too much time (feel free to correct me if I am wrong). In our case, many tests involving myocarditis have already been done, and all the data I've seen suggest it is pretty rare. Even if I take your point for granted that during the early stages, when the vaccines were approved, no causal link between the vaccine and myocarditis was found, I could explain that: during the approval process the vaccine was tested on only a few thousand people compared to billions, so if the chance of myocarditis is incredibly low, it either won't show up or will show up in only a handful of cases (though even with that many volunteers, the probable side effects should visibly show up). So I don't think your myocarditis point is good; to make a strong point, you would have to show a long-term side effect that is much more probable and that didn't show up before or during the approval process. Plus it seems that after the reports, more research was done on certain side effects and concerns, including myocarditis, and all of it still showed that side effects appear within weeks, at most within a two-month window, and that the occurrence is really rare. This point is important because it shows we don't have to rely solely on the tests done before approval; we can look at studies done after the approval process, and as far as I know their conclusions are consistent with what I said.
  24. Such as what? Show me any vaccine where long-term side effects show up after years, or give me reasons why we should expect long-term side effects to show up after years this time. "If a side effect can take 1-2 years": such as what side effects? Again, all the claims the vaccine "sceptics" made were around myocarditis and mostly heart-related issues, and we knew from testing that those show up within a few weeks, at most within a two-month window. So what side effects are we talking about that would take more than two months to show up? Even in the case of thalidomide, most side effects showed up very quickly, not after years. The idea that the vaccine can lurk in your body for years without any side effects and then suddenly cause serious long-term effects is not supported by any data, and no good reasons have been presented for why that would be the case.
  25. No, this is where you are wrong. The reason for the multi-year testing process is not that doctors expect long-term side effects that will show up after years. It's that a completely new drug has to go through the whole slow testing process before it gets to human trials, and that requires a lot of funding; once you get there, you don't need years to approve the vaccine, because no empirical data has been shown suggesting that long-term side effects suddenly appear after years when it comes to these vaccines. In this case, a lot of people volunteered for testing, and that accelerated the approval process dramatically. As far as I know, regarding thalidomide, no tests were done involving pregnant women, which is why it took time to notice; but in this case the claims mostly involve heart problems such as myocarditis, and a lot of tests have been done on all of that. So the question is this: what side effects do people suspect the vaccine causes that doctors haven't done any tests about?