zurew

Everything posted by zurew

  1. "Let me a grab a random set of metrics and then let me assume that the data the AI will present me with will be accurate" Wow, not all practicing doctors conduct experiments and doing research and spending their time studying philosophy of medicine on a daily basis? 1) If you don't want to create a world where each practicing doctor is freely allowed to come up with their own epistemic and ethical norms when it comes to treating patients, then you will eventually end up with a system similar like this. 2) You probably don't want all doctors to do experiements and to do research - in a working society you want some doctors to spend their time treating patients. Its very clear, that some of you abuse the fck out of the buzzwords that Leo shared with you like "holistic" or "appeal to authority". With regards to the appeal to authority - yes it can be said that its fallacious reasoning , but thats not the enire story, because it can be used as a heruistic (where given that you have low info and low knowledge in a given field, you assume that whatever the experts or the expert consensus concluded will be probably your best bet). You don't know how to even properly contextualize and what kind of norms to use to properly evaluate the data infront of you, because you are not trained in the field - so the question is why dont you ever question your ability to make reliable inferences about fields that you have 0 training in? The funny thing is that almost everyone can recognize this when it comes to fields where your assumptions are tested immediately (like engineering jobs and roles). You cant just come up with your own set of metrics and norms and then build a bridge or put a car together. "Bro you haven't directly tested how flammable gasoline is, you just believe in the dogmas that the stupid and unconscious experts feeding you with, get more holistic and wake up from the matrix". Given the complexity of medical fields and given that you cant conduct experiments on a big sample of people and given that you have no ability to even begin to isolate variables , you can infinitely bullshit yourself and pretend that you are smarter than everyone else and that you have some kind of special insight. ---------------------------------------------------------------------------------------------------- When it comes to the holism and holistic part, just because you use more norms from that doesn't follow that you will be more accurate. I can have an epistemic norm of observing the grass for 10 seconds and if the wind blows the grass within that timeframe then I will infer that the answer to my question is yes and if the wind doesn't blow within that timeframe then I will infer that the answer is no. I can then integrate this epistemic norm with my other epistemic norms and pretend that Im more special and im smarter than people who havent integrated as many epistemic norms as I did. ------------------------------------------------------------------------------------------------------ When it comes to the direct experience criticism - what do you think you are saying there? Should all doctors try all the treatments and all the pills on themselves before they prescribe anything to patients?
  2. Yes, I agree that the way I outlined it is not the only way we use the term, but I thought you were using it in that sense, because you were originally responding to Nilsi about metaphysics. Regardless, my main point is that given that this term can be used in multiple ways, we should use it context-sensitively so that we don't engage in equivocation.
  3. I will add one more to the necessary conditions - You are not yellow if you don't look like a redneck https://www.facebook.com/photo/?fbid=10162079724626779&set=gm.8315254938529114&idorvanity=8277508165637125&locale=de_DE
  4. In that case, you are using reductionism basically as just "explanation". I take reductionism to be a special position with regards to realness, and I also take it to be a subset of explanations - an explanation of a specific kind, where realness is accounted for only by the fundamental parts (this is where the 'you are just atoms bro' comes from). The idea is that "lower levels" can exhaustively explain things that are on the "higher level". Basically, I take it to be the rejection of strong emergence, where higher-level parts have certain causal powers that can't be fully accounted for by their lower-level parts. To me it's very clear that if there are two people, and one of them believes in strong emergence and the other doesn't, then calling both of them reductionists would be very misleading.
  5. It's supposed to mock the idea some MAGA people have that creating more factories, and therefore more factory jobs, is actually so cool, because factory work is cool and masculine and people need those jobs over some gay liberal office jobs.
  6. https://x.com/jamiewheal/status/1910704519693971812
  7. @Leo Gura Where do you put Jordan Hall?
  8. That way of using those words seems wildly misleading and inappropriate in most contexts. When you give, for example, a causal explanation, you don't suddenly provide a new substance to the thing being explained. "Why are you drunk? Well, because I drank 10 beers" - did I provide 'drinking alcohol' metaphysics to being drunk? That question doesn't make much sense. Another example: saying that the reason matter exists is that God created matter doesn't mean that God is made of matter. John Vervaeke has a metaphysics that very clearly doesn't buy into the idea that things can be exhaustively explained by, or reduced to, their simpler/smaller components.
  9. Yeah, I think people here oftentimes confuse having the "correct" take with level of development. I don't know if you have seen that convo, but this kind of goes back to the morally-lucky convo Destiny had with Rem about Hasan. It was almost exactly what you said there - the reason why you (in that case, Hasan) are not a neo-nazi is not your level of development, or that you actually reasoned your way there on your own, but that you were lucky that your close environment indoctrinated you with beliefs that we collectively take to be more acceptable and correct. "If you are a very good reasoner and you have the ability to synthesize and to juggle multiple perspectives towards an acceptable moral and value system that is aligned with mine, you are highly developed and very much above orange and at least yellow; but if all the same things apply except that you have a different moral and value system, you are stuck in orange at best."
  10. And that's just the start - we can easily attach other arguments to this, like: you can't maintain exponentially growing energy needs on a finite planet. Before anyone says "but efficiency bro" - that doesn't work in practice, mostly because of the Jevons paradox. The more efficient shit gets, the more accessible it gets, and on a net scale we end up spending much more energy. Today, the average person uses more energy in a single day than a king did in an entire year centuries ago. AI just makes this whole thing 10x worse: first, because it makes things more efficient, and second, because the better it gets, the more things it can be used for - which goes back to the Jevons paradox. We also have no good way to properly price things (in a way where the price is not decontextualized, but contextualized in the context of the whole world) - which inevitably leads to us externalizing harm. Why? Because we don't immediately need to pay the price for it ("other stupid people, who will be directly affected by it, will pay the price for it") - if everyone had to pay the real price, almost all businesses would immediately go bankrupt. And we could go on with other issues (the AI alignment problem, environmental issues, etc.), but one main point is that a naive "but history though" doesn't work, because shit is wildly different now in multiple ways.
  11. I agree mostly, but you can be rational, have a high IQ and good reasoning capabilities, and still be lost. You probably agree that having those traits won't help you navigate self-deception - in fact, in some cases, having those traits absent some other traits will make you more prone to self-deception. Motivated reasoning, when you are good at it, can be crazy misleading, and the better you are at it, the higher the chance that you will manage to bullshit yourself.
  12. This will only get worse unless something tangible is done to restore trust in institutions and authority figures. You either use proxies to make sense of things for you, or you make sense of things yourself. Each has downsides, but the fact of the matter is that you have neither the time nor the expertise to properly evaluate all the data, especially given that we are talking about multiple domains. Because of the distrust, even if these people agree on all the facts, they will choose the most adversarial and bad-faith explanation, no matter how far-fetched it sounds, because that will be the one that most closely matches the model they have of institutions. This problem is way beyond what any doctor can handle or deal with. Even if one doctor knew all the facts and all the conspiracy theories and could respond to all of them perfectly, this shit wouldn't change or move most of these people, and the lack of trust would remain the same. You won't increase trust by being right, but you can destroy trust forever just by fucking up once. If there is a new situation with any lack of certainty, the worst possible and most adversarial scenarios will be projected onto it by default.
  13. Yeah sure, but once the purposes are specified, we can give an answer to that question. If we care about x set of values and we are clear about what that set contains, then we can go on talking about how Trump affects those values. Of course, there can be other layers of disagreement - like disagreeing about how we should measure how much we progress toward or regress from that value set - but hopefully we can ground that in a shared meta-epistemic norm, and using that we can figure out which epistemic norm measures more accurately. I think a lack of clarity and a lack of explication of what kind of norms we are using to collect a set of facts, and what kind of norms we use to tell a story about that set of facts, is one of the things that makes us very prone to self-deception, because our underlying biases can hop into the evaluative process (especially when it comes to the storytelling part - the weight of each fact will be different, and even which set of facts you collect will be different). But yeah, of course this goes much deeper, because there are some meta-norms (having to do with relevance realization) that we unconsciously use to determine what we consider reasonable vs. far-fetched, and those meta-norms will probably never be exhaustively explicated; because of that, some disagreements in practice won't ever be "solved". Even if we agree on an argument (with all the premises, the conclusion, and the rule of inference as well), we can still disagree on what implications follow from the conclusion. There is an infinite set of logically possible implications that can follow from any given conclusion, and this goes back to the "reasonableness" problem I outlined above, which I have no good answer to, other than that we should train our ability to explicate those epistemic norms as much as we can (so that ever deeper layers of disagreement can be specified, pointed out, and then argued about without being vague). TL;DR - ultimately there has to be a norm for navigating any disagreement, because otherwise disagreements wouldn't be possible. But exhaustively explicating that norm is impossible in practice, so we will probably never solve the deepest disagreements.
  14. There can be an attitude and a habit of integrating and synthesizing, but to me it seems there is so much nuance in what "proper" integration/synthesis means. There is a normativity to it that is oftentimes not specified - and because of that, talk about higher and lower perspectives becomes messy and unproductive. I can be aware of multiple perspectives, but then I can choose properties from each perspective randomly and create an assembled mess. What makes the outcome not an assembled mess? How can I know which properties are relevant and which should be gotten rid of, given a set of perspectives? To me, the answer to those questions is grounded in pragmatism - so when it comes to specifying the norms and characteristics by which I create a hierarchy of perspectives, it will be based on the given purpose/goal/function, and that will automatically give meaning to terms like "better" or "higher". This provides the ability to define and interpret those terms in a non-vague way, as opposed to the other approach, where some kind of vague, goal- and function-independent meaning is attached to those terms. There is also oftentimes an underlying assumption that if the context window one considers is bigger (which is often labeled a more complex perspective), that is in and of itself better - but that is purpose-dependent as well. A bigger context window might add more noise and make things worse.
  15. It's not the case yet (if you interpret it literally), but as AI gets more advanced, the capability of anyone who has access to it to do more harm goes up as well.
  16. I don't think you appreciate the level of setback and depth that "shit will just break down" entails in the context we are in now. I think appealing to history doesn't work much - times in the past were different compared to now. Sure, you could externalize harm, but not on this scale. In the past, a seriously bad actor could only do so much damage (even if he had all the wealth and all the necessary people); as time passes - especially with the rapid advancement of AI - the potential for a single individual to cause significant harm is increasing at an almost exponential rate, without any need for substantial wealth or a powerful network of people. The idea that we will have a casual setback in a society where each member can have at least a nuclear-bomb-level effect on a global scale, while having perverse incentives, misinformation, and a bunch of bad actors, is just naive and not realistic. We won't go extinct, but there will be a serious and chaotic setback.
  17. Is this the classic move of "The Tao that can be told is not the eternal Tao" - the moment you try to formalize and strategize a plan, you've already lost? But regardless, what does "become totally unbound in your own becoming" mean in practice?
  18. I don't share his idea, I just shared his view, which is similar to yours in that we have no fucking clue what we are doing.
  19. That's roughly what Jordan Hall is saying. Jordan Hall would say that we have no fucking clue what we are doing or where our motivations come from, and that we should connect to God and act based on that, not based on our ideologies or random motivations. The idea is that whatever our intellect comes up with will be dogshit and useless, so we might as well surrender and let the transcendent show the way.
  20. I don't think I am tracking what you are saying. The premise is not that we are fully in charge of what we are becoming and what we are doing - the premise is that our actions inevitably affect nature, and to the degree that we have control over those actions (even if it's very little), we should use that control wisely.
  21. I think there is some truth to that (the need for control and the need to take credit), but on the other hand, it is also about this: given that we are here and we are planning on staying here in the future, why wouldn't we try to do it in a conscious, wise way rather than an unconscious, unwise way?
  22. If you agree with the premise that there are problems that pop up and are maintained by Game A dynamics, then if your solution doesn't involve something that takes care of that, it might help with buying time, but it's not really a solution that prevents shit from breaking down in the future. I don't think you can realistically maintain a Game A world without there inevitably being civilizational collapse. In principle, it's not impossible to solve these problems within Game A - for example, using power and other tools of persuasion you could theoretically make everyone do what you want them to do - but maintaining that long term is just as unrealistic as transitioning to Game B.
  23. I think your sentiment makes it impossible to progress on anything. It's one thing to be realistic, but it's another to not even attempt to work on the problems and thereby make it a self-fulfilling prophecy, where the solution never comes (not because you establish that in principle there cannot be one, but because your attitude makes it so that you never even try to work on one). So if we are actually solution-oriented, there are multiple moves: giving the transition a chance even though it's extremely unlikely, and/or, given that shit will inevitably break down, thinking about how to deal with it once it does.
  24. I don't think they would disagree with that. I think the idea was to come up with the necessary and sufficient conditions that a solution must satisfy, and if by its nature that makes the solution extremely unrealistic and absurd, then so be it - it being unrealistic or unlikely in principle won't take away from the fact that it has to be done. In other words, if there is no alternative option other than an extremely unrealistic one, then you try to implement that one. It's like if it's a fact that you will have an MMA match with Jon Jones in two weeks and there is no possible way to get around that - you will try to do your best with the options you have, even though your chance of winning the fight is basically 0. But yeah, it seems that you don't even agree with the premise that there is a meta-crisis, let alone with what necessary characteristics the solution must have.