Everything posted by zurew
-
I guess I just don't see how that contradicts what I said. I don't think I necessarily need to affirm a particular metaphysics (one that wouldn't be compatible with what you outlined) in order to make the statements I made. I will lay out what I believe and assume is happening here, but again I can be wrong and it's perfectly possible that I don't track at all. Working with the poison example further - it's irrelevant whether being poisoned is a relational phenomenon or not; what matters is what makes 'you dying from consuming poison' true or false. Does Trump stating his opinion about this particular matter have any weight on whether it will kill you or not? No. Even if Trump tells you that it will kill you, that's still irrelevant, because sure, his opinion can be right, but you won't die because he said it or because he believed it's true - you die because it's a fact of the world. We can define facts in a relational way, but I think that won't have any bearing on what I am saying (because in a similar way I can have false beliefs and opinions about those relational facts as well). One further clarification - by opinion I just mean having an attitude towards a proposition, and by proposition I just mean a declarative statement that can be true or false. There are truths that are true independently of what attitude (opinion, belief or preference) we have about them - this is what I meant by objective truths. And by independently true, I mean that even if all agents changed their opinion about a particular proposition, the truth value of said proposition still wouldn't change (in the case of subjective truths, it would change).
-
I might have a completely wrong read on him, but I think he is trying to establish objective morality in the sense I outlined, though I can be wrong. To me this is similar to how he uses the term "God". Given his definition of God as "whatever is at the top of your value hierarchy", all atheists can say that they value God and that they believe in God, but let's not pretend that, given this completely different sense of God, he has somehow established that all atheists believe in some kind of all-powerful, all-good Mind. Also, to be clear, it doesn't have to be transcendental in the sense of 'truth existing independently of all life' (like truth existing in some weird realm independent from this world); it's just that it's not dependent on the opinion of any agent or any group of agents. It's similar to the idea that consuming a large amount of poison will kill you, no matter what any group of agents says or thinks about it (because the truth value of it killing you isn't dependent on their opinion). This doesn't entail that the truth of the poison killing you exists in some transcendental logic realm, because it can depend on the laws and vulnerabilities of this particular world (where changing the truth value would require changing physical laws, not changing the opinions of people). So the definition I gave is compatible with a transcendental realm, but it's also compatible with it being a fact of this world.
-
I don't think that's the issue; the issue is that (as almost always) there is an equivocation going on. I don't want him to make a syllogism, I want him to be honest and not confused about what argument he is actually making. It has to do with what is meant by the term objective morality. If Peterson uses that term to mean something like "there are perennial patterns and acting them out will lead to certain outcomes", sure, I can grant that - but that doesn't really respond to the issue of subjective morality (the position where the truth value of moral statements depends on a subject or a group of subjects - where if they change their stance about a particular value, the truth value of those moral statements changes as well). Peterson's "critique" is not a response to subjective morality; it's just a completely separate claim that can be denied or affirmed completely independently of what position you hold on subjective morality. What I would look for is an argument that establishes that there are moral statements (statements that actually use terms like good or bad) that are meaningful and that can be true or false completely independently of what any individual or any group of agents thinks about them. Under this definition of objective morality, for example, the truth value of the moral statement 'rape is bad' could be true even if all people on Earth thought otherwise. Making an analysis that ends with a conclusion containing a set of objectively true descriptive statements (like 'rape will lead to x, y, z outcomes') has nothing to do with morality - subjectivists can agree with all of that even if some of them think that rape is good. And to be clear, I don't care about the definition game; what I care about is this: if Peterson wants to critique objective morality (under the definition most people use), then he should index his criticism to that. Using the exact same term with different semantics doesn't really do the job - the only thing he establishes with that is a completely separate claim (and that's all fine, as long as he isn't confused about it and doesn't pretend that he has established objective morality in the other sense).
-
Well, Mr. Peterson, that just seems to be a descriptive claim about what set of values and what set of behaviors would be most aligned with human flourishing - but it's nothing more than a descriptive claim; there is no ought embedded there. It's not just that it's not an objective moral claim, it's that it's not even a moral claim at all. It's similar to giving a very precise physics equation about how a rock will behave once you interact with it in a certain way, and then saying that it ought to be that way and that objective morality is established by that.
-
Yeah, I personally think that Peterson is confused about morality. I don't see how him and Jonathan Pageau talking extensively about all the archetypes and perennial patterns would be reflective of an objective morality; to me it's just a description of our collective unconscious and our conscience at best - but it's nothing more than a description, and there is no 'ought' embedded there. There is a big difference between making an analysis of what kind of values most people have, how we act and behave, and what the outcome of that is, versus making moral statements that are true independently of what any particular person or group of people thinks about them. They haven't made any argument (that I am aware of) that wouldn't be compatible with subjective morality.
-
zurew replied to integral's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
"Let me a grab a random set of metrics and then let me assume that the data the AI will present me with will be accurate" Wow, not all practicing doctors conduct experiments and doing research and spending their time studying philosophy of medicine on a daily basis? 1) If you don't want to create a world where each practicing doctor is freely allowed to come up with their own epistemic and ethical norms when it comes to treating patients, then you will eventually end up with a system similar like this. 2) You probably don't want all doctors to do experiements and to do research - in a working society you want some doctors to spend their time treating patients. Its very clear, that some of you abuse the fck out of the buzzwords that Leo shared with you like "holistic" or "appeal to authority". With regards to the appeal to authority - yes it can be said that its fallacious reasoning , but thats not the enire story, because it can be used as a heruistic (where given that you have low info and low knowledge in a given field, you assume that whatever the experts or the expert consensus concluded will be probably your best bet). You don't know how to even properly contextualize and what kind of norms to use to properly evaluate the data infront of you, because you are not trained in the field - so the question is why dont you ever question your ability to make reliable inferences about fields that you have 0 training in? The funny thing is that almost everyone can recognize this when it comes to fields where your assumptions are tested immediately (like engineering jobs and roles). You cant just come up with your own set of metrics and norms and then build a bridge or put a car together. "Bro you haven't directly tested how flammable gasoline is, you just believe in the dogmas that the stupid and unconscious experts feeding you with, get more holistic and wake up from the matrix". Given the complexity of medical fields and given that you cant conduct experiments on a big sample of people and given that you have no ability to even begin to isolate variables , you can infinitely bullshit yourself and pretend that you are smarter than everyone else and that you have some kind of special insight. ---------------------------------------------------------------------------------------------------- When it comes to the holism and holistic part, just because you use more norms from that doesn't follow that you will be more accurate. I can have an epistemic norm of observing the grass for 10 seconds and if the wind blows the grass within that timeframe then I will infer that the answer to my question is yes and if the wind doesn't blow within that timeframe then I will infer that the answer is no. I can then integrate this epistemic norm with my other epistemic norms and pretend that Im more special and im smarter than people who havent integrated as many epistemic norms as I did. ------------------------------------------------------------------------------------------------------ When it comes to the direct experience criticism - what do you think you are saying there? Should all doctors try all the treatments and all the pills on themselves before they prescribe anything to patients? -
Yes, I agree that the way I outlined it is not the only way we use the term, but I thought you were using it in that sense, because you were originally responding to Nilsi about metaphysics. Regardless, my main point is that, given that this term can be used in multiple ways, we should use it context-sensitively so that we don't engage in equivocation.
-
I will add one more to the necessary conditions - You are not yellow if you don't look like a redneck https://www.facebook.com/photo/?fbid=10162079724626779&set=gm.8315254938529114&idorvanity=8277508165637125&locale=de_DE
-
In that case, you are using reductionism to mean basically just "explanation". I take reductionism to be a specific position with regard to realness, and also a subset of explanations - it is an explanation, but of a specific kind, where realness is only accounted for by the fundamental parts (this is where the 'you are just atoms, bro' comes from): the idea that "lower levels" can exhaustively explain things on the "higher level". Basically, I take it to be the rejection of strong emergence, where higher-level parts have certain causal powers that can't be fully accounted for by their lower-level parts. To me it's very clear that if there are two people, one of whom believes in strong emergence and the other doesn't, then calling both of them reductionists would be very misleading.
-
It's supposed to mock the idea some MAGA folks have that creating more factories, and therefore more factory jobs, is actually so cool, because factory work is cool and masculine and people need those jobs over some gay liberal office jobs.
-
https://x.com/jamiewheal/status/1910704519693971812
-
@Leo Gura Where do you put Jordan Hall?
-
That way of using those words seems wildly misleading and inappropriate in most contexts. When you give, for example, a causal explanation, you don't suddenly provide a new substance to the thing being explained. "Why are you drunk? Well, because I drank 10 beers" - did I provide a 'drinking alcohol' metaphysics to being drunk? That question doesn't make much sense. Another example: saying that the reason matter exists is that God created matter doesn't mean that God is made of matter. John Vervaeke has a metaphysics that very clearly doesn't buy into the idea that things can be exhaustively explained by, or reduced to, their simpler/smaller components.
-
Yeah, I think people here oftentimes confuse having the "correct" take with level of development. I don't know if you have seen that convo, but this kind of goes back to the moral luck convo Destiny had with Rem about Hasan. It was almost exactly what you said there - the reason why you (in that case Hasan) are not a neo-nazi is not your level of development or that you actually reasoned your way there on your own, but that you were lucky enough that your close environment indoctrinated you with beliefs that we collectively take to be more acceptable and correct. "If you are a very good reasoner and you have the ability to synthesize and to juggle multiple perspectives towards an acceptable moral and value system that is aligned with mine, you are highly developed and very much above orange and at least yellow; but if all the same things apply except that you have a different moral and value system, you are stuck in orange at best."
-
Yeah, precisely.
-
And that's just the start - we can easily attach other arguments to this, like the fact that you can't sustain exponentially growing energy demand on a finite planet. Before anyone says "but efficiency, bro": that doesn't work in practice, mostly because of the Jevons paradox. The more efficient shit gets, the more accessible it gets, and on a net scale we end up spending much more energy. Today, the average person uses more energy in a single day than a king did in an entire year centuries ago. AI just makes this whole thing 10x worse: 1) because it makes things more efficient, and 2) because the better it gets, the more things it can be used for - and this goes back to the Jevons paradox. We also have no good way to properly price things (in a way where the price is not decontextualized, but is contextualized within the whole world), which inevitably leads to externalizing harm. Why? Because we don't immediately need to pay the price for it ("other stupid people who will be directly affected by it will pay the price for it") - if everyone had to pay the real price, almost all businesses would immediately go bankrupt. And we can go on with other issues (the AI alignment problem, environmental issues, etc.), but one main point is that a naive "but history though" doesn't work, because shit is wildly different now in multiple ways.
-
Yeah sure, but once the purposes are specified, we can give an answer to that question. If we care about x set of values and we are clear about what that set contains, then we can go on to talk about how Trump affects those values. Of course, there can be other layers of disagreement - like disagreeing about how we should measure how much we progress or regress with respect to that value set - but hopefully we can ground that in a shared meta-epistemic norm, and using that we can figure out which epistemic norm measures more accurately. I think a lack of clarity and explication about what kind of norms we use to collect a set of facts, and what kind of norms we use to tell a story about that set of facts, is one of the things that makes us very prone to self-deception, because our underlying biases can hop into the evaluative process (especially when it comes to the storytelling part - the weight given to each fact will differ, and even which set of facts you collect will differ).

But yeah, of course this goes much deeper, because there are some meta-norms (having to do with relevance realization) that we unconsciously use to determine what we consider reasonable vs. far-fetched, and those meta-norms will probably never be exhaustively explicated - and because of that, some disagreements in practice won't ever be "solved". Even if we agree on an argument (with all the premises, the conclusion, and the rule of inference as well), we can still disagree on what implications follow from the conclusion. There is an infinite set of logically possible implications that can come from any given conclusion, and this goes back to the "reasonableness" problem I outlined above, to which I have no good answer other than that we should train our ability to explicate those epistemic norms as much as we can (so that ever deeper layers of disagreement can be specified, pointed out and then argued about without being vague).

Tldr - ultimately there has to be a norm when it comes to navigating any disagreement, because otherwise disagreement wouldn't be possible. But exhaustively explicating that norm is impossible in practice, so we will probably never solve the deepest disagreements.
-
There can be an attitude and a habit of integrating and synthesizing, but to me it seems there is a lot of nuance in what "proper" integration/synthesis means. There is a normativity to it that is oftentimes not specified - and because of that, talk about higher and lower perspectives becomes messy and unproductive. I can be aware of multiple perspectives, but then I can choose properties from each perspective randomly and create an assembled mess. What makes the outcome not an assembled mess? How can I know which properties are relevant and which should be gotten rid of, given a set of perspectives? To me, the answer to those questions is grounded in pragmatism - so when it comes to specifying the norms and characteristics by which I create a hierarchy of perspectives, it will be based on the given purpose/goal/function, and that will automatically give meaning to terms like "better" or "higher". This provides the ability to define and interpret those terms in a non-vague way, as opposed to the other approach, where some kind of vague, goal- and function-independent meaning is attached to them. There is also oftentimes an underlying assumption that if the context window one considers is bigger (which is often labeled a more complex perspective), that is in and of itself better - but that is purpose-dependent as well. A bigger context window might introduce more noise and make things worse.
-
It's not the case yet (if you interpret it literally), but as AI gets more advanced, the capacity for harm of anyone who has access to it goes up as well.
-
I don't think you appreciate the level of setback and depth that "shit will just break down" entails in the context we are in now. I think appealing to history doesn't work much here; times in the past were different compared to how things are now. Sure, you could externalize harm, but not on this scale. In the past, a seriously bad actor could only do so much damage (even if he had all the wealth and all the necessary people); as time passes - especially with the rapid advancement of AI - the potential for a single individual to cause significant harm is increasing at an almost exponential rate, without any need for substantial wealth or a powerful network of people. The idea that we will have a casual setback in a society where each member can have at least a nuclear-bomb-level effect on a global scale, while having perverse incentives, misinformation and a bunch of bad actors, is just naive and not realistic. We won't go extinct, but there will be a serious and chaotic setback.
-
Is this the classic move of "The Tao that can be told is not the eternal Tao" - the moment you try to formalize and strategize a plan, you've already lost? But regardless, what does "become totally unbound in your own becoming" mean in practice?
-
I don't share his idea, I just shared his view, which is similar to yours in that we have no fucking clue what we are doing.
-
That's roughly what Jordan Hall is saying. Jordan Hall would say that we have no fucking clue what we are doing or where our motivations come from, and that we should connect to God and act based on that, not based on our ideologies or random motivations. The idea is that whatever our intellect comes up with will be dogshit and useless, so we might as well surrender and let the transcendent show the way.
-
I don't think I am tracking what you are saying. The premise is not that we are fully in charge of what we are becoming and what we are doing - the premise is that our actions inevitably affect nature, and to the degree that we have control over those actions (even if it's very little), we should use that control wisely.
-
I think there is some truth to that (the need for control and the need to take credit), but on the other hand, it is also about this: given that we are here and we are planning on staying here in the future, why wouldn't we try to do it in a conscious, wise way rather than an unconscious, unwise one?
