Everything posted by DocWatts
-
While a criminal conviction won't deter Trump's MAGA cultists, those folks were already locked in for Trump, regardless of what he does. The election is instead going to hinge on what proportion of the other 70% of the country will bother to come out and support Biden and the Dems in 2024. All signs are that it's going to be a closer election than in 2020, but that's a far cry from predictions that 2024 is going to be a disaster for Dems.
-
We're still close to a year away from the elections, and Dems have electorally overperformed in 2018, 2020, 2022, and 2023. Overt religious extremism (such as abortion bans) and running incompetent, unlikeable candidates seem to be badly hurting the GOP in actual elections. Trump is facing multiple criminal indictments, and a conviction on any one of them could for all intents and purposes put an end to his ability to campaign (even if it ends up being something akin to house arrest, that'd be an end to his Nuremberg-esque rallies). Which isn't to say that Biden is a shoo-in, but the situation likely isn't as dire as the polls would lead you to believe. Additionally, it's worth keeping in mind that political polls only capture the views of people who are willing to answer a phone call from an unknown number. Generally speaking, this tends to be people with landlines, who are older and thus more conservative. As a personal aside, I know very few people under the age of 40 who would actually answer a call from an unknown number.
-
I tend to answer in a few different ways depending on their level of interest. For small talk, I'll just say that I pursue spirituality in a secular way, and leave it at that. If I sense that they might actually be interested in discussing spirituality, I might say something along the lines of: 'My practice involves integrating spirituality with insights from science and philosophy.' Or: 'I'm not religious in a traditional sense, but I'm highly interested in how the mind constructs its Reality.' Then they'll either find this interesting and we'll end up having a conversation, or they'll politely change the subject. 😆
-
If you want an actual answer to the question (as opposed to a place to vent about the portion of Americans who are hurtling us towards a possible Trump dictatorship in 2024), I'd recommend checking out Jonathan Haidt's 'The Righteous Mind', where he gives an explanation of how liberals and conservatives form the moral intuitions which are the foundation of their politics and worldview.

To massively simplify for the sake of brevity, he identifies five basic foundations for morality that have been universals throughout human societies: Care, Fairness, In-Group Loyalty, Authority, and Purity. He also demonstrates that the 'point' of these moral foundations isn't to make a just or equitable society so much as to allow us to live together as social animals and have functioning societies that can compete against other groups.

Liberals tend to place more emphasis on Care and Fairness as their moral foundations, while conservatives tend to place more emphasis on In-Group Loyalty, Obedience to Authority, and Purity as the foundations of their morality. It's not hard to see how the moral intuitions behind conservatism make conservatives more susceptible to things like the racism, conspiracism, and cult behavior we're seeing in the MAGA movement.
-
DocWatts replied to Danioover9000's topic in Society, Politics, Government, Environment, Current Events
I'd highly recommend John Vervaeke's recent book on the subject, Mentoring the Machines, where he helpfully differentiates three types of AI and the domain of problems each is suited to.
1) Narrow AI is suited for well-defined problems, and makes use of algorithms to solve them. Self-driving cars are a good example of this.
2) Artificial General Intelligence (AGI) is suited for ill-defined problems that are combinatorially explosive (i.e., can't be brute forced) and require a novel approach. It makes use of heuristics to solve a wide range of problems. All animals (including humans) are examples of general intelligence.
3) Super Intelligent AI is a hypothetical type of AI that would purportedly be able to address undefinable or existential problems that don't have a solution (i.e., 'what is the meaning of life?'). Unlike the other two, this category is more akin to religious belief than something we need concern ourselves with in our lifetime. Ray Kurzweil's 'technological singularity' pseudo-religion is a good example of this.
I've found that this differentiation can be helpful in how discussions around AI are framed. For example, it's helpful to keep in mind that the forms of AI already having a noticeable impact on our society are Narrow AI. It's an open question as to whether we'll see AGI in our lifetimes, but its impact on the world has the potential to be orders of magnitude more consequential than Narrow AI. IMHO, Super Intelligent AI is a fantasy and not worthy of serious discussion or consideration, when it's not even clear whether or not AGI will prove to be a practical possibility. -
DocWatts replied to Buck Edwards's topic in Society, Politics, Government, Environment, Current Events
If you want to be the modern day equivalent of someone voting to end democracy in 1930s Germany, I can't stop you, but I question what you're doing on a conscious politics forum if you're unable to recognize that Trump is a completely unhinged and unprincipled authoritarian con-man. -
DocWatts replied to Buck Edwards's topic in Society, Politics, Government, Environment, Current Events
If you value being able to vote in future elections, voting for Democrats is essential, since the Republican plan is essentially to install Trump as a dictator. Look into 'Project 2025' if you want details on MAGA's plan to dismantle democracy in America. https://accountable.us/right-wing-network-plots-to-undermine-democracy-with-project-2025/ “...democracy experts view Project 2025 as an authoritarian attempt to seize power by filling the federal government, including the Department of Justice and the FBI, with unwavering Trump supporters, which could potentially erode the country’s system of checks and balances.” -
As an interesting aside, this was the view of Marx as well, whose position was that socialism needs to be built on top of the massive increase in productivity that developed as a result of capitalism. Something that actual 20th century Marxists often ignored as they tried to implement socialism in feudal societies like Tsarist Russia, with disastrous results. (Note that I'm not saying that ignoring this developmental aspect of Marx's theory is the only reason why communist experiments didn't work out in practice.)
-
Robert Evans (the person walking the other two guys through Kissinger's history in that podcast) is a journalist who's covered military conflicts around the world, and has also written about online extremism and political violence. Which is to say, he's very well informed about the geopolitical topics he covers on his Behind the Bastards podcast. The other two guys are guests, who happen to be comedians that run an American history podcast called The Dollop. I've listened to both podcasts for years and would highly recommend both.
-
DocWatts replied to martins name's topic in Society, Politics, Government, Environment, Current Events
Understanding Marxism by Richard Wolff is a good starting place, as he does a good job of taking Marx's theory and updating it for our modern era. It might be a bit basic if you're already very well versed in Marxism, but I found it highly helpful. Marx: A Very Short Introduction by Peter Singer is also a good summation of some of Marx's texts. -
DocWatts replied to martins name's topic in Society, Politics, Government, Environment, Current Events
I'll fully admit that most of my reading on Marx comes from contemporary sources who translate his ideas into a format that's understandable for someone living in our modern era, rather than banging my head against something like Das Kapital directly. (By this I mean people who attempt to give a good faith interpretation of his work, rather than someone like Jordan Peterson.)

As someone who's read lots of philosophy (including some very difficult primary sources), I'll almost always recommend that people get the gist of a philosopher from contemporary sources, rather than trying to decipher highly difficult texts that were written in a different era. For instance, 99% of people are better off getting the gist of someone like Immanuel Kant from contemporary scholars, rather than trying to wade their way through the Critique of Pure Reason.

As for your analysis of Marx in your original post (apologies for not addressing this directly), I'd argue that his Labor Theory of Value and his depiction of the Alienation of Labor are generally true, at least in a broad sense (especially so under unregulated Capitalism). I won't go as far as Marx and say that the CEO of a company adds nothing, but the vast majority of the value that's created by a company like Tesla comes from the actual workers who engineer and build the electric cars, not from Musk or its board of directors. Which is why I say that his critiques of capitalism are largely valid, even if his idea of a classless and stateless society isn't a realistic or workable solution. -
DocWatts replied to martins name's topic in Society, Politics, Government, Environment, Current Events
The primary flaw of Marxism is analytical rather than moral: it fails to adequately account for how its proposed alternative to capitalism could also develop its own coercive power hierarchies which consolidate power into a small, unaccountable elite. Which is exactly what happened in the Soviet Union, and just about everywhere else Communism has been tried. In practice, private capitalism was simply swapped out for a version of state capitalism, with worker ownership over the means of production never materializing.

The root of why this happened is that this proposed alternative to capitalism was mired in a form of Game Denial, which is to say a denial of the ways that aspects of society are a zero-sum game, and a denial of how human beings are in competition with one another in unavoidable ways. That doesn't just magically go away in a post-capitalist society where everyone is meant to be working for the common good of all. Failing to account for this and build safeguards inevitably leads to abuses of power, which is exactly what we see in societies which tried this experiment.

Note that all of this is a separate issue from the largely valid critiques that Marxism makes of Capitalism. One can be correct in the diagnosis of a problem and mistaken in how that problem should be addressed. -
DocWatts replied to Maximilian's topic in Society, Politics, Government, Environment, Current Events
Sometimes I jokingly wonder if, with Elon's handling of Twitter, we're watching a real life version of 'The Producers' (where a shady producer intentionally sets out to make a terrible Broadway musical called 'Springtime for Hitler', in the hope that it will bomb so that he can defraud his investors). But that would probably be giving Elon too much credit. I think he really is just as dumb as he seems. -
Would highly recommend the Behind the Bastards podcast about Kissinger if, like me, you were unaware of the full extent of this guy's devilry. Growing up I associated him with Richard Nixon and the terrible decision to needlessly extend the Vietnam War for another five years, but the totality of his influence on US foreign policy was so much worse than that.
-
A 'meta' (as in metamodern) way of framing the conflict would be one that uses systems thinking, game theory, and dialectical/developmental models as framing devices to try and understand the perspectives and motivations of the different sides of the conflict. It doesn't entail having to take a 'middle of the road' stance on every issue, especially when there's a clear injustice that remains unaddressed.
-
DocWatts replied to Parallax Mind's topic in Society, Politics, Government, Environment, Current Events
David Foster Wallace on postmodernism: -
I'd argue that attempting to 'both-sides' a conflict where there's an obvious and overwhelming power imbalance at work, and where one side is clearly more responsible for the state of events that led to the conflict, is a misuse of whatever framework you happen to be using to arrive at your 'neutrality' (be that Spiral Dynamics, Integral, spirituality, etc). Because the power dynamics are so overwhelmingly in favor of the Israeli state, the primary responsibility for taking the first steps to end the conflict is also in their court.

Recognizing the obvious injustice of the current situation doesn't mean wanting Israel wiped off the map. Instead, it means an end to Israel's illegal military occupation and forced ghettoization of Palestinians, and support for steps to begin implementing a two-state solution that would be a first step in ending the conflict. Obviously any political solution isn't going to be a 'quick fix'. Intergenerational trauma and decades of brutalization don't disappear overnight, but the process has to start somewhere.
-
Thing is that genocide doesn't always look like a group of people being rounded up and sent to death camps (i.e., how a typical Israeli is likely to understand genocide). Sometimes an indigenous people just happen to have the misfortune of being in the way of a more powerful entity who feels entitled to the place they've been living, which is what happened to the Native Americans and is a closer analogue to what's happening to the Palestinians. In both instances genocide is a byproduct of the policy goals of these more powerful entities, and in both cases the result is the systematic destruction of a way of life, along with appalling conditions for the survivors of this process.

Gaza has been described as the world's largest Open Air Prison, and the long term goal of Israel's far right has been to make life so unlivable for Palestinians that they lose hope of ever changing their situation, and try to find a way to leave as a result. Basically, make life so hopeless and unpleasant in Gaza that the life of a refugee would be preferable to remaining under Israeli military occupation. Of course the irony is that making life for a minority so unpleasant that most of them leave the country as refugees echoes policies that were weaponized against Jews in the early years of Nazi Germany and elsewhere (such as Tsarist Russia).
-
Might as well say that you can't figure out whether you're for or against slavery. Does humankind have a long and complicated history with slavery, and is there a ton of nuanced understanding that can be applied to slavery as an institution? No doubt. But figuring out whether or not slavery should be abolished is not ethically complex (at least not for a person living in our modern world). Likewise, figuring out where you stand on whether the Israeli state should be given a free hand to commit a slow genocide against a disempowered ethnic minority is not an ethically complex question. That's not to say a sustainable solution is obvious or straightforward, but recognizing that the current status quo is absolutely unacceptable should be easy.
-
I'd heavily caution against using Spiral Dynamics (or Integral) as a personal development model, since it's all too easy to deceive oneself into thinking that you're much more 'developed' than you actually are. Much better to frame Spiral Dynamics/Integral as a sociological model that attempts to work out some of the dialectics behind how worldviews work, in a very broad sense. It's certainly not a replacement for understanding specific domains in a more contextual way.
-
We're still a year out from the election, and the Democratic Party has been consistently overperforming polling predictions since 2018. Biden won by 7 million votes in 2020, no Red Wave materialized in 2022, and Dems did very well in 2023. Trump is facing four criminal indictments right now, and there's a non-negligible chance that he'll be removed from the ballots of some states (such as Michigan) for inciting an insurrection. Don't get discouraged by the polls. Vote. Encourage others to vote. Get involved in the political process by canvassing.
-
DocWatts replied to Rafael Thundercat's topic in Intellectual Stuff: Philosophy, Science, Technology
Ursula Le Guin really is an amazing writer. The Left Hand of Darkness and The Dispossessed are both highly recommended if you're looking to check out her novels. -
I'd counter that things can only be 'mostly' abstract for us. I'm doubtful as to whether 'pure' abstraction can actually exist, because explicit knowledge of anything (including abstract concepts such as arithmetic) is grounded in a mountain of tacit knowledge that we pick up from having consequential interactions with the world around us. The only reason that 1+1=2 is meaningful for us is because we're able to manipulate things in Reality, which are sometimes encountered alone and which are sometimes encountered in pairs.
-
If anyone is interested enough in this topic for 5-10 mins of reading, a section of the philosophy book I'm writing deals with some of the difficulties in creating an AGI. The tl;dr version is that artificial intelligence doesn't actually understand anything, and a capacity for understanding isn't something that can just be programmed into a computer. Rather, understanding is an embodied process that relies on Reality having consequences for the being in question. Unlike living beings, AIs have no 'skin in the game' as far as their interactions with Reality. Living beings and digital computers are organized around very different axiomatic principles, so it's an open question as to whether or not a capacity for understanding can be replicated in a disembodied AI.
______________________________
What Artificial Intelligence Can Teach Us About Minds

As of the time of this book's writing in 2023, machine learning algorithms such as ChatGPT have advanced to the point where their responses to questions can correspond to an impressive degree with how human beings use written language. ChatGPT's ability to incorporate context in conversationally appropriate ways makes interacting with these models feel uncannily natural at times. Of course, training an AI language model to interact with humans in ways that feel natural is far from an easy problem to solve, so all due credit to AI researchers for their accomplishments.

Yet in spite of all this, it's also accurate to point out that artificial intelligence programs don't actually understand anything. This is because understanding involves far more than just responding to input in situationally appropriate ways. Rather, understanding is grounded in fundamental capacities that machine learning algorithms lack. Foremost among these is a form of concernful absorption within a world of lasting consequences; i.e., a capacity for Care.

To establish why understanding is coupled to Care, it will be helpful to explore what it means to understand something. To understand something means to engage in a process of acquiring, integrating, and embodying information. Breaking down each of these steps in a bit more detail:
(1) Acquisition is the act of taking in or generating new information.
(2) Integration involves synthesizing, or differentiating and linking, this new information with what one already knows.
(3) Embodiment refers to how this information gets embedded into our existing organizational structure, informing the ways in which we think and behave.

What's important to note about this process is that it ends up changing us in some way. Moreover, the steps in this sequence are fundamentally relational, stemming from our interactions with the world. While machine intelligence can be quite adept at the first stage of this sequence, owing to the fact that digital computers can accumulate, store, and access information far more efficiently than a human being, it's in the latter steps that they fall flat in comparison to living minds. This is because integration and embodiment are forms of growth that stem from how minds are interconnected with living bodies. In contrast, existing forms of machine intelligence are fundamentally disembodied, owing to the fact that digital computers are organized around wholly different operating principles than those of living organisms. For minds that grow out of living systems, the interconnections between a body and a mind, and between a body-mind and an environment, are what allow interactions with Reality to be consequential for us.

This is an outcome of the fact that our mind's existence is sustained by the ongoing maintenance of our living bodies, and vice versa. If our living bodies fail, our minds fail. Likewise, if our minds fail, our bodies will soon follow, unless artificially kept alive through external mechanisms. Another hallmark of living systems is that they're capable of producing and maintaining their own parts; in fact, your body replaces about one percent of its cellular components on a daily basis. This is evident in the way that a cut on your finger will heal, and within a few days effectively erase any evidence of its existence. One term for this ability of biological systems to produce and maintain their own parts is autopoiesis (a combination of the ancient Greek words for 'self' and 'creation').

The basic principles behind autopoiesis don't just hold true for your skin, but for your brain as well. While the neurons that make up your brain aren't renewed in the same way that skin or bone cells are, the brain itself has a remarkable degree of plasticity. What plasticity refers to is our brain's ability to adaptively alter its structure and functioning. And the way that our brains manage to do this is through changes in how neurons are connected to one another via junctions known as 'synapses'. How we end up using our mind has a direct (though not straightforward) influence on the strength of synaptic connections between different regions of our brain, which in turn influences how our mind develops. Accordingly, this is also the reason why the science fiction idea of 'uploading' a person's mind to a computer is pure fantasy, because how a mind functions is inextricably bound with the network of interconnections in which that mind is embodied.

This fundamental circularity between our autopoietic living body and our mind is the foundation of embodied intelligence, which is what allows us to engage with the world through Care. Precisely because autopoietic circularity is so tightly bound with feedback mechanisms that are inherent to Life, it's proven extraordinarily challenging to create analogues for this process in non-living entities. As such, it's yet to be demonstrated whether or not autopoietic circularity can be replicated, even in principle, through the system of deterministic rules that governs digital computers.

Furthermore, giving machine learning models access to a robotic 'body' isn't enough, on its own, to make these entities truly embodied. This is because embodiment involves far more than having access to and control of a body. Rather, embodiment is a way of encapsulating the rich tapestry of interconnections between an intelligence and the physical processes that grant it access to a world (keeping in mind that everything that your body does, from metabolism to sensory perception, is a type of process).

For the sake of argument, however, let's assume that the challenges involved in the creation of embodied artificial intelligence are ultimately surmountable. Because embodiment is coupled to a capacity for Care, the creation of embodied artificial intelligence has the potential to open a Pandora's box of difficult ethical questions that we may not be prepared for (and this is in addition to the disruptive effects that AI is already having on our society). Precisely because Care is grounded in interactions having very real consequences for a being, by extension this also brings with it a possibility for suffering.

For human beings, having adequate access to food, safety, companionship, and opportunities to self-actualize aren't abstractions, nor are they something that we relate to in a disengaged way. Rather, as beings with a capacity for Care, when we're deprived of what we need from Reality, we end up suffering in real ways. Assuming that the creation of non-living entities with a capacity for Care is even possible, it would behoove us to tread extraordinarily carefully, since this could result in beings with a capacity to suffer in ways that we might not be able to fully understand or imagine (since it's likely that their needs may end up being considerably different than those of a living being).

And of course, there's the undeniable fact that humanity, as a whole, has had a rather poor track record when it comes to how we respond to those that we don't understand. For some perspective, it's only relatively recently that the idea of universal human rights achieved some modicum of acceptance in our emerging global society, and our world still has a long way to go towards the actualization of these professed ideals. By extension, our world's circle of concern hasn't expanded to include the suffering of animals in factory farms, let alone non-living entities that have the potential to be far more alien to us than cows or chickens. Of course, that's not to imply that 'humanity' is a monolith that will respond to AI in just one way. Rather, the ways that beings of this type will be treated will almost certainly be as diverse as the multitude of ways that people treat one another.

Of course, all of this assumes that the obstacles on the road to embodied artificial intelligence are surmountable, which is far from a given. It could very well be that the creation of non-living entities with a capacity for understanding is beyond what the axioms of digital computation allow for, and that apparent progress towards machine understanding is analogous to thinking that one has made tangible progress towards reaching the moon because one has managed to climb halfway up a very tall tree. Yet given the enormity of the stakes involved, it's a possibility that's worth taking seriously. For what it's worth, we'll be in a much better position to chart a wise course through the challenges that lie ahead if we approach them with a higher degree of self-understanding.
-
While ordinarily I might write out a mini-novella on the genealogy of hatred, I think that this cartoon sums up a major component of it beautifully. While there are of course forms of hatred that come from being brutalized by other people or groups, fear mixed with ignorance is a major aspect of how you get someone to hate a person or group who hasn't actually harmed them.