Joshe replied to Xonas Pitfall's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Cognition occurs within awareness. What you experience when you're not thinking is awareness with nothing flowing through it. The less cognition there is, the more noticeable awareness becomes. At low levels of cognitive activity, awareness starts to feel infinite, the way a dark room can feel boundless because you can't see its edges.
Cognition is how we interface with that presumed infinity, but cognition has constraints. We can't compute 2x8 and 8x3 at the same time. Interpreting "reality is infinite" and "reality is love" at the same time also requires cognition: you can only focus on one or the other, or train cognition until both compress into a single gestalt. But that is modifying interpretation through cognition, not expanding awareness.
-
Joshe replied to AtmanIsBrahman's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
It's supply and demand. Spiritual narratives stabilize the self, solving many serious problems at once. They make people feel:
- special
- purposeful
- enlightened
- morally superior
- connected to ultimate truth
- part of a cosmic story
And more. Belief's main function is stabilization. Most people, including intelligent ones, are very willing to adopt a set of beliefs if it's packaged coherently and can stabilize the self. Stabilizing the self is what matters most. Most seekers aren't really after truth; that's post-hoc rationalization 99% of the time.
The supplier must contend with the fact that the demand consists largely of unconscious stabilization needs, not a need for truth. That seems hard to balance for an integrous teacher: serving stabilization is antithetical to many of the truths an integrous teacher would want to teach, but if you don't serve ample stabilization, your audience will look elsewhere.
-
Joshe replied to Xonas Pitfall's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Significant expansion is a pipe dream because human awareness has limited bandwidth. People mistake the practice of channeling awareness for expanding it. You can practice a specific conceptual or perceptual skill so much that you build compression artifacts, gestalts that stand in for a lot of granularity, and those can feel profound, but that isn't "expanding" awareness. It's just focusing or channeling awareness on a specific thing.
One could do nothing but train their mind to perceive reality as pure consciousness, to the point that they live from that recognition, ponder its logical implications, and work to integrate them, and this would still not produce "expanded" awareness. Trained awareness, yes; expanded, no.
Conceptual spirituality (interpretation plus metaphysical narrative) is held together by unconscious, accumulated logic and premises that, over time, coalesce into a metacognitive gestalt. The experience of that gestalt is often mistaken for "expanded" awareness, transcendence, or awakening. If a practice is largely building and compressing an interpretive framework, then "more practice" doesn't strip away filters to reveal what already is. It just replaces one set of filters with another. "The only way to verify my claims is to build the same compressed gestalt I have, at which point you'll agree with me."
-
So far I see five types who regularly hate on AI:
1. Inexperienced users not interested in the tool
2. Conspiratorial thinkers
3. People who were already heavily focused on technological harm pre-AI
4. Conservatives
5. Intellectual egos (AI threatens the visible gap between expert and non-expert cognition)
#5 is interesting. There are legitimately intelligent people out there who are fundamentally biased against AI simply because it makes access to high-quality information and thinking more widely available. They cloak their disdain in virtue (concern), but the truth is that AI is a massive threat to intellectual hierarchies, and many identities built on intellectual superiority feel threatened by it.
Just something to keep in mind when you watch a smart person bash the shit out of AI. If they advocate slowing down or using it less instead of education and AI literacy, their concern is most likely not epistemics. It's about the moat.
-
Progress has been made. Gotta start somewhere.
-
By not being passive. By interrogating and being critical of the output. It's a reasoning instrument, not an authority. The most powerful use of AI is interactive interrogation. The very fact that everyone is asking "How do we know if we can trust it?" means the idea of epistemic responsibility has gone mainstream. Humans will adapt, just as they stopped trusting the first answer they saw on Google. AI will eventually normalize interrogating answers. If that happens, it would be one of the biggest shifts in the epistemic environment humans have ever experienced.
-
But that's not all it will do. In almost every conversation I have with AI, it corrects or checks me on something. It very often lets you know when you haven't reasoned well or have missed something. The average person coming in contact with that several times a week is a huge deal for epistemic responsibility. IMO, epistemic responsibility will skyrocket with each new generation, because they'll grow up in an environment where it's a common topic and a necessary skill. They'll be taught early how to verify answers, prompt effectively, and detect hallucinations. Learning how not to be duped by AI is essentially learning epistemic responsibility.
-
It's even wrong there, IMO. Intellectual rigor is a cognitive habit that remains relatively stable. Lazy thinkers of yesterday will usually be lazy thinkers tomorrow. People who used to search Google for the first easy answer use AI the same way. AI doesn't create laziness; it reveals the level of intellectual rigor that was already there. Serious thinkers still verify, but now they can do it faster and better.
Also, people have always stopped double-checking once they feel confident in a source: books, teachers, Google, Stack Overflow. Most people were already doing the minimum verification they felt necessary with Google. The verification process usually consists of a ton of logistical busywork, not higher-order cognition. AI eliminates the bulk of that busywork, freeing serious thinkers up for higher reasoning. Evaluation is a higher form of cognition than logistical busywork. With AI, people will evaluate MORE, not less, because AI reduces the logistical friction. Even unserious thinkers will do more higher-level reasoning as a result of AI. The net effect on cognition will be positive, IMO.
"Tools change the speed of thinking loops, but intellectual rigor comes from the mind running the loop. Reducing friction around information tends to increase the number of reasoning cycles people run. When the number of cycles increases, even people who are not highly rigorous will engage in more higher-level reasoning than they previously did."
Someone who previously reasoned through 10 questions per week might now reason through 100.
-
"Some" doesn't quite capture it. Almost everything it presents is sloppy as hell and misleading. Their main evidence that programming productivity is a mirage is that they tracked 16 SWEs about a year ago who did worse with agentic coding. They present weak, incredibly low sample size, non peer-reviewed studies as meaningful science. You shouldn't let a source like this sway you on anything. And of course brain activity decreases when using tools. That's the whole point of tools. lol. Historically, tools tend to expand cognition, not collapse it. Also, the video serves as a perfect example of doing exactly what it warns about: presenting information confidently enough that people assume it's correct via authority signals (studies). It's worth noting the video title is "LLMs can't reason - AGI impossible" while only 20% of the video is about that. The other 80% is ai-bad for humans. It has a propaganda feel to it.
-
Yes, it's called a prediction. Most people who have an opinion predict AI use will lead to cognitive atrophy, and maybe I'm mistaken, but you seem to lean heavily in that direction as well. I predict it won't. Most people who predict it will are making several errors, the first being that they haven't thought seriously about the issue for any extended amount of time; they just parrot whatever is most intuitive. I can build a very strong case that the common prediction is wrong, at which point confidence in it takes a nosedive. At least if you allow reason to update your models.
-
Of course. I was referring to early stages of physical and cognitive development. But my main point was that the vast majority of humans would be made more cognitively capable by AI usage, not less.
-
I used to think that too, until a few hours ago. I’ll explain why it’s not the case in a new post.
-
Cognitive decline due to AI usage would mostly occur only in minds still under development. Even unserious thinkers who believe everything the AI tells them will be made more intelligent by AI, not less.
-
Yes, dropping agents into an existing (brownfield) production codebase is very risky, but a lot of that risk can be mitigated with good strategy. Getting agents up and running in a brownfield codebase is largely a project management and system design problem that requires lots of iteration and creativity. The problem is largely solvable; you just have to figure it out. You could instruct CC to write comprehensive test coverage for every database operation. Before anything touches a production db, you'd have a test suite confirming everything works right in staging. That's just one guardrail you could build into your agentic workflow (see the sketch below).
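To make that guardrail concrete, here's a minimal sketch of what such a test suite might look like, assuming a Python data layer and pytest. The users table, the STAGING_DB environment variable, and the specific checks are all hypothetical placeholders, not anything from the original post; the point is simply that every database operation gets exercised against a staging database before an agent-written change is promoted.

```python
import os
import sqlite3

import pytest

# Hypothetical convention: point tests at a staging database, never
# production. Defaults to an in-memory DB for illustration.
STAGING_DB = os.environ.get("STAGING_DB", ":memory:")


@pytest.fixture
def conn():
    # Fresh connection per test so agent-written changes are verified
    # in isolation and no state leaks between cases.
    conn = sqlite3.connect(STAGING_DB)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users "
        "(id INTEGER PRIMARY KEY, email TEXT UNIQUE)"
    )
    yield conn
    conn.rollback()
    conn.close()


def test_insert_roundtrip(conn):
    # Confirms the basic write/read path still behaves after agent edits.
    conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))
    row = conn.execute(
        "SELECT email FROM users WHERE email = ?", ("a@example.com",)
    ).fetchone()
    assert row == ("a@example.com",)


def test_unique_constraint_enforced(conn):
    # Guards against an agent silently dropping a schema constraint.
    conn.execute("INSERT INTO users (email) VALUES (?)", ("b@example.com",))
    with pytest.raises(sqlite3.IntegrityError):
        conn.execute("INSERT INTO users (email) VALUES (?)", ("b@example.com",))
```

Running a suite like this in CI, gated so nothing merges on a failure, is one way to let agents iterate on a brownfield codebase without ever putting the production database at risk.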
-
You don't need 80% of that legacy PHP for this use case. Talk about bloated slop. It's funny you guys are invoking code quality while defending WordPress-adjacent legacy PHP.
