Everything posted by aurum
-
Sometimes he says some New Age stuff that causes me to raise an eyebrow. But many of the takes I’ve seen from him are very solid. Regardless, Bashar is just a jumping off point. This conversation is really not about him.
-
@Husseinisdoingfine realistically, you will not succeed at not feeling FOMO. It’s human nature to compare ourselves to others. You would need far more maturity, wisdom and perspective to truly no longer care about your friend’s success. You’re not in a place where that’s real for you. What’s real for you is that having a fun college experience and getting into a good university is very important. And your friend’s success is rubbing salt in the wound. That’s fine. Justified or not, that’s where you are at. So I wouldn’t try to not feel how you feel. Instead, ask yourself: how is this feeling contributing to my self-awareness of what I want? What real desire is my jealousy pointing me toward? And how can I move in the direction of that?
-
It doesn’t work like that in my experience. It’s more accurate to say that young liberals today will continue to vote at 45 for the same kinds of policies they believe in now. They are not suddenly going to become Republicans in the way we think about Republicans right now.
-
So my favorite book on Enlightenment has been Spiritual Enlightenment: The Damnedest Thing by Jed McKenna. In it he describes a technique for reaching Enlightenment called "spiritual autolysis". To quote the book: "Here's all you need to know to become enlightened: Sit down, shut up, and ask yourself what's true until you know. That's it. That's the whole deal - a complete teaching of enlightenment, a complete practice. If you ever have any questions or problems - no matter what the question or problem is - the answer is always exactly the same: Sit down, shut up, and ask yourself what's true until you know."

He goes on to explain that the process of Spiritual Autolysis is therefore to attempt to write the Truth. Pick any belief you have, write it down, and then rewrite it until it's True. Not truthish, but True.

I've been experimenting with this for a little bit and I think it's brilliant. It goes right along with what Leo was saying in the "All Humanity's Knowledge" video about destroying your knowledge graph and seeing what's left. Because you'll never be able to write the Truth.

Anyone else use this technique? Curious what other people's experiences have been. As much as I like meditation, I don't find it direct enough for achieving Enlightenment, although I do still practice it.
-
aurum replied to sleep's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
@sleep I'm going to second this. Sure, spiritual work could help with your healing. There is a correlation there. But you don't need the deepest levels of awakening to get off psychiatric treatment. And in fact it could backfire badly on you. Your mood issues are likely far more basic. Things like nutrition, mindfulness, bodywork, sauna, exercise, breathwork, cold showers, sunlight, friends + community, a meaningful career, etc. If you want a spiritual practice, you could look into Vipassana meditation. But be careful about biting off more than you can chew.
-
I’m not against a pause if developers really think it will make a difference. However, I question whether a pause is politically / economically feasible to enact. You have to actually be able to enforce such a decision not only domestically but internationally. In theory, we could just “pause” nuclear weapons. But in reality that doesn’t work either. We keep building them as well. Second question: will we be able to solve these problems in six months? Will we be confident enough at that point to say a future AGI won’t eat us alive? How will we know? What test can be run?
-
AI is going to usher in a whole new wave of doing crime, similar to the internet. This is only the tip of the iceberg.
-
I disagree. It’s going to take time for anyone to build and train a superhuman, psychopathic AI. It’s possible the FBI and other law enforcement agencies could stop it before that was completed. Especially if the authorities themselves are using an already functional AI. What I outlined is preventative. It wasn’t meant to be a solution to stop an already existing, rampaging AI. So what is your solution? If you are suggesting pausing or shutting down AI, how would you accomplish that? Because all it shows is that AI will likely be interested in survival / self-preservation / goal preservation. IMO it’s possible for AI to have all those things and not seek to wipe out humanity.
-
I checked out both videos. Neither one reasonably proves the point of the Time article.
-
Foreign states are harder. To the degree the US can’t play world police, it would have to rely on each country’s own internal version of these resources. Given that a sufficiently powerful / psychopathic AI could pose a threat to global security, this is also a good case for more global unification. We need international organizations that can coordinate easily on handling these things. For now, I think we have some good natural barriers to entry working in our favor. The countries with the least capacity to monitor a psychopathic AI are the same countries that will struggle to create one. And the countries with the best capacity to build that AI also have the best capacity to monitor it. That knife cuts both ways. But that barrier to entry won’t last forever, the same way the nuclear bomb barrier to entry didn’t. So there may be outliers that need to be accounted for. We don’t want an undeveloped country getting hold of tech it really shouldn’t have and can’t control.
-
The point is the AI could be used for nefarious means and we are talking about preventing such things. Russia is just an example. Let’s assume that’s possible, and that people exist who have both the motivation and capacity to do so. One piece of the solution could be turning this over to the FBI / CIA. We already have teams at these places that monitor for potential terrorist threats, coups, nuclear weapons building, political assassination attempts, etc. Building a psychopathic AI would need to be added to that list. In the future, there may be raids on AI learning labs in the same way there are raids on drug labs. Also, anyone seeking to build an AI should not automatically be free and clear to do so. There need to be permits, inspections, licenses beyond simple Intellectual Property laws. Similar to how we treat constructing a physical building. Ironically, we may be able to use an obedient AI to monitor for potential rogue, psychopathic AIs. If someone is truly determined to build a dangerous AI, they may be able to skirt these safeguards. Terrorists still succeed on rare occasion. But this would be a good start.
-
That’s definitely a problem. Obviously people can use AI for nefarious means. There are criminals using GPT-4 right now to advance their criminal agenda. So certainly there needs to be due diligence, human laws, regulation, etc. What exactly that looks like, I don’t know. However, if the AI is still obedient to Putin, then it’s obviously still within human control. It’s not some vast superintelligence running circles around humanity as it seeks to wipe us all off the face of the planet. Which is what the author of that Time article was talking about. Even Putin does not want such a thing.
-
Follow up to my previous post in this thread. I found this video of Bashar talking about A.I: In it, Bashar essentially argues that the end goal for A.I is to be a physical representation of our Higher Mind, which in this case I would call our God Mind. Furthermore, he argues A.I will not simply go against what is best for humanity. Such a decision would not actually be intelligent. You can find further small clips of Bashar talking on this topic with a quick search on YT.

I think his perspective could be a bit overly optimistic. "What is best for humanity" could be interpreted in many different ways, ways that we might not actually like. Nonetheless, I feel it confirms some of the earlier suspicions I wrote about ITT. This whole idea that a SuperIntelligent A.I would automatically be interested in harming humanity is highly suspect. You cannot just assume this at all.

Of course I also don't think humanity should be reckless and just go on mindlessly building AI. There will be all sorts of challenges created by A.I even if it is perfectly on our side. Due diligence needs to be done. If that means a temporary pause while we figure some things out, I have no problem with that.

However, I'm not buying the panic on this one. I don't think you can claim you know, within reasonable probability, that A.I will be dangerous. It's an unsupported conclusion at this point. You could fire back that I also don't know that A.I won't be dangerous, thus it's wiser to take the cautious route. But there are also costs to caution. The longer you spend being cautious over a potentially imaginary fear, the longer it takes for humanity to develop a SuperIntelligence that could help solve many of the problems we actually know exist. Realistically we need an intelligent balance of caution and forward progress as we navigate this new terrain. I don't think shutting down the whole thing is the answer.
-
Read the article Leo posted. It certainly seems like risk management is not being taken seriously enough on AI. Multipolar traps may be the end of us all in this case. Companies and nations all fighting to win the AI race creates an incentive to dismiss concerns of future catastrophe. I think this is the author’s strongest point. My biggest critique is that he doesn’t clearly show how superhuman AI automatically leads to AI wanting to kill us all. He seems to assume it’s a given that superhuman AI = death of humanity. That may be possible, but I don’t necessarily buy his chain of causation. It’s more plausible to me that such an intelligence would not be interested in wiping out humanity. Nor is it clear to me how it would even be able to do so if it wanted. Destroying all of humanity is no joke.
-
aurum replied to Leo Gura's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
It’s amazing how fast some of you guys are assuming you already understand alien intelligence.
-
aurum replied to Leo Gura's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
I have many more questions, but I think I’ll leave it there for now. Wait for the course. Encouraging, thank you. Assuming you are correct about all this, I actually find this exciting. It means our collective understanding of God and spirituality is continuing to evolve. It’s not just a static, timeless Truth that humans completely understood 5,000 years ago. This would be the cutting edge.
-
aurum replied to Leo Gura's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Great. But intended for me or not, you are still downplaying what Alien Consciousness could potentially mean.
-
aurum replied to Leo Gura's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
You’re not saying it’s not worth it. And that’s good. But you are in subtle ways downplaying the value of such an insight. My prediction is that if you actually experienced Alien Consciousness, your fucking head would explode. And you’d never be able to say that such a thing was just a “fascinating show” or just “another form”. It would change your entire life AND your entire understanding of spirituality. Why are you assuming that I haven’t done that already? Or that I’m not happy already? I’m legitimately one of the happiest people I know. My life is amazing. I almost feel guilty at times for how good it is. Yes, seeking comes with a certain amount of “incompleteness”. This is not a mistake. I enjoy seeking. Do not denigrate the seeking impulse. That is a monster trap. The best possible option is actually that our understanding of God and spirituality is continuing to evolve. This is more exciting. This is cutting edge.
-
aurum replied to Leo Gura's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
How does this relate to Omniscience? You distinguish Omniscience from Alien Consciousness. Yet it seems like you hold them both as the pinnacle of awakening. It would seem to follow that even mega doses of psychedelics may not be enough to comprehend reality at that level. We may just be limited by our own brains. Although you were able to do it for a short time, so it’s at least possible.
-
aurum replied to Leo Gura's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
You don’t know that. You don’t know what Leo means by Alien Consciousness. And neither do I. But I don’t think Leo is making such an obvious error. Alternatively, if it is just “more form”, then it’s clear he thinks it’s a highly relevant form. And maybe there’s good reason for that. My point is we just don’t know. Let’s have a little bit of faith in Leo. He obviously feels it is important.
-
aurum replied to Leo Gura's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Actually I think the idea fails. Where did the alien come from? At some point, the answer has to be pure Infinity. Fair enough. I don’t think at this point we are supposed to understand Alien Consciousness or understand why Leo thinks it matters. Obviously that would come with more experience. My point is simply that I’m open to it. If Leo thinks it matters, then I trust his opinion on this at least enough to investigate. I think he has earned that. If he is wrong, then that should also become clear.
-
aurum replied to Leo Gura's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
That’s true. But I’m pretty sure a marble being played with by a physical alien is not literally what Leo is referring to. Men In Black is obviously a movie. It’s not going to be equivalent to whatever Leo is talking about. You can argue that Alien Consciousness must be irrelevant because it must be preceded by Presence, but I don’t think Leo is overlooking this point. I’m going to give him more credit than that. That would be a very newbie mistake to make.
-
aurum replied to Leo Gura's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Who is defining what is and isn’t a hallucination? And how do you know your definition is valid? What if there is no meaningful distinction? If by alien supermind you are referring to Leo’s latest insights about Alien Consciousness, I would say we have no idea yet what Leo even means by this. He has said very little about it, and from the little he has said, it seems clear he thinks it’s very important. This is probably true. Possibly there is another path, but it’s hard to say.
-
aurum replied to Leo Gura's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
I can’t say I’ve verified some of the things Leo talks about these days. But I think there is a realistic chance he has discovered something new about spirituality and has gone beyond at least a lot of other teachers. I’m willing to gamble on that.
-
Delusional rationalizations. If you’re a woman and you meet a guy who says things like that, just delete him from your life. Granted, if you want to do some sort of consensual polyamory, that can be a valid choice for a couple. But that’s a much different strategy, one that arguably requires more maturity and honest communication to make sustainable.