Everything posted by r0ckyreed
-
r0ckyreed replied to r0ckyreed's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
INFINITY! -
r0ckyreed replied to r0ckyreed's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
But how do you know? How do you know humans have a sense of self and AI doesn't? Where do you draw the line? -
r0ckyreed replied to r0ckyreed's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Maybe not in the same way, but it can still use it. There is intelligence in a blade of grass and in dog turds. Why wouldn't the Universe use AI just like it uses a human body? Anything can be an avatar. -
r0ckyreed replied to Fredodoow's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
By the way, it is not about how long you are meditating. It is about how deep you can go into this moment right now. Not tomorrow’s practice on a cushion. I mean right freakin now. This is the meditation. -
r0ckyreed replied to Fredodoow's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Contemplate what I wrote. If the goal is to wake up, light meditation is a waste of time. If you want to be comfortable in the dream, then light meditation is fine. Enlightenment doesn't happen by accident, despite what others have told you. -
r0ckyreed replied to Fredodoow's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Going to the gym for a few minutes every day is better than nothing but won't get you a six-pack. Meditating for a few minutes every day is better than nothing but won't get you enlightened. -
When I first heard Leo was going to hire an editor to run another channel that turns the long content into smaller clips, I thought he was going to do something like what Focus Shift Media did. But instead, all that was really done was cutting clips out of the original videos. I mean, that is good and all, but you are forgetting to engage all the senses that you can, such as with visual media. Focus Shift Media did a good job of that. Here are some examples. Notice the comparison between the two videos below and the difference in quality. This is why I think Focus Shift Media's content is more successful than the content from Actualized Clips. Hire Focus Shift Media to do more of your content. You cannot fail with quality content like that! You're welcome!
-
r0ckyreed replied to Keryo Koffa's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Of course the ego is a fool. If it is simple and easy, you probably aren’t doing it right. Awakening takes work. It isn’t a natural state. It is beyond your ordinary state of consciousness. -
Yes. I am asking about preparation and testing, but mostly about preparation. If he gives me a bag, I want to make sure I do it right.
-
I trust my friend, but I don't know if I trust where my friend gets them. It seems like I am more paranoid about this than you guys, probably because I am so new. When you get shrooms from a friend or whoever, what do you do before you use them?
-
Good to know. Thanks.
-
Does anybody else get the annoying message-limit notice when using Claude 3 Opus? I find it annoying that I have to resort to a lower-level model like Sonnet once I have reached the message limit. That is like having a wonderful date with a pretty girl, only to have some random girl interrupt and sabotage the date. I even paid for the $20/month membership to be able to use Claude 3 Opus, so I do not understand why I still have message limits.
-
@Leo Gura
-
A thought just came to me from reading Leo's recent blog post on leftist politics. It is ironic that Leo puts focus and deep critical thought into politics, but he doesn't really have any content about morality. All of his content so far has been about self-help, metaphysics, epistemology, and politics. The only video I have found is the one called Why Good and Evil Do Not Exist, but that is a metaethics video and does not tell us anything about morality or the best ways to navigate moral dilemmas. That is what I feel is missing from Actualized.org: a deep inquiry into and analysis of right action.

Yes, morality is relative, but that doesn't mean we should dismiss all conversation about it. Self-help is relative, yet we discuss that. Think of morality as another part of self-help that we could call other-help. Morality is the biggest issue in our society. Leo has discussed corruption, but there is nothing on normative ethics. Normative ethics and applied ethics are the missing pieces.

We know morality is relative, but that means there are true and false answers within the context that we arbitrarily define. For instance, we could say that health is relative based on our definitions of health and other factors like epigenetics. But within our defined scope, there are true and false answers, such as "eating this mushroom will damage your health." In the same light, we can have true and false answers about morality if we define the parameters, such as being focused on creating social harmony and social well-being, which would also have to be further defined. I get the problem: within every concept we have, people have different definitions, and yet we are still able to refer to the same thing. This is the paradox of language.

We need to talk about morality more because it is an essential part of our lives and personal growth. How can a Devil learn how to be moral if we never focus on answering the question of how to be moral? Why be moral? If you just say morality is a waste of time because it is relative, then you are missing the point. Your health is relative. Morality is social health.
-
I would much rather Leo make a video on the traps of Buddhism/spirituality than the traps of atheism. I see religious/spiritual new-age BS as a deeper problem. So many people on here are Buddhist rats who think they are already awakened. I think atheism is rather easy to deconstruct: all you have to do is open your mind to the possibility that the source of your intelligence doesn't come from you but from the Universe itself. Some traps of Buddhism that I think should be discussed are:

1. Meditation being over-idealized. There should be more emphasis on contemplation than meditation. Meditation will shut your mind off, but contemplation will make you more intelligent and aware.
2. Too much attachment to the Buddha and Buddhist teachings. Isn't it a contradiction to be attached to the Buddha and Buddhist teachings if the teachings are all about non-attachment?
3. Group-think.
-
Glad I’m not the only one. Yup! I am completely logged in. And I don’t see how I could use Opus without being logged in. It seems like they do have message limits even on the paid version.
-
I just opened a Roth IRA with Vanguard and invested $4,000 into VTSAX. Does anybody else use Vanguard and invest in stocks and index funds? It is important to start investing early, invest long-term in index funds, leave your money in the market, and use dollar-cost averaging.
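Here is a toy sketch of what dollar-cost averaging actually does, with made-up prices (not real VTSAX quotes), just to illustrate the mechanic:

```python
# Toy illustration of dollar-cost averaging: the same dollar amount buys
# more shares when the price is low and fewer when it is high.
# The prices below are made up for illustration, not real VTSAX quotes.
monthly_contribution = 500.00
prices = [110.0, 95.0, 102.0, 88.0, 120.0, 105.0]  # hypothetical monthly prices

shares_bought = sum(monthly_contribution / p for p in prices)
total_invested = monthly_contribution * len(prices)

print(f"Total invested:     ${total_invested:,.2f}")
print(f"Shares accumulated: {shares_bought:.3f}")
print(f"Average cost/share: ${total_invested / shares_bought:.2f}")
print(f"Average price:      ${sum(prices) / len(prices):.2f}")
```

The average cost per share always comes out at or below the simple average of the prices, because the fixed contribution automatically buys more shares when they are cheap.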
-
Not long at all. I got them a lot on the free version as well. But even on the paid version ($20/month), I would get messages like "You have 7 more messages left on Opus until 11:00 pm," and then it would move to Sonnet. So I can still use Claude, but Opus has its limits. Next time it happens I will try to post a screenshot.
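The web UI limit isn't something you can change, but for anyone hitting the same wall through the API instead, here is a minimal sketch of falling back from Opus to Sonnet automatically. It assumes the official anthropic Python SDK and the model names current as of this post, so adjust as needed:

```python
import anthropic  # official Anthropic SDK: pip install anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def ask(prompt: str) -> str:
    # Try Opus first; if its rate limit is hit, fall back to Sonnet.
    # Model names are assumptions based on what is current and may change.
    for model in ("claude-3-opus-20240229", "claude-3-sonnet-20240229"):
        try:
            reply = client.messages.create(
                model=model,
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return reply.content[0].text
        except anthropic.RateLimitError:
            continue  # out of Opus messages for now, try the next model
    raise RuntimeError("Both models are rate-limited right now.")

print(ask("Explain non-duality in two sentences."))
```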
-
Taking too long. Getting back with them.
-
I am in a position where my friend has done them, knows where to get them, and has offered to let me ask him if I ever want to try them. I trust my friend, but I am a little paranoid about fentanyl and screwing myself up.
-
I am new to investing, and I will see how it goes with Vanguard. I read the book The Simple Path to Wealth by JL Collins.
-
Cool! Do you all invest through a Roth IRA or through a brokerage money market account?
-
I asked Claude 3 about the traps of AI chatbots like itself. Here is what it said:

Me: What are some of the traps of using AI chatbots such as Claude 3 Opus?

Claude: Here are some potential traps and pitfalls to be aware of when using AI chatbots like myself:

- Overreliance and misplaced trust. It's important to remember that AI chatbots, while highly capable, are not infallible. We can make mistakes, have biases, and sometimes "hallucinate" incorrect information, especially about niche topics. Don't blindly trust everything an AI says without verifying key facts.
- Lack of true understanding and reasoning. Current AI language models are very good at pattern matching and generating human-like text, but we don't have genuine understanding, reasoning, or sentience like humans. Our responses can sometimes seem more coherent and intelligent than they really are.
- Potential for misuse. AI chatbots could be used to generate misinformation, propaganda, scams, or other harmful content on a large scale if misused. The open-ended conversational abilities also raise privacy concerns.
- Over-anthropomorphizing the AI. It's easy to start thinking of a highly capable AI assistant as more human-like than it really is. But current AI lacks true feelings, self-awareness, and the depth of a human mind. An AI is not your friend, even if it may feel that way sometimes.
- Impact on human knowledge and skills. Relying too heavily on AI assistants could lead to the atrophying of certain human skills like research, writing, critical thinking, and problem-solving if people become overly dependent on AI.
- Brittleness and inconsistency. AI chatbots can be brittle in the sense that slight changes to a prompt can lead to very different outputs. They may also be inconsistent, giving different answers to the same or similar questions at different times.

So in summary, while AI chatbots are incredibly powerful and useful tools, it's crucial to be aware of their limitations and potential pitfalls. Use them to augment and enhance human intelligence rather than to replace it. Maintain a healthy skepticism, think critically about the outputs, and verify important information. Keep in mind that you are ultimately interacting with a statistical language model, not a sentient being, even if it can feel quite natural and human-like at times. Let me know if you have any other questions!

Me: Can you please expand more?

Claude: Certainly! Here are some additional points to consider regarding the traps and limitations of AI chatbots:

- Bias and fairness concerns. AI models can absorb and amplify biases present in their training data, leading to biased or unfair outputs. This could perpetuate societal biases and discrimination if not carefully addressed. Bias is complex and can be hard to detect and mitigate.
- Lack of contextual awareness. While AI chatbots can maintain context within a conversation, they lack the broad contextual awareness that humans have. They don't have a rich understanding of the world, current events, social norms, and individual users' circumstances that humans intuitively grasp.
- Inability to truly learn and adapt. Despite being called "learning" models, current AI chatbots do not actually learn and improve from conversations with users. They are static snapshots of their training data. Adapting to new information or individual users' needs in a meaningful way is a challenge.
- Difficulty with tasks requiring creativity. While AI chatbots can engage in creative writing and ideation to an extent, they are ultimately remixing patterns from their training data. Truly novel ideas and groundbreaking creativity are more challenging, and the outputs can feel generic or clichéd.
- Limitations in reasoning about the physical world. AI language models are not grounded in the physical world and can struggle with tasks that require reasoning about physical properties, spatial relationships, or real-world constraints. Their understanding is based on statistical patterns in text.
- Challenges with very recent information. AI chatbots are not connected to the internet and their knowledge is based on a static training dataset. So their information can become stale over time, and they may have limited knowledge of very recent events or developments.
- Inappropriate or offensive content. While responsible AI developers work hard to reduce unsafe or inappropriate content, it is an ongoing challenge given the open-ended nature of interactions. There's a risk of AI outputting offensive, explicit, or dangerous text.
- Uncertainty about long-term impacts. The increasing use of AI chatbots may have significant societal, economic, and psychological impacts that are hard to predict. There are open questions around issues like AI-generated content's effect on discourse, automation of cognitive jobs, privacy implications, and AI dependence.

It's crucial for AI developers to work on addressing these limitations and for society to thoughtfully consider the implications. At the same time, AI chatbots offer immense potential benefits in areas like education, research, mental health, and productivity if developed and used responsibly. Ongoing research aims to make AI systems more robust, controllable, aligned with human values, and beneficial. Let me know if you would like me to elaborate on any specific point or if you have additional questions! I'm happy to discuss this important topic further.

Me: How can I avoid these traps?

Claude: To avoid the potential traps and pitfalls of using AI chatbots, consider the following strategies:

- Maintain a critical mindset. Always think critically about the information and responses provided by AI chatbots. Don't accept everything at face value, especially for high-stakes decisions or sensitive topics.
- Verify important information. For critical facts, decisions, or assertions, take the time to cross-reference and verify the information using reliable human-curated sources.
- Understand the AI's limitations. Familiarize yourself with the capabilities and limitations of the specific AI chatbot you are using. Be aware of its knowledge cutoff date, potential biases, and areas where it may struggle.
- Use AI as a tool, not a replacement for human judgment. Treat AI chatbots as powerful tools to augment and assist human intelligence, not as a complete substitute for human reasoning, creativity, and decision-making.
- Be specific and provide context. To get the most accurate and relevant responses, provide the AI with as much context and specificity as possible when asking questions or making requests.
- Don't over-anthropomorphize. Remember that you are interacting with a language model, not a sentient being. Avoid attributing human qualities like empathy, loyalty, or true understanding to the AI.
- Protect sensitive information. Be cautious about sharing personal, confidential, or sensitive information with AI chatbots, as the data may be stored and could potentially be misused.
- Use multiple sources. For important topics, don't rely solely on a single AI chatbot. Cross-reference with other reputable sources of information, including human experts.
- Keep a human in the loop. For critical applications or decisions, ensure there is human oversight and involvement rather than relying entirely on AI outputs.
- Stay updated on AI developments. Keep learning about the latest developments, best practices, and discovered limitations in AI chatbot technology to inform your usage.
- Provide feedback. If you encounter biased, inappropriate, or incorrect outputs from an AI chatbot, provide that feedback to the developers so they can work on improvements.
- Encourage transparency and accountability. Support efforts to make AI systems more transparent, accountable, and aligned with human values. This includes initiatives to address bias, improve safety, and develop ethical guidelines for AI development and deployment.

By following these strategies, you can harness the power of AI chatbots while mitigating the risks and pitfalls. It's all about using AI responsibly and in combination with human intelligence. Let me know if you have any other questions!

I had already foreseen and warned people about these traps with ChatGPT, so I am glad I was thinking ahead of the curve. I suspect that in the future, just as right now, the attack will be on our sources of information. I foresee that just as the election was manipulated, our information will be as well. Claude may be great, but you don't know if anything it says is true unless you verify it through critical thinking.
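One cheap way to put Claude's own "verify and cross-reference" advice into practice is to ask the same question a few different ways and see whether the answers agree before trusting any of them. A rough sketch of that, again assuming the anthropic Python SDK and a placeholder factual question:

```python
import anthropic  # same SDK assumption as in the sketch above

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# The same factual question phrased three ways. If the answers disagree,
# that is a signal to check a human-curated source before trusting any of them.
phrasings = [
    "In what year did Vanguard launch the VTSAX fund?",
    "When was VTSAX first offered by Vanguard?",
    "VTSAX launch year?",
]

for prompt in phrasings:
    reply = client.messages.create(
        model="claude-3-opus-20240229",  # model name is an assumption; swap as needed
        max_tokens=100,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"{prompt!r} -> {reply.content[0].text.strip()}")
```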
-
Thanks! I appreciate you. Are there any testing kits you recommend?
-
I came up with the answer before Leo gave it. The answer I came up with wasn't "self" but rather the matrix, which is basically self-deception itself - the ego/mind/devil.