Aramara

Member
  • Content count

    6
  • Joined

  • Last visited

About Aramara

  • Rank
    Newbie

Personal Information

  • Location
    Mesopotamia
  • Gender
    Male
  1. Remember you are talking with GPT4o-mini which is arguably the dumbest AI model on the market right now. That model is trained for simple tasks such as customer service chatbots, classification and sentiment analysis. But it is not designed to do any deep complex reasoning which needed when talking about a conflict like this. You need GPT4o (larger brother) or even better the new o1-preview model which is designed to reason and think for an x amount of seconds before accumulating all its thoughts to generate a response. Also Claude Sonnet 3.5 is an amazing model that can do very complex reasoning and provides 8 free messages every couple hours. Just saying
  2. I see you only picked up smoking since 3 months. I remember my first couple months where I really enjoyed smoking because of its novelty. But the longer you keep at it the harder it will be to quit. I started smoking 7 years ago and quit like a month ago after multiple unsuccesful iterations of quitting. My greatest advice to you is.. quit now ! The money you burn, the health you jeopardise, the teeth you ruin, the time you waste and just the overall personal hygiene going down. Really it's not worth it for just a single minute of temporary high. Best to do it just cold turkey and sit with the unease for 3 days. After that your body will very quickly adjust. The best part is food tastes amazing again because tar blocks your taste-receptors. Good luck
  3. 1. How is it overrated? To some extent it is already our reality no? With companies like Meta and TikTok that use machine learning to train models that accurutely predict your next mood shift based on your entire user profile. These companies know exactly what tickles you and when to show a specific ad for the best possible conversion. I am only envisioning a continuation of this techno-feudalism where the AI gets a little bit more invasive and the boundaries of data-privacy dissolve a tiny bit more. Doesn't automatically imply that money as a concept is gone for good. Even in a world where money is diluted. Humans will always find a way to barter together maybe in the form of digital cryptocurrency tokens. 2. AI may not have emotions but it reflects and amplifies the biases, fears, and repressed elements of the human data it’s trained on. Also you can't just "code" in coping mechanisms. You can only detect the vulnerabilities after a model is done training during a process called "red-teaming". This is when a team of researchers are inspecting the AI to observe for strange behaviours and phenomenas which is called "model collapse". It's when an AI starts off fine generating an answer and then mid-sentence it collapses and generates some eerie and strange outputs which some AI Researchers intepret as artifacts of emotions. They then try to reinforce and beat that behaviour out of it but it's not an exact science like coding and math is, also it costs these AI labs billions with no measurablity to it. Till this day we've had a real hard time in aligning and steering these models. Just look at the epic fail of Google trying to align their model with woke diversity principles and then it refused to generate a white person.
  4. A machine ultimately has way more computational resources, higher throughput and bandwidth to reason across all domains. Humans are constrained in time, energy and genetics to scale their learning and thinking like an AI does. Therefore we hold AI's capabilities and accuracy rate to much higher standards than a human. Pre-GPT we always defined AGI as a virtuoso AI that is a master at any intellectual task. This is how DeepMind and most of the estabilished AI labs approached the ultimate AGI objective. Now with OpenAI stealing the show and Sam Altman claiming that AGI should basically be "a median human". We've degraded the term AGI to the 50th percentile of humans while for decades it represented a virtuoso and multidisciplinary master. All these new AI labs that use this non-standard definition of AGI is all done to attract in more investor money which keeps the hype circus going on. If you can redefine the bar and lower it then yea sure we can reach AGI in 1 or 2 years. But when it comes to an AGI that can do any intellectual task a human can we are still a couple major breakthroughs away.
  5. There's two looming risks that are at play with AI The first one is rich tech-elites that will integrate AI so deeply into everything in that you can't go back without using it (pc's, internet, smartphones, etc). Then they will mine every little thought of yours and be able to monetise it. Something in the realm of Brave New World and 1984 The second one is AI could become a mirror of the collective unconscious of humanity, including all its repressed elements manifesting a digital version of a Jungian shadow. It's like the whole of humanity is joined as one entity and then imagined as a Digital Jungian shadow. The things which we as humans have suppressed in our collective psyche for centuries will now build up inside this black box. I call it a black box because AI / Machine Learning researchers know how neural networks work and how they learn. But till this day we don't know "why" it works and how it compresses the world into a large blob that reflects a world model.
  6. I don't think you need to teach someone how to use AI. The beautiful thing about AI is that it adapts to your question context and is completely personalised. Best way to learn is by just spending a couple minutes everyday having a small conversation about a deep complex theme you want to explore. Then you will get more of a "feeling" with it and develop a muscle on how to best use AI as a tool.