Dryas

Everything posted by Dryas

  1. There are other communities/sects worth exploring. Maybe you don’t get the highest insights, but you can still extract a lot of value and plenty of generally interesting and stimulating stuff. Knowing the highest insights might not even be what’s personally best for you anyway (?)
  2. Why can’t he just sit back on this one when he’s aware of safety risks? Frickin absurd to me.
  3. So if there were an alien/AI/whatever that had better hardware, it would have access to higher understanding that wouldn’t be possible for a human (even if they’re on psychedelics)?
  4. Wait, really? Why am I learning about this just now. So I only need to maintain eye contact for 0-2 seconds because any longer would be weird?
  5. Destiny answering the transrace question (0:27-12:53): TL;DW There's an underlying fact of the matter when it comes to age (the number of times the earth has gone around the sun), and there's an underlying fact of the matter when it comes to being trans, i.e. the sex of the brain is different from the sex of the body.
  6. Btw, you don’t really build psychopathic AI. You get psychopathic AI by default if we don’t solve the alignment problem. See: And we don’t know how to align AGI.
  7. You're trying to outsmart superintelligence here. It won't work. @aurum https://www.youtube.com/watch?v=hEUO6pjwFOo https://www.youtube.com/watch?v=ZeecOKBus3Q I think he could have explained these, but it probably would have taken too long.
  8. Yes! I'm glad you had a look at this. But honestly this is too much too fast for anyone as an introduction to AI safety. Get the basics of why first: Also: https://www.reddit.com/r/ControlProblem/wiki/faq/
  9. Far too confident. Consider the possibility that you wouldn't be as confident if you looked at particular arguments and not just trends. Also, to an earlier point: Human civilizations have collapsed in the past. It hasn't happened at a global scale but there wasn't a truly global world before. There certainly wasn't AI either.
  10. @supremeyingyang Maybe this is the correct attitude for you to have personally. But I think if everyone had this mindset we'd only be increasing the chances of actual doom. I think some amount of panic is good anyway. I don't think you'd want all of the radical climate activists to disappear. The thing with AI doom is that it's not discussed that much, and we might not get a second chance, i.e. getting AGI wrong on our first try will kill us all.
  11. I don't know, a lot of people wouldn't be able to make a firewall like that. You could argue there's wisdom in dying while trying your very best to survive. Also, I think it's unfair to characterize doom scenarios as "fear mongering attacks". They don't come from that place.
  12. Really scary (and cool) shit, bruh.
  13. Honestly, AI stealing our jobs might be the least of our worries. Curious if you've looked into the possibility of AI Doom instead.
  14. Post inspired by this podcast (highly recommend): I know this is just one perspective, but it's an extremely damning one, showcasing how dire our current situation might be. More stuff on the topic: https://www.lesswrong.com/tag/ai
  15. 24/36. I was completely stumped by some of these, though. 34/36 is an incredible score lol
  16. - Stoic (most of the time)
      - Calm
      - Focused on their craft, ambitious
      - Principled, ethical
  17. Substack is pretty popular these days
  18. Destiny often uses empirics in his debates though. Does that make him Te or Ti? I think “empirics” is not a good definition for Te (or it’s incomplete). Also, I don’t buy that Daniel is an INTJ. He seems too stupid to be one.
  19. People will do this type of shit to you. How you respond to it is what matters. Although honestly, this techlead guy sounds like he deserves the criticism he gets anyway.
  20. This is not a “critique”. It’s like the most cringe, spite-driven bullshit I’ve ever seen. Jesus fucking Christ.