aurum

Everything posted by aurum

  1. Don't overcomplicate it. If she matched with you, assume she's already interested in a date. You don't have to "land" anything. A full conversation is often not necessary on a dating app. Just a few messages can be enough. As soon as things seem like they're heading in the right direction, get to setting up the logistics of the date.
  2. Of course. But the entire purpose of genuine truth-seeking is to overcome such self-deception. If the concern is self-deception, then what people need is more truth-seeking, not less.
  3. Conspiracy theorists are atrocious with their epistemology. They lack construct-awareness and introspection. They speculate beyond what they can verify. They focus all their skepticism outward rather than on self. They rationalize from their survival. Their perspectives are unholistic. They are usually completely unaware of their own bias. Which is not to say all conspiracy theories are automatically wrong. It just means the sense-making process is atrocious.
  4. @theleelajoker That could be the case for some people. But it would be a mistake to reduce truth-seeking to just that. Truth-seeking by definition involves confronting reality and facing imaginary fears. And yes, that is possible. Also, consider that everyone needs truth to be able to navigate reality. To frame this as an inherently pathological need for control is a mistake. It would set a bar for being non-controlling that no one could reach.
  5. The argument has not been that it "can't" eventually evolve into AGI. The argument has been that tech CEOs are selling a fantasy about how soon it's going to happen and what the ramifications will be for society. They are selling this fantasy because of some combination of genuinely not understanding how intelligence works + financial incentive + Silicon Valley groupthink. I personally think we will eventually get AGI.
  6. DeepMind is somewhat different from strict LLMs, but it is essentially still just a prediction engine. Reinforcement learning is based on reward functions that let it estimate which move is most likely to be the best one.
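The "reward functions predicting the best move" idea above can be sketched as a toy example. This is not DeepMind's actual system; the reward estimates here are made-up numbers, and the point is only the mechanism: a learned value per move, converted into a probability distribution from which the "best" move is just the argmax.

```python
import math

# Toy sketch (not a real RL system): a learned value function assigns an
# estimated reward to each candidate move, and a softmax turns those
# estimates into a probability distribution over moves.
def move_probabilities(q_values):
    """Convert per-move reward estimates into move probabilities."""
    exps = [math.exp(q) for q in q_values]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical reward estimates for three candidate moves.
q = [1.0, 2.0, 0.5]
probs = move_probabilities(q)
best = probs.index(max(probs))  # the engine "predicts" move index 1 is best
```

The engine never "understands" the game; it only ranks moves by predicted reward, which is the sense in which it remains a prediction engine.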
  7. Yes, that would be a good start. Although labeling intelligence as a binary, i.e. true intelligence or not true intelligence, can be a bit misleading. Really what you have is a spectrum of intelligence. Saying “true intelligence” is mostly shorthand for a relatively high intelligence that is more conscious of its unity. Lower levels of intelligence still exist, but they are less conscious of their unity.
  8. No, not at all. The LLM is at its core just a prediction algorithm based on its training data. It cannot perceive, self-reflect, reason, generalize principles, generate deeply novel insight, update itself, or act informally. It does not even know it exists. This is why the jagged intelligence phenomenon happens. LLMs are just brute-force compute. They are not a true intelligence.
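The "prediction algorithm based on its training data" claim can be illustrated with a deliberately minimal sketch: a bigram model that predicts the next word purely from counts. Real LLMs are vastly more sophisticated, but the toy makes the structural point: output is a statistical function of the training corpus, with no perception or self-reflection anywhere in the loop.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: counts which word follows which in the
# training corpus, then predicts the most frequent successor.
def train_bigrams(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word, or None."""
    if word not in counts:
        return None  # no generalization beyond the training data
    return counts[word].most_common(1)[0][0]

corpus = ["the cat sat", "the cat ran", "the dog sat"]
model = train_bigrams(corpus)
# predict_next(model, "the") -> "cat"  ("cat" follows "the" most often)
# predict_next(model, "unicorn") -> None  (never seen in training)
```

A word outside the training data gets no prediction at all, which is a crude analogue of why such systems fail abruptly outside their training distribution.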
  9. But what would it even mean to sell "pure truth" anyway? Truth must include relative survival advice.
  10. To perceive and act in alignment with truth.
  11. Maybe. But then I'd say it shouldn't be considered intelligence.
  12. Intelligence should not be reduced to problem-solving ability. If we create a reductionistic definition for intelligence, of course our conclusions about a super-intelligent AI will be wrong.
  13. Brain paradigm only gets you so far. Consciousness cannot be understood at deep levels through the paradigm of having a brain.
  14. I have frameworks and overarching principles which I use during sense-making. I would not say I use any formal process. My sense-making tends to be highly informal.
  15. I don't think I have some simple set of heuristics. I mostly think in terms of tradeoffs, feedback loops and what will lead to greater societal development / holism. I prioritize depth of sense-making, not immediately actionable solutions.
  16. That's another thing I feel sometimes gets missed in this AGI discussion. People conflate AGI with human-like or even god-like intelligence. But you could make an AGI that is still relatively dumb compared to humans. That would probably be the starting point. So even if the tech bros create AGI in the next couple of years like they're betting on, they're also betting that this AGI will be of at least human-level intelligence. That's an additional bet.
  17. To some degree, but I'd say that's a largely incomplete theory if we were just to stop there. People in poverty often have some of the highest birth rates. Birth rates are some complex mix of biology + contraception availability + needing children for economic reasons + cultural narratives + gender equality + environmental constraints.
  18. I like Bashar's thinking here, but I'd push back in a couple of ways. 1) Yes, true intelligence operates on whole-systems thinking. This is correct. But it would be a mistake to assume whole-systems thinking = totally harmless. A true intelligence could still make decisions that cause tradeoffs within the system. 2) He seems to think that AI will be more intelligent than humans, and we will have to catch up to its level. I suspect it's the other way around: humanity will align with greater intelligence first, and only then may we create such an AI. Greater intelligence coming first feels backwards. 3) We are not close to building the kind of AI he is describing.
  19. Okay, but now that essentially creates a two-tier system, where the rich can afford to opt out and raise their children how they like, whereas poorer people who need the money will be subject to state regulation and bureaucracy. It would be analogous to public and private schools. Private schools have become a luxury good. Is it worth it? Maybe. The point is simply not to be foolish enough to think that socialized motherhood won't have significant tradeoffs, and to think carefully through what they might be, rather than just plowing ahead like a bull in a china shop. Those tradeoffs aren't novel; they are extensions of the same general tension between individualism and collectivism. In practice, every society will have to decide how much it wants to socialize motherhood. Absolutes tend to be way too politically controversial and impossible to implement, so we end up with some mix. The question is what the right mix is. My rule of thumb is subsidiarity: the state should step in when individualism is not enough, but the state should not come first.
  20. Careful though. Once you socialize motherhood, that opens up a whole can of worms. It's an increase in blurring the lines between private and public life. There will be serious tradeoffs, like additional regulations and politicization around parenting. Mothers need support. But how much role the state should play is not an easy question.
  21. For the purposes of debating whether a crash will happen, it should be considered AGI when it can replace humans and even do a better job than they can. This is what these companies are betting on, not just cool LLMs. You’re right that we have not seen what massive amounts of compute will do yet. This is my prediction based on how I understand intelligence. Scaling compute will fail. In the future, people may wise up and invest in other strategies. But right now, scaling is the dominant strategy. And it’s an increasingly failing one. This is not just my opinion either. This is the opinion of many serious AI researchers who understand the technical details better than I do. The crash could be serious for the economy because so much is being propped up by investments in AI right now. Whether or not it will be as big as 2008, I don’t know.
  22. I don't know if there will be a huge crash per se, but I know many of these big AI companies are overhyped right now. They will not create AGI any time soon. These CEOs are betting that scaling compute is enough, and it very clearly isn't. They need that to be true, because that theory is what fueled the success of these companies in the first place. We got GPT-3 and the other current LLMs because of scaling. If scaling doesn't work moving forward, they are cooked. What we have instead is a non-intelligent tool (the LLM) that appears useful in some limited contexts such as coding, customer service, and brute-force calculation. But this does not justify the insane amount of money coming into these companies. They are investing in infrastructure assuming trillions in revenue over the next couple of years from AGI. This is laughable. They are in way over their heads with their own investments. All this infrastructure may later turn out to be useful once it's already built, but that doesn't mean the market isn't going to crash on them before that happens. It very well could.