Everything posted by Carl-Richard

  1. It's just an argument for being careful, and it's especially important to be careful when the probability of deception is shown to be high (69%-88% in some cases) and when you have fewer ways of uncovering the deception than you do with humans (fluency markers, socioemotional markers, incentive markers, etc.). That is why you should use AIs like Perplexity and cut out the middleman. The human sources that Perplexity references could certainly be faulty, but it doesn't help if the AI additionally misrepresents those sources. That just adds more problems.
  2. I just thought about this suggestion (of using AI to summarize a text that is clumsily worded and hard to understand). Will the AI ever respond with "this text is incomprehensible and does not make any sense", or will it always give you an apparently coherent summary which doesn't necessarily reflect the text at all?
  3. Then the question is: are you trusting yourself, as a flawed human, to be able to identify the mistakes? Would it be wise to be generally careful with things that you know can deceive you without you knowing it? I sometimes use Perplexity, which is an AI that provides the sources it used to create an answer. If you actually check the answer against those sources, the general rule is that you'll find a lot of factual inaccuracies (a crude sketch of this kind of source check follows after this list). It's the best way to use an AI imo, but if you have to verify the sources all the time, it works basically just like a sophisticated Google Search. It's useful for that. Regardless, using AIs like Perplexity makes it even more clear how inaccurate AIs can be.
  4. Meditate.
  5. Video on art plz? Especially explain how modern art works 🤔
  6. There are a lot of videos critical of Leo. Is Leo legit?
  7. Just curious, but do you live in an area with high air pollution (PM 2.5)? I got myself a PM 2.5 measurement device and an air purifier when I lived close to a highway tunnel last year, and they helped a lot with various symptoms (eye, nose, throat and lung irritation, coughing, sneezing, runny nose, and shortness of breath).
  8. Well, there is "body language analysis", and then there is body language analysis. Two very different things.
  9. I know how to use an AI. I was one of the first ones to use it 😉
  10. Ironic that you asked an AI for that. Those are general estimates, and apparently they're not very good (see https://gleen.ai/blog/the-3-percent-hallucination-fallacy/). The AI did not give you that information.
  11. They're signs, not proofs. AI doesn't have those signs. Look up the hallucination rates.
  12. https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive I hope you're not using it to learn about law 😝
  13. At this stage, at least more than an AI. Look up the hallucination rates in AI language models. They're staggering. And again, you have more ways to uncover inaccuracies or untruthfulness in humans. Humans generally care about being truthful — AIs don't (they simply happen to be generally truthful if they're coded and trained well). And when humans aren't being truthful, you have many ways to uncover the untruthfulness. A person might stumble over their words, make awkward pauses, blush, avert their gaze, change their posture in a weird way, start fidgeting, become restless or uneasy, become blunt or defensive, change their vocal tone, become emotional or insecure, etc. An AI doesn't do that. I already mentioned markers like variations in fluency and verbal richness (untruthfulness often decreases these things); AI doesn't have such variations. Additionally, you can check which biases and incentives a person has (e.g. ideological affiliations, professional affiliations, economic incentives), and you can judge their character and past actions (e.g. positions of authority which require trust, general reputation, times caught lying). AI doesn't have that (except for past actions).
  14. God damn 😍 As if I couldn't get any more gay for that guy 😂
  15. Hehe, well it's not just a problem of bias, but of incoherence, irrelevance and delusion ("hallucinations"). It's like it's building sentences out of wooden blocks with words on them, and sometimes the words it chooses are from the wrong bucket, but the overall sentence looks relatively fine (a toy sketch of this follows after this list). So there is an additional deceptive element to the misinformation, which is scary. You can't use the back-up plan for determining coherence in humans on an AI, because it is always just as fluent and verbally rich as when it's providing accurate information.
  16. I might try that, actually. Even if it provides misinformation, it's better than nothing.
  17. By the way, a quick tip for anyone who wants to "increase" their intelligence, or more practically, improve their work: start high-intensity cardio (e.g. sprint training). If you want proof, just re-read the thread and see how much easier it is to read now (I revised it after my sprint training). (Of course, a confounding factor is that I slept really badly the day I wrote it and ate really bad food the day before; thank you, 17th of May, our national holiday.) I might as well drop this one in here as well:
  18. For once, I managed to decode one of @Reciprocality's posts. He is not giving a prescription that you should try to be as generalistic as possible and not engage your mind in detailed, concrete knowledge. He is basically re-phrasing what I said in his own words: intelligence is a generalistic thing, and the more generalistic you are, the more you're able to generalize. I agree. He is a tough nut to crack. I generally (using my generalistic abilities) avoid trying to understand his posts, but today I had the impulse, stamina and luck to try and succeed. That is the strength of "neuroticism", by the way. It sometimes throws you a curve ball that you manage to deliver right into the corner of the net. On that note, I'm about to talk to my potential advisor about a potential project on mindfulness and mind-wandering/"neuroticism" (and many other ideas). She is coincidentally a researcher on mindfulness and the leader of the institute where I got most of my education, so that's fun.
  19. @Yimpa I somehow predicted you would post that. Now play them some Meshuggah and watch them headbang and fist-pump with their trunk.
  20. Reminds me of this song: They're intentionally singing "badly", probably because that is the theme of the song ("the radio is broken"), or because of some other artistic impulse (regardless, the singers know how to sing better than that lol). For these kinds of songs, whether they're considered good or bad boils down more to the discrepancy between the intended performance and the actual performance, which is really no longer about the music but about the skill of the artist. If it's a live performance, people can compare it to the studio recording. If it's recorded in the studio, people can compare it to the general expectations of the genre. If there are no expectations for the genre (which you could argue for this particular case), it's compared to the musicians' and listeners' own personal expectations, which, not coincidentally, start to become very subjective.
  21. Currently, I believe people who use AI uncritically are decreasing not necessarily their intelligence but the quality of their knowledge, as AI often makes simple factual mistakes. Especially if you're using AI very often (and especially uncritically), it means you're routinely relying on an unreliable tool, which can certainly impact your intelligence. So AI is probably already making less smart people dumber, while smarter people are maybe getting a little smarter.
  22. But why? What would be an objective standard for rating music?
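
As a companion to post 3, here is a minimal sketch of what a crude, automated version of that source check could look like. The function name and the phrase-matching heuristic are my own assumptions, not anything Perplexity provides; a verbatim match proves little, and a miss just means you should read the source yourself.

```python
import requests

def claim_appears_in_source(claim_snippet: str, source_url: str) -> bool:
    """Crude spot-check: does a key phrase from the AI's answer occur
    verbatim in the page it cites? Verbatim matching misses paraphrases,
    so treat False as "go read the source", not as "the AI lied"."""
    try:
        page = requests.get(source_url, timeout=10)
        page.raise_for_status()
    except requests.RequestException:
        return False  # unreachable or broken source: treat as unverified
    return claim_snippet.lower() in page.text.lower()

# Hypothetical usage with an invented claim and URL:
# claim_appears_in_source("hallucination rates of 69% to 88%",
#                         "https://example.com/some-cited-article")
```

Even with tooling like this, the point of post 3 stands: the check ultimately bottoms out in you actually reading the cited pages.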
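Post 15's wooden-blocks analogy can also be made concrete. Below is a deliberately tiny next-word sampler (the vocabulary and weights are invented, and a real language model is vastly more sophisticated), but it illustrates the relevant property: the sampling loop only ever asks "which word plausibly comes next?", never "is the finished sentence true?".

```python
import random

# Toy bigram table: each word maps to plausible next words with weights.
# Invented data for illustration -- not how a real LLM is built.
BIGRAMS = {
    "the":       [("capital", 3), ("study", 2)],
    "capital":   [("of", 5)],
    "of":        [("france", 2), ("norway", 2), ("australia", 2)],
    "france":    [("is", 5)],
    "norway":    [("is", 5)],
    "australia": [("is", 5)],
    "is":        [("paris", 2), ("oslo", 2), ("sydney", 2)],
}

def generate(start: str, max_words: int = 6) -> str:
    """Sample a fluent-looking word chain; truth never enters into it."""
    words = [start]
    while len(words) < max_words and words[-1] in BIGRAMS:
        options, weights = zip(*BIGRAMS[words[-1]])
        words.append(random.choices(options, weights=weights)[0])
    return " ".join(words)

# Every output reads fluently; some runs yield "the capital of australia
# is sydney", which is false, and nothing in the loop can tell the difference.
for _ in range(3):
    print(generate("the"))
```

The "wrong bucket" in post 15 corresponds here to a plausible-but-false continuation: the model's fluency is constant whether the sampled sentence happens to be true or not.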