Everything posted by zurew
-
@gettoefl What's inherently wrong with watching porn if you engage with it in a healthy way and you aren't addicted to it?
-
I can have an ego death experience during a heavy meditation session without contemplating death. I could have an ego death experience by consuming heavy psychedelics like 5-MeO-DMT without contemplating death. That's true, but I think it's a little different in kind compared to having an ego death experience during a psychedelic trip or during meditation. But maybe not.
-
@JoeVolcano What would you say gives more insight into death: having an ego death experience, or seriously contemplating suicide?
-
@JoeVolcano I think you and Eckhart Tolle are two exceptional people in this regard, because you two were able to pull this off without actually physically killing yourselves. But I wouldn't expect normal people to pull off the same thing you guys did. An important fact: suicidal people are the ones who contemplate death and suicide most frequently and most seriously. Not trying to be argumentative, but if contemplating death really prevented suicidal people from physically killing themselves, then why is it that suicidal people are the most likely to commit suicide?
-
Most depressed people want to kill themselves because of their suffering, not because of choice. I think what you are talking about is being aware of death, actually having the choice of whether one wants to die, and being okay with the death of one's self. I wouldn't say it's an authentic choice to kill yourself when you are totally hopeless and depressed. Choosing to die when you have everything and are emotionally stable is a whole different level, because at that point you are the one who actually, authentically chooses to die, without your traumas and emotions dictating your choices.

It requires detachment, but how can you be detached, or practice detachment, when you are very tightly attached to your suffering and ego? I think contemplating death without being detached is useless. Paradoxically, you actually kill yourself if you are too attached to yourself, to your life, and to your suffering. That's what depressed and emotionally unstable people do: they are so attached to themselves and to their suffering that they kill themselves (the point being attachment, not choice). So this whole talk is not really about the contemplation of death; it's more about detachment from life and from yourself.
-
This is true, applicable, usable advice; however, it is not applicable to people who are totally depressed and emotionally unstable. It's a good way to build yourself up to dealing with the idea of death, but you have to start from already healthy ground (imo).
-
@Scholar I like your hyped-up vibe; that's the kind of energy we need to face these issues. The vibe you give off through the screen is actually motivating. Kind of unrelated to this thread, but this is exactly why I think that if aliens manage to visit us (assuming they haven't already), they will have to be really developed not just technologically but spiritually too, because after a certain level of technological development, a species won't be able to survive if it doesn't wise up.
-
You might be familiar with this series and with Daniel, but this whole series is about how to make sense of information: how to distinguish between signal (relevant information/insight/facts) and noise (narrative, assumptions, opinions). I understand that point and agree, but what I meant is that in order to survive, we will have to think ahead and wise up before we make big fuckups that are irreversible (or before we let things escalate to a point from which there is no going back), because these things are different in substance and in kind from past historical fuckups and challenges. We are at a time when we can't rely entirely on the "I will learn from my past fuckups and I won't think ahead" idea. But I do believe there are enough wise and intelligent people on this Earth that collectively we will manage to survive and solve these global issues.
-
@Scholar If we wise up in time, then this 21st century will be one of the most significant, profound, instructive, and beautiful phases in human history. Totally agree. If we can take our collective and individual sensemaking abilities to the next level, that will be the most relevant and important foundation in our evolution; after that, we should expect our evolution to speed up exponentially.
-
Yes, this is a very important and relevant question that we collectively need to think about. The time when AI will take over most of the job markets you mentioned above is not far away: maybe a decade, or maybe just a little further out than that. But we don't need to wait a decade to see the effects AI will create. The transition phase will be hard as well; we will probably see people migrating from certain job markets to others, and that will have its own effects on the global economy. The problem comes when we, the government, and companies don't think ahead, and this "job migration" happens in a chaotic or random way. @Nilsi is also super right about people occupying the job markets you mentioned above: they should start planning and thinking ahead, because their ideal LP will be overtaken by AI in the near future, so why would you put thousands of hours into a field you can't work in for much longer?
-
I think this is kind of good, but I wouldn't necessarily call it an honest approach, because laws restricting companies from making such moves are already kind of baked in.

Related to DALL-E 2, one real concern is the art and design job market: how it will revolutionise the market, how many people will lose their jobs, what kind of new jobs could be created, how we can take care of the artists who will lose their income, and what will happen to art and design schools and their teachers. That would be one way to think about this specific issue systemically, thinking ahead before the shit kicks in.

Related to GPT-3, one big concern is misinformation. 1) In the future, how can we differentiate between human- and AI-generated information, articles, and scientific papers? (One crude heuristic is sketched at the end of this post.) 2) How will social media sites be able to differentiate between AI-operated and human-operated profiles and accounts (and how can we help them prepare before GPT-3 goes public)? 3) How will GPT-3 make the writer's job a lot less valuable, how will GPT-4 probably destroy the writing job market entirely, and what alternative solutions can we provide for those people?

Related to deepfakes: how can we differentiate between faked and genuine images, videos, and audio files? When talking on the phone, how can we determine whether we are talking with an AI or with a real person?

Related to self-driving cars and trucks: what alternative jobs or solutions can we provide for those who will lose their jobs in the near future (truck, bus, train, and taxi drivers)?

Related to the entertainment market: how can we take care of comedians and musicians who will probably lose their jobs in the next decade, because AI will be able to generate super funny memes, messages, videos, and whatnot, and will be able to generate any kind of music at a much higher quality and efficiency than a human ever could?

In the future, when most jobs are occupied by AI, how can we wrestle with the meaning crisis, where most people lose their motivation, hope, and purpose to do anything, because there will be UBI and humans will be worthless (in terms of market value and labour)? So basically, what artificial pillar(s) can we create that provide the same or a higher level of meaning than jobs and religions combined?

I could go on and on, but the point is that we should think about how to create a system that incentivises us and companies to think ahead about these issues and to find solutions before the problems occur. I know some of these are further away than others, but some are so big and so complex that they demand a lot of brainpower and time, and they will inevitably emerge, so we'd better start somewhere. I also know that some of these problems can't be addressed by only one company or agent, because they are too big; some of them are collective and some are global issues. So the relevant question is how we can create a system that helps and incentivises these companies to think about ethical issues, and how we can create a trustworthy relationship with them, where companies working on AI can safely and willingly disclose what tech they are working on, so the government can build systems aimed directly at the problems that will emerge from those AI services and products.

One other relevant question: where can we tangibly see companies or any government taking these concerns seriously? Now this one holds some weight. Thanks for this article; it's good to see something like this.
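To make question 1) concrete, here is a minimal sketch of one detection heuristic, assuming the Hugging Face transformers library and the small, openly downloadable GPT-2 model (stand-ins I picked for illustration; nothing here is an established detector): score a passage's perplexity under a language model, on the theory that model-generated text tends to look statistically "unsurprising". In practice this heuristic is easy to fool, so treat it as a sketch of the idea, not a solution.

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Encode the passage and ask the model to predict each next token;
    # using the inputs as their own labels yields the mean cross-entropy.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

# Hypothetical cutoff, purely for illustration; a real detector would need
# calibration and would still be easy to fool (e.g. by paraphrasing).
THRESHOLD = 25.0

sample = "The quick brown fox jumps over the lazy dog."
ppl = perplexity(sample)
print(f"perplexity = {ppl:.1f} ->",
      "suspiciously predictable (maybe AI?)" if ppl < THRESHOLD
      else "looks human-written")
```

Even this toy version shows why the problem is hard: the score says only how predictable the text is to one particular model, not who actually wrote it.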
-
Every preference has its own advantages and disadvantages.
-
I think Chris Duffin could be considered yellow.
-
Where do you see those considerations being manifested in practice? Btw, some of them might be aware, but that's not the point. The point is that they are not incentivised to care about those ethical concerns. Do you think that message was intended as a solution, or more as pushback against your overly positive narrative? Okay, then it's all clear now. It seems there is a lot less we disagree on; let's focus on the remaining disagreements.
-
This theory is falsifiable. We could give some shrooms to a monkey and then monitor its brain. Of course, this wouldn't be a two-day experiment; it would probably take hundreds or even thousands of years to see whether the theory is actually true. But to see whether the theory has some validity to it, we wouldn't need to wait thousands of years: if the shrooms have even a slight effect on the monkey's brain, that should be detectable without waiting hundreds of years. We could also check whether the descendants of that particular monkey have slightly different brains than normal ones. Of course, that wouldn't automatically mean the theory is true, but it might strengthen its potential validity.
-
I believe this will be possible for us. We can learn about immortality from several species: the jellyfish Turritopsis dohrnii, Hydra, and Planaria.
-
Why not ask him to elaborate further before you cry about him not giving you a 20-page, extremely detailed answer?
-
A high-libido guy won't be able to control his instincts unless he has good sex very frequently. It's easy to control your urges if you have a low libido and don't even want sex 90% of the time, but it's a different story when you have the urge to have sex at least once a day.
-
Yeah, I agree. In the current system it's not a realistic option. That message was mostly targeted at Scholar. Yes, the question is how you can incentivise all members to work towards the same goals. The answer will be an overly complex Game B answer that we haven't totally figured out yet. So the goal is to work towards that Game B system and help the Game B guys in our own ways.
-
No, the vast majority of professionals don't give a single fuck about contemplating how to prevent a fuckup scenario. Yeah, maybe some laymen who don't know shit about AI and tech might be sceptical about it and come up with a lot of doomsday scenarios, but the professionals who could actually have a direct impact on it don't give a fuck, and even if some of them do, they can't do shit, because other professionals will push it mindlessly anyway. Also, there is a huge difference between coming up with doomsday scenarios and taking action to prevent those outcomes from happening.

That's not my job here; my job was to point out that we shouldn't just naively believe and assume everything will be okay on its own. I have some solutions in mind, but I had to react to your naive positivity on this subject. Being overly sceptical or overly naive about this subject are both bad and not useful. We can only solve the problems we can recognise. If we naively assume everything is okay, then there isn't anything to prepare for or to solve. Identifying problems as they are, and knowing what the potential problems are, is the process that opens the gates to solving them and preventing them from escalating. There are no easy answers here; it's a systemic problem. Some of these problems can't be solved without radical changes.

Not just that: focus on both the potential problems and the potential opportunities. This is not a subject where you can revert your fuckups and mistakes; that's exactly why we have to be really careful, calculated, and smart, and think about this issue in a systemic way. The vast majority of professionals are naively pushing this subject, and they are so overly positive about it that when the "potential danger" talk comes up, they don't give a fuck, or they change the subject, leave the conversation, or get heated about it. I wouldn't consider myself pessimistic about this subject; I'd consider myself realistic and careful. I can clearly see the potential opportunities and how much AI will be capable of, both in good ways and bad. It's already much more advanced than a human in some ways; it's good, it's fantastic, etc., but it's also dangerous if used mindlessly.

Nice, so one shouldn't point out potential problems if he or she can't solve them right away? The problem with people like you is that you give people a false sense of safety and hope. I had to push back with the negative side to balance your overly naive positive side; this way people can see both sides of the coin and have a more complete view of this subject. Right now what we need is to slow the fuck down and think. The last thing we need is to push this subject even further without thinking in depth about the dangers and the solutions.
-
@The Mystical Man @Scholar Being wise means being able to recognise a threat as it is. That's the first step. After that we can talk about the positive views and aspects and how good things can get, but first we have to drop the naive notion that everything will be okay on its own.
-
Do you watch it because you are too horny and you two don't have enough sex, or is there a different reason? For instance, if you want to have sex every day and your girl doesn't, of course you will want to find a way to satisfy your needs. Your sexual needs won't just go away, so you have to talk with her, because sexual satisfaction is a main pillar of a relationship. If your sexual needs aren't met, then even if you two are compatible in every other way, that one pillar is reason enough to quit the relationship, because you won't be able to maintain it down the road. Some girls and guys have a low libido and others a high one; you need to find a partner whose libido matches yours. If she has the same level of libido as you, then you two could just have more sex and satisfy your needs that way. If the sex isn't good enough, that's a different problem and a different conversation.
-
That sounds good, but in practice it's not that easy to just create an AI to counter an AI. It's always easier to destroy and deceive than to protect and to tell truth from fakery.

It's really good at faking stuff. It's so good at faking people's voices that some scams are already in place:

"Fraudsters Cloned Company Director's Voice In $35 Million Bank Heist, Police Find" https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=3478015a7559

"Criminals are using deepfakes to apply for remote IT jobs, FBI warns" https://techmonitor.ai/technology/cybersecurity/deepfakes-it-jobs-fbi

In a sophisticated scam, a scammer could create a fake video of you doing something or talking about some stuff (using your voice), and if the scammer has enough data about your speech (this forum, for instance, could be used for data gathering), the AI could predict pretty well what words you would use in what context, so it would be a really, really convincing scam. There is also an AI that can remove objects from a video; it makes it look like that object was never there, and it's pretty convincing. This kind of tech could be used to create even more disinformation and be even more misleading.

It's not that hard to access some of these sites and use their AI to generate pretty convincing stuff for you. Of course, it's not so easy that a layman could do it, but it doesn't take many people to make a big fuckup. One person with access to a large database and an advanced AI that can generate thousands of fake scientific papers, articles, and statistics in a few minutes is enough. That person wouldn't have to be an outsider; it could be done as an inside job, and that would be enough to generate so much misinfo that it seriously fucks up our sensemaking ability.

We agree that AI doesn't understand stuff. What we don't agree on is how dangerous AI currently is, how dangerous it can get in the future, and how hard it is to counter those problems (the big assumption here being that they are counterable at all). It's relatively cheap to create an AI drone that can shoot a person down based on face ID. It's accessible and affordable for normal people, because it's not that expensive, and you can get a face-ID chip relatively easily. I could mention the hacking part as well: there are AIs that can be used for hacking purposes. There are counter-AIs too, but again, it's easier to attack than to defend, and a normal layman doesn't have the defensive capability to protect him/herself from an AI's attack, be it physical or cyber. We could go deeper into how AI could be used for military purposes, because that is a huge problem as well.

So even assuming all those things will or could be countered and solved, we still haven't talked about the fuckup problem, where we push the current AI technology to be more and more advanced without thinking for one second about the ethical consequences, and without making sure we don't take a wrong step from which there is no going back.
-
Yeah, I'm impressed as well, and honestly I'm kind of scared by how good deepfakes currently are. I'm just sceptical about the NLP part, but other than that, we can create specialised AIs that do tasks a human never could, and they can sometimes do other impressive stuff as well. I think the biggest problem we will have to face will be deepfakes. Unfortunately, the better an AI can parrot our language structure, fake our voices, and create deepfake images and videos, the harder it will get to recognise which is fake and which is real. Btw, I appreciate this thread, because we will have to face reality, and yes, a lot of jobs will be pushed out of the market, because AI will take most people's place. I will say this: over the next two decades, 20-30% of current jobs will be completely taken over by AI, and that percentage might be higher; we will see.
-
No no, the current training methods and models have limits. If you learn in more depth how an AI is trained and how neural networks work, you realise they have their own limits; the question is how far the models can be pushed. That will be answered in the coming decades, but as I said, NLP is the hardest part here, and it won't be solved easily, that's for sure. It's naive to think that AI will just figure it out without any major changes in how it's trained. It might figure it out, or it might not; we will see. Passing the Turing test doesn't really mean shit. It's parroting human text and human behaviour, but parroting doesn't require an inner understanding of the thing being parroted. GPT-3 is still very bad at understanding text; in fact, it doesn't really understand anything. It just predicts what the next word should be, or what answer should be given to a particular question, based on what a human would answer. GPT-3 is way too overhyped.
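To show concretely what "it just predicts what the next word should be" means, here is a minimal sketch using GPT-2 (GPT-3's smaller, openly downloadable predecessor, assumed here only because GPT-3 itself isn't freely available): the model assigns a probability to every token in its vocabulary as the possible continuation, and generation is nothing more than repeatedly picking from that distribution and appending.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Passing the Turing test doesn't really mean"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, sequence_length, vocab_size)

# The last position holds the model's scores for the *next* token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# Print the five continuations the model considers most likely.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  p = {prob.item():.3f}")
```

Everything the model produces comes out of this score-pick-append loop; nowhere is there a step where it checks whether the continuation is true, only whether it is likely.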