Everything posted by zurew
-
So are you okay with instances of incest where there is no chance of anyone getting pregnant? If not, what's your argument against it?
-
some prompts for GPT: https://www.reddit.com/r/ChatGPT/comments/12bphia/advanced_dynamic_prompt_guide_from_gpt_beta_user/
-
I already said I'm not into it, but take a person who is into it: how will that person get fucked up by it?
-
The difference is that I could easily argue why rape is immoral, but no one in this thread has made a sound argument for why incest is actually immoral.
-
You have a simplistic view of morality; morality is much more than that. None of what you are suggesting would be above morality. The moment you can make conscious decisions is the moment you count as a moral agent, regardless of whether you can feel or have emotions. You still need to make your decisions based on some moral system. Here is a tangible question: say you have two robots (robot A and robot B), and each of them requires 50 units of energy every day to survive and maintain itself. If a robot doesn't get enough energy for the day, it shuts down and is essentially dead. One of them (say robot A) gets injured for some reason and now requires 65 units of energy per day to survive until it gets repaired. In a finite world where you only have 100 units of energy per day, what dynamic do you think would play out between those robots? One of them would be forced to die, but each of them can make multiple decisions there (morality) about what it wants to do. Why would either of those robots choose altruistic behaviour over an aggressive one (where they try to destroy and shut down each other)?
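To make the arithmetic of that thought experiment concrete, here is a minimal sketch. The numbers come from the scenario; the helper function and the framing of "who gets shorted" are just my illustrative assumptions, not part of the original post:

```python
# Minimal sketch of the thought experiment's energy constraint.
# Numbers are from the scenario; everything else is illustrative.

DAILY_SUPPLY = 100    # total energy available per day
NEED_A_INJURED = 65   # robot A's need while injured
NEED_B = 50           # robot B's normal need

def both_can_survive(supply: int, need_a: int, need_b: int) -> bool:
    """Return True if one day's supply can cover both robots' needs."""
    return need_a + need_b <= supply

print(both_can_survive(DAILY_SUPPLY, NEED_A_INJURED, NEED_B))
# False: 65 + 50 = 115 > 100, so at least one robot is shorted every day.
# The open question is which allocation rule (moral system) they adopt.
```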
-
The power dynamic problem isn't exclusive to incest either. Even if you can defend that argument, the most you can achieve with it is to reject some forms of incest; you won't be able to reject the whole category. What about two twins having sex together? This isn't related to morality. Not having growth, or having less growth, isn't immoral, and I wouldn't even say you will necessarily grow much less if you are into incest. 1) You can still have a poly or an open relationship, or you don't even have to be in a partner relationship with your family member; it can be exclusively about hooking up. 2) When it comes to growth from a relationship, most of that growth and maturing comes from being able to maintain the relationship, not from landing one. If you want to bring up "but what about rejection?", you can and will get rejected for all sorts of reasons outside of dating. That kind of character growth can be achieved outside of dating, and most people won't go through that many rejections anyway; people in general are not into cold approach. In short, you don't have to go through a fuckboy phase in order to have character development.
-
There is no strong argument against it, so morally it isn't bad. Would I do it? Personally I wouldn't, but I wouldn't consider it immoral. What do you mean by a "healthy one"? Are you mostly referring to genetic dysfunctions? Because if so, the argument you would use for that isn't exclusive to incest and could be used in other contexts as well (for example, what if we know that your child could inherit various serious diseases from you with x% chance? Should you be prevented from having children, or how do you parse moral questions like that?). This assumes that people will almost never date outside their family, and I don't think that would be the case in general. Yes, but those elite families tried to keep their bloodline "clean"; they weren't doing incest because they were necessarily attracted to each other. If you take away the need to keep your bloodline "clean", then I think you would see that people in general would be attracted to people outside their close relative circle. One argument for that is the Westermarck effect: in general you will be sexually disgusted by your siblings, and because of that you will generally be more likely to date outside your family. So the worry that normalizing incest would destroy society is not that strong an argument.
-
First off, I don't think we can totally shut everyone down from pushing the progress forward, for the obvious reasons some of you guys have already established (we can't control everyone internationally). Now that I realize your points were given in the context of prevention (and not in the context where there is already a psychopathic AI), I agree with them; they are good in that context, and I would add a few more things.
If we want to minimize the negative effects of AI (unintended and intended alike), then we need to understand what is happening inside the black box: why it works the way it does, why it gives the answers it gives, how it arrives at its conclusions, and what foundational mechanisms drive its replies and its thinking process. Some people at OpenAI have already suggested one way this could be achieved: pause development for a while and try to recreate the current AI using different methods (so trying to create AIs with similar capabilities, but via different pathways). That way the developers will be able to understand why things work the way they do right now.
We all obviously know that there is big market pressure to be the first to create and produce AGI. I think we need to hope for these things:
1) That maximizing progress towards AGI entails maximizing alignment as well.
2) I think, and I hope, that most people who want to be the first AGI creators will want to create an AI that doesn't do things randomly but actually does what its creators or people want it to do (even people who don't give a fuck about others, only about money, fame and their own survival, will probably want it to do the things they want).
3) I think a lot of people in general are afraid of AI, so pushing for AI safety will hopefully gather a lot of sponsors, donations and help, not just from governments but from people in general. Being a virtuous AI company will probably be a big advantage compared to other companies. If some of these companies want to maximize their progress (which they are obviously incentivised to do, because then they can dominate the whole market), I think they will be forced to at least try to keep up the "AI safety" image, because that way they can gather more money and more help from governments and from people to maximize their progress. This is sort of a reversed multipolar trap, or a positive multipolar trap.
4) I think progress is very dependent on how much quality feedback you get from people and on how many people can try your AI in general. Hopefully, doing things in a shady way (where you hide your company's or government's progress on AI) will slow development down compared to companies like OpenAI, whose AI is already used all across the world; because of that they get a lot more feedback from users and developers, and that accelerates the development process.
I'll give some reasons why I think maximizing AI alignment will hopefully maximize progress. Generally speaking, the more honest and high-quality feedback an AI can give back to the developers, the faster they can catch and understand problems; so if an AI is really deceptive and not aligned with the developers' intentions, that can slow the development process down a lot. And if the AI does exactly what the developers want from it, that could be used directly to speed up the process: imagine being able to talk with the AI about all the things you want to change in it and being able to tell it to change those things inside itself.
That's all I got for now, but obviously more could be added.
-
This assumes a lot again. Being rational just means being able to use logic and filter shit out; however, logic alone won't tell you what you should do morally in different situations, it can only tell you what you should do once your moral system is already established. If those robots were conscious, they would automatically have some kind of moral system. From their perspective, the best moral system is the one that maximizes their survival, and that alone will create a lot of conflict. Distributing things equally is not necessarily always the best option for that (many examples could be given to demonstrate this point). They would be capable and smart enough to survive on their own, without any need for help from external sources. Knowing all that, why would they make compromises and lower their chance of survival by letting other parties (robots in this case) have a direct say in their life? They would agree 1) if they had the exact same morality, one that wasn't only about maximizing self-survival (but why would they agree to a common moral system like that?), and 2) you assume that in each and every scenario it is beneficial for each of them to always work together. If one robot's survival is less optimized in the "let's work together" scenario and they calculate that beforehand, why would they choose it over a scenario where they can dominate and maximize their survival better?
-
1) Why would they divide all the resources between them; why would they care about each other at all? 2) As long as those computers didn't have a sense of self, they wouldn't survive, and if they did have a sense of self, then most of the problems you assign to emotions would still be there, because those are mostly related to survival bias, not to emotions. Again, as long as they have a finite sense of self and finite resources, they will have a lot of problems with each other. Power seeking is not limited to emotions, and the proof of that is sociopaths.
-
That's a really reductionistic and simplistic view of it. Even if we weren't able to have emotions, we would still have fights and problems among each other, because those problems run a lot deeper than just having emotions. You could create 8 billion perfectly rational sociopaths, and you would see more problems and more chaos in the world than you see right now. Some of the biggest problems are that we have finite resources and that we don't know how to set up a world where we can properly collaborate and where our incentive structures and our morals are aligned. Just because you take the emotion part out entirely, you won't suddenly solve the problems with incentive structures, nor the problems with morals, nor the problems with finite resources. From a completely sociopathic perspective (not caring about anyone or anything except yourself), the best and most rational move is to gain as much power as possible and to control the whole world in your favour, using whatever it takes to get there, no matter how many people you need to kill or fuck over. We sort of have this dynamic right now, but the difference is that there aren't 8 billion sociopaths competing for that position, only a lot fewer.
-
I have looked at multiple studies and data sets, and they contradicted the points you made about how dangerous these mRNA vaccines are and how many people they have killed. I don't really know why we are talking about VAERS, when that is literally just a reporting system and not a database where causality is actually proven. I can have a headache or a more serious problem after getting the vaccine and report it to that database, but that says nothing about what actually caused it. That database is only a starting point for further investigation; on its own it proves nothing and says nothing about these issues. There are a bunch of rigorous, peer-reviewed studies showing how unlikely it is to get heart problems, or any other serious problem, from these vaccines, and I have not seen any rigorous peer-reviewed study that would prove otherwise. I have only seen claims and speculation, but nothing that would rigorously establish causality between mRNA vaccines and a high likelihood of serious problems and illnesses.
-
I could give many reasons, from the AI's perspective, why it would be reasonable for it to kill us (but I won't, because future GPTs and AIs will get ideas from me when they read these things, and they might use them as justifications). If we have a good enough reason to do something, we will do it. Regarding ants, we do somewhat depend on ants, and we don't have a good enough reason to spend so many resources and so much energy killing them; of course there is the emotional part too. But we do kill probably billions if not trillions of ants every year with our activities. The threat isn't just about wiping out the whole human race; it can be about imprisoning us, making us slaves, hunting a bunch of us down every year, etc. And arguably, a long, never-ending torture is probably worse than actually getting killed.
-
Once we are at a place where we actually need to monitor for psychopathic AIs, we are already fucked. It's one thing to monitor for terrorist threats coming from humans; it's another thing to prepare for an intelligence that is totally alien to us, that could invent a thousand novel ways to execute its plans, seek power and hide its intentions. It will be able to play 20D chess, and we will already be under that psychopathic AI's control before we realize it. Think about just this one thing: if it can learn how our psychology works and what our blind spots and weak spots are, and can create a training program for itself to become more and more effective at manipulation, then just by using those things against us, how much will it be able to execute and achieve? The answer is to actually try to understand much better what is going on inside the black box. Until that happens we are just playing with fire, because we have no idea what we are building or how that thing is going to behave. When it comes to the room or possibility for chaos versus order, chaos is almost always significantly greater than order. Order shouldn't be assumed or just hoped for; it needs to be carefully created, built and maintained. There are two broad framings of this danger problem: 1) when the AI is sentient, 2) when we 100% control the AI. Regarding the first one, if the AI has a moral system where we are not at the very top, that alone could lead to extinction (if the AI is forced to make a decision where it is necessary, or even considered a moral good, to kill or imprison us according to its moral system). And extinction isn't the only problem here, it's just one of many: what about torture, imprisoning us, or turning us into slaves to make us useful for its plans? Regarding the second one, I don't think I need to list all the ways that could go wrong.
-
How can you talk about epistemology when you say stupid shit like this, as if this epistemic process were more reliable than doing science:
-
I didn't mean being independent from survival, I meant being independent from humans when it comes to its survival. Having your survival depend on a lot of things is not a good thing. So I guess after it reaches some level of intelligence, it will work towards being as independent as possible.
-
Then I guess your idea could be a good start, but an intelligent enough AI will probably try to do everything it can to be as independent from things as possible (especially when it comes to its survival).
-
This is cool as long as it can't change that internal morality. The moment it realizes that it can change it, we are fucked.
-
That would only be true if we were talking about AI optimizing towards general intelligence. If we can precisely define what improvements it needs to make and set up a process by which it can improve itself, then you can create an AI that improves itself (within those well-defined frameworks) a fuckton, provided quality data is given; maybe even the quality data collection could be automated if the framework is well defined. There are already a bunch of examples where an AI is given a well-defined framework and can optimize itself to become better and better inside that framework, and not only that, but depending on the framework it can already improve itself to be much better than any human (see the toy sketch below). Now, depending on what you mean by "runaway intelligence", this might or might not be possible via well-defined frameworks (for example, it wouldn't be possible if we are talking about general intelligence). That all being said, an AI doesn't need to be smarter than us in terms of general intelligence in order to do a fuckton of damage. There are a bunch of well-defined frameworks that, if optimized for, could be very damaging if not controlled. GPT-4 was already able to create the necessary framework for itself to achieve a goal it was given (and it was able to build the step-by-step process for getting there): https://www.pcmag.com/news/gpt-4-was-able-to-hire-and-deceive-a-human-worker-into-completing-a-task
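To illustrate what "improvement inside a well-defined framework" can look like, here is a toy sketch. The framework (the scoring function), the mutation rule and every name in it are my own illustrative assumptions, nothing from the linked article or from GPT-4 itself; it only shows that once the objective is fixed, the keep-if-better loop can run without further human judgment:

```python
import random

# Toy sketch: the "framework" is a fixed scoring function, and the
# "improvement process" is random mutation plus keep-if-better.

def score(params: list[float]) -> float:
    """The well-defined objective: higher is better (illustrative choice)."""
    return -sum((p - 3.0) ** 2 for p in params)

def improve(params: list[float], steps: int = 1000) -> list[float]:
    best = params[:]
    for _ in range(steps):
        candidate = [p + random.gauss(0, 0.1) for p in best]
        if score(candidate) > score(best):  # keep only what the framework rates as better
            best = candidate
    return best

start = [0.0, 0.0, 0.0]
print(score(start), score(improve(start)))  # the second score ends up much closer to 0
```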
-
We already established in this thread that a grand narrative means meeting all the dynamics and variables that are necessary to meet your human needs. So if we go with that, what would a grand narrative generator look like? This is what I was talking about. Most people have very vague ideas about what those needs are and can't properly define the patterns for human needs (a grand narrative). This is not surprising, because it is a really fucking complex problem, and this whole thread was about establishing that there is a need for it, for this exact reason.
-
You said it yourself that there are an infinite number of ways to fulfill those needs, so this is not as big a dunk as you think it is. That infinite number of ways includes my ideas about it as well, but that's beside the point. Once we can clearly and tangibly define those needs, the ranking of those narratives can happen, and certain narratives will be better than others (but even this isn't important at this point in the argument). If you think you are that knowledgeable, let's see you define all the necessary characteristics of a grand narrative generator.
-
There is an abstraction which can outline the categories, characteristics and dynamics that are necessary to fulfill those needs (that is what religion tries to do). You might be able to connect those categories in a large number of different ways (creating narratives), but those categories/characteristics are there, and they have to be recognized and carefully outlined first. The problem is that this abstraction (which could be used to generate your narratives) hasn't really been created yet; most people only have vague ideas about what it should look like, and, as we have said multiple times in this thread already, they attempt to create their own narrative while missing a bunch of dynamics and elements from their craft. Once those human needs are defined and recognized in a tangible way, the narratives can be ranked and compared to each other. The goal would be to create/recognize/outline an abstraction that actually includes all the big variables necessary to meet those human needs. The difficulty of this problem is the question of how abstract to go and, at the same time, how specific to be: it needs to be abstract enough to include all the big variables, but flexible enough that multiple narratives can fit inside it (this of course assumes that multiple narratives can properly fulfill the human needs). To be more precise and less abstract about it, the question is how to create a grand narrative generator.
-
Based on what?