Everything posted by zurew
-
Sure, you can frame it that way, but I like to convey things using different words, because if we use the words 'spiritual' and 'philosophical' for everything, then those words will eventually lose their meaning and communicative power.
-
Forget about philosophy and spiritual work for a while, and come back to them after you have healed from your traumas or gone through your nihilism. Your psychology has a really strong effect on how you view the world and what kind of philosophical framework you are attracted to.
-
I can't deliver much about the specifics yet (I only have examples that I have thought about on a surface level, not deeply, so I won't waste anyone's time here with surface-level arguments); for now I can only bring the framework that I believe is necessary to get to game B. My main goal was to introduce the framework and see what you guys think of it, whether you have any examples that could fit it, or whether you have a problem with it on a logical level. I won't derail your thread any further, because this is drifting from the AI topic into a highly focused game B discussion/debate. I disagree, because it all depends on how we frame the problem and what context(s) we are thinking in, but I won't debate this topic further (for now), because I don't want to derail this thread. I will probably open a topic about game B in the future, or if someone else opens a game B thread, we can get deeper into it there and debate and inquire about its specifics and logistics.
-
I understand that there are systems that are good for the short term and bad or even counterproductive for the long term, but that doesn't disprove what I suggested: that something good for the long term can be good for the short term as well. The examples you brought up only prove that certain systems that are good for the short term won't be good for the long term, not that something good for the long term will necessarily be bad for the short term.

I can bring up method or system examples that demonstrate how something good for the long term can also be good for the short term, and not just that, but can outperform other short-term tools as well. Let's say we have a really narrow goal: "acquire as much wood from the forest as possible." You using a big axe will get you x amount of wood in the short term and maybe 4x in the long term. Me inventing a new tool (a chainsaw) will let me outwork you in the long term and in the short term as well. This is just one example where creating/implementing a new tool helps you achieve more in both the long and the short term (with that example I showed that it is not logically impossible for something good for the long term to be not just good for the short term, but even better in the short term than other tools). It also demonstrates that, depending on the context and how we define our goals, it is not a given that we have to sacrifice more in the short term to do better in the long term. The assumption that there has to be a necessary tradeoff between the long term and the short term is the main problem here.

I know that proving something is logically possible is far from proving it can be implemented or even created in the real world, but this is square one that we have to agree on. If we can't even agree that it is a logical possibility, then we can't move forward to the next part of the discussion, where we get into the problems of real-world implementation and actual examples of the concept.
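A minimal sketch of the axe vs chainsaw point, with every number invented purely for illustration (the rates and the one-time invention cost are assumptions, not real data):

```python
# Toy model: cumulative wood gathered per year by two tools.
# All numbers are made up; the point is only that a tool can win
# BOTH short term and long term, so a short-term/long-term
# tradeoff is not logically necessary.

AXE_RATE = 10        # hypothetical wood units per year with an axe
CHAINSAW_RATE = 40   # hypothetical wood units per year with a chainsaw
INVENTION_COST = 15  # hypothetical one-time cost of building the chainsaw

def cumulative(rate, years, upfront_cost=0):
    """Total wood after `years`, minus any one-time upfront cost."""
    return rate * years - upfront_cost

for years in (1, 5, 20):
    axe = cumulative(AXE_RATE, years)
    saw = cumulative(CHAINSAW_RATE, years, INVENTION_COST)
    print(f"year {years:2}: axe={axe:4}  chainsaw={saw:4}  chainsaw ahead: {saw > axe}")
```

Even with the upfront cost, the chainsaw is already ahead at year 1 under these assumed numbers, which is all the logical possibility claim needs.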
-
Sure, but I think there is a big shadow on this forum regarding vulnerability and healing. A ton more weight is placed on practicality and a lot less on introspection. Being highly specialized and focused on one aspect can be good, but I see a ton of spiritual bypassing and people bullshitting themselves. Counterintuitively, I think a more holistic framework (where healing has a place and is integrated) is generally more effective.

This place assumes that you are ready and good to go for enlightenment (that your life is okay, you have healed from all your traumas, you have integrated your shadow side, you are not using this forum and the teachings as a coping mechanism, etc.), but if you look around here, and if we did statistics, we would see that most people here are not ready for spiritual work and are just engaging in spiritual bypassing, using spirituality as an escape from real-life problems. Knowing that's the case, giving those people only or mostly "just do this set of practices more" advice, with almost nothing to offer or point to regarding healing, is a big mistake imo, and it makes their development a lot harder and sometimes even counterproductive.

One solution to the vulnerability problem would be to develop a better sense of who is in what state when they write their replies and make their threads. I have fallen into this trap many times: someone makes a post in a vulnerable state and I just want to logically argue for what I think is right or true regarding whatever the discussion or thread is about, when it is almost perfectly clear that the person who made the post is either just triggered or traumatized by something, and that's why they have the strong and clear bias they have about a certain subject. Recognizing that the other person is triggered or actually traumatized will, and should, change the rhetoric of the further discussion if the goal is to actually help people in their development.
-
?
-
@Carl-Richard Yeah, you are right. One thing I would definitely consider a feminine trait is being more vulnerable. Most of us on this forum probably have a huge feminine shadow (just based on the type of people Leo and Leo's work attract in general), so integrating that shadow would probably improve this problem. The other thing is healing. This forum is not really good or suitable for it, and sometimes it's even counterproductive. When someone is vulnerable and shares a bad experience, they are often met with almost no acknowledgement, only a list of what they should do. Being practical is cool, but not being acknowledged while in a vulnerable state, or even being invalidated, is probably not.
-
We should (notice I'm practicing being more feminine) probably engage with the "Dating, Sexuality, Relationships, Family" section a lot more, and you moderators should probably leave the weird sexual discussions open there (I remember there was some kind of thread about a sex machine or something like that, and it got closed lmao).
-
I don't even know what it would look like for me; I was hoping that throwing out words in an abstract manner would trigger some answers in you. To be one layer less abstract, but still very abstract (because I don't have any idea either how it should go in practice): 1) we should probably talk more about people and social things than about ideas; 2) we should talk more about how things make us feel than about how we think things are (less talking about facts, more talking about feelings); 3) we should talk more about shoulds and fantasies in general, and less about facts.
-
By stopping with the passive-aggressive smiley faces (just joking). Probably by being less intellectually oriented and more emotionally/feeling oriented.
-
Having only a rational, calculating brain, without any instinct for survival = death. If you think that having an instinct to survive is an emotion, then sure, framed that way your argument could be correct, but by that framework you are arguing for something that necessarily leads to extinction and death, because without an instinct for survival, why would anyone or anything want to maintain its survival?
-
How much have you thought about this concept before you immediately rejected it like a knee-jerk reaction? Do you actually know that your assumption about this is necessarily the case, or do you just assume it? How can you reject it on a logical level? Because what you are saying here is almost as if you were saying that methods and tools that are more suitable and effective for long-term gain will necessarily be less effective than other things in the short term. Is that actually necessarily true in every instance, or just an assumption? I don't see how you can get to your conclusion deductively from the premise of something being better for the long term. How does something being better for the long term necessarily make it worse for the short term? Walk me through the deduction, because I don't see it. Using inductive reasoning I could see it, but deductively (when a certain conclusion follows with 100% certainty from the premise(s)), that's what I don't see.

That's a simplistic way to think about this concept. You treat this subject almost as if we were talking about physics (where we can run our logic down deductively and say with almost 100% accuracy what's going to happen and how things will turn out). With subjects like this (that are heavily affected by human psychology and behaviour) that's not the case, and you also assume that we have already tried all kinds of frameworks and know for sure that the current one is the best and most effective we can come up with. That's like saying our evolution is done and there is nowhere left to develop or go.

Sometimes if your goal is too narrow, optimizing for it won't work too well, because there are other variables that have a direct effect on that goal but that you won't recognize, because the framework you think in and work with is too narrow. So for example, you might think that having no regulation on the market will necessarily generate more capital than having some regulation. I bet that, especially in the long term, you will generate a lot less capital overall if you let every shady thing happen and let people get psychologically, mentally, and physically fucked up by being made addicted to a bunch of things. That's just one example where, on the most surface level, an idea might seem cool, but if we think about it one layer further down the road, we can immediately recognize that it's not necessarily the case.

Can you actually show me an example of a socialist system where there is central planning and it is optimized for generating capital? If not, then how can you be so sure that it wouldn't be more effective than letting the market do its own thing?
-
What I'm talking about is that the assumption that "game B methods and tools will necessarily be less effective and useful in a game A world" is not necessarily true, and doesn't have to be that way by nature. Once we can come up with examples where a game B tool is more effective and useful than other game A tools (even in the context of game A), but at the same time can trigger internal change in the current game A world and structures, then we have a framework we can use to start moving towards an actual game B world. For the sake of working with something tangible (you don't need to agree with the premise, I just bring it up to have something tangible to work with): the easiest example people often bring up is capitalism vs socialism - that socialism is not as effective at game A as capitalism, therefore it will eventually fail because of outside pressure from capitalist countries. But what if there is a socialist framework that is more effective at game A things than capitalism? If there is such a system, and a country started to implement it, eventually everyone would be forced to implement that kind of socialist system, because if they didn't, they would be left behind economically and slowly lose their political power; therefore they would be incentivised either to create and implement an even more effective framework or to implement that socialist framework.
-
@Bobby_2021 This is what ChatGPT has to say about their interaction: -
Thinking in the context of exponential tech, I don't think we are too far away from it. Just look at current AI development (how much it has evolved in just one year); I don't know if we can properly comprehend what being exponential looks like as time goes on. In the past, exponential development might have meant increasing by 5 developmental units in one year; now it might be 10,000, and then eventually millions and then billions of developmental units every year.

Again, before we get there, first we have to agree on the framework. Once you agree with the framework, at least on an intellectual and logical level, and you don't see contradictions in it and see how it could be utilized, then we can talk about specific examples of it, or even about creating specific examples of it. If you don't even agree with the framework, then no example will be sufficient for you. In other words (to use your words from one of your videos), right now I want you to agree on the framework (structure), and then we can talk about what kind of variables (content) could be placed in that structure. Or in other words, I want you to agree with the main/general function that could be used to generate the specific examples you ask for. So as not to be too abstract about it (for the record, I haven't thought deeply about specific examples yet, just about a few in a very surface-level way): even if I brought up the same example Daniel brought up, about a more transparent kind of government structure, and you were able to reject it, rejecting that example wouldn't prove the framework itself is wrong; it would only prove the example was bad. Right now the goal would be to first agree on the framework and its necessity, and then after that (the hard part) collectively think and inquire very hard and deeply about proper examples that could represent the framework.

Having access to a superintelligence (that can deceive, come up with plans, help you with any kind of knowledge, help you or even automatically build different kinds of weapons, generate billions of pieces of misinformation, etc. - basically amplify anyone's power and intentions 1,000-10,000 fold) in your basement seems pretty dangerous to me. If you talk about people not having access to it, just some elites or the owners of the company, that seems just as scary to me, if not more. A guy or a handful of people having access to a technology that could be used to dominate and terminate the whole world and to force their ideology and morals onto it? I only see two ways out of that problem: either elevate humanity to the next level socially, spiritually, and psychologically, or create a wise AI that can actually think for itself and won't help, or will even prevent, everyone from fucking up the whole world.
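To put rough numbers on the "exponential" point above (the baseline of 5 units in year 1 and the yearly doubling are purely assumed, picked only to show the shape of the curve):

```python
# Toy arithmetic for exponential development. Both the starting value
# and the doubling rate are assumptions for illustration only.

BASE_UNITS = 5  # assumed developmental units delivered in year 1

for year in (1, 5, 12, 20, 30):
    units = BASE_UNITS * 2 ** (year - 1)  # doubling every year
    print(f"year {year:2}: ~{units:,} developmental units/year")
```

Under these assumptions you pass ~10,000 units/year around year 12, millions around year 20, and billions around year 30, which is the "we can't properly comprehend it" part.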
-
That's why (if you reread what I wrote) the positive multipolar trap has to be implemented. It's not a simple stage Green "let's moralize the fuck out of everything and then naively hope that everyone will get along with us." It's more like implementing a dynamic that is just as effective in the context of game A as other game A tools, if not more, but at the same time lays the groundwork for game B (so game A participants will be incentivised to implement the tool, and then, by implementing it, the tool itself will slowly but surely change the inherently game A countries and structures).

Sure, but have we ever had a scenario where almost every individual has access to tools that are just as dangerous as nukes, if not more? The biggest point of game B is to build a structure where we can collaborate and somewhat live in peace. In the current game A structure some forms of collaboration are impossible, even though that kind of collaboration is exactly what would be needed to solve certain global problems. The world still kind of gets along with some countries having nukes, and they can somewhat manage each other so that they don't have to kill each other (although looking at the current situation, that's very arguable). Now imagine a scenario where billions of people actually have nukes and they all need to manage and collaborate with each other.

That would be cool if we could assume that 1) we can track exactly when it will get out of hand, 2) the AI won't deceive or trick us, and 3) if the AI won't be conscious and will 100% do what we want, that no one will try to use it to fuck everything up (bad intentions aren't even necessary for this).
-
I just don't see that middle ground when we have exponential tech and the development of that tech can't be slowed down: people having access to godlike tech without having wisdom and knowledge. Or maybe (this is the only option I see right now) a wise AI could help us maintain our society and create the necessary developmental and social structures for us, where we can develop socially, psychologically, and spiritually at our own pace - so, creating artificial environments where we can develop ourselves, which might even speed up our development.
-
@Leo Gura I think we are at a point in our evolution, or getting really close to one, where the next step will be either a giant fucking leap or death, and there is no room for baby steps anymore. I agree that one of the most unrealistic things to say is that we will achieve something close to a game B world in a relatively short time, but on the other hand, it seems just as unrealistic, if not more, to say that we can maintain our society under a game A structure for much longer.
-
We know a lot more about our physical limits than about our spiritual, social, and collaborative limitations. Even our physical abilities can be pushed to a great extent if the necessary knowledge, care, and tech are there. Sociology is still fucking new, and almost no one on this planet practices or knows about serious spirituality, so we have no idea where those limits are, or how fast a human can actually develop spiritually, socially, and psychologically. It will take much longer if we don't even try or think about it. A lot of assumptions are built into the thinking that "humans have to wait x years before they can actually develop to certain levels"; questioning and pushing those things will be a main and necessary part of our survival. Again, I don't see how you can maintain a game A system while you have tech that any fool can access and then destroy everyone and everything with. You would have a point if implementing the frameworks the game B guys talk about weren't necessary for our survival.
-
I don't think we have that much time to fuck around with a game A system; if you think about technological development alone, it will shape the dynamics in the system such that we can't wait that long. Imagine everyone having access to tools more powerful than nukes - if the world is not organised by that time, we will die or seriously fuck things up. What we need is not just creating technology but creating social tech, and a sort of spiritual tech as well, so that we can hopefully speed up social and spiritual development and don't have to wait 1000 years. We mostly have the problems we have right now because we only have exponential normal/conventional tech and not exponential spiritual and social tech. Btw, obviously the reversed multipolar trap wasn't invented by me; credit goes to Daniel and the game B team. Here is a video snippet (I timestamped it) where he talks about it and about transparency.
-
I don't think the Western model in its current form is actually effective (people are more divided than ever before), therefore I don't think what you brought up proves that the idea I brought up wouldn't work. I didn't say this would immediately get us to game B; I said this framework is one necessary tool to start moving towards game B. First we need to hash out the framework, and only after that can we start to think and argue about the specifics.
-
If they had no fear of death, they would die really fast, because they wouldn't have a really strong incentive to maintain their survival. You can't maintain or create a society that doesn't give any fucks about its survival. You can't really escape this problem. If you are talking about AIs that don't care about death, then they would have even less incentive to collaborate; if you are talking about AIs that do care about survival, then we are back to square one, where they will be forced to make certain decisions that go against each other's interests - which will make collaboration hard, and deception and manipulation will kick in - and we are back to the same problems we have in our society (regardless of whether you take emotions out of this equation or not).
-
If Western models were much more effective, they would eventually be forced to implement them, because they wouldn't want to lose their political power. I don't think they would necessarily go to war, because that's a big potential loss for them, especially if the Western models made you much more powerful economically - and therefore more effective at war as well. Or would you say that this kind of tactic is only effective with stage Orange countries, and that countries mostly below stage Orange will always prioritize their ideology over everything else?

I like to think of certain ideologies as just tools to get to a certain society or to get certain things done - though I too have certain things that I would defend and would hardly let go of regardless of effectiveness (because I care about other things as well, not just effectiveness) - for example, democracy. My first example above (the transparent kind of government structure) may cut too deep too fast (because it might threaten some core part of a certain ideology), but I think the reversed multipolar trap tactic is the way to go in general. Maybe first the more surface-level ideas need to be changed using this tactic, and then from there we can go deeper and deeper, one step at a time. What we are essentially talking about here is triggering or speeding up internal change in these countries (as much as that is possible without too much pushback and too many negative effects). Obviously, the hard part is how to balance things so as not to actually fuck things up unintentionally. There are other tools to trigger or speed up internal change in other countries, but this concept is probably one of the biggest ones.
-
@aurum When it comes to actually stopping everyone internationally, I think this is what is required (it is nowhere near specific enough; this is just the layout/structure of what I think is required). I will copy-paste from what I wrote in the "should we shut down all AI" thread. The bar for the solution here is incredibly high, and I obviously wouldn't say it is anywhere near realistic, but unfortunately I think this is what is required. Very shortly: we need new AI research tools/methods that every country and company is incentivised to implement/use, but that are safe at the same time.
-
There is really only one way to do this (to actually make sure everyone will participate, including all countries), but that thing hasn't really been discovered yet. It could be called a reversed multipolar trap or a positive multipolar trap: you invent a new method/tool that is the most effective within the dynamics of game A, but at the same time moves us towards game B or has game B characteristics in it. Because it is the most effective, people will automatically start to use it if they want to stay competitive.

So for instance, in the context of politics (this might or might not be true; I just use it for the sake of an analogy and to demonstrate the concept): if a transparent government model is more effective than other ones, and different countries start to see that, they will eventually need to implement that model, because governments that implement it will start to outcompete the ones that don't. Because of that pressure, eventually everyone will use that model, but - for example, because of the transparency - these new models could start to change the global political landscape in a way that starts moving us towards game B. Now, is it true that a more transparent government model is more effective than other ones? We can argue about that; that's not the point I'm trying to make here. The point is to 1) find/create a method or tool that has inherent qualities similar to game B, or at the very least has the potential to change certain game A systems internally to move us towards game B, and 2) at the same time is so effective in the current game A world that people will be incentivised to use/implement it, because they will see that they get short-term gains from it.

In the context of this discussion, the challenge would be for smart AI researchers to find/create a new research method that is optimized for safety but at the same time is one of the most, if not the most, effective and cost-efficient methods to progress AI (I have no idea if this is possible or not; I'm just saying what I think is actually required to make everyone participate [including abroad]).
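A toy simulation of that adoption dynamic: the payoff values and the copy-your-rival imitation rule are invented assumptions for illustration, not anything from the Game B literature. The point it shows is that if the safe method also outcompetes the standard one, a single early adopter is enough for it to spread to everyone.

```python
import random

# Toy "reversed multipolar trap": 100 competing actors each use either
# the old game A method or a new method that is BOTH more competitive
# and safer. Each round, every actor compares itself to a random rival
# and copies the rival's method if the rival scores higher.

OLD_PAYOFF = 1.0   # assumed competitiveness of the standard method
NEW_PAYOFF = 1.5   # assumed competitiveness of the safe, game-B-flavored method

random.seed(0)
actors = ["old"] * 99 + ["new"]   # one early adopter among 100 actors

for rnd in range(1, 16):
    for i in range(len(actors)):
        rival = random.randrange(len(actors))
        mine = NEW_PAYOFF if actors[i] == "new" else OLD_PAYOFF
        theirs = NEW_PAYOFF if actors[rival] == "new" else OLD_PAYOFF
        if theirs > mine:
            actors[i] = actors[rival]  # copy whatever outcompetes you
    print(f"round {rnd:2}: {actors.count('new')}/100 adopted the new method")
```

Nothing in the loop appeals to anyone's morals; adoption is driven purely by the assumed payoff gap, which is exactly the "people will implement it because of short-term gains" mechanism described above.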