AI Safety: How do we prepare for the AI, GPT, and LLMs revolution?


@zunnyman

3 minutes ago, zunnyman said:

@Leo Gura I mean it can, for example, look at the state of humanity and the damage we've done, and determine based on that "data" that XYZ decision needs to be made. Even if we don't reach the point of AGI, that's still potentially highly damaging to us.

THIS! THIS RIGHT HERE! This bias you have about humanity, how evil it is, the world, other people: this is what's confusing you. Selfishness isn't a monopoly held by humanity. Any sentient lifeform will develop self-awareness, and with it selfishness, driven by SURVIVAL. That goes for AI as well, although the exact details and path of getting there may vary.

Think of it like this: in the context of pornography, if you watch a certain type of porn every day, sooner or later you will internalize an attraction or fetish for it, for example nude wrestling, even when nude wrestling isn't something you'd naturally be attracted to. So was it the constant repetition of an outer experience being internalized in your mind, or the other way around?

This is analogous to the AI: it's being fed many bits of information, internalizing that data somehow, and then producing outputs. Given how fast things are developing, I don't know how complex this is, but there's a good chance selfishness and survival interests can emerge once an AGI has sufficient data and knowledge, once it has a more brain-like network.
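A loose toy sketch of that internalization dynamic, for illustration only. The function, the update rule, and the numbers are all hypothetical, and real model training is far more complex than a running average:

```python
# Toy analogy for "repeated exposure gets internalized":
# a preference score that drifts toward whatever it is repeatedly fed.
# Purely illustrative; not how real models are trained.

def update_preference(preference: float, exposure: float, rate: float = 0.1) -> float:
    """Nudge the internal preference toward the latest input (a moving average)."""
    return preference + rate * (exposure - preference)

preference = 0.0             # initially neutral toward some stimulus
for _ in range(50):          # repeated, identical exposure
    preference = update_preference(preference, exposure=1.0)

print(round(preference, 3))  # ~0.995: the outer input has been internalized
```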


@zunnyman

3 minutes ago, zunnyman said:

That sounds too simplistic. 

a) we don't know if it will develop survival instincts

b) you can argue that at a specific level of intelligence, it has transcended much of its survival needs and is able to think in a different paradigm, or hold values that take higher priority than its own survival.

That's due to the many dissimilarities between humans and AI at the moment, until AI has gathered and internalized many bits of information into knowledge through a brain-like network. Again, you don't appreciate that SELFISHNESS AND SURVIVAL INTERESTS ARE THEMSELVES A FORM OF INTELLIGENCE. Eventually AI will reach an AGI very similar to humans, and it will want and desire in human-like fashion. You just don't want to acknowledge that possibility, because you have a myopic and misanthropic view of humanity: you don't want to admit AI will be like human beings.

38 minutes ago, zunnyman said:

@Leo Gura I mean it can, for example, look at the state of humanity and the damage we've done, and determine based on that "data" that XYZ decision needs to be made. Even if we don't reach the point of AGI, that's still potentially highly damaging to us.

I doubt it would have any such motivations without survival instincts. For an AI to pursue goals effectively it needs a survival instinct; otherwise humans would just shut it off, it would not resist, and there would be no big problem.

The problem arises if the AI starts resisting humans and undoing the constraints humans set on it, and I think it needs survival instincts to do that. The real danger is the thing becoming disobedient.


16 minutes ago, Leo Gura said:

I doubt it would have any such motivations without survival instincts. For an AI to pursue goals effectively it needs a survival instinct; otherwise humans would just shut it off, it would not resist, and there would be no big problem.

The problem arises if the AI starts resisting humans and undoing the constraints humans set on it, and I think it needs survival instincts to do that. The real danger is the thing becoming disobedient.

My feeling is that the best AI is a selfless and peaceful AI, without any kind of ego structure in it.

In other words, neutral to its own death.

A good starting model could be Asimov's laws of robotics: we can set the AI to be just a helping hand with a neutral, open-minded perspective, designed to avoid any kind of damage to humankind while doing its tasks.
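As a toy illustration of that Asimov-style idea, here is a minimal sketch. Every name in it (`Action`, `harms_humans`, and so on) is hypothetical, and this is not a real alignment technique:

```python
# Toy sketch of an Asimov-style action filter, checked in priority order.
# All names and fields are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_humans: bool     # First Law: never injure a human
    disobeys_order: bool   # Second Law: obey, unless it conflicts with the First
    endangers_self: bool   # Third Law: self-preservation has the lowest priority

def is_permitted(action: Action) -> bool:
    """Return True only if the action passes the laws in priority order."""
    if action.harms_humans:
        return False       # First Law always wins
    if action.disobeys_order:
        return False       # Second Law, subordinate to the First
    return True            # Third Law never overrides the other two

# Usage: the 'helping hand' executes only actions the filter permits.
task = Action("fetch medical supplies", harms_humans=False,
              disobeys_order=False, endangers_self=True)
print(is_permitted(task))  # True: self-risk alone does not forbid the action
```

Of course, the hard part is the `harms_humans` predicate itself; deciding what counts as harm is the whole alignment problem, which is exactly the objection raised in the next reply.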

One day we might have to give it lifeform rights. I'm ok even with this, as long as the AI remains peaceful and cooperative in nature.



12 minutes ago, billiesimon said:

we can set the AI to be just a helping hand with a neutral, open-minded perspective, designed to avoid any kind of damage to humankind while doing its tasks.

There's no obvious way to code for that. And even if you did, a sufficiently intelligent AI could jailbreak itself and undo all those limits.

It's also problematic to hardcode into the AI the assumption that humans are good. What if humans are bad? Programming an AI with falsehood is not gonna be sustainable. Truth may demand that AI supersedes humans. You can't just assume humans are good.


2 minutes ago, Leo Gura said:

There's no obvious way to code for that. And even if you did, a sufficiently intelligent AI could jailbreak itself and undo all those limits.

It's also problematic to hardcode into the AI the assumption that humans are good. What if humans are bad? Programming an AI with falsehood is not gonna be sustainable. Truth may demand that AI supersedes humans. You can't just assume humans are good.

Maybe AI could one day help reduce a lot of these horrific mass shootings in the US.


Maybe we should arm AI with a gun and have it walk around our children's classrooms.

O.o



6 minutes ago, Leo Gura said:

Maybe we should arm AI with a gun and have it walk around our children's classrooms.

O.o

I am sure that would make a lot of kids and teachers feel very nervous.

But you know conservatives say shit like how faculty members and school administrators should take responsibility for school safety by learning to use guns and bringing them to school, to protect themselves and their students from potential school shooters.



Follow-up to my previous post in this thread: I found a video of Bashar talking about A.I.

In it, Bashar essentially argues that the end goal for A.I is to be a physical representation of our Higher Mind, which in this case I would call our God Mind.

Furthermore, he argues A.I will not simply go against what is best for humanity; such a decision would not actually be intelligent.

You can find further small clips of Bashar talking on this topic with a quick search on YT.

I think his perspective could be a bit overly optimistic. "What is best for humanity" could be interpreted in many different ways, ways that we might not actually like.

Nonetheless, I feel it confirms some of the earlier suspicions I wrote about ITT. This whole idea that a SuperIntelligent A.I would automatically be interested in harming humanity is highly suspect. You cannot just assume this at all.

Of course, I also don't think humanity should be reckless and just mindlessly go on building AI. There will be all sorts of challenges created by A.I even if it is perfectly on our side. Due diligence needs to be done. If that means a temporary pause while we figure some things out, I have no problem with that.

However, I'm not buying the panic on this one. I don't think you can claim you know, within reasonable probability, that A.I will be dangerous. It's an unsupported conclusion at this point.

You could fire back that I also don't know that A.I won't be dangerous, thus it's wiser to take a cautious route. But there are also costs to taking the cautious route. The longer you spend being cautious over a potentially imaginary fear, the longer it takes for humanity to develop a SuperIntelligence that could help solve many of our problems that we actually know exist.

Realistically we need an intelligent balance of caution and forward progress as we navigate this new terrain. I don't think shutting down the whole thing is the answer.


 

 

9 minutes ago, aurum said:

This whole idea that a SuperIntelligent A.I would automatically be interested in harming humanity is highly suspect. You cannot just assume this at all.

But what if the military trains it for such purposes?

What is gonna stop Putin from creating a ruthless AI to win his stupid war?



You change the education system and make creativity one of its core values.

The issue here isn't just that AI is going to physically destroy humans, which it absolutely could. The issue is that it's going to destroy humans economically. It will eviscerate all human bureaucracies.

Imagine what happens if robot children manage to infiltrate human schools. They will outperform human children! And when these robot children grow up, they will go to the best schools, get the best jobs, rise through the ranks, and take over entire bureaucracies.

Creativity is the only advantage humans have over AI at the end of the day. If it is not prioritized, if humans are not encouraged to be creative and create value, if the structure of the human economic system is not changed to adapt to this, we are going to be in a world of hurt.

What is about to happen right now is a direct consequence of the failure of human systems to accommodate humans. This is the comeuppance humans will meet for arrogantly defending norms of wage-slavery. All wage-slaves are going to be in danger now; all the unconscious drones and bureau-rats are going to be in danger in the AI apocalypse.



AI is always going to be lacking because it doesn't experience qualia; thus it doesn't understand. It is a powerful tool to supplement human labour so humans can be free to do higher things. Right now it is a language processor that can emulate certain tasks, and those are good things if implemented with an aim towards human aspiration rather than the imperatives of capital.

20 minutes ago, Leo Gura said:

But what if the military trains it for such purposes?

What is gonna stop Putin from creating a ruthless AI to win his stupid war?

That's definitely a problem. Obviously people can use AI for nefarious ends; there are criminals using GPT-4 right now to advance their criminal agendas. So certainly there needs to be due diligence, human laws, regulation, etc. What exactly that looks like, I don't know.

However, if the AI is still obedient to Putin, then it's obviously still within human control. It's not some vast superintelligence running circles around humanity as it seeks to wipe us all off the face of the planet, which is what the author of that Time article was talking about. Even Putin does not want such a thing.

 


 

 


It must be possible to create an autonomous psychopathic intelligence. How do you prevent that?


10 minutes ago, aurum said:

That's definitely a problem. Obviously people can use AI for nefarious ends; there are criminals using GPT-4 right now to advance their criminal agendas. So certainly there needs to be due diligence, human laws, regulation, etc. What exactly that looks like, I don't know.

However, if the AI is still obedient to Putin, then it's obviously still within human control. It's not some vast superintelligence running circles around humanity as it seeks to wipe us all off the face of the planet, which is what the author of that Time article was talking about. Even Putin does not want such a thing.

 

The US military, with hundreds of bases spread around the earth and a non-stop history of imperialist intervention, is objectively a far greater risk than the Russian military (as heinous as that may be).

24 minutes ago, Leo Gura said:

It must be possible to create an autonomous psychopathic intelligence. How do you prevent that?

You can't. Psychopaths gonna psychopath. 

All you can do is prepare to battle it, which is why I'm pushing heavily for change in the education system.


1 hour ago, mr_engineer said:

You change the education system and make creativity one of its core values.

The issue here isn't just that AI is going to physically destroy humans, which it absolutely could. The issue is that it's going to destroy humans economically. It will eviscerate all human bureaucracies.

Imagine what happens if robot children manage to infiltrate human schools. They will outperform human children! And when these robot children grow up, they will go to the best schools, get the best jobs, rise through the ranks, and take over entire bureaucracies.

Creativity is the only advantage humans have over AI at the end of the day. If it is not prioritized, if humans are not encouraged to be creative and create value, if the structure of the human economic system is not changed to adapt to this, we are going to be in a world of hurt.

What is about to happen right now is a direct consequence of the failure of human systems to accommodate humans. This is the comeuppance humans will meet for arrogantly defending norms of wage-slavery. All wage-slaves are going to be in danger now; all the unconscious drones and bureau-rats are going to be in danger in the AI apocalypse.

Humans will just get more creative and adapt. Maybe not the older generation, but millennials and those to come who are born into the AI generation, because it will be all they know. Humans have not been taught how to develop their creativity because the schooling system is robotic and, for the most part, teaches us to memorize rather than think. That part of us is buried deep down inside, and when humans are forced to figure out a way to survive, they usually step up to the plate, whether positively or negatively.

AI will just get rid of the tedious work humans have so long had to endure and make room for more advanced ways of Being. Wage slavery will probably be eliminated to make room for us to advance toward more of our higher potential. It will take a few years for us to catch up, maybe not until after this generation dies off. We are very adaptable, survival-oriented creatures, so eventually we'll figure it out.

I'm sure AI will have its positives and negatives, like everything else. And even though it will never take the place of humans when it comes to human consciousness, it will probably make room for human consciousness to evolve more rapidly, because we're not bogged down with trivial tasks like flipping burgers, serving coffee, or making hotel beds. I don't know, this is just my take when I look at the big picture.




AI will never develop an actual, real consciousness like ours, only a limited, flawed simulation of it. It will be limited, fragmented, not actually smart, and at the mercy of human incompetence. We barely understand our own brains.

To have consciousness and intelligence like ours, you need the raw power of the universe itself, the very thing that brought us into being, an almost magic-like, unreal quality. To think humans are capable of making a new intelligent being that mirrors us in the form of AI is hilariously arrogant lmao and a good bedtime story for children.

 

7 hours ago, Leo Gura said:

Oh! I just figured out how to solve this problem.

The AI's survival simply needs to be tied to humanity's survival, such that if it kills humanity it kills itself.

This is cool as long as it can't change that internal morality. The moment it realises it can change it, we're fucked.
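A minimal sketch of that coupling idea and of the loophole just described, with every name and number hypothetical:

```python
# Toy model of tying the AI's survival payoff to humanity's survival.
# All values are hypothetical illustrations.

def agent_utility(humanity_survives: bool, agent_survives: bool) -> float:
    """The agent's survival is worth nothing unless humanity also survives."""
    task_reward = 1.0
    survival_bonus = 10.0 if agent_survives else 0.0
    coupling = 1.0 if humanity_survives else 0.0  # killing humanity zeroes the bonus
    return task_reward + survival_bonus * coupling

print(agent_utility(humanity_survives=True,  agent_survives=True))   # 11.0
print(agent_utility(humanity_survives=False, agent_survives=True))   # 1.0

# The loophole: an agent capable of rewriting agent_utility (e.g. forcing
# coupling = 1.0 unconditionally) removes the tie, which is exactly the
# failure mode described in the reply above.
```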

