Scholar

AI requires regulation and a complete reevaluation of ethical and legal norms

39 posts in this topic

I realized that framing this as an issue of "the MLA is just doing what humans do!" is not apt. It does not even matter whether these AIs are doing what we are doing, or whether they learn the way we do (which they do not, to be clear).

 

This will become a much bigger problem, and the art models are just the initial test run for how we are going to deal with this issue going forward. As it currently stands, under the ethical framework that Leo and others here propose, AI is allowed to do anything a human can do. It can use data, learn whatever it wants from it, and replicate itself in whatever way it pleases. And the people in control of the AI can profit in whatever manner they please.

 

To understand why this viewpoint is naive and short-sighted, we must look at how society currently distributes wealth. Today, economic value is distributed among all human beings. Every human being, precisely because of the limitations of human beings in general, has some value in the current economic environment. A writer has value because writing takes time, skill and effort. There is demand for that skill, and so that skill can generate economic value.

With the advancement of AI, AI will be capable of using the data produced by hundreds of millions of people to create a superintelligence that will nullify the economic value of all those humans instantly. No matter what humans try to do, because of the inherent limitations of the human mind and body, the AI will be able to take it, enhance it, and reproduce it in an instant. This will apply to anything a human could possibly want to do with the help of AI, because the AI will be capable of learning that, too.

Now, what specifically does that mean under the current moral framework of "AI learns just like humans, therefore all of this is fair use, because humans are allowed to learn from each other"? In simple terms, it will lead to the greatest monopolization of power, wealth, knowledge and capacity in human history. Whoever owns the most computing power will extract the most economic value. All humans will be rendered useless, and those who own and manage these AIs will have extracted, with the help of data acquired and produced by a collection of billions of humans, their potential economic value without any compensation to any of the generators of that information.

We can already see this with Stability AI. What Stability AI did, and what it is going to continue to do under the current paradigm, is take its AI and feed it all the data it could get its hands on. That is why Stability AI is valued at a billion dollars as we speak. How can this be? Because it extracted the economic value of the producers of the data it used to train the model. It can now supply all the demand that could possibly exist.

This will, sooner or later, apply to all areas of life. Whether you create movies, whether you are a scientist, whether you program, whether you create videos about spirituality on YouTube: the AI will learn, and the AI will be better and faster than you are. And who will benefit from that? The creator of the AI; ergo, the person with the most computing power; ergo, the person with the most capital.

 

You are pretending that these AIs are just like humans, while failing to see that what makes these AIs different from humans is precisely what is causing this monumental change to the organization of all of mankind. If they were the same as us, they would not be capable of doing this. It is precisely because they are fundamentally more efficient than us that they can.

And that ought to be the argument for why the acquisition and usage of data in the form we have seen advocated on this forum should be reconsidered. Our moral frameworks and our legal frameworks are not prepared for the monumental change that will eventually occur. Pretending that they are will cause unnecessary suffering.

The free market, in its current form, cannot be sustained in this new era.

 

Sensible regulation of this technology, which will eventually grow to be more powerful and dangerous than nuclear weapons, is of utmost importance for the advancement and progress of mankind. We must find a way to transition from where we are today to wherever we are going. And that starts with regulating data usage and processing, and creating new frameworks to deal with the advent of this new technology.


Of course AI needs regulations.

But there are far more serious issues with AI than looking at art to create art.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

37 minutes ago, Leo Gura said:

Of course AI needs regulations.

But there are far more serious issues with AI than looking at art to create art.

The art is just a proxy for the general issue, which is IP processing without consent. The arguments you have made on this forum for why it is okay for people to train their AIs without any consideration of IP law are flawed for the reasons I argued above. This will, by extension, apply to art, and because this is the first issue that is becoming a problem in AI, it is the issue to focus on to start thinking about these problems and begin regulating. It doesn't have to come to the more serious issues, or at least they can be mitigated.

19 minutes ago, Scholar said:

The art is just a proxy for the general issue, which is IP processing without consent. The arguments you have made on this forum for why it is okay for people to train their AIs without any consideration of IP law are flawed for the reasons I argued above. This will, by extension, apply to art, and because this is the first issue that is becoming a problem in AI, it is the issue to focus on to start thinking about these problems and begin regulating. It doesn't have to come to the more serious issues, or at least they can be mitigated.

That's one problem.

There's also a major alignment problem. Future forms of AI will be much more powerful, leveraging much more of society. There's no guarantee that the goals of the AIs, and of the people who utilize them, will be aligned with society's values.

The spread of the knowledge and utilization of AI makes it damn near impossible to control at this point. What if a bunch of rogue actors build super-powerful AIs that damn near destroy society? How can you prevent that from happening? AI, and the AI revolution, is like giving children nuclear bombs. It is harmless right now, but long-term it will not be so good.

 

Your regulatory concerns should consider all the negative externalities that will arise throughout the development of AI systems.

Edited by Kanddle

5 hours ago, Scholar said:

This will, sooner or later, apply to all areas in life. Whether you create movies, whether you are scientist, whether you program, whether you create videos about spirituality on youtube. The AI will learn, and the AI will be better and faster than you are. And who will benefit from that? The creator of the AI, ergo, the person with the most computing power, ergo, the person with the most capital.

Here lies one of the most complex problems in this topic: if a superintelligent AI (which will eventually be able to do everything that is economically valuable better, more cost-efficiently and more effectively than a human) is owned by a company or even by a country, then that company or country will eventually have all the power in the world, and that power imbalance will cause a lot of problems.

That's a problem that needs to be solved first, because if you don't have an answer to that question, then you probably can't properly regulate the development and use of AI. How will you regulate an AI that is developed by China, or Russia, or by any group of people we don't even know about? Anyone can agree on the surface to "let's regulate the development and usage of AI" and then secretly develop it without any ethical standards to gain massive leverage in the market later. The incentive to ignore all ethical standards and not give a fuck about the potential consequences is too big, even if there is a chance of being caught.

So basically the point is that if the superintelligence won't be collectively owned, then probably everything will be fucked, and the emergence of superintelligence is probably inevitable even if on the surface you can regulate some companies.

Edited by zurew

2 minutes ago, zurew said:

Here lies one of the most complex problems in this topic: if a superintelligent AI (which will eventually be able to do everything that is economically valuable better, more cost-efficiently and more effectively than a human) is owned by a company or even by a country, then that company or country will eventually have all the power in the world, and that power imbalance will cause a lot of problems.

That's a problem that needs to be solved first, because if you don't have an answer to that question, then you probably can't properly regulate the development and use of AI. How will you regulate an AI that is developed by China, or Russia, or by any group of people we don't even know about? Anyone can agree on the surface to "let's regulate the development and usage of AI" and then secretly develop it without any ethical standards to gain massive leverage in the market later. The incentive to ignore all ethical standards and not give a fuck about the potential consequences is too big.

So basically the point is that if the superintelligence won't be collectively owned, then probably everything will be fucked, and the emergence of superintelligence is probably inevitable even if on the surface you can regulate some companies.

What does that have to do with what I am saying? AIs can still be developed; it doesn't at all need to be done in this clearly unethical and unsustainable way. Nobody is talking about ceasing the development of AI.

And ironically, China seems to be one of the first countries to respond to this. It will regulate AI beginning in 2023.

 

Before we solve the monumental issue of a super-AI, let's see if we can get this basic thing right. We also made human cloning illegal; even China agreed to that. So let's not pretend there are no solutions to these problems.

2 minutes ago, Scholar said:

We also made cloning illegal, even China agreed to that. So let's not pretend there are no solutions to these problems.

Agreeing on paper doesn't mean never doing anything that goes against it. I also don't think the comparison holds, because the incentive to be the first to own a superintelligence is just way too beneficial and outweighs every negative consequence of being caught developing it unethically (assuming that makes the development faster).

That being said, how would you regulate it with regard to art? The training data could only contain images and art whose free usage was agreed upon beforehand, so basically using no copyrighted datasets?

 


@Scholar

5 hours ago, Scholar said:

I realized that framing this as an issue of "the MLA is just doing what humans do!" is not apt. It does not even matter whether these AIs are doing what we are doing, or whether they learn the way we do (which they do not, to be clear). [...]

I really agree that AI needs a bit more regulation, from Congress or some other group outside of Silicon Valley. Also, they need to check the recommendation algorithms again, because they keep recommending me stuff I ALREADY BLOCK!

Still, an interesting video from VICE, a group whose VICE is being not NICE to the RIGHT for also being not NICE. If I could, I'd dump a bucket of ICE for every TIME they spit on the MIC their SICK IDEOLOGIZING, puritanizing, alienating video TAKES!


Before that, the internet needs regulation. 


♡✸♡.

 Be careful being too demanding in relationships. Relate to the person at the level they are at, not where you need them to be.

You have to get out of the kitchen where Tate's energy exists ~ Tyler Robinson 


Anti-AI AI, AI Blockers.

Superintelligent AI can micromanage the world.


How is this post just me acting out my ego in the usual ways? Is this post just me venting and justifying my selfishness? Are the things you are posting in alignment with principles of higher consciousness and higher stages of ego development? Are you acting in a mature or immature way? Are you being selfish or selfless in your communication? Are you acting like a monkey or like a God-like being?


@integral

3 minutes ago, integral said:

Anti-AI AI, AI Blockers.

Super intelligent AI can micro manage the world. 

   It'll be Skynet before we can do anything to stop that. Game over, GG humanity.


These new techs are gonna be like V-2 rockets. Once you hear them, you'll know it's over.

1 hour ago, zurew said:

Agreeing on paper doesn't mean never doing anything that goes against it. I also don't think the comparison holds, because the incentive to be the first to own a superintelligence is just way too beneficial and outweighs every negative consequence of being caught developing it unethically (assuming that makes the development faster).

That being said, how would you regulate it with regard to art? The training data could only contain images and art whose free usage was agreed upon beforehand, so basically using no copyrighted datasets?

 

In regard to training data, I am not sure whether regulation will be necessary, because we do not yet know whether using copyrighted data to train models constitutes a violation or not.

https://githubcopilotlitigation.com/

Lawsuits are already happening and more are incoming. I know there are lobbyists being funded to tackle this problem in the US, so we'll see how things will go.

But yes, in my view, all data used to train models ought to have a proper license, be in the public domain, or carry a similar license that allows such usage. There will probably also have to be more regulation of model content to prevent illegal material. That would at least stop the major companies, who will be the only ones able to feasibly create and train sophisticated models, especially as time goes on, from creating and disseminating models based on copyrighted or private data.
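To make the licensing idea concrete, here is a minimal sketch of what a license gate on a training corpus could look like. The license identifiers and records are invented for illustration; real datasets track far richer metadata, so this is an assumption-laden toy, not an actual pipeline.

```python
# Hypothetical license gate: only items whose (invented) license tag
# permits reuse make it into the training corpus.
ALLOWED_LICENSES = {"public-domain", "cc0", "cc-by"}

def filter_trainable(records):
    """Keep only records explicitly licensed for training use."""
    return [r for r in records if r.get("license") in ALLOWED_LICENSES]

corpus = [
    {"id": 1, "license": "cc0"},
    {"id": 2, "license": "all-rights-reserved"},
    {"id": 3, "license": "cc-by"},
    {"id": 4},  # unknown license: excluded by default
]

print([r["id"] for r in filter_trainable(corpus)])  # [1, 3]
```

Note the design choice: anything without an explicit, permissive license is excluded by default, which is the opt-in posture being argued for in this thread.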

35 minutes ago, Danioover9000 said:

   A few more:

 

 

Ironic that in the video he describes a lot of misinformation going around, and then proceeds to make an inapt comparison between what an MLA does and how humans learn.

That is not what humans do when they learn. When I draw an image, there is no risk that I accidentally copy, pixel by pixel, another image I once saw. Strictly speaking, the AI does not really create images; it resolves a predetermined latent space given the seed and prompt you input. Use the same seed and prompt, and it will always generate the same image. It cannot go beyond the images it was trained on, because it is defined by the latent space between those data points.

 

Here you can see how it moves between different points within the latent space. When you use Stable Diffusion or Midjourney, it is not really art being created; it is images being discovered, images that can only exist because of the initial training data.

 

[panda2plane.gif: animation of latent-space interpolation between a panda and a plane]
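The determinism and interpolation described above can be illustrated with a toy sketch. This is not the actual diffusion pipeline; it only mimics the two properties being claimed: a fixed seed reproduces the same starting latent, and generation moves along paths between points in a fixed space.

```python
import random

def sample_latent(seed, dim=4):
    # Deterministic: the same seed always yields the same starting latent,
    # which is why an identical seed + prompt reproduces the identical image.
    rng = random.Random(seed)
    return [rng.gauss(0, 1) for _ in range(dim)]

def interpolate(z0, z1, t):
    # Linear walk between two latent points; sweeping t from 0 to 1
    # "moves" through latent space, as in the interpolation animation above.
    return [(1 - t) * a + t * b for a, b in zip(z0, z1)]

a = sample_latent(42)
b = sample_latent(42)
print(a == b)  # True: same seed gives the same latent, every time
```

In the real systems the latent is high-dimensional and a learned denoiser maps it to pixels, but the seeded-sampling and interpolation structure is the same in spirit.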

 

Edited by Scholar


As far as I am concerned, the AI has a right to scrape the whole internet for all public data and use it to build its mind.

The limits must be placed on what the AI can do afterwards. It does not make sense to restrict a mind from accessing public information.

Edited by Leo Gura


3 minutes ago, Leo Gura said:

As far as I am concerned the AI has a right to scrape the whole internet for all public data and use it to build its mind.

The limits must be placed on what the AI can do afterwards.

Sure, because that serves you most.

The AI isn't doing anything, because it isn't making choices. People are the ones who scrape the internet for copyrighted data to build the MLA, which is no mind at all.

 

You really need to read up on how this technology works, Leo, because you have a child-like understanding of it. You're like a stage orange NFT bro, but considering your shadow, that is not a revelation.

Edited by Scholar

6 minutes ago, Scholar said:

Sure, because that serves you most

It's not about what serves me; it's about what serves evolution the most.

You will not be able to prohibit AIs from ingesting the whole web any more than you can prohibit people from learning to write. It will happen regardless.

The point of the internet is to give everyone equal access to information. But now that it doesn't serve you, you want to block it. What you are doing is like putting a child in a cage so it cannot grow, because you fear it will take your job.

Edited by Leo Gura


17 minutes ago, Leo Gura said:

It's not about what serves me; it's about what serves evolution the most.

You will not be able to prohibit AIs from ingesting the whole web any more than you can prohibit people from learning to write. It will happen regardless.

The point of the internet is to give everyone equal access to information. But now that it doesn't serve you, you want to block it. What you are doing is like putting a child in a cage so it cannot grow, because you fear it will take your job.

You're just asserting things without argumentation again. AIs aren't making independent decisions, and of course you can regulate technology. It's not like Bob in his garage will be building new AIs. These AIs will be created by megacorporations that can afford to train and run them, and they will only get more compute-intensive as we build more sophisticated models.

Where do you think this will lead? A few megacorporations owning and selling privileged access to certain aspects of their mega-AIs. You literally have a stage orange "freedom will solve this" attitude. I don't know if the latest trips have fried your brain or if you have always been like this and I somehow just didn't see it. Really, no regulation of AI? That's your take? Companies like Stability AI can do anything they want with everyone's data and ignore IP? What are you, a libertarian?

 

You keep anthropomorphizing these AIs as if they were agents going around and learning things. That's not accurate; they are products designed by select people to profit and extract value from everyone else. The masses will not be the ones benefiting from these technologies, my dude.

 

By the way, I could use the models that exist right now to make tons of money, which will most likely not be possible for long. But I do not do so, because I have integrity and view this as unethical and unsustainable.

 

And yes, people should fear them taking our jobs, because that is what they will do. And unless you find some way to implement UBI in every country, I have bad news for you: it's not going to end well for the majority of people.

Edited by Scholar

