erik8lrl

AGI is coming

180 posts in this topic

14 hours ago, Scholar said:

AI will lead not to a democratization of power, but to a monopolization of power.

The way things currently stand, corporations can mine the collective knowledge and data of mankind, extract its value, and concentrate it in their hands. This means economic power that is currently distributed among the population will become centralized in the hands of whoever can create the most sophisticated systems.


You can view art on a spectrum, where the degree of self-expression determines how much something is art versus a simple prompt.

When I commission an artist to paint me an image, I prompt him to express an idea that I have in my mind. The question is: whose expression is the painting? Yours, or the artist's?

You can argue that you do engage in some form of expression, but it is a far lesser form of self-expression than if you had to contend with the given medium, discover how you personally relate to its rhythms, and then express those rhythms as they relate to you. This does not happen when you use AI, not with the current iterations of generative AI.

Is there a world in which AI could enhance human expression? Yes. But neither you nor anyone I see engaging with this technology actually understands what that would require, or why the current pathway will lead to precisely the opposite. The current mindset will lead to disaster.

 

And nobody is arguing that AI will not impact society.

I see your perspective and I agree. No one knows what will happen in the future, but let's hope there will be countermeasures to these problems.

I work in film production, so I speak from experience: if you have mastered the ins and outs of AI art generation, you can absolutely produce work that is completely unique and self-expressive. Of course, most people will just prompt and be done, so I see your point about the diminishing value of art. But it goes the other way too: I go through a very complex workflow to get the image I want down to every detail, with a totally unique style and wholeness developed through a long process of iteration. There are many tools that give you total control over the image-generation process, and new ones are released daily.

We use AI mostly as a tool for quick iteration and as the starting point of our work. You still need to edit, paint, and polish these generations; it's really a mixture of AI and traditional human work that lets you make something good. AI simply speeds up the production process by 4-5 times. Most people have only tried generation models like Midjourney or DALL-E and don't know how much depth and freedom you can have with image generation.
For example, we can have an artist develop an art style and paint a series of concept art in that style, then train our own LoRA model to generate images in that style. We can use ControlNet and other tools to control every aspect of the image, which lets us iterate very quickly in our production process. The artist is still very much present in these works: they prompt with both text and images, often painting rough compositions or color palettes to guide the AI, and they pick out what's good based on their artistic taste and vision. They then edit and improve the work to make sure every part of it is whole and meaningful. It saves a lot of time, and the work they produce is better overall because they can try more things in a shorter time frame. It all depends on how you use it.
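To make the LoRA part of that workflow concrete: LoRA (low-rank adaptation) teaches a pretrained model a custom style by learning a small low-rank update to frozen weights instead of retraining everything. Here is a minimal NumPy sketch of the core idea; the layer sizes, rank, and scaling value are illustrative, not taken from any real diffusion model:

```python
import numpy as np

# LoRA: instead of fine-tuning a full weight matrix W (d_out x d_in),
# learn a low-rank update B @ A with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weights
A = rng.standard_normal((r, d_in))      # trainable, r x d_in
B = np.zeros((d_out, r))                # trainable, initialized to zero
alpha = 8.0                             # scaling hyperparameter

def lora_forward(x):
    # Base path plus scaled low-rank path: (W + (alpha/r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B initialized to zero, the adapted layer starts out identical
# to the frozen base layer, so training begins from the pretrained model.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r*(d_in + d_out) instead of d_in*d_out.
print(r * (d_in + d_out), "vs", d_in * d_out)  # 512 vs 4096
```

This is why a studio can train a style adapter from a small set of concept paintings: only the tiny A and B matrices are learned, and they can be swapped in and out of the base model per style.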

Edited by erik8lrl


The centralization of AI power in a handful of massive corporations is deeply concerning. AI mixed with shameless late-stage capitalism is very problematic for us little people.


You are God. You are Truth. You are Love. You are Infinity.

On 2/21/2024 at 11:40 AM, Leo Gura said:

I didn't say AGI won't happen. I just said there's a lot of hype and that the 7-month timeframe is silly.

It depends on your definition of AGI. 15 years ago, people would have considered GPT-4 an AGI.

ChatGPT has already passed the Turing test, but that test is now considered outdated.

https://www.pnas.org/doi/full/10.1073/pnas.2313925121

 

I personally think we'll have AGI by 2029 as predicted by Ray Kurzweil. By that time we'll have AI that satisfies most people's definition of AGI. 

Edited by abundance


Recent interview with Demis Hassabis, who is the CEO of Google DeepMind:

3 hours ago, Leo Gura said:

The centralization of AI power in a handful of massive corporations is deeply concerning. AI mixed with shameless late-stage capitalism is very problematic for us little people.

The centralization of AGI is a huge concern.

I found a recent post from an employee on OpenAI's Futures/Governance team that brings up this point.

https://www.reddit.com/r/singularity/comments/1axsmtm/daniel_kokotajlo_openai_futuresgovernance_team_on/?utm_medium=android_app&utm_source=share

 

2 hours ago, abundance said:

It depends on your definition of AGI. 15 years ago, people would have considered GPT-4 an AGI.

ChatGPT has already passed the Turing test, but that test is now considered outdated.

https://www.pnas.org/doi/full/10.1073/pnas.2313925121

The Turing test was always stupid.

ChatGPT is no doubt very impressive.



21 hours ago, Leo Gura said:

The centralization of AI power in a handful of massive corporations is deeply concerning. AI mixed with shameless late-stage capitalism is very problematic for us little people.

Yes, though many open-source models are being developed and released too.
https://x.com/thibaudz/status/1761506136455340470?s=20

Edited by erik8lrl

On 2/24/2024 at 3:38 PM, Leo Gura said:

AI mixed with shameless late-stage capitalism is very problematic for us little people.

How much of this "late-stage capitalism" is real? I don't see anything significant happening to capitalism as a whole, and I don't see why it has to.

Are we all going to shift to the Nordic model after capitalism? We would need a lot of resources for that, and for that you need capitalism. The only way out of capitalism is through.

11 hours ago, Bobby_2021 said:

The only way out of capitalism is through. 

Pretty much. But still, it sucks. Stage Orange is running amok. The corpos are soulless money-chasing devils.



4 hours ago, Leo Gura said:

Pretty much. But still, it sucks. Stage Orange is running amok. The corpos are soulless money-chasing devils.

This is harmful rhetoric. AI is putting power in the hands of the people. The only way anyone makes money in the free market is by delivering value to people en masse.

I do not care about some shameless billionaire adding an extra few billion to his pocket. It doesn't affect any of us, except for climate change through the mammoth-sized emissions of their private jets and yachts. Even there, capitalism is solving climate change by mass-producing solar, wind, and even nuclear infrastructure. Nobody is stopping us from solving these problems; the solutions are right in front of us.

Also, they shouldn't use money and power to break or bend the laws. For example, stock buybacks are bad. We need to avoid such obvious rule-breaking, but apart from that, all is well, to be honest.

Now you can start many businesses without huge capital by leveraging the power of AI. You can start a business with 1k or 5k, when that number used to be 10 times larger 20 years ago. That is only because of the tools produced by capitalism. You can use AI to help with gathering information, doing small tasks, content writing, virtual assistance, etc. You do not need to employ people to slave away at these kinds of dead-end jobs.

So, am I supposed to be pissed off that the OpenAI board is getting billions from AI? I do not really care; I get more free stuff. We should be celebrating all of this. Only a very tiny portion of Stage Orange includes those "soulless corpos". That is why laws exist. If the government is not enforcing those laws, that is on them. They shouldn't be taking bribes.

 

 

Edited by Bobby_2021


https://www.hollywoodreporter.com/business/business-news/tyler-perry-ai-alarm-1235833276/

Now anyone can make a film with a keyboard and some creativity, instead of relying on Disney to shove lectures on wokeism into their movies.

You do not need to invest 200 million in capital to make a movie; the cost should come down with AI. That should put some power in the hands of the people. All these tools are out there for people who want to use them well. These are more reasons to be optimistic.

--------------------

Government regulation of small business is the biggest thing you should be scared of. We should lower the barrier to entry for small businesses, and regulation should be done in a less disruptive way. That is the real danger in the room.

Edited by Bobby_2021

23 minutes ago, Bobby_2021 said:

https://www.hollywoodreporter.com/business/business-news/tyler-perry-ai-alarm-1235833276/

Now anyone can make a film with a keyboard and some creativity, instead of relying on Disney to shove lectures on wokeism into their movies.

You do not need to invest 200 million in capital to make a movie; the cost should come down with AI. That should put some power in the hands of the people. All these tools are out there for people who want to use them well. These are more reasons to be optimistic.

--------------------

Government regulation of small business is the biggest thing you should be scared of. We should lower the barrier to entry for small businesses, and regulation should be done in a less disruptive way. That is the real danger in the room.

Yes, this will be the case: anyone will be able to make a film soon. Of course, it still takes craft to make anything good, and there are still limitations to AI. However, the release of these models will be a huge deal for independent and documentary filmmakers. Hollywood will likely take full advantage of it as well; it will be implemented into all workflows to make things faster and cheaper.

For some, AI is scary because it might very well take their jobs. But for others, it's a major game-changing development that will allow them to do what was impossible. I think people should learn AI and keep up with its development. We are already at a point where, if you don't incorporate it into your workflow, you will be far less productive than those who do, and eventually be replaced. I think smart business owners will not just replace people with AI, but train people to use AI to multiply their productivity. You can keep the same number of people but produce 5-10 times more results. This will likely be the case for film.

38 minutes ago, Bobby_2021 said:

This is harmful rhetoric. AI is putting power in the hands of the people. The only way anyone makes money in the free market is by delivering value to people en masse.

I do not care about some shameless billionaire adding an extra few billion to his pocket. It doesn't affect any of us, except for climate change through the mammoth-sized emissions of their private jets and yachts. Even there, capitalism is solving climate change by mass-producing solar, wind, and even nuclear infrastructure. Nobody is stopping us from solving these problems; the solutions are right in front of us.

Also, they shouldn't use money and power to break or bend the laws. For example, stock buybacks are bad. We need to avoid such obvious rule-breaking, but apart from that, all is well, to be honest.

Now you can start many businesses without huge capital by leveraging the power of AI. You can start a business with 1k or 5k, when that number used to be 10 times larger 20 years ago. That is only because of the tools produced by capitalism. You can use AI to help with gathering information, doing small tasks, content writing, virtual assistance, etc. You do not need to employ people to slave away at these kinds of dead-end jobs.

So, am I supposed to be pissed off that the OpenAI board is getting billions from AI? I do not really care; I get more free stuff. We should be celebrating all of this. Only a very tiny portion of Stage Orange includes those "soulless corpos". That is why laws exist. If the government is not enforcing those laws, that is on them. They shouldn't be taking bribes.

 

 

I think that although it's definitely important to keep AI accessible and not owned by major corporations, companies like OpenAI, Stability AI, or Tesla are not operating only for profit, though profit is important. Their goals for advancing tech and helping humanity are real. I think Sam Altman has said that his biggest fear is releasing something with good intentions only for people to use it the wrong way, leading to disaster. Hence why they release things slowly, to give society time to respond.

1 hour ago, erik8lrl said:

Their goals for advancing tech and helping humanity are

I don't think so. It's hard to operate in the AI industry without a solid profit motive. Not that it matters.

Either way, I am happy with more AI tools available to me. What concerns me is them putting guardrails on AI because of woke stuff; now it doesn't work as it used to.

Even now, ChatGPT is nowhere near as usable as it was. It gives trash responses and less relevant information, and declines to answer without explanation. Another instance of why socialism/communism/wokeism never produces the intended outcomes.

On 17/02/2024 at 9:24 AM, erik8lrl said:

 

The naysayers on the dangers of AI are naive and blinded.

Yes, AI right NOW is largely benign. But this doesn't excuse bad actors: bad state actors or private bodies developing things without public knowledge. All other computer science has advanced over the years, so why won't AI?

Comparing the dangers to nukes, as some do, is equally cringe. Nuclear weapons are no doubt destructive, but they only work via human agency and action, and there are non-proliferation treaties to stop their spread. AI is much harder to stop: how can one stop coding or software-development knowledge from spreading? A nuke cannot fire itself. An AI, even a benign one like ChatGPT, can literally function independently.

As a species, we need to put a halt to it. Even sign a treaty at the UN level limiting its development, and get North Korea and other pariah states to sign it. They signed the nuclear non-proliferation treaty, so there is hope.

45 minutes ago, bebotalk said:

How can one stop coding or software-development knowledge from spreading?

Why would you even want to stop it? It's ridiculous. We need to build zero-trust systems that can operate without being fooled by a freaking AI.

47 minutes ago, bebotalk said:

As a species, we need to put a halt to it

"As a species"

Do you even realize how silly this sounds?

Do you think China or Russia is simply going to put a halt to it because someone fearmongers about it?

Do you think even the companies in the US are going to stop? Stop the fearmongering for a moment.

Whatever problems we face, we will solve them then, just like we always have.

48 minutes ago, bebotalk said:

The naysayers on the dangers of AI are naive and blinded.

Yes, AI right NOW is largely benign. But this doesn't excuse bad actors: bad state actors or private bodies developing things without public knowledge. All other computer science has advanced over the years, so why won't AI?

Comparing the dangers to nukes, as some do, is equally cringe. Nuclear weapons are no doubt destructive, but they only work via human agency and action, and there are non-proliferation treaties to stop their spread. AI is much harder to stop: how can one stop coding or software-development knowledge from spreading? A nuke cannot fire itself. An AI, even a benign one like ChatGPT, can literally function independently.

As a species, we need to put a halt to it. Even sign a treaty at the UN level limiting its development, and get North Korea and other pariah states to sign it. They signed the nuclear non-proliferation treaty, so there is hope.

Yeah, regulations are needed for sure. 

29 minutes ago, Bobby_2021 said:

Why would you even want to stop it? It's ridiculous. We need to build zero-trust systems that can operate without being fooled by a freaking AI.

"As a species"

Do you even realize how silly this sounds?

Do you think China or Russia is simply going to put a halt to it because someone fearmongers about it?

Do you think even the companies in the US are going to stop? Stop the fearmongering for a moment.

Whatever problems we face, we will solve them then, just like we always have.

Why not? It's something that affects us all as humans, so yes, as a species. They have signed nuclear non-proliferation treaties; being Western "enemies" doesn't mean we cannot find common solutions. We do on climate change, and China is leading some efforts in this regard. As it's a global problem, ideally we should find a global solution for it. Solving problems only "when we meet them" is myopic; we often face problems we did not foresee, like global warming.

You're pushing the common narrative that any fear is overblown. We can't and shouldn't tackle problems in life without being proactive.

 


A Terminator-esque scenario is unlikely, for now. But in decades to come we could get a Data-esque android. There has even been talk of sex robots that are essentially sapient. If those are ever produced, would they have rights? They would have consciousness, like we do; we can't say we deserve rights merely by being human.

Even computer scientists in the 1960s could scarcely imagine computers today. With such exponential growth, then who knows what is possible? 

Putin in his interview even spoke about the dangers of AI. He's not stupid. He knows the threat to his country at the least. 

This deepfake porn shit is disturbing too. The Trump arrest pics were funny asf, but anybody can create that about anybody else, and to what ends? Voices can be faked too. So yes, as a species, we need to get a hold on this. It affects us all.

And enemies can't form common bonds, ever? OK. Then explain why the USA and USSR in the 1960s (during the Cold War, no less) signed non-proliferation treaties that still stand. It's not as black and white as you make out. Assuming that AI will always be limited to ChatGPT-level technology, or can never be a threat, is exceedingly myopic and dangerous thinking.

Edited by bebotalk

3 hours ago, bebotalk said:

Explain then why the USA and USSR in the 1960s (during the Cold War, no less) signed non-proliferation treaties that still stand?

Exactly. The treaty was signed to prevent other nations from developing nuclear weapons. And what happened? India, Pakistan, North Korea, and even Israel developed nuclear weapons anyway. Even the US and Russia have withdrawn from arms-control treaties. That is why all such treaties are utter trash.

The exact same thing will happen with AI, now that corporations are playing the game. They will give absolutely zero flying fucks about any "treaty". If you try to regulate them too much, they will move to a country that doesn't and train their AI on all the data they can get their hands on.

I oppose all regulations that increase the barrier to entry for AI. Some basic regulations are necessary.

I do not doubt the sincerity or goodwill of your arguments, just that they will not produce the effects you are looking for.

Edited by Bobby_2021

