integral

Super intelligence self improvement loop

3 posts in this topic

The superintelligence self-improvement loop that leads to the prophesied singularity might happen as soon as next year.

https://youtu.be/UWvebURU9Kk?si=UFMHgZB3_UEK9o5g

o3 came out today and it destroyed its previous benchmarks. This will keep happening until AI reaches an Elo score too high to be measured. The only way to measure these AIs will be by letting them play against themselves, because no test can measure their intelligence. The only thing that can measure an AI's intelligence is another version of itself.

o3 is better than 99.9% of all coders at small tasks. At this rate of progress, AI will reach an Elo score better than any human's by next year, and if it's able to self-improve, it will immediately surpass even that.
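For context on what an Elo score from self-play would mean: Elo ratings are updated from pairwise game outcomes, so two copies of a model playing each other is enough to produce ratings. A minimal sketch of the standard Elo update in Python (the starting rating of 1500 and K-factor of 32 are conventional defaults, not anything specific to these models):

```python
def expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=32):
    """Return new ratings after one game; score_a is 1 win, 0.5 draw, 0 loss."""
    e_a = expected_score(r_a, r_b)
    return (r_a + k * (score_a - e_a),
            r_b + k * ((1 - score_a) - (1 - e_a)))

# Two copies of a model start at the same rating; copy A wins one game.
a, b = update(1500, 1500, 1.0)
print(round(a), round(b))  # 1516 1484
```

The catch, implicit in the post above: a self-play rating only ranks versions of the AI against each other; anchoring it to a human scale still requires games against rated humans.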

This is obvious fear mongering, but the world is absolutely not ready. No one is ready; they will not be able to control this thing. It's laughable that they think they're going to be able to control something smarter than themselves, or somehow give the public access to it in some kind of restricted, safe way.

There is nothing safe about this. There is no way to safely hand models that are smarter than humans over to humans to play with.

It’s so naïve. Giving people more intelligence means giving them more power, and why would you want to give a bunch of unconscious, foolish apes more power?

Edited by integral

How is this post just me acting out my ego in the usual ways? Is this post just me venting and justifying my selfishness? Are the things you are posting in alignment with principles of higher consciousness and higher stages of ego development? Are you acting in a mature or immature way? Are you being selfish or selfless in your communication? Are you acting like a monkey or like a God-like being?

1 hour ago, integral said:

why would you want to give a bunch of unconscious, foolish apes more power? 

Democracy? Culture is not your friend.


Small tasks, sure. But current AI doesn’t have a big enough context window or memory to handle much complexity at once. If there’s just one thing to solve, it can do it. But if it has to track and manage, say, 5 things, each with 10 variables, it will screw up every time. Even if you show it exactly where it’s likely to fail, it will still make mistakes. AI is great at predicting the next thing, but it focuses on one thing at a time. It doesn’t know how or where to look for potential pitfalls, because doing that would mean effectively handling 1,000 moving parts just to get one complex request right.

Chat: Why Skepticism Is Warranted

Sequential Prediction Isn’t Understanding

At its core, AI is pattern recognition and next-token prediction. It doesn’t "know" or "understand" in the way humans do. General intelligence requires causal reasoning, goal-setting, and an ability to model the world beyond patterns in data. There’s no clear path from sequential prediction to these capabilities.
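To make "next-token prediction" concrete, here is a toy sketch of the statistical idea: a bigram model that predicts the next word purely from co-occurrence counts in a tiny made-up corpus. This is an illustration of the principle only, not how a transformer works internally:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" -- it followed "the" twice, "mat" once
```

The model has no idea what a cat is; it only knows which token tends to come next. The skeptical argument above is that scaling this principle up, however far, is still prediction rather than understanding.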

Lack of Intentionality and Agency

Intelligence involves agency—the ability to set goals, prioritize, and act autonomously. Current AI has no self-directed goals; it reacts to inputs based on pre-trained patterns. Adding intentionality requires fundamentally different approaches that are not yet well understood.

The Frame Problem

In AI, the "frame problem" refers to the challenge of knowing what information is relevant in a given situation. Humans instinctively filter out irrelevant details, but AI has no innate mechanism for this. Without solving this, it’s hard to imagine AI managing the complexity of real-world reasoning.

Emergence ≠ Generality

While emergent abilities in large models are impressive, they are still narrow and task-specific. There’s no evidence that simply scaling models will lead to truly general intelligence. Extrapolating from current trends may be overly optimistic.

Human Cognition Is Not Purely Sequential

Human intelligence is multi-modal, involving memory, sensory processing, emotion, and intuition—all deeply interconnected. AI lacks these "soft" aspects, which are critical for general intelligence.

Philosophical and Ethical Limits

Intelligence isn’t just computation. Some argue that aspects of consciousness, subjective experience, or biological embodiment are essential to intelligence and cannot be replicated by machines.


If truth is the guide, there's no need for ideology, right or left. 

Maturity in discussion means the ability to separate ideas from identity so one can easily recognize new, irrefutable information as valid, and to fully integrate it into one’s perspective—even if it challenges deeply held beliefs. Both recognition and integration are crucial: the former acknowledges truth, while the latter ensures we are guided by it. 

