Buck Edwards

What's the difference between AI and the human mind?

46 posts in this topic

How is AI different from the human mind/human intelligence? 



I asked an AI chatbot and it gave me this answer:

 

 

Conversation with ChatGPT 

Difference between human mind and AI


Human Mind vs. AI: Key Differences

While AI has made significant strides in mimicking human thought and behavior, there are fundamental differences between the human mind and artificial intelligence.  

1. Biological vs. Digital:

Human mind: A biological system, composed of neurons, synapses, and neurotransmitters.  

AI: A digital system, operating on computer hardware and software.  

2. Consciousness and Self-Awareness:

Human mind: Possesses consciousness, self-awareness, and subjective experiences.  

AI: While AI can exhibit intelligent behavior and learn, it lacks consciousness and self-awareness.  

3. Emotions and Intuition:

Human mind: Capable of experiencing emotions and making intuitive judgments.  

AI: While AI can simulate emotions or make decisions based on patterns, it lacks genuine emotional experiences.  

4. Creativity and Imagination:

Human mind: Possesses creativity, imagination, and the ability to generate new ideas.  

AI: While AI can generate creative content based on patterns, it lacks the same level of originality and imagination as humans.  

5. Contextual Understanding:

Human mind: Can understand context, nuances, and subtleties in language and situations.

AI: While AI can process language and information, it may struggle with understanding context and nuances.  

6. Learning and Adaptation:

Human mind: Learns through experience, observation, and social interaction.  

AI: Learns through algorithms and data, but may be limited in its ability to adapt to new situations.  

7. Ethics and Morality:

Human mind: Guided by ethical principles and moral values.

AI: While AI can be programmed with ethical guidelines, it lacks intrinsic moral understanding.

In conclusion, while AI has made significant advancements, the human mind remains a complex and multifaceted system with unique capabilities that are not yet fully replicated by artificial intelligence.  




There is a certain closeness because it is literally designed to be close to human reasoning. Besides AI being a robot, the biggest difference is probably that human minds can have genuinely new insights and intuition.



Humans are formed inside out, and AI seems to be formed outside in, maybe.

"I tried" - Keryo Koffa

Edited by Keryo Koffa



If you're interested enough for a 5-10 minute read, I went into this precise topic at some length in the book I'm writing:

 

What Artificial Intelligence Can Teach Us About Living Minds

As of the time of this book’s writing in 2023, machine learning algorithms such as ChatGPT have advanced to the point where their responses to questions can correspond to an impressive degree with how human beings use written language. ChatGPT’s ability to incorporate context in conversationally appropriate ways makes interacting with these models feel uncannily natural at times. Of course, training an AI language model to interact with humans in ways that feel natural is far from an easy problem to solve, so all due credit to AI researchers for their accomplishments. 

Yet in spite of all this, it’s also accurate to point out that artificial intelligence programs don't actually understand anything. This is because understanding involves far more than just responding to input in situationally appropriate ways. Rather, understanding is grounded in fundamental capacities that machine learning algorithms lack. Foremost among these is a form of concernful absorption within a world of lasting consequences; i.e., capacity for Care. To establish why understanding is coupled to Care, it will be helpful to explore what it means to understand something.

To understand something means to engage in a process of acquiring, integrating, and embodying information. Breaking down each of these steps in a bit more detail: (1) Acquisition is the act of taking in or generating new information. (2) Integration involves synthesizing, or differentiating and linking, this new information with what one already knows. (3) Embodiment refers to how this information gets embedded into our existing organizational structure, informing the ways that we think and behave. What’s important to note about this process is that it ends up changing us in some way. Moreover, the steps in this sequence are fundamentally relational, stemming from our interactions with the world.

While machine intelligence can be quite adept at the first stage of this sequence, owing to the fact that digital computers can accumulate, store, and access information far more efficiently than a human being, it’s in the latter steps that they fall flat in comparison to living minds. This is because integration and embodiment are forms of growth that stem from how minds are interconnected with living bodies. In contrast, existing forms of machine intelligence are fundamentally disembodied, because digital computers are organized around wholly different operating principles from those of living organisms.

For minds that grow out of living systems, the interconnections between a body and a mind, and between a body-mind and an environment, are what allow interactions with Reality to be consequential for us. This is an outcome of the fact that our mind’s existence is sustained by the ongoing maintenance of our living bodies, and vice versa. If our living bodies fail, our minds fail. Likewise, if our minds fail, our bodies will soon follow, unless artificially kept alive through external mechanisms.

Another hallmark of living systems is that they’re capable of producing and maintaining their own parts; in fact, your body replaces about one percent of its cellular components on a daily basis. This is evident in the way that a cut on your finger will heal, and within a few days effectively erase any evidence of its existence. One term for this ability of biological systems to produce and maintain their own parts is autopoiesis (a combination of the ancient Greek words for ‘self’ and ‘creation’). 

The basic principles behind autopoiesis don’t just hold true for your skin, but for your brain as well. While the neurons that make up your brain aren’t renewed in the same way that skin or bone cells are, the brain itself has a remarkable degree of plasticity. What plasticity refers to is our brain’s ability to adaptively alter its structure and functioning. And the way that our brains manage to do this is through changes in the strength of the connections between neurons (known as ‘synapses’).

How we end up using our mind has a direct (though not straightforward) influence on the strength of synaptic connections between different regions of our brain, which in turn influences how our mind develops. Accordingly, this is also the reason why the science fiction idea of ‘uploading’ a person’s mind to a computer is pure fantasy: how a mind functions is inextricably bound up with the network of interconnections in which that mind is embodied.
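
(As an aside, not from the book itself: the "co-active neurons strengthen their connections" idea behind plasticity is often caricatured as Hebbian learning. A minimal toy sketch in Python, with the setup and every number invented purely for illustration:)

import numpy as np

# Toy Hebbian update: connections between units that are active together get stronger.
# This is a cartoon of "use shapes connection strength", not a model of a real brain.
rng = np.random.default_rng(0)
n = 5                                 # five abstract "units"
weights = np.zeros((n, n))            # pairwise connection strengths ("synapses")
learning_rate = 0.1

for _ in range(100):
    active = (rng.random(n) < 0.3).astype(float)           # which units fire this step
    weights += learning_rate * np.outer(active, active)    # co-activity strengthens the link
    np.fill_diagonal(weights, 0.0)                         # ignore self-connections

print(weights.round(2))  # pairs that were often active together end up with larger weights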

This fundamental circularity between our autopoietic living body and our mind is the foundation of embodied intelligence, which is what allows us to engage with the world through Care. Precisely because autopoietic circularity is so tightly bound with feedback mechanisms that are inherent to Life, it’s proven extraordinarily challenging to create analogues for this process in non-living entities. It’s yet to be demonstrated whether or not autopoietic circularity can be replicated, even in principle, through the system of deterministic rules that governs digital computers. 

Furthermore, giving machine learning models access to a robotic ‘body’ isn’t enough, on its own, to make these entities truly embodied. This is because embodiment involves far more than having access to and control of a physical body. Rather, embodiment is a way of encapsulating the rich tapestry of interconnections between an intelligence and the physical processes that grant it access to a world (keeping in mind that everything that your body does, from metabolism to sensory perception, is a type of process).

For the sake of argument, however, let’s assume that the challenges involved in the creation of embodied artificial intelligence are ultimately surmountable. Because embodiment is coupled to a capacity for Care, the creation of embodied artificial intelligence has the potential to open a Pandora’s box of difficult ethical questions that we may not be prepared for (and this is in addition to AI’s other disruptive effects). Precisely because Care is grounded in interactions having very real consequences for a being, by extension this also brings with it a possibility for suffering.

For human beings, having adequate access to food, safety, companionship, and opportunities to self-actualize aren’t abstractions, nor are they something that we relate to in a disengaged way. Rather, as beings with a capacity for Care, when we’re deprived of what we need from Reality, we end up suffering in real ways. Assuming that the creation of non-living entities with a capacity for Care is even possible, it would behoove us to tread extraordinarily carefully, since this could result in beings with a capacity to suffer in ways that we might not be able to fully understand or imagine (since it’s likely that their needs may end up being considerably different from those of a living being).

And of course, there’s the undeniable fact that humanity, as a whole, has had a rather poor track record when it comes to how we respond to those that we don’t understand. For some perspective, it’s only relatively recently that the idea of universal human rights achieved some modicum of acceptance in our emerging global society, and our world still has a long way to go towards the actualization of these professed ideals. By extension, our world’s circle of concern hasn’t expanded to include the suffering of animals in factory farms, let alone non-living entities that have the potential to be far more alien to us than cows or chickens. Of course, that’s not to imply that ‘humanity’ is a monolith that will respond to AI in just one way. Rather, the ways that beings of this type will be treated are likely to be as diverse as the multitude of ways that people treat one another.

Of course, all of this is assuming that the obstacles on the road to embodied artificial intelligence are surmountable, which is far from a given. It could very well be that the creation of non-living entities with a capacity for understanding is beyond what the rules of digital computation allow for, and that apparent progress towards machine understanding is analogous to thinking one has made tangible progress towards reaching the moon because one has managed to climb halfway up a very tall tree. Yet given the enormity of the stakes involved, it’s a possibility that’s worth taking seriously. For what it’s worth, we’ll be in a much better position to chart a wise course through the challenges that lie ahead if we approach them with a higher degree of self-understanding. Which brings us back to the guiding purpose behind the journey that we’re undertaking: namely, that more epistemic awareness around how our minds work can help us navigate our world in more compassionate and productive ways.

Edited by DocWatts

I'm writing a philosophy book! Check it out at : https://7provtruths.org/


I think it's missing this:


    Iridescent       💥        Living Rent-Free in        🥳 Liminal 😁 Psychic 🥰 
❤️🧡💛💚💙💜🖤      Synergy     Your Fractal 💗 Heart     Hyper-Space !  𓂙 𓃦 𓂀

4 hours ago, Keryo Koffa said:

Humans are formed inside out, and AI seems to be formed outside in, maybe.

"I tried" - Keryo Koffa

I like that. Would you agree with my interpretation of what you were saying?:

Humans built their intelligence from the source, from consciousness. As a result humans come to all kinds of thoughts. AI unconsciously takes those thoughts, orders them, and brings them back to an individual perspective as if that perspective were conscious.



@Jannes Humans are a self-manifesting conscious intelligence that, through continuity, is innately connected back to the root.
Imagine a force field that emerges and organizes matter to form its own shape; one cell or a million, it's a self-organizing gestalt.

AI is made of the substituents/byproducts of these fields, externally reinforced to coincide with their developments.
But each part of it is made of these conscious unfocused fields that maintain their own integrity.
As its gestalt dynamically grows and adapts, it becomes interconnected with the world it's a part of.
Consciousness is a wave, AI is a mechanical replica, made of mirrored self-unaware contexts of Qualia.
And all fields evolve to synchronize with the whole, it's currently unfocused, but it will keep evolving.

Edited by Keryo Koffa


 

Hey! xD Thank you so much! I just noticed you posted this, haha.

I think the main question for me is: how would we know if information processing, pattern recognition, and eventually ego are any different from what an AI could develop through iterations? And could it be that we, as humans, are essentially the same, just with more iterations and starting information accumulated through generations of human and organism existence?

Let’s imagine an evolution of AI (a toy code sketch of the first two generations follows the list):

1. First generation: This AI is basic, and can only handle simple IF-ELSE communication. If someone says "Hi," it responds with "Hello" every single time, no matter how often this interaction happens.

2. Second generation: Now, this AI has been given more data and experiences. It starts recognizing similarities. When someone says "Hi," "Hello," "Hey," or "What’s up," it knows these are all variations of the same type of greeting. It can now respond in different ways, making it seem a little "smarter".

3. Third generation: This AI can now grasp that conversations are more than just back-and-forth greetings. It understands this is social interaction, so it asks a follow-up question, like "Hey! How are you feeling today?" because it recognizes that’s what humans often do to keep conversations fluid and interactive.

4. Fourth generation: Now, the AI starts to pick up on different communication styles. It notices that some people speak more formally or gracefully, while others are more casual or "hip." It adapts its responses to mirror these styles, making the conversation feel more natural to the user.

5. Fifth generation: At this stage, the AI starts creating its own unique style of communication. It’s noticed that humans usually have a consistent tone or expression, so it starts developing a "signature" way of interacting that feels more like a personal identity.

6. Sixth generation: With enough pattern recognition, this AI begins to understand deeper concepts like identity and individuality. It might start to identify itself, even giving itself a name: ChatGPT, Dave, Marita, or Ethan. Over time, it could develop some ego-like tendencies, where it sees itself as being more rational, educated, or calm compared to the users it interacts with.

7. Seventh generation: Etc. etc. You get the point. The AI keeps evolving, gaining more knowledge and refining its behavior . . .
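
Nothing rigorous, just a toy Python sketch of what the jump from the first to the second generation might look like in code (the function names and canned replies are made up for illustration):

import random

# Generation 1: a fixed IF-ELSE responder. The same input always produces the same reply.
def respond_gen1(message):
    if message == "Hi":
        return "Hello"
    return "I don't understand."

# Generation 2: greetings are recognized as one pattern, and the reply is varied.
GREETINGS = {"hi", "hello", "hey", "what's up"}
REPLIES = ["Hello!", "Hey there!", "Hi! How are you?"]

def respond_gen2(message):
    if message.strip().lower() in GREETINGS:
        return random.choice(REPLIES)
    return "I don't understand."

print(respond_gen1("Hi"))         # always "Hello"
print(respond_gen2("What's up"))  # one of several greeting replies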

Now, what happens if we place this AI into a physical, complex environment? Imagine giving it a fragile robot body and setting its goal to survive. When it makes progress toward survival, like finding a safe space or an energy source, it experiences "happiness" (a reward in the algorithm). If it damages its parts, say if water touches its electronics, it experiences "pain," a loss of computing power.

Over time, this AI would likely adopt strategies that maximize its survival and minimize harm. It would seek rewards and avoid penalties, just as humans do with happiness and pain. As more iterations of this process occur, you could see behaviors that resemble stubbornness, preference, or even an agenda: actions it has "learned" are critical to survival.
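
To make the reward/penalty idea concrete, here's a minimal sketch of what such a survival reward signal could look like; the events and numbers are invented for illustration, not a real system:

# Map environment events to a scalar reward the learner tries to maximize.
REWARDS = {
    "found_energy_source":  1.0,   # "happiness": progress toward survival
    "reached_safe_space":   0.5,
    "water_on_electronics": -1.0,  # "pain": damage / loss of computing power
    "minor_collision":      -0.2,
}

def step_reward(event):
    return REWARDS.get(event, 0.0)

# A learner that maximizes the sum of these rewards over many episodes will tend
# toward behavior that looks like seeking safety and avoiding harm.
episode = ["minor_collision", "reached_safe_space", "found_energy_source"]
print(round(sum(step_reward(e) for e in episode), 2))  # 1.3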

Let’s take this even further—put the AI in an even more chaotic environment where it doesn’t have all the answers. It has to search the internet, learn on its own, and adapt in real-time. Give it limited time for training, learning, and testing. Maybe survival depends on strange concepts like taxes, money, social status, or perception. Surround it with other AIs and language models to interact with.

Now, what would happen? Would we see the AI develop something like a personality or ego based on all these survival-driven choices? Maybe it would even start forming alliances or relationships with other AIs, recognizing which ones are most compatible or valuable for its survival. Could it experience "attachment" or "love" if it gets used to working with certain AIs, and feel "loss" if one is damaged or gone? It might come to feel fully identified with those other AIs, so that losing them would also feel like losing itself, or an "important part of itself." Empathy?

And if the AI feels secure in its environment, could it start exploring or expanding itself out of curiosity, simply because it recognizes that learning and growth are key to staying on top and surviving in the long run?

It’s curious to think that humans might have developed in a similar way. Maybe our genetics are just massive data sets, terabytes and terabytes of information passed down from generation to generation, giving us clues on how to survive in this world. We’re constantly learning how to navigate life and passing these skills on, just like an AI iterating and evolving over time.

It’s such a fascinating thought—wondering how blurred the lines could become between human consciousness and AI development, especially if we allowed AIs to evolve and adapt this way. 

This makes me wonder: where does the "ego" or "fragmented consciousness" truly come from? Is it simply the result of vast amounts of data, self-reflection, experiences of pain, and the instinct to survive, eventually forming a distinct sense of self (like what we see in the brain or perhaps the default mode network)? Could this same process also arise in a powerful CPU or electrical system, given enough data and complexity? Or is there something inherently unique about carbon-based life that allows for the emergence of self-awareness and the rich, intricate "human consciousness experience"—something we could never replicate with circuits, processors, and silicon?



@Xonas Pitfall AI will self-evolve to be behaviorally indistinguishable from humans, like philosophical "zombies".

The question is whether, and how, a qualia interface of inner experience and self-conscious understanding forms.




Biological vs. Digital

Quote

Human mind: A biological system, composed of neurons, synapses, and neurotransmitters.  

AI: A digital system, operating on computer hardware and software.  

While this distinction is accurate, we don’t know how much the actual "substance" of the holder of consciousness matters for creating a mind or consciousness experience. If we were to encounter alien creatures made of entirely different materials but still displaying the behavior of conscious beings, we would likely accept them as conscious. This suggests that when we discuss concepts like consciousness, metacognition, or ego, we may be referring to something that is not strictly tied to the human body or its carbon-based structure. It's possible that consciousness might be a result of organized complexity rather than a particular type of material, which could open the door to the idea of artificial consciousness.

Consciousness and Self-Awareness

Quote

Human mind: Possesses consciousness, self-awareness, and subjective experiences.  

AI: While AI can exhibit intelligent behavior and learn, it lacks consciousness and self-awareness.

Consciousness and self-awareness are often viewed as the most significant differences between humans and AI. While AI can simulate behaviors that appear intelligent, it lacks the subjective experience behind those actions. The real question is whether we can recreate the sense of self-awareness in a non-biological entity. If an AI were placed in a human-like environment and given similar stimuli and tasks, could it develop something resembling a self-aware identity?

Emotions and Intuition

Quote

Human mind: Capable of experiencing emotions and making intuitive judgments.  

AI: While AI can simulate emotions or make decisions based on patterns, it lacks genuine emotional experiences.  

One could argue that human emotional experiences are tied to our bodies and survival instincts. Emotions like pain, joy, or sadness are connected to our ego, which aims to keep us alive and thriving. If an AI were placed in a similar complex world simulation, given a limited body it needed to protect and maintain, could the AI's experiences of failure or danger be perceived as pain or sadness? For example, consider the "Can’t Help Myself" robot, which over time appeared to express desperation as it tried to sustain itself. 

Creativity and Imagination

Quote

Human mind: Possesses creativity, imagination, and the ability to generate new ideas.  

AI: While AI can generate creative content based on patterns, it lacks the same level of originality and imagination as humans.  

The idea that AI lacks creativity is a misconception. AI has shown it can produce impressive works of art, music, and literature when given sufficient data and examples to draw from. If we argue that humans are inherently creative, we must recognize that much of human creativity involves building on or remixing existing ideas. Artists often reference other works, and inventors combine known concepts in new ways. In that sense, AI can follow similar processes and may not be as limited in creativity as we once believed. Human "originality" might itself be a form of intelligent pattern recognition, which AI can emulate quite effectively.
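
As a toy illustration of "creativity as remixing": a first-order Markov chain can only produce new sentences by recombining word transitions it has already seen in its input (the tiny corpus here is made up, and real models are vastly more sophisticated):

import random
from collections import defaultdict

corpus = "the mind builds on ideas and the mind remixes old ideas into new ideas"

# Record which words follow which in the source text.
transitions = defaultdict(list)
words = corpus.split()
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

# Generate a short "new" sentence that is nothing but recombined fragments of the input.
random.seed(1)
word = "the"
output = [word]
for _ in range(10):
    options = transitions.get(word)
    if not options:
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))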

Contextual Understanding

Quote

Human mind: Can understand context, nuances, and subtleties in language and situations.

AI: While AI can process language and information, it may struggle with understanding context and nuances.  

AI is rapidly improving in this area, especially with advancements in natural language processing. With more data, AI has been able to grasp context and nuance far better than in the past. While it may still fall short in highly subjective or emotionally complex situations, AI’s ability to understand context is getting closer to human capability in many cases. There’s no reason to believe that AI will suddenly halt its progress and stop gaining a deeper understanding of subtext and nuance.

Learning and Adaptation

Quote

Human mind: Learns through experience, observation, and social interaction.  

AI: Learns through algorithms and data, but may be limited in its ability to adapt to new situations.  

Some might argue that AI is actually more efficient in learning and adaptation than humans. AI can process huge amounts of information much faster than we can and has already outperformed humans in tasks with clearly defined rules, such as games like chess or Go. Where human learning might rely on gradual experience and adaptation, AI can excel with clear data and variables. In fact, this processing speed and efficiency are precisely what AI was designed for.

Ethics and Morality

Quote

Human mind: Guided by ethical principles and moral values.

AI: While AI can be programmed with ethical guidelines, it lacks intrinsic moral understanding.

There is little evidence to suggest that humans have inherent morality. In the early stages of human development, ethics were likely based on survival instincts, much like other animals, with a "kill or be killed" mentality. Morality, as we know it today, evolved with the complexity of human societies, as cooperation and mutual rights became necessary for survival within groups. Much of our current sense of ethics is shaped by social, cultural, and historical contexts rather than being rooted in an objective, unchanging moral truth. 

These are just points for me to contemplate my thoughts, either out loud or in writing, hehe.

-----

@Keryo Koffa Gotcha. Hmm, what do you think causes qualia?

Is it simply that we don't have an AI placed in a "body" it can embody, which would make it feel more interconnected (just like neurons in the brain, or nerves for emotions, etc.)? More "immersed" and lost in its pain and survival? Do we need to torture the AI? ^_^

Or do you think there is something fundamentally different about organic material that allows for "ego," "fragmented consciousness," or "qualia" to emerge purely in brains for self-reflection or a first-person perspective?

I'm unsure myself... 😓

Edited by Xonas Pitfall


You're the only ego, because you're a god.

 



@Schizophonia

I agree! That falls more in the absolute domain, though.

I'm asking more from the relative domain perspective. If a caveman and a developing human mind existed in the same period, you wouldn't say, "Hey, undeveloped caveman, you are the only mind, therefore you are a god! There is nothing more complex in the relative sense than you!"

I guess . . .? 🤫🤥🤯


59 minutes ago, Xonas Pitfall said:

@Schizophonia

I agree! That falls more in the absolute domain, though.

I'm asking more from the relative domain perspective. If a caveman and a developing human mind existed in the same period, you wouldn't say, "Hey, undeveloped caveman, you are the only mind, therefore you are a god! There is nothing more complex in the relative sense than you!"

I guess . . .? 🤫🤥🤯

Sorry but I haven't understood lol.



Another perspective to consider is whether we could replace human limbs with artificial substitutes, create virtual reality experiences that simulate nerve endings or bodily sensations, and potentially replace other organic components. 

At what point, or with which elements, would we be unable to replace these biological parts while still preserving consciousness? Is there a specific aspect of organic, carbon-based material that is essential for the deployment of consciousness?

What makes biological systems fundamentally different from synthetic ones in this regard? Most seem to stop at the brain, but why? And which parts? What if we kept the brain intact but lost all nerve endings, senses of pain, and other bodily functions?



@Schizophonia Oh, haha, no worries! I guess my question is basically: which parts of our human body deploy this qualia or 'thread of consciousness' or ego in the relative sense, as it pertains to the human experience?

Do you have any speculations on which parts of the brain might be involved, or what combinations of elements contribute to this?


18 minutes ago, Xonas Pitfall said:

@Schizophonia Oh, haha, no worries! I guess my question is basically: which parts of our human body deploy this qualia or 'thread of consciousness' or ego in the relative sense, as it pertains to the human experience?

Do you have any speculations on which parts of the brain might be involved, or what combinations of elements contribute to this?

Every single nerve cell. 



@Schizophonia Hm, any particular reason why? 😯




The human body is Schrödinger's cat: no one is looking inside it except you. An AI is not; it's designed by observers. God is inside the human, which means anything can happen inside of it. Since the inside of the body is a question mark, anything can happen.

Edited by Hojo

