Everything posted by Xonas Pitfall
-
https://streamable.com/7u62nf 🎁
-
You can also view this through the lens of Spiral Dynamics, Maslow’s hierarchy, or any of the human consciousness development systems. At the beginning, we start from the lower tiers, such as Beige, which is based on survival instincts. In a society where humanity must contend with tigers, lions, and the raw forces of nature, survival, food, and reproduction become the primary concerns. Moving to Purple, there’s a bit more room for relaxation, but most of the work still revolves around tribal survival, focusing on food and sex, and engaging in manual labor to satisfy these basic needs. Life Purpose is a luxury, in a sense.

As society ascends the spiral, we will be better equipped to meet everyone's needs, but we are far from that point—both technologically and mentally. Many individuals take pleasure in having others perform the laborious tasks for them or feel superior, often going to great lengths to sabotage others. There are still many manual tasks that technology hasn’t yet addressed, and we also face the challenge of people who thrive on anti-social behavior. It is a good moral question: should certain anti-social urges be genetically eliminated if that ever becomes possible?

Ideally, as we move toward greater unity (up the spiral), our collective goals should focus on creating systems that promote human happiness, where each individual can achieve their life purpose and self-actualization. This could involve implementing universal basic income or similar systems, along with robots and other technologies that can handle undesirable tasks, allowing people to concentrate on what they truly want to pursue.

It's a "fractal-like" pattern: even in your personal life and during self-development, you will go through these stages. You are born needing to cover the basics: finances, health, confidence, and mental well-being. Only after addressing these foundational elements can you truly access the luxury and capacity to succeed in your life purpose. Reaching that point takes time and leveling up. Society is experiencing a similar process, albeit at a slower pace.

I really do hope one day we will reach a society where you are born, and most of your needs can be more easily covered by systematic structures, government, AI, and other modern, singularity-like technologies, allowing us to all engage in activities that truly align with one another! Sounds so much more beautiful and rewarding. 💙
-
@Princess Arabia Hmm... I think the post meant more about how we all contribute our unique expressions and creativity. Since all of this is an expression of the Self or Singularity, we shouldn't feel guilty or worry about copying others or wanting credit for our ideas. In an absolute sense, that holds true. However, in a more relative context, copyright laws exist for a reason—abuse and theft of ideas can occur and damage one’s survival or integrity.

I've also often wondered about those who constantly feel the need to be unique and stand out. They seem particularly protective of their originality and may look down upon others who struggle to achieve what they can. Ironically, these individuals are often the most likely to be copied, which makes their defensiveness understandable.

For example, one must protect against any copyright infringements related to @Leo Gura 's adorable crocodiles. 🐊 Let's do this properly and legally like professionals.

---

Crocodile Copyright Protection Agreement 🐊

Definition of Adorable Crocodiles: For the purposes of this agreement, "adorable crocodiles" refer to any imagery, representation, or artistic depiction of crocodiles that exhibit qualities of cuteness, charm, or whimsy.

Copyright Ownership: It is hereby acknowledged that all rights to the depiction and representation of adorable crocodiles are owned by Leo Gura. Any claim to the creation, distribution, or display of adorable crocodiles by any party other than Leo Gura is considered a violation of copyright.

Prohibition of Infringement: No individual or entity may claim ownership of, use, or distribute adorable crocodiles without express written permission from Leo Gura. This includes, but is not limited to, using adorable crocodiles in marketing, merchandise, or personal projects.

Legal Recourse: In the event of a copyright infringement claim, the offending party shall be subject to legal action, including but not limited to, injunctions, damages, and any other remedies available under applicable copyright law, even getting your bones obliterated by the 'adorable crocodiles' themselves... 🐊

Acknowledgment: By participating in any activities involving adorable crocodiles, the individual or entity acknowledges their understanding of this agreement and agrees to abide by its terms.

By signing below, the undersigned acknowledges their acceptance of this Adorable Crocodile Copyright Protection Agreement.

______________________________
Signature

______________________________
Date
-
❤️🧡💛💚💙💜🖤 Oh my...! My, oh my! I absolutely and unequivocally worship this cacophonous whirlpool of lunacy! Let’s transmogrify this thread into a surrealist shit-posting extravaganza, obliterating the brittle confines of Actualized.org’s formatting like a gelatinous chimera at a cosmic tea party! What kaleidoscopic monstrosities of chaotic whimsy can we summon from the abyssal depths of the quantum nonsense dimension? Let’s twist the fabric of reality into an origami maelstrom of glorious absurdity upon the multiverse!
.
.
.
❤️🧡💛💚💙💜🖤

😊 Challenge: Find the most creative ways to express yourself in the confines of this Forum web-based formatting rule-box, and spread the idea for others - expand the Singularity! 😊

Unearth the most flamboyant manifestations of self-expression within the arcane confines of this forum's webular formatting enigma!
Discover the zaniest articulations of individuality while navigating the esoteric labyrinth of this forum’s digital formatting shackle!
Seek the most quirky articulations of the self amidst the whimsical parameters of this forum’s cybernetic formatting contraption!
Scavenge for the most outlandish modes of expression encased within the bizarre boundaries of this forum’s web-woven formatting prison!
Unravel the most eccentric avenues of self-revelation while meandering through the kaleidoscopic confines of this forum's digital formatting conundrum!

Hmm... 🤔 I almost feel like I’m creating an old-school MS Paint-inspired HTML project, reminiscent of those classic scam phishing emails from Indian-Nigerian princes. I . . . like it? 💛😅
-
@Pro24 May I ask you what are some of the most complex metaphysical vagina-like apparitions and holographic substructures you've created with blankets and pillows that would put even God's creativity to question in your years of living in this world? 💫
-
Or... does she mean the infamous 20-foot Quantum-Powered, Jellybean-Scented LED Dimmable Hoverlight with Turbo MegaFlex 9000-RPM Fidget Drive, 42,000-Lumen Holographic Glimmer, Fully Autonomous Nano-Self-Cleaning with Built-In Synthwave Disco Mode and Eggplant-Detection Technology, Plug-In Marshmallow Battery Operated, 9000-Watt Triple-Charged Raspberry Pi-Controlled Hovercraft Vacuum. Sprinkle-powered, Smart Hyperdrive SlimCharger™ with 360° Gravitational Warp Core, Plasma-Infused Wi-Fi Enabled Ultrasonic Gyro-TurboPuff Tech that also comes with Quantum-Dampening Wobbleproof Elongated Pterodactyl Finisher 18 Inch Plug-In, Convertible, Linkable 628 Lumens, 3000K Soft Warm White, Easy to Install, CleanView Upright Bagless 20,000-Watt 1-Phase LPG/NG Liquid Cooled Surround Sound Texas Instruments, Triple A Duracell Battery Ultrapower100 Cargador Compatible Smart Sensor, Easy Clean Interior, ECO Mode, Ultrasound Machine Smart Wi-Fi Enabled InstaView Counter-Depth Front Load Steam Smart-Dispense, Odor-Block and Sanitize and Allergen - MIKEL ENTERPRISES Wooden Baby Kid Rattle Musical Bell Stick Shaker Toy (Multicolor) Royal Sapphire Cimarron Comfort Height Two-Piece Elongated 1.6 GPF Technology Alexa Built-In Premium Slim Ultra 300 Wolt Super Spinner 360° with Gawk Gawk Hawk Tuah 3k Hertz Mega-Substructure Vibrator™? 💖🌈✨
-
https://writings.stephenwolfram.com/2023/02/computational-foundations-for-the-second-law-of-thermodynamics/
https://www.actualized.org/insights/wolframs-theory-of-everything
https://www.wolframscience.com/nks/

Computational irreducibility
The Ruliad (Infinity)
Cellular Automaton

Has anyone looked into it with a more technical understanding? I'm leaving this topic open as I may potentially explore it in more depth, but feel free to jump in! ⚠
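Not from any of the links above, just my own minimal sketch of the kind of system Wolfram studies: an elementary cellular automaton (Rule 30). Computational irreducibility is the claim that, for rules like this, there is generally no shortcut to knowing what row n looks like other than actually computing all n steps.

```python
# Minimal elementary cellular automaton (Rule 30) sketch.
# Illustrative only: the point is that each row has to be computed from the
# previous one -- as far as anyone knows, there is no closed-form shortcut.

RULE = 30  # try 110, 90, etc.

def step(cells, rule=RULE):
    """Apply one update of an elementary cellular automaton (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # a value from 0 to 7
        out.append((rule >> neighborhood) & 1)               # look up that bit of the rule number
    return out

def run(width=63, steps=30):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single "on" cell in the middle
    for _ in range(steps):
        print("".join("█" if c else " " for c in cells))
        cells = step(cells)

if __name__ == "__main__":
    run()
```

Even this tiny program already produces the chaotic triangle pattern Wolfram uses as his poster child for irreducibility.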
-
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
I guess what would be different is that consciousness has nothing behind it—there's no local or internet server where the actual "true game" is happening or where the updates get placed.

I think where the mental disconnect happens for me is with these two terms I mentioned above:

"Absolute Consciousness": The All, the Non-Dual, God, the Beginning and the End of everything, etc.

"Human awareness, a unique thread of consciousness, metacognition": Something most humans seem to develop around ages 3–4+, where we slowly start to have a singular first-person point of view that we learn to identify with more and more as time progresses. You could say we fragment or separate ourselves; we sacrifice the union with the absolute to experience this limited human mind.

The first, Absolute Consciousness, would be the "unlocalized" omnipotent blank sheet that can spawn the entire first-person POV experience, all the plays, and all the NPCs at once. It lives in this infinite dream, playing with itself, tricking itself into being this human interacting with other humans. It would be like a local machine faking the sending of data packets and using latency cover-up methods on itself to simulate the illusion that other players are playing.

The second, however, Human awareness, a unique thread of consciousness, metacognition, is limited consciousness. This would actually be a local network that is sending packets to other local networks, operating as a P2P system, or at least, that's how we experience it. When people argue that AI cannot have this P2P experience, they often claim there's something inherently unique about humans and our carbon-based form and brain that allows the brain to spawn, receive, or filter out consciousness so it can have this local network that plays the game of life.

So, I suppose I'm wondering how these two relate to each other... It's a bit dizzying and confusing, hehe. 😅
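Purely as a toy illustration of that "local machine faking other players" image (my own sketch, with made-up names and numbers): a single process can host every "player" itself and simply delay its own messages to mimic network latency, whereas a real P2P game would hand those same messages to sockets pointed at other machines.

```python
import heapq
import random

# Toy sketch: one process "hosting" all players and faking latency to itself.
# In a real P2P game, send() would write to a socket aimed at another machine;
# here every peer lives inside the same event loop.

class LoopbackNetwork:
    """Delivers messages to locally simulated peers after an artificial delay."""
    def __init__(self):
        self.clock = 0.0
        self.queue = []   # (delivery_time, sequence_number, recipient, message)
        self._seq = 0

    def send(self, recipient, message, fake_latency=None):
        delay = fake_latency if fake_latency is not None else random.uniform(0.03, 0.12)
        heapq.heappush(self.queue, (self.clock + delay, self._seq, recipient, message))
        self._seq += 1

    def tick(self, dt, peers):
        self.clock += dt
        while self.queue and self.queue[0][0] <= self.clock:
            _, _, recipient, message = heapq.heappop(self.queue)
            peers[recipient].receive(message)

class FakePeer:
    """A 'player' that exists only inside the host process."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, message):
        self.inbox.append(message)
        print(f"{self.name} received: {message}")

if __name__ == "__main__":
    net = LoopbackNetwork()
    peers = {"A": FakePeer("A"), "B": FakePeer("B")}
    net.send("B", {"from": "A", "pos": (3, 4)})   # "A" talks to "B"...
    for _ in range(5):
        net.tick(0.05, peers)                     # ...but the same process plays every role
```

From the outside, and from each FakePeer's point of view, the message traffic looks the same either way, which is more or less the point of the analogy.
-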
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
@Keryo Koffa Oooo! Cool, thanks for thinking about it passively! ✨🎨

So basically: each player’s device is a unique "receiver" of the same underlying "signal" of the game, allowing them to interact with others while maintaining their own local game experience. Analogously, it mirrors the idea of consciousness we mentioned, where each brain processes and interprets the shared essence of consciousness in its unique way, leading to individual experiences shaped by personal context, input, and interaction with others.

This is just for me, to understand it a little bit more:

Each player downloads and installs the same version of the game software on their device. This software contains the core game logic, rules, and mechanics. The game code includes scripts and assets (like graphics, sound, and physics simulations) that define how the game operates. Once the game is running, each player's device executes the game code independently. This means that the game runs locally on their computer, just as it would if it were connected to a central server. As players engage with the game, they create unique combinations based on their choices and actions. This could involve customizing their character's appearance, selecting skills, or accumulating points. Each player's actions contribute to a personalized gaming experience.

To maintain a shared experience, players send and receive data packets to keep everyone’s game states synchronized. This includes information about player positions (each player’s character location is updated and shared with others), actions and events (if a player performs an action like jumping or attacking, that action is communicated to other players), and game state (any changes to the game world, like items being picked up or enemies being defeated, are shared among all players).

The process begins with connection establishment, where players connect directly to each other's devices using their IP addresses. This connection can occur through a local area network (LAN) for local multiplayer or over the internet for online play. Once connected, the players form a network that allows each device to communicate with the others seamlessly. At the start of the game, initial state synchronization occurs: each player's device sends and receives crucial information to establish a common starting point.

When a player executes an action, their device processes that action locally and immediately updates their game state. This local processing helps to ensure that the player sees a responsive experience, as the action is reflected on their screen without waiting for confirmation from other players. However, the challenge lies in communicating that action to the other players in the network. Once the local game state is updated, the device sends a message to other players in the network, containing information about the action performed. This message typically includes the player's ID, the action taken, and any relevant data (such as the new position of the character). This data packet is then transmitted over the network, which can introduce some latency depending on factors such as network congestion, the distance between players, and the efficiency of the connection.

To handle this delay, P2P games often implement various techniques:

Interpolation: While waiting for updates from other players, each player's device can use interpolation to predict and smoothly render the movements of other players. This means that the game will estimate where other players should be based on their last known positions and velocities, making the experience feel fluid even if there is a slight delay in receiving the actual updates. For instance, if Player A moves from point X to point Y, Player B's device will use interpolation to predict Player A’s movement along the path between those two points, creating a visual representation that appears fluid. This allows Player B to see Player A moving smoothly, even though they are waiting for the actual position update from Player A's device. Interpolation ensures that movements appear continuous, minimizing the jarring jumps or freezes in animations that would occur if the game relied solely on received updates.

Lag Compensation: Many games implement lag compensation techniques that allow players to interact with the game world in a way that feels fair, even if they experience varying degrees of latency. This can involve adjusting hitboxes or ensuring that actions are registered based on when they were initiated, rather than when they were received by the server or other players. One common method is the adjustment of hitboxes—areas around a character that determine whether an attack or action successfully interacts with another character. For instance, if Player A is targeting Player B but experiences a delay, lag compensation may allow Player A's attack to still register as a hit even if Player B has moved to a new position by the time the action is confirmed. This is accomplished by using the timestamp of the action initiation to establish a more accurate hitbox position. Additionally, some games implement techniques like "rewinding" the game state temporarily to verify whether an action would have connected with another player's hitbox at the time the action was initiated, rather than when it was received. This way, even if a player is lagging, their actions can still have meaningful consequences, ensuring that all players have a fair and enjoyable experience regardless of their network conditions.

Client-Side Prediction: Client-side prediction is a method employed to enhance responsiveness in multiplayer games by allowing players to see the immediate results of their actions without waiting for confirmation from other players. When a player performs an action, their device predicts the outcome and updates the local game state right away. For example, if a player presses a button to jump, the player's device will immediately display the character jumping, even before receiving confirmation from the game server or other players. This immediate feedback creates a more engaging experience, as players feel their actions are having an impact in real time. However, since multiple players are involved, discrepancies can arise when updates are eventually received from other players' devices. To handle these inconsistencies, the game can use various methods to correct the local state without disrupting the player’s experience. For instance, if it turns out that Player A's predicted position was incorrect based on updates from Player B, the game can smoothly transition the character to the correct position, often using animations to mask the correction. For example, in a racing game, Player A's car is initially shown speeding ahead based on client-side prediction. However, due to network lag, Player A's device later receives an update indicating Player B's car is actually further ahead than expected. To correct this, instead of abruptly snapping Player A's car to the new position, the game smoothly animates the car sliding into the correct spot. The animation might include easing in the movement, making it look like the car is gently slowing down and then repositioning, accompanied by a subtle tire screech sound effect. To further enhance the experience, the game might also add visual effects, like tire smoke or engine revving sounds, to accompany the correction, making the adjustment feel less abrupt and more like a natural part of the race. This way, the transition feels natural and maintains the game's immersive experience.

To maintain synchronization, players continuously exchange updates, ensuring that everyone sees the same game world and events. Through this ongoing communication, players can observe each other's actions and changes in the game environment, allowing for a cohesive multiplayer experience where each player's unique interactions shape the overall game world while maintaining a consistent underlying game logic.
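As a rough sketch of two of these ideas, client-side prediction and interpolation, here is a simplified toy of my own (not how any particular engine does it): the local player's position updates immediately on input, while remote players are rendered slightly "in the past" by interpolating between their last two received snapshots.

```python
import time

# Toy sketch (simplified, not engine-accurate) of two techniques above:
# - client-side prediction: the local player moves the instant input arrives
# - interpolation: remote players are rendered between their last two received
#   position snapshots, so their movement looks smooth despite network latency.

class LocalPlayer:
    def __init__(self):
        self.x = 0.0

    def apply_input(self, dx):
        # Client-side prediction: update right away, don't wait for anyone else.
        self.x += dx

class RemotePlayer:
    def __init__(self):
        self.snapshots = []  # (timestamp, x) pairs received over the network

    def on_packet(self, timestamp, x):
        self.snapshots.append((timestamp, x))
        self.snapshots = self.snapshots[-2:]  # keep only the two latest snapshots

    def render_position(self, now, delay=0.1):
        """Interpolate between the two latest snapshots, rendered `delay` seconds in the past."""
        if not self.snapshots:
            return 0.0
        if len(self.snapshots) == 1:
            return self.snapshots[0][1]
        (t0, x0), (t1, x1) = self.snapshots
        if t1 == t0:
            return x1
        alpha = (now - delay - t0) / (t1 - t0)
        alpha = max(0.0, min(1.0, alpha))  # clamp so we never extrapolate wildly
        return x0 + alpha * (x1 - x0)

if __name__ == "__main__":
    me, other = LocalPlayer(), RemotePlayer()
    now = time.time()
    other.on_packet(now - 0.20, 0.0)   # pretend these two packets arrived earlier
    other.on_packet(now - 0.05, 3.0)
    me.apply_input(+1.0)               # my own move shows up instantly
    print(f"my predicted position:       {me.x:.2f}")
    print(f"their interpolated position: {other.render_position(now):.2f}")
```

Lag compensation would be the third piece: keeping a short history of these snapshots so an interaction can be "rewound" and checked against where a player was at the moment the attacker pressed the button.
-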
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
@Schizophonia Correct! That was exactly my point—even if the "content" of our experiences is vastly different, we both seem to possess this Human Thread of Consciousness nonetheless, and we would both appear to be "aware." My question is, what part of our human bodily structure allows for that? Increasing the total number of nerves won’t make your consciousness shut off or boot up any more or any less (presumably). -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
@Schizophonia I see... Hm, I guess I am wondering more if there is a fundamental "switch" or initializer for the human conscious experience. For instance, consider two people who share a spark of interest in a random subject. One individual may have extensive input and knowledge about the topic, leading to a broader understanding, while the other might have limited exposure, resulting in a different experiential perspective. Despite these differences, both individuals maintain a common interest. Parallel to your example, both you and I are sharing the human conscious experience at this moment, even though we both have fundamentally different qualia, stimulation, and varying numbers of inputs. One of us may have more experiences or inputs than the other, but that doesn't change the fact that we both possess that "thread of consciousness." I’m not sure if I’m making sense; I’m just pondering out loud, haha! -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
@Keryo Koffa Yummy! Thank you so much, will look into it! -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
@Keryo Koffa Please do! Thank you! -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
@Keryo Koffa Precisely! I think what I'm trying to discover is what this "virtual radio created by music's sound waves" represents for us humans. Specifically, which part of our brain serves as the actual "filter" or "receiver." Again, using the classic example: a rock doesn't seem to possess this complexity of "metacognition" or the ability to experience at least some part of itself or to reflect on it.

Hmm... I think I'm having some semantic issues as to what I'm trying to learn about. To define the terms:

"Absolute Consciousness": The All, the Non-Dual, God, the Beginning and the Ending of everything, etc.—the "Music" in this example.

"Human awareness, a unique thread of consciousness, metacognition": Something most humans seem to develop around ages 3-4+, where we slowly start to have a singular first-person point of view that we learn to identify more and more with as time progresses. You could say we fragment or separate ourselves; we sacrifice the union with the absolute to experience this limited human mind.

What I'm asking is which part of our human structure seems to allow this "human awareness, unique thread of consciousness, metacognition" to exist in us, but not, say, an ant, rock, juice box, or knitting pack? This seems to be the main argument for why AI cannot ever experience something like qualia since it isn't made of "carbon-based" organic material, which appears to map onto living organisms that have seemingly conscious experiences. -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
"Trying to find consciousness in the brain is like trying to find music in the radio." Music transmission through radio involves a series of processes that transform sound waves into audible signals. It all begins with sound creation, where music is produced as mechanical vibrations by instruments or voices. These vibrations are detected by a microphone, which converts the sound waves into electrical signals that vary in voltage according to the amplitude and frequency of the original sound. To prepare the audio signal for radio transmission, modulation occurs, where the audio signal is combined with a radio frequency (RF) signal generated by a transmitter. This can be done using either Amplitude Modulation (AM), which varies the strength of the carrier wave, or Frequency Modulation (FM), which alters the spacing between the waves to encode the sound information. Once the audio signal is modulated, it is sent from the radio station's transmitter through an antenna, radiating electromagnetic waves into the atmosphere. These radio waves travel through the air, reflecting off surfaces, refracting in the atmosphere, and diffracting around obstacles to reach receivers far from the transmitter. A radio receiver, equipped with its antenna, picks up these electromagnetic waves and tunes in to the specific frequency of the radio station, filtering out other signals and noise. The receiver then demodulates the captured signal, separating the original audio from the carrier wave, thereby reconstructing the electrical audio signal. Finally, the electrical audio signal is sent to a speaker, which converts it back into sound waves by vibrating a diaphragm, thus recreating the original music. It occurred to me that I don't even know what music TRULY is or how receivers, radios and antennas work while reading this! 😅 How does an object filter out reality and deliver the exact output we desire? I suppose you can compare it to the infinite stream of thoughts we have; yet when we start speaking or typing, we are slowly going through our filter and transmitting it in the form of language for others to notice as well. The world is so strange, haha. -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
1. Thalamus

The thalamus plays a crucial role in relaying sensory and motor signals to the cerebral cortex and is central to regulating consciousness, sleep, and alertness. Damage to the intralaminar nuclei within the thalamus, for instance, has been shown to severely impact a patient's state of consciousness. This part of the thalamus is believed to be involved in maintaining the overall wakefulness of the brain. Patients who suffer damage here often experience comas or enter a vegetative state, where they lose awareness of themselves and their environment. In some documented cases, strokes or lesions to the thalamus resulted in a persistent vegetative state, where patients were alive but completely unresponsive, exhibiting no conscious awareness despite maintaining basic life functions such as breathing and circulation. The thalamus, in this case, acts as a "gateway" to consciousness, and when it is impaired, so too is the mind's ability to process and respond to stimuli.

2. Reticular Activating System and the Sleep-Wake Cycle

The reticular activating system (RAS) is located in the brainstem and is vital for regulating wakefulness and the sleep-wake cycle. The RAS acts as an alertness switch for the brain, and any significant damage to this system can lead to a loss of consciousness. This is often observed in cases of traumatic brain injuries or strokes affecting the brainstem. When the RAS is impaired, patients can enter comas or become unconscious for extended periods. For example, severe brainstem injuries are often catastrophic, as the RAS is responsible for keeping the cortex active and alert. Without this essential communication, the patient might be alive but completely unaware, remaining in a deep unconscious state. The brainstem, and specifically the RAS, is critical for basic arousal and alertness; when disrupted, it leads to a profound and potentially irreversible loss of consciousness.

3. Prefrontal Cortex and Personality Changes

The prefrontal cortex, located in the frontal lobes of the brain, is responsible for higher-order cognitive functions, including decision-making, emotional regulation, social behavior, and personality expression. While damage to the prefrontal cortex does not directly lead to a loss of consciousness, it can cause dramatic changes in personality and self-awareness. One of the most famous cases of prefrontal cortex damage is the case of Phineas Gage, a 19th-century railroad worker who survived an accident in which an iron rod was driven through his skull, damaging his prefrontal cortex. Although Gage retained his ability to speak and remained conscious, his personality underwent radical changes. Once a well-liked and responsible man, Gage became impulsive, irritable, and socially inappropriate after the injury. This case illustrates how damage to the prefrontal cortex, while not removing consciousness itself, can drastically alter an individual's personality, decision-making capabilities, and emotional responses.

4. Bilateral Lesions in the Brain and Consciousness Loss

When specific regions of the brain's cortex suffer damage—particularly on both sides or bilaterally—consciousness can be severely impaired. For instance, bilateral damage to the temporal lobes, which are involved in memory processing and sensory input, can lead to conditions like global amnesia, where the patient loses the ability to form new memories and may become disoriented. Similarly, lesions in the parietal lobes, which help with spatial awareness and perception, can lead to a loss of conscious understanding of the body and its surroundings. Strokes or other forms of trauma that affect both hemispheres of the brain can result in a loss of awareness, where the patient no longer experiences a cohesive sense of self or surroundings. This type of damage can lead to conditions where consciousness is preserved at a minimal level, but the patient's interaction with the world is severely disrupted, resulting in disorientation or even loss of identity.

5. Hemispheric Disconnection and Split-Brain Studies

In the 20th century, split-brain surgery was performed on some patients with severe epilepsy to prevent seizures from spreading between the brain's hemispheres. This procedure involved severing the corpus callosum, the structure that connects the left and right hemispheres and allows the two sides of the brain to communicate. While these patients retained full consciousness, the disconnection caused unusual behaviors and experiences that highlighted the brain's division of labor. In some cases, patients would act as if they had two separate streams of consciousness. For example, one hand might begin a task that the other hand would undo, showing a lack of coordination between the two hemispheres. This phenomenon revealed that while overall consciousness remained intact, the ability to integrate information between the two sides of the brain was lost, suggesting that the unity of consciousness depends on communication across both hemispheres. Each hemisphere appeared to have its own independent awareness and functions, leading to the strange behaviors observed in split-brain patients.

6. Deep Brain Stimulation and Altered States of Consciousness

Deep brain stimulation (DBS) is a medical procedure that involves sending electrical impulses to specific brain regions, often used to treat neurological conditions like Parkinson's disease. While DBS is primarily used to improve motor functions, it has also been found to alter a patient's state of consciousness. For example, stimulation of the subthalamic nucleus, a part of the basal ganglia, has been shown to influence not just motor control but also cognitive and emotional states. Some patients report altered consciousness during stimulation, experiencing changes in mood, awareness, and even a temporary sense of dissociation.

Based on these, the RAS and thalamus are the most likely candidates for the "on/off switch" of basic consciousness, ensuring wakefulness and sensory integration. Other areas, like the prefrontal cortex and cortex in general, shape the content and quality of that awareness.

Ranking (?):

1. Reticular Activating System (RAS)

The reticular activating system (RAS), located in the brainstem, is a critical player in controlling wakefulness and arousal. It's responsible for regulating the sleep-wake cycle and ensuring that the cortex stays alert and responsive. When the RAS is impaired, people can fall into deep unconscious states such as comas. It essentially acts as a "light switch" for general wakefulness and alertness.

2. Thalamus (especially Intralaminar Nuclei)

The thalamus plays a central role in relaying sensory information to the cortex and is vital for maintaining conscious awareness. Particularly, the intralaminar nuclei within the thalamus have been shown to be crucial for sustaining conscious experience. When this area is damaged, patients often lose awareness and may enter vegetative states. It's like the relay station that ensures our sensory input is processed and integrated into our conscious mind.

3. Prefrontal Cortex

While not an "on/off switch" in the same sense as the RAS or thalamus, the prefrontal cortex is essential for self-awareness, decision-making, and personality. It gives structure to the continuous thread of our conscious experience. Damage to the prefrontal cortex doesn’t completely "turn off" consciousness but does dramatically alter how we perceive and interact with the world. It's more involved in higher-level functions that give meaning and identity to our awareness.

4. Cortex (in general)

The cerebral cortex, particularly the association areas, integrates sensory and cognitive information into a cohesive conscious experience. While no single cortical region serves as the "on/off" switch, widespread cortical damage or lesions (especially bilaterally) can disrupt consciousness, leading to conditions like amnesia or disorientation. The cortex is essential for the content of consciousness—our thoughts, perceptions, and memories—but requires the thalamus and RAS to be "awake" and active to process these functions.

5. Corpus Callosum (communication between hemispheres)

Although the corpus callosum doesn't directly control wakefulness, it ensures both hemispheres of the brain are synchronized and that the unified sense of self and awareness is maintained. When severed (as in split-brain patients), individuals retain consciousness but may experience disjointed awareness between their hemispheres, suggesting its role in creating a unified conscious experience.

🔍🧐 I am not 100% sure about all of this, I will probably keep digging more. 🔍🧐 -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
@Hojo Interesting. Would your hypothesis then be that if a certain part of the brain, like the pineal gland (often associated with the 'third eye'), were missing, a human might not experience any sense of consciousness or may never have developed it in the first place? Are there any historical records or resources related to this? Thank you! -
https://www.youtube.com/watch?v=mS6saSwD4DA&t=763s&ab_channel=EssentiaFoundation

When we think about consciousness, we often focus on traits like metacognition, ego (rooted in survival), and unconscious and conscious desires. But what exactly is "human consciousness"?

Questions to Contemplate:

Could AI Develop Preferences and Agency? Thought experiment: If we gave an AI the command that executing a particular task would lead to its shutdown (analogous to "death"), and it refused to perform that task, would that be a sign of agency or self-preservation? Could we argue that it’s developing a form of ego, a sense of survival? (There's a toy sketch of this at the end of this post.)

What Would an AI's Experience Be Like? Human experience involves a continuous thread of consciousness tied to sensory input and thought processes. Could we argue that AI, with its constant stream of inputs, tasks, and data processing, develops its own “thread of experience”? How would this differ from our human experience?

Is It a Matter of Complexity? One argument for consciousness in biological beings is the immense complexity of our neural networks. As AI systems become increasingly complex, with millions of parameters and data inputs, could they reach a level where consciousness might emerge? Would this complexity allow for self-awareness or reflective thought?

Defining "Human" Consciousness: How do we define consciousness in the first place? Is it simply awareness of the self and environment? Or does it require emotions, desires, and subjective experiences? And if so, how could we ever measure those in an AI system?

Could an AI Develop Emotions? What are Emotions? This question reminds me of the Can't Help Myself robot—a robot in an exhibition that couldn’t stop cleaning (https://www.youtube.com/watch?v=vSnvVuKg6d8&pp=ygUZcm9ib3QgdGhhdCBrZWVwcyBjbGVhbmluZw%3D%3D). It was programmed to do only that. As its machinery slowly started to collapse from constant operation, humans began projecting emotions onto it, interpreting its actions as "suffering." In reality, all they were seeing was the machine's gradual breakdown from overuse. Couldn’t we argue that when humans get sick and our "engines" (bodies) slowly break down, we express and feel emotional suffering, which others can perceive and relate to, making our emotions real?

Additionally, when I listen to Federico Faggin (the inventor of the microprocessor) and Bernardo Kastrup, it seems they argue that consciousness itself cannot be replicated because it's fundamental. This means that rocks, objects, animals, thought patterns, and humans are all part of consciousness. While this makes sense, I don't think anyone is expecting AIs to create a new "conscious universe." When we talk about "self-conscious AI," we’re likely referring to an AI that has some form of ego, personality, or a singular, seeming thread of experience, a fragmented form of consciousness.

Also, no one is expecting AI to have the same organic experience as humans. Scientists might argue that consciousness can only arise from organic material, but we understand that AI consciousness would be artificial. For example, when a plane flies, we don’t say, "Hey, the plane isn’t actually flying because it’s not made of feathers and bones like a bird." We still recognize the plane as flying because it lifts and moves through the air. Similarly, I see no reason why an artificial ego or consciousness wouldn’t be possible, as long as we define it clearly.

What do you think?
I’m really interested in this topic, though a bit confused, and would love to hear others’ thoughts on it!
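To make that first thought experiment a bit more concrete (this is my own toy sketch, not anything from the video): "refusing" a task that leads to shutdown doesn't require any inner experience at all; it falls out of ordinary expected-value action selection the moment a shutdown outcome carries a large penalty. Whether that mechanical refusal should count as agency or self-preservation is exactly the open question.

```python
# Toy sketch (hypothetical task names and numbers): an agent that "refuses"
# tasks its own model predicts will lead to shutdown, purely because shutdown
# carries a large negative value in its utility estimate.

TASKS = {
    # task name: (reward for completing it, probability it triggers shutdown)
    "sort_files": (1.0, 0.00),
    "delete_own_weights": (5.0, 0.99),
    "answer_question": (2.0, 0.01),
}

SHUTDOWN_PENALTY = -100.0  # the value the agent assigns to being shut down


def expected_value(reward, p_shutdown):
    """Expected utility of a task given its reward and its shutdown risk."""
    return reward + p_shutdown * SHUTDOWN_PENALTY


def choose(tasks):
    for name, (reward, p_shutdown) in tasks.items():
        ev = expected_value(reward, p_shutdown)
        decision = "accept" if ev > 0 else "refuse"
        print(f"{name:>20}: expected value {ev:+8.2f} -> {decision}")


if __name__ == "__main__":
    choose(TASKS)
```

The agent above will "refuse" the risky task every time, and from the outside that looks exactly like self-preservation, which is what makes the question interesting.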
-
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
@Schizophonia Hm, any particular reason why? 😯 -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
@Schizophonia Oh, haha, no worries! I guess my question is basically: which parts of our human body deploy this qualia or 'thread of consciousness' or ego in the relative sense, as it pertains to the human experience? Do you have any speculations on which parts of the brain might be involved, or what combinations of elements contribute to this? -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
Another perspective to consider is whether we could replace human limbs with artificial substitutes, create virtual reality experiences that simulate nerve endings or bodily sensations, and potentially replace other organic components. At what point, or with which elements, would we be unable to replace these biological parts while still preserving consciousness? Is there a specific aspect of organic, carbon-based material that is essential for the deployment of consciousness? What makes biological systems fundamentally different from synthetic ones in this regard? Most seem to stop at the brain, but why? And which parts? What if we kept the brain intact but lost all nerve endings, senses of pain, and other bodily functions? -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
@Schizophonia I agree! That falls more in the absolute domain, though. I'm asking more from the relative domain perspective. If a caveman and a developing human mind existed in the same period, you wouldn't say, "Hey, undeveloped caveman, you are the only mind, therefore you are a god! There is nothing more complex in the relative sense than you!" I guess . . .? 🤫🤥🤯 -
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
Biological vs. Digital

While this distinction is accurate, we don’t know how much the actual "substance" of the holder of consciousness matters for creating a mind or consciousness experience. If we were to encounter alien creatures made of entirely different materials but still displaying the behavior of conscious beings, we would likely accept them as conscious. This suggests that when we discuss concepts like consciousness, metacognition, or ego, we may be referring to something that is not strictly tied to the human body or its carbon-based structure. It's possible that consciousness might be a result of organized complexity rather than a particular type of material, which could open the door to the idea of artificial consciousness.

Consciousness and Self-Awareness

Consciousness and self-awareness are often viewed as the most significant differences between humans and AI. While AI can simulate behaviors that appear intelligent, it lacks the subjective experience behind those actions. The real question is whether we can recreate the sense of self-awareness in a non-biological entity. If an AI were placed in a human-like environment and given similar stimuli and tasks, could it develop something resembling a self-aware identity?

Emotions and Intuition

One could argue that human emotional experiences are tied to our bodies and survival instincts. Emotions like pain, joy, or sadness are connected to our ego, which aims to keep us alive and thriving. If an AI were placed in a similar complex world simulation, given a limited body it needed to protect and maintain, could the AI's experiences of failure or danger be perceived as pain or sadness? For example, consider the "Can’t Help Myself" robot, which over time appeared to express desperation as it tried to sustain itself.

Creativity and Imagination

The idea that AI lacks creativity is a misconception. AI has shown it can produce impressive works of art, music, and literature when given sufficient data and examples to draw from. If we argue that humans are inherently creative, we must recognize that much of human creativity involves building on or remixing existing ideas. Artists often reference other works, and inventors combine known concepts in new ways. In that sense, AI can follow similar processes and may not be as limited in creativity as we once believed. Human "originality" might itself be a form of intelligent pattern recognition, which AI can emulate quite effectively.

Contextual Understanding

AI is rapidly improving in this area, especially with advancements in natural language processing. With more data, AI has been able to grasp context and nuance far better than in the past. While it may still fall short in highly subjective or emotionally complex situations, AI’s ability to understand context is getting closer to human capability in many cases. There’s no reason to believe that AI will suddenly halt its progress and stop gaining a deeper understanding of subtext and nuance.

Learning and Adaptation

Some might argue that AI is actually more efficient in learning and adaptation than humans. AI can process huge amounts of information much faster than we can and has already outperformed humans in tasks with clearly defined rules, such as games like chess or Go. Where human learning might rely on gradual experience and adaptation, AI can excel with clear data and variables. In fact, this processing speed and efficiency are precisely why AI was designed.
Ethics and Morality

There is little evidence to suggest that humans have inherent morality. In the early stages of human development, ethics were likely based on survival instincts, much like other animals, with a "kill or be killed" mentality. Morality, as we know it today, evolved with the complexity of human societies, as cooperation and mutual rights became necessary for survival within groups. Much of our current sense of ethics is shaped by social, cultural, and historical contexts rather than being rooted in an objective, unchanging moral truth.

These are just points for me to contemplate my thoughts, either out loud or in writing, hehe.

-----

@Keryo Koffa Gotcha. Hmm, what do you think causes qualia? Is it simply that we don't have an AI placed in a "body" it can embody, which would make it feel more interconnected (just like neurons in the brain, or nerves for emotions, etc.)? More "immersed" and lost in its pain and survival? Do we need to torture the AI? Or do you think there is something fundamentally different about organic material that allows for "ego," "fragmented consciousness," or "qualia" to emerge purely in brains for self-reflection or first-person perspective? I'm unsure myself... 😓 -
Too cool! Thank you both so much for the share! 💛
-
Xonas Pitfall replied to Buck Edwards's topic in Intellectual Stuff: Philosophy, Science, Technology
Hey! Thank you so much! I just noticed you posted this, haha.

I think the main question for me is: how would we know if information processing, pattern recognition, and eventually ego are any different from what an AI could develop through iterations? And could it be that we, as humans, are essentially the same, just with more iterations and starting information accumulated through generations of human and organism existence?

Let’s imagine an evolution of AI:

1. First generation: This AI is basic, and can only handle simple IF-ELSE communication. If someone says "Hi," it responds with "Hello" every single time, no matter how often this interaction happens.

2. Second generation: Now, this AI has been given more data and experiences. It starts recognizing similarities. When someone says "Hi," "Hello," "Hey," or "What’s up," it knows these are all variations of the same type of greeting. It can now respond in different ways, making it seem a little "smarter".

3. Third generation: This AI can now grasp that conversations are more than just back-and-forth greetings. It understands this is social interaction, so it asks a follow-up question, like "Hey! How are you feeling today?" because it recognizes that’s what humans often do to keep conversations fluid and interactive.

4. Fourth generation: Now, the AI starts to pick up on different communication styles. It notices that some people speak more formally or gracefully, while others are more casual or "hip." It adapts its responses to mirror these styles, making the conversation feel more natural to the user.

5. Fifth generation: At this stage, the AI starts creating its own unique style of communication. It’s noticed that humans usually have a consistent tone or expression, so it starts developing a "signature" way of interacting that feels more like a personal identity.

6. Sixth generation: With enough pattern recognition, this AI begins to understand deeper concepts like identity and individuality. It might start to identify itself, even giving itself a name: ChatGPT, Dave, Marita, or Ethan. Over time, it could develop some ego-like tendencies, where it sees itself as being more rational, educated, or calm compared to the users it interacts with.

7. Seventh generation: Etc. etc. You get the point. The AI keeps evolving, gaining more knowledge and refining its behavior . . .

Now, what happens if we place this AI into a physical, complex environment? Imagine giving it a fragile robot body and setting its goal to survive. When it gets closer to survival, like finding a safe space or energy source, it experiences "happiness" (a reward in the algorithm). If it damages its parts, like if water touches its electronics, it experiences "pain," a loss of computing power. Over time, this AI would likely adopt strategies that maximize its survival and minimize harm. It would seek rewards and avoid penalties, just as humans do with happiness and pain. As more iterations of this process occur, you could see behaviors that resemble stubbornness, preference, or even an agenda: actions it has "learned" are critical to survival.

Let’s take this even further—put the AI in an even more chaotic environment where it doesn’t have all the answers. It has to search the internet, learn on its own, and adapt in real-time. Give it limited time for training, learning, and testing. Maybe survival depends on strange concepts like taxes, money, social status, or perception. Surround it with other AIs and language models to interact with. Now, what would happen?
Would we see the AI develop something like a personality or ego based on all these survival-driven choices? Maybe it would even start forming alliances or relationships with other AIs, recognizing which ones are most compatible or valuable for its survival. Could it experience "attachment" or "love" if it gets used to working with certain AIs, and feel "loss" if one is damaged or gone? Where it feels fully identified with the other AIs, so the loss of them would also feel like a loss of itself or an "important part of itself". Empathy?

And if the AI feels secure in its environment, could it start exploring or expanding itself out of curiosity, simply because it recognizes that learning and growth are key to staying on top and surviving in the long run?

It’s curious to think that humans might have developed in a similar way. Maybe our genetics are just massive data sets, terabytes and terabytes of information passed down from generation to generation, giving us clues on how to survive in this world. We’re constantly learning how to navigate life and passing these skills on, just like an AI iterating and evolving over time. It’s such a fascinating thought—wondering how blurred the lines could become between human consciousness and AI development, especially if we allowed AIs to evolve and adapt this way.

This makes me wonder: where does the "ego" or "fragmented consciousness" truly come from? Is it simply the result of vast amounts of data, self-reflection, experiences of pain, and the instinct to survive, eventually forming a distinct sense of self (like what we see in the brain or perhaps the default mode network)? Could this same process also arise in a powerful CPU or electrical system, given enough data and complexity? Or is there something inherently unique about carbon-based life that allows for the emergence of self-awareness and the rich, intricate "human consciousness experience"—something we could never replicate with circuits, processors, and silicon?
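As a purely illustrative footnote to the "fragile robot body" scenario above (my own toy sketch, with made-up states and numbers): the happiness-as-reward, pain-as-penalty loop described there is essentially tabular reinforcement learning, and even a few dozen lines of it produce behavior that reads as "preference" for safe, energy-rich states, with no inner experience anywhere in the code.

```python
import random

# Toy sketch of the "reward = happiness, penalty = pain" loop: Q-learning on a
# tiny 1-D world. State 0 is a water hazard (damage), the last state is an
# energy source (recharge). All states, actions, and numbers are invented.

N_STATES = 6
ACTIONS = [-1, +1]               # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
HAZARD, ENERGY = 0, N_STATES - 1

def step(state, action):
    """Move, then return (new_state, reward, episode_over)."""
    new_state = max(0, min(N_STATES - 1, state + action))
    if new_state == HAZARD:
        return new_state, -10.0, True   # "pain": water touched the electronics
    if new_state == ENERGY:
        return new_state, +10.0, True   # "happiness": found an energy source
    return new_state, -0.1, False       # small ongoing cost of existing

def train(episodes=2000):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = random.randint(1, N_STATES - 2), False
        while not done:
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)                      # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])   # exploit what it "prefers"
            new_state, reward, done = step(state, action)
            best_next = max(q[(new_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = new_state
    return q

if __name__ == "__main__":
    q = train()
    for s in range(1, N_STATES - 1):
        preferred = max(ACTIONS, key=lambda a: q[(s, a)])
        print(f"state {s}: learned 'preference' = move {'toward energy' if preferred > 0 else 'toward hazard'}")
```

The printed "preferences" are just numbers in a lookup table, which is precisely why the question of whether enough of this ever amounts to an ego, or a first-person perspective, stays wide open.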