Are Humans Just Prediction Machines?

Neuraldeep.Net
7 min read · Nov 7, 2023

How our brains process information shares similarities with AI assistants

As artificial intelligence capabilities continue advancing, some researchers have begun drawing parallels between human cognition and the mechanisms powering sophisticated language models. While we like to view ourselves as creative, rational thinkers, new insights suggest our brains may function through predictive associations in fundamentally comparable ways.

Like predictive AI assistants that analyze vast language datasets to generate fluent text, the human brain is constantly making connections between ideas, concepts, and memories based on our lifetime of accumulated experiences. Our neurons are wired together in complex networks based on what we’ve observed to be commonly related or sequentially linked. This predictive machinery guides much of our unconscious and subconscious thought, operating outside our direct control.
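To make the analogy concrete, here is a deliberately tiny sketch in Python — a toy bigram counter, not how any real assistant (or brain) actually works — showing how nothing more than accumulated co-occurrence counts yields a “prediction” of what should come next.

```python
from collections import Counter, defaultdict

# "Experience" here is nothing more than counts of which word followed which.
corpus = "the brain predicts the next word the brain links ideas".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed successor of `word`, if any."""
    successors = following.get(word)
    return successors.most_common(1)[0][0] if successors else None

print(predict_next("the"))    # 'brain' -- seen twice, so it wins
print(predict_next("brain"))  # 'predicts' -- ties are broken by first occurrence
```

Real language models replace these raw counts with billions of learned parameters, and brains replace them with synaptic weights, but the underlying move is the same: predict the likeliest continuation given what has been seen before.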

When we encounter new information, our brains don’t approach it with a blank slate — instead, existing neural connections fire off related predictions about what should come next or be inferred. This shapes our perceptions, biases, and intuitions before we’re even aware of it. It allows for rapid unconscious decisions but can also lead us astray through preconceptions. Just as predictive AI models are limited by the data they were trained on, we are bound by our individual experiences.

Some studies have found our thought patterns often follow predictable trajectories based on common linguistic and contextual associations hardwired through lifetime exposure. We jump from one related concept to another through automatic predictive links between mental representations organized over decades. This calls into question how much of our cognition is truly rational deliberation versus habitual predictive associations — bringing our processes closer to algorithms guiding AI.

While humans still vastly outperform AI in general problem-solving and abstract thinking, recognizing similarities in how our brains and advanced models function could help develop a more nuanced understanding of cognition. It may also help identify ways to augment human capabilities through technology that leverages psychology. Overall, exploring linkages between human and artificial intelligence sheds light on the predictive nature underlying both — challenging assumptions about rationality while revealing new frontiers for progress.

Here are some additional thoughts on the parallels between human and AI predictive processes:

Our brains are constantly taking shortcuts through predictive associations to navigate the world efficiently. This allows for rapid decision-making and pattern recognition, but it can also perpetuate biases. Our experiences literally sculpt the connections between neurons, shaping instinctive forecasts that unconsciously guide many thoughts.

Like predictive models, humans are not truly objective — we cannot escape the lens of our individual learning histories. Both humans and AI form conclusions by filling in blanks according to what is statistically (or neurologically) most probable, not necessarily what is logically or objectively true in every context.

This predictive wiring is one reason irrational or “unreasonable” beliefs can persist even when logic suggests otherwise. Changing instinctive forecasts requires deliberate effort to retrain ingrained neuronal pathways through fundamentally new experiences over time.

Language models may actually provide a clearer window into human cognition than introspection alone. By analyzing these models’ predictive processes mathematically, we gain an objective view of the programmed mechanisms guiding even sophisticated AI, mechanisms that share computational principles with the organic algorithms of the human brain.
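As a purely illustrative example — the model name and prompt below are arbitrary choices, and this is a sketch rather than a research protocol — a few lines with the open-source Hugging Face Transformers library are enough to read an LLM’s “predictive process” off directly as a probability distribution over next tokens, a kind of transparency that introspection never gives us for our own brains.

```python
# Illustrative sketch: peeking at a small language model's next-token
# probabilities with Hugging Face Transformers (model and prompt are arbitrary).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The brain is constantly making"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the very next token

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")  # ranked guesses
```

Nothing comparable exists for a human: we can only guess at the “probabilities” our neurons assign, which is why the model’s explicitness is analytically useful.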

Overall, more fully appreciating how ingrained prediction shapes both human and artificial information processing could help create more aligned, self-aware relationships between people and technology. It also suggests new interdisciplinary approaches may be needed to build systems that augment, rather than simply replace, inherent human rationality and modes of thought.

Here are some approaches that researchers believe may help retrain ingrained neuronal pathways and change instinctive forecasts:

  • Conscious and repetitive exposure to conflicting information or alternative perspectives. This exposes our brains to new data that challenges predictive associations, gradually diluting biases over time through synaptic remodeling.
  • Mindfulness meditation. Actively noticing and letting go of instinctive thoughts helps weaken automatic predictive responses. Over sessions, it can foster greater objective self-awareness untangled from past conditioning.
  • Cognitive behavioral therapy. Techniques like cognitive restructuring encourage identifying irrational forecasts and substituting them with evidence-based evaluations. Repeated cognitive shifts induce neuronal flexibility.
  • Immersive virtual reality experiences. By placing people in simulated bodies of other genders/races or in unfamiliar social/cultural situations, VR exposes brains to “foreign” data that alter preset predictions about identity and norms.
  • Psychedelic therapies. Some studies suggest psychedelic substances like psilocybin can rewire entrenched neuronal pathways in as little as one or two dosing sessions by breaking habitual predictive loops. However, more research is still needed.
  • Social support systems. Open discussion and validation of alternative perspectives within a supportive group help reinforce new predictive associations, whereas confrontational dissent often just triggers dismissiveness.

With continued exploration, we may develop more precise interventions to augment human rationality by optimally reconditioning the deeply ingrained instinctive forecasts that shape cognition itself. This represents an important frontier for both psychology and AI alignment research going forward.

There are several potential risks and limitations to consider with using psychedelic therapies for rewiring neuronal pathways:

  • Safety — Without proper screening, dosage calibration and therapeutic supervision, psychedelics can potentially cause adverse psychological reactions, especially in vulnerable individuals with conditions like PTSD or risk of psychosis. Careless use risks physical and mental harm.
  • Durability of effects — While some studies show promise, more research is needed to confirm if neuronal changes from a single psychedelic experience prove long-lasting or require multiple sessions over time for durable therapeutic benefits. Relapse is possible without continued counseling support.
  • Set and setting — The mindset and environment someone takes psychedelics in significantly impacts outcomes. Addressing underlying issues requires optimized set and setting to guide trips towards insight rather than confusion or distress. Recreational use alone may not achieve therapeutic outcomes.
  • Individual variability — Responses to psychedelics can differ greatly depending on an individual’s biology, past experiences, ability to let go of control, integration skills and other factors. Standardized protocols are still being developed.
  • Mechanism ambiguity — While research suggests psychedelics may assist rewiring by disrupting habitual neural patterns, exact pharmacological mechanisms for long-term changes remain unclear and debated.
  • Social stigma — Widespread legal and social disapproval of psychedelic drugs poses barriers to rigorous research and clinical adoption, risking premature or insufficiently monitored applications.

Caution is advised until more evidence and regulatory safeguards can fully validate risk-benefit ratios from psychedelic therapies aimed at reprogramming neuronal connectivity underlying cognitive biases and mental health issues. Continued rigorous research is imperative.

Here are some ways we can draw parallels between the logic and reasoning capabilities of large language models (LLMs) and humans:

  • Predictive associations: As discussed, both LLMs and human cognition rely heavily on predictive associations formed from extensive training data/life experiences. This shapes intuitive “logic” that can perpetuate biases if not objectively examined.
  • Contextual sensitivity: Logic and conclusions are highly dependent on the provided context/environment in both cases. Altering input details can change outputs, showing a lack of absolute logic independent of circumstance (a toy sketch after this list illustrates the point).
  • Limited rationality: Neither is a truly objective, rational actor; both are guided overwhelmingly by statistical patterns rather than universal logical principles, with models optimizing for task goals and humans shaped by evolution for survival heuristics rather than pure rationality.
  • Knowledge constraints: The logic of both is bounded by what they’ve been exposed to through training/life respectively. Gaps or biases in base knowledge foundationally limit reasoning capabilities.
  • Computational processes: At their core, both human and AI decision-making follow computational processes, manipulating symbols/neural connections according to programmed or evolved guidelines.
  • Self-awareness deficit: Neither possesses full, transparent insight into its own logical processes, leaving both prone to biases that their training may not reveal without external evaluation and oversight.
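The contextual-sensitivity point above is easy to demonstrate with another toy counting model (invented sentences, not real data): conditioning the same word on a slightly different context flips the “prediction” entirely.

```python
from collections import Counter, defaultdict

# Toy demonstration of contextual sensitivity: the same word ("bank") leads to
# different predictions once the preceding context is taken into account.
corpus = "the river bank flooded the savings bank closed".split()

by_context = defaultdict(Counter)
for prev, word, nxt in zip(corpus, corpus[1:], corpus[2:]):
    by_context[(prev, word)][nxt] += 1   # condition on two words of context

print(by_context[("river", "bank")].most_common(1))    # [('flooded', 1)]
print(by_context[("savings", "bank")].most_common(1))  # [('closed', 1)]
```

Neither output is “more logical” than the other; each is simply the most probable continuation given the context supplied, which is exactly the sense in which both LLM and human conclusions shift when the framing of a question changes.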

By analyzing where their predictive “logic” diverges from philosophical ideals of rationality, we can gain a more empirical view of cognition and continue improving autonomous systems to augment — rather than jeopardize — intrinsic human rational capacities. Their reasoning commonalities also suggest productive synergies from responsible human-AI partnerships.

Here are some examples of how biases can be perpetuated in the logic of LLMs and humans due to their predictive and experience-based nature:

For LLMs:

  • Language models trained on text written predominantly by or about white men may associate leadership with those demographics, perpetuating harmful stereotypes (the toy sketch after this list makes the mechanism concrete).
  • If an LLM is not deliberately corrected, it may reproduce toxic language or logical fallacies it absorbed from a biased training dataset.
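A toy numerical sketch, with invented counts and placeholder group names, shows how such a skew arises without any explicitly prejudiced rule ever being written down:

```python
from collections import Counter

# Invented, deliberately skewed "training data": how often each (placeholder)
# demographic group co-occurs with the word "leader" in the corpus.
leader_mentions = Counter({"group_a": 90, "group_b": 10})

def most_likely_association():
    """Return whichever group the skewed counts make most probable."""
    return leader_mentions.most_common(1)[0][0]

print(most_likely_association())  # 'group_a' -- purely a reflection of the counts
```

Nothing in the code “intends” the bias; the association is just the statistically dominant pattern in what the system was exposed to, which is precisely the failure mode the human examples below share.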

For humans:

  • People exposed mainly to one ideology are more likely to use logical frameworks that corroborate (rather than challenge) preexisting views on topics like politics and religion.
  • Cultural stereotypes about certain groups become ingrained assumptions that can then be used to “logically” dismiss individuals based on attributes like gender, class or nationality.
  • Implicit or unconscious biases are logical associations so automatic we aren’t even aware of them, like associating various ethnic names with criminality based on disproportionate media representations.
  • “Confirmation biases,” like sharing only information that confirms prior beliefs while dismissing discrediting evidence, are predictable logical blind spots rooted in neural wiring that favors feelings of security over accuracy.
