
Evolution, self-organization and neuroscience


I have rudimentary knowledge of evolution, and biology in general, so bear with me if this question is a bit naive.

Let's say we have a particular trait, like highly sensitive peripheral vision. There are two ways to go about explaining how this came about:

One is to say that evolution selected for it over time, because animals with a poor sensitivity to peripheral activity could easily become prey. So there is an advantage to having sensitive peripheral vision, and such a trait is adaptive. The picture I have in my mind is of an assembly line where there are randomly wired visual cortices and the best ones get selected for automatically. This seems to be the view of the field of evolutionary algorithms, at least to my mind.
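
As a concrete, if cartoonish, version of that assembly-line picture, here is a minimal selection loop in the spirit of simple genetic algorithms. Everything in it is an assumption made for the example: the "genome" is just a bit string, and the fitness function secretly rewards matching an arbitrary target pattern that stands in for "useful wiring".

```python
import random

# Toy "select the best random wirings" loop, in the spirit of a simple genetic
# algorithm. Genome encoding, fitness function, and parameters are all invented
# for illustration; they have nothing to do with real visual-cortex wiring.

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 40, 0.02
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]   # stand-in for "good wiring"

def fitness(genome):
    """Count how many 'wires' match the (hidden) useful configuration."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)     # "the best ones get selected"
    parents = population[:POP_SIZE // 5]           # keep the top 20%
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

print("best fitness:", max(fitness(g) for g in population), "out of", GENOME_LEN)
```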

Now, if we focus on the random nature of the wiring, we might see that there is an explosion of possibilities in terms of how the visual system can be wired together. After all, evolution is blind, and it doesn't know beforehand which physical signals are of interest.

I'm trying to think of a more plausible mechanistic explanation. If we assume that, given certain reliable properties of the physical world (the permanence of objects, for example), evolution has found an optimal way to wire the visual cortex so that peripheral vision (and a host of other good-to-have features) is a natural outcome, then that would give us more focus on which environmental features and algorithms to look for. This can also be thought of as the idea that evolution, over time, has come to assume that certain things in the physical world are a given and exploits them. (For example, objects in the peripheral areas of vision will tend to have different perceived velocities, so perhaps normalizing velocities across the field of vision automatically results in a need for higher sensitivity in that region. It just so happens that this could also be useful in detecting predators.)

My question then is, is my assumption in the above paragraph valid? (Of course, the question then gets shifted to "Why normalize velocities across the visual field in the first place?").


I don't fully understand your question. I hope the following will help a bit.

I'd like to pinpoint what seems to me to be one important mistake in your text. Reading it, one gets the impression that selection acts on the wires themselves so as to select the best combination. If this is what you meant, then it is wrong. The changes that get inherited are alleles. An allele is a variant of a gene. For example, a gene codes for eye color; a bi-allelic gene has one allele that codes for blue eyes while the other codes for brown eyes. Assuming that there is genetic variance underlying the wiring variance in the population, then yes, selection will favor the best genetic variant. This does not necessarily mean that this best genetic variant is the most optimal solution one could imagine. The most optimal allele might never have been created, simply because of the random nature of mutation.

EDIT:

To answer:

Does natural selection always bring a particular phenotypic (loosely speaking, phenotype = morphological) trait to its optimal state?

No! It does not.

  1. The main reason is that the genetic variant coding for this optimal trait does not necessarily exist, because of the randomness of the mutational process. Also, because of the metabolic pathways that bring variance to a trait, it is often very hard even to imagine a mutation that would allow creating a new, more optimal phenotypic variant. So developmental biology has to be taken into account too.

  2. Because in the absence of genetic drift (an infinite population), selection can only move genotypes uphill in the fitness landscape. If a very high peak exists somewhere in the fitness landscape, but reaching it requires 10 mutations, each of which is deleterious in any other genetic background (i.e., corresponds to a valley in the fitness landscape), then that high peak will probably never be reached (unless all 10 mutations appear simultaneously in the same individual). See the sketch after this list for a toy illustration.

  3. Because selection might be weaker than genetic drift. Also, different subpopulations might be in different environments where different traits are optimal, and because of migration (gene flow) between subpopulations, it is possible that no optimal phenotypic trait is ever seen, even though all the genotypic variance exists.

  4. The seemingly most optimal trait might actually not be optimal. For example, peripheral vision might be limited by the brain rather than by the eye, because processing more information would impose a significant energy cost on the brain, a cost that might outweigh the benefit of better peripheral vision.
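
To make point 2 above concrete, here is a toy sketch (with an invented two-peaked fitness landscape) of a population that only ever accepts uphill moves: it climbs the nearby low peak and never crosses the valley to the much higher peak.

```python
import math, random

# Toy illustration of point 2: strictly uphill "selection" on a two-peaked
# fitness landscape gets stuck on the nearby low peak, because crossing the
# valley would require accepting deleterious intermediate steps.
# The landscape shape and all parameters are invented for this sketch.

def fitness(x):
    return 3.0 * math.exp(-(x - 2.0) ** 2) + 10.0 * math.exp(-(x - 8.0) ** 2)

x = 0.0                                       # population starts near the low peak
for _ in range(10_000):
    candidate = x + random.gauss(0.0, 0.2)    # small "mutation"
    if fitness(candidate) > fitness(x):       # pure selection: only uphill moves accepted
        x = candidate

print(f"stuck near x = {x:.2f} with fitness {fitness(x):.2f} "
      f"(the global peak near x = 8 has fitness 10)")
```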


Your two arguments are actually the same

The important point you are missing about the algorithmic approach to optimizing a genome is that it optimizes for the environment it was cycled through in the past. The laws of physics have been constant through those generations, so the system becomes optimized for the laws of physics. Well, optimized for physics at the scale of life, at least; we are not good at handling relativity, for instance.

It is much the same reason the human brain favors a desire for sweetness that is counterproductive in a modern industrial world: in the environment of our ancestors, getting problematically high concentrations of sugar was simply impossible, so there was no benefit to evolving a limiting behavior, and evolution has only had a handful of generations to select for anything else. Advantage and disadvantage are always relative to the environment, so the environment of the ancestral line affects how evolution optimizes an organism. Being very good at conserving heat would be very beneficial in the Arctic and very detrimental in a hot jungle. Retaining water would be very helpful in a hypertonic environment and very deadly in a hypotonic one. Things are adaptive or maladaptive depending on the environment, so evolution favors or disfavors the passing on of genes differently depending on the environment. A brain that predicts motion well under Earth gravity has a big advantage, on Earth; and since Earth life evolved on Earth, brains can be slowly optimized for predicting motion under Earth gravity. Note I say can be: sometimes the brain is instead simply made plastic and learns to predict motion in Earth gravity because it is only exposed to Earth gravity. It can be very hard to tell which is occurring.

Now, it is worth noting that optimization is only local, not ultimate. Remi covers this quite well in his discussion of the fitness landscape, so I will refer you to it. I will just say that optimized is not the same thing as perfect; it is only the best given what is available and given a history of X.


Gerald Edelman

Gerald Maurice Edelman (/ˈɛdəlmən/; July 1, 1929 – May 17, 2014) was an American biologist who shared the 1972 Nobel Prize in Physiology or Medicine with Rodney Robert Porter for work on the immune system. Edelman's Nobel Prize-winning research concerned the discovery of the structure of antibody molecules. In interviews, he said that the way the components of the immune system evolve over the life of the individual is analogous to the way the components of the brain evolve in a lifetime. There is a continuity, in this way, between his work on the immune system, for which he won the Nobel Prize, and his later work in neuroscience and in philosophy of mind.


The astonishing self-organization skills of the brain

A team of researchers from Tübingen and Israel uncovers how brain structures can maintain function and stable dynamics even in unusual conditions. Their results might lay the foundations for better understanding and treating conditions like epilepsy and autism.

The neurons in our brains are connected with each other, forming small functional units called neural circuits. A neuron that is connected to another one via a synapse can transmit information to the second neuron by sending a signal. This, in turn, might prompt the second neuron to transmit a signal to other neurons in the neural circuit. If that happens, the first neuron is likely an excitatory neuron: one that prompts other neurons to fire. But neurons with the exact opposite task are equally important to the functionality of our brain: inhibitory neurons, which make it less likely that the neurons they are connected to send a signal to others.

The interplay of excitation and inhibition is crucial for normal functionality of neural networks. Its dysregulation has been linked to many neurological and psychiatric disorders, including epilepsy, Alzheimer's disease, and autism spectrum disorders.

From cell cultures in the lab ...

Interestingly, the share of inhibitory neurons among all neurons in various brain structures (like the neocortex or the hippocampus) remains fixed throughout the lifetime of an individual at 15 to 30 percent. "This prompted our curiosity: how important is this particular proportion?" recalls Anna Levina, a researcher at Tübingen University and the Max Planck Institute for Biological Cybernetics. "Can neural circuits with a different proportion of excitatory and inhibitory neurons still function normally?" Her collaborators from the Weizmann Institute of Science in Rehovot (Israel) designed a novel experiment that would allow them to answer these questions. They grew cultures that contained different, even extreme, ratios of excitatory and inhibitory neurons.

The scientists then measured the activity of these artificially designed brain tissues. "We were surprised that networks with various ratios of excitatory and inhibitory neurons remained active, even when these ratios were very far from natural conditions," explains Levina's PhD student Oleg Vinogradov. "Their activity does not change dramatically, as long as the share of inhibitory neurons stays somewhere in the range of 10 to 90 percent." It seems that the neural structures have a way of compensating for their unusual composition to remain stable and functional.

... to a theoretical understanding

So naturally, the researchers next asked: what mechanisms allow the brain tissue to adjust to these different conditions? They theorized that the networks adapt by adjusting the number of connections: if there are few inhibitory neurons, those neurons have to take on a bigger role by building more synapses with the other neurons. Conversely, if the share of inhibitory neurons is large, the excitatory neurons have to make up for this by establishing more connections.
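
A minimal sketch of that compensation logic, assuming a simple balance condition (mean inhibitory input per neuron equals mean excitatory input) and made-up synapse counts and weights; this is a back-of-the-envelope version of the idea, not the authors' published model.

```python
# Back-of-the-envelope sketch of the compensation idea described above:
# if the fraction of inhibitory neurons changes, the number of synapses each
# inhibitory neuron makes can be rescaled so that the average inhibitory and
# excitatory input a neuron receives stays balanced.
# Parameter names and values are illustrative assumptions, not measured data.

def required_inhibitory_out_degree(f_inh, k_exc=100, w_exc=1.0, w_inh=4.0):
    """Out-degree each inhibitory neuron needs so that mean inhibitory input
    matches mean excitatory input, given an inhibitory fraction f_inh."""
    f_exc = 1.0 - f_inh
    # Balance condition: f_inh * k_inh * w_inh == f_exc * k_exc * w_exc
    return f_exc * k_exc * w_exc / (f_inh * w_inh)

for f in (0.1, 0.2, 0.3, 0.5, 0.7, 0.9):
    print(f"inhibitory fraction {f:.1f}: each inhibitory neuron needs "
          f"~{required_inhibitory_out_degree(f):.0f} outgoing synapses")
```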

The theoretical model of the Tübingen scientists can explain the experimental findings of their colleagues in Rehovot and uncover the mechanisms helping to maintain stable dynamics in the brain. The results provide a clearer picture of how excitation/inhibition balance is preserved and where it fails in living neural networks. In the longer term, they might be useful for the emergent field of precision medicine: induced pluripotent stem cell derived neural cultures could be used to find mechanisms of neuropsychiatric disorders and novel medications.


Ecological Modeling: An Introduction to the Art and Science of Modeling Ecological Systems

Hsiao-Hsuan Wang , William E. Grant , in Developments in Environmental Modelling , 2019

1.4.5 Self-organizing

Self-organization is the capacity of a system to reorganize its own structure, most often making its own structure more complex. Self-organization is so ubiquitous that we tend to take it for granted. A seed self-organizes into a plant. An egg self-organizes into a chicken. A group of wolves self-organizes into a wolf pack. A group of species self-organizes into an ecological community. Such examples perhaps imply that the rules of self-organization producing complex systems necessarily are complex. But the formation of a snowflake also is an example of self-organization giving rise to a complex structure, as is the formation of complex chemical compounds resulting from a single original autocatalytic set. In fact, recent advances in complexity science confirm that complex systems can arise as the result of quite simple rules of self-organization. Science, which, by the way, is itself a self-organizing system, currently hypothesizes that all complexity arises from simple rules. This remains to be seen.
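
As a small, self-contained illustration of complexity arising from simple rules (an example added here, not taken from the chapter), the elementary cellular automaton known as Rule 30 turns a one-line local update rule into an intricate, hard-to-predict pattern.

```python
# Elementary cellular automaton "Rule 30": each cell looks only at itself and
# its two neighbours, yet the pattern that unfolds is strikingly complex.
# Width, number of steps, and the choice of Rule 30 are arbitrary for the demo.

def step(cells, rule=30):
    """Apply an elementary cellular-automaton rule to one row of 0/1 cells."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # neighbourhood as a 3-bit number
        new.append((rule >> index) & 1)               # look up that bit of the rule
    return new

width, steps = 61, 30
row = [0] * width
row[width // 2] = 1                                   # a single "seed" cell
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```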

Regardless of its genesis, self-organization produces heterogeneity in system structure and unpredictability in system behavior. In the opening section of this chapter, we cautioned that it is impossible to predict the future with certainty. We added that even theoretically, it is impossible to predict the future states of a system unless it is a completely closed system, and that such systems do not exist. Self-organization guarantees the lack of closure of system structure, and the fact that system structure is the source of system behavior thus guarantees the lack of predictability of system behavior.


The Origin of Life, Self-Organization, and Information

In an article here yesterday, I described the thermodynamic challenges to any purely materialistic theory for the origin of life. Now, I will address one of the most popular and misunderstood claims that the first cell emerged through a process that demonstrated the property known as self-organization.

As I mentioned in the previous article, origin-of-life researchers often argue that life developed in an environment that was driven far from equilibrium, often referred to as a non-equilibrium dissipative system. In such systems, energy and/or mass constantly enters and leaves, and this flow spontaneously generates “order” such as the roll patterns in boiling water, the funnel of a tornado, or wave patterns in the Belousov-Zhabotinsky reaction. The assertion is that some analogous type of self-organizational process could have created the order in the first cell. Such claims sound reasonable at first, but they completely break down when the differences between self-organizational order and cellular order are examined in detail. Instead, the origin of life required complex cellular machinery and preexisting sources of information.

The main reason for the differences between self-organizational and cellular order is that the driving tendencies in non-equilibrium systems move in the opposite direction to what is needed for both the origin and maintenance of life. First, all realistic experiments on the genesis of life’s building blocks produce most of the needed molecules in very small concentrations, if at all. And, they are mixed together with contaminants, which would hinder the next stages of cell formation. Nature would have needed to spontaneously concentrate and purify life’s precursors. However, the natural tendency would have been for them to diffuse and to mix with other chemicals, particularly in such environments as the bottom of the ocean.

Concentration of some of life’s precursors could have taken place in an evaporating pool, but the contamination problem would then become much worse since precursors would be greatly outnumbered by contaminants. Moreover, the next stages of forming a cell would require the concentrated chemicals to dissolve back into some larger body of water, since different precursors would have had to form in different locations with starkly different initial conditions. For more details on these problems, see Robert Shapiro’s book on Origins, or the classic book The Mystery of Life’s Origins.

In addition, many of life’s building blocks come in both right and left-handed versions, which are mirror opposites. Both forms are produced in all realistic experiments in equal proportions, but life can only use one of them: in today’s life, left-handed amino acids and right-handed sugars. The origin of life would have required one form to become increasingly dominant, but nature would drive a mixture of the two forms toward equal percentages, the opposite direction. As a related but more general challenge, all spontaneous chemical reactions move downhill toward lower free energy. However, a large portion of the needed reactions in the origin and maintenance of life move uphill toward higher free energy. Even those that move downhill typically proceed too slowly to be useful. Nature would have had to reverse most of its natural tendencies in any scenario for extended periods of time. Scientists have never observed any such event at any time in the history of the universe.

These challenges taken together help clarify the dramatic differences between the two types of order:

  1. Self-organizational processes create order (e.g., a funnel cloud) at the macroscopic (visible) level, but they generate entropy at the microscopic level. In contrast, life requires the entropy at the cellular size scale to decrease.
  2. Self-organizational patterns are driven by processes which move toward lower free energy. Many processes which generate cellular order move toward higher free energy.
  3. Self-organizational order is dynamic — material is in motion and the patterns are changing over time. The cellular order is static — molecules are in fixed configurations, such as the sequence of nucleotides in DNA or the structure of cellular machines.
  4. Self-organizational order is driven by natural laws. The order in cells represents specified complexity — molecules take on highly improbable arrangements which are not the product of natural processes but instead are arranged to achieve functional goals.

These differences demonstrate that self-organizational processes could not have produced the order in the first cell. Instead, cellular order required molecular machinery to process energy from outside sources and to store it in easily accessible repositories. And, it needed information to direct the use of that energy toward properly organizing and maintaining the cell.

A simple analogy will demonstrate why machinery and information were essential. Scientists often claim that any ancient energy source could have provided the needed free energy to generate life. However, this claim is like a couple returning home from a long vacation to find that their children left their house in complete disarray, with clothes on the floor, unwashed dishes in the sink, and papers scattered across all of the desks. The couple recently heard an origin-of-life researcher claim that order could be produced for free from any generic source of energy. Based on this idea, they pour gasoline on their furniture and then set it on fire. They assume that the energy released from the fire will organize their house. However, they soon realize that unprocessed energy creates an even greater mess.

Based on this experience, the couple instead purchase a solar powered robot. The solar cells process the energy from the sun and convert it into useful work. But, to the couple’s disappointment the robot then starts throwing objects in all directions. They look more closely at the owner’s manual and realize they need to program the robot with instructions on how to perform the desired tasks to properly clean up the house.

In the same way, the simplest cell required machinery, such as some ancient equivalent to ATP synthase or chloroplasts, to process basic chemicals or sunlight. It also needed proteins with the proper information contained in their amino acid sequences to fold into other essential cellular structures, such as portals in the cell membrane. And, it needed proteins with the proper sequences to fold into enzymes to drive the metabolism. A key role of the enzymes is to link reactions moving toward lower free energy (e.g. ATP → ADP + P) to reactions, such as combining amino acids into long chains, which go uphill. The energy from the former can then be used to drive the latter, since the net change in free energy is negative. The free-energy barrier is thus overcome.
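
To make the coupling arithmetic explicit, here is a rough worked example using rounded, condition-dependent textbook values (the numbers are assumptions for illustration, not figures from this article): ATP hydrolysis releases about 30.5 kJ/mol under standard biochemical conditions, while forming a peptide bond costs very roughly 10 to 20 kJ/mol, so the coupled reaction still runs downhill overall.

```latex
% Rounded, condition-dependent textbook values; for illustration only.
\begin{aligned}
\mathrm{ATP} + \mathrm{H_2O} &\longrightarrow \mathrm{ADP} + \mathrm{P_i}
  & \Delta G^{\circ\prime}_1 &\approx -30.5~\mathrm{kJ\,mol^{-1}} \\
\mathrm{AA_1} + \mathrm{AA_2} &\longrightarrow \mathrm{AA_1\text{--}AA_2} + \mathrm{H_2O}
  & \Delta G^{\circ\prime}_2 &\approx +10~\text{to}~+20~\mathrm{kJ\,mol^{-1}} \\
\text{coupled:} \qquad
  & & \Delta G^{\circ\prime}_{\mathrm{net}} = \Delta G^{\circ\prime}_1 + \Delta G^{\circ\prime}_2 &< 0
\end{aligned}
```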

However, the energy-processing machinery and information-rich proteins were still not enough. Proteins eventually break down, and they cannot self-replicate. Additional machinery was also needed to constantly produce new protein replacements. Also, the proteins' sequence information had to have been stored in DNA using some genetic code, where each amino acid was represented by a series of three nucleotides known as a codon, in the same way English letters are represented in Morse code by dots and dashes. However, no identifiable physical connection exists between individual amino acids and their respective codons. In particular, no amino acid (e.g., valine) is much more strongly attracted to any particular codon (e.g., GTT) than to any other. Without such a physical connection, no purely materialistic process could plausibly explain how amino acid sequences were encoded into DNA. Therefore, the same information in proteins and in DNA must have been encoded separately.
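
As a toy illustration of the lookup described above (added here for clarity, not part of the original article), the snippet below translates a made-up DNA string using a small excerpt of the standard genetic code; the point is simply that the codon-to-amino-acid assignment works as a table lookup.

```python
# Toy codon-to-amino-acid translation. The table is a small excerpt of the
# standard genetic code; the DNA string is made up for the example.

CODON_TABLE = {
    "ATG": "Met", "GTT": "Val", "GCA": "Ala",
    "AAA": "Lys", "TGG": "Trp", "TAA": "STOP",
}

def translate(dna):
    """Read the DNA three bases at a time and look each codon up in the table."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE.get(dna[i:i + 3], "???")
        if residue == "STOP":
            break
        protein.append(residue)
    return "-".join(protein)

print(translate("ATGGTTGCAAAATGGTAA"))   # -> Met-Val-Ala-Lys-Trp
```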

In addition, the information in DNA is decoded back into proteins through the use of ribosomes, tRNAs, and special enzymes called aminoacyl tRNA synthetases (aaRS). The aaRSs bind the correct amino acids to the correct tRNAs associated with the correct codons, so these enzymes contain the decoding key in their 3D structures. All life uses this same process, so the first cell almost certainly functioned similarly. However, no possible connection could exist between the encoding and the decoding processes, since the aaRSs' structures are a result of their amino acid sequences, which happen to be part of the information encoded in the DNA. Therefore, the decoding had to have developed independently of the encoding, but they had to use the same code. And, they had to originate at the same time, since each is useless without the other.

All of these facts indicate that the code and the sequence information in proteins/DNA preexisted the original cell. And, the only place that they could exist outside of a physical medium is in a mind, which points to design.


Evolution – the ultimate computer engineer

Fred Wolf from the Max Planck Institute for Dynamics and Self-Organization, head of the Bernstein Centre for Computational Neuroscience (BCCN) Göttingen and designated director of the Campus Institute Dynamics of Biological Networks, will be the coordinator of a new DFG priority programme, „Evolutionary Optimization of Neuronal Processing“. The establishment of the new programme was decided by the Senate of the German Research Foundation at its spring meeting this year. The programme is scheduled to run for six years and will start in early 2019. It is one of 14 programmes selected by the DFG from 53 initiatives, with a funding volume of 80 million euros over the next three years. The new Campus Institute Dynamics of Biological Networks, which is jointly funded by the Max Planck Institute for Dynamics and Self-Organization, the University and Göttingen University Medicine, will coordinate the overarching activities of the research network.

Marion Silies (ENI) and Fred Wolf (MPIDS) from the Bernstein Centre for Computational Neuroscience in Göttingen. Both are founding members of the new Göttingen Campus Institute Dynamics of Biological Networks.

The new priority programme (PP) is the world's first coordinated research programme that combines systems and theoretical neuroscience with evolutionary and developmental biology to clarify basic principles of brain evolution. The aim of the PP is to decipher how the networks and algorithms of biological nervous systems have developed in the course of evolution. Essentially, it will be about applying the theory of evolution to the basic principles of neuronal information processing. „It is exciting to see that it is now possible to examine the design and performance of biological nervous systems from a stringent evolutionary perspective,“ said Wolf. Key questions, such as "How close are biological nervous systems to the absolute performance limits of information processing?" and "Which genetic changes underlie the optimization of their performance?", will guide scientists working on interdisciplinary projects toward answers to core questions of brain evolution.

High innovative power - high future potential

The new DFG priority programme can build on the strong infrastructure of the Bernstein Network Computational Neuroscience, which was created in the past decade with the support of the BMBF. „When selecting PP initiatives, the DFG Senate attaches great importance to the innovative power and future potential of the funded research approaches,“ explains Eberhard Bodenschatz, Managing Director of the Max Planck Institute for Dynamics and Self-Organisation. Ulrike Beisiegel, President of the University of Göttingen, adds: „Through the new DFG Priority Programme, the Campus Institute Dynamics of Biological Networks will also play a visible role throughout Germany in the development of emerging topics in the life sciences.“

The new programme is internationally connected not only through the cooperation partners of the Bernstein Network. The programme was developed together with international experts and will be supported by an international scientific advisory board throughout its duration. The initiative was developed by Marion Silies and Fred Wolf (both BCCN Göttingen and Campus Institute). In addition, the steering committee of the priority programme includes the heads of two other Bernstein Centres (Michael Brecht, BCCN Berlin; Matthias Bethge, BCCN Tübingen) and the evolutionary developmental biologist Joachim Wittbrodt (University of Heidelberg).

Developing Darwin's theory of evolution

The foundations of modern brain research were laid at the beginning of the 20th century by researchers such as Ramón y Cajal, Korbinian Brodmann and Ludwig Edinger. For this generation, Darwin's theory of evolution was for the first time part of their scientific education, and they were already asking how highly developed brains could emerge from simpler precursors.

Recent advances in neurotechnology, developmental biology and theoretical neuroscience have opened up new, comprehensive approaches to the functioning and evolution of the brain. Computer-based and mathematical optimization methods are now capable of making precise predictions about ideal circuit structures and theoretical performance limits for many biological neuronal systems. Experiments can now record the activity of thousands of nerve cells simultaneously and map the structure of their networks with unprecedented accuracy. In the coming years, genomic data will begin to make it possible to reconstruct the evolutionary refinement of neuronal cell types. The new priority programme „Evolutionary Optimization of Neuronal Processing“ will bring these advances together to capture fundamental principles of brain evolution.


Editorial: Self-Organization in the Nervous System


  • 1 Virtual Structures Research Inc., Potomac, MD, United States
  • 2 Department of Bioengineering, Imperial College London, London, United Kingdom
  • 3 Institute of Neurology, Wellcome Trust Centre for Neuroimaging, London, United Kingdom

“Self-organization is the spontaneous—often seemingly purposeful—formation of spatial, temporal, spatiotemporal structures, or functions in systems composed of few or many components. In physics, chemistry and biology self-organization occurs in open systems driven away from thermal equilibrium” (Haken, Scholarpedia). The contributions in this special issue aim to elucidate the role of self-organization in shaping the cognitive processes in the course of development and throughout evolution, or “from paramecia to Einstein” (Torday and Miller). The central question is: what self-organizing mechanisms in the human nervous system are common to all forms of life, and what mechanisms (if any) are unique to the human species?

Over the last several decades, the problem of self-organization has been at the forefront of research in biological and machine intelligence (Kohonen, 1989; Kauffman, 1993; Pribram, 1994, 1996, 1998; Kelso, 1997; Camazine et al., 2003; Zanette et al., 2004; Haken, 2010, 2012; and others). The articles collected in this issue present recent findings (and ideas) from diverse perspectives and address different facets of the problem. Two features of this collection might be of particular interest to the reader: (i) the scope of discussion is broad, stretching from general thermodynamic and information-theoretic principles to the expression of these principles in human cognition, consciousness and understanding, and (ii) many of the ideas speak to a unifying perspective outlined below. In what follows, we will preview the collection of papers in this special issue and frame them in terms of a unified approach to self-organization—leaving the reader to judge the degree to which subsequent articles are consistent with or contradict this framework.

Living organisms must regulate flows of energy and matter through their boundary surfaces to underwrite their survival. Cognitive development is the product of progressive fine-tuning (optimization) of regulatory mechanisms, under the dual criteria of minimizing surprise (Friston, 2010; Sengupta et al., 2013, 2016; Sengupta and Friston, 2017) and maximizing thermodynamic efficiency (Yufik, 2002, 2013). The former implies reducing the likelihood of encountering conditions impervious to regulation (e.g., inability to block inflows of destructive substances); the latter implies maintaining net energy intakes above some survival thresholds. Energy is expended in regulatory processes formed in the course of self-organization and predicated on lowering thermodynamic entropy “on the inside” and transporting excessive entropy (heat) “to the outside.” Efficient regulation requires mechanisms that necessarily incorporate models of the system and its relation to the environment (Conant and Ashby, 1970). Primitive animals possess small repertoires of genetically fixed, rigid models, while—in more advanced animals—the repertoires are larger and their models become more flexible, i.e., amenable to experience-driven modifications. Both the evolutionary and experience-driven modifications are forms of statistical learning: models are sculpted by external feedback conveying statistical properties of the environment. Human learning mechanisms, although built on the foundation of statistical learning, depart radically from conventional (e.g., machine) learning: the implicit models become amenable to self-directed composition and modification based on interoceptive, as opposed (or in addition) to exteroceptive, feedback (Yufik, 1998). Interoceptive feedback underlies the feeling of grasp, or understanding, that accompanies the organization of disparate “representations” into cohesive structures amenable to further operations (mental modeling). The work of mental modeling requires energy; consciousness is co-extensive with deliberate (attentive, focused) application of energy (“cognitive effort”) in carrying out that work. Learning with understanding departs from statistical (machine) learning in three ways: (i) mental models anticipate experiences, as opposed to being shaped by them (e.g., the theory of relativity originated in gedanken experiments); (ii) feedback conveys properties of implicit models (coherence, simplicity, validation opportunities the models afford, etc.); and (iii) manipulating (executing or inverting) models enables efficient exchange with the environment, under conditions with no precedents (and thus no learnable statistical representation) (Yufik, 2013). Regulation of this sort, based on statistical learning, faces a challenging complexity. As the number of regulated variables grows, energy demands can quickly become unsustainable. Using self-organization to implement the process of “understanding” (i.e., composing more general models) has the triple benefit of minimizing surprise, while averting complexity and advancing thermodynamic efficiency of regulatory processes into the vicinity of theoretical limits.
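
A minimal sketch of the surprise-minimization idea (an illustration written for this preview, not taken from any of the papers cited above): an agent models a scalar sensory signal with a Gaussian and nudges the model mean down the gradient of surprise, defined as the negative log-probability of each observation.

```python
import math, random

# Minimal "surprise minimization" sketch. The agent's model of a scalar sensory
# signal is a Gaussian with mean mu; surprise is -log p(observation | model).
# Environment, learning rate, and noise level are invented for illustration.

def surprise(obs, mu, sigma=1.0):
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (obs - mu) ** 2 / (2 * sigma ** 2)

env = lambda: random.gauss(5.0, 1.0)     # the environment actually centres on 5
mu, lr = 0.0, 0.05                       # initial model mean, learning rate

print("average surprise before learning:",
      round(sum(surprise(env(), mu) for _ in range(1000)) / 1000, 2))
for _ in range(2000):
    obs = env()
    mu += lr * (obs - mu)                # descend the surprise gradient: move mu toward the observation
print("learned model mean:", round(mu, 2))
print("average surprise after learning: ",
      round(sum(surprise(env(), mu) for _ in range(1000)) / 1000, 2))
```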

Annila argues that the most fundamental function performed by the nervous system is shared by all open systems and entails a generation of entropy, by extracting high-grade free energy from the environment and returning low-grade energy. As dictated by the second law of thermodynamics, cognitive processes seek out opportunities (paths) for consuming free energy in the least time. Evolution obtains progressively more efficient mechanisms for detecting and exploiting free energy deposits, culminating in consciousness, which emerges in systems possessing the ability to “integrate various neural networks for coherent consumption of free energy…” (Annila, this issue).

Street reviews discussions in the literature that examine the tension between—and synthesis of—information-theoretic and thermodynamics-motivated conceptualizations of brain processes. Tensions are rooted in the theory of information, designed to allow analysis of information transfer, irrespective of the physical processes that mediate transfer. Synthesis is necessitated by considerations of energy costs incurred in neuronal signaling. A consensus is anticipated, within a theoretical framework that views cognitive development as self-organization in the nervous system—seeking to minimize surprise, while incurring minimum energy costs.

Torday and Miller discuss the conceptual framework needed for tracing evolution of the mammalian brain “from paramecia to Einstein.” The framework encompasses three key notions: (i) complex multicellular organisms share fundamental organizational properties, with precursors in unicellular forms of life, (ii) the most basic property is the ability to extract energy from the environment and dissipate heat in a manner enabling homeostasis and processing of information and (iii) evolutionary improvements in homeostasis, self-maintenance and information processing derive from increased cellular collaboration (coherence). Within this framework, “life is cognition at every scope and scale” and “any cognitive action as a form of cellular coherence can be better understood as both an information exchange and reciprocally then, as energy conversion and transfer” (Torday and Miller).

Campbell argues that Darwinian evolution can be expressed as a process of Bayesian updating. Conventionally, the ability to draw inferences and update Bayesian models has been attributed exclusively to (human) reasoning. The range of attribution can be expanded to include all organisms, by assuming that genotypes carry latent models of the environment receiving varying expressions in the phenotype. On that view, genetically transmitted models are the source of hypotheses (phenotype variations) subjected to confirmation (survival) or rejection (extinction) by the environment. Changes in the phenotype over somatic time and the genotype over evolutionary time minimize surprise, thus increasing the likelihood of survival of individuals and the species.
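
A toy numerical version of this reading (an illustration assumed here, not Campbell's own model): genotype frequencies play the role of a prior, survival probabilities play the role of likelihoods, and each generation of differential survival followed by renormalization is a Bayesian update.

```python
# Toy "evolution as Bayesian updating": frequencies act as a prior, survival
# probabilities act as likelihoods, and renormalized post-survival frequencies
# act as the posterior. Genotype labels and numbers are invented for the sketch.

genotype_freq = {"A": 0.5, "B": 0.3, "C": 0.2}        # prior (current frequencies)
survival_prob = {"A": 0.50, "B": 0.80, "C": 0.20}     # likelihood of surviving this environment

for _ in range(20):                                    # 20 generations
    unnormalized = {g: f * survival_prob[g] for g, f in genotype_freq.items()}
    total = sum(unnormalized.values())
    genotype_freq = {g: v / total for g, v in unnormalized.items()}   # posterior

print({g: round(f, 3) for g, f in genotype_freq.items()})   # "B" comes to dominate
```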

Kozma and Freeman analyze alternations between highly organized (low entropy) and disorganized (high entropy) neuronal activities induced by visual stimuli. In rabbits implanted with ECoG arrays of electrodes fixed over the visual cortex, presentations of stimuli were accompanied by metastable patterns of synchronized activity—collapsing quickly into the background activity upon cessation of the stimuli. The authors define alternations between metastable patterns and disorganized firings as phase transitions and propose a “cinematic” theory of perception treating alternations that spread across the cortex as successions of “frames” combined into perceptual units (percepts). Synchronized neuronal populations are identified with Hebbian assemblies, acting in a self-catalytic fashion: interactions between assemblies maintain the cortex in the critical state, conducive to the emergence of organized (low entropy) structures, such as Hebbian assemblies.

Stankovski et al. present novel findings concerning the coherence of neuronal assemblies. Assemblies oscillate within characteristic frequency intervals, with cross-frequency coupling serving to integrate assemblies into functional networks that span distant regions in the brain. In this study, cross-frequency coupling functions were reconstructed from EEG recordings from human subjects in the state of rest, with the eyes either open or closed. They review early evidence that closing the eyes triggers an increase in coupling strength. A novel method of analysis then allows them to determine variations in coupling strength across frequency ranges: crucially, they find that increases in the strength of inter-assembly coupling are accompanied by narrowing variation envelopes.

Tang et al. recorded experience-induced changes in the connectivity of large-scale brain networks. Subjects were resting in a state of “mindfulness,” under minimal exposure to external stimuli. A comprehensive array of mathematical analyses was applied to the fMRI data. The analyses reveal statistically significant increases in connectivity between different brain areas. Many earlier studies have demonstrated increased connectivity in brain networks under external stimuli; however, according to this study, similar increases can be produced in the course of internally induced, restful states.

Werbos and Davis review progress to date in modeling cognitive functions, focusing on the neural net model of learning employing back-propagation algorithms. Neural nets represent learning as the acquisition of desired mappings between input vectors (environmental conditions) and output vectors (desired responses), via iterative reduction of mapping errors. The model posits successions of calculations propagating forward and backward in the neuronal system, orchestrated by some global clock. Empirical substantiations of this model have been scarce—but new experimental findings and analysis are presented that speak to its biological plausibility.
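
For readers who have not seen the model being discussed, here is a minimal two-layer network trained by back-propagation on the XOR mapping; the architecture, learning rate, and iteration count are arbitrary choices made for this sketch.

```python
import numpy as np

# Minimal back-propagation demo: a two-layer sigmoid network learns the XOR
# mapping by iteratively reducing its output error. All hyperparameters are
# arbitrary choices for the sketch.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 2.0

for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error back through both layers
    d_out = (y_hat - Y) * y_hat * (1 - y_hat)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)

print(np.round(y_hat.ravel(), 2))   # should approach [0, 1, 1, 0]
```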

Perlovsky's "physics in the mind" research program tries to define the principles of cognition in a rigorous way (à la Newtonian mechanics). Several principles are suggested, including mental modeling, vague representations, knowledge instinct, dynamic logic and dual hierarchy. A mental model is the basic functional unit of cognition; models are vague (lacking detail), while sensory inputs are crisp (rich in detail). Acquiring knowledge involves reconciling models and inputs in a process driven by the knowledge instinct and employing mechanisms of dynamic logic. The model hierarchy has a counterpart in the linguistic hierarchy (hence, the dual hierarchy).

Newton analyzes the composition of understanding and identifies three constituents: (i) imagery, (ii) the state of mental tension (surprise) caused by a novel situation and (iii) the state of tension resolution, provided by having worked out responses afforded by the situation. The feeling of having reached understanding (Aha!) precedes response execution and thus depends on factors other than external feedback (although failures can restore tension). Execution involves some forms of bodily activities, so “understanding” is anchored in the mechanisms that control such activities. Understanding can then expand via mapping new situations onto those that are already understood.

Yufik and Friston suggest that the same self-organization principle manifests in both the emergence of life and the evolution of regulatory mechanisms sustaining life: regions (subnets) in networks of interacting units (molecules, neurons) fold into bounded structures stabilized by boundary processes. Evolution expanded regulation mechanisms from conditioning to anticipatory planning, which is accomplished via self-directed composition and execution of mental models. Hebbian assemblies stabilized by boundary energy barriers (neuronal packets) are produced by folding and phase transitions in neuronal networks and represent (model) persistent constellations of stimuli (objects). Variations in packet responses (changes in the composition of responding groups and the order of their firing inside the packet) represent behavior. “Understanding” accompanies the composition of models representing behavior coordination (inter-object relations), as bi-directional (reversible) mapping between packets. Such reversible mapping underlies behavior prediction and explanation (retrodiction). Coordination establishes thermodynamic equilibrium in the volume of a model, thus minimizing dissipation (costs) and enabling reversible execution. Expanding models and exploring new inputs necessarily moves the system away from equilibrium. Regulation via anticipation and explanation is a uniquely human form of surprise minimization. The regulatory process is supported by verbalization and imagery but is driven by modeling. Arguably, mental modeling, i.e., the coordination of packets (mental objects) in mental space, builds on the neuronal machinery engaged in coordinating limbs and objects in physical space.

This concludes our brief survey of the articles offered in the special issue. To an outside observer, cars might appear to have the purpose of seeking out gas stations and converting fuel into heat and exhaust. A closer inspection will reveal intelligent regulators inside the cars (i.e., you and me) concerned with having enough fuel to reach the next station—and averting the “surprise” of finding the fuel tank empty. Other concerns—that contextualize this regulation—are the cost of fuel and the desire to keep the car running for the greatest distance possible. In the process, cars must maneuver in coordination with other cars, traffic rules and terrain. Such self-motivated, self-evidencing and self-regulated cars might be a plausible metaphor for minds embedded in a self-organizing nervous system.




Assessment and intervention in the prelinguistic period

Assessing infant readiness for communication.

Information gathered from an instrument, such as the APIB, helps identify the level of interactive, motor, and organizational development that the infant in the NICU is showing. This information is crucial for deciding whether the infant is ready to take advantage of communicative interaction. Gorski, Davison, and Brazelton (1979) defined three stages of behavioral organization in high-risk newborns. The child's state of organization determines when he or she is ready to participate in interactions. These stages are as follows:

Turning In (or physiological state): During this stage, the baby is very sick and cannot really participate in reciprocal interactions. All the infant’s energies are devoted to maintaining biological stability.

Coming Out: The baby first becomes responsive to the environment when he or she is no longer acutely ill, can breathe adequately, and begins to gain weight. This stage usually occurs while the baby is still in the NICU, and this is the time when he or she can begin to benefit from interactions with parents. It is essential that the SLP be aware when this stage is reached so that interactions can be encouraged.

Reciprocity: This final stage in the progression usually occurs at some point before the baby is released from the hospital. Now the infant can respond to parental interaction in predictable ways. Failure to achieve this stage, once physiological stability has been achieved, is a signal that developmental deficits may persist.

An important function that the SLP can serve in fostering communicative development in an infant in the NICU is to acquaint the parents with this progression and help them to learn from the medical staff when the child turns the corner from the first to the second stage. At this time, more active parental involvement with the infant should be encouraged by the SLP.


Neuroscience – Science of the Brain is primarily aimed at sixth form students or first-year undergraduates. Richard Morris creates a wonderfully neat and concise ‘primer’ of neuroscience, touching on everything from development to drug addiction, with leading UK neuroscientists contributing chapters on their respective fields of expertise in a simple yet imaginative and visually appealing way. Any student uncertain what to specialize in can’t fail to be swayed by this refreshing booklet!

Inside our heads, weighing about 1.5 kg, is an astonishing living organ consisting of billions of tiny cells. It enables us to sense the world around us, to think and to talk. The human brain is the most complex organ of the body, and arguably the most complex thing on earth. This booklet is an introduction for young students.

In this booklet, we describe what we know about how the brain works and how much there still is to learn. Its study involves scientists and medical doctors from many disciplines, ranging from molecular biology through to experimental psychology, as well as the disciplines of anatomy, physiology, and pharmacology.


IMPLICATIONS

As evidence grows for the ancient origins and widely differing mechanisms of animal cognition (Irwin, 2020), the need to conduct more comparative studies with a greater variety of species becomes clear. For example, the highly diverse behavioral repertoire of reptiles, with their paleocortical precursor of the mammalian hippocampus, illustrates the value of a comparative approach toward understanding hippocampal function (Reiter, Liaw, Yamawaki, Naumann, & Laurent, 2017).

The argument that sensory consciousness is ancient and widespread in the animal kingdom, and that diverse neural architectures can create it (Feinberg & Mallatt, 2016), raises the question of which cognitive processes and underlying physiological/neuroanatomical states have deep roots because of needs shared by all animals and which are differentiated and derived because of environmentally specific adaptations.

The situated and embedded nature of an animal's natural cognitive milieu requires greater attention. Neuroanatomical complexity and cognitive abilities are correlated with environmental uniqueness and variability (Rosati, 2017; Mettke-Hofmann, 2014; Shumway, 2008), so experimenters should conduct more studies of cognition in natural, or at least more complex, environments.

If, as Merleau-Ponty (1945) wrote, “A sense of space emerges through movement within a milieu,” and movement is a precursor for consciousness (Sheets-Johnstone, 1999), the contribution of movement to animal cognition needs more emphasis and exploration. Merker (2007) has argued that consciousness arose as a solution to problems in the logistics of decision-making in mobile animals. What the cognitive systems for such decision-making are deserves more study.

Recognition that behaving animals internalize a concept of place that goes beyond simple localization calls for new and creative experimental approaches. The well-documented mechanisms in the hippocampal–entorhinal cortical system for placement and orientation of mammals are almost certainly an incomplete picture of the entire neural substrate for how the embodied animal senses and generates spatial information about its environment. How that system incorporates multimodal sensory information, enacts motor activity, and integrates information at higher levels of integration needs further fleshing out.

The confounded use of “representationalism” in cognitive science needs to be untangled. The clear evidence from neuroscience that elements of the extended brain–body–environment are correlated with defined neural processes does not negate the insight that neural representation, although in a certain sense necessary, provides an incomplete understanding of cognitive functions and consciousness. Clark (1997a) argued that putative internal representations may involve indexical or action-oriented contents rather than compositional codes. In a similar vein, Gallagher (2014) warns against a version of embodied cognition that leaves the body out of it—placing instead all the essential action of cognition in the brain. The content of consciousness should be construed as a dynamically recursive interaction between a whole organism and its milieu.

The ability to visualize engrammatic traces of memory in the brain is one of the most exciting recent advances in neuroscience, but this technology is based excessively on one species (the mouse) and one learning paradigm (contextual fear conditioning). Until a greater variety of animals and behaviors are investigated, generalizations from currently available data need to be tempered.

The neurophenomenological approach advocated by Varela (1996), using first-person reports to guide neural analyses in the study of subtle human consciousness states (Berkovich-Ohana, 2017), should be an integral element of the validation of neurobiological processes in humans. Phenomenological descriptions can influence the design of scientific experiments to help neuroscience determine more precisely what phenomena it should explain (Gallagher & Zahavi, 2008). As Crick, Koch, Kreiman, and Fried (2004) pointed out, neurosurgeons, probing the living human brain on a daily basis, could make significant contributions in this regard.

For those neuroscientists willing to tackle the “hard problem” of reconciling subjective experience with objective physical substrates (Chalmers, 1995), we suggest that attention to the embodied, embedded, and interactional nature of consciousness is more likely to result in a fruitful enquiry. We do not claim that any of the new approaches in cognitive neuroscience have solved that problem, nor that they necessarily will, but we do maintain that these approaches go further than classical cognitive science in properly addressing the preliminary question posed by phenomenology, namely, what is the nature of consciousness in the first place? To this question, we contribute the suggestion: It is always an occurrence in and of place.

The distinctive features of consciousness need to be defined in neurocognitive detail. The neuroanatomical substrate of awareness in the mammalian brain is fairly well understood. Further studies have suggested some previously unrecognized but specific neurophysiological correlates that may distinguish brain activity tied to conscious awareness (Massimini, Boly, Casali, Rosanova, & Tononi, 2009; Seth, Baars, & Edelman, 2005). Other approaches, such as IIT, which deduces from the essential properties of phenomenal experience the requirements for a physical theory of consciousness (Tononi et al., 2016), might be a productive framework for further investigation, provided it is expanded to incorporate environmental interaction.

Feinberg (2012) has argued that a fuller understanding of consciousness will require a combination of evolutionary, neurobiological, and philosophical approaches (Feinberg & Mallatt, 2016). In advocating for a more phenomenological approach to the study of consciousness, Gallagher has written, “cognitive science [and] phenomenology [both] view consciousness as a solvable problem and affirm its openness to scientific, objective interpretation. In this case it is only the gap that continues to persist between phenomenology and cognitive science that seems mysterious.”

Conclusion

Many contemporary cognitive neuroscientists are heeding Andy Clark's (1997b) admonition to abandon the idea of neat dividing lines between perception, cognition, and action and to avoid research methods that artificially divorce thought from embodied action-taking.

Gallagher (2018) recently suggested that the best explanation of brain function may be found in the mixed vocabularies of embodied and situated cognition, developmental psychology, ecological psychology, applied linguistics, and the theory of affordances and material engagement, rather than the narrow vocabulary of computational neuroscience.

To these perspectives, we would add that a more intensive focus on the primacy of place will further a better understanding of the cognitive life of humans and other animals, in a manner specific to the unique needs of each.



