Cognitive Systems and Cognitive Science

Henrique Schützer Del Nero

ABSTRACT

Voluntary control seems to be one of the important features that differentiate humans from machines. As cognitive science has done its work without any serious approach to consciousness, it has built cognitive systems that are far from being legitimately 'cognitive' in the human sense. If cognitive systems are to model human behavior, it is better to examine closely the problem of conscious control. However, since mental contents (thoughts, intentions, plans, etc.) are not translatable into physical languages, a solution could be to convert objects into classes of functions, seeking the nature of the computations that might be performed in order to obtain such systems. A topological evaluation of stability seems interesting, since structural stability is a concept that might underlie the functional nature of such 'voluntary computations'. I try to show in this article that a division of cognition into four classes of 'cognitive' modes may help one to understand that topology might underlie the processing of novelties and voluntary acts, as opposed to automatic algorithms. Despite the existence of many complicated algorithms that perform 'automatic control', Cognitive Science should consider the function of voluntary control as the mark of human cognition, trying to suggest formalisms that render these types of control possible.

INTRODUCTION

There is an eternal battle that has regained vigor with the appearance of computer science: must semantics and ontology be maintained separated from syntax? Or, are the rules of processing (syntax) different from the rules, if there are any, of meaning and existence (semantics)?

Nobody doubts that there are telephonic communications, distillation columns, complicated machines and engines of all sorts.

Artificial Intelligence has developed many methods to control, identify and manipulate such systems. But Artificial Intelligence went further, presuming that the mind could also be the subject of models, or at least that models could gain the status of minds, were they able to be intelligent or to behave intelligently. Then the dissent began, because Psychology, at that time an orphan of the behavioristic dream, caught on immediately to the revival of intermediate agencies, mixing information, mental content and mental rules in the same strange concept of representation. Intermediate agencies were recruited to simplify the problem, being the links that can interpret internal phenomena. Is it legitimate to recruit entities when the existence of these same entities is in doubt?

Artificial Intelligence with programs that have a psychological flavor and Psychology with the computational flavor of a symbol-processor mind, began a new era that should be better understood.

The aim of this article is to show that Cognitive Systems will not enrich our knowledge of the human cognitive system if one doesn't pay attention to the nature of the presumed mental, or intelligent, objects they handle and the results they allow. Cognitive Science in a very broad sense is devoted to studying both the process and the nature of the objects that can be recruited as internal-mental agencies. In this sense, it has to pay attention to both sides of the coin: how can human cognition enrich our artificial models for different purposes, and how can artificial models and formalisms enrich our understanding of the human mind, laying the basis of a scientific-formal Psychology? Besides being a question of interpretation, it is an attempt at establishing the limits and boundaries that a set must have to be called the set of mental and intelligent operations. There might be objects that are intelligent and operations that, in spite of being complex and unpredictable, are not intelligent. Careful conceptual research must be undertaken in order to clarify the field. Cognitive Systems will never be cognitive if one doesn't define with precision what it is to be cognitive and to be a system. The search for these concepts pushes one to do both science and metascience before labeling a result or an inquiry 'cognitive'. Engineering without scientific and conceptual care will be problem-solving. If it is problem-solving, with an acute sense of purpose and pragmatically oriented work, it is better to abandon contaminated terms like intelligence, cognition and mind.

My aim in this work is to propose an alternative method to ascribe intelligence and cognition to an architecture. If one doesn't pay attention to consciousness, one will never be modeling cognitive-intelligent behavior; but consciousness is a very complicated concept, full of traps and fallacies. Let us try to face the enemy, guarding against its opaque sides.

I. Consciousness

The trick behind all the discussions about consciousness, and all the failures they have led to, is the misunderstanding between form and content.

Consciousness can be considered a class of contents, i.e. all the objects like feelings, sensations, ideas, etc. that populate our inner subjective lives. It is bad strategy to conceive the problem from the content's point of view, because all the conscious contents are expressed in a first-person language, 'I feel...', 'I think...', etc., and the translation of these languages into third-person language is often difficult. Without translation, however, the mark of subjectivity survives on consciousness, in spite of its having a great deal of objectiveness, qua directly acquainted, preventing any scientific statement from being made about the subject. [1]

Consciousness can be considered as a form, or as a class of forms, that underlies a certain functional feature. As a biological trait, it is reasonable to think that it has been selected and that it represents a gain over environmental challenges. The human mind has been considered something that enabled animals to better manipulate food resources and to acquire social relationships.[2]

Consciousness is the main predicate and the very essence of mind. When one talks about intelligent or cognitive behavior one is necessarily talking about mindful behavior. Then, if one talks about cognitive systems, one is talking about mindful systems, hence conscious ones.

Consciousness as a class of objects can be considered as the phenomenological flow of inner experiences one has: experiences of the world, of the inner body, of the past, of the future, of imagination, of reality, of bizarreness, of properness, of reasonability, etc. Phenomenological descriptions are the contents that play the continuous conscious role of our inner life.

Conscious forms are the neural functions that allow consciousness to manipulate its objects in a particular way. These neural functions are highly circumscribed to certain structures of the Central Nervous System (CNS), particularly in the neocortex.

The adaptive functions of consciousness are the ability to master novelties, imagination, to-be-learned things, dangerous-risky situations, creation, reflection, inquiry, justifications, responsibility, moral values, normative-contingent rules, etc.

Language, in spite of being so rich and noble, can be largely handled through automatic behavior: when one wants to discuss something, only the nature of the arguments is in focus, almost all the rest being automatically driven.

If we consider among the objects of consciousness all the inner experiences, including self-reflection, awareness, attention, etc., we must at the same time consider that a particular structure renders all of these possible. There is a whole body of arguments tied to the structure, function and form that render consciousness possible. One thing is to consider an object of will; another is to consider the will as a function that renders it possible for objects to fulfill this mode. I think of A. A is the object of my thinking, but there is something happening that makes thinking possible. Is it the nature of the object A (be it a value, a set of values or a stochastic distribution of values), or the nature of the process that enables A to be the subject of my thinking?

Consciousness as a class of contents is every phenomenological object one experiences. As a form, it is the way those objects are computed in order to be conscious.

The conscious mode is tied to phenomenological-inner experience, hence phenomenology is almost immediately recognized as the science of conscious contents. If there is no future to a science based upon phenomenological contents, then one should pay attention to the functional traits that underlie this mode of operation.

There is a class of sub-routines that enables an architecture to perform esthetic, ethical judgments, voluntary control, justification, etc. Observing the functional roles and the form that underlie these operations, one can access consciousness from the form point of view.

Suppose someone is learning how to drive a car, where the sequence ABC must be mastered. First, it is performed slowly and with full attention. As it is learned, it becomes quicker, less conscious and more automatic. The ABC sequence seems to be the same as it was in the beginning, but something changed in our brains that enabled the structure to handle the situation in a faster and more automatic mode. The brain structures that manipulate these events are different: while conscious novelties are largely handled by frontal structures, the automatic mode is highly circumscribed to cerebellar structures. [3] [4]

The ABC sequence can be the subject of conscious experience, i.e. phenomenal experience, and can also be, after learning, the subject of automatic manipulation. It must be stressed that ABC is not a triplet of single values: it can be strictly deterministic or stochastic, continuous or discrete. This explains the fact that learning enables one to perform a large range of different actions, all subsumed by a class of values. Within these values many computations can be performed, be they 'if...then' rules or statistical correlations of numerical series. All these computations are made within the limits defined by the possible values of the A's, B's and C's for the automatic mode. Whenever something new, risky, anomalous, etc. happens, the sequence is gated back to the conscious-phenomenological-attentional mode. ABC is an object that can be manipulated both by the conscious mode and by the automatic mode. There are several algorithms that may perform corrections during the automatic mode, but in certain situations ABC will be pushed back to the conscious mode.
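The gating just described can be caricatured in code. The following sketch is purely illustrative and not the article's model: the novelty measure, the ranges and the threshold are my assumptions. Each element of the learned ABC sequence is a class of admissible values, and any step falling outside its class is gated back to the 'conscious' mode.

```python
# Toy sketch (illustrative assumptions, not the article's model): each element
# of the learned ABC sequence is a class of admissible values; a step that
# falls outside its class is gated back to the conscious mode.

def novelty(value, learned_range):
    """Distance of a value from the learned class of values (0 if inside)."""
    lo, hi = learned_range
    if lo <= value <= hi:
        return 0.0
    return min(abs(value - lo), abs(value - hi))

def route(value, learned_range, threshold=0.5):
    """Choose the processing mode for one step of the sequence."""
    if novelty(value, learned_range) > threshold:
        return "conscious"   # novel/risky: attentional, frontal-style mode
    return "automatic"       # within the learned class: cerebellar-style mode

# ABC is a class of values, not a single triplet: each element has a range.
learned = {"A": (0.8, 1.2), "B": (1.8, 2.2), "C": (2.8, 3.2)}

routine   = {"A": 1.0, "B": 2.0, "C": 3.0}   # well-practised values
anomalous = {"A": 1.0, "B": 4.0, "C": 3.0}   # B falls outside its class

print([route(routine[k], learned[k]) for k in "ABC"])     # all automatic
print([route(anomalous[k], learned[k]) for k in "ABC"])   # B gated back
```

The point of the sketch is only that the same object ABC can be routed to either mode depending on a property of its values, which is the question the text goes on to examine.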

If ABC is an object, there must be something tied to it that enables the mutual gating mechanisms to go from the conscious to the automatic mode, and vice versa. Moreover, if ABC is not a single triplet but a class of values within a certain range, allowing the system to perform subtle and complicated corrections, what is the essential feature of ABC, qua object, that makes it phenomenologically relevant?

It is not phenomenological accessibility alone that makes ABC important. That is the result from the contents' point of view, since all the objects that populate our consciousness will enter phenomenological experience. It is being driven to a special mode, containing a certain class of functions that are not present in the automatic mode, that makes ABC conscious.

If consciousness is a mode and a class of contents, if there is always phenomenological experience (from the content point of view), what else can characterize consciousness from the structural point of view?

Voluntary control, the counterpart of will and freedom as conscious content, is one of the major structural traits that differentiate the conscious mode from the automatic one. During automatic computation there is not the slightest appearance of voluntary control, something that can be inferred from the fact that there are no wishes on the phenomenological-conscious screen.

Then, we can hypothesize that phenomenological experience is common to all conscious objects, and so is voluntary control. But, this is not trivially true because there are two classes of objects of conscious experience that seem to be detached from voluntary control: psychoses and dreams.

Voluntary control is an important feature of conscious manipulation, in opposition to the automatic mode. If the automatic mode performs complicated calculations, rule-based or statistically driven, they do not include voluntary modes of operations, nor phenomenological experience. But, if phenomenological experience is common to all forms of consciousness, the same might not be true of voluntary control that seems absent during psychoses and dreams.

Consciousness as content is phenomenological, as opposed to automatic, 'blind' operation. Whatever structure mimics mind powers must have phenomenological experience or voluntary control.

Voluntary control is a kind of special operation tied to wakefulness, creativity, justifications and actions but it seems absent during other common happenings of cognitive architectures: dreams and psychoses. During both there is a certain degree of phenomenological experience and a certain lack of control. Does it render voluntary control a bad functional-equivalent to conscious contents?

We will try to examine the object ABC that is able to be learned voluntarily, that is able to be handled by automatic modes, that can be present during dreams and during psychoses, trying to grasp the intricacies that enable this object, or class of objects, to be subject of four different modes of cognitive operation: wakefulness, dreams, psychoses and automatic operation. (table 1)

II. Wakefulness, dreams and psychoses

Let me quickly summarize what was said above about cognitive objects and the modes of operation upon them. Every object can in principle be conscious or automatic. It may be the same object that is gated from one mode to the other. Among the cognitive, qua phenomenological, objects one must include awake-type objects (objects one perceives during wakefulness), psychotic-type objects (objects one perceives while psychotic) and dream-type objects (objects one perceives while dreaming). All of the last three have phenomenal experience underneath.

If voluntary control is opposed to an automatic one (the fourth cognitive mode of computation), and if there is phenomenological experience in dreams and psychoses without explicit or effective voluntary control, then neither would automatic mean outside voluntary control, nor would voluntary mean conscious.

This is the story of all the failures to explain consciousness in a reasonable way, because there is something that characterizes it, phenomenological experience, that cannot be, at least until now, modeled or putatively ascribed.

Wakefulness = phenomenological + voluntary

Dreams = phenomenological + automatic (?)

Psychoses = phenomenological + automatic (?)

Automatic = non-phenomenological + non- voluntary

TABLE 1: four cognitive modes

What is the interest of this scheme for the problem of cognitive architectures? And what is its relation to engineering, from the models' point of view and from the conceptual point of view? If one doesn't pay attention to consciousness one never builds cognitive architectures (at least those that can help one to better understand the human mind, giving new elements for a better Psychology and a better Psychiatry); but if one pays attention to consciousness, every practical project and model will be doomed to failure, because consciousness seems foreign to these realms. Then, it is better to examine the above table carefully, searching for the traps that must be hidden. Voluntary control might be a good candidate for a functional synonym of phenomenological experience, and the existence of these two defective representatives -- dreams and psychoses -- might be a source to strengthen the formalisms that shall be proposed to underlie conscious control.

III. Environmental feeding and short-term memories

Table 1 contains many simplifications and was conceived as a way to help one turn from the common way of seeing consciousness from the phenomenological point of view (content) to a functional mode (form). What may be the trap that appears in the table? Sensorial information about the environment and about the system (be it a body or a machine) is continuously fed to short-term memories, mainly represented by the hippocampus [5], except during dreams, when sensorial feeding is almost interrupted, leaving the system exposed autonomously to its natural-endogenous frequency (a deep continuous oscillation that pervades the whole brain). [6]

Sensorial information is not an equivalent map of any of the above categories: phenomenological x non-phenomenological (from the content point of view) nor voluntary x automatic (from the form point of view). The input signal to the system that performs the four cognitive modes is the result of environmental information + the inner frequencies that pervade the system. This could explain why dreams mimic certain features of consciousness.

                        wakefulness   dream   psychoses   automatic mode

phen. experience        yes           yes     yes         no

voluntary control       yes           no      no          no

sensorial information   yes           no      yes         yes

TABLE 2: sensorial information across the four cognitive modes
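The structure of the table above can be restated as plain data and queried; this is merely a checkable restatement (the feature and mode names in code are mine), making explicit the point the next paragraph argues: voluntary control is the only feature that wakefulness alone possesses.

```python
# The four cognitive modes and their features, as given in the table above.
MODES = {
    "wakefulness":    {"phenomenal": True,  "voluntary": True,  "sensorial": True},
    "dream":          {"phenomenal": True,  "voluntary": False, "sensorial": False},
    "psychoses":      {"phenomenal": True,  "voluntary": False, "sensorial": True},
    "automatic mode": {"phenomenal": False, "voluntary": False, "sensorial": True},
}

def isolates_wakefulness(feature):
    """True if wakefulness is the only mode possessing the feature."""
    positives = [m for m, feats in MODES.items() if feats[feature]]
    return positives == ["wakefulness"]

for feature in ("phenomenal", "voluntary", "sensorial"):
    print(feature, isolates_wakefulness(feature))
# only 'voluntary' isolates wakefulness
```

Neither phenomenal experience (shared with dreams and psychoses) nor sensorial information (shared with psychoses and the automatic mode) singles out the fully conscious state; voluntary control does.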

If one wants to model cognition, one has to deal with phenomenological aspects, but why not choose voluntary control as the source of difference that must underlie the cognitive style of computation? The only objection would be the argument that there is no equivalence between voluntary and conscious, because there are dreams and psychoses, where control is lacking. But maybe voluntary control is still present in these two phenomenological-cognitive classes, in a deviant form. This may enrich a model, instead of representing an obstacle.

During dreams there is no environmental feeding, but there can be a certain amount of control regarding contents and the interruption of sleep. [7] And during psychoses? In principle there is voluntary control over a large range of behaviors during psychoses, except over the very subject of the diagnosis. Schizophrenia will be considered here as the prototype: a condition in which much goes wrong in the frontal lobe [8], with loss of command over the will, isolation (autism) and progressive lack of purpose and goal-oriented behavior.

One might then suggest that voluntary control is present in wakefulness in its purest and fullest form, in dreams as a quasi-absent mechanism, and in psychoses as a pathological combination of automatic and ill-voluntary control.

Voluntary control (in its pure and normal form) could in principle be ascribed to each of the three phenomenological cognitive states, despite being almost absent in dreams and psychoses. Instead of eliminating voluntary control from dreams and psychoses, the very examination of the nature of volition can enable us to understand the normal and the pathological aspects of the concept, which could be a good candidate to substitute for phenomenological experience from the content point of view.

IV. Algorithms and cognition

The very core of science is to produce models of knowledge that simplify, abstract, explain and predict phenomena. The idiosyncratic aspect of each object must be removed in order to find affinities and classes, and to find rules that connect classes in lawful forms.

Cognitive Science [9] [10] appeared as a revival of the old idea of intermediate levels of information processing. Behaviorism failed to explain behavioral phenomena based upon inputs and outputs, and internal representations were recruited to stabilize knowledge. But the enterprise committed two mistakes, from my point of view: a) it overlooked conscious phenomena as the real mark of mental phenomena, and b) it adopted a 'content' (as opposed to a form-functional) way of seeing mental categories.

The doctrine that states that objects and rules can be abstracted while preserving the nomic rules that govern the formation of these objects, and their rules of connection, led to strong Artificial Intelligence positions that dislodge neurology as the very substratum of cognition: "mind is software and brain is hardware". [11] [12]

The brain as a Universal Turing Machine was a mere implementer that could be dismissed, leaving to researchers the goal of finding the classes of objects and the classes of rules. The first would be a matter of semantics and ontology and the second a matter of syntax. As there were no rules of strict equivalence between the syntax of mental phenomena and of brain phenomena, the brain as a general implementer could be omitted. [13]

The criticism of adopting a 'content' point of view comes from the omission of consciousness and the strange way some authors define representations. When representations took the form of ideas, beliefs, etc., they became contentful, in spite of appearing as the form of mind phenomena. If one considers the sentence 'Paul believes that P', then P is the object and belief is the mode. This seems a way of building a system where the opposition between content and form is preserved. But beliefs are already mentally interpreted objects; they could thus be called structures from the mental point of view, but they would be contents of another order regarding brain operations. If there is no way to translate 'beliefs' radically into the brain vocabulary, then either belief is a functional-emergent predicate, or it is a kind of special mind-content that looks as if it were a mode regarding the object of belief, but is indeed a content too.

Adopting a kind of dissociationistic position, regarding cognition as a kind of algorithm but leaving the primitives of interpretation at the mental level, cognitive science precludes the distinction between voluntary algorithms and automatic algorithms, which is, in my view, the only one that grasps the very nature of cognitive phenomena. In doing so, cognitive science modeled only automatic modes of operation, leaving the doors open to a large class of criticisms that saw in the computer metaphor of the mind a mere syntax processor without any real semantic power. [14]

If cognition is not defined with regard to the phenomenological aspects of consciousness, or their equivalents -- voluntary modes of operation, be they normal or deviant -- cognition is only automatic operation. We know that the automatic computations that take part in our life are very complicated. Maybe some are rule-based (more tied to a software way of considering the mind) and others are more shadowed and statistical, more tied to a neural-net way of considering the mind. But both traditional AI and connectionist models are unable to grasp the essence of cognition, because they don't face the problem of voluntary control, or of phenomenological experience. Thus, they are always driven by complicated formalisms and data analysis, but fail to answer simple questions: does the architecture understand what is going on? Does the architecture have control over its acts?

There is a very common way to escape from these questions, giving complicated philosophical explanations: it is a matter of imputation whether something possesses consciousness or not. [15] This is the well-known Turing test. If the computer is able to lie, pretending to be what it is not, then it has cognitive abilities.

The answer to these questions may be twofold:

a) one may say that it is a matter of time until we reach enough computational power and memory to enable a computer to pass the test;

b) it is a mistake to consider this, because the very essence of cognition is tied to non-algorithmic operations. [16]

I think one should not abandon the algorithmic metaphor to the mind, but the nature of the algorithms that will mimic cognition must be retailored regarding the very nature of the opposition between voluntary and automatic control.

Algorithmic considerations of mind can be seen as legitimate science regarding the abstractive nature of the enterprise. The rules of cognition must be abstracted while preserving their nomic form. This is science and determinism. Without a kind of determinism of structure, regularities don't hold, and science becomes impossible. There must be an equivalence between a computational-algorithmic credo and deterministic and causal theories: both seek regularities that can render contingencies explainable and predictable.

When, then, is Cognitive Science wrong, producing only cognitive systems in a very degraded sense of 'cognitive'? When it takes the objects of mental experience, hence conscious objects, and transforms them into mental forms, in the case of beliefs, or into mental blocks in the case of genuine mental objects, as in the case of rooms and tables.

We don't have the faintest idea of how the brain computes the frequencies of neurons to codify mental objects and rules. But we can suspect that the complete dissociation of brain and mind was an extreme misunderstanding of what a science of the mind must pursue. The objects and the rules in a cognitive architecture must be abstractions, but they have to pay attention to all the potential computations that a cognitive architecture can perform. Comparing a cognitive architecture with a Turing machine commits the mistake of reducing all the laws to a kind of predicate calculus (a branch of Mathematical Logic). Were there laws that connect things in another manner, not describable by such rules, Turing machines would not grasp the very nature of cognition. Comparing a cognitive architecture with a neural network commits the mistake of adopting as legitimate categories that are under suspicion: mental categories. Even a very complicated neural architecture has to have an interpretation. The categories that will be recruited to interpret nodes, attractors and convergent solutions will be the chosen categories: if one interprets a neural network as a system identifier, then it may be closer to the brain but far from the mind. If one interprets it with mental blocks, then one is farther from the brain than the traditional AI proposer was.

Considering a Turing-machine argument to model cognition is a rather syntactical prejudice that may lead to a false understanding of the mind. Considering a neural-net model of the mind is a semantical prejudice, because the blocks that will be the interpreters will in the end be the mind contents. What is difficult is to choose primitives or state variables. Identification of a system is sometimes much more complicated than the rules of processing the chosen elements. [17]

There always remains the question whether a Universal Turing Machine would be able to compute all the functions a neural net does. Of course, if one answers no, then Penrose could be right, but the nature of the problem would not be the quantum problems sometimes alluded to, but only an impossibility of translating rules of the differential calculus into the predicate calculus.

Adopting a more cautious and humble position, it is difficult to ascertain whether the very nature of mind phenomena is classical or quantal. In a certain sense, regarding models, there is only a problem of having structure (a kind of nomic relation between the elements) and a matter of measure. Both classical mechanics and quantum physics have deterministic structures [18] [19], and the problem of quantum physics is that there is no strict causal relation between isolated elements, but only probabilistic distributions. In a very broad sense of algorithm, one can consider the connection of A and B, be it a strict or a stochastic link. Abstractions exist exactly to affirm that the distribution of A's has a certain relation to the distribution of B's. Determinism is not derogated: stochastic determinism is still determinism, not the mere chance and randomness that would prevent science from existing.

Then, the discussion about the nature of the physical event that underlies cognition is not a very problematic one. Taking algorithms in a broad sense, that of necessary stochastic connection, one can hold that a science of cognition is algorithmic and computational. But, why has it failed? Because it has overlooked the four aspects of a cognitive system: the voluntary-conscious mode, the dream mode, the psychotic mode and the automatic mode.

All the algorithms that have been tailored might have problems because they have syntactical prejudices, semantical prejudices, and they don't face the problem of mind as a structure that enabled us to have genuine cognition.

V. Semantics and syntax

One of the clues that might underlie cognition is that the entities that appear on our conscious mental screen are not the best candidates for a semantical interpretation of the world. This position is called eliminative materialism and holds that a genuine science of cognition will be a neurophysiological analysis, full of neurophysiological terms. [20] [21] The hard dissociation between mind and brain would be a mistake, and the future would be to consider cognitive what is explainable in brain terms ('brainly' syntactical). Representation, in this sense, would be a mere topography of the brain sub-systems. This extreme version of the reaction against 'strong AI' commits a kind of simplification, because it considers all our phenomenological experience a mere result of knowledge acquired through ordinary language. Voluntary control could in this sense be considered only as a trait that happens whenever the locus of computation is the neocortex. Mind has more to do with contents, and if we believe in our contents this must have adaptive significance. If, however, it is impossible to map contents from the mind realm to the physical entities at the brain level, why not ask whether the separation between brain syntax, brain semantics, mind syntax and mind semantics is not a trap?

Brain syntax means firing action potentials at synapses, modifying strengths of connectivity between them, converting pulse codes to frequency codes at the level of neuron assemblies, etc. [23] (prototype discipline: Neurophysiology)

Brain semantics means a localizationistic style of recognizing structures and objects, like olfactions, visions, emotions, etc. (prototype discipline: Neuropsychology)

Mind syntax means finding rules of connection and of formation of significant sentences and actions. It can be a predicate calculus in the traditional AI version, or a differential calculus (or Statistical Mechanics) analysis in neural networks. (prototype discipline: Cognitive Science with strong emphasis on computational models)

Mind semantics means the objects we perceive, sense and feel, and the modes in which our conscious experience stands for them (in the case of intentional objects like beliefs, fears, hopes, etc.). (prototype discipline: Cognitive Science with strong emphasis on Psychology and Philosophy)

To admit that there might be a trap, or an anomaly, in the relation between mind and brain is only another version of saying that what we need is a way of connecting brain syntax and mind semantics. But in such a crude version this seems dull. The inversion from conscious phenomenological contents to voluntary modes can partially illuminate the problems because:

a) it preserves the brain syntax since it can propose a mechanism of recognizing categories based upon syntactical features

b) it preserves brain semantics because it preserves the gating structures, particularly frontal lobes, cerebellum and hippocampus

c) it preserves mind syntax because it doesn't deny that there are rules of connection between elements that mimic brain connections in a quasi-homologous form.

d) it preserves mental semantics because it doesn't deny freedom to exist qua conscious objects, but it encapsulates freedom in a functional mode, that has deep brain reasons underneath, called voluntary control.

In other words, the opposition between semantics and syntax precludes our understanding of the very nature of cognition, leading one to model either the brain or the mind, but never their connection. A model of the connection might surpass this dichotomy and propose a formalism that treats the four modes of cognitive computation.

VI. Topological computation

Let us return to the object ABC that must be learned. Consider ABC an object, a sequence of operations, a rule connecting A to B and then to C, a transformation of one into the other, etc. Consider A, B and C as having one value, a set of values, as being state variables, probability distributions, etc. It doesn't matter: what counts is to ask what particular trait in the object ABC makes it the subject of voluntary-conscious computation or of automatic computation. Is it the same object ABC that is computed in the conscious-voluntary mode, in the dream mode, in the psychotic mode, or in the automatic mode? If it is not the same object, there must be something one level above that is the same, i.e. there must be some kind of designation that assigns the ABC sequence to the performance of the frontal lobe or of the cerebellum. The final products, the motor acts that command the car, are (almost!) indistinguishable. As we don't want to multiply entities, it is not advisable to suppose the existence of a supervisory system that qualifies the ABC objects, because the problem would simply be transferred to this supervisory system in an infinite regress. [23]

If it is the same object that is processed in the frontal lobe (voluntary) or in the cerebellum (automatic), then there must be some kind of label on it that enables the system to recognize when it has to be gated from one mode to the other.

The formal concept that suggests itself when one deals with slight variations in a variable that may or may not lead to dramatic changes in the solutions is that of structural stability.

Consider a pendulum whose friction coefficient may be positive, negative or zero. In the domain of positive and negative values of this parameter, a slight perturbation e does not lead to topological variation in the state space (see Table 2): the qualitative solution is the same, and the system shows structural stability. For the zero value of this parameter, a slight perturbation will change the system's behavior dramatically from the topological point of view. This is structural instability, and this value of the parameter is called a bifurcation value.

TABLE 2

                          parameter                  parameter + e
structural stability      ordinary value (OPV)       nothing happens topologically
structural instability    bifurcation value (BPV)    the topology changes dramatically

Bifurcations are topological variations in a system's behavior that occur when parameters take critical values (called bifurcation values) which, when slightly perturbed, lead to dramatic changes in the space of states.
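The pendulum example can be sketched numerically. In this minimal Python sketch (the function names, thresholds and integration scheme are my own, not from the text), the sign of the friction coefficient c fixes the qualitative picture, and only the bifurcation value c = 0 is sensitive to perturbation:

```python
def pendulum_energy_ratio(c, dt=0.001, steps=20000):
    """Integrate the linearized pendulum x'' + c*x' + x = 0 by forward
    Euler and return final/initial energy. c is the friction coefficient."""
    x, v = 1.0, 0.0
    e0 = 0.5 * (v * v + x * x)
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-c * v - x)
    e1 = 0.5 * (v * v + x * x)
    return e1 / e0

def qualitative(c):
    """Crude topological classification of the origin from the energy trend."""
    r = pendulum_energy_ratio(c)
    if r < 0.9:
        return "spiral sink (damped)"
    if r > 1.1:
        return "spiral source (anti-damped)"
    return "center (neutral)"

# Perturbing an ordinary value (c = 0.1 -> 0.1 + e) changes nothing
# topologically; perturbing the bifurcation value c = 0 flips the picture.
print(qualitative(0.1))    # spiral sink (damped)
print(qualitative(-0.1))   # spiral source (anti-damped)
print(qualitative(0.0))    # center (neutral)
print(qualitative(0.05))   # a perturbed bifurcation value: spiral sink (damped)
```

The forward-Euler integrator slightly inflates energy, so the "neutral" band is deliberately wide; only the qualitative classification matters here, matching the structural-stability argument of Table 2.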

If one is dealing with values in a distribution, of course each perturbation of a parameter within a branch of structural stability will lead to some variation in the system's behavior. However, this variation will lead to solutions homeomorphic to those of the perturbed system, i.e. both solutions will share common traits from the qualitative-topological point of view.

Chaos, a phenomenon that may occur in deterministic non-linear dynamical systems and is characterized by sensitivity to initial conditions, is structurally stable. In spite of having an enormous number of states in the state space, all unpredictable, small perturbations in the parameter values do not lead to topological variation. Chaos can occur after successive bifurcations.[24][25] Walter Freeman and other researchers have proposed that there is chaotic behavior in the Central Nervous System and that it might be the source of the richness and variability some systems present.[26] Moreover, the school of this author and others is strongly neurophysiological, far from strong AI, etc.
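The route to chaos through successive bifurcations can be illustrated with the standard logistic map, a textbook example that is not used in the text itself (function names are my own):

```python
import math

def attractor(r, n_transient=2000, n_sample=64):
    """Iterate the logistic map x -> r*x*(1-x), discard transients, and
    return the set of distinct values visited (rounded to detect cycles)."""
    x = 0.2
    for _ in range(n_transient):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(n_sample):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return seen

def lyapunov(r, n=20000):
    """Average log|f'(x)| along an orbit; a positive value signals chaos."""
    x, s = 0.2, 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        s += math.log(abs(r * (1 - 2 * x)) + 1e-12)
    return s / n

print(len(attractor(2.8)))  # 1 state: fixed point
print(len(attractor(3.2)))  # 2 states: after the first bifurcation
print(len(attractor(3.5)))  # 4 states: after the second bifurcation
print(lyapunov(3.99) > 0)   # True: the chaotic branch beyond the cascade
```

Each period-doubling is a bifurcation in the sense above; past the accumulation point the orbit becomes chaotic yet, for a range of r, remains qualitatively the same under small parameter perturbations, which is the structural stability of chaos invoked in the text.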

The concept of structural stability might be a very rich one for explaining the nature of the computations that the CNS performs in the four modes: voluntary, automatic, psychotic and dream.

Suppose one had a description of the system (which is impossible given the codimension). Novelties and voluntary control correspond to structurally unstable modes. As soon as the system's parameters are set so as to obtain structural stability, the mode becomes automatic. The automatic mode is a large set of values, all with qualitatively homeomorphic performances. Robust chaos could be a kind of mixed state of consciousness and automatisms, such as occurs in psychoses. Weak or non-robust chaos would be the state of dreams, when the system may perform some kind of computation in order to reset variables, eliminating spurious attractors or reinforcing memories. [27][28]

From the point of view of contents, of the system's flow (the temporal function that describes the system's dynamics)[24], there is always a difference when one alters the value of a parameter. Consciousness as phenomenological experience sees only states in the state space. The computations that perform the gating between the voluntary-frontal mode and the automatic-cerebellar mode must analyze homeomorphisms and topological similarities. Determinism still holds but predictability, for a certain range of bifurcation values, is severely affected, which leads to the dissociation between explanation and predictability.

When, given the state A at t, we can predict and explain the state B at t + 1, we have both explanation and predictability. [29]

When we have only the connection of A to a set of possible values of B, we have explanation without predictability. When both A and B are probability distributions there is still determinism, but the extreme of chance-like behavior is reached. However, explainable things still happen that keep the A's and B's in a deterministic relation. That is why there is, in a certain sense, no interest in examining whether the nature of the problem is classical or quantal.[30]

VII. Information and Brain Syntax

The relevance of considering the computational means the brain uses to gate from the voluntary to the automatic mode, based upon the notion of structural stability and dynamics, is that it allows us to understand phenomena that are considered information-driven in the CNS.

The major concept that inspired Cybernetics and later Cognitive Science was that of information. However, information as content doesn't say much and rests on a misunderstanding of Shannon's original work.[31]

There are basically two formalisms that describe a theory of communication. D. Gabor's [32] uses the formalism of quantum mechanics and is very popular in Quantum Neurodynamics. But there is a strong equivalence between Shannon's and Gabor's formalisms, as shown in a recent article [33]. Both are measures over a probability density function, and hence the limits of interpretation are the same. Probability is tied to a state, not to a content. One may then understand informationally why the hippocampus gates states to the voluntary mode or to the automatic mode. Whenever, given one state at t, the next state at t + 1 is predictable, its probability is one, and hence the information, measured in bits, is zero. Structural instability means that there is more than one possible state at t + 1, hence the informational entropy increases. Richness of states means voluntary, and means a gating mechanism to the frontal lobe. This is compatible with certain neuropsychological syndromes in which a certain degree of semantic understanding of information without consciousness (priming effects, etc.) happens to occur. These problems are interpreted under the label of shallow outputs from the hippocampus and other structures tied to short-term memories.[5]
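The informational argument above can be made concrete with Shannon's measure itself. In this sketch (the helper name is hypothetical), a deterministic next-state distribution carries zero bits, while structural instability, offering several possible next states, raises the entropy:

```python
import math

def entropy_bits(p):
    """Shannon entropy H = -sum p_i * log2(p_i) of a next-state
    distribution, ignoring zero-probability states."""
    return sum(-q * math.log2(q) for q in p if q > 0)

# Structural stability: the next state is certain -> zero information.
print(entropy_bits([1.0]))       # 0.0 bits
# Structural instability: four equally likely next states -> 2 bits.
print(entropy_bits([0.25] * 4))  # 2.0 bits
```

On this reading the hippocampal gate needs no access to content: a zero-entropy (fully predictable) transition can be shipped to the automatic mode, while a high-entropy transition demands the voluntary-frontal mode.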

Information based on forms and states, as proposed by the scheme above, is able to explain these problems, despite being entirely theoretical and speculative. We cannot yet tell whether these ideas will open new directions of research, but we suggest they represent a change in the way one sees cognitive systems and a legitimate cognitive science.

VIII. Sketch of a model

I propose that the gating mechanism that renders topological computation possible is the presence of bifurcation parameter values (BPV) or ordinary parameter values (OPV). (Fig. 1)

When information reaches the CNS through short-term memories, mainly in the hippocampus, an evaluation of stability from the structural point of view takes place. If the system is unstable, information goes to the frontal lobes; as it becomes stable, it goes back to the cerebellum, the main site of automatic behavior.

Information is fed to both the automatic and the voluntary systems all the time, allowing a continuous evaluation of the stability problem. The effectors are the same, but the structures that trigger the process are different. From the dynamical point of view, each of these three structures can be considered an oscillator (van der Pol) or a phase-locked loop (PLL). To show the qualitative aspects we want, it is enough to recall that large assemblies of neurons, treated as dynamical systems, will exhibit bifurcations and even chaos in the space of frequencies. Further analysis and comments must await another work.
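Since each structure is to be treated as a van der Pol oscillator, a minimal numerical sketch (parameter values, integration scheme and function name are my own, not the text's) shows the qualitative change such a gating mechanism would have to detect: below the bifurcation the oscillation dies out, above it a self-sustained limit cycle appears.

```python
def van_der_pol_amplitude(mu, dt=0.001, steps=200000):
    """Integrate the van der Pol equation x'' - mu*(1 - x^2)*x' + x = 0
    by forward Euler from a small initial condition and return the peak
    |x| over the last quarter of the run."""
    x, v = 0.01, 0.0
    peak = 0.0
    for i in range(steps):
        x, v = x + dt * v, v + dt * (mu * (1 - x * x) * v - x)
        if i > 3 * steps // 4:
            peak = max(peak, abs(x))
    return peak

# mu < 0: the origin is stable and the small oscillation decays to rest.
print(van_der_pol_amplitude(-0.5) < 0.01)  # True
# mu > 0: a limit cycle of amplitude about 2 appears (Hopf bifurcation
# at mu = 0, the bifurcation parameter value of the sketch in Fig. 1).
print(van_der_pol_amplitude(0.5) > 1.5)    # True
```

The parameter mu plays here the role of the BPV/OPV distinction of the model: crossing mu = 0 is a topological change the gate can register without inspecting any content.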

FIG. 1 (graphic not available)

CONCLUSION

One must seek a legitimate way of reconciling algorithms that enable the architecture to perform four classes of computation: voluntary, dream, psychotic and automatic. Due to conceptual mistakes, Cognitive Science has until today been only Cognitive Systems Analysis, proposing algorithms that can be automatic but that don't pay attention to the crucial aspect of cognition: the mind has two axes, one firmly tied to Nature, the realm of brain physics, and another firmly tied to Culture and to one's own personal story. Freedom and responsibility are aspects that are possible thanks to the mind, hence the brain, maybe both, or none.

The strongest assumption is that there is a map from stability, on the syntactical side, to ambiguity, on the semantical side, or that dynamical systems can describe the intimacies of the mind's structure.

REFERENCES

[1] J.R. Searle, The Rediscovery of the Mind. MIT Press, 1992

[2] J. Fischman, "New Clues Surface About the Making of the Mind" in Science, vol. 262, p. 1517, Dec. 1993

[3] J. Horgan, "Fractured Functions: does the brain have a supreme integrator?" in Scientific American, Dec. 1993

[4] M. Ito "How Does the Cerebellum Facilitate Thought?" in T.Ono,L.Squire, M.Raichle, D.Perrett, M.Fukuda (ed) Brain Mechanisms of Perception and Memory. Oxford University Press.1993

[5] M.Moscovitch, C.Umilta, "Conscious and Nonconscious Aspects of Memory: A Neuropsychological Framework of Modules and Central Systems" in R.Lister,H.Weingartner (ed), Perspectives on Cognitive Neuroscience. Oxford University Press. 1991

[6] R. Llinás, D. Paré, "Commentary of Dreaming and Wakefulness" in Neuroscience, vol. 44, n. 3, 1991

[7] J.Hobson, The Dreaming Brain. Basic Books. 1988

[8] J.Gold, D.Weinberger, "Frontal lobe structure, function, and connectivity in schizophrenia" in R.Kerwin (ed) Neurobiology and Psychiatry. Cambridge Medical Reviews. Cambridge University Press. 1991

[9] M.Posner (ed) Foundations of Cognitive Science. MIT Press.1989

[10] D.Osherson et al (ed) An Invitation to Cognitive Science. (3 volumes) MIT Press.1990

[11] J.Fodor, The Language of Thought. Harvard University Press. 1975

[12] Z.Pylyshyn, Computation and Cognition. MIT Press. 1986

[13] A.Anderson (ed) Minds and Machines. Prentice-Hall. 1964

[14] J.Searle, Minds, Brains and Science. Harvard University Press. 1984

[15] P.Churchland, T.Sejnowski, The Computational Brain. MIT Press. 1992

[16] R. Penrose, The Emperor's New Mind. Penguin Books. 1991

[17] D. Hammerstrom, "Working with neural networks" in IEEE Spectrum, July 1993

[18] E.Nagel, The Structure of Science. Harcourt, Brace & World, Inc. 1961

[19] M.Bunge, La investigación científica. Ariel.Methodos. Barcelona. 1985

[20] S.Stich, From Folk Psychology to Cognitive Science. MIT Press.1983

[21] P.Churchland, Matter and Consciousness. MIT Press. 1988

[22] W. Freeman, "Tutorial on Neurobiology: from single neurons to brain chaos" in International Journal of Bifurcation and Chaos, vol. 2, no. 3, 1992

[23] T. Shallice, From Neuropsychology to Mental Structure. Cambridge University Press. 1991

[24] J.Guckenheimer, P.Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer- Verlag. 1983

[25] R.Abraham, C.Shaw, Dynamics: The Geometry of Behavior. Addison-Wesley. 1992

[26] C. Skarda, W. Freeman, "How brains make chaos in order to make sense of the world" in Behavioral and Brain Sciences, 10, 161-195. 1987

[27] F. Crick, G. Mitchison, "The function of dream sleep" in Nature, vol. 304, July 1983

[28] J. Winson, "The Meaning of Dreams" in Scientific American, November 1990

[29] C.Hempel, Aspects of Scientific Explanation and other essays in the philosophy of science. The Free Press. 1965

[30] K. Pribram, Brain and Perception. Lawrence Erlbaum Associates. 1991

[31] H.Atlan, L'organisation biologique et la théorie de l'information. Hermann Editeurs.1992

[32] K. Pribram (ed), Rethinking Neural Networks: quantum fields and biological data. INNS Press. Lawrence Erlbaum Ass. 1993

[33] J.Piqueira "Information and Complexity" (submitted)