

Intermediate variables arise in the course of the process or as intermediate results of computational operations. In principle, they can be calculated in advance if the disturbances and the set variables are known. If some intermediate variables must be held at an upper or lower limit, they are included in the list of parameters characterizing the process.

Intermediate variables can be introduced in any number, but each must be expressed as a function of the set variables and the disturbances before it can be used. In addition to the flows, temperatures, pressures, and other operating conditions at individual process stages, the list of intermediate variables may include the yield or conversion used to calculate the productivity and profit of an operation; calculated concentrations and reagent ratios; the average productivity of each unit (needed to allocate the supply of raw materials); and many other auxiliary parameters.
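The requirement above, that every intermediate variable be expressed through the set variables before use, can be sketched in Python. This is a purely illustrative sketch: all functional forms and coefficients are invented assumptions, not taken from the text.

```python
# Illustrative sketch: each intermediate variable is a function of the set
# variables (feed rate, temperature), so any number of them can be chained.
# All functional forms and numbers below are invented assumptions.

def conversion(temperature_c):
    """Hypothetical conversion as a function of a set variable (temperature)."""
    return min(0.5 + 0.002 * (temperature_c - 300.0), 1.0)

def productivity(feed_kg_h, temperature_c):
    """Product output: built on the intermediate variable `conversion`."""
    return feed_kg_h * conversion(temperature_c)

def profit(feed_kg_h, temperature_c, price=2.0, feed_cost=0.8):
    """Profit of the operation, built on the intermediate `productivity`."""
    return productivity(feed_kg_h, temperature_c) * price - feed_kg_h * feed_cost
```

Only the set variables appear as inputs; the intermediate quantities are never stored independently, which is exactly why an unlimited number of them can be introduced.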

The intermediate variables G and P are provided in inverted form. As will be discussed below, they can be used to form the parallel carry signals between individual sections of the 564IPZ when constructing multi-bit ALUs.

It is preferable to use intermediate variables: the scalar and vector potentials.


Here two intermediate variables, i and j, are introduced; their scope is limited to this block. The initial data are given as the global variables l, m, x, and the result is assigned to the global variable y. These global variables must be declared in one of the enclosing blocks that contain this block.

Obviously, the intermediate variables A and B should be placed in registers, which we will name RA and RB, respectively.


K denotes controlled intermediate variables, and f denotes uncontrolled ones.  


The CUPL language allows you to define intermediate variables that can be used later in expressions. Here it is convenient to define digit variables, zero through nine, in terms of the segment inputs: each is simply a large product (AND) term of the segment input variables, which can be read off from the digit images in Fig. 8.76. Finally, each binary output bit is written as the sum (OR) of the digit variables that set that bit. Negative logic levels are used because the 16L8 implements an AND-OR-invert matrix. This completes the logic specification in the language.
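Since Fig. 8.76 and the original CUPL listing are not reproduced here, the following is a hedged Python rendition of the same idea: "digit" intermediate variables formed as product (AND) terms over segment inputs a-g, with each output bit the sum (OR) of the digits that set it. The segment patterns are the conventional seven-segment encodings, assumed rather than taken from the figure, and the 16L8's output inversion is omitted.

```python
# Conventional seven-segment patterns for digits 0-9 (an assumption;
# the actual patterns would come from Fig. 8.76).
SEGMENTS = {
    0: "abcdef", 1: "bc",     2: "abdeg", 3: "abcdg",   4: "bcfg",
    5: "acdfg",  6: "acdefg", 7: "abc",   8: "abcdefg", 9: "abcdfg",
}

def digit_terms(inputs):
    """Intermediate variables: one product (AND) term per digit.
    `inputs` maps each segment letter 'a'..'g' to True/False."""
    return {
        d: all(inputs[s] for s in lit)
           and not any(inputs[s] for s in "abcdefg" if s not in lit)
        for d, lit in SEGMENTS.items()
    }

def to_binary(inputs):
    """Each output bit is the sum (OR) of the digit variables that set it."""
    t = digit_terms(inputs)
    return [any(t[d] for d in range(10) if (d >> bit) & 1) for bit in range(4)]

# Segments lit for the digit 5:
five = {s: (s in "acdfg") for s in "abcdefg"}
```

The point of the intermediate digit variables is the same as in CUPL: the large AND terms are named once and then reused in every output-bit equation.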

Now let us express in these formulas the intermediate variables /d, q(t) and As through the original functions.  

VM functions and function procedures can use input, output, and intermediate variables.  

This approach introduces intermediate variables of a psychological nature into the study of human behavior, variables expressing unobservable elements of the mechanism that motivates this behavior. However, proponents of this approach avoid analyzing the interpretation process itself, limiting themselves to recording external influences on the psyche and its external behavioral responses to those influences. Representatives of this approach consider observation and experiment the principal methods of studying behavior, by means of which one can deliberately induce a behavioral reaction in people and record its characteristics.

(intervening variable) An intervening variable is an unobserved link between two observed variables. Many of our assumptions about the causes of human behavior postulate intervening psychological variables that act as a link between stimulus and response. Consider an example. Imagine two boys on a playground. George pushes Sam, then Sam pushes George. At first glance, it appears that Sam's response (pushing George) was caused by the fact that George pushed him. However, to understand the causal relationship, we must assume the existence of an intervening variable: Sam was pushed (the stimulus), he thinks, "Aha, George pushed me, so I have the right to fight back" (the intervening variable), and he pushes George (the response). Introducing intervening variables allows us to understand why different people react differently to the same stimulus. For example, William runs away when George tries to push him, while David, in a similar situation, laughs. Perhaps the intervening variable for William was his thought: "George is stronger than me. If I don't run away, he'll push me again." David's laughter may stem from his explaining George's behavior as excessive playfulness or clumsiness. The intervening variable cannot be seen. We see only two things: the stimulus (George's push) and the response (pushing back, running away, or laughing). Psychotherapists work with their clients trying to understand the intervening variables that lead to maladaptive reactions. Psychoanalysts may look for intervening variables associated with experiences acquired in early childhood. Cognitive therapists can help people replace maladaptive thoughts (negative cognitions) with more adaptive ones (positive cognitions). Thus, a client who is afraid of the dark can be taught to redefine darkness as promising rest and relaxation. Psychologists explain consistency in human behavior by postulating such intervening variables as personality traits or abilities, which are relatively stable characteristics of people.
One may accept that Sam is pugnacious, William has low self-esteem, and David has a good sense of humor. The interpretation of a reaction depends on the intervening variable used. Imagine this situation: a child fails an exam. The intervening variable might be assumed to be competence, motivation to study hard, or the support of loving parents. Which of these three variables, ability, motivation, or parental support, was responsible for the failure? How a therapist helps the child achieve success depends on how the intervening variable is interpreted. Should the child be transferred to a lower grade, does he need stronger motivation, or is the problem not the child's at all, so that the therapist should work with the parents? If the intervening variable is chosen incorrectly, therapy may be ineffective. To assess intervening variables, psychologists use interviews and tests. Psychological theories postulate ego strength, locus of control, and cognitive dissonance as intervening variables. These unobservable variables are the link between stimuli and responses. Choosing the right intervening variable allows one to better understand and more accurately predict behavior. A. Ellis's rational-emotive therapy (RET) is based on the premise that cognitive intervening variables can be changed. See also Individual differences, Rational-emotive behavioral therapy. M. Ellin

Under the pressure of the three problems noted above - memory, motivation and cognition, most creators of learning theories supplemented Skinner's experimental analysis of environmental and behavioral variables with intermediate variables. Intervening variables are theoretical constructs whose meaning is determined through their relationships with a variety of environmental variables whose overall effects they are intended to summarize.

Tolman's expectancy theory

Thorndike, influenced by Darwin's premise of the evolutionary continuity of biological species, began the transition to a less mentalistic psychology. John B. Watson completed it with a wholesale rejection of mentalistic concepts. Acting in line with the new thinking, Tolman replaced the old speculative mentalistic concepts with logically defined intervening variables.

Regarding the subject of our discussion (reinforcement), Tolman did not follow Thorndike's example. Thorndike viewed the consequences of a response as of utmost importance in strengthening the associative connection between stimulus and response. He called this the law of effect, and it was the forerunner of modern reinforcement theory. Tolman believed that the consequences of a response affect not learning as such, but only the external expression of the processes underlying learning. The need to distinguish between learning and performance arose in the course of attempts to interpret the results of experiments on latent learning. As the theory developed, the name of Tolman's intervening variable of learning changed several times, but the most appropriate name was probably expectancy. Expectancy depended solely on the temporal sequence, or contiguity, of events in the environment, not on the consequences of the response.

Pavlov's physiological theory

For Pavlov, as for Tolman, a necessary and sufficient condition for learning was the contiguity of events. These events are physiologically represented by processes occurring in those areas of the cerebral cortex that are activated by indifferent and unconditioned stimuli. The evolutionary consequences of a learned response were recognized by Pavlov, but not tested under experimental conditions, so their role in learning remained unclear.

Guthrie's molecular theory

Like Tolman and Pavlov, and unlike Thorndike, Edwin R. Guthrie believed that contiguity was a sufficient condition for learning. However, the conjoined events were not defined by broad (i.e., molar) events in the environment, as Tolman argued. Each molar environmental event, according to Guthrie, consists of many molecular stimulus elements, which he called signals. Each molar behavior, which Guthrie called an "act," is in turn composed of many molecular responses, or "movements." If a signal coincides in time with a movement, that movement becomes fully determined by that signal. Learning a behavioral act develops slowly only because most acts require learning many component movements in the presence of many particular signals.

Hull's drive reduction theory

The use of intervening variables in learning theory reached its fullest development in the work of Clark L. Hull. Hull attempted to develop a general interpretation of the behavioral changes produced by both classical and operant procedures. Both stimulus-response contiguity and drive reduction were included as necessary components of Hull's concept of reinforcement.

Meeting the conditions of learning shapes the intervening variable: habit. Habit was defined by Hull as a theoretical construct summarizing the overall effect of a number of situational variables on a number of behavioral variables. The relationships between the situational variables and the intervening variable (habit), and then between habit and behavior, were expressed in the form of algebraic equations. Despite the use of physiological terms in formulating some of his intervening variables, Hull's experimental research and theory were concerned exclusively with the behavioral level of analysis. Kenneth W. Spence, a collaborator of Hull who contributed significantly to the development of the theory, was particularly careful to define intervening variables in purely logical terms.

B. F. Skinner. Operant behavior. The law of acquisition. Reinforcement at a fixed frequency and at fixed intervals.

Edward Chase Tolman (1886-1959)

Tolman's system is goal-directed behaviorism, which combines the objective study of behavior with attention to purposiveness, or orientation toward achieving a specific goal.

One of the early followers of behaviorism, Edward Tolman studied engineering at the Massachusetts Institute of Technology. He switched to psychology and, under Edwin Holt, began working at Harvard, where he received his Ph.D. in 1915. In the summer of 1912 Tolman studied in Germany with the Gestalt psychologist Kurt Koffka. In his final year of graduate school, while studying traditional Titchenerian structural psychology, Tolman was introduced to Watson's behaviorism. As a graduate student, Tolman questioned the scientific usefulness of introspection. In his autobiography, written in 1952, he wrote that Watson's behaviorism became a "powerful stimulus and support" for him.

The main provisions of Tolman's teaching are presented in his work "Goal-directed behavior in animals and humans" (1932). His system of goal-directed behaviorism may at first glance seem like a curious mixture of two contradictory concepts: goal and behavior. Attributing a goal to an organism implies invoking the concept of consciousness, a mentalistic concept that has no place in behavioral psychology. Nevertheless, Tolman made it clear that in his methodology and subject matter he remained a consistent behaviorist. He did not urge psychologists to accept the concept of consciousness. Like Watson, he rejected introspection and was not interested in any presumed inner experiences of organisms that were not accessible to objective observation.

Goal-directed behavior, Tolman wrote, can be defined in terms of objective behaviorism, without reference to introspection or assumptions about what an organism “feels” about a particular experience. It was quite obvious to him that any behavior is aimed at achieving a specific goal. For example, a cat is trying to get out of a “problem box”, a rat is mastering a maze, and a child is learning to play the piano.

As Tolman himself put it, behavior "reeks of purpose." Any behavior is directed toward achieving some goal, toward mastering some means. The rat repeatedly and persistently runs the maze, making fewer and fewer errors each time so as to reach the exit faster. In other words, the rat is learning, and the very fact of learning, for a rat or for a person, is objective behavioral evidence of the presence of a goal. Tolman dealt only with the responses of organisms. All his measurements were made in terms of changes in response behavior as a function of learning, and these measurements provide objective information.


Watson's behaviorists readily criticized the attribution of purpose to any kind of behavior, since purposiveness implies the assumption of consciousness. Tolman replied that it made no difference to him whether an organism has consciousness or not. Conscious experiences associated with goal-directed behavior, even if they occur, have no effect on the organism's behavioral responses. Tolman dealt exclusively with overt responses.

As a behaviorist, Tolman believed that the initiating causes of behavior and the final resulting behavior must be objectively observable and describable in operational terms. He proposed that the causes of behavior include five main independent variables: environmental stimuli, psychological drives, heredity, prior training, and age. Behavior is a function of all these variables, which is expressed by a mathematical equation.

Between these observed independent variables and the resulting response behavior (the dependent observed variable), Tolman introduced a set of unobservable factors that he called intervening variables. These intervening variables are the actual determinants of behavior. They represent the internal processes that link the stimulus situation to the observed response. The behaviorist formula S-R (stimulus-response) should now read S-O-R. The intervening variables are everything connected with O, that is, with the organism, that forms a given behavioral response to a given stimulus.

Because these intervening variables are not objectively observable, they are of no practical use to psychology unless they can be related to experimental (independent) variables and to behavioral (dependent) variables.

A classic example of an intervening variable is hunger, which cannot be observed in a human or animal subject. Yet hunger can be quite objectively and precisely linked to experimental variables, for example, to the length of time the organism has been deprived of food. It can also be linked to an objective response, or behavioral variable, such as the amount of food eaten or the rate at which it is consumed. In this way the unobserved intervening variable, hunger, can be precisely estimated empirically and so becomes available for quantitative measurement and experimental manipulation.
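As a purely illustrative sketch, the operational chain described here can be written down in Python. The functional form and all numbers are invented assumptions; the point is only to show how the unobservable variable is pinned between an observable independent variable and an observable dependent one.

```python
# Hypothetical operationalization of the intervening variable "hunger".
# Independent variable: hours of food deprivation (observable).
# Dependent variable: grams of food eaten (observable).
# "Hunger" itself is only a construct linking the two.

def hunger_index(hours_deprived, saturation_hours=24.0):
    """Invented monotone mapping from deprivation time to a 0..1 hunger score."""
    return min(hours_deprived / saturation_hours, 1.0)

def predicted_food_eaten(hours_deprived, max_grams=30.0):
    """Predicted consumption as a function of the intervening variable."""
    return max_grams * hunger_index(hours_deprived)

# Longer deprivation -> higher hunger -> more food eaten, up to a ceiling.
```

Because both ends of the chain are measurable, the middle term can be estimated and manipulated experimentally, exactly as the text argues.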

By defining independent and dependent variables, which are observable events, Tolman was able to construct operational descriptions of unobservable internal states. He initially called his approach "operational behaviorism" before settling on the term "intervening variables."

Intervening variables are unobservable and hypothesized factors in the organism that are actually determinants of behavior.

Intervening variables proved very useful for the development of behavioral theory insofar as they could be empirically related to experimental and behavioral variables. However, making this approach comprehensive required so enormous an amount of work that Tolman eventually abandoned all hope of "composing a complete description of even one intervening variable."

Learning theory. Learning played a vital role in Tolman's goal-directed behaviorism. He rejected Thorndike's law of effect, arguing that reward or encouragement has little effect on learning. Instead, Tolman proposed a cognitive theory of learning, suggesting that repeated performance of the same task strengthens the connections created between environmental factors and the organism's expectations. In this way the organism comes to know its surrounding world. Tolman called these learning-created connections gestalt signs; they are developed through repeated performance of an act.

The rat runs through the maze, sometimes exploring the correct and sometimes incorrect passages or even dead ends. Finally the rat finds food. During subsequent passages of the maze, the goal (search for food) gives purposefulness to the rat’s behavior. Each branch point has some expectations associated with it. The rat comes to understand that certain cues associated with the branch point do or do not lead to where the food is.

If the rat's expectations are met and it actually finds food, then the gestalt sign (that is, the sign associated with a given choice point) receives reinforcement. Thus the animal develops a whole network of gestalt signs at all the choice points of the maze. Tolman called this a cognitive map. This schema represents what the animal has learned: a cognitive map of the maze, not a set of particular motor skills. In a sense, the rat gains comprehensive knowledge of its maze or other environment. Its brain develops something like a field map that allows it to move from point to point without being limited to a fixed set of learned bodily movements.

A cross-shaped maze was used. Rats of the first group always found food in the same place, even if, in order to reach it, they sometimes had to turn left rather than right at different entry points. The motor responses differed, but the food remained in the same place.

The rats of the second group had to repeat the same movements every time, but the food was in a different place each time. For example, starting at one end of the plus maze, the rats found food only by turning right at the choice point; if they entered the maze from the opposite side, they still had to turn right to find food.

The results of the experiment showed that the rats of the first group, that is, those that had learned the location, oriented themselves much better than the rats of the second group, which had learned only responses. Tolman concluded that something similar occurs in people who know their neighborhood or city well. They can travel from one point to another by different routes because their brains have formed a cognitive map of the area.

Another experiment examined latent learning, that is, learning that cannot be observed while it is actually occurring. A hungry rat was placed in a maze and allowed to roam freely. At first there was no food in the maze. Could the rat learn anything in the absence of reinforcement? After several unreinforced trials, the rat was allowed to find food. Thereafter the rat's speed in running the maze increased sharply, showing that some learning had occurred during the period without reinforcement. The rat's performance quickly reached the level of rats that had received reinforcement on every trial.

Latent learning is learning that is not observable at the time it occurs.

B.F. Skinner (1904-1990)

For several decades the most influential figure in psychology was B. F. Skinner. Skinner graduated from college with a degree in English, membership in Phi Beta Kappa, and aspirations of becoming a writer. After reading about Watson's and Pavlov's conditioning experiments, he turned sharply from the literary aspects of human behavior to the scientific ones. In 1928 he entered Harvard University's graduate program in psychology, despite never having taken a psychology course. Three years later he received his Ph.D. After completing postdoctoral research, he taught at the University of Minnesota (1936-1945) and Indiana University (1945-1948) before returning to Harvard.

The topic of his dissertation relates to a position that Skinner followed steadily throughout his career. He proposed that a reflex is a correlation between stimulus and response, and nothing more. His 1938 book, The Behavior of Organisms, describes the basic principles of this system.

Operant behavior occurs without the influence of any external observable stimuli. The body's response appears spontaneous in the sense that it is not externally related to any observable stimulus.

The classic experimental demonstration involved lever pressing in a Skinner box. A food-deprived rat was placed in the box and given full opportunity to explore it. In the course of exploring, it inevitably touched the lever, which activated a mechanism that extended a tray of food. After receiving several portions of food, which served as reinforcement, the rat quickly acquired a conditioned response. Note that the rat's behavior (pressing the lever) operates on the environment and is instrumental in obtaining food. The dependent variable in this experiment is simple and direct: the rate of response.

The difference between respondent and operant behavior is that operant behavior operates on the environment surrounding the organism, while respondent behavior does not. The experimental dog in Pavlov's laboratory, restrained in its harness, can do nothing but respond (for example, salivate) when the experimenter presents a stimulus. The dog itself can do nothing to obtain the stimulus (food).

The operant behavior of a rat in a Skinner box, in contrast, is instrumental in the sense that the rat obtains its stimulus (food). When the rat presses the lever, it receives food; if it does not press the lever, it gets no food. Thus the rat operates on its environment.

Skinner believed that operant behavior is characteristic of everyday learning. Since behavior is, as a rule, operant in nature, the most effective approach to a science of behavior is to study the conditioning and extinction of operant behavior.

On the basis of this experiment, Skinner formulated his law of acquisition, which states that the strength of an operant behavior increases when the behavior is followed by a reinforcing stimulus. Although practice is needed to develop rapid lever pressing, the key parameter is reinforcement. Practice by itself achieves nothing: it merely provides the opportunity for additional reinforcement to occur.

Skinner's law of acquisition differs from Thorndike's and Hull's positions on learning. Skinner did not appeal to such consequences of reinforcement as satisfaction or discomfort, as Thorndike did. Nor did he attempt to interpret reinforcement in terms of drive reduction.

In the Skinner box the rat's behavior was reinforced at every lever press: each time the rat performed the correct action, it received food. Skinner noted that although in real life reinforcement is not always consistent or continuous, learning nevertheless occurs and behavior is maintained even when reinforcement is random or infrequent.

One Saturday evening Skinner discovered that he was almost out of food pellets. At that time (the 1930s) it was not yet possible to buy animal food from companies supplying research laboratories; the experimenter had to make the pellets by hand, a rather long and laborious process.

Instead of spending his weekend making food pellets, Skinner asked himself what would happen if he gave his rats reinforcement only once per minute, regardless of the number of responses. With this approach he would need far less food, and his supply should last the weekend. Skinner went on to conduct a long series of experiments testing various schedules of reinforcement.

In one such study, Skinner compared the response rate of animals that received reinforcement on every response with the response rate of those animals that received reinforcement only after a certain interval of time. The latter condition is called a fixed-interval reinforcement schedule. Reinforcement could be given, for example, once per minute or every four minutes. An important point in this case is that the experimental animal received reinforcement only after a certain period of time. Skinner's research showed that the shorter the interval between reinforcements, the more often the animal exhibits a conditioned response. Conversely, as the interval between reinforcements increases, the frequency of the response decreases.

The frequency of reinforcement also influences the extinction of a conditioned response. A response extinguishes faster when continuous reinforcement is abruptly stopped than when reinforcement had been delivered intermittently. Some pigeons produced up to ten thousand responses without reinforcement when they had initially been conditioned on a periodic, intermittent schedule.

Skinner also investigated fixed-frequency (fixed-ratio) reinforcement schedules. Here reinforcement is delivered not after a certain period of time but after a certain number of conditioned responses have been made. The animal's own behavior determines how often reinforcement is delivered: for example, obtaining the next reinforcer may require ten or twenty conditioned responses. Animals on a fixed-ratio schedule respond much more vigorously than animals on a fixed-interval schedule. Obviously, rapid responding on a fixed-interval schedule yields no additional reinforcement: the animal may press the lever five times or fifty, but the reinforcer appears only when the specified time has elapsed.
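As a rough illustrative sketch, the two schedules can be contrasted in Python by counting how many reinforcers the same stream of responses earns under each rule. The response times and schedule parameters below are invented for illustration.

```python
# Toy comparison of two reinforcement rules applied to one response stream.

def fixed_interval(response_times, interval):
    """Reinforce the first response after each interval has elapsed."""
    reinforced, next_available = 0, interval
    for t in sorted(response_times):
        if t >= next_available:
            reinforced += 1
            next_available = t + interval
    return reinforced

def fixed_ratio(n_responses, ratio):
    """Reinforce every `ratio`-th response, regardless of timing."""
    return n_responses // ratio

presses = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]  # seconds of lever presses
fi = fixed_interval(presses, 5)   # at most one reinforcer per 5 s
fr = fixed_ratio(len(presses), 3) # one reinforcer per 3 presses
```

Under the fixed-interval rule, pressing faster than once per interval earns nothing extra, which matches the observation above; under the fixed-ratio rule every additional press brings the animal closer to the next reinforcer.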

The sounds the human organism produces in the course of speech, Skinner argued, are also a form of behavior, namely verbal behavior. They are responses that can be reinforced by other speech sounds or by gestures, just as a rat's lever press is reinforced by food.

Verbal behavior requires two interacting people - a speaker and a listener. The speaker reacts in a certain way - this means that he utters a sound. The listener can control the speaker's subsequent behavior by expressing reinforcement, non-reinforcement, or punishment - depending on what was said.

For example, if a listener smiles every time a speaker uses a word, he or she increases the likelihood that the speaker will use that word again. If a listener reacts to a word by furrowing his brow or making sarcastic remarks, he increases the likelihood that the speaker will avoid using that word in the future.

Examples of this process can be observed in the behavior of parents as their children learn to speak. Inappropriate words or expressions, incorrect word usage, and poor pronunciation elicit a reaction quite different from the one produced by polite, correct speech.

The formula of behaviorism was clear and unambiguous: “stimulus-response.”

Meanwhile, within the circle of behaviorists there appeared outstanding psychologists who questioned this postulate. The first of them was a professor at the University of California, Berkeley, the American Edward Tolman (1886-1959), in whose view the formula of behavior should consist not of two but of three members, and therefore look like this: stimulus (independent variable) - intervening variables - dependent variable (response).

The middle link (the intervening variables) is nothing other than mental factors inaccessible to direct observation: expectations, attitudes, knowledge.

Following the behaviorist tradition, Tolman experimented with rats seeking the way out of a maze. The main conclusion from these experiments was that, on the basis of animal behavior strictly controlled and objectively observed by the experimenter, it can be reliably established that this behavior is governed not by the stimuli acting on the animals at the given moment, but by special internal regulators. Behavior is preceded by something like expectations, hypotheses, cognitive "maps." The animal builds these "maps" itself; they guide it in the maze. From them the animal, once placed in the maze, learns "what leads to what." The proposition that mental images serve as regulators of action had been substantiated by Gestalt theory. Taking its lessons into account, Tolman developed his own theory, called cognitive behaviorism.

Tolman outlined his ideas in the books "Goal-directed behavior in animals and humans" and "Cognitive maps in rats and humans." He conducted his experimental work mainly on animals (white rats), believing that the laws of behavior are common to all living beings and can be traced most clearly and thoroughly at the elementary levels of behavior.

The results of Tolman's experiments, presented in his main work "Goal-directed behavior in animals and humans" (1932), forced a critical rethinking of behaviorism's cornerstone scheme S-R ("stimulus-response").

The very idea of ​​goal-directed behavior contradicted the programmatic guidelines of the founder of behaviorism, Watson. For classical behaviorists, goal-directed behavior implies the assumption of consciousness.

To this Tolman stated that it does not matter to him whether an organism has consciousness or not. As befits a behaviorist, he focused on external, observable reactions. He proposed that the causes of behavior included five major independent variables: environmental stimuli, psychological drives, heredity, prior learning, and age. The behavior is a function of all these variables, which can be expressed by a mathematical equation.

Between the observed independent variables and the resulting behavior, Tolman introduced a set of unobservable factors, which he called intervening variables. These intervening variables are actually determinants of behavior. They represent those internal processes that link the stimulus situation to the observed response.

However, while remaining in the position of behaviorism, Tolman was aware that since intermediate variables are not subject to objective observation, they are of no practical use to psychology unless they can be linked to experimental (independent) and behavioral (dependent) variables.

A classic example of an intervening variable is hunger, which cannot be observed in a test subject (whether animal or human). Nevertheless, hunger can be quite objectively and accurately linked to experimental variables, for example, to the duration of the period of time during which the body did not receive food.

In addition, it can be linked to an objective response or to a behavioral variable, such as the amount of food eaten or the rate of food absorption. Thus, this factor becomes available for quantitative measurement and experimental manipulation.

In theory, intervening variables have proven to be a very useful construct. However, the practical implementation of this approach required such enormous work that Tolman eventually abandoned all hope of “compiling a complete description of even one intermediate variable.”

The results of his experiments forced Tolman to abandon Thorndike's law of effect, which was fundamental to the entire behaviorist doctrine. In his view, reinforcement has a rather weak effect on learning.

Tolman proposed his own cognitive theory of learning, holding that repeated performance of the same task strengthens the emerging connections between environmental factors and the organism's expectations. In this way, the organism learns about the world around it. Tolman called such connections created by learning gestalt signs.

Historians of science have ventured the bold suggestion that the father of behaviorism, John Watson, suffered from a specific disorder, anideism: a complete lack of mental imagery, which forced him to interpret all observed phenomena purely literally.

Tolman cannot be denied creative imagination; nevertheless, he too based his theoretical reasoning on objectively observable phenomena. What did he see in his experiments that made him go beyond Watson's ideas?

Here is a rat running through a maze, randomly trying moves that are either successful (it can move on) or unsuccessful (a dead end). Finally it finds food. On subsequent runs through the maze, the search for food gives the rat's behavior purposefulness.

Each branching point comes with certain expectations. The rat comes to “understand” that particular signs associated with a fork do or do not lead to the place where the desired food is located.

If the rat's expectations are met and it actually finds food, then the gestalt sign (that is, the sign associated with some choice point) receives reinforcement. Thus, the animal develops a whole network of gestalt signs at all choice points in the maze. Tolman called this a cognitive map.

This map represents what the animal has learned, not just a collection of motor skills. In a certain sense, the rat acquires comprehensive knowledge of its maze or, in other conditions, of whatever environment surrounds it. Its brain develops something like a field map that allows it to move in the right direction without being limited to a fixed set of learned body movements.

In a classic experiment described in many textbooks, Tolman's ideas found clear and convincing confirmation. The maze used in this experiment was cross-shaped. Rats of the same group always found food in the same place, even if, in order to get to it, they sometimes had to turn left rather than right at different entry points into the maze. The motor reactions, of course, were different, but the cognitive map remained the same.

The rats of the second group were placed in such conditions that they had to repeat the same movements each time, but the food was in a new place each time.

For example, starting at one end of the maze, a rat found food only by turning right at a certain fork; if released from the opposite side, it still had to turn right to reach the food.

The experiment showed that the rats of the first group, those that “studied” and “learned” the general scheme of the situation, oriented themselves much better than the rats of the second group, which merely reproduced learned reactions.

Tolman suggested that something similar occurs in humans. A person who has managed to navigate a certain area well can easily go from one point to another along different routes, including unfamiliar ones.

Another experiment examined latent learning, that is, learning that cannot be observed while it is actually happening.

A hungry rat was placed in a maze and allowed to roam freely. For some time the rat did not receive any food, that is, no reinforcement occurred. Tolman was interested in whether any learning takes place in such an unreinforced situation.

Finally, after several non-reinforced trials, the rat was given the opportunity to find food. After this, the speed of completing the maze increased sharply, which showed the presence of some learning during the period of absence of reinforcement. This rat's performance very quickly reached the same level as that of rats that received reinforcement on every trial.

It would be wrong to see Tolman as a “rat mentor” remote from human problems. His article, revealingly titled “Cognitive Maps in Rats and Men” (also available in Russian translation), became not only a collection of evidence against the S → R scheme, but also a passionate appeal to reduce the level of frustration, hatred and intolerance generated in society by narrow cognitive maps.

In view of the fact that this classic text risks remaining outside the circle of interests of our psychologists, we will allow ourselves an extensive and, it seems, very important quotation. Noting the destructive nature of human behavior, Tolman ends his article with these words:

“What can we do about this? My answer is to preach the powers of the mind, that is, broad cognitive maps. Teachers can make children intelligent (that is, open their minds) if they ensure that no child is over-motivated or over-frustrated. Then children will be able to learn to look around, to see that there are often roundabout and more careful paths to our goals, and to understand that all people are mutually connected to one another.

Let us try not to become over-emotional, not to be over-motivated to such an extent that we can deal only with narrow maps. Each of us must place ourselves in sufficiently comfortable conditions to be able to develop broad maps, to be able to learn to live according to the reality principle rather than according to the too narrow and immediate pleasure principle.”