2. Operant Reinforcement Theory
B. F. Skinner (1938, 1953, 1959/1972, 1969, 1971/1972, 1974), as a consequence of his genius and of his lifelong experimental research, advanced a stimulus-response theory of learning based on the reinforcement of operant behaviors. This theory has been called operant reinforcement theory. Interestingly enough, however, Skinner (1950) once wrote a paper in which he denied the necessity of learning theories, because he felt that all new situations would be investigated, irrespective of the existence of theories, provided a systematic, molecular type of research schedule were carried out. Nevertheless, Skinner altered his position three years later by accepting that he was a theorist, but only under his operational conception of “theory.” Skinner defined theory as “any explanation of an observed fact which appeals to events taking place somewhere else, at some other level of observation, described in different terms, and measured, if at all, in different dimensions” (Skinner, 1953, p. 26). This definition of theory further reinforced his position as an experimentalist and researcher of behavior.
According to Skinner (1953), an operant is any behavior produced by the organism in the absence of eliciting stimuli. This concept of operants is expressed in Skinner’s own words:
The unit of predictive science is, therefore, not a response, but a class of responses. The word operant [Skinner’s emphasis] will be used to describe this class. The term emphasizes the fact that the behavior operates upon the environment to generate consequences . . . . A single instance in which a pigeon raises its head is a response [Skinner’s emphasis]. It is a bit of history which may be reported in any frame of reference we wish to use. The behavior called “raising the head,” regardless of when the specific instances occur, is an operant (1953, p. 65).
Learning is defined as a change in the potentiality of a behavior, with varying degrees of permanence, as a consequence of reinforced practice (Farray, Note 2). Thus, Skinner’s perspective is that learning is produced when an operant behavior is reinforced. According to Skinner (1969), a reinforcer is an event, behavior, or material object that increases the frequency of any behavior upon which it is contingent. In addition to increasing the frequency of a given behavior, a reinforcer also increases the intensity with which that behavior is performed (Deese & Hulse, 1967). In Skinner’s own words, “the kinds of consequences which increase the rate [of a response, that is, “reinforcers”] are positive or negative, depending upon whether they reinforce when they appear or disappear” (1969, p. 7). Further, a reinforcer is likely to increase the occurrence of a given behavior regardless of the social desirability of the behavior in question. This last characteristic of reinforcers has been posited as an explanation of deviant behavior that produces pain, loss of privileges, or discomfort in the individual (Patterson, 1976; Patterson & Reid, 1970).
Instrumental or operant conditioning was first systematically researched by E. L. Thorndike (1911, 1932). Operant responses are to be differentiated from respondent, or Pavlovian, responses in that the former are emitted and the latter are elicited. Pavlovian responses gave rise to the type of learning termed classical conditioning; these responses and this learning paradigm were first researched by Ivan P. Pavlov (1906, 1927), and later by V. Bechterev (1932), both in Russia. In reference to the difference between operant and classical forms of conditioning, Skinner states:
In the Pavlovian experiment, however, a reinforcer is paired with a stimulus; whereas in operant behavior it (the reinforcer) is contingent upon a response [Skinner’s emphasis]. Operant reinforcement is therefore a separate process and requires a separate analysis. In both cases, the strengthening of behavior which results from reinforcement is appropriately called “conditioning.” In operant conditioning we “strengthen” an operant in the sense of making a response more probable or, in actual fact, more frequent. In Pavlovian or “respondent” conditioning we simply increase the magnitude of the response elicited by the conditioned stimulus and shorten the time which elapses between stimulus and response (1953, p. 65).
Skinner cogently makes the point that genetic determinants of behavior, even when present, are of limited use in the modification of behavior. To this effect, Skinner (1957) states:
Even when it can be shown that some aspect of behavior is due to season of birth, gross body type, or genetic constitution, the fact is of limited use. It may help us in predicting behavior, but it is of little value in an experimental analysis or in practical control because such a condition cannot be manipulated after the individual has been conceived. The most that can be said is that the knowledge of the genetic factor may enable us to make better use of other causes. If we know that an individual has certain inherent limitations, we may use our techniques to control more intelligently, but we cannot alter the genetic factor (p. 371).
© 1976 Angel Enrique Pacheco, Ph.D., C.Psych. All Rights Reserved.