Part One
After returning to academia and teaching graduate-level courses in Behavior Analysis for over a year, I have seen some signs pertaining to the health and state of Applied Behavior Analysis become clear to me. Painfully clear.
I seem to be engaged in what may be a losing battle against what I term “junk behaviorism.” Let me elaborate.
“Junk behaviorism” is a term I’ve coined to describe a set of beliefs and practices that seem rampant in applied behavior analysis, but which, so far as I can tell, are based neither on science nor on B.F. Skinner’s experimental analysis of behavior.
Some preamble: Not long ago I was listening to the late, great George Carlin’s “A Modern Man” routine. Carlin had keen insights on our language. In “A Modern Man” he speaks just about every modern clichéd word or phrase that now infests our language. At one point in the routine he says, “I read junk mail, I eat junk food, I buy junk bonds and I watch trash sports!” (You can find many copies of the entire routine on YouTube and other sites, including transcripts.) Carlin’s routine served as an sD prompting me to think about other kinds of “junk” that we indulge in, including, alas, “junk behaviorism.”
Of course, in recent years some commentators have discussed what they term “junk science.” Wikipedia offers a definition of “junk science”: http://en.wikipedia.org/wiki/Junk_science
So, what is “junk behaviorism”?
1. It’s saying that you “reinforce the person,” when you discuss positive reinforcement. “I reinforced Joe the Plumber,” for instance. Well, how? By giving him a wall to lean against? From Skinner’s science we know that behaviorally all you can do is reinforce behavior. You don’t reinforce the person.
2. It’s calling an event or thing a “reinforcer” despite the absence of any evidence that it has functioned, or is currently functioning, as a reinforcer. “Verbal praise is the reinforcer for Jill the Plumber.” Or, “we will use tokens as the reinforcer for Janet the Student.” What? How do we know that verbal praise “is” the reinforcer, or that the tokens “will” reinforce anything (let alone reinforce Janet the Student)? We don’t. This is extremely faulty use of language. Careless. Dismissive. Even intellectually arrogant. But above all, conceptually unsound. The term reinforcer ought to be used only for events that have demonstrated a functional relationship with respect to behavior. Well then, what other term should we use? More about that in a bit.
3. Lack of clarity about what a reinforcer does. Sometimes students arrive at grad school after having worked for a year or two, or even several years, in some agency/clinic that provides “behavioral” services of various kinds to individuals “diagnosed” with various behavioral problems. In some cases they’ve learned that “reinforcement” “increases behavior.” Well, no it doesn’t. The phrase “increases behavior” is far too ambiguous. Case in point: In discussing the definition of behavior, some individuals wanted to defend “behaviors” that do not pass Lindsley’s “Dead Man’s Test.” A kid staying in his seat is thereby construed as behavior, even though a dead person could do better at this “behavior” than a live person ever could. That’s a bad pinpoint, when you apply the “Dead Man’s Test.” So, what would “increasing behavior” mean in this example? The kid staying in his seat for a longer period of time! Egads! Talk about turning Skinner’s science on its head! How many rotations per minute is Skinner spinning in his afterlife? (Said as an update to the common metaphor.)
A variation of this misconception is that “reinforcement” “increases the probability of behavior,” or that it “increases the likelihood of behavior.” While slightly better than the even more ambiguous “increases behavior,” these still qualify as bad phrases; phrases that obscure more than they clarify. In contrast, Skinner was very clear: a reinforcer affects the RATE OF RESPONSE. More specifically, a reinforcer increases the frequency of behavior over time, where frequency refers to, and means the exact same thing as, rate of response.
To get a rate of response you have to COUNT instances of behavior and determine how many there are per unit of time. You need to determine the frequency of behavior and then see whether that frequency changes over time. If it does, and if it increases, then you begin to have some evidence that the event, or thing, functioned as a reinforcer.
In terms of probability and changes to probability, Skinner was always very clear: probability referred to rate of response. This type of probability addresses the “how often?” question, not the “what are the odds?” question. If we loosely say that the “probability of the behavior increases,” in Skinner’s science we really mean that the response rate increased over time. The count per minute went from one level up to another level. For example, if we start “reinforcing” behavior, its frequency might increase from 5 per minute up to 20 per minute. Or, perhaps behavior increases from .1 responses per minute up to .5 responses per minute. Only if those sorts of increases in response rate occur do we begin to have evidence that we have reinforcement.
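To make that arithmetic concrete, here is a minimal sketch in Python (entirely my own illustration; the function name and the numbers are made up) of computing rate of response from raw counts and observation time:

```python
# Minimal, illustrative sketch: rate of response = count per unit time.
# The function name and example numbers are hypothetical.

def rate_of_response(count, minutes):
    """Count of movement cycles divided by observation time, per minute."""
    return count / minutes

# Baseline observation: 50 counted instances in 10 minutes.
baseline = rate_of_response(50, 10)   # 5.0 per minute

# After introducing the would-be reinforcer: 200 instances in 10 minutes.
after = rate_of_response(200, 10)     # 20.0 per minute

# Only an increase in rate over time begins to count as evidence
# that the event functioned as a reinforcer.
print(f"baseline {baseline}/min -> after {after}/min")
print("possible reinforcement effect" if after > baseline else "no evidence yet")
```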
4. Treating nonbehavior as though it is behavior. I have already alluded to how nonbehavior, such as remaining seated, is now construed as “behavior.” Well, only in the junk behaviorism world can this be so! Nonbehaviors represent a failure to pinpoint actions such that each instance of an action can be counted when it occurs. Nonbehaviors also confuse goals, outcomes, or results with behavior. “Remains seated” might well represent a desired goal (for the classroom teacher, perhaps). I won’t comment here on the desirability of this as a goal; we’ll deal with that at another time. Right now, suffice it to say that it’s a goal, and moreover a state of being, not a behavior. There’s no action in it. This is one reason why Lindsley came up with the “Dead Man’s Test.” Well, the “Dead Man’s Test” cuts against the grain of what appear to be modern-day junk behavioral practices in school or agency settings. Their definitions of behavior are sometimes so dysfunctional that goals and states of being are confused with movement and action. That represents a severe and profound failure to conceptualize behavior. In the long run, it will lead to failure of “behavioral” practices, and perhaps ultimately to the dissolution of behavior analysis as a science, to the extent that it really still is one.
5. Confusing “near-behaviors” with actual behavior. I got the term “near-behavior” from Jamie Daniels when I worked for Aubrey Daniels & Associates. I don’t know off-hand if Jamie published it, but let me give him credit. Words such as “use,” “try,” “get,” “give” and so on are “near-behaviors.” They sort of sound behaviorish, and sort of seem to imply that there’s some action. Yet, they remain very ambiguous. They do not refer to actual actions or movements. Ironically, words such as “do,” “respond,” and “behave” are themselves “near-behaviors”! Well, how does one “respond,” you should ask. Seek clarification. In junk behaviorism these terms are all used, and seem to be used rather thoughtlessly, as if precision and clarification don’t really matter.
6. “ABC.” In the field of behavior analysis the “three-term contingency” has become iconic. Moreover, it’s become declarified into the term “ABC,” which stands for “Antecedent, Behavior, Consequence.” This aligns well with Discrete Trial Training (DTT), which almost seems to have become the standard way of viewing behavior on the one hand and the procedure of choice on the other. In DTT there is a learner who is probably just sitting there, waiting. The learner, so to speak, sits across a table from a teacher or therapist, so-called. The teacher or therapist, so-called, conducts a “session” with the client learner. During a “session,” the client is presented with “stimuli.” These are the “antecedents.” The teacher or therapist, so-called, will present, one at a time, some item to the client. The item could be a flashcard with a picture on it, for example. This item is shown to the learner. The learner then is supposed to give some response — the “behavior” part of the “ABC” acronym. Let’s say that the learner does do this behavior. Then the teacher or therapist, so-called, will “deliver” a “consequence” or perform a “correction” routine, depending on how the client responded. Once that’s accomplished, the item is put aside and the teacher or therapist, so-called, picks up the next item, presents it, and the same routine is conducted. This takes place until the session ends, usually after a fairly short period of time. (I say that the person presenting these stimuli is a teacher or therapist, “so-called,” because a real teacher or therapist would understand that DTT represents but one procedure out of many to change behavior, and not always the best!)
Some people have the audacity to refer to the behavior in DTT as “operant” behavior. But, if you observe such DTT, the kid is mainly just sitting there, passively, awaiting environmental events to happen to him or to her. The response given is entirely reactive, not “operating on one’s environment” in any significant sense. The learner, to the extent that he or she is learning anything at all, may simply be learning to be passive; that events are to be presented to him or her. “Stimuli” are presented. Later on, after some response is given, “reinforcement” or “corrections” are likewise presented. Then one waits for the next “stimulus” to be presented.
This turns Skinner’s model on its head, too. One can imagine his spin rate accelerating (though not due to any reinforcement, since you can’t reinforce the dead!). I will concede that the actions of the teacher or therapist, so-called, represent operant behavior: That individual is clearly operating on his or her environment!
The “ABC” model has become reified, I contend, as the model of “operant” behavior. It has taken the so-called “three-term contingency” and morphed it into something different from what it was, and into something it never should have been.
In actual fact, the three-term contingency might be somewhat better expressed as Stimulus: (MC –> Consequence). The discriminative stimulus, sD, doesn’t “cause” the response to occur, though that seems implied in the “ABC” model. The sD occurs in relation to the (MC –> Consequence) contingency pair. In the presence of the sD, the MC –> Consequence relation entails a particular type of consequence, such as one that functions as a positive reinforcer. In an “sDelta,” which is just a different type of sD, the MC –> Consequence relation differs. Perhaps the consequence isn’t a positive reinforcer.
Let’s parse this out a little, since I’ve introduced some terms (“MC”) without defining them. You start with a two-term contingency relation, MC –> Consequence, where MC stands for “Movement Cycle.” A Movement Cycle is an instance of behavior. If it has a known function, you may call it a response. An MC has a beginning point and an ending point, and the organism can do another of the same type of MC once the current one finishes. Informally, we may say that an MC has a “start time,” a “do time,” and a “stop time.” Those are the boundaries of a single instance of an MC. In other words, an MC also represents some action or movement by the organism that you can count (which, in turn, enables us to compute the rate of response). The “consequence” in this relation may be understood better as simply the “effect” produced by the action. We can substitute action for MC and effect for the consequence to add clarification.
This “Action” –> “Effect” pair forms a two-term contingency. The two-term contingency can come under stimulus control, but it does not have to do so, and it certainly does not have to do so in the “ABC” model sense.
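To illustrate the countability point in code (my own sketch; the class and field names are hypothetical, not standard terms of art), a Movement Cycle can be modeled as a record with explicit start and stop times, so that complete instances can be counted and a rate computed:

```python
# Illustrative sketch only: a Movement Cycle (MC) as a countable record.
# Class and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class MovementCycle:
    start_time: float  # seconds into the observation; the "start time"
    stop_time: float   # seconds into the observation; the "stop time"

    @property
    def do_time(self):
        """Duration of the 'do' portion, bounded by start and stop."""
        return self.stop_time - self.start_time

# Three complete, countable instances observed in a one-minute window:
cycles = [MovementCycle(2.0, 3.5), MovementCycle(20.0, 21.2), MovementCycle(45.0, 46.1)]

# Because each MC has clear boundaries, instances can be counted,
# and count per unit time yields the rate of response.
rate_per_minute = len(cycles) / 1.0
print(f"{rate_per_minute} movement cycles per minute")
```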
In actual operant behavior, the organism moves around and acts upon its environment. It changes and alters the environment. If nothing else, it captures and engulfs some nutritious substance that sustains animal life, since the organisms we’re talking about, human organisms included, are animal life. The organism doesn’t sit there awaiting stimuli to come down at it. It moves. It operates on its environment. It changes things around. The environment differs somewhat after it has been operated upon. Moreover, the organism itself gets changed in some way, perhaps a small way, as a result of acting upon its environment. There is reciprocity in operant behavior between organism and environment.
All of this seems to be obscured by the “ABC” model. First, the “ABC” model ignores conditions of deprivation and aversive stimulation, which some behaviorists dub the “establishing operation” (though the term “potentiation” may work better). The “EO,” as the establishing operation is also called, is not an “antecedent event.” It doesn’t fit under that term. So, right away we’re faced with a fourth term.
Next, the “ABC” model leads us back into the old, and rightfully discredited, “S-R” model of behavior. Some people in ABA seem to think that the “A” “causes” the “B” to occur, and why should they think otherwise, given that the very model implies it? Moreover, the “A” gets put on an equivalent footing with the “C,” the consequence! But, in actuality, the “B –> C” relation is far more important in the operant behavior equation than the “A –> B” relation ever would be.
Unseen and unnoted, what also gets obliterated by the dysfunctional “ABC” model is the CONTINGENCY relation! This we can denote with another term to identify the relation between the “behavior” and the “consequence.” The contingency, in fact, is far more important than the “B” or the “C” themselves.
But note that in the “ABC” model, the question of what the contingency relation is becomes quite limited. How does one factor a schedule of reinforcement into that paradigm? Can you imagine running a VR 50 schedule in a DTT paradigm? I can’t either (see the sketch below). The model suggests, rather strongly, that EACH “behavior” will be consequated. And typically, each one is.
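To see the mismatch in concrete terms, here is a small simulation (my own illustration; all names and numbers are hypothetical) of a variable-ratio schedule. On a VR 50 schedule, reinforcement follows, on average, every 50th response, which is hard to reconcile with a trial format that consequates every response:

```python
# Illustrative sketch: simulating a variable-ratio (VR) schedule of
# reinforcement. On VR 50, reinforcement follows roughly every 50th
# response on average. All names and numbers are my own illustration.
import random

def simulate_vr(mean_ratio, total_responses, seed=1):
    random.seed(seed)
    reinforced = 0
    since_last = 0
    # Draw the required run length uniformly around the mean ratio.
    required = random.randint(1, 2 * mean_ratio)
    for _ in range(total_responses):
        since_last += 1
        if since_last >= required:
            reinforced += 1
            since_last = 0
            required = random.randint(1, 2 * mean_ratio)
    return reinforced

n = 1000
print(f"{n} responses earned {simulate_vr(50, n)} reinforcers on VR 50")
# Contrast with the "ABC"/DTT paradigm, where 1000 trials would
# typically yield on the order of 1000 consequences.
```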
In applying the “ABC” model with a DTT procedure, the question of measurement then arises. What does one measure? Well, the “behavior” that the client performs is deemed to be “correct” or “incorrect.” One knows the total number of presentations. So, it’s fairly easy to calculate the percent of behaviors that were correct. Percent correct becomes the measure of choice. It’s easy to do. The data needed to compute it are easy to “take.”
The model ignores time as a fundamental parameter of behavior, however. In principle, it would be possible to measure the LATENCY between when such a “stimulus” is presented to a client and when that client makes a response. Latencies could be charted directly onto a Standard Celeration Chart, because latencies really are frequencies. (Don’t think so? A latency is the count of 1 response per however much time elapsed between when the “stimulus” was first presented and the point in time when the behavior began.) The chart can handle latencies down to .006 seconds, which would indeed be an incredibly short latency. Of course, in the typical DTT situation, latency isn’t recorded, and some might object to recording it because the logistics of carrying out “trials” are already cumbersome enough as things stand. Scientifically, however, that’s no excuse. If the science is to be advanced further, then perhaps some enterprising individual will invent a measurement technology that makes recording such latencies as easy and convenient as percent correct recording is now. Of course, that won’t address the other lingering problems with the underlying paradigm implied by the “ABC” model.
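The conversion the parenthetical describes is simple arithmetic; here is a minimal sketch (my own, purely illustrative) of restating latencies as chartable frequencies:

```python
# Illustrative sketch: a latency restated as a frequency, per the
# parenthetical above -- one response per however much time elapsed.
# The example latencies are made up.

def latency_to_frequency(latency_seconds):
    """One response per `latency_seconds`, expressed as count per minute."""
    return 60.0 / latency_seconds

# A 3-second latency between "stimulus" presentation and the start of
# the response is a frequency of 20 per minute.
for latency in (3.0, 0.5, 12.0):
    print(f"latency {latency:5.1f} s -> {latency_to_frequency(latency):7.1f} per minute")
```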
7. Lack of clarity about the terms we use. I have put words such as “stimulus” in double quotes above because, again, unless there exists some evidence that a thing or event FUNCTIONS to exert stimulus control over a two-term Action –> Effect relation, the event or thing should not be called a stimulus. The same goes for “reinforcer,” “consequence,” “contingency” and “response.” All of these terms should be used only when we have demonstrated evidence that they functioned in some way. Otherwise, we end up with the “junk behaviorism” nonsense statement that “I tried the reinforcer, but it didn’t work.” Well, sorry to report that EVERY single reinforcer in the 4.5-billion-year history of this planet has worked — each and every time!
So, how do we get past the “junk behaviorism” tendency to use function words when we do not have evidence of function?
Dr. Og Lindsley supplied the answer back in the mid-1960s by suggesting we use two sets of terms: one to simply describe events as they are, and a second to be used only when we have evidence that those events functioned in some way. He named this the IS-DOES operant behavioral equation.
On the IS side of the equation, the term “antecedent event” would never be used to denote a thing or event that has demonstrated stimulus control over an Action –> Effect pair. Antecedent Event, abbreviated AE, would simply refer to events that happened before some behavior occurred. That’s all we know about them, that they took place before behavior, and nothing else. They may be functionally related, or may not be, but when discussing what they ARE, we don’t know what they DO. We don’t assume that they have a stimulus function, either. (Alas, because the term “antecedent” has become so deeply embedded in the junk behavioral culture as meaning the exact same thing as “stimulus,” it may be too late to revive “antecedent event” in the sense that Lindsley meant! This does not negate the point. It rather suggests we need to keep working at terminology.)
Likewise, use “response” only when a Movement Cycle has a functional relationship to other events; use Movement Cycle (MC) to simply describe an instance of behavior. Likewise, reserve “consequence” for those events that have had a demonstrated effect on response rate. If the function of an event that follows behavior in time is unknown, then use Subsequent Event (abbreviated SE). It makes perfect sense to say, “I tried the SE, but the frequency of behavior didn’t increase!” Well, try another SE to see if it will increase the response rate!
Likewise further, use “arrangement” to describe the numerical, temporal, or other relation between an MC and an SE. But once you have a clearly evident functional relationship between a response and a consequence, then, and only then, use the term contingency.
The IS side of the equation is thus written:
Program: (Antecedent Event: Movement Cycle — Arrangement –> Subsequent Event)
Using abbreviations:
P: (AE: MC — Arr –> SE)
The DOES side of the operant behavioral equation then becomes:
Disposition: (Stimulus: Response — Contingency –> Consequence)
Using abbreviations:
D: (sD: R — K –> C).
Note the use of parentheses and colons. A single instance of an MC (IS side), or R (DOES side), gets enclosed in the parentheses. Colons signify that the item in question might be either a discrete event or a more sustained condition (e.g., in a MULT schedule, responding on a VR schedule when the green light is ON — the light being on before, during, and after any given response). The time arrow, –>, gets used only to signify the temporal relation between events where we need to indicate it. In other words, if we put an arrow between the AE and the MC, we risk reintroducing the junk behavioral “S-R” mindset. To avoid that possibility, don’t put an arrow there. It doesn’t fit anyway.
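As a programmer’s gloss on the IS-DOES distinction (entirely my own sketch; the type and function names are hypothetical, not Lindsley’s), the IS side carries no functional claims, and the DOES-side vocabulary is warranted only once function has been demonstrated:

```python
# Illustrative sketch of the IS-DOES distinction. Type and function names
# are my own invention, not Lindsley's. IS-side records describe events
# with no functional claims; DOES-side terms require demonstrated function.
from dataclasses import dataclass

@dataclass
class IsSide:
    antecedent_event: str  # AE: merely happened before the behavior
    movement_cycle: str    # MC: a countable instance of behavior
    arrangement: str       # Arr: descriptive relation between MC and SE
    subsequent_event: str  # SE: merely followed the behavior

@dataclass
class DoesSide:
    stimulus: str          # sD: demonstrated stimulus control
    response: str          # R: an MC with a known function
    contingency: str       # K: demonstrated functional relation
    consequence: str       # C: demonstrated effect on response rate

def promote(observed: IsSide, rate_increased: bool):
    """Adopt DOES-side terms only when the evidence warrants them."""
    if not rate_increased:
        return observed  # no evidence of function: keep describing, try another SE
    return DoesSide(observed.antecedent_event, observed.movement_cycle,
                    observed.arrangement, observed.subsequent_event)

observed = IsSide("teacher says 'ready'", "writes digit", "1:1", "token given")
print(promote(observed, rate_increased=True))
```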
I must note that the operant behavioral equation was conceptualized by Dr. Ogden R. Lindsley in a 1964 paper, “Direct Measurement and Prosthesis of Retarded Behavior,” published in the Journal of Education. It morphed a couple of times, with his earlier acronyms and terms changing slightly. Then it fell into disuse when it appeared too difficult to engage would-be behavior analysts in learning the IS-DOES equation. Let me suggest that now we must reintroduce it. Moreover, I have tweaked the equation somewhat through the use of those parentheses and colons, for the aforementioned reasons. Will it work? Maybe, but we won’t know if we don’t try, try again!
Well, there’s a lot more “junk behaviorism” that afflicts the field of behavior analysis, and I’ll discuss that in Part Two of this article, and include the relevant references then.
— JE