Charting versus “Junk Behaviorism”

November 2, 2008

Part One

After having gotten back into academia and having taught graduate-level courses in Behavior Analysis for over a year now, some signs pertaining to the health and state of Applied Behavior Analysis have become clear to me.  Painfully clear.

I seem to be engaged in a perhaps losing battle against what I term “junk behaviorism.”  Let me elaborate.

“Junk behaviorism” is a term I’ve come up with to describe a set of beliefs and practices that seem rampant in applied behavior analysis, but which are beliefs and practices that are not based on science and not based on B.F. Skinner’s experimental analysis of behavior science so far as I can tell.

Some preamble: Not long ago I was listening to the late, great George Carlin's "A Modern Man" routine. Carlin had keen insights into our language. In "A Modern Man" he spoke just about every modern clichéd word and phrase that now infests our language. At one point in the routine he said, "I read junk mail, I eat junk food, I buy junk bonds and I watch trash sports!" (You can find many copies of the entire routine on YouTube and other sites, including transcripts.) Carlin's routine served as an sD prompting me to think about other kinds of "junk" we indulge in, including, alas, "junk behaviorism."

Of course, in recent years some commentators have discussed what they term "junk science." Wikipedia offers a definition of "junk science" here: http://en.wikipedia.org/wiki/Junk_science

So, what is "junk behaviorism"?

1. It’s saying that you “reinforce the person,” when you discuss positive reinforcement.  “I reinforced Joe the Plumber,” for instance.  Well, how?  By giving him a wall to lean against?  From Skinner’s science we know that behaviorally all you can do is reinforce behavior. You don’t reinforce the person.

2. It's calling an event or thing a "reinforcer" despite the absence of any evidence that it has functioned, or is currently functioning, as a reinforcer.  "Verbal praise is the reinforcer for Jill the Plumber."  Or, "we will use tokens as the reinforcer for Janet the Student."  What?  How do we know that verbal praise "is" the reinforcer, or that the tokens "will" reinforce anything (let alone reinforce Janet the Student)?  We don't.  This is extremely faulty use of language.  Careless. Dismissive. Even intellectually arrogant.  But above all, conceptually unsound.  The term reinforcer ought to be used only for events that have demonstrated a functional relationship with respect to behavior.  Well, in response to that, what other term should we use? More about that in a bit.

3. Lack of clarity about what a reinforcer does.  Sometimes students arrive at grad school after having worked for a year or two, or even several years, in some agency/clinic that provides "behavioral" services of various kinds to individuals "diagnosed" with various behavioral problems.  In some cases they've learned that "reinforcement" "increases behavior." Well, no it doesn't.  The phrase "increases behavior" is far too ambiguous.  Case in point: In discussing the definition of behavior, some individuals wanted to defend "behaviors" that do not pass Lindsley's "Dead Man's Test."  A kid staying seated in his seat is thereby construed as behavior, even though a dead person could do better at this "behavior" than a live person ever could.  That's a bad pinpoint, when you apply the "Dead Man's Test."  So, what's "increasing behavior" in this example? It'd be the kid staying in his seat for a longer period of time!  Egads!  Talk about turning Skinner's science on its head!  How many rotations per minute is Skinner spinning in his afterlife? (Said as an update to the common metaphor.)

A variation of this misconception is that “reinforcement” “increases the probability of behavior,” or that it “increases the likelihood of behavior.”  While slightly better than the even more ambiguous “increases behavior,” these still qualify as bad phrases; phrases that obscure more than they clarify.  In contrast, Skinner was very clear:  a reinforcer affects the RATE OF RESPONSE.  More specifically, a reinforcer increases the frequency of behavior over time, where frequency refers to, and means the exact same thing as, rate of response.  

To get a rate of response you have to COUNT instances of behavior and determine how many there are per unit of time.  You need to determine the frequency of behavior and then see whether that frequency changes over time. If it does, and if it increases, then you begin to have some evidence that the event, or thing, functioned as a reinforcer.

In terms of probability and changes to probability, Skinner was always very clear:  Probability referred to rate of response. This type of probability addresses the "how often?" question, not the "what are the odds?" question.  If we loosely say that the "probability of the behavior increases," in Skinner's science we really mean that the response rate increased over time.  The count per minute went from one level up to another level.  For example, if we start "reinforcing" behavior, its frequency might increase from 5 per minute up to 20 per minute. Or, perhaps behavior increases from .1 responses per minute up to .5 responses per minute. If, and only if, those sorts of increases in response rate occur do we begin to have evidence that we have reinforcement.
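To put numbers on that, here is a minimal Python sketch (my own illustration, with invented counts) of frequency as rate of response:

```python
def response_rate(count, minutes):
    """Frequency in Skinner's sense: count of responses per unit of time."""
    return count / minutes

# Baseline: 5 responses counted in a 1-minute observation.
baseline = response_rate(5, 1)   # 5 per minute

# After introducing the putative reinforcer: 20 responses in 1 minute.
after = response_rate(20, 1)     # 20 per minute

# Only this kind of increase in rate over time counts as evidence
# that the event functioned as a reinforcer.
print(f"Rate went from {baseline} to {after} per minute (x{after / baseline:.0f}).")
```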

4. Treating nonbehavior as though it is behavior. I have already alluded to how nonbehavior, such as remaining seated, is now thought of and construed as being "behavior."  Well, only in the junk behaviorism world can this be so!  Nonbehaviors represent a failure to pinpoint actions such that when one instance of an action occurs, it can be counted.  Nonbehaviors also confuse goals, outcomes, or results with behavior. "Remains seated" might well represent a desired goal (for the classroom teacher, perhaps).  I won't comment here on the desirability of this as a goal; we'll deal with that at another time. Right now, suffice it to say that it's a goal, and moreover a state of being, not a behavior.  There's no action in it.  This is one reason why Lindsley came up with the "Dead Man's Test."  Well, the "Dead Man's Test" cuts against the grain of what appear to be modern-day junk behavioral practices in school and agency settings.  Their definitions of behavior are sometimes so dysfunctional that goals and states of being are confused with movement and action.  That represents a severe and profound failure to conceptualize behavior. In the long run, it will lead to failure of "behavioral" practices, and perhaps ultimately to the dissolution of behavior analysis as a science, to the extent that it really still is a science.

5. Confusing “near-behaviors” with actual behavior.  I got the term “near-behavior” from Jamie Daniels when I worked for Aubrey Daniels & Associates.  I don’t know off-hand if Jamie published it, but let me give him credit. Words such as “use,” “try,” “get,” “give” and so on are “near-behaviors.” They sort of sound behaviorish, and sort of seem to imply that there’s some action.  Yet, they remain very ambiguous.  They do not refer to actual actions or movements.  Ironically, words such as “do,” “respond,” and “behave” are themselves “near-behaviors”!  Well, how does one “respond,” you should ask.  Seek clarification. In junk behaviorism these terms are all used, and seem to be used rather thoughtlessly, as if precision and clarification don’t really matter.  

6. "ABC." In the field of behavior analysis the "three-term contingency" has become iconic. Moreover, it's become declarified into the term "ABC," which stands for "Antecedent, Behavior, Consequence." This aligns well with Discrete Trial Training (DTT), which almost seems to have become the standard way of viewing behavior on the one hand and the procedure of choice on the other.  In DTT there is a learner who is probably just sitting there, waiting.  The learner, so to speak, sits across a table from a teacher or therapist, so-called.  The teacher or therapist, so-called, conducts a "session" with the client learner.  During a "session," the client is presented with "stimuli." These are the "antecedents."  The teacher or therapist, so-called, will present, one at a time, some item to the client.  The item could be a flashcard with a picture on it, for example. This item is shown to the learner. The learner then is supposed to give some response — the "behavior" part of the "ABC" model acronym.  Let's say that the learner does do this behavior.  Then the teacher or therapist, so-called, will "deliver" a "consequence" or perform a "correction" routine, depending on how the client responded. Once that's accomplished, the item is put aside and the teacher or therapist, so-called, picks up the next item, presents it, and the same routine is conducted.  This continues until the session ends, usually after a fairly short period of time. (I say that the person presenting these stimuli is a teacher or therapist, "so-called," because a real teacher or therapist would understand that DTT represents but one procedure out of many for changing behavior, and not always the best!)

Some people have the audacity to refer to the behavior in DTT as "operant" behavior.  But, if you observe such DTT, the kid is mainly just sitting there, passively, awaiting environmental events to happen to him or to her.  The response given is entirely reactive, not "operating on one's environment" in any significant sense.  The learner, to the extent that he or she is learning anything at all, may simply be learning to be passive; that events are to be presented to him or her.  "Stimuli" are presented.  Later on, after some response is given, "reinforcement" or "corrections" are likewise presented.  Then one waits for the next "stimulus" to be presented.

This turns Skinner's model on its head, too.  One can imagine his spin rate accelerating (though not due to any reinforcement, since you can't reinforce the dead!).  I will concede that the actions of the teacher or therapist, so-called, represent operant behavior:  That individual is clearly operating on his or her environment!

The "ABC" model has become reified, I contend, as the model of "operant" behavior.  It has taken the so-called "three-term contingency" and morphed it into something it never was and never should have been.

In actual fact, the three-term contingency might be somewhat better expressed as Stimulus: (Movement –> Consequence).  The discriminative stimulus, sD, doesn't "cause" the response to occur, though that seems implied in the "ABC" model.  The sD occurs in relation to the (MC –> Consequence) contingency pair.  In the presence of the sD, the MC –> Consequence relation entails a particular type of consequence, such as one that functions as a positive reinforcer. In an "sDelta," which is just a different type of sD, the MC –> Consequence relation differs.  Perhaps the consequence isn't a positive reinforcer.

Let’s parse this out a little, since I’ve introduced some terms (“MC”) without defining them.  You start with a two-term contingency relation, MC –> Consequence, where MC stands for “Movement Cycle.”  A Movement Cycle is an instance of behavior. If it has a known function, you may call it a response. An MC has a beginning point and an ending point, and the organism can do another of the same type of MC once the current one finishes.  Informally, we may say that an MC has a “start time,” a “do time,” and a “stop time.”  Those are the boundaries of a single instance of an MC.  In other words, an MC also represents some action or movement by the organism that you can count (which, in turn, enables us to compute the rate of response).  The “consequence” in this relation may be understood better as simply the “effect” produced by the action.  We can substitute action for MC and effect for the consequence to add clarification.

This “Action” –> “Effect” pair forms a two-term contingency. This two-term contingency can come under stimulus control. But it does not necessarily have to do so, or certainly does not have to do so in the “ABC” model sense. 

In actual operant behavior, the organism moves around, and acts upon its environment. It changes and alters the environment. If nothing else it captures and engulfs some nutritious substance that functions to sustain animal life, since the organisms we’re talking about, including human organisms, are animal life. The organism doesn’t sit there awaiting stimuli to come down at it.  It moves. It operates on its environment.  It changes things around.  The environment differs somewhat after it has been operated upon. Moreover, the organism itself gets changed in some way, perhaps a small way, as a result of its acting upon its environment.  There is reciprocity in operant behavior in its relation to organism and environment.  

All of this seems to be obscured by the "ABC" model.  First, the "ABC" model ignores conditions of deprivation and aversive stimulation, which some behaviorists dub the "establishing operation" (though the term "potentiation" may work better).  The "EO," as the establishing operation is also called, is not an "antecedent event."  It doesn't fit that term. So, right away we're faced with a fourth term.

Next, the "ABC" model leads us back into the old, and rightfully discredited, "S-R" model of behavior.  Some people in ABA seem to think that the "A" "causes" the "B" to occur, and why should they think otherwise, given that the very model implies that? Moreover, the "A" gets put into an equivalent status with the "C," the consequence!  But, in actuality, the "B –> C" relation is far more important in the operant behavior equation than the "A –> B" relation ever would be.

Unseen and unnoted, what also gets obliterated by the dysfunctional "ABC" model is the CONTINGENCY relation!  We need another term to identify the relation between the "behavior" and the "consequence."  The contingency, in fact, is far more important than the "B" or the "C" themselves.

But note that in the "ABC" model the range of possible contingency relations becomes quite limited. How does one factor a schedule of reinforcement into that paradigm? Can you imagine running a VR50 schedule in a DTT paradigm? I can't either.  The model suggests, rather strongly, that EACH "behavior" will be consequated. And typically, each one is.

In applying the "ABC" model with a DTT procedure, the question of measurement then arises.  What does one measure?  Well, the "behavior" that the client performs is deemed to be "correct" or "incorrect."  One knows the total number of presentations. So, it's fairly easy to calculate the percent of behaviors that were correct. Percent correct becomes the measure of choice. It's easy to do. The data needed to compute it are easy to "take."

The model ignores time as a fundamental parameter of behavior, however.  In principle, it would be possible to measure the LATENCY between when such a "stimulus" is presented to a client and when that client makes a response.  Latencies could be directly charted onto a Standard Celeration Chart, because latencies really are frequencies. (Don't think so?  A latency is a count of 1 response per however much time elapsed between when the "stimulus" was first presented and the point in time when the behavior began.)  The chart can handle latencies down to .006 seconds, which would indeed be an incredibly short latency. Of course, in the typical DTT situation, latency isn't recorded, and some might object to recording it, because the logistics of carrying out "trials" are already cumbersome enough as things stand. However, scientifically, that's no excuse.  If the science is to be advanced further, then perhaps some enterprising individual will invent some measurement technology that makes recording such latencies as easy and convenient as the current percent correct recording is. Of course, that won't address the other lingering problems with the underlying paradigm implied by the "ABC" model.
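That parenthetical conversion can be written out directly. A minimal sketch, with an invented latency value:

```python
def latency_as_frequency(latency_seconds):
    """A latency is a count of 1 response per elapsed time, so it
    converts to a charted frequency of 1 / latency (in minutes)."""
    return 60.0 / latency_seconds

# An invented example: 2.5 seconds from stimulus presentation to
# response onset charts as 24 responses per minute.
print(latency_as_frequency(2.5))  # 24.0
```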

7. Lack of clarity about the terms we use.  I have put words such as “stimulus” in double quotes above, because, again, unless there exists some evidence that a thing or event FUNCTIONS to exert stimulus control over a two-term Action –> Effect relation, the event or thing should not be called a stimulus. The same goes for “reinforcer,” “consequence,” “contingency” and “response.”  All of these terms should be used only when we have demonstrated evidence that they functioned in some way.  Otherwise, we end up with the “junk behaviorism” nonsense statement that “I tried the reinforcer, but it didn’t work.”  Well, sorry to report that EVERY single reinforcer in the 5 billion year history of this planet has worked — each and every time!

So, how do we get past the “junk behaviorism” tendency to use function words when we do not have evidence of function?

Dr. Og Lindsley supplied the answer back in the mid-1960s, by suggesting we use two sets of terms, one to simply describe events as they are, and then a second set to identify terms when we have evidence that they functioned in some way. He named this the IS-DOES operant behavioral equation.  

On the IS side of the equation, the term "antecedent event" would never be used to denote a thing or event that has demonstrated stimulus control over an action –> effect pair.  Antecedent Event, abbreviated AE, would simply refer to events that happened before some behavior occurred. That's all we know about them: that they took place before behavior, and nothing else. They may be functionally related, or may not be, but when discussing what they ARE, we don't know what they DO.  We don't assume that they have a stimulus function, either. (Alas, because the term "antecedent" has become so deeply embedded in the junk behavioral culture as meaning the exact same thing as "stimulus," it may be too late to revive "antecedent event" in the sense that Lindsley meant! This does not negate the point. It rather suggests we need to keep working at terminology.)

Likewise, use "response" only when a Movement Cycle has a functional relationship to other events.  Use Movement Cycle (MC) simply to describe an instance of behavior.  Likewise, reserve "consequence" for those events that have had a demonstrated effect on response rate. If the function of an event that follows behavior in time is unknown, then use Subsequent Event (abbreviated SE).  It makes perfect sense to say, "I tried the SE, but the frequency of behavior didn't increase!"  Well, try another SE to see if it will increase the response rate!

Likewise, use "arrangement" to denote descriptively the number, time, or other relation between an MC and an SE.  But once you have a clearly evident functional relationship between a response and consequence, then, but only then, use the term contingency.

The IS side of the equation is thus written:

Program: (Antecedent Event: Movement Cycle — Arrangement –> Subsequent Event)

Using abbreviations:

P: (AE: MC — Arr –> SE)

The DOES side of the operant behavioral equation then becomes:

Disposition: (Stimulus: Response — Contingency –> Consequence)

Using abbreviations:

D: (sD: R — K –> C).

Note the use of parentheses and colons.  A single instance of an MC (IS side), or R (DOES side), gets enclosed in the parentheses.  Colons signify that the item in question might be either a discrete event or a more sustained condition (e.g., in a MULT schedule, responding on a VR schedule when the green light is ON — the light being on before, during, and after any given response).  The time arrow, –>, gets used only to signify the temporal relation between events where we need to indicate it. In other words, if we put an arrow between the AE and MC, we risk reintroducing the junk behavioral "S-R" mindset.  To avoid that possibility, don't put an arrow there. It doesn't fit anyway.
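One way to see why the two vocabularies matter is to treat the two sides as data. The sketch below is purely illustrative (the class and field names are mine, not Lindsley's), but it captures the rule that DOES-side terms are earned by evidence of function, never assumed:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IsSide:
    """IS side: purely descriptive terms -- what the events ARE."""
    program: str
    antecedent_event: str    # AE: occurred before the MC; function unknown
    movement_cycle: str      # MC: a countable instance of behavior
    arrangement: str         # Arr: descriptive MC-to-SE relation
    subsequent_event: str    # SE: followed the MC; function unknown

@dataclass
class DoesSide:
    """DOES side: functional terms, licensed only by evidence."""
    disposition: str
    stimulus: str            # sD: demonstrated stimulus control
    response: str            # R: an MC with a demonstrated function
    contingency: str         # K: demonstrated response-consequence relation
    consequence: str         # C: demonstrated effect on response rate

def promote(described: IsSide, rate_changed_over_time: bool) -> Optional[DoesSide]:
    """Only evidence of a change in response rate licenses DOES vocabulary."""
    if not rate_changed_over_time:
        return None  # keep calling them AE, MC, Arr, and SE
    return DoesSide(described.program, described.antecedent_event,
                    described.movement_cycle, described.arrangement,
                    described.subsequent_event)
```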

I must note that the operant behavioral equation was conceptualized by Dr. Ogden R. Lindsley in 1964, in the paper "Direct Measurement and Prosthesis of Retarded Behavior," published in the Journal of Education. It morphed a couple of times, with his earlier acronyms and terms changing slightly. Then it became defunct when it appeared to be too difficult to engage would-be behavior analysts in learning the IS-DOES equation.  Let me suggest that now we must reintroduce it. Moreover, I have tweaked the equation somewhat through the use of those parentheses and colons, for the aforementioned reasons.  Will it work? Maybe, but we won't know if we don't try, try again!

Well, there’s a lot more “junk behaviorism” that afflicts the field of behavior analysis, and I’ll discuss that in Part Two of this article, and include the relevant references then.

  – JE

Frequency Jumps and Celeration Turns

August 20, 2007

One of the neat things about the Standard Celeration Chart resides in its ability to clearly show two basic types of changes to behavior that can occur when you change an independent variable.  We refer to the point in time when you make a change as a “phase change.”  A phase change takes place, for instance, between a baseline period of behavior recording and an intervention period.  In an intervention you change the values of at least one independent variable, and then monitor its effects to determine whether it changes behavior over time.

As noted, two basic changes to behavior can occur (there are more, but for now we will restrict the discussion to these two basic changes): 

 1. The frequency of the behavior can change abruptly.

 2. The celeration of the behavior can change over time.  We consider the celeration changes to consist of more “gradual” changes, though if the celeration runs steep enough, the change over time may seem anything but gradual.

We call the abrupt changes to frequency "jumps."  For example, if you make a phase change and the frequency goes from 10 per minute on Monday to 20 per minute on Tuesday, we would say that that change describes a "frequency jump up."  In this example, the jump up would have a value of x2 ("times two") on the Standard Celeration Chart.  Mathematicians use the older term "step function" for such jumps; the Precision Teaching term "jump" runs more in line with the plain-English emphasis of this field.

We call the more gradual changes to frequency over time “celeration turns.”  On the Standard Celeration Chart we depict frequency with a dot and celeration with a line of best fit drawn through a set of daily frequencies.  You can draw a celeration line for a baseline phase, and then draw a separate celeration line for the subsequent intervention phase.  If the angle of the celeration line changes across phases, then we say that the celeration has “turned.”  For example, if the celeration preceding a phase change ran at x1.0 (“times one”) and then after the intervention it shifted to a x2.0 per minute per week slope, then we would describe the celeration turn as a x2 (“times two”) change.
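Since jumps and turns are both ratios on the multiply scale, they reduce to two divisions. A minimal sketch using the numbers from this post (the x2 rule of thumb is discussed further below):

```python
def jump(freq_before, freq_after):
    """Frequency jump across a phase change, as a multiply-scale ratio."""
    return freq_after / freq_before

def turn(cel_before, cel_after):
    """Celeration turn across a phase change, also a ratio."""
    return cel_after / cel_before

def is_clear_change(ratio, rule_of_thumb=2.0):
    """The x2 rule of thumb: a change of x2 or more (or /2 or more,
    going down) shows up clearly on the chart."""
    return ratio >= rule_of_thumb or ratio <= 1.0 / rule_of_thumb

# The examples from this post: 10/min -> 20/min is a x2.0 jump up,
# and a x1.0 celeration shifting to x2.0 per week is a x2.0 turn up.
print(jump(10, 20), turn(1.0, 2.0))   # 2.0 2.0
print(is_clear_change(jump(10, 20)))  # True
```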

To recap, the frequency can “jump” and the celeration can “turn” when you put into effect some change to some independent variable.

There are many combinations of frequency jumps and celeration turns.  Note that “no jump” and “no turn” also represent possible outcomes of making an intervention.  Moreover, note that any jump or turn on the Standard Celeration Chart can be up or down.  If the frequency or celeration increases, then the change, as shown on the chart, is “up.”  Likewise, if the frequency or celeration decreases, then the change is “down.”

The basic jump and turn combinations, therefore, are:

* Frequency jump up, celeration turn up.

* Frequency jump up, celeration no turn.

* Frequency jump up, celeration turn down. (A counter-turn).

* Frequency no jump, celeration turn up.

* Frequency no jump, celeration no turn.

* Frequency no jump, celeration turn down.

* Frequency jump down, celeration turn up. (A counter-turn).

* Frequency jump down, celeration no turn.

* Frequency jump down, celeration turn down.

Lindsley and his students identified two cases of "counter-turns."  A counter-turn occurs when a frequency jump in one direction is followed by a celeration turn in the opposite direction. The two cases are a frequency jump up followed by a celeration turn down, and a frequency jump down followed by a celeration turn up.  In both cases, the celeration trend will take the frequencies back to their starting point, suggesting that the changes made to the behavior by manipulating the independent variables produced only a temporary effect at best.  Lindsley and his students discovered that a fairly substantial proportion of the published behavior analysis literature contained such counter-turns.  Moreover, they found that the charts and graphs used in the published literature tended to obscure the fact that counter-turns had occurred.  You can make a counter-turn seemingly go away by using stretch-to-fill and fill-the-frame charts.  Of course, in the real life of the student or research participant, the counter-turn has not gone away.

The Figure associated with this essay illustrates the 9 basic frequency jump and celeration turn combinations.  However, you should know that many more combinations are possible.  For instance, while the Figure has the baseline phases running flat across the little charts, you could find situations where the baseline frequencies were already accelerating or decelerating.  Given that, the potential number of jump and turn combinations rises dramatically.  Of course, the total possible number of combinations becomes infinite when you consider all of the possible values that jumps and turns can take.

We use x2 ("times two") as a useful rule of thumb to mark when we have a jump or a turn.  Any change having a value of x2 will show up clearly.  Any change having a value greater than x2 will show even more clearly.  Changes less than x2 can occur, but they become harder to discern. For instance, a frequency change of x1.1 would not show up very clearly no matter what type of chart you used.

You can use a frequency finder and/or a celeration finder to determine the actual, precise values of the change to behavior over time.

  — John Eshleman, Ed.D., BCBA  (August 20, 2007)

———————————————————-

 Click on the Figure below to bring up a readable copy:

Basic Combinations of Frequency Jumps and Celeration Turns

Counting Unknowns

August 6, 2007

Lindsley (1997) states:

 “There are measurement experts who say you must objectively define a thing before you can count it.  Wrong again.  You can even count unknowns.  You can keep track of the time and chart the frequency of unknown things you encounter each day.  The daily frequency of unknowns is very high when you are in a foreign place and very low when you are in a familiar place.” (p. 529)

 REFERENCE:

Lindsley, O.R. (1997). Performance is easy to monitor and hard to measure.  In R. Kaufman, S. Thiagarajan, & P. MacGillis (eds.), The Guidebook for Performance Improvement: Working with Individuals and Organizations.  San Francisco: Jossey-Bass/Pfeiffer.  Chapter 26, pp. 519-559.

Successive Minutes Chart – Doing a Timing Every Other Minute

April 3, 2006

John Eshleman's 1984 SAFMEDS ("Flashcards") data on Merbitz & Layng Chart

On the chart shown you can consider several things. (To view the chart, click on the thumbnail image. Use the back arrow on your browser to return to this page.)

 The attached chart shows data from March 29, 1984 that I had originally charted on a "converted" daily Standard Celeration Chart (DC-9EN). I was 28 at the time. Back then I was testing out how to obtain a Learning Picture in a half hour or so. 

 There were no "timings charts" back then, so I had used a daily chart, crossing out the word "Days" on the x-axis label and replacing it with minutes.  (That original chart is not shown here).

The chart shown here represents an example application of the Merbitz & Layng (1996) V010396 Sprint #19 "Successive Minutes" chart.  Across the bottom axis are successive real-time minutes, not days, not sessions, and not successive timings.  Today I found the old chart from 1984 and recharted its data onto the Merbitz & Layng chart.  It took only a few minutes to do so.

To read the chart, up the left scale is Responses per Minute.  Across the bottom scale is Successive Minutes.  The y-axis up the left is a multiply-divide scale.  The x-axis across the bottom is an add-subtract scale.

This is a chart of me doing one-minute timings of SAFMEDS (Say All Fast a Minute Every Day Shuffled; in this case, a Minute Every Other Minute Shuffled). I ran the timings every other minute. In some cases, two minutes elapsed between timings.  But these data were charted in real time, so when one minute elapsed between timings, that time line is blank, and when two minutes elapsed between timings, two time lines in succession are blank.  During the minute in between timings I would count the corrects and incorrects, chart them quickly, and then reshuffle the cards. The round dots are corrects per minute and the x's are incorrects per minute. I drew in Record Floors down at the 1 line.
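For anyone wanting to mimic this real-time convention with recorded data, the key is to index by clock minute and leave gaps where no timing ran. A toy sketch (the numbers are invented, not read off my chart):

```python
# A minute-indexed record of a session: gaps stand for blank time lines.
session = {
    1: (12, 8),   # minute 1: (corrects, incorrects) from a 1-minute timing
    3: (15, 6),   # minute 2 was spent counting, charting, and reshuffling
    5: (18, 5),
    8: (20, 4),   # two minutes elapsed before this timing
}

for minute in range(1, 9):
    corrects_and_errors = session.get(minute)
    print(minute, corrects_and_errors)  # gaps stay blank, as in real time
```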

The topic was Apple II Machine Language terms, which I had made into SAFMEDS.  At the time I was learning how to program computers, and I was thinking about learning machine language.  I was not doing this for any class, job, or formal project.  Just learning it on my own.

On the chart are a couple of event manipulations.  About 20 minutes into the study session, I decided to study the errors during the one minute between timings, because four to eight errors still persisted (cards that seemed difficult to learn).  And about 38 minutes into the session, I set an aim of 40 per minute (but not an actual aim-star, which would include not only the frequency level but also the time line; I put the aim over onto the y-axis).

Overall, the chart shows a "jaws" learning picture across a 70 minute period of time. Over that period I did 34 one-minute timings.  There was a slight crossover picture at the start, but only two times when errors were above corrects, so to me this LP looks more like a "jaws" than a "crossover jaws" picture.

Some implications:

Last year I discussed on the Standard Celeration listserve the question of charting data in real time, but did not have the ability at the time to put up successive minutes real-time charted data to illustrate the point.  While the attached data are from an old chart, they illustrate HOW the Merbitz & Layng real-time Successive Minutes chart could be used to work with minute-by-minute types of recording.

You can do more than the "traditional" four or five timings within a day (since when did that become an actual tradition? Why? On what basis?).  When doing timings within a daily session, it is possible, as this chart illustrates, to do many timings.

There are limits. On the chart I noted that at about 67 to 68 minutes into the session I was fatiguing.  So, I wouldn't necessarily recommend doing such an extensive session with yourself or with a learner, for that reason and possibly others.  On the other hand, you might try doing 10 to 20 timings within a day if that is logistically feasible, and record them in real time.

The Merbitz & Layng chart is calibrated, I found out by using a BRCo CFM-4 Celeration Finder, to the proper x2 34 degree angle.  I wrote in some celeration values covering some periods of the session (e.g., an initial x2.3 celeration of corrects, followed by a x1.3 midway through and a x1.1 during the fatiguing). The celeration period of this chart is a 10-minute period of time, so technically the first celeration would be stated as x2.3 per minute per 10 minutes (celeration = count per time per time).
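For readers who want to check a celeration value numerically rather than with a finder, one option is a least-squares line fitted to the logarithms of the frequencies. I am not claiming this is the method Lindsley or Merbitz & Layng prescribe (the quarter-intersect line is another convention); it is just one defensible sketch, with invented data:

```python
import math

def celeration(times, freqs, period=10.0):
    """Least-squares slope on log10(frequency) vs. time, expressed as a
    xN multiplier per celeration period (here, per 10 minutes, as on
    the Successive Minutes chart)."""
    logs = [math.log10(f) for f in freqs]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return 10 ** (slope * period)  # multiply-scale change per period

# Invented corrects-per-minute data over successive minutes:
times = [1, 3, 5, 7, 9, 11]
freqs = [10, 13, 16, 21, 26, 33]
print(f"x{celeration(times, freqs):.2f} per minute per 10 minutes")
```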

The image is slightly angled.  I tried re-lining up the page in the scanner a couple of times, but still it was angled off a tad.  Then I measured the margins of the paper, and the distance from the edge of the paper to the frame was not the same going across.  The copies of the Merbitz & Layng chart that I have seem to have been xeroxed. That goes to another point that Dr. Og Lindsley made in his last-ever talk, at the 2003 IPTC, about chart standards: that one of the standards is the margins.  Og was very precise and adamant about this.  Margins had to be exact in order for charts to overlay exactly, and this slight angling now seems to be another reason why margin standards need to be actual standards, as he said.

– JE

REFERENCE:

Lindsley, O.R. (2003). Precision Teaching's eyes and ears: Standard Celeration Charts and terms.  Invited Address presented at the International Precision Teaching Conference, Columbus, OH 6 Nov 2003.

Posting Charts to the SCC Blog

March 31, 2006

 John Eshleman's first Standard Celeration Chart from 1975

Just testing how charts and graphics can be uploaded to the blog.

To view a larger version of the chart, click on the thumbnail.  To return to this page after viewing the chart, click the back arrow on your web browser. 

The chart shown here is one that I uploaded to the Standard Celeration Listserve a few days ago.  It's a chart from 1975, and, in fact, is the first Standard Celeration Chart that I did. It's a chart of my reading my class notes, day by day, for the course named Applied Reinforcement Theory, which was taught by Dr. Steve Graf, and which was the course that first introduced me to Precision Teaching and to the Standard Celeration Chart.  The Movement Cycle is "reads page."  The y-axis up the left is Pages Read per Minute.  Those were pages in my notebook. –JE

BRCo Standard Celeration Charting Resources

March 27, 2006

A centralized site for both of the Standard Celeration Charting books I mentioned in the previous post, and a source of the charts themselves, is the Behavior Research Company (BRCo) family of websites.  BRCo was founded by Dr. Ogden R. Lindsley.

To order either the Graf & Lindsley (2002) or the Pennypacker, Gutierrez, Jr., and Lindsley (2003) book from BRCo you can go to BRCo's site at:

http://www.behaviorresearchcompany.com/

and click on the link to its online store.

To go to its store directly, you can click on:

https://www.behaviorresearchcompany.com/Merchant2/merchant.mvc?Screen=SFNT&Store_Code=B

and browse amongst its catalogs of products.

To go to the page where you can order the two aforementioned charting books, the direct webpage link is:

https://www.behaviorresearchcompany.com/Merchant2/merchant.mvc?Screen=CTGY&Store_Code=B&Category_Code=BKMN

where you can also view their front covers. — JE

 

Standard Celeration Chart References

March 26, 2006

Here are a couple of primary references that pertain to the Standard Celeration Chart:

Graf, S.A., & Lindsley, O.R. (2002). Standard Celeration Charting 2002. Poland, OH: Graf Implements.  Available at: http://www.behaviordevelopmentsolutions.com/products_celeration.html

Pennypacker, H.S., Gutierrez, Jr., A., & Lindsley, O.R. (2003). Handbook of the Standard Celeration Chart. Concord, MA: The Cambridge Center. Available at: http://www.behavior.org

I encourage all "chart people," Precision Teaching people, Behavior Analysts, and interested other persons to obtain copies of these valuable books.  They explain the chart in detail and cover how to use it. — JE

 

Og on “Standard Celeration Charting System Standards”

March 25, 2006

Back in November of 2003 Dr. Ogden Lindsley delivered what would be his last Invited Address to the International Precision Teaching Conference (IPTC).  The IPTC was held in Columbus, Ohio, USA that year.

The title of Og's talk was "Precision Teaching's Eyes and Ears: Standard Celeration Charts and Terms." (Lindsley, 2003) To my knowledge, at this point in time Og's Invited Address has not been published anywhere, nor has his presentation's two-page handout.  Both should be: we need the handout published, and we need the talk transcribed and published as well.

Anyway, Lindsley proposed a whole set of charting standards, divided into three categories:  (1) Standard Chart Standards, (2) Standard Charting Conventions, and (3) Standard Reading Terms.  Without attempting to copy or replicate his entire presentation here, I think that it’s ok to list the 13 items in that first category, Standard Chart Standards. 

In the past we heard things like “the only thing standard on the chart is the x2 celeration angle.”  That was the sort of thing you’d hear at conventions and so on.  I think even Og said that.  But by the time of this presentation, he had clearly identified a whole lot more standards than just the x2 celeration angle!

Herewith, without further ado, are the 13 Standard Chart Standards:

1. Family of four (daily, weekly, monthly and yearly charts).

2. Celeration angle x2 = 34 degrees (what I told you about above).

3. Vertical axis 6 x10 multiply cycles (full range of human behavior frequencies).

4. Horizontal axis of 20 celeration periods.

5. Horizontal axis of 7 day weeks, and 5 week months.

6. Frame size of 8 inches Wide, 5 4/16 inches High.

7. Margin size of 1 11/16 inches Left, 1 5/16 inches Right, 1 7/16 inches Top, and 1 13/16 inches Bottom.

8. Axis values (e.g. .001, .01, .1, 1, 10, 100, 1000, etc.).

9. Axis labels (e.g., Count per Minute; Successive Calendar Days, etc.).

10. Grid lines (day lines up, frequency lines on multiply-divide scale across).

11. Team location (blanks down at the bottom of the chart).

12. Light blue ink (empirically found to facilitate charting speed and charting accuracy).

13. WOGR paper (the original paper that was durable, and water, oil, and grease resistant, and also translucent — you could stack charts one on another and view several of them at the same time).

From what I recall of Og’s talk, to be a Standard Celeration Chart a chart would need to have all of the above 13 standard features. If it did not have them all, then it would not be a standard chart.  It might be a useful-for-some-purposes chart. It might be an aesthetic chart. It might be a politically correct and hence job or career-enhancing chart. It might be a chart that JABA or JEAB might publish. But if it lacks any or all of those 13 features, it isn’t a standard chart.  That’s the whole point:  As Dr. Dennis Edinger reminds people on the SC listserve every now and then, Dr. Lindsley was very keen on developing and setting standards. In his final talk, he itemized these Standard Celeration Chart standards for us.
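Itemized that way, the standards read almost like a spec, and the all-or-nothing criterion can be stated mechanically. A toy sketch (the field names are mine, and only a few of the 13 standards are encoded):

```python
# A hypothetical, partial encoding of the 13 Standard Chart Standards.
# Field names and values are my own paraphrase of the list above.
STANDARDS = {
    "celeration_angle_degrees_for_x2": 34,
    "vertical_multiply_cycles": 6,
    "horizontal_celeration_periods": 20,
    "frame_width_inches": 8.0,
    "ink_color": "light blue",
}

def is_standard_chart(chart: dict) -> bool:
    """Lindsley's all-or-nothing criterion: a chart missing ANY standard
    feature is not a Standard Celeration Chart."""
    return all(chart.get(key) == value for key, value in STANDARDS.items())
```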

REFERENCE:

Lindsley, O.R. (2003). Precision Teaching’s Eyes and Ears: Standard Celeration Charts and Terms. Invited Address presented at the International Precision Teaching Conference, Columbus, Ohio (November 6).

– JE

Comments?

 

Calculating Frequency with ‘Dead Air Time’

March 24, 2006

This may be one of the advantages of timings over recordings (well, of some types of timings over some types of recordings; sometimes recordings may have the advantage over timings). Anyway, I want to deal with a problem in how we measure frequency.

Let's start with an example. Suppose the learner starts a timing and makes a response every second for the first 10 seconds.  OK, if we stopped the timing after 10 seconds the frequency would be one per second, or, on the chart, 60 per minute, a fairly high frequency.

Now, suppose we hadn't ended the timing right then and there, but left the timer running, and that after the 10th second the learner made no more responses.  Dead air, so to speak. As the seconds tick by, the frequency will go down. Right?

Right.  After 20 seconds the learner’s frequency would be 10 responses in 20 seconds. Multiply by three, and we get a frequency of 30 per minute, half of what we had.

If we continued timing, or recording, after 30 seconds the frequency would be down to 20 per minute. (Remember, in this example the learner made 10 responses in the first 10 seconds and then stopped responding, even though the clock is now continuing to run.)

If this scenario played out for a full minute, the learner’s frequency would drop to 10 per minute. And if the same scenario continued for the next minute, the learner would be down to 5 per minute.
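The whole scenario reduces to one division. A sketch of how the charted frequency decays as dead air gets folded in:

```python
def charted_frequency(count, elapsed_seconds):
    """Count per minute, including whatever dead air the clock captured."""
    return count * 60.0 / elapsed_seconds

# 10 responses in the first 10 seconds, then the learner stops
# responding while the timer keeps running:
for seconds in (10, 20, 30, 60, 120):
    print(seconds, charted_frequency(10, seconds))
# -> 60.0, 30.0, 20.0, 10.0, and 5.0 per minute, respectively
```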

So, question:

What’s the kid’s real frequency?  60 per minute, 30 per minute, 20 per minute, 10 per minute, 5 per minute, 1 per minute, what?

By manipulating the clock, or following some rules about using the clock, the representation of the kid’s frequency can be changed without the kid ever doing anything about it, one way or another.  In other words, you can get any frequency you want simply by ensuring that some “dead air time” (time when the person is not responding) is included in, or removed from, the determination of the frequency.

Where timings gain a bit of an advantage is when they span very brief durations of time.  If someone conducts a 10-second timing, there isn’t a whole lot of time left over for any “dead air time” to creep in and thus start changing the apparent frequency.  On the other hand, if you activated a response recorder and several minutes passed by before the learner started responding, you have to decide whether to include that “dead air time” in the calculation of the frequency.

So, what’s the real frequency?

This problem has been brought up from time to time in Behavior Analysis, and it is sometimes offered as a crippling blow to frequency as a measure of behavior.

The wishy-washy answer is, “it depends.”

Dead air time happens during timings and during recordings of behavior. It can occur as a pause for a few seconds within a timing, after which the behavior resumes.

My guess is that if we were to apply Lindsley's "the child knows best" principle, we would then let the learner's behavior dictate the frequency, and not some arbitrary rules, unless there is some compelling reason to have the rules trump the principle.

So, abiding by the principle, in my view, in the example above the kid’s frequency was 1 per second for 10 seconds, and that’s it. It’s not 60 per minute because for 50 seconds out of that minute the kid remained mute. Nor was it 10 per minute, because that’s applying an arbitrary rule.  The figure of 5 per minute (in the example, recording or timing for two minutes instead of one) is just as applicable and just as valid and just as totally arbitrary.  And thus just as worthless.
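One way to operationalize letting the learner's behavior dictate the frequency (my own hedged reading of the principle, not an established charting convention) is to clock the timing only up to the last response:

```python
def learner_dictated_frequency(count, last_response_second):
    """Clock the frequency only up to the learner's last response,
    excluding trailing dead air (one possible 'Actual Floor' rule)."""
    return count * 60.0 / last_response_second

# The example above: 10 responses, the last at the 10th second.
print(learner_dictated_frequency(10, 10))  # 60.0 per minute, i.e., 1 per second
```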

So, how do we chart 1 per second for 10 seconds?  Put a dot at the 60 per minute line?  That seems to be what’s done now if, say, a 10-second timing was conducted and the result of 1 per second obtained.  But, that might be stretching matters a bit, or more than a little bit. In the case of the example above, extrapolating 1 per second for 10 seconds to 60 per minute would be clearly incorrect and unreasonable, given that the learner didn’t maintain that pace for a full minute.

So, how do we chart it? 

This is what's not clear to me.  Perhaps this is where there's a weakness in the charting conventions. Currently, we have the Record Floor, known these days as a Counting Time Floor, and the Behavior Floor (another PT relic from the past, rarely used anymore).  Neither floor applies to the situation above, where we are confronted with maybe having to include "dead air time" in the calculation of a frequency. Perhaps we need a third floor, an "Actual Floor," or something. The notion of an "Actual Floor" would be predicated on the learner's behavior, not on tactical considerations from the perspective of the teacher, researcher, or other person conducting observation and measurement. Accordingly, an "Actual Floor" would require its own symbol on the chart, though what symbol I am not sure.

Or, as an alternative, when a timing, or a recording, falls to less than a minute, perhaps we should avoid charting the frequency in terms of count per minute. (I can almost feel the tidal wave of disagreement there!). In that case, perhaps a different chart, one with Count per Second up the left, would work better?

Otherwise, we’ll end up with a situation where the figure given for a learner’s natural frequency can be arbitrarily changed at will, on paper or on a computer, without the learner’s actual behavior ever changing.

I am open to suggestions on this, and I think this would be a fruitful and compelling topic for discussion for Precision Teaching. — JE

 

 

 

In Appreciation of the Frequency Finder

March 24, 2006

Sometimes it’s the little things that make a difference. Lately, as in the past day or so, I have been thinking about the Frequency Finder.  That’s a charting tool described in Chapter 6 of the Handbook of the Standard Celeration Chart (Pennypacker, Gutierrez, & Lindsley, 2003; pp. 33-36).  The Frequency Finder seems to me to be one of those little things that makes a difference.

I have yet to illustrate or define a Standard Celeration Chart on this blog, and while there are many charting topics and issues to cover, right now discussion of the Frequency Finder seemed most germane.  So, first I’ll cover what the Frequency Finder is, and how to use it, and then get into the reason why I have brought it up.

As mentioned, the Handbook describes how to make a Frequency Finder.  Let’s cover this first.

To make one, you cut up a paper Standard Celeration Chart.  Depending on how wide you make a Frequency Finder, you can get 6 to 10 of them from a single chart.  To make one, get a daily Standard Celeration Chart.  Next, grab a pair of scissors and position them so that you will be cutting vertically up the chart at or around the first Monday line.  Cut all the way up the page, and as the Handbook suggests, trim away the corner areas if you want (that’s the area above the 1000 per minute line and below the .000695 per minute line). Next, somewhere on the Finder, write “Frequency Finder.”  Last, draw a small right-aimed, horizontal arrow at the 1 per minute line.  At this point, you’re done making the Finder, as we’ll call it for short.

As I said, you can make additional Frequency Finders from that same sheet of chart paper, though for any additional ones you make you will also have to write in the frequency numbers (e.g., 1, 5, 10, 50, etc.)

You may also order and purchase clear mylar Frequency Finders, for example, from Behavior Research Company. (I’ll get a link to BRCo’s site at some point.) In fact, even better, you can order Celeration Finders from BRCo; these also have Frequency Finders on them.

So, how to use the Frequency Finder?

Example:  Suppose a learner made 120 responses in 3 minutes.  Now, it’s easy for us to figure out this example frequency without having to use a Finder.  A little simple math tells us that the frequency was 40 per minute.  Anyway, to use the Finder with this example, position the Finder on top of the daily chart onto which you are charting daily frequencies of behavior.  Line up the arrow on the Finder with the 1 per minute line on the chart.  Next, move the Finder down the chart until the 3 per minute line on the Finder lines up with the 1 per minute line on the chart.  The arrow on the Finder now points to the Record Floor.  This is where you draw the time bar, –, on the chart.

Next, without moving the Finder, go up the Finder until you reach 120.  Alas, you will see that there is no 120 line on either the Finder or on the chart.  There is a 100 per minute line and the next line up is 200 per minute — such is life with a multiply-divide scale!  You will have to estimate where the 120 line would go if there were one. Fortunately, with our example here, we already know that 120 divided by 3 equals 40, and thus the frequency dot on the chart goes at 40 per minute. (By the way, this is one way of back-finding where a line would go.)

I should hasten to add that the example presumes that you, the charter, are putting the frequency and the Record Floor on the proper day line on the chart. So, in addition to moving the Finder up and down the chart, you would also first move it across the chart to the appropriate day line!

OK, this was an easy example.  It doesn’t show much of the power of the Frequency Finder, except that there is no 3 minute line indicated on the right-hand side of the chart.  The example really only shows us how to find and draw a time bar when we don’t have some time period guideline on the right side of the chart. And, in this simple example, we could easily do the math.

Next example, then.  Suppose the learner does 312 responses in 7 minutes. While we could still do the math to calculate the frequency, if one is not highly fluent in performing such arithmetic, it may be faster to use the Frequency Finder. In this example, we first line up the Finder so that the correct day line on the chart is just to the right of the Finder. Then, we also line up the Finder so that the arrow on the Finder is on the 1 per minute line of the chart. Then, as before, we’d move the Finder down the chart until the 7 line on the Finder lined up with the 1 line on the chart.  The arrow then points to where a 7 minute line would go. That’s where you draw the Record Floor.

Next, count up to 312 on the Finder. Again, there's no 312 line, just a 300 line and, next up, a 400 line. So, again, you'll have to interpolate.  312 is much closer to 300 than to 400, so put the frequency just above 300 per minute on the Finder. You will see that, on the chart, the resulting dot is between 40 and 50 per minute. That's as precise as you need to be, since at that location on the chart's 10's cycle, the dot itself will cover several frequencies.
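Arithmetically, sliding the Finder down by the number of minutes is division on the multiply scale: subtracting log(minutes) from log(count). A sketch reproducing both examples:

```python
def finder_plot(count, minutes):
    """What sliding the Finder does mechanically. The dot goes at
    count/minutes, and the Record Floor (time bar) at 1/minutes."""
    return count / minutes, 1.0 / minutes

print(finder_plot(120, 3))  # (40.0, 0.333...): the first example
print(finder_plot(312, 7))  # (44.57..., 0.142...): the second example
```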

As the Handbook stresses, you can perform rapid plotting with a Frequency Finder. The Handbook also provides visual examples of how to work the Finder.

So, why bring all this up, especially now?

Well, it's unclear to me just how much people in Precision Teaching are using Frequency Finders these days.  Ever since the "paradigm shifted" and PT became focused first on fluency and second on a particular method of conducting timings, there seems to be less need to use a Frequency Finder.

First of all, if you conduct a 1-minute timing, there is no need to use one at all. Secondly, if you conduct timings shorter than one minute using convenient fractions of a minute, then there is likewise hardly any need to use a Frequency Finder.  The common practice now is to conduct timings of students that are 30 seconds, 20 seconds, 15 seconds, 10 seconds, and even 6 seconds long in time duration.  To find a frequency from running a timing of one of these durations is simply a matter of extrapolation.  Extrapolation is achieved by multiplying the count by whatever factor you would need to multiply the time by in order to bring the time up to one minute. 

For example, if you conducted a 30-second timing, then you would multiply the count by two, because you would also multiply 30-seconds by two to get a full minute.  Likewise, then, if a Precision Teacher ran a 10-second timing, he or she would multiply the count by six, because 10 seconds times 6 would equal 60 seconds, or one minute.

So, in conducting timings from within a fluency paradigm, where a teacher is perpetually doing one-minute timings or set, specific fractions of a minute and extrapolating by multiplying, there is no need whatsoever to use a Frequency Finder. The kid did four responses in a six-second timing?  Multiply 4 by 10 and presto!  You can assert, and chart, that the kid's frequency was "40 per minute."

Even when people conduct "endurance" tests of two minutes, there's still not a whole lot of need to use a Frequency Finder.  Just divide the obtained count by two.  If the kid did 50 responses in 2 minutes, that works out to 25 per minute. Easy to figure out and compute, and no need to drag in the Finder.
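Both the short-timing multiplication and the two-minute division are the same operation: scale the count by 60 divided by the timing's seconds. A minimal sketch:

```python
def per_minute(count, timing_seconds):
    """Scale a timing of any duration to count per minute."""
    return count * 60.0 / timing_seconds

print(per_minute(4, 6))     # 40.0: the 6-second example (multiply by 10)
print(per_minute(50, 120))  # 25.0: the 2-minute endurance example (divide by 2)
print(per_minute(15, 30))   # 30.0: a 30-second timing (multiply by 2)
```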

You will note that no one conducts an endurance test of 2 minutes 23 seconds, or 2 minutes 45 seconds, etc.  Or one minute 39 seconds.  Again, etc.  It’s always some convenient (to the teacher or researcher) multiple or fraction. Because, in modern Precision Teaching, the teacher, after all, knows best.

So, is the Frequency Finder becoming something of a lost tool in Precision Teaching?  Maybe. I wonder. Because for the most part it just is not needed.

So, why might it ever have been needed?  Because, in the olden days of Precision Teaching, people did not conduct timings only.  At least the timings paradigm was not paramount, even though it had its origins early on.  Moreover, there was little overt concern with “fluency.”  Go look at the old PT literature from the 60s up through most of the 80s.

Back then, people Recorded behavior, rather than simply running timings.  The old PT slogan was "Pinpoint, Record, Chart, and Try, Try Again."  Not "Pinpoint, Time, Chart, and Try, Try Again."

When you record behavior you may not know ahead of time how long you will be recording it.  Moreover, you may end up with a really inconvenient (from an arithmetic perspective) recording time.  What if you recorded behavior for 43 minutes, 17 seconds?  Not only is there no 43 minute 17 second line on the chart, there's also no plain old 43 minute line. If the kid did 177 behaviors in those 43 minutes and 17 seconds, calculating the frequency might be beyond the quick proficiency of many people. With the Finder, you can rapid-plot the frequency: move it down until a little over 43 on the Finder lines up with the 1 line on the chart, and the arrow points to where a Record Floor of around 43 minutes 17 seconds would go.  Counting up to 177 on the Finder lets you put the dot near the 4 per minute line on the chart.

People have asserted (and I'll need to dig up the references) that in the early days of PT the one-minute timing emerged because teachers could not do long division, and did not want to try to calculate frequencies in their heads or something.  And this was in the pre-calculator era, not that people would want to spend a lot of time messing with calculators anyway. That story may or may not be true.

But, with the Frequency Finder, they’d never have had to do any calculating anyway.  None.  No calculating, period.

So, why not teach the teachers of yesteryear to use Frequency Finders? Problem solved! Maybe that was tried and maybe people found that that, too, was just too inconvenient or difficult?  If anyone from that era who is still around can clarify this point, that’d be much appreciated.

So, why not use a finder?

Just move a piece of paper on top of another piece of paper, using what is, in effect, a little slide rule.  Easy to learn, easy to do, fast to do.

Under the fluency and timings paradigm, it's not clear to me that newcomers to the field are learning much of the potential that Precision Teaching has, nor that they are necessarily learning the full range and scope of the Standard Celeration Chart and its enormous analytic power.

So, one of the things I want to find out is whether the Frequency Finder is still being taught anymore.  Are Chart Parents teaching it to newbie charters?  Are “intro to charting” workshops at ABA and IPTC teaching attendees about the Frequency Finder?  Are the few university courses that teach PT and charting teaching students about the Frequency Finder? Or is the Finder one of a growing set of relics from an earlier era?  That’s something I’d like to find out, either way. — JE

 

