Discussant’s Comments Re SAFMEDS for ABAI 2018 (Not Presented)

May 29, 2018

Discussant’s Comments That Could Have Been Presented at ABAI 2018

by John W. Eshleman, EdD, BCBA-D

25 May 2018

[Though originally scheduled to attend the Association for Behavior Analysis International (ABAI) Convention in San Diego, CA, in May of 2018, for a variety of reasons I did not attend. I had been scheduled to serve as Discussant for a symposium entitled “Fluency Based Instruction in University Settings” (Baltazar, 2018). Dr. Traci Cihon, who had organized this (and another) symposium, arranged to have me replaced by Dr. Lee Mason, who became the Discussant-of-Record. Prior to the convention, Dr. Mason emailed me asking what I had planned to say. The paper below is based on my email reply to him (cc’d to Dr. Cihon, too) about some points that I likely would have made had I served as Discussant. I should note, too, that in the role of Discussant I would also have reviewed and commented upon the two papers that comprised the symposium. – John Eshleman, May 28, 2018.]

—————————————————————-

Regarding SAFMEDS: I’ve been working with and researching them since about 1980, mostly R&D, less so experimental analysis. Since you asked, here are a few thoughts:

SAFMEDS are flashcards. In the early years of SAFMEDS, the originator of the acronym and procedure, Dr. O. R. Lindsley, often still referred to them as flashcards (Lindsley, 1981). It took about 10 years, between 1978 and 1988 or so, for the term SAFMEDS to become common. The term originated as an application of operant lab principles (see any background on the history of Lindsley himself, as a student of B. F. Skinner’s) to an educational method – specifically, to flashcards.

I would contend that the basic operant principles at the time were pretty well documented, both in terms of their experimental analytic origins and in terms of applied research and applications. So, implementing these principles would not itself require validating them as principles.

But, what about flashcards themselves?

I have served as a reviewer for papers about SAFMEDS, including a couple of recent ones within the past two years.  At that time, about two years ago, I wondered about the origins of flashcards in American education.  When did flashcards themselves first originate?  I did a quick “google” search and within seconds found some sources that indicated that flashcards originated in the 1830s! That’s almost 200 years ago, now. Ok, 188 years, give or take a few (https://en.wikipedia.org/wiki/Flashcard).

I recall a point that my doctoral dissertation Chair and Advisor, Dr. E. A. Vargas, repeatedly made: The development of a particular technology both verifies and validates the scientific or engineering principles behind it; in short, a pragmatic truth criterion. Vargas observed that Skinner (1979) made the same point in his autobiography. For example, programmed instruction technology was a pragmatic validation of operant lab principles (not the only one, of course!).

So, flashcards have been around in American education for nearly two centuries. That suggests some technological validation and some pragmatic truth criterion.  Viewed from the perspective of metacontingencies, this also suggests some rather long staying-power.  Presumably, if flashcards themselves did not work to achieve certain educational outcomes, as a cultural practice their usage would have likely gone extinct by now.  There seems little other reason to think that flashcards would persist that long unless they had some educational value, whether more recently experimentally analyzed or not, and whether behavior analysis agrees or not. They signify a fairly robust technology.

So, what about SAFMEDS? I agree that the experimental analysis from a behavior analysis perspective may seem lacking. I’ve chaired several theses that have investigated terms of the acronym themselves. For example, one of my students, Juntunen (2009), investigated the “Saying” out loud procedural variable by arranging an experiment using a sound meter that recorded decibel levels of learners saying the cards noticeably out loud (e.g., 50 to 60 decibels) compared to whispering them or saying them quietly (about 20 decibels, if I recall — could be wrong, but I don’t have her thesis handy). The result of that experiment was mostly inconclusive, with perhaps only a slight edge to saying them more loudly. This and other research efforts, plus my own lengthy experience of using them in my courses, perhaps make me one of the most skeptical about SAFMEDS while yet remaining an advocate for them.

So, here’s the deal. SAFMEDS are flashcards. We know that flashcards have a robust, pragmatic, empirical validation. There may not be a whole lot of evidence that SAFMEDS work “better” than flashcards (or better than other fluency-building methods that result in fluent verbal and especially intraverbal repertoires). But there’s also no evidence at all that SAFMEDS are any worse than so-called traditional flashcards. Neither flashcards nor SAFMEDS need further research before it becomes permissible or acceptable for teachers to start using them, because teachers have been doing so since about 1830 – just not necessarily SAFMEDS style!

But I tend to regard comparison studies as the wrong line of reasoning. If you haven’t seen Dr. Jim Johnston’s (1988) paper in The Behavior Analyst on “Strategic and tactical limits of comparison studies” (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2741858/), then I encourage you to do so.

SAFMEDS are NOT a panacea. They do not always work the same way for everyone, or for every instructional task. Most especially, I have found from incorporating them in my courses that the worst possible application of them is for “terms and their definitions.” Using SAFMEDS for terms and definitions will likely run counter to what Dr. Carl Binder discussed in his 1996 paper in The Behavior Analyst about fluency and fluency-building (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2733609/). There are other, I think better, methods for teaching terms and their definitions, some of which were pioneered by my mentor, Dr. Stephen A. Graf (http://www.stevegraf.org); see “Prime Principles” at http://nebula.wsimg.com/78512dfa3d88b6bc0b84e887d7721142?AccessKeyId=F33FC376F01581DAB5C0&disposition=0&alloworigin=1.

So, what can SAFMEDS be used for? To establish and/or strengthen intraverbal “word association” verbal response repertoires. The late Dr. W. S. Verplanck (1992) has a paper in The Analysis of Verbal Behavior regarding the Word Associate Test (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2748587/). That use, along with other basic intraverbals (e.g., using store-bought arithmetic flashcards SAFMEDS style to teach some basic arithmetic relations) and picture-naming “tact”-type research and development, seems the best way to go. For the latter, put photos or diagrams on the fronts and SeeSay the name of the object shown.

Well, I have a lot more that I could address, but I’ll leave you with the thoughts above. If you’re interested, I encourage you to pursue more behavior analytic research with SAFMEDS, while also taking into account the limitations noted above, most especially Johnston’s.

References

Baltazar, M. (Chairperson). (2018). Fluency based instruction in university settings. Symposium presented at the meeting of the Association for Behavior Analysis International, San Diego, CA. Manchester Grand Hyatt, Harbor Ballroom C. Symposium #205 TBA/EDC Applied Research, 9:00 am – 9:50 am.

Binder, C. V. (1996). Behavioral fluency: Evolution of a new paradigm. The Behavior Analyst, 19(2), 163–197. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2733609/)

Flashcard. Wikipedia entry. (https://en.wikipedia.org/wiki/Flashcard). Retrieved 28 May 2018 by John W. Eshleman.

Graf, S. A. (1993). “Prime Principles.” Crew Reference Manual. Course Syllabus for General Psychology 560. Youngstown State University, Youngstown, Ohio. Page 6. http://nebula.wsimg.com/78512dfa3d88b6bc0b84e887d7721142?AccessKeyId=F33FC376F01581DAB5C0&disposition=0&alloworigin=1

Johnston, J. (1988). Strategic and tactical limits of comparison studies. The Behavior Analyst, 11(1), 1–9. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2741858/)

Juntunen, T. (2009). The Effects of Response Amplitude on the Acquisition of Vocal-Verbal Behavior Using Fluency-Building. Unpublished Master’s Thesis, The Chicago School of Professional Psychology.  J. W. Eshleman (Thesis Chairperson).

Lindsley, O. R. (1981).  Current issues facing standard celeration charting. Invited Address presented at the Winter Precision Teaching Conference, Orlando, Florida.

Skinner, B. F. (1979). The Shaping of a Behaviorist. New York: Knopf.

Verplanck, W. (1992). A brief introduction to the word associate test. The Analysis of Verbal Behavior, 10, 97–123. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2748587/)

 

John W. Eshleman, EdD, BCBA-D, May 2018

 

 


Some Thoughts about SAFMEDS (Fluency Cards)

April 27, 2018

by John W. Eshleman, EdD, BCBA-D

23 April 2018

SAFMEDS are not terms and definitions.  SAFMEDS should not be about terms and definitions.  There.  I said it. Bluntly, perhaps.  But now that I’ve gotten your attention, let me elaborate.

In the 1970s Dr. Og Lindsley invented SAFMEDS, and did so based on a couple of observations. First, students in his courses had already been using flashcards to help them learn course content. So, from the get-go, SAFMEDS were always flashcards. Second, Lindsley, as per usual, applied experimental operant lab concepts to the use of flashcards. Thus, SAFMEDS started out as flashcards, and they still are. The operant lab principles and lore remain, too, if in the background.

The acronym SAFMEDS stands for “Say All Fast a Minute Each Day Shuffled.” More recently, in the 1990s or 2000s, Dr. Carl Binder dubbed them “Fluency Cards”™.* Binder’s trademarked term refers to the function of the cards, whereas Lindsley’s term pertained to some, but not all, key methodological strategies and their tactics. More generally, and bringing in B. F. Skinner’s (1957) verbal behavior concepts, SAFMEDS can be used to help build up verbal behavior repertoires that are (1) primarily sequelic intraverbal relations, (2) tact relations, or (3) other relations, such as textual (e.g., for developing reading and/or pronunciation skills).

It appears that in these days of memes and narratives, misinformation regarding SAFMEDS spreads rather easily and permeates the behavior analytic community. That’s why we get this notion that SAFMEDS are for learning terms and their definitions. Some people end up believing that these cards are only for terms and definitions! This mistaken notion runs as follows: When developing SAFMEDS you write a term on one side of a card (the “front” side), and then write its corresponding definition on the other side (the “back” side). You develop a set of such cards, maybe circa 50 to 80 cards total, though I’ve seen sets as small as 20 or so and as large as 500 cards. As with actual SAFMEDS, the procedure remains the same. Procedurally, a learner sets a timer and works through the set of cards until the timer sounds an alarm and the “timing” (a timed practice session) ends. While doing a timing, the learner holds the cards, sees what is on the front side, says a response out loud, flips the card over, checks the correctness of the response just given, and then places a correctly answered card into one pile and an error into another pile. After the timing concludes, the learner counts the cards in each pile, and perhaps records and charts the resulting data. In applied behavior analysis (ABA) there is no guarantee that these counts will be charted, though persons in Precision Teaching (PT) typically will chart these data.
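For readers who like the bookkeeping spelled out, here is a minimal sketch in Python of how the card counts from a single timing might be recorded and summarized as per-minute frequencies. It is my own illustration, not part of any published SAFMEDS protocol, and all names and numbers in it are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Timing:
    """One timed SAFMEDS practice session (hypothetical record format)."""
    date: str
    minutes: float   # length of the timing, e.g., 1.0 for a one-minute timing
    corrects: int    # cards released into the "correct" pile
    errors: int      # cards released into the "error" pile

    def corrects_per_minute(self) -> float:
        return self.corrects / self.minutes

    def errors_per_minute(self) -> float:
        return self.errors / self.minutes


session = Timing(date="2018-04-23", minutes=1.0, corrects=32, errors=4)
print(f"{session.corrects_per_minute():.0f} corrects/min, "
      f"{session.errors_per_minute():.0f} errors/min")
```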

Sometimes the terms and definitions may be performed “in reverse.” That is, the learner sees a definition on the front and then says the term (which is printed on the back side) and likewise checks the accuracy of the vocal response given. Sometimes, further, a “good” set of cards might be construed as being “bi-directional.” To bring in some parlance from stimulus equivalence research, one can test or arrange for “symmetry.” That is, one can learn term to definition (e.g., “A to B”) and then also learn or test for the reversal, the “B to A,” which would be learning or testing definition back to term.

This is a recipe for disaster.  Well, maybe not disaster, but probably for disillusionment with SAFMEDS. Why?  Well, I have not yet said why, but now I will try.

When Lindsley and associates developed SAFMEDS, the cards did not have terms and definitions on them. Not even the “intraverbal” cards contained definitions. Lindsley’s 1981 “Learning Pictures Facts” set provides an example. Instead, there might be a word or phrase on the front, and a word or phrase on the back. The goal was to build up a SeeSay word association repertoire. You can go back and examine Lindsley’s original SAFMEDS sets to see for yourself. Lindsley’s 1981 presentation included this segment, block-quoted here:

“The next thing we should do is "practice what we teach."
I teach three courses at the University of Kansas. I have
taught 'Supervision of Instruction' every semester since
the Fall of 1972. It was not until the Fall of '78 that
I started requiring daily practice in a three-hour, 
once-a-week graduate course. Then I had the guts to say 
"Flashcards: Do them daily, or you get an incomplete." 
So, all over the place we are finding 'teach precision 
teaching once a month, twice a month, once a week.'  
But we know you can't learn anything devoting once a week 
to it, or once a semester.”  And,
“The flashcards are easy to make:

Performance is:  Number per minute
Performance has: Two Dimensions
Performance scientific name: Frequency
Performance changes by: Multiplying and dividing
Learning ranges from: /100 to x100
Performance ranges from: 1 a day to 300 a minute” (no page #)

As you can see, in Lindsley’s arrangement, you might have 1 to 3 words on the front of a card, and from 1 to 8 words on the back.  Anything more than that then becomes what Carl Binder has dubbed a “fluency blocker.” Few definitions can be written with 1 to 8 words without severe editing – editing that may make the phraseology unpronounceable or cumbersome to say out loud.  Cards with definitions ranging from 9 to maybe 30 words on the back are nothing but fluency blockers.  That defeats the word association learning that SAFMEDS are designed to accomplish. I remember obtaining some SAFMEDS for behavior analysis definitions from the 1980 Johnston and Pennypacker “Strategies and Tactics” book that had up to 30 or more words printed on the back sides. There is absolutely no way to build up fluency with such a deck.
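As a concrete illustration of that word-count guideline, here is a minimal Python sketch that screens a proposed card set and flags likely fluency blockers. The first two cards are taken from Lindsley’s list above; the third, with its long definition, is a hypothetical example of mine, as are the exact threshold values.

```python
# Flag cards whose wording exceeds roughly 1-3 words on the front and
# 1-8 words on the back (per the guideline discussed above).

MAX_FRONT_WORDS = 3
MAX_BACK_WORDS = 8

cards = [
    ("Performance scientific name", "Frequency"),
    ("Performance changes by", "Multiplying and dividing"),
    ("Positive reinforcement",
     "A stimulus change that follows a response and increases the future "
     "frequency of that response"),
]

for front, back in cards:
    front_n, back_n = len(front.split()), len(back.split())
    blocked = front_n > MAX_FRONT_WORDS or back_n > MAX_BACK_WORDS
    status = "likely fluency blocker" if blocked else "OK"
    print(f"{status}: '{front}' ({front_n} front words, {back_n} back words)")
```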

Fluency Origins in the Experimental Analysis of Behavior

A comeback that I sometimes hear is that, well, you could count the number of words spoken instead of counting cards. Yeah? Well, this gets to the second point of Lindsley’s that I mentioned above. Why construct and use cards at all in the first place? Why not count words spoken instead? Well, the card itself is an analog to a lever or key from the old operant experimental boxes. The rat would press a lever or the pigeon would peck a key (or a human would pull and release a plunger manipulandum). These manipulanda served as “response definers” – a concept that seemingly has not translated well from the experimental analysis to applied analysis. In SAFMEDS you count cards, not responses, and not words. The card functions as a response definer. It forces a Movement Cycle (MC) into the timed practice session. Thus, it also serves as a convenience. It’s fairly easy and quick to count cards. Counting words spoken would be more cumbersome. Don’t just take my word for it. Try it out yourself and see – determinism and philosophic doubt.

Words are artifacts. As we type them out, or write them, by custom or convention – the rules of grammar – we put spaces between them. If we did not add spaces then we would end up with a mess like this: Ifwedidnotaddspacesthenwewouldendupwithamesslikethis. See what I mean? But if you pay attention to a person speaking, unless they pause for a beat or so, there are no spaces in spoken language! As Skinner noted in Verbal Behavior (1957), there seldom are any natural “fractures” to define verbal behavior responses; certainly not once you get past single-word “pure” responses, such as a “pure” mand or “pure” tact. So, counting words spoken becomes arbitrary as to how many responses are made, and as to whether something said is a response or is part of a response. Two or more people can legitimately differ as to when a vocal verbal response starts and when it ends. That this situation has been a long-term problem in the analysis of verbal behavior often goes unsaid in discussions of instructional design. Adding in a card means putting in a response definer, which “externalizes” the behavior and adds a reliable method of recording and thus otherwise dealing with behavior and behavior change over time.

I guess this latter point is where having some background in the experimental analysis of behavior may help. If you pay attention to the basic scenario of a rat pressing a lever, the lever press simply defines a response and enables automatic recording. Otherwise, if you watch the behavior play out, you will observe that a whole chain of responses and stimuli occurs, with the bar press being in the middle of the chain. The chains may sometimes be somewhat variable, but they include actions and events that take place before, during, and after the bar press. Pressing the bar simply results in a switch closure and thus electrical contact – which increments counters, moves the pen up on a cumulative response recorder, and may sometimes also trigger a food bin to open and a pellet of food to drop down into a container. Meanwhile, a lot of other behavior goes unrecorded in the execution of the chain of responses. For example, the cumulative recorder will not record the rat picking up the pellet in its paws or placing the pellet into its mouth and eating and swallowing it. There is a presumption that it does so, but occasionally it could accidentally drop the pellet or some portion of it. The dispenser could jam up with pellets (this is how Skinner obtained the first extinction curve!). All that the operant system records is switch closures, which presumes a bar press. None of the topography of the bar pressing gets recorded. Sometimes a rat may press the bar part of the way, but not down far enough to make a switch closure before releasing the bar. Such a response literally does not count. If the rat puts extra force into a bar press, then this amplitude enhancement does not count for anything more than a less intense response would. A lot can go on that is not counted. But, regarding experimental reliability, the system otherwise does a pretty decent job: From tons of single-subject data you can induce generalizations, and thus empower science.

Back to SAFMEDS: So, yes, counting cards is somewhat arbitrary, but in the same sense that operant chamber levers, keys, plungers and so on are arbitrary. But then this is also the reason why SAFMEDS are for word associations: The variability in quantity of text from card to card remains fairly low. If all of your cards have from 1 to 3 words on the back sides, then that condition is a lot more uniform than if the wording in the set ranges from, say, 3 to 30 or more words.

Alternate Methods for Learning Terms and Their Definitions

But, what about learning terms and their definitions? Well, that represents a fairly low-level objective in Bloom’s Taxonomy of Educational Objectives. And to the extent that this objective is warranted, you could revert to old-fashioned flashcards and not try to do the cards “SAFMEDS style.” Much more significant would be to learn how to apply the definitions: to recognize or give examples and non-examples of the concepts denoted by the terms.

Or, if you must learn terms and definitions, then it would be better to find some other method(s) of learning. My old undergrad mentor, Dr. Steve Graf (1980), developed a list of 20 to 25 “must credit” statements (see Appendix below). First, Graf realized that, for the most part, for a university course you probably do not need to learn by heart more than about 20 to 25 core terms. Second, he set up the “must credits” as statements that a learner would have to “ThinkSay” or “ThinkWrite” or both when they tested out. In Graf’s system, there were 20 to 25 statements printed on a single page. There would be a term, followed by a dash, followed by a sentence written, edited, and rewritten by him many times, so that writing it out or saying it out loud would present few stumbling blocks (“fluency blockers”). The task was for the student to “recall” the entire term-and-definition line for each of the 20 or so terms. Hence, the learning channels noted refer to such a “recall” sequence. The learner would be accorded 5 to 8 minutes to write out the entire array of “must credit” statements, and/or a couple of minutes to think and say the entire set of statements. The response definer would be the entire line of text.

Was Graf’s system better than trying to learn terms and definitions via cards? I do not think that any research was ever attempted to answer such a question. So, feel free to test it out and seek such an answer. What Graf did know was that (1) it served as an alternative method, and (2) it appeared to work. Students could manage to complete the task. In order to pass the course, students had to ThinkSay and/or ThinkWrite all statements correctly. They were offered several attempts to do so. In some variations of the course this accomplishment would be recognized with a “Graf All-Stars” certificate. In that instructional design sense, the method demonstrated some pragmatic value.

The astute instructional designer could come up with other methods to learn terms and their definitions, or create variations of the above. As a variation, if you do have flashcards (not SAFMEDS) of terms and definitions, instead of assigning them as a SeeSay task only, add in a SeeWrite component. Then, instead of holding a set of cards and flipping or sliding them one at a time, cull out the 25 or so most critical definitions and array them on a table in front of yourself. Start your timer, and then (1) PointSeeSay the definitions, or (2) PointSeeWrite the definitions on a response sheet that you have either next to you or also on the table. Remove the card flipping from the scenario and set the timer to some reasonable (preferably empirically determined) time span. Depending on the number of words on a card, a timing for a SeeWrite task might last 5 to 8 minutes, just as with Graf’s ThinkWrite “must credit” statements. You can still count cards (corrects and incorrects), and the card will still serve as a response definer. You can then chart these frequencies (correct responses per minute and error responses per minute) and monitor progress over time.

Tact-Based and Other Types of SAFMEDS

Let me close with a side note or two. The discussion so far has centered on “intraverbal” flashcards and other methods germane to terms and their definitions. But, as noted early on, you can create “tact”-based SAFMEDS. For a tact-based set of cards you would place a picture, drawing, diagram, or photograph on the front side of the card and a corresponding term or phrase on the back side. The objective might become one of object-naming (technically, picture-naming): see the picture, then say the corresponding term. Obviously, as per the above discussion, you could also lay out the “tact” cards on a table and conduct SeeSay, SeeWrite, or other objectives. For another objective, you could have a learner SeeGrabPlaceRelease cards to sort or categorize them. Or SeePoint to only the cards that match a given instruction (e.g., if learning colors, point to all of the blue cards on the table when asked to do so; you’d have a lot of varieties of colors there on the table, of course, including variations of blue that you want a learner to point to correctly, and close-in non-examples, such as purple or blue-green cards, that you would not). Many fields of knowledge lend themselves to “tact”-based SAFMEDS or similar instructional design approaches. I can see such “tact” cards being used in a geology course to learn types of rocks, or in a meteorology course to learn types of clouds, to entice you with a couple of examples.

And, just to drive home the point that SAFMEDS are not about terms and definitions, if you needed to teach arithmetic skills to a learner, you can purchase a set of traditional arithmetic flashcards that have problems on the fronts and answers on the back. Then conduct timings SAFMEDS style. For example, the learner would see 1 + 3 = ? and say “four” for a correct response. The point is, you can take such commercial flashcards and make them into SAFMEDS without ever doing anything to the cards themselves.

Lastly, yes, more research with flashcards, SAFMEDS and the other methods needs to be conducted. But make it research that answers practical questions, not esoteric or trivial questions. Find out how these and other methods lead to quality REAPS (retention, endurance, application, performance standards and stability). Then, too, realize that no one method fits all learners all the time, just as no one shoe size fits everyone’s feet. Learning is both specific and unique, and seeking common methods that always work with everyone amounts to chasing a snark that may not be there.

References

Graf, S. A. (1980). Psychology 560. Course Syllabus. Youngstown, OH: Youngstown State University. PDF file available online at http://www.stevegraf.org.

Johnston, J. M., & Pennypacker, H. S.  (1980). Strategies and tactics of human behavioral research. Hillsdale, NJ:  Lawrence Erlbaum.

Lindsley, O. R. (1981).  Current issues facing standard celeration charting. Invited Address presented at the Winter Precision Teaching Conference, Orlando, Florida. [Transcript available from Dr. John Eshleman.]

Skinner, B. F. (1957). Verbal behavior.  New York: Appleton-Century-Crofts.

=======================================================

Dr. Steve Graf’s (1980) version of the “must credits” statements:


* Fluency Cards ™ is a term trademarked by Dr. Carl Binder.  No trademark infringement is intended here.  I have used this term in my courses under a special dispensation granted by Dr. Binder.

On Terms — Record Floor, Celeration Period, and Multiply-Divide Scale

April 10, 2018

Precision Teaching Terminology

On Terms

The Standard Celeration Chart (SCC; pronounced ess-see-see) has been a key component of Precision Teaching (PT; I capitalize the name of the field, rendering it a proper noun). The chart has always posed some challenges when it comes to teaching people about it – how to use it and why. So, here are some chart terms that may sometimes serve as fluency-blockers when teaching the chart, or, if you’re a student, when trying to learn the chart.

Record Floor. This term is also known as the Counting Floor, Counting Time, Time Bar, Counting Period, and more. Based on my own 11 years of teaching the SCC to graduate students, the time bar has been the biggest fluency-blocker when students are learning the chart, but it’s also probably the most important term and concept; one that certainly sets the SCC apart from mainstream graphs and charts.

The record floor designates the time that you spent recording behavior. On the SCC it’s a small dashed horizontal line that you draw connecting a Tuesday line to a Thursday line. In that sense, it’s a record; short for recording. In the olden days of the analysis of behavior, Skinner and his students and associates actually recorded behavior as it occurred on event recorders and, later, cumulative response recorders. These devices fed paper out continuously from a spool. The feed-out ran at a constant speed, calibrated by gear settings that varied by species. A movable pen would mark directly onto the paper as it rolled underneath. Each time an organism made a response (e.g., pressed a lever, pecked a key), the pen would move up the paper slightly (across the top of the recorder itself, right to left). The angle or slope of the line produced was proportional to the rate of responding: the higher the rate, the steeper the recorded line. When the SCC was developed, this old concept of recording persisted in the form of the record floor.

The record floor also designates the lowest frequency that could be counted during that time spent recording. The lowest count is 1, so if you counted only 1 behavior during the time span, then that’s your lowest frequency for that time span. On an SCC, for a frequency count of 1, the dot gets placed right on the record floor.

If you did not count so much as even 1 behavior during the designated time span, then you have a 0. But notice! That 0 is with respect only to that time spent recording! That’s all that 0 means. It does not mean 0 for an entire day unless you recorded for an entire day. So, if you ran a 10-minute session, on an SCC the record floor is drawn on the 0.1-per-minute line, which technically is one-tenth of a response per minute. But, by itself, that’s meaningless. What 0.1 per minute really means is 1 response in 10 minutes (1/10 = 0.1). The record floor thus not only indicates how long a recording session lasted, but also provides a means for indicating a one (a dot right on the record floor) and a zero (but, again, a 0 only for the session time, nothing further). I’ll revisit the “0 problem” at some later time.
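A minimal sketch in Python, assuming nothing beyond the arithmetic just described (my illustration, not official SCC software), of how the record floor and a charted frequency relate to session length:

```python
# Record floor: the lowest countable frequency for a timed session,
# i.e., 1 count divided by the minutes spent recording.

def record_floor(session_minutes: float) -> float:
    """Lowest countable frequency (counts per minute) for the session."""
    return 1.0 / session_minutes

def frequency(count: int, session_minutes: float) -> float:
    """Counts per minute for the session; a count of 1 lands on the floor."""
    return count / session_minutes

print(record_floor(10))      # 0.1 per minute for a 10-minute session
print(frequency(1, 10))      # 0.1 -> a dot right on the record floor
print(frequency(7, 10))      # 0.7 per minute
# A count of 0 means only that no behavior was counted during those 10
# minutes -- nothing further (the "0 problem" deferred in the text).
```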

Celeration Period. The celeration period is the third dimension in the definition of celeration. Celeration has three “dimensions”: (1) count, (2) per time, (3) per time. On a daily per minute SCC, what most people think of as being “the chart,” celeration is defined as responses per minute per week: r/min/wk. A week is the celeration period. That’s the time across which the celeration is computed and assigned a quantitative value.

The celeration period is critical to understanding celeration itself. In fact, since celeration is a dimensional measure, the proper way of speaking of a particular celeration is to include both the number (the count) and the two standard units (the minute and the week). To report a celeration you need all three, as well as the sign, which will be an x (multiply-by) symbol or a / (divide-by symbol, aka “slash”). An example might be x2 per minute per week. Another example might be x4/min/wk. A deceleration could be /1.4/min/wk. Any variation works, just so long as you have and mention all three parts. A non-example would be to report a celeration value as x2, or as x4. Those would be non-examples because they exclude the units.
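As a minimal sketch, here is one way to format a weekly celeration report with all three parts plus the sign. It is my own Python illustration; it computes the change from two weekly frequency values, which is a simplification of the usual practice of fitting a celeration line through a week or more of daily charted data.

```python
# Format a celeration as sign + count + "/min/wk", as described above.

def celeration(freq_week_start: float, freq_week_end: float) -> str:
    """Express weekly change in frequency as xN/min/wk or /N/min/wk."""
    ratio = freq_week_end / freq_week_start
    if ratio >= 1.0:
        return f"x{ratio:.1f}/min/wk"       # acceleration: multiply by ratio
    return f"/{1.0 / ratio:.1f}/min/wk"     # deceleration: divide by reciprocal

print(celeration(10, 20))   # x2.0/min/wk -- frequency doubled across the week
print(celeration(14, 10))   # /1.4/min/wk -- frequency divided by 1.4
```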

Multiply-Divide Scale. Some persons refer to the y-axis of the SCC as a “logarithmic” scale. Technically, it’s not, because a scale marked off in logarithms would run 0, 1, 2, 3, and be equal-interval, add-subtract. Look up logarithms. The scale on the y-axis of the SCC is based on logarithms, which explains the weird pattern of the lines getting closer and closer together as you go up the scale from 1 to 10, and again from 10 to 100, and so on.

Calling the scale multiply-divide is possibly less foreboding than calling it logarithmic. And it is more accurate: the term multiply-divide indicates the mathematical operation used to move up (multiply) and down (divide) the scale. Such is what Lindsley called the “multiply world.” On an SCC, the vertical distance between a 2 and a 4 is exactly the same as between a 3 and a 6, between a 4 and an 8, between a 5 and a 10, between a 20 and a 40, and between a 150 and a 300. In all those cases, the same exact distance refers to the operation of multiplying by a factor of x2. Doubling, in other words! Tripling works the same way. From 1 to 3 runs the same distance as from 2 to 6, and from 9 to 27. True, there’s no 27-per-minute line on the SCC, but if you know the distance meant by x3 (“times three”), then starting at 9 and multiplying by 3 (9 x 3), you can find where the 27-per-minute line would go. This feature is one way of finding rates that are not printed on the vertical axis! — JE, 9 April 2018
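A minimal Python sketch of the multiply-divide idea, using base-10 logarithms (my own illustration): equal vertical distances on the chart correspond to equal ratios, which is why x2 spans the same distance everywhere and why you can locate unprinted lines such as 27 per minute.

```python
import math

def vertical_distance(low: float, high: float) -> float:
    """Vertical distance between two frequencies on a multiply-divide scale,
    in base-10 log units; equal ratios give equal distances."""
    return math.log10(high) - math.log10(low)

print(round(vertical_distance(2, 4), 3))      # 0.301 -- the x2 distance
print(round(vertical_distance(150, 300), 3))  # 0.301 -- same distance, same x2
print(round(vertical_distance(9, 27), 3))     # 0.477 -- the x3 distance
# Moving up one x3 distance from the 9-per-minute line lands where the
# unprinted 27-per-minute line would be.
```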

 

 

Rebooting this Site

April 10, 2018

About a dozen years have passed since I first started this website under its older name “Standard Celeration Charting.”  The standard celeration chart (SCC) still finds a home here, as it will forever, but in rebooting this site I’ve added “Precision Teaching” to the title, and also “Behaviorology.”

Much can happen in a dozen years. In that time I got hired as a full-time faculty person at The Chicago School of Professional Psychology (TCSPP) in Chicago, IL. I’ve taught graduate-level courses there for the past 11 years. Of course, the best way to learn something is always to teach it, and I think I’ve learned quite a bit over the past decade plus. For that I thank the hundreds of students who have taken my courses: They’ve taught me how to better communicate what I think a science of behavior-environment relations is and ought to be. Humbly, I agree that I still have a long way to go, and will never arrive at any semblance of perfection — not that that is or ever has been a goal. But as you teach a subject matter and see what works and what does not in terms of teaching methods and content, your perspective may well change, and I think that mine has. In fact, I know it has.

In the past dozen years I have been a professional Behavior Analyst, too. In the middle of the previous decade I obtained a Board Certified Behavior Analyst (BCBA) certification, which I ramped up to a BCBA-D, the “D” standing for Doctoral, a few years ago when that designation became available.  So, professionally, I am a BCBA-D.  That’s what I do.  I practice that profession by teaching graduate students primarily, though I have from time to time done “field work” in the form of consulting as well.

Yet, other changes have unfolded, too, in that 12-year time span.  It is those changes, described below, that have led me to “reboot” this otherwise moribund site.

Some changes to the fields and scientific disciplines:

  1. A half dozen years ago or slightly more, the Behavior Analyst Certification Board (BACB), the certifying entity that developed the BCBA Exam and Certification, removed “use the Standard Celeration Chart” from its Task List. The Task List is a set of loosely defined objectives (which read more like loose goals) that describe what a competent Behavior Analyst is supposed to know. Removing the SCC, which is only a tool, but which also is an extremely powerful tool for monitoring and analyzing change in behavior over time, seemed particularly short-sighted. This would be akin, if you can believe it, to some Carpenter’s Certifying Board deciding to remove measuring tapes from the repertoire of tools that a carpenter is expected to use and apply in their job settings! I will have more to say about this tragic decision and the way it was arrived at in a later posting. In the meantime, I’ve adapted as the Task List keeps changing.
  2. The Task Lists keep changing, and adapting is becoming more difficult, simply because how SOME people in Behavior Analysis decide to define Behavior Analysis is not exactly how I would define it. The BACB is moving toward a 5th Edition of the Task List. Preliminary versions have deleted the remaining Precision Teaching and Direct Instruction objectives, and have excised much of the “behavioral acquisition” tasks that earlier Editions carried. At one point even “use shaping” had been removed, though I have been reassured by a close friend and colleague that, well, shucks and gee, shaping’s been put back in! Pursuing our Carpenter’s metaphor a bit, this act would be akin to a Carpenter’s Certifying Board removing “use a hammer” from the Tasks that a competent carpenter should have in their skill set! It also would appear that Behavior Analysis is devolving toward being primarily about behavior reduction. To put this into that same metaphorical framework, if real-life carpenters need to know both how to build buildings and how to tear down existing buildings, then this movement to mainly behavior reduction would be like our fictional Carpentry certifying body deciding that Carpentry should be only about tearing down structures, not building them in the first place!
  3. I’ve been a member of the Association for Behavior Analysis International (ABAI) since 1977. Back in 1977 it was known as “MABA” (the Midwestern Association of Behavior Analysis). MABA changed its name to ABAI about 1979. I’ve always had mixed feelings about ABAI, but I have kept my membership across the decades because, even though, as my doctoral advisor and dissertation chairperson Dr. Ernest A. Vargas often pointed out, “Behavior Analysis” was never clearly defined as a scientific discipline, it sort of seemed like one. People could “assume” that it was one (bear in mind that one of the original meanings of “to assume” is “to pretend,” and the situation becomes somewhat clearer). Behavior Analysis was thus sort of a name for a kind of science but also a name for an emerging profession. The profession is defined by the Task Lists as promulgated by the BACB, as noted above. But what about the science, and about ABAI in particular? Well, at the start of 2018 ABAI, without consulting its membership whatsoever, decided to rename and repurpose its flagship journal, The Behavior Analyst, as Perspectives on Behavior Science. The new journal would not be bound up with or even necessarily tied to Behavior Analysis, which has become the name solely of a profession. The science is evidently to be named “Behavior Science.” Well, there are many salient but also concerning issues related to that name. I will talk about these issues more in a new posting later on, but suffice it to say, there already IS a name for the basic science and its related natural philosophy — Behaviorology. Behaviorology was a name before its time, coined and advanced by both Dr. E. A. Vargas and Dr. Julie S. Vargas, as well as some other individuals who expressed concern about what Behavior Analysis meant. Their concern arose in the mid-1980s, so it has been a while. Behaviorology refers to a scientific discipline (not a field) based on the scientific research and natural philosophy of radical behaviorism as initiated and developed primarily in the work and contributions of B. F. Skinner and Skinner’s colleagues, associates, students, and, for lack of a better term, grand-students (like grandchildren). That has been a very clear and direct meaning from the start of Behaviorology. “Behavior Science,” on the other hand, has no such natural or specific connection to what I also sometimes refer to as “Skinner’s Science.” Moreover, Behavior Science will have to compete with Behavioral Science, which is an ambiguous term related to and referring to many differing scientific and sort-of-scientific fields and disciplines, most of them entirely unrelated to any kind of science that B. F. Skinner developed. That one little -al suffix: As of March 2018, when I type Behavior Science into search engines such as Google and Bing, these engines “autocorrect” the search to Behavioral Science. Then under Behavioral Science you can see that this term encompasses many fields and disciplines, including psychology (in general), sociology, anthropology, and even political science, among many more. It will be a tight discrimination for people to make, to distinguish between Behavior Science and Behavioral Science. Likely, that will not happen, but if it does, it will take a lot of work and effort and will be an uphill struggle! So, why not go with a crystal-clear existing name? That’s my strategy. So, having been trained and educated in Behaviorology by both of the aforementioned Drs. Vargas in a “Behavior Analysis in Human Resources” (BAHR) program located within an Educational Psychology Department, which itself was within a College of Education and Human Resources, I’m comfortable wearing a Behavior Analysis professional hat while adopting, or re-adopting, Behaviorology as the name of the science and its related natural philosophy of radical behaviorism that informs my scientific thinking on what it means to have a science of behavior and behavior change.

I will have future updates, including re-posting here some articles/messages that I recently put on the History of Behavior Analysis listserv about the above issues.

Anyway, welcome back.

As usual, I invite cogent, thoughtful and peaceful commentary.  I will update my blog’s rules too. I’m not here to argue or debate, though I will listen to and read thoughtful responses even if they are not in agreement or alignment with what I know or think. Spam messages will be deleted ASAP.

— John W. Eshleman, EdD, BCBA-D, 26 March 2018

 

Charting versus “Junk Behaviorism”

November 2, 2008

Part One

After having gotten back into academia and having taught graduate-level courses in Behavior Analysis for over a year now, some signs pertaining to the health and state of Applied Behavior Analysis have become clear to me.  Painfully clear.

I seem to be engaged in a perhaps losing battle against what I term “junk behaviorism.”  Let me elaborate.

“Junk behaviorism” is a term I’ve come up with to describe a set of beliefs and practices that seem rampant in applied behavior analysis, but which, so far as I can tell, are not based on science and not based on B. F. Skinner’s experimental analysis of behavior.

Some preamble: Not long ago I was listening to the late, great George Carlin’s “A Modern Man” routine. Carlin had keen insights into our language. His “A Modern Man” routine had him speaking just about every modern clichéd word or phrase that now infests our language. At one point in the routine he said, “I read junk mail, I eat junk food, I buy junk bonds and I watch trash sports!” (You can find many copies of his entire routine on YouTube and other sites, including transcripts.) Carlin’s routine served as an sD to prompt me to think about other kinds of “junk” that we indulge in, including, alas, “junk behaviorism.”

Of course, in recent years some commentators have discussed what they term as “junk science.” Wikipedia defines “junk science”: http://en.wikipedia.org/wiki/Junk_science

So, what’s “junk behaviorism”?

1. It’s saying that you “reinforce the person,” when you discuss positive reinforcement.  “I reinforced Joe the Plumber,” for instance.  Well, how?  By giving him a wall to lean against?  From Skinner’s science we know that behaviorally all you can do is reinforce behavior. You don’t reinforce the person.

2. It’s calling an event or thing a “reinforcer” despite the absence of any evidence that it has functioned as a reinforcer or that it is currently functioning as a reinforcer. “Verbal praise is the reinforcer for Jill the Plumber.” Or, “we will use tokens as the reinforcer for Janet the Student.” What? How do we know that verbal praise “is” the reinforcer, or that the tokens “will” reinforce anything (let alone reinforce Janet the Student)? We don’t. This is extremely faulty use of language. Careless. Disregarding. Even intellectually arrogant. But above all, conceptually unsound. The term reinforcer ought to be used only for events that have demonstrated a functional relationship with respect to behavior. Well, in response to that, what other term should we use? More about that in a bit.

3. Lack of clarity about what a reinforcer does. Sometimes students arrive at grad school after having worked for a year or two, or even several years, out in some agency or clinic that provides “behavioral” services of various kinds to individuals “diagnosed” with various behavioral problems. In some cases they’ve learned that “reinforcement” “increases behavior.” Well, no, it doesn’t. The phrase “increases behavior” is way too ambiguous. Case in point: In discussing the definition of behavior, some individuals wanted to defend “behaviors” that do not pass Lindsley’s “Dead Man’s Test.” A kid staying seated in his seat is therefore construed as behavior, even though a dead person could do better at this “behavior” than a live person ever could. That’s a bad pinpoint, when you apply the “Dead Man’s Test.” So, what’s “increasing behavior” in this example? It’d be the kid staying in his seat for a longer period of time! Egads! Talk about turning Skinner’s science on its head! How many rotations per minute is Skinner spinning in his afterlife? (Said as an update to the common metaphor.)

A variation of this misconception is that “reinforcement” “increases the probability of behavior,” or that it “increases the likelihood of behavior.”  While slightly better than the even more ambiguous “increases behavior,” these still qualify as bad phrases; phrases that obscure more than they clarify.  In contrast, Skinner was very clear:  a reinforcer affects the RATE OF RESPONSE.  More specifically, a reinforcer increases the frequency of behavior over time, where frequency refers to, and means the exact same thing as, rate of response.  

To get a rate of response you have to COUNT instances of behavior and determine how many there are per unit of time.  You need to determine the frequency of behavior and then see whether that frequency changes over time. If it does, and if it increases, then you begin to have some evidence that the event, or thing, functioned as a reinforcer.

In terms of probability and changes to probability, Skinner was always very clear: Probability referred to rate of response. This type of probability addresses the “how often?” question, not the “what are the odds?” question. If we loosely say that the “probability of the behavior increases,” in Skinner’s science we really mean that the response rate increased over time. The count per minute went from one level up to another level. For example, if we start “reinforcing” behavior, its frequency might increase from 5 per minute up to 20 per minute. Or perhaps behavior increases from .1 responses per minute up to .5 responses per minute. If, but only if, those sorts of increases in response rate occur, do we begin to have evidence that we have reinforcement.
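Here is a minimal Python sketch of that evidence rule, using the 5-per-minute to 20-per-minute example above; the counts and session lengths are hypothetical illustrations of mine, not data.

```python
# Rate of response = count of behaviors / observation time in minutes.
# Evidence for reinforcement is a rise in that rate over time.

def response_rate(count: int, minutes: float) -> float:
    """Frequency, in counts per minute."""
    return count / minutes

baseline = response_rate(count=50, minutes=10)    # 5 per minute
later = response_rate(count=200, minutes=10)      # 20 per minute

if later > baseline:
    print(f"Rate rose from {baseline} to {later} per minute: "
          "beginning evidence that the arranged event functioned as a reinforcer.")
else:
    print("No increase in rate: no evidence of reinforcement yet.")
```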

4. Treating nonbehavior as though it is behavior. I have already alluded to how nonbehavior, such as remaining seated, is now thought of and construed as being “behavior.”  Well, only in the junk behaviorism world can this be so!  Nonbehaviors represent a failure to pinpoint actions such that when one instance of an action occurs, it can be counted.  Nonbehaviors also confuse goals, outcomes, or results with behavior. “Remains seated” might well represent a desired goal (for the classroom teacher, perhaps).  I won’t comment here on the desirability of this as a goal; we’ll deal with that at another time. Right now, suffice to say that it’s a goal, and moreover, a state of being, not a behavior.  There’s no action in it.  This is one reason why Lindsley came up with the “Dead Man’s Test.”  Well, the “Dead Man’s Test” cuts against the grain of what appears to be modern-day junk behavioral practices in school or agency settings.  Their definitions of behavior are sometimes so dysfunctional that goals and states of being are confused with movement and action.  That represents a severe and profound failure to conceptualize behavior. In the long run, it will lead to failure of “behavioral” practices, and perhaps ultimately to the dissolution of behavior analysis as a science, to the extent that it really still is a science.

5. Confusing “near-behaviors” with actual behavior.  I got the term “near-behavior” from Jamie Daniels when I worked for Aubrey Daniels & Associates.  I don’t know off-hand if Jamie published it, but let me give him credit. Words such as “use,” “try,” “get,” “give” and so on are “near-behaviors.” They sort of sound behaviorish, and sort of seem to imply that there’s some action.  Yet, they remain very ambiguous.  They do not refer to actual actions or movements.  Ironically, words such as “do,” “respond,” and “behave” are themselves “near-behaviors”!  Well, how does one “respond,” you should ask.  Seek clarification. In junk behaviorism these terms are all used, and seem to be used rather thoughtlessly, as if precision and clarification don’t really matter.  

6. “ABC.” In the field of behavior analysis the “three-term contingency” has become iconic. Moreover, it’s become declarified into the term “ABC,” which stands for “Antecedent, Behavior, Consequence.” This aligns well with Discrete Trial Training (DTT), which almost seems to have become a standard way of viewing behavior on the one hand and the procedure of choice on the other hand. In DTT there is a learner who is probably just sitting there, waiting. The learner, so to speak, sits across a table from a teacher or therapist, so-called. The teacher or therapist, so-called, conducts a “session” with the client learner. During a “session,” the client is presented with “stimuli.” These are the “antecedents.” The teacher or therapist, so-called, will present, one at a time, some item to the client. The item could be a flashcard with a picture on it, for example. This item is shown to the learner. The learner then is supposed to give some response — the “behavior” part of the “ABC” model acronym. Let’s say that the learner does do this behavior. Then the teacher or therapist, so-called, will “deliver” a “consequence” or perform a “correction” routine, depending on how the client responded. Once that’s accomplished, the item is put aside and the teacher or therapist, so-called, picks up the next item, presents it, and the same routine is conducted. This takes place until the session completes, which is usually a fairly short period of time. (I say that the person presenting these stimuli is a teacher or therapist, “so-called,” because a real teacher or therapist would understand that DTT represents but one procedure out of many to change behavior, and not always the best!)

Some people have the audacity to refer to the behavior in DTT as “operant” behavior. But if you observe such DTT, the kid is mainly just sitting there, passively, awaiting environmental events to happen to him or to her. The response given is entirely reactionary, not “operating on one’s environment” in any significant sense. The learner, to the extent that he or she is learning anything at all, may simply be learning to be passive; that events are to be presented to him or her. “Stimuli” are presented. Later on, after some response is given, “reinforcement” or “corrections” are likewise presented. Then one waits for the next “stimulus” to be presented.

This turns Skinner’s model on its head, too.  One can imagine his spin rate accelerating (though, not due to any reinforcement, since you can’t reinforce the dead!).  I will concede that the actions of the teacher or therapist, so-called, represents operant behavior:  That individual is clearly operating on his or her environment!

The “ABC” model has become reified, I contend, as being the model of “operant” behavior.  It’s taken the so-called “three-term contingency” and morphed it into something different from what it was and taken it to what it never should have been.  

In actual fact, the three-term contingency might be somewhat better expressed as Stimulus: (Movement –> Consequence). The discriminative stimulus, sD, doesn’t “cause” the response to occur, though that seems implied in the “ABC” model. The sD occurs in relation to the (MC –> Consequence) contingency pair. In the presence of the sD, the MC –> Consequence relation entails a particular type of consequence, such as one that functions as a positive reinforcer. In an “sDelta,” which is just a different type of sD, the MC –> Consequence relation differs. Perhaps the consequence isn’t a positive reinforcer.

Let’s parse this out a little, since I’ve introduced some terms (“MC”) without defining them.  You start with a two-term contingency relation, MC –> Consequence, where MC stands for “Movement Cycle.”  A Movement Cycle is an instance of behavior. If it has a known function, you may call it a response. An MC has a beginning point and an ending point, and the organism can do another of the same type of MC once the current one finishes.  Informally, we may say that an MC has a “start time,” a “do time,” and a “stop time.”  Those are the boundaries of a single instance of an MC.  In other words, an MC also represents some action or movement by the organism that you can count (which, in turn, enables us to compute the rate of response).  The “consequence” in this relation may be understood better as simply the “effect” produced by the action.  We can substitute action for MC and effect for the consequence to add clarification.

This “Action” –> “Effect” pair forms a two-term contingency. This two-term contingency can come under stimulus control. But it does not necessarily have to do so, or certainly does not have to do so in the “ABC” model sense. 

In actual operant behavior, the organism moves around, and acts upon its environment. It changes and alters the environment. If nothing else it captures and engulfs some nutritious substance that functions to sustain animal life, since the organisms we’re talking about, including human organisms, are animal life. The organism doesn’t sit there awaiting stimuli to come down at it.  It moves. It operates on its environment.  It changes things around.  The environment differs somewhat after it has been operated upon. Moreover, the organism itself gets changed in some way, perhaps a small way, as a result of its acting upon its environment.  There is reciprocity in operant behavior in its relation to organism and environment.  

All of this seems to be obscured by the “ABC” model.  First, the “ABC” model ignores conditions of deprivation and aversive stimulation, which some behaviorists dub the “establishing operation” (though the term “potentiation” may work better).  The “EO,” as the establishing operation is also called, is not an “antecedent event.”  It doesn’t fit into that term. So, right away we’re faced with a fourth term.  

Next, the “ABC” model leads us back into the old, and rightfully discredited, “S-R” model of behavior. Some people in ABA seem to think that the “A” “causes” the “B” to occur, and why should they think otherwise, given that the very model implies that? Moreover, the “A” gets put into an equivalent status with the “C,” the consequence! But, in actuality, the “B –> C” relation is far more important in the operant behavior equation than the “A –> B” relation ever would be.

Unseen and unnoted, what also gets obliterated by the dysfunctional “ABC” model is the CONTINGENCY relation! This we can denote with another term to identify the relation between the “behavior” and the “consequence.” The contingency, in fact, is far more important than the “B” or the “C” themselves.

But note that in the “ABC” model, the question of what the contingency relation is becomes quite limited. How does one factor a schedule of reinforcement into that paradigm? Can you imagine doing a VR50 schedule in a DTT paradigm? I can’t either. The model suggests, rather strongly, that EACH “behavior” will be consequated. And typically, each one is.

In applying the “ABC” model with a DTT procedure, the question of measurement then arises. What does one measure? Well, the “behavior” that the client performs is deemed to be “correct” or “incorrect.” One knows the total number of presentations. So, it’s fairly easy to calculate the percent of behaviors that were correct. Percent correct becomes the measure of choice. It’s easy to do. The data needed to compute it are easy to “take.”

The model ignores time as a fundamental parameter of behavior, however. In principle, it would be possible to measure the LATENCY between when such a “stimulus” is presented to a client and when that client makes a response. Latencies could be directly charted onto a Standard Celeration Chart, because latencies really are frequencies. (Don’t think so? A latency is a count of 1 response per however much time elapsed between when the “stimulus” was first presented and the point in time when the behavior began.) The chart can handle latencies down to .006 seconds, which would indeed be an incredibly short latency. Of course, in the typical DTT situation, latency isn’t recorded, and some might object to recording it because the logistics of carrying out “trials” are already cumbersome enough as things stand. However, scientifically, that’s no excuse. If the science is to be advanced further, then perhaps some enterprising individual will invent some measurement technology that makes the recording of such latencies as easy and convenient as the current percent-correct recording is. Of course, that won’t address the other lingering problems with the underlying paradigm implied by the “ABC” model.
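A minimal Python sketch of the latency-as-frequency conversion just described (my own illustration, with hypothetical latency values):

```python
# A latency is one response per however much time elapsed between the
# presentation and the start of the behavior, so it charts as a frequency.

def latency_as_frequency(latency_seconds: float) -> float:
    """Convert a latency to a chartable frequency in counts per minute."""
    latency_minutes = latency_seconds / 60.0
    return 1.0 / latency_minutes

print(latency_as_frequency(3.0))    # 20.0 per minute for a 3-second latency
print(latency_as_frequency(30.0))   # 2.0 per minute for a 30-second latency
```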

7. Lack of clarity about the terms we use.  I have put words such as “stimulus” in double quotes above, because, again, unless there exists some evidence that a thing or event FUNCTIONS to exert stimulus control over a two-term Action –> Effect relation, the event or thing should not be called a stimulus. The same goes for “reinforcer,” “consequence,” “contingency” and “response.”  All of these terms should be used only when we have demonstrated evidence that they functioned in some way.  Otherwise, we end up with the “junk behaviorism” nonsense statement that “I tried the reinforcer, but it didn’t work.”  Well, sorry to report that EVERY single reinforcer in the 5 billion year history of this planet has worked — each and every time!

So, how do we get past the “junk behaviorism” tendency to use function words when we do not have evidence of function?

Dr. Og Lindsley supplied the answer back in the mid-1960s, by suggesting we use two sets of terms, one to simply describe events as they are, and then a second set to identify terms when we have evidence that they functioned in some way. He named this the IS-DOES operant behavioral equation.  

On the IS side of the equation, the term “antecedent event” would never be used to denote a thing or event that has demonstrated stimulus control over an Action –> Effect pair. Antecedent Event, abbreviated AE, would simply refer to an event that happened before some behavior occurred. That’s all we know about it: it took place before the behavior, and nothing else. It may or may not be functionally related, but when discussing what it IS, we don’t know what it DOES. We don’t assume that it has a stimulus function, either. (Alas, because the term “antecedent” has become so deeply embedded in the junk behavioral culture as meaning the exact same thing as “stimulus,” it may be too late to revive “antecedent event” in the sense that Lindsley meant! That does not negate the point. It rather suggests we need to keep working at terminology.)

Likewise, use “response” only when a Movement Cycle has a demonstrated functional relationship to other events; use Movement Cycle (MC) simply to describe an instance of behavior. Likewise, reserve “consequence” for events that have had a demonstrated effect on the rate of a response. If the function of an event that follows behavior in time is unknown, then use Subsequent Event (abbreviated SE). It makes perfect sense to say, “I tried the SE, but the frequency of behavior didn’t increase!” Well, try another SE and see whether it increases the response rate!

Likewise, use “arrangement” to describe the numerical, temporal, or other relation between an MC and an SE. But once you have a clearly evident functional relationship between a response and a consequence, then, and only then, use the term contingency.

The IS side of the equation is thus written:

Program: (Antecedent Event: Movement Cycle — Arrangement –> Subsequent Event)

Using abbreviations:

P: (AE: MC — Arr –> SE)

The DOES side of the operant behavioral equation then becomes:

Disposition: (Stimulus: Response — Contingency –> Consequence)

Using abbreviations:

D: (sD: R — K –> C).

Note the use of parentheses and colons. A single instance of an MC (IS side), or of an R (DOES side), gets enclosed in the parentheses. Colons signify that the item in question might be either a discrete event or a more sustained condition (e.g., in a MULT schedule, responding on a VR schedule when the green light is ON, the light being on before, during, and after any given response). The time arrow, –>, gets used only to signify the temporal relation between events where we need to indicate it. In other words, if we put an arrow between the AE and the MC, we risk reintroducing the junk behavioral “S-R” mindset. To avoid that possibility, don’t put an arrow there. It doesn’t fit anyway.
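To illustrate the IS-DOES separation in a different medium, here is a small, purely illustrative Python sketch (the class and field names are mine, not Lindsley’s) that keeps the descriptive terms and the functional terms in separate structures, so that functional labels get applied only deliberately, after evidence of function:

    from dataclasses import dataclass

    @dataclass
    class IsSide:
        """Descriptive (IS) terms: no functional claims are made."""
        program: str            # P:   the arranged setting or program
        antecedent_event: str   # AE:  an event that merely preceded behavior
        movement_cycle: str     # MC:  an observed instance of behavior
        arrangement: str        # Arr: the described MC-SE relation (e.g., 1:1)
        subsequent_event: str   # SE:  an event that merely followed behavior

    @dataclass
    class DoesSide:
        """Functional (DOES) terms: used only once function is demonstrated."""
        disposition: str
        stimulus: str
        response: str
        contingency: str
        consequence: str

    # Record descriptively first; "promote" to DOES-side terms only after the
    # data show, for example, that the SE actually raised the response rate.
    trial = IsSide("tutoring session", "card presented", "says answer",
                   "1:1", "tutor says 'right'")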

I must note that Dr. Ogden R. Lindsley conceptualized the operant behavioral equation in a 1964 paper, “Direct Measurement and Prosthesis of Retarded Behavior,” published in the Journal of Education. It morphed a couple of times, with his earlier acronyms and terms changing slightly. It then fell into disuse when it proved too difficult to engage would-be behavior analysts in learning the IS-DOES equation. Let me suggest that now is the time to reintroduce it. Moreover, I have tweaked the equation somewhat through the use of those parentheses and colons, for the aforementioned reasons. Will it work? Maybe, but we won’t know if we don’t try, try again!

Well, there’s a lot more “junk behaviorism” that afflicts the field of behavior analysis, and I’ll discuss that in Part Two of this article, and include the relevant references then.

  — JE

Frequency Jumps and Celeration Turns

August 20, 2007

One of the neat things about the Standard Celeration Chart resides in its ability to clearly show two basic types of changes to behavior that can occur when you change an independent variable.  We refer to the point in time when you make a change as a “phase change.”  A phase change takes place, for instance, between a baseline period of behavior recording and an intervention period.  In an intervention you change the values of at least one independent variable, and then monitor its effects to determine whether it changes behavior over time.

As noted, two basic changes to behavior can occur (there are more, but for now we will restrict the discussion to these two basic changes): 

 1. The frequency of the behavior can change abruptly.

 2. The celeration of the behavior can change over time.  We consider celeration changes to be the more “gradual” changes, though if the celeration runs steep enough, the change over time may seem anything but gradual.

We call the abrupt changes to frequency “jumps.”  For example, if you make a phase change and the frequency goes from 10 per minute on Monday to 20 per minute on Tuesday, that change describes a “frequency jump up.”  In this example, the jump up would have a value of x2 (“times two”) on the Standard Celeration Chart.  Mathematicians use the older term “step function” for such changes; the Precision Teaching term “jump” runs more in line with the plain English emphasis of this field.
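As a minimal sketch of that arithmetic (the function below is my own illustration), the jump is simply the ratio of the frequency after the phase change to the frequency before it, expressed as a times (x) or divide (/) value the way it would be read off the chart’s multiply-divide scale:

    def frequency_jump(before_per_min, after_per_min):
        """Express an abrupt frequency change across a phase change as a
        times (x) or divide (/) value."""
        ratio = after_per_min / before_per_min
        if ratio >= 1.0:
            return "x{:g} jump up".format(ratio)
        return "/{:g} jump down".format(1.0 / ratio)

    print(frequency_jump(10, 20))   # "x2 jump up" -- the example above
    print(frequency_jump(20, 10))   # "/2 jump down"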

We call the more gradual changes to frequency over time “celeration turns.”  On the Standard Celeration Chart we depict frequency with a dot and celeration with a line of best fit drawn through a set of daily frequencies.  You can draw a celeration line for a baseline phase, and then draw a separate celeration line for the subsequent intervention phase.  If the angle of the celeration line changes across phases, then we say that the celeration has “turned.”  For example, if the celeration preceding a phase change ran at x1.0 (“times one”) and then after the intervention it shifted to a x2.0 per minute per week slope, then we would describe the celeration turn as a x2 (“times two”) change.
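As a hedged sketch of how one might compute such values: the Python code below (using numpy) fits a straight line to the log10 of the daily frequencies, which is a least-squares stand-in for the chart’s line of best fit rather than the quarter-intersect method some charters use, and expresses each phase’s celeration as a times-per-week multiplier; the turn is then the ratio of the two.

    import numpy as np

    def weekly_celeration(daily_frequencies):
        """Estimate celeration as a x-per-week multiplier from a least-squares
        line fit to the log10 (multiply-divide scale) daily frequencies."""
        days = np.arange(len(daily_frequencies))
        slope, _ = np.polyfit(days, np.log10(daily_frequencies), 1)
        return 10 ** (slope * 7)   # log-units per day -> multiplier per week

    def celeration_turn(baseline, intervention):
        """A turn is the ratio of the intervention celeration to the baseline."""
        return weekly_celeration(intervention) / weekly_celeration(baseline)

    # A flat baseline (x1.0 per week) followed by a phase that doubles each
    # week (x2.0 per week) yields a x2 turn, as in the example above.
    baseline = [10, 10, 10, 10, 10, 10, 10]
    intervention = [10 * 2 ** (d / 7) for d in range(7)]
    print(round(weekly_celeration(baseline), 2))              # ~1.0
    print(round(weekly_celeration(intervention), 2))          # ~2.0
    print(round(celeration_turn(baseline, intervention), 2))  # ~2.0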

To recap, the frequency can “jump” and the celeration can “turn” when you put into effect some change to some independent variable.

There are many combinations of frequency jumps and celeration turns.  Note that “no jump” and “no turn” also represent possible outcomes of making an intervention.  Moreover, note that any jump or turn on the Standard Celeration Chart can be up or down.  If the frequency or celeration increases, then the change, as shown on the chart, is “up.”  Likewise, if the frequency or celeration decreases, then the change is “down.”

The basic jump and turn combinations, therefore, are:

* Frequency jump up, celeration turn up.

* Frequency jump up, celeration no turn.

* Frequency jump up, celeration turn down. (A counter-turn).

* Frequency no jump, celeration turn up.

* Frequency no jump, celeration no turn.

* Frequency no jump, celeration turn down.

* Frequency jump down, celeration turn up (A counter-turn).

* Frequency jump down, celeration no turn.

* Frequency jump down, celeration turn down.

Lindsley and his students identified two cases of “counter-turns.”  A counter-turn occurs when a frequency jump in one direction is followed by a celeration turn in the opposite direction.  The two cases are a frequency jump up followed by a celeration turn down, and a frequency jump down followed by a celeration turn up.  In both cases, the celeration trend takes the frequencies back toward their starting point, suggesting that the changes to the behavior produced by manipulating the independent variables were temporary at best.  Lindsley and his students found that a fairly substantial proportion of the published behavior analysis literature contained such counter-turns.  Moreover, they found that the charts and graphs used in the published literature tended to obscure the fact that counter-turns had occurred.  You can make a counter-turn seemingly go away by using stretch-to-fill and fill-the-frame charts.  Of course, in the real life of the student or research participant, the counter-turn has not gone away.

The Figure associated with this essay illustrates the 9 basic frequency jump and celeration turn combinations.  However, you should know that many more combinations are possible.  For instance, while the Figure has the baseline phases running flat across the little charts, you could find situations where the baseline frequencies were already accelerating or decelerating.  Given that, the potential number of jump and turn combinations rises dramatically.  Of course, the total possible number of combinations becomes infinite when you consider all of the possible values that jumps and turns can take.

We use x2 (“times two”) as a rule-of-thumb criterion for marking when a jump or a turn has occurred.  Any change having a value of x2 will show up clearly, and any change greater than x2 will show even more clearly.  Changes of less than x2 can occur, but they become harder to discern.  For instance, a frequency change of x1.1 would not show up very clearly no matter what type of chart you used.
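Here is a small illustrative sketch (the function names are mine; the x2 threshold is the rule of thumb just described) that labels a phase change from its jump and turn ratios, flagging the two counter-turn cases listed above:

    def classify_change(jump_ratio, turn_ratio, threshold=2.0):
        """Label a phase change using the x2 rule of thumb: ratios of at least
        x2 (or at most /2) count as a jump or turn; values closer to x1 do not."""
        def label(ratio, up, down, none):
            if ratio >= threshold:
                return up
            if ratio <= 1.0 / threshold:
                return down
            return none
        jump = label(jump_ratio, "jump up", "jump down", "no jump")
        turn = label(turn_ratio, "turn up", "turn down", "no turn")
        combo = "frequency {}, celeration {}".format(jump, turn)
        if (jump, turn) in {("jump up", "turn down"), ("jump down", "turn up")}:
            combo += " (a counter-turn)"
        return combo

    print(classify_change(2.0, 0.5))   # frequency jump up, celeration turn down (a counter-turn)
    print(classify_change(1.1, 2.0))   # frequency no jump, celeration turn up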

You can use a frequency finder and/or a celeration finder to determine the actual, precise values of the change to behavior over time.

  — John Eshleman, Ed.D., BCBA  (August 20, 2007)

———————————————————-


Basic Combinations of Frequency Jumps and Celeration Turns

Counting Unknowns

August 6, 2007

Lindsley (1997) states:

 “There are measurement experts who say you must objectively define a thing before you can count it.  Wrong again.  You can even count unknowns.  You can keep track of the time and chart the frequency of unknown things you encounter each day.  The daily frequency of unknowns is very high when you are in a foreign place and very low when you are in a familiar place.” (p. 529)

 REFERENCE:

Lindsley, O.R. (1997). Performance is easy to monitor and hard to measure.  In R. Kaufman, S. Thiagarajan, & P. MacGillis (eds.), The Guidebook for Performance Improvement: Working with Individuals and Organizations.  San Francisco: Jossey-Bass/Pfeiffer.  Chapter 26, pp. 519-559.

Successive Minutes Chart – Doing a Timing Every Other Minute

April 3, 2006

John Eshleman's 1984 SAFMEDS ("Flashcards") data on Merbitz & Layng Chart

There are several things to consider on the chart shown here.

 The attached chart shows data from March 29, 1984 that I had originally charted on a "converted" daily Standard Celeration Chart (DC-9EN). I was 28 at the time. Back then I was testing out how to obtain a Learning Picture in a half hour or so. 

 There were no "timings charts" back then, so I had used a daily chart, crossing out the word "Days" on the x-axis label and replacing it with minutes.  (That original chart is not shown here).

The chart shown here represents an example application of the Merbitz & Layng (1996) V010396 Sprint #19 "Successive Minutes" chart.  Across the bottom axis are successive real-time minutes, not days, not sessions, and not successive timings.  I found the old 1984 chart today and recharted its data onto the Merbitz & Layng chart; it took only a few minutes to do so.

To read the chart: the y-axis up the left is Responses per Minute, on a multiply-divide scale; the x-axis across the bottom is Successive Minutes, on an add-subtract scale.

This is a chart of me doing one-minute timings of SAFMEDS (Say All Fast a Minute Every Day Shuffled; in this case, a Minute Every Other Minute Shuffled). I ran the timings every other minute; in some cases, two minutes elapsed between timings.  Because these data were charted in real time, when one minute elapsed between timings that time line is blank, and when two minutes elapsed between timings two time lines in succession are blank.  During the minute between timings I would count the corrects and incorrects, chart them quickly, and then reshuffle the cards. The round dots are corrects per minute and the x's are incorrects per minute. I drew in Record Floors down at the 1 line.

The topic was Apple II Machine Language terms, which I had made into SAFMEDS.  At the time I was learning how to program computers, and I was thinking about learning machine language.  I was not doing this for any class, job, or formal project.  Just learning it on my own.

On the chart are a couple of event manipulations.  About 20 minutes into the study session, I decided to study the errors during the one minute between timings, because four to eight errors still persisted (cards that seemed difficult to learn).  And about 38 minutes into the session, I set an aim of 40 per minute (but not an actual aim-star, which would include not only the frequency level but also the time line; I put the aim over onto the y-axis).

Overall, the chart shows a "jaws" learning picture across a 70 minute period of time. Over that period I did 34 one-minute timings.  There was a slight crossover picture at the start, but only two times when errors were above corrects, so to me this LP looks more like a "jaws" than a "crossover jaws" picture.

Some implications:

Last year, on the Standard Celeration listserve, I discussed the question of charting data in real time, but at the time I had no successive-minutes, real-time charted data to put up to illustrate the point.  While the attached data come from an old chart, they illustrate HOW the Merbitz & Layng real-time Successive Minutes chart could be used for minute-by-minute recording.

You can do more than the "traditional" four or five timings within a day (since when did that become an actual tradition? Why? On what basis?).  As this chart illustrates, it is possible to do many timings within a daily session.

There are limits. On the chart I noted that at about 67 to 68 minutes into the session I was fatiguing.  So, for that reason and possibly others, I wouldn't necessarily recommend doing such an extensive session with yourself or with a learner.  On the other hand, you might try doing 10 to 15, or maybe 20, timings within a day if that is logistically feasible, and record them in real time.

Using a BRCo CFM-4 Celeration Finder, I found that the Merbitz & Layng chart is calibrated to the proper x2, 34-degree angle.  I wrote in some celeration values covering parts of the session (e.g., an initial x2.3 celeration of corrects, followed by a x1.3 midway through and a x1.1 during the period when I was fatiguing). The celeration period of this chart is 10 minutes, so technically the first celeration would be stated as x2.3 per minute per 10 minutes (celeration = count per time per time).
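To unpack that celeration arithmetic, here is a minimal sketch (the starting value of 15 per minute is illustrative, not read off the actual chart): a celeration of x2.3 per minute per 10 minutes means the corrects-per-minute frequency multiplies by 2.3 every 10 real-time minutes.

    def project_frequency(start_per_min, celeration_per_10min, minutes):
        """Project a frequency forward along a celeration line on a
        successive-minutes chart, where the celeration is a multiplier
        applied every 10 real-time minutes (count per minute per 10 minutes)."""
        return start_per_min * celeration_per_10min ** (minutes / 10.0)

    # Starting at 15 corrects per minute with a x2.3 per 10 minutes celeration,
    # the projected frequency after 20 minutes is 15 * 2.3**2, about 79 per minute.
    print(round(project_frequency(15, 2.3, 20)))   # about 79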

The image is slightly angled.  I tried re-aligning the page in the scanner a couple of times, but it still came out angled a tad.  Then I measured the margins of the paper, and the distance from the edge of the paper to the chart frame was not the same going across.  The copies of the Merbitz & Layng chart that I have appear to have been xeroxed.  That goes to another point Dr. Og Lindsley made in his last-ever talk, at the 2003 IPTC, about chart standards: one of the standards is the margins.  Og was very precise and adamant about this.  Margins have to be exact in order for exact overlays to work, and this slight angling now seems to be yet another reason why margin standards need to be actual standards, as he said.

— JE

REFERENCE:

Lindsley, O.R. (2003). Precision Teaching's eyes and ears: Standard Celeration Charts and terms.  Invited Address presented at the International Precision Teaching Conference, Columbus, OH 6 Nov 2003.

Posting Charts to the SCC Blog

March 31, 2006

 John Eshleman's first Standard Celeration Chart from 1975

Just testing how charts and graphics can be uploaded to the blog.


The chart shown here is one that I uploaded to the Standard Celeration Listserve a few days ago.  It's a chart from 1975 and, in fact, is the first Standard Celeration Chart that I ever did. It's a chart of me reading my class notes, day by day, for the course named Applied Reinforcement Theory, which was taught by Dr. Steve Graf and which first introduced me to Precision Teaching and to the Standard Celeration Chart.  The Movement Cycle is "reads page."  The y-axis up the left is Pages Read per Minute.  Those were pages in my notebook. –JE

BRCo Standard Celeration Charting Resources

March 27, 2006

A centralized site for both of the Standard Celeration Charting books I mentioned in the previous post, and a source of the charts themselves, is the Behavior Research Company (BRCo) family of websites.  BRCo was founded by Dr. Ogden R. Lindsley.

To order either the Graf & Lindsley (2002) or the Pennypacker, Gutierrez, Jr., and Lindsley (2003) book from BRCo you can go to BRCo's site at:

http://www.behaviorresearchcompany.com/

and click on the link to its online store.

To go to its store directly, you can click on:

https://www.behaviorresearchcompany.com/Merchant2/merchant.mvc?Screen=SFNT&Store_Code=B

and browse amongst its catalogs of products.

To go to the page where you can order the two aforementioned charting books, the direct webpage link is:

https://www.behaviorresearchcompany.com/Merchant2/merchant.mvc?Screen=CTGY&Store_Code=B&Category_Code=BKMN

where you can also view their front covers. — JE