(S-7)

How was the threshold of 7 items in short-term memory determined?

Answer(s):


From: collins@cs.wm.edu (Bob Collins)
Date: Sat, 19 Mar 1994 13:34:48 GMT

The original source is:

Miller, G. A. (1956). "The magical number seven, plus or minus two: Some limits on our capacity for processing information." Psychological Review 63: 81-97.

This paper summarizes a series of short-term memory experiments; I believe the work was done for AT&T. Hope this helps.


Date: Mon Mar 21 03:47:45 1994
From: sdpage@andersen.co.uk (Stephen Page)

7 +/- 2 provides a useful reminder that short-term memory has a limit. However, psychologists these days widely reject the over-use and over-simplification of the 7 +/- 2 figure by HCI people. (References, anyone?) The limit is measured in "chunks"; since it is very hard to determine what a "chunk" is, the measure is meaningless.
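A toy sketch may make the chunking point concrete (this is my illustration, not from any of the posters; the letter string and "known groups" are invented). The same nine letters count as nine chunks for someone who sees only letters, but as three chunks for someone who already knows the acronyms, so the "7 +/- 2 chunks" limit depends entirely on what the rememberer treats as a chunk:

    # Toy illustration: the chunk count depends on the chunking
    # scheme the rememberer brings to the task.

    def count_chunks(letters, known_groups):
        """Greedily segment `letters` using any groups the person
        already knows; everything else costs one chunk per letter."""
        chunks = 0
        i = 0
        while i < len(letters):
            for group in known_groups:
                if letters.startswith(group, i):
                    i += len(group)   # one familiar group = one chunk
                    break
            else:
                i += 1                # unfamiliar letter = one chunk
            chunks += 1
        return chunks

    letters = "FBICIAIBM"
    print(count_chunks(letters, []))                     # 9 chunks: novice
    print(count_chunks(letters, ["FBI", "CIA", "IBM"]))  # 3 chunks: expert

The same stimulus therefore falls either well outside or well inside the 7 +/- 2 range, which is exactly why the measure is hard to apply to real interfaces.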


From: fullerr@cgsvax.claremont.edu
Date: 31 May 94 12:50:54 PST

Cognitive limits can change depending on the environment and the individual. Miller's landmark work showed that at the intersection of several senses and attentional processes there seemed to be a maximum limit to the amount of information that could be understood or remembered:

Miller's work on cognitive limits should be qualified as relevant to situations where you have VERBAL RECALL of VISUAL STIMULI consisting of NONWORDS rehearsed in HUMAN SHORT-TERM MEMORY (it should go without saying that these humans are assumed to be college students). If ALL FOUR of those conditions exist then you can say that your users will be limited to 7 +/- 2 chunks of information.

It is NOT even possible to determine the cognitive limit of any one sense. For example, the human ear seems to be able to clearly distinguish up to 50 basic speech sounds per second (assuming the English language) but only 2/3 of a basic tone per second (assuming accepted Western music structure), yet both systems use the same sensing apparatus. Also, color and motion are processed differently depending on the location of the stimuli in the eye and on the pathway to and from the LGN (lateral geniculate nucleus). The absolute and relative thresholds of these systems are different.

It might seem like a minor point to a non-psychologist, but think of the issue metaphorically as the following HCI problem. You are one of many authors on a document, and you are all using different formats. To share data you must reduce your information to the most basic of formats. These can then be shared with the others, but you have no idea what format the others are using (and you really don't care; you only want to use the one that works best for you). If all of the transfer is internal, there is no way of knowing how much of your format got garbled before it was printed: you ONLY see the final product. But if all of the authors can work in front of a shared workspace, then you can revise your formats before the actual product goes to print.

If we assume that each sense (vision, speech, hearing, smell) is a different author, then Miller's work deals only with the situation where the authors work alone and merge their work at the latest stage (and live with the errors). More current theories of attention deal with the use of external representations (like being able to see the screen as you work: WYSIWYG, etc.) and how these increase the cognitive limits of us poor users.


comp.human-factors faq WWW page:
http://www.dgp.toronto.edu/people/ematias/faq/contents.html