Detecting Intonation Errors in Familiar Melodies

Nancy E. Kelley
Manchester College, North Manchester, Indiana

Experiments on musical pitch perception have shown that intervals with equal frequency ratios are not always perceived as the same by musicians, but are affected by the tonal context, or key, in which the intervals are heard (Krumhansl, 1979; Krumhansl & Keil, 1982; Krumhansl & Shepard, 1979). Memory for the pitch of single tones has also been shown to be affected by whether a series of tones intervening between the standard and comparison tones is tonal or atonal (Dewar, Cuddy, & Mewhort, 1977; Krumhansl, 1979). These findings are thought to result from the use of an internal frame of reference (the scale) in the encoding of pitch.

The present study tests the hypothesis that nonmusicians' recognition memory for pitch, in the form of intonation judgments concerning notes in familiar melodies, is affected by the tonal context in which the tones are heard. Specifically, it is expected that differences in the tonal functions, or tonal stabilities, of the tones in their respective tonalities will contribute to differences in listeners' ability to perceive small changes in frequency when the notes are out of tune. Since the more consonant steps in the scale are more easily remembered, it is predicted that they will also be more discriminable from close neighboring tones.

Experiment One

In the first study, participants made intonation judgments concerning notes that were the second and fifth degrees of the scale in the key of the melody in which they were heard. On half of the trials, the tone being judged had an absolute frequency of 256 Hz (middle C). On the other half, the frequency was 384 Hz (the G above).

Method

Participants: 27 musically untrained listeners participated as one of the options for fulfilling the experimental methodology activity requirement for an introductory psychology course at Indiana University Purdue University Fort Wayne.

Materials: Eight melodies familiar to the listeners were synthesized and presented using the Hypersignal software by Hyperception, Inc. Four melodies contained target tones that were the fifth degree of the scale: America; Jingle Bells; Row, Row, Row Your Boat; and Doe, a Deer. The other four melodies contained target tones that were the second degree of the scale: The Alphabet Song; When the Saints Go Marching In; Happy Birthday; and Here Comes the Bride.

Procedure: Each participant was asked to judge whether a particular note, the "target" in each of eight melodies, was in tune or not. The melodies were heard in free field, and judgments were entered on an answer sheet. Six response options were available, reflecting the listener's judgment about whether the note was "right" (in tune) or "wrong" (out of tune; sharp or flat) and the degree of confidence: definitely right, right, maybe right, maybe wrong, wrong, definitely wrong. Each listener heard nine versions of each melody in random order, for a total of 72 trials. The versions differed in (1) whether or not the target was in tune and, if not, (2) the degree to which the target was out of tune. The nine possible targets were separated by eighth-tone steps: version one was a half tone flat, version two was 3/8 tone flat, and so on up to version nine, which was a half tone sharp.
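As an illustration, the nine target frequencies can be derived with equal-tempered cents arithmetic (an eighth tone = 25 cents; a half tone = 100 cents). The sketch below is hypothetical: the paper does not report the exact synthesis parameters used in Hypersignal, and the function name and the assumption of cents (rather than linear Hz) steps are the sketch's own.

    # Minimal sketch, assuming equal-tempered cents arithmetic.
    # An eighth tone = 25 cents; versions run from -100 cents
    # (half tone flat) to +100 cents (half tone sharp).

    def mistuned_targets(in_tune_hz):
        """Return the nine target versions, flattest to sharpest."""
        return [in_tune_hz * 2 ** (cents / 1200)
                for cents in range(-100, 101, 25)]

    for f in mistuned_targets(256.0):  # the middle C target
        print(f"{f:7.2f} Hz")

Applied to the 256 Hz target, this spans roughly 241.8 Hz to 271.0 Hz, with version five objectively in tune.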
Participants were informed of which note was the target through the words of the songs. The words were presented in written form, with words corresponding to non-target tones in black, lowercase type and the word for the target tone in red, uppercase type. For example, while listening to "Row, Row, Row Your Boat," participants would see the following:

Row, row, row your boat
Gently down the STREAM
Merrily, merrily, ...

They then made their judgment concerning the note corresponding to "stream." These judgments were made without feedback.

Results

An initial analysis showed that responses did not differ according to the absolute frequency of the target. Therefore, the data were collapsed across both frequencies. Figure 1 shows the total number of "right" responses of any sort as a function of the number of 1/8 tones the target note was out of tune. The bell shape of the curves is typical for frequency discrimination data, with two exceptions. (1) The point at which the target was most likely to be heard as in tune in the P5 condition was not centered on the objectively in-tune frequency, but was slightly higher in pitch. (2) The curves are steeper (better discrimination) when the target was flat than when the target was sharp, for both P5 and M2.

The signal detection analysis shows a similar pattern. For both P5 and M2, separate analyses were performed for sharp and for flat targets. Hit rates were the percentages of correct "right" responses when the targets were in tune. The two false alarm rates were the percentages of incorrect "right" responses when the target was either a quarter tone flat or a quarter tone sharp. While hit rates were essentially the same in all four cases, false alarms were greater when the out-of-tune targets were on the sharp side.

Experiment Two

In the second study, participants made intonation judgments concerning notes that were the sixth and eighth degrees of the scale in the key of the melody in which they were heard.

Method

Participants: 18 musically untrained listeners participated as one of the options for fulfilling the experimental methodology activity requirement for an introductory psychology course at Indiana University Purdue University Fort Wayne.

Materials: Eight melodies familiar to the listeners were synthesized and presented using the Hypersignal software by Hyperception, Inc. Four melodies contained target tones that were the eighth degree of the scale: This Old Man; Santa Claus Is Coming to Town; All I Want for Christmas; and Brahms's Lullaby. The other four melodies contained target tones that were the sixth degree of the scale: Highlands; Oh, Shenandoah; Bicycle Built for Two; and Amazing Grace.

Procedure: The same procedure was used as in the first experiment.

Results

Once again, an initial analysis showed that responses did not differ according to the absolute frequency of the target, and the data were collapsed across both frequencies. Figure 2 shows the total number of "right" responses of any sort as a function of the number of 1/8 tones the target note was out of tune. The peaks of both curves are at the point where the targets are in tune. The overall steepness of the curve for P8 is greater than for M6, showing the P8 to be more easily discriminated from near neighbors. There is an asymmetry in the curves for both P8 and M6, with a steeper fall-off on the flat side indicating better discrimination. The signal detection analysis shows overall better discrimination for P8. There is a conservative bias to respond "wrong" when the target is P8 and a bias to respond "right" when the target is M6.
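To make the signal detection measures used in both experiments concrete, the following sketch computes sensitivity (d′) and the criterion (c) from hit and false alarm rates as defined above. The formulas are the standard equal-variance Gaussian ones; the rates in the example are hypothetical and assume only the direction of the reported effect (equal hits, more false alarms for sharp mistunings).

    # Minimal sketch of the standard signal detection computations.
    from statistics import NormalDist

    def sdt_measures(hit_rate, fa_rate):
        """Return (d_prime, criterion) from hit and false alarm rates."""
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
        c = -0.5 * (z(hit_rate) + z(fa_rate))          # c > 0: bias to "wrong"
        return d_prime, c

    # Hypothetical rates illustrating Experiment One's pattern:
    print(sdt_measures(0.80, 0.30))  # flat quarter-tone targets: higher d-prime
    print(sdt_measures(0.80, 0.55))  # sharp quarter-tone targets: lower d-prime

On these conventions, a positive criterion corresponds to the conservative "wrong" bias reported for P8, and a negative criterion to the "right" bias reported for M6.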
Discussion

First, musically untrained listeners are able to hear relatively small changes in intonation. For the P8, P5, and M2 scale degrees, they are able to detect a quarter-tone change from the point of subjective equality (which is slightly sharp in the case of P5). For the M6, a change of 3/8 tone is required for reliable discrimination.

Second, there are clear differences in the discrimination functions for the different scale degrees. It is therefore evident that musically untrained listeners' pitch intonation judgments are sensitive to the tonal functions of the tones. It is less evident what mechanism or mechanisms are responsible for these differences. It is not a simple case of more consonant tones being more discriminable than less consonant tones: while P8 targets were the most discriminable, P5 was no better than M2.

The differences in response bias for the different tonal functions suggest that the accuracy of their memory representations may influence the criterion used in accepting a tone as in tune. In other words, if one has a good idea of what a note such as P8 should sound like, one will apply a stricter criterion for what one calls in tune. Of course, that raises the question of why certain tonal functions are better represented than others. Another possibility is that a perceptual quality associated with the different tonal functions changes at a different rate in the tones surrounding the target tones used in this study. The differences in the discrimination functions may reflect the degree of change in this subjective quality at 1/8-, 1/4-, 3/8-, etc., tone differences from the in-tune note. The greater the local change, the better the discrimination.

References

Dewar, K. M., Cuddy, L. L., & Mewhort, D. J. K. (1977). Recognition memory for single tones with and without context. Journal of Experimental Psychology: Human Learning and Memory, 3, 60-67.

Krumhansl, C. L. (1979). The psychological representation of musical pitch in a tonal context. Cognitive Psychology, 11, 346-374.

Krumhansl, C. L., & Keil, F. C. (1982). Acquisition of the hierarchy of tonal functions in music. Memory & Cognition, 10, 243-251.

Krumhansl, C. L., & Shepard, R. N. (1979). Quantification of the hierarchy of tonal functions within a diatonic context. Journal of Experimental Psychology: Human Perception & Performance, 5, 579-594.