Journal of Experimental Psychology: Human Perception and Performance 2002, Vol. 28, No. 4, 990–1001

Copyright 2002 by the American Psychological Association, Inc. 0096-1523/02/$5.00 DOI: 10.1037//0096-1523.28.4.990

Cross-Language Computational Investigation of the Length Effect in Reading Aloud

Conrad Perry
Macquarie University and The University of Hong Kong

Johannes C. Ziegler
Centre National de la Recherche Scientifique and Université de Provence, and Macquarie University

The authors examined whether 2 computational models of reading, the dual-route cascaded model (M. Coltheart, K. Rastle, C. Perry, R. Langdon, & J. C. Ziegler, 2001) and the connectionist 2-layer model (M. Zorzi, G. Houghton, & B. Butterworth, 1998), were able to predict the pattern that the length effect found in reading aloud is larger in German than in English (J. C. Ziegler, C. Perry, A. M. Jacobs, & M. Braun, 2001). The results showed that the dual-route cascaded model, which uses a serial mechanism for assembling phonology, successfully predicted this cross-language difference. In contrast, the connectionist model of Zorzi et al. (1998) predicted the opposite: a larger length effect in English than in German. Both the success of one model and the failure of the other highlight fundamental differences between 2 major classes of computational models.

Conrad Perry, Macquarie Centre for Cognitive Science, Macquarie University, Sydney, New South Wales, Australia, and the Joint Laboratories for Language and Cognitive Neuroscience, Department of Linguistics, The University of Hong Kong, Hong Kong; Johannes C. Ziegler, Centre National de la Recherche Scientifique and Université de Provence, Marseille, France, and Macquarie Centre for Cognitive Science, Macquarie University, Sydney, New South Wales, Australia.

This research was supported by an Australian Research Council Special Centre Grant. We thank Max Coltheart, Debra Jared, and especially Marco Zorzi for very helpful comments.

Correspondence concerning this article should be addressed to Conrad Perry, Macquarie Centre for Cognitive Science, Macquarie University, Sydney, New South Wales, Australia. E-mail: [email protected]

One of the key differences between current models of reading aloud is the nonlexical mechanism that translates orthography into phonology. In the dual-route cascaded (DRC) model, that mechanism is serial (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; Ziegler, Perry, & Coltheart, 2000). Orthography is decomposed into individual letters or letter clusters, and those letters are assembled into phonology from left to right. In contrast, in current parallel models (e.g., Plaut, McClelland, Seidenberg, & Patterson, 1996; Zorzi, Houghton, & Butterworth, 1998), the entire orthographic string is used as input, and phonology is generated in parallel.

One critical effect that has been proposed to help decide whether phonology is assembled as a serial process or in parallel is the position-of-irregularity effect. Coltheart and Rastle (1994) suggested that if phonology is assembled in a left-to-right manner, words that have irregular spelling–sound correspondences early in their letter sequences (e.g., chef) should exhibit slower reading-aloud latencies than words with irregular correspondences late in their letter sequences. The logic behind this was that because irregular correspondences in early letter positions are assembled (incorrectly) earlier than those in later positions, they have more time to interfere with the correct pronunciation that is generated lexically. As a consequence, there should be (and was, in Coltheart & Rastle’s [1994] experiments) a position-of-irregularity effect. Recent empirical evidence and simulation studies provided further support for this serial-assembly hypothesis (Perry & Ziegler, 2002; Rastle & Coltheart, 1999).

Although the original finding of a position-of-irregularity effect was taken to be in favor of the serial-assembly hypothesis, Zorzi (2000) showed that his model (Zorzi et al., 1998) predicted the same qualitative pattern of results as that found by Rastle and Coltheart (1999), although his model used a parallel orthography-to-phonology route. Zorzi (2000) suggested that his model could predict this pattern because irregularity across position was confounded with positional grapheme–phoneme consistency. That is, the first-position irregular words from Rastle and Coltheart (1999) tended to have more inconsistent graphemes than did words with irregular correspondences in later positions. Thus, according to Zorzi (2000), the degree of grapheme–phoneme inconsistency varied in the same way as the position of irregularity, with first-position irregular words being more inconsistent than second-position irregular words, which were more inconsistent than third-position irregular words. Being sensitive to positional grapheme–phoneme inconsistency, his model was perfectly capable of simulating the position-of-irregularity effect. (However, see Rastle & Coltheart, 2000, for an alternative view of his statistical, but not simulation, analysis.)

Whereas serial and parallel models seem to have reached a stalemate with respect to the position-of-irregularity effect (at least using the items of Rastle & Coltheart, 1999), there is an additional effect that may be used to differentiate the models: the length effect, which is the relationship between word and nonword latencies and stimulus length. In English, when reading-aloud latencies are examined, a length effect is typically found that is the largest for nonwords, smaller for low-frequency words, and essentially nonexistent for high-frequency words (e.g., Weekes, 1997; see Coltheart et al., 2001, for a summary). Note that although early
studies suggested that there is an effect of word length in English word-naming latencies (e.g., Frederiksen & Kroll, 1976), recent evidence suggests that the effect is not significant after words are controlled for orthographic neighborhood characteristics (e.g., Weekes, 1997; Ziegler, Perry, Jacobs, & Braun, 2001). The serial explanation of the length effect, as implemented in the DRC model, is that the serial-assembled phonology mechanism interacts with lexical activation. When there is little or no lexical activation, such as when nonwords are read, the greatest length effect is found; because the phonology of the letter string must be assembled via the nonlexical route, and on the basis of serial processing assumptions, long nonwords take more iterations to assemble than do short nonwords. In contrast, when there is lexical activation, such as when words are read, the length effect is moderated by the phonology generated in parallel by the lexical route. Thus, a smaller length effect is found for words. Simulations of the length effect by the DRC model have been reported by Coltheart et al. (2001). Because the serial-assembled phonology mechanism is responsible for both the left-to-right irregularity effect and the length effect on nonwords, it might seem intuitively plausible to predict that the model would also show a length effect on regular words. However, Coltheart et al. (2001) showed that this was not necessarily so. (In fact, the parameter set of the model was deliberately chosen to capture the main empirical patterns in the literature, including the absence of length effects on words.) In particular, the dissociation between the length effects for nonwords and words was captured by setting the assembled phonology route so that it was sufficiently slow, such that the last phoneme of most words typically receives very little activation from the assembled phonology route before being named. As the last phoneme in regular words tends to be the last to rise, the amount of time taken for the model to name regular words is largely based on lexical activation, and hence, there is no length effect on words. In addition, early phonology correctly assembled in early positions has little effect on naming latencies, because although it increases the activation of early phonemes, all phonemes of a word must reach a certain level of activation before naming can occur. Alternatively, because nonwords do not benefit as much from lexical activation (because they have no lexical entries), naming latencies are largely determined by the speed of the assembled phonology route, and hence, a length effect on nonwords is still found. This setting also allows the position-of-irregularity effect to be dissociated from the word length effect. In this case, regularized phonology in early phonemic positions still has an effect on naming latencies, because it slows the correct phonology that is generated by the lexical route. The irregular words are then slower to name because of the slower initial irregular phonemes. Thus, the important point is that assembled phonology still has an effect on both regular and irregular words, in terms of the amount of activation generated (especially on early phonemes), but for regular words, it has little effect on the naming latencies of the model. For words, assembled phonology is so slow that it produces no activation or causes only a very small amount of activation to be generated on later phonemes. 
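To make the serial-assembly account of the length effect concrete, the following is a minimal toy sketch, not the DRC implementation itself: the cycles-per-letter figure, the activation rates, and the naming threshold are invented purely for illustration.

```python
# Toy illustration of the serial-assembly account of the length effect.
# All numbers (cycles per letter, activation rates, threshold) are invented
# for illustration; this is not the DRC model's actual parameter set.

def naming_cycles(letters: int, is_word: bool,
                  cycles_per_letter: int = 15,
                  lexical_rate: float = 0.02,
                  nonlexical_rate: float = 0.004,
                  threshold: float = 1.0) -> int:
    """Cycles until the final phoneme of the item reaches the naming threshold."""
    cycle = 0
    final_phoneme = 0.0  # activation of the last phoneme, the last one to rise
    while final_phoneme < threshold:
        cycle += 1
        if is_word:
            # the lexical route activates all phonemes in parallel from cycle 1
            final_phoneme += lexical_rate
        # the nonlexical route only reaches the final phoneme once the serial,
        # left-to-right assembly process has worked through every letter
        if cycle > letters * cycles_per_letter:
            final_phoneme += nonlexical_rate
    return cycle

for n in (3, 4, 5, 6):
    print(n, "letters:",
          "word", naming_cycles(n, is_word=True),
          "| nonword", naming_cycles(n, is_word=False))
```

With these toy settings, word latencies are governed almost entirely by the parallel lexical route and stay constant across lengths, whereas nonword latencies grow linearly with the number of letters, mirroring the dissociation described above.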
It is much less obvious how a parallel-assembly mechanism could produce a length effect. Two possibilities have been discussed in the literature. The first is that length effects are simply outside the scope of current models, because they might reflect peripheral aspects of reading, such as encoding of the visual display and generating articulatory output. As Seidenberg and Plaut (1998) noted: “The residual effects of length are a reminder that there are aspects of word recognition and pronunciation that are beyond the scope of implemented models” (p. 235). For example, one could imagine that length effects derive from processes involved in generating an articulatory output that is necessarily sequential. As Seidenberg, Petersen, MacDonald, and Plaut (1996) suggested, one could implement a process that converts an internal phonological code into an explicit articulatory code. This could be done in a recurrent network producing time-varying output over units representing articulatory features (e.g., Jordan, 1986). Because current models do not deal with articulatory motor output, they cannot be expected to simulate these effects.

The second possibility is that length effects are confounded with other variables. For example, shorter words, which include many function words, might be more frequent than longer words. To the extent that parallel models are sensitive to these other variables, they would be able to simulate at least a part of the length effect. Such an argument was developed by Zorzi (2000) with respect to the position-of-irregularity effect. He showed that the supposedly serial effect found by Rastle and Coltheart (1999) was confounded with grapheme–phoneme consistency and that his model was particularly sensitive to this variable. By the same token, if longer words are more likely to have inconsistent graphemes, parallel models similar to the one of Zorzi et al. (1998) might be capable of predicting the length effect.

We have recently conducted a cross-language experiment (Ziegler et al., 2001) that provided critical data with regard to the two possibilities described above. In Ziegler et al. (2001), we investigated length effects for words and nonwords in German and in English.1 There were two features in Ziegler et al. that were critical in testing the two explanations put forth by defenders of a parallel-assembly mechanism. The first feature was that we used very similar items in the two languages. This was made possible because both German and English are of Germanic origin and thus have a large number of words with similar orthography and phonology and identical meaning, so-called cognates (e.g., zoo in English vs. Zoo in German and ball in English vs. Ball in German). If Seidenberg and Plaut (1998) were correct in that length effects reflect peripheral aspects of reading, such as encoding of the visual display and generating articulatory output, then length effects should be similar in German and English, because both encoding of the visual display and generating articulatory output should be similar for cognates that are matched on length across languages. In other words, cognates such as Ball in German and ball in English not only require similar visual encoding but also require very similar articulatory output.

The second critical feature was that despite similar orthography and phonology, German is far more consistent than English in the mapping between orthography and phonology (Frith, Wimmer, & Landerl, 1998; Landerl, Wimmer, & Frith, 1997; Ziegler et al., 2000). For example, the words ball, park, and hand exist in both languages in identical form. The grapheme a receives the same pronunciation in all three words in German but receives a different pronunciation in each word in English. This is illustrated in Table 1, in which we provide our calculated consistency scores (H values) for German as well as those calculated by Treiman, Mullennix, Bijeljac-Babic, and Richmond-Welty (1995) for English. Smaller H values signify more consistent orthography–phonology correspondences at that level. As can be seen in Table 1, German is more consistent than English at all grain sizes. Thus, if Zorzi (2000) was correct in that length effects reflect a difference in spelling–sound consistency, then the length effect should be smaller in German than in English, because German is more consistent than English. That is, because German attractor space is likely to be smoother than English attractor space (i.e., it is more regular and has less inconsistent spelling–sound relationships), English words of different lengths would be more separated (i.e., further from a value that produces an output) than German words. This is because a learning model would be likely to more accurately generalize in a simpler input–output domain, and generalization performance is likely to become progressively more difficult with longer words. The effect of this is that German words will reach the articulation point faster, because the phonologies that are activated will tend to be closer to the articulation point from the beginning, including those phonologies generated from words of a longer length. Thus, length effects are expected to be smaller in German than in English.2

The results of Ziegler et al.’s (2001) experiment showed that neither explanation is correct. In fact, the length effect (in particular, on nonwords) was much larger for German than for English (see Figure 1). The interaction between the effects of length and language was highly significant by subjects and by items. Finding differences in the effects of length contradicts the first explanation, according to which length effects arise from peripheral factors, because the peripheral factors were very similar across languages.

1 In this study, Ziegler et al. (2001) used two sets of 80 words and 80 nonwords (one set in English, and one set in German) that were very similar across languages. In particular, 80% of the real words were cognates that were orthographically identical or very similar, 62.5% of the nonwords were identical in both languages, and the others were orthographically very similar. All items are listed in the Appendix. The words and nonwords were each separated into four length groups (three, four, five, and six letters long), with all of the word groups being of similar mean frequency. Across languages, the groups were balanced on a number of other psycholinguistic variables not related to orthographic length, including orthographic neighborhood (see Andrews, 1997, for a review), orthographic body neighborhood (Forster & Taft, 1994; Ziegler & Perry, 1998), word frequency (e.g., Weekes, 1997), and number of phonemes (Rastle & Coltheart, 1998). The words were read aloud by a group of Australian English speakers and a group of German speakers.

Table 1
Analysis of Spelling–Sound Consistency of German and English

                    H by type              H by token
Unit type        German   English       German   English
Onset              .06      .12           .03      .18
Vowel              .29      .73           .20      .90
Coda               .05      .26           .02      .35
Body               .07      .14           .03      .22
Antibody           .21      .30           .12      .51

Note. English values are those calculated by Treiman et al. (1995). H = Fitts and Posner's (1967) H.
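The H values in Table 1 are uncertainty statistics over the alternative pronunciations of a spelling unit. The following is a minimal sketch, assuming H is computed as the Shannon entropy of the pronunciation distribution for a unit (over word types, or over frequency-weighted tokens); the phoneme labels below are only illustrative.

```python
import math
from collections import Counter

def h_statistic(pronunciations):
    """Fitts and Posner's H (Shannon entropy, in bits) over the distribution of
    pronunciations observed for one spelling unit.  `pronunciations` has one
    entry per word (type count) or per token occurrence (frequency weighting)."""
    counts = Counter(pronunciations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical illustration with the grapheme <a> in ball, park, and hand:
# one pronunciation in German versus three different ones in English.
print(h_statistic(["a:", "a:", "a:"]))   # German-like: 0.0 bits (fully consistent)
print(h_statistic(["O:", "A:", "ae"]))   # English-like: about 1.58 bits
```

Larger entropy means greater uncertainty about how a unit is pronounced, which is why the English values in Table 1 exceed the German ones at every grain size.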

The finding that German produced stronger length effects than English also contradicts the consistency explanation, because this explanation predicts the opposite effect. In addition, a recent cross-language study with German and English children as participants replicated the finding that length effects in German seem to be larger than length effects in English (Goswami, Ziegler, Dalton, & Schneider, 2001). Thus, the empirical pattern can be regarded as encompassing more than the particular set of items used in Ziegler et al. (2001).

Of course, it may be that we have missed an important difference between German and English. Thus, rather than simply theorizing about the interaction between nonword length and language, we felt that it was necessary to test models that have been proposed. As Zorzi (2000) convincingly argued in the final paragraph of his article, “empirical studies that aim at adjudicating between competing models should delay strong inferences until the competitors are actually tested” (p. 855). This is why we implemented the Zorzi et al. (1998) model for German; we could compare simulations of the German and English versions of both the DRC model and the Zorzi model on the exact set of items used by Ziegler et al. (2001). The question is whether the models actually behave the way we think they do. In particular, would the Zorzi model predict smaller length effects for German than for English? In a similar manner, would the DRC model predict the present cross-language dissociation, and if so, why would this be the case?

Note that there is another model that we might consider for examining the length effect: that of Plaut (1999). In this model, a parallel input and a serial output mechanism is used, and the latencies of the model are measured by refixations that represent eye saccades in reading. This model also shows a length effect on reading nonwords. For a number of reasons, however, we find the Plaut model more difficult to evaluate than the Zorzi model: (a) The model measures speed in terms of refixations; yet in actual reading, only around 15% of words in normal text are fixated more than once (see, e.g., Rayner, 1998, for a review). Thus, it is not clear how measures from the model are related to the speed of reading aloud. (b) Unlike the Zorzi model, in which the input and output domains use only letters and phonemes, in Plaut’s model the representations are organized to some degree; this organization can differ depending on the choice of the modelers (e.g., Plaut et al., 1996, differs from Plaut, 1999). Thus, these degrees of freedom make it difficult to choose a representation for German that would be equivalent to that of an English model. Note, however, that because the model still has parallel orthographic input (only the output is serial), if such a model were implemented we would expect the results produced to be in the same direction as those of the Zorzi model, because the generation of German phonemes from parallel orthographic input should still be easier than the generation of English phonemes, as the spelling–sound domain is easier to learn. Thus, we would expect the model to refixate less on the longer words in German compared with the longer words in English.

Figure 1. Mean reaction times (in milliseconds) and model predictions (in cycles) for the Ziegler et al. (2001) data, the dual-route cascaded (DRC) model, the Zorzi model, and a version of the DRC model with fast nonlexical phonological assembly (the German DRC model with changed parameters; DRC 2). Top row: English; Bottom row: German. Squares represent nonwords; diamonds represent words.

2 The possibility that German is easier to learn than English is supported not only by the observation that the input–output domain is more regular but also by experimental data. In particular, in Landerl et al.’s (1997) study, it was found that German children of a similar age as English children were more accurate and faster at reading aloud words. In addition, the German children also appeared to generalize better, in that they read nonwords aloud more quickly and more accurately than their English counterparts.

Model Setup

The two models used for the serial-assembled phonology comparisons were the English DRC (Coltheart et al., 2001) and the German DRC (Ziegler et al., 2000) models. Note that in the English DRC model, we used nine sets of letter and phoneme units rather than eight, as published in Coltheart et al. (2001), so that it was identical to the German DRC model. In addition, the letter–orthography excitation was changed such that the same amount of activation would occur at the orthographic level. These changes make essentially no difference to the pattern of results produced by the model, quantitatively or qualitatively. That is, on the entire CELEX database (Baayen, Piepenbrock, & van Rijn, 1993), the number of cycles the model takes to name a word is very close to the performance of the original model (r = .99, p < .001).

The two models used for the parallel comparisons were based on Zorzi et al. (1998). The English model was identical to that of Zorzi et al. except that it was trained on the CELEX database used for the English DRC model. Words that could not be coded in terms of the Zorzi et al. scheme because they had onsets or rimes that were too long were simply removed from the database. (A total of 150 words were removed from the English database; 84 were removed from the German database.) The German model, which did not previously exist, was identical to the English Zorzi model, except that it was trained on the German CELEX database.

Note that we used the Zorzi et al. (1998) model rather than the model of Plaut et al. (1996), because in Coltheart et al.’s (2001) analysis, the Plaut et al. model produced a length effect in English that did not appear to resemble that found with people at all. That is, the nonword latencies did not increase in a linear fashion but rather produced an interaction that was caused by four- and six-letter nonwords being read aloud more slowly than three- and five-letter nonwords. Alternatively, the Zorzi model did produce a
Length ⫻ Lexicality interaction, in which nonwords showed a much larger length effect than words, that was almost significant.3 Furthermore, because we have implemented the lexical aspects of the Zorzi model, which are essentially the same as the aspects in the DRC model without feedback, a comparison in which the major differences are between the nonlexical aspects of the model rather than the lexical ones can be made. This is what we wanted to test. A complication with such testing is that the Zorzi model can produce slightly different results depending on how long it is trained. We therefore trained the English Zorzi model for two cycles, at which time it produced the Length ⫻ Lexicality interaction and the main effects of interest (as described below). Because the size of the German database is only about one sixth of the size of the English database, we decided to train the German Zorzi model for 11 cycles instead of 2 cycles, thus making the numbers of training exemplars to which both models were exposed very similar. That is, this procedure guaranteed that both models were exposed to approximately the same number of orthography–phonology patterns during learning. However, to ensure that potential differences between the German and English Zorzi models would not be a direct consequence of the number of training cycles, we performed an additional simulation in which the German Zorzi model was trained for two cycles. The results were very similar to those described below, so we do not report them further; however, the item statistics for both simulations appear in the Appendix. In both the German and English models, the amount of error had largely asymptoted by the end of training, and as can be seen (in the Appendix) by the number of errors given by the models, generalization performance was also reasonable. It is impressive that the German Zorzi model, with exactly the same representation as the English Zorzi model, was able to generalize and learn the relationships between orthography and phonology in another language. Furthermore, it is also impressive that the number of cycles for which the German Zorzi model was trained made very little difference in its performance. The initial set of parameters used for the models was the same as those published in Ziegler et al. (2000) and Zorzi et al. (1998), except that the lateral inhibition parameter in the Zorzi models was set to 0.1 rather than 0.9, as that value produced slightly better results. That is, the models produced answers on two nonwords that timed out with the higher lateral inhibition value. The procedure used to test the Zorzi models was that published in Zorzi et al. (1998). The procedure used to test the English and German DRC models was to simply allow the models to run until they produced a pronunciation.
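As a rough sketch of the kind of two-layer, delta-rule network described by Zorzi et al. (1998), and of how training exposure can be equated across corpora of different sizes, one might set things up as below. The slot counts, learning rate, unit indices, and corpus names are assumptions made for illustration, not the published implementation.

```python
import random

# Sketch of a Zorzi-style two-layer associative network: localist letter units
# in position slots mapped directly onto phoneme units in position slots and
# trained with the delta rule.  Slot counts and learning rate are illustrative
# assumptions, not the published parameters.

N_LETTER_UNITS = 26 * 8      # assumed: 8 letter slots
N_PHONEME_UNITS = 45 * 6     # assumed: 6 phoneme slots
LEARNING_RATE = 0.05

weights = [[0.0] * N_PHONEME_UNITS for _ in range(N_LETTER_UNITS)]

def train_pattern(input_units, target_units):
    """One delta-rule update for a single orthography -> phonology pattern."""
    output = [sum(weights[i][j] for i in input_units) for j in range(N_PHONEME_UNITS)]
    for j in range(N_PHONEME_UNITS):
        error = (1.0 if j in target_units else 0.0) - output[j]
        for i in input_units:
            weights[i][j] += LEARNING_RATE * error

def train(corpus, epochs):
    """Sweep through the corpus a fixed number of times (training 'cycles')."""
    for _ in range(epochs):
        random.shuffle(corpus)
        for input_units, target_units in corpus:
            train_pattern(input_units, target_units)

# Tiny hypothetical corpus: pairs of (active letter units, active phoneme units).
toy_corpus = [([0, 30], {5, 60}), ([1, 31], {6, 61})]
train(toy_corpus, epochs=25)

inp, _ = toy_corpus[0]
active = [j for j in range(N_PHONEME_UNITS)
          if sum(weights[i][j] for i in inp) > 0.5]
print("active phoneme units for one trained pattern:", active)

# Equating exposure across languages: because the German corpus is roughly one
# sixth the size of the English one, it would be swept through more often
# (cf. 2 vs. 11 training cycles in the text), e.g.:
# train(english_corpus, epochs=2)
# train(german_corpus, epochs=11)
```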

Results

Nonword pronunciations produced by the models were considered incorrect if they were either lexicalizations or phonological forms not given by any of the participants in Ziegler et al. (2001). This applied to five nonwords in the English DRC model, seven nonwords in the German DRC model, seven nonwords in the English Zorzi model, and three nonwords in the German Zorzi model. Because there were so few errors, no statistical analysis was performed on them. In addition, as in the empirical study, a three-standard-deviations cutoff was applied to the results of each model. This resulted in the removal of one word in the English DRC model, three words and one nonword in the German DRC model, one word and one nonword in the English Zorzi model, and one word and two nonwords in the German Zorzi model. Two additional words (schlaf and schwer) were removed from the German Zorzi model, because with the representation used, the model did not learn four-consonant onsets. The simulation results together with the human data appear in Figure 1.

As can be seen, the DRC models showed all of the main effects that were present in the human data from Ziegler et al. (2001). In contrast, the English Zorzi model showed all of the main effects, but the German Zorzi model showed only the lexicality effect. The results of the human data analysis and the models appear in Table 2, and individual items are presented in the Appendix.4 In terms of overall correspondence with the data, the DRC models correlated more strongly with the results than did the Zorzi models. For words, the DRC models correlated significantly with both the English latencies, r = .23, p < .05, N = 78, and the German latencies, r = .44, p < .001, N = 77. The Zorzi models did not correlate significantly with either data set: English, r = .11, p = ns, N = 79; German, r = .08, p = ns, N = 77. A similar pattern held for the nonwords, although with the DRC models, the correlation was stronger: English DRC model, r = .48, p < .001, N = 76; German DRC model, r = .47, p < .001, N = 72; English Zorzi model, r = .19, p = .12, N = 72; German Zorzi model, r = −.02, p = ns, N = 75.

Overall, the results from the models provided some interesting insights into the original hypothesis. First, with the DRC models, the absolute size of the length effect between the three- and six-letter nonwords was much larger in the German DRC model than in the English DRC model (77 cycles vs. 49 cycles, respectively). (We further discuss this matter below.) In contrast, as predicted, the Zorzi models produced a larger length effect in English than in German (0.89 cycles vs. 0.28 cycles). Thus, our initial hypothesis, namely that input domains that are easier to learn should cause a smaller length effect in the Zorzi models, appears correct. We take this as evidence that the length effect found in reading aloud in German is caused by more than just consistency differences between orthography and phonology.
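The item-level comparison described above amounts to applying a three-standard-deviations cutoff to each data set and then correlating the surviving model latencies with the human latencies. A minimal sketch follows; the two arrays are hypothetical placeholders, not the actual item values.

```python
import numpy as np
from scipy.stats import pearsonr

def trim_3sd(values):
    """Boolean mask keeping items within three standard deviations of the mean."""
    values = np.asarray(values, dtype=float)
    mean, sd = values.mean(), values.std(ddof=1)
    return np.abs(values - mean) <= 3 * sd

# Hypothetical item-aligned arrays, e.g. DRC cycles and human naming RTs (ms).
model_cycles = np.array([74.0, 76.0, 72.0, 120.0, 75.0, 73.0])
human_rts    = np.array([516., 530., 505., 640., 528., 519.])

keep = trim_3sd(model_cycles) & trim_3sd(human_rts)
r, p = pearsonr(model_cycles[keep], human_rts[keep])
print(f"r = {r:.2f}, p = {p:.3f}, N = {keep.sum()}")
```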

Table 2
Reaction Time Analysis in English and German for the Ziegler et al. (2001) Data, the Dual-Route Cascaded (DRC) Model, and the Zorzi Model

English
Comparison            Ziegler et al.                  DRC                              Zorzi
Length                F(3, 144) = 6.44, p < .001      F(3, 143) = 94.43, p < .001      F(3, 135) = 4.21, p < .01
Lexicality            F(1, 144) = 156.75, p < .001    F(1, 143) = 3,313.29, p < .001   F(1, 135) = 164.80, p < .001
Length × Lexicality   F(3, 144) = 3.13, p < .05       F(3, 143) = 69.08, p < .001      F(3, 135) = 4.41, p < .01

German
Comparison            Ziegler et al.                  DRC                              Zorzi                          DRC 2
Length                F(3, 143) = 26.00, p < .001     F(3, 133) = 87.87, p < .001      F(3, 136) < 1, ns              F(3, 143) = 101.60, p < .001
Lexicality            F(1, 143) = 77.51, p < .001     F(1, 133) = 1,868.06, p < .001   F(1, 136) = 167.84, p < .001   F(1, 143) = 1,013.93, p < .001
Length × Lexicality   F(3, 143) = 2.78, p < .05       F(3, 133) = 61.30, p < .001      F(3, 136) = 1.64, ns           F(3, 143) = 23.91, p < .001

Note. DRC 2 = German DRC model with fast phonological assembly (changed parameters).

3 There is a discrepancy between Coltheart et al.’s (2001) reporting of the results of Zorzi et al.’s (1998) model and Zorzi’s (2000) reporting. Coltheart et al. reported a nonsignificant interaction, whereas Zorzi (2000) reported that the interaction was significant. This difference apparently comes from a single word, which Coltheart et al. did not exclude when examining the interaction. Zorzi et al. (1998) removed this word as an outlier using a standard deviation criterion that calculates outliers on the basis of values within each length and word group.

4 Note that Ziegler et al. (2001) manipulated a further variable, body neighborhood. Because that variable is of no relevance to the present study, we collapsed across it.

Why Does the German DRC Model Produce a Larger Length Effect?

The differences that caused the German DRC model to produce a larger nonword length effect than the English DRC model can be found in how the serial application of grapheme–phoneme rules influences nonword latencies and in the statistical distribution of these rules in German and in English. In an analysis of the differences between the German and the English rule systems, Ziegler et al. (2000) found that there are major distinctions between the nonlexical computation of phonology in German and that in English (that is, the spelling–sound rules). In particular, most of the multiletter rules in German are two-letter rules, such as ie → /i/. It is sufficient in the German rule system to use mainly one- and two-letter rules because it has a few very general, context-sensitive rules, so-called super rules (Ziegler et al., 2000). These super rules deal quite efficiently with inconsistencies at the small grain-size level, which are mainly related to unpredictable vowel length. For example, one of these super rules deals with the pattern that vowels followed by two consonants are typically short, whereas vowels followed by a single consonant are typically long. Thus, the system is able to assign the correct vowel pronunciation by taking into account the consonants toward the end of the word (i.e., those right of the vowel). Note that the use of these super rules is very common, because they are used for most words with a single-letter vowel spelling, which are prevalent in German.

These super rules have a major impact on the length effect. An example of this is what occurs when the English and the German nonlexical routes of the DRC models assemble a nonword such as gack. This can best be demonstrated through simulation, which we have presented in Figure 2. The lexical routes have been deactivated such that only differences due to nonlexical processing are evident. As can be seen, in the German case, the long (incorrect) vowel is initially assembled. It is not until a further 28 cycles that the two consonants to the right of the vowel are assembled. This causes the short vowel to be assembled rather than the long vowel. At this point, the activation of the long vowel phoneme has had a large amount of time to rise. This means that there is a great deal of spurious activation in the phoneme position when the final phoneme is assembled. This activation has the effect of slowing the rise of the short vowel phoneme through lateral inhibition and hence contributes to a slower nonword response. Such an interference effect has previously been called the “whammy effect” (Rastle & Coltheart, 1998). Indeed, the super rules have the effect of producing quite late whammies, in which the vowel pronunciation cannot be assigned correctly until the final consonants have been processed. This can be compared with the English nonword gack. On that nonword, the phoneme for the letter -a is assembled
as a short vowel from the beginning. Therefore, not only does the short vowel phoneme rise earlier, but also there is no spurious activation to interfere with its rise. Because the whammy situation related to vowel length is quite common in German, and because that situation differs for words of different lengths, it is one of the reasons that the German DRC model is able to correctly predict stronger length effects for German than is the English DRC model for English. The actual percentage of times this type of nonword occurred in each length group in Ziegler et al. (2001) appears in Table 3.

Figure 2. Assembly of the vowel phoneme in the nonword gack in German and in English. Phonemes are those used in the CELEX database.

Note that this property of having the long–short vowel distinction marked by coda consonants and its relationship with word length are not specific to the stimulus set that was used in Ziegler et al. (2001). Rather, the proportion of words that used this short–long vowel distinction increased with length over the entire monosyllabic orthography. This can be seen from an example. First, if all words begin with at least one consonant (C) letter, then three-letter words, which must have at least one letter for the vowel (V), can only have a single coda consonant. Thus, only two orthographic permutations can exist, CVV and CVC (i.e., no three-letter words that start with a consonant can have a vowel with two following consonants). Now, with four-letter words that start with at least one consonant, some of these words can have a double end consonant pattern, CVCC. However, any words that have a complex onset cannot have this pattern, because the vowel must be in at least position three (CCVC), thus not allowing two final consonants. A similar pattern holds with five-letter words, which can have a three-consonant onset (e.g., schaf), thus biasing the possible distribution of words that can have two coda consonants. It is only six-letter words for which the biasing from onsets does not remain. Thus, given a random set of three to six letters taken from the German orthography, one would expect that the number of times two-consonant codas are used would increase with word length. The actual percentage of words that use the long–short vowel distinction, as a function of word length (based
on all monosyllabic words in the CELEX database), appears in Table 3. We believe that this is a major reason why the German DRC model produces a larger length effect than does the English DRC model.

Note that it is possible to test this hypothesis with the German DRC model quantitatively. To do this, we altered the VC rules in the model (all of which can be found in Ziegler et al., 2000), such that they assembled a short vowel phoneme instead of a long vowel phoneme for the VCC items. Consequently, the model did not initially incorrectly assemble the vowel on VCC items (e.g., gack). The results from this showed that the nonword length effect was reduced to 50 cycles (117, 137, 148, and 167 cycles for three-, four-, five-, and six-letter groups, respectively). That difference was not significant when compared with the English DRC model (F < 1). The three-way interaction of model, lexicality, and length was significant when compared with the original German DRC model, however: F(3, 140) = 9.43, p < .001.

The impact of initially assembling the incorrect -VC sequence in -VCC words can also be examined by comparing that type of word with all others. In particular, if the naming latencies of the nonmodified German DRC model (i.e., mean number of cycles) are examined within each length group as a function of whether the items are named with a VCC rule, it is evident that the model is slower to name the items that use a VCC rule than those that do not (four-letter words: 158 vs. 136 cycles; five-letter words: 177 vs. 151 cycles; six-letter words: 208 vs. 169 cycles). Note that because of the nonlinear dynamics of the German DRC model, this effect is not constant across lengths; the difference increases with stimulus length (22, 26, and 39 cycles). Thus, although -VCC rules were used about equally often in each of these length groups, the model nevertheless produced increasing effect sizes with increasing stimulus length. The model does so because as stimulus length increases, more cycles elapse before the final phonology is assembled. During this period, competing activation (such as incorrect lexical activation, which also increases over time) can more strongly affect the course of phonological assembly. Thus, the longer the word, the longer it takes for the correct phonological form to inhibit the increased amount of competing activation that has been produced.
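A schematic sketch of the serial assembly of the nonword gack under a German-style context-sensitive vowel-length rule, as described above, is given below. The rule inventory, the treatment of ck as two separate consonants, and the cycle counts are simplifications for illustration, not the published German rule set or timing, and the sketch does not model the lateral inhibition that produces the actual slowing.

```python
# Schematic left-to-right assembly of "gack" with a German-style super rule:
# the vowel is first given its default long pronunciation and is only revised
# to the short vowel once a second coda consonant has been processed.

def assemble_german(nonword, cycles_per_letter=17):
    phonemes = []
    events = []
    for i, letter in enumerate(nonword):
        cycle = (i + 1) * cycles_per_letter
        if letter in "aeiou":
            phonemes.append(letter + ":")          # default: long vowel
            events.append((cycle, f"assemble long vowel /{letter}:/"))
        else:
            phonemes.append(letter)                # each letter treated as its own grapheme
            events.append((cycle, f"assemble consonant /{letter}/"))
        # super rule: a vowel followed by two consonants is revised to short
        if (len(phonemes) >= 3 and phonemes[-3].endswith(":")
                and not phonemes[-2].endswith(":")
                and not phonemes[-1].endswith(":")):
            phonemes[-3] = phonemes[-3][:-1]       # late revision ("late whammy")
            events.append((cycle, f"revise vowel to short /{phonemes[-3]}/"))
    return phonemes, events

phonemes, events = assemble_german("gack")
for cycle, what in events:
    print(f"cycle {cycle:3d}: {what}")
print("final assembly:", phonemes)
```

In this sketch the long vowel is available from the second letter but the revision to the short vowel only happens when the final coda consonant is reached, so the incorrect long vowel has many cycles in which to build up spurious activation. In an English-style rule set the vowel of gack would be assembled as short at its own position, so there is no late revision and nothing for the correct phoneme to overcome.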

Length Effects on Words

One property that the German DRC model did not simulate was the length effect that was found with German words. This is not surprising, because the German DRC model operated with the same parameter set as the English DRC model, which was tuned to predict no length effects on words (as was observed in most recent English studies; e.g., Weekes, 1997). However, it is conceivable, as suggested by both behavioral and brain imaging studies (e.g., Goswami et al., 2001; Paulesu et al., 2000), that nonlexical processing is stronger in languages with shallow orthographies than it is in English. Because this may be true, we wondered whether changing the balance between lexical and nonlexical processing would allow the German DRC model to capture a length effect for words in addition to the greater length effect for nonwords. To do this, we could have either increased the effect of assembled phonology or decreased the strength of the lexical route. Because both modifications would lead to a similar pattern, here we decreased the letter-to-orthography parameter of the model to 0.03 and increased the strength of the grapheme–phoneme conversion route to 0.07.

The results of the parameter change in the model (labeled DRC 2) can be seen in Figure 1 and in Table 2. Individual item statistics can be found in the Appendix. There are two important things to note about the pattern of results. First, there was a length effect on words (a 21-cycle difference between three- and six-letter words). Second, the length effect on nonwords was reduced but was still larger than that of the English DRC model, with the DRC 2 model producing a 64-cycle difference between three- and six-letter nonwords. That difference was still large enough to cause a Language × Length interaction, F(3, 67) = 11.77, p < .001, although the three-way Language × Length × Lexicality interaction failed to reach significance, F(3, 67) = 1.45, p = ns. The length effect on the nonwords was diminished because the greater activation of the nonlexical route allowed phonemes generated nonlexically to rise faster. Similar individual item correlations between the real data and the model also existed (words: r = .43, p < .001; nonwords: r = .52, p < .001), and as can be seen from Table 2, a similar pattern of analysis of variance results also occurred. It therefore appears that speeding up the nonlexical route allows the German DRC model to capture the pattern of results in which some of the length effect found in naming words and nonwords is on words (unlike that in English).

Table 3
Percentage of Appearances of Vowels with Two or More Following Consonants in German as a Function of Word Length

                              Word length (letters)
Word set                    3        4        5        6
Ziegler et al. (2001)       0       65       60       70
CELEX database              2.25    57.30    71.54    71.43

Conclusion

In conclusion, the results of our analysis show that the DRC model, with a serial-assembled phonology mechanism, correctly predicts the cross-linguistic difference in reading in which German readers show a larger nonword length effect than English readers. This was without any modification to the parameters that have been previously used in the English DRC model. In contrast, the Zorzi model (Zorzi et al., 1998) fails to account for this difference. The German Zorzi model predicted no length effects in German. We note that this is not simply because of practical limitations with the Zorzi model. Rather, models based on statistical learning are likely to predict that length effects are reduced in languages with more regular spelling-to-sound correspondences. However, this is the opposite of the empirical pattern. In contrast, the capacity of the DRC model to correctly account for the empirical pattern is evidence supporting a serial-assembled phonology account of the length effects found in reading aloud. Finally, we note that with a small parameter change, the German DRC model can also capture that some of the length effect in German resides on the word responses.

References Andrews, S. (1997). The effect of orthographic similarity on lexical retrieval: Resolving neighborhood conflicts. Psychonomic Bulletin & Review, 4, 439 – 461. Baayen, R. H., Piepenbrock, R., & van Rijn, H. (1993). The CELEX lexical database [CD-ROM]. Philadelphia, PA: Linguistic Data Consortium, University of Pennsylvania. Coltheart, M., & Rastle, K. (1994). Serial processing in reading aloud: Evidence for dual-route models of reading. Journal of Experimental Psychology: Human Perception and Performance, 20, 1197–1211. Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. C. (2001). DRC: A dual-route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204 –256. Fitts, P. M., & Posner, M. I. (1967). Human performance. Belmont, CA: Brooks/Cole. Forster, K. I., & Taft, M. (1994). Bodies, antibodies, and neighborhood density effects in masked form priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 844 – 863. Frederiksen, J. R., & Kroll, J. F. (1976). Spelling and sound: Approaches to the internal lexicon. Journal of Experimental Psychology: Human Perception and Performance, 5, 674 – 691. Frith, U., Wimmer, H., & Landerl, K. (1998). Differences in phonological recoding in German- and English-speaking children. Scientific Studies of Reading, 2, 31–54. Goswami, U., Ziegler, J. C., Dalton, L., & Schneider, W. (2001). Pseudohomophone effects and phonological recoding procedures in reading development in English and German. Journal of Memory and Language, 45, 648 – 664. Jordan, M. I. (1986). Attractor dynamics and parallelism in a connectionist sequential machine. Proceedings of the eighth annual meeting of the Cognitive Science Society (pp. 531–536). Hillsdale, NJ: Erlbaum. Landerl, K., Wimmer, H., & Frith, U. (1997). The impact of orthographic consistency on dyslexia: A German–English comparison. Cognition, 63, 315–334. Paulesu, E., McCrory, E., Fazio, F., Menoncello, L., Brunswick, N., Cappa,

S. F., et al. (2000). A cultural effect on brain function. Nature Neuroscience, 3, 91–96. Perry, C., & Ziegler, J. C. (2002). On the nature of phonological assembly: Evidence from backward masking. Language and Cognitive Processes, 17, 31–59. Plaut, D. C. (1999). A connectionist approach to word reading and acquired dyslexia: Extension to sequential processing. Cognitive Science, 23, 543–568. Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson, K. E. (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review, 103, 56 –115. Rastle, K., & Coltheart, M. (1998). Whammy and double whammy: Length effects in nonword naming. Psychonomic Bulletin & Review, 5, 277– 282. Rastle, K., & Coltheart, M. (1999). Serial and strategic effects in reading aloud. Journal of Experimental Psychology: Human Perception and Performance, 25, 482–503. Rastle, K., & Coltheart, M. (2000). Serial processing in reading aloud: Reply to Zorzi (2000). Journal of Experimental Psychology: Human Perception and Performance, 26, 1232–1235. Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372– 422. Seidenberg, M. S., Petersen, A., MacDonald, M. C., & Plaut, D. C. (1996). Pseudohomophone effects and models of word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 48 – 62. Seidenberg, M. S., & Plaut, D. C. (1998). Evaluating word-reading models at the item level: Matching the grain of theory and data. Psychological Science, 9, 234 –237. Treiman, R., Mullennix, J., Bijeljac-Babic, R., & Richmond-Welty, E. D. (1995). The special role of rimes in the description, use, and acquisition of English orthography. Journal of Experimental Psychology: General, 124, 107–136. Weekes, B. S. (1997). Differential effects of number of letters on words and nonword naming latency. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 50(A), 439 – 456. Ziegler, J. C., & Perry, C. (1998). No more problems in Coltheart’s neighborhood: Resolving neighborhood conflicts in the lexical decision task. Cognition, 68, B53–B62. Ziegler, J. C., Perry, C., & Coltheart, M. (2000). The DRC model of visual word recognition and reading aloud: An extension to German. European Journal of Cognitive Psychology, 12, 413– 430. Ziegler, J. C., Perry, C., Jacobs, A. M., & Braun, M. (2001). Identical words are read differently in different languages. Psychological Science, 12, 379 –384. Zorzi, M. (2000). Serial processing in reading aloud: No challenge for a parallel model: Journal of Experimental Psychology: Human Perception and Performance, 26, 847– 856. Zorzi, M., Houghton, G., & Butterworth, B. (1998). Two routes or one in reading aloud? A connectionist dual-process model. Journal of Experimental Psychology: Human Perception and Performance, 24, 1131– 1161.


Appendix

Individual Items and Item Statistics

Table A1
Individual Reaction Time Latencies (RTs; in Milliseconds) and Individual Zorzi Model and Dual-Route Cascaded Model Processing Times (Cycles) for the English and German Words and Nonwords Used

Columns: Items (English, German); Human data (RTs; ms): English, German; Zorzi model (cycles): English, German, Two cycle; DRC model (cycles): English, German, DRC 2.

English words: tea, toe, bus, per, zoo, sea, arm, gas, box, act, loud, plus, form, text, hair, boat, film, verb, flea, golf, storm, point, cloth, cross, sport, chair, stiff, beast, scarf, front, length, taught, weight, prince, change, course, phrase, fierce, freeze, strict, hat, hay, ice, fan, red, pro, day, bar, cow, raw, post, sand, rice, bank, blue, nest, four, meal, wine, beer, night, stone, start, steel, sight, dream, stick, block, steep, pound, flight, strong, school, bright, ground, stream, spring, please, bought, thrill.

German words: tee, zeh, bus, per, zoo, see, arm, gas, box, akt, laut, plus, form, text, haar, boot, film, verb, floh, golf, sturm, punkt, kleid, kreuz, sport, stuhl, steif, biest, schal, front, leicht, tausch, wunsch, prompt, schlaf, storch, frisch, feucht, frosch, strikt, hut, heu, eis, fan, rot, pro, tag, bar, kuh, roh, post, sand, reis, bank, blau, nest, vier, mahl, wein, bier, nacht, stein, start, stahl, sicht, traum, stock, block, steil, pfund, frucht, strand, schein, brauch, gleich, strich, schwer, pracht, bleich, tracht.

English nonwords: ler, rea, foo, sil, tis, nup, tof, nal, heg, fas, naul, saun, trup, sibe, boam, goft, noof, furk, parn, perd, glird, broal, ploar, spond, meast, spilk, prish, glief, droan, sterk, strond, freast, blorce, gladge, baint, plarch, flurst, stince, straul, bralse, fot, lat, dee, lan, bry, moy, sar, sut, gat, nop, gack, nist, tord, tain, pamp, lunk, nuck, plar, bick, tump, steck, plock, grost, drace, bruck, proom, drail, gruck, brist, crost, sprand, stroom, dright, sprail, flitch, gratch, plench, scrast, sprank, fratch.

German nonwords: lir, rie, foo, sil, tis, nuf, tof, nal, heg, fas, naul, saun, trub, seib, bohm, goft, noof, furg, parn, perd, glirt, brohl, plohr, spaut, miest, silch, pisch, glief, drohn, stork, strond, friest, blorst, klackt, breist, parsch, flurst, strinz, straul, balsch, fot, lat, dee, lan, pei, meu, sar, sut, gat, nan, gack, nist, tord, torn, pund, lank, nuck, plar, bick, tind, steck, plock, grost, dratz, bruck, praum, dreil, gruck, brist, krost, sprand, straum, drecht, spreil, flisch, grecht, pleich, schast, sprank, fatsch.

Note. Two cycle = Zorzi model trained for two cycles; DRC = dual-route cascaded model; DRC 2 = version with fast phonological assembly (changed parameters).


Table A2
Mean Reaction Time Latencies (RTs; in Milliseconds) and Mean Zorzi Model Processing Times (Cycles) for the English and German Words and Nonwords Used, as a Function of Word Length

                  Human data (RTs; ms)      Zorzi model (cycles)             DRC model (cycles)
No. of letters    English    German      English   German   Two cycle     English    German     DRC 2
Words
  Three             516        519         2.75      2.94      3.20         68.90      72.24      81.45
  Four              516        513         3.10      2.90      3.00         73.32      77.85      90.65
  Five              521        570         3.10      2.70      3.00         73.45      78.65     100.75
  Six               530        569         2.94      3.17      3.47         73.60      79.20     102.60
Nonwords
  Three             572        565         4.11      4.35      4.85        114.82     117.63     117.00
  Four              603        576         4.37      4.56      4.64        129.90     151.55     147.20
  Five              624        631         4.89      4.84      5.44        144.05     166.60     162.75
  Six               644        676         5.21      4.67      5.06        166.83     194.89     181.68

Note. Two cycle = Zorzi model trained for two cycles; DRC = dual-route cascaded model; DRC 2 = version with fast phonological assembly (changed parameters).

Received February 28, 2001
Revision received January 25, 2002
Accepted January 29, 2002
