The Investigation of Vocabulary Input in High School EFL Textbooks and University Admission Tests

Navinda Sujinpram1*, Asst. Prof. Dr. Natawan Senchantichai2#, Dr. Kornwipa Poonpon3#
English Program, Khon Kaen University, Khon Kaen
*navindy@hotmail.com, #nansen42@gmail.com, #korpul@kku.ac.th

Abstract

The relationship between vocabulary and language ability has gained theoretical and empirical acceptance. This is especially true for reading comprehension, for which a strong relationship has been found. As such, there is a possibility that knowledge of vocabulary can predict the ability to use the language, in particular to gain adequate comprehension in reading any material. This study set out to investigate vocabulary input in high school EFL textbooks and English tests for university admission, and to find out whether there is any difference between them. Three research questions were addressed: 1) What is the vocabulary input in high school EFL textbooks? 2) What is the vocabulary input in the English tests for university admission? and 3) Is there any difference in vocabulary input between them? To answer these questions, a corpus-based investigation was conducted. The British National Corpus (BNC) was selected as a representative corpus and run on the VocabProfile BNC-20, an online word profiling program. Two textbook series and three high-stakes English tests for university admission were compiled as new corpora. The criterion for the analysis was set at 95 percent text coverage. The results showed that one textbook series contained 3,000 words inclusive of proper nouns, and the other contained 4,000 words inclusive of proper nouns, as resources for reading any texts. One of the three English tests needed 3,000 words inclusive of proper nouns for gaining adequate comprehension, and the other two tests needed 4,000 words inclusive of proper nouns to do so.
It was apparent that a textbook series with 3,000 words failed to provide adequate vocabulary input for gaining adequate comprehension in two of the three English tests. Some additional materials to increase vocabulary knowledge are therefore potentially useful to bridge the gap between them.

Keywords: O-NET, GAT, KKU, BNC, VocabProfile BNC-20

Introduction

With the approach of the Association of Southeast Asian Nations (ASEAN) integration, the role of the English language in successful regional communication is increasing. However, the average Thai student seems far from being a proficient user of the language20. The National Institute of Educational Testing Service [NIETS]12-14 reveals that the mean English test scores of Thai students on a standards-based achievement test were rather low. So were the scores from a general aptitude test which aims to assess students' ability to communicate in English. This is especially true for Grade 12 students, who have studied the language for at least 10 years before entering university-level education. Supposing that the tests are well designed for students at a particular educational level, students' low scores on those tests suggest a limited ability to use the language. Although students' low scores are affected by a number of factors, knowledge of vocabulary is potentially a critical one since its relationship to language ability is well established. Many researchers3,8,10,19 have claimed that a learner with extensive vocabulary knowledge tends to be better at performing the four language skills of listening, speaking, reading and writing, while one with limited vocabulary knowledge tends to encounter language difficulties.
Previous studies3,4,6,7,12 used corpus-based investigation to measure the number of words needed to read a variety of materials, and a certain percentage of text coverage (the total number of running words from each list multiplied by 100 and divided by the total number of running words in each corpus18 p.54) was set as a criterion for adequate comprehension to take place. Laufer8 and Laufer and Ravenhorst-Kalovski9 claimed that 95 percent text coverage is minimally needed for gaining adequate comprehension in reading any material. Nation10 also noted that a reader with 95 percent text coverage is likely to succeed in guessing the meaning of unknown words from context. However, the required text coverage rises to 98 percent if a reader reads for pleasure11,12. One example of a corpus-based study was conducted on the English test of Indonesia's National Examination, using the British National Corpus (BNC) as a representative corpus. Adequate comprehension was set at 95 percent text coverage, and the number of words needed to read the test was estimated accordingly. In another example, Chujo and Oghigian4 used the British National Corpus High Frequency Word List (BNC HFWL), the Standard Vocabulary List 12000 (SVL) and Nation's 14K to measure the number of words needed to gain comprehension in the Test of English for International Communication (TOEIC), the Test of English as a Foreign Language (TOEFL) and English tests for Japanese learners of English. In that study, the criterion for adequate comprehension was also set at 95 percent text coverage. Hsu7 used Nation's 14K to measure the number of words needed to read business English textbooks and articles. The criteria were set at 95 percent text coverage for the minimal level of adequate comprehension and 98 percent for the optimal level.
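The text coverage measure defined above can be expressed as a simple computation. The sketch below is illustrative only; the token counts are hypothetical and not drawn from any corpus in this study:

```python
# Text coverage of one frequency list: the running words (tokens) that
# belong to that list, multiplied by 100 and divided by the total
# running words in the corpus.
def text_coverage(list_tokens, total_tokens):
    """Percentage of corpus tokens covered by a single frequency list."""
    return list_tokens * 100 / total_tokens

# Hypothetical example: a 50,000-token corpus in which the first
# 1000-word list accounts for 41,000 running words.
print(text_coverage(41_000, 50_000))  # 82.0
```

A reader covering 82 percent of the running words in such a corpus would still fall well short of the 95 percent criterion for adequate comprehension.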
In Chujo's study3, the corpus-based approach was used to estimate the number of words provided by EFL textbooks and the number of words needed to gain adequate comprehension in university admission tests. A comparison of vocabulary input between the materials was made, and the results showed a large difference in vocabulary input that could hinder students' ability to gain comprehension in the target tests. Chujo's study3 has crucial implications for EFL learning, teaching and testing in particular, since EFL learners' vocabulary knowledge primarily depends on the textbooks they use at school2. In light of the above, there is a possibility that students' low scores on the standardized English achievement and general aptitude tests result from limited knowledge of the vocabulary provided by their textbooks. In order to find out whether students are provided adequate vocabulary knowledge for gaining adequate comprehension in the target tests, an investigation of vocabulary input in students' textbooks and in the target tests was conducted. The authentic data from the investigation were analyzed for differences in vocabulary input between these sources. The results of the study show not only the adequacy of the vocabulary input provided by the target textbooks, but also the number of words needed to gain comprehension in the target tests. Students, teachers and test designers stand to benefit from this study. That is, students and teachers may set vocabulary learning goals according to the estimated number of words needed to gain adequate comprehension in the target tests. Teachers may also prepare additional materials to bridge any vocabulary gap between the textbooks and the tests.
Test designers may take this information into consideration when selecting vocabulary for test design, particularly if the vocabulary input in the textbooks and that in the tests are not comparable.

Methodology

The materials used in this study were six high school EFL textbooks in two series (the first series comprises three "Upstream" textbooks for Grades 10 to 12, and the second series comprises three "Mega Goal" textbooks for Grades 10 to 12), two English test papers of the Ordinary National Educational Test (O-NET) for the academic years 2010 and 2011, six English test papers of the General Aptitude Test (GAT) for the academic years 2010 and 2011, and three English test papers of Khon Kaen University (KKU) quota admission for the academic years 2010, 2011 and 2012. Each textbook in the Upstream series comprises 5 modules presented in 10 units with different topics. The four language skills of listening, speaking, reading and writing are included in each unit, together with vocabulary and grammar focuses. Each textbook in the Mega Goal series comprises 12 units with different topics. The four language skills and grammatical knowledge are also included in each unit. Vocabulary is presented in a glossary at the end of each textbook. The English test papers of O-NET 2010 and O-NET 2011 for Grade 12 students are standards-based achievement tests17. They are multiple-choice tests consisting of 70 questions each. The total score is 100 marks. Each paper covers two sections aiming to test (1) speaking and writing ability and (2) reading comprehension. The first section includes 10 questions on conversations and 20 questions of sentence completion and error detection, worth 2 marks each. The second section includes a cloze test with 10 blanks and 30 questions of reading comprehension, worth 1 mark each. The English test papers of GAT 2010 and GAT 2011 aim to test students' ability to communicate in English13.
They are multiple-choice tests consisting of 60 questions each. Each paper covers four sections: Speaking, Vocabulary, Reading, and Structure and Writing. The total score is 150 marks. The Speaking section covers 15 questions on conversations, worth 2.5 marks each. The Vocabulary section covers 12 questions: vocabulary meaning out of context, worth 1.5 marks each; vocabulary meaning recognition, worth 2.5 marks each; and vocabulary meaning in context, worth 3.5 marks each. The Reading section covers 18 questions of reading comprehension, worth 2.5 marks each. The Structure and Writing section covers 15 questions of error detection, sentence completion and passage completion, worth 2.5 marks each. The English test papers of the KKU quota admission for 2010 to 2012 aim to select students with excellent academic performance to study in the university. The papers are multiple-choice tests consisting of 100 questions. The total score is 100 marks. Each paper covers five sections: the first includes 30 questions of reading comprehension, the second 20 questions of error detection, the third 20 questions of vocabulary, the fourth 20 questions on conversation, and the last 10 questions of cloze test. Each question accounts for 1 mark. The sources described above were compiled as new corpora. For the textbook compilation, the content of each textbook series was typed in Microsoft Word 2007 and saved as Microsoft Word document (.docx) files. Textbook covers, prefaces, introductions, tables of contents, references, directions, glossaries and other irrelevant content (e.g., literature corner, lyrics and grammar reference) were excluded from these document files. Also, non-lexical items (cardinal and ordinal numbers, interjections, and unclassified items such as www, umm, alphabetical symbols, units and abbreviations) were manually deleted.
This is because the percentage of non-lexical items tends to interfere with the results of the analysis if they are included in the target text files. The newly compiled textbook corpora were then proofread and revised before being converted into plain text (.txt) files. The compilation of the test corpora was similar to that of the textbook corpora, but considerably easier and faster. This is because all of the target tests were available as Portable Document Format (.pdf) files, so it made sense to use Optical Character Recognition (OCR) to transcribe the target tests directly from PDF files to plain text files. Test covers, directions and other irrelevant content, together with non-lexical items, were manually deleted from the plain text files. The new test corpora were then proofread and revised. It is important to note that the OCR program was practical for transcribing the tests but not the textbooks. This is because the textbooks are full of graphics that can interfere with the OCR transcription process and consequently produce a number of errors. The tests, on the other hand, do not contain many graphics that could interfere with OCR transcription. The representative corpus used in the study was the British National Corpus (BNC), one of the largest and most up-to-date corpora, comprising more than 100 million spoken and written British English words from the late twentieth century. The BNC was run in the VocabProfile BNC-20, an online word profiling program available for free at http://www.lextutor.ca/vp/bnc. The VocabProfile BNC-20, developed by Tom Cobb5, contains twenty 1000-word lists from the BNC, ranked in frequency order. Although it is a practical instrument for investigating vocabulary input in the present study, it has limitations. First, the program cannot count multi-word units; if multi-word units are input, each component word is counted separately.
Second, the program cannot distinguish homographs. Third, words related to computers and technology are not in the lists, although the BNC is an up-to-date corpus; even when such words do appear, they occur in the later, low-frequency lists. Each of the new corpora was uploaded to the VocabProfile BNC-20 website. The option for automatically recategorizing proper nouns was selected. Dealing with proper nouns is an important issue when investigating vocabulary input in a target text, particularly if the text contains many proper nouns. This is because proper nouns are easily recognizable from the capital letter at the beginning of the word within a sentence, and a reader can recognize them on first encounter. Therefore, the categorization of proper nouns can greatly affect the results of any study. Specifically, a text seems more difficult than it actually is if proper nouns are categorized as off-list words (not included in any of the twenty 1000-word lists). It is worth noting that the VocabProfile BNC-20 recategorizes only off-list proper nouns into the first 1000-word list; on-list proper nouns are categorized according to their frequency lists. The data from each corpus showed the total tokens, types, families, text coverage of each frequency list and cumulative text coverage. Text coverage of 95 percent was set as the criterion for this investigation since it is the minimum needed for a reader to gain adequate comprehension of a text, as claimed by Laufer8 and Laufer and Ravenhorst-Kalovski9. The researcher summed the percentage of coverage from the first 1000-word list through the subsequent lists until the cumulative percentage of coverage reached the predetermined text coverage.

Results

Vocabulary input in the textbook corpora

Table 1 shows the total types and tokens of each textbook corpus.
The Mega Goal corpus contained a larger number of tokens and types than the Upstream corpus. This larger number indicates that the Mega Goal series requires more extensive reading and broader knowledge of different word forms than the Upstream series.

Table 1. Total types and tokens of each textbook corpus.

  Textbook Corpus   Upstream   Mega Goal
  Types             7,881      8,190
  Tokens            83,909     86,727

Types, tokens and BNC coverage of the first four 1000-word lists in the Upstream corpus are shown in Table 2. The corpus reached 95.34 percent text coverage at the third 1000-word list. This means that around 3,000 words inclusive of proper nouns were provided as resources for students using this textbook series.

Table 2. Types, tokens and BNC coverage in the Upstream corpus.

  1000-Word Frequency List   1st              2nd            3rd            4th
  Types                      3,059            1,590          865            507
  Tokens (%)                 70,370 (83.86)   7,087 (8.45)   2,544 (3.03)   1,193 (1.42)
  Cumulative Coverage        83.86%           92.31%         95.34%         96.76%

Types, tokens and BNC coverage of the first four 1000-word lists in the Mega Goal corpus are shown in Table 3. The corpus reached 96.34 percent text coverage at the fourth 1000-word list. This means that around 4,000 words inclusive of proper nouns were provided as resources for students using this textbook series.

Table 3. Types, tokens and BNC coverage in the Mega Goal corpus.

  1000-Word Frequency List   1st              2nd            3rd            4th
  Types                      3,220            1,573          866            526
  Tokens (%)                 73,180 (84.38)   6,476 (7.47)   2,462 (2.84)   1,435 (1.65)
  Cumulative Coverage        84.38%           91.85%         94.69%         96.34%

Vocabulary input in the English test corpora

Table 4 shows the average types and tokens of each English test corpus. The O-NET and KKU corpora contained rather similar numbers of tokens (3,658 and 3,609), larger than that of the GAT (2,969). This indicates that the O-NET and the KKU quota admission test require more extensive reading than the GAT. The KKU corpus contained the most types among all the test corpora, while the O-NET corpus contained the fewest.
Broad knowledge of different word forms is therefore most advantageous for reading the KKU quota admission test.

Table 4. Average types and tokens of each English test corpus.

  Test Corpus   O-NET (2010-2011)   GAT (2010-2011)   KKU (2010-2012)
  Types         979                 1,081             1,160
  Tokens        3,658               2,969             3,609

The average percentage of BNC coverage in the O-NET corpus is shown in Table 5. The corpus reached 95.17 percent text coverage at the third 1000-word list. This means that around 3,000 words inclusive of proper nouns are needed for gaining adequate comprehension in the O-NET.

Table 5. Average BNC coverage in the O-NET corpus.

  1000-Word Frequency List   1st      2nd      3rd      4th
  Coverage                   83.81%   8.54%    2.82%    1.53%
  Cumulative Coverage        83.81%   92.35%   95.17%   96.70%

The average percentage of BNC coverage in the GAT corpus is shown in Table 6. The corpus reached 95.50 percent text coverage at the fourth 1000-word list. This means that around 4,000 words inclusive of proper nouns are needed for gaining adequate comprehension in the GAT.

Table 6. Average BNC coverage in the GAT corpus.

  1000-Word Frequency List   1st      2nd      3rd      4th
  Coverage                   80.88%   9.05%    3.61%    1.96%
  Cumulative Coverage        80.88%   89.93%   93.54%   95.50%

The average percentage of BNC coverage in the KKU corpus is shown in Table 7. The corpus reached 95.57 percent text coverage at the fourth 1000-word list. This means that around 4,000 words inclusive of proper nouns are needed for gaining adequate comprehension in the KKU quota admission test.

Table 7. Average BNC coverage in the KKU corpus.

  1000-Word Frequency List   1st      2nd      3rd      4th
  Coverage                   81.16%   8.73%    3.51%    2.17%
  Cumulative Coverage        81.16%   89.89%   93.40%   95.57%

Differences between vocabulary input in the textbook corpora and the test corpora

Table 8 shows the percentage of BNC coverage provided by each textbook and test corpus. The Mega Goal corpus provided the highest text coverage at the first 1000-word list, while the GAT provided the lowest.
In fact, a difference of 3.5 percent in text coverage was found between the two corpora at the first 1000-word list. However, the GAT provided the highest text coverage at the second 1000-word list, while the Mega Goal corpus provided the lowest. Still, the cumulative text coverage of the Mega Goal corpus was higher than that of the GAT corpus from the first list onward. Concerning the distribution of text coverage over the frequency lists, the two textbook corpora showed rather similar patterns. The text coverage of the O-NET corpus also exhibited a pattern similar to the two textbook series but different from the other two test corpora. For the GAT and KKU corpora, the distribution of text coverage over the frequency lists was much the same. The Upstream and O-NET corpora reached 95 percent text coverage at the same frequency list (the third 1000-word list), which suggests that there was no difference in vocabulary input between the Upstream textbook series and the O-NET. The Mega Goal, GAT and KKU corpora reached 95 percent text coverage at the fourth 1000-word list, which suggests that there was no difference in vocabulary input between the Mega Goal textbook series and the two tests (the GAT and the KKU quota admission test). However, a difference in vocabulary input between the Upstream textbook series and the GAT and KKU quota admission test was evident. That is, 1,000 words beyond those provided by the Upstream textbook series were necessary to gain adequate comprehension in reading the GAT and the KKU quota admission test. Table 8. Percentage of BNC coverage in each textbook and test corpus.
  1000-Word List   Upstream        Mega Goal       O-NET           GAT             KKU
  1st              83.86 (83.86)   84.38 (84.38)   83.81 (83.81)   80.88 (80.88)   81.16 (81.16)
  2nd              8.44 (92.30)    7.47 (91.85)    8.54 (92.35)    9.05 (89.93)    8.73 (89.89)
  3rd              3.03 (95.33)    2.84 (94.69)    2.82 (95.17)    3.61 (93.54)    3.51 (93.40)
  4th              1.42 (96.75)    1.65 (96.34)    1.53 (96.70)    1.96 (95.50)    2.17 (95.57)

  Note: cumulative coverage is shown in parentheses.

Discussion and Conclusion

The investigation reveals apparent discrepancies in the number of running words (tokens), the number of word forms (types) and text coverage between the two textbook series and among the tests. Regarding the textbooks, the Mega Goal series contains higher numbers of running words and different word forms than the Upstream series. These higher numbers point to key issues concerning extensive reading and word form recognition: the results demonstrate that the Mega Goal series requires more extensive reading and broader knowledge of different word forms than the Upstream series. Text coverage analysis allows us to estimate the number of words provided as resources by the textbooks: the Upstream series provides 3,000 words and the Mega Goal series 4,000 words. With reference to the tests, comparable numbers of running words were found in the O-NET and the KKU quota admission test, both clearly higher than in the GAT. This means that the O-NET and the KKU quota admission test require more extensive reading than the GAT. However, the results also reveal that the GAT contains a higher number of different word forms than the O-NET, and a number similar to the KKU quota admission test, although its total running words is the lowest. This indicates the usefulness of word form recognition for reading the GAT and the KKU quota admission test, and confirms the higher repetition rate of words in the O-NET.
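The counting procedure used throughout this analysis — summing per-list coverage until the cumulative percentage reaches the 95 percent criterion — can be sketched as follows. This is a minimal illustration (the function name is ours, not VocabProfile's), using the Upstream per-list coverage figures from Table 2 as input:

```python
# Sum per-list text coverage until the 95 percent criterion for
# adequate comprehension is reached, and report which 1000-word
# frequency list first meets it.
def threshold_list(coverages, criterion=95.0):
    """Return (frequency list number, cumulative %) where the criterion is first met."""
    cumulative = 0.0
    for i, cov in enumerate(coverages, start=1):
        cumulative += cov
        if cumulative >= criterion:
            return i, round(cumulative, 2)
    return None, round(cumulative, 2)  # criterion never reached

# Per-list coverage of the Upstream corpus (Table 2).
upstream = [83.86, 8.45, 3.03, 1.42]
print(threshold_list(upstream))  # (3, 95.34)
```

The output reproduces the Table 2 result: the Upstream corpus crosses 95 percent at the third 1000-word list, i.e. around 3,000 words inclusive of proper nouns.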
As the text coverage analysis reveals, to gain adequate comprehension in the tests, around 3,000 words are needed for the O-NET and 4,000 words for the GAT and the KKU quota admission test. Although this clearly indicates that the GAT and the KKU quota admission test require 1,000 words more than the O-NET, it does not ascertain anything beyond the difficulty of the vocabulary input in the tests. In fact, although the O-NET needs fewer words for comprehension, it requires more extensive reading. The investigation strongly indicates the adequacy of the vocabulary input in the two textbook series for gaining adequate comprehension in the O-NET. This adequacy raises the possibility for students to gain comprehension in reading the O-NET, and potentially obtain high scores on the test. Moreover, it supports the fact that, in terms of vocabulary input, the O-NET does not contain vocabulary beyond that in students' textbooks. However, it is evident that the Upstream series fails to provide students with adequate vocabulary for gaining comprehension in the GAT and the KKU quota admission test. Students using the Upstream series need additional materials to increase their vocabulary knowledge by at least 1,000 words to gain adequate comprehension in these tests. As for the Mega Goal series, there is a reasonable chance that students using the series will gain adequate comprehension in all of the tests, given the adequacy of the vocabulary input provided by the series. The distribution of text coverage over the BNC frequency lists provided by the materials used in the present study can be compared with Nation's12 range of average BNC text coverage gathered from authentic spoken and written samples. As Nation12 described, spoken texts cover around 81-84 percent at the first 1000-word list, 5-6 percent at the second 1000-word list, 2-3 percent at the third 1000-word list and 1.5-3 percent at the fourth and fifth 1000-word lists together.
Written texts cover around 78-81 percent at the first 1000-word list, 8-9 percent at the second 1000-word list, 3-5 percent at the third list and 3 percent at the fourth and fifth 1000-word lists together. Comparing Nation's ranges with the text coverage of the materials in the present study, the coverage of the two textbook series and the O-NET falls into the range obtained from spoken texts, except at the second 1000-word list, where higher coverage was found for the textbooks and the O-NET. This suggests that the content of these materials focuses on communicative English. On the other hand, the coverage of the GAT and the KKU quota admission test falls into the range obtained from written texts, which suggests that the content of the GAT and the KKU quota admission test focuses on reading skills. The vocabulary input in Indonesia's National Examination (NE) was investigated in Aziez's study1. That test is arguably a counterpart of the O-NET in Thailand, as both are national tests for high school students. The results showed that 4,000 words are needed to gain adequate comprehension in reading the test; in other words, around 1,000 words more than for the O-NET are necessary to read Indonesia's NE. However, a comparison of vocabulary input in terms of the number of running words and different word forms reveals that the O-NET requires more extensive reading than Indonesia's NE. This again allows us to assume that the O-NET, though it needs fewer words for comprehension, is not easier than Indonesia's NE because of the more extensive reading it requires. Chujo3 conducted a study to estimate the number of words provided in high school EFL textbooks and the number needed for adequate comprehension in university admission tests in Japan.
She found that the textbooks provided 3,000 words and 3,200 words as resources, while the tests of three national universities needed 3,500 words, 4,800 words and 6,300 words for comprehension. Obviously, most of the tests in Chujo's study needed a larger number of words than the textbooks provided, which caused a large difference in vocabulary input between the textbooks and the tests in her study. In the present study, although a difference in vocabulary input between the Upstream series and the two tests (the GAT and the KKU quota admission test) was found, that difference is not as large as those found in Chujo's study. In fact, it appears that the O-NET, the GAT and the KKU quota admission test were constructed with more careful consideration of vocabulary selection, because the vocabulary input in the tests is rather comparable to that in the textbooks used for classroom instruction. In conclusion, the investigation confirms the high possibility that students using either of the textbook series, without additional materials, can gain adequate comprehension in the O-NET and consequently tend to obtain high scores. However, students' scores in past years have been contrary to this. If we assume that the tests are standardized in terms of vocabulary input, other factors contributing to such low scores should be investigated. In particular, students' vocabulary size should be measured to find out whether students actually possess vocabulary knowledge as large as that provided by the target textbooks. Additionally, given that one textbook series fails to provide students with adequate vocabulary input for gaining adequate comprehension in the GAT and the KKU quota admission test, teachers should give students additional materials to increase their vocabulary knowledge as needed to comprehend these tests. The results also inform test designers that, in terms of vocabulary input, the O-NET does not contain vocabulary beyond what is provided in the textbooks.
It is also apparent that all of the university admission tests used in the present study were constructed with more careful consideration of vocabulary selection than two of the three Japanese national university admission tests in Chujo's study3.

References

1. Aziez F. Examining the vocabulary levels of Indonesia's English National Examination texts. Asian EFL Journal 2011;51:16-29.
2. Catalán RMJ, Francisco RM. Vocabulary input in EFL textbooks. Revista Española de Lingüística Aplicada 2008;21:147-165.
3. Chujo K. Measuring vocabulary levels of English textbooks and tests using a BNC lemmatized high frequency word list. In: Nakamura J, editor. English corpora under Japanese eyes. Amsterdam: Rodopi; 2004. p.231-249.
4. Chujo K, Oghigian K. How many words do you need to know to understand TOEIC, TOEFL & EIKEN? An examination of text coverage and high frequency vocabulary. The Journal of Asian TEFL 2009;6:121-148.
5. Cobb T. Learning about language and learners from computer programs. Reading in a Foreign Language 2010;22:181-200.
6. Hsu W. Measuring the vocabulary of college general English textbooks and English-medium textbooks of business core courses. Electronic Journal of Foreign Language Teaching 2009;6:126-149.
7. Hsu W. Vocabulary thresholds of business textbooks and business research articles for EFL learners. English for Specific Purposes 2011;30:247-257.
8. Laufer B. What percentage of text lexis is essential for comprehension? In: Lauren CH, Nordman M, editors. Special Language: From Humans Thinking to Thinking Machines. Philadelphia: Multilingual Matters; 1989. p.316-323.
9. Laufer B, Ravenhorst-Kalovski GC. Lexical threshold revisited: Lexical text coverage, learners' vocabulary size and reading comprehension. Reading in a Foreign Language 2010;22:15-30.
10. Nation ISP. Learning vocabulary in another language. Cambridge: Cambridge University Press; 2001.
11. Nation ISP. How good is your vocabulary program? ESL Magazine 2001;4:22-24.
12. Nation ISP. How large a vocabulary is needed for reading and listening? The Canadian Modern Language Review 2006;63:59-82.
13. National Institute of Educational Testing Service. About NIETS [Internet]. 2009 [cited 2013 Nov 19]. Available from: http://www.niets.or.th/uploadfiles/uploadfile/5/5113f2fc40d9b7ccbf26972226c1a536.pdf
14. National Institute of Educational Testing Service. 2009 O-NET statistics [Internet]. c2010 [cited 2012 Jul 1]. Available from: http://www.niets.or.th/uploadfiles/uploadfile/9/d236a3a2deb4e08ec19b3a40c9acc3ed.pdf
15. National Institute of Educational Testing Service. 2010 O-NET scores classified according to provinces [Internet]. c2011 [cited 2012 Jul 1]. Available from: http://www.niets.or.th/uploadfiles/uploadfile/9/40faf7edfb767401bd0b85a4fb44cabf.pdf
16. National Institute of Educational Testing Service. 2011 O-NET scores classified according to Educational Service Area [Internet]. c2012 [cited 2012 Jul 1]. Available from: http://www.niets.or.th/index.php/research_th/download/28
17. Phanphruk S. The application of O-NET scores on students' school assessment [PowerPoint]. 2013 [cited 2013 Nov 19]. Available from: http://www.slideshare.net/tadchanan/o-net-16157981
18. Poonpon K, Honsa S Jr, Cowan RA. The teaching of academic vocabulary to science students at Thai universities. Studies in Languages and Language Teaching 2001:51-63.
19. Read J. Assessing vocabulary. Cambridge: Cambridge University Press; 2000.
20. Wiriyachitra A. English language teaching and learning in Thailand in this decade. Thai TESOL Focus 2002;15:4-9.