
First, before test administration no official meeting, named 'the standardisation meeting' by Alderson, Clapham and Wall (1995), was held among the group of assessors to discuss and agree on how to mark each question/task. The administrators perhaps assume that the assessors, as language teachers, must know how to fully elicit the students' oral proficiency and therefore need no briefing on what to do during the test. Even when the assessors are aware of the importance of such a meeting, they are unable to hold it, partly because the staff's insufficient knowledge of oral testing prevents them from designing an appropriate marking key and a reasonable description of mark categories with a marking criterion.

Therefore, before test administration, a marking key and a marking criterion for mark categories are first needed from the test designers, and then a considerable amount of time must be spent on discussion to reach agreement on the way to mark each question/task. Alderson, Clapham and Wall (1995, p. 112) maintain, 'although this is likely to be expensive, it is the safest way of ensuring that enough discussion will take place for all examiners to understand thoroughly the level scale and the procedures for scoring.' All of this aims at assuring the reliability of an achievement speaking test.

Second, Table 4.3 reveals that, during the students' test performance, there was hardly any interaction between the assessors and the test takers: only 2 of the 10 students were asked one or two questions. Moreover, the duration of these 10 students' test performances varied, averaging only 2 minutes. As discussed in 2.1, Chapter 2, spoken language has two functions, interactional and transactional, both of which should be incorporated into a speaking test. In fact, in most of the oral tests in use at TNU, including the achievement test mentioned above, the students are expected merely to produce transactional instances of the language. Can such tests be considered able to measure the test takers' overall oral proficiency? Surely not, because they involve no interactive communication. This also means that the assessors gave scores based only on the students' presentations, which clearly indicates a lack of validity and reliability (see 2.5, Chapter 2).

Last but not least, as regards a supportive testing environment, the oral tests were mostly administered in noisy rooms. Students should be put at ease before and during their performance, which can increase their confidence. Bachman and Palmer (1996) therefore argue that it is crucial to maintain a supportive environment throughout the test, that is, to avoid distractions caused by temperature, noise, excessive movement, and so on. To this end, test administrators and assessors need to master techniques for creating an atmosphere that helps each student feel at ease (Alderson, Clapham and Wall, 1995, p. 116). Students waiting for their turn should sit in a comfortable room rather than standing along the corridor and talking, so as not to disturb the others' performance.

Source:  OpenStax, Collection. OpenStax CNX. Dec 22, 2010 Download for free at http://cnx.org/content/col11259/1.7