Thursday, October 20, 2011

Assessment Questions

 
It is important to know the type of capability in a performance objective because learning can only be measured if it can first be observed. The performance objective should specify what behavior you will be able to see from the learner; it does this by stating the capability the instructor expects the learner to use to satisfy the objective. Performance objectives should therefore be stated in terms that clearly identify the behavior that will be demonstrated.

In a traditional online assessment, some capabilities are easier to incorporate than others. Multiple choice and true-or-false questions are very flexible and allow sufficient sampling of the content material, partly because more of these items can be included in an assessment than constructed-response items such as essay questions. In other words, the learner can answer more multiple choice and true-or-false questions in a set time frame than essay questions. Including more questions covers more of the content, which increases the generalizability of the assessment.

Fixed-response questions can also measure procedural knowledge, like concepts and rules, more easily than completion items. This can be done in a variety of ways: multiple choice questions can ask the learner to classify examples as correct or incorrect, or measure the ability to apply a rule by having the learner choose the correct response to a problem while the other options offer common wrong solutions. Completion items are good for measuring a learner's recall of information because there is less chance of guessing the correct answer. Feedback can be easily incorporated when using fixed responses in an online assessment, and fixed-response items are graded by the computer program much more easily than completion items are.
Although completion items can be automatically graded, there are often mistakes due to things like capitalization and spacing that do not match the instructor’s key.
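One way an online assessment could reduce these false mismatches is to normalize both the student's response and the instructor's key before comparing them. This is only an illustrative sketch, not any particular platform's grading code; the function names are hypothetical:

```python
def normalize(answer: str) -> str:
    """Lowercase and collapse whitespace so capitalization and
    spacing differences do not cause a wrong-answer mismatch."""
    return " ".join(answer.strip().lower().split())

def grade_completion(response: str, key: str) -> bool:
    """Mark a completion item correct if the normalized response
    matches the normalized instructor key."""
    return normalize(response) == normalize(key)
```

With this approach, a response like " Photo  Synthesis" would match a key of "photo synthesis", though genuinely different answers would still be marked wrong.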

Friday, October 7, 2011

Prompts and Scoring

It’s important to use a consistent evaluation system when grading essay assessment items. This matters for several reasons. One is to ensure that all students are graded fairly and equally; having clear and concise criteria for evaluation allows the grader to be certain this happens. Other ways to do this include grading without knowing the identity of the student, using a model answer, and grading all responses to a particular question at one time (Oosterhof, Conrad, & Ely, 2008).
The textbook suggests six criteria for evaluating essay items:
1. Does this item measure the specified skill?
2. Is the level of reading skill required by this item below the learner’s ability?
3. Will all or almost all students answer this item in less than 10 minutes?
4. Will the scoring plan result in different readers assigning similar scores to a given student’s response?
5. Does the scoring plan describe a correct and complete response?
6. Is the item written in such a way that the scoring plan will be obvious to knowledgeable learners?
The following is a critique of the prompt and scoring article found on the ACT website (http://www.actstudent.org/writing/sample/textsamples.html), using the six criteria for evaluating essay items listed above.
1. Does this item measure the specified skill? The article gives a prompt but no indication of what the objective is, so it is unclear how a grader would know whether the objective of the essay was met. The learner must answer the question given but is allowed to choose a position on the subject, and knowledge of a particular subject is not needed to answer correctly. Based on the scoring explanation in the article, what is being evaluated is the learner’s writing ability and organizational skill.
2. Is the level of reading skill required by this item below the learner’s ability? Yes, I believe the reading skill needed to understand this question is below the learner’s ability. The sentence structure is simple and does not contain difficult or unusual words that may be unfamiliar to the learner.
3. Will all or almost all students answer this item in less than 10 minutes? Yes, this question can be answered in less than 10 minutes. It is opinion based and only asks the learner to support their position using specific reasons and examples.
4. Will the scoring plan result in different readers assigning similar scores to a given student’s response? No, this example does not ensure that different readers will assign similar scores. Three characteristics should be met to ensure reliable scoring: the number of points the item is worth should be known to the student at the time of the test; the scoring plan should specify the attributes being evaluated; and the scoring plan should indicate how points will be awarded. None of these three is satisfied in the example. A good scoring plan should leave little doubt as to when a point should or should not be awarded.
5. Does the scoring plan describe a correct and complete response? No, a scoring plan was not used in this example. The learner has no indication of what is required for a complete response other than including reasons and examples to support their position.
6. Is the item written in such a way that the scoring plan will be obvious to knowledgeable learners? No; again, no scoring plan was provided.
While it is possible that the ACT uses a scoring plan and provides it to the various readers grading the essay items, it is not made known to the student at the time of testing.  Providing this information may make the learner more aware of what is expected of them and allow them to focus on the specific content.
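The three reliability characteristics discussed under criterion 4 can be made concrete as a simple, machine-readable scoring plan. This is only an illustrative sketch; the attribute names and point values below are hypothetical, not ACT's actual rubric:

```python
# A hypothetical scoring plan that satisfies the three characteristics:
# the total point value is known in advance, the attributes being
# evaluated are named, and each attribute's point allocation is stated.
scoring_plan = {
    "total_points": 6,
    "attributes": {
        "takes a clear position on the issue": 2,
        "supports the position with specific reasons and examples": 3,
        "organizes ideas logically": 1,
    },
}

def score(awarded: dict) -> int:
    """Sum the points a reader awarded, capping each attribute
    at its stated maximum so no reader can over-award."""
    return sum(
        min(awarded.get(attr, 0), max_pts)
        for attr, max_pts in scoring_plan["attributes"].items()
    )
```

Because every reader works from the same named attributes and caps, two readers scoring the same response should arrive at similar totals, which is exactly what criterion 4 asks for.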

Oosterhof, A., Conrad, R., & Ely, D.P. (2008). Assessing Learners Online. Upper Saddle River, NJ: Pearson Education Inc.