Monday, November 28, 2011

Final Blog

I felt that this course pushed me in several ways.  There were a lot of deadlines to keep up with, and posting on different days was difficult when others had not posted their initial discussions on time.  I think assignments 2, 3, and 4 were helpful, but I'm not sure I will go through all of the steps (like making the tables) before making every assessment.  I felt that some of the discussions were useful, but on occasion the comments seemed repetitive and made only for the sake of earning all of the points.  I understand that they are necessary, and that in a perfect world students would openly and willingly discuss pertinent topics, but it is frustrating to have to sift through so many discussions to find useful information.  I also feel the assignments would be less frustrating if they were all available from the beginning.  If I had known the format of the template, I could have written my questions in the correct format the first time.  I liked creating the assessment and uploading it into Respondus because I had never added feedback to my questions before.  That was the push I needed to learn how.
Initially, when I learned that we would be blogging, I assumed it would be about things we were interested in or just wanted to share.  It would have been nice if we could have used the blog or the journal feature simply to present ideas or new things we learned, instead of answering more discussion-type questions; it felt like double duty on the discussions.  Making the blogs more personal would allow us to still discuss (in the discussion forum) while also creating the more personal environment that still seems to be lacking in distance education.  Many of us all seem to be taking the same classes together, in the same order, and I think it would encourage more of a sense of community.  Overall I enjoyed the class and signed up for Administration in LST in the Spring!  ☺

Respondus

I like using Respondus to build my exams, and also using the LockDown Browser while students take them.  All of my courses were previously taught in a face-to-face classroom, and the exams were given with paper and pencil.  I easily re-formatted my Word documents into text files with the "*" before the correct answers and added feedback (an added bonus, since there was no feedback previously!).  I also like that I can continue to format my questions using bold, italics, or color, as well as adding images.  Although these formats are lost when converting to plain text, they can be restored once the questions are uploaded into Respondus.  As for building a new exam, I believe I would build it straight into Blackboard.  A copy with an answer key can then be saved in Respondus and also to my computer as a back-up.
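For anyone converting their own exams, a question in that plain-text format looks something like this (a made-up example of my own; as I understand the format guide, the "*" marks the correct answer, and the "~" and "@" lines are how Respondus reads the correct- and incorrect-answer feedback):

    3. Which projection is demonstrated when the central ray enters the posterior surface of the body and exits the anterior surface?
    ~ Correct! The beam travels from back (posterior) to front (anterior).
    @ Incorrect. Review the definitions of the basic projections.
    a. Anteroposterior
    *b. Posteroanterior
    c. Lateral
    d. Oblique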
I believe Respondus is very useful to instructors as long as they are familiar with the program’s capabilities. 

Tuesday, November 1, 2011

Assessment Creation

When creating an assessment item, I found the easiest part to be writing the stem, and the hardest part to be creating the incorrect options (the distractors).  I could easily think of a topic I wanted to cover and put it in question form, but then had a difficult time coming up with options that were indeed wrong without appearing "tricky" or misleading.  I also had to be very conscious not to give away the answer in the stem itself.  For example, in one stem I intended to give a definition and have the learner choose the correct term.  In the definition of criterion-referenced I used the word criteria, which would easily give away the answer even to the most clueless of students.  I found it helpful to use the thesaurus in MS Word to overcome this problem.
I did not need to modify my original objectives once I began creating questions.  I knew this was an option and considered it at times, but I was able to design assessment items that matched all of my objectives. 
Below is an example of one of my best assessment items:
12.)  In education there are four references commonly used for interpreting the performance of learners.  Read the following example and identify the type of interpretation demonstrated.  In Miss Mary’s first grade class, all of the students are given a short quiz at the beginning of the year.  At the end of the year Miss Mary gives the students the quiz again.
                A.            Ability-referenced
                B.            Growth-referenced
                C.            Norm-referenced
                D.            Criterion-referenced
Correct Answer: B – Growth-referenced interpretation compares the learner's performance to their own prior performance.
Incorrect Answer: A – Ability-referenced interpretation evaluates the learner's performance in light of that individual's maximum possible performance.
Incorrect Answer: C – Norm-referenced interpretation compares the learner's performance to that of others.
Incorrect Answer: D – Criterion-referenced interpretation describes what the learner can and cannot do.
Performance Objective:  (Chapter 6) Given an example, correctly identify the type of interpretation demonstrated.

Thursday, October 20, 2011

Assessment Questions

 
It is important to know the type of capability in a performance objective because learning can only be measured if it can somehow be observed.  The performance objective should specify what behavior you will be able to see from the learner, by stating the capability the instructor expects the learner to use to satisfy the objective.  Performance objectives should be stated in terms that clearly identify the behavior that will be demonstrated.
I think in a traditional online assessment, some capabilities are easier to incorporate than others.  Multiple choice and true-or-false questions are very flexible and allow a sufficient sampling of the content material.  One reason for this is that more of these items can be included in an assessment than constructed-response items, such as essay questions; in other words, the learner can answer more multiple choice and true-or-false questions in a set time frame than essay questions.  By including more questions, more of the content can be covered, which increases the generalizability of the assessment.
Fixed-response questions can also measure procedural knowledge, like concepts and rules, more easily than completion items can.  This can be done in a variety of ways; for example, multiple choice questions can ask the learner to classify examples as correct or incorrect.  They can also measure a learner's ability to apply a rule by having the learner choose the correct response to a problem where the other options offer common wrong solutions.  Completion items are good for measuring a learner's recall of information because there is less chance of guessing the correct answer.
Feedback can be easily incorporated when using fixed responses in an online assessment, and fixed-response items are graded by the computer much more easily than completion items are.  Although completion items can be automatically graded, there are often mistakes due to things like capitalization and spacing that do not match the instructor's key.
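To illustrate that last point, here is a small hypothetical sketch in Python (not from any actual grading program) showing how an exact-match grader marks a correct response wrong over capitalization and a stray space, and how normalizing the response first avoids the problem:

    # Hypothetical sketch of auto-grading a completion (fill-in) item.
    def grade_exact(response, key):
        # Strict comparison: any difference in case or spacing fails.
        return response == key

    def clean(text):
        # Lowercase and collapse extra whitespace.
        return " ".join(text.lower().split())

    def grade_normalized(response, key):
        # Compare the cleaned-up response against the cleaned-up key.
        return clean(response) == clean(key)

    key = "norm-referenced"
    print(grade_exact("Norm-Referenced ", key))       # False: marked wrong
    print(grade_normalized("Norm-Referenced ", key))  # True: full credit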

Friday, October 7, 2011

Prompts and Scoring

It's important to use a consistent evaluation system when grading essay assessment items, for several reasons.  One reason is to ensure that all students are graded fairly and equally, and having clear and concise criteria for evaluation allows the grader to be certain this happens.  Other ways to do this include grading without knowing the identity of the student, using a model answer, and grading all responses to a particular question at one time (Oosterhof, Conrad, & Ely, 2008). 
The textbook provides six suggested criteria for evaluating essay items:
1. Does this item measure the specified skill?
2. Is the level of reading skill required by this item below the learner's ability?
3. Will all or almost all students answer this item in less than 10 minutes?
4. Will the scoring plan result in different readers assigning similar scores to a given student's response?
5. Does the scoring plan describe a correct and complete response?
6. Is the item written in such a way that the scoring plan will be obvious to knowledgeable learners?
Below is a critique of the prompt and scoring article found on the ACT website (http://www.actstudent.org/writing/sample/textsamples.html) using the six criteria listed above.
1. Does this item measure the specified skill?  The article gives a prompt but does not include any indication of what the objective is, so I am not sure how a grader would know whether the objective of the essay was met.  In this example the learner must answer the question given but is allowed to choose a position on the subject, and knowledge of a particular subject is not needed to answer correctly.  Based on the scoring explanation in the article, it is the learner's writing ability and organizational skills that are being evaluated.
2. Is the level of reading skill required by this item below the learner's ability?  Yes, I believe the reading skill required to understand this question is below the learner's ability.  The sentence structure is simple and does not contain difficult or unusual words that may be unknown to the learner.
3. Will all or almost all students answer this item in less than 10 minutes?  Yes, this question can be answered in less than 10 minutes.  It is opinion based and only asks the learner to support their position with specific reasons and examples.
4. Will the scoring plan result in different readers assigning similar scores to a given student's response?  No, this example does not ensure that different readers will assign similar scores.  Three characteristics should be met to ensure reliable scoring.  First, the number of points the item is worth should be known by the student at the time of the test; that is not the case here.  Second, the scoring plan should specify the attributes being evaluated; again, not the case here.  Third, the scoring plan should indicate how points will be awarded; this is also not identified in the example.  A good scoring plan should leave little doubt as to when a point should or should not be awarded.
5. Does the scoring plan describe a correct and complete response?  No, a scoring plan was not used in this example.  The learner has no indication of what is required for a complete response other than including reasons and examples to support their position.
6. Is the item written in such a way that the scoring plan will be obvious to knowledgeable learners?  No; again, no scoring plan was provided. 
While it is possible that the ACT uses a scoring plan and provides it to the readers grading the essays, it is not made known to the student at the time of testing.  Providing this information might make learners more aware of what is expected of them and allow them to focus on the specific content.

Oosterhof, A., Conrad, R., & Ely, D. P. (2008). Assessing learners online. Upper Saddle River, NJ: Pearson Education.

Tuesday, September 20, 2011

Training and Education

When determining what to assess, one must first decide whether the skills to be assessed are considered training or education.  If the goal is for students to learn to perform a skill, it is considered training.  Education is when a student learns material that they can build upon and use for future problem solving.
I routinely use both training and education in the content area that I teach.  An example of training is in our Basic Patient Care laboratory course, where I am currently teaching venipuncture techniques.  I believe there is a remarkable difference between "knowing" how to do something and being able to do it.  I "know" how to do the splits, but I can assure you that I can no longer actually do them!  This is where the benefit of having a hands-on lab comes in.  Students can practice their IV starting techniques on a mannequin arm that is filled with a blood-like fluid. 
 
[Photo: instructor and student using a mannequin arm]

All of the students have access to the task analyses created for each skill they will be assessed on in lab, which helps them break down the steps as they practice.  There is always a certain amount of nervousness that students must overcome before they can stick a patient for the first time.  That nervousness does not go away, but practice builds confidence, and I believe confidence helps to steady a shaky hand.  As their instructor, I am beside the student the entire time, offering guidance when needed and feedback when the task is complete.  Once students have mastered the mannequin arm, they are allowed to move on to a classmate.  I use a very detailed grading rubric to assess the student's performance when they attempt to prove competency on a classmate.  I assure them that it is not necessary to puncture a vein and advance the catheter until blood returns in order to pass the comp; I do not believe that would be completely fair, because not all patients are the same, nor do they all have good veins for venipuncture.  Instead, I assess their ability to follow protocol and use proper technique.  They are only allowed one attempt to stick a person, which keeps our students from becoming human pincushions.  If a student does not pass, I discuss the issues with them, and they try again on the mannequin arm until the problem is resolved.  Repetitive hands-on practice, reaching a level of proficiency, remedial training when proficiency is not met, focus on a specific demonstrated behavior, and the practice of assessing every skill learned are all qualities that make this an example of training.
The Basic Patient Care course that complements the lab section is a great example of education rather than training.  The coursework provides the structure and foundation for the knowledge that the students will need to apply in their clinical rotations.  The material mainly consists of declarative knowledge, although a small amount of procedural knowledge is also covered.  Goal statements are an excellent way for the student to clearly identify the target outcomes of the material.  The material is exhaustive and must be generalized for assessment purposes: while the student must use this knowledge to anticipate a variety of problem-solving situations, the assessment cannot cover every situation that may occur in patient care.  Typical assessments I use for this course include both low-stakes and high-stakes assessments.  A low-stakes assessment is one in which the student's performance does not have a significant impact on their grade; in this course, low-stakes assessments take the form of discussion questions, group activities, quizzes, and module exams.  High-stakes assessments, such as the mid-term and final exams, are given considerable weight and have more of an impact on the student's grade than the low-stakes assessments.  These types of exams require higher test security and a great deal of planning. 
Training and education are distinctly different, and each is effective if used properly.  The type of learning should be chosen based on the goals the instructor has for the learners.  If the instructor hopes to teach declarative and procedural knowledge without problem solving, training may be the best method; if problem solving is a desired skill, education may be the best path to take.

Monday, September 12, 2011

Learning Capabilities

I may be teaching Radiographic Anatomy in a face-to-face classroom setting for the last time this semester.  As our program moves toward distance education and the online format, I am thinking about how this course might be restructured to meet the needs of the learner.  Currently, I use PowerPoint to display radiographs on the overhead projector as the class discusses the anatomy.  Students are also given a copy of the lecture so that they may actively take notes and label the radiographs, which works as an excellent study tool. 
One of the first lessons covered in the Radiographic Anatomy course is how to properly hang films.  It sounds easy enough, but there are a surprisingly large number of rules that must be applied.  I would prepare a pre-test to assess the learners' prior knowledge of anatomy (a pre-requisite of the program) to determine whether we are all on the same page to begin with.  This assesses declarative knowledge: the learner can tell me what information they do or do not know about the subject.  Students would then view the lessons on film hanging, which would contain multiple examples and demonstrations of both the correct and incorrect ways to display a film for reading by the radiologist.
A great way for learners to prove their understanding of the concept is to hang the films themselves.  I have looked into a few online puzzle-type software programs that I may be able to use to make interactive assessments.  My goal would be to display an image incorrectly and have the student manipulate the image until it is displayed correctly for reading.  For example, a radiograph of a posteroanterior right hand should be displayed as if the viewer's eyes were the x-ray beam, passing through the image in the same manner in which the photons passed through the extremity when it was imaged.  It should be hung on the viewbox as if the patient were hanging from their fingers, with the radiographer's marker on the lateral aspect of the anatomy. 
On the assessment, the image might be displayed upside down, backwards, or flipped.  Concept knowledge can be demonstrated as described previously, or by giving the learner a multiple choice exam containing images of radiographs.  The student could be asked to choose which image is displayed correctly, or to select the description that best explains how to fix the radiograph. 
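To sketch the idea (this is hypothetical Python, not one of the puzzle programs I looked at), the exercise boils down to presenting the radiograph in a scrambled orientation and letting the student flip and rotate it until it matches the correct display:

    # Hypothetical sketch of the film-hanging exercise. The radiograph is
    # modeled as a tiny grid; "R" stands for the radiographer's marker,
    # which must end up on the lateral aspect of the anatomy.
    CORRECT = (("R", "."),
               (".", "."))

    def rotate(img):
        # Rotate the image 90 degrees clockwise.
        return tuple(zip(*img[::-1]))

    def flip(img):
        # Mirror the image left-to-right (backwards).
        return tuple(row[::-1] for row in img)

    # The image arrives upside down and backwards.
    displayed = flip(rotate(rotate(CORRECT)))

    # The student's attempts to correct it:
    for move in (rotate, rotate, flip):
        displayed = move(displayed)

    print("Hung correctly!" if displayed == CORRECT else "Keep adjusting.")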
To demonstrate rule knowledge the student may be given a radiograph they have not been shown before and asked to apply the previously learned rules for film hanging to properly hang the film.  For example, if students were told that extremities are hung by the phalanges, although they have not seen a foot radiograph displayed, they should know that according to the rule, it should be hung by the toes. 
Problem solving knowledge may be assessed in this type of course by giving the student a mystery patient.  We also refer to this as “a day in the life of [a radiographer]”.  These types of assessments are designed to pull all of the learners’ knowledge together into a real-world situation.  The student is given a radiograph that has not been marked (this is a major no-no in radiography) and asked to display it.  The student must be able to correctly identify anatomy on a radiograph, and differentiate the organs visualized to be able to determine the patient’s left and right side of the body.  Then the student must apply the rules they have learned to properly display the image.