Tuesday, August 23, 2011

Reflection on Assessments in Online Environments

[Image: EIDT6511.gif]
As this course on Assessments in Online Environments ends, I have been asked to reflect on the course and how it will affect my professional practice as a way to demonstrate my learning. Part of this assignment was to provide a narration of my reflection. Since blogger.com does not directly support embedded audio, I created an external link for the narration. Click the graphic to the left to play the narration.


Designing online assessments is part of a larger iterative process of designing online instruction. It begins with understanding the mission of the project: who the learners are, what the learning objectives are, and how distance learners will be able to demonstrate that they have incorporated the learning into their own day-to-day thinking. (Dick, Carey, & Carey, 2009)
In the past, I considered assessment my weakest area in instructional design. Now I feel I have a deeper understanding of the issues that separate shallow homework checks from the kinds of formative and summative assessments that allow students to take charge of their own learning, and instructors to focus on individual student needs. I understand that the validity of an assessment has to do with how well it connects with learning objectives. I understand the importance of ensuring “generalizability”: that performance on an assessment gives an accurate picture of mastery across all of the learning objectives, not just the particular topics the assessment happens to sample. I also understand how to achieve better generalizability by covering learning objectives from more than one perspective, and by using performance assessments when appropriate. (Oosterhof, Conrad, & Ely, 2008)
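One way to make that last point concrete is to treat the assessment as a “blueprint” and check its coverage. Below is a minimal Python sketch (the objective names, item formats, and data layout are entirely hypothetical) that flags any learning objective sampled from only one perspective:

    from collections import defaultdict

    # Hypothetical blueprint: one (learning objective, item format) pair per question.
    ITEMS = [
        ("objective_1", "multiple_choice"),
        ("objective_1", "short_answer"),
        ("objective_2", "multiple_choice"),
        ("objective_2", "multiple_choice"),
        ("objective_3", "essay"),
    ]

    coverage = defaultdict(set)
    for objective, item_format in ITEMS:
        coverage[objective].add(item_format)

    # Flag objectives assessed from only one perspective.
    for objective, formats in sorted(coverage.items()):
        if len(formats) < 2:
            print(objective, "is covered only by:", ", ".join(formats))

Here objective_2 is flagged even though it has two items, because both take the same form; better generalizability would call for a second perspective.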

Concepts and Strategies

One of the benefits of this course was that it allowed me to look back at older learning models and realize that each provides essential tools for the instructional designer. In the first week of the course I commented that I had been so focused on social learning principles that it was refreshing to consider that social learning requires each learner to have a foundation of concrete facts.
We discussed the difference between formative and summative assessments. Formative assessments are generally not graded; they give instructors and students quick, non-threatening feedback that identifies weak areas in student understanding, as well as instruction that needs strengthening. Summative assessments, by contrast, are graded and measure mastery at the end of a unit of instruction. Sometimes formative assessments are used directly by students to make quick adjustments in their own learning; sometimes instructors and instructional designers use them to identify problems with learning activities. (Oosterhof, Conrad, & Ely, 2008)
Later we discussed different kinds of learning. Declarative learning involves facts that can be memorized and repeated, such as “Indianapolis is the capital of Indiana.” Procedural or “rules” learning involves following a series of steps to complete a task; learning how to make a pot of coffee would be an example. Problem-solving requires the application of facts and rules together to demonstrate learning. (Oosterhof, Conrad, & Ely, 2008)
We then matched these kinds of learning to the kinds of assessments appropriate to each, along with the advantages and disadvantages of the various assessment formats.
Fixed-answer quizzes, in which students select from predetermined answers (true-false, multiple-choice, matching, and sequencing exercises), all present students with answers to be recognized or manipulated to demonstrate declarative or procedural learning. Fixed-answer quizzes can give students instant feedback for formative assessments, and they take less time to grade.
Disadvantages include the tendency for students to get no feedback on incorrect answers. Multiple-choice questions must have alternate choices that appear plausible to students who have not thoroughly mastered the learning objectives, but that are obviously incorrect to students who have. When multiple-choice questions are not well designed, they can mislead students who understand the material into selecting incorrect answers, or provide unintended clues that let students surmise correct answers without having mastered the objectives. Designing effective multiple-choice questions can be time-consuming. (Oosterhof, Conrad, & Ely, 2008)
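To show why fixed-answer items can return feedback instantly and grade quickly, here is a minimal Python sketch of a quiz engine; the question content, data layout, and grade() helper are my own illustration, not code from Moodle, QuestionMark, or any other real tool:

    # Each choice carries pre-written feedback, so even a wrong answer
    # earns a targeted hint instead of a bare "incorrect."
    QUIZ = {
        "q1": {
            "prompt": "What is the capital of Indiana?",
            "choices": {"a": "Chicago", "b": "Indianapolis", "c": "Columbus"},
            "answer": "b",
            "feedback": {
                "a": "Chicago is in Illinois, and is not even that state's capital.",
                "b": "Correct.",
                "c": "Columbus is the capital of Ohio.",
            },
        },
    }

    def grade(responses):
        """Score {question_id: selected_choice} and collect answer-specific feedback."""
        score, feedback = 0, {}
        for qid, selected in responses.items():
            question = QUIZ[qid]
            if selected == question["answer"]:
                score += 1
            feedback[qid] = question["feedback"].get(selected, "Unrecognized choice.")
        return score, feedback

    score, feedback = grade({"q1": "c"})
    print(score, feedback["q1"])  # 0 Columbus is the capital of Ohio.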
Short-answer (one-word) questions have the advantage of not providing clues to students, but they have the disadvantage of requiring manual grading or review, because they allow for the possibility of unanticipated correct answers.
Essay and short-essay (phrase- or sentence-length) questions are even more secure from the standpoint of accurately assessing student learning, but they require still more instructor time to grade. Unlike short-answer questions, short-essay questions cannot be graded with an answer key; an instructor must read and subjectively evaluate each response. Checklists and rubrics can be useful for restoring objectivity when grading short-essay questions, as sketched below. (Oosterhof, Conrad, & Ely, 2008)
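To make the rubric idea concrete, here is a minimal Python sketch of point-based scoring for a short-essay response; the criteria, point levels, and wording are hypothetical, loosely modeled on the kind of rubric a tool like iRubric builds:

    # Each criterion maps point values to mastery-level descriptions.
    RUBRIC = [
        ("Accuracy",     {2: "All facts correct", 1: "Minor errors", 0: "Major errors"}),
        ("Completeness", {2: "Addresses the whole prompt", 1: "Partial answer", 0: "Off topic"}),
        ("Clarity",      {1: "Clear and organized", 0: "Hard to follow"}),
    ]

    def score_response(ratings):
        """Sum the points the instructor assigned to each criterion."""
        earned = sum(ratings[name] for name, _levels in RUBRIC)
        possible = sum(max(levels) for _name, levels in RUBRIC)
        return earned, possible

    earned, possible = score_response({"Accuracy": 2, "Completeness": 1, "Clarity": 1})
    print(earned, "out of", possible)  # 4 out of 5

Because every grader works from the same criteria and level descriptions, two instructors scoring the same essay should arrive at nearly the same total.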

Technology Toolbox

One especially useful tool from my “technology toolbox” during this course was iRubric, from http://www.rcampus.com/indexrubric.cfm. I have not fully explored the software's capabilities, but it guides the process of developing a rubric, including the ability to tie specific questions to learning objectives, and once a rubric is saved it provides a utility for using that rubric to create individualized responses to students.
Other tools I investigated during this course were Moodle (http://moodle.org/) and QuestionMark (http://www.questionmark.com/us/index.aspx). Moodle is a complete course management system that is free, but it requires installation on a server; I have installed Moodle on my personal website at http://1loyd.com/moodle. QuestionMark is a stand-alone assessment tool that our course textbook recommended as a software solution providing assistance with assessment design. (Oosterhof, Conrad, & Ely, 2008)
All of these tools provide automated assistance with grading. Many other tools were presented in the course “tech resources” section, among them a number of online “gradebook” applications that I have yet to review:
Engrade, MyGradeBook, SnapGrades, ThinkWave, TeacherEase, and TheGradeNetwork.

Insights About Scoring and Feedback

One aspect of online scoring and feedback that this course emphasized was the ability to pre-plan much of the feedback students need. Assessment software such as Moodle and QuestionMark has facilities for designing and displaying question-specific and even answer-specific responses that guide students with instant formative feedback. Rubrics also provide a simple means of designing student feedback, by wording the graded (point-based) mastery descriptions as suggestions to help students improve. (Oosterhof, Conrad, & Ely, 2008)
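Continuing the hypothetical rubric from earlier, the sketch below shows how pre-planned feedback can assemble itself: every sub-maximal rating is worded in advance as a suggestion, so individualized responses cost the instructor nothing extra at grading time (the criteria and wording remain my own invention):

    # Pre-written suggestions keyed by (criterion, points earned).
    SUGGESTIONS = {
        ("Accuracy", 1):     "Double-check your facts against this week's readings.",
        ("Accuracy", 0):     "Review the core concepts before revising.",
        ("Completeness", 1): "Part of the prompt went unanswered; revisit each question.",
        ("Clarity", 0):      "Try one idea per paragraph, each with a topic sentence.",
    }

    def build_feedback(ratings):
        """Collect the pre-written suggestion for each criterion not at full marks."""
        return [SUGGESTIONS[(name, points)]
                for name, points in ratings.items()
                if (name, points) in SUGGESTIONS]

    for suggestion in build_feedback({"Accuracy": 2, "Completeness": 1, "Clarity": 0}):
        print("-", suggestion)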
A significant insight I gained from this course was how to use scoring statistics to understand student responses and, in turn, to design more effective assessments. I learned how to recognize when apparent response patterns are significant and when they are not, and how to use significant results to identify problem questions that may need revision. (Suskie, 2009)
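As a sketch of what such item analysis might look like, the Python below computes a difficulty index (the proportion answering correctly) and a simple discrimination index (do high scorers on the whole test beat low scorers on this item?); the flagging thresholds are common rules of thumb, not cutoffs taken from Suskie:

    def item_stats(item_scores, total_scores):
        """item_scores: 1/0 per student on one question; total_scores: whole-test totals."""
        n = len(item_scores)
        difficulty = sum(item_scores) / n

        # Compare the top and bottom thirds of students, ranked by total score.
        ranked = sorted(range(n), key=lambda i: total_scores[i])
        third = max(n // 3, 1)
        low = sum(item_scores[i] for i in ranked[:third]) / third
        high = sum(item_scores[i] for i in ranked[-third:]) / third
        return difficulty, high - low

    item = [1, 0, 1, 1, 0, 1, 0, 1, 1]             # one question, per student
    totals = [40, 22, 35, 38, 30, 28, 18, 42, 33]  # whole-test totals, per student
    difficulty, discrimination = item_stats(item, totals)
    print(f"difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")
    if not 0.3 <= difficulty <= 0.9 or discrimination < 0.2:
        print("Flag this question for review.")

A question nearly everyone misses, nearly everyone gets right, or that strong students miss more often than weak students is worth a second look before blaming the students.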

References:

Dick, W., Carey, L., & Carey, J. O. (2009). The systematic design of instruction (7th ed.). Upper Saddle River, NJ: Pearson.
Oosterhof, A., Conrad, R. M., & Ely, D. P. (2008). Assessing learners online. Upper Saddle River, NJ: Pearson Education.
Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco, CA: Jossey-Bass.

Wednesday, August 10, 2011

Inspiring Online Discussion


[Image: Student collaboration (Tapscott, 2009)]
While some topics may inherently inspire interest, most topics require a controversial "twist" to make the topic interesting and to make readers want to participate in the discussion. It is finding that controversy, or adding something new from personal experience, that makes discussion posts interesting and inspiring to read (Payne, 1985).

Consider how the topic of inspiring online discussion can be made interesting, and write a two- to three-paragraph response to this post that compares and contrasts two ideas.

The scoring rubric may be downloaded from this link. Review the rubric before writing a response. 


References:


Tapscott, D. (2009, October 14). Student collaboration [Photograph]. In wlibrary's photostream. Flickr. Retrieved from http://www.flickr.com/photos/99107397@N00/4011844231/in/photostream

Payne, L. V. (1985). The lively art of writing (5th ed.). Chicago, IL: Follett Educational Corporation.