Reflection 3.4 - Programming Assessment Examples

Instructional assessment has two main goals. We want to be able to determine whether students have learned what we intended so we can assign grades. We also want to use that information as feedback for us as teachers to determine whether any deficiencies exist in our instruction.

 

Please note that assessment provides data for assigning grades but is not the same as assigning grades.

 

The discussion below includes several examples of assessment methodologies that could be used in a programming class. They are not meant as the definitive ways to assess programming. Rather, the goal is to provide a somewhat novel approach to assessment that might spark your thinking about assessing programming.

 

End-of-Course Assessment

Often the goal of a programming course is that students be able to write a program. A reasonable assessment of that is to have them write a program. The program should be one that allows/requires students to use all the main elements of programming the course is intended to develop. It need not use all the detailed elements of programming.

If you use similar programs at the end of every course, you should be able to determine whether any changes you have made had the intended impact. On the other hand, it might be useful to have some additional assessment that looks more directly at specific programming concepts. One such instrument was developed with the express purpose of comparing student results over time or between courses/institutions. We used that "benchmark" instrument in our Fundamentals of Programming course, and it or some similar assessment could be used as a supplementary course assessment.

Often we do not wish to use a high-stakes evaluation such as a single final exam. In that case, something like the following might be useful.

 

 

Mastery Learning/Competency Demonstration

Most of us have heard of mastery learning. I prefer to think of it in terms of competency, since achieving "mastery" takes longer than a single course allows. Whichever you call it, it fits extremely well with programming.

The goal is that students demonstrate particular capabilities. One example is that students be able to produce solutions for simple IPO "problems". IPO programs require input from the user, some manipulation of the data, and some output. The sub-skills involved might include reading input from the user, performing numeric calculations and assignments, manipulating string data, and producing output.
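As a concrete illustration, here is a minimal sketch of what a solution to one such IPO problem might look like. I assume Python as the course language, and the particular task (computing a purchase total) is a hypothetical example, not one of the actual course problems.

    # Hypothetical IPO problem: ask for a quantity and a unit price,
    # compute the total cost, and report it.
    quantity = int(input("How many items? "))        # input
    unit_price = float(input("Price per item? "))    # input
    total = quantity * unit_price                    # processing
    print("Total cost:", total)                      # output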

One possibility for assessment in this case is a pencil-and-paper quiz with a couple of numeric "problems" involving calculations and assignment, a couple of string "problems" involving manipulation and assignment, and a couple of problems requiring input, processing, and output (IPO): one involving string data and one involving numeric data.
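To make the structure of such a quiz concrete, the non-IPO items might resemble the following. The specific tasks are hypothetical examples (again assuming Python), not the actual quiz items.

    # Hypothetical numeric item: compute and assign.
    # "Given hours worked and an hourly rate, compute the gross pay."
    hours = 38
    rate = 12.50
    gross_pay = hours * rate

    # Hypothetical string item: manipulate and assign.
    # "Given a first and last name, build the string 'last, first'."
    first = "Ada"
    last = "Lovelace"
    roster_entry = last + ", " + first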

The teacher can quickly grade such a quiz, perhaps using a rubric or checklist that includes each of the subgoals, and can make a professional judgment as to whether the student is competent with the desired content as a whole.

If the student passes (demonstrates competency), great. If not, another demonstration, perhaps after additional learning activity, can be administered. The next quiz/demo has the same structure as the first but different particular tasks. The process continues (in principle) until the student demonstrates competency.

While the process may, in principle, continue indefinitely, practicalities will likely come into play. The course will have an ending date which must be adhered to, or the student given an incomplete. Some competencies may be considered prerequisite to others and must be passed before later ones are attempted. (I used a rule that the IPO demo had to be passed before retaking any later demo, and that the end of the semester was the cutoff unless arrangements for an incomplete were explicitly made.) Individual teachers will need to make decisions appropriate to their context.

A similar process could be used for other programming sub-skills. I, for example, had other competency demos for conditional (Boolean) expressions and for selection (skills and "problems" concerning "if" statements), among other elements of the overall programming content.
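As an illustration of what a selection demo item might involve, consider the following sketch; the task (classifying a score as passing or failing) is a hypothetical example.

    # Hypothetical selection item: report whether a score is passing.
    score = int(input("Enter the exam score: "))
    if score >= 60:          # Boolean/conditional expression
        print("Pass")
    else:
        print("Fail")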

 

 

In-person Grading (Oral Assessment)

Dr. East and I used to have long discussions about how we graded programming assignments.

The one thing we both disliked most about grading homework (for programming classes) was the feedback mechanism. Even if I were to get the grading done quickly, I almost always felt that students mostly looked at the score I gave and nothing else. It seems to me that talking directly with students and telling them your reaction to what you saw is a better approach.

On several occasions I have used in-person grading to discuss student submissions. It takes more time than writing something on their papers, but it feels better (to me at least) and seems to be well received by the students. In a "normal" sized course you will likely only have time to meet with one-third to one-half of the students on each activity, but the feedback will likely be better and better received.

 

Code Walkthroughs

Code walkthroughs are not really/typically an assessment activity. However, they can be very good mechanisms for providing feedback to students about programs and programming.

Sometimes you simply bring up a student's solution to your assignment and, as a group, discuss what is good and what could be improved. If you do it well, nobody's feelings get hurt when you comment on how their programs could be improved, and they actually learn a lot about their code.

But you might start with made-up examples based on programs the students produced. I oftentimes grab two or three student programs and merge them into a single program to use in class. Students will be able to recognize elements of their own work, but no student would really think the program was theirs and feel embarrassed.

You would likely want to have several preplanned topics about the program for students to consider as they examine it.
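For example, a composite walkthrough program and some preplanned discussion prompts might look like the sketch below. The program is a made-up composite in the spirit described above, not an actual student submission.

    # Possible discussion prompts for this program:
    #   - Are the variable names descriptive?
    #   - Should the input be converted to numbers when it is read?
    #   - Does the output clearly label the result?
    x = input("Enter first number: ")
    y = input("Enter second number: ")
    total = int(x) + int(y)
    print(total)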

As students get used to code walkthroughs, they will usually start looking forward to them and volunteer their work. We all would like to perform better and this provides a good opportunity.

As noted earlier, code walkthroughs are not usually used for assessment. However, they might provide an alternative form of feedback that would allow you to use some assessment that you would otherwise not use due to the lack of a feedback mechanism.

 

[Note: we tried to include an element of code walkthroughs in FOP, but it just doesn't translate well to online courses.]