Assessment

When the math group published results of ENLVM, how did they assess their success? Which articles should I read to see what metrics you applied? Vicki: Joel gave me the links to many articles. Some of the links are bad. In the articles I read, it seemed like there was much more emphasis on qualitative data than quantitative. I never saw anything like, "Students score x% better using this system or are y% more likely to take another math class."

What I did see was lots of surveys to measure opinion and practice - observers going into the classroom to interview the students. They would observe whether the students were learning the higher-level concepts and were excited about using the tools. They would observe proficiency by saying things like, "At the beginning of the class session, students clicked on the arrow key slowly until the lines evenly divided the region. As they became more rehearsed in the activity, they seemed to click quickly until they reached a common multiple. Students also appeared to use their knowledge of multiples to anticipate when to stop clicking."

My impression is that we need qualitative studies initially before we start to get hard data. We need to know if the tools are usable and if teachers like them. Eventually more precise data would be nice - but I would think we need some things in place first: a. teachers' commitment to use them b. ILMs that are in final form

Also, Joel, is there a way of timing how long each student spends on each activity or lesson? We would likely want to know how the time spent correlates with correct answers and student skill level.
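If CSILM can log start/finish events per student per activity, the timing question above is straightforward to answer. The sketch below is illustrative only - the class and event names are assumptions, not an existing CSILM API - and pairs accumulated times with scores via a plain Pearson correlation.

```python
# Hypothetical sketch: accumulate time spent per (student, activity) from
# start/finish events, then correlate time with scores or skill level.
from datetime import datetime
from collections import defaultdict

class ActivityTimer:
    """Tracks total seconds each student spends on each activity."""
    def __init__(self):
        self._open = {}                    # (student, activity) -> start time
        self.seconds = defaultdict(float)  # (student, activity) -> total seconds

    def start(self, student, activity, when):
        self._open[(student, activity)] = when

    def finish(self, student, activity, when):
        started = self._open.pop((student, activity), None)
        if started is not None:
            self.seconds[(student, activity)] += (when - started).total_seconds()

def pearson_r(xs, ys):
    """Correlation between, e.g., minutes spent and points earned."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```

The timer handles a student leaving and re-entering an activity by summing the intervals; the correlation then runs over paired lists of times and scores.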


 * Vicki: I think assessment is the critical missing link – so that we can publish results. See if there is anything you would have Milan do differently.

I think there are two different kinds of assessment:

1. How can teachers easily grade the results and students get feedback?
2. How can we tell if ILMs are better than traditional approaches?

Don: I'm not sure we can ever do this, because it may be hard to pin down what the traditional approaches are. It may be that we ask three questions: Did they enjoy using the ILM? Were student scores on normal class assessment tools (e.g., tests and homework) better or worse for those who used the ILMs? Were students who used the ILMs more efficient in terms of study time or some other metric on the material?

It seems like the only way to do this is with two classes, one using ILMs and one not.

The first is a convenience for the teachers BUT may also be part of our ILM assessment as we could tell what kinds of questions the students excel at because (?) of the ILM experience. The second is really what is important for us.

FIRST POINT This summer, the teachers seemed motivated to find a solution for the first point. I just don’t think we have a system that is going to satisfy them. Now, the grading is pretty much manual. We provide them with a spreadsheet of answers, but the grading of fields is manual. (Joel, am I right?) Plus, we don’t have a natural system for the activities they did in the ILM to make their way to a gradeable answer. For example, we have them color a graph in minimal colors – but the evaluation is for them to report, “How many colors did you use?” We don’t just look at the applet and know how many colors they used. We could do that by passing data into and out of the applet, but now it is pretty much separate: they do the activity and learn good things and then we ask them to respond to questions to assess their learning.
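To make the graph-coloring example concrete: if the applet exported each student's final color assignment, the "how many colors did you use?" question could be graded directly instead of asked. A minimal sketch, assuming a simple edge-list plus node-to-color mapping (not an existing applet data format):

```python
# Illustrative auto-grading of the coloring activity, assuming the applet
# can export the student's final coloring as node -> color.
def grade_coloring(edges, coloring):
    """Return (is_proper, colors_used) for a graph coloring.

    edges: iterable of (u, v) node pairs; coloring: dict node -> color.
    A proper coloring gives adjacent nodes different colors.
    """
    proper = all(coloring[u] != coloring[v] for u, v in edges)
    return proper, len(set(coloring.values()))
```

This both verifies the coloring is valid and counts the colors, so the gradeable answer falls out of the activity itself rather than a follow-up question.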

It almost seems like the ILMs are PRIMARILY for learning, not for assessment of knowledge. We likely could do a better job of providing feedback via the applet itself giving hints or explaining why an answer is incorrect, but the more feedback we give, the less it can be used for assessment (directly). Am I right? If students are given no or little feedback, we can tell how well they learned, but we didn’t help them learn. If we give them lots of help along the way, everyone is going to get the correct answer and we know nothing about what they really learned.

One thing I did think of was letting INETest help us. INETest is nice for grading, as it grades multiple choice and numeric answers, tallies points, shows the teacher what he/she needs to grade, and allows the teacher to be prompted with the right answer (for text questions). It also does things such as show you which questions were missed the most frequently, and which were nearly always answered correctly. We hoped to get such support from UTIPS – but I don’t think that will be fast in coming, PLUS UTIPS really is just an online test-taking tool. It isn’t even a gradebook. Since it is a Utah thing, it may not work internationally.

Don: Is it possible to give INETest a file of student answers and the key and then be able to use INETest as though the answers were provided by students online? An easier approach would be to have a link to INETest which students click on from CSILM. Then INETest takes over the whole screen and asks questions about their activity. We could even divide it into chunks so they enter and leave INETest after each activity is complete. I set up Sushi on INETest yesterday. It will take some time for him to get up to speed. Right now he is going to learn about INETest by doing a few bug fixes. He will then go to work on expanding the programming question. Giving INETest a file of student answers and the key is probably more simply done as a separate piece of code. I think the real way to go is an n-step process:

1. Get Sushi up to speed on INETest.
2. Let Sushi expand the programming question in the current INETest system so it has the functionality we want.
3. Create a new/modified INETest so that it can interface to CSILM with the programming questions. a. We have to be careful in doing this so that it is still the old INETest with some functionality added. That way, whenever the old INETest is upgraded or a bug is fixed, it is also fixed for CSILM, since it is still the same INETest.
4. Expand on this interface to CSILM so that all of the concept questions in INETest are accessible in CSILM. a. When an instructor creates a CSILM, they create a single test broken into sections in INETest. These sections have questions that assess a particular part of the ILM. As a student completes the associated section in the ILM, they click on a link to INETest, that section of the test is brought up, and the student takes it and exits.
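The "separate piece of code" for grading a file of answers against a key could be very small. The sketch below assumes a simple CSV layout (question_id,answer for the key; student_id,question_id,answer for responses) - that format is an assumption for illustration, not an existing INETest or CSILM file format:

```python
# Hypothetical standalone grader: compare a file of student answers to a
# key file. Both file layouts are assumed for this sketch.
import csv

def load_key(path):
    """Key file rows: question_id,correct_answer."""
    with open(path, newline="") as f:
        return {qid: ans.strip().lower() for qid, ans in csv.reader(f)}

def grade_answers(path, key):
    """Answer file rows: student_id,question_id,answer.
    Returns {student_id: points}, one point per exact (case-insensitive) match."""
    scores = {}
    with open(path, newline="") as f:
        for student, qid, ans in csv.reader(f):
            scores.setdefault(student, 0)
            if key.get(qid) == ans.strip().lower():
                scores[student] += 1
    return scores
```

Something this shape could run offline on the spreadsheet exports we already give teachers, independent of whether the INETest interface work happens.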

SECOND POINT Any ideas you have on telling whether ILMs are a good learning experience are appreciated. We need the teachers (and us) who use ILMs to report in some meaningful way how well they worked. We talked about putting a page at the end of every lesson which asked for feedback, but Joel asked, “Are you wanting student feedback or teacher feedback? If it is teacher feedback, you don’t want the students to see it.” I think he is right. I’m not sure exactly what to do here. If every ILM asks the student for feedback, they will quickly learn to ignore that page. Plus, any results we did get would be biased, as not everyone would fill in the survey. Ideas? Does the feedback have to come at the end of a lesson? Maybe it could be an entirely separate module that is called as a lesson is started; if the previous lesson has been completed, the student is asked to fill out the survey before being allowed to proceed.
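The gating idea at the end of that paragraph can be stated as a small check at lesson start. This is only a sketch of the rule - the lesson ordering and the completed/surveyed sets stand in for whatever state CSILM actually stores per student:

```python
# Sketch of the survey gate: when a student opens a lesson, require the
# survey for the previous lesson if it was completed but not yet surveyed.
# The data structures here are assumptions, not CSILM's real state.
def survey_required(lesson_order, starting, completed, surveyed):
    """Return the lesson whose survey must be filled out first, or None.

    lesson_order: lesson ids in sequence; starting: lesson being opened;
    completed/surveyed: sets of lesson ids for the current student.
    """
    i = lesson_order.index(starting)
    if i == 0:
        return None
    prev = lesson_order[i - 1]
    if prev in completed and prev not in surveyed:
        return prev
    return None
```

Because the gate blocks progress, response rates should be near 100%, which addresses the bias concern about voluntary surveys - at the cost of a small interruption at the start of each lesson.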

My current attitude is, "Let's try to be as rigorous as possible in gathering statistics, but in the end, it might be our gut feelings that are most valuable."

It isn't always easy to determine if students learned better with technique A or technique B. We all think of the statistics experiments to determine if fertilizer worked better than no fertilizer on similar fields of corn. Those are great tests - but I'm guessing we won't have such defensible results.
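For what it's worth, if we ever do run the two-class comparison (one class with ILMs, one without), the fertilizer-style analysis is a standard two-sample test. A minimal sketch of Welch's t statistic on the two classes' scores - turning t into a p-value needs the t distribution (e.g., from scipy), which is left out here:

```python
# Sketch of the two-class comparison: Welch's t statistic for an ILM class
# versus a control class on the same assessment. Score lists are illustrative.
from statistics import mean, variance

def welch_t(ilm_scores, control_scores):
    """Positive t favors the ILM class; magnitude measures separation
    relative to the sampling noise of both groups."""
    n1, n2 = len(ilm_scores), len(control_scores)
    v1, v2 = variance(ilm_scores), variance(control_scores)
    return (mean(ilm_scores) - mean(control_scores)) / (v1 / n1 + v2 / n2) ** 0.5
```

Even with this in hand, the teacher-reported caveats below still apply: a clean statistic does not control for differences between the two classrooms.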

I am suspicious of many results given for instructional models. For example, I've heard researchers claim that "laboratory experiences" (where students are given a problem to solve with no explanation and they figure it out) are great, as students learn it so much better and retain it longer. But then, in talking to the teachers, I learn: "Yeah, but it is pretty frustrating for some learning types and can take a lot more student time. Don't try it unless you have GOOD teaching assistant support. You have a whole classroom where students are in various stages of learning. They need a lot of teacher time. If you are one teacher with 40 students, do NOT even think about it."