Gifted Bulletin Board

Welcome to the Gifted Issues Discussion Forum.

We invite you to share your experiences and to post information about advocacy, research and other gifted education issues on this free public discussion forum.


    ColinsMum (Member)
    Joined: Sep 2008
    Posts: 1,898
    Why does anyone think multiple choice assessments are automatically bad for maths assessment? I don't agree, though of course there can be bad ones. There are things they can't test, but that shouldn't be a concern if you think your child has mastery, surely? In English etc. I get the over-thinking trap, though even there I'm not as convinced as some... But in maths?


    Email: my username, followed by 2, at google's mail
    Val (Member)
    Joined: Sep 2007
    Posts: 3,298
    Likes: 1
    Originally Posted by ColinsMum
    Why does anyone think multiple choice assessments are automatically bad for maths assessment? I don't agree, though of course there can be bad ones. There are things they can't test, but that shouldn't be a concern if you think your child has mastery, surely? In English etc. I get the over-thinking trap, though even there I'm not as convinced as some... But in maths?

    IMO, the problems with multiple choice assessments are as follows:

    1. Points are allotted based solely on the answer chosen. Students hand in scannable forms or click SUBMIT, making it unlikely (or impossible) that the teacher will look at the student's work and assess her strengths and weaknesses. As a result, the teacher has no way of knowing whether a student got an answer wrong because of a minor mistake or because she really had no idea how to approach the problem and just guessed.

    2. Multiple choice (MC) exams test superficial knowledge. This is because they provide a lot of questions that have to be answered quickly (e.g. the SAT mathematics test gives 70 minutes to answer 54 questions, or roughly 78 seconds per question). A test designed this way clearly isn't examining a student's ability to think carefully and put ideas together in more than a superficial way.

    Compare with this GCE exam from the UK: 8 questions, 90 minutes.

    BTW, if you think the SAT questions are a joke, check out the California high school exit exam. Look at question 41!

    True, they could easily turn that GCE paper into a multiple choice exam with only 8 answer bubbles, but then a student would lose 1/8 of all possible points if he made a minor sign mistake at the end of a problem --- and a student who doesn't really understand all the maths could narrow the answers down to two possibilities and guess correctly.

    3. Multiple choice tests use tricky wording as a poor substitute for questions like the ones in the GCE exam above. This is a serious problem in the humanities, but it affects maths and science exams as well. It cuts both ways: questions can be written in a way that makes an answer guessable (see question 42 in the California exam), and questions can be written in a way that tries to trick the student (common on the SAT).

    You can argue that point 3 just makes for a bad MC exam, but I would argue that the NCLB/everyone-must-go-to-college environment in the US makes it virtually impossible to write an MC exam that doesn't suffer from the first problem (writing questions so that students can guess the answers: OMG!! They all have to pass!!! eek eek eek ). And the nature of the MC format is such that tricky wording is an unavoidable outcome on exams like the SAT. This particular problem actually goes a bit deeper, but I'll stop here.

    All of these attributes teach students that mathematics is all about clever wording and getting answers quickly; they don't teach them how to think slowly and carefully. The format also rewards ONLY students who think quickly and punishes students who can work out answers on their own, just not quickly.

    Last edited by Val; 10/14/13 11:58 AM. Reason: Clarity
    ColinsMum (Member)
    Joined: Sep 2008
    Posts: 1,898
    Thanks for spelling it out. As I thought: all those criticisms are criticisms of poorly written MCQ tests, not things inherent to the format.

    Of course, if many of you have only ever encountered poor MCQ tests, it's entirely reasonable to be suspicious when one is proposed! I will only say: you haven't sat mine :-) (Which were hard work to set, which is why I'm not using the format just now, I confess.)


    Email: my username, followed by 2, at google's mail
    puffin (Member)
    Joined: Dec 2012
    Posts: 2,035
    What is the point of spending years and years penalising a child for not showing his work and then giving a test in which the working is completely irrelevant? Multi-choice isn't much used here, but I have had a couple of good ones. To me, though, the purpose of a test is to see what I have actually learnt, not my ability to choose correctly between four options without actually having to solve the problem.

    However, my all-time worst-test award goes to unit standards as practised by NZ polytechnics. An answer is only judged correct if it contains certain key words or phrases. The only way you can get full marks is to quote the text verbatim (in an assignment, literally to copy it). I mean, how does this equal learning?

    Last edited by puffin; 10/14/13 02:21 PM.
    Val (Member)
    Joined: Sep 2007
    Posts: 3,298
    Likes: 1
    Originally Posted by puffin
    What is the point of spending years and years penalising a child for not showing his work and then giving a test in which the working is completely irrelevant?

    Oh, good one. +1.

    Val (Member)
    Joined: Sep 2007
    Posts: 3,298
    Likes: 1
    I'd be interested in seeing examples of good MC questions that don't have the problems I noted above; I've seen so many bad ones, the contrast would be good to see.

    22B (Member)
    Joined: Feb 2013
    Posts: 1,228
    Originally Posted by Val
    I'd be interested in seeing examples of good MC questions that don't have the problems I noted above; I've seen so many bad ones, the contrast would be good to see.

    http://www.artofproblemsolving.com/Wiki/index.php/AMC_8_Problems_and_Solutions

    Member
    Joined: Feb 2011
    Posts: 5,181
    This is the reason that I refused to use MC questions:

    Originally Posted by Val
    1. Points are allotted based solely on the answer chosen. Students hand in scannable forms or click SUBMIT, making it unlikely (or impossible) that the teacher will look at the student's work and assess her strengths and weaknesses. As a result, the teacher has no way of knowing whether a student got an answer wrong because of a minor mistake or because she really had no idea how to approach the problem and just guessed.

    Inherently, there IS no way to entirely eliminate this particular problem, which is fairly severe in STEM.

    It's so binary/outcome-based that the process is (mostly) ignored. That is the format.

    I'll add to Val's critique that MC is the format that allows students to earn credit via "guess-and-check" rather than understanding-- and a student who has poor computational accuracy may well score about as well as a speedy guess-and-checker. I think that those two hypothetical students should NOT be earning the same grade on an assessment of their understanding of that material.
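
    As a rough back-of-the-envelope illustration of how those two hypothetical students can land on about the same mark under all-or-nothing MC scoring, here is a tiny sketch (Python). The slip rate, the guess-and-check success rate, and the five-option format are made-up assumptions purely for illustration, not figures from any real exam:

        # Invented expected-score comparison under all-or-nothing MC scoring.
        n_questions = 20

        # Student A: understands the material and sets up every problem correctly,
        # but slips on arithmetic/sign in some fraction of questions. Each slip
        # costs the whole question.
        slip_rate = 0.25                  # assumed rate, purely illustrative
        expected_a = n_questions * (1 - slip_rate)

        # Student B: shallow understanding, but "guess-and-checks" by plugging the
        # five offered options back into the problem. Assume that works on most
        # questions; the rest are a blind guess among the five options.
        check_works = 0.70                # assumed rate, purely illustrative
        expected_b = n_questions * (check_works + (1 - check_works) / 5)

        print(f"careful but slip-prone student: expected {expected_a:.1f} / {n_questions}")
        print(f"speedy guess-and-checker:       expected {expected_b:.1f} / {n_questions}")

    With those made-up rates the two come out at 15.0 and 15.2 out of 20: exactly the kind of near-tie being objected to.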

    I love short-answer. I hate MC.

    While short-answer questions cannot be readily autograded the way MC can, they ARE fairly quick to grade. I managed to do it routinely for 200-300 students a semester, and while I'm quick, I don't think that I'm that much faster at grading or constructing assessments than most faculty.

    Anyway. MC and Common Core seem to be the very worst sort of mismatch imaginable-- after all, the ideals behind CC indicate an emphasis on process and a move AWAY from "accuracy" in that process...

    meaning that fundamentally, there is a lack of alignment with the assessments if they are multiple choice. {sigh}

    IMO, even good MC questions are mostly not designed with students in mind, but with high THROUGHPUT in mind.






    Schrödinger's cat walks into a bar. And doesn't.
    Member
    Joined: Mar 2013
    Posts: 690
    Neat link. Thanks!

    ColinsMum (Member)
    Joined: Sep 2008
    Posts: 1,898
    Originally Posted by Val
    I'd be interested in seeing examples of good MC questions that don't have the problems I noted above; I've seen so many bad ones, the contrast would be good to see.
    It would be too revealing to post mine, but let me say a bit about how I used to go about setting a multiple choice exam.

    This was for a middle-years university course in a technical subject (both mathematical and human context aspects). (Not the whole of the assessment of the course, but a good chunk of it.) It was a situation in which we didn't mind the course marks having a fairly low ceiling - every year there'd be a few students who got all the MCQs right, and that was fine. Since it was the final exam, and for a class of 200 or so students, there was no expectation that it was going to give me an understanding of any individual student's progress. The MCQ answer sheets were optically read, and then I used to get not only the individual reports of what each student put and what their mark was, but also a very thorough statistical report of what happened in each question - more on that below.

    - Time allowed: think of a number and double it. We used to allow an hour for 20 MCQs, of which the majority "ought" to take no more than reading time plus a few seconds of thought for a student who had mastered the material. (Dyslexic students automatically get extra time on top of that, of course - but the feedback I got was that the time we allowed was, in practice, enough for everyone, and the report validated this, see below.) I had no interest in assessing students' processing speed.

    - Know what each question is trying to determine. (Sounds trivial, right? :-)

    - Know what common student misconceptions are (this is the hard part!)

    - Design the distractors so that a student with one of the common misconceptions has a wrong answer to plump for (it looked to me as though the AMC paper 22B posted didn't do this that well, or maybe I just didn't think of the right misconceptions in my quick look at that!)

    - Make sure meta-reasoning, MCQ exam technique, doesn't easily give the answer. E.g. if you see answers 3, 30, 300, 0.3, 6, it's usually a fair bet that the answer isn't 6. Don't do that ;-)

    - Going up an abstraction level, think about what the student needs to know/be able to do to pick out the right answer from the ones you're offering - not just about what they'd need if they had to come up with it given a blank sheet of paper. That is, watch out for whether guess-and-check will work, as HK said.

    - Try out the paper on a teaching assistant or colleague and if they are confused by any question, fix it!

    - Use the statistical report, both for checking for errors in this paper and for improving the next one. The report told me, for example, what proportion of students chose which answer, and what proportion left the question blank. This immediately lets me check whether there was a tendency for people to run out of time - were the last questions less well or less often done than the first ones? Obviously you look carefully at the cases where a larger than usual fraction of students got a question "wrong" - was it just hard, or is there a problem with the question?

    Even more usefully, these reports used to summarise *which* students got a question wrong - that is, they would tell me what proportion of the students in each band (I forget, say quintile), by overall mark on the exam, got *this* question right. What you expect is that this proportion is highest for the highest quintile and lowest for the lowest. If it's pretty even across the quintiles, the question isn't doing a good job of discriminating - maybe it's too easy, or maybe it's somehow a bad question, e.g. one that relies on something the course isn't supposed to be testing. If the pattern is wrong, e.g. the best students tend to get it wrong more often than less good students do, it's likely there's another way to interpret the question that you haven't seen, or even that the "right" answer is wrong (or not the only arguable answer)! (Of course, another possibility is that the exam is testing several things, performance on which is not well correlated, but that wasn't the case for me.)

    This was very useful information for honing my skills in writing these questions (and it happened a couple of times that we had to disregard a question).
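
    To make the kind of report described above concrete, here is a minimal sketch (Python) of the per-question breakdown: option shares, blank rate, and proportion correct within each quintile band by overall mark. The simulated answer data, the variable names, and the exact banding are my own assumptions for illustration; they are not the format of the actual optical-reader report.

        # Minimal sketch of MCQ item analysis: option shares, blank rate, and
        # proportion correct per quintile of overall mark. Data is synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        n_students, n_questions, n_options = 200, 20, 5

        key = rng.integers(0, n_options, size=n_questions)                     # correct option per question
        answers = rng.integers(-1, n_options, size=(n_students, n_questions))  # -1 = left blank

        correct = answers == key                   # students x questions, True where right
        totals = correct.sum(axis=1)               # overall mark per student
        cuts = np.quantile(totals, [0.2, 0.4, 0.6, 0.8])
        band = np.searchsorted(cuts, totals, side="right")   # 0 = lowest quintile, 4 = highest

        for q in range(n_questions):
            chosen = answers[:, q]
            option_share = [round((chosen == opt).mean(), 2) for opt in range(n_options)]
            blank_share = (chosen == -1).mean()
            # Proportion correct in each band; this should rise from band 0 to
            # band 4 if the question discriminates well.
            by_band = [round(correct[band == b, q].mean(), 2) if (band == b).any() else None
                       for b in range(5)]
            print(f"Q{q + 1:2d} blank={blank_share:.2f} options={option_share} by_quintile={by_band}")

    A flat by_quintile row flags a question that isn't discriminating; a row where the top band does worse than the lower bands flags a question (or a "right" answer) worth re-examining - exactly the checks described above.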

    Anyway, we used to use this alongside some more conventional questions, but tbh, if I'd had to keep only one of those two elements, I feel the MCQ exam was the one that did a better job of reporting on students' attainment in this case. Setting the exam was a pretty skilled job, though.


    Email: my username, followed by 2, at google's mail