Initial Assessment – What is it good for?
At the beginning of each year, before lessons begin, we ferry our students in and out of the computer rooms to complete the initial assessment (IA). This is standard college procedure and, as with other college procedures, a business has grown around it. The tests usually run on one of two (expensive) software platforms and ostensibly assess students’ prior learning so that we can better attend to their specific educational needs. In fact, the one we are using this year even includes a “learning styles” quiz. That is not the only problem. It’s the first week of college, and many students are still confused about where they are meant to be. They have had bad experiences in English and maths in the past. They are unfamiliar with the website. The internet connection sometimes cuts out and tests crash halfway through. Passwords don’t work and have to be reset. And when they finally get logged in, they are not in a frame of mind to make much of an effort.
The tests must range across all ability levels, so questions begin at the most basic and gradually increase in difficulty. When a student repeatedly answers incorrectly, the program stops and a level is assigned. As soon as one student realises you can finish the test early by getting questions wrong, word spreads and a sudden, mass onset of dyscalculia sweeps the college (the sketch below shows why this shortcut works).

All this adds up to a completely unreliable result. Last year, students came to us with a grade D at GCSE only to be “assessed” as entry level 3. In the end, we stream students (as best we can) according to prior attainment in actual exams. Sometimes a student has dropped out completely and there are no data; here we assume the worst and move them up quickly if needs be. Of course, teachers do need to assess their students’ starting points, but this comes later, through written exams in a calm environment, once classes are settled. The data collected by the initial assessments sit in a folder on the shared drive, never to be used.
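Here is a minimal sketch, in Python, of the adaptive stopping rule described above. The names and thresholds are entirely hypothetical (the real platforms are proprietary), so treat this as an illustration of the general shape, not their actual logic:

    def run_adaptive_test(answer_question, max_level=10, strikes_allowed=3):
        """Step up through difficulty levels until the student
        repeatedly answers incorrectly, then assign a level.

        answer_question(level) -> bool: True if answered correctly.
        """
        level, strikes = 1, 0
        while level <= max_level and strikes < strikes_allowed:
            if answer_question(level):
                strikes = 0   # a correct answer resets the strike count
                level += 1    # and moves the student up a level
            else:
                strikes += 1  # repeated wrong answers end the test
        return max(level - 1, 1)

    # A student who deliberately answers everything wrong is released
    # after just `strikes_allowed` questions, graded at the lowest level.
    print(run_adaptive_test(lambda level: False))  # 1

Under any rule of this shape, a handful of deliberate wrong answers ends the test almost immediately and returns the bottom level, which is exactly what the sudden epidemic of dyscalculia exploits.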
Why bother with the IA then? The answer lies in the conditions of funding. The government requires every young learner who has not yet passed English and maths to continue studying them. I agree with that. They also want colleges to make progress with these students, not just process them through an easy functional skills course when they could be doing GCSE. This seems reasonable. They don’t, however, trust colleges to make a sensible judgement about which course a student should be on, hence the requirement to sit a standard IA as evidence.
There are no paper tests that range from entry level right up to higher GCSE. If we wanted to use paper tests as a basis for streaming and choosing qualification levels, we would have to give each student several different papers, all of which would need to be marked before the start of teaching. Some colleges have over 1,000 students sitting English and maths. It is simply not possible. And that leads us to the situation described at the beginning: in order to meet the requirement, colleges pay hefty subscription fees for a piece of software they will use precisely once a year and derive absolutely no benefit from.

But it ticks the box. If Ofsted ask for our IA results, we can show them. The results always demonstrate that students are studying at a higher level than when they came to us, because the initial assessments invariably return terrible scores. So, the system works. Students spend an hour logging into a website and absentmindedly clicking “next question” until they are released to the canteen. Lecturers pace up and down computer rooms muttering profanities under their breath and going red in the face as they wait for tech support. Ofsted report that all students in the college are starting at the appropriate level. And the software companies who make these platforms pass Go and collect £200 (or rather, several thousand).
In its fear of feckless lecturers wasting able students’ time, the DfE imposes a rule that costs thousands of pounds and wastes everybody’s first week of lessons. Even if we grant that the fear of “under-teaching” is reasonable, these tests do not prevent it: they place students far below their actual ability, so trusting the results would make under-teaching more likely, not less. Personally, I think it’s high time we threw them out, but who knows, perhaps there are professionals out there who have had a more positive experience. If so, I would love to hear about it!
Next on my list of thoughts to set down in writing – where I think tech does have a role to play in FE.