Starting the Year With Comprehensible Input

We’re about nine weeks into my Year 8 beginner Japanese course, and I figured some of you might be curious how things are going with Comprehensible Input. How do students actually fare when you throw out vocab lists, drills, rote memorization and explicitly taught content and structures? What happens when you rely solely on stories, visuals, language games and meaningful interaction?

Before we get started, if you’re curious about what I do, I think I’ve been pretty transparent up until now. You can find all my programs here, you can see the activities I play here and the stories I tell too. So with that all said and done, let’s take a quick look at how my students have been doing.


Week 4: Early Days, Modest Gains

Four weeks in, I started collecting vocabulary data - not through a formal quiz, but through various Gimkit games. In all honesty, I’ve always found my students do worst at these sorts of assessments, as they’re not exactly taught with them in mind. Anyway, students were given 10 minutes for each quiz, every two weeks.

At Week 4, we’d used well over 30 words and structures across different contexts. But part of the idea with this sort of approach is to ‘shelter vocabulary’ while pushing grammatical structures (although the quizzes didn’t include everything I’d introduced). So in Week 4, students collectively answered 428 vocab-related questions, with 62.6% accuracy. Not stunning, but not terrible either - especially considering they were completely blindsided by this activity and had never once been asked to memorize any of these words. These words weren’t taught; they just showed up in stories, conversations, games, drawings, and daily class routines.

Week 4 was just a hint of things to come. That was the number of questions students answered in a single 10-minute Gimkit game, and it would only grow, along with their accuracy.

Week 6: Natural Reinforcement Takes Hold

Two weeks later, I repeated the same kind of check, with around 48 words this time. I waited until the last possible moment in the week too - I think it was about 2:50pm, last period of the day, and everyone was more than happy to play a game of Gimkit. Accuracy rose from 62.6% to 70.5%, even though the total number of questions answered was slightly lower that week (around 340), probably due to lesson timing.

It should be made clear at this point, though, that students had never been told to “study” these words. They don’t write them down in their books, I don’t use flashcards, and we never go over them. They had just heard them - again and again - in different contexts, over time. I get bored very quickly in class too, so we don’t always do the same games and activities.

In case you’re curious, this class isn’t streamed - it’s mixed ability - and we live in a rural, lower socio-economic area. I typically see them twice a week too.

62.6% accuracy in Week 4.

70.5% accuracy in Week 6.

83.1% accuracy by Week 8!

Week 8: It All Comes Together

By the end of Week 8, it was clear that something had shifted. Students answered over 1,200 vocabulary-related questions, with an overall accuracy of 83.1%. That’s nearly three times the question volume of Week 4, with a 20-point gain in accuracy. Some of the biggest improvements were in words that students struggled with early on. For example:

  • “sai” (age) started at 25% and rose to 88%

  • “ookii” (big) jumped from 31% to 89%

  • “suki” (like) climbed from 42% to 95%

I think these numbers show a broader shift in comprehension, a sign that students were truly acquiring these pieces of language, not memorizing for a test.

What About the Students Themselves?

Looking at individual progress made this even more obvious.

Student A started at 43% in Week 4. By Week 8, they had reached 74% - and they answered nearly double the questions by then.

Student B improved from 52% to 73%, while also increasing their response volume from 33 to 70.

Student C was nearly invisible in Week 6, answering just 10 questions. But by Week 8, they responded to 118 prompts and got 95% of them right.

They didn’t have sets of words to study at home and they weren’t being taught explicitly. By the way, in case you’re wondering, I have 26 students in this class and this is just a sample. See below for a heatmap which better illustrates whole class results.

Acquisition Looks Different

I think this might be a reasonably accurate picture of what language acquisition looks like in practice. Learning isn’t immediately obvious, and you can see that it goes up and down inexplicably. I predict that when I update the word list for my next quiz, the results will likely go down again. It can be hard to assess students who don’t learn through traditional methods, and I also know that these quizzes aren’t anywhere near what my students are capable of.

I’ll sometimes have students tell me that they haven’t ‘learned’ anything in my classes. I think this would be pretty relatable for most teachers, regardless of how you teach lol… But surely even the most stubborn and resistant student couldn’t deny this level of growth - actually, who am I kidding, of course they could. I don’t think CI and language acquisition are about short-term test performance; they’re about long-term understanding and retention. I’m sure we all tell our students that different learners pick things up at different rates, and that’s pretty evident here.

In a traditional setting, a student might memorize a certain number of words in order to complete a unit (e.g. who is in your family, or school subjects). They learn them, ace a test and then forget them (granted, many don’t). In CI, those same words might take longer to surface, but when they do, they’re often remembered because they were understood, not recited.

I fed all the Gimkit data into ChatGPT and asked it to generate a heatmap of how students were doing with each word across the weeks. As you’ll see, progress wasn’t always linear. Sometimes they moved forward. Sometimes they moved backward. That’s acquisition. It’s messy. But it builds.

I should point out that this isn’t anywhere near all the words and structures we covered by Week 8 - these are just some words I added to the quiz at particular points. I got a bit lazy and forgot to add new ones to the Week 8 quiz.

Final Thoughts

Some would argue that there’s still explicit teaching here - after all, I choose what language appears in class, and I repeat vocabulary and structures intentionally. That’s fair. I’m not here to debate definitions. What I’m doing is planned, purposeful, and focused on language growth; I just don’t do it through front-loading vocab lists and grammar lectures. I let language be experienced, not explained. An example of this: can you believe that we’re 8-9 weeks in, and a student in one of my classes just asked, for the first time, exactly what ‘wa’ does in a sentence? Up until now they’d just been using it and accepting it as part of the sentence, but in this moment I did a ‘pop up’ explanation and told him explicitly how it functioned.

There are times when I teach explicitly, e.g. the introduction of Hiragana and Katakana (sometimes), grammar pop ups, and some core grammatical concepts like te-form and plain form. I suppose the point I’m trying to make is that there are arguments for not putting just one tool in your belt, like CI. No single teaching method will ever be perfect in and of itself. Every approach has its strengths, and I see part of our job as educators as staying informed and flexible. That said, with the current push toward explicit teaching in education, I think it’s only fair to show what the other side looks like - not just in theory, but in practice.

I actually have some other interesting evidence to share, but I’m still assembling it. Most notably, I’ve been having my students do a ‘brain dump’ every month so I can measure their development across the year. Students also write an English summary after each story we hear, but some have been attempting these in Japanese, so those would be interesting to look at too. Here is a photo I took of one student’s work that really got my attention - I found it interesting that they had begun using structures I was only starting to ‘hint’ at in class: quoted speech, adverbs, the particle ‘mo’ and incorporating verbs into sentences. They obviously still have a few things to ‘pick up’, but I think this is pretty damn great for about Week 6? I think. And with enough exposure to correct grammar usage, like when to use ‘imasu’ and when to use ‘arimasu’, I think it’s safe to say they will self-correct.

This was a Beginning Middle and End (BME) summary of the 3rd story from my book, The Story Pit.

In conclusion, I’m not trying to redefine anyone’s method, just sharing what happened when I leaned into story-first, meaning-first input - and what the data showed me along the way. And to be honest, a lot of my teaching choices come down to what makes my life easier. CI does that. But more importantly, I think I’ve got the data to show it is working. Having said this, if you’re a language teacher tired of repeating vocab lists that never stick, or getting burned out and bored teaching the same damn text, or frustrated that you feel like you’re taking two steps forward and one step back… maybe this will give you pause.
