Policy planning part 2, continued:

Last time, we looked at a few basic elements that pertain to the English curriculum at your school, such as materials, time, and history of the program at your school.  This time, we are going to look at something even more important: the teachers themselves.

Teachers are fundamental to the entire curriculum, regardless of time and material available to them, so it’s really important to open up the dialogue about what they think, what they do, and what they need.

One way of opening up this dialogue is by using a questionnaire. In creating your questionnaire, think about aspects relating to didactics, knowledge, lesson organization, and assessment. Also, think about aspects that relate to the school’s vision and ambition. For instance, if the school is a Montessori or Dalton school, aspects relating to how well children can work independently or cooperatively might be included on your list. Below is an example of a questionnaire.


A quick sample of a questionnaire

This sample is meant as a start, to help you get inspired for writing your own questions.  Each school has its own points of focus, so you’ll need to create questions that reflect that.  Handy hint: if you want more inspiration in this area, useful search terms are “checklist good EFL lesson” or “what makes a good EFL lesson”.  Those searches will turn up all sorts of blogs and sites offering questions worth thinking about.

In designing your questionnaire, it’s important to allow for degrees of opinion and anonymity in supplying answers to these questions.  Remember, you’re looking for points to improve upon, so it’s really important that people feel the space to be honest instead of only providing socially acceptable answers.  Allowing for an anonymous response is a good way to achieve this.

Also, allow space for open answers.  That way, your fellow teachers will be able to explain why they gave certain answers.  For instance, a teacher who answers “I disagree with ….” may want to explain why.

Once you’ve designed your questionnaire, put it aside for a day or two, and come back to it later with a fresh pair of eyes.  Hold it up against this checklist, and be critical!  Take the time to refine your questions, add one or two where needed, but also remove questions that don’t work well.

  1. Does each question only cover one topic?  (question 3 of the sample given clearly does not do this)
  2. Is each question concrete and clear, or is there space for ambiguity?
  3. Is the questionnaire too long?  Is it too short?
  4. Do the questions relate to teacher skills, knowledge, and school vision?
  5. Is there space for open input?

Tip:  for more information about writing a good questionnaire, you might have a look at this blog post, “Good survey questions”, or its accompanying infographic.

Once you’ve gotten everyone’s anonymous responses, it’s time to tally up the numbers.  To do this, you simply tally, per question, the number of responses per answer option.  Here is an illustration of a quick tally:


Tallying up the answers shows general tendencies.


As you can see, this tally only shows general tendencies among the teachers, highlighting points that might make for some interesting follow-up discussions.  Looking at the data like this allows any follow-up dialogue to be open and non-personal, and everyone can have a role in addressing the issues at a team level without feeling personally called out.
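If you collect the responses digitally, the tallying can even be automated.  Here’s a minimal sketch in Python; the question texts and the agree/neutral/disagree scale are invented for illustration:

```python
from collections import Counter

# Hypothetical anonymous responses: one list of answers per question,
# on a simple agree/neutral/disagree scale.
responses = {
    "I design my own extra materials for weaker pupils":
        ["agree", "disagree", "disagree", "neutral", "disagree"],
    "I use the group plan as the basis for my lessons":
        ["agree", "agree", "neutral", "disagree", "agree"],
}

def tally(responses):
    """Count, per question, how many teachers chose each answer."""
    return {question: Counter(answers) for question, answers in responses.items()}

for question, counts in tally(responses).items():
    print(question)
    for answer in ("agree", "neutral", "disagree"):
        print(f"  {answer}: {counts[answer]}")
```

The resulting counts per question are exactly the general tendencies the tally sheet shows, just computed for you.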

Begin by describing what you see.  For instance, no-one claims to (re-)design lessons to make the learning tasks more authentic.  Few use extra materials for the weaker or stronger children.  The group plan is not always used as the basis for the lessons.  These are simply factual descriptions of what you see, without labelling anything as “good” or “bad”.  Your observations will form the basis for any follow-up discussion with your colleagues.

In thinking about these points, it’s important to find out why things are the way they are.  For instance, teachers do not (re-)design lessons to make the learning tasks more authentic.  Does the material used already make the learning tasks authentic?  Or is this perhaps something that teachers hadn’t thought about before?  Do they want more authentic tasks?  Is authenticity of learning tasks even considered important?  And, just as important: did they understand the question correctly?

Again, it must be stressed that any dialogue about the outcomes of the questionnaire must take place in an open and safe manner.

Talk about these questions with others at your school and make a note of your findings.  As you do this, also note any problems that need to be addressed, along with any ideas that people give for tackling these issues.

Next time, we will look at the issues of assessment of learning, and how to move towards the next step of creating points of action.


Rubrics: a basis for qualitative feedback


This is one of those things I wished I’d learned about years ago, because it would have made my own life as a teacher so much easier.  I’ve learned about them now, however, so I’m shouting my joy from the rooftops.  Hurray for rubrics!

What is a rubric, one might ask.  A rubric is a means of giving detailed, qualitative feedback to students regarding a given product.  It contains concrete descriptions of the criteria for a well-completed product.

There are different sorts of rubrics.  I’ll explain two of them, using generic sample rubrics I wrote for this purpose.  The first is a criterion-referenced rubric, and the second lists success criteria for different levels of ability.  I wrote these rubrics for a group project in which the children had to create posters demonstrating what they’d learned during the last unit of learning.  They were to use the new words they’d learned in correct sentences.  For the sake of simplicity, I’ve omitted criteria for layout and presentation and included only the very basic criteria of content, language, and process.

The first example here shows a criterion-referenced rubric of the sort most people might use.  For teachers, this is an easy form of marking, since the standard for the work remains the same, no matter the ability level of the child.  Also, the criteria for success are clearly described, so all children know ahead of time what they need to do in order to pass the assignment.  Another positive aspect is that children get differentiated feedback per criterion.  On the downside, it’s perfectly possible for a child to fail the given assignment, as the criteria for success are of the one-size-fits-all variety.  There is no way to allow for differences of ability when using this sort of rubric.

That problem can be solved by using a different setup.  The example shown below demonstrates a way to differentiate feedback per ability level.  For instance, if you work in a classroom with a broad difference in language ability, then it might be nice to set up the assessment so everyone has the chance to succeed.  At the same time, this rubric also allows you to set up minimum success criteria per ability group that are just above the actual level of the children so that each child is pushed towards a higher level of ability.  This is called differentiating in output.

In this case, the children should know ahead of time what group they belong to, and they understand that they each have a choice: to succeed at their own level, or to work towards success at a higher level.  The term minimum success criteria is critical here: children should reach the minimum level indicated, but may also choose to work towards a higher level.  Sometimes, if a child needs to, he may choose to work at a lower level, but that is a pedagogical decision that you and that child can discuss.  In this rubric, the indicators for ‘process’ are the same for all children, since it is reasonable to expect all children to work on their social development regardless of their language development.


Rubric for differentiating in output.  Note that success criteria for the ‘process’ are the same for all children, regardless of ability level.
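For those who like to keep their rubrics in digital form, the shape of a differentiated rubric with one shared criterion can be sketched as a simple data structure.  The criteria texts below are invented for illustration; only the structure matters:

```python
# A hypothetical differentiated rubric as a plain data structure:
# per-level success criteria for content and language, plus one
# shared criterion for process that applies to every child.
rubric = {
    "content":  {"basic": "Poster covers 3 topics from the unit",
                 "extended": "Poster covers 5 topics from the unit",
                 "talent": "Poster covers all topics from the unit"},
    "language": {"basic": "Uses 5 new words in short, correct sentences",
                 "extended": "Uses 10 new words in longer, correct sentences",
                 "talent": "Uses all new words in varied, correct sentences"},
    "process":  "Cooperates and contributes to the group work",  # same for everyone
}

def success_criteria(rubric, level):
    """Collect the minimum success criteria for one ability level."""
    criteria = {}
    for heading, spec in rubric.items():
        # Shared criteria are plain strings; differentiated ones are dicts.
        criteria[heading] = spec if isinstance(spec, str) else spec[level]
    return criteria

print(success_criteria(rubric, "basic"))
```

Looking up `success_criteria(rubric, "talent")` instead gives the same headings with the higher-level descriptions, which is exactly the “minimum per group, more if you want” idea described above.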

Of course, no matter what kind of assessment format you use, it’s important that children be aware of the criteria for success so they know what they need to work toward.  As a teacher, I post my rubrics on their electronic bulletin board at school, so students know what they can expect.  It helps them focus their work and gives them the space to make informed decisions when it comes to their own learning.  It also means they have no surprises when they get their grades back, which makes a big difference for everyone involved.

For more information on the use of minimum success criteria and rubrics, feel free to have a look at this site:

What other topics would you like to see covered on this blog?  Please let me know!

Not all equal, but moving forward all the same

At one time or another, we teachers are confronted with the need to assess our children’s learning.  Many of us have thought long and hard about the use of a single, standard test to find out what our children have learned.  There are, of course, things to be said in favor of standardized testing: one gets a view of how children perform compared to other children their age.  That can be very valuable information, providing a basis for differentiated instruction.

However, children who are the weaker learners in the class also need a moment of success, of being “good enough” without always being last in line.  When will these children be allowed to feel like they have learned enough, that they are making progress?  Earlier, I wrote a blog entry about writing group plans for long-term planning.  Based on these semi-annual plans, the language goals for a given theme can be determined.  After that, though, how does one determine when each child has actually made progress at his or her own level?  This is when differentiated outcome rubrics come in handy.

Part of what I do when designing a new theme, is determine which words must be learned by everyone, which words most children should learn, and what words are challenge words.

  • Basic vocabulary: Words everyone should learn.  These generally transfer easily from the mother tongue, are shorter, and used relatively often.
  • Extended vocabulary: Words most children should learn.  These may transfer easily, but may also be longer and used less often than the basic vocabulary.
  • Challenge words: Words some children should learn.  These words may be difficult for a number of reasons: they may be spelled unusually, be seldom used, or be longer in length.

Next, I determine some form of end product that the children should work toward in the course of the theme.  In the example below, I want them to do some kind of oral presentation about something we’ve learned.  The weakest children are in the group “Cat”, the strongest are in the group “Chipmunk”, and everyone else is in the group “Bird” (no particular reason for those names, incidentally; I’ve used “skateboarders”, “snowboarders”, and “kite-surfers” in the past as well).
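For what it’s worth, this tier-per-group idea is easy to sketch in code.  The words below are placeholders, and I’m assuming each group learns its own tier plus everything below it:

```python
# Hypothetical theme vocabulary split into the three tiers described above.
theme_vocabulary = {
    "basic":     ["cat", "dog", "red", "big"],
    "extended":  ["rabbit", "purple", "enormous"],
    "challenge": ["chameleon", "turquoise", "gigantic"],
}

def words_for(group):
    """Each group learns its own tier plus all the tiers below it."""
    tiers = ["basic", "extended", "challenge"]
    required = {"Cat": "basic", "Bird": "extended", "Chipmunk": "challenge"}
    cutoff = tiers.index(required[group]) + 1
    return [word for tier in tiers[:cutoff] for word in theme_vocabulary[tier]]

print(words_for("Bird"))
```

So the “Birds” get the basic and extended words, while the “Chipmunks” get all three tiers, mirroring the cumulative way the vocabulary lists are built up.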

Finally, I determine what concrete language they should be able to produce for this product, based on the semi-annual plan.  In this differentiated outcome rubric, I show what the minimum expectations are for a presentation that is “good enough.”  Each child knows what group he or she belongs to, and therefore what kind of output is considered “good enough” in order to be considered successful.

In this example, the “Cats” work towards a short presentation in which they use short sentences correctly applying the basic vocabulary.  There is space for some hesitation during the presentation.  “Birds” need to use the extended vocabulary correctly, in longer sentences, with better pronunciation, and so on.

                 Cat (intensive)                Bird (basic)                  Chipmunk (talent)
Vocabulary       Uses basic vocabulary          Uses extended vocabulary      Uses challenge vocabulary
                 correctly                      correctly                     correctly
Sentence length  3 to 4 words                   4 to 7 words                  5 to 10 words
Speaking         Some errors in pronunciation   Few errors in pronunciation   Clear diction
                 Some hesitation                No hesitation                 No hesitation
Of course, it is perfectly fine if children decide to try out a more difficult level of work.  Some children get a real “kick” out of performing at a higher level than expected.  Some, however, might wish to try out a lower level, and that’s fine too.  There are plenty of children suffering from performance anxiety who might feel more comfortable operating at a lower, more easily-achieved level.  Others might try out a lower level for fun, find it too easy (and therefore boring), and return to a more challenging level of work.  The important thing, however, is that each child be allowed to succeed at a level appropriate to his or her own ability, and a differentiated outcome rubric is good for just that.

Update: Can-do descriptors of language development

One of the questions I often wrestled with as a starting teacher was how to build a logical and developmentally sound curriculum.  I’ve written a blog about it before, but return to this topic as I have since found new descriptors for language development that I thought would be interesting to share.

One set of new documents that I’ve found is a series of grade-leveled booklets in which various levels of language development are described for speaking, listening, reading and writing.  An example of one such chart is shown here:

As you can see, these descriptors are still quite general, allowing the teacher to decide what vocabulary to teach in order to help their learners develop towards the next level.

Here, I’ve included links to the booklets with descriptors that the WIDA  (World-class Instructional Design and Assessment) developed.





Besides this, WIDA also provides ready-made Can-do descriptor name charts, so teachers can fill in the names of their own children at the appropriate level, thus creating an overview of language goals to work towards.  I’ve included links to these ready-made name lists here:

Key Use Can Dos Kindergarten

Key Use Can Dos Gr 1

Key Use Can Dos Gr 2-3

Key Use Can Dos Gr 4-5

Key Use Can Dos Gr 6-8

Some teachers may find it a bit daunting, however, to deal with these general descriptors.  Is it possible to connect these descriptors with more concrete language behaviors?  The answer is: yes.  The American Council on the Teaching of Foreign Languages (ACTFL) has put together just such a list of concrete language behaviors in their booklet “Can-do statements: Performance indicators for language learners” (2015).

In this booklet, one finds checklists of behaviors such as “I can say hello and goodbye,” or “I can ask who, what, when, and where questions.”  This booklet is meant to be a self-assessment checklist, but can just as easily be used by teachers to assess their learners and decide what benchmark their learners have achieved.  Besides this, the language skills are divided up into five categories: conversing (interacting), presenting (speaking), listening, reading, and writing.  These categories correspond with the five categories employed by the Common European Framework of Reference (CEFR), making it easier for teachers in Europe to use this document in their own work.

Moreover, the ACTFL has collaborated with sixteen language organizations around the world to define “world-readiness standards” for learning languages, and aligned their own benchmark levels with those of the CEFR.  This alignment makes it easier for teachers around the world to use these documents in informing their own teaching.

So now my question remains, what do other teachers use in designing their curricula?  What checklists, language level descriptors, or other standards do you use?  Please let me know!

Important update to this blog entry: I have recently had my Digital Record of Pupil Progress (DRoPP) program updated.  I have re-written it to include the descriptors from the ACTFL booklet, and the levels are divided up into A0 (pre-A1), A1, A2, and B1 levels for the five language skills areas: listening, presenting, conversing, reading, and writing.  I am including the booklet of instruction here so you can look it through.


If you are interested in a trial use of DRoPP, please contact me here:


Links to the ACTFL documents cited:


Flow charts: visualizing the 20 questions game

“Does it have legs?” a child asked.  The child in front of the class answered quickly, “Yes, it does.”

“Can it fly?” another asked.  “No, it can’t,” was the answer.

It took a little while and a number of yes/no questions, but soon the class knew the animal’s secret identity: a giraffe.

This guessing game is a favorite among many ESL teachers, including myself.  The question I found myself asking was how to make it more challenging for the older learners.  Also, how could I change the format of this game so that every learner could participate, even the shy ones?


It took a little searching, but I soon had a viable answer: flow charts.  In essence, a flow chart works just like the verbal version of the guessing game, but visualizes the process of elimination involved.

There are different ways a flow chart can be used in the lesson.


A sample visual flow chart for younger learners.

For younger learners, one can make up a poster-sized chart with pictograms on the question blocks and pictures of the vocabulary being sorted out.

For middle learners, one can make a flow chart with simple questions.

The older learners can make up their own flow charts to try out on classmates.


The same flow chart, but now with questions written out.

Flow charts allow children to visually sort information along the lines of simple questions.  The example provided here is about a few animals, but with a bit of creativity, one can help children make their own flow charts about any number of topics one teaches about, for instance modes of transportation, clothing, food, weather, hobbies, and jobs.  By having children sort the information in this fashion, they are also activating their logical-mathematical intelligence, broadening their learning.
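For teachers comfortable with a bit of programming, the guessing game itself can be sketched as a tiny decision tree.  The questions and animals here are just examples, not a fixed curriculum:

```python
# A tiny animal flow chart as a nested dictionary: each node is either
# a yes/no question with two branches, or a leaf naming the animal.
flow_chart = {
    "question": "Does it have legs?",
    "yes": {
        "question": "Can it fly?",
        "yes": "bird",
        "no": "giraffe",
    },
    "no": "fish",
}

def classify(node, answer_fn):
    """Walk the chart, asking each question until a leaf is reached."""
    while isinstance(node, dict):
        answer = answer_fn(node["question"])   # expects "yes" or "no"
        node = node[answer]
    return node

# Example: an animal with legs that cannot fly.
answers = {"Does it have legs?": "yes", "Can it fly?": "no"}
print(classify(flow_chart, answers.get))   # giraffe
```

Each path from the top of the chart to a leaf corresponds to one round of the guessing game, which is exactly the process of elimination the poster version visualizes.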

Flow charts can also be used to assess a learner’s understanding of the concepts taught.  Can he or she ask questions effectively to find out what the secret word is?  Can he or she formulate the questions correctly?  Can he or she create a flow chart that includes all of the concepts learned during the last few lessons?  These are just a few possibilities that come to mind when connecting flow charts to assessment of our learners.

The examples provided here are, of course, rather straightforward and very simple.  I suppose children in high school or adult ESL learners could make more intricate examples, for example to describe their day or how they prepare a meal.  I wonder if others use flow charts in their ESL classrooms?  If so, how?  I’d like to hear your ideas.




Testing, testing, 1-2-3…

For what must have been the hundredth time, I pulled out the apple, the fish, the key, and a dozen other toy-like attributes.  The kindergartener eyeballed the objects, eager to play.  I pulled out my checklist and pencil, and started: “Where is the fish?”  “Where is the chair?”  “Where is the key?”  Each time, the child would point at the object, sometimes uncertain, other times rejoicing in the right answer, gaining in self-confidence each time I nodded at him. So began yet another of the hundreds of Reynell tests I administered as part of a research project being carried out by my employer.

The Reynell test uses a mixture of attributes that children may handle and colorful pictures.

The Reynell test

The Reynell test itself is well-designed, including practical and interesting tasks for young children in order to assess their listening and speaking skills in English.  The test begins with groups of similar, easy tasks, which become increasingly difficult as the test progresses.  When a child begins to fail at a particular sort of task, the assessor ends that task and moves on to the next set.  If the child fails at that set as well, the test is done.  It is easy, standardized, and informative. The point of the test is to see what developmental age a child has reached in his language abilities; even though a child can perform poorly, he can never fail this norm-referenced test.
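For the curious, the stopping rule described above can be sketched in a few lines of code.  This is just my paraphrase of the procedure, with invented task names, not the official Reynell administration or scoring:

```python
# Sketch of the stopping rule: tasks come in sets of increasing difficulty.
# A failure within a set ends that set; two failed sets in a row end the test.

def administer(task_sets, child_passes):
    """Run sets of tasks until two consecutive sets contain a failure."""
    score = 0
    failed_previous_set = False
    for task_set in task_sets:
        set_failed = False
        for task in task_set:
            if child_passes(task):
                score += 1
            else:
                set_failed = True
                break          # abandon this set at the first failure
        if set_failed and failed_previous_set:
            break              # two failed sets in a row: the test is done
        failed_previous_set = set_failed
    return score

# Hypothetical task sets and a child who manages only the easy set:
easy = ["point to fish", "point to key"]
medium = ["put knife under bed"]
hard = ["describe picture"]
passes = {"point to fish": True, "point to key": True,
          "put knife under bed": False, "describe picture": False}.get
print(administer([easy, medium, hard], passes))   # 2
```

The score that comes out is simply the number of completed tasks; in the real test, that raw score is then converted to a developmental age, so a low score still never counts as “failing”.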

This test is, however, also very time-consuming.  Depending on how well a child does, the test may take only 10 minutes, or a full three-quarters of an hour per child.  After administering the Reynell test a hundred times, I found myself quite ready to throw rabbit and bear out of the window and move on to something – anything – besides rabbit putting the knife under the bed and bear pushing the bed.

One of the dangers of administering the same test a hundred times: the examiner (in this case, me) might start getting bored and make things up, which isn’t allowed, of course, as that would affect the standardized scoring.

Later that year, I administered another sort of test to the older ESL pupils.  This time, there was no teddy or rabbit, but instead, paper, pencil, and a CD with spoken texts.  It was time for the Anglia exam.

The Anglia exam

The Anglia exam isn’t a single exam; instead, it is a series of exams that begin at a very basic level (A1) and graduate to higher levels of skill (C2)*.  The basic Anglia exam includes listening, reading, and writing skills.  Speaking assessments are separate and cost extra, depending on who scores the test.  The Anglia exam is a criterion-referenced test, which means that a child may fail.  If he fails, then the attempted level was too difficult, but if he passes, then perhaps the attempted level was too easy.  Therefore with this test, it is necessary for the examiner to know two things ahead of time:

1) what exactly is tested at each level (described in detail in the Teacher’s Manual on the Anglia website)

2) what each examinee’s general level of English is (for instance, by using the Placement Test on the Anglia website)

The Anglia exam is a series of leveled exams, starting at pre-A1 and building up to C2.

Besides that, all of the sections are tested at the same level, regardless of possible differences between a child’s skills in listening, reading, and writing.  The tests are costly, and since children want to pass this exam, the teacher must make a careful estimation of the highest possible test the child will be able to pass, even if much of the examination might be too easy for the child.  

Administering the test itself is simple enough, since it is done with the whole class at once. The feedback from the assessment is a diploma stating that the child (barely) passes, passes well, or passes exceptionally well.  If a child fails, he or she receives a referral to try again at an easier level.  Personally, I found this feedback to be as effective as measuring the depth of the North Sea with a meter stick: reliable, but not terribly informative.

A new test

A few years ago, my employer decided it was time to create a new sort of test, an informative assessment that would cover the broad range of ability found at our schools, while being time- and cost-effective.  We spent hours analyzing existing tests, discussing questions like “do we really need to count spelling as part of a listening test?” and “how can we differentiate the material so that we can find a child’s level in each language skills area?”

After that, the test was administered to hundreds of children, as part of the process of creating a normative score.  Basic feedback was given to the teachers, and the children all got certificates stating that they helped in creating this test.  Unfortunately, this test is still not available for regular use, so I am – still – left to my own devices.  Fortunately, I had already been developing my own devices for nearly ten years.

My own assessment

I have been creating my own means of assessing children’s progress in English.  Not only that, but I have developed a system of recording their progress so that I have a long-term picture of children’s development in the ESL program.  I used my experiences with both the Anglia and the Reynell tests to form something more useable for my school.  But more about that in another blog…

* The levels A1, A2, B1, B2, C1 and C2 are part of the Common European Framework of Reference.  More information about the CEFR can be found here:

More information about Reynell tests:

More information about Anglia tests:

What to teach? Making a plan of action

One of the first problems I came across as a budding early ESL teacher was the question: what do I teach? Of course, I needed to teach them numbers, colors, food, animals, and classroom vocabulary, but – what exactly did these children need to know? What was going to be the curriculum?

I decided that there would be a dual basis for the curriculum. Of course, the children needed a lexicon: words, words, and more words. Equally important, however, was what the children could do with those words. So I needed a holistic means of looking at the children’s learning and of focusing my teaching on their zone of proximal development.


It was time to look around with the help of my favorite search engine and (practically) best friend: Google. Search terms like “language development,” “ESL curriculum”, and “wholistic ESL” crossed the screen until I finally found a language proficiency handbook, written by the Illinois State Board of Education. Finding this handbook was one of my first Eureka moments: here was a clearly described continuum of development for ESL learners, outlining exactly how a speaker of a foreign language would develop, which I immediately adopted as a basis for my curriculum.

The stages of language acquisition easily lent themselves to a checklist format. Once I decided how far each child had developed along this continuum – and in the early days that was more of a “finger in the wind” method than I really care to admit – I then had a clearer idea of where I could lead the class.

For instance, if I noticed that some children were able to focus on the main idea of things – “Point to the picture, very good” – then I could cue the other (by now quite lost) children in: “Look, Johnny is pointing. Very good.” I also knew that at this level, it would be appropriate to get children to repeat simple words. Two- and three-word phrases would come later. Conversely, if a child could already express himself in short phrases, I knew I no longer needed to accept simply pointing to an object as an answer to a question. I could expect that child to answer a question verbally.

I also used – and still use – this continuum to inform my own language use in teaching. If children only understand the main point, I avoid complicated sentences. I keep my own speech limited to their zone of proximal development. If children understand one word at a time, my own speech is therefore usually sentences of 3 or 4 words, accompanied by supportive body language. As the children’s language develops, I drop the body cues and lengthen my own sentences appropriately.

There are, of course, pros and cons to working with a holistic scale of development. On the one hand, it doesn’t really matter what kinds of words the children learn, as holistic development is applicable to any theme. On the other hand, it’s far more difficult to develop a standardized test for this, as the content of any test will be dependent on the vocabulary and grammar that was taught previously.

As the years passed, I spent time developing an adaptive assessment that allowed for this, as well as a system of recording this. I won’t go into it this time around, but in the near future, the topic of assessment will certainly be addressed.

I wonder how others have attacked the problem of curriculum building?

A link to the aforementioned language proficiency handbook can be found here.

And a link to more information about the zone of proximal development can be found here.