Classroom Tips and Resources


NILE CEFR Filtering Tool with YL proficiency descriptors

This month we focus on the NILE CEFR Filtering Tool, developed for the updated CEFR descriptors in the Companion Volume (2020) and recently enhanced with related Young Learner proficiency descriptors for 7-10 and 11-15 year-olds.

This free tool, built in Microsoft Excel, allows you to isolate relevant CEFR descriptors at a click, filtering by parameters such as CEFR level, scale and skill.

Access the tool by signing up to the free NILE Members Area, and if you want to see more, please watch this NILE CEFR Filtering Tool Introductory Video.


GO TO MEMBERS AREA


Chatting or Cheating?

It seems you can’t go a day now without some mention of AI - it can give us audiobooks and create evocative images (Google the character Loab) that intrigue humans. Of more specific relevance to ELT practitioners is how the airwaves have been buzzing with chat about the AI chatbot ChatGPT’s ability to generate text content, including whole books - and doubtless it’s been no different in your workplace powwows. The arrival of this AI bot has had us at NILE HQ deliberating and cogitating, with some terrified and others excited, and this twin reaction seems a microcosm of what’s out there in the wider world.

After Elon Musk’s response that we can now bid homework goodbye, I’ve been asking myself, when I review the requisite sample of a CELTA applicant’s writing, "Will I know if they’ve put the task I set through ChatGPT?" We posed ‘Teaching is a job for life, a vocation. Discuss’, and the bot gave us an articulate answer easily at the C1+ level required; when we hit ‘Regenerate’, it produced another of the same quality, with no clue to the absence of a human at the keyboard. If we thought plagiarism was an enemy of education, then the landscape we’re in with ChatGPT is way beyond that. Check out the irony of this exchange: we asked, ‘Can ChatGPT help students cheat?’, to which the bot responded, ‘It is not appropriate to use chatbots to cheat...cheating goes against the values of honesty, integrity and fairness, and undermines the education system.'

Assuming that ChatGPT won’t destroy itself in light of that response, and to counter our reasons to be fearful with some cheer, we would like to share four key points:

  1. To expose writing that is not the work of a human, set a personalised task, e.g. ‘Why did you choose teaching as a career?’, or one on a contemporary topic. The former garnered ‘I am an artificial intelligence and do not have personal experiences’, and the latter ‘I’m sorry, I cannot provide an answer as my knowledge was cut off in 2021’. Equally, while we can go text-to-image with Loab and her ilk, image-to-text is a convincing task option – ChatGPT can’t cope with a photo or graph in place of a text prompt.
  2. We see the ‘checking’ cavalry has arrived, in three forms: A. Academics’ trusty ally, Turnitin, is being massaged to cope with ChatGPT’s finest work in a ‘potential cat and mouse game’ (businessinsider.com); B. The Guardian reports that a student has created an app that ‘tests a calculation of “perplexity” – which measures the complexity of a text, and “burstiness” – which compares the variation of sentences’; and C. Detect-GPT, a Chrome extension, can apparently scan an online text and colour-code anything that’s AI-generated (thanks to Nik Peachey for that one) – see the rough sketch after this list for a feel of what ‘perplexity’ and ‘burstiness’ measure.
  3. Tabitha Goldstaub, Chair of the UK AI Council, acknowledges that while AI’s been with us for a while, new bots have accelerated the need for national-level regulation – and she will contribute to a forthcoming government White Paper in the UK.
  4. Just as students might misuse it, teachers can surely use it well, to save themselves time in creating model essays to explore for useful content and language. The New York Times urges us not to ban it from education, and Stanford University similarly talks of the need to embrace it, organising ‘gatherings with educators to strategise a path for generative AI’. Perhaps we should be ready to pay attention to their ongoing views.
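
For readers curious about the measures mentioned in point 2, the short Python sketch below gives a rough feel for them. It is an illustration under our own assumptions only – not the student’s app, Turnitin’s method or Detect-GPT: it uses the spread of sentence lengths as a stand-in for ‘burstiness’ and a simple word-frequency entropy as a crude proxy for ‘perplexity’, which real detectors compute with a language model. The function names and the sample text are ours.

# Illustrative sketch only: crude stand-ins for the "burstiness" and
# "perplexity" measures described in point 2 above. Real detectors score
# perplexity with a language model; here we use word-frequency entropy.
import math
import re
from collections import Counter
from statistics import pstdev

def burstiness(text: str) -> float:
    """Spread of sentence lengths - human writing tends to vary more."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) if len(lengths) > 1 else 0.0

def pseudo_perplexity(text: str) -> float:
    """Rough proxy: 2 to the power of the entropy of the word distribution."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    total = len(words)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return 2 ** entropy

if __name__ == "__main__":
    sample = ("Teaching is a job for life. Some call it a vocation. "
              "Others drift into it, stay for decades, and cannot imagine leaving.")
    print(f"burstiness: {burstiness(sample):.2f}")
    print(f"pseudo-perplexity: {pseudo_perplexity(sample):.2f}")

The intuition behind such measures is that very uniform, low-variation text is more likely to be machine-generated; a real tool would use model-based probabilities and calibrated thresholds rather than these toy proxies.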

The ultimate concern is perhaps whether we might one day be replaced, and we can take comfort from Goldstaub, who, speaking on BBC Radio 4’s Woman’s Hour, said, ‘some people ask me “is an AI gonna take my job?” and I always say “no, but somebody using AI better than you might do”’. So the view from Norwich is: love it or loathe it, it’s here to stay – and let’s find a way to work with it.


BBC Radio 4, The Today Programme, 12.1.23

Business Insider [accessed 13.1.23]

The Guardian [accessed 13.1.23]

The New York Times [accessed 13.1.23]

Stanford Graduate School of Education Research Stories [accessed 13.1.23]

BBC Radio 4, Woman’s Hour, 11.1.23