On 9 November, the LTSIG teamed up with TESOL CALL-IS for an online webinar entitled 'The Role of Artificial Intelligence (AI) in English Language Teaching, Learning and Assessment: Either Friend or Foe?', which took place in the Learning Technologies SIG Adobe Connect room. The recordings will be available to LTSIG members (via our newsletter) and TESOL CALL-IS members only. However, a summary review of the day's proceedings can be found below, writes Phil Longwell.
I approached this online event with great interest. Earlier this year, I was preparing students for a City and Guilds speaking test and the topic they had to discuss was 'automation'. This Guardian article was one of the texts used as the basis for discussion. The robots have been coming for a long time now, and the threat to humans is often exaggerated. This event positioned robots, and Artificial Intelligence in general, as potentially useful learning tools, while taking a sceptical stance and generally dismissing the view that they are taking over. Robert Szabó's session, in particular, looked at whether the human is 'irreplaceable'. The fascinating day kicked off with introductions from Sophia Mavridi (IATEFL LTSIG coordinator), Georgios Vlasios Kormpas (TESOL CALL-IS) and Dr. Christine Sabieh (TESOL CALL-IS chair), who called in from Lebanon via a 'phone bridge', as Heike Philp called it.
Dr. Karen Price gave the opening 'plenary' session. She began by asking 'How can AI be used to learn language?' before running through a huge number of examples of AI at breakneck speed. The field of AI includes the recognition of gestures, faces, objects, handwriting and speech. She mentioned the IBM Watson software that beat a human in the TV trivia game Jeopardy!. She demonstrated the live captioning software Otter.ai, which captions individual speakers in a group discussion. Oral skills can be assessed with the MAP Reading Fluency app, Pearson's Versant test and Carnegie Speech's assessment tools. IBM Watson's Tone Analyzer detects emotional tone; this AI software is used in telephone call centres and gives agents real-time information about turn-taking, speech rate and key words. She also demonstrated an MIT Media Lab program which recognises facial features and voice modulation, as well as what is actually said, and analyses when attention wanders.
Dr Price showed a website (VIEW) which can turn any web content into material for practising features of English, before moving on to gesture recognition. Most AI software is currently based on written and spoken language, but interactions involving physical movements, such as a Sesame Street coconut-throwing game, explore the possibilities of assessing comprehension, especially for younger learners. We know that mobile devices can translate images into words, and there is no reason why we cannot point a camera at a plant and trigger an audio file asking the user to find water. She demonstrated software such as a pizza-making game from Osmo, before talking about stylus-based interfaces. The Skritter program teaches the user how to write Chinese and Japanese characters, while the 'talking pen', a prototype developed by Dr Price, turns handwriting into audio. Using facial recognition to detect someone's gaze has many applications, she argued: video can pause when a user looks away and resume when their focus returns, and attention can be assessed not through the length of time someone stays on a webpage but with eye tracking.
Emotion-aware technology is not just about cognitive approaches. Learning is an affectively charged experience, not just a cognitive endeavour, Dr Price argued. Emotion-aware robots can both recognise and express feelings, it would seem. One system tags the Mona Lisa, famously noted for her enigmatic smile, with certain emotions, while Affectiva uses your webcam, through its demo video, to analyse your facial expressions. However, emotions can be disguised, I would argue. We do not always express how we are feeling through our facial cues, although they are often a good indicator in a natural state. Affective tutoring in a foreign language context goes beyond simple automated cognition; she demonstrated contingent versus non-contingent robots with young learners once more. More demos of affective tutoring and vocal expressiveness data with robots, such as Tega, were shown, before she concluded that technologies which detect and mimic emotion can benefit language learning, as well as our understanding of the processes involved in it. Dr Price's session was a fantastic, positive run-through of so many applications and affordances of AI in language teaching and, more importantly, learning. It set the day up for others to go into more critical detail about these kinds of applications and uses.
Gilbert Dizon gave a brief presentation detailing the results of two studies on Alexa with Japanese EFL learners. The first was a small-scale case study that investigated Alexa's capability to understand the L2 speech of EFL students and examined their opinions of the intelligent personal assistant (IPA) for language learning. While the initial sample size was admittedly quite small (four university EFL students in Japan), making it more of a qualitative than a quantitative study, Dizon demonstrated the five commands with audio recordings of the students using Alexa and explained how accurately it handled their search requests. The study revealed that Alexa's ability to understand L2 utterances was moderate, with the IPA understanding learner commands at a lower rate compared to speech uttered during a storytelling skill. It also found that the EFL learners had generally positive views towards its use.
A later study, with 21 students, developed this, with nine commands given by the students included in the data analysis. Each command was classified as either fully understood or not understood. The second study had two primary goals: to investigate whether Alexa's intelligibility of L2 speech differed significantly from that of human raters, and to examine whether L2 interaction with Alexa promoted pronunciation improvements. Findings indicated a significant difference in the degree of L2 intelligibility between Alexa and native-speaker evaluators. Additionally, learners improved their L2 pronunciation, as indicated by an increase in intelligible output on later attempts at the same commands. Dizon highlighted the potential pros and cons of using Alexa for language learning and the need for additional research on IPAs. While short, it was a good insight into some small-scale action research on using this particular, intelligently designed AI.
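To illustrate the kind of tally involved, here is a minimal sketch, assuming invented judgement data and a hypothetical intelligibility_rate helper; it is not Dizon's actual analysis:

```python
# Minimal sketch of comparing command intelligibility between Alexa and a human
# rater, as described above. All judgement data here is invented for illustration;
# this is not Dizon's actual data or analysis code.

def intelligibility_rate(judgements):
    """Return the proportion of commands judged as fully understood (True)."""
    return sum(judgements) / len(judgements)

# One True/False judgement per command attempt (hypothetical values).
alexa_judgements = [True, False, True, True, False, True, False, True, True]
human_judgements = [True, True, True, True, False, True, True, True, True]

print(f"Alexa:       {intelligibility_rate(alexa_judgements):.0%} intelligible")
print(f"Human rater: {intelligibility_rate(human_judgements):.0%} intelligible")
```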
Dr. Ron Chang Lee presented the award-winning ESL Robot 'Tutor', which he developed. He first gave the background to its development and the reasons for its success: namely, that its creator is an ESL teacher, and the 'final patterns script' which any chatbot developer can use to shorten the time spent on developing one. ESL 'Tutor', as Dr Lee referred to it, can give advice on language learning, answer grammar questions, correct students' grammatical and spelling errors, answer general knowledge questions and even tell jokes and solve riddles. Dr Lee explained how students can interact with it and practise the citizenship interview, for a suggested 15 minutes per day, using either the website or the 'tutor' app.
My initial reaction was that a lot of this is not actually particularly new, but the explanation of the final patterns script showed how it was programmed. English sentences have patterns and are grouped according to the meaning they express; all the patterns in a group are converted to the Final Pattern, he stated. A chatbot developer then only needs to write an answer to the question using the Final Pattern. Among the limitations, he noted that it is a challenge for AI to understand English learners and that the technology is far from perfect and too expensive, as he later claimed in the World Cafe debate. He announced a new award, The Ron Chang Lee Award for Excellence in Classroom Technology. Language instructors need to be creative in applying new technology to language teaching, he concluded.
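As a rough, hypothetical sketch of how such pattern grouping might work (the patterns, answers and function names below are invented, not Dr Lee's actual script), several phrasings of a question can be mapped to one canonical 'final pattern' with a single stored answer:

```python
# Illustrative sketch of the 'final pattern' idea described above; the patterns
# and answers are invented, not taken from Dr Lee's actual script.
import re

# Several surface patterns that express the same meaning map to one final pattern.
PATTERN_GROUPS = {
    "ASK_IMPROVE_PRONUNCIATION": [
        r"how (can|do) i improve my pronunciation",
        r"what should i do to (improve|practise) my pronunciation",
    ],
}

# The developer writes a single answer per final pattern.
ANSWERS = {
    "ASK_IMPROVE_PRONUNCIATION": "First, listen to correct pronunciation, then try online exercises.",
}

def reply(user_input: str) -> str:
    """Normalise the input, find its final pattern and return the stored answer."""
    text = user_input.lower().strip(" ?!.")
    for final_pattern, variants in PATTERN_GROUPS.items():
        if any(re.search(v, text) for v in variants):
            return ANSWERS[final_pattern]
    return "Sorry, it just slipped my memory."

print(reply("How can I improve my pronunciation?"))
```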
At the end of the session, Heike led a quick, amusing demonstration of Tutor Mike, with participants asking questions such as:
Josh: “Hey Mike, what is the most difficult thing to teach?”
Tutor Mike’s Answer: Sorry, it just slipped my memory.
Shen: How can I improve my pronunciation?
Answer: First, you need to listen to correct pronunciation. There are online exercises to help you correct your pronunciation. Go to Ron Lee's website for more.
Carol: ‘Have you seen [the new Queen biopic] Bohemian Rhapsody?’
Answer: No, I haven’t.
and: 'What is your favourite book?'
Answer: Stephen Hawking’s ‘A Brief History of Time’.
Gulnur: Do you think flipped classroom is effective in teaching?
Answer: I think the answer is ‘no’.
Phil: Do you think the flipped classroom is effective for LEARNING?
Answer: Yes.
Hyang Lee: Can you speak Korean or other languages?
Answer: Yes, I can. (This was later tested with the Korean for hello – 안녕하세요)
Pawel: Will robots take over the world?
Answer: According to the Bible, God is in control of the universe, and nothing will ever take His place.
While some of the answers seemed to suggest a pre-programmed list, there was some intelligence at work in how it determined the meaning of what was being asked: it was able to deal with the grammar correctly and, in some cases, produced some great responses.
Robert Szabó is an educator, writer, speaker and academic manager based in Cologne, Germany. His session was entitled 'The Irreplaceable Human' and took a sceptical stance on the potential threat of AI and its potential to take over. He began by referencing Sugata Mitra's highly controversial plenary session at Harrogate in 2014, at which, like many other teachers, I was present. He then referenced Philip Kerr on vested interests: "Predictions about the impact of technology on education have a tendency to be made by people with a vested interest in the technologies." Szabó believed this to be true. He highlighted Kerr's blog and, in particular, this excellent post, which was written in light of the promotion of this event. Szabó also spoke about how "AIED systems need to operate with what is called a 'domain knowledge model'".
He also believed that teachers may be safer than they think, quoting from the Brookings Institution (October 2018): "The types of jobs that are at the least risk of being replaced by automation involve problem solving, teamwork, critical thinking, communication, and creativity. The education profession is unlikely to see a dramatic drop in demand for employees given the nature of work in this field. Rather, preparing students for the changing labor market will likely be a central challenge for schools and educators."
Szabó looked at the World Economic Forum's top ten skills, such as complex problem solving, critical thinking and people management; people skills in these areas are not likely to be replaced by AI, he argued. He spoke briefly about non-linguistic communication in the business environment. Robots do not have a 'semantic' understanding of what is going on; they are only able to make 'syntactic' choices. He also highlighted the difference between 'savoir' and 'savoir-faire' in respect of communicative competence and decision making and, again, what AI does not give us.
Szabó is the director of pedagogy at Learnship and deals with documents such as the Common European Framework of Reference for Languages: Learning, Teaching and Assessment. He shared this quote as it highlights how deeply human language is and how difficult it is to imagine computers performing the same role: "Communication calls upon the whole human being. The competences separated and classified below interact in complex ways in the development of each unique human personality. As a social agent, each individual forms relationships with a widening cluster of overlapping social groups, which together define identity."
Szabó highlighted the attrition rate reported in a paper about self-study with language-learning software in the workplace, comparing it to gym membership and noting how rapidly engagement falls off when there is no human involved. He referred to a number of other areas of study, such as machine learning, neural networks and NLP: "This approach does produce useful applications, often referred to as AI. However, it doesn't solve the problem of engaging in meaningful conversations or human-computer interaction, which means the 'intelligence' expected from AI is still missing" (Londsdale). Furthermore, John Searle (2010) states that: "Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else." Machines do not have this semantic understanding, Szabó reiterated.
During the chat, the final speaker, Tarek R Besold, shared the story of the first AI news anchor, so the 'threat' of automation is real; yet there are several human competencies which AI simply cannot recreate. The session ended with a Jack Ma video in which he talks about the differences between humans and machines.
Dr Tarek R Besold rounded off the day with a 35-minute, methodical, chronological presentation questioning whether AI makes teachers better or makes better teachers. Tarek is the AI Lead of the Alpha Health AI Lab, working there as a Senior Research Scientist/Topic Leader for the "Trustworthy AI" stream. He also serves as the chairman of the German Institute for Standardisation (DIN) NIA Working Committee on Artificial Intelligence (NA 042-01-42).
Tarek began by giving what he felt are the definitions of AI and what constitutes it. Very few technologies are actually AI. For example, Amazon Echo/Alexa is (√); Virtual Reality, which our SIG has been very much involved in discussing, is not (×); Google Assistant is (√); while robotic arms, however amazing, are tools of automation but not intelligent per se (×).
Tarek gave a few examples of 'actual applications of AI in the wild'. The LA Times generated attention when it published a story about an earthquake in the US that was written entirely by algorithm; to give some context, the story already existed, but the AI contextualised, summarised and reproduced it. Aspects of contract law can also be automated. Strategic gameplay was highlighted with the example, which Karen Price had mentioned earlier, of IBM Watson winning Jeopardy!: if a machine can compete with a human, then it can be considered intelligent, although within a fairly 'narrow' range. AI-based platforms, including games and simulations, are being used as recruitment tools and for training, including in military settings.
Tarek then spoke about what is happening 'now' in AI and education. Teaching robots, for example, can be programmed to do certain tasks, and engagement and motivational levels are higher for children. Whilst these 'classical' robots are motivational elements for teaching, their long-term effects have not yet been studied and current findings are somewhat inconclusive; in addition, when the novelty wears off, children seek other forms of stimulus. Intelligent tutoring systems like 'Jill Watson' perform routine tasks (answering standard questions on forums, checking coursework submissions, etc.). Students reported that the tutor was highly engaged, responsive and always polite, but they were unaware she was an AI, although a few thought something was up based on some of her replies. At the moment, the interaction is still limited to standard, frequently occurring tasks. Tarek went on to discuss the near future. AI, he suggested, would take over the 'declarative knowledge' part of teaching (i.e. fixed questions and answers which require a 'Jill Watson' automated kind of response). Teachers will be freed up to take more responsibility for assessment, planning individual learning journeys and delivering content which addresses complex conversational situations, cultural differences and so on.
In 5-10 years' time, on a more 'advanced horizon', learning analytics, with data collection, evaluation and prediction, could be carried out, for example, on a MOOC. This is a promising approach, leveraging the full power of statistics and big data for tasks solvable by relying on the frequency of occurrences within huge data sets. While this is unspectacular, the interesting follow-on to group analytics is 'personalised' analytics. Personalised data collection and evaluation using AI techniques (e.g. from tone of voice, micro-hesitations, mouse movements, etc.) can be compared across learners, and recognition and assessment of the learner's momentary cognitive state could be achieved. This could enable much better, multi-dimensional, real-time accounts of the state and performance of individual learners. Navigational support could improve the 'learning journey' and give an 'optimal path', which many providers might claim, but rarely deliver.
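As a very rough sketch of what such personalised analytics might look like (the feature names and numbers below are invented for illustration, not any provider's actual system), an individual learner's behavioural signals could be compared against a group baseline:

```python
# Minimal sketch of the 'personalised analytics' idea above: comparing one
# learner's behavioural features to the group average. Feature names and
# numbers are invented for illustration only.
from statistics import mean

group_features = {
    "hesitations_per_min": [2.1, 3.4, 1.8, 2.9],   # hypothetical cohort data
    "response_time_sec":   [4.2, 5.1, 3.8, 6.0],
}

learner_features = {"hesitations_per_min": 4.0, "response_time_sec": 7.2}

for name, value in learner_features.items():
    baseline = mean(group_features[name])
    flag = "above group average" if value > baseline else "at or below group average"
    print(f"{name}: {value} ({flag}, group mean {baseline:.1f})")
```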
Tarek gave two take-aways for the language teacher. The first is that creativity will always beat tenacity: you cannot 'outrepeat' the machine, but you can 'outhuman' it. Focus on the development of teaching approaches, move away from 'declarative knowledge', and grow your skills away from repetitive tasks towards content creation and social aspects. The second is to know your tools: engage with AI (and technology in general) and see it as an opportunity rather than a threat. In addition, we should be curious, become tech-savvy, and discover ways to use AI-powered technology to our advantage. Furthermore, at the risk of alienating an ELT audience as a non-language teacher, he reminded us to recognise that AI systems are assistants, not replacements, as Szabó alluded to earlier. Realise that "as an educator you are an enabler and driver of learning whose value arises from the learning success of the student/learner, not from the number of rehearsed sentences or repeated words."
This was a fascinating insight from an academic perspective. He “spoke from our hearts”, commented Heike after the session.
Two experimental 'World Cafe' discussions took place during the day, both facilitated and moderated by Heike Philp. The first posed the question: is AI in ELT a friend or foe? There were around 50 participants, who contributed greatly to this discussion. One key point was made by Dr Cynthia Calongne (a.k.a. Lyr Lobo on Second Life): "The difference in AI and Expert Systems (a collection of facts with decision rules on when to apply them) may reside in how we think about the tool 'learning' and adapting to our needs or changes in languages, in learners, and in instructors' needs," which we tweeted at the time. As strengths, many participants wrote about the opportunities of AI: it can be engaging and time-saving, and it fosters autonomy, customisation and critical thinking. Carol Rainbow commented that she had tried a couple of chatbots on Second Life but found it time-consuming, and as soon as one is complete, everyone tries to beat it. Murat Ata said that "Learning one language takes months at best, but a strong AI based Machine Learning Tool like Google Translate may achieve instant communication between 100+ languages." Lyr stated that "in our game designs, we're using an analytics dashboard our fearless leader designed to store behavior and responses to game challenges for later analysis. The data is private, not public, and only visible by us." She shared this from Amazon.
On the downside, George stated that it comes with technical glitches and that technology companies might take over education; in addition, AI "may not encourage group work but more individualized learning." Carol commented that, by the age of 13, a child's parents will have posted on average 1,300 photos and videos of them to social media, and that by the time they reach adulthood children are likely to have posted online 70,000 times themselves, sharing this news story about children being 'datafied from birth'. We need to protect children, she stated. Ron Chang said it is expensive, although Karen Price stated that "Actually, several VERY exciting AI apps are free … The affordances of AI are terrific, but the uses and implementations may be poor." Eily, in France, wondered about "who writes and owns the algorithm? Risks of profiling, data privacy could be huge." She continued by asking, "how can we make sure that humans have final say in choices offered, don't want all choices to be outsourced to businesses?" Daniela, in Argentina, wrote of "Internet connection, devices for students, may deepen the gap between those with access and those without it," while Ervin Ramos, in the Philippines, said that it "can lead to more digital divide in schools." Heike told a funny story about mistaking Siri for Cortana, where Siri replied, 'Who is Cortana?' 🙂 Try asking Siri what "zero x zero" is, suggested Phil, who also shared this comical 'If Alexa was Hal' video. Ervin Ramos agreed with George that "AI could further lead to the commercialization of education…educational technology developments might be more profit-oriented than learner-centered."
The second discussion, which was not recorded, was a brainstorming activity about the tools and technology used and the methodology behind them.
Throughout the day, Dr Cynthia Calongne, masquerading as Lyr Lobo, tweeted her enthusiasm and the LTSIG shared some of her tweets.