A picture is worth a thousand words

One of the most disquieting things about the world in which we find ourselves is that our eyes and ears can be deceived. In my younger years, I never would have believed this possible. I was raised on movies such as Clash of the Titans and One Million Years BC, so the idea that special effects would ever become genuinely convincing seemed ludicrous to me. Yet fast forward a few years to the advent of CGI and I was witnessing films such as Jurassic Park, so I guess I should have seen the next stage coming.

The advent of AI is genuinely unsettling. We already inhabit a world in which it is possible to make anyone say anything. Take an image of someone famous, take their voice, feed it into the right kind of software managed by someone with decent skills and bingo – you’ve got Sadiq Khan saying “I control the Met police and they will do as the London Mayor orders” and suggesting that the Armistice Day memorial service be moved in order to make way for a pro-Palestinian march to take place. Even Sadiq Khan himself agreed that it sounded exactly like him. It kind of was him. Only he never said it. So this is where we are.

One of the earliest cases of mass hysteria over a fake photograph took place in 1917, when nine-year-old Frances Griffiths and her mother – both newly arrived in the UK from South Africa – were staying with Frances’s aunt Polly and cousin Elsie, in the village of Cottingley in West Yorkshire. Elsie and Frances played together in the valley at the bottom of the garden, and said that they went there to see the fairies. To prove it, Elsie borrowed her father’s camera and dark room and produced a series of photographs of the two of them … well … playing with fairies. While Elsie’s father immediately dismissed the photographs as fakes and a prank on the part of the girls, Elsie’s mother was convinced by them. The pictures came to the attention of the writer Sir Arthur Conan Doyle, who used them to illustrate an article on fairies he had been commissioned to write for the Christmas 1920 edition of The Strand Magazine. As a spiritualist, like many thinking men of his time, Doyle was convinced by the photographs, and interpreted them as clear and visible evidence of psychic phenomena. He was not alone.

One of five photographs taken by Elsie Wright (1901–1988) and Frances Griffiths (1907–1986)

This week’s horrendously doctored photograph of the Princess of Wales and her three children was an extraordinary example of incompetence on the part of the Royal family’s comms team. Into a world which is already somewhat alive with gossip about the welfare and whereabouts of the princess, they released a photograph so badly edited that it was immediately withdrawn by major news outlets such as Reuters and the Associated Press. Reporting in the mainstream media has remained broadly philosophical and seems to accept claims by the palace that Kate herself had simply been messing about with Photoshop, that the photograph is a poorly-executed mash-up of several frames. Those of us who hang around on the internet, however, will know that the incident has sent the world into meltdown, with theories as to the whereabouts and welfare of the princess going wild. Many people believe that the botched photograph is a complete fake that proves Kate is currently either unwilling or unable to be photographed for real. Some have even convinced themselves that she is dead.

In a world where trust in authorities is becoming more and more eroded, I wonder whether the advent of AI will make us more and more afraid. I am a little afraid myself. Photographs used to be damning evidence and fake versions of them so obvious that they held no sway in a court of law or in the court of (reasoned) public opinion. These days, not only can convincing photographs be easily faked, but this fact opens up what is perhaps an even more frightening prospect: that anyone will be able to get away with anything, simply by claiming that the evidence as to their guilt is faked. Caught me with my hands in the till? It’s a fake. Caught me on camera with the secretary, darling? It’s a fake. Caught me attacking that person? It’s a fake. I find myself wondering how any of us will ever be sure of anything in the future.

The reluctant Luddite

I am anything but a Luddite. Technology is remarkable and wonderful and I could not be luckier to have been born in the late 20th century and have the privilege of seeing our access to the written word proliferate thanks to the digital world.

As someone cursed with poor (and increasingly deteriorating) eyesight, I thank my lucky stars on a daily basis for the advent of smart screens, giving me the power to choose the nature, size and resolution of fonts, not to mention the simply glorious dawn of the audiobook. The younger among you will not recall, but the reading options for people with poor eyesight even just 20 years ago were dismal: a vanishingly small number of books were put onto audio CD and very few places stocked them. These days, the best actors are squabbling over the reading rights to books. Not long ago, I listened to a simply perfect narration of The Dutch House by Ann Patchett, read by some chap called Tom Hanks. In a world where current research seems to indicate a worrying downturn in children reading for pleasure, I support any and all routes for them to access stories and tales, by whatever means.

As a result of all this, I always feel slightly uncomfortable when I find myself making a case against digital technology. I am the last person who should criticise it, for I acknowledge and appreciate the huge benefits that the advent of the internet and digital technology has brought to me. Not only could I not do my job without them, but my life would also be infinitely poorer and less diverse. Yet one must always be cautious about what one is throwing away, and when it comes to children’s development of literacy we should be particularly so. First and foremost, we should be hyper-focused on the best ways of helping children to learn to read and write.

In January, the Guardian highlighted that “a ground-breaking study shows kids learn better on paper than on screen,” but the truth is that this information has been out there for at least two decades. Modern cognitive science shows that the motor and sensory aspects of our behaviour have a far-reaching impact on our knowledge and recall. Of course they do. Our brain is an embodied phenomenon that makes sense of the world through the physical data it receives. In a study carried out way back in 2005, subjects were shown a series of words and asked to indicate whether each word was positive or negative by moving a joystick. Half of the subjects were told to indicate that a word was positive or “good” by pulling the joystick towards their bodies, while the other half were told to indicate “good” by pushing it away. A consistent correlation was observed between meaning and movement: the quickest, most accurate and most confident responses were produced by the subjects who were told to indicate “good” by pulling the joystick towards themselves, and to indicate “bad” by pushing it away. The hypothesis is that this relates to our natural embodied state – what’s “good” feels natural to draw physically towards us, while what’s “bad” feels like something we should push away. This direct and inherent involvement of the body and senses in our cognitive processes helps to explain how writing by hand (as opposed to on a keyboard or a tablet) helps us to learn letters and words most efficiently. The fact that forming letters by hand is superior to doing so with the use of technology is well accepted among cognitive scientists and literacy specialists.

Furthermore, it is not just the early-years essentials of learning to write that are supported by the process of hand-writing. A study in 2021 compared subjects’ recall of words learned either by typing or writing by hand and found that recall was better when words had been learned using a pen and paper. In another study, a small group of adults learned symbols from an unfamiliar language that they then had to reproduce with either a pen or a keyboard. When they had finished learning the symbols, there were no differences in recall between the two methods, but the keyboard users forgot a significant amount of what they had learned as time passed. In other words, the process of handwriting the symbols was much more effective for long-term recall. Evidence for the effectiveness of handwriting over typing when it comes to learning is now pretty overwhelming and neuroscientists suggest that learning with a pen and paper is better because it is more “embodied,” meaning that it involves more complex sensory-motor feedback for each letter as it is written down. This complexity leaves a more distinctive blueprint in our memories and hence makes things easier to memorise and recall.

I have written before on a methodology I teach to help students to learn their set texts off by heart. The process involves writing down the first letter of each word and works only if students do so by hand. The effectiveness of the method is increased hugely if the student can be persuaded to say the whole word aloud as they write the letter. So, to learn the opening line of Portia’s speech to Shylock in The Merchant of Venice, students would say out loud “The quality of mercy is not strained” while writing the letters “T q o m i n s” in time with their articulation of the words. The physicality of the process and the immersive nature of writing, saying and repeating is quite remarkably powerful and I have never had a student fail to learn the texts using this method.
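For the curious, here is a minimal sketch (in Python, with an illustrative function name of my own) of what that first-letter prompt looks like when generated mechanically. The whole point of the method, of course, is that students form the letters by hand while saying the words aloud – the code is only there to show the transformation.

```python
def first_letter_prompt(line: str) -> str:
    """Reduce a line of text to the first letter of each word,
    keeping the original capitalisation and word order."""
    return " ".join(word[0] for word in line.split())


print(first_letter_prompt("The quality of mercy is not strained"))
# prints: T q o m i n s
```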

The data and current research on the importance of physical texts and handwriting have not gone unnoticed. Sweden, a country often cited as superior to ours when it comes to education, experienced a downtrend in literacy levels from 2016 onwards and is back-pedalling wildly on its roll-out of digital technology in schools, returning to a focus on physical books and handwriting. What’s worrying for me is that the trend may be going in the opposite direction in the UK. Perhaps most worrying of all, the major examination boards have all indicated their desire to move towards digital examinations, despite the overwhelming chorus of dismay from Headteachers across the country who know that they simply do not have the infrastructure to support such a move. It is unsurprising that examination boards want to push the digital model, as the current process of collecting and digitising examination scripts no doubt costs them a fortune; but beyond the logistical nightmare for schools that the digitisation of examinations will present, I genuinely fear for the impact on students’ literacy and understanding. A move towards digital examinations will push schools further down the road of letting students do everything on screen (many private schools and well-funded academies are already there) and the effect on their learning will be catastrophic. Some of the students I work with are already in this position and their grasp of the texts they are learning is woeful; their teachers allow them access to a simply overwhelming number of documents, all of which they are expected to have the skills to access and draw information from, when in reality they have little to no idea what’s actually in front of them and how that relates to what they need to commit to memory.

So I find myself a somewhat reluctant Luddite, telling my students to reach for a notepad and pen and encouraging them to form letters on a page by hand. The irony in the fact that I am doing so over Zoom is not lost on me, but here’s the thing: technology is incredible, it is life-changing, it is illuminating, it is wonderfully democratic and a great leveller for those of us with physical disabilities. We must, however, be circumspect about how we use it and thus ensure that we do not unwittingly lose more than we gain.

Julius Caesar and the longest Leap Year in history

When it came to taking charge of chaotic situations, Julius Caesar did not mess about. The stories surrounding his courage on the battlefield, his talent for strategic thinking and his downright tenacity are countless, but did you know that tackling the hopelessly disorganised Roman calendar and introducing the concept of the Leap Year was also among Caesar’s claims to fame?

Picture the scene. You’re a farmer in the 1st century BC and – according to the calendar, which revolved around the rituals of state religion – you ought to be doling out ripe vegetables ready for the festivals of plenty. Yet to you and any of your slave-labourers, for whom the passage of the seasons is essential, it is clear that those harvests are months away from fruition. How did this end up happening? Well, the Roman calendar had become so out of sync with astronomical reality that annual festivals were starting to bear little resemblance to what was going on in the real world. Something had to be done.

Julius Caesar wanted to fix the mess, but this was no mean feat: it meant shifting the entire Roman empire and all its provinces onto a calendar that was properly aligned with both the rotation of the Earth on its axis (one day) and its orbit of the Sun (a year). Caesar’s solution not only created the longest year in history, adding extra months to the calendar during that year; it also anchored the calendar to the seasons and brought us the leap year. It was a phenomenal task. We are in 46BC, otherwise known as “the year of confusion”.

Centuries prior to Caesar’s intervention, the early Roman calendar was drawn up according to the cycles of the Moon and the agricultural year. This agricultural focus gave rise to a calendar with only 10 months in it, starting in spring, with the tenth and final month of the year roughly equivalent to what we now know as December. Six of the months had 30 days, and four had 31 days, giving a total of 304 days. So what about the rest? Well, this is where it gets really weird. For the two “months” of the year when there was no work being done in the fields, those days were simply not counted. The Sun continued to rise and set but – according to the early Roman calendar – no “days” officially passed. As far back as the late 8th century BC people realised that this was a little unhinged, and King Numa, the second King of Rome, tried to improve the situation by introducing two extra months to cover that dead winter period. He added 51 days to the calendar, creating what we now call January and February, and this extension brought the calendar year up to 355 days.

If you think that 355 days seems like an odd number, you’d be right. The number took its starting point from the lunar year (12 lunar months), which is 354 days long. However, due to Roman superstitions about even numbers being unlucky, an additional day was added to make a nice non-threatening 355. At the same time, and for the same reason, the months of the year were arranged in such a way that they all had odd numbers of days, except for February, which had 28. February, as a result, was considered to be unlucky and became a time during which the dead were honoured as well as a time of ritual purification.
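If you want to check the arithmetic behind those figures, a few lines of Python will do it; this is simply a sanity check of the numbers quoted above, not a scholarly reconstruction.

```python
# Sanity-checking the early Roman calendar figures quoted above.
early_year = 6 * 30 + 4 * 31   # six months of 30 days, four of 31
print(early_year)              # 304 days in the original ten-month year

numa_year = early_year + 51    # Numa's 51 added days (January and February)
print(numa_year)               # 355 days

lunar_year = 354               # twelve lunar months
print(lunar_year + 1)          # the extra "lucky" odd day also gives 355
```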

This all looks like good progress, but the Romans were still around 10 days short of the solar year, and even with all the improvements made it remained inevitable that the calendar would gradually drift further and further out of sync with the seasons, which are governed by the Earth’s position in relation to the Sun. By the 2nd century BC things had got so bad that a near-total eclipse of the Sun was observed in Rome in what we would now consider to be mid-March, but it was recorded as having taken place on 11th July.

Increasingly unable to escape the problem, the College of Pontiffs in Rome resorted to inserting an additional month called Mercedonius on an ad-hoc basis to try to realign the calendar. This did not go well, since public officials tended to pop the month in whenever it suited them best politically, without sufficient focus on the goal of re-aligning the calendar with the seasons. According to Suetonius, if anything it made the situation worse: “the negligence of the Pontiffs had disordered the calendar for so long through their privilege of adding months or days at pleasure, that the harvest festivals did not come in summer nor those of the vintage in the autumn”.

An extra Mercedonius month was already planned for 46BC – Caesar’s year of confusion. But Caesar’s Egyptian astronomical advisor Sosigenes warned that Mercedonius wasn’t going to be enough this time and that things were getting drastic. On the astronomer’s advice, Caesar therefore added another two extra months to the year, one of 33 days and one of 34, to bring the calendar in line with the Sun. These additions created the longest year in history: 15 months, lasting 445 days. Caesar’s drastic intervention brought the calendar back in line with the seasons, meaning that the practice of inserting the ad hoc extra month of Mercedonius could be abandoned.
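Again, the arithmetic can be sketched out in a couple of lines; note that the 23-day length of Mercedonius is my assumption (the commonly cited figure), since the paragraph above specifies only the 33- and 34-day additions.

```python
# Accounting for the 445-day "year of confusion".
ordinary_year = 355            # the standard pre-Julian year
mercedonius = 23               # intercalary month already planned for 46BC (assumed length)
caesars_additions = 33 + 34    # the two extra months added on Sosigenes' advice

print(ordinary_year + mercedonius + caesars_additions)  # 445 days
```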

Of course, getting the calendar to line up with the Sun is one thing; keeping it that way is quite another. As an astronomer, Sosigenes was well aware of the problem. The issue arises from the inconvenient fact that there isn’t a nice round number of days (i.e. Earth rotations) in a year (i.e. one Earth orbit of the Sun). The number of Earth rotations on each of its trips around the Sun is – I am reliably informed – roughly 365.2421897. Hence the problem and hence the need for a leap year. The Earth fits in almost an extra quarter-turn every time it does a full orbit of the Sun. Sosigenes therefore calculated that adding an extra day every four years – in February – would help to fix the mismatch. It doesn’t completely solve the problem forever, but it was a jolly good stop-gap.
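A back-of-the-envelope calculation of my own, using the figure quoted above, shows both why the fix worked and why it was a stop-gap rather than a permanent solution.

```python
# How far does "one extra day every four years" drift from the true year?
tropical_year = 365.2421897     # Earth rotations per orbit of the Sun, as quoted above
julian_average = 365 + 1 / 4    # 365.25: the average year with a leap day every fourth year

drift_per_year = julian_average - tropical_year
print(drift_per_year)                 # ~0.0078 days of drift per year
print(drift_per_year * 24 * 60)       # ~11 minutes per year
print(1 / drift_per_year)             # a full day of drift roughly every 128 years
```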

Photo by Dan Meyers on Unsplash

Pyramid schemes

For every dubious claim in education, there’s a pyramid. Educationalists love them. Whether it be Bloom’s taxonomy, Maslow’s hierarchy of needs or Dale’s cone of experience (otherwise known as the learning pyramid), it’s got to be presented in that shape, preferably one with a rainbow of colours. A pyramid diagram means it must be true.

Quite how anyone could ever be convinced by statements such as “we recall only 10% of what we read” is fascinating to me. Think about it. We recall only 10% of what we read?! That’s demonstrably ridiculous. This is not the only verifiably false claim I have had presented to me during my 21-year career in the classroom. I’ve listened to countless dubious assertions about how the brain works, made by people who probably struggled to pass their biology O level. I’ve sat through demonstrations of “Brain Gym”, during which I was told that waggling your head back and forward “oxygenates your frontal cortex”. I’ve been told that mind-map diagrams are the best and only way to present information to students because they look a bit like the branch-like structure of brain cells under a microscope. I’ve been told that some children’s brains work better on the left than they do on the right, and that whether they are “left-brained or right-brained” will influence their learning outcomes. These are the kinds of mind-bogglingly ridiculous assertions that were made in schools all over the country while exhausted teachers sat on plastic chairs in draughty halls and listened to them. The insult to our intelligence, never mind the sorry waste of taxpayers’ money on this drivel, makes me feel quite ill.

Yesterday I attended an online presentation given by John Nichols, the President of The Tutors’ Association and someone I worked with when I was a member of the Board of Directors of that Association a few years ago. John is an intelligent man of great integrity and has an excellent working knowledge of educational theory in all its glorious mutations, not all of them for the good. He took us on a whistlestop tour of some enduring ideas from psychologists in the 1950s, through the persistent neuromyths that have been debunked a thousand times but just won’t die, right up to the useful stuff at last being brought to us by neuroscientists about working memory, cognitive load and schema theory. It is truly heartening to know that this kind of information is being shared with tutors who are members of the Association and with luck it will start to filter through and influence the way people work.

Teachers are a cynical bunch, and it would be easy for those of us who have been swept away by the tsunami of nonsense over the years to be cynical about the more recent developments in educational theory. I am not, and here’s why: they’re applicable to learning at a practical level and they work. When you apply the key principles of retrieval practice and spaced learning, you see an immediate and dramatic improvement in learning outcomes for your students. When you bear in mind cognitive load and attempt to reduce the pressure on students’ working memory in the classroom, you likewise see results. None of this was true of the old stuff, which caused nothing but obfuscation and distraction in the classroom. Even when I first joined the profession as a rookie and was regrettably at my most susceptible, there was a little voice in my head telling me that this stuff was – to borrow the phrase of my old Classics master – a load of old hooey.

A part of me wishes that I’d listened to that voice sooner, but I should not be too hard on my former self, I think: it is difficult to stand against a tidal wave of so-called information when your bosses are telling you it’s all real and are also telling you that you’ll be marked down as a bad teacher if you don’t dance to their tune. When I think about the wasted hours I spent in my career trying to apply principles that were clearly nonsense because I was told to, I could weep. All of that time could have been so much better spent.

Happily, nobody now dictates to me how I work. I apply the principles that are evidence-based and work for my students. The overwhelming majority of them respond readily. For some, the simplest of techniques can feel like a revelation or a miracle, which only serves to show how far some schools have yet to go in distilling this information to their frontline teachers. To be honest, I am sympathetic to schools who remain suspicious about advice on how children learn. You can only try and sell people so many pyramid schemes before they develop a pretty cynical attitude towards any kind of salesmen.

Photo by Gaurav D Lathiya on Unsplash

First, do no harm

primum non nocere: first, do no harm.

A principle traditionally associated with the Hippocratic oath

As Tom Bennett OBE wrote on the platform formerly known as Twitter this week, “Even qualified practitioners are bound to ‘do no harm’. But the desire to support children leads many schools to well-meant but potentially damaging mental health ‘interventions’.”

This week I have listened to a quite horrifying piece of investigative journalism by the Financial Times into Goenka mindfulness retreats, at which attendees are encouraged to practise an extreme kind of meditation known as Vipassana. People on the retreat are not allowed to speak and strongly discouraged from leaving for 10 days. They are awakened at 4.00am, deprived of food and taught to meditate for multiple hours per day. Anyone who struggles with the process or becomes confused or distressed is encouraged to keep meditating. For those of you with even the most basic grasp of mental health and wellbeing, it will not come as a massive shock to discover that some people are affected very negatively by this process. I recommend you listen to the podcast but please be aware that it does not shy away from some very difficult material: there are people who have lost their loved ones to this process.

Human beings are social animals. We have evolved to live in groups and we know that extreme social isolation and withdrawal have a very negative effect on mental health and wellbeing in an extremely short time. The dangerous impact of solitary confinement is well-documented and has caused neuroscientists to campaign against its prolonged use in the penal system. Even good old-fashioned and ever-familiar loneliness has been proved to have a significant impact on a person’s health and longevity, never mind their psychological well-being. It should not surprise us in the least to discover that a process which demands people shut themselves off from each other and concentrate entirely and exclusively on what’s inside their own head carries the risk of a psychotic break.

As part of my studies during my degree in Classics I did a course on the rise of Christianity in the Roman world. I recall reading an account of the life of St Antony by the Bishop Athanasius and being particularly struck by a passage that reports upon his demeanour when leaving a fortress in which he had shut himself for 20 years in order to commune with God and battle his demons. It reads as follows:

“Antony, as from a shrine, came forth initiated in the mysteries and filled with the spirit of God. Then for the first time he was seen outside the fort by those who came to see him. And they, when they saw him, wondered at the sight, for he had the same habit of body as before … but his soul was free from blemish, for it was neither contracted as if by grief, nor relaxed by pleasure, nor possessed by laughter or dejection. For he was not troubled when he beheld the crowd, nor overjoyed at being saluted by so many.”

While I do not wish to mock or offend anyone’s deeply-held beliefs, it seems pretty clear to me that this is a description of someone who has completely detached from other human beings and is suffering from the psychological effects of that process. While the religiously-minded among you may see this as an account of someone in touch with the holy spirit, I see it as an account of someone who is suffering from a psychotic break. Antony is described as being unmoved by and disconnected from the people around him, in possession of a strange kind of detachment. Given that he had spent 20 years in isolation while – in his mind – battling between good and evil, this is not greatly surprising.

During my final few years in mainstream education there was a big push on “mindfulness” for all students. This was what Tom Bennett was referring to in the Tweet I quoted at the start of this blog and I share his concerns about this growing trend. The mental health of young people is a painful and emotive issue and has been brought into sharp relief once again with calls from a grieving mother asking for mindfulness to be rolled out across all state schools (although it is already being promoted and practised in many). As Daniel Bundred wrote on the same platform as Tom a few months ago, “Schools probably shouldn’t do mindfulness, because most teachers are fundamentally unqualified to lead mindfulness, and entirely unequipped to deal with the potential outcomes of it.” As he puts it, “Mindfulness strikes me as being very similar to guided meditation in approach and potentially outcome; how many teachers could handle a student experiencing ego-death in their classroom? Ego-death is a potential outcome of successful meditation, it’s not desirable in tutor time.” Daniel here is referencing exactly the kind of experience undergone by the young people who suffered a psychotic break at the Goenka retreats. This is of course the worst-case scenario and, while not widespread, it is crucially important to consider if we are to stick to the concept of “do no harm”; the advocates of the Goenka retreat point to the many people who say that meditation has helped them, as if the handful of attributable deaths are therefore irrelevant. It is essential to remember that teachers (like the volunteers at the Goenka retreats) are not mental health experts; fiddling about with something as potentially profound and intimate as mindfulness or meditation is profoundly dangerous and goes way beyond the remit of educators.

Beyond the enormous risk of potential harm to a student who may have experienced past trauma or may simply not be an appropriate candidate for mindfulness for a variety of reasons, there is an increasing amount of evidence indicating that mindfulness in schools does no good for anybody. A recent study revealed no tangible positive outcomes, which places the profound risk of harm to some in an even more alarming context. Why are we doing something with risks attached to it when there are no estimable benefits anyway? Beyond this, why are we demanding that teachers expend their time and energy on something unproven and valueless?

Tom Bennett is right. As he puts it: “The best way to support children’s mental health in a school environment? Provide a culture that is safe, calm and dignified. With purposeful activities.” In our desperation to support the most vulnerable of children, we must never forget the simple power of providing routine, stability and boundaries for those whose personal and emotional lives may well (for all we know) be dominated by chaos, trauma and distress. The more we acknowledge that some children face the most horrifying of circumstances, the more essential the security of our education system becomes. School and the reassurance that its stability provides is a lifeline for many of our children. This is what we should be providing for them.

Photo by Colton Sturgeon on Unsplash

False judgements

Emotions got a bad rap from ancient philosophers. Most agreed that the ideal state was a kind of calmness that the Hellenistic philosophers (most famously the Epicureans and the Stoics) called ataraxia. There was even talk of apatheia – a detachment from the chaos of feelings and overwhelm. This is perhaps unsurprising if you understand the birth of western philosophy; if you’re trying to formulate, define and distil the key to the perfect life and the perfect society (which is what the early founders of western philosophy were trying to do) then it probably doesn’t include your citizens experiencing a rollercoaster of emotions. Once you’ve admitted that emotions are a bit of a distraction and often cause issues both on a personal level and for society, it’s not much of an overreach to find yourself arguing for a state of detachment.

The term “stoic” these days is synonymous with having a “stiff upper lip” but this is based on a crucial misunderstanding of the Stoic position. The Stoics did not advocate for iron-clad self-control or suppressing your feelings. Rather, they believed that all emotions were what they called “false judgements”, which meant that they were based on a misunderstanding: if you’re feeling them, you’re still getting it wrong. In the ideal philosophical life that they strove for, a person would have such a great understanding of himself, the world and his place within it that he would not suffer at the slings and arrows of outrageous fortune: he would simply nod and know the right thing to do. One example given is that a Stoic would run into a burning building in order to attempt to save a child because that is the right thing to do; they also argued, however, that a true Stoic would feel no distress when his mission failed. Weird, isn’t it? Interesting, though.

One of the frustrating things about this period of philosophy is that much of the writing we have consists of general “sayings”, snippets or purported quotations which appear in the works of later authors, usually writing in Latin rather than in Greek, and reporting on what a particular thinker or school of thinkers believed. The reality of this, of course, is that they may be wrong. For example, there is a famous quotation attributed to Epicurus that states “the wise man is happy on the rack”. Quite how this works within a school of philosophy that was dedicated to the avoidance of pain is puzzling. If the quotation is correct, our best guess is that the Epicureans certainly spent a lot of their time considering the correct attitude towards unavoidable pain, for this was one of the biggest challenges to their philosophical position; presumably the “wise man” – someone at the pinnacle of philosophical endeavour – would know how to cope with pain in extremis.

Most people see Epicureanism and Stoicism as polar opposites and they were indeed rival schools of philosophy at the time. As so often, however, there was more that united them than divided them. Both schools were arguing and aiming for the perfect life and the state of detachment that philosophers before them had explored; both schools were concerned with how to manage our responses to pain and distress. Perhaps the biggest difference is that the Stoics believed in proactive, conscious and deliberate involvement in society and its structures, whereas the Epicureans were a bit more lethargic about the whole idea – getting involved with politics is painful and distressing, so is it really rational to bother?

One philosopher, writing before the Stoics and the Epicureans, was unusual in his take on emotions. Aristotle argued that emotions were appropriate and necessary: the trick was understanding when and how you should be feeling them and what to do with them. He spoke of “righteous anger” and argued that a good philosopher would indeed feel such a thing. It is difficult to explain how truly radical this position was, when the way the philosophical movement was drifting was towards ataraxia and apatheia. Aristotle also smashed through the Socratic idea that philosophical ideals such as “courage” and “justice” could be defined in one way and that if one could not do so then one lacked an understanding of them. Aristotle argued that there were multiple forms of “courage” and “justice” and that nobody could define them in one simple way nor apply their principles in individual cases without discussion, debate and compromise. What a genius he was.

Why the hell am I writing about this? Well, I spoke to a friend yesterday who has taken a decision about which she feels guilty. I cannot divulge the details of this decision as I do not want to betray her confidence. Suffice to say that it was a professional decision, the right decision and one which the people affected will hopefully benefit from in the long-run. There is no doubt – in my mind and even in hers – that the decision was right and good. Yet she still feels what she describes as “guilty” about it.

This reminded me yet again of The Greeks and the Irrational by ER Dodds, a book written in the 1950s, which I mentioned in another blog a few weeks ago. One of the chapters in the book argues that the Athenian world was a “shame culture” and that later ancient societies – the Hellenistic world and the Roman world – began the shift towards a “guilt culture”. I have thought about this on and off all of my life. The very thought that the nature of one’s emotions can be dictated by the society in which one grows up is fascinating to me. Dodds argues (rightly, I think) that modern society is more person-centric and hence feelings such as guilt can be internalised; in Athens, one’s personal standing and engagement with society was more relevant (a symptom perhaps of living in a small and emergent city-state) and therefore a sense of shame before others was more powerful than any kind of internalised guilt.

As I listened to my friend who left me some WhatsApp voice messages (I love them – it’s like receiving a personalised podcast!) I found myself wondering whether the Stoics had it right. Sometimes emotions truly are false judgements. My friend has no reason to feel guilty about her actions and she should strive to release herself from the false state of mind in which this feeling distresses her. According to the Stoic ideal she has prevailed in her actions but has not yet achieved the ideal state of detachment. So how should she achieve this goal? Well, I guess it depends on your approach to these things. A Stoic would advocate for rigorous rational analysis and say that this will eventually lead to release from one’s feelings. This is not, in fact, a million miles away from cognitive behavioural therapy, the therapy model supported by psychiatrists and many psychologists, who would say that she needs to question why she feels guilty and challenge her reasons for doing so. A psychologist with leanings towards the psychodynamic model would argue that she needs to explore where her feelings might stem from – does the situation remind her of experiences in her past, during which she has been made to feel or to carry guilt that perhaps should not have been hers? (Pretty sure the Stoics wouldn’t have been up for that one).

Whatever the answer in this particular circumstance, personally I find myself returning to the Stoics time and again. They were a fascinating turning point in philosophical history and paved the way – I believe – towards modern psychiatry. After all, what is the difference between sanity and insanity if not the difference between the rational and the irrational, the true and the untrue, the controlled and the uncontrolled? I will leave you with the Stoic image of how the individual should relate to society – not because I advocate for it, necessarily, but because it’s a classic and a model I have never stopped thinking about since I first learned about it in the 1990s. The Stoics believed that individuals could not control fate but they also argued that individuals had free will. So an individual person is like a dog tied to the back of a wagon. Whatever the dog’s actions, the wagon will go on its way. So how does the dog have free will? Well, he can resist the wagon and be dragged along, impeding the wagon’s progress and damaging himself along the way. Alternatively, he can trot along like a good dog and help the wagon to proceed smoothly.

This incredible photo is by Jaseel T on Unsplash.
It was taken in the Museum of the Future in Dubai

Perchance to dream?

Last night I dreamt that Roald Dahl was in prison. Not exactly “I went to Manderley again” as an opening line, but it’s the truth.

Despite centuries of interest in the subject and recent studies with all the benefits of modern science, dreams are still not fully understood. They are generally acknowledged to be a by-product of evolution and quite possibly the brain’s way of processing and sorting information, but exactly how and why they occur is still debated. Some neuroscientists and psychologists argue that they help us to organise our memories, others suggest that they are part of the important process of forgetting or “dumping” unnecessary clutter from our minds. Some believe that they are a way of safely practising difficult scenarios, and some have even claimed that the frequency of dreams in which we are being chased – particularly in childhood – is evidence for their origins in our early evolutionary history. I’m not sure I buy that, not least because it falls into the trap of believing that everything that evolves does so for an obvious purpose. Dreams may simply be a by-product of our extraordinarily large and complex brain-structures: they may not necessarily be essential or advantageous in the battle of survival and reproduction. One thing’s for sure, it is frequently difficult to explain how a particular story ends up being told in one’s mind overnight; last night, my brain placed a long-dead children’s author behind bars.

Dreams mainly occur while we are in REM sleep, which for adult humans makes up only around two hours per night of our sleep time. Yet some research indicates that a human foetus in utero, by the time it reaches the third trimester, spends around 20 hours out of each 24-hour cycle in REM sleep. Is the foetus dreaming for all of that time? If so, what on earth is it dreaming about and how does that relate to the commonly-accepted idea that dreams are remnants of our thoughts?

When I was doing my PhD I spent an inordinate amount of time going down rabbit holes of research into this kind of thing. The ancient work I studied (which I have written about in a little more detail before) mentions in passing that messages from the gods come to us in the hazy state between sleeping and waking, a state now defined as “hypnagogic” and one into which there has been a considerable amount of research. I became fascinated by the idea of different brain-states and how people may experience phenomena such as auditory hallucinations and thus become convinced that they are receiving messages from a divine source. I read all sorts of stuff written by anthropologists, neurologists and psychologists and realised just how little I knew about the grey matter inside my own skull.

When it comes to studying, one of the things worth knowing about the brain is that “memory is the residue of thought”, meaning that “the more you think about something, the more likely it is that you’ll remember it later” (Daniel T. Willingham). This might seem obvious but you wouldn’t believe how little consideration is given to this fact in our education system. Students will only recall things that they are actively thinking about – reading and highlighting, for example, are both passive activities which are very unlikely to aid recall. If you need to absorb, understand and recall the information written on a page, you should put the book down and reproduce its contents in your own words in order to have any chance of being able to remember it. This process forces your brain to begin forming memories, which are in fact reconstructions: memory doesn’t work like a recording; rather, the brain is constantly reconstructing its past experiences, which explains why eye-witness accounts are so unreliable and why each individual may remember the same situation very differently from other people.

All of this means – I’m afraid – that those fantasies people have about listening to recordings while they sleep and miraculously waking up knowing the information on the recording really are that – just fantasies. The brain is not a computer: you can’t do a reboot and download while it’s powered down. Much as one would like to wake up like Neo in The Matrix with a newfound perfect knowledge of and ability to perform Kung Fu, the reality is that learning new information or a new skill requires constant use, review and practice.

All of that said, it is undeniable that sleep (and – for reasons we have yet to understand – dreaming) is essential for good learning. This is not only because exhaustion is detrimental to study, it is also because that downtime really is important for the brain to be able to do its job properly, especially when we are making big demands of it. Further to this, “sleeping on a problem” can often make a huge difference, in ways that are once again not fully understood. My father, a brilliant engineer, often reported waking up with a solution to a problem he had been grappling with and failing to solve during his waking hours. Similarly, I have found that I can be completely stuck on a crossword clue but when I come back to it the next day and pick up the clue again, the solution seems blindingly obvious, even though I have given it no proactive thought in the last 24 hours. This kind of background problem-solving really is a fascinating quirk of brain-states and one I wonder whether neuroscientists will be able to explain in the future.

Many parents worry that their children are not getting enough sleep and there is certainly a lot of evidence that many young people, particularly teenagers, are sleep-deprived. The best advice remains to observe good digital hygiene: do not under any circumstances allow your child to take their devices to bed. Personally, I do have my phone beside my bed, but all notifications switch off after my bedtime (you can set emergency numbers from loved ones as exceptions to this rule, by the way) so it does not disturb me after I have gone to bed and I am not fascinated enough by it to have the urge to check it during the night. This is not true of most teenagers when it comes to their smartphones, and they need protecting from this temptation.

I have resolved to read more about dreaming and sleep-states, as I have no doubt that the research has moved on since I last dipped into this field. One of my favourite games to play is to try to trace where my dreams have come from. Why did I put Roald Dahl behind bars? Well, this week I’ve been watching a police drama with lots of scenes in cells, plus I have also read a fair bit about “cancel culture” over the last few weeks, which may have set off a chain of links in my mind to something I read about Dahl’s works being edited to remove language that is deemed not to resonate with the current zeitgeist. Is that where it all came from? Quite probably. Dreams are rarely, if ever, significant. I look forward to increasing my knowledge. Perhaps we now know whether androids dream of electric sheep.

Photo by Ihor Malytskyi on Unsplash

Post-mock post-mortem?

No matter how many years I spent at the chalkface, I remained unconvinced as to the value of dissecting children’s Mock papers in class. While there was always an urge to pore over mistakes and demonstrate to students exactly what they should have written, I never felt that the process added as much value as I would have liked. Now that I am separated from the classroom, it is perhaps easier to reflect on why that might be.

Even if students have already received their overall grades (my old school used to dish them out in envelopes to give them the “full experience” of receiving their results), the class in which students first gain sight of their papers is the one where they see how they performed in the separate papers of each exam. In most schools, they may also have just been told their overall grade by the teacher. This, to me, is the problem. When Black and Wiliam first published their seminal work on assessment for learning (a concept they now wish they had named “responsive teaching”), they observed that students take significantly less notice of feedback if there is a grade attached to it, rendering the process of feedback close to pointless. This should not surprise us greatly: it is a natural response to be fixated on how you performed overall rather than the minutiae of why that result has come to pass, especially when the overall performance grade is high-stakes. It is very difficult for students to let go of their emotional response to their grade (whether it be good or bad) and concentrate on the feedback offered. This goes especially for students who are shocked and/or upset by their result, and thus calls into question the wisdom of the entire process.

It is difficult for classroom teachers to know what to do for the best. Every instinct drives any good teacher to provide detailed feedback to individual students and to the class, but to do this effectively can be close to impossible for a variety of reasons. Imagine a class in which some students have performed superbly while others have truly bombed. The inevitable emotional response from students to their performance will make the class in which feedback takes place highly charged and potentially difficult to manage. Moreover, the students who perform most poorly will probably benefit the least from the process, which leads me to conclude that there is little point in doing it at all. To not do so, on the other hand, can feel like letting those students down and failing to explain to them where they went wrong. It would take an immense amount of self-belief and confidence for a teacher to abandon the practice altogether.

Yet let us consider the point of feedback. If students are not shown explicitly how they can improve their grade next time round, it is inherently pointless. This may well mean that the traditional “going through the paper” is close to irrelevant to those students who performed badly in it, since they will gain little to nothing from the process of being shown the correct answers. With my own tutees, I give them headline information about their performance by telling them the areas they need to focus on and/or the types of questions we need to practise. We will then practise other questions of the same type. This is much more effective than raking over the smouldering embers of their cataclysmic performance under pressure – a process which is simply too threatening and disheartening to be of value.

I am coming more and more to the conclusion that Mock exams should be there to inform the teacher what the students don’t know, affording them the opportunity to focus their teaching time on those particular areas in the remaining weeks of the academic year. Mocks are not something which most students can successfully analyse for themselves, nor can they diagnose their own problems from them. The pressure on teachers to “go through” the Mocks at a granular level is huge, but really the process has limited – if any – value to students. We need to trust teachers to provide and guide the learning curve that students should go through, based on how they performed.

Photo by Joshua Hoehne on Unsplash