I have always thought of myself as someone who is fundamentally miswired when it comes to getting myself about. My internal compass is not just slightly out of kilter, you understand, but fundamentally and dramatically wrong — enough to turn a ten-minute walk into a humiliating loop past the same shop three times. My sense of direction isn’t just bad, it is non-existent, as if I had been born without whatever quiet internal certainty tells other people, “this way makes sense.” I’ve always envied those who have that innate sense of the right way: my father navigates the world like an indigenous man on walkabout. Sadly, I did not inherit a single iota of his instincts, to the extent that I’m surprised that I know the difference between left and right.
For most of my life, my appalling sense of direction has been a significant handicap, and I suspect it has had an impact on my confidence in other spheres. If I can’t reliably find my way across a town, or even around the building in which I work, how on earth am I supposed to navigate anything larger: major life decisions, ambitions, the invisible map of a life? Other people seem to stride forward with unseen coordinates guiding them, while I hover at crossroads, second-guessing, recalculating, almost always choosing a path with the distinct feeling that I might soon regret it.
Then, almost overnight, the world changed. Or rather, the tools in my pocket did. What used to require a well-thumbed but mysterious (to me, at any rate) A to Z, guesswork and a regular prostration of my dignity before the mercy of strangers has now been superseded by the reassuring warm glow of technology. Even more wondrously, it doesn’t sigh or say “I told you so” when I end up getting lost. It simply recalculates, endlessly patient, as if wrong turns are not failures but part of the process.
At first, I used it defensively. I would check directions obsessively, zooming in on routes, memorising landmarks as if preparing for an exam. But slowly, something unexpected happened. I stopped treating navigation as a test I had to pass and started treating it as a conversation. I could walk, drift even, knowing that if I wandered too far off course, I wouldn’t be lost — I would just be somewhere new, with a way back always available thanks to the super-duper magic pocket-wizard.
When you no longer fear being lost, the world opens up in subtle ways. Streets become less like corridors you must follow correctly and more like possibilities you can explore. A wrong turn isn’t a mistake: it’s a detour with an exit strategy. The pressure to always “get it right” dissolves, replaced by a quiet confidence that you can recover, adjust and continue. Even more pleasingly, I no longer find myself late for an appointment, lost and crying. (Yes, humiliatingly, that has happened: at one low point in 1993 I spent an hour and a half trying to find the location of a lecture I was supposed to attend in London for my degree. In despair, I sat down on some steps and blubbed, only to realise after a couple of minutes that I was actually sitting on the steps of the very building in which I was meant to be attending the lecture.)
Life in the 1990s felt like one long navigation problem for me. Choosing the wrong path early on meant ending up miles away from where I was meant to be, with no simple way back that didn’t involve an expensive taxi ride. Now, I wonder if direction isn’t about always knowing where you’re going, but about trusting that you can keep moving, even when you don’t. That’s the freedom that technology has afforded me. The tiny blue dot on my iPhone’s digital map — steady, present and always updating — feels like a metaphor for something I never realised I needed: reassurance that my current position has a way out and a way forward.
I still have a terrible sense of direction and I am at peace with that. If you took away my iPhone, I would probably end up circling that same corner shop, wondering how I got there again. But I no longer see that as a personal flaw so much as a different way of moving through the world. I am someone who wanders, who doubles back, who explores by accident. And now, with my talisman in my pocket, I am reminded that no step is irreversible; wandering feels less like failure and more like a way of discovering paths I never would have chosen on purpose. Maybe that’s what direction really is: not a straight line, not a fixed bearing, but the ability to keep going, to adapt: to trust that, even if you don’t know exactly where you are, you are never lost forever.
As someone who works almost exclusively with young people between the ages of 14 and 16, I am well-versed in the problem of teenagers and sleep. Any teacher who has had a challenging class scheduled for a Monday morning will understand the issue, but since working online I regularly have the pleasure of being presented with a youngster who has quite obviously been peeled unceremoniously from their bedsheets less than a minute prior to our session. I have often advised parents to book a later session, on the grounds that their child is simply not in a fit state to absorb anything before mid-morning. Some can cope, but for others it becomes apparent that their parents are wasting the money they pay me, so groggy and unengaged is their child when the session begins. But just why do teenagers find mornings so palpably difficult?
At the centre of everything is our circadian rhythm, the internal clock which governs when we feel sleepy and when we feel alert. In younger children, as every parent knows, the clock tends to run on the early shift; but somewhere around puberty, the clock shifts — enough to have a pretty big impact. In adolescents, the hormone melatonin, which drives our urge to sleep, is released later in the evening compared to both children and older adults. This shift in their natural sleep phase means that teenagers genuinely feel more alert late into the evening and do not feel ready to sleep until much later than they used to. I remember this with visceral clarity. Staying up to watch Moonlighting, followed by Indelible Evidence, felt effortless, while waking the next morning felt simply agonising. Teenaged bodies, still growing and developing rapidly, require a substantial amount of rest — typically around eight to ten hours. But if a teenager is wired to fall asleep at midnight, yet still needs to wake up at 7.00 a.m. for school, then we’re faced with a problem.
I always like to ponder why these facts about our biological nature and development might have evolved. It is undeniable that most teenagers experience a change in their body clock as they develop, and it is also undeniable that adults vary in terms of their own body clocks: some are natural night owls, some are larks. (You probably have a good idea which one you are, but it’s quite fun to do the test: some people are very strongly one or the other, some people are flexible. I am as much of a lark as it is possible to be.) So, why might it be that we vary in this way? As humans evolved, it would have been useful for a tribe to contain a variety of members, to ensure that someone was always capable of being hyper-vigilant. When life was lived on a knife-edge, an endless battle for survival, it was crucial for the safety of everyone that at least some members of the tribe were capable of functioning at any one time. These subtle differences in how alert we feel at different times of the day may therefore have been an essential advantage, one perpetuated through natural selection.
Of course, biology is only part of the story. If teenagers were tucked away in candlelit rooms with nothing but a paperback novel and their thoughts, they might still stay up slightly later than the adults in the household, but probably not quite as late as they do now. Modern life has introduced a dazzling array of sleep-delaying tactics, most notably in the glowing rectangle of the smartphone. Social media, messaging apps, streaming platforms — all of these operate on the principle that there is always one more thing to see, one more conversation to have, one more video that might be even funnier than the last. If you can establish one rule in the home, it should be that these devices do not take the journey to bed with you. Teenagers are particularly sensitive to reward and novelty, meaning that the little bursts of satisfaction provided by notifications, likes and new content are especially compelling. The result can be a perfect storm: a brain wired to seek stimulation, a body that doesn’t feel sleepy yet, plus a device that delivers endless entertainment on demand. Bedtime, under these conditions, becomes achingly oppressive.
Waking up early for school is difficult not because teenagers are being dramatic (although, to be fair, some drama may be involved in some cases), but because the teenaged internal clock is still firmly set to “night mode.” When an alarm goes off at 6.30 a.m., it is essentially interrupting the biological equivalent of midnight. Imagine being forced to wake up at 2.00 a.m. and then expected to perform algebra, write essays and engage in meaningful discussion. That is not far off what most teenagers are experiencing every day.
The misalignment between biological rhythms and social expectations is sometimes referred to as “social jet lag”. It’s the same groggy, disoriented feeling one might have after flying across time zones, except instead of being a temporary inconvenience, it is a daily occurrence. The result is chronic sleep deprivation, which has a range of effects that extend far beyond simply feeling tired. In the classroom, this can manifest as difficulty concentrating, slower cognitive processing, and a general sense of mental fog. Teachers may notice students staring into space, struggling to retain information, or reacting with the enthusiasm of someone who has been asked to solve a puzzle while underwater. It’s not (always) that teenagers don’t care about their education; it’s that their brains are not operating at full capacity during the hours when learning is expected to happen.
Sleep deprivation is also closely linked to irritability, emotional volatility and increased stress. Sound familiar? Parents, who may already be operating under the assumption that their teenager simply needs to go to bed earlier, are so often met with morning grumpiness that can escalate into full-blown conflict. The state that a young person is in can reinforce the adults’ belief that the child should retire to bed earlier. The teenager, meanwhile, feels misunderstood and unfairly judged, leading to a cycle of frustration on all sides. What makes this situation especially tricky is that both perspectives contain elements of truth. Teenagers do, in many cases, make choices that exacerbate the problem — staying up later than necessary, using devices late into the night and underestimating the importance of sleep. I did this myself on an infinite loop and looking back it seems ridiculous. Yet at the same time, I remember vividly how alert I felt in the late evening and how utterly unattractive it seemed to take myself off to bed. The fact remains that the underlying biology of teenagers genuinely does make early sleep and early waking more difficult for them. It is not a simple matter of willpower, nor is it entirely in their control.
Some schools have tried to take the peculiar biology of teenagers into account by experimenting with later start times. Research suggests that even a modest delay in the beginning of the school day can lead to improvements in attendance, academic performance and overall well-being. Teenagers who are allowed to wake up in closer alignment with their natural rhythms tend to be more alert and more engaged. I have always wondered, however, what these schools are like for the adults. Speaking as someone whose energy is now heavily weighted towards the morning (I spring awake, starving hungry, at around 5.30am most days), I would hate to work in a place where the day was shifted later. This is the problem: the teenagers are not the only ones with skin in the education game.
All schools still start significantly earlier than most teenagers would like, but there are nevertheless small changes that can help. Exposure to natural light in the morning can nudge the circadian rhythm slightly earlier, making it easier to wake up over time. Limiting screen use in the hour before bed can reduce the combined pull of blue light and stimulating content, giving melatonin a better chance to do its job. Consistent sleep schedules, even on weekends, can also make a difference, although this is perhaps the most challenging suggestion of all, given the powerful allure of a Saturday lie-in. It all seems rather easier said than done, and my parents certainly gave up even trying. Ultimately, understanding is key. If families can recognise that their teenager’s sleep patterns are not entirely a matter of choice, and if teenagers can be persuaded to acknowledge that their habits influence their well-being, the conversation can at least be had.
It is, in the end, a delicate dance between what our bodies want and what our schedules demand. Teenagers, caught in the middle of this dance, are not failing at mornings so much as they are beginning to negotiate with them. If they’re lucky, they will become a lark like I did, and the world will become an infinitely easier place to navigate (unless they want to work in the nightclub industry, I suppose). So, if your teen occasionally hits the snooze button one too many times, it might be worth remembering that they are not resisting the day — they are just trying to catch up with a night that ended a little too soon for them.
Last week, there was something of a debate between my now quite elderly parents and me. I remarked that I genuinely struggle to understand why so many people are so reluctant to change their minds. What on earth was so frightening about it? My father, a trained scientist, seemed to get where I was coming from. My mother, a trained counsellor, was less impressed. She sees everything through the matrix of people’s emotional responses and finds it easy to comprehend the ways in which people’s fears and hang-ups are their most powerful driving forces. I, by contrast, remain baffled that people can be so fixed in their ways of thinking.
While I’ve never considered myself to be much of a scientist (a glance at my GCSE grades will confirm this for anyone in doubt), I do like to think that I am a rationalist and that I base my responses to most things on the evidence in front of me. I am also, I think on balance, quite emotionally robust. Given these two character traits, I will confess that I genuinely struggle to comprehend why changing one’s mind about something is considered to be such a terrifying prospect; but the older I get, the more I am forced to acknowledge that for many it seems to be so.
In the same week, I met a friend who filled me in on some local gossip and remarked, in passing, that living in our village had been good for her, since she had been exposed to a range of people with different political views and discovered (in a manner that she reported with some surprise) that Conservative voters did not all possess the horns of Beelzebub. She reflected on the limitations of being brought up in a home in which one political viewpoint was presented (a household that she summed up as “Guardian-reading”). I reflected on the fact that I felt there had been a variety of political standpoints within my close family and that these had been openly (and sometimes quite heatedly!) debated, perhaps leaving me open to the notion that there can be well thought-out (and indeed extremely badly thought-out) views on all sides. She said that she envied this experience. It was genuinely fascinating and gave me further pause for thought. Might this exposure to conflicting politics within one family be another reason why I am interested in rather than threatened by alternative viewpoints?
Also this week, whilst listening to a podcast, I heard a reference to an analogy used in human psychology that I had not come across before, and it chimed with all the thoughts I had been having about tribal thinking versus the ability to change one’s mind. I looked up the reference and was fascinated to discover someone called Julia Galef, an author and co-founder of the Center for Applied Rationality. Galef argues that some people act like “soldiers”, while others act like “scouts”. “Soldiers” in her analogy tend to approach a discussion from the sole position of defending their beliefs, attempting to discredit or dismiss conflicting information and seeing alternative viewpoints as the enemy to be shot down. “Scouts”, by contrast, are motivated more by the desire to find the truth, regardless of their starting point.
But before we “scouts” get too smug about our Stoic capacity for reason, according to Galef, our tendencies towards being either a “soldier” or a “scout” are both rooted in our emotional responses and learned behaviour. The “soldier” mindset tends to be held by someone who is motivated by connection and community (which can lead to tribalism), whereas someone with a “scout” mindset is more likely to enjoy the process of discovering new things (which can lead to innovative or creative thinking, but carries with it the threat of isolation). For a “soldier”, the process of changing your mind feels like a weakness or even a defeat. For a “scout”, it is something positive and exciting. This is exactly how I feel. To me, the process of changing my mind isn’t simply non-threatening: it is genuinely thrilling and wonderful. I love discovering that I have been wrong about something, or that my understanding of a topic has been flawed. I find it genuinely mind-boggling that people can hold the same views that they have always held, and borderline distressing to imagine that they find the process of change a net negative.
In a quest for further knowledge, I will delve into Galef’s book, The Scout Mindset, and find out if her analogy resonates once I’ve read it in full. For now, I feel genuinely happy to be a “scout” and can highly recommend it. It might not make you the most popular person at the party, but it does make you the one who will point out that the emperor is stark, staring naked.
To what extent should it be evidentially apparent that we practise what we preach? I have been pondering this dilemma since observing a distinctly less-than-svelte gentleman, who regularly oversees the training regimes of customers in the gym, encouraging his clients to build muscle and burn fat whilst resting his own arms on his notable and substantial belly. I will admit that my mind went somewhere slightly uncharitable, a thought process that can be summed up in the line, “maybe get your own house in order first,” but then I found myself wondering: to what extent does a fitness instructor have to be fit?
The question became more complicated the more I thought about it. It is well documented that many doctors smoke and drink, and that many are also overweight: does that make those same doctors any less qualified to tell the rest of us what to do when it comes to looking after our bodies? Their ability to diagnose and treat doesn’t disappear if they don’t always follow ideal habits. Shift work is notorious for making it difficult to sustain a healthy diet and lifestyle, as is stress: so should we think less of them for falling prey to the same barriers as the rest of us? And why should this not apply also to fitness instructors?
Physical trainers and their clientele at the gym are something of a source of fascination for me. It is (for me, at any rate) simply impossible not to eavesdrop on their sessions, and I am struck by the barely concealed indifference with which so many of the PTs conduct them. It is, to be fair, a pretty repetitive job, watching a variety of newbies straining to improve their fitness, but I am startled by how dismally uninterested some of the instructors seem to be in the process of fitness. Why apply yourself to a career without that basic love for what you do? Personally, I still get a kick out of explaining how participles work, after nearly thirty years of doing so, and I still take joy and pride in watching the lights go on when I help a student to understand something that they have not managed to grasp before.
Perhaps the most enthusiastic personal trainer at the gym is a man whom I have nicknamed (in my head, you understand) The Dangling Frog. This man is genuinely interested in pumping iron, although I would question the degree to which he is an advocate of what I would deem true fitness, given the quantity of steroids I suspect he has taken. So, why Dangling Frog, you ask? Well, he has done so much upper-body work that he resembles one. Imagine holding up a frog by its body and letting its bowed, skinny legs dangle beneath. That’s what this guy looks like. I have witnessed him working on his own muscles and the level of strain he applies to his shoulders and biceps is impressive: the poor old legs don’t get a look-in, so they remain a mere shadow of his upper-body musculature. To be frank, I’m surprised that his legs can sustain the downward force of his upper body. One day, I fear, the unarguable laws of physics will complete the inevitable demonstration of force and gravity and his legs will give way.
Still, disproportionate physique aside, this guy is certainly more enthusiastic than most. His favourite regular client is older than him, so he gets to show off a bit under the guise of training and encouragement, but I saw him recently with a younger companion and I couldn’t quite work out whether this was a trainer-client situation or simply a meeting of mutual appreciation. Whatever the circumstances, it looked like a beautiful bromance was developing, and their conversation went something like this:
Bro 1: Yeah, I think I’m trying too hard.
Bro 2: Yeah, you need to reduce to 36K and focus on letting your body do what it needs to do.
Bro 1: Yeah, exactly.
Bro 2: Yeah.
Bro 1: Maybe I’ll try 30K.
Bro 2: I tried with like 42K last week.
Bro 1: No way. Wow.
Bro 2 (preening slightly): Yeah, that was like way too much, but you know.
Bro 1: Yeah, man.
Bro 2: Yeah.
It was like listening to poetry. Dangling Frog at least practises a significant amount of what he preaches, which is (I suspect) lots of upper-body pumping, the constant imbibing of protein-based sludge and (I also suspect) a regular date with some anabolic steroids. Is he a better role model than the instructor with the standard paunch, I found myself wondering? I’m honestly not sure. So, where are the genuine fitness enthusiasts? Is it really that hard?
For fitness coaches, credibility is surely tied to example. If I were a client, I’ll be honest: I would expect them to look fit, and I would expect them to practise what they teach. If a coach promotes healthy eating and an active lifestyle but doesn’t look as if they follow their own advice, my trust would go out the window. Much of their job involves demonstration and motivation, so one would have thought that leading by example would be their most powerful weapon.
Yet, how about counsellors and therapists? They give advice on emotions, coping strategies and behaviour, but they are not expected to have perfect mental health themselves. Indeed, many counsellors have faced their own struggles and a therapist who has sailed through life with no challenges would be an inadequate one indeed. What matters is that they understand techniques and can guide others effectively. While practising what they preach is helpful, self-awareness and professional skill matter more than personal perfection. Perhaps this should also apply to fitness instructors?
I am still pondering this and am left asking myself why it is the case that I would expect and demand a fitness coach to be an exemplification of what they are employed to teach. Perhaps it is because I would assume that the process of fitness is a source of personal interest for them. Likewise, I also perhaps assume that once one is truly knowledgeable about fitness then one surely would not be able to resist the urge to apply it to one’s own lifestyle. For example, once you truly understand the damage that a sedentary lifestyle can cause, surely you cannot help but be more active? Yet, if this were true, then nobody would smoke, nobody would drink, nobody would be inactive. Is it really the case that ignorance is the problem, given the overwhelming amount of information with which we are all surrounded?
When someone claims authority — whether it be a teacher, a coach or a religious leader — humans instinctively look for alignment between their words and actions as proof that their advice is genuine and achievable. If a teacher follows their own guidance, it signals integrity, builds trust and makes their teachings feel real rather than abstract. On the other hand, hypocrisy can weaken respect and create doubt, because it suggests either a lack of belief or a gap between theory and real life. Ultimately, we value authenticity, and seeing someone embody their own principles makes those principles more convincing.
What should one do if one’s own Member of Parliament were to misrepresent reality and twist the truth in order to raise their profile on social media and blow their own trumpet in the House of Commons? One asks this purely hypothetically, of course, for it would be truly beyond the pale were such a thing to happen for real.
Imagine this entirely theoretical scenario. One’s local Member of Parliament visits a comprehensive school within his (or her) constituency. Let’s say, for argument’s sake, that this is a school in which one has worked for many happy years and is therefore a school in which one is emotionally invested and indeed a school of which one has an intimate knowledge. It is possible, for example, that one might well have taught a lesson in pretty much every single classroom in such a school. One might, indeed, have a thorough knowledge of the school and the fabric of its buildings, as a result of working there full time for well over a decade. Since leaving this school (again, entirely hypothetically), one might have maintained contact with it on and off, and one might have visited the school on occasion. All possible, in this entirely imaginary tale, I hope you can agree. What is less plausible in one’s story is the behaviour of the Member of Parliament.
To continue this whimsical flight of fancy, imagine that the Member of Parliament saw fit to poke about in an area of the school that has been condemned for several years and is completely shut off to both students and staff. Imagine that this has been the case for so long that — as someone who has worked there for over a decade — one is not even entirely sure where that part of the school actually is, since in one’s imagined scenario the condemned area has been quite rightly out of bounds. Perhaps nobody goes there and perhaps nobody has access to it except for the site team, in this altogether imaginary school in this altogether made-up story. Despite these facts (I say “facts”, but of course, do remember that this is an entirely fictional tale), the Member of Parliament in one’s story takes some pictures of this disused area (imagine that! Ludicrous!) then shares those pictures on his (or her) social media pages and makes claims about the entire school being in “an appalling state”. I mean … this is simply ridiculous, isn’t it? Any publisher would reject such a story as thoroughly unconvincing. Rip up the story and start again, you foolish author, for our fine upstanding Members of Parliament do not behave in such a way. They are busy, important people; they have no truck with such shenanigans.
Forgive me, but we must stretch this truly bizarre tale even further, to the point where you will of course recognise it for the blatant fiction that it must be. For this entirely imaginary Member of Parliament actually speaks in the House about his (or her) visit to this wholly make-believe comprehensive, in a manner that could, in extremis (and, of course, this is what makes this account so obviously hypothetical), be considered misleading to the House. He (or she) describes classrooms (note the plural! Classrooms!) which are “held together by gaffer tape”, leaving one, in one’s imagined scenario, scratching one’s head and struggling to picture the classrooms that he (or she) is talking about. He (or she) then goes on to describe the disused area of the school of which he (or she) took these photographs, and claims that he (or she) “almost fell through the floor” and was, what is more, assailed by the stench of mould, all the while insinuating that this is actually representative of the state of the school. The classrooms, after all, you may recall from the imaginary description, are “held together by gaffer tape.” So, in this crazy hallucination, one is still left trying desperately to recall whether one ever saw a single piece of gaffer tape in one’s entirely imagined 13 years at this entirely fictional comprehensive school. One’s imagination may seem to have run away with one completely, for in one’s head the very same school is (in an alternate universe that one might call reality) so well-maintained that it is positively the envy of other schools in the area. So free from decay is the site (a source of pride for its site team) that visitors comment on the fact. This is all in one’s vivid imagination, you understand.
In a final flourish to one’s extraordinarily far-fetched tale (I really must write it up some time), one’s fabricated Member of Parliament doubles down when challenged on social media, deletes all of one’s posts in which one points out that he (or she) has misled the House, and even claims to be doing what he (or she) is doing at the behest of the school. This is despite the fact that he (or she) is not in government and not in a position to secure them any funding and perhaps despite the fact (and here things get really wild) that the Leadership team at the school might, one could possibly fantasise in one’s wildest moments, have asked him (or her) to desist. One can just about conceivably imagine that it has been made clear to this MP gone rogue that his (or her) interference and naming of the school on social media and at Westminster is distinctly unhelpful and unwelcome. Could one not?
What an extraordinary tale, I am sure you’ll agree! So, what would one do, in this entirely imagined situation? (I realise that it is so ludicrous that you might not even consider it worth fleshing out a plan for such an unlikely situation, but do humour me for a few more lines). One would, I suspect, have to write to this Member of Parliament directly, as one of his (or her) constituents, in order to express in no uncertain terms one’s disquiet with his (or her) behaviour. One might even have to explore what avenues there are for making a formal complaint about such a state of affairs, given one’s sincere belief that a distinctly less than truthful statement has been made within the hallowed walls of Parliament. Furthermore, if one were to find oneself in this highly unlikely position, one would certainly (one imagines) feel thoroughly disillusioned with the honesty and integrity of one’s democratically elected representative and one would (one suspects) feel glad, not for the first time, that one did not personally vote for such a person.
It is a truth universally acknowledged that the one thing we love more than a hero is to see a hero fall. I’m not sure whether this is an entirely modern phenomenon, but it is perhaps a tendency that has burgeoned in recent decades. More than this, something which I do think is peculiar to our age is the expectation that historic figures should be judged according to 21st-century western values. This, especially when it is pitched against some of the figures who had a significant hand in carving those same values, leaves me distinctly uneasy.
Last week, the BBC reported that Hinchingbrooke School in Huntingdon was swapping the name of one of their pastoral houses from Pepys to Lady Olivia. The process was enacted via a democratic ballot, which turned out to be a classic example of western democracy in action, given that the much-celebrated result was voted for by less than 50% of the electorate. Nevertheless, Lady Olivia, wealthy landowner, school sponsor and evangelical Christian, now finds herself named as the chosen figurehead for modern students in the school that Pepys attended, along with Oliver Cromwell. One can only hope for her that there are no skeletons in her cupboard, to be discovered down the line. There’s always a tweet.
Samuel Pepys seems to have got away with being a prolific sex offender without much modern public disapproval until 2025, when historian and translator De la Bédoyère went back to Pepys’s original manuscripts and translated all of his coded entries, which he wrote in a kind of Franglais, pidgin Latin and a smattering of Spanglish. De la Bédoyère re-published Pepys’s diaries in all their glory, and the result is the extraordinarily detailed snapshot of 17th century life that one might expect; unfortunately, that life is one of a man for whom preying upon vulnerable women was something of a daily occurrence. It was certainly an education for me, reading what this serial predator got up to on an average day, and it very much does not chime with 21st century western values. Historians are keen to point out that Pepys’s behaviour didn’t even chime particularly well with 17th century western values, as he seems to have had something of a reputation in his day. I only wish I could believe the world has changed, but let’s not pretend that it has. Men with such reputations are still running several countries.
I have been pondering the school’s decision to demote Pepys from his position as a House name and I have no wish to criticise it. The school has already made it clear that there are parts of the school named after Samuel Pepys and that those tributes to him will not change. I have no doubt whatsoever that the school was placed under enormous pressure by a vociferous minority and I don’t even have a particular issue with that in some ways: perhaps those individuals are right. If I had a daughter in the school, perhaps I might have agreed with them that there are better figureheads for her to look up to. Whatever my individual thoughts on the matter, it is inescapable that these days it only takes one parent with a bee in their bonnet and an active WhatsApp group to dictate school policy and this — for better or for worse — is the reality of where schools find themselves today. Headteachers have to pick their battles, and going in to bat for Samuel Pepys was perhaps not a hill the Headteacher felt worth dying on.
What I think is more interesting is to ponder whether we have lost something when society cannot tolerate undeniably serious flaws in its heroes. Is this a quirk of the kind of modern puritanism that we find ourselves facing today? If we turn to the ancient texts for our model, their authors understood only too well the value of a rounded hero; indeed, the very definition of a hero required the inclusion of multiple flaws. The notion of a “fatal flaw”, popularised by Renaissance readings of Aristotle’s Poetics, influenced Shakespeare and other writers. There is broad agreement from ancient times to modern that the most interesting heroes are the ones with inherent weaknesses: a perfect hero would be a thoroughly tedious creation.
When Virgil introduces Aeneas as the hero at the beginning of his epic work, he does something quite remarkable. When we first meet Aeneas, he is at his lowest ebb. A battle-fatigued, travel-worn refugee, Aeneas is at breaking point. He screams and cries and implores the gods to take him: why did I not die in Troy? he asks. What was the point of it all? The visceral shock of introducing us to a hero who appears to have abandoned all hope and is wishing he was dead is one of the most exciting decisions that the author could have made, and it thrills me every time I revisit the text (which has been hundreds of times over the last two years, for that section of the text is on the specification for OCR GCSE). The point, I think, is for us to reflect upon how much more impressive it is when Virgil later describes Aeneas suppressing his emotions and resuming command and leadership over his men: someone we have witnessed at breaking point does the right thing for the good of the majority and for the men in his care. Now, that’s a hero.
Not only does Virgil start his epic work with a radical take on heroism, he ends it controversially, by demonstrating that Aeneas is very much less than perfect. At the end of the epic battle that ensures the supremacy of the Trojans in their new homeland, thus securing the future of what will become the Roman empire, Aeneas is faced with his arch-enemy, who begs for mercy. The tradition in ancient texts was that good heroes are extraordinary warriors who do not give in to blood-lust; whenever a warrior is taken over by this kind of crazed, emotionally-charged violence, disaster tends to ensue and the warrior is punished for his misdemeanours. Good warriors show mercy when the time is right. Yet Virgil does not finish his work in this way. As Aeneas looks down upon his enemy, he is overwhelmed by rage, bitterness and grief: he slays him, quickly and ingloriously, and the epic finishes with our hero’s enemy groaning his last, his tortured soul shrinking away to the underworld. It is a radically depressing way to close an epic work of propaganda and reflects a true genius at his peak. The reader (or more likely the listener) is left with an uneasy sense of disappointment in our hero, left to carry the burdensome knowledge that founding an empire is not without its price and that war makes even good men do terrible things.
Perhaps we have indeed lost something to the present-day puritanism that judges historic figures according to our modern western values and — inevitably — finds them wanting. Personally, I don’t have a problem with recognising the contribution that Pepys made through his unflinching account of 17th century life alongside the fact that the life he describes is one to which I would viscerally object. It’s what history is all about. What I hope for the future is that we can have these discussions in a more mature and nuanced way. There is nothing more irksome than the modern tendency towards cancellation and extremism, the “no debate” lobby, who consistently fail to understand that the very pluralistic society that they believe in so fervently and lobby so hard for requires endless compromise and true tolerance, the kind of forbearance that makes you feel uncomfortable and sometimes forces you to question your own values. I occasionally wonder whether the louder the cancellation crew shout, the more they’re trying to drown out the voices of doubt in their own head.
Examinations are looming on the horizon. This year’s GCSE candidates will no doubt be receiving revision advice, yet I fear that much of it will be inadequate. While there are some schools that are doing a great job on this, others are still behind the curve when it comes to their knowledge base: teaching is sadly a profession that has been historically prone to fads and unevidenced practice, something I witnessed during my training and throughout my career. In recent years, many individual teachers have gone out of their way to inform themselves about what cognitive science has to say about effective study, and this increasing knowledge and understanding about memory and learning is finally beginning to impact upon the advice that is given to students. This can be seen in the sheer number of teachers who choose to attend ResearchED conferences on Saturdays, in their own time, to inform their understanding of good learning techniques. Despite this quiet, grassroots revolution, there is still a remarkable amount of misinformation out there, and I still occasionally reel in mortification at the sorts of things that are said to my tutees when it comes to revision advice.
Much of the problem stems from the very language that is used by teachers, students and parents when it comes to revision. It is hard to know where that language comes from, but much of it seems to be ingrained and on an infinite loop, like a scratched record. Students still frequently say to me that they need to “go over” something, which by its very nature implies revisiting the content to refresh their memory. In practical terms, the advice that a student needs to “go over” something encourages them to reread their notes. A student who is attempting to be proactive about their studies may highlight key information while they read. Yet cognitive science teaches us that reading and highlighting in this way are entirely ineffective practices, for they provide the learner with a feeling of familiarity without genuinely increasing or securing their knowledge-base. Reading and highlighting can feel genuinely productive, to the extent that the student believes that they are actively engaging with an effective learning process; in reality, they are giving themselves false reassurance and not practising the process of retrieval, which is essential both for learning outcomes and for examination practice.
Kate Jones, a teacher and an expert in sharing good practice for effective, evidence-based learning, has this week published a short blog on the Evidence Based Education website, highlighting the importance of what she calls responsive revision. In the blog she did what she does so well, which is to summarise and consolidate what we know from cognitive science into a practical and effective format that is easy for both classroom teachers and students to apply. Responsive revision, according to Kate Jones’ blog, is “a deliberate, structured method of independent study in which students use retrieval to generate evidence about what they know, what they can recall, and where gaps remain. They then respond to that evidence by directing their time and effort towards strengthening those gaps. It shifts revision from passive review to informed action. It also ensures students don’t keep going over their favourite or familiar topics but instead identify and tackle gaps in knowledge and understanding.”
One of the most important things for students to understand is the difference between what feels familiar (the process of recognition) and what is genuine recall (the process of retrieval). When a student rereads their notes or sits and listens to a concept being explained to them again, the material will feel familiar. This gives them the illusion that they can remember something when in fact, under pressure, they will not be able to recall it. The illusion can be convincing enough for learners to fool themselves mid-task: for example, research shows that many students use flashcards wrongly, turning over the card too soon, recognising the answer and then convincing themselves that they knew it all along. The trap is surprisingly easy to fall into. One simple way to guard against it is to work with someone else and to put them in charge of flipping the cards over. Because recognising information is so much easier and more comforting than the process of forcing yourself to recall it independently, students often cling to methods that allow them to experience the process of recognition, like a comfort blanket. They may even insist that the method is working for them, because it feels safe and encouraging and gives them the illusion that their knowledge base is strengthening. In reality, they are doing nothing to aid their recall under pressure.
In her blog, Kate Jones argues that revision should generate evidence, and by that she means evidence of absence as well as evidence of knowledge. Students need to test themselves in order to evidence the knowledge that they possess and to reveal the gaps in that knowledge, keeping themselves in a constant information loop of what they can retrieve successfully and confidently, what they can partially remember, and what they cannot yet call to mind. Armed with that information, the student can then take effective action, a process which she explores in her blog.
If I could convince any learner of one thing that seems counter-intuitive, it would be that they should be testing themselves at every stage of their learning, including at the beginning. Students tend to resist this, for the process is challenging and uncomfortable (especially if they are not used to it in school) and the notion that they should be testing themselves on an area where they are aware that their knowledge-base is inadequate can feel rather daunting: perfectionists find it especially difficult to tolerate. Yet testing is essential to learning. When a student attempts to recall a piece of information from memory, they create the evidence base for what they do and do not know. Even more than this, not only does the process of retrieval make their knowledge (or lack of it) visible, it is also part of the learning process. Every time a student attempts to recall something, and each time they manage to do so, they are practising the very thing that they will need to rely on in the examination; they are also strengthening the foundations of that knowledge base.
I cannot recommend Kate Jones’ blog highly enough for a simple, evidence-based explanation of how to go about the process of revision. Her ability to distil complex, research-informed ideas into a practical, workable guide is quite remarkable and as a result she is a brilliant go-to source of advice for teachers. Her books on retrieval practice should be the benchmark for any classroom teacher. For advice directed at learners, regular readers of my blog will know that I am a huge fan of the psychologist Paul Penn’s advice on how to learn, which can be found both in his book on effective studying and on his YouTube channel.
Suddenly, everyone is talking about Lord of the Flies. It is one of my favourite novels, one which I taught for GCSE English literature for around a decade. I’m afraid that I have no urge to see what the BBC have done with it. I have also been somewhat irritated to see multiple hot takes on social media, criticising the story’s doom-laden attitude towards childhood and children’s psychology.
First of all, Golding was emphatically not being doom-laden about the nature of children, he was being doom-laden about the nature of humanity as a whole: let us not underestimate the extent of his doom-mongering, please. Secondly, Lord of the Flies is no more a novel about children and childhood than Animal Farm is a novel about livestock and animal husbandry. Like Animal Farm, Lord of the Flies is an extended allegory, and its message is a profoundly depressing one. So, buckle up.
Golding’s work of genius (one which he, incidentally, dismissed in later life as “boring and crude”) is a thoroughly disturbing exploration of what happens when the structures of civilisation fall away. It is emphatically not a novel about children. While the novel appears to contain the trappings of childhood (children’s games, their fears, their rivalries and their capacity for cruelty), it becomes clear as the narrative unfolds that Golding’s central concern extends way beyond childhood psychology. The island on which the children find themselves stranded is a microcosm of the world that the boys have left behind, a specimen society in which rival authorities, social hierarchies, violence and superstitious ideology rapidly emerge. Golding uses children in order to examine society stripped to its essentials, suggesting that what we call “civilisation” is a fundamentally fragile construct laid over a persistent human capacity for savagery. The novel is less an anthropological study of childhood than a parable about the nature of society itself.
From the outset of the novel, in which the boys find themselves stranded in the wilderness, the protagonists attempt to recreate the structures of the adult world from which they have come. They call assemblies, establish rules and elect a leader. Ralph’s authority rests on apparent legitimacy: he is chosen through a vote, and a conch shell is used as a tangible sign of democratic order. The conch regulates speech, embodies fairness and stands as a shared agreement among the boys to abide by rules. These early chapters might seem to suggest that humans, left to their own devices, instinctively lean towards mature governance; yet Golding makes it clear that the boys’ desire for adherence to a set of rules depends not on moral conviction but on a fear of consequences and an individual lust for dominance: the boys speak immediately of the punishments that will face anyone who transgresses the rules they plan to lay down for themselves. Furthermore, as the hope of rescue fades, the rules lose all of their potency. As Ralph puts it, “things are breaking up. I don’t understand why.” The deterioration is not portrayed as uniquely childish; rather, it reflects how flimsy and insubstantial social contracts are when the institutions that sustain them collapse.
Jack’s transformation from choir leader to autocratic demagogue underscores this shift. His authority on the island grows not through reasoned persuasion but through his manipulation of fear and the promise of hunting and meat. He paints his face, embraces ritual and forms a tribe built on spectacle and intimidation. In doing so, he does not regress into childhood so much as adopt the tactics of a charismatic despot.
It is hinted from the outset that the boys have arrived from a society already engaged in a global conflict. The island society quickly begins to resemble the violent regimes and wartime mentalities of the adult world and the children’s play-acting of war quickly becomes indistinguishable from the very worst forms of human brutality. The murder of Simon is not an impulsive scuffle between children; it is a collective frenzy, a ritualised killing fuelled by hysteria and conformity. In that pivotal moment, Golding depicts the terrifying ease with which ordinary individuals can participate in atrocities when swept up by mass hysteria and mindless ideology. This is emphatically not a comment on the nature of children: it is a study in group dynamics and the power of suggestion.
Prior to his death, Simon’s role in the novel further supports the interpretation that Golding is examining society and group dynamics. His encounter with the pig’s head, the eponymous “Lord of the Flies,” reveals the central moral insight of the book: “the beast” that the boys fear is not an external creature but something within themselves. The pig’s head, swarming with flies, seems to speak to Simon, telling him that it (the beast) is part of them, is inside them: it is not an external force, rather it is innate to humanity. Golding aims to convince his readers that the impulse toward violence and domination is an inherent aspect of human nature, one that civilised society attempts, imperfectly, to restrain. Simon’s death, at the hands of boys who mistake him for “the beast” crawling out of the forest, symbolises the destruction of moral truth by collective fear and aggression. The tragedy lies not in the fact that the children are capable of evil, but in the implication that, in the wrong circumstances, all humans are.
Piggy represents rationality, scientific thought and the values of ordered civilisation. His glasses, which enable the boys to make fire, symbolise the power of technology and reason. Yet reason alone cannot withstand the tide of savagery once the social consensus collapses. Piggy is marginalised, mocked and finally killed when Roger deliberately dislodges the boulder that crushes him. This final act by Roger is particularly significant: earlier in the novel, he is depicted as throwing stones at the younger boys but he deliberately misses; the implication is that he is an inherently violent boy who is restrained in his urges by what Golding calls “the taboo of the old life.” As those restrictions erode with the breakdown of society, so too does his individual restraint. By the time he kills Piggy, Roger acts with deliberate intent. Golding’s emphasis on the gradual disappearance of internalised moderation points to his theme of the importance of societal structures in shaping and curbing antisocial behaviour. When those structures weaken, he believes, our latent cruelty surfaces.
Golding’s novel is emphatically not about childhood. The boys bring with them the hierarchies, prejudices and fears of their culture. The choirboys, accustomed to discipline and exclusion, quickly form an elite group under Jack. The “littluns” (as the youngest members of the group are collectively referred to) are marginalised and terrorised by the older boys and even Ralph, ostensibly the champion of order, participates in the violence against Simon. No character is exempt from moral compromise and this universality suggests that Golding is less interested in developmental psychology than in the broader human condition: his view of us is emphatically not a happy one.
The sudden arrival of the naval officer at the end of the novel crystallises the evidence that the island society is a mirror that Golding is holding up to the adult world. The officer is initially amused by the boys’ appearance, viewing their behaviour as a childish game. Yet he represents a world engaged in destructive warfare: his warship waits offshore, a reminder that organised violence is not confined to the island but is institutionalised in the adult society that lies beyond it. The boys’ painted faces and sharpened sticks are grotesque reflections of his uniform and the weapons he brings. The officer’s presence does not negate the horror that has occurred; rather, it frames it within a wider context. The island is not an aberration but a microcosm: Golding implies that the same forces driving the boys to chaos are operating on a global scale.
Published in 1954, in the aftermath of the Second World War and at the dawn of the nuclear age, Lord of the Flies reflects a period of unprecedented recent human destruction. The belief in steady moral and social progress had been shattered by the exposure of the Holocaust and the growing fear of atomic warfare. Golding, who had served in the Royal Navy, stated that he had witnessed firsthand man’s capacity for organised brutality, and that illustrating this was his purpose in writing the novel. His choice to use schoolboys as protagonists was an artistic decision: by stripping away adult institutions and placing children in isolation, Golding constructs a controlled experiment in which the island mirrors the essential dynamics of society in a concentrated form. The boys’ age, if anything, underscores the horrifying argument that the seeds of societal violence lie not in complex political systems alone but in the fundamental aspects of human nature. While the “beast” that the children fear can be seen as a childish nightmare, Golding does not treat their fears as trivial. “The beast” evolves into a powerful symbol of how societies create external enemies to embody internal anxieties and explain the darkness within them. The boys’ belief in the beast apparently justifies Jack’s desire for authoritarian rule and explains the abandonment of rational deliberation. In this way, childish superstition becomes analogous to the propaganda and scapegoating we find in adult societies.
It is undeniable that the novel challenged the mid-twentieth-century literary tradition, which portrayed children as naturally innocent and if anything morally superior to adults. In traditional adventure stories, still popular at the time, stranded boys tend to maintain British civility and cooperation. Golding deliberately inverts this literary convention. His boys do not build a utopia; they descend into barbarism. This inversion, however, is not a comment on children but a critique of the complacent belief that civilisation is secure and that moral behaviour is natural and instinctive. By showing that even well-educated English schoolboys can commit atrocities, Golding aimed to dismantle the myth of inherent cultural or moral superiority. Ralph’s uncontrolled grief at the end of the novel is portrayed as a source of embarrassment to the naval officer. He weeps “for the end of innocence” and “the darkness of man’s heart,” a final summation of Golding’s bleak vision.
To read Lord of the Flies as a novel about the nature of children is to overlook its broader philosophical ambitions. Golding did not believe or aim to suggest that children are uniquely savage or that society alone corrupts them. Instead, he proposes that society is both a product of and a defence against the darker aspects of human nature. Civilisation provides structures — laws, social norms and institutions — that channel natural instincts such as aggression and desire into appropriate avenues. When those structures disintegrate, as they do on the island, the underlying impulses are revealed. The boys are not aberrations; they are average human beings.
Golding’s frankly brilliant work interrogates the very foundations upon which social order rests, yet it achieves this by focusing on children, whose assumed innocence sharpens the shock of moral collapse. Golding invites readers to question their comforting assumptions about progress, about culture and the nature of morality. The savagery on the island is not confined to childhood; it is an ever-present possibility within human communities. By the time the naval officer arrives, the reader understands that rescue from the island does not equate to rescue from the darkness within. Golding’s enduring message is that society’s stability depends upon our constant vigilance against forces that originate in the human heart. How’s that for a bedtime story?