There is an apocryphal saying that has been shared thousands of times on the internet. It is usually labelled “a Greek proverb” but sadly I cannot find any reliable reference to it that predates the 20th century. Nevertheless, it is a favourite saying of mine and whoever first expressed the sentiment was certainly insightful, even if he didn’t share his thoughts in the agora of 4th century Athens.
The saying is as follows:
“A society grows great when old men plant trees in whose shade they know they will never sit.”
Source unknown
There is so much to like about this statement. First of all, I like the fact that it talks about the responsibilities of the oldest in society. It seems to me that we all spend quite a lot of time wagging fingers at the young, telling them that it’s their responsibility to sort out the problems of the future – we may have caused all the problems, mind you, but we won’t be around to face the consequences and they will. The quoted statement calls this attitude into question and suggests that we all bear a responsibility towards the future that will exist after we are gone. I’m not surprised that people assumed such sentiments came from ancient Athens, which was a patriarchal society in which aristocratic men enjoyed the benefits and bore the responsibilities of government; elderly men were afforded power and respect, and in return they were expected to leave behind a legacy for the good of the generations to come.
In many ways, however, this statement is about the importance of trees. While it is using the tree as a metaphor for the future, to express the importance of the long-term legacy that every human is capable of leaving behind when they're gone, it speaks to the visceral understanding that planting a tree is one of the best things that anybody can do in this world. Our love for trees and our trust in their enduring importance have recently been brought into sharp relief by the heinous felling of the beautiful tree at Sycamore Gap, a famous landmark so named after the tree that by chance grew in a sharp dip in the hillside next to Hadrian's Wall in Northumberland. The real horror of this inexplicable act of nihilism has left me and countless others quite bereft; even those of us down in the south know the sense of history and local pride that this awe-inspiring natural feature commanded. I simply cannot believe that somebody could bring themselves to do such a thing.
The Romans valued their trees, not just for ornamentation but also for their practical uses. Trees were planted along roads, around public buildings, and inside the garden rooms of the villas of the wealthy, creating an outside-in effect that still inspires architecture and city planning to this day. Preserved cities like Pompeii and Herculaneum evidence how the Romans made trees a part of their urban landscape; excavations reveal that these ancient cities were home to a wide variety of trees, strategically planted for shade and selected for both their aesthetics and their utility. The Romans clearly had an understanding of how they could use trees to improve urban environments, a concept that we are now returning to, with more and more research suggesting that trees can improve the air quality as well as reduce temperatures in modern cities.
I am privileged to live in “leafy Surrey” and it is perhaps poignant that I become most aware of the trees around me in autumn, as we watch the leaves die and start to fall. During October and November, walking along a pavement where I live becomes a joyous experience of swishing through the fallen leaves and crunching upon acorns and horse chestnuts. The title of this blog post is taken from a poem by Gillian Clarke entitled simply October. It explores imagery of death and dying, but highlights the beauty of the colours as leaves start to die and decay in autumn. There simply is not a more beautiful and poignant time of year and while it is always tinged with sadness as it foreshadows the depths of winter to come, I value the glory and the beauty of this time of year immensely.
Adrian Chiles had a bit of a rant in his column in the Guardian this month. Now, I should say from the outset that I sympathise with his obvious desperation; as someone who has to write a blog post every week, I have a small shred of insight into the pressure that paid columnists must be under to come up with something – anything – to write about every week in their column. I find it hard enough, and I don't have to write to the standard that's expected for the Guardian (no jokes, please). Years ago I had a paid gig writing for an online magazine once a fortnight, for which the standard of writing was pretty high: I couldn't keep it up.
Poor Adrian was obviously having a particularly tough week when he decided to write a piece about television documentaries which use the present tense to describe historical events. Apparently, it “makes his blood boil.”
“If something happened centuries ago,” he says (said?), “let’s talk about it as if it happened centuries ago – not as if it was going on right now.” Chiles even quotes (quoted?) Dan Snow as someone who is (was?) apparently “miserable” as a result of the process, forced by his producers to speak in the present tense about historical events. I cannot begin to imagine their pain.
Sarcasm aside, it is interesting to me that Chiles – and, based on the comments I read online, perhaps others – claims to find the process of talking about past events using the present tense patronising; he seems to have decided that producers have come up with this device as a cynical or simplistic tool to bring events to life for a modern audience with a short attention span. Chiles not only believes that this is unnecessary, but cites it as something which is likely to tip him over the edge.
Personally, I had not noticed that the use of the historic present in historical documentaries was on the increase, but if this is the case then it is certainly not a modern phenomenon. It has always amused me how incensed English teachers become when a student's work slides between the tenses. In English classes, students are trained that switching tense is an absolute no-no and will mean that their writing makes no sense. In the ancient world, by contrast, switching between tenses for effect was considered the height of excellent writing: Virgil was a genius at it.
A poet such as Virgil sometimes wrote whole passages in the present tense for effect; he would also write in the past tense and then jump into the present for a particularly striking moment, capitalising on the jarring effect to make a moment vivid. So a technique practised by men that were and are (past and present) considered to be some of the greatest literary artists that have ever lived now gets you marked down in GCSE creative writing and certainly gets you up the nose of Adrian Chiles.
In truth, I would not advise students to switch constantly between tenses in the way that Virgil does; it is not a technique commonly used in modern writing and can indeed lead to confusion unless used with caution. Apart from anything, just because a technique is used by a genius doesn't mean that it's necessarily a great idea for us lesser mortals. But the use of the present tense to describe historical events is surely an effective way to bring them to life and I'm a little puzzled as to why anyone would find it so irritating. I guess it's one of those things, like a dripping tap, that starts to wind a person up inexorably once they have noticed it. My advice for Chiles would be to try some deep-breathing exercises next time he watches anything on BBC Four.
Some things have happened to me this week that have made me reflect about how we talk to each other online. I mentioned in my last post that I had (accidentally) smashed my iPhone. This is now fixed, although not before I had been through quite the self-reflection on whether it might actually be rather good for me to own a smart phone that was less pleasant to use. In the end, however, I concluded that a broken phone was at risk of malfunctioning and that this was perhaps not the smartest move for someone who is self-employed and relies on business coming in; yesterday, I forked out for a replacement screen.
The smashed phone coincided with some broader reflections that I also mentioned in my last blog post and which have continued to ferment in my mind. Two television programmes have influenced me over the last fortnight, one a drama and one a documentary. A couple of weeks ago I got around to watching the most recent season of Charlie Brooker’s Black Mirror and was moved and disturbed as always. The final episode – without giving too much away – deals with smart phone addiction; it is a thought experiment about where such an addiction might lead in a worst-case scenario, and takes a wry look at how even the creators of the big social media platforms seem to rue their own creation.
This episode of Black Mirror really stuck in my mind and at first I struggled to think why. It wasn’t one of Brooker’s best and it certainly wasn’t one of his most disturbing. (There are other episodes of Black Mirror that I frankly regret watching). Yet this one needled me, I suspect because I recognised the compulsion and the attachment it explored. I knew that I found my smart phone addictive. So I resolved to do better, and as a part of my quest I decided to watch something else that had been on my list for a while, a Netflix documentary called The Social Dilemma. This production, made only a couple of years ago, interviews a range of ex-techies from Silicon Valley, all of whom have left the companies for which they previously worked: there was the guy who created the “Like” button on Facebook, there were techies from the platform formerly known as Twitter, from Instagram and even from Google. All of them had three things in common. Firstly, they had all struggled personally with addiction to the products that they themselves had helped to create: they were suppliers addicted to their own drug. Secondly, they were now united in opposition to the way that these platforms were built and designed in order to be addictive; many of them were actively campaigning against the platforms that they used to work for, appalled by what they themselves had created. Thirdly, not one of them let any of their kids near a smart phone. Not at all. These were wealthy tech whizzes from Silicon Valley and their own kids do not have smart phones. If that doesn’t make the rest of the world reflect on why they let their kids have access to these devices from such a young age, I don’t know what will.
There is so much to love about the internet. I find it empowering and useful and it enables me to do the work that I do. On the other hand, there is much to be afraid of, most of all the addictive nature of the ever-accessible device in your pocket. Listening to the men and women who created these platforms that we all use and hearing them explain how they are built, designed and programmed to be addictive was a sobering experience. I have found myself looking at those around me – both the people I am close to and people who are strangers to me – and I see the signs of compulsive usage everywhere. I see it in myself. To my regret, I have found myself scrolling through and staring at platforms I actively dislike, somehow unable not to look at them, even in the sure and present knowledge that they bring me no joy. Why do these things have such power over us? The answer is that they were built that way; clever people are paid a lot of money to find ever-improving ways to keep us glued to every platform we sign up to.
In response, and taking the direct advice of the self-confessed ex-drug-pushers from Silicon Valley, I have removed all social media apps from my phone. There are several platforms I viscerally dislike and would happily never use again, but they are undeniably useful for business: Instagram, Facebook and LinkedIn; these from now on I will manage solely through scheduling on my laptop, and I will log in to do that kind of work once or twice a week. The messaging services on Facebook and Instagram I have set up to deliver an automated message to anyone enquiring after my services, saying hello, explaining that I do not spend time on those platforms and giving other ways to get in touch with me. The responses to this, I can tell you, have been interesting. A couple of very genuine prospective clients have reached out to me, one even thanking me for enabling them to get off the platform, which she also disliked. Another said “good for you”. But two other people – neither of whom were prospective clients, nor were they known to me personally – have already expressed their disapproval.
When I logged in to check my Instagram account recently, I found one message from someone purporting to be a business coach. I have no interest in using a coaching service, so I would have ignored this man's approach anyway, wherever he had made it. He sent me a message stating that he "had a question about my business" and, because it was on Instagram, he received my automated response. His immediate reaction was anger. I blocked him, obviously, but I do find myself wondering just how bad his own addiction must be, if the mere implication that someone else was choosing not to hang out on his platform of choice made him furious.
Further to this, it appears that another person approached me initially on Instagram and then followed this up, as instructed, with an email. This, of course, I received. He too said that he had a question, and I asked him what it was. Fortunately, it was not a ruse to send me something inappropriate, but it was an inroad into asking me to translate something into Latin for him. Now, you probably don’t realise this, but I get literally dozens of these kinds of requests. I used to respond to all of them. I still do to some. A few months ago, someone got in touch and asked for my help with a favourite quotation for their mother’s funeral and of course I replied to them, indeed I corresponded with them at some length.
Much of the time, however, especially when I am busy, I don’t honestly consider it my honour-bound duty to provide a free translation service for anyone and everyone’s t-shirt, club logo, necklace or tattoo. I am a teacher and a tutor, I’m not a motto-creation service. If someone asks nicely, I may help them out. This man, however, before I had even decided whether and how I was going to respond to his request, followed up his initial email with a second one barely an hour or so later, wanting to know whether I had received the first email and intimating that he was waiting on my response. I didn’t like this, so I decided simply to delete both the emails. The consequence of this decision was that he sent me another, one-word message on Instagram. It said “fraud”.
Fraud.
I am sure that this person is a perfectly reasonable and functioning individual in real life. Were I to sit him down face-to-face and explain that this is a busy time of year for me, that I get dozens of these sorts of requests, that I might indeed have responded to him had he been a little more patient and not harassed me for an answer, I am quite certain that he would have reacted in a rational manner. Yet online, without that human connection, not only did he decide that I am a "fraud", he felt the need to tell me so. How did he feel after he sent that message, I wonder? Vindicated? Satisfied? Like he'd done a good thing? Somehow I doubt it. It is an empty feeling, shouting into the void and being left to wonder what the reaction at the other end might be.
The truth is that these platforms are not good for us. They make us less honest and they make us less kind. Most of all, it seems to me, they make us lonelier by dividing us further – the very opposite, those recovered tech junkies tell me, of the original Silicon Valley dream. So you will not find me hanging out on LinkedIn, Instagram or Facebook, none of which contain anything that interests me enough to outweigh the excessive demands that they have placed on my attention due to the addictive nature of their construction. I do gain something from the platform formerly known as Twitter, as so many teachers exchange ideas on there and it remains an outstanding medium for finding links to new ideas and research about good practice in education. If Threads takes over that mantle, so be it. Still, however, I have ruthlessly removed these platforms from my phone. I will keep things on my iPad, which I do use but nowhere near as much as I use my phone. So the phone will be solely for genuine messages from real people – family, friends and clients. At the moment, as I get used to the situation, I am finding myself picking the phone up and then wondering what on earth I have picked it up for. Numerous times a day. This only goes to prove that my decision was right – clearly, the number of times I have been habitually checking these platforms for no good reason is genuinely scary.
I think what I have decided is that, like all addictive substances, social media must either be avoided altogether or be very strictly managed. Its usage must be balanced against the risks and if it’s not bringing me joy or enriching my life, then I genuinely don’t see the point of it. For some people, I fear, social media really is the same as drugs and alcohol: highly addictive, with the potential to turn them into the very worst version of themselves.
It’s been impossible to ignore the start of the school year this September, even for those people with no children and with no connection to the education system. With the scandal of RAAC concrete rocking the country and all of us reeling once again at what can only be described as years of incompetence and underinvestment by government, whatever your political stripe, the start of the new school term and the new school year has been on everyone’s mind.
This academic year feels like a milestone for me. This time last year felt truly surreal, as for the first time I did not return to school as I had done for the previous 21 years. The start of last September was very strange and somehow I didn't quite believe it was happening; I still had the familiar anxiety dreams, so convinced was my subconscious that I would be returning to the chalkface as usual. This year, with some distance in place between myself and the school grounds, I forgot altogether which day my old school was returning (although old colleagues did keep me posted on the usual hilarities of INSET day).
I have enjoyed the summer holiday immensely, working to a different schedule (I only saw clients in the morning) and doing significantly fewer hours compared to my usual schedule. But it also feels great now to be settling back into the routine again and I am loving seeing the return of regular clients as they come back for their old slots and restart the academic year. There is also the excitement of starting to work with new students, especially the ones that I really feel I can help make a difference to; nothing in life is as rewarding as helping a student to turn their performance around.
This year I decided to reflect on what happens in schools at the start of the new academic year and to apply the best and most important aspects of this to my tutoring business. I have refreshed my safeguarding training, a legal requirement for teachers in schools but not something which is (yet) regulated for tutors. I have looked at my results and done some reflection, although one of the joys of one-to-one work is that you do not face the surprises and disappointments that inevitably occur across a year group in a school. I have reflected on my own practice, decided what worked best last year and resolved to apply the most effective techniques to all clients. Over the last couple of weeks I have reshaped my daily timetable and applied some lessons learnt from last year about when I work most effectively as well as where demand is highest. Finally, I have reflected on how I can reduce unnecessary administration and time-wasting, most especially the time spent on social media, which I have reduced to an absolute minimum; I have put systems in place to mean that I don't have to engage at all with the platforms which do not bring me joy, namely Facebook and Instagram. That final decision has been rather well-assisted by me smashing up my iPhone (not deliberately, but there is a psychological school of thought that there are no real accidents …); this sparked some further reflection on just how much screen time is truly necessary for running a business like mine and how much of it was mindless, fruitless scrolling in the name of "visibility", which so many business coaches seem to preach is essential to the success of my business. With a website that performs as well as mine does, I do not find this to be so.
Thus, as I settle in to my second year as a full time, independent, one-to-one tutor, I could not be happier with my role and with the balance I have managed to strike between meaningful employment and a better quality of life. I cannot wait to get on with helping my clients, old and new, and to see what the new academic year will bring.
Since my last post, so many people have sent me messages asking what my research was actually about that I have decided to write an explanation. You only have yourselves to blame.
One of the difficulties one faces when writing a proposal for a PhD is to find a niche in one’s subject where there is work left to be done. I have met academics in my time who have written PhDs on Virgil or Homer, but how they managed to come up with a new angle, never mind how they managed to get a handle on everything that had been written already, is completely beyond me. Personally, I decided that something a little more obscure was the way forward.
I had an interest in ancient philosophy and I was also lucky enough as a part of my degree to do an undergraduate course on the rise of Christianity in the ancient world. These two fields of study collided when I started to learn about Neoplatonism, a branch of thinking in late antiquity which is notoriously difficult to define. In origin and essence, Neoplatonism was everything that was said, thought and written about Plato, Aristotle and other key thinkers in the generations after they lived. Initially, this meant the men studying in the schools in which Plato and Aristotle themselves taught (Aristotle was a pupil of Plato, so the process started with him), but as the centuries rolled by Neoplatonism became the wildly diverse writings that were produced generations and even centuries after Plato and Aristotle were writing and teaching. People also wrote intensively about Pythagoras and some ancient scholars became interested in finding what they believed to be religious and philosophical allegories in the writings of Homer. The study of what these men wrote at the time is thus an entire field in itself – if you like, it's the study of Platonic, Aristotelian and Pythagorean reception in the ancient world. Its most famous and respected proponent was a man called Plotinus, who lived and wrote in the 3rd century AD and had a strong influence on Christian philosophy; I specialised in the men who came shortly after him.
Despite its noble origins as an intellectual field of study, Neoplatonism took on a life of its own and morphed into something really rather bizarre as the years rolled by. This was partly because it was influenced during this period by the growth of religions that focused on developing a personal relationship with one’s god, but there were other complicating factors too. Suffice to say, by the time you get to the period in which I specialised, Neoplatonism had become something pretty weird and wonderful: an intensely intellectual field of study on the one hand and a downright barking set of pseudo-philosophical cultish ravings on the other. I do not exaggerate – better scholars than I have said as much.
Most of the writings from the period we are talking about were so mystical and incomprehensible that modern scholars had no interest in bothering with them. As a result, many of the texts remained untranslated until a movement led by Richard Sorabji, who was a Professor at King’s College while I was studying and researching. Sorabji oversaw a series of texts and translations, making many of these works available for the first time to undergraduates and indeed to anyone else who was bonkers enough to be interested. He specialised in the commentators on Aristotle, the scores of ancient scholars who had spent their lives poring over Aristotelian texts and writing down their thoughts on them.
So I ended up wading around in this quagmire of growing information in this developing field and, prompted by my Supervisor, took a look at a text nicknamed the De Mysteriis by an author called Iamblichus, a Syrian thinker who was writing in Greek during the late 3rd and early 4th centuries AD. He was particularly keen on Pythagoras, and wrote masses of pseudo-mystical nonsense about him; we have one complete surviving work which has frankly undeniable parallels with the Gospels and presents Pythagoras as what can only be described as a Christ figure. He also wrote various other works including the De Mysteriis, on which I wrote my research and which is fundamentally about theurgy or divine magic. Yeah. I told you it was weird.
So. Theurgy. It is pretty difficult to define without presenting my entire thesis, but in essence it was a range of mystical rituals, all with the aim of connecting humans with the divine. You'd recognise some of them from your general knowledge of the ancient world: oracles, for example, through which the gods supposedly spoke to men. Iamblichus believed very firmly that there was a right way and a wrong way of doing these divine rituals, and the De Mysteriis is his authoritative account of what's what when it comes to doing this stuff. As a result it is – inevitably – absolutely barking. This is not exactly what I said in my thesis, but it's the honest truth in summary. Indeed, the De Mysteriis is so barking that previous scholars had largely consigned it to obscurity and it had not been translated into English since 1911. So, that's where I came along. My PhD was a study of the work and through that research I hooked up with another couple of scholars – far older and more prestigious in the field than I was – who had in the previous decade taken on the task of producing a modern edition and translation of this text. They were – to put it mildly – rather regretting doing so. One of them had already had a heart attack, although the jury was out as to whether the De Mysteriis was entirely to blame or only partially. Long story short, they drafted me in as Chief Editor and I finished it for them. My PhD was also published.
As I wrote last week, I did not enjoy the process of academic research and I regretted signing up for it. However, this does not mean that I was uninterested in much of what I was doing. What it did reveal is what I should have been studying, and it wasn't Classics. During the process of my research I realised that what fascinated me more than anything else in the world was (a) what makes people do, think and believe what they do and (b) how it is possible to persuade even the most intelligent and educated person of something which is provably impossible. In simple terms, why do people believe in miracles? Why did Iamblichus believe that a truly inspired (for which read fully possessed) spokesperson for the gods could be struck on the back of the neck with an axe and not be injured? Did he really believe that the famous oracles of which he spoke were still functioning? (We know for a fact that most of them had been disbanded by his time – one that he writes about fulsomely had become a Christian campsite by the time he was writing). Following my interests, and whilst I was meant to be working exclusively on Neoplatonism, I ended up going down all sorts of rabbit holes. I read about early 20th century research into "shell shock" (now known as PTSD); I read purported accounts from the 19th century of children possessed by the devil; I read about mass conversion rallies such as those led by Billy Graham; I read about attacks of crowd hysteria, such as fainting fits or hysterical laughing in nunneries and girls' boarding schools; I read about witch trials; I read about Zaehner's LSD-fuelled research into what would happen to his mind when, enhanced by hallucinogenic drugs, it was exposed to art or literature. (Not much as it turns out – he just couldn't stop laughing). In short, I read a wildly diverse range of stuff about possession, altered states of the mind and all sorts of jolly interesting weirdness. Long story short, I should have switched to anthropology.
My interest in such things remains to this day and in other guises I have written articles about belief, conversion and religiosity. I even dipped my toe into novel-writing and wrote a dystopian Young Adult novel about a world in which beliefs are controlled and dictated. Much of my spare time these days is spent reading about a variety of cult-like beliefs which are developing rapidly and spreading online. I might even write about it one day.
As thousands of students receive their A level, BTEC and T level results this morning, I’ve been thinking about moments in life that I and no doubt many others from my era nickname “sliding doors”: moments that mark a turning point in the course of your life. The 1998 film Sliding Doors explores the idea that the course of one woman’s career as well as her love-life hung upon whether or not she managed to catch a particular tube-train; it follows both scenarios in parallel – one in which she catches the train, one in which she doesn’t.
In real life, of course, without the omnipotence of a film director, one cannot do this. We cannot see the different scenarios played out and choose which one we prefer. We can look back at pivotal moments in life and acknowledge that something shifted in our lives at that moment, but we cannot know what would have happened in an alternative universe. In the context of romantic relationships, this concept is expressed wonderfully in another 1990s classic, one of my favourite songs by Pulp, called Something's Changed. In this song, Jarvis Cocker explores the chance nature of his meeting a partner and how it might never have happened; it also uses the conceit of imagining himself writing a particular song on a particular day, which then became about that person:
I wrote this song two hours before we met.
I didn't know your name, or what you looked like yet.
I could have stayed at home and gone to bed.
I could have gone to see a film instead.
You might have changed your mind and seen your friends.
Life would have been very different then.
Later in the song he returns to the conceit, and perhaps my favourite moment (probably upsetting to the more romantically inclined among you) is when he even ponders that, without his partner, he might have met somebody different:
When we woke up that morning we had no way of knowing
That in a matter of hours we'd change the way we were going.
Where would I be now? Where would I be now if we'd never met?
Would I be singing this song to someone else instead …?
The tone of the whole song is wistful but not melancholy, nor is it overtly gushing – those of you who know Jarvis Cocker will understand that he doesn’t really do gushing. The girlfriend is given a voice, but she uses it to tell Jarvis to “stop asking questions that don’t matter anyway”. The general conclusion is: ah, well.
I spoke to a friend this week – on Zoom because we live 300 miles apart – and she too is at a turning point in life. We spoke about sliding doors moments and I told her about how miserable I was doing my PhD and how eventually I decided not to pursue an academic career because I realised that the lifestyle was making me deeply unhappy. “I finished it,” I said. “But it nearly killed me.” This friend then asked me something that nobody has asked me before. She asked me whether I regretted finishing it.
Lots of people have asked me whether I am glad I finished my research. That tends to be the expected tone of the conversation. It is in my view a marker of how insightful this particular friend is that she worded it differently. Do I regret finishing it? Do I regret putting myself through that process? It got me thinking. Maybe I should.
Looking back, the reasons I finished it were all in relation to external pressures. I had received funding from the British Academy and that is very hard to come by – I would have felt guilty that I had taken somebody else’s place and squandered such a privilege. My parents would have been disappointed. My Supervisor would have been disappointed. Finally, and perhaps most foolishly of all, I didn’t like quitting and I still believed that the qualification was important. So, I soldiered on and I finished the thing. I cried multiple times a day. A low point was sitting in my college room and pondering how long it would take somebody to notice if I died in there.
Not only did I finish the PhD, my deep unhappiness and loathing for the life drove me to finish it in record time. Two and a half years. For the last 4 to 6 months when I was officially “writing up”, three copies of the completed thesis sat on my desk, all printed out and ready to be bound. I hid this even from my Supervisor. It was not the done thing to finish in less than the standard three years, plus I had nowhere else to go once my thesis was completed. I had a place to start teacher training in September, but until then I needed to hang on to my college room. So I waited. Eventually, the thesis was bound and sent to the examiners and ultimately my Viva went without a hitch. PhDs are not officially graded, but truthfully there is a hierarchy to what the examiners may say to you at the end of your Viva. Best case scenario is that they mark in pencil a few minor errors or typos and tell you that these do not require correction in order for the thesis to be accepted; worst case scenario, they tell you to tear the thing up and never darken their doors again. Most people fall somewhere in the middle, with corrections advised or sometimes a re-write of some sections recommended. Mine was waved through with the minor pencil errors. The examiners shook my hand at the end of the session and used my brand new title for the first time. Being me, I did correct the minor errors even though I didn’t have to, and submitted it to Senate House.
My PhD has brought me some benefits, not least an exposure to teaching which ultimately became my career choice and is not, I think, something I would have considered as a possibility had I not been thrust into it. It has been useful, as a woman, to sign off as "Dr. E Williams" when writing to certain types of people or institutions – until we bring down the patriarchy (work ongoing), it can be handy to let the recipient of your complaint assume that you are a man (which those certain sorts of people or institutions inevitably tend to do when you use an academic title). My PhD is also something I am proud of, solely because I know just what it took me to complete it. Many of my research peers fell by the wayside (ironically, all of them claiming to be loving the process of research, while I was always very vocal about how miserable I was finding it). A few years ago, I was invited back by my university as something of a voice of doom on a panel about postgraduate research: it's tough, and most people don't really enjoy it. So be careful what you wish for.
Back to my lovely friend’s question. Do I regret finishing my research? I have always told myself that I don’t, since I made it through and have something to show for it. My field is pretty niche (true of most PhDs) but my contribution was significant and is still cited in other people’s research in this field right across the world. Quitting would have meant that I had a bad experience and had nothing to show for it. Also, when it comes to sliding doors in life I think it’s best not to have regrets: you made the decisions you made or things happened and that’s how it is – there is little point in asking yourself what you could or should have done differently. But the way my friend worded that question really has made me think about that particular decision a little differently.
Sometimes in life, putting yourself through more pain truly isn't worth it. The more I think about that awful time, the more my decision seems a little crazy. Much as I cannot see the alternatives played out with the clarity of a film, I can make sensible and reasonable predictions about what might have happened. My career would still have worked out: I had already been exposed to teaching (that happened in my very first year of research), so I probably would have chosen to switch to teacher training if I'd had the foresight and courage to jump ship when I could and should have done. I would have started work earlier, bought a house more cheaply than I did, paid into my pension for longer. If I'm honest, I'm not sure that my PhD has benefitted me in gaining work to the extent that I have told myself it has; my first-class Honours degree and Masters at Distinction level were probably plenty and I'm not quite sure why I've never considered that before. So in truth, I cannot think of a negative outcome that would have happened due to quitting, other than the immense courage it would have taken to do so.
It is healthy in life to have no regrets, and I’m certainly not going to beat myself up about a decision I made in my mid-20s. I shall continue to make use of my title, and maintain pride in what turned out to be the toughest achievement I have ever faced in life. Go me. But if I could go back in time and tell myself what to do in 1997, I’d tell myself that the brave decision was not, in fact, to soldier on and do what was expected of me; the brave decision would have been to run for the hills. Sometimes, quitting is the bravest act of all.
Students are often surprised and puzzled when I point out to them that English is not very comfortable with the passive voice. It’s not our most natural way of speaking, which may go some way towards explaining why students find the passive voice difficult to translate. For example, while Latin slides into the passive voice in the imperfect and future tenses quite simply with its alternative set of endings, English makes one heck of a meal out of this: who on earth really wants to say “he was being carried” or “he will be carried”?
Despite this, and this is another thing I like to point out to my tutees, the passive voice is used for very distinct purposes in the English language. First of all, it is used in scientific writing. When writing up an experiment, students are taught to write that “the powder was placed into the test tube” rather than “I put the powder in the test tube”. In scientific writing, this tradition stems from the principle that we should take the individual out of the process and focus on the process itself, removing any other distractions or influences. This then carries forward as a tradition in all academic writing in all fields, although I note with some dismay that this appears to be changing. O tempora! O mores!
The passive voice in scientific writing is a tradition because the person conducting the experiment is not (or should not be) the focus of the experiment. Likewise, another purpose for which the passive voice can be used is to separate an event from its cause. Consider the difference between saying "Emma broke the vase" and "the vase was broken (by Emma)". Not only is the person who broke the vase separated from the action, they don't even need to be named for the action to make some kind of sense: the accident just happened.
This kind of passive speaking is used to great effect by public figures – in particular, politicians. The phrase "mistakes were made" even has its own Wikipedia entry, so synonymous is it with political double-speak. The phrase, described by the New York Times as a "classic Washington linguistic construct", allows a politician to sound as if they or their party are taking responsibility for something without actually doing so and without even articulating what they are taking responsibility for. Despite the fact that journalists have poked fun at this phrase since it was first used as early as the 19th century, politicians continue to roll it out on a regular basis. Listen out for it – you'll be amazed how often it or something very like it pops up.
There are yet more creative ways in which the passive voice can be used other than merely distancing yourself from responsibility. If you want to create a straw man argument and so ridicule a view that nobody actually holds, how about saying "what we are being asked to believe"? "What we are being asked to believe here is that young people are incapable of making any kind of decision." Who is actually asking us to believe this? Erm, nobody; we're just "being asked" to believe it by persons unspecified – straw men, if you will. The passive voice makes this rhetorical trick viable, effective and convincing. (Nice tricolon, I know). Similarly, if you wish to give more credibility to a position than it truly deserves, then make it sound like the consensus view by using the perfect passive participle: drop in that a position is "long held" or "long agreed-upon". Long held or long agreed-upon by whom, exactly? Well, nobody knows or seems to care.
One of my hobbies is listening out for ancient rhetorical techniques as employed by modern politicians (or rather their speech writers, as since the Age of Spin I am somewhat cynical about the degree to which any of our modern leaders write their own material). Many of the techniques learned, employed and taught by the greatest speech-writers of the Roman era can still be heard in the House of Commons today. A very basic version of the same skills is indeed taught in schools, as students are still expected to learn how to write persuasively in their English language GCSE. The passive voice is an often-overlooked and thus dangerously insidious technique. Do not let the speakers fool you with it. Or, I should say, do not be fooled (by them).
We in education could learn a great deal from the aviation industry. In fact, most professions could learn a lot from the aviation industry. While so many other professions have a tendency towards a blame-culture and criticism, aviation is built on the principle of learning from its mistakes and implementing procedures to mitigate against them. It is also relentlessly focused on clarity of communication, for lives literally depend on getting this right.
Last night, my husband and I watched one of a string of documentaries aired on the National Geographic channel, which follow accident investigations in aviation. The accident explored in this particular episode occurred all the way back in 1989 and involved a Boeing 707 on an American charter flight from Italy to the Dominican Republic. The flight was making a scheduled stop on the Portuguese island of Santa Maria, where it was due to be refuelled. As a result of a series of minor but significant oversights in procedure made both by the flight crew and by the Air Traffic Controller, Flight 1851 came in too low and struck a mountain range, killing everyone on board.
One of the most important things about the way in which investigations are conducted in aviation is that the culture focuses not on finger-pointing but on identifying the factors which led to an accident, so that the findings can be shared within the industry and lessons truly learned. The investigators may make recommendations which lead to a tightening of regulations around pilots' working hours, a change in how pilots are trained, an adjustment to the design of an aircraft and/or its instruments, a tweak in recommended flight procedures, or all of the above. The approach is a model in how to react under extreme pressure: it does not seek to apportion blame, it aims rather to improve safety for the future, for the benefit of everyone. The pilots who do make errors (and who pay the ultimate price for them) are treated with infinite respect and the investigators do not simply stop at putting things down to "pilot error" and then washing their hands of the incident – they go on to explore why the pilots may have made a particular error, with the ultimate aim of reducing the risk of similar errors occurring again in the future. Were they overtired? Was the information they were receiving unclear or counter-intuitive? Was their training insufficient in this area? Above all, how could we have prevented them from making this mistake? I find this approach genuinely inspiring and I wish other fields would learn from it.
One of the key findings of this particular investigation was that there was a crucial miscommunication between the Air Traffic Controller and the flight crew. The ATC instructed the crew that they were “cleared to three thousand feet” (in other words, that their initial approach should not go below three thousand feet). For several reasons, the First Officer ended up mishearing the instruction as “cleared two thousand feet” and the aircraft was set to the incorrect height. My husband (a trained pilot, as it happens) informs me that this would be less likely to occur now because the advised vocalisations for this particular information have been revised, in an attempt to prevent such misunderstandings; the instruction would now be “you are cleared to altitude three thousand feet” – the word “altitude” must be used immediately before the given height in order to avoid any confusion between words and numbers, an error which in the case of Flight 1851 led to devastating consequences.
While the documentary explored numerous other reasons why the crash ultimately occurred and indeed made it clear that the Captain of the flight crew undeniably missed several opportunities to prevent the disaster from happening, the underlying cause of the crash was quite simple – the aircraft came in too low. The miscommunication between the ATC and the flight crew, which was not corrected when it could and should have been, set the aircraft on a collision course with the mountainous terrain towards which it was headed.
Now I for one am jolly glad to be working in a field in which my mistakes are unlikely to cause multiple fatalities and even less likely to be the subject of a documentary on National Geographic some 35 years later. Yet this minor slip which led to such devastating consequences for the flight and its passengers did remind me of a misapprehension which I discovered in a tutee this year. On Flight 1851, the First Officer mistook a word for a number – quite simply, he heard “cleared to” as “cleared two”. Similarly, I realised this year that one of my tutees was convinced that the dative case had something to do with numbers. After a couple of minutes of discussing this with him and trying to explore what was going on, I suddenly realised what had happened: his teacher had (quite rightly) taught his class that the dative case was to be translated as “to” or “for”. My tutee, however, had heard “two” or “four”. He heard numbers instead of words. And he had been royally confused ever since.
Whilst teachers are not making minute-by-minute decisions on which hundreds of lives depend, they are laying the foundations for a child's understanding in their subject. Whilst this is not life-threatening (happily, I can't think of a single occasion on which a misunderstanding of the dative case has led to multiple fatalities), it is nevertheless important in our line of work, assuming we care about what we do. This particular child's misunderstanding underlined for me the importance of dual coding, which means using a visual representation of what you are saying as well as a verbalisation: quite simply, if the teacher in question had merely written the words "to" and "for" on the board as they spoke, they would have avoided the misconception that was absorbed and internalised by this particular child.
On one sunny day, during which I took the photograph below, I was very privileged to join my husband on a flight during his training and listen to the impeccably high standard of teaching that he received from his instructor. My advice to all teachers if they want to observe a model in verbal clarity is to take every opportunity that they can to go and listen to people who teach a practical skill. Go and watch a PE teacher setting up a game; watch a science teacher preparing students for an experiment; take a refresher course from a driving instructor; tune in to your coach at the gym. Above all, in your own teaching, remember that every word you use must be carefully thought through and – in an ideal world – that you should take a note of every misconception which does occur and seek to mitigate against it next time by improving your verbal explanation. While I am happy and relieved to say that a child’s life will not depend on your words, their success in your subject absolutely does.