Shining Girls (Apple TV+, 2022)

Another time travel fiction review with SPOILERS for the TV show (and to some extent the book as well)

I very much enjoyed the Apple TV+ series Shining Girls, an adaptation of Lauren Beukes' 2013 novel The Shining Girls. I thought it was well acted, well shot, mostly well written and had a satisfying ending, albeit a problematic one since the killer is left alive and Kirby might now be forever bound to the house like he was. However, I was confused and somewhat annoyed by the time-travel aspects; the way the house worked as a time machine mostly made sense, but the way that Kirby's present (and later that of Harper and Jin-Sook) was shown to change moment to moment really makes zero sense. It made me very curious to find out if it was part of the book, and I soon found out that it wasn't. I decided to read the book as I much preferred the idea of a straightforward time travel version of the same story. As much as I enjoyed the book, it made me all the more annoyed that the TV version had made such a dramatic and nonsensical change. It wasn't the only questionable change either. The focus upon Kirby and her ever-shifting reality resulted in a great deal being changed or removed, including most of the titular 'shining girls' – surprisingly for 2022, the black, trans, and pro-abortion characters among them. The ones that are retained are significantly changed, and a whole new character – Leo Jenkins – is added for no clear reason.

Time travel in the novel is straightforward: you simply can't change the past. It's a clever twist on a closed loop like The Terminator or Twelve Monkeys – nothing changes, ever. In the TV show it's more like Terminator 2 or Back to the Future – you can change the past and save the girls. This is a change that the 12 Monkeys TV show also made to the movie's story, and I could have lived with the same here. Most people don't share my love of closed loops and it's fun to see a seemingly foregone conclusion averted/subverted (which is why James Cameron contradicted his own first movie with his sequel – it made for an emotionally satisfying ending at the expense of pure logic). No, what got me annoyed in Shining Girls (2022) was not the malleable timeline but the introduction of a second, wholly nonsensical mechanism for changing it. This is both more confusing than it needs to be and a direct contradiction, because in theory changes made by one mechanism should impact those made by the other. Dark and Avengers: Endgame (see my review here) both introduced branching realities, with varying degrees of success – I would have been OK with this show doing something similar, since under that system of time travel cause and effect is pretty much intact. Shining Girls makes the same mistake as Endgame, but whereas that film's logic only broke in the final scenes and can be 'fixed' with some off-screen assumptions, Shining Girls is fundamentally broken as a time travel story, since its second mechanism has nothing to do with 'many worlds' and is, well, random. Drinking vessels, desks, haircuts, clothes, characters and locations all change, for absolutely no reason. No multiverse shenanigans are ever mentioned or even implied. The characters speculate at one point that the changes are somehow echoes of events that might yet happen; a laundromat changes into a bar for which Kirby already has a matchbook, and Kirby goes from single to married to a coworker.

Dan: When things change for you, do you recognize it? 

Kirby: Sometimes. Other times, they’re just random. 

Dan: Maybe they’re what’s to come.

But then it's shown that she doesn't marry her coworker at all in the 'final' timeline, at least as far as we see. Is she still destined to do so at some point? If so, then there's no chance that she stays in the house and becomes some sort of time-travelling vigilante or whatever. They've shown that it's possible to change reality, seemingly permanently, so surely the timeline where she marries him is no longer viable? When should the laundromat have been a bar and what are the consequences of it changing at the 'wrong' time? Kirby has the matchbook – why? Jin-Sook's career is destroyed in the present because she isn't killed…also in the present. At the same time Kirby's present also shifts because Dan is stabbed, again, in the present. Why? The answer to all of this and the other seemingly random changes is deeply unsatisfying and illogical. The cause of these changes is not meddling in the past but rather (sigh) strong emotions experienced by someone who is 'entangled' (a clear if nonsensical attempt to reference quantum mechanics) with another person who is somehow detached from time – namely Harper (with Kirby's fellow victim Jin-Sook joining the entangled mess later on). In showrunner Silka Luisa's own words:

“I always thought of time just there’s one string of time, and so wherever Harper is he’s still connected to Kirby so his emotions, his violence against other women it ripples forward kind of like a butterfly effect and changes her world, changes her, you know, her hair, her apartment depending on on what he’s done, and so if he kills Jin-Sook in April 26 it doesn’t matter that Kirby is, you know, at the same time, it basically ripples backwards and still impacts her life.” 

This (and another attempt to explain it here) makes absolutely no sense. The conceit of ‘mutable’ timeline time travel and much of our fascination with it is that when you change something, you’re creating a cause that has an effect. It doesn’t matter which way around – you can have something exist out of time in the past that is caused in the future; logically speaking there’s no problem with that. But two unconnected events are, well, unconnected. There IS no cause, there can be no effect. How the hell does Harper killing a woman that has nothing to do with Kirby’s past change Kirby’s present? How does him attacking her in the present change the past of the building that they happen to be in? Or where her desk is? How is Harper ‘entangled’ with Kirby in the first place? He’s affected by the house’s time travel magic – is this somehow contagious? There is no satisfactory answer to any of these questions. What Harper is doing in the present cannot logically affect events in the past. He can take an object from the present back or otherwise change the past IN the past, but he can’t just throw a spacetime tantrum and change Kirby’s past from the present. What Luisa is describing is some sort of psychic warfare – which might have been an interesting premise for a TV series, but not this one. The changes are not even consistent in their frequency or magnitude. At one point near the end reality shifts again but Kirby’s hair, clothes and makeup don’t. This was apparently because they “ran out of hairstyles” and liked her cool punky confident look so they just kept it. 

Of course it's possible (as some fans have done) to invoke 'many worlds' and say that every change we see is actually the universe branching, but that's not shown or told to us. Instead, everything is shown to happen in a single mutable timeline in which trips to the past absolutely do change the present/future. Further, only causal events that take place in the subjective present (like the fight with the changing building) could create a branch in reality, and even then the branch would occur then and there, not arbitrarily in the past (indeed, according to the many worlds interpretation of quantum mechanics, that's exactly what IS happening all the time). If you're going to make up rules that aren't logical, OK, do that, but you need to spell them out, if not in the show then somewhere (famously, Donnie Darko did this on its website).
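To make that distinction concrete, here's a toy sketch of the three coherent rule-sets under discussion – closed loop, mutable timeline, and branching. To be clear, this is purely illustrative: the Timeline structure and the function names are my own invention, not anything from the show or book.

```python
# Toy models of the three coherent time-travel rule-sets.
# All names and events here are invented for illustration only.

from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Timeline:
    events: Dict[int, str] = field(default_factory=dict)  # year -> what happened

def fixed_loop(tl: Timeline, year: int, event: str) -> Timeline:
    """Novel-style closed loop: a trip to the past can only enact what
    history already records, so nothing ever changes."""
    if tl.events.get(year) != event:
        raise ValueError("In a closed loop you can only do what already happened")
    return tl

def mutable(tl: Timeline, year: int, event: str) -> Timeline:
    """Back to the Future-style: an edit in the past rewrites the single
    timeline downstream of it. The edit is the cause; the new present is the effect."""
    return Timeline({**tl.events, year: event})

def branching(tl: Timeline, year: int, event: str) -> Tuple[Timeline, Timeline]:
    """Many-worlds style: the edit spawns a new branch; the original survives."""
    return tl, mutable(tl, year, event)

# Note that every function takes a (year, event) pair: a cause located
# somewhere in the timeline. The show's second mechanism has no such input --
# Kirby's present mutates without any edit to her past, an effect with no cause.
```

Whichever rule-set you pick, an effect can only be produced by an edit located somewhere in the timeline; the show's second mechanism supplies no such edit, which is why no coherent model fits it.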

I don't think I'm just being a time travel obsessive here. It isn't just the fun nerdy logic-puzzle aspect that this affects, it's the narrative as well (unless you miss the fact or choose to overlook it). Although it feels like the stakes and tension are being raised by the changes becoming more frequent and disruptive, they aren't really – it's unearned and artificial-feeling, like overly dramatic loud music playing over an otherwise ordinary scene (looking at you, modern Doctor Who). If anything can happen at any moment to three of the main characters, nothing really matters. It's also needlessly confusing for the viewer, since it's hard enough for people to follow cause-and-effect changes – hence the contrived photos and fax in Back to the Future – never mind completely random ones taking place in parallel yet not, apparently, conflicting with or modifying the logical changes. Two totally separate mechanisms for change, operating at the same time. It's a bizarre narrative choice, especially since it isn't taken from the book, and it detracts from the otherwise excellent acting, staging, dialogue etc. However, having read many reviews, not many people seem to agree with me. Reviewers seem to fall into several camps on the time travel aspects. First, people like this SyFy reviewer who seem to think that this is multiverse travel, which I've explained isn't the case. Second, some people misunderstand what's shown and think that the changes ARE due to Harper changing the past, like this Slate reviewer who, by the way, I otherwise agree with. Even Beukes seems to rue the changes to an extent, although she seems mostly happy with the adaptation, perhaps because she's less attached to her own coherent time travel than I am, or simply because adaptations are inevitably a compromise between producers, showrunner, writers and studio. Then there are the people who just don't care, or even (looking at you, Redditors) protest that anyone trying to analyse the time travel is 'missing the point' and should stop fussing over it. Finally, and not too far removed from the last group, are people who accept that Harper, Kirby or Jin-Sook's emotions are somehow enough to change the timeline, which as noted is what the showrunner and writers actually intended. As is often the case with fan explanations, none is very satisfactory.

It seems to me that the creators understood that unexpected timeline changes are interesting and fun from movies like Primer or The Butterfly Effect (or perhaps series like 12 Monkeys) and would fit their intent for the adaptation, but weren't able (or didn't care) to put in the work to make the changes work in terms of cause and effect. Instead they came up with this handwavy version in which things feel like they might ultimately make sense, but logic is in fact out of the window. It's very much the J.J. Abrams empty 'mystery box' approach – set up the intriguing mystery, then reveal that stuff just happens because the writers say so, rather than because (say) Harper killing the coroner/medical examiner in the past prevents Kirby getting access to the body she needs to investigate, and suddenly a key piece of evidence is lost to her (other than her memory of it) in the present. I chose this example because they do a similar reality shift with the medical examiner in the show (changing from a woman to a man and back again), but it happens (twice) for no reason other than to throw off the audience.

The idea here was that Kirby's ever-shifting present would be a metaphor for her trauma and "born of a desire to keep the series subjective to Kirby's experience", but there's no reason why subjectively unexplained shifts (i.e. we the viewers see the cause, Kirby doesn't) wouldn't do just as well – better, in fact, since Harper would be actively changing her past to affect her present and future, rather than being clueless as to how or why he was having these effects. Happily, like the other stories I referenced, The Shining Girls novel follows a self-consistent narrative – Harper was always going to lose, he (and Kirby) just didn't know it yet. No-one is saved by changing the past. Even the hard date limit on Harper's time travel, hand-waved in the show, is originally due to the fact that the timeline is (as the author's time-travel consultant Sam Wilson confirmed) self-consistent – he can't go past 1993 because that's when the house is, essentially, fated to burn. He is living a loop – he dies in the burning house and then, it's strongly implied, becomes the house, reaching back to lure a series of owners, including himself, to try to make things right. But it's a closed loop – he is merely setting the story in motion from its end. He has no free will – something that people tend to dislike about predestination stories, but which I find satisfying. The creators of the show claimed that they didn't want the house to be the driving force for Harper's murders because it took away from his agency – they wanted him bad in the first place. Seemingly, Luisa and co have misunderstood the ending – the house is not just some supernatural entity driving Harper to kill, it's his ghost. Harper himself is the supernatural cause of the time travel in this story. There was no need to change the story to make Harper solely responsible for his evil – he already was. Like all serial killers he thinks that he has some higher reason for killing but in reality it's pointless and circular. The change also destroys the origin of the time travel house – in the show it's just…there, and remains unexplained. Kirby inherits it as a "totem of power" according to Luisa, which seems anathema to the original ending (to be fair to her, she does acknowledge that this isn't necessarily a good thing).

Author Lauren Beukes had fellow writer Sam Wilson 'doctor' the timeline for her to make it work, and he did a great job. Beukes has also described her vision for the novel:

“I wanted to use time travel as a way of exploring how much has changed (or, depressingly stayed the same) over the course of the 20th Century, especially for women, and subvert the serial killer genre by keeping the focus much more on the victims and examining what real violence is and what it does to us. The killer has a type, but it’s not a physical thing – he goes for women with fire in their guts, who kick back against the conventions of their time.”

This aspect, unlike the closed time loop, somewhat carries over to the TV series, albeit lacking the same variety in terms of the titular girls. However, she also stated that she:

"…wanted to play with loops and paradoxes and obsessions which meant the model I settled on was a fatalistic one. Think of it as Greek tragedy time travel – the more you resist your destiny, the more you put into play all the events that will bring it about, like Oedipus or Macbeth or King Herod but also, in the way it loops back on itself, echoing the legends of Sisyphus and the punishment of Prometheus."

This is thrown out along with the time travel logic and, for me, the show somewhat undermines its own narrative as a result. As Beukes correctly tried to show, trauma cannot be magically undone and the dead certainly cannot be brought back. You can only try to address it and, hopefully, stop others from suffering in future. As I said, I did enjoy the show as a supernatural mystery series with time travel elements. The time periods were all nicely depicted and the excitement of travelling through time was there. But it didn't scratch that timey-wimey itch for me, unfortunately. The recent adaptation of The Time Traveller's Wife was much better in that regard. In conclusion, if you're a time travel nut like me, check out the show if you like, but the main thing is to read or listen to the book. Not only is the time travel much better, but the way the interior of the house works, its origins and connection to the killer, and even the title all make much more sense.


That is not Vlad Dracula's House

And that’s not his dad, either…

Another Romania-related one as I catch up with material that I've been sitting on since my visit to Romania in 2020. On that trip I stayed briefly in the beautiful old walled town section of Sighișoara, which I thoroughly recommend. Like many westerners, my interest in the city, the country and indeed the historical Vlad III Dracula was sparked by a love of vampire fiction, but like at least some, I have also found the real history much more interesting than the actually quite loose connection between Count and Voivode. I was well aware that the historical Dracula had no connection to Bran Castle, but I knew little of the Sighișoara claim. I was skeptical of it myself and researched it as best I could at the time, but didn't get around to posting about it. I was pleased to see Dr Adrian Gheorghe of the Corpus Draculianum project covering all of the major 'Vlad' sites in a recent YouTube video (in Romanian, with English subtitles). The focus of Vlad claims in Sighișoara is the 'Vlad Dracul House' or Casa Vlad Dracul, supposedly where Vlad II was living when the future Dracula was born. The evidence given is the presence of the coin mint that Vlad II was known to have operated at the time, the supposed age of the house itself, and a fresco depicting a mustachioed chap identified as Vlad II. Gheorghe cites four reasons why this cannot be the Drăculești family residence, three of which I had figured out and can add a little to, and one of which I had missed. I'll summarise these arguments below, but please do watch Dr Gheorghe's video, as it covers other sites such as Poenari Castle (which I will one day write about) and the infamous Bran:

  1. Gheorghe states that the current building dates from the end of the 17th century, since any earlier structure would have been destroyed in the great fire of 1676. From my own research I can add that this is supported by writer Dieter Schlesak and historian Michael Kroner. I should also mention that the cellar is claimed to be significantly older – 14th–15th century based on the extant architecture, with occupation even further back than this based upon unpublished pottery finds by archaeologist Gheorghe Baltag (of whom more in a moment). Some sources claim that the facade of the building is circa 1500, but this is academic since that's still significantly post-Vlad II. The building as it stands cannot be Vlad's house. At best we can say that the cellar could in theory be that of the purported Drăculești residence. Unfortunately we're not done yet…
  2. He also points out that if this was Vlad's house, the mint could not have been co-located there because of the constant loud noise. However, we don't even need to speculate on the tolerance or deafness of Vlad, his family or his neighbours, because if there is any archaeological evidence for a mint at this site it has never been published (the excavator being, again, Baltag), as this report laments.
  3. Gheorghe confirms that the figures in the fresco supposedly including Vlad II are wearing high status clothing from the 18th century. Others say 17th century, including career Dracula grifters McNally and Florescu, who spuriously claim that it must be a copy of an earlier original. Whether a copy or an entirely later work, there's no reason to think that a high status resident of Sighișoara (almost certainly a Saxon) would be celebrating this controversial Wallachian figure. Quite apart from all of this, and despite M&F's insistence that the resemblance of the fresco is "uncanny", it's pretty naive and cartoonish. There is in any case no known depiction of Vlad II with which to compare it. Almond-shaped eyes and olive skin with a big moustache aren't enough, I'm afraid.
  4. Finally, something I missed but shouldn’t have since it was mentioned in an article I found – Gheorghe informs us that foreigners were not even permitted to live within the city walls and that there is no historical evidence for any exception made for Vlad II.

So where did this claim originate? Gheorghe references an unnamed Romanian historian who wanted to help create a more Romanian (i.e. Wallachian) past for Sighișoara by placing a Romanian noble (specifically the new national hero that was Vlad III) in the city at this early date. There are in fact two Romanian historians responsible for this myth. One is Gheorghe Baltag, who is criticised pretty hard in this article. Baltag's original claim was published in Magazinul Istoric in 1977, specifically Vol. 11 No. 1 (issue 127, which is available behind a paywall here). In this he cites the fresco, the age of the building, and "local tradition" as his evidence, and does not even mention the mint. However, as the Romanian magazine 'Historia' revealed earlier this year, this "local tradition" dates back only as far as 1945, when a medical doctor by the name of I. Culcer, who knew of the original discovery of the fresco circa 1900, first made the connection to Vlad II (I have not been able to identify where this was published or exactly what was claimed). Both men sought to better tie the Saxon city of Sighișoara into the modern nation of Romania, but did so on the flimsiest of evidence. Romanians have a fascinating and important history that does not need or deserve this kind of spurious approach.

To be fair to Baltag, he retracted his own claim in another issue of Magazinul Istoric (Vol. 40, Issue 2, February 2006, pp. 13-16), in which he admits that "…today the hypothesis is rejected by most serious historians and can no longer be taken into account in any case." He helpfully explains that the house, known prior to its spurious modern name as the Casa Paulinus, was named after Mayor Johannes Paulinus, under whose ownership it was documented as having burned in the 1676 fire, and that it was rebuilt in its present form by his descendants. There is no mention of the survival of an older facade or even cellar (not that it would matter, as noted above). Baltag doesn't even mention the house in the chapter ('Sighişoara – istorie şi arhitectură') that he wrote for the 2009 book 'Turism Cultural' (edited by Teodorescu, pp. 18-20). Neither source references a mint at the site (he would know – he did the excavations in 1979) and in both publications he states that the house post-dates Vlad II.

So the primary proponent of the theory completely abandoned it, albeit leaving open the possibility that Vlad II could have lived within the walls, and stating that his mint must have been there (which does make sense if sited away from major dwellings and/or below ground). I'm still not sure who first claimed that Vlad II's mint was located at Casa Paulinus. Cazacu claims that the house was "known" for having the mint on site prior to the association with Vlad (2017, 'Dracula', p.2), but I can find no evidence for this, and surely Baltag of all people would have mentioned it.

Thus the association with the Drăculești begins with the fresco that almost certainly does not depict Vlad II in a house that is too new to have housed him. It never had a mint – the claim that it did follows from the flawed identification of Vlad II and is not based upon any archaeological or historical evidence. Dracul and family, whilst they probably did live in the city, could not have lived within the walls of the Old Town. That is definitively not the house where Vlad Dracula was born or where he or his father lived, despite ongoing efforts from the tourism industry, ignorant westerners, and even some Romanian academics.

The Râul Doamnei was NOT named after Dracula’s bride

“I can see my house from heeeeeeeeeere!”

Having been fortunate enough recently to catch the 30th anniversary re-release of 'Bram Stoker's Dracula' (one of my favourite movies), I thought it time to dust off the story of Vlad Dracula's bride, who supposedly threw herself to her death from Poenari Castle into the River Argeș below. The story has appeared (since at least 2006) on the Wikipedia page for the tributary itself, albeit without any kind of cite (sigh). It is quite key to the movie's plot, and 'Prince Vlad' explains to the reincarnation of his lost love that the river was thus renamed: 'in his mother's tongue it is called Arges; River Princess'. This bit is somewhat correct. This tributary of the river Arges is today called Râul Doamnei or Rîul Doamnei, in English the "River Lady" or the "Lady's River". Not that it necessarily invalidates the claim, but 'Princess' is a questionable translation. There are several Romanian words for 'princess'; none of them is 'Doamnei', which, derived from the Latin 'domina', means 'Lady', as in a mistress of a household or a gentlewoman. Thus the 'Lady' in question could have been a member of the nobility or ruling family, but is not necessarily royalty (even the usual English translation of Voivode as "prince" is questionable as far as I can tell, being closer to Lord or perhaps Baron – the Romanian equivalent of the Slavonic 'voivode' is 'domn', which is indeed "lord").

The larger problem is that this real-life piece of folklore is not actually based upon any wife of Voivode Vlad III Țepeș, aka Vlad Dracula. It is yet another bit of BS history created by Dracula researchers Raymond McNally and (the late) Radu Florescu. You can read about a fair number of their questionable claims in Elizabeth Miller's appropriately named book 'Dracula: Sense and Nonsense', and Anthony Hogg has covered the extremely dubious yet widespread myth that Dracula was accused of dipping his bread in human blood here (the accusation was actually that he washed his hands in blood). Perhaps the most frustrating aspect of their work, given their respectable academic backgrounds, is their total lack of citations, making it difficult to disentangle fact from elaboration, error, and perhaps deliberate misinformation. They expand snippets of history and indeed legend into whole paragraphs and pages, presented as complete historical accounts. Other than to entertain, sell books and boost the Romanian tourist industry, their main goal seems to have been to blur the very clear line between the historical Voivode Țepeș aka 'Dracula' and the fictional Count of the same name. Miller covers this very well in her book, and one of my first blog posts back in 2007 goes over it as well, but to be clear, Stoker was not inspired by the historical Vlad Dracula in his creation of the fictional Count, and the links between the two are tenuous at best. The fictional Dracula is a superficial and historically inaccurate conflation of Vlad III and his father Vlad II, based upon a single source that Stoker found whilst writing the book.

McNally and Florescu first published their version of the River Princess story in 1973's 'Dracula: A Biography of Vlad the Impaler, 1431-1476' (p.106). In 1991, when production on the movie was in progress, an almost identical version was printed in the follow-up book, 'Dracula: Prince of Many Faces'; it follows below:

“During that night, one of Dracula’s relatives who had been enslaved by the Turks years before, mindful of his family allegiance, decided to forewarn the Wallachian prince of the great danger he was incurring by remaining in the fortress. Undetected, during the pitch-dark, moonless night, the former Romanian, who was a member of the janissary corps, climbed to the top of Poenari Hill, a short distance from Dracula’s castle, and then, armed with a bow and arrow, took careful aim at one of the dimly lit openings in the main castle tower, which he knew contained Dracula’s quarters. At the end of the arrow he had pinned a message advising Dracula to escape while there was still time. The Romanian-born Muslim witnessed the accuracy of his aim: the candle was suddenly extinguished by the arrow. Within a minute it was relit by Dracula’s Transylvanian concubine; she could be seen reading the message by the flickering light. What followed could have been recalled only by Dracula’s intimate advisers within the castle, who presumably witnessed the scene. Peasant imagination, however, reconstructed the story in the following manner. Dracula’s mistress apprised her husband of the ominous content of the message. She told him that she would “rather have her body rot and be eaten by the fish of the Argeş than be led into captivity by the Turks.” She then hurled herself from the upper battlements, her body falling down the precipice below into the river, which became her tomb. A fact that tends to corroborate this story is that to this day the river at that point is known as Rîul Doamnei, or the “Princess’s River.” Apart from a brief notice in the Russian narrative, this tragic folkloric footnote is practically the only reference anywhere to Dracula’s so-called wife, who is permanently enshrined only in local memories.”

This text reappeared in "The Complete Dracula" a year later, the same year that the Coppola movie was released, forever cementing this connection as historically accurate in the popular mind. As noted already though, this is simply not correct, even as folklore (which of course need not reflect historical events). There are multiple Romanian legends around the Râul Doamnei, collected by C. Rădulescu-Codin in 'Literatură, tradiții și obiceiuri din Corbii-Mușcelului' (1929, available and Google-Translatable here). Not one of these involves Vlad Dracula or, for that matter, his castle/citadel of Poenari (Cetatea Poenari or Cetatea Țepeș-Voda). The first from the 1929 book is about a Lady who washes her clothes in the river (hence the name); the second involves the wife of Negru-Voda (the 'Black Voivode'), a possibly mythical early prince of Wallachia. Negru-Voda is in fact traditionally associated with the nearby Poenari Castle (as its builder), but this is coincidental folklore – the castle doesn't feature in this case, largely because the tale relates to the royal couple travelling through the area rather than being resident at Poenari. There's another reason, though: Poenari is far too distant from the river in question to be jumped into. If you had jumped out of any of its windows, you would just have landed on the rocky slopes below, a good 200 metres from the river. Nor does this one have anything to do with either Turks or the Lady's suicide; she doesn't even die in this version. This version appears in actual Romanian language literature as late as 1990 ('Revista de Psihologie', Volumes 36-37, Academia Republicii Populare Romîne), whereas I can find nothing in Romanian sources for any version involving Vlad.

The third of the actual traditional folk tales does feature the suicide, but here it is the Voivode that the Lady (or Princess if you like) is fleeing, following a disagreement over a church that she founds whilst he is away fighting the Turk and that he destroys with cannon. This one was clearly dreamed up to explain the ruined state of the nearby Sân Nicoara church, as well as the name of the river. It would also have changed the major subplot of the 1992 movie rather drastically if Dracula had been attempting to lure Mina into the same domestic abuse situation as her former incarnation!

The last two versions of the myth, under the title 'Piatra Doamnei', are conceptually much closer to the Florescu/McNally version in that the titular Lady is fleeing Turkish rapists and accidentally kills herself in the river. This is also the oldest one that I (at least) could find in print online, dating back to at least 1909 in 'Jocuri de copii' by Tudor Pamfile (p. 59). This and the others are no doubt much older as oral folk tales. The story's actual origins, like those of most placenames in the world, will be lost to history, and even the legitimate folktale/song version was probably concocted to explain the existing name. One version names her as 'Doamna Carjoaia'. The other, simpler version goes as follows:

"In vremea de demult, ci-ca, alergau Tatarii prin partile acestea dupa o Domnita romanca, tare frumoasa, voind s'o pangareasca. Da cand a ajuns Domnita la raul asta, apa era mare de tot si ea, neputand s'o treaca, a inceput sa fuga in sus pe malul sting al raului. Tot a fugit pana a ajuns in Corbi. Da, aci, era aproape s'o prinda Tatarii. Ce sa faca? A vazut o piatra mare si a dat fuga de s'a ascuns dupa piatra. Totus, Tatarii au gasit-o si aci. Biata Domnita atunci, a inceput sa tremure de groaza si s'a incercat sa treaca raul, cum o putea, doar-doar o scapa cu vieata. N'a avut noroc insa, ca au luat-o valurile si s'a innecat in rau biata Domnita. De atunci, raului in care s'a innecat, i-se zice Raul Doamnei, iar pietrei, Piatra Doamnei."

"A long time ago in these parts, it is said, the Tatars* were chasing a very beautiful Romanian princess, wanting to defile her. When the princess reached this river the water was very high and she, unable to cross it, began to run upstream along the left bank, and ran until she reached Corbi. But here the Tatars almost caught her. What was she to do? She saw a large stone and ran to hide behind it. However, the Tatars found her even there. The poor princess then began to tremble with terror and tried to cross the river as best she could, in the hope of escaping with her life. She had no luck, however, for the waves took her and the poor princess drowned in the river. Ever since, the river in which she drowned has been called Raul Doamnei, and the stone, Piatra Doamnei."

*Most likely actual Turkish (Ottoman) invaders rather than the Turkic Tatars settled in Wallachia.

There could be yet another version where Vlad's princess commits suicide due to Turkish misinformation, but unlike these other versions I can find no evidence of that. A final clincher is that various versions of the legend are found all over Romania and Moldova, pertaining to lots of different rivers and different female protagonists. It's a folklore 'motif', the real-world predecessor of the internet meme. These stories spread like viruses amongst and between populations, who modify them to fit their locality, preferences, and prejudices, with a recognisable kernel (the tragic death of a woman) remaining the same. In that respect this one isn't much different from the western European "grey lady" ghost stories, and has nothing whatever to do with our old friend Vlad III.

As the Corpus Draculianum team has pointed out, Vlad III, although famous at the time and known to European and Ottoman nobility, was really not a big figure in folklore and popular awareness until after the founding of the modern Romanian state in 1859. This is supported by the total lack of any folklore involving Vlad III in Tony Brill's 1940s collection of Romanian popular legends ('Tipologia legendei populare româneşti. Vol. 2: Legenda mitologică, legenda religioasă, legenda istorică', ed. by Ioan Oprişan, 2006). The 'Princess River' motif does appear, but again without any reference to Vlad III or even Poenari Castle. In the 20th century a number of scholars, not just McNally and Florescu, then retrospectively created histories that gave his memory far more period significance and continuity than it originally had. Which is not to say that he doesn't deserve his pivotal place in the history of the region, just that circumstances denied him that local legacy until it was, effectively, rediscovered and mythologised.

On the one hand it's annoying that we in the west had to create 'Vampire Vlad' through fiction and misrepresented history, but on the other, I wouldn't have got to enjoy the 1992 movie if we hadn't. At least that gave the real Vlad III some nuance back, since prior western awareness was filtered through near-period exaggeration and lies about his cruel nature, notably via the Transylvanian Saxons and their Germanic cousins in the west. The real Vlad was neither a vampire nor an exceptionally murderous tyrant; he was very human and, like many medieval rulers, used terror and violence to try to secure order within and without his realm. As to the "why" of all this, I don't know why McNally and Florescu made this claim or what sources they may have used that they felt supported it, but including it certainly did help their overarching efforts to make the two Draculas appear to be one and the same. At this point they have "won", at least in the West, much to the disgust of many present-day Romanians who had already reclaimed Vlad Dracula as a national hero, and for whom the vampire association is a perplexing reflection of the old Saxon slurs about him.

Kelvedon Hatch nuclear bunker

A wonderful photo of boffins at work in the (level 1) Ops Room at Kelvedon Hatch circa 1962
THE UNITED KINGDOM DURING THE COLD WAR, 1945-1991 (D 106284) United Kingdom Warning and Monitoring Organisation. Metropolitan Sector Operations Centre. Operations Room – Scientists at Work. Copyright: © IWM. Original Source: http://www.iwm.org.uk/collections/item/object/205220516

For those still following this page, my apologies for another lengthy drought. I’ve just been too busy unfortunately. However, I have been working on a few things, the first of which follows…

This spring I finally got around to visiting one of the best preserved Cold War nuclear bunkers in the UK – Kelvedon Hatch in Essex. First, given that I'm about to comment on matters historical, let me say that I absolutely loved this place. We owe the owner and manager, Mr Parrish, a massive debt for rescuing it for the nation rather than leaving it to be vandalised or destroyed entirely. That said, it is not without its issues from an historical standpoint. Parrish claimed in 1996 that "Everything is original — except the John Major figure…It is exactly as the Government left it". The Facebook page today likewise proclaims "Everything is as it was left by the Government, when the bunker was decommissioned in the early 90's." This is not the case.

Praise for the bunker is (rightly) almost universal, but at the same time it has attracted very little scholarly attention. I did find criticism in David Lowe and Tony Joel's 'Remembering the Cold War: Global Contest and National Stories' (2014, p.59), where they remark that Kelvedon Hatch's "…testimony to the Cold War is somewhat compromised by its private ownership. The organization and upkeep of displays is very tired and occasionally misplaced (a dummy of former prime minister Margaret Thatcher, for instance, sits next to communications equipment dating from the 1960s), and the bunker jostles with youth-focused outside activities…". Digging out Imperial War Museum and Historic England photos from 1992 and 1997 respectively shows that, sadly, it isn't just the interpretation that could be described as "tired". The general condition of the place has gone from absolutely pristine to, shall we say, looking its age. Flooring and painted surfaces are worn (but not peeling), plant machinery is looking rough (albeit not visibly corroded), and there are worrying cracks in a couple of walls. I feel terrible pointing this out, because if you're going to privately run an underground three-storey office block formerly maintained at great taxpayer expense, maintenance is an enormous and inevitable problem. I certainly have no issue with the adjacent outdoor activities – how else are they going to fund the place? Mr Parrish's recorded audioguide tour, whilst engaging, informative, and funny, doesn't give the full story, but then how could it? Even Lowe and Joel blame the private status of the site rather than the owner himself. However, the whole place is in a sort of three-way limbo between an attempt at reconstructing actual wartime occupation of the final RGHQ phase, attempts to evoke its earlier days, and a sort of ad hoc Cold War history museum. That's great for most visitors, but some of us want more, so I decided to try to disentangle this confusion using the available information, plans, photos, and film footage of the site's different eras.

Contrary to just about all of the information out there, there were actually three operational phases as follows:

Phase 1 – ROTOR bunker

1951 – 1953: Construction 

1953 – 1957: RAF ROTOR (R4 type) Metropolitan Sector Operational Control (MSOC) 

1957 – 1962: United Kingdom Warning & Monitoring Organisation (UKWMO)/Royal Observer Corps (ROC) Metropolitan Sector HQ co-located with the ‘rump’ RAF SOC following closure of the ROTOR programme.

Plans: via Historic England

Variant plans via the RAF Barnton Quarry restoration project (you can right-click and open image in a new tab to view a larger version)

Film: Kelvedon Hatch features prominently in the 1962 film 'The Hole in the Ground' (note that this copy misses out a short introduction set outside the bungalow). By this time the RAF had handed over operations to the UKWMO, but the fabric of the building had yet to change. In the opening scene we see UKWMO team members running into the above-ground guardhouse, then proceeding down the long access tunnel and into the main bunker, the blast doors slamming shut behind them. Visible in the background is some sort of equipment stowage or coat rack (!) located where the Home Office Radio Room would later be established. We then see the Chief Sector Warning Officer and his team of scientists emerge from the doorway at top left on the above-linked plan and immediately turn right, walking under the now-defunct tote board (its red-painted support posts and frame are visible). At this point we get a great view of almost the whole Ops Room. At the opposite side of the room the bottom of the now disused RAF Sector Ops glazed-in 'cabins' are visible (these appear more clearly later on as well). They then walk behind desks manned by (as explained in the film) Post Office telephonists who have volunteered under UKWMO. The team then turns right again, disappearing behind a large black pinboard (with two large maps on this side of it) that effectively bisects the room into admin/comms and scientific analysis. We pick up with the scientific team later as they peruse maps and charts. We get a reverse shot later on that shows the tote support structure again, in front of the group, complete with a colour-coded 'sector' type clock (as used in RAF ops rooms in the Second World War). Toward the end of the film we see the bottom right corner of the room with red double doors (these were visible in the distance in the establishing shot of the room). Pleasingly, these original 1950s doors are still in situ today (along with a lot of others!), repainted light green and with an additional 1985-vintage inner door in front of them.

The whole setup is remarkably ad hoc – simple black cloth-covered pin-boards, ordinary tables with switchboard-style phones, individual message trays and pigeonholes made of unpainted wood. 

Photos: There are two known images, one of which is shown in photocopied form on the tour – the middle two rows of cabins with the top row just in shot – and the two plotting tables on Level 1 below the cabins. I was also very pleased to discover (I believe for the first time) IWM photo D 106284 showing a civilian UKWMO scientist plotting nuclear bursts on a map using a radiac slide rule. If anyone recognises the communications kit to the right of his drawing board, comment below. This shot is a perfect match for the scenes in the film.

Description: Kelvedon Hatch differed from all other R4 bunkers in having a tunnel that emerged into Level 1 rather than Level 3. Note that the cage opposite the main blast doors, today filled with random weapons as though an armoury, actually housed the 1950s electrical transformer for the site (plans are labelled as such and photos of other ROTOR bunkers still show the plant in place). Much is made of the ‘disguised’ above-ground bungalow, but this was a real, functioning military-style guardroom like any other, with toilets, offices, and an armoury (there’s a plan here that also appears in McCamley’s book). The armoury later became a decontamination room (this room’s door, behind the outer blast door, is still so labelled). All ROTOR bunkers throughout their various phases of use had a perimeter chain-link fence patrolled by armed guards and the actual radar stations were effectively military barracks with massive rotating radar dishes. The above-ground structures may have been intended to be low-profile, and certainly were at Kelvedon Hatch more so than elsewhere (since KH never had radar arrays and had the advantage of some tree cover) but were certainly not disguised. 

As a command centre for a short-lived RAF radar network, the site was focused around a central Operations Room ‘well’ three floors deep, with plotting boards at the bottom, a tall ‘tote’ mission control board at the front, and glazed, angled control ‘cabins’ wrapped around the back. The central room on Level 1 housed the two large plotting tables and the support posts for the tote. The remainder of the floor comprised two large rooms – ‘Apparatus’ at left and the main plant room at right. The two plant rooms remained much the same throughout all three phases. 

Moving up to Level 2 we again find the glazed ops 'well' in the middle, surrounded by a corridor with office spaces either side and, beyond this, a maze of partition walls defining the toilet blocks (the women's toilet being larger than the men's) and a number of offices/rooms of varying size. By far the largest is an open plan space at top left. Neither the plans nor any other source reveal what the purpose of any of these might have been, unfortunately. We have a bit more information on the top floor (Level 3), which again has the ops well but without the corridor around it. Instead there is a ring of self-contained offices. At left we have two large unidentified rooms with a thin partition wall and, on the other side of a more substantial wall running from the stairwell to the bottom wall, a row of squarish offices with a corridor running past them. At right we have some labels, denoting the Women's Royal Air Force (WRAF) rest room.

Phase 2 – Sub-Regional Headquarters/Sub-Regional Control

1963 – 1966: SRHQ for Region 4, ‘East’

1966 – 1985: S-RC 4.2 (Region 4, Control 2)

NB UKWMO/ROC Sector HQ retained until 1971 only

Plans: displayed (until they eventually fall apart) in the access tunnel, via Alamy stock images. Undated but believed to be ca.1965. Sadly I didn’t take my own photo so a lot of the room numbers and labels are not legible on the image we have. 

Photos/Film: None; however, see 'The Hole in the Ground' above – the room may have changed a lot ca.1965, but the operations carried out, the kit, and the personnel involved would have been much the same.

Description

This was the first phase intended for 'continuity of government' in the event of a Third World War. The most obvious change was that the ROTOR Operations 'well' was floored over. Rooms were also reconfigured throughout in keeping with the bunker's new role. The men's lavatory was expanded and a corridor created along the back of the toilets with partitioned offices along it (those labelled with a purpose are 'Tape Room' and 'PBX', a type of telephone exchange). New rooms were built on the left side of the floor for the telephone exchange, and offices on Level 2 were knocked through to create a large Conference Room. Other changes were more significant. Notably, sleeping accommodation was installed; one dedicated 20-bunk dormitory on Level 3 and another 20 or so bunk beds in other areas, including a full row of beds along the access tunnel (so accommodation for around 80 people). Next door to the dorm was an equivalently sized room labelled 'DEPTS'; the first dedicated working space for representatives of different government departments. The former RAF and WRAF rest rooms were converted into a single large unisex 'Canteen Rest Room' with an adjoining kitchen capable of providing hot meals. The centre of Level 1 (rooms 101 and 103) remained in use as 'Sector' (presumably an operations room), but around a third of the room (101) was walled off as a sleeping area with bunks along the back wall. Another four small rooms on Level 2 were also designated 'Sector', with other rooms allocated to 'Military' and 'Fire' (one large room), Scientists, and Civil Defence Operations. The BBC studio was installed in its current location (albeit in a different configuration) next to the GPO (General Post Office) 'frame room'. The plant rooms (the main room being 102) remained unchanged. All told, Level 1 was already close to its Phase 3 incarnation in terms of usage and layout, if not in detail, but Levels 2 and 3 remained quite different.

Phase 3 – RGHQ

1985 – 1992: Regional Government Headquarters, Metropolitan Region (RGHQ 5.1)

Plans: Only Level 1 has been reproduced online. To see this final layout (albeit in Phase 4 ‘trim’) you can also check out various tours on YouTube (including this short official one), and the complete plans were published in Judy Cowan’s 1994 pamphlet ‘Kelvedon Hatch Secret Bunker’. Better yet, visit the bunker yourself if you can; this article is primarily intended for people like me who visited but didn’t get a full picture of the site.

Photos: A whole series of Imperial War Museum record shots taken on decommissioning in 1992. These show how very sparse the place was, contrary to modern claims that the site is as the government left it. All we see are tables, chairs and telephones. 

Description

Internal walls were again rebuilt, this time in handily identifiable blockwork construction. Basically, if you see breeze-blocks, you're looking at a 1980s alteration. The entrance to the access tunnel was redesigned to incorporate a new generator room into the near end of the tunnel (the exhaust stacks for the diesel engines are still visible to the left of the bungalow and have changed in design since the 1962 film). The complete row of bunks was replaced by a few fold-down bed frames attached to the wall (presumably for a guardroom 'watch', since there was now space for everyone on Level 3). The area at the far end of the access tunnel was enlarged and fitted with sliding blast doors on tracks to create a 'Home Office Radio Room'. New generator cabinets and a siren point were installed just inside the main blast doors (it's not clear whether the transformer outside them remained in place). The UKWMO Sector HQ relocated to the newly expanded Group HQ building at Horsham, and 'Sector' became office/operational space for "Uniformed Services" (outfitted with tables, chairs, lockers and desk phones), but the Communications Centre or COMCEN (also on Level 1) remained part of the wider Emergency Communications Network (ECN) with access to UKWMO/ROC data. The entrance used by the team in the 1962 film and the door opposite it were walled off to create a corridor bypassing the new, smaller main room (visitors now enter the room in the middle through a door marked 'no entry') and a small admin room (now one of several small cinemas for visitors). The science team were moved from Level 2 down to Level 1, next to a much smaller BBC studio. On Level 2 the formerly closed-in offices across the middle third of the floor were knocked through to create one large central open-plan office. Note that the various painted wooden signs around the walls in this room are not original, as shown by the 1992 photos of the space and, in the 1997 shots, their initial suspension from the walls on string loops. Later they were (regrettably) screwed into the walls. They seem too specific in terms of content and style to be made up, but if they came from another site I don't know which one or what era.

Closed offices remained around the perimeter of the floor but were also reconfigured. Where there had been only one office/bedroom for a government official, a new corridor (down which the modern tour proceeds) accommodated three such spaces; 203 for the Regional Commissioner, 204 for the Principal Officer, and 205 for the Prime Minister (although it's far from clear that the PM would ever have used this room). The row of offices that visitors see when they emerge from said corridor is wholly new for this phase – their predecessors having been ripped out. The first office at left is the 'Secretariat' (206), with a small room within this (207) housing a typing pool (previously located on Level 3). The adjoining rooms (208 and 209) were a truncated version of the Conference Room and, in the corner (now an event room), the Information Room.

Up on Level 3 the existing dormitory space was doubled, taking up the former government department space (there now being much more room for the departments on Level 2). This was divided by sex: men in the right-hand room (302 – now an off-limits meetings/event space), with a small room next door for 'Drivers' (likely now a store for the gift shop), and women in the two rooms next door (301 and 308). Another two (also adjoining) male dorm rooms (309 and 311) were established on the other side of the toilets and Sick Bay (310 – seen here before stripping out and dummying-up). This area is partly correct today, but 309 is, with some artistic licence, dressed as an emergency operating theatre. In reality this room would have been fitted out with bunks and lockers in anticipation of use. We know this because we have a photo taken from 309 looking into 311. As cramped as the recreated dorms in the bunker are today, they are nothing compared to the reality, and the beds and lockers there today are not the same as they originally were.

Phase 4 – Visitor Attraction

1994: Sold back to the landowning family.

1995 – Present: Opened to the public.

Plans: represented by the fire evacuation map located in the bunker. Identical to Phase 3 but with two additions. The first is a new metal staircase in the upper right corner of Level 2 allowing access from the open plan office straight up to the room outside the canteen on Level 3 (marked 'Common Room' on the Phase 3 plan); although not included on the 1985 plan, the staircase is clearly original to the RGHQ phase. The other change is the exit tunnel sadly bored through the wall of the Common Room to comply with fire regulations.

Photos: Another series from Historic England who documented the site as a new visitor attraction in 1997, by which time a lot of the present embellishments had been made but without the additional clutter and the ravages of time that we see today.

Description: As part of the decommissioning process, all of the original furniture and communications equipment (other than some of the telephone exchange) was removed. The original 1950s transformer room was also stripped out, but the rest of the plant remained. By 1997 the bunker was increasingly ‘dressed’ with surplus Cold War-era furniture, equipment, artefacts (most of the phones are marked up with the station crest of RAF St Athan) and some basic museum-style diorama displays ranging from individual dummies in wigs to an attempt at a ‘Threads’-style post-apocalypse household. A recent addition is a large-scale Spitfire model that has for some reason been suspended over the plotting table in the former Operations Room. 

Note that some of the room door labels (which slide into universal holders affixed to the doors – the actual room numbers are permanent) seem to have been moved over the years. The label for room 202, 'Government Departments', is currently fitted to the door for room 201, which per the plans should be 'Common Services' (and is loosely interpreted as such today, with racks of stationery). Male and female dorm rooms 302 and 301 have had their labels swapped for some reason.

Conclusion

What you see at Kelvedon Hatch bunker today is therefore mostly a very…busy take on the final operational (RGHQ) phase. I will say again: this is an incredible place, it just needs some analysis to make full sense of. The current attraction conveys the general sense of what all three phases were about; it's just not clear how these fit within the chronology and the fabric of the building itself. If it were up to me (clearly it isn't) I would thin out the accumulated clutter and remove all of the shop-dummy diorama displays. Remember – none of the furniture or props there now is original to the site. I'd depict Phase 3 throughout, and choose one room to clearly demarcate and curate as a museum interpreting the first two phases, with an introductory display on Civil Defence.

Bibliography

The bunker is mentioned in a number of published works and websites, nearly all of which are superficial in their treatment of the site or outright wrong. I recommend:

Clarke, Bob. 2005. ‘Four Minute Warning: Britain’s Cold War’. The History Press.

Cowan, Judy. 1994. ‘The Kelvedon Hatch Secret Bunker’. 

McCamley, Nick. 2002. 'Cold War Secret Nuclear Bunkers: The Passive Defence of the Western World During the Cold War'. Pen & Sword.

Stop Medicalising Vampirism!

Just a quick comment on an article that appeared on the usually excellent Atlas Obscura a little while back. It starts out OK, but fairly quickly we hit an error. The first image is of the alleged home not of Vlad III "Dracula" but of his father Vlad II "Dracul". We could simply read between the lines here, since Vlad III is further alleged to have been born in that house (both claims are shaky, in fact, as I will eventually get around to explaining). However, the caption states that the real-life Dracula "was born in Romania in the 14th century". That's a century out, not to mention that Vlad's contribution to the Stoker novel was actually very limited, amounting to a brief fictionalised biography that also confuses Vlad II and Vlad III, and a Victorian equivalent of a copy/paste of "Dracula" and "Transylvania" over the original draft's "Count Wampyr" and "Styria". The author of this article ought to know this, and I wonder if this is an editorial cockup inherited from the original 'The Conversation' article (on a related note, why do people keep buying articles from that site?).

Then the article goes badly wrong in the main thrust of its argument, which is a rehash of several post-hoc medical/scientific explanations for vampirism that have been debunked numerous times:

“…two in particular show solid links. One is rabies, whose name comes from a Latin term for “madness.” It’s one of the oldest recognized diseases on the planet, transmissible from animals to humans, and primarily spread through biting—an obvious reference to a classic vampire trait.”

The massive problem with this explanation is that the vampires we’re talking about here are strigoi mort – animated corpses that the villagers identified as such, to the point of often digging up the suspect and trying to (re)kill them (and yes, I’m familiar with the strigoi vii, which were not thought to suck blood and were directly analogous to the western [living] witch). This is classic post-hoc BS history: X disease resembles our modern impression of what Y folklore concept might have been, therefore X caused Y. In fact there’s zero evidence for this, and at best it’s unfalsifiable speculation. Based upon one article in a neurology (not a history or folklore) journal, the author also concludes that the rabies sufferer’s fear of water must be related to folklore tales of vampires being unable to cross running water (nope, that was witches again), to disturbed sleep patterns (yet again, the vampires we’re all talking about here are animated corpses, not insomniacs), and to increased aggression (I suppose any amount of aggression from a corpse qualifies as “increased”). Even the original rabies article from 1998 says that this explanation is just one possible cause of the vampire myth. You don’t have to be a folklore buff to realise that disease symptoms in the living cannot explain them in the dead.

The second alleged vampire disease cited in the Conversation/Atlas Obscura article is pellagra, which is even less convincing, since the author himself admits that it (and this is the second of his two top candidates for the origin of the vampire myth, remember):

“…did not exist in Eastern Europe until the 18th century, centuries after vampire beliefs had originally emerged.”

As Doctor Evil would say, “riiiiiiiiight…”. So how is there in *any way* a causal link between the two? Nor is there any tradition of the classical blood-drinking vampire in the Americas, where pellagra really was rife; only its tuberculosis-causing cousin. No, sorry, these, and in fact all, disease explanations for vampirism have been, remain, and always will be terrible. Just stop. Now, to redeem Atlas Obscura, here’s a much, much better article of theirs that completely agrees with me, and makes the excellent point that these lurid claims are not victimless, since real living people have to suffer with diseases like porphyria.

‘Stinking Rich’?

I’ve just watched a fascinating lecture from funerary and art historian Dr. Julian Litten on burial vaults. I learned a lot and greatly enjoyed it, but was very surprised to hear him recite the old chestnut that the smell of decaying bodies under church floors led to the expression ‘stinking rich’. This is just not true, as phrases.org.uk relates:

The real origin of stinking rich, which is a 20th-century phrase, is much more prosaic. ‘Stinking’ is merely an intensifier, like the ‘drop-dead’ of drop-dead gorgeous, the ‘lead pipe’ of lead pipe cinch or, more pertinent in this case, the ‘stark-raving’ of stark-raving mad. It has been called upon as an intensifier in other expressions, for example, ‘stinking drunk’ and ‘we don’t need no stinking badges’

The phrase’s real derivation lies quite a distance from Victorian England in geography as well as in date. The earliest use of it that I can find in print is in the Montana newspaper The Independent, November 1925:

He had seen her beside the paddock. “American.” Mrs Murgatroyd had said. “From New England – stinking rich”.

However, I thought I’d check, and I did find an earlier cite, from ‘V.C.: A Chronicle of Castle Barfield and of the Crimea’, by David Christie Murray (1904, p. 92):

“I’m stinking rich – you know – disgraceful rich.”

Nothing earlier than that, however. So I would add to the explanation at phrases.org.uk and say that it’s more an expression of disgust; someone is so rich that it’s obscene and figuratively ‘stinks’. If we had any early 19th-century or older cites, I’d grant that it could have been influenced in some way by intramural burial, but this was rare by the turn of the 20th century, and lead coffins had been a legal requirement since 1849. Litten suggests that unscrupulous cabinetmakers might omit the lead coffin, leading to ‘effluvia’, but even then I can’t imagine that this was common, as it would be obvious when it had happened and whose interment was likely to have caused it, resulting in complaints and most likely reburial.

Litten also repeated a version of the myth of Enon Chapel – a story I’ve been working on, and on which an article is forthcoming – but added a claim that I have yet to come across: that the decomposition gases from the crypt below were so thick that they made the gas lighting in the chapel above ‘burn brighter’. I don’t know where this comes from, and it hardly seems plausible. Dr Waller Lewis, the UK’s first Chief Medical Officer, wrote on the subject in an 1851 article in The Lancet entitled ‘On the Chemical and General Effects of the Practice of Interment in Vaults and Catacombs’. Lewis stated that: “I have never met with any person who has actually seen coffin-gas inflame” and reported that experiments had been carried out and “in every instance it extinguished the flame”. This makes sense, since it was not decomposition gases per se (and certainly not ‘miasma’, as was often claimed at the time) that made workers light-headed or pass out in vaults – it was the lack of oxygen and the high concentration of CO2. Hence the reports of candles going out rather than burning brighter.

Unfortunately, even the best of us are not immune to a little BS history. It was nonetheless a privilege to hear Dr. Litten speak.

Werewolves = Serial Killers?

Beast of Gévaudan (1764). Not to Scale (Wikimedia Commons)

When I last wrote on the Beast of Gévaudan, I said that I couldn’t rule out the involvement of one or more human murderers whose actions could have been conflated with those of the several wolves (and possibly other wild animals) killing French peasants between 1764 and 1767. I meant that literally; the Beast was a craze, and it’s perfectly possible that one or more of its supposed victims was in fact murdered. We have no evidence for that, of course, and certainly not for the claim, sometimes made, that the whole thing was the work of a serial killer. This claim was recently repeated in this otherwise very good video from YouTube channel ‘Storied’ (part two of two; both parts feature the excellent Kaja Franck, whom I was fortunate to meet at a conference some years ago). Meagan Navarro of the horror (fiction) website Bloody Disgusting states the following:

“The Beast of Gevaudan or the Werewolf of Dole, these were based on men that were serial killers and slaughtered, and folklore was a means of exploring and understanding those acts by transforming them into literal monsters.”

The ‘werewolf’ of Dole does indeed appear to have been a deluded individual who thought he was able to transform into a wolf, and was convicted as such. However, this is not the case for Gévaudan, which is a well-documented piece of history, not some post-hoc rationalisation for a series of murders as she implies. The various attacks that comprise the story were widely reported at the time and in some detail (albeit embellishments were added later). No one at the time suspected an ordinary person of the actual killings, and eyewitness sightings consistently refer to a large beast, sometimes detailing how the kills were made. The idea of a human somehow being in control of the Beast was mooted at the time, as was the werewolf of folklore, but never a straightforward murderer. Of course, the concept of the serial killer was unknown until the late 19th century, and it wasn’t long after this that a specious connection was made. In 1910 the French gynaecologist Dr. Paul Puech published an essay (‘La Bête du Gévaudan’), followed in 1911 by another titled ‘Qu’était la bête du Gévaudan?’. Puech’s thin evidence amounted to:

1) The victims being of the same age and gender as those of Jack the Ripper and Joseph Vacher. In fact, women and children (including boys) were not only more physically vulnerable to attack generally, but were the members of the shepherding families whose job it was to bring the sheep in at the end of the day. This is mere coincidence.

2) Decapitation and needless mutilation. The latter is pretty subjective, especially if the animal itself might be rabid (plenty were) and therefore attacking beyond the needs of hunger alone. The relevance of decapitation depends upon a) whether this really happened and b) whether a wolf or wolves would be capable of it. Some victims were found to have been decapitated, something that these claimants assert is impossible for a wolf to achieve. I can’t really speak to how plausible this is, although tearing limbs from sizable prey animals is easily done, and if more than one animal were involved I’ve little doubt that they could remove a head if they wished. So, did these decapitations actually take place? Jay Smith’s ‘Monsters of the Gévaudan: The Making of a Beast’ relays plenty of reports of heads being ripped off. However, details of these reports themselves militate against the idea of a human killer. Take Catherine Valy, whose skull was recovered some time after her death. Captain of dragoons Jean-Baptiste Duhamel noted that “judging by the teeth marks imprinted [on the skull], this animal must have terrifying jaws and a powerful bite, because this woman’s head was split in two in the way a man’s mouth might crack a nut.” Duhamel, like everyone else involved, believed that he faced a large and powerful creature (whether natural or supernatural), not a mere human. Despite the intense attention of the local and national French authorities, not to mention the population at large, no suggestion was ever made nor any evidence ever found of a human murderer, and the panic ended in 1767 after several ordinary wolves were shot.

3) Similar deaths in 1765 in the Soissonnais, which he for some reason puts down to a copycat killer rather than, you know, more wolves. This reminds me of the mindset of many true crime writers: come up with your thesis and then go cherry-picking and misrepresenting the data to fit.

At the very least then, this claim is speculative, and should not be bandied about as fact (indeed, the YouTube channel should really have queried it). So, if not a serial killer, then what? French historian Emmanuel Le Roy Ladurie argues that the Beast was a local legend blown out of proportion to a national level by the rise of print media. Jean-Marc Moriceau reports 181 wolf killings across France through the 1760s, which puts into context the circa 100 killings over three years in one region. That is statistically remarkable, but within the capability of the country’s wolf population, especially given the viral and environmental pressures of rabies and the Little Ice Age respectively that Moriceau cites. If we combine these two takes, we get close to the truth, I think. ‘The’ Beast most likely consisted of some unusually violent attacks carried out by more than one wolf, or packs of wolves, that were confabulated and exaggerated into the work of a single supernatural beast, before ultimately being pinned by the authorities on several wolves: three shot by François Antoine in 1765, and another, supposedly ‘extraordinary’ (yet actually ordinary-sized) animal shot by Jean Chastel in 1767.

Milk in First, or Last Part 2: a Tempest in a Teapot

Poster created by the amazing Geof Banyard (islandofdoctorgeof.co.uk) for a 2016 mock ‘Tea Referendum’

This is Part 2 of a very long article – see here for part 1.

Clearly the majority of modern-day advocates (including all those YouTube commenters that I mentioned last time) aren’t aspiring members of the upper-middle or upper classes, or avid followers of etiquette, so why does this schism among tea-drinkers still persist? No doubt the influence of snobs like Nancy Mitford, Evelyn Waugh et al persists, but for most it seems to boil down (ha) to personal preference. This has not calmed the debate any, however. Both sides, now mostly comprised of middle-class folk such as myself, argue with equal certainty that their way is the only right way. Is Milk In First (MIF)/Milk In Last (MIL) really now a ‘senseless meme’ (as Professor Markman Ellis believes; see Part 1) – akin to the ‘big-endians’ and ‘little-endians’ of ‘Gulliver’s Travels’? Is there some objective truth to the two positions that underpins all this passion and explains why the debate has outlasted class differences? Is there a way to reconcile or at least explain it so that we can stop this senseless quibbling? Well, no. We’re British. Quibbling and looking down on each other are two of our chief national pastimes. However, another of those pastimes is stubbornness, so let’s try anyway…

Today’s MILers protest that their method is necessary in order to judge the strength of the tea by its colour. Yet clearly opinions on this differ and, as I showed in the video, sufficiently strong blends – and any amount of experience in making tea – render this moot. If you do ‘under-milk’, you can add more to taste (although, as I also noted, you might argue that this makes MIL the more expedient method). As we’ve seen with George Orwell vs the Tea & Coffee Trade, the colour/strength argument is highly subjective. Can science help us in terms of which way around is objectively better? Perhaps, although there are no rigorous scientific studies. In the early 2000s the Royal Society of Chemistry and Loughborough University both came out in favour of MIF. The RSC press release gives the actual science:

“Pour milk into the cup FIRST, followed by the tea, aiming to achieve a colour that is rich and attractive…Add fresh chilled milk, not UHT milk which contains denatured proteins and tastes bad. Milk should be added before the tea, because denaturation (degradation) of milk proteins is liable to occur if milk encounters temperatures above 75°C. If milk is poured into hot tea, individual drops separate from the bulk of the milk and come into contact with the high temperatures of the tea for enough time for significant denaturation to occur. This is much less likely to happen if hot water is added to the milk.”
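As a rough illustration of what’s going on there (my own back-of-the-envelope figures, not the RSC’s: assume water-like heat capacities, 200 ml of 95°C tea and 30 ml of fridge-cold milk), the finished cup ends up at the same temperature whichever way round you pour, so the whole argument rests on those brief, local contact temperatures:

```python
# Back-of-the-envelope check: the *finished* cup reaches the same temperature
# whichever way round you pour; the RSC's point concerns brief, local contact
# temperatures. All figures here are my own illustrative assumptions.

TEA_ML, TEA_C = 200, 95      # hot tea, fresh from the pot
MILK_ML, MILK_C = 30, 5      # fresh chilled milk

# Treat both liquids as having water-like density and specific heat capacity,
# so the blended temperature is simply a volume-weighted average.
final_c = (TEA_ML * TEA_C + MILK_ML * MILK_C) / (TEA_ML + MILK_ML)

print(f"Blended cup: {final_c:.1f} degrees C")   # ~83.3, either pouring order
# MIL: the first drops of milk briefly meet ~95 C tea, well above the 75 C
# denaturation threshold. MIF: the milk only ever warms gradually towards ~83 C.
```

In other words, the end state is identical; it’s only milk-in-last that briefly exposes individual drops of milk to near-boiling tea.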

It also transpires that an actual international standard (ISO 3103:1980, preceded by several British Standards going back to 1975) was agreed for tea-making way back in 1980, and this too dictated that tea should be added to milk “…in order to avoid scalding the milk”. This would obviously only happen if the tea is particularly hot, and indeed the standard includes a ‘milk last’ protocol in which the tea is kept below 80 degrees Celsius. Perhaps those favouring MIL simply like their tea cooler, and so don’t run into the scalding problem? This might explain why I prefer the taste of the same tea, with the same milk, made MIF from a pot, rather than MIL with a teabag in a cup… I like my tea super hot. So, the two methods can indeed taste different; a fact demonstrated by a famous statistical experiment (famous among statisticians, anyway; a commenter had to point it out to me) in which a lady was able to tell whether a cup of tea had been made MIF or MIL eight times out of eight.

“Already, quite soon after he had come to Rothamstead, his presence had transformed one commonplace tea time to an historic event. It happened one afternoon when he drew a cup of tea from the urn and offered it to the lady beside him, Dr. B. Muriel Bristol, an algologist. She declined it, stating that she preferred a cup into which the milk had been poured first. “Nonsense,” returned Fisher, smiling, “Surely it makes no difference.” But she maintained, with emphasis, that of course it did. From just behind, a voice suggested, “Let’s test her.” It was William Roach who was not long afterward to marry Miss Bristol. Immediately, they embarked on the preliminaries of the experiment, Roach assisting with the cups and exulting that Miss Bristol divined correctly more than enough of those cups into which tea had been poured first to prove her case.”

-Fisher-Box, 1978, p. 134.
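Incidentally, the statistics of that experiment are easy to check for yourself. Here’s a minimal sketch (mine, not Fisher-Box’s), assuming the classic design Fisher later published: eight cups, four made each way, with the taster told as much:

```python
from math import comb

# The 'lady tasting tea': 8 cups, 4 made milk-first (MIF) and 4 tea-first
# (MIL), and the taster knows there are 4 of each. Guessing at random means
# picking which 4 of the 8 cups are the MIF ones.
arrangements = comb(8, 4)      # 70 equally likely ways to choose 4 cups from 8
p_perfect = 1 / arrangements   # only 1 of those 70 choices gets all 8 right

print(arrangements)            # 70
print(f"{p_perfect:.4f}")      # 0.0143 - about a 1.4% chance of a pure fluke
```

A perfect score by guesswork alone has odds of just 1 in 70, which is why Dr. Bristol’s eight-for-eight was taken to ‘prove her case’.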

This of course doesn’t help with which is objectively better, but it does suggest that one side may be ‘right’. However, as well as temperature, the strength of the brew may also make a difference here – one that might explain why this debate rumbles on with no clear victor. A commenter on a Guardian article explains the chemistry of a cup of tea:

“IN THE teacup, two chemical reactions take place which alter the protein of the milk: denaturing and tanning. The first, the change that takes place in milk when it is heated, depends only on temperature. ‘Milk-first’ gradually brings the contents of the cup up from fridge-cool. ‘Milk-last’ rapidly heats the first drop of milk almost to the temperature of the teapot, denaturing it to a greater degree and so developing more ‘boiled milk’ flavour. The second reaction is analogous to the tanning of leather. Just as the protein of untanned hide is combined with tannin to form chemically tough collagen/tannin complexes, so in the teacup, the milk’s protein turns into tannin/casein complexes. But there is a difference: in leather every reactive point on the protein molecule is taken up by a tannin molecule, but this need not be so in tea. Unless the brew is strong enough to tan all the casein completely, ‘milk-first’ will react differently from ‘milk-last’ in the way it distributes the tannin through the casein. In ‘milk-first’, all the casein tans uniformly; in ‘milk-last’ the first molecules of casein entering the cup tan more thoroughly than the last ones. If the proportions of tannin to casein are near to chemical equality, ‘which-first’ may determine whether some of the casein escapes tanning entirely. There is no reason why this difference should not alter the taste.”

-Dan Lowy, Sutton, Surrey (The Guardian, Notes & Queries, 2011).

Both the scalding and the denaturation/tanning explanations are referenced in the popular science book ‘Riddles in Your Teacup’ (p. 90), the authors having consulted physicists (who favour a temperature explanation) and chemists (who of course take a chemistry-based view) on this question. I also found this interesting explanation, from an 1870 edition of the Boston Journal of Chemistry, of the tannins in tea and how milk reacts with them to change the taste. This supports the idea – as does the tea-tasting lady’s ability to tell the difference – that MIF and MIL can result in a different taste. Needless to say, people have different palates and preferences, and it’s likely that some prefer their tannins left unchecked (black tea), fully suppressed (milk in first), or partly mitigated (milk in last). However, if your tea is strong enough, the difference in taste will be small or even non-existent, as the tannins will shine through regardless and you’ll just get the additional flavour of the milk (perhaps tasting slightly boiled?). My preferred blend (Betty’s Tea Room blend) absolutely does retain this astringent taste regardless of which method I use or even how hot the water is (even if I do prefer it hot and MIF!).

So, the available scientific advice does favour MIF, for what it’s worth, which interestingly bears out those early reports of upper-class tea aficionados and later ‘below stairs’ types who both preferred it this way. However, the difference isn’t huge, and depends on what temperature the tea is when you hit it with the milk, how strong the brew is, and what blend you use. It’s a bit like unevenly steamed milk in a latte or cappuccino; it’s fine, but it’s nicer when it has that smooth, foamed texture and hasn’t been scalded by the wand. The bottom line, which is what I was trying to say in my YouTube response, is that it’s basically just fashion/habit and doesn’t much matter either way (despite the amount I’ve said and written about it!) – to which I can now add the taste-preference and chemical-change aspects. If you pour your tea at a lower temperature, the milk won’t get so denatured/scalded, and even this small difference won’t occur. Even if you pour it hot, you might not mind or notice the difference in taste. As for the historical explanation of cracking cups, it’s probably bollocks, albeit rooted in the fact of substandard British teaware. As readers of this blog will know by now, these neat origin stories generally do turn out to be made up after the fact, and the real history is more nuanced. This story is no different.

To recap: when tea was introduced in the 17th century, most people drank it black. By the early 19th century, milk had become widely used as an option that you added to the poured tea, like sugar. Later that century, some found that they preferred putting the milk in first, and were thought particular for doing so (marking the start of the Great Tea Schism). MIF remained a minority individual preference; most upper-class hostesses continued to serve MIL (as Hartley recommended) because, when hosting numbers of fussy guests, serving the tea first and offering milk, sugar and lemon to add to their own taste was simply more practical and efficient. Guests cannot object to their tea if they are responsible for putting it together, and this way everyone gets served at the same time. Rather than outline this practical justification, the 1920s snobs chose to frame the debate in terms of class, setting in stone MIL as the only ‘proper’ way. This, probably combined with a residual idea that black tea was the default and milk was something that you added, and doubtless also as a result of the increasing dominance of tea-making with a teabag and mug/cup (where MIL really is the only acceptable method), left a lot of non-upper-class people with the idea that MIL was objectively correct. Finally, as the class system broke down, milk first or last became the (mostly) good-natured debate that it is today.

All of this baggage (especially, in my view, the outdated class snobbery) should be irrelevant to how we take our tea today, and should have been even back then. As far back as 1927, J.B. Priestley used his Saturday Review column to mock the snobs who criticised “…those who pour the milk in first…”. The Duke of Bedford’s ‘Book of Snobs’ (1965, p. 42) lamented the ongoing snobbery over ‘milk in first’ as “…stigmatizing millions to hopelessly inferior status…”. Today, upper-class views on what is correct or incorrect are roundly ignored by the majority, and most of those arguing in favour of MIL would not claim that you should do it because the upper class said so, and probably don’t even realise that this is where it came from. Even high-end tea-peddlers Fortnum & Mason note that you should “…pour your tea as you please”. Each person’s view on this is a product of family custom and upbringing, social class, and individual preference; a potent mixture that leads to some strong opinions! Alternatively, like me, you drink your tea sufficiently strong that it barely matters (note I said ‘barely’ – I remain a heretical MIFer for life). What does matter, of course, in tea as in all things, is knowing what you like and how to achieve it, as this final quote underlines:

…no rules will insure good tea-making. Poeta nascitur non fit,* and it may be said similarly, you are born a tea-maker, but you cannot become one.

-Samuel Kneeland, About Making Tea (1870). *A Latin expression meaning that poets are born and not made.

References (for both Parts):

Bedford, John Robert Russell, George Mikes & Nicholas Bentley. 1965. The Duke of Bedford’s Book of Snobs. London: P. Owen.

Bennett, Arnold. 1912. Helen With the High Hand. London: Chapman and Hall.

Betjeman, John. 1956. ‘How to Get on in Society’ in Noblesse Oblige: An Enquiry into the Identifiable Characteristics of the English Aristocracy (Nancy Mitford, ed.). London: Hamish Hamilton.

Boston Journal of Chemistry. 1870. ‘Familiar Science – Leather in the Tea-Cup’. Vol. V, No. 3.

Ferguson, Jonathan. 2020. ‘You’re Doing It Wrong: Tea and Milk with Jonathan Ferguson’. Forgotten Weapons. YouTube video. 15 April 2020. <https://www.youtube.com/watch?v=8VCRFVMpSc8>.

Ferguson, Jonathan & McCollum, Ian. 2020. ‘Jonathan Reacts to the First Day Kickstarter for his Book’. Forgotten Weapons. YouTube video. 13 April 2020. <https://www.youtube.com/watch?v=1XO4VgkC_JE>.

Fisher-Box, Joan. 1978. R.A. Fisher: The Life of a Scientist. New York, NY: Wiley.

Fortnum & Mason. ‘How to Make the Perfect Cup of Tea.’ The Journal | #Fortnums. <https://www.fortnumandmason.com/fortnums/the-perfect-cup-of-tea>.

Ghose, Partha & Dipankar Home. 1994. Riddles in your Teacup. Boca Raton, FL: CRC Press.

Guanghua (光華). 1995. Press Room of the Information Bureau of the Executive Yuan of the Republic of China. Vol. 20, Nos. 7–12.

Hartley, Florence. 1860. The Ladies’ Book of Etiquette, and Manual of Politeness: A Complete Handbook. Boston, MA: Cottrell.

Johnson, Dorothea. 2002. Tea & Etiquette. Washington, D.C.: Capital.

Ellis, Markman. 2017. ‘“Milk in First”: a miffy question’. Queen Mary University of London History of Tea Project. 11 May. <https://qmhistoryoftea.wordpress.com/2017/05/11/milk-in-first-a-miffy-question/>.

Kneeland, Samuel. 1870. ‘About Making Tea’. Good Health. Vol. 1, No. 12.

Lowy, Dan. 2011. ‘Notes and Queries’. The Guardian. Digital edition: <https://www.theguardian.com/notesandqueries/query/0,,-1400,00.html>.

Manley, Jeffrey. 2016. ‘Milk in First.’ The Evelyn Waugh Society. 17 November 2016. <https://evelynwaughsociety.org/2016/milk-in-first/>.

Orwell, George. 1946. ‘A Nice Cup of Tea.’ London Evening Standard. Available at <https://orwell.ru/library/articles/tea/english/e_tea>.

Rice, Elizabeth Emma. 1884. Domestic Economy. London: Blackie & Son.

Royal Society of Chemistry. 2003. ‘How to Make a Perfect Cup of Tea.’ Press Release. <https://web.archive.org/web/20140811033029/http://www.rsc.org/pdf/pressoffice/2003/tea.pdf>.

Smith, Matthew. 2018. ‘Should milk go in a cup of tea first or last?’ YouGov. 30 July. <https://yougov.co.uk/topics/food/articles-reports/2018/07/30/should-milk-go-cup-tea-first-or-last/>.

Waugh, Evelyn. 1956. ‘An Open Letter to the Honble Mrs Peter Rodd (Nancy Mitford) On a Very Serious Subject’ in Noblesse Oblige: An Enquiry into the Identifiable Characteristics of the English Aristocracy (Nancy Mitford, ed.). London: Hamish Hamilton.


Milk in First, or Last Part 1: a Storm in a Teacup?

Poster created by the amazing Geof Banyard (islandofdoctorgeof.co.uk) for a 2016 mock ‘Tea Referendum’

The Short Version: Pouring tea (from a teapot) with the milk in the cup first was an acceptable, if minority, preference regardless of class until the 1920s, when upper class tea drinkers decided that it was something that only the lower classes did. It does affect the taste but whether in a positive or negative way (or whether you even notice/care) is strictly a matter of preference. So, if we’re to ignore silly class-based snobbery, milk-in-first remains an acceptable alternative method. Unless you are making your tea in a mug or cup with a teabag, in which case, for the love of god, put the milk in last, or you’ll kill the infusion process stone dead.

This article first appeared in a beautifully designed ‘Tea Ration’ booklet produced by Headstamp Publishing for Kickstarter supporters of my book (Ferguson, 2020). Now that these lovely people have had their books (and booklets) for a while, I thought it time to unleash a slightly revised version on anyone else that might care! It’s a long read, so I’ll break it into two parts (references for both parts are collected at the end of Part 2, for those interested)…

Part 1: The History

Like many of my fellow Britons, I drink an enormous amount of tea. By ‘tea’, I mean tea as drunk in Britain, the Republic of Ireland and, to a large extent, the Commonwealth. This takes the form of strong blends of black leaves, served hot with (usually) milk and (optionally) sugar. I have long been aware of the debate over whether to put the milk into the cup first or last, and that passions can run pretty high over this (as in all areas of tea preference). However, I did not grasp just how strong these views were until I read comments made on a video (Ferguson & McCollum, 2020) made to support the launch of my book ‘Thorneycroft to SA80: British Bullpup Firearms 1901 – 2020’. This showed brewed tea being poured into a cup already containing milk, which caused a flurry of mock (and perhaps some genuine) horror in the comments section. Commenters were overwhelmingly in favour of putting milk in last (henceforth ‘MIL’) rather than the other way around (‘milk in first’ or ‘MIF’). This is superficially supported by a 2018 survey in which 79% of participants agreed with MIL (Smith, 2018), although that survey was seriously flawed in not specifying whether a teapot or an individual mug/cup was the brewing receptacle. Very few British/Irish-style tea drinkers would ever drop a teabag in on top of milk, as the milk soaks into the bag, preventing most of the leaves from infusing into the hot water. Most of us these days only break out the teapot (and especially the loose-leaf tea, china cups, tea-tray etc.) on special occasions, and it takes a conscious effort to try the milk in first.

Regardless, anecdotally at least it does seem that a majority would still argue for MIL even when using a teapot. This might seem only logical; tea is the drink, milk is the additive. The main justifications given were the alleged difficulty of judging the colour and therefore the strength of the mixture, and an interesting historical claim that only working class people in the past had put milk in first, in order to protect their cheap porcelain cups. The practicalities seemed to be secondary to some idea of an objectively ‘right’ way to do it, however, with many expressing mock (perhaps in some cases, genuine) horror at MIF. This vehement reaction drove me to investigate, coming to the tentative conclusion that there was a strong social class influence, and releasing a follow-up video in which I acknowledged this received wisdom (Ferguson, 2020). I also demonstrated making a cup of perfectly strong tea using MIF, thus empirically proving the colour/strength argument wrong – given a suitably strong blend and brew, of course. The initial source that I found confirmed the modern view on the etiquette of tea-making and the colour justification. This was ‘Tea & Etiquette’ (1998, pp. 74–75), written by the American Dorothea Johnson. Johnson warns ‘Don’t put the milk in before the tea because then you cannot judge the strength of the tea by its color…’

And:

‘ …don’t be guilty of this faux pas…’

Johnson then lists ‘Good Reasons to Add Milk After the Tea is Poured into a Cup’, as follows:

  • The butler in the popular 1970s television program Upstairs, Downstairs kindly gave the following advice to the household servants who were arguing about the virtues of adding milk before or after the tea is poured: “Those of us downstairs put the milk in first, while those upstairs put the milk in last.”
  • Moyra Bremner, author of Enquire Within Upon Modern Etiquette and Successful Behaviour, says, “Milk, strictly speaking, goes in after the tea.”
  • According to the English writer Evelyn Waugh, “All nannies and many governesses… put the milk in first.”
  • And, by the way, Queen Elizabeth II adds the milk in last.

Unlike the video comments, which did not directly reference social class, this assessment practically drips with snobbery, thinly veiled with the practical but subjective justification that one cannot judge the colour (and hence strength) of the final brew as easily. Still, it pointed toward the fact that there really was somehow a broadly acknowledged ‘right’ way, which surprised me. The handful of other etiquette and household books that I found in my quick search seemed to agree, and in a modern context there is no doubt that ‘milk in last’ (MIL) has come to be seen as the ‘proper’ way. However, as I suspected, there is definitely more to it—milk last wasn’t always the prescribed method, and it isn’t necessarily the best way to make your ‘cuppa’ either…

So, to the history books themselves… I spent longer than is healthy perusing ladies’ etiquette books and, as it turns out, only the modern ones assert that milk should go in last or imply that there is any kind of class aspect to be borne in mind. In fact, Elizabeth Emma Rice in her Domestic Economy (1884, p. 139) states confidently that:

“…those who make the best tea generally put the sugar and milk in the cup, and then pour in the hot tea.”

I checked all of the etiquette books that I could find electronically, regardless of time period, and only one other is prescriptive with regard to serving milk with tea. This is The Ladies’ Book of Etiquette, and Manual of Politeness: A Complete Handbook, by Florence Hartley (1860, pp. 105–106), which passes no judgement on which is superior, but recommends for convenience that cups of tea are poured and passed around to be milked and sugared to taste. This may provide a practical underpinning to the upper-class preference for MIL; getting someone’s cup of tea wrong would be a real issue at a gathering or party. You either had to ask how the guest liked it and have them ‘say when’ to stop pouring the milk, which would take time and be fraught with difficulty or, more likely, you simply poured a cup for each and let them add milk and sugar to their taste. This also speaks to how tea was originally drunk (as fresh coffee still is) – black, with milk if you wanted it. A working-class household was less likely to host large gatherings or have a need to impress people. There it was more convenient to add roughly the same amount of milk to each cup, and then fill the rest with tea. As a guest, you would simply be given a cup made as the host deemed fit, or perhaps be asked how you like it. If thought sufficiently fussy, you might be told to make it yourself! In any case, Hartley was an American writing for Americans, and I found no pre-First World War British guides that actually recommended milk in last. As noted, the only guide that did cover it (Rice) actually favours milk in first.

Much of my research aligns with that presented in a superb article by Professor Markman Ellis of the Queen Mary University History of Tea Project. Ellis agrees that the ‘milk in first or last’ thing was really about the British class system – which helps explain why I found so few pre-Second World War references to the dilemma. His thesis boils down (ha!) to a crisis of identity among the post-First World War upper class. In the 1920s, the wealth gap between the growing middle class and the upper class was narrowing. This is where the expression nouveau riche – the new rich – comes in; they had the money but, as the ‘true’ upper class saw it, not the ‘breeding’. They could pose as upper class, but could never be upper class. Of course, that very middle class would, in its turn, come to look down on aspiring working-class people (think Hyacinth Bucket from British situation comedy Keeping Up Appearances). In any case, if you cared about appearances and reputation among your upper-class peers or felt threatened by social mobility, you had to have a way of setting yourself apart from the ‘lower classes’. Arbitrary rulesets that included MIL were a way to do this. Ellis cites several pre-First World War sources (dating back as far as 1846) which comment on how individuals took their tea. These suggest that milk-in-first (MIF) was thought somewhat unusual, but the sources pass no judgement and don’t mention that this was thought to be a working-class phenomenon. Adding milk to tea was, logically enough, how it was originally done – black tea came first and milk was an addition. Additions are added, after all. As preferences developed, some would have tried milk first and liked it. This alone explains why those adding milk first might seem eccentric, but not ‘wrong’ per se. In fact, by the first decade of the 20th century, MIF had become downright fashionable, at least among the middle class, as Helen with the High Hand (1910) shows. In this novel, the titular Helen states that an “…authority on China tea…” should know that “…milk ought to be poured in first. Why, it makes quite a different taste!” It was perhaps this presumptuous attitude (how dare the lower classes tell us how to make our tea?!) that influenced the upper-class rejection of the practice in later decades.

This brings us back to Ellis’s explanation of where the practice originated, and also explains the context of Evelyn Waugh’s comments as reported by Johnson. These come from Waugh’s contribution to Noblesse Oblige – a book that codified the latest habits of the English aristocracy. Ellis dismisses the authors and editor as snobs of the sort that originated and perpetuated the tea/milk meme. However, in fairness to Waugh, he does make clear that he’s talking about the view of some of his peers, not necessarily his own, and even gives credit to MIF ‘tea-fanciers’ for trying to make the tea taste better. His full comments are as follows:

All nannies and many governesses, when pouring out tea, put the milk in first. (It is said by tea-fanciers to produce a richer mixture.) Sharp children notice that this is not normally done in the drawing-room. To some this revelation becomes symbolic. We have a friend you may remember, far from conventional in other ways, who makes it her touchstone. “Rather MIF, darling,” she says in condemnation.

                             -Waugh, 1956.

Incidentally, I erroneously stated that governesses were ‘working class’ in my original video on this topic. In fact, although nannies often were, the governess was typically of the middle class, or even an impoverished upper-middle or upper class woman. Both roles occupied a space between classes, being neither one nor the other but excluded from ever being truly ‘U’. As a result, they were free to make tea as they thought best. Waugh’s view is not the only tea-related one in the book. Poet John Betjeman also alluded to this growing view that MIF was a lower class behaviour in his long list of things that would mark out the speaker as a member of the middle class:

Milk and then just as it comes dear?

I’m afraid the preserve’s full of stones;

Beg pardon I’m soiling the doileys

With afternoon tea-cakes and scones.

                             -Betjeman, 1956.

Returning to the etiquette books, although the early ones were written for those running an upper-class household, the latter-day efforts like Johnson’s are actually aimed at those aspiring to behave like, or at least fascinated by, the British upper class. This is why Johnson invokes famous posh Britons and even the Queen herself to make her point to her American audience. Interestingly though, Johnson takes Samuel Twining’s name in vain. The ninth-generation member of the famous Twining tea company is in fact an advocate of milk first, and he too thought that MIL came from snobbery:

With a wave of his hand, Mr. Twining dismisses this idea as nonsense. “Of course you have to put the milk in first to make a proper cup of tea.” He surmises that upper-class snobbery about pouring the tea first, had its origins in their desire to show that their cups were pure imported Chinese porcelain.

Guanghua (光華) magazine, 1995, Volume 20, Issues 7-12, p. 19.

Twining goes on to explain his hypothesis that the lower classes only had access to poor-quality porcelain that could not withstand the thermal shock of hot liquid, and so had to put the milk in first to protect the cup. Plausible enough, but almost certainly wrong. As Ellis explains in his article:

…tea was consumed in Britain for almost two centuries before milk was commonly added, without damaging the cups, and in any case the whole point of porcelain, other than its beauty, was its thermo-resistance.

Food journalist Beverly Dubrin mentions the theory in her book ‘Tea Culture: History, Traditions, Celebrations, Recipes & More’ (2012, p. 24), but identifies it as ‘speculation’. I could find no historical references to the cracking of teacups until after the Second World War. The claim first appears in a 1947 issue of the American-published (but international in scope) ‘Tea & Coffee Trade Journal’ (Volumes 92–93, p. 11), along with yet another pro-MIF comment:

…MILK FIRST in the TEA, PLEASE! Do you pour the milk in your cup before the tea? Whatever your menfolk might say, it isn’t merely ‘an old wives’ tale : it’s a survival from better times than these, when valuable porcelain cups were commonly in use. The cold milk prevented the boiling liquor cracking the cups. Just plain common sense, of course. But there is more in it than that, as you wives know — tea looks better and tastes better made that way.

The only references to cracking teaware that I’ve found were to the teapot itself, into which you’d be pouring truly boiling water if you wanted the best brewing results. Several books mention the inferiority of British ‘soft’ porcelain in the 18th century, made without “access to the kaolin clay from which hard porcelain was made”, as Paul Monod says in his 2009 book ‘Imperial Island: A History of Britain and Its Empire, 1660-1837’. By the Victorian period this “genuine or true” porcelain was only “occasionally” made in Britain, as this interesting 1845 source relates, and remained expensive (whether British or imported) into the 20th century. This has no doubt contributed to the explanation that the milk was put there to protect the cups, even though the pot was by far the bigger worry and there are plenty of surviving soft-paste porcelain teacups today without cracks (e.g. this Georgian example). Of course, it isn’t actually necessary for cracking to be a realistic concern, only that the perception existed, and so we can’t rule it out as a factor. However, that early ‘Tea & Coffee Trade Journal’ mention is also interesting because it omits any reference to social class and implies that this was something that everyone used to do for practical reasons, and is now done as a matter of preference. Likewise, on the other side of the debate, author and Spanish Civil War veteran George Orwell argued in favour of MIL in a piece for the Evening Standard (January 1946) entitled ‘A Nice Cup of Tea’:

…by putting the tea in first and stirring as one pours, one can exactly regulate the amount of milk whereas one is liable to put in too much milk if one does it the other way round.

                             -Orwell, 1946.

This reiterated his earlier advice captured in this wonderful video from the Spanish trenches. However, Orwell acknowledged that the method of adding milk was “…one of the most controversial points of all…” and admitted that “the milk-first school can bring forward some fairly strong arguments.” Orwell (who himself hailed from the upper middle class) doesn’t mention class differences or worries over cracking cups.

By the 1960s people were more routinely denouncing MIF as a working class practice, although even at this late stage there was disagreement. Upper class explorer and writer James Maurice Scott in ‘The Tea Story’ (1964, p. 112) commented:

The argument as to which should be put first into the cup, the tea or the milk, is as old and unsolvable as which came first, the chicken or the egg. There is, I think, a vague feeling that it is Non-U to put the milk in first – why, goodness knows.

It’s important to note that ‘U’ and ‘Non-U’ were shorthand for ‘Upper-Class’ and ‘Non-Upper-Class’, coined by Professor Alan Ross in his 1954 linguistic study and unironically embraced by the likes of Mitford as a way to ‘other’ those that they saw as inferior.

The New Yorker magazine (1965, p. 26) reported a more emphatic advisory (seemingly a trick question!) given to an American visitor to London:

Do you like milk in first or tea in first? You know, putting milk in the cup first is a working-class custom, and tea first is not.

This, then, was the status quo reflected in the British TV programme ‘Upstairs, Downstairs’ in the 1970s, which helped to expose new audiences to the idea that MIF was ‘not the done thing’. Lending libraries and affordable paperback editions afforded easy access to books like Noblesse Oblige. The 1980s then saw the modern breed of etiquette books (like ‘Miss Manners’ Guide to Excruciatingly Correct Behavior’) that rehashed this snobbery for an American audience fascinated with the British upper class. Ironically of course, any American would have been unquestionably ‘Non-U’ to any upper-class Brit, just as any working or middle-class Briton would have been. And finally (again covered by Ellis), much like the changing fashion of the extended pinkie finger (which started as an upper-class habit and then became ‘common’ when it trickled down to the lower classes – see my article here), the upper class decided that worrying about the milk in your tea was now vulgar. Having caused the fuss in the first place, they retired to their collective drawing room, leaving us common folk to endlessly debate the merits of MIF/MIL…

That’s it for now. Next time: Why does anyone still care about this?

“…few men…would be clever enough to be crows.”

I recently caught up with this Nicola Clayton lecture on corvid intelligence. Well worth a watch, it ends with a very apt quote:

“If men had wings and bore black feathers, Few of them would be clever enough to be crows.”

-Henry Ward Beecher

Unfortunately, as quotes in PowerPoint presentations often are, this is incorrect.

The actual quote is:

“Take off the wings, and put him in breeches, and crows make fair average men. Give men wings, and reduce their smartness a little, and many of them would be almost good enough to be crows.”

Some time into researching the origins of this, I came across this blog post, which correctly identifies that the above is the original wording and that Beecher was indeed its originator. However, taking things a little further, I can confirm that the first appearance of this was NOT ‘Our Dumb Animals’ but rather The New York Ledger. Beecher’s regular (weekly) column in the Ledger was renowned at the time. Unfortunately, I can’t find any issues of the Ledger from this period online, so I can’t fully pin this one down. Based upon its appearance in the former publication in May of 1870, and various other references from publications that summer (e.g. this one) to “a recent issue of the Ledger”, it seems to have appeared in early 1870. From there it was reprinted in various other periodicals and newspapers including ‘Our Dumb Animals’ (even if the latter doesn’t credit the Ledger as other reprints did).

So how did the incorrect version come about? It was very likely just a misquote or, rather, a series of misquotes and paraphrasings. Even some of the early direct quotes got it wrong: one 1873 reprint drops the word ‘almost’, blunting Beecher’s acerbic wit slightly (saying that many men would be good enough to be crows is kinder than saying that many would be almost good enough). Fairly early on, authors moved to paraphrasing; for example, in 1891’s ‘Collected Reports Relating to Agriculture’ we find:

“…Henry Ward Beecher long ago remarked that if men were feathered out and given a pair of wings, a very few of them would be clever enough to be crows.” 

This appeared almost verbatim over thirty years later in Coburn’s ‘The Behavior of the Crow’ (1923). Two years later, Glover Morrill Allen’s ‘Birds and Their Attributes’ (1925, p. 222) gave us a new version:

“…Henry Ward Beecher was correct when he said that if men could be feathered and provided with wings, very few would be clever enough to be Crows!”

It was this form that was repeated from then on, crucially in some cases (such as Bent’s 1946 ‘Life Histories of North American Birds’) with added quotation marks, making it appear to later readers that these were Beecher’s actual words. Interestingly, the earliest occurrence of the wording ‘very few would prove clever enough’ (my emphasis) seems to emerge later, and is credited to naturalist Henry David Thoreau:

“… once said that if men could be turned into birds, each in accordance with his individual capacity, very few would prove clever enough to be Crows.”

-Bulletin of the Massachusetts Audubon Society, 1942 (p. 11).

I can find no evidence that Thoreau ever said anything like this, and of course it’s also suspiciously similar to the Beecher versions floating about at the same time (here’s another from a 1943 issue of ‘Nature Magazine’, p. 401). Thus, I suspect, the Thoreau attribution is a red herring, probably a straight-up mistake by a lone author. In any case, relatively few (only eight that I could detect via Google Books) have run with that attribution since, and these can likely be traced back to the MA Audubon Society error.

So, we are seeing here a game of literary ‘telephone’ from the original Beecher tract in 1870 via various misquotes in the 1920s – 1950s that solidified the version that’s still floating around today. Pleasingly, although his wording has been thoroughly mangled, the meaning remains intact. The key difference is that Beecher was using the attributes of the crow to disparage human beings based upon the low opinion that his fellow man then held of corvids. Despite this, Beecher very clearly did respect the intelligence of the bird as much as the 20th century birders who referenced him, and those of us today who also love the corvids. I think it’s important to be reminded that, as his version shows, widespread affection for corvids is a very recent thing. We should never forget how badly we have mistreated them and, sadly, continue to do so in many places.