Shining Girls (Apple TV, 2022)

Another time travel fiction review with SPOILERS for the TV show (and to some extent the book as well)

I very much enjoyed the Apple TV series Shining Girls, an adaptation of Lauren Beukes’ 2013 novel The Shining Girls. I thought it was well acted, well shot, mostly well written, and had a satisfying ending, albeit a problematic one, since the killer is left alive and Kirby might now be forever bound to the house as he was. However, I was confused and somewhat annoyed by the time-travel aspects; the way the house worked as a time machine mostly made sense, but the way that Kirby’s present (and later that of Harper and Jin-Sook) was shown to change moment to moment really makes zero sense. It made me very curious to find out whether this was part of the book, and I soon found out that it wasn’t. I decided to read the book, as I much preferred the idea of a straightforward time travel version of the same story. As much as I enjoyed the book, it made me all the more annoyed that the TV version had made such a dramatic and nonsensical change. It wasn’t the only questionable change either. The focus upon Kirby and her ever-shifting reality resulted in a great deal being changed or removed, including most of the titular ‘shining girls’ – among them, surprisingly for 2022, the black, trans, and pro-abortion characters. The ones that are retained are significantly changed, and a whole new character – Leo Jenkins – is added for no clear reason.

Time travel in the novel is straightforward: you simply can’t change the past. It’s a clever twist on the closed loop of The Terminator or Twelve Monkeys. So nothing changes. In the TV show it’s more like Terminator 2 or Back to the Future – you can change the past and save the girls. This is a change that the 12 Monkeys TV show also made to the movie’s story, and I could have lived with the same here. Most people don’t share my love of closed loops, and it’s fun to see a seemingly foregone conclusion averted/subverted (which is why James Cameron contradicted his own first movie with his sequel – it made for an emotionally satisfying ending at the expense of pure logic). No, what got me annoyed in Shining Girls (2022) was not the malleable timeline but the introduction of a second, wholly nonsensical mechanism for changing it. This is both more confusing than it need be and a direct contradiction, because in theory changes made by one mechanism should impact those made by the other. Dark and Avengers: Endgame (see my review here) both introduced branching realities, to varying degrees of success – I would have been OK with this show doing something similar, since under that system of time travel cause and effect is pretty much intact. Shining Girls makes the same mistake as Endgame, but whereas that logic only broke in the final scenes and can be ‘fixed’ with some off-screen assumptions, Shining Girls is fundamentally broken as a time travel story, since its second mechanism has nothing to do with ‘many worlds’ and is, well, random. Drinking vessels, desks, haircuts, clothes, characters and locations all change, for absolutely no reason. No multiverse shenanigans are ever mentioned or even implied. The characters speculate at one point that the changes are somehow echoes of events that might yet happen; a laundromat changes into a bar for which Kirby already has a matchbook, and Kirby goes from single to married to a coworker.

Dan: When things change for you, do you recognize it? 

Kirby: Sometimes. Other times, they’re just random. 

Dan: Maybe they’re what’s to come.

But then it’s shown that she doesn’t marry her coworker at all in the ‘final’ timeline, at least as far as we see. Is she still destined to do so at some point? If so, then there’s no chance that she stays in the house and becomes some sort of time-travelling vigilante or whatever. They’ve shown that it’s possible to change reality, seemingly permanently, so surely the timeline where she marries him is no longer viable? When should the laundromat have been a bar, and what are the consequences of it changing at the ‘wrong’ time? Kirby has the matchbook – why? Jin-Sook’s career is destroyed in the present because she isn’t killed…also in the present. At the same time, Kirby’s present also shifts because Dan is stabbed, again, in the present. Why? The answer to all of this and the other seemingly random changes is deeply unsatisfying and illogical. The cause of these changes is not meddling in the past but rather (sigh) strong emotions experienced by someone who is ‘entangled’ (a clear if nonsensical attempt to reference quantum mechanics) with another person who is somehow detached from time – namely Harper (with Kirby’s fellow victim Jin-Sook joining the entangled mess later on). In showrunner Silka Luisa’s own words:

“I always thought of time just there’s one string of time, and so wherever Harper is he’s still connected to Kirby, so his emotions, his violence against other women, it ripples forward kind of like a butterfly effect and changes her world, changes her, you know, her hair, her apartment, depending on what he’s done, and so if he kills Jin-Sook in April 26 it doesn’t matter that Kirby is, you know, at the same time, it basically ripples backwards and still impacts her life.”

This (and another attempt to explain it here) makes absolutely no sense. The conceit of ‘mutable’ timeline time travel, and much of our fascination with it, is that when you change something, you’re creating a cause that has an effect. It doesn’t matter which way around – you can have something exist out of time in the past that is caused in the future; logically speaking there’s no problem with that. But two unconnected events are, well, unconnected. There IS no cause, there can be no effect. How the hell does Harper killing a woman who has nothing to do with Kirby’s past change Kirby’s present? How does him attacking her in the present change the past of the building that they happen to be in? Or where her desk is? How is Harper ‘entangled’ with Kirby in the first place? He’s affected by the house’s time travel magic – is this somehow contagious? There is no satisfactory answer to any of these questions. What Harper is doing in the present cannot logically affect events in the past. He can take an object from the present back, or otherwise change the past IN the past, but he can’t just throw a spacetime tantrum and change Kirby’s past from the present. What Luisa is describing is some sort of psychic warfare – which might have been an interesting premise for a TV series, but not this one. The changes are not even consistent in their frequency or magnitude. At one point near the end, reality shifts again, but Kirby’s hair, clothes and makeup don’t. This was apparently because they “ran out of hairstyles” and liked her cool, punky, confident look, so they just kept it.

Of course it’s possible to (as some fans have) invoke ‘many worlds’ and say that every change we see is actually the universe branching, but that’s not shown or told to us. Instead, everything is shown to happen in a single mutable timeline in which trips to the past absolutely do change the present/future. Further, only causal events that take place in the subjective present (like the fight with the changing building) could create a branch in reality and even then, this branch would occur then and there, not arbitrarily in the past (indeed, according to the many worlds interpretation of quantum mechanics, that’s exactly what IS happening all the time). If you’re going to make up rules that aren’t logical, OK, do that, but you need to spell them out, if not in the show then somewhere (famously, Donnie Darko did this on its website). 

I don’t think I’m just being a time travel obsessive here. It isn’t just the fun nerdy logic puzzle aspect that this affects, it’s the narrative as well (unless you miss the fact or choose to overlook it). Although it feels like the stakes and tension are being raised by the changes becoming more frequent and disruptive, they aren’t really – it’s unearned and artificial-feeling, like overly dramatic loud music playing over an otherwise ordinary scene (looking at you, modern Doctor Who). If anything can happen at any moment to three of the main characters, nothing really matters. It’s also needlessly confusing for the viewer, since it’s hard enough for people to follow cause-and-effect changes – hence the contrived photos and fax in Back to the Future – never mind completely random ones taking place in parallel yet not, apparently, conflicting with or modifying the logical changes. Two totally separate mechanisms for change, happening at the same time. It’s a bizarre narrative choice, especially since it isn’t taken from the book, and it detracts from the otherwise excellent acting, staging, dialogue etc. However, having read many reviews, not many seem to agree with me. Reviewers seem to fall into several camps on the time travel aspects. First, there are people like this SyFy reviewer who seem to think that this is multiverse travel, which, as I’ve explained, isn’t the case. Second, some people misunderstand what’s shown and think that the changes ARE due to Harper changing the past, like this Slate reviewer whom, by the way, I otherwise agree with. Even Beukes seems to rue the changes to an extent, although she seems mostly happy with the adaptation, perhaps because she’s less attached to her own coherent time travel than I am, or simply because adaptations are inevitably a compromise between producers, showrunner, writers and studio. Then there are the people who just don’t care, or even (looking at you, Redditors) protest that anyone trying to analyse the time travel is ‘missing the point’ and should stop fussing over it. Finally, and not too far removed from the last group, are people who accept that Harper, Kirby or Jin-Sook’s emotions are somehow enough to change the timeline, which as noted is what the showrunner and writers actually intended. As is often the case with fan explanations, none is very satisfactory.

It seems to me that the creators understood, from movies like Primer or The Butterfly Effect (or perhaps series like 12 Monkeys), that unexpected timeline changes are interesting and fun and would fit their intent for the adaptation, but weren’t able (or didn’t care enough) to put in the work to make the changes work in terms of cause and effect. Instead they came up with this handwavy version in which things feel like they might ultimately make sense, but logic is in fact out of the window. It’s very much the J.J. Abrams empty ‘mystery box’ approach – set up the intriguing mystery, then reveal that stuff just happens because the writers say so, rather than because (say) Harper killing the coroner/medical examiner in the past prevents Kirby from getting access to the body she needs to investigate, so that a key piece of evidence is suddenly lost to her (other than her memory of it) in the present. I chose this example because they do a similar reality shift with the medical examiner in the show (changing from a woman to a man and back again), but it happens (twice) for no reason other than to throw off the audience.

The idea here was that Kirby’s ever-shifting present would be a metaphor for her trauma, “born of a desire to keep the series subjective to Kirby’s experience”, but there’s no reason why subjectively unexplained shifts (i.e. we the viewers see the cause, Kirby doesn’t) wouldn’t do just as well – better, in fact, since Harper would be actively changing her past to affect her present and future, rather than being clueless as to how or why he was having these effects. Happily, like the other stories I referenced, The Shining Girls novel follows a self-consistent narrative – Harper was always going to lose, he (and Kirby) just didn’t know it yet. No-one is saved by changing the past. Even the hard date limit on Harper’s time travel, hand-waved in the show, is originally due to the fact that the timeline is (as the author’s time-travel consultant Sam Wilson confirmed) self-consistent – he can’t go past 1993 because that’s when the house is, essentially, fated to burn. He is living a loop – he dies in the burning house and then, it’s strongly implied, becomes the house, reaching back to lure a series of owners, including himself, to try to make things right. But it’s a closed loop – he is merely setting the story in motion from its end. He has no free will, something that people tend to dislike about predestination stories but that I find satisfying. The creators of the show claimed that they didn’t want the house to be the driving force for Harper’s murders because it took away from his agency – they wanted him bad in the first place. Seemingly, Luisa and co have misunderstood the ending – the house is not just some supernatural entity driving Harper to kill, it’s his ghost. Harper himself is the supernatural cause of the time travel in this story. There was no need to change the story to make Harper solely responsible for his evil – he already was. Like all serial killers he thinks that he has some higher reason for killing, but in reality it’s pointless and circular. This change also destroys the origin of the time travel house – in the show it’s just…there, and remains unexplained. Kirby inherits it as a “totem of power” according to Luisa, which seems antithetical to the original ending (to be fair to her, she does acknowledge that this isn’t necessarily a good thing).

Author Lauren Beukes had fellow writer Sam Wilson ‘doctor’ the timeline for her to make it work, and he did a great job. Beukes also gives her vision for her novel:

“I wanted to use time travel as a way of exploring how much has changed (or, depressingly stayed the same) over the course of the 20th Century, especially for women, and subvert the serial killer genre by keeping the focus much more on the victims and examining what real violence is and what it does to us. The killer has a type, but it’s not a physical thing – he goes for women with fire in their guts, who kick back against the conventions of their time.”

This aspect, unlike the closed time loop, somewhat carries over to the TV series, albeit lacking the same variety in terms of the titular girls. However, she also stated that she:

“…wanted to play with loops and paradoxes and obsessions, which meant the model I settled on was a fatalistic one. Think of it as Greek tragedy time travel – the more you resist your destiny, the more you put into play all the events that will bring it about, like Oedipus or Macbeth or King Herod but also, in the way it loops back on itself, echoing the legends of Sisyphus and the punishment of Prometheus.”

This is thrown out along with the time travel logic, and, for me, that somewhat undermines the show’s own narrative. As Beukes correctly tried to show, trauma cannot be magically undone, and the dead certainly cannot be brought back. You can only try to address it and, hopefully, stop others from suffering in future. As I said, I did enjoy the show as a supernatural mystery series with time travel elements. The time periods were all nicely depicted and the excitement of travelling through time was there. But it didn’t scratch that timey-wimey itch for me, unfortunately. The recent adaptation of The Time Traveller’s Wife was much better in that regard. In conclusion, if you’re a time travel nut like me, check out the show if you like, but the main thing is to read or listen to the book. Not only is the time travel much better, but the way the interior of the house works, its origins and connection to the killer, and even the title all make much more sense.


Kelvedon Hatch nuclear bunker

A wonderful photo of boffins at work in the (level 1) Ops Room at Kelvedon Hatch circa 1962
THE UNITED KINGDOM DURING THE COLD WAR, 1945-1991 (D 106284) United Kingdom Warning and Monitoring Organisation. Metropolitan Sector Operations Centre. Operations Room – Scientists at Work. Copyright: © IWM. Original Source: http://www.iwm.org.uk/collections/item/object/205220516

For those still following this page, my apologies for another lengthy drought. I’ve just been too busy unfortunately. However, I have been working on a few things, the first of which follows…

This spring I finally got around to visiting one of the best preserved Cold War nuclear bunkers in the UK – Kelvedon Hatch in Essex. First, given what my commenting on anything historical here usually implies, let me say up front that I absolutely loved this place. We owe the owner and manager, Mr Parrish, a massive debt for rescuing it for the nation rather than letting it be vandalised or destroyed entirely. That said, it is not without its issues from an historical standpoint. Parrish claimed in 1996 that “Everything is original — except the John Major figure…It is exactly as the Government left it”. The Facebook page today likewise proclaims “Everything is as it was left by the Government, when the bunker was decommissioned in the early 90’s.” This is not the case.

Praise for the bunker is (rightly) almost universal, but at the same time it has attracted very little scholarly attention. I did find criticism in David Lowe and Tony Joel’s ‘Remembering the Cold War: Global Contest and National Stories’ (2014, p. 59), where they remark that Kelvedon Hatch’s “…testimony to the Cold War is somewhat compromised by its private ownership. The organization and upkeep of displays is very tired and occasionally misplaced (a dummy of former prime minister Margaret Thatcher, for instance, sits next to communications equipment dating from the 1960s), and the bunker jostles with youth-focused outside activities…”. Digging out Imperial War Museum and Historic England photos from 1992 and 1997 respectively, I found that it isn’t just the interpretation that could be described as “tired”, sadly. The general condition of the place has gone from absolutely pristine to, shall we say, looking its age. Flooring and painted surfaces are worn (but not peeling), plant machinery is looking rough (albeit not visibly corroded), and there are worrying cracks in a couple of walls. I feel terrible pointing this out, because if you’re going to privately run an underground three-storey office block formerly maintained at great taxpayers’ expense, maintenance is an enormous and inevitable problem. I certainly have no issue with the adjacent outdoor activities – how else are they going to fund this place? Mr Parrish’s recorded audioguide tour, whilst engaging, informative, and funny, doesn’t give the full story, but then how could it? Even Lowe and Joel blame the private status of the site rather than the owner himself. However, the whole place is in a sort of three-way limbo between an attempt at reconstructing actual wartime occupation of the final RGHQ phase, attempts to evoke its earlier days, and a sort of ad hoc Cold War history museum. That’s great for most visitors, but for those of us wanting more, I’ve tried to disentangle this confusion using the available information, plans, photos, and film footage of the site’s different eras.

Contrary to just about all of the information out there, there were actually three operational phases as follows:

Phase 1 – ROTOR bunker

1951 – 1953: Construction 

1953 – 1957: RAF ROTOR (R4 type) Metropolitan Sector Operational Control (MSOC) 

1957 – 1962: United Kingdom Warning & Monitoring Organisation (UKWMO)/Royal Observer Corps (ROC) Metropolitan Sector HQ co-located with the ‘rump’ RAF SOC following closure of the ROTOR programme.

Plans: via Historic England

Variant plans via the RAF Barnton Quarry restoration project (you can right-click and open image in a new tab to view a larger version)

Film: Kelvedon Hatch features prominently in the 1962 film ‘The Hole in the Ground’ (note this copy misses out a short introduction set outside the bungalow). By this time the RAF had handed over operations to the UKWMO, but the fabric of the building had yet to change. In the opening scene we see UKWMO team members running into the above ground guardhouse, then proceeding down the long access tunnel and into the main bunker, the blast doors slamming shut behind them. Visible in the background is some sort of equipment stowage or coat rack (!) located where the Home Office Radio Room would later be established. We then see the Chief Sector Warning Officer and his team of scientists emerge from the doorway at top left on the above-linked plan and immediately turn right, walking under the now-defunct tote board (its red-painted support posts and frame are visible). At this point we get a great view of almost the whole Ops Room. At the opposite side of the room, the bottom of the now disused RAF Sector Ops glazed-in ‘cabins’ is visible (these appear more clearly later on as well). They then walk behind desks manned by (as explained in the film) Post Office telephonists who have volunteered under UKWMO. The team then turns right again, disappearing behind a large black pinboard (with two large maps on this side of it) that effectively bisects the room into admin/comms and scientific analysis. We pick up with the scientific team later as they peruse maps and charts. We get a reverse shot later on that shows the tote support structure again, in front of the group, complete with a colour-coded ‘sector’ type clock (as used in RAF ops rooms in the Second World War). Toward the end of the film we see the bottom right corner of the room, with red double doors (these were visible in the distance in the establishing shot of the room). Pleasingly, these original 1950s doors are still in situ today (along with a lot of others!), repainted light green and with an additional 1985-vintage inner door in front of them.

The whole setup is remarkably ad hoc – simple black cloth-covered pin-boards, ordinary tables with switchboard-style phones, individual message trays and pigeonholes made of unpainted wood. 

Photos: There are two known images: one, shown in photocopied form on the tour, shows the middle two rows of cabins with the top row just in shot; the other shows the two plotting tables on Level 1 below the cabins. I was also very pleased to discover (I believe for the first time) IWM photo D 106284, showing a civilian UKWMO scientist plotting nuclear bursts on a map using a radiac slide rule. If anyone recognises the communications kit to the right of his drawing board, please comment below. This shot is a perfect match for the scenes in the film.

Description: Kelvedon Hatch differed from all other R4 bunkers in having a tunnel that emerged into Level 1 rather than Level 3. Note that the cage opposite the main blast doors, today filled with random weapons as though an armoury, actually housed the 1950s electrical transformer for the site (the plans label it as such, and photos of other ROTOR bunkers still show the plant in place). Much is made of the ‘disguised’ above-ground bungalow, but this was a real, functioning military-style guardroom like any other, with toilets, offices, and an armoury (there’s a plan here that also appears in McCamley’s book). The armoury later became a decontamination room (this room’s door, behind the outer blast door, is still so labelled). All ROTOR bunkers throughout their various phases of use had a perimeter chain-link fence patrolled by armed guards, and the actual radar stations were effectively military barracks with massive rotating radar dishes. The above-ground structures may have been intended to be low-profile, and certainly were more so at Kelvedon Hatch than elsewhere (since KH never had radar arrays and had the advantage of some tree cover), but they were not disguised.

As a command centre for a short-lived RAF radar network, the site was built around a central Operations Room ‘well’ three floors deep, with plotting boards at the bottom, a tall ‘tote’ mission control board at the front, and glazed, angled control ‘cabins’ wrapped around the back. The central room on Level 1 housed the two large plotting tables and the support posts for the tote. The remainder of the floor comprised two large rooms – ‘Apparatus’ at left and the main plant room at right. The two plant rooms remained much the same throughout all three phases.

Moving up to Level 2, we again find the glazed ops ‘well’ in the middle, surrounded by a corridor with office spaces either side, and beyond this a maze of partition walls defining the toilet blocks (the women’s toilet being larger than the men’s) and a number of offices/rooms of varying size. By far the largest is an open plan space at top left. Unfortunately, neither the plans nor any other source reveal what the purpose of any of these might have been. We have a bit more information on the top floor (Level 3), which again has the ops well, but without the corridor around it. Instead there is a ring of self-contained offices. At left we have two large unidentified rooms with a thin partition wall and, on the other side of a more substantial wall running from the stairwell to the bottom wall, a row of squarish offices with a corridor running past them. At right we have some labels, denoting the Women’s Royal Air Force (WRAF) rest room and its RAF equivalent.

Phase 2 – Sub-Regional Headquarters/Sub-Regional Control

1963 – 1966: SRHQ for Region 4, ‘East’

1966 – 1985: S-RC 4.2 (Region 4, Control 2)

NB UKWMO/ROC Sector HQ retained until 1971 only

Plans: displayed (until they eventually fall apart) in the access tunnel, via Alamy stock images. Undated but believed to be ca.1965. Sadly I didn’t take my own photo so a lot of the room numbers and labels are not legible on the image we have. 

Photos/Film: None, however see ‘The Hole in the Ground’ above – the room may have changed a lot ca.1965 but the operations carried out, the kit, and the personnel involved would have been much the same.

Description

This was the first phase intended for ‘continuity of government’ in the event of a Third World War. The most significant change for this phase was that the ROTOR Operations ‘well’ was floored over. Rooms were also reconfigured throughout in keeping with the bunker’s new role. The men’s lavatory was expanded, and a corridor created along the back of the toilets with partitioned offices along it (those labelled with a purpose are ‘Tape Room’ and ‘PBX’, a type of telephone exchange). New rooms were built on the left side of the floor for the telephone exchange, and offices on Level 2 were knocked through to create a large Conference Room. Other changes were more significant. Notably, sleeping accommodation was installed; one dedicated 20-bunk dormitory on Level 3 and another 20 or so bunk beds in other areas, including a full row of beds along the access tunnel (so accommodation for around 80 people). Next door to the dorm was an equivalently sized room labelled ‘DEPTS’; the first dedicated working space for representatives of different government departments. The former RAF and WRAF rest rooms were converted into a single large unisex ‘Canteen Rest Room’ with an adjoining kitchen capable of providing hot meals. The centre of Level 1 (rooms 101 and 103) remained in use as ‘Sector’ (presumably an operations room), but around a third of the room (101) was walled off as a sleeping area with bunks along the back wall. Four small rooms on Level 2 were also designated ‘Sector’, with other rooms allocated to ‘Military’ and ‘Fire’ (one large room), Scientists, and Civil Defence Operations. The BBC studio was installed in its current location (albeit in a different configuration) next to the GPO (General Post Office) ‘frame room’. The plant rooms (the main room being 102) remained unchanged. All told, Level 1 was already close to its Phase 3 incarnation in terms of usage and layout, if not in detail, but Levels 2 and 3 remained quite different.

Phase 3 – RGHQ

1985 – 1992: Regional Government Headquarters, Metropolitan Region (RGHQ 5.1)

Plans: Only Level 1 has been reproduced online. To see this final layout (albeit in Phase 4 ‘trim’) you can also check out various tours on YouTube (including this short official one), and the complete plans were published in Judy Cowan’s 1994 pamphlet ‘Kelvedon Hatch Secret Bunker’. Better yet, visit the bunker yourself if you can; this article is primarily intended for people like me who visited but didn’t get a full picture of the site.

Photos: A whole series of Imperial War Museum record shots taken on decommissioning in 1992. These show how very sparse the place was, contrary to modern claims that the site is as the government left it. All we see are tables, chairs and telephones. 

Description

Internal walls were again rebuilt, this time in handily identifiable blockwork construction. Basically, if you see breeze-blocks, you’re looking at a 1980s alteration. The entrance to the access tunnel was redesigned to incorporate a new generator room into the near end of the tunnel (the exhaust stacks for the diesel engines are still visible to the left of the bungalow, and have changed in design since the 1962 film). The complete row of bunks was replaced by a few fold-down bed frames attached to the wall (presumably for a guardroom ‘watch’, since there was now space for everyone on Level 3). The area at the far end of the access tunnel was enlarged and fitted with sliding blast doors on tracks to create a ‘Home Office Radio Room’. New generator cabinets and a siren point were installed just inside the main blast doors (it’s not clear whether the transformer outside them remained in place). The UKWMO Sector HQ relocated to the newly expanded Group HQ building at Horsham, and ‘Sector’ became office/operational space for “Uniformed Services” (outfitted with tables, chairs, lockers and desk phones), but the Communications Centre or COMCEN (also on Level 1) remained part of the wider Emergency Communication Network (ECN) with access to UKWMO/ROC data. The entrance used by the UKWMO team back in 1962 in ‘The Hole in the Ground’ and the door opposite it were walled off to create a corridor bypassing the new, smaller main room (visitors now enter the room in the middle through a door marked ‘no entry’) and a small admin room (now one of several small cinemas for visitors). The science team were moved from Level 2 down to Level 1, next to a much smaller BBC studio. On Level 2, the formerly closed-in offices across the middle third of the room were knocked through to create one large central open-plan office. Note that the various painted wooden signs around the walls in this room are not original, as shown by the 1992 photos of the space and, in the 1997 shots, their initial suspension from the walls on string loops. Later they were (regrettably) screwed into the walls. These seem too specific in terms of content and style to be made up, but if they came from another site I don’t know which one or what era.

Closed offices remained around the perimeter of the floor but were also reconfigured. Where there had been only one office/bedroom for a government official, a new corridor (down which the modern tour proceeds) accommodated three such spaces: 203 for the Regional Commissioner, 204 for the Principal Officer, and 205 for the Prime Minister (although it’s far from clear that the PM would ever have used this room). The row of offices that visitors see when they emerge from said corridor are wholly new for this phase – their predecessors having been ripped out. The first office at left is the ‘Secretariat’ (206), with a small room within this (207) housing a typing pool (previously located on Level 3). The adjoining rooms (208 and 209) were a truncated version of the Conference Room and, in the corner (now an event room) the Information Room.

Up on Level 3 the existing dormitory space was doubled, taking up the former government department space (there now being much more space for them on Level 2). This was divided by sex, men in the right hand room (302 – now an off-limits meetings/event space) with a small room next door for ‘Drivers’ (likely now a store for the giftshop), and women in the two rooms next door (301 and 308). Another two (also adjoining) male dorm rooms (309 and 311) were established on the other side of the toilets and Sick Bay (310 – seen here before stripping out and dummying-up). This area is partly correct today, but 309 is, with some artistic licence, dressed as an emergency operating theatre. In reality this room would have been fitted out with bunks and lockers in anticipation of use. We know this because we have a photo taken from 309 looking into 311. As cramped as the recreated dorms in the bunker are today, they are nothing compared to the reality, and the beds and lockers there today are not the same as they originally were.

Phase 4 – Visitor Attraction

1994: Sold back to the landowning family.

1995 – Present: Opened to the public.

Plans: represented by the fire evacuation map located in the bunker. This is identical to the Phase 3 plan but with two differences. The first is a new metal staircase in the upper right corner of Level 2, allowing access from the open plan office straight up to the room outside the canteen on Level 3 (marked ‘Common Room’ on the Phase 3 plan); although not included on the 1985 plan, this is clearly original to the RGHQ phase. The other is the exit tunnel sadly bored through the wall of the Common Room to comply with fire regulations.

Photos: Another series from Historic England who documented the site as a new visitor attraction in 1997, by which time a lot of the present embellishments had been made but without the additional clutter and the ravages of time that we see today.

Description: As part of the decommissioning process, all of the original furniture and communications equipment (other than some of the telephone exchange) was removed. The original 1950s transformer room was also stripped out, but the rest of the plant remained. By 1997 the bunker was increasingly ‘dressed’ with surplus Cold War-era furniture, equipment, artefacts (most of the phones are marked up with the station crest of RAF St Athan) and some basic museum-style diorama displays ranging from individual dummies in wigs to an attempt at a ‘Threads’-style post-apocalypse household. A recent addition is a large-scale Spitfire model that has for some reason been suspended over the plotting table in the former Operations Room. 

Note that some of the room door labels (which slide into universal holders affixed to the doors – the actual room numbers are permanent) seem to have been moved over the years. The label for room 202, ‘Government Departments’ is currently fitted to the door for room 201, which per the plans should be ‘Common Services’ (and is loosely interpreted as such today with racks of stationery). Male and female dorm rooms 302 and 301 have had their labels swapped for some reason.

Conclusion

What you see at Kelvedon Hatch bunker today is therefore mostly a very…busy take on the final operational (RGHQ) phase. I will say again: this is an incredible place, it just needs some analysis to make full sense of it. The current attraction conveys the general sense of what all three phases were about; it’s just not clear how these fit within the chronology and the fabric of the building itself. If it were up to me (clearly it isn’t), I would thin out the accumulated clutter and remove all of the shop dummy diorama displays. Remember – none of the furniture or props there now is original to the site. I’d depict Phase 3 throughout, and choose one room to clearly demarcate and curate as a museum to interpret the first two phases, with an introductory display on Civil Defence.

Bibliography

The bunker is mentioned in a number of published works and websites, nearly all of which are superficial in their treatment of the site or outright wrong. I recommend:

Clarke, Bob. 2005. ‘Four Minute Warning: Britain’s Cold War’. The History Press.

Cowan, Judy. 1994. ‘The Kelvedon Hatch Secret Bunker’. 

McCamley, Nick. 2002. ‘Cold War Secret Nuclear Bunkers: The Passive Defence of the Western World During the Cold War’. Pen & Sword.

‘Stinking Rich’?

I’ve just watched a fascinating lecture from funerary and art historian Dr. Julian Litten on burial vaults. I learned a lot and greatly enjoyed it, but was very surprised to hear him recite the old chestnut that the smell of decaying bodies under church floors led to the expression ‘stinking rich’. This is just not true, as phrases.org.uk relates:

The real origin of stinking rich, which is a 20th-century phrase, is much more prosaic. ‘Stinking’ is merely an intensifier, like the ‘drop-dead’ of drop-dead gorgeous, the ‘lead pipe’ of lead pipe cinch or, more pertinent in this case, the ‘stark-raving’ of stark-raving mad. It has been called upon as an intensifier in other expressions, for example, ‘stinking drunk’ and ‘we don’t need no stinking badges’

The phrase’s real derivation lies quite a distance from Victorian England in geography as well as in date. The earliest use of it that I can find in print is in the Montana newspaper The Independent, November 1925:

He had seen her beside the paddock. “American.” Mrs Murgatroyd had said. “From New England – stinking rich”.

-phrases.org.uk

However, I thought I’d check, and I did find an earlier cite, from ‘V.C.: A Chronicle of Castle Barfield and of the Crimea’, by David Christie Murray (1904, p. 92):

“I’m stinking rich – you know – disgraceful rich.”

Nothing earlier than that however. So I would add to the explanation at phrases.org.uk and say that it’s more of an expression of disgust; someone is so rich that it’s obscene and figuratively ‘stinks’. If we had any early 19th century or older cites, I’d grant that it could have been influenced in some way by intramural burial, but this was rare by the turn of the 20th century and lead coffins had been a legal requirement since 1849. Litten suggests that unscrupulous cabinetmakers might omit the lead coffin, leading to ‘effluvia’, but even then I can’t imagine that was common as it would be obvious when it had happened and whose interment was likely to have caused it, resulting in complaints and most likely reburial. 

Litten also repeated a version of the myth of Enon Chapel, a story I’ve been working on and which will be forthcoming here, but added a claim that I have yet to come across: that the decomposition gases from the crypt below were so thick that they made the gas lighting in the chapel above ‘burn brighter’. I don’t know where this comes from, and it hardly seems plausible. Dr Waller Lewis, the UK’s first Chief Medical Officer, wrote on the subject in an 1851 article in The Lancet entitled ‘On the Chemical and General Effects of the Practice of Interment in Vaults and Catacombs’. Lewis stated that: “I have never met with any person who has actually seen coffin-gas inflame”, and reported that experiments had been carried out and “in every instance it extinguished the flame”. This makes sense, since it was not decomposition gases per se (and certainly not ‘miasma’, as was often claimed at the time) that made workers light-headed or pass out in vaults – it was the absence of oxygen and the high concentration of CO2. Hence reports of candles going out rather than burning brighter.

Unfortunately, even the best of us are not immune to a little BS history. It was nonetheless a privilege to hear Dr. Litten speak.

Werewolves = Serial Killers?

Beast of Gevaudan (1764). Not to Scale (Wikimedia Commons)

When I last wrote on the Beast of Gévaudan, I said that I couldn’t rule out the involvement of one or more human murderers whose actions could have been conflated with several wolves (and possibly other wild animals) killing French peasants between 1764 and 1767. I meant that literally; the Beast was a craze, and it’s perfectly possible that one or more of the victims was in fact murdered. We have no evidence for that, of course, and certainly not for the claim, sometimes made, that the whole thing was the work of a serial killer. This was recently repeated in this otherwise very good video from YouTube channel ‘Storied’ (part two of two; both parts feature the excellent Kaja Franck, whom I was fortunate to meet at a conference some years ago). Meagan Navarro of the horror (fiction) website Bloody Disgusting states the following:

“The Beast of Gevaudan or the Werewolf of Dole, these were based on men that were serial killers and slaughtered, and folklore was a means of exploring and understanding those acts by transforming them into literal monsters.”

The ‘werewolf’ of Dole does indeed appear to have been a deluded individual who thought he was able to transform into a wolf, and was convicted as such. However, this is not the case for Gévaudan, which is a well-documented piece of history, not some post-hoc rationalisation for a series of murders, as she implies. The various attacks that comprise the story were widely reported at the time and in some detail (albeit embellishments were added later). No-one at the time suspected an ordinary person of the actual killings, and sightings consistently refer to a large beast, sometimes detailing how the kills were made. The idea of a human somehow being in control of the Beast was mooted at the time, as was the werewolf of folklore, but never a straightforward murderer. Of course, the idea of the serial killer was unknown until the late 19th century, and it wasn’t long after this that a specious connection was made. In 1910 French gynaecologist Dr. Paul Puech published an essay, ‘La Bête du Gévaudan’, followed in 1911 by another titled ‘Qu’était la bête du Gévaudan?’. Puech’s thin evidence amounted to:

1) The victims being of the same age and gender as those of Jack the Ripper and Joseph Vacher. In fact, women and children (including boys) were not only more physically vulnerable to attack generally, but were also the members of the shepherding families whose job it was to bring the sheep in at the end of the day. This is merely a coincidence.

2) Decapitation and needless mutilation. The latter is pretty subjective, especially if the animal itself might be rabid (plenty were) and therefore attacking beyond the needs of hunger alone. The relevance of decapitation depends upon a) whether this really happened and b) whether a wolf or wolves would be capable of it. Some victims were found to have been decapitated, something that these claimants assert is impossible for a wolf to achieve. I can’t really speak to how plausible this is, although tearing limbs from sizable prey animals is easily done, and if more than one animal were involved I’ve little doubt that they could remove a head if they wished. So, did these decapitations actually take place? Jay Smith’s ‘Monsters of the Gévaudan: The Making of a Beast’ relays plenty of reports of heads being ripped off. However, details of these reports themselves militate against the idea of a human killer. Take Catherine Valy, whose skull was recovered some time after her death. Captain of dragoons Jean-Baptiste Duhamel noted that “judging by the teeth marks imprinted [on the skull], this animal must have terrifying jaws and a powerful bite, because this woman’s head was split in two in the way a man’s mouth might crack a nut.” Duhamel, like everyone else involved, believed that he faced a large and powerful creature (whether natural or supernatural), not a mere human. Despite the intense attention of the local and national French authorities, not to mention the population at large, no suggestion was ever made nor any evidence ever found of a human murderer, and the panic ended in 1767 after several ordinary wolves were shot.

3) Similar deaths in 1765 in the Soissonnais, which he for some reason puts down to a copycat killer rather than, you know, more wolves. This reminds me of the mindset of many true crime writers; come up with your thesis and then go cherry-picking and misrepresenting the data to fit.

At the very least then, this claim is speculative, and should not be bandied about as fact (indeed, the YouTube channel should really have queried it). So, if not a serial killer, then what? French historian Emmanuel Le Roy Ladurie argues that the Beast was a local legend blown out of proportion to a national level by the rise of print media. Jean-Marc Moriceau reports 181 wolf killings through the 1760s, which puts into context the circa 100 killings over three years in one region of France. That is statistically remarkable, but within the capability of the country’s wolf population to achieve, especially given the viral and environmental pressures from rabies and the Little Ice Age respectively that Moriceau cites. If we combine these two takes, we get close to the truth, I think. ‘The’ Beast most likely consisted of some unusually violent attacks carried out by more than one wolf or packs of wolves, confabulated and exaggerated as the work of one supernatural beast, before ultimately being pinned by the authorities on several wolves: three shot by François Antoine in 1765, and another supposedly ‘extraordinary’ (yet actually ordinary-sized) animal shot by Jean Chastel in 1767.

Milk in First, or Last Part 2: a Tempest in a Teapot

Poster created by the amazing Geof Banyard (islandofdoctorgeof.co.uk) for a
2016 mock ‘Tea Referendum’

This is Part 2 of a very long article – see here for part 1.

Clearly the majority of modern-day advocates (including all those YouTube commenters that I mentioned last time) aren’t aspiring members of the upper-middle or upper classes, or avid followers of etiquette, so why does this schism among tea-drinkers still persist? No doubt the influence of snobs like Nancy Mitford, Evelyn Waugh et al persists, but for most it seems to boil down (ha) to personal preference. This has not calmed the debate any, however. Both sides, now mostly composed of middle-class folk such as myself, argue with equal certainty that their way is the only right way. Is Milk In First (MIF)/Milk In Last (MIL) really now a ‘senseless meme’ (as Professor Markman Ellis believes; see Part 1) – akin to the ‘big-endians’ and ‘little-endians’ of ‘Gulliver’s Travels’? Is there some objective truth to the two positions that underpins all this passion and explains why the debate has outlived class differences? Is there a way to reconcile or at least explain it so that we can stop this senseless quibbling? Well, no. We’re British. Quibbling and looking down on each other are two of our chief national pastimes. However, another of those pastimes is stubbornness, so let’s try anyway…

Today’s MILers protest that their method is necessary in order to be able to judge the strength of the tea by its colour. Yet clearly opinions on this differ and, as I showed in the video, sufficiently strong blends – and any amount of experience in making tea – render this moot. If you do ‘under milk’, you can add more to taste (although as I also noted, you might argue that this makes MIL the more expedient method). As we’ve seen with George Orwell vs the Tea & Coffee Trade, the colour/strength argument is highly subjective. Can science help us in terms of which way around is objectively better? Perhaps, although there are no rigorous scientific studies. In the early 2000s the Royal Society of Chemistry and Loughborough University both came out in favour of MIF. The RSC press release gives the actual science:

“Pour milk into the cup FIRST, followed by the tea, aiming to achieve a colour that is rich and attractive…Add fresh chilled milk, not UHT milk which contains denatured proteins and tastes bad. Milk should be added before the tea, because denaturation (degradation) of milk proteins is liable to occur if milk encounters temperatures above 75°C. If milk is poured into hot tea, individual drops separate from the bulk of the milk and come into contact with the high temperatures of the tea for enough time for significant denaturation to occur. This is much less likely to happen if hot water is added to the milk.”
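
As an aside, the arithmetic behind the RSC’s scalding argument is easy to sketch. The following is my own back-of-envelope illustration (not the RSC’s), assuming tea and milk have roughly equal specific heat capacities and using made-up but plausible volumes; the point is that the fully mixed cup ends up at the same temperature either way, and the difference is all in what the first drop of milk experiences:

```python
# My own illustrative heat-balance sketch, not the RSC's. Assumes tea and
# milk have equal specific heat capacities, so the mixture temperature is
# just a mass-weighted average (ml treated as g). Volumes are made up.

def mix_temp(m_tea, t_tea, m_milk, t_milk):
    """Equilibrium temperature of a fully mixed cup (simple heat balance)."""
    return (m_tea * t_tea + m_milk * t_milk) / (m_tea + m_milk)

TEA, MILK = 95.0, 5.0  # tea just off the boil vs fridge-cold milk, in Celsius

# The fully mixed cup ends up the same temperature whichever goes in first:
print(f"Fully mixed cup: {mix_temp(200, TEA, 30, MILK):.0f} C")           # ~83 C

# MIL: the first 1 ml drop of milk meets 200 ml of hot tea and is driven
# almost to tea temperature, well above the ~75 C denaturation threshold.
print(f"First drop, milk-in-last: {mix_temp(200, TEA, 1, MILK):.0f} C")   # ~95 C

# MIF: the first 1 ml splash of tea meets 30 ml of cold milk, which warms
# only gradually and never sees those extreme transient temperatures.
print(f"First splash, milk-in-first: {mix_temp(1, TEA, 30, MILK):.0f} C") # ~8 C
```

In other words, MIL briefly exposes individual drops of milk to near-boiling tea, exactly as the press release describes, even though the bulk temperature of the cup is unaffected by the order.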

It also transpires that an actual international standard (ISO 3103:1980, preceded by several British Standards going back to 1975) was agreed for tea-making way back in 1980, and this too dictated that tea should be added to milk “…in order to avoid scalding the milk”. This would obviously only happen if the tea is particularly hot, and indeed the standard includes a ‘milk last’ protocol in which the tea is kept below 80°C. Perhaps those favouring MIL simply like their tea cooler, and so don’t run into the scalding problem? This might explain why I prefer the taste of the same tea, with the same milk, made MIF from a pot rather than MIL with a teabag in a cup… I like my tea super hot. So, the two methods can indeed taste different; a fact proven by a famous statistical experiment (famous among statisticians, anyway; a commenter had to point this out for me) in which a lady was able to tell whether a cup of tea had been made MIF or MIL eight times out of eight.

“Already, quite soon after he had come to Rothamsted, his presence had transformed one commonplace tea time to an historic event. It happened one afternoon when he drew a cup of tea from the urn and offered it to the lady beside him, Dr. B. Muriel Bristol, an algologist. She declined it, stating that she preferred a cup into which the milk had been poured first. “Nonsense,” returned Fisher, smiling, “Surely it makes no difference.” But she maintained, with emphasis, that of course it did. From just behind, a voice suggested, “Let’s test her.” It was William Roach who was not long afterward to marry Miss Bristol. Immediately, they embarked on the preliminaries of the experiment, Roach assisting with the cups and exulting that Miss Bristol divined correctly more than enough of those cups into which tea had been poured first to prove her case.”

-Fisher-Box, 1978, p. 134.
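
For the curious, the ‘eight times out of eight’ figure is impressive for a simple combinatorial reason. Here is a quick sketch of my own, based on the eight-cup, four-of-each design that Fisher later formalised in ‘The Design of Experiments’: a guesser who knows that four of the eight cups are milk-first simply has to pick which four, and only one of the possible selections is entirely correct.

```python
# Chance of a perfect score in the 'lady tasting tea' experiment by luck
# alone (my illustrative sketch; assumes the eight-cup, four-of-each design
# with the taster told the design in advance).
from math import comb

arrangements = comb(8, 4)         # 70 ways to pick which 4 of 8 cups were MIF
p_all_correct = 1 / arrangements  # only one selection is entirely right
print(f"{arrangements} possible answers; p(perfect score) = {p_all_correct:.4f}")  # ~0.0143
```

A roughly 1-in-70 chance, which is why Fisher could treat her perfect score as good evidence that she really could taste the difference.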

This of course doesn’t help with which is objectively better, but does suggest that one side may be ‘right’. However, as well as temperature, the strength of the brew may also make a difference here, one that might explain why this debate rumbles on with no clear victor. A commenter on a Guardian article explains the chemistry of a cup of tea:

“In the teacup, two chemical reactions take place which alter the protein of the milk: denaturing and tanning. The first, the change that takes place in milk when it is heated, depends only on temperature. ‘Milk-first’ gradually brings the contents of the cup up from fridge-cool. ‘Milk-last’ rapidly heats the first drop of milk almost to the temperature of the teapot, denaturing it to a greater degree and so developing more ‘boiled milk’ flavour. The second reaction is analogous to the tanning of leather. Just as the protein of untanned hide is combined with tannin to form chemically tough collagen/tannin complexes, so in the teacup, the milk’s protein turns into tannin/casein complexes. But there is a difference: in leather every reactive point on the protein molecule is taken up by a tannin molecule, but this need not be so in tea. Unless the brew is strong enough to tan all the casein completely, ‘milk-first’ will react differently from ‘milk-last’ in the way it distributes the tannin through the casein. In ‘milk-first’, all the casein tans uniformly; in ‘milk-last’ the first molecules of casein entering the cup tan more thoroughly than the last ones. If the proportions of tannin to casein are near to chemical equality, ‘which-first’ may determine whether some of the casein escapes tanning entirely. There is no reason why this difference should not alter the taste.”

-Dan Lowy, Sutton, Surrey (The Guardian, Notes & Queries, 2011).

Both the scalding and the denaturation/tanning explanations are referenced in the popular science book ‘Riddles in Your Teacup’ (p. 90), the authors having consulted physicists (who favour a temperature explanation) and chemists (who of course take a chemistry-based view) on this question. I also found this interesting explanation, from an 1870 edition of the Boston Journal of Chemistry, of tannins in tea and how milk reacts with them to change the taste of the tea. This supports the idea, as does the tea-tasting lady’s ability to tell the difference, that MIF and MIL can result in a different taste. Needless to say, people have different palates and preferences and it’s likely that some prefer their tannins left unchecked (black tea), fully suppressed (milk in first), or partly mitigated (milk in last). However, if your tea is strong enough, the difference in taste will be small or even non-existent, as the tannins will shine through regardless and you’ll just get the additional flavour of the milk (perhaps tasting slightly boiled?). My preferred blend (Betty’s Tea Room blend) absolutely does retain this astringent taste regardless of which method I use or even how hot the water is (even if I do prefer it hot and MIF!).

So, the available scientific advice does favour MIF, for what it’s worth, which interestingly bears out those early reports of upper class tea aficionados and later ‘below stairs’ types who both preferred it this way. However, the difference isn’t huge, and depends on what temperature the tea is when you hit it with the milk, how strong the brew is, and what blend you use. It’s a bit like unevenly steamed milk in a latte or cappuccino; it’s fine, but it’s nicer when it has that smooth, foamed texture and hasn’t been scalded by the wand. The bottom line, which is what I was trying to say in my YouTube response, is that it’s basically just fashion/habit and doesn’t much matter either way (despite the amount I’ve said and written about it!) – to which I can now add the taste preference and chemical change aspects. If you pour your tea at a lower temperature, the milk won’t get so denatured/scalded, and even this small difference won’t occur. Even if you pour it hot, you might not mind or notice the difference in taste. As for the historical explanation of cracking cups, it’s probably bollocks, albeit rooted in the fact of substandard British teaware. As readers of this blog will know by now, these neat origin stories generally do turn out to be made up after the fact, and the real history is more nuanced. This story is no different.

To recap: when tea was introduced in the 17th century, most people drank it black. By the early 19th century, milk became widely used as an option that you added to the poured tea, like sugar. Later that century, some found that they preferred putting the milk in first, and were thought particular for doing so (marking the start of the Great Tea Schism). Aside from being a minority individual preference, most upper class hostesses continued to serve MIL (as Hartley recommended) because, when hosting numbers of fussy guests, serving the tea first and offering milk, sugar and lemon to add to their own taste was simply more practical and efficient. Guests cannot object to their tea if they are responsible for putting it together, and this way, everyone gets served at the same time. Rather than outline this practical justification, the 1920s snobs chose to frame the debate in terms of class, setting in stone MIL as the only ‘proper’ way. This, probably combined with a residual idea that black tea was the default and milk was something that you added, and doubtless also as a result of the increasing dominance of tea-making using a teabag and mug/cup (where MIL really is the only acceptable method), left a lot of non-upper class people with the idea that MIL was objectively correct. Finally, as the class system broke down, milk first or last became the (mostly) good-natured debate that it is today.

All of this baggage (especially, in my view, the outdated class snobbery aspect) should be irrelevant to how we take our tea today, and should have been even back then. As far back as 1927, J.B. Priestley used his Saturday Review column to mock the snobs who criticised “…those who pour the milk in first…”. The Duke of Bedford’s ‘Book of Snobs’ (1965, p. 42) lamented the ongoing snobbery over ‘milk in first’ as “…stigmatizing millions to hopelessly inferior status…”. Today, upper class views on what is correct or incorrect are roundly ignored by the majority, and most arguing in favour of MIL would not claim that you should do it because the upper class said so, and probably don’t even realise that this is where it came from. Even high-end tea-peddlers Fortnum & Mason note that you should “…pour your tea as you please”. Each person’s view on this is a product of family custom and upbringing, social class, and individual preference; a potent mixture that leads to some strong opinions! Alternatively, like me, you drink your tea sufficiently strong that it barely matters (note I said ‘barely’ – I remain a heretical MIFer for life). What does matter, of course, in tea as in all things, is knowing what you like and how to achieve it, as this final quote underlines:

…no rules will insure good tea-making. Poeta nascitur non fit,* and it may be said similarly, you are born a tea-maker, but you cannot become one.

-Samuel Kneeland, About Making Tea (1870). *A Latin expression meaning that poets are born and not made.

References (for both Parts):

Bedford, John Robert Russell, George Mikes & Nicholas Bentley. 1965. The Duke of Bedford’s Book of Snobs. London: P. Owen.

Bennett, Arnold. 1912. Helen With the High Hand. London: Chapman and Hall.

Betjeman, John. 1956. ‘How to Get on in Society’ in Noblesse Oblige: An Enquiry into the Identifiable Characteristics of the English Aristocracy (Nancy Mitford, ed.). London: Hamish Hamilton.

Boston Journal of Chemistry. 1870. ‘Familiar Science – Leather in the Tea-Cup’. Vol. V, No. 3.

Ellis, Markman. 2017. ‘“Milk in First”: a miffy question’. Queen Mary University of London History of Tea Project. 11 May 2017. <https://qmhistoryoftea.wordpress.com/2017/05/11/milk-in-first-a-miffy-question/>.

Ferguson, Jonathan. 2020. ‘You’re Doing It Wrong: Tea and Milk with Jonathan Ferguson’. Forgotten Weapons. YouTube video. 15 April 2020. <https://www.youtube.com/watch?v=8VCRFVMpSc8>.

Ferguson, Jonathan & McCollum, Ian. 2020. ‘Jonathan Reacts to the First Day Kickstarter for his Book’. Forgotten Weapons. YouTube video. 13 April 2020. <https://www.youtube.com/watch?v=1XO4VgkC_JE>.

Fisher-Box, Joan. 1978. R.A. Fisher: The Life of a Scientist. New York, NY: Wiley.

Fortnum & Mason. n.d. ‘How to Make the Perfect Cup of Tea.’ The Journal | #Fortnums. <https://www.fortnumandmason.com/fortnums/the-perfect-cup-of-tea>.

Ghose, Partha & Dipankar Home. 1994. Riddles in your Teacup. Boca Raton, FL: CRC Press.

Guanghua (光華). 1995. Press Room of the Information Bureau of the Executive Yuan of the Republic of China. Vol. 20, Nos. 7–12.

Hartley, Florence. 1860. The Ladies’ Book of Etiquette, and Manual of Politeness: A Complete Handbook. Boston, MA: Cottrell.

Johnson, Dorothea. 2002. Tea & Etiquette. Washington, D.C.: Capital.

Kneeland, Samuel. 1870. ‘About Making Tea’. Good Health. Vol. 1, No. 12.

Lowy, Dan. 2011. ‘Notes and Queries’. The Guardian. Digital edition: <https://www.theguardian.com/notesandqueries/query/0,,-1400,00.html>.

Manley, Jeffrey. 2016. ‘Milk in First.’ The Evelyn Waugh Society. 17 November 2016. <https://evelynwaughsociety.org/2016/milk-in-first/>.

Orwell, George. 1946. ‘A Nice Cup of Tea.’ London Evening Standard. Available at <https://orwell.ru/library/articles/tea/english/e_tea>.

Rice, Elizabeth Emma. 1884. Domestic Economy. London: Blackie & Son.

Royal Society of Chemistry. 2003. ‘How to Make a Perfect Cup of Tea.’ Press Release. <https://web.archive.org/web/20140811033029/http:/www.rsc.org/pdf/pressoffice/2003/tea.pdf>.

Smith, Matthew. 2018. ‘Should milk go in a cup of tea first or last?’ YouGov. 30 July 2018. <https://yougov.co.uk/topics/food/articles-reports/2018/07/30/should-milk-go-cup-tea-first-or-last/>.

Waugh, Evelyn. 1956. ‘An Open Letter to the Honble Mrs Peter Rodd (Nancy Mitford) On a Very Serious Subject’ in Noblesse Oblige: An Enquiry into the Identifiable Characteristics of the English Aristocracy (Nancy Mitford, ed.). London: Hamish Hamilton.


Milk in First, or Last? Part 1: a Storm in a Teacup?

Poster created by the amazing Geof Banyard (islandofdoctorgeof.co.uk) for a 2016 mock ‘Tea Referendum’

The Short Version: Pouring tea (from a teapot) with the milk in the cup first was an acceptable, if minority, preference regardless of class until the 1920s, when upper class tea drinkers decided that it was something that only the lower classes did. It does affect the taste but whether in a positive or negative way (or whether you even notice/care) is strictly a matter of preference. So, if we’re to ignore silly class-based snobbery, milk-in-first remains an acceptable alternative method. Unless you are making your tea in a mug or cup with a teabag, in which case, for the love of god, put the milk in last, or you’ll kill the infusion process stone dead.

This article first appeared in a beautifully designed ‘Tea Ration’ booklet produced by Headstamp Publishing for Kickstarter supporters of my book (Ferguson, 2020). Now that these lovely people have had their books (and booklets) for a while, I thought it time to unleash a slightly revised version on anyone else who might care! It’s a long read, so I’ll break it into two parts (references in Part 2, now added here, for those interested)…

Part 1: The History

Like many of my fellow Britons, I drink an enormous amount of tea. By ‘tea’, I mean tea as drunk in Britain, the Republic of Ireland and to a large extent in the Commonwealth. This takes the form of strong blends of black leaves, served hot with (usually) milk and (optionally) sugar. I have long been aware of the debate over whether to put the milk into the cup first or last, and that passions can run pretty high over this (as in all areas of tea preference). For a long time, however, I did not grasp just how strong these views really were. That changed when I read comments on a video (Ferguson & McCollum, 2020) made to support the launch of my book ‘Thorneycroft to SA80: British Bullpup Firearms 1901 – 2020’. This showed brewed tea being poured into a cup already containing milk, which caused a flurry of mock (and perhaps some genuine) horror in the comments section. Commenters were overwhelmingly in favour of putting milk in last (henceforth ‘MIL’) and not the other way around (‘milk in first’ or ‘MIF’). This is superficially supported by a 2018 survey in which 79% of participants agreed with MIL (Smith, 2018). However, this survey was seriously flawed in not specifying the use of a teapot or individual mug/cup as the brewing receptacle. Very few British/Irish-style tea drinkers would ever drop a teabag in on top of milk, as the milk soaks into the bag, preventing most of the leaves from infusing into the hot water. Most of us these days only break out the teapot (and especially the loose-leaf tea, china cups, tea-tray etc) on special occasions, and it takes a conscious effort to try the milk in first.

Regardless, anecdotally at least, it does seem that a majority would still argue for MIL even when using a teapot. This might seem only logical; tea is the drink, milk is the additive. The main justifications given were the alleged difficulty of judging the colour, and therefore the strength, of the mixture, and an interesting historical claim that only working class people in the past had put milk in first, in order to protect their cheap porcelain cups. The practicalities seemed to be secondary to some idea of an objectively ‘right’ way to do it, however, with many expressing mock (and perhaps, in some cases, genuine) horror at MIF. This vehement reaction drove me to investigate, and I came to the tentative conclusion that there was a strong social class influence, releasing a follow-up video in which I acknowledged this received wisdom (Ferguson, 2020). I also demonstrated making a cup of perfectly strong tea using MIF, thus empirically proving the colour/strength argument wrong – given a suitably strong blend and brew, of course. The initial source that I found confirmed the modern view on the etiquette of tea making and the colour justification. This was ‘Tea & Etiquette’ (1998, pp. 74-75), written by American Dorothea Johnson. Johnson warns ‘Don’t put the milk in before the tea because then you cannot judge the strength of the tea by its color…’

And:

‘…don’t be guilty of this faux pas…’

Johnson then lists ‘Good Reasons to Add Milk After the Tea is Poured into a Cup’, as follows:

  • The butler in the popular 1970s television program Upstairs, Downstairs kindly gave the following advice to the household servants who were arguing about the virtues of adding milk before or after the tea is poured: “Those of us downstairs put the milk in first, while those upstairs put the milk in last.”
  • Moyra Bremner, author of Enquire Within Upon Modern Etiquette and Successful Behaviour, says, “Milk, strictly speaking, goes in after the tea.”
  • According to the English writer Evelyn Waugh, “All nannies and many governesses… put the milk in first.”
  • And, by the way, Queen Elizabeth II adds the milk in last.

Unlike the video comments, which did not directly reference social class, this assessment practically drips with snobbery, thinly veiled with the practical but subjective justification that one cannot judge the colour (and hence strength) of the final brew as easily. Still, it pointed toward the fact that there really was a broadly acknowledged ‘right’ way, which surprised me. The handful of other etiquette and household books that I found in my quick search seemed to agree, and in a modern context there is no doubt that MIL has come to be seen as the ‘proper’ way. However, as I suspected, there is definitely more to it – milk last wasn’t always the prescribed method, and it isn’t necessarily the best way to make your ‘cuppa’ either…

So, to the history books themselves… I spent longer than is healthy perusing ladies’ etiquette books and, as it turns out, only the modern ones assert that milk should go in last or imply that there is any kind of class aspect to be borne in mind. In fact, Elizabeth Emma Rice in her Domestic Economy (1884, p. 139) states confidently that:

“…those who make the best tea generally put the sugar and milk in the cup, and then pour in the hot tea.”

I checked all of the etiquette books that I could find electronically, regardless of time period, and only one other is prescriptive with regards to serving milk with tea. This is The Ladies’ Book of Etiquette, and Manual of Politeness: A Complete Handbook, by Florence Hartley (1860, pp. 105–106), which passes no judgement on which is superior, but recommends for convenience that cups of tea are poured and passed around to be milked and sugared to taste. This may provide a practical underpinning to the upper-class preference for MIL; getting someone’s cup of tea wrong would be a real issue at a gathering or party. You either had to ask how the guest liked it and have them ‘say when’ to stop pouring the milk, which would take time and be fraught with difficulty or, more likely, you simply poured a cup for each and let them add milk and sugar to their taste. This also speaks to how tea was originally drunk (as fresh coffee still is) – black, with milk if you wanted it. A working-class household was less likely to host large gatherings or have a need to impress people. There it was more convenient to add roughly the same amount of milk to each cup, and then fill the rest with tea. As a guest, you would simply be given a cup made as the host deemed fit, or perhaps be asked how you liked it. If thought sufficiently fussy, you might be told to make it yourself! In any case, Hartley was an American writing for Americans, and I found no pre-First World War British guides that actually recommended milk in last. As noted, the only British guide that did cover it (Rice) actually favours milk in first.

Much of my research aligns with that presented in a superb article by Professor Markman Ellis of the Queen Mary University History of Tea Project. Ellis agrees that the ‘milk in first or last’ thing was really about the British class system – which helps explain why I found so few pre-Second World War references to the dilemma. His thesis boils down (ha!) to a crisis of identity among the post-First World War upper class. In the 1920s, the wealth gap between the growing middle class and the upper class was narrowing. This was the heyday of the so-called nouveau riche – the new rich – who had the money but, as the ‘true’ upper class saw it, not the ‘breeding’. They could pose as upper class, but could never be upper class. Of course, that very middle class would, in its turn, come to look down on aspiring working-class people (think Hyacinth Bucket from the British situation comedy Keeping Up Appearances). In any case, if you cared about appearances and reputation among your upper-class peers or felt threatened by social mobility, you had to have a way of setting yourself apart from the ‘lower classes’. Arbitrary rulesets that included MIL were a way to do this. Ellis cites several pre-First World War sources (dating back as far as 1846) which comment on how individuals took their tea. These suggest that MIF was thought somewhat unusual, but the sources pass no judgement and don’t mention that this was thought to be a working class phenomenon. Adding milk to tea was, logically enough, how it was originally done – black tea came first and milk was an addition. Additions are added, after all. As preferences developed, some would have tried milk first and liked it. This alone explains why those adding milk first might seem eccentric, but not ‘wrong’ per se. In fact, by the first decade of the 20th century, MIF had become downright fashionable, at least among the middle class, as Helen with the High Hand (1910) shows. In this novel, the titular Helen states that an “…authority on China tea…” should know that “…milk ought to be poured in first. Why, it makes quite a different taste!” It was this presumptuous attitude (how dare the lower classes tell us how to make our tea?!) that influenced the upper-class rejection of the practice in later decades.

This brings us back to Ellis’s explanation of where the practice originated, and also explains the context of Evelyn Waugh’s comments as reported by Johnson. These come from Waugh’s contribution to Noblesse Oblige – a book that codified the latest habits of the English aristocracy. Ellis dismisses the authors and editor as snobs of the sort that originated and perpetuated the tea/milk meme. However, in fairness to Waugh, he does make clear that he’s talking about the view of some of his peers, not necessarily his own, and even gives credit to MIF ‘tea-fanciers’ for trying to make the tea taste better. His full comments are as follows:

All nannies and many governesses, when pouring out tea, put the milk in first. (It is said by tea-fanciers to produce a richer mixture.) Sharp children notice that this is not normally done in the drawing-room. To some this revelation becomes symbolic. We have a friend you may remember, far from conventional in other ways, who makes it her touchstone. “Rather MIF, darling,” she says in condemnation.

                             -Waugh, 1956.

Incidentally, I erroneously stated that governesses were ‘working class’ in my original video on this topic. In fact, although nannies often were, the governess was typically of the middle class, or even an impoverished upper-middle or upper class woman. Both roles occupied a space between classes, being neither one nor the other but excluded from ever being truly ‘U’. As a result, they were free to make tea as they thought best. Waugh’s view is not the only tea-related one in the book. Poet John Betjeman also alluded to this growing view that MIF was a lower class behaviour in his long list of things that would mark out the speaker as a member of the middle class:

Milk and then just as it comes dear?

I’m afraid the preserve’s full of stones;

Beg pardon I’m soiling the doileys

With afternoon tea-cakes and scones.

                             -Betjeman, 1956.

Returning to the etiquette books, although the early ones were written for those running an upper-class household, the latter-day efforts like Johnson’s are actually aimed at those aspiring to behave like, or at least fascinated by, the British upper class. This is why Johnson invokes famous posh Britons and even the Queen herself to make her point to her American audience. Interestingly though, Johnson takes Samuel Twining’s name in vain. The ninth-generation member of the famous Twining tea company is in fact an advocate of milk first, and he too thought that MIL came from snobbery:

With a wave of his hand, Mr. Twining dismisses this idea as nonsense. “Of course you have to put the milk in first to make a proper cup of tea.” He surmises that upper-class snobbery about pouring the tea first, had its origins in their desire to show that their cups were pure imported Chinese porcelain.

Guanghua (光華) magazine, 1995, Volume 20, Issues 7-12, p. 19.

Twining goes on to explain his hypothesis that the lower classes only had access to poor quality porcelain that could not withstand the thermal shock of hot liquid, and so had to put the milk in first to protect the cup. Plausible enough, but almost certainly wrong. As Ellis explains in his article;

…tea was consumed in Britain for almost two centuries before milk was commonly added, without damaging the cups, and in any case the whole point of porcelain, other than its beauty, was its thermo-resistance.

Food journalist Beverly Dubrin mentions the theory in her book ‘Tea Culture: History, Traditions, Celebrations, Recipes & More’ (2012, p. 24), but identifies it as ‘speculation’. I could find no historical references to the cracking of teacups until after the Second World War. The claim first appears in a 1947 issue of the American-published (but international in scope) ‘Tea & Coffee Trade Journal’ (Volumes 92-93, p.11), along with yet another pro-MIF comment:

…MILK FIRST in the TEA, PLEASE! Do you pour the milk in your cup before the tea? Whatever your menfolk might say, it isn’t merely ‘an old wives’ tale : it’s a survival from better times than these, when valuable porcelain cups were commonly in use. The cold milk prevented the boiling liquor cracking the cups. Just plain common sense, of course. But there is more in it than that, as you wives know — tea looks better and tastes better made that way.

The only references to cracking teaware that I’ve found were to the teapot itself, into which you’d be pouring truly boiling water if you wanted the best brewing results. Several books mention the inferiority of British ‘soft’ porcelain in the 18th century, made without “access to the kaolin clay from which hard porcelain was made”, as Paul Monod says in his 2009 book ‘Imperial Island: A History of Britain and Its Empire, 1660-1837’. By the Victorian period this “genuine or true” porcelain was only “occasionally” made in Britain, as this interesting 1845 source relates, and remained expensive (whether British or imported) into the 20th century. This has no doubt contributed to the explanation that the milk was put there to protect the cups, even though the pot was by far the bigger worry and there are plenty of surviving soft-paste porcelain teacups today without cracks (e.g. this Georgian example). Of course, it isn’t actually necessary for cracking to be a realistic concern, only that the perception existed, and so we can’t rule it out as a factor. However, that early ‘Tea & Coffee Trade Journal’ mention is also interesting because it omits any reference to social class and implies that this was something that everyone used to do for practical reasons, and is now done as a matter of preference. Likewise, on the other side of the debate, author and Spanish Civil War veteran George Orwell argued in favour of MIL in a piece for the Evening Standard (January 1946) entitled ‘A Nice Cup of Tea’:

…by putting the tea in first and stirring as one pours, one can exactly regulate the amount of milk whereas one is liable to put in too much milk if one does it the other way round.

                             -Orwell, 1946.

This reiterated his earlier advice captured in this wonderful video from the Spanish trenches. However, Orwell acknowledged that the method of adding milk was “…one of the most controversial points of all…” and admitted that “the milk-first school can bring forward some fairly strong arguments.” Orwell (who himself hailed from the upper middle class) doesn’t mention class differences or worries over cracking cups.

By the 1960s people were more routinely denouncing MIF as a working class practice, although even at this late stage there was disagreement. Upper class explorer and writer James Maurice Scott in ‘The Tea Story’ (1964, p. 112) commented:

The argument as to which should be put first into the cup, the tea or the milk, is as old and unsolvable as which came first, the chicken or the egg. There is, I think, a vague feeling that it is Non-U to put the milk in first – why, goodness knows.

It’s important to note that ‘U’ and ‘Non-U’ were shorthand for ‘Upper-Class’ and ‘Non-Upper-Class’, coined by Professor Alan Ross in his 1954 linguistic study and unironically embraced by the likes of Mitford as a way to ‘other’ those that they saw as inferior.

The New Yorker magazine (1965, p. 26) reported a more emphatic advisory (seemingly a trick question!) given to an American visitor to London:

Do you like milk in first or tea in first? You know, putting milk in the cup first is a working-class custom, and tea first is not.

This, then, was the status quo reflected in the British TV programme ‘Upstairs, Downstairs’ in the 1970s, which helped to expose new audiences to the idea that MIF was ‘not the done thing’. Lending libraries and affordable paperback editions afforded easy access to books like Noblesse Oblige. The 1980s then saw the modern breed of etiquette books (like ‘Miss Manners’ Guide to Excruciatingly Correct Behavior’) that rehashed this snobbery for an American audience fascinated with the British upper class. Ironically of course, any American would have been unquestionably ‘Non-U’ to any upper class Brit, just as any working or middle-class Briton would have been. And finally (again covered by Ellis), much like the changing fashion of the extended pinkie finger (which started as an upper class habit and then became ‘common’ when it trickled down to the lower classes – see my article here), the upper class decided that worrying about the milk in your tea was now vulgar. Having caused the fuss in the first place, they retired to their collective drawing room, leaving us common folk to endlessly debate the merits of MIF/MIL…

That’s it for now. Next time: Why does anyone still care about this?

“…few men…would be clever enough to be crows.”

I recently caught up with this Nicola Clayton lecture on corvid intelligence. Well worth a watch, it ends with a very apt quote;

“If men had wings and bore black feathers, Few of them would be clever enough to be crows.”

-Henry Ward Beecher

Unfortunately, as quotes in PowerPoint presentations often are, this is incorrect.

The actual quote is;

“Take off the wings, and put him in breeches, and crows make fair average men. Give men wings, and reduce their smartness a little, and many of them would be almost good enough to be crows.”

Some time into researching the origins of this, I came across this blog post, which correctly identifies that the above is the original wording and that Beecher was indeed its originator. However, taking things a little further, I can confirm that the first appearance of this was NOT ‘Our Dumb Animals’ but rather The New York Ledger. Beecher’s regular (weekly) column in the Ledger was renowned at the time. Unfortunately, I can’t find any 1869 issues of the Ledger online, so I can’t fully pin this one down. Based upon its appearance in ‘Our Dumb Animals’ in May of 1870, and various other references from publications that summer (e.g. this one) to “a recent issue of the Ledger”, it most likely appeared in the Ledger in early 1870. From there it was reprinted in various other periodicals and newspapers including ‘Our Dumb Animals’ (even though that particular reprint doesn’t credit the Ledger as others did).

So how did the incorrect version come about? It was very likely just a misquote – or rather, a series of misquotes and paraphrasings. Even some of the early direct quotes got it wrong. One 1873 reprint drops the word ‘almost’, blunting Beecher’s acerbic wit slightly: saying that many men would be good enough to be crows is kinder than saying that many would be almost good enough. Fairly early on, authors moved to paraphrasing; for example, in 1891’s ‘Collected Reports Relating to Agriculture’ we find:

“…Henry Ward Beecher long ago remarked that if men were feathered out and given a pair of wings, a very few of them would be clever enough to be crows.” 

This appeared almost verbatim over thirty years later in Coburn’s ‘The Behavior of the Crow’ (1923). Two years later, Glover Morrill Allen’s ‘Birds and Their Attributes’ (1925, p.222) gave us a new version:

“…Henry Ward Beecher was correct when he said that if men could be feathered and provided with wings, very few would be clever enough to be Crows!”

It was this form that was repeated from then on, crucially in some cases (such as Bent’s 1946 ‘Life Histories of North American Birds’) with added quotation marks, making it appear to later readers that these were Beecher’s actual words. Interestingly, the earliest occurrence of the wording ‘very few would prove clever enough’ (my emphasis) seems to emerge later, and is credited to naturalist Henry David Thoreau:

“… once said that if men could be turned into birds, each in accordance with his individual capacity, very few would prove clever enough to be Crows.”

-Bulletin of the Massachusetts Audubon Society in 1942 (p.11).

I can find no evidence that Thoreau ever said anything like this, and of course it’s also suspiciously similar to the Beecher versions floating about at the same time (here’s another from a 1943 issue of ‘Nature Magazine’, p. 401). Thus, I suspect, the Thoreau attribution is a red herring, probably a straight-up mistake by a lone author. In any case, relatively few (only eight that I could detect via Google Books) have run with that attribution since, and these can likely be traced back to the MA Audubon Society error.

So, we are seeing here a game of literary ‘telephone’ from the original Beecher tract in 1870 via various misquotes in the 1920s – 1950s that solidified the version that’s still floating around today. Pleasingly, although his wording has been thoroughly mangled, the meaning remains intact. The key difference is that Beecher was using the attributes of the crow to disparage human beings based upon the low opinion that his fellow man then held of corvids. Despite this, Beecher very clearly did respect the intelligence of the bird as much as the 20th century birders who referenced him, and those of us today who also love the corvids. I think it’s important to be reminded that, as his version shows, widespread affection for corvids is a very recent thing. We should never forget how badly we have mistreated them and, sadly, continue to do so in many places.

Time Travel in Avengers: Endgame

A still from Oren Bell’s brilliant interactive timeline for Endgame as a multiverse movie. He disagrees with both writers and directors on the ending – check it out on his site here

With the new time travel-centric Marvel TV series Loki about to debut, I thought it was time (ha) for another dabble in the genre with a look at 2019’s Avengers: Endgame (SPOILERS for those who somehow have yet to see it). To no-one’s surprise, the writers of Endgame opted to wrap up both a 20+ film story arc and a cliffhanger involving the death of half the universe by recourse to that old chestnut of time travel (an old chestnut I love, though!). The film did so in a superficially clever way, comparing itself to, and distancing itself from, (quote) “bullshit” stories like ‘Back to the Future’ and ‘The Terminator’. The more I’ve thought and read about it though, the more I realise that it’s no more scientific in its approach than those movies. “No shit”, I hear you say, but there are plenty of people out there who are convinced that this is superior time travel storytelling, and possibly even that it ‘makes perfect sense’. In reality, although it ends up mostly making sense, this is perhaps more by luck than judgement. I still loved the film, by the way; I’m just interested in how we all ended up convinced that it was ‘good’ (by which I mean consistent and logical) time travel, because it isn’t!

tl;dr – Endgame wasn’t written as a multiverse time travel story – although it can be made to work as one.

Many, myself included, understood Endgame to differ from most time travel stories by working on the basis of ‘multiverse’ theory, in which making some change in the past (possibly even the act of time travel itself) causes the universe to branch. This is a fictional reflection of the ‘Many Worlds’ interpretation of quantum mechanics in which the universe is constantly branching into parallel realities. As no branching per se was shown on camera, I assumed that it was the act of time travel itself that branched reality, landing the characters in a fresh, indeterminate future in which anything is possible. My belief was reinforced by an interview with physicist Sean Carroll, a champion of this interpretation and a scientific advisor on the movie. I was actually really pleased; multiverse time travel is incredibly rare (the only filmed attempt I’m aware of was Corridor Digital’s short-lived ‘Lifeline’ series on YouTube Premium). I’m not really sure why this is but regardless, the idea certainly works for Endgame as time travel is really just a means to an end i.e. getting hold of the Infinity Stones. I wasn’t the only one to assume something along these lines, which is why many were confused as to how the hell Captain America ended up on that bench at the end of the movie. If, as it seemed to, the film worked on branching realities, how could he have been there the whole time? If he wasn’t there the whole time and did in fact come from a branch reality that he’s been living in, how did he get back? Bewildered journalists asked both the writers and the directors (there are two of each) about this and got two different answers. The writers insisted that this was our Cap having lived in our timeline all along, although they later admitted that the directors’ view might also (i.e. instead) be valid, i.e. that he must have lived in a branch reality caused by changes made in the past. W, T, and indeed, F?

There is a good reason for this: the directors’ view is actually a retcon of the movie as written and filmed. Endgame is, at its core, a self-consistent universe in which you can’t alter the past and in which, therefore, time-duplicate Cap was always there. There is a multiverse element but, as we’ll see, this is bolted onto that core mechanic, and not very well, either. Let’s look at the evidence. The writers explain their take in this interview:

“It’s crucial to your film that in your formulation of time travel, changes to the past don’t alter our present. How did you decide this?

MARKUS We looked at a lot of time-travel stories and went, it doesn’t work that way.

McFEELY It was by necessity. If you have six MacGuffins and every time you go back it changes something, you’ve got Biff’s casino, exponentially. So we just couldn’t do that. We had physicists come in — more than one — who said, basically, “Back to the Future” is [bullshit].

MARKUS Basically said what the Hulk says in that scene, which is, if you go to the past, then the present becomes your past and the past becomes your future. So there’s absolutely no reason it would change.”

What these physicists were trying to tell them is that IF time travel to the past were possible, either a) whatever you do, you have already done, so nothing can change, or b) your time travel and/or your actions create a branch reality, so you’re changing the new branch and not your own past. Unfortunately the writers misunderstood what they meant by this and came up with a really weird hybrid approach, which is made clear in a couple of key scenes involving Hulk where the two parallel sets of time-travel rules are explained. As originally written and filmed these formed a single scene, with all the key dialogue delivered by the Ancient One. First, the original version of those famous Hulk lines that they allude to above (for the sake of time/space I won’t repeat the final film versions here):

ANCIENT ONE

Of course, there will be consequences.

HULK

Yes… If we take the stones, we alter time, and we’ll totally screw up our present-day even worse than it already is.

ANCIENT ONE

If you travel to the past from your present, then that past becomes your future, and your former present becomes your past. Therefore it cannot be altered by your new future. 

This is deliberately, comedically obfuscatory, but is really simple if you break it down. All they’re saying is that you may be travelling into the past, but it’s your subjective future. If you could change the past, you’d disallow for your own presence there, because you’d have no reason to travel. In other words, you just can’t change the past, and paradoxes (or Bill & Ted-style games of one-upmanship) are impossible. On the face of it this dictates an immutable timeline; you were always there in the past, doing whatever you did, as in the films ‘Timecrimes’, ‘Twelve Monkeys’, or ‘Predestination’. In keeping with this, the writers claim that Captain America’s travel to the past to be with Peggy is also part of this self-consistent whole. How? We’re coming to that. Most definitely not in keeping, however, is, well, most of the movie. We see the Avengers making overt changes to the past that we’ve already seen in prior movies, notably Captain America attacking his past self. How is this possible given the above rule? If it is possible despite this, how does 2012 Cap magically forget that this happened? The answers to both questions are contained in the next bit of dialogue:

HULK

Then all of this is for nothing.

ANCIENT ONE

No – no no, not exactly. If someone dies, they will always die. Death is… irreversible, but Thanos is not. Those you’ve lost have not died, they’ve been willed out of existence. Which means they can be willed back. But it doesn’t come cheap.

ANCIENT ONE

The Infinity Stones bind the universe together, creating what you experience as the flow of time. Remove one of these stones, this flow splits. Your timeline might benefit, but my new one would definitely not. For every stone that you remove, you create new very vulnerable timelines; millions will suffer. 

In other words, because the Stones are critical to the flow of time, and because later on a Stone is taken, the changes to the past of Steve’s own reality are effectively ‘fixed’, creating a new branch reality in which 2012 Cap does remember fighting himself and the future pans out differently, without changing Steve’s own past. We can try to speculate on what would have happened if the time travellers had made changes to the past and then a Stone hadn’t been taken, but this is unknowable, since every change to what we know happened does get branched. Either the writers are lying to us, they don’t understand their own script, or – somehow – the taking of the Stones is effectively predestined, forming another aspect of the self-consistent universe of the movie. Logically of course, this is, to use the technical quantum mechanical term, bollocks. Events happening out of chronological order in time travel is fine; cause and effect are preserved, just not in the order to which we’re accustomed. However, you don’t get to change the past, then branch reality, then imply that the earlier change is not only retrospectively included in that branch, but is also predestined! This is a case of the cart before the horse; the whole point of branched realities is to allow for change to the past – it should not be possible to make any change prior to this point. The very concept is self-contradictory. If you can’t change the past, you can’t get to the point of taking a Stone to allow for a change to the past. The only way this works is if we accept that you can make changes, but, as per the nonsense Ancient One/Hulk line, your present “…cannot be altered by your new future.” Unfortunately, the writers have established rules and then immediately broken them, in an attempt to avoid the time travel cliche of pulling a Deadpool and stopping the villain in the past, while retaining the past-changing japes of those exact same conventional time travel movies. Recognising that the new branched realities would be left without important artefacts, they then explain how these ‘dark timelines’ are avoided:

HULK

Then we can’t take the stones.

ANCIENT ONE

Yet your world depends on it.

HULK

OK, what if… what if once we’re done we come back and return the stones?

ANCIENT ONE

[Then] the branch will be clipped, and the timeline restored.

Note that this is further evidence of the writers’ vision; if reality branches all the time, there’s no way to actually ‘save’ these timelines – only to create additional, better ones. If reality only branches when a Stone is removed, putting it back ‘clips’ that branch as they explain. Still, on balance this interpretation is seriously flawed and convoluted. Luckily the version of this same scene from the final draft of the script (i.e. what we saw play out) helps us make sense of this mess (albeit not the dark timelines; they are still boned, I’m afraid!):

ANCIENT ONE

At what cost?

The Infinity Stones create the experience you know as the flow of time. Remove one of the stones, and the flow splits.

Now, your timeline might benefit.

My new one…would definitely not.

In this new branch reality, without our chief weapon against the forces of darkness, our world would be overrun…

For each stone you remove, you’ll create a new, vulnerable timeline. Millions will suffer.

(beat)

Now tell me, Doctor. Can your science prevent all that?

ASTRAL BANNER

No. But it can erase it.

Astral Banner reaches in and grabs THE VIRTUAL TIME STONE.

ASTRAL BANNER (CONT’D)

Because once we’re done with the stones, we can return each one to its own timeline. At the moment it was taken. So chronologically, in that reality, the stone never left.

These changes have two significant effects (other than removing the potentially confusing attempt to differentiate being willed out of existence from ‘death’):

1) To move the time travel exposition earlier in the movie to avoid viewers wondering why they can’t just go back and change things. 

To achieve this they added the obvious Hitler comparison (it may not be a coincidence that this was a minor plot point in Deadpool 2!), along with pop culture touchstones to help the audience understand that this isn’t your grandfather’s (ha) time travel and you can’t simply go back and change your own past to fix your present. This works fine and doesn’t affect our interpretation of the movie’s time travel.

2) To de-emphasise the arbitrary nature of the Stones somehow being central to preventing a ‘dark’ timeline by pointing out that they’re essentially a means of defence against evil. 

This is more critical. We go from the Stones binding the universe together, ‘creating what you experience as the flow of time’, to ‘the Infinity Stones create the experience you know as the flow of time’, which I read as moving from them creating time itself to simply creating the timeline that we know (i.e. where the universe has the Stones to defend itself). This provides more room for the interpretation that removing a Stone is simply a major change to the timeline, like any other, that would otherwise disallow for the future we know, and so results in reality branching to a new and parallel alternate future. Still, I really don’t think that improving time travel logic was the main aim here, or even necessarily an aim at all. The wording about how the Stones ‘bind the universe together’ may have been dropped as simply redundant, or possibly to soften the plothole that not only the ‘flow of time’ but also the ‘universe’ are just fine when the Stones all get destroyed in the present-day (2023) of the prime reality. If the filmmakers truly cared about their inconsistent rules, they had the perfect opportunity here to switch to a simple multiverse approach and record a single line of dialogue that would explain it without the need to change anything else. Here’s the equivalent line from Lifeline:

“Look, your fate is certain. Okay? It can’t be undone. Your every action taken is already part of a predetermined timeline and that is why I built the jump box. It doesn’t just jump an agent forward in time, it jumps them to a brand new timeline. Where new outcomes are possible.”

Anyway, back to that head-scratcher of an ending and the writers’ claim that Cap was always there as a time duplicate in his own past. They say this is the case because it’s not associated with the taking of a Stone. I have checked this, and they’re right; it’s the only change to the past that can’t be blamed on a Stone. There’s also no mention in the script (nor the original version of the scene above) of alternate universes being created prior to the taking of a Stone. So, per the writers’ rules, Cap (and not some duplicate from another reality) is indeed living in his own past and not that of a branch reality. This was the intent “from the very first outline” of the movie, notwithstanding the later difference of opinion between the writing and directing teams. To be clear, everyone involved does agree that he didn’t just go back (or back and sideways, if you believe the directors) for his dance raincheck – he stayed there, got married, and had Peggy’s two children. Which inevitably means that Steve somehow had to live a secret life with a secret marriage (maybe he did a ‘Vision’ and used his timesuit as a disguise?) and kissed his own great-niece in Civil War (much like Marty McFly and his mum).

You can still choose to interpret Steve’s ‘retirement’ to his own past as a rewriting of the original timeline that alters Peggy’s future (i.e. who she married, who fathered her kids etc). Alternatively, you can believe the directors that Cap lived his life with the Peggy of a branch reality and returned (off camera!) to the prime reality to hand over the shield. But neither of these fits with the original vision for the movie: that you can’t change your own past, and that reality doesn’t branch unless a Stone is removed. There’s another problem with the writers’ logic here. Cap only gets to the past by having created and then ‘clipped’ all the branching realities. This means that the creation and destruction of these branches also always happened, and is also part of an overarching self-consistent universe. Except that they can’t possibly have always happened, for the reason I’ve already given above; we’ve seen the original timelines before they become branch realities, so we know something has in fact changed, and there can’t be an original timeline for Cap to have ended up in his own past!

Conclusions

So, Endgame as written and even as filmed (according to the writers) is really not the multiverse time travel movie that most of us thought. It’s a weird hybrid approach that you can sort of mash together into a convoluted fixed timeline involving multiple realities – but only sort of. It actually makes less sense than the films that it (jokingly) criticises, and it handwaves away the consequences of time travel. Luckily, it can be salvaged if we overlook the resulting plothole of Captain America’s mysterious off-camera return and follow the interpretation of the directors. That is, that there’s no predestination; the Avengers are making changes, but every significant change (i.e. one that would otherwise change the future, like living a new life in the past with your sweetheart) creates a branch reality – not just messing with Stones. This isn’t perfect; how could it be? It’s effectively a retcon. But it’s easily the better choice overall in my view – it’s the only logical reading. The only serious discrepancy is the remaining emphasis placed upon the significance of the Stones, which I think can be explained by the Ancient One’s overly mystical view of reality. She focuses on the consequences of messing with the Stones simply because she knows how earth-shattering those consequences would be. She doesn’t explicitly rule out other causes of branches. It likely doesn’t matter that the Stones are destroyed in the subjective present of the prime universe, because the ultimate threat she identifies is Thanos, and he’s been defeated, along with the previous threats that the Stones had a hand in, including of course ‘Variant’ Thanos from the 2014 branch (meaning that branch doesn’t have to contend with him and gets its Soul and Power Stones back). Of course, this interpretation has some dark implications: if significant changes create branches, then when Cap travels back to each existing branch to return each stone, reality must be branched again. The Avengers have still created multiple new universes of potential suffering and death without one or more Stones; they’ve just karmically balanced things somewhat by creating a new set of positive branches that have all their Stones. Except for, again, the new Loki branch.

For me, the directors’ approach, whilst imperfect, is the best compromise between logic and narrative. It’s not clear whether they somehow thought this was the case all along, or whether they only recognised the inconsistencies in post-production, or even following the movie’s release. The fact that the writing and directing teams weren’t already on the same page when they were interviewed tells me that, simply, not enough thought went into this aspect of the film. Why should we believe the directors over the writers? Well, the director’s role in the filmmaking process traditionally supersedes that of the writer, shaping both the final product and the audience’s view of it. Perhaps the most famous example is Ridley Scott’s influence on Deckard’s status as a replicant. You can still choose to believe that he is human based on the theatrical cut, but this means ignoring Scott’s own intent as expressed in his later comments and director’s cuts. There’s also the fact that subsequent MCU entries suggest that the Russos’ multiverse model is indeed the right one. Unless Loki is going to be stealing multiple more iterations of the Infinity Stones, the universe is going to get branched simply by him time travelling. If so, this will establish (albeit retroactively) that the Ancient One really was just being specific about the Stones because of the particularly earth-shattering consequences of messing with their past (and the need to keep things simple for a general audience). It would also pretty much establish the Russos’ scenario for Captain America; that he really did live out his life in a branch reality before somehow returning to the prime reality to hand over his mysterious newly made shield (another plothole!) to Sam. Where he went after that, we may never know, but I hear he’s on the moon.

The Muffin Man?

This is an odd one. Some idiot has claimed as fact a stupid joke about the ‘muffin man’ of the children’s song/nursery rhyme actually being an historical serial killer, and some credulous folk (including medium.com) have fallen for it. Snopes have correctly debunked it, yet, despite a total lack of any evidence for it being the case, have only labelled it ‘unproven’. I hope they figure out that this isn’t how history works. The onus is on the claimant to provide a reference. They aren’t going to find a definitive origin for a traditional song like that that would allow the (patently ludicrous) claim to be disproven. It’s moderately endearing that Snopes had to find out via furious Googling that ‘muffin men’ were a real thing. I learned this when I was a child. Maybe it’s a British thing that Americans have lost their cultural memory of. The very concept of the muffin man is very clearly enough to debunk this bollocks on its own. The muffin man was a guy who went door to door selling tasty treats that kids enjoy, not some ‘Slenderman’ bogeyman figure. It would be like suggesting that there was a serial killer called ‘Mr Whippy‘. Anyway, this Jack Williamson guy is just another internet attention-seeker who will hopefully disappear forthwith. As for Snopes, I can’t fault their article, but I suspect their ongoing foray into political fact-checking has made them a little gun-shy of calling things ‘False’ without hard evidence.

Count Cholera 2: Revenge of the Half-Baked Hypothesis

These two get it.
(from https://www.theverge.com/2020/4/20/21227874/what-we-do-in-the-shadows-season-2-hulu-preview)

As I noted in my first post on Marion McGarry’s Dracula=Cholera hypothesis, I’m always wary of criticising ideas that have been filtered through the media (rather than presented first-hand by the author or proponent), because something is almost always missing, lost in translation or even outright misrepresented. So when a kind commenter directed me to this recording of McGarry’s talk on her theory that Bram Stoker’s ‘Dracula’ was inspired by Stoker’s mother’s experience of the early 19th century Sligo cholera outbreak, I felt that I had to listen to it (I never did receive a reply to my request for her article). Now that I have listened, I can confirm that McGarry is reaching bigtime. The talk adds very little to the news reports that I referenced last time and covers much the same ground, including spurious stuff like the novel having the working title of ‘The Undead’ (‘undead’ already being a word as I noted previously). There is some new material however.

Early on McGarry references recent scholarship questioning the received idea that the historical figure of Wallachian ruler Vlad III was the inspiration for the Count and the novel that features him. She is right about this; Stoker did indeed only overlay Vlad’s name and (incorrect) snippets of his biography onto his existing Styrian ‘Count Wampyr’. However, needless to say, just because ‘Dracula’ was not inspired by the historical Vlad III, it does not follow that it/he was inspired by cholera. As I noted before, Stoker did not invent the fictional vampire, and had no need of inspiration to create his own vampire villain. The only argument that might hold weight is that he was inspired to tackle vampirism by his family history. McGarry’s main argument for this hinges on the fact that Stoker did research for his novels in libraries. As noted last time, this actually works against her theory: we have Stoker’s notes, and there is no mention of his having read around cholera in preparation for writing ‘Dracula’. We do, however, have his notes on his actual sources, which were about eastern European folklore; vampires and werewolves. The aspects that Stoker did use, he transplanted almost wholesale; it’s easy to see, for example, which bits he lifted from Emily Gerard. Stoker did not in fact do ‘a great deal’ of reading; he found a couple of suitable books and stopped there. Which is why the only other new bit of information from this talk is also of limited use. McGarry cites this 1897 interview with Stoker, claiming that ‘…the kernel of Dracula was formed by live burials…’ This is not, in fact, what Stoker was asked about. He was asked what the origin of *the vampire myth* was, not what inspired him to take on that source material:

“Is there any historical basis for the legend?”

Stoker, who was no better informed on the true origins of the Slavic vampire than any other novelist, answered:

“It rested, I imagine, on some such case as this. A person may have fallen into a death-like trance and been buried before the time. 

Afterwards the body may have been dug up and found alive, and from this a horror seized upon the people, and in their ignorance they imagined that a vampire was about.”

Yes, this has parallels with cholera victims being buried prematurely, but it is by no means clear that Stoker was thinking of this when he made this response. Certainly, he does not mention it. There is every chance that this is purely coincidence; plenty of others at this time lazily supposed, like Stoker, that vampire belief stemmed from encounters with still-living victims of premature burial, or (apocryphal) stories of scratches on the inside of coffin lids. Stoker’s family connection with premature burial is likely just that – a coincidence. Had he included a scene involving premature burial, or even a mention of it in the novel, McGarry might be onto something.

McGarry tries to compare Stoker’s victims of vampirism with descriptions of cholera patients; lethargy, sunken eyes, a blue tinge to the eyes and skin. Unfortunately the first two fit lots of other diseases, notably tuberculosis, and the third symptom doesn’t actually feature in ‘Dracula’ at all. I have literally no idea why she references it. She also tries to link the blue flames of the novel with German folklore in which ’blue flames emerge from the mouths of plague victims’. I have never heard of this, nor can I find any reference to it. I do know, however, that Stoker took his blue flames from Transylvanian folklore about hidden treasure; taken again from Emily Gerard (Transylvanian Superstitions), confirmed once again by Stoker’s notes. If there is folklore about blue flames and cholera, no reference appears in his notes, and it is most likely coincidence.

In an extension of her commentary that storms preceded both outbreaks (cholera and vampirism), McGarry asserts that the first victim of cholera presented on 11 August – the same date as Dracula’s first British victim in the novel – the evidence being William Gregory Wood-Martin’s 1882 book ‘The History of Sligo County and Town’. This is not correct. Lucy, Dracula’s first victim, does indeed receive her vampire bite on 11 August. Meanwhile, however, back in the real world, the first case of cholera in Sligo was identified on 29 July 1832. Wood-Martin mentions 11 August only because a special board was created on that day, precisely because the first case had happened some time previously. McGarry does admit that 11 August ‘..may have been randomly chosen by Stoker’, yet still lists this piece of ‘evidence’ in her summing up, which is as follows;

‘It cannot be a coincidence that Bram Stoker had Dracula tread a path very similar to cholera; a devastating contagion travelling from the East by ship that people initially do not know how to fight, a great storm preceding its arrival, the ability to travel over land by mist and the stench it emits, avenging doctors and Catholic imagery, the undead rising from the dead, all culminating in the date of august 11th of the first victim.’

Just to take these in order;

  1. ‘It cannot be a coincidence’ It can absolutely be a coincidence. Without any supporting evidence, all of this is literally coincidence. This is not how history works. 
  2. ‘…a path very similar…’ Dracula comes from Eastern Europe. Cholera came from the Far East. Both are east of the British Isles, but the origins of the two contagions are hardly identical. The ship aspect I dealt with last time; this is how people and goods travelled across continents at that time. Not to mention that all of these similarities with cholera are similarities with any disease – and most agree that the idea of the vampire as contagion is a legitimate theme of ‘Dracula’ (indeed, historical belief in vampires has strong ties to disease). There’s nothing special about cholera in this respect. The same goes for the idea of people not knowing how to fight these afflictions; all disease outbreaks require learning or relearning of ways to combat them. One could just as easily claim similarity in that cholera had been fought off previously, and that Van Helsing already knows how to defeat vampires; just not necessarily this one… 
  3. ‘…the ability to travel over land by mist and the stench it emits…’ earlier in the talk McGarry claims that Stoker invokes miasma theory in ‘Dracula’. In fact he doesn’t. Bad smells abound, sure, but the only mention of miasma in the novel is metaphorical (‘as of some dry miasma’) and relates to the earthy smell of Dracula’s Transylvanian soil, not to the Count himself. Nowhere is smell cited as a means of transmission, only biting. ‘Dracula’, famously, takes a very modern, pseudoscientific approach to vampirism, even if its counter is good old-fashioned Catholic Christianity. Speaking of which…
  4. ‘…avenging doctors and Catholic imagery…’ as noted, ‘Dracula’ does treat vampirism as a disease, so the doctors follow from that; not bearing any specific relation to cholera in Ireland. As for Catholic imagery, well, Stoker was from that background, and Dracula is very overtly Satanic in the novel. You need religion to defeat evil just as you need medicine to defeat disease. Once again, this is coincidence.
  5. ‘…the undead rising from the dead…’ how else does one get the undead? Seriously though, I’ve dealt with this above and previously. Stoker chose to write about vampires, therefore the undead feature. 
  6. ‘…all culminating in the date of August 11th of the first victim.’ Except it doesn’t, as I’ve shown.

I make that a 0/6. The themes identified by McGarry in Stoker’s book stem from his choice of vampires as the subject matter, and his take is shaped by his knowledge, upbringing, etc etc. Was he in part inspired to choose vampires because of family history with cholera? Maybe; it’s plausible as one of many influences (not, as McGarry implies, the main or sole influence) but there is literally zero evidence for it.