‘Stinking Rich’?

I’ve just watched a fascinating lecture from funerary and art historian Dr. Julian Litten on burial vaults. I learned a lot and greatly enjoyed it, but was very surprised to hear him recite the old chestnut that the smell of decaying bodies under church floors led to the expression ‘stinking rich’. This is just not true, as phrases.org.uk relates:

The real origin of stinking rich, which is a 20th-century phrase, is much more prosaic. ‘Stinking’ is merely an intensifier, like the ‘drop-dead’ of drop-dead gorgeous, the ‘lead pipe’ of lead pipe cinch or, more pertinent in this case, the ‘stark-raving’ of stark-raving mad. It has been called upon as an intensifier in other expressions, for example, ‘stinking drunk’ and ‘we don’t need no stinking badges’

The phrase’s real derivation lies quite a distance from Victorian England in geography as well as in date. The earliest use of it that I can find in print is in the Montana newspaper The Independent, November 1925:

He had seen her beside the paddock. “American.” Mrs Murgatroyd had said. “From New England – stinking rich”.

However, I thought I’d check, and I did find an earlier cite, from ‘V.C.: A Chronicle of Castle Barfield and of the Crimea’, by David Christie Murray (1904, p. 92):

“I’m stinking rich – you know – disgraceful rich.”

Nothing earlier than that, however. So I would add to the explanation at phrases.org.uk and say that it’s more of an expression of disgust; someone is so rich that it’s obscene and figuratively ‘stinks’. If we had any early 19th century or older cites, I’d grant that it could have been influenced in some way by intramural burial, but this was rare by the turn of the 20th century and lead coffins had been a legal requirement since 1849. Litten suggests that unscrupulous cabinetmakers might omit the lead coffin, leading to ‘effluvia’, but even then I can’t imagine that was common, as it would be obvious when it had happened and whose interment was likely to have caused it, resulting in complaints and most likely reburial.

Litten also repeated a version of the myth of Enon Chapel (a story I’ve been working on, and which will be forthcoming here), but added a claim that I have yet to come across: that the decomposition gases from the crypt below were so thick that they made the gas lighting in the chapel above ‘burn brighter’. I don’t know where this comes from and it hardly seems plausible. Dr Waller Lewis, the UK’s first Chief Medical Officer, wrote on the subject in an 1851 article in The Lancet entitled ‘ON THE CHEMICAL AND GENERAL EFFECTS OF THE PRACTICE OF INTERMENT IN VAULTS AND CATACOMBS’. Lewis stated that: “I have never met with any person who has actually seen coffin-gas inflame” and reported that experiments had been carried out and “in every instance it extinguished the flame”. This makes sense, since it was not decomposition gases per se (and certainly not ‘miasma’ as was often claimed at the time) that made workers light-headed or pass out in vaults – it was the absence of oxygen and high concentration of CO2 that caused this. Hence reports of candles going out rather than inflaming more.

Unfortunately, even the best of us are not immune to a little BS history. It was nonetheless a privilege to hear Dr. Litten speak.

Werewolves = Serial Killers?

Beast of Gévaudan (1764). Not to Scale (Wikimedia Commons)

When I last wrote on the Beast of Gévaudan, I said that I couldn’t rule out the involvement of one or more human murderers whose actions could have been conflated with several wolves and possibly other wild animals killing French peasants between 1764 and 1767. I meant that literally; the Beast was a craze, and it’s perfectly possible that one or more of the victims was in fact murdered by a human. We have no evidence for that, of course, and certainly not for the claim, sometimes made, that the whole thing was the work of a serial killer. This was recently repeated in this otherwise very good video from YouTube channel ‘Storied’ (part two of two; both parts feature the excellent Kaja Franck, whom I was fortunate to meet at a conference some years ago). Meagan Navarro of the horror (fiction) website Bloody Disgusting states the following:

“The Beast of Gevaudan or the Werewolf of Dole, these were based on men that were serial killers and slaughtered, and folklore was a means of exploring and understanding those acts by transforming them into literal monsters.”

The ‘werewolf’ of Dole does indeed appear to have been a deluded individual who thought he was able to transform into a wolf and was convicted as such. However, this is not the case for Gévaudan, which is a well-documented piece of history, not some post-hoc rationalisation for a series of murders as she implies. The various attacks that comprise the story were widely reported at the time and in some detail (albeit embellishments were added later). No-one at the time suspected an ordinary person of the actual killings, and sightings consistently refer to a large beast, sometimes detailing how the kills were made. The idea of a human being in control of the Beast somehow was mooted at the time, as was the werewolf of folklore, but never a straightforward murderer. Of course, the idea of the serial killer was unknown until the late 19th century, and it wasn’t long after this that a specious connection was made. In 1910 the French gynaecologist Dr. Paul Puech published an essay on the subject (‘La Bête du Gévaudan’), followed in 1911 by another titled ‘Qu’était la bête du Gévaudan?’. Puech’s thin evidence amounted to:

1) The victims being of the same age and gender as those of Jack the Ripper and Joseph Vacher. In fact, women and children (including boys) were not only more physically vulnerable to attack generally, but were also the members of the shepherding families whose job it was to bring the sheep in at the end of the day. This is merely a coincidence.

2) Decapitation and needless mutilation. The latter is pretty subjective, especially if the animal itself might be rabid (plenty were) and therefore attacking beyond the needs of hunger alone. The relevance of decapitation depends upon whether a) this really happened and b) whether a wolf or wolves would be capable of it. Some victims were found to have been decapitated, something that these claimants assert is impossible for a wolf to achieve. I can’t really speak to how plausible this is, although tearing limbs from sizable prey animals is easily done and if more than one animal were involved I’ve little doubt that they could remove a head if they wished. So, did these decapitations actually take place? Jay Smith’s ‘Monsters of the Gévaudan: The Making of a Beast’ relays plenty of reports of heads being ripped off. However, details of these reports themselves militate against the idea of a human killer. Take Catherine Valy, whose skull was recovered some time after her death. Captain of dragoons Jean-Baptiste Duhamel noted that “judging by the teeth marks imprinted [on the skull], this animal must have terrifying jaws and a powerful bite, because this woman’s head was split in two in the way a man’s mouth might crack a nut.” Duhamel, like everyone else involved, believed that he faced a large and powerful creature (whether natural or supernatural), not a mere human. Despite the intense attention of the local and national French authorities, not to mention the population at large, no suggestion was ever made nor any evidence ever found of a human murderer, and the panic ended in 1767 after several ordinary wolves were shot.

3) Similar deaths in 1765 in the Soissonnais, which he for some reason puts down to a copycat killer rather than, you know, more wolves. This reminds me of the mindset of many true crime writers: come up with your thesis and then go cherry-picking and misrepresenting the data to fit.

At the very least, then, this claim is speculative, and should not be bandied about as fact (in fact, the YouTube channel should really have queried the claim). So, if not a serial killer, then what? French historian Emmanuel Le Roy Ladurie argues that the Beast was a local legend blown out of proportion to a national level by the rise of print media. Jean-Marc Moriceau reports 181 wolf killings through the 1760s, which puts into context the circa 100 killings over three years in one region of France. That is statistically remarkable, but within the capability of the country’s wolf population to achieve, especially given the viral and environmental pressures from rabies and the Little Ice Age respectively that Moriceau cites. If we combine these two takes, we get close to the truth, I think. ‘The’ Beast most likely actually consisted of some unusually violent attacks carried out by more than one wolf or packs of wolves that were confabulated and exaggerated as the work of one supernatural beast, before ultimately being pinned by the authorities on several wolves: three shot by François Antoine in 1765 and another, supposedly ‘extraordinary’ (yet actually ordinary-sized), shot by Jean Chastel in 1767.

Milk in First, or Last Part 2: a Tempest in a Teapot

Poster created by the amazing Geof Banyard (islandofdoctorgeof.co.uk) for a 2016 mock ‘Tea Referendum’

This is Part 2 of a very long article – see here for part 1.

Clearly the majority of modern-day advocates (including all those YouTube commenters that I mentioned last time) aren’t aspiring members of the upper-middle or upper classes or avid followers of etiquette, so why does this schism among tea-drinkers still persist? No doubt the influence of snobs like Nancy Mitford, Evelyn Waugh et al. persists, but for most it seems to boil down (ha) to personal preference. This has not calmed the debate any, however. Both sides, now mostly made up of middle-class folk such as myself, argue with equal certainty that their way is the only right way. Is Milk In First (MIF)/Milk In Last (MIL) really now a ‘senseless meme’ (as Professor Markman Ellis believes; see Part 1) – akin to the ‘big-endians’ and ‘little-endians’ of ‘Gulliver’s Travels’? Is there some objective truth to the two positions that underpins all this passion and explains why the debate has transcended class differences? Is there a way to reconcile or at least explain it so that we can stop this senseless quibbling? Well, no. We’re British. Quibbling and looking down on each other are two of our chief national pastimes. However, another of those pastimes is stubbornness, so let’s try anyway…

Today’s MILers protest that their method is necessary in order to be able to judge the strength of the tea by its colour. Yet clearly opinions on this differ and, as I showed in the video, sufficiently strong blends – and any amount of experience in making tea – render this moot. If you do ‘under-milk’, you can add more to taste (although as I also noted, you might argue that this makes MIL the more expedient method). As we’ve seen with George Orwell vs the Tea & Coffee Trade Journal, the colour/strength argument is highly subjective. Can science help us in terms of which way around is objectively better? Perhaps, although there are no rigorous scientific studies. In the early 2000s the Royal Society of Chemistry and Loughborough University both came out in favour of MIF. The RSC press release gives the actual science:

“Pour milk into the cup FIRST, followed by the tea, aiming to achieve a colour that is rich and attractive…Add fresh chilled milk, not UHT milk which contains denatured proteins and tastes bad. Milk should be added before the tea, because denaturation (degradation) of milk proteins is liable to occur if milk encounters temperatures above 75°C. If milk is poured into hot tea, individual drops separate from the bulk of the milk and come into contact with the high temperatures of the tea for enough time for significant denaturation to occur. This is much less likely to happen if hot water is added to the milk.”
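To put some rough numbers on the temperatures involved, here is a minimal back-of-the-envelope sketch in Python. The volumes and starting temperatures are my own illustrative assumptions (they are not from the RSC), and both liquids are treated as water-like, with heat lost to the cup ignored:

```python
# Illustrative assumption: 200 ml of tea at 95 °C meets 30 ml of fridge-cold milk at 5 °C.
tea_ml, tea_c = 200, 95
milk_ml, milk_c = 30, 5

# Treating both liquids as water-like, the final bulk temperature is just a
# volume-weighted average, and it is the same whichever ingredient goes in first.
final_c = (tea_ml * tea_c + milk_ml * milk_c) / (tea_ml + milk_ml)
print(f"Final cup temperature: {final_c:.1f} °C")  # ~83 °C either way
```

The order only matters for the transient: pouring milk into hot tea briefly exposes individual drops to something close to 95°C, well above the ~75°C denaturation threshold the RSC cites, whereas pouring tea onto the milk only ever warms it gradually towards that ~83°C bulk figure.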

It also transpires that an actual international standard (ISO 3103:1980, preceded by several British Standards going back to 1975) was agreed for tea-making way back in 1980, and this too dictated that tea should be added to milk “…in order to avoid scalding the milk”. This would obviously only happen if the tea is particularly hot, and indeed the standard includes a ‘milk last’ protocol in which the tea is kept below 80°C. Perhaps those favouring MIL simply like their tea cooler and so don’t run into the scalding problem? This might explain why I do prefer the taste of the same tea, with the same milk, made MIF from a pot, rather than MIL with a teabag in a cup… I like my tea super hot. So, the two methods can indeed taste different: a fact demonstrated by a famous statistical experiment (famous among statisticians, at least; a commenter had to point this out for me) in which a lady was able to tell whether a cup of tea had been made MIF or MIL eight times out of eight.

“Already, quite soon after he had come to Rothamsted, his presence had transformed one commonplace tea time to an historic event. It happened one afternoon when he drew a cup of tea from the urn and offered it to the lady beside him, Dr. B. Muriel Bristol, an algologist. She declined it, stating that she preferred a cup into which the milk had been poured first. “Nonsense,” returned Fisher, smiling, “Surely it makes no difference.” But she maintained, with emphasis, that of course it did. From just behind, a voice suggested, “Let’s test her.” It was William Roach who was not long afterward to marry Miss Bristol. Immediately, they embarked on the preliminaries of the experiment, Roach assisting with the cups and exulting that Miss Bristol divined correctly more than enough of those cups into which tea had been poured first to prove her case.”

-Fisher-Box, 1978, p. 134.
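Incidentally, the arithmetic behind that experiment is simple enough to sketch. Assuming the textbook version of Fisher’s design (eight cups, four made each way, with the taster asked to pick out the four milk-first cups; the anecdote above doesn’t spell out these details), the chance of a perfect score by pure guesswork is:

```python
from math import comb

# Under the null hypothesis that the lady is simply guessing, every way of choosing
# which 4 of the 8 cups were 'milk in first' is equally likely.
ways = comb(8, 4)      # 70 possible selections
p_perfect = 1 / ways   # probability of identifying all 8 cups correctly by luck

print(ways, p_perfect)  # 70 0.014285714285714285 -> roughly a 1-in-70 fluke
```

That 1-in-70 figure is why a clean sweep was taken as convincing evidence that she really could taste the difference.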

The experiment of course doesn’t help with which is objectively better, but does suggest that one side may be ‘right’. However, as well as temperature, the strength of the brew may also make a difference here, one that might explain why this debate rumbles on with no clear victor. A commenter on a Guardian article explains the chemistry of a cup of tea:

“IN THE teacup, two chemical reactions take place which alter the protein of the milk: denaturing and tanning. The first, the change that takes place in milk when it is heated, depends only on temperature. ‘Milk-first’ gradually brings the contents of the cup up from fridge-cool. ‘Milk-last’ rapidly heats the first drop of milk almost to the temperature of the teapot, denaturing it to a greater degree and so developing more ‘boiled milk’ flavour. The second reaction is analogous to the tanning of leather. Just as the protein of untanned hide is combined with tannin to form chemically tough collagen/tannin complexes, so in the teacup, the milk’s protein turns into tannin/casein complexes. But there is a difference: in leather every reactive point on the protein molecule is taken up by a tannin molecule, but this need not be so in tea. Unless the brew is strong enough to tan all the casein completely, ‘milk-first’ will react differently from ‘milk-last’ in the way it distributes the tannin through the casein. In ‘milk-first’, all the casein tans uniformly; in ‘milk-last’ the first molecules of casein entering the cup tan more thoroughly than the last ones. If the proportions of tannin to casein are near to chemical equality, ‘which-first’ may determine whether some of the casein escapes tanning entirely. There is no reason why this difference should not alter the taste.”

-Dan Lowy, Sutton, Surrey (The Guardian, Notes & Queries, 2011).

Both the scalding and the denaturation/tanning explanations are referenced in the popular science book ‘Riddles in Your Teacup’ (p. 90), the authors having consulted physicists (who favour a temperature explanation) and chemists (who of course take a chemistry-based view) on this question. I also found this interesting explanation, from an 1870 edition of the Boston Journal of Chemistry, of tannins in tea and how milk reacts with them to change the taste of the tea. This supports the idea, as does the tea-tasting lady’s ability to tell the difference, that MIF and MIL can result in a different taste. Needless to say, people have different palates and preferences and it’s likely that some prefer their tannins left unchecked (black tea), fully suppressed (milk in first), or partly mitigated (milk in last). However, if your tea is strong enough, the difference in taste will be small or even non-existent, as the tannins will shine through regardless and you’ll just get the additional flavour of the milk (perhaps tasting slightly boiled?). My preferred blend (Betty’s Tea Room blend) absolutely does retain this astringent taste regardless of which method I use or even how hot the water is (even if I do prefer it hot and MIF!).

So, the available scientific advice does favour MIF, for what it’s worth, which interestingly bears out those early reports of upper class tea aficionados and later ‘below stairs’ types who both preferred it this way. However, the difference isn’t huge and depends what temperature the tea is when you hit it with the milk, how strong the brew is, and what blend you use. It’s a bit like unevenly steamed milk in a latte or cappuccino; it’s fine, but it’s nicer when it has that smooth, foamed texture and hasn’t been scalded by the wand. The bottom line, which is what I was trying to say in my YouTube response, is that it’s basically just fashion/habit and doesn’t much matter either way (despite the amount I’ve said and written about it!) – to which I can now add the taste preference and chemical change aspects. If you pour your tea at a lower temperature, the milk won’t get so denatured/scalded, and even this small difference won’t occur. Even if you pour it hot, you might not mind or notice the difference in taste. As for the historical explanation of cracking cups, it’s probably bollocks, albeit rooted in the fact of substandard British teaware. As readers of this blog will know by now, these neat origin stories generally do turn out to be made up after the fact, and the real history is more nuanced. This story is no different.

To recap: when tea was introduced in the 17th century most people drank it black. By the early 19th century milk became widely used as an option that you added to the poured tea, like sugar. Later that century, some found that they preferred putting the milk in first and were thought particular for doing so (marking the start of the Great Tea Schism). Aside from being a minority individual preference, most upper class hostesses continued to serve MIL (as Hartley recommended) because when hosting numbers of fussy guests, serving the tea first and offering milk, sugar and lemon to add to their own taste was simply more practical and efficient. Guests cannot object to their tea if they are responsible for putting it together, and this way, everyone gets served at the same time. Rather than outline this practical justification, the 1920s snobs chose to frame the debate in terms of class, setting in stone MIL as the only ‘proper’ way. This, probably combined with a residual idea that black tea was the default and milk was something that you added, and also doubtless as a result of the increasing dominance of tea-making using a teabag and mug/cup (where MIL really is the only acceptable method), left a lot of non-upper class people with the idea that MIL was objectively correct. Finally, as the class system broke down, milk first or last became the (mostly) good-natured debate that it is today.

All of this baggage (especially, in my view, the outdated class snobbery aspect) should be irrelevant to how we take our tea today, and should have been even back then. As far back as 1927, J.B. Priestley used his Saturday Review column to mock the snobs who criticised “…those who pour the milk in first…”. The Duke of Bedford’s ‘Book of Snobs’ (1965, p. 42) lamented the ongoing snobbery over ‘milk in first’ as “…stigmatizing millions to hopelessly inferior status…”. Today, upper class views on what is correct or incorrect are roundly ignored by the majority, and most arguing in favour of MIL would not claim that you should do it because the upper class said that you should, and probably don’t even realise that this is where it came from. Even high-end tea-peddlers Fortnum & Mason note that you should “…pour your tea as you please”. Each person’s view on this is a product of family custom and upbringing, social class, and individual preference; a potent mixture that leads to some strong opinions! Alternatively, like me, you drink your tea sufficiently strong that it barely matters (note I said ‘barely’ – I remain a heretical MIF for life). What does matter, of course, in tea as in all things, is knowing what you like and how to achieve it, as this final quote underlines:

…no rules will insure good tea-making. Poeta nascitur non fit,* and it may be said similarly, you are born a tea-maker, but you cannot become one.

-Samuel Kneeland, About Making Tea (1870). *A Latin expression meaning that poets are born and not made.

References (for both Parts):

Bedford, John Robert Russell, George Mikes & Nicholas Bentley. 1965. The Duke of Bedford’s Book of Snobs. London: P. Owen.

Bennett, Arnold. 1912. Helen With the High Hand. London: Chapman and Hall.

Betjeman, John. 1956. ‘How to Get on in Society’ in Noblesse Oblige: An Enquiry into the Identifiable Characteristics of the English Aristocracy (Nancy Mitford, ed.). London: Hamish Hamilton.

Boston Journal of Chemistry. 1870. ‘Familiar Science – Leather in the Tea-Cup’. Vol. V, No. 3.

Ferguson, Jonathan. 2020. ‘You’re Doing It Wrong: Tea and Milk with Jonathan Ferguson’. Forgotten Weapons. YouTube video. 15 April 2020. <https://www.youtube.com/watch?v=8VCRFVMpSc8>.

Ferguson, Jonathan & McCollum, Ian. 2020. ‘Jonathan Reacts to the First Day Kickstarter for his Book’. Forgotten Weapons. YouTube video. 13 April 2020. <https://www.youtube.com/watch?v=1XO4VgkC_JE>.

Fisher-Box, Joan. 1978. R.A. Fisher: The Life of a Scientist. New York, NY: Wiley.

Fortnum & Mason. ‘How to Make the Perfect Cup of Tea.’ The Journal | #Fortnums. <https://www.fortnumandmason.com/fortnums/the-perfect-cup-of-tea>.

Ghose, Partha & Dipankar Home. 1994. Riddles in your Teacup. Boca Raton, FL: CRC Press.

Guanghua (光華). 1995. Press Room of the Information Bureau of the Executive Yuan of the Republic of China. Vol. 20, Nos. 7–12.

Hartley, Florence. 1860. The Ladies’ Book of Etiquette, and Manual of Politeness: A Complete Handbook. Boston, MA: Cottrell.

Johnson, Dorothea. 2002. Tea & Etiquette. Washington, D.C.: Capital.

Ellis, Markman. 2017. ‘“Milk in First”: a miffy question’. Queen Mary University of London History of Tea Project. 11 May. <https://qmhistoryoftea.wordpress.com/2017/05/11/milk-in-first-a-miffy-question/>.

Kneeland, Samuel. 1870. ‘About Making Tea’. Good Health. Vol. 1, No. 12.

Lowy, Dan. 2011. ‘Notes and Queries’. The Guardian. Digital edition: <https://www.theguardian.com/notesandqueries/query/0,,-1400,00.html>.

Manley, Jeffrey. 2016. ‘Milk in First.’ The Evelyn Waugh Society. 17 November 2016. <https://evelynwaughsociety.org/2016/milk-in-first/>.

Orwell, George. 1946. ‘A Nice Cup of Tea.’ London Evening Standard. Available at <https://orwell.ru/library/articles/tea/english/e_tea>.

Rice, Elizabeth Emma. 1884. Domestic Economy. London: Blackie & Son.

Royal Society of Chemistry. 2003. ‘How to Make a Perfect Cup of Tea.’ Press Release. <https://web.archive.org/web/20140811033029/http:/www.rsc.org/pdf/pressoffice/2003/tea.pdf>.

Waugh, Evelyn. 1956. ‘An Open Letter to the Honble Mrs Peter Rodd (Nancy Mitford) On a Very Serious Subject’ in Noblesse Oblige: An Enquiry into the Identifiable Characteristics of the English Aristocracy (Nancy Mitford, ed.). London: Hamish Hamilton.

Smith, Matthew. 2018. ‘Should milk go in a cup of tea first or last?’ YouGov. 30 July 2018. <https://yougov.co.uk/topics/food/articles-reports/2018/07/30/should-milk-go-cup-tea-first-or-last/>.


Milk in First, or Last Part 1: a Storm in a Teacup?

Poster created by the amazing Geof Banyard (islandofdoctorgeof.co.uk) for a 2016 mock ‘Tea Referendum’

The Short Version: Pouring tea (from a teapot) with the milk in the cup first was an acceptable, if minority, preference regardless of class until the 1920s, when upper class tea drinkers decided that it was something that only the lower classes did. It does affect the taste but whether in a positive or negative way (or whether you even notice/care) is strictly a matter of preference. So, if we’re to ignore silly class-based snobbery, milk-in-first remains an acceptable alternative method. Unless you are making your tea in a mug or cup with a teabag, in which case, for the love of god, put the milk in last, or you’ll kill the infusion process stone dead.

This article first appeared in a beautiful ‘Tea Ration’ booklet designed by Headstamp Publishing for Kickstarter supporters of my book (Ferguson, 2020). Now that these lovely people have had their books (and booklets) for a while, I thought it time to unleash a slightly revised version on anyone else who might care! It’s a long read, so I’ll break it into two parts (references in Part 2, now added here, for those interested)…

Part 1: The History

Like many of my fellow Britons, I drink an enormous amount of tea. By ‘tea’, I mean tea as drunk in Britain, the Republic of Ireland and to a large extent in the Commonwealth. This takes the form of strong blends of black leaves, served hot with (usually) milk and (optionally) sugar. I have long been aware of the debate over whether to put the milk into the cup first or last, and that passions can run pretty high over this (as in all areas of tea preference). For a long time, however, I did not grasp just how strong these views were until I read comments made on a video (Ferguson & McCollum, 2020) made to support the launch of my book ‘Thorneycroft to SA80: British Bullpup Firearms 1901 – 2020’. This showed brewed tea being poured into a cup already containing milk, which caused a flurry of mock (and perhaps some genuine) horror in the comments section. Commenters were overwhelmingly in favour of putting milk in last (henceforth ‘MIL’) and not the other way around (‘milk in first’ or ‘MIF’). This is superficially supported by a 2018 survey in which 79% of participants agreed with MIL (Smith, 2018). This survey was seriously flawed, however, in not specifying the use of a teapot or individual mug/cup as the brewing receptacle. Very few British/Irish-style tea drinkers would ever drop a teabag in on top of milk, as the milk soaks into the bag, preventing most of the leaves from infusing into the hot water. Most of us these days only break out the teapot (and especially the loose-leaf tea, china cups, tea-tray etc) on special occasions, and it takes a conscious effort to try the milk in first.

Regardless, anecdotally at least it does seem that a majority would still argue for MIL even when using a teapot. This might seem only logical; tea is the drink, milk is the additive. The main justifications given were the alleged difficulty of judging the colour and therefore the strength of the mixture, and an interesting historical claim that only working class people in the past had put milk in first, in order to protect their cheap porcelain cups. The practicalities seemed to be secondary to some idea of an objectively ‘right’ way to do it, however, with many expressing mock (perhaps in some cases, genuine) horror at MIF. This vehement reaction drove me to investigate; I came to the tentative conclusion that there was a strong social class influence and released a follow-up video in which I acknowledged this received wisdom (Ferguson, 2020). I also demonstrated making a cup of perfectly strong tea using MIF, thus empirically proving the colour/strength argument wrong – given a suitably strong blend and brew of course. The initial source that I found confirmed the modern view on the etiquette of tea making and the colour justification. This was ‘Tea & Etiquette’ (1998, pp. 74-75), written by American Dorothea Johnson. Johnson warns ‘Don’t put the milk in before the tea because then you cannot judge the strength of the tea by its color…’

And:

‘ …don’t be guilty of this faux pas…’

Johnson then lists ‘Good Reasons to Add Milk After the Tea is Poured into a Cup’, as follows:

  • The butler in the popular 1970s television program Upstairs, Downstairs kindly gave the following advice to the household servants who were arguing about the virtues of adding milk before or after the tea is poured: “Those of us downstairs put the milk in first, while those upstairs put the milk in last.”
  • Moyra Bremner, author of Enquire Within Upon Modern Etiquette and Successful Behaviour, says, “Milk, strictly speaking, goes in after the tea.”
  • According to the English writer Evelyn Waugh, “All nannies and many governesses… put the milk in first.”
  • And, by the way, Queen Elizabeth II adds the milk in last.

Unlike the video comments, which did not directly reference social class, this assessment practically drips with snobbery, thinly veiled with the practical but subjective justification that one cannot judge the colour (and hence strength) of the final brew as easily. Still, it pointed toward the fact that there really was somehow a broadly acknowledged ‘right’ way, which surprised me. The handful of other etiquette and household books that I found in my quick search seemed to agree, and in a modern context there is no doubt that ‘milk in last’ (MIL) has come to be seen as the ‘proper’ way. However, as I suspected, there is definitely more to it—milk last wasn’t always the prescribed method, and it isn’t necessarily the best way to make your ‘cuppa’ either…

So, to the history books themselves… I spent longer than is healthy perusing ladies’ etiquette books and, as it turns out, only the modern ones assert that milk should go in last or imply that there is any kind of class aspect to be borne in mind. In fact, Elizabeth Emma Rice in her Domestic Economy (1884, p. 139) states confidently that:

“…those who make the best tea generally put the sugar and milk in the cup, and then pour in the hot tea.”

I checked all of the etiquette books that I could find electronically, regardless of time period, and only one other is prescriptive with regards to serving milk with tea. This is The Ladies’ Book of Etiquette, and Manual of Politeness: A Complete Handbook, by Florence Hartley (1860, pp. 105–106), which passes no judgement on which is superior, but recommends for convenience that cups of tea are poured and passed around to be milked and sugared to taste. This may provide a practical underpinning to the upper-class preference for MIL; getting someone’s cup of tea wrong would be a real issue at a gathering or party. You either had to ask how the guest liked it and have them ‘say when’ to stop pouring the milk, which would take time and be fraught with difficulty, or, more likely, you simply poured a cup for each and let them add milk and sugar to their taste. This also speaks to how tea was originally drunk (as fresh coffee still is)—black, with milk if you wanted it. A working-class household was less likely to host large gatherings or have a need to impress people. There it was more convenient to add roughly the same amount of milk to each cup, and then fill the rest with tea. As a guest, you would simply be given a cup made as the host deemed fit, or perhaps be asked how you liked it. If thought sufficiently fussy, you might be told to make it yourself! In any case, Hartley was an American writing for Americans, and I found no pre-First World War British guides that actually recommended milk in last. As noted, the only guide that did cover it (Rice) actually favours milk in first.

Much of my research aligns with that presented in a superb article by Professor Markman Ellis of the Queen Mary University History of Tea Project. Ellis agrees that the ‘milk in first or last’ thing was really about the British class system—which helps explain why I found so few pre-Second World War references to the dilemma. His thesis boils down (ha!) to a crisis of identity among the post-First World War upper class. In the 1920s, the wealth gap between the growing middle class and the upper class was narrowing. This is classic nouveau riche—‘new rich’—territory; they had the money but, as the ‘true’ upper class saw it, not the ‘breeding’. They could pose as upper class, but could never be upper class. Of course, that very middle class would, in its turn, come to look down on aspiring working-class people (think Hyacinth Bucket from British situation comedy Keeping Up Appearances). In any case, if you cared about appearances and reputation among your upper-class peers or felt threatened by social mobility, you had to have a way of setting yourself apart from the ‘lower classes’. Arbitrary rulesets that included MIL were a way to do this. Ellis cites several pre-First World War sources (dating back as far as 1846) which comment on how individuals took their tea. These suggest that milk-in-first (MIF) was thought somewhat unusual, but the sources pass no judgement and don’t mention that it was thought to be a working class phenomenon. Adding milk to tea was, logically enough, how it was originally done—black tea came first and milk was an addition. Additions are added, after all. As preferences developed, some would have tried milk first and liked it. This alone explains why those adding milk first might seem eccentric, but not ‘wrong’ per se. In fact, by the first decade of the 20th century, MIF had become downright fashionable, at least among the middle class, as Helen with the High Hand (1910) shows. In this novel, the titular Helen states that an “…authority on China tea…” should know that “…milk ought to be poured in first. Why, it makes quite a different taste!” It was this presumptuous attitude (how dare the lower classes tell us how to make our tea?!) that influenced the upper-class rejection of the practice in later decades.

This brings us back to Ellis’s explanation of where the practice originated, and also explains the context of Evelyn Waugh’s comments as reported by Johnson. These come from Waugh’s contribution to Noblesse Oblige—a book that codified the latest habits of the English aristocracy. Ellis dismisses the authors and editor as snobs of the sort that originated and perpetuated the tea/milk meme. However, in fairness to Waugh, he does make clear that he’s talking about the view of some of his peers, not necessarily his own, and even gives credit to MIF ‘tea-fanciers’ for trying to make the tea taste better. His full comments are as follows:

All nannies and many governesses, when pouring out tea, put the milk in first. (It is said by tea-fanciers to produce a richer mixture.) Sharp children notice that this is not normally done in the drawing-room. To some this revelation becomes symbolic. We have a friend you may remember, far from conventional in other ways, who makes it her touchstone. “Rather MIF, darling,” she says in condemnation.

                             -Waugh, 1956.

Incidentally, I erroneously stated that governesses were ‘working class’ in my original video on this topic. In fact, although nannies often were, the governess was typically of the middle class, or even an impoverished upper-middle or upper class woman. Both roles occupied a space between classes, being neither one nor the other but excluded from ever being truly ‘U’. As a result, they were free to make tea as they thought best. Waugh’s view is not the only tea-related one in the book. Poet John Betjeman also alluded to this growing view that MIF was a lower class behaviour in his long list of things that would mark out the speaker as a member of the middle class:

Milk and then just as it comes dear?

I’m afraid the preserve’s full of stones;

Beg pardon I’m soiling the doileys

With afternoon tea-cakes and scones.

                             -Betjeman, 1956.

Returning to the etiquette books, although the early ones were written for those running an upper-class household, the latter-day efforts like Johnson’s are actually aimed at those aspiring to behave like, or at least fascinated by, the British upper class. This is why Johnson invokes famous posh Britons and even the Queen herself to make her point to her American audience. Interestingly though, Johnson takes Samuel Twining’s name in vain. The ninth-generation member of the famous Twining tea company is in fact an advocate of milk first, and he too thought that MIL came from snobbery:

With a wave of his hand, Mr. Twining dismisses this idea as nonsense. “Of course you have to put the milk in first to make a proper cup of tea.” He surmises that upper-class snobbery about pouring the tea first, had its origins in their desire to show that their cups were pure imported Chinese porcelain.

Guanghua (光華) magazine, 1995, Volume 20, Issues 7-12, p. 19.

Twining goes on to explain his hypothesis that the lower classes only had access to poor quality porcelain that could not withstand the thermal shock of hot liquid, and so had to put the milk in first to protect the cup. Plausible enough, but almost certainly wrong. As Ellis explains in his article:

…tea was consumed in Britain for almost two centuries before milk was commonly added, without damaging the cups, and in any case the whole point of porcelain, other than its beauty, was its thermo-resistance.

Food journalist Beverly Dubrin mentions the theory in her book ‘Tea Culture: History, Traditions, Celebrations, Recipes & More’ (2012, p. 24), but identifies it as ‘speculation’. I could find no historical references to the cracking of teacups until after the Second World War. The claim first appears in a 1947 issue of the American-published (but international in scope) ‘Tea & Coffee Trade Journal’ (Volumes 92-93, p. 11), along with yet another pro-MIF comment:

…MILK FIRST in the TEA, PLEASE! Do you pour the milk in your cup before the tea? Whatever your menfolk might say, it isn’t merely ‘an old wives’ tale : it’s a survival from better times than these, when valuable porcelain cups were commonly in use. The cold milk prevented the boiling liquor cracking the cups. Just plain common sense, of course. But there is more in it than that, as you wives know — tea looks better and tastes better made that way.

The only references to cracking teaware that I’ve found were to the teapot itself, into which you’d be pouring truly boiling water if you wanted the best brewing results. Several books mention the inferiority of British ‘soft’ porcelain in the 18th century, made without “access to the kaolin clay from which hard porcelain was made”, as Paul Monod says in his 2009 book ‘Imperial Island: A History of Britain and Its Empire, 1660-1837’. By the Victorian period this “genuine or true” porcelain was only “occasionally” made in Britain, as this interesting 1845 source relates, and remained expensive (whether British or imported) into the 20th century. This has no doubt contributed to the explanation that the milk was put there to protect the cups, even though the pot was by far the bigger worry and there are plenty of surviving soft-paste porcelain teacups today without cracks (e.g. this Georgian example). Of course, it isn’t actually necessary for cracking to be a realistic concern, only that the perception existed, and so we can’t rule it out as a factor. However, that early ‘Tea & Coffee Trade Journal’ mention is also interesting because it omits any reference to social class and implies that this was something that everyone used to do for practical reasons, and is now done as a matter of preference. Likewise, on the other side of the debate, author and Spanish Civil War veteran George Orwell argued in favour of MIL in a piece for the Evening Standard (January 1946) entitled ‘A Nice Cup of Tea’:

…by putting the tea in first and stirring as one pours, one can exactly regulate the amount of milk whereas one is liable to put in too much milk if one does it the other way round.

                             -Orwell, 1946.

This reiterated his earlier advice captured in this wonderful video from the Spanish trenches. However, Orwell acknowledged that the method of adding milk was “…one of the most controversial points of all…” and admitted that “the milk-first school can bring forward some fairly strong arguments.” Orwell (who himself hailed from the upper middle class) doesn’t mention class differences or worries over cracking cups.

By the 1960s people were more routinely denouncing MIF as a working class practice, although even at this late stage there was disagreement. Upper class explorer and writer James Maurice Scott in ‘The Tea Story’ (1964, p. 112) commented:

The argument as to which should be put first into the cup, the tea or the milk, is as old and unsolvable as which came first, the chicken or the egg. There is, I think, a vague feeling that it is Non-U to put the milk in first – why, goodness knows.

It’s important to note that ‘U’ and ‘Non-U’ were shorthand for ‘Upper-Class’ and ‘Non-Upper-Class’, coined by Professor Alan Ross in his 1954 linguistic study and unironically embraced by the likes of Mitford as a way to ‘other’ those that they saw as inferior.

The New Yorker magazine (1965, p. 26) reported a more emphatic advisory (seemingly a trick question!) given to an American visitor to London:

Do you like milk in first or tea in first? You know, putting milk in the cup first is a working-class custom, and tea first is not.

This, then, was the status quo reflected in the British TV programme ‘Upstairs, Downstairs’ in the 1970s, which helped to expose new audiences to the idea that MIF was ‘not the done thing’. Lending libraries and affordable paperback editions afforded easy access to books like Noblesse Oblige. The 1980s then saw the modern breed of etiquette books (like ‘Miss Manners’ Guide to Excruciatingly Correct Behavior’) that rehashed this snobbery for an American audience fascinated with the British upper class. Ironically of course, any American would have been unquestionably ‘Non-U’ to any upper class Brit, just as any working or middle-class Briton would have been. And finally (again covered by Ellis), much like the changing fashion of the extended pinkie finger (which started as an upper class habit and then became ‘common’ when it trickled down to the lower classes – see my article here), the upper class decided that worrying about the milk in your tea was now vulgar. Having caused the fuss in the first place, they retired to their collective drawing room, leaving us common folk to endlessly debate the merits of MIF/MIL…

That’s it for now. Next time: Why does anyone still care about this?

“…few men…would be clever enough to be crows.”

I recently caught up with this Nicola Clayton lecture on corvid intelligence. Well worth a watch, it ends with a very apt quote:

“If men had wings and bore black feathers, Few of them would be clever enough to be crows.”

-Henry Ward Beecher

Unfortunately, as quotes in PowerPoint presentations often are, this is incorrect.

The actual quote is:

“Take off the wings, and put him in breeches, and crows make fair average men. Give men wings, and reduce their smartness a little, and many of them would be almost good enough to be crows.”

Some time into researching the origins of this, I came across this blog post, which correctly identifies that the above is the original wording and that Beecher was indeed its originator. However, taking things a little further, I can confirm that the first appearance of this was NOT ‘Our Dumb Animals’ but rather The New York Ledger. Beecher’s regular (weekly) column in the Ledger was renowned at the time. Unfortunately, I can’t find any 1869 issues of the Ledger online, so I can’t fully pin this one down. Based upon its appearance in the former publication in May of 1870, and various other references from publications that summer (e.g. this one) to “a recent issue of the Ledger”, it appeared in early 1870. From there it was reprinted in various other periodicals and newspapers including ‘Our Dumb Animals’ (even if that publication doesn’t credit the Ledger as other reprints did).

So how did the incorrect version come about? It was very likely just a misquote or, rather, a series of misquotes and paraphrasings. Even some of the early direct quotes got it wrong. One 1873 reprint drops the word ‘almost’, blunting Beecher’s acerbic wit slightly. Saying that many men would be good enough to be crows is kinder than saying that many would be almost good enough. Fairly early on, authors moved to paraphrasing; for example, in 1891’s ‘Collected Reports Relating to Agriculture’ we find:

“…Henry Ward Beecher long ago remarked that if men were feathered out and given a pair of wings, a very few of them would be clever enough to be crows.” 

This appeared almost verbatim some thirty years later in Coburn’s ‘The Behavior of the Crow’ (1923). Two years later, Glover Morrill Allen’s ‘Birds and Their Attributes’ (1925, p. 222) gave us a new version:

“…Henry Ward Beecher was correct when he said that if men could be feathered and provided with wings, very few would be clever enough to be Crows!”

It was this form that was repeated from then on, crucially in some cases (such as Bent’s 1946 ‘Life Histories of North American Birds’) with added quotation marks, making it appear to later readers that these were Beecher’s actual words. Interestingly, the earliest occurrence of the wording ‘very few would prove clever enough’ (my emphasis) seems to emerge later, and is credited to naturalist Henry David Thoreau:

“… once said that if men could be turned into birds, each in accordance with his individual capacity, very few would prove clever enough to be Crows.”

-Bulletin of the Massachusetts Audubon Society, 1942, p. 11.

I can find no evidence that Thoreau ever said anything like this, and of course it’s also suspiciously similar to the Beecher versions floating about at the same time (here’s another from a 1943 issue of ‘Nature Magazine’, p. 401). Thus, I suspect, the Thoreau attribution is a red herring, probably a straight-up mistake by a lone author. In any case, relatively few (only eight that I could detect via Google Books) have run with that attribution since, and these can likely be traced back to the MA Audubon Society error.

So, we are seeing here a game of literary ‘telephone’ from the original Beecher tract in 1870 via various misquotes in the 1920s – 1950s that solidified the version that’s still floating around today. Pleasingly, although his wording has been thoroughly mangled, the meaning remains intact. The key difference is that Beecher was using the attributes of the crow to disparage human beings based upon the low opinion that his fellow man then held of corvids. Despite this, Beecher very clearly did respect the intelligence of the bird as much as the 20th century birders who referenced him, and those of us today who also love the corvids. I think it’s important to be reminded that, as his version shows, widespread affection for corvids is a very recent thing. We should never forget how badly we have mistreated them and, sadly, continue to do so in many places.

Time Travel in Avengers: Endgame

A still from Oren Bell’s brilliant interactive timeline for Endgame as a multiverse movie. He disagrees with both writers and directors on the ending – check it out on his site here

With the new time travel-centric Marvel TV series Loki about to debut, I thought it was time (ha) for another dabble in the genre with a look at 2019’s Avengers: Endgame. (SPOILERS for those who somehow have yet to see it). To no-one’s surprise, the writers of Endgame opted to wrap up both a 20+ film long story arc and a cliffhanger involving the death of half the universe by recourse to that old chestnut of time travel (an old chestnut I love though!). The film did so in a superficially clever way, comparing itself to and distancing itself from (quote) “bullshit” stories like ‘Back to the Future’ and ‘The Terminator’. The more I’ve thought and read about it though, the more I realise that it’s no more scientific in its approach than those movies. “No shit,” I hear you say, but there are plenty of people out there who are convinced that this is superior time travel storytelling, and possibly even ‘makes perfect sense’. In reality, although it ends up mostly making sense, this is perhaps more by luck than judgement. I still loved the film, by the way; I’m just interested in how we all ended up convinced that it was ‘good’ (by which I mean consistent and logical) time travel, because it isn’t!

tl;dr – Endgame wasn’t written as a multiverse time travel story – although it can be made to work as one.

Many, myself included, understood Endgame to differ from most time travel stories by working on the basis of ‘multiverse’ theory, in which making some change in the past (possibly even the act of time travel itself) causes the universe to branch. This is a fictional reflection of the ‘Many Worlds’ interpretation of quantum mechanics in which the universe is constantly branching into parallel realities. As no branching per se was shown on camera, I assumed that it was the act of time travel itself that branched reality, landing the characters in a fresh, indeterminate future in which anything is possible. My belief was reinforced by an interview with physicist Sean Carroll, a champion of this interpretation and a scientific advisor on the movie. I was actually really pleased; multiverse time travel is incredibly rare (the only filmed attempt I’m aware of was Corridor Digital’s short-lived ‘Lifeline’ series on YouTube Premium). I’m not really sure why this is but regardless, the idea certainly works for Endgame as time travel is really just a means to an end i.e. getting hold of the Infinity Stones. I wasn’t the only one to assume something along these lines, which is why many were confused as to how the hell Captain America ended up on that bench at the end of the movie. If, as it seemed to, the film worked on branching realities, how could he have been there the whole time? If he wasn’t there the whole time and did in fact come from a branch reality that he’s been living in, how did he get back? Bewildered journalists asked both the writers and the directors (there are two of each) about this and got two different answers. The writers insisted that this was our Cap having lived in our timeline all along, although they later admitted that the directors’ view might also (i.e. instead) be valid, i.e. that he must have lived in a branch reality caused by changes made in the past. W, T, and indeed, F?

There is a good reason for this. The directors’ view is actually a retcon of the movie as written and filmed. Endgame is in fact a self-consistent universe that you can’t alter and in which, therefore, time-duplicate Cap was always there. There is a multiverse element, but as we’ll see, this is bolted onto that core mechanic, and not very well, either. Let’s look at the evidence. The writers explain their take in this interview:

“It’s crucial to your film that in your formulation of time travel, changes to the past don’t alter our present. How did you decide this?

MARKUS We looked at a lot of time-travel stories and went, it doesn’t work that way.

McFEELY It was by necessity. If you have six MacGuffins and every time you go back it changes something, you’ve got Biff’s casino, exponentially. So we just couldn’t do that. We had physicists come in — more than one — who said, basically, “Back to the Future” is [bullshit].

MARKUS Basically said what the Hulk says in that scene, which is, if you go to the past, then the present becomes your past and the past becomes your future. So there’s absolutely no reason it would change.”

What these physicists were trying to tell them is that IF time travel to the past were possible, either a) whatever you do, you have already done, so nothing can change or b) your time travel and/or your actions create a branch reality, so you’re changing that new branch, and not your own past. Unfortunately the writers misunderstood what they meant by this and came up with a really weird hybrid approach, which is made clear in a couple of key scenes involving Hulk where the two parallel sets of time-travel rules are explained. As originally written and filmed these formed a single scene, with all the key dialogue delivered by the Ancient One. First, the original version of those famous Hulk lines that they allude to above (for the sake of time/space I won’t bother to repeat those here):

ANCIENT ONE

Of course, there will be consequences.

HULK

yes…If we take the stones we alter time, and we’ll totally screw up our present-day even worse than it already is.

ANCIENT ONE

If you travel to the past from your present, then that past becomes your future, and your former present becomes your past. Therefore it cannot be altered by your new future. 

This is deliberately, comedically obfuscatory, but is really simple if you break it down. All they’re saying is that you may be travelling into the past, but it’s your subjective future. If you could change the past, you’d disallow for your own presence there, because you’d have no reason to travel. In other words, you just can’t change the past, and paradoxes (or Bill & Ted-style games of one-upmanship) are impossible. On the face of it this dictates an immutable timeline; you were always there in the past, doing whatever you did, as in the films ‘Timecrimes’, ‘Twelve Monkeys’, or ‘Predestination’. In keeping with this, the writers also claim that Captain America’s travel to the past to be with Peggy is also part of this. How? We’re coming to that. Most definitely not in keeping however is, well, most of the movie. We see the Avengers making overt changes to the past that we’ve already seen in prior movies, notably Captain America attacking his past self. How is this possible given the above rule? If it is possible despite this, how does 2012 Cap magically forget that this happened? The answers to both questions are contained in the next bit of dialogue: 

HULK

Then all of this is for nothing.

ANCIENT ONE

No – no no, not exactly. If someone dies, they will always die. Death is.. Irreversible, but Thanos is not. Those you’ve lost have not died, they’ve been willed out of existence. Which means they can be willed back. But it doesn’t come cheap. 

ANCIENT ONE

The Infinity Stones bind the universe together, creating what you experience as the flow of time. Remove one of these stones, this flow splits. Your timeline might benefit, but my new one would definitely not. For every stone that you remove, you create new very vulnerable timelines; millions will suffer. 

In other words, because the Stones are critical to the flow of time and because later on a Stone is taken, the changes to the past of Steve’s own reality are effectively ‘fixed’, creating a new branch reality in which 2012 Cap does remember the fight and the future pans out differently, without changing our Steve’s own past. We can try to speculate on what would have happened if the time travellers had made changes to the past and then a Stone hadn’t been taken, but this is unknowable since every change to what we know happened does get branched. Either the writers are lying to us, they don’t understand their own script, or – somehow – the taking of the Stones is effectively predestined, forming another aspect of the self-consistent universe of the movie. Logically of course, this is, to use the technical quantum mechanical term, bollocks. Events happening out of chronological order in time travel is fine; cause and effect are preserved, just not in the order to which we’re accustomed. However, you don’t get to change the past, then branch reality, then imply that the earlier change is not only retrospectively included in that branch, but is also predestined! This is a case of the cart before the horse; the whole point of branched realities is to allow for change to the past – it should not be possible to make any change prior to this point. The very concept is self-contradictory. If you can’t change the past, you can’t get to the point of taking a Stone to allow for a change to the past. The only way this works is if we accept that you can make changes, but as per the nonsense Ancient One/Hulk line, your present… “…cannot be altered by your new future.” Unfortunately, the writers have established rules and then immediately broken them in an attempt to avoid falling into the time travel cliché of pulling a Deadpool and stopping the villain in the past, while retaining the past-changing japes of those exact same conventional time travel movies. Recognising that the new branched realities would be left without important artefacts, they then explain how these ‘dark timelines’ are avoided:

HULK

Then we can’t take the stones.

ANCIENT ONE

Yet your world depends on it.

HULK

OK, what if… what if once we’re done we come back and return the stones?

ANCIENT ONE

[Then] the branch will be clipped, and the timeline restored.

Note that this is further evidence of the writers’ vision; if reality branches all the time, there’s no way to actually ‘save’ these timelines – only to create additional better ones. If reality only branches when a Stone is removed, putting it back ‘clips’ that branch as they explain. Still, on balance this interpretation is seriously flawed and convoluted. Luckily, the version of this same scene from the final draft of the script (i.e. what we saw play out) helps us make sense of this mess (albeit not the dark timelines; they are still boned, I’m afraid!):

ANCIENT ONE

At what cost?

The Infinity Stones create the experience you know as the flow of time. Remove one of the stones, and the flow splits.

Now, your timeline might benefit.

My new one…would definitely not.

In this new branch reality, without our chief weapon against the forces of darkness, our world would be overrun…

For each stone you remove, you’ll create a new, vulnerable timeline. Millions will suffer.

(beat)

Now tell me, Doctor. Can your science prevent all that?

ASTRAL BANNER

No. But it can erase it.

Astral Banner reaches in and grabs THE VIRTUAL TIME STONE.

ASTRAL BANNER (CONT’D)

Because once we’re done with the stones, we can return each one to its own timeline. At the moment it was taken. So chronologically, in that reality, the stone never left.

These changes have two significant effects (other than removing the potentially confusing attempt to differentiate being willed out of existence from ‘death’):

1) To move the time travel exposition earlier in the movie to avoid viewers wondering why they can’t just go back and change things. 

To achieve this they added the obvious Hitler comparison (it may not be a coincidence that this was a minor plot point in Deadpool 2!), along with pop culture touchstones to help the audience understand that this isn’t your grandfather’s (ha) time travel and that you can’t simply go back and change your own past to fix your present. This works fine and doesn’t affect our interpretation of the movie’s time travel.

2) To de-emphasise the arbitrary nature of the Stones somehow being central to preventing a ‘dark’ timeline by pointing out that they’re essentially a means of defence against evil. 

This is more critical. We go from ‘creating what you experience as the flow of time’ to ‘the Infinity Stones create the experience you know as the flow of time’, which I read as moving from them creating time itself, to simply the timeline that we know (i.e. where the universe has the Stones to defend itself). This provides more room for the interpretation that removing a Stone is simply a major change to the timeline, like any other, that would otherwise disallow for the future we know, and so results in reality branching to a new and parallel alternate future. Still, I really don’t think that improving time travel logic was the main aim here, or even necessarily an aim at all. The wording about how the Stones ‘bind the universe together’ may have been dropped as simply redundant, or possibly to soften the plothole that not only the ‘flow of time’ but also the ‘universe’ are just fine when the Stones all get destroyed in the present-day (2023) of the prime reality. If the filmmakers truly cared about their inconsistent rules, they had the perfect opportunity here to switch to a simple multiverse approach and record a single line of dialogue that would explain it without the need to change anything else. Here’s the equivalent line from Lifeline:

“Look, your fate is certain. Okay? It can’t be undone. Your every action taken is already part of a predetermined timeline and that is why I built the jump box. It doesn’t just jump an agent forward in time, it jumps them to a brand new timeline. Where new outcomes are possible.”

Anyway, back to that head-scratcher of an ending and the writers’ claim that Cap was always there as a time duplicate in his own past. They say this is the case because it’s not associated with the taking of a Stone. I have checked this, and they’re right; it’s the only change to the past that can’t be blamed on a Stone. There’s also no mention in the script (nor the alternate scene below) of alternate universes being created prior to the taking of a Stone. So, per the writers’ rules, Cap (and not some duplicate from another reality) is indeed living in his own past and not that of a branch reality. This was the intent “from the very first outline” of the movie, notwithstanding the later difference of opinion between the writing and directing teams. To be clear, everyone involved does agree that he didn’t just go back (or back and sideways, if you believe the directors) for his dance raincheck – he stayed there, got married and had Peggy’s two children. Which inevitably means that Steve somehow had to live a secret life with a secret marriage (maybe he did a ‘Vision’ and used his timesuit as a disguise?) and kissed his own great-niece in Civil War (much like Marty McFly and his mum).

You can still choose to interpret Steve’s ‘retirement’ to his own past as a rewriting of the original timeline that alters Peggy’s future (i.e. who she married, who fathered her kids, etc.). Alternatively, you can believe the directors that Cap lived his life with the Peggy of a branch reality and returned (off camera!) to the prime reality to hand over the shield. But neither of these fits with the original vision for the movie: that you can’t change your own past, and that it doesn’t branch unless a Stone is removed. There’s another problem with the writers’ logic here. Cap only gets to the past by having created and then ‘clipped’ all the branching realities. This means that the creation and destruction of these branches also always happened and are also part of an overarching self-consistent universe. Except that they can’t possibly be, for the reason I’ve already given above; we’ve seen the original timelines before they become branch realities, so we know something has in fact changed, and there can’t be an original timeline for Cap to have ended up in his own past!

Conclusions

So, Endgame as written and even as filmed (according to the writers) is really not the multiverse time travel movie that most of us thought. It’s a weird hybrid approach that you can sort of mash together into a convoluted fixed timeline involving multiple realities, but not really. It actually makes less sense than the films that it (jokingly) criticises, and handwaves all consequences of time travel. Luckily, it can be salvaged if we overlook the resulting plothole of Captain America’s mysterious off-camera return and follow the interpretation of the directors. That is, there’s no predestination; the Avengers are making changes, but every significant change (i.e. one that would otherwise change the future, like living a new life in the past with your sweetheart) creates a branch reality. Not just messing with Stones. This isn’t perfect; how could it be? It’s effectively a retcon. But it’s easily the better choice overall in my view. Why wouldn’t this be the case? It’s only logical. The only serious discrepancy is the remaining emphasis placed upon the significance of the Stones, which I think can be explained by the Ancient One’s overly mystical view of reality. She focuses on the consequences of messing with the Stones simply because she knows how earth-shattering they would be. She doesn’t explicitly rule out other causes of branches. It likely doesn’t matter that they’re destroyed in the subjective present of the prime universe, because the ultimate threat she identifies is Thanos, and he’s been defeated, along with the previous threats that the Stones had a hand in, including of course ‘Variant’ Thanos from the 2014 branch (meaning that branch doesn’t have to contend with him and gets its Soul and Power Stones back). Of course, this interpretation has some dark implications: if significant changes create branches, then when Cap travels back to each existing branch to return each Stone, reality must be branched again. The Avengers have still created multiple new universes of potential suffering and death without one or more Stones; they’ve just karmically balanced things somewhat by creating a new set of positive branches that have all their Stones. Except for, again, the new Loki branch.

For me, the directors’ approach, whilst imperfect, is the best compromise between logic and narrative. It’s not clear whether they somehow thought this was the case all along, or whether they only recognised the inconsistencies in post-production or even following the movie’s release. The fact that the writing and directing teams weren’t already on the same page when they were interviewed tells me that, simply, not enough thought went into this aspect of the film. Why should we believe them? Well, the director’s role in the filmmaking process traditionally supersedes that of the writer, shaping both the final product and the audience’s view of it. Perhaps the most famous example is Ridley Scott’s influence on Deckard’s status as a replicant. You can still choose to believe that he is human based on the theatrical cut and ignoring Scott’s own intent, but this is contradicted by his later comments and director’s cuts. There’s also the fact that subsequent MCU entries suggest that the Russos’ multiverse model is indeed the right one. Unless Loki is going to be stealing multiple more iterations of Infinity Stones, the universe is going to get branched simply by him time travelling. If so, this will establish (albeit retroactively) that the Ancient One really was just being specific about the Stones because of the particularly Earth-shattering consequences of messing with their past (and the need to keep things simple for a general audience). It would also pretty much establish the Russos’ scenario for Captain America; that he really did live out his life in a branch reality before somehow returning to the prime reality to hand over his mysterious newly made shield (another plothole!) to Sam. Where he went after that, we may never know, but I hear he’s on the moon.

The Muffin Man?

This is an odd one. Some idiot has claimed as fact a stupid joke about the ‘muffin man’ of the children’s song/nursery rhyme actually being an historical serial killer, and some credulous folk (including medium.com) have fallen for it. Snopes have correctly debunked it, yet despite a total lack of any evidence for it being the case, have only labelled it ‘unproven’. I hope they figure out that this isn’t how history works. The onus is on the claimant to provide a reference. They aren’t going to find a definitive origin for a traditional song like that that would allow the (patently ludicrous) claim to be disproven. It’s moderately endearing that Snopes had to find out via furious Googling that ‘muffin men’ were a real thing. I learned this when I was a child. Maybe it’s a British thing that Americans have lost their cultural memory of. The very concept of the muffin man is clearly enough to debunk this bollocks on its own. The muffin man was a guy who went door to door selling tasty treats that kids enjoy, not some ‘Slenderman’ bogeyman figure. It would be like suggesting that there was a serial killer called ‘Mr Whippy’. Anyway, this Jack Williamson guy is just another internet attention-seeker who will hopefully disappear forthwith. As for Snopes, I can’t fault their article, but I suspect their ongoing foray into political fact-checking has made them a little gun-shy of calling things ‘False’ without hard evidence.

Count Cholera 2: Revenge of the Half-Baked Hypothesis

These two get it.
(from https://www.theverge.com/2020/4/20/21227874/what-we-do-in-the-shadows-season-2-hulu-preview)

As I noted in my first post on Marion McGarry’s Dracula=Cholera hypothesis, I’m always wary of criticising ideas that have been filtered through the media (rather than presented first-hand by the author or proponent), because something is almost always missing, lost in translation or even outright misrepresented. So when a kind commenter directed me to this recording of McGarry’s talk on her theory that Bram Stoker’s ‘Dracula’ was inspired by Stoker’s mother’s experience of the early 19th century Sligo cholera outbreak, I felt that I had to listen to it (I never did receive a reply to my request for her article). Now that I have listened, I can confirm that McGarry is reaching bigtime. The talk adds very little to the news reports that I referenced last time and covers much the same ground, including spurious stuff like the novel having the working title of ‘The Undead’ (‘undead’ already being a word as I noted previously). There is some new material however.

Early on McGarry references recent scholarship showing that the historical figure of Wallachian ruler Vlad III was not really the inspiration for the Count or the novel that features him. She is right about this; Stoker did indeed only overlay Vlad’s name and (incorrect) snippets of his biography onto his existing Styrian ‘Count Wampyr’. However, needless to say, just because ‘Dracula’ was not inspired by the historical Vlad III, it does not follow that it/he was inspired by cholera. As I noted before, Stoker did not invent the fictional vampire, and had no need of inspiration to create his own vampire villain. The only argument that might hold weight is that he was inspired to tackle vampirism by his family history. McGarry’s main argument for this hinges on the fact that Stoker did research for his novels in libraries. As noted last time, this actually works against her theory, since we have Stoker’s notes and there is no mention of his having read around cholera in preparation for writing ‘Dracula’, whereas we do have his notes on his actual sources, which were about eastern European folklore: vampires and werewolves. The aspects that Stoker did use, he transplanted almost wholesale; it’s easy to see, for example, which bits he lifted from Emily Gerard. Stoker did not in fact do ‘a great deal’ of reading; he found a couple of suitable books and stopped there. Which is why the only other new bit of information from this talk is also of limited use. McGarry cites this 1897 interview with Stoker, claiming that ‘…the kernel of Dracula was formed by live burials…’. This is not, in fact, what Stoker was asked. He was asked what the origin of *the vampire myth* was, not the inspiration for his taking on that source material:

“Is there any historical basis for the legend?”

Stoker, who was no better informed on the true origins of the Slavic vampire than any other novelist, answered:

“It rested, I imagine, on some such case as this. A person may have fallen into a death-like trance and been buried before the time. 

Afterwards the body may have been dug up and found alive, and from this a horror seized upon the people, and in their ignorance they imagined that a vampire was about.”

Yes, this has parallels with cholera victims being buried prematurely, but it is by no means clear that Stoker was thinking of this when he made this response. Certainly, he does not mention it. There is every chance that this is purely coincidence; plenty of others at this time lazily supposed, like Stoker, that vampire belief stemmed from encounters with still-living victims of premature burial, or (apocryphal) stories of scratches on the inside of coffin lids. Stoker’s family connection with premature burial is likely a coincidence. Had he included a scene involving premature burial, or even a mention of it in the novel, McGarry might be onto something.

McGarry tries to compare Stoker’s victims of vampirism with descriptions of cholera patients; lethargy, sunken eyes, a blue tinge to the eyes and skin. Unfortunately the first two fit lots of other diseases, notably tuberculosis, and the third symptom doesn’t actually feature in ‘Dracula’ at all. I have literally no idea why she references it. She also tries to link the blue flames of the novel with German folklore in which ’blue flames emerge from the mouths of plague victims’. I have never heard of this, nor can I find any reference to it. I do know, however, that Stoker took his blue flames from Transylvanian folklore about hidden treasure; taken again from Emily Gerard (Transylvanian Superstitions), confirmed once again by Stoker’s notes. If there is folklore about blue flames and cholera, no reference appears in his notes, and it is most likely coincidence.

In an extension of her commentary that storms preceded both outbreaks (cholera and vampirism), McGarry asserts that the first victim of cholera presented on 11 August – the same date as Dracula’s first British victim in the novel – the evidence being William Gregory Wood-Martin’s 1882 book ‘The History of Sligo County and Town’. This is not correct. Lucy, Dracula’s first victim, does indeed receive her vampire bite on 11 August. Meanwhile, however, back in the real world, the first case of cholera in Sligo was identified on 29 July 1832. Wood-Martin mentions 11 August only because a special board was created on that day, precisely because the first case had happened some time previously. McGarry does admit that 11 August ‘..may have been randomly chosen by Stoker’, yet still lists this piece of ‘evidence’ in her summing up, which is as follows;

‘It cannot be a coincidence that Bram Stoker had Dracula tread a path very similar to cholera; a devastating contagion travelling from the East by ship that people initially do not know how to fight, a great storm preceding its arrival, the ability to travel over land by mist and the stench it emits, avenging doctors and Catholic imagery, the undead rising from the dead, all culminating in the date of august 11th of the first victim.’

Just to take these in order;

  1. ‘It cannot be a coincidence’ It can absolutely be a coincidence. All of this is literally coincidence without any evidence to support it. This is not how history works. 
  2. ‘…a path very similar…’ Dracula comes from Eastern Europe. Cholera came from the Far East. Both are east of the British Isles, but the origins of the two contagions are hardly identical. The ship aspect I dealt with last time; this is how people and goods travelled across continents at that time. Not to mention that all of these similarities with cholera are similarities with any disease – and most agree that the idea of the vampire as contagion is a legitimate theme of ‘Dracula’ (indeed, historical belief in vampires has strong ties to disease). There’s nothing special about cholera in this respect. The same goes for the idea of people not knowing how to fight these afflictions; all disease outbreaks require learning or relearning of ways to combat them. One could just as easily claim similarity in that cholera had been fought off previously, and that Van Helsing already knows how to defeat vampires; just not necessarily this one… 
  3. ‘…the ability to travel over land by mist and the stench it emits…’ Earlier in the talk McGarry claims that Stoker invokes miasma theory in ‘Dracula’. In fact he doesn’t. Bad smells abound, sure, but the only mention of miasma in the novel is metaphorical (‘as of some dry miasma’) and relates to the earthy smell of Dracula’s Transylvanian soil, not to the Count himself. Nowhere is smell cited as a means of transmission, only biting. ‘Dracula’, famously, takes a very modern, pseudoscientific approach to vampirism, even if its counter is good old-fashioned Catholic Christianity. Speaking of which…
  4. ‘…avenging doctors and Catholic imagery…’ As noted, ‘Dracula’ does treat vampirism as a disease, so the doctors follow from that, not bearing any specific relation to cholera in Ireland. As for Catholic imagery, well, Stoker was from that background, and Dracula is very overtly Satanic in the novel. You need religion to defeat evil just as you need medicine to defeat disease. Once again, this is coincidence.
  5. ‘…the undead rising from the dead…’ How else does one get the undead? Seriously though, I’ve dealt with this above and previously. Stoker chose to write about vampires, therefore the undead feature. 
  6. ‘…all culminating in the date of August 11th of the first victim.’ Except it doesn’t, as I’ve shown.

I make that a 0/6. The themes identified by McGarry in Stoker’s book stem from his choice of vampires as the subject matter, and his take is shaped by his knowledge, upbringing, etc etc. Was he in part inspired to choose vampires because of family history with cholera? Maybe; it’s plausible as one of many influences (not, as McGarry implies, the main or sole influence) but there is literally zero evidence for it. 

Rifle musket or rifled musket?

A Rifled musket. Also a rifle musket. And a rifle.

Tl;dr – 

‘Rifle’ = short for ‘rifled gun’

‘Rifled gun’ = any firearm with rifling

‘Rifled musket’ OR ‘rifle musket’ = any musket with rifling

‘Musket’ = any shoulder-fired enlisted infantry firearm*

*i.e. not an artillery or cavalry carbine, or an NCO or officer’s fusil or pistol.

Having seen the Smithsonian TV channel’s YouTube channel describe an India Pattern ‘Brown Bess’ musket as a ‘musket rifle’ – which is a nonsense term – I thought it was time to roll out my research on the term ‘rifle musket’ – which is an actual historical thing. Firstly, I should point out that their ‘test’ of the musket vs the Dreyse needle gun is typically flawed and superficial modern TV stuff, as Brandon F. details. Brandon corrects ‘musket rifle’ to ‘rifled musket’, with a ‘d’, but in fact both forms – ‘rifled musket’ and ‘rifle musket’ – were used interchangeably in the period in question. Said period is from c.1850, when the technology of spiral grooves in the barrel, or rifling – known for more than 300 years by this point – was first applied to standard issue infantry firearms.

The most important thing to say is that the use of ‘rifle’ or ‘rifled’ is just a matter of preference around verb inflection, like ‘race car’ in American English (a car for use in a race) and ‘racing car’ in British English (a car for racing in). This linguistic difference was less pronounced in the 19th century (although it did exist, as we’ll see), and so ‘rifle musket’ and ‘rifled musket’ were genuinely interchangeable. More on this later, but the main thing I want to address – and the ‘BS history’ here – is the idea that they mean different things. They don’t. Some (including the former Pattern Room Custodian Herbert J. Woodend in his British Rifles book) have suggested that the term ‘rifled’ denoted a conversion – a ‘musket’ that had been ‘rifled’ – whereas a ‘rifle musket’ is a musket-like rifle that was designed and made that way. Although logical enough, there is literally no evidence for this, no consistency in the actual use of the two variant terms, and plenty of evidence to suggest that the two are just linguistic variants. 

A quick word on the word ‘rifled’ or ‘to rifle’ – as this period dictionary shows, this originally meant to raid, loot, ransack or – and this is where the grooves cut into a barrel come in – ‘to disturb’. Gunmakers running a sharp tool on a rod in and out of a gun’s bore were indeed disturbing the otherwise smooth surface of the metal. Incidentally, the term ‘screwed gun’ is a synonym for ‘rifle(d) gun’, as this 1678 source shows. The etymology is pretty clear, but had apparently been forgotten by the end of the 18th century, when ‘to rifle’ either meant just ransacking or looting, or to cut spiral grooves in a gun. At any rate, ‘rifle’ in the firearms sense was in use from at least 1700, as shorthand for ‘rifled gun’ or ‘rifle gun’. Inventor of the Baker rifle, Ezekiel Baker, refers to the generic rifle as ‘the rifled gun’ in his own 1806 book, so this long form term was still in current use at that time, but was already commonly abbreviated. Almost from the off therefore, ‘rifled gun’, ‘rifle gun’ and ‘rifle’ were all used to refer to any shoulder-fired firearm with rifling, whereas ‘rifled musket’, ‘rifle musket’ or ‘rifle-musket’ referred specifically to a military weapon with rifling. Military long guns in the age of linear tactics had to serve as both gun and half-pike, so that infantry could fight without shooting, and especially engage with cavalry. There was little need for the precision offered by the rifle, there was a lack of training to allow soldiers to exploit it, and in any case rifles were much more labour-intensive and therefore costly to make. Rifles were also slower to load, and it was more effective for the majority of troops to be drilled in musketry using quick-loading and cost-effective smoothbore muskets than to provide them with rifles. The typical rifle was designed for hunting or target shooting. Of course, during the 18th century they were adapted for limited use in war by specialist troops, and light infantry tactics developed for them, but the standard soldier’s weapon remained the musket, which until the 1840s was invariably a smoothbore musket and not a ‘rifled musket’.

Although we are used to thinking of a musket as a clunky, inaccurate, short-ranged and smoothbore weapon, therefore, the actual distinguishing characteristics of the musket were really only twofold. First, a long barrel, to allow for more complete powder burn and therefore sufficient velocity (especially important given the lack of gas seal at the breech), as well as enough reach to engage in bayonet fighting (especially against cavalry); and second, a bayonet. This is why the Baker rifle could be called a ‘rifle musket’ – and its users fought as line infantry as well as light infantry – and also why the famous Winchester company marketed a long-barrelled, bayonet-capable version of its lever-action rifle as a musket. By the end of the 19th century the smoothbore musket had fallen out of use, and so there was no longer a need to differentiate between ‘(smoothbore) musket’ and ‘rifled musket’. Of course, we could have just called rifles ‘muskets’, but ‘rifle’ was already in common usage, and the word ‘musket’ had become associated with the smoothbore musket amidst the hype of the superiority of the rifle musket. ‘Rifle’ or ‘rifled’ was the key part of the name, so once again the standard infantry weapon was abbreviated to just ‘rifle’ – which was in any case used throughout this whole period. The P’53 Enfield was always a ‘rifle’, a ‘rifled musket’, and technically a ‘rifled gun’ as well.

All of this would tend to suggest that ‘rifled musket’ only came in with general issue percussion rifles like the Enfield and the Springfield, but in fact early military rifles like the famous British Baker were also ‘muskets’. Rifled muskets. The 1816 ‘Encyclopaedia Perthensis; Or Universal Dictionary of the Arts, Sciences, Literature’, Volume 18 (p. 383);

‘A telescope with cross-hairs, fitted to a common rifled musket, and adjusted to the direction of the shot, will make any person, with very little practice, hit an object with more precision than the most experienced marksman.’

De Witt Bailey’s ‘British Military Flintlock Longarms’ shows that the Baker itself was in fact sometimes called a ‘Rifled musquet’, and not just in its rare ‘musket bore’ variant either. It was a musket because it was a military long gun with a bayonet. It was a rifle gun, rifle musket, or just plain ‘rifle’, because it was rifled! By this stage however the shorthand ‘rifle’ was not only in common use, but was part of the formal designation of the weapon (the ‘Infantry Rifle’). It also helped to further differentiate the specialist weapon from the common musket. However, the term ‘musket’ did survive for a long time afterward in the context of ‘musketry’ – military marksmanship. The British ‘School of Musketry’ was only formed in 1854, when rifles were already standard issue – in fact that’s primarily why it was formed; soldiers now had to learn how to hit their mark at distance. My mention of ‘musket bore’ raises a third differentiating aspect of the musket that I ignored earlier (because it becomes irrelevant in the 19th century), which is a larger, heavier bullet than that of the typical rifle, carbine, or ‘fusil’. This held broadly true from the inception of the musket in the 1530s to the 19th century, when (rifle!) musket bores reduced as velocities went up. However, even in this earlier period, a carbine could be of ‘musket bore’, just as it could also mount a bayonet. Terminology is a thorny problem that is just as often driven by the armed force that’s doing the naming as it is by logic; but here I’m just concerned with sorting out the ‘rifle(d) musket’ issue. 

The official British term for an infantry rifle intended for use by ‘line infantry’ (i.e. not light infantry or specialist riflemen) during the period of the Pattern 1853 rifle was ‘rifled musket’, in keeping with the modern British English grammatical preference. As noted though, this was less set in stone in the mid-19th century and ‘rifle musket’ was also used, notably by Henry Jervis-White-Jervis in his 1854 ‘The Rifle-musket: A Practical Treatise on the Enfield-Pritchett Rifle’. ‘The Rifle: And how to Use It’ by Hans Busk (1861) uses both terms, leading with ‘rifled musket’, and is referring to the Pattern 1853 rifle, so again, there’s no question of ‘rifled’ meaning a conversion of a smoothbore musket. In the U.S. also, both terms were used. Peter Smithurst in his Osprey book on the P’53 refers to the records of the 10th Massachusetts Volunteers of Springfield (July 1861);

‘….Friday morning the regiment marched to the U.S. Armory and returned the muskets loaned them for the purpose of drill, and in the afternoon we received our full supply of the Enfield rifled musket.’

Yet the ‘Catalogue of the Surgical Section of the United States Army Medical Museum’ by Alfred A. Woodhull (1866, p. 583) lists various weapons, using ‘rifle musket’ for the U.S. Springfield, but ‘rifled musket’ for foreign types including the P’53. Once again, interchangeable terms for the same thing. 

There you go – call them ‘rifle muskets’, ‘rifled muskets’, ‘rifle guns’ or just plain ‘rifles’ – all are correct and all refer to the same thing – a military rifle. The only reason we don’t call an M16 a ‘musket’ is fashion, basically.

Time Travel: the ending of ‘Twelve Monkeys’

‘I’m in insurance. Just to be clear, that means I’m a scientist here from the future to obtain a sample of the original form of your doomsday virus so that survivors in the future can reclaim the surface of the Earth. Clear?’

Something a bit left-field, but still about BS and history (sort of) and another in a series of time travel-related posts. One of the greatest time travel stories ever is the original Twelve Monkeys (1995); I love it. It’s an absolutely flawless, self-consistent time loop with a wonderfully bleak ending where (spoiler alert………….) the hero dies and fails to prevent the end of the world. However, it isn’t actually as bleak as it seems. The whole point of the movie, which some people have missed, is that the outbreak cannot be prevented. To do so would prevent the very future that sends James Cole back in time in the first place. What the future scientists *can* achieve is to obtain a sample of the virus to engineer a cure for the survivors in the future. They dub this an insurance policy of sorts, hence the future scientist – the ‘Astrophysicist’ in the credits and the script – introduces themselves as ‘…in insurance…’. Some take this literally, perhaps as a deliberate jibe at the incompetent future rulers: she wasn’t even a trained scientist – just some business type (this argument has taken place, amongst other places, on the Wikipedia article ‘talk’ page)! This is not the case. The woman on the plane is definitively the scientist we see in the future, and she is a key part of the plan to save what’s left of humanity in the future, not in the past.

If the apparent age of the actress herself (who does not wear age makeup or even sport grey hair in the future scenes) doesn’t make this clear, the available drafts of the film’s script, dated June 27 1994 and February 6 1995, do. The future scientist on the plane is meant to appear the same age as he is in the future scenes (given the late dates of the scripts, they must simply have not bothered to change the scientist’s gender after Carol Florence was cast in the role); the script describes ‘…a silver-haired gentleman…’. 

INT.  747 CABIN – DAY

DR. PETERS closes the door to the overhead luggage rack containing his Chicago Bulls bag and takes his seat.  Next to him, a FELLOW TRAVELER, unseen, says…

FELLOW TRAVELER’S VOICE (o.s.)

It’s obscene, all the violence, all the lunacy.  Shootings even at airports now. You might say…we’re the next endangered species…human beings!

CLOSE ON DR. PETERS, smiling affably, turning to his neighbor.

DR. PETERS

I think you’re right, sir.  I think you’ve hit the nail on the head.

DR. PETERS’ POV:  the FELLOW TRAVELER, a silver haired gentleman in a business suit, offering his hand congenially.  DR. PETERS doesn’t know who this man is, but we do.  It’s the ASTROPHYSICIST!

ASTROPHYSICIST

Jones is my name.  I’m in insurance.

EXT.   PARKING LOT/AIRPORT

As YOUNG COLE’S PARENTS (seen only as sleeves and torsos) usher YOUNG COLE into their station wagon, the boy hesitates, looks back, watches a 747 climb into the sky.

 

FADE OUT:

 

The Astrophysicist we see in the final cut likewise looks no older in the future than she does in the past. Although the date of the future scenes is never given, we are looking at a minimum of 30 years from 1996; Jose specifically mentions ‘30 years’ in the closing scenes, and Bruce Willis is very clearly at least 30 years older than his child actor self. This is a Hollywood movie with a 30+ million dollar budget; they could have afforded a little more latex if they had wanted to change the intent of the script.

The real clincher though is the wonderful documentary ‘The Hamster Factor’, which you can find (illegally of course) on YouTube. I’d encourage watching the whole thing, but from 45 minutes in we hear, despite director Terry Gilliam’s misgivings, the filmmakers’ clear intent that this scene is indeed a ‘happy ending’:

 

‘…a shot which has caused considerable conflict between Terry and Chuck. Chuck wants to follow the original script which ends with young Cole in the airport parking lot. As far as Terry is concerned though he has his final shot; the shot of young Cole in the airport witnessing his own death. …from early on reading the script and in discussions I’ve always felt that the ending of the film would take place in the airport between Railly and the boy, their eye contact, I mean, that’s why I started the film with, on his eyes, and end on his eyes, and the boy is touched, scarred, damaged by what he’s just seen, something that’s going to stay with him for the rest of his life. The scene that then came after that, was a scene in the airplane where Dr Peters and his viruses meet the astrophysicist and we know that somehow, the astrophysicist will get the virus and will be able to save the human race. [there is then a short clip of Jones the astrophysicist on the plane with Peters]. There was an argument that we needed that scene because otherwise Cole’s death would have been in vain, that he wouldn’t have achieved anything; this way we the audience can see that he has achieved something, that his death has led them to the virus and he saves the future, and um I was convinced that was all nonsense anyway, it was unnecessary and emotionally it would weaken the emotional ending.’

 

Note that although Gilliam talks about ‘reading the script’, the aeroplane ‘happy ending’ scene was definitely in there from at least a year before filming began; Gilliam as director was proposing that they should leave it out as it might be ‘giving too much away’, but producer Chuck Roven (and no doubt others, given the difficulties experienced with test audiences) was insistent that it remain. Later in the documentary, Mick Audsley (the film’s editor) explains the tricky balance being struck between giving the audience enough information and giving too little. We see Gilliam and others in the edit, watching first the scene of young Cole seeing older Cole die, and then the scene on the plane. Audsley even laughingly asks if this scene might actually be a setup for a sequel (!), something which Gilliam denies immediately before explaining that they are preparing two different edits for test audiences, one that ends on young Cole’s face, the other with the plane scene. As he puts it; ‘There are definitely two camps here on this one about whether that detracts from the ending or enriches it a little bit by tidying up certain plot.’ Then Gilliam states outright that ‘…she’s actually come back from the future, and Cole effectively has led them to this point…’ to which Audsley (at least, I think it’s him, it’s said off-camera) admits that this ‘didn’t come through’ for him. According to Gilliam, ‘quite a few people’ didn’t get it either. So if you were one of those people, don’t worry; you are in good company!

On the ‘somehow’ of the means by which the future scientists will retrieve the sample from Peters (which definitely is unclear), I actually suspect that the handshake is also meant to represent the scientist willingly contracting the virus herself, obtaining the sample by physical contact. This would be consistent with the Terminator-style naked time travel that we see; she couldn’t bring back a phial of virus, but she could contract the virus and bring herself back. Alternatively, perhaps there is a means of bringing back a sample without killing herself (assuming no actual virus has yet been released, she could even achieve this, er, drug mule style… I’ll say no more than that). The important point is that whether or not the scientists thought they might be able to stop the outbreak, they had a contingency plan to use the pinpointed location, time, date and ID of the perpetrator to obtain a sample and at least have a chance of engineering a cure. It isn’t clear how the unmutated virus would help them combat the mutated future strains, but still, the filmmakers are clear that this is the ending. It’s ambiguous enough, and the plan desperate enough, that you can still read it as the beginning of the end of humanity if you wish. For me it’s the right balance of bleakness, but I can see why many, including Gilliam himself, wanted the movie to end on young Cole watching himself die as a futile loop is completed.