Just a quick comment on an article that appeared on the usually excellent Atlas Obscura a little while back. It starts out OK, but fairly quickly we hit an error. The first image is of the alleged home not of Vlad III, “Dracula”, but of his father Vlad II, “Dracul”. We could simply read between the lines here, since Vlad III is further alleged to have been born in that house (both claims are shaky, in fact, as I will eventually get around to explaining). However, the caption states that the real-life Dracula was “born in Romania in the 14th century”. That’s a century out, not to mention that Vlad’s contribution to the Stoker novel was actually very limited, amounting to a brief fictionalised biography that also confuses Vlad II and Vlad III, and a Victorian equivalent of a copy/paste of “Dracula” and “Transylvania” for the original draft’s “Count Wampyr” and “Styria”. The author of this article ought to know this, and I wonder if this is an editorial cockup inherited from the original ‘The Conversation’ article (on a related note, why do people keep buying articles from that site?).
Then it goes really wrong in the thrust of its argument, which is a rehash of several post-hoc medical/scientific explanations for vampirism that have been debunked numerous times:
“…two in particular show solid links. One is rabies, whose name comes from a Latin term for “madness.” It’s one of the oldest recognized diseases on the planet, transmissible from animals to humans, and primarily spread through biting—an obvious reference to a classic vampire trait.”
The massive problem with this explanation is that the vampires we’re talking about here are strigoi mort – animated corpses that the villagers identified as such, to the point of often digging up the suspect and trying to (re)kill them (and yes, I’m familiar with the strigoi vii, which were not thought to suck blood and were directly analogous to the western [living] witch). This is classic post hoc BS history; X disease resembles our modern impression of what Y folklore concept might have been, therefore X caused Y. When in fact there’s zero evidence for this and at best it’s unfalsifiable speculation. Based upon one article in a neurology (not a history or folklore) journal, the author also concludes that the rabies sufferer’s fear of water must be related to folklore tales of vampires being unable to cross running water (nope, that was witches again), and disturbed sleep patterns (yet again, the vampires we’re all talking about here are animated corpses, not insomniacs) plus increased aggression (I suppose any amount of aggression from a corpse qualifies as “increased”). Even the original rabies article from 1998 says that this explanation is just one possible cause of the vampire myth. You don’t have to be a folklore buff to realise that disease symptoms in the living cannot explain them in the dead.
The second alleged vampire disease cited in the Conversation/Atlas Obscura article is pellagra, which is even less convincing, since the author himself admits that it (and this is the second of his two top candidates for the origin of the vampire myth, remember);
“…did not exist in Eastern Europe until the 18th century, centuries after vampire beliefs had originally emerged.”
As Doctor Evil would say, “riiiiiiiiight…”. So how is there in *any way* a causal link between the two? There isn’t even any tradition of the classical blood-drinking vampire in the Americas; only its tuberculosis-linked cousin. No, sorry, these and in fact all disease explanations for vampirism have been, remain, and always will be, terrible. Just stop. Now, to redeem Atlas Obscura, here’s a much, much better article of theirs that completely agrees with me, and makes the excellent point that these lurid claims are not victimless, since real living people have to suffer with diseases like porphyria.
I’ve just watched a fascinating lecture from funerary and art historian Dr. Julian Litten on burial vaults. I learned a lot and greatly enjoyed it, but was very surprised to hear him recite the old chestnut that the smell of decaying bodies under church floors led to the expression ‘stinking rich’. This is just not true, as phrases.org.uk relates:
The real origin of stinking rich, which is a 20th-century phrase, is much more prosaic. ‘Stinking’ is merely an intensifier, like the ‘drop-dead’ of drop-dead gorgeous, the ‘lead pipe’ of lead pipe cinch or, more pertinent in this case, the ‘stark-raving’ of stark-raving mad. It has been called upon as an intensifier in other expressions, for example, ‘stinking drunk’ and ‘we don’t need no stinking badges’
The phrase’s real derivation lies quite a distance from Victorian England in geography as well as in date. The earliest use of it that I can find in print is in the Montana newspaper The Independent, November 1925:
He had seen her beside the paddock. “American.” Mrs Murgatroyd had said. “From New England – stinking rich”.
However, I thought I’d check, and I did find an earlier cite, from ‘V.C.: A Chronicle of Castle Barfield and of the Crimea’, by David Christie Murray (1904, p. 92);
“I’m stinking rich – you know – disgraceful rich.”
Nothing earlier than that however. So I would add to the explanation at phrases.org.uk and say that it’s more of an expression of disgust; someone is so rich that it’s obscene and figuratively ‘stinks’. If we had any early 19th century or older cites, I’d grant that it could have been influenced in some way by intramural burial, but this was rare by the turn of the 20th century and lead coffins had been a legal requirement since 1849. Litten suggests that unscrupulous cabinetmakers might omit the lead coffin, leading to ‘effluvia’, but even then I can’t imagine that was common as it would be obvious when it had happened and whose interment was likely to have caused it, resulting in complaints and most likely reburial.
Litten also repeated a version of the myth of Enon Chapel, which is a story I’ve been working on and will be forthcoming, but added a claim that I have yet to come across; that the decomposition gases from the crypt below were so thick that they made the gas lighting in the chapel above ‘burn brighter’. I don’t know where this comes from and it hardly seems plausible. Dr Waller Lewis, the UK’s first Chief Medical Officer, wrote on the subject in an 1851 article in The Lancet entitled ‘ON THE CHEMICAL AND GENERAL EFFECTS OF THE PRACTICE OF INTERMENT IN VAULTS AND CATACOMBS’. Lewis stated that: “I have never met with any person who has actually seen coffin-gas inflame” and reported that experiments had been carried out and “in every instance it extinguished the flame”. This makes sense, since it was not decomposition gases per se (and certainly not ‘miasma’ as was often claimed at the time) that made workers light-headed or pass out in vaults – it was the absence of oxygen and high concentration of CO2 that caused this. Hence reports of candles going out rather than inflaming more.
Unfortunately, even the best of us are not immune to a little BS history. It was nonetheless a privilege to hear Dr. Litten speak.
When I last wrote on the Beast of Gévaudan, I said that I couldn’t rule out the involvement of one or more human murderers whose actions could have been conflated with several wolves and possibly other wild animals killing French peasants between 1764 and 1767. I meant that literally; the Beast was a craze, and it’s perfectly possible that one or more of the victims was in fact murdered. We have no evidence for that, of course, and certainly not for the claim, sometimes made, that the whole thing was the work of a serial killer. This was recently repeated in this otherwise very good video from YouTube channel ‘Storied’ (part two of two; both parts feature the excellent Kaja Franck, whom I was fortunate to meet at a conference some years ago). Meagan Navarro of the horror (fiction) website Bloody Disgusting states the following:
“The Beast of Gevaudan or the Werewolf of Dole, these were based on men that were serial killers and slaughtered, and folklore was a means of exploring and understanding those acts by transforming them into literal monsters.”
The ‘werewolf’ of Dole does indeed appear to have been a deluded individual who thought he was able to transform into a wolf and was convicted as such. However, this is not the case for Gévaudan, which is a well-documented piece of history, not some post-hoc rationalisation for a series of murders as she implies. The various attacks that comprise the story were widely reported at the time and in some detail (albeit embellishments were added later). No-one at the time suspected an ordinary person of the actual killings, and sightings consistently refer to a large beast, sometimes detailing how the kills were made. The idea of a human being somehow in control of the Beast was mooted at the time, as was the werewolf of folklore, but never a straightforward murderer. Of course, the idea of the serial killer was unknown until the late 19th century, and it wasn’t long after this that a specious connection was made. In 1910 French gynaecologist Dr. Paul Puech published an essay, ‘La Bête du Gévaudan’, followed in 1911 by another titled ‘Qu’était la bête du Gévaudan?’. Puech’s thin evidence amounted to;
1) The victims being of the same age and gender as those of Jack the Ripper and Joseph Vacher. In fact, women and children (including boys) were not only more physically vulnerable to attack generally, but were also the members of the shepherding families whose job it was to bring the sheep in at the end of the day. This is merely a coincidence.
2) Decapitation and needless mutilation. The latter is pretty subjective, especially if the animal itself might be rabid (plenty were) and therefore attacking beyond the needs of hunger alone. The relevance of decapitation depends upon whether a) this really happened and b) whether a wolf or wolves would be capable of it. Some victims were found to have been decapitated, something that these claimants assert is impossible for a wolf to achieve. I can’t really speak to how plausible this is, although tearing limbs from sizable prey animals is easily done, and if more than one animal were involved I’ve little doubt that they could remove a head if they wished. So, did these decapitations actually take place? Jay Smith’s ‘Monsters of the Gévaudan: The Making of a Beast’ relays plenty of reports of heads being ripped off. However, details of these reports themselves militate against the idea of a human killer. Take Catherine Valy, whose skull was recovered some time after her death. Captain of dragoons Jean-Baptiste Duhamel noted that “judging by the teeth marks imprinted [on the skull], this animal must have terrifying jaws and a powerful bite, because this woman’s head was split in two in the way a man’s mouth might crack a nut.” Duhamel, like everyone else involved, believed that he faced a large and powerful creature (whether natural or supernatural), not a mere human. Despite the intense attention of the local and national French authorities, not to mention the population at large, no suggestion was ever made nor any evidence ever found of a human murderer, and the panic ended in 1767 after several ordinary wolves were shot.
3) Similar deaths in 1765 in the Soissonnais, which he for some reason puts down to a copycat killer rather than, you know, more wolves. This reminds me of the mindset of many true crime writers; come up with your thesis and then go cherry-picking and misrepresenting the data to fit.

At the very least then, this claim is speculative, and should not be bandied about as fact (indeed, the YouTube channel should really have queried it). So, if not a serial killer, then what? French historian Emmanuel Le Roy Ladurie argues that the Beast was a local legend blown out of proportion to a national level by the rise of print media. Jean-Marc Moriceau reports 181 wolf killings through the 1760s, which puts into context the circa 100 killings over three years in one region of France. That is statistically remarkable, but within the capability of the country’s wolf population to achieve, especially given the viral and environmental pressures from rabies and the Little Ice Age respectively that Moriceau cites. If we combine these two takes, we get close to the truth, I think. ‘The’ Beast most likely consisted of some unusually violent attacks carried out by more than one wolf or packs of wolves, confabulated and exaggerated into the work of one supernatural beast, before ultimately being pinned by the authorities on several wolves: three shot by François Antoine in 1765 and another supposedly ‘extraordinary’ (yet actually ordinary-sized) wolf shot by Jean Chastel in 1767.
Clearly the majority of modern-day advocates (including all those YouTube commenters that I mentioned last time) aren’t aspiring members of the upper-middle or upper classes or avid followers of etiquette, so why does this schism among tea-drinkers persist? No doubt the influence of snobs like Nancy Mitford, Evelyn Waugh et al lingers, but for most it seems to boil down (ha) to personal preference. This has not calmed the debate any, however. Both sides, now mostly comprised of middle class folk such as myself, argue with equal certainty that their way is the only right way. Is Milk In First (MIF)/Milk In Last (MIL) really now a ‘senseless meme’ (as Professor Markman Ellis believes; see Part 1) – akin to the ‘big-endians’ and ‘little-endians’ of ‘Gulliver’s Travels’? Is there some objective truth to the two positions that underpins all this passion and explains why the debate has outlived class differences? Is there a way to reconcile or at least explain it so that we can stop this senseless quibbling? Well, no. We’re British. Quibbling and looking down on each other are two of our chief national pastimes. However, another of those pastimes is stubbornness, so let’s try anyway…
Today’s MILers protest that their method is necessary in order to be able to judge the strength of the tea by its colour. Yet clearly opinions on this differ and, as I showed in the video, sufficiently strong blends – and any amount of experience in making tea – render this moot. If you do ‘under milk’, you can add more to taste (although as I also noted, you might argue that this makes MIL the more expedient method). As we’ve seen with George Orwell vs the Tea & Coffee Trade, the colour/strength argument is highly subjective. Can science help us in terms of which way around is objectively better? Perhaps, although there are no rigorous scientific studies. In the early 2000s the Royal Society of Chemistry and Loughborough University both came out in favour of MIF. The RSC press release gives the actual science:
“Pour milk into the cup FIRST, followed by the tea, aiming to achieve a colour that is rich and attractive…Add fresh chilled milk, not UHT milk which contains denatured proteins and tastes bad. Milk should be added before the tea, because denaturation (degradation) of milk proteins is liable to occur if milk encounters temperatures above 75°C. If milk is poured into hot tea, individual drops separate from the bulk of the milk and come into contact with the high temperatures of the tea for enough time for significant denaturation to occur. This is much less likely to happen if hot water is added to the milk.”
It also transpires that an actual international standard (ISO 3103:1980, preceded by several British Standards going back to 1975) was agreed for tea-making way back in 1980, and this too dictated that tea should be added to milk “…in order to avoid scalding the milk”. This would obviously only happen if the tea is particularly hot, and indeed the standard includes a ‘milk last’ protocol in which the tea is kept below 80°C. Perhaps those favouring MIL simply like their tea cooler and so don’t run into the scalding problem? This might explain why I do prefer the taste of the same tea, with the same milk, made MIF from a pot, rather than MIL with a teabag in a cup… I like my tea super hot. So, the two methods can indeed taste different; a fact demonstrated by a famous statistical experiment (famous among statisticians; a commenter had to point this out to me) in which a lady was able to tell whether a cup of tea had been made MIF or MIL eight times out of eight.
“Already, quite soon after he had come to Rothamsted, his presence had transformed one commonplace tea time to an historic event. It happened one afternoon when he drew a cup of tea from the urn and offered it to the lady beside him, Dr. B. Muriel Bristol, an algologist. She declined it, stating that she preferred a cup into which the milk had been poured first. “Nonsense,” returned Fisher, smiling, “Surely it makes no difference.” But she maintained, with emphasis, that of course it did. From just behind, a voice suggested, “Let’s test her.” It was William Roach who was not long afterward to marry Miss Bristol. Immediately, they embarked on the preliminaries of the experiment, Roach assisting with the cups and exulting that Miss Bristol divined correctly more than enough of those cups into which tea had been poured first to prove her case.”
– Fisher Box, 1978, p. 134.
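The arithmetic behind that experiment (R. A. Fisher’s famous ‘lady tasting tea’ design, as he later formalised it) is worth sketching. There were eight cups, four made each way, and the taster knew the split; a perfect score by pure guesswork means picking the right four ‘milk first’ cups out of all the possible selections. A quick back-of-the-envelope check in Python (my sketch, not from any of the sources above):

```python
from math import comb

# Fisher's design: 8 cups, 4 milk-first and 4 milk-last, with the
# taster told there are exactly 4 of each. Guessing blindly means
# picking one of the C(8,4) equally likely sets of 4 cups.
arrangements = comb(8, 4)            # number of possible selections
p_perfect_by_chance = 1 / arrangements

print(arrangements)                  # 70
print(f"{p_perfect_by_chance:.4f}")  # 0.0143
```

In other words, a perfect score has only about a 1.4% probability of occurring by luck alone, which is low enough to suggest the lady really could taste the difference. That is essentially the logic of what statisticians now call Fisher’s exact test.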
This of course doesn’t help with which is objectively better, but does suggest that one side may be ‘right’. However, as well as temperature, the strength of the brew may also make a difference here, one that might explain why this debate rumbles on with no clear victor. A commenter on a Guardian article explains the chemistry of a cup of tea;
“IN THE teacup, two chemical reactions take place which alter the protein of the milk: denaturing and tanning. The first, the change that takes place in milk when it is heated, depends only on temperature. ‘Milk-first’ gradually brings the contents of the cup up from fridge-cool. ‘Milk-last’ rapidly heats the first drop of milk almost to the temperature of the teapot, denaturing it to a greater degree and so developing more ‘boiled milk’ flavour. The second reaction is analogous to the tanning of leather. Just as the protein of untanned hide is combined with tannin to form chemically tough collagen/tannin complexes, so in the teacup, the milk’s protein turns into tannin/casein complexes. But there is a difference: in leather every reactive point on the protein molecule is taken up by a tannin molecule, but this need not be so in tea. Unless the brew is strong enough to tan all the casein completely, ‘milk-first’ will react differently from ‘milk-last’ in the way it distributes the tannin through the casein. In ‘milk-first’, all the casein tans uniformly; in ‘milk-last’ the first molecules of casein entering the cup tan more thoroughly than the last ones. If the proportions of tannin to casein are near to chemical equality, ‘which-first’ may determine whether some of the casein escapes tanning entirely. There is no reason why this difference should not alter the taste.“
Both the scalding and the denaturation/tanning explanations are referenced in the popular science book ‘Riddles in Your Teacup’ (p. 90), the authors having consulted physicists (who favour a temperature explanation) and chemists (who of course take a chemistry-based view) on this question. I also found this interesting explanation, from an 1870 edition of the Boston Journal of Chemistry, of tannins in tea and how milk reacts with them to change the taste of the tea. This supports the idea, as does the tea-tasting lady’s ability to tell the difference, that MIF and MIL can result in a different taste. Needless to say, people have different palates and preferences and it’s likely that some prefer their tannins left unchecked (black tea), fully suppressed (milk in first), or partly mitigated (milk in last). However, if your tea is strong enough, the difference in taste will be small or even non-existent, as the tannins will shine through regardless and you’ll just get the additional flavour of the milk (perhaps tasting slightly boiled?). My preferred blend (Betty’s Tea Room blend) absolutely does retain this astringent taste regardless of which method I use or even how hot the water is (even if I do prefer it hot and MIF!).
So, the available scientific advice does favour MIF, for what it’s worth, which interestingly bears out those early reports of upper class tea aficionados and later ‘below stairs’ types who both preferred it this way. However, the difference isn’t huge and depends what temperature the tea is when you hit it with the milk, how strong the brew is, and what blend you use. It’s a bit like unevenly steamed milk in a latte or cappuccino; it’s fine, but it’s nicer when it has that smooth, foamed texture and hasn’t been scalded by the wand. The bottom line, which is what I was trying to say in my YouTube response, is that it’s basically just fashion/habit and doesn’t much matter either way (despite the amount I’ve said and written about it!) – to which I can now add the taste preference and chemical change aspects. If you pour your tea at a lower temperature, the milk won’t get so denatured/scalded, and even this small difference won’t occur. Even if you pour it hot, you might not mind or notice the difference in taste. As for the historical explanation of cracking cups, it’s probably bollocks, albeit rooted in the fact of substandard British teaware. As readers of this blog will know by now, these neat origin stories generally do turn out to be made up after the fact, and the real history is more nuanced. This story is no different.
To recap; when tea was introduced in the 17th century most people drank it black. By the early 19th century milk had become widely used as an option that you added to the poured tea, like sugar. Later that century, some found that they preferred putting the milk in first and were thought particular for doing so (marking the start of the Great Tea Schism). Aside from being a minority individual preference, most upper class hostesses continued to serve MIL (as Hartley recommended) because when hosting numbers of fussy guests, serving the tea first and offering milk, sugar and lemon to add to their own taste was simply more practical and efficient. Guests cannot object to their tea if they are responsible for putting it together, and this way, everyone gets served at the same time. Rather than outline this practical justification, the 1920s snobs chose to frame the debate in terms of class, setting in stone MIL as the only ‘proper’ way. This, probably combined with a residual idea that black tea was the default and milk something that you added, and doubtless also as a result of the increasing dominance of tea-making using a teabag and mug/cup (where MIL really is the only acceptable method), left a lot of non-upper class people with the idea that MIL was objectively correct. Finally, as the class system broke down, milk first or last became the (mostly) good-natured debate that it is today.
All of this baggage (especially, in my view, the outdated class snobbery aspect) should be irrelevant to how we take our tea today, and should have been even back then. As far back as 1927, J.B. Priestley used his Saturday Review column to mock the snobs who criticised “…those who pour the milk in first…”. The Duke of Bedford’s ‘Book of Snobs’ (1965, p. 42) lamented the ongoing snobbery over ‘milk in first’ as “…stigmatizing millions to hopelessly inferior status…”. Today, upper class views on what is correct or incorrect are roundly ignored by the majority, and most arguing in favour of MIL would not claim that you should do it because the upper class said that you should, and probably don’t even realise that this is where it came from. Even high-end tea-peddlers Fortnum & Mason note that you should “…pour your tea as you please”. Each person’s view on this is a product of family custom and upbringing, social class, and individual preference; a potent mixture that leads to some strong opinions! Alternatively, like me, you drink your tea sufficiently strong that it barely matters (note I said ‘barely’ – I remain a heretical MIF for life). What does matter, of course, in tea as in all things, is knowing what you like and how to achieve it, as this final quote underlines:
…no rules will insure good tea-making. Poeta nascitur non fit,* and it may be said similarly, you are born a tea-maker, but you cannot become one.
-Samuel Kneeland, About Making Tea (1870). *A Latin expression meaning that poets are born and not made.
References (for both Parts):
Bedford, John Robert Russell, George Mikes & Nicholas Bentley. 1965. The Duke of Bedford’s Book of Snobs. London: P. Owen.
Bennett, Arnold. 1912. Helen With the High Hand. London: Chapman and Hall.
Betjeman, John. 1956. ‘How to Get on in Society’ in Noblesse Oblige: An Enquiry into the Identifiable Characteristics of the English Aristocracy (Nancy Mitford, ed.). London: Hamish Hamilton.
Boston Journal of Chemistry. 1870. ‘Familiar Science – Leather in the Tea-Cup’. Vol. V, No. 3.
Waugh, Evelyn. 1956. ‘An Open Letter to the Honble Mrs Peter Rodd (Nancy Mitford) On a Very Serious Subject’ in Noblesse Oblige: An Enquiry into the Identifiable Characteristics of the English Aristocracy (Nancy Mitford, ed.). London: Hamish Hamilton.
The Short Version: Pouring tea (from a teapot) with the milk in the cup first was an acceptable, if minority, preference regardless of class until the 1920s, when upper class tea drinkers decided that it was something that only the lower classes did. It does affect the taste but whether in a positive or negative way (or whether you even notice/care) is strictly a matter of preference. So, if we’re to ignore silly class-based snobbery, milk-in-first remains an acceptable alternative method. Unless you are making your tea in a mug or cup with a teabag, in which case, for the love of god, put the milk in last, or you’ll kill the infusion process stone dead.
This article first appeared in a beautifully designed ‘Tea Ration’ booklet designed by Headstamp Publishing for Kickstarter supporters of my book (Ferguson, 2020). Now that these lovely people have had their books (and booklets) for a while, I thought it time to unleash a slightly revised version on anyone else that might care! It’s a long read, so I’ll break it into two parts (references in Part 2, now added here, for those interested)…
Part 1: The History
Like many of my fellow Britons, I drink an enormous amount of tea. By ‘tea’, I mean tea as drunk in Britain, the Republic of Ireland and to a large extent in the Commonwealth. This takes the form of strong blends of black leaves, served hot with (usually) milk and (optionally) sugar. I have long been aware of the debate over whether to put the milk into the cup first or last, and that passions can run pretty high over this (as in all areas of tea preference). For a long time however, I did not grasp just how strong these views were until I read comments made on a video (Ferguson & McCollum, 2020) made to support the launch of my book ‘Thorneycroft to SA80: British Bullpup Firearms 1901 – 2020’. This showed brewed tea being poured into a cup already containing milk, which caused a flurry of mock (and perhaps some genuine) horror in the comments section. Commenters were overwhelmingly in favour of putting milk in last (henceforth ‘MIL’) and not the other way around (‘milk in first’ or ‘MIF’). This is superficially supported by a 2018 survey in which 79% of participants agreed with MIL (Smith, 2018). This survey was seriously flawed in not specifying the use of a teapot or individual mug/cup as the brewing receptacle. Very few British/Irish-style tea drinkers would ever drop a teabag in on top of milk, as this soaks into the bag, preventing most of the leaves from infusing into the hot water. Most of us these days only break out the teapot (and especially the loose-leaf tea, china cups, tea-tray etc) on special occasions, and it takes a conscious effort to try the milk in first.
Regardless, anecdotally at least it does seem that a majority would still argue for MIL even when using a teapot. This might seem only logical; tea is the drink, milk is the additive. The main justifications given were the alleged difficulty of judging the colour and therefore the strength of the mixture, and an interesting historical claim that only working class people in the past had put milk in first, in order to protect their cheap porcelain cups. The practicalities seemed to be secondary to some idea of an objectively ‘right’ way to do it, however, with many expressing mock (perhaps in some cases, genuine) horror at MIF. This vehement reaction drove me to investigate, coming to the tentative conclusion that there was a strong social class influence and releasing a follow-up video in which I acknowledged this received wisdom (Ferguson, 2020). I also demonstrated making a cup of perfectly strong tea using MIF, thus empirically proving the colour/strength argument wrong – given a suitably strong blend and brew of course. The initial source that I found confirmed the modern view on the etiquette of tea making and the colour justification. This was ‘Tea & Etiquette’ (1998, pp. 74-75) written by American Dorothea Johnson. Johnson warns ‘Don’t put the milk in before the tea because then you cannot judge the strength of the tea by its color…’
‘ …don’t be guilty of this faux pas…’
Johnson then lists ‘Good Reasons to Add Milk After the Tea is Poured into a Cup’, as follows:
The butler in the popular 1970s television program Upstairs, Downstairs kindly gave the following advice to the household servants who were arguing about the virtues of adding milk before or after the tea is poured: “Those of us downstairs put the milk in first, while those upstairs put the milk in last.”
Moyra Bremner, author of Enquire Within Upon Modern Etiquette and Successful Behaviour, says, “Milk, strictly speaking, goes in after the tea.”
According to the English writer Evelyn Waugh, “All nannies and many governesses… put the milk in first.”
And, by the way, Queen Elizabeth II adds the milk in last.
Unlike the video comments, which did not directly reference social class, this assessment practically drips with snobbery, thinly veiled with the practical but subjective justification that one cannot judge the colour (and hence strength) of the final brew as easily. Still, it pointed toward the fact that there really was somehow a broadly acknowledged ‘right’ way, which surprised me. The handful of other etiquette and household books that I found in my quick search seemed to agree, and in a modern context there is no doubt that ‘milk in last’ (MIL) has come to be seen as the ‘proper’ way. However, as I suspected, there is definitely more to it—milk last wasn’t always the prescribed method, and it isn’t necessarily the best way to make your ‘cuppa’ either…
So, to the history books themselves… I spent longer than is healthy perusing ladies’ etiquette books and, as it turns out, only the modern ones assert that milk should go in last or imply that there is any kind of class aspect to be borne in mind. In fact, Elizabeth Emma Rice in her Domestic Economy (1884, p. 139) states confidently that:
“…those who make the best tea generally put the sugar and milk in the cup, and then pour in the hot tea.”
I checked all of the etiquette books that I could find electronically, regardless of time period, and only one other is prescriptive with regard to serving milk with tea. This is The Ladies’ Book of Etiquette, and Manual of Politeness: A Complete Handbook, by Florence Hartley (1860, pp. 105–106), which passes no judgement on which is superior, but recommends for convenience that cups of tea are poured and passed around to be milked and sugared to taste. This may provide a practical underpinning to the upper-class preference for MIL; getting someone’s cup of tea wrong would be a real issue at a gathering or party. You either had to ask how the guest liked it and have them ‘say when’ to stop pouring the milk, which would take time and be fraught with difficulty or, more likely, you simply poured a cup for each and let them add milk and sugar to their taste. This also speaks to how tea was originally drunk (as fresh coffee still is)—black, with milk if you wanted it. A working-class household was less likely to host large gatherings or have a need to impress people. There it was more convenient to add roughly the same amount of milk to each cup, and then fill the rest with tea. In such a household, you would simply be given a cup made as the host deemed fit, or perhaps be asked how you like it. If thought sufficiently fussy, you might be told to make it yourself! In any case, Hartley was an American writing for Americans, and I found no pre-First World War British guides that actually recommended milk in last. As noted, the only guide that did cover it (Rice) actually favours milk in first.
Much of my research aligns with that presented in a superb article by Professor Markman Ellis of the Queen Mary University History of Tea Project. Ellis agrees that the ‘milk in first or last’ thing was really about the British class system—which helps explain why I found so few pre-Second World War references to the dilemma. His thesis boils down (ha!) to a crisis of identity among the post-First World War upper class. In the 1920s, the wealth gap between the growing middle class and the upper class was narrowing. This is where the expression nouveau riche—the new rich—comes from; they had the money but, as the ‘true’ upper class saw it, not the ‘breeding’. They could pose as upper class, but could never be upper class. Of course, that very middle class would, in its turn, come to look down on aspiring working-class people (think Hyacinth Bucket from British situation comedy Keeping Up Appearances). In any case, if you cared about appearances and reputation among your upper-class peers or felt threatened by social mobility, you had to have a way of setting yourself apart from the ’lower classes’. Arbitrary rulesets that included MIL were a way to do this. Ellis cites several pre-First World War sources (dating back as far as 1846) which comment on how individuals took their tea. These suggest that milk-in-first (MIF) was thought somewhat unusual, but the sources pass no judgement and don’t mention that this is thought to be a working class phenomenon. Adding milk to tea was, logically enough, how it was originally done—black tea came first and milk was an addition. Additions are added, after all. As preferences developed, some would have tried milk first and liked it. This alone explains why those adding milk first might seem eccentric, but not ‘wrong’ per se. In fact, by the first decade of the 20th century, MIF had become downright fashionable, at least among the middle class, as Helen with the High Hand (1910) shows. 
In this novel, the titular Helen states that an “…authority on China tea…” should know that “…milk ought to be poured in first. Why, it makes quite a different taste!” It was this presumptuous attitude (how dare the lower classes tell us how to make our tea?!) that influenced the upper-class rejection of the practice in later decades.
This brings us back to Ellis’s explanation of where the practice originated, and also explains the context of Evelyn Waugh’s comments as reported by Johnson. These come from Waugh’s contribution to Noblesse Oblige—a book that codified the latest habits of the English aristocracy. Ellis dismisses the authors and editor as snobs of the sort that originated and perpetuated the tea/milk meme. However, in fairness to Waugh, he does make clear that he’s talking about the view of some of his peers, not necessarily his own, and even gives credit to MIF ‘tea-fanciers’ for trying to make the tea taste better. His full comments are as follows:
All nannies and many governesses, when pouring out tea, put the milk in first. (It is said by tea-fanciers to produce a richer mixture.) Sharp children notice that this is not normally done in the drawing-room. To some this revelation becomes symbolic. We have a friend you may remember, far from conventional in other ways, who makes it her touchstone. “Rather MIF, darling,” she says in condemnation.
Incidentally, I erroneously stated that governesses were ‘working class’ in my original video on this topic. In fact, although nannies often were, the governess was typically of the middle class, or even an impoverished upper-middle or upper class woman. Both roles occupied a space between classes, being neither one nor the other but excluded from ever being truly ‘U’. As a result, they were free to make tea as they thought best. Waugh’s view is not the only tea-related one in the book. Poet John Betjeman also alluded to this growing view that MIF was a lower class behaviour in his long list of things that would mark out the speaker as a member of the middle class:
Milk and then just as it comes dear?
I’m afraid the preserve’s full of stones;
Beg pardon I’m soiling the doileys
With afternoon tea-cakes and scones.
Returning to the etiquette books, although the early ones were written for those running an upper-class household, the latter-day efforts like Johnson’s are actually aimed at those aspiring to behave like, or at least fascinated by, the British upper class. This is why Johnson invokes famous posh Britons and even the Queen herself to make her point to her American audience. Interestingly though, Johnson takes Samuel Twining’s name in vain. The ninth-generation member of the famous Twining tea company is in fact an advocate of milk first, and he too thought that MIL came from snobbery:
With a wave of his hand, Mr. Twining dismisses this idea as nonsense. “Of course you have to put the milk in first to make a proper cup of tea.” He surmises that upper-class snobbery about pouring the tea first, had its origins in their desire to show that their cups were pure imported Chinese porcelain.
–Guanghua (光華) magazine, 1995, Volume 20, Issues 7-12, p. 19.
Twining goes on to explain his hypothesis that the lower classes only had access to poor quality porcelain that could not withstand the thermal shock of hot liquid, and so had to put the milk in first to protect the cup. Plausible enough, but almost certainly wrong. As Ellis explains in his article:
…tea was consumed in Britain for almost two centuries before milk was commonly added, without damaging the cups, and in any case the whole point of porcelain, other than its beauty, was its thermo-resistance.
Food journalist Beverly Dubrin mentions the theory in her book ‘Tea Culture: History, Traditions, Celebrations, Recipes & More’ (2012, p. 24), but identifies it as ‘speculation’. I could find no historical references to the cracking of teacups until after the Second World War. The claim first appears in a 1947 issue of the American-published (but international in scope) ‘Tea & Coffee Trade Journal’ (Volumes 92–93, p. 11), along with yet another pro-MIF comment:
…MILK FIRST in the TEA, PLEASE! Do you pour the milk in your cup before the tea? Whatever your menfolk might say, it isn’t merely ‘an old wives’ tale : it’s a survival from better times than these, when valuable porcelain cups were commonly in use. The cold milk prevented the boiling liquor cracking the cups. Just plain common sense, of course. But there is more in it than that, as you wives know — tea looks better and tastes better made that way.
The only references to cracking teaware that I’ve found were to the teapot itself, into which you’d be pouring truly boiling water if you wanted the best brewing results. Several books mention the inferiority of British ‘soft’ porcelain in the 18th century, made without “access to the kaolin clay from which hard porcelain was made”, as Paul Monod says in his 2009 book ‘Imperial Island: A History of Britain and Its Empire, 1660-1837’. By the Victorian period this “genuine or true” porcelain was only “occasionally” made in Britain, as this interesting 1845 source relates, and remained expensive (whether British or imported) into the 20th century. This has no doubt contributed to the explanation that the milk was put there to protect the cups, even though the pot was by far the bigger worry and there are plenty of surviving soft-paste porcelain teacups today without cracks (e.g. this Georgian example). Of course, it isn’t actually necessary for cracking to be a realistic concern, only that the perception existed, and so we can’t rule it out as a factor. However, that early ‘Tea & Coffee Trade Journal’ mention is also interesting because it omits any reference to social class and implies that this was something that everyone used to do for practical reasons, and is now done as a matter of preference. Likewise, on the other side of the debate, author and Spanish Civil War veteran George Orwell argued in favour of MIL in a piece for the Evening Standard (January 1946) entitled ‘A Nice Cup of Tea’:
…by putting the tea in first and stirring as one pours, one can exactly regulate the amount of milk whereas one is liable to put in too much milk if one does it the other way round.
This reiterated his earlier advice captured in this wonderful video from the Spanish trenches. However, Orwell acknowledged that the method of adding milk was “…one of the most controversial points of all…” and admitted that “the milk-first school can bring forward some fairly strong arguments.” Orwell (who himself hailed from the upper middle class) doesn’t mention class differences or worries over cracking cups.
By the 1960s people were more routinely denouncing MIF as a working class practice, although even at this late stage there was disagreement. Upper class explorer and writer James Maurice Scott in ‘The Tea Story’ (1964, p. 112) commented:
The argument as to which should be put first into the cup, the tea or the milk, is as old and unsolvable as which came first, the chicken or the egg. There is, I think, a vague feeling that it is Non-U to put the milk in first – why, goodness knows.
It’s important to note that ‘U’ and ‘Non-U’ were shorthand for ‘Upper-Class’ and ‘Non-Upper-Class’, invented by Professor Alan Ross in his 1954 linguistic study and unironically embraced by the likes of Mitford as a way to ‘other’ those that they saw as inferior.
The New Yorker magazine (1965, p. 26) reported a more emphatic advisory (seemingly a trick question!) given to an American visitor to London:
Do you like milk in first or tea in first? You know, putting milk in the cup first is a working-class custom, and tea first is not.
This, then, was the status quo reflected in the British TV programme ‘Upstairs, Downstairs’ in the 1970s, which helped to expose new audiences to the idea that MIF was ‘not the done thing’. Lending libraries and affordable paperback editions afforded easy access to books like Noblesse Oblige. The 1980s then saw the modern breed of etiquette books (like ‘Miss Manners’ Guide to Excruciatingly Correct Behavior’) that rehashed this snobbery for an American audience fascinated with the British upper class. Ironically of course, any American would have been unquestionably ‘Non-U’ to any upper class Brit, just as any working or middle-class Briton would have been. And finally (again covered by Ellis), much like the changing fashion of the extended pinkie finger (which started as an upper class habit and then became ‘common’ when it trickled down to the lower classes – see my article here), the upper class decided that worrying about the milk in your tea was now vulgar. Having caused the fuss in the first place, they retired to their collective drawing room, leaving us common folk to endlessly debate the merits of MIF/MIL…
That’s it for now. Next time: Why does anyone still care about this?
“If men had wings and bore black feathers, Few of them would be clever enough to be crows.”
-Henry Ward Beecher
Unfortunately, as quotes in PowerPoint presentations often are, this is incorrect.
The actual quote is:
“Take off the wings, and put him in breeches, and crows make fair average men. Give men wings, and reduce their smartness a little, and many of them would be almost good enough to be crows.”
Some time into researching the origins of this, I came across this blog post, which correctly identifies that the above is the original wording and that Beecher was indeed its originator. However, taking things a little further, I can confirm that the first appearance of this was NOT ‘Our Dumb Animals’ but rather The New York Ledger. Beecher’s regular (weekly) column in the Ledger was renowned at the time. Unfortunately, I can’t find any 1869 issues of the Ledger online, so I can’t fully pin this one down. Based upon its appearance in the former publication in May of 1870, and various other references from publications that summer (e.g. this one) to “a recent issue of the Ledger”, it appeared in early 1870. From there it was reprinted in various other periodicals and newspapers including ‘Our Dumb Animals’ (even if the latter doesn’t credit the Ledger as other reprints did).
So how did the incorrect version come about? It was very likely just a misquote or rather, a series of misquotes and paraphrasings. Even some of the early direct quotes got it wrong. One 1873 reprint drops the word ‘almost’, blunting Beecher’s acerbic wit slightly. Saying that many men would be good enough to be crows is kinder than saying that many would be almost good enough. Fairly early on, authors moved to paraphrasing, for example in 1891’s ‘Collected Reports Relating to Agriculture’ we find:
“…Henry Ward Beecher long ago remarked that if men were feathered out and given a pair of wings, a very few of them would be clever enough to be crows.”
This appeared almost verbatim twenty years later in Coburn’s ‘The Behavior of the Crow’ (1923). Two years later, Glover Morrill Allen’s ‘Birds and Their Attributes’ (1925, p.222) gave us a new version:
“…Henry Ward Beecher was correct when he said that if men could be feathered and provided with wings, very few would be clever enough to be Crows!”
It was this form that was repeated from then on, crucially in some cases (such as Bent’s 1946 ‘Life Histories of North American Birds’) with added quotation marks, making it appear to later readers that these were Beecher’s actual words. Interestingly, the earliest occurrence of the wording ‘very few would prove clever enough’ (my emphasis) seems to emerge later, and is credited to naturalist Henry David Thoreau:
“… once said that if men could be turned into birds, each in accordance with his individual capacity, very few would prove clever enough to be Crows.”
-Bulletin of the Massachusetts Audubon Society in 1942 (p.11).
I can find no evidence that Thoreau ever said anything like this, and of course it’s also suspiciously similar to the Beecher versions floating about at the same time (here’s another from a 1943 issue of ‘Nature Magazine’, p. 401). Thus, I suspect, the Thoreau attribution is a red herring, probably a straight-up mistake by a lone author. In any case, relatively few (only eight that I could detect via Google Books) have run with that attribution since, and these can likely be traced back to the MA Audubon Society error.
So, we are seeing here a game of literary ‘telephone’ from the original Beecher tract in 1870 via various misquotes in the 1920s – 1950s that solidified the version that’s still floating around today. Pleasingly, although his wording has been thoroughly mangled, the meaning remains intact. The key difference is that Beecher was using the attributes of the crow to disparage human beings based upon the low opinion that his fellow man then held of corvids. Despite this, Beecher very clearly did respect the intelligence of the bird as much as the 20th century birders who referenced him, and those of us today who also love the corvids. I think it’s important to be reminded that, as his version shows, widespread affection for corvids is a very recent thing. We should never forget how badly we have mistreated them and, sadly, continue to do so in many places.
A still from Oren Bell’s brilliant interactive timeline for Endgame as a multiverse movie. He disagrees with both writers and directors on the ending – check it out on his site here
With the new time travel-centric Marvel TV series Loki about to debut, I thought it was time (ha) for another dabble in the genre with a look at 2019’s Avengers: Endgame. (SPOILERS for those who somehow have yet to see it). To no-one’s surprise, the writers of Endgame opted to wrap up both a 20+ film long story arc and a cliffhanger involving the death of half the universe by recourse to that old chestnut of time travel (an old chestnut I love though!). The film did so in a superficially clever way, comparing itself to and distancing itself from (quote) “bullshit” stories like ‘Back to the Future’ and ‘The Terminator’. The more I’ve thought and read about it though, the more I realise that it’s no more scientific in its approach than those movies. “No shit” I hear you say, but there are plenty of people out there who are convinced that this is superior time travel storytelling, and possibly even ‘makes perfect sense’. In reality, although it ends up mostly making sense, this is perhaps more by luck than judgement. I still loved the film, by the way; I’m just interested in how we all ended up convinced that it was ‘good’ (by which I mean consistent and logical) time travel, because it isn’t!
tl;dr – Endgame wasn’t written as a multiverse time travel story – although it can be made to work as one.
Many, myself included, understood Endgame to differ from most time travel stories by working on the basis of ‘multiverse’ theory, in which making some change in the past (possibly even the act of time travel itself) causes the universe to branch. This is a fictional reflection of the ‘Many Worlds’ interpretation of quantum mechanics in which the universe is constantly branching into parallel realities. As no branching per se was shown on camera, I assumed that it was the act of time travel itself that branched reality, landing the characters in a fresh, indeterminate future in which anything is possible. My belief was reinforced by an interview with physicist Sean Carroll, a champion of this interpretation and a scientific advisor on the movie. I was actually really pleased; multiverse time travel is incredibly rare (the only filmed attempt I’m aware of was Corridor Digital’s short-lived ‘Lifeline’ series on YouTube Premium). I’m not really sure why this is but regardless, the idea certainly works for Endgame as time travel is really just a means to an end i.e. getting hold of the Infinity Stones. I wasn’t the only one to assume something along these lines, which is why many were confused as to how the hell Captain America ended up on that bench at the end of the movie. If, as it seemed to, the film worked on branching realities, how could he have been there the whole time? If he wasn’t there the whole time and did in fact come from a branch reality that he’s been living in, how did he get back? Bewildered journalists asked both the writers and the directors (there are two of each) about this and got two different answers. The writers insisted that this was our Cap having lived in our timeline all along, although they later admitted that the directors’ view might also (i.e. instead) be valid, i.e. that he must have lived in a branch reality caused by changes made in the past. W, T, and indeed, F?
There is a good reason for this. The directors’ view is actually a retcon of the movie as written and filmed. Endgame is actually a self-consistent universe that you can’t alter and in which, therefore, time-duplicate Cap was always there. There is a multiverse element, but as we’ll see, this is bolted onto that core mechanic, and not very well, either. Let’s look at the evidence. The writers explain their take in this interview:
“It’s crucial to your film that in your formulation of time travel, changes to the past don’t alter our present. How did you decide this?
MARKUS We looked at a lot of time-travel stories and went, it doesn’t work that way.
McFEELY It was by necessity. If you have six MacGuffins and every time you go back it changes something, you’ve got Biff’s casino, exponentially. So we just couldn’t do that. We had physicists come in — more than one — who said, basically, “Back to the Future” is [bullshit].
MARKUS Basically said what the Hulk says in that scene, which is, if you go to the past, then the present becomes your past and the past becomes your future. So there’s absolutely no reason it would change.”
What these physicists were trying to tell them is that IF time travel to the past were possible, either a) whatever you do, you have already done, so nothing can change or b) your time travel and/or your actions create a branch reality, so you’re changing this, and not your past. Unfortunately the writers misunderstood what they meant by this and came up with a really weird hybrid approach, which is made clear in a couple of key scenes involving Hulk where the two parallel sets of time-travel rules are explained. As originally written and filmed these formed a single scene, with all the key dialogue delivered by the Ancient One. First, the original version of those famous Hulk lines that they allude to above (for the sake of time/space I won’t bother to repeat those here):
Of course, there will be consequences.
Yes… If we take the stones we alter time, and we’ll totally screw up our present-day even worse than it already is.
If you travel to the past from your present, then that past becomes your future, and your former present becomes your past. Therefore it cannot be altered by your new future.
This is deliberately, comedically obfuscatory, but is really simple if you break it down. All they’re saying is that you may be travelling into the past, but it’s your subjective future. If you could change the past, you’d disallow for your own presence there, because you’d have no reason to travel. In other words, you just can’t change the past, and paradoxes (or Bill & Ted-style games of one-upmanship) are impossible. On the face of it this dictates an immutable timeline; you were always there in the past, doing whatever you did, as in the films ‘Timecrimes’, ‘Twelve Monkeys’, or ‘Predestination’. In keeping with this, the writers claim that Captain America’s travel to the past to be with Peggy is also part of this. How? We’re coming to that. Most definitely not in keeping, however, is, well, most of the movie. We see the Avengers making overt changes to the past that we’ve already seen in prior movies, notably Captain America attacking his past self. How is this possible given the above rule? If it is possible despite this, how does 2012 Cap magically forget that this happened? The answers to both questions are contained in the next bit of dialogue:
Then all of this is for nothing.
No – no no, not exactly. If someone dies, they will always die. Death is.. Irreversible, but Thanos is not. Those you’ve lost have not died, they’ve been willed out of existence. Which means they can be willed back. But it doesn’t come cheap.
The Infinity Stones bind the universe together, creating what you experience as the flow of time. Remove one of these stones, this flow splits. Your timeline might benefit, but my new one would definitely not. For every stone that you remove, you create new very vulnerable timelines; millions will suffer.
In other words, because the Stones are critical to the flow of time and because later on a Stone is taken, the changes to the past of Steve’s own reality are effectively ‘fixed’, creating a new branch reality where he does remember fighting himself and the future pans out differently without changing his own past. We can try to speculate on what would have happened if the time travellers had made changes to the past and then a Stone hadn’t been taken, but this is unknowable since every change to what we know happened does get branched. Either the writers are lying to us, they don’t understand their own script, or – somehow – the taking of the Stones is effectively predestined, forming another aspect of the self-consistent universe of the movie. Logically of course, this is, to use the technical quantum mechanical term, bollocks. Events happening out of chronological order in time travel is fine; cause and effect are preserved, just not in the order to which we’re accustomed. However, you don’t get to change the past, then branch reality, then imply that the earlier change is not only retrospectively included in that branch, but is also predestined! This is a case of the cart before the horse; the whole point of branched realities is to allow for change to the past – it should not be possible to make any change prior to this point. The very concept is self-contradictory. If you can’t change the past, you can’t get to the point of taking a Stone to allow for a change to the past. The only way this works is if we accept that you can make changes, but as per the nonsense Ancient One/Hulk line, your present… “…cannot be altered by your new future.” Unfortunately, the writers have established rules and then immediately broken them in an attempt to avoid falling into the time travel cliche of pulling a Deadpool and stopping the villain in the past and yet retain the past-changing japes of those exact same conventional time travel movies. 
Recognising that the new branched realities would be left without important artefacts, they then explain how these ‘dark timelines’ are avoided:
Then we can’t take the stones.
Yet your world depends on it.
OK, what if… what if once we’re done we come back and return the stones?
[Then] the branch will be clipped, and the timeline restored.
Note that this is further evidence of the writers’ vision; if reality branches all the time, there’s no way to actually ‘save’ these timelines – only to create additional better ones. If reality only branches when a Stone is removed, putting it back ‘clips’ that branch as they explain. Still, on balance this interpretation is seriously flawed and convoluted. Luckily the version of this same scene from the final draft of the script (i.e., what we saw play out) helps us make sense of this mess (albeit not the dark timelines; they are still boned, I’m afraid!):
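For the programmers among you, the remove-a-Stone/return-a-Stone mechanic as the writers describe it can be sketched as a tiny toy model. To be clear, this is purely illustrative and entirely my own construction (the names Timeline, remove_stone and return_stone are invented for the sketch): a branch appears only when a Stone is taken, and returning the Stone to the moment it was taken clips that branch, leaving the prime reality as it was.

```python
# Toy model of the stated rules: branching happens ONLY on Stone removal,
# and returning a Stone 'clips' the branch it created.

class Timeline:
    def __init__(self, name):
        self.name = name
        self.branches = []  # child timelines created by Stone removals

def remove_stone(parent, stone):
    """Taking a Stone splits the flow of time: a new branch appears."""
    branch = Timeline(f"branch without the {stone} Stone")
    parent.branches.append(branch)
    return branch

def return_stone(parent, branch):
    """Returning the Stone at the moment it was taken clips the branch."""
    parent.branches.remove(branch)

prime = Timeline("prime reality")
taken = [remove_stone(prime, s) for s in ["Time", "Space", "Reality"]]
assert len(prime.branches) == 3  # three vulnerable 'dark' timelines exist

for b in taken:
    return_stone(prime, b)
assert prime.branches == []      # all branches clipped; timeline 'restored'
```

Note that under these rules nothing else the Avengers do in the past can create a branch, which is exactly the inconsistency discussed above: the model has no way to represent Cap punching his 2012 self, short of blaming it all on the Stones.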
At what cost?
The Infinity Stones create the experience you know as the flow of time. Remove one of the stones, and the flow splits.
Now, your timeline might benefit.
My new one…would definitely not.
In this new branch reality, without our chief weapon against the forces of darkness, our world would be overrun…
For each stone you remove, you’ll create a new, vulnerable timeline. Millions will suffer.
Now tell me, Doctor. Can your science prevent all that?
No. But it can erase it.
Astral Banner reaches in and grabs THE VIRTUAL TIME STONE.
ASTRAL BANNER (CONT’D)
Because once we’re done with the stones, we can return each one to its own timeline. At the moment it was taken. So chronologically, in that reality, the stone never left.
These changes have two significant effects (other than removing the potentially confusing attempt to differentiate being willed out of existence from ‘death’):
1) To move the time travel exposition earlier in the movie to avoid viewers wondering why they can’t just go back and change things.
To achieve this they added the obvious Hitler comparison (it may be no coincidence that this was a minor plot point in Deadpool 2!), along with pop culture touchstones to help the audience understand that this isn’t your grandfather’s (ha) time travel and you can’t simply go back and change your own past to fix your present. This works fine and doesn’t affect our interpretation of the movie’s time travel.
2) To de-emphasise the arbitrary nature of the Stones somehow being central to preventing a ‘dark’ timeline by pointing out that they’re essentially a means of defence against evil.
This is more critical. We go from the Stones ‘creating what you experience as the flow of time’ to them ‘creat[ing] the experience you know as the flow of time’, which I read as moving from them creating time itself, to simply the timeline that we know (i.e. where the universe has the Stones to defend itself). This provides more room for the interpretation that removing a Stone is simply a major change to the timeline, like any other, that would otherwise disallow for the future we know, and so results in reality branching to a new and parallel alternate future. Still, I really don’t think that improving time travel logic was the main aim here, or even necessarily an aim at all. The wording about how the Stones ‘bind the universe together’ may have been dropped as simply redundant, or possibly to soften the plothole that not only the ‘flow of time’ but also the ‘universe’ are just fine when the Stones all get destroyed in the present-day (2023) of the prime reality. If the filmmakers truly cared about their inconsistent rules, they had the perfect opportunity here to switch to a simple multiverse approach and record a single line of dialogue that would explain it without the need to change anything else. Here’s the equivalent line from Lifeline:
“Look, your fate is certain. Okay? It can’t be undone. Your every action taken is already part of a predetermined timeline and that is why I built the jump box. It doesn’t just jump an agent forward in time, it jumps them to a brand new timeline. Where new outcomes are possible.”
Anyway, back to that head-scratcher of an ending and the writers’ claim that Cap was always there as a time duplicate in his own past. They say this is the case because it’s not associated with the taking of a Stone. I have checked this, and they’re right; it’s the only change to the past that can’t be blamed on a Stone. There’s also no mention in the script (nor the alternate scene below) of alternate universes being created prior to the taking of a Stone. So, per the writers’ rules, Cap (and not some duplicate from another reality) is indeed living in his own past and not that of a branch reality. This was the intent “from the very first outline” of the movie, notwithstanding the later difference of opinion between writing and directing teams. To be clear, everyone involved does agree that he didn’t just go back (or back and sideways if you believe the directors) for his dance raincheck – he stayed there, got married and had Peggy’s two children. Which inevitably means that Steve somehow had to live a secret life with a secret marriage (maybe he did a ‘Vision’ and used his timesuit as a disguise?) and kissed his own great niece in Civil War (much like Marty McFly and his mum).
You can still choose to interpret Steve’s ‘retirement’ to his own past as a rewriting of the original timeline that alters Peggy’s future (i.e. who she married, who fathered her kids etc). Alternatively, you can believe the directors that Cap lived his life with the Peggy of a branch reality and returned (off camera!) to the prime reality to hand over the shield. But neither of these fits with the original vision for the movie that you can’t change your own past and it doesn’t branch unless a Stone is removed. There’s another problem with the writers’ logic here. Cap only gets to the past by having created and then ‘clipped’ all the branching realities. This means that the creation and destruction of these branches also always happened and is also part of an overarching self-consistent universe. Except that they can’t possibly be, for the reason I’ve already given above; we’ve seen the original timelines before they become branch realities, so we know something has in fact changed, and there can’t be an original timeline for Cap to have ended up in his own past!
So, Endgame as written and even as filmed (according to the writers) is really not the multiverse time travel movie that most of us thought. It’s a weird hybrid approach that you can sort of mash together into a convoluted fixed timeline involving multiple realities, but not really. It actually makes less sense than the films that it (jokingly) criticises and handwaves all consequences for time travel. Luckily, it can be salvaged if we overlook the resulting plothole of Captain America’s mysterious off-camera return and follow the interpretation of the directors. That is, that there’s no predestination, the Avengers are making changes, but every significant change (i.e. one that would otherwise change the future, like living a new life in the past with your sweetheart) creates a branch reality. Not just messing with Stones. This isn’t perfect; how could it be? It’s effectively a retcon. But it’s easily the better choice overall in my view. Why wouldn’t this be the case? It’s only logical. The only serious discrepancy is the remaining emphasis placed upon the significance of the Stones, which I think can be explained by the Ancient One’s overly mystical view of reality. She focuses on the earth-shattering consequences of messing with the Stones simply because she knows the gravity of those consequences. She doesn’t explicitly rule out other causes of branches. It likely doesn’t matter that they’re destroyed in the subjective present of the prime universe, because the ultimate threat she identifies is Thanos, and he’s been defeated, along with the previous threats that the Stones had a hand in, including of course ‘Variant’ Thanos from the 2014 branch (meaning that branch doesn’t have to contend with him and gets its Soul and Power Stones back). Of course, this interpretation has some dark implications: if significant changes create branches, then when Cap travels back to each existing branch to return each stone, reality must be branched again.
The Avengers have still created multiple new universes of potential suffering and death without one or more Stones, they’ve just karmically balanced things somewhat by creating a new set of positive branches that have all their Stones. Except for, again, the new Loki branch.
For me, the directors’ approach, whilst imperfect, is the best compromise between logic and narrative. It’s not clear whether they somehow thought this was the case all along, or whether they only recognised the inconsistencies in post-production or even following the movie’s release. The fact that the writing and directing teams weren’t already on the same page when they were interviewed tells me that, simply, not enough thought went into this aspect of the film. Why should we believe them? Well, the director’s role in the filmmaking process traditionally supersedes that of the writer, shaping both the final product and the audience’s view of it. Perhaps the most famous example is Ridley Scott’s influence on Deckard’s status as a replicant. You can still choose to believe that he is human based on the theatrical cut, ignoring Scott’s own intent, but this is contradicted by his later comments and director’s cuts. There’s also the fact that subsequent MCU entries suggest that the Russos’ multiverse model is indeed the right one. Unless Loki is going to steal yet more iterations of the Infinity Stones, the universe is going to get branched simply by his time travelling. If so, this will establish (albeit retroactively) that the Ancient One really was just being specific about the Stones because of the particularly Earth-shattering consequences of messing with their past (and the need to keep things simple for a general audience). It would also pretty much establish the Russos’ scenario for Captain America: that he really did live out his life in a branch reality before somehow returning to the prime reality to hand over his mysterious newly made shield (another plothole!) to Sam. Where he went after that, we may never know, but I hear he’s on the moon…
I’ve been following John Campbell’s YouTube channel since early on in the current COVID-19 pandemic. He does a good job of science communication, but something he mentioned recently had me reaching for my internets. He claimed that in the great plague of 1665-6, the villagers of Eyam in Derbyshire had selflessly quarantined themselves to protect their neighbours and suffered disproportionately. Most retellings (notably Wood’s 1859 ‘The History and Antiquities of Eyam’) link those two facts, emphasising that people opted to get sick and die rather than spread the disease. I hadn’t heard of Eyam, but the claim is widespread, and there’s even a museum dedicated to the event. It’s so widespread that it’s essentially now an accepted fact. It’s no surprise that this evidence of the capacity for altruism in the face of infectious disease was wheeled out during the current pandemic, including by the BBC. This is quite a nuanced one. The village certainly did suffer from the plague, and there was a quarantine. However, Patrick Wallis’ 2005 article ‘A dreadful heritage: interpreting epidemic disease at Eyam, 1666-2000’ (he’s written a more accessible summary for The Economist as well) shows that there is really no evidence that this was voluntary in any meaningful sense. Instead, as elsewhere in England, restrictions were imposed by those in charge, and neither the village’s isolation nor its high death toll (36% of the population, in line with the average mortality for the British Isles) was particularly unusual. Even the museum (which owes its existence to this traditional story) today gives accurate mortality figures (previously wrongly estimated at more than half the population) and explains that it was the local religious authorities who were responsible for the lockdown, rather than the ordinary folk.
The story of the supposedly willing sacrifice of the population only emerged some two hundred years after the fact and only became more mythologised over time (complete with made-up love story!). In Wallis’ words:
“Only a limited body of contemporary evidence survives, the principal artefacts being three letters by William Mompesson, which powerfully convey the personal impact the death of Catherine Mompesson had on him, and, in passing, mention some of the villagers’ responses. There is a copy of the parish register, made around 1705. Finally, there is the landscape of the parish, with its scattering of tombs. Two of the earliest accounts claim indirect connections through their authors’ conversations with the sons of Mompesson and Stanley. Beyond this scanty body of evidence, a voluminous body of ‘oral tradition’ published in the early nineteenth century by the local historian and tax collector William Wood provides the bulk of the sources.”
Mompesson, the rector, wrote three letters, which don’t mention anything about villagers volunteering for a cordon sanitaire. In one of them Mompesson describes the suffering of his fellow villagers and does describe anti-plague measures – the ‘fuming and purifying’ of woollens and burning of ‘goods’, ‘pest-houses’, and of course prayer – but there’s nothing on quarantine, voluntary or otherwise. Early printed accounts confirm that one was put in place, with provisions supplied by the Earl of Devonshire. They praise the behaviour of Mompesson and/or Stanley, his unseated nonconformist predecessor (who had remained in the village) in keeping inhabitants from leaving (even though Mompesson sent his children to safety) but again there is nothing about the residents choosing to sacrifice their freedom for the greater good (the greater good). The community spirit element of the story doesn’t enter the picture until 54 years later when Richard Mead updated his ‘Short Discourse Concerning Pestilential Contagion’ (8th Ed., 1722, see here) with this account:
“The plague was likewise at Eham, in the Peak of Derbyshire; being brought thither by means of a box sent from London to a taylor in that village, containing some materials relating to his trade…A Servant, who first opened the foresaid Box, complaining that the Goods were damp, was ordered to dry them at the Fire; but in doing it, was seized with the Plague, and died: the same Misfortune extended itself to all the rest of the Family, except the Taylor’s Wife, who alone survived. From hence the Distemper spread about and destroyed in that Village, and the rest of the Parish, though a small one, between two and three hundred Persons. But notwithstanding this so great Violence of the Disease, it was restrained from reaching beyond that Parish by the Care of the Rector; from whose Son, and another worthy Gentleman, I have the Relation. This Clergyman advised, that the Sick should be removed into Hutts or Barracks built upon the Common; and procuring by the Interest of the then Earl of Devonshire, that the People should be well furnished with Provisions, he took effectual Care, that no one should go out of the Parish: and by this means he protected his Neighbours from Infection with compleat Success.”
The information is pretty sound, coming from the rector’s son and so within living memory, and is much more plausible than a more ‘grassroots’ motive. Of course, the son is likely to have emphasised his own father’s role, but the bottom line is that the actual primary sources are few, and none suggest that the villagers took an active role. As Wallis puts it:
“The leadership of Stanley and Mompesson, respectively, is praised, but there is no hint of romance, tragedy, or even of distinction accruing to the rest of the community.”
He also suggests that the few villagers with the means to do so probably fled (certainly Mompesson ensured that his two children left, and tried to persuade his wife to). The majority could not afford to leave, and at this period likely wouldn’t have had friends or family elsewhere that they could go and stay with. They were also being provided with supplies to encourage them not to leave. This leads me to what I think is the key to whether you regard this one as myth or reality: the extent to which the quarantine can be seen as voluntary. The contradiction inherent in a ‘voluntary’ quarantine that was actually instigated by those in charge is highlighted by this phrasing from the website:
“…Mompesson and Stanley, the Rector of Eyam at that time [sic], who had persuaded the villagers to voluntarilly [sic] quarantine themselves to prevent the infection spreading to the surrounding towns and villages.”
I suppose the quarantine was ‘voluntary’ insofar as they didn’t nail people into their dwellings, but as far as I can tell this wasn’t standard practice in the countryside anyway. That was done in towns and cities, where too many were infected to quarantine them in pest-houses or hospitals and the risk of escalating infection was too great not to do it. Personally, I don’t think you can meaningfully call what happened at Eyam ‘voluntary’. The people of Eyam were most likely just doing what they were told and, as noted above, had little other option. This aspect of the situation isn’t that different (save the much worse fatality rate of plague) from that in countries today where lockdowns have been put in place due to the current pandemic. Yes, these have a legal basis, as did period quarantines in urban centres, and we can at least admit that Eyam’s quarantine was voluntary in that it was not governed by any formal law, and there’s no evidence that force or the threat of it had to be deployed. However, the same is true of the recent lockdowns in England; aside from a handful of fines, there has been no enforcement and, in the vast majority of places, very high levels of compliance. For this reason I think praising Eyam’s population for not breaking their lockdown (other than some of the more well-off, including Mompesson’s own children) is akin to praising modern English people for not breaking theirs. Indeed, we may get lip service gratitude from our governments for complying, but we are (rightly) not hailed as heroes. Obviously I’m not comparing the fatality rates of the two diseases, just the power relationships at play. The rector of a parish held a great deal of sway at that time, and going against his wishes in a matter of public health would have been bold.
Finally, something that Wallis doesn’t seem to pick up on is that (as I mentioned above) Mompesson explicitly mentions ‘pest houses’ in the context of them all being empty as of November 1666. These would have been existing structures identified to house, and attempt to care for, anyone diagnosed with the plague. No-one placed in one of these houses would have been permitted to leave, for the good of the uninfected in the village. Thus, although those yet to be infected could in theory try to leave the village (although, where would they go?), anyone visibly afflicted certainly could not. Those free of plague would have had even less reason to leave, as they weren’t being asked to live in the same house as the infected.
Overall, I think Eyam is an interesting and important case study (especially the rare survival of 17th century plague graves in the village) and, as Wallis capably shows, a reflection of changing knowledge and opinion on management of infectious diseases. In the 20th century the ‘meme’ shifted from heroic sacrifice to tragic ignorance. Quarantine and isolation didn’t work, and Eyam was proof. We are now witnessing another shift back toward quarantine as a viable measure and, along with it, a reversion to the narrative of English people ‘doing the right thing’ in the face of deadly disease. However we reinterpret their fate to suit ourselves over time, the people of Eyam were just some of the many unfortunate victims of disease in the 17th century, no more or less heroic than any others.
Of course not. But that’s what this guy is claiming, no doubt suitably egged on by The Sun. It probably is an arrowhead, or maybe a crossbow bolt head, and it may well be medieval in date, so it’s a really nice find. The coincidence of it being found in Sherwood Forest is obviously fun, but to suggest that it’s somehow either evidence of the existence of Robin Hood, or that the existence of Robin Hood as an historical figure means that this is his… well, that’s just absurd. Robin Hood did not exist. He is an archetypal character from English folklore. More ‘Bullshit’ than ‘Bullseye’, The Sun.
Also, it certainly isn’t made of silver. I’m perplexed as to why they think it is, or how that’s relevant to the Robin Hood myth. It does look to have a blackish patina, and silver does tarnish black, so perhaps that’s why. But there is visible iron oxide rust at the socket, and it’s… magnetic. Which is how he claims to have found it. Using a magnet. Silver is famously not a ferrous metal. OK, it could be silver-plated, but a) why bother, and b) the metal looks homogeneous; there’s no sign of a hammered-on outer layer.
An absolutely ridiculous story salvaged only by the fact that it’s a genuine archaeological artefact. The rest is nonsense, however. Also, who the hell proof-reads this crap? “it’s authenticity”? “Historians believe to silver arrow could belong to Robin Hood”?
Almost every county in the UK has some story about a tried or convicted 16th or 17th century witch; it’s an unfortunate part of our history. Yorkshire has several noted ‘witches’; one with a surprisingly persistent local legacy is Mary Pannell (or Panel, or Pannel, or Pannal, or Pennell), supposedly a local ‘wise woman’ or sometimes just an ordinary girl with some knowledge of herbal medicine, who offered medical help to William Witham of the local Ledston Hall (renamed ‘Wheler Priory’ in ‘Most Haunted’ for security reasons), and supposedly ended up executed for witchcraft and/or for killing Witham when he died in 1593. Pannell’s story is still current in local news and oral tradition, she has her own (not very good) Wikipedia entry, and even featured in TV’s ‘Most Haunted Live’ 2007 Halloween Special. Her story has appeared both in print and online, but the oldest is an internet version from 1997 (this version revised 26.4.2006; the Internet Archive only has the 2000 version onwards).
The first things I should tackle are the modern embellishments introduced to the story in the retelling. First, William Witham was not the young son of the owner of the Hall; he was the owner, and was 47 when he died! Witham did have sons, two of whom were also called William, but one died in infancy years earlier and the other survived his father and went on to have his own son. There is also no evidence that Pannell was an employee of Witham’s (a claim that has expanded in very recent versions to include Witham taking advantage of her). In fact, we know nothing about Pannell for sure, although (as Wikipedia informs us) it’s possible that she may be the same ‘Marye Tailer’ of nearby Kippax who married a John Pannell in 1559 (see these parish records, p. 11). Anyway, these modern changes have likely crept in to make Pannell and Witham more sympathetic victims of the unthinking posh folk, who in some versions of the story kill their own innocent son and an innocent woman who was trying to help. Originally, Pannell is an evil woman to be feared; today she is feared in death as a wronged spirit, but otherwise pitied as a victim of prejudice and ignorance.
The good news is that Mary Pannell did exist circa 1600, and was indeed believed to be a witch, as proven by Edward Fairfax’s 1622 manuscript ‘Dæmonologia: A Discourse on Witchcraft’ (p. 98):
“…that the devil can take to himself a true body, or that he can make one of this man’s leg, the second’s arm, and the head of the third (as a great divine hath lately written), or that he can play the incubus and beget children, as the old tale of Merlin, and our late wonder of the son of Mary Pannell* (not yet forgot) seem to insinuate.”
Unfortunately, the footnote on the same page containing the above details (i.e. that Pannell was executed in 1603 and ‘bewitched’ William Witham to death) was added by Grainge, based on an earlier source (see below). Fairfax’s original manuscripts (there are several versions) do not include any of this. We do know from unrelated period records that William Witham of Ledston Hall did die in 1593 and, again, that Pannell existed and was thought a witch; but there is no primary evidence connecting these facts. It’s by no means clear that Pannell was actually executed, or even tried for witchcraft. Court records for that area and period don’t survive, and unlike other witchcraft suspects, there are no other primary sources to fall back on. The earliest version of Pannell’s own story (most likely Grainge’s source) dates to 1834, over two centuries after the fact. This is Edward Parsons’ ‘The Civil, Ecclesiastical, Literary, Commercial, and Miscellaneous History of Leeds, Halifax, Huddersfield, Bradford, Wakefield, Dewsbury, Otley, and the Manufacturing District of Yorkshire’ (p. 277):
“William Witham, who, from the pedigree of his family, appears to have been buried on the ninth of May, 1593, was supposed to have died in consequence of the diabolical incantations of an unfortunate being called Mary Pannel, who had obtained a disastrous celebrity in this part of the country for her supposed intercourse with malignant spirits. About ten years after the death of her imagined victim, she was apprehended on the charge of sorcery, arraigned and convicted at York, and was executed on a hill near Ledston hall, the supposed scene of her infamous operations. The hill where she died was long afterwards called Mary Pannel’s hill, and was regarded with abhorrence and alarm by the ignorant rustics in the neighbourhood.”
It’s interesting that this earliest written version suggests that Pannell was convicted of witchcraft in general, not of killing or even necessarily bewitching Witham specifically. Anyway, there are many later sources, but all either reference each other or don’t cite a source at all, making Parsons ground zero for the legend. This makes it all the more frustrating that we don’t know his source, and certainly no period records survive today that would enable us to check this (perhaps they did in the 1830s, but it seems unlikely). As Jim Sharpe states in his 1992 book ‘Witchcraft in Seventeenth Century Yorkshire: Accusations and Counter Measures’ (p. 2), ‘for the years between 1563 and 1650 assize records do not survive in quantity outside of the south east…’. This is ironic, because Sharpe is (in the same volume, p. 4) one of several scholars to treat the Grainge footnote in Fairfax’s Discourse as though it were a 17th century primary source rather than a 19th century secondary one, stating “In 1603 a woman named Mary Panell [sic], whose reputation for witchcraft stretched back at least to bewitching a man to death in 1593, was executed at Ledston.” Again, all we know is that Pannell existed at that time and was thought a witch. Gregory J. Durston includes the same details on p. 79 of his 2019 book specifically (and ironically) on witch trials, and doesn’t even bother to give a reference. Regardless, I have to assume, given Parsons’ repeated use of the word ‘supposed’ and his snide dig at ignorant locals, that he was in fact recording an oral tradition, perhaps related to him by said locals, or by members of Parsons’ own social class, scoffing at the superstitions of their peons (although as Fairfax shows, some of the upper class also believed in witchcraft).
Grainge’s 1882 footnote is actually cribbed from his own 1855 book ‘Castles and Abbeys of Yorkshire’, in which he disagrees with Parsons on the method and location of her execution:
“In 1608 [sic], Mary Pannell, who had long been celebrated for supposed sorceries, was hung at York, under the impression, that, among other crimes, she had bewitched to death William Witham, who died at Ledstone, in 1593.”
However, he (or his publisher) also ballsed up the date, so it’s possible that he was mistaken and didn’t necessarily have access to alternative sources of information. Or he may have been deliberately correcting Parsons. The assumption that she was actually executed at York makes more sense for the time and place; witches were typically executed in the town or city of their conviction/incarceration. Incidentally, there’s no reason to think that Grainge considered that Pannell was actually burned; this punishment was very rare for witchcraft suspects in England. The very suggestion doesn’t appear until 1916, with J.S. Fletcher’s ‘Memorials of a Yorkshire Parish’ (p. 97):
“On the right of the road there is a hill covered with wood, called Mary Pannal Hill. Upwards of two hundred and fifty years ago, when the country was covered with forest, when our villages and hamlets were scantily populated, and when superstition reigned in the place of education, Mary Pannal, clad as a gipsy, haunted this neighbourhood, hiding in the old quarries or sheltered nooks in the forest, and gaining a precarious living by begging or pilfering – being, in short, a poor, outcast, homeless, wandering mendicant. In winter time, the old villagers say, she would beg coals of the cartmen as they passed from the pits at Kippax to Ledsham or Fairburn, bewitching all those who refused to supply her with bits of coal, so that the horses could not get up the hills with the load. The drivers, however, devised a simple remedy; they got whip-stocks of wiggan, which enabled them to defy the powers of the witch and surmount the hills without trouble. In those days witches were put out of the way on very slender testimony. They were feared and abhorred. Ridiculous tests were employed to assist in detection; one test being to throw the suspected one into deep water, and if she sank and was drowned it was a sign that she was innocent, but if she floated it was a sign that she was guilty, and she was forthwith taken and executed. This kind of demonopathy prevailed for several centuries. For various acts of supposed witchcraft, and especially for having “bewitched to death” one William Witham – one of the ancient race of Withams, owners of Ledstone Hall – Mary Pannal was condemned to suffer on the gallows. The local tradition is that she was taken to the top of the hill, which still bears her name, and which is within full view of the windows of Ledstone Hall, to be hanged on a tree; but each time she was suspended on the cord, it snapped and let her to the ground unhurt, the cord being bewitched.
The hangsmen were baffled, but whilst consulting and marvelling one amongst another, a bird of the crow tribe flew over, muttering slowly as it flew, “A withy, a withy, a withy!” whereupon the hangsmen got a flexible withy of wiggan from the adjoining thicket, and suspending the witch upon it, the execution was immediately consummated. Old inhabitants of Ledstone can remember seeing the identical tree felled.”
NB a ‘wiggan’ is another name for rowan, which was thought to have apotropaic properties against witchcraft.
From this we learn that there was a local tradition not just of the hillside where Pannell was supposedly executed, but of a specific purported hanging tree as well. Based on this description, the tree had enjoyed that reputation for some time before being felled, which itself happened before Parsons ever wrote down his version of the story. Although this story was related in 1882, the ‘old inhabitants’ mentioned would have been young people when Parsons first recorded the basic story.
A couple of decades later (by which time various heraldic and genealogical sources had picked up the story, having never done so prior to Parsons and Grainge), several periodicals (‘Autocar’ among them) mention it also, with authentic-seeming quotes referencing the phrase ‘devilish arts’ and the word ‘sorceries’; common enough period terms that they could easily have been adapted from other cases. One example, an account of the trial of Isobel Young, even includes the Scots word ‘pannell’, as in a panel of accused people (although that’s probably coincidence and not the source of these references to Mary Pannell). If not this, it’s likely to have become associated with Pannell’s story in the same way as the phrase ‘counsell and helpe’ did in 1918, when it was implied to be a phrase from Pannell’s trial but was actually borrowed from a 1916 source that referenced Pannell and the phrase separately (it’s actually from a York Archdeaconry ‘Article’ against witchcraft in general). Regardless, I can’t find any pre-1913 or post-1922 instance of any variant of ‘sorceries and devilish arts’ with reference to Pannell.
We then encounter a gap in the storytelling record until the early (1997) internet version that I mentioned at the beginning. It maintains the basic elements, the 1916 claim of Pannell being burned at Ledston, and adds new embellishments of Witham the boy, Pannell the non-witch herbalist maid, and her ill-fated attempt to help him (plus new aspects to the ghost story):
“Turning left towards Kippax we arrive back on the Roman Ridge Road at a crossroads called ‘Mary Pannell’. It is named so after the unfortunate woman who was burned here as a witch.
Mary Pannel or Pannell was a maid at Ledston Hall towards the end of the 16th century. She, like many others, had a knowledge of ‘old’ medicines and prepared a lotion to be rubbed upon the chest of the young son of the house, one Master William Witham Esq. who was suffering from a chill. His mother mistakenly gave it to the lad to drink and poisoned him. She blamed Mary and accused her of being a witch. This was in May 1593. Mary was tried in 1603 at York and convicted. She was burned to death on the hill that bares [sic] her name that same year. Local tales tell that she haunts the hill and its Roman road leading a horse. Anyone who witnesses the apparition will have a death in the family soon after
“At this crossroads was an Inn which survived from medieval times until the beginning of this century – only short sections of stone wall mark it’s [sic] existence today.”
Another online version, meanwhile, initially gave only the bare bones of the story:
“Mary Pannell, of Ledston, lived in a small hut and mixed enchantments and made curses and is said to have had dealings with evil spirits. She is said to have bewitched to death William Witham, Esq., of Ledston Hall, in 1593, and was convicted in York in 1603 and put to death by burning on Mary Pannell Hill, on the edge of Castleford.”
By 2004 it had been revised, based on information from ‘John & Carol’, to fit the earlier (1997) version, albeit with the ‘health warning’ that it was a ‘local legend’:
“She is said to have bewitched to death William Witham, Esq., of Ledston Hall, in 1593, and was convicted in York in 1603 and put to death by burning on Mary Pannell Hill, on the edge of Castleford. Local legend has it that Mary was a maid who knew a little about medicine. She gave a lotion to rub on a child’s chest for a chill but the mother (an important person of the time) gave it to the child to drink. The lotion killed him and Mary was burned as a witch for it.
Her ghost, leading a horse, is supposed to haunt the Pannell Hill and it is claimed that anybody seeing her will have a death in the family. [Submitted By: John & Carol]”
The story also appears in ‘Horrible Histories: Gruesome Great Houses’ (2017) by Terry Deary, who, like other 21st-century writers, is keen to ‘reclaim’ Pannell as a village ‘wise woman’, i.e. a magic practitioner and not simply an innocent herbalist. This fits the modern popular view of witchcraft suspects as well-meaning ‘white witches’ targeted by the patriarchy (although any pagan will tell you that there’s no such thing as ‘black’ or ‘white’). Mary’s popularity in ‘Mind Body & Spirit’ books and online has turned her into something of a meme, but in this case I don’t think that’s all she is.
The geographical evidence – the hill being named after Mary Pannell – is important here, especially in light of the folklore recorded by Parsons and Roberts. It’s not much of a hill, and is therefore often confused with the more noticeable and spookier-looking wooded western slope of the adjacent Sheldon Hill (often locally called ‘Mary Panel Wood’). Despite this, it is an officially named location, appearing on the current footpath sign directing walkers from nearby Kippax and on Ordnance Survey maps drawn up in the late 1840s (labelled separately to Sheldon Hill). For the name to appear on official government maps, it must have been quite long-standing. Although all of the written evidence for the Witham story (and ghostly Mary) centres on the early 19th century, it’s quite plausible that the name is 18th century or even older. More likely still, it emerged within living memory of Witham, as local folklore to explain his untimely death, which may have attracted extra and sustained local attention due to the fame of his daughter, Lady Mary Bolles. Whether there was any historical connection between Pannell and Witham, we will probably never know. At the very least, Mary Pannell really existed, was really thought to be a witch, and the story of her and William Witham is genuine folklore, not some recent urban myth.