In today’s world of polyester, acrylic and spandex, traditional textiles and fibres can be increasingly difficult to find when it comes to buying quality clothing. What are the origins of the fibres used to create the textiles which our parents and grandparents grew up with, in the days before manmade fibres started to dominate the fabric-making world?
In this posting, we’ll find out together! We’ll look at what various fibres and textiles are, what they’re used for, and where they come from. While there are a multitude of fibres out there, I’ll just be covering the most common ones in this posting – otherwise it could go on forever!
So rug up, and get comfortable…
COTTON
Used to make all kinds of fabrics, from towelling to velour to seersucker, and all kinds of garments – shirts, blouses, socks, and undergarments – cotton has been cultivated for centuries. That said, cotton, which grows in warm environments, is extremely difficult to harvest! Picking the bolls of cotton from the plants by hand was a slow, labour-intensive, and even painful exercise. Although the plant itself has no thorns or spines, the cotton boll – the bulb, or ‘fruit’ of the cotton plant – can be hard and spiky. When the boll opens, you gain access to the fluffy cotton fibres inside.
Once you’ve picked the cotton, it must then be processed to remove all the seeds caught up inside the fibres – an extremely slow, laborious process which took hours to complete! This is what slaves in the American South had to deal with day in, day out, for weeks on end during harvest-time on cotton-producing plantations back in the 1700s and 1800s. To make cotton-processing easier, Eli Whitney invented the cotton engine, or “Cotton Gin”. Raw, unprocessed cotton fluff was stuffed into one end of the machine, and a crank was turned, spinning a spiked drum. The drum-spikes basically ‘combed’ the cotton fibres through a mesh, teasing them out bit by bit. The mesh was just wide enough to let the cotton fibres through, but not large enough to admit the seeds caught up in the fibres. In this way, the raw cotton fibres could be separated from the seeds, and pure cotton could be gathered up, baled, and shipped out for spinning and weaving into fabric.
Once packed and shipped, raw, processed cotton was sent to cotton mills, either in the northern United States, or across the Atlantic to the UK. Here, it was spun and woven into fabric. The deafening noise of the rattling looms and spinning frames caused some cotton-mill workers to lose their hearing – said to be the origin of the term “Cloth Ears”, meaning an inability to hear properly.
LINEN
Light, airy and breathable, linen is the fabric produced from the fibres of the flax plant. Because of its light weight, softness and absorbent nature, linen fabric is often used for warm-weather clothing – linen jackets, trousers and suits, linen shirts and handkerchiefs. Linen was also used for towelling and bedsheets, which is why we still have the terms “bath linen” and “bed linen” today. Thin, strong, breathable and lightweight, it remains the recommended material for summertime clothing.
SILK
We’ve all heard of the expression ‘smooth as silk’, but where does silk come from?
Silk is the thread which is extracted from the cocoons of the silkworm (the adorably-named Bombyx mori).
Silkworms eating mulberry leaves
Silkworms are now purposefully farmed and bred to produce silk, and the little critters are pretty pampered for the luxurious fibre that they generate. They’re fed almost exclusively on the leaves of the White Mulberry tree, although they can eat a (limited) number of other leaves. Sericulture, the practice of farming silkworms, has a history of at least 5,000 years, and originated in China. For literally thousands of years, China was the main producer and exporter of silk, and guarded its silk-weaving and silk-farming processes jealously! Europeans loved silk for its softness and smoothness, its strength and durability, but getting silk was almost impossible. Imperial decrees forbade anybody from detailing to a “foreign barbarian” where, how, when, or with what silk was manufactured, and Europeans remained in ignorance for centuries.
Eventually, knowledge of silk leaked out of China, and by the Middle Ages, silk-farming and production had begun in the Middle East and later, in Europe.
Silk has incredible properties. Spun, and then woven into fabric, it’s incredibly strong and dense, despite its light weight, and this made it ideal for all kinds of garment-making applications. In fact, some of the world’s first bulletproof vests were made of silk! Layers and layers and layers of silk were placed on top of each other, and then firmly stitched and quilted together, to form a thin, but very firm protective cloth padding which was impenetrable by arrows, and even by some types of gunfire. It’s how a lot of body-armour was produced before the invention of Kevlar.
WOOL
Shorn from sheep (or lambs), wool has been used for centuries for everything from blankets and bedding, to tunics, hose, trousers, jackets, suits, coats, scarves and mittens. Depending on how it’s been carded, spun, and woven, wool can be anything from soft and plush, to thick and fluffy, to smooth and luxurious!
These days, most “wool” garments are not pure wool. To give it strength and durability, it’s usually blended with synthetic fibres (polyester) to create a ‘wool-blend’. High-quality wool-blends are anywhere from 60-40 wool-poly, up to 80-20 or even 90-10. Wool has incredible water-shedding properties, as well as insulation, for warmth. It’s also robust against grime and light stains, and, depending on how it’s constructed – even naturally flame-resistant!…although for that last quality, you’d want 100% wool construction.
Back in the old days, wool garments were 100% wool, and you can still find that today, if you know where and how to look, but they will cost more.
CASHMERE
Fluffy cashmere goats!
Mmmm…cashmere! Soft, fluffy, smooth, and warm. Cashmere is the name of the wool harvested from the Cashmere or “Kashmir” goat, named for the Kashmir region spanning India and Pakistan. Famed for centuries for its softness, cashmere is used for scarves, socks, coats, and other winter-weather clothing. As with pure-wool fabric, pure cashmere is expensive. To stretch the budget a bit, cashmere may also be blended, almost always with wool, or sometimes silk, for a more lightweight finish.
ALPACA
The alpaca is a camelid native to South America (Peru in particular), and its wool is, again, one more step above ordinary sheep’s wool. What makes alpaca and cashmere so popular is that their fibres are extremely fine. This means that any fabric produced from their wool is thin and lightweight, but also incredibly dense, which makes it beautifully soft, and warm. Alpaca wool is used for blankets, scarves, and winter clothes due to its natural insulating properties and luxuriously soft texture.
VICUNA
Ever heard of vicuna? Probably not! That’s hardly surprising, considering that at one point, this little South American camelid was very nearly extinct! Today, the species has recovered (yay!!)…but that has done little to change the fact that vicuna wool is the MOST EXPENSIVE WOOL IN THE ENTIRE WORLD.
How expensive?
Well, for any reasonably sized vicuna-wool garment (say, a suit, an overcoat, a jacket, etc), you can expect to pay MULTIPLE TENS of THOUSANDS of dollars.
There are two main reasons for this: vicuna wool is extremely fine and soft, and therefore dense and high-quality (oooh, luxury!), but also, vicuna are small animals, and only produce a relatively tiny amount of wool each year. In an entire year, you’d be lucky to get more than a few tons of fleeces out of the global population of vicunas. Not a few hundred tons, not a few thousand…just…a few tons. And that’s it. And because vicuna only live in South America, you also have to factor in import and transportation costs, which drive the already high prices even higher – all to get your hands on what might be only a few hundred kilos of wool, if that.
With the rise of internet gaming, gaming consoles, and PC gaming, traditional tabletop games such as card-games, chess, checkers, carrom, etc., are starting to lose out in the face of stiff competition from their more hip, on-screen counterparts. However, one game which has never seemed to die out, even in the digital age, is the age-old Chinese favourite called…Mahjong!
In this blog-posting, we’ll be looking at the most famous of all traditional Chinese games: the history of mahjong, how it’s played, where it came from, where it went, and what happened to it along the way.
So, shuffle your tiles, build your walls, form your melds, and place your bets!
It’s time to go mahjonging…
Mahjong – What’s in a Name?
‘Mahjong’ is the accepted modern spelling of the traditional Chinese game known as “Mah Jiang”. The most literal translation of the word ‘mahjong’ is ‘sparrows’, or ‘chattering sparrows’. This is believed to derive from the clattering, chattering, clacking noise produced by traditional mahjong tiles, which sound like chittering, fluttering birds.
An alternative spelling of the game – chiefly used in the United States – is “Mah Jongg” – for some reason, with two ‘g’s on the end. This is actually a trademarked name, and is not in any way related to the traditional Chinese pronunciation, Wade-Giles Romanisation, or pinyin spelling. I’ll explain how it got its “two-g’s” spelling, further on down in the article.
The History of Mahjong
The exact origins of mahjong are unknown. Where, when and by whom the game was invented have been lost to history. Creative marketing, myths, and legends will tell you that mahjong is an ancient game, invented thousands of years ago by the great Chinese philosopher Confucius as a way to train the mind; that it was played by the concubines and empresses in the Forbidden City in Peking; and that from these lofty beginnings, the game was gradually democratised over the passing centuries to the Chinese peasantry, to become the national game of China!
…Right?
I’m very sorry to disappoint you, but…none of that is even slightly true! Not one bit of it.
Detective-work and educated guesses by Chinese historians seem to have traced the game’s roots to Chinese card-games played in the 17th and 18th centuries. Such games were similar to modern poker or gin rummy, which are, in terms of gameplay, the closest Western equivalents to modern mahjong.
The problem was, of course, that paperboard playing-cards did not last very long. They were easily prone to damage, warping, tearing, and splitting. These thin paper cards were difficult to hold, fiddly to handle, and lightweight, which meant they could blow away in the wind…hardly ideal when you’re in the middle of a game.
The game most similar to mahjong, before mahjong itself was invented, was known as Yezipai, or simply “Yezi”. It was played using small slivers or slices of ivory, bone, or wood – an improvement on paper cards, but still not as hard-wearing as modern mahjong tiles. The thin sheets of ivory and bone were easily broken and could be snapped in half, ruining an entire deck through one person’s clumsiness!
It’s for this reason that someone – nobody knows who – decided to transfer the designs on the cards onto durable, heavyweight bone and ivory tiles – solid blocks which could be stacked, stood up, laid down, packed and unpacked easily, and which could withstand years of heavy-handed playing.
When this transition took place, nobody seems to know, but it appears to have happened by the early 1800s. As for where the game was invented, that’s a bit more straightforward: In the first half of the 19th century, when mahjong was likely in its infancy, the game was only really being played in one location in China: Ningpo.
A port city in Zhejiang Province, Ningpo was one of several “treaty ports” opened by the British as a result of the unequal Treaty of Nanking, which ended the 1839-1842 First Opium War.
The chief British diplomat stationed in Ningpo in the mid-1800s was a man named Frederick E.B. Harvey. Harvey’s official title was British Consul to Ningpo, and he was in charge of the British Consulate within the city.
Harvey’s diplomatic career in China started in Hong Kong. Thereafter he was transferred to the International Settlement of Shanghai, and finally, to Ningpo, in 1859.
It was while living in Ningpo that Harvey met a man named Chen Yumen – the person who would introduce him to the relatively new game called ‘Mahjong’.
Harvey’s letters home to England, and the diary-entries he wrote while living in Ningpo, are the first written English records detailing the gameplay, rules, and culture surrounding mahjong. His writings are also among the first references, in any language, to the existence of mahjong in any capacity, giving us a fairly accurate starting date for mahjong in the early 1800s.
From its creation in Ningpo, mahjong spread to Shanghai, Peking, Tientsin, and eventually, to all of China.
Mahjong in the 20th Century
For most of the 1800s, mahjong remained a largely Chinese game, played wherever four Chinese people could be found to fill a mahjong table, but this started to change at the end of the 19th century.
Chinese migration in the second half of the 1800s and at the turn of the 1900s saw the game being exported to ethnic Chinese communities overseas, such as those in San Francisco and New York in the United States, to the British Asian colonies of Hong Kong and Singapore, and to other cities with large Chinese populations, such as London and Toronto. Western exposure to mahjong started largely in the early 1900s – and a lot of it had to do with one city:
Shanghai.
As mentioned previously, mahjong is believed to have been invented in, or near, the city of Ningpo, on the southern shores of Hangzhou Bay in Zhejiang Province.
Well, if you study a map of China, you’ll find that the nearest major city to Ningpo is just across Hangzhou Bay to the north – the city of Shanghai, built around the Huangpu River, which joins the Yangtze nearby.
By the late 1800s, knowledge of mahjong had spread to Shanghai. This larger, more cosmopolitan city adopted the game, and made it its own. Mahjong was played everywhere in Shanghai, from inside peoples’ homes, to public parks, teahouses, private clubs, and even dedicated mahjong houses. Mahjong manufacturing was also centred on Shanghai: the large urban population meant that there were loads of off-cuts of the materials used to make mahjong sets – wood, bone, ivory, and bamboo – so the city was the natural place for mahjong sets to be produced.
It was from Shanghai that mahjong was exported, either physically, or by word-of-mouth, around the world. It was in Shanghai, or more specifically, within the confines of the International Settlement, that mahjong was first exposed in a big way to Western audiences. British, American, French, Russian, and Jewish expats living in Shanghai (known as “Shanghailanders”) became fascinated with the game, and started playing it with their Chinese friends.
At the same time, Western tourists visiting Shanghai were purchasing mahjong sets and taking them home as souvenirs, or writing about them in letters and postcards posted back to loved ones and friends in Europe and North America. Expats who had lived in Shanghai for years, and who had come to love the game, purchased mahjong sets as mementos of their Chinese adventures, and likely played mahjong during the long steamer-journeys home to the USA, Canada, or Europe, exposing the game to even more foreigners.
Mahjong in the West
It was in this way that mahjong started catching on in Western countries – particularly Britain, Canada, the United States, and countries in Western Europe which had extensive contact with China. Mahjong started being imported to the USA in the early 1920s by Standard Oil Company executive Joseph Park Babcock, who had headed up the Standard Oil office in Shanghai, operating out of the International Settlement. While living in China, Babcock and his wife had developed a taste for mahjong, and he got the notion into his head that, if he marketed it correctly, mahjong could become huge in the United States!
To this end, Babcock wrote a simplified rule-book for mahjong, and started marketing it aggressively as “Mah-Jongg” (with two g’s) in the USA.
Mahjong was already starting to gain traction in the U.S. because of the written references to the game in letters and postcards which I mentioned previously, and because foreign tourists were bringing back mahjong sets from China as souvenirs of their travels. However, it was Joseph P. Babcock’s creative streak that really set the ball rolling when it came to the arrival of mahjong in the United States.
Along with writing the simplified rules and importing new sets directly from Shanghai, Babcock came up with a whole fanciful “history” for the game. In the early 1900s, all things “Oriental” were highly en vogue in the Western world. Chinese-style clothing, dresses, furniture, food, Chinese decorative elements and colour-schemes were all the rage. Look no further than the reconstruction of San Francisco’s Chinatown, post-1906, as one example.
It was into this heady mix of fried rice, silk robes, chopsticks, and a blur of red, black, and yellow hues, that the first large-scale Western contact with mahjong arrived. Mahjong was seen as mysterious, new, exciting, dangerous, hedonistic, and exotic! No game like it had existed in the West before, and Americans bought up mahjong so fast that importers working with manufacturers in Shanghai couldn’t keep up with demand! Luxurious mahjong sets made of beautiful woods, with inlaid cases decorated with polished metalwork and intricately carved tiles, were sold by big-name department stores and gaming-products manufacturers, such as Parker Brothers in the US (more famous these days for selling “CLUE”).
Mahjong became so popular in America that there was even a song written about it – “Ma is Playing Mahjong”, from 1924! The lyrics are, perhaps, not very politically correct 100 years later, but the song’s existence speaks to the incredible impact that mahjong had on American culture.
It was in this way that mahjong became incredibly popular in the United States, starting in the early 1920s, and going right through the 1930s, 40s, and 50s, and well into the present day!
While mahjong thrived in the West, mahjong in China came under attack! During the Cultural Revolution – the ten-year period from 1966 to 1976 – mahjong was banned in China for being a decadent, wasteful extravagance, and an “old idea” that had no place in the “New China”! The ban was lifted at the end of the Revolution, when Chairman Mao died in 1976.
The Mahjong Set
Obviously, to play mahjong, you need a mahjong set. A traditional mahjong set comes with dice (at least two, and sometimes up to four), a wind-disc or indicator with the wind-directions engraved or printed on it in Chinese characters, tally-sticks or tokens/coins (for scoring), and last, but not least – the tiles!
A full mahjong set contains 144 tiles, divided into suits. The suits are:
Circles, Bamboo, and Wan (or ‘characters’), plus the Honours (the Winds and Dragons), and the Bonuses, also called Flowers and Seasons. Unless you’re playing competitive mahjong, the bonus/flower tiles can be discarded, as they won’t affect play unless you’re actually scoring the game. Because of this, mahjong sets which are bought just to enjoy the fun of the game, rather than for competition, usually exclude these tiles, for a set of 136 tiles instead.
The three numbered suits – circles, bamboo, and the wan/characters – each have four sets of tiles numbered from 1-9, and there are four each of the winds and dragons. By tradition, the One of Bamboo is indicated by a bird (usually a peacock, or similar). The “Wan” tiles have numbers in Chinese characters, with another character (the “wan”) underneath. “Wan” is the Chinese word for “10,000”, so, for example, a two-wan tile is actually “20,000”. Again, this matters when scoring the game, but when playing for fun, most people ignore this stuff. The eight bonus tiles – four Flowers and four Seasons – are each unique.
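If you like seeing how those numbers add up, here’s a quick sketch in Python (all the names are my own, purely for illustration): three numbered suits of nine ranks, four copies of each, gives 108 tiles; the winds and dragons add 28 more for 136; and the eight bonus tiles round it out to 144.

```python
# A rough sketch of the make-up of a mahjong set (names are illustrative, not official).
def build_tile_set(include_bonuses=True):
    tiles = []
    for suit in ("circles", "bamboo", "wan"):           # the three numbered suits
        for rank in range(1, 10):                       # ranks 1 to 9...
            tiles += [(suit, rank)] * 4                 # ...four copies of each = 108 tiles
    for wind in ("east", "south", "west", "north"):
        tiles += [("wind", wind)] * 4                   # 16 wind tiles
    for dragon in ("red", "green", "white"):
        tiles += [("dragon", dragon)] * 4               # 12 dragon tiles
    if include_bonuses:                                 # the eight unique bonus tiles
        tiles += [("flower", n) for n in range(1, 5)]
        tiles += [("season", n) for n in range(1, 5)]
    return tiles

print(len(build_tile_set()))        # 144 – a full set
print(len(build_tile_set(False)))   # 136 – a set without the bonus tiles
```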
To play the game effectively, at the very least, you will require a pair of dice, and a full set of mahjong tiles (which, again, is 144 pieces).
In the 1800s and during the first half of the 20th century, when mahjong was at the height of its international popularity, mahjong sets were sold in fantastically elaborate cases. These cases or cabinets had handles, sliding doors, and tile-drawers to hold the tiles and the paraphernalia for playing. Today, such cabinets (there are usually 4-5 drawers – one for each suit, and a fifth for the bits and pieces), in good condition, complete with their sets of playing tiles and accessories, cost hundreds, or even thousands of dollars each.
Modern mahjong sets, usually made of melamine plastic (unless you’re rich enough to afford a handmade set which is produced the old-fashioned way using bone and bamboo!) are sold in simple briefcase-style boxes for ease of storage and transport. Some modern-day manufacturers, looking to recapture the beauty of the antique cases from the 1900s, will produce modern-day sets in vintage-style cases, complete with the handles, sliding doors and pull-out drawers.
How to Play Mahjong!
Now that you have your mahjong set, you need to know how to use it! How do you play with it? How do you win? What’s the POINT OF THE GAME!?
The following instructions are based on the use of a traditional mahjong set – which has 144 tiles – and gameplay as followed under traditional Chinese/Hong Kong-style rules.
The aim of a game of mahjong is to build a winning hand of tiles (14 in number) made up of FOUR MELDS and ONE PAIR.
A “meld” is a grouping of tiles, and a pair is…a…pair! Two matched, identical tiles.
There are three traditional melds:
Pong, Kong, and Chow – also rendered as “Pung”, “Gung”, “Chi”, and various other spellings, depending on the Chinese dialect. For the sake of simplicity, I will use “Pong, Kong and Chow”.
A “Pong” is three identical tiles. For example – three white dragons.
A “Kong” is four identical tiles. For example – four West Winds.
A “Chow” is three suited tiles in-sequence. For example – one-two-three of bamboo, circles, or wans, or 2, 3, 4, or 4, 5, 6…you get the idea.
Once you have built four melds (which would usually be 12 tiles), you then have to get your “pair” – two identical tiles. Once you’ve got that, you’ve won the game! Traditionally, the winner will clamp their winning hand together between their fingers, and then slam the tiles down on the table in triumph, to announce their winning hand! (Trust me, you should totally do this. It’s a lot of fun!)
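For the programmers among my readers, here’s a little sketch of that winning condition in Python. The function and tile names are my own invention, and it deliberately ignores kongs, bonus tiles, and scoring – it’s just the basic “four melds and a pair” test, under the tile representation from the earlier sketch:

```python
from collections import Counter

# A tile is a (suit, rank) tuple, e.g. ("bamboo", 3) or ("wind", "east").
NUMBERED = {"circles", "bamboo", "wan"}   # chows only exist in the numbered suits

def melds_out(counts):
    """True if the remaining tiles decompose entirely into pongs and chows."""
    if not counts:
        return True
    tile = min(counts)                    # the lowest tile must start a pong or a chow
    suit, rank = tile
    if counts[tile] >= 3:                 # try a pong: three identical tiles
        rest = counts.copy()
        rest[tile] -= 3
        if melds_out(+rest):              # '+' drops zero-count entries
            return True
    if suit in NUMBERED and rank <= 7:    # try a chow: rank, rank+1, rank+2
        run = [(suit, rank), (suit, rank + 1), (suit, rank + 2)]
        if all(counts[t] for t in run):
            rest = counts.copy()
            for t in run:
                rest[t] -= 1
            if melds_out(+rest):
                return True
    return False

def is_winning_hand(tiles):
    """True if 14 tiles split into four melds (pongs/chows) and one pair."""
    if len(tiles) != 14:
        return False
    counts = Counter(tiles)
    for tile in [t for t, n in counts.items() if n >= 2]:
        rest = counts.copy()
        rest[tile] -= 2                   # set a pair aside...
        if melds_out(+rest):              # ...and try to meld out the other 12
            return True
    return False

hand = ([("circles", r) for r in (1, 2, 3)]      # a chow
        + [("bamboo", 5)] * 3                    # a pong
        + [("wan", 7)] * 3                       # a pong
        + [("wind", "east")] * 3                 # a pong
        + [("dragon", "red")] * 2)               # the pair
print(is_winning_hand(hand))                     # True
```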
And that’s basically it. There are other details, which I’ll go into later on, so keep reading…
Setting up the Game
To play a game of mahjong, you need at least two people (and ideally, four), a square or circular table, and plenty of time to enjoy a leisurely afternoon of gossip, gameplay, tea-drinking, and shouted profanity when you find out that someone has beaten you at the table!
First, you have to “wash” or shuffle the tiles. Once the tiles are shuffled, you have to build your walls.
There are four walls. If you’re using a traditional 144-tile set, then each wall is 36 tiles: a row of 18, stacked two tiles high.
The ritual of building the walls is one of the reasons why mahjong was so fascinating to Europeans when they first saw the game. The customs and intricacies of gameplay were unlike anything they had ever seen with cards, or chess, or checkers. It simply had no comparison to anything in the West. In the American version of mahjong (and yes, there is an American version), this stage of the game is known as “Building the Great Wall of China” (because, why not, right?). It’s another element of the game which harks back to the Western exoticism of mahjong in the early-20th-century.
Once the four walls are built, they’re set out in a square. Then you throw the pair of dice into the square, and count around the players, going anti-clockwise, until you reach the number shown on the dice. The person you land on is the dealer.
You throw the dice again, and then count along the dealer’s wall. You break the wall at that number, and then each player takes three stacks of four tiles (so, 12) from that break in the wall, again, going anticlockwise around the walls.
The dealer takes an additional stack, giving them 14 tiles. Every other player takes ONE extra tile (so, 13 tiles). The tiles that you’re given (or have taken) form your “hand”. These are the tiles you will concentrate on for the duration of the game. Got all that? Right! The game is now ready to start.
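Put together in code, the “wash” and the deal look something like this – a minimal sketch reusing the build_tile_set function from earlier. (In real play the tiles come off the walls in the more ritualised order described above, but the arithmetic is the same: 14 tiles for the dealer, 13 for everyone else.)

```python
import random

def wash_and_deal(tiles, seed=None):
    """Shuffle ('wash') the tiles and deal four hands: the dealer gets 14, the rest 13."""
    wall = list(tiles)
    random.Random(seed).shuffle(wall)
    hands = []
    for player in range(4):
        n = 14 if player == 0 else 13            # player 0 is the dealer
        hands.append([wall.pop() for _ in range(n)])
    return hands, wall                           # what's left of the wall is the draw pile

hands, wall = wash_and_deal(build_tile_set(include_bonuses=False))
print([len(h) for h in hands], len(wall))        # [14, 13, 13, 13] 83
```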
Playing a Game of Mahjong
To begin, if anybody has “bonus” tiles – Seasons, or Flowers – toss them out. You won’t need them in gameplay unless you’re doing a professional game with scoring. Replace those tiles with fresh tiles from the wall. Take a minute to set up your tiles and arrange them in a way that makes sense to you, and see if you have any patterns emerging, or any melds or pairs you can form. When setting up your tiles, they’re stood up on-end, facing you. This conceals your hand from other players, displays your tiles easily for quick manipulation, and allows you to slide, part, or push your tiles together as required, to build melds and pairs.
Got all that? Right! Next step…
Now, the dealer casts out his first tile to kick the game off. By tradition, a game of mahjong moves in an anticlockwise direction around the table.
Each player TAKES one tile from the wall, sets it into their hand, and then CASTS OUT one tile that they don’t need. That is considered one turn. Once a player has done that, play moves to the next participant, and so-on, around the table.
As the game progresses, you’ll end up with two “piles” on the table. One is the “draw pile” or the “wall”, and the other (in the middle of the table) is the “discard pile” – all the tiles that people have chucked out of their hands because they don’t need them. As a courtesy to other players, keep the discard pile neat and tidy, as it helps people to know which tile was freshly discarded, and prevents confusion later in the game.
You may take a tile from the discard pile to form a meld, or to complete a winning hand and end the game. However, if you do this, then you must “open” the meld to the rest of the table. So, for example, if someone throws out a tile and you find that taking that tile produces a meld for you, you can grab it and shove it into your hand. But then, you have to drop those tiles down onto the table to show the other players the meld that you’ve built from that discard.
You don’t have to do this if you form a meld from a tile taken from the wall-tiles, during your turn.
And so the game continues until someone has a winning hand of FOUR MELDS and ONE PAIR. A winning hand is typically 14 tiles – four groups of three, plus one pair. (A hand of four kongs – four groups of four, plus a pair, for 18 tiles – is also possible, but much harder to attain, so most people will stick to a 14-tile winning hand.)
When you have built your winning hand, line up your tiles in a row, grip them together firmly, and then slam them down onto the table, all together, in one, swift, sure, satisfying, and smug move, to show that you’ve won the game!
And that is how mahjong works! It’s really that simple.
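And if you’d like to see that whole turn cycle in code, here’s one final sketch tying together the wash_and_deal and is_winning_hand pieces from earlier. (Again, all the names and the random “strategy” are purely my own, and claiming discards, kongs, and scoring are all left out.)

```python
import random

def choose_discard(hand, rng=random):
    """Placeholder strategy: cast out a random tile. A real player would do better!"""
    return hand.pop(rng.randrange(len(hand)))

def play(hands, wall, dealer=0):
    """A minimal turn loop: the dealer casts out first, then each player in turn
    TAKES one tile from the wall and CASTS OUT one that they don't need."""
    discards = [choose_discard(hands[dealer])]       # the dealer opens with a discard
    player = (dealer + 1) % 4
    while wall:
        hands[player].append(wall.pop())             # take one tile from the wall...
        if is_winning_hand(hands[player]):
            return player                            # ...and it completes a winning hand!
        discards.append(choose_discard(hands[player]))   # ...or cast one out
        player = (player + 1) % 4                    # next player around the table
    return None                                      # wall exhausted: nobody wins
```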
Of course, there are complexities – for example – what type of mahjong are you playing? There are three main styles: Japanese-style mahjong, also known as “Riichi Mahjong” (“Riichi!” is what you shout when you’re one tile away from winning!); American-style mahjong, which developed in the 1920s and 30s; and finally, the oldest, and most authentic version – Hong Kong-style mahjong. Most Asian players will have grown up learning Hong Kong-style mahjong.
Buying a Mahjong Set
So – you wanna buy a mahjong set, huh?
Sure! I mean, they’re not that hard to buy, are they? There are loads of them on eBay, AliExpress, and other websites. You can probably buy one in any Chinatown in the world, or while visiting places with large Chinese populations, such as Singapore, Malaysia, Hong Kong, and Taiwan. Or, you might just find one at your local weekend flea-market.
If you do buy a set, it’s likely to be a modern set, with plastic tiles, counters, tally-sticks, dice, and other accessories, in a briefcase-style box. If you buy a secondhand set, make sure that the case is in good condition, that all the pieces are present and correct, and that you can open and close the case smoothly and securely – the last thing you want is to pick up the case and have everything spill out! Whoops…
But, I hear you say…
“I Want to Buy one of those Fancy Antique Mahjong Sets!”
No problem! You can still buy one of those – but there are a lot more things to think about. Antique sets are more likely to have missing pieces, have structural damage, and of course – have higher prices! Depending on age, condition, completeness and rarity, an antique mahjong set can be had for a few hundred dollars, all the way up to a few thousand dollars!
When buying an antique set, make sure that you have all the tiles – a full set is 144 tiles, while a set without the “bonus” tiles is 136. Most sets are one or the other. If it has fewer than that, then there are tiles missing!
Check the case for damage. Splitting, cracking, dovetail-joinery coming apart, and so-on. Check any inlays for fit – if they’re getting loose, you’ll have to poke them out, and glue them back in to prevent loss. Check the drawers to make sure they slide in and out smoothly, and that the handles and doors work properly. A lot of these old cases have split wood, cracks, and faulty joinery, so it pays to check literally every square inch of the case, front and back, side to side, top and bottom. Some faults are repairable with glue, clamping, and reinforcement, others are a total loss.
Check the metalwork, as well. Handles, pull-tabs on the drawers, and the corner-tabs on the sides of the case. Usually, these are brass, or nickel-silver. They’re riveted or hammered into place, so check the nails to make sure that nothing’s coming apart. If it is, nail it back in and glue it in place. Traditionally, these cases were meant to be picked up and carried by their handles – you might not want to do that if it’s a rickety case. A case in good condition should be able to be carried without fear of anything coming apart!
Last but not least – check the tiles themselves. Antique mahjong tiles are made in two parts: An upper tile-face, and a lower tile-base. On the majority of antique sets, these were BONE on top and BAMBOO on the base. Other sets used ivory, or special hardwoods, etc. The tiles are spliced together using dovetail joints. High-quality sets will have solid, firm, secure joints, well carved and tight-fitting. Cheap sets have joints which are loose or in danger of falling apart! Traditionally, no glue was used to hold the tiles together. Simple friction was all that kept them as one.
Antique mahjong sets were manufactured by hand. That means that all the woodwork is hand-cut and joined, and the tiles are hand-cut and dovetailed together. Likewise, the tile-faces are carved or engraved by hand. The more intricate the engraving, the higher-quality the set. Similarly, the more bone there is in each tile, the higher the quality: sets with hardly any bone on the tiles are cheap and tacky, while sets with loads of bone in each tile could take deeper, more intricate engravings.
My Antique Mahjong Set
In closing this article, I feel it only proper to write one last chapter, to introduce my readers to my own personal mahjong set.
I bought this at auction back in 2018, and paid what some thought was a rather exorbitant price at the time. However, recent developments have shown that I basically paid peanuts for something so valuable that it’s basically irreplaceable…certainly for the price I paid!
With its rosewood case, complete with brass fittings, a sliding door, and four tile-drawers, my mahjong set is one of my absolute pride-and-joys! I would never sell this, and I love being able to use it. The tiles are made in the traditional way – bone and bamboo, dovetailed together, and carved by hand. I don’t know how old it is, but my guess would be early-to-mid 20th century.
The entire case – including the door and the four sliding tile-drawers – is made of Chinese rosewood, or what is known as “huanghuali” in Chinese. The pull-tabs on the lid and the tile-drawers are little brass butterflies.
Each drawer holds one suit of tiles – circles, bamboo, and wans – while the dragons, winds and bonuses all live in the fourth drawer by themselves, for a total of 144 tiles. There are also two tiny bone dice which go with the set.
One thing you may not have noticed about the set is how incredibly SMALL it is! The case measures just 5.5in x 5.5in x 9in! I’ve seen tissue-boxes bigger than that! The tiles are all half-sized, and they’re absolutely adorable!
Overall, the set is in fantastic condition. There’s no damage to speak of, and everything is in perfect, working, usable condition. And I do use it! When my friends and I play mahjong, this is the set we use, and we have a lot of fun with it.
Anyway – this concludes this rather lengthy posting, all about mahjong! Its history, how it’s played, and how to buy and use your very own mahjong set.
Happy playing!
Want to Find out More?
Information for this article was gleaned from the website of mahjong historian Gregg Swain, which may be found at Mahjong Treasures.
Additional information was gleaned from the CCTV documentary about the history of mahjong, which may be found on YouTube (or at least, it could be, at the time of writing this posting).
Father Christmas, Santa Claus, Pere Noel, St Nicholas…he goes by many names…but who exactly is this mystical, mythical, legendary being, who flies around the world once a year, delivering chocolates and candies, toys and treasure, and…lumps of coal…to good boys and girls of all ages?
Who, or what is Santa Claus? Where does he come from? And where did he get the name “Santa Claus”, anyway?
Well, get yourself some milk and cookies, because we’re about to find out.
A Visit from St. Nicholas
The origins of Santa Claus go back centuries! In fact, they go back nearly 2,000 years!
Weren’t expecting that, were you?
Nicholas of Myra was a Christian bishop who lived in what is today Turkey, between the years 270 and 343 AD. After his death, he was canonised as a saint, and became known as St. Nicholas of Myra. His patronages included pawnbrokers, fishermen, repentant thieves, pharmacists, single people, and…children!
The most famous story about St. Nicholas involves an old man who had three daughters who were about to be married. Not having any money for their dowries, he prayed for deliverance, and St. Nicholas threw three sacks of gold down the chimney of the man’s house, which landed in the stockings which his daughters had hung by the fire to dry that evening.
This is the origin of hanging out Christmas stockings.
It’s also the origin of the three golden balls which are the traditional symbol of pawnbrokers.
As the patron saint of children, St Nicholas came to be commemorated every year on the anniversary of his passing: the 6th of December.
Religious observances of St Nicholas eventually blended with the Christian rebranding of ancient Pagan year-end customs, celebrating the birth of Jesus, and the two quickly became inseparable. This is what led to the traditional “12 Days of Christmas” (which, in case you’re wondering, starts on the 25th of December, and ends on the 5th of January).
For centuries, people celebrated St Nicholas as the patron saint of children every December, and it became the custom for people to dress up as St Nicholas, wearing traditional bishop’s robes and hats…which were red!
As the custom of observing St Nicholas’s Feast Day, as it became known, spread across Europe, elements were added to his lore. In the Netherlands, Saint Nicholas became “Sint Niklaas” in the Dutch language, and he was believed to ride across the rooftops every December, dispensing sweets and presents to all the good boys and girls, with the aid of his magical flying horse, and his assistant!
Yep – Santa’s first little elf was a blackface boy named “Zwarte Piet” (“Black Peter”). Originally depicted as an African slave-boy, his image was later softened into that of a child chimney-sweep (his blackness the result of all the soot and ash), who sat on the chimney-stacks around the Netherlands, eavesdropping on children throughout the year. At the end of the year, Zwarte Piet wrote up a report for Sint Niklaas, declaring which children had been naughty, and which ones had been nice and deserved treats for their good behaviour through the year.
Thus was born the custom of Santa Claus visiting houses, delivering treats, and having little helpers!
As this custom grew, the Dutch “Sint Niklaas” was contracted into “Sinterklaas”…which eventually became “Santa Claus”.
Santa Claus is Coming to Town
Into the 17th and 18th centuries, the Dutch tradition of Sinterklaas and Zwarte Piet, their flying horse, and their pre-Amazon home-deliveries continued to grow, eventually spreading across the world and reaching the United States.
At this time, the image of St Nicholas – or his alter-ego, Santa Claus – had not yet been fully established. It was still the custom in Europe (and still is, to this day, in some places) to depict St Nicholas in entirely religious garb, with long, flowing robes, religious headgear, and a bishop’s staff. It wasn’t until 1823 that the first description of Santa Claus as we might recognise him today was made.
And I say ‘description’ because that’s exactly what it is – a literary description.
Down the chimney St. Nicholas came with a bound.
He was dressed all in fur, from his head to his foot,
And his clothes were all tarnished with ashes and soot;
A bundle of Toys he had flung on his back,
And he looked like a pedler just opening his pack.
His eyes—how they twinkled! his dimples how merry!
His cheeks were like roses, his nose like a cherry!
His droll little mouth was drawn up like a bow,
And the beard of his chin was as white as the snow;
The stump of a pipe he held tight in his teeth,
And the smoke it encircled his head like a wreath;
He had a broad face and a little round belly,
That shook when he laughed, like a bowlful of jelly.
He was chubby and plump, a right jolly old elf,
And I laughed when I saw him, in spite of myself;
A wink of his eye and a twist of his head,
Soon gave me to know I had nothing to dread.
“A Visit from St Nicholas”, by Clement Clarke Moore. 1823.
That’s an excerpt from the famous poem “A Visit from St Nicholas”, written in 1823, better known as “The Night before Christmas”.
This was the first literary text to give us much of what we recognise today as being standard Santa lore – the fact that he flies through the sky on a sleigh, that this sleigh is pulled by eight reindeer – Dasher, Dancer, Prancer, Vixen, Comet, Cupid, Donner, and Blitzen – that he only delivers toys to good boys and girls when they are ASLEEP, and that he wears a fur-lined suit against the sharp winter cold.
But most importantly – this is the first literary description of Santa as a person – a short, cheery little toymaker with a big, white bushy beard and a fat, round belly that wobbles when he laughs.
Before this time, Santa was still depicted as a stern, religious figure. Moore’s description in his poem made Santa appear a more genial, cheerful, personable fellow – much more like what a patron saint of good boys and girls should be. Moore’s poem became massively popular, and yet it wasn’t until 1832 that Moore even admitted to writing it! Today, there are four original handwritten copies of ‘A Visit from St Nicholas’, all penned by Moore himself.
I Saw Mommy Kissing Santa Claus
By the mid-1800s, the modern image of Santa Claus was breaking further and further away from his original religious roots in the 3rd century, but right up until the time of the American Civil War in the 1860s, there was still no generally-agreed-upon “look” for Santa. Exactly who he was and what he looked like was still largely a matter of individual taste and preference – of where and how you grew up, and what version of Santa you were taught about as a child.
This all changed in the 1870s.
It was in the late 1860s and through the 1870s, 80s, 90s and the early 1900s that one man would change our entire perception of Santa Claus – who he was, and what he looked like – forever.
And that man was a German-American immigrant named Thomas Nast:
Born in Germany in 1840, Thomas Nast attained great fame in the United States starting in the 1860s – fame which lasted until his death in 1902. His celebrity came from his skills as a cartoonist and caricaturist, and his artworks were published in many newspapers and magazines throughout the United States. In the 1870s, Nast started drawing images of Santa Claus for the Christmas issues of the various magazines and newspapers which published his works. It was during this time – possibly inspired by Clement Clarke Moore’s famous poem – that Nast started drawing Santa as described in that famous literary work.
Nast’s cartoon: “Jolly old Santa Claus“, from 1881, is just one of several cartoons that he drew between the early 1870s up until the late 1890s, depicting Santa as we would know him today: A fat old toymaker with a red, fur-lined suit, white beard, red cap, and a large belt!
While Nast was certainly not the first person to try to draw a depiction of Santa Claus or St Nicholas, he was the first person to draw Santa as he is imagined in the modern world. Our popular image of Santa is, in many ways, Nast’s Santa, and without him, the image of a plump, cheery old man with a white beard in a red suit would likely not exist…because, apart from anything else, Santa used to be depicted wearing a suit of green!
Coca-Cola and Santa Claus
In the 1920s, the Coca-Cola Company started an aggressive advertising campaign featuring Santa Claus drinking its iconic beverage, with the red labels on Coca-Cola bottles matching the red of Santa’s famous suit. This led to the often-touted, but false, belief that Santa’s red suit comes from his association with Coke!
Right?
Wrong.
As mentioned earlier, Santa wearing a red suit with a belt, hat and boots goes all the way back to Thomas Nast in the 1870s. Coca-Cola may have refined the image into more of what we’d think of today as Santa Claus, but it certainly didn’t originate the idea of Santa in his red suit. Regardless, the association between Santa Claus and Coca-Cola continues to this day.
Rudolph the Red-Nosed Reindeer!
…is not one of the original reindeer!
Don’t believe me? Read the Clement Clarke Moore poem from 1823 again. The literary work that gave birth to so much of our Santa Claus lore makes no mention of him.
In the poem, Santa has eight reindeer: Dasher, Dancer, Prancer, Vixen, Comet, Cupid, Donner, and Blitzen!
But no Rudolph.
Why not?
Because Rudolph wasn’t added until more than a century later, in 1939! He was born in the pages of a children’s colouring storybook written by author Robert L. May for the Montgomery Ward department store in Chicago. May recounted how he used his daughter, Barbara May, as the guinea-pig for his new story, testing out names and rhymes on her to see which ones she liked the most. May tried several different names, from Rollo, to Reginald, to Rupert!…but Rudolph won out.
May got the idea for Rudolph having a glow-in-the-dark nose after seeing a heavy fog rolling across Lake Michigan, which inspired the whole thing about “one foggy Christmas Eve”, and the rest, as they say, is history!
Santa Claus Today
Santa Claus continues to morph and change in the 21st century, even if his basic outfit of red, black and white hasn’t changed since the late 1800s. It’s now customary for kids to leave out milk, cookies and carrots for Santa and his reindeer at Christmas (or, if one meme is to be believed, pizza and a pint of beer). Either way, Santa is here to stay. And remember, kids – if anybody ever tells you that Santa isn’t real – remember that he was born nearly 2,000 years ago in what is now Turkey. So yes – Santa really was a real person.
In the 21st century, with keyboards, laptops, PCs and other electronic devices being the main means of non-verbal communication, more emphasis is placed these days on fast, accurate and smooth touch-typing than upon almost any other computing skill. Being able to type quickly and smoothly, with a minimum of errors, using both hands and all ten fingers across a keyboard, is seen as both an essential skill and a desirable trait in our increasingly connected online world.
But as little as a hundred years ago, another skill was held up to a standard just as high as that of fast and accurate typing is today – the art of penmanship!
So, what exactly is penmanship? Where did it come from? How was it taught? How did it evolve over the centuries, and what’s happened to the art of penmanship in the 21st century? This, and other, related topics, will be the subject of this posting.
A Brief History of Writing
Before we explore the history of penmanship, we first need to explore the history of writing as an activity. The first evidence of writing of any kind comes from the Sumerian civilisation of the Middle East, and consists largely of cuneiform – a type of wedge-shaped text produced by pressing a stylus into a soft medium which could record the marks, such as clay or wax. Erasing or reusing the tablet was simply a matter of pressing out the incorrect marks, and re-marking the correct cuneiform marks on top.
Writing with ink started with the reed pen, and sheets made from the papyrus plant (which gives us the word ‘paper’ today).
Originally, writing was largely pictographic in nature – a person drew small images, or representations of objects, to express words and thoughts – similar to ancient Egyptian hieroglyphics. This led to what became known as the “Rebus Principle”.
In its most basic form, the ‘Rebus Principle’ involved the use of homophonic words joined to other words or sounds, to create new words. For example, the symbol or picture of a bee, plus the symbol for a tray or platter, could be written together to form the word ‘betray’, for which no other symbol existed. In this way, each individual sound within a given language could be represented by its own symbol – which became known as letters – and these letters could be combined to form modern text and writing.
The Evolution of Writing to Penmanship
Throughout much of history, literacy of any kind was limited to those with access to books, writing materials and educational sources – largely monasteries, wealthy noble families, universities, schools and churches. The emphasis in writing was less on composition than on simply copying texts. Before the age of the printing press in the 1450s, texts of any kind had to be laboriously copied out by hand, word by word, line by line, stroke by individual stroke.
This slow, methodical pace made for elegant, flowery and highly elaborate texts – but it took people ages to write anything, and the end-result could be nigh impossible to read! It’s for this reason that cursive script – that is – handwriting with letters formed swiftly, and joined together cleanly with loops and connectors to form a single, continuous stroke – started becoming common. It was just easier to write everything in a series of smooth, connected movements, rather than individual stop-and-start motions.
Cursive of any kind dates back centuries, but in this posting, we’ll be looking at the development of modern cursive script. So – where did it come from?
The first cursive script used by English speakers for which there was an established style, recognised by name, was the ‘Secretary Script’ or ‘Secretary Hand’. Developed in the 1500s, it was one of several styles of cursive then in use across Europe. Others included ‘Court Hand’, and even the ‘Italic Hand’ – and yes, this sloping hand, which originated in Renaissance Italy, is what gave our modern ‘italics’ their name.
The problem with these hands or scripts was that there was really no solid uniformity, and while they were fast to write, they were equally difficult to read! To show just how hard, here’s an example of Secretary Script:
How much of that can you read? Not much, huh? And yet, this was written by one of the most famous people in the world – you’re looking at William Shakespeare’s handwriting. Secretary script, or a variant of it, is how he learned to write.
Advances in Cursive Script
The lack of uniformity in handwriting was clear for all to see. It’s for this reason that in the 1600s and 1700s, solid efforts were made to improve handwriting, and to make it smoother, and neater. This led to the rise of English Roundhand, a type of cursive which spawned many imitators, and a style which is still popular today.
“Roundhand” referred to a collection of script-types developed in the 1600s and 1700s. It all came about because of the frustration of French court officials in the 17th century – after all, you try reading important legal documents handwritten in six or more different styles of cursive! This became so intolerable that they demanded something be done about it, which led the Controller-General of Finances in France at the time – Jean-Baptiste Colbert – to decree that, from then on, only three types of handwriting were to be used in legal documents! Probably much to the relief of everybody around him.
One of these three scripts was known as the ‘Ronde’ style, which, when it came to England, was anglicised as ‘Round’ or ‘Roundhand’, and started being publicised in copybooks printed as guides for people wishing to learn the newest, most stylish, and most effective ways of writing by hand! This is probably the first instance of a uniform cursive script being spread through a given population.
Here, we can see a variation of the French Ronde script. Still elaborate, but much easier to read! Or at least, much more so than the cramped Secretary Script that preceded it in the 1500s. The curly letter-shapes were artistic, but also easily recognisable, making handwritten documents much easier to read. It was from this script that the English ‘Roundhand’ and ‘Copperplate’ styles evolved, spreading to places like the North American colonies, and Canada, in the 1700s.
Above, we can see an example of Roundhand Script, inspired by the French ‘Ronde’ script of the 17th century. It was the first style of cursive handwriting really designed to improve both the speed of the writer, and the legibility of what was written, for the reader. Texts like the American Declaration of Independence were first handwritten in Roundhand.
Writing Enters the Machine Age
The invention of the electric telegraph in the 1830s and 40s, and the rise of steam-powered technology such as trains and steamships meant that correspondence started to grow rapidly. For the first time in history, efficient postal-systems allowed letters and documents to be spirited around the world in hours or days, instead of weeks, or even months, just a generation before. This led to further improvements and refinements to the art of cursive handwriting.
One of the biggest developments in the history of cursive script happened during this time – the arrival of Spencerian.
Spencerian, developed by American teacher Platt Rogers Spencer, was, like most scripts which preceded it, a rounded cursive script, designed to be neat, fast, and legible. To make handwriting easier and faster, Spencer looked through various examples of previous styles of handwriting to see what he wished to keep and what he desired to discard. His aim was a script that was both fast to write and neat to read. To this end, he largely eliminated the excessive curls, swirls and flourishes found in earlier handwriting styles, such as English Roundhand.
Spencer recognised that business was starting to grow, and aimed his new style of writing at businesspeople and others in professional occupations, as a type of cursive which anybody could understand, and to ensure this, he simplified it a great deal.
Here we can see an example of Spencerian script from 1884, roughly twenty years after Spencer’s death. While still fairly elaborate, it is uniform, neat and quite legible to modern eyes, barring a few stylistic changes between then and now.
Spencerian script remained de rigueur in much of the English-speaking world for the better part of a hundred years, lasting well into the 20th century, and only dying out with the rise of typewriters in the mid-1900s. Before then, it was held up as a prime example of neat, professional cursive handwriting. One of the most famous examples of Spencerian script, still seen emblazoned upon millions of cans and bottles all over the world today, is the Coca-Cola logo!
Designed by bookkeeper Frank Mason Robinson in 1885, the logo is simply a variation of his own Spencerian handwriting – and has remained virtually unchanged for over 130 years!
The Palmer Method of Writing
Efforts to improve cursive script continued throughout the 1800s, and by the 1880s and 90s, a new script had emerged: the Palmer Script.
Like the Roundhand and Spencerian scripts which preceded it, Palmer was an attempt to cut down on needless frivolity and improve legibility. Lowercase letters such as ‘s’ and ‘r’ were given more distinctive shapes to make them stand out, and flourishes on capitals and ending-letters were kept to a minimum to ensure clarity of text. The aim, as ever, was to improve the flow of writing, leading to a cleaner end-result.
Developed by Austin Palmer in the 1880s and 90s, the “Palmer Method” as it became known, relied on ease of motion, and muscle-memory to produce good handwriting. Palmer reasoned that the easiest shapes for people to draw in quick succession were circles and loops.
To this end, his system of cursive relied heavily on letters and letter-forms being made up of curls, curves, circles and loops. This made the script easier to remember, easier to flow, and easier to read, since there would be greater uniformity between the letters. This is clearly seen in this writing sample from a Palmer textbook:
The Palmer Method, and similar styles of cursive which followed after it, became the preferred methods of writing because they were fast, easy, and free of the time-wasting embellishments – the loops, curls, and excessive flourishes – seen in preceding handwriting styles such as Spencerian.
The A to Z of American Zanerian Script
American calligrapher Charles Paxton Zaner invented Zanerian script in the 1880s, which led to the establishment of the Zanerian College of Penmanship. This turned into the Zaner-Bloser Company, which offered instruction manuals, copybooks and writing guides, as well as penmanship courses, to people in professional careers looking for an effective and clear style of cursive writing.
As with other styles of cursive at the time, Zaner was inspired by the clear, but overly flowery, Spencerian script, and sought to simplify it, especially when it came to giving instruction to school-children in their penmanship classes. A more clean-cut script would be easier and faster to learn, and would help students produce consistent handwriting. A later variant in the same tradition – D’Nealian script, developed nearly 100 years later in the 1970s – is the style of cursive which I learned at school in the early 1990s.
Teaching Cursive Script – How Was it Done?
I’m sure most of us can think back to our days in school, when penmanship was one of the main classes that we had to take. Copybooks and pencils, slates and chalk, or even sand-trays were used to teach children how to write effectively!
Before the 20th century, and even for the vast majority of it, teaching children the art of neat and efficient cursive writing was seen as a core part of any educational system’s curriculum. Teaching penmanship was essential, and for almost everybody, ‘penmanship’ meant ‘cursive’.
So, how was it done?
For children to be taught their letters effectively, schools needed easy, cheap, and effective methods for teaching students the repetitive tasks of letter-formation over and over again, without wasting huge amounts of money on paper, ink or pencils. To achieve this, they had some pretty ingenious solutions!
Back in Victorian times, writing of any kind started with copying down letter-shapes which were drawn on the blackboard by the teacher. Copying of basic letter-shapes was done using a sand-tray – which is literally exactly what it sounds like – a wooden tray filled with soft, fine-grained sand. Using their fingers, or a stick, children could trace the shapes of letters in the sand to get a feel for how to draw them correctly. Resetting the tray for a new letter was a simple matter of smoothing out the sand and starting again.
Once students had learned the basics of letter-formation, next came the task of learning how to do this with an actual writing instrument. In most cases, this involved a slate tablet and a pencil, or even a piece of chalk. Using a slate and pencil allowed students to familiarise themselves with the act of writing itself, and not just the shapes of the letters. Penmanship classes using pencils and slates would last until the children were deemed old enough to handle pen and ink, and to write on paper.
For the longest time, learning how to write in ink involved a dip-pen and an inkwell – even after fountain pens became commonplace. This was largely due to cost – it was cheaper to buy dip-pen ink in bulk, in huge bottles, and to issue students with simple wooden pen-holders, than it was for students to buy fountain pens and bring them to school. At the turn of the 20th century, fountain pens were still extremely expensive luxury items, and certainly not something that a parent (or teacher, for that matter) would wish to buy for a child who was only just learning how to write! Well into the 1900s, hundreds of schools around the world still relied on dip-pens, inkwells, pen-holders…and the services of an ink-monitor – a student nominated by the teacher to walk around the classroom and refill each student’s desk inkwell with fresh ink at the start of the school day.
Writing with a dip-pen and inkwell was rather more involved than today’s practice of just yoinking the cap off of a pen and scribbling away with it. Knowing how to correctly orient a nib, how to control the ink-flow, when to stop and re-dip your pen every few words, and how not to dribble ink everywhere, meant that writing with a pen was a much more daunting prospect for a child than it might at first appear.
For this reason, teachers would give out penmanship certificates, or ‘pen licenses’, to children as an incentive – proof that a particular child had mastered their writing well enough to be allowed to use a pen, and could be relied upon not to make a huge, godawful mess on the page when they reached for the latest prize in their quest for cursive greatness! I still remember being ten years old, and being given my pen license in school – a rectangular sheet of thin cardboard, sky-blue in colour, with the teacher’s name, signature and date on it, and a brief message saying that I had qualified to use a pen!
Once students had graduated to using a dip pen and inkwell, the next step was familiarising themselves with their copybooks. A copybook is exactly what it sounds like – a book where you copied out the letters, letter-combinations, words, and sentences within to build up muscle-memory, and to train your hand to produce neat, legible handwriting. The exercises inside the book taught students how to form letters, common pairs or groups of letters, like th, ing, sc, and so-on, and common words, phrases and sentences.
Apart from being practice, a copybook also served as a record of the student’s progress in improving their handwriting. Students who were clumsy, inattentive or otherwise careless might dribble ink across their penmanship homework, resulting in them ‘blotting their copybook’ – an expression which would later evolve to mean taking part in (or being rumoured to have taken part in) any activity which might leave an indelible mark upon one’s personal character and reputation.
Writing Right with the Right Hand
You’ve probably heard this in a hundred books, been told this by your grandparents, or seen it mentioned in movies and TV shows all over the world.
In schools, left-handed children were forced to learn how to write using their right hands.
But why? What’s the point of it? Why bother?
While loads of people will tell you all kinds of elaborate stories and myths and biblical passages and so on…the truth of the matter was that kids were taught to write with their right hands as opposed to their left, due to the materials given to them to practice with.
The human eye naturally looks from left to right, which is why the vast majority of languages are written in this direction. A right-handed writer therefore writes away from the text that they’re putting down on the page. A left-handed writer, by comparison, writes towards the text as they go along the page. This means that their hand hovers over the freshly written text as they write. Doing this with a slate and pencil, or even worse, a sheet of paper, a pen and fresh ink, could cause stains, smudges and streaks across the page…and across the student’s palm.
It was in an effort to prevent this from happening, that teachers insisted that all students had to write with their right hands, whether they actually could, or not. Fortunately, such practices no longer exist, and today, lefties can write as they wish!
The Decline of Quality Penmanship
Emphasis on quality penmanship – and especially cursive script – has been in steep decline since the turn of the 21st century. While handwriting has, out of necessity, remained part of most school curriculums, teaching students more than the basics of penmanship has largely fallen by the wayside, with greater emphasis being placed on computing skills, typing and keyboarding in our much more digital and online-oriented world.
Last but not least…
Throughout this posting we’ve covered various styles of cursive handwriting, watching their gradual evolution from the 1500s to the 1900s. But, practice as you might, nobody’s handwriting is going to be exactly like the exercises in their copybooks, and nobody’s handwriting is going to be exactly like any other person’s.
We change strokes, letter-forms and styles according to what is most comfortable to our hand-movements, our grips, and our personal tastes, to form something which is entirely unique to our own personality and preferences. So what exactly do you call a person’s individual style of writing? Ever wondered?
The answer is “hand”! This is the name given to a person’s individual style of script or longhand writing, and it is as unique as a person’s fingerprint. And now you know!
These days, with everything that’s going on with gender-identities, pronouns, titles and how you should address a person, trying to juggle everything without making some real, or imagined, faux-pas can become increasingly challenging.
But what about traditional titles, or forms of address? Mister, Missus, Miss, gentleman, layman…where did these all come from? What did they originally mean? Today we’ll find out.
A “form of address”, or an “honorific”, is defined as the most formal, polite, or correct method of addressing a person, and it usually precedes a person’s surname. So, let’s begin.
“Mister” (Mr.)
Dating back to the mid-16th century, “Mister” emerged as a variation of “Master”, and is the honorific given to men who hold no other titles or positions, and is usually followed by their last name. (“Mr. Smith”).
In times past, however, this was not always the case, and the prefix “Mister” did not always come before a person’s surname. For example, in a family with a father, mother, and two grown sons, only the head of the household would be “Mr. Smith”. As it was considered rude (and overly familiar) to address someone by their given name alone, Mr. Smith’s sons would also be called “Mister” – but ahead of their first names, as opposed to their last names (“Mr. James”, “Mr. Richard”, etc.). In the 1700s and 1800s, and for most of the 1900s, this was considered the most proper usage of the term.
If you were addressing multiple men (e.g., in a company name), then the convention was to use the contraction “Messrs.” instead (e.g. “Messrs. Wilkins, Wilkins, Entwhistle & Dodd. Solicitors”). “Messrs.” is a contraction of the French “Messieurs”, the plural of the French male honorific “Monsieur” (itself typically contracted as “M.”, as in “M. Hercule Poirot”).
In the medical field, male surgeons are traditionally known by the honorific of “Mr.”, as opposed to “Dr.”, because in times past, you didn’t need to earn a medical degree to become a surgeon. And because you didn’t need to earn the degree, it also meant you didn’t earn the privilege of being titled “Doctor”. This tradition started all the way back in the Middle Ages when surgeons were barber-surgeons, and the convention just…stuck.
“Missus” (Mrs.)
“Missus”, the prefix usually given to a married woman, is a contraction of “Mistress”, although the prefix of “Mrs.” before a woman’s surname did not necessarily mean that she was married.
A married woman was traditionally titled as “Mrs. John Doe”, taking her husband’s full name, which was later on contracted to just “Mrs. Doe”. The convention of a wife taking a husband’s full name is now largely dead, except in the most formal of circumstances, such as with official invites (“Mr. & Mrs. John Doe are cordially invited…” etc, etc.).
That said, just because a lady was called “Mrs.” did not always mean that she was married. In some instances, the prefix “Mrs.” was given as a sign of respect. This is most often seen in the 1700s and 1800s with high-ranking female servants.
Mrs. Hughes and Mrs. Patmore, the housekeeper and the cook, in “Downton Abbey” were not married, but by convention, were given the prefixes “Mrs.”, to acknowledge the fact that they held important positions within a nobleman’s household – that of housekeeper – the most senior female servant – and of head cook – the most senior servant in charge of the kitchens. This convention has largely fallen by the wayside these days, but if you’ve ever wondered why in old novels or movies, you see it being used – now you know why. The same goes for Mrs. Bridges, the cook in “Upstairs, Downstairs“. As Hudson the butler said:
“To my certain knowledge…there has been no ‘Mister’ Bridges, the title ‘Missus’ being the usual honorarium enjoyed by cooks of a certain class”.
“Miss”
Another contraction of “Mistress”, “Miss” has always been the traditional form of address for unmarried ladies (spinsters) and young girls. Only when there were multiple ‘Miss’es in a family would the prefix be placed before a girl’s first name (“Miss Jane”, “Miss Lucy”), in order to differentiate them – similar to the convention of placing ‘Mr.’ before a man’s first name. If there was only one woman, and no need for further distinction, then it went before her surname (“Miss Marple”).
“Master” (Mast., or Mstr.)
Here’s one you haven’t heard of before!…Or at least, not recently. You’d have to be pretty old to remember the days when the term “Master” was used in everyday correspondence.
Back in the 1700s and 1800s, and right into the early 1900s, the title ‘Master’ was given to prepubescent boys – typically, boys who were below the age of thirteen. A boy of twelve or below was always titled “Master”, while a boy over the age of twelve (and into adulthood) was titled as “Mister”. One of the reasons for this convention was to easily tell at a glance, in a written document, which people mentioned were children, and which were not. On the passenger list of an ocean liner, for example, a family traveling together might be listed as:
“Mr. & Mrs. John Smith. Miss Amelia Smith. Mast. Edward Smith.”
“Master” isn’t used as often today as it once was, but the convention continues to exist, nonetheless. Exactly when a boy transitioned from ‘Master’ to ‘Mister’ is a bit unclear, however. The convention was that ‘Master’ referred to a child or youth below the age of legal adulthood, and that ‘Mister’ referred to a legal adult. In some instances, a boy switched from being a ‘Master’ to a ‘Mister’ at the age of 13 – a convention dating back to the Middle Ages, when boys were considered men beyond the age of twelve. Depending on where you read, however, a boy might continue to be addressed as ‘Master’ all the way up until his 18th birthday. The most traditional use of the term generally refers to boys below the age of 13.
Examples of boys being called ‘Master’ include the comic books, TV series and movies featuring the cartoon character Richie Rich, whose butler, Cadbury, invariably addresses him as ‘Master Richie’ – not because he’s the master of the household, but because he’s a little boy. Another example comes from ‘To Kill a Mockingbird‘ by Harper Lee, where Calpurnia calls Scout’s older brother ‘Mister Jem’.
“MISTER Jem!?” Scout says.
“Well, he’s just about ‘Mister’ now”, Calpurnia tells her, indicating that Jem Finch is growing up, and as a teenager, can no longer be called ‘Master’ Jem, as he once was.
Gentleman/Gentlewoman
These days, the term ‘gentleman’ (and its less-used counterpart, ‘gentlewoman’) usually refers to a person’s behaviour. You might have heard the expression “conduct unbecoming an officer, and a gentleman!”, as they used to say in military regulations, or a “gentleman’s agreement”.
But what exactly IS a “gentleman”?
Originally, the term ‘gentleman’ referred to a rank in society. At the top you had royalty, below that came the nobility or the aristocracy, and below that, came the gentry. Beneath that were the yeomanry.
A ‘gentleman’ was therefore a person of the gentry class, which typically included people who held no titles of their own (no titles such as duke, earl, count, viscount or baron), but who might be of the mercantile or professional classes – basically what we’d call the upper middle-class today: teachers, doctors, and other learned professionals.
Over time, the term ‘gentleman’ became more general, and referred to any well-behaved, cultured male from the upper echelons of society. Since ‘gentlemen’ were usually landowners who earned their income by collecting rent from their landholdings, properties and estates, they had no need to hold down a regular job. This is why, for a long time, the belief was that “gentlemen don’t work! Not real gentlemen!” – as Miss O’Brien says in “Downton Abbey“. Being a gentleman therefore implied being rich enough that you could simply live off the rent from your properties and the dividends from your investments, without having to lift a finger otherwise.
“Sire”
A now rather outdated term used to address a male monarch or other high-ranking noblemen, “Sire” is a corruption of the word ‘Senior’ (much in the same way that ‘Alder’ – as in ‘Alderman’ – was a corruption of the word ‘Elder’).
“Sir”
A contraction of ‘Sire’, which again, came from ‘Senior’.
“Sirrah”
Unless you read a lot of old books, or novels, or watch TV shows or movies set in the 1700s or 1800s, or even before then, you’re not likely to have come across this title. At the top was ‘Sire’, below that was ‘Sir’…and at the very bottom was ‘Sirrah’ – a derogatory form of address for a man, or boy, who was either younger than you, or of inferior social status. “Sirrah” was often used as an insult or to express contempt or disgust. (“Out of my way, Sirrah!”).
“Esquire” (Esq.)
Although not as often used today as it once was, ‘Esquire’ remains popular in formal circles. It is what’s known as a “courtesy title” – a title given for the sake of decorum and good manners, to show respect to a person (in this case, a man) who held no other title – and, unlike the others in this list, it is placed after the surname rather than before it. A good example is Sherlock Holmes, Esq., consulting detective!
A Note on Names
These days, it’s extremely common to call just about anybody by their first name, regardless of rank, title, position, or level of acquaintance, but this was not always the case.
In the 1700s and 1800s, and indeed, most of the early 1900s, addressing a person by their given name was rarely done. It was considered rude and overly-familiar to mention a person’s first name – especially a person you didn’t know, or who was your social or professional superior! A person’s rank, title, or other appropriate prefix was used, along with their family name, except when there were multiple members of the same family present. Traditionally, the only people who used a person’s first name were immediate family, close relations, or very, very old friends, whom the person had usually known since childhood.
In instances where people were friends, it was still common for them not to use each other’s first names – again, because it was seen as being overly familiar, and something only done between siblings, parents, cousins and other close family. This convention is clearly visible in numerous works of literature written during the 19th century.
Sherlock Holmes never calls his friend ‘John’, and Dr. Watson never calls him ‘Sherlock’, despite being friends for over twenty years. In ‘Tom Brown’s School Days‘, Tom’s best friend, East, is only ever known by ‘East’ (even though his first name is Harry), and the school bully is only ever referred to by his surname – Flashman. Again, in ‘Pride and Prejudice‘, the male lead is almost exclusively called ‘Mr. Darcy’, even though his first name is Fitzwilliam.
Use in Everyday Life
In an age before telephones, telegraphs, or high-speed internet, when the most common method of distance communication was the letter, adhering to such forms of address was very important – especially when communicating with someone with whom you weren’t particularly well-acquainted. It was all about decorum, not being overly-familiar, and keeping a respectful distance, in a time when rank, title, and social standing were placed on a much higher pedestal than they are today.
This is also why, in letter-writing and etiquette manuals of the era, you get those really elaborate openings and closings in letters – stuff like “I have the honour, sir, to remain your obedient servant“, and so on. Just like everything else, it was all about rank, title and social standing.
Closing Statements
Well – there you have it! A brief history of all the most common titles and forms of address still in use today.
But, what happened to all these niceties? Where did they go? What happened to them!?
The simple answer is – they were no longer seen as necessary! As communications got faster, people found it more convenient to dispense with all but the most rudimentary and necessary of titles. It saved time, and prevented confusion and delay. On top of that, the old formality was seen as excessive, even bordering on pretentious! People felt more comfortable being addressed by their first names rather than their surnames; using titles, prefixes and family names sounded too impersonal in a world where people wanted to be more open, inviting and approachable.
Ever since ancient times, Europeans have held the Far East in awe, fantasizing that countries such as China, India, Indonesia and Japan were magical kingdoms filled with all kinds of wondrous, rare and amazing commodities – silk, porcelain, tea, spices, beautiful timber, rare dyes, ivory, tortoiseshell, and fascinating jewels! Trade-goods such as ivory, ginger, cinnamon and pepper were exported from China and India all the way to Europe along a network of roadways, rivers and coastal sea-routes which eventually became collectively known as the ‘Silk Road’.
Commodities transported along the Silk Road were rare, expensive, exotic…and open to theft…unscrupulous dealers…confidence artists…spoilage or damage…and all manner of mishap. Add to that the fact that products took literally months to travel from India, Indonesia, Japan or China to Western Europe, and it’s no wonder that the prices paid for these things were astronomical!
To try and keep costs down and to maximise how much they could purchase (and sell) at any one time, European traders increasingly took to shipping vast amounts of exotic goods back to Europe by sea. However, this was expensive, dangerous, and extremely time-consuming, with a round trip taking the better part of a year, or more, to complete. Having to sail around the Cape of Good Hope, or Cape Horn, meant that sea-voyages between Europe and Asia were never going to be easy, or safe.
To try and rectify this, ever since the 1500s and the discovery of the Americas, Europeans had set their sights on finding a faster, easier route to Asia – one which didn’t involve sailing around Africa or South America, and one which could vastly speed up trade between East and West.
What is the Northwest Passage?
The route chosen to try and improve trade between Europe and Asia was one which sailed west across the Atlantic, north, past Greenland, and then west, past the northern coasts of Canada, and then south into the Pacific.
This route became known as the Northwest Passage.
In an era when Europeans were still calling mythical continents “Terra Australis Incognita” (“Unknown Southern Land”), nobody actually knew whether a Northwest Passage even existed!…What if it did? What if it didn’t?
The only way to know for sure was to physically sail to Canada, and map out the entirety of the northern Canadian coastline to find out if it was even possible to sail from the Atlantic coast of Canada to the Pacific.
Numerous expeditions had tried over the years, with little success. Even famed navigator and Royal Navy officer, James Cook, legendary for mapping most of the South Pacific – had tried – and failed – to find the Northwest Passage.
In 1837, King William IV died, and his niece, the 18-year-old Princess Victoria, ascended the British throne as Queen Victoria!
The Victorian era, as the period from 1837 to 1901 is known, was an age of optimism, determination, confidence, and great technological advancement! Huge progress was made in the fields of medicine, manufacturing, industry, technology and communications during this time. It was for all these reasons that, in the 1840s, Britain decided that it was time for another crack at the Northwest Passage!
Equipping the Franklin Expedition
In the 1840s, the British Admiralty decided that the time was ripe for another expedition to the Northwest Passage. To lead this daring venture into the frozen north, it selected what it thought was the best man for the job: Captain Sir John Franklin.
Born in 1786, Franklin was a man of considerable accomplishment. A veteran of the Napoleonic Wars and a naval officer who had fought alongside Admiral Nelson, a fellow of the Royal Geographical Society, and a former Lieutenant-Governor of Van Diemen’s Land (Tasmania), Franklin was used to a rough life, and to spending years away from home – both vital qualities for the leader of such a perilous expedition! Franklin was also selected for his intelligence – of all the letters after his name and title, three were possibly the most impressive: FRS. Fellow of the Royal Society!
The Royal Society of London for Improving Natural Knowledge – or the Royal Society, for short – is the oldest surviving, and most famous, learned society in the world. Established in 1660 and granted its royal charter by King Charles II, it has been at the forefront of scientific, technological and medical research and advancement for the past 360 years! Membership of the Royal Society is strictly by invitation ONLY, and to be invited to become a member (or more precisely, to be granted a ‘fellowship’) is not only a gigantic honour, but also confirmation of one’s vast contributions to the worlds of science and technology!
Famous fellows of the society included Sir Isaac Newton, Dr. Stephen Hawking, Sir David Attenborough, Charles Darwin, and brainiac-of-brainiacs: Albert Einstein!
Gaining admission to the Royal Society is so difficult that surely anybody who held the letters FRS after their name was certainly not going to be some addlepated dunderhead, right? And as if the powers-that-be needed any more convincing – Franklin had already headed a number of other expeditions to the Arctic in previous years! What else could one ask for? The Admiralty was convinced, and duly appointed Franklin expedition leader.
The Ships: Darkness and Terror…
With its leader selected, the next task was to find some way of getting the crew through the Arctic. The Admiralty selected two ships: the Erebus, and the Terror! Erebus was named for Erebos – the Greek god of darkness!
Two ships. Darkness, and Terror.
Sounds like a good omen!
To survive the long, likely multi-year voyage through the Arctic, the two ships were renovated, or “fitted out”, to be as well-suited to their new task as possible. Both vessels had already proved themselves capable of polar exploration – between 1839 and 1843, they had sailed in company to Antarctica with Sir James Clark Ross (after whom the Ross Ice Shelf in Antarctica is named) – but the British Admiralty wasn’t taking any chances when it came to trying to find the mythical Northwest Passage.
To this end, the ships’ hulls were reinforced with iron plates and rivets to guard against crushing ice, striking icebergs, and slamming into rocks. The interiors were fitted out with cabins, galleys, toilet facilities, libraries, an infirmary and physician’s offices, and the holds were modified to fit as many essential supplies as possible. As a further safeguard, the hulls were also subdivided into watertight compartments – just like on later ships such as the Titanic – to reduce the danger of flooding if the ships were holed by ice.
On the technological side, the ships were equipped with steam engines and propellers, and even a rudimentary, steam-powered central-heating system, to keep the ships at least moderately comfortable in the freezing sub-zero temperatures that the crew were certain to encounter. To make sure that the ships weren’t rendered impotent and immovable by some sort of mechanical breakdown during the voyage, even diving suits were included in the equipment-stores, so that underwater repairs could be made.
Food and Drink for the Voyage
The Franklin Expedition was expecting to be away from civilisation for at least two to three years, during which time, they would have to survive almost entirely on whatever food they had brought with them. To this end, the ships were almost exclusively provisioned with one of the greatest wonder-products of the Victorian age!
Canned food.
Canned and bottled food has existed since the late 1700s. Canning started becoming really popular in the Georgian era, when Napoleonic France offered a prize to anyone who could come up with a convenient way of packaging large quantities of food so that it could be transported easily, stored safely, and kept for weeks, months or even years without spoiling.
Food that was canned and sealed tight could last almost indefinitely – a useful trait for an expedition that would be away from civilisation for up to three years! Early canned food was packaged so well that the cans were nigh impossible to open! Soldiers during the Napoleonic Wars were given canned food as rations, but exactly how the hell you got into them – well – that was a matter of ingenuity! Bayonets, axes, hammers, chisels, pocketknives, and even the odd musket-round were all used to try and crack open the almost impregnable containers to gain access to the food within! Canning was almost too effective for its own good!
The task of provisioning the Erebus and Terror with the necessary rations was given to industrialist Stephen Goldner. On the 1st of April, 1845 – April Fools’ Day – an order for…wait for it…8,000 cans of various foodstuffs was placed, to be filled in just SEVEN weeks! If this was Franklin’s idea of an April Fools’ Day prank, then Goldner was not impressed, and while he tried his best to meet the order, the need to cook, pack, lid and solder over a thousand cans of food a week – all done by hand, remember – led to inevitable quality-control issues. In the understandable haste, lids were improperly seated on the cans, and the lead-based solder used to ensure an airtight seal between the lid and the can itself was applied unevenly and sloppily by workers rushing to meet deadlines. This left some cans with tiny holes and gaps in the solder, and in some instances, the lead solder even leaked into the food itself!
Whoops…
While Goldner’s canning company provided the crew with the majority of their everyday food, for more specialised, luxury provisions – and what good Victorian exploratory expedition could be without those – the ships’ officers turned to another supplier: Fortnums!
Founded in London in 1707, Fortnum & Mason has, for over 300 years, been one of the most famous department-stores in the world. Specialising in exotic and luxury foodstuffs, Fortnum’s has been patronised by almost every great explorer in history, from Sir Edmund Hillary to the Earl of Carnarvon and Howard Carter – and the officers of the Franklin expedition were no exception!
Along with the food, equal attention (or perhaps, rather more attention) was paid to something which was…rather more important: What the crew would drink during the voyage. As it would be impossible to bring along enough drinking water, wine, rum, grog, brandy and scotch for a trip expected to last at least two years, the Erebus, and the Terror were fitted out with a new innovation: Desalination plants! These water-filtration systems would be able to process seawater pumped up from the ocean and onto the ships, and make it drinkable, giving the crew – in theory – an endless supply of fresh, drinkable, desalinated water!
The Crew of the Franklin Expedition
Everybody knew that going to the Great White North was going to be a perilous, and possibly fatal, endeavour. Because of this, the Franklin Expedition had to choose its crew with great care. In total, the two ships – Erebus and Terror – were loaded with 134 men: 24 officers, and 110 seamen and other crew.
Among the crew were four surgeons, two blacksmiths, cooks, stewards, four stokers (to handle the steam-engines), royal marines, engineers, and dozens and dozens of able seamen. Added to these were the two captains: of the Erebus, Captain Sir John Franklin himself; and of the Terror, Captain Francis Rawdon Crozier.
The two ships were fully provisioned and equipped, crewed and loaded, and left England on the 19th of May, 1845.
The Trip to Greenland
The first leg of Franklin’s voyage was from England to Greenland. Departing on the 19th of May, the ships sailed north, first to Scotland, and thence to Greenland. During this part of the voyage, the two arctic ships were accompanied by two more vessels bringing up the rear with extra supplies and equipment. When they arrived in Greenland, the supply ships transferred their cargoes into the two arctic ships, while unnecessary equipment and all outgoing mail from the expedition were loaded onto the supply ships, to be returned to England. Along with the mail and everything else that wouldn’t be going on this epic voyage of discovery, went five men:
Thomas Burt, an armourer (aged 22), William Aitken, a royal marine (aged 37), James Elliot, a 20-year-old sailmaker, Robert Carr (another armourer, aged 23), and an able seaman named John Brown.
The five men, one from the Erebus, and four from the Terror, were excused participation in the expedition on the grounds of health – all five had fallen ill, although with what was not recorded. They were put aboard the departing supply ships, and returned with them to England.
They didn’t know it yet, but these five men would later count their lucky stars to be the only members of the expedition to ever return alive.
The Voyage to the Great White North
Once the ships had been re-provisioned, and all the necessary supplies and shopping accounted for, the next step in the voyage was the most perilous: entering the Arctic Ocean!
With their modern, canned provisions, central heating, steam engines, desalination plants, and even the first, rudimentary daguerreotype cameras, everybody back home in England felt that the Franklin Expedition was by far the best-equipped and most prepared crew that had ever set out to tackle the ferocity of the frozen north, and if they couldn’t find the Northwest Passage…if indeed such a passage even existed!…then nobody would!
The ships left Greenland at the height of summer, and sailed northwest, towards Canada. The aim was to reach the Arctic during the warmest months of the year, to give them as much time as possible to explore the region before the all-too-short Arctic warm season faded away, and they would be doomed to spending months trapped in the ice, waiting for the spring thaw the next year.
By the 28th of July, 1845, the ships had crossed Baffin Bay off the west coast of Greenland. It was at this point that the ships were spotted by two whalers sailing south – the Enterprise and the Prince of Wales. This would be the last time that any of the Franklin crew, or their two ships – would be spotted by European eyes. It was shortly after this that the Erebus and the Terror disappeared into the frozen, merciless embrace of the Arctic, to begin their expedition proper.
The Sea of the Midnight Sun
Exactly what happened to the remaining 129 men of the Franklin Expedition from July 28th onwards can only ever be guessed at. Few written records remain, and what eyewitness accounts there were (from the Inuit native to northern Canada) were initially largely discarded as fanciful, overblown and inaccurate by rescuers who refused to believe the truth of what had actually taken place so far from civilisation.
1845: The First Year
The Erebus and the Terror sailed in company westwards from Baffin Bay and into the frozen wastes of the Arctic Ocean. With only scant maps to guide them, and absolutely no ability to rely on compass-bearings (being so close to the Magnetic North Pole, compasses were useless), the crew had to rely on the positions of the sun, stars and moon to navigate.
The Arctic summer was particularly cold that year, and progress was slow. By the time the ice started to freeze up again with the approaching winter, the two vessels had only made it as far as Cornwallis Island. Unable to go any further, Franklin and his crew made the decision to stop there for the winter. The ships were anchored off the coast of a tiny, gravelly island sticking up out of the water – Beechey Island.
Along with being their winter camping-site, Beechey was also where the crew of the Franklin Expedition farewelled the first three of their own: John Torrington (Petty Officer), William Braine (private, Royal Marines), and John Hartnell (Able Seaman). Later autopsies on the corpses determined that the three men had died of what the Victorians called ‘Consumption’ – or tuberculosis.
1846: The Second Year
With the spring thaw, the ships started moving forward once more. It was Franklin’s job to find the fastest, safest route through the Arctic to the Pacific Ocean, and to try and achieve this, he was determined to avoid the more extreme, more northern routes that might be available to him, and instead stick to southern passages through the Arctic. After mapping Cornwallis Island, the Franklin expedition had two choices to make:
They could either sail directly west – between Melville Island to the north and Victoria Island to the south – and out into the Arctic Ocean, or else south, between Prince of Wales Island and Somerset Island.
Not wishing to linger in extreme northern latitudes any longer than was absolutely necessary, Franklin’s crew elected to sail south, reasoning that it might be warmer, and therefore easier to navigate. To this end, they sailed from Cornwallis Island into Peel Sound, the stretch of water between Prince of Wales Island to the west and Somerset Island to the east.
What nobody aboard the Erebus or the Terror could’ve known at the time was that they were sailing into a deathtrap.
The problem with sailing south through the Sound during the spring thaw of 1846 was that this was the exact same route that all the sheet-ice, icebergs and growlers took when they, too, broke free from other bodies of ice and started drifting! Currents drove them south towards Canada, and Franklin’s two ships soon found themselves trapped in the mother of all arctic traffic-jams! Had they sailed west, the ice would simply have floated past them as the expedition made for the open sea past Melville Island, but by going south, the Franklin ships ended up going in the exact same direction as all the ice that they were trying so desperately to avoid! The result? The ships became stuck in ice. Again.
At first, Franklin’s crew were unfazed. After all, they knew that something like this was likely to happen, and so, once they had made as much forward progress as they could, they dropped anchor off King William Island on the 12th of September, 1846, and prepared to make winter camp yet again. Not wishing to stay aboard the ships – in case they broke free of the ice prematurely, or were crushed by the compacting force of more ice piling up behind them – the crew offloaded the necessary supplies and set up camp on King William Island itself, where they would be safe if the floes and sheets shifted, cracked or split apart unexpectedly.
1847: The Third Year
By early 1847, it was time to start moving again. The spring thaw had come, and while the ice did start breaking up, as it should’ve done, it didn’t break up nearly as much as one might’ve expected. The ships moved at a literally glacial pace, limited entirely by the movement (or lack thereof) of the ice which surrounded them.
At the end of Peel Sound, the two ships once again reached a junction where they would have to make a decision: Sail east, around King William Island, or sail south, past the island, and then westwards past Victoria Island and continue onwards to find the Northwest Passage.
Unsure of the exact geography of King William Island, and whether or not they would be able to sail all the way around it, the ships chose to stick to their current route and sail south.
Again, it was a decision that they would come to regret. What none of them could’ve known was that by sailing east, around King William Island, they would have avoided the heaviest ice-floes, popping out near the island’s southern coast, and then sailing on past Victoria Island’s southern coastline towards the Pacific. By sailing down King William Island’s western coast, however, they were headed into a virtual logjam of ice, packed together into an immovable, frozen barrier by all the small islands that bridged the gap between Victoria and King William Islands.
It was through this narrow, congested, ice-clogged channel that the two ships now had to navigate.
The men tried everything to get through the ice. Axes, sledgehammers, chisels and ice-saws were used to cleave, slice and cut through ice which was several inches – or even feet – thick, but their efforts yielded negligible results, with the ships barely crawling forwards. In desperation, the men even resorted to using more dangerous (but also more effective) mining techniques to get through the ice!
Using augers, the men drilled shafts into the ice, and packed them with gunpowder. The powder was tamped down and fuses were lit. The explosions fractured the ice, but not nearly enough to break it into manageable chunks, once again forcing the men to expend valuable calories in shifting the tons of ice by using hand-tools.
By May, 1847, the ice remained just as immovable as ever, and it was at this point that the crews started to lose hope. With explosives running low, no coal to fire the engines, and the men exhausted from the freezing cold and backbreaking labour, morale started to plummet among the crews.
What happened next is only known thanks to a message, stored in a metal tube and hidden in a cairn (a pile of stones built up to form a pillar) that was left by the Franklin crew. Dated the 28th of May, 1847, it stated that four days previously on the 24th, a party of eight men (two officers and six men) had left the winter encampment on King William Island, and were attempting to explore and map the island. The note concluded “All Well”.
1848: The Fourth Year
Whatever hopes the Franklin crew might’ve had appeared to vanish quickly. Barely two weeks after the note in the cairn had been written, Captain Sir John Franklin died, on the 11th of June, 1847. With the ice refusing to thaw for a second year in a row, the men remained trapped on King William Island throughout 1847 and into 1848, when the ice…AGAIN…refused to melt!
Getting desperate, the men decided that the time had come to look to their own salvation, and to abandon the mission entirely. On the 25th of April, 1848, the cairn was revisited, and the note extracted from within. An addendum was written on the few inches of remaining paper, stating that Franklin had died, and that the crew were abandoning their ships to the Arctic pack-ice. By now, twenty-four men in total had died, Franklin among them. From the 134 who had left England, minus the 5 invalided home from Greenland, 129 remained; minus Franklin, that was 128; and minus the 23 others, 105 surviving crew.
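For those keeping score, the tally works out like this – a quick back-of-the-envelope check of the figures quoted above, written as a scrap of Python purely for illustration:

```python
# The Franklin Expedition's dwindling numbers, per the figures quoted above.
left_england = 134   # total crew aboard Erebus and Terror, May 1845
sent_home    = 5     # men invalided home from Greenland
dead_by_1848 = 24    # deaths recorded in the cairn note, Franklin among them

print(left_england - sent_home - dead_by_1848)   # -> 105 men left for the march south
```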
Exactly what the twenty-four men (of whom Franklin was one) had died of is not recorded, although later examination of the bodies revealed a mix of scurvy, tuberculosis, pneumonia, hypothermia, and lead poisoning.
Deciding that it was safer to leave the ships and head south to find civilisation, the crews took the drastic step of lowering the ships’ lifeboats onto the ice. Packing the lifeboats with all the available food, tools and other equipment that they might possibly need, the remaining 100-odd men, after consulting a few charts, made for Back River, 250 miles away, to the south. They reasoned that, if they could reach the river, then they could sail the boats south, and find help.
And so, on the 26th of April, 1848, pulling lifeboats and sledges packed with food and materiel, the 105 surviving crew started off on the journey that they hoped, would lead to their own salvation. Leaving the ships behind, they headed to King William Island, and started the agonising, freezing, painful and exhausting trek south.
With little water, freezing temperatures, snowstorms, dwindling provisions of increasingly questionable edibility, and suffering from everything from frostbite to scurvy, lead-poisoning and pneumonia, the men headed off into the frigid arctic wastes towards Back River, never to be seen again.
The Franklin Rescue Missions
Franklin’s crew had been forewarned that they should expect to spend at least two years, if not three, in the freezing north of the Canadian archipelago, and that they would likely not be home again for many years. Everybody knew this. Franklin knew it; Crozier, his second-in-command, knew it; the Admiralty knew it; and Lady Franklin, Sir John’s wife, also knew it.
It was for this reason that two whole years passed before any great concern was raised about what might be happening to the Franklin crew. In 1847, Lady Jane Franklin started getting worried, and began gently pressuring the Admiralty to send out a search party to try and find her husband.
The Admiralty, however…decided not to. They saw no need. After all, the Franklin Expedition was expected to be gone for up to three years! There was surely no need to panic! Not yet, anyway. However, not everybody shared the Admiralty’s confidence in the Franklin crew.
One of the men who didn’t was a certain fiction author and journalist – a close, personal friend of Lady Franklin – a man named Charles Dickens. Using his journal, Household Words, Dickens and Lady Franklin roused public support for a rescue mission, and between 1848 and 1858, nearly four dozen search-and-rescue missions were launched to try and find the lost expedition – with Lady Franklin personally sponsoring…wait for it…SIX different expeditions to try and find her husband, or else to discover his fate!
So…what happened to Sir John Franklin?
Franklin’s Lost Expedition
Exactly what happened to Franklin’s expedition has been a mystery for over 170 years. Nobody knows all the facts, and nobody knows all the truths. What is known has been gleaned from the few scant documents and relics that could be found, and from whatever eyewitness accounts the search-and-rescue teams could gather from the local Inuit. So, what did happen to the Franklin crew?
What follows is the most widely-accepted, and likely the most accurate, timeline of events.
April 26th, 1848. After two winters and two summers trapped in the ice, the Franklin crew decide to abandon their vessels, pack what supplies they can carry onto sledges and lifeboats, and trek south to Back River, to try and save their own skins.
The 105 remaining men drag their supplies onto King William Island, and head due south. It is freezing cold, and the going is impossibly hard. There are no trees, no grass, no bushes…no vegetation of any kind. Just freezing wind, and scattered limestone shale all over the place. The exhausted, starving men struggle to heave the massive lifeboats along, stopping every few hours for rest and food, or to try and make camp.
This is what we know, according to the surviving written records. What happened next was gleaned from testimony taken from the local Inuit living in the area.
Unable to make it to the river, the men return to the ships, deciding that it is safer to stay aboard them rather than risk their lives out in the open. In 1849, when they feel stronger, they start out again in smaller groups, heading south once more.
With rations almost exhausted, the crew learn how to hunt seals and caribou to survive. Where possible, the Inuit assist them, either in hunting, or in butchering their kills. Each party thanks the other, using gifts of meat to repay each other’s kindnesses. The men learn how to cook their kills by starting fires using seal-blubber for fuel.
Slowly, parties of men of greater and lesser numbers, start to leave the Erebus to try and once again, make the perilous trek south.
During the winter of 1849-1850, the Inuit witness the crew performing a military-style burial ceremony. It is believed to have been the funeral of Franklin’s second-in-command – Captain Francis Rawdon Crozier, officer in command of the Terror.
After this, more Inuit witness more groups of men still trying to head south. One group of up to forty men is seen dragging a boat behind them. The Inuit later speak of coming across a campsite littered with dead bodies, wracked by starvation, cold and disease. Examination of the bodies…or what is left of them, anyway…raises the bleak prospect of cannibalism…a fate later confirmed by proper autopsies.
By the summer of 1850, the ice finally thaws. The remaining crew try to get the Erebus moving again but, severely weakened and ill, they almost all succumb to starvation and disease. Inuit recall boarding the ship to find the men dead in their cabins.
After this, in 1851, the Inuit locals report the existence of four more men. Accompanied by a dog, they head west. Who these men are, where they ended up, and what became of them is unknown. These men – whoever they are, and whatever became of them – are believed to be the last survivors of the Franklin Expedition.
These details – pieced together from written records, eyewitness testimony from the Inuit, and relics and evidence recovered during the many searches for the Franklin crew – are all that is conclusively known about the fate of the men. Between buried bodies, human remains, and a single ship’s lifeboat with two corpses inside it, there was precious little to go on, and by the end of the 1850s, it was conclusively established that the Franklin Expedition – widely believed to be the most well-prepared, well-stocked, most technologically-advanced polar expedition ever assembled – had been a horrifying, abject and abysmal failure, one which tested mankind’s resolve and limits so far beyond their breaking-points that polite Victorian-era society scarcely dared to believe it.
Why did the Franklin Expedition Fail?
The loss of the Franklin Expedition is one of the great tragic mysteries of the world, up there with the abandonment of the Mary Celeste, the disappearance of the Roanoke Colony, and the loss of the S.S. Waratah.
What happened? How did it happen? Why? These were the questions that Lady Franklin, the Admiralty, and millions of people all over the world demanded answers to, when the worst was finally revealed in the years following the exhaustive search-and-rescue efforts of the 1850s.
So, what exactly went wrong?
The Ships
On the surface, the Erebus and the Terror looked like the ideal vessels for the Franklin Expedition. They were robust, well-built warships which had already proved themselves in arctic exploration well before they were selected as the vessels which would convey the Franklin crews to glory! The ships had been strengthened and reinforced with iron sheeting, had had central heating and steam-engines with propellers installed, and comfortable quarters prepared for the men. So, what went wrong?
Before they were used to convey Franklin and his men into the pages of history, the Erebus and the Terror had been used by Sir James C. Ross, during his explorations of Antarctica. And before that, the two vessels had been bomb ships! Bomb ships were a type of warship designed to fire mortar-rounds, instead of the conventional cannon-shot that most sailing warships would’ve used. They were literally used to bombard the enemy – hence ‘bomb ships’.
Because of this, they had to be built to be very stable, to withstand the powerful recoil of the mortars going off on their decks. This meant that they were heavy, with low centres of gravity, to reduce the risk of them capsizing and rolling over from the recoil. This is great in battle, and even great when you’re sailing through the depths of the Southern Ocean…but it’s useless up in the Canadian archipelago! These heavy, ungainly ships, with their deep drafts, were unsuited to the shallow waters of the Arctic, especially when it came to weaving through the dozens of little islands immediately north of the Canadian mainland.
In the 1840s, the two ships had been converted to steam-propulsion, but crudely. Steamships did exist in the 1840s, but the engines installed on these two vessels had one great flaw – they weren’t maritime engines! Instead of purpose-built ship’s engines, the Erebus and the Terror were fitted out with small steam-locomotive engines of the kind used to power trains! Weighing up to 15 tons each, the engines were of a poor and inefficient design, nowhere near powerful enough to generate the horsepower required to move the ships forward at any appreciable speed, or for any great distance – and, on top of that, they weren’t given anywhere near enough coal! When the ships were provisioned in England, only 120 tons of coal each had been provided. At 10 tons a day, this would last just 12 days.
12 days’ worth of coal, for a voyage expected to last three years.
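If you’d like to check that grim bit of arithmetic yourself, it takes all of two lines – a trivial sketch in Python, using only the figures quoted above:

```python
# Endurance of each ship's coal bunker under continuous steaming.
coal_tons    = 120   # coal bunkered aboard each ship when provisioned in England
tons_per_day = 10    # coal burned per day with the engine running

print(coal_tons / tons_per_day)   # -> 12.0 days of steam, for a three-year voyage
```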
Because of this, the engines were almost never used. It took too long to heat them up, boil the water, create the steam and get the ships moving…and the engines weren’t nearly powerful enough, anyway. Why the Erebus and the Terror hadn’t been fitted out with proper maritime engines is unknown, but the end-result was the same: the engines, underpowered and rarely used, were nothing but dead weight at the bottom of the ships, making already-slow vessels even slower.
The Food
It’s long been believed that one of the main contributing factors to the disaster of the Franklin expedition was the food that the men ate during the voyage. The vast majority of it was canned. In theory, this was a good idea. Canned food is easy to store, easy to ration, takes up less space, is easier to cook and easier to serve.
But only if it’s done properly.
The cans of food and drink used on the Franklin expedition were poorly sealed, and their contents were not properly prepared. This led to spoilage, leakage, and loss of nutritional value. The result? The men started suffering from the one disease that all seamen lived in mortal terror of: scurvy, caused by a crippling lack of Vitamin C.
Scurvy had been the nightmare-fuel of sailors for hundreds of years before the Franklin expedition ever set out. In the 1700s, it was discovered that citrus juice could prevent scurvy, and to this end, the Royal Navy instituted a system whereby every sailor was given a daily ration of grog to keep scurvy at bay. Grog was rum watered down with lime or lemon juice. This sweet-and-sour cocktail – a mix of booze and vitamins – kept sailors hydrated, healthy, and happy.
The canned provisions might’ve done the job just as well, had they been prepared properly. But apart from the lack of Vitamin C, the canned food posed another great danger: botulism. When food (especially meat) goes bad, the bacterium Clostridium botulinum can multiply, producing a toxin which causes sight problems, speech problems and fatigue – all of which would have been exacerbated by the freezing cold of the Arctic.
Navigational Issues
Another huge problem for the crew, apart from the shortcomings of their ships and the deficiencies in their provisions, was the much more unmanageable problem of dealing with navigation.
To find your way around the world in the 1840s, you needed three pieces of equipment: a sextant, to tell you your latitude, or north-south position; a chronometer, or clock, to tell you your longitude, or east-west position; and a compass, to give you your direction, or ‘heading’. Compasses are magnetic. They will always point towards the Magnetic North Pole, which, as we know, is populated by a fat guy in a red suit who runs the world’s largest toy factory!
The problem with magnetic compasses is that the Magnetic North Pole moves. A lot. The churning of the Earth’s molten core means that the pole is constantly wandering – more than once throughout the Earth’s history, the magnetic poles have flipped completely, and then flipped back again! In most latitudes, this isn’t an issue – the pole is so far away that these slight variations are imperceptible, and a general northerly bearing is usually sufficient to guide a ship along its route. If a ship needs a more accurate fix – well! – that’s what the chronometer and sextant are for!
But the shifting pole, and the questionable compass readings it produces, get more and more extreme the further north or south you go. Right up in the Arctic Circle, practically on top of the Magnetic North Pole itself, compasses become completely disoriented, the needle trying to point down into the ground as much as along it. The result?
The compasses don’t point north. They don’t, because they can’t, and they can’t, because they point to Magnetic North, which, as I said, is constantly moving. This leads to the very real problem of your compasses being completely useless. You can’t rely on them at all to point the way, and can only navigate by your maps, the stars, and the position of the sun…if the sun deigns to rise, that is – in the Arctic Circle, that isn’t always guaranteed.
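For the technically curious, here’s roughly how that sun-based navigation worked, sketched in a few lines of Python. This is a bare-bones simplification of a real ‘noon sight’ – it ignores the corrections a period navigator would have applied (refraction, the height of the deck above the water, the equation of time), and the example numbers are invented – but it shows how a sextant and a chronometer could pin down a position with no compass at all:

```python
# A simplified "noon sight": fixing a position with a sextant and a
# chronometer, and no compass. A rough sketch only - it omits the many
# corrections a real navigator would apply.

def latitude_from_noon_sun(observed_altitude_deg, solar_declination_deg):
    """At local noon, with the sun due south of a northern-hemisphere
    observer: latitude = 90 - altitude + declination."""
    return 90.0 - observed_altitude_deg + solar_declination_deg

def longitude_west_from_chronometer(gmt_hours_at_local_noon):
    """The sun crosses the Greenwich meridian at roughly 12:00 GMT, and
    sweeps westwards at 15 degrees per hour - so every hour by which your
    local noon lags Greenwich noon puts you 15 degrees further west."""
    return (gmt_hours_at_local_noon - 12.0) * 15.0

# Invented example: the sun peaks 40 degrees above the horizon just as the
# chronometer reads 18:30 GMT, with the sun's declination at +20 degrees.
print(latitude_from_noon_sun(40.0, 20.0))         # -> 70.0 (degrees north)
print(longitude_west_from_chronometer(18.5))      # -> 97.5 (degrees west)
```

Run those invented numbers and you get roughly 70° north, 97.5° west – which, as it happens, is right about where King William Island sits.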
Along with faulty compass readings came the added strain of trying to navigate a seascape which had very few accurate charts. No complete maps of the Canadian Archipelago existed in the early 1800s, and for the first time since the search for Australia in the 1700s, mankind found itself quite literally sailing off the edge of the map.
This inability to rely on maps meant that every directional change the crew made was a literal and figurative shot in the dark. They had no idea what lay ahead, or what to expect. Instead of sailing west, they sailed south. Instead of sailing east, they sailed west. Instead of avoiding the ice, the Erebus and the Terror found themselves trapped in endless floes which refused to melt for years on end!
Captain Sir John Franklin
The last factor which spelled doom for the Franklin Expedition was, arguably, Sir John Franklin himself. While he was an arctic veteran, a famous explorer, a naval hero and man of letters, immensely popular with the British public and well-liked by his crew, Franklin had a number of shortcomings that made him less than ideal for the mission at hand.
Franklin’s unsuitability as expedition-leader is borne out by the fact that he wasn’t even the Admiralty’s first choice! Sir John Barrow, Second Secretary to the Admiralty and the man in charge of manning the expedition, had hoped to convince the elderly Sir John Ross to come out of retirement and head it. Ross was a naval officer and arctic explorer of note, but he was approaching old age, and had no desire to go to sea – especially on so risky a mission as this!
Barrow’s second choice was Sir James Clark Ross – Sir John’s equally famous and well-accomplished nephew, and another celebrated polar explorer! Unfortunately, Sir James had just gotten married, and, like his uncle, had no desire to go gallivanting off around the world at such short notice!…especially when he had far more interesting diversions waiting for him at home.
Barrow’s THIRD choice for commander was an Irishman named Francis Rawdon Crozier – another polar-exploration veteran of note. While Crozier was experienced, Barrow wanted an Englishman to head the expedition, and since his first two choices had bowed out and his third was not ideal, he finally settled on Sir John Franklin – Option #4!…Ouch!
At 59, Franklin was already approaching old age by Victorian standards. He was a naval officer and a polar explorer, and certainly had the intelligence to get himself admitted to the prestigious Royal Society. But although he was a popular hero, well-liked by the men under his command, whom he treated with kindness, consideration and respect, the fact of the matter was that Franklin was stubborn, hotheaded, took unnecessary risks, and didn’t always respond well to well-meaning advice.
All these faults led to his near-death in 1819, when he attempted an overland expedition from Canada to the Canadian Archipelago, to find the Northwest Passage by land. His foolhardiness and inattention led to eleven of his 20-man party dying of cold and starvation, and by the time his men had convinced him to abandon the mission and turn back, the survivors were close to death themselves. Franklin survived by literally cutting up and eating the leather uppers of his hiking boots! This humiliating end to what was supposed to be a glorious victory of exploration led to him being mercilessly lampooned as “the man who ate his own shoes”!
All these issues – the failings of the ships; the problems with the food, and with the nutrition and health of the crew; and the navigational challenges and uncertainties faced aboard the two ships – are what turned what was supposed to be the most well-equipped polar expedition in history into one of the greatest exploratory failures of the Victorian era.
In the end, the first successful maritime navigation of the Northwest Passage took place in 1906, under the command of Norwegian explorer Roald Amundsen. Unlike Franklin, Amundsen had the good sense to turn EAST when he reached King William Island instead of continuing south – and that made all the difference. By turning east, he was able to sail around the island and pop out the bottom. All the tiny islands north of him (through which the Franklin Expedition had attempted to sail) now worked in Amundsen’s favour, instead of against him, as they had against Franklin. The islands which formed the bottleneck of icebergs and field-ice that had so impeded the Franklin expedition kept the waterways past Victoria Island relatively ice-free, allowing Amundsen a clear passage westwards towards the Pacific Ocean!
What Happened to Erebus and Terror?
Everything I’ve written about thus far has just about covered the various fates of the crew – but what about the ships they left behind, the Erebus and the Terror? What happened to them?
Again, the only way to be sure of anything is to go by Inuit testimony. The ships were known to be still afloat, locked in the ice, throughout the late 1840s, but by the early 1850s they were starting to fall victim to the crushing ice which had, by now, surrounded them for years. The Terror succumbed first – carried south by the ice, the ship finally broke apart and sank off the southwest coast of King William Island, in an area now known as Terror Bay in honour of the lost ship.
The Erebus fared marginally better – the pack-ice had drawn the ship south along with her sister-ship, but instead of crushing her to matchsticks, the ice thawed enough for her to start moving again, although she did not get very far.
In 2014 and 2016, the wrecks of the Erebus and the Terror (in that order) were discovered by Canadian marine archaeologists. To protect their historical integrity and keep recreational divers away, the exact locations of the wrecks were (and still are) closely guarded secrets. The British Government, the official owner of the two shipwrecks, gifted them to the Canadian government, which later entrusted their safekeeping to the Inuit people, through whose territory the two ships had tried to sail all those years ago. Among the artifacts raised from the wrecks is the bell of the Erebus.
The President and the Expedition
The Lost Expedition of Sir John Franklin, and its noble quest to find the Northwest Passage, set out back in 1845, over 170 years ago. And yet, in the 21st century, one particularly poignant reminder of the expedition’s great peril remains with us still. It’s likely that you’ve seen it on TV. Several times, in fact. It’s likely that you’ve seen it in photographs, on the internet, on YouTube, in TV shows, and even in big-name Hollywood movies!
Even without its connection to the expedition, it would still be an immensely famous artifact; and yet this irreplaceable piece of history is quite literally overlooked every single day, with most people never realising – even in the slightest – what it actually is.
What is this artifact, you ask?
The desk of the president of the United States of America.
Gifted to President Rutherford B. Hayes in 1880 by Queen Victoria, as a token of goodwill between the United States and the United Kingdom, the desk – popularly known as the “Resolute Desk” – was made from the timbers of the British warship HMS Resolute when she was broken up in the 1870s. The desk is one of three commissioned by Queen Victoria when the ship was finally scrapped; the other two are the “Grinnell Desk”, and a smaller ladies’ writing-desk made for the Queen herself.
The Resolute was one of several ships which sailed to the Arctic in the 1850s to try and find Franklin’s Lost Expedition. Had it not been for much better planning, the crew of the Resolute could’ve joined the crews of the Erebus and the Terror in their icy fate! When the Resolute got stuck in the ice, her crew abandoned ship, and sought refuge on an accompanying vessel which sailed them back to safety. Fearing that their ship would, like the Franklin ships, be crushed to matchsticks by the tons of ice and sunk, they never expected to see her again. However, to everybody’s amazement, the ship broke free of the ice in the spring thaw, and drifted around the North Atlantic for years, before she was discovered by American whalers and sailed back to America.
The ship was restored to full working condition, with replacement rigging, sails and flags, and was returned to the Royal Navy as a gesture of goodwill between the United States and the United Kingdom. Years later, the gesture was reciprocated in the giving of the Resolute Desk – which to this day bears a plaque detailing the ship’s role in the search for the Franklin Expedition.
Want to Read More?
For the sake of brevity, I haven’t covered everything about the Franklin Expedition in detail. If you want to find out more, here are the sources I used…
With the news that there’s a Downton Abbey MOVIE in the works, with most of the original cast teaming up all over again to make a big splash on the big screen (and just in time, too. I mean, Maggie Smith ain’t gettin’ any younger, here…), I’m sure that a lot of period drama buffs will be dusting off their DVD collections, or the hard-drives which contain the episodes of ‘Downton Abbey’, and will be sitting back to enjoy all that high-class British drama once again – boning up on everything that happened in the series since the pilot episode, set on April 15th, 1912.
Downton Abbey has singlehandedly been credited with a rise of interest in things like classic formal attire, household servants, early 20th century history – and that most high-class of all high-class things: owning a grand country estate and a huge manor house which is centuries old! Indeed, ‘grand country house living’ is something that people have been fascinated by for decades, probably because it’s where all the major action happens in all those old love-stories and drama series – and of course, who could forget the classic ‘country house mystery’ genre (“It was Colonel Mustard in the billiard room with the candlestick!”).
In this posting, I’ll be looking at the country house way of life. Where it came from, what it was like, how it survived, and finally – what happened to make that way of life disappear almost entirely from the face of the earth in the space of a few short years. So, let’s begin…
Ham House, near London, dates back to 1610, and is among the earliest examples of what we would call a ‘grand country house’ today.
All around the world, throughout history, one of the biggest status symbols there has ever been is the grand country house estate. They existed in Canada, America, all throughout Europe, in Asia, and even as far away as Australia.
But when most people think of grand country house estates, they almost invariably imagine the great estates and grand houses built in England, Scotland, Wales and Ireland during the 17th, 18th and 19th centuries. When people picture the pinnacle of high-fashion, high-class, ultra-rich living, a grand country estate is almost always one of the prerequisites.
That said, most people – even most rich people – don’t live this way anymore. Why not? Where did this style of living come from, how did it sustain itself, and finally – how did it collapse, to become a forgotten, romanticised remnant of history, something to be elegantly recreated in TV dramas and movies such as Upstairs Downstairs, Downton Abbey, The Secret Garden, Gosford Park and the stories of Jane Austen?
Before Grand Houses – Castles and Manors
The first grand houses were not really houses as we know them today. They were castles! Castles as we imagine them now originated in France in the early Middle Ages. The earliest were made of wood and earthworks, but in the 1000s, 1100s and 1200s, large, elaborate castles built of stone began to appear, with impressive defenses like moats, ditches, drawbridges, gatehouses, corbels, jetties, battlements and crenelations. One such example is the Tower of London.
Castles were not just houses, though. They served multiple purposes. They could be houses, sure. But they were also usually centers of government, storehouses, military barracks, vaults, prisons and much more besides. Nevertheless, they were the original ‘grand country houses’.
By the 1500s and 1600s, with the rise of cannons, muskets and pistols, and the decline of traditional European feudalism, the castles of old started changing, too. They became less like large, multipurpose residences, and more like pure military fortresses and strongholds. Now, castles existed chiefly for defense, and any thought of turning the structure into a home was generally considered secondary (think of Walmer Castle, once home to the Duke of Wellington himself!).
It was at this point that the ‘castle’ started splitting apart into three distinct entities: The palace, the fortress, and the manor house.
The Fortress
All castles are fortresses, but not all fortresses are castles. A fortress, in a nutshell, is a fortified or strengthened structure designed as a military barracks and stronghold – from the Latin word ‘fortis’, meaning ‘strong’.
That said, some fortresses were still called ‘castles’, likely out of habit. Castles built in the 1500s by rulers such as Henry VIII were still called ‘castles’ even though they bore very little resemblance in design or appearance to castles of the Middle Ages. 16th century ‘castles’ were lower, more angular and were designed to house musketeers and heavy artillery, not archers, crossbowmen, knights and men-at-arms.
The Palace
As society stabilised, the need to house the country’s ruler in a fortified castle or stronghold lessened. This gave rise to another structure – still grand and imposing, but designed more as a statement of wealth, power and opulence, rather than as one of protection and military might – the palace! Structures like Hampton Court Palace, Whitehall Palace, the Palace of Westminster, the Palace of Versailles, the Winter Palace and Summer Palace, and the Palace of the Forbidden City reflect this. They’re grand and protected, but are built more as showpieces rather than as military strongholds.
The Manor House
Last but not least comes the manor house.
As the need for castles disappeared, the first ‘great houses’ built by the nobility or the military aristocracy started to appear. These were called ‘manor houses’. They were built as homes rather than as military fortresses, and were designed chiefly – like palaces – for comfort and good living. Yes, some still had a nod to their militaristic pasts, such as moats, battlements, bridged entryways and gatehouses – but these were now anachronistic design-features, meant to make the building look more impressive and flashy, rather than serving any real defensive function. The battlements built on the tops of 16th and 17th century manor-houses were small and thin – not designed as shields for defending soldiers, as the battlements on castles centuries before had been.
The Rise of the Manor House
As fears of endemic warfare died away in the 17th and early 18th centuries, the aristocracy started producing grander and grander country houses. With no wars to blow money on, the wealthy started blowing money on flashy homes instead – homes with huge, double-hung sash windows full of glass, huge doors, high ceilings, a fireplace in every room, elaborate kitchens to produce gargantuan feasts, ballrooms, living rooms, music rooms, lounges, bedroom suites and enfilades.
Hardwick Hall in Derbyshire. Built in the 1590s, it represents a new type of grand house that was being built at the time, very different from the castles of the Middle Ages. Meant for comfort and good living, rather than defense and security, it earned the nickname ‘Hardwick Hall, more Glass than Wall’, due to its gigantic windows.
An enfilade, by the way, is a long series of rooms, one after the other, with their doorways aligned so that you can see from one end of the house to the other. In later times, these would slowly be closed off, and the leftover corridor became known as a Long Gallery, or just a ‘gallery’. With so much wall-space, people would hang their pictures, sketches, portraits and paintings there. This is why we view art in a ‘gallery’ to this day.
The North Enfilade at Blenheim Palace.
Along with the gallery and everything else came the inclusion of a chamber to which people could withdraw from the more public rooms, for private parties and quiet company – the ‘withdrawing room’. Today, we call them ‘drawing rooms’.
It was during this time that bedrooms and bedroom suites started becoming a thing. Instead of sharing rooms (or even sharing BEDS, which was very common in those days!), you now had your own bedroom! And if you were really wealthy, then your room would also have a ‘closet’.
The ‘closet’ was a small chamber next to your bedroom. It served a function similar to a study or sitting-room – a private space in which to do personal things like pray, write, read or relax. Since one’s most intimate activities and deepest feelings were expressed within a closet, it became associated with secrecy and personal thoughts. That’s why we say that someone who has revealed their sexuality – something extremely personal – to the world has ‘come out of the closet’.
Inside a Manor House
Along with bedrooms, a gallery, dining room, kitchen and large reception rooms, early manor houses had a few rooms which we don’t have today, or whose functions have changed significantly.
One such room is the pantry. Yes, in times past, a pantry was an actual room, not just a cupboard full of instant noodles, coffee and tea. The pantry (from the Latin ‘panis’ – bread) was the room where all things associated with bread and baking were stored, including mixing-bowls, kneading-boards, dough-troughs, forms, molds and other baking implements, along with the baked goods themselves, which were kept there, cool and dry, away from the moisture which would cause mold.
On top of that came a room which has disappeared entirely – the still room.
The still room was the chamber where you distilled (hence the name) essential oils, cordials, alcoholic beverages and medicines. At a time when country houses had to be much more self-sufficient than they are today, a chamber for making your own drinks and medicines was important. As it became easier to buy these things than to make them, still rooms disappeared: by the start of the Victorian era they had been absorbed into the kitchens of older houses, and were left out of newer ones completely.
Another room which used to exist in old houses was something called a ‘buttery’.
No, the buttery was not where you stored butter and cream and jam (delicious as they are…) – no. A buttery was where you stored…butts!
Okay, stop giggling.
Butts are kegs…barrels…casks!
Casks of beer, casks of wine, kegs of rum and so on. Basically, it’s where you put drinks. Now, obviously, drinks have to be kept cool, so the buttery was almost always a basement room, usually under the kitchen. The person in charge of looking after the buttery was the…butler, and originally, the man’s job was to maintain and serve the household’s stocks of beverages. In time, the butler took on more and more responsibilities, until by the 1700s and 1800s, he had become the chief of ‘below-stairs’ life, organising and rostering all the other servants.
The Heyday of the Country House (1500s – 1700s)
The country house as we know it – or even as we imagine it – started being a thing as early as the 1500s. Between then and now, it went through many changes and morphed in and out of different forms: first fortified manors, then graceful mansions, then sprawling estates!
Where, you might ask, did they get the money to build these houses?
Make no mistake, a country house was expensive to build, and even more expensive to maintain (but more about that, later).
Highclere Castle, the setting for the hit period TV series, ‘Downton Abbey’. Highclere has featured in many TV shows over the years, including numerous episodes of the 1990s ‘Jeeves & Wooster’ series, and at least one episode of ‘Miss Marple’.
Many of the people who owned country houses also owned vast, vast, VAST tracts of land, usually passed down father-to-son for countless generations, dating all the way back to the Middle Ages. Charging rent to the farmers who wished to use this land to grow crops, raise livestock, or otherwise make a living there was the chief form of income for the landed aristocracy.
The same applied to everything else – any watermills, flour-mills, brick or tile kilns, any ovens or bakeries, and any villages, taverns or inns built on the extensive lands owned by the local landlord all had to pay rent or taxes to the lord of the manor. So long as he was smart, the lord of the manor could live off this income more than a little comfortably, without ever having to lift a finger…except to count his money, perhaps. This is where the whole idea of the ‘idle rich’ – the notion that the aristocracy didn’t have to work for their money – came in. It’s touched on in “Downton Abbey”, where Miss O’Brien says that “gentlemen don’t work, not real gentlemen!“.
This system lasted for years, decades, centuries, passed down father to son over and over again. In this way, landed families could amass GIGANTIC fortunes, and since most of this money wasn’t taxed, they could do whatever they liked with it – and most of them blew their fortunes on building bigger, grander, more opulent houses, and on amassing huge collections of silverware, antiques, furniture, paintings and foreign curiosities. If the lord of the manor had a day-job (as a government minister, say, or an army officer, or a naval captain of skill and fame), then he could swell his coffers even MORE by earning a salary, or winning prize-money in battle.
Laws favourable to the aristocracy kept them in power and in money, and for centuries they held a near-monopoly on much of the land, enabling them to milk it for all it was worth.
Keeping it in the Family
One reason why the aristocracy held onto their homes for so long and were able to maintain these lavish lifestyles for generations was because of the peculiarities of English law, which stipulated that (unless stated otherwise), country house estates and their contents always passed from the master of the house to his eldest son and heir (or the second-eldest, if the first had died and left no heirs of his own).
This arrangement sat at the core of one of the key plot-elements that ran through the award-winning TV series “Downton Abbey“. As Maggie Smith’s character put it, “The entail must be smashed!“.
OK. Point taken.
What the hell is an entail?
In its simplest terms, an entail was a legal device which regulated inheritance in Britain. An entail was a form of trust, whereby one party (say, a parent) sets something aside (say, the house and estate) in the hands of a second party (say, a lawyer), to give to a third party (say, the heir to the estate) at a particular time (say, when that parent dies).
Basically, the entail stipulated that houses HAD to be passed down, father to son, father to son, generation after generation. Or if not father to son, then at least from the homeowner to his closest living male relative (be he a cousin, a nephew, or a brother, and so on).
Passing land and property down like this through the generations is how you end up with massive country houses filled with all kinds of expensive treasures – the properties were never ALLOWED to be sold or gifted to anyone outside the family; it was basically illegal to do so. In Downton Abbey, because Lady Mary isn’t a man, she can’t inherit the house and estate, or the money that goes with them – and it’s the complications arising from this that drive the series along.
The Country House Enters the Modern Era (1700-1900)
The 1700s and early 1800s were the era of the great expansion of country houses. This is when the aristocracy built grand houses, and expanded them into even grander ones. Money was flowing in from trade and commerce, rent and taxes, and they were all living the good life. But something happened in the 1700s that started to force a change.
The Industrial Revolution.
Prior to the 1700s, most people lived in small towns or villages, or out in the country. Most people were farmers, artisans or tradesmen. The pace of life had barely changed in centuries, because there was nothing around to change it. But when the first steam-engines, canals, and later, train-lines started being developed, life would never be the same again. Suddenly, it was possible to work faster, produce more, earn more, do more with what you had! And this had a huge impact on the country house way of living.
A great example of a grand country manor built without any regard for expense is Manderston House in Scotland, constructed at the start of the 20th century. When architect John Kinross asked the owner, Sir James Miller, 2nd Baronet of Manderston, what the construction-budget would be, he was simply told that “it doesn’t matter”, and to just get on with building it.
With the rise of factories and warehouses, better wages and a more reliable income than could be had from farming or rearing livestock, peasants, tenant farmers and villagers fled the countryside jobs their families had held for centuries, and moved to cities like London, New York, Paris and Berlin, to work in jobs with better pay, better conditions and more security.
Suddenly, there weren’t so many people working the land anymore.
Fewer people working the land meant fewer people that the local landlord could tax.
This meant that for the first time in centuries, the cornerstone of aristocratic wealth – control of the land, and taxing the people who lived on it – was starting to crumble. At the same time, a new landed gentry started to rise up to challenge the old aristocracy. They had no titles, no fancy lineages going back to the Middle Ages, no flashy family names or noble birth – but the one thing they did have was MONEY.
And LOTS OF IT. These were the industrialists. Factory-owners, mill-owners, railway entrepreneurs, shipping magnates, import-export moguls, bankers, manufacturers and wheeler-dealers. And they wanted a taste of what previously had been the preserve of the aristocracy – a big flashy house out in the country, away from the smog and dust and soot of the big cities. And so, they started building.
And building.
AND BUILDING.
The 1700s and 1800s saw dozens of country houses being raised from the ground up in Canada, America, Europe, Britain and Australia. If the way to show you’d arrived at the top echelons of society was to have a flashy house surrounded by fields, then the nouveau riche of the industrial age were going to make damn sure that they had the biggest and flashiest houses possible. Some even started competing to see who could have the biggest, grandest, most outlandish homesteads – and much like how the ultra-rich now compete over yachts, jets and cars, 300 years ago, they competed over gardens, dining halls, gilded entryways, and grand ballrooms for those swanky, all-night parties.
The Rise of Industry
As industry rose and rose, and the new industrial gentry started buying up land and building grand houses on it, the old aristocracy’s position weakened. By the 1830s and 40s, steamships had become a thing. Now, it was possible to buy a ticket, take a train to the docks, board a ship, and sail safely across the Atlantic to the New World – all in a couple of weeks – a journey which would’ve taken MONTHS by horse and cart and sailing ship! Since people could now move, and seek newer and better opportunities, they were no longer tied to the land. As travel and trade rose, the grip of the old country house owners started to slip.
One huge blow was dealt by the massive farming slump that swept Britain and Europe in the 1870s. America, with its huge tracts of land, railway systems and steamships, could grow, harvest, and ship grain, flour, wheat, barley and other foodstuffs to Europe much faster and more cheaply than the Europeans could produce them on their own. As a result, farming in Europe (and especially in Britain) started to collapse – in England, the bottom basically fell out of the agricultural market. Wheat prices in Britain disintegrated, and farmers fled their farms, or else switched to livestock instead of crops.
And what did the aristocracy rely on for their money? Rent from farmers. If there weren’t any farmers, there wasn’t any rent. If there was no rent to collect, there was no money coming in! And this had a massive impact on country house living.
Maintaining a Country House Estate
Country houses are huge structures: dozens of bedrooms, loads of reception rooms, servants’ quarters, laundries, kitchens, cellars, basements, guestrooms, stables, carriage-houses…remember that they used to have to be self-sufficient, so they had to contain everything needed to support themselves. And in the 1700s, when the money was flowing, noblemen and noblewomen built bigger and bigger houses, expanding and expanding, renovating and rebuilding over and over again.
This is fine – great, even – when you have a steady income coming in from the land that you can charge rent on, but what happens when that disappears?
The problem was that these country houses were massive money-pits. It took thousands of pounds to run them every single year. Cleaning, heating, water, food, drink…and that doesn’t include maintenance – water-pipes, flooring, roofing, sweeping the chimneys, repairing the windows, fixing the gutters, repairing the masonry, and keeping up the gardens.
And we haven’t even begun to look at the wages for the indoor servants, who in some houses could number in the dozens! This was made even MORE complicated by the fact that, from the 1700s to the 1850s, Britain actually had a servant tax.
Yes, that’s right. A SERVANT TAX.
To be specific – a tax on male servants.
See, men are really useful – they can serve as stable-boys, footmen, coachmen, gardeners, butlers, valets, hallboys…but they can also serve as soldiers, sailors and military officers. In times of war (such as the American Revolution in the 1770s, the French Revolutionary and Napoleonic Wars of the 1790s-1810s, and European conflicts like the Crimean War in the 1850s), the country needed soldiers and sailors. And if able-bodied men were busy serving you, instead of fighting for king and country, then you, as the householder, were expected to recompense the government for the loss in manpower – by paying a tax on every single male servant in your employ!
Add that to the costs of heating, lighting, water, food, drink, wages, maintenance…see how expensive this is?
And that’s provided you’re not also trying to keep up with the Joneses by living like a billionaire every day of your life! By the second half of the 1800s, British aristocrats were struggling to maintain their lifestyles. Rising costs, falling income, and the sheer size of their houses caused a lot of them to just give up!
Many were now cursing their ancestors for blowing millions of pounds on big, flashy extensions and expansions which were now far too expensive to maintain properly. Most aristocrats maintained at least two houses: the country house (the big flashy one), and the townhouse – a smaller, more modest, usually terraced Georgian or Victorian house, often in London, which served as the family’s base of operations during the London social season in the summer months. As country houses grew more and more expensive to look after, many families simply upped sticks and moved into their townhouses fulltime instead.
The Dollar Princesses
The European and British industrialist classes didn’t have to worry about money. They’d built their fortunes from the ground up, and had cash flowing in from factories and mills, shipping lines, railroad companies and mercantile ventures. As such, they could fuel their luxurious country house lifestyles much more easily than the old aristocracy, who – too proud, or unable, to work for a living – struggled on, running their houses on dwindling inheritances and the shrinking income of their estates. But just as all seemed lost, salvation was at hand: as Churchill would later put it, “in God’s good time, the New World, with all its power and might, steps forth to the rescue and the liberation of the old“.
For the British aristocracy, liberation from their growing financial nightmares came in the form of the ‘dollar princesses’.
The term ‘dollar princess’ comes from the late Victorian era. It referred to young American or Canadian heiresses of marriageable age, who came from the social elite and the upper professional classes of North American society – the daughters of families like the Vanderbilts, the Carnegies, the Rockefellers and the Morgans: the big robber-baron clans who had amassed gigantic fortunes between the 1830s and the early 1900s.
In most cases, rich American fathers wanted their daughters to marry into respectable, high-society families. Naturally, you don’t get much more high-society than British nobility, and so wealthy American fathers and mothers started looking across the Atlantic for potential marriage-partners for their little baby girls. At the same time in England, impoverished English noble heirs (remember that houses and estates ALWAYS passed down the MALE line of inheritance) were looking for potential wives loaded with cash, who could dig them out of their present financial disasters.
To kill two birds with one stone, the logical thing was for American heiresses to marry English heirs. And they did. In their droves! Consuelo Vanderbilt, heiress to the Vanderbilt fortune, married the Duke of Marlborough, and a Brooklyn-born beauty named Jennie Jerome married a certain Lord Randolph Churchill.
Yes. THAT Churchill.
If not for the dollar princesses, Winston Churchill would have never been born.
Working in a Country House
One of the reasons why English country houses were so expensive to run was simply the sheer amount of manpower required to keep them operating on a daily basis. Country houses were enormous structures, and without modern technology, it took a small army of servants, inside and outside, just to keep them functioning smoothly – never mind during big events like holidays, family birthdays, wedding anniversaries and Christmas!
The servants on Downton Abbey.
A typical household could have a dozen or more staff, including the butler, the housekeeper, a chef or cook, at least one kitchen-maid, two or three housemaids, two or more footmen, scullions or scullery-maids, and hall-boys, who did double- or even triple-duty as boot-boys and pantry-boys (basically, hall-boys did all the heavy manual labour below stairs). On top of that, you had valets, lady’s maids, and – if there were young children – governesses or nannies as well.
“You rang, m’lady?” – Many 18th, 19th, and early 20th century grand manor houses (and even many townhouses built in the same era) were equipped with extensive service-bell systems: networks of wires or cables, pulleys, levers, pivots and springs connecting a bell in the servants’ hall to a bell-pull in a specific room of the house. The wires and pulleys ran up the walls and along the ceilings (usually hidden behind them), in and out of rooms, and up and down stairs. Usually the bells were grouped together on a ‘bell-board’, where each bell was tagged with the room it served – the earliest form of household ‘intercom’. In the early 1900s, some of the old cable-and-pulley networks were replaced by new electric bells, but in houses which couldn’t be bothered (or couldn’t afford) to replace the old systems, the traditional cables and pulleys remained in operation.
And that’s just the inside staff! Tack on a coachman or chauffeur, stable-boys and gardeners, and you’re looking at a staff of 15 or 20 people at least, to serve a family of maybe six or eight members. By the early 20th century, domestic service (or being ‘in service’, as it was called) was THE largest single employer in Britain.
The Country House in the 20th Century
By the early 1900s, the country house was just about ticking over. Money from dollar princesses, wiser investments and careful money management had just about staved off the wrecking ball, but not for everyone.
Remember how I said that in the 1600s and 1700s and early 1800s, country houses were being built bigger and grander and more luxurious every passing week?
By the 1880s and 1900s, such grandeur was considered excessive…and expensive! It was during this time that some grand country houses started being demolished. Families either moved into a smaller villa on the estate, or gave up country living altogether, and moved to their townhouses in Belgravia or Mayfair.
Nevertheless, country house living was still a thing in the early 20th century. With money to burn, some houses were modernised. Plumbed bathrooms with hot water were installed, electrical wiring was set into the walls, gas fittings and oil-lamps were replaced by switches, wall-sconces and pendant lights. In some houses, even telephones were installed. Coachmen, stable-boys and the park drag coach soon got the boot, to be replaced by a chauffeur, mechanic, and the new Rolls Royce open touring-car.
1910s Rolls Royce Silver Ghost Touring Car. One of the most sought-after automobiles of the early 20th century.
The early 1900s was rapidly becoming the end of an era, though. As noted historian Ruth Goodman said, the Edwardian era was “the last great blast of country house living“.
The country house lifestyle was living on borrowed time by the Edwardian era. Rising taxation, and then the Great War in 1914, kicked it in the knees, and it started to stumble. Servants left to find better and more stable work in shops, offices, factories, on the railroads and in other industries. Domestic service was becoming much less appealing as a career by the 1900s.
Part of the problem was the extremely – EXTREMELY – long work-hours: 16-18 hour workdays, almost every day of the week, were normal for most servants, and time off was very, very limited. On top of that, wages just could not compare with what someone might earn working in an office or a shop, or running their own business, where there was more flexibility in hours and time off. When the war came, thousands of male servants chucked it in and rushed off to enlist – and even those who survived the war mostly never came back!
The kitchen at The Breakers, one of the many grand Belle Epoque mansions constructed for, and lived in by, the stupendously wealthy Vanderbilt family. Here, meals for the entire household – upstairs, for the family, and downstairs, for the servants – would’ve been produced at least three times a day, every day of the year.
The interwar boom known as the Roaring Twenties kept the country house chugging along for another decade or so, but the writing was on the wall. High postwar taxation, and a significant reduction in the manpower available to run a country house estate – even with modern conveniences – meant that they were getting more and more expensive to operate. As Lady Grantham’s mother said, “These houses were built for another age“. And she wasn’t kidding!
Rear view of ‘The Breakers’, the Vanderbilt family’s mansion at Newport, in Rhode Island, now owned by the Preservation Society of Newport County.
The Crash of 1929 hit a lot of country house owners hard. With heirs lost in the Great War, and now family fortunes on the line (once again) because of the coming of the Great Depression, it was just getting harder, and harder, and harder to enjoy – let alone maintain – the country house lifestyle. It was during this time that many country house owners sold up, packed up, and moved out. Houses were demolished, turned into schools, office-buildings, hospitals and hotels. But worse was yet to come.
The End of the Country House Lifestyle
The final nail in the coffin for the country house lifestyle was the Second World War. Rationing, bombing, evacuations, lack of funds, lack of manpower, and rising taxation after the war meant that the country house way of living was just impossible to maintain, or continue.
By the 1930s and 40s, and certainly by the 1950s, the whole idea of living in a grand country house, waited on by an army of servants, was rapidly coming to be seen as outdated and old-fashioned. People just didn’t live like that anymore, didn’t work like that anymore! As the years rolled by, country house living was seen as a relic, a grand leftover from the lavish excesses of the Victorian age, in no way applicable to people living modern lives in the postwar period.
Demolished almost in its entirety, the palatial Trentham Hall was one of the first grand English country houses to be pulled down, in the early 1900s. This painting dates to 1880, when the house was already in decline.
Finding domestic servants to run the houses was almost impossible now, and unless you were stinking rich – and could remain stinking rich for the rest of your life, come what may – paying servants was getting harder and harder.
The plight of many old English country houses was summed up in the famous Noël Coward song “The Stately Homes of England“. Although meant to be comical, the song graphically outlines just how desperate some country house owners were to keep their old family estates together, selling off absolutely anything: “with assistance from the Jews, we’ve been able to dispose of rows, and rows, and rows of Gainsboroughs, and Lawrences, some sporting prints of Aunt Florence’s, some of which, were rather rude!”, and, “although the Van Dykes have to go, and we’ve pawned the Bechstein grand, we’ll stand by the stately homes of England!”
It was during the 1920s, 30s, 40s and 50s that a lot of the grandest country houses were consigned to history. Demolished, repurposed, sold off, or simply abandoned, it was up to national historic trusts, social history groups and historical preservation societies to step up to the plate.
In England, the National Trust; in Australia, the National Trust of Australia; and in America, entities such as the Preservation Society of Newport County and the National Register of Historic Places, all stepped in to preserve and protect grand country houses. In England, Scotland and Wales, surviving country houses are mostly looked after by the National Trust (usually gifted to the Trust by families who no longer wished to live there). In America, the Preservation Society of Newport County protects and preserves the grand villas, or ‘cottages’ (as they were euphemistically called, so as not to seem too ostentatious…), which the wealthy of the Gilded Age and the Belle Epoque built along the Rhode Island coastline.
The Country House Lifestyle Today
Although the upstairs-downstairs, masters-and-servants lifestyle of yesteryear is now little more than a distant memory, what is life like inside grand country houses today? Do any of them still exist?
Actually, yes, they do! A number of grand country houses (both in the UK and abroad) are still lived in and operated as private homes – some even by their original families. However, as was the case a hundred-odd years ago, living in an old, grand country house is still a major hassle. It was a hassle 100 years ago, when these houses were 100 or 200 years old…now it’s even MORE of a hassle, when some of them are 300 or 400 years old! The biggest hassle by far is the sheer upkeep required. Guttering, roofing, windows, heating, plumbing…getting effective rewiring done on a gigantic house is hard enough – imagine how much harder it is when the house was built 300 years ago!
As an example – Buckingham Palace recently underwent rewiring, and miles and miles and MILES of antique gutta-percha and cloth-covered electrical wiring were stripped out, to be replaced by safer, more reliable modern cabling. Imagine how much that costs – and that’s for a building that’s in regular use, with regular maintenance…
Living in a Grand Country House Today
Living in a grand country house today comes with many, many challenges. Chief among them is the sheer upkeep required to keep the house standing. Remember that many of these places are now centuries old, and require constant maintenance: gutters, roofing, heating, plumbing, electrics, gas supplies…another burden is taxation, and sometimes even the limitations placed on what can be done to the house under local historical preservation laws.
But that aside, do people still live in grand country houses?
“Althorp”, the country manor which is the traditional home of the Spencer Family. Princess Diana lived here before her marriage. It remains in the Spencer family to this day.
Amazingly – yes, some do. The Spencer family (whose famous members include Princess Diana and Winston Churchill) still live at Althorp, their country seat, where Princess Diana grew up before her famous marriage to Prince Charles. Another famous country house still inhabited by its original family is Chatsworth House, in Derbyshire. In the 1800s and early 1900s, Chatsworth was a very popular hangout for the British aristocracy, and even British royalty – King Edward VII, son of Queen Victoria, was a frequent guest there.
Chatsworth House in Derbyshire, built in the mid-1500s.
Chatsworth House is the country seat of the Dukes of Devonshire – the Cavendish family, who have owned the estate since the mid-1500s, and have lived there ever since, including during a particularly scary year of English history: 1665.
For those not up on their English history, 1665 was the year of the Great Plague of London. During this time, the plague spread (through contaminated cloth) to the village of Eyam (“Eem“), just a few miles from Chatsworth. Within a couple of weeks, the entire village was infected, and to prevent a nationwide epidemic, the village leaders ordered everybody to adhere to a strict quarantine. Nobody in, nobody out, until the disease had run its course and the quarantine could be lifted.
Of course, the villagers could not do this alone. The Earl of Devonshire (as the head of the Cavendish family then was), being the local landowner, felt sympathy for the villagers and agreed to provide whatever assistance he could. In exchange for silver coins washed in vinegar, he sent deliveries of food, drink and medicine to the village common at regular intervals (but always at night), giving the infected villagers the bare necessities to keep going.
Eyam is now famous as the plague village because, despite the ravages of the plague – a disease so infectious that even today it is only studied under STRICT controls – a surprisingly large number of villagers survived; and it was the Earl of Devonshire, operating from nearby Chatsworth House, who aided in this miracle.
That particular earl, William Cavendish, was later elevated to Duke of Devonshire (the title the family holds today) by King William III (of ‘William & Mary’ fame) in 1694, for his assistance in the Glorious Revolution of 1688, which saw the much-hated Stuart king, James II, kicked off the English throne, to be replaced by William of Orange and his wife, Queen Mary.
Anyway…enough of 17th century English history, the black death, and the Glorious Revolution. We digress…
Biltmore Estate (photographed here in 1900) is the largest privately owned home in the entire United States. It still stands today and it’s still owned by the Vanderbilt Family.
In a word – yes. There are still grand country manors (both in England and elsewhere – Australia, France, Germany, America and Canada, to name a few) which are lived in by families and run as private homes. In some cases, they’re even still lived in by the ORIGINAL families which built them (although this is very rare). That said, most grand country houses now survive as a mix of half-house, half-business. To fund the maintenance and restoration of the house, most families which still live in them also operate them as businesses – renting out spaces for parties, weddings, anniversaries and receptions, or as filming locations for period dramas and movies (as mentioned previously, Highclere Castle has fulfilled this role many, many times – check Wikipedia for the castle’s full, and quite extensive, list of film credits).
Will grand country-house living ever return?
Honestly? I doubt it. While it’s very elegant and refined, and reeks of upper-class sophistication, the fact of the matter is that it’s a lifestyle that is extremely hard on the wallet. Unless you’re a billionaire who’s making millions every day, and can afford to keep a full-time army of live-in domestic staff to run the house, then honestly…no.
That’s not to say that nobody does it – as seen above, there are some houses which are still used this way – but they’re very much the minority. Most people – even most people with the money to do it – would generally prefer not to; partly because of the expense, but also because most people just don’t live their lives that way anymore, even if they could afford not only to maintain such a house, but to enjoy it. The days of upstairs and downstairs, servants’ bells, footmen and butlers, of servants’ halls and bringing the car round to the front of the house after an evening’s entertainment, are gone.
Today, it’s a lifestyle and a way of life that exists in novels and movies, TV shows and historical romances. As the movie says…
“…Look for it only in books, for it is no more than a dream to be remembered. A civilization gone with the wind“.
In this modern world of ours, with GPS, satellites, radio, live weather-mapping and global communications, travelling anywhere – by car, by bicycle, by airplane, by ship – is pretty routine. We expect to board our vehicle, make ourselves comfortable, and arrive at our destination…eventually. But for centuries, travelling across the world’s oceans was fraught with all kinds of dangers: storms, hurricanes, rocks, reefs, pirates, illness, disease, rogue waves, navigational errors, and the very real risk of getting so hopelessly lost that you couldn’t possibly find your way back home again.
This all began to change in the 1700s, when, for the first time in history, mankind developed reliable methods and instruments for accurate, safe navigation at sea, beyond sight of land. Inventions such as the refracting telescope, the octant, the sextant, and the marine chronometer (or ship’s clock) allowed sailors and navigators, for the first time, to confidently sail beyond sight of land, cross an ocean, and sail back again.
But there was a problem.
Sailing in the 1700s and 1800s was done by what was called ‘line-of-sight’ navigation. That meant taking visual references from the world around you to figure out where you were. For example, to find your latitude, you measured the angle of the sun above the horizon at noon, to determine how far north or south of the equator you were. Similarly, you checked your ship’s clock at local noon, to determine the time-difference between local time (noon, indicated by the sun) and the time at your port of departure (shown on the clock). Since the Earth rotates 360 degrees in 24 hours, every hour of difference equals 15 degrees of longitude travelled.
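For the curious, here’s a minimal Python sketch of the arithmetic behind such a ‘noon sight’. All the numbers are invented for illustration, and real navigators applied further corrections (for refraction, height of eye, and so on) from printed tables – this is just the bare bones of the method.

```python
# A minimal sketch of the arithmetic behind a 'noon sight'. All the
# numbers are invented for illustration; real navigators also applied
# corrections (refraction, height of eye, etc.) from printed tables.

SUN_NOON_ALTITUDE = 40.0   # degrees above the horizon, via sextant
SUN_DECLINATION = 10.0     # the sun's latitude that day, from an almanac

# Latitude: 90 minus the sun's noon altitude gives your angular
# distance from the point directly beneath the sun; adding the sun's
# declination (here, 10 degrees north) gives your own latitude.
latitude = (90.0 - SUN_NOON_ALTITUDE) + SUN_DECLINATION
print(f"Latitude: {latitude:.1f} degrees north")    # 60.0

# Longitude: the Earth turns 15 degrees per hour. If local noon
# arrives when the chronometer (still keeping the time of your home
# port) reads 3:00 PM, you are 3 hours -- 45 degrees -- west of home.
hours_behind = 3.0
longitude = hours_behind * 15.0
print(f"Longitude: {longitude:.1f} degrees west")   # 45.0
```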
The same thing applied when sailors got closer to land. Now you might think that this was easy – but in many ways it wasn’t. Sure, seeing land is great, but navigating along its coastline is not. Especially in bad weather, or at night, or when it’s cloudy, or excessively windy, or if the waves are too high…you get the picture.
Sailing ships are slow-moving vessels, and were equally slow to respond. If you were sailing along the coast and spotted trouble, you first had to consider yourself very lucky – not all trouble could be spotted with the naked eye, or even with the aid of a telescope – but secondly, you had to move very fast to maneuver your ship out of the way – not an easy task when your ship relies on wind and currents for its motion.
This brings us to the one navigational aid which for centuries, has guided sailors in and out of harbours, and safely from shore to shore – an unblinking, unflinching guide, and a metaphor for hope against a sea of troubles.
The humble…
…lighthouse.
What IS a Lighthouse?
A lighthouse, as most people are doubtless aware, is a navigational aid in the form of a fixed point of illumination, raised inside a tower, to guide ships at night, or to warn them of dangers at or near the spot where the lighthouse stands. For centuries, they were so important that thousands of them were built all around the world, lining coastlines, inlets, harbours, inland seas and lakes, as well as important navigable waterways such as estuaries, rivers and bays.
How old are Lighthouses?
If we accept that the term ‘lighthouse’ refers to a fixed point of illumination which serves as a navigational aid for shipping, then lighthouses probably date back to antiquity – certainly to the days of the Ancient Greeks and Romans, and the fabled Lighthouse of Alexandria, in Egypt.
The fabled Lighthouse of Alexandria – a digital reconstruction, based on available archaeological evidence.
But if we accept that the lighthouse is a functional structure – a building specifically raised for the purpose of being a navigational aid, with a light inside it and someone to tend to that light, then how far back can we trace their development?
The Tower of Hercules, in Spain. Raised by the Ancient Romans in the 100s, the tower was renovated in the 18th century, and remains in use today as a navigational aid.
Most sources would say the 1600s. Or more specifically, the end of the 1600s.
“Why?” you might ask.
Simply because, by the 1680s and 1690s, overseas traffic was increasing greatly. The end of conflicts in Europe, such as the Anglo-Dutch Wars, meant that there were suddenly loads of sailors sitting around with nothing to do. Some turned to piracy, while others put their seamanship to good use as merchant sailors, plying the high seas and trading with Europe, the Med, and the American colonies of the European powers. The uptick in oceangoing traffic meant that a proper system of lighthouses was now necessary, to ensure that ships and their valuable cargoes were guided safely back to their home ports.
The First Lighthouses
The first lighthouses, mostly built in the 1690s and during the 1700s, were simple wooden towers raised on clifftops, rocky outcrops, or islands off the coast, near shipping hazards like rocks or coral-reefs, or near the mouths of harbours. Their lanterns were little more than candles protected by glass. These early lighthouses were primitive, and not strong enough to survive for any serious length of time. Wooden beams, no matter how securely anchored or ingeniously dovetailed, could not withstand the force of Atlantic gales, pounding waves, or the effects of nonstop dousing in seawater.
Built by Henry Winstanley in the 1690s, the first Eddystone Lighthouse was a simple, wooden tower which was destroyed in a storm in 1703.
And on top of that, they were made of wood – not the best material for a structure designed to hold a light, when the only lights available at the time were naked flames. After various early lighthouses either collapsed, were swept away, or burned down, architects in the 1700s started looking for better designs, and more robust materials to build them from.
The First Modern Lighthouse: Smeaton’s Tower
The first really successful lighthouse was raised in the 1750s by engineer John Smeaton, on the Eddystone Rocks off the southern coast of England, near Cornwall. It replaced two wooden lighthouses built in previous decades, neither of which had lasted for any great length of time. Learning from their mistakes, Smeaton looked to nature for the inspiration for his new lighthouse.
The old lighthouses had been built of wood – weak, easily rotted, susceptible to both fire and water – hardly an ideal material. So the first change he proposed was to build the new lighthouse in stone. But even this would prove useless if he couldn’t get the shape right, since the lighthouse would have to withstand the powerful storms of the North Atlantic and the English Channel. While wood itself was unsuitable as a building material, it nonetheless served as Smeaton’s inspiration: the large trees which produced the kind of timber used in lighthouses at the time were very tall, but also thick and broad at the base, with deep roots.
Why should his lighthouse not follow this model?
Smeaton decided that for his lighthouse to survive, it would have to be well anchored into the very rocks which he hoped his beacon would warn ships away from. It would have to have a broad, deep base with a sharp, upward curve from the ground, on top of which the tower would be built. This would serve to deflect the force of any wave hitting the tower from the horizontal to the vertical, where its energy would dissipate harmlessly, posing no structural threat to the building.
Smeaton’s Tower. Raised in the 1750s, this lighthouse off the coast of Cornwall is widely considered the first modern lighthouse built with a specific design in mind.
To stop the tower from being blasted apart by the waves like the previous wooden structures had been, Smeaton had the granite blocks carved so that they dovetailed into each other, locking together like the pieces of a puzzle. For extra strength, holes were drilled through the blocks, which were then reinforced with marble rods serving as dowels, to stop the stones from shifting apart.
Apart from the stone lighthouses built by the Romans, the Eddystone Lighthouse – or Smeaton’s Tower, as it became known – was the first really modern lighthouse to last any decent length of time. From the date of its opening (1759), it remained in continuous operation for over a century, until it was dismantled and replaced by the fourth (and current) Eddystone lighthouse, in 1877.
The Different Types of Lighthouses
Since the beginning, there have always been two main types of lighthouse: cliffside or island lighthouses, with million-dollar beachfront views, and isolated lighthouses, built on rocky outcrops offshore – sometimes out in the middle of nowhere! Shore-based lighthouses were the easiest to build, and could become quite elaborate. Lighthouses built out on the open sea, on the other hand, needed to be simple, but sturdy. In many cases there simply wasn’t the money, the time, or the space to build anything more than the tower which held the light itself, plus two or three rooms to house the keeper and all his necessary supplies and worldly goods.
Shore-Based Lighthouses
Lighthouses built on land near the coast were typically constructed near harbours, fishing or coastal communities, or close to offshore shipping hazards, to guide sailors along the coast and to warn them of the dangers lurking nearby. Since space was not a problem, shore-based lighthouses could be quite elaborate. Depending on the design, and the land and funds available, the lighthouse might be used solely as a beacon tower, with storage inside for equipment, tools and supplies. The lighthouse keeper, and possibly his family as well, would live in the keeper’s cottage next door. Boundary walls around the keeper’s property would protect both his dwelling and the lighthouse from wind and waves. Lighthouse keepers and their families would’ve been familiar figures in the local community.
Offshore Lighthouses
Being a lighthouse keeper could be a dangerous, lonely and exhausting job – never more so than when you were the keeper of a lighthouse in the middle of nowhere! Many lighthouses off the coastlines of England, Ireland, Scotland, the United States, Australia and elsewhere around the world with large maritime communities were manned by keepers who had the unenviable task of tending a lighthouse from which they could not escape. Many were cooped up in them for weeks or even months on end.
Since they were built out in the open, offshore lighthouses had to be constructed very differently from land-based ones. They had to be taller, their foundations deeper, their bases shaped in specific ways to absorb the shock of waves, and the space inside had to be very carefully utilised. Unlike a shore station, a lighthouse in the open sea had to contain everything that it might need, and be self-sufficient for weeks on end.
In a lighthouse built out in the open sea, space had to be used carefully. The keepers usually only had one or two rooms to themselves for eating, sleeping and recreation. The other rooms would’ve been used for storing things like rope, wicks, cleaning materials, food, equipment, lamp-oil, and spares of everything required to keep the lighthouse functional.
The Anatomy of a Lighthouse
Regardless of where they were built, all lighthouses had some things in common. The first, of course, was the light, or ‘lantern’. This was housed in the ‘lantern room’ at the top of the lighthouse, protected by glass windows all around. Another feature of almost all lighthouses was the ‘gallery’, the circular balcony that wrapped around the outside of the lantern room. This allowed lighthouse keepers to maintain the windows and structure of the lantern room in relative safety.
In some places, a bell or horn was also mounted out on the gallery, and was used to indicate the presence of the lighthouse when it was shrouded in heavy fog and the light could not easily be seen. The gallery also served as a convenient and safe platform from which the keeper could observe (and possibly communicate with) passing ships, to warn them of danger. Lighthouses out in the middle of the sea used their greater height to fit extra rooms inside, for the storage of long-term supplies like food, rope, wood, lamp-oil, wicks and tools. In shore-based lighthouses, these necessities could be stored in the keeper’s separate quarters or storage buildings, leaving the lighthouse itself relatively uncluttered.
Lighthouse Development
As the 18th century gave way to the 19th, lighthouses became more and more sophisticated. Dovetailed stone construction, as pioneered by men like Smeaton, and Robert Stevenson, the famous Scottish engineer, became THE way to build sturdy lighthouses. Stevenson proved this beyond doubt when he raised the famous Bell Rock lighthouse off the coast of Scotland in the early 1800s. Taking years to build, the lighthouse has survived everything that Mother Nature could hurl at it, and despite being well over two hundred years old…the Bell Rock lighthouse remains operational to this day.
Lighting the Way: Beacon Development
But it wasn’t just in terms of construction that lighthouses were changing. They also changed in terms of the lights which they housed. Early lighthouses used candles, or crude oil lamps. By the late Georgian era of the 1800s and 1810s, more effective Argand-style lamps were being used. These lamps had a cylindrical glass tube, or ‘chimney’ placed over and around the flame. The chimney – like the chimney in a fireplace – drew the hot air created by the lamplight, up the shaft, drawing oxygen with it. This current of air fed the flame, which now would burn brighter than it would’ve done without the chimney. Combined with polished reflectors placed behind the chimneys, these lamps could produce a powerful beam of light once the luminous output had been properly concentrated.
Early lighthouse beacons were little more than this. They were later mounted on rotating platforms or stands, which could be driven around by powerful, mechanical clockwork motors. These needed to be wound up every few hours to ensure their continued operation. By placing different lenses in front of the lamps, different colour combinations of light could be achieved. Together with the rotating lamp-base, keepers could produce distinct and easily recognisable lights which could be seen by ships far away. An alternating red and white light might, for example, indicate a particularly dangerous rocky reef which sailors had to avoid.
Throughout the 1800s, lighthouse beacon design continued to improve. Different configurations of lamps and lenses, reflectors and housings were designed, built, tried and tested, to improve the amount of light produced and the intensity of the beam thrown out from the top of the tower. The next big development after the Argand lamp came in 1822, when French physicist Augustin Fresnel invented the ‘Fresnel Lens’.
A Fresnel lens installed inside a lighthouse. The distinct, curved and rippled effect of the lens meant that all the rays of light coming from the lamp in the center could be angled and concentrated into a single, focused beam.
The Fresnel lens was not a flat sheet of glass, like the window-panes that lighthouses had used previously – it was a series of prisms, shaped, ground and arranged in such a way that every angle of the glass directed and concentrated the light from its source (the lamp) into a single, solid, unbroken beam. The shape of these prisms not only concentrated the light, but also magnified it, meaning that even a relatively small lamp could now throw out a beam that could be seen clearly from miles away!
Changing the Lights
As lighthouse technology improved, not only were towers and lenses upgraded, but so were the lights themselves. Originally they were nothing but fires…then candles…then whale oil lamps. In the Victorian era, lighthouses were mostly oil-fired, first with whale-oil (a holdover from the 1700s), and then eventually, kerosene. None of these were really fun to use – they were either really hot, really smoky, really heavy, or really smelly – and remember that the lighthouse keeper had to lug fresh candles, fuel or wicks up the tower every time he had to tend the light! Keeping soot and smoke off the glass and lenses was a regular chore, as was filling the lamps and trimming the wicks so that they would burn evenly and not produce excessive smoke.
An improvement in lighthouse technology came about towards the end of the 19th century, when it was discovered that combining water with calcium carbide produces flammable acetylene gas. It was easy to produce, relatively safe, and – once a proper system of drip-valves and vents had been developed – relatively easy to use and control.
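For the chemically-minded, the reaction itself is a wonderfully simple one – calcium carbide plus water yields acetylene gas, with slaked lime left over as a by-product:

CaC₂ + 2 H₂O → C₂H₂ + Ca(OH)₂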
Originally, acetylene lamps were small things, used in miners’ lamps, bicycle lamps and the headlamps of steam trains. But as the technology improved, they were also installed in lighthouses, replacing the old oil-burning lamps – with their wicks and smoke and soot – with cleaner, brighter, gas-fired lamps, which required less maintenance and were easier to operate. All you needed was enough water, and enough carbide, to keep the chemical reaction (and the production of gas) going.
Signalling with Light
The classic lighthouse has a single- or double-ended beam of white light, which sweeps around in a circular motion, shining out to sea. But as lighthouse technology grew more advanced, additional features were added. As early as the 1810s, lighthouse designers were using coloured lenses to create alternating beams of red and white light. The speed and direction of rotation, the combination of colours produced by the lantern, and the paintwork on the lighthouse tower were all used to aid sailors at sea, who could pinpoint their position by observing far-off lighthouses through their telescopes. Lighthouses were marked on charts with their distinguishing features listed next to them. By comparing what they could see through their telescopes with what they could read on their charts, sailors could navigate safely along the coast, fully aware of what was around them.
With the advent of electric light in the late 1800s, another feature – previously impossible to implement – was introduced to lighthouses: flashing lights! Now, not only could lighthouse lanterns spin and change colours, they could also turn on and off. Different combinations of blinking or flashing lights were used to distinguish different lighthouses, and these specific combinations became known as ‘characteristics’, with each lighthouse having a different one to set it apart from its neighbours. Along with its other features, a lighthouse’s characteristic is also included in its information when marked on charts.
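To make the idea concrete, here’s a toy sketch in Python – a modern illustration of my own, I hasten to add, not anything a real lighthouse authority uses! – showing how a characteristic such as ‘Fl(2) W 10s’ (chart shorthand for a group of two white flashes, repeating every ten seconds) boils down to nothing more than a repeating pattern of on/off intervals:

# A lighthouse characteristic as a repeating pattern of
# (seconds_on, seconds_off) intervals. "Fl(2) W 10s" = two
# one-second white flashes, two seconds apart, then darkness
# until the ten-second period starts over: 1 + 2 + 1 + 6 = 10.
fl_2_w_10s = [(1, 2), (1, 6)]

def is_lit(pattern, t):
    """Is the light on at time t (in seconds)?"""
    period = sum(on + off for on, off in pattern)
    t %= period                # the pattern repeats forever
    for on, off in pattern:
        if t < on:
            return True        # we're inside a flash
        t -= on
        if t < off:
            return False       # we're inside a dark interval
        t -= off
    return False

print([is_lit(fl_2_w_10s, t) for t in range(10)])
# [True, False, False, True, False, False, False, False, False, False]

Sampled once a second over one full period, the light is on at the zeroth and third seconds and dark the rest of the time – exactly the sort of fingerprint a navigator could time with a pocket-watch and match against his chart.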
The Heyday of the Lighthouse
The lighthouse reached its pinnacle in the second half of the 1800s. Hundreds were constructed around the world in the early 1800s to protect and guide shipping, and even more were constructed in the second half of the century with the rise of regular, steam-powered, ocean-going passenger ships. Lighthouse keepers, their partners and their children all became part of the communities they served. Some lighthouses and keepers became famous for actions or events which they participated in or witnessed. By the end of the 1800s, the great flurry of lighthouse-building had ended. So many had been constructed that there was almost a surplus of lighthouses to be had!
The Evolution of the Lighthouse
As the lighthouse became more sophisticated, robust and prominent, certain lighthouses started being used for more than just warning ships at sea. Thanks to their exposed positions, lighthouses were also used as weather stations. Records of wind direction and strength, air pressure, temperature and rainfall were kept and published to try and help predict the weather. With the coming of radio, lighthouses, with their great height, made ideal broadcasting and receiving stations.
As the 20th century dawned, more sophisticated technology started entering lighthouses. Electrification allowed for much more powerful lights, doing away with oil and gas. Some lighthouses which were too remote to electrify continued to be gas-fired, but by the second half of the 20th century, improving technology such as solar panels meant that even the most remote lighthouse could be electrified and, more importantly, automated. The days of the humble keeper were numbered. Lighthouses could now be operated remotely. In the event that maintenance was required, shore-based lighthouses could be inspected by volunteers or the local coastguard, while offshore lighthouses could be reached by helicopter – some were even modified with landing-pads on their roofs.
Famous Lighthouses
Throughout their long history, some lighthouses have become famous – even infamous. Here are a few of the more noteworthy ones…
The Cape Race Lighthouse (Newfoundland, 1912)
Built in 1907, the Cape Race Lighthouse served as both a lighthouse and a radio transmission station. It replaced an earlier lighthouse built in the 1850s.
As both a lighthouse and a telegraphic receiving station, Cape Race was the first land station to receive wireless distress signals from the sinking Titanic, in April, 1912.
The Smalls Lighthouse (Wales, 1801)
Built on the Smalls – a rocky reef off the coast of Wales – the Smalls Lighthouse was the location of a famous incident in 1801, in which one of its two keepers died in a freak accident. The surviving keeper had to tend the light (and keep his colleague’s lifeless corpse) inside the tower for weeks on end. He was eventually driven insane by the nonstop storms, cabin-fever, guilt over his colleague’s death, and sheer isolation. As a result of this incident, all British lighthouses were required to be manned by at least three keepers at any one time – a rule which held from 1801 until lighthouse automation began in the United Kingdom in the 1980s.
The Bell Rock (Scotland, 1810)
Raised in the 1800s, the Bell Rock Lighthouse, off the Scottish east coast, was the first great stone lighthouse to be built since Smeaton’s Tower in the 1750s. Its design and construction were overseen by Robert Stevenson, a Scottish engineer. The lighthouse was so well designed that it has remained in use to the present day. Stevenson’s success led to a three-generation dynasty of his family building lighthouses around the United Kingdom. One Stevenson who did not join the family business was Robert’s grandson: Robert…Louis…Stevenson…who became an author instead – of ‘Treasure Island’ fame.
Flannan Isles Lighthouse (Scotland, 1900)
Built in the 1890s, the Flannan Isles Lighthouse is one of the most northerly and remote lighthouses in the world, standing in the Outer Hebrides, off the west coast of Scotland.
Remember how in 1801, a lighthouse keeper serving the Smalls Lighthouse died, and the other one was driven mad? To prevent this from ever happening again, the British government decreed that all lighthouses in the British Isles had to be manned by at least three men at all times.
Such was the case in 1900, shortly after the Flannan Isles Lighthouse was first put into operation. Unfortunately, this safeguard was for naught when, in December of that year, a strange chain of events was put into motion.
On the 15th of December, the steamer SS Archtor was passing through the Hebrides. The weather was wild, but despite the heavy seas, the crew on board noticed that while they could see the lighthouse clearly, they could not see its light – which should have been burning during the storm to guide ships through the treacherous waters.
When the ship made safe harbour three days later, the crew immediately reported this oddity to the local authority – the Northern Lighthouse Board. Established in the 1780s to regulate the construction and operation of lighthouses in Scotland, the Northern Lighthouse Board was responsible for everything that happened to the lighthouses under its jurisdiction. The heavy seas continued, and it wasn’t until after Christmas that it was deemed safe enough for the Board to send a relief boat out to the island to examine the light.
Upon arrival, the relief crew could see no signs of life. No trunks or crates of provisions waiting to be restocked, no flags flying, no lights in the lighthouse, and no response when the ship’s captain sounded the vessel’s piercing steam-whistle to try and get the keepers’ attention. When the crew went ashore, things only got weirder.
Inside the lighthouse, beds were unmade, the light was found to be both topped up with oil and in working order, and there were no signs of forced entry, and no signs of a struggle. Further examination of the island revealed significant storm damage along the western shore. In the end, an official investigation by the Northern Lighthouse Board concluded that two of the keepers had gone out in the storm to deal with this damage, and had been swept away by the powerful waves and storm-surges. The third keeper had abandoned the lighthouse in an attempt to rescue his companions, and was also lost to the sea.
Point Bolivar Lighthouse (USA, 1900)
The Point Bolivar Lighthouse in Texas, USA, rose to prominence (along with its keeper) in 1900, during the famous Galveston Hurricane.
Inaccurate weather forecasts, and a certain amount of overconfidence on the part of the local weather bureau, resulted in thousands of Galvestonians being caught off-guard when, in September of 1900, a powerful hurricane swept through the Gulf of Mexico towards the US Gulf Coast. Galveston, a low-lying city barely nine feet above sea level, had no flood defenses at the time. As people fled from the pounding rain, rising winds and fierce storm-surges, 125 of them – including the lighthouse keeper, his assistant, and both their families – sought refuge in the Point Bolivar Lighthouse, the nearest structure of any significant strength to Galveston.
Everybody in the lighthouse survived the storm, which all but leveled Galveston in the space of a few hours. The keeper, George Claibourne, repeated this feat when, in 1915, Galveston was hit by another hurricane! This time, fifty people sought refuge in the lighthouse and survived. Claibourne died three years later while on duty.
The Fourth Point Lighthouse (Java, 1883)
Built by the Dutch in 1855 as one of a series of lighthouses along the west coast of Java, Fourth Point was designed to guide ships through the Sunda Strait, which runs between Sumatra and Java, the two biggest islands of the Indonesian Archipelago. Operated by the Schuit family (Gerrit, Catharina, and their son Joseph), the Fourth Point Light was regularly visited in early 1883 by the Dutch scientist Rogier Verbeek – a pioneering volcanologist who had come to Indonesia (or the Dutch East Indies, as the region was then called) to study its dozens of active volcanoes.
Verbeek used the vantage point of Fourth Point’s lantern-room gallery to observe, through his telescope, the region’s most famous volcano of all, thirty miles out to sea – known to the natives as ‘Krakatau’.
Known to history, as ‘Krakatoa’.
In August, 1883, Krakatoa erupted so spectacularly that shockwaves and tsunamis rippled all the way around the world, with huge ocean disturbances reaching as far away as India, Africa, Madagascar, and the Pacific Coast of the USA. The eruption was heard as far away as Australia and the Indian Ocean, where sailors thought the sounds of the eruption were from battleships firing off broadsides!
Verbeek was lucky enough to survive the eruption – thousands did not. Among them were all three members of the Schuit family, killed when a volcano-generated tsunami ripped the Fourth Point Lighthouse off its foundations and washed it away.
It is likely that the Schuits felt safe in their lighthouse and did not wish to leave it. Or perhaps they decided to stay to maintain the light – despite the fact that the volcanic ash had turned day into night, and the ashy rain was so thick that the light was rendered completely useless.
All that remains of the original Fourth Point Lighthouse. It was rebuilt in 1885 in a different location.
One ship which the Fourth Point Lighthouse would’ve been trying to guide was the S.S. Governor-General Loudon. Loaded with eighty-six passengers, Chinese labourers being ferried to Sumatra, and the ship’s captain and crew, the Loudon had sailed to Krakatoa for a day-trip. Ever since the volcano had become more active, tourists and locals alike had been dying to get up close and personal with a real spitfire!
As they left the island, the volcano started erupting in earnest, and the Loudon was soon trapped in a volcanic hellstorm from which it could not escape, with over a hundred people on board. Unable to reach safe harbour because the eruption had destroyed the jetties at the port town of Telok Betong (its original destination), its commanding officer, Captain Lindemann, managed to ride out the storm – first by steering his ship towards the volcano (to meet the waves head-on), and then out to sea (to escape the worst of the eruptions). They arrived safely back in Java battered, filthy, and scared out of their wits – but nonetheless, alive.
Lighthouses Today
In the 21st century, lighthouses still exist – although it’s been decades since a new one was built. The majority of lighthouses around today were built in Victorian times, or in the early part of the 20th century. The vast majority of them are automated, and have been since the 1970s and 80s. Some have been torn down due to redundancy, and some have been turned into historic monuments and tourist attractions.
The Gay Head Light was built in 1856. It was moved in 2015 to a safer location, further away from the crumbling coastline of Martha’s Vineyard to protect it from imminent collapse.
Some have been saved from demolition by jacking them up off their foundations and moving them further inland, or by maintaining their structure and operation with the help of historical preservation societies. In many coastal communities, the local lighthouse is seen as a beacon, not only of guidance, safety and hope, but also of local pride. One example is the Gay Head Light, on Martha’s Vineyard on the American east coast, which was moved away from the crumbling coastline in 2015 to save it for the local community.
Ever since mankind discovered that the Earth was round, and that one could reach the same place on the globe by going either east or west, he has striven to find the shortest routes to his chosen destinations.
Starting in the late 15th century, when European powers first discovered and began colonising the Americas, interest began to grow in the countries that lay far beyond the great North and South American landmasses: Japan, China, Korea, Indochina and the Spice Islands of the South Pacific, as well as the fabled ‘Terra Australis Incognita’, which legend held existed somewhere far beyond the immeasurable horizon. For untold centuries, these mysterious lands had remained largely unseen by Western eyes.
Overland treks to the Far East took months to complete – assuming you got there at all, of course! Sailing from Europe to Asia took just as long: past Spain, past the Rock of Gibraltar, down south hugging the coast of Africa, and then hooking around the Cape of Good Hope into the Indian Ocean – or else risking a dangerous passage through the torrential storms that lurked around Cape Horn, at the bottom of South America. Only the bravest sailors with the biggest balls ever reached the Orient, and made it back alive.
European Interest in the Far East
The East had much to offer the West: porcelain, ivory, silk, tea, exotic spices like pepper, mace, ginger, nutmeg and cinnamon, and other commodities like rubber, tin, and exotic fruits such as bananas and coconuts. For centuries, these commodities were hideously expensive. Lavishing money on any one of them was considered a sign of almost obscene wealth. And anybody who could get their hands on commercial quantities of these products, and ship them back to Europe in a timely manner, could make themselves fortunes upon fortunes in gold and silver!
It was for all these reasons – the desirability, the danger, the long waiting times, and the sheer risk involved – that ever since mankind first became aware of the Americas, he has tried to find ways around them. A sea route through or around the Americas would cut a trip from Europe to Asia by weeks, and allow for relatively fast and efficient trade. But in an age when most of the world remained unmapped, how was this to be done?
Remember, please, that this was a time before the great Arctic and Antarctic explorations, a time before coast-to-coast mapping, a time before GPS, satellite imagery and photographs from space. Nobody knew what lay at the extremities of the Earth. But somehow, they were going to have to find out!
What is the ‘Northwest Passage’?
The Northwest Passage is the name given to a collection of potential or theoretical sea routes between Europe and Asia, running along the top of the North American continent, through the Arctic Ocean, and past the northern coast of what is today Canada. Once mankind had become fairly proficient at sailing and navigation, and had started getting a better idea of how the world was shaped, he almost immediately wanted to tackle this transportational and geographical nightmare head-on!
A History of the Northwest Passage
The Northwest Passage as we understand it today was first thought of back in the 1400s, shortly after knowledge of the Americas started spreading around the courts and cities of Europe. Eager rulers, kings and princes commissioned sea captains to provision their ships, round up their crews, and sail off into the wide blue yonder to find a viable passage up and over the North American continent to the riches of the far-off Orient. Throughout the 16th and 17th centuries, voyages and expeditions departing from Europe, from Canada, and from the northern regions of what would become the United States all attempted both to map and plot the frozen wastes of the far north, and to find a way through them viable enough to become a regular shipping route.
The search for the Northwest Passage – should it exist at all – lasted for several lifetimes. By the 1700s, dozens of expeditions and voyages had tried, and failed, to find a way through the frozen north between the northern Atlantic and Pacific oceans. The vast majority of these expeditions were beset by all the usual problems of the era: weather, food, and disease. The freezing temperatures, the lack of food and adequate equipment, and the scourge of scurvy – caused by a lack of fresh fruits and vegetables, which were difficult to keep edible on a voyage lasting several months – meant that even the most determined adventurers either had to turn back, or died before they got the chance to. Even before the turn of the 19th century, attempts to find the Northwest Passage, and the grisly fates which awaited those foolhardy enough to try, had become part of common folklore.
The most legendary of all these voyages is one which may not even have ever taken place: The Voyage of the Octavius.
The Octavius, as far as ghost-ships go, has to be one of the greatest maritime tales out there, up alongside the Flying Dutchman and the Mary Celeste herself! So what happened?
The story goes that in 1775, the whaling ship Herald encountered a seemingly-abandoned three-masted schooner bobbing off the west coast of Greenland. It being the 11th of October, the weather was already solidly set into winter, and the crew of the Herald were surprised not to see anybody on board, tending the sails, steering the ship by the helm, or even just standing on deck! After hailing the ship and receiving no reply, some of the Herald’s crew lowered a boat, rowed across, boarded the vessel, and started to investigate.
What they found was the entire crew – the captain, his wife and child, and all the deckhands and sailors – stone-dead and frozen in their bunks, berths and cabins, their frigid corpses preserved by the winter cold. In all, the boarding party came across twenty-eight dead bodies.
Despite what you might think, during the days of sail, it was not uncommon for vessels to come across other vessels in the open sea, which had been completely abandoned. This happened so often that there were actually recognised rules, regulations and procedures for salvaging vessels, sailing them to major ports, and then putting forth a salvage-claim in order to earn some extra money.
What made the Octavius different from other ships was what the Herald’s crew supposedly found in the captain’s cabin. Apart from the body of the captain himself, they also came across his log, or journal. The last entry was dated November, 1762! If this was correct, then the Octavius had been floating around the North Atlantic for over a decade! Further examination of the extremely fragile logbook revealed that the ship had sailed to China, and had been attempting to sail home to England via the supposed ‘Northwest Passage’. The ship, like many before it, had become frozen in the ice, locked into a white, shivering prison that wouldn’t release it for months on end. It was this mention of seeking – and then apparently finding – a navigable Northwest Passage which made the story of the Octavius so famous.
Apparently too freaked out by what they saw to take anything from the ship, let alone try and salvage it, the crew of the Herald upped anchor and sailed off without so much as attaching a towline to the Octavius, leaving it to float off into the mists of history from whence it came.
So, is the story true?
Possibly. Possibly not. The first record of any such event doesn’t make its way into a source of any kind until 1828, when a similar tale appeared in an American newspaper in Philadelphia. Similar stories popped up throughout the 1800s and the early 1900s, likely based on the 1828 version. In all likelihood, something like the Octavius did happen, once upon a time, and the story has been told, retold, and muddled up with other ships and other stories for so long that the truth, if it ever existed, has been lost to time.
Whether or not the story is actually true, its very existence proved a point: people were fascinated by the possibility of the Northwest Passage, and by ships which had supposedly found it and sailed it – so much so that they were willing to believe these stories even when they were baseless and purely anecdotal.
Finding the Northwest Passage!
As the 18th and 19th centuries progressed, greater and greater importance was placed on trying to find the Northwest Passage – if indeed it existed at all. Remember that much of the world was uncharted at this point, and most people in Europe, the Americas and the Pacific knew little, if anything, about the lands beyond their immediate surroundings. This is how myths like the ‘Island of California’, ‘Atlantis’, Skull Island of King Kong fame, or even the fabled Terra Australis, came to be.
The Age of Reason and the Age of Enlightenment were all the rage in Europe at the time. In the 1700s, more and more things were being examined scientifically, rather than religiously or spiritually, or taken at face value. People wanted proof! They wanted facts! They wanted to study and understand everything about everything that they could. “Here be Monsters!” was no longer acceptable on a map. You had to go out there and find out all you could about these monsters first! Likewise, simply writing on a map that there was a ‘Northwest Passage‘ north of Canada was not acceptable. You needed proof! And the first people to find definitive proof of such a waterway would make international geographical history!
So, who do you ask to search for the nearly impossible?
Captain James Cook.
Although he didn’t discover Australia, Captain (or more correctly, Lieutenant) James Cook was famous in Britain as being a bit of a whiz when it came to navigation and cartography. He filled in and found out more places in the South Pacific than almost anyone else at the time. He mapped and plotted and charted every little sandbar that he could find during his voyages. And he almost always came back to England as a scientific and exploratory hero. If anybody could find the Northwest Passage, it was Cook!
The chaps at the Royal Society in London and other great learned institutions certainly seemed to think so…so much so, that they begged Cook to come out of retirement for one final voyage!
Cook’s Voyage Northwest
Any country which found a viable way to, and through, the Northwest Passage would gain considerable prestige. Because of this, it’s perhaps unsurprising that the true purpose of Cook’s last voyage was something of a secret. Of course, nobody could hide the fact that a world-famous celebrity explorer was going off on another voyage, but they could at least hide the reason for it. The public and press were told that Cook and his crew were returning a Tahitian native, whom they had befriended, to his home islands in the South Pacific, and that while there, they would explore more of the islands in that region.
The Tahitian, Omai, had spent the previous two years in England, and although he was a smash-hit among the gentry, aristocracy and the intelligentsia, the time had come, he decided, to go home. Cook agreed to arrange his passage and, on the 12th of July, 1776, Omai, Cook and his crew set sail for the South Pacific.
Omai arrived in Tahiti in August, 1777, a year and a month after leaving England. Cook saw his friend off, and then set sail northwards for North America, and the hoped-for Pacific entrance to the Northwest Passage. On the way, he stopped at, and named, a cluster of volcanic islands. His charts called them the ‘Sandwich Islands’. Today, we call them ‘Hawaii’.
Despite his persistence, Cook wasn’t able to find any definitive evidence of a Northwest Passage, although he did spend months charting the Pacific coast of the North American continent. Sadly, Cook never returned to England – he was killed in Hawaii during the voyage, in 1779.
Finding the Northwest Passage: 19th Century Explorations
Attempts to find the Northwest Passage continued after Cook’s death in 1779. But it was very tricky going. Europe – indeed, most of the world – experienced during the 1600s and 1700s what is called the ‘Little Ice Age’, a period in history (approx. 1600-1850) when the world as a whole was much colder than it is today. So cold, in fact, that slow-moving rivers (like the Thames in London) would freeze solid! In London, you could even go skating on the Thames in wintertime, and people held ‘Frost Fairs’ on the river to make the best of a bad situation, and have a bit of fun!
‘The Frozen Thames’, a painting from 1677. The structure in the background is Old London Bridge.
Unfortunately, while the Little Ice Age meant that you could go skating on the Thames every winter, it also made exploring the Northwest Passage extremely difficult – dropping temperatures meant longer, harsher winters, making the frozen north all but impassable. Many expeditions were lost, had to turn back, or were cancelled altogether. Provisioning exploratory crews with suitable ships, enough food, and adequate supplies was hampered by the fact that nobody knew how long exploring the Northwest Passage would take. They could be gone for a year…two years…three years…five years! The uncertainty meant that only the bravest of sea captains agreed to take on the challenge.
The Fateful Franklin Expedition
One of these sea captains was Sir John Franklin.
Admiral Sir John Franklin (1786-1847) was the head of one of the most famous – and tragic – expeditions to the Northwest Passage during the 19th century. Tragic, because most people expected that if any expedition was going to succeed, it was going to be Franklin’s!
The crew were all stout and sober men, and their two ships – Erebus and Terror – were modern, sturdy, reinforced, and had the latest motive technology on board: a couple of these newfangled steam-engine contraptions! They also had vast quantities of every possible type of equipment they could need, and large stores of canned food, which would last much longer than other types of preserved food, and which would remain edible for years on end, if necessary.
The Erebus and the Terror, moored off the New Zealand coastline in 1841, before their fateful trip to the Arctic…
Armour-plated against the ice, and fitted with steam-powered engines, the two ships were expected to make short work of something as pesky as ice! Their reinforced, iron-clad bows were expected to smash through the floes, and their steam engines to force them through with no problems at all! No more reliance on wind and currents, no more being hemmed in by frozen wastelands – the Franklin Expedition would charge ahead, full of that courage, zeal and confidence that seemed to permeate every aspect of the Victorian age!
The ships in the Franklin expedition were not fast by any means – each maxed out at about 4kn. (four knots, or four nautical miles an hour), driven by a single screw propeller. But they were able to cover significant distances. Leaving England on the 19th of May, 1845, they arrived in Greenland about two months later. Here, they offloaded some crew, dispatched letters back to England on the next available ship, took on extra provisions, supplies and equipment, and then headed for Canada, the cold white north, and hopefully – the fabled Northwest Passage of dreaded mythology and legend!
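To put that four-knot top speed in perspective: one knot is one nautical mile (1.852km) per hour, so at a steady four knots, a ship covers 4 × 24 = 96 nautical miles – nearly 180km – in a day. Slow, certainly, but kept up around the clock, week after week, it was more than enough to cross an ocean.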
The crew of the Franklin expedition left Greenland in late July, 1845, heading for Canada. Here, they sailed through the waters of what is now the territory of Nunavut, and started heading…northwest. In the winter of 1845-46, they made landfall at tiny Beechey Island, east of the current hamlet of Resolute – one of the most northerly settlements in all of the Canadian Arctic. From there, they sailed for King William Island. By now, the ships were well within the bounds of the Arctic Circle.
Despite their modern technology, steam-powered machinery and self-assured preparedness, the expedition soon got into trouble. By September of 1846, both the Erebus and the Terror had become locked hard in the ice off the coast of King William Island. Unable to make any progress until the ice released their vessels, the crew (approximately 130 men and officers) decided to set up camp nearby, and try and weather it out.
Their preparations were no match for the ferocity of Mother Nature, however. The two expedition ships remained trapped in the ice and, unable to move, the men were forced to abandon them and continue on foot towards what they hoped would be the nearest civilisation. On the 11th of June, 1847, Franklin himself died, the remainder of his expedition party eventually succumbing to starvation, hypothermia and frostbite, with some of them even resorting to cannibalism to try and survive. By the end of 1848, there were hardly any of them left.
The details of the fate of Franklin, his crews and his two ships were gleaned by British explorers in the years after his expedition, when they went in search of his ill-fated party. They encountered native Inuit people who related what had happened and what they had witnessed. It was clear that, even in the age of steam and canned food, the Northwest Passage would not be so easily conquered.
In fact, finding a way through the Northwest Passage proved so difficult that it would take another sixty years – and a new century – before it was achieved at all!
Roald Amundsen and the Passage
The man who finally broke through the Northwest Passage, and who finally made history, was a Norwegian – Roald Amundsen! Instead of going all-out, like Franklin had done, Amundsen decided on a different approach: a small ship, and a small crew. This, he believed, would make travel easier, and safer. By not bringing along absolutely everything including the kitchen sink, Amundsen reasoned, travel through the Arctic would be sped up, and the risks that came with being overburdened would be minimised.
The Gjoa, photographed at the time of Amundsen’s great 1903-1906 Northwest Passage expedition.
Amundsen made history in the early 1900s, finally breaking through the Northwest Passage during his expedition of 1903-1906. During this epic trek, he and his crew learned valuable tips from the local Inuit people, who showed him how to stay warm, the best types of clothing to wear, and the most efficient ways to travel across the ice and snow. Amundsen’s tiny wooden vessel – the Gjoa – the first ship to successfully traverse the Northwest Passage – remains a national treasure of the Kingdom of Norway, and is on public display at the Norwegian Maritime Museum in Oslo.
The Northwest Passage Today
Throughout the 20th century, more explorers and navigators attempted to navigate the Northwest Passage, all with varying degrees of success and failure. But what about now?
Actually, it’s pretty easy to navigate the Northwest Passage today. Thanks to global warming, and the resultant reduction in polar ice, dangerous sea-routes once frozen shut for centuries, have now become navigable by modern shipping. It is still dangerous to go there – you need a special ship with reinforced bows and strengthened hulls to make it through safely – but it is possible. As yet, the Northwest Passage is only sparsely used, however.
This is largely due to safety concerns raised by the Canadian government. Since it’s nominally a Canadian waterway, Canadian officials would be in charge of the safety of any vessel passing through the Passage. To ensure the safety of vessels and crews, rescue stations and other facilities would have to be constructed along the course of the Passage – and these would only be practical if enough ships passed through the area to make their construction worthwhile. Until that happens, people sail the Northwest Passage at their own peril, with the very real risk that if something goes wrong, they’d be entirely on their own – the nearest major settlements are simply too far away to call on for help.
This gorgeous artifact and fascinating piece of medical history is the latest addition to my collection of antique brassware, and is also the latest thing I won at the local auction-house…
“Ooooooh!” I hear you say.
“Wussit do?” I hear you say.
“Can I have one??” I hear you say.
Well…uh…no, you can’t have one! I’ve been chasing one of these for five years, and I finally got one!
“Awww…okay fine!…But…w-whassit do?”
Well, it’s a pill-rolling machine, from the Victorian era! Ain’t it neat?
“No, seriously dude…what does it actually, like…really do?”
…I just told you. It’s a pill-rolling machine!
I know, I know, it looks like some sort of antique cheese-grater, but yes, this is actually a pill-making machine, and back in the mid-1800s, no self-respecting apothecary would’ve been without one of these proudly on display on his shop counter!
“So how does it work?” I hear you ask, “And I mean…why does it exist? I thought pills were made in factories and stuff?”
Uh, yes…they are…now. But 150 years ago, they weren’t. In this post, I’ll be talking about what this device is, how it works, and what it does. I’ll also be going into a few of the differences between pharmacies today and pharmacies a hundred and fifty-odd years ago, in the middle of the 19th century, when this pill-rolling machine was invented…
Your Friendly Village Apothecary
This machine dates back to the days when your local pharmacist or apothecary made and sold all his own drugs, medicines and curatives to everybody who lived within the bounds of a given community – days when the dispensing, manufacture and purchase of medicine was very different to how it’s done today.
These days, we get sick, we go to the doctor, he’ll give us a script, we’ll take it to the pharmacist, he’ll read it off, get the medicine, give it to us and we’ll walk out of his shop with a bottle of pills, a tube of paste, a jar of ointment, and a bag of diabetic jellybeans.
Back in the 1850s and 1860s, when machines like this were invented, how you got your medicine was very different.
For one thing, you likely didn’t even go to the doctor! Back in Victorian times, physicians were usually far beyond the reach, financially, of most people. Your average, workaday schmoe likely never met a doctor professionally, unless it was a real emergency. On a day-to-day basis, most poor and middle-class people would visit the pharmacist or apothecary for the majority of their healthcare needs.
Even if you did go to the doctor, he’d write out a prescription, and his instructions were generally to take the script to the apothecary and have the chap behind the counter make up the medicine for you – which the apothecary would’ve done anyway, even if you hadn’t gone to the doctor. And that’s the key difference between a Victorian pharmacist and one who trades today: Victorian pharmacists and apothecaries MADE their drugs, whereas modern pharmacists just sell them.
Let’s make some drugs…
Back in Victorian times, there was no such thing as off-the-shelf medicine. Every tablet, pill, suppository, ointment, potion, lotion, tincture and syrup – to treat everything from sore throats to fevers, headaches to constipation – was made laboriously by hand, by the pharmacist. The medicine-making factories we have today simply didn’t exist 150 years ago.
“So where’d they get their drugs from, then?” I hear you ask.
Well, what used to happen was that pharmacists would draw on centuries of accumulated knowledge, passed down from master to apprentice over countless generations. This knowledge recorded which plants, herbs, roots, leaves, barks, piths, saps, syrups, foodstuffs and various animal parts had healing properties. Knowing how to find these ingredients, how to identify them, how to use them, and what they did, was the biggest part of being a pharmacist or apothecary back in the Victorian era. Indeed, a lot of medicine in the past had far less to do with pills and potions, and more to do with herbs, roots, leaves and saps. A lot of medicine was plant-based (it still is, we just don’t realise it, that’s all!), and because of this, a pharmacist 150 years ago did not have packets and jars and bottles of ready-made medicine – he would’ve had jars, and jars, and jars, and row after row after row of drawers, all filled with these plant extracts and component parts.
Old apothecary shops were famous for having dozens – hundreds – of jars, bottles and drawers, all filled with plant and animal components used for treating illnesses: stuff like willow-bark, opium, cannabis, cocaine, smelling-salts, essential oils, cold-creams, arsenic, cyanide, moisturizers, lip-balms, and countless other ingredients!
What used to happen was that you’d go into your apothecary’s, and he would diagnose you and recommend a treatment, based either on that diagnosis or on the symptoms which you described to him. Then the apothecary would make the medicine for you – on the spot, there and then. This might take a few minutes; it might take hours! You might be told to come back later to pick it up, or you might just take a seat in the corner and read the newspaper in the meantime.
Victorian-era Medicine
Medicine for most of the Victorian era varied little from medicine in previous centuries. All medicinal plant and herb components were bought, sold, and used in their raw form. No aspirin – just willowbark. No sleeping-pills – just opium. No laxatives – just rhubarb!
So what happened when you had to take your medicine?
Well, to make it easier to digest, and to make the active components easier to absorb, the plant material had to be broken down. This was most often accomplished by grinding, crushing, pounding and muddling, using an apothecary’s mortar and pestle, like this one:
A lathe-spun Victorian apothecary’s mortar and pestle, made of brass to make it easier to clean, more resilient to constant daily use, and to prevent medicine or poison from absorbing into the body of the mortar (which might cause poisoning!) This one’s from my personal collection of antique brassware.
Once the medicine had been crushed, ground and pulverised into dust, it could then be dispensed into a jar, wrapped up in sachets, sealed inside capsules, or mixed with syrup to form a paste, which could then be rolled or pressed into pills or tablets. As tablets were tricky to make by hand, some medicines were simply sold as the powder they ended up as – tipped into a folded piece of paper and packed inside a box along with a whole heap of others. One folded piece of paper meant one dose. The paper was unfolded, the powder tipped into a glass of water or other convenient beverage, and then consumed. It’s the origin of the expression ‘to take a powder‘. My dad remembers having to do this as a child, for things like painkillers when he had a fever or headache – he said it always tasted horrible!
The Victorian Pill-Roller – How Does it Work?
Hard tablets were tricky to make. The powder had to be poured into a mold, the mold was closed and then hammered to compress the powder. The mold was broken open and a single tablet would drop out. This was slow, fiddly, and imprecise. Making pills on the other hand, which didn’t require this fiddly process, was much easier. And that’s where my Victorian pill-roller comes in.
Once the necessary ingredients for the pills had been measured, crushed, ground and pulverised, a final ingredient was poured into the mortar – syrup. The syrup wasn’t there to sweeten the mixture, it was there to act as a binding-agent. You mixed the syrup into the powder until the entire thing coagulated into a paste or doughy mixture. Then you could scoop out the entire mass, and roll it into a snake or sausage – one long, continuous worm of medicine!
Obviously, nobody wants to take an entire worm of medicine, no matter how sick they are. So to make it easier, the whole mass had to be cut up and shaped into pills.
This used to be done by hand. And there’s nothing wrong with that, except that no two pills were then ever exactly alike – which could be dangerous if the medicine was exceptionally potent!
To even things out, and to make pills more uniform, the pill-roller was invented, around the 1850s.
So, how does it work?
Well it’s very simple. It has two parts (well three, but the third one is missing – I’ll get to that later on).
The largest piece is the board. This is set at an angle, and is made up of the rolling surface, the cutting grooves, and the collection tray. The large flat surface is for rolling out the pill-paste into the sausage that I mentioned earlier. This is then rolled towards the brass cutting grooves. The paddle (the second piece) is flipped over so that its grooves line up with the grooves on the board.
Rollers on the ends of the paddle run against the brass edges of the board, guiding the paddle straight across the grooves and taking the pill-mass with it. The grooves on the paddle and the board slice up the pill-mass and, after rolling the thing back and forth a couple of times like a rolling-pin, the circular pills – each one exactly the same size now (wow!) – roll off the grooves and into the tray at the bottom. And there you have it – two dozen pills, all done in less than a minute! Talk about mass production, huh? This process could be repeated countless times, and the results would always be the same – perfectly shaped pills, all the right size, and all the right dosage.
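To put some (purely illustrative) numbers on it: if the apothecary rolled out, say, 12 grams of pill-mass, and the board cut it into 24 equal pills, then every pill would contain exactly 12 ÷ 24 = 0.5 grams of the mixture – and therefore exactly the same dose of whatever was in it. That consistency was the whole point of the machine.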
Now, remember I said that the board was on an angle? That’s to ensure that the pills only roll one way – across the grooves from one end to the other, going in as lumps of clumpiness and emerging as recognisable pills. Now, this presents a problem: pills are round. And if you studied university-grade physics like I didn’t, then you might or might not know that round things on a sloping surface…roll. A simple application of gravity overcoming friction.
To prevent your newly-formed pills from rolling off the board, onto the table, and then all over the floor, the pill-roller came with a third component, which on this one, is missing – a removable, wooden collection-drawer. At the end of a session of rolling, the pills would land inside the drawer and remain there while you made more. When the drawer was full, you could slide it out and empty its contents into a jar or bottle, easily, and cleanly.
That said, simply rolling the pills wasn’t always sufficient. To improve their look, or to change their shape, each pill was then placed inside a highly sophisticated pill-rounding device, which is different from a pill-rolling device, in that it doesn’t roll the pills, it rounds them.
What’s the difference? One device makes the pills, the other one pretties them up for the camera.
The pill-rounder is basically a flat wooden disc or cup. You stick it over the pill (one pill at a time) and slide it back and forth and all around. This rolls the pill around underneath, smoothing out any lumps and bumps, so that it ends up a perfect sphere. Shaking the rounder back and forth instead flattens out the sides, so that the pill looks more oval than circular. One trick to differentiate pills which were the same size or colour, but had different functions, was to make them different shapes – you don’t want to confuse a laxative with a sleeping-tablet…
Restoring the Pill-Roller
Anyway, so much for the pill-roller and how it works. What about fixing it up?
Well, this is what it looked like when I bought it…
As you can see, it was worn out, and rather dry. The wood is supposed to be a beautiful dark mahogany colour, and the brass is supposed to be a gleaming gold…instead, both elements looked rather dusty. In that photograph, it’s almost impossible to tell them apart! It took a lot of polishing with Brasso and ultrafine steel wool to restore the brass back to its previous luster…
The brass grooves and rails after my first concentrated polishing effort. It would take a lot more to finish it off.
Apart from polishing and cleaning the brass, I also had to tighten screws, fix dents in the brass rails (which fortunately were few and easily remedied), and clean out the grit and dust stuck inside the cracks.
The biggest repair I had to do was to rebuild the one missing piece from this device: The pill-collection drawer. This involved a lot of careful measuring, tracing, cutting, and research.
Rebuilding the Drawer!
I didn’t know that this thing was missing something when I bought it. I was so excited at the possibility of owning it that this had never crossed my mind! It was only after I’d started researching it, that I’d realised that something was missing. In researching the history of these things and trying to dig out photos of them online, I started to realise that mine was incomplete. Fortunately, rebuilding the drawer looked like a relative cakewalk, so I headed out, purchased the necessary materials, and started.
The first step was to measure and mark all the pieces that I’d need, after looking at loads of photos to determine the general style and shape of the thing. The next step was to cut them out and figure out how they’d all fit together. Due to the shape of the board and the grooves which the drawer had to slide in, each piece had to be carefully sanded, chiseled, cut, measured and oriented a specific way, otherwise it wouldn’t work.
Sanding and chiseling took up the most time. The first and easiest step was to measure, cut and sand the baseboard for the drawer. This had to fit perfectly, because everything else would be measured and cut in relation to how it moved inside the pill-roller. Once its size was perfected and it could slide in and out comfortably, I started on the side-pieces. These were harder because to fit inside the drawer-space, they actually needed quite a lot of wood taken off. I accomplished this with a ruler, pencil, hammer and chisel to carefully score, chip and split off as much wood as I needed, before sanding the chiseled area smooth.
The next step was to cut the curved, quarter-circle rails that would be at either end of the drawer. One end had to be lower than the other, so that the pills would roll into the drawer easily. The other end had to be higher, so that the pills wouldn’t then be encouraged to roll out the other side! The challenge here was to cut and sand these rails to the right length. Too short and they’d fall out and be the wrong size. Too long and if I forced them between the sides of the drawer, I risked splitting the pill-rolling board in half – which would be a disaster!
The next step was to fit all the pieces together, and ensure that they would slide in and out smoothly, without jamming…
All the pieces fitted together, before final assembly.
Once I was satisfied with how they fit together, I started gluing them in place. This was the easiest bit. I glued the end-stop rail first, then the rail closest to the pill-grooves, and then the side-panels onto the sides of the rails and the top of the baseboard. Then I slid the whole thing into the drawer-space to compress it a bit while the glue dried. This was the result:
Drawer goes in…
…drawer comes out!
I had to be very careful with these last few steps. The drawer had to be just the right size: if it was even a fraction too small, it would just fall out; a fraction too big, and it would jam, quite possibly damaging the board. But patience paid off, and the results speak for themselves. The final steps were to nail the pieces together here and there, just to provide some extra strength and peace of mind, and then to stain everything with oil to bring out the grain and colour – but the project was essentially finished at this point. Everything still left to do was purely cosmetic. The main ‘reconstructive surgery’, as it were, was now complete.
BEFORE:
AFTER:
And there you have it. The finished product. Next comes staining, and perhaps a demonstration of how this thing actually operates, but that’ll be for another posting! Stay tuned!