Lubricating your Sewing Machine – More Accessories!

In my ongoing quest to find bits and pieces to complete the restoration of my grandmother’s Singer 99k knee-lever sewing-machine, I have two kinds of lubricant with which to tantalise you.

When you ambled into your friendly local Singer Sewing Center and left with your brand-spanking-new sewing machine, it would’ve come complete with all manner of wizzlewozzles and doohickeys, doodads and thingdoodles.

Today, few vintage machines have these bits and pieces still with them. They’ve been used up, lost, thrown out, broken or just forgotten about, and you can’t just go back down to your local Singer shop to buy them anymore. So instead, you have to seek them out all individually and separately. It’s frustrating because you don’t always know what to look for. But sometimes, you get lucky.

Using the BRK motor-manual which I bought last week (see other posts in this category) as a guide for what to look for, I headed out into the world of the local flea-market. While there, in the pre-dawn chill of a Melbourne winter, with only my torchlight to guide me, I chanced upon this:

Holy mackerel! It’s a Singer oil-can!

After anywhere from 40-90 years, there’s obviously no OIL left in the can. But I bought it anyway, for a couple of dollars, for the sake of completeness. Why did I buy it?

Because, even though it’s as dry as the Sahara Desert, it is, nonetheless, the original style of oil-can that went with my machine when it was brand-new.

This is a standard Singer bentwood case:

On the inside of that case, on the back left-hand side (if the ‘SINGER’ logo is facing you), is a little bent wire bracket, screwed into the paneling.

If you’ve got a Singer machine with a bentwood case and ever wondered what that bracket was there for…well…take a look at the picture of the oil-can up above. Keep it well in your mind, and then scroll down…

Yup! That little bent metal bracket is to hold the oil-can! See how nicely it sits in there and how HAPPY it is to finally be back home? You can tell it’s smiling. You can just tell.

When oiling your vintage Singer sewing machine, be sure that you oil all the moving parts which are MECHANICAL. That means NO OIL should go into the electric BRK machine-motor at the back/side of the machine. If you do that, horrible things will happen. It will heat up, start smoking and will probably catch fire and blow up, because the oil’s gone all through the motor, interfered with the electronics (such as they are on these old machines) and started an irreversible chain of catastrophic events.

Oil the pistons, shafts, cranks, levers, wheels, hooks…anything that’s mechanical. But do NOT apply sewing-machine oil to the motor. Or you’ll live to regret it.

But hold on. I told you I had TWO types of lubricant!…What’s the other one?

You might remember this manual from a previous posting:

Having read the warning, you’re sitting at your desk wondering “What the hell is this ‘motor-lubricant’ stuff?”

The motor-lubricant, which is the only thing that should be used to lubricate the BRK Singer sewing-machine motor, is a thickish, pasty substance. Originally, it came in this tube, which I purchased today for a paltry $1.00:

The tube is, structurally, in excellent condition, without cracks or leaks, and it’s almost completely full of its original supply of paste! This is the lubricant which you should use to lubricate your Singer BRK machine-motor.

If you can’t find any of these neat little tubes of paste, then nick down to your local sewing-machine shop (if you have one) or hardware store (if you don’t), and ask for good-quality motor lubricant. It should be like a soft, gel-like paste which can sit inside the motor and keep things nice and smooth, but without dribbling and leaking everywhere like oil would.

Once you have it, take it home and apply it sparingly to the oiling-holes on either side of your Singer BRK machine-motor. The oiling-holes are these little metallic holes at either end of the motor:

See it? It’s that tiny little steel-lined hole, above the big, fat, black plastic screw-head. That’s why the nozzle on the paste-tube is so small: it has to fit into that minuscule little opening.

Still hunting for more bits and pieces…

 

Lots of Little Singer Pieces!

No, I didn’t drop my grandmother’s sewing machine down the staircase, resulting in a carnage of wood, metal, rubber and broken tiles. What I did manage to do was get my hands on the first group of the attachments I’m chasing after for my restoration project involving my grandmother’s 1950 Singer 99k sewing machine.

I already have the buttonholer, and now I’ve managed to get some more bits and pieces for it.

A poke around the flea-market today dredged up the following treasures from the sludge of the drudge:

Yes, some of it is hidden by the sticker in the middle (which was original to the booklet), but it reads in its entirety:

“INSTRUCTIONS 
for using and adjusting
Singer BRK electric motors
with knee-control for
family sewing-machines

The Singer Manufacturing Co”

The bit in italics is the part that’s covered by the warning-sticker.

Along with the cutesy little booklet, which is the one which my Singer would’ve come with when it was brand-new, I bought this:

It’s a box of Singer sewing-machine attachments…or some of them. I haven’t managed to find ALL the pieces I need yet, but good things come to those who wait. Inside the box, we have:

I know what about 3/4 of the objects inside that box are. The others, I’m not so sure about. For example, we have inside the box, a…

Seam Guide

The seam-guide, held in place by its accompanying nut (which simply screws into the appropriate hole in the machine-base), is used to guide two pieces of fabric under the presser-foot during sewing and to make sure that the size of the seam is consistent throughout the piece. This is an older seam-guide and sewing-machine, so it doesn’t come with measurement-markings. If you wanted that, you’d need to use your measuring-tape as well.

Hemmer Foot

The hemmer-foot is used to create a hem along the edge of raw fabric (to prevent fraying). You feed the fabric through the machine and through the hemmer. As the fabric passes through, the curved bit at the top flips the fabric over to create a neat, even fold which is then stitched into a nice, crisp hem.

Adjustable Hemmer

This is an adjustable hemmer. It’s much like the one above…it does the same thing, it makes hems. But this one has a slide and gauge on it that allows you to make hems of different widths, according to your taste. Anywhere from a full inch, all the way down to 1/16 inch.

Binder Foot

The binder or binding foot does…just what it says it does. It binds. It’s handy for stuff like attaching lace, ribbons and other decorative things to the edges of clothing.

Screwdriver

Isn’t this cute!? It’s a teensy-weensy-widdle-bitty screwdriver! And, it’s a Singer-brand screwdriver, too! It’s probably got a head of 2mm or something. Exactly WHAT one would use this for on a sewing machine…I’ve no idea…but it sure is cute. None of the screws on the Singer are this tiny, but I suppose I’ll hold onto it for the sake of completeness. And I can let the mice borrow it when they need it.

Finally, there are two mystery-feet inside the box. I haven’t figured out what they do or what they are.

They hold SIMANCO part-numbers 86177 and 85954. I’ve tried looking them up, but I can’t find any lists of part-numbers that correspond.

If anyone knows, tell me!

In the meantime, my quest to complete the Singer continues.

On an unrelated note, I found an antique handcrank sewing-machine at the flea-market today. I had no intention of buying it, for a number of reasons (completeness, quality, manufacture, the list goes on), but I reckoned it looked kinda cool. So I took a couple of photos of it:

It came with its original coffin-style case and was dated to ca. 1900, made in Germany. Other than that…the seller had no idea.

Hand-crank machines such as this one were very common. Big companies like Singer were still making them well into the 1940s and 50s, when electric machines had already taken over. I suppose they had an advantage during the War, when electrical supply was unreliable at best…

I’m still on the hunt for a Singer oil-can and more and more feet and fiddly bits. Here’s a group-shot of everything I’ve found so far:

The red box contains the buttonholer. The green box contains the feet and attachments. The manual balancing on top is how to install and/or remove the machine-motor that’s hidden around the back of the machine. The machine itself is a 1950 Singer 99k knee-lever machine.

 

Singer Attachment No. 86718 – Buttonholer

Well, I said I’d keep you folks updated with what I found for my Singer sewing-machine, and this is the first of those updates.

First, my sewing-machine restoration-adventure.  

Okay. This posting is about the first attachment which I purchased for my Singer. It is a buttonholer. It is Singer Part No. 86718. This attachment is designed to fit onto Singer 99, 99k and 66-model machines (and other Singers with a single square slide-plate in the middle of the left side of the machine-bed). It came in a handsome red box…

And has a pretty red and cream colour-scheme, with ‘SINGER’ on top:

The bit that you see on the right is the dog-cover. It covers the feed-dogs underneath the presser-foot, to stop them shifting the fabric to where you don’t want it (on older machines like this, dropping the dogs isn’t an option).

The two red knobs at the back are to adjust SPACE (size of the buttonhole) and BIGHT (closeness of the stitches that form the buttonhole outline). The big red knob at the front is to adjust the position of the sliding foot at the front of the buttonholer, to determine where you want the buttonhole to start.

Just like everything else made by Singer back in the ‘Good Old Days’, this thing is solid steel. All it needs to work is oil.

After I bought it, I took it home and opened it up. In this photo, you can see (…or not, it’s REALLY small…) that the cream-coloured cover is held on by one tiny little screw, to the right of the big red knob:

It was moving very stiffly and jerkily, and after I opened it up and wiggled it around a bit, I found out why. It was full of this thick, grey, gummy oil that was acting more like paste than lubricant. So I wiped off as much of it as I could before re-oiling the whole thing with machine-oil and putting it back together.

This is a very simple buttonholer. It doesn’t do fancy keyhole-buttonholes or buttonholes of different lengths and whatnot. It just does buttonholes. And in the end, that’s really all you need. You can adjust the size of the buttonhole manually anyway, by turning the red knob on the side as you go.

Oh, and for the Americans who are looking confused right now, my research tells me that this style of buttonholer was manufactured in the 1950s and was prevalent in Australia and in the United Kingdom and Europe. But it appears not to have been exported to America or Canada, which probably explains why folks stateside are unaware of its existence.

How to Use It?

Your guess is as good as mine. When I bought it, it didn’t come with a manual (although it did come with a sheet of “anti-corrosion paper”). To figure out how to use it, I mostly watched videos, read blogs and just used common sense. But for anyone else who picks up one of these things without the manual…

1. Screw Down Dog-Cover

The feed-dog cover/plate is the rectangular thing with the black bit dangling off it. The black dangly bit goes over the two holes that you’ll find in the machine-bed, to the right of the needle-plate. In the attachment-box, you’ll find one or two small screw-bolts. Poke one of these through the hole in the middle of the black dangly bit, and screw it into one of the two holes in the middle of the machine-bed (it doesn’t matter which one).

Raise the presser-foot and slide the main body of the dog-plate over the feed-dogs and needle-plate.

There is a small rectangular hole in the dog-plate. This is where the NEEDLE goes through, to make the lockstitch under the needle-plate. Make sure that this tiny hole lines up with the hole in the needle-plate. Otherwise your needle will just be smacking its head against solid steel and going nowhere. Once it’s lined up, tighten that little screw-bolt from earlier, to make sure the plate doesn’t wriggle away.

2. Remove presser-foot and attach buttonholer

This is a little easier said than done.

First, you gotta unscrew the bolt that holds the presser-foot onto the foot-bar and remove it. Put it somewhere where it ain’t gonna walk off on you.

The attachment hooks onto the presser-foot bar from the back. There’s a hook in the middle of the front of the attachment that goes around the presser-foot bar, and a ‘fork’ that sticks out, which should go above and below the needle-clamp on the needle-bar. Best to shove it in at an angle. It can be fiddly, so take your time.

Once it’s on, drop the foot-bar lever, and screw the attachment firmly onto the presser-foot bar using the supplied bolt (it’s the bigger one, about an inch long). Once it’s in, adjust the buttonhole guide so that it’s at its outermost setting.

Note: When preparing your machine to put the attachment on, be wary of the orientation of the thread-breaker (that’s the little clampy piece that’s stuck onto the presser-foot bar). You may need to twist it around so that it’s out of the way of the front of the attachment; otherwise it’ll scratch against the buttonholer, as it did with mine.

Raise the attachment, feed in the patch of fabric that you want a buttonhole to be made in, and drop it.

3. Run the Attachment

Once it’s in and bolted on, drop the foot-lever, and then run the machine SLOWLY. Running it too fast will tangle up the cloth and lead to all kinds of strife. Better slow than sorry. The attachment will pull the fabric in, punching in little holes and driving the needle and thread through them, making neat stitches. It’ll then move the fabric to the right, stitch across, and then stitch back, and then shift the fabric over to the left, stitch across…and that’s a buttonhole! Some people like to run the attachment through again, to make the buttonhole nice and thick.

Whatever you do, make sure that the thread-tension discs are set correctly. If not, you’ll end up with snapping thread, and huge masses of loose thread on the underside of the buttonhole. That not only looks messy, but it jams the machine.

Once you’ve done one buttonhole, raise the needlebar, raise the foot-bar, shift the fabric over to the next space, and do it again!

Easy as pie.

 

The Story of the Rape of Nanking

“Nanking”. A beautiful name, isn’t it? In Chinese, it means ‘Southern Capital’, similar to how ‘Peking’ means ‘Northern Capital’. In the 21st Century, the city of Nanjing (its modern spelling) is one of the biggest and most important cities in all of China, just as it was back in the 1930s, when soldiers from the Japanese Imperial Army overran the city and murdered, burned, raped and pillaged it in one of the most horrendous war-crimes in the history of the world. The infamous ‘Rape of Nanking’ is one of the most brutal and controversial war-crimes ever. But what actually happened?

For the sake of consistency, the original Wade-Giles spelling of ‘Nanking‘ will be used throughout this posting.

What Was Nanking?

Nanking was and is one of the most important cities in China. Built along the famous Yangtze River in southern China, it has been a major center for culture, trade, commerce, politics and government for centuries. After the fall of the Qing Dynasty, the last of the great Imperial Chinese dynasties, which, for countless centuries, had ruled over the lands of ‘Zhongguo‘…the Central Kingdom…the new Republic of China nationalist government, the Kuomintang, set up shop in Nanking. This ancient and proud city was to be the capital of the new, capitalist, democratic China. After much thumb-twiddling, um-ing, ah-ing and foot-shuffling, in 1927, Nanking became the new capital of the new China.

Nanking, like almost every other major city in China at the time, played host to a significant Western expatriate community. Just like in Peking and Shanghai, Western businessmen, religious leaders, reporters, journalists, artists, writers and families descended on Nanking, carving out their own portions of the city where they lived alongside the local and native Chinese population.

The Second Sino-Japanese War

In 1931, the Japanese began their assault on China. By degrees, they claimed larger and larger swathes of Chinese land for themselves, starting with Manchuria in 1931. In 1932, they unwisely attempted to invade the city of Shanghai, an important sea-port. The Chinese Nationalist Army fought them off and kept the city safe for another five years.

In August, 1937, the Japanese Imperial Army invaded Chinese Shanghai. The city was then divided into two sectors – the Chinese sector on the outskirts of town, and the famous Shanghai International Settlement, the vast expatriate zone, in the heart of the city. Not wanting to draw Western powers into the war (yet), Japanese troops only attacked Chinese Shanghai. After three months of fierce fighting, the city fell in November of 1937. Thousands of Shanghai Chinese fled into the Settlement, secure in the knowledge that the Japanese would not dare attack them within its boundaries, for fear of bringing British and American troops down on their heads.

After the fall and occupation of Chinese Shanghai, with the road now clear into the interior, the Japanese headed westwards, seeking out the Nationalist capital, the ancient Chinese city of Nanking.

The Battle of Nanking

Nanking was the next great city that the Japanese attacked, after capturing Peking and Shanghai. The battle started on the 9th of December, 1937.

Back in September, the Japanese had carried out extensive air-raids on Nanking, softening it up for the impending invasion. Heavy raids were carried out for weeks on end. When Shanghai fell in November, the Nationalist Army abandoned the city and retreated to Nanking, to try and defend the capital.

It was soon realised that defending the capital against hardened Japanese troops was pointless. Although most Chinese officers had received modern military training (much of it in Russia), the majority of regular soldiers were uneducated peasants or working-class Chinese with only mediocre training, hardly fit to take on the strength of the Japanese.

Rather than risk his entire army being gobbled up by the Japanese, Chiang Kai-Shek ordered it to retreat even further into the Chinese interior, while leaving a small force behind to stall the Japanese.

By November, bombing-raids on Nanking had intensified, and it was at this time that everyone who could leave, did leave. Wealthy Chinese, businessmen, Western expatriates and anyone who could find a car, boat, bicycle, or horse and cart, or who had a decent pair of shoes, fled the city to escape the Japanese.

The Japanese overran Nanking in a matter of weeks. The Chinese defense-strategies collapsed as inexperienced Chinese soldiers fled from the Japanese. Although there were pockets of resistance, the Japanese overwhelmed Nanking even more easily than Shanghai. In early December, the city was placed under siege. The Chinese defenders were given an ultimatum: surrender the city, or face an all-out Japanese assault. When no surrender was given, the Japanese began their invasion-proper of the city of Nanking.

The city’s ancient defensive walls were blasted aside by the Japanese. Once they’d gained control of the city by mid-December, 1937, the most infamous Japanese war-crime in history began.

The Rape of Nanking

It’s called by many names. The ‘Nanking Incident’, the ‘Nanjing Massacre’…but most people will know it by its most famous name.

The Rape of Nanking.

Starting on the 13th of December, 1937, and lasting for six weeks until the end of January, Japanese soldiers raped, killed, pillaged, looted, burned and destroyed anything and everyone left within the confines of the city of Nanking. Men, women, children, the elderly, babies and the walking-wounded were all shot, clubbed, bayoneted, raped, burned alive, buried alive, decapitated or drowned in an orgy of destruction that went on for a month and a half without end. Estimates of victims range from 40,000…to 200,000…to 320,000 Chinese civilians of all ages. Those numbers sound even more staggering when you consider that in 1937, the population of Nanking was about a million people.

It was the most horrific Japanese war-crime ever. And even today, seventy years later, it’s still not taught in Japanese schools. Japanese schoolchildren have never heard of it. Never read about it in their textbooks, and their teachers have never told them about it. They’ll learn about the battle and the siege and the invasion…but the rape is suspiciously absent.

During the war, the Japanese Imperial Army was notorious for ignoring the rules of war, more commonly known as the Geneva Conventions. Chinese prisoners of war were executed along with civilians, and no quarter was given to anyone.

The Chinese civilians still left within the confines of Nanking searched for anywhere to hide. Cellars, bunkers, bombed-out buildings…but most famously, about 250,000 of them managed to find security…for a time at least…in the unofficial D.M.Z. in the middle of Nanking.

The Nanking Safety Zone

With the Japanese invasion imminent, Western expatriates still within the city (mostly religious leaders, diplomats and medical staff) took it upon themselves to try and set up a D.M.Z. within the city…a demilitarised zone.

It was given the rather misleading title of the “Nanking Safety Zone”.

It might be a zone.

It might be in Nanking.

But it certainly didn’t guarantee safety.

The Japanese were not willing to attack Western institutions or persons, for fear of bringing Western powers into ‘their war’. To try and use this to their advantage, the Westerners attempted to set up a safety-zone in the middle of Nanking. The Japanese had said that they would not attack any part of Nanking where no threat existed (i.e., where there weren’t any Chinese soldiers).

To that end, Chinese soldiers evacuated an area of the city about 8.5 square kilometers in size. Within that space were established about twenty to thirty individual refugee-camps, which took up about 3.8 square kilometers. For the sake of comparison, Central Park in Manhattan is 3.4 square kilometers in area.

Into this space were crammed roughly 250,000 Chinese refugees. Overseeing the entire project were all the Western expatriates then left in the city, about 27-30 of them, all told.

One of the men who was central to the establishment and operation of the Nanking Safety Zone was a German. His name was John Rabe. He was a businessman, which some people might know…and he was a Nazi, which some people might not know.

Despite its name, the Nanking Safety Zone did not automatically provide ‘safety’.

The Japanese agreed not to attack any place which did not pose a threat to their interests. But at the same time, they did not recognise that the Safety Zone existed at all. To them, it was just another part of the city to loot and pillage. So remaining within the Safety Zone did not mean that you were entirely secure. The Japanese were well-known for entering the Zone whenever it took their fancy, snatching up a few hundred men and women and either hauling them off to be executed or raped, or just shooting them dead where they stood. Unlike with the International Settlement in Shanghai, the Japanese had no qualms about just going in and causing havoc.

At the end of January, 1938, the Japanese claimed to have ‘restored order’ to Nanking. The Nanking Safety Zone was forcibly disbanded and everyone was made to return to their homes. Although the Zone was not entirely effective, John Rabe, commonly known as the “Good Nazi of Nanking“, is credited with saving the lives of approximately 250,000 people.

Want to know more?

I suggest you read this website dedicated to the Battle of Nanking.

 

Australia: From Colonies to Country

Some of you may remember that I wrote this posting for Australia Day, back in January. At the end of it, you may recall that I said I’d write about more Australian history sometime in the future.

Well, the future is now. So let’s get cracking.

Colonial Australia

For all of the 19th century, Australia was an island of colonies. They were given names such as “Van Diemen’s Land”, “Victoria”, “New South Wales”, and “Queensland”. Admittedly, “South Australia”, “Western Australia” and “the Northern Territory” were hardly the most poetic names to go alongside the others, but I digress…

In the second half of the 19th century, Australia had finally broken out of the phase of being “Terra Australis Incognita“, the great unknown southern land. It was now firmly established that an island south of Asia did exist, that it was inhabitable, and that it now had a name. “Australia”.

Australia was seen as a great social experiment. Prior to this, no Western civilisation had colonised a landmass so far south: this great, empty sandpit in the bottom left of the Pacific Ocean. The British Government was quick to realise that having Australia as a British colony would be very useful. It would help secure British dominance in the Southeast Asian region, along with their holdings in Singapore and Hong Kong. This would balance out the colonial scale, since nearby, the French, the Dutch and the Germans also had colonies. Colonies like French Indochina (Vietnam), the Dutch East Indies (Indonesia) and German-held Papua New Guinea.

Colonial Australia was a hard and dangerous place to live. Summers were hot, scorching and dry. Cities were still mostly made up of wooden buildings, two storeys high, and streets were largely unpaved. Also, then, as now, Australia played host to some of the most dangerous animals in the world – spiders, sharks, snakes, and the vicious Spotted Quoll:

…D’awwwww…

The Victorian Gold Rush

Life in colonial Australia cheered up in the 1850s, though. Gold had been found sporadically for years, but in 1851, the great Victorian Gold Rush hit Australia. And it was a rush, alright. People from all over the world came to Australia, to go to Victoria, to find gold! The population of Victoria’s capital city, Melbourne, went from 10,000 people in 1840, to 123,000 people by the mid-1850s!

Towns like Bendigo and Ballarat popped up overnight and became booming centers of trade. Just like in almost every other gold-rush in history, in California, or Canada…a significant amount of the money made came, not from mining, but from merchants and shopkeepers who sold equipment to the miners at inflated prices. Shovels, buckets, pans, tents, billies (kettles, that is), bedrolls and countless other things were in high demand, and the scheming and unscrupulous shopkeepers could make a pretty penny or two from “mining the miners” for their hard-saved money.

The Victorian Gold Rush allowed Melbourne to grow at a fantastic rate, and it soon rivaled Sydney, the oldest city in Australia, in population, if not yet in size.

The Rush allowed Melbourne to build magnificent public buildings, like the state library, the town hall, the state parliament building, treasury, and several bridges across the Yarra River in the middle of town.

Australia slowly cast off the criminal element of its past and began to grow. Famous people came to Australia to look around. Prince Alfred, son of Queen Victoria, came for a look in 1868. Two hospitals (one in Sydney, one in Melbourne) were named after him. And it’s probably just as well that there were hospitals around, because the prince was the target of an assassination attempt while he was there! He was shot in the back, but the bullet was removed and the prince made a full recovery.

Towards a Country

Australia was a ‘country’, but not yet a nation. It had separate colonial militias, but no national army. It had lots of railways, but it was not possible to travel all around the continent without changing trains at each border, since each colony used a different rail gauge. As the 19th century drew to a close, Australians wanted more and more to become their own country, their own nation and their own people.

Much like the United States, a hundred and thirty years before.

But unlike the United States, Australians didn’t start stockpiling rifles and muskets.

By the 1880s, there was increasing nationalism in Australia. A higher and higher percentage of people living in Australia had actually been born there, instead of coming from overseas. Fewer people saw themselves as “British”, and more as “Australian”. Improved communications in the 1800s (such as, finally, a nationwide telegraph network in 1872) allowed them to communicate with each other faster and more easily. This brought people closer together, and strengthened the idea that Australia should become a nation.

To that end, in the 1880s, the Federal Council was formed, a body of men whose job it was to make Australia a nation. The Federal Council was the closest thing to a national government that existed before Federation itself.

Colonies were not all in favour of federation, however. They worried that having a big national government would mean that colonies with larger populations would bully those with smaller populations. They feared that individual colonial laws, taxes and tariffs would be stamped out by a more powerful national government. They were also scared that giving power over the country to one body, instead of splitting it up amongst lots of small ones, would cause problems, since any decision made by the national government would affect everyone. In the 1870s and 80s, the American Civil War was still very fresh, and Australians didn’t want to have their own civil war!

As the years ticked by, however, federation started looking more and more interesting, and in referendums held in each state, a higher and higher percentage of people were voting for the creation of the Australian nation.

1901 – Australian Federation

On the 1st of January, 1901, the 20th Century began. And so did Australia. It was now its own nation. Its colonies were now states, and it had its own national government. It was now the Commonwealth of Australia.

It still is.

Australia was the new kid on the block on the world stage. And it wanted to do things differently. Much differently. Australia was seen as a great new social experiment, and the global community sat back to watch the results of this new country, this new nation. Laws were enacted in Australia which had never been seen in England, or indeed, in any other country on earth at the time. Some laws were popular. Some were not. Some were incredibly controversial, even for the time! Australia in the 21st Century might pride itself on multiculturalism, but it wasn’t always like that…

Immigration Restriction Act (1901)

A similar law existed in America. It was called the Asian Exclusion Act of 1924. But Australia was the first country to implement a law such as this.

What was it?

The Immigration Restriction Act of 1901 was an act that regulated who could come into Australia. They didn’t want any undesirable people in this great social experiment that was Australia! They wanted Australia to be pure, clean, innocent and…

…white.

Incredibly white.

More bleach was air-dropped into Australia before 1965 than any other country on earth.

The Immigration Restriction Act of 1901 was designed to keep out undesirable people from the Australian nation. Asians. Jews. Africans. Americans. Anyone seen as undesirable. How did they do this?

Simple. They asked them if they could speak English!

There wasn’t going to be any other language in this new country other than English, so if you wanted to live here, you had to speak English. If you couldn’t, you couldn’t come in. Simple!

This was primarily designed to keep out Asians. I’m here, so it obviously didn’t work.

The problem was that a surprisingly large number of foreigners spoke English.

So much for that idea. To try and add a few more tripwires in this new immigration law, the government started changing the conditions of entry. How did they do this?

When you arrived in Australia, you had to take an English test to evaluate your language-skills. When it was found out that this wasn’t effective in keeping out the global rabble, the law was…altered.

Instead of giving a test in English, a test could now be given in ANY European language. And I do mean ANY language. German. French. Italian. Polish. Russian. Latvian. Czech. Spanish!

…it still didn’t work. But it’s what they tried.

Pacific Island Labourers Act (1901)

Along with the Immigration Restriction Act of 1901, there was also the Pacific Island Labourers Act of 1901. This was designed to deport anyone living in Australia who had come from the islands nearby. Again, this backfired. While several thousand Pacific Islanders were indeed shipped out of Australia, a significant portion of them were able to apply to stay.

How?

Simple. Because they weren’t from the Pacific Islands. Their parents, or grandparents were. But they were born in Australia! It wasn’t legal to send them back to some place which they weren’t from in the first place, so the government had to let them stay put.

And there were a lot of them in Australia. They’d been brought over starting in the 1860s to work in Queensland, on the sugar-plantations. They were dark-skinned people, after all, and they were surely much better at working in the harsh, humid, hot and sunny Queensland climate than white folks. But then it was decided that they just had to leave.

The “White Australia Policy”

All these acts and laws and regulations were designed to create something unique in the history of the world. A completely white country. It wasn’t like America where blacks and whites were simply segregated…no. In Australia, they wanted to make sure that the whole country was white from the very start!…The Aborigines didn’t count, though…

There was a lot of support for a White Australia, but there was just as much dissent. And a significant amount of that dissent came from Britain.

Why?

Australia was part of the British Empire, and the British expected Australia to trade with other countries within the Empire – places like Singapore, Hong Kong and India. The White Australia Policy irritated the British because it meant that non-white subjects from British colonies couldn’t live and work in Australia, something that was sometimes necessary for purposes of trade and business. But then, Australia was by now its own nation…it could do what it liked without having to listen to England.

The White Australia Policy survived for decades, strengthening and weakening and gaining and losing support through the years. During the 1930s, fears of the Japanese and a second coming of the “Yellow Peril” increased support for a White Australia. However, after the Second World War, the need to repopulate Australia caused the Policy to be significantly relaxed, when the government realised that it could not afford to be picky about who it allowed into the country if they expected Australia to survive. It was during the postwar years that the White Australia Policy began to crumble in earnest.

The fact was that the policy had never really been any good. Non-whites had been trickling into Australia for years, and the policy never completely kept unwanted foreigners off Australian soil. On top of that, Australia needed a larger population in the postwar era to fill the gaps left by all the soldiers killed in the War. It was unreasonable and impossible to ask all red-blooded Australian males to do their patriotic duty and shag like rabbits on Viagra, and copulate for the good of the nation, so the Australian Government had to look…overseas! (horror of horrors!)…for more people!

The popular slogan became: “Populate, or Perish!”

This meant that Australia had to increase its population if it expected to survive in the dangerous and uncertain postwar world. Massive tourism and immigration campaigns started, encouraging people from everywhere (so long as they were white) to come to Australia!

A large percentage of the new arrivals in Australia were refugees from the Second World War. European Jews, British war-brides, displaced persons with nowhere else to go. But in the 1950s, 60s and 70s, more and more Asians started flooding into Australia. Trouble in Asia was encouraging people to leave and move south. The Chinese Civil War, the Korean War and the Vietnam War were driving people out of Asia towards Australia.

The White Australia Policy finally collapsed when international events made it impossible to implement – the numbers of Korean, Chinese and Vietnamese refugees pouring into Australia made the Policy a joke. It was effectively dismantled in 1966, and formally abolished in 1973.

Universal Female Suffrage

Australia, the great social experiment, while it may not have been forward-thinking on issues of race and culture, was certainly more open to other ideas…such as the shocking notion of allowing women to…vote!

In 1902, Australian women were allowed to vote alongside men.

…Yeah. So what’s the big deal?

The deal is that Australia was the first country in the Western world to do this!

Britain? Nope. 1918.

America? Try again. 1920.

Germany? 1918.

France? Good luck. Not until 1944.

China? Surely, communists with all their equality and whatnot? Nope. 1947.

Canada? 1917.

Australia was the first! (Okay, second. New Zealand – 1893…damn Kiwis…).

Australia’s Place in the World

In 1901, Australia officially became a nation. It could go to war, it could run its own affairs, create its own laws, set its own taxes, and was no longer tied to Britain!…Except that it still had (and still does have) the Queen as its head of state, and the Governor-General as the Queen’s representative in the Land Down Under.

Australia was a big exporter…and importer. It sent out shiploads of gold, iron, wool, wheat and leather, and in came things such as consumer-goods from England and America.

Australia was miles from England…it took two months to get there by ocean-liner…but a lot of Australians saw themselves still as being British. They supported Britain in wartime and peacetime. When Britain went to war with the Dutch South-Africans (the Boers) in 1899, Australia sent troops off to fight. When Britain went to war with Germany in 1914, Australia sent troops off to fight. When Britain went to war with Germany (again!) in 1939, Australia sent troops off to fight.

Why?

Australia is on the other side of the world, for God’s sake! Why on earth would it get involved in British wars?

Popular opinion in Australia listed reasons such as…

– Similar cultures.
– Helping “Mother England”.
– Failure to help England in her time of need would result in England being too weak to help Australia in hers.

In the Edwardian era, imperial pride and ties to “Mother England” still ran strong through the fabric of Australian culture and society. When soldiers fought and died in the First World War, they died in service of “The Empire”, not Australia. Indeed, such was Australia’s closeness to Britain that when the First World War broke out in 1914, tens of thousands of Australians signed up within months – and over 400,000 by war’s end.

The interesting bit?

Not a single one of them was a career-soldier.

Australia was the only country participating in the First World War to field a completely volunteer army. Shopkeepers, schoolteachers, engine-drivers, cable-car gripmen, farmers, shearers, bank-tellers and waiters rushed to sign up. The most experience that Australia really had of fighting in big wars was the Boer War of 1899 (during which Australian soldier Harry ‘Breaker’ Morant was tried…and executed…on what many regard as trumped-up charges of disobeying orders and killing innocent noncombatant Boers).

After the Second World War, Australia stopped looking to Britain for aid, and turned increasingly towards the United States. Colonialism died a slow death as the European powers grudgingly (in the case of France, incredibly so!) gave up their colonial possessions. Australia joined the British Commonwealth, the collection of countries which shared historic, colonial ties with Britain.

 

They’re Coming to Take Me Away! – A Compact History of Mental Illness

Mental illness is a horrifying thing. It has had a long, long, long, troubled past, full of superstition, horror, misunderstanding, experimentation, mistreatment, pain, suffering, abuse and conjecture. It’s the stuff of horror movies like “House on Haunted Hill”. For centuries, the mad and insane have suffered, some in silence…others, not so much.

This is the history of madness. A look at how mental illness has been viewed throughout the centuries, and how people attempted to treat it, control it and cure it.

The Nature of Madness

Mental illness has been around for as long as mankind, and for as long as it has existed, there have been explanations for it, reasons for it, cures and treatments for it, whether they be right, wrong, effective, ineffective or just plain crazy!

How far back mental illness can be traced is totally unknown. Only since the dawn of the written word and reliable records can we even begin to guess at how many centuries mental illnesses have existed, or how far back certain specific illnesses can be traced.

The Cause of Insanity

People have been trying to figure out what caused mental illnesses for as long as they’ve been around. One of the earliest explanations was that it was related to the movements, phases and positions of the Moon. The Latin word ‘Luna’, or ‘Moon’, has given us the words ‘lunacy’ and ‘lunatic’.

Other common beliefs included possession by devils, demons, evil spirits…or that the person was a witch. In the last case, the most expedient ‘cure’ involved a large stake, lots of wood and a burning torch. To deal with ‘evil spirits’ or ‘demons’, the most common ‘cures’ were either an exorcism, or a terrifying operation called trephination, or trepanning.

Trepanning was the practice of gaining access to the brain by means of making an incision in the organ’s outer casing.

In other words…drilling a hole in your head.

Trepanning is still practiced today, but its benefits (relieving pressure on a damaged and swelling brain) are much better understood now, than they were back in the Middle Ages, when this treatment was used to ‘cure’ insanity and release a person’s demons from their soul.

Trepanning was carried out using one of a variety of drilling or boring tools…such as this delightful instrument:

Stay very still and don’t sneeze…

The procedure was typically carried out in the following manner:

1. The patient was seated (or laid) on a chair or bed and secured in place (either with straps or with the aid of the surgeon’s assistants).
2. The head (or the necessary portion of it) was shaved smooth.
3. A Y-shaped cut was made into the skin, and the skin then peeled back.
4. A mark was made on the bare skull and the drill placed thereon.
5. The drilling began.

Oh…and if you were the patient, you got the unique firsthand experience of watching everything that happened. Because there were no anesthetics.

Trepanning was used to treat more regular health-issues, such as migraines, headaches and so-forth, but it was most famously used for the treatment of mental illness.

As folklore, superstition and religion slowly gave way to reason, logic, science and medicine towards the 1700s, a greater understanding of the lunatic was sought: what caused someone to go mad, what should be done with him, how he should be treated, and what might happen to him. In Georgian England, the answer lay in one word.

Bedlam.

Or, as it is properly called, the Bethlehem Royal Hospital.

The Bethlehem Royal Hospital, or as it was more commonly called, Bedlam, was…and is (it’s still around today!)…the most famous mental hospital in the world. It’s also one of the oldest: founded as a priory in 1247, it was operating as a hospital by the early 14th century.

Like Bedlam

The Royal Bethlehem Hospital earned its nickname. Even today, a phrase survives: a place that is rowdy, noisy, out of control and crowded with people is described as being “like Bedlam”. As indeed the hospital was, during its most famous and notorious period, in the 1700s.

Prior to this time, the inhabitants of Bedlam were referred to as ‘inmates’, as if it were a prison for the mentally ill. In 1700, the inhabitants (also nicknamed ‘Bedlamites’) were called ‘patients’ for the first time. Between 1725 and 1734, ‘Curable’ and ‘Incurable’ wards were opened, where patients were supposedly housed accordingly. But despite the apparent show of progress, Bedlam was a hellhole.

In the 1500s and 1600s, the hospital was filthy and patient-care was almost nonexistent. Barely anything changed by the Georgian era. Patients were often chained to walls, locked in filthy cells or subjected to brutal ‘treatments’, such as ‘The Chair’.

It didn’t DO anything. You were strapped into an armchair. Tied down. Secured. Then the chair was hoisted up into the air and spun around…and around…and around…and around…It was supposed to punish you for being ‘mad’, in the hope that you would repent of your wicked and sinful ways and become an upstanding citizen once more.

Unsurprisingly…it didn’t work. Unless the purpose of the treatment was to make you expel your lunch, that is.

For almost the entirety of the 1700s, Bedlam was a popular tourist-attraction in London. It was common for the wealthy, upwardly mobile classes of British society to take in the sights…and one of them was a trip to the Bedlam Hospital, where, for a small fee, you could be granted admission to the wards. Here, you could view the lunatics and bedlamites and if you wished, you could poke them through the bars of their cells with your walking-stick to watch their reactions. It’s fun, trust me. Bring the kiddies…It should always be a family outing, a trip to a lunatic asylum.

One of the most famous depictions of the Bedlam Hospital is the final painting in a series by the Georgian artist William Hogarth, titled ‘A Rake’s Progress’. Painted in the early 1730s, it shows what the notorious lunatic asylum looked like in the 18th century.

As the Victorian era approached, views on mental health were (gradually) changing, and conditions at Bedlam did eventually improve. Government inquiries, reports and investigations brought to light the shocking conditions inside, and by the dawn of the 19th century the regular tours had died away, after surviving as a London fixture for nearly a century. The patients were given proper care and attention, and the buildings improved.

The Maddest of them All

The most famous mad Georgian of them all was one of the kings who gave his name to the era. King George III. Up to 1788, he was a sharp, intelligent, learned man. He enjoyed science, technology, mechanics, farming and nature. He had a lovely and loving wife and a HUGE family (fifteen children in total!). But from then on, attacks of mental illness eventually robbed him of his senses. He died, blind, deaf and insane, locked in a tower in 1820. When his beloved wife, Queen Charlotte, died in 1818…nobody even bothered to tell him.

Mad Words

The Georgian era gave us a number of our most commonly-used words for describing mental illness – “crazy”, from a Middle English word meaning ‘cracked’, and “insane”, from the Latin word ‘insanus’ (‘unhealthy’). ‘Psychiatry’, as a discipline, was first named in 1808, when the word was coined by a German physician, Dr. Johann Christian Reil, from the Greek words meaning “medical treatment of the mind”.

A Victorian View of Madness

Mental illness was not widely understood in Victorian times, but things were gradually improving. The Industrial Revolution made life faster. For the first time, things could truly be mass-produced.

And lunatic asylums were no exception. As a partial list, we have:

The Hanwell County Asylum (built 1830).

The Surrey County Asylum (built 1838).

The Royal Bethlehem Hospital (extended, 1837).

The City of London Lunatic Asylum (opened 1866).

Guy’s Hospital (Lunatic Ward, opened 1844).

The list goes on. And this is just in England.

Thanks largely to reforms at the turn of the century, the Victorian-era lunatic was handled with much greater care, but probably with just as much misunderstanding. Causes, and treatments for, mental illnesses…and indeed, the distinctions between one illness and another…were still very much muddled up. But progress was…slowly…being made.

The increase in number, and size, of asylums and hospitals around the world, as well as the number of patients, caused problems. Although chaining patients up was no longer an acceptable method of restraint, something was needed to stop patients from hurting themselves. If they couldn’t be drugged up with heroin, opium, laudanum and morphine (common Victorian drugs for calming someone down!), then they had to be rendered a negligible force in some other manner.

Its existence predates the Victorian era, but the straitjacket was the most common method.

Invented in 1790 in France, it was first used at the Bicetre Hospital in the southern suburbs of Paris. Bicetre was not a place where you wanted lunatics to run wild. It wasn’t just a hospital. It was a lunatic asylum, a prison and an orphanage as well!

The straitjacket was used regularly on mentally ill patients, even before the Victorian era. It was the only way that badly understaffed mental asylums could control all their patients at once. But a straitjacket isn’t supposed to be worn for a long period of time (restricting the limbs like that causes blood-clots and other nasty things…perhaps why Houdini wanted to break out of them so often). Bicetre Hospital was one of the first mental asylums, along with Bedlam, to introduce humane treatment methods for the mentally ill during the sweeping social and moral reforms that spread around Europe and the United Kingdom in the 1790s.

Research and theorising into the causes and possible treatments of mental illness started in earnest in the 1800s. Pioneers such as the famous Dr. Sigmund Freud helped to guide the way. Freud, an Austrian Jew, fled Nazism in the 1930s and settled in England. He was on the Nazi hit-list of people to be killed when the Germans invaded Britain. Fortunately for Freud, the Germans never invaded. And even if they had, it wouldn’t have done them any good. He died less than a month after the war started.

Phrenology

Perhaps you might have seen one of these?

These days, people buy them as paperweights, bookends, curiosities, dust-collectors, souvenirs, decorations and hat-holders. But back in the Victorian era, these things were used to understand the brain.

Or something like that.

Shakespeare once wisely said that there was “no art to find the mind’s construction in the face” (taken from ‘Macbeth’, that was, in case you’re wondering). What the Bard meant was that it’s impossible to just look at someone, study their face, and then automatically know what’s going on inside their head.

Apparently, Victorian psychiatrists, doctors and psychologists…disagreed with the great playwright, because for most of the 19th century, phrenology held sway as the latest way to read and understand the workings of the human brain. And they were onto something!

…or not.

Phrenology has absolutely NO medical or scientific basis at all. It was dismissed as quackery by the end of the Victorian era and declared to be of no practical benefit whatsoever to the fields of medicine or science.

But what was phrenology?

The ‘Science’…so-called…of Phrenology supposed that a person’s personality and traits, his mannerisms and so forth, could be determined, or even predicted, by studying the shape of his head. If you’ve ever heard of ‘death-masks’ (masks or busts made of prisoners’ heads after their executions), they were made to try and study the heads (and minds) of the “criminal class”, as it was called in Victorian times. It was hoped that by studying the heads of criminals – their shapes, their foreheads, the positions of their ears and so forth – a general list of ‘characteristics’ could be compiled, showing the public the typical face (and traits) of someone who was (or would become) a criminal.

Phrenology advocated the belief that the brain is divided into segments or “organs”. Each organ controlled an emotion, or trait, such as lust, hope, curiosity, aggressiveness, gentility, connivance and so-forth. It was believed that by examining the head of a person, you could map or determine his personality traits.

How?

Using a pair of phrenology calipers. They look like this:

You can stop sucking in your belly. They’re not for measuring body-fat.

The calipers were used to measure the head. By examining the size of the cranium (that’s fancy medical talk for your skull), the phrenologist could pick up on any abnormalities. He was looking for bumps or strange inconsistencies on your head. The positions of these bumps were transferred to a chart (or to a phrenology head) where a number would be printed. The number corresponded with a trait, printed on an accompanying list. The bumps indicated the areas of the brain which were, supposedly, the most developed, and by extension, the personality traits that were most developed within your brain. This could determine your mood, temperament, likelihood for criminal behaviour, propensity towards violence, drunkenness, abusiveness, gaiety…all kinds of things! Fascinating!

Did it work?

No.

But it sure makes for interesting blog-material.

Phrenologists, as they were called, believed that each section of the brain controlled or housed a particular trait or emotion. You can see that here in this chart from 1895:

As you can see, phrenology didn’t last very long. This page is taken from a medical dictionary from 1895. Note the opening passage, that phrenology was the “…science of the special functions of the several parts of the brain, or of the supposed connection between the faculties of the mind and the organs of the brain…”.

Phrenology continued to linger long after it was dismissed as quackery by the respected medical community. It’s mentioned in “Dracula”, by Bram Stoker, and in numerous Sherlock Holmes stories by Sir Arthur Conan Doyle. Most notably, in “The Hound of the Baskervilles”, where Dr. James Mortimer confesses to an interest in phrenology…specifically, in a close examination of Holmes’s head! (“A cast of your skull, sir, until the original is available!”).

Want to know more about phrenology? Here’s an interesting and rather funny lecture given by Prof. John Strachan of Northumbria University in England. Enjoy!…Oh, this is just Part 1. If you want the rest, click on the video and it’ll take you to YouTube, where you can see the rest of it.

A New Century

The Great War of 1914 brought a new horror to the world of mental illness. It was given the title ‘shell shock’, and was believed to be a deterioration of the mental state brought on by the constant bombardment of artillery shells. The unrelenting stress thus created, it was believed, caused the affected person’s brain to simply snap and blow a fuse.

It was in the first half of the 1900s that mental illnesses started getting names. Names like…

Catatonia (1874).

Schizophrenia (1908), from the Greek words that described a ‘split mind’.

Melancholia (An older term. ‘Depression’ today).

Bipolar Disorder (1957). Previously called ‘Manic Depression’ (1952) and ‘Circular Insanity’ (1854).

The 20th century also brought forth a new and terrifying treatment. One which had no sure and certain outcome and which could, if performed poorly (or performed at all!), leave the patient as a comatose vegetable. It was called the lobotomy.

Tinkering with what does the thinking has been a fascination for centuries…just look at medieval trepanning. The lobotomy had its roots in late-Victorian scientific and medical experimentation. Great strides were being made in medicine at the turn of the 20th century. New drugs, new ways of doing things, new understanding and new technologies were making the treatment of patients faster, safer, cleaner and more effective. Why might not the same be done for the human brain?

Mostly because the results were almost always a failure.

The Lobotomy

Ah, the lobotomy. Famous in horror films for turning monsters into angels, angels into monsters, and right-thinking people into perfect vegetables. But what is it?

The lobotomy as it is most commonly thought of was developed in the mid-1930s by Antonio Egas Moniz. In 1935 and 1936, Moniz ‘perfected’ one of the most controversial medical treatments in the history of medical treatments…and that’s saying a lot.

A lobotomy involves making two incisions (holes) in the front of the head and inserting blades into the brain, whereafter cuts or slices are made into the frontal lobes (the front sections) of the brain. This was supposed to alter the workings of the brain, calm the patient down and effect a remarkable change in personality.

…or not.

Some lobotomies were pulled off with relative success. Others became tragic failures. Because the lobotomy required small, precise slices or cuts into the brain, a small, precise instrument was used. Originally, that instrument was one of these:

The scientific term is an ‘orbitoclast’, but its similarity to the axes and picks used by mountain-climbers…

…caused people to call operations carried out with these instruments, ‘ice-pick lobotomies’.

Unsurprisingly, lobotomies were incredibly risky. Patients risked everything from death and paralysis to becoming a vegetable, losing their faculties, their ability to speak, to see, to function properly in society…It makes you wonder why such a treatment was ever devised in the first place! One of the most famous people to receive a lobotomy was a 12-year-old boy named Howard Dully. Born in 1948, he’s still alive today. The lobotomy was performed on him with the permission of his parents. The damage was so severe that it took him decades to recover and to be able to function properly in society again. The lobotomy is such a notorious procedure in medicine today that he wrote a book about what it was like to have one, and the effects that it had on his life. His memoir is titled “My Lobotomy”.

The effects (and benefits, if any) of lobotomies were disputed almost immediately. Even by the 1940s, people were questioning whether or not this ‘procedure’ did anything useful at all. The Soviet Union made the performing of lobotomies illegal as early as 1950. By the 1970s, most other countries had followed suit. During the heyday of the lobotomy (the 1940s and 50s), up to 18,000 people were lobotomised in the United States alone.

Electroshock Treatment

Electroshock treatment or therapy dates back to 1938. It was devised by Italian psychiatrists Ugo Cerletti and Lucio Bini. Cerletti first experimented on animals, as all good scientists did back in those days, before moving onto human patients.

Why on earth would zapping someone with electricity be considered a good thing?

Cerletti believed so because he noted a remarkable change in his aggressive, mentally ill patients. Once zapped, aggressive patients tended to be calmer and more manageable. This was seen as a good thing (who wouldn’t agree?) and electroshock therapy was slowly introduced around the world, to treat those who had mental illnesses that caused them to be a danger to those around them, such as the “criminally insane”.

Electroshock therapy is obviously dangerous. Improper use of the therapy can lead to brain-damage, most notably, temporary or permanent memory-loss. It was often prescribed for violent criminals to calm them down, or for mentally ill patients who posed a physical danger to those around them. It’s still used today, to treat extreme depression, although in the 21st century, it’s much safer. It can still result in varying levels of memory-loss however…so if your doctor decides to prescribe you this treatment…think twice before saying ‘Yes’.

Looking for more Information?

Index of British Lunatic Asylums

The History of Phrenology

Documentary Film: “Bedlam: The History of Bethlehem Hospital”.

History of the Bethlehem Royal Hospital

“What is a Lobotomy?”

“What is a Lobotomy?” (WiseGeek.com)

 

Choking or Charming? The History of Ties and How to Tie One

Ties. They can look flashy, fashionable and snappy, or they can bring back visions of boardrooms, the office, school or military dress-uniforms. They can be stylish and colourful, or they can be choking and restrictive, or possibly inducive of autoerotic asphyxiation…which might not be a bad thing. But I digress; few articles of clothing are more polarising to men than the tie – whether you would, do, would not, or do not choose to wear one.

Some people wear ties on a regular basis as part of a uniform. Some men wear ties because they’re part of their personal style or ‘look’. Personally, I’m in the latter camp. I started regularly wearing ties again about two years ago, and I’m still wearing them regularly today. In fact, I’m wearing one right now as I type this.

But how long have men been putting things around their necks? Where did they come from? Why on earth would someone do this?

The History of Neckwear

People have been putting on neckcloths for centuries. The modern necktie and its cousin, the bowtie, the two most common neck-coverings today, are descendants of one of the most common neck-coverings of the 17th century – the cravat.

The cravat, a wide, scarf-like neck-cloth tied loosely around the throat, was the neck-covering of choice from the 1600s up to the 1800s. Some people who want a looser, loungier, more casual look in their neckwear still wear cravats, and their cousins, neckerchiefs, today. The word ‘cravat’ came from the French ‘cravate’, which was a corruption of the word ‘Croat’, from the country of…Croatia, where the cravat was born in the 1630s.

The original purpose of these neck-coverings and cloths (be they cravats, kerchiefs or ties), was actually to hold the shirtfront shut, and to stop wind and cold air from blowing down into your clothes and onto your chest…it was a comfort thing.

The Birth of the Tie

By the mid-1800s, the cravat, a staple of men’s wardrobes for the past two centuries, was beginning to get a bit raggedy around the edges. As with a lot of other elements of men’s clothing at the time, people started wanting simpler, better-fitting, less flamboyant clothing. The cravat was being seen as a relic of the Regency era of the 1810s and it was quickly becoming sooo last century.

So the modern necktie was born. While the cravat and the neckerchief never really went away, by the last quarter of the 1800s, they were beginning to do serious battle with the new kid on the block – the necktie.

The necktie was popular for a number of reasons.

– The cravat is generally tied loosely and floppily around the throat. This was fine…so long as you didn’t have to keep tightening it up all the time. In the increasingly mechanised world of the late 1800s, the loose, wavy cravat was a liability. If it came loose and caught in a piece of whirring machinery, it could strangle the wearer to death! The necktie was done up so that it provided a tighter, safer knot.

– The necktie was simpler and didn’t take up so much real-estate. Cravats are like icebergs – three quarters of the cloth is stuffed down your shirtfront. And cravats are big, bulky things – that leads to a lot of excess material. Neckties were slim and simple, without wasted fabric.

– The necktie was easier to put on. A cravat had a lot of fabric and tying one could be frustrating. A necktie was thinner and had less to fuss around with, making it faster and more convenient to tie.

Tie Knots

There’s a multitude of tie-knots out there. According to author Thomas Fink, who did a study of the necktie, there are no fewer than 85 ways to tie a necktie. Screw that! I’m only going to talk about two knots.
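If you’re wondering where that figure of 85 comes from, Fink (working with fellow physicist Yong Mao) got it by modelling a knot as a sequence of moves of the tie’s wide end into the left, right and centre regions of the chest. Here’s a quick brute-force sketch in Python that reproduces the count – it follows my reading of their rules (start from the left where the wide end hangs, never move into the region you’re already in, finish with R-L-C or L-R-C so the end can be tucked through, and use 3 to 9 moves for a wearable knot); it’s not their actual code:

```python
# Brute-force count of tie-knots, after Fink & Mao's enumeration
# (my reading of their model – see the caveats in the text above).
from itertools import product

def count_knots(max_moves=9):
    """Count move-sequences that qualify as wearable tie knots."""
    total = 0
    for h in range(3, max_moves + 1):
        for seq in product("LRC", repeat=h):
            # The wide end starts draped on the left.
            if seq[0] != "L":
                continue
            # You can't move into the region you're already in.
            if any(a == b for a, b in zip(seq, seq[1:])):
                continue
            # The knot must finish ...R-L-C or ...L-R-C, so the
            # wide end can be pulled through and down the front.
            if seq[-3:] in (("R", "L", "C"), ("L", "R", "C")):
                total += 1
    return total

print(count_knots())  # prints 85
```

Run it and you get 85: one knot of three moves, one of four (the four-in-hand below), three of five, and so on up to forty-three knots of nine moves.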

Not all shirts are the same and not all collars are the same. So you should always know at least two tie-knots. One of the most common knots is the Four-in-Hand.

Four-in-Hand Knot

The four-in-hand knot is probably the simplest knot ever. Supposedly, it was named after coachmen, who would tie up the reins of their carriage-horses in a similar way, to stop them from tangling up during long drives. If you’ve never tied a tie and you really need to know how, this is the fastest way:

1. The tie’s draped around the popped-up collar and over your shoulders, with the right side longer than the left (and with the wider side on the right).

2. The right side of the tie is crossed over the front of the left and then pulled behind it to the right.

3. Then the right side of the tie is crossed over the left side again.

4. Then, it’s pulled up through the gap below your neck.

5. Using your fingers, wriggle open a gap under the outermost of the two loops that you made around the skinnier portion of the tie.

6. Stuff the wide end of the tie down through that new gap you wriggled open with your fingers. Pull it down so that it’s nice and tight.

7. Close the gap below your neck by pulling on the short end of the tie to draw the knot up.

Note: As this is the knot that generally uses the least amount of material, you might end up with a tie that hangs down too low. If it does, untie it and start again, this time repeating step 3 two or three more times. This uses up the extra fabric so that you don’t have so much left over when the knot’s done up.

The four-in-hand knot is best used with shirts with spearpoint collars, which leave only a small space between the ends of the collar. This is because the knot that results from tying a tie in this fashion is rather long and skinny (unless you wrap the tie around the knot a few more times to use up the extra fabric, as I mentioned up above).

The Windsor Knots

The knots which are collectively called the Windsor knots go by many names. But the general style was named after the dapper Duke of Windsor (who caused a scandal in 1936 when he abdicated the British throne). However, the knot itself was actually invented by his father, King George V, who had a reputation for being a strict adherent to dress codes.

Windsor knots are also called Full Windsors, Double Windsors and Half-Windsors.

What the hell is the difference?

Full Windsor and Double Windsor are the same thing.

Half-Windsor is…a…half-Windsor.

So how do you tie one?

The Windsor-knot is famous, not only because of its royal connections, but because it’s a fat, chunky knot with a lot of symmetry. Here’s how you do one up.

1. Drape your tie over your shoulders and around your popped-up collar. Longer, wider side on the right, skinnier and shorter side on your left.

2. Cross the long side over the short side (left).

3. Loop the long side behind, and then up, through the gap below your neck, and outwards. Then, pull down. The front of the wider end of the tie should now be hanging down, on your left-hand side.

4. Cross the wide side of the tie back behind the knot again, but this time, pull it right across to the right side of your body.

5. Pull it up and stick it through the gap below your neck, keeping to the right side, this time. Pull it down. This is similar to step 3, only in reverse and on the other side of the knot. In this case, the BACK side of the wide end of the tie should be facing outwards.

6. Draw the wide end of the tie across the front of the knot to the left, and then poke it through the gap below your neck from behind, again (as in step 3).

7. Toss the wide end of the tie back over your left shoulder. Stick your finger down the front of the knot to make a hole there. Stuff the rest of the wide end of your tie down that hole and pull tight.

8. Pull the shorter, back end of the tie down to close the knot.

If you’ve done it right, then you should have a fat, triangular knot. Also, there should be a little dimple in your tie just below the knot – a distinctive trademark of a Windsor knot.

Alright. You’ve just done a Full or Double Windsor knot. Again, as with the four-in-hand knot, if you’re left with too much fabric, undo the tie, readjust and tie it again, repeating steps 3 and 5 one or two more times to use up the extra fabric so that the finished length comes out right.

So how do you do a Half-Windsor knot?

Easy. Just do half the steps. That’s why it’s called a Half-Windsor!

Do steps 1-3. Then instead of doing step 4 (bringing the tie around the back of the knot to the front to the right and stuffing it down the neck-gap again), simply bring it around the back, all the way around the front and to the back again (in a loop), then do steps 3, 7 and 8. Done.

The Four-in-Hand, the Full/Double Windsor and the Half-Windsor knots are the most commonly-used tie-knots. The Windsor knots, because of their chunkier results, are best tied on shirts with spread collars, where a knot that takes up a lot of shirt real-estate is preferable.

The Bowtie

Like the necktie, the bowtie is descended from the granddaddy of all neck-cloths, the cravat. Okay. I won’t go into all that again.

Doing up a necktie is easy. Most boys learn how to do one up for school uniforms and the like. But a bowtie can be daunting and scary and intimidating!

It isn’t.

Bowties carry certain connotations – you’re a professor, banker, teacher, doctor, Hercule Poirot, or, if you wear thick-rimmed glasses, have buck-teeth and wear a short-sleeved shirt – a nerd. But bowties can also carry a connotation of skill…mostly because they’re perceived as being impossible to tie.


Agatha Christie’s dandy Belgian detective, Hercule Poirot (portrayed by David Suchet), with his trademark bowtie

They’re not.

And this is how you do it.

I’ll be honest. There are a bazillion video-tutorials on YouTube that show you how to do up a bowtie. And you could disregard everything here that’s to come, and just go and watch one of those. But the problem with those videos is that they all show you how to tie a bowtie…without ever really telling you. “You do this, then this, then this, then this, then this…voila!”

Yeah. Slow down. The video’s over in two minutes and you’re standing there with a piece of crap tied around your neck and your big fancy Black-Tie dinner is in half an hour. You’re screwed.

Tying a bowtie is easy – I got it after just three attempts. If a doofus like me can do it, anyone can. Here’s how:

1. Pop up your collar.

2. Adjust the length of your tie. Quality bowties have a cinch or an adjuster on them. Use this to get the length of the tie right.

How long does the tie have to be? Well, if it’s draped over your chest and around your neck, the left end of the tie should be at your nipple or at the top of your sternum. The right end should be about an inch or so longer than that. Adjust the tie’s length so that this is achieved (the short end of the tie is always on the left, the long side is on the right). Go ahead. I’ll wait…

3. Okay, done that? Now, to tie stuff up. Cross the long end of the tie over the short end (to the left). Stick the long end up behind the knot and pull up, firmly. The long end of the tie is now on your left shoulder and it should’ve looped around there from behind the left part of the tie. Yes? Good. Leave it there.

4. The other half of the tie is now pointing straight down. It should be shaped like a fish, with the tail pointing down and the head pointing up. Fold the tie in half across the middle of the head. Then twist this part of the tie to the left. You should have a nice, bowtie-looking shape under your chin if you’re looking in the mirror.

5. Keeping this position, flop the other part of the tie (on your shoulder) down over the middle of this bowtie shape.

6. Pinch together the head and tail of the fishy bow which you created in step 4, and pull outwards. You’ll now have two holes. One between your neck and the tie itself, and one smaller one in front of that, just big enough for your finger to poke through.

7. The long half of your bowtie is now hanging down just like the other half was, with the fishtail pointing down and the head pointing up. Do as you did with the other half of the tie – Fold the head in half and twist it to the left.

8. Now for the tricky bit. Remember that little hole I mentioned at the end of step 6? You’re now gonna shove the folded head of your other fishy through that hole. I find it helps to fold the fish again, lengthwise, to fit it through here.

9. Done that? Now you should have:

Front bow – folded side on the left. Fishtail on the right.
Back bow – Folded side on the right. Fishtail on the left.

During this procedure, the middle of your bowtie knot might become a bit twisted. You can untwist it slightly to make it a little neater.

Now, to tighten and neaten.

Pull on the two fishheads to tighten the knot, and on the two tails to loosen it. Keep pulling on the heads and the tails until symmetry and a comfortable tightness have been attained. Straighten out the fish-heads (especially the inner right one, which may have become a little crinkled in the exercise), and you’ve got a perfect bowtie!

Now you’re ready for that school formal or prom or that Black Tie family reunion or that fancy dress party which you’re attending as a Computer Nerd.

To undo the tie later, simply pull on the fishtails, and the whole thing just comes apart.

Note: Don’t worry if the tie isn’t absolutely super-duper crazy mega-perfect. No self-tied bowtie ever is. But you can try to get close.


“Ah Watson! The Needle!” – Sherlock Holmes and Drugs

“Which is it today? Morphine or cocaine?”
“It is cocaine. A seven-per-cent solution. Would you care to try it?”

– Dr. Watson and Sherlock Holmes, ‘The Sign of Four’

Sherlock Holmes is famous for a lot of things. His deerstalker cap, his pipe, his address (“Two-twenty-one B…Baker Street”), his phenomenal deductive powers and of course…his drug-use. That’s what this posting is about.

The Holmesian Canon (the collection of short stories and novels) was written by Sir Arthur Conan Doyle, who was knighted in 1902 for services rendered during the Second Boer War (1899-1902). But before the Boer War, Doyle enjoyed the use of another title.

Dr. Arthur Conan Doyle.

That’s right. He was a physician.

He wrote the Holmes stories in the considerable amounts of spare time that he had between appointments and consultations, to make the extra money that his medical practice failed to provide.

It’s probably not surprising then, that medicine and drugs play a big role in the Canon, since after all, the stories were written by a doctor.

Sherlock Holmes and Drugs

The Holmesian canon gives us a window into the world of Victorian England, at the end of the 19th century. We see clothing, transport, social attitudes, science and technology. And we also get a glimpse into Victorian medicine. How many of the characters are doctors or surgeons? Dr. Mortimer, Dr. Watson, Dr. Trevelyan, Dr. Carthew…the list goes on.

But Holmes’s closest association with medicine (apart from Dr. Watson) is his use of drugs.

I will say this once. So pay close attention.

Sherlock Holmes was not a drug-addict.

He says so himself. Holmes’s brain is overactive. It is constantly whirring around looking for things to occupy itself with. When he’s working on a case, his brain is occupied with problems, facts, deductions, inferences and pieces of evidence.

When Holmes doesn’t have a case, his brain has nothing to work on. Nothing to stimulate it. He gets bored and cranky. Hence the drugs. They serve to keep his brain occupied and stimulated when he doesn’t have a case. He hops off them the moment he does have one. At best, you might say that Holmes was a recreational drug-user. But certainly not an addict. If he were, he’d be huffing on opium and shooting up heroin all day long, even while on a case…which he never does.

“…My mind rebels at stagnation. Give me problems, give me work, give me the most abstruse cryptogram, or the most intricate analysis, and I am in my own proper atmosphere. I can dispense then with artificial stimulants. But I abhor the dull routine of existence. I crave for mental exaltation…”

– Sherlock Holmes, ‘The Sign of Four’

Now, I will sit back while I’m broadsided by a group of angry people screaming at their computer-screens, saying how Holmes is a drug addict because he shot himself up with cocaine and morphine, how he did tobacco, how he huffed opium and did heroin and every other kind of illicit drug imaginable. Of course he was a druggie. Those are all illegal drugs!

…No they’re not.

Drugs in Victorian England

You have to understand that we read the Holmesian stories through modern eyes. Through the eyes of people living in the 21st Century. When these stories were written, some well over a hundred years ago, things were very different.

The most important difference, for the purposes of this posting, is that in Victorian times, opium, morphine, cocaine, laudanum and heroin were all completely legal.

Yes they were. Believe it, or not.

You could walk into your Boots chemist on Fleet Street in London in 1885 and buy a bottle of opium or morphine as easily as you can buy a bottle of aspirin pills today. Nothing was thought of it and nothing was said. It was as easy as that. And 100% legal. Owning, using, purchasing and selling these drugs was as common as buying cough-drops. There was almost no regulation or law surrounding these substances…mostly because at the time, their side-effects were less well-understood than they are today.

Opiates especially (opiates are the drugs derived from the opium poppy) were used extensively in Victorian times, as sedatives, sleeping-draughts or painkillers. Sleeping-tablets contained opium or morphine. Sedatives (drugs to help you relax) most likely also contained opium or one of its related drugs.

The most common painkiller of the time was a powerful drug, sold in bottles, used to treat everything from toothaches and headaches to joint-pains and backache. Called ‘Tincture of Laudanum’, this highly potent cocktail of alcohol and opium was powerful and effective…but also extremely addictive. And it was sold as freely in Victorian times as any other non-prescription pain-relief medication is sold today.

The Status of Drugs

In Victorian times, when the Holmesian canon was written, there was almost no regulation of drugs and poisons. The closest thing you had was the pharmacist’s ‘Poison Book’.

By law, pharmacists had to keep a record-book of poisons. Anyone wanting to purchase poison would have to fill out a line in the book. Their name, address, reason for purchasing poison and so-on…and sign their entry in the book. That was pretty much it.

But the drugs which are illegal in the 21st century had no regulation in Victorian times. Their side-effects were not understood, and they were so widely used by everyone from doctors and surgeons to parents treating their sick children that nobody thought anything of it.

It would not be until 1920, with the passing of the Dangerous Drugs Act, that drugs like cocaine and heroin would finally be outlawed in England.

Holmes’s Use of Drugs

At best, Holmes was a recreational drug-user. He shot himself up with morphine and cocaine to alleviate the agonising spells of boredom he had between the cases which were his real addiction. Opium is occasionally mentioned in the canon (most notably in ‘The Man with the Twisted Lip‘), and its famous side-effect of drowsiness (which is what made it so popular as a painkiller and sleeping-agent) was recorded therein, but no mention is made of Holmes ever actually taking the drug.

Whatever you might think of Holmes and the drugs mentioned in the canon, you need to understand the historical context of the stories: the manner in which drugs were viewed at the time, and the way Holmes used them, were both very different from how they’re handled and used today.