Not for the faint of art. |
Complex Numbers A complex number is expressed in the standard form a + bi, where a and b are real numbers and i is defined by i^2 = -1 (that is, i is the square root of -1). For example, 3 + 2i is a complex number. The bi term is often referred to as an imaginary number (though this may be misleading, as it is no more "imaginary" than the symbolic abstractions we know as the "real" numbers). Thus, every complex number has a real part, a, and an imaginary part, bi. Complex numbers are often represented on a graph known as the "complex plane," where the horizontal axis represents the infinity of real numbers, and the vertical axis represents the infinity of imaginary numbers. Thus, each complex number has a unique representation on the complex plane: some closer to real; others, more imaginary. If a = b, the number is equal parts real and imaginary. Very simple transformations applied to numbers in the complex plane can lead to fractal structures of enormous intricacy and astonishing beauty. |
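Those "very simple transformations" aren't hyperbole. The most famous example is the Mandelbrot set, which falls out of nothing more than repeatedly squaring a complex number and adding a constant. Here's a minimal Python sketch of that iteration (the function name and the iteration cap are illustrative choices, not from any particular source):

```python
def escapes(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0. Return the step at which |z| first
    exceeds 2 (the point "escapes"), or None if it never does within
    max_iter steps (the point is, as far as we can tell, in the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c        # the entire "very simple transformation"
        if abs(z) > 2:       # once |z| > 2, escape to infinity is guaranteed
            return n
    return None

print(escapes(0j))      # None: the origin is inside the set
print(escapes(2 + 0j))  # 1: escapes almost immediately
```

Color each point of the complex plane by its escape count, and the "astonishing beauty" part appears on its own.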
Today, from the Land of Party Poopers (actually, from Thrillist): No, That Isn't Duct Tape on Your Plane's Wings An aircraft mechanic explains what the tape you sometimes see on plane wings really is. Why Party Poopers? Well, because they're taking one of my few precious joys out of life. See, I don't fly all that often. Once a year, maybe. (Okay, twice, if you count the round trip as two trips.) So I don't get to do this often, but when I see that tape on a plane, I usually wait until the plane starts to taxi away from the gate to loudly exclaim, "Hey, look, the wings are being held together by duct tape!" I also find the fake outlets at waiting areas incredibly hilarious, though I've never done that prank, myself. Those are little moments of happiness for me, but this article sucks the joy out of the first one. Well, at least, it would, if people actually read Thrillist. Maybe my faux-freakout over the tape will still have its desired effect. Anyway, after all that, I'm sure you're dying to know what it really is on the wings. As a passenger, noticing that your plane's wings are seemingly held together by the same silver duct tape that your dad uses to fix anything around the house is, by all means, a frightening sight. Or, you know, it would be, if duct tape weren't so damn useful. "That's not actually duct tape," says an aircraft mechanic in a TikTok video addressing the issue. "That's speed tape, [...] and speed tape is an aluminum-base tape that's designed specifically for aviation due to the large speeds and the large temperature differentials that aircraft are subjected to." I actually knew that. But knowing that it's called "speed tape" doesn't help for shit. Like, from the sound of it, it should make the airplane go faster, but if that were the case, the whole plane would be covered in it, right? If it has something to do with the "large speeds" (eyetwitch) as well as temperature differentials, why call it speed tape and not cold tape? 
Instead, sometimes, it's used as a temporary sealant to prevent moisture from entering specific components. Uh huh. Okay. Doesn't tell me why it's called speed tape. "Speed tape, also known as aluminum tape, is a material used for temporary, minor repairs to nonstructural aircraft components," an FAA spokesperson told Thrillist. And it's called that because...? Yes, I know I could ask some random AI the question and get some kind of answer, but that's not the point. The point is, why can't the article purporting to explain all about speed tape even bother to explain why it's called speed tape? You can relax now and enjoy your flight stress-free. HA! Like there aren't 498 other things about flying that cause stress. Oh, right: 499 if I'm around. |
Way the hell back in 2018, Quartz published the article / stealth book ad I'm linking today. Does it? Does it really remain at the center of dining controversy? Because I thought that in 2018, and even now, the "center of dining controversy" is how to handle mobile phones at the table. On June 25, 1633, when governor John Winthrop, a founding father of the Massachusetts Bay Colony, took out a fork, then known as a “split spoon,” at the dinner table, the utensil was dubbed “evil” by the clergy. While this article is US-centric, and makes no attempt to be otherwise, other sources show that the fork has been considered a tool of the devil since it was introduced to Europe. This is, naturally, just another in a long list of humans considering anything new and different to be necessarily evil, because we're kinda stupid like that. Forks were pretty much unheard of during Winthrop’s era. People would use their hands or wooden spoons to eat. The Museum of Fine Arts (MFA) in Boston says that only “a handful of well-to-do colonists,” adopted the use of the fork. I mean, technically, you're using your hands either way. When Americans finally started their love affair with the fork, their dining etiquette compared to their international peers became a source of controversy for centuries, whether it’s the way the fork is held, only eating with the fork, or using the “cut-and-switch.” Oh, no, different countries do things differently. The horror. During the time it took for Americans to widely start using the fork, dining cutlery was evolving in England. Knives changed to have rounded blade ends, since forks had “assumed the function of the pointed blade,” says Deetz. I'm betting there were other reasons for the switch, like, maybe, deweaponization? So if you've ever wondered why some cultures point fork tines up while others point them down, well, the article explains that. Sort of. Unsatisfactorily. Still not mentioned: why formal place settings are the way they are. 
Also not mentioned in the article (perhaps one of the books it promotes says something about it, but it's unlikely I'll ever find out) is the abomination known as the spork. |
It's time-travel time again. Today's random numbers brought me all the way back to July of 2008, with a short and ranty entry: "Those Naughty Brits" Apparently, there was a link to a (London) Times article, in the chick section, about "kinky sex." It should be surprising to no one that the link is dead and now just redirects to the Times main page, which I didn't bother looking at. "Why do many of us like kinky sex?" apparently opened the original article, based on what I said in that entry. These days, I have preconceived ideas about headline questions: First, if it's a yes/no question, the answer is probably "no." Second, if it's a "why" question, the answer is probably "money." I think I'm wrong about the second idea, but only this time. 2008 Me: Why is this in the "women" section? Men don't want to read about kinky sex? Please. I'm guessing men are less likely to consider it kinky, outrageous, or naughty. But what the fuck do I know (pun intended)? 2008 Me: In conclusion, the article seems to be designed to be provocative, but semantically null. I guess that was me, waking up to the practices of major information outlets. 2008 Me: What happened to investigative journalism? Hell, what happened to comprehensive news stories? Gods, 2008 Me was so young and naïve. 2008 Me: ...an excuse to link the blog of a friend of mine... Said blog no longer exists, and I now have no recollection of who the friend was. 2008 Me: Journalism may not be dead yet, but it's starting to wander and stink. Dead now. Mostly. 2008 Me: I blame bloggers. Clearly, that was an attempt at irony. The reality was, and is, way more complicated than one single reason, as these things usually are. I'm not getting into it here, and I'm probably wrong, anyway. But this look into the far-distant past has been enlightening, and maddening. Still, one constant that hasn't changed, and was an old constant even in 2008: sex sells. And, apparently, kinky sex sells more. |
Appropriately enough, the first entry after the completion of my five-year daily blogging streak is from Cracked: How the Tomato Became Torn Between the Lands of Fruits and Vegetables A confusing, red, plant-based chimera Right, because the most important characteristic of a tomato is which category we pigeonhole it into. But, okay, I'll play along. I don’t know what it is about the fruit-versus-vegetable designation of a tomato that I find so particularly annoying, but it twists in my brain like a knife. That sounds serious. Maybe, instead, take a break and think about Pluto for a while. As it sits today, the tomato is indeed, botanically a fruit. At the same time, it is legally a vegetable... Yes, and my mom was, to me, my mother, but to my dad, she was his wife. So? First, let’s stick to the science, which decidedly declares a tomato a fruit according to botanical guidelines. Well, botanically, it's a berry. And, according to botany, strawberries, raspberries, and blackberries are not berries. Why this would matter to anyone trying to fix dinner or dessert, though, is beyond my comprehension. Where the other side of the argument comes from is the culinary world, the place where most people are interacting with tomatoes on a daily basis. It’s also the dominant layman’s classification, probably due to the fact that it’s based in common fucking sense. Here's where I usually rant about how common sense is usually wrong and needs to be superseded by science. But the classification of a tomato isn't like studying what its nutritional characteristics are. Categories and classifications are imposed from the outside and are supposed to help us make sense of the universe, like what the definition of "planet" or "mammal" should be. Then something like a platypus comes along to remind us that the universe fundamentally doesn't make sense and we shouldn't expect it to. 
Point is, we could just as well say "any topping on a Big Mac is officially a vegetable," which might settle the tomato question once and for all, but move the discussion to whether cheese should be called a vegetable or not. And yet, no less of an authority than the Supreme Court has ruled differently. Unsurprisingly, it’s money-related, specifically to do with tariffs. In the late 1800s in America, the taxation on fruits and vegetables was starkly different. Fruits could be imported with impunity, while bringing in foreign veggies would demand a steep 10-percent tariff. An importer named John Nix saw opportunity in the science, and refused to pay tariffs on a shipment of tomatoes, since they were technically fruits. The case climbed all the way to the Supreme Court, where it was heard in 1893. It also should come as no shock that, in some cases, a thing can be categorized as one thing in one context, and another thing in other contexts. Like, astronomers consider any element that's not hydrogen or helium to be a "metal." That works for astronomy. It doesn't work for structural engineering. As I read it, the Supreme Court agrees with the people, issuing the legal equivalent of “sure, technically, but come on, dude.” Leaving aside for the moment that botanists and biologists are also (usually) people, all that means is that, in the US, tomatoes are vegetables by legal definition. I vaguely remember some nonsense a while back about whether ketchup, which doesn't have to be made from tomatoes but usually is, should also be considered a vegetable for the purpose of school lunch nutrition or something. Left unsettled, then, is still the question of whether a hot dog (with or without ketchup) is a sandwich, and I maintain that no, it's a taco. |
1827. No, I'm not referring to the year. 1827 is what you get when you multiply 365 by 5, and then add 2. Yes, today is not only Friday the 13th, but it's also the day I claim a five-year daily blogging streak, having shat an entry out every single day between December 14, 2019, and today: one thousand eight hundred twenty-seven entries. (There were two leap days in there, hence the "add 2.") Granted, they weren't all great entries. Some of them were probably not even very good. But I put thought and effort into each of them, and I really did do one every day (as defined by WDC time, midnight to midnight); we're not set up here to release entries at some scripted time, or to make up for lost days. But that's the limit. There will be no six-year blogging streak, at least not in this item. With fewer than 100 entries left in its capacity, the end looms like a kaiju over Tokyo. I thought about ending it today, but nah. Or maybe on the solstice, because that would be appropriate. Or on December 31, because the very first entry was on a January 1. No, I think I'll make the attempt to continue until entry #3000, and then... hell, I don't know. Take a break? Start a new one? Retire from writing? I haven't decided yet, and, knowing me, won't decide until the very last possible minute. Well, I promised something different today, and there it is: a great big brag. Tomorrow, I'll be back to my usual humble self. Hey... stop laughing. |
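The arithmetic is easy to double-check with a few lines of Python, using the dates given in the entry:

```python
from datetime import date

start = date(2019, 12, 14)  # first entry of the streak
end = date(2024, 12, 13)    # the Friday the 13th in question

days = (end - start).days + 1  # count both endpoints
print(days)                    # 1827 = 365 * 5 + 2 (leap days in 2020 and 2024)
print(end.weekday())           # 4, i.e. Friday (Monday is 0)
```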
Another day, another book ad. But an interesting book ad, this one from Big Think. I promise something different tomorrow. A bold challenge to the orthodox definition of life In “Life As No One Knows It,” Sara Imari Walker explains why the key distinction between life and other kinds of “things” is how life uses information. I'm not going to weigh in on whether she's right or wrong, or somewhere in between. That's above my pay grade (not that that's ever stopped me before). I do think it's an interesting approach that adds to the conversation of science, even if it's ultimately a categorization issue, like the planetary status of Pluto or the sea status of the Great Lakes. Sara Imari Walker is not messing around. From the first lines of the physicist’s new book, Life As No One Knows It, she calls out some big-name public intellectuals for missing the boat on the ancient, fundamental question, “What Is Life?” I'm not sure how ancient, or fundamental, that question really is. With regards to humans and other animals, our distant ancestors could pretty much figure out the difference between life and not-life. With plants, it may have been a bit trickier, as they tend to not move even when they're alive. But I think Jo Cavewoman would scoff at the question. Dog: life. Rock: not life. (Yes, I'm aware that belief in animism might counter what I just said, but I'm talking in generalities here.) It probably wasn't until we started looking through microscopes that we began to question the boundaries. Is a spermatozoon "life?" How about a virus? Since then, it's my understanding that people have proposed several different definitions for life, all necessarily based on conditions on Earth, and scientists and philosophers have been arguing ever since, as scientists and philosophers love to do. 
Subtitled The Physics of Life’s Emergence, one of the book’s major themes is a critique of the orthodox view in the physical sciences that life is an “epiphenomenon.” "Epiphenomenon" is another word with a kind of slippery definition. I don't like to quote dictionaries as sources, because they're descriptive and not prescriptive, but the definition I found was "a secondary effect or byproduct that arises from but does not causally influence a process." Which, well, thanks? That doesn't help. The Wikipedia article on it is similarly confusing, at least to me, with the added bonus of also coming from a source people don't like to cite. What's worse, in my view, is when people conflate "epiphenomenon" with "illusion:" This is the argument, often heard in mainstream popular science, that life is a kind of illusion. It’s nothing special and fully explainable by way of atoms and their motions. To address the latter assertion first: "nothing special" is a value judgement, and "fully explainable" is laughable hubris. As for the "illusion" thing, well, I've banged on in here on several occasions against the "time is an illusion" declaration. But that can be generalized to anyone airily calling anything an "illusion." To me, an illusion is something that, upon further study, goes away: a stage magician's trick, or those seemingly moving lines in a popular optical illusion picture. But no matter how much we study, for instance, time, it doesn't go away. I think a better description would be "emergent phenomenon," meaning that it's not fundamental, but rather a bulk property. Like temperature. One atom doesn't have a temperature; it only has a vibration or speed or... whatever. Get a bunch of atoms together, though, and the group has an average speed, which we read as temperature. Or, to use everyone's favorite example, the chair you're sitting in. "It's an illusion," some philosophers claim (generally after taking a few bong hits). "It's not real." 
Well, look, any philosophy that doesn't start with "the chair is real" is a failure, in my view. Your ass isn't sinking through it; therefore, it's real. Sure, it's made of smaller pieces. On the macro scale, it's got a seat, probably a back (sometimes in one continuous piece), maybe arms, legs and/or casters, maybe a cushion for said ass. This doesn't make the chair any less real; it just means there's a deeper level to consider. Similarly, the cushion, for example, is usually a fabric stretched over some stuffing. The fabric itself can be further broken down into individual fibers. The fibers, in turn, are made of molecules, some of which have a particular affinity for one another, giving the fiber some integrity. The molecules are made of atoms. The atoms contain electrons, protons, and neutrons. Those latter two, at least, can be further broken down until you're left with, basically, energy. And maybe there's something even more fundamental than that. None of that makes the chair any less real. It just shows that our understanding can go deeper than surface reality. But surface reality is still reality. And so it is with life. I know I'm alive, for now, and that's reality. I'm pretty sure my cats are, too, and the white deer I saw munching on leaves in my backyard yesterday. Not so sure about the leaves, it being December and all, but I am as certain as I can be of anything that they are a product of life. Whew. Okay. Point is, I'd like to see these macro-level phenomena labeled something other than "illusion." It's misleading. In the standard physics perspective on life, living systems are fully reducible to the atoms from which they are constructed. Yeah, well, physics gonna physic. Just as with your chair, things can be studied at different scales. Biology is usually the science concerned with life. But biology is basically chemistry, and chemistry is basically physics. This doesn't make biology an illusion, either. 
Still, they will argue, nothing fundamentally new is needed to explain life. If you had God’s computer you could, in principle, predict everything about life via those atoms and their laws. I'm gonna deliberately misquote James T. Kirk here: "What does God need with a computer?" Walker is not having any of this. For her, the key distinction between life and other kinds of “things” is the role of information. Well, that's amusing. Not because it's not true—like I said, I'm not weighing in on that—but because from everything I've read, physics is moving toward the view that everything is, at base, information. Yes, that might be what energy can be broken down into. Or maybe not. I don't know. But "information theory" is a big deal in physics. Whether there's something even more fundamental than information, I haven't heard. Life needs information. It senses it, stores it, copies it, transmits it, and processes it. This insight is, for Walker, the way to understand those strange aspects of life like its ability to set its own goals and be a “self-creating and self-maintaining” agent. Okay. Great. Let's see some science about it. As usual, there's more at the link, if you're interested. Might want to sit down for it, though. You know. On that chair which is definitely real and hopefully not alive. |
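A footnote on the temperature analogy from earlier in this entry: the bookkeeping really is that simple. In a toy model (made-up numbers, and a plain Gaussian spread of speeds rather than a proper Maxwell-Boltzmann distribution), temperature is just the average kinetic energy of the bunch, rescaled by Boltzmann's constant:

```python
import random

random.seed(42)        # reproducible toy run
k_B = 1.380649e-23     # Boltzmann constant, J/K
m = 6.6e-26            # mass of one particle (roughly a nitrogen molecule), kg

# One particle has only a speed; a gas of them has a temperature.
speeds = [random.gauss(500, 120) for _ in range(100_000)]  # m/s, made-up spread
mean_ke = sum(0.5 * m * v * v for v in speeds) / len(speeds)
T = (2 / 3) * mean_ke / k_B   # from <KE> = (3/2) k_B T for an ideal gas
print(round(T))               # a few hundred kelvin: a bulk property that no
                              # single particle possesses
```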
A Slate article from half a year ago takes on an issue I've been wondering about for a long time. On Both Sides of My Brain For years now, I’ve been puzzled—and annoyed—by the way people seem to insist on labeling what type of person one can be. I’ve finally solved my problem. Ah, I recognize that personality type! The author must be a non-labeler! June 25, 2024 5:40 AM Yeah, I don't usually copy timestamps in here, but this one gave me pause. Is the writer an extreme night owl, or an extreme early bird? (Or was the article's publication time scripted? Different time zone? Who knows?) Recently, after I did a silent retreat, I was trapped on a five-hour car journey (long story) with someone who was obsessed with labeling everything. People have “math brains” or “creative brains,” there are “boy chores” and “girl chores,” and in any relationship you will have “the person who reads the map” and “the one who is social.” Well, there's your problem: you did a silent retreat, and then got stuck as a captive audience while someone spewed out all the words they couldn't during the retreat. This labeling tic is all over the internet too; indeed, much of the content I see online seems premised upon the idea that everything can be better understood if we simply group it as a type. Yes, maybe we should call that kind of person a Tag Hag. The relief in the comments is palpable: Oh, I’m that label! Everything makes sense now. That's kind of what's been bugging me about labeling, to be serious for a moment. I am what I am (to quote either God or Popeye), so how does putting a label on it help? Like, we all know I'm into science, pedantry, gaming, and science fiction; how does it help me or anyone else if I get put in the "nerd" box? Usually when I find myself dealing with a “labeler” in real life, it’s because this idea of there being two types of brains has come up. 
There are two kinds of people in the world: Those who think there are two types of brains, and those of us who know that's been thoroughly and completely debunked. It’s not just attachment styles. All over those platforms, you see vlogs and infographics declaring that people can be understood best as bundles of fixed, unchanging symptoms, related to corresponding bundles of trauma, grouped neatly under buzzy labels. This is, of course, nonsense. People can be best understood by their astrological natal chart. Yes, I'm back to making jokes. But speaking of astrology: Then there is the enormous popularity of astrology meme accounts. I find it hard to take exception with this iteration of labeling, though, because my star sign is Aquarius, so the @costarastrology account (with its 2 million followers) always presents me with flattering personality reads that position me as a cool, aloof, intellectual sort. That's not your star sign. It's your sun sign. Your personality is also influenced by what sign the moon was in at your birth, and which one was intersected by the eastern horizon (which is what I said above). As I'm an Aquarius sun and moon (rising sign unknown), I know that astrology is complete horseshit (but sometimes fun horseshit). On that five-hour car journey with the labeler, though, I could not simply go outside. They were always rushing to finish my sentences too, with an ending they expected might fit with the kind of thing I had been saying. There was a manic, frantic energy to every exchange. As if something terrible might happen if I were permitted to finish a sentence by myself. Note to self: if I ever do a silent retreat (which I won't), arrange my own damn transportation. Alone. And speaking about it, I should admit, to my psychoanalyst a few days later helped me clarify my thoughts further. (That’s right, my psychoanalyst. This essay was not eccentric and unhinged enough already.) Right, because everyone who sees a shrink is eccentric and unhinged? 
Come on, lady, if you're going to rage against labels, at least stop enhancing the stigma surrounding mental health issues. But in the wake of the silent retreat, everything seemed bathed in a rosy glow of calmness and goodwill. My thoughts were infused with peace and love and so forth. So, after my frustration had exhausted itself (and, mind you, that did still take a while), I had a sort of epiphany. After all, wasn’t there some of the labeler in me? Even by calling this person a labeler, I was assigning them a type. I can almost forgive the dig on psychotherapy after seeing this level of self-awareness. Almost. Not quite. There is, of course, more to the article, including another epiphany about some people needing to maintain control over social situations, or something. I don't know; I'm not sure if that revelation makes things better or worse. Just like with labels. So, I leave the article, my own curiosity unresolved, more confused than ever. Maybe I should see a shrink. |
From aeon, a tale as old as time. Well, as old as civilization, anyway: The fermented crescent Ancient Mesopotamians had a profound love of beer: a beverage they found celebratory, intoxicating and strangely erotic So, they were human. I should note that, like many free articles, this is a stealth ad for a book. But, for once, it's a book I would buy. (I'm not going to add to the advertising; the details are there at the link.) Hamoukar, Syria. 20 May 2010. We are midway through what will be the last excavation season at the site for some time. The following spring will see the outbreak of a long and brutal civil war. I don't talk about them much in here, but I do keep up with current events. Still, I find it hard to follow all the ins and outs of the disturbances in Syria. Nevertheless, yes, I heard about Assad, and I remember my inner cynic (which, frankly, is just Me) going, "Oh great, what fresh hell will Assad be replaced with?" Today, though, the archaeologist Salam Al Kuntar, balanced on tiptoe at the bottom of a tomb, has just uncovered a little green stone. It is a cylinder seal, an ancient administrative device. We roll the tiny seal in clay – just as its former owner once would have – to reveal an impression of the intricate scene carved into its surface. It may not be the finest seal ever seen, but the tableau is eye-catching: a stick-figure man and woman are having sex, the man standing behind the woman, who bends over to drink from a jug on the ground. And is that a straw emerging from the mouth of the jug? I, of course, instantly knew the implication: chick was drinking beer, maybe because her partner was ugly. I once saw a shirt that read: "BEER: Helping ugly people have sex since 1862!" And I snorted and said, "Yeah, right. More like 6000 BCE." It may surprise you that our ancestors had sex, until you stop and think about how they became our ancestors. 
Indeed, the drinkers of ancient Mesopotamia often drank via straw – though not always, shall we say, in this particular position. While drinking beer through a straw today is as much a social faux pas as serving warm white wine, it was kind of necessary then because, apparently, the beer had floaty things in it and the straw kept them from getting swallowed. It would filter out the biggest and grossest solids. Yes, the Sumerians (probably) invented beer. No, it wasn't the tall, frosty Pilsener of today. For one thing, no refrigeration. For another, no hops. But it was still fermented grain, hence: beer. Banquets were a key part of the social calendar in Mesopotamia, and beer was an essential element. But people also drank beer at home, on the job, in the tavern, in the temple, pretty much everywhere. Before we knew shit about microbes, beer was often a better choice than water because the process requires boiling water, which we know now destroys bad microbes. Hell, a big part of beer production today involves letting the proto-beer (called wort) cool enough for the yeast (good microbes) to be able to survive and work their magic. They wouldn't have known exactly why beer was good, only that it was. Perhaps you have encountered the notion that beer was ‘invented’ in Mesopotamia. That is a hypothesis at best. Yeah, well, it's still better supported than other hypotheses. And, as the global search for earlier and earlier traces of alcoholic beverages gains steam, there is at least one key takeaway: beer was invented (or discovered) many times in many different places. I'm okay with that clarification, and will note that yes, it is one of those things where you can have a legitimate philosophical argument over "invention" vs. "discovery." The famous ‘land between the rivers’ was also the land of Ninkasi, goddess of beer. When Ninkasi poured out the finished beer, ready to drink – a Sumerian song tells us – it was like ‘the onrush of the Tigris and the Euphrates’. 
Before you rush out and claim a name, there's already Ninkasi Brewing. It's located in Oregon, so I haven't tried many of their offerings, but I seriously doubt they used the ancient Sumerian recipe. However, as the article later attests, other brewers have attempted Sumerian beer. One of them was named Gilgamash, which utterly delights me (hence the entry title today, which, now you know, wasn't an original Waltz pun). But I'm getting ahead of myself, here. The article goes on to discuss several aspects of ancient beer culture, including a paraphrased version of when Inanna got Enki so drunk that he gave her all his prized possessions. I'm sure I've covered that in here before. Beer was brewed at home, in neighbourhood taverns, and in breweries managed by palace and temple authorities. In some cases, we know the names of the brewers – for example, homebrewers Tarām-Kūbi and Lamassī (both women), tavern-keepers Magurre and Ishunnatu (both women), and palace brewers Qišti-Marduk and Ḫuzālu (both men). While your image of a brewer today probably involves a very large, very bearded man, historically, beer has been either a female project or ungendered. The most detailed account of the brewing process appears in the ‘Hymn to Ninkasi’, goddess of beer. But this lyric portrait of Ninkasi at work in the brewery is hardly a set of instructions for brewing beer. I will defer to this author's greater experience in the historical arena, but everything else I've read does call it a recipe of sorts. Not standardized like today's recipes, with their precise measurements and somewhat detailed instructions following about 50 pages of backstory, but more like a mnemonic, which the brewers were expected to fill in with passed-down knowledge and maybe even proto-science. I'm not sure the distinction is overly important to us. Hell, we have problems re-creating other recipes from a century or more ago, precisely because a lot of the handed-down knowledge is lost. 
What's more important is that beer was important enough to write hymns to the gods about. The author has a lot more to say about this, and I can, again, provisionally defer to his greater knowledge. In conclusion, yes, I would read that book for sure. I might wait to buy it until after the holiday season, though, just in case someone wants to give me one as a present. No, that's definitely not a hint. Or is it? |
This one's been hanging out in my queue for a long time, but it's not exactly time-sensitive. As they say, time and tide wait for no one. Lord Kelvin and His Analog Computer This tide-predicting machine was one of many advances he made to maritime tech The source is a publication of the IEEE, the electrical engineering professional organization. But fear not; the article isn't very technical. Civilizations recognized a relationship between the tides and the moon early on, but it wasn’t until 1687 that Isaac Newton explained how the gravitational forces of the sun and the moon caused them. Nine decades later, the French astronomer and mathematician Pierre-Simon Laplace suggested that the tides could be represented as harmonic oscillations. And a century after that, [William] Thomson used that concept to design the first machine for predicting them. Thomson was Lord Kelvin and, yes, he's the one the temperature scale is named after. One wonders what it would have been called had Thomson not become a noble, because Thomson is a boring name for a unit of measure. Thomson’s tide-predicting machine calculated the tide pattern for a given location based on 10 cyclic constituents associated with the periodic motions of the Earth, sun, and moon. (There are actually hundreds of periodic motions associated with these objects, but modern tidal analysis uses only the 37 of them that have the most significant effects.) Translation: it's complicated. The most notable one is the lunar semidiurnal, observable in areas that have two high tides and two low tides each day, due to the effects of the moon. Which is what most of us think of when we think of tides, but it's not as simple as "it's high tide when the moon is directly overhead." There's a lag, and there are local conditions that affect the timing of tides (such as sea floor depth). 
On Thomson’s tide-predicting machine, each of 10 components was associated with a specific tidal constituent and had its own gearing to set the amplitude. Basically, it's a very complicated clock. Sure, the article calls it an analog computer, and I'm not going to argue with professionals (especially ones not in my field) but I think that's a categorization issue. At some point of increasing complexity, a clock stops being a clock and starts being an analog computer. But in my view, if it involves the timing of natural phenomena like the movement of solar system bodies, it's a clock. The device marked each hour with a small horizontal mark, making a deeper notch each day at noon. Turning the wheel rapidly allowed the user to run a year’s worth of tide readings in about 4 hours. But this bit, an output device, is probably what pushes it into the computer category. It also was designed for prediction, not for reading what the tide is right now. As with many inventions, the tide predictor was simultaneously and independently developed elsewhere and continued to be improved by others, as did the science of tide prediction. One thing I'm still unclear on when it comes to tidal prediction: the wind plays a role. And wind is way, way harder to predict than the future relative position of the sun and moon. I'd ask my dad, the sailor, but I seem to have misplaced my Ouija board. In any case, mostly, I just liked the article and the history lesson, and I wanted to muse about the differences between clocks and analog computers. |
It's time for another foray into the jungles of the past. This one comes from way back in 2018—Christmas Eve, to be exact: "Millennials Killed Millennials" The entry featured an article from The Atlantic; this was before I started using xlink tags for articles, so it's a raw URL link. That source has changed its policy since then, and every time I open it now, I hit a paywall. I'm not above paying to read or watch something. I have a few subscriptions. But even I can't subscribe to (or keep up with) everything and, more importantly, I try not to link paywalled articles here (as I said, that entry from 2018 was from before they installed a paywall). Point is, unless you subscribe to The Atlantic or have found some way around the paywall that even I haven't been able to figure out, you're only going to get the first few paragraphs of the original story. Oh, sure, it talks about "subscribe" and "free trial," but I'm deeply suspicious of any "free trial." You'd think they'd lift that restriction for older articles, but apparently not. Not ragging on them, by the way. They should be able to make money if their content is useful. I used to link a bunch of their stuff, and I'm not judging anyone who subscribes. I'm just explaining why 1) I don't feature Atlantic articles anymore and 2) today's entry is more about my earlier entry than it is about the original article. Having said that, let's look at what I was thinking six years ago this month. The entire concept of demographic "generations" annoys the shit out of me. And it only gets worse as time goes on and I read more crap like this. I've since moderated my feelings about that topic. As with most other things, it's not binary; I'm not obliged to either love it or hate it, with nothing in between. My stance on the practice is complicated, but I'll try to explain my current thoughts: 1. Yes, the "generation" labels are pretty arbitrary. So are Gregorian calendar months, but it's an established system that has its uses. 
2. In this system, I'm early Generation X, which hardly anyone ever talks about, opting instead to rag on "boomers" or "millennials" or "Gen-Z." Or praise them, depending on the author's beliefs and age. 2a. Gen-X is supposed to be, among other things, the slacker generation. Am I a slacker because I was born under the Slacker sign, and therefore it's expected of me; or is it just my basic nature? 3. It's one thing to draw conclusions about a subgroup and market to that subgroup. It's quite another to point at a single individual from that subgroup and simply assume that they have all the traits associated with that subgroup. I don't really have a problem with the former, at least not anymore. There's more, but I'm not writing a dissertation, here; what I'm really trying to say is that my own views have shifted over the years. On top of which, you'd have to convince me that our 1983-born X-er has more in common with someone born in 1966 than with someone born in 1986. That is something that no one has been able to convince me of, yet. Some people have tried to get around it by slicing generations more finely; you get, like, the "Oregon Trail generation," which of course refers to the classic "you have died of dysentery" computer game and not actual westward-ho pioneers. But all that does is support my point: slice finely enough, and you're back to taking each person individually, rendering the whole marketing concept more useless. Also, some things suck and other things get better. This is due not to a single "generation" or cadre of ages, but every single person doing his or her own thing. I'm not sure exactly what I was thinking when I wrote that, but I recognize it's probably unclear. Much of what sets "generations" apart, in modern terms, has to do with technological advancements and societal changes, all of which require more than one person. 
Like, someone had to invent Crocs, sure, but also, someone had to be convinced that they're not ugly and to start wearing them. And then other people had to think that the original wearer was cool enough to start a fashion trend. I'm pretty sure the whole "generations" thing is just another way to divide us, like politics or countries. It distracts us from the real issues, which we can either work to solve, or ignore, depending on one's individual preference. Again, my hardline stance on that has evolved, though I still have a measure of distrust. These days, for instance, the popular usage of "boomer" and "millennial" doesn't comport with their marketing definitions; a "boomer" is simply someone older than you whose attitude you don't like; and a "millennial" is someone younger than you whose attitude you don't like. This is one reason Gen-X gets ignored (but don't worry; we're used to it). But it is marketing. Not science. So you know what I want to see Millennials finally kill? Generations. This harks back to the original article, which was apparently about the Millennial generation killing off cultural institutions beloved by Boomers. Never mind that these same Boomers killed off cultural institutions beloved by their parents' generation. Anyway. Changed perspective or not, that entry's still there, even if accessing its linked article remains a pain in the ass. But, to borrow the rallying cry of my generation: "Whatever." |
Speaking of time, here's a Guardian article about people who had more of it than usual. Never take health tips from world’s oldest people, say scientists Scientists still trying to work out why some people live beyond 100, but agree it is best to avoid taking advice from centenarians themselves No, we should definitely take health tips from people who die young, instead. The death of the world’s oldest person, Maria Branyas Morera, at the age of 117 might cause many to ponder the secrets of an exceptionally long life, but scientists say it could be best to avoid taking advice on longevity from centenarians themselves. Far as I can tell, the secrets to an exceptionally long life include such gems as "stay alive" and "don't die." According to the Guinness World Records website, Branyas believed her longevity stemmed from “order, tranquility, good connection with family and friends, contact with nature, emotional stability, no worries, no regrets, lots of positivity and staying away from toxic people”. Also, unicorns and fairies. I mean, those are probably more real than her litany. However, Richard Faragher, a professor of biogerontology at the University of Brighton, said that in reality scientists were still trying to work out why some people lived beyond the age of 100. Because they didn't die. Or, in some cases, because they assumed the identity of their deceased parent so they could go on collecting the... whatever benefits. Faragher said there were two main theories and they were not mutually exclusive. The first, he said, was that some individuals were essentially just lucky. At some point, though, lucky stops being "you didn't die" and starts being "you died." The second theory, he said, was that centenarians had specific genetic features that equipped them to live a longer life So, a different kind of luck. But still luck. 
Faragher said both theories, however, resulted in the same warning: “Never, ever take health and lifestyle tips from a centenarian.” Certainly, if I'm unlucky enough to live that long, I'd troll the hell out of anyone asking me about health tips. "See, now, the key is to kick a puppy every day. Doesn't have to be your puppy. Doesn't even have to be the same puppy. But it's gotta be a puppy, not a dog. Or a kitten or a kid. Puppy." He added: “What you see with most centenarians most of the time – and these are generalisations – is that they don’t take much exercise. Quite often, their diets are rather unhealthy,” noting that some centenarians were also smokers. What would amuse the hell out of me would be if no exercise, diets considered unhealthy, and smoking really were the keys to long life. “The fact that [centenarians] do many of these unhealthy things and still just coast through [life] says they’re either lucky or typically very well endowed [genetically],” he said. Again, both of these things are luck. Faragher added that many of the mooted possibilities for why centenarians live longer could actually be examples of reverse causation. For example, the idea that having a positive mental outlook can help you live for a very long time might, at least in part, be rooted in people being more sanguine because they have better health. Glad they acknowledged this. I mean, they could do studies to gain insight into it, but apparently it's more important to promote "positive mental outlooks," especially in a world falling apart all around us. “From about 100 years ago, what we started seeing was huge advances in life expectancy driven by improvements in reducing the likelihood that children die,” said David Sinclair, the chief executive of the International Longevity Centre, noting that was largely down to the introduction of vaccinations and clean water. Well, that lasted about a century. 
“What we’ve had over the last 20 years, and we’re going to see over the next 20 years, is a similar focus in terms of old age,” Sinclair said, adding that included improvements in vaccines for flu and shingles, statins, and other medications that would help increase life expectancy among older people. Sure, because, obviously, life expectancy is a more important metric than life quality. But he said governments also needed to take action to help individuals to make healthier choices – choices that would ultimately help them live longer – adding that many people lived in environments where it was difficult to exercise, eat well or avoid pollution. Ah yes. "Want to live longer? It's your fault if you don't, even though you're economically stranded in a polluted area with little access to fresh food, thanks to our policies. So instead of fixing economic disparity, despair and environmental degradation, we'll make it illegal for you to eat cheeseburgers. There, we fixed it!" As Sinclair said, while news stories about centenarians tended to be upbeat, it often emerged that such individuals faced challenges, such as living alone for many years. Challenges? Hell, that's probably the real reason they lived longer: not having to deal with other people's bullshit all the damn time. |
The Big Think article that came up for me today involves math. Fair warning so you don't end up defenestrating your device. Time: Yes, it’s a dimension, but no, it’s not like space The fabric of spacetime is four-dimensional, with three for space and only one for time. But wow, time sure is different from space! Time is different from space? I never would have guessed, what with them having different names and all. When did you first realize that the shortest distance connecting any two points in space is going to be a straight line? I'm not sure that's a fair question. It's something I'd consider intuitive. What's hard to grasp, sometimes, are the cases where the shortest distance between two points isn't a straight line, because that runs counter to our everyday experience. In fact, that realization, as far as human knowledge is concerned, comes from a place we might not realize: the Pythagorean theorem. "In fact," I think they've got this backwards. The Pythagorean Theorem may quantify the "shortest distance" intuition, but I'm pretty sure humans knew about the straight-line thing before they had numbers or geometry. (Also, the idea predated Pythagoras by hundreds or thousands of years; the Greeks didn't invent everything.) Taking all three of these dimensions into account — so long as we assume that space is still flat and universal — how would we then figure out what the distance is between any two points in space? Perhaps surprisingly, we’d use the exact same method as we used in two dimensions, except with one extra dimension thrown in. I feel like the only "surprising" thing here is that the math is basically the same. Thinking about distances like we just did provides a great description of what we’ll wind up with if we consider flat, uncurved space on its own. What will happen when we fold time, as a dimension, into the equation as well? 
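For the record, the "exact same method with one extra dimension" really is just the Pythagorean theorem with a third squared term thrown under the root; a minimal sketch:

```python
import math

def distance_3d(p, q):
    """Straight-line distance between two points in flat 3D space:
    the Pythagorean theorem, with one squared term per dimension."""
    dx, dy, dz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
    return math.sqrt(dx ** 2 + dy ** 2 + dz ** 2)

# The familiar 3-4-5 right triangle, lying flat in the z = 0 plane
d2 = distance_3d((0, 0, 0), (3, 4, 0))   # 5.0
# Add a dimension and nothing about the method changes
d3 = distance_3d((0, 0, 0), (1, 2, 2))   # sqrt(1 + 4 + 4) = 3.0
```
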
You might think, “Well, if time is just a dimension, too, then the distance between any two points in spacetime will work the same way.” At which point, unsurprisingly, the math isn't basically the same. There are two fundamental ways that time, as a dimension, is different from your typical spatial dimension. The first way is a small but straightforward difference: you can’t put space (which is a measurement of distance, with units like feet or meters) and time (which is a measurement of, well, time, with units like seconds or years) on the same footing as each other right from the start. Feet? Footing? Get it? Haha. Fortunately, one of the great revelations of Einstein’s theory of relativity was that there is an important, fundamental connection between distance and time: the speed of light. Yes, and the invariance of that speed is still very, very hard to wrap your head around, because it, unlike the "straight line" thing, runs counter to everyday experience. However, there’s also a second way that time is fundamentally different from space, as a dimension, and this second difference requires an enormous leap to understand. In fact, it’s a difference that eluded many of the greatest minds of the late 19th and early 20th centuries. The key idea is that all observers, objects, quanta, and other entities that populate our Universe all actually move through the fabric of the Universe — through both space and time — simultaneously. Turns out that everything in the universe is moving at a constant speed... through spacetime. Once that was pointed out to me, a whole lot of other stuff started to make more sense. It turns out that the faster (and the greater the amount) you move through space, the slower (and the lesser the amount) you move through time. Like that, for instance. There’s an even deeper insight to be gleaned from these thoughts, which initially eluded even Einstein himself. 
If you treat time as a dimension, multiply it by the speed of light, and — here’s the big leap — treat it as though it were an imaginary mathematical quantity, rather than a real one, then we can indeed define a “spacetime interval” in much the same fashion that we defined a distance interval earlier... Great. Wonderful. Now we'll get "time is imaginary" on top of "time is an illusion" nonsense. I'd forestall this by pointing out that imaginary numbers aren't actually imaginary (or at least they're no more abstract than the "real" numbers), but that's not going to stop the airy pseudophilosophy. There is, of course, quite a bit more at the link. I'm not sure if it's really that useful; I feel like there's either too much or not enough math to keep people interested enough to follow the arguments. But if all we can get out of it is "spacetime is weird, and time is different from space," maybe that's good enough. |
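A postscript on that spacetime entry: the "imaginary time" trick is easy to check numerically. Multiply time by the speed of light, square it with the opposite sign from the space terms (equivalently, square it as the imaginary coordinate i·c·t), and the resulting interval comes out the same for every observer, which ordinary distance emphatically does not. A rough sketch, with arbitrary made-up numbers:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def interval_squared(t, x):
    """The spacetime interval (c*t)^2 - x^2, one space dimension.
    Treating time as the imaginary coordinate i*c*t and summing
    squares Pythagorean-style produces exactly this sign flip."""
    return (C * t) ** 2 - x ** 2

def boost(t, x, v):
    """Lorentz transform of an event into a frame moving at v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C ** 2), gamma * (x - v * t)

# An arbitrary event: 1 second after, and 100,000 km away from, the origin
t, x = 1.0, 1.0e8
t2, x2 = boost(t, x, 0.6 * C)  # the same event seen by someone doing 0.6c

# Times and distances both change between the two observers...
# ...but the interval does not (up to floating-point rounding)
invariant = math.isclose(interval_squared(t, x),
                         interval_squared(t2, x2), rel_tol=1e-9)

# The "imaginary time" version: square i*c*t like any other
# coordinate and you get the same interval, sign-flipped
euclid_style = x ** 2 + (1j * C * t) ** 2
```

That invariance is the payoff: observers disagree about distances and times separately, but they all agree on the interval. |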
Yeah... I'm just going to leave this here. Really, isn't that what most of us want? Quantum-enhanced metrology techniques are emerging methods that enable the collection of precise measurements utilizing non-classical states. Non-classical... so, rock or hip-hop? To realize a significant metrological gain above classical metrology techniques using quantum-mechanical principles, Xu and his colleagues set out to devise an approach that would enable the generation of Fock states with up to 100 photons. Okay, sure, if that's your thing. No, I don't really understand the article, either. Nor did I look up what a Fock state is; I will eventually, but it might get in the way of my amusement right now. Point is, I only saved this one at the behest of my inner 12-year-old. |
Getting hit in the head is rarely a good thing, but on the occasions when it is, Cracked has us covered. 5 Unexpected Twists After Accidents Scrambled People’s Brains The bad news is a huge hospital bill. The good news is you have superpowers now And hey, if you're in a civilized country, you might even get to skip the "huge hospital bill" part. If you get hit on the head, you might die... Occasionally, however, your brain may change in a way no one could predict. Yeah, I wouldn't recommend getting hit on the head, bitten by a radioactive spider, or falling into a vat of industrial chemicals as reliable means of obtaining superpowers. 5. Turning Into a Math Artist History doesn’t record exactly what karaoke songs were performed in Tacoma on September 13, 2002. They must have been pretty bad because two men went up to Jason Padgett outside one karaoke bar and kicked his head in. The only song I knew in 2002 that would inspire that level of rage was by Celine Dion. Anyway, he got a concussion, but then... Padgett was diagnosed with a variant of synesthesia, where rather than perceiving colors or sounds when confronted with unrelated sensory input, he sees math. While some people might consider this ability a curse, I call it a superpower. 4. Your Mental Diseases Cured He told his mother he’d rather die than go on, and she replied (according to George’s account), “If your life is so wretched, just go and shoot yourself.” So, he did. Mom of the Century award, right there. As for whether anyone should consider trying something similar as a form of self-medication, doctors said, “No, of course not. What are you, nuts?” I just wonder if this was the inspiration for the end of Fight Club. 3. Gourmand Syndrome Doctors associate it with a specific type of damage to the brain’s right hemisphere. It’s not an eating disorder. Those who have it do not overeat (or undereat). They just become very interested in high-quality food. 
Hey, I wonder if that's what happened to me, only with beer. 2. Becoming a Chinese Caricature One of the sillier possible effects of a coma is known as foreign accent syndrome. Silly, maybe. Not really a superpower. 1. Absolutely Nothing A Frenchman came to the hospital in 2007 with a seemingly insignificant complaint: His legs felt weak. Doctors gave him some scans and discovered something slightly more serious: He appeared to be missing almost his entire brain. We have an epidemic of that over here. As for how he was able to engage in a profession despite lacking a brain, well, it turned out that he worked for the government... Saw that coming. Anyway, again, I don't recommend slamming your head into a wall to see if it gives you superpowers. Unless you sing Celine Dion at karaoke. |
I'm not entirely sure why I saved this particular Live Science article, and it's only been like three weeks. Yeah... it's not really a paradox. Flanked by fjords and inlets, Alaska is the state with the most coastline in the United States. Easy to accomplish when you're a giant peninsula with many craggy islands offshore. But what is the length of its oceanic coast? It depends on whom you ask. According to the Congressional Research Service, the number is 6,640 miles (10,690 kilometers). But if you consult the National Oceanic and Atmospheric Administration (NOAA), the coastal edges of the state total 33,904 miles (54,563 km). Yep, that's a big difference, all right. Not to brag, but I knew the answer. Still, I want to say that, obviously, the former number is incorrect, because Congress always lies. The coastline paradox occurs because coasts are not straight lines, and this makes them difficult, or impossible, to measure definitively. From an aircraft, you can see that the coast has many features, including bays, inlets, rocks and islands. And the closer you look, the more nooks and crannies you'll find. Oh, now I remember why I saved it. It's related to fractals like the Julia set or Mandelbrot set, which involve complex numbers, and, well... you know. As a result, the length of a coastline depends on the size of the ruler you use. This isn't just a coastline issue. Lots of survey boundaries follow the thread of a river (or, in the case of VA/MD, the low-tide shoreline of the Potomac), which has similar characteristics. But if you use a smaller ruler, you'll capture more complexity, resulting in a longer measurement. Hence, a paradox. Okay, I suppose, for some definitions of paradox. Regardless, that's what it's called, so I'll run with it. According to work published in 1961, English mathematician Lewis Fry Richardson noted how different countries had different lengths for the same shared border because of differences in scales of measurement. 
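Richardson's observation is easy to play with using the Koch curve, the textbook fractal "coastline": every refinement replaces each segment with four segments a third as long, which is the same as re-measuring with a ruler one third the size. (A toy example, obviously; real coastlines aren't exact fractals.)

```python
import math

def koch_length(depth, base=1.0):
    """Measured length of a Koch curve after `depth` refinements.
    Each refinement swaps every segment for four segments one
    third as long, so the total length grows by a factor of 4/3."""
    return base * (4.0 / 3.0) ** depth

# Shrink the ruler (by thirds) and watch the "coastline" grow
for depth in range(6):
    ruler = (1.0 / 3.0) ** depth
    print(f"ruler = {ruler:.5f}, measured length = {koch_length(depth):.4f}")

# The growth rate corresponds to a fractional dimension,
# log 4 / log 3, which is where fractals get their name
dimension = math.log(4) / math.log(3)   # about 1.26
```

Mathematically the length diverges as the ruler shrinks; with a physical minimum ruler, the total is enormous but finite.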
In 1967, mathematician Benoit Mandelbrot expanded on Richardson's work, writing a classic Science paper on the length of Britain's coastline. This later led him to discover and conceptualize the shape of fractals, a curve that has increased complexity the more you zoom in. This is also related to why no one can agree whether China or the US is the third-largest country by area: it depends how you measure some of the boundaries, including the coasts. Also, vertical differences get thrown in; this is the same fractal problem, only in two dimensions (surface), not one (boundary line). The concept of "dimensions" also gets modified when you're dealing with fractals; you can get fractional dimensions. Which, it should come as no surprise, gave fractals their name. I object, however, to the idea of "increased complexity the more you zoom in." I'd argue that you get the same complexity, just at different scales. I guess the sentence can be read like zooming in reveals greater complexity. The article also features some nice Mandelbrot set zoom animations, which I always find fascinating. This can hold true for coastlines. You could technically measure a coastline down to the grain of sand or atomic level or smaller, which would mean a coastline's length could be close to infinity, Sammler said. Another nitpick: no such thing as "close to infinity." In the real world, as opposed to a purely mathematical construct, there's a minimum length (it's very, very small). That minimum length implies an upper bound to how long a fractal boundary can be. It can be very, very long... but that's still not infinity. Coastlines are also shifting entities. Tides, coastal erosion and sea level rise all contribute to the fluctuating state of coastlines. So maps from the 1900s, or even satellite imagery from a few years ago, may not resemble what coastlines really are today. And if you want to get really technical, it changes from moment to moment, as portions are eroded and others built up. 
Not to mention general changes in sea level. As I've noted before, you can't step into the same river once. So how much coastline does Alaska, the United States, or our entire planet, have? We may never know the accurate number. It's a paradox, and like many things in nature, escapes our ability to define it. Which shouldn't mean we throw up our hands and give up. I like to think of it as a metaphor for life itself: always approaching an answer, never quite getting there. But learning more and more along the way. |
Today, we have an article from Fast Company about something of great worldwide import. How Comic Sans became the Crocs of fonts After 30 years of abuse, Comic Sans is ready for its redemption. Objection! Comic Sans never deserved the opprobrium heaped upon it by self-proclaimed font snobs, whereas Crocs deserve every criticism and then more on top of that. Comic Sans has turned 30, and it’s done being your punch line. I have long said that it should be the Official Sarcasm Font of the Internet. For three whole decades, Comic Sans cowered at your reproaches and winced at your jokes. That's obviously poetic license, but I have occasionally wondered how the anticomicsans vitriol might have affected the poor, innocent font creator. It barely flinched when Google’s practical joke made sure that searching for “Helvetica” would render all results in Comic Sans. Okay, I hadn't heard of that, but that's legitimately hilarious. But Comic Sans has just hit the big 3-0—and it’s ready for its second act. Great, make the rest of us feel even older than we are. Don’t take it from us. Take it from various studies that have been done on the subject of “turning 30.” And from the three experts who contributed to this story and said that turning 30 marks a period of introspection and change. And this is where it goes from whimsical poetic license to stretching a metaphor beyond its elastic limit. But, whatever, I'm entertained. For starters, Comic Sans wants you to know it wasn’t ever meant to be taken seriously. Vincent Connare, who was then a typographic engineer working at Microsoft, created the typeface in 1994. Well, that answers part of my musing above, musing that never reached the level of "why don't I just google it?" Most of us were blissfully offline back then, so Microsoft had devised a program called Microsoft Bob to teach people how to use computers. I'm pretty sure 1994 was the year I first obtained an internet provider. 
But I'd been using computers for at least 15 years before that, both for work and recreation. Well, "recreation" included learning how to code, which, let me tell you, was a lot harder to do before the internet. We had to buy books. It also included playing early video games, which is why I'm not good at coding to this day. Comic Sans was inspired by the comic books Connare had lying around in his office. Hence the name. I always figured it was from comic books, not funny-ha-ha comics. Comic Sans appeared on restaurant menus, funeral announcements, official government letters, bumper stickers, business signs—so many places, in fact, that there’s a popular subreddit on the topic. Even the Vatican used it in 2013 to commemorate Pope Benedict XVI. Like I said, I don't hate the font, but if I saw it everywhere, I'd learn to. Like with Crocs. The boom continued well into the early aughts... Noughties, dammit! ...at which point Microsoft released a licensed version of Comic Sans in 2010. And the hate might also have spilled over from a generalized dislike of Microsoft, who have definitely made some... questionable... design decisions. Like Clippy. Over the past few years, the font has become a favorite among people with dyslexia because “the letters can appear less crowded” and “it uses few repeated shapes, creating distinct letters.” I'd heard that the font was originally designed to be dyslexic-friendly. Perhaps that was fake news. Designed that way or not, it seems to be so. Perhaps some people got less vocal about their distaste for CS for fear of being labeled ableist. Like the Eiffel Tower (which drew a slew of protests while it was still under construction), or that Mariah Carey Christmas album, the typeface has become nothing short of iconic. (Though we can agree to disagree on the Mariah front.) Oh, yeah, damn right we disagree. 
I have no hate for Mariah Carey and acknowledge that she's talented, but that album is about 70% of the reason I don't go out in public in December. Today, Comic Sans is the Crocs of fonts. First we hated it, then we loved to hate it, now we kinda, maybe love it because we’re experiencing it through a different lens. "We" my ass. My issues with the writing, and the author's questionable taste in music and footwear, aside, I'm glad to see Comic Sans finally getting some... well, not love, exactly, but less hate. I still say it should become the Sarcasm Font, but no one listens to me. |
Sundays are when I usually dip into the past, and today is a Sunday, so here's a blog entry I did way, way back in 2007: "Useless" The internet has changed somewhat since 2007, so the links aren't what they used to be. They're still active URLs, surprisingly enough, but I wouldn't go in there without a condom. First was the virtual bubble wrap. As I noted, the bubble wrap makes the popping noise as you mouse over it, but then regenerates. Infinite bubble wrap, right? Wrong! It was Flash-powered, and Flash is dead. The Web lost a lot of awesome stuff that day, including the utterly useless bubble wrap site. I don't, in fact, know why the URL is even still there. Just to taunt us, maybe. Or perhaps for some more nefarious purpose, hence the condom. The second URL at the site was http://www.papertoilet.com/ That one is, to my vast surprise, still operating. I guess it's not Flash. So many great things on the internet lost to time, and the one that remains is a virtual roll of toilet paper that you can unroll to reveal... absolutely nothing? Fitting. And I'd recommend staying away from the third link, even if you're behind seven proxies. It was, as far as I can tell, just a stapler that you could make go kachunk. Well, not a stapler. A picture of one. Or whatever. And it seems to be gone, anyway. Useless to us, maybe not to malware producers. And so the unweirdening of the internet proceeds. We have lost so much that we'll never, ever get back, and what do we have to show for it? Influenzas and trolls. Useless. |
Something interesting I found at BBC Future: It's also a book ad: In our new book, we explore the many internal and external factors that influence and manipulate the way we think – from genetics to digital technology and advertising. And it appears that language can have a fascinating effect on the way we think about time and space. And I'll do my usual pointing out that this isn't settled science. But a growing number of experts believe language can influence how we think just as our thoughts and culture can shape how language develops. Honestly, I'd kind of figured that was the case. I think in words, myself. Usually English ones. But I've asked other people, and some say they don't. I imagine kids these days think in emoji. For example, we know that people remember things they pay more attention to. And different languages force us to pay attention to an array of different things, be it gender, movement or colour. I'm still just focused on getting French pronunciation close to correct. Linguists, neuroscientists, psychologists and others have spent decades trying to uncover the ways in which language influences our thoughts, often focusing on abstract concepts such as space and time which are open to interpretation. I hope they took into account my anecdotal evidence about not everyone thinking in words. There follows a good bit of examples of the science people did to investigate this. I could probably nitpick some of the methods, but I don't feel like it today. This is the part that most interested me, as someone trying to be bilingual: Things start to get really strange, however, when looking at what happens in the minds of people who speak more than one language fluently. "With bilinguals, you are literally looking at two different languages in the same mind," explains Panos Athanasopoulos, a linguist at Lancaster University in the UK. 
"This means that you can establish a causal role of language on cognition, if you find that the same individual changes their behaviour when the language context changes." While it is true that I complained about how bad this year's Beaujolais Nouveau was (it was really bad), and joked about how that means I'm turning French, that doesn't mean I'm turning French. When you learn a new language at my advanced age, things go a lot slower. We aren't necessarily prisoners to thinking a certain way, though. Intriguingly, Casasanto has shown that you can quickly reverse people's mental time representation by training them to read mirror-reversed text, which goes in the opposite direction to what they're used to. This refers to a part I didn't quote, which stated that people generally conceive of time moving in the same direction as their language writing. In contrast to language learning, I taught myself how to read mirror-reversed text (and upside-down, and upside-down mirror-reversed) when I was a kid, and I still conceive of time's arrow as moving left to right. Maybe that's a case of just because I can do it doesn't mean I've internalized the connection to conception of time. There's a whole lot more to the article, but I'll skip to near the end: As this body of research grows, it is becoming increasingly clear that language is influencing how we think about the world around us and our passage through it. Again, though, I wouldn't take it as settled science but more of a working hypothesis. And while being multilingual won't necessarily make you a genius, we all can gain a fresh perspective and a more flexible understanding of the world by learning a new language. A reasonable assertion, I think. One reason to actually learn a language rather than relying entirely on smartphone apps for translation. |
Several years ago, I did an entry on the Stoned Ape Hypothesis, and expressed great skepticism: "Expanded Consciousness" I promptly forgot all about it, until this Big Think article pinged my radar. A new spin on the “Stoned Ape Hypothesis” The controversial theory about magic mushrooms and human evolution gets a much-needed update. One might wonder (fairly) why I even give this attention if I dismiss it so readily. After all, I'm not here repeatedly sharing flat-Earth links, right? Well, it's different because, for one thing, it's not completely falsified the way flat-Earth doctrine is; for another, talking about it might help normalize the use of psychedelics. In the realm of human evolution, few theories have captured the public imagination quite like the “Stoned Ape Hypothesis.” It also might increase understanding of evolution in general, even if this particular hypothesis turns out to be a truckload of manure. Originally proposed by ethnobotanist Terence McKenna in his 1992 book Food of the Gods, this provocative idea has recently resurged in popular discourse, thanks in large part to its discussion on Joe Rogan’s widely followed podcast. Well, now I'm even less inclined to believe it. However, matching the enthusiasm for the theory is the skepticism that opposes it, and critics have branded it “pseudoscience,” successfully demoting it from a legitimate scientific hypothesis to fringe status. I'm not going that far. But I still haven't seen any real evidence. Since most academics approve of this characterization, I’ve long felt motivated to “steelman” McKenna’s theory, which I think will prove to be more right than wrong. Okay, fair enough. Your opinion, man. The article goes on to do just that, and it's easy enough to follow. McKenna’s highly amusing and admittedly speculative answer to the puzzle was that psychedelic substances helped spark the rapid evolution in human cognition, consciousness, and culture. 
According to his story, our early hominid ancestors would have inevitably encountered psychedelic fungi while foraging for food in locations like the African savanna. The psilocybin in these mushrooms would have provided adaptive advantages to those who consumed them, including enhanced cognition, creativity, and elevated states of consciousness. Okay, so, what happened to the other species who consumed them? Because I would find it even harder to believe that it was only our ancestors who ate magic mushrooms. Did it have an effect on the antelope? The zebra? The... whatever the hell other foraging species roamed Africa at the same time? Or maybe it only works on primates? Well, plenty of primates lived in places with shrooms, and we don't see them doing rocket science or writing novels. For evolutionary theorists, this sounded too close to Lamarckism, the idea that acquired traits could be passed down to offspring, a theory that fell out of fashion with the emergence of Darwin’s theory of natural selection. Which is exactly what I said in my earlier entry, but of course, I'm not a biologist. Still, I understand there's some leeway for heritability of certain acquired traits. This is called epigenetics (or so I'm told). McKenna, though, had more than a few answers to these criticisms, which makes the theory difficult to judge as flat-out right or wrong, since some of his explanations could be more or less correct. You don't get to just push a theory out there and expect us to judge it as "right" or "wrong." Like, I hereby theorize that there's life on Pluto. You can't prove me wrong, so it's a legitimate theory, right? No. No, it is not. One promising alternative explanation, which you could say represents the “status quo alternative” to McKenna’s theory, is that social and cultural factors played a unique role, such that increasing social complexity created a natural selection pressure that strongly favored intelligence over physical attributes. 
Looking around, I find the idea that, in humans, intelligence can be favored over physical attributes almost as unlikely as the magic mushroom hypothesis. ...why would psychedelics then mostly disappear from our diet, rather than being a regular part of our contemporary lives, the way a drug like caffeine is? Now, that right there is cultural bias. Other cultures incorporate, or used to incorporate before missionaries came along, psychedelics into their sacred rituals. (In the author's defense, he does acknowledge some of these instances later in the article.) I realize that my statement there works in favor of Stoned Ape. That's okay. Skepticism doesn't mean outright rejection. According to the New Stoned Ape Theory, psychedelics likely served as a “chemical catalyst” for a special kind of “cognitive-cultural phase transition,” characterized by a shift in perspective at the individual level that propagates through culture (“goes viral”) and restructures the worldview of society, bringing about a transition at the societal level. Which, looking back at my earlier entry, I acknowledged as a possibility that I could accept (given evidence). I quote Younger Me: "And maybe - just maybe - I can see psychedelics being an engine for social evolution." The article is fairly long, as BT articles tend to be. I'm not going to critique each claim, though there's plenty to critique. I'll just point out one other quote, one that claims to sum things up: To summarize the theory in a sentence: Psychedelics, as “worldview shifters,” can create a cognitive phase transition whose spread creates a social phase transition — a shift in culture. It’s that simple! I'm a big fan of Occam's Razor, but when it comes to evolution and human cognition, I reflexively distrust anything that's "that simple." Which, again, doesn't mean it's wrong. 
The article kind of undercuts itself at the end by proposing something even weirder and more speculative, but I'm not going to weigh in on that except to say that it sounds like the ramblings of someone who just ate mushrooms. Which is fine. There's plenty of actual evidence that hallucinogens can, under certain circumstances, be beneficial. I just think the whole thing needs more science. |
I've written about the Trolley Problem before. At length. I even wrote a very short story featuring it: "The Trolley Problem" [18+]. This is very likely to be the last time I feature an article about it; this blog is steadily approaching its end. The article itself is a few years old, but I'm not aware of any progress in Trolleyproblemology since it came out in 2018. It's also from Slate, so no surprise they got it wrong. Does the Trolley Problem Have a Problem? What if your answer to an absurd hypothetical question had no bearing on how you behaved in real life? It was never meant to have bearing on how you behaved in real life. Consider this article from Philosophy Now (limited free articles), which concludes: The answer, in my view, is that there is no definitive solution. Like most philosophical problems, the Trolley Problem is not designed to have a solution. It is, rather, intended to provoke thought, and create an intellectual discourse in which the difficulty of resolving moral dilemmas is appreciated, and our limitations as moral agents are recognized... I do not believe there will ever be a perfect solution to the Trolley Problem, nor a consensus as to the best possible solution. All we can hope for – and should hope for, as I have argued – is to utilize the tools of philosophy as well as the scientific method to continue this discourse. The Trolley Problem does not have to be resolved; it merely needs to be contemplated, and to be the topic of our conversations from time to time. That is, of course, the opinion of one philosopher, but it rings true to me. Philosophers, however, aren't known for having a sense of humor. They're all Very Serious Thinkers. We have a different name for philosophers with senses of humor: we call them "comedians." And comedians have been having a field day with various permutations of the Trolley Problem, many of which are legitimately hilarious. 
Which is, as the Very Serious Philosopher notes, the point—even if he'd be appalled at the humor elements. So, back to the Slate article: I ask because the trolley-problem thought experiment described above—and its standard culminating question, Would it be morally permissible for you to hit the switch?—has in recent years become a mainstay of research in a subfield of psychology. And there's the "problem," right there: It's not psychology. It's philosophy. In November 2016, though, Dries Bostyn, a graduate student in social psychology at the University of Ghent, ran what may have been the first-ever real-life version of a trolley-problem study in the lab. In place of railroad tracks and human victims, he used an electroshock machine and a colony of mice—and the question was no longer hypothetical: Would students press a button to zap a living, breathing mouse, so as to spare five other living, breathing mice from feeling pain? Right, because our moral calculus involving mice is obviously exactly the same as it would be with fellow humans. I'm not saying people don't feel sorry for mice. I always feel sorry for the ones that Edgar Allan Purr leaves on the doorstep. But I'd be horrified if he brought us a dead human for a present, instead. Not that he could, but, you know, as long as we're talking hypothetically. It’s a discomfiting result, and one that seems—at least at first—to throw a boulder into the path of this research. Scientists have been using a set of cheap-and-easy mental probes (Would you hit the railroad switch?) to capture moral judgment. But if the answers to those questions don’t connect to real behavior, then where, exactly, have these trolley problems taken us? I suppose the answer to that depends on whether you ask a philosopher, a psychologist, a lawyer, or a comedian. It also seemed a little off that trolley problems were often posed in funny, entertaining ways, while real-life moral dilemmas are unfunny as a rule. 
Except that it's the comedian's job to make things funny when they're not. There's a lot more at the link, of course, but I've banged on long enough. In short, I disagree with the basic premise that it's a psychology issue instead of a philosophy one. Still, as I've noted in here before, it's not completely hypothetical: there are real-life situations where exercising agency can make a difference, one way or the other. So it's worth thinking about. And it's worth making jokes about, because comedians can often do philosophy better than philosophers. |