Items to fit into your overhead compartment |
The random numbers gave me this one today, from Popular Science earlier this month.

Vertically rolling ball ‘challenges our basic understanding of physics’

The lab-built orb can roll down a 90-degree surface.

Yeah... that bit in the headline is apparently an actual quote from the scientists involved. But, as we'll see, the article didn't convince me that this is the kind of thing to say that about.

> Take recent observations made by a team at the University of Waterloo, for example. Under a very specific set of conditions, these experts achieved something previously thought impossible under gravity’s constraints: they documented a sphere not falling or sliding, but rolling down a vertical surface.

I'm not saying it's not cool, mind you.

> “We double-checked everything because it seemed to defy common sense. There was excitement in the lab when we confirmed it wasn’t a fluke and that this was real vertical rolling.”

But that's what science is for, in part: to "defy common sense."

> The surreal display of physics relied on a pea-sized soft gel sphere’s finely tuned elasticity and its relationship to a vertical surface—in this case, a glass microscope slide. If researchers crafted a polymer orb that was too soft, then the sphere inevitably either stuck to the slide or slid down it. If the object was made too rigid, then gravity caused it to simply fall straight down.

It seems to me, based on that paragraph, that far from being an accident that would require rewriting physics from the ground up, they were working on this specific setup, and have a pretty good idea of what causes the behavior.

> As they explain in their study recently published in the journal Soft Matter, these attributes produce a “dynamically changing contact diameter and a unique contact asymmetry.”

That sounds awfully close to bizspeak jargon. But it's science jargon. No, I don't fully understand it, either.
> Harnessing the physics of vertical rolling could one day be applied across soft robotics to create new machines capable of inspecting pipe interiors, exploring difficult-to-reach cave systems, and future devices destined for the moon or Mars.

I guess everyone wants to know what the practical use for something is. There doesn't have to be one. If you're going to insist on listing possible practical uses, though, it might be good to give more details on exactly how it'll be useful. For example, we already have robots for inspecting pipe interiors; what would this do differently?

That's about it, really. My issues with the reporting aside, it's a pretty cool concept, and that was reason enough for me to share it. |
Another entry for the architectural round of "Journalistic Intentions"

Carson Mansion

Ever wonder what makes a house a mansion? I know I have. You expect that houses exist on a continuum, from those trendy one-room trailer dwellings all the way up to, I don't know, something that burned down outside L.A. earlier this year. At some point on that continuum, it stops being "house" and becomes "mansion," like a high enough hill becomes a mountain, or pond to lake to sea, or pebble to rock to boulder. Well, unless you're in a profession that needs to strictly define ranges within that continuum, it turns out it's not that simple. The Great Salt Lake is, for example, larger than the Dead Sea.

Perhaps, then, what makes a house a "mansion" isn't size (after all, there are castles which serve as homes but aren't called mansions), but appraised value? Or perhaps the presence of servants' quarters? Or is it some distinguishing architectural feature? But then you also have words for similar things like "manse" or "manor." Those are, perhaps obviously, derived from the same Latin root word, one which also evolved into the French word that translates to house, "maison."

So, apparently, if you call it a mansion, and enough people nod and say "Yep, that's a mansion," then it's a mansion. Not everything has to fit into neat little boxes. One can imagine a scenario where, upon measuring someone's expensive and recently-built abode somewhere in the US, an inspector might say, "I'm sorry, ma'am, but your dwelling comes in at 14,999 square feet, and, as you know, to be considered a mansion, it must contain at least 15,000 square feet." On the plus side, that could be amusing and I'm going to have to work it into a story at some point.

It's not always straightforward to measure the area of a house, though, whether you're measuring in square feet, square meters, or, in the case of some mansions, acres or hectares. You'd think it would be easy, but often, it's not.
Obviously, if the house is a rectangle, like mine, you can just measure the long side and the short side, multiply them together, then multiply by the number of floors. Right? Wrong. What you're really measuring is floor space, so you have to account for wall thickness. Also, interior wall thickness. Also, stairs, which, well, which story do you count them in? And what about bathtubs, sinks, kitchen cabinets? They count?

But then you get weird old houses or mansions or castles, for instance the one featured today.

As an aside, I always wanted a house (or mansion; I'm not picky) with a tower. Still, even if I were wealthy enough to afford one, I wouldn't want to be the one maintaining it. My own relatively simple house is hard enough to keep up with, and I'd feel bad having servants to do it for me. They can be cool to look at, though, and us peons will just have to settle for admiring the exterior. At least until the owner's goons chase us off the property. |
Well, today's article is likely to be almost as controversial as the one about bagels. But, I'm guessing, it's a bit more of a sensitive topic.

As a disclaimer, I know almost nothing about the source here, so I have no idea what agenda, if any, it might have. Well, for starters, what I "know" is that it's not anyone else's goddamn business. Not satisfied with managing your own life path? Better try to tell other people how to live.

> Ashley Manta knew she didn't want kids as early as her teen years.

"Oh, my dear, you'll change your mind!" "Just wait until your biological clock starts ticking!" "It's different when they're your own!"

> "Even as I got older, after college and grad school, when I looked at my friends who had kids, they always seemed exhausted, stressed, and financially strapped," Manta says. "Kids are expensive! I'd rather spend my money on growing my business, traveling, and saving for the future."

Then there are the people who consider "exhausted, stressed, and financially strapped" to be badges of honor.

> Manta is far from alone. The subreddit r/childfree has 1.5 million members, and there's a sterilization subreddit with 17,000 people dedicated to discussing permanent birth control options like getting their fallopian tubes removed (called a bilateral salpingectomy).

I have mixed feelings about reddit; I don't visit the site often and I don't have an account there. But if that's what it takes to find people to help support you in a world that seems increasingly and bafflingly pronatalist, I won't rag on it.

I will point out, though, that this article is very woman-centric. That's okay. There's a real difference in how society views childfree women versus childfree men, which is really weird when you think about it, but weird is par for the course when considering human society.

The article goes on with a fair amount of detail about the bilateral salpingectomy, or bisalp, but I'm not here to talk about the mechanics of it, just the social aspects.
So I won't reproduce it here. (Pun absolutely intended.)

> The current political landscape is a major reason that child-free people like Manta are seeking out permanent forms of birth control. "When Roe v. Wade was overturned by the Dobbs decision in 2022, I knew I needed to start looking into more permanent forms of birth control," Manta says.

At the risk of getting political, this may be an example of what they mean when they talk about "unintended consequences."

> "Around the same time, I started considering what it would look like to live in a state other than California, specifically Texas where I have family, and I knew I would never feel comfortable living in an anti-choice state if there was any possibility of my becoming pregnant."

And don't give me that "so just don't have sex" bullshit. Rape is a real (and horrible) thing that happens to real people, and, contrary to what certain ignorati proclaim, it does sometimes result in pregnancy. Also, I've rarely heard of anyone saying that to men for the same reason.

> Anne Langdon Elrod, 27, has known she doesn't want to have children for several years. In 2019, she says, she came to the realization that American society often falls short in supporting expectant and working mothers.

Not only are children now a luxury, but they're an expensive luxury.

> "And a coworker of mine explained to me that pregnancy is considered a preexisting condition, and many women are unaware they need to enroll in such insurance before becoming pregnant — unless their employer offers a group plan. Hearing her perspective opened my eyes to the complexities women face when planning for motherhood."

Way back in the early 90s, I distinctly remember reading a passage from my company's health insurance handbook: "Pregnancy is treated like any other illness." I have no idea why more people didn't catch that and call them out on their phrasing.

Anyway, obviously, I have no personal experience with this, being very much not female.
But some of my favorite people are women. Some of them have kids. Some don't. It is, and it should be, a personal choice. I don't mean "choice" in the way it's been co-opted into the everlasting argument about abortion, but a proactive choice.

I've known women who were told, in no uncertain terms, that it didn't matter that they knew, absolutely knew, that they didn't want to be mothers; the doctors wouldn't do anything permanent lest they change their mind and end up suing the doctor. I didn't experience that as a man; I just got "okay, here's a referral to a urologist." That was nice for me (well, apart from a few days of soreness), but the inequality of it pisses me off.

My real point here is this. Or, rather, the points are these: 1) it ain't nobody's business if you want to have kids or not, regardless of sex or gender, except maybe your life partner's if you have one; and 2) adults should be trusted to know their own minds, not infantilized with things like "oh, honey, you'll change your mind."

And just to be clear, I'm not hypocritical enough to say "don't have kids" here; that would be making it my business, which I just said it wasn't. (While I have been known to say that, it's usually in reference to someone who's on the fence about the decision and, maybe, experiencing social pressure to do the opposite.) What I am saying is: let's not shun or shame those who make that decision. I can only imagine how terrible it is to really want children and be unable to have them (though that describes my parents), but it's also a Bad Thing to not want children and be forced to have them. |
Pay attention now, because this may be the most important article I share all year. Possibly even in the entire blog.

It sometimes surprises people who aren't familiar with the area to learn that NYC tap water is damn good. I mean, how could it be? It's the largest city in the US and one of the oldest, with infrastructure dating back to the 18th century. The city is famously grimy, and don't get me started on the industrial waste in the rivers. But the tap water? That, they got right. Hence the popular hypothesis that it's the water that makes the bagels there so iconic.

But a moment's thought should be enough to question this. Apart from the pizza dough (also the best in the world, obviously), other bread-like substances, also made with NYC tap water, don't stand out in popularity the way the bagels do. The bread is good, mind you. Just not much different from what you get anywhere.

Also, the beer. Beer is usually around 90-95% water, so that ingredient is of massive importance in the beverage. But there's nothing special about NYC beer. I mean, some people think Brooklyn Brewing makes an exceptional product, but they're wrong. (It's not bad, though.)

So, anyway, the article.

> I’m just going to say it: New York City has the best bagels in America, and probably in the world. The absolute greatest, fresh out of the wood-fired oven at St. Viateur in Montreal come close, but what New York has over its Canadian counterpart is a far wider availability of good bagels almost anywhere.

There are those who would get very, very angry at the above. I've heard that the Montreal bagel place mentioned there is truly the epitome of paragons. I haven't been there, so I don't know. But the last point there is important: there exist many fine bagel establishments in NYC, whether in Manhattan or out in the other boroughs.
> The folk knowledge used to explain the concentration of high quality bagels has become gospel: it’s all about the water, that there’s something about the pH or the mineral content in NYC tap water that causes the city’s dough to be intangibly superior.

Folk knowledge can be right. Usually, it's not.

> America’s Test Kitchen conducted a taste test, pitting bagels made with both Boston and New York City municipal water against each other. The results amongst tasters showed the two batches to be virtually indistinguishable.

I have my issues with blind taste tests. Taste isn't a disconnected sense. Sometimes, something tastes better just because it's rare and precious. Sometimes, it changes with ambiance; imagine being served a five-course dinner at a dive bar, and maybe you know what I'm talking about here. You can "prove" to me all day that a $300 bottle of scotch doesn't taste all that different from a $50 bottle of scotch, and it wouldn't matter; I'd still like the $300 bottle better. Because privilege is a glorious spice indeed.

That said, I think with bagels, it's okay to do this. I just think they ought to have used other cities' water as well, and maybe some well water, too. You know, for science, and not just because I'm imagining eating that many bagels.

> So what actually makes them better in New York? In my opinion, it’s the people. Bagels arrived in New York City by way of Ashkenazi Jewish immigrants from Poland in the late 1800s, and ever since, New Yorkers have been perfecting the craft of bagel rolling and boiling, holding on fiercely to tried and true techniques and recipes even in the modern era.

Yeah, um, that's still an opinion with no data backing it up (other than "okay, so maybe it's not the water.") Sure, it's an opinion I agree with. Still. This is, of course, also why New York pizza is superior. Only difference is nationality/religion.
> When you have a history spanning centuries and generations of people making the same food, often in competition with one another, it’s that pride and local association which produces greatness. It’s why Philadelphia makes the best cheesesteaks, Texas has the best barbecue, and why New York City makes the greatest bagels.

I can accept this proposal. Except for one important thing: Texas does not, in fact, have the best barbecue.

Now, if you'll excuse me, I have to run away from this angry mob of Texans that suddenly appeared. |
An entry for "Journalistic Intentions"

Purcell and Elmslie

...would be a great name for a band. Or, you know, a duo. Like Simon and Garfunkel, Sonny and Cher, or Hall and Oates.

You know, when I first heard Hall and Oates on the radio, I thought the DJ called them "Haulin' Oats." I had to verify I didn't switch to a country station accidentally. This is probably why they styled themselves Daryl Hall & John Oates: to avoid precisely that mondegreen.

It may seem like I don't like them. This is not the case. They were (and presumably still are, though they divorced last year) talented musicians, and they deserved their fame and airplay; lots of people liked their muzak. It's their genre that leaves me cold, offending me with its inoffensiveness. Simon and Garfunkel, on the other hand, crafted nothing but greatness. Okay, maybe they had a few stinkers, but for the most part, their stuff was amazing.

I'd be remiss if, after all that, I didn't mention the musical comedy duo Garfunkel and Oates, a couple of women who usually make me laugh. They apparently named themselves that because of the lower billing of the second names in those duos. One of them apparently called it the "silver medal," which I suppose works better than "second fiddle," because back when I was in an orchestra, in high school, I was often second fiddle. (I didn't mind; the first-chair violinist was much less lazy and much more talented than I was.)

It's because of those classic duos, though, that I've often wondered how they decide which order to name themselves in. Maybe it's ego for one of them; musicians can certainly have those. Maybe it's just a matter of marketing, and their manager decides. Maybe they just pick the one that most easily rolls off the tongue; this is almost certainly the case with Sonny and Cher, the latter of which had 90% of the talent.
But it still makes me wonder why they didn't just pick a band name, like Walter Becker and Donald Fagen when they came up with Steely Dan (yes, I know, they started out trying to be a multi-person band, but that didn't last long). Would they have achieved their greatness as "Fagen and Becker?" Probably, because they were awesome. But "Steely Dan" is a great name, and the story of how they chose it always makes me smile. For the literary connection, of course. No, I'm not going to link that; feel free to look it up if you don't already know. I guess the band name thing wouldn't work in a more serious profession, like law, medicine, or architecture. I managed, though, when I partnered with one other person to create an engineering and landscape architecture firm: we didn't use our names, but came up with a company name we both liked. That was 100 years after Purcell and Elmslie, though, so I suppose times changed. And look at that, I'm all out of time, and I haven't even addressed the actual title (I will say they were architects). That's okay. I'd never heard of them. Like with the name Steely Dan, the information is out there waiting to be discovered. |
Amusingly, today's article comes to us from Nautilus.

The Octopus Propaganda Hidden in Modern Maps

An old visual trick may promote conspiratorial thinking about global power

I say "amusingly" because octopuses are tentacled cephalopod molluscs, and so are nautiluses, which that site is named for. Just to be clear, I did some digging and it seems that once a word is indisputably a part of English, it's acceptable and even stylistically proper to form a plural with it in the standard English manner. What's weird, though, isn't that they're occasionally pluralized with their ancient forms, but that, while both are words of Greek origin, the other proper plurals are "octopodes" and "nautili." The former is a Greek plural form; the latter, Latin. Hence why I think I'm going to go with the "just use English and stop being pedantic" idea. The only awkward thing is that "octopuses" sounds way too close to "octopussies," but it would be far from the only word that would make middle-schoolers giggle knowingly, so... whatever.

In any case, I really don't want to make this a blog about etymology, so on to the article.

> For centuries, an odd form of iconography has maintained a stranglehold over the globe: the octopus map.

Ho, I see what you did there. Octopus. Stranglehold.

> Political cartoonists and mapmakers have long used the creature to illustrate a wide variety of forces threatening to throttle their foes: from empires, religious groups, and ideologies to financial systems—even abstract concepts such as the great unknown.

This is, of course, blatantly unfair to the entire octopus species. They're not trying to take over the world. I mean, I have no doubt that they could if they wanted to. But they're not trying to.

> Map-dwelling military octopuses multiplied through the 20th century: They were commonly drawn during both World Wars, for instance, by satirists and cartoonists on both sides of these conflicts.
This iconography was adopted by Marvel, in which the symbol of Hydra was a stylized, but easily recognizable, octopus. With a human skull, which would probably annoy biologists to no end if they were still following Marvel comics and movies after the whole "radioactive spider," "gamma ray Hulk," "shrink down to subatomic size," and "superpowers from industrial waste" things, to name just a few. I digress. The point is, that icon came from somewhere, and I think it was, in part, inspired by these octopus maps; the Hydra that the villainous organization was named for had multiple heads, not arms.

> Michael Correll, a data visualization researcher, and his colleagues at Northeastern University wondered if these data-driven images were making subconscious appeals to audiences’ emotions, so they set out to assess how octopus iconography works on the mind.

We fear what we don't know, and we and octopuses inhabit vastly different ecosystems, so for a long time, they were largely unknown. Plus, they're alien-looking, hence scary.

> What they found is that even subtle octopus imagery in maps can inspire conspiratorial thinking in viewers.

Ah! But that's what They want you to think!

There's some information in there about the methodology they used, but I'm not going to weigh in on it. Just my usual disclaimer: this is one study, not settled science.

> Ultimately, their survey results indicated that even the more subtle maps “could still engender negative sentiments and attributions of ill-intent” on a similar scale to those with more overt octopus imagery.

I also have no idea how or even if they controlled for participants' preconceptions. For instance, I'm willing to wager that the result would be somewhat different if the octopus was on one's own country than on someone else's.

> This suggests that it’s important to pay close attention to details in data visualizations, as they can have a major impact on audiences’ thinking.
What it suggests to me is that it's remarkably easy to use iconography to sway viewers' emotions. This power can, of course, be used for either good or evil. Or somewhere in the middle. Point is, it's a form of propaganda and mind control by itself, one potentially misleading the viewer into thinking that the Other is the evil one when, to paraphrase a famous horror movie line, sometimes the scary phone call is coming from inside the house. But, as they say, knowledge is power. If you're aware of this sort of thing, perhaps you can be armed (pun absolutely intended. Octopus? Armed? I'll be here all week; try the veal) against their machinations. Or, like with the blatant psychological techniques employed by marketers and grocery stores, you could just shrug and get on with your shopping. Ooooh, Oreos! |
Another one for "Journalistic Intentions"

Brutalism

The French language has, for Anglophones, a few words and phrases that are called "false friends": words that are the same or similar in both languages, but can have very different meanings or connotations. It should come as no surprise that the French term for them is "faux amis," which is pronounced something like foes-ah-MI, which itself is amusing to this Anglophone because "foes" in English is the opposite of "friends." As far as I know, though, the pronunciation is coincidental; "foe" comes from our Germanic roots.

For an example of a faux ami, "demander" is a standard French infinitive verb ("standard" in that it follows one of the usual patterns to make present, past, future, etc. tenses) that translates to "to ask" or "to request." This can easily trip up an English speaker, for whom "demand" is way more intensive than "ask." As in "Karen asked to see the manager" is milder than "Karen demanded to see the manager." But in French, one might say, "Cette chienne-là a demandé à voir le gérant," and that would imply that she asked politely. Okay, so that's not the actual translation of Karen. But it amused me to type it, and that's what matters. It is possible to demand something in French, but the word for that is apparently "exiger" (I'm not 100% sure because I try not to make demands, myself).

Anyway, I was talking about les faux amis. There are others, of course, but I'll focus on the actual subject here: brut.

When I was a teen and had just discovered that people liked me more if I actually wore deodorant, my scent of choice was the then-common brand Brut. It had the bonus of coming in both stick and spray forms, and the spray form was, I discovered, excellent for making improvised flamethrowers. Just not while pointed at my pits. I was a reckless kid, sure, but not that stupid. I don't know if they make it anymore; I switched to the nice safe Old Spice long ago.
I also don't think they make spray deodorants anymore. Closest I can think of is Axe, which is more a joke than anything else. I don't even know if that's still around, and I can't be arsed to look it up. Nor have I seen roll-ons. It's all gel sticks, now. I suppose they're easier to deal with if you're getting on an airplane; I don't think they're considered "liquids" the way an aerosol or roll-on would be. But yes, I recommend that you use some form of deodorant when flying; we're all stuck in this tin can with recycled air.

Again, I digress. My point is that "brut," in French, has nothing to do with brutes (like the kind of people who don't wear deodorant on airplanes). You've probably seen the word in other contexts: it's used to describe champagne, sometimes, in which case it means "dry." Of course, champagne is a liquid and therefore not actually dry; it's just that we use "dry" to describe an alcoholic beverage with a low sugar content. But the most common translation, and the one that matters today, is something like "raw" or "rough." This is the one that gave us Brutalism, because that particular architectural style is notably unclad, not burdened by an excess of things like paint, trim, or siding. And it generally involves concrete.

I'm no expert on architecture. Sure, my cousin is in that profession, and I've absorbed some knowledge from working with other architects, but the field is entirely too artsy for this engineer. But, as an engineer, I've learned a thing or two about concrete over the years, and then forgotten most of it. At one point, I worked as a dispatcher and yard manager at a ready-mix plant, and I knew how to mix the ingredients in just the right proportions for most common use cases: foundations, basement slabs, sidewalks, curbs, etc. Specifically, though, we're talking about hydraulic cement concrete. "Hydraulic" in this case means that water is involved.
"Cement" is the key ingredient for concrete; while people colloquially use the two interchangeably, talking about things like "his feet slapped against the cement," this is objectively wrong. And "concrete" in this context refers to the material, and it isn't the opposite of "abstract." And yet, Brutalism, for all its concrete rawness and roughness, very often incorporates elements of abstract art. Because language is weird.

The other thing you need to know about concrete, in an architectural and/or structural application, is that it can be very, very strong in compression. That is, a concrete pillar can support a good bit of weight without failing. Introduce tension, however—either by putting it in a place where it'll be pulled on or, more likely, applying lateral shear forces—and that shit'll crack right up (to use the technical term). Which is why we have "reinforced concrete," with steel bars inside the slab or beam or whatever handling the tension. Why not just make the whole thing out of steel, then, which is about as good in tension as in compression? Money, of course.

Concrete is basically chunky mud: crushed limestone cement, big rocks, small rocks, little tiny rocks (aka sand), all mixed with water and left to cure. Not "dry." "Cure." It's a chemical reaction that makes concrete a solid, and it technically never stops; the water doesn't all evaporate away (in fact, in the summer, you often have to keep it from evaporating out), but gets incorporated into the chemical matrix, kind of like the liquid in waffle batter. The waffle is (kind of) solid, but the batter was liquid. Different chemicals, of course. Trust me on this one: waffles taste better.

All of which is to say that I don't need to have an opinion on the aesthetic qualities of a Brutalist structure. Which is good, because I'm not trained in the abstract; I'm trained in the concrete, and I can therefore appreciate it just for the material. |
Here, Smithsonian reports on a thing we've always known, but apparently science had yet to confirm.

Cats Can Recognize Their Owner’s Scent Compared to a Stranger’s, New Research Suggests

In an experiment, domestic cats spent longer sniffing cotton swabs with the scents of unfamiliar people than swabs with the scent of their owner

Well, except for the glaringly obvious mistake in the headline: cats don't have "owners." Cats have staff.

> Your cats may act aloof, but they likely know more about you than they let on.

It's still perfectly acceptable to use the possessive pronoun with them, though. This is because the possessive isn't always possessive; sometimes, it's relational. I got screeched at once for using the phrase "my wife" because it "implies ownership." Well, first of all, if I say "my school," am I implying that I have legal ownership of the school building and the ground it sits on? If I say "your country," am I making you its dictator? No. No, I am not. It very clearly means "the school that I go to" or "the country you live in." Other examples of the relational possessive pronoun may not be so obvious in context, but saying "my husband" or "your husband" or "his husband" in no way implies actual possession, any more than "your kids" does. Responsibility, maybe. Not ownership.

If I could change just two things about English grammar, they would be to invent a different pronoun for such relational situations, and to invent yet another one for discerning the you-and-me "we" from the "me and others but definitely not you" we, so there's no question that when I say "we're going to a party," I don't mean you, but me and my friends. That would get really complicated really fast, but it would also eliminate a great deal of drama. And sitcom plots.

But I digress. Also, cats aren't aloof (okay, some are, but that's a vile stereotype). They're just not as needy or demanding as those... other... popular pets. Which is why I live with them.
> Research has shown that cats can tell when you’re speaking to them and that they recognize the voice of their owner—they just might choose to ignore it.

And yet, they keep using that word. It is inappropriate.

> Now, a new study published in the journal PLOS One last week suggests cats can distinguish their owner’s smell from the scent of a stranger.

As a cat-tender, I was already sure this was true. Still, as always, it's good to have science backing you up.

The article goes into the methodology they used. I won't repeat it here, because it's kind of gross. Yeah, science is important, but it can definitely be gross. Look, at least it's not as disgusting as the article on vultures I featured a while back. But this is, to me, the really interesting part:

> The researchers also analyzed video recordings of the cats inspecting the tubes and found that the animals tended to sniff familiar odors with the left nostril. They mostly used the right nostril for unfamiliar scents. Scientists have previously observed similar behavior in dogs and other animals like fish and birds, according to a statement from the journal.

That, I wouldn't have suspected. There might be an explanation for it:

> “The left nostril is used for familiar odors, and the right nostril is used for new and alarming odors, suggesting that scenting may be related to how the brain functions,” says Uchiyama to Kate Golembiewski at the New York Times.

Or it might be something else entirely; this is what science is for, as noted:

> Carlo Siracusa, an animal behavior researcher at the University of Pennsylvania School of Veterinary Medicine who was not involved with the study, says to the New York Times that he would be wary of relating the nostril use to brain function without research that scans the felines’ brains.

Still, it's a hypothesis; it can be supported or falsified by further experiments. It's the beginning of science, not the end.
The thing that sent me, though (the part of the article that convinced me that this was worth sharing in the first place), was in the last place I looked: Even if more research is needed, “I really commend this group of scientists for being successful in engaging 30 cats in doing this stuff,” Siracusa adds to the New York Times. “Most cats want nothing to do with your research.” Fortunately, in this case, curiosity didn't kill the cat. |
Time for another inspiration from "Journalistic Intentions": I.M. Pei. As much as some pedants who are slightly more pedantic than I am would like to believe otherwise, there is no One Correct Way to handle initials. In part, this is a matter of style. Some sources require the formality of breaking the letters up with a space: I. M. Pei. Others, notably the influential AP style, dictate initials with periods and no space when the individual goes by initials: I.M. Pei. (Other notable examples include H.G. Wells and, amusingly, E.B. White. I say "amusingly" because E.B. White was the White of Strunk and White, the authors of the American English standard style manual The Elements of Style.) I'd like to emphasize, however, that this is a blog, not a scholarly report. While I try to adhere to conventions of spelling, word usage, and sentence structure, mostly for practice, I know I'm all over the place when it comes to style. Even sometimes typing incomplete sentences. Additionally, I do these things pretty quickly; like everyone, I make mistakes; and, of course, I don't know everything, having not memorized Chicago, AP, or Strunk and White style guidelines. Let's also point out that it's AP for Associated Press, not A.P. or A. P. But there's another wrinkle when it comes to names: in English-speaking areas, at least, it's generally accepted that we can style our own names. If I wanted to go by RoBert Waltz, then no one gets to tell me I'm doing it wrong. They can say it's dumb, silly, precious, or a marketing gimmick, sure, but not wrong. Incidentally, it wasn't E. E. Cummings who decided to style himself e e cummings, but one of his publishers; it amuses me to no end that that asshole's gravestone bears his fully-spelled legal name... in all capitals. All of which is to say that both I.M. Pei and I. M. Pei are acceptable renderings of the famous architect's name. And yet, both are, in a sense, wrong. 
I had to look this up, okay, because I'd never learned his full name: Ieoh Ming Pei. But, although he became a U.S. (or U. S. or US) citizen, he was born in Guangzhou. On a personal note, I didn't first encounter his work in Paris or Hong Kong or Cleveland (the latter two of which I've never even been to), but close to home in Washington, D.C. (or DC or D. C.) The art museum he designed featured, I was told, the most acute angle in all of architecture. I don't know if that's still true or not. The museum was fairly new at the time, but the stone (I think it was granite) of that particular feature was already worn down at roughly human torso height, from all the tourists who just had to touch the angle. If my memory serves (which it might not), it's right there at the main entrance. This is why we don't touch art, folks. I did get an opportunity recently, as regular readers already know, to see perhaps his most famous work, the glass pyramid at the Louvre. I didn't touch it, though. And yet, the work of his that brings me the greatest joy is the one in Cleveland: the Rock and Roll Hall of Fame. While, as I said, I haven't been there (I do intend to visit one day, even though it's in Ohio in general and Cleveland in particular), I have of course seen pictures. And that wonderful bastard made it feature a glass pyramid, echoing the one he designed for the Louvre, which I can only hope pissed off Parisians by diluting its uniqueness. Yes, I love France, but I can never pass up an opportunity to poke the French. It's also worth noting that even the Louvre pyramid pissed off a lot of French people, but since it's stuck there in plain sight and not on a different continent or mentioned in some obscure blog, I don't find that nearly as amusing. But that's the thing about art, in which I include architecture and music: it's meant to elicit emotions. Sometimes, those emotions are negative. That's the risk you take. 
And just like with names, you can flout the "rules" and make stylistic choices, and everyone else has to live with them. |
A blatantly US-centric thing from Mental Floss. I don't see anything about when it was published. Maybe my script blocker keeps me from seeing that. So I have no way of knowing how current the data is; add that to my usual distrust of how accurate the data is on that site. Still, I found ways to be amused. The Most Commonly Misspelled Word in Each State From ‘beautiful’ to ‘supercalifragilisticexpialidocious,’ find out which words America struggles to spell the most, broken down by state. The English language is complicated. Com... pli... how do you spell that? In addition to its complex grammar rules and commonly confused terms, many words are straight up impossible to spell if you’re not familiar with them already. Technically, all words are impossible to spell if you're not familiar with them already. Even prominent authors like Jane Austen, Agatha Christie, and F. Scott Fitzgerald were known to struggle with spelling. This is why editors exist. Or, you know. Used to. WordUnscrambler found the most commonly misspelled word in each state using search data from Google Trends. Researchers looked at inquiries like “how do you spell,” “how do I spell,” and “how to spell,” considering up to 120 variations of top spelling searches. That method, to put it bluntly, sucks. It won't give you the most commonly misspelled words. It's just one window into how people try to figure out the spelling of something. Like, if I'm not sure how to spell something (usually someone's name), I don't ask Google how to spell it; I just start typing into the search bar. No "how to spell" or "spelling of" or putting "spelling" at the end. That's okay, though. Like I said, this is just for fun and funnies. Many states—West Virginia, Wyoming, and five others—struggle with the word beautiful. At the risk of plucking the low-hanging fruit: I suspect West Virginia struggles with all kinds of spelling. 
Vermont’s obsession with Mary Poppins (1964) is evident, as the state is most curious about how to spell supercalifragilisticexpialidocious, according to the data. That's not evidence of obsession. That's evidence that the word (yes, it's made-up, but all words are made-up) isn't in spell checkers, and Vermont has figured out most of the other words. You can find the complete list of each state’s most misspelled word below: Obviously, I'm not going to comment on all of them. Many of them are repeated, anyway. Alabama - Different Thus reflecting that state's deep distrust of anything different. Y'all ain't from 'round here, are ya? Alaska - Tomorrow In the northern parts of Alaska, "tomorrow" can be months away. If you go by sunset/sunrise, anyway. Arkansas - Quesadilla Just spitballing here, but I'm betting they struggle with its pronunciation, too. Florida - Compliment Of all the associations on this list, this one surprises me the least. Still. Are we sure they weren't trying to spell "complement?" Hawaii - Luau OH COME ON. Indiana - Taught See my comment above for Florida. This one comes in a close second. Except maybe they were trying to spell "taut." Iowa - Through Because almost no one goes to Iowa, except to go through it. Nevada - School I'm dying of appropriateness over here. Utah - Definitely Considering how many times I've seen people spell this "defiantly," the only surprise is that it's only on this list once. Virginia - People I'm only including this because it'd be unfair to make the West Virginia joke up there without calling out my own state. Besides, we spell it "peepul" here. To be serious for a moment, really, it's okay to not know how to spell something. Last I heard, English has more than a million words. While some fall out of favor, others are added almost daily: those shamelessly stolen from other languages, or ones that are made up out of someone's head and later catch on. 
Only the most dedicated nerds could possibly know how to spell all of them, and even then, everyone makes mistakes. There's probably one in this very entry. Looking up the spelling of something doesn't make you stupid. It indicates you're conscientious (I didn't have to look that up) and trying to improve. And I'm not going to rag on anyone for doing that; on the contrary, I praise their efforts to learn something. So, I'm going to go ahead and assume that those Hawaiian searches for "luau" were from the many tourists infesting that archipelago. |
The Sun shines bright on the Northern Hemisphere today, the day of the Summer Solstice. (For some it'll be tomorrow because of time zones; the solstice occurs at the same moment for everyone on Earth, but local times vary). And of course it's the Winter Solstice in that... other... hemisphere. So instead of randomly selecting a link, I've picked this one to talk about today. Nothing new or shocking here, but there are always reasons to go over the basics. The summer solstice marks the official start of summer. Before anyone from that... other... hemisphere freaks out, that statement is true globally (it's just the solstices are switched). Well... it's definitionally true for what we call astronomical summer, solstice to equinox. There's also meteorological summer, which runs from Gregorian dates June 1 to August 31; US marketing summer, which runs from Memorial Day to Labor Day; and some Northern cultural definitions of summer, which go from roughly May 1 to August 1, with the summer solstice near the middle of the season. Confusing? Sure. Here, we're only talking about astronomical summer. It brings the longest day and shortest night of the year for the 88% of Earth’s people who live in the Northern Hemisphere. But we appreciate the other 12%. They keep the important hemisphere from getting too crowded. Astronomers can calculate an exact moment for the solstice, when Earth reaches the point in its orbit where the North Pole is angled closest to the Sun. "Illusion! Fakery! Sphereist conspiracy!" From Earth, the Sun will appear farthest north relative to the stars. People living on the Tropic of Cancer, 23.5 degrees north of the Equator, will see the Sun pass straight overhead at noon. Another way to put it is that if you're standing at the North Pole, the Sun reaches its highest point above the horizon. Oh, and watch out for bears. Hope you brought a coat. 
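A quick way to see the solstice geometry above in actual numbers: the Sun's noon declination (how far north of the equator it stands) follows a roughly sinusoidal curve over the year, and a sine curve is nearly flat at its peak. Here's a minimal Python sketch using the standard textbook approximation; the 23.44° obliquity and day 81 (near the March equinox) are the usual round values, not precise ephemeris figures:

```python
import math

def solar_declination(day_of_year):
    """Approximate solar declination in degrees for a given day of the year.

    Standard sinusoidal approximation: zero at the March equinox
    (around day 81), peaking at +23.44 degrees at the June solstice.
    Good to within a fraction of a degree, which is plenty here.
    """
    return 23.44 * math.sin(2 * math.pi / 365 * (day_of_year - 81))

# Around the June solstice (day ~172), the curve is at its peak,
# so the Sun's noon height barely changes from day to day:
at_solstice = solar_declination(172)      # ~23.44 degrees
ten_days_early = solar_declination(162)   # ~23.1 degrees

# Near an equinox, by contrast, it moves about 0.4 degrees per day:
equinox_daily_drift = solar_declination(81) - solar_declination(80)
```

Ten days out from the solstice, the declination has shifted by only about a third of a degree, less than the Sun's own apparent diameter, which is why, without instruments, the Sun appears to stand still.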
The Sun’s angle relative to Earth’s equator changes so gradually close to the solstices that, without instruments, the shift is difficult to perceive for about 10 days. This is the origin of the word solstice, which means “solar standstill.” As I pointed out yesterday, we have lots of Latin-root terms that we use, especially in science. But in this case, I feel like the Germanic equivalent would actually be way cooler: "Stillsun." Maybe it's just me, but I think that's more badass. Both can be misleading, though. The Earth doesn't stop turning, so from our spinning point of view, the Sun never really stands still; it just reaches its most northern or southern excursion. Monuments at Stonehenge in England, Karnak in Egypt, and Chankillo in Peru reveal that people around the world have taken note of the Sun’s northern and southern travels for more than 5,000 years. Well, there's two that I didn't mention yesterday. We know little about the people who built Stonehenge, or why they went to such great effort to construct it — moving multi-ton stones from rock outcrops as far as 140 miles away. And yet, it is impossible to speculate about it without referencing This Is Spinal Tap. |
Let's do a thing for "Journalistic Intentions": Terra Cotta. It occurred to me once, long ago, that of all the words we've swiped from Latin, Terra is up there among the most common. Sure, "Earth" is a perfectly good Anglo-Saxonism; the Germanic-derived word has the advantage of being only one syllable, and we're lazy. Still, it's a thing, especially in science fiction, to use the two interchangeably. They can both be a name for our spinning home, but, as with Moon/Luna, the adjectival form of the Latin is far less awkward to deal with. "The Terran mating ritual is complex and fascinating" just sounds better than "The Earth peoples' mating ritual is complex and fascinating." And don't get me started on the infantile "Earthling," which, while not an adjective, conjures up images of menacing little green dudes with clear helmets and ray guns. This, however, leaves "Earth" as the only planet whose English name (the IAU generally uses English as a lingua franca, which also amuses me because "lingua franca" isn't exactly Germanic) wasn't inspired by the Romans. We might consider doing that with the seventh planet from the sun, by the way, in an effort to finally put an end (heh heh he said end) to the tired old puerile jokes about its name. What strikes me as odd is that the Romans weren't even stellar astronomers. That's a pun, see; I wasn't saying they never looked at the stars or planets, just that when it came to skygazing, they weren't known for building big-ass stone calendars like whoever built Stonehenge, Göbekli Tepe, or Nim Li Punit. I suppose they did give us our Terran civil calendar, but as regular readers know, I have ambivalent feelings about that one: It isn't really related to anything but the Earth, Sun, and background stars—and even there, it's only loosely connected to equinoxes and solstices (don't get me started on how it leaves the Moon out entirely, except for acknowledging a subdivision called "months"). 
And, let's be real here, it took over a millennium for them, or at least Pope Gregory XIII, who was in Rome so he counts, to figure out how to do it with decent accuracy. So we have Earth and Terra, and they're synonyms. Right? Wrong. There's one important difference: both words don't just refer to the planet. Earth is also a word (not capitalized unless it's at the beginning of a sentence like this one) for dirt or soil. I had a soil mechanics professor in engineering school who got pedantic about not calling soil "dirt," so of course we called it "dirt" behind his back. But this leads to amusing things like calling bulldozers and similar machines "earth movers," which confused the hell out of six-year-old Me: "You mean something can MOVE the EARTH?!" Terra, on the other hand, when not referring to the world, describes the surface of it, not its three-dimensional depth. We still see this usage in Latinate words like "terrain" or "territory." This, too, leads to amusement, because I've seen references to "lunar terrain" or "Martian territory." Now, full disclosure, I knew all of this stuff already, but I did use Google (ignoring AI slop) to verify that my memory was somewhat accurate. And it was. (I also had to check the spelling of Göbekli Tepe, which is in Türkiye and supposedly the most ancient known calendar site.) What I've never known, because it's impossible to know everything, is what in the living fuck "cotta" is. So let's go back to Google, and ignore its atrocious AI. Dictionary result: "a short garment resembling a surplice, worn typically by Catholic priests and servers." Oh... it's related to the word "coat." Suddenly, everything clicks into place like terracotta tiles: "earth coat." This explains its use in architecture, but- Hang on, what was that? It's not from "cotta," but from "cocta," which translates to "cooked?" Dammit. Okay, well, at least it's descriptive: baked earth. Except. 
Remember when I elucidated the difference between "terra" and "earth?" (I mean the uncapitalized versions.) Yeah, you're getting it: you're not baking the surface of the Earth to make your ceramic tiles or soldiers or whatever; you're firing up the kiln to harden the clay you dug up from it. Language is weird. |
Another one from the BBC today, though it's a few years old. Because you suspect they're faking it? Have you ever come across someone who is incredibly kind and morally upright – and yet also deeply insufferable? "Morally upright" is a loaded phrase, though. Some people think it's a moral imperative to hate entire groups of people for who they are, for example. But I think the article is talking about their actions relative to your own standards. They might try to do anything they can to help you or engage in a host of important, useful activities benefiting friends and the wider community. Those absolute wankers! How dare they?! Yet they seem a little bit too pleased with their good deeds and, without any good reason to think so, you suspect that there’s something calculated about their altruism. Okay, yeah, it's the attitude I can see being the problem, not necessarily the deeds. Yet this scepticism is a known behaviour, described by psychologists as “do-gooder derogation”. Oh, come on, shrinks; you can come up with a catchier name than that. And while the phenomenon may seem to be wholly irrational, there are some compelling evolutionary reasons for being wary of unreciprocated altruism. Lovely. Now we'll get treated to a slew of unsupported evolutionary psychology hypotheses. One of the earliest and most systematic examinations of do-gooder derogation comes from a global study by Simon Gächter, a professor of psychology at the University of Nottingham in the UK. Nottingham... Nottingham... now why does that sound familiar? Something about a bad guy who became a folk hero. It's right on the tip of my brain. I'm just saying, maybe there's a bit of bias in Nottingham. Like many studies into altruism, his experiment took the form of a “public goods game”. Normally, this is where I'd close the window. Games don't necessarily reflect real life. Lately, I've been playing a video game as an assassin/thief. Would I do that shit in meatworld? Hell, no. 
Not just because there tend to be severe penalties if caught, but because I think it's wrong to harm people who aren't trying to harm you. I've also played similar games as a fine, upstanding, knight in shining armor type. Point is, it's a game, and it doesn't translate to reality. You want another example? Play Monopoly with your friends sometime. See how long they stay your friends. (The article describes the psych game's rules and methods.) Somehow, selfishness and selflessness were considered to be morally equivalent. Well, I do tend to think that everyone does everything for, ultimately, selfish reasons. It's just that some of the stuff we do, like donating to a relief fund or taking care of a sick person, also helps other people. We wouldn't do it if it didn't feel good on some level. Strikingly, this tendency seems to emerge early in life – at around the age of eight. Before which, presumably, we don't think about other people at all. To understand the origins of this seemingly irrational behaviour, we need to consider how human altruism emerged in the first place. According to evolutionary psychology... Groan. ...hardwired human behaviours should have evolved to improve our survival and our ability to pass on our genes to another generation. In the case of altruism, generous acts could help us to foster good relationships within the group which, over time, help to build social capital and status. 1. We're not machines. "Hardwired" is a shit metaphor. 2. Okay, at least they use "could" instead of "did." Importantly, however, reputation is “positional” – if one person rises, the others fall. This can create a strong sense of competition, which means that we’re always alert to the possibility that other people are getting ahead of us, even if they are achieving their status through altruism. 
We’ll be especially resentful if we think that the other person was only looking for those reputational benefits, rather than acting out of a genuine interest in others, since it may suggest a cunning and manipulative personality more generally. Well. That seems plausible. This doesn't mean it's true or false. It, like most evo-psych, appears to be largely guesswork, working backwards from observed behavio(u)rs. All this means that altruistic behaviour can make us walk a metaphorical tightrope. We need to balance our generosity perfectly, so that we are seen as cooperative and good, without arousing the suspicion that we are acting solely for the status. See the problem here? That bit's still couched in language implying manipulation and calculation, rather than, you know, just wanting to help people for the sake of being a good family member or neighbor. Ryan Carlson, a graduate student at Yale University, agrees that altruistic behaviours are often appraised from multiple angles besides the generosity of the act itself. “We don’t just value altruism – we value integrity and honesty, which are other signals of our moral character,” he says. Yep, people do seem to appreciate integrity and honesty. So if you can fake those qualities, you're golden. The research might also help us to avoid accidental faux pas when we act altruistically ourselves. At the very least, the research shows that you should avoid noisily broadcasting your good deeds. I've never felt comfortable being noisy about any good I might accidentally do in the world. Ultimately, the only fool-proof way to avoid do-gooder derogation may be to do your best deeds in complete secret. Oh, I don't think that's the only way. "Don't do any good deeds" is another. If, you know, you're a complete bellend. Okay, so, ultimately, the article is yet another ad for a book (which is promoted in the endnote therein). 
So, where does this author stand on the altruism scale, if he's apparently doing the good deed of illuminating a quirk of human psychology... for profit? No, I don't have an answer for that. Making money is not in and of itself a bad thing; it's only when people pass some arbitrary threshold of greediness, and/or do it fraudulently, that I have a problem with it. I just find the potential contradiction, as usual, amusing. |
Today, I show that the BBC tackles the most important questions of our time, making them accessible to curious readers of English everywhere. What mystery is that? Do they have souls? Do they have brains? Why are they so damn cute? (Article includes pictures of cute kitties.) Garfield, Puss in Boots, Aristocats' Toulouse – cultural icons maybe, ginger most certainly. Also fictional. You make cartoon cats orange because the color pops. And now scientists across two continents have uncovered the DNA mystery that has given our furry friends, particularly males, their notable colour. While it's good for cats to maintain a bit of mystery, it's also good to learn more about how DNA works. I just hope their curiosity didn't kill any cats. They discovered that ginger cats are missing a section of their genetic code, which means the cells responsible for their skin, eye and fur tone produce lighter colours. Gingers have a reputation for appearing to be brainless, or at least not as clever as other kitties. I guess it turns out they are missing something. The breakthrough has brought delight to the scientists but also the thousands of cat lovers that originally crowdfunded the research. I'm including this paragraph before anyone gets too outraged about who's paying for it. For decades scientists have observed that cats with completely ginger colouring are far more likely to be male. This tallies with the fact that the gene is carried on the X chromosome. And we've also known that calicos (who usually have some ginger patches) are far more likely to be female. The ARHGAP36 gene is also active in many other areas of the body including the brain and hormonal glands, and is considered important for development. The researchers think it is possible that the DNA mutation in the gene could cause other changes in these parts of the body linked to health conditions or temperament. Huh... 
so maybe the correlation in cats between being ginger and certain personality traits isn't completely off-base. "Many cat owners swear by the idea that different coat colours and patterns are linked with different personalities," said Prof Sasaki. "There's no scientific evidence for this yet, but it's an intriguing idea and one I'd love to explore further." Well, leaving aside for the moment that cats don't have "owners," it would be an interesting line of study. Though I'd think that coming up with objective standards for behavior (or behaviour for British cats) is much trickier than assessing the color (or colour) of their fur. Especially when you'd also need to control for nurture / nature causes. But whatever the science, there's no denying that they're good kitties who deserve treats. |
Apparently, there's still stuff to learn about the most famous pissing contest in history. From Big Think: The secret reason the USA beat the USSR to the Moon Sixty years ago, the Soviet Union was way ahead of the USA in the space race. Then one critical event changed everything. "Secret reason?" Well, now that you're telling us, it's not a secret anymore, is it? On July 20, 1969, our species achieved a dream older than civilization itself, as human beings set foot on the surface of another world beyond Earth when they walked on the surface of the Moon, some 380,000 km away. Which, as regular readers might remember, I consider to be not only the most significant thing humans have achieved, but the most significant thing we can ever achieve. Yes, more so than sliced bread. Slightly more so than the Skip Intro button. That doesn't change my categorization of what got us there as a pissing contest. For my country, at least, it's been downhill ever since. From Space Race to Race to the Bottom in less than 60 years. (For comparison, the time between the Wrights' first controlled, powered, piloted flight and the Moon landing was just a little over 65 years.) If any nation was going to do it, most thought it was going to be the Soviet Union. Mostly because a large, centralized government / economy is more suited to massive projects like this than a distributed, laissez-faire system. But the pissing contest came up because we had to prove that Freedom and Capitalism would always win out over Dictatorship and Communism. So to do so, we had to adopt some of their philosophies (sadly, not the one about workers' rights, which, to be fair, they mostly only paid lip service to, as well). After the disastrous Apollo 1 fire, it seemed like a foregone conclusion that the Soviets would be the first to walk on the Moon. Yet they never even came close. 
There is a very compelling (to me, at least) series on Apple TV+ called For All Mankind that considers an alternate universe where they did, in fact, win the race to the Moon. It is, of course, fiction. Why not? The answer lies in a name that most people have probably never heard of: Sergei Korolev. Just a wild guess here, but judging by the name, he was from the USSR. Long before humanity ever broke the gravitational bonds of Earth, there were a few scientists working to pioneer a new scientific field: theoretical astronautics. This is one of the fields generally lumped under the category "rocket science." In the early days, all of these concerns were mulled over by theorists alone. A few pioneers stand out in the history of the early 20th century... Let's see... American, French, German... But before any of them came Konstantin Tsiolkovsky, who was the first to understand the vital relationship between consumable rocket fuel, mass, thrust, and acceleration. ...and Russian. You might recognize Tsiolkovsky as the guy who conceptualized the space elevator, which, unlike rocketry, has yet to be put into practice. Sergei Korolev was Tsiolkovsky’s pioneering experimental counterpart, who dreamed of traveling to Mars and launched, in 1933, the first Soviet liquid-fueled rocket and the first hybrid-fueled rocket. In 1938, however, he became a victim of Stalin’s Great Purge. What is it with dictators and purges? They never end well for anyone. Once World War II ended, both the USA’s and the USSR’s space programs were boosted by the addition of captured German scientists... Unlike the USA, though, the legacy of Tsiolkovsky gave the Soviets an initial edge. I'm pretty sure they were also better at keeping secrets. Of course, as with any technology, rocket science can be used for good or evil. The same tech that (mostly) peacefully explored space allowed for development of ICBMs. 
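That "vital relationship" Tsiolkovsky worked out is what we now call the rocket equation: the speed change a rocket can manage equals its exhaust velocity times the natural log of its starting-to-ending mass ratio. A minimal sketch; the numbers below are illustrative round figures, not any real vehicle's specs:

```python
import math

def delta_v(exhaust_velocity, initial_mass, final_mass):
    """Tsiolkovsky's rocket equation: velocity change from burning
    propellant down from initial_mass to final_mass (any consistent
    mass units; result is in the units of exhaust_velocity)."""
    return exhaust_velocity * math.log(initial_mass / final_mass)

# Illustrative: a 3 km/s exhaust and a 10:1 mass ratio gives
# about 6.9 km/s, shy of the ~7.8 km/s needed for low Earth
# orbit. Hence staging: dropping dead weight resets the ratio.
dv = delta_v(3.0, 10.0, 1.0)
```

The logarithm is the brutal part: doubling the propellant doesn't come close to doubling the delta-v, which is why both superpowers' engineers obsessed over mass ratios and multi-stage designs.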
The article goes into some detail about this, and Korolev's contribution to it—which could then be adapted to send humans into [echo chamber effect] spaaaaaace. He was declared fully rehabilitated, and began advocating for using the R-7 to launch a satellite into space, met with utter disinterest from the Communist Party. But when the United States media began discussing the possibilities of investing millions of dollars to launch a satellite, Korolev seized his chance. In less than a month, Sputnik 1 was designed, constructed, and launched. Hence the beginning of the pissing contest. Also one advantage of not having a (mostly) free media: helps to keep secrets secret. Less than a month later, Sputnik 2 — six times the mass of Sputnik 1 — was launched, carrying Laika the dog into orbit. What the article glosses over is that, while they had the tech to send a dog into space, they did not, at the time, have the capacity to bring her back. Poor pup. On April 12, 1961, Korolev’s modified R-7 launched Yuri Gagarin into space: the first human to break the gravitational bonds of Earth, and also the first human to orbit Earth. Him, they were able to bring back. With the 1964 fall of Khrushchev, Korolev was put in sole charge of the crewed space program, with the goal of a lunar landing set to occur in October of 1967, which would mark the 50th anniversary of the October Revolution. There was just one complication: when the October Revolution took place, Russia was still using the Julian calendar, which, over time, had drifted off of the Gregorian calendar which most of the rest of the world had adopted (by 1917, the gap was 13 days). The October Revolution thus took place in November. But it was not to be: Korolev entered the hospital on January 5, 1966, for what was thought to be routine intestinal surgery. Nine days later, he was dead from what was reported as colon cancer complications, although many to this day suspect foul play. Well... okay, two complications. 
Also, everyone suspects foul play when a prominent Russian dies. It's assumed unless there's overwhelming evidence to the contrary. Without Korolev as the chief designer, everything went downhill quickly for the Soviets. While he was alive, Korolev fended off attempted meddling from a variety of rival rocket designers, including Mikhail Yangel, Vladimir Chelomei, and Valentin Glushko. But the power vacuum that arose after his demise proved catastrophic. Why does that sound familiar? The names don't, though. I hadn't heard of them any more than I'd heard of Korolev. Partly, this is because the Soviets kept them secret. The first flight under Korolev’s successor had ended in the worst disaster imaginable: the first in-flight fatality of any space program conducted by any space agency on Earth. This would prove not to be a one-off event, either, as further setbacks suddenly became the norm. Gagarin, the first human in space, was tragically killed in a test flight in 1968. Mishin, that successor, developed a drinking problem, coincident with multiple N-1 rocket failures and explosions that followed the Soviet Space program throughout 1969. Look, let's be fair, here: first of all, Gagarin's death, while tragic, was (if we weren't lied to about it) during a routine training flight of a MiG, not a rocket test flight. Second, I strongly doubt that having a drinking problem would slow a Russian down. But the death of Korolev, and the mishaps under his successors, are the real reason why the Soviets lost their lead in the space race, and never achieved the goal of landing humans on the Moon. And yet, if it weren't for the Russians, the US would have whiffled and waffled for many more years before launching someone to the Moon and, very likely, a third party (India? China? France? Probably France) would have gotten there first. (Suddenly, I have an idea for my own alt-history livre qui parle des français sur la Lune. 
Obviously, it would have to involve a mixed-gender crew and the various positions and techniques they'd invent in 1/6 G.) However we felt about the USSR and feel about the Russian government now, their accomplishments, and the precedents they set, were instrumental in us finally putting boots on the Moon. A stunt? Sure. Pissing contest? Absolutely. But one that provided scientific and technological discoveries and breakthroughs. For all of humankind. Even the French, whose actual contribution to the Space Race was sending the first and only cat, Félicette, into space on a suborbital flight (unlike Laika, she came back). For humankind, sure, but not so much for dogs. |
The answer to the headline question from today's The Conversation article is, obviously: Right here where I'm sitting. I’ve spent decades trying to understand general relativity, including in my current job as a physics professor teaching courses on the subject. I know wrapping your head around the idea of an ever-expanding universe can feel daunting – and part of the challenge is overriding your natural intuition about how things work. Part of the point of science is to tell us when intuition and the misnamed "common sense" fail us. Unfortunately, not everyone accepts these overrides, maybe because they're convinced they're the center of the universe. Or maybe they think the thoughts they come up with are just as valid as those of people who are educated and trained for this sort of thing. For instance, it’s hard to imagine something as big as the universe not having a center at all, but physics says that’s the reality. As with many things in general relativity, that really depends on your point of view. And so we see the apparent contradiction there, right? On the one hand, perception of reality changes with point of view. On the other, my ignorant musings aren't as valid as a trained scientist's careful research and experimentation. If someone says "the Earth is flat," isn't that just as valid as centuries of observations supporting its roundness? In short: no. On Earth, “expanding” means something is getting bigger. Like our waistlines. This idea is subtle but critical. It’s easy to think about the creation of the universe like exploding fireworks: Start with a big bang, and then all the galaxies in the universe fly out in all directions from some central point. Which is why calling it the "Big Bang" is compelling, but confusing. Even my preferred nomenclature, the Horrendous Space Kablooie (thanks, Calvin), is misleading. 
It’s not so much the galaxies that are moving away from each other – it’s the space between galaxies, the fabric of the universe itself, that’s ever-expanding as time goes on. When your mind gets blown, though, it is a conventional explosion. Metaphorically speaking. A common analogy is to imagine sticking some dots on the surface of a balloon. As you blow air into the balloon, it expands. Because the dots are stuck on the surface of the balloon, they get farther apart. Though they may appear to move, the dots actually stay exactly where you put them, and the distance between them gets bigger simply by virtue of the balloon’s expansion. As the article goes on to point out, that analogy is inadequate in a few ways. Another one I've heard is you put a raw loaf of raisin bread in the oven, and as it rises, the raisins get pushed further apart. That's incomplete, too. The thing we think of as the “center” of the balloon is a point somewhere in its interior, in the air-filled space beneath the surface. But in this analogy, the universe is more like the latex surface of the balloon. The balloon’s air-filled interior has no counterpart in our universe, so we can’t use that part of the analogy – only the surface matters. It's like asking "where's the center of the Earth's surface?" If you restrict your search to the surface itself, you'll never find a center, only important points like the poles, or New York City. But none of these are an actual "center," no matter how many jokes you make about it. One of the most mind-blowing things, though, isn't addressed by the article: that the universe is, to our limited perception, inside-out. The farther away you look, the further back in time you see, until the entire view is surrounded by the detectable remnants of the oldest matter/energy in the Universe. A consequence of this is that we do appear to be in the center of the universe. But then, so does every hypothetical being standing on every other planet out there. 
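That "everyone appears central" point can be sketched with a toy scale-factor model; the numbers and names below are purely illustrative, not anything from the article:

```python
# Toy model of expanding space: "galaxies" keep fixed comoving coordinates,
# and every physical distance is the comoving separation times a growing
# scale factor. Nothing moves *through* space; space itself stretches.
comoving = [0.0, 1.0, 2.0, 5.0]  # arbitrary galaxy positions on a line

def distances_from(observer, scale):
    """Physical distances from one galaxy to all the others."""
    here = comoving[observer]
    return [abs(x - here) * scale for x in comoving if x != here]

# Double the scale factor: every distance doubles, no matter which galaxy
# does the measuring -- so each one sees all the others receding and
# concludes, with equal (in)justification, that it is the "center."
for observer in range(len(comoving)):
    before = distances_from(observer, 1.0)
    after = distances_from(observer, 2.0)
    print([a / b for a, b in zip(after, before)])  # → [2.0, 2.0, 2.0] every time
```

It also shows the Hubble-like pattern for free: between the two snapshots, the farther galaxies moved farther, so apparent recession speed is proportional to distance, from every vantage point.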
And this is why appearances can't be taken at face value. |
Continuing with yesterday's theme, because the random number generator has gained sentience and likes to have a laugh at my expense, a three-year-old article from The Guardian: ‘What are our lives for?’: a philosopher answers kids’ existential questions ![]() What happens when Plato mixes with playtime? Philosopher Scott Hershovitz answers the questions that confound children and adults alike "Kids, you're here because either your parents felt the existential dread of certain mortality, or they were just trying to steal five minutes of fun from their dreary, meaningless lives when the condom broke, thus condemning them to 18+ years of misery." I’ve got two boys, Rex and Hank. They have been asking philosophical questions since they were little, and they try to answer them too. I honestly don't remember if I was that way when I was a kid. I know I was curious about science, which my parents absolutely encouraged. I feel like if I'd asked my dad what the meaning of life was, his answer would be either something completely absurd like "ducks," or he'd be like "I don't know. Go ask your mom." But I don't think the question ever occurred to me, or, if it did, I shrugged it off much like I do all these years later. If God created everything, who created God? Leyha, 7 Does God exist? I don’t know, but I’m sceptical. And your question points to one of the reasons why. Imagining that there’s a God doesn’t help us explain anything. It just raises new questions, which are at least as mysterious as the old ones. I figure the Western conception of God is usually depicted with a long, flowing beard because He didn't have access to Occam's Razor. I sometimes feel like I’m the only real person and everyone else is a robot. How can I know if that’s true? Ursula, 8 Well, Ursula, you're only asking that because you were programmed to. If they were really good robots, you wouldn’t be able to tell, at least not without cutting them open. 
And let’s not do that, since they would get hurt if your hypothesis was wrong. They'd also get hurt if your hypothesis was right. A philosopher named Descartes once tried to imagine that everything he believed was wrong. He didn’t suppose the people around him were robots, since they hadn’t been invented. Instead, he imagined that an evil demon was filling his head with falsehoods – that none of the people or things he thought he knew actually existed. Except, presumably, the demon. Ask yourself the same question, Ursula. Is there any reason to think that you, and you alone, are real? Probably not. Unless you’re the main character in a movie and I’m just another robot trying to trick you … This made me chuckle, because it sounds like something I would say to some snot-nosed proto-solipsistic kid. Why are there numbers? Sahil, 5 Specifically to annoy you, Sahil. Where was I before I was here? Josh, 3 Nowhere! The universe has been around for billions of years, but you weren’t part of it until very recently. And, one day, you'll be nowhere again! (The next few questions tackle that angle on the question. I don't feel the need to snark on them further.) What are our lives for? Caspar, 5 Well, Caspar, they are like cogs in a giant machine. As long as the cog works, everything works. As soon as it stops working, you're replaced by a different cog. Seriously, though, this leads into the part I really wanted to quote: Lots of people want to know what the meaning of life is. They’re searching for something that will help it make sense that we’re here, and maybe tell us how to live. But I think they’re making a mistake. The universe doesn’t care about us... But we are here, and we should care about each other, even if the universe doesn’t care about us. There may be no meaning to our lives. But we can find meaning in our lives by filling them with family and friends and fun – and projects that make the world a better place. 
You get to decide what your life is for, Caspar, so try to make it something cool. Because that's pretty close to my own thoughts on the subject—at least on those rare occasions when I stop making jokes and get serious about it. Why is it bad to have everything I want? Abraham, 4 ...Last, there’s a song by the Rolling Stones called You Can’t Always Get What You Want. That’s true. And you have to learn how to be disappointed without making yourself – and everyone else – miserable. And I'm just quoting this to emphasize the combined absurdity and absolute greatness of a philosopher quoting the Rolling Stones. Why do people end up doing things that they don’t want to do? Sarang, 4 Money. Do the needs of the many outweigh those of the few, or do the needs of the few outweigh those of the many? Arthur, 7 Seven is probably a bit young to be watching Wrath of Khan. Arthur, did you get help with your homework? Or did some grownup put you up to asking this question? I’m a little suspicious, but I’ll answer anyway. I'm a lot suspicious. Still, it's a valid question. As for the answer, well, the author took a lot of words to say "it depends," as philosophers are wont to do. Is your imagination made of atoms? Josie, 7 Now that kid's almost certainly going to become a scientist or philosopher. Or, better yet, both. Unless, of course, the world beats them down just like it does everyone else. Lots more at the article. As usual, the philosopher never seems to mention comedy as critical to the meaning and/or purpose of life. I guess that's a "me" thing. Philosophers don't generally have senses of humor (or humour, considering the source). We have a different name for philosophers who are funny; we call them "comedians." |
Another article on meeeeeaaaaaniiiiing, this one from Quartz: The secret to a meaningful life is simpler than you think ![]() Some people seem to spend their whole lives dissatisfied, in search of a purpose. But philosopher Iddo Landau suggests that all of us have everything we need for a meaningful existence. Let's see... money, check; humor, check; beer, check... yep, he's right. Even if he does sound like he got his name from a Star Wars background character. According to Landau, a philosophy professor at Haifa University in Israel and author of the 2017 book Finding Meaning in an Imperfect World... I'm pretty sure ads don't do anything to make our lives meaningful, unless we're the ones profiting from them. ...people are mistaken when they feel their lives are meaningless. The error is based on their failure to recognize what does matter, instead becoming overly focused on what they believe is missing from their existence. That's a lot of words to say what I've known for a very long time, which is that contentment stems not from having what you want, but from wanting what you have. In other words, Landau thinks that people who feel purposeless actually misunderstand what meaning is. "But I know! And I can tell you, for just $29.95 for the hardcover or $29.45 for the Kindle edition!" Look, I'm not really ragging on someone trying to make money. Just the practice of disguising ads as articles. I know I link a lot of them in here, but that's because a) this is a writing website, so a book promotion is at least on-topic; and b) some of the very few articles that don't require a subscription or are otherwise behind a paywall are that way because they are ads. Those who do think meaning can be discerned, however, fall into four groups, according to Thaddeus Metz, writing in the Stanford Encyclopedia of Philosophy. Some are god-centered and believe only a deity can provide purpose. 
Others ascribe to a soul-centered view, thinking something of us must continue beyond our lives, an essence after physical existence, which gives life meaning. Then there are two camps of “naturalists” seeking meaning in a purely physical world as known by science, who fall into “subjectivist” and “objectivist” categories. Now, that, I find interesting, though it does strike me as just another example of humans' obsession with putting everything into nice little boxes with neat little labels. The first two categories there are pretty self-explanatory, I think, and the article explains the difference between subjectivist and objectivist naturalism. For those who feel purposeless, Landau suggests a reframing is in order. He writes, “A meaningful life is one in which there is a sufficient number of aspects of sufficient value, and a meaningless life is one in which there is not a sufficient number of aspects of sufficient value.” Yeah, well, personally, I'd add: "And no one else gets to say whether someone's life has a sufficient number of aspects of sufficient value." Landau argues that anyone who believes life can be meaningless also assumes the importance of value. In other words, if you think life can be meaningless, then you believe that there is such a thing as value. You’re not neutral on the topic. While I feel that this is probably true, I also think it's trivial. It's almost exactly like saying "If you think something is worthless, then you believe in the existence of worth." It relates to my musings on how a hole is always defined by what it's a hole in, rather than some arbitrary volume that we call a "hole." Some might protest that Landau’s being simplistic. Some might make things more complicated than they need to be. In fact, there are even less complex approaches to meaningfulness. 
In Philosophy Now, Tim Bale, a professor of politics at Queen Mary University of London in the UK, provides an extremely simple answer: “The meaning of life is not being dead.” Oooh, another Monty Python fan. Casey Woodling, a professor of philosophy and religious studies at Coastal Carolina University in South Carolina, proposes in Philosophy Now that the question of meaningfulness itself offers an answer. “What makes a human life have meaning or significance is not the mere living of a life, but reflecting on the living of a life,” he writes. Yeah, well, I have a different take on that: a human life is not much different from a cat life or a beetle life. Birth, life, maybe reproduction, death. Do beetles care about meaning? I doubt it. Cats? Almost certainly not, judging by the ones who live with me. So why are we so profoundly concerned with it? Just because only we (as far as we know) have the capacity to communicate it to others? The next section kind of agrees with me there: In the Eastern philosophical tradition, there’s yet another simple answer to the difficult question of life’s meaning... [Lao Tzu] suggests meaning comes from being a product of the world itself. No effort is necessary. Well, I'm certainly not here to resolve all the philosophical differences between East and West. Or to sell you anything. I just find this stuff interesting. Does it have value? That's up to you. |
After yesterday's screed, I find it necessary to emphasize that "deterministic" doesn't imply "predictable." Fortunately, today's random number pulled up this article, from MIT Press Reader, about someone who tried to predict everything. The Blunders of a 16th-Century Physician-Astrologer ![]() Horoscopic prediction is an inherently uncertain field, as Italian polymath Gerolamo Cardano had occasion to confirm more than once. "Inherently uncertain?" I'd have gone with some variation of "complete garbage" or "utter twaddle," depending on how many British articles I'd been perusing recently. Some of us will remember that Ronald Reagan and his wife Nancy consulted astrologer Joan Quigley before any major presidential decision. In fairness, this probably resulted in some better outcomes than Reagan just going with his gut. That the celestial bodies are not always reliable became evident when no astrologer was able to predict that on March 30, 1981, at 2:27 p.m., EST, President Reagan would be shot in the chest during an assassination attempt. But let's stop and consider for a moment: what if one of them, somehow, did? Would predicting a Presidential assassination attempt (during which other, less obnoxious people actually died) be of any use? I suppose one could say "you will be shot if you stand at point x at time t." So Ronald McDonald, believing unquestioningly in the science and predictive power of astrology, goes to great lengths to not stand at point x anywhere near time t. Then no one shoots at him. This nullifies the prediction. And could anyone, besides the astrologer, say with any confidence, "the only reason you didn't get shot at was because you heeded my advice?" Put another way, I could say, "When you go to the beach this weekend, don't go in the ocean between 2 and 3 pm, because if you do, you'll be bitten by a shark." 
So you get out of the water at 2, back in at 3, and you can spend the rest of your life telling everyone what a great forecaster I am because you didn't get attacked by a fish. Quigley peremptorily affirmed that she could have predicted the regrettable episode, because it was “very obvious,” if only she had drawn up his charts. Unfortunately, her occupations had precluded her from doing this. Having known several astrologers, when I read this, I laughed. Absolutely something one of them would say. Today, people’s blind belief in the power of astrology to reveal the future strikes us as absurd, because our mental stance is radically different. Who's this "our" person? Plenty of people don't find it absurd. Also, don't disparage absurdity by conflating it with bullshit. (As I've said before, I find astrology interesting as folklore and as the precursor to astronomy; it's still bullshit.) Notable among all divinatory physicians was a man of extraordinary eccentricity and uncommon genius: the Italian polymath from Pavia (some say Milan), Girolamo Cardano (1501–1576); his name is usually transcribed in English as Jerome Cardan, a custom that will be followed here. For context, this was a few years before Shakespeare wrote "The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings." Or anything else that we know of. In his clinical work, Cardan revived metoposcopy, the art of divination by looking at the lines of the face, especially of the forehead. Okay, that word was a new one for me. Also bullshit, but I wonder how much of that translated into the equally bullshit phrenology. Also, I should emphasize that, at the time, science really hadn't been invented yet, and the practice of medicine in general was more empirical and superstitious than evidence-based. The article delves into an overview of metoposcopy, then: The basic underlying hypothesis is simple: The forehead is the scroll on which God wrote His sublime word. 
In the legend of the Golem, which has Kabbalistic origins, the animating essence is usually a Hebrew word etched upon the forehead of the creature. I always thought we should do that with humanoid robots just to fuck with people. An aside: the most famous Golem story involved the city of Prague. Prague is a Czech city. The word "robot" came to us from the Czech language. Robots, at least the ones that are vaguely humanoid, are basically techno-golems. Also, the Czechs are a Slavic people, and Slav is the root of our word "slave," while "robot" itself derives from the Czech robota, meaning forced labor. I find these coincidences amusing. Well, except for the part where someone, somewhere, considered an entire culture to be a slave race. That's not so amusing, regardless of which race. Furthermore, divination never lacked fervent followers, as its practitioners thrive under the cloak of infallibility. If the event predicted actually occurs, the prognosticator’s clairvoyance will be deemed miraculous and the clairvoyant a being of preternatural acumen. But if the prediction fails to take place, the diviner can concoct elaborate reasons that will explain the failure, deflect the blame, and, in so doing, flaunt a profound learning in the esoteric art of divination. That trick is hardly limited to divination. Thanks to his international renown, he was called to the then remote and barbarous Scotland... "Then?" Okay, okay, I'm kidding. Please don't play bagpipes at me. ...to provide medical care for His Excellency, Bishop John Hamilton (1512–1571). Then, asked to draw his horoscope, the astrologer-physician predicted that Hamilton would live happily, but would be in danger of dying from cardiac disease. What actually happened was that the bishop was taken prisoner during the capture of Dumbarton Castle, summarily condemned to execution, and hanged at Stirling in 1571, thus achieving the dubious distinction of being the first Scottish bishop ever to die at the hands of an executioner. 
"Whew, he luckily missed dying of a heart attack!" There are a couple of other examples of his confident predictions that turned out to be, and I'm using the literary device of understatement here, slightly off. There is a tradition that Jerome Cardan had engaged in all sorts of astrological calculations by which he determined the exact date — year, month, day, and hour — of his death. The fatidical moment approached fast, yet nothing seemed to indicate that he was about to breathe his last. Therefore, our man decided to lock himself up, refused to eat, and let himself die. Now that, mes amis, is what I call absolute dedication to one's closely-held beliefs. If it's true. Which it probably isn't. He died in Rome, aged 75, while under the protection of Pope Gregory XIII, who had recognized his outstanding merits. And, though the article doesn't say this, this was the same Gregory who codified the civil calendar that the world uses to this day (and which I rail against from time to time). I guess some things are predictable after all. Like the Earth's orbit around the Sun. |
Stepping into a quagmire today, because the author of this Aeon piece is, as stated in her bio, "a philosopher specialising in theology and natural science." So I'm going to have issues from the very beginning, as I consider theology a "subject without an object," in the words of someone smarter than I am whose name I can't find right now. Many worlds, many selves ![]() If it’s true that we live in a vast multiverse, then our understanding of identity, morality and even God must be reexamined The word "if" is doing most of the work in that subhead. As far as I'm aware, the idea of a multiverse, while making for some interesting (and not so interesting) fiction, is not something that can be supported or falsified scientifically. It arises as a possible, and terribly misunderstood, consequence of one of many interpretations of quantum physics. Recently, I was caught on the horns of a dilemma. I had a decision to make and, either way, I knew my life would follow a different track. On one path, I accept a job offer: it’s an incredible opportunity, but means relocating hundreds of miles away, with no social network. On the other, I stay in Oxford where I’d lived for a decade: less adventure, but close to my friends and family. Both options had upsides and downsides, so I wished that I could take the job and turn it down, somehow living each life in parallel. Well… there was potentially a way to make this happen. I could have my cake and eat it too. One of the most misunderstood things about multiverse speculation is what would cause the Universe to split. It supposedly happens, if at all, when a quantum entity such as an electron stops being described by a spread-out probability function and acquires a definite state. It's not because of human choice. As electrons do this all the time all over the universe, the number of universes split off in this way is a number so large as to be incomprehensible to us (but still just as far from infinity as the number 1 is). 
There are smartphone apps that can help you decide between two options by harnessing the unpredictable quirks of quantum mechanics. But this is no ordinary coin toss, where randomness decides your fate. Instead, it guarantees that both choices become realities. It guarantees no such thing, and even if it did, there would be no way to get your money back because the "both realities" thing cannot be verified. In principle, though, this would produce results more truly random than most methods, including the proverbial coin toss and the simple app I use to choose these articles from a list, so it could have its uses. It’s inspired by the ‘Many-Worlds’ interpretation of quantum mechanics, first proposed by the physicist Hugh Everett III in his doctoral dissertation in the 1950s. He argued that our Universe branches into multiple worlds every time a quantum event takes place – and thousands happen every second. While "thousands" implies something less than "millions," the actual number is exponentially higher than even millions. But my main quibble here is the conflating of "universe" and "world." Unless you're speaking Hebrew, those words are different: the universe is also exponentially larger than the world. As a philosopher of religion, I am interested in how this mind-boggling scientific theory might force us to reexamine even our most deeply held beliefs. One, it's not a theory; it's a hypothesis. Two, when a theologian says something like this, what they really mean is "how do we still fit God into our world-views, given this information?" It's like they almost get it, but not quite. In fact, I believe that the Many-Worlds interpretation of quantum mechanics encourages us to radically reconceptualise our understanding of ourselves. Perhaps I am not a single, unique, enduring subject. Perhaps I am actually like a branching tree, or a splitting amoeba, with many almost identical copies living slightly different lives across a vast and ever-growing multiverse. 
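For what it's worth, those decision apps are trivial to imitate classically; the only thing the "quantum" versions change is where the entropy comes from. A hedged sketch, with no actual app's API involved, just Python's cryptographic randomness standing in for quantum hardware:

```python
import secrets

def decide(option_a, option_b):
    """Pick one of two options using a single unpredictable bit.

    A "quantum" decision app would source this bit from quantum hardware;
    secrets.randbits(1) draws on the OS entropy pool instead. Either way,
    you only ever observe ONE outcome -- the "both branches happen" claim
    is precisely the part that can never be verified.
    """
    return option_a if secrets.randbits(1) else option_b

print(decide("take the job", "stay in Oxford"))
```

This is also roughly how my article-picking app works, minus the marketing about parallel universes.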
Even if Many-Worlds is fact, which I'm not saying it is, there's a really important difference between that hypothesis and trees or amoebae: a tree is an entity that we can see in its entirety, and each branch continues to contribute to the whole; the daughters of a splitting amoeba continue to co-exist and might even bump into each other; but once the Universe splits off, that's it: no further contact between the branches. At all. Ever. Not even in "theory." Only in popular fiction. (I've written such fiction myself, though I wouldn't call it "popular," before I realized it was just adding to people's confusion about this sort of thing.) I also believe that this picture encourages us to rethink our ideas about moral responsibility, and what religion tells us about God – maybe, even, abandon the traditional idea of God altogether. So close. SO close. For starters, if we live in a universe where there are multiple versions of you, thorny questions are raised about whether these versions of you can be considered the exact same person. There's that "if" again, but this time, it's even more iffy: we do not "live in a universe where there are multiple versions of you." Assuming, again, the Many-Worlds hypothesis (MWH) is real (which, again, I do not), the clones all occupy different and forever separate universes. There's a somewhat-logical philosophical consequence to MWH called quantum immortality. It asserts that if a quantum event could either cause your death or not, your consciousness follows the "alive" path. I say logical, but logic can rest on false premises. Theology, for example. The article goes into other possible philosophical implications (most of which I find to be spurious), and then: An additional thorny problem raised by a universe of many worlds is that of moral responsibility. 
Most ordinary people’s moral intuitions about right action – whether some action was freely made, whether it accords with shared moral principles, and whether a person can be held responsible for it – were formed under the assumption that we live in a singular universe. Ha! Wait until you start thinking about the moral-responsibility consequences of our lack of free will as it is traditionally understood. The problem is, Many-Worlds is a deterministic theory – and determinism is considered by many, though not all, philosophers to be incompatible with genuine freedom. Oh... so close. So very, very close, but not quite. Consider this: Assume an entity that exists outside of space and time, for whom the past, present, and future has already happened, is happening, and will happen, all at once, and they're aware of all of it. To them, what we call the future is just as immutable as what we call the past, because it's all the same "thing." From the point of view of such an entity, we don't make choices; we're a train that never jumps the tracks and runs on an entirely predictable schedule. That entity could never be surprised. If they could be surprised, they wouldn't be all-knowing. In other words, if God can be surprised, He's not omniscient. If He cannot, then we don't have free will. (This assumes such an entity in the first place, of course.) Across the multiverse, everything that can happen does happen; each branch is inevitable. If that’s the case, even if we feel like we have the freedom to choose what actions we take, this may in fact be an illusion. We wouldn’t think me morally responsible for pushing over my grandmother if someone held a gun to my head and threatened to kill me if I didn’t. Similarly, if all my actions are determined by physical forces outside my control – like the laws of quantum mechanics – then it seems pretty unjust to punish me for them. 
There are plenty of reasons to punish people who do what we consider to be wrong, whether we have free will or not. I won't go into that here. Lots more at the article, but, as I said, it's possible to build entire logical edifices on nonexistent foundations, which results in the entire building sinking into the quagmire that I just waded through. It's fine to do the thinking, though. One of our superpowers as humans is the ability to imagine the impossible; as with all superpowers, though, it's possible that we abuse it sometimes. |