Bear with me.
Metaphor is a metaphor. A metaphor is a transfer is a carrying over (meta- (Gk.) = trans- (L.) = over, across; -phor (Gk.) = -fer (L.) = carry, bear). A metaphor is a transfer of meaning from one literal sense to a figurative one where, hopefully, the transfer to the new domain keeps things one-to-one.
Imperfect metaphors are called leaky abstractions (at least in software). And 'leaky abstraction' is a leaky abstraction. They leak because the literal source of the abstraction leaks through to the abstraction layer. An abstraction or a metaphor is metaphorically used as a container of fluid, some of which leaks out (but onto what? which is the source and which is the target (in another metaphor of metaphor)?). You can't take the abstraction layer literally; you have to know something about the underlying source.
Also, 'figurative' is figurative or metaphorical; I often take metaphor as a synecdoche (or metonymy; synecdoche is a metonymy of metonymy) for figure of speech, in that it, the figure of speech, is not literal. A figure is a picture, which is a good metaphor for metaphors... or figures of speech. 'Literal', on the other hand, happens to be somewhat non-literal because it is about writing, by the letter, which is a metaphor for verbatim; that is its primary surface definition. 'Literally' has been used non-literally (that is, as a general intensifier) literally for ages, but is universally (that's hyperbole, which is a figure of speech which is just a lie that we all agree to, and not metonymy) recognized as wrong.
Borges said (in This Craft of Verse) that Lugones said (in Lunario sentimental) that "all words are dead metaphors", which is a dead metaphor because nothing has really died. Or rather the original meaning died or faded away very slowly, but you could resurrect it a little if you tried. So it's a little leaky.
We're swimming in metaphors!
Also 'Metaphors We Live By', by Lakoff and Johnson
Friday, January 22, 2016
Thursday, January 21, 2016
Technical Debt is a Leaky Abstraction, but so what
Technical debt is a recent term in software engineering used to describe potential later problems caused by decisions knowingly made now. For example, suppose a customer asks for a feature that can be implemented cheaply and quickly, but doing it quickly will introduce security holes, or make it very difficult to generalize, or prevent another rare feature from working without lots of rework. You're making things work now in exchange for some pain later. You would have much less total pain overall if you 'do it right' today, but cheap and quick are what matter right now.
(from collab.net)
Saying 'we have a lot of technical debt in our software' is very loose talk for either 'we have a lot of bugs that no one is complaining about' or 'we don't do code review, so I bet there's a lot of crap that gets deployed'.
A leaky abstraction is another metaphor in software engineering. When one creates a new layer intended to hide all the details of a lower layer, some of those details inevitably show through. For example, floating point numbers are leaky because they try to hide all the gross details of a finite number of bits approximating the perfect precision of the desired numbers, but sometimes you end up having to be aware of the underlying implementation because it leaks through the abstraction (e.g. (a+b)+c usually equals a+(b+c), except when the operands differ wildly in magnitude, and adding a tiny b to a huge a can leave a unchanged; to deal with that you need to know some details of how the floating point ops really work under the hood).
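Here's a tiny illustration of the leak, a toy example of my own in Python (nothing specific to any particular library):

```python
# 64-bit floats leak: addition is still commutative, but it is not
# associative, and small values get absorbed by huge ones.
a, b, c = 1e16, 1.0, 1.0

print(a + b == b + a)               # True: commutativity survives
print(a + b == a)                   # True: b is too small to register against a
print((a + b) + c == a + (b + c))   # False: the grouping changes the answer

print(0.1 + 0.2 == 0.3)             # False: 0.1, 0.2, 0.3 aren't exact binary fractions
print(0.1 + 0.2)                    # 0.30000000000000004
```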
Technical debt is a leaky abstraction in that the metaphor cannot be taken too literally; if you try to follow the implications of the words, it breaks. Debt is a balance in a ledger: you can owe or be owed a value. Technical debt, being very qualitative, is hard to put into numbers; you just have a vague sense of 'this is bad' vs 'this is really bad' vs 'this is tolerable'. The debt isn't really about the features themselves but about the time and mental effort needed to implement things.
The first picture is a terrible explanation; the following is much better:
(from commadot.com)
All I'm saying is that technical debt is a leaky abstraction, a faulty metaphor. There's no paying back, it's just cleanup.
Using the term is great because it is more politic than saying 'The code is a mess and needs some cleanup. Features are really hard to add'.
Tuesday, January 19, 2016
Names make theories make names
Richard Feynman, the Nobel Prize-winning physicist and Challenger disaster O-ring explainer, has a couple of anecdotes about naming.
One is about the 'map of the cat', where Feynman had to give a talk to his graduate zoology class. In preparation, he went to the library and asked for a map of the cat, to which the librarian responded "You mean a zoological chart!" (I'm guessing 'what a funny thing to say'). Then later, when he presented this, his fellow bio students said "We know all that!". Feynman says that they "had wasted all their time memorizing stuff like that, when it could be looked up in fifteen minutes."
The other story is from much earlier in his life. All the (nerdy) kids on the playground try to one-up each other on what their dads taught them: "What do you call that bird? What about that bird?". But Feynman's dad said something like it's called X in language Y, Z in language W, and then:
“You can know the name of that bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird. You’ll only know about humans in different places, and what they call the bird. So let’s look at the bird and see what it’s doing—that’s what counts.” (I learned very early the difference between knowing the name of something and knowing something.)
What is the point of these parables? He goes on to explain that names don't explain anything; these technical terms for anatomy or for bird varieties or for whatever science are just terms, not explanations themselves. There is a tendency for these technical terms to become magical invocations, the totems of a closed guild, supplications to the gods, when they are barely paratactic gestures of pointing, a superficial 'behold', an inarticulate label with no explanation of the depth of the experience. "What kind of bird is that?" "It's a sparrow." "Why does it do that?" "It's a sparrow," as though emphasis were explanation enough.
Except... what do you expect? Is naming so terrible? Do you want to do away with naming and simply move on to the much more interesting explanation? Is naming so simpleminded?
Where did the names come from? Without this being an explanation of historical linguistics, presuming terms like gastrocnemius, sparrow, and inertia are somewhat random and distinct, these labels are hooks for the concepts. A lot of thought, using other labels or conceptual manipulation, led someone to label this object or concept one thing, and that another. If our thoughts are not necessarily tied to language, communicating them certainly is (though a gesture or picture can go a long way, like a shrinking ring in ice water).
Names and words are little theories in themselves. We learn most of them superficially, but eventually we acquire their nuances. A new word, like inertia, is opaque to the newcomer, but at some point in time, the scientist or experimenter was playing with a number of concepts, and eventually some concepts coalesced out of that thinking and one of them was given the label 'inertia'.
Yes, knowing a name doesn't explain anything. Or rather, it explains very little. Knowing how to use a name is nontrivial, but has little explanation to it. Answering why requires being able to manipulate a number of names, but having those names is necessary. 'Black hole' is the culmination of a lot of thought. That it is a thing is the consequence of lots of thinking. And the start of a lot of thinking. Once you get to that concept, a lot of thought has gone on, and not having that term would be a great loss, leaving us to swim around a number of concepts of relativity without quite being able to say what they really mean. Feynman is saying that it is dumb to stop at the names of things. Sure, don't stop. But don't let that stop you from naming things, because it'll be that much harder to continue without the name.
A name is itself the end product of a theory, and it makes further theories possible.
Wednesday, January 13, 2016
Prescriptivism vs Descriptivism: which is worse?
These two words are used to describe one's attitude towards language usage; at its very simplest do you prescribe or describe how you speak, what are people supposed to do vs what people actually do, 'should' vs 'is'. When these terms are thrown around (and I do mean thrown, like mud pies) it's almost always meant to sting.
Objectively, prescriptivism is usually understood to mean keeping strictly to the formal rules of a language, and descriptivism is more about discovering and recording the rules of language however people actually say things (whether they match the formal version or not). Newspaper editors and school teachers are often held up as the standard-bearers of prescriptivism, and linguists as those of descriptivism. Your secondary teachers are teaching you the rules of good grammar, and the linguists are being scientific about what the rules actually are.
M-W's third edition dictionary (1961) is often cited as a classic of the descriptivist abyss, putting in words of dubious provenance and all the good profanity (which at least got some kids to crack it open once). The dictionary was decried by many as the nadir of pandering to idiocy, the last gasps in the decline of western civilization.
Informally, from the prescriptive point of view, descriptivists are 'anything goes': whatever people say is what is allowed, nothing and no one is wrong, there are no mistakes, 'ain't' and 'between you and I' are now OK and that is just wrong, and descriptivists are avatars of the decline, nay the destruction, of western civilization, regression towards the mean, the twilight of the idols, the idiotocracy, the worst are full of passionate intensity.
From the descriptive point of view, prescriptivists are stuck-up old schoolmarms who make up arbitrary style rules, who say 'you can't do this, you can't do that' about split infinitives, prepositions at the end of a sentence, singular they, when people have been saying it that way forever and you just made that rule up because you are a warped, frustrated old man. Prescriptivists try to enforce their made-up rules, when it's just their own repetition of someone else's peevish peeving on personal style. Descriptivists say that prescriptivists' insistence on a single way of speaking is reprehensible elitism, that prescriptivists think they are morally superior to others, and that they treat any other patterns as slack-jawed, uneducated, lower class.
But that's just the tendentious version.
From the prescriptive point of view, there really are mistakes that people make: 'infer' for 'imply' is just wrong, 'literally' for non-literal things is just wrong, and native speakers just do not say it that way. A common error is not necessarily a common alternative.
From the descriptive point of view, there are many patterns out there for the same thing. Different contexts have different rules. People will say different things in different situations, speaking one way at the press conference and another in the bar, and neither is wrong (the differences just show up in different contexts).
People may very well avoid a split infinitive and prepositions at the end of a sentence in writing for stylistic or esthetic reasons, but in speech there's hardly any getting around them, what with all the phrasal verbs in English. The double negative ain't no (= "isn't a") problem either: it's a perfectly everyday way of speaking in some varieties of English (varieties that are, as the linguists say, not highly socially respectable (= redneck or AAVE)), and it can be grammatically and logically appropriate, whether as two negatives making a positive or as a form of understatement (that was not an uncomplicated sentence).
And this is where it comes down to the real difference.
Descriptivists are really prescriptivists at heart. Descriptivists just recognize many more varieties than prescriptivists do, and those varieties tend to be much more informal or used by socially non-pinnacle subpopulations. People who are called prescriptivists by others do seem a little judgy, and people who are called descriptivists do seem a little too accepting of whatever people actually say (i.e. of errors). But if you just label large groups of patterns as varieties, prescriptivists are just talking about (mostly) a single variety, the newspaper/college-paper variety, and descriptivists allow for a wider range of varieties, informal or regional or inarticulate (well, maybe not the last one). There are still mistakes. It just depends on the context or variety you're in.
Sunday, January 10, 2016
Bayes Theorem != Bayesianism
Words are slippery. They have many meanings. Change an ending and you change everything. Not exactly everything, but enough to confuse everybody.
Bayes theorem is not the same as Bayesianism.
Bayes theorem is an elementary mathematical truth of elementary probability.
Bayesianism is a big trend in statistics for creating and interpreting new statistical tests.
So it is fairly obvious now that they are different. They are certainly related, but they are still different things altogether.
Some details are in order.
Thomas Bayes was the man the famous Bayes theorem (of elementary probability) was named after.
Also, the school of Bayesian statistics was similarly named after him, because of how the theorem thinks of probabilities.
Sure, all three, the man, the theorem, and the statistics philosophy, are related. But not as closely as you would imagine.
First, Bayes was an eighteenth-century Presbyterian minister in Kent, England. I'm sure he was quite important to the people around him, but the theorem named after him was never published by Bayes himself. The theorem was stated, de facto, by someone else in passing, who also in passing mentioned that Bayes had discovered it. This someone else was Richard Price, who was also probably important to the people around him but never had anything like a theorem named after him.
Now to the theorem.
Symbolically, Bayes theorem is, at its simplest,
$$
P(A|B) = \frac{P(B|A) P(A) }{P(B)}
$$
Simple enough, if you know elementary probability. Translated to English this means that the probability of an event $A$ having occurred, given that you know $B$ has occurred, can be computed from the probability of $B$ given $A$, times the probability of $A$, divided by the probability of $B$.
Big deal, right? So what, right? The clever twist to this is that it allows you to reverse the direction of the conditioning, to use the past history of A's and B's together to determine the probability of something you don't know about $A$ from things you do know (the three quantities on the right hand side). The calculation is elementary: you just count all your events (where $A$, $B$, both $A$ and $B$, and neither have occurred). $P(x)$ is the fraction that is the number of events where $x$ happened divided by the total number of events. $P(x|y)$ is the fraction that is the number of events where both $x$ and $y$ happened divided by the number of events where $y$ happened (note all these fractions are between 0 and 1).
The proof is also elementary. From the explanation, we can see that $P(x|y)$ is the fraction: the number of events where both $x$ and $y$ occur divided by the number of $y$ events (whether $x$ occurred or not). That is,
$$
P(x|y) = \frac{P(x{\rm \ and\ }y)}{P(y)}
$$
Since $x{\rm \ and\ }y$ is no different from $y{\rm \ and\ }x$, this means that
$$
P(A|B) P(B) = P(A{\rm \ and\ }B) = P(B|A) P(A)
$$
Divide the two ends by $P(B)$ and you're done.
This is a very short set of inferences, almost purely arithmetic. Thinking of the Venn diagram, the division is really the proportion of a subset within another larger set. A little more complicated, but giving even more insight, is to note that if you look at a 2x2 contingency table, $A$ given $B$ is a proportion within a row and $B$ given $A$ a proportion within a column; the theorem allows you to move between row and column.
To use it, tabulate a number of events where both $A$ and $B$ may have occurred. This allows you to calculate all of the probabilities above directly from the counts.
It might seem at this point that the theorem is a lot of extra thought work when, if you have the contingency table already, you can just compute anything you want right there: $A$ given $B$, $B$ given $A$, whatever. The idea of using the theorem is that often you are not presented with the contingency table, but you do have some good idea of the different values. The theorem allows you to compute $A$ given $B$, if $B$ given $A$ is somehow magically known to you already. (Actually, often you don't know $P(B)$ either.)
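To make the row/column shuffle concrete, here is a small sketch with made-up counts (my own illustration, not from anything above):

```python
# Made-up counts of 200 events, cross-tabulated:
# rows are B / not B, columns are A / not A.
#            A    not A
#   B       30      20
#   not B   10     140
n_ab, n_b_not_a = 30, 20
n_a_not_b, n_neither = 10, 140
total = n_ab + n_b_not_a + n_a_not_b + n_neither

p_a = (n_ab + n_a_not_b) / total                 # P(A) = 0.2
p_b = (n_ab + n_b_not_a) / total                 # P(B) = 0.25
p_b_given_a = n_ab / (n_ab + n_a_not_b)          # P(B|A): proportion within the A column
p_a_given_b_direct = n_ab / (n_ab + n_b_not_a)   # P(A|B): proportion within the B row
p_a_given_b_bayes = p_b_given_a * p_a / p_b      # P(A|B) again, via the theorem

print(round(p_a_given_b_direct, 3), round(p_a_given_b_bayes, 3))  # 0.6 0.6 -- same answer
```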
To make this more concrete (a common example), let $A$ be 'patient has toenail cancer' and $B$ 'new test on toenail is positive for cancer'. And as you may suspect, tests are not perfect. Sometimes they raise a false alarm (they are positive when there really is no cancer), and sometimes they miss the cancer (they are negative when there really is cancer). And usually there is a huge cost in discovering if the patient really truly has cancer (like the patient dies and you do an autopsy, or you do a biopsy and discover the cancer too late to do anything). So you totally know $P(B)$, how often your tests are positive, at least for your lab, because you just count. You may have a good idea of $P(A)$ because of national statistics, but your much smaller set of patients may be unusually prone to the disease or unusually healthy. And you may have a good idea of $P(B|A)$ because you know which of your patients already have cancer and which of them had a positive test. And what you want to know is the probability that this one new patient really has cancer, given that the new test turned out positive.
So this says that, for a given positive test, you can calculate the probability of cancer. (There are lots of classroom examples showing how this is not trivial: to make sure you don't miss a cancer, the test is made very sensitive and so produces a lot of false positives. Then even if your test is positive, there is still a low chance of having cancer, just more than if the test had been negative.)
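Here is that classroom example worked out in a few lines (the numbers are invented for illustration, not taken from anywhere):

```python
# A = patient has the cancer, B = the test comes back positive.
p_a = 0.01               # prevalence: P(A), 1% of patients have it
p_b_given_a = 0.95       # sensitivity: P(B|A)
p_b_given_not_a = 0.05   # false positive rate: P(B|not A)

# P(B): add up the two ways a test can come back positive
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes theorem: P(A|B) = P(B|A) P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b

print(round(p_b, 3))          # 0.059 -- about 6% of all tests come back positive
print(round(p_a_given_b, 3))  # 0.161 -- a positive test still means only ~16% chance
```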
But the point is you don't always have the full contingency table at hand. That's what makes Bayes theorem so useful.
To give some perspective, Bayes theorem is a very useful theorem, but it is very simple. It is one of the simplest theorems ever. It's mostly just a convenience of calculation. It is simple on the order of the theorem that multiplication distributes over addition. It is almost trivial in proof, and its application is mostly just a simplification of calculation. Calling it a theorem is on the order of calling Monaco a country: it is certainly a country, but in practice it is more like a small but popular and overpopulated district in an overpopulated area of a much larger popular country.
Now to Bayesian statistics. Statistics in general as a discipline is only a couple of centuries old (and that is stretching it). Its mathematical foundations, with axiomatic probability, statistical distributions based on the central limit theorem, the normal curve, and special functions, and practice centered on p-values, really started taking off in the early 1900s with Pearson, Fisher, and others. This manner of doing statistics was eventually named 'frequentist' statistics in distinction to the newer trend called Bayesian statistics. The trendy new field was called that, not because frequentists did not use or believe in Bayes theorem, but because its manipulation of distributions relied on computing new distributions based on prior ones, analogous to how Bayes theorem computes new probabilities.
The point is that Bayesian statistics is not some super exploded hypergeneralization of Bayes theorem but rather a large set of mathematical machinery that allows one to compute, in many different ways, how well different hypotheses are supported by data. Instead of the frequentist procedures like the t-test and ANOVA, which have very strict assumptions about the distributions of the data, these new procedures let you state a prior distribution over the parameters of the hypothesis, which then gets updated by the data (or, if you don't have an idea of the distribution, you can always assume the uniform prior).
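A toy sketch of what 'a prior that gets updated' looks like, in the simplest textbook case of a Beta prior on a coin's bias (my own illustration, nothing specific to any particular Bayesian library):

```python
# Prior belief about a coin's probability of heads: Beta(alpha, beta).
# Beta(1, 1) is the uniform prior mentioned above.
alpha, beta = 1.0, 1.0

# Observe some data: 7 heads out of 10 flips.
heads, flips = 7, 10

# For this conjugate model the Bayesian update is just bookkeeping:
# the posterior is Beta(alpha + heads, beta + tails).
alpha_post = alpha + heads
beta_post = beta + (flips - heads)

posterior_mean = alpha_post / (alpha_post + beta_post)
print(alpha_post, beta_post)      # 8.0 4.0
print(round(posterior_mean, 3))   # 0.667 -- pulled slightly toward the prior's 0.5
```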
You'll note I could say quite a bit more about Bayesianism, but I have spent more time on Bayes theorem. Volumes could be written (and have been) about Bayesianism and the religious wars between Bayesianism and frequentism (one could have a meta-religious war over whether that war is religious or substantive). There are extremely specific things about Bayes theorem I can write, but about Bayesianism it is easy to stop early without going into lots of complicated math.
So Bayes theorem does not mean the same thing as Bayesianism. Bayes theorem is a tiny, almost trivial calculation in elementary probability, with a lot of uses. There is no controversy about the theorem. Bayesianism (or Bayesian statistics) is a forceful trend in statistical practice with a large set of alternative theoretical and practical procedures. It is a controversial trend among statisticians, or rather it was controversial in the '70s and '80s and is mostly mainstream among them now, right alongside traditional frequentism.
So don't let the similarity of the names lead you to think they are the same. They are similar and related, but not that much.
- If you are thinking of a calculation of a probability of $A$ given $B$ using the probability of $B$ given $A$ (a very very specific usage), then you're using Bayes theorem.
- If you're thinking of a trend in statistics that avoids distribution assumptions, or rather allows you to specify your arbitrary distribution assumption, then you're talking about Bayesianism.
Tuesday, January 5, 2016
In defense of astrology
Astrology is the study of personality and personal events with respect to celestial objects and one's birthdate and location. It is often used to predict the events of a person's life and their personality. In the European tradition, astrology is mostly based on the position of the sun in the sky, which constellation of the zodiac it is in, on a person's birthdate.
Most people are aware of astrology through the idea of a daily horoscope (often printed in newspapers on the comics page next to the word jumble or chess column) (these days with the increasing marginalization of newspapers, I don't know where people experience it).
Many people view it, and dismiss it, as entertainment, on par with palm reading or Tarot cards, not something to be taken literally. Those with a scientific or practical bent might dismiss it outright as an empty myth or BS because it is, to use the jargon, not falsifiable. There are some who still believe that it may work.
But the general consensus is that it is BS. All I want to do is pedantically (not to contradict this consensus, which I share) show that astrology is indeed scientific in the technical sense, just a science that is not supported by the data. A science that has been falsified is still scientific, just disproven.
There are two common applications of astrology. One, based on your birth sign (one out of twelve), is to predict, very vaguely, the events of the day. For example (taken from a random horoscope site; it is very representative of the kind of writing):
Today you urgently need to understand and master the art of balancing the physical reality with your vision. While your plans are ambitious, you have to understand the actual obstacles top these plans. Otherwise, you are headed for a collision course in spite of all your good intentions. You also need to understand that your plans may be conflicting with those of someone else who is as determined and ambitious as you. (source (no link to the specific text for a day, just generic for the sign))
These tend to be the greatest source of disbelief: how can everybody with the same birth sign have all those things happen? These read like fortune cookies that could be true of anyone, whatever their birth sign. Accidentally read the fortune of another sign and it is just as likely to happen or not. Astrological predictions like the above are a great example of confirmation bias in action; all that needs to happen is for one thing to match what happened that day (if you read the horoscope after the day), or you may just make one of those things happen, and you'll come to 'believe' that horoscopes work.
The other common application is a prediction of your distinct personality. A natal chart, the position of the major celestial bodies with respect to the constellations and each other at the time and place of birth, is used to predict personality. The position of the sun is the most important, telling you your 'sign', but the moon and planets and their relations matter too. These facts lead to statements of the form 'The moon is on the cusp of Libra and Virgo, and Mars is in opposition to Venus but square with Saturn, meaning that...' followed by some explanation involving the personalities of those celestial bodies, inspired by their mythological stories.
(image from wikipedia)
My defense is that all these things make astrology not simply a myth but an actual scientific theory, scientific in the sense that it could be the case, that we can test whether its predictions (or parts of them) hold or not, and that there is a postulated mechanism for these effects that can itself be checked. Of course, none of it holds at all. My defense is simply that astrology is legitimately falsifiable.
Counter to most opinions, I consider astrology to be falsifiable because one can make a study of daily horoscopes, or a study of star charts and individuals' life events or personalities. Celestial body positions can be measured very scientifically. There can be predictive personality tests (though many current personality tests are not considered particularly good at prediction, for example the Myers-Briggs personality test (MBTI); the somewhat similar Big Five test is supposedly somewhat more predictive).
St. Augustine's dismissal of astrology, because twins don't have identical outcomes, is motivating to me but scientifically could be problematic: twins do share quite a bit of personality. The thought experiment could be modified only slightly to be more supportable; compare the lives of two children born the same day in the same hospital. The general predictions of 'astrological theory' would barely be supported; some lifestyle and class similarities will predict most of the similarity between the two children.
The horoscopes in newspapers or your astrological sign ("You must be a Libra!") are, I agree, not scientific, because they are so vague as to be barely even functional, much less predictive (meaning that the pronouncements they make can barely be mapped coherently to life events or personality).
But an astrology based on actual measurement could be falsified and therefore is a scientific theory. Its effects just have never been established.
By the way, the Middle-eastern-based astronomical astrology (the greco-roman constellations-sun-moon-planets and mythically associated personalities) may make exact predictive claims, but there's one huge, glaring, plain old mistake in execution. The constellation the sun is in, one's sun-sign, as usually given today, is mismatched with the prescribed dates. The twelve ranges of dates are set by the sun's position along the ecliptic against the constellations, and the constellation labels were set 2000 years ago. Because of the (astronomically measured) precession of the equinoxes, the position of the sun during the year has since shifted by roughly the space of one constellation.
(source: wikipedia)
That is, on March 21st (the spring equinox, the start of the zodiacal year), when tradition states that the sign of Aries begins, in the year 2000 the sun, as actually measured in the sky against the backdrop of the constellations, is still in the neighboring constellation of Pisces, drifting toward Aquarius.
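Roughly how big that drift is, as back-of-the-envelope arithmetic (the ~25,800-year precession period is the standard figure; the rest is my own rough calculation):

```python
# The equinoxes precess through the full 360 degrees of the zodiac in
# roughly 25,800 years, so the sun's position against the constellations
# on any given calendar date drifts by about 1 degree every ~72 years.
precession_period_years = 25_800
years_since_labels_fixed = 2_000     # the sign boundaries were pinned down ~2000 years ago

drift_degrees = 360 * years_since_labels_fixed / precession_period_years
sign_width_degrees = 360 / 12        # each zodiac sign spans 30 degrees

print(round(drift_degrees, 1))                       # 27.9 degrees
print(round(drift_degrees / sign_width_degrees, 2))  # 0.93 -- nearly a whole sign
```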
So this is not an error of interpretation or logic, it is an even more elementary error of data recording/reporting.
Monday, January 4, 2016
'Thing Explainer' review
'Thing Explainer" is a picture book by the engineer-cartooner Randall Monroe (who does xkcd). This has so much stuffed into it, not just information but also manner. It is two major things that are at odds, so I don't which one to present first. So I'll just do it: it's a book that 1) uses only the first 1000 most frequent words in English (so it has to use some cleverness for technical concepts) and 2) presents technical concepts.
- How it's made and How it works. If you like 'Cathedrals', Lego, science, whatever, this is 'How it's made' but for more sciency things.
- At the same time, it is Oulipo-style constrained writing. The author took an arbitrary restriction (not exactly arbitrary) and, for all the labels of the hugely entertaining drawings of scientific concepts and engineering artifacts, made them entirely using the 1000 most common words in English.
The note at the end of the book states what his Oulipo rule really is (and gives the words he allows himself):
Note: in this set I count different word forms as one word
I could have said ship but I stuck to boat because space boat makes me laugh.
And he doesn't feel like he has to use every word in the list (this isn't serial music); he doesn't use a couple of extremely common but profane words. The funny thing is that he doesn't force himself to absolutely strictly adhere to the rules.
But the point he intended comes across. He wanted to explain abstruse scientific concepts in a manner understandable by those early in the language (and thought) learning process. Language is not thought, so those who understand the concepts already can find the technical terms that match up with what the author translated into slightly more verbose 'simple' sentences, and, going the other way, the accessible vocabulary gets across the same concepts that seemed necessarily captured in abstruse technical vocabulary.
There's another side to using 'basic' English. The bad side of this is that it is deliberately dumbing down science. Simplifying the language removes true things. It is intentionally anti-intellectual.
Or rather, it is only nominally anti-intellectual; it could be anti-intellectual, but it is not. Nor is it the diametrical opposite, which I take to be obscurantism, because the whole point of the exercise is to be intellectual, to present knowledge, and it is intentionally not obscurantist because it is intentionally trying to present that knowledge in a digestible form.
One exercise is to attempt to translate back to technical language: what is the scientific terminology that was the source of the circumlocution? What exactly is a 'fear water'? Oh, it's the adrenal gland, or at least adrenaline. When you translate technical terms, which are mostly stipulated and have their single technical definition, into multiple non-technical words, there is the likelihood that the new term will be using high-frequency words commonly used for lots of things already. So the new term requires some more frequent use to make sure it applies to that original singular technical concept. For example, 'power box' is used for electric battery but surely could be used for other boxes associated with power. But it is easy to see that with repetition, we could get used to using 'power box' for just those items we currently call electric batteries.
Highlights:
- Includes descriptions of: a nuclear bomb, the ISS, the periodic table, a smartphone, the LHC, a jet engine, cell anatomy. Look, I could just list every page. It's both science and technology.
- 'thousand' isn't in the top 1000, so even describing the constraint is itself constrained (it's 'ten hundred').
- 'Nine' isn't in the top 1000, so it is given as 'one more than eight', 'almost ten'.
- 'Gold' is the only element name that remains on the periodic table.
- the tree of life. Most animal names are fairly uncommon nowadays, now that few of us live on farms or in the wilderness. So lots of liberties are taken here. But a lot of the descriptives work. For example, 'gray tree jumper' for squirrel, 'sea dog' for seal, 'pocket babies' for marsupials, 'sweet thing' for fruit are all excellent. But there are some clunkers like 'food often in cans' (I have no idea), 'small dog' (is that a fox?). Here, with so many examples, it's nice to see how one can be paratactic and perceptual, and that's often what one finds in other languages (because it is so hard to see the many times it already is the case in your own). Nice inclusion of 'water bear' for water bear/tardigrade/moss piglet/slowstepper.
I want to see the translation to Chinese. Well, the translation back to English. I wonder if there'd be a difference if the Chinese were created from scratch (the thousand most common characters, or the thousand most common terms) (presumably translating the given text directly would, or should, be dumb).
Also, I'd like to see Euclid's Elements done the same way.
One negative. If you're like me, you're getting older. I mean that with respect to the book. Meaning with respect to how it is printed. Meaning... ugh. My eyesight isn't terrible, but the font size is so small that it is a chore to read. I'm sure it wouldn't be so bad if my eyes were younger.
Sorry, two negatives. I am disappointed it is so short, only 60 pages, though with some fold-outs.
Holy crap, a skyscraper!