
Friday, January 6, 2017

A meta-note about NP-completeness and its current state of (lack of) knowledge

Recently, Scott Aaronson wrote a short-book-length summary of the current state of knowledge about NP-completeness. This is an honorable tradition, starting with the classic and definitive book by Garey and Johnson, then Sipser, Cook, Wigderson, and Allender. Aaronson gives his own spin on things (a little more on the quantum computing side) but, like the others, hits all the great points well: Turing Machines, reductions, completeness, the hierarchy of harder problems, what is conjectured to be the case (P != NP), etc.

What I want to do is point out the unsaid things, those things that are obvious to insiders and don't need to be said, but are inscrutable to outsiders. The outsider may realize it, or be told it explicitly, but not really understand it because, you know, math.

The most important thing is that, of all the things stated, all the theorems, all the papers and books, the great number of words and symbols written about the subject, all these things are written about what we don't know about NP-completeness. We don't know whether P = NP. We don't know whether NP = coNP. We don't even know whether Graph Isomorphism is in P (in early 2017 Babai retracted his 2015 quasi-polynomial-time claim, leaving only a sub-exponential bound). We don't know. We don't know so much. Every day there are books and papers published that are essentially just outlining the boundaries of all the so many things we don't know about P vs NP.

Most of the theory surrounding P vs NP was developed in the 1970's. Lots of small results (with great effort!) have been established over the years. We do know now that NP = PCP(log n, 1) (Arora et al. 1998) and that NEXP is not in ACC^0 (Williams 2011). No, not small results, but proofs of things that we really expected to be easy. Some specific problems have been shown to be in a particular class unknown before: Primality is in P (AKS 2004), Linear Programming is in P (Karmarkar 1984), UGraph Reachability is in L (Reingold 2004). One usually expects it to be (relatively) easy to find an optimal algorithm for a problem... it just turns out that all these were very difficult to discover.

But all these are minor results. One-offs. They may spark a cottage industry in their subcommunity and are undeniably important. But they are... they are only making infinitesimal progress towards P vs NP. In fact, because of this meta-intractability (get it? the problem of intractability is itself intractable), no one is actually attacking the direct problem of P vs NP right now. Most of the (great amount of) work goes into ancillary situations, extending minor results, changing the problem, all in the hope that they might uncover one little tiny bit that makes the difference. It feels like panning for gold: you dig up tons and tons of material, wash it through multiple levels of gratings, wash it some more, refine the dirt, wash the dirt some more, and if you get a couple of tiny specks of non-dirt that just turn out to be copper, you call that a good day.

As far as the history of science goes, this is a rare situation. Usually, a bunch of data is collected, some patterns are noticed, you collect a lot more data (and all this new data is useful and interesting), the patterns look like they repeat in different weird ways, then a consolidating theory (a scientific, experiential theory, that is) is formulated that generalizes and explains the prior data, and then a search is made for even more data, which then either confirms the theory or encourages refinement or maybe even overthrow of the first theory. Similarly in math (of which the P vs NP problem is a part): a bunch of small theorems are proven, then some people (usually a group, though usually a single person gets the good press) see a pattern, add a few definitions and prove a big theorem that combines them all, and then further consequences of that big theory are applied to more instances. (Note that yes, in math it is very rare (never?) for a big theory to be overthrown, because a proof is a proof (right? of course it is); the refinement comes instead in how many new, more complicated problems are created to explore beyond the first theory.)

Anyway, the rare situation with P vs NP is that we've essentially gone almost 40 years now with the same state of affairs of lack of knowledge. What progress there is is not steady, and any gains seem underwhelming, at least from the outside (the first outsider thought is "You're so smart, shouldn't you have figured that out a long time ago?"). Even the most minor-seeming result is astounding (NEXP is a very intractable class and ACC^0 a very tractable one; how come it was so hard to show NEXP is not in ACC^0?).

Also, back to more basic things. All this literature, all this theorem proving hasn't really established, hasn't really given a definitive proof of the pinnacle question P vs NP... but it's almost a scientific proof, a proof by having lots of examples in mostly one direction. All the reductions, all the intermediate problems, all the proofs that there's no proof by method X, are really supporting evidence (not mathematical definitude) for the particular direction that P is a strict, proper, much smaller subset of NP, that NP problems are actually harder and we really can't find exact and fast algorithms for NP-complete problems. That is, even though we have 40 years of not proving it one way or the other, it (almost) all goes in the direction that they're probably not the same.

Tuesday, August 2, 2016

SQL JOIN Venn diagrams are only sort of Venn diagrams

SQL is a standard for querying databases. Despite questionable pronouncements that SQL is Turing complete, I hesitate to call it a language, because its power is in using boolean logic to deal with tables of data whose columns point to each other.

And often Venn diagrams, the go-to visualization for set operations, are used to help explain the process of table JOINs.

The interesting thing is that set operations and table joins are not really the same thing. They're related, but just not the same. Set operations, which are pretty much the same as boolean/logical operations, are simple to visualize. The picture is the universe of elements, a circle surrounds a group (a set) of elements with a property, and a set operation does something to one or more sets to make a new set.

(from Modern Dilettante)

SQL also has set operations that combine tables as though they were sets: UNION, INTERSECT, EXCEPT (difference). They simply do the same as the set operations; two tables with identical column labels have their rows combined into a single new table (UNION means all rows from both, INTERSECT means the rows whose column entries match in value in both, EXCEPT means the rows in one but not the other).
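To make the analogy concrete, here's a minimal Python sketch (not real SQL) that treats each table as a set of rows, with each row a (name, age) tuple; the table contents are made up.

```python
# Two tables with identical columns, modeled as sets of row tuples.
table_a = {("alice", 30), ("bob", 25), ("carol", 41)}
table_b = {("bob", 25), ("dave", 33)}

print(table_a | table_b)   # UNION: rows appearing in either table
print(table_a & table_b)   # INTERSECT: rows appearing in both tables
print(table_a - table_b)   # EXCEPT: rows in A that are not in B
```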

But this is not how Venn diagrams are usually presented to explain SQL. UNION, INTERSECT, etc., are not the most useful of operations (the WHERE clause of a SELECT is where booleans are most commonly used). Venn diagrams are most often used to explain JOINs. A SQL JOIN first matches a field from one table with a field from another (presumably a field of the same type or kind).


(source Codeproject)

These Venn diagrams explain the difference between inner, outer, left and right joins perfectly... except they are just from a different world than the traditional set operations. A JOIN is intended to merge the information appropriately in the n by m relation (where the size of A is n and the size of B is m). The universe isn't the set of rows of both A and B together. The universe is the product of the rows of both. And the difference between inner, outer, etc., is purely in how the JOIN deals with NULL/missing elements in A or B.

An INNER JOIN keeps rows of AxB only where both the A and B rows exist. A LEFT JOIN only requires that the A part exists (B may or may not), and similarly for RIGHT JOIN. An OUTER JOIN doesn't care whether a corresponding A or B exists. So the boolean idea does apply, but in a strange way: only with respect to the NULL condition of the matching field. If the value of the field from A has no matching value in the field for B, then the B side is NULL or missing (and vice versa).
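As a rough illustration of that point, here's a minimal Python sketch (not actual SQL; the tables, rows, and column names are invented) that builds the cross-product universe and then filters it the way the different joins do:

```python
# Two made-up "tables", each a list of rows (dicts), joined on the "id" column.
a = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
b = [{"id": 2, "city": "Lyon"}, {"id": 3, "city": "Oslo"}]

# The "universe" is every pair (row_a, row_b) in the cross product A x B.
pairs = [(ra, rb) for ra in a for rb in b]

# INNER JOIN: keep only the pairs whose join fields actually match.
inner = [(ra, rb) for ra, rb in pairs if ra["id"] == rb["id"]]

# LEFT JOIN: every row of A appears; the B side is None when there is no match.
left = []
for ra in a:
    matches = [rb for rb in b if rb["id"] == ra["id"]]
    if matches:
        left.extend((ra, rb) for rb in matches)
    else:
        left.append((ra, None))

# FULL OUTER JOIN: the left join plus the rows of B that matched nothing in A.
outer = left + [(None, rb) for rb in b
                if not any(ra["id"] == rb["id"] for ra in a)]

print(inner)   # only bob/Lyon
print(left)    # (alice, None) and (bob, Lyon)
print(outer)   # plus (None, Oslo)
```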

So as for the Venn diagrams for SQL operations, I can't really say they are true Venn diagrams; they don't show the state of a consistent property over all elements of the universe. Or rather, the universe is a bit more complicated (it depends on A and B, their cross product), and the property being booleanized is whether the element from one table is NULL. You can't just take an arbitrary universe of elements (with properties). With JOINs, you have to create the universe, the product, first, before examining the elements (and whether the A part or B part of the new row is null or not).

Friday, October 9, 2015

Kalman Filters = dynamic programming on linear systems for sensor accuracy

A Kalman filter is a method to increase the accuracy of a sensor in a linear system. (See this link for a visual explanation and derivation.)

The usual example is the position of a spaceship. You have the position/velocity of the ship and also an independent sensor of those. Both of those are somewhat iffy (for continuous variables the error is usually assumed to be Gaussian). Using these two iffy things together, you can get a much more accurate approximation (smaller variance than either) of the current position/velocity.
(from bzarg)


The other ingredient of the method that makes it get called a Kalman filter is that the change in the sensed data is expected to be linear, so that all of this can be modeled using simple repeated matrix operations.

Of course, this is not limited to dynamic mechanics, but that makes the best presentation (because of the linear equations).

The point here (which is not to explain Kalman filters) is that the computational method of correction (abstracting away the matrices) is a one-step recurrence relation (with new sensor data arriving at each step), which is essentially dynamic programming, and even better, you only need to know the most recent item.
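For the flavor of that recurrence, here's a minimal one-dimensional sketch in Python (a constant-position model with made-up noise values, not the full matrix form): each new estimate depends only on the previous estimate, its variance, and the latest measurement.

```python
def kalman_step(x, p, z, q=0.01, r=0.5):
    """One Kalman update: x = previous estimate, p = its variance,
    z = new noisy measurement, q = process noise, r = measurement noise."""
    # Predict: the (trivial, constant) model just inflates the uncertainty.
    x_pred, p_pred = x, p + q
    # Update: blend prediction and measurement, weighted by their variances.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred           # combined variance is smaller than either input
    return x_new, p_new

x, p = 0.0, 1.0                         # initial guess and its variance
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:   # made-up noisy sensor readings
    x, p = kalman_step(x, p, z)         # only the most recent (x, p) is kept
print(x, p)                             # estimate settles near 1.0, variance shrinks
```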

Friday, September 4, 2015

The Turing Test - like magic!

Clarke's Third Law: Any sufficiently advanced technology is indistinguishable from magic

Finally, an invocation of the Turing Test which doesn't lie down in fawning adulation, which doesn't assume the Turing Test is the judge of intelligence, artificial or otherwise.

First, the Turing Test is a well-accepted method for judging the creation of a successful Artificial Intelligence (those capitals are ironic, because artificial intelligence is mostly not HAL 9000). To generalize, the test is really that if a human believes the test data came from a human, then that is successful Artificial Intelligence. The canonical setup is a teletype (so that the mechanics of communication is not in question). A person communicates back and forth over the teletype. If that person can't tell whether the conversation was with a machine producing the words (presumably by the machine mimicking a human's... uh... humanity, rather than being hyperlogical), then success.

It is great fast thinking on Turing's part, going quickly to a workable solution, cutting out lots of junk rationalizations, not concerning oneself with the infinite hypotheses of the underlying processes, just going for the jugular of what you have: the surface behavior and believability.

But frankly it is no different from bald anthropomorphism: if the animal acts like a human, it must be human-like more deeply, with the lesson that this reasoning is usually not very successful. (But contrarily, a subject for another time is that I think many vertebrates share many cognitive abilities with humans, and also contrarily, some behavior that is usually considered specially human intelligence may have very low-complexity biological mechanisms underlying it.)

Not only is the Test the basis of countless scifi plots, but also countless dumbed-down explanations of artificial intelligence machines.

If it acts like a human then it -is- a human.

Basing success on limited explicit experience rather than looking behind the curtain and seeing the design? That is just plain idiotic. It is denial of common sense. The true test of whether something is artificial or human is by looking behind the curtain, by looking inside the blackbox, by seeing how it is designed. The design is the thing that should be judged, not the paltry examples.

Finite behavior doesn't define essence. The essence defines essence. Sure, there's a lot more: the rules create the instances and anecdotes, which are telling, but it's all the possibilities that are relevant, not just the small handful of instances.

A counterargument might be that telling essence is not the point, and that knowing essence is not available; experience is finite and is sometimes all that can be known (you can't always look inside the blackbox).

Here are two analogies that express my point: generating genre texts with ngram probabilities (using Markov models or deep learning), and generating biological objects using fractals. Here's an example of generated text (from Paul Masurell):

Pride and Prejudice, Jane Austen.

I do not. It is very much at Pemberley. The idea of their all walking out. I must acknowledge to you. When I do not marry Mr Collins had promised herself. But you have the carriage might be copied. It would look odd to be proud. You are perfectly good. Elizabeth was at the sight of Miss Darcy was delighted. You have no objection to my charge. I know not. Lydia was urgent with the keenest of all. Mr Collins, you puzzle me exceedingly. But, my proposals will not go. To the rest. But, to much conversation, no traces of them.


The results look vaguely like the real thing, and could totally pass for reality (as long as they're not inspected too closely). Also, some humans using all their own skill can only reach this level of coherence. So is this a terrible example? Turn up some dials and it gets less and less 'wandering' and more coherent.
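For the curious, here's roughly what such a word-level Markov generator looks like; this is a minimal sketch with a stand-in corpus, not the code that produced the passage above.

```python
import random
from collections import defaultdict

# Stand-in corpus; the real post used the text of Pride and Prejudice.
corpus = "I do not know . I do not marry . You are perfectly good .".split()

# Transition table: word -> list of words observed to follow it.
follows = defaultdict(list)
for w, nxt in zip(corpus, corpus[1:]):
    follows[w].append(nxt)

def generate(start, length=12):
    """Walk the chain: at each step pick a random observed successor."""
    word, out = start, [start]
    for _ in range(length):
        choices = follows.get(word)
        if not choices:
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("I"))
```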

Here's another example: fractal trees. Take a visual object like a line. Tack on smaller versions of that line to itself. Repeat on each smaller tree ad infinitum. You get a fractal tree like:

(from Gurpreet's blog. he has code!)
Depending on the rule, the 'tree' can look fluffier or sparser, and more regular or irregular. And it looks so much like a real tree:


(from Sarah Campbell)
And one could go the other direction and say that nature is implementing a recursive algorithm to grow its trees. But this is obviously crap. It certainly looks like a fractal, and I'm sure there are biological processes that can be modeled by some limited nesting (see the Chomsky/Everett disagreement over Piraha). But we know the fractal trees are made not by biology but by an algorithm, and similarly a broccoli-shaped tree, whose trunk has branches and branches of those branches, has to stop at some depth to give leaves.
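For concreteness, here's a minimal sketch of the "tack smaller copies onto a line" rule; the branching angle and shrink ratio are arbitrary choices of mine, and it just collects line segments rather than drawing them.

```python
import math

def tree(x, y, angle, length, depth, segments):
    """Add one branch segment, then two shorter rotated copies of itself."""
    if depth == 0 or length < 1:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # Two smaller copies of the same rule, rotated left and right.
    tree(x2, y2, angle + 0.4, length * 0.7, depth - 1, segments)
    tree(x2, y2, angle - 0.4, length * 0.7, depth - 1, segments)

segments = []
tree(0, 0, math.pi / 2, 100, depth=8, segments=segments)
print(len(segments), "segments")   # changing the angle/ratio makes it fluffier or sparser
```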

It's like magic tricks: they work on the toy problem (having a card you're thinking of pulled out of a just-cut-up lemon) but don't generalize at all to anything beyond.

So you can make an elephant disappear on stage? Make it really disappear. It all looks right that one time, but is not repeatable because the reality isn't there.

Here's another example: IBM's Deep Blue chess playing program. So what if it wins against a human (or plays at all)? It's not magic. It's simply following game paths. Many game paths.

The Turing Test works in very limited contexts but is superficial.

Any sufficiently advanced technology is indistinguishable from a rigged demo.  James Klass

Thursday, September 3, 2015

Spoilers in Clustering Methods

In voting schemes, when there are more than two candidates, there is the possibility of a 'spoiler'. That is, if a third candidate is introduced, votes might be taken away only from the formerly winning candidate, 'spoiling' the chance of victory and letting a candidate not preferred by the majority win, because the majority is split between two similar candidates.

Something similar happens in a clustering algorithm: having asked for three clusters, the algorithm is forced to distinguish between two smaller clusters which, if only two clusters were desired, would together be bigger than the third.


(from Pier Luca Lanzi)

In the example figure, the parameter given to k-means clustering is 3. The upper-right set is split into two separate clusters. But if the parameter were 2, then those two clusters might combine to make a cluster larger than the lower left.

Of course, that doesn't mean the lower-left cluster would be preserved; some elements may move back and forth. This shows that clustering can have anomalies like voting schemes, even though clustering doesn't account for all possible orderings (permutations) of 'candidates' and the correspondence of cluster label with candidate is not perfect.
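Here's a minimal sketch of that effect, assuming scikit-learn and NumPy are available; the synthetic blobs below are my own stand-in for the figure's data, not taken from it.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def blob(cx, cy, n):
    # Gaussian blob of n points centered at (cx, cy).
    return rng.normal(loc=[cx, cy], scale=0.5, size=(n, 2))

# One large group in the upper right (two nearby sub-blobs) and a smaller group.
X = np.vstack([blob(5, 5, 40), blob(6.5, 5, 40), blob(0, 0, 50)])

labels3 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
labels2 = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# With k=3 the upper-right group is 'split between two similar candidates';
# with k=2 it wins as a single cluster of about 80 points.
print(np.bincount(labels3))
print(np.bincount(labels2))
```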

Tuesday, August 18, 2015

Optimization design: Must you trade off between two things you want?

A common trope when presenting solutions to a complicated problem is the tradeoff: you can have this at the expense of that (and vice versa), or, of three desirables, you can have only two (the negative of the third is what you'll get).

For example, you can have low inflation or low unemployment, but since one can cause an increase in the other (if everybody is employed, most people will have more expendable income, and so prices will rise to take advantage), you can't have both at the same time.


(the Phillips curve, from SparkNotes)

Many algorithmic problems show solutions with this feature, usually a tradeoff between runtime and memory. If you have very little extra room (little available data-manipulation space), you might have to make many more passes over the data than if you could just keep another full copy of it. One strategy builds a big table (often in linear time), in contrast to an algorithm that uses only a couple of registers and takes a long time to compute.

Or, for a given technology, you want three things: quick production, low cost, and high quality, but any two of these means the third will be bad. See the CAP 'theorem' in distributed database design: you supposedly can only have two of the following, the third being prevented.
  • Consistency (all nodes see the same data at the same time)
  • Availability (a guarantee that every request receives a response about whether it succeeded or failed)
  • Partition tolerance (the system continues to operate despite arbitrary partitioning due to network failures)



Supposedly in distributed systems you can only have two of these, because any two denies the third, whereas a single database on a single machine guarantees all three (the same as the ACID properties).


As rules of thumb, these are all interesting and useful observations to make about a system. They give you constraints, so that you shouldn't feel you have to worry about overcoming them.

But this is where design and cleverness come in. The space-time tradeoff or the pick-two-of-three restriction is just another hurdle to overcome. To solve the problem in the first place, you used cleverness. But the solution has some drawbacks. In trying to remove the drawbacks, you introduce another drawback, and so you think there is an intimate relationship between the two drawbacks.

Don't stop there! Your next step of cleverness is to overcome both at the same time, and, as problems go, unless you can prove otherwise, there's no reason you can't make both good at the same time.

For example, take computing the Fibonacci numbers: F(n) = F(n-1) + F(n-2), F(1) = F(2) = 1.
The first answer is to use recursion and simply compute F(n-1) and F(n-2) directly. That has the drawback that, if left to run automatically in the recursion, F(n-k) is repeatedly computed many times.



Notice there are lots of repeats.
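The naive recursive version, as a minimal Python sketch (the function name is mine):

```python
def fib_naive(n):
    # Exponential time: F(n-k) gets recomputed over and over.
    if n <= 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)
```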

Instead you can be clever and, once you notice how you'd compute it by hand, start from F(3), using the known values F(1) and F(2). Then compute F(4), because you now have F(3) and F(2), and similarly F(5), and so on till you reach F(n). That is linear time but takes linear space:
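A minimal Python sketch of that table-building version (again, the names are mine):

```python
def fib_table(n):
    # Linear time, but keeps the whole table of previously computed values.
    table = [0, 1, 1]                     # table[k] = F(k); index 0 is a dummy slot
    for k in range(3, n + 1):
        table.append(table[k - 1] + table[k - 2])
    return table[n]
```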




So we're stuck, right, with a tradeoff between time and space: the faster the algorithm, the more memory it will use?

Of course not. All these 'rules' are just patterns that may or may not hold. It's not necessary that there's a constraint.

For Fibonacci, another observation cuts through both. Instead of going from F(1) to F(n), keeping every single item, you only ever use two values at a time (the two previous ones). You don't need to maintain the entire list of previously computed values. By judicious swapping (using a single temp variable), you can run through the whole computation quickly without having to save the whole table, ratcheting upwards quickly.
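And a minimal sketch of that constant-space version:

```python
def fib(n):
    # Linear time, constant space: only the last two values are ever kept.
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```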

And for two-out-of-three, there's the example of car manufacturing. Making a car by hand in the late 1800's and early 1900's was a painstakingly slow process, high cost, and of questionable quality. But the assembly-line process made all three better: it produced cars much faster, much cheaper, and of much better quality, because every car was made the same way.

Note that with design there's also possibly a tradeoff with simplicity: the more constraints a design tries to handle, the less simple it tends to be. But as with the assembly-line method, with a simple design all three were made better. There's no guarantee that more constraints means more complexity.

Sure, sometimes the constraints are intertwined, and one push must mean another gets pulled. But with appropriate design there may be a way around it.

Monday, July 20, 2015

Deep Learning is not Magic Learning



Any sufficiently advanced technology is indistinguishable from magic. Arthur C Clarke

"Deep Learning is Teaching Computers New Tricks"

"Andrew Ng: Why 'Deep Learning' Is a Mandate for Humans"


Deep Learning Deep Learning Deep Learning

Holy crap! Use Deep Learning to create new ideas? You may be thinking that I'm being too harsh; of course article and title writers stretch things to be more provocative, details left to the gross middle of the article that no one reads. Well, then, yes, I'm being too harsh, not because the details are left out, but because the implications of the details are ignored.

Deep learning is not Magic Learning. Deep Learning isn't what its name says. It is 'just' a more complex (= many more layers than traditional) neural network (which is itself not exactly what its name says; it is 'just' a set (OK, I'll grant, a network) of linear regression models, where some depend on others). It's not magic. It's not human-like learning or deep cogitation on concepts. It is just a mathematical model. It can distinguish two almost identical things. It can identify one thing out of many. But that's all the technique itself does (it's just the best in a long line of similar techniques). It (like many other techniques: logistic regression, decision trees, random forests (ooh... they're magical! Their names are so exotic!)) needs to be put in a larger framework (like in a process that determines the outlines of faces in a set of cat pictures, or splits words in a speech-to-text analyzer). By themselves, there's nothing magical.

This is not to say that there's something wrong with Deep Learning. On the contrary, it is a great recent development, with lots of successes (which is exactly what happened to its simpler self in the late 80's). But in the end it is 'just' a regression model, either saying yes or no to some inputs, or calculating a complex function. But that's it. It is not 'an' artificial intelligence, responding to and implementing our requests like a valet. It's just (one of the more) recent advances in discrimination methods. It is an important part of the field of artificial intelligence, but not the entire thing.

Is extracting 100's of initial petroleum products (fuel, plastics, lubricants, medications, etc.) magic? Not to mention the 1000's of downstream products created from the manipulation of these?

Frankly, Siri is closer to magic, because at least 40 years of electrical engineers and phoneticians have worked on converting sound waves produced by a human's oral and nasal cavity, modulated by teeth and tongue, into readable letters.

Deep Learning is not magic. It is a great development in neural networks (an incremental development (a very big incremental development)), but it's not magic and it won't make your toast for you in the morning.

(this morphed from the inarticulate unfinished beginning of a rant I had planned about ML (Machine Learning). And NLP (Natural Language Processing (not Neuro-Linguistic Programming which actually is horseshit))).

Tuesday, February 6, 2007

Bullet points from Jeannette Wing: 'Computational Thinking' (CACM, Mar 2006)

An opinion piece by Jeannette Wing (COMMUNICATIONS OF THE ACM March 2006/Vol. 49, No. 3, pp. 33-35) about how 'computational thinking' (whatever that is...well, actually that's what she -does- explain) is really an intellectual skill that is, to oversimplify terribly, good for everybody.

Sort of like humanities for scientists/engineers. She says it would be good for non-majors, but I'm guessing mostly just for scientists (i.e. 'everybody' probably won't include people who study literature or history (well, history does have a 'scientific' component, and this might be useful there)). Pardon the following exegesis, but like the Torah (or is it the Talmud, I always get those mixed up), the commentary may require more processing time than the original (which of course should be read alongside this to see the few things I removed as, well, not exactly fluff, but just more polemic/value judgment (and I'm only analyzing quantifiable things), most of which I agree with, at least in direction if not in actual implementation)...

Representative quotes:

Thinking like a computer scientist means more than being able to program a computer. It requires thinking at multiple levels of abstraction.
It represents a universally applicable attitude and skill set everyone, not just computer scientists, would be eager to learn and use.
Computational thinking is a fundamental skill for everyone, not just for computer scientists. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability.
Computational thinking will have become ingrained in everyone's lives when words like algorithm and precondition are part of everyone's vocabulary; when nondeterminism and garbage collection take on the meanings used by computer scientists; and when trees are drawn upside down.
This kind of thinking will be part of the skill set of not only other scientists but of everyone else.

Anyway, here's the mapping from classes in the Computer Science undergrad curriculum to her points ("Computational thinking is..."):

  • TCS (algorithm design methods): "In solving a problem efficiently, we might further ask whether an approximate solution is good enough, whether we can use randomization to our advantage"
  • AI (learning): "...and whether false positives or false negatives are allowed."
  • TCS (complexity): "reformulating a seemingly difficult problem into one we know how to solve, perhaps by reduction, embedding, transformation, or simulation."
  • Software (programming paradigms) "Computational thinking is thinking recursively. "
  • Software (systems, architecture): "It is parallel processing."
  • Software (programming, LISP): "It is interpreting code as data and data as code. "
  • Software (programming langs, unification): "It is type checking as the generalization of dimensional analysis."
  • Software (programming langs, pointers): "It is recognizing both the virtues and the dangers of aliasing, or giving someone or something more than one name."
  • Software (programming langs, machine/assembly): "It is recognizing both the cost and power of indirect addressing and procedure call. "
  • TCS (algorithms): "It is judging a program not just for correctness and efficiency" (this quote continues: "... but for aesthetics, and a system's design for simplicity and elegance." which unfortunately is totally absent from the curriculum or from the teacher's within-class comments on the curriculum)
  • Software (systems): "Computational thinking is using abstraction and decomposition when attacking a large complex task or designing a large complex system. "
  • Software (programming, design, modularity) "It is separation of concerns. "
  • general engineering (not mappable to any particular part of the UG curriculum): "It is choosing an appropriate representation for a problem or modeling the relevant aspects of a problem to make it tractable."
  • Software (correctness, theorem proving) "It is using invariants to describe a system's behavior succinctly and declaratively. "
  • Software (software engineering, modularity) "It is having the confidence we can safely use, modify, and influence a large complex system without understanding its every detail. "
  • Software (software engineering): "It is modularizing something in anticipation of multiple users or prefetching and caching in anticipation of future use."
  • Software (software engineering): "Computational thinking is thinking in terms of prevention, protection, and recovery from worst-case scenarios through redundancy, damage containment, and error correction. "
  • Systems(OS)/Software(programming): "It is calling gridlock deadlock and contracts interfaces. "
  • Systems(OS): "It is learning to avoid race conditions when synchronizing meetings with one another."
  • AI/TCS (search space/algorithms): "Computational thinking is using heuristic reasoning to discover a solution. "
  • AI (planning): "It is planning, learning, and scheduling in the presence of uncertainty."
  • AI/TCS (algorithms/constraints): "It is search, search, and more search, resulting in a list of Web pages, a strategy for winning a game, or a counterexample."
  • Systems (software): "Computational thinking is using massive amounts of data to speed up computation."
  • TCS/systems (algorithms, architecture) "It is making trade-offs between time and space and between processing power and storage capacity."
  • Systems/software (architecture, design ): "When your daughter goes to school in the morning, she puts in her backpack the things she needs for the day; that's prefetching and caching. "
  • TCS (algorithms): "When your son loses his mittens, you suggest he retrace his steps; that's backtracking."
  • TCS (algorithms): "At what point do you stop renting skis and buy yourself a pair?; that's online algorithms. "
  • Systems (OS): "Which line do you stand in at the supermarket?; that's performance modeling for multi-server systems. "
  • Systems/software (software engineering): "Why does your telephone still work during a power outage?; that's independence of failure and redundancy in design. "
  • What? never heard of it: "How do Completely Automated Public Turing Test(s) to Tell Computers and Humans Apart, or CAPTCHAs, authenticate humans?; that's exploiting the difficulty of solving hard AI problems to foil computing agents."
Uses of CS thinking in other domains:
  • AI (learning) : "For example, machine learning has transformed statistics. Statistical learning is being used for problems on a scale, in terms of both data size and dimension, unimaginable only a few years ago. Statistics departments in all kinds of organizations are hiring computer scientists. Schools of computer science are embracing existing or starting up new statistics departments."
  • AI/Databases (knowledge structures): "Computer scientists' recent interest in biology is driven by their belief that biologists can benefit from computational thinking. Computer science's contribution to biology goes beyond the ability to search through vast amounts of sequence data looking for patterns. The hope is that data structures and algorithms, our computational abstractions and methods, can represent the structure of proteins in ways that elucidate their function. Computational biology is changing the way biologists think. "
  • TCS (algorithms) "Similarly, computational game theory is changing the way economists think; "
  • AI/EE (agents): nanocomputing, the way chemists think;
  • TCS (complexity) "... and quantum computing, the way physicists think."
Many of these items are explicitly covered in a traditional CS curriculum (whatever -that- is; I of course consider what I did as traditional because, well, I did it). But some of the items are not exactly traditional textbook subjects, though they are certainly part of the air you breathe. The taxonomy of CS subject domains goes like this: TCS (theoretical CS: includes, broadly, algorithms, complexity, logic/discrete math/other math applied to CS, data structures), AI (artificial intelligence: learning, cognition, logic, planning, robotics, heuristics that aren't algorithms), software (programming languages, compilers, semantics), architecture and hardware (frankly I don't know the difference here other than, though both involve manipulating actual physical objects, architecture deals with objects that are larger than those in hardware), systems (a grab bag of topics: operating systems, databases, graphics), and NA (numerical analysis, or more recently renamed scientific computing).
WHAT IT IS, AND ISN'T
The second half of the article gives more abstract concepts of 'computational thinking'.
Computer science is not computer programming. Thinking like a computer scientist means more than being able to program a computer. It requires thinking at multiple levels of abstraction;

Computational thinking is a way humans solve problems; it is not trying to get humans to think like computers.

Computer science "inherently draws on" mathematical thinking as its formal foundations" and on engineering thinking, "given that we build systems that interact with the real world. "

"The constraints of the underlying computing device force computer scientists to think computationally, not just mathematically. " ... well isn't computational thinking one particular type of mathematical thinking?

For everyone, everywhere. Computational thinking will be a reality when it is so integral to human endeavors it disappears as an explicit philosophy.
A beautiful sentiment... but I disagree with the general (and specific) inference. Consider 'mathematical thinking'...it is very "integral to human endeavors", but has not disappeared as an "explicit philosophy".

CS is really a 'liberal art'. Myth: computer science has been 'solved', only implementation remains. Wing seems to leave the thread of 'computational thinking' here and addresses more things associated with CS as an official subject of study (she continues to say very important things that I agree with and that of course need to be said, and the best place to say them is in an article extolling the virtues of computational thinking, and you should read it... but none of that fits my agenda at the moment, so I won't really comment, other than to say that she must be a department head looking at the lowering numbers of CS undergrads, worrying about the great number of asst profs they just hired to teach the glut of CS UGs that were projected but never materialized).

Extremely abbreviated commentary:

  • There's a difference between things you learn because they were said in class and things you learn because you took the class (but no one ever said).
  • Jeannette Wing has done in three pages what Stephen Wolfram claimed to be doing in 1000: show how computation is a new, useful paradigm for (scientific) thinking (this is intentionally a slam against Wolfram's ANKS).
  • How could one compose a similar 'look at the whole world through these eyes' for a mathematician? And from the other direction, how could non-scientific domains (the humanities) use this worldview (these worldviews)?
  • Psychology (cognitive psychology) has already had the 'information processing' view since the 60's/70's.