Deep learning is just a deeply layered neural network; by "deep" they mean more than one hidden layer. It has recently gained much attention because of its successes with images, Go, speech, and all sorts of things that plain old NNs weren't doing so well at.
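Just to pin down the term, here's a minimal sketch of a "deep" net: two hidden layers, untrained, with sizes and names that are entirely my own illustration.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    rng = np.random.default_rng(0)
    # "deep" = more than one hidden layer; here there are two
    W1, b1 = rng.normal(size=(784, 128)), np.zeros(128)
    W2, b2 = rng.normal(size=(128, 64)), np.zeros(64)
    W3, b3 = rng.normal(size=(64, 10)), np.zeros(10)

    def forward(x):
        h1 = relu(x @ W1 + b1)   # hidden layer 1
        h2 = relu(h1 @ W2 + b2)  # hidden layer 2
        return h2 @ W3 + b3      # output scores (say, 10 digit classes)

    x = rng.normal(size=784)     # a flattened 28x28 image, for instance
    print(forward(x).shape)      # (10,)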
But what about automated deduction? That's not rhetorical; this post is entirely speculative and answer-free. I am wondering if DL (or any ML technique) could be thrown at AD, or ATP (automated theorem proving), whichever you'd like to call it.
First the optimism: wow, wouldn't that be cool, a method that could prove really hard mathematical theorems (the ATP community has to work hard and do a lot by hand on projects like Flyspeck), or even… But desiring the outcome doesn't say anything about the implementation.
Next the pessimism. By analogy, images and speech/text are forgiving optimization problems: lots of perturbations get you the same thing. But logic (and similar combinatorial problems) is all or nothing; a statement is either true/proven or false/wrong. How could an approximate optimization method translate to logical combinatorial problems? And where do you get the scads of supervised instances required by DL? MNIST has tens of thousands of tagged images (ImageNet, millions), but TPTP has literally only thousands of problems.
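To make the brittleness concrete, here's a toy illustration of my own: a brute-force propositional tautology check. Nudge a pixel and an image usually keeps its label; nudge one connective and a theorem flips to a non-theorem, with nothing in between.

    from itertools import product

    def is_tautology(f, n_vars):
        # logic is all or nothing: every assignment must come out True
        return all(f(*vals) for vals in product([False, True], repeat=n_vars))

    print(is_tautology(lambda p: p or not p, 1))   # True: a theorem
    # "perturb" one token (or -> and) and validity collapses entirely
    print(is_tautology(lambda p: p and not p, 1))  # False: not a theorem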
So, I don't know. But just because I don't know how to do it doesn't mean it can't be done.