
Showing posts from April, 2011

coinductive winnowing

I implemented a version of the winnow algorithm for online learning in SML, because I was trying to understand learning algorithms in terms of coinductive data. The idea behind a learning algorithm, as I understand it (as someone who was just taught about them yesterday), is that given a stream of tagged inputs (like images marked with the locations of faces), you should get better and better at predicting the tags -- specifically, the total number of mistakes you make should be bounded by the input size. Alternatively, you can think of the "tags" as a function from input to answer (for simplicity, from bitstring to bit) that the learning algorithm is trying to approximate. So the "learner" gets to take this function, which I call the "reference function", as an input, and compare its predictions against the actual answers, using its mistakes to update some priors. Roughly, it has the shape:
1. Given an input, make a prediction.
2. Check the answer with the ref
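Here is a minimal sketch of that shape in SML, assuming bit-vector inputs and a reference function from bitstring to bit as described above; the names (learner, winnow, run) and the weight/threshold parameters are illustrative, not taken from the actual implementation:

```sml
(* A sketch only: a "learner" consumes an input, commits to a
   prediction, and, once shown the true answer, becomes a new
   learner -- coinductive data that unfolds one observation at a time. *)
datatype learner = Learner of bool list -> bool * (bool -> learner)

(* Winnow instantiates this shape.  The weights play the role of the
   "priors"; a mistake promotes or demotes the weights of the bits
   that were set in the input. *)
fun winnow (weights : real list, theta : real) : learner =
  Learner (fn input =>
    let
      (* weighted sum of the bits that are set *)
      val score =
        ListPair.foldl
          (fn (w, x, acc) => if x then acc + w else acc)
          0.0 (weights, input)
      val prediction = score >= theta
      (* after seeing the true answer, become the next learner *)
      fun next answer =
        if prediction = answer then winnow (weights, theta)
        else
          winnow
            (ListPair.map
               (fn (w, x) =>
                  if not x then w
                  else if answer then w * 2.0  (* missed a positive: promote *)
                  else w / 2.0)                (* false alarm: demote *)
               (weights, input),
             theta)
    in
      (prediction, next)
    end)

(* Drive a learner with a reference function f, counting mistakes. *)
fun run (Learner step) f inputs =
  case inputs of
    [] => 0
  | x :: xs =>
      let val (p, next) = step x
      in (if p = f x then 0 else 1) + run (next (f x)) f xs
      end
```

Viewed this way, the learner really is coinductive: each observation unfolds it into a prediction and a successor learner, and the mistake bound is a property of the whole unfolding rather than of any finite prefix.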

Server OK (belated)

I recently finished LFinLF, a mechanized proof of the metatheory of LF, which took me about two and a half years and 36,503 lines of Twelf. I should focus my exposition efforts into a paper about it, but I thought I should update the occasional follower of this blog with the result. There are a few remaining questions I have been intrigued to consider as a result of finishing this proof:

- Can the common folklore that "we don't need family-level lambdas" be made formal? I used them in my proof to define translation: all LF terms translate to normal Canonical LF terms by way of eta-expansion, and for families to be treated similarly, we need lambdas to expand the ones at pi kind (sketched below). But there are other ways to do it which I think would shed varying degrees of light on why we do without them.
- This proof formalizes the correctness of (a particular algorithm for) LF type checking; could we also verify type reconstruction?
- Could I formalize the (also folkloric, though from sli
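For concreteness, here is the expansion the first question refers to, in standard LF notation (my own sketch, not an excerpt from the proof). A term of pi type eta-expands by wrapping it in a term-level lambda; doing the same to a family of pi kind requires a lambda at the family level:

$$ M : \Pi x{:}A.\,B \;\;\leadsto\;\; \lambda x{:}A.\; M\,x \qquad \text{(term level)} $$
$$ a : \Pi x{:}A.\,K \;\;\leadsto\;\; \lambda x{:}A.\; a\,x \qquad \text{(family level, requiring a family-level lambda)} $$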