Notes that I made from Piyush sir's Programs, proofs and types course (Aug-Nov 2022).
Uses Coq.
            Type
              |
     +--------+--------+
     |                 |
    Set              Prop
     |                 |
 +---+---+        +---+---+
 |       |        |       |
bool    nat     True    False
False is something that cannot be proven.
But we can have a proof of False -> False.
not (A:Prop) := (A -> False) : Prop
Because False -> False is the type of a function which gives a way to produce a proof of False if we give it a proof of False.
But we can never have a value of False.
A -> ~~A
A -> ~(~A)
A -> not (not A)
A -> not (A -> False)
A -> ((A -> False) -> False)
intros A x.
A: Prop
x: A
-----------------------
(A -> False) -> False
intro f.
A: Prop
x: A
f: A -> False
-----------------------
False
exact (f x).
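Putting these steps together, the whole proof script:
Goal forall A:Prop, A -> ~~A.
Proof.
  intros A x f.
  exact (f x).
Qed.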
**************
A -> ~~A
A -> ~A -> False
A -> (A->False) -> False
And the proof is:
fun (A:Prop) => fun (x:A) => fun (f: A->False) => f x
as in
λ(A:Prop). λ(x:A). λ(f:~A). f x
NB: We can't prove ~~A -> A
Why?
((A -> False) -> False) -> A
Cannot be proven (think of Gödelian unprovable propositions).
The intuition tactic would make short work of the whole thing in this case.
Examples: Riemann hypothesis, Fermat's last theorem.
There may be props where the law of excluded middle is provable.
LEM is ok in classical logic, not in intuitionistic (aka constructive) logic.
Conjectures can be made like A \/ ~A whose proof we can't have (yet).
True, False
or True (True -> False)
or False (False -> False)
:DBT: Can we make T→F?
:DBT: Obj of type F→F? id
Pausing logic, starting deptypes.
:DBT: Induction principle of bool
:DBT: Induction principle of nat
:DBT: Would it make any difference if nat:Type instead of nat:Set?
:DBT: Definition of CoC/CIC
:DBT: Definition of N, Z, R in coq
:DBT: Definition of complex nos?
:DBT: patmat and addition of n in utlc
:DBT: How does the fixpoint work for a simple recursive function in coq? ie, what's the significance of Fixpoint.
:DBT: What was the original UTLC paradox?
:DBT: What about the type:type paradox in Coq?
nat_rec =
fun P : nat -> Set => nat_rect P
: forall P : nat -> Set,
P 0 ->
(forall n : nat, P n -> P (S n)) -> forall n : nat, P n
Arguments nat_rec _%function_scope _ _%function_scope
_%nat_scope
n + m
≡ (+) n m
is like
match n with
| O => m
| S n' => S ((+) n' m)
end
ie, in Coq:
Fixpoint plus (n m:nat) : nat :=
match n with
| O => m
| S n' => S (plus n' m)
end.
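A quick check:
Compute plus 2 3. (* = 5 : nat *)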
May the types guide you.
NB: Getting the type right doesn't mean that the program is necessarily correct. Types only guide us.
Eg:
Definition id1 (A:Type) (a:A) : A := a.
Compute id1 nat 3.
Compute id1 bool true.
Compute id1 _ true.
:DBT: Maximal vs minimal expansion ({}, [])
:DBT: Type unification problem. The general unification problem is undecidable. First-order unification is decidable/solvable.
Compose fn: (g . f) x = g (f x)
Unification variables = like 'holes' for gallina. Type inference is based on the unification constraints. A unification variable is generated for every hole (ie, _) that needs to be figured out.
id _ 42
id ?A 42
Coq knows 42:?A from id's type.
Coq also knows that 42:nat.
So ?A is nat.
Gallina type inference not as powerful as sml. But still quite capable.
id _ (fun x => x+1)
id _ (fun x:_ => x+1)
Type checking + inference
:DBT: Underscore can fill non-type variables as well. Or can it?
:DBT: What else does Arguments do besides implicit args?
Definition id {A:Type} (a:A) : A := a.
Definition id [A:Type] (a:A) : A := a.
Arguments id [A] _.
id 42 is sort of internally expanded to use holes.
Java: you've got to babysit the compiler.
Implicit args are args that are necessary but boring.
Necessary but not that interesting.
Compiler should be smart enough to figure out the 'boring' types by itself.
Example: compose fn
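For instance, a compose where the type arguments are implicit (a sketch; the name compose is not from the notes):
Definition compose {A B C:Type} (g:B->C) (f:A->B) : A -> C :=
  fun x => g (f x).

Compute compose S (fun n => n + 1) 0. (* = 2 : nat *)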
@ can be used to get an explicit-args version of a name defined with implicit args.
Or just for a few implicit args, like
id (A:=nat)
(* Order of args is irrelevant here (the 'partial' way), unlike in the case of @ *)
:DBT: Templating fnal prog in cpp
You could have no bugs if you haven't got any code. Little code => fewer bugs. So find ways to write as little as possible. But then again, you would need a smart enough compiler.
Less code -> fewer bugs -> less maintenance
Implicit args give us a chance to not write code. :)
Consider id in
map id ls
where ls : list nat and map : forall {A B:Type}, (A -> B) -> list A -> list B.
Which id is it? The implicit argument of id is inferred (here, nat).
:DBT: maximal vs minimal insertion when implicit and explicit args are mixed.
Set Implicit Arguments.
(* May not be a great idea *)
:DBT: Predicate logic possible without deptypes?
Inductive list (A:Type) : Type :=
| nil : list A
| cons : A -> list A -> list A.
:DBT: Why won't Set for list work?
:DBT: Why didn't the coq stdlib declare A as implicit in the def itself, instead of in Arguments?
ANS: because we need to use the A in nil.
Head function of list A
:
Definition hdList {A:Type} (l: list A) : option A :=
match l with
| nil => None
| cons x _ => Some x
end.
We need to handle the nil case as well, since Gallina functions need to be total. Because if they weren't, we could prove anything. In a theorem we can't leave out some branch; ie, we can't skip cases. Not good.
:DBT: How would a non-total (ie, partial) function cause this?
Throwing exceptions is also making the function partial.
:DBT: No infinite loop in coq. If there were an infinite loop, we could prove False. How?
Every coq program should terminate.
Decidable type checking + infinite loops not possible in a proof assistant. Got to sacrifice somewhere.
Either use option
or use a default value.
(* Variant with a default value *)
Definition hdList' {A:Type} (default:A) (l: list A) : A :=
match l with
| nil => default
| cons x _ => x
end.
Vector type.
Deptype: a type that depends on a value.
Vector.t bool 2 =>
Parametrized on both bool and 2.
The type depends not just on bool, but also on 2.
:DBT: Don't do Import Vector. Use Require Import Vector. Why?
Require makes that file available if present in the standard path. It doesn't make everything in it visible.
Require Import makes it available and makes everything in it visible.
Vector.t's definition is similar to that of list.
Inductive t (A:Type) : nat -> Type :=
| nil : t A 0
| cons : A -> forall n:nat, t A n -> t A (S n).
(* Args of [cons]: the element of type A, the length n, and a tail of type t A n *)
t gives type families. For every natural number, there's a type.
Vector.t bool 0
Vector.t bool 1
Vector.t bool 2
:DBT: Difference between list's cons and Vector.t's cons? The type of cons would change though.
Vector.t makes a (total) head possible:
Definition vectorHead {A:Type} {n:nat} (v:Vector.t A (S n)) : A :=
match v with
| Vector.cons _ x _ _ => x
end.
(* Newer coq versions know that nil case is not needed to make this
function total *)
Compute vectorHead (Vector.cons _ 3 _ (Vector.nil _)).
:DBT: What can be done for old coq versions to make coq believe that vectorHead is total? Newer versions can figure it out for themselves.
The vectorHead definition as made by coq looks like:
vectorHead =
fun (A : Type) (n : nat) (v : Vector.t A (S n)) =>
match
v in (Vector.t _ n0)
return match n0 with
| 0 => IDProp
| S _ => A
end
with
| Vector.nil _ => idProp
| Vector.cons _ x _ _ => x
end
: forall (A : Type) (n : nat), Vector.t A (S n) -> A
Arguments vectorHead {A}%type_scope {n}%nat_scope
Parameters/arguments of the Vector constructors:
cons:
- A:Type (parameter of Vector.t)
- x:A
- n:nat
- xs: Vector.t A n (giving a Vector.t A (S n))
nil:
- A (parameter of Vector.t)
Deptypes => can put code invariants right there in the type. Malformed data can be made inexpressible.
A type that can have arity 1/2/3.
Inductive op:Set :=
| Plus
| Minus.
Inductive Exp:Set :=
| Const: nat -> Exp
| App: op -> list Exp -> Exp.
Here, unary operations can be applied to n-terms. Well typed, but wrong.
Right expressions would still be okay though. But it would allow wrong ones as well.
Inductive op:nat->Type :=
| Plus: op 2
| Unary: op 1.
Inductive Exp:Set :=
| Const: nat -> Exp
| App: forall n:nat,
op n -> Vector.t Exp n -> Exp.
Now, it looks right.
Without making any proof, we can be sure that malformed expressions can't be there, since they simply can't be expressed. Simply can't be made. If it can't be made, it can't be wrong. Okay?
This illustrates one of the motivations for having deptypes.
:DBT: (Other) motivations for deptypes. :DBT: Motivation for deptypes from logic side.
All the type checking is done at compile time. Without even running it.
:DBT: Set vs Type. What makes a type 'Large non-propositional inductive type'.
Either the compiler catches the error or the runtime does it. Compiler doing it is better for us. :-)
Logic motivation of deptypes.
Type families.
P: nat -> Prop (eg, P n could be n < 42)
P is a predicate on nat. Its truth depends on the nat value it gets.
P is not a Prop, but nat->Prop. But P n is a Prop.
The applications P 0, P 1, … are Props.
P n gives a type family, like Vector.t A n.
Every n is associated with a type.
:DBT: Proof of 2<34.
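Such a proof can be built just from the constructors of le (a quick sketch):
Goal 2 < 34.
Proof. repeat constructor. Qed.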
Bottom line: We need deptypes for predicates.
Connection between logical world idea and computer programming world.
Curry-Howard correspondence.
(So, making functions first class makes sense.)
Type | Logic |
---|---|
Arrows | Implies |
Mathematical statements correspond to types.
Math stmts need not be true. Eg: 2>5
Type system designed in such a way that it should be impossible to create a value of type 2>5.
:DBT: Is it possible to prove that 2>5 is false?
Prop-Type separation useful for extraction.
(Type families.
eg: nat->Type
for every nat, there's a type.)
Predicates themselves can't be true/false. They need a value as arg.
∀x. φ(x): φ(x) is true for every single x ∃x. φ(x): φ(x) is true for at least one x
P: nat -> Prop
(forall x:nat, P x) : Prop
(exists x:nat, P x) : Prop
How to prove a Prop involving forall?
ie, if P:nat->Prop, prove P 0, P 1, … (for every nat).
:DBT: Induction??
:DBT: Difference between forall x:nat, P x and nat -> Prop? Or fun x:nat => P x?
Dependently typed functions. Return type depends on input.
f:nat -> A
f 0 : A
f 1 : A
f:forall x:nat, P x
f 0 : P 0
f 1 : P 1
Types are not just sets. Thanks to deptypes. :DBT: But we can simulate it in set theory.
f:forall x:nat, P x
Give me a nat x, and I would give you a proof of P x
.
This is why it can be said that forall x:A, B is a generalization of A -> B.
:DBT: You wouldn't have stumbled upon this in the type world if we didn't look at it from the logic side. How so?
:DBT: Continuation. I thought it was about passing args. Apparently not.
Constructive logic + continuation = classical logic (ie, boolean logic)
Sign of a good design.
It works in places you wouldn't even have expected it to work.
P: A -> Type
What if P were a constant fn? As in, P a = B for every a.
:DBT: How is this? A -> B is the same as forall a:A, P a when P a = B.
Look at functions involving vectors. Eg: the tail function accepts t A (S n) and gives t A n.
:DBT: How?
Polymorphism = monomorphism + deptypes
where the morphism is dependent.
list in coq gives a type family as well. Not over A, but over Type (for some Typeᵢ).
exists
(forall x:nat, P x is the proof of ∀x, P x.)
Got to be a constructive proof.
∃x:A, P x
First produce an x:A, and a proof of P x
exists x:A, P x
So, a proof of exists is a dependent tuple
(x:A, prf:P x)
Note that the type of prf depends on the value of x. Hence the dependence mentioned above.
Turns out that we can make exists if we have forall and inductive types. :DBT: How?
So exists is not considered 'core' in coq.
Ob:Senator:Man:Human
A dependent type needed.
2+3 = 5
eq (2+3) 5
So,
Fixpoint eq (n m:nat) : Prop :=
match n, m with
| O, O => True
| _, O => False
| O, _ => False
| S n', S m' => eq n' m'
end.
or just
Fixpoint eq (n m:nat) : Prop :=
match n, m with
| O, O => True
| S n', S m' => eq n' m'
| _, _ => False
end.
(* eq is a predicate taking two values. A bi-predicate. *)
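Quick checks (note that this eq shadows the standard library's eq in these notes):
Compute eq 2 2. (* = True : Prop *)
Compute eq 2 3. (* = False : Prop *)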
An equality 'check' involves bool. This one is on Prop.
This was the functional way.
Another way is to have an inductive type.
Inductive eqI : nat -> nat -> Prop :=
| refl: forall n:nat, eqI n n.
Arguments refl {n}.
Here, we give a way to prove eqI n m when n == m. But there's no way to prove unequal numbers equal.
eqI (2+3) 5
reduces to
eqI 5 5
which refl proves.
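As a concrete check:
Goal eqI (2+3) 5.
Proof. apply refl. Qed.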
:DBT: Both functional and inductive way have their own benefits.
General equality can be proven via eqI.
The function version needed to 'look within' the values (a match is done), and in this case works only for nat. Whereas eqI works for any type. But eq is not generic.
:DBT: A boolean version is also there. Find out.
:DBT: Proofs that can't be proven. Gödelian world. Incomplete.
The eqI or the eq that we defined got to be trusted, though.
:HW: Define Le in functional and inductive ways.
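One possible sketch for this (the names leP and leI are made up):
Fixpoint leP (n m:nat) : Prop :=
  match n, m with
  | O, _ => True
  | S _, O => False
  | S n', S m' => leP n' m'
  end.

Inductive leI : nat -> nat -> Prop :=
| leI_n : forall n:nat, leI n n
| leI_S : forall n m:nat, leI n m -> leI n (S m).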
eq: nat -> nat -> Prop was a bi-predicate (taking two values).
Fixpoint eqb (a b:nat) : bool :=
match a, b with
| O, O => true
| S a', S b' => eqb a' b'
| _, _ => false
end.
bool => an equality check.
Fixpoint eqP (a b:nat) : Prop :=
match a, b with
| O, O => True
| S a', S b' => eqP a' b'
| _, _ => False
end.
Almost the same as eqb.
Inductive eq : nat -> nat -> Prop :=
| eq_refl : forall n:nat, eq n n.
eqP 2 3 is False. False is a type. If we could produce a value of type False, then we could say 2 and 3 are the same.
eq can be made more general.
Inductive eq {A:Type}: A -> A -> Prop :=
| eq_refl : forall a:A, eq a a.
Arguments eq_refl {A}.
eqP looks inside or inspects the values. So it can't be general.
:DBT: eqP is mostly useless. Where is it useful?
Checking equality is a decidability problem. General eq is undecidable.
The result type of eq_refl A a depends on A and a.
Arrow is a special, lesser version of forall.
forall is like the function arrow on steroids in Gallina.
               Parameters      Indices
                 +---+   +---------------+
                 |   |   |               |
Inductive Name   A B  :  A -> B -> nat  ->  Type
Parameters (fixed) and indices (variable).
Type family => type schema
Inductive eq2 {A:Type} (a:A) : A -> Type :=
| eq_refl2: eq2 a a.
Arguments eq_refl2 {A}.
Compute eq_refl2 3.
(*
= eq_refl2 3
: eq2 3 3
*)
Here we made the a a parameter instead of an index.
We fixed the a.
So, parameter available for all constructors since it's fixed in the type itself.
:DBT: Coq's eq is like this. Any benefit in making a a parameter instead of an index?
forall (a:A) (pf: a = a), pf = eq_refl (A:=A) (a:=a)
(You might want this to be provable.)
Equality type => paths in topology (homotopy).
Types as spaces. Values as points. Equality => paths in the space.
Rules of the game => axioms. There's no one type theory. There's no one mathematics.
eq_refl isn't the 'only way' in which we can prove equality. Well, not eq_refl alone.
Eg: eq (n+0) n
Definition rightidentity' (n:nat) : n+0=n :=
(fun n : nat =>
nat_ind (fun n0 : nat => n0 + 0 = n0) eq_refl
(fun (n0 : nat) (IHn : n0 + 0 = n0) =>
eq_ind_r (fun n1 : nat => S n1 = S n0) eq_refl IHn) n) n.
:DBT: Why can coq see that 0+n is n, but not that n+0 is n? I guess it's because of the way Nat.add is defined?
:DBT: Unset printing notations only for a specific notation.
:DBT: This is a good example of opaque vs transparent.
nat_ind denotes the principle of mathematical induction for nat.
If we evaluate this, it would end up being eq_refl nevertheless.
Programs could be thought of as abstract objects, with many representations, including string representations.
(Eg: nat is an abstract idea. Binary is a repr, decimal is a repr.)
Ambiguous grammars. Eg: 2 + 3 * 4 can have two different parse trees.
Front-end:
String -> AST(src)
              |
              | Compiler translates
              |
              V
          AST(target)
Back-end:
AST(target) -> String
Stack machine => no regs. Only stack.
:DBT: Why is it called segfault?
More general theorems are often easier to prove.
When s = p = nil, we got what we wanted.
:TODO: ltac crush in course repo
:DBT: SearchRewrite
Tactics.
Tactic lang not part of TCB (trusted kernel).
Term made by tactics are type checked by coq before being accepted.
Tactic can be buggy but errors introduced by it would be caught by coq's type checker.
Theorem foo : 2=2.
Proof.
exact 42.
We used exact here to say that this is the proof, but coq's type checker caught it.
We can make tactics as complicated as we want. Makes it convenient.
∵ coq would never accept an incorrect proof.
(So, it's convenient to have ltac outside the trusted kernel.)
But we trust the correctness of Gallina's type checker.
Bugs in tactics doesn't affect correctness of terms, because they are filtered by the type checker.
Tactics are sort of like automated ways to form terms. C'est tout.
A couple of syntactical things for ltac tactics:
- tac1. tac2 : tac1 is applied first, a bunch of goals may be generated, one goal is focused automatically, and tac2 is applied to this focused goal.
- tac1; tac2 : tac1 is applied first, a bunch of goals may be generated, and tac2 is applied to each of these goals.
One almost never needs to have a 'dot' at the end of an ltac definition.
match in ltac and gallina are different things, though they look similar.
We can even write tactics which does decision procedures and use ocaml if needed.
Unlike Gallina's match, ltac match can backtrack. If one tactic fails, it backtracks and tries the next match pattern.
Gallina's match is a declarative match, whereas ltac's is sort of imperative ('do this if this') with side effects ∵ it manipulates the goal.
match goal with
| _ => intro
| patt2 => intros
end
Here, the syntax to match against is:
[ hypotheses |- goal ]
Tactics are usually best kept general. More specific you make them, less places you can use them at.
:DBT: How to control name of induction hypothesis?
First wrote the tactics manually, saw the patterns and made a tactic with those patterns => automation.
:HW: Try commutativity with ltac
Still Ltac.
Unification variables.
Ltac done.
Correct by construction stuff.
Eg: the head function of Vector.t.
Best way to prove is being in a situation where you needn't prove anything. Proof is built-in to the type. The invariant is built-in. Eg: agda ML tree example
Coq forces us to be correct (total functions) so we can't write a function of type
hd: list A -> A
list A -> option A or A -> list A -> A are possible though.
(* 'Safe' variant of [hd] *)
Definition hd {A:Type} (l:list A) : option A :=
match l with
| nil => None
| cons x xs => Some x
end.
But type being correct needn't mean the definition is right.
We could've had this as:
(* 'Safe' variant of [hd] *)
Definition hd {A:Type} (l:list A) : option A :=
match l with
| nil => None
| cons x xs => None
end.
where the hd always returns None.
The spec (ie, type) doesn't completely capture the semantics of hd. 'Not correct by construction. Semantically.'
The spec should be simple enough.
We should be able to trust the spec.
Could've been
hd {A:Type} (l:list A) (pf: l≠nil) : A :=
This is better but still not good enough ∵ although now we are sure that hd would return a value in l, it needn't be its first element.
And to top it off, we still got to take care of the pattern match on nil. We know l can't be nil, but we still got to handle the nil case.
Fail Definition hd2 {A:Type} (l:list A) (pf: l <> nil) : A :=
match l with
| cons x xs => x
end.
(We could've given a default but that could get 'even more complicated'.)
Coq couldn't recognize the fact that l can't be nil.
l <> nil
unfolds to
l = nil -> False
If you use proof mode to write code, probably Defined would be what you need.
Proof mode used to do computation => Defined. Not Qed. Usually.
Convoy pattern.
Convoy of args: because an index is used instead of a parameter in the hd definition.
match exp as y0 in T x1 x2 ... xn
return ret_type
with
...
end
ret_type is dependent on the bound variables (y0, x1 ... xn).
Dependent type.
On each branch of the match
, the return type is computed.
match l as l0 return (l0<>nil -> A) with
Branch | Value |
---|---|
(x::xs) | (x::xs)<>nil -> A |
nil | nil<>nil -> A |
Fail Definition hd {A:Type} (l:list A) : A :=
match l with
| cons x xs => x
end.
(* 'Safe' variant of [hd] *)
Definition hd {A:Type} (l:list A) : option A :=
match l with
| nil => None
| cons x xs => Some x
end.
Search (list _ -> list _ -> Prop).
(* pf should be proof that l≠nil *)
Fail Definition hd2 {A:Type} (l:list A) (pf: l <> nil) : A :=
match l with
| cons x xs => x
end.
(*
we got a proof saying that l won't be nil, yet we can't use that fact.
The nil branch will not occur, yet we got to handle that branch
*)
(* forall A:Type, False -> A *)
Definition absurd {A:Type} (pf:False) : A :=
match pf with end.
(* False has got no constructors, so that's fine *)
(* Turns out we can even have [A] implicit in the above function *)
Unset Printing Notations.
Check nil <> nil.
(*
l <> nil
eq l nil -> False
*)
Print eq.
Check eq_refl nil.
Check @eq_refl.
Check @eq_refl _ nil.
Fail Definition hd3 {A:Type} (l:list A) (pf:l<>nil) : A :=
match l with
| cons x _ => x
| nil => match (pf (@eq_refl _ nil)) with end
end.
Definition hd4' {A:Type} (l:list A) (pf:l<>nil) : A.
Proof.
induction l.
- refine (match (pf _) with end).
exact (eq_refl nil).
-
(*
A : Type
a : A
l : list A
pf : not (eq (cons a l) nil)
IHl : forall _ : not (eq l nil), A
============================
A
*)
Abort.
Definition hd4 {A:Type} (l:list A) (pf:l<>nil) : A.
Proof.
refine (
match l with
| nil => _
| cons x xs => x
end
).
Show Proof.
refine (match (pf _) with end).
Abort.
(* Cannot produce, so try producing False if we got a function returning False. Because False is supposed to be impossible to construct. *)
(*
l in pf is still l and not []
That's the problem. So the proof can't go forward.
*)
(* So we make l<>nil an index instead of a parameter *)
Definition hd5 {A:Type} (l:list A): l<>nil -> A.
Proof.
(* If we intro H now, we can't specialize H for particular values of l *)
refine (fun H =>
match l with
| nil => _
| cons x xs => _
end
).
-
Restart.
refine (
match l with
| nil => _
| cons x xs => _
end
).
- refine (fun H => _).
exact (match (H (eq_refl nil)) with end).
- refine (fun H => _).
exact x.
Show Proof.
Defined.
Definition hd6 {A:Type} (l:list A): (l<>nil) -> A :=
match l with
| nil => fun pf => match (pf (eq_refl nil)) with end
| cons x xs => fun _ => x
end.
Set Printing Notations.
Print sig.
Compute exist _ 2 (le_n 2).
(* Let's make return type of match dependent on l *)
(* match discriminates over l and returns return *)
Definition hd7 {A:Type} (l:list A): (l<>nil) -> A :=
match l as l0 return (l0<>nil -> A) with
| nil => fun pf => match (pf (eq_refl nil)) with end
| cons x xs => fun _ => x
end.
(* Newer coq versions seem capable of figuring this out without explicit general match though *)
Definition hd_subset {A:Type} (l: {l':list A | l' <> nil}) : A :=
match l with
| exist _ (cons x xs) pf => x
| exist _ nil pf =>
match (pf (eq_refl nil)) with end
end.
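A usage sketch (the name nonempty is made up here); Defined keeps the proof transparent so Compute can reduce:
Example nonempty : {l : list nat | l <> nil}.
Proof. exists (cons 1 nil). discriminate. Defined.
Compute hd_subset nonempty. (* = 1 : nat *)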
(*
A portion of the code we /got/ to write, but which is computationally irrelevant. It's just there to make the type checker happy.
In the above, the nil branch is computationally irrelevant, sort of.
Using tactics altogether needn't always give the right answer, because it would only match the type; the definition could be different. We got to guide the proof appropriately. Then it would be fine.
*)
(*
exist _ (cons 2 nil)
Definition hd_subset {A:Type} (l: {l':list A | length l' > 0}) : A :=
match l with
| exist _ (cons x xs) pf => x
| exist P nil pf =>
match (pf nil) with end
end.
*)
Gallina's type system is too simple??
We want the trusted code base to be simple enough to be trustable. The type checker.
Type inference may not work always.
match's as used to be required in earlier coq. Now it will figure it out.
as is needed if the discriminee is not a simple variable but a complex expression.
Prop    Set
   \    /
   Type₁
     |
   Type₂
     |
    ...
Proof irrelevance.
For an hd taking the first element by using a proof that the list isn't empty, as far as the code is concerned, the proof is irrelevant.
I don't care what exactly the proof is, but I care that there is a proof.
That such a term exists.
Computationally, proofs cost you some resources.
Only meant for type checking.
When the actual code runs, it is irrelevant to the final result.
Prop is a proof-irrelevant universe.
:DBT: SProp?
Prop adds to computational effort without computational gain.
Prop terms get erased when the code is extracted.
Definition hd6 {A:Type} (l:list A): (l<>nil) -> A :=
match l with
| nil => fun pf => match (pf (eq_refl nil)) with end
| cons x xs => fun _ => x
end.
(*
[refine] would've been immensely helpful to focus on one thing at a time.
*)
Require Import Extraction.
Extraction hd6.
This gives:
(** val hd6 : 'a1 list -> 'a1 **)
let hd6 = function
| Nil -> assert false (* absurd case *)
| Cons (x, _) -> x
Look, the proof terms disappeared!
The assert false indicates that hd6 is not meant to deal with empty lists.
:DBT: not just for functions, also for types
Likewise with sig (existential?? or subset type), defined like:
Inductive sig (A : Type) (P : A -> Prop) : Type :=
exist : forall x : A, P x -> {x : A | P x}
(* Subset of A where P is provable *)
(* Like a dependent tuple *)
Arguments sig [A]%type_scope _%type_scope
Arguments exist [A]%type_scope _%function_scope
sig tags a value with a proof.
Every A:Prop will be erased upon extraction.
A match on a Prop should return a Prop. So can it be done only on False??
:DBT: ex: proof in the Prop world.
Print ex.
(*
Inductive ex (A : Type) (P : A -> Prop) : Prop :=
ex_intro : forall x : A, P x -> exists y, P y
Arguments ex [A]%type_scope _%function_scope
Arguments ex_intro [A]%type_scope _%function_scope
*)
Keeping track of proof terms is gonna cost us. So erase it if it won't be used.
Prop: universe of logical propositions. proof irrelevant type. As in computationally irrelevant.
Used to check correctness at compile time. But irrelevant at run time.
A 64-bit processor has only one type: 64-bit words (ie, bits). The int, bool, etc are 'erased' by the compiler.
A match on a Prop must always return a Prop.
∵ that value will depend on a Prop which would get erased later, thereby making a computationally relevant value from a computationally irrelevant value.
:DBT: Can we patmat on Prop vals?
Inductive foo:Prop:=
| Foo1
| Foo2.
Axiom foo':foo.
Fail Compute (
match foo' with
| Foo1 => 0
| Foo2 => 1
end
).
(*
Error:
Incorrect elimination of "foo'" in the inductive type "foo":
the return type has sort "Set" while it should be
"SProp" or "Prop".
Elimination of an inductive object of sort Prop
is not allowed on a predicate in sort Set
because proofs can be eliminated only to build proofs.
*)
But this Prop Type separation also means that we often have repeat stuff.
For example:
Prop | Type |
---|---|
or (A \/ B) | sum (A + B) |
and (A /\ B) | prod (A * B) |
Π-types: the family of dependent types (named by Martin-Löf).
Eg:
forall (x:A) (B:A->Prop), B x
If we have a proof x of A, then we got a 'recipe' for the proof of B x.
A*B => 2 indices: 0 and 1.
A Π-type is like a big fat product type (ie, tuple), indexed by the value of a:A.
(This is just the intuition.)
:DBT: Will this tuple be of infinite length?
Eg:
(a,b) ≡ fun x => if x then a else b
Hence the name Π-type.
Usually written as:
Π_{x:A} (B x)
Like a database: Student, rno, mark
Stock set theory (or any set theory??) doesn't have deptypes.
In classical logic, we only need one of ∃ and ∀, and the other can be defined in terms of the other.
Similar to Π-types, there is the Σ-type. ie, 'there exists', deptype style.
A:Type
B:A -> Prop
Σ_{x:A} (B x) ≡ B a₀, B a₁, ...
(equivalent to ex in coq)
ex:
Inductive ex (A : Type) (P : A -> Prop) : Prop :=
| ex_intro : forall x : A, P x -> exists y, P y
(Don't be confused by the forall here, even when we are dealing with ex. It's like A -> P x -> exists y, P y.)
:DBT: Where did the y come from?
NB: forall is built-in. No separate inductive type for it.
An example usage:
Goal
exists x:nat, x > 2.
Proof.
refine (ex_intro _ 3 _).
constructor.
Qed.
:DBT: sigma type and sum type are same?
ex: sigma type. as if there is a constructor for each component of the disjoint sum.
Advantage of sig over ex is that we can do patmat returning non-Prop with sig.
sig A P is essentially A upon extraction. Because it is essentially a subset of A.
Equivalent types, but not equal:
{l:list A | l <> nil} and {l:list A | length l > 0}
which are both the type of non-empty lists.
:DBT: inversion might help in cases where a <> is involved.
Type | Computational relevance |
---|---|
ex | both witness and proof irrelevant |
sig | witness relevant, proof irrelevant |
sigT | both witness and proof relevant |
:DBT: Use of sigT
:DBT: sum type: variant in ocaml
Sigma vs Pi type:
{x:int, y:string}: x is the 1st field.
A big fat product = Pi type.
'Field names' come from A. ie, i:A
:DBT: Martin-Löf named it.
Similarly,
Σ_{i:A} P i ≡ P 0 + P 1 + … + P n + …, with elements like (0, p₀:P 0), (1, p₁:P 1).
Could think of the '0' in (0, p₀:P 0) as a tag/'constructor'. These are disjoint. Independent of each other.
'Constructor names' come from A. ie, i:A
:DBT: Tagged union
:INFO: exists is just a notation for ex type.
P : bool -> Type
P true = A
P false = B
forall x:bool, P x is the same as A*B
(P true = A, ie proj1; P false = B, ie proj2)
sigT bool P is the same as A + B
2 ways:
A + B
^ ^
| |
+--+ +--+
| |
A B
A * B
| |
+--+ +--+ pr2
| |
v v
A B
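A concrete Gallina sketch of this, taking A := nat and B := bool (the names P, depfun, deppair are made up here):
Definition P (b:bool) : Type := if b then nat else bool.

(* forall b:bool, P b behaves like the product nat * bool: *)
Definition depfun : forall b:bool, P b :=
  fun b => match b as b0 return P b0 with
           | true => 42
           | false => false
           end.

(* sigT P behaves like the sum nat + bool: *)
Definition deppair : sigT P := existT P true 42.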
:DBT: A+B can be made from sigT?
Inductive sumor (A : Type) (B : Prop) : Type :=
| inleft : A -> A + {B}
| inright : B -> A + {B}
curly bracket parts are props.
sumor: like the option type. sumbool: like a non-blind bool type.
You get proofs as well.
With any type with two constructors, we can use if-then-else. So we usually make the first constructor something that would be sort of 'true'. The most obvious use case of this is bool.
With one constructor taking two args, we can directly do let (a, b) = value. The most obvious use case of this is the tuple.
:DBT: sumbool and sumor are variants of the sum type. Right?
Fixpoint eqb (n m:nat) : bool :=
match n, m with
| O, O => true
| S n', S m' => eqb n' m'
| _, _ => false
end.
How to prove that this eqb is correct? We would need proof.
eq_spec1 : forall n m:nat, n=m -> eqb n m = true
eq_spec2 : forall n m:nat, n<>m -> eqb n m = false
Both should hold at the same time.
eq_dec comes with a proof, unlike bool.
eq_dec for nat is forall n m:nat, {n=m} + {n<>m}
eq_dec is correct by construction. The type system guarantees its correctness.
Extraction sumbool and you get bool.
These are the two advantages of sumbool over bool.
Definition fgHelper {A B C D:Prop} (f:A->C) (g:B->D)
(a:{A}+{B}) : {C} + {D} :=
match a with
| left a' => left (f a')
| right b' => right (g b')
end.
Lemma foo1 : forall n m:nat, n=m -> S n = S m.
Proof. auto. Qed.
Lemma foo2 : forall n m:nat, n<>m -> S n <> S m.
Proof. auto. Qed.
Fixpoint eqdec_nat (n m:nat) : {n=m} + {n<>m}.
Proof.
refine (
match n, m with
| O, O => left _
| S n', S m' => _
| _, _ => right _
end).
- auto.
- auto.
- auto.
- Check (eqdec_nat n m).
Check (foo1 n m).
Check (fgHelper (foo1 n m) (foo2 n m)).
pose (foo1 n' m') as eq1.
pose (foo2 n' m') as eq2.
pose (fgHelper eq1 eq2) as H.
Check (eqdec_nat n' m').
exact (H (eqdec_nat n' m')).
Defined.
It's possible to have a function like:
sumbool A B -> bool
Definition sumbool2bool {A B:Prop} (x:{A} + {B}): bool :=
match x with
| left _ => true
| _ => false
end.
But we can't have:
A \/ B -> bool
∵ A \/ B is a Prop.
Fail Definition or2bool {A B:Prop} (x:A\/B) : bool :=
if x then true else false.
(*
Incorrect elimination of "x" in the inductive type "or":
the return type has sort "Set" while it should be
"SProp" or "Prop".
Elimination of an inductive object of sort Prop
is not allowed on a predicate in sort Set
because proofs can be eliminated only to build proofs.
*)
Had this been allowed, we would've been taking computational info from a Prop.
—
forall x:T, B is similar to T -> B, but in the former, the type B can depend on x.
:DBT: How is elimination done here?
sumbool vs bool: boolean blindness. sumor vs option.
Check left.
Check left (B:=1>1).
Check left (le_n 1).
Check left (B:=1>1) (le_n 1).
Compute sumbool2bool (left (le_n 1)).
Check right (A:=1>1) (le_n 1).
Compute sumbool2bool (right (A:=1>1) (le_n 1)).
Definition sumbool2bool' {A B:Prop} (x:{A} + {B}): bool :=
if x then true else false.
Compute sumbool2bool' (left (le_n 1)).
Compute sumbool2bool' (right (A:=1>1) (le_n 1)).
Print or.
Fail Definition or2bool {A B:Prop} (x:A\/B) : bool :=
match x with
| or_introl _ => true
| or_intror _ => false
end.
(* pred with sumor *)
Definition pred1 (n:nat) : nat + {n=0} :=
match n with
| O => inright (eq_refl 0)
| S n' => inleft n'
end.
Print sumor.
(* If you give me none, you have to tell me why *)
Definition lsthd {A:Type} (ls:list A) : A + {ls=nil}.
Proof.
refine
match ls with
| nil => _
| cons x _ => inleft x
end.
refine (inright _).
auto.
Defined.
Compute lsthd (cons 3 nil).
Compute lsthd (@nil nat).
An anecdote:
If an organization offered a million dollars to the guy who writes a definition for the following spec:
hd {A:Type} (l:list A): option A
the guy could just make hd return None at all times and the spec is satisfied, because no proof is involved. The onus is on the user to prove that the list is indeed empty.
On the other hand, if the spec had been:
hd {A:Type} (l:list A): A + {l=nil}
the guy would have had to make hd more correct. Because if he is saying that there is no head element since the list is empty, he would also have to produce a proof saying that the list is indeed empty.
If you write a function returning a bool judging whether two nat values are the same, like:
Fixpoint eqb (a b:nat):bool :=
match a,b with
| O, O => true
| S a', S b' => eqb a' b'
| _, _ => false
end.
How would you prove that this function is correct? You would have to take this function's word for it because it offers no proof of the correctness of its judgement.
As in, if a=b, a proof saying eq a b; or if it isn't, a proof saying eq a b -> False.
Compare this with:
Fixpoint eqbb (a b:nat) : {a=b} + {a<>b}.
refine(
match a, b with
| O, O => left eq_refl
| S a', S b' => _
| _, _ => _
end).
- right.
discriminate.
- right.
discriminate.
- apply (eqbb (S a') (S b')).
Qed.
Oops… It didn't go through:
Cannot guess decreasing argument of fix.
(The recursive call eqbb (S a') (S b') is not on structurally smaller arguments.)
An article by Robert Harper: Boolean blindness
hd : forall {A:Type} (l:list A), A + {l=nil}.
is a stronger spec than
hd : forall {A:Type} (l:list A), option A
The latter could be trivially satisfied by giving None all the time.
The purpose of a spec is to rule out invalid programs. Restricting the kind of programs that are acceptable.
A class of undesirable programs has been ruled out, but not all of them.
For example, the former could still be satisfied by list.last.
A totally complete spec could turn out to be too much work.
Extraction erasure => no overhead when extracted program is run.
eqdec (n m:nat) : {n=m} + {n<>m} is correct by construction. We can't do anything else. No leeway to do anything else.
Construction itself is a proof of its correctness.
If we had done
eqb (n m:nat) : bool
we would've had to give separate proofs of soundness and completeness:
eqb_soundness : forall n m:nat, eqb n m = true -> n = m
eqb_completeness : forall n m:nat, eqb n m = false -> n <> m
Either of these alone won't do the trick.
∵ eqb_soundness could be made vacuously true if eqb always returned false (and likewise eqb_completeness, if it always returned true).
Defined => a simpl, etc. would expand it. Exploding terms. Qed => only interested in the truth value; not gonna unravel the term.
Too-large proof terms expanded => could be computationally inefficient if we are just bothered about the truth value.
Opaque | Transparent |
---|---|
Computational content doesn't matter | Computational content matters |
General advice:
Don't use the proof mode like a sledgehammer.
Definition hd0 : forall {A:Type} (l:list A), option A := fun A l =>
match l with
| nil => None
| cons x _ => Some x
end.
Lemma hd0_completeness: forall {A:Type} (l:list A),
l=nil -> hd0 l = None.
Proof.
induction l.
- auto.
- intros H.
rewrite H.
auto.
Qed.
Lemma hd0_soundness: forall {A:Type} (x:A) (l:list A),
hd0 (x :: l) = Some x.
Proof. auto. Qed.
(* a stronger guarantee *)
Definition hd1 : forall {A:Type} (l:list A), A + {l=nil}.
Proof.
intros A l.
refine
match l with
| cons x _ => inleft x
| nil => inright eq_refl
end.
Defined.
Definition pred0 (n:nat) : option nat :=
match n with
| O => None
| S n' => Some n'
end.
Lemma pred0_soundness: forall n:nat,
S n > 0 -> pred0 (S n) = Some n.
Proof. auto. Qed.
Lemma pred0_completeness: forall n:nat,
n = 0 -> pred0 n = None.
Proof.
intros n H.
rewrite H.
auto.
Qed.
Definition pred1 (n:nat) : nat + {n=0} :=
match n with
| O => inright eq_refl
| S n' => inleft n'
end.
Print hd1.
(*
hd1 =
fun (A : Type) (l : list A) =>
match l as l0 return (A + {l0 = nil}) with
| nil => inright eq_refl
| (x :: l0)%list => inleft x
end
: forall (A : Type) (l : list A), A + {l = nil}
Arguments hd1 {A}%type_scope _%list_scope
*)
https://coq.inria.fr/refman/language/core/sections.html
Sectioning and module system are part of the vernacular language. Gallina is quite simple (∵ type checker better be simple).
All this code eventually boils down to Gallina.
:DBT: Difference between Parameter and Variable.
Few advantages:
(** docstring *)
Section map.
Variable A B:Type.
Variable f:A->B.
Fixpoint map (l: list A) : list B :=
match l with
| nil => nil
| cons x xs => cons (f x) (map xs) (* Didn't need to specify f here *)
end.
Check map.
(*
map
: list A -> list B
*)
End map.
Check map.
(*
map
: forall A B : Type, (A -> B) -> list A -> list B
*)
Context is more capable than the other two.
Section mapcontext.
Context {A B:Type} (f:A->B).
Fixpoint map (l: list A) : list B :=
match l with
| nil => nil
| cons x xs => cons (f x) (map xs) (* Didn't need to specify f here *)
end.
Check map.
(*
map
: list A -> list B
*)
End mapcontext.
Check map.
(*
map
: (?A -> ?B) -> list ?A -> list ?B
where
?A : [ |- Type]
?B : [ |- Type]
*)
We cannot make l a section variable as well, because it changes as map recurses. On the other hand, f remains constant.
:DBT: This. Is l like an index??
Set up the context to say multiple things about the same objects.
Only the section arguments which were actually used would become part of definition.
Definition singleton (a:A) := cons a nil.
Check singleton.
(*
singleton
: A -> list A
*)
End map.
Check singleton.
(*
singleton
: forall A : Type, A -> list A
*)
'Typed version' of the stack machine that we had done before.
Expressions:
NConst
BConst
App
If e e1 e2: e:𝔹 and e1, e2: same type
Section doesn't always enhance readability. Sometimes it's difficult to figure what are the arguments needed by a function without doing a check on it.
Typed version of the toy verified compiler for a stack machine.
We need to restrict our expr to only well-typed expressions.
Inductive type : Set := Nat | Bool.
Definition typeDenote (t:type) : Type :=
match t with
| Nat => nat
| Bool => bool
end.
Inductive binop : type -> type -> type -> Set :=
| Plus: binop Nat Nat Nat
| Mult: binop Nat Nat Nat
| And: binop Bool Bool Bool
| Eq: forall t:type, binop t t Bool. (* 2 things of same type *)
Search (bool -> bool -> bool).
Definition booleq (a b:bool) : bool := negb (xorb a b).
Definition binopDenote {t1 t2 t:type} (b:binop t1 t2 t)
: (typeDenote t1) -> (typeDenote t2) -> (typeDenote t) :=
match b with
| Plus => plus
| Mult => mult
| And => andb
| Eq Nat => Nat.eqb
| Eq Bool => booleq
end.
Inductive exp : type -> Set :=
| NExp : nat -> exp Nat
| BExp : bool -> exp Bool
| Binop : forall t t':type,
binop t t t' -> exp t -> exp t -> exp t'
| If: forall t:type, exp Bool -> exp t -> exp t -> exp t.
Fixpoint expDenote {t:type} (e:exp t) : typeDenote t :=
match e with
| NExp e' => e'
| BExp e' => e'
| Binop _ _ b e1 e2 =>
(binopDenote b) (expDenote e1) (expDenote e2)
| If _ cond e1 e2 =>
if (expDenote cond) then (expDenote e1)
else (expDenote e2)
end.
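A couple of sanity checks (these assume the corrected constructors above):
Compute expDenote (If Nat (BExp true) (NExp 1) (NExp 2)).
(* = 1 : typeDenote Nat *)
Compute expDenote (Binop Nat Bool (Eq Nat) (NExp 2) (NExp 2)).
(* = true : typeDenote Bool *)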
Expression is correct by construction. Well typed by construction.
Constraints are captured as additional parameters to the expression type.
Arguments Binop {t t'}.
Check Binop.
(*
Binop
: binop ?t ?t ?t' -> exp ?t -> exp ?t -> exp ?t'
where
?t : [ |- type]
?t' : [ |- type]
*)
Arguments Binop [t t'].
Check Binop.
(*
Binop
: forall t t' : type,
binop t t t' -> exp t -> exp t -> exp t'
*)
match expr as y0
in T x1 x2 ... xn -- a chance for a 1st-order unification problem
return <expr involving x1 x2 ... y0>
with
In our case,
match b in binop t1 t2 t
return typeDenote t1 -> typeDenote t2 -> typeDenote t
Higher-order unification problem: 2 terms, got to make them of the same type.
:DBT: So there are cases where the higher-order unification problem is decidable? An example?
Unification variables.
Unification constraints are generated. Generally unsolvable. So give some hints to Gallina by means of the generalized match.
Solve for a, b, c, which change in each branch.
TODO: read one rule of patt matching in CPDT
Unification problems:
unif algo of gallina will always be incomplete
f(x1, y1, g(y2,y3)) = f(h(a), y1, g(y2,y3)) : 1st-order unification ∵ function symbols (f and g) are fixed.
f(x1, y1, g(y2,y3)) = g(x1, y1, g(y2,y3)) : 2nd-order unification ∵ equivalence of 2 functions has to be figured out.
Shallow embedding: 'reuse' type of meta language (Gallina) itself.
Has pros and cons.
Inductive exp : Type -> Type :=
| const : forall (T:Type) (t:T), exp T.
Check const nat 3.
Essentially a product type.
Look at vectors, which are like length delimited lists:
Inductive t (A:Type):nat -> Type :=
| nil : t A 0
| cons : forall n:nat, A -> t A n -> t A (S n).
Instead of length, we are interested in the types of the individual elements.
hlist will be parameterised by a list of types.
:DBT: Ltac tactic capable of accepting args: ANS: Yes!
No need of option stack
output as in the case of untyped stack machine. Because the types guarantee the correctness.
Rules out erroneous programs. Errors like underflow, etc cannot happen.
It will never execute if the input stack is of wrong type.
Require Import Bool. (* For Bool.eqb *)
Require Import List.
Import ListNotations.
(** * Source language *)
Inductive type : Set := Nat | Bool.
Definition typeDenote (t:type) : Type :=
match t with
| Nat => nat
| Bool => bool
end.
Inductive binop : type -> type -> type -> Set :=
| Plus: binop Nat Nat Nat
| Mult: binop Nat Nat Nat
| And: binop Bool Bool Bool
| Eq: forall t:type, binop t t Bool. (* 2 things of same type *)
Definition binopDenote {t1 t2 t:type} (b:binop t1 t2 t)
: (typeDenote t1) -> (typeDenote t2) -> (typeDenote t) :=
match b with
| Plus => plus
| Mult => mult
| And => andb
| Eq Nat => Nat.eqb
| Eq Bool => Bool.eqb
end.
(* Stack of types *)
Definition tstack := list type.
Inductive instr : tstack -> tstack -> Type :=
| push: forall (ts:tstack) (t:type), typeDenote t -> instr ts (t::ts)
| exec: forall (ts:tstack) (t1 t2 t:type),
binop t1 t2 t -> instr (t1::t2::ts) (t::ts).
(* push was also sketched with implicit args:
push {ts:tstack} {t:type} : typeDenote t -> instr ts (t::ts)%list *)
Proof by reflection. A family of techniques.
Reflection as in 'reflect about your thought process'. 'Look into yourself'. As in introspection.
We have some domain A:Type, And some operations on A, and some properties about these operations We wanna prove some theorems about these.
A typical example:
a = b
where a and b are expression over type A. Involves operations on A as well.
ring tactic uses proof by reflection under the hood.
Require Import Arith.
Goal forall a b:nat,
(a+b)*(a+b) = (a*a)+2*a*b+(b*b).
Proof.
intros.
ring.
Show Proof.
Restart.
intros.
(* ring_simplify takes a polynomial expression and expands it. ie, it normalizes the polynomial. *)
ring_simplify ((a+b)*(a+b)).
Abort.
Compute 2^3. (* 8 *)
Lemma a2: forall a:nat, a + a = 2*a.
Proof.
intro a.
induction a.
- auto.
- simpl.
SearchRewrite (_ + 0).
rewrite <- plus_n_O.
auto.
Qed.
A ring is (R, +, *, 0, 1)
For the (a+b)² = a²+2ab+b² example, the proof is straightforward, but tedious.
Goal forall a b:nat,
(a+b)*(a+b) = (a*a)+2*a*b+(b*b).
Proof.
intros a b.
SearchRewrite (_ * (_ + _)).
rewrite Nat.mul_add_distr_r.
rewrite Nat.mul_add_distr_l.
rewrite Nat.mul_add_distr_l.
SearchRewrite ((_ * _) = (_ * _)).
SearchRewrite (_ * _ = (_ * _)).
SearchRewrite (_ * _).
rewrite (Nat.mul_comm b a).
SearchRewrite ((_ + _) + _).
rewrite <- (plus_assoc_reverse).
SearchRewrite (_ + _ = 2 * _).
SearchRewrite (_ + _).
SearchRewrite (2 * _).
f_equal.
rewrite <- (a2 (a*b)).
simpl.
Abort.
Automation is a reason why we go after proof by reflection.
Design an inductive type that captures the expressions on which you need to prove the theorem.
(The ring tactic can deal with multi-variate polynomials.)
              reify
  +------------->---------------+
  |                             |
Original                 Other type (concrete repr)
  |                             |
  +-------------<---------------+
              denote
We'll reflect on this 'other type'.
Typically, the reification is done by a tactic.
Reify:
Inductive Poly (R:Type) : Type :=
| Zero: Poly R
| One: Poly R
| Const: R -> Poly R
| Plus: Poly R -> Poly R -> Poly R
...
(a+b)²
could get reified as:
Mul
(Plus (Const a) (Const b))
(Plus (Const a) (Const b))
Second component: norm.
Often a tactic? A normalization procedure is involved.
Eg: (a+b)² = a² + 2ab + b²
Only then can we find whether two expressions are equivalent.
Third component is a function that gives semantics. A denote fn. This does the unreification. Usually a Gallina function.
Fourth step: prove correctness of normalization.
ie,
∀p:Poly R, denote p = denote (norm p)
(The norm function could've been the id fn and the semantic correctness lemma would still be satisfied. But that's not useful.)
norm and the domain: there may be multiple representations of the same polynomial, forming equivalence classes. The norm function should ideally map all elements in the same equivalence class to the same point (ie, the same normal form).
Finally, after proving everything, we need to unreify. This has to be a tactic.
Ltac ring :=
match goal with
| ?x = ?y =>
(* Just for illustration purposes. reify is not a fn, but a tactic *)
assert x = denote (reify x)
assert y = denote (reify y)
(* The above 2 claims can be proven by computation *)
(* Now we could rewrite as: *)
denote (reify x) = denote (reify y)
(* From the semantic correctness lemma, we get *)
denote (norm (reify x)) = denote (norm (reify y))
(* Note that [reify x] is the actual reified term *)
(* If x and y were equivalent, LHS and RHS would end up being the same by now, making the proof trivial. *)
end.
:DBT: ✓ Reification is usually done by ltac. Is there another way? Yeah, ocaml plugins. (It just got to be at the 'meta-level', where it can look at the terms of Gallina itself.) ring probably uses it.
Proof by reflection means reify and reflect on the 'concrete form'.
Advantages:
Crucial role played by computation.
:HW: cpdt tautology simplifier.
An example use of Proof by reflection.
Inductive even : nat -> Prop :=
| evenO: even 0
| evenS: forall n:nat, even n -> even (S (S n)).
Check evenO. (* 0 is even *)
Check evenS 0 evenO. (* 2 is even *)
Ltac crush := repeat constructor.
Goal
even 50.
Proof.
crush.
Show Proof.
(* LARGE proof term!! *)
Qed.
For bool, it is an (efficient) decision procedure, which is much simpler than the huge proof terms. Because computation is involved.
Fixpoint evenb' (n:nat) : bool :=
match n with
| O => true
| S (S n') => evenb' n'
| _ => false
end.
Fixpoint evenb (n:nat) : bool :=
match n with
| O => true
| S n' => negb (evenb n')
end.
Compute evenb 31.
Compute evenb 30.
How do we connect this efficient decision procedure to the proof?
We can, using proof by reflection.
In ssreflect, there's reflect.
reflect P b says that the Prop P is equivalent to b, which is the decision procedure.
Axiom evenb_even: forall n:nat, evenb n = true -> even n.
Goal
even 256.
Proof.
apply evenb_even.
now simpl.
Show Proof.
(*
(evenb_even 256 eq_refl)
*)
(* Simpler proof term! *)
Qed.
Note the importance of making evenb_even opaque. If it was made with Defined, the simpl would've done the old thing and made a giant proof term.
Lemma evenb_even1: forall n:nat, evenb n = true -> even n.
Proof.
intros n H.
induction n.
- exact evenO.
- induction n.
+ pose proof (eq_refl (evenb 1) : eq (evenb 1) false) as H'.
discriminate.
+
(* We need: evenb (S n) = true *)
Abort.
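The attempt above gets stuck because the induction hypothesis says nothing about S n. A standard fix is to strengthen the statement so the IH speaks about the successor case too; a sketch (the names evenb_even' and evenb_even_ok are made up):
Lemma evenb_even' : forall n:nat,
    (evenb n = true -> even n) /\ (evenb n = false -> even (S n)).
Proof.
  induction n as [|n' [IH1 IH2]]; simpl.
  - split.
    + intros _. exact evenO.
    + intros H. discriminate H.
  - (* evenb (S n') = negb (evenb n'); case on evenb n' *)
    destruct (evenb n') eqn:E; simpl; split.
    + intros H. discriminate H.
    + intros _. exact (evenS n' (IH1 E)).
    + intros _. exact (IH2 E).
    + intros H. discriminate H.
Qed.

Lemma evenb_even_ok : forall n:nat, evenb n = true -> even n.
Proof. intro n. exact (proj1 (evenb_even' n)). Qed.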
Reification is the opposite of abstraction.
The abstract object we got is a formula like True \/ True, which is a meta-language object. So we cannot use a Gallina function to do the reification. But we can, using ltac.
ltac is a typeless language.
ltac has constr:expr because it needs to look at terms while also being able to construct terms.
Let's try the tautology prover example from CPDT:
(* Tautology prover *)
Goal
True /\ True.
Proof.
split; trivial. (* Or just [repeat constructor.] *)
Qed.
Goal
(True /\ True)
-> (True \/ (True /\ (True -> True))).
Proof.
tauto.
Show Proof.
(*
(fun H : True /\ True =>
and_ind (fun _ _ : True => or_introl I) H)
*)
Qed.
(* Datatype into which we will reify *)
Inductive taut : Set :=
| TTrue: taut
| TAnd: taut -> taut -> taut
| TOr: taut -> taut -> taut
| TImpl: taut -> taut -> taut.
(* Semantics *)
Fixpoint tautDenote (t:taut): Prop :=
match t with
| TTrue => True
| TAnd t1 t2 => (tautDenote t1) /\ (tautDenote t2)
| TOr t1 t2 => (tautDenote t1) \/ (tautDenote t2)
| TImpl t1 t2 => (tautDenote t1) -> (tautDenote t2)
end.
(* Correctness of semantics *)
Lemma taut_lm : forall t:taut, tautDenote t.
Proof.
intro t.
destruct t.
- simpl.
auto.
- simpl.
split.
+ simpl.
(* No, we need induction *)
Restart.
intro t.
induction t.
- now simpl.
- now split.
- simpl; now left.
- now simpl.
(* Or as easy as [intuition] ?? Not sure. *)
Qed.
Ltac reify P :=
match P with
| True => TTrue
| ?t1 /\ ?t2 =>
let rt1 := reify t1 in
let rt2 := reify t2 in
constr:(TAnd rt1 rt2)
(* If the [constr] wasn't here, ltac would've thought the TAnd to be a tactic. *)
| ?t1 \/ ?t2 =>
let rt1 := reify t1 in
let rt2 := reify t2 in
constr:(TOr rt1 rt2)
| ?t1 -> ?t2 =>
let rt1 := reify t1 in
let rt2 := reify t2 in
constr:(TImpl rt1 rt2)
end.
(* Got to use let reify. It won't work otherwise. Not really like Gallina fixpoints. *)
(* The one that we're gonna use. *)
Ltac tautcrush :=
match goal with
| [ |- ?P ] =>
let rp := reify P in
(* [tautDenote rp ≡ P] *)
exact (taut_lm rp)
(* If what we were after was an equality proof, we may have had to use assert here. *)
end.
Goal
(True /\ True)
-> (True \/ (True /\ (True -> True))).
Proof.
intro t.
tautcrush.
Qed.
                     reify
               +--------->------+
               |                |
+--------------+------------+    +------+
|     non      |            |    |      |
| tautologies  | tautologies|    | taut |
|              |            |    |      |
+--------------+------------+    +------+
             Prop
We can't distinguish between 2 + 5 and 7 in Gallina. Both will look like 7.
Hence, Gallina cannot examine Gallina terms as such.
We need something at the meta-level.
Viz, ltac or ocaml plugin.
Pure Gallina won't make it.
Metacoq would work as well. We basically just need a way to examine at the meta-level. A way to construct the syntax of coq itself.
Another proof by reflection example. Similar to ring tactic, but we don't need to resort to ocaml.
Reiterating some of the advantages of proof by reflection:
- proof terms simpler
- tactics faster
A monoid is a set A, together with a binary operation * on A such that:
- * is associative
- there is an identity element e ∈ A (aka 1), such that e * a = a * e = a
Essentially, a monoid is like a group without the need for an inverse element.
Equivalence of left and right identity lemmas:
If we got left and right identity lemmas, we can prove that the left and right identities are the same element.
iₗ * iᵣ = iᵣ (by the left identity lemma)
iₗ * iᵣ = iₗ (by the right identity lemma)
∴ iₗ = iᵣ
Examples of monoids: (ℕ, +, 0), (ℕ, *, 1), (list A, ++, []).
Require Import List.
Import ListNotations.
(*
What we aim at: equivalence by means of associativity.
Like:
((a * b) * (c * d)) * e
being same as
a * ((b * c) * (d * e))
*)
Section monoid.
Context (A:Type).
Context (op: A->A->A).
Context (e:A).
Hypothesis op_assoc: forall (a b c:A),
op (op a b) c = op a (op b c).
Hypothesis id_l: forall a:A,
op e a = a.
Hypothesis id_r: forall a:A,
op a e = a.
(* AST corresponding to the terms we want our
reflection-based tactic to handle *)
Inductive mexp : Type :=
| Idty: mexp
| Const : A -> mexp
| Op: mexp -> mexp -> mexp.
Ltac reify P :=
match P with
| op ?a ?b =>
let ra := reify a in
let rb := reify b in
constr:(Op ra rb)
| e => Idty
(* | ?x => constr:(Const x) *)
| _ => constr:(Const P)
(* Anything else, including things like f(g(x)),
would become const. ie, anything that it doesn't
recognize. *)
end.
(* Normalization function could be a flatten; list A would be the normalized form.
We do comparison on the normalized form.
e1 = e2 if norm(e1) = norm(e2)
*)
(* Normalization has got to be done as well. *)
Fixpoint mexpDenote (m:mexp) : A :=
match m with
| Idty => e
| Op a b => op (mexpDenote a) (mexpDenote b)
| Const c => c
end.
Definition mconcat' (l:list A) : A :=
fold_left op l e.
Fixpoint mconcat (l:list A) : A :=
match l with
| [] => e
| (x::xs) => op x (mconcat xs)
end.
(* Normalization function *)
(* 1 * a = a, but we are not bothering with it to keep things
simple. *)
Fixpoint flatten (m:mexp) : list A :=
match m with
| Idty => []
| Op a b => (flatten a) ++ (flatten b)
| Const c => [c]
end.
Lemma concatSingle : forall a:A, mconcat [a] = a.
Proof using All.
intro a.
unfold mconcat.
simpl.
auto.
Qed.
Lemma foldLemma : forall (a:A) (l1 l2:list A),
fold_left op (l1 ++ a :: l2) e =
op (fold_left op l1 e) (fold_left op (a :: l2) e).
Proof using A op e id_l.
intros a l1 l2.
induction l1.
- simpl.
rewrite id_l.
rewrite id_l; trivial.
- simpl.
rewrite id_l.
rewrite id_l.
rewrite fold_left_app.
induction l2.
-- now simpl.
-- simpl.
unfold fold_left.
SearchRewrite fold_left.
Admitted.
(* Might need some lemma regarding List.fold as well *)
(* only introduce l1. Will need assoc *)
(* associativity of op has to be used somewhere if this is to be proven *)
Lemma mconcatLemma :
forall l1 l2:list A,
mconcat (l1 ++ l2) = op (mconcat l1) (mconcat l2).
Proof using op_assoc id_l.
intros l1 l2.
induction l1 as [|x xs IH]; simpl.
- now rewrite id_l.
- rewrite IH. now rewrite op_assoc.
Qed.
(*
op (mconcat (flatten m1)) (mconcat (flatten m2)) =
mconcat (flatten m1 ++ flatten m2)
*)
(* Normalization lemma *)
Theorem normLemma : forall m:mexp,
mexpDenote m = mconcat (flatten m).
Proof using All (*op e id_l id_r*).
intros m.
induction m.
- simpl.
compute; trivial.
- compute.
eauto.
- unfold mexpDenote.
fold mexpDenote.
rewrite IHm1.
rewrite IHm2.
simpl.
now rewrite mconcatLemma.
Qed.
(* takes an exp, reifies it and asserts that the denote(reified exp) = exp *)
Ltac reify_rewrite exp :=
let reif := reify exp in
let H := fresh "H" in
assert (H: exp = mexpDenote reif)
by (compute; trivial);
rewrite H;
rewrite (normLemma reif);
simpl.
(* Change e1 = e2 goal to
mexpDenote e1 = mexpDenote e2 *)
Ltac monoid_crush :=
match goal with
| [ |- ?e1 = ?e2 ] =>
reify_rewrite e1;
reify_rewrite e2
end.
(*
If we got to prove that
e1 = e2
Then,
re1 = reify e1
re2 = reify e2
assert denote re1 = denote re2
Using the flatten correctness lemma, which is
denote m = mconcat (flatten m)
we can turn
denote re1 = denote re2
into
mconcat (flatten re1) = mconcat (flatten re2)
With f_equal,
flatten re1 = flatten re2
denote and reify have been written such that the proof is by computation.
*)
Variable x y z:A.
Check op e e.
Example src1 := (op x y).
Example rei1 := (Op (Const x) (Const y)).
Compute mexpDenote rei1.
Compute flatten (Op (Const x) (Const y)).
Compute mconcat [x; y].
Goal
op x (op y z) = op (op x y) z.
Proof using All.
now monoid_crush.
(* Outdated comment: f_equal at this point finishes the proof. *)
Qed.
(* mconcat $ flatten is our normalization function in this monoid example *)
Goal
op x (op x z) = op (op x y) z.
Proof using All.
Fail now monoid_crush.
Abort.
End monoid.
Monoid expression example continued..
Again, we look at this because:
Ring is more complex than monoid, primarily because of commutativity. The normalization function (flatten in the case of monoid) would become more complex then.
Normalization function: Map all elements of the same equivalence class to a same value.
Equivalence classes that capture when two expressions are equivalent.
The properties of monoids mean that we can prove stuff like:
2 * (3 * 4) = (2 * 3) * 4
because it follows from associativity.
But it can't show that in (N, +, 0)
2 + 2 = 4
because monoid theorems say nothing like that.
Except, of course, for the identity element like
2 + e = 2
Because while the monoid is still general, that's all we can say.
But when we know that the set is ℕ, we have more info.
What have previously been distinct, separate equivalence classes may now turn out to be the same (sort of merged together).
The equivalence classes of 2 + 2 and 4 are the same if the set is ℕ.
The normalization function 'preserves' the equivalence classes: all elements in the same equivalence class map to the same normal form.
Identity is essentially a form of normalization, but not useful because it doesn't add anything. That's okay, because it doesn't lead to incorrect proofs; we just won't be able to prove anything useful.
How good the tactic is, is determined by how capable the normalization function is.
If the normalization is capable of reducing (a + b)² to a² + 2ab + b², the tactic can prove the corresponding equalities.
Commutative monoid: Commutativity complicates things, because the number of possibilities blows up: n! ways. The normalization has to take these into account if it's to be capable.
Okay, coming back to our monoid tactic.
Eg:
e1 = x.(y.z)
e2 = (x.y).z
(*
x, y, z
x.y.z.1
*)
re1 = reify e1
    = Op (Const x) (Op (Const y) (Const z))
re2 = reify e2
    = Op (Op (Const x) (Const y)) (Const z)
denote re1 = denote re2
(* This is 'proof by computation' *)
That's the monoid example's proof, in short.
New topic: variable binding (free vs bound variables).
Eg:
∀x, P(x, y)
where x is bound and y is free.
Another eg:
Φ(y) = ∫ x·y dx
y is free, x is bound.
Yet another eg:
let x = 3 in
x + y
where x is bound, y is free.
That means we need a way to handle bindings.
Syntax. Just syntax. No semantics at this point.
e = x (variable)
| e1 e2 (e1 applied on e2)
| λx.e (function mapping x to e)
Set of free variables in lambda calculus:
FV(x) = {x}
FV(e1 e2) = FV(e1) ∪ FV(e2)
FV(λx.e) = FV(e) \ {x}
Substitution and binding are two sides of the same coin.
Substitution:
Replace all free occurrences of x in e with t.
e[x/t]
(This is just a notation)
x[x/t] = t
y[x/t] = y (if y≠x)
(e1 e2)[x/t] = e1[x/t] e2[x/t]
(λx.e)[x/t] = λx.e
(λx.e)[y/t] = λx.(e[y/t]) (if y≠x and x ∉ FV(t); otherwise α-convert x first)
Variable names don't matter; it's the binding structure that matters.
Name conflicts are a bother. Make sure that the binding structure never changes.
Free variable capture problem: free variables accidentally becoming bound variables.
λx. (x (λt. x t))
Problems like this are why substitution, despite looking quite simple, can get incredibly complex to implement.
α-conversion (or α-reduction): change the name of variable. To avoid free variable capture.
β-reduction
Upon seeing an 'abstraction application', we can replace all occurrences of the variable bound by that abstraction with the term being applied.
(λx.e) M ≡ e[x/M]
This is what gives meaning to the entire lambda calculus.
Computation happens via β-reduction. ie, by this substitution.
Another problem:
(λx. (λt. x)) t
Binding changes on this application.
So α-convert.
(λx. (λy. x)) t
Now β-reduce.
(λy. t)
So yeah, substitution is tricky to implement.
We can't just blindly substitute. Easy to get it wrong while implementing a lambda calculus interpreter.
Binding is always local.
We're gonna look at 3 representations.
Let's have variables represented with nat.
Inductive term : Type :=
| Var: nat -> term
| App: term -> term -> term
| Abs: nat -> term -> term.
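A quick sketch (my addition): λx.x and λx.λy.x in this representation, with nats as variable names (not de Bruijn indices), plus the FV function from above.
Require Import List Arith.
Import ListNotations.

Example id_term : term := Abs 0 (Var 0).         (* λx. x *)
Example k_term : term := Abs 0 (Abs 1 (Var 0)).  (* λx. λy. x *)

(* FV, following the equations above; lists stand in for sets. *)
Fixpoint fv (e:term) : list nat :=
  match e with
  | Var x => [x]
  | App e1 e2 => fv e1 ++ fv e2
  | Abs x e' => remove Nat.eq_dec x (fv e')
  end.

Compute fv (App (Var 2) id_term). (* = [2]; the bound 0 is removed *)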
First order representation of (untyped) lambda calculus continued..
First order because variables are plain data (here, nats used as names), not meta-language functions.
Advantage: simple; open terms pose no problem.
Disadvantage: substitution must avoid variable capture, so α-conversion is needed.
α-conversion changes bound variable without changing the meaning of the term (ie, the binding structure).
We are currently interested in variable bindings.
Use of de-Bruijn indices can avoid the need for α-conversion.
Eg: λx. x (λy.y x)
is λ.1 (λ.1 2)
de-Bruijn index means: the number of enclosing binders between a variable occurrence and the λ that binds it.
Disadvantage: hard for humans to read; indices must be shifted around during substitution.
:HW: Typed version of de-Bruijn in CPDT
:DBT: de-Bruijn indicing starts from 0 or 1 usually. :DBT: de-Bruijn indicing when there are free variables.
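Sketch of the de Bruijn variant (my addition): the binder carries no name. Wrapped in a module only to avoid clashing with the [term] above.
Module DeBruijn.
  Inductive term : Type :=
  | Var: nat -> term
  | App: term -> term -> term
  | Abs: term -> term.

  (* λx. x (λy. y x) ≡ λ.1 (λ.1 2), taking indices to start at 1 as above *)
  Example ex1 : term := Abs (App (Var 1) (Abs (App (Var 1) (Var 2)))).
End DeBruijn.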
Higher Order Abstract Syntax.
Can't be done in Coq because it cannot represent non-terminating terms??
Haskell can do it though.
λx.e means a function mapping x to e.
Use the meta-language function to represent this => HOAS. Makes things much simpler.
Inductive expr : Type :=
| App: expr -> expr -> expr
| Abs: (expr -> expr) -> expr.
(* | |
+------------+
|
V
Gallina function!
λx. λy. (x y) would be like
Abs (fun x => Abs (fun y => App x y))
*)
We are accessing a meta language construct from an object language construct.
(But this inductive type can't actually be written in Gallina:)
Error: Non strictly positive occurrence of
"expr" in "(expr -> expr) -> expr".
positive occurrence
|
(expr -> expr) -> expr
|
negative occurrence
Negative occurrence => it's not clear how you get the starting element of the inductive type.
So it's not well formed.
Non-terminating => proof can be done by infinitely deferring things.
—
Fix a type T. Define
F A = T -> A
Let
g: A -> B
then we can lift (uniformly)
F A -> F B
—
      g
A ------> B

      g*
F A ------> F B
because
g*: λt2a. λt. g (t2a t)
g* = fun t2a => g ∘ t2a
—
F is covariant: change A and F A varies the same way. A covariant functor.
Positive.
On the other hand,
H A = A -> T
is contravariant ∵ given h: A -> B, we can get
h*: H B -> H A
because
      h
A ------> B

      h*
H B ------> H A
h* = fun b2t => b2t ∘ h (b2t : B -> T composed with h : A -> B gives A -> T)
—
Negative.
In (expr -> expr) -> expr, the same type expr occupying the contravariant position was the problem.
:DBT: contra of contra becomes co??
Common in math: a vector space and its dual?? V -> V*
—
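The two liftings as a Gallina sketch (names F, H, fmap, contramap are mine):
Section Variance.
  Variable T : Type.

  Definition F (A:Type) := T -> A. (* A in positive position *)
  Definition H (A:Type) := A -> T. (* A in negative position *)

  (* Covariant: g : A -> B lifts forwards. *)
  Definition fmap {A B:Type} (g: A -> B) (t2a: F A) : F B :=
    fun t => g (t2a t).

  (* Contravariant: h : A -> B lifts backwards, H B -> H A. *)
  Definition contramap {A B:Type} (h: A -> B) (b2t: H B) : H A :=
    fun a => b2t (h a).
End Variance.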
Advantage of HOAS:
:DBT: What about open terms?
No need of α-conversion. β-reduction is simple and efficient, because it's all meta-language functions.
All is good, except for one problem. What problem?? Exotic terms.
Like in the case of de-Bruijn indices, no straightforward way to represent free variables.
β-reduction could be like (hypothetical; recall Coq rejects [expr] itself):
Fixpoint beta (e:expr): expr :=
  match e with
  | Abs f => e (* Can't do anything more here *)
  | App e1 e2 =>
      let e1p := beta e1 in
      let e2p := beta e2 in
      match e1p with
      | Abs g => g e2p (* meta-level application does the substitution *)
      | App _ _ => App e1p e2p
      end
  end.
Efficient because we use the function call mechanism of the meta-language.
Only closed terms.
| Combinator | Lambda term            | Meaning     |
|------------|------------------------|-------------|
| K          | λx. λy. x              | constant fn |
| S          | λx. λy. λz. x z (y z)  |             |
| I          | λx. x                  | identity fn |
For example, S
of SKI combinator calculus could be expressed in HOAS as:
Abs (fun x =>
  Abs (fun y =>
    Abs (fun z =>
      App
        (App x z)
        (App y z))))
We use lambda of meta lang to capture lambda of object lang.
HOAS uses metalanguage's abstraction to have object language's abstraction. Substitution simplified. No explicit name => no name collisions. Fast/efficient.
Exotic terms. Abs: (E -> E) -> E
We meant it for just textual substitution, but the function can be arbitrary. It can do anything, ∵ it can inspect its argument of type E. It can inspect the x in
λx. x ≡
fun x:E => x
    ^
    |
    Can inspect this x
Should not look at the argument x.
The (E -> E) can capture more things than what we want, including things that are not in lambda calculus.
Inspecting the arg => depending on the value it can do different things, which is no longer simple textual replacement.
Fully abstract => bijection?? HOAS => not full abstraction and we lose associated proofs??
Eg: β-reduction could be made an infinite loop.
HOAS can still represent all closed lambda calculus terms.
PHOAS doesn't have these problems.
Parametric Higher Order Abstract Syntax.
Remember:
Exotic terms arose in HOAS because we could inspect the value.
Abs (fun x =>
match x with
| ..
end) tm
PHOAS doesn't allow that.
Why do we say that the only value of the type
∀A:Type, A -> A
is the identity function?
Because we don't know what A is: we can't inspect the A value. It's abstract as far as the function is concerned.
Inductive term (V:Type) : Type :=
| Var: V -> term V
| App: term V -> term V -> term V
| Abs: (V -> term V) -> term V.
Expression parametrized by a 'variable type'.
id {V:Type} = Abs (fun x:V => Var x)
----
Constant function
λx. λy. x is
Abs (fun x:V =>
  Abs (fun y:V =>
    Var x))
The parameter is what makes PHOAS different from HOAS:
                     |
                     v
Definition id := fun V:Type =>
  Abs (fun x:V => Var x)
We can't inspect x. Because we don't know of what type it is. This eliminates the exotic term problem of HOAS.
Inductive type: Set :=
| Bool: type
| Nat: type
| Arrow: type -> type -> type.
Inductive term (V:type -> Type) : type -> Type :=
| Var: forall t:type, V t -> term V t
| App: forall t1 t2:type, term V (Arrow t1 t2) -> term V t1 -> term V t2
| Abs: forall t1 t2:type, (V t1 -> term V t2) -> term V (Arrow t1 t2).
Choose V carefully, depending on what you want to do with the terms. A thoughtful choice of V can make the work a lot more enjoyable.
V=nat: an easy way to find the number of variables in an expression??
PHOAS can be converted to LC. So if not anything, whatever you can do in LC you can do in PHOAS. Usually more.
Semantics => we are just giving a meaning.
For example, consider non-Euclidean geometry. Suppose all axioms of Euclidean geometry except the parallel axiom hold.
Parallel axiom: given a line and a point not on it, there is exactly one line through that point parallel to the given line.
Consider a spherical surface. A sphere.
Suppose 'lines' are great circles: the other axioms still hold, but no two lines are parallel. Same syntax, different semantics.
(*
object lang: STLC
meta lang: Gallina
*)
(* Object language type system, reified in meta language *)
Inductive type: Set :=
| Nat: type
| Arrow: type -> type -> type.
(* Gives meaning to types in object language as types in the meta lang
*)
Fixpoint typeDenote (t:type): Type :=
match t with
| Nat => nat
| Arrow t1 t2 => (typeDenote t1) -> (typeDenote t2)
end.
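For example (my addition):
Compute typeDenote (Arrow Nat (Arrow Nat Nat)). (* = nat -> nat -> nat *)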
(*
UTLC => there are no types. ie, every variable is of 'the same type'.
STLC => every variable is associated with a type
ie, variables are parameterized over [type].
Kinda like
var = type -> Type
[var Nat] is the type family of variables of type [Nat].
UTLC?? terms in PHOAS could've been like
Inductive term (V:Type) : Type :=
| Var: V -> term V
| App: term V -> term V -> term V
| Abs: (V -> term V) -> term V.
Here, [V] is the family of types.
V could've been string => a form of first order repr of UTLC:
Inductive term : Type :=
| Var: string -> term
| App: term -> term -> term
| Abs: string -> term -> term.
Parametricity, as in
val: forall V:Type, term V
means we can't inspect V inside val.
*)
(*
Universe of types
*)
Inductive term (V:type -> Type) : type -> Type :=
| Var: forall t:type,
V t -> term V t
| App: forall t1 t2:type,
term V (Arrow t1 t2) -> term V t1 -> term V t2
| Abs: forall t1 t2:type,
(V t1 -> term V t2) -> term V (Arrow t1 t2).
Arguments Var {V t}.
Arguments App {V t1 t2}.
Arguments Abs {V t1 t2}.
(* Closed expressions. Quantified over V => no inspection
=> no exotic terms *)
Definition Term (t:type) := forall (V:type->Type),
term V t.
(* Now we can't inspect V => exotic terms ruled out. *)
(* Choice of V is pertinent. *)
(*
λx:Nat. x
Type: term V (Arrow Nat Nat)
Term: Abs (fun x : V Nat => Var x)
*)
Definition idnat (U:type->Type) : term U (Arrow Nat Nat) :=
Abs (fun x:U Nat => Var x).
Definition IDNat: Term (Arrow Nat Nat) :=
fun U:type->Type => idnat U.
(* Choice of V is pertinent. *)
(* Eg: Wanna count number of Var nodes. Doesn't care about anything else. *)
Fixpoint countHelper {t:type} (e: term (fun _ => unit) t) : nat :=
  match e with
  | Var _ => 1
  | App e1 e2 => (countHelper e1) + (countHelper e2)
  | Abs f => countHelper (f tt)
  end.
Compute countHelper (IDNat (fun _ => unit)).
(*
:HW: Use Term to lift countHelper
*)
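One possible lifting, as a sketch (spoils the :HW:; the name CountVars is mine):
Definition CountVars {t:type} (E: Term t) : nat :=
  countHelper (E (fun _ => unit)).
Compute CountVars IDNat. (* = 1 *)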
(*
:HW: give Semantics
*)
Continuing…
(* (λx:Nat. x) 42 *)
App
  (Abs (fun x : V Nat => Var x))
  (Const 42)
(* Can't say what exactly V is. So can't inspect. *)
Parametricity of V prevents inspection.
semantics: Syntax ————→ Domain (exotic terms: syntax with no counterpart in the intended domain)
We can't prove that this doesn't have exotic terms from within Coq.
We can't prove something about the meta language from within the meta language, ∵ Gödel's incompleteness theorem.
Polymorphism:
PHOAS:
What if V is typeDenote?
You get semantics for the expression as well!
semantics: forall t:type,
term typeDenote t (* object lang type *)
-> typeDenote t
Semantics: forall t:type,
  Term t -> typeDenote t
Semantics = fun {t:type} (E: Term t) => semantics (E typeDenote)
Could be implemented like:
Inductive type: Set :=
| Nat: type
| Arrow: type -> type -> type.
Fixpoint typeDenote (t:type): Type :=
match t with
| Nat => nat
| Arrow t1 t2 => (typeDenote t1) -> (typeDenote t2)
end.
Inductive term (V:type -> Type) : type -> Type :=
| Const: nat -> term V Nat
| Var: forall t:type,
V t -> term V t
| App: forall t1 t2:type,
term V (Arrow t1 t2) -> term V t1 -> term V t2
| Abs: forall t1 t2:type,
(V t1 -> term V t2) -> term V (Arrow t1 t2).
Arguments Const {V}.
Arguments Var {V t}.
Arguments App {V t1 t2}.
Arguments Abs {V t1 t2}.
Definition Term (t:type) := forall V:type->Type, term V t.
Fixpoint semantics {t:type} (e: term typeDenote t)
: typeDenote t :=
match e in (term _ t0)
return (typeDenote t0)
with
| Const n => n
| Var x => x
| App f e' => (semantics f) (semantics e')
| Abs f => fun e' => semantics (f e')
(*| Abs f => fun e' => (semantics f) e'*)
(*got: typeDenote t1 -> term t2*)
(*want: typeDenote t1 -> typeDenote t2*)
end.
Definition Semantics {t:type} (E: Term t) : typeDenote t :=
semantics (E typeDenote).
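Quick sanity checks (the names IdNat and App42 are mine):
Definition IdNat : Term (Arrow Nat Nat) :=
  fun V => Abs (fun x => Var x).
Definition App42 : Term Nat :=
  fun V => App (Abs (fun x => Var x)) (Const 42).
Compute Semantics IdNat 3. (* = 3 *)
Compute Semantics App42. (* = 42 *)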
We can give semantics only to closed terms. Free vars => nada!
Follow the types and you'll see the code writes itself! That's how you know that you're on the right path.
Choose the V wisely. It's your choice: depending on what you want, choose V.
Next: a function that does constant folding, and a proof of its correctness.
Program Definition: kinda like the refine tactic.
Tip:
Write the types. Keep refining till you finish the function.
Functional extensionality:
if ∀x, f(x) = g(x), then f = g.
Require Import FunctionalExtensionality.
Require Import Coq.Program.Equality.
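What the axiom says (as stated in the standard library):
Check @functional_extensionality.
(* : forall (A B : Type) (f g : A -> B),
       (forall x : A, f x = g x) -> f = g *)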
Inductive type : Set :=
| Nat: type
| Arrow: type -> type -> type.
Fixpoint typeDenote (t:type): Type :=
match t with
| Nat => nat
| Arrow t1 t2 => (typeDenote t1) -> (typeDenote t2)
end.
Inductive exp (V:type->Type): type -> Type :=
| Var: forall t:type, V t -> exp V t
| App: forall t1 t2:type, exp V (Arrow t1 t2) -> exp V t1 -> exp V t2
| Abs: forall t1 t2:type, (V t1 -> exp V t2) -> exp V (Arrow t1 t2)
| Const: nat -> exp V Nat
| Plus: exp V Nat -> exp V Nat -> exp V Nat.
Arguments Var {V t}.
Arguments App {V t1 t2}.
Arguments Abs {V t1 t2}.
Arguments Const {V}.
Arguments Plus {V}.
Fixpoint expDenote {t:type} (e: exp typeDenote t)
: typeDenote t :=
match e with
| Var x => x
| App f e' => (expDenote f) (expDenote e')
| Abs f => fun e' => expDenote (f e')
| Const n => n
| Plus e1 e2 => (expDenote e1) + (expDenote e2)
end.
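For example (my addition):
Compute expDenote (Plus (Const 2) (Const 3)). (* = 5 *)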
Definition plus {V:type->Type} (e1 e2: exp V Nat) : exp V Nat :=
match e1, e2 with
| Const a, Const b => Const (a+b)
| _, _ => Plus e1 e2
end.
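Constants get folded, anything else stays a Plus node (example is mine):
Compute @plus typeDenote (Const 2) (Const 3). (* = Const 5 *)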
(* Constant folding *)
Fixpoint foldexpr {t:type} {V:type->Type} (e:exp V t) : exp V t :=
match e with
| Const c => Const c
| Var v => Var v
| Plus e1 e2 => plus (foldexpr e1) (foldexpr e2)
(* Old one:
let e1' := foldexpr e1 in
let e2' := foldexpr e2 in
match e1', e2' with
| Const x, Const y => Const (x+y)
| _, _ => Plus e1' e2'
(* in case if either e1 or e2 had scope for folding,
that will be dealt with in this last branch. *)
end
*)
| Abs f =>
let
f' := fun vT => foldexpr (f vT)
in
Abs f'
| App f e' => App (foldexpr f) (foldexpr e')
end.
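With the recursive calls in the Plus branch, nested constants fold too (example is mine):
Compute @foldexpr Nat typeDenote (Plus (Plus (Const 1) (Const 2)) (Const 3)).
(* = Const 6 *)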
Lemma plus_correctness: forall e1 e2:exp typeDenote Nat,
(expDenote e1) + (expDenote e2) = expDenote (plus e1 e2).
Proof.
intros e1 e2.
(* Can't use a plain destruct here because e1's type index is already fixed to Nat *)
dependent destruction e1;
dependent destruction e2;
simpl; trivial.
Qed.
(* Closed expressions *)
Definition Exp (t:type) := forall (V:type->Type), exp V t.
Definition ExpDenote {t:type} (E:Exp t) : typeDenote t := expDenote (E typeDenote).
Definition foldExpr {t:type} (E:Exp t) : Exp t :=
fun V:type->Type => foldexpr (E V).
(* dependent destruction and functional_extensionality rely on extra axioms (eg, functional extensionality itself) that are not provable within Coq *)
Lemma foldexpCorrectness: forall (t:type) (e: exp typeDenote t),
expDenote e = expDenote (foldexpr e).
Proof.
intros t e.
induction e.
- now simpl.
- simpl.
rewrite IHe1.
rewrite IHe2.
reflexivity.
- (* Might need functional extensionality here *)
(* Got to prove that f = foldexpr f *)
(*
Check functional_extensionality.
unfold foldexpr.
fold @foldexpr.
f_equal.
f_equal.
*)
simpl.
apply functional_extensionality.
eauto.
- now simpl.
- simpl.
  rewrite IHe1, IHe2.
  now rewrite plus_correctness.
(* Old att:
set (rhs := expDenote (foldexpr (Plus e1 e2))).
simpl.
rewrite IHe1.
rewrite IHe2.
subst rhs.
*)
Qed.
(* This proof is 'manual'. Change the language, ie, [exp], and this proof has to be done all over again. Automation would be nice. *)
(*
Hint Resolve lemma_name: localdb.
*)
(* autounfold with hintdb. eauto with hintdb *)
(* multimatch *)
(* keeping crush tactic general and non-problem specific is best. So it remains relevant even when the problem changes. *)
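A sketch of that idea (the db name folddb and tactic name crush are mine):
Create HintDb folddb.
Hint Resolve plus_correctness : folddb.
Ltac crush := intros; simpl; try reflexivity; eauto with folddb.
(* Problem-specific knowledge goes into the hint db; the tactic stays generic. *)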