Based on videos of lectures by Alexandra Silva at OPLSS-2019.
Mistakes my own.
Coinduction: like induction, but for objects that go on an infinite number of times (infinite streams, for example).
Set of infinite sequences over a set A:
A^ω = {σ | σ: ℕ → A}
σ = (σ0, σ1, σ2, ...)
head(σ) = σ0
tail(σ) = (σ1, σ2, ...)
tail(σ)(n) = σ(n+1), ∀n ∈ ℕ
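A minimal Haskell sketch of these definitions, modelling a stream as an infinite lazy list (the names Stream, hd, tl, nth are my own):

```haskell
-- A stream over a, represented as an infinite lazy list.
-- We only ever apply hd/tl to infinite lists, so no [] cases are needed.
type Stream a = [a]

hd :: Stream a -> a            -- head(σ) = σ0
hd (x : _) = x

tl :: Stream a -> Stream a     -- tail(σ) = (σ1, σ2, ...)
tl (_ : xs) = xs

nth :: Stream a -> Int -> a    -- σ(n)
nth s n = s !! n
```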
Using these destructors, let's try defining a few functions:
even: A^ω → A^ω
even(σ)(n) = σ(2n), ∀n ∈ ℕ
odd: A^ω → A^ω
odd(σ)(n) = σ(2n+1), ∀n ∈ ℕ
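In the Haskell sketch these pointwise definitions translate directly (evensIx/oddsIx are my names for the index-based versions, avoiding the Prelude's even/odd and leaving the plain names for the corecursive versions further down):

```haskell
evensIx :: Stream a -> Stream a    -- evensIx σ (n) = σ(2n)
evensIx s = [ nth s (2 * n) | n <- [0 ..] ]

oddsIx :: Stream a -> Stream a     -- oddsIx σ (n) = σ(2n+1)
oddsIx s = [ nth s (2 * n + 1) | n <- [0 ..] ]
```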
Let's try a merge. As in
merge: A^ω × A^ω → A^ω
((σ0, σ1, ..), (τ0, τ1, ..)) ↦ (σ0, τ0, σ1, τ1, ..)
Got to be like:
merge: A^ω × A^ω → A^ω
merge(σ,τ)(n) = σ(n/2)       if n is even
              = τ((n-1)/2)   if n is odd
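The case-split version also translates directly into the sketch (mergeIx is again my name for this pointwise variant):

```haskell
mergeIx :: Stream a -> Stream a -> Stream a
mergeIx s t = [ if even n then nth s (n `div` 2)
                          else nth t ((n - 1) `div` 2)
              | n <- [0 ..] ]
```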
Or, defining it in terms of the destructors hd and tl (corecursively rather than pointwise):
hd(merge(σ,τ)) = hd(σ)
tl(merge(σ,τ)) = merge(τ, tl(σ))
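Continuing the sketch, the destructor-style definitions become corecursive Haskell functions; the comments restate the hd/tl equations they mirror (the even/odd ones are spelled out just below):

```haskell
-- hd(merge(σ,τ)) = hd(σ),  tl(merge(σ,τ)) = merge(τ, tl(σ))
merge :: Stream a -> Stream a -> Stream a
merge s t = hd s : merge t (tl s)

-- hd(even(σ)) = hd(σ),  tl(even(σ)) = odd(tl(σ))
evens :: Stream a -> Stream a
evens s = hd s : odds (tl s)

-- hd(odd(σ)) = hd(tl(σ)),  tl(odd(σ)) = odd(tl(tl(σ)))
odds :: Stream a -> Stream a
odds s = hd (tl s) : odds (tl (tl s))
```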
Wow.. we can define functions operating on infinite streams just by having destructors that observe the stream at various points.
Some more stuff:
hd(even(σ)) = hd(σ)
tl(even(σ)) = odd(tl(σ))
= even(tl(tl(σ)))
hd(odd(σ)) = hd(tl(σ))
tl(odd(σ)) = even(tl(tl(tl(σ))))
= tl(even(tl(σ)))
The following should be true:
merge(even(σ), odd(σ)) = σ
even(merge(σ,τ)) = σ
odd(merge(σ,τ)) = τ
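These identities can be spot-checked in the Haskell sketch by comparing finite prefixes on example streams (a sanity check, not a proof):

```haskell
-- Do two streams agree on their first k elements?
agree :: Eq a => Int -> Stream a -> Stream a -> Bool
agree k s t = take k s == take k t

checkIdentities :: Bool
checkIdentities = and
  [ agree 20 (merge (evens nats) (odds nats)) nats   -- merge(even(σ), odd(σ)) = σ
  , agree 20 (evens (merge nats sqs)) nats           -- even(merge(σ,τ)) = σ
  , agree 20 (odds  (merge nats sqs)) sqs            -- odd(merge(σ,τ))  = τ
  ]
  where
    nats, sqs :: Stream Integer
    nats = [0 ..]
    sqs  = [ n * n | n <- [0 ..] ]
```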
so we need to prove that
∀n∈ℕ, lhs(n) = rhs(n)
Starting with merge(even(σ), odd(σ)) = σ, we need
∀n∈ℕ, ρ(n) = σ(n)
where
ρ = merge(even(σ), odd(σ))
Induction.
ρ(0) = hd(ρ)
= hd(merge(even(σ), odd(σ)))
= hd(even(σ))
= hd(σ)
= σ(0)
We get the induction hypothesis:
ρ(n) = σ(n)
Now for the induction step,
ρ(n+1) = tl(ρ)(n)
= tl(merge(even(σ), odd(σ)))(n)
= merge(odd(σ), tl(even(σ)))(n)
= merge(odd(σ), odd(tl(σ)))(n)
= merge(even(tl(σ)), odd(tl(σ)))(n)
...
= tl(ρ)(n)
= ρ(n+1)
ie, we end up back where we started. But how do we close the gap? Non-trivially.
—
Let's try proving
even(merge(σ,τ)) = σ
Let ρ = even(merge(σ,τ))
ρ(0) = hd(ρ)
= hd(even(merge(σ,τ)))
= hd(merge(σ,τ))
= hd(σ)
= σ(0)
IH is: ρ(n) = σ(n)
Induction step:
ρ(n+1) = even(merge(σ,τ))(n+1)
= merge(σ,τ)(2(n+1))
= tl(even(merge(σ,τ)))(n)
= odd(tl(merge(σ,τ)))(n)
= ...
= σ(n+1)
What we had been doing here is, essentially, bisimulation.
A relation R ⊆ A^ω × A^ω is a bisimulation if ∀(σ,τ) ∈ R:
hd(σ) = hd(τ)
(tl(σ), tl(τ)) ∈ R
(We can't say equality for the tail parts, so we say that in terms of R itself.)
Coinduction:
σ ~ τ => σ = τ
where σ ~ τ means that there is a bisimulation relation R such that (σ,τ) ∈ R.
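A sketch of how this closes the gap from before (my own working): take the relation
R = {(merge(even(σ), odd(σ)), σ) | σ ∈ A^ω}
For any pair in R:
hd(merge(even(σ), odd(σ))) = hd(even(σ)) = hd(σ)
tl(merge(even(σ), odd(σ))) = merge(odd(σ), tl(even(σ))) = merge(even(tl(σ)), odd(tl(σ)))
So the pair of tails, (merge(even(tl(σ)), odd(tl(σ))), tl(σ)), is again in R (just instantiate at tl(σ)). Hence R is a bisimulation, and by coinduction merge(even(σ), odd(σ)) = σ. The circularity that defeated the induction is exactly what makes R closed under taking tails.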
Dexter Kozen quote:
A proof is valid by induction if you have a good reason for the proof to be valid.
A proof is valid by coinduction if you don't have any good reason for the proof not to be valid.
This is so because in a proof by coinduction we are, in effect, looking for counter-examples: we try to show that there is no good reason to say the statement is invalid.
A DFA is a pair (S, <o,t>)
where:
S: set of states
o: S → 2: output function
  2 is like 𝔹. It's just {0,1}.
  For the set F ⊆ S of final states, o_F is its characteristic function.
t: S → Sᴬ: next-state function
  For a ∈ A, while at a state s ∈ S, t(s)(a) ∈ S is the next state (since it's a DFA, only one next state).
  A is the alphabet, ie, a finite set of terminal symbols.
Note that we didn't say anything about the initial state.
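A minimal Haskell rendering of this coalgebraic reading (state type s, alphabet type a; the field names out/step and the example evenBs are my own):

```haskell
-- A DFA (without an initial state) as a coalgebra <o, t>:
--   out  plays the role of o : S -> 2
--   step plays the role of t : S -> S^A (curried as s -> a -> s)
data DFA s a = DFA
  { out  :: s -> Bool
  , step :: s -> a -> s
  }

-- Example: the state records "have we seen an even number of 'b's so far?";
-- a state is accepting exactly when that is True.
evenBs :: DFA Bool Char
evenBs = DFA { out  = id
             , step = \s c -> if c == 'b' then not s else s }
```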
—
A* can be defined as an inductive type.
2^(A*) can be defined as a coinductive type. ??
A word w ∈ A* is accepted at a state s ∈ S if:
w = ε and o(s) = 1
or w = au and u is accepted by the state t(s)(a)
Another way to define word acceptance:
If we have:
t: S⨯A → S
we can inductively extend it to:
t̂: S⨯(A*) → S
t̂(s)(ε) = s (no change in state. DFA => no ε-transitions)
t̂(s)(au) = t̂(t(s)(a))(u)
Then we can say that w is accepted by a state s if o(t̂(s)(w)) is 1.
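Continuing the Haskell sketch (stepHat and accepts are my names for t̂ and acceptance; a finite word over A is just a list):

```haskell
-- t̂ : extend the one-step transition function to whole words.
stepHat :: DFA s a -> s -> [a] -> s
stepHat _   s []      = s                             -- t̂(s)(ε)  = s
stepHat dfa s (a : u) = stepHat dfa (step dfa s a) u  -- t̂(s)(au) = t̂(t(s)(a))(u)

-- w is accepted at state s iff o(t̂(s)(w)) = 1.
accepts :: DFA s a -> s -> [a] -> Bool
accepts dfa s w = out dfa (stepHat dfa s w)
```

For example, with the evenBs sketch above, accepts evenBs True "abba" is True, while accepts evenBs True "ab" is False.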