Curry-Howard for an imperative programming language?

The Curry-Howard isomorphism links proofs of propositions with "programs" and their types.



But the way I was introduced to it, "programs" are interpreted functionally, i.e. as terms of a lambda calculus with dependent type theory.



I am wondering:



  • Does the "programs as proofs" isomorphism still apply when we consider "imperative programs" rather than lambda calculus?


  • Is there a practical way (e.g. a particular proof assistant) to actually program proofs using a more imperative language style, with dependent type theory? (P.S. My question is motivated by the fact that I actually want to try this out.)







  • Note that e.g. Haskell allows a syntax that looks pretty imperative by wrapping code in the identity monad and using do syntax. More generally I don't think the difference between imperative and functional programming is fundamental so much as practical.
    – Mees de Vries
    Jul 25 at 9:58
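To illustrate the comment's point, here is a minimal Haskell sketch (the function `sumOfSquares` is an invented example): pure code written in do-notation over the `Identity` monad reads like a sequence of imperative assignments, even though nothing is mutated.

```haskell
import Data.Functor.Identity (Identity, runIdentity)

-- Pure code in do-notation over the Identity monad: each "let" line
-- reads like an imperative assignment, but nothing is mutated.
sumOfSquares :: Int -> Int
sumOfSquares n = runIdentity $ do
  let xs = [1 .. n]
  let squares = map (^ 2) xs
  return (sum squares)
```

The same function could of course be written as a one-line pure expression; the do-block only changes the surface syntax, which is exactly the comment's claim that the imperative/functional split is practical rather than fundamental.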










  • You might be interested in looking at en.wikipedia.org/wiki/Hoare_logic which provides a mechanism for formalizing proofs of properties of imperative programs.
    – Daniel Schepler
    Jul 27 at 17:07










  • Otherwise, it could seriously complicate the logic: for example if $f$ is a function with possible side effects, then you could no longer say $f(x) = f(x)$ necessarily.
    – Daniel Schepler
    Jul 27 at 17:17
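The failure of $f(x) = f(x)$ under side effects can be demonstrated directly. In this hedged sketch (`twoCalls` is an invented name), a "function" reads and bumps a hidden counter, so two syntactically identical calls return different values:

```haskell
import Data.IORef (newIORef, modifyIORef, readIORef)

-- A "function" with a hidden side effect: each call increments a
-- mutable counter, so two identical calls observe different values
-- and the naive equation f () = f () fails.
twoCalls :: IO (Int, Int)
twoCalls = do
  counter <- newIORef (0 :: Int)
  let f = do
        modifyIORef counter (+ 1)
        readIORef counter
  a <- f
  b <- f
  pure (a, b)
```

Haskell quarantines this behind the `IO` type, which is one way a logic can keep equational reasoning intact in the presence of effects.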










  • Are we sure this wouldn't be better asked (and answered) at CS? For instance, cs.stackexchange.com/a/72056/35854 seems nearly responsive to what I understand to be the Hoare type theory content of the Question.
    – Eric Towers
    Jul 28 at 20:26











  • The basic form of Curry-Howard, relating constructive positive propositional logic and typed lambda calculus, does not apply to most imperative languages, because most imperative languages are strictly stronger than typed lambda calculus: Turing-complete vs. primitive recursive.
    – DanielV
    Jul 28 at 20:28















asked Jul 25 at 9:33 by Programmer2134 (edited Jul 29 at 4:07)
1 Answer
One (extreme) perspective on Curry-Howard (as a general principle) is that it states that proof theorists and programming language theorists/type theorists are simply using different words for the exact same things. From this (extreme) perspective, the answer to your first question is trivially "yes". You simply call the types of your imperative language "formulas", you call the programs "proofs", you call the reduction rules of an operational semantics "proof transformations", and you call (the type-mapping part of a) denotational semantics an "interpretation" (in the model-theoretic sense), and, et voilà, we have a proof system.
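This renaming game can be made concrete even in a non-dependent setting. In Haskell, for instance, any total term inhabiting a polymorphic type is a proof of the corresponding formula of minimal logic (a sketch; the names are invented):

```haskell
-- Propositions-as-types: a total term of a polymorphic type is a
-- proof of the corresponding implicational formula.

-- Proof of A -> ((A -> B) -> B), i.e. modus ponens internalized.
modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x

-- Proof of (A -> B) -> ((B -> C) -> (A -> C)),
-- transitivity of implication, witnessed by function composition.
implTrans :: (a -> b) -> (b -> c) -> (a -> c)
implTrans f g = g . f
```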



Of course, for most programming languages, this proof system will have horrible meta-theoretic properties, be inconsistent¹, and probably won't have an obvious connection to (proof systems for) usual logics. Interesting Curry-Howard-style results usually connect a pre-existing proof system to a pre-existing programming language/type theory. For example, the archetypal Curry-Howard correspondence connects the natural deduction presentation of intuitionistic propositional logic to (a specific presentation of) the simply typed lambda calculus. A sequent calculus presentation of intuitionistic propositional logic would (most directly) connect to a different presentation of the simply typed lambda calculus. For example, in the presentation corresponding to natural deduction (and assuming we have products), you'd have terms $\pi_i : A_1 \times A_2 \to A_i$ and $\langle M, N\rangle : A_1 \times A_2$ where $M : A_1$ and $N : A_2$. In a sequent calculus presentation, you wouldn't have the projections but would instead have a pattern-matching $\mathsf{let}$, e.g. $\mathsf{let}\ \langle x, y\rangle = T\ \mathsf{in}\ U$ where $T : A_1 \times A_2$ and $U : B$, and $x$ and $y$ can occur free in $U$ with types $A_1$ and $A_2$ respectively. As a third example, a Hilbert-style presentation of minimal logic (the implication-only fragment of intuitionistic logic) gives you the SKI combinatory logic. (My understanding is that this third example is closer to what Curry did.)
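The two styles of conjunction elimination described above can both be written down in Haskell, where they are extensionally equal but correspond to different proof rules (a sketch; the names are invented):

```haskell
-- Conjunction elimination in two styles; both prove A /\ B -> A.

-- Natural-deduction style: a projection out of the pair.
projStyle :: (a, b) -> a
projStyle = fst

-- Sequent-calculus style: a pattern-matching "let" eliminates the
-- pair by binding both components at once, using only the one needed.
letStyle :: (a, b) -> a
letStyle p = let (x, _y) = p in x
```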



Another way of using Curry-Howard is to leverage tools and intuitions from proof theory for computational purposes. Atsushi Ohori's Register Allocation by Proof Transformation is an example of a proof system built around a simple imperative language (meant to be low-level). This system is not intended to correspond to any pre-existing logic. Going the other way, computational interpretations can shed light on existing proof systems or suggest entirely new logics and proof systems. Things like the Concurrent Logical Framework clearly leverage the interplay of proof theory and programming.



Let's say we start with a typed lambda calculus, for familiarity. What happens when we add various effects? We actually know in many cases (though usually the effects need to be more carefully controlled than they are in typical programming languages). The most well-known example is callcc, which gives us a classical logic (though, again, a lot of care is needed to produce a well-behaved proof system). Slide 5 (page 11) of this slide set mentions several others and a scheme for encoding them into Coq. (You may find the entire slide set interesting, as well as other work by Pierre-Marie Pédrot.) Adding effects tends to produce powerful new proof rules and non-trivial extensions of what's provable (often to the point of inconsistency when done cavalierly). For example, adding exceptions (in the context of intuitionistic dependent type theory) allows us to validate Markov's rule. One way of understanding exceptions is via a monad-style translation, and the logical view of this is Friedman's translation. If you take a constructive reading of Markov's rule, it's actually pretty obvious how you could implement it given exceptions. (What's less obvious is whether adding exceptions causes any other problems, which, naively, it clearly does, since we can "prove" anything by simply throwing an exception; but if we require all exceptions to be caught...)
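The callcc example can be made concrete: read through the continuation translation, callcc has exactly the shape of Peirce's law, $((A \to B) \to A) \to A$, which is classically but not intuitionistically provable. Here is a self-contained sketch with a hand-rolled continuation monad (the standard version lives in the mtl package's Control.Monad.Cont; the names below are invented for the example):

```haskell
-- A minimal continuation monad, defined from scratch so the
-- sketch is self-contained.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont (\k -> c (k . f))

instance Applicative (Cont r) where
  pure a = Cont ($ a)
  Cont cf <*> Cont ca = Cont (\k -> cf (\f -> ca (k . f)))

instance Monad (Cont r) where
  Cont c >>= f = Cont (\k -> c (\a -> runCont (f a) k))

-- callCC has exactly the shape of Peirce's law
-- ((A -> B) -> A) -> A, read through the continuation translation.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\a -> Cont (\_ -> k a))) k)

-- Invoking the captured continuation discards the rest of the
-- computation: "return 0" is never reached and the result is 42.
escape :: Int
escape = runCont (callCC (\k -> k 42 >> return 0)) id
```

The "lot of care" the answer mentions shows up here as the need to restrict how continuations interact with the rest of the theory before the resulting classical proof system is well behaved.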



Daniel Schepler mentioned Hoare logic (a more modern version would be Hoare Type Theory), and it is worth mentioning how this connects. A Hoare logic is itself a deductive system (one that is usually defined in terms of a traditional logic). The "formulas" are the Hoare triples, i.e. the programs annotated with pre- and post-conditions, so this is a different way of connecting logic and computation than Curry-Howard².
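To make the contrast concrete, here is the Hoare triple {x >= 0} y := x + 1 {y > 0} rendered as a Haskell function whose specification is enforced by runtime checks (a sketch; `incPos` is an invented name, and a real Hoare logic discharges these obligations statically by a derivation rather than at runtime):

```haskell
-- The Hoare triple  {x >= 0}  y := x + 1  {y > 0}  with the
-- pre- and postcondition checked dynamically instead of proven.
incPos :: Int -> Int
incPos x
  | x < 0     = error "precondition violated: x >= 0"
  | y <= 0    = error "postcondition violated: y > 0"
  | otherwise = y
  where
    y = x + 1
```

In a Hoare logic the whole annotated program is the formula being judged; in Curry-Howard the program itself is the proof. That is the difference the paragraph above is pointing at.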



A less extreme perspective on Curry-Howard would add some conditions on what constitutes an acceptable proof system and, going the other way, on what constitutes an acceptable programming language. (It's quite possible to specify proof systems that don't give rise to an obvious notion of computation.) Also, the traditional way of using Curry-Howard to produce a computational system identifies computation with proof normalization, but there are other ways of thinking about computation. Regardless of perspective, the well-behaved proof systems/rules are often the most interesting: certainly from the logical side, but even from the programming side, good meta-theoretic properties of the proof system usually correspond to properties that are good for program comprehension.



¹ Traditional discussions of logic tend to dismiss inconsistent logics as completely uninteresting. This is an artifact of proof-irrelevance. Inconsistent logics can still have interesting proof theories! You just can't accept just any proof. Of course, it is still reasonable to focus on consistent theories.



² I don't mean to suggest that Daniel Schepler meant otherwise.






share|cite|improve this answer





















    Your Answer




    StackExchange.ifUsing("editor", function ()
    return StackExchange.using("mathjaxEditing", function ()
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    );
    );
    , "mathjax-editing");

    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "69"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    convertImagesToLinks: true,
    noModals: false,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );








     

    draft saved


    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2862220%2fcurry-howard-for-an-imperative-programming-language%23new-answer', 'question_page');

    );

    Post as a guest






























    1 Answer
    1






    active

    oldest

    votes








    1 Answer
    1






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    5
    down vote



    +25










    One (extreme) perspective on Curry-Howard (as a general principle) is that it states that proof theorists and programming language theorists/type theorists are simply using different words for the exact same things. From this (extreme) perspective, the answer to your first question is trivially "yes". You simply call the types of your imperative language "formulas", you call the programs "proofs", you call the reduction rules of an operational semantics "proof transformations", and you call (the type mapping part of a) denotational sematics an "interpretation" (in the model theoretic sense), and et voilà, we have a proof system.



    Of course, for most programming languages, this proof system will have horrible meta-theoretic properties, be inconsistent1, and probably won't have an obvious connection to (proof systems for) usual logics. Interesting Curry-Howard-style results are usually connecting a pre-existing proof system to a pre-existing programming language/type theory. For example, the archetypal Curry-Howard correspondence connects the natural deduction presentation of intuitionistic propositional logic to (a specific presentation) of the simply typed lambda calculus. A sequent calculus presentation of intuitonistic propositional logic would (most directly) connect to a different presentation of the simply typed lambda calculus. For example, in the presentation corresponding to natural deduction (and assuming we have products), you'd have terms $pi_i:A_1times A_2to A_i$ and $langle M,Nrangle : A_1times A_2$ where $M:A_1$ and $N:A_2$. In a sequent calculus presentation, you wouldn't have the projections but instead have a pattern matching $mathsflet$, e.g. $mathsflet langle x,yrangle = T mathsfin U$ where $T:A_1times A_2$ and $U:B$ and $x$ and $y$ can occur free in $U$ with types $A_1$ and $A_2$ respectively. As a third example, a Hilbert-style presentation of minimal logic (the implication only fragment of intuitionistic logic) gives you the SKI combinatory logic. (My understanding is that this third example is closer to what Curry did.)



    Another way of using Curry-Howard is to leverage tools and intuitions from proof theory for computational purposes. Atsushi Ohori's Register Allocation by Proof Transformation is an example of a proof system built around a simple imperative language (meant to be low-level). This system is not intended to correspond to any pre-existing logic. Going the other way, computational interpretations can shed light on existing proof systems or suggest entirely new logics and proof systems. Things like the Concurrent Logical Framework clearly leverage the interplay of proof theory and programming.



    Let's say we start with a typed lambda calculus, for familiarity. What happens when we add various effects? We actually know in many cases (though usually the effects need to be carefully controlled, more so than they usually are in typical programming languages). The most well-known example is callcc which gives us a classical logic (though, again, a lot of care is needed to produce a well-behaved proof system). Slide 5 (page 11) of this slide set mentions several others and a scheme for encoding them into Coq. (You may find the entire slide set interesting and other work by Pierre-Marie Pédrot.) Adding effects tends to produce powerful new proof rules and non-trivial extensions of what's provable (often to the point of inconsistency when done cavalierly). For example, adding exceptions (in the context of intuitionistic dependent type theory) allows us to validate Markov's rule. One way of understanding exceptions is by a monad-style translation and the logical view of this is Friedman's translation. If you take a constructive reading of Markov's rule, it's actually pretty obvious how you could implement it given exceptions. (What's less obvious is if adding exceptions causes any other problems, which, naively, it clearly does, since we can "prove" anything by simply throwing an exception, but if we require all exceptions to be caught...)



    Daniel Schepler mentioned Hoare logic (a more modern version would be Hoare Type Theory), and it is worth mentioning how this connects. A Hoare Logic is itself a deductive system (that usually is defined in terms of a traditional logic). The "formulas" are the Hoare triples, i.e. the programs annotated with pre- and post-conditions, so this is a different way of connecting logic and computation than Curry-Howard2.



    A less extreme perspective on Curry-Howard would add some conditions on what constitutes an acceptable proof system and, going the other way, what constitutes an acceptable programming language. (It's quite possible to specify proof systems that don't give rise to an obvious notion of computation.) Also, the traditional way of using Curry-Howard to produce a computational system identifies computation with proof normalization, but there are other ways of thinking about computation. Regardless of perspective, the well-behaved proof system/rules are often the most interesting. Certainly from a logical side, but even from the programming side, good meta-theoretical properties of the proof system usually correspond to properties that are good for program comprehension.



    1 Traditional discussions of logic tend to dismiss inconsistent logics as completely uninteresting. This is an artifact of proof-irrelevance. Inconsistent logics can still have interesting proof theories! You just can't accept just any proof. Of course, it is still reasonable to focus on consistent theories.



    2 I don't mean to suggest that Daniel Schepler meant otherwise.






    share|cite|improve this answer

























      up vote
      5
      down vote



      +25










      One (extreme) perspective on Curry-Howard (as a general principle) is that it states that proof theorists and programming language theorists/type theorists are simply using different words for the exact same things. From this (extreme) perspective, the answer to your first question is trivially "yes". You simply call the types of your imperative language "formulas", you call the programs "proofs", you call the reduction rules of an operational semantics "proof transformations", and you call (the type mapping part of a) denotational sematics an "interpretation" (in the model theoretic sense), and et voilà, we have a proof system.



      Of course, for most programming languages, this proof system will have horrible meta-theoretic properties, be inconsistent1, and probably won't have an obvious connection to (proof systems for) usual logics. Interesting Curry-Howard-style results are usually connecting a pre-existing proof system to a pre-existing programming language/type theory. For example, the archetypal Curry-Howard correspondence connects the natural deduction presentation of intuitionistic propositional logic to (a specific presentation) of the simply typed lambda calculus. A sequent calculus presentation of intuitonistic propositional logic would (most directly) connect to a different presentation of the simply typed lambda calculus. For example, in the presentation corresponding to natural deduction (and assuming we have products), you'd have terms $pi_i:A_1times A_2to A_i$ and $langle M,Nrangle : A_1times A_2$ where $M:A_1$ and $N:A_2$. In a sequent calculus presentation, you wouldn't have the projections but instead have a pattern matching $mathsflet$, e.g. $mathsflet langle x,yrangle = T mathsfin U$ where $T:A_1times A_2$ and $U:B$ and $x$ and $y$ can occur free in $U$ with types $A_1$ and $A_2$ respectively. As a third example, a Hilbert-style presentation of minimal logic (the implication only fragment of intuitionistic logic) gives you the SKI combinatory logic. (My understanding is that this third example is closer to what Curry did.)



      Another way of using Curry-Howard is to leverage tools and intuitions from proof theory for computational purposes. Atsushi Ohori's Register Allocation by Proof Transformation is an example of a proof system built around a simple imperative language (meant to be low-level). This system is not intended to correspond to any pre-existing logic. Going the other way, computational interpretations can shed light on existing proof systems or suggest entirely new logics and proof systems. Things like the Concurrent Logical Framework clearly leverage the interplay of proof theory and programming.



      Let's say we start with a typed lambda calculus, for familiarity. What happens when we add various effects? We actually know in many cases (though usually the effects need to be carefully controlled, more so than they usually are in typical programming languages). The most well-known example is callcc which gives us a classical logic (though, again, a lot of care is needed to produce a well-behaved proof system). Slide 5 (page 11) of this slide set mentions several others and a scheme for encoding them into Coq. (You may find the entire slide set interesting and other work by Pierre-Marie Pédrot.) Adding effects tends to produce powerful new proof rules and non-trivial extensions of what's provable (often to the point of inconsistency when done cavalierly). For example, adding exceptions (in the context of intuitionistic dependent type theory) allows us to validate Markov's rule. One way of understanding exceptions is by a monad-style translation and the logical view of this is Friedman's translation. If you take a constructive reading of Markov's rule, it's actually pretty obvious how you could implement it given exceptions. (What's less obvious is if adding exceptions causes any other problems, which, naively, it clearly does, since we can "prove" anything by simply throwing an exception, but if we require all exceptions to be caught...)



      Daniel Schepler mentioned Hoare logic (a more modern version would be Hoare Type Theory), and it is worth mentioning how this connects. A Hoare Logic is itself a deductive system (that usually is defined in terms of a traditional logic). The "formulas" are the Hoare triples, i.e. the programs annotated with pre- and post-conditions, so this is a different way of connecting logic and computation than Curry-Howard2.



      A less extreme perspective on Curry-Howard would add some conditions on what constitutes an acceptable proof system and, going the other way, what constitutes an acceptable programming language. (It's quite possible to specify proof systems that don't give rise to an obvious notion of computation.) Also, the traditional way of using Curry-Howard to produce a computational system identifies computation with proof normalization, but there are other ways of thinking about computation. Regardless of perspective, the well-behaved proof system/rules are often the most interesting. Certainly from a logical side, but even from the programming side, good meta-theoretical properties of the proof system usually correspond to properties that are good for program comprehension.



      1 Traditional discussions of logic tend to dismiss inconsistent logics as completely uninteresting. This is an artifact of proof-irrelevance. Inconsistent logics can still have interesting proof theories! You just can't accept just any proof. Of course, it is still reasonable to focus on consistent theories.



      2 I don't mean to suggest that Daniel Schepler meant otherwise.






      share|cite|improve this answer























        up vote
        5
        down vote



        +25







        up vote
        5
        down vote



        +25




        +25




        One (extreme) perspective on Curry-Howard (as a general principle) is that it states that proof theorists and programming language theorists/type theorists are simply using different words for the exact same things. From this (extreme) perspective, the answer to your first question is trivially "yes". You simply call the types of your imperative language "formulas", you call the programs "proofs", you call the reduction rules of an operational semantics "proof transformations", and you call (the type mapping part of a) denotational sematics an "interpretation" (in the model theoretic sense), and et voilà, we have a proof system.



        Of course, for most programming languages, this proof system will have horrible meta-theoretic properties, be inconsistent1, and probably won't have an obvious connection to (proof systems for) usual logics. Interesting Curry-Howard-style results are usually connecting a pre-existing proof system to a pre-existing programming language/type theory. For example, the archetypal Curry-Howard correspondence connects the natural deduction presentation of intuitionistic propositional logic to (a specific presentation) of the simply typed lambda calculus. A sequent calculus presentation of intuitonistic propositional logic would (most directly) connect to a different presentation of the simply typed lambda calculus. For example, in the presentation corresponding to natural deduction (and assuming we have products), you'd have terms $pi_i:A_1times A_2to A_i$ and $langle M,Nrangle : A_1times A_2$ where $M:A_1$ and $N:A_2$. In a sequent calculus presentation, you wouldn't have the projections but instead have a pattern matching $mathsflet$, e.g. $mathsflet langle x,yrangle = T mathsfin U$ where $T:A_1times A_2$ and $U:B$ and $x$ and $y$ can occur free in $U$ with types $A_1$ and $A_2$ respectively. As a third example, a Hilbert-style presentation of minimal logic (the implication only fragment of intuitionistic logic) gives you the SKI combinatory logic. (My understanding is that this third example is closer to what Curry did.)



        Another way of using Curry-Howard is to leverage tools and intuitions from proof theory for computational purposes. Atsushi Ohori's Register Allocation by Proof Transformation is an example of a proof system built around a simple imperative language (meant to be low-level). This system is not intended to correspond to any pre-existing logic. Going the other way, computational interpretations can shed light on existing proof systems or suggest entirely new logics and proof systems. Things like the Concurrent Logical Framework clearly leverage the interplay of proof theory and programming.



        Let's say we start with a typed lambda calculus, for familiarity. What happens when we add various effects? We actually know in many cases (though usually the effects need to be carefully controlled, more so than they are in typical programming languages). The most well-known example is callcc, which gives us a classical logic (though, again, a lot of care is needed to produce a well-behaved proof system). Slide 5 (page 11) of this slide set mentions several others and a scheme for encoding them into Coq. (You may find the entire slide set interesting, as well as other work by Pierre-Marie Pédrot.) Adding effects tends to produce powerful new proof rules and non-trivial extensions of what's provable (often to the point of inconsistency when done cavalierly). For example, adding exceptions (in the context of intuitionistic dependent type theory) allows us to validate Markov's rule. One way of understanding exceptions is by a monad-style translation, and the logical view of this is Friedman's translation. If you take a constructive reading of Markov's rule, it's actually pretty obvious how you could implement it given exceptions. (What's less obvious is whether adding exceptions causes any other problems, which, naively, it clearly does, since we can "prove" anything by simply throwing an exception; but if we require all exceptions to be caught...)
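        A rough sketch of both phenomena, with a one-shot continuation simulated by an exception (this illustrates the typing only; it is not a faithful, well-behaved proof system):

```typescript
// Peirce's law ((A -> B) -> A) -> A is the classically-but-not-
// intuitionistically provable formula that callcc inhabits. Here the
// continuation `k` is simulated as a one-shot escape via an exception.
class Escape<A> {
  constructor(public value: A) {}
}

function peirce<A, B>(f: (k: (a: A) => B) => A): A {
  try {
    return f((a: A): B => {
      throw new Escape(a);
    });
  } catch (e) {
    if (e instanceof Escape) return (e as Escape<A>).value;
    throw e;
  }
}

// Markov's rule, read computationally: if it is contradictory that a
// decidable predicate on the naturals fails everywhere, unbounded
// search finds a witness. (The search diverges if no witness exists,
// which is why the "not not" hypothesis matters.)
function markovSearch(p: (n: number) => boolean): number {
  for (let n = 0; ; n++) if (p(n)) return n;
}

// The "continuation" escapes with 7; the normal return path is unused.
console.log(peirce<number, never>((k) => k(7))); // 7
console.log(markovSearch((n) => n * n > 50)); // 8
```

Note how unrestricted `throw` really does let you "prove" anything: a term that throws and is never caught inhabits every type, which is the inconsistency mentioned above.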



        Daniel Schepler mentioned Hoare logic (a more modern version would be Hoare Type Theory), and it is worth mentioning how this connects. A Hoare logic is itself a deductive system (usually defined in terms of a traditional logic). The "formulas" are the Hoare triples, i.e. programs annotated with pre- and post-conditions, so this is a different way of connecting logic and computation than Curry-Howard$^2$.
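        For concreteness, here is what a "formula" in this setting looks like (standard Hoare logic notation; the particular triple is just an illustration). The assignment axiom is $\{Q[E/x]\}\ x := E\ \{Q\}$: the precondition is the postcondition with the assigned expression substituted for the variable. An instance is $\{x + 1 \le n\}\ x := x + 1\ \{x \le n\}$. A derivation of such a triple is a proof about the program, not the program itself.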



        A less extreme perspective on Curry-Howard would add some conditions on what constitutes an acceptable proof system and, going the other way, what constitutes an acceptable programming language. (It's quite possible to specify proof systems that don't give rise to an obvious notion of computation.) Also, the traditional way of using Curry-Howard to produce a computational system identifies computation with proof normalization, but there are other ways of thinking about computation. Regardless of perspective, the well-behaved proof system/rules are often the most interesting. Certainly from a logical side, but even from the programming side, good meta-theoretical properties of the proof system usually correspond to properties that are good for program comprehension.



        $^1$ Traditional discussions of logic tend to dismiss inconsistent logics as completely uninteresting. This is an artifact of proof-irrelevance. Inconsistent logics can still have interesting proof theories! You just can't accept just any proof. Of course, it is still reasonable to focus on consistent theories.



        $^2$ I don't mean to suggest that Daniel Schepler meant otherwise.






        answered Jul 29 at 2:35









        Derek Elkins
