I've been reading *A Revolution in Mathematics? What Really Happened a Century Ago and Why It Matters Today* (Notices of the AMS, vol. 59, no. 1, Jan 2012, pp. 31-37) and a stack of other papers by mathematician Frank Quinn. He describes how, since the early twentieth century, professional mathematicians have worked in a particular way, the way they found most effective. Shockingly, Quinn points out that this is not how students are taught to do mathematics (in the USA) at any level up to and including a large part of undergraduate material. Instead, because of a misunderstanding by maths educators, students are taught outdated and inferior methods from the nineteenth century. This difference, although it seems both obvious and remarkable once pointed out, has remained unnoticed by mathematicians and educators until very recently.

Quinn goes on to argue that this unnoticed difference could to a large extent explain the poor and declining results of mathematics education in the USA. (And to the extent that other countries use the same poor methods, we should expect the same poor results elsewhere too.) The problem has become worse in recent years, says Quinn, largely because those in control of maths education believe that their methods are just fine, and believe that success will come with more vigorous application of the same poor methods. So rather than change these methods, the authorities have successfully championed "more of the same", leading to a "death spiral" from which escape appears uncertain.

I find Frank Quinn's ideas very convincing. I've done enough maths tuition that I recognise from personal experience some of the concrete problems of student understanding that he describes. It's rather frightening to see these problems explained as merely symptoms of a larger systemic problem. I think Quinn's ideas are crucial for the future of maths teaching, but I believe there are lessons here for us too, as we consider how best to teach programming. It's possible that we too have been going about things the wrong way, misunderstanding the fundamentals, and that the problems we face in teaching programming are *exactly the same problems* that Quinn describes. If so, the good news is that we can immediately draw on Quinn's experience and suggestions of how to do it better. I'll return to this thought later, but first let's examine more closely what he says about the difference between nineteenth and twentieth century methods in mathematics.

Up until the late nineteenth century, a mathematical proof depended to a large extent on physical intuitions and on understanding what a mathematical model "really" meant:

The conventional wisdom is that mathematics has always depended on
error-free logical argument, but this is not completely true. It is
quite easy to make mistakes with infinitesimals, infinite series,
continuity, differentiability, and so forth, and even possible to get
erroneous conclusions about triangles in Euclidean geometry. When
intuitive formulations are used, there are no reliable rule-based
ways to see these are wrong, so in practice ambiguity and mistakes
used to be resolved with external criteria, including testing against
accepted conclusions, feedback from authorities, and comparison with
physical reality.
(See *A Revolution in Mathematics?*, p. 31.)

The great revolution in mathematics, which took place from about 1890 to 1930, was to replace intuitive concepts and intuitive proofs with systems of explicit rules, like the rules of a game, without worrying about what they "really" meant. To some people, including philosophers and elite mathematicians with excellent intuition, this seemed like a loss, but it brought with it an amazing benefit: arguments based on consistent application of these rules led to **completely reliable conclusions**. This meant that mathematicians could build a much more complex edifice of rules and proofs than would ever have been possible in the nineteenth century, safe in the confidence that however complex, it was all still correct. The new methods also opened up mathematics research to ordinary mathematicians, not just to superstars with extraordinary intuition. Of course, constructing the required proofs still required imagination and experience, but there was now a systematic way to proceed when you got stuck:

When someone reaches his personal limits of heuristic reasoning and
intuition, the reasons for failure are obscure and there is not much
that can be done about it. This is why advanced mathematics was
limited to a few extraordinary people up through the nineteenth
century, and why students feel stupid when they reach their limits
today. The great discovery of the early twentieth century was that
basing mathematics on disciplined reasoning rather than intuition
makes it accessible to ordinary people. When people reach the limits
of good basic logical skills then the failures are localized and can
usually be identified and fixed. There is a clear, though disciplined
and rigorous, way forward. Experts do eventually develop powerful
intuitions, but these can now be seen as a battery, charged by
thousands of hours of disciplined reasoning and refinement.
(See *Reform Mathematics Education ...*, p. 11.)

As programmers, we can recognise that debugging a proof under the twentieth century mathematical regime is very much like debugging a program: "failures are localized" and so, with disciplined reasoning, "they can usually be identified and fixed". Twentieth century methods are "error-displaying", in the sense that if a mathematical argument produces a false conclusion, then it will be possible to find the error, because the error will be in the misapplication of some rule. Mistakes happen; potential proofs usually have errors, in the same way that programs, when first written, usually have bugs. But if the steps of a proof are set out in precise detail (similar to the precise detail you need in a computer program) then you will always be able to find exactly where a rule was broken. This in turn will often suggest an alternative approach or a fix that will mend the proof.
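As a concrete (and purely illustrative) sketch of what "error-displaying" working might look like in code, the fragment below records each step of a small derivation together with the rule it claims to apply, and checks each step mechanically. A wrong step then fails at that step, rather than surfacing only as a mysteriously wrong final answer. The `check_step` helper and the example derivation are mine, not Quinn's.

```python
# An "error-displaying" style of working: every step names the rule it
# applies, and each step is checked mechanically, so a mistake is
# localized to one step instead of silently corrupting the conclusion.
def check_step(label, lhs, rhs):
    # The step claims lhs equals rhs by the rule named in `label`.
    assert lhs == rhs, f"rule misapplied at step: {label}"
    return rhs

x = 7
# Derive (x + 1)^2 = x*(x + 2) + 1, one checked step at a time.
expanded = check_step("expand (x+1)^2", (x + 1) ** 2, x * x + 2 * x + 1)
factored = check_step("factor x^2 + 2x", x * x + 2 * x, x * (x + 2))
assert expanded == factored + 1   # the final conclusion also checks out
```

(Checking at a single numeric value of `x` is of course weaker than a symbolic proof; the point is only the shape of the discipline, where a broken step announces itself.)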

(And of course, another link with computing is that it's one thing to define a system of rules, but it's another thing to be sure of applying them completely reliably, with no mistakes. How can you construct social and physical systems which guarantee this, given only fallible humans and unreliable physical artifacts? Nothing in the real world is perfect. What custom and technique could in practice guarantee complete reliability? Mathematicians have worked out their own answers to that question, but that question is *exactly* the concern of computer science! It calls forth all of computer science, from low-level machine design, through compilers, operating systems, programming and interaction design, to psychology and organisation theory. It is scarcely coincidence that the mathematicians John von Neumann and Alan Turing were computer pioneers. They wanted to see their method embodied in machinery.)

So, standard practice in modern "core" mathematics revolves around systems of formal rules and the completely reliable conclusions that people can draw from them. What about mathematics education? In what sense is it still using nineteenth century methods and why exactly is that bad?

With nineteenth century methods intuition is key, and mathematics education has concentrated on intuitive understanding first and skill at applying formal rules second (or never). One problem with this is that *correct* intuition in mathematics comes from working with rules and internalising them. Trying to hand students an intuition first is very dangerous, because they will often get the *wrong* intuition, or just be confused. And confusion is perhaps the better outcome, because intuitions once gained can be very hard to change. Quinn cites *Do naïve theories ever go away?* by Dunbar et al. (in *Thinking with Data: 33rd Carnegie Symposium on Cognition*, ed. Lovett and Shah, 2007). The problem is that subsequent learning doesn't seem to actually *correct* an earlier misunderstanding, it only modifies the misunderstanding: "even when conceptual change appears to have taken place, students still have access to the old naïve theories and ... these theories appear to be actively inhibited rather than reorganized and absorbed into the new theory" (Dunbar et al., 2007, p. 202). A bad intuition is dangerous because it is so sticky and difficult to change.

In the USA, with its popular "reform math" curriculum, this emphasis on "understanding" has gone hand-in-hand with a drastic decline in the practical abilities demanded of students at every level. Expertise in disciplined reasoning, with careful step-by-step application of formal rules, is seen by educational authorities still working in the nineteenth century mould as at best a secondary concern. Since they consider it unimportant, the authorities think it can be dropped with no loss. But this is a mistake with dire consequences. Quinn comments that, as a university maths lecturer, he is responsible for developing the math skills needed by engineering students. However:

Our goals for student learning are set by what
it takes to deal effectively with the real world, and can't be
redefined. The problem is that, as compared with fixed real-world
goals, useful preparation of incoming students has been declining for
thirty years. The decline accelerated 10-15 years ago and the bottom
has almost dropped out in the last five years.
(See *Reform Mathematics Education ...*, p. 3, written in 2012.)

Since rigorous thinking is not emphasised by the educational authorities, it is scarcely surprising that students have internalised the principle that precision is not important. Quinn gives an example of how this typically plays out on the ground:

Recently a student came to try to get more partial credit. He had put
a plus instead of a comma in an expression, turning it from a vector
to a real number. "But there was an example that looked like this, and
anyway it is only one symbol and almost all the others are right." He
had never heard the words 'conceptual' and 'error' used together; it
made no sense to him and he would not accept it as a justification for
a bad grade.

(See *Reform Mathematics Education ...*, p. 4.)

What, you might ask, has this got to do with learning to program computers? Well, first of all, *these are the same people we are trying to teach how to program!* Quinn's example is eerily similar to the experiences I've had with some programming students, people who seem unable to comprehend that complete precision is necessary. There is such a gap of understanding that in many cases it seems impossible to cross. You can think that you have communicated, but then you look at what they are doing later and you see that they didn't get it at all. They are not working precisely to find their error. They are still just superstitiously shuffling the symbols in their program, hoping that they will hit the jackpot and their program will appear to work.

What can we do? Maybe, if Quinn is right, not much. Maybe for them the race is lost. Maybe we would have had to start a decade or more earlier, with very different maths teaching in school. And so, if we really think "everyone should learn to code", maybe that's where we should start today, before it's too late.

Secondly, in programming education we may by chance have thrown the baby out with the bathwater, just as the reform maths educators did with calculators in schools. In the old days, before calculators, children had to learn to do long multiplication, for example 431 × 27, using pencil and paper. This was a bit of a chore, now largely dropped in favour of tapping the problem into a calculator, which gets the answer more reliably. However, it turns out that the children were accidentally learning a lot more than how to multiply long numbers. They were learning to set out their working in an error-displaying form, essentially a twentieth century proof that their answer was correct. They had to be absolutely precise in their working, but if they made a mistake, they could check their working, find the mistake and correct it. Their teachers could periodically take in this work and confirm that their pupils were working accurately and in a proper error-displaying way. Not only that, but children were intuitively learning something about the mathematical structure of numbers by working with them, so that when they came to polynomials they found that working out (4*x*² + 3*x* + 1) × (2*x* + 7) was not much different to working out 431 × 27. (In fact it's a bit simpler, because there are fewer carries.) To someone with a calculator, they are entirely different.
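To make the parallel concrete, here is a minimal sketch (mine, not from the article): the same schoolbook algorithm multiplies polynomials given as coefficient lists and numbers given as digit lists; the only extra work for numbers is the carry pass.

```python
def poly_mul(a, b):
    # Multiply polynomials given as coefficient lists, lowest power
    # first: [1, 3, 4] represents 1 + 3x + 4x^2.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def long_mul(x, y, base=10):
    # Long multiplication is the same algorithm on digit lists
    # (least significant digit first), followed by a carry pass.
    digits = poly_mul(x, y)
    carry = 0
    for i in range(len(digits)):
        carry, digits[i] = divmod(digits[i] + carry, base)
    while carry:
        carry, d = divmod(carry, base)
        digits.append(d)
    return digits

# (4x^2 + 3x + 1) * (2x + 7) = 8x^3 + 34x^2 + 23x + 7 -- no carries needed.
assert poly_mul([1, 3, 4], [7, 2]) == [7, 23, 34, 8]
# 431 * 27 = 11637 -- the same products, then carries.
assert long_mul([1, 3, 4], [7, 2]) == [7, 3, 6, 1, 1]
```

The inner double loop is exactly the pencil-and-paper procedure; a child doing 431 × 27 by hand is running `poly_mul` plus carries without knowing it.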

I wonder if, in the way we try to teach programming nowadays, we may have fallen into some similar traps, by not realising what students *accidentally* learned in the past. For example — and here's a heretical thought — are languages with GOTO really as bad as we imagine for *novice* programmers? Dijkstra claimed that "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration". But don't forget this was just a claim: where's the evidence? Dijkstra himself obviously first learned to program in a language with only GOTOs, as I did, and I'm fairly happy that it did *us* no lasting damage. In fact I think it forced us to think about our code and to work with it in a particular detailed way, and this earlier practice may have served us well later, even when we programmed in structured languages.

The question of whether structured languages are better later, for expert coding, is not the point. Clearly, for doing sums, using a calculator is better later than using pencil and paper. But for the reasons outlined above, it's not better earlier. The question in either case should not be just "can students do the old problems more reliably?". Of course students with calculators get the answer more reliably than students without, but they don't learn those other early skills which support *future* learning. Could this, perhaps, be the case with us too? (Although this view is far from mainstream, I am relieved that I'm not the only person to suggest it. See *Where Dijkstra went wrong: the value of BASIC as a first programming language*.)

And finally, whether we are computer scientists promoting "computational thinking" or core mathematicians promoting twentieth century methods, it strikes me that we actually want the same thing. Perhaps we should make common cause. The easiest way to save core mathematics might be through computer science. After all, since the mathematicians did us the favour of founding our discipline, the least we could do in return would be to help them save theirs.

*(There's lots more interesting stuff on Frank Quinn's education webpage. Well worth a read.)*

I agree that our students must come to an understanding that what machines do is utterly formal, following rules without understanding or thought of what they might 'mean'. But I doubt that is how mathematicians understand mathematics. And it certainly isn't sufficient for programmers.

Programmers do something creative: they take a vague specification and turn it into a piece of machinery. In mathematics education this is equivalent to requiring students to make proofs rather than to understand and reproduce them. My mathematics-department colleagues were always surprised that we required the weakest programming students to do that sort of thing.

The alternative, sometimes proposed, is that we should teach people to write programs from formal specifications. That supposes that somebody else writes the specifications (creative), and it raises the question of whether such a process could be uncreative (unlikely).

So intuition and experience will be part of a programmer's toolbox from the very beginning. The education problem remains how to foster that intuition, develop that experience, and make the creative leap across the gulf to formal reasoning. Ho hum.

I had a fun experience last week trying to explain to a non-programmer how "functions" in a language like C or Python work. The intuition of "function" which one gets from math up through Calculus is incredibly distant from the intuition of "function" needed to program effectively. Saying "it's just like a function in math, except, the variables can be modified, there can be side-effects, etc." is really the setup for a broken mental model. It's quite possible that the proper mental model can be arrived at more quickly by starting with GOTO and then subroutines. A language like Haskell can throw a wrench in the conclusion, but it's kind of a special case.
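To illustrate the gap the commenter describes (my own toy example, not theirs): a mathematical function's value depends only on its argument, while a typical programming "function" may read and write hidden state.

```python
# A "mathematical" function: the result depends only on the argument,
# and calling it changes nothing else.
def square(x):
    return x * x

# A programming "function": its result depends on hidden state,
# and calling it changes that state (a side effect).
counter = 0
def next_id():
    global counter
    counter += 1
    return counter

assert square(3) == square(3)    # calls are interchangeable
assert next_id() != next_id()    # same call, different results each time
```

Starting from the calculus notion of a function, `next_id` is barely a function at all, which is exactly why "it's like a function in math, except..." builds a shaky mental model.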

WRT Daniels' comment, I don't think that Haskell is really a special case anymore. Other functional languages like OCaml, Scala, Clojure and F# also have functions that operate like mathematical functions and do transformations on immutable data. While these languages are not pure and can allow side effects, "variables" are immutable by default.

I've personally written applications in F# consisting of thousands of lines of code and dozens of immutable data structures. Often the only mutable data structure would be a hash based dictionary used for caching. When the language supports immutable lists, maps, sets and user-defined types, the lack of mutation isn't missed, not to mention not having to deal with null reference exceptions.

Having a programming model where mutation and side-effects are the exception rather than the rule would go a long way to improving current software development practices. After many years of C, C++, Delphi, VB.Net and C#, using F# has been a liberating experience and has shown that there really are better ways of developing software than what is now considered normal.
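The style carries over to other languages too. Here is a rough Python sketch of the immutable-by-default approach the commenter describes in F# (the `Point` type and `moved` function are mine, for illustration):

```python
from typing import NamedTuple

# An immutable record: assigning to p.x raises AttributeError.
class Point(NamedTuple):
    x: float
    y: float

def moved(p: Point, dx: float, dy: float) -> Point:
    # "Update" by building a new value; the original is untouched.
    return Point(p.x + dx, p.y + dy)

p = Point(1.0, 2.0)
q = moved(p, 3.0, 0.0)
assert p == Point(1.0, 2.0)   # p is unchanged
assert q == Point(4.0, 2.0)
```

Because nothing mutates, any caller holding `p` can rely on it forever, which is the property that makes such code easier to reason about.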

The problem is that teaching is a general profession. Teachers currently could not teach math properly because they do not themselves know it in the way described.

I think there is a general confusion between the mathematical foundational crisis and the gradual rise of more rigorous methods. This is understandable because the schools of thought during this period had names like "formalism" and "intuitionism". Here's how I understand it:

In the late 19th century, several new ideas and paradoxes made mathematicians think seriously about the principles mathematics is based upon. By the early 20th century, formal logic with set theory emerged as a potential candidate for unifying mathematics with a common foundation. There was debate on whether this could ever completely succeed. There were several schools of thought on this issue, with formalism and intuitionism being the two that many accounts focus on. Formalism argued that mathematics should be viewed as a "meaningless" game of symbol manipulation, with no independent meaning at all. Formalists thought this could be achieved by axiomatizing formal logic as a series of such manipulations. On the other hand, intuitionism posited that mathematics is fundamentally a product of human intuition. Many intuitionists criticized formal logic and set theory because

(1) these foundational objects by definition weren't constructed from familiar objects like the integers, which is how all maths before was done

(2) objects which are constructed from "intuitive" objects like the integers have the advantage that their existence (and more importantly their consistency!) is only dependent on the existence and consistency of basic intuitive objects, as opposed to the existence and consistency of more troublesome objects like sets.

At around the same time there was a concurrent process of making mathematics more rigorous by making definitions and proofs more precise. This was associated with formalism and formalists simply because any foundational work encourages people to check these things. Mathematical rigor is what gives "everyday" proofs the error-displaying behavior that Quinn mentions. In opposition to the formalists, there was a backlash from some intuitionists against introducing (what they considered to be) excessive rigorous justification to the detriment of mathematical intuition.

But the notion that rigor is fundamental to mathematics was never seriously up for debate. I doubt that any intuitionist would have said that such rigor is unnecessary. The easiest way to see this is to look at *Analysis Situs*, Poincaré's tract on the theory of manifolds. Poincaré was an ardent intuitionist, but the style of his text is generally similar to that of modern tomes: it defines the terms being used and then proves theorems about them.

It's true that some of the definitions and proofs weren't completely rigorous, and that as a result some of the theorems he proved were incorrect. But that was true of all mathematics at the time! By the same token, many classical texts on algebraic invariant theory (mostly written by formalists) had incorrect results because more effective modern geometric tools hadn't been invented yet.

Any concession that proof or justification is necessary is a form of mathematical rigor. The alternative to using rigor in mathematics is...nothing. Mathematics can be defined with maddening circularity as the study of mathematically rigorous notions. The philosophy about where these notions come from (formalism vs. intuitionism) has nothing to do with it.

It's troubling to hear influential people such as Bret Victor argue that we should do away with abstraction in maths (read: rigor) and instead use intuition. At best, rigor and intuition are complementary notions. Modern mathematicians are no more or less intuitive than their 19th century counterparts; rather, they simply have more rigorous intuition, or intuition about more modern and more rigorous ideas. We humans are so terrible at working with formal systems that we have to use intuition as a heuristic to shortcut most problems. But the actual maths is the rigorous bit we use to justify ourselves to everybody else.