I think I'm hungry
My first thought was: what the hell is a cheese algorithm, and what would it have to do with mathematics?
Chess grandmaster Garry Kasparov has played one of the first computer chess programs ever created, written over 60 years ago by Alan Turing. As part of the University of Manchester's Alan Turing Centenary Conference, Kasparov took on Turochamp, an algorithm that Turing wrote by hand, but never had …
Has anyone actually ever used pseudocode, outside of 'intro to programming' type courses? It is, after all, a colossal waste of time. Specifying an algorithm to a useful degree without handwaving is practically the same amount of work required to actually code up that same algorithm in an appropriate language, modulo a few bits of punctuation or whitespace.
I'd venture that compared to some appropriately expressive programming languages, waffly English specifications can easily be less clear to read, more awkward to define and a lot more verbose.
"Flow charts are pseudo-code"
"Specifications are pseudo-code"
If they are isomorphic to an actual algorithm, then they may as well have been written in a programming language in the first place. If they are not, then they are handwavey wishful thinking; more pseudo than code.
--
That's kind of the point of a spec though, isn't it? It's a statement of intent, a conceptual representation of something that will be made real in code later on.
If you do commercial coding, you get a spec of what is supposed to be implemented, you write some tests (of whatever variety) to prove you've delivered it, and then the code that is the specification made concrete.
Being able to represent an algorithm in both a logical spec and in code is a good idea, forcing you to consider the problem at hand rather than getting caught up in the syntax of your chosen language.
They are an abstraction, and totally necessary for any problem beyond the trivial.
I draw your attention to my original statement:
"Specifying an algorithm to a useful degree without handwaving is practically the same amount of work required to actually code up that same algorithm in an appropriate language, modulo a few bits of punctuation or whitespace"
Anyone care to disagree?
So, on to your point,
"Being able to represent an algorithm in both a logical spec and in code is a good idea, forcing you to consider the problem at hand, rather than getting caught up in syntax of your chosen language."
So you don't seem to have contradicted me, but you are in favour of writing the same algorithm twice, once in a form where it isn't particularly useful. Pseudo indeed!
""Specifying an algorithm to a useful degree without handwaving is practically the same amount of work required to actually code up that same algorithm in an appropriate language, modulo a few bits of punctuation or whitespace""
You have it the wrong way around, as do many moronic PHBs. Proper planning makes coding trivial. It's trivial turning proper pseudo-code into real working code because the hard part - working out what to do when and how - has already been done. So yes, if you've described your algorithm properly a monkey could code it, but that's the whole point of producing the description in the first place.
Pseudocode is for the benefit of pseudo-programmers, "managers" as they're otherwise known. Any attendant benefit to the person actually bashing the bits is pure coincidence. However, this generalised hand-wavey overview is useful for letting the poor overworked, underfed, under-housed and generally poverty-stricken executive board who signed off release of capital for development know that they're more-or-less getting what they've forked out for without them needing to understand the finer elements of actual code.
"consider the problem at hand, rather than getting caught up in syntax of your chosen language"
If the syntax of your chosen language prevents you from presenting the algorithm in a readable fashion, you need a new language.
This does happen. For example, if you are maintaining some legacy crapware written in INTERCAL then you might very well put together some pseudocode in a different language. If that different language happens to be a properly specified one for which a compiler exists then you can write and execute tests against your design. You might like that.
Generally, though, whilst all languages have peculiarities of syntax, decent ones do not impede understanding to the extent that someone who is paid to program in that language would not be able to follow the algorithm.
There's a place for a pseudocode that is just a string of hand-waving comments about the general flow of the algorithm. That place is the written documentation that accompanies the source code. As soon as you get down to specifying details, you might as well write something that the compiler can parse. As a bonus, if you write your "pseudocode" in the target language, you might even end up with meaningful variable names, short helper functions, and a general bias towards representing application domain concepts with linguistic constructs.
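By way of illustration, here is what "pseudocode in the target language" might look like; a minimal, made-up Python sketch (the invoice domain and every name in it are invented for the example, not taken from anyone's post):

from dataclasses import dataclass
from datetime import date

@dataclass
class Invoice:                      # hypothetical domain record
    due_date: date
    amount: float

def is_overdue(invoice, today):
    """An invoice is overdue once its due date has passed."""
    return invoice.due_date < today

def with_late_fee(invoice, rate=0.05):
    """Late payers owe the amount plus a flat-percentage fee."""
    return invoice.amount * (1 + rate)

def total_owed(invoices, today):
    """Sum what is owed, applying the late fee to anything overdue."""
    return sum(
        with_late_fee(inv) if is_overdue(inv, today) else inv.amount
        for inv in invoices
    )

# The "pseudocode" reads like the spec, yet the interpreter can already
# check it against an example.
invoices = [Invoice(date(2012, 1, 1), 100.0), Invoice(date(2099, 1, 1), 100.0)]
assert abs(total_owed(invoices, date(2012, 6, 1)) - 205.0) < 1e-9

The helper names carry the "why" of the design, and there is nothing here a design document would have said that the running code doesn't.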
I use it, very often, when *designing* algorithms. I find it invaluable for stripping away the language specifics and capturing the functionality. I find it clearer to "see" an algorithm in pseudo code, and it's a doddle to transfer a pseudo code description of an algorithm to most programming languages. I also believe that the code produced in this manner can often be easier to read, especially if you transfer the pseudo code naming into the final code, as the naming will usually illustrate the "why" (and not the how). I have also on several occasions been very grateful for pseudo code descriptions of some classic algos, so I can implement my own.
How useful would Turing's chess-playing algo be if he'd left it to the world as Colossus paper tape instructions?
...are not the same as source code.
As regards your algorithm design, "I find it invaluable for stripping away the language specifics and capturing the functionality", I shall once again go back to my original point, specifically, "actually code up that same algorithm in an appropriate language".
An algorithm that is principally mathematical in nature translates exceedingly well to ML; if I write it in F# it can be dropped into a .net application with ease; Scala isn't so functional but will suffice in java-land. Less mathematical algorithms I tend to prototype in something like Lua or Ruby. The syntax is clean and straightforward, and the results of the prototyping are immediately visible. In the case of the latter, JRuby classes can easily work with JVM based systems, and IronRuby targets the CLR. Lua certainly has a JVM-friendly version (not sure about CLR these days) but also interoperates exceedingly well with C and C++. Regardless of platform, I've created working code which is every bit as useful as a handwavey design document, only it is already working.
Optimisation, if needed, may follow and may include porting to a less abstract language, but with the advantage of having a genuinely functional implementation to work from.
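For what it's worth, the sort of prototype being described might be no more than this; a hedged Python sketch of Newton's method for square roots, standing in for an appropriately expressive language of your choosing:

def newton_sqrt(x, tolerance=1e-12):
    """Newton's method for square roots: repeatedly average the guess with x/guess."""
    if x < 0:
        raise ValueError("square root of a negative number")
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance * max(x, 1.0):
        guess = (guess + x / guess) / 2   # one Newton step
    return guess

# The working prototype is the design document: the names mirror the maths,
# and porting to a less abstract language can wait until profiling demands it.
assert abs(newton_sqrt(2.0) - 2.0 ** 0.5) < 1e-9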
"Regardless of platform, I've created working code which is every bit as useful as a handwavey design document, only it is already working."
I find that the added constraint that the code must also be well formed within <language of choice> is unhelpful, to me, when designing.
Also, this isn't a "handwavey design document" - it's an exploration of the problem domain, without the need to compile/interpret anything, or run anything. To each their own however - I would have no issue with someone writing 'pseudo code' which was also working Lua etc.
Back in the day, we had COBOL, and sometimes CICS, and frequently a database -- or not. We also had teams. The best results came from communicating to the end users and the team members in some sort of common language. We used flow charts, pseudocode, and sometimes a sort of graphic prototype for screen flow. We also used something we called structured programming, which is more natural to the modern programming languages, starting with PL/1.
The prototype, if any, was contained in the structured program, in which control passed to PERFORMs, also known as subroutines. The comments at the beginning of each set of paragraphs in the PERFORM would be the statement of what it was supposed to do. While writing this non-code, we discovered errors in thinking and design.
We could do this as a group, because there were also CALLed programs, with parameters. The design team could say up front what should be in the program, and define its inputs and outputs. Common data definitions were put into the same sort of INCLUDE libraries used today.
There would have been no point in writing all this in the target language, COBOL (and sometimes assembly language). It isn't self-documenting, despite claims. Using and maintaining the comments, taken from pseudocode, was a best practice that lived on in some form through C and possibly Java. Anything larger than a single algorithm (e.g., an operating system, a compiler), requiring teamwork over a period of time, needs this thoughtful design phase. Even if, like Linus Torvalds, you ended up writing it all yourself.
"Has anyone actually ever used pseudocode, outside of 'intro to programming' type courses?"
Yes, all the time. It's often simpler to write a few lines of pseudo-code to explain to our offshore build team what we're expecting them to do than draw a complex diagram or wax lyrical for several pages. Plus pseudo-code is easy to include in a text-only email and not having all the weird punctuation means it's easy for business analysts to understand and validate it.
As to whether it would be quicker to write the code directly, sometimes yes, sometimes no. In particular, when writing pseudo code I can gloss over details like import statements, class/function declarations, etc.
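To illustrate the contrast with an entirely made-up example: the pseudo-code as it might go in the email, followed by the same logic in Python once the declarations it glosses over are filled in.

# The pseudo-code version, as it might appear in the email:
#
#   for each order in yesterday's orders:
#       skip it if it was cancelled
#       add its total to the daily figure
#   return the daily figure
#
# The same thing once the boilerplate is filled in:

from dataclasses import dataclass

@dataclass
class Order:              # hypothetical record type, purely illustrative
    total: float
    cancelled: bool

def daily_figure(orders):
    """Sum the totals of all non-cancelled orders."""
    return sum(o.total for o in orders if not o.cancelled)

print(daily_figure([Order(10.0, False), Order(99.0, True), Order(5.0, False)]))  # 15.0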
What's all this stuff here? I didn't know there was planning going on. I thought you programmers just went into a room full of pizza and coffee, and just started bashing away at the keyboard. Only coming out once either the pizza and coffee are exhausted, or your beard has grown so long it's obscuring the keyboard... At which point, the program is declared finished, and released.
Right, except we're not allowed to write much code these days. Instead, we specify it, hand it off to an offshore team, write tests and then verify their work.
Since their work often resembles gibberish at that point, inserting pseudo code of how I expect an algorithm to work gives them an idea of what they should be writing (and something for me to beat them with if they deviate from that algorithm).
"Yes, all the time. It's often simpler to write a few lines of pseudo-code to explain to our offshore build team"
This is always interesting. At what point is your spec ever detailed enough? I've encountered offshore dev teams whose ability to understand an abstract spec is minimal, and who generally end up being little more than a human macro expansion system, turning a practically fully functional algorithm (e.g. isomorphic to actual code) into something that compiles.
"pseudo-code... not having all the weird punctuation... [is] easy for business analysts to understand and validate"
Have you considered using COBOL?
Ok, so by way of example Mr Ru.
I frequently do the following:
- Draw a few boxes with another coder or two to decide basic system components.
- Take a component and decide its responsibilities. These are put in some doc (even if it's just a comment at the top, or a note next to its box).
- Figure out the code paths by writing very high level 'do this', 'now do this', 'followed by this' steps. It never goes beyond half a dozen.
- Then, in turn, expand each item into tests and code.
- Go to the pub having finished.
The first two items give us a simple spec we can check we aren't going renegade against, the third is a design/ development device that is high level pseudo code.
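As a rough sketch of that last expansion step (Python, with names invented for the example): each 'do this' line becomes a small function, and each function gets a test before its real body grows.

import unittest

def load_records(path):
    """Step 1: read non-empty lines from the input file."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def validate(records):
    """Step 2: keep only records that look like 'key=value'."""
    return [r for r in records if "=" in r]

def summarise(records):
    """Step 3: count valid records per key."""
    counts = {}
    for r in records:
        key = r.split("=", 1)[0]
        counts[key] = counts.get(key, 0) + 1
    return counts

class TestSummarise(unittest.TestCase):
    def test_counts_per_key(self):
        self.assertEqual(summarise(["a=1", "a=2", "b=3"]), {"a": 2, "b": 1})

if __name__ == "__main__":
    unittest.main()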
If you are talking about writing the gif compression algorithm in pseudo code, then sure, you might have a point (although I would probably still disagree). If you have large problems, they need to be designed properly or you'll screw it up. If you try to do all the design in your head, you're doing it wrong.
On the vagaries of languages getting in the way, I'd say it depends on your development process and the time allowed to create the correct level of abstraction. I would certainly not like to try to do my system design in code first time. Paper is cheap, writing working code is not.
I think that many programmers will agree that they actually "think" in pseudocode all the time. Sometimes we write it down, sometimes we don't.
Pseudocode is a simplified process that helps us validate our thoughts more than anything else.
Maybe it should have been called PseudoThought.
Wow, that got a response...
I was rather flippantly pointing out that any "students" studying in fields that require programming or engineering would likely have been taught about designing their code, using techniques including flow diagrams and pseudocode.
Alan Turing, however, shows remarkable discipline and a highly pragmatic approach. This story should be an example to students of how software should be produced, instead of the quite common approach of mashing bits of code together and creating software without a clear, focussed design (I'm quite guilty of this myself at times).
And electronic computation more modern still.
"Surely You're Joking Mr Feynman?" describes in some detail how the young Feynman managed the computer department at Los Alamos. He appears to have invented pipelining, out-of-order processing and error correction using a team of young ladies armed with mechanical calculators as his massively parallel CPU.
You'll also find that classic techniques like Simpson's Rule and Gaussian Elimination were originally implemented directly on wetware. And lastly, Babbage's original motivation for trying to build a mechanical computer was to do *exactly* what people were doing already, but without the mistakes. The idea that a mechanical computer might be able to do *new* things, came later.
> According to what I've read, Ada Lovelace did exactly the same, only she did it 106 years earlier than Turing wrote his chess program.
Well, here's the daddy <http://en.wikipedia.org/wiki/Al-Khw%C4%81rizm%C4%AB>. At the very least he gave his name to it. Lived ~800 AD.
Icon is a no-op.
> Is there any scan of the algorithm? It'd be interesting to see.
It's in "The Essential Turing" by Jack Copeland, pages 567 to 575, which says:
"Turing's essay 'Chess' was originally published in the 1953 collection Faster Than Thought, where it formed a section of a chapter entitled 'Digital Computers Applied To Games'. (Edited by Vivian Bowden.)
The typescript is among the Turing Papers in the Modern Archive Centre, King's College, Cambridge (catalogue reference B 7)."
A lot of his stuff is online at alanturing.net, but apparently not this paper.
Take a look at The Turing Archive. I've not carefully checked the linked pages to see if they actually have the algorithm, but they certainly contain related notes.
Turing didn't invent AI, nor is he the Father of it. He was a Genius and one of the founders of Modern Computing.
The "Turing test" wasn't a rigorous proof, just an idea that has been shown to be flawed. Nothing really to do with AI. Really it's just a conversation piece and not one of his real bits of work, mathematics or research.
Turing obviously proved that a Chess program didn't need AI either (as was thought at one stage).
So far NO-ONE has invented AI, partly because we can't even agree exactly what "real" Intelligence is or how to measure it (IQ tests don't).
Give me a spec for Intelligence rather than a vague goal (Playing Go, Chess or simulating the Archers, or convincing conversation) and I'll design the algorithms and write a program. If the computer "isn't powerful enough" or "hasn't enough database" (two oft-used excuses) then it will be a slow, poorly educated, socially inept, un-British artificial Intelligence. It will be intelligent and artificial.
Simple problem of descriptors. What most laypeople think AI refers to is probably better described as "Artificial Sentience." Since we're not sure what sentience is in the first place, that complicates things.
Of course, what most people actually think when they see AI is "god that was an awful movie."
"So far NO-ONE has invented AI"
I thought Obama bin Liner invented Al Qaeda
We move on. Here is a comparison between a bloke who croaked some time ago* and a really trained, honed mind at a game that was invented thousands of years previously. Kasparov would surely have had more training at this than Turing, plus he didn't have to bother about solving such trivia as Enigma.
*Wasn't Turing's fate to meet an Apple?
Oh, OK. Gorrit.
From the Register article: "He wrote algorithms without having a computer – many young scientists would never believe that was possible. It was an outstanding accomplishment."
Was that one a 'canned statement' too? I'd love to know who drafted it.
Algorithms have been around for a long time. Euclid, around 300 BC, wrote a rather good one: http://en.wikipedia.org/wiki/Euclid%27s_algorithm The very term came from the name of al-Khwārizmī, the Persian mathematician born circa 800 AD.
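For the curious, Euclid's algorithm is short enough to state as running code; a minimal Python sketch, using the usual textbook example of 1071 and 462:

def gcd(a, b):
    """Euclid's algorithm: gcd(a, b) is unchanged by replacing (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

assert gcd(1071, 462) == 21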
The Royal Navy had trigonometric tables computed by hand (using similar algorithms to those now used in computers) for navigational purposes; they were very interested in automating that work through the Difference Engine of Charles Babbage (1791-1871), who started work on it in 1822. The Fast Fourier Transform (FFT) algorithm was actually first used by Gauss, in 1805, to reduce his manual effort in his calculations concerning astronomy.
The minimax algorithm from game theory was (so I have quickly checked) first proved by John von Neumann in 1928: http://en.wikipedia.org/wiki/Minimax#cite_note-1 Doubtless Turing would have known of this algorithm.
Of course, Turing would have been better off using the minimax algorithm (with its arbitrary-depth look-ahead). There is nothing wonderful or disappointing about Turing drafting a program with 2-step look-ahead. What is disappointing is that someone who should know better thought to claim it was wonderfully original.
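For anyone who hasn't met it, here is a rough Python sketch of depth-limited minimax; the callbacks and the toy game are invented for illustration, and this is of course not Turing's own code. With depth=2 it is the kind of two-ply look-ahead discussed above.

def minimax(state, depth, maximising, evaluate, moves, apply_move):
    """Depth-limited minimax over a generic game tree (callbacks supplied by the game)."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximising,
                      evaluate, moves, apply_move) for m in legal]
    return max(scores) if maximising else min(scores)

# Toy game: players alternately add 1 or 2 to a counter; the maximiser
# wants the counter high after two plies, the minimiser wants it low.
value = minimax(0, 2, True,
                evaluate=lambda s: s,
                moves=lambda s: [1, 2],
                apply_move=lambda s, m: s + m)
assert value == 3   # maximiser adds 2, minimiser then adds 1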
Turing was a great scientist/mathematician. If this sort of stuff continues, the 100th anniversary of his birth will not do him justice.
[Aside: I have interviewed for jobs (soon-to-be) computer science graduates who could not even explain a single viable argument passing mechanism of a non-recursive programming language. And that was in the late 1970s and early 1980s; I don't expect things to be better now. And certainly not if their teachers allow them to believe computers predated algorithms, in practical use.]
Best regards