Bocce?
You mean like the vaporators used in moisture farming?
Microsoft Research has introduced a new open source programming language called Bosque that aspires to be simple and easy to understand by embracing algebraic operations and shunning techniques that create complexity. Bosque was inspired by the syntax and types of TypeScript and the semantics of ML and Node/JavaScript. It's …
> And doesn't MS already have a functional language with F?
F#.
And it is a hybrid functional language (so it does support mutable state) and also built on ML. This sounds much more hardcore...
Of course, saying "no mutable state" misses the point that the real world is mutable – the cupcake I just ate has changed and I can't eat it again. But much data processing that works by transforming data into a new state only actually needs to mutate the (persisted) data at the final step, and is thus far easier to scale out.
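A minimal TypeScript sketch of that shape, since it's easy to show (loadOrders/saveOrders and the Order type are invented for the example):

interface Order { readonly id: number; readonly total: number }

// Hypothetical stand-ins for the persistence layer.
const loadOrders = (): readonly Order[] =>
  [{ id: 1, total: 90 }, { id: 2, total: 150 }];
let store: readonly Order[] = [];
const saveOrders = (orders: readonly Order[]) => { store = orders; };

// Pure transforms: each step returns a new value, nothing is mutated...
const withTax = (orders: readonly Order[]) =>
  orders.map(o => ({ ...o, total: o.total * 1.2 }));
const largeOnly = (orders: readonly Order[]) =>
  orders.filter(o => o.total > 100);

// ...and only the final step mutates anything (the persisted store),
// so everything above it can be fanned out across machines freely.
saveOrders(largeOnly(withTax(loadOrders())));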
"No mutable state" doesn't necessarily miss that point. In the Haskell paradigm, eating your cupcake simply created a new universe containing no cupcake. Don't worry about the old universe, the garbage collector takes care of that.
(I'm not even joking, the IO monad's definition involves a value of type RealWorld.)
It seems to have some similarities in concept to APL.
The big drawback with APL is you need a special keyboard (or stickers) and golfball (for your IBM 2741). Now that we're not constrained by character sets and have touch screens, I'd have thought it could make a comeback - it certainly looks more elegant than the C-derived syntax of Bosque.
Any programming language I used from a Microsoft interface had problems indicating errors - always saying it was a 'For Next' error instead of something else - perhaps this one will have better error checking.
But then, if AI were writing code, it might be reasonable to have no loops present, as that could facilitate faster writing, reading and analysis/error-checking.
So no object instances then. Sorry, but trying to simplify validation, a task that can be automated, forgets that we've got machines to do that. The difficult bit of coding needs to allow for complexity as the world we're working in is complex. There are of course so many things that can go wrong, which is why structured programming helps turn a soup of code into modules with, err, structure to contain our many egregious errors.
I'm just off to rebuild Notre Dame, using only Lego 'cos that system means I can't heap bricks up randomly and I'm sure having to work around the limitations of Lego will be a pain worth having. Or not.
"There are of course so many things that can go wrong, which is why structured programming helps turn a soup of code into modules with, err, structure to contain our many egregious errors."
But what happens when the structure itself introduces things like gestfaults, where each module claims to be clean but once they're strung together everything goes wrong for mysterious reasons?
What the hell is a gestfault?
Anyway, if things go wrong when modules are plugged together then that's a fault of the design, not the language. Functional languages don't magically make program design any simpler; if anything the opposite is true for anything other than small mathematical problems. Functional doesn't get rid of loops, it just turns them into recursion, which simply confuses the design for systems that are not by nature recursive, and if a language doesn't naturally map onto the problem space it's trying to solve then it's not a good solution to the problem.
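To put that in concrete terms, here is the mechanical translation in TypeScript (summing a list is just a stand-in example): the iteration doesn't go away, it merely changes its spelling.

// The loop everyone can read...
function sumLoop(xs: number[]): number {
  let total = 0;
  for (const x of xs) total += x;
  return total;
}

// ...and the same iteration re-spelt as tail recursion: nothing is
// eliminated, the loop is just hidden in the call structure.
function sumRec(xs: number[], acc: number = 0): number {
  return xs.length === 0 ? acc : sumRec(xs.slice(1), acc + xs[0]);
}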
"What the hell is a gestfault?"
I am assuming that it is a collision between Gestalt and Fault !!!???
therefore
"A fault or 'set' of faults that are greater than the individual smaller faults that have combined when all the modules of code etc are 'working' together."
That's exactly it and one reason unit testing can never find ALL the faults. The individual units themselves may test all clean, but once you put them all together the interactivity between them produces faults that would never be seen in any individual unit. A fault that's greater than the sum of its units; thus a gestalt fault or "gestfault".
"Presently, Bosque relies on an interpreter written in TypeScript, run on Node.js"
Doesn't that pretty much say it all? (I hear a loud 'Eeewwww')
How long before Micro-shaft starts CRAMMING it at us like:
a) Windows 10 and "the Store"
b) ".Net"
c) C-pound
d) Silverlight
e) 'the Metro' and UWP (in general)
f) A DevStudio that requires EXCESSIVE mousie-clickie in a "property sheet" instead of the old-style tabbed dialog boxen (with easily discovered hot keys) that you used to see in the dialog editor and class wizard of earlier VC++ IDEs...
etc.
And how is this better than Python for doing that sort of thing?
And what's wrong with a loop?
and don't EVEN get me started about un-typed data storage and garbage collection... which from what I can tell, seem to be unspoken 'features'.
When I was a student I was in an industrial placement and tasked with writing a programme to control some kit on an IEEE bus and print the output on the brand new and magical HP Plotter (more watchable than Game of Thrones). The language was the lovely HP Basic (no sarcasm there) and I set off and wrote the programme. It worked OK - then the rest of the lab spent their lunch hours ripping it to shreds for being "unstructured". I took this pretty seriously, so spent some time getting rid of all the Gotos and "bad" loops, etc. until it met their view of what a properly structured programme should look like. It still worked in the end, but I don't think it was much better. My analogy for the GoTo is that it's a bit like Jaunting: you can spend ages driving round the one-way system, but if you could Jaunt like the Tomorrow People then surely you would.
The problem with "goto" is not its effectiveness - hell that is exactly how flow control happens in the generated assembler/object code - but in another human reading it and upon seeing a jump destination being able to work out how many ways one gets there.
For some very small functions with a local jump (please, PLEASE, don't bring up setjump/longjump here!) it might be fine as a simple way, for example, to break out of nested loops. But on a larger scale the program's intentions become unintelligible.
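For what it's worth, even the goto-free languages kept a fig leaf for exactly that nested-loop escape - e.g. the labelled break in TypeScript/JavaScript, sketched here purely for illustration:

// A labelled break: a domesticated goto whose only power
// is to exit an enclosing block.
const grid = [[1, 2], [3, -1], [5, 6]];
outer:
for (const row of grid) {
  for (const cell of row) {
    if (cell < 0) {
      console.log("found a negative, bailing out");
      break outer;   // exits both loops in one jump
    }
  }
}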
Mind you, there are other constructs that are also a bit dodgy: in C you can return out of a function at any point, and the logic there is not always clear. But $DEITY forbid you find yourself working on old FORTRAN, where you can have multiple entry points to a subroutine!
It was also possible that their thought process went like "If they think this is good style, they might use a bunch of gotos in the next project, which could be much larger". Depending on whether you were going to be there for a while, they either thought "They will really hate it once this catches up to them and they have to recode something big in the future" or the more pragmatic "We will really hate it if we see this on something bigger". They could then decide that you could get experience writing in a clearer style with this relatively small project to get you used to a better way.
I did not remember multiple entries into a subroutine, but I do remember there was a feature allowing multiple return points *outside* the subroutine:
- Define a parameter as a label using ad hoc syntax
- Use a variant of RETURN that combines the return with a goto to the Nth label in the parameter list
I still remember writing a Hadamard transform subroutine in the late-'70s/early-'80s that had two entry points. The code that performed the transform was the same for the forward and inverse transforms. Only the scale factor you applied at the end was different--all the extra entry point did was calculate the correct scale factor. Seemed better than dealing with two pieces of code that differed in only a single line.
@Paul - I agree with you that reviewing/editing/fixing "unstructured" code is a pain, especially in the old days when there were no tools, just the tiny screen of a 9826. However, the main benefit that came out of the re-work I was heckled into doing was that I was able to pull out a general purpose subroutine for driving the plotter which could be used by others in the lab - whether directly from test gear or by putting user-generated data into a defined format. It became an ongoing lunchtime project with user change requests regularly submitted for things like auto-scaling, colour options, annotation, etc. and I think that the plotter sub became larger than the control software which spawned it. I took it back to poly. when the placement finished and it was still in use a couple of years later when I left.
@Headley_Grange
HP Calculators and plotters...memories. The desktop calculators in the early 1970s were programmable. The programs occupied register space starting with the high registers and coming down. The program could reference those registers, which led to the interesting, and immediately grasped, option of modifying itself. Take the square root of R15 and see what happens next. Now hook a plotter up.... What we learnt from that was that those early plotters were tough. :)
@Andrew: I think I've posted this before - but I grew up with HP test gear and computers. They were indestructible and the best. My HP11C calculator was probably the first "very expensive and hard to justify" thing I bought and it still works. It doesn't get much use now, although the iPhone app which emulates it does.
When I bought my first laptop I saw an HP model and just bought it cos it was obviously going to be the best thing in the whole shop. What a fool. What a disappointment. What a lesson. Whatever happened to HP?
Those plotters though.
Still remember Carly giving herself (the board giving her) a 16m bonus for doing such a good job. While just months before she was asking every employee to waive a day's wages.
That money went directly from the employees' mouths to her bank account.
Morale went through the floor that day.
@Paul Crawford
FORTRAN was quite happy with multiple RETURN statements as well as multiple ENTRY statements. In a memory-constrained world the RETURN would save you one or two bytes over a GOTO to a single RETURN statement. When running out of code space meant resorting to manually loaded overlays, this was a serious consideration.
Re: memory constraints.
I was working on a prototype system back in the PDP-11 days that we'd written using FORTRAN-IV. In order to shoehorn the code into RAM we had to use numerous overlays AND the threaded code compiler option. If memory serves, we were squeezing just shy of 300K of code into the roughly 48K of RAM that we had to work with. (Error messages consisted of numeric codes--actual text took up too much valuable RAM.) We wasted a whole weekend re-working the project to compile using the F77 compiler that the program manager had heard generated faster code. We weren't experiencing any problems with "slowness" but figured "Hey... faster is better right?" By Sunday afternoon we scrapped the re-write. The F77 compiler didn't have the threaded code option--gaining all its vaunted speedups via inline code--and we couldn't get the code to fit into memory. A complete re-write to make it fit would have seriously blown the delivery date.
"[...] In order to shoehorn the code into RAM we had to use numerous overlays [...]"
A FORTRAN program had been running production jobs for quite a while. Then one day - after a recompile - it started crashing.
The programmer had declared a large sparse array that actually spanned the code area of the program. The new compilation had linked the library routines in a different order. Previously he had been lucky - in that the used array elements only overwrote code that had been used in earlier processing. In the new version it was overwriting code that was needed later.
His overriding problem was that his program plus data set was too big for the mainframe's memory size.
I liked the entry statement in FORTRAN II - you could call one entry point to initialize and another to compute. As far as GOTO is concerned Dijkstra's diatribe was written against certain FORTRAN II programs that jumped all over the place so he was really speculating how far one could go without a GOTO. Certain flow diagrams require either a GOTO or a subroutine call.
When I taught C++ I commented that jumping out of multiple loops is much cleaner than using flag variables etc. I also told the class that I was grandfathered having been coding since 1960 so I could use GOTO and so could they if the use resulted in more structured code.
"The problem with "goto" is not its effectiveness - hell that is exactly how flow control happens in the generated assembler/object code - "
Quite - when "Go To Statement Considered Harmful" was published I just couldn't understand what it was going on about - I was an Assembler programmer at the time.
It's worse than that. Not only can you have multiple entry points, you can pass the possible exit points to the function
That is to say you can do
CALL RANK ( N, *8, *9 )
WRITE (*,*) 'OK - Normal Return'
STOP
8 WRITE (*,*) 'Minor - 1st alternate return'
STOP
9 WRITE (*,*) 'Major - 2nd alternate return'
END
SUBROUTINE RANK (N, *, *)
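C     A plain RETURN resumes after the CALL; RETURN 1 and RETURN 2
C     jump to the first and second * arguments (labels 8 and 9 above).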
IF ( N .EQ. 0 ) RETURN
IF ( N .EQ. 1 ) RETURN 1
RETURN 2
END
I'm pretty sure this is what the 'no multiple returns from a function' rule refers to. IMHO it makes absolutely no sense except for FORTRAN.
"But on a larger scale the program's intentions become unintelligible."
A customer was upgrading to a new O/S on modern hardware - and wanted their existing programs to be carried forward too.
One assembler program handling papertape had been written in several modules - each coded to an ancient compulsory format for the code labels.
So the first level modules were named "AA", "BB", etc. Each routine took that as their prefix for labels by adding other letters and finally an ordinal "1", "2", etc.
As comments were also mandated to be very sparse - it was impossible to guess what a conditional branch to "AABCA12" actually did.
Fortunately the "glue" changes to the HW/SW environment meant that the program worked without recompiling or needing changes.
That's fine for a student project for internal use where the only success criterion is "it works". Not so good if you are producing complex commercial software where other developers need to understand your code and perhaps enhance it or fix issues long after you have left.
Heh.
I once went on a Cobol Advanced Topics course given by Sperry Unisysvac.
One of the very nifty things we were shown was Report Writer, a module that made Cobol into a declarative language by doing all the hard stuff in the data division. The procedure division was then reduced to:
Initialization Paragraph
(about three lines of code)
Report Generation Paragraph
Loop, Generating report, at end go to termination paragraph.
Termination Paragraph
(about another three lines).
Everyone worked the lab that morning, finished in about 3/4 of an hour and went drinking Ruddles County Bitter in the Midland Hotel.
Everyone except the Bass Charrington crew, who were a Structured Programming shop and strictly forbidden to use "go to" under any circumstances.
When we eventually rolled back into theatre, with all the bonhomie that a three hour lunch on XXX Brainsmack can bring on, they were still at it, attempting to make the simple monolithic logic model of a Report Writer procedure division conform to the "Perform Until" demands of their enterprise. They never did finish the simple lab, giving a clear and unambiguous win to the structured programming crowd.
I often smile when I think back on the irony of the Bass Charrington team not getting a drink on the last Friday lunchtime of a course, and I've often wondered how they worked around such beasts as "READ <file> AT END GO TO" and "FETCH <DMS Record> ON ERROR GO TO" in their code. The results must have been a nightmare to debug for a new guy in the shop.
No, they can't. ANSI COBOL '74 and ADMLP cards in play.
You either used AT END GO TO or risked a read beyond end of file error & crash.
You could decline to register the database error contingency with the program and cope with a programmatic error-handling routine in the case of the ADMLP code, thereby putting yourself in the frame for all sorts of pain during your voyage of discovery of just how many different ways your DMS FETCH (FIND+GET if you are using non-Sperry computers) can say "your record ain't here, mate".
Had great fun during the mid 80's twitting DB programmers for the inventive ways they had missed a few possible endings for their Life Story In Code and the lame excuses they tried to make this a DBA problem.
My absolute favorite was "Go fetch my record; on error just keep going on the assumption all is well" which was always paired with "there must be a database problem because when I change this record the previous one gets updated" when the "programmer" had checked his results.
And how is this better than Python for doing that sort of thing?
While Python isn't functional, it has functional elements. However, I don't think these pass muster for the functional purists.
As usual, it's horses for courses. There are situations when you want immutable data structures and algorithmic precision, and there are situations where you want more flexibility.
CS has been searching for a silver bullet since before the machines were capable. None of the efforts has been a solution to the problem of writing software. The actual problem is in defining the problem you want to solve in the context of where the application lies. Any software has to be written by humans (for now at least). Writing software is hard work, very hard work; an art if you will. It is a fallacy to think that changing the programming paradigm magically solves the problem of writing software. It will, in some instances, reduce the problem to a more elegant solution. However, it always comes at a cost, whichever that may be.
The "right" solution will always be in the eye of the beholder. There is a reason for all the different paradigms to co-exist (we still use assembly when and where apropriate!). You must choose the right language for the right problem. That is an art in and of itself. For making the right choice in language and solution, you need to be a competent programmer with lots of experience. That level of sophistication is not for any mortal. Just like any other expert occupation.
"Writing software is hard work, very hard work"
No. Digging ditches in July in South Carolina is hard work. Welding pipeline in northern BC in January is hard work. Laying shingles in Vegas in August is hard work. Writing software is easy *work*, but hard *thinking*. Please don't conflate the two.
Now I understand why there is so much crappy software!
When "thinking" is not a form of "work", then you are obviously not paid for "thinking". Then, by that standard, the physical activity of typing on your keyboard, is all the "work" that needs to be done and paid for as a programmer. Thanks for clarifying that.
I need to find another job... One that pays for thinking.
I need to find another job... One that pays for thinking. .... b0llchit
Bear one thing foremost in mind, b0llchit, it is exclusive overwhelming thoughts rather than fantastic underwhelming thinking that carries a premium price delivering riches beyond most everyone's wildest dreams.
When "thinking" is not a form of "work", then you are obviously not paid for "thinking". Then, by that standard, the physical activity of typing on your keyboard, is all the "work" that needs to be done and paid for as a programmer. Thanks for clarifying that.
IBM, circa 1980s, evaluated programmer productivity in their mainframe groups using lines of code per unit time. Clearly they understood this concept.
That also helps explain a lot of code bloat.
People do actually die from software development. It's not from obvious causes like getting crushed under a machine, but from mental anxiety that builds over a few years: weight gain, heart attacks, drug/alcohol abuse, and suicide. Demanding companies will offer an enormous bonus if you can work harder, or threaten to fire you if you don't. You're tired, working as many hours in a week as possible, and struggling with the fear that you can't work enough hours to recover from even the slightest mistake. Recent college graduates and H-1B visa employees are especially vulnerable.
Not every software job is this hard, just like not every manual labor job is battling frostbite or heatstroke. Remember that they do exist and people do sometimes need help.
Indeed. Hence the name of a group I ran for a while in Durban, The Programmer's Art, which combined meetings to discuss programming languages and paradigms with frequent pub visits. We never worked out whether we were Artists or Artisans but we had fun debating it over beers.
This whole Functional thing, going back all the way to Dijkstra's GOTO allergy, is a product of computer science.
Now sending electrons through layers of silicon to implement an abstract instruction set is definitely science and/or engineering. But programming is a craft. "Computer Science" has as much relevance to programming as "Woodwork Science" would have to carpentry.
Now there is no reason why programming and associated skills should not be subject to academic study, and, a constant effort to improve tooling and technique is a good thing. But pretending it's science just clouds the issues.
It is notable that the most successful (or at least most used) languages were developed by practitioners who were experts in other fields. Python, C and COBOL were all developed by programmers looking for better tools to solve the problems at hand.
"Now sending electrons through layers of silicon to implement an abstract instruction set is definately science and /or engineering."
I learnt a lot (mostly good) from VLSI chip designers - who happened to use C for a wide variety of tasks in their daily grind. In my view they were much more adept at working with abstractions than pure comp.sci folks - borrowing from comp.sci, maths and physics to find a pragmatic (and usually elegant) solution.
Those folks took a very different view of security too, tending to work on the physical principles - eg: if accounts share access to the same storage you have to assume the info will leak (many of them cut their teeth cracking minicomputers for extra storage/compute time :P). They used TDD for everything, big or small, (there wasn't a label for it back then - it was basic engineering practice) and their transistor counting instincts ensured that the code was lean and cruft free.
Three decades have passed since, things have moved on quite a bit - but I think software still has a lot more to learn from hardware, especially as folks seem intent on chewing up more memory forcing the data to travel a few mm further than it needs to (looking at you Java), which can easily add up to lots of wasted kilowatts at the data centre. :)
@James Anderson
I had to GOOG Dennis Ritchie and Brian Kernighan to discover their educational backgrounds. I did that simply because, as you say, they came to computing from other fields, and, as I say, they are from the "before time" :) by which I mean before Comp Sci was a university degree.
Brian Kernighan, Engineering Physics (UofT) and PhD Electrical Engineering (Princeton) - https://en.wikipedia.org/wiki/Brian_Kernighan#Early_life_and_education
Dennis Ritchie, graduated from Harvard University with degrees in physics and applied mathematics - https://en.wikipedia.org/wiki/Dennis_Ritchie
As an aside, it is sad that education has become so expensive. How many great minds won't go as far as they could simply because the cost of pursuing degrees seems, and is, so high as to be a real barrier to following one's interests? Therein can lie the inspiration for the next big step in any field of study.
We have so many layers of abstraction between the program and the device now that in the future you'll be able to author a new web browser with a few thumb ups, a cat and smiley emoticon.
We had some grads a couple of years ago who were young, hip and the future of all the programming here. They couldn't get Jenkins to work because they had no idea how the Windows PATH works and the .exe wasn't on it.
Technology wise, we're getting towards being a crew of a round the world boat race, where nobody knows how to swim.
"We have so many layers of abstraction between the program and the device now that in the future you'll be able to author a new web browser with a few thumb ups, a cat and smiley emoticon."
It's not as if the browsers we're using now are all wine and roses, is it? Vulnerabilities and exploits ring a bell?
How are you going to do things right if there's no one around able to do it right? How do you enforce a KISS principle when everyone's idea of simple differs?
And please don't answer, "Just don't do it." You're under a DIE directive and have a deadline to meet.
In answer to your question, I heartily recommend Compiler Explorer and a couple of CppCon talks: What has my compiler done for me lately? and A Simple Commodore 64 Game in C++17.
You may also like C++ Insights which is currently linked to from Compiler Explorer.
Things might be simpler than you think behind the scenes, especially if you pick the right keywords.
We dumb everything down so the dumb can use everything. The dumb include programmers. Of course, the dumbest are the machines. And now that we want machines to check our work, a dumbest machine can only check the work of the dumbest programmer.
DUMB IS SMART
SIMPLE IS COMPLEX
AUTOMATION IS HUMAN
Yes. I was told this story by a submarine officer. I expect the many naval people here know it.
While in training they did an exercise which involved entering the water from a boat on the River Dart in uniform and then getting on again.
Only the CPO in charge of said boat engaged the outboard and started to motor slowly upstream, so the trainees had to swim like hell to get back on board.
When the last one was on, the CPO said "So, gentlemen, what did you learn from that exercise?"
"Well, CPO", said one of them, "I'm going to take good care not to fall in."
"Yes, Sir," said the CPO. "That is the correct answer. Do not fall in the fucking water!"
The relevance of this excellent advice is that when considering exciting new programming languages with new paradigms and structures, the fact remains that whatever the new safety rails in one place, they tend to offer exciting new ways to fall in the fucking water.
This post has been deleted by its author
No, I haven't done everything. Never claimed to have done, either. It's only people like you who put those words into my mouth ... perhaps instead of griping about my life, you should further investigate your own options? Shirley that would be much more satisfying?
As for boats ... I know they are holes in the water into which you pour money, but what kind of numpty would pay a couple (or several) thousand dollars in haul-out fees when you can hire a kid to scrape your hull for less than a couple hundred?
There are countries where an enterprising kid with dive gear can't make a couple hundred bucks (or ten!) on a weekend, doing nothing more[0] than scraping boat hulls in the water? I'm awfully glad I don't live in one of those nanny states!
[0] The kid I've been hiring for this for the last couple years is 18 years old, grew up in a boatyard, and as a courtesy also inspects the zincs and through holes etc. while he's down there ... and will happily replace the zincs gratis if you have replacements handy. He's earning over $100,000/yr.
I notice that the OP's post was about not falling into the water. The CPO didn't say anything about abandoning ship - which it is important to do in a disciplined and safe manner whenever possible, or you could have a Titanic disaster on your hands.
Suppose I want to get an input n times, and each time I get it I test its value and then choose what to go and do on the basis of this value before returning and getting the next input. How do I do this in a language like Bosque?
[This isn't some cynical trick Reg-comments-esqe question - I don't know the answer and I'd genuinely like someone to explain it to me.]
If you mean event processing (e.g. you specify code to run on a timer), that is as much a platform/environment question as a language question. And likely not something that has been worked out (on the linked GitHub page a quick glance at the contents shows nothing of event handling).
If you mean you would have a loop of delay, read, process, then likely this would be done with tail recursion where there isn't a suitable library method. Of course, the author may be thinking of stream processing (like the Reactive Extensions library for .NET).
(Note there are a lot of "TODO" items on the GitHub readme... this is not yet a complete system.)
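To make the tail-recursion suggestion concrete for the original question ("get an input n times, test it, act on it"), here is a rough TypeScript sketch - readInput and the branches are invented, and real Bosque would spell this differently:

// Invented stand-in for whatever supplies the input.
const readInput = (): number => Math.floor(Math.random() * 3);

// Branch on the value, then recurse for the next input: the
// "loop n times" is carried entirely by the tail call.
function processInputs(n: number): void {
  if (n <= 0) return;               // base case: no inputs left
  const v = readInput();
  if (v === 0) console.log("zero path");
  else if (v === 1) console.log("one path");
  else console.log("other path");
  processInputs(n - 1);             // tail call = next iteration
}

processInputs(5);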
Let's hear it for flowcharts. They show the logic in a two-dimensional form, much easier to understand than any programming language. The structured programming of Dijkstra et al was saying that flowcharts should meet a few sensible constraints aimed at making the logic verifiable.
OK, interrupts are not the easiest thing to handle in a flowchart - I am not sure there is any way of easily showing their logic - but that still leaves a lot of applications that benefit greatly from flowcharts.
"They show the logic in a two-dimensional form"
That takes me back around 30 years when I used a tool where you drew a flowchart on screen (graphically, but I think in their own environment, not Windows) and then the tool generated the program for you. Can't remember what it was called. Made it easy to discuss the intended program with others.
This post has been deleted by its author
It boggles my mind that some people have the intelligence to do away with reality and create a theoretical structure of such beauty that loops no longer exist. This is a programming paradigm that should be in an art museum, so that the great unwashed can marvel at its pure form and uncluttered syntax, before shuffling off to the next room having already utterly forgotten it.
Meanwhile, in the real world, I have to program a daily check on a 1,700,000-record database. How you gonna do that, Mr NoLoop ?
"create a theoretical structure of such beauty that loops no longer exist."
Except, of course, they haven't. They've just hidden them away in a higher level abstraction, just like symbolic assembler hid actual addresses in labels and 3rd generation languages provided an abstraction to hide the actual hardware.
Ask yourself: if loops are such a beautiful structure, why aren't they directly implemented in the silicon? Without a loop instruction (actually, the x86 architecture has one, but it is very limited and slow), how does the silicon do your 1.7-million-record check?
The silicon doesn't have loops, because loops are an abstraction of a conditional jump back to the start of a code block.
The Z80 had the DJNZ instruction to help.
Loops are for humans. They're really easy to understand. So easy, that optimisation for performance often involves figuring out how to replace them with set operations.
> Z80.... errr I'm beginning to get the feeling we're a bunch of OLD GUYS. Soon there will be discussions on 4004's.... :v)
Z80? What about the PDP-8's ISZ (Increment and Skip if Zero) instruction, where the skipped instruction is usually a jump to the beginning of a loop.
I worked at a small bank when "Check 21" was implemented in 2004 (paper checks get turned into electronic checks). The principle of Check 21 is that all paper checks get turned into electronic form for easy transmittal to/from banks. Banks had to certify that every check image was sent only once; however, it was typical for the same image to be sent by an institution more than once, and the company that provided and supported the bank software didn't provide a means to check for duplicate submissions. While not a programmer, I often used DOS batch files and BASIC to do quick and dirty tasks to keep from having to do them manually.
The bank was typically getting up to 10 duplicate images daily from checks submitted in the previous two weeks. Using batch files and BASIC, I was able to provide a list of under 100 potential duplicate check images from the thousands of checks processed daily. This made it easy for the staff to identify and reject duplicate submissions and prevent bad relations with our customers. It relied heavily on loops (get next record - if no more records, exit this module - process the record - repeat).
The banking software company did develop a duplicate check module, but it only checked for a week. The system I developed went back a full calendar month. Often duplicates were submitted 2 weeks after the original image, so the bank continued to use my method after I had left that bank.
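For the curious, the heart of such a duplicate check is exactly the get/test/repeat loop described above. A TypeScript sketch of the idea (the CheckImage shape and the matching key are invented; the original was DOS batch plus BASIC):

interface CheckImage { readonly serial: string; readonly amount: number }

// Flag any incoming image whose key already appeared in the
// history window (a month's worth of prior submissions).
function findDuplicates(history: CheckImage[], incoming: CheckImage[]): CheckImage[] {
  const key = (c: CheckImage) => `${c.serial}|${c.amount}`;
  const seen = new Set(history.map(key));
  const dups: CheckImage[] = [];
  for (const c of incoming) {            // get next record...
    if (seen.has(key(c))) dups.push(c);  // ...test it...
    seen.add(key(c));                    // ...remember it, repeat
  }
  return dups;                           // short list for staff to review
}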
Even BASIC can be made very structured. I always programmed using what I called "states". Every module would produce a result in one of maybe 2 or 3 states (sometimes more); I would work out what the possible states could be and plan the next process to handle all of them.
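That "states" discipline maps directly onto what newer type systems call a tagged (discriminated) union - a TypeScript sketch with invented state names:

// Each module reports its result as one of a small set of named states...
type Result =
  | { state: "ok"; value: number }
  | { state: "empty" }
  | { state: "error"; message: string };

// ...and the next step is obliged to handle every one of them.
function nextStep(r: Result): string {
  switch (r.state) {
    case "ok":    return `process ${r.value}`;
    case "empty": return "nothing to do";
    case "error": return `log: ${r.message}`;
  }
}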
No, I'm not a programmer. I did take an introductory Fortran course in the mid 1970's, where I did appreciate the nasty habits one could develop with all the GOTOs and other ways of getting places in an ungraceful way.
"Meanwhile, in the real world, I have to program a daily check on a 1,700,000-record database. How you gonna do that, Mr NoLoop ?"
A check function and an iterator. The iterator does the loop and the check for you under the covers. Presto, no loop in your code: it is hidden in the language construct!
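In TypeScript terms (standing in for Bosque, with invented record fields), the daily sweep becomes a check function handed to a higher-order method - the iteration still happens, just inside the library rather than in your code:

interface Rec { readonly id: number; readonly valid: boolean }
const db: Rec[] = [{ id: 1, valid: true }, { id: 2, valid: false }];

const failsCheck = (r: Rec) => !r.valid;   // the check function
const bad = db.filter(failsCheck);         // the iterator loops under the covers
console.log(`${bad.length} record(s) failed the daily check`);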
Is it really so difficult to comprehend this? I realize the article has a flame-bait misleading title, but some Reg readers need to work on their reading-comprehension skills.
There is still iteration. The first example in the damn article uses map.
The language has no explicit looping construct, only higher-level abstractions, such as the aforementioned map. There are also list comprehensions, recursion (with TRO), and pipelining. All of this is right in the Bosque documentation's introduction page.
Personally, I find the Bosque syntax moderately to severely awful, but that's beside the point, since Marron himself says (again, this is in the article) that it's a research language, not intended for actual software development.
it's a research language, not intended for actual software development.
And Pascal was a teaching language, so naturally it got used for systems. C was really a systems language, so naturally it got used for teaching and just about everything else. Pascal and C were like two ends of the structured-language spectrum. Early Pascals didn't allow you any leeway for type shenanigans, so people designed systems where type-casting shenanigans were an absolute necessity and specified Pascal as the language. C allowed you to do anything, so people used it where type shenanigans were positively pathological.
Here's looking forward to writing that embedded real time system in Bosque.
... so much simpler than when I was a Smalltalk programmer and had to write stuff like
squares := numbers collect: [:each | each * each]
Are we actually going forwards at all?
* of course, if one were serious one would add the method to the class and just write
squares := numbers squares
I feel that this new language is still in its infancy - they are just designing the basic building blocks and it is quite far from being usable in a problem-solving context.
They may need either:
(1) a killer application with a common use case (like Rails for Ruby), or
(2) a killer feature with a common use case (like channels for Golang), or
(3) a killer reason to use it (like Swift for iOS dev)
for it to pick up public interest.
Seems this language is not quite there yet, and I suspect that it will become a Python wanna-be while light-years behind in terms of libraries and toolchains.
But reading halfway through the paper, it does look to have some novel approaches to common pitfalls in programming, and I feel that, down the road, its syntax and concepts will sprinkle into some mainstream languages, like C# or Java, to make dev life easier.
Seems that with monotonous regularity - about every decade or so - functional programming tries to resurrect itself as the next big thing come to save us. Usually with much the same arguments and handwaving. Just a different slant on how good it is all going to be. (Really - I promise - this time for sure.) You could find papers from decades ago with much the same overall argument, just saving us from some different ill.
There is a cure, it just isn't a herb.
Stupidity cannot be cured with money, or through education, or by legislation. Stupidity is not a sin, the victim can't help being stupid. But stupidity is the only universal capital crime; the sentence is death, there is no appeal and execution is carried out automatically and without pity. - Lazarus Long
You can tell that it's in everyday use by surveys that list the most commonly used programming languages. (joke....)
I regularly come across these new, improved languages, and they all tout how easy they are to use and how they all generate infallible code. What's bugging me is that despite there being numerous languages out there, none of them actually solves problems (unless you're lucky enough to find one with a library that features a handy 'solve my problem' API). Some types of languages are better at expressing particular constructs than others, but ultimately if you can't express what you're trying to do in a language like English then it is unlikely that a language du jour is going to save you.
The bastard offspring of Clojure, Scala and every other new-paradigm language that has tried to "solve" traditional programming techniques by replacing them with syntactically hideous abstract operations. The way they have "got rid of" loops appears to be that the language allows an implicit for...each loop to perform an operation on each item in an inbuilt container class. As most languages now provide a for...each (or "extended for") loop syntax for iterating through containers safely, I cannot see what "problem" this solution solves. Even C++ has this nowadays.
Massive extra negative points for introducing your new language with an example containing single-letter variable names and code comments that are actually less comprehensible than the new language you are trying to illustrate.
The point is that once you've spent years developing a language, it all seems easy and intuitive. To you...
I eventually gave up on the people who didn't want their spreadsheets converted to a database application "because Excel is intuitive". (This cell contains a number or a function depending on the phase of the moon.)
Ask any honest trainer...if someone has invested ten thousand hours of their life on using Excel, of course they know how to use it.
As most languages now provide a for...each (or "extended for") loop syntax for iterating through containers safely, I cannot see what "problem" this solution solves. Even C++ has this nowadays.
The point is precisely that those languages also offer primitive looping constructs, and Marron (based on various pieces of research, some of which are linked to from the Bosque documentation) wants to get rid of those.
Honestly, people. It's an academic language, created for research purposes. The Reg dangles a little meat and you're all jumping out of the swamp to bite at it.
Agree on the horrible syntax, though, and the poor choice of identifiers and ill-conceived code comments. The other examples in the Bosque documentation generally are no better. Unfortunately many of the people who design programming languages are hostile to writing readable code, even while they extol its virtues.
"Unfortunately many of the people who design programming languages are hostile to writing readable code, even while they extol its virtues."
Sounds like R. For mathematicians, the great thing about it is that it was designed by mathematicians. For everyone else, the worst thing about it is that it was designed by mathematicians.
Suppose I want to calculate something. And the algorithm I know for calculating it has loops in it. Will it be easy to express that algorithm in Bosque, even though I can't use loops in it, using the things it has? If not, I can't see the language as being very useful.
I mean, what if I've got my program half written, and then I find out I need to calculate something as part of it that needs a loop?
The answer to your impossibly vague hypothetical question is definitely yes, it will be easy to express that algorithm in Bosque. See how easy that was? If you disagree, feel free to provide a counterexample.
(Alas, it will also be hideously ugly, due to the line-noise syntax. And, of course, completely impractical, since this is a niche research language not designed for actual production use. But since we're playing at strawmen here...)
Since at least The Little LISPer (1974) people have been writing guides for programmers - not just for computer scientists - on how to replace explicit procedural loops with higher-level constructs such as map operations and tail recursion. This is by no means new territory. What's mildly interesting about Bosque is eliminating various lower-level constructs, including explicit loops, in favor of higher-level ones, and providing other syntactic sugar such as bulk algebraic operations and pipelines. (Again, this is all right on the first page of the Bosque documentation, which is readily available via the github link in the article.)
It's a wonder some of you aren't complaining about these newfangled horseless carriages.
Bosque sounds terrible. Why is it based on JavaScript, likely the single most well-known bad language out there?
Why would anyone want to do something that stupid?
And one of the worst things about JavaScript is that it relies on higher-level constructs like sets and maps whose internals you cannot dive into when they fail due to bad data.
And to bring out another interpreted language is really foolish.
Some of the biggest problems in programming come from the slowness and memory waste of interpreted languages having to rely on garbage collection.
Interpreted languages prevent essential things like making system calls, interacting with device drivers, accessing hardware, etc.
No explicit loops is fine. There's a lot of looping furniture around and people tend to get it wrong. There is definitely some mileage there.
However, as to the assertion 'all values are immutable' - well, my joy at that lasted about 5 minutes, right until I saw "var! x" produces a mutable variable x...
Yes, it's a little odd that the Bosque documentation begins (section 0.1) by declaring "all values are immutable", then goes right on (section 0.2) with "allowing multiple assignments to updatable variables".
I think mutables are only allowed in block scope, so in essence they're an implementation detail, but it's still rather inconsistent. Based on section 6 of the documentation, it looks like Marron is distinguishing between "values" (which have a lifetime outside a function) and "variables" (which have block scope and exist only for the lifetime of a function invocation). The docs claim that "block structured code", which seems to mean "procedural code and local variables, but no explicit looping constructs", is useful within a notionally functional language.
The claim that Bosque doesn't have loops is a little undercut by that functor example. No loops? Well, let's just pipe an anonymous delegate into that map function: then it's basically a foreach loop. Not only that, if you want to write your code like this you can in most modern languages anyway (I often structure my C# like this).
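To labour the point in TypeScript (the C# version would look near-identical): the two spellings below are the same traversal, and Bosque merely outlaws the second one.

const xs = [1, 2, 3];

// The "no loops" spelling: an anonymous function piped into map.
const doubledA = xs.map(x => x * 2);

// The explicit foreach spelling: same traversal, written out by hand.
const doubledB: number[] = [];
for (const x of xs) doubledB.push(x * 2);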
I appreciate the idea that they're trying to remove complexity, but I feel like it may end up introducing more complexity by taking away the choices available in a traditional foreach or while loop. I doubt Bosque will actually go anywhere (I don't think it's meant to), but it's interesting to see the effort Microsoft is putting into developing new programming concepts.
I love this place. I read the article, and it's an interesting and informative article, but I rushed to the end of it because I can't wait to read the comments.
10 Read 80% of article
20 Goto comments.
30 Keep going to comments.
I would love to see a The Register meet-up. I've got a feeling it would be a blood-bath. I think we hate people that are closely like us more than we hate people who have nothing in common with us. If that is incorrect then you'd all be best friends, and there'd be no downvotes. I love the downvotes. Nobody here wants to be like me, do you, and I fully understand.
It reminds me of the idea that planets form by space dust coalescing. And then water comes from somewhere, on space rocks. Plenty room for improvement there, something for the next generation to work on.
https://northernalliance.bandcamp.com/track/allys-tartan-army
Bosque - Spanish for "forest". Hmmm, being (un)able to see the wood for the trees?
Remember "Maven"? Simple two or three line build spec files? And like Topsy it grew, and there was Gradle, which grew.
Maybe we just need to invent a language called Topsy and let it grow, because it will. Sometimes reality imposes complexity on your simple system, sometimes idiots do it. There once was a "teaching OS" called Topsy. I wonder what happened to it?