Don't forget the ‘C’ in Objective-C

It’s all too easy to take today’s fast processors for granted. At the risk of sounding like an old fogey, I get the impression that a lot of developers do just that. This devil-may-care attitude is not, in my opinion, the result of complacency but far more likely due to inexperience or even – dare I say it? – ignorance …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    Bad Example

    Congratulations, you've just replaced slow yet working code with faster yet broken code, thus proving that premature optimization is evil.

    Your replacement code doesn't check that the string is the entire extension name, so any string beginning with GL_EXT_texture_rectangle would be matched (for example, if a future GL_EXT_texture_rectangle2 extension were supported, the check would erroneously assume the earlier extension was present when it might not be).

    You can't just add a space to the end of the string to search for, as it might be the last extension in the list.

  2. Anonymous Coward

    No, C really is dead.

    There you "C guys" go again.

    As you hinted at the very end of your article, this is a different age. The text world is now UNICODE; it's far better to be experienced in how to handle Unicode than to save a few nanoseconds by finding and using the C string library.

    ONLY if the "C" code was inside a LOOP would it save anything.

    In a higher level language like Objective-C it's Not Worth the Effort to optimize by using C.

    You C guys go to extremes to keep C relevant, it isn't, except for OS development, or when you can be sure you will never touch Unicode.

    Why not optimize by using a char[], building a nested search for a space, then for the first letter of the property you are searching for, and then calling a substring function? Because we don't typically optimize where the benefit is in picoseconds.

    In your example, the programmer may have loaded an array to more easily view the properties in the debugger, and then kept it there because TIME is more expensive than processor cycles (outside a loop).

  3. Rosyna

    Further flawed example

    Do you understand why stringWithCString: is deprecated? It's because it assumes the system encoding. In other words, it's the *same* as stringWithCString:encoding:[NSString defaultCStringEncoding]. You should never, ever, ever, ever call [NSString defaultCStringEncoding] or CFStringGetSystemEncoding(). It's equivalent to using no encoding. If the user's default language is Korean, it'll return MacKorean; if it is Japanese, it'll return MacJapanese; if it is English, it'll return MacRoman.

    SPECIFY AN ENCODING! Use NSUTF8StringEncoding if you're creating new strings. Just never, ever use the system encoding.
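
    To make that concrete, here's a quick sketch (rawBytes is just a stand-in name):

        #import <Foundation/Foundation.h>

        const char *rawBytes = "GL_EXT_texture_rectangle";

        /* Deprecated: decodes with the user's default encoding
           (MacRoman, MacKorean, MacJapanese... depending on language). */
        NSString *bad = [NSString stringWithCString:rawBytes];

        /* Correct: the encoding is stated explicitly. */
        NSString *good = [NSString stringWithCString:rawBytes
                                            encoding:NSUTF8StringEncoding];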

  4. Dave Jewell

    RE: Bad Example

    Anonymous Poster said: "Your replacement code doesn't check that the string is the entire extension name, so any string beginning with GL_EXT_texture_rectangle would be matched (for example, if a future GL_EXT_texture_rectangle2 extension was supported, the check would erroneously assume the earlier extension was present when it might not be)"

    That's a good comment, although it's something of a side issue. The essential point was that doing the thing in C was a lot more efficient.

    However, to address your point, I did a little googling and it turns out that a lot of folks do use the 'strstr' technique to check OpenGL extensions. There's a good example here:

    http://www.koders.com/c/fid0577FFB7F9B63AEC5943E5354993FE6CC313B656.aspx

    If you want to focus specifically on OpenGL extensions, I believe the *correct* approach is to use the gluCheckExtension routine which is documented here:

    http://developer.apple.com/DOCUMENTATION/Darwin/Reference/ManPages/man3/gluCheckExtension.3.html

    As Apple's documentation states, "Cases where one extension name is a substring of another are correctly handled."
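
    In other words, something like this (a minimal sketch, error handling omitted):

        #include <OpenGL/glu.h>

        const GLubyte *extensions = glGetString(GL_EXTENSIONS);
        if (extensions != NULL &&
            gluCheckExtension((const GLubyte *)"GL_EXT_texture_rectangle",
                              extensions)) {
            /* Extension present; substring collisions such as a future
               GL_EXT_texture_rectangle2 are handled for us. */
        }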

    Dave

  5. This post has been deleted by its author

  6. Dave Jewell

    RE: No, C really is dead

    Anonymous Poster said: "There you "C guys" go again."

    I don't really think of myself as a C guy. I'm also a Delphi guy, a C# guy, a Cocoa guy. Heck, I'm even a VB.NET guy sometimes, but I keep that one fairly quiet. ;-)

    "As you hinted at at the very end of your Article, This is a different age. The text world is now UNICODE, it's far better to be experienced in how to handle Unicode the to save a few nanoseconds finding and using the C string library."

    True, of course, but since glGetString doesn't return a Unicode string....

    "ONLY if the "C" code was inside a LOOP would it save anything."

    Which is why I cunningly mentioned looping tens of thousands of times inside a text crunching app. Except I didn't actually use the word loop - it was kinda implied. :-)

    "In a higher level language like Objective-C it's Not Worth the Effort to optimize by using C."

    Totally agree. Last thing I'd want anyone to do is go through a finished, well-written Cocoa app and intersperse the code with lots of C routines. The point is to think about what you're doing and -- for each specific case -- use the most efficient API routine (whether Cocoa or stdlib) based on the data types you're working with. 99.9% of the time, a Cocoa programmer will be working with NSString instances, so it makes sense to use the Cocoa APIs in those cases.
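
    For instance (a rough sketch -- the variable contents are purely illustrative):

        #import <Foundation/Foundation.h>
        #include <string.h>

        /* Data already in NSString form? Stay in Cocoa: */
        NSString *haystack = @"GL_ARB_multitexture GL_EXT_texture_rectangle";
        BOOL found = [haystack rangeOfString:@"GL_EXT_texture_rectangle"].location
                     != NSNotFound;

        /* Data already in C-string form? Stay in the C library: */
        const char *cHaystack = "GL_ARB_multitexture GL_EXT_texture_rectangle";
        BOOL cFound = strstr(cHaystack, "GL_EXT_texture_rectangle") != NULL;

    (Either way you'd still want the whole-word check raised in the first comment, of course.)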

    "You C guys go to extremes to keep C relevant, it isn't, except for OS development, or when you can be sure you will never touch Unicode."

    Methinks you didn't really grasp the point I was trying to make. Mea culpa....

    "In your example, the programmer may have loaded an array to more easily view the properties in the debugger. And then kept it there because TIME is more expensive then processor cycles( outside a loop )."

    See earlier comments about loops.

    Dave

  7. Anonymous Coward

    A blast from the past

    I feel this is a great example of what the industry has moved on from over the past 30 years. Thankfully modern software developers now have a user focus rather than some ill-founded notion that they must write the most efficient code possible in the shortest number of lines.

    This means they don't worry about writing code that does four system calls rather than two (because of 'empathy' with the processor ... it won't sulk if you do!). Instead they worry about writing code that is easy for future developers to understand and gives them high confidence in it working (meaning fewer bugs and quicker release to the end user).

    Modern programmers take advantage of existing code in libraries that is known to work rather than trying to write their own until they _know_ performance is an issue (meaning fewer bugs and quicker release to the end user).

    Finally, the best programmers don't have a 'feel' for the way in which their code is mapped onto machine code, nor do they spend their time optimising code that offends them. The best programmers use profilers and tools that tell them exactly where the bottlenecks in their code are, and they focus their time on optimising these (meaning more responsive software for the end user).

    I suggest the author switches his focus from himself (being at one with the processor and writing the most efficient code possible) to the people that matter - his users.

  8. tom hall

    This book may be worthwhile

    I stumbled across this the other day

    http://www.oreilly.com/catalog/1593270658/

    Write Great Code Volume I: Understanding the Machine

    Write Great Code, Volume II: Thinking Low-Level, Writing High-Level

    Can't give a full-blown recommendation as I'm just starting as a programmer.

  9. amanfromMars

    Empowerment from the Top Down ..... is a Certain Self-Actualisation

    <<<Of course, we’d still have to first use stringWithCString: to convert our original C string into an instance of NSString. Surely, there’s a better way?>>>>

    What's the Rush? This Way the Support Sections get to do Bits and Bytes of Leadership in the NEUKlearer Definition in the Hierarchy of Human Needs. Wii have Virtual Feeds to Seed the Perfumed Garden of Eden.... Eve's PlayGround for Tomorrow's Vista.

    Major Tom to Ground Control ....... IT Rock and Roll Patrol ... FlowurPower2 4U2.

    Don't be Coy, Men are not Boys. Just ask any Woman who has met One.

  10. Ian Michael Gumby

    C isn't dead and C is at the core of Objective-C

    Wow.

    Only in a Mac-centric world would I see this type of argument.

    Note: I guess I'm one of those old guys because I was a certified NeXTStep developer in '92.

    The point of the article was that developers should think about the task at hand and, during the development process, not ignore the core of the language.

    Objective-C has C at the core of the language, and if you really wish to be proficient at Objective-C, you need to grok C.

  11. Tom Watson

    Some thought needs to be done

    Maybe developers need a "reality check". Yes, the modern systems are VERY speedy, and have LOTS of memory (internal and disk), but some thought needs to be done. Why does Vista need (or demand) 1GB of main memory? Probably because developers just don't think. The only thing that keeps this going is the fact that Intel/AMD/IBM make better processors and memory gets cheaper by the hour. The balancing comes from Microsoft/Apple, who just add more bloat with every big release.

    Perhaps developers (both operating system, and application) need a humbling experience. I suggest doing a simple project using an ASR33 as your I/O device. It will give everyone involved a sense of value. Will said application be useful? I don't know, but the journey starts with a single step!

    p.s. Lacking an ASR33, I suggest a machine about 15 years old with a 15 year old operating system.

  12. Josh Goodrich

    Being part of the Problem

    "I feel this is a great example of where the industry has moved on from in the past 30 years. Thankfully modern software developers now have a user focus rather than some ill-founded notion that they must write the most efficient code possible in the shortest number of lines.

    This means they don't worry about writing code that does four system calls rather than two (because of 'empathy' with the processor ... it won't sulk if you do!). Instead they worry about writing code that is easy for future developers to understand and gives them high confidence in it working (meaning less bugs and quicker release to the end user)."

    Holy cow - do you actually believe this? So you believe the user would rather have a pretty, slow-running app than a usable, not-so-flashy app? Do you work for Microsoft by any chance? Every code writer's goal should be to write the most efficient code that does the job within the schedule and budget. Not to push code out as fast as possible.

    If a future developer needs to understand the code, that's why all languages have comments (and yes, they are not used nearly enough) and documentation. They are the key to making it easier on future developers, not making your code piss-poor simple.

  13. David Nečas

    feel

    Anonymous wrote:

    > Finally, the best programmers don't have a 'feel' for the way in which their code is mapped onto machine code, nor do they spend their time optimising code that offends them.

    They do, precisely because

    > The best programmers use profilers and tools that tell them exactly where the bottlenecks in their code are, and they focus their time on optimising these

    and thus they develop the `feel' (having some idea what happens at the deeper levels helps in the process). In fact, if you don't develop any `feel' after being exposed to the same patterns again and again, you can hardly call yourself a programmer.

    Dave Jewell wrote:

    > I did a little googling and it turns out that a lot of folks do use the 'strstr' technique to check OpenGL extensions.

    A little googling can find an awful lot of people making all kinds of mistakes. Your `strstr technique' is taught in programming classes -- as an example of common programming errors. If the goal was to make performance-conscious programmers look like complete idiots, you can congratulate yourself.

  14. Matt Kemp

    Re: No, C really is dead

    At which point did efficiency become irrelevant?

    While some may be extreme cases, why go through the effort and waste of system resources to get the same result? As someone stated above, Objective-C is built on top of C - why not use the functions of C if it makes your program just that much more efficient?

    While the method presented only saves a couple of system calls, if this kind of method were present throughout the code thousands of times, the difference would begin to add up. I don't understand the aversion to doing something that can only save system resources.

    The attitude that "I don't have to bother making my code efficient, so I won't" seems to be perpetuating through a lot of projects - which, as someone mentioned, is probably the reason that Vista needs 1GB of memory. When a Java program requires a 60MB memory footprint for something a lower-level language can do with 120KB at most, does it not make sense to use the more efficient version?

    Writing good, efficient code not only makes you more versatile as a programmer but also saves your users time in the future - especially for heavy-load applications such as games, multimedia players and even supercomputer programs. If you feel code might be obscure or unintelligible to other developers, use comments! That's the entire point of having them.

  15. Anonymous Coward

    Striking the Right Balance

    Tom, I disagree that the reason software places increasing demands on hardware is that "developers just don't think".

    Developers need to balance features, maintainability, cost and delivery times against hardware demands (processor, memory, etc). They don't live in an ideal world where they have infinite time to optimise.

    I think that for a modern, several-hundred-dollar operating system to need less than $100 worth of memory to run is an excellent example of striking the right balance. Would I rather have waited an extra year and paid an extra $100 for Vista in order for it to use 512MB? Of course not. If Vista needed 5 gig of RAM then I'm sure you would have seen the developers optimising their code before release.

    This is no different to a car manufacturer finding a balance between weight, engine power, fuel economy, cost, etc. They could always optimise further (using lighter materials, for example) but this pushes up cost, and they need to strike the right balance.

    Back in the days of ASR33s the trade-offs were different and hardware demands were more limiting, but let's not compromise today’s software because of past limitations. If that were the case then all software would need to look great at 640x480 on a monochrome screen!

    As for developers going back and coding against 15-year-old kit - should car makers go back and make 100-year-old cars just to see how the balance was different? An interesting exercise, but ultimately a waste of time.

  16. Kurt Guntheroth

    efficiency matters

    The original point is still valid; efficiency matters. Too many modern languages force dynamic memory allocation, which does not scale well to multi-processor systems. We're all just about to learn this lesson.

    The speed of straight-line code is not going to keep doubling like it used to. In fact, it may decline. So only improvement in execution efficiency will make today's bloatware run. Plus, stuff that shares resources, which was reasonably quick on single-CPU chips, will have to be interlocked in the kernel on multi-core chips. Allocator naps are already way longer than the overhead from VM interpretation. They'll get worse, not better.

    I boldly predict that efficiency will get new respect in the next 10 years.

  17. Ian Michael Gumby

    Striking the right balance?

    True, one must balance performance against cost against maintainability.

    However that has nothing to do with the underlying point of the article.

    We're not talking about writing obfuscated code in an effort to squeeze out a small performance edge. We're talking about using C functions that are part of the language.

    Well-documented code will improve maintenance regardless of whether you're working in C/C++, Objective-C or Smalltalk. (Add your language here.)

    Performance?

    If you don't think about performance, then you'll never have a first tier application.

    It's "developers" like you who write garbage the first time that keep consultants like me employed. Rather than write enhancements, I'm busy fixing other people's code and improving the performance of the application.

    Want to strike a balance?

    If you paid attention in the first place and thought about your work, you wouldn't have to worry about 512MB vs 1GB.

    That extra 20% of effort in thinking first will reduce the overall cost of software development, offsetting your "balance" argument if not actually making software cheaper to produce.

    But hey! What do I know? I'm just a consultant who's trying to talk myself out of a job. ;-)

  18. Anonymous Coward

    re: Being part of the Problem

    "Holy cow - do you actually believe this? So you believe the user would rather have a pretty, slow running app then a useable not so flashy app? Do you work for Microsoft by any chance? Ever code writers goal should be to write the most efficient code that does the job within the schedule and budget. Not to push code out as fast as possible."

    I didn't say the user would rather have a pretty, slow-running app than a usable, not-so-flashy app. Later in my post, I advocated using a profiler to locate and optimise where the actual problems are. By putting the effort in where it is needed you can get a pretty AND usable app - the best of both worlds.

    I do actually believe in a focus on maintainability through simplicity ... and I've seen the importance of it many times first hand. It is by far the biggest problem I've seen on the numerous enterprise projects I have worked on, often due to high coupling and poor design (sometimes through misguided concerns about affecting performance). Some of these projects have had performance problems, and I've always found these lie with very specific parts of the system, where the optimisation effort can be targeted very effectively.

    Years ago, when I developed complex multithreaded scientific simulations in C, the CPU spent over 99% of its time running less than 1% of my code. The biggest problem with the huge bulk of my code was maintainability, not performance. Most software isn't this extreme, but the same principles apply.

    If the programmer can write code that meets the project's performance goals under budget and ahead of schedule, then adjust the budget and schedule - don't waste time doing further optimisation. Focus your attention on other areas that make the users' lives better (such as a usability study or lowering the cost).

    And no I don't work for Microsoft ... I understand how to develop quality software! ;)

  19. Dave Jewell

    RE: Speed?

    Rik Hemsley said: "So, you've just 'optimised' some code which - from the sound of it - runs once, at application startup. Did you measure how much time you save at application startup? I'd be willing to bet you can't measure it because it'll be statistically insignificant."

    Try reading the whole article, Rik. In particular the bit which says: "Determining the capabilities of the OpenGL system is something that you’ll typically do once only, in the initialisation part of your program. But looking for one string inside another is something that you might do tens of thousands of times inside a text crunching application."

    Dave

  20. Dave Jewell

    RE: C isn't dead and C is at the core of Objective-C

    Ian Michael Gumby said:

    "Wow. Only in a mac centric world would I see this type of argument. Note: I guess I'm one of those old guys because I was a certified NeXTStep developer in '92. The point of the article was that developers should think about their task at hand and during the development process don't ignore the core of the language. Objective C has C at the core of the language, and if you really wish to be proficient at Objective C, you need to grok C."

    Thank you Ian. It's nice to know that some folks understand the point I'm trying to get across. <wry grin>

    Dave

  21. Dave Jewell

    RE: A blast from the past

    Anonymous poster said: "I feel this is a great example of where the industry has moved on from in the past 30 years. Thankfully modern software developers now have a user focus rather than some ill-founded notion that they must write the most efficient code possible in the shortest number of lines."

    There are really two things we seem to be talking about here. On the one hand, I'm talking about writing good efficient code (not necessarily in the shortest number of lines, BTW, although that does seem to be a characteristic of much Cocoa source!) On the other hand, you're talking about focusing on the end user.

    Now, can you explain to me how you reckon these two things are mutually exclusive? Thing is, I like writing code that doesn't run like a one-legged dog in treacle. Why? Because I've got the end user in mind. Sorry, but not everybody has a dual-core Mac Pro you see....

    Dave

  22. Dave Jewell

    RE: This book may be worthwhile

    Tom Hall said: "I stumbled across this the other day.

    http://www.oreilly.com/catalog/1593270658/. Write Great Code, Volume I: Understanding the Machine. Write Great Code, Volume II: Thinking Low-Level, Writing High-Level. Can't give a full-blown recommendation as I'm just starting as a programmer."

    Thanks for that Tom - interesting. The title "Thinking Low-Level, Writing High-Level" fits in with what I was trying to get across.

    Dave

  23. Anonymous Coward

    Striking the wrong balance

    Ian, thanks, I was highly entertained by your comment.

    > It's "developers" like you who write garbage the first time that keep consultants like me employed. Rather than write enhancements, I'm busy fixing other people's code and improving the performance of the application.

    My bet is no one is asking you to improve the performance of anything.

    I have NEVER been asked, in my entire career, to improve the performance of any code I've written. I have rarely been asked to improve the performance of anyone else's code.

    With 512MB of memory going for, what, $100, that Never Happens.

    Otherwise I'd still be writing interactive systems in assembler. No, the world has passed Assembler, C, and C++ by.

    Managers ask you to optimize programming TIME, not run speed; that's the real world. Managers ask that you use the standard libraries and not re-invent the wheel, and don't rewrite what's already been written and debugged.

    But, your comment was entertaining.

  24. Dave Jewell

    RE: At which point did efficiency become irrelevant?

    Matt Kemp: "The attitude that 'I don't have to bother making my code efficient, so I won't' seems to be perpetuating through a lot of projects - which, as someone mentioned, is probably the reason that Vista needs 1GB of memory. When a Java program requires a 60MB memory footprint for something a lower-level language can do with 120KB at most, does it not make sense to use the more efficient version?"

    Well said, Matt; you're right to bring in the issue of memory footprint. Thus far, we've only been talking about execution speed but memory footprint is also important. I've spent some time digging around inside a lot of Cocoa apps over the last few months and I reckon that in many cases the code size could be very significantly reduced with a little thought -- see Part 2 of my article coming soon! :-)

    So what, a lot of folks will cry? Nowadays, we've got gigabytes of cheap RAM, blah, blah, blah. But here again, lots of little code savings add up to big code savings.

    Dave

  25. Blain Hamon

    C lives, but only to safe drivers

    I do have to agree that it's easy to get caught up on the char * example.

    But let's ignore that the GL extensions check runs only once. Let's ignore that text processing will probably want Unicode support.

    I love my char *, and it is possible to do a one-liner to check for a complete word ((result != NULL) && ((result == stringBeginning) || (result[-1] == ' ')) && ((result[foundStrLen] == ' ') || (result[foundStrLen] == '\0'))), assuming the compiler short-circuits like it should. But there are still times that I'd rather have an [NSString stringWithUTF8String:], even despite the alloc and release hit.

    There is a quote, attributed to Brian Kernighan, "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

    Making an NSString just to do a safe strlen is just daft, I agree, and there's something to be said about using float over NSNumber, or a bitmasked enum instead of several BOOLs. A good coder WILL know how to use them properly and make good, tight code with string.h, math.h, etc. But if the programmer isn't skilled enough to drop down into C, I hope they keep with the slow code and learn better tricks later. I'd much rather have a cycle-wasting stringWithFormat: than a strcpy() where the code forgets to check buffer space.

    Of course, every company should keep at least one 350MHz G3 and 400MHz Pentium II, just to shake the quad-core tunnel vision.

  26. Paul

    Re: Bad Example

    I understand the point of the article is about the relevance of efficiency, but your mistake with the substring was so glaringly obvious I was looking for it before I got to the line of code you posted -- the very first comment pointed it out. I love efficiency, having spent years working in assembly, but you just proved why people do things like split strings into arrays and search for the matching element. Your article is actually an excellent COUNTERPOINT to the point you were trying to make.

  27. Anonymous Coward

    Which version is going to port to a handheld or battery device?

    The one with a 4MB footprint, or the one with a 1MB footprint?

    The one that uses up the battery in 1 hour, or in 4 hours?

    But there is room for both: the sloppy inefficient application, running in 1GB of RAM and requiring a 3GHz processor, can be the prototype for the perfected version to run on small or low-power hardware. Development is expensive: you need to get some units out the door and some money in to be able to perfect an application.

  28. Anonymous Coward

    Understanding the hardware rant

    I've seen too much code written by people who clearly don't have an appreciation for what's going on "under the covers": code causing Java garbage collection to use 10% of real time because they haven't considered resetting and reusing buffers rather than just discarding them and creating new ones, or slow loops that recalculate a string length on every iteration (there's a gotcha with C#/.NET optimisation that actually contradicts this, though... know your compiler and know your string's actual format...)
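
    The classic C incarnation of that string-length gotcha, as a sketch (process() is a hypothetical stand-in):

        #include <string.h>

        void each_char_slow(const char *s, void (*process)(char))
        {
            /* Slow: strlen() re-walks the whole string on every iteration. */
            for (size_t i = 0; i < strlen(s); i++)
                process(s[i]);
        }

        void each_char_fast(const char *s, void (*process)(char))
        {
            /* Better: compute the length once, outside the loop. */
            size_t len = strlen(s);
            for (size_t i = 0; i < len; i++)
                process(s[i]);
        }

    (Some compilers will hoist the strlen() for you, but -- as above -- know your compiler.)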

    Yes there's got to be a balance, yes it would be crazy to write all your code highly optimised (that's why you profile), but imho, there is little excuse for writing inefficient code when writing efficient code would have been just as easy. Only ignorance.

    There are too many VB developers who don't know how to manipulate bits. Even in SQL you'll see database code AND/ORing bit fields within a column - not because it's pretty (it defies anything "good"!), but because that's what computers are blisteringly good at. Then someone comes along and writes a query with string concatenations (NOOOO!) that works fine on their small test data before the data's built up live for a year, or indexes on case-insensitive string dates (?!), or, or, or... gnaaghhh!!!

    And I haven't gone bald yet... I need to stop caring that we need to upgrade that RAM again, and maybe migrate to a faster server, ... I know hardware is cheaper than coding time.

    The effect? I know I'm right. But I also know I'm completely wrong and outdated too.

    (Then it turns out that the remote server I parse 10GB+ of data on is a humble Pentium 3...)

    PS: I hate Unicode's pervasiveness - it doubles the memory requirement (who cares) BUT it halves the chance of a CPU cache hit, and makes your code slower to execute. If you need Unicode, use it; if not, why bother? And then feel the CPU love of cache hits and native RTL calls.

  29. Blain Hamon

    Re: Understanding the hardware rant

    Oooh, don't get me started on VB. It's as if that language and IDE were designed to enforce the worst habits possible.*

    And one of my favorite reads rants about the myth of C being close to the hardware, delving in at the assembly level and pointing out all the inefficiencies - C being too high-level to avoid byte-aligning structs at times, for instance.

    Back to the article: the tricky bit is NSString. If the article had discussed int and float vs NSNumber, it'd be a slam dunk. If it had been about caching stringLength into an unsigned, it'd be a slam dunk.

    But NSString is too handy in everyday use. It's a class cluster, meaning that the NSString you get back is actually a subclass, optimized for the data. Unlike char*, give it Unicode and it can handle it. Unlike Unicode, give it a char* and it will keep it at 8 bits a glyph, not wasting space. Looking at the header, initWithBytesNoCopy:length:encoding:freeWhenDone: looks promising if you must keep things lean. And you can bet that Apple's done more work optimizing string handling than I could.
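
    For example (a rough sketch; retain/release housekeeping omitted):

        #include <string.h>

        /* Hand a malloc'd buffer straight to NSString: no copy is made,
           and the string frees the buffer when it is deallocated. */
        char *buf = strdup("GL_EXT_texture_rectangle");
        NSString *s = [[NSString alloc]
                initWithBytesNoCopy:buf
                             length:strlen(buf)
                           encoding:NSASCIIStringEncoding
                       freeWhenDone:YES];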

    Same goes for NSArray, btw. Rediculousfish.com/blog/ pointed out that, based on a few tests, CFArrays/NSArrays actually switch to hash structures with caching behind the scenes at about 300,000 elements, making them faster than standard C arrays, even!

    *I was told about one VB-laden applicant who was given the test of making a function that counts the number of 1s in an int. Given 52, you'd return 3 because 52 is 110100b. He had the function convert the int into a string representation, i.e. "00110100" and then loop through the string, comparing character by character to "1"!
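
    For the record, the function he should have written is only a few lines of plain C - a sketch using the standard clear-the-lowest-set-bit trick:

        int count_ones(unsigned int n)
        {
            int count = 0;
            while (n != 0) {
                n &= n - 1;   /* clears the lowest set bit */
                count++;
            }
            return count;     /* count_ones(52) == 3 */
        }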

  30. Simon Hobson

    If I could just butt in here ...

    As a user, though I have done some programming in the past (embedded systems), can I just point out that, thanks in part to programmers with an attitude of "don't care, hardware is cheap", I have a 1-year-old MacBook Pro with its full complement of 2GB RAM that is insufficient. Yes, I've seen over a gig of swap in use! Granted, it mostly happens when I have that tarpit of Microsoft code called Entourage running ('cos work says I have to); allocating half a gig to an instance of Windoze doesn't help either!

    There is no room to "just add a gig of RAM for $100"; to add more RAM means an expenditure of several THOUSAND dollars for a newer machine (or more correctly a couple of thousand pounds for me), which I personally just don't have - and even then that only allows about 1GB of extra capacity, as the Core 2 Duo supports LESS than 4GB of RAM due to the way IO is mapped, IIRC. For a good part of the time, any additional memory footprint created by the "don't care" gang goes straight into the paging process.

    There is a balance to be found. No-one expects a programmer to go through their code with a fine-toothed comb looking for every single byte or clock cycle to be saved, but an adept programmer with a good feel for how their code actually ends up being run can make a big difference simply by making appropriate choices WHILE THEY ARE WRITING the code.

    Yes, going back and optimising the code after the fact is going to be expensive - so don't do it that way! Or have we regressed to the old IBM measure of productivity - the more lines of code, the more productive?

  31. Infernoz

    Good and bad points from both sides

    strstr is OK if you have a plain ASCII value, but is easy to abuse if the bounding is not as you expect.

    An efficient regular expression engine would probably be far safer and quicker for both ASCII and Unicode comparisons, because it can do the flexible comparisons without splitting or false positives.
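
    A sketch of that idea in plain C, using the POSIX regex API (illustrative only; error checks omitted):

        #include <regex.h>

        /* Match the identifier only at a word boundary: start-of-string
           or space before it, space or end-of-string after it. */
        int has_ext_re(const char *list)
        {
            regex_t re;
            int found = 0;
            if (regcomp(&re, "(^| )GL_EXT_texture_rectangle( |$)",
                        REG_EXTENDED | REG_NOSUB) == 0) {
                found = (regexec(&re, list, 0, NULL, 0) == 0);
                regfree(&re);
            }
            return found;
        }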

    Efficiency must be considered, and not just for CPU usage; GPU, disk and network usage are also important. You would be shocked how much minor changes in disk and network usage and misuse of buffering can slow down a process. Many performance mistakes (e.g. from time compromises) can be made early on in the design of a product if you don't think; these can be hard to pin down with a profiler and expensive to fix, especially in a live product - I know first hand!

    Managers who want quicker product releases are strongly advised to read Peopleware before they compromise on product quality and good code design. Dilbert-like management and their suck-up employees should no longer be tolerated by creative people, otherwise productivity and code quality will continue to plummet.

    I happen to prefer Java because I can see what is happening; dynamic interface languages like Objective-C and Ruby make things much more ambiguous than I like.

  32. Dave Jewell

    RE: Understanding the Hardware Rant

    Blain Hamon said: "*I was told about one VB-laden applicant who was given the test of making a function that counts the number of 1s in an int. Given 52, you'd return 3 because 52 is 110100b. He had the function convert the int into a string representation, i.e. "00110100" and then loop through the string, comparing character by character to '1'!"

    LOL - I guess that's the sort of approach that I was arguing against.... ;-)

    Dave

  33. Ilsa Loving

    Preaching to the wrong crowd

    I think David Jewell made a mistake by addressing performance issues in programming on the Mac platform.

    And that mistake is borne out by all the comments from people saying, "RAM is so cheap! Just buy another gig!"

    People who are hardcore Mac users have money to burn, and don't mind using an OS that needs 1 gig of memory just to function at a respectable level. Vista is not the only culprit here... OSX is just as bad.

    And being the rich white boys that they are, they see nothing wrong with the "just throw more hardware at it" solution rather than actually trying to write good code.

    I deeply resent such arrogance, because these people are completely oblivious to the majority of the population who cannot *afford* to keep upgrading their hardware every 6 months. 100 bucks may not be much to someone coding on a shiny Mac with 4 cores of processing power. It's a HELL of a lot of money to a pensioner. Or to a family trapped below the poverty line who want to have a computer in the hope that their children can gain some measure of advantage that the parents didn't have.

    If it were so easy to just throw money at hardware, the world wouldn't need things like the OLPC. And despite people's protestations about how important the "user experience" is, I don't see OSX or Vista running on said OLPC. GEE! I wonder why? Maybe... just maybe... 'cause efficiency and code quality IS actually still important to some people!

    (strstr being an example of bad coding practice? My god... I am ashamed for people who think like you... Programming is, always has been, and always will be, more than just whatever flavour-of-the-month API you happen to like.)

  34. Joshua Goodall

    It *is* a lousy example.

    Just rewrite the damn article already. kthx.

  35. Aubry Thonon

    Funny old world

    You know, it's funny... but the feeling I get from every one of the "no need to optimise, just buy more memory" comments is that their authors live and work mainly in a PC-centric world. And by PC, I don't mean MS or Linux; I mean a small box that sits on a user's desktop.

    Well, I don't work in that world. I work in the real world of large companies like banks, semi-government authorities and the like, where large amounts of data have to be processed in the least time available - in the case of a Police road-side check, for example, knowing if the driver is wanted on gun charges can make one hell of a difference... and when your application has to work on diverse hardware, including PDAs, you don't have the luxury of "buy more memory".

    This is something I have been despairing of ever since Java was introduced... not because Java and subsequent languages are bad (or good), but because now that programmers "don't have to care", universities are pushing out cookie-cutters rather than programmers. I am sorry if this sounds a bit denigrating to some of you, but this is a generalisation of the programmers I have seen coming out of universities lately - absolutely great C#/Java/.Net/whatever programmers... but the moment you take them out of their favourite language, they all fall down and become next to useless.

    Frankly, before they learn Java/C#/whatever - languages which do most of the hard slog for you (like memory management) - students should do at least a year with C or Pascal and learn how the code and the hardware relate.

    However, this plea is going to fall on deaf ears, and in six months, once again, I am going to have to re-train a bunch of useless graduates coming out of Uni to realise that the Big Systems do NOT behave like souped-up desktops.

  36. Jeremy Pavier

    Ye Guilde of Code Crafters and Software Journeymenne

    I think what gladdens my heart most is the bunker mentality that forms around the false dichotomies that people construct in their heads over this sort of debate. I love the idea that exponents of so-called advanced technologies can't see beyond the particular and grasp a general point. That's going to keep people like me in work for many years to come, and long may the pension contributions continue rolling in.

    "... there is little excuse for writing inefficient code when witing efficient code would have been just as easy. Only ignorance." Exactly.

    But ignorance seems to be one software skill that is never in short supply: "Managers ask you to optimize programming TIME, not run speed; that's the real world." QED. Obviously a developer with little or no experience in 'the real world'. Certainly a developer who only partially understands the delivery of functionality, and has no grasp of the concept of delivering and selling a software product.

    Oh, here's another one: "I have NEVER been asked, in my entire career, to improve the performance of any code I've written. I have rarely been asked to improve the performance of anyone else's code." Is it axiomatic to assume that because they have never experienced these requests, they are never made? It may be just that they are never made to this person. Perhaps because they are not deemed capable in this area. If we were to accept this assertion, it would cause us to reject any form of refactoring or defect repair, because everything is always written correctly the first and only time the code is ever touched. Blimey, I take my hat off to you matey, if you are that good!

    Working in an industry where the very suggestion that a bit more knowledge could be beneficial can be greeted with such vitriol and ad hominem attacks just underlines the conclusion I have spent years coming to. The technologies may seem new and interesting, but there's a whole bunch of medievalists out there trying to construct a priestly shell of exclusive ignorance and magical thinking around a craft they don't really understand. But as long as they mutter the right incantations, reinforce the dogma and flame the heretics, the mediocre and unskilled should find themselves a cosy niche in which to keep warm for a while. One day the Software Guild may be where we send all of our idiot children instead of the Church.

  37. Richard

    Lovely <rubs hands>

    I make my money taking "performance is not a priority" systems, and making them work. So please dear programmer readers, don't listen to this guy. It doesn't apply to you. Systems scale vertically forever and ever. Piss-poor performance never harmed a system. Thank you.

  38. Paul

    Fix your article

    Dave Jewell, when are you going to fix your article? The very first comment *clearly* outlined why your example is wrong. I don't do much Win32, so I'm not sure about the Unicode issues brought up, but if the strings involved could be more than plain ASCII, you've got a whole other issue to deal with. You call the strstr problem a "side issue", but you're ignoring the fact that it doesn't matter how efficient it is if it's wrong.

    The thing is, efficiency takes a little work. How about putting some work into the example? You could put the strstr in a loop, with each pass checking that the character preceding the match is either out of bounds or a space, and the character after the match is either a space or a NULL. But then you'd have to confront the issue of why people just hack strings into arrays and call it good. Your example makes it seem easier to be efficient, but this often is not the case. You can still make a case for efficiency despite the code often being more complex. You just have to acknowledge that it's not for all situations, and it's not for everyone.
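
    Something like this, perhaps (an untested sketch of exactly that loop):

        #include <string.h>

        /* Does `word` appear in the space-separated `list` as a whole word? */
        static int has_extension(const char *list, const char *word)
        {
            size_t len = strlen(word);
            const char *p = list;

            while ((p = strstr(p, word)) != NULL) {
                int head_ok = (p == list) || (p[-1] == ' ');
                int tail_ok = (p[len] == ' ') || (p[len] == '\0');
                if (head_ok && tail_ok)
                    return 1;
                p += 1;   /* false match inside a longer name; keep looking */
            }
            return 0;
        }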

  39. mats andersson

    grow up, kids

    Perf is important, but the article has the wrong focus. Optimising some string comparison doesn't make sense; using correct data structures and overall design does. (Unless you’re writing a string lib.)

  40. Kristian Walsh

    Encoding is not relevant here - this is not text processing

    Character encoding here is a red herring. The OpenGL API in question returns a sequence of bytes, not characters [ const GLubyte* glGetString( GLenum name); ], and although never explicitly stated, the feature names are coded using ISO646/US-ASCII. Despite using a data structure (the C string) normally used to contain text, these are actually a series of identifiers. (If the delimiter byte were 0xFF, and not 0x20, there would be no confusion here at all.)

    To determine whether your desired identifier is present in the list, strstr is fine. As already pointed out, however, you need to check that the value strstr() returns is not the head of a different identifier. This is trivial to check, though.

    Where encoding will bite you is if you had created a string object for your desired feature using some odd encoding, and then used the raw bytes of that string as a search pattern - but that would be another illustration of the author's point: many developers not understanding what is really happening as their code runs.

    The central point of this article is still valid - OO frameworks encourage a "bureaucratic" approach to programming: guiding data through a series of needless processes because "to calculate the result requires object A's method, which takes type B, which can only be got from object C, which you can only create from a type D, which you can only create from what you've got using an object E". Many developers simply don't understand the costs of creating these worthless, fleeting objects.

    PS. I hope I'm not the only person who spotted that you've mistakenly used "<>" as the inequality operator!

  41. Blain Hamon

    Hey! You kids get off my lawn!

    Programming has always been a tradeoff. You know it as well as I do. If you really wanted to get the most performance, you could write your own custom strstr, dropping even the function-call overhead, looping through character by character, shaving off the cost of memory lookups by doing compares against constants, which stay in the opcode. Better yet, C allows for inline assembly for a reason!

    But that would defeat the point. Libraries are inherently inefficient in that they have to be generalized, and won't reach the fine tuning we could achieve by rolling our own. Yet we use them because unwritten code is debugged code, and we read code more than we write it. In other words, by taking a step back, it's easier to reduce errors, etc, etc. Not only that, but a library allows a coder to share his skills with others, outweighing the losses.

    I know that key binding is much slower than using the older event handling and hand-updating values. And common sense dictates that having an NSButton on a window means a lot of extra work for something that I could, theoretically, pre-render. But having the key bound means I don't have to pore through my code looking for a missed update, and with Quartz Extreme, that code is offloaded to the GPU, meaning better performance, contrary to common sense.

    Does that mean we should not heed the point of the article? No. Personally, I hope he leaves it as is, or all these comments will make no sense. NSString is a bad example for the reasons mentioned above, but that reinforces the underlying crux: use the right tool for the job.

    C is a Honda Civic: light, fast, and good for small things. Objective-C is a dump truck: heavy, strong, and able to help you move a lot in one go. It makes no sense to use a dump truck just to bring home a bag of potting soil. It's pure waste to use a Honda Civic to carry tons of dirt away from the construction site of an underground parking lot.

  42. Dave Jewell

    RE: Lovely <rubs hands>

    Richard said: "I make my money taking 'performance is not a priority' systems, and making them work. So please dear programmer readers, don't listen to this guy. It doesn't apply to you. Systems scale vertically forever and ever. Piss-poor performance never harmed a system. Thank you."

    Get back to Redmond where you belong! ;-)

    Dave

  43. Dave Jewell

    RE: Fix your article

    Paul said:

    "I don't do much Win32, so I'm not sure about the Unicode issues brought up, but if the strings involved could be more than plain ASCII, you've got a whole other issue to deal with."

    Paul -- under what circumstances do you anticipate that glGetString() will return something other than plain ASCII? Kristian Walsh points out that encoding issues are a red herring here -- I tend to agree. Worrying about getting non-ASCII strings from glGetString is a bit like writing your program so as to cope with the fact that strlen() might sometimes return a float! Do you program that way? ;-)

    "...you could put the strstr in a loop, with each pass checking that the character preceding the match is either out of bounds or a space, and the character after the match is either a space or a NULL."

    And if I were writing production code, I might do that - though, as I pointed out elsewhere, using gluCheckExtension() would be better. Let's try and remember that we're talking about code which might break if some hypothetical OpenGL extension gets added at some point in the future. It's rather a side issue to the main point.

    Dave
