
'Just give me any old date and I'll make it work' ... said the VB script to the coder

Michael H.F. Wilkinson Silver badge

The case of the 16 bit signed ints reminds me of something horrible

I was trying out the AMD C compiler on some parallel code on our 64-core Opteron server to see if it would optimize more than gcc. Indeed, on some of our smaller images (1 Gpixel or so) it worked well, obtaining some 10% more speed, which is nice. However, on a 3.6 Gpixel image it crashed. Compiling the same code with gcc worked fine.

I checked the code (designed to work up to 4 Gpixel, which is the maximum for GeoTIFF images anyway) and found we were correctly using a type "Pixel" defined as a 32-bit unsigned integer. A counter of type Pixel was used to traverse the (1D) array of pixels in each for loop. On a hunch I created a 2 Gpixel image and ran the AMD-compiled code. It worked. Create an image 1 pixel larger and it crashed. Somehow the optimizer turned the 32-bit unsigned counter into a signed counterpart, causing havoc. AARGH.

I then changed the definition of Pixel to a 64-bit signed integer, and it still crashed at the 2 Gpixel + 1 barrier. Turning off various optimizations might have solved the problem, but that would defeat the very purpose of using the AMD compiler. We decided to stick with gcc.

Note that this was quite an old AMD compiler, and new versions might have solved the issue.
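For context, the loops in question looked roughly like the following. This is only a minimal sketch of the pattern: the function name sum_pixels, the uint16_t pixel values and the exact loop body are my illustration, not our actual code. With Pixel as a 32-bit unsigned integer the counter can legitimately exceed 2^31, and a compiler that quietly treats it as signed breaks exactly at that boundary.

    #include <stdint.h>

    typedef uint32_t Pixel;   /* 32-bit unsigned pixel index, as in our code */

    /* Sum all pixel values in a 1D image buffer of 'count' pixels.
     * Counts above 2^31 are valid for an unsigned 32-bit counter, but a
     * compiler that mis-optimizes the loop as if 'i' were signed will
     * misbehave once count exceeds 2,147,483,648. */
    uint64_t sum_pixels(const uint16_t *image, Pixel count)
    {
        uint64_t total = 0;
        for (Pixel i = 0; i < count; i++)   /* counter of type Pixel traverses the array */
            total += image[i];
        return total;
    }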

