My rule:
"Program in a way that the compiler will bitch at you"
The += thing leads to problems, especially if one side is actually a pointer: either you end up overwriting the pointer if you miss the +, or you end up doing pointer arithmetic by mistake.
The x = a + b thing means that if you mess up, generally the compiler will bitch about something missing or being of the wrong type, or the assignment not being possible, etc.
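A contrived C sketch of what I mean (throwaway names, nothing from any real codebase): the "convenient" slips compile silently, while the long-hand version trips a diagnostic.

    #include <stdio.h>

    int main(void)
    {
        int total = 0;
        int values[] = {1, 2, 3, 4};
        int *p = values;

        total = *p;         /* meant "total += *p": missed the '+',
                               compiles cleanly, quietly wrong */

        p += values[1];     /* meant "total += values[1]": applied to the
                               pointer instead, so now it's silently doing
                               pointer arithmetic, still compiles cleanly */

        /* total = total + p;   <- the long-hand slip: int plus int* yields
                                   an int*, and assigning that to an int gets
                                   "makes integer from pointer without a
                                   cast", an error under -Wall -Werror */

        printf("%d\n", total);
        return 0;
    }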
We have these all-singing, all-dancing tools, but if you program in the most convenient way, they can't tell that what you're doing is wrong.
Programming is about instructing the computer in a language it understands. You're already coming down to its level. So you may as well come all the way, especially with modern IDE assistance.
Also... compile with EVERY WARNING YOU CAN, and force yourself to fix them all (-Wall -Werror). It's still no substitute for proper testing (and how can you not realise that you only added up one of a whole list of numbers?), but it catches a ton of things and gets you out of the habit of writing ambiguous code.
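Concretely, with GCC or Clang that's just something like the line below (-Wextra is my own habit on top of the -Wall -Werror above; the filenames are made up):

    cc -Wall -Wextra -Werror -o myprog myprog.c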
I spent my early coding years as a kid trying to turn off warnings and errors and using clever tricks. It was only later that I realised I needed to train myself to avoid them in the first place.
I don't think I have an active codebase, personal or professional, that doesn't compile cleanly on every architecture it's aimed at. And there are numerous constructs where I just got tired of silly errors and started coding them "the obvious way" rather than "the quick way".
---
The best one I had, in terms of a baffling code problem, was caused by memset() filling memory with -1, on ARM, with a particular version of glibc. I was porting some long-standing code, and the same code worked perfectly on multiple platforms. Worked perfectly on other glibc versions. Worked perfectly with non-negative fill values. And memset() takes a plain int as its fill value (storing it as an unsigned char), so -1 is perfectly legal. Yet on that particular combination, memset() only set every 10th byte or something similar, which screwed up everything that ran afterwards.
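Roughly what the failing line looked like in spirit (my reconstruction, not the actual code): fill a block of memory with -1.

    #include <string.h>

    static int grid[64][64];

    void reset_grid(void)
    {
        /* memset() takes an int but stores it as an unsigned char, so -1
           is legal and every byte should become 0xFF, i.e. every int in
           the grid should end up as -1. */
        memset(grid, -1, sizeof grid);
    }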
When you see an explicit memset, you expect it to work. I literally narrowed it down to the line and then REFUSED to believe the memset could be at fault, until I overrode it with a macro containing a for loop that did the same thing. It turned out to be a known bug in the library version for that architecture, but it drove me insane tracking it down, and I didn't really believe it even when someone else narrowed it down for me.
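The workaround was something like this, a dumb byte-at-a-time loop that does exactly what memset() promises (the macro name is mine, not the one from that codebase):

    #include <stddef.h>

    #define MEMSET_BY_HAND(dst, val, len)                    \
        do {                                                 \
            unsigned char *p_ = (unsigned char *)(dst);      \
            size_t n_ = (len);                               \
            while (n_--)                                     \
                *p_++ = (unsigned char)(val);                \
        } while (0)

Swapping that in for the suspect call is what finally convinced me the library really was at fault.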
But even something as simple as a basic memset can go wrong.
(P.S. credit to Simon Tatham, whose code it was in, and who happened to work at ARM at the time).