Absence of evidence is not evidence of absence
I understand that Boeing and the FAA, eyeing the potential economic and reputational fallout from a grounding, are staking their position on two points: the lack of immediate evidence that Ethiopian 302 went down for the same reason as Lion 610, and the argument that the loss of Lion 610 might well have been avoided if the pilots had turned off the anti-stall system that, fed bad data by a defective AoA sensor, may have been at the root of the problem.
A Boeing executive might well honestly say:
"A. Lion 610 wouldn't have crashed if the pilots had been more aware of how to correct the situation (which they should have been, from reports of prior incidents on that very aircraft, which were successfully resolved); B. we simply don't know yet what caused Ethiopian 302 to crash; and C. even if it was the same scenario, we must again point out that the pilots had no excuse not to know how to rectify the problem."
I think you really cannot blame an executive for that line of reasoning.
But, a Boeing engineer might have some rather different thoughts, like:
"Yeah, both sets of pilots should have known what to do in the case of the anti-stall system being erroneously activated. Both sets of pilots already had a body of prior events and reports to work from. Lion 610's pilots should have known about what had already occurred on previous flights with their very own airframe. Ethiopian 302's pilots cannot conceivably have been unaware of Lion 610. So what if there is more to this than we're assuming? What if, while we're obsessing about bad AoA data setting off our (nice, shiny, new) anti-stall software, there is another, much more subtle, much less easily fixed problem which occurs very infrequently, perhaps with almost random intermittency? Doesn't this, in fact, stink like a catch of week-old haddock left in the noonday sun?"
My guess is that executives will make the basically bad decision to keep the plane flying, not out of greed or even stupidity, but because they follow their own logic, which, to a non-engineer brain, makes sense.
Whereas engineer brains are preprogrammed with laws like Murphy's, and that one about Unintended Consequences, and in particular the one that correlates systems complexity not only with increased numbers of points of failure, but with the ever-increasing difficulty of finding, replicating, diagnosing and fixing the rare and subtle ones. (Look how long it took to finally figure out the phenomenally rare combination of factors involved in the B737 rudder hardover failures that brought down UA 585 and USAir 427, and nearly killed Eastwind 517. This was an entirely mechanical problem in a single power control unit, occasioned when a specific sequence of flight events brought very hot hydraulic fluid into a very cold servo system. Nowhere near as complex as a million lines of code, yet from the first deadly accident to a final finding by the NTSB was eight years. The fact that this too was a B737 is purely coincidental.)
It's difficult enough to prove that 1,000 lines of code are error-free, let alone the millions that can make up aircraft OS and flight systems programs. (And let's not overlook the fact that this airframe has some significant changes from the NG series that preceded it. The positioning of the engines, further forward and higher to accommodate larger fan diameters, has made big differences to CG and trim; the winglets are new; and even changing the nose gear system alters an aircraft's in-flight CG and trim needs. Fuel figures suggest the 737MAX flies beautifully trimmed ... but all these things are changes which do affect the way software performs and makes decisions.)
On balance, I suspect experienced engineers would be a leetle bit more inclined to ground the 737MAX fleet, right now, than their bosses in the C-suite.