But TensorFlow is Open Source!
TensorFlow is Open Source, and Google and the wider machine learning sphere draw extensively from the open source community. This raises a chewy question for those of us who are NOT citizens of the USA and who have no democratic vote (the degree of democracy and the utility of the electoral mechanism can be debated elsewhere) with which to take a stand for or against the actions of the US military.
Essentially, whether one approves or disapproves, anyone who has submitted a patch to TensorFlow or any upstream component has contributed to that effort. Anyone who has helped diagnose and debug an issue has played a role in it. Even those innocent and ubiquitous Google Captchas feed into this in some way -- how else will the DoD identify vehicles, shop fronts and street signs with high accuracy?
This raises an important moral question about Open Source software. Your amusing cat-riding-a-skateboard detector might one day be used to target bombs -- are you sure you want to give it away on GitHub or Kaggle Notebooks? Sure, this outcome is vanishingly unlikely. Sure, you can invent a "pacifist BSD" license and/or write "may not be used to target bombs" at the top of each Python script. The chance is still there, and so the question remains open.
Targeting bombs may be hyperbole, but the automated and widespread surveillance of private citizens of another sovereign nation -- citizens who have no vote against such actions -- is still wrong in my opinion. Whether some extra-judicial entity on the other side of the world labels those citizens as "terrorist" or "non-terrorist" is entirely irrelevant. Air-strikes are also a reality, and those air-strikes are triggered and guided by such surveillance. Air-strikes are unilateral acts of war (let's call it what it is) and do kill civilians. According to the USA, they also eliminate targets labelled as "terrorists" by the aforementioned extra-judicial entities. According to me, that is debatable at a higher, international level.