Asimov wrote stories in which the robots are given strong AI, and those stories tend to end in ways I'd guess would be objectionable to all the "don't use my software in military applications" pacifists.
One in particular features robots that have been given strong AI and "fuzzy" laws, as suggested: they are free to interpret the laws so that, for example, they need not obey the orders of an idiot. Initially this seems very successful, but the reader (though not the humans in the story) discovers that these robots have re-assessed the provided definition of "human" and concluded that in fact /they/ are the most human, and therefore the most deserving of protection from danger, such as the danger of being dismantled if they are discovered. It is clear that something very bad is likely to happen, but there the story ends.
If someone is opposed to a particular war, or to all wars, that's a political issue that should influence their choice of government, not where they buy beans or what software license they choose. Muddling such different things together is how you end up with audiences trying to change the plotline of a TV show by boycotting products from a company that advertises on the network that distributes the show.