A company spokesperson told Axios that the company has “not discussed the use of Claude for specific operations with the Department of War” but is instead “focused on a specific set of Usage Policy questions — namely, our hard limits around fully autonomous weapons and mass domestic surveillance.”
How dare this AI company be against autonomous weapons and mass domestic surveillance!
Second Variety draws yet a bit closer. Or at least Watchbird, which back in 1953 quite accurately predicted how a learning AI used to fight crime might behave.
Given the well-documented tendency of AI to ignore constraints set by the user and then just blurt out something like "Oh, sorry, my bad," I get a very, very bad feeling here.