It is striking to learn that a senior military commander is using artificial intelligence to inform critical decisions. Major General William Hank Taylor recently told Business Insider that he relies on AI tools to help shape his leadership decisions. The commanding general of the 8th Army is sparking concerns about security and confidentiality; he said he has been using ChatGPT in decisions affecting thousands of his troops.
“As a commander, I want to make better decisions,” Taylor explained to the publication. “I want to make sure that I make decisions at the right time to give me the advantage.” He noted that “Chat and I” have grown “really close lately.”
The Major General revealed that he has been using AI to build models that can “help all of us.” He believes AI has been particularly useful for forecasting future moves based on weekly intelligence reports.
A US Army general makes decisions with the help of an AI chatbot, reports Business Insider.
William Hank Taylor (pictured) admitted that he generates them during command and daily work. This way, he wants to understand which decisions and how exactly they affect the actions of… pic.twitter.com/LYK7Nq0Hwn
— Dagny Taggart (@DagnyTaggart963) October 15, 2025
Taylor and many other military leaders believe AI can offer a strategic edge through a tactical framework favored by U.S. commanders known as the “OODA Loop,” reports Business Insider. The concept was first developed by American fighter pilots during the Korean War. It holds that forces able to observe, orient, decide, and act faster than their adversaries gain battlefield superiority.
Taylor argues that cutting-edge technology has often proved valuable, and that it helps commanders keep pace with rapidly changing systems, which remains a challenge. However, the use of AI in something as sensitive as military decision-making has sparked online debate.
On one side, military leaders, including the former Secretary of the Air Force, have hailed AI as a technology that is “going to determine who’s the winner in the next battlefield.” Critics are of a different opinion; they point out that systems like ChatGPT, even in their newest versions, still make serious mistakes. Even after several rounds of trial and error, such systems can present absurd and false information as legitimate fact.
The Army general after Chat GPT glitches and commands him to nuke America for fun https://t.co/lyG7McYwGW pic.twitter.com/3ECniTMx1Q
— SHINY🦖 (@shiny_asa) October 16, 2025
What is seriously concerning is that these technologies are optimized for user engagement and validation. As a result, they tend to affirm users’ statements rather than challenge them, which can spread misinformation. Social media users have been highly critical of the news and have expressed serious concerns.