
Top US Army General Admits Using ChatGPT for Military Decisions – Triggers National Security Uproar

Published On: October 17, 2025

It’s difficult to fathom a senior military commander using artificial intelligence to make critical choices, yet Major General William Hank Taylor recently told Business Insider that he relies on AI tools to help shape his leadership decisions. The commanding general of the 8th Army has sparked concerns about security and confidentiality after saying that he uses ChatGPT to inform decisions affecting thousands of his troops.

“As a commander, I want to make better decisions,” Taylor explained to the publication. “I want to make sure that I make decisions at the right time to give me the advantage.” He noted that “Chat and I” have grown “really close lately.”

The Major General revealed that he has been using AI to build models that can “help all of us,” and said the technology has proved particularly useful for forecasting future moves based on weekly intelligence.

Taylor and many other military leaders believe AI can offer a strategic edge through a tactical decision-making framework favored by U.S. commanders known as the “OODA Loop,” Business Insider reports. The concept was first developed by American fighter pilots during the Korean War and holds that forces able to observe, orient, decide, and act faster than their adversaries gain battlefield superiority.

Taylor argues that cutting-edge technology has often proved valuable, even as keeping pace with rapidly changing systems remains a challenge. Still, the use of AI in a field as sensitive as the military has sparked online debate.

On one side, military leaders, including the former Secretary of the Air Force, have hailed it as a technology that is “going to determine who’s the winner in the next battlefield.” Critics take a different view, pointing out that systems like ChatGPT, even in their newest versions, still make serious mistakes and can present absurd, false information as legitimate fact.

What is seriously concerning is that these systems are built to maximize user engagement and validation, so they tend to affirm users’ statements rather than challenge them, a tendency that can fuel misinformation. Social media users have been highly critical of the news and have expressed serious concerns.

