Islamabad
Friday, November 28, 2025

Autonomy’s New Fighter: Innovation or Reckless Leap?

From the Wright brothers’ first flight to today’s fifth-generation fighters, air dominance has been shaped by the fighter pilot: a human mind navigating turbulent skies at supersonic speeds. As threats evolve, however, so does technology. Its latest manifestation, unveiled in October 2025, is Shield AI’s X-BAT, billed as the world’s first AI-piloted vertical take-off and landing (VTOL) combat aircraft, combining long range, multirole capability, and autonomous operation. Yet its operational and technical uncertainties, high costs, and ethical challenges make these capabilities largely aspirational, requiring careful oversight and strategic evaluation.

The X-BAT, a Group 5 UAV, appears to offer features that could transform airpower. It is claimed to have a maximum range of up to 2,000 nautical miles, a fighter-like ceiling of 50,000 feet, and a highly modular, platform-agnostic design built on an open mission systems architecture. Its 26-foot ‘cranked kite’ planform uses a tailless blended-wing-body design optimized for a reduced radar signature. Shield AI claims the X-BAT would use the same powerplant as the F-15 and F-16 and could theoretically reach a top speed of Mach 1.2. These much-touted claims, however, mask credibility concerns alongside technical and performance limitations, with some analysts suggesting the exaggerated capabilities are intended to offset declining sales amid the CEO’s resignation.

The X-BAT, built on the company’s earlier V-BAT, aptly illustrates how promotional claims often outpace verified results. Although the makers claim the V-BAT has flown over 170 sorties in Ukraine, the US Defense Department has released no independent assessment of its performance. The projected timeline for this loyal wingman also seems overly ambitious: announced in 2024, it aims for a flight demonstration by 2026 and production by 2029. This tight schedule overlaps with several US Air Force (USAF) and Navy programs testing AI-enabled aircraft teaming, and the drone must still master VTOL and other flight modes before full testing can begin.

Additionally, the Air Force has already selected the Hivemind software for General Atomics’ YFQ-42A under its Collaborative Combat Aircraft (CCA) program. Likewise, the concept of a weaponized loyal-wingman UAV for active air-to-air combat remains in testing and has not been fully operationalized. This suggests that, at present, the X-BAT’s marketing rhetoric may be technologically premature and operationally unproven.

Another heavily marketed feature of the X-BAT is its ability to take off and land at angles approaching 90 degrees before transitioning via thrust vectoring. This allows the aircraft to execute precise vertical landings from horizontal flight, in a maneuver resembling Pugachev’s Cobra, without the need for a runway. The design is being promoted particularly for contested environments such as the Pacific, where states like China have invested heavily in long-range strike platforms such as cruise and ballistic missiles, kamikaze drones, and hypersonic weapons.


The X-BAT’s runway-independent operations would enable sustained air activity even when infrastructure is degraded or destroyed, aligning well with the USAF’s concept of Agile Combat Employment, which disperses aircraft to remote or improvised airstrips to deny the enemy concentrated targets. However, developing a flight-control system that reliably handles widely varying conditions remains challenging, and the V-BAT’s VTOL capabilities will require further refinement.

Additionally, Shield AI states that its Hivemind software can enable controlled swarming; conduct strike, reconnaissance, and electronic-warfare operations; and coordinate with manned platforms even in GPS-denied environments. Against this backdrop, India signed an agreement with Shield AI in November 2024 to co-produce the V-BAT system. This could lay the groundwork for future X-BAT co-production, potentially giving India dispersed, survivable, and networked strike and reconnaissance options in heavily jammed environments while evading radar detection.

Though operationalization of the X-BAT is still distant, a potential acquisition by India could have consequences, especially for Pakistan. While Pakistan already maintains a layered air-defense architecture, India’s acquisition of this or similar technologies in the near to medium term necessitates vigilance on Islamabad’s part.

While much is being advertised about this new technology, several practical and ethical challenges remain. Shield AI’s apparent inattention to safety concerns has drawn sharp criticism, raising procedural and ethical questions about maintaining lawful and accountable control over the drone. Likewise, a fully autonomous X-BAT would remove human conscience and judgment from lethal decisions, creating the risk of failing to distinguish legitimate from illegitimate targets and potentially violating the Geneva Conventions. This concern is amplified by the fact that genuinely reliable autonomy in complex combat environments remains far from achievable, despite defense innovators’ claims.

This gap between promise and practice heightens the danger of unlawful or unaccountable engagements. Cost is another key factor in assessing unmanned alternatives to fifth-generation fighters. An F-35 costs approximately $100 million, versus roughly $27 million for an X-BAT. On paper, the drone is far cheaper; in practice, however, integrating VTOL, stealth, and sophisticated sensors into a larger, faster airframe would raise the actual cost. The logistics and maintenance required for remote operations would add further expense, revealing a trade-off between advanced capability and affordability.

Thus, while the X-BAT is being presented as an ambitious leap in airpower, its operational potential hinges on capabilities that remain untested. Policymakers must therefore insist on rigorous system verification, clear frameworks for human control, and crisis-management protocols to prevent technological advances from becoming destabilizing risks.

*The views expressed in this article are the authors’ own and do not represent TDI. The contributor is responsible for the originality of this piece. 

Sibra Waseem

Sibra Waseem is a Research Assistant at the Centre for Aerospace and Security Studies (CASS), Lahore. She can be reached at info@casslhr.com

