ARE "TWITTER RANDOS" A CURSE OR AN ASSET? WARFARE IN THE TIME OF SOCIAL MEDIA

ARE "TWITTER RANDOS" A CURSE OR AN ASSET? WARFARE  IN THE TIME OF SOCIAL MEDIA

By Steve Douglass 

Maj. Claire Randolph, Chief of Weapons & Tactics for U.S. Air Forces Central Command (AFCENT), recently warned about the operational security (OPSEC) risks of social media, especially when open-source observers track and publish the movements of U.S. aircraft, making otherwise sensitive operational details broadly visible.

That point — that public flight tracking and social-media reporting can make mission data visible in ways militaries would normally treat as restricted or classified — isn’t a fringe view. Defense analysts and military officials in multiple countries have raised similar concerns about adversaries exploiting open-source intelligence (OSINT) platforms for actionable insight. But the phrasing seen in social media paraphrases (“considered Secret or Top Secret if done internally”) is informal and not an official classification statement from the U.S. Department of Defense.

However, state-sponsored intelligence agencies are the real strategic threat, not random people on social media. They have satellites, cyber capabilities, signal interception, and professional analysts. A single tweet or flight-tracking post isn’t dangerous by itself.

So it’s not really “state intel vs. random Twitter.” It’s state intelligence agencies plus a massive trail of public data exhaust.

A single tweet might not matter. But aggregated posts, flight tracking feeds, satellite imagery, shipping trackers, Telegram channels, and enthusiast communities can collectively provide patterns. Intelligence services are fairly good at stitching those patterns together.
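To make that concrete, here is a minimal sketch in Python, with entirely invented observations, of what that stitching looks like at its simplest: each report alone is trivia, but bucketing reports by place and time and keeping only the corroborated buckets surfaces a recurring pattern.

```python
from datetime import datetime

# Hypothetical, hand-invented reports from unrelated open sources.
# Each one alone is trivia; together they describe a pattern.
observations = [
    # (source type, location, UTC timestamp)
    ("flight_tracker", "RAF Fairford", "2025-03-01T05:10"),
    ("spotter_photo",  "RAF Fairford", "2025-03-01T05:25"),
    ("telegram_post",  "RAF Fairford", "2025-03-01T05:40"),
    ("flight_tracker", "RAF Fairford", "2025-03-08T05:05"),
    ("spotter_photo",  "RAF Fairford", "2025-03-08T05:30"),
    ("sat_image_note", "Aviano AB",    "2025-03-04T14:00"),
]

# Bucket reports by (location, weekday, hour), then keep only buckets
# corroborated by more than one independent source type.
buckets = {}
for source, loc, ts in observations:
    t = datetime.fromisoformat(ts)
    buckets.setdefault((loc, t.strftime("%A"), t.hour), set()).add(source)

for (loc, day, hour), sources in buckets.items():
    if len(sources) > 1:
        print(f"{loc}: recurring activity on {day}s around {hour:02d}00Z, "
              f"corroborated by {len(sources)} independent source types")
```

Real fusion pipelines are vastly more sophisticated, but the underlying logic (correlate, corroborate, look for recurrence) is the same.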

That said, it’s also fair to be skeptical of exaggerated OPSEC warnings. Sometimes officials talk as if social media alone is the threat, when in reality it’s a supporting piece of a much larger intelligence picture. State actors already have many tools. Open-source data just lowers the cost and speeds up analysis.

The reason OPSEC still matters is that those same agencies actively exploit open sources. Any individual post, flight track, or satellite image may be unclassified and legally accessible, but aggregated and analyzed at scale, the pieces can reveal patterns about movements, capabilities, or intentions. The power isn't in one post; it's in the data fusion.

The real risk is how structured actors collect and fuse open data, but militaries aren’t helpless in that environment.

There are practical countermeasures. Aircraft can limit or disable ADS-B broadcasting in sensitive situations. Movements can be blended into routine training patterns. Units can normalize deployments so that activity spikes don't signal intent. Deception operations, including misleading training flights, timing noise, and forward deployments masked as exercises, were part of military practice long before social media existed. The difference today is that the "audience" includes a global OSINT community.
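As a toy illustration of why that pattern management matters, here is a sketch (Python, with invented sortie counts) of the trivial anomaly check any observer could run: a unit that flies a steady, busy baseline gives a surge somewhere to hide, while a normally quiet unit telegraphs intent the moment activity spikes.

```python
import statistics

def flags_spike(daily_counts, today, threshold=2.0):
    """Flag today's activity if it sits more than `threshold`
    standard deviations above the historical mean."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.pstdev(daily_counts) or 1.0  # avoid divide-by-zero
    return (today - mean) / stdev > threshold

# A unit that rarely flies: a surge of 12 sorties stands out immediately.
quiet_unit = [0, 1, 0, 2, 1, 0, 1]
print(flags_spike(quiet_unit, today=12))   # True

# A unit that trains hard every week: the same 12 sorties blend in.
busy_unit = [9, 11, 10, 12, 8, 11, 10]
print(flags_spike(busy_unit, today=12))    # False
```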

More importantly, OSINT can absolutely be an asset. Publicly visible bomber movements can signal deterrence. Open imagery can shape narratives. Controlled visibility can reassure allies or send messages to adversaries without formal statements. In modern strategy, sometimes you want to be seen — just on your terms.

In some ways, OSINT acts like a high-level map of what’s happening on the ground or in the air. It doesn’t replace classified intel, but it complements it and sometimes even points analysts toward things they didn’t know to look for. It’s a reminder that in the modern world, information is everywhere, and smart eyes can turn public scraps into serious strategic insight.

So the reality isn’t “OSINT is dangerous and uncontrollable.” It’s that open information is now part of the operational environment. Militaries can reduce exposure, inject ambiguity, or deliberately leverage visibility when it serves strategic goals.

The key distinction is this: unmanaged openness can create risk, but managed visibility can create advantage.

But that's the inherent tradeoff in an open, free society. Transparency, free speech, and broad access to information are strengths, yet they also mean adversaries can access much of the same information.

OPSEC isn’t about fearing random Twitter users — it’s about understanding that powerful actors can exploit openly available data, and managing that risk without undermining the freedoms that make that openness possible.

Blaming “social media” alone can oversimplify the issue. Social platforms didn’t create intelligence collection — they just made some forms of open-source data more visible and easier to aggregate. State intelligence services were exploiting publicly available information long before Twitter existed.

When officials single out social media, it can sound like shifting responsibility away from institutional adaptation. Militaries control many of the variables: emission control procedures, transponder policies, training patterns, deception doctrine, information release strategy, and force posture signaling. If sensitive patterns are consistently visible, that’s often a process or policy issue, not just a platform issue.

At the same time, social media does accelerate dissemination and pattern-building. What used to sit in scattered aviation forums now spreads globally in minutes. That changes the speed of exposure, even if it doesn’t fundamentally change the nature of intelligence work.

Framing the problem as “Twitter is the threat” is reductive. The more accurate framing is that open information ecosystems are part of the operational environment. The responsibility to adapt largely sits with institutions that operate within that environment.

Competent military planners absolutely account for the information environment — including OSINT — as part of operational design. Just like terrain, weather, logistics, and enemy capabilities, the visibility of movements is now a planning factor. In modern operations, you assume that aircraft movements, ship transits, satellite imagery, and even personnel activity could be observed and analyzed in near-real time.
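To show how low the bar for "observed in near-real time" actually is, here is a sketch against the OpenSky Network's public REST API, assuming anonymous access to its /states/all endpoint is still available (it is rate-limited, and it only shows aircraft that are actively broadcasting): a few lines of Python return live state vectors for any bounding box on Earth.

```python
import requests

# Bounding box roughly covering the eastern Mediterranean (illustrative).
params = {"lamin": 31.0, "lomin": 25.0, "lamax": 37.0, "lomax": 36.0}

resp = requests.get("https://opensky-network.org/api/states/all",
                    params=params, timeout=30)
resp.raise_for_status()

# Per the OpenSky docs, each state vector is a list: index 1 is the
# callsign, 2 the origin country, 5/6 the longitude/latitude.
for state in resp.json().get("states") or []:
    callsign = (state[1] or "").strip()
    if callsign and state[5] is not None and state[6] is not None:
        print(f"{callsign:<8} {state[2]:<16} "
              f"lat={state[6]:.2f} lon={state[5]:.2f}")
```

Anyone with a laptop can run that. Planners have to assume someone is.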

Good planning doesn’t treat open-source visibility as a surprise variable. It factors in emission control, deception, pattern management, timing, and narrative shaping from the outset. If a movement needs to be hidden, planners build concealment into the concept. If it’s meant to signal deterrence, they may deliberately allow it to be seen.

Blaming OSINT after the fact can sound like an excuse because the information ecosystem is not new. The professional standard today is to assume transparency unless you actively create opacity. In other words, visibility is the baseline condition — and operational art has to be designed around that reality, not react to it.

When you see posts showing a new Chinese stealth aircraft, it's tricky to tell what's real. Sometimes it's genuine intel: a real test flight, or leaked photos giving hints about the design, engines, or stealth features. Even blurry images can reveal something if you know what to look for.

But on the other hand, it could be propaganda. Militaries know how powerful a single image or video can be, so they might stage shots, pick flattering angles, or even tweak them digitally to make the plane look more advanced than it actually is. The point is often more about sending a message than showing the truth.

The bottom line is that a single post is just a clue. Analysts usually need multiple sources, context, and careful comparison before they can say anything definitive. One image alone doesn't prove whether an aircraft is truly new or just part of a show. One thing is clear: China has mastered the art of the social-media leak.
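Part of that comparison work can even be automated. Here is a sketch using the Pillow and ImageHash libraries (the file names are placeholders) that checks whether a supposedly new photo is perceptually close to an already-circulating one; it catches re-crops and re-posts, though obviously not staged or genuinely novel shots.

```python
from PIL import Image      # pip install pillow
import imagehash           # pip install ImageHash

# Placeholder file names; substitute real downloaded images.
new_post = imagehash.phash(Image.open("claimed_new_aircraft.jpg"))
archive = imagehash.phash(Image.open("known_2023_photo.jpg"))

# Hamming distance between perceptual hashes: small values mean the
# "new" image is likely a crop, resize, or recompression of an old one.
distance = new_post - archive
if distance <= 8:
    print(f"Likely recycled imagery (distance {distance})")
else:
    print(f"No match to this archive image (distance {distance})")
```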

None of this is new. The attack on Pearl Harbor happened on December 7, 1941. Early that Sunday morning, the Japanese navy launched a surprise strike on the U.S. naval base there. It was devastating: battleships were sunk or damaged, planes were destroyed on the ground, and over 2,400 American service members and civilians were killed.

The Pearl Harbor attack is a classic example of a state fusing open observation with clandestine collection. Japanese agents in Hawaii, along with intercepted communications and open-source observation of the harbor, gave Tokyo insight into U.S. fleet dispositions. The U.S. underestimated the intelligence threat, didn't fully act on warning signs, and operational security failures left the fleet vulnerable.

The lesson is that adversaries exploiting open or partially open information isn’t new. What has changed is the scale, speed, and accessibility, thanks to modern technology and OSINT. Back then, it was human spies and intercepted messages; today, it’s social media, flight trackers, commercial satellites, and global online reporting. The principle is the same: if intelligence collectors can see patterns you haven’t controlled or accounted for, it becomes a vulnerability.

So, whether it’s 1941 or 2026, effective planning always needs to integrate an awareness of what others can observe and infer. Failing to do so — or blaming the platform instead of gaps in operational planning — repeats the same historical mistakes.

In closing, back in my day, the threat was classified as "guys in lawn chairs," but I'm still a firm believer in the idea that you will only see what they want you to see. 


