The Risks of Transient Execution in AMD Processors

Silence isn’t always empty. Deep within the hum of a processor, speculative execution mechanisms murmur secrets they were never meant to share. In 2025, researchers exposed one such whisper—the Transient Scheduler Attack, or TSA—a subtle, timing-based breach emerging from AMD’s sophisticated instruction handling. This revelation didn’t just stir headlines; it provoked a reevaluation of how modern processors balance speed with compartmentalization.

While Spectre and Meltdown previously captivated the tech industry, TSA distinguishes itself by manipulating instruction scheduling rather than relying on mispredicted branches. Where Spectre coaxed data out of the branch predictor, TSA opens new leakage vectors by pressing against the very machinery processors rely on to issue, queue, and flush instructions. This post examines TSA not just as a security flaw, but as a moment of reckoning for how much trust we place in our silicon.


What Exactly Is TSA?

To grasp TSA, it’s essential to discard preconceived categories. This isn’t just another cache timing issue. Nor is it a rehashed buffer overread. TSA resides in the grey zone—between intention and accident, within the brief lifetime of instructions that were never supposed to reach completion.

AMD processors, particularly those in the Zen 1 through Zen 4 families, orchestrate work through an internal structure known as the scheduler. Micro-operations wait there, are evaluated for readiness, and are eventually dispatched to execution units. But speculative execution, a key performance booster, introduces risk alongside the reward: operations proceed ahead of confirmation, anticipating conditions that have not yet been resolved. If the assumption proves false, the CPU reverses course and cancels the speculative instructions.

Except, not everything is perfectly erased.

That’s the heart of TSA. The attacker doesn’t need a successful execution; they only need the CPU to almost execute the right thing, just long enough to influence cache lines or timing artifacts. TSA thrives on those microscopic windows between speculation and cancellation, where secrets leave fingerprints in structures like the L1D cache, store queues, or execution paths.
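
To make that concrete, here is a minimal sketch in C of the kind of cache-timing probe that gets paired with a transient window: flush a line, let the (possibly speculative) access happen, then time a reload to learn whether the line was touched. This is the generic FLUSH+RELOAD receiver, not an actual TSA exploit, and the hit threshold is an assumed value you would calibrate per machine.

```c
/* Minimal FLUSH+RELOAD probe sketch (x86-64, GCC/Clang; build with
 * "gcc -O2 probe.c"). This is the generic receiver side of a cache
 * side channel, not an actual TSA exploit. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, _mm_lfence, __rdtscp */

#define HIT_THRESHOLD 80 /* cycles; assumed value, calibrate per machine */

static uint8_t probe_line[4096] __attribute__((aligned(4096)));

/* Time a single load of the probe line, fenced so the measurement is
 * not reordered around the access being measured. */
static uint64_t time_load(volatile uint8_t *addr)
{
    unsigned int aux;
    _mm_lfence();
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                      /* the load being timed */
    uint64_t end = __rdtscp(&aux);
    _mm_lfence();
    return end - start;
}

int main(void)
{
    /* 1. Evict the line from every cache level. */
    _mm_clflush(probe_line);
    _mm_mfence();

    /* 2. In a real attack the transient window runs here and touches
     *    probe_line only if some secret-dependent condition held.
     *    A plain access stands in for that step. */
    (void)*(volatile uint8_t *)probe_line;

    /* 3. Reload and time: a fast reload means the line was touched. */
    uint64_t cycles = time_load(probe_line);
    printf("reload took %llu cycles -> %s\n",
           (unsigned long long)cycles,
           cycles < HIT_THRESHOLD ? "hit (line was touched)" : "miss");
    return 0;
}
```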


The Anatomy of the Leak

Unlike traditional vulnerabilities that rely on injecting malicious payloads or escalating privileges, TSA is more refined. It observes. It measures. It infers.

Let’s consider CVE-2024-36357. This variant permits a process to speculate across privilege layers, capturing subtle shifts in the L1D cache. In practice, this could enable an attacker to sniff kernel-resident data while operating entirely in user mode.

Another example, CVE-2024-36348, allows certain system registers to be read speculatively, even when protections like UMIP (User Mode Instruction Prevention) are active. That’s akin to seeing behind a curtain without ever touching it, just by watching how light bleeds around its edges.

CVE-2024-36349, seemingly minor at first glance, lets attackers speculatively infer the contents of the TSC_AUX register, which the operating system typically uses to tag which logical processor (and NUMA node) a thread is running on. On its own, this might seem unremarkable. But in concert with other speculative tricks, it forms part of a broader strategy: context disambiguation. Knowing which logical core is doing what, and when, is invaluable to adversaries refining timing attacks.
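
To see why TSC_AUX is worth leaking at all, here is a small example of its ordinary, legitimate use: rdtscp hands back the register alongside the timestamp, and on Linux the kernel typically loads it with an encoding of the current CPU and NUMA node. That placement information is exactly what an attacker refining a timing attack wants. The bit layout shown is the common Linux convention, not an architectural guarantee.

```c
/* What TSC_AUX normally exposes (x86-64 Linux, GCC/Clang). rdtscp
 * returns the time-stamp counter and copies TSC_AUX into its auxiliary
 * output; Linux typically loads that MSR with an encoding of the
 * current logical CPU and NUMA node. CVE-2024-36349 is about inferring
 * this value speculatively when it should not be readable. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtscp */

int main(void)
{
    unsigned int aux = 0;
    uint64_t tsc = __rdtscp(&aux);

    unsigned int cpu  = aux & 0xfff;  /* common Linux layout: low 12 bits */
    unsigned int node = aux >> 12;    /* remaining bits: NUMA node        */

    printf("TSC=%llu TSC_AUX=0x%x -> cpu=%u node=%u\n",
           (unsigned long long)tsc, aux, cpu, node);
    return 0;
}
```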


Disclosures That Changed the Game

These vulnerabilities didn’t emerge from idle guesswork. They were unearthed through rigorous automation. The collaboration between Microsoft and ETH Zürich yielded Revizor, a model-based fuzzing tool that treats CPU behavior like a black box and stress-tests its internal contracts.

Revizor doesn’t rely on insider knowledge of the microarchitecture. Instead, it hammers CPUs with randomized instruction sequences and compares what the hardware observably exposes against what the architectural contract says it should. If two executions that ought to be indistinguishable to software can be told apart through timing or other side effects, the case is flagged for further investigation.
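
To make that contract idea concrete, here is a toy relational test in C. It is emphatically not Revizor’s code or interface; it only shows the shape of the check: take two inputs the contract treats as equivalent, record what a cache-timing observer sees for each, and flag any difference. To stay self-contained, the toy victim leaks architecturally through its access pattern, whereas real tools hunt for leaks that appear only during transient execution.

```c
/* A toy relational test in the spirit of model-based fuzzing (not
 * Revizor's code or interface). The architectural "contract" says the
 * two inputs below are indistinguishable: victim() returns 0 for both.
 * The test asks whether a cache-timing observer can still tell them
 * apart. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>

#define NLINES    16
#define STRIDE    4096
#define THRESHOLD 80              /* cycles; assumed, calibrate per machine */

static uint8_t table[NLINES * STRIDE] __attribute__((aligned(4096)));

static int victim(uint8_t in)     /* contract: result is independent of in */
{
    (void)*(volatile uint8_t *)&table[(size_t)(in % NLINES) * STRIDE];
    return 0;
}

/* Observation: which table lines are hot after the victim ran.
 * (A real harness would randomize probe order to dodge the prefetcher.) */
static void observe(uint8_t in, int hot[NLINES])
{
    for (int i = 0; i < NLINES; i++)
        _mm_clflush(&table[(size_t)i * STRIDE]);
    _mm_mfence();

    victim(in);

    for (int i = 0; i < NLINES; i++) {
        unsigned int aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*(volatile uint8_t *)&table[(size_t)i * STRIDE];
        uint64_t t1 = __rdtscp(&aux);
        hot[i] = (t1 - t0) < THRESHOLD;
    }
}

int main(void)
{
    int obs_a[NLINES], obs_b[NLINES];

    observe(3, obs_a);            /* two contract-equivalent inputs */
    observe(9, obs_b);

    if (memcmp(obs_a, obs_b, sizeof obs_a) != 0)
        puts("contract violation: a timing observer can tell the inputs apart");
    else
        puts("no observable difference");
    return 0;
}
```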

This technique proved fruitful. Not only were known flaws rediscovered, confirming Revizor’s accuracy, but TSA’s nuanced leak vectors were surfaced and documented through systematic testing and reproducible demonstration.

This transparency forces a shift in our understanding. These are no longer rare academic curiosities; they are structurally inevitable if performance optimizations aren’t built with rigorous microarchitectural isolation in mind.


Real-World Implications

It’s tempting to dismiss these attacks as theoretical or unexploitable. After all, no malware strain has yet been caught leveraging TSA in the wild. But that’s the wrong benchmark.

Sophisticated adversaries don’t broadcast their methods. And the very nature of side-channel exfiltration is stealthy. When data is leaked via cache timings or subtle microsecond delays, forensic tools often fail to register it. There’s no file dropped, no process launched, no log entry created. It’s espionage in its purest form.

Consider the stakes: multi-tenant cloud environments, hypervisor-protected sandboxes, enclave-like secure processing zones. In all of these, speculative execution is trusted to behave as an ephemeral optimization, not a pathway across privilege lines.

With TSA, that assumption collapses.

A single untrusted VM sharing physical cores with a host system becomes a listening post. Exploitation reportedly requires the ability to run native code on the machine, so a drive-by web page is an unlikely vector; but any untrusted local code, from a co-tenant VM to a compromised container, can probe for transient leakage, especially if the underlying firmware hasn’t applied the appropriate microcode patches.


Patching and the Limits of Mitigation

AMD’s response has been swift, albeit complex. Firmware updates—delivered via OEM BIOS packages—now include microcode revisions that alter speculative behavior. But those changes must be combined with OS-level mitigations to be fully effective.

For instance, Linux kernel versions with TSA-aware mitigations modify syscall handling, disable certain speculation paths, and clear CPU buffers when crossing privilege boundaries, such as on return to user space or on entry into a guest. Hypervisors like KVM and Xen have been updated to avoid VM-level data bleed.
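
For defenders wondering what their own kernel thinks, modern Linux kernels publish per-vulnerability mitigation status under /sys/devices/system/cpu/vulnerabilities. The sketch below simply prints that directory; which entries appear (including any TSA-specific one) depends on the kernel version, so treat it as a quick survey rather than a verdict.

```c
/* Print the kernel's view of CPU vulnerability mitigations (Linux).
 * Each file under this sysfs directory holds a one-line status such as
 * "Mitigation: ..." or "Not affected". Which entries exist depends on
 * the kernel version. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

#define VULN_DIR "/sys/devices/system/cpu/vulnerabilities"

int main(void)
{
    DIR *dir = opendir(VULN_DIR);
    if (!dir) {
        perror(VULN_DIR);
        return 1;
    }

    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (ent->d_name[0] == '.')
            continue;                    /* skip "." and ".." */

        char path[512], line[256] = "";
        snprintf(path, sizeof path, "%s/%s", VULN_DIR, ent->d_name);

        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(line, sizeof line, f))
            line[strcspn(line, "\n")] = '\0';
        fclose(f);

        printf("%-24s %s\n", ent->d_name, line);
    }
    closedir(dir);
    return 0;
}
```

The same status is visible from a shell with grep . /sys/devices/system/cpu/vulnerabilities/*.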

However, these fixes aren’t universally deployed. Embedded systems, legacy servers, and customized virtual infrastructure often delay updates due to compatibility or uptime concerns.

And herein lies the tension: Performance optimizations enabled these leaks in the first place. Fully closing them often requires sacrificing that performance—sometimes significantly. For example, aggressive flushing of the scheduler or disabling SMT (Simultaneous Multithreading) may reduce throughput by 10–20% in some workloads.

So while mitigations exist, they’re not free. Every fix is a tradeoff between safety and speed.


The Broader Message to Hardware Designers

Perhaps the most sobering realization from TSA is that speculative side channels aren’t patchable in the traditional sense. They emerge from design decisions made years prior—decisions about how to prioritize speed, throughput, and efficiency.

That’s why TSA represents more than a security issue. It’s a design inflection point.

Hardware architects must now consider every internal queue, buffer, and timing structure as a potential conduit for data leakage. That means verification models must evolve. Fuzzing tools like Revizor must become standard in silicon validation pipelines. And future CPUs must ship with built-in, enforceable speculation boundaries—not as afterthoughts, but as primary requirements.


Lessons for the Enterprise and Defender

For sysadmins, CISOs, and infrastructure architects, TSA isn’t just a hardware footnote. It’s a call to assess exposure in their actual environments.

If your infrastructure includes AMD CPUs across virtualization boundaries, your systems may be vulnerable. It’s not enough to check firmware revision IDs. You must:

  • Validate microcode patch status using tools like cpuid and rdmsr (a starting-point check is sketched after this list)
  • Apply kernel and hypervisor updates that coordinate with CPU mitigations
  • Isolate untrusted workloads from sensitive ones (or disable SMT)
  • Incorporate side-channel risk into threat modeling exercises
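
As a starting point for the microcode and SMT items above, the following Linux-specific sketch reports the microcode revision the kernel loaded (from /proc/cpuinfo) and the current SMT state (from sysfs). Mapping a revision number to "includes the TSA fixes" still requires AMD's or your OEM's release notes for the specific platform; the program only surfaces the raw values.

```c
/* Rough exposure check on a Linux host: report the microcode revision
 * the kernel loaded and whether SMT is enabled. Whether a given
 * revision includes the TSA fixes must be confirmed against AMD/OEM
 * guidance for the specific platform. */
#include <stdio.h>
#include <string.h>

static void print_first_match(const char *path, const char *key)
{
    char line[256];
    FILE *f = fopen(path, "r");
    if (!f) {
        printf("%s: unavailable\n", path);
        return;
    }
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, key, strlen(key)) == 0) {
            printf("%s", line);          /* e.g. "microcode : 0x..." */
            break;
        }
    }
    fclose(f);
}

int main(void)
{
    /* Microcode revision as reported for CPU 0 (the line repeats per CPU). */
    print_first_match("/proc/cpuinfo", "microcode");

    /* SMT state: "on", "off", "forceoff", or "notsupported". */
    char smt[64] = "unknown";
    FILE *f = fopen("/sys/devices/system/cpu/smt/control", "r");
    if (f) {
        if (fgets(smt, sizeof smt, f))
            smt[strcspn(smt, "\n")] = '\0';
        fclose(f);
    }
    printf("SMT control        : %s\n", smt);
    return 0;
}
```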

Cloud providers and hosting environments must be especially vigilant. Even if you trust your hypervisor, do you trust every tenant sharing your silicon?


The Philosophical Shift

Perhaps the most valuable insight from TSA is that speculative execution can’t be trusted blindly. We must now accept that CPU behavior is not deterministic in the ways we once assumed. There are ghosts in the pipelines—ghosts that leak, listen, and linger.

This doesn’t mean we abandon speculation as a performance tool. But we must reshape the compact between speed and safety. Where speculation is permitted, leakage must be impossible. Where leakage is possible, speculation must be tightly contained.

That will require rethinking compiler assumptions, OS scheduling routines, and even language runtime behavior. Security must return to first principles: verify, isolate, constrain.


Closing Thoughts

TSA marks a shift in our understanding of processor risk. Not a sensational threat, but a quiet, persistent flaw. The kind that hides in the scheduler, whispers through the cache, and slips past firewalls without leaving a trace.

It reminds us that performance is never free. Every clock cycle we borrow from the future carries a cost. And sometimes, that cost is our privacy.

There are no silver bullets. No single patch will make speculative execution safe forever. But by studying attacks like TSA—not just as technical bugs, but as lessons in humility—we move closer to an architecture worthy of the trust we place in it.

Until then, we listen. We measure. And we prepare. Because somewhere, deep in silicon, the scheduler is still whispering.

