The 7 worst CPUs of all time

We see disappointing CPUs in every generation, but few are truly bad enough to earn a spot among the worst processors of all time. Here's a look at them.

Someone holding the Core i9-12900KS processor. Jacob Roach / Digital Trends

The list of the best processors is constantly shifting, with AMD and Intel duking it out for the top slot. Although both companies have released some fantastic CPUs, they’ve put out plenty of duds as well. And some of those have gained infamy among the worst CPUs of all time.

We dug through the archives of the past several decades of CPU releases to identify the worst processors AMD and Intel have ever put out. Some, such as the Core i9-11900K and FX-9590, are relatively recent, while others date back to the earlier days of custom PCs. Regardless, each of these seven processors has earned a dishonorable status, be it for pricing, power, heat, or just plain old poor performance.

Intel Core i9-11900K (2021)

Intel Core i7-11700K processor. Intel

This one is still fresh. Even with plenty of negative attention, the Core i9-11900K doesn’t get the ill will it truly deserves. At the time of its release, Intel was very publicly in a transition period. Intel’s current CEO, Pat Gelsinger, had taken the reins just a month before the processor launched, and Intel was heavily focused on its new processor road map and 12th-gen Alder Lake architecture. The release of 11th-gen chips felt like a mandatory move — something Intel had to do just to say it had new processors out. They were swept under the rug quickly, but the Core i9-11900K remains a standout stain on the entire generation.


Gamers Nexus called it “pathetic” on release, and even TechRadar said it “feels like a desperate attempt to hold on to relevance while [Intel] works on its true next step.” Hardware Unboxed went on to say it was the “worst flagship Intel CPU maybe ever.” The reasons why are clear in hindsight. Intel was struggling to gain a foothold against AMD’s Ryzen 5000 processors, and the Core i9-11900K addressed that issue in the worst way possible.

The Core i9-11900K was the last stand for Intel’s 14nm process node, a process that Intel managed to squeeze marginal improvements out of for nearly seven years. The Core i9-11900K was a breaking point. Not only was it built on the dated process, but it actually featured fewer cores than the previous-generation Core i9-10900K. Intel shrank its flagship from 10 cores to only eight, all while increasing power demands. That didn’t bode well against AMD’s rather efficient 12-core Ryzen 9 5900X and 16-core Ryzen 9 5950X, both of which offered better performance at lower prices.

It was a dire situation. In reviews, the Core i9-10900K often beat the Core i9-11900K, and AMD’s Ryzen parts ran away with performance. Intel could occasionally post a few extra frames in games, but not enough to justify lower CPU performance overall and much higher power consumption. Although Intel has had plenty of stumbles over the years, there’s no processor that better encompasses the “spend more, get less” sentiment than the Core i9-11900K.

Although Intel quickly moved on to the much improved Core i9-12900K, you can still feel the impact of 11th-gen CPUs today. Up to that point, Intel was the undisputed market leader, while today, it plays an underdog role. Intel often has to undercut AMD to stay competitive. That shift happened right when the Core i9-11900K was released.

AMD FX-9590 (2013)

The AMD FX-9590 sitting on a plate. Butko / Wikimedia Commons

You could pick any of AMD’s Bulldozer CPUs for this list, but no processor encompasses just how disastrous the architecture was quite like the FX-9590. Similar to Intel’s 11th-gen chips, this was AMD’s last hurrah for Bulldozer before it would introduce the Zen architecture. The FX-9590, when it released, was the first processor to achieve clock speeds of 5GHz out of the box — no overclocking required. It was a huge step, but it came at a massive cost in power consumption.

Let’s back up for a moment. The FX-9590 actually uses the Piledriver architecture, which is a revision of the original Bulldozer design. Piledriver wasn’t a huge improvement, but it fixed some of the dire issues with Bulldozer, particularly scheduling tasks across its massive array of threads. The FX-9590 took the more efficient architecture and dialed the clock speed up as high as it could go, resulting in a chip that consumed 220 watts.

That’s insane even by today’s standards with CPUs like the Core i9-14900K, and it was even worse in 2013 when the CPU released. Intel’s 6th- and 7th-gen parts hung around 85W, while its 8th-gen chips barely cracked 100W. The FX-9590 used the same AM3+ socket as much lower-end AMD CPUs at the time, too, despite requiring a flagship motherboard and a dense liquid cooling setup to even function properly.

Reports of freezing, insanely high temperatures, and even motherboard failures followed. Reviews at the time proved that the FX-9590 was powerful, which was all the more impressive considering how inexpensive it was compared to Intel’s competing options. But once you factored in a massive cooling array and a high-end motherboard just to keep the processor from throttling itself, you would end up spending a lot more with AMD than you would with Intel. It didn’t help that AMD’s lead over Intel was marginal in many cases, with Team Blue accomplishing similar performance with half the power draw and core count.

The FX-9590 stands as a symbol of Bulldozer’s failure as a whole, and it hit AMD where it hurt most. In 2012, the year before the processor was released, AMD reported a loss of $1.18 billion.

Intel Core i7-7740X (2017)

An Intel X299 motherboard. Rainer Knäpper / Wikimedia Commons

Intel’s Core i7-7740X isn’t necessarily a bad processor, but it sure is a confusing one. Years ago, Intel maintained a line of X-series CPUs for its high-end desktop (HEDT) platform. The company has abandoned HEDT for the last several generations — though AMD is keeping it alive with Threadripper 7000 — but it used to be a cornerstone of Intel’s lineup. Intel offered two separate platforms: HEDT allowed for beefy motherboards with huge PCIe arrays and cutting-edge memory support, as well as high-core-count CPUs, while the mainstream lineup offered more affordable prices for everyone who didn’t need what X-series CPUs had to offer.

But then Intel tried to mix oil and water. In an attempt to make its HEDT platform more accessible — you could easily drop $1,000 on a processor in this range normally — Intel introduced lower-end options called Kaby Lake-X. That lineup included the Core i5-7640X and Core i7-7740X, which were repackaged versions of Intel’s mainstream Core i5-7600K and Core i7-7700K, respectively. The main difference is that they drew more power and were more expensive, with a minuscule bump to clock speed.

There was frankly no reason to buy these parts. In order to get a Kaby Lake-X chip, you would need to invest in Intel’s far more expensive HEDT platform. Intel also decided to cut the integrated graphics for these chips, all while charging a premium for them compared to their mainstream counterparts. The results spoke for themselves. KitGuru found that the Core i7-7740X offered almost identical performance to the Core i7-7700K, just with higher power draw and a higher price. Even worse, the cheaper Core i7-7700K offered higher performance with a moderate overclock.

Despite the extra cost of Intel’s X299 platform, the Core i7-7740X didn’t even have access to the full number of PCIe lanes the platform was capable of. It was instead limited to the same 16 available on the Core i7-7700K. It’s clear what Intel was trying to accomplish with the Core i7-7740X: offer a more affordable way into the X299 platform for buyers who planned to upgrade to a more expensive X-series CPU down the line. That plan fell flat on its face, however.

Adding to Intel’s troubles were AMD’s new Ryzen chips. Just a few months before the Core i7-7740X released, AMD introduced its fiercely competitive Ryzen 1000 CPUs, which offered higher core counts and lower prices compared to the Intel competition. Rebranding the Core i7-7700K on a more expensive platform just came off as tone-deaf at the time. Intel was doing nothing to address its slipping position against AMD, and in fact, it was asking enthusiasts to spend even more.

AMD Phenom (2007)

The AMD Phenom X3 socketed in a motherboard. JulianVilla26 / Wikimedia Commons

Let me set the stage here. After gaining a competitive foothold against Intel with its K6 microarchitecture, AMD went on to release its Athlon CPUs. The initial products under the Athlon brand were so impressive that AMD continues to use Athlon branding to this day, albeit for older Zen architectures. Athlon transformed AMD from a second-rate CPU maker to the competitive powerhouse it is today. By 2007, after Intel released the first quad-core desktop processor, all eyes were on Team Red for a response.

That response was Phenom, a range of quad-core CPUs designed to bite back against Intel’s wildly popular Core 2 Quad range. It just turned out to be more of a nibble. Even before release, Phenom was plagued with issues. Exact specifications and prices weren’t concrete until the final hour, and AMD didn’t share any performance data ahead of release. The biggest issue was a bug discovered just before Phenom’s launch (the infamous TLB erratum) that could cause a system to lock up completely. AMD developed a workaround through the BIOS, which was found to reduce performance by close to 20% on average.

And sure enough, things were bad when the first Phenom processors eventually arrived. Even a midrange Intel Core 2 Quad could outpace the flagship Phenom 9900 in just about every application, from general desktop use to productivity to gaming. And to make matters worse, AMD was asking for more money compared to the Intel competition. AMD finally released quad-core CPUs with Phenom, but they just weren’t worth recommending.

AnandTech summed up the story nicely, writing: “If you were looking for a changing of the guard today, it’s just not going to happen.” In a later review, Anand Lal Shimpi called Phenom “the biggest disappointment AMD had ever left us with.”

AMD eventually regained ground with Phenom II, offering more affordable quad-core options to the midrange market, while Intel marched ahead with its newly minted Core i7. For the window between Athlon and Phenom II, however, AMD was all but dead in the water due to the disappointing performance and high price of the original Phenom chips.

Intel Pentium 4 Willamette (2001)

Someone standing in front of an Intel Pentium 4 sign. Evangel76 / Wikimedia Commons

Intel’s Pentium 4 range eventually turned into a success, becoming the first Intel chips to use Hyper-Threading and introducing the Extreme Edition branding to Intel’s lineup, which it carried forward with several generations of X-series processors. That was not the case when Pentium 4 was first introduced, however. The first generation of chips, code-named Willamette, arrived half-baked and expensive. Not only were they beaten by AMD’s cheaper Athlon chips, but also by Intel’s own Pentium III options.

As we get into performance, keep in mind that Pentium 4 was released in the era of single-core CPUs. Models were separated by clock speed, not model number. The first two Pentium 4 CPUs were clocked at 1.4GHz and 1.5GHz. Even a Pentium III clocked at 1GHz managed to outclass the 1.4GHz Pentium 4, while AMD’s Thunderbird-based 1.2GHz Athlon, released several months earlier, managed to stomp both chips in productivity benchmarks. Gaming results were even worse.

At the time these CPUs were released, they were seen as a stopgap. Intel released them promising higher clock speeds down the line, which Pentium 4 eventually delivered. The range still saw its fair share of issues, however. The most pressing was Intel’s decision to use RDRAM instead of DDR SDRAM. Due to manufacturing complexity, RDRAM was more expensive than DDR SDRAM. It was such a big concern that Intel actually bundled two sticks of RDRAM with each boxed Pentium 4 CPU.

The problem wasn’t enthusiasts building their own PCs, but rather Intel’s partners, who didn’t want to foot the bill for RDRAM when DDR SDRAM was cheaper and offered better performance (for context, DDR SDRAM is still what’s used today with standards like DDR4 and DDR5). In the end, Intel was offering CPUs that were beaten not only by the competition, but also by Intel’s own previous generation. And, to make matters worse, they were more expensive due to the exotic memory interface.

In the months that followed, Intel eventually released Pentium 4 chips with higher clock speeds, as well as a proper DDR interface. The branding would eventually turn into a success story with Hyper-Threading and Extreme Edition, but for the first Pentium 4 CPUs, the disappointment was real.

AMD E-240 (2011)

The AMD E-350 socketed in a motherboard. Nemo bis / Wikimedia Commons

We’re focusing on desktop processors for this list, but AMD’s E-240 deserves a callout for how truly terrible it is. This is a single-core mobile CPU that can reach 1.5GHz. Based on that, you might think it came out in the 2000s — Intel released the first Core Duo in laptops in 2006 — but you’d be wrong. It came out in 2011. Around this time, even the weakest 2nd-gen Intel Core i3 processor came with two cores and four threads.

There were never high expectations for the E-240. It launched on AMD’s Brazos platform, a family of low-power chips looking to compete with Intel’s Atom options. Even then, the E-240 was behind from day one. The standard for Atom at the time was two cores, and even AMD’s more powerful options in this range, such as the E-300 and E-450, featured two cores. The E-240 was focused on budget notebooks, but even by that standard, it was years out of date when it released in 2011.

To make matters worse, the chip was designed with a single-channel memory controller, making an already arduously slow chip even slower. With only a single core and no support for multiple threads, the E-240 was forced to handle tasks one at a time. That had massive performance implications, with the E-240 falling upwards of 36% behind the dual-core E-350. In NotebookCheck’s review of the HP 635, which used the E-240, the outlet wrote, “we could only shake our heads in regards to the choice of CPU.”

Just because a processor is weak doesn’t make it one of the worst of all time. There are plenty of AMD and Intel options built for bargain-bin notebooks that aren’t powerful, but the E-240 is particularly painful for the time it was released. It seems the main purpose of this chip was to swindle unsuspecting buyers into picking up silicon that was three or four years out of date.

Intel Itanium (2001)

An Intel Itanium processor with the cap removed. Sonic84alpha / Wikimedia Commons

In today’s climate, we all think of Intel as the champion of the x86 instruction set architecture (ISA). Intel developed x86, and with a wave of machines now using the Arm instruction set, Intel is flying its flag to prove x86 isn’t dead. Things weren’t always this way, though. Intel, at one point, wanted to kill its own child by developing a new ISA. The Intel Itanium architecture, and the series of processors that came along with it, was a joint venture between Intel and HP to develop an ISA with a 64-bit address width, and it was a colossal failure.

You’ve probably never heard of Itanium, and that’s because it never really cracked into the mainstream market. When Itanium first showed up on the scene, it was talked about as a monumental shift in computing. It was Intel’s answer to PowerPC and its RISC instruction set, offering a 64-bit ISA with no hit to performance for 32-bit applications. Or so Intel and HP said. In reality, the RISC competition was much faster, and a year before Itanium was even released to the data center, AMD released its x86-64 ISA — an extension of x86 that could run 64-bit applications and is still being used today.

Still, Itanium had a lot of momentum behind it, so much so that Intel kept it alive for decades after 2001, when the processors were first introduced. HP and Intel announced their partnership in 1994, and by July 2001, when Itanium was released, massive brands like Compaq, IBM, Dell, and Hitachi had signed on to the future Intel and HP had envisioned. Just a few years later, nearly all support for Itanium had vanished, with Intel copying AMD’s move by developing its own x86-64 extension.

Despite never breaking into the mainstream market, Intel technically shipped Itanium chips until 2021. And in 2011, even after x86-64 had proven itself as the dominant ISA across the mainstream PC market, Intel reaffirmed support for Itanium. The long life cycle of Itanium wasn’t a mistake, though. In 2012, court documents unsealed as part of a trial between Oracle and HP concerning Itanium processors revealed that HP had paid Intel $690 million to continue manufacturing the chips from 2009 to 2017. HP paid Intel to keep Itanium on life support.

We never saw mainstream PCs with Itanium processors, but the range still stands as one of the biggest failures in all of computing. It was originally billed as a revolution, but in the decades that followed Itanium’s release, Intel continued to scale back its ambitions for the ISA until it eventually fizzled out.
