No Nvidia Killer? Big Navi's best could match the RTX 2080 Ti, fall short of the RTX 3080...

midian182

Rumor mill: It’s an exciting time for PC fans. Not only is Nvidia set to release its consumer Ampere cards within the next few months, but AMD is also launching the RDNA 2 ‘Big Navi’ products. There had been talk of Team Red’s flagship being an “Nvidia Killer” that offered 50 percent more performance than the RTX 2080 Ti, but it seems the card won’t be as monstrous as that claim suggests.

Coreteks, citing sources in Asia, reports that AMD has started sharing details of the RDNA 2-based GPUs with its Add-In Board (AIB) partners. They say the performance level of the high-end card—codenamed Sienna Cichlid—is just 15 percent better than Nvidia's RTX 2080 Ti, and that’s only in select AMD “optimized” games.

It was expected that Big Navi would be a direct competitor to Nvidia’s top-end RTX 3080 Ti, but it seems that the RTX 3080 is a more realistic target. While that might disappoint AMD fans, there is some good news: the company is reportedly going to undercut the RTX 3080, offering gamers what will still be excellent performance for a lower price.

The publication writes that there will be two Big Navi-based cards at launch: the high-end product and a lower-end alternative—much like the 5700XT and 5700—both of which will use GDDR6 memory. There’s also a mid-range product, codenamed Navy Flounder, that looks set to launch early in 2021.

We don't have long to wait before finding out the report's accuracy. Big Navi will reportedly be unveiled in early September, with the cards launching on October 7. Nvidia, meanwhile, is rumored to release the Ampere GPUs on September 17.


 
The top-end cards are only a fraction of the market anyway. As long as performance is on par and the price is right, the fact that it isn't faster than a 3080 Ti doesn't really mean much.

Now, competing on features like ray tracing may be a different story. But, on the other side of things, AMD making much of its tech free, like FreeSync, does tend to win out over time. Slow and steady wins the race. Nvidia tends to be first to market, but AMD has the better long game.

Either way, I'm excited for both series of cards and we, as consumers, should win out in the end when it comes to price.
 
That's been AMD's problem for the last 5-7 years. NVIDIA is always a whole generation ahead, so AMD has its Radeons set to challenge counterparts in the entry level to upper mainstream price brackets.

Like with the 5700 XT, AMD finally had a card near GTX 1080 Ti performance, which was great, except that it had to compete against the RTX 2000 series and only evenly matched the RTX 2070.
 
Honestly, the need for graphics horsepower has declined significantly, and graphics in games aren't making the jumps they used to; we're no longer in the era where Crysis 1 or Far Cry 2 versus Deus Ex or Counter-Strike 1 represented the graphical leap of a six-year span. There are 7-year-old cards like the 780/290 that can play the latest titles just fine at 1080p medium settings, and 5-year-old cards that can do the same at high/very high settings at 1080p/1440p. Hell, if the GTX 580 3GB had driver updates/Vulkan support, it could still hang at 1080p low/medium.
 
That's been AMD's problem for the last 5-7 years. NVIDIA is always a whole generation ahead, so AMD has its Radeons set to challenge counterparts in the entry level to upper mainstream price brackets.

Like with the 5700 XT, AMD finally had a card near GTX 1080 Ti performance, which was great, except that it had to compete against the RTX 2000 series and only evenly matched the RTX 2070.

AMD just didn't bother to make big enough chips because it's quite expensive.

The 5700 XT is only 251 mm², while the GTX 1080 Ti is almost double that at 471 mm². Not to mention the 2080 Ti at 754 mm².

If AMD had just bothered to make a 400+ mm² RDNA chip, a "bigger 5700 XT", it would have been miles faster than the 2080 Ti. AMD decided it wasn't worth it. So much for "NVIDIA is always a whole generation ahead" *nerd*
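To put rough numbers on that "bigger 5700 XT" idea, here's a deliberately naive Python sketch that just scales Navi 10's 40 CUs linearly with die area. Real GPUs don't scale like this (I/O, memory controllers and clocks don't grow with area, and performance doesn't scale linearly with CU count either), so treat it as an illustration of the argument, not a prediction.

```python
# Naive area scaling of Navi 10 (the 5700 XT): 40 CUs in roughly 251 mm^2.
# Assumes every extra mm^2 turns into more CUs, which real designs don't do
# (I/O, memory controllers and clocks don't grow with area).
NAVI10_AREA_MM2 = 251
NAVI10_CUS = 40

for target_area in (400, 500):
    cus = NAVI10_CUS * target_area / NAVI10_AREA_MM2
    print(f"~{target_area} mm^2 RDNA die -> roughly {cus:.0f} CUs under linear scaling")
```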
 
Well, it looks like I will be going team green or not upgrading at all. I'm not going to pay £800 for an 8GB GPU, and I will not be paying £800 for a card that isn't a lot faster than my Radeon VII; I mean 50% faster in every game at 4K is the minimum. If the next generation of GPUs is underwhelming, I might get a good 1440p high-refresh-rate monitor and be done with it.
 
If AMD had just bothered to make a 400+ mm² RDNA chip, a "bigger 5700 XT", it would have been miles faster than the 2080 Ti.
This would require TSMC's N7 process to have good yields with 400 square millimetre and larger dies. The biggest chip made on this node so far seems to be AMD's Vega 20 at 331 square mm (Ampere's Altra chip may possibly be bigger, but I've not seen any size figures for it yet), and there's a distinct lack of Radeon Instinct models on the market - of course, this could well have nothing to do with N7 large-die yields.

AMD has been rather quiet of late in the GPU sector. In previous years, new chips appeared quite frequently - for example:

Navi 14 - Oct 2019
Navi 10 - Jul 2019
Vega 20 - Feb 2019
Polaris 30 - Nov 2018
Vega 10 - Jun 2017
Polaris 20 - Apr 2017
Baffin - Nov 2016

This Fall should be very interesting :)
 
AMD just didn't bother to make big enough chips because it's quite expensive.

The 5700 XT is only 251 mm², while the GTX 1080 Ti is almost double that at 471 mm². Not to mention the 2080 Ti at 754 mm².

If AMD had just bothered to make a 400+ mm² RDNA chip, a "bigger 5700 XT", it would have been miles faster than the 2080 Ti. AMD decided it wasn't worth it. So much for "NVIDIA is always a whole generation ahead" *nerd*

Listen, that's a great technical overview of the different die sizes, and I'm sure there's some validity to that. However, considering that the 5700 XT and 2080 Ti have near equal power draw under gaming load, I think there's a pretty good argument for AMD not releasing a "bigger" 5700 XT. That card would be so power hungry and hot that you'd have an unpolished turd melting away inside your system.

The lack of ray tracing, regardless of the gimmick debate, amongst other features, yep, I'd say it feels as though AMD has been playing catch up.
 
Listen, that's a great technical overview of the different die sizes, and I'm sure there's some validity to that. However, considering that the 5700 XT and 2080 Ti have near equal power draw under gaming load, I think there's a pretty good argument for AMD not releasing a "bigger" 5700 XT. That card would be so power hungry and hot that you'd have an unpolished turd melting away inside your system.

The lack of ray tracing, regardless of the gimmick debate, amongst other features, yep, I'd say it feels as though AMD has been playing catch up.

Yeah, some people don't take the power draw into consideration. The same thing happened with Polaris: if AMD had made a bigger die, the TDP would have been over 300 watts, which wouldn't look good at all.
 
Listen, that's a great technical overview of the different die sizes, and I'm sure there's some validity to that. However, considering that the 5700 XT and 2080 Ti have near equal power draw under gaming load, I think there's a pretty good argument for AMD not releasing a "bigger" 5700 XT. That card would be so power hungry and hot that you'd have an unpolished turd melting away inside your system.

The lack of ray tracing, regardless of the gimmick debate, amongst other features, yep, I'd say it feels as though AMD has been playing catch up.
Please don't exaggerate: the 5700 XT draws less than the 2080, very close to the 2070 Super (its direct competitor). According to TechSpot's review, total system draw for the 5700 XT was 370W and for the 2080 Ti it was 433W (the 2070 Super drew 361W).

Both companies claim similar perf/W gains with the next gen cards so, in theory, AMD should still be on par or close to Nvidia with their competing products just like RDNA is now.

 
This would require TSMC's N7 process to have good yields with 400 square millimetre and larger dies. The biggest chip made on this node so far seems to be AMD's Vega 20 at 331 square mm (Ampere's Altra chip may possibly be bigger, but I've not seen any size figures for it yet), and there's a distinct lack of Radeon Instinct models on the market - of course, this could well have nothing to do with N7 large-die yields.

Probably more to do with 7nm capacity; AMD's Lisa Su stated a long time ago that 7nm capacity is tight. So it really makes no sense to make low-yield products when there is a lack of capacity. But still, low yields don't mean something is unmanageable. Nvidia surely has quite poor yields with the RTX 2080, but as Samsung has enough capacity, that is not really a problem. Poor yields just drive production costs higher. We see that in flagship card pricing.

Listen, that's a great technical overview of the different die sizes, and I'm sure there's some validity to that. However, considering that the 5700 XT and 2080 Ti have near equal power draw under gaming load, I think there's a pretty good argument for AMD not releasing a "bigger" 5700 XT. That card would be so power hungry and hot that you'd have an unpolished turd melting away inside your system.

The lack of ray tracing, regardless of the gimmick debate, amongst other features, yep, I'd say it feels as though AMD has been playing catch up.

"Half bigger" 5700XT would still be not So power hungry. Even if it would be somewhere around 300-330 watts, it's still very cool compared to some cards seen past, like Radeon 295X2 that goes somewhere around 500 watts. Or SLI/Crossfire setups.

Better argument is 7nm capacity. Because GlobalFoundries took capacity out from AMD, there was no realistic way for AMD to release much bigger GPU. It's even possible AMD planned to release "big RDNA" with GlobalFoundries tech but it was cancelled due supply issues. There was huge difference between TSMC and GlobalFoundries deals.

- With TSMC AMD no obligation to buy anything.

- With GlobalFoundries AMD had to buy certain amount of wafers or pay fine.

So AMD probably had backup plan for Very Big Navi. So that AMD has something to do if they cannot meet wafer targets. No matter how poor yields, making something is better than paying for not making anything.

Ray tracing is decades old technology. Nvidia's current hardware is just too slow for it. So basically Nvidia's ray tracing support right now is nothing more than technology demo. When ray tracing really is useful, current cards are obsolete. It always happens like this. So saying Nvidia is ahead because they have some ray tracing "support" is just laughable. Marketing crap, nothing else.
 
Please don't exaggerate: the 5700 XT draws less than the 2080, very close to the 2070 Super (its direct competitor). According to TechSpot's review, total system draw for the 5700 XT was 370W and for the 2080 Ti it was 433W (the 2070 Super drew 361W).

Both companies claim similar perf/W gains with the next gen cards so, in theory, AMD should still be on par or close to Nvidia with their competing products just like RDNA is now.

I used a different source. See the attached image and you'll see I'm not exaggerating.
 

DLSS 2.0 right now is looking like a killer app for me. It truly is. I know only a select few titles may use it, but the list will grow. I tend to play plenty of these cutting-edge games that could do with a little extra boost, and DLSS is exactly that boost.

It could make a vast difference when you want high settings and high frame rates.
 
Ray tracing is decades-old technology. Nvidia's current hardware is just too slow for it. So basically Nvidia's ray tracing support right now is nothing more than a technology demo. By the time ray tracing really is useful, current cards will be obsolete. It always happens like this. So saying Nvidia is ahead because it has some ray tracing "support" is just laughable. Marketing crap, nothing else.

I specifically said *regardless of the gimmick debate* so that we didn't have to go down this path, but I guess you didn't read into that.

I actually share that view, hence why I didn't go NVIDIA this time around, but I really don't want to have to fight about ray tracing.
 
So no "nVidia killer" again? Wow, a lot of people got their pants wet waiting since the last "nVidia killer" aka the Radeon 5700 series. You know, the irony will be if Intel`s offer becomes a legit Radeon killer...
 
Probably more to do with 7nm capacity; AMD's Lisa Su stated a long time ago that 7nm capacity is tight. So it really makes no sense to make low-yield products when there is a lack of capacity.
Indeed, and even if large-die yields are good, if TSMC is already stretched with their N7 fabs, then it would make sense for AMD to hold off until there is capacity.

But still, low yields don't mean something is unmanageable. Nvidia surely has quite poor yields with the RTX 2080, but as Samsung has enough capacity, that is not really a problem. Poor yields just drive production costs higher. We see that in flagship card pricing.
TSMC's 12FFN yields were apparently rather good, or certainly no worse than anything else out there; it's just that 300 mm wafers can only produce so many 545 square mm chips - with zero defects, it's fewer than 100 chips. So even if the binning process results in 50 or 60 fully usable dies (and the TU104 is used in 17 different products), thousands of wafers need to be churned out to meet global demand. TSMC is doing one hell of a good job keeping up.
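
For anyone wanting to sanity-check those per-wafer figures, here's a minimal Python sketch using the common gross-dies-per-wafer approximation. It assumes square dies and ignores scribe lanes and edge exclusion, so it's an upper bound rather than an official foundry figure; real counts come in a little lower.

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    """Common approximation for whole dies per circular wafer, assuming square dies."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    # The second term accounts for partial dies lost around the curved wafer edge.
    return wafer_area / die_area_mm2 - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)

for name, area in [("TU104 (~545 mm^2)", 545.0), ("TU102 (~754 mm^2)", 754.0), ("Navi 10 (~251 mm^2)", 251.0)]:
    print(f"{name}: ~{gross_dies_per_wafer(area):.0f} gross dies per 300 mm wafer, before defect losses")
```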
 
I used a different source. See the attached image and you'll see I'm not exaggerating.

RTX 2080 Ti 266 watts
5700XT 204 watts

You picked a custom model for the 5700 XT there.
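
For what it's worth, the two sets of figures in this thread are less contradictory than they look once you separate card-only draw from total system draw. Assuming both sources measured a roughly comparable gaming load (my assumption, not something either source states), subtracting the card-only numbers above from TechSpot's system numbers leaves a similar "rest of system" remainder for both cards:

```python
# TechSpot's earlier figures are whole-system draw; the figures above are card-only.
# Subtracting one from the other - assuming both sources measured a roughly
# comparable gaming load, which is an assumption on my part - leaves a similar
# "rest of system" remainder for both cards, i.e. both sources see a ~60 W gap.
system_draw_w = {"5700 XT": 370, "RTX 2080 Ti": 433}  # total system draw (TechSpot review)
card_draw_w = {"5700 XT": 204, "RTX 2080 Ti": 266}    # card-only draw (figures quoted above)

for card in system_draw_w:
    rest_of_system = system_draw_w[card] - card_draw_w[card]
    print(f"{card}: {system_draw_w[card]} W system - {card_draw_w[card]} W card = ~{rest_of_system} W remainder")
```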

Indeed, and even if large-die yields are good, if TSMC is already stretched with their N7 fabs, then it would make sense for AMD to hold off until there is capacity.

AMD had capacity problems because it was supposed to use mainly GlobalFoundries, and TSMC was just the backup plan.

What the situation is now (more overall capacity, Apple on other nodes, Huawei out) is somewhat unknown.

TSMC's 12FFN yields were apparently rather good, or certainly no worse than anything else out there; it's just that 300 mm wafers can only produce so many 545 square mm chips - with zero defects, it's fewer than 100 chips. So even if the binning process results in 50 or 60 fully usable dies (and the TU104 is used in 17 different products), thousands of wafers need to be churned out to meet global demand. TSMC is doing one hell of a good job keeping up.

12FFN is basically a customized 16FF that is for Nvidia only. So Nvidia bought a certain amount of capacity just for itself. I meant more the 2080 Ti, which is even bigger at 754 mm². Yields are surely quite poor, and the high pricing is pretty much explained by constrained capacity. Nvidia surely didn't buy way too much capacity for a process only it can use, but it still had to buy enough to meet demand.

What happens with the 3000 series remains to be seen. Probably no custom process this time.
 
While not being able to compete at the highest end would be disappointing, if they can compete against the 3080 at launch, that is already a step up from the current situation.

And if they are competitive with regard to feature set and power consumption, and undercut Nvidia on price, all the better for the largest part of the market.

This would require TSMC's N7 process to have good yields with 400 square millimetre and larger dies. The biggest chip made on this node so far seems to be AMD's Vega 20 at 331 square mm (Ampere's Altra chip may possibly be bigger, but I've not seen any size figures for it yet), and there's a distinct lack of Radeon Instinct models on the market - of course, this could well have nothing to do with N7 large-die yields.

Isn't MI 100 going to have a very large die? Then again, in that price range low yields matter much less.

Now, limited capacity on 7nm, as has been mentioned in this thread, may be a potential reason for skipping a large-die consumer RDNA 2 GPU. The opportunity cost versus making more Ryzen 3000 CPUs or Ryzen 4000 APUs could be too high in a capacity-constrained situation.

It's often forgotten that AMD needs to carefully balance resources.
 
I only buy AMD, so whatever it is, I will be happy with it.
I don't buy that green garbage with its buggy *** drivers.
That's funny, because I could say the same and I've got a 5700 XT.
I swapped my GTX 1060, which had absolutely zero driver problems the whole time I used it (3 years), for a 5700 XT and got many driver problems at the beginning. Everything is fine now, but I nearly returned my 5700 XT.

So I'm not sure driver problems are really different between the two companies. They both have them.
 
DLSS 2.0 right now is looking like a killer app for me. It truly is. I know only a select few titles may use it, but the list will grow. I tend to play plenty of these cutting-edge games that could do with a little extra boost, and DLSS is exactly that boost.

It could make a vast difference when you want high settings and high frame rates.
DLSS is huge; it will allow budget-grade RTX cards to compete with top-end AMD cards and potentially even look better than them as well. There's a huge amount of support for it now, too, with more to come.

I'm currently playing through Death Stranding, and DLSS quality mode looks sharper and overall better than DLSS off with anti-aliasing. It's very impressive and has come along so much since the scruffy implementation in the first set of games to use it.

It's funny: Nvidia made a big song and dance out of ray tracing, but ray tracing at this point is still a bit of a gimmick; it's only really any good in Metro Exodus, in my opinion. DLSS, on the other hand, is turning out to be the real game changer.
 
12FFN is basically a customized 16FF that is for Nvidia only. So Nvidia bought a certain amount of capacity just for itself. I meant more the 2080 Ti, which is even bigger at 754 mm². Yields are surely quite poor, and the high pricing is pretty much explained by constrained capacity. Nvidia surely didn't buy way too much capacity for a process only it can use, but it still had to buy enough to meet demand.
You've just explained it yourself as to why the yields weren't bad - 12FFN is a tighter metal pitch revision of 16FF. In other words, it's a well-matured process node. Mind you, we should really be more specific as to exactly what we mean by yield: wafer fabrication, wafer sorting, die binning, packaging. When I say the likes of the TU102 and TU104 have been fielding decent yields, I'm specifically referring to a combination of wafer sorting and die binning - i.e. the ratio of dies that can be used in any end product to the total number of dies fabricated.

However, if we refer to yield as being simply the total number of working dies from a single wafer, then one can argue that it is 'poor' - but this would be unfair to claim so, as the likes of the TU102 is huge. Even at 100% yield, a 300 mm wafer will only turn out 60 TU102 chips, needed for 11 different end products. This is partly why the prices are so high: just the sheer number of wafers that have to be manufactured to create the volume required.

Chips using 12FFN will still be manufactured on the 300 mm production lines, using pretty much the same design rules and libraries as 16FF/12FFC, so it's not the case that TSMC has a separate production area just for Nvidia - that production just has to slot in with everything else. There are only something like five plants that handle those wafers, too.
 