TechPowerUp directly measures the actual power consumption of the graphics cards they test (not whole-system draw or sensor-based figures) and reported the following for the RX 5700 XT:
[Attached chart: measured power consumption of the RX 5700 XT and RTX 2080 Ti]
(Source)
That’s a 62 W difference between the two, and even if all of that electrical power ends up as heat, the 5700 XT is still producing 21.8% less heat than the 2080 Ti. What one can claim, however, is that the AMD GPU has a higher power-per-transistor value than the Nvidia one - 21.65 watts per billion transistors compared to 15.3 W/bTrans - although such power figures include DRAM consumption and board losses.
For reference purposes, the 1080 Ti is roughly the same as the RX 5700 XT at 22.6 W/bTrans, the Radeon VII is 26.5, and the Vega 64 they tested comes to 24.7. This suggests that AMD's goal was to get the die as small as possible, to improve wafer output, as well as to improve overall performance (it performs notably better in games than the Vega 64), rather than to chase outright power efficiency.
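As a rough illustration, here's a minimal Python sketch of how those per-transistor figures fall out. The board-power values are back-derived from the W/bTrans numbers quoted above (the attached chart isn't reproduced here), so treat them as assumptions; the transistor counts are the published die specs.

```python
# Back-of-envelope check of the W-per-billion-transistor figures above.
# Board power values are back-derived from the post's own W/bTrans numbers,
# so treat them as assumptions; transistor counts are the published die specs.
cards = {
    # name: (board power in W, transistors in billions)
    "RX 5700 XT": (223, 10.3),   # Navi 10
    "RTX 2080 Ti": (285, 18.6),  # TU102
    "GTX 1080 Ti": (267, 11.8),  # GP102
    "Radeon VII": (350, 13.2),   # Vega 20
    "RX Vega 64": (309, 12.5),   # Vega 10
}

for name, (watts, btrans) in cards.items():
    print(f"{name:>12}: {watts / btrans:.2f} W/bTrans")

# Heat-output comparison from the post: the 62 W gap works out to
# (285 - 223) / 285 = 21.8% less power (and hence heat) for the 5700 XT.
```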
With no architectural changes whatsoever, an 80 CU 'Navi 10' could possibly be hitting 450 to 500 W, and while it wouldn't be the first graphics card AMD has released with that kind of power consumption, it would certainly be their first single-GPU card at that level. That said, larger GPUs can't run at high clock speeds, and don't need to, so some of that excessive power requirement would be clawed back that way. Also, an 80 CU next-Navi chip wouldn't simply be two Navi 10s - it's not going to have double the number of ROPs and memory controllers, for example - so that will bring the consumption down again. We may still be looking at a 300-350 W 'big Navi' card, though this also assumes that (a) AMD haven't streamlined the current requirements in their design and (b) TSMC haven't refined the N7 process by then.
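To make that scaling argument concrete, here's a toy estimate. Every number in it is an illustrative guess, not anything AMD has disclosed: the ~40 W non-CU overhead, the 10% clock reduction, and the f·V² power model (roughly cubic in frequency, assuming voltage tracks frequency) are all assumptions.

```python
# Toy model of the 80 CU scaling argument -- illustrative only, not a claim
# about AMD's actual design. Assumptions: a naive doubling just doubles board
# power; ~40 W of the board (DRAM, VRM losses, fan) does not double; dynamic
# power scales roughly with f * V^2, and voltage tracks frequency, so
# dropping clocks saves power approximately cubically.
navi10_power = 223      # W, back-derived 5700 XT board power (assumption)
overhead = 40           # W, assumed non-CU share (memory, board losses)

# (1) Crude doubling: matches the 450-500 W ballpark mentioned above.
print(f"naive 2x: {2 * navi10_power:.0f} W")          # ~446 W

# (2) Don't double the overhead (ROPs/memory controllers won't double either),
# and drop clocks ~10% on the bigger die.
cu_power = navi10_power - overhead
clock_scale = 0.90
big_navi = 2 * cu_power * clock_scale**3 + 1.5 * overhead
print(f"refined estimate: {big_navi:.0f} W")          # ~327 W
```

With those (made-up) inputs, the refined figure lands right in the 300-350 W window suggested above, which is really just the scaling argument restated as arithmetic.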