2014-02-20: NVIDIA GeForce 750 Ti: Great graphics, bad graphs?
NVIDIA have just released their new GeForce GTX 750 Ti card, based on the Maxwell architecture. It seems like a great card at a pleasant price point, so I downloaded NVIDIA's whitepaper. One thing that caught my eye is a graph comparing the GTX 750 Ti's performance to that of the GTX 480, NVIDIA's flagship card from four years ago. It is terribly misleading. This is what it looks like:
[Figure: NVIDIA's bar chart of relative performance per title, with a vertical axis running from 0.6 to 1.0]
Looks great, doesn't it? At first glance you might think there is an improvement nearly across the board: GRID 2 and BioShock Infinite appear to do marginally better on the old card, and only 3DMark seems to show a significant drop, to roughly half. Actually, it's quite the reverse!

When comparing graphics cards like this, it makes sense to treat the older card as the baseline reference. That means normalising its performance to 100% and placing its bars to the left of the newer card's, because people in the western world expect an axis of time to run from left/old to right/new. NVIDIA effectively made the newer card the baseline, so the way to read this graph is not "how does this new card perform?" but "how would our old flagship card fare against this latest offering?". That is quite misleading in a whitepaper for the newer card, and I have to wonder whether it was done on purpose.

Another thing to keep in mind is that the vertical axis starts at 60%, so the lengths of the bars are not proportional to the data. A title scoring 0.7 gets a bar only a quarter as long as one scoring 1.0, even though it actually achieves 70% of the performance. This is a common trick for amplifying differences between data points that are close together. Here's the same data presented in a more sensible way:
[Figure: the same data re-plotted with the GTX 480 normalised to 100% and a vertical axis running from 0 to 100%]
Doesn't that make it much clearer that the new, cheaper card comes quite close in performance to the older flagship card and sometimes even exceeds it?
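For anyone who wants to redraw such a chart themselves, here is a minimal matplotlib sketch of the layout I'm advocating. The scores below are made-up placeholders rather than the whitepaper's actual figures; the point is the structure: the older card normalised to 100% and drawn on the left, the newer card on the right, and a vertical axis that starts at zero so bar lengths stay proportional to the data.

```python
import matplotlib.pyplot as plt
import numpy as np

# Illustrative numbers only -- not the actual figures from NVIDIA's whitepaper.
# Everything is expressed relative to the GTX 480, which is fixed at 100%.
titles = ["3DMark", "GRID 2", "BioShock Infinite"]
gtx_480 = [100, 100, 100]        # older card: the baseline reference
gtx_750_ti = [55, 103, 105]      # hypothetical relative scores for the new card

x = np.arange(len(titles))
width = 0.35

fig, ax = plt.subplots(figsize=(7, 4))
# Older card on the left, newer card on the right, so time reads left to right.
ax.bar(x - width / 2, gtx_480, width, label="GTX 480 (baseline)")
ax.bar(x + width / 2, gtx_750_ti, width, label="GTX 750 Ti")

# Start the axis at zero so bar lengths correspond linearly to the data.
ax.set_ylim(0, 120)
ax.set_ylabel("Relative performance (GTX 480 = 100%)")
ax.set_xticks(x)
ax.set_xticklabels(titles)
ax.legend()
fig.tight_layout()
plt.show()
```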