I’ve used several benchmark programs to compare computer hardware, especially NVIDIA GPUs used for CUDA programming. One good free benchmark is Maxon Cinebench; another is Unigine Heaven.
Here are some surprising results generated with the new Blender Benchmark from the Blender Open Data website.
Quick summary/my findings:
- Less render time running Linux [Linux Mint] than under Windows 10 [all things turned off], same hardware
- SSDs help
Linux run: workstation with GTX 1080 8GB, GTX 970 4GB, 32GB RAM, quad-core Intel processor. No SSD.
Same hardware as above, but run under Windows 10.
Ran on the same hardware, but only using the GTX 1080 8GB, under Linux Mint – an impressive result, showing in my view (and from the CUDA/NVIDIA research I did) that the 4GB of the 2nd graphics card limits performance.
Home PC #2 results below: newer PC, no SSD, 2 x GTX 560 Ti.
Below are results from an Alienware 17 [4 years old] with a Samsung Evo 500GB SSD and a GTX 860M.
Updating to the latest CUDA development toolkit on Linux Mint 18.1 [with a GTX 1080 + 970]
First, update to the R390 driver, then install the CUDA 9.1 toolkit:
# Download cuda-repo-ubuntu1704-9-1-local_9.1.85-1_amd64.deb from the NVIDIA CUDA downloads page first
sudo dpkg -i cuda-repo-ubuntu1704-9-1-local_9.1.85-1_amd64.deb
sudo apt-key add /var/cuda-repo-9-1-local/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
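After the install, the toolkit version can be confirmed with nvcc --version. Since that assumes a machine with the toolkit already on PATH, this sketch parses a captured sample of what nvcc 9.1 prints, so the check can be shown standalone:

```shell
# Sample of what 'nvcc --version' prints for CUDA 9.1 (captured output,
# used here so the parsing works without the toolkit installed).
sample_output='nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 9.1, V9.1.85'

# On a real system, replace the echo with: nvcc --version
version=$(echo "$sample_output" | grep -o 'release [0-9.]*' | awk '{print $2}')
echo "CUDA toolkit release: $version"
```

If the version printed doesn't match what you just installed, an older toolkit is probably earlier on your PATH.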
GTX 1080 at about 80 degrees C – and doing 474 Sol/s
GTX 970 at about 56 degrees C – and doing 275 Sol/s
OS – Linux Mint
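To keep an eye on temperatures like the ones above, the driver's nvidia-smi tool can be queried and its CSV output parsed. A minimal sketch, with sample data mirroring my two cards so the parsing runs without a GPU present:

```shell
# Sample CSV lines as produced by:
#   nvidia-smi --query-gpu=temperature.gpu,name --format=csv,noheader
# (hard-coded here so the loop can be demonstrated without a GPU)
sample='80, GeForce GTX 1080
56, GeForce GTX 970'

# On a real system, pipe nvidia-smi's output into this loop instead.
echo "$sample" | while IFS=', ' read -r temp name; do
  echo "$name: ${temp}C"
done
```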
Notes, screenshots and results from using, installing and testing NVIDIA CUDA on Linux Mint 18.1.
“CUDA is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).” – NVIDIA
sudo add-apt-repository ppa:maarten-baert/simplescreenrecorder
sudo apt-get update
sudo apt-get install simplescreenrecorder
Tests were done on my custom-built workstation PC as well as my Alienware 17 laptop – all GPU specs below.
Quick comparisons [not the focus of this post]:
- Puget Systems laptop, with a GTX 980M 8GB – 91 billion int./sec, 1819 GFLOPS
- My Alienware 17 with a GTX 860M 2GB – 43.3 billion int./sec, 866 GFLOPS
- My workstation with a GTX 970 4GB – 146 billion int./sec, 2919 GFLOPS
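To put those GFLOPS figures in perspective, the relative throughput is easy to compute directly from the numbers above:

```shell
# Relative single-precision throughput from the GFLOPS figures above:
# GTX 970 (workstation) vs GTX 860M (Alienware) and GTX 980M (Puget).
awk 'BEGIN {
  gtx970 = 2919; gtx980m = 1819; gtx860m = 866
  printf "GTX 970 vs GTX 860M: %.1fx\n", gtx970 / gtx860m
  printf "GTX 970 vs GTX 980M: %.1fx\n", gtx970 / gtx980m
}'
# -> GTX 970 vs GTX 860M: 3.4x
# -> GTX 970 vs GTX 980M: 1.6x
```

So the desktop GTX 970 is roughly 3.4x the laptop 860M on raw FLOPS, which lines up with the render-time gaps seen in the screenshots.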
Screenshots in no particular order.
Below is the result from Puget Systems’ test on their laptop, for comparison.
Below are my results.
I have this video card on my main CAD workstation (Q8200 2.33GHz quad-core, 8GB RAM, Vista 64-bit).
I updated the video driver to the NVIDIA 182.08 Windows Vista 64 release after some Windows Vista problems with the 182.20 release. It seems to be working great.
Ran the POV-Ray 3.6.1 Benchmark, version 1.02:
Render averaged 103.58 PPS over 147456 pixels, in a total time of 1423.63 seconds.
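As a sanity check, that pixels-per-second figure follows directly from the pixel count and total render time:

```shell
# Cross-check the reported POV-Ray rate:
# 147456 pixels / 1423.63 seconds should give ~103.58 PPS.
awk 'BEGIN { printf "%.2f PPS\n", 147456 / 1423.63 }'
# -> 103.58 PPS
```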
Boinc manager benchmark results:
2344 FP MIPS (whetstone) per CPU
6656 Integer MIPS (dhrystone) per CPU
My Workstation Cinebench R10 benchmark results: 5/15/2009
Intel Quad Q8200, 64-bit Vista Ultimate, 8GB DDR2
OpenGL = 2417, 1 CPU = 2641, Multi-CPU = 8879
The HP xw8600, which uses the Xeon 5400 processor, has these scores: 3888 (1 CPU), 23,445 (multi-CPU) and 6571 for OpenGL.
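The multi-CPU scaling implied by those Cinebench R10 scores is worth a quick check (the xw8600 figures assume a dual-socket quad-core configuration, which is why it scales past 4x):

```shell
# Multi-CPU vs single-CPU scaling from the Cinebench R10 scores above.
awk 'BEGIN {
  printf "Q8200 (4 cores): %.1fx\n", 8879 / 2641
  printf "xw8600 (dual Xeon 5400): %.1fx\n", 23445 / 3888
}'
# -> Q8200 (4 cores): 3.4x
# -> xw8600 (dual Xeon 5400): 6.0x
```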
I wanted to take advantage of CUDA from NVIDIA (a leading GPU manufacturer) – a platform that uses GPUs for scientific computing. You need a minimum of 256MB of video card RAM to take advantage of CUDA. This will also increase my BOINC/SETI processing speed. My current NVIDIA driver for my PNY Quadro FX 370 was dated 5/26/2008 (release 126.96.36.19996), so I downloaded the latest driver from the NVIDIA website; the new driver is dated 12/26/2008 (release 188.8.131.5220).
This works okay on my quad-core workstation and boosted my Vista index from 3.6 to 4.2.
My Vista index ratings for Processor/RAM/Hard drive were 5.9 both before and after the update.
Graphics rating went from 3.6 to 4.2
Games graphics capability went from 4.0 to 4.6
It will be interesting to see this week what this GPU processing does to my BOINC/SETI ratings.
As of 2/26/2009
Total credit 80,793
Recent average credit 933.23
World Community Grid 774,268