Message boards : Graphics cards (GPUs) : Poor times with 780 ti
JugNut
Joined: 27 Nov 11
Posts: 11
Credit: 1,021,749,297
RAC: 0
Message 34414 - Posted: 21 Dec 2013 | 8:33:38 UTC
Last modified: 21 Dec 2013 | 8:35:56 UTC

Hi guys, similar to TJ's post (GTX 770 vs GTX780Ti), I'm getting quite poor times for a 780 ti (compared to others).

Other than the 780 ti running poor times, everything else seems normal (logs etc.). I notice no throttling or any suspicious behaviour, and temps are kept under control, etc.; it's just slow.

After reading TJ's thread & seeing Retvari's stunning times (17,000 seconds & under) I may see if I can find a copy of WinXP. My times are 5,000-7,000 seconds longer (22,000-24,500 seconds) than Retvari's 780 ti. That's a massive amount of difference.

This is my host: http://www.gpugrid.net/show_host_detail.php?hostid=112153
Specs:
Win x64
Gigabyte 780 ti. Driver ver 331.82
MB: Asus x79 Deluxe
i7 4930k
Boinc 7.2.33
SWAN_SYNC is on.

Anyone else having similar issues?

Thanks for reading.

PS: Does anyone know what effect turning on "KBOOST" would have for BOINC?
KBOOST is a setting available in EVGA Precision X that locks the card in boost mode.

JugNut
Joined: 27 Nov 11
Posts: 11
Credit: 1,021,749,297
RAC: 0
Message 34415 - Posted: 21 Dec 2013 | 11:36:13 UTC

Sorry, wrong host :)

This is the host in question, with the 780 ti in it: http://www.gpugrid.net/show_host_detail.php?hostid=164132

Thanks again..

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 34417 - Posted: 21 Dec 2013 | 14:13:23 UTC

Hi JugNut,

The name you gave this thread is better than the one I chose.
However, my 780Ti is only slightly faster than my 770; that's why I chose that name.
Perhaps we'll get more responses here. Anyhow, your times are better than mine.

One Noelia WU pushed the card to 1033MHz with a usage of 91%. Now it is back to 875MHz. But the card can run faster; that has now been proven.

At what frequency is your GPU Core Clock running and at what frequency did you set it? What is the temperature?

____________
Greetings from TJ

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 34418 - Posted: 21 Dec 2013 | 15:03:55 UTC - in response to Message 34414.

I have a Win XP disc that came with a used Dell computer I bought from a friend. I have no use for the disc; you can have it if you want, and it comes with a legal/valid key. I'll mail it to you on my dime.

Or I can make it and the key available for download to anyone who wants a copy, if someone can tell me how to scrape the bits off the disc and store them in an ISO or whatever format is most convenient.

But rather than get "imprisoned" by an antiquated OS that will soon be unsupported, why not install Linux and get the same performance from your 780ti as you would with XP; some say even slightly better.

You could install it once on any machine on your LAN and then install a boot image on that machine. All the other machines on your LAN could then boot from that image. The only catch is that the BIOS on your mobo needs to support boot over LAN. The advantage is that you need to update and maintain only the "master", because the "slaves" boot whatever is on the master. The slaves can also run "diskless" and use a partition created on the master's disk, or any disk on the LAN. You can probably do all that on Windows too with freeware, or it might be proprietary, not sure. The thing is, support for it could be dropped at any time if it's XP based. With Linux the support will be forever, as it's an integral part of the OS; always has been and always will be.
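A minimal sketch of that LAN-boot setup, assuming a dnsmasq-based PXE server on the "master"; the subnet, paths and bootloader name here are illustrative assumptions, not details from this thread:

# /etc/dnsmasq.conf on the "master" (illustrative values)
dhcp-range=192.168.1.100,192.168.1.200,12h   # hand out addresses on the LAN
enable-tftp                                  # serve boot files over TFTP
tftp-root=/srv/tftp                          # directory holding the boot files
dhcp-boot=pxelinux.0                         # bootloader the "slaves" fetch via PXE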

____________
BOINC <<--- credit whores, pedants, alien hunters

JugNut
Joined: 27 Nov 11
Posts: 11
Credit: 1,021,749,297
RAC: 0
Message 34424 - Posted: 21 Dec 2013 | 20:44:27 UTC
Last modified: 21 Dec 2013 | 20:53:26 UTC

Thanks for your posts, guys.

The 780 ti is at stock clocks, i.e. clock: 876MHz (boost: 928MHz), memory 7000MHz (1749MHz). But its boost speed has always been 1019MHz for some reason.

After my post I increased the power target & temp target to max. I'll see how that goes for 24hrs. But the boost has stayed the same, so I guess that's its limit, and WU times are unlikely to get any better, but who knows? It's worth a shot.

@ Dagorath: Thanks for your kind offer. I have an OEM copy of XP somewhere, but I think it's locked to whatever PC it came with years ago (a laptop, I think).

Like most PC enthusiasts I've tried a few different flavors of Linux over the years, but I never stuck with it, so my Linux-ese is severely lacking as a result. All my PCs are dedicated BOINC rigs and also have mixed GPUs (both AMD & Nvidia), and the last time I tried installing Linux I had all kinds of trouble installing both video drivers for BOINC to use. And when you're BOINC addicted, every second of downtime seems like years; it wasn't long before Windows was back up & running again :) Although I wonder if my problem is OS specific at all? Next I may give the KBOOST setting in EVGA Precision X a try.

Any other thoughts for TJ or myself, guys?

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 34425 - Posted: 21 Dec 2013 | 22:43:44 UTC - in response to Message 34424.

Hmm, you have the same settings I have. I tried that as well indeed: switched everything to maximum, but no extra speed yet. The temperature then quickly goes to 82-83°C, but GPU load remains around 81%.
We both have Win7, so that could be a factor, as Zoltan is suggesting.


____________
Greetings from TJ

Jeremy Zimmerman
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Message 34973 - Posted: 11 Feb 2014 | 2:33:27 UTC

Built a new machine with Win7 and a pair of 780Ti. http://www.gpugrid.net/show_host_detail.php?hostid=165832

A little surprised at the difference between XP and Win7 processing times. Yes, I read it in many a thread, but sometimes one needs to see things up close before getting smacked in the face extra hard.

A week or so ago, when a batch of NOELIA_DIPEPT1 came through, the cards were only running at ~45-55% utilization, so I decided it would be good to try running two WUs at once. Up till then, the cards were getting 70-80% utilization, pulling 750W according to APC PowerChute off a Seasonic Platinum 1000W PSU.

So I searched through old threads to figure out how to write the app_config file: http://www.gpugrid.net/forum_thread.php?id=3319&nowrap=true#29216. Since I already like to be smacked hard, I thought: so many cores on the 780Ti, surely I would get some benefit out of running two!

So I ran for a couple of days with 2 WUs per GPU, and then switched back to start looking at the new batch. I like to test on-again, off-again, especially since I seemed to be between batches by luck.

Switching back to 1 WU per GPU, I simply removed the app_config. Well, now my times were similar to a GTX680's, and utilization is mostly around 50%-60%. Now only pulling 650W.

So I have looked at the usual suspects...

* I like to keep a thread free to keep things snappy, since this is my main-use rig. So not CPU restricted. acemd keeps 12-13% in Task Manager per acemd WU.
* Restarted BOINC Manager a few times.
* Warm reboot a few times.
* Cold reboot (~5 minutes off).
* Temps are fine on the cards. I like to keep them <75°C, and they are typically 65-70.
* Cards are not downclocking. Well actually, when the load goes closer to 40%, I see the card clock drop and GPU-Z says Performance Cap is limited by "Util" (card utilization).
* I have not changed drivers during this or installed anything new.
* After switching back to 1 WU per GPU, I did add the Intel GPU drivers, but the problem existed before and stayed the same after.
* Looked in client_state, and even though they should only affect time estimates, I tried changing <duration_correction_factor> and <flops>, but they were just corrected back after a couple of hours. <avg_ncpus> stayed 0.5 after removing app_config, so I made it equal to <max_ncpus>, but no change in performance.

Besides detaching from the project and reattaching, or installing new drivers... any thoughts on what file has become hung up on wanting to not utilize the cards as much as before?


Other Info:

cc_config
<cc_config>
<options>
<use_all_gpus>1</use_all_gpus>
<start_delay>30</start_delay>
<skip_cpu_benchmarks>1</skip_cpu_benchmarks>
</options>
</cc_config>

app_config (that was originally run for 2 WU per GPU)
<app_config>
<app>
<name>acemdlong</name>
<max_concurrent>9999</max_concurrent>
<gpu_versions>
<gpu_usage>0.5</gpu_usage>
<cpu_usage>0.001</cpu_usage>
</gpu_versions>
</app>
</app_config>

app_config (trying new version to "fix" the issue)
<app_config>
<app>
<name>acemdlong</name>
<max_concurrent>2</max_concurrent>
<gpu_versions>
<gpu_usage>1.0</gpu_usage>
<cpu_usage>0.5</cpu_usage>
</gpu_versions>
</app>
</app_config>
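(A general note on these files, not specific to this host: the gpu_usage and cpu_usage values only tell the BOINC scheduler how to budget devices; 0.5 GPU means two tasks are scheduled per GPU, 1.0 means one, and neither value throttles what the science app actually consumes.)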

Startup Event Log

2/9/2014 10:47:56 PM | | Starting BOINC client version 7.2.33 for windows_x86_64
2/9/2014 10:47:56 PM | | log flags: file_xfer, sched_ops, task
2/9/2014 10:47:56 PM | | Libraries: libcurl/7.25.0 OpenSSL/1.0.1 zlib/1.2.6
2/9/2014 10:47:56 PM | | Data directory: C:\ProgramData\BOINC
2/9/2014 10:47:56 PM | | Running under account Jeremy
2/9/2014 10:47:56 PM | | CUDA: NVIDIA GPU 0: GeForce GTX 780 Ti (driver version 332.21, CUDA version 6.0, compute capability 3.5, 3072MB, 2839MB available, 6247 GFLOPS peak)
2/9/2014 10:47:56 PM | | CUDA: NVIDIA GPU 1: GeForce GTX 780 Ti (driver version 332.21, CUDA version 6.0, compute capability 3.5, 3072MB, 2925MB available, 6247 GFLOPS peak)
2/9/2014 10:47:56 PM | | OpenCL: NVIDIA GPU 0: GeForce GTX 780 Ti (driver version 332.21, device version OpenCL 1.1 CUDA, 3072MB, 2839MB available, 6247 GFLOPS peak)
2/9/2014 10:47:56 PM | | OpenCL: NVIDIA GPU 1: GeForce GTX 780 Ti (driver version 332.21, device version OpenCL 1.1 CUDA, 3072MB, 2925MB available, 6247 GFLOPS peak)
2/9/2014 10:47:56 PM | | OpenCL: Intel GPU 0: Intel(R) HD Graphics 4600 (driver version 9.18.10.3257, device version OpenCL 1.2, 1624MB, 1624MB available, 200 GFLOPS peak)
2/9/2014 10:47:56 PM | | OpenCL CPU: Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz (OpenCL driver vendor: Intel(R) Corporation, driver version 1.2, device version OpenCL 1.2 (Build 66956))
2/9/2014 10:47:56 PM | | Host name: i7-4770k-jz
2/9/2014 10:47:56 PM | | Processor: 8 GenuineIntel Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz [Family 6 Model 60 Stepping 3]
2/9/2014 10:47:56 PM | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss htt tm pni ssse3 fma cx16 sse4_1 sse4_2 movebe popcnt aes syscall nx lm vmx tm2 pbe
2/9/2014 10:47:56 PM | | OS: Microsoft Windows 7: Professional x64 Edition, Service Pack 1, (06.01.7601.00)
2/9/2014 10:47:56 PM | | Memory: 15.87 GB physical, 31.75 GB virtual
2/9/2014 10:47:56 PM | | Disk: 238.25 GB total, 165.67 GB free
2/9/2014 10:47:56 PM | | Local time is UTC -6 hours
2/9/2014 10:47:56 PM | | Config: use all coprocessors
2/9/2014 10:47:56 PM | Einstein@Home | URL http://einstein.phys.uwm.edu/; Computer ID 10123203; resource share 25
2/9/2014 10:47:56 PM | LHC@home 1.0 | URL http://lhcathomeclassic.cern.ch/sixtrack/; Computer ID 10313670; resource share 25
2/9/2014 10:47:56 PM | SETI@home | URL http://setiathome.berkeley.edu/; Computer ID 7189091; resource share 25
2/9/2014 10:47:56 PM | GPUGRID | URL http://www.gpugrid.net/; Computer ID 165832; resource share 100
2/9/2014 10:47:56 PM | SETI@home | General prefs: from SETI@home (last modified 06-Feb-2014 22:07:34)
2/9/2014 10:47:56 PM | SETI@home | Host location: none
2/9/2014 10:47:56 PM | SETI@home | General prefs: using your defaults
2/9/2014 10:47:56 PM | | Reading preferences override file
2/9/2014 10:47:56 PM | | Preferences:
2/9/2014 10:47:56 PM | | max memory usage when active: 4063.87MB
2/9/2014 10:47:56 PM | | max memory usage when idle: 8127.75MB
2/9/2014 10:47:56 PM | | max disk usage: 10.00GB
2/9/2014 10:47:56 PM | | max CPUs used: 6
2/9/2014 10:47:56 PM | | max download rate: 98304 bytes/sec
2/9/2014 10:47:56 PM | | max upload rate: 49152 bytes/sec
2/9/2014 10:47:56 PM | | (to change preferences, visit a project web site or select Preferences in the Manager)
2/9/2014 10:47:56 PM | | Not using a proxy
2/9/2014 10:47:57 PM | | Suspending computation - initial delay

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 34974 - Posted: 11 Feb 2014 | 12:24:54 UTC - in response to Message 34973.

Different types of work unit utilize the GPU to different extents. The NOELIA_DIPEPT WU's don't use the GPU as much, and the boost will reduce when this is the case. To stop the downclocking, set the NVIDIA settings to Prefer Maximum Performance.
To slightly improve performance of the GPUGrid WU, you can reduce the amount of CPU you allow BOINC to use. When 4 threads (50%) are allowed, it's about as beneficial to the GPUGrid task as you can make it. Note that even a GPU usage improvement from, say, 45% to 56% means the task will complete 24% faster. With a 780Ti it's definitely worth the loss of a CPU thread or two.
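A quick check of that figure, assuming the completion rate scales directly with GPU utilization (a simplification, but it shows where the 24% comes from):

old_util, new_util = 0.45, 0.56
speedup = new_util / old_util                    # ~1.244x the throughput
print("%.0f%% faster" % ((speedup - 1) * 100))   # -> 24% faster
# equivalently, run time falls to 45/56, i.e. about 80% of what it was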

Can I also suggest you try <cpu_usage>1.0</cpu_usage> rather than 0.5? With the latest WHQL drivers your GPUs are likely to use more than 0.8 of a CPU each.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Jeremy Zimmerman
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Message 34975 - Posted: 11 Feb 2014 | 13:07:29 UTC - in response to Message 34974.

Thank you for the suggestion. I have put app_config back in place and set it with <cpu_usage>1.0</cpu_usage>. I do not think this is the issue, since the CPU times are on par with the GPU times, Task Manager shows a solid 12-13%, and I have spare CPU cycles. Will see tonight if there are any differences.

Jeremy Zimmerman
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Message 34988 - Posted: 12 Feb 2014 | 3:35:41 UTC - in response to Message 34974.

Times do seem to be stabilizing on WU's with the change, but still the same speed as a 680. Previously it was saying 0.87 CPU per WU. Still seems odd; I will watch whether the daily work volume increases. GPU utilization is still low at 50% for both GPUs, and the system is only pulling 630W currently.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 34996 - Posted: 12 Feb 2014 | 15:17:04 UTC - in response to Message 34988.

Jeremy, I'm running SANTI_MAR tasks on a 770 and 670 under W7 and seeing 80% GPU utilization while running 6 CPU tasks.

Is SLI on?

What are the GPU clocks and what are you using to control the fan speed (MSI Afterburner for example)?

Might be worth adding a modest OC on the GPU and saving the profile; just in case the GPU's are going into a reduced power state and not recovering properly (say the GDDR5 stays low).

Did you set Prefer Maximum Performance in NVidia Control Panel? Right click on the desktop, open NVidia Control Panel, select Manage 3D settings (top left), under Global Settings (right pane) scroll down to Power Management Mode and select Prefer Maximum Performance.

If there isn't a desktop context menu,

    C:\Program Files\NVIDIA Corporation\Control Panel Client\nvcplui.exe




____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Jeremy Zimmerman
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Message 35010 - Posted: 13 Feb 2014 | 0:07:12 UTC - in response to Message 34996.

skgiven, thank you for your time and thoughts. This is just an odd situation. See responses below.



Jeremy, I'm running SANTI_MAR tasks on a 770 and 670 under W7 and seeing 80% GPU utilization while running 6 CPU tasks.

I am running 4 CPU tasks, 2 GPU Grid Tasks, 1 Intel GPU task, and then leave a core free.

Is SLI on?

No.

What are the GPU clocks and what are you using to control the fan speed (MSI Afterburner for example)?

The 1st card is boosting to 1200MHz at 1.161V with 55% utilization at 63°C, and the second card is boosting to 1187MHz at 1.174V with 55% utilization at 59°C. Using EVGA Precision for fan speed control.

1st Card on http://www.gpugrid.net/result.php?resultid=7769186


2nd Card on http://www.gpugrid.net/result.php?resultid=7769298


Might be worth adding a modest OC on the GPU and saving the profile; just in case the GPU's are going into a reduced power state and not recovering properly (say the GDDR5 stays low).

I have only added a 38MHz boost with a 105% Power Target. Left the memory alone since it is at 7000MHz.

Did you set Prefer Maximum Performance in NVidia Control Panel? Right click on the desktop, open NVidia Control Panel, select Manage 3D settings (top left), under Global Settings (right pane) scroll down to Power Management Mode and select Prefer Maximum Performance.

Yes, currently at Max Performance. Have tried it both ways.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 35025 - Posted: 14 Feb 2014 | 1:09:52 UTC - in response to Message 35010.
Last modified: 14 Feb 2014 | 1:13:12 UTC

Stop using the Intel GPU and compare. It's not the drivers, it's the use of the Intel GPU that's to blame. It competes with GPUGrid's CPU requirements. Ditto for other GPU projects.
PS. Not Sky, just initials :)
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Jeremy Zimmerman
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Message 35027 - Posted: 14 Feb 2014 | 3:31:52 UTC - in response to Message 35025.

skgiven, I had to check your name a little closer. Sorry about that; you have been skygiven in my head the whole time I've been reading the forums. :)

Disabled the Intel GPU projects and restarted, and the Santi_MARwtcap's running jumped from 50% to 70% utilization. Will continue to watch, but it looks like that may have been the issue.

Not exactly sure why this would be the case, since I have the CPU running at 4.1GHz and the acemd processes were each getting the 12-13% shown in Task Manager. I'll take it though. I'll report back to see if this holds.

Thank you.

Jeremy Zimmerman
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Message 35046 - Posted: 15 Feb 2014 | 2:36:32 UTC - in response to Message 35027.

The times are back to normal for 780Ti's under Win7.

The times for the Santi_MAR422 and Santi_MARwtcap are 19,850 - 20,750 seconds for the last 5 WUs.
The times before the switch were 23,450 - 24,650 seconds for the previous 5 WUs.

So using the iGPU on the i7-4770k under Win7 cost a 21% performance drop in the 780Ti's GPUGrid processing, at least for the Santi MAR422/wtcap WUs. This is even with a free thread being left on the CPU. Interesting.

skgiven, thank you for helping me find the issue.

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35077 - Posted: 15 Feb 2014 | 22:23:58 UTC - in response to Message 35046.

The times are back to normal for 780Ti's under Win7.

The times for the Santi_MAR422 and Santi_MARwtcap are 19,850 - 20,750 seconds for the last 5 WUs.
The times before the switch were 23,450 - 24,650 seconds for the previous 5 WUs.

So using the iGPU on the i7-4770k under Win7 cost a 21% performance drop in the 780Ti's GPUGrid processing, at least for the Santi MAR422/wtcap WUs. This is even with a free thread being left on the CPU. Interesting.

skgiven, thank you for helping me find the issue.

Hello Jeremy, those are great times you are reporting under Win7 with the 780Ti.
Can you please give some details about how you achieved this? I have been trying since last November to get run times down. Right now WU's (Santi) run around 24,000 seconds, and that is almost the same as my 770. I have an Asus, not the OC version. I have downclocked per advice here to achieve better times. The iGPU is not hampering me, as I cannot get it working at Einstein@home.
What is your clock speed, memory, temp, etc.? Do you use MSI Afterburner or something else?
____________
Greetings from TJ

Jeremy Zimmerman
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Message 35085 - Posted: 16 Feb 2014 | 0:05:59 UTC - in response to Message 35077.

TJ,
I use the EVGA Precision app for monitoring/control, and try to keep the cards under ~70°C. My app_config and cc_config files are short.

cc_config
<cc_config>
<options>
<use_all_gpus>1</use_all_gpus>
<start_delay>30</start_delay>
<skip_cpu_benchmarks>1</skip_cpu_benchmarks>
</options>
</cc_config>

app_config
<app_config>
<app>
<name>acemdlong</name>
<max_concurrent>2</max_concurrent>
<gpu_versions>
<gpu_usage>1.0</gpu_usage>
<cpu_usage>1.0</cpu_usage>
</gpu_versions>
</app>
</app_config>

The SWAN_SYNC environment variable is set up.
BOINC is now set to use 99% of the CPUs (7 threads, of which 2 are for GPUGrid) and 100% of CPU time. "Run while computer is in use" and "Use CPU while computer is in use" are both checked. The Activity menu has Run Always and Use GPU Always both checked.
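For anyone wondering how that variable gets set on Windows: SWAN_SYNC is an ordinary environment variable that the ACEMD app reads at startup, so something like the following from a command prompt, followed by a BOINC restart, should do it (the value 1 is an assumption; older posts also mention 0):

setx SWAN_SYNC 1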

Even though both cards were purchased in one order directly from EVGA, they perform a little differently. In the tower case, the lower card is always at the max voltage of 1.175V. If I sync the card settings (a Precision feature) to 100% power target, 0 GPU clock offset and 0 memory offset, the lower card will boost to 1150MHz at 64°C at 1.175V and the upper card will boost to 1162MHz at 69°C at 1.161V. Some day I'll have to take the time to switch the card positions on the board to confirm it is a card difference, but for now it is just interesting.

Still seeing how far I can boost them, but currently having good success with 1188 and 1212MHz. These cards are on air, so that's working rather well for winter. It will probably change when summer gets here.

I hope this info helps.

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35092 - Posted: 16 Feb 2014 | 10:31:27 UTC - in response to Message 35085.

Thank you for the information, Jeremy. It gives me something to fiddle with on my card.

I see you have the card running at a higher clock speed than I have; mine is running at 875MHz. We have more or less the same cc_config, but I don't use an app_config.
I will first try to boost the clock and see what happens.

In my opinion EVGA makes the best cards, but last year they were not available when I wanted a 780Ti.

I have two GTX660's from EVGA in another rig, and they differ slightly as well. If I sync the cards, the clock speed of the second card, in the slot further away from the CPU, is a little lower, and its temperature is a few degrees lower.
____________
Greetings from TJ

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35149 - Posted: 19 Feb 2014 | 15:57:42 UTC - in response to Message 35085.

I have increased the GPU core clock and also the voltage a little, to try to achieve the same times as Jeremy. However, once a WU has run for a few seconds, the GPU clock goes to 875.7MHz, very occasionally to 920-930MHz for a few minutes. GPU load is 78% with Santi's. Temperature is 72°C.
Do I need to set the voltage lower or higher?
____________
Greetings from TJ

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 35151 - Posted: 19 Feb 2014 | 16:29:01 UTC - in response to Message 35149.

I have increased the GPU core clock and also the voltage a little, to try to achieve the same times as Jeremy. However, once a WU has run for a few seconds, the GPU clock goes to 875.7MHz, very occasionally to 920-930MHz for a few minutes. GPU load is 78% with Santi's. Temperature is 72°C.
Do I need to set the voltage lower or higher?


I don't think the clock drops because the voltage is too low. It drops because the temperature is too high.

If you want the clock to stay at 920-930MHz you need to keep the temperature at or below 70°C. Increasing the voltage will increase the temperature, so if you raise the voltage and want the clocks to stay high, you have to cool the GPU better.

You can decrease the voltage to help reduce the temperature, but a lower voltage might make it unstable. If you can get away with lowering the voltage then OK, but I would try to improve the cooling solution somehow (more fans, lower the ambient, open the case and put a big fan blowing lots of air in, duct cold air into the case, whatever works).

____________
BOINC <<--- credit whores, pedants, alien hunters

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35161 - Posted: 19 Feb 2014 | 22:22:47 UTC - in response to Message 35151.

Temperature is currently at 68°C; I cannot get it lower. This card will power itself down when it reaches 109°C, so as long as I can keep it below 80°C it should be okay. Ambient temperature will only increase as the season gradually warms.
____________
Greetings from TJ

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 35164 - Posted: 19 Feb 2014 | 23:18:23 UTC - in response to Message 35161.

I give up.

____________
BOINC <<--- credit whores, pedants, alien hunters

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Message 35397 - Posted: 28 Feb 2014 | 17:00:00 UTC

I just replaced my two 680s with two 780Tis. Only on my first run with the new cards, but so far the times seem better.

Running 334.89:
SANTI_MAR GPU utilization 72% @ 1137MHz
NOELIA_FXA GPU utilization 79% @ 1124MHz

I always have one card in my case that boosts just a bit higher than the other.

Looking at the current progress and comparing it against my runs on the 680s, it looks like the NOELIA task will complete about 2 hours faster, and the SANTI task will complete about 1.5 hours faster.
____________

Trotador
Joined: 25 Mar 12
Posts: 103
Credit: 9,769,314,893
RAC: 32,536
Message 35400 - Posted: 28 Feb 2014 | 20:27:04 UTC - in response to Message 35397.

I just replaced my two 680s with two 780Tis. Only on my first run with the new cards, but so far the times seem better.

Running 334.89:
SANTI_MAR GPU utilization 72% @ 1137MHz
NOELIA_FXA GPU utilization 79% @ 1124MHz

I always have one card in my case that boosts just a bit higher than the other.

Looking at the current progress and comparing it against my runs on the 680s, it looks like the NOELIA task will complete about 2 hours faster, and the SANTI task will complete about 1.5 hours faster.


Which model Matt?

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Message 35401 - Posted: 28 Feb 2014 | 20:40:15 UTC

EVGA GTX 780Ti 03G-P4-2883-KR.
____________

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Message 35402 - Posted: 28 Feb 2014 | 22:22:26 UTC
Last modified: 28 Feb 2014 | 23:16:55 UTC

Completed times on first WUs:

NOELIA - 18110s
SANTI - 20764s

This is a great improvement over the GTX 680s; however, I think I'm now running into one of the issues mentioned above in this thread. After the card running NOELIA finished, it started on a SANTI and downclocked from 1124MHz to 980MHz. The other card stayed at its boost speed of 1137MHz after completing a SANTI and beginning a NOELIA.

I have Nvidia preferences set for maximum performance. Temps are good - the card at 1137MHz is at 70°C and the card at 980MHz is at 60°C. Utilization for NOELIA/SANTI is still the same.

Edit: Hmm, may have jumped the gun a bit. Rebooted and now back to 1124/1137. Maybe the Nvidia preferences needed a reboot to take effect? I'll have to check again after these WUs finish.

Edit 2: ...and now it has downclocked to 980 again - the same card both times. It is the card I'm running my displays from. Never had an issue with the 680s downclocking.
____________

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 35409 - Posted: 1 Mar 2014 | 4:53:43 UTC - in response to Message 35402.

Edit: Hmm, may have jumped the gun a bit. Rebooted and now back to 1124/1137. Maybe the Nvidia preferences needed a reboot to take effect? I'll have to check again after these WUs finish.

Edit 2: ...and now it has downclocked to 980 again - the same card both times. It is the card I'm running my displays from. Never had an issue with the 680s downclocking.


I bet if you were to run an app that tracks and records the temperature and clock speeds over time for a few tasks, and then graphed that data, you would see that the clocks stay up until the temp goes above a certain cutoff, and then the clock drops until the temp falls back below that cutoff. I bet the card that is downclocking is doing so because its temp rises above 70°C. That seems to be the temp where mine downclock.

I've found that if I set the fan speed to say 60%, the temp might sit at say 65°C and stay there for many minutes. But if I go away for an hour and then peek at the temperature, I sometimes find it has risen by 6 degrees to 71°C, and I also find it has downclocked.

I think the temp rises for 2 reasons (maybe more):

1) the temperature of the air going into the case rises for some reason (the furnace kicks in or someone closes a window, for example)

2) the simulation reaches a hard spot that works the GPU harder

The fix is to recurve the fan speed, or to run software that monitors the GPU temp and increases the fan speed when the temp rises and decreases it when the temp falls. Such software lets you set a target temperature, the temp at which you want the GPU to run. It works like a thermostat.
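A minimal sketch of such a thermostat, assuming a Linux host with Coolbits enabled so nvidia-settings is allowed to drive the fan; the attribute names come from nvidia-settings and vary between driver generations (GPUTargetFanSpeed is GPUCurrentFanSpeed on some older drivers):

import subprocess, time

TARGET = 70   # degrees C to hold the GPU at
STEP = 3      # fan-speed change per adjustment, in percent

def gpu_temp():
    # '-t' makes nvidia-settings print just the attribute's value
    out = subprocess.check_output(
        ["nvidia-settings", "-q", "[gpu:0]/GPUCoreTemp", "-t"])
    return int(out.decode())

def set_fan(percent):
    percent = max(30, min(100, percent))   # keep the fan in a sane range
    subprocess.check_call(
        ["nvidia-settings", "-a", "[gpu:0]/GPUFanControlState=1",
         "-a", "[fan:0]/GPUTargetFanSpeed=%d" % percent])
    return percent

fan = set_fan(60)                 # starting guess
while True:
    t = gpu_temp()
    if t > TARGET:                # too hot: spin the fan up
        fan = set_fan(fan + STEP)
    elif t < TARGET - 3:          # comfortably cool: quiet it down
        fan = set_fan(fan - STEP)
    time.sleep(5)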



____________
BOINC <<--- credit whores, pedants, alien hunters

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35422 - Posted: 1 Mar 2014 | 13:54:20 UTC - in response to Message 35409.

Temperature depends very much on the model, brand, and even the individual card. My primary EVGA GTX660 runs steady at 75°C with the radial fan at maximum speed, which is 75% for this card, and will not get any cooler. But it is steady at 940MHz for 6 days without rebooting. Some WU's, especially Santi's, can downclock the card a bit, but it will go up again when that WU finishes.
____________
Greetings from TJ

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 35431 - Posted: 1 Mar 2014 | 19:23:17 UTC - in response to Message 35422.
Last modified: 1 Mar 2014 | 19:24:00 UTC

Temperature depends very much on the model, brand, and even the individual card.


I agree, and that is part of the problem with tweaking GPUs to get top performance: there are so many variables to deal with. I know I always say "we need a script to solve this", but I think if we had a script to collect temperature, clocks, % usage and various other data every second (maybe every 2 seconds) for the entire length of a task, and then graphed that data, we would get a much better understanding of what is happening. I can do that for Linux hosts and have a possible way of doing it for Windows hosts. Storing the data and graphing it is easy, but I don't have code for reading the data from the GPU on Windows yet, just Linux.
____________
BOINC <<--- credit whores, pedants, alien hunters

Mumak
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Message 35434 - Posted: 1 Mar 2014 | 20:02:58 UTC - in response to Message 35431.

Why not use HWiNFO and its sensor logging?

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 35448 - Posted: 2 Mar 2014 | 12:50:50 UTC - in response to Message 35434.

Why not use HWiNFO and its Sensor logging ?


I had never heard of HWiNFO for Windows until now, thanks. It looks like it might have everything one needs. If it logs the data we would want for GPUgrid tasks and is capable of composing the kind of graphs that would be useful to GPUgrid users, then it would be great. Is anybody doing that so far?

Python exposes many of the NVIDIA driver API calls (via the NVML bindings), and there are graphing apps (gnuplot) that run on Windows as well as Linux; Python runs on Windows too. Using that API, one could write a single app, running on both Windows and Linux, that logs precisely the data we want and produces exactly the graphs we want. That's easy with Python because it can also use the BOINC API to access task names and other useful data BOINC generates and exposes. That would allow you to log data and associate its graph with a task name, driver version, OS, BOINC configuration and hundreds of other types of info/data that HWiNFO might not be able to access or graph. It might or it might not; I have no idea, as I've never used it.
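As a sketch of that idea, assuming the nvidia-ml-py bindings (pynvml) are installed: log temperature, utilization and core clock to a CSV file that gnuplot (or anything else) can graph, sampling at the 2-second interval suggested above.

import csv, time
from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetTemperature, nvmlDeviceGetUtilizationRates,
                    nvmlDeviceGetClockInfo, NVML_TEMPERATURE_GPU,
                    NVML_CLOCK_GRAPHICS)

nvmlInit()
gpu = nvmlDeviceGetHandleByIndex(0)          # first GPU in the system

with open("gpu_log.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["time", "temp_C", "util_pct", "core_MHz"])
    try:
        while True:
            log.writerow([
                time.time(),
                nvmlDeviceGetTemperature(gpu, NVML_TEMPERATURE_GPU),
                nvmlDeviceGetUtilizationRates(gpu).gpu,   # percent busy
                nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS),
            ])
            f.flush()                        # keep the file readable mid-run
            time.sleep(2)
    finally:
        nvmlShutdown()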



It might be worth looking into.
____________
BOINC <<--- credit whores, pedants, alien hunters

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35449 - Posted: 2 Mar 2014 | 13:31:31 UTC - in response to Message 35448.
Last modified: 2 Mar 2014 | 13:32:09 UTC

I have installed it and used it, but the readings differ from other programs that read all sorts of system information; especially the temperatures differ. I have started a thread about temperature readings, and it seems everyone has their own preference for a reading program.
____________
Greetings from TJ

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 35454 - Posted: 2 Mar 2014 | 15:52:13 UTC - in response to Message 35449.

Since your objective is to keep the GPU temperature below a limit, believe the application that reports the highest temperature. If you can keep the temp reported by that app below the limit, then you can be quite certain it actually is below the limit.

____________
BOINC <<--- credit whores, pedants, alien hunters

Mumak
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Message 35458 - Posted: 2 Mar 2014 | 18:01:34 UTC - in response to Message 35449.
Last modified: 2 Mar 2014 | 18:05:37 UTC

I think there are a lot of users logging data using HWiNFO, and it can directly draw graphs too. Here's one example, though for AMD, but NV is similar:

I have installed it and used it, but the readings differ from other programs that read all sorts of system information; especially the temperatures differ. I have started a thread about temperature readings, and it seems everyone has their own preference for a reading program.


Which exact temperatures differ? Can you please post which sensor it is and which value? Also, what other tools do you use that show different values?
I believe most tools use NVAPI to read NV GPU temperatures on later families, as HWiNFO does, so I'm really wondering why there are differences.
Let me know about any issues and I'll look at them, since I'm the author of HWiNFO ;-)

BTW, I have already contacted the author of BoincTasks about an integration with HWiNFO and he thinks it's a good idea, but he is currently very busy. I think this might be implemented sometime; it would definitely be interesting to see all sorts of sensor information from HWiNFO via BoincTasks.

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 35462 - Posted: 2 Mar 2014 | 20:03:21 UTC - in response to Message 35458.
Last modified: 2 Mar 2014 | 20:04:51 UTC

BTW, I have already contacted the author of BoincTasks about an integration with HWiNFO and he thinks it's a good idea, but is currently very busy. But I think this might be implemented sometime.. would be definitively interesting to see all sorts of sensor information from HWiNFO via BoincTasks.


You're the author, excellent :-) Integration with BoincTasks would be very handy for all BOINC volunteers. BT runs on Linux too under Wine, so integration would be cross-platform compatible... perfect. I'm moving this "log data and graph it" thing to lowest priority on my "would like to code it" list, and I intend to try HWiNFO for Linux ASAP. No, wait!! The hwinfo package for Linux is a different package! Or did you author it for Linux as well? If not, then I would be interested in collaborating with you to make a Linux version of your HWiNFO, so users can have the same experience on both platforms. First I have other stuff to clear off my plate, but perhaps in a couple of months...
____________
BOINC <<--- credit whores, pedants, alien hunters

Mumak
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Message 35464 - Posted: 2 Mar 2014 | 20:35:34 UTC - in response to Message 35462.
Last modified: 2 Mar 2014 | 20:35:52 UTC

You're the author, excellent :-) Integration with BoincTasks would be very handy for all BOINC volunteers. BT runs on Linux too under Wine, so integration would be cross-platform compatible... perfect. I'm moving this "log data and graph it" thing to lowest priority on my "would like to code it" list, and I intend to try HWiNFO for Linux ASAP. No, wait!! The hwinfo package for Linux is a different package! Or did you author it for Linux as well? If not, then I would be interested in collaborating with you to make a Linux version of your HWiNFO, so users can have the same experience on both platforms. First I have other stuff to clear off my plate, but perhaps in a couple of months...


The HWiNFO I make is for Windows (and DOS) only; the hwinfo on Linux is a completely different thing. Porting my HWiNFO to Linux would be a really huge effort. Though I think about it sometimes, I don't believe it is going to happen in the near future.

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 35469 - Posted: 2 Mar 2014 | 22:03:08 UTC - in response to Message 35464.

I was thinking more that I would write the Linux version and make it look and feel like the Windows version as much as possible; the collaboration part would involve very little work from you. But that's a topic for a different discussion in a different thread some time in the future. PM me if interested, or I might PM you about it in a month or so. Right now I'm just discovering the power of GKrellM for Linux. I overlooked it for a while, but today the light went on and I realized just how much it can do. It's awesome and will be included in Crunchuntu.

____________
BOINC <<--- credit whores, pedants, alien hunters

Mumak
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Message 35472 - Posted: 2 Mar 2014 | 22:47:15 UTC - in response to Message 35469.

I was thinking more that I would write the Linux version and make it look and feel like the Windows version as much as possible; the collaboration part would involve very little work from you. But that's a topic for a different discussion in a different thread some time in the future. PM me if interested, or I might PM you about it in a month or so. Right now I'm just discovering the power of GKrellM for Linux. I overlooked it for a while, but today the light went on and I realized just how much it can do. It's awesome and will be included in Crunchuntu.


I'm not sure how you meant that, but sure, let's move this discussion out of this thread. You can PM me, or better, send a direct e-mail (you can find mine in HWiNFO)...

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 35473 - Posted: 2 Mar 2014 | 23:05:37 UTC - in response to Message 35472.
Last modified: 2 Mar 2014 | 23:06:09 UTC

Under Windows, NVSMI shows the GPU temps and drivers for all cards, and more info for Titans, Quadros and Teslas.
It's found in C:\Program Files\NVIDIA Corporation\NVSMI (nvidia-smi.exe) and can be accessed from a command prompt in Windows.
I think it might also be found in Linux, but I haven't checked.
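For example, polling every 5 seconds from that directory (these query fields are standard nvidia-smi options, though what a GeForce card of this era actually reports varies by driver):

cd "C:\Program Files\NVIDIA Corporation\NVSMI"
nvidia-smi --query-gpu=temperature.gpu,utilization.gpu,clocks.sm --format=csv -l 5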
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 35483 - Posted: 3 Mar 2014 | 18:46:27 UTC - in response to Message 35473.
Last modified: 3 Mar 2014 | 18:47:24 UTC

Under Windows, NVSMI shows the GPU temps and drivers for all cards, and more info for Titans, Quadros and Teslas.
It's found in C:\Program Files\NVIDIA Corporation\NVSMI (nvidia-smi.exe) and can be accessed from a command prompt in Windows.
I think it might also be found in Linux, but I haven't checked.


There is no nvidia-smi in Linux. For Linux they ship the nvidia-settings app which, when run from the command line with no args, starts the nvidia-settings GUI. If run with args (and there are a million possible args; just do 'man nvidia-settings' to read the manual) it exposes the driver API, which is very powerful. Or you can just click on the nvidia-settings icon to open the GUI, which allows setting fan speeds if you've set Coolbits in xorg.conf. It also gives a load of info and allows other tweaks, such as the performance profile, which can be used to increase performance. What nvidia-settings does not do is let you set a target temperature, and for that reason it is not the app it could be, so I give it 4 out of 5 stars.

I use calls to nvidia-settings extensively in my gpu_d script, to get temp readings and to adjust the fan speed up and down to maintain the user-specified target temp.
____________
BOINC <<--- credit whores, pedants, alien hunters

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35532 - Posted: 6 Mar 2014 | 10:13:45 UTC

I have increased the voltage of my 780Ti to 1.185V and not rebooted the system. At first this didn't help. Temperature is steady at 72°C, with an ambient temperature of 27°C.
Eventually the clock speed went up to 1032MHz, from 875MHz. It depends on the WU, and varies while crunching a WU, but the clock speed is definitely higher.
The GPU load however is not changing; it stays around 75% with a Santi and just over 80% with a Noelia.
The times however have increased by around 2000 seconds. So even for the crunchers on post-XP systems there is hope :)

(I am still not confident enough to go over to Linux.)
____________
Greetings from TJ

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35533 - Posted: 6 Mar 2014 | 10:25:07 UTC - in response to Message 35458.

Which exact temperatures differ? Can you please post which sensor it is and which value? Also, what other tools do you use that show different values?
I believe most tools use NVAPI to read NV GPU temperatures on later families, as HWiNFO does, so I'm really wondering why there are differences.
Let me know about any issues and I'll look at them, since I'm the author of HWiNFO ;-)

I don't know which sensors are the problem, but I find all different readings when checking the CPU temps. Your program has a lot of information, which is great!
But I have an Asus MOBO, and Asus ships a software package with it for temperature control and readings. However, it reads too high, according to a lot of others with the same MOBO.
Then there is CPUID HWMonitor, also used by many (gives very high readings with AMD CPUs), CoreTemp32, TThrottle and RealTemp. If I check my CPU temperature with all these programs, there is a spread of 13 degrees! So for me, as I don't have the technical knowledge, it is difficult to decide which program I can believe. "If I use the hottest, I should be safe" is the main advice. That is true, but if the CPU actually runs 13 degrees colder, I could set the fan lower, which reduces noise. And I wouldn't have to shut down my rigs so often in summer, when ambient temps go to 35°C.
Therefore I need to know which program I can trust.
____________
Greetings from TJ

Mumak
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Message 35534 - Posted: 6 Mar 2014 | 12:54:37 UTC - in response to Message 35533.
Last modified: 6 Mar 2014 | 12:56:48 UTC

I don't know which sensors are the problem, but I find all different readings when checking the CPU temps. Your program has a lot of information, which is great!
But I have an Asus MOBO, and Asus ships a software package with it for temperature control and readings. However, it reads too high, according to a lot of others with the same MOBO.
Then there is CPUID HWMonitor, also used by many (gives very high readings with AMD CPUs), CoreTemp32, TThrottle and RealTemp. If I check my CPU temperature with all these programs, there is a spread of 13 degrees! So for me, as I don't have the technical knowledge, it is difficult to decide which program I can believe. "If I use the hottest, I should be safe" is the main advice. That is true, but if the CPU actually runs 13 degrees colder, I could set the fan lower, which reduces noise. And I wouldn't have to shut down my rigs so often in summer, when ambient temps go to 35°C.
Therefore I need to know which program I can trust.


I'll explain this, maybe more users are interested in this...
I suppose you have a Core2 or similar family CPU. These families didn't have a definite maximum junction temperature (Tj,max) programmed in, and when software reads a core temperature from the CPU, it doesn't get a final value but an offset below that Tj,max. So if the reading gives x, all tools compute temperature = Tj,max - x. Now the problem is that nobody knows exactly what the correct Tj,max for a particular model of those families should be! Intel tried to clarify this, but they caused more mess than real explanation. So this is why these tools differ: each of them believes a different Tj,max applies to your CPU. But the reality is that nobody knows it exactly. There have been several attempts to determine the correct Tj,max for certain models, and some folks ran large tests, but they all failed. So all of us can just guess.
The other issue with core temperatures is accuracy. If you're interested to know more, I wrote a post about that here: http://www.hwinfo.com/forum/Thread-CPU-Core-temperature-measuring-via-DTS-Facts-Fictions. Basically it means that on certain CPU families the accuracy of the temperature sensor was very bad, especially at temperatures < 50°C; so bad that you can't use it at all. That's the truth ;-)
So in your case, you had better rely on the temperature of the external CPU diode...
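A worked example of why the tools disagree, with made-up numbers: the CPU reports only the distance below Tj,max, so each tool's guess at Tj,max shifts the displayed temperature by the same amount.

dts_reading = 35                 # hypothetical raw "distance to Tj,max" value
for tjmax in (85, 90, 95, 100):  # each tool assumes a different Tj,max
    print("assumed Tj,max %d C -> reported core temp %d C"
          % (tjmax, tjmax - dts_reading))
# the same raw reading yields 50, 55, 60 or 65 C; a spread like the
# 13 degrees TJ sees between his monitoring programs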

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 35537 - Posted: 6 Mar 2014 | 20:12:50 UTC - in response to Message 35532.

The times however have increased by around 2000 seconds. So even for the crunchers on post-XP systems there is hope :)

I hope you meant Decreased :)
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35552 - Posted: 7 Mar 2014 | 0:01:39 UTC - in response to Message 35534.

Thanks you for the explanation Mumak.
____________
Greetings from TJ

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35553 - Posted: 7 Mar 2014 | 0:02:51 UTC - in response to Message 35537.

The times however have increased by around 2000 seconds. So even for the crunchers on post-XP systems there is hope :)

I hope you meant Decreased :)

Yes indeed skgiven, the times are better (faster) now.
____________
Greetings from TJ

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Message 35556 - Posted: 7 Mar 2014 | 12:57:36 UTC - in response to Message 35532.
Last modified: 7 Mar 2014 | 12:59:43 UTC

TJ,

Glad to see you figured out a way to get better performance out of your cards. Mine are at 1187 mV on max boost, so you're probably right where you should be. Your GPU utilization for Santi and Noelia also looks about the same as on my cards.
____________

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35560 - Posted: 7 Mar 2014 | 23:59:29 UTC - in response to Message 35556.

Yes, thank you Matt. As I saw better times from other crunchers with the same OS, it should be possible for me too. And I like to experiment a bit, changing only a little at a time to see the results. Perhaps I'll increase it 1 or 2 mV more to see if it gets a bit better. But so far I am happy with the results.
____________
Greetings from TJ

[VENETO] sabayonino
Joined: 4 Apr 10
Posts: 50
Credit: 645,641,596
RAC: 44,792
Message 35738 - Posted: 19 Mar 2014 | 15:27:29 UTC
Last modified: 19 Mar 2014 | 15:45:23 UTC

Hi guys :)

I have a similar problem with a 780ti on Gentoo Linux.

"Long runs" are very, very slow to complete.
"Shorts" run fine if I set an app_config with 1 CPU + 1 GPU (~1h15m - 1h40m);
if I run a "short" with the default values, the WU takes a very long time (over 6-8 hours!).

Sometimes app_config is skipped and WUs start with 0.865 CPU + 1 GPU.

Right now I am running a "long run" WU (0.865 CPU + 1 GPU, the default) and after ~4h30m progress shows 14%.

After suspending all other projects (except GPUGRID), this WU reached 25% progress about 20 minutes later.

20 min ---> 11%!!

[edit] After a while this WU was gone :(

Now I'm running a new long run with GPUGRID only.

[VENETO] sabayonino
Joined: 4 Apr 10
Posts: 50
Credit: 645,641,596
RAC: 44,792
Message 35748 - Posted: 19 Mar 2014 | 21:09:22 UTC
Last modified: 19 Mar 2014 | 21:38:19 UTC

I completed this task.
Only this task was running on this host.

Long run computation time: ~5h


<core_client_version>7.2.0</core_client_version>
<![CDATA[
<stderr_txt>
SWAN : FATAL : Cuda driver error 700 in file 'swanlibnv2.cpp' in line 1803.
# Time per step (avg over 3220000 steps): 1.946 ms
# Approximate elapsed time for entire WU: 18484.331 s
21:41:01 (30309): called boinc_finish

</stderr_txt>
]]>




SWAN_SYNC is enabled in my environment variables...

This task earned only 115,650.00 credits (I see that Windows hosts get 135,000.00 credits) :|

With a new task (long run) I started other projects (CPU only), and now it is slow in its processing steps :( (only 3% after 30 minutes).

The previous task was at ~10% (or more...) after ~20 minutes.


my systeminfo
Portage 2.2.8-r1 (default/linux/amd64/13.0, gcc-4.8.2, glibc-2.17, 3.13.6-gentoo x86_64)
=================================================================
System uname: Linux-3.13.6-gentoo-x86_64-Intel-R-_Core-TM-_i7-4770_CPU_@_3.40GHz-with-gentoo-2.2
KiB Mem: 16314020 total, 14649060 free
KiB Swap: 0 total, 0 free
Timestamp of tree: Sun, 16 Mar 2014 11:15:01 +0000
ld GNU ld (GNU Binutils) 2.23.2
distcc 3.1 x86_64-pc-linux-gnu [disabled]
ccache version 3.1.9 [enabled]
app-shells/bash: 4.2_p45
dev-lang/python: 2.7.5-r3, 3.3.3
dev-util/ccache: 3.1.9-r3
dev-util/cmake: 2.8.11.2
dev-util/pkgconfig: 0.28
sys-apps/baselayout: 2.2
sys-apps/openrc: 0.12.4
sys-apps/sandbox: 2.6-r1
sys-devel/autoconf: 2.13, 2.69
sys-devel/automake: 1.12.6, 1.13.4
sys-devel/binutils: 2.23.2
sys-devel/gcc: 4.7.3-r1, 4.8.2
sys-devel/gcc-config: 1.7.3
sys-devel/libtool: 2.4.2
sys-devel/make: 3.82-r4
sys-kernel/linux-headers: 3.13 (virtual/os-headers)
sys-libs/glibc: 2.17
Repositories: gentoo
ACCEPT_KEYWORDS="amd64"
ACCEPT_LICENSE="*"
CBUILD="x86_64-pc-linux-gnu"
CFLAGS="-O2 -march=native -pipe"
CHOST="x86_64-pc-linux-gnu"

nvidia-drivers : 331.49


[edit] After 1h of computation, progress is at 4.8% :(

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 35957 - Posted: 28 Mar 2014 | 12:22:27 UTC

I finally got a Noelia task on my GTX780Ti, and it runs smoothly with a steady GPU usage of 90%, which is way better than the 74% of Santi's and the 66-72% of Gianni's.
So not only is WDDM hampering the performance of the 780Ti, but also the way a GPUGRID WU is programmed. As said before: I like the Noelia WU's.
____________
Greetings from TJ

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 6,169
Message 35988 - Posted: 29 Mar 2014 | 13:56:35 UTC - in response to Message 35957.

So not only is WDDM hampering the performance of the 780Ti, but also the way a GPUGRID WU is programmed.

From your statement above it seems that these are separate factors, but actually they aren't.
I would say that for those workunits which (have to) do more CPU-GPU interaction, the performance hit from WDDM is larger.
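A toy model of that point, with made-up numbers: treat each simulation step as pure GPU work plus a fixed latency per CPU-GPU interaction, and let WDDM raise only that latency.

def step_time_ms(gpu_work_ms, interactions, latency_ms):
    # time per step = GPU compute + per-interaction driver overhead
    return gpu_work_ms + interactions * latency_ms

for name, inter in [("low-interaction WU", 5), ("high-interaction WU", 20)]:
    xp = step_time_ms(1.5, inter, 0.01)    # XP-style driver latency (assumed)
    wddm = step_time_ms(1.5, inter, 0.05)  # WDDM latency (assumed, higher)
    print("%s: WDDM slowdown %.0f%%" % (name, (wddm / xp - 1) * 100))
# the WU that talks to the CPU more often takes the larger WDDM hit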
