
Message boards : Graphics cards (GPUs) : needed pci-e bandwidth? mining rig idea

Author Message
erik
Message 51740 - Posted: 2 May 2019 | 8:16:37 UTC

Does GPUGRID really need the full x16 or x8 bandwidth of PCIe 3.0?

Or can I also build a system that looks like a mining rig?
What I mean: use a special mining motherboard with lots of PCIe 3.0 x1 slots, connect a PCIe x16 riser to the motherboard via a USB cable for the data transfers, and put the GPU card into the x16 riser.
Should this work?
Because the limiting factor here is the x1 PCIe lane connection on the motherboard.

PappaLitto
Message 51741 - Posted: 2 May 2019 | 12:21:06 UTC

You can certainly try it and see what GPU usage you get with SWAN_SYNC on Linux. Without SWAN_SYNC I notice about 30% PCIe usage on an x8 link, but with SWAN_SYNC on Linux I notice only about 2% usage.

It might be possible, but only with SWAN_SYNC on Linux, though I have never tried it.

I wouldn't go out and buy mining-specific hardware until you have tested it. Keep in mind you want at least one free CPU thread per GPU for these science workloads. Let us know what you find!
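
If you want to check this yourself on Linux before buying anything, a rough equivalent of GPU-Z's readouts is to poll nvidia-smi. This is a minimal sketch, assuming only that nvidia-smi (shipped with the NVIDIA driver) is on the PATH; the query fields are standard nvidia-smi ones and show GPU load and the negotiated PCIe link, not the exact "bus interface load" figure GPU-Z reports:

import subprocess
import time

# Poll each GPU's load and the PCIe link it has negotiated.
# Assumes nvidia-smi (shipped with the NVIDIA driver) is on the PATH.
QUERY = "index,name,utilization.gpu,pcie.link.gen.current,pcie.link.width.current"

def sample():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=" + QUERY, "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        idx, name, util, gen, width = [field.strip() for field in line.split(",")]
        print("GPU {} ({}): load {}, PCIe gen {} x{}".format(idx, name, util, gen, width))

if __name__ == "__main__":
    for _ in range(10):        # ten samples, five seconds apart
        sample()
        time.sleep(5)

Run it while a GPUGRID task is crunching, with and without SWAN_SYNC, to compare.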

erik
Message 51742 - Posted: 2 May 2019 | 13:22:38 UTC - in response to Message 51741.
Last modified: 2 May 2019 | 13:24:08 UTC

It is my purpose to "mine" for GPUGRID :-)
No, just kidding... I do not want to mine any crypto at all.

My question was:
setting up a GPU machine for only one purpose, GPUGRID, the way miners do: a lot of GPU cards in one case, each connected to a PCIe 3.0 x1 lane.
Is one PCIe 3.0 lane enough for GPUGRID, or do I need at least 8 or 16 PCIe 3.0 lanes?

If my question or idea is not clear enough, please ask me.

mmonnin
Message 51743 - Posted: 2 May 2019 | 14:33:31 UTC

He kind of did answer it: 30% usage at x8. Theoretically one could run 3 GPUs across an x8 link, but I'd wager there would be some performance loss at 90% utilization. x1 would not be enough in that case.

With SWAN_SYNC the tasks use a full CPU core, so you'd be limited by CPU threads before PCIe x1 lanes on a mining board.

Mining programs are small and fit in GDDR memory, so there isn't a lot of traffic across the PCIe links. GPUGRID computing requires more GDDR traffic and more CPU computation.

PappaLitto
Message 51744 - Posted: 2 May 2019 | 18:32:10 UTC - in response to Message 51742.


I apologize if my answer was not clear enough. Basically, at x8 I use 30% of the x8 bandwidth, but with SWAN_SYNC on Linux I only use 2% of it. So I would imagine you could see less than 80% usage on PCIe x1. I have not tested this myself, but I would imagine it would still work just fine with minimal loss. Keep in mind you need 1 CPU thread per GPU, unlike mining, which is almost 0% reliant on CPU threads.

erik
Message 51745 - Posted: 2 May 2019 | 18:33:36 UTC - in response to Message 51743.


This whole mining sh*t is making me crazy... I am not talking about mining, I am talking about building a system the way miners do:

putting a lot of NVIDIA GTX cards in one case, connecting them to PCIe 3.0 x1 lanes, and then only running BOINC for GPUGRID, using Windows 10 with SWAN_SYNC enabled.

No mining program at all.

Should GPUGRID run OK on a PCIe 3.0 x1 lane?

erik
Message 51746 - Posted: 2 May 2019 | 19:47:23 UTC - in response to Message 51744.

I am sorry, I didn't see your reply on my smartphone's small screen.

Now I see it.
One CPU thread per GPU: my CPU is a Threadripper 1950X, 16 cores with SMT, so 32 visible threads. Is this enough for 4 GPUs?
And... if I understand your answer right, I can't put 4 GPUs on PCIe 3.0 x1 lanes because of that 30% usage of x8 lanes.
Am I right?

Once again, I am sorry for the confusion.

PappaLitto
Message 51747 - Posted: 2 May 2019 | 20:04:18 UTC

The 30% on PCIe x8 is without SWAN_SYNC. With SWAN_SYNC on Linux I get 2% on x8. As long as you use SWAN_SYNC it might be theoretically possible.
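
One practical pitfall with SWAN_SYNC: it has to be set in the environment of the BOINC client that launches the science app, not just in your own shell. A minimal Linux-only sanity check follows; it assumes the science app's process name contains "acemd" (adjust the pattern if your host shows a different binary name):

import os
import re

# Check whether SWAN_SYNC is visible to the running GPUGRID science app by
# reading /proc/<pid>/environ. Needs to run as root or as the boinc user.
APP_PATTERN = re.compile(r"acemd", re.IGNORECASE)   # assumption about the binary name

def check():
    found = False
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/{}/comm".format(pid)) as f:
                name = f.read().strip()
            if not APP_PATTERN.search(name):
                continue
            with open("/proc/{}/environ".format(pid), "rb") as f:
                entries = f.read().split(b"\0")
            env = dict(e.split(b"=", 1) for e in entries if b"=" in e)
            print("pid {} ({}): SWAN_SYNC={!r}".format(pid, name, env.get(b"SWAN_SYNC")))
            found = True
        except (FileNotFoundError, PermissionError, ProcessLookupError):
            continue
    if not found:
        print("no matching science app process found")

if __name__ == "__main__":
    check()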

mmonnin
Message 51748 - Posted: 2 May 2019 | 21:16:38 UTC - in response to Message 51745.




I was comparing how mining is different from BOINC crunching, and GPUGRID in particular: how mining can get away with using just an x1 slot and how GPUGRID cannot. There you have it. It won't work. If you want a black-and-white answer without the explanation, here it is: no.

Just test it. It's not that hard.

erik
Message 51749 - Posted: 2 May 2019 | 21:33:06 UTC - in response to Message 51748.

thanks

rod4x4
Message 51750 - Posted: 3 May 2019 | 0:39:59 UTC

In the past I used a USB-cable riser connected to a PCIe x1 slot with a GTX 750 Ti for a few months on GPUGRID. (I no longer use it.)
It was on a Linux host set to BLOCK mode.
The GPUGRID output was lower by around 15% (approx.). I am guessing a faster card could suffer a larger speed reduction.
I found the GPUGRID task would randomly pause for no reason, but I could manually start the task again. (I had a script that would check the task status and start it again if necessary; a sketch of that kind of watchdog follows below.)
Not sure why the tasks would pause; since multiple GPU cards work fine on rigs with multiple x8/x16 slots, I assumed it was just a poorly designed or poorly implemented riser.
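
rod4x4's actual script isn't posted; below is a hypothetical reconstruction of that kind of watchdog using boinccmd, not his script. It simply lists tasks and sends a resume to every GPUGRID task on a timer (resuming a task that is already running does nothing). The project URL and the polling interval are assumptions to adjust for your host.

import re
import subprocess
import time

# Hypothetical watchdog: periodically resume all GPUGRID tasks via boinccmd.
# Assumes boinccmd can talk to the local BOINC client and that PROJECT_URL
# matches the URL your client shows for GPUGRID.
PROJECT_URL = "http://www.gpugrid.net/"

def gpugrid_task_names():
    out = subprocess.run(["boinccmd", "--get_tasks"],
                         capture_output=True, text=True, check=True).stdout
    names, current = [], None
    for line in out.splitlines():
        match = re.match(r"\s*name:\s*(\S+)", line)
        if match:
            current = match.group(1)
        elif "project URL:" in line and PROJECT_URL in line and current:
            names.append(current)
    return names

def watchdog(interval_seconds=300):
    while True:
        for name in gpugrid_task_names():
            subprocess.run(["boinccmd", "--task", PROJECT_URL, name, "resume"],
                           check=False)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    watchdog()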

Keith Myers
Message 51751 - Posted: 3 May 2019 | 7:59:35 UTC

The people on my team at SETI who are using mining hardware to support hosts with high GPU counts (7-12 cards) find that the risers, and particularly the USB cables, have to be high quality and shielded to crunch without constant issues of cards dropping offline.

Many of those hosts are using PCIe x1 slots. But the SETI task requirements are a lot lower than what either Einstein or GPUGRID needs in speed and bandwidth.

The answer is: try it and see if it will work.

erik
Message 51752 - Posted: 3 May 2019 | 11:11:24 UTC - in response to Message 51751.


Thanks for your reply.

To try it, I first need to buy used GTX 1070 or GTX 1080 cards. The price here for used cards is around 250-300 euros.
Spending 1000 euros on 4 cards just to try something is odd for me.
That is why I asked here before spending 1000 euros.
And I would buy the 4 GPUs only for GPUGRID, no other intended use.

So now I know it will not work because of the speed limitation of an x1 lane.

Thanks to all who replied.

Retvari Zoltan
Message 51753 - Posted: 3 May 2019 | 22:08:36 UTC - in response to Message 51752.
Last modified: 3 May 2019 | 22:16:08 UTC


I think your motherboard already has four PCIe 3.0 x16 slots (they will probably run at x8 if all of them are occupied). You should look for x16 PCIe 3.0 risers and use those; they are also recommended for resolving cooling issues. Or you can build a water-cooled rig with 4 cards. The heat output will be around 1.2 kW, so it's not recommended to put 4 air-cooled cards close to each other.
As for the original question: in my experience the performance loss (caused by the lack of PCIe bandwidth) depends on the workunit (some will suffer more, some less or none at all). To achieve optimal performance, high-end cards need at least PCIe 3.0 x8. If you are not bothered by the performance loss, perhaps they will work even at PCIe 3.0 x1. I'm not sure, because I've never put more than 4 GPUs in a single host. Lately I build single-GPU hosts, because I can spread them across our flat (I use them as space heaters).

Keith Myers
Message 51754 - Posted: 4 May 2019 | 0:18:40 UTC

Yes, any HEDT motherboard should be able to support 4 GPUs natively. I have an Intel X99 board that supports 4 GPUs at PCIe 3.0 x16 speeds, and one X399 board that supports 2 GPUs at PCIe 3.0 x8 along with 2 GPUs at PCIe 3.0 x16. As long as the GPUs are no wider than two slots, even air-cooled cards fit. Water-cooled or hybrid-cooled cards stay cool, so they clock well and give the best performance.

Zalster
Message 51756 - Posted: 4 May 2019 | 1:00:52 UTC - in response to Message 51754.

Hybrid or custom cooling loops are the best options for multi-card systems; they prevent throttling of the cards if they are all in one box. However, if you go the route of hanging them from a support beam above the mobo, then you could probably get away with air cooling as long as you have proper ventilation.

Keith is correct that higher PCIe bandwidth is preferable if you are looking to crunch the fastest. If not, then yes, using risers is a viable option.

As you noted in the other thread, you have to keep in mind how many threads are on the CPU, how many are available to the GPUs, and whether the PCIe lanes are going to be shared with SATA or M.2 devices. The more lanes you can devote to the cards, the faster they will finish the work.

Good luck

Z

erik
Message 51757 - Posted: 4 May 2019 | 9:08:36 UTC - in response to Message 51756.

My idea is to build a setup with a mining rig frame, like this one:

https://images.app.goo.gl/F8vavF34ggADQoiC9

with an ASUS B250 mining motherboard, an Intel i7-7700 or i7-7700T (4 cores, 8 threads with Hyper-Threading), 4-6 GPUs (GTX 1070), and some 120x120x38 mm fans, and of course using PCIe x16 risers connected to the motherboard's x1 slots.

PappaLitto
Message 51759 - Posted: 4 May 2019 | 12:50:36 UTC
Last modified: 4 May 2019 | 12:59:09 UTC

This software is fundamentally different from mining software and requires more resources. You will need at least 1 CPU core per GPU, and I highly doubt PCIe x1 is enough bandwidth to feed a fast GPU like a 1070 (a PCIe x16 riser on a PCIe x1 slot is the same thing as an x1 riser on an x1 slot). You will need a minimum of x4, and preferably x8, PCIe per GPU.

I have used PCIe x1 risers in the past and I can tell you they are an absolute nightmare. Most notably, if the USB connection between the PCIe connectors isn't perfect, you have enormous difficulty getting the operating system to recognize the GPU.

Zoltan and I have found that CPU frequency also plays a large role in GPU usage, but you should be fine with an i7-7700. You might be better off having 3-4 GPUs per system with x16 risers and building two cheap (but high-frequency) systems.

erik
Message 51760 - Posted: 4 May 2019 | 13:34:18 UTC - in response to Message 51754.

Yes, any HEDT motherboard should be able to support 4 gpus natively. I have an Intel X99 board that supports 4 gpus at PCIe 3.0 X16 speeds and one X399 board that supports 2 gpus at PCIe 3.0 X8 speeds along with 2 gpus at PCIe 3.0 X16 speeds. As long as the gpus are no wider than two slots, even air cooled cards fit. Water cooling or hybrid cooled cards keep the cards cool so they clock well with the best performance.

Intel X99 boards mostly run in x16/x8/x4 mode.
You are probably talking about ASUS WS-series mainboards when you say 4 GPUs natively at x16. But are you aware of the PLX PCIe switches on those WS boards? The net result is much less bandwidth.

Take a look at the block diagram:

https://www.overclock.net/content/type/61/id/2674289/width/350/height/700/flags/LL

My idea was not to build an HEDT system with the Threadripper X399 chipset; way too expensive.

But... thanks for your reply.

I found someone near me where I can test my idea with an ASUS B250 mining motherboard, an Intel G4400 CPU, 8 GB RAM and 2x GTX 1070.
I will give an update on my progress.

Zalster
Message 51761 - Posted: 4 May 2019 | 21:29:54 UTC - in response to Message 51760.


Intel X99 boards mostly run in x16/x8/x4 mode.
You are probably talking about ASUS WS-series mainboards when you say 4 GPUs natively at x16. But are you aware of the PLX PCIe switches on those WS boards? The net result is much less bandwidth.


It's not less, it's better utilization of the available lanes. For this example I am only talking about Intel chips. There are only as many lanes as the CPU has. If you get a low-end CPU with 24 lanes, then that is all you get. If you get a high-end CPU, then you might have 40 or 44.

For the ASUS X99-E WS ($$$$) the PCIe slots are x16/x16/x16/x16 because of the PLX chip. As long as you don't add other things (M.2, etc.) that take up any of the lanes the GPUs are using, you can get close to x16 for the GPUs. The PLX chips have their own lanes as well, which allow you to attach the other parts of your computer (LAN, USB, etc.).

Here's a link to a post where someone attempts to describe what is occurring. He's quoting an article we both read about this a long time ago but can't find right now.

https://www.overclock.net/forum/6-intel-motherboards/1618042-what-multiplexing-how-does-plx-chips-work.html

erik
Message 51762 - Posted: 4 May 2019 | 22:18:49 UTC - in response to Message 51761.

So, you recommend getting a motherboard with a PLX switch? Is this better than a B250 mining board with x1 lanes? Do I understand you correctly: a motherboard with PLX will crunch faster than x1 lanes?

Zalster
Message 51763 - Posted: 4 May 2019 | 23:57:21 UTC - in response to Message 51762.

I'm saying PCIe speed does make a difference. You are better off with more than PCIe x1; what level of PCIe is up to you. x16 will process the data faster than x8, and so forth.

PLX boards are extreme examples and not necessary for this project. You can get by with a lot of different boards as long as you take into account PCIe speeds and the total number of lanes of the CPU.

SETI has a good example of someone using a mining rig, but the applications there have been refined over the years and the data packets are small enough that PCIe isn't a factor. This project, like Einstein, performs better with wider PCIe links.


Keith Myers
Message 51764 - Posted: 5 May 2019 | 1:03:51 UTC - in response to Message 51757.

My idea is to build a setup with a mining rig frame, like this one:

https://images.app.goo.gl/F8vavF34ggADQoiC9

with an ASUS B250 mining motherboard, an Intel i7-7700 or i7-7700T (4 cores, 8 threads with Hyper-Threading), 4-6 GPUs (GTX 1070), and some 120x120x38 mm fans, and of course using PCIe x16 risers connected to the motherboard's x1 slots.

That frame should work. I believe the ASUS B250 Mining Expert motherboard is the one TBar is using with 12 GPUs and an i7-6700:
https://www.newegg.com/Product/Product.aspx?Item=9SIA96K7TC8911&Description=B250%20MINING%20EXPERT&cm_re=B250_MINING_EXPERT-_-13-119-028-_-Product

His host is here. https://setiathome.berkeley.edu/show_host_detail.php?hostid=6813106

One of his stderr.txt outputs is here showing the 12 gpus.
https://setiathome.berkeley.edu/result.php?resultid=7649459019

Retvari Zoltan
Message 51766 - Posted: 5 May 2019 | 9:49:17 UTC - in response to Message 51762.
Last modified: 5 May 2019 | 9:49:27 UTC

So, you recommend getting a motherboard with a PLX switch?
Yes.
Is this better than a B250 mining board with x1 lanes?
Yes.
Do I understand you correctly: a motherboard with PLX will crunch faster than x1 lanes?
Yes.

erik
Message 51767 - Posted: 5 May 2019 | 20:56:09 UTC
Last modified: 5 May 2019 | 21:45:02 UTC

edit...deleted

erik
Message 51768 - Posted: 5 May 2019 | 21:44:37 UTC

OK people, today I have been testing the ASUS B250 mining board.
Crap. Those x1-to-x16 risers are so sensitive and so crappy that I decided not to use any mining board.

So I have decided to build my third Threadripper system, with a Threadripper 1920X CPU. But which motherboard has the best PCIe slot layout for a 3-GPU setup when you look at this site?

It is in Dutch, but click on the pictures to judge the PCIe slots.

https://tweakers.net/categorie/47/moederborden/producten/#filter:TcvBCsIwEATQf5lzhCzFNuwHFDx46lE8hHSRlWBDUjxY8u9NEcTTMI-ZDUueJY8qcQYjZX0WmC9OS16b-RJ-kiRc2u5EBsk_ZNKPgMlaczyDXPUFbqW03ahxlVzAG7rBHvH2EXwDkXNn3KsB9d2_D65prTs

Any advice?

Keith Myers
Message 51769 - Posted: 5 May 2019 | 22:14:46 UTC - in response to Message 51768.

I would go with either of the ASRock boards on that site. The Taichi has four x16 PCIe slots, so you could fit four GPU cards. The Pro Gaming 6 satisfies your minimum 3-GPU requirement.

I have the ASRock X399 Fatal1ty Professional Gaming motherboard with a 2920X and I really like it. There are some sweet deals on the 1950X now as AMD tries to reduce inventory ahead of Zen 2. I'm surprised you can even find a 1920X anymore, as I thought all its stock disappeared last year when the TR2 models came out and retailers were blowing the 1920X out the door for less than $250.

erik
Message 51941 - Posted: 30 May 2019 | 20:36:21 UTC

OK, a little update on my build project.

https://imgur.com/a/7dfC8ym

What I have done:
- ASRock X399 Taichi <<=== motherboard
- TR 1950X <<=== CPU (with a Noctua 120 mm TR4 cooler)
- PCIe riser cables from x16 to x16 (25 cm long, https://www.highflow.nl/hardware/videokaarten/li-heat-pci-e-gen-3.0-ribbon-flexible-riser-cable-v2-black.html)
- 4 MSI GTX 1070 8 GB ITX cards
- GPU load around 85%
- PCIe interface load around 20% on the x16 slot, around 30% on the x8 slot (app_config.xml with cpu_usage set to 0.975; a minimal sketch of such a file follows this list)
- all GPU cards undervolted with MSI Afterburner to 50% of the power limit (with the disadvantage of a lower core clock, but I don't mind that)
- crunching time according to BOINC around 10 hours
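
For reference, the app_config.xml mentioned in the list above is a standard BOINC per-project file. A minimal sketch that writes one is below; the app <name> is a placeholder and must match the GPUGRID app name shown in your client_state.xml, and the project directory path is the usual default (e.g. C:\ProgramData\BOINC\projects\www.gpugrid.net on Windows, /var/lib/boinc-client/projects/www.gpugrid.net on Linux):

from pathlib import Path
from textwrap import dedent

# Write a minimal app_config.xml reserving ~1 CPU thread per GPU task.
APP_NAME = "acemd"                               # placeholder: use the app name from client_state.xml
PROJECT_DIR = Path("projects/www.gpugrid.net")   # relative to the BOINC data directory

app_config = dedent("""\
    <app_config>
      <app>
        <name>{}</name>
        <gpu_versions>
          <gpu_usage>1.0</gpu_usage>
          <cpu_usage>0.975</cpu_usage>
        </gpu_versions>
      </app>
    </app_config>
    """).format(APP_NAME)

if __name__ == "__main__":
    target = PROJECT_DIR / "app_config.xml"
    target.write_text(app_config)
    print("wrote", target)

After writing the file, use the BOINC manager's "Read config files" option (or restart the client) so it takes effect.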


My next step:
building another system, with a mining frame for 5 GPU cards (MSI GTX 1070 8 GB ITX). This one:
https://imgur.com/a/U5gP4tW
The purpose is to keep using x16-to-x16 PCIe riser cables to connect that many GPUs to the mainboard. I will not use a regular computer case; I like to have good airflow, so I chose a mining frame, and therefore x16-to-x16 riser cables.

So, I need your advice (or have questions):
- Which mainboard has more than 5 PCIe slots? (Intel, AMD, old or new platform doesn't matter, but it must be ATX form factor.)
- Is it true that Windows 10 64-bit supports a maximum of 6 GPUs?

Thanks in advance,
erik

PappaLitto
Message 51943 - Posted: 30 May 2019 | 21:55:35 UTC - in response to Message 51941.


Hello erik,
Nice build! If you ever have cooling problems, make sure to install 120 mm fans blowing the heat over and away from the GPUs. Is the 85% GPU load with SWAN_SYNC on Windows? If so, I believe you will be in the 95%+ range with SWAN_SYNC on Linux once their application is fixed, which should be soon.

You mention building another system. As far as I am aware, the only motherboards with more than 4 full-size PCIe slots are workstation or server boards. They typically use PLX chips to act as a PCIe lane 'switch' to the CPU. The old-fashioned way to achieve high GPU counts is PCIe x1, but I don't think anyone here has tested the GPU utilization of the GPUGRID app with SWAN_SYNC under Linux on x1.

erik
Message 51944 - Posted: 30 May 2019 | 22:18:03 UTC - in response to Message 51943.

I forgot to say: under Windows 10, SWAN_SYNC is enabled (set to 1). But with around 80% GPU load, I don't mind that. I think the reason for such a low GPU load is my CPU, because my CPU is partially broken (but that is another issue; I have asked AMD for an RMA).

Now for testing... I connected 1 GPU to a PCIe 2.0 x1 slot: GPU load at 68-70%, bus interface load at 65-67%.

Won't a PLX switch become a bottleneck for GPUGRID calculations when I build a 7-GPU setup?

And... any info about the maximum number of GPUs supported by Windows 10?

mmonnin
Message 51945 - Posted: 30 May 2019 | 22:47:38 UTC

It's hard to see, but is there support under those GPUs or are they just hanging by the PCI bracket? I have this one that 'supports' 6 GPUs, but you can easily drill more holes for the PCI brackets. Not even $30, and it comes with fan mounts.

https://www.amazon.com/gp/product/B079MBYRK2/

There are server boards that have more than 4 PCIe x16/x8 slots. Some have 7, but they may not be ATX. I've heard of some people using Rosewill cases for higher-count GPU setups.

No need to limit yourself to an ATX board for either type of case. Those open-air mining rigs are just made out of aluminum t-slot pieces. https://8020.net/shop

erik
Message 51946 - Posted: 30 May 2019 | 23:17:39 UTC - in response to Message 51945.

It's hard to see but is there support under those GPUs or are they just hanging by the PCI bracket?
See the first picture. Those two white parallel aluminium bars are the support for the riser slots. The cards are not hanging "in the air": they sit in the riser slots, and the riser slots sit on those parallel bars. All the white bars are my own additions.

I have this one that 'supports' 6x GPUs but you can easily drill more holes for the PCI brackets. Not even $30 and it comes with fan mounts.
My cards are ITX format, so putting a fan at the back side of the cards will not help that much to cool the GPUs. If I hung fans on the back side, there would be about 7-8 cm of space between the fan and the back of the GPU, so I don't expect much cooling effect.
I could put fans on the front side of the GPUs, where the HDMI cables connect, but as of now I don't see much advantage in that. GPU temperature is currently around 50 degrees Celsius.


No need to limit yourself to an ATX board for either type of case. Those open air mining rigs are just made out of aluminum t-slot pieces. https://8020.net/shop
I am in doubt between these two boards: the ASUS X99-E WS (socket 2011-3) or the ASUS P9X79-E WS (socket 2011).

mmonnin
Message 51949 - Posted: 31 May 2019 | 1:36:37 UTC

Ah, I missed the first link. The second one has fewer images and they are darker.

With a couple of 120 mm fans along the length of the rack it would become like a wall of air, especially if there were a top and sides to force the air along the GPUs.

To get that many PCIe slots you'll need a single-socket WS board/CPU, as you mentioned, that comes with more lanes, a dual-socket board, or an AMD Threadripper/Epyc setup. Even if you don't need all those lanes for GPUGRID, those are the types of systems where the slots will be available: Z10PE-D8 WS, EP2C621D12 WS, or even an X9DRX+-F with ten x8 3.0 slots.

erik
Message 51951 - Posted: 31 May 2019 | 10:50:34 UTC - in response to Message 51949.
Last modified: 31 May 2019 | 10:59:21 UTC

https://edgeup.asus.com/2018/asus-h370-mining-master-20-gpus-one-motherboard-pcie-over-usb/
Could this board work with GPUGRID? The data connection is USB 3.1 Gen 1 (5 Gbps speed). Is this enough data speed (USB 3.1) for GPUGRID?
I want to connect 6-8 GPUs.

If one PCIe 3.0 lane has around 1 Gbps of speed and an x16 slot around 16 Gbps, then one USB 3.1 Gen 1 link would be comparable to a PCIe 3.0 x4 slot.
Is my calculation right, or am I missing something?
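
A quick back-of-the-envelope check of that comparison, using nominal per-direction link rates. This is a sketch only: real throughput is lower after protocol overhead, and note that the linked article's title says "PCIe over USB", i.e. the USB-format ports on that board carry a PCIe x1 link rather than the USB protocol.

# Nominal link rates, per direction, before protocol overhead.
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s per lane.
# USB 3.1 Gen 1: 5 Gbit/s raw signalling.
PCIE3_LANE_GBIT = 8 * 128 / 130      # ~7.88 Gbit/s usable per lane
USB31_GEN1_GBIT = 5.0

def to_gbytes(gbit):
    return gbit / 8

for lanes in (1, 4, 8, 16):
    print("PCIe 3.0 x{:<2}: ~{:.2f} GB/s".format(lanes, to_gbytes(PCIE3_LANE_GBIT * lanes)))
print("USB 3.1 Gen1 : ~{:.2f} GB/s (raw)".format(to_gbytes(USB31_GEN1_GBIT)))

# One PCIe 3.0 lane is ~1 GB/s (~8 Gbit/s), not 1 Gbit/s, so a 5 Gbit/s
# Gen 1 link is closer to half a lane than to an x4 slot.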

PappaLitto
Message 51952 - Posted: 31 May 2019 | 10:51:51 UTC - in response to Message 51944.
Last modified: 31 May 2019 | 11:08:12 UTC

Now for testing... I connected 1 GPU to a PCIe 2.0 x1 slot: GPU load at 68-70%, bus interface load at 65-67%.


That is not as bad as I thought, though you did mention you lowered the power limit of the cards a lot. What GPU clock speed are they running at? I would imagine that with a faster card you would get lower GPU utilization.

Also keep in mind this is SWAN_SYNC on Windows and not on Linux, so I think there is still much performance to be gained on PCIe x1.

You might also be able to maximize what you can get out of the limited PCIe x1 bandwidth. If you lower the power limit enough, which in turn lowers the clock speed, you could potentially maximize GPU utilization, making it not only more efficient from the power limit but also more efficient through the higher utilization.

erik
Message 51953 - Posted: 31 May 2019 | 12:03:42 UTC - in response to Message 51952.

Now for testing... I connected 1 GPU to a PCIe 2.0 x1 slot: GPU load at 68-70%, bus interface load at 65-67%.


That is not as bad as I thought, though you did mention you lowered the power limit of the cards a lot. What GPU clockspeed are they running at? I would imagine with a faster card you would get less GPU utilization.

All my cards are the same: MSI GTX 1070 8 GB ITX. All of them are power-limited to 50%.
One of them is connected to a PCIe 2.0 x1 lane. This card has:
GPU load around 70%, bus interface load 67%, clock speed around 1650-1680 MHz, at the 50% power limit.

All the other cards, with the same 50% power limit, are running at around 1530-1570 MHz with a GPU load around 85%.

All have the same temperature, around 50-52 degrees Celsius.

PappaLitto
Message 51954 - Posted: 31 May 2019 | 12:15:02 UTC

I too have a mining-esque system with, hopefully, 5-6 GPUs. The case is already designed for 6 GPUs in open air. My current, and hopefully only, problem is that the cards I currently have each require both a 6-pin and an 8-pin 12 V plug, and I've run out of cables from my power supply. I should have done a bit more research before buying!

At first I had severe issues getting the GPUs to be recognized, so if you ever have this problem, try updating the BIOS; this is what fixed it for me.

erik
Message 51955 - Posted: 31 May 2019 | 13:05:23 UTC - in response to Message 51954.


For the PSU you can use an ATX PSU with enough power, starting at 1200 watts, preferably fully modular so you don't end up with unwanted Molex or SATA power connectors. Or use an HP server PSU with special breakout modules for 12x 6-pin PCIe power connectors: https://tweakers.net/aanbod/1983990/mining-starters-kit-benodigdheden.html
This site is in Dutch, but you can check the pictures of the HP server PSU.

For your build... what kind of motherboard are you using? And which CPU?

PappaLitto
Message 51956 - Posted: 31 May 2019 | 13:34:14 UTC - in response to Message 51955.



I have a 1200-watt, fully modular Corsair AX1200 paired with an ASRock AB350 Pro4 and an R7 1700. I would not recommend this motherboard, as it has caused me great agony with GPU detection: for at least a year after its release it did not have a BIOS that allowed what I was trying to do. It has six PCIe x1 slots, which is fine since I don't need more than 6 GPUs. I personally would recommend literally any other board.

I think a mining-specific board would work best, as that is what you will be doing with it. It probably has other mining-specific features built in.

I have the R7 1700 at full load with World Community Grid and Rosetta@home while also having multiple GPUs at high load. As long as you don't overwhelm the CPU with too much CPU work, everything should run at peak efficiency and speed.

eXaPower
Message 51958 - Posted: 31 May 2019 | 21:11:24 UTC - in response to Message 51740.
Last modified: 31 May 2019 | 21:13:45 UTC

Does GPUGRID really need the full x16 or x8 bandwidth of PCIe 3.0?

No, though more lanes will perform better on GPUGRID WUs that are bandwidth-heavy.

In my GPUGRID experience the performance loss varies between 33% and 50% when running PCIe 2.0 x1, with a GTX 750 / GTX 970 / GTX 1060 / GTX 1070 or any other card.

My Z87 motherboard with 5 GPUs on PCIe 2.0 x1 shows 82% bus interface load on any card.
PCIe 3.0 x4: 70-75% bus usage.
PCIe 3.0 x8: 50-60% bus usage.

A GTX 970 on PCIe 2.0 x1 has 55% of the performance of a GTX 970 on PCIe 3.0 x4.

A GTX 1060 or 1070 on PCIe 2.0 x1 has 66% of the performance of a 1060 / 1070 on PCIe 3.0 x4.

A Turing GPU on PCIe 2.0 x1 will, I suspect, run ACEMD at 70-75% of its PCIe 3.0 x4 performance.

On a Z87 MSI XPOWER motherboard I have an RTX 2070 (PCIe 3.0 x8), an RTX 2060 (PCIe 3.0 x4) and an RTX 2080 (PCIe 3.0 x4) together with a GTX 1080 / 1070 (PCIe 2.0 x1) running the (integer) Genefer n=20 PrimeGrid app.

There, the x1 PCIe bus shows a 7-10% performance loss compared to PCIe 3.0 x4, or an 8-13% loss vs. PCIe 3.0 x8.

erik
Message 51959 - Posted: 1 Jun 2019 | 17:16:52 UTC
Last modified: 1 Jun 2019 | 17:17:33 UTC

I am considering this setup:

https://imgur.com/a/e03wZMM

Mainboard: Onda B250 D8P (not the D3 version)
Support for socket 1151 Intel CPUs (6th gen, maybe even 8th gen)
SO-DIMM up to 16 GB DDR4 (laptop RAM modules)

PappaLitto
Message 51960 - Posted: 1 Jun 2019 | 20:26:51 UTC

That CPU only has 16 lanes, so it would be pretty pointless to have a full PCIe connector for each GPU. It would only make sense if there were enough PCIe lanes to go around.

erik
Message 51961 - Posted: 1 Jun 2019 | 21:53:35 UTC - in response to Message 51960.

So, your recommendation is to use mainboards with PLX switches?

Or... going with this:

https://www.asrockrack.com/general/productdetail.asp?Model=EPYCD8-2T#Specifications (a mainboard costing about 480 euros)
with open-ended x8 slots, so I can use my x16-to-x16 PCIe risers to set up a system with a maximum of 7 GPUs,
with an EPYC 7251 CPU [535 euros, 8 cores] or an EPYC 7281 [720 euros, 16 cores].

Retvari Zoltan
Message 51962 - Posted: 1 Jun 2019 | 22:25:48 UTC - in response to Message 51960.
Last modified: 1 Jun 2019 | 23:03:14 UTC

That CPU only has 16 lanes, so it would be pretty pointless to have a full PCIe connector for each GPU. It would only make sense if there were enough PCIe lanes to go around.

The purpose of the full PCIe connector is to give the highest possible mechanical stability to all the cards, as these x16 connectors have a latch on the "inner" end; this latch is available only on the x16 slot. This provides compatibility with shorter cards, while shorter open-ended PCIe slots can accommodate longer cards but provide fewer lanes and have no latch on the end.
Take a look at the last picture:
https://www.fasttech.com/product/9661024-authentic-onda-b250-d8p-d4-btc-mining-motherboard
Only the first PCIe slot (the one closest to the CPU) has 16 lanes; the others have only 1 lane.

According to the Intel specification, the lanes of the PCIe controller integrated into the CPU can't be used as 16 separate x1 lanes.
See the expansion options:
https://ark.intel.com/content/www/us/en/ark/products/191047/intel-core-i7-9850h-processor-12m-cache-up-to-4-60-ghz.html
https://ark.intel.com/content/www/us/en/ark/products/135457/intel-pentium-gold-g5620-processor-4m-cache-4-00-ghz.html
It reads:
PCI Express Configurations: Up to 1x16, 2x8, 1x8+2x4
for every socket 115x CPU (or less for low-end Celerons and Pentiums).
That is, at most 3 GPUs could be connected to the CPU: one on 8 lanes, the other two on 4 lanes each.
However, the "south bridge" chip provides further PCIe lanes (these lanes have higher latency than the lanes built into the CPU):
https://ark.intel.com/content/www/us/en/ark/products/98086/intel-b250-chipset.html
PCI Express Revision: 3.0; PCI Express Configurations: x1, x2, x4; Max # of PCI Express Lanes: 12
Perhaps that's the trick: the first PCIe slot is connected with all 16 lanes to the CPU, and the other 11 are connected to the south bridge (1 lane each). There are no unnecessary peripherals (PS/2 keyboard, serial and parallel ports, additional USB ports, sound controller), only one PCIe gigabit network interface controller (occupying the 12th lane).
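
To make the lane accounting above concrete, here is a small sketch that just restates the hypothesis in this post (the slot wiring is the guess described above, not something taken from a board manual):

# Hypothetical lane budget for the Onda B250 D8P as described above:
# 16 PCIe 3.0 lanes from the CPU (usable as 1x16, 2x8 or 1x8+2x4) and up to
# 12 lanes from the B250 chipset ("south bridge").
CPU_LANES = 16
PCH_LANES = 12

slots = {"slot 1 (x16, wired to the CPU)": 16}
for i in range(2, 13):                       # slots 2..12: one chipset lane each
    slots["slot {} (x1, wired to the chipset)".format(i)] = 1
nic_lanes = 1                                # gigabit NIC on the remaining chipset lane

cpu_used = slots["slot 1 (x16, wired to the CPU)"]
pch_used = sum(v for k, v in slots.items() if "chipset" in k) + nic_lanes

print("CPU lanes used: {}/{}".format(cpu_used, CPU_LANES))      # 16/16
print("Chipset lanes used: {}/{}".format(pch_used, PCH_LANES))  # 12/12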

erik
Message 51965 - Posted: 2 Jun 2019 | 21:04:58 UTC
Last modified: 2 Jun 2019 | 21:08:09 UTC

OK, yet another update, this time using an x1 slot (on the ASRock X399 Taichi mainboard with the Threadripper 1950X).

https://imgur.com/a/VV6CdUB

A little explanation of the above image:
values from 2 to 20 are with the power limit at 50%;
values from 21 to 38 are with no power limit (so power is at 100%).

Changing the power limit only affects the core clock speed, meaning 100% power will finish a WU faster while the GPU temperature grows by 4-5 degrees Celsius.

Another note: with or without the power limit, nothing changed regarding GPU load and PCIe bus load. Both values are around 74-76, power-limited (or undervolted) or not.

My other card, in a real x16 slot (electrically x16), has a GPU load of around 88-90%.

So my performance loss using an x1 slot vs an x16 slot is going from 88-90% down to 70-72% GPU load.
My conclusion: there is a 95% chance I am going to choose a mining board for my multi-GPU setup with 6 GPUs... much cheaper than buying a mainboard with 7 x16 slots (Intel platform and PLX switches).

This is just sharing info with you.
Have fun!

eXaPower
Message 52124 - Posted: 20 Jun 2019 | 21:52:04 UTC

A Turing RTX 2080 on PCIe 2.0 x1 gives 50% of its PCIe 3.0 x4 performance on PrimeGrid n=20 Genefer.
GPU power went from 205 W to 133 W, the same as the GTX 1070 on PCIe 2.0 x1.

Pascal's x1 performance loss is around 8-17%. Genefer is heavy on PCIe bandwidth.

GPUGRID can have high PCIe bandwidth usage on a multi-GPU motherboard.

Turing PCIe x1 performance is worse than Pascal / Maxwell / Kepler at x1.

My Zotac mini GTX 1080 died yesterday after 28 months of 24/7 service.
I returned an RTX 2070 and kept the RTX 2060.
The 2060 is limited to 160 W, compared to the 210 W at which the 2070 operated overclocked for 12-20% more performance.

Today I purchased my first ever Ti GPU, a 2080 Ti: an ASUS ROG Strix COD edition ($700 open box at Microcenter).
It boosted out of the box to 1970 MHz at 280 W on an n=20 WU.

With the new 2080 Ti I decided to test Turing on PCIe x1.

Also, my 2013 Z87 Haswell is showing its age with the overclocked RTX 2080 Ti and 2080 on PrimeGrid PPS sieve:
runtimes are slower than RTX 2080 Ti / 2080 combos with Skylake / Coffee Lake CPUs.

CPU speed scales very well with GPU overclocking on PrimeGrid (PPS sieve), as it does on AP27.
These two programs require minimal PCIe bandwidth.

Higher-clocked (3.7 GHz+) CPUs help the overclocked GPU finish the WU faster.
This is similar to GPUGRID.

PappaLitto
Message 52125 - Posted: 20 Jun 2019 | 22:27:37 UTC - in response to Message 52124.


Today I purchased my first ever Ti GPU, a 2080 Ti: an ASUS ROG Strix COD edition ($700 open box at Microcenter).

Wow, I think you got the deal of the century. I bought a new EVGA 2080 Ti from Microcenter for $1015 and I thought that was a pretty good deal. The difference is, my max boost speed seems to be 1800 MHz at 275 watts according to GPU-Z. Pretty amazing that you can get 1970 MHz with only 280 watts.

erik
Message 52126 - Posted: 20 Jun 2019 | 22:44:28 UTC

I will not use PCIe x1 slots.
I have decided to use mainboards with PLX 8747 PCIe switches; they mostly have 7 PCIe slots, fully loading a mining frame with 7 GPUs. I think I am going for GTX 1070s, but that's not certain yet.
And I always undervolt all my GPUs to a 50% power limit because of the heat.

Right now I'm running 4 systems: 3 for GPUGRID (with 6 GPUs in total) and 1 system with 6 GPUs for Folding@home.

In the planning: add 1 more system for GPUGRID, 1 more system for FAH, and 1 more system for WCG and FAH.


But I have not yet decided which GPU to choose.

eXaPower
Message 52127 - Posted: 20 Jun 2019 | 22:57:22 UTC - in response to Message 52125.



Thanks, it's an amazing price. Someone returned an efficient chip.

Even with the Turing refresh on the horizon and price drops coming, I couldn't pass up a 2080 Ti at that cost.

I always check Newegg and Microcenter for open-box deals. The 2080 Ti had been returned the night before, after being purchased on sale for 979 USD.
I saw the 2080 Ti deal online early this morning and walked in when the store opened.
