
Message boards : News : 1Pflops milestone

GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 28401 - Posted: 3 Feb 2013 | 22:08:20 UTC

GPUGRID has been consistently above one petaflop for the past few days. Although it's just a symbolic number, it feels good for us to reach such an impressive target.

Every volunteer should feel proud too, because it's all thanks to you.

gdf

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28402 - Posted: 3 Feb 2013 | 22:22:02 UTC - in response to Message 28401.

Congratulations!

That might be news worth pushing out via the BOINC message system, although I'd generally advise using it cautiously.

MrS
____________
Scanning for our furry friends since Jan 2002

AdamYusko
Joined: 29 Jun 12
Posts: 26
Credit: 21,540,800
RAC: 0
Message 28404 - Posted: 4 Feb 2013 | 1:15:47 UTC

Awesome! I guess there are far fewer GPUGRID crunchers, but with the abundance of top-notch GPUs on this project I would have thought it had been above that mark for quite some time.

Glad I could help, though I only have a GT 640 and a GT 440 OEM crunching now. I can't wait for the short runs to return so I can put the 440 on those.
____________

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Message 28408 - Posted: 4 Feb 2013 | 13:35:39 UTC - in response to Message 28404.

In fact we have so much incoming data that it's becoming hard for the server to keep up. We are working around the clock to keep the pipes clear.

gameboybf2142
Joined: 19 Dec 10
Posts: 1
Credit: 421,995,530
RAC: 0
Message 28416 - Posted: 5 Feb 2013 | 18:09:32 UTC

Congratulations to all crunchers on such a big milestone!

Simba123
Joined: 5 Dec 11
Posts: 147
Credit: 69,970,684
RAC: 0
Message 29434 - Posted: 13 Apr 2013 | 3:47:51 UTC - in response to Message 28408.

In fact we have so much incoming data that it's becoming hard for the server to keep up. We are working around the clock to keep the pipes clear.


Generally one of the nicer problems to have :)

I imagine that the new(ish) 4.2 app, with its massive increase in efficiency, had a fair bit to do with getting this project over the 1 PFLOPS line.

John C MacAlister
Joined: 17 Feb 13
Posts: 181
Credit: 144,871,276
RAC: 0
Message 29435 - Posted: 13 Apr 2013 | 11:03:46 UTC

Hi, Folks:

Congratulations to all on this achievement. I am impressed in particular by GPUGRID's listing of the scientific papers to which crunchers have contributed. It really makes one feel part of something useful.

John

GoodFodder
Joined: 4 Oct 12
Posts: 53
Credit: 333,467,496
RAC: 0
Message 33587 - Posted: 23 Oct 2013 | 10:36:18 UTC

Woohoo - congrats GPUGRID - the petaflop barrier is broken again!

Clearly a result of the recent:
Application stability improvements.
Smaller work unit sizes.
Improved communication - keeping crunchers motivated.

Good work all - roll on 2 Petaflops!

Betting Slip
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Message 33588 - Posted: 23 Oct 2013 | 12:06:24 UTC - in response to Message 33587.

WooHoo indeed. I hope we can make it to 2 petaflops with the Titans and 780s now fully on board.

I would respectfully suggest some aggressive PR and a comprehensive, constantly updated FAQ that can hold the hands of new users who are not as computer savvy as the crunchers at GPUGRID.

If that's done I see no reason why GPUGRID should not hit 2 petaflops and more...

PS: What is the speed of a current supercomputer anyway?

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 33589 - Posted: 23 Oct 2013 | 12:14:07 UTC - in response to Message 33588.
Last modified: 23 Oct 2013 | 12:26:00 UTC

The present fastest non-distributed supercomputer runs at 33.86 petaFLOPS.
Ref. http://en.wikipedia.org/wiki/Supercomputer

Folding@Home is around 18 petaFLOPS.
Ref. http://en.wikipedia.org/wiki/Folding@home

BOINC combined is presently at 7.6 petaFLOPS.
Ref. http://boincstats.com/en/stats/-1/project/detail
____________
FAQs

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Betting Slip
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Message 33591 - Posted: 23 Oct 2013 | 13:10:06 UTC - in response to Message 33589.

The present fastest non-distributed supercomputer runs at 33.86 petaFLOPS.
Ref. http://en.wikipedia.org/wiki/Supercomputer

Folding@Home is around 18 petaFLOPS.
Ref. http://en.wikipedia.org/wiki/Folding@home

BOINC combined is presently at 7.6 petaFLOPS.
Ref. http://boincstats.com/en/stats/-1/project/detail


Thanks SK. We're on a par with IBM's Roadrunner (2008) - not bad!

Chilean
Joined: 8 Oct 12
Posts: 98
Credit: 385,652,461
RAC: 0
Message 33709 - Posted: 1 Nov 2013 | 23:54:04 UTC

*Group hug*

Now, back to crunching.
____________

5pot
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Message 33710 - Posted: 2 Nov 2013 | 4:39:53 UTC

So, just wondering: how many gigaFLOPS does, say, a 780 put out for this project's app? Meaning, if a 780 were left running at stock 24/7, by how much would it increase the site's performance?

Betting Slip
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Message 33711 - Posted: 2 Nov 2013 | 9:12:57 UTC - in response to Message 33710.

So, just wondering: how many gigaFLOPS does, say, a 780 put out for this project's app? Meaning, if a 780 were left running at stock 24/7, by how much would it increase the site's performance?


With nothing holding it back it should be about 4 teraFLOPS (or just under) of single-precision floating-point operations, which is what GPUGRID uses.
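For what it's worth, here is a minimal sketch of where that "just under 4 teraFLOPS" figure comes from, assuming the usual 2-FLOPs-per-CUDA-core-per-clock estimate; the core count and clock are the published reference specs for the GTX 780, not numbers from this thread:

```python
# Hypothetical illustration, not project data: theoretical single-precision peak
# of a reference GTX 780, counting 2 FLOPs (one fused multiply-add) per CUDA core per clock.
cuda_cores = 2304        # reference GTX 780 shader count (assumption from published specs)
base_clock_ghz = 0.863   # reference base clock in GHz (assumption from published specs)

peak_gflops = 2 * cuda_cores * base_clock_ghz
print(f"{peak_gflops:.0f} GFLOPS")  # ~3977 GFLOPS, i.e. just under 4 TFLOPS single precision
```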

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 33712 - Posted: 2 Nov 2013 | 11:11:50 UTC - in response to Message 33711.
Last modified: 2 Nov 2013 | 11:42:13 UTC

The theoretical single-precision GFLOPS figures for the main reference-spec cards are shown below.

GeForce Card       GFLOPS (SP)

GTX 650            813
GTX 650 Ti         1421
GTX 650 Ti Boost   1505
GTX 660            1882
GTX 660 Ti         2460
GTX 670            2460
GTX 680            3090
GTX 690            5622
GTX 760            2258
GTX 770            3213
GTX 780            3977
GTX Titan          4500

For this project most of these GFLOPS values reflect the relative GPU performance reasonably accurately, but the figures for the Ti cards especially aren't great (some higher, some lower). The figures for non-listed entry-level cards with minuscule bus widths are even worse.

Application and system constraints reduce the actual GFLOPS. The power target/GPU usage might better reflect the actual GFLOPS being used: a card using 90% of the GPU and running at a power target of 90% might only be using up to 90% of the available GFLOPS.

In my experience mid-range reference cards have high GPU usage compared to higher-end cards and would therefore come closer to reaching their reference GFLOPS - perhaps 90% of the theoretical GFLOPS would be used (on a good setup). High-end cards tend to use less of what's available (say 75 to 85% GPU usage, at least on Windows). The percentage of TDP used tends to be lower too. Values vary depending on task type.

Another consideration is that many cards are non-reference with higher TDP's and clocks.
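To make that estimate concrete, here is a small sketch of the calculation; the usage fractions are the ballpark ranges mentioned above, not measured values, and the GFLOPS figures are from the reference table:

```python
# Rough sketch, illustrative numbers only: estimating delivered GFLOPS from the
# theoretical peak and the observed GPU usage fraction described above.

def effective_gflops(theoretical_gflops: float, gpu_usage: float) -> float:
    """Upper-bound estimate of delivered GFLOPS at a given GPU usage fraction."""
    return theoretical_gflops * gpu_usage

# Ballpark usage figures from the post, not measurements:
print(effective_gflops(1882, 0.90))  # mid-range GTX 660 at ~90% usage -> ~1694 GFLOPS
print(effective_gflops(3977, 0.80))  # high-end GTX 780 at ~80% usage  -> ~3182 GFLOPS
```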

- Good to see GPUGRID back over the 1 petaflop mark, but I'm not sure where this figure comes from or how accurate it is; BOINC All Project Stats has the value at 2,810.281 teraFLOPS (2.8 PFLOPS) and the project RAC at 214M. http://www.allprojectstats.com/po.php?projekt=48
BoincStats has the current figure at 968.609 teraFLOPS.
____________
FAQs

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 33714 - Posted: 2 Nov 2013 | 11:30:41 UTC - in response to Message 33712.

In addition to what SK said: for GP-GPU it's very good to achieve even 50% of the theoretical maximum throughput. I don't know any more specific numbers for GPU-Grid.

MrS
____________
Scanning for our furry friends since Jan 2002

Richard Haselgrove
Joined: 11 Jul 09
Posts: 1576
Credit: 5,600,886,851
RAC: 8,767,303
Message 33719 - Posted: 2 Nov 2013 | 12:15:15 UTC - in response to Message 33712.

- Good to see GPUGRID back over the 1 petaflop mark, but I'm not sure where this figure comes from or how accurate it is; BOINC All Project Stats has the value at 2,810.281 teraFLOPS (2.8 PFLOPS) and the project RAC at 214M. http://www.allprojectstats.com/po.php?projekt=48
BoincStats has the current figure at 968.609 teraFLOPS.

No information about FLOPS (peak or actual) is fed out from BOINC projects to the stats aggregation sites. Every FLOPS figure you see on a stats site has been derived in some way from the credit awarded by the project.

For example, the current figures at Boinc Stats are:

Recent average credit (RAC): 193,721,774
Average floating-point operations per second: 968,608.9 GigaFLOPS

The RAC figure is 199.9999938 times the GigaFLOPS figure: that's not coincidence, that's arithmetic (the definition is 200x, with a tiny rounding error because of limited precision).
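A minimal sketch of that arithmetic, assuming nothing beyond the 200:1 definition just described (this is how the quoted figures relate, not an official BOINC API):

```python
# Sketch of the credit-to-FLOPS conversion the stats sites appear to use,
# assuming the 200 credits/day per GigaFLOPS ratio described above.

def rac_to_gigaflops(rac: float) -> float:
    """Implied GigaFLOPS from recent average credit (credits per day)."""
    return rac / 200.0

print(rac_to_gigaflops(193_721_774))  # -> 968608.87, matching the 968,608.9 GigaFLOPS figure above
```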

So, according to BOINC statistics, all a project has to do to increase its G/T/PFlops production rate is to increase the amount of credit awarded per task - and there are some projects which have done that. That's the crazy world of statistics for you.

Chilean
Joined: 8 Oct 12
Posts: 98
Credit: 385,652,461
RAC: 0
Message 33763 - Posted: 3 Nov 2013 | 22:43:10 UTC

I wonder how much "science" can be done, say per month, at this speed.
____________

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 33788 - Posted: 6 Nov 2013 | 15:26:44 UTC - in response to Message 33719.

So, according to BOINC statistics, all a project has to do to increase its G/T/PFlops production rate is to increase the amount of credit awarded per task - and there are some projects which have done that. That's the crazy world of statistics for you.


It's not statistics that are at fault, it's the way some people do statistics and what they do with the numbers they come up with. I once had respect for BOINC stats sites but that quickly ended when they started doing crazy stuff like deriving performance (FLOPS) from credits. They pander to the naive, like many politicians do, thereby reinforcing beliefs that have no foundation in reality.
____________
BOINC <<--- credit whores, pedants, alien hunters

Richard Haselgrove
Joined: 11 Jul 09
Posts: 1576
Credit: 5,600,886,851
RAC: 8,767,303
Message 33789 - Posted: 6 Nov 2013 | 18:51:51 UTC - in response to Message 33788.

So, according to BOINC statistics, all a project has to do to increase its G/T/PFlops production rate is to increase the amount of credit awarded per task - and there are some projects which have done that. That's the crazy world of statistics for you.

It's not statistics that are at fault, it's the way some people do statistics and what they do with the numbers they come up with. I once had respect for BOINC stats sites but that quickly ended when they started doing crazy stuff like deriving performance (FLOPS) from credits. They pander to the naive, like many politicians do, thereby reinforcing beliefs that have no foundation in reality.

And the real crime against scientific method is that BOINC themselves commit the same sin, in the top-right corner of the home page, and more dramatically on http://boinc.berkeley.edu/chart_list.php. How many of those GFLOPS statements would pass a floating-point benchmark audit?

dskagcommunity
Joined: 28 Apr 11
Posts: 456
Credit: 817,865,789
RAC: 0
Message 34317 - Posted: 14 Dec 2013 | 21:53:50 UTC

What was the highest value ever on GPUGRID? Is the current 1.14 PFLOPS the top value? It's a good value - not long ago we struggled to get back over 1 PFLOPS ^^
____________
DSKAG Austria Research Team: http://www.research.dskag.at


