I have a naive softmax-like function that is decently sped up by futures (10x on the machine I'm using), but is numerically unstable. When I try to use the standard "subtract the max" trick, it does become stable, but futures no longer help at all.
This is really surprising, because the stable version is written so that it should do no more than twice the computation of the unstable version.
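Roughly, the two variants look like this (a simplified sketch with hypothetical names; the actual code is in the gist):

(require racket/flonum)

;; Unstable: flexp can overflow to +inf.0 when the βs are large.
(define (softmax-ref/unstable βs i)
  (define Z
    (for/fold ([Z 0.]) ([β (in-flvector βs)])
      (fl+ Z (flexp β))))
  (fl/ (flexp (flvector-ref βs i)) Z))

;; Stable: subtract the max first, so every exponent is <= 0.
;; The extra pass to find the max is why it should cost at most
;; about twice as much as the unstable version.
(define (softmax-ref/stable βs i)
  (define βmax
    (for/fold ([m -inf.0]) ([β (in-flvector βs)])
      (flmax m β)))
  (define Z
    (for/fold ([Z 0.]) ([β (in-flvector βs)])
      (fl+ Z (flexp (fl- β βmax)))))
  (fl/ (flexp (fl- (flvector-ref βs i) βmax)) Z))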
The future visualizer doesn't show anything suspicious (no GC, no blocks).
The amount of GC in the stable version when using many futures is also really surprising.
I've looked at the output of PLT_LINKLET_SHOW_CP0=1 raco make ... but couldn't see anything suspicious (granted, I'm not sure exactly what to look for), except that flexp is not turned into unsafe-flexp; but that applies to both versions, so it's not the cause of the issue at hand.
I'm not sure what to look for in the output of disassemble.
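(For what it's worth, I'm invoking it roughly like this; I'm assuming the disassemble package's entry point takes a procedure:)

(require disassemble) ; `raco pkg install disassemble`
(disassemble softmax-ref/unstable)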
I ran the softmax code from the link you provided, and it seems to me that the stable version is less than 2x slower, based on the results it prints out. My results are shown below.
This is with Racket 8.6 cs on Windows, running the code inside DrRacket with "no debugging or profiling" set.
Alex.
No future:
cpu time: 1953 real time: 1959 gc time: 578
7581.949085360589
n-futures: 1
cpu time: 921 real time: 939 gc time: 0
7581.9490853606585
cpu time: 1578 real time: 1614 gc time: 31
7581.9490853606585
n-futures: 2
cpu time: 734 real time: 431 gc time: 15
7581.949085360717
cpu time: 1937 real time: 1243 gc time: 78
7581.949085360717
n-futures: 3
cpu time: 859 real time: 306 gc time: 0
7581.949085360753
cpu time: 1578 real time: 703 gc time: 140
7581.949085360754
n-futures: 4
cpu time: 1046 real time: 269 gc time: 0
7581.949085360781
cpu time: 1546 real time: 521 gc time: 171
7581.949085360781
n-futures: 5
cpu time: 1109 real time: 239 gc time: 0
7581.949085360782
cpu time: 1906 real time: 417 gc time: 78
7581.949085360782
n-futures: 10
cpu time: 1312 real time: 251 gc time: 15
7581.949085360822
cpu time: 1828 real time: 433 gc time: 296
7581.949085360821
n-futures: 15
cpu time: 1468 real time: 206 gc time: 0
7581.949085360822
cpu time: 1781 real time: 395 gc time: 218
7581.949085360822
n-futures: 20
cpu time: 1468 real time: 229 gc time: 0
7581.949085360825
cpu time: 2296 real time: 419 gc time: 406
7581.949085360825
n-futures: 30
cpu time: 1515 real time: 213 gc time: 0
7581.949085360831
cpu time: 2062 real time: 414 gc time: 484
7581.949085360831
n-futures: 40
cpu time: 1546 real time: 206 gc time: 0
7581.949085360833
cpu time: 2265 real time: 393 gc time: 640
7581.949085360833
n-futures: 50
cpu time: 1406 real time: 205 gc time: 15
7581.949085360834
cpu time: 2375 real time: 424 gc time: 125
7581.949085360834
n-futures: 100
cpu time: 1578 real time: 200 gc time: 0
7581.949085360836
cpu time: 2281 real time: 368 gc time: 453
7581.949085360836
n-futures: 150
cpu time: 1562 real time: 201 gc time: 0
7581.949085360836
cpu time: 2203 real time: 396 gc time: 203
7581.949085360836
n-futures: 200
cpu time: 1296 real time: 202 gc time: 15
7581.949085360837
cpu time: 2234 real time: 392 gc time: 546
7581.949085360837
Thanks @alexh, that's interesting. What number does (processor-count) from racket/future return?
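That is, the value of:

(require racket/future)
(processor-count) ; number of parallel hardware threads available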
The GC numbers are also pretty high for the stable version (in particular for the no-future run, which is intriguing).
I updated the gist to run a garbage collection before each run and print some more info; if you have the time, would you mind running it from the command line, just to be sure? Thanks!
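(The update does something along these lines before each timed run, where run-benchmark stands in for the actual call:)

(collect-garbage) ; force a major collection so garbage left over from
(collect-garbage) ; the previous run doesn't pollute the next measurement
(time (run-benchmark))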
Here are the results from the new version. I ran them from the command line using racket.exe, but running the code from DrRacket gives similar results; the times vary slightly from run to run, but not by much.
Alex.
Welcome to Racket v8.6 [cs].
#processors: 8
No future:
cpu time: 640 real time: 634 gc time: 0
7306.594650233405
cpu time: 1250 real time: 1243 gc time: 0
7306.5946502334045
n-futures: 1
cpu time: 640 real time: 642 gc time: 15
7306.594650233394
cpu time: 1265 real time: 1260 gc time: 0
7306.594650233393
n-futures: 2
cpu time: 656 real time: 331 gc time: 0
7306.594650233534
cpu time: 1296 real time: 725 gc time: 93
7306.594650233534
n-futures: 3
cpu time: 703 real time: 232 gc time: 0
7306.594650233557
cpu time: 1390 real time: 525 gc time: 203
7306.594650233558
n-futures: 4
cpu time: 718 real time: 200 gc time: 15
7306.594650233593
cpu time: 1468 real time: 419 gc time: 171
7306.594650233594
n-futures: 5
cpu time: 953 real time: 212 gc time: 0
7306.594650233597
cpu time: 1937 real time: 458 gc time: 187
7306.594650233597
n-futures: 10
cpu time: 1250 real time: 215 gc time: 0
7306.594650233619
cpu time: 2312 real time: 474 gc time: 328
7306.594650233619
n-futures: 15
cpu time: 1343 real time: 175 gc time: 0
7306.594650233624
cpu time: 2515 real time: 418 gc time: 234
7306.594650233624
n-futures: 20
cpu time: 1187 real time: 193 gc time: 46
7306.594650233625
cpu time: 2656 real time: 424 gc time: 93
7306.594650233625
n-futures: 30
cpu time: 1281 real time: 174 gc time: 93
7306.594650233628
cpu time: 2171 real time: 416 gc time: 625
7306.594650233628
n-futures: 40
cpu time: 1375 real time: 172 gc time: 0
7306.59465023363
cpu time: 2484 real time: 421 gc time: 359
7306.59465023363
n-futures: 50
cpu time: 1390 real time: 181 gc time: 0
7306.59465023363
cpu time: 2234 real time: 432 gc time: 328
7306.59465023363
n-futures: 100
cpu time: 1375 real time: 172 gc time: 0
7306.594650233634
cpu time: 2796 real time: 429 gc time: 187
7306.594650233634
n-futures: 150
cpu time: 1468 real time: 176 gc time: 0
7306.594650233634
cpu time: 2375 real time: 433 gc time: 468
7306.594650233634
n-futures: 200
cpu time: 1359 real time: 173 gc time: 0
7306.594650233635
cpu time: 2359 real time: 429 gc time: 296
7306.594650233635
Both versions seem to enjoy a speed-up factor of 3-4, which is the best we can expect with 8 CPU threads (so likely 4 cores), and the computation time mostly stays within a factor of 2 between the two versions, so that's good. Of course, it's not entirely clear whether these results would extrapolate to 64 cores, but even with 5 futures I already see a large discrepancy on my machine. Could Linux be the reason for the lack of speed-up in my case?
Also, the GC times for the stable version are still pretty surprising. Are these numbers real or are they artifacts?
Is that a leftover from a previous version? It marks n-cols as mutable (even though it's never mutated), and that may disable some optimizations.
As a rule of thumb, the compiler is unhappy when there are mutable variables, and it's a good idea to avoid them when possible.
One easy change is to wrap the second part of softmax-ref/stable in (let ([βmax βmax]) ...), so that in that loop βmax is not mutable (see the sketch below). But I don't expect a huge improvement.
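Here is a standalone sketch of that rebinding trick (sum-exps and the loop bodies are stand-ins, not the gist's actual code):

(require racket/flonum)

(define (sum-exps βs)
  (define βmax -inf.0)
  (for ([β (in-flvector βs)]) ; the first pass mutates βmax
    (when (fl> β βmax)
      (set! βmax β)))
  (let ([βmax βmax]) ; shadow it with an immutable binding for the second loop
    (for/fold ([Z 0.]) ([β (in-flvector βs)])
      (fl+ Z (flexp (fl- β βmax))))))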
What about first calculating the sum only for that column to get θact, and then calculating all the columns to get Z? It will be slower when there are only a few columns, but eliminating the mutable variables might help.
I had actually tried a few other versions: fully immutable with for/fold only, with named lets only, and fully mutable (@samth mentioned I might need to use set! to avoid boxing the flonums). Nothing made a significant difference. I'm certainly willing to try more variants though.
I also tried using a freshly allocated flvector to maintain the βs: no difference. I also tried allocating the flvector only once and passing it as an argument to the function; still no difference.
It's (flmax β βmax): flmax can return one of its arguments, so its result may be eq? to an existing flonum, and that guarantee is precisely what we need to avoid so the flonum doesn't get boxed!
To break this guarantee, we just wrap the arguments with fl+: (flmax (fl+ β) (fl+ βmax)).
The docs on flonum optimization do mention this, but in this case it's pretty sneaky!
Here's a more explicit example:
(require racket/flonum)

(define (flvector-max ys)
  (define ymax -inf.0)
  (for ([y (in-flvector ys)])
    (when (fl> y ymax)
      (set! ymax (fl+ y)))) ; `ymax` does not need to be `eq?` to `y`
  (fl+ ymax)) ; again we break the `eq?` guarantee
Dang, that took me a long time to find out! Hopefully I'll remember this one.
With this change, the stable version is almost as fast as the unstable one (less than a factor of 2 slower).
The end result seems like what you report for avoiding flmax, but it's simplest to report the change in terms of allocations in a no-futures run, as measured by (current-memory-use 'cumulative) before and after:
softmax-ref/unstable
cpu time: 196 real time: 209 gc time: 1
9080.16077599824
allocated: 44427328 ; that's 8 digits
softmax-ref/stable
cpu time: 613 real time: 639 gc time: 6
9080.160775998242
allocated: 1949946688 ; that's 10 digits
softmax-ref/stable-immutable
cpu time: 352 real time: 378 gc time: 0
9080.160775998242
allocated: 63233040 ; that's 8 digits, used to be 10-digit 1948267584
That is, allocation is reduced by a factor of 30 or so.
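(For reference, the before/after measurement is along these lines; run-softmax stands in for the actual benchmark call:)

(define before (current-memory-use 'cumulative))
(time (run-softmax))
(printf "allocated: ~a\n" (- (current-memory-use 'cumulative) before))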
You can cut allocation by about half again by applying the fl+ trick in the Z/θact loop, as (values (fl+ Z) (fl+ θact)) (sketched below), but it's diminishing returns.
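In isolation, that loop would look something like this (βs, act, and βmax are stand-ins for the gist's code):

(require racket/flonum)

(define (Z-and-θact βs act βmax)
  (for/fold ([Z 0.] [θact 0.]) ([β (in-flvector βs)]
                                [j (in-naturals)])
    (define e (flexp (fl- β βmax)))
    (define Z* (fl+ Z e))
    (define θact* (if (= j act) e θact))
    ;; wrap both results in fl+ so neither can be eq? to an existing
    ;; flonum, which would force the accumulators to be boxed
    (values (fl+ Z*) (fl+ θact*))))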
I assumed it was an eq? thing because turning flmax into fl> changed nothing, and for (set! βmax (flmax βmax β)) it was enough to fix it to (set! βmax (flmax βmax (fl+ β))) without changing the other argument of flmax; breaking the eq? guarantee on just one argument works too.
Now that this is fixed, all 64 cores are fully at 100%, so futures work very well!
Last thing: would you recommend the fully immutable version over any mutation? (I could fix all versions)