Performance comparison between (append (reverse a) b) and (foldl cons b a)

I have the following code example, and I am wondering whether method1 is slower than method2.

I want to write a benchmark program, but I don't have any prior experience with benchmarking.

#lang racket


(define (method1 a b) (append (reverse a) b))
(define (method2 a b) (foldl cons b a))

(define lst1-a '(1 2 3 4 5 6))
(define lst1-b '(7 8 9 10 11 12))

(method1 lst1-a lst1-b)
(method2 lst1-a lst1-b)

Thanks for the help in advance.


You should also compare append-reverse from the srfi/1 module. I would expect method1 to be the slowest: it does more allocation and more list traversals than the others.
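For reference, a quick sketch of calling it (assuming the srfi/1 that ships with Racket, which provides append-reverse):

(require srfi/1)                  ; append-reverse lives here

(append-reverse '(1 2 3) '(4 5))  ; => '(3 2 1 4 5), same result as (foldl cons '(4 5) '(1 2 3))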


You can use time for rough benchmarking. There are also profilers and more thorough benchmarking packages available.
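A minimal sketch of using time, assuming method1 and method2 from the original post are already defined (the list sizes here are just placeholders):

(define big-a (range 100000))
(define big-b (range 100000))

(time (void (method1 big-a big-b)))  ; time prints cpu/real/gc milliseconds
(time (void (method2 big-a big-b)))  ; void discards the result so the REPL doesn't print it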


Here’s a benchmark script:

#lang racket

(define (method1 a b) (append (reverse a) b))
(define (method2 a b) (foldl cons b a))

(define N 10000)
(define ITER 10000)

(define lst1-a (range N))
(define lst1-b (range N))

(define (bench proc)
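  ;; force a few full collections up front so GC pauses don't land inside the timed loop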
  (collect-garbage)
  (collect-garbage)
  (collect-garbage)

  (time
   (for ([i (in-range ITER)])
     (proc lst1-a lst1-b))))

(bench method1) ;=> cpu time: 545 real time: 615 gc time: 20
(bench method2) ;=> cpu time: 326 real time: 380 gc time: 10

So method1 is slower.

And this makes sense: in method2, you cons roughly |a| times. In method1, you first cons |a| times for the reverse, and then another |a| times for the append. That's twice the amount of work.
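Roughly speaking, method2 behaves like this hand-rolled loop (a sketch; rev-append is just a name I made up): one pass over a, one cons per element.

;; One pass: walk down a, consing each element onto b.
(define (rev-append a b)
  (if (null? a)
      b
      (rev-append (cdr a) (cons (car a) b))))

(rev-append '(1 2 3) '(4 5))  ; => '(3 2 1 4 5), same as (foldl cons '(4 5) '(1 2 3))

;; method1, in contrast, first allocates the intermediate list (reverse a)
;; (|a| conses) and then append copies it again (another |a| conses).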


Using @sorawee's script, I found that append-reverse has roughly the same performance as my method2.

(bench method1) ;=> cpu time: 733 real time: 733 gc time: 22
(bench method2) ;=> cpu time: 367 real time: 367 gc time: 11
(bench append-reverse) ;=> cpu time: 369 real time: 369 gc time: 11
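For anyone following along, the additions to the script were roughly:

(require srfi/1)        ; provides append-reverse

(bench append-reverse)  ; takes the same two arguments: the list to reverse, then the tail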

Why collect garbage 3 times? Shouldn't it be the same as collecting garbage once? What's the difference?

I don’t know! I saw other people do it, and I just followed that.

You can in fact see this repeated (collect-garbage) throughout the code base, though it looks like twice would be enough.

From my recollection (though without any deep understanding), it's related to the generational garbage collector: running (collect-garbage) once might leave some garbage behind, so we run it a couple more times.
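If you want to see whether the extra collections actually free anything, one rough check (a sketch, nothing authoritative) is to print current-memory-use after each round:

(for ([i (in-range 1 4)])
  (collect-garbage)
  (printf "after collection ~a: ~a bytes in use\n" i (current-memory-use)))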


Keep in mind that this is really only something you need to be concerned about when doing performance measurements. I've seen this idiom in other GC'd languages as well. GC is intended to work that way in normal use, but before running a targeted benchmark you want to make sure that everything has been collected beforehand, not during the benchmark itself.

Neither do I, but I think it's related to will executors. IIRC, the first collection cleans up ordinary objects, the second collects the objects that had will executors, and the third ?????