Run an infinite loop in a future for message-passing style?

I wrote code like the following:

#lang racket
(require racket/async-channel) ; for make-async-channel, async-channel-put/get

(struct process (fu ch))

(define (process-send pro msg)
  (async-channel-put (process-ch pro) msg))

(define (^friend my-name)
  (define ch (make-async-channel))
  (define self (future
                (thunk (let loop ()
                         (match (async-channel-get ch)
                           [(list 'ping from)
                            (printf "ping from ~a~n" my-name)
                            (process-send from 'pong)
                            (loop)]
                           ['pong
                            (printf "pong from ~a~n" my-name)
                            (loop)])))))
  (process self ch))

(let ([bob (^friend "Bob")]
      [jack (^friend "Jack")])
  (process-send bob (list 'ping jack)))

But unless I let the future run to completion (by touching it), it doesn't actually print anything. Should I expect the above to work, or are there problems with this abstraction that I'm not aware of?

Futures aren't the right abstraction for what you're trying to do. You should use threads if single-core concurrency is good enough for your use case, or places if you need parallelism. Futures are mainly useful for parallelising operations that don't require synchronisation (eg. parallel number crunching).

Blocking operations (like async-channel-get) suspend a future until it is touched, and memory allocation can temporarily pause future execution, too. See this section of the guide for an explanation.
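For illustration, here is a minimal sketch of the same program with the future swapped for a thread. It keeps your names (process, process-send, ^friend) and only changes the construct; the final sleep is just to let the threads run before the program exits:

#lang racket
(require racket/async-channel)

(struct process (thd ch))

(define (process-send pro msg)
  (async-channel-put (process-ch pro) msg))

(define (^friend my-name)
  (define ch (make-async-channel))
  (define thd
    (thread            ; a thread may block on async-channel-get freely
     (lambda ()
       (let loop ()
         (match (async-channel-get ch)
           [(list 'ping from)
            (printf "ping received by ~a~n" my-name)
            (process-send from 'pong)
            (loop)]
           ['pong
            (printf "pong received by ~a~n" my-name)
            (loop)])))))
  (process thd ch))

(let ([bob (^friend "Bob")]
      [jack (^friend "Jack")])
  (process-send bob (list 'ping jack))
  (sleep 0.1)) ; give the message loops a moment to run

Unlike a future, a Racket thread is scheduled cooperatively by the runtime and blocking on a channel simply suspends that one thread, so the messages print without anything being touched.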


Well, I want something that isn't as heavy as places for running tiny jobs that talk to each other. It seems like I don't have a suitable choice.

I would use threads for that.


Well, that's already the current solution, but it still doesn't quite match what I had in mind.

If you would like to see the code, here it is: sauron/record-maintainer.rkt at develop · racket-tw/sauron · GitHub

Your code seems to be focused on concurrency, not on getting parallel speedup (as you would expect for "tiny jobs"), and thus threads are exactly the right thing. What are you imagining that's different?

The problem comes from a more complex situation. I think each maintainer should notify the maintainers of its dependency modules about its use of their definitions, and this has a huge impact because many messages pile up in the background. Hence, each thread gets only a little time to handle messages.

The thread that the editor is currently using doesn't get any special treatment, however, so the front end gets blocked. That's why I'm looking for something where each process handles its own messages in parallel.

If the problem is blocking then parallelism won't help.

How so? Won't they get more resources rather than waiting on the same core?

It depends what you mean by "blocking". If you mean that you're waiting for something to finish before doing something else, then doing that thing faster will of course be helpful. (But note that doing something 2x faster is unlikely to make something that seems slow into something that seems fast, especially with other things running on the machine.) But if you mean that one thing shouldn't have to wait for some other thing to finish (which is usually the case with UIs), then you need to use concurrency, and just running things faster won't fix it.

Oh, I guess I picked the wrong word XD. I meant to say that each thread gets many messages to handle, and then the frequent context switching hurts performance.

Maybe that thread should be special? For example, it could be placed higher up in a hierarchy of threads to ensure that it gets more of the available CPU time; maybe you could use 14.8 Thread Groups to do that.
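A minimal sketch of what that could look like. The worker bodies here (ui-loop, worker-loop) are hypothetical placeholders, not from your code; the point is that the scheduler divides time round-robin among siblings in a group, so the single UI thread in its own group is not starved by the many workers sharing the other group:

#lang racket

;; hypothetical stand-ins for real work
(define (ui-loop)     (let loop () (sleep 0.01) (loop)))
(define (worker-loop) (let loop () (sleep 0.01) (loop)))

;; two sibling groups under the current thread group
(define ui-group     (make-thread-group))
(define worker-group (make-thread-group))

;; a new thread joins whatever current-thread-group is at creation time
(define ui-thread
  (parameterize ([current-thread-group ui-group])
    (thread ui-loop)))

(define workers
  (parameterize ([current-thread-group worker-group])
    (for/list ([i (in-range 10)])
      (thread worker-loop))))

Since scheduling alternates between the two groups, the ten workers together get roughly the same share of CPU time as the one UI thread gets by itself.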

If you do this and it helps, it would be interesting to hear how you used thread groups and how they work out in practice. So far I haven't used them, or seen a nice example where they get used.


I think the problem is that thread groups are static? I'm not sure, but if they are, the code would be hard to write.

For example, I might have to share cached data with a thread that lives in a higher group.