tl;dr: I'd like a way to ensure that a particular instance of a struct is eq? across threads so it can be used for concurrency management. What would the best way be?
Medium version:
If a struct goes into a parameter then it will be copied from thread to thread instead of shared, which is a problem if you want to use it to ensure singleton state. My first thought on how to resolve the issue would be this:
#lang racket
(require racket/splicing)
(struct job-manager () #:transparent)
(splicing-let ([thing (job-manager)])
  (define (get-foo) thing))
I believe this would ensure that every thread calling get-foo sees the same (i.e. eq?) job-manager instance. Is there a better way?
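For what it's worth, a quick way to check that assumption would be something like this (just a throwaway test; result-ch is a scratch name, not part of anything above): have a second thread hand back whatever it sees from get-foo and compare it with what the main thread sees.
(define result-ch (make-channel))
(thread (lambda () (channel-put result-ch (get-foo))))
(eq? (get-foo) (channel-get result-ch)) ; expect #t if the instance is truly shared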
Long version:
The majordomo2 package is a task manager written by yours truly. I want to be able to offer a guarantee that, if requested, only one instance of a task can be running at a time; an example of why you might want this would be if you were using it to run a backup on your files -- you don't want to accidentally have two separate threads attempting to back up your files and potentially interfering with one another.
The way the module currently works is that you create a majordomo instance and then tell it to run tasks. For example:
(define jarvis (start-majordomo))
(define result-channel
  (add-task jarvis build-email-greetings "hi there" '(alice bob charlie)))
(define result (sync result-channel))
Each task results in two threads: a worker that does the task, and a manager that keeps an eye on the worker and restarts it if requested and appropriate (a sketch of this shape follows the example below). The threads are 'out in the wild' with no cross communication and no way to address them aside from waiting for a value to come back on the result channel. As a result, if you did this:
(add-task jarvis run-backup root-path)
(add-task jarvis run-backup root-path)
...then you would end up with two instances of your backup running simultaneously.
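For context, the manager/worker shape looks roughly like this (a sketch only, not the actual majordomo2 source; run-managed, done-ch, and #:max-restarts are made-up names): the manager launches the worker, waits for either a result or the worker dying, and restarts it a limited number of times.
(define (run-managed thunk #:max-restarts [max-restarts 3])
  (define result-ch (make-channel))
  (thread ; the manager: keep restarting the worker until it produces a value
   (lambda ()
     (let loop ([tries-left max-restarts])
       (define done-ch (make-channel))
       (define worker ; the worker does the actual task
         (thread (lambda () (channel-put done-ch (thunk)))))
       (sync (handle-evt done-ch
                         (lambda (v) (channel-put result-ch v)))
             (handle-evt (thread-dead-evt worker)
                         (lambda (_)
                           (when (> tries-left 0)
                             (loop (sub1 tries-left)))))))))
  result-ch)
The point is that once the pair is launched there is no handle for coordinating it with any other task.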
My first thought on how to prevent the duplicate runs was to have add-task accept a #:singleton id argument. This would cause the majordomo instance (jarvis, in the above example) to keep an internal hash where it could track the fact that a task with that id is already running and refuse to start another task that uses the same id. There would need to be some concurrency management, such as using a semaphore when adding tasks, so we don't have a race condition.
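Roughly what I'm picturing, as a sketch (maybe-start-singleton, running-singletons, and singleton-guard are invented names, not anything currently in majordomo2): the check-then-register step is guarded by one semaphore, and the id is removed again when the task finishes.
(define singleton-guard    (make-semaphore 1))
(define running-singletons (make-hash)) ; id -> #t while a task with that id runs

;; Returns the new worker thread, or #f if a task with this id is already running.
(define (maybe-start-singleton id thunk)
  (call-with-semaphore
   singleton-guard
   (lambda ()
     (cond
       [(hash-ref running-singletons id #f) #f] ; already running: refuse
       [else
        (hash-set! running-singletons id #t)
        (thread
         (lambda ()
           (dynamic-wind
             void
             thunk
             (lambda () ; always unregister, even if the task errors
               (call-with-semaphore
                singleton-guard
                (lambda ()
                  (hash-remove! running-singletons id)))))))]))))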
In order for this to work, everyone who wants to run a singleton task needs to be running it within the same majordomo instance, so that instance itself needs to be a singleton. I started to put it in a parameter and then realized that wasn't viable. Am I overthinking this? Is the little hack I did at the top the best solution?