Info.rkt: whitelisting test files for raco test

I'd like to have raco test . only test specific file(s) in my package. Listing the files in test-include-paths in info.rkt doesn't seem to have the desired effect: raco test . runs all .rkt files in the current directory (even info.rkt), not just the ones listed in test-include-paths. Adding (define test-omit-paths 'all) simply skips all files, including the ones listed in test-include-paths (i.e. test-omit-paths seems to take priority over test-include-paths).
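For reference, the sort of info.rkt I mean looks roughly like this (the file name my-tests.rkt is just a placeholder):

```racket
;; info.rkt — illustrative sketch; "my-tests.rkt" is a made-up name
#lang info
(define test-include-paths (list "my-tests.rkt"))
;; raco test . nevertheless runs every .rkt file here, info.rkt included
```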

Is there no (simple) way to do test file whitelisting?

This issue seems very relevant to your question.

My understanding based on the issue is that:

  1. test-include-paths is really about including non-Racket files. So it has no effect on Racket files (which are already included).
  2. There is no good way to do what you want. You can use test-omit-paths with an ugly zero-width-assertion regexp, as demonstrated in the issue, to omit everything except the files you want, but that is obviously not ideal.
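If I read the issue right, the zero-width-assertion trick looks roughly like this (the tests/ prefix is just an example):

```racket
;; info.rkt — rough sketch of the workaround: omit every path that does
;; NOT start with "tests/", via a negative-lookahead regexp
#lang info
(define test-omit-paths (list #px"^(?!tests/)"))
```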

Thanks, subscribed to the issue.

You might want to try the following (no changes are required to the info.rkt file):

  • run the tests using raco test --no-run-if-absent . This makes raco test run only files which have a test submodule.
  • additionally, for files that you don't want to be tested, add the following line somewhere in the file: (module test racket/base). This is usually not needed, but might make the test run a bit faster.
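Put together, a minimal sketch of a file that should not be tested (the surrounding code is made up):

```racket
;; some-helper.rkt — a file we do NOT want raco test to run
#lang racket/base
(module test racket/base) ; no-op test submodule: raco test runs it and does nothing
(define (helper x) (* 2 x))
(provide helper)
```

Then, from the package directory, run: raco test --no-run-if-absent .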

Hope this helps,


Maybe the first paragraph of Test Configuration by "info.rkt" should be highlighted, because I seem to recall overlooking it when learning about options like test-omit-paths:

Submodule-based test configuration is preferred (see Test Configuration by Submodule). In particular, to prevent raco test from running a particular file, normally the file should contain a submodule that takes no action.

So I think the preferred approach is (as @alexh said) to always include a test submodule. If you want to test nothing in some file, have the submodule in that file do nothing.

This works even when you don't control how raco test is run, like when you publish a package and the tests are run by the Racket build server.

When you can directly run raco test, there are many interesting options.

  • One is what @alexh said, --no-run-if-absent.
  • Also, you can have it run submodules of a certain name, e.g. test-slow with thorough tests that take a very long time to run, as a supplement to your plain test submodules. Or benchmarks, or whatever.
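A sketch of the named-submodule idea (test-slow and the checks here are made up):

```racket
#lang racket/base

(module+ test              ; runs by default under raco test
  (require rackunit)
  (check-equal? (+ 1 1) 2))

(module+ test-slow         ; only runs when asked for by name
  (require rackunit)
  ;; imagine a long-running integration test here
  (check-true #t))
```

If I remember the flag right, raco test -s test-slow some-file.rkt (or --submodule test-slow) runs the slow submodule; plain raco test runs just test.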

After that first paragraph, the section goes on to talk about the options like test-omit-paths, as the "non-preferred but maybe necessary" approach:

In some cases, however, adding a submodule is inconvenient or impossible (e.g., because the file will not always compile). Thus, raco test also consults any "info.rkt" file in the candidate test file’s directory. In the case of a file within a collection, "info.rkt" files from any enclosing collection directories are also consulted for test-omit-paths and test-include-paths. ...

That doesn't mean that a more .gitignore-style way of specifying includes/excludes wouldn't be better. But it might explain why no work has been put into that, since it's not the recommended approach?


From my understanding, one issue with --no-run-if-absent is that environments like DrDr / won’t use the flag, so if you want your tests to pass in those environments, it’s not really an option.

(edited: I have no idea why the first line was omitted.... It was there in my gmail...)

Thanks, I overlooked that too; going to keep it in mind as an option. In general, though, to be honest I'm hesitant to add empty test submodules. It feels like I'm doing something for the machines rather than for the humans reading the code, i.e., adding workaround noise that says something like "Oh, and by the way, there are no tests in this file, obviously!"

Also, at least in my environment, raco test reports running info.rkt as well, and adding a test submodule there seems especially weird. So I added info.rkt to test-omit-paths.
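Concretely, I believe that looks something like this (minimal info.rkt sketch):

```racket
;; info.rkt — add itself to the omit list so raco test skips it
#lang info
(define test-omit-paths (list "info.rkt"))
```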

(Actually, that behavior was what initially prompted my desire to limit what's being tested.)