Hi @countvajhula ! I really enjoy seeing where Qi is going at the moment. Obviously, whenever you put "typed" and "Qi" close together, you get me very interested! My schedule is very tight at the moment (as usual for all of us, I guess), but I'll drop in whenever I have time!
Last week's notes:
Diamonds Aren't an Interface Macro's Best Friend
The discussion around "compiler extension" and "language composition" might be of particular interest. It develops some discussions with @michaelballantyne , @dominik.pantucek and @SamPhillips into an approach organized around the idea of "the bottom level is hopeless." Not sure if this would be a feasible approach (or if it is already a well-known one), but we have been discussing these ideas to figure out how we could allow the Qi compiler to be extended with custom optimizations for any datatype while still keeping the core compiler lean. The two main approaches we've come up with so far are "compiler macros" and "tower of languages" (there's more context in the notes), and this current idea is related to the latter.
Other highlights:
- Racket Roguelike Library is getting really fancy
- The Qi compiler work (more than a year in the making) is now in the code review stage!
Enjoy! (Thoughts welcome)
Last week's notes:
We're getting close and things look to be on track for Friday's planned release. If you'd like to join for the "release party," such as it is, we'd be glad to have you. Bring your favorite drink! And for some entertainment, we just might be able to convince my cat Ferdinand to do tricks for us. Hope to see you there!
I would have expected this example to yield `(+ 9 v 5)` (or 23):

```racket
(~> (3)
    (switch [odd? (~> sqr
                      (-< (as v)
                          add1)
                      sub1)])
    (+ v 5)) ;=> 14
```

since the result of the switch would be the emitted 9 (`(sub1 (add1 (sqr 3)))`).
Last week's notes:
Highlights:
- features of the new benchmarking suite by @dominik.pantucek
- some interesting discussions where we tried to explain some observed anomalies
- Qi's stance on backwards compatibility, including a cameo appearance by Resyntax @notjack
- A gratuitous analogy involving the Cosmic Microwave Background
An interesting one today:
Side Effects Include Confusion
We discussed (and I've made a first attempt to formalize) what Qi's guarantees on order of effects might be. We're still working it out and it's not as straightforward as we thought.
Catching up on these, here are the notes from Feb 2, the Brussels edition of the weekly Qi meeting:
Highlights:
- "effect locality" and Qi's guarantees on order of effects, and the promises we'll make about the
effect
form - some quantum effects caused by use of the probe debugger which changes the semantics of the program in trying to observe it! (pointed out by @hendrikboom3 )
- is "language composition" an approach worth investing in?
Enjoy,
A short summary of the meeting on March 1. The main highlight: the Qi compiler is now user-extensible! That is, third-party libraries (for now, "internal" developer libraries rather than just any user) can provide custom optimizations for their own datatypes.
Special thanks to @SamPhillips for starting us down this path in earnest, and to @dominik.pantucek for this "modular" implementation that achieves the goal in a simple way!
Last week's notes:
We tried to work out how to identify the "right" source syntax to blame, for the purposes of generating good error messages in the compiler. We didn't find any great answers.
Btw, FYI @hendrikboom3 , we've started to write docs on the precise handling of effects in Qi (including wrt the probe debugger), continuing the discussion from a few weeks ago. Any feedback is welcome!
Friday's notes:
Highlights:
- Many recent improvements to the user docs, developer docs, and per-commit benchmarking infrastructure.
- Sometimes a hacky approach is a good approach (see "de-expander").
- Trying to use the low-level blame object API to provide good error messages for optimized code.
- Qi's compiler is already extensible -- kind of like macros, but for optimizations -- thanks to a simple architecture implemented by @dominik.pantucek. Iterating on this toward more formal language composition schemes could allow us to easily explore alternative backends (like futures instead of ordinary functions, and much more) for the language.
- Could many languages sharing common syntax but with different semantics be considered the same language?
Last week's notes:
Tomorrow's meeting will be 2 hours after the usual time. We will be discussing challenges in producing good error messages, and what's possible / best practices (e.g. use of `syntax/loc` rather than `syntax`) for languages hosted on Racket. We will likely also implement and test a proposed solution. If the topic interests you, please stop by!
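For anyone curious before the meeting, here's a minimal sketch of the `syntax/loc` point (the `define-thunked` macro is just a made-up example): wrapping a macro's template in `syntax/loc` copies the use site's source location onto the generated code, so tools and errors that report locations for that code point at the user's program rather than at the macro definition.

```racket
#lang racket/base
(require (for-syntax racket/base))

;; Hypothetical example: define-thunked wraps its body in a nullary function.
;; Using syntax/loc (rather than a plain syntax template) gives the generated
;; definition the source location of the use site.
(define-syntax (define-thunked stx)
  (syntax-case stx ()
    [(_ name expr)
     (syntax/loc stx
       (define (name) expr))]))

(define-thunked answer 42)
(answer) ; => 42
```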
Last week's notes:
Quite a lot in there and some interesting stuff. Highlights:
- Automated code generation to test the optimizing compiler against a reference compiler (the ability to extend the Qi compiler by a simple `require` makes this possible)
- Racket vs. Python: error reporting
- Why use contracts in reporting errors?
- Why aim for 100% test coverage?
- Why 100% coverage isn't enough
- A way to get around "Schrodinger's probe," maybe
- Qi's possible connections to topology??
Enjoy
Notes from April 12:
Highlights:
- We accidentally released some code for `racket/list` deforestation
- An inside look at Syntax Spec
- Some insights into Git, and how it relates to package and dependency management in Racket
- "Continuous deployment" and other release models to follow
- Experimenting with compact syntax for closures and application
Racing to catch up on these notes, but to avoid a backlog I'm going to post last week's:
This is about how we (i.e. mostly @dominik.pantucek) are designing Qi's implementation of `take` as a fusable stream component in the compiler (for the stream fusion / deforestation optimization used in functional languages like Haskell and Clojure).
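For readers unfamiliar with the technique, here is a generic sketch of the idea (not Qi's actual implementation; all names here are made up): a stream is represented as a stepper function plus a seed, and `take` becomes a stream-to-stream transformer, so it composes with producers and consumers without allocating intermediate lists.

```racket
#lang racket/base
;; A stream is a stepper plus a seed.
;; step : seed -> 'done, or (cons value next-seed)
(struct stream (step seed))

(define (list->stream xs)
  (stream (lambda (s) (if (null? s) 'done (cons (car s) (cdr s)))) xs))

;; take as a fusable component: it wraps the inner stepper, threading a
;; countdown through the seed, without building an intermediate list.
(define (stream-take n st)
  (stream (lambda (s) ; s = (cons remaining inner-seed)
            (if (zero? (car s))
                'done
                (let ([r ((stream-step st) (cdr s))])
                  (if (eq? r 'done)
                      'done
                      (cons (car r) (cons (sub1 (car s)) (cdr r)))))))
          (cons n (stream-seed st))))

;; Only the final consumer materializes anything.
(define (stream->list st)
  (let loop ([s (stream-seed st)])
    (define r ((stream-step st) s))
    (if (eq? r 'done) '() (cons (car r) (loop (cdr r))))))

(stream->list (stream-take 3 (list->stream '(1 2 3 4 5)))) ; => '(1 2 3)
```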
Heads up: This week, @benknoble designed a very cool syntax for embedding flows, called "curlique syntax." I am stoked about the possibilities, and with any luck, he will give a demo today in the meeting starting in about 40 minutes (he is just finding out about this himself, so I'm not sure he'll be available, but he just might!) on Gather. We will also continue discussing deforestation.
Last week's notes:
Highlights:
- Ben presented Curlique syntax
- Sid presented a `#lang` experiment
- Dominik presented a compiler architecture to allow swapping alternative implementations of optimizations
- Lots of great and honest feedback (especially from Michael, as always)!
- A thought on `define` vs `define-syntax` and streamlining access to Racket's macro tech
- The perils of a perfect language
- Deforestation beyond Qi?
Re: the "#lang
experiment" it really is just a toy. We do not have serious plans for it. But if it leads to interesting ideas that can be adopted more widely, that would be a good outcome. On that point, I'm especially curious about thoughts in the community re: "One kind of binding" and whether that is viable as an interface to Racket's macro tech.
> Ben presented Curlique syntax
The notes say:

> Another issue is that something like this wouldn't work in the `#%app` version:
> `(map {thunk 1} (list 1 2 3))`
It's true that this raises an error, but it's not a syntax error: `{thunk 1}` is equivalent to `(thunk 1)` because `thunk` is not an application, it's a macro! So this applies `(lambda () 1)` to each element, which fails with

```
map: argument mismatch;
the given procedure's expected number of arguments does not match the given number of lists
given procedure: #<procedure>
expected: 0
given: 1
```
Of course, this is an error with or without curlique.
In particular, `(map {thunk* 1} '(1 2 3))` and `(map {1} '(1 2 3))` both yield `'(1 1 1)`.
This makes it harder for me to understand the following paragraph:

> Since `thunk` seemingly occupies an `#%app` position, one might expect this to result in a list of thunks. But in fact, this results in an error because `thunk` is a macro rather than a function, so it is not really an `#%app` position after all. This could be surprising to users as it's not always obvious what is and isn't a macro.
The latter point about what is and isn't a macro is well-taken (e.g., `test` forms are macros even though they behave like procedures, IIRC). Fortunately, most syntax is documented as such.
To create a list of thunks, you might want `(map {(clos 1)} '(1 2 3))`.
Also, RE: scoping:

> both Racket as well as Qi scopes on it

I believe we decided this was partly due to binding spaces, no? I only bind `~>` in the (default) Racket space, so the binding in Qi space requested by syntax-spec is not perturbed.
One kind of binding
This is going to get a bit long, so I apologize in advance. I also haven't taken the time to reframe it the way I'd like, which is the following: I observe some difficult questions for users of your system that the document leaves unanswered, particularly around the meaning of code and the removal of a useful feature. I'm curious how you would answer those.
I find that it's useful to think of `(define-syntax f <transformer-value>)` as a combination like this:

```racket
(begin-for-syntax
  (define g <transformer-value>)
  (compiler-hook-insert f g))
```

where `compiler-hook-insert` is the state change that makes (scoped) uses of `f` invoke the phase-1 function definition `g` (which, as we all know, is expected to be `(-> syntax? syntax?)`). Most of the time, we don't separately name `g` and thus can't recover it (but as I'll discuss below, you can do this separation yourself and recover `g` as a useful value).
I spell this out for a few reasons:
- Your `mac` example isn't `(-> syntax? syntax?)`, which is a bit misleading! You later write "Pattern-matching code could be used both for regular code as well as macros, since 'macros' would just be regular functions, and would not require a custom pattern matching implementation," but I'm not seeing compelling reasons for this to be true. With apologies to Fear of Macros, you can do this today with `(datum->syntax stx (match (syntax->datum stx) …))` (or `syntax->list`, if you prefer); see the sketch just after this list. For concerns unique to syntax (scope, hygiene, etc.), I think we will always need a special notation.
- Your `(macro mac)` is the equivalent of my hypothetical `compiler-hook-insert`. We're thinking along the same lines there.
- The `for-lang` has a few problems, in my mind. I think they all boil down to "Racket allows us to mix `define-syntax` with `define`, and you've suggested we break that."
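Here's the sketch referenced in the first bullet above, showing the `datum->syntax`/`match` route that works today (the `swap2` macro is a made-up example); I'll come back to the `for-lang` point next. Note that it hands the use site's lexical context to everything it produces, which is exactly the kind of scope/hygiene concern that keeps calling for syntax-aware notation.

```racket
#lang racket/base
(require (for-syntax racket/base racket/match))

;; Hypothetical example: a macro whose transformer pattern-matches on plain
;; data via syntax->datum, then rebuilds syntax with datum->syntax.
(define-syntax (swap2 stx)
  (datum->syntax
   stx
   (match (syntax->datum stx)
     [(list _ a b) (list 'list b a)])))

(swap2 1 2) ; => '(2 1)
```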
Let me elaborate on that last point. First, `for-lang` applies to the whole module in your example. That means that if `mac` expands to a use of a runtime function `F` defined in the same module, what happens to `F`? In your model, does it matter if `F` is `provide`d or not? The answers to these questions are unclear and pose a troubling question: I as the macro author may not be in control of the interpretation of the module I implement it in! (If you're Sage and working on Rackith, this probably sounds great! But for a library author, it might sound like a headache: am I writing a macro or a function? Both! How do I make it have useful semantics in both cases?) One way around this is to require separating all runtime definitions from things intended to be macro definitions; AFAIK, Rust still does this (macros have to be in their own crates), but either way Rust still separates macros from runtime values in some way. I believe the separation of crates is the cause of some headache, too. This also means that `let-syntax` or local `define-syntax` are practically gone, which removes a useful piece of composability in Racket AFAICT.
Second, you wrote:

> It would be possible for a single module to contain both functions as well as functions intended to be used as macros, but as a best practice, that should never be done, since the organization is well-modeled in independent modules.
As a practice, this is frequent today! Macros commonly expand to (private) runtime functions implemented in the same module, to say nothing of `define-syntax` in internal definition contexts. As I wrote above, I also think this leaves a lot of questions, namely: how do I really write macros in such a system? If a macro can be (at the discretion of its user) a syntax transformer or a runtime function, what are its inputs and outputs supposed to be?
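To make that concrete, here's a small sketch of the common practice (the `log-call` and `traced-apply` names are made up): the macro expands to a private runtime helper defined in the same module, which is exactly the mixing of `define` and `define-syntax` that a whole-module `for-lang` would seem to rule out.

```racket
#lang racket/base
(require (for-syntax racket/base))

;; Private runtime helper, deliberately not provided.
(define (traced-apply f . args)
  (printf "calling ~a\n" f)
  (apply f args))

;; The macro expands to a use of the helper above, in the same module.
(define-syntax (log-call stx)
  (syntax-case stx ()
    [(_ f arg ...)
     #'(traced-apply f arg ...)]))

(log-call + 1 2) ; prints "calling #<procedure:+>", then => 3
```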
I actually think `syntax-parse` is a neat demonstration of this: it can be required at runtime to produce a runtime function `(-> syntax? any)`, or at phase 1 as a macro implementation helper. But we haven't changed its protocol or effective contract because of where we've required it.
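A quick illustration of that dual use (the `lambda-arity` helper is made up): here `syntax/parse` is required at phase 0 and simply produces a `(-> syntax? any)` function; no macros are involved.

```racket
#lang racket/base
(require syntax/parse) ; required at runtime, not for-syntax

;; A plain (-> syntax? any) function built with syntax-parse.
(define (lambda-arity stx)
  (syntax-parse stx
    [(_ (arg:id ...) body ...)
     (length (syntax->list #'(arg ...)))]))

(lambda-arity #'(lambda (x y) (+ x y))) ; => 2
```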
Third, you wrote:

> In addition to ensuring that there is now only one type of definition (making macros trivial to test and requiring no special handling, as they are actually just functions and not "functions with something more"), this also has the benefit of encouraging defining macros, which are extensions to the language, in dedicated modules, rather than intersperse them amongst the code they transform.
There are two concerns here: testing macros, and interspersing them.
Macros can be tested in two ways, I think:
- Using `convert-syntax-error` to assert failed uses, and then regular testing strategies to show that certain invocations of the macro produce expected results.
- By writing `(define transformer …) (define-syntax macro transformer)`. Then you can test `transformer` as a runtime value with protocol `(-> syntax? syntax?)` and check interesting properties of it (a sketch of this follows the list).
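Here's a sketch of that second approach (the `swap`/`swap-transformer` names are made up), using submodules so it's self-contained in one file: the transformer is an ordinary `(-> syntax? syntax?)` function that can be tested directly, and is also installed as a macro.

```racket
#lang racket/base
(require rackunit)

;; The transformer as a plain (-> syntax? syntax?) function.
(module transformer racket/base
  (provide swap-transformer)
  (define (swap-transformer stx)
    (syntax-case stx ()
      [(_ a b) #'(list b a)])))

;; The same function installed as a macro.
(module macro racket/base
  (provide swap)
  (require (for-syntax (submod ".." transformer)))
  (define-syntax swap swap-transformer))

(require (submod "." transformer)  ; phase 0: the transformer as a value
         (submod "." macro))       ; the macro itself

;; Test the transformer directly as a runtime value...
(check-equal? (syntax->datum (swap-transformer #'(swap 1 2)))
              '(list 2 1))

;; ...and also test an actual invocation of the macro.
(check-equal? (swap 1 2) (list 2 1))
```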
So it seems possible to recover the "functions" with a little care; OTOH, in your system, I'm not sure how to recover something macro authors can understand. (You seem to propose that macro authors become regular function authors, but I'm missing the protocol that allows macros in your system to do things like delay evaluation for conditionals and so forth.)
Interspersing macros is, I think, a valuable tool. It may not be advisable everywhere, but I don't see it as something to forbid the way your system does. I'm curious what argument you make in favor of that.
In the end, I think that I could do something like what Racket does today with

```racket
(module my-macros racket
  (require runtime-support)
  (define m1 …) …)
(require (for-lang 'my-macros))
(provide m1 …)
```

but this is quite a burden of ceremony compared to `(define-syntax m1 …)`. In other words, I see quite a bit of cost without much gain.
> ...
>
> So it seems possible to recover the "functions" with a little care; OTOH, in your system, I'm not sure how to recover something macro authors can understand. (You seem to propose that macro authors become regular function authors, but I'm missing the protocol that allows macros in your system to do things like delay evaluation for conditionals and so forth.)
>
> Interspersing macros is, I think, a valuable tool. It may not be advisable everywhere, but I don't see it as something to forbid the way your system does. I'm curious what argument you make in favor of that.
>
> In the end, I think that I could do something like what Racket does today with `(module my-macros racket (require runtime-support) (define m1 …) …) (require (for-lang 'my-macros)) (provide m1 …)` but this is quite a burden of ceremony compared to `(define-syntax m1 …)`. In other words, I see quite a bit of cost without much gain.
Would it be possible to define `define-syntax` to generate the above module code, thereby restoring the well-known Racket behaviour?
-- hendrik
I'm just seeing your very thorough response to my musings on the possibility of eliminating the `define`/`define-syntax` dichotomy, @benknoble ! Thank you for sharing those thoughts. I'll need to read your response in detail and think about what you've said. Re: "If you're Sage and working on Rackith, this probably sounds great!" -- I got a chuckle out of this. I hope @slg does, too.
But, I signed on now just to post last week's notes:
Highlights:
- Dominik is refactoring the compiler pass architecture to support multiple alternative implementations of the same compiler pass (in this case, deforestation)
- Qi's release practices of "continuous deployment" and what that means in practice for developers and users
Enjoy!
I'm obviously very behind in writing these but here are last week's notes!
A very interesting one where we thoroughly developed and reviewed a proposed "language composition" approach to extending language semantics. We concluded that we can instead modify existing approaches and implement a few macros to allow language extension in more constrained ways, to achieve all of the specific benefits we've been looking for (e.g. extending deforestation to user-defined types, clean separation of Qi invariants and host language invariants, etc.).
Other highlights:
- What is Qi trying to be?
- "Dockerized" compiler architecture as a way to allow compiler passes to be easily composable / commutative
- Using category theory to define what a flow is allowed to be
- Compiling Qi to different, yet isomorphic, backends
- Multiple perspectives and finding the best answers
@dominik.pantucek @michaelballantyne @benknoble @NoahStoryM @scolobb @SamPhillips @rocketnia --- (as always) if I got anything wrong, feel free to edit it, or let me know what to fix and I will, thanks!
Last week's notes:
Another Step on a Winding Staircase
Highlights:
- "State consing" approach to deforestation
- Achieving clean separation of semantics between host language and DSL
- Imagining alternative backends and settings for Qi such as futures and HPC
- Some fun references to topology and nonlinear dynamics in the context of code refactoring
@dominik.pantucek @michaelballantyne I attempted to capture the refactoring that we started on and wrote down the remaining steps that I could discern. Lmk if that doesn't look right or if you have better ideas!