Maybe this is a stupid argument, but let's see where it goes; it is a Friday.
I like to think of functions in terms of expressions with holes. Obviously, this side-steps a bunch of terminology about how we fill those holes, and conflates functions with a slew of other machinery, but for argument's sake, let's assume that our hole-expression (function) is incomplete (not yet fully applied) until all of its holes have been plugged.
So, with some tendency to triviality, there is no need to appeal to the formalisms of partial vs. curried functions, because in either case, they cannot be fully realized until all of their holes are plugged. This shifts the argument away from the head of the function to its body.
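To make this concrete, here is a minimal Python sketch of both styles of hole-plugging (the names and the use of functools.partial are my own illustration, not anything from the thread):

```python
from functools import partial

def add3(a, b, c):
    # an expression with three holes
    return a + b + c

# Partial application: plug some holes now, the rest later.
plugged_one = partial(add3, 1)      # one hole plugged
assert plugged_one(2, 3) == 6       # remaining holes plugged in one go

# Currying: plug exactly one hole per application.
def add3_curried(a):
    return lambda b: lambda c: a + b + c

assert add3_curried(1)(2)(3) == 6   # holes plugged one at a time
```

In neither style does a value exist until every hole is filled, which is the point above.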
It also helps us to side-step the (problem?) of keyword arguments. By this, I am pointing vaguely at the fact that a function with keyword arguments (whether partially applied or curried) can be made to plug its holes in different orders, depending on the precise sequence of applications one assumes. I have never looked at the implementation of keywords, per se; perhaps they are simply more layers of machinery at the head of the function, but I digress.
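A toy sketch of the ordering point, again in Python (my own example; keyword arguments make the holes addressable by name rather than by position):

```python
from functools import partial

def greet(greeting, name, punctuation):
    return f"{greeting}, {name}{punctuation}"

# Plug the last hole first, then the middle one...
g = partial(partial(greet, punctuation="!"), name="world")
assert g(greeting="Hello") == "Hello, world!"

# ...or plug them in a different order entirely.
h = partial(partial(greet, greeting="Hello"), punctuation="!")
assert h(name="world") == "Hello, world!"
```

Both orderings arrive at the same completed expression, even though the intermediate states differ.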
This brings to light, I think, an important assumption that we make implicitly: time exists.
In a world without time, currying and partial application are the same (and correspond roughly to a sequence of states in which successively more holes are plugged) up to their completion. But in a world where time exists, like ours, the two are the same only in so far as they can be made to produce the same result (an expression without holes).
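A small Python sketch of what I mean (my own toy, taking side effects as the observable face of time): each application is an event, and the intermediate state can notice when it happened.

```python
import time

def curried_add(a):
    t0 = time.monotonic()           # the first hole is plugged now
    def plug_b(b):
        dt = time.monotonic() - t0  # ...the second, some time later
        print(f"second hole plugged {dt:.2f}s after the first")
        return a + b
    return plug_b

step = curried_add(1)    # time passes between applications...
time.sleep(0.1)
assert step(2) == 3      # ...and the half-plugged state can observe it
```

With a single full application the two events coincide; once they are spread out in time, the intermediate stages become observable in their own right.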
Is the limit (over time) of an expression the same as the value it approaches at that limit? In some sense yes, in some sense no. It depends, to my mind, on whether you consider time to be a factor.
Interestingly, the featured presentation by Abelson and Sussman mentions duals in its first section, in particular eval/apply. Perhaps this speaks to the intuition elaborated in this thread?