Metamath and friends use a metalogic based on distinct variable constraints rather than an explicit notion of free and bound variables. This metalogic was first proposed by Alfred Tarski in his 1966 paper "A simplified formalization of predicate logic with identity", then refined by Norm Megill, particularly to provide a cleaner separation of the predicate calculus and set-theoretic axioms.
(As is common in such cases, Norm independently came up with the basic distinctor ideas, then discovered Tarski's work in a literature search.) Thus, I propose the name "Tarski-Megill metalogic" to describe the distinctor-based metalogic in Metamath.
The Metamath home page contains much discussion about distinct variables, variables vs metavariables, axioms vs axiom schemes, and in general the differences between traditional (binder-based) predicate calculus and Metamath.
Most other formal systems use some kind of explicit binder concept. HOL is particularly clean and orthogonal in this respect: the only primitive binder is "lambda", and other concepts such as quantifiers, substitution, and even integration are composed of this primitive in combination with higher-order functions. Mizar has bound variables in quantifiers, but (currently) no mechanism for defining new binders. Freek Wiedijk has proposed such a mechanism.
Binders are much more familiar, and it seems that distinctors scare many newcomers away. However, the use of binders comes at a price. First, it makes the metalogic more complex. HOL Light's alpha-conversion check contained a bug that went undetected for many years until Bob Solovay found it.
More profoundly, HOL's metalogic imposes serious constraints on the logic it hosts, to the point where it cannot be considered "logically agnostic". In particular, the logic must support higher-order functions. This rules out Peano arithmetic altogether, and causes serious technical problems for encoding set theory. (John Harrison has some good commentary in his "Formalized Mathematics" on the choice of foundational system).
Within the design space of logically agnostic metalogics, I find that the distinctor approach is extremely powerful, especially for constructing new definitions. This property is especially important in Ghilbert, which extends Metamath with a safe definition mechanism. It is of course possible to introduce arbitrary new definitions in Metamath, but a careless definition could introduce inconsistency, so most definitions in the set.mm database are quite conservatively formed.
Here I will give a number of examples of nontrivial distinctor-based definitions. It may be challenging, perhaps even impossible, to design a binder-based, logically agnostic metalogic that retains sufficient power to express these definitions.
These are definitions in the set.mm universe (hol-zfc.gh imports the set_mm.ghi interface) geared towards constructing an HOL logic.
These definitions should be entirely practical within the Peano arithmetic universe, i.e. importing only the basic predicate calculus and Peano axioms. I am currently working on constructing them.
The fact that primrec can be expressed so cleanly in PA is (to me, at least) surprising and impressive. Z2 has a reputation for being theoretically very tractable, but in practice difficult to work in directly, because of the need for coding advanced concepts in a manner compatible with the Z2 axioms. I think a serious test of the power of the Ghilbert definition mechanism will be to encode weakest precondition semantics in Z2. In HOL, the most natural type of "wp" is (program → (env → bool)) → (env → bool), where "env" is a finite map from variable names to values. This is obviously a fairly high order concept, so the ability to express it naturally in Z2 would be a fairly striking demonstration.
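To make the wp concept concrete, here is a minimal sketch in Python of weakest-precondition semantics for a toy imperative language of my own invention (not raph's encoding, and not Z2). An env is a dict from variable names to values, and wp maps a program and a postcondition (env → bool) to a precondition (env → bool):

```python
# Toy weakest-precondition calculator. Programs are tuples:
#   ("skip",), ("assign", x, expr), ("seq", p1, p2), ("if", cond, p1, p2)
# where expr and cond are Python functions from env to a value/bool.
# This is an illustrative sketch only; names and representation are ad hoc.

def wp(prog, post):
    """Return the weakest precondition of prog with respect to post."""
    tag = prog[0]
    if tag == "skip":
        return post
    if tag == "assign":                      # wp(x := e, Q) = Q[e/x]
        _, x, expr = prog
        return lambda env: post({**env, x: expr(env)})
    if tag == "seq":                         # wp(p1; p2, Q) = wp(p1, wp(p2, Q))
        return wp(prog[1], wp(prog[2], post))
    if tag == "if":                          # branch on the condition
        _, cond, p1, p2 = prog
        return lambda env: wp(p1, post)(env) if cond(env) else wp(p2, post)(env)
    raise ValueError("unknown program form: " + tag)
```

For example, for the program x := x + 1; y := x * 2 and postcondition y = 6, the computed precondition holds exactly when the initial x is 2.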
– raph 17 Oct 2006
…for what it's worth…
Professor McGee of MIT writes (http://ocw.mit.edu/OcwWeb/Linguistics-and-Philosophy/24-241Logic-IFall2002/CourseHome/index.htm) that a valid argument is one for which, if one accepts the premisses, then one ought to accept the conclusion.
Implicit in that "ought" is an accepted mode of reasoning – but what constitutes a correct form of reasoning?
In Hilbert style proofs the last formula in a list of formulas is the conclusion. The conclusion is derived by unifying the conclusion's formula and hypotheses with a previously proven theorem or an axiom.
"Unification" means finding a consistent set of simultaneous substitutions for variables that makes the two formulas (and their hypotheses) match. But are the substitutions "valid"?
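As a concrete illustration, here is a sketch in Python of syntactic unification over first-order terms. The representation (strings for variables, tuples for compound terms) is my own ad hoc choice, and this is full two-sided unification, which is actually stronger than the one-sided matching a Metamath verifier performs when it instantiates a previously proven scheme:

```python
# Terms: a str is a variable; a tuple ("functor", arg1, arg2, ...) is compound.
# A substitution is a dict mapping variable names to terms.

def walk(t, subst):
    """Follow the substitution chain while t is a bound variable."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None on failure."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):
        if occurs(a, b, subst):
            return None                  # would build an infinite term
        subst[a] = b
        return subst
    if isinstance(b, str):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

Note that nothing here knows about binding or distinctness; validity of the substitution is a separate question, which is exactly the point of the following paragraphs.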
In Metamath $d validity checking is separate from the process of unification, and $d restrictions are inherited from the theorems and assertions used to prove a theorem. They "percolate" up from the bottom.
Curiously, in Metamath $d restrictions are never applied at the level of Syntax Axioms. They belong at the level of "Logical" Axioms.
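The $d check itself is mechanically simple. Here is a simplified sketch (my own condensation of the rule described in the Metamath book, not verifier source code): for every $d pair in the referenced theorem, the variables of the two substituted terms must be pairwise distinct, and each resulting pair must itself be declared $d in the theorem under proof – this is the "percolation":

```python
from itertools import product

def check_disjoint(d_pairs, subst_vars, goal_d):
    """d_pairs: set of frozenset({u, v}) $d pairs from the referenced theorem.
    subst_vars: dict mapping each metavariable to the set of variables
    occurring in the term substituted for it.
    goal_d: the $d pairs declared in the theorem being proved."""
    for pair in d_pairs:
        u, v = tuple(pair)
        for a, b in product(subst_vars[u], subst_vars[v]):
            if a == b:
                return False        # same variable lands on both sides
            if frozenset({a, b}) not in goal_d:
                return False        # restriction was not inherited upward
    return True
```

So a $d violation is detected either as a shared variable or as a missing inherited restriction, entirely independently of the unification step.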
Conversely – correct me if I am wrong – in the "textbook" free/bound variable substitution schemes, "binding" is a purely syntactic property. For example, "A. x ph" binds x within ph. Then, "proper substitution" is defined as a validity check of allowable transformations of formulas.
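The textbook notion can be sketched the same way. Below is a toy Python formulation (my own, not from any particular textbook) of the free-variable computation and the "substitutable for" check – the validity test that proper substitution performs:

```python
# Formulas: ("pred", name, v1, v2, ...) atomic, ("imp", f, g) implication,
# ("all", x, body) universal quantification binding x within body.

def free_vars(f):
    """The free variables of a formula: binding is purely syntactic."""
    tag = f[0]
    if tag == "pred":
        return set(f[2:])
    if tag == "imp":
        return free_vars(f[1]) | free_vars(f[2])
    if tag == "all":
        return free_vars(f[2]) - {f[1]}
    raise ValueError("unknown formula form: " + tag)

def substitutable(f, v, term_vars):
    """May a term with free variables term_vars be properly substituted
    for v in f? Fails if a free occurrence of v sits under a quantifier
    that binds one of the term's variables (variable capture)."""
    tag = f[0]
    if tag == "pred":
        return True
    if tag == "imp":
        return substitutable(f[1], v, term_vars) and \
               substitutable(f[2], v, term_vars)
    if tag == "all":
        x, body = f[1], f[2]
        if x == v:
            return True                   # v is bound here; nothing to do below
        if v in free_vars(body) and x in term_vars:
            return False                  # substitution would capture x
        return substitutable(body, v, term_vars)
```

For example, substituting a term containing y for x in "A. y P(x, y)" is improper (y would be captured), while substituting a term containing only z is fine.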
Thus, both $d's and bound/free/proper substitutions are mechanical means of disallowing invalid formula transformations that could result in invalid – illogical – conclusions.
Therefore, these mechanisms are within the province of the proof verification portion of a system (plus handling the necessary declarations to specify the restrictions).
There seems to be no technical reason why a globally inclusive system such as HDM should restrict itself to just one or the other of these systems for validating variable substitutions, regardless of which is deemed "superior". It is unreasonable to expect Planet Earth's mathematicians and logicians to abandon free/bound in favor of $d's en masse. And in any event, the "legacy documents" of ye olden days cannot simply be abandoned by HDM because they do not conform to $d.
Widespread – global – acceptance of Ghilbert and HDM requires not just inclusiveness, but making the systems work the way mathematicians and logicians actually do their work. If they are accustomed to TeX/PDF document formatting, then whatever system is developed must, to win hearts and minds, provide support for the conventional (graphical) typesetting seen in the mathematical vernacular. To demand that everyone abandon ship and switch to ASCII shorthand is to abandon 95% of the user base.