DERIVING OUTER PRODUCT OF VECTORS (AND MULTIVECTORS)

Note: This derivation uses the "elimination" algorithm, as in the historic derivation of determinants (leading to matrices).

Two vectors are DEPENDENT if, and only if, ONE IS A MULTIPLE OF THE OTHER: either they are parallel, or part of the same RAY. In such a case, their INNER PRODUCT IS NONZERO. But for two vectors INDEPENDENT in the strongest sense -- ORTHOGONAL -- THE INNER PRODUCT IS ZERO, and it is this condition we impose in the following:

  1. Consider two independent vectors, a = [a1, a2, a3], b = [b1, b2, b3].
  2. We declare a, b to be INDEPENDENT OF A THIRD MULTIVECTOR STRUCTURE, tentatively denoted C.
  3. Let C enter into inner products with vectors a, b, via coefficients (not necessarily scalar) [C1, C2, C3], leading to TWO HOMOGENEOUS EQUATIONS:
    a · C = a1C1 + a2C2 + a3C3 = 0. [1]
    b · C = b1C1 + b2C2 + b3C3 = 0. [2]

Please note that EQUATIONS [1], [2] CONSTITUTE A SYSTEM OF TWO EQUATIONS IN THREE UNKNOWNS (the Ci). Such a SYSTEM cannot have A UNIQUE SOLUTION -- rather, AN INFINITE NUMBER OF SOLUTIONS. However, we shall -- below -- make A CLEVER CHOICE.

Here, we invoke the familiar (high-school!) ELIMINATION ALGORITHM, applying it to the UNKNOWN COEFFICIENTS, Ci, to find:
(a1b2 - a2b1)C1 + (a3b2 - a2b3)C3 = 0. [3]
(a1b2 - a2b1)C2 + (a1b3 - a3b1)C3 = 0. [4]
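For readers who wish to check the elimination by machine, here is a minimal Python sketch (using the sympy library; the variable names are merely illustrative) confirming that [3] and [4] follow from [1] and [2]:

  # Symbolic check: eliminating C2 (resp. C1) from the homogeneous
  # system [1], [2] reproduces equations [3] and [4].
  from sympy import symbols, expand

  a1, a2, a3, b1, b2, b3, C1, C2, C3 = symbols('a1 a2 a3 b1 b2 b3 C1 C2 C3')

  eq1 = a1*C1 + a2*C2 + a3*C3            # a . C = 0   [1]
  eq2 = b1*C1 + b2*C2 + b3*C3            # b . C = 0   [2]

  # Eliminate C2: b2*[1] - a2*[2] yields [3].
  eq3 = (a1*b2 - a2*b1)*C1 + (a3*b2 - a2*b3)*C3
  assert expand(b2*eq1 - a2*eq2 - eq3) == 0

  # Eliminate C1: a1*[2] - b1*[1] yields [4].
  eq4 = (a1*b2 - a2*b1)*C2 + (a1*b3 - a3*b1)*C3
  assert expand(a1*eq2 - b1*eq1 - eq4) == 0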

Please note THE COMMON MULTIPLIER in EQUATIONS [3], [4] -- namely, (a1b2 - a2b1).

This is an ANTISYMMETRIC FORM; hence, it requires A SYMPLECTIC TRANSFORMATION to be INVARIANT. (This is called "coversion" in the MULTIVECTOR literature: REVERSION of indices, 12 → 21, FOLLOWED BY MULTIPLICATION BY NEGATIVE ONE, or INVERSION -- and vice versa.)

Math students know that, given an INDEPENDENT SYSTEM OF TWO EQUATIONS IN TWO UNKNOWNS, the familiar (high school!) ELIMINATION ALGORITHM can be applied to obtain a SOLUTION of such a SYSTEM for the TWO UNKNOWNS. And this procedure ARTICULATES THE STRUCTURES OF DETERMINANT AND MATRIX IMPLICIT IN SUCH A SYSTEM. (Ancient Babylonians and ancient Egyptians SOLVED SUCH SYSTEMS, apparently without kenning the INHERENT DETERMINANT STRUCTURE. The first realization of this has been credited to a Japanese mathematician, Seki Kowa, 1683; rediscovered by Gottfried Leibniz (1646-1716) in 1693.)

Similarly, by DECLARING THIS COMMON MULTIPLIER OF [3], [4] TO BE NONZERO (that is, the SYSTEM DETERMINANT, a1b2 ≠ a2b1), we can SOLVE [3] for C1 and SOLVE [4] for C2 -- both in terms of C3:
C1 = [(a2b3 - a3b2)/(a1b2 - a2b1)]C3. [5A]
C2 = [(a3b1 - a1b3)/(a1b2 - a2b1)]C3. [5B]

We can now make "the clever choice" mentioned above. By DECLARING C3 ≡ (a1b2 - a2b1), we CLEAR THE DENOMINATORS in both [5A] and [5B]. Then we have:
C1 = (a2b3 - a3b2); [6A]
C2 = (a3b1 - a1b3); [6B]
C3 = (a1b2 - a2b1). [6C]
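A similar sketch (again sympy, purely illustrative) confirms that this choice annihilates both inner products, as the homogeneous system [1], [2] requires:

  from sympy import symbols, expand

  a1, a2, a3, b1, b2, b3 = symbols('a1 a2 a3 b1 b2 b3')

  C1 = a2*b3 - a3*b2    # [6A]
  C2 = a3*b1 - a1*b3    # [6B]
  C3 = a1*b2 - a2*b1    # [6C]

  assert expand(a1*C1 + a2*C2 + a3*C3) == 0   # a . C = 0
  assert expand(b1*C1 + b2*C2 + b3*C3) == 0   # b . C = 0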

A FORM such as (a2b3 - a3b2) resembles the standard commutator for matrices or operators: [A, B] = AB - BA. Here, the "carriers" commute (interchange), but no subscript/index is involved. The same bracket form can be written as [k1, k2] = k1k2 - k2k1, in which there is no "carrier" commutation, only commutation of the index/subscript.

However, even the second (k-)FORM above (in which each term has a single subscript) does not MATCH (a2b3 - a3b2), with two subscripts per term, exhibiting commutation (interchange) both of "carriers" and of index/subscript. So, I EXTENDED THE STANDARD COMMUTATOR FORM to accommodate this -- and my FORM will later induce succinct expressions in our work.

[a, b]ij ≡ (ab - ba)ij = aibj - biaj ≡ -[b, a]ij. [7]

Note that the STANDARD BRACKET, [A, B] = AB - BA, is a "vector" (MATRIX, OPERATOR, etc.) BRACKET, whereas my BRACKET is a SCALAR BRACKET.
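To make [7] concrete, here is a minimal Python sketch (the helper name bracket and the sample numbers are merely illustrative; Python indices are zero-based, so subscript 1 becomes index 0):

  # The scalar bracket of equation [7]: [a, b]ij = ai*bj - bi*aj.
  def bracket(a, b, i, j):
      return a[i]*b[j] - b[i]*a[j]

  a = [2.0, 3.0, 5.0]
  b = [7.0, 11.0, 13.0]

  # Antisymmetry: [a, b]ij = -[b, a]ij; a repeated index gives zero.
  assert bracket(a, b, 0, 1) == -bracket(b, a, 0, 1)
  assert bracket(a, b, 2, 2) == 0.0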

To show another usage of my BRACKET, I choose an example in advanced calculus from a well-known text by R. Creighton Buck:

ASSIGNMENT (for MATH STUDENTS): Show that my BRACKET is IMPLICIT IN THE MULTIPLICATION RULE FOR MATRICES, in the CONTEXT OF THE COMMUTATOR. Show that it then FOLLOWS FROM A WELL-KNOWN MATRIX THEOREM, namely, aijbjk = cik, that my BRACKET INHERITS THIS TRANSITIVITY. Anything else?

ASSIGNMENT (for PHYSICS MAJORS): Show that my BRACKET renders neatly THE DERIVATION OF THE COMMUTATOR OF ONE ANGULAR MOMENTUM FORM "PRODUCT" FROM THE OTHER TWO. Anything else?

Using my BRACKET, we can rewrite [6], with cyclic intent:
C1 = [a, b]23. [6'A]
C2 = [a, b]31. [6'B]
C3 = [a, b]12. [6'C]
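These cyclic components transcribe directly into code (same illustrative bracket helper as above; recall Python's zero-based indexing):

  def bracket(a, b, i, j):
      return a[i]*b[j] - b[i]*a[j]

  def outer_components(a, b):
      # [6'A]-[6'C]: one-based subscripts 23, 31, 12
      # written as zero-based offsets (1,2), (2,0), (0,1).
      return [bracket(a, b, 1, 2),   # C1 = [a, b]23
              bracket(a, b, 2, 0),   # C2 = [a, b]31
              bracket(a, b, 0, 1)]   # C3 = [a, b]12

  a = [1.0, 2.0, 3.0]
  b = [4.0, 5.0, 6.0]
  C = outer_components(a, b)

  # Both inner products vanish, as the derivation demands.
  assert abs(sum(x*y for x, y in zip(a, C))) < 1e-12
  assert abs(sum(x*y for x, y in zip(b, C))) < 1e-12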


But what kind of STRUCTURE is this (purposefully) UNSPECIFIED C? I merely REQUIRED INDEPENDENCE FROM VECTORS a, b -- which EXCLUDES A SCALAR BUT ALLOWS A CHOICE OF

  1. A VECTOR (Gibbs-Heaviside);
  2. A BIVECTOR (2-D VECTOR -- Grassmann);
  3. A HIGHER MULTIVECTOR.

Choice (1) -- C as a VECTOR would seem to aim at CLOSURE OF OUR SYSTEM ON SCALARS AND VECTORS. The others, (2) and (3), would -- for the nonce -- LEAVE THE CLOSURE QUESTION "IN THE BALANCE".

American physicist/chemist, Josiah Willard Gibbs (1839-1903), and British electrical engineer, Oliver Heaviside (1850-1925), INDEPENDENTLY made the first (simplistic) choice -- for VECTORHOOD, that is, a 1-vector. (Hold the applause.)

Choices (2), (3) -- based upon the ideas of German linguist, Hermann Grassmann (1809-1877) -- lead to MULTIVECTOR CLOSURE -- AND (by DUALITY) INCLUDE CHOICE (1), THE STANDARD VECTOR ALGEBRA, AS A SPECIAL CASE!


C as 1-VECTOR

For C ≡ c, choose orthonormal basis vectors:
d1 = [1, 0, 0]; d2 = [0, 1, 0]; d3 = [0, 0, 1]. [8]

(I've denoted a basis vector as di rather than as ei, to save e for "exponential". Note, however, that MULTIVECTOR THEORY is THE ONLY COORDINATE-FREE SYSTEM! Not so for tensors or differential forms. So our choice above is merely for convenience of discussion.)

Given these basis vectors, we can write:
a = a1d1 + a2d2 + a3d3.
b = b1d1 + b2d2 + b3d3.
c = c1d1 + c2d2 + c3d3. [9]

We can, of course, choose a coordinate system such that a3 = b3 = 0, to have:
c = (a1b2 - b1a2)d3 = [a, b]12 d3 = a × b = (-a) × (-b). [10]

Math students will recognize [10] as the familiar vector cross product, YIELDING AN ORTHOGONAL VECTOR, c, OUT OF THE PLANE OF COMPONENTS a, b.
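A quick numeric sketch of [10] (the cross helper is merely illustrative) shows that, with a3 = b3 = 0, only the d3 component survives:

  def cross(a, b):
      # Components [6A]-[6C], i.e. the standard cross product.
      return [a[1]*b[2] - a[2]*b[1],
              a[2]*b[0] - a[0]*b[2],
              a[0]*b[1] - a[1]*b[0]]

  a = [2.0, 3.0, 0.0]   # a3 = 0
  b = [5.0, 7.0, 0.0]   # b3 = 0
  c = cross(a, b)

  assert c[0] == 0.0 and c[1] == 0.0      # no d1, d2 parts
  assert c[2] == a[0]*b[1] - a[1]*b[0]    # c = [a, b]12 d3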

Alas! This choice for C of a vector out-of-the-plane -- made by Gibbs and Heaviside -- DOES NOT TRANSFORM LIKE A VECTOR! In our SV Principle, we say A VECTOR IS WHATEVER IS INVARIANT UNDER THE ROTATION OPERATOR, R, for θ ≠ 0.

Now, we must adopt a convention about the ROTATION, say, COUNTERCLOCKWISE. REVERSING DIRECTION TRANSFORMS A VECTOR INTO ITS NEGATIVE. Of course, a ≠ -a, but (-a) × (-b) = a × b.
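The sign defect is easy to exhibit numerically (same illustrative cross helper as above):

  def cross(a, b):
      return [a[1]*b[2] - a[2]*b[1],
              a[2]*b[0] - a[0]*b[2],
              a[0]*b[1] - a[1]*b[0]]

  a = [1.0, 2.0, 3.0]
  b = [4.0, 5.0, 6.0]
  neg = lambda v: [-x for x in v]

  # Negating BOTH factors leaves the product unchanged -- so c fails
  # to transform like a (polar) vector under inversion.
  assert cross(neg(a), neg(b)) == cross(a, b)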

The patchwork of calling this "an axial vector formed by the product of two polar vectors" FAILS as much as the discussion about "free" and "fixed" vectors, since ROTATION REQUIRES SPINORS and ROTORS. Generations of students have been subjected to this confusion. And we are subjected to unsightly contortions when examinees attempt to perform correct applications of "the two-fingers-and-thumb algorithm", during the stress of examination!

Also, this STANDARD VECTOR ALGEBRA IS NOT MULTIPLICATIVELY CLOSED -- hence, not a PROPER ALGEBRA. (There are other "complaints".)


C as BIVECTOR

All of the objections to C as VECTOR (clearly out of the plane of its components) are avoided by Choice (2), C as BIVECTOR, which remains in the plane of its components.

Our previous product was "inner" because it REDUCED DIMENSIONS, resulting in a SUBDIMENSION (POINT, 0-D) of its COMPONENTS (each 1-D). We give this PRODUCT PRODUCING A BIVECTOR the label "outer product" because IT RAISES DIMENSIONS, from 1-D to 2-D.

We denote it by "wedge", infixed between basis vectors: d1 ∧ d2; etc. We shall also distinguish BIVECTORS from VECTORS by going to uppercase. So, we shall shift from the VECTOR c to the BIVECTOR C and write:
C = (a1b2 - b1a2) d1 ∧ d2 = [a, b]12 d1 ∧ d2. [11]

The geometric representation is an oriented plane segment.

We pause to verify the consistency of this "Choice (2)" derivation. We said "structure" C is "independent" of the other two vectors. If vectors u, v are collinear, then u ∧ v = 0. If orthogonal, u · v = 0. But, clearly, a ∧ (a ∧ b) = 0, and b ∧ (a ∧ b) = 0, so our choice is consistent.
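This consistency check can be mechanized: in 3-D, the trivector coefficient of u ∧ a ∧ b is the determinant det[u, a, b], so a repeated factor must make it vanish (a minimal sketch; det3 is merely an illustrative helper):

  def det3(u, v, w):
      # 3x3 determinant with rows u, v, w -- the coefficient
      # of the trivector u ^ v ^ w on d1 ^ d2 ^ d3.
      return (u[0]*(v[1]*w[2] - v[2]*w[1])
            - u[1]*(v[0]*w[2] - v[2]*w[0])
            + u[2]*(v[0]*w[1] - v[1]*w[0]))

  a = [1.0, 2.0, 3.0]
  b = [4.0, 5.0, 6.0]

  assert det3(a, a, b) == 0.0   # a ^ (a ^ b) = 0
  assert det3(b, a, b) == 0.0   # b ^ (a ^ b) = 0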


The MAGNITUDE of the BIVECTOR B = a ∧ b is EQUIVALENT TO THE AREA OF THE PARALLELOGRAM SPANNED BY VECTORS a, b. Hence:
|B| = |a ∧ b| = |b ∧ a| = |a| |b| sin θ. [12]
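Equation [12] can be spot-checked numerically (pure Python; the bracket helper as before, with θ recovered from the inner product):

  import math

  def bracket(a, b, i, j):
      return a[i]*b[j] - b[i]*a[j]

  a = [1.0, 2.0, 3.0]
  b = [4.0, 5.0, 6.0]

  # |B|^2 = [a,b]23^2 + [a,b]31^2 + [a,b]12^2
  B2 = (bracket(a, b, 1, 2)**2 + bracket(a, b, 2, 0)**2
        + bracket(a, b, 0, 1)**2)

  na = math.sqrt(sum(x*x for x in a))
  nb = math.sqrt(sum(x*x for x in b))
  cos_t = sum(x*y for x, y in zip(a, b)) / (na*nb)
  sin_t = math.sqrt(1.0 - cos_t**2)

  assert abs(math.sqrt(B2) - na*nb*sin_t) < 1e-9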


Other results follow from the OUTER PRODUCT.