## Understanding NMR

The term ‘NMR’ can correctly refer to two different subjects:

1. Why do subatomic particles, in this case nuclei, precess?
2. How do Engineers exploit this precession to form 2D and 3D images, in ‘NMRI’?

In this posting, I am only going to address subject (1).

Precession and spin are easier to understand when we can simply apply Newtonian concepts; Quantum Mechanics today tends to obscure the subject of precession. And so, for most of this post, I am going to make the somewhat daft assumption that the precession of subatomic particles is Newtonian.

If a gyroscope is spinning along an arbitrary axis, and we apply torque to its axis, this torque integrates into the spin vector at an angle to the existing spin vector – unless the torque is parallel to the spin axis, in which case it merely speeds up or slows down the spin. The result is that the spin vector rotates – and thus, precession.
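Under these Newtonian assumptions, the steady precession rate is just the ratio of the applied torque to the angular momentum. Here is a minimal sketch; all of the numeric values are assumed, purely for illustration:

```python
# Newtonian precession of a spinning top: Omega = tau / L = (m*g*r) / (I*omega).
# All numeric values below are assumed, for illustration only.
m = 0.5        # mass of the gyroscope disc, kg
g = 9.81       # gravitational acceleration, m/s^2
r = 0.05       # distance from the pivot to the centre of mass, m
I = 2.0e-4     # moment of inertia about the spin axis, kg*m^2
omega = 300.0  # spin rate, rad/s

L = I * omega       # magnitude of the spin (angular momentum) vector
tau = m * g * r     # torque that gravity applies about the pivot
Omega = tau / L     # precession rate, rad/s

print(f"precession rate: {Omega:.4f} rad/s")   # 4.0875 rad/s
```

Notice that the faster the gyroscope spins, the slower it precesses – which matches the behavior of the pedestal demonstration described above.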

But if we have seen the demonstration in which an off-axis gyroscope precesses on a passive pedestal, we also observe that the phenomenon eventually weakens, and that the axis seems to tilt further and further in the direction gravity is pulling on it.

This weakening takes place because some additional torque is being applied to the gyroscope, against the direction in which it is precessing; otherwise, it would just precess forever. This additional torque could be due to friction with the pedestal, air resistance, magnetism, or whatever.

An artillery shell is aerodynamically designed so that, as long as it has excess spin, interaction with the air will always push it in the direction of any existing precession, and so this type of object will tend to straighten its axis of spin into the direction in which it is flying. This would be the equivalent of the gyro from before straightening up, and standing against gravity again.

Atomic nuclei that have an odd mass number also have a non-zero spin quantum number – thus having spin – as well as a magnetic dipole moment. The wanton assumption could be made that each nucleus’s magnetic dipole moment is always parallel to its axis of spin. If we then visualize matter as consisting of nuclei separated by vast, less-dense clouds of electrons, it would seem to follow that each nucleus is always precessing in response to local magnetic fields.

And even if we were to apply an external magnetic field to such a system, it would follow that precession could not yet be detected externally, because the nuclei are all out of phase. Ostensibly, they would also continue to precess, and to stay out of phase, under the applied magnetic field alone. The only big difference from the practical gyro should then be that the magnitude of their spin vector never changes, since this should be intrinsic.

But if we were to insist on this very Newtonian description, then something else should also happen that is not as obvious. Those thin wisps of electrons should react not only to the applied field, but also, locally, to the field of each precessing nucleus. So if we assume conservation of energy, there would also be a reactive torque acting on each nucleus, in response to its own precession, because the density of the electron clouds is not zero.

After a certain measurable settling period, the nuclei end up aligning themselves with the applied field, resulting in the state that has the lowest possible potential energy. This takes milliseconds, instead of the nanoseconds that some of these behaviors should take on the subatomic scale. Precession has still not been detected.

Likewise, the fact that subatomic decay can take years instead of nanoseconds refutes certain mundane explanations of what might be causing it.

Well, one thing that Scientists can do is compute the dipole moment of such a nucleus, as well as the magnitude of its angular momentum – its spin – and from those, compute, as a function of the applied field intensity, the frequency with which all the nuclei should be precessing… This frequency is called the “Larmor Frequency”.
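As a sketch of that computation: the Larmor frequency is simply the gyromagnetic ratio times the applied field strength. The proton’s gyromagnetic ratio below is the standard published value, quoted approximately, and the field strength is an assumed, typical example:

```python
import math

# Larmor frequency: omega = gamma * B, or f = gamma * B / (2*pi).
gamma_proton = 2.675e8   # rad s^-1 T^-1, gyromagnetic ratio of the 1H nucleus
B = 1.5                  # applied field in tesla (an assumed, typical value)

f_larmor = gamma_proton * B / (2 * math.pi)   # precession frequency in Hz
print(f"Larmor frequency at {B} T: {f_larmor / 1e6:.1f} MHz")   # roughly 64 MHz
```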

## Whether the Columns of Matrices have a Natural Order

Are the columns of matrices in a given, natural order, as we write them? Well, if we are using the matrix as a rotation matrix in CGI – i.e. its elements are derived from the trig functions of Euler Angles – then the column order depends on the order in which we have labeled the coordinates X, Y and Z. We are not free to change this order in the middle of our calculations, but if we decide that X, Y and Z are supposed to form a different set, then we need to use different matrices as well.
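A small numpy sketch of this point – the rotation angle and the relabeling below are assumed examples. Relabeling which coordinate is called X, Y or Z amounts to conjugating the rotation matrix by a permutation matrix, which visibly changes its elements:

```python
import numpy as np

theta = 0.3                      # an assumed rotation angle, radians
c, s = np.cos(theta), np.sin(theta)

# Rotation about the third axis, with the coordinates ordered (X, Y, Z):
R = np.array([[c, -s, 0],
              [s,  c, 0],
              [0,  0, 1]])

# Relabel so that the old Z becomes the first coordinate: order (Z, X, Y).
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])        # permutation matrix for the relabeling

R_relabeled = P @ R @ P.T        # the same rotation, in the new column order
assert not np.allclose(R, R_relabeled)   # the matrix itself has changed
print(np.round(R_relabeled, 6))
```

The result is what we would write by hand as a rotation about the first axis – the same physical rotation, but a different matrix, because the columns are in a different order.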

(Edited 02/15/2018:

OTOH, we also know that a matrix can be an expression of a system of simultaneous equations, which can be solved manually through Gauss-Jordan Elimination on the matrix. If we find that our system has infinitely many solutions, then we are inclined to say that certain variables are the “Leading Variables” while the others are the “Free Variables”. It is taught today that the Free Variables can also be made our parameters, so that the set of values for the Leading Variables follows from those parameters. But wait: should it not be arbitrary, for certain combinations of variables, which follows from which?

The answer is that, if we simply use Gauss-Jordan Elimination, and if two variables are connected as having possibly infinite combinations of values, then it will always be the variables stated earlier in the equations that become the Leading Variables, and the ones stated later that become the Free Variables. We could restate the same equations with the variables in some other order, and then, sure enough, a variable that used to be a Free one will have become a new Leading one, and vice versa. (And if we do so, the parametric equations for the other Leading Variables will generally also change.)
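This behavior can be demonstrated with a small, self-contained Gauss-Jordan routine – a sketch written for this post, not a library function – which reports which columns end up carrying pivots, i.e. which variables come out as the Leading ones:

```python
# A minimal Gauss-Jordan elimination over an augmented matrix, reporting the
# pivot columns (the Leading Variables). Exact arithmetic via Fraction.
from fractions import Fraction

def rref_pivots(rows):
    """Reduce the augmented matrix and return the list of pivot columns."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0]) - 1):           # last column holds the constants
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue                          # no pivot here: a Free Variable
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]    # scale the pivot row to get a 1
        for i in range(len(m)):
            if i != r and m[i][c] != 0:       # clear the rest of the column
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

# x + 2y + z = 4 and y + z = 1, with the columns ordered (x, y, z):
print(rref_pivots([[1, 2, 1, 4], [0, 1, 1, 1]]))   # [0, 1] -> x, y Leading; z Free
# The same equations, with the columns reordered as (z, y, x):
print(rref_pivots([[1, 2, 1, 4], [1, 1, 0, 1]]))   # [0, 1] -> z, y Leading; x Free
# A case where a Free Variable precedes a Leading one: 0x + y = 2
print(rref_pivots([[0, 1, 2]]))                    # [1] -> y Leading; x Free
```

In the first two calls, the pivots land in the first columns simply because those columns come first; reordering the variables changes which of them are Leading. The last call shows the exception: when a column contains no pivot at all, a Free Variable can precede a Leading one.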

As of 02/15/2018:

This is an observation I once made, based on certain exercises in Linear Algebra as taught, which had been simplified in the way I described. Eventually, systems will come up in the real world in which a Free Variable actually precedes a Leading Variable, both in the order the variables are mentioned in the equations, and according to the order of the matrix columns. In such a case, the row with a single leading 1 in the column of that Free Variable simply does not occur, so that it also cannot be added to, or subtracted from, an earlier row.

The latter observation follows from the fact that such systems have infinitely many solutions:

1. If the Free Variables are given a set of values that follows from the solution-set, then the Leading Variables still need to have definite values, as defined by the same solution-matrix;
2. If the Free Variables are given a set of values that no longer follows from the solution-set, it will not follow that some different set of values for the Leading Variables will make such extraneous solutions viable.

End of edit, 02/15/2018 )

The order of the columns has become the order of discovery.

This could also have ramifications for Quantum Mechanics, where matrices are sometimes used. QM used matrices at first in an effort to be empirical, and to acknowledge that we, as Humans, can only observe a subset of the properties which particles may have. What happens in QM, then, is that Eigenvalues are computed for some of the matrices used, and if those turn out to be real numbers, they are thought to correspond to observable properties of particles, while complex Eigenvalues are stated – modestly enough – not to correspond to observable properties of the particle.

Even though this system seems straightforward, it is not foolproof. According to Classical Principles, a Magnetic North Pole corresponds to a face from which an assumed circulating current appears to be flowing in one arbitrary direction, say counter-clockwise. It should follow, then, that viewed from the opposite perspective, the same current appears to be flowing clockwise – so that every North Pole is paired with a South Pole. And yet, according to QM, monopoles should exist.

## Self-Educating about Perpendicular Matrices with Complex Elements

One of the key reasons for which my class was taught Linear Algebra, including how to compute the Eigenvalues and Eigenvectors of Matrices, was so that we could Diagonalize Symmetrical Matrices, in Real Numbers. What this did was to compute the ‘Perpendicular Matrix’ of a given matrix, in which each column was one of its Eigenvectors, and which was an example of an Orthogonal Matrix. (It might be the case that what was once referred to as a Perpendicular Matrix may now be referred to as an Orthogonal Basis of the given matrix?)

(Edit 07/04/2018:

In fact, what we were taught is now referred to as the Eigendecomposition of a matrix.)

Having computed the perpendicular matrix P of M, it was known that the matrix product

Pᵀ M P = D,

which gives a Diagonal Matrix ‘D’. But a key problem my Elementary Linear class was not taught to solve was what to do if ‘M’ had complex Eigenvalues. In order to be taught that, we would need to have been taught, in general, how to combine Linear Algebra with Complex Numbers. After that, the Eigenvectors could have been computed as easily as before, using Gauss-Jordan Elimination.
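In numpy, the real-symmetric case we were taught looks like the following sketch. The matrix M is an assumed example; numpy’s eigh() returns the orthonormal Eigenvectors as the columns of what we called the Perpendicular Matrix P:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # a Symmetrical Matrix (assumed example)

eigenvalues, P = np.linalg.eigh(M)  # columns of P are Eigenvectors of M
D = P.T @ M @ P                     # the product PT M P, which is Diagonal

assert np.allclose(P.T @ P, np.eye(2))       # P is an Orthogonal Matrix
assert np.allclose(D, np.diag(eigenvalues))  # D holds the Eigenvalues
print(np.round(eigenvalues, 6))              # [1. 3.]
```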

I have brushed up on this in my old Linear Algebra textbook, whose last chapter writes about Complex Numbers. Key facts which need to be understood about Complex Vector Spaces are:

• The Inner Product needs to be computed differently from before, in a way that borrows from the fact that complex numbers naturally have conjugates. It is now the sum of each element of one vector multiplied by the conjugate of the corresponding element of the other vector.
• Orthogonal and Symmetrical Matrices are relatively unimportant with Complex Elements.
• A special operation is defined for matrices, called the Conjugate Transpose, A* .
• A Unitary Matrix now replaces the Orthogonal Matrix, such that A⁻¹ = A* .
• A Hermitian Matrix now replaces the Symmetrical Matrix, such that A = A* , and the elements along the main diagonal are Real. Hermitian Matrices are also easy to recognize by inspection.
• Hermitian Matrices are not the only ones that can be diagonalized. They have a superset, known as Normal Matrices, such that A A* = A* A , and Normal Matrices can be diagonalized.
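The bullet points above can be checked in numpy. The vectors and the Hermitian matrix below are assumed examples; note that numpy’s vdot() conjugates its first argument, matching the complex Inner Product just described:

```python
import numpy as np

# The complex Inner Product: elements times conjugated counterparts, summed.
u = np.array([1 + 1j, 2 + 0j])
v = np.array([3 + 0j, 0 + 1j])
ip = np.vdot(u, v)                   # conjugates u: (1-1j)*3 + 2*(1j) = 3-1j

A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])        # an assumed example matrix

A_star = A.conj().T                  # the Conjugate Transpose, A*
assert np.allclose(A, A_star)        # A is Hermitian: A = A*
assert np.allclose(A @ A_star, A_star @ A)       # hence also Normal

eigenvalues, U = np.linalg.eigh(A)   # Hermitian => the Eigenvalues are real
assert np.allclose(U.conj().T @ U, np.eye(2))    # U is Unitary: U^-1 = U*
D = U.conj().T @ A @ U               # U* A U comes out Diagonal
assert np.allclose(D, np.diag(eigenvalues))
print(np.round(eigenvalues, 6))      # [1. 4.]
```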

This could all become important in Quantum Mechanics, considering the general issue known to exist, by which the bases that define how particles can interact somehow need to be multiplied by complex numbers, to describe accurately how particles do interact.