Went back to NJ on Friday because my good friend Pete was having a party at his new home in Philly. Stayed with my brother Julian, who also lives in Philly, on Saturday, and made it back to Moorestown on Sunday. Now we’re here. Hopefully I can post the four stories that have been buzzing in the back of my mind today, and make up for lost time. The first story takes us from abstract algebra to n-spheres to the Bott periodicity theorem of homotopy theory and the notion of Clifford aka Geometric Algebras—thanks again John Baez! The second story takes us from factory farms to the notion of kingship in anthropology, the transformative powers of raccoons, and the morality of quantum mechanics. The third story takes us from AI to the Russians to my writings circa 2014. And the fourth and final story is about what it’s like in 2018 to continue Plato’s tradition of symposia aka what it’s like to be the philosopher today at the margins of the party.
So, as usual, they misled you in school, mainly by omission. Your teachers probably presented “numbers” and “algebra” like they’re fixed concepts, set in stone. Here’s what numbers are. Here’s what the rules of algebra are. Now follow the rules to calculate the solution. This is fine as far as it goes, but it implicitly makes you think “breaking the rules” is bad. And if you think “breaking the rules” is bad, it might prevent you from asking the simple question that lies at the heart of abstract algebra, and which represents the true path of mathematical enlightenment, aka: What happens to the numbers when you break the rules of algebra?
Abstract algebra thus asks you to take a new, higher vantage point. Instead of asking two separate questions once and for all time, What is number? and What is algebra?, it asks: If you change the rules of algebra, how do the numbers change? If you change the numbers, how do the rules of algebra change? Number and algebra are seen as two concepts that actually co-determine each other.
Now it turns out mathematicians have been exploring this subject intensely for 200 years now, and a lot is known. I’ll try to synthesize some of the results here, without necessarily demonstrating each step, as there are numerous excellent resources available to those who are interested in the details. I apologize for the errors I've no doubt introduced due to ignorance or overeagerness.
The basic operations of arithmetic: addition, subtraction, multiplication, and division. If you start with the counting numbers 1, 2, 3, …, you quickly find that you need to upgrade to the integers …, -3, -2, -1, 0, 1, 2, 3, … if you want subtraction to always make sense, and for every number to have an additive inverse (like -3 and 3). You also discover that you need to upgrade to the rational numbers aka ratios of integers, if you want division to always make sense, and for every number to have a multiplicative inverse (like 1/3 and 3). So even at this early stage, it’s clear: the demand that our number system be “closed” under a certain algebra naturally leads to a corresponding sophistication in the concept of number relative to that algebra.
Now suppose we demand that between any two numbers, there’s a third number (that also has an inverse, etc), then you get the “real numbers.” The real numbers can be characterized by four properties: ordering, commutativity, associativity, normitude. In other words, real numbers can always be put in some unique order aka … a < b < c < d …; the order of multiplication of real numbers doesn’t matter aka a.b = b.a; the insertion of parentheses doesn’t matter aka a.(b.c) = (a.b).c; and finally, distance between vectors of real numbers is defined by something like the Pythagorean theorem aka d.d = a.a + b.b + c.c… It’s because of this that we can define the intrinsic “length” or “norm” of a real vector, and if we have two vectors u and v, |u||v| = |u.v|.
So what happens if we relax the property of ordering? Then we get the complex numbers! (Remember, some x+y.i given in rectangular coordinates aka r.e**i.theta given in polar coordinates.) Whereas the real numbers are defined on a line, the complex numbers are defined on a plane—so there’s no single notion of orderedness to them. But they’re still commutative and associative and normie, although we need to be a little more nuanced about the Pythagorean theorem, which now appears as d**2 = a.a* + b.b* + c.c* + … where a* is a’s complex conjugate (under complex conjugation, x+y.i -> x-y.i aka r.e**i.theta -> r.e**-i.theta.) So a.a* is just x.x + y.y: the imaginary parts cancel out and we get the “real” length squared of the hypotenuse. (You can see the old Pythagorean formula as just a special case of the new one: we just didn’t have to worry about conjugation before because the real numbers are symmetric under it.)
Great! Now what happens if we relax the property of commutativity, so that a.b != b.a? Then we get the quaternions! What are the quaternions? They describe dilations and rotations in 3D space, just as real numbers represent dilations and rotations in 1D space, and complex numbers represent dilations and rotations in 2D space. And if you spend a moment with any object at hand, you will soon realize that doing rotation A followed by rotation B is not always the same as doing rotation B followed by rotation A. You can write a quaternion analogously to a complex number as some a + b.i + c.j + d.k, where a, b, c, and d are real numbers, and i, j, and k represent three orthogonal ways we can rotate around the real axis. Quaternions aren’t ordered, nor commutative, but they are still associative and normie.
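If you want to see the non-commutativity with your own eyes, here's a minimal quaternion sketch in plain Python (tuples and Hamilton's product, nothing fancy; the function names are my own), checking that i.j != j.i while the norm stays multiplicative:

```python
import math

def qmul(p, q):
    # Hamilton's product for p = (a, b, c, d) meaning a + b.i + c.j + d.k
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qnorm(p):
    return math.sqrt(sum(x*x for x in p))

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))  # (0, 0, 0, 1): i.j = k
print(qmul(j, i))  # (0, 0, 0, -1): j.i = -k, so order matters!

# but "normitude" survives: |p||q| == |p.q|
p, q = (1, 2, 3, 4), (0.5, -1, 2, 0)
assert abs(qnorm(p) * qnorm(q) - qnorm(qmul(p, q))) < 1e-12
```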
If we relax the property of associativity, so that a.(b.c) != (a.b).c, we get the octonions, which can be written as a + b.i + c.j + d.k + e.l + f.m + g.n + h.o. There are 7 imaginary axes orthogonal to the real axis, which makes 8. But, due to their non-associativity, the octonions have no faithful matrix representation in themselves: matrix multiplication is always associative! Now you may have noticed that the dimensionality of our numbers has been growing like powers of 2.
If we continue the construction to the sedenions et al, we lose the property of normitude. There’s no way to define the intrinsic “length” or “norm” of a vector of sedenions such that |u||v| = |u.v|. Another way of saying it: we lose the classical notions of “inverse” and “division,” and along with it, the tight relationship between scalar and vector that really makes the notion of a vector space even possible or at least useful. So it’s like the universe is saying: thou shalt always define thy vector spaces over the four primal number systems: the reals, the complex numbers, the quaternions, and the octonions—because according to Hurwitz’s theorem, these are the only four normed division algebras.
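You can even watch the ladder collapse numerically. Below is a sketch of the Cayley-Dickson doubling construction (one standard convention for it; signs vary by author, and all the helper names are mine): it builds C, H, O, and the sedenions out of nested pairs, confirms the octonions still compose norms but no longer associate, and then brute-force searches for sedenion zero divisors of the form (e_i + e_j).(e_k ± e_l), which a normed division algebra cannot have:

```python
import math
from itertools import combinations

# Numbers are nested pairs: a real at the bottom, (a, b) above it.
def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if isinstance(x, tuple):
        (a, b), (c, d) = x, y
        # Cayley-Dickson doubling: (a,b).(c,d) = (a.c - d*.b, d.a + b.c*)
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

def pack(v):        # flat list of 2**k coordinates -> nested pairs
    if len(v) == 1:
        return v[0]
    h = len(v) // 2
    return (pack(v[:h]), pack(v[h:]))

def unpack(x):      # nested pairs -> flat list
    return unpack(x[0]) + unpack(x[1]) if isinstance(x, tuple) else [x]

def e(i, n):        # i-th basis unit of the n == 2**k dimensional algebra
    return pack([1.0 if j == i else 0.0 for j in range(n)])

def norm2(x):
    return sum(c * c for c in unpack(x))

# Octonions: the norm still composes, |u|^2.|v|^2 == |u.v|^2 ...
u = pack([1., 2., 3., 4., 5., 6., 7., 8.])
v = pack([8., -7., 6., -5., 4., -3., 2., -1.])
assert math.isclose(norm2(u) * norm2(v), norm2(mul(u, v)))

# ... but associativity is already gone: (e1.e2).e4 != e1.(e2.e4)
a, b, c = e(1, 8), e(2, 8), e(4, 8)
assert unpack(mul(mul(a, b), c)) != unpack(mul(a, mul(b, c)))

# Sedenions: search for zero divisors of the form (e_i + e_j).(e_k +/- e_l)
def zero_divisor_pairs(n):
    units = list(combinations(range(1, n), 2))
    pairs = []
    for i, j in units:
        s = add(e(i, n), e(j, n))
        for k, l in units:
            for w in (add(e(k, n), e(l, n)), add(e(k, n), neg(e(l, n)))):
                if norm2(mul(s, w)) < 1e-12:
                    pairs.append(((i, j), (k, l)))
    return pairs

print(len(zero_divisor_pairs(8)))   # octonions: 0, it's a division algebra
print(len(zero_divisor_pairs(16)))  # sedenions: plenty
```

The search at dimension 8 comes up empty, exactly as Hurwitz's theorem demands; at dimension 16 it doesn't.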
But the story doesn’t end there, because rather than a limitation, this turns out to be a huge advantage! But first we need to take a new perspective on multidimensional space.
Let’s begin by defining the notion of a Clifford aka a Geometric Algebra. For this, pictures are worth a thousand words. GA(0) is the Geometric Algebra defined by 1 vector. It consists of scalars that describe the dilations and contractions of this vector, and the vector itself. GA(1) is the Geometric Algebra defined by 2 orthogonal vectors. It consists of its scalars, the two vectors, and the bivector swept out from the first vector to the second. So GA(1) is just the complex numbers in disguise: the two vectors are the real and imaginary axes, and “multiplying” by the bivector is like doing a 90 degree rotation in the complex plane. GA(2) is the Geometric Algebra defined by 3 orthogonal vectors. It consists of its scalars, the three vectors, the three possible bivectors between them, and 1 trivector. These objects provide a basis for lengths, directions, areas, volumes, etc, within the space. What I mean is that, somewhat incredibly, GA(n), the Geometric Algebra defined by n+1 orthogonal vectors, itself defines a vector space in 2**(n+1) dimensions that keeps track of all the higher geometry!
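The grade counting above can be checked by brute force: a basis blade of a Geometric Algebra is just a subset of its generating vectors, so an algebra on m generators has C(m, k) k-blades and 2**m basis elements in all (with m = n+1 for GA(n) in the indexing used here). A minimal sketch:

```python
from itertools import combinations

def blades(m):
    """All basis blades of a Geometric Algebra on m generating vectors."""
    gens = [f"e{i}" for i in range(1, m + 1)]
    return [c for k in range(m + 1) for c in combinations(gens, k)]

for m in range(1, 5):
    bs = blades(m)
    by_grade = [sum(1 for b in bs if len(b) == k) for k in range(m + 1)]
    print(m, by_grade, len(bs))
# m=3 (GA(2)) gives [1, 3, 3, 1] and 8: scalar, vectors, bivectors, trivector
# m=4 (GA(3)) gives [1, 4, 6, 4, 1] and 16
```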
Now it turns out GA(2) is just the quaternions in disguise. But consider GA(3). It has 4 orthogonal vectors. It consists of scalars, the four vectors, the 6 possible bivectors, the 4 possible trivectors, and 1 quadvector. But GA(3) is *not* the octonions! So instead of trying to define higher dimensional space using our old construction of reals, complex numbers, quaternions, octonions, sedenions, shedding algebraic properties as we rise in powers of 2, let’s define higher dimensional space in terms of a Geometric Algebra, because even as our dimensionality goes up by powers of 2, we retain our associativity and normitude. In retrospect, the octonions represent just one fork along the road to infinity. (PS. GA(4) actually represents the 4d relativistic space that an electron lives in, once provided with the proper metric, where time enters with the opposite sign--written as GA(1, 3)!)
Having perceived all this, we can now appreciate the following staggering fact known as “Bott Periodicity.”
It’s ironic: the octonions were originally called the “octaves,” but it’s by graduating to GA(3) instead of to O that we come to perceive what may rightly be called God’s true “octave.” Notationally, “H, H” means two quaternions “concatenated” together; and H(2) means 2x2 matrices whose elements are quaternions. So Bott Periodicity is the statement that after the first octave, GA(n) = GA(n-8)(16) aka 16x16 matrices whose elements come from GA(n-8). Concretely, GA(8) = GA(0)(16) = R(16) aka 16x16 matrices of real numbers. GA(16) = GA(8)(16) = GA(0)(16)(16) = R(16)(16) aka 16x16 matrices of 16x16 matrices of real numbers. So higher dimensional space actually wraps back around itself, as the original abstract “octave” is recursively embedded within it in ever more sophisticated ways.
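As a sanity check on the octave rule, here's a lookup-table sketch (my own, not from any library) that hard-codes the first octave of algebras as they appear in this story (R, C, H, H, H, H(2), C(4), R(8), R(8), R(8)) and then applies GA(n) = GA(n-8)(16), multiplying the matrix size by 16 per octave:

```python
# The first octave, as (algebra, matrix size); "+" marks a doubled algebra.
BASE = {0: ("R", 1), 1: ("C", 1), 2: ("H", 1), 3: ("H+H", 1),
        4: ("H", 2), 5: ("C", 4), 6: ("R", 8), 7: ("R+R", 8)}

def ga(n):
    """GA(n) as matrices over a first-octave algebra, via GA(n) = GA(n-8)(16)."""
    algebra, size = BASE[n % 8]
    return algebra, size * 16 ** (n // 8)

print(ga(8))   # ('R', 16): 16x16 real matrices
print(ga(16))  # ('R', 256): 16x16 matrices of 16x16 real matrices
print(ga(15))  # ('R+R', 128): matches [R(8), R(8)](16)
```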
It’s all beautifully analogous to number theory/musical harmony.
1 2 3 4 5 6 7 8
Between adjacent powers of two, we always find some new primes. We also find every 2nd number divisible by 2, every 3rd number divisible by 3, etc. This means that new primes always appear situated between old primes, and so can be seen not merely as new notes, but as “higher dimensional” corrections to old notes, approximating the universal division of the octave. This is where Arnold Schoenberg, bless his heart, went astray.
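The first claim is guaranteed by Bertrand's postulate, and it's easy to check for the first several octaves with a naive primality test:

```python
# Check: between 2**k and 2**(k+1) there's always at least one new prime.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for k in range(1, 12):
    new_primes = [m for m in range(2**k + 1, 2**(k + 1)) if is_prime(m)]
    print(2**k, "to", 2**(k + 1), ":", new_primes[:5], "...")
    assert new_primes  # never empty
```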
Now I’m going to blow your mind and tell you that a quaternion is actually just a qubit: it is, physically speaking, a spin 1/2 particle. The easiest way of seeing this is to realize that some quaternion t + x.I + y.J + z.K can be written as a 2x2 matrix of the form:
[[ t+i.x  y+i.z]
 [-y+i.z  t-i.x]]
The quaternion basis elements are:

I = [[ i  0]     J = [[ 0  1]     K = [[ 0  i]
     [ 0 -i]]         [-1  0]]         [ i  0]]
Compare the Pauli matrices:

X = [[ 0  1]     Y = [[ 0 -i]     Z = [[ 1  0]
     [ 1  0]]         [ i  0]]         [ 0 -1]]
So we find that I = iZ, J = iY, K = iX. And in any case, all these are just different ways of writing bivectors. If we switch to X, Y, Z, we get a Hermitian matrix, as opposed to a skew-Hermitian one. If we find the two eigenvalues and two eigenvectors of that Hermitian matrix, and constellate those two eigenvectors using the Majorana representation, we get two antipodal stars on the 2-sphere. The ratio of the eigenvalues determines a division of the antipodal line aka a point in the interior or on the surface. And, quantum mechanically, we know that pure qubit states are points on the surface of the 2-sphere and mixed qubit states are points in the interior of the 2-sphere, up to complex phase. So there you go. Now you can look at the bivectors of GA(2) as defining the X, Y, and Z measurements in QM, generalizing to higher dimensional analogues accordingly. We recall the crazy plot twist of QM: just as light can be broken into its component colors, sound into its component frequencies, and composite numbers into their primes, so spin states can be broken down into orthogonal spin states, as in the Stern-Gerlach apparatus. There the electrons emerge at random in one of two locations, with their spins either completely parallel or completely anti-parallel to the magnetic field they just traveled through, with probabilities given by the projection of their former spin state onto the axis defined by the magnetic field. The “probabilistic” or “free” behavior of a spin in the Stern-Gerlach apparatus reveals that each “generator of motion,” a Hermitian matrix H, plays a double role: it defines a “fixed pole” around which you can rotate another axis in a circle unitarily by some U(t), given by exponentiating the H; and it defines a possible measurement/bifurcation/free choice of an eigenstate of H.
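Here's the dictionary above checked numerically with numpy: Hamilton's relations fall out of the Pauli matrices, and the determinant of the 2x2 matrix form is exactly the quaternion's squared norm:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

I, J, K = 1j * Z, 1j * Y, 1j * X   # the dictionary above
one = np.eye(2)

# Hamilton's defining relations: I**2 = J**2 = K**2 = I.J.K = -1
assert np.allclose(I @ I, -one) and np.allclose(J @ J, -one)
assert np.allclose(K @ K, -one) and np.allclose(I @ J @ K, -one)

# A general quaternion t + x.I + y.J + z.K as a 2x2 complex matrix;
# its determinant is the quaternion's squared norm.
t, x, y, z = 1.0, 2.0, 3.0, 4.0
q = t * one + x * I + y * J + z * K
assert np.isclose(np.linalg.det(q).real, t**2 + x**2 + y**2 + z**2)
print(q)
```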
And you rediscover what Dirac rediscovered, which is that an electron lives in GA(1,3) aka H(2): an “electron” breaks into two entangled spins in the singlet state, a positive energy spin 1/2 and a negative energy spin 1/2, a positron part and an electron part--from which one can derive the Pauli exclusion principle from first principles.
But backing up a bit, what makes the “Bloch representation” of a qubit—a point on a 2-sphere up to complex phase—legal anyway? The answer is the Hopf Fibration, and this brings us both full circle and to the denouement.
Consider the 2-sphere. It’s defined by some x**2 + y**2 + z**2 = 1. Consider the 3-sphere. It’s defined by some t**2 + x**2 + y**2 + z**2 = 1. Just as the 2-sphere is a 2d surface embedded in a 3d space, a 3-sphere is a 3d surface embedded in a 4d space. Hence the quaternions. Now suppose we’re in 4d real space. We can map 4d real space into 2d complex space with:
(t, x, y, z) -> (t+i.x, y+i.z) -> (r, s)
Then we can define the 3-sphere as r.r* + s.s* = 1. Now suppose we’re in a 3d real space, we can also map it into 2d complex space with
(x, y, z) -> (x+i.y, z) -> (u, v)
so that the 2-sphere is defined by u.u* + v.v* = 1. Then the Hopf fibration p
p(r, s) -> (2.r.s*, r.r* - s.s*) -> (u, v)
“stereographically” projects the 3-sphere onto the 2-sphere. If two points on the 3-sphere project to the same point on the 2-sphere, aka
if p(r, s) = p(r’, s’),
then (r’, s’) = (ar, as)
where a is a complex phase
aka a.a* = 1.
And vice versa, if two points on the 3-sphere differ only in complex phase, then they map to the same point on the 2-sphere.
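This phase-invariance is easy to check numerically. Here's a sketch of the map p above in numpy, with a random point on the 3-sphere (the helper names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_3sphere_point():
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    return complex(v[0], v[1]), complex(v[2], v[3])  # (r, s)

def hopf(r, s):
    # p(r, s) -> (2.r.s*, r.r* - s.s*)
    return 2 * r * np.conj(s), (r * np.conj(r) - s * np.conj(s)).real

r, s = random_3sphere_point()
u, v = hopf(r, s)
assert np.isclose(abs(u) ** 2 + v ** 2, 1.0)    # lands on the 2-sphere

phase = np.exp(1j * 0.7)                        # a with a.a* = 1
u2, v2 = hopf(phase * r, phase * s)
assert np.isclose(u, u2) and np.isclose(v, v2)  # same projection!
```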
So this is exactly what’s going on in quantum mechanics, because, as they say, the rotation group SO(3) of rotations in 3 dimensions has a “double cover,” the spin group Spin(3), which is diffeomorphic to the 3-sphere. In essence, thinking of elements of SO(3) as points in a space, you can move continuously from one SO(3) rotation to the next, and there’s another group Spin(3) that also lets you move continuously in the same way, but for every one element in SO(3), there are two elements in Spin(3). And finally, it turns out you can map this “space” formed by Spin(3) to the 3-sphere. Furthermore, there are two ways of interpreting Spin(3): as either Sp(1), the group of quaternions with norm/length/magnitude = 1; or SU(2), the group of unitary matrices with determinant 1. But we already knew that: a quaternion is a qubit is a rotation in 3d. In quaternion language, we look at quaternions with g.g* = 1 as defining the 3-sphere. Now, a point (x, y, z) in 3d can be expressed as an imaginary quaternion: h = x.i + y.j + z.k. You can confirm that, for some unit quaternion g, and an imaginary quaternion h,
h -> g.h.g*
expresses a rotation in 3 dimensions. If you compute (g.h.g*)(g.h.g*)*, you get h.h*, so it’s length/distance/norm preserving. So the unit quaternions are just the group of rotations in 3d, except that it turns out that g and -g determine the same rotation: hence the double cover business. Now if we try to represent a 3d rotation *in* 3d space, we find we can use the Hopf fibration to project our unit quaternion to a point on the 2-sphere, with a circle’s worth of freedom left over: as if to say, we can always rotate around the very axis defined by the point on the 2-sphere and its antipode, and we get the same projection. And this is the meaning of the Hopf fibration:
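To make h -> g.h.g* concrete, here's a tuple-level sketch (Hamilton's product again, names mine): a unit quaternion g for a 90 degree rotation about the i axis sends j to k, and g and -g visibly give the same rotation:

```python
import math

def qmul(p, q):
    # Hamilton's product for p = (a, b, c, d) meaning a + b.i + c.j + d.k
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(p):
    return (p[0], -p[1], -p[2], -p[3])

def rotate(g, h):
    return qmul(qmul(g, h), conj(g))  # h -> g.h.g*

# g = rotation by 90 degrees about the i axis; h = the point (0, 1, 0)
half = math.sqrt(0.5)
g = (half, half, 0, 0)
h = (0, 0, 1, 0)

print(rotate(g, h))   # ~ (0, 0, 0, 1): j has rotated to k

# g and -g give the identical rotation: the double cover
gm = tuple(-x for x in g)
assert all(abs(a - b) < 1e-12 for a, b in zip(rotate(g, h), rotate(gm, h)))
```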
fiber space -> total space -> base space
S(1) -> S(3) -> S(2)

The “total space,” which is a 3-sphere, can be broken down into a 2-sphere “base space,” and above every point on the 2-sphere, there’s a 1-sphere “fiber,” a circle of freedom that lives atop each point of the fabric of the 2-sphere.
Now, remarkably, according to the theorems of Hopf and Adams, we find that in all of multidimensional space there are exactly four such fibrations and what’s more, they correspond to the 4 normed division algebras: R, C, H, and O.
There’s a fibration of the 15-sphere (which lives in 16 dimensional space, etc) which breaks it down into an 8-sphere base space, and a 7-sphere fiber space. But then! a 7-sphere can be fibered into a 4-sphere base space and a 3-sphere fiber space. But also! a 3-sphere can be fibered into a 2-sphere base space and 1-sphere fiber space. And lo! a 1-sphere can be fibered into a 1-sphere base space, and a 0-sphere fiber space, aka just two points. There can’t be any more such fibrations, because, after all, n-spheres rely on the whole a.a* + b.b* + c.c* + d.d* + … = 1 thing to work, with |u||v|’s ==ing |uv|’s, and the octonions are the last normed division algebra!
But consider that the 15-sphere lives in 16 real dimensional space. If we have 16 orthogonal vectors, we can consider its Geometric Algebra GA(15). Continuing the pattern,
GA(1) ~ GA(1)    aka    C ~ C
GA(2) ~ GA(3)    aka    H ~ H, H
GA(4) ~ GA(7)    aka    H(2) ~ R(8), R(8)
GA(8) ~ GA(15)   aka    R(16) ~ [R(8), R(8)](16)
So the vectors in one of the higher GA(n)’s can be fibered into vectors in the lower dimensional GA(n)’s. In the Hopf Fibrations, we have GA(0; 1; 2; 3; 4; 7) of the first octave, and GA(8; 15) aka the first and last notes of the second octave. Of the first octave, we’re only missing GA(5) = C(4) and GA(6) = R(8), and C(4) can be seen as R(8) with a complex structure imposed, so it’s really like we’re missing just one GA(5/6). Furthermore, it’s interesting how GA(15) is conveniently GA(7)(16). It’s as if doing GA(n-8) is like dropping down to the fiber space.
And indeed, the thing is: if we have some n-dimensional vector space, we can construct its Geometric Algebra, and we have this “subtract 8 while making 16x16 matrices” rule. But precisely because we stuck to using normed division algebras, we can always collapse each of the elements in the 16x16 grid to get back a 16x16 matrix that acts on states that live on the 15-sphere and so can be fibered down to earth! Meanwhile, at the deepest level of the hierarchy, we find only GA(0; 1; 2; 3; 4; 5; 6; 7), and so, except for GA(5/6), we can also begin the fibration there. Therefore, putting the exception to the side, we can begin the fibration anywhere! In this way, anything can be projected “down to earth” (and/or sent back up again) through the hierarchy of “celestial spheres” from 2 points, to circle, to surface, to sphere, etc, … and back.
What do we do with GA(5/6) then? Well, the algebra of C(4) is isomorphic to the complexification of GA(1,3), the space in which the electron lives. But now it can describe a photon too? I'm not sure. I'll just say now that C(4) reminds me of the dimensionality of Penrose’s “twistor space,” which describes photons traveling between spheres--although I guess twistor space is really more like GA(3) aka H, H. In any case, in twistor theory, a point in spacetime can be identified with a Hermitian matrix H:
1/sqrt(2) . [[ t-z    x+i.y]
             [ x-i.y  t+z  ]]
A twistor is a complex 4 dimensional vector that breaks into two 2d parts: r and s. If the twistor satisfies the relationship:
r = i.H.s
then that twistor is incident with that point in Minkowski space. From the r and s, you can calculate the position, momentum, and helicity of the photon, etc. By considering all the twistors incident with a given point in spacetime, you can build up a picture of the night sky of the observer.
PS. Someone needs to write/probably has already written some linguistics code that creates a concordance matrix of symbols over a corpus, vectorizes the symbols using SVD, and then constructs the Geometric Algebra over the symbol vectors. Symbol juxtaposition in a sentence is interpreted simply as the geometric product, collapsing and expanding dimensions accordingly. The sentence (or its complement), considered as a transformation, is then applied to the semantic space as a whole, altering the background against which the next sentence will be interpreted. Then test hypotheses about quantum gravity in the world of linguistics, where there is a constant feedback whereby the semantic vectors guiding thought are defined relative to the current symbol frequencies, but the use of those vectors, in general, changes the very symbol frequencies used to define them in the first place, aka the shifting ground on which you stand.
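Here's a back-of-the-envelope sketch of the first half of that idea (toy corpus, tiny window, numpy SVD; the corpus, window size, and 3d truncation are all arbitrary choices of mine, and the geometric-product step is left as the open problem it is):

```python
import numpy as np

corpus = "the cat sat on the mat the dog sat on the log".split()
symbols = sorted(set(corpus))
index = {s: i for i, s in enumerate(symbols)}

# concordance matrix: counts of symbols co-occurring in a +/-2 word window
C = np.zeros((len(symbols), len(symbols)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if i != j:
            C[index[w], index[corpus[j]]] += 1

# vectorize the symbols: truncated SVD gives each symbol a 3d semantic vector
U, S, Vt = np.linalg.svd(C)
vectors = U[:, :3] * S[:3]

def similarity(a, b):
    va, vb = vectors[index[a]], vectors[index[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

print(similarity("cat", "dog"), similarity("cat", "on"))
```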
Divining Arts, and Stars foreknowing Fate,
Varying the divers Turns of Humane State,
(The Works of Heav'ns high Reason) We bring down
In Verse, from Heaven; and first move Helicon,
And it's green Groves, with unacquainted Rimes,
Offering strange Rites, not known to former Times.
-- The Sphere of Marcus Manilius,
translated by Edward Sherburne, Esquire
Kind of a frustrating day. Spent a lot of time trying to make precise some ideas I had late last night, and didn't really get all that far. The basic notion is to try to treat the Classes and Objects of object-oriented programming more analogously to Fields and Quanta in physics, and basically think about ways of extending the python language itself into the quantum domain, as opposed to merely providing a library that helps you interact with a quantum computer/simulation. What follows is just me thinking aloud about the basics, without pretense to originality:
The basic primitive is the notion of some n by n Hermitian matrix. Call it H. And the game is to interpret it in 1001 ways. H has eigenvalues and eigenvectors (l_0, v_0), (l_1, v_1), (l_2, v_2), etc. And so, H defines a vector space V_H of all the vectors whose directions are left unchanged by H, aka are multiples of the eigenvectors. For example, in the basis defined by H, H itself is the vector SUM l_i*v_i | over i from 0 to n-1. So off the bat, I want to think about Vector Spaces Defined By Hermitian Matrices ~ Classes and Vectors ~ Objects; and naturally a Class is an Object too. What's interesting is that this isn't the only way to look at a Hermitian matrix as a vector. The Hermitian matrices of a given dimensionality can be expressed as a vector in a vector space where the basis elements aren't vectors, but certain primal Hermitian matrices themselves. So for example, 2x2 Hermitian matrices can be expressed as a sum tI + xX + yY + zZ, where the basis elements are the identity and the classic Pauli matrices. And this generalizes to higher dimensions. For 3x3, the traceless matrices form an 8-dimensional vector space (spanned by the Gell-Mann matrices), and it goes on from there. But interestingly, no matter what size Hermitian matrix you get, there's always a generalization of the Pauli matrices X Y Z, so you can also associate the matrix with a point in 3d. (So using the Majorana representation, we could go nxn Hermitian Matrix -> n vector in V_H -> n-1 stars on the 2-sphere. And we could also make a 2-sphere with o-1 stars on it, where o is the dimensionality of the operator basis, corresponding to the representation of H in that basis. And we can also always get another little sphere, with just one star too. One benefit of thinking this way is that we can always stay geometrical!)
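Two of the "1001 interpretations" are easy to demo in numpy for the 2x2 case: the same Hermitian H read once as a class with eigenvector "objects," and once as a vector of coordinates (t, x, y, z) in the operator basis, recovered by trace inner products (a sketch with a matrix I made up for the occasion):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2)

H = np.array([[1.0, 2 - 1j], [2 + 1j, -1.0]])   # an arbitrary Hermitian H

# interpretation 1: H as a class, with its eigenvector "objects"
eigenvalues, eigenvectors = np.linalg.eigh(H)

# interpretation 2: H as a vector in the operator basis {1, X, Y, Z}
coords = [float(np.trace(H @ P).real) / 2 for P in (one, X, Y, Z)]
t, x, y, z = coords
assert np.allclose(H, t * one + x * X + y * Y + z * Z)
print(coords)   # [0.0, 2.0, 1.0, 1.0]
```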
You can take the Hermitian matrix as a whole, or one of its eigenvectors upgraded into a matrix, or one of its eigenoperators, and exponentiate it like e**(i.H.f.t), where f is frequency ~ energy, and t is time. The result is a unitary matrix that, when applied to a vector, "evolves it for a certain amount of time." Each Hermitian matrix defines a kind of pole around which we can rotate. And each U(dt) is a rotation around H by a certain amount dt. If you apply a Hermitian matrix A itself to a vector (say, in B), it perspectivally transforms it from vector space B to vector space A. Finally, you can collapse a vector against a Hermitian matrix, or a Hermitian matrix against a Hermitian matrix, to get a scalar, and this is called measuring.
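A sketch of "rotate around H, then measure against H" (numpy only; I exponentiate through the eigenbasis rather than reaching for scipy.linalg.expm, and the function names are mine):

```python
import numpy as np

def U(H, t):
    """exp(i.H.t) for Hermitian H, built from its eigendecomposition."""
    eigenvalues, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(1j * eigenvalues * t)) @ V.conj().T

def measure(H, v):
    """Expectation value of H in state v: the scalar you collapse to on average."""
    return (v.conj() @ H @ v).real

Z = np.array([[1, 0], [0, -1]], dtype=complex)
v = np.array([1, 1], dtype=complex) / np.sqrt(2)   # an X eigenstate

# rotating around the Z pole changes the phases but conserves <Z>
for t in (0.0, 0.5, 1.0):
    w = U(Z, t) @ v
    assert np.isclose(np.linalg.norm(w), 1.0)      # unitary: norm preserved
    assert np.isclose(measure(Z, w), 0.0)          # <Z> is conserved
```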
THE UPSHOT: Unlike in normal programming, where you can have some: int x = 10, in quantum programming, every type like "int" has to correspond to a Hermitian matrix defining a vector space, every variable like x has to correspond to some vector/matrix state, and the scalar 10 has to correspond to the measured value taken on by the state when viewed as that type, "int". Why go through the trouble? To take into account context.
There's two natural kinds of composite types. You can concatenate a bunch of H's together into one H (like basically along the diagonal). This is like "OR": you want to keep a bunch of weighted items separate and indexed by a list. Alternatively, you could tensor a bunch of H's together into a big H. This is like "AND": you have to consider all the possible relations between all the parts of the things involved: it's a structure. And you can obviously combine the AND's and OR's to build up whatever hierarchy you want. In the case of OR, you can access a part by just grabbing it and normalizing. In the case of AND, you can access a part, an "instance variable," by tracing over aka integrating over aka summing over the complement of the part, and the result is a Hermitian matrix of lower dimensionality which is the state of the variable. (And, of course, you can look at it as generating its own vector space within which others can live, etc.) Final remark: If you apply an operator to an instance variable, the operator is upgraded to the dimensionality of the whole before being performed: in other words, parts remember the wholes of which they are a part.
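Here's the AND/OR distinction in numpy (a sketch with hand-rolled helpers of my own naming): a direct sum for OR, np.kron for AND, and a partial trace pulling an "instance variable" back out, with the singlet state showing how a pure whole can have mixed parts:

```python
import numpy as np

def OR(A, B):
    """Direct sum: A and B side by side along the diagonal."""
    out = np.zeros((A.shape[0] + B.shape[0],) * 2, dtype=complex)
    out[:A.shape[0], :A.shape[0]] = A
    out[A.shape[0]:, A.shape[0]:] = B
    return out

def AND(A, B):
    """Tensor product: all the relations between the parts."""
    return np.kron(A, B)

def partial_trace_second(M, d1, d2):
    """Trace out the second factor of a (d1*d2)x(d1*d2) matrix."""
    return M.reshape(d1, d2, d1, d2).trace(axis1=1, axis2=3)

assert OR(np.eye(2), np.eye(3)).shape == (5, 5)   # a weighted list
assert AND(np.eye(2), np.eye(3)).shape == (6, 6)  # a structure

# a pure but entangled 2-qubit state: the singlet
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

part = partial_trace_second(rho, 2, 2)
print(part)   # identity/2: the part is maximally mixed though the whole is pure
assert np.allclose(part, np.eye(2) / 2)
```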
Now: a seemingly innocent question: what happens when we rotate a vector out of its vector space? Should it be allowed, etc? A perhaps related matter: When I partial trace to get my tensor piece, I always get a hermitian matrix state back. Even if I start with a pure vector state, its pieces are in the general case mixed states, aka arbitrary normalized sums of hermitian matrices. They can't be represented as simply vectors in a single vector space. You actually need two vector spaces, a left and a right one, to do justice to them: hence the matrix. (I think there's a universality thing here: like how in SVD, you can break an arbitrary matrix into some left unitary basis, a right unitary basis, and a set of singular values.) If you still want to think of the "density matrix" as being a vector, you have to find some minimal way of writing it as a weighted sum of Hermitian matrices, with the weights being the vector components and the number of weights being the dimensionality. The idea from before is like: if you keep adding Hermitian matrices willy nilly to each other, and look at the minimal ways of writing them as sums of Hermitians, you need no more than 3, 8, etc, as I said above: it's just the operator basis. But you might be able to write them down in fewer dimensions.
So back to my question: if I try to rotate a vector v -> v' that's supposed to be invariant under some H out of V_H, what happens? What should/could happen? This is what I've been meditating on. The idea is that here's where the gauge theory stuff comes into play--the symmetry is preserved, but a quanta is added of the appropriate type to remember the difference.
Anyway, to help distract me from this mess, I went back to the Astronomica, and spent an inordinate amount of time making snippets like the above and listening to the rain outside.
At any given moment, as a scientist, you're in a tricky epistemological bind. There's what you believe because you've proven it to yourself--which doesn't always mean you can prove it to others, depending on who they are. There's what you believe because you trust someone who has proven it to themselves--which doesn't always mean you have to interpret the proof in the same way as your source. There's what's useful to believe, at least provisionally, in all its saltiness, because it inspires more ideas; and there's what's prudent to believe when the discourse is confused, especially when you're first starting out. Most difficult of all, there's what you suspect has to be true, even though you can't work out each and every little step. If we're faithfully pursuing truth, then we're all still learning, sure. But at the same time, you don't want to lead anybody astray, waste anyone's time. Nevertheless, I believe it's worth sharing our "wild" speculations, not only because maybe someone else can fill in the gaps, but also because if we don't allow ourselves to take that step into the open air of hypothesis, delivered in language, that most flexible of forms, as opposed to math, who demands her cut of every conclusion, then no progress would ever be made. We're all wrong until we're right. It's just a matter of do you have the patience to be so wrong, for so long.
If we don't have the courage to be wrong, we set a very bad example intellectually for everybody. Paradoxically, it makes humanity as a whole seem dumber than it really is, because all the most challenging revisions to our worldview are happening behind closed doors, sanitized conclusions later posted on the corkboard outside. Meanwhile, everyone outside those doors is walking around feeling alone in their speculations, never knowing that great intellectual battles of enormous stake and moment have always taken place and are taking place still, and maybe they might actually be on the winning side. Or losing, for that matter. If we truly want to inspire all people to be scientists--by which I mean, someone who has got to prove it to themselves to be satisfied--scientists need to display their weaknesses and vulnerabilities for all to see, to permit identification with them.
What makes a scientist? I was reading from Ecclesiastes the other night to my parents. It's a beautiful paean to depression. There's nothing new under the sun. Just the materiality of addiction counterposed to the necessity yet simultaneous impossibility of seeing beyond it. God created the physical world for a reason: to test us to see if we'd reject it. Spirit, therefore, can only be conceived as both: transcendent to the body, but also absolutely imprisoned by it. This is the eternal truth, discovered, lost, rediscovered, but always enduring, ready to squash any argument, however convincing, with its paradoxicality. Just as a purely classical account of natural selection implies that lies will almost always outcompete truths if they're less expensive (by implicitly assuming that features of our experience can't be transubjective, thus calling into question the very legitimacy of the reasoning on which the theory is based), so Ecclesiastes implies that rational progress, measured against some universal standard, is in itself impossible: the paradoxical thought of the One Truth leads around in a circle--every time it tries to free itself from matter, it finds itself yet more tightly bound to it.
The ancient classical texts often frame the question: "Is the world created or eternal?" In contrast, I think the Kabbalists of the Middle Ages were on the right track when instead they wondered: "Could God's act of creation ever end?" In fact, reading these guys, drinking from the fount of Neoplatonic late antiquity, is like reading a book on modern cosmology, once you get past the funny beginning. The story goes that God created a world of perfect symmetry, but technically it wasn't *perfect* because it wasn't also God, so God tried to make up the difference by making the world overflow with perfection: or you could look at it like the world, despite being perfect, by virtue of its perfection, was perfectly longing for God, and what was more perfect was that it should attain it, despite its being impossible. Now the world of perfect symmetry is like a sphere, where the closer you are to the surface the more abstract things are, and the closer you are to the center, the more concrete they are; and in between are the angelic inner spheres representing the hierarchical heights of abstract thought. So the light of God's perfection illuminates the sphere from outside, but it's just too powerful, and shatters the sphere, the vessel that holds it, and the light pours down to the next vessel, and shatters that one, etc, all the way down to the center, where the final broken abstract symmetry leads to the concrete particular you, now enmeshed in a fabric whose warp is material causes and whose weft is semantic causes. But the light still overflows--it can't be stopped--and now it flows through us--and so by our free will, we can break symmetries ourselves--continue God's act of creation--and this is what we call "walking around and doing shit." The point is now you have a way of thinking about how "your doing shit" backreacts on the sphere, leading to a chain of consequences that echoes across the Cathedral of Abstraction.
It sounds like quantum gauge field theory to me, ya know. So I take modern physics to be empirically suggesting a dynamic at least reminiscent of some 1000 year old doctrine. Well, sans the "God" part--but wasn't that the point? So what if it took until the 20th century to do the calculations and experiments that allow us to vindicate the Kabbalists, merely by pointing to the machine, without having to say a word?
I bring this up not to suggest the Kabbalists knew quantum mechanics, nor to suggest, like Ecclesiastes, there's only One Truth to discover. I may be suggesting, in fact, I know, that some 20th century physicists read things like the Zohar, etc. But the larger point is: Sure, there's always One: it's you. So any account of your world has to have you in it, and we resonate when we sense a similar subjectivity to our own. Think of your subjectivity like your interface to the world. Now if some theory you have gives you an interface, however shitty, to the theory of "interfaces" themselves, and you prove some conclusions, then those conclusions will be understandable and confirmable by anybody who has also gone meta about interfaces, which is potentially anybody. And once you have that, you can appreciate the experimental evidence for both our freedom and our situatedness relative to others. But it's humbling because you realize your acts don't just move shit around, but also change, in due proportion, the very laws of reality themselves. If that seems paradoxical, it should. We couldn't, for instance, destroy the world--or could we? And if the law is that the laws change, why shouldn't that law change too--and what, then it can't change back? What kind of law is that? A shift in perspective is required: What seems like lawlessness, cosmic anarchy, is actually the perfect conservation of freedom itself. It's as much to say: the one thing we can't destroy is that paradoxical fount of light which is itself what gives birth to law in the first place: a free existence.
In my mind, what sets a scientist apart is good faith. No teacher knows everything, so they have to cater to what the student really needs, but that requires they really get to know each other; and a teacher only knows they are successful when the student teaches the teacher something new. So from afar, a scientist has to demonstrate good faith in a different way: by citations. The student needs to appreciate the shoulders of the giants on which we stand. The student follows the teacher through a thicket at night. It's very difficult if the teacher hogs the flashlight.
So anyway, here's some cool links.
"Why do Things Fall?" by Leonard Susskind. I've learned so much over the years from Susskind's lectures on youtube, and later his books. I kinda love the way he writes: his imagination is intensely geometrical, but the characters in his stories are all, as it were, Platonic entities. It's a brilliant combination of down to earth and out in the stratosphere. He also just put out results of a collaboration with Scott Aaronson, one of the best writers on quantum computer science and quantum complexity theory (and also kind of a philosopher): "Black Holes and Complexity Classes". You might consult "The Second Law of Quantum Complexity" with Adam R. Brown first. You can think of quantum complexity as "the minimum number of gates required to prepare a given unitary operator or a given state from an unentangled product state." So like: you try to break down some big unitary operator into 2-local unitary operators all tensored together, and we take the minimal number of such operators to measure the "complexity." Because as we all know, all you need are single qubit phase shifts and two qubit entangling gates to get universality. So it turns out quantum complexity tends to grow over time pretty much like the entropy of an appropriately chosen classical system. I think of it like: if you consider some hot little particles cooped up in a corner, entropy increasing means that the particles get more spread out over time, due to random collisions, until they are all basically in a state of statistical indistinguishability. So the longer time passes, the harder it is to find your needle in a haystack. But instead consider some hot little strings, and you let them go. As anyone who has ever plugged something in knows, wires take like a nanosecond to get tangled around each other. Like WTF earbuds.
So for random strings, entropy increasing actually means the strings tend to get more tangled up, and so a given string actually becomes *easier* to find in the sense that its location is highly correlated with all the other strings--the problem then becomes that it's maybe too highly correlated, and you need to find a way of drowning out the din in order to trace the subtle winding patterns of what can basically be found anywhere. (Of course, if you increase the temperature high enough, the strings get divided into those tightly bound and those so wild they can't be tamed by tangles...) But anyway, intuition aside, the point of many recent papers has been to suggest something like: increasing classical entropy ~ increasing quantum complexity ~ growth of spacetime. QM=GR, ER=EPR. Lenny is great at slogans!
On the subject of Scott Aaronson, I really enjoyed his recent blog post "Interpretive cards (MWI, Bohm, Copenhagen: collect ’em all)". And I also enjoyed the rejoinder by a certain Lubos Motl on his blog: "Aaronson, interpretations of QM, and fashions". A word has to be said here. As far as I've been able to reconstruct, the story of Lubos Motl is that he was this super smart cool kid from the Czech Republic who in the 90's posted online some papers on string theory, and he ended up at Harvard, as a student (at the same time as Maryam Mirzakhani, I think), but was eventually driven out by a cabal of leftists headed by Lee Smolin. For like a decade, he's been regularly blogging from home in exile. His writings on quantum foundations, I think, are excellent. His reviews of recent research, although I can't evaluate them personally, always in retrospect point me in the right direction, if I go hunting for more information. Then there's all the other posts which are, as we say now, alt-right trolling. My take on it is that Lubos Motl believes science has proven that freedom and subjectivity are central to the universe, and so logically, he's right wing, since the right wing in the Cold War era put freedom as its central plank (as opposed to, say, justice). His whole beef with the Intellectual Establishment is that they've made this political compromise with determinism: it's okay when gay people or whatever say they were born that way, but when I say I was born to X, you're like that's racist or something. And so, in public life, everyone goes around talking as if the GREATEST REVOLUTION IN HUMAN THOUGHT hadn't just happened. So he's like: I'm going to troll you all forever, and your arguing with me misses the point, which is that I'm flaunting my freedom to be wrong in your face.
Which isn't to say, he doesn't often have a point: he's basically only convinced by social formations that leave freedom invariant--whereas everyone else is trying to redistribute the freedom around while yelling they have no choice--and maybe he has a point. Subtract the bitterness, and Lubos Motl would be a perfect scientist. What is amazing about the post I link to is that for years Motl has been bashing Aaronson as this bullshit Democrat liberal leftist peon who thinks he's good at computers but isn't, or whatever--but once Aaronson puts his cards on the table w/r/t his interpretation of QM, Motl does this 180, not only relents, but writes this glowing review--as if to say, Don't you get it? That was the point I was trying to make all along. But then, that's Motl's thing: savage wit. (Maybe I've been unduly influenced by this piece by Masha Gessen.)
On John Baez's blog, there's a discussion "Linguistics Using Category Theory" that highlights some of the interesting work coming out of Oxford, by people like Bob Coecke et al. Over the summer, I devoured his and Aleks Kissinger's "Picturing Quantum Processes". On this note, John Baez is a true hero for many reasons, not least of which is his endless exuberant yet lucid writings which date back to the 90's. Let him teach you the praise song of the division algebras--there's a related video here. He has a social conscience too!
One implication of yesterday's discussion is that if *we* are full fledged quantum entities, then by interacting with the appropriate classical computer program, we could achieve quantum supremacy right now, in ourselves. Which is to say, if we truly learn the art of quantum programming, of thinking in quantum mechanics, in principle we wouldn't need quantum computers at all for a wide variety of tasks relevant to human beings. After all, can't every interactive classical computer program be seen as a potential "controller" of the quantum system of *everything that isn't it*? And couldn't such classical programs be classed by the way in which their "feedback" structure constrains any complementary quantum system joined to it?
I really have two guiding questions today. A) What is classicality? B) What is interaction? So I have two stories to tell. The first takes us on a journey back to Kurt Godel. The second is all about trying to think about gauge theories without the fields.
First: classicality and transcendence. I've come back to this paper a number of times over the years, and this morning, I thought I'd try to work through it again. "From Heisenberg to Godel via Chaitin" by Christian S. Calude and Michael A. Stay. Here's the upshot. Consider binary strings like some x = 100101. Call the length of a bit string |x|. You can associate to the natural numbers a binary representation. Consider the binary representation of n+1 without the leading 1 and call it B(n). |B(n)| is the floor of log2(n+1). The Kraft-Chaitin theorem tells us that if you have some sequence of natural numbers a_0, a_1, a_2, etc, which are the output of a computer program and satisfy:
SUM over all i of 2**(-a_i) <= 1
then you can turn that into a sequence of binary strings w_0, w_1, w_2 which are prefix-free aka no given bit string appears at the head of another bit string, with the delightful property that |w_i| = a_i aka the lengths are conserved under this transformation. Since we have prefix-free strings, we can define a self-delimiting Turing machine aka a Turing machine that computes by chugging along bit by bit and halting when the string is accepted as valid by "the function" or maybe never halting. Let's call it A. You can define the complexity of a bitstring x relative to A as the length of the shortest computer program on A that generates x. Because of the universality of all computers (!), if you bring in another computer B, the complexity difference between x according to A and x according to B is no more than the length of the shortest program on B that simulates A itself. Duh! It's called the "invariance theorem." So if you have two self-delimiting computers A and B, then there's some particular constant E such that for any bit string x,
L(x, A) <= E + L(x, B), where L(x, M) is the length of the shortest program on machine M that generates x.
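To make the prefix-free construction concrete, here's a toy sketch of the Kraft half of the story (the Kraft-Chaitin version works online, without the sort; everything here besides the theorem statement is my own illustration):

```python
from fractions import Fraction

def kraft_code(lengths):
    """Given lengths a_i with sum(2**-a_i) <= 1, return prefix-free
    codewords w_i with |w_i| = a_i: sort by length, then read each
    codeword off as the l-bit binary expansion of the running sum."""
    assert sum(Fraction(1, 2**l) for l in lengths) <= 1, "Kraft inequality violated"
    words = [None] * len(lengths)
    c = Fraction(0)  # running sum of 2**-a_i, kept exact
    for i in sorted(range(len(lengths)), key=lambda i: lengths[i]):
        l = lengths[i]
        words[i] = format(int(c * 2**l), 'b').zfill(l)
        c += Fraction(1, 2**l)
    return words

print(kraft_code([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```

Since each new codeword starts at the running sum and occupies an interval of width 2**-l on [0, 1), no codeword's interval can sit inside another's, which is exactly prefix-freeness.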
Great. Call N(x, B) the smallest integer whose binary representation produces x on computer B. Then:
2**L(x, B) - 1 <= N(x, B) < 2**(L(x, B)+1) - 1
We can define the uncertainty of N(x, B) as the difference between the upper and lower bounds aka N?(x, B) = 2**L(x, B). We can then rephrase the invariance theorem as: for any two self-delimiting computers A and B and a bitstring x, there's some constant E such that:
N?(x, A) <= E*N?(x, B)
Now recall Chaitin's constant OMEGA. It's a real number representing the probability that a randomly generated computer program will halt relative to some computer A. Chaitin proved that the bits of OMEGA(A) form a random sequence in the following sense. Fix a number of bits s, and label the first bits of OMEGA(A) as OMEGA(0, A), OMEGA(1, A), OMEGA(2, A), etc, up to OMEGA(s, A). Let s? = 2**-s. Then:
s? * N?(OMEGA(0, A)OMEGA(1, A)OMEGA(2, A)...OMEGA(s,A), B) >= some E.
The interpretation: there is an uncertainty relationship between the accuracy with which we can approximate OMEGA(A) on computer B and the complexity of the bit sequence so far relative to B--which is exactly analogous to the uncertainty relationship between position and momentum, etc, in quantum mechanics! Amazing!
Meanwhile, gauge theory approached from an oblique angle. Consider that classic time traveler's paradox. Suppose you can build a time machine. You go back into the past to visit Shakespeare and give him a copy of his own plays. He ends up plagiarizing from himself. There's no doubt: Shakespeare definitely wrote Shakespeare's plays. Yet the paradox is that from this point of view, the difficulty of writing Shakespeare's plays from Shakespeare's position has been shoved under some metaphysical rug. Picture Shakespeare's plays falling through a wormhole from the present where you are to the past where Shakespeare is. One natural conclusion would be that the complexity of the wormhole connecting you to Shakespeare has to be at least the complexity of Shakespeare's plays relative to Shakespeare when he wrote them. This is an attempt at answering the question, Where does Shakespeare's intellectual labor disappear to in the story? His labor manifests as the geometry of the wormhole that conveys his own plays back to him from some other vantage point.
It makes sense. What makes a thing what it is is its intrinsic difficulty. I could put it to you in a thousand ways, in words, pictures, mathematical equations, but if my representation doesn't do justice to the intrinsic algorithmic difficulty of the problem, then it's not a complete representation. But the thing is: you can only judge the difficulty of something relative to an observer; or, put another way, intrinsic difficulty is what is conserved when you switch perspectives. But just the same: is it not also true, that sometimes a very difficult problem suddenly strikes us as absurdly easy upon a perspective switch? If perspective switches conserve difficulty by definition, then if something simplifies or complexifies after a perspective switch then the surplus or deficit of difficulty must have been shoved under some rug somewhere. What to make of this? What is this rug?
It turns out that the way to think about such questions is gauge theory, and it's exactly what you need to describe multiparticle dynamics in physics. Here's the idea in a nutshell: You define your system in terms of its global symmetries: all the things you can do to the whole that leave it the same. So you consider all quantum states invariant under the global symmetry transformations, and they represent the global space in which the system lives. Suppose we have a given quantum state. It knows where it is in the global space. We can apply the global symmetry transformations and nothing happens: it stays put. But what happens when we consider a local piece of the system? How does a local piece know where it is? Suppose you can find representations of the global symmetry transformations that are of the appropriate dimensionality to act on a local tensor piece. In general, the local state will *not* be invariant when these transformations are applied. In other words, in general, a part of a system may appear to be in a different location in the global space than the whole. Taking this to be an absurdity, we "promote the global symmetry to a local symmetry." Given a local tensor piece, and appropriate representations of the global symmetry transformations, we determine the extent to which the global symmetry is broken locally and introduce an auxiliary system--"a gauge field"--that precisely repairs the damage, restoring the global symmetry at the local level, the actual asymmetry shoved under the rug of another quantum system that is coupled to the whole. It is this auxiliary quantum system that mediates the interactions between the parts of a whole.
Concretely, consider a spin-1/2 particle, like an electron. The electron has a four component global wave function because it needs to transform properly under the Lorentz transformations of special relativity. The 4 component state can be split into two two component parts, a two component spin with positive energy and a two component spin with negative energy. Now remember a two component spin can be represented up to phase by a point on the 2-sphere. It's just a qubit. So an electron can be seen as two qubits entangled in a certain way. But what about that phase? The global 4 component wave function is invariant under multiplication by a complex phase. We also demand that the two component spins are also invariant under multiplication by a complex phase. But the relative phase between the two spins actually does matter! So for the symmetry to be maintained, we introduce a boson to mediate between the fermions, a photon of light. If you change your phase locally, in order to ensure that global symmetries are met locally, the phase difference becomes a particle of its own which passes from the one spin to the other, communicating the change in relative phase: in this case, the particle can be represented as a point on a circle, an element of U(1). So A's own change of phase is invisible to A; it is B who receives news of it. There's an element of feedback, however: A changes its phase, which releases a compensatory boson that communicates the change to B. After the change, either B is invariant under the global symmetry transformations or not. If not, then it releases a boson of its own, etc. And they toss the ball back and forth, trying to get to a point where A and B can stop breaking each other's symmetries and mutually coexist in inertial motion. In brief, and I'm sure I've misrepresented much in this survey, this is the basis of quantum electrodynamics, quantum chromodynamics, the whole standard model.
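The phase bookkeeping above is easy to check numerically: a global phase is invisible to measurement probabilities, but a relative phase between two components shows up the moment you interfere them (here with a Hadamard gate; the snippet is my own illustration):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

def probs(psi):
    """Measurement probabilities in the computational basis."""
    return np.abs(psi)**2

# A global phase changes nothing observable:
assert np.allclose(probs(plus), probs(np.exp(0.7j) * plus))

# A relative phase is observable after interference:
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
minus = (ket0 + np.exp(1j * np.pi) * ket1) / np.sqrt(2)  # relative phase pi
print(probs(H @ plus))   # ~ [1, 0]
print(probs(H @ minus))  # ~ [0, 1]
```

The two states are indistinguishable if you measure them directly; only the interference step reveals the relative phase, which is why it has to be communicated rather than being locally visible.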
Returning to the landscape of complexity, we conclude: if you switch perspectives on a problem and find the difficulty has magically increased or decreased, this signifies the emission or absorption of a force-carrying particle conveying the difficulty elsewhere.
A journey of a thousand miles begins with a single step. On February 12th, 2018, I created a single file which is this file, index.html. I just arrived back in New York from Moorestown and on the train here, I finished "Coming of Age in the Milky Way" by Timothy Ferris, which I can recommend very highly. It retells the whole history of astronomy from prehistory to the present--and basically the whole history of astronomy is as much to say: the whole history of science, since the interconnections are always dense. Ferris is an extraordinarily good storyteller, and really does justice to the freaks who figured it all out, from doglike Kepler, the mendacious Galileo, and Tycho Brahe with his metal nose and a dwarf fed scraps under the table. Ferris doesn't bullshit the science, and also knows when to respect mystery, a rare combination. He knows how to weave the long arc. The stars drive Newton to his mechanics, his mechanics gives birth to the machines drilling into the earth for coal, turning up eons of geological history. Lyell's geology--and Malthus--inspires Darwin on his voyage. Natural selection! It's almost tautological. But it does require billions of years. Various Christians et al resist the chronology. What I hadn't realized, however, until I read Ferris's book is that for the entire 19th century there was no definitive evidence that the world was actually billions of years old. They kept pushing it back, from thousands, to millions, to tens or hundreds of millions. Just because you understand how the solar system holds together via gravitation, doesn't mean you know the scales involved. They were only just nailing down the size of the solar system, didn't understand spectroscopy or nuclear reactions, etc.
What's really interesting is that it was a desire to vindicate Darwinism that drove a lot of research into astrophysics, and by the 20th century, they were finally able to establish that the earth is about 4.5 billion years old, which is appropriately long for the biologists. So in the 19th century, we could say that Darwinism was radical not just because it challenged some people's dogmatic ideas about the earth and its tree of life, but because it laid bare this huge chasm between what the biologists were able to prove and what the physicists were able to prove. And by the 20th century, that chasm had been bridged. So Darwinism was radical and great because it drove new research in all quarters--whereas, whatever you want to call it, "Creationism" was, let's say, less of a catalyst for experimentation and innovation. A similar case occurs in the 1940's and 50's. Hoyle decided the Big Bang was some Creationist bullshit. A lot of physicists at the time were speculating that much of the periodic table of the elements had to be forged in the Hot Beginning since no one could prove the stars could forge much more than the lightest elements. Now later there was a lot of good evidence for a Hot Big Bang, but in the 50's, Hoyle's position drove the better research: he spent the next decade or so with collaborators demonstrating that the whole periodic table can be generated by the stars with no need to appeal to an initial state.
Anyway, to bring it around to my hobbyhorse, I kinda think that today, there's a particular materialistic computationalist stance that's taken towards fundamental questions about consciousness and mind that's holding back research. It's clear that physics demands an observer for the calculations to begin, but physics doesn't tell you what the observer is. Physics just traces the shape that subjectivity has to take when it is situated amidst others. Physics allows us to define the limits of objectivity for a given observer, which is a kind of negative image, a silhouette, of the subject. So we watch the shadow of some subject evolve in accordance with the equations. But the relevant question is how do we describe in equations the sort of shadow *we* cast. Then we can correlate the changes in the mathematical shadow with *our* inner feelings and perceptions, thus attaining new heights of self-awareness. I suspect that headway can be made in this direction without necessarily having to muck around with "neurons," especially as mathematically speaking, quantum mechanics can be regarded as a funkified theory of complex valued neural networks, with "quantum collapse" being analogous to the non-linearity introduced at each layer of the neural network, which, by the way, is what makes them Turing complete. My take on it is that eventually we'll regard not consciousness as the epiphenomenon, but, with some delightful irony, the neurons themselves, viewing them as "material quanta situated relative to some particular outside observer," etc. It remains to be seen what natural selection will look like when rewritten from this perspective!
To be fair, although to some people this might seem totally out of bounds, I'm hardly the first to publicly espouse this view. On the sidelines, people have basically been yakking on about it or something like it since the birth of quantum mechanics itself, so like practically 100 years. But what is 100 years in scientific time? I think it's just another case of physics leapfrogging biology, whereas once biology had leapt over physics. The problem isn't just scientific inertia, or the spell of computers, or a capitalistic mindset, but also: we don't have quantum computers. Classical computers took us real far in predicting particle collisions, and these days, more and more, relatively simple many body systems. But as anyone who has ever tried to code up a simulation of a quantum computer on a classical computer knows: you can't do any shit that's halfway complicated, and hardly at the scale for say, a social network that actually used the laws of QM to help mediate people's subjectivity. So we have to wait for our equivalent of those big 19th century drills.
But as I say, I think we can make progress in the meantime by proceeding conceptually, as long as we're careful. I mean, isn't that what mathematics is all about? Let's not blind ourselves like Laplace who didn't take the time out of his busy life to wonder about how difficult it actually is to solve the n-body problem in Newtonian gravity, and skipped forward to absolute cosmic determinism without appreciating the landscape of complexity that lay before him, with all of its geysers of chaos, nor appreciating the metaphysical difficulty involved in the notion that all the variables in the universe can even be *written down* relative to a single observer in such a way that the future of the whole can remain predictable with certainty. On purely logical grounds, if the future of a whole system could be predicted with certainty, then naturally that would exclude knowledge of this by a part, since otherwise the part could use that knowledge to simply do the opposite of what the whole expects, thus defying the prediction. If the entire universe were absolutely deterministic, then logically, that would make physics itself impossible! But behold physics. QED.
Maybe in the 19th century, this would have read as fluff in the light of Newton, but in the light of quantum mechanics, I think there's some meat on this bone. But let's take it from another direction. I think a lot about the idea of a quantum operating system. Inspired not least by Roger Penrose, I've spent the last two months trying to build a 3d graphical user interface for spin-J particles, taking advantage of Ettore Majorana's "stellar representation," which represents any spin-J particle as 2J points on the 2-sphere. These 2J points correspond to the states of 2J bosons. But the particle can also be factored out in other ways, for example into "distinguishable" tensor pieces, by partial tracing over the higher-dimensional complement of a given lower-dimensional piece you are considering. Concretely, you could represent an 8 dimensional quantum state, which is some unit complex vector up to phase, as 7 points on the surface of a 2-sphere, but also as 3 points in the interior of the sphere, corresponding to the fact that 2**3 is 8, so you can view the 8-dim state as composed of 3 2-dim pieces. There's all sorts of interesting connections to topology and Hopf fibrations. It's ridiculously simple: you associate to each n-dim complex state vector a certain polynomial whose complex roots stereographically projected to the 2-sphere are the "Majorana stars." Then factor the 8-dim state into 3 qubit pieces, each of whose reduced density matrices is Hermitian, and so can be decomposed in the Pauli basis to give x, y, z coordinates within the sphere. I made it so you could choose a fixed star to rotate another star around--in other words, use one Hermitian matrix as a pole, and evolve a star via a unitary matrix in a spherical circle around it.
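Here's a numerical sketch of the two representations just described: surface stars from the roots of the Majorana polynomial, and interior points from the Bloch vectors of the reduced qubits. Sign and projection conventions for the polynomial vary across the literature; this is one common choice, and the code is my own illustration, not the app's actual source:

```python
import numpy as np
from math import comb, sqrt

def majorana_stars(amps):
    """Surface stars of a spin-j state (n+1 = 2j+1 amplitudes): roots of
    p(z) = sum_k (-1)**k * sqrt(C(n, k)) * amps[k] * z**(n - k),
    inverse-stereographically projected onto the 2-sphere."""
    n = len(amps) - 1
    coeffs = np.array([(-1)**k * sqrt(comb(n, k)) * amps[k] for k in range(n + 1)], complex)
    lead = int(np.argmax(np.abs(coeffs) > 1e-12))  # a degree drop means roots "at infinity"
    stars = [(0.0, 0.0, 1.0)] * lead               # which project to the north pole
    for z in np.roots(coeffs[lead:]):
        d = 1 + abs(z)**2
        stars.append((2*z.real/d, 2*z.imag/d, (abs(z)**2 - 1)/d))
    return stars

def bloch_points(psi8):
    """Interior points of an 8-dim state viewed as 3 qubits (2**3 = 8):
    each qubit's reduced density matrix, decomposed in the Pauli basis,
    gives an (x, y, z) inside the sphere."""
    rho = np.outer(psi8, np.conj(psi8)).reshape([2] * 6)
    paulis = [np.array([[0, 1], [1, 0]], complex),
              np.array([[0, -1j], [1j, 0]], complex),
              np.array([[1, 0], [0, -1]], complex)]
    pts = []
    for q in range(3):
        row = ['a' if i == q else chr(ord('x') + i) for i in range(3)]
        col = ['d' if i == q else chr(ord('x') + i) for i in range(3)]
        reduced = np.einsum(''.join(row + col) + '->ad', rho)  # partial trace
        pts.append(tuple(float(np.real(np.trace(P @ reduced))) for P in paulis))
    return pts
```

For example, majorana_stars([1, 0]) puts the single star of a spin-1/2 state at a pole, while bloch_points on |000> returns three unit-length vectors: pure reduced states sit on the surface, and entanglement pulls the interior points inward.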
And I also made it so you could apply Lorentz transformations aka Mobius transformations to the surface stars (a la Penrose) giving an impression of how the "parts" of a quantum system (whatever they are metaphysically) appear when viewed from different vantage points in spacetime. (There is a physical justification for this: the Majorana stars represent directions you can point your Stern-Gerlach apparatus in, and when you send the particle through, there's always a potential spin state that has 0 probability. They represent, as it were, the "spherical zeros" of the function representing the system.) And sort of as a joke, I added an astronomy feature, where you could use as the seed for your 8-dim quantum state, the locations on the celestial sphere at a given time, latitude, and longitude of the 7 classical planets. You can play around with it here.
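Applying a Lorentz transformation to the stars amounts to a Mobius map z -> (az + b)/(cz + d) on the roots before projection; a boost along the z axis, for instance, acts as a dilation z -> exp(r)*z, where r is the rapidity. Conventions again vary, so treat this as a sketch:

```python
import numpy as np

def mobius(z, a, b, c, d):
    """The Mobius map z -> (a*z + b) / (c*z + d) on the complex plane."""
    return (a*z + b) / (c*z + d)

def project(z):
    """Inverse stereographic projection of a root onto the 2-sphere."""
    d = 1 + abs(z)**2
    return np.array([2*z.real/d, 2*z.imag/d, (abs(z)**2 - 1)/d])

# A star on the equator, as seen by a boosted observer, slides toward a pole:
# relativistic aberration of starlight, acting on the Majorana roots.
star = 1 + 0j
boosted = mobius(star, np.exp(1.0), 0, 0, 1)  # boost of rapidity 1 along z
print(project(star))     # [1. 0. 0.]
print(project(boosted))  # z component now positive: dragged poleward
```

Rotations correspond to the unitary Mobius maps (SU(2) inside SL(2,C)), which is why they move the stars rigidly, while boosts squeeze them together.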
So having done that, I've been thinking about how to scale up to multiparticle systems. Spheres within spheres. Spheres passing stars between them. But the more sophisticated I get, the more everything slows to a crawl... I mean, you have to solve some freakish polynomials and more at each time step, etc, if you actually want to provide an *interface* and not just solve some well-defined problem. So yeah, it would be great to have a quantum computer that could solve those polynomials just by being! But you can't help but appreciate the irony of having to build up and collapse a whole quantum state at each time step just to provide some classical data for a classical simulation of *a fucking quantum computer*.
There's some python code somewhere that instructs a quantum computer which states to prepare, which transformations to induce, and which collapses to trigger so that a numerical result can be returned from the function. So you could go merrily on your way programming classically, with all of your functions being systematically given quantum speed ups by clever mathematicians behind the scenes. But ultimately that's like programming for a single core when you've got a bazillion waiting in parallel. There's gotta be a mindset shift, as there was for object oriented programming, functional programming, etc. It's like film. You have these still images on a reel, and a human can look at them one at a time, and although they represent motion, we don't see it as moving. But if you show us the frames at 24 fps, we perceive their natural motion. The same is true of a video game. The underlying state of the game is translated into the shifting colorful pixels that give you the illusion of a cinema under your control. Different perspectives are calculated for different players from the underlying objective state of the game, which awaits our input. Now suppose I have a quantum computer, and I want to provide a video game like interface to the quantum state--I want to perform a unitary operator on the state, which is to say, rotate a star around some axis, which changes the locations of the other stars in response. I can direct the quantum computer to do that, but I'd have to collapse a whole new quantum system at each time step to help calculate the motion continuously in the classical visualization of the original quantum system because I can't peek inside the quantum computer while it's calculating. Duh!
So what you really want is a kind of feedback loop: a quantum system such that when you collapse it in some specified way, you get numerical values which are the input to a polynomial time classical simulation of that same quantum system that predicts with certainty its time evolution over a certain interval, upon which time the quantum state is collapsed again and the new seeds for the next classical computation are sown--and all this despite the fact that a user can freely and continuously apply arbitrary unitary operators in the classical simulation of the quantum system--I emphasize: whose *graphical representation* can be calculated in polynomial time--which are also applied (without collapse) to the quantum system itself, seamlessly intertwining them, like spirit and matter. And obviously it depends on your choice of graphical representation, your choice of keyboard, mouse, joystick, etc--your choice of what you want to view and interact with continuously, and what you want to view and interact with discretely. (Remember the fundamental duality: every circuit based quantum computation, aka some continuous unitary evolution, can also be represented as a measurement based quantum computation defined for some pre-entangled observers, with instructions for classical computation on and communication of their measurement results.)
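As a cartoon of that loop (everything here is hypothetical--the function names, the collapse schedule, the trivial "rendering"--just to fix the shape of the idea): the user's unitaries steer the state continuously, and every few frames the state is collapsed so the classical visualizer can be reseeded with definite data:

```python
import numpy as np

rng = np.random.default_rng(0)

def feedback_loop(psi, steps, user_unitary, collapse_every=10):
    """Apply the user's unitary every frame; periodically 'collapse' the
    state and record the outcome as a seed for classical rendering."""
    seeds = []
    for t in range(steps):
        psi = user_unitary(t) @ psi                  # continuous unitary steering
        if t % collapse_every == collapse_every - 1:
            p = np.abs(psi)**2
            k = rng.choice(len(psi), p=p / p.sum())  # projective measurement
            psi = np.zeros_like(psi)
            psi[k] = 1
            seeds.append(int(k))                     # classical seed for the visualizer
    return psi, seeds

theta = 0.05  # a small rotation per frame, standing in for user input
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
psi, seeds = feedback_loop(np.array([1, 0], dtype=complex), 50, lambda t: U)
```

The real version would run the polynomial-time classical simulation between collapses, with the seeds supplying the definite data the visualizer needs to stay veridical.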
So provisionally, I'm thinking of a quantum operating system as something that takes as input a choice of a) classical computer graphics primitives and b) mouse/keyboard/joystick input primitives c) arbitrary constraints, and prepares a quantum system and classical visualizer that together allow you to play the quantum system like a video game, the continuous classical world's geometry and responsive motion contrived so as to balance the intrinsic uncertainty of the fast quantum system against the periodic needs of the slow, yet definite classical interface to that system that wants to remain veridical despite user interactions.
Of course, physics itself considers the most general possible interfaces. The question is how we can capture mathematically the ones that are most relevant to human beings. Naturally, we begin with the interface being Mac OS X or whatever, but at a certain point one wants to break free of the limitations of screen and keyboard and mouse and consider the world at large, to the extent that it is under our control, as a potential classical interface to a quantum system precisely tailored to its limitations as an interface, a body for a quantum mind to ride on.
Which brings us back around to our main point. What is mind? To see it correctly requires you to stand on your head for just an instant. By passing light through a prism, you can split it into component colors. By passing a chord through a forest of resonators, you can split it into component notes. By passing a charged particle through an inhomogeneous magnetic field, you can split it into component angular momentum states, a sum of orbital and spin angular momentum states, which are entangled with quantized spatial location states. By passing a thought through a human mind, it is split into component ideas, analyzed into particular perceptions, projected into your reference frame, so you can see it from your point of view.
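That kind of splitting has a precise linear-algebra reading: any Hermitian operator supplies an eigenbasis, and "passing a state through it" means decomposing the state into components along that basis. A small numpy sketch, where the operator and state are arbitrary illustrations:

```python
import numpy as np

# The "prism" as a Hermitian operator: its eigenvectors are the component
# "colors," and any state splits into amplitudes along them.

prism = np.array([[2.0, 1.0],
                  [1.0, 2.0]])            # a Hermitian "analyzer"
eigvals, eigvecs = np.linalg.eigh(prism)  # eigenvalues ascending: 1.0, 3.0

thought = np.array([1.0, 0.0])            # an incoming state
components = eigvecs.T @ thought          # amplitude along each eigenvector

# The components reassemble the original state exactly:
reconstructed = eigvecs @ components
print(np.allclose(reconstructed, thought))  # True
```

The decomposition loses nothing--summing the components recovers the original state--but each component taken alone is only one point of view on it.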
Those who "champion the use of reason" in their propagandizing, earnest or otherwise, I think, are misframing the issue. The question is whether or not you believe all disagreements can be reconciled with a common vocabulary without distorting people's intentions. Turned around, the question is whether you believe you can think for yourself. In the sense of reason as judgement, to be purely rational is to live purely publicly, at the continual mercy of the judgement of others. When a thought exists in mind, of itself, it exists in a multidimensional state, and experiences effortless continuous unitary evolution, viewing the interactions of its parts as if from above, but when an answer, a judgement, is required to be registered in the outside world, in the body, the thought has to collapse to a particular eigenspace as a whole. To demand someone interact with you rationally is to demand that you lay all your cards on the table, all your steps, all your collapses, all your acts of sorting and ordering, in a way to be judged consistent by your interlocutor--it is to demand that you perform your thoughts publicly, in a way that is legible to others, as opposed to merely perceptible within oneself, and there are many different kinds of readers in the world. But any thought that can be had by a sequence of judgements expressed interactively through the body can also be had by a continuous unitary evolution of mind, if the conditions are right. If by reason you mean judgement, to "champion the use of reason" is merely to champion letting what is outside you think for you in every case. To analyze, to reason, to separate, to distinguish, to filter, to judge, to order, all these things are a product of "passing through." Non-judgement is "being it", unitary evolution, as opposed to judging it. Only by suspending judgement can you vault yourself into a god.
This isn't to say that judgement can't be cultivated--and this is what the cultivation of the body is; but it is to say that most people are probably dealing with a problem of too much anxious judgement, not a lack of it! Unable to stop thinking, they try by various sedative means to at last disappear from exhaustion into an uneasy sleep. What they lack is a route to the privacy of "non-thought": absolute safety, if only for the blink of an eye.
In other words, in cases of anxiety there's a misfit between the environment and the subject with regard to what's inwardly perceived as unitary (non-work aka inertial/geodesic motion) and what's perceived as collapse after collapse after collapse after collapse (work aka acceleration aka forces aka interaction aka curvature aka stress) by the body/mind interface in question. It's 2018--am I allowed to talk this way yet? Lol. The human mind tends to get stuck in loops of circular reasoning, whereby one act of shifting perspective leads to another leads to another leads to another leads back to the original, able neither to stay put nor to adventure off and survey the rest of the world. Each perspective switch, each act of judgement, each choice of representation, is registered on the body. Flipping it around, in a case of anxiety, it's as if the classical interface aka the brain and environment et al. is querying your quantum state at too fast a rate for you to even have a single goddamn unitary thought, so you just go wandering around in circles in phase space. The question is how to have a thought without the motion of the body at all. Simple unitary motion, which can be harnessed to solve any problem with zero added energy given enough time, if you set up the initial conditions right.
In any situation, along some dimension, we can always dissent, right? And refuse the supposed lawfulness of the world precisely in proportion to our knowledge of it?
I looked up on archive.org when I registered my first domain name nessiness.com. It was in 2004 which was fourteen years ago now. I must have been a freshman in high school. I remember how buying a domain name back then felt like buying a DOMAIN, a DOMINION, a tiny principality of the internet to settle down in and call one's own. Like a fucking hipster, I was the first in my crop of kids to get a website when no one had a website; and naturally when everyone got websites and, egged on by advertisers, started spamming the world with their pictures and video, I neglected mine.
I admit, I have been an inveterate, ungenerous lurker. It's been great. I've learned nearly everything beautiful, true, and worth knowing on the internet, and I mean the internet of *text*. I had what no generation of human beings had before me: all the libraries of the world, cracked open and spilled out for all to see. But when it came to my own work, which by now consists of a large body of poetry, short stories, novellas, essays, scientific treatises, songs, symphonies, archaic dances, endless computer code, I've been very reluctant to release any of it to the public. I feared losing my anonymity, that precious modest irreplaceable thing. But more than that, living in the Library of Babel itself is paralyzing. The eyes of all the world, past and future, are upon you. On the virtual shelf is very nearly every great thing a human has ever accomplished on record--and it's difficult to have illusions about measuring up. It's also very loud in this library, and I really didn't want to add to the din. I heard they're shipping the books to some warehouse in NJ. It's almost too loud to think.
All of this is to say, I have a backlog of material that I really want to share with everybody, and it's a daunting task putting it all together, scattered as it is across a dozen hard drives, indexed by obscure file systems... Well, it's time to defrag. The only sane way I can think of approaching this is as an archeologist, or geologist even, peeling back layer after layer of digital strata, in hopes of reconstructing a little bit of what once was. My plan is to work in reverse chronological order through every single file and folder I've ever created and/or downloaded, and post my haul here daily.