Saturday 28 November 2015

Physicists should take pride in their work

All of my posts thus far have been about presenting information or telling a story. This one is more of an opinion and a rant.

Too often, speaking with other physicists, I have had conversations that go like this:

Me:  What do you work on?

Them: Condensed matter physics.

Me: Oh yeah? What aspect?

Them: Uhh...materials physics.

Me: Ok, what kind of materials?

Them: Uhhhh...solid state.

Me: What kind of solid state physics?

Them: Uhhhhhhhh systems with many atoms.

This will continue for as long as I have patience, and I will get no information about what the person actually does. If I care enough I can look up their supervisor's research webpage and find out that they actually fire x-rays at superconductors or something like that. I used condensed matter as an example but it's not limited to that. I've had the same conversation go "black holes"..."general relativity"..."collapsed stars."

Physicists, you should not do this. It is annoying, you're selling yourself short, and it's disrespectful to the person asking the question. If the person asking you the question has a degree (or several!) in physics or a related discipline, they'll be able to understand the elevator-description of what you do.

Sometimes I think people give these non-answers because they're embarrassed about what they work on. There is this false premise that there is "real physics" and what they're doing is not it. People working on semiconductors are embarrassed that they're not working on quantum gravity, and the people working on quantum gravity are embarrassed that they're not working on semiconductors. I once had a guy who did simulations of relativistic nuclear collisions, which is like the most physics you can get in three consecutive words, tell me he wasn't doing real physics (although he wouldn't tell me what he actually did!). It's all real physics. No matter what you are working on, you are pushing the boundaries of human knowledge, even if it seems extremely specialized or boringly incremental or removed from reality.

If somebody is asked a question, it is rude not to answer it. In a physics department, it's reasonable that you'll be discussing physics with other people who are well-read in physics. They can handle the truth. If the answer is not detailed enough, the person can ask for more information. If the answer is too detailed, the person can ask for clarification, and try to understand by asking more questions. This type of conversation generally involves two people with differing levels of knowledge on a topic finding a way to meet in the middle. When one side refuses to meet in the middle, either by asking to be spoon-fed or by withholding information, it's not fun.

Now, there is an art to knowing how technical a summary of your work to give. Generally it depends on whether you're talking to someone in the same research sub-field as you, the same science as you, other scientists in different disciplines, or non-scientists. The main thing I'm ranting about is physicists withholding information about their work from other physicists, but I imagine it happens in other fields too.

For my Ph.D. work, I'd generally tell people that I looked at DNA molecules squeezed into very small tubes, to measure how squishy the molecules are. If they asked, I'd tell them how it relates to genetic sequencing technology, and what the relevant physics governing the squishiness of DNA is. If they really wanted to know, I'd talk about screened electrostatic repulsion and conformational degeneracy and the like. But I wouldn't just mumble different permutations of "biophysics...biological physics...physical biology...physics of biological systems..."

So, when someone asks you what you work on, don't be vague. Be proud.

Sunday 22 November 2015

What do we know about extra dimensions?

As far as we know, we live in a world with three spatial dimensions. There are some theories positing that it has more than that. Here, I will discuss the experimental searches for extra dimensions: how they work and what they have found.

Spoiler alert: no evidence of extra dimensions has been found.

But just because no evidence has been found, it doesn't mean we haven't learned anything. In physics, there is much to be learned from measuring zero, because it can tell you the largest value that something could have while still evading detection. I talk a bit more about measuring zero in my article on photon masses and lifetimes. 

The searches for extra dimensions do not involve trying to draw seven lines perpendicular to each other. All of them require making some sort of theoretical assumption involving extra dimensions, seeing what that theory implies, and looking for those implications. The strength of those implications, or the constraints against them, gives information about the extra dimensions that would lead to them. However, whatever the theory is, it must be reconciled with the fact that we appear to live in a three-dimensional universe. The reconciliation is usually that the extra dimensions are really small.

Why extra dimensions?

String theory and its relatives require that spacetime have 10 or 11 or 26 dimensions in order for certain calculations not to give infinite results. A lot of work goes into figuring out how these can be "compactified" so that it still seems like we live in three dimensions. String theory, being a theory of quantum gravity, generally has its extra dimensions on the order of the Planck length. The theories I'll be talking about are theories of Large Extra Dimensions, "large" being relative to the extremely small Planck length. Some of the problems these attempt to solve are the hierarchy problem (that gravity is so much weaker than the other forces) and the vacuum catastrophe (that the measured energy density of the universe is 100 orders of magnitude smaller than the prediction from quantum field theory).

Our three dimensional universe as the surface of a higher dimensional universe. Source.


The best-known example of such a model is the Randall-Sundrum model. Its first author, Lisa Randall, is now hypothesizing that galactic dark matter distributions may lead to mass-extinction events on Earth. That is not really relevant, but it's cool. This model posits that we live on a three-dimensional surface in a four-dimensional universe (actually it's one higher in both cases, because of time), and gravity can propagate through the bulk of the universe while the other forces are constrained to the surface. These types of models require that the extra dimensions have some characteristic size, in contrast to our three spatial dimensions, which are infinite. How can a dimension have a size? Well, imagine if you lived on a really long narrow tube, so narrow that it seemed like you just lived on a one-dimensional line. The second dimension, the one you don't notice, has a characteristic size: the circumference of the tube. In fact, I found a picture demonstrating that on google images.


Tests of Newtonian Gravity at Short Distances

Gravity's inverse-square behaviour is a consequence of the fact that we live in a universe with three spatial dimensions. That is good for us, because an inverse-square force is one of the only kinds that can give stable orbits. The inverse-squareness of gravity is attested by the elliptical orbits of the planets around the sun, but it was first measured on a terrestrial scale by Cavendish in 1798, who observed the rotation of a torsional pendulum surrounded by massive lead spheres, which acted as gravitational sources. I once tried this experiment with my undergraduate lab partner Bon, and it was awful.
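A quick way to see the connection (sketched here with Gauss's law): in a universe with $D$ spatial dimensions, the gravitational flux from a point mass spreads over a sphere whose surface area grows as $r^{D-1}$, so

$F(r)\propto\frac{1}{r^{D-1}}$

which is inverse-square only for $D=3$. At separations smaller than the size of a compact extra dimension, gravity would "see" the extra dimensions and fall off faster, e.g. like $1/r^{3}$ with one extra dimension.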

Within the Randall-Sundrum model of extra dimensions, it is expected that gravity will behave differently over distances smaller than the size of the extra dimensions. This was posited in order to explain the discrepancy between the observed density of dark energy and the much much much larger prediction based on electromagnetic vacuum energy. So, Kapner and friends from the Eot-Wash group* simply measured Newtonian gravity with a Cavendish-type experiment at shorter and shorter distances, shorter than anyone had measured before, down to a 44 micron separation between the source and test masses. The observed dark energy density has a characteristic length-scale of 85 microns, and they managed to get below that.

"We minimized electromagnetic torques by coating the entire detector with gold and surrounding it by a gold-coated shield."
The experiment functioned in such a way that if gravity were not following an inverse square law, there would be an extra torque on the pendulum, which they could then detect. As far as I know, this is the most precise lab-scale gravity experiment there is. They found no departure from the inverse square law down to that length, but were still able to learn some things. By fitting their results to a modified gravitational Yukawa potential (Newtonian gravity plus an exponentially decaying extra part), they could put experimental bounds on the strength and characteristic length scale of the non-observed deviation. They found that a force of equal strength to gravity would have to be localized to dimensions smaller than 56 microns, or else it would have been detected. So, we know that mites crawling on human hairs are not subject to extra-dimensional gravitational forces.
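For reference, the Yukawa-modified potential used in this kind of fit is conventionally written (my notation here, which may differ from the paper's) as

$V(r)=-\frac{Gm_{1}m_{2}}{r}\left(1+\alpha e^{-r/\lambda}\right)$

where $\alpha$ is the strength of the extra force relative to gravity and $\lambda$ is its range; the 56 micron figure is the bound on $\lambda$ when $\alpha=1$.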

The Large Hadron Collider

The Large Hadron Collider (LHC) at CERN on the French-Swiss border was built to smash protons (and sometimes lead nuclei) together at almost the speed of light to see what comes out. The main thing they were looking for was the Higgs boson, which was found in 2012. It also looks for other undiscovered particles, which I call splorks, and deviations from the predictions of the Standard Model of particle physics, which may be indicative of "new physics" going on in the background. One of these new physicses is Large Extra Dimensions. How are post-collision particle entrails related to extra dimensions?

A graviton penetrating the brane that is our universe. I'm not sure if this picture makes things more or less clear. Source.


In the paper I'm focusing on, they looked at the number of monojets detected. A jet in particle physics is a system of quarks that keeps creating more pairs and triplets of quarks as it is pulled apart. Quarks are weird. A monojet is...one jet. These are typically produced at the same time as Z bosons, and I gather they are less common than dijets. The ATLAS collaboration, one of the two biggest experiments at the LHC, under the alphabetic supremacy of Georges Aad, looked at how a model of large extra dimensions would lead to monojet production. In this model, we live on a three-dimensional (mem)brane in a higher-dimensional universe. Gravity can propagate through the bulk of the universe, while other interactions can only happen along the brane. In this scenario, quarks could be created in the collisions paired with gravitons, which are not detectable by ATLAS**. These single graviton-associated quarks would lead to monojet events, in greater number than predicted by the Standard Model. If this explanation seems incomplete, it is because I do not fully understand how this works.

So Georges Aad and his army of science friends looked at the monojet production data, scanned for excesses above the Standard Model, and tried to fit them to the extra-dimensional brane quark model. They found that for two extra dimensions in this model, the upper experimental bound on their size was 28 microns. It surprised me that this is the same order of magnitude as the Newtonian gravity measurement. For larger numbers of dimensions, the bound gets smaller.

There is a ridiculous amount of data produced at the LHC and a ridiculous number of ways to analyse it. For each of the many theories of extra dimensions out there, there may be multiple ways to probe it with LHC data, but I have just focused on one.

Pulsar Constraints

The Fermi Large Area Telescope is a space-borne gamma ray observatory. It provides a lot of data on gamma ray emission from various sources throughout the universe, including pulsars. The collaboration wrote a paper trying to constrain a model that predicts how the gamma emission of a neutron star would differ if there were extra dimensions. In the model they consider, gravitons are massless only in the bulk universe, but gain mass in our brane. This allows them to be trapped in the gravitational potential of a neutron star, where they can decay into gamma rays, which could then be detected. For two extra dimensions, their analysis is sufficient to rule out large extra dimensions bigger than 9 nanometers, with tighter bounds for more dimensions. This is a much stronger constraint than from the LHC and gravity data.

Gravitational Radiation from Cosmic Strings

This one is premature, because cosmic strings have not been detected and may not exist, and we cannot yet detect gravitational radiation. Cosmic strings are not the same thing as string-theory strings; they are more like boundaries between different regions of the early universe that shrank as the universe homogenized, until what was left was an un-get-ridable*** string. They are analogous to grain boundaries between different regions in a crystal.

O'Callaghan and Gregory computed the gravitational wave spectrum emitted by kinks in these strings, first assuming three spatial dimensions and then assuming more. They found that these waves could exceed the detection threshold of future gravitational wave detectors, and that the signals would be weaker if there were more dimensions. This is a long shot for detecting both cosmic strings and extra dimensions, and it presumes gravitational waves can be detected at all.



Summary

Even though extra dimensions have not been detected, we can still get information about how big they could be, based on what we have not detected given our ability to detect things. However, how the data is analysed depends on which model of extra dimensions is being considered.

*A combination of Lorand Eotvos and the University of Washington.
**...yet.
***topologically protected

The Sophomore's Spindle: All about the function x^x

$x^x$. x-to-the-power-of-x. For natural numbers, it grows as 1, 4, 27, 256, 3125. It is not the most important or useful function, but it has a few cool properties, and I'll discuss them here. It is mostly known for growing really fast, but when plotted as a complex function over negative numbers it has a cool shape. A special integral of this function is known as the "Sophomore's Dream," and in one of my ill-fated math attempts around 2010, I tried to generalize that and find its antiderivative. There isn't much centralized information about the $x^x$ function, so I hope to compile some of it here in this blog post.

The $x^x$ spindle, discussed below. Image source.


Other forms and names

It is generally hard to search for information about this function because the results include any time the letter x appears twice in succession. The name "self-exponential function" returns some results.

If repeated addition is multiplication, and repeated multiplication is exponentiation, repeated exponentiation is called tetration. The notation is a flipped version of exponentiation: $x^x$=$^{2}x$. So, our function here could be called second-order tetration. This also continues the property* of the number 2 that $^{2}2=2^{2}=2\times 2=2+2$.
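For concreteness, tetration stacks its exponents from the top down:

$^{3}2=2^{\left(2^{2}\right)}=2^{4}=16$

while our function is just the two-story case, $^{2}x=x^{x}$.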

The other common way to represent this function is as an exponential, rewriting it as $e^{\log x^x}=e^{x\log x}$. This makes it much easier to manipulate, because now only the exponent is variable rather than both the base and the exponent.

Growth

As can be seen in the numbers above, this function grows really fast: more than an order of magnitude per integer increase, which is just a way of saying it grows faster than an exponential function (because the base also increases). It also beats the factorial: $n^{n}>n!$ for every natural number greater than 1.

This function grows really fast.

In the negative: the $x^x$ spindle. 

The function can be calculated easily for positive integers, and also for negative integers, over which the function rapidly decays. However, for negative non-integers, the function's output is not always real (a simple case: $(-0.5)^{-0.5}$ is purely imaginary). In fact, it is only real for negative x if x is a rational number whose denominator, in lowest terms, is odd. To figure out how to calculate this function for negative numbers, we'll go hyperbolic and then try using logarithms. The function $e^x$ can be written as $\cosh(x)+\sinh(x)$, the sum of the hyperbolic cosine and sine. That means we can write our function as:

$x^{x}=\cosh(x\log(x))+\sinh(x\log(x))$

The logarithm is not unambiguously defined for negative numbers, but by exploiting Euler's identity and some logarithm rules, we can write $\log(-y)=\log(-1)+\log(y)$ and $\log(-1)=\log(e^{\pi i})=\pi i$. Therefore, $\log(-y)=\log(y)+\pi i$. This is cheating a bit, because you can multiply the $\pi i$ in that exponential by any odd integer and still satisfy Euler; this is merely the first of infinitely many possibilities, which we'll stick with for now. Anyway, this means that if x is negative, we can rewrite our function again:

$x^{x}=\cosh(x\log(-x)+\pi i x)+\sinh(x\log(-x)+\pi i x)$

Now, we use the sum formulae for sinh and cosh, which are $\cosh(a+b)=\cosh(a)\cosh(b)+\sinh(a)\sinh(b)$ and $\sinh(a+b)=\sinh(a)\cosh(b)+\cosh(a)\sinh(b)$. We also remember that $\cosh(ix)=\cos(x)$ and $\sinh(ix)=i\sin(x)$. If we do this expansion, simplify, and group by realness, we find:

$x^{x}=(-x)^{x}\left(\cos(\pi x)+i\sin(\pi x) \right)$

So, what happens when we plot this function in the negative domain? Its absolute value generally gets smaller, while its real and imaginary parts oscillate with a period of 2. It is purely real for integers, and purely imaginary for half-integers. Another way to plot this would be as a single curve with real and imaginary y-axes, in which case this function would trace out a spiral.

$x^x$ over negative numbers. 
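As a sanity check, here is a minimal Python sketch (the function name is my own) that evaluates the formula above and compares it to the principal complex value of $x^x$:

import cmath
import math

def xx_negative(x):
    # Principal branch of x^x for x < 0: x^x = (-x)^x * (cos(pi*x) + i*sin(pi*x))
    magnitude = (-x) ** x  # (-x)^x is real and positive when x < 0
    return magnitude * complex(math.cos(math.pi * x), math.sin(math.pi * x))

x = -0.75
print(xx_negative(x))               # roughly (-0.877-0.877j)
print(cmath.exp(x * cmath.log(x)))  # the same number, straight from the principal logarithm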


However, this assumes our basic choice of the negative logarithm. We have a whole family of choices. Mark Meyerson realized something interesting: the functions for the various choices of logarithm follow the same envelope function with different frequencies, such that all of them together trace out the shape of a vase (which he calls the $x^x$ spindle). As more and more values of the logarithm are added, the spindle gets filled out (see the first picture).
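To make that concrete: choosing the branch $\log(-1)=(2k+1)\pi i$ for an integer $k$ turns the formula above into

$x^{x}=(-x)^{x}\left(\cos((2k+1)\pi x)+i\sin((2k+1)\pi x)\right)$

so every branch shares the envelope $(-x)^{x}$ but winds around the real axis at a different frequency, which is what fills out the spindle.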

Inverse

There is no simple function that is the inverse of $x^x$. However, there is a special function that was almost designed to be its inverse. The Lambert W function is defined such that $x=W(x)e^{W(x)}$. The inverse of $x^x$ is:

$f^{-1}(x)=e^{W(\log x)}=\frac{\log x}{W(\log x)}$

There are two branches of the W function, and the inverse of $x^x$ swaps over to the other branch below x=1/e, so that for each branch the inverse passes the vertical line test.


I don't think this is very interesting; it's basically saying "the function that inverts the self-exponential function is defined as the function that inverts the self-exponential function." I guess you could call it the xth root of x, which is not the same as $x^{1/x}$ in this case.
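A quick numerical check of the inverse, using SciPy's implementation of the W function (the helper name is mine):

import numpy as np
from scipy.special import lambertw

def inverse_xx(y):
    # Inverts y = x^x on the principal branch: x = exp(W(log y))
    return np.exp(lambertw(np.log(y)).real)

print(inverse_xx(4.0))   # 2.0, since 2^2 = 4
print(inverse_xx(27.0))  # 3.0, since 3^3 = 27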

Derivative

When students are first learning calculus, they learn that the derivative of a power function $x^n$ is simply $nx^{n-1}$. They also learn that the derivative of an exponential function $n^x$ is proportional to itself, with the constant of proportionality being the natural logarithm of the base: $n^{x}\log(n)$, with n=e being a special case. It is not immediately obvious which rule to apply to $x^x$, although the second one is closer to being correct.

If we rewrite the function as $x^{x}=e^{x\log{x}}$, its derivative can be found with the chain rule. The first step in the differentiation just gets us back $e^{x\log{x}}=x^x$, and that gets multiplied by the derivative of $x\log{x}$, which from the product rule is $(1)\log{x} + (x)\frac{1}{x}=1+\log{x}$. Multiplying the derivative of the innie by the derivative of the outie, we find:

$\frac{d}{dx}x^{x}=x^{x}\left(1+\log{x}\right)$

By finding when this equals zero, we can find the minimum and turning point of the function. This is simply $\log(x)=-1$, so x=1/e=0.367..., and the minimum value is 0.692... One thing about this derivative is that it increases faster than the function itself, contrary to the derivative of a power function. The rate of change of the function is even more divergent than the function itself.

Integrals: The Sophomore's Dream

One of the most interesting aspects of this function crops up when you try to integrate it. It actually comes from the reciprocal cousin of the function, $x^{-x}$, but the same phenomenon applies to the function itself. It is the identity:

$\int_{0}^{1}x^{-x}\,dx=\sum_{n=1}^{\infty}n^{-n}$

There is no immediately obvious reason why that should be true, but it is (it converges to roughly 1.29). The name Sophomore's Dream is an extension of the "freshman's dream," that (a+b)$^n$=$a^{n}+b^{n}$. It was first proven by Bernoulli in 1697. There is a similar identity for regular $x^x$, which is not as neat:

$\int_{0}^{1}x^{x}\,dx=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^{n}}$


This is proven by expanding the function as a series**:

$x^{-x}=e^{-x\log x}=\sum_{n=0}^{\infty}\frac{(-x\log x)^{n}}{n!}$

To find the integral, each term is integrated individually. Wikipedia, the free encyclopedia, gives a decent proof of how to integrate these terms, both in modern notation (that involves gamma functions) as well as with Bernoulli's original method. It's important that the limits of integration are zero and one, because the log(1) term kills some of the extraneous nasty terms in the antiderivative. The main step in the termwise integration involves a change of variable that turns it into the integrand that leads to the factorial function.
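To sketch that main step (in my own notation): the substitution $u=-(n+1)\log x$ turns each term into the integral that defines the factorial,

$\int_{0}^{1}x^{n}(\log x)^{n}\,dx=\frac{(-1)^{n}}{(n+1)^{n+1}}\int_{0}^{\infty}u^{n}e^{-u}\,du=\frac{(-1)^{n}\,n!}{(n+1)^{n+1}}$

and dividing by the $n!$ from the series and summing over n gives the identities above.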



Around 2009 or 2010, I thought I was clever because I found a way to express the indefinite integral of the $x^x$ function, one that could be evaluated at values besides zero and one. Basically, it involved using something called the incomplete gamma function, which is related to the factorial, to express the integral of each series term in a general form. My solution was:

$\int x^{x}\,dx=C+\sum_{n=0}^{\infty}\frac{(-1)^{n}\,\Gamma\left(n+1,-(n+1)\log x\right)}{n!\,(n+1)^{n+1}}$

However, somewhat like the inverse, this is almost tautological and doesn't add much nuance. Still, I think I was the first person to figure this out. I tried writing a paper and submitting it to the American Mathematical Monthly (which is not at all the right journal for this) and got a rejection so harsh I still haven't read it almost six years later. However, in 2014 some Spanish researchers wrote a similar paper about the self-exponential function, and they came to the same conclusion as me regarding the incomplete gamma functions. So, I'm glad somebody got it out there.

Something else that's kind of interesting involving integrals and this function: the area under $x^{-x}$ over all positive numbers is like 1.99. I'm not sure if that's a coincidence or not.

Applications

Basically none.

The most common place the $x^x$ term pops up is in the Stirling approximation to the factorial, which is useful in statistical mechanics and combinatorics. It also gives a sense of the relative magnitude of self-exponentiation and factorials: one is literally exponentially smaller than the other.
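Stirling's approximation makes that comparison explicit:

$n!\approx\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}=\sqrt{2\pi n}\,\frac{n^{n}}{e^{n}}$

so $n^{n}$ outruns $n!$ by a factor of roughly $e^{n}$.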

In graduate statistical mechanics, there was a question about a square box, an n x n grid, with a fluid in it. At each point x along the box, the fluid could be at some height between 0 and n. So, the total number of states this fluid could have was $n^n$.

If the fluid must be continuous and touch the bottom, it can take on $5^5$ different configurations.
If anyone knows any other applications of this function, let me know.

So just to summarize: the function $x^x$ grows really fast and has a few cool properties, but overall it isn't the most useful of functions.


*The solution is 4.

**I had originally erroneously called this a Taylor series.

Tuesday 10 November 2015

Bursting Bubbles Breach Blood-Brain Barrier: Blogger Bequeaths Belated Boasts

There was a story in the news today about the blood-brain barrier being bypassed* in order to deliver chemotherapy drugs directly to a brain tumour, using a combination of microbubbles and focused ultrasound. I worked on this project for over a year between finishing one university and starting another (2008-2009), and it was good to see it finally in use. Even though what they have achieved is a medical feat, the phenomena behind it are quite physicsy.


Bubbles oscillating near red blood cells. From one of the news articles.
Focused ultrasound is what it sounds like: applying high intensity sound waves to a specific part of the body. Constructive interference allows the sound intensity to be maximized within the body rather than at the surface, and if the waves are strong enough then the tissue will start heating up as some of the acoustic energy is absorbed. It often fills the same medical niche as radiation therapy, except without the typical side effects of radiation exposure. My first university summer job involved working on the electronics for a focused ultrasound treatment for prostate cancer. Having one's prostate burned by sound waves from the inside may sound unpleasant, but it's not as unpleasant as having the whole thing removed.

High intensity focused ultrasound thermal therapy. All other images I could find involved detailed 3D renderings of the rectum. There's more to HIFU than just butts. Image source.

After graduating from university, I got a full-time job in the Focused Ultrasound Lab at Sunnybrook Hospital in Toronto. There, they had developed what was essentially a helmet full of hundreds of ultrasound transducers, designed for constructively focusing sound waves inside the brain. In addition to designing this device, they had to solve such problems as "How much is the skull going to refract the waves?" and "How do we avoid burning bone before flesh?" This device has since been used to therapeutically zap through the skull, but that's not what the news is about today.

Inside the transducer dome helmet array. The head goes in the middle.

My specific project involved microbubbles: really small bubbles (duh) that are used as contrast agents during diagnostic ultrasound. They are injected into the bloodstream (at about the size of red blood cells, they are far too small to cause embolisms), and when an ultrasound wave hits them, they contract and expand in phase with the applied pressure wave, re-emitting sound waves as they drive the surrounding fluid with their oscillation, which can then be detected. All the videos of this on youtube suck, so out of protest I won't post one. My supervisor, Kullervo Hynynen, wanted to move beyond bubble diagnostics and into bubble therapy. His plan, as we have seen, was to use bubbles to open the blood-brain barrier and deliver drugs to the brain.


Because most of the articles I write are about physics and math, I'll remind the readers that the blood-brain barrier is not a physical separation between the brain and the arterial network; rather, it is the impermeable network of proteins that forms around the walls of blood vessels inside the brain and prevents molecules from getting from the bloodstream into the brain. This is useful for preventing blood contagions from affecting the brain, but it makes it hard to get drugs in (cocaine is a notable exception).

Two diagrams of how this works, to hammer the point across. Bubbles are injected into the bloodstream; focused ultrasound makes them oscillate and/or collapse; that collapse opens the vessel wall.
The general plan was to use the energy absorption of ultrasound by bubbles to raise the temperature in their vicinity, as well as to create shockwaves from their collapse. It was hoped that either the increased temperature would cause the proteins making up the barrier to relax their grip, or that the shockwaves would just violently shear them away. At the risk of repeating what I talked about in "My Journey into the Hyperbubble," my research involved developing a theory to describe bubble oscillation inside blood vessels, then applying it to a 3D model of the blood vessels in a rat brain to figure out how much heat would be transferred to the brain by bubbles oscillating in those blood vessels.

From my paper, a rendering of the heat distribution inside the 3D rat brain. This rotating gif is way cooler and you can see the hot-spots in the blood vessels from the bubbles, but I'll only include a link because it'll kill somebody's data plan: HERE
My solution involved solving a modified version of the Rayleigh-Plesset equation (which is itself derivable from the Navier-Stokes equations) to simulate the bubble oscillation dynamics, calculating the power radiated from those dynamics (through a thermal damping term), and using that power as an input to the Pennes bioheat equation, which is like the regular heat equation except with a blood-flow term, which we chose to ignore anyway. The idea was that the results of my simulations would inform future neurologists how much ultrasound to use, and at what frequency, to get the best results and not fry the person's brain.
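For reference, the standard textbook forms of those two equations (the versions I used had modifications on top of these): the Rayleigh-Plesset equation for the bubble radius $R(t)$ in a liquid of density $\rho$, viscosity $\mu$, and surface tension $\sigma$,

$\rho\left(R\ddot{R}+\frac{3}{2}\dot{R}^{2}\right)=p_{B}(t)-p_{\infty}(t)-\frac{2\sigma}{R}-\frac{4\mu\dot{R}}{R}$

and the Pennes bioheat equation for the tissue temperature $T$,

$\rho_{t}c_{t}\frac{\partial T}{\partial t}=k\nabla^{2}T+w_{b}c_{b}(T_{a}-T)+q$

where $q$ is the deposited acoustic power per unit volume and the $w_{b}c_{b}$ perfusion term is the blood-flow term we ignored.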

The results of one of my simulations from 2009, showing the temperature around the bubbles in the vessels increasing over time.

After I started grad school, my particular project (the localized heating simulations, not the whole research program) didn't really go anywhere. Oh well. However, the Focused Ultrasound Lab kept working on developing this treatment. They apparently used it to treat Alzheimer's in mice.

Today, some news articles cropped up on Facebook about how this treatment has been successfully used to bypass the blood-brain barrier in humans, which is a pretty big milestone for any biomedical development process. There is not yet a peer-reviewed journal article on the topic, but from the news articles, they used this procedure to deliver a chemotherapy agent to a patient's brain tumour (without having to flood the entire body with it, one of the main issues with chemo).

I do not know whether my calculations factored into the patient's treatment. I would hazard a guess that they did not, because they were never verified experimentally, and I don't have enough faith in my simulations to recommend going directly from simulation output to sonic brain zapping. However, it is a good feeling nonetheless to see something that I worked on in its early stages finally come to fruition. It is a good example of how a good old-fashioned physics problem, with applications of fluid dynamics, acoustics, and heat transfer, can start saving lives in less than a decade.

*this one is not intentional, this topic is just really alliterative.

Sunday 8 November 2015

The host of Daily Planet said something nice to me.

In March, I was interviewed on the Canadian Discovery Channel show Daily Planet about my falling-through-the-Earth paper. The interview is here. Later that week, the host of the show, Dan Riskin, emailed me asking for help replicating my calculations, so I told him about Newton's shell theorem and how it applied in this scenario. Last week he was answering questions on reddit, and I reminded him of his email and asked him what scientists should do to improve the state of science journalism. He said:

Yes! That was a great day.
You did a paper about a person falling into a hole in the ground and then falling all the way to the centre of the earth (the hole hypothetically goes right through the middle). You figured out how long it would take to get to the centre.
I loved this question and spent half a day trying to find the answer before I gave up. I had a terrible computer model that kept throwing my person into space.
Then you taught me that so long as you're inside a sphere the gravitational attraction of the sphere itself, beyond a radius equal to your distance from the centre, cancels out. And I think you told me Newton calculated that. Do I have that right?
You did a great job improving science outreach by doing a ridiculous but fun question and then deriving the answer. I loved that and have told many people about it. You're my hero, a bit. Do more of that.
It is often said that the word hero is thrown around far too much these days. But here, we have an unbiased external source of that appellation.

Friday 6 November 2015

A Trick for Mentally Approximating Square Roots

You won't believe this one simple trick for calculating square roots. Calculators hate me.

If you're the kind of person who needs to quickly calculate the square root of something, whether for finding your as-the-crow-flies distance to a destination through a city grid, or for determining whether the final score of your sports game was within statistical error, you might find this trick handy. It is not particularly advanced or arcane; it is just linear interpolation.

It requires you to know your perfect squares, to be able to do a quick subtraction in your head, and to do some even simpler addition. It is effective to the extent that you can do these things.

Consider some number $S=Q^2$, where you want to find Q. Unless S is a perfect square, Q will be an irrational number, so any expression of it in terms of numbers will be an approximation. First find the integer N such that $N^{2}<S<(N+1)^{2}$; e.g. if S is 70, N is 8, because 70 is between $8^2$ and $9^2$. We approximate the irrational part of the square root, (Q-N), as a fraction found by linear interpolation. The denominator of the fraction is the distance between the two perfect squares surrounding S, $(N+1)^{2}-N^{2}=2N+1$. The numerator is simply $S-N^{2}$. Thus, to approximate the square root of a number, simply calculate:

$Q\approx N+\frac{S-N^{2}}{2N+1}$
Demonstrating how this works for the square root of 70. The actual square root of 70 is marked off with a line.

So for our example of 70, N=8, S-N$^2$=70-64=6, 2N+1=17, so $Q\approx 8+\frac{6}{17}\approx 8.35$. This is roughly 0.2% away from the actual answer, about 8.36.
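If you'd rather let a computer check the trick, here is a minimal Python sketch (the function name is mine):

import math

def approx_sqrt(s):
    # Linearly interpolate between the perfect squares bracketing s
    n = math.isqrt(s)  # largest integer with n*n <= s
    return n + (s - n * n) / (2 * n + 1)

for s in [70, 10, 1000]:
    approx, exact = approx_sqrt(s), math.sqrt(s)
    print(s, approx, exact, f"{100 * abs(approx - exact) / exact:.2f}% error")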


Comparing the approximation to the actual square root function between 9 and 25. It's pretty close.

This operation basically assumes that square roots are distributed linearly between perfect squares. This is obviously not the case, but it gets more correct as the numbers get larger. By looking at the error, we can see that the approximation is within 1% for numbers larger than 10, and is worst when the fraction is $\frac{N}{2N+1}$. The worst-case for each perfect square interval decreases inversely with the number.

The error associated with this approximation.
So how fast can this be done? For numbers below 100 it can be done mentally in one or two seconds, with practice. For bigger numbers, it'll probably take a bit longer. Obviously this gives you a fraction and not a decimal expansion. You can roughly guess a decimal expansion from the fraction, but that coarsens it a bit. In the above example, 6/17 is close to 6/18 so I could guess the decimal expansion is about .33.

Googling techniques for mental square roots, the first results are for finding the roots of large perfect squares, and then there is a Math Stack Exchange post about using a first-order Taylor expansion. This trick is more accurate, because the Taylor approximation's error grows as you approach the next perfect square, and I think it is faster as well.

Tuesday 3 November 2015

Two affiliated posts: biking and DNA origami

I went on a bike ride a few weeks ago with a bunch of people; we split up, tried to rendezvous, and failed. Afterwards I used Strava to figure out why. Read more here: http://cycling.mit.edu/blog/2015/11/02/a-club-ride-narrative-with-strava-labs/

I also wrote an article on PhysicsForums about a cool paper that I saw yesterday, about DNA origami. It's here: https://www.physicsforums.com/insights/atomic-positioning-dna-hinges/