Data Scientist
Researcher
Born and raised in San Diego, California, I found my true passion for data when I studied Economics, Political Science & Math.
My fields of interest include: NLP, data visualization, Machine Learning, High Performance Computing, Cryptography, blockchain, decentralized currency & many others, including Math, Physics, Political Economy, Psychology & Philosophy.
As my interest in Machine Learning grew, the problem of computational complexity became increasingly apparent, given the amount of data needed to run ML/DL models effectively.
The problem is best framed as the n-body problem: in physics, the n-body problem is that of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. It was first posed by Newton when he calculated the trajectories of Jupiter & its moons.
In 1987, Greengard & Rokhlin published "A Fast Algorithm for Particle Simulations," proposing the Fast Multipole Method (FMM), which reduces the computational complexity of these pairwise interactions from O(N^2) to O(N). By 2000 it had been named one of the top 10 algorithms of the 20th century.
The task: predict the individual motions of celestial objects & their interactions with each other.
A very simple way to think about this is to imagine someone juggling two or three objects. The task is easy with two, but with three you need to master hand-eye coordination & understand the mass & velocity of each object being juggled, as well as the force needed to move it.
If we count the stars alone, we are dealing with roughly 10^21 of them, one billion trillion, if your math is fuzzy. And that is only the stars, never mind the planets, asteroids, or the little-understood black holes. The number is unfathomable; calculating the mass & predicting the movement of each body & its interactions with every other body is computationally impractical.
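To make that cost concrete, here is a minimal brute-force sketch (my own illustration, not code from any of the papers mentioned here): every body is summed against every other body, so the work grows as N^2.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def direct_sum_accelerations(positions, masses, softening=1e-3):
    """O(N^2) direct summation: every body interacts with every other body."""
    n = len(masses)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[j] - positions[i]
            dist = np.sqrt(np.dot(r, r) + softening**2)  # softened distance
            acc[i] += G * masses[j] * r / dist**3
    return acc

# N bodies means N*(N-1) interactions; at N = 10**21 that is on the order of
# 10**42 pair evaluations -- hopeless by direct summation.
```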
Stated simply, the Barnes-Hut algorithm builds a quadtree (or octree in 3D) and computes the mass & force of close-range particles individually, while distant particles are lumped together and approximated by the total mass & center of mass of the cell that contains them, treated as a single macroparticle. This dramatically reduces computation, because most interactions are evaluated against approximations of groups of particles instead of by direct summation over every particle.
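Here is a condensed 2D sketch of that idea, written as an illustration rather than taken from the Barnes-Hut paper itself; the opening criterion theta and the softening term are conventional choices of mine, not something prescribed above.

```python
import numpy as np

G = 6.674e-11

class Node:
    """One square cell of the quadtree (assumes distinct particle positions)."""
    def __init__(self, center, half):
        self.center = np.asarray(center, dtype=float)  # cell center
        self.half = half                               # half the side length
        self.mass = 0.0                                # total mass inside
        self.com = np.zeros(2)                         # center of mass inside
        self.body = None                               # single particle, if leaf
        self.children = None

    def insert(self, pos, m):
        if self.children is None and self.body is None:
            self.body = (pos, m)                       # empty leaf: store the body
        else:
            if self.children is None:
                self._split()                          # occupied leaf: subdivide
            self._child_for(pos).insert(pos, m)
        # keep running totals so the cell can act as one macroparticle
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m

    def _split(self):
        h = self.half / 2
        self.children = [Node(self.center + np.array([dx, dy]) * h, h)
                         for dx in (-1, 1) for dy in (-1, 1)]
        old_pos, old_m = self.body
        self.body = None
        self._child_for(old_pos).insert(old_pos, old_m)

    def _child_for(self, pos):
        return self.children[2 * (pos[0] > self.center[0]) + (pos[1] > self.center[1])]

def acceleration(node, pos, theta=0.5, softening=1e-3):
    """Walk the tree: open nearby cells, approximate distant ones by their center of mass."""
    if node.mass == 0.0:
        return np.zeros(2)
    r = node.com - pos
    dist = np.sqrt(np.dot(r, r) + softening**2)
    if node.children is None or (2 * node.half) / dist < theta:
        return G * node.mass * r / dist**3             # far enough: one macroparticle
    return sum(acceleration(c, pos, theta, softening) for c in node.children)

# Usage sketch: build a root cell covering the whole domain, insert every particle
# (positions as np.array), then query accelerations; each tree walk costs roughly
# O(log N) instead of O(N).
```

Nearby particles still get exact pairwise treatment; only well-separated groups are collapsed into a single center of mass, which is where the savings come from.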
In essence, FMM & Barnes-Hut perform the same kind of computation, but there are a few differences. The first is that FMM computes the potential at every particle, not just the force, and its inner (local) expansion also uses the nearest-neighbor cells when computing that potential. The key difference is that FMM is more expensive per step because it computes both expansions (inner & outer) for every leaf node, while Barnes-Hut does not.
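To make the outer (multipole) expansion concrete, here is a toy sketch in the 2D complex-log setting of Greengard & Rokhlin; the particle count, expansion order p, and target point are arbitrary values I chose for illustration, not parameters from the discussion above.

```python
import numpy as np

def multipole_coefficients(sources, charges, center, p=8):
    """Outer expansion about `center`: a_0 = total charge, a_k = -sum_i q_i (z_i - z_c)^k / k."""
    dz = sources - center
    a = np.empty(p + 1, dtype=complex)
    a[0] = charges.sum()
    for k in range(1, p + 1):
        a[k] = -np.sum(charges * dz**k) / k
    return a

def evaluate_multipole(a, center, target):
    """Potential at a well-separated target using only the p + 1 coefficients."""
    w = target - center
    phi = a[0] * np.log(w)
    for k in range(1, len(a)):
        phi += a[k] / w**k
    return phi

def evaluate_direct(sources, charges, target):
    """Direct summation over every source, for comparison."""
    return np.sum(charges * np.log(target - sources))

rng = np.random.default_rng(0)
sources = (rng.random(200) - 0.5) + 1j * (rng.random(200) - 0.5)  # cluster near the origin
charges = rng.random(200)
a = multipole_coefficients(sources, charges, center=0j, p=8)
target = 5.0 + 3.0j                                               # far from the cluster
print(evaluate_multipole(a, 0j, target))   # closely matches the direct sum below
print(evaluate_direct(sources, charges, target))
```

The point is that a whole cluster's influence on any well-separated target is captured by p + 1 coefficients, which is what lets FMM replace direct summation with translated expansions and reach O(N).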
Now that we have explored the significance of the n-body problem & some of its proposed solutions, the question is: why does this matter?
A tidal wave of data is coming: 5G, driverless cars, smart homes & every internet-connected device will put a strain on current & future hardware.
Faster & more efficient algorithms are needed for exascale & someday zettascale computation, and these algorithms are a matter of national security. I modestly propose the idea of "computational supremacy": the need for nation states to own & control powerful supercomputers, which will ultimately be used for military purposes like obtaining strong AI & mapping the cosmos.
Check out the TL;DR version w/ citations and applications of FMM