## Thursday, August 6, 2015

### uhoh!! Typhoon satellite images

I never cease to be amazed by how easy it is to do stuff in python.

But I often cheat and use ImageJ to manipulate images.

```python
import urllib

months  = [str(100 + i)[1:] for i in range(1, 13)]   # "01" .. "12"
days    = [str(100 + i)[1:] for i in range(1, 32)]   # "01" .. "31"
hours   = [str(100 + i)[1:] for i in range(24)]      # "00" .. "23"
minutes = ["00", "30"]

url_start  = "http://www.jma.go.jp/en/gms/imgs/"
url_middle = "/infrared/0/"
url_end    = "-00.png"

quadrant = "4"  # NW = 1, SW = 2, NE = 3, SE = 4

# goodies at http://www.jma.go.jp/en/gms/
year = "2015"
month = months[7]  # 8th month = August
for day in days[8:2:-1]:
    for hour in hours:
        for minute in minutes:
            time_stamp = year + month + day + hour + minute
            url = url_start + quadrant + url_middle + time_stamp + url_end
            filename = "infrared_Q" + quadrant + "_" + time_stamp + ".png"
            urllib.urlretrieve(url, filename)
            print url
            print filename
```
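For the record, that script is Python 2. On Python 3, `urlretrieve` lives in `urllib.request`; here is a minimal sketch of the same URL construction (the helper name `jma_url` is mine, not from the script above):

```python
import urllib.request

def jma_url(quadrant, time_stamp):
    # same pieces as the script above: base URL + quadrant + infrared path + time stamp
    return ("http://www.jma.go.jp/en/gms/imgs/" + quadrant +
            "/infrared/0/" + time_stamp + "-00.png")

url = jma_url("4", "201508061200")
filename = "infrared_Q4_201508061200.png"
# urllib.request.urlretrieve(url, filename)  # uncomment to actually download
```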


## Friday, July 17, 2015

### Quickiebrot II

Here are some of those images saved by the script in Quickiebrot I.

CAUTION! Flickering images; don't watch if you are sensitive to things like flash photography.

Otherwise, SCROLL DOWN and get dizzy!

Or, better yet, go get my python script and make a better one! (I made this one horrible to encourage you to try it yourself.)

### Quickiebrot I

The story of the discovery of the Mandelbrot set is pretty amazing. It starts with "gee, what's the square root of negative one" and continues with "gee, what happens if I do this?" There is so much out there on the subject - go look!

I like the "Colors of Infinity" interview with Arthur C. Clarke (all over YouTube).

I also like this cool post by Ken Shirriff: 12-minute Mandelbrot: fractals on a 50 year old IBM 1401 mainframe.

This is 2014 but looks like vintage output! https://plus.google.com/photos/+KenShirriff/albums/6116650756716708097/6116651206478534962?pid=6116651206478534962&oid=106338564628446721517

See these YouTube links for really good explanations and how-to's for Julia sets and the Mandelbrot set.

Here is a quickie Mandelbrot set imager that you can click-to-move and zoom. It's just bare bones, but it gets you started. Note: it saves images; you might want to disable that!

Also, the standard slider in matplotlib.widgets is horizontal. I found a modified script to make it vertical and used it instead. That's why the script appears to be so long.

The full python script is given on this page. It's not fancy or optimal, but it gets you started.
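For anyone not ready to grab the full script yet, the core escape-time idea fits in a few lines. A minimal NumPy sketch (the function name and parameters here are mine, not from the script on this page):

```python
import numpy as np

def mandelbrot_counts(xmin, xmax, ymin, ymax, npix=300, maxiter=50):
    """Escape-time counts on a grid; 0 means the point never escaped."""
    x = np.linspace(xmin, xmax, npix)[None, :]
    y = np.linspace(ymin, ymax, npix)[:, None]
    c = x + 1j * y
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for n in range(1, maxiter + 1):
        z = z * z + c
        escaped = np.abs(z) > 2.0
        counts[(counts == 0) & escaped] = n   # record first escape iteration
        z[escaped] = 2.0                      # clamp escaped points so they can't overflow
    return counts

counts = mandelbrot_counts(-2.0, 0.5, -1.25, 1.25)
# e.g. plt.imshow(counts, origin='lower') to view; 0 marks points still bounded
```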

## Saturday, July 4, 2015

### Relax Laplace! I

In a region with no free charge, the electrostatic potential obeys the Laplace equation:

grad^2 phi = 0

Jacobi iteration is a simple numerical technique for finding solutions to problems like this. For example, if you have a 2D equally spaced Cartesian mesh of points, the Laplacian

(del^2 phi)/(del x^2) + (del^2 phi)/(del y^2)

can be approximated (for unit grid spacing) as:

(phi[i+1,j] + phi[i-1,j] - 2*phi[i,j]) + (phi[i,j+1] + phi[i,j-1] - 2*phi[i,j])

If that is set to zero, then one can iterate the expression:

next_phi[i,j] = 0.25 * (phi[i+1,j] + phi[i-1,j] +  phi[i,j+1] + phi[i,j-1])
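To see why iterating this converges, try it in 1D first, where the update is just the average of the two neighbors. A tiny sketch (a toy setup of my own, not from the script below): fix phi = 0 at one end and phi = 1 at the other; the converged solution of Laplace's equation in 1D is a straight line.

```python
import numpy as np

phi = np.zeros(11)
phi[-1] = 1.0                 # boundary conditions: phi = 0 on the left, 1 on the right

for _ in range(2000):
    # average of left and right neighbors; NumPy evaluates the right-hand
    # side from the old values before assigning, so this is Jacobi-style
    phi[1:-1] = 0.5 * (phi[:-2] + phi[2:])

# phi relaxes toward the linear ramp 0.0, 0.1, ..., 1.0
```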

It is slow, but here's a basic example. Scroll down to see cool ways to make it much faster!

Shown as a contour using mostly Matplotlib defaults:

```python
CS = ax.contour(phi, origin='lower')
ax.clabel(CS, inline=1, fontsize=11)
```

The script to set up the problem (phi, do_me, and circle) and then iterate is quite short. Instead of looping over all the indices in the array as you would in a compiled language like C or FORTRAN, in Python we try to just use existing NumPy methods like np.roll for the neighbor sums.

```python
import numpy as np
import matplotlib.pyplot as plt

nx, ny = 101, 101
phi = np.zeros((ny, nx), dtype='float')
do_me = np.ones_like(phi, dtype='bool')

x0, y0, r0 = 40, 65, 12
x = np.arange(nx, dtype='float')[None, :]
y = np.arange(ny, dtype='float')[:, None]
rsq = (x - x0)**2 + (y - y0)**2
circle = rsq <= r0**2

phi[circle] = 1.0
do_me[circle] = False
do_me[0, :], do_me[-1, :], do_me[:, 0], do_me[:, -1] = False, False, False, False

n, nper = 100, 100
phi_hold = np.zeros((n + 1, ny, nx))
phi_hold[0] = phi

for i in range(n):
    for j in range(nper):
        phi2 = 0.25 * (np.roll(phi,  1, axis=0) +
                       np.roll(phi, -1, axis=0) +
                       np.roll(phi,  1, axis=1) +
                       np.roll(phi, -1, axis=1))
        phi[do_me] = phi2[do_me]
    phi_hold[i + 1] = phi

change = phi_hold[1:] - phi_hold[:-1]
cmin, cmax = [], []
for thing in change:
    cmin.append(thing[do_me].min())
    cmax.append(thing[do_me].max())
```

The actual calculation (the double loop at the end) takes about 2.2 seconds on my laptop. That's 100*100*101*101 multiplications plus three times that many sums, among other things, or about 6 ns per flop. Since a laptop-CPU flop is more like 1 to 1.5 ns, there's definitely some room for improvement. (That GPU is just sitting there hungry for work, and could probably do it much faster!)
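A quick sanity check of that estimate:

```python
# 1 multiply (the 0.25*) plus 3 adds per mesh point, per sweep,
# over n*nper = 100*100 sweeps on a 101 x 101 grid
ops = 4 * 100 * 100 * 101 * 101
ns_per_flop = 2.2 / ops * 1e9    # about 5.4, i.e. roughly 6 ns per flop
```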

There's an upcoming post on laptop-CPU flops with NumPy; stay tuned!

So, let's make the problem a little harder by making it 3D. Instead of ~10,000 mesh points we'll have ~1,000,000. At 8 bytes per point, that's beyond the cache, so it's going to be slower still.

Here the potential is shown in a slice through the sphere, and two slices below the sphere:

This took 525 seconds. That's 100*100*101*101*101 multiplications plus five times that many sums, or about 9 ns per flop. It's creeping up there! And we're still changing at about 3E-04 per 100 iterations. The next Relax Laplace! post will cover some ways to bring that back down.

The 3D version of the script for reference:

```python
nx, ny, nz = 101, 101, 101
phi = np.zeros((nz, ny, nx), dtype='float')
do_me = np.ones_like(phi, dtype='bool')

x0, y0, z0, r0 = 40, 65, 50, 12
x = np.arange(nx, dtype='float')[None, None, :]
y = np.arange(ny, dtype='float')[None, :, None]
z = np.arange(nz, dtype='float')[:, None, None]
rsq = (x - x0)**2 + (y - y0)**2 + (z - z0)**2
sphere = rsq <= r0**2

phi[sphere] = 1.0
do_me[sphere] = False
do_me[0, :, :], do_me[-1, :, :] = False, False
do_me[:, 0, :], do_me[:, -1, :] = False, False
do_me[:, :, 0], do_me[:, :, -1] = False, False

n, nper = 100, 100
phi_hold = np.zeros((n + 1, nz, ny, nx))
phi_hold[0] = phi
one_sixth = 1. / 6.

for i in range(n):
    for j in range(nper):
        phi2 = one_sixth * (np.roll(phi,  1, axis=0) +
                            np.roll(phi, -1, axis=0) +
                            np.roll(phi,  1, axis=1) +
                            np.roll(phi, -1, axis=1) +
                            np.roll(phi,  1, axis=2) +
                            np.roll(phi, -1, axis=2))
        phi[do_me] = phi2[do_me]
    print i
    phi_hold[i + 1] = phi
```

## Friday, July 3, 2015

### "Don't use jet..."

> Jet is the worst colormap, except for all the others. (Wikipedia)

No, Winston Churchill was not talking about colormaps. He had more important things on his mind.

Starting from Jake VanderPlas' "How bad is your colormap" and this haiku tweet, you can consider the matplotlib colormaps (and references therein) and the issue of luminance. For example, in the "jet" plot (the default for imshow in matplotlib, and in MATLAB by the way) you can see artifacts. The bright blue line is meaningless; it just comes from a poor cmap definition.

But wait, there's more! Don't miss Climate Lab Book's End of the Rainbow, If We Assume's Planck vs WMAP Cosmic Ray Background OMG, and this recent Scrap rainbow colour scales article in Nature!

These are some plots of the simulation in my "Relax Laplace!" post (in the works).

I also promise to add a post with some GOOD alternative maps.

Scroll down to the second (Black and White) version to understand the artifacts better.  Keep scrolling to see other gradient patterns.

...and...

```python
x = np.linspace(-1, 1, 200)
X, Y = np.meshgrid(x, x)
theta = np.arctan2(Y, X)
theta[theta < 0] += 2. * np.pi
```
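One way to see the bands without squinting is to look at the luminance of the colors jet assigns. A short sketch (the Rec. 601 luma weights are my choice here; any reasonable weighting shows the same thing):

```python
import numpy as np
import matplotlib.pyplot as plt

# sample jet at 256 points and reduce each RGB color to a single luminance
rgba = plt.get_cmap('jet')(np.linspace(0.0, 1.0, 256))
luma = rgba[:, :3].dot([0.299, 0.587, 0.114])

# a well-behaved sequential map has monotonic luminance; jet's luminance
# rises and falls, and those reversals are where the false bands appear
nonmonotonic = (np.diff(luma) > 0).any() and (np.diff(luma) < 0).any()
```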

## Sunday, June 28, 2015

### Don't Mess with the Pelican (really, don't!)

I like python.

So when I read about Pelican, and I saw people use it and then blog about how they use it and how great it is, I thought I should try to use it too.

Even with plenty of help, I couldn't get much done. I just don't "get" Pelican. Then I started receiving some gentle comments suggesting that maybe Pelican is no Panacea for People Posting Python.

Now I'm here in Google, and life is so much easier. So far everything just seems to work for me without much effort.

To add highlighted code, I found this how-to and right now use the Vim theme in hilite.me, though I really should just learn to use pygments.

```python
def fib(n):
    a, b = 0, 1
    for i in range(n - 1):
        a, b = b, a + b
    return a
```

What about equations? Well, for now I'm going to try ASCII input for MathJax as described in this how-to. Goodies on ASCII input, including a renderer for testing, can be found at asciimath.org, and there are some good examples of asciimathjax.

MathType (used to be called Equation Editor in Word, PowerPoint, etc.) is supposed to be compatible in both directions, but so far I haven't figured out how.

So for example, just type this p(x)=(e^-mu mu^x)/(x!) inside backquotes (backticks), that other little thing on the tilde '~' key, and I get this (after adding one line of HTML to the template):

p(x)=(e^-mu mu^x)/(x!)

If your equations stop rendering properly, you may have accumulated bits of extra html in your post floating around after some combination of copy/pasting and formatting.

Using MathJax seems to be easy and, for me, intuitive. The equations displayed in the browser can be accessed by right-clicking (or control-clicking).

Just for example, you can set it so the equations will zoom on command:

Or you can capture the codes in various formats:

### Orbits II (SciPy odeint)

Last night I was talking to a friend who asked me why I used python for science, and not something standard like FORTRAN or C. I said something like "they took all the most commonly used FORTRAN and C functions in the world, and wrapped them up in python for us" or something dreamy-eyed like that. While NumPy has been crafted with care for python users, a lot of SciPy is really standard, widely recognized packages, all just ready for us to invoke with a few lines of python.

Personally, I found it more satisfying to write, run, debug, and use the Runge-Kutta algorithms (RK4, RK45) first, before I went ahead and just used the immensely powerful scipy.integrate function odeint. There is a lot going on under the hood in odeint, and this is great. You can read this tutorial and the reference documentation.

Just for example, with dynamic step sizes (which can save orders of magnitude in time), how do you get the position at fixed, predetermined times? With the explicit RK scripts in the previous post, I had to include tests and provisional one-step calculations. odeint takes care of all of this for us, either by stopping, backing up, or interpolating between steps, whichever is numerically prudent. Still, I use explicit Runge-Kutta here and there, especially if SciPy is not available for some reason, or if I'm doing "weird" stuff.

If you have read the first Post (Orbits part 1), then this will look even better.

Anyway, here it is!

And this is the python, pretty nice don't you think?

```python
from scipy.integrate import odeint as ODEint

# r_aphelion, v_aphelion, Gm, t_year, nstep are set up in Orbits part 1
X0 = [r_aphelion, 0.0, 0.0, 0.0, 0.5*v_aphelion, 0.0]

def func_ODE(X, t):
    f = np.zeros(6)    # x, y, z, vx, vy, vz
    f[:3] = X[3:]      # dx/dt = v
    one_over_r_cubed = ((X[:3]**2).sum())**-1.5
    f[3:] = -Gm * X[:3] * one_over_r_cubed
    return f

times = np.linspace(0.0, t_year, nstep + 1)  # stopping points to return data

X, output_dict = ODEint(func_ODE, X0, times, full_output=True)
stepsizes = output_dict['hu']
```

How does it work?  We're solving 6 first-order differential equations based on a state vector:

X = [x, y, z, vx, vy, vz]

(dx)/(dt) = v_x,  (dy)/(dt) = v_y,  (dz)/(dt) = v_z,    is    f[:3] = X[3:],  and

(dv_x)/(dt) = a_x,  (dv_y)/(dt) = a_y,  (dv_z)/(dt) = a_z,    is    f[3:] = -Gm * X[:3] / r**3.

The algorithm will report back to you the calculated values of the state vector at the points in time you specify in times.  You can get a lot of information about what happened "under the hood" by setting full_output = True and examining the goodies returned in output_dict.
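As a tiny self-contained illustration (a toy problem of my own, not from the orbit script): solve dy/dt = -y, whose exact answer is exp(-t), and peek at the same full_output goodies:

```python
import numpy as np
from scipy.integrate import odeint

def decay(y, t):
    return -y                      # dy/dt = -y, so y(t) = y0 * exp(-t)

t = np.linspace(0.0, 5.0, 11)      # the fixed times we want answers at
y, info = odeint(decay, 1.0, t, full_output=True)

steps = info['hu']                 # step sizes used, one per output interval
# y[:, 0] matches exp(-t) to the solver's tolerance
```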