I have a PC at home that is substantially faster than my laptop, so I’ve spent the day configuring it so I can log in over SSH and run simulations.

Posted by Zack (Zequn) Li

Qbar = 2000, Qmax = 0.1

When using \bar{Q} = 2000 and Q_{max} = 0.1, we get some really weird behavior, reminiscent of the delta factor explorations from a year ago.

Here, I’m using the following indat file. An important thing to note: because Q_{max} is increased to 0.1 from the typical 0.01, I automatically recalculated dinflo to 1.7 \times 10^{-11} \text{ g/cm}^3.

 rstrt   = 'no'
 regrid = 0
 prefix  = 'ldi'
 imax    = 2000
 jmax    = 1
 kmax    = 1
 xmin = 1.00
 xmax = 10.0
 ymin = 0.
 ymax = 1.0
 zmin = 0.
 zmax = 1.0
 dxfac = 1.002
 courant = 0.25
 dtmax = 2.3 
 t_stat = 5.e5 
 lum = 8.0e5
 mass = 50.0
 radius = 20.0
 tempw = 40000.
 isotherm = 1
 Qbar   = 2000.0
 Qmax  = 0.1
 ifrc = 4
 amp = 0.0
 phase = 0.0
 period = 4.e3 
 ncycend = 900000000
 ndump   = 900000000
 nprin   = 900000000
 nmovie  = 900000000
 endtime = 5.01e5
 tprin   = 1.e2
 fvel = 0.2
 dinflo  = 1.72437934367e-11
 rcl = 1.3
 alpha = 0.65
 tmovie  = 5.0e+05 /
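An aside not in the original post: for analysis scripts it’s convenient to read these parameters back into Python. A minimal sketch, assuming the simple flat “key = value” layout shown above (the helper name `parse_indat` is mine):

```python
def parse_indat(text):
    """Parse flat "key = value" namelist-style lines; '/' ends the list."""
    params = {}
    for line in text.splitlines():
        line = line.split("/")[0].strip()      # drop the terminating '/'
        if "=" not in line:
            continue
        key, value = (s.strip() for s in line.split("=", 1))
        value = value.strip("'\"")             # unquote strings like 'ldi'
        try:
            params[key] = float(value)         # numeric values become floats
        except ValueError:
            params[key] = value                # everything else stays a string
    return params

indat = " prefix  = 'ldi'\n Qbar   = 2000.0\n tmovie  = 5.0e+05 /"
p = parse_indat(indat)
```

This ignores blank lines and treats everything after the closing `/` as outside the parameter list.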
Posted in problems

Qmax, dinflo, and mass loss

When you impose a line strength cutoff of something like Q_{max} / \bar{Q} = 0.01, the mass loss rate should be affected. I wrote up a short Python function to compute the mass loss rate, and from it the required density lower boundary condition (“dinflo”).

You would call this function as “computeDinflo(2000, 0.01)”, i.e. with the cutoff given as a fraction of \bar{Q}.

import numpy as np
from scipy.optimize import fsolve

def computeDinflo(Qbar, Qmax):
    """Lower boundary density ("dinflo") for a CAK wind with a
    line-strength cutoff Qmax, given as a fraction of Qbar."""
    dfac = 5                  # dinflo = dfac * (sonic-point density)
    k = 1.380658e-16          # Boltzmann constant (erg / K)
    T = 40000.                # wind temperature (K)
    Rsun = 6.955e10           # radius of sun (cm)
    Rstar = 20. * Rsun        # radius of zeta Pup
    Lsun = 3.839e33           # luminosity of sun (erg / s)
    Lstar = 8e5 * Lsun        # luminosity of zeta Pup
    Msun = 1.989e33           # mass of sun (g)
    Mstar = 50. * Msun        # mass of zeta Pup
    mH = 1.6737236e-24        # mass of H atom (g)
    mu = 1.3                  # mean mass in units of mH
    Qmax = Qmax * Qbar        # convert cutoff from fraction to absolute
    c = 2.998e10              # speed of light (cm / s)
    csq = c * c
    G = 6.67e-8               # gravitational constant (cgs)
    kappae = 0.34             # electron scattering opacity (cm^2 / g)
    alpha = 0.65              # CAK linelist power-law index

    gammae = kappae * Lstar / (4. * np.pi * G * Mstar * c)
    vesc = np.sqrt(2. * G * Mstar * (1. - gammae) / Rstar)
    Mcak = (Lstar / csq) * (alpha / (1. - alpha)) \
           * ((Qbar * gammae) / (1. - gammae))**((1. - alpha) / alpha)
    fdfac = 1. / (1. + alpha)**(1. / alpha)    # finite-disk correction
    a = np.sqrt(k * T / (mu * mH))             # isothermal sound speed
    deltams = 4. * (np.sqrt(1. - alpha) / alpha) * a / vesc

    # cutoff-corrected mass loss rate: solve for m = Mdot * csq / Lstar,
    # using the uncorrected CAK rate as the initial guess
    def func(m):
        return (1. + Qmax * (m * gammae / (1. - gammae) * (1. - alpha) / alpha))**(1. - alpha) \
               - (1. + (Qmax / Qbar)**(1. - alpha)
                  * (m * gammae / (1. - gammae) * (1. - alpha) / alpha) / (gammae / (1. - gammae)))

    Mmax = (Lstar / csq) * fsolve(func, Mcak * csq / Lstar)[0]
    Mmax = Mmax * fdfac * (1. + deltams)
    dstar = Mmax / (4. * np.pi * Rstar * Rstar * a)  # sonic-point density
    dinflo = dfac * dstar

    return dinflo
Posted in simulation

using the wind statistics implemented by Jon

In the previous post I calculated wind statistics like \langle \rho^2 \rangle and \langle v \rangle in Python, outside of the simulation code. This meant I was taking snapshots of information every 500 seconds, which gave some scatter. Jon informed me that his implementation already outputs some of this information, so I’m now using his averages, which incorporate information from every timestep.
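For reference, the snapshot-based averages I had been computing can be sketched as below (the function names are mine, not from the simulation code); a trapezoidal time weighting handles unevenly spaced dumps and reduces to a plain mean for evenly spaced ones:

```python
import numpy as np

def time_average(times, values):
    """Time-weighted mean of a sampled quantity via the trapezoidal rule."""
    t = np.asarray(times, dtype=float)
    v = np.asarray(values, dtype=float)
    dt = np.diff(t)
    return np.sum(0.5 * (v[1:] + v[:-1]) * dt) / (t[-1] - t[0])

def clumping_factor(times, rho):
    """f_cl = <rho^2> / <rho>^2; equal to 1 for a smooth wind."""
    rho = np.asarray(rho, dtype=float)
    return time_average(times, rho**2) / time_average(times, rho)**2
```

Per-timestep accumulation inside the simulation does the same thing with dt equal to the hydro timestep, which is why the in-code averages are smoother.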


Posted in figure

basic tests of wind statistics


I implemented these wind statistics calculations in a Python script a while ago, and just now ran it on a standard 300 ksec run, taking into account only the last 150 ksec (to exclude the transients at the beginning).

It’s nice to compare these plots to those in Runacres and Owocki 2002.

I’ve also run this script on the 2D output, but I need to rerun the 1D case in order to compare directly. The 2D run only extends to 5 stellar radii. Note that these data have less noise, primarily because I’m now averaging over 50 grid cells at each radius!
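Not in the original post: that per-radius lateral averaging is a one-line reduction in NumPy. A sketch, assuming the 2D density output `rho` is indexed as (radius, angle) with 50 angular zones (the array here is fake stand-in data):

```python
import numpy as np

# stand-in for simulation output: 2000 radial zones x 50 angular zones
rho = np.random.default_rng(0).lognormal(size=(2000, 50))

rho_mean = rho.mean(axis=1)          # <rho> over angle, one value per radius
rho2_mean = (rho**2).mean(axis=1)    # <rho^2> over angle
fcl = rho2_mean / rho_mean**2        # clumping factor on each radial shell
```

By the Cauchy–Schwarz inequality, fcl is at least 1 everywhere, and averaging over the 50 angular zones is what suppresses the noise in the radial profiles.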


Posted in figure








advanced movie making (2D LDI)

Skip to the halfway point for the real good stuff. Plotted is log density.

Combining the past knowledge of making animated movies, simulating winds in 2D, and making the proper projection, we have reached the point where we can see these winds in a new dimension!

There are two clear next steps:

  • Generate a new initial condition involving a substantially higher kappa_max using a CAK 1D run. Use this initial data to run a new 2D simulation, and hope the shocks are weaker (ultimate goal: fix the kappa_max problem and get believable shock tracking).
  • Run the clumping statistics on each radial ray, and determine the effect of adding this new dimension on the wind statistics.
Posted in multi-D, simulation

final dinflo parameter study — weirdness + resilience


500 ksec runs of a CAK wind with zeta Puppis parameters. The lesson here is that the simulations are essentially insensitive to dinflo, at least to within an order of magnitude.

Posted in simulation

plotting 2D spherical simulation output in Python

Eulerian grid hydro sims output their fluid variables as tables of grid cells. In the case of 2D LDI simulations, these grids are in terms of radius and angle. This presents an annoying problem for plotting, since typical plotting libraries work in Cartesian coordinates.


I spent an hour coding up a hackish and slow projection function, but then it occurred to me that matplotlib must do its own projections when generating polar plots. Our use case is weird though, because we don’t want a full polar plot, just a wedge covering a range of radii and angles. Matplotlib can’t do this by default, but this stackoverflow post gives a handy function using axisartist. Put this at the top of your code:

import numpy as np
import matplotlib.pyplot as plt 
import matplotlib.image as mpimg 

from matplotlib.transforms import Affine2D
import mpl_toolkits.axisartist.floating_axes as floating_axes
import mpl_toolkits.axisartist.angle_helper as angle_helper
from matplotlib.projections import PolarAxes
from mpl_toolkits.axisartist.grid_finder import MaxNLocator

# define how your plots look:

def setup_axes(fig, rect, theta, radius):

    # PolarAxes.PolarTransform works in radians, which is what our
    # theta range is already in, so no degree-to-radian scaling is needed
    tr = Affine2D() + PolarAxes.PolarTransform()

    # Find grid values appropriate for the coordinate (degree).
    # The argument is an approximate number of grids.
    grid_locator1 = angle_helper.LocatorD(2)

    # And also use an appropriate formatter:
    tick_formatter1 = angle_helper.FormatterDMS()

    # set up number of ticks for the r-axis
    grid_locator2 = MaxNLocator(4)

    # the extremes are passed to the function
    grid_helper = floating_axes.GridHelperCurveLinear(tr,
                                extremes=(theta[0], theta[1], radius[0], radius[1]),
                                grid_locator1=grid_locator1,
                                grid_locator2=grid_locator2,
                                tick_formatter1=tick_formatter1,
                                tick_formatter2=None)

    ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)

    # adjust axis
    # the axis artist lets you call axis with
    # "bottom", "top", "left", "right"

    ax1.axis["top"].toggle(ticklabels=True, label=True)


    # create a parasite axes
    aux_ax = ax1.get_aux_axes(tr)

    aux_ax.patch = ax1.patch # for aux_ax to have a clip path as in ax
    ax1.patch.zorder=0.9 # but this has a side effect that the patch is
                         # drawn twice, and possibly over some other
                         # artists. So, we decrease the zorder a bit to
                         # prevent this.

    return ax1, aux_ax

Here’s some example code (append this to the above to test it) which generates an example density function and then distorts it from -0.1 to 0.1 radians, and from 1 to 10 in radius.

# input file grid dimensions
xmax = 2000
ymax = 50
thetamin = -0.1
thetamax = 0.1
rmin = 1.0
rmax = 10.0

# generate a fake density variable on a 2000x50 grid
img = np.zeros((xmax, ymax))
for x in range(0, xmax):
    for y in range(0, ymax):
        img[x, y] = np.sin(0.05 * x) * np.sin(0.05 * y) + 10

# coordinate arrays matching img's (radius, angle) shape
theta, rad = np.meshgrid(np.linspace(thetamin, thetamax, ymax),
                         np.linspace(rmin, rmax, xmax))
X = theta
Y = rad

# now let's plot the finished product!
fig = plt.figure(1, figsize=(8, 4))
fig.subplots_adjust(wspace=0.3, left=0.05, right=0.95)

ax1, aux_ax1 = setup_axes(fig, 111, theta=[thetamin, thetamax], radius=[rmin, rmax])

aux_ax1.pcolormesh(X, Y, img)
plt.show()

Without the projection you get a plain Cartesian image of the (theta, r) grid; with setup_axes, the same data are drawn as the proper wedge in spherical geometry.

Posted in reference

simulation time too short to find final Mdot

In all of these parameter studies so far, I’m just running a CAK wind under standard zeta Pup parameters, but varying dinflo. (Figure: mdot_vs_dinflo_log_grid.)


Previously we looked at a linear region around the expected dinflo of 1.2e-12 g/cm^3, which is visible on this plot as well. Now, however, we explore about two orders of magnitude below that in density, and we find some weird behavior: the mass loss rate decreases as expected, but then peaks sharply as we reach 1/100 of the standard dinflo prescription.

This is almost certainly not physical. After inspecting the simulation outputs, it looks like at the low densities the simulation has not yet settled down enough for an accurate reading of the mass loss rate. I’ll rerun it tonight for twice as long, which should be sufficient: a characteristic flow time is 10R_* / v_{\infty} \approx 100 \, ksec, and we want a couple of these timescales in the simulation. Since CAK runs are pretty short, I’ll probably use 500 ksec this time, up from 300 ksec.
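As a quick sanity check on that flow time (the terminal speed below is my own assumption, since the post doesn’t state one; ~1500 km/s is a typical value for zeta Pup):

```python
Rsun = 6.955e10          # solar radius (cm)
Rstar = 20.0 * Rsun      # zeta Pup radius used in these runs
vinf = 1.5e8             # assumed terminal speed (cm/s), ~1500 km/s

t_flow = 10.0 * Rstar / vinf   # time for the wind to cross 10 stellar radii (s)
print(t_flow / 1.0e3)          # in ksec; consistent with the ~100 ksec estimate
```

With these numbers the crossing time comes out just above 90 ksec, so 500 ksec of simulation gives several flow times even after discarding the initial transient.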

Posted in Uncategorized