Next, multiplied by a power law.
I have my own implementation of the SSF radiation force in the hydro code VH-1, which has some sort of lingering bug associated with the inner wind. I've decided instead to work with Jon Sundqvist's implementation, so one of my recent projects has been to move the LDI simulation up to 2D.
Setting the dimension in VH-1 is pretty straightforward, but Jon's code has some sort of bug that makes it crash whenever I turn on the second dimension. To be frank, I have a hard time reading other people's code, so I figured I would first try going 2D in my own SSF code. It works pretty well:
The problem is that my own SSF implementation has a small bug that prevents it from recovering the stable CAK solution very close to the star.
The only hurdle: I originally had a log-gridding function, which was causing crashes. Disabling it fixed everything, which may be a clue about why Jon's code fails to run in 2D.
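For context, a log-gridding function of this kind builds a radial grid whose zone widths grow geometrically outward, concentrating resolution near the stellar surface. Here is a minimal sketch of the idea in Python (a hypothetical stand-in, not the actual VH-1 routine):

```python
import numpy as np

def log_grid(r_min, r_max, n_zones):
    """Logarithmically spaced radial grid: zone edges form a geometric
    sequence, so zone widths grow with radius and resolution is
    concentrated near r_min (the inner wind)."""
    edges = np.geomspace(r_min, r_max, n_zones + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return edges, centers

# Example: 100 zones from the stellar surface out to 10 stellar radii.
edges, centers = log_grid(1.0, 10.0, 100)
```

A grid like this is common in wind simulations because the flow structure varies fastest near the sonic point; the trade-off is that assumptions of uniform spacing elsewhere in the code (e.g. in finite-difference stencils) can break, which is one generic way such a feature causes crashes.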
Based on normalization upper limits from Véro and Maurice, we have some new information at the high-temperature end.
The hottest detected line is Si XIV.
Maurice got some upper limits on normalizations from EPIC data for Fe XXV (1.86 Å) and S XV (5.06 Å). The upper limits don’t tell us very much, unfortunately. The two new lines are the boxes with only a lower part.
Véro sent David some upper limits on the non-detected lines, Ca XX (3.02 Å) and Ca XIX (3.19 Å). These results are a little more promising, but still don’t seem to indicate an exponential turnover. Again, the two new lines are the boxes with only a lower part.
I’ve been working on better error estimates in the Np(T) calculations, and I needed a way to quickly calculate the transmission fraction produced by the windtabs model. The point of windtabs is to have a big table full of data (hence “tabs”) to use for model fitting, but I didn’t have access to that table (it’s in a weird XSPEC file format).
The answer is pretty straightforward: make my own table and interpolate over it. The windtabs transmissions are pretty smooth, so I didn't need that many points. Using a 30 by 30 grid evenly spaced for and for , we get a pretty good fit. The plot below is for , with an exponential absorption model plotted for comparison in red, the full double integral in blue, and the interpolation function in gold. Note the deviation because I only used .
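The table-plus-interpolation approach can be sketched as follows. This is a minimal Python illustration, not the actual windtabs calculation: the `transmission` function below is a hypothetical smooth stand-in for the full double integral, and the parameter ranges are assumed for the example.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def transmission(x, y):
    """Hypothetical smooth stand-in for the expensive double-integral
    transmission; the real function comes from the windtabs model."""
    return np.exp(-x / (1.0 + y))

# 30 x 30 grid, evenly spaced over an assumed parameter range.
xs = np.linspace(0.0, 5.0, 30)
ys = np.linspace(0.0, 5.0, 30)
table = transmission(xs[:, None], ys[None, :])

# Build a fast interpolator over the precomputed table.
interp = RegularGridInterpolator((xs, ys), table)

# Lookups anywhere inside the tabulated range are now cheap.
print(interp([[1.0, 2.0]]))  # close to transmission(1.0, 2.0)
```

Because the underlying function is smooth, even linear interpolation on a coarse 30 by 30 grid reproduces the full calculation to well under a percent, which is plenty for Monte Carlo work.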
If we explore this parameter space a bit, we find that it is indeed pretty smooth (below is ).
The end result is that I can really quickly generate values for a particular and combination, for use in a Monte Carlo error estimate. Below is a histogram for normally distributed and .
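The Monte Carlo step itself looks roughly like this. Again a hedged sketch: the table is the same hypothetical stand-in as above, and the input means and widths are illustrative, not the fitted values.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(42)

# Stand-in smooth transmission table (the real one comes from windtabs).
xs = np.linspace(0.0, 5.0, 30)
ys = np.linspace(0.0, 5.0, 30)
table = np.exp(-xs[:, None] / (1.0 + ys[None, :]))
interp = RegularGridInterpolator((xs, ys), table)

# Normally distributed inputs (illustrative means and sigmas), clipped
# to the tabulated range so the interpolator never extrapolates.
n = 100_000
x_samp = np.clip(rng.normal(2.0, 0.3, n), 0.0, 5.0)
y_samp = np.clip(rng.normal(1.0, 0.2, n), 0.0, 5.0)

# Propagate the input uncertainties through the fast lookup.
t_samp = interp(np.column_stack([x_samp, y_samp]))
print(t_samp.mean(), t_samp.std())  # summary of the output distribution
```

The histogram of `t_samp` is the propagated error distribution; since each evaluation is a table lookup rather than a double integral, 10^5 samples take a fraction of a second.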
Unfortunately, I’m now stuck! I need an input probability distribution to model the asymmetric error on , but I have no idea what it is! Some talking with David suggests that the base distribution (and upper/lower errors) comes from fitting, so some exploration of what XSPEC is doing might be fruitful.
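One candidate worth noting while that exploration happens: a split normal (two-piece Gaussian) is a common ad hoc model for asymmetric plus/minus errors. This is only a speculative sketch, not what XSPEC actually does:

```python
import numpy as np

def split_normal(rng, mode, sigma_lo, sigma_hi, size):
    """Two-piece Gaussian: half-normal with width sigma_lo below the
    mode and sigma_hi above it, matched at the mode. The probability
    mass on each side is proportional to that side's sigma."""
    p_hi = sigma_hi / (sigma_lo + sigma_hi)
    side = rng.random(size) < p_hi          # True -> upper branch
    draws = np.abs(rng.normal(0.0, 1.0, size))
    return mode + np.where(side, sigma_hi * draws, -sigma_lo * draws)

rng = np.random.default_rng(0)
# Hypothetical best fit of 10 with asymmetric errors -1/+3.
samples = split_normal(rng, 10.0, 1.0, 3.0, 100_000)
```

Whether this matches the distribution implied by the XSPEC confidence intervals is exactly the open question, so treat it as a placeholder until that's understood.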
You can get the (rather messy) Mathematica notebook here.
All five of our program stars in the Np(T) paper share a particular Mg XI line at 8.421 Å. If we compare the shock probability for this line across the stars, we get an interesting trend: earlier spectral subtype seems to imply a higher shock probability.
Various tasks remain before we can publish the results.
In my work, I’ve used an upper integration limit of . This produces the gray box plot of zeta Puppis below.
However, taking this integration limit to (effectively infinity) has minimal effects, though it does move the gray box plots closer to the power law.
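The size of the effect can be seen with a toy integrand. This is a hypothetical power-law stand-in, not the actual Np(T) integrand, chosen only to show why extending a convergent tail integral to infinity changes the answer by a few percent at most:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical integrand with a steep power-law tail, standing in for
# the actual emission integrand (which is not reproduced here).
f = lambda t: t**-2.5

finite, _ = quad(f, 1.0, 10.0)       # truncated upper limit
infinite, _ = quad(f, 1.0, np.inf)   # effectively infinity

# For t^-2.5 the exact values are (1 - 10^-1.5)/1.5 and 1/1.5,
# so the tail beyond the cutoff contributes only a few percent.
print(finite, infinite)
```

As long as the integrand falls off steeply enough to converge, the choice of a finite cutoff versus infinity only shifts results at the few-percent level, consistent with the small shift in the box plots.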
Jon sent me his LDI implementation in the F90 version of VH-1. I had some initial trouble getting it running, as I wasn't familiar with the configuration settings, and consequently had some issues with the mass-loss rate:
It's possible that this was because I had not used the isothermal option. Jon sent me an updated INDAT file, and we get the expected results: