temet.util package

Submodules

temet.util.boxRemap module

Implementation of ‘a volume and local structure preserving mapping of periodic boxes’ (Carlson, White 2010) http://mwhite.berkeley.edu/BoxRemap/.

class Plane(p, n)

Bases: object

normal()
test(x, y, z)
class Cell(ix=0, iy=0, iz=0)

Bases: object

contains(x, y, z)
class Cuboid(u1=(1, 0, 0), u2=(0, 1, 0), u3=(0, 0, 1))

Bases: object

Main cuboid remapping class.

dot(u, v)
triple_scalar_product(u, v, w)
square(v)
length(v)
UnitCubeTest(P)

Return +1, -1, or 0 if the unit cube is above, below, or intersecting the plane.

GetCells()

Create and return a flattened, ndarray representation of the cells and faces/planes.

GetN123()
Transform(x, y, z)
InverseTransform(r1, r2, r3)
CuboidTransformPoint(x, y, z, nCellsTot, nFacesTot, faceOff, faceLen, cell_ix, cell_iy, cell_iz, face_a, face_b, face_c, face_d, n1, n2, n3)

Do Cuboid transform of a single (x,y,z) point given the flattened input configuration. Return 3-tuple.

CuboidTransformArray(pos_in, pos_out, nCellsTot, nFacesTot, faceOff, faceLen, cell_ix, cell_iy, cell_iz, face_a, face_b, face_c, face_d, n1, n2, n3)

Do Cuboid transform of a [N,3] coordinate array, given the flattened input configuration. Write results to pos_out.

findCuboidRemapInds(remapRatio, nPixels=None)

Find the closest remapping matrix (3x3) to achieve the input remapping ratio of [x,y,z] relative extents. Also input nPixels [x,y] of the final image to be made, in order to calculate newBoxSize.

remapPositions(sP, pos, remapRatio, nPixels)

Remap a [N,3] coordinate array from the original cubic periodic domain of side-length sP.boxSize into a non-periodic, rectangular domain with the relative shape of remapRatio in the (x,y,z) dimensions.
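
A minimal usage sketch (assuming the import paths below, and that ‘pos’ is a valid snapshotSubset() field returning an [N,3] coordinate array in code units; see the simParams examples later on this page):

import temet
from temet.util.boxRemap import remapPositions

sim = temet.sim(run='tng100-1', redshift=0.0)
pos = sim.snapshotSubset('gas', 'pos')  # [N,3] code-unit coordinates (field name assumed)

# remap the periodic cube into a 4:2:1 rectangular domain, for a 1600x800 pixel image
pos_remapped = remapPositions(sim, pos, remapRatio=[4, 2, 1], nPixels=[1600, 800])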

test()

temet.util.dataConvert module

Various data exporters/converters, between different formats, etc.

concatSubboxFilesAndMinify()

Minify a series of subbox snapshots by removing unwanted fields, and re-save them concatenated into a smaller number of chunks.

groupCutoutFromSnap(run='tng')

Create a [full] subhalo/fof cutout from a snapshot (as would be done by the Web API).

tracerCutoutFromTracerTracksCat()

Create a subhalo cutout of tracers from the full postprocessing/tracer_tracks/ catalog.

makeSnapHeadersForLHaloTree()

Copy only chunk 0 of each snapshot, and only its header (for LHaloTree B-HaloTree).

makeSnapSubsetsForPostProcessing()

Copy snapshot chunks reducing to needed fields for tree/post-processing calculations.

finalizeSubfindHBTGroupCat(snap, prep=False)

For a Subfind-HBT post-processing of an original Subfind simulation: extract the new group-ordered IDs from the rewritten snapshot, place them into the catalog itself, and cross-match to the original snapshot IDs to write the index mapping into the catalog. Finally, rewrite the catalog into a single file, and add FoF and subhalo cross-matching to the original catalog. To run: (i) first create minimal snapshots with makeSnapSubsetsForPostProcessing() above, (ii) run Gadget-4 SubfindHBT, (iii) run finalizeSubfindHBTGroupCat(prep=True), (iv) create SubLinkHBT trees, (v) run makeSubgroupOffsetsIntoSublinkTreeGlobal(basePath, treeName=’SubLinkHBT’), (vi) run finalizeSubfindHBTGroupCat() to finish.

makeSdssSpecObjIDhdf5()

Transform some CSV files into an HDF5 file for the SDSS objid -> specobjid mapping.

createEmptyMissingGroupCatChunk()
combineAuxCatSubdivisions()

Combine a subdivision of a pSplit auxCat calculation.

snapRedshiftsTxt()

Output a text-file of snapshot redshifts, etc.

tngVariantsLatexOrWikiTable(variants='all', fmt='wiki')

Output a LaTeX- or wiki-syntax table (per fmt) describing the TNG model variation runs.

splitSingleHDF5IntoChunks(snap=151)

Split a single-file snapshot/catalog/etc HDF5 into a number of roughly equally sized chunks.

combineMultipleHDF5FilesIntoSingle()

Combine multiple groupcat file chunks into a single HDF5 file.

convertVoronoiConnectivityVPPP(stage=1, thisTask=0)

Read the Voronoi mesh data from Chris Byrohl using his vppp (voro++ parallel) approach, save to HDF5.

exportSubhalosBinary()

Export a very minimal group catalog to a flat binary format (for WebGL/Explorer3D).

exportHierarchicalBoxGrids(sP, partType='gas', partField='mass', nCells=[32, 64, 128, 256, 512], haloID=None, haloSizeRvir=2.0, retData=False, memoryReturn=False)

Export one or more 3D uniform Cartesian grids, of different resolutions, to a flat binary format.

Parameters:
  • sP (simParams) – simulation instance.

  • partType (str)

  • partField (str)

  • nCells (list[int]) – one or more grid sizes (number of cells per linear dimension)

  • haloID (int or None) – if None then use full box, otherwise fof-restricted particle load.

  • haloSizeRvir (float) – if haloID is specified, then this gives the box side-length in rvir units.

  • retData (bool) – if True, immediately return python object (list of ndarrays) of data.

  • memoryReturn (bool) – if True, an actual file is not written, and instead a bytes array is returned.

Returns:

Either a list, raw binary inside an io.BytesIO, or no return if an actual file is written to disk.

Note

This function is used to pre-compute the grids used in the Explorer3D WebGL volume rendering interface, as well as on-the-fly grid generation for halo-scope volume rendering.

exportIltisCutout(sim, haloIDs, emLine='O  7 21.6020A', haloSizeRvir=2.0, sfrEmisFac=1)

Export a snapshot cutout (global scope) around a given halo, for use in ILTIS-RT.

Parameters:
  • sim (simParams) – simulation instance.

  • haloIDs (list[int]) – list of FoF halo IDs to center the cutouts on.

  • emLine (str) – the emission line to save the emissivity of, which also specifies the ion for the density field.

  • haloSizeRvir (float) – the box side-length in rvir units.

  • sfrEmisFac (float) – if not unity, then a multiplicative (boost) factor by which to change the emissivities for star-forming gas only, e.g. to generate simple models for bright central sources.

Returns:

None. An actual file is written to disk.

temet.util.dataConvertSim module

Conversion of data (snapshots/catalogs) between different cosmological simulations.

convertGadgetICsToHDF5(aip=False)

Convert Gadget-1/2 binary format ICs (DM-only, only pos/vel/IDs) into HDF5 format (keeping the original ordering).

convertMillenniumSubhaloCatalog(snap=63)

Convert a subhalo catalog (‘sub_tab_NNN.X’ files, the custom binary format of the Millennium simulation) to Illustris-like HDF5.

convertMillennium2SubhaloCatalog(snap=67)

Convert a subhalo catalog (‘subhalo_tab_NNN.X’ files, the custom binary format of the Millennium-2 simulation) to TNG-like HDF5.

convertMillenniumSnapshot(snap=63)

Convert a complete Millennium snapshot (+IDs) into Illustris-like group-ordered HDF5 format.

convertMillennium2Snapshot(snap=67)

Convert a complete Millennium-2 snapshot into TNG-like group-ordered HDF5 format. Note all snapshots except 4-7 (inclusive) are already group-ordered.

convertEagleSnapshot(snap=20)

Convert an EAGLE simulation snapshot (HDF5) to a TNG-like snapshot (field names, units, etc).

convertSimbaSnapshot(snap=151)

Convert a SIMBA simulation snapshot (HDF5) to a TNG-like snapshot (field names, units, etc).

fixSimbaSMBHs(snap=112)
testSimba(redshift=0.0)

Compare all snapshot fields (1d histograms) vs TNG to check unit conversions, etc.

testSimbaCat(redshift=0.0)

Compare all group cat fields (1d histograms) to check unit conversions, etc.

temet.util.gfmExternalFiles module

Create external data files required for GFM runs (photometrics, cooling, etc)

makeStellarPhotometricsHDF5_BC03()

Create stellar_photometrics.hdf5 file using BC03 models, as used for Illustris and IllustrisTNG runs. Bands: UBVK (Buser U,B3,V,IR K filter + Palomar200 IR detectors + atmosphere.57) in Vega, griz (sdss) in AB.

temet.util.helper module

General helper functions, small algorithms, basic I/O, etc.

nUnique(x)

Return number of unique elements in input numpy array x.

isUnique(x)

Does input array contain only unique values?

closest(array, value)

Return closest element of array to input value.

array_equal_nan(a, b)

As np.array_equal(a,b) but allowing NaN==NaN.

evenlySample(sequence, num, logSpace=False)

Return num samples from sequence roughly equally spaced.

contiguousIntSubsets(x)

Return a list of index pairs corresponding to contiguous integer subsets of the input array. Final index of each pair is exclusive as in numpy syntax, so to obtain all the elements of a range from a numpy array, x[ranges[0][0]:ranges[0][1]].
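
For illustration, a self-contained sketch of the described behavior (not necessarily the temet implementation), splitting a sorted integer array into runs of consecutive values with numpy-style exclusive end indices:

import numpy as np

def contiguous_int_subsets(x):
    breaks = np.where(np.diff(x) != 1)[0] + 1
    starts = np.concatenate(([0], breaks))
    ends = np.concatenate((breaks, [len(x)]))
    return list(zip(starts, ends))

x = np.array([3, 4, 5, 9, 10, 20])
ranges = contiguous_int_subsets(x)       # [(0, 3), (3, 5), (5, 6)]
assert np.array_equal(x[ranges[0][0]:ranges[0][1]], [3, 4, 5])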

logZeroSafe(x, zeroVal=1.0)

Take log10 of input variable or array, keeping zeros at some value.

logZeroMin(x)

Take log10 of input variable, setting zeros to 100 times less than the minimum.

logZeroNaN(x)

Take log10, setting zeros to NaN silently and leaving NaN as NaN (same as the default behavior, but suppress warnings).

last_nonzero(array, axis, invalid_val=-1)

Return the indices of the last nonzero entries of the array, along the given axis.

iterable(x)

Protect against non-list/non-tuple (e.g. scalar or single string) value of x, to guarantee that a for loop can iterate over this object correctly.

rebin(x, shape)

Resize the input 2D array x to a new shape (typically smaller) by averaging.

reportMemory()

Return current Python process memory usage in GB.

numPartToChunkLoadSize(numPart)

For a given snapshot size, in terms of total particle count, decide on a good chunk loading size for a reasonable compute/memory balance.

tail(fileName, nLines)

Wrap linux tail command line utility.

running_median(X, Y, nBins=100, binSize=None, binSizeLg=None, skipZeros=False, percs=None, minNumPerBin=10, mean=False, weights=None)

Create an adaptive median line of an (x,y) point set using some number of bins.
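
As a point of reference, a self-contained sketch of the basic fixed-bin median idea underlying running_median() and the related running_* helpers below (the temet routines add adaptive bin sizes, percentiles, weights, minimum counts per bin, etc.):

import numpy as np

def binned_median(X, Y, nBins=100):
    edges = np.linspace(X.min(), X.max(), nBins + 1)
    inds = np.digitize(X, edges) - 1
    xmid = 0.5 * (edges[:-1] + edges[1:])
    ymed = np.array([np.median(Y[inds == i]) if np.any(inds == i) else np.nan
                     for i in range(nBins)])
    return xmid, ymed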

running_median_clipped(X, Y_in, nBins=100, minVal=None, maxVal=None, binSize=None, skipZerosX=False, skipZerosY=False, clipPercs=[10, 90])

Create a constant-bin median line of an (x,y) point set, clipping outliers above/below clipPercs.

running_median_sub(X, Y, S, nBins=100, binSize=None, skipZeros=False, sPercs=[25, 50, 75], percs=[16, 84], minNumPerBin=10)

Create an adaptive median line of an (x,y) point set using some number of bins, where in each bin only the sub-sample of points obtained by slicing a third value (S) above and/or below one or more percentile thresholds is used.

running_sigmawindow(X, Y, windowSize=None)

Create a local/adaptive estimate of the stddev of an (x,y) point set using a sliding window of windowSize points.

running_histogram(X, nBins=100, binSize=None, normFac=None, skipZeros=False)

Create an adaptive histogram of an (x) point set using some number of bins.

replicateVar(childCounts, subsetInds=None)

Given a number of children for each parent, replicate the parent indices for each child. If subsetInds is not None, the full childCounts are still walked, but only the parent indices of the requested subset are returned.

pSplit(array, numProcs, curProc)

Divide work for embarrassingly parallel problems.

pSplitRange(indrange, numProcs, curProc, inclusive=False)

As pSplit(), but accept a 2-tuple of [start,end] indices and return a new range subset. If inclusive==True, then assume the range subset will be used e.g. as input to snapshotSubset(), which unlike the numpy convention is inclusive in the indices.
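
A brief usage sketch of the work-splitting pattern (assuming pSplit is importable from temet.util.helper with the documented signature):

import numpy as np
from temet.util.helper import pSplit

halo_ids = np.arange(10000)
num_procs, cur_proc = 8, 3                  # e.g. taken from a job array task ID
my_halo_ids = pSplit(halo_ids, num_procs, cur_proc)

for halo_id in my_halo_ids:
    pass  # process only this task's share of the halos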

getIDIndexMapSparse(ids)

Return an array which maps ID->indices within dense, disjoint subsets which are allowed to be sparse in the entire ID range. Within each subset i of size binsize, array[ID-minids[i]+offset[i]] is the index of the original array ids where ID is found (assumes no duplicate IDs).

getIDIndexMap(ids)

Return an array of size max(ids)-min(ids) such that array[ID-min(ids)] is the index of the original array ids where ID is found (assumes a one-to-one mapping, not repeated indices as in the case of parentIDs for tracers).
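
A self-contained sketch of the dense ID -> index mapping described above (illustrative, not the temet code itself):

import numpy as np

ids = np.array([105, 101, 103, 102], dtype=np.int64)   # unique IDs, arbitrary order
imap = np.zeros(ids.max() - ids.min() + 1, dtype=np.int64)
imap[ids - ids.min()] = np.arange(len(ids))

lookup = np.array([103, 105])
inds = imap[lookup - ids.min()]                         # -> [2, 0]
assert np.array_equal(ids[inds], lookup)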

trapsum(xin, yin)

Trapezoidal rule numerical quadrature.

leastsq_fit(func, params_init, args=None)

Wrap scipy.optimize.leastsq() by making the error function and handling returns. If args is not None, then the standard errors are also computed, but this ASSUMES that args[0] is the x data points and args[1] is the y data points.

least_squares_fit(func, params_init, params_bounds, args=None)

Wrap scipy.optimize.least_squares() using the Trust Region Reflective algorithm for fitting with (optional) constraints on each parameter.

reducedChiSq(sim_x, sim_y, data_x, data_y, data_yerr=None, data_yerr_up=None, data_yerr_down=None)

Helper: compute the reduced (i.e. mean) chi-squared between a simulation ‘line’ and an observed relation given by a set of points with errorbars.

sgolay2d(z, window_size, order, derivative=None)

Savitzky-Golay filter in 2D using FFT convolutions.

gaussian_filter_nan(zz, sigma)

Filter 2D array zz by a Gaussian with input sigma, ignoring any NaN entries.

mvbe(points, tol=0.005)

Find the ‘minimum volume bounding ellipsoid’ of a given set of points (iterative). Returns the ellipsoid equation in “center form”: (x-c).T * A * (x-c) = 1. Based on MVEE in Matlab by Nima Moshtagh.

binned_stat_2d(x, y, c, bins, range_x, range_y, stat='median')

Replacement of binned_statistic_2d for mean_nan or median_nan.

binned_statistic_weighted(x, values, statistic, bins, weights=None, weights_w=None)

If weights == None, straight passthrough to scipy.stats.binned_statistic(). Otherwise, compute once for values*weights, again for weights alone, then normalize and return. If weights_w is not None, apply this np.where() result to the weights array.
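
The weighted-mean case reduces to binning values*weights and weights separately and then normalizing; an illustrative sketch of that trick (not the temet wrapper itself):

import numpy as np
from scipy.stats import binned_statistic

def weighted_mean_binned(x, values, weights, bins):
    num, _, _ = binned_statistic(x, values * weights, statistic='sum', bins=bins)
    den, _, _ = binned_statistic(x, weights, statistic='sum', bins=bins)
    return num / den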

lowess(x, y, x0, degree=1, l=1, robust=False)

Locally smoothed regression with the LOWESS algorithm (https://github.com/arokem/lowess).

x - values of x for which f(x) is input, with shape (ndim, y.size)
y - values of f(x) at these points (1d)
x0 - x values to output the LOWESS estimate for (can be the same as x)
degree - degree of smoothing functions (0=locally constant, 1=linear, …)
l - metric window size for the kernel (scalar, or array of y.size)
robust - if True, apply the robustification procedure from Cleveland (1979), pg 831

weighted_std_binned(x, vals, weights, bins)

For a given set of bins (edges), histogram x into those bins, and then compute and return the standard deviation (unbiased) of vals weighted by weights, per-bin. Assumes ‘reliability’ (non-random) weights, following https://en.wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_variance.
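
An illustrative sketch of the per-bin reliability-weighted standard deviation, following the cited formula (not the temet implementation):

import numpy as np

def weighted_std_per_bin(x, vals, weights, bins):
    out = np.full(len(bins) - 1, np.nan)
    inds = np.digitize(x, bins) - 1
    for i in range(len(bins) - 1):
        w, v = weights[inds == i], vals[inds == i]
        if len(v) < 2 or w.sum() == 0:
            continue
        mean = np.sum(w * v) / np.sum(w)
        # unbiased variance for 'reliability' (non-frequency) weights
        var = np.sum(w * (v - mean)**2) / (np.sum(w) - np.sum(w**2) / np.sum(w))
        out[i] = np.sqrt(var)
    return out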

bincount(x, dtype)

Same behavior as np.bincount() except can specify dtype different than x.dtype to save memory.

periodicDistsN(pos1, pos2, BoxSize, squared=False)

Compute periodic distance between each (x,y,z) coordinate in pos1 vs. the corresponding (x,y,z) point in pos2. Either pos1 and pos2 have the same shapes, and are matched pairwise, or pos1 is a tuple (i.e. reference position).

periodicDistsIndexed(pos1, pos2, indices, BoxSize)

Compute periodic distance between each (x,y,z) coordinate in pos1 vs. the corresponding (x,y,z) point in pos2. Here pos1.shape[0] != pos2.shape[0], e.g. in the case that pos1 are group centers, and pos2 are particle positions. Then indices gives, for each pos2, the index of the corresponding pos1 element to compute the distance to, e.g. the group ID of each particle. Return size is the length of pos2.
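
For reference, a minimal sketch of the minimum-image convention these periodic distance routines rely on (illustrative only):

import numpy as np

def periodic_dists(pos_ref, pos, box_size):
    d = np.atleast_2d(pos) - np.atleast_2d(pos_ref)   # pos_ref may be a single point
    d -= box_size * np.round(d / box_size)            # wrap each component into [-box/2, box/2]
    return np.sqrt((d**2).sum(axis=1))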

crossmatchHalosByCommonIDs(ids1, lengths1, ids2, lengths2)

For two sets of objects (e.g. halos or subhalos) which contain member IDs, find, for each object in the first set, the best-matching index in the second set, corresponding to the object in the second set with the largest number of shared IDs.

getWhiteBlackColors(pStyle)

Plot style helper.

validColorTableNames()

Return a list of whitelisted colormap names.

loadColorTable(ctName, valMinMax=None, plawScale=None, cmapCenterVal=None, fracSubset=None, numColors=None)

Load a custom or built-in color table specified by name.

Parameters:
  • ctName (str) – requested color map by name. Note that appending ‘_r’ to most default colormap names requests the colormap in reverse order.

  • valMinMax (list[float][2]) – required for some custom colormaps, and for some adjustments.

  • plawScale (float) – return the colormap modified as cmap_new = cmap_old**plawScale.

  • cmapCenterVal (float) – return the colormap modified such that its middle point lands at the numerical value cmapCenterVal, given the bounds valMinMax (e.g. zero, for any symmetric colormap, say for positive/negative radial velocities).

  • fracSubset (list[float][2]) – a 2-tuple in [0,1] e.g. [0.2,0.8] to use only part of the original cmap range.

  • numColors (int or None) – if not None, integer number of discrete colors of the desired colortable (matplotlib colormaps only).

Returns:

requested colormap.

Return type:

matplotlib.colors.Colormap

sampleColorTable(ctName, num, bounds=None)

Grab a sequence of colors, evenly spaced, from a given colortable.
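
A brief usage sketch (assuming these helpers are importable from temet.util.helper, and that standard matplotlib names such as ‘viridis’ and ‘plasma’ are among the whitelisted colortables):

from temet.util.helper import loadColorTable, sampleColorTable

cmap = loadColorTable('viridis_r', fracSubset=[0.2, 1.0])   # reversed, upper 80% of range
cmap_discrete = loadColorTable('plasma', numColors=8)       # 8 discrete colors

# five evenly spaced colors from a named colortable, e.g. for a set of line plots
colors = sampleColorTable('plasma', 5)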

contourf(*args, **kwargs)

Wrap matplotlib.contourf() for a graphical fix in PDF output.

plothist(x, filename='out.pdf', nBins=50, norm=False, skipzeros=True)

Plot a quick 1D histogram of an array x and save it to a PDF.

plotxy(x, y, filename='plot.pdf')

Plot a quick 1D line plot of x vs. y and save it to a PDF.

plot2d(grid, label='', minmax=None, filename='plot.pdf')

Plot a quick image plot of a 2d array/grid and save it to a PDF.

curRepoVersion()

Return a hash of the current state of the mercurial python repo.

temet.util.job_monitor module

Helpers for working with SLURM jobs.

checkVisJobs()

Categorize a large job set into running/completed/failed and automatically re-submit jobs which have failed.

submitExpandedJobs(jobNum)

temet.util.metaPlots module

Non-science and other meta plots.

plotUsersData()

Parse and plot a user data dump from the Illustris[TNG] public data release website.

plotNumPublicationsVsTime()

Compare the number of publications vs time for various projects.

plotCpuTimeEstimates()

Read the log file produced by plotCpuTimes() and plot the ‘predicted’ total CPUhs and finish date of a run versus the date on which that prediction was made.

periodic_slurm_status(machine='vera', nosave=False)

Collect current statistics from the SLURM scheduler, save some data, make some plots.

temet.util.parallelSort module

argsort(*args, **kwargs)

Overloaded function.

  1. argsort(arg0: numpy.ndarray[numpy.int32]) -> numpy.ndarray[numpy.int64]

An argsort-like function (int32 input).

  2. argsort(arg0: numpy.ndarray[numpy.int64]) -> numpy.ndarray[numpy.int64]

An argsort-like function (int64 input).

  3. argsort(arg0: numpy.ndarray[numpy.uint32]) -> numpy.ndarray[numpy.int64]

An argsort-like function (uint32 input).

  4. argsort(arg0: numpy.ndarray[numpy.uint64]) -> numpy.ndarray[numpy.int64]

An argsort-like function (uint64 input).

sort(*args, **kwargs)

Overloaded function.

  1. sort(arg0: numpy.ndarray[numpy.float32]) -> None

A function that sorts in-place (float).

  2. sort(arg0: numpy.ndarray[numpy.float64]) -> None

A function that sorts in-place (double).

  3. sort(arg0: numpy.ndarray[numpy.int16]) -> None

A function that sorts in-place (int16).

  4. sort(arg0: numpy.ndarray[numpy.int32]) -> None

A function that sorts in-place (int32).

  5. sort(arg0: numpy.ndarray[numpy.int64]) -> None

A function that sorts in-place (int64).

  6. sort(arg0: numpy.ndarray[numpy.uint16]) -> None

A function that sorts in-place (uint16).

  7. sort(arg0: numpy.ndarray[numpy.uint32]) -> None

A function that sorts in-place (uint32).

  8. sort(arg0: numpy.ndarray[numpy.uint64]) -> None

A function that sorts in-place (uint64).

temet.util.png module

Pure Python PNG Reader/Writer

This Python module implements support for PNG images (see PNG specification at http://www.w3.org/TR/2003/REC-PNG-20031110/ ). It reads and writes PNG files with all allowable bit depths (1/2/4/8/16/24/32/48/64 bits per pixel) and colour combinations: greyscale (1/2/4/8/16 bit); RGB, RGBA, LA (greyscale with alpha) with 8/16 bits per channel; colour mapped images (1/2/4/8 bit). Adam7 interlacing is supported for reading and writing. A number of optional chunks can be specified (when writing) and understood (when reading): tRNS, bKGD, gAMA.

For help, type import png; help(png) in your python interpreter.

A good place to start is the Reader and Writer classes.

Requires Python 2.3. Limited support is available for Python 2.2, but not everything works. Best with Python 2.4 and higher. Installation is trivial, but see the README.txt file (with the source distribution) for details.

This file can also be used as a command-line utility to convert Netpbm PNM files to PNG, and the reverse conversion from PNG to PNM. The interface is similar to that of the pnmtopng program from Netpbm. Type python png.py --help at the shell prompt for usage and a list of options.

A note on spelling and terminology

Generally British English spelling is used in the documentation. So that’s “greyscale” and “colour”. This not only matches the author’s native language, it’s also used by the PNG specification.

The major colour models supported by PNG (and hence by PyPNG) are: greyscale, RGB, greyscale–alpha, RGB–alpha. These are sometimes referred to using the abbreviations: L, RGB, LA, RGBA. In this case each letter abbreviates a single channel: L is for Luminance or Luma or Lightness which is the channel used in greyscale images; R, G, B stand for Red, Green, Blue, the components of a colour image; A stands for Alpha, the opacity channel (used for transparency effects, but higher values are more opaque, so it makes sense to call it opacity).

A note on formats

When getting pixel data out of this module (reading) and presenting data to this module (writing) there are a number of ways the data could be represented as a Python value. Generally this module uses one of three formats called “flat row flat pixel”, “boxed row flat pixel”, and “boxed row boxed pixel”. Basically the concern is whether each pixel and each row comes in its own little tuple (box), or not.

Consider an image that is 3 pixels wide by 2 pixels high, and each pixel has RGB components:

Boxed row flat pixel:

list([R,G,B, R,G,B, R,G,B],
     [R,G,B, R,G,B, R,G,B])

Each row appears as its own list, but the pixels are flattened so that three values for one pixel simply follow the three values for the previous pixel. This is the most common format used, because it provides a good compromise between space and convenience. PyPNG regards itself as at liberty to replace any sequence type with any sufficiently compatible other sequence type; in practice each row is an array (from the array module), and the outer list is sometimes an iterator rather than an explicit list (so that streaming is possible).

Flat row flat pixel:

[R,G,B, R,G,B, R,G,B,
 R,G,B, R,G,B, R,G,B]

The entire image is one single giant sequence of colour values. Generally an array will be used (to save space), not a list.

Boxed row boxed pixel:

list([ (R,G,B), (R,G,B), (R,G,B) ],
     [ (R,G,B), (R,G,B), (R,G,B) ])

Each row appears in its own list, but each pixel also appears in its own tuple. A serious memory burn in Python.

In all cases the top row comes first, and for each row the pixels are ordered from left-to-right. Within a pixel the values appear in the order, R-G-B-A (or L-A for greyscale–alpha).

There is a fourth format, mentioned because it is used internally, is close to what lies inside a PNG file itself, and has some support from the public API. This format is called packed. When packed, each row is a sequence of bytes (integers from 0 to 255), just as it is before PNG scanline filtering is applied. When the bit depth is 8 this is essentially the same as boxed row flat pixel; when the bit depth is less than 8, several pixels are packed into each byte; when the bit depth is 16 (the only value more than 8 that is supported by the PNG image format) each pixel value is decomposed into 2 bytes (and packed is a misnomer). This format is used by the Writer.write_packed() method. It isn’t usually a convenient format, but may be just right if the source data for the PNG image comes from something that uses a similar format (for example, 1-bit BMPs, or another PNG file).

And now, my famous members

class Image(rows, info)

Bases: object

A PNG image. You can create an Image object from an array of pixels by calling png.from_array(). It can be saved to disk with the save() method.

save(file)

Save the image to file. If file looks like an open file descriptor then it is used, otherwise it is treated as a filename and a fresh file is opened.

In general, you can only call this method once; after it has been called the first time and the PNG image has been saved, the source data will have been streamed, and cannot be streamed again.

class Reader(_guess=None, **kw)

Bases: object

PNG decoder in pure Python.

chunk(seek=None, lenient=False)

Read the next PNG chunk from the input file; returns a (type,*data*) tuple. type is the chunk’s type as a string (all PNG chunk types are 4 characters long). data is the chunk’s data content, as a string.

If the optional seek argument is specified then it will keep reading chunks until it either runs out of file or finds the type specified by the argument. Note that in general the order of chunks in PNGs is unspecified, so using seek can cause you to miss chunks.

If the optional lenient argument evaluates to True, checksum failures will raise warnings rather than exceptions.

chunks()

Return an iterator that will yield each chunk as a (chunktype, content) pair.

undo_filter(filter_type, scanline, previous)

Undo the filter for a scanline. scanline is a sequence of bytes that does not include the initial filter type byte. previous is decoded previous scanline (for straightlaced images this is the previous pixel row, but for interlaced images, it is the previous scanline in the reduced image, which in general is not the previous pixel row in the final image). When there is no previous scanline (the first row of a straightlaced image, or the first row in one of the passes in an interlaced image), then this argument should be None.

The scanline will have the effects of filtering removed, and the result will be returned as a fresh sequence of bytes.

deinterlace(raw)

Read raw pixel data, undo filters, deinterlace, and flatten. Return in flat row flat pixel format.

iterboxed(rows)

Iterator that yields each scanline in boxed row flat pixel format. rows should be an iterator that yields the bytes of each row in turn.

serialtoflat(bytes, width=None)

Convert serial format (byte stream) pixel data to flat row flat pixel.

iterstraight(raw)

Iterator that undoes the effect of filtering, and yields each row in serialised format (as a sequence of bytes). Assumes input is straightlaced. raw should be an iterable that yields the raw bytes in chunks of arbitrary size.

validate_signature()

If signature (header) has not been read then read and validate it; otherwise do nothing.

preamble(lenient=False)

Extract the image metadata by reading the initial part of the PNG file up to the start of the IDAT chunk. All the chunks that precede the IDAT chunk are read and either processed for metadata or discarded.

If the optional lenient argument evaluates to True, checksum failures will raise warnings rather than exceptions.

chunklentype()

Reads just enough of the input to determine the next chunk’s length and type, returned as a (length, type) pair where type is a string. If there are no more chunks, None is returned.

process_chunk(lenient=False)

Process the next chunk and its data. This only processes the following chunk types, all others are ignored: IHDR, PLTE, bKGD, tRNS, gAMA, sBIT.

If the optional lenient argument evaluates to True, checksum failures will raise warnings rather than exceptions.

read(lenient=False)

Read the PNG file and decode it. Returns (width, height, pixels, metadata).

May use excessive memory.

pixels are returned in boxed row flat pixel format.

If the optional lenient argument evaluates to True, checksum failures will raise warnings rather than exceptions.

read_flat()

Read a PNG file and decode it into flat row flat pixel format. Returns (width, height, pixels, metadata).

May use excessive memory.

pixels are returned in flat row flat pixel format.

See also the read() method which returns pixels in the more stream-friendly boxed row flat pixel format.

palette(alpha='natural')

Returns a palette that is a sequence of 3-tuples or 4-tuples, synthesizing it from the PLTE and tRNS chunks. These chunks should have already been processed (for example, by calling the preamble() method). All the tuples are the same size: 3-tuples if there is no tRNS chunk, 4-tuples when there is a tRNS chunk. Assumes that the image is colour type 3 and therefore a PLTE chunk is required.

If the alpha argument is 'force' then an alpha channel is always added, forcing the result to be a sequence of 4-tuples.

asDirect()

Returns the image data as a direct representation of an x * y * planes array. This method is intended to remove the need for callers to deal with palettes and transparency themselves. Images with a palette (colour type 3) are converted to RGB or RGBA; images with transparency (a tRNS chunk) are converted to LA or RGBA as appropriate. When returned in this format the pixel values represent the colour value directly without needing to refer to palettes or transparency information.

Like the read() method this method returns a 4-tuple:

(width, height, pixels, meta)

This method normally returns pixel values with the bit depth they have in the source image, but when the source PNG has an sBIT chunk it is inspected and can reduce the bit depth of the result pixels; pixel values will be reduced according to the bit depth specified in the sBIT chunk (PNG nerds should note a single result bit depth is used for all channels; the maximum of the ones specified in the sBIT chunk. An RGB565 image will be rescaled to 6-bit RGB666).

The meta dictionary that is returned reflects the direct format and not the original source image. For example, an RGB source image with a tRNS chunk to represent a transparent colour, will have planes=3 and alpha=False for the source image, but the meta dictionary returned by this method will have planes=4 and alpha=True because an alpha channel is synthesized and added.

pixels is the pixel data in boxed row flat pixel format (just like the read() method).

All the other aspects of the image data are not changed.

asFloat(maxval=1.0)

Return image pixels as per asDirect() method, but scale all pixel values to be floating point values between 0.0 and maxval.

asRGB8()

Return the image data as an RGB pixels with 8-bits per sample. This is like the asRGB() method except that this method additionally rescales the values so that they are all between 0 and 255 (8-bit). In the case where the source image has a bit depth < 8 the transformation preserves all the information; where the source image has bit depth > 8, then rescaling to 8-bit values loses precision. No dithering is performed. Like asRGB(), an alpha channel in the source image will raise an exception.

This function returns a 4-tuple: (width, height, pixels, metadata). width, height, metadata are as per the read() method.

pixels is the pixel data in boxed row flat pixel format.

asRGBA8()

Return the image data as RGBA pixels with 8-bits per sample. This method is similar to asRGB8() and asRGBA(): The result pixels have an alpha channel, and values are rescaled to the range 0 to 255. The alpha channel is synthesized if necessary (with a small speed penalty).

asRGB()

Return image as RGB pixels. RGB colour images are passed through unchanged; greyscales are expanded into RGB triplets (there is a small speed overhead for doing this).

An alpha channel in the source image will raise an exception.

The return values are as for the read() method except that the metadata reflect the returned pixels, not the source image. In particular, for this method metadata['greyscale'] will be False.

asRGBA()

Return image as RGBA pixels. Greyscales are expanded into RGB triplets; an alpha channel is synthesized if necessary. The return values are as for the read() method except that the metadata reflect the returned pixels, not the source image. In particular, for this method metadata['greyscale'] will be False, and metadata['alpha'] will be True.

class Writer(width=None, height=None, size=None, greyscale=False, alpha=False, bitdepth=8, palette=None, transparent=None, background=None, gamma=None, compression=None, interlace=False, bytes_per_sample=None, planes=None, colormap=None, maxval=None, chunk_limit=1048576)

Bases: object

PNG encoder in pure Python.

make_palette()

Create the byte sequences for a PLTE and if necessary a tRNS chunk. Returned as a pair (p, t). t will be None if no tRNS chunk is necessary.

write(outfile, rows)

Write a PNG image to the output file. rows should be an iterable that yields each row in boxed row flat pixel format. The rows should be the rows of the original image, so there should be self.height rows of self.width * self.planes values. If interlace is specified (when creating the instance), then an interlaced PNG file will be written. Supply the rows in the normal image order; the interlacing is carried out internally.

Note

Interlacing will require the entire image to be in working memory.

write_passes(outfile, rows, packed=False)

Write a PNG image to the output file.

Most users are expected to find the write() or write_array() method more convenient.

The rows should be given to this method in the order that they appear in the output file. For straightlaced images, this is the usual top to bottom ordering, but for interlaced images the rows should have already been interlaced before passing them to this function.

rows should be an iterable that yields each row. When packed is False the rows should be in boxed row flat pixel format; when packed is True each row should be a packed sequence of bytes.

write_array(outfile, pixels)

Write an array in flat row flat pixel format as a PNG file on the output file. See also write() method.

write_packed(outfile, rows)

Write PNG file to outfile. The pixel data comes from rows which should be in boxed row packed format. Each row should be a sequence of packed bytes.

Technically, this method does work for interlaced images but it is best avoided. For interlaced images, the rows should be presented in the order that they appear in the file.

This method should not be used when the source image bit depth is not one naturally supported by PNG; the bit depth should be 1, 2, 4, 8, or 16.

convert_pnm(infile, outfile)

Convert a PNM file containing raw pixel data into a PNG file with the parameters set in the writer object. Works for (binary) PGM, PPM, and PAM formats.

convert_ppm_and_pgm(ppmfile, pgmfile, outfile)

Convert a PPM and PGM file containing raw pixel data into a PNG outfile with the parameters set in the writer object.

file_scanlines(infile)

Generates boxed rows in flat pixel format, from the input file infile. It assumes that the input file is in a “Netpbm-like” binary format, and is positioned at the beginning of the first pixel. The number of pixels to read is taken from the image dimensions (width, height, planes) and the number of bytes per value is implied by the image bitdepth.

array_scanlines(pixels)

Generates boxed rows (flat pixels) from flat rows (flat pixels) in an array.

array_scanlines_interlace(pixels)

Generator for interlaced scanlines from an array. pixels is the full source image in flat row flat pixel format. The generator yields each scanline of the reduced passes in turn, in boxed row flat pixel format.

write_chunks(out, chunks)

Create a PNG file by writing out the chunks.

from_array(a, mode=None, info={})

Create a PNG Image object from a 2- or 3-dimensional array. One application of this function is easy PIL-style saving: png.from_array(pixels, 'L').save('foo.png').

Unless they are specified using the info parameter, the PNG’s height and width are taken from the array size. For a 3 dimensional array the first axis is the height; the second axis is the width; and the third axis is the channel number. Thus an RGB image that is 16 pixels high and 8 wide will use an array that is 16x8x3. For 2 dimensional arrays the first axis is the height, but the second axis is width*channels, so an RGB image that is 16 pixels high and 8 wide will use a 2-dimensional array that is 16x24 (each row will be 8*3==24 sample values).

mode is a string that specifies the image colour format in a PIL-style mode. It can be:

'L'

greyscale (1 channel)

'LA'

greyscale with alpha (2 channel)

'RGB'

colour image (3 channel)

'RGBA'

colour image with alpha (4 channel)

The mode string can also specify the bit depth (overriding how this function normally derives the bit depth, see below). Appending ';16' to the mode will cause the PNG to be 16 bits per channel; any decimal from 1 to 16 can be used to specify the bit depth.

When a 2-dimensional array is used mode determines how many channels the image has, and so allows the width to be derived from the second array dimension.

The array is expected to be a numpy array, but it can be any suitable Python sequence. For example, a list of lists can be used: png.from_array([[0, 255, 0], [255, 0, 255]], 'L'). The exact rules are: len(a) gives the first dimension, height; len(a[0]) gives the second dimension; len(a[0][0]) gives the third dimension, unless an exception is raised in which case a 2-dimensional array is assumed. It’s slightly more complicated than that because an iterator of rows can be used, and it all still works. Using an iterator allows data to be streamed efficiently.

The bit depth of the PNG is normally taken from the array element’s datatype (but if mode specifies a bitdepth then that is used instead). The array element’s datatype is determined in a way which is supposed to work both for numpy arrays and for Python array.array objects. A 1 byte datatype will give a bit depth of 8, a 2 byte datatype will give a bit depth of 16. If the datatype does not have an implicit size, for example it is a plain Python list of lists, as above, then a default of 8 is used.

The info parameter is a dictionary that can be used to specify metadata (in the same style as the arguments to the :class:png.Writer class). For this function the keys that are useful are:

height

overrides the height derived from the array dimensions and allows a to be an iterable.

width

overrides the width derived from the array dimensions.

bitdepth

overrides the bit depth derived from the element datatype (but must match mode if that also specifies a bit depth).

Generally anything specified in the info dictionary will override any implicit choices that this function would otherwise make, but must match any explicit ones. For example, if the info dictionary has a greyscale key then this must be true when mode is 'L' or 'LA' and false when mode is 'RGB' or 'RGBA'.
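
A minimal usage sketch of from_array() with a numpy array (the module is assumed to be importable as temet.util.png):

import numpy as np
import temet.util.png as png

height, width = 16, 8
rgb = np.zeros((height, width, 3), dtype=np.uint8)
rgb[:, :, 0] = np.linspace(0, 255, width, dtype=np.uint8)   # red gradient, left to right

png.from_array(rgb, 'RGB').save('gradient.png')             # bit depth 8 inferred from uint8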

temet.util.rotation module

Find rotation matrices (moment of inertia tensors) to place galaxies edge-on/face-on, do coordinate rotations.

meanAngMomVector(sP, subhaloID, shPos=None, shVel=None)

Calculate the 3-vector (x,y,z) of the mean angular momentum of either the star-forming gas or the inner stellar component, for rotation and projection into disk face/edge-on views.

momentOfInertiaTensor(sP, gas=None, stars=None, rHalf=None, shPos=None, subhaloID=None, useStars=True, onlyStars=False)

Calculate the moment of inertia tensor (3x3 matrix) for a subhalo or halo, given a load of its member gas and stars (at least within 2*rHalf==shHalfMassRadStars) and center position shPos. If useStars == True, then switch to stars if not enough star-forming gas is present; otherwise never use stars. If onlyStars == True, use stars alone to determine the tensor.

rotationMatricesFromInertiaTensor(I)

Calculate 3x3 rotation matrix by a diagonalization of the moment of inertia tensor. Note the resultant rotation matrices are hard-coded for projection with axes=[0,1] e.g. along z.

rotationMatrixFromVec(i_v_in, target_vec=None)

Calculate 3x3 rotation matrix to align the input vec with a target vector. By default this is the z-axis, such that with vec the angular momentum vector of the galaxy, an (x,y) projection will yield a face-on view, and an (x,z) projection will yield an edge-on view.
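
For illustration, a self-contained sketch of the underlying technique (Rodrigues’ rotation formula, not the temet code itself), building the 3x3 matrix that rotates a unit vector a onto a unit vector b (here the z-axis):

import numpy as np

def rotation_matrix_align(a, b=(0.0, 0.0, 1.0)):
    a = np.asarray(a, dtype=float); a /= np.linalg.norm(a)
    b = np.asarray(b, dtype=float); b /= np.linalg.norm(b)
    v = np.cross(a, b)
    s = np.linalg.norm(v)
    c = np.dot(a, b)
    if s == 0.0:  # a is (anti-)parallel to b
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])  # 180 deg about x (valid for b=z)
    K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)

# with a = the galaxy angular momentum vector, this aligns it with z, so an (x,y)
# projection is face-on and an (x,z) projection is edge-on
R = rotation_matrix_align([0.3, 0.5, 0.8])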

rotationMatrixFromAngleDirection(angle, direction)

Calculate 3x3 rotation matrix for input angle about an axis defined by the input direction about the origin. Input angle in degrees.

rotationMatrixFromAngle(angle)

Calculate 2x2 rotation matrix for input angle, in degrees, CCW from the positive x-axis.

rotateCoordinateArray(sP, pos, rotMatrix, rotCenter, shiftBack=True)

Rotate a [N,3] array of Coordinates about rotCenter according to rotMatrix.

perspectiveProjection(n, f, l, r, b, t, pos, hsml, axes)

Transforms coordinates using the Perspective Projection Matrix and sizes using the ratio of similar triangles (http://www.songho.ca/opengl/gl_projectionmatrix.html).

The truncated pyramid frustum is defined by:

n (float): The distance to the near plane along the line of sight direction.
f (float): The distance to the far plane along the line of sight direction.
[l, r] ([float, float]): The range of “x-axis” coordinates along the near plane.
[b, t] ([float, float]): The range of “y-axis” coordinates along the near plane.

(l,b)/(r,t) thus correspond to the bottom-left/top-right corners of the frustum projected onto the near plane.

The Perspective Projection Matrix computed from this set of parameters is thereafter used to transform:

pos (ndarray[float][N,3]): array of 3-coordinates for the particles; the camera is situated at z=0.
hsml (ndarray[float][N]): smoothing lengths.
axes (list[int][2]): the axes of the projection plane, e.g. [0,1] for a projection along the z-axis.

Returns:

  • tPos (ndarray[float][N,3]) – Transformed coordinates. Since the z-component is always mapped to the near plane in a perspective projection, only the x and y components are transformed here. The z component is left as is for use in, e.g., sphMap(). This, however, means that the transformed coordinates cannot be unprojected.

  • tHsml (ndarray[float][N]) – Transformed hsml; equal to the original hsml along the near plane, and scaling inversely with projection distance farther away from the camera.

Notes

  • The transformed values of hsml will be negative if the point is behind the camera.
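
As an illustration of the similar-triangles scaling described above, a minimal sketch (not the temet implementation, ignoring the [l,r]x[b,t] near-plane mapping and the axes selection):

import numpy as np

def perspective_scale(pos, hsml, n):
    # pos: [N,3] with the third column the line-of-sight distance from the camera at z=0
    z = pos[:, 2]
    fac = n / z                 # == 1 on the near plane, < 1 farther away, negative behind the camera
    tpos = pos.copy()
    tpos[:, 0] *= fac           # transverse coordinates shrink with distance
    tpos[:, 1] *= fac
    return tpos, hsml * fac     # smoothing lengths scale inversely with projection distance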

ellipsoidfit(pos_in, mass, scalerad, rin, rout, weighted=False)

Iterative algorithm to derive the shape (axes ratios) of a set of particles/cells given by pos_in and mass, within a radial shell defined by rin, rout. Positions should be unitless, normalized by scalerad (a scaling factor, could be the virial radius or e.g. a half mass radius). Originally from Eddie Chua.

ellipsoidfit_run()

Find the shape (in stars, gas, or DM) of a set of galaxies/halos using the iterative ellipsoid method.

ellipsoidfit_test(N=1000, h=0.1, R=10.0, q=1.0, s=1.0, phi=0.0, theta=0.0, noise_frac=0.0)

Generate random points in an ellipsoidal shell and test the iterative fitting algorithm.

ellipsoidfit_test_run()

Execute some tests.

temet.util.simParams module

The simParams class encapsulates all meta-data and details for a particular simulation. In addition, it can hold specifications to point to a specific part of a given simulation, for instance a particular snapshot/redshift or halo/subhalo. We often start analysis by creating an instance of this class as:

sim = temet.sim(run='tng100-1', redshift=0.0)

To analyze a new or custom simulation which is not defined in this file, you can instead pass its path, and the simulation metadata will be automatically loaded.

sim = temet.sim('/virgo/simulations/IllustrisTNG/TNG50-1/', redshift=0.0)

This sim object can then be passed to many analysis routines, which then automatically know (i) where to find the data files, and (ii) which snapshot to analyze. Furthermore, many of the most common data loading functions are “attached” to this sim object, such that the two following calls are functionally identical:

masses = load.snapshot.snapshotSubset(sim, 'gas', 'mass')
masses = sim.snapshotSubset('gas', 'mass')

The second provides a convenient short-hand. If you were using the public data release scripts alone, these two calls would also be identical to:

import illustris_python as il
basePath = 'sims.TNG/TNG100-1/output/'
snap = 99
masses = il.snapshot.loadSubset(basePath, snap, 'gas', fields=['Masses'])

Note that upon creation, each simParams object instantiates and fills a util.units class which is automatically tailored to this simulation and its unit system. For example

dists = sim.snapshotSubset('gas', 'rad', haloID=1234)
dists_kpc = sim.units.codeLengthToKpc(dists)

converts the original dists array from code units to physical kpc. While “code units” typically implies ckpc/h, this particular simulation could instead be using a Mpc-unit system, and this difference is handled transparently, as are different values of h for different simulations.

To add a new simulation (suite), one of the existing examples should be copied and adapted as needed.

class simParams(run, res=None, variant=None, redshift=None, time=None, snap=None, hInd=None, haloInd=None, subhaloInd=None, arepoPath=None, simName=None)

Bases: object

simPath = ''

path (root) containing ‘output’ directory with simulation snapshots and group catalogs

arepoPath = ''

path to Arepo binary, Config.sh and param.txt files of this run

derivPath = ''

path to put derivative files (“data.files/”)

postPath = ''

path to put postprocessed files (“postprocessing/”)

simName = ''

label to add to plot legends (e.g. “Illustris-2”, “TNG300-1”)

simNameAlt = ''

alternative label for simulation names (e.g. “L75n1820FP”, “L205n1250TNG_DM”)

run = ''
res = 0
variant = ''
groupOrdered = None
comoving = True
boxSize = 0.0
targetGasMass = 0.0
gravSoft = 0.0
omega_m = None
omega_L = None
omega_k = 0.0
omega_b = None
HubbleParam = None
mpcUnits = False
nTypes = 6
numSnaps = 0
subbox = None
subboxCen = None
subboxSize = None
levelmin = 0
levelmax = 0
zoomLevel = 0
rVirFac = 0.0
hInd = None
hIndDisp = None
zoomShift = None
zoomShiftPhys = None
targetHaloPos = None
targetHaloInd = 0
targetHaloRvir = 0.0
targetHaloMass = 0.0
targetRedshift = 0.0
ids_offset = 0
sP_parent = None
trMassConst = 0.0
trMCPerCell = 0
trMCFields = None
trVelPerCell = 0
haloInd = None
subhaloInd = None
refPos = None
refVel = None
colors = None
marker = None
data = None
metals = None
BHs = False
winds = False
redshift = None
time = None
snap = None
scan_simulation(inputPath, simName=None)

Fill simulation parameters automatically, based on path.

lookup_simulation(res=None, run=None, variant=None, redshift=None, time=None, snap=None, hInd=None, haloInd=None, subhaloInd=None)

Fill parameters based on inputs. (hardcoded)

auxCatSplit(field, nThreads=1, **kwargs)

Automatically do a pSplit auxCat calculation.

fillZoomParams(res=None, hInd=None, variant=None)

Fill parameters for individual (MUSIC-based) zooms.

setRedshift(redshift=None)

Update sP based on new redshift.

setSnap(snap=None)

Update sP based on new snapshot.

ptNum(partType)

Return particle type number (in snapshots) for input partType string. Allows different simulations to use arbitrary numbers for each type (so far they do not).

isPartType(ptToCheck, ptToCheckAgainst)

Return either True or False depending on if ptToCheck is the same particle type as ptToCheckAgainst. For example, ptToCheck could be ‘star’, ‘stars’, or ‘stellar’ and ptToCheckAgainst could then be e.g. ‘stars’. The whole point is to remove any hard-coded dependencies on numeric particle types. Otherwise, you would naively check that e.g. partTypeNum(ptToCheck)==4. This can now vary for different simulations (so far it does not).

copy()

Return a deep copy of this simParams object, which can then be manipulated/changed without affecting the original.

property name
property isZoom
property isZoomOrVirtualBox
property isDMO
property isSubbox
property partTypes

Return a list of particle type names contained in this simulation, excluding tracers.

property parentBox

Return a sP corresponding to the parent volume, at the same redshift (fullbox for subbox only for now).

property dmoBox

Return a sP corresponding to the DMO/Dark analog volume, at the same redshift.

subboxSim(sbNum)
property numMetals
property scalefac
property tage

Current age of the universe [Gyr].

property boxSizeCubicPhysicalMpc
property boxSizeCubicComovingMpc
property boxLengthDeltaRedshift

The redshift interval dz corresponding to traversing one box side-length.

property dz
property boxLengthComovingPathLength

The comoving pathlength i.e. ‘absorption distance’ (dX) corresponding to one box side-length.

property dX
property zoomSubhaloID

Return the subhalo ID of the target subhalo in the zoom suite. In general, not known, and may not be zero.

property dmParticleMass

Return dark matter particle mass (scalar constant) in code units.

property numHalos

Return number of FoF halos / groups in the group catalog at this sP.snap.

property numSubhalos

Return number of Subfind subhalos in the group catalog at this sP.snap.

property subhaloIndsCen

Return indices of central subhalos in the group catalog at this sP.snap.

property subhaloIndsSat

Return indices of satellite subhalos in the group catalog at this sP.snap.

property numPart

Return number of particles/cells of all types at this sP.snap.

property simCode

Return simulation code used to produce snapshots. Currently only differentiates between AREPO and SWIFT.

property cpuHours

Return CPU core hours to z=0 for this simulation.

hash_path(path, length=16)

temet.util.sphMap module

Interpolation of scattered point sets onto a uniform grid using the SPH spline kernel deposition method.

sphMap(pos, hsml, mass, quant, axes, boxSizeImg, boxSizeSim, boxCen, nPixels, ndims, hsml_1=None, colDens=False, nThreads=16, multi=False, posTarget=None, maxIntProj=False, minIntProj=False, refGrid=None)

Simultaneously calculate a gridded map of projected mass and some other mass weighted quantity (e.g. temperature) with the sph spline kernel. If quant=None, the map of mass is returned, optionally converted to a column density map if colDens=True. If quant is specified, the mass-weighted map of the quantity is additionally returned. If len(nPixels) is three, compute grid and not image.

Parameters:
  • pos (ndarray[float][N,3 or N,2]) – array of 3-coordinates for the particles (or 2-coords only, to ignore z-axis).

  • hsml (ndarray[float][N]) – smoothing lengths to use for the particles, same units as pos.

  • mass (ndarray[float][N or 1]) – mass to deposit, can be scalar value if all particles have the same mass.

  • quant (ndarray[float][N]) – some quantity to calculate a mass-weighted projection of (None=skip).

  • axes (list[int][2]) – two axis indices defining the projection direction, e.g. [0,1] for x,y (project along z-axis).

  • boxSizeImg (list[float][3]) – the physical size the image should cover, same units as pos.

  • boxSizeSim (list[float][3]) – the physical size of the simulation box for periodic wrapping (0=non periodic).

  • boxCen (list[float][3]) – (x,y,z) coordinates of the center of the imaging box, same units as pos.

  • nPixels (list[int][2 or 3]) – number of pixels in x,y directions for output image or number of cells in x,y,z directions for output grid.

  • ndims (int) – number of dimensions of simulation (1,2,3), to set SPH kernel coefficients.

  • hsml_1 (ndarray[float][N]) – if not None, do asymmetric sph kernel projection with hsml==hsml_0 (x), and hsml_1 (y).

  • colDens (bool) – if True, normalize each grid value by its area (default=False).

  • nThreads (int) – do multithreaded calculation (mem required=nThreads times more).

  • multi (bool) – if True, return the tuple (dens,quant) instead of selecting one, under the assumption that we are using sphMap() in a chunked mode and have to normalize later.

  • posTarget (ndarray[float][N,3]) – array of 3-coordinates for a set of targets (e.g. star particle locations). If not None, then both pos and posTarget will be sorted on the 3rd (projection direction) coordinate, and mapping will be front to back, such that the grid value at the position (2D+depth) of each posTarget is estimated. In this case, the return is of shape [N(posTarget)] giving the projected density or mass weighted quantity along the line of sight to each posTarget.

  • maxIntProj (bool) – perform a maximum intensity projection (MIP) instead of the usual weighting.

  • minIntProj (bool) – perform a minimum intensity projection, instead of the usual weighting.

Returns:

computed 2D or 3D array, depositing either mass or mass-weighted quant. If multi is True, then a 2-tuple of the (dens_map, quant_map) arrays separately.

Return type:

ndarray[float][nPixels]

Notes

  • the transpose of _calcSphMap() is taken such that with default plotting approaches e.g. axes=[0,1] gives imshow(return[i,j]) with x and y axes corresponding correctly to code coordinates. This does not apply if len(nPixels) == 3 and a 3D grid is computed. No transpose is taken in that case.

  • the entries of boxSizeImg, boxCen, and boxSizeSim correspond to [axes[0],axes[1],axes[2]], meaning pos[:,axes[0]] is compared against the first entries boxSizeImg[0] and boxCen[0] and not compared against e.g. boxSizeImg[axes[0]].

sphMapWholeBox(pos, hsml, mass, quant, axes, nPixels, sP, colDens=False, nThreads=8, multi=False, posTarget=None, sliceFac=1.0)

Wrap sphMap() specialized to projecting an entire cosmological/periodic box (or subbox). Specifically, (ndims,boxSizeSim,boxSizeImg,boxCen) are derived from sP, and nPixels should be input as a single scalar which is then assumed to be square.
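
A minimal end-to-end usage sketch (assuming the import paths below; ‘pos’ is an assumed snapshotSubset() field name, and the smoothing lengths are estimated with calcHsml() from temet.util.treeSearch, documented further down this page):

import temet
from temet.util.sphMap import sphMapWholeBox
from temet.util.treeSearch import calcHsml

sim = temet.sim(run='tng100-1', redshift=0.0)
pos = sim.snapshotSubset('gas', 'pos')    # [N,3] code-unit coordinates (field name assumed)
mass = sim.snapshotSubset('gas', 'mass')  # [N]
hsml = calcHsml(pos, sim.boxSize)         # adaptive smoothing lengths (32 nearest neighbors)

# projected gas column density map of the full box, 1024^2 pixels, projecting along the z-axis
dens_map = sphMapWholeBox(pos, hsml, mass, None, axes=[0, 1], nPixels=1024, sP=sim, colDens=True)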

sphGridWholeBox(sP, pos, hsml, mass, quant, nCells=32)

Wrap sphMap(), produce a 3D grid of the given pt/field combination for an entire box.

benchmark()

Benchmark performance of sphMap().

benchmarkTarget()

Benchmark performance of sphMap with posTargets.

temet.util.subfind module

Implementation of (serial) subfind algorithm.

load_custom_dump(sP, GrNr, phase1=False)

Load groups_{snapNum}/fof_{fofID}.{taskNum} custom binary data dump.

load_snapshot_data(sP, GrNr)

Load the FoF particles from an actual snapshot for testing.

treeSearchIndicesIterate(P, xyz, h_guess, nNGB, boxSizeSim, next_node, tree_nodes)

Helper routine for subfind(), see below. Note: no nNGBDev, instead we terminate if we ever find >=nNGB, and the return is sorted by distance.

buildFullTree(P, boxSizeSim, xyzMin, xyzMax, extent, ForceSoftening)

As above, but minimal and JITed.

subfind_treeevaluate_potential(target, P, ForceSoftening, next_node, tree_nodes, boxHalf, boxSizeSim)
subfind_unbind_calculate_potential(num, ud, loc_P, ForceSoftening, NextNode, TreeNodes, boxhalf, boxsize, PS, G, atime)

Loop to parallelize.

subfind_unbind_calculate_potential_weak(num, ud, loc_P, ForceSoftening, NextNode, TreeNodes, boxhalf, boxsize, PS, G, atime, weakly_bound_limit)

Loop to parallelize.

subfind_unbind(P, SphP, PS, ud, num, vel_to_phys, H_of_a, G, atime, boxsize, SofteningTable, ForceSoftening, xyzMin, xyzMax, extent, central_flag)

Unbinding.

subfind(P, PS, SphP, StarP, BHP, atime, H_of_a, G, boxsize, SofteningTable, ForceSoftening)

Run serial subfind. (Offs = 0).

subfind_properties_1(candidates)

Determine the parent subhalo for each candidate.

get_time_difference_in_Gyr(a0, a1)

timestep.c

assign_stellar_photometrics(i, P, StarP, mags, LogMetallicity_bins, LogAgeInGyr_bins, TableMags, atime)

GFM/stellar_photometrics.c

subfind_properties_2(candidates, Tail, Next, P, PS, SphP, StarP, BHP, atime, H_of_a, G, boxsize, LogMetallicity_bins, LogAgeInGyr_bins, TableMags, SofteningTable, GrNr)

Determine the properties of each subhalo.

subfind_particle_order(P, PS)

Prepare a particle-level output order according to our new PS (subgroup) assignment.

set_softenings(P, SphP, sP, snapLoaded=False)

Generate SofteningTable following grav_softening.c, and set P[].SofteningType values (based on sizes/masses).

load_gfm_stellar_photometrics()

Load photometrics table for interpolation later.

run_subfind_snapshot(sP, GrNr)

Run complete Subfind algorithm on a single group, using the saved snapshot itself.

run_subfind_customfof0save_phase1(sP, GrNr=0)

Run complete Subfind algorithm on custom FOF0 save files (TNG50-1).

run_subfind_customfof0save_phase2(sP, GrNr=0)
verify_results(sP, GrNr=0)

Check results vs. actual group catalog for a test run where the subgroups of FOF0 are computed in AREPO.

rewrite_groupcat(sP, GrNr=0)

Rewrite a group catalog which is missing FOF0 subhalos using the phase2 (final) results.

add_so_quantities(sP, GrNr=0)

FoF0 is missing SO quantities (Group_R_Crit200, etc). Need to derive now.

rewrite_snapshot(sP, GrNr=0)

Rewrite a snapshot which is missing FOF0 subhalos using the phase2 (final) results.

rewrite_particle_level_cat(sP, filename, partType)

Take an input HDF5 filename (with at most one dataset), whose size is assumed to correspond to the size of sP.NumPart[partType] at that snapshot, and shuffle the fof0 segment into the new order.

compare_subhalos_all_quantities(snap_start=67)

Plot diagnostic histograms.

run_subfind(snap)

Main driver.

benchmark()

Benchmark.

temet.util.tpcf module

Two-point correlation functions (pairwise distances).

tpcf(pos, radialBins, boxSizeSim, weights=None, pos2=None, weights2=None, nThreads=32)

Calculate and simultaneously histogram the results of a two-point auto correlation function, by computing all the pairwise (periodic) distances in pos. 3D only.

pos[N,3] : array of 3-coordinates for the galaxies/points
radialBins[M] : array of (inner) bin edges in radial distance (code units)
boxSizeSim[1] : the physical size of the simulation box for periodic wrapping (0=non periodic)
weights[N] : if not None, then use these scalars for a weighted correlation function
pos2[L,3] : if not None, then cross-correlation instead of auto, of pos vs. pos2
weights2[L] : must be None if weights is None, and vice versa

return is xi(r),DD,RR where xi[i]=(DD/RR-1) is computed between radialBins[i] and radialBins[i+1] (size == M-1)
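
A minimal usage sketch with random points (the import follows the module path above; all values are illustrative):

    import numpy as np
    from temet.util.tpcf import tpcf

    boxSizeSim = 75000.0                      # code units, e.g. ckpc/h
    pos = np.random.rand(100000, 3) * boxSizeSim
    radialBins = np.logspace(2.0, 4.0, 21)    # 20 radial bins (code units)

    xi, DD, RR = tpcf(pos, radialBins, boxSizeSim, nThreads=32)
    # xi has one value per radial bin (size len(radialBins) - 1); DD, RR are the pair counts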

quantReductionInRad(pos_search, pos_target, radial_bins, quants, reduce_op, boxSizeSim, nThreads=8)

Calculate a reduction operation on one or more quantities for all target points falling within a 3D periodic search radius of each search point.

pos_search[N,3] : array of 3-coordinates for the galaxies/points to search from
pos_target[M,3] : array of the 3-coordinates of the galaxies/points to search over
radial_bins[M] : array of bin edges in radial distance (code units)
quants[M]/[M,P] : 1d or P-d array of quantities, one per pos_target, to process
reduce_op[str] : one of ‘min’, ‘max’, ‘sum’
boxSizeSim[1] : the physical size of the simulation box for periodic wrapping (0=non periodic)

return is reduced_quants[N,M-1]/[N,M-1,P]
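
A minimal usage sketch (import per the module path above; values are illustrative):

    import numpy as np
    from temet.util.tpcf import quantReductionInRad

    boxSizeSim = 75000.0
    pos_search = np.random.rand(1000, 3) * boxSizeSim     # N = 1000 search points
    pos_target = np.random.rand(50000, 3) * boxSizeSim
    quants = np.random.rand(50000)                        # one scalar per target
    radial_bins = np.array([0.0, 500.0, 1000.0])          # two radial bins

    result = quantReductionInRad(pos_search, pos_target, radial_bins, quants,
                                 'max', boxSizeSim, nThreads=8)
    # result has shape [N, M-1] = [1000, 2]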

benchmark()

Benchmark performance of tpcf(). Single thread: 600sec for 100k points, perfect O(N^2) scaling, so 16.7 hours for 1M points.

benchmark2()

Benchmark performance of quantReductionInRad(). 100k points, 16 threads in 23 sec.

temet.util.treeSearch module

Adaptive estimation of a smoothing length (radius of sphere enclosing N nearest neighbors) using oct-tree.

buildFullTree(pos, boxSizeSim, treePrec=None, verbose=False)

Helper. See below.

calcHsml(pos, boxSizeSim, posSearch=None, posMask=None, nNGB=32, nNGBDev=1, nDims=3, weighted_num=False, treePrec='single', tree=None, nThreads=32, nearest=False, verbose=False)

Calculate a characteristic ‘size’ (‘smoothing length’) given a set of input particle coordinates, where the size is defined as the radius of the sphere (or circle in 2D) enclosing the nNGB nearest neighbors. If posSearch==None, then pos defines both the neighbor and search point sets, otherwise a radius for each element of posSearch is calculated by searching for nearby points in pos.

Parameters:
  • pos (ndarray[float][N,3]/[N,2]) – array of 3-coordinates for the particles (or 2-coords for 2D).

  • boxSizeSim (float) – the physical size of the simulation box for periodic wrapping (0=non periodic).

  • posSearch (ndarray[float][M,3]) – search sites.

  • posMask (ndarray[bool][N]) – if not None, bool mask, only True entries are considered in search.

  • nNGB (int) – number of nearest neighbors to search for in order to define HSML.

  • nNGBDev (int) – allowed deviation (+/-) from the requested number of neighbors.

  • nDims (int) – number of dimensions of simulation (1,2,3), to set SPH kernel coefficients.

  • weighted_num (bool) – if True, use the SPH kernel weighted number of neighbors, instead of real number.

  • treePrec (str) – construct the tree using ‘single’ or ‘double’ precision for coordinates.

  • tree (list or None) – if not None, should be a list of all the needed tree arrays (pre-computed), i.e. the exact return of buildFullTree().

  • nThreads (int) – do multithreaded calculation (on treefind, while tree construction remains serial).

  • nearest (bool) – if True, then instead of returning hsml values based on nNGB, return indices and distances to single closest match only.

Returns:

derived smoothing length for each input point (if not nearest). Instead if nearest == True, then a 2-tuple of (dists,indices).

Return type:

ndarray[float or int]
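
A minimal usage sketch (import per the module path above; the data are random placeholders):

    import numpy as np
    from temet.util.treeSearch import calcHsml

    boxSizeSim = 1000.0
    pos = np.random.rand(100000, 3) * boxSizeSim

    # radius of the sphere enclosing 32 +/- 1 neighbors, per input point
    hsml = calcHsml(pos, boxSizeSim, nNGB=32, nNGBDev=1, nThreads=8)

    # nearest=True instead returns (dists, indices) of the single closest match,
    # here searched from a separate set of sites
    posSearch = np.random.rand(100, 3) * boxSizeSim
    dists, inds = calcHsml(pos, boxSizeSim, posSearch=posSearch, nearest=True)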

calcQuantReduction(pos, quant, hsml, op, boxSizeSim, posSearch=None, treePrec='single', tree=None, nThreads=32)

Calculate a reduction of a given quantity over all the particles within a fixed search distance hsml around each pos. If posSearch==None, then pos defines both the neighbor and search point sets, otherwise a reduction at the location of each posSearch is calculated by searching for nearby points in pos.

Parameters:
  • pos (ndarray[float][N,3]/[N,2]) – array of 3-coordinates for the particles (or 2-coords for 2D).

  • quant (ndarray[float][N]) – array of quantity values (i.e. mass, temperature).

  • hsml (ndarray[float][1 or N]) – array of search distances, or scalar value if constant.

  • op (str) – ‘min’, ‘max’, ‘mean’, ‘kernel_mean’, ‘sum’, ‘count’.

  • boxSizeSim (float) – the physical size of the simulation box for periodic wrapping (0=non periodic).

  • posSearch (ndarray[float][N,3]) – search coordinates (optional).

  • treePrec (str) – construct the tree using ‘single’ or ‘double’ precision for coordinates.

  • tree (list or None) – if not None, should be a list of all the needed tree arrays (pre-computed), i.e. the exact return of buildFullTree().

  • nThreads (int) – do multithreaded calculation (on treefind, while tree construction remains serial).

Returns:

reduction operation applied for each input.

Return type:

ndarray[float]
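
A minimal usage sketch, reusing a pre-computed tree between calls (imports per the module path above):

    import numpy as np
    from temet.util.treeSearch import buildFullTree, calcQuantReduction

    boxSizeSim = 1000.0
    pos = np.random.rand(100000, 3) * boxSizeSim
    quant = np.random.rand(100000)

    # pre-compute the tree once, then pass it in to avoid re-building it per call
    tree = buildFullTree(pos, boxSizeSim, treePrec='single')

    # a constant scalar search distance is allowed in place of a per-particle array
    q_mean = calcQuantReduction(pos, quant, 10.0, 'mean', boxSizeSim, tree=tree)
    q_max  = calcQuantReduction(pos, quant, 10.0, 'max', boxSizeSim, tree=tree)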

calcParticleIndices(pos, posSearch, hsmlSearch, boxSizeSim, posMask=None, treePrec='single', tree=None)

Find and return the actual particle indices (indexing pos, hsml) within the search radius hsml of the posSearch location. Serial by construction, since we do only one search.

Parameters:
  • pos (ndarray[float][N,3]/[N,2]) – array of 3-coordinates for the particles (or 2-coords for 2D).

  • posSearch (ndarray[float][3]) – search position.

  • hsmlSearch (float) – search distance.

  • boxSizeSim (float) – the physical size of the simulation box for periodic wrapping (0=non periodic).

  • posMask (ndarray[bool][N]) – if not None, then only True entries are considered in the search.

  • treePrec (str) – construct the tree using ‘single’ or ‘double’ precision for coordinates.

  • tree (list or None) – if not None, should be a list of all the needed tree arrays (pre-computed), i.e. the exact return of buildFullTree().

Returns:

list of indices into pos of the neighbors within the search distance of the search position.

Return type:

ndarray[int]

benchmark()

Benchmark performance of calcHsml().

checkVsSubfindHsml()

Compare our result vs SubfindHsml output.

temet.util.units module

Conversion from ‘code units’ to physical units, conversions between different unit systems, and derivations of common halo properties (e.g. virial temperature) and cosmological quantities (e.g. age at a given redshift).

class units(sP=None)

Bases: object

Contains static methods which perform various unit conversions. Can also be instantiated with a redshift/sP, in which case contains the relevant unit system and redshift-dependent constants.

UnitMass_in_g = 1.989e+43
UnitVelocity_in_cm_per_s = 100000.0
UnitMass_str = '10$^{10}$ M$_{\\rm sun}$/h'
UnitVelocity_str = 'km/s'
boltzmann = 1.38065e-16
boltzmann_JK = 1.38065e-23
boltzmann_keV = 11604505.0
planck_erg_s = 6.626e-27
hc_kev_ang = 12.3981689
mass_proton = 1.672622e-24
mass_electron = 9.1095e-28
gamma = 1.666666667
hydrogen_massfrac = 0.76
helium_massfrac = 0.24
mu = 0.6
Gravity = 6.6738e-08
H0_h1_s = 3.24078e-18
Z_solar = 0.0127
L_sun = 3.839e+33
Msun_in_g = 1.98892e+33
c_cgs = 29979000000.0
c_km_s = 299790.0
c_kpc_Gyr = 306595.0
sigma_thomson = 6.6524e-25
electron_charge = 4.8032e-10
rydberg_ang = 0.00109737
rydberg_freq = 3289840000000000.0
CourantFac = 0.3
BH_eps_r = 0.2
BH_eps_f_high = 0.1
BH_f_thresh = 0.05
N_SNII = 0.0118008
E_SNII51 = 1.0
winds_tau = 0.1
winds_fZ = 0.25
winds_Zref = 0.002
winds_gamma_Z = 2.0
winds_e = 3.6
winds_kappa = 7.4
winds_vmin = 350.0
PhysDensThresh = 0.000754654
sh03_nH_thresh = 0.13
sh03_A0 = 573.0
sh03_T_SN = 57300000.0
sh03_T_c = 1000.0
sh03_beta = 0.22578
sh03_t_star = 3.27665
s_in_yr = 31556930.0
pc_in_cm = 3.08568e+18
km_in_cm = 100000.0
kpc_in_ly = 3261.56
arcsec_in_rad = 4.84814e-06
ster_in_arcsec2 = 42500000000.0
ang_in_cm = 1e-08
erg_in_J = 1e-07
erg_in_kev = 624150000.0
eV_in_erg = 1.602e-12
UnitLength_in_cm = 3.085678e+21
UnitLength_str = 'ckpc/h'
UnitTime_in_s = None
UnitDensity_in_cgs = None
UnitPressure_in_cgs = None
UnitEnergy_in_cgs = None
UnitTemp_in_cgs = None
UnitMass_in_Msun = None
UnitTime_in_yr = None
s_in_kyr = None
s_in_Myr = None
s_in_Gyr = None
kpc_in_km = None
Mpc_in_cm = None
kmS_in_kpcYr = None
kmS_in_kpcGyr = None
kpc_in_cm = None
msunKpc3_in_gCm3 = None
H0 = None
G = None
rhoCrit = None
rhoCrit_msunMpc3 = None
H0_kmsMpc = None
mag2cgs = None
c_ang_per_sec = None
f_b = None
Hubble = None
rhoBack = None
H2_z_fact = None
Omega_z = None
H_z = None
H_of_a = None
rhoCrit_z = None
scalefac = None
codeMassToMsun(mass)

Convert mass from code units (10**10 msun/h) to (msun).

codeMassToLogMsun(mass)

Convert mass from code units (10**10 msun/h) to (log msun).

logMsunToCodeMass(mass)

Convert mass in (log msun) to code units.
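
These conversions are simple arithmetic in the code unit system (masses in 10^10 Msun/h, per UnitMass_in_g above). A standalone sketch of that arithmetic with little h passed explicitly, whereas the class methods read it from sP:

    import numpy as np

    def code_mass_to_msun(mass_code, h):
        """Code masses are 10^10 Msun/h: multiply by 1e10 and divide by h."""
        return mass_code * 1e10 / h

    def code_mass_to_log_msun(mass_code, h):
        return np.log10(code_mass_to_msun(mass_code, h))

    def log_msun_to_code_mass(log_msun, h):
        return 10.0**log_msun * h / 1e10

    # e.g. a 10^12 Msun halo with h = 0.6774 is ~67.7 in code units
    print(log_msun_to_code_mass(12.0, 0.6774))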

codeMassToVirTemp(mass, meanmolwt=None, log=False)

Convert from halo mass in code units to virial temperature in Kelvin, at the specified redshift (Barkana & Loeb (2001) eqn.26).

codeMassOverTimeToMsunPerYear(mass_over_time_val)

Convert a code [mass/time] value (e.g. BH_Mdot) into [Msun/yr]. The usual 10.22 factor.

codeBHMassToMdotEdd(mass, eps_r=None)

Convert a code mass (of a blackhole) into dM/dt_eddington in [Msun/yr]. Also available directly as ‘BH_MdotEddington’ field.

codeBHMassToLumEdd(mass)

Convert a code mass (of a blackhole) into its Eddington luminosity [erg/s].

codeBHMassMdotToBolLum(mass, mdot, basic_model=False, obscuration=False)

Convert a (BH mass, BH mdot) pair to a bolometric luminosity [erg/s].

BH_chi(M_BH)

Return chi(M_BH) threshold for the fiducial Illustris/TNG model parameters. M_BH in Msun.

codeBHValsToFeedbackMode(bh_mass, bh_mdot, bh_mdot_bondi, bh_mdot_edd)

Return the feedback mode (0 = low/kinetic, 1 = high/quasar) of the black hole, based on its mass and accretion rate.
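
A minimal sketch of the mode selection, under the assumption that the standard TNG criterion applies: the high/quasar mode when the (Bondi-to-Eddington) accretion rate ratio exceeds the chi(M_BH) threshold of BH_chi() above, and the low/kinetic mode otherwise. This illustrates the comparison only, not the internal implementation:

    import numpy as np
    from temet.util.units import units

    def feedback_mode_sketch(bh_mass_msun, bh_mdot_bondi, bh_mdot_edd):
        """Return 0 = low/kinetic, 1 = high/quasar (convention documented above).
        Which accretion-rate ratio enters the comparison is an assumption here."""
        chi = units.BH_chi(bh_mass_msun)            # mass-dependent threshold
        edd_ratio = bh_mdot_bondi / bh_mdot_edd     # both in the same (code) units
        return np.where(edd_ratio >= chi, 1, 0)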

codeBHMassMdotToInstantaneousEnergy(bh_mass, bh_mdot, bh_density, bh_mdot_bondi, bh_mdot_edd)

Convert the instantaneous mass/accretion properties of a BH, all in code units, to an instantaneous amount of energy being injected as per the (mode-dependent) feedback model of the simulation, in [erg/s].

codeMetallicityToWindSpecificEnergy(metal_code)

Convert the metallicity [code, M_Z/M_total] of a gas cell into the specific wind energy available [10^51 erg/Msun].

codeSfrZToWindEnergyRate(sfr_msunyr, metal_code)

Convert the SFR [Msun/yr] of a gas cell into the dot{E}_SN of total wind energy available [10^51 erg/s].

codeSfrZToWindMomentumRate(sfr_msunyr, metal_code, dm_veldisp)

Convert the SFR [Msun/yr] of a gas cell into the total wind momentum injection rate available [10^51 g*cm/s^2].

sigmaDMToWindVel(dm_veldisp)

Convert a code (3D) velocity dispersion, i.e. the ‘SubfindVelDisp’, into the wind launch velocity of the model [km/s].

codeSfrZSigmaDMToWindMassLoading(sfr_msunyr, metal_code, dm_veldisp)

Convert a gas cell SFR [Msun/yr], metallicity [code], and 3D vel disp [km/s] into the mass loading factor of the wind model [dimensionless linear].

densToSH03TwoPhase(dens_in, sfr)

Convert a gas cell density to values corresponding to the two-phase state of the sub-cell gas, according to the Springel & Hernquist (2003) sub-grid ISM pressurization model.

Parameters:
  • dens_in (float or ndarray) – hydrogen physical gas number density [H atoms/cm^3]

  • sfr (float or ndarray) – star formation rate [Msun/yr]

Returns: a 2-tuple composed of

  • x (ndarray): the cold-phase (i.e. cold clouds) mass fraction, which is density-dependent, and always between zero and unity.

  • T_h (ndarray): the hot-phase temperature, which is density-dependent, and increases from ~1e5 to ~1e8 K.

logMsunToVirTemp(mass, meanmolwt=None, log=False)

Convert halo mass (in log msun, no little h) to virial temperature at specified redshift.

codeLengthToComovingKpc(x)

Convert length/distance in code units to comoving kpc.

codeLengthToKpc(x)

Convert length/distance in code units to physical kpc.

codeLengthToMpc(x)

Convert length/distance in code units to physical Mpc.

codeLengthToPc(x)

Convert length/distance in code units to physical parsec.

codeLengthToComovingMpc(x)

Convert length/distance in code units to comoving Mpc.

codeLengthToCm(x)

Convert length/distance in code units to cgs (cm).

codeAreaToKpc2(x)

Convert an area [length^2] in code units to physical kpc^2.

codeAreaToMpc2(x)

Convert an area [length^2] in code units to physical Mpc^2.

codeVolumeToCm3(x)

Convert a volume [length^3] in code units to physical cm^3 (cgs).

codeVolumeToKpc3(x)

Convert a volume [length^3] in code units to physical kpc^3.

codeVolumeToMpc3(x)

Convert a volume [length^3] in code units to physical Mpc^3.

physicalKpcToCodeLength(x)

Convert a length in [pkpc] to code units [typically ckpc/h].

lightyearsToCodeLength(x)

Convert a length in [lightyears] to code units.

physicalMpcToCodeLength(x)

Convert a length in [pMpc] to code units [typically ckpc/h].

particleCodeVelocityToKms(x)

Convert velocity field (for cells/particles, not group properties) into km/s.

groupCodeVelocityToKms(x)

Convert velocity vector (for groups, not subhalos nor particles) into km/s.

subhaloCodeVelocityToKms(x)

Convert velocity vector (for subhalos, not groups nor particles) into km/s.

subhaloSpinToKpcKms(x)

Convert spin vector (for subhalos, not groups nor particles) into kpc km/s.

particleCodeBFieldToGauss(b)

Convert magnetic field 3-vector (for cells) into Gauss, input b is PartType0/MagneticField.

particleCodeDivBToGaussPerKpc(divb)

Convert magnetic field divergence into [Gauss/kpc] physical, input is PartType0/MagneticFieldDivergence.

codePotentialToEscapeVelKms(pot)

Convert Potential [(km/s)^2/a] into an escape velocity [km/s].

particleAngMomVecInKpcKmS(pos, vel, mass, haloPos, haloVel)

Calculate particle angular momentum 3-vector in [Msun*kpc km/s] given input arrays of pos,vel,mass and the halo CM position and velocity to compute relative to. Includes Hubble correction.

particleSpecAngMomMagInKpcKmS(pos, vel, mass, haloPos, haloVel, log=False)

Wrap particleAngMomVecInKpcKmS() to calculate particle specific angular momentum magnitude in [kpc km/s] given input arrays.

particleRadialVelInKmS(pos, vel, haloPos, subhaloVel)

Calculate particle radial velocity in [km/s] (negative=inwards) given input arrays of pos,vel and the halo CM position and velocity to compute relative to. Includes Hubble correction.

particleRelativeVelInKmS(vel, subhaloVel)

Calculate particle velocity magnitude in [km/s], relative to a given reference frame motion which is typically SubhaloVel. If not, must be in physical units.

codeDensToPhys(dens, cgs=False, numDens=False, msunpc3=False, totKpc3=False)

Convert mass density comoving->physical and add little_h factors. Unless overridden by a parameter option, the default return units are [10^10 Msun/kpc^3].

Parameters:
  • dens (array[float]) – density in code units, should be [10^10 Msun/h / (ckpc/h)^3] = [10^10 Msun h^2 / ckpc^3].

  • cgs (bool) – if True, return units are [g/cm^3].

  • numDens (bool) – if True and cgs == True, return units are [1/cm^3].

  • msunpc3 (bool) – if True, return units are [Msun/pc^3].

  • totKpc3 (bool) – if True, return units are [[orig units]/kpc^3].

Returns:

densities in physical units, as specified above.

Return type:

array[float]
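
A minimal sketch of the output-unit options; dens is a code-units density array (e.g. PartType0/Density) and sP is a simParams instance, both assumed to exist already:

    from temet.util.units import units

    u = units(sP=sP)   # instantiated with a simParams, as described for this class above

    rho_def      = u.codeDensToPhys(dens)                          # [10^10 Msun/kpc^3]
    rho_cgs      = u.codeDensToPhys(dens, cgs=True)                # [g/cm^3]
    n_cgs        = u.codeDensToPhys(dens, cgs=True, numDens=True)  # [1/cm^3]
    rho_msun_pc3 = u.codeDensToPhys(dens, msunpc3=True)            # [Msun/pc^3]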

physicalDensToCode(dens, cgs=False, numDens=False)

Convert mass density in physical units to code units (comoving + w/ little h factors, in unit system). Input: dens in [msun/kpc^3] or [g/cm^3 if cgs==True] or [1/cm^3 if cgs==True and numDens==True]. Output: dens in [10^10 Msun/h / (ckpc/h)^3] = [10^10 Msun h^2 / ckpc^3] comoving.

codeColDensToPhys(colDens, cgs=False, numDens=False, msunKpc2=False, totKpc2=False)

Convert a mass column density [mass/area] from comoving -> physical and remove little_h factors. Unless overridden by a parameter option, the default return units are [10^10 Msun/kpc^2].

Parameters:
  • colDens (ndarray) – column densities in code units, which should be [10^10 Msun/h / (ckpc/h)^2] = [10^10 Msun * h / ckpc^2].

  • cgs (bool) – if True, return units in [g/cm^2].

  • numDens (bool) – if True and cgs == True, return units in [1/cm^2], which is in fact [H atoms/cm^2].

  • msunKpc2 (bool) – return units in [Msun / kpc^2].

  • totKpc2 (bool) – return units in [[orig units] / kpc^2].

Returns:

physical column densities, units depending on the above.

Return type:

ndarray[float32]

UToTemp(u, xe, log=False)

Convert (U,xe) pair in code units to temperature in Kelvin.

TempToU(temp, xe, log=False)

Convert temperature in Kelvin to InternalEnergy (u) in code units.

coolingRateToCGS_unused(coolrate)

Convert code units (du/dt) to erg/s/g (cgs, specific). Unused.

coolingRateToCGS(code_dens, code_gfmcoolrate)

Convert cooling/heating rate to specific CGS [erg/s/g]. Input is PartType0/[Masses,Density,GFM_CoolingRate].

powellEnergyTermCGS(code_dens, code_divb, code_b, code_vel, code_vol)

Derive the ‘Powell heating/cooling’ energy source term (rightmost in Eqn. 21 Pakmor & Springel arxiv:1212.1452).

coolingTimeGyr(code_dens, code_gfmcoolrate, code_u)

Calculate a cooling time in Gyr from three code units inputs (i.e. snapshot values) of Density, GFM_CoolingRate, InternalEnergy.

tracerEntToCGS(ent, log=False)

Fix cosmological/unit system in TRACER_MC[MaxEnt], output in cgs [K cm^2].

calcXrayLumBolometric(sfr, u, xe, mass, dens, temp=None, log=False)

Following Navarro+ (1995) Eqn. 6 the most basic estimator of bolometric X-ray luminosity in [10^30 erg/s] for individual gas cells, based only on their density and temperature. Assumes simplified (primordial) high-temp cooling function, and only free-free (bremsstrahlung) emission contribution from T>10^6 Kelvin gas. All inputs in code units.

opticalDepthLineCenter(transition, dens_cgs, temp_K, cellsize_code)

Derive an optical depth tau_0 = n * sigma_0 * L for a given transition, assuming the frequency is at line center (neglecting the Voigt profile shape). dens_cgs is the volume number density of the species of relevance [1/cm^3], temp is the cell temperature [linear K], and cellsize_code is the usual radius of the cell [code units], which we take as L/2.

sfrToHalphaLuminosity(sfr)

Convert SFR from code units (Msun/yr) into H-alpha line luminosity [linear 10^30 erg/s]. Just the usual linear conversion from Kennicutt.

gasSfrMetalMassToS850Flux(sfr, metalmass, temp, dens, ismCut=True)

Convert SFR [code units, Msun/yr] and metal mass [Msun] into 850 micron submm flux [linear mJy]. Simple model of Hayward+ (2013b) assuming a constant dust-to-metal ratio. Also enforce Torrey+12 ‘ISM’ constraint using temp [log K] and dens [code units, comoving].

calcEntropyCGS(u, dens, log=False)

Calculate entropy as P/rho^gamma, converting rho from comoving to physical. Return [K cm^2].

calcPressureCGS(u, dens, log=False)

Calculate pressure as (gamma-1)*u*rho in physical ‘cgs’ [K/cm^3] units.

calcMagneticPressureCGS(b, log=False)

Calculate magnetic pressure as B^2/8/pi in physical ‘cgs’ [K/cm^3] units.

calcKineticEnergyDensityCGS(dens_code, vel_kms, log=False)

Calculate kinetic energy density (KE/volume = 1/2 * mv^2 / volume) in ‘cgs’ [K/cm^3] units, inputs: dens_code in code units, vel_kms in physical km/s.

calcSoundSpeedKmS(u, dens, log=False)

Calculate sound speed as sqrt(gamma*Pressure/Density) in physical km/s.

calcSunyaevZeldovichYparam(mass, xe, temp)

Calculate per-cell (thermal) SZ y-parameter (e.g. McCarthy+2014 Eqn 2, Roncarelli+2007 Eqn 5, Kay+2012 Eqn 12).

Parameters:
  • mass (ndarray[float]) – gas cell masses [code units].

  • xe (ndarray[float]) – gas electron number density fraction [code units, i.e. dimensionless linear].

  • temp (ndarray[float]) – gas temperature [linear K].

Returns:

y-parameter in units of area [pkpc^2].

Return type:

ndarray[float]
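
The standard per-cell thermal SZ contribution is Y = (sigma_T * k_B / m_e c^2) * T * N_e, with N_e = xe * X_H * m_cell / m_p the number of electrons in the cell. A standalone sketch of that arithmetic using the constants listed above; the little-h and unit bookkeeping is written out explicitly here and is an assumption about how the internal conversion is done:

    import numpy as np

    sigma_T   = 6.6524e-25      # cm^2   (sigma_thomson above)
    k_B       = 1.38065e-16     # erg/K  (boltzmann above)
    m_e       = 9.1095e-28      # g      (mass_electron above)
    m_p       = 1.672622e-24    # g      (mass_proton above)
    c         = 2.9979e10       # cm/s   (c_cgs above)
    X_H       = 0.76            #        (hydrogen_massfrac above)
    Msun_in_g = 1.98892e33
    kpc_in_cm = 3.08568e21

    def thermal_sz_Y_pkpc2(mass_code, xe, temp_K, h=0.6774):
        """Per-cell thermal SZ Y in [pkpc^2]; mass in code units (10^10 Msun/h)."""
        mass_g = mass_code * 1e10 / h * Msun_in_g
        N_e = xe * X_H * mass_g / m_p                     # electrons per cell
        Y_cm2 = sigma_T * k_B / (m_e * c**2) * temp_K * N_e
        return Y_cm2 / kpc_in_cm**2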

calcKineticSZYParam(mass, xe, vel_los)

Calculate per-cell kinetic SZ y-parameter (e.g. Dolag+16 Eqn. 4, Altamura+23 Eqn. 2).

Parameters:
  • mass (ndarray[float]) – gas cell masses [code units].

  • xe (ndarray[float]) – gas electron number density fraction [code units, i.e. dimensionless linear].

  • vel_los (ndarray[float]) – line-of-sight velocity [km/s].

Returns:

y-parameter in units of area [pkpc^2].

Return type:

ndarray[float]

codeDensToCritRatio(rho, baryon=False, log=False, redshiftZero=False)

Normalize code density by the critical (total/baryonic) density at some redshift. If redshiftZero, normalize by rho_crit,0 instead of rho_crit(z).

critRatioToCodeDens(ratioToCrit, baryon=False)

Convert a ratio of the critical density at some redshift to a code density.

codeMassToVirEnt(mass, log=False)

Given a total halo mass, return a S200 (e.g. Pvir/rho_200crit^gamma).

codeMassToVirVel(mass)

Given a total halo mass [in code units], return a virial velocity (V200) in physical [km/s].

codeMassToVirRad(mass)

Given a total halo mass [in code units], return a virial radius (r200) in physical [kpc].

codeM200R200ToV200InKmS(m200, r200)

Given a (M200,R200) pair in code units for a FoF group, compute V200 in physical [km/s].

avgEnclosedDensityToFreeFallTime(rho_code)

Convert a mass density (code units) to gravitational free-fall time [linear Gyr]. rho_code should be the mean density interior to the radius R of a test mass within a halo.

metallicityInSolar(metal, log=False)

Given a code metallicity (M_Z/M_total), convert to value with respect to solar.

codeTimeStepToYears(TimeStep, Gyr=False)

Convert a TimeStep/TimeStepHydro/TimeStepGrav (e.g. an integer times All.Timebase_interval) for a comoving run to a physical time in years.

scalefacToAgeLogGyr(scalefacs)

Convert scalefactors (e.g. GFM_StellarFormationTime) to age in log(Gyr) given the current age of the universe as specified by sP.redshift.

codeEnergyToErg(energy, log=False)

Convert energy from code units (unitMass*unitLength^2/unitTime^2) to [erg]. (for BH_CumEgy*).

codeEnergyRateToErgPerSec(energy_rate, log=False)

Convert energy/time from code units (unitEnergy/unitTime) to [erg/s]. (for Gas EnergyDissipation).

lumToAbsMag(lum)

Convert from an input luminosity in units of [Lsun/Hz] to an AB absolute magnitude.

absMagToLuminosity(mag)

Convert from input AB absolute magnitudes to (linear) luminosity units of [Lsun/Hz].
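
Both directions follow from the AB-system definition M_AB = -2.5 log10( f_nu at 10 pc [erg/s/cm^2/Hz] ) - 48.6. A standalone sketch using the constants listed above; the exact zero-point handling inside the package is not restated here:

    import numpy as np

    L_sun    = 3.839e33      # erg/s (L_sun above)
    pc_in_cm = 3.08568e18    # (pc_in_cm above)

    def lum_to_abs_mag(lum_lsun_per_hz):
        L_nu = lum_lsun_per_hz * L_sun                       # [erg/s/Hz]
        f_nu = L_nu / (4.0 * np.pi * (10.0 * pc_in_cm)**2)   # flux density at 10 pc
        return -2.5 * np.log10(f_nu) - 48.6

    def abs_mag_to_lum(mag):
        f_nu = 10.0**(-0.4 * (mag + 48.6))
        return f_nu * 4.0 * np.pi * (10.0 * pc_in_cm)**2 / L_sun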

absMagToApparent(absolute_mag, redshift=None)

Convert an absolute magnitude to apparent.

apparentMagToAbsolute(apparent_mag, redshift=None)

Convert an apparent magnitude to absolute.

photonWavelengthToErg(wavelength, redshift=None)

Convert a photon wavelength [rest-frame Ang] emitted at a redshift into photon energy [erg].

luminosityToFlux(lum, wavelength=None, redshift=None)

Convert a luminosity in [erg/s] to a flux [photon/s/cm^2] for e.g. line emission at a given wavelength in [Angstroms] if not None, from a source at the given redshift. If wavelength is None, then output units are an energy flux e.g. [erg/s/cm^2].

fluxToLuminosity(flux, redshift=None)

Convert a flux in [erg/s/cm^2] to a luminosity [erg/s], from a source at the given redshift.

fluxToSurfaceBrightness(flux, pxDimsCode, arcsec2=True, arcmin2=False, ster=False, kpc=False)

Convert a flux in e.g. [energy/s/cm^2] or [photon/s/cm^2] into a surface brightness at a given redshift and for a certain pixel scale. pxDimsCode is a 2-tuple of the x and y dimensions of the pixel in code units, i.e. [ckpc/h]^2. Output e.g.: [photon/s/cm^2/arcsec^2] if arcsec2 == True, otherwise possibly [flux/arcmin^2], [flux/ster], or [flux/kpc^2].

synchrotronPowerPerFreq(gas_B, gas_vol, watts_per_hz=True, log=False, telescope='SKA', eta=1.0, k=10, gamma_min=300, gamma_max=15000, alpha=1.7)

Return synchrotron power per unit frequency for gas cells with inputs gas_B and gas_vol the magnetic field 3-vector and volume, both in code units. Output units [Watts/Hz]. Default model parameter assumptions of Xu+ (2012)/Marinacci+ (2017).

redshiftToAgeFlat(z)

Calculate age of the universe [Gyr] at the given redshift (assuming flat cosmology). Analytical formula from Peebles, p.317, eq 13.2.

ageFlatToRedshift(age)

Calculate redshift from age of the universe [Gyr] (assuming flat cosmology). Inversion of analytical formula from redshiftToAgeFlat().
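
The analytic flat-cosmology form is t(z) = 2 / (3 H0 sqrt(Omega_L)) * arcsinh( sqrt(Omega_L / Omega_m) * (1+z)^(-3/2) ). A standalone sketch with the cosmological parameters passed explicitly (example, TNG-like defaults), whereas the class methods read them from sP:

    import numpy as np

    s_in_Gyr  = 31556930.0 * 1e9     # (s_in_yr above, per Gyr)
    Mpc_in_cm = 3.08568e24

    def redshift_to_age_flat(z, H0_kmsMpc=67.74, Omega_m=0.3089):
        Omega_L = 1.0 - Omega_m                        # flat cosmology
        H0 = H0_kmsMpc * 1.0e5 / Mpc_in_cm             # [1/s]
        t = 2.0 / (3.0 * H0 * np.sqrt(Omega_L)) * \
            np.arcsinh(np.sqrt(Omega_L / Omega_m) * (1.0 + z)**-1.5)
        return t / s_in_Gyr                            # [Gyr]

    def age_flat_to_redshift(age_Gyr, H0_kmsMpc=67.74, Omega_m=0.3089):
        Omega_L = 1.0 - Omega_m
        H0 = H0_kmsMpc * 1.0e5 / Mpc_in_cm
        arg = np.sinh(1.5 * H0 * np.sqrt(Omega_L) * age_Gyr * s_in_Gyr)
        return (np.sqrt(Omega_L / Omega_m) / arg)**(2.0 / 3.0) - 1.0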

redshiftToLookbackTime(z)

Calculate lookback time from z=0 to redshift in [Gyr], assuming flat cosmology.

redshiftToComovingDist(z)

Convert redshift z to line of sight distance (in Mpc). Assumes flat.

redshiftToComovingVolume(z)

Calculate total comoving volume from the present to redshift z (all-sky) in [cGpc^3].

comovingVolumeDeltaRedshift(z1, z2, arcSeqSq=None, arcMinSq=None, arcDegSq=None, mpc3=False)

Calculate comoving volume between z1 and z2 (all-sky). If any of arcSeqSq, arcMinSq, or arcDegSq are input, then return for this solid angle on the sky instead of 4pi. If mpc3, return in units of [cMpc^3] instead of the default [cGpc^3].

redshiftToAngDiamDist(z)

Convert redshift z to angular diameter distance (in Mpc). This equals the proper/physical transverse distance for theta=1 rad. Assumes flat. Peebles, p.325.

redshiftToLumDist(z)

Convert redshift z to luminosity distance (in Mpc). This then allows the conversion between luminosity and a flux at that redshift.

arcsecToAngSizeKpcAtRedshift(ang_diam, z=None)

Convert an angle in arcseconds to an angular/transverse size (in proper/physical kpc) at redshift z. Assumes flat cosmology.

degToAngSizeKpcAtRedshift(ang_diam_deg, z)

Convert an angle in degrees to a physical size [kpc] at redshift z.

codeLengthToAngularSize(length_codeunits, z=None, arcsec=True, arcmin=False, deg=False)

Convert a distance in code units (i.e. ckpc/h) to an angular scale in [arcsec] at a given redshift z. Assumes flat cosmology. If arcmin or deg is True, then [arcmin] or [deg].

physicalKpcToAngularSize(length_kpc, z=None, arcsec=True, arcmin=False, deg=False)

Convert a distance in physical kpc to an angular scale in [arcsec] at the given redshift z. Assumes flat cosmology. If arcmin or deg is True, then [arcmin] or [deg].

magsToSurfaceBrightness(vals, pxSizeCode)

Convert a set of magnitudes (probably on a grid) to surface brightness values, given a constant pxSizeCode.

haloMassToOtherOverdensity(mass_orig, delta_orig=200, delta_new=500)

Convert a halo mass between two different spherical overdensity definitions, e.g. M200 to M500. Assumes NFW density profile and the c-M relation from Bullock by default. Based on https://arxiv.org/abs/astro-ph/0203169 (appendix). Input halo mass (and output) in units of linear Msun.

m200_to_m500(halo_mass)

Convert a halo mass from Delta=200 to Delta=500.

m500_to_m200(halo_mass)

Convert a halo mass from Delta=500 to Delta=200.

particleCountToMass(N_part, baryon=True, boxLength=None)

Convert the cube-root of a total particle count (e.g. 512, 1820) to its average mass given the cosmology and volume of this box. If boxLength is specified, then should be a box side-length in cMpc/h units (e.g. 25, 205). Return in [Msun].
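
The mean mass is simply Omega * rho_crit,0 * V_box / N^3, where Omega is the density parameter of the species in question (e.g. Omega_b if baryon == True). A standalone sketch with example, TNG-like parameters, whereas the package function reads the cosmology from sP:

    def particle_count_to_mass(N_part, boxLength_cMpc_h, Omega=0.0486, h=0.6774):
        """Return the mean mass [Msun] of N_part^3 elements in a (boxLength cMpc/h)^3 box."""
        rho_crit0 = 2.775e11 * h**2          # comoving critical density [Msun/Mpc^3]
        V = (boxLength_cMpc_h / h)**3        # box volume [Mpc^3]
        return Omega * rho_crit0 * V / N_part**3

    # e.g. TNG100-1 baryons (1820^3 in a 75 cMpc/h box): ~1.4e6 Msun
    print(particle_count_to_mass(1820, 75.0, Omega=0.0486, h=0.6774))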

meanmolwt(Y, Z)

Mean molecular weight, from Monaco+ (2007) eqn 14, for hot halo gas. Y = helium fraction (0.25), Z = metallicity (non-log metal mass/total mass).

temet.util.virtualSimFile module

Creation and update of the virtual ‘simulation.hdf5’ file.

createVirtualSimHDF5()

Create a single ‘simulation.hdf5’ file which is made up of virtual datasets (HDF5 1.1x/h5py 2.9.x features). Note: dataset details acquired from first chunk of last snapshot! Snapshot must be full, and first chunk must have at least one of every particle type! Note: run in simulation root dir, since we make relative path links.

supplementVirtualSimHDF5AddSnapField()

Add to existing ‘simulation.hdf5’ file (modify as needed, careful!).

supplementVirtualSimHDF5AddOrUpdateGroupcatField()

Add to existing ‘simulation.hdf5’ file (modify as needed, careful!).

supplementVirtualSimHDF5()

Add to existing ‘simulation.hdf5’ file (modify as needed, careful!).

temet.util.voronoi module

Algorithms and methods related to construction and use of the Voronoi mesh.

loadSingleHaloVPPP(sP, haloID)

Load Voronoi connectivity information for a single FoF halo.

loadGlobalVPPP(sP)

Load global Voronoi connectivity information for a snapshot.

voronoiThresholdSegmentation(sP, haloID, propName, propThresh, propThreshComp)

For all the gas cells of a given halo, identify collections which are spatially connected in the sense that they are Voronoi natural neighbors, and which satisfy a threshold criterion on a particular gas property, e.g. log(T) < 4.5.

voronoiWatershed(sP, haloID, propName)

For all the gas cells of a given halo, identify collections which are spatially connected (in the sense that they are Voronoi natural neighbors) using a watershed algorithm. That is, start at all the local maxima simultaneously, and flood outwards until encountering an already flooded cell, the interface between the two then becoming the segmentation line.

test_cen()
plotFacePolygon(cell_cen, neighbor_cen, vertices)

Plot helper.

voronoiSliceWithPlane()

Testing.

temet.util.voronoiRay module

Algorithms and methods related to ray-tracing through a Voronoi mesh.

trace_ray_through_voronoi_mesh_with_connectivity(cell_pos, num_ngb, ngb_inds, offset_ngb, ray_pos_in, ray_dir, total_dl, boxSize, debug, verify, fof_scope_mesh)

For a single ray, specified by its starting location, direction, and length, ray-trace through a Voronoi mesh as specified by the pre-computed natural neighbor connectivity information.

Parameters:
  • cell_pos (ndarray[float]) – Voronoi cell center positions [N,3].

  • num_ngb (array[int]) – Voronoi mesh connectivity, first return of util.voronoi.loadSingleHaloVPPP() or util.voronoi.loadGlobalVPPP().

  • ngb_inds (array[int]) – Voronoi mesh connectivity, second return of util.voronoi.loadSingleHaloVPPP() or util.voronoi.loadGlobalVPPP().

  • offset_ngb (array[int]) – Voronoi mesh connectivity, third return of util.voronoi.loadSingleHaloVPPP() or util.voronoi.loadGlobalVPPP().

  • ray_pos_in (float[3]) – x,y,z coordinates of ray starting position.

  • ray_dir (float[3]) – normalized unit vector specifying ray direction.

  • total_dl (float) – pathlength to integrate ray (code units i.e. same as cell_pos).

  • boxSize (float) – simulation box size, for periodic boundaries (code units).

  • debug (bool) – if True, >=1, or >=2, print increasingly verbose debugging information (disabled for @jit).

  • verify (bool) – if True, do brute-force distance verification every step of parent Voronoi cell.

  • fof_scope_mesh (bool) – if True, indicate that we have loaded and are working with a fof-scope set of cell data and mesh connectivity, i.e. the mesh is neither correct nor periodic at the edges.

Returns:

a 2-tuple composed of

  • dx (ndarray[float]): per-cell path length, in order, for each intersected cell (code/input units).

  • ind (ndarray[float]): per-cell index, in order, for each intersected cell.

trace_ray_through_voronoi_mesh_treebased(cell_pos, NextNode, length, center, sibling, nextnode, ray_pos_in, ray_dir, total_dl, boxSize, debug, verify)

For a single ray, specified by its starting location, direction, and length, ray-trace through a Voronoi mesh using neighbor searches in a pre-computed tree.

Parameters:
  • cell_pos (ndarray[float]) – Voronoi cell center positions [N,3].

  • NextNode (array[int]) – neighbor tree, first return of util.treeSearch.buildFullTree().

  • length (array[int]) – neighbor tree, second return of util.treeSearch.buildFullTree().

  • center (array[int]) – neighbor tree, third return of util.treeSearch.buildFullTree().

  • sibling (array[int]) – neighbor tree, fourth return of util.treeSearch.buildFullTree().

  • nextnode (array[int]) – neighbor tree, fifth return of util.treeSearch.buildFullTree().

  • ray_pos_in (float[3]) – x,y,z coordinates of ray starting position.

  • ray_dir (float[3]) – normalized unit vector specifying ray direction.

  • total_dl (float) – pathlength to integrate ray (code units i.e. same as cell_pos).

  • boxSize (float) – simulation box size, for periodic boundaries (code units).

  • debug (bool) – if True, >=1, or >=2, print increasingly verbose debugging information (disabled for @jit).

  • verify (bool) – if True, do brute-force distance verification every step of parent Voronoi cell.

Returns:

a 2-tuple composed of

  • dx (ndarray[float]): per-cell path length, in order, for each intersected cell (code/input units).

  • ind (ndarray[float]): per-cell index, in order, for each intersected cell.

rayTrace(sP, ray_pos, ray_dir, total_dl, pos, quant=None, quant2=None, mode='full', nThreads=72, tree=None)

For a given set of particle coordinates, assumed to be Voronoi cell center positions, perform a tree-based ray-tracing through this implicit Voronoi mesh, for a number of specified ray_pos initial ray positions, in a given direction, for a total pathlength. The operation while tracing is specified by mode.

Parameters:
  • sP (util.simParams.simParams) – simParams instance.

  • ray_pos (ndarray[float][M,3]) – starting ray positions in (x,y,z) space.

  • ray_dir (float[3]) – normalized unit vector specifying (the same) direction for all rays.

  • total_dl (float) – path length to integrate all rays (same units as pos).

  • pos (ndarray[float][N,3]) – array of 3-coordinates for the gas cells.

  • quant (ndarray[float][N]) – if not None, a quantity with the same size as pos to operate on.

  • quant2 (ndarray[float][N]) – if not None, a second quantity with the same size as pos to operate on, e.g. for weighting.

  • mode (str) – one of the following modes of operation: ‘full’ returns full rays as a 4-tuple of (offsets, lengths, cell_dx, cell_inds); ‘count’, ‘dx_sum’, etc. return a summary statistic for each ray (single array of values).

  • nThreads (int) – do multithreaded calculation (mem required=nThreads times more).

  • tree (list or None) – if not None, should be a list of all the needed tree arrays (pre-computed), i.e. the exact return of util.treeSearch.buildFullTree().
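
A minimal usage sketch (sP and pos are assumed to come from a simParams instance and snapshot loading; ray parameters are illustrative):

    import numpy as np
    from temet.util.voronoiRay import rayTrace

    # sP, pos: defined elsewhere (simParams instance + gas cell coordinates)
    ray_dir = np.array([0.0, 0.0, 1.0])                # all rays along +z
    ray_pos = np.random.rand(1000, 3) * sP.boxSize     # random starting positions
    ray_pos[:, 2] = 0.0                                # start at the z=0 face

    # summary-statistic mode: one value per ray (here, the total intersected pathlength)
    dl_sum = rayTrace(sP, ray_pos, ray_dir, sP.boxSize, pos, mode='dx_sum', nThreads=8)

    # mode='full' instead returns (offsets, lengths, cell_dx, cell_inds) for all rays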

benchmark_test_raytracing()

Run a large number of rays using the threaded-code.

benchmark_test_voronoi(compare=True)

Run a large number of rays through the (fullbox) Voronoi mesh, in each case comparing the results from pre-computed vs. tree-based approaches, for correctness (and speed).

Parameters:

compare (bool) – if True, then run both methods and assert they have the same return. Otherwise, run only the tree-based method for timing.

Returns:

None

Module contents

Utilities, units, numba-accelerated algorithms, and other algorithmic implementations.