# Introduction

In this document series we will create the framework to calculate a PTF rate of SLSNe. This builds on our calcium-rich method and applies it to SLSNe.

The Ca-rich code was written in Python 2.7, which is now outdated: many of the packages used in that project will drop support for 2.7 in future updates. This project will be written in Python 3.6. The changes between 2.7 and 3.6 are small, but they do affect how we handle `print` and `map`.
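Side by side, the two changes that matter for us look like this:

```
# Python 2.7: print "hello"     <- print was a statement
# Python 3.x: print("hello")    <- print is a function
print("hello")

# Python 2.7: map(f, xs) returned a list
# Python 3.x: map() returns a lazy iterator, so wrap it in list()
# when you need the values immediately
squares = list(map(lambda x: x**2, [1, 2, 3]))
print(squares)
```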

Roles:

Chris Frohmaier: I’ll lead the project, oversee code development and paper writing.

Mark Sullivan: Group boss, python code development, proof reads and improves paper drafts.

Charlotte Angus: SLSN-I models

## Overview

We will use the whole PTF survey and simulate SLSNe everywhere! This is more ambitious than the Ca-rich rate, where we constrained our calculations to areas observed with a regular cadence. This footprint will be the entire observable sky from Palomar from January 2010 to December 2012.

- Assume a rate for SLSNe-I.
- Based on this rate assumption, draw an integer number of events from the Poisson distribution for our volume and time.
- Randomly draw an explosion date (or peak), redshift, R.A. and dec. for each event.
- These events must be drawn uniformly from a spherical distribution for R.A. and dec.
- Draw redshifts which follow the differential comoving volume of the universe.
- Draw the explosion time t from a uniform distribution.

- A random light curve will be assigned to each object following the population distribution from Angus.
- We will then check whether PTF would have observed the object during the survey.
- At all epochs we will use the Frohmaier et al. 2017 efficiencies to statistically determine whether each epoch was detected.
- The light curves will be checked to see if they meet our minimum detection criteria, e.g. 4 detections brighter than 20th mag.
- Those that pass the cut will be added to our total, and the intrinsic rate and number stored in a database.
- We will compare the distribution of rates which produce the same number of events as observed by PTF.
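The first two bullet points can be sketched in a couple of lines; the rate, volume, and duration below are placeholders for illustration, not the values we will eventually adopt:

```
import numpy as np

rate = 2e-7     # assumed volumetric rate in events / Mpc^3 / yr (placeholder)
volume = 1e9    # comoving volume of the survey in Mpc^3 (placeholder)
years = 3.0     # survey duration in years

expected = rate * volume * years          # expected number of events
n_events = np.random.poisson(expected)    # integer draw from the Poisson distribution
print(n_events)
```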

PTF sample from: http://iopscience.iop.org/article/10.3847/1538-4357/aaac2f/pdf or http://iopscience.iop.org/article/10.3847/0004-637X/830/1/13/pdf . These will ultimately be the same sample.

# Chapter 1.

In this chapter we will explore how to create a population of objects (which will eventually become supernovae) that are distributed uniformly in the universe.

## Let’s python

### How far can we expect to see a SLSN-I

We assume the peak brightness is M = -21 and that we will definitely get a spectrum if the object is m$_R$ < 20.

$\mu = m - M = 5 \mathrm{log}(\mathrm{D}_\mathrm{L}) + 25$

(for D$_\mathrm{L}$ in Mpc)
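Rearranging for the luminosity distance:

$\mathrm{D}_\mathrm{L} = 10^{(m - M - 25)/5}$

which is exactly what we compute next.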

```
import numpy as np #Import numerical python, everyone uses it all the time for numerical stuff, arrays etc, very fast
from astropy.cosmology import FlatLambdaCDM, z_at_value #importing cosmology tools from astropy
import astropy.units as u #astropy cosmology become very useful when we define units
cosmo = FlatLambdaCDM(H0=70, Om0=0.3) #Here we define our cosmology to the variable 'cosmo'
```

```
m = 20. # Apparent magnitude limit
M = -21 # Absolute magnitude of source
dl = 10.**(((m-M) - 25.)/5.) # Luminosity distance
```

```
## Here we define our upper redshift limit.
## The function z_at_value takes as the first argument the input function
## and as the second argument the value with appropriate units.
## Our dl is in Mpc, so we have to multiply by the units u.Mpc
z_limit = z_at_value(cosmo.luminosity_distance, dl*u.Mpc)
```

```
print(z_limit) # printing the results
```

```
0.3053784084335546
```

Of course, we won’t get a spectrum when our peak is at the detection limit, so maybe we should do this again but one mag brighter than the limit:

```
m = 19. # Our new apparent magnitude
dl = 10.**(((m-M) - 25.)/5.) # Luminosity distance
z_limit = z_at_value(cosmo.luminosity_distance, dl*u.Mpc)
print(z_limit) # printing the results
```

```
0.2036357362047915
```

O.K., we’re probably spectroscopically complete to z = 0.2

## Drawing randomly from the comoving volume of the universe

Here we will plot the differential comoving volume element of the universe, draw random redshifts from it, and check our redshifts agree with the distribution.

```
## We will be plotting data so lets import our favourite plotting module
import matplotlib.pyplot as plt
%matplotlib inline
#we only need this line when using a Jupyter notebook; it tells python to put the plot in the document and not a separate window
```

```
## create an array of redshifts
z_array = np.arange(0.0001, 4, 0.001) #np.arange(start,stop,step_size) N.B. python stops 1 step size before end
## Also we never start with redshift=0, the cosmology calculator hates it, so I just use some arbitrarily small number
## Below I have combined 2 steps in one
## cosmo.comoving_volume(z_array) calculates the volume of a sphere out to redshift = z_i
## for each i in z_array
## np.diff then calculates the difference in volume to turn this into volume shells
## therefore the result will have length N-1 of the original array
vol_arr = np.diff(cosmo.comoving_volume(z_array))
z_bin = 0.5*(z_array[1:] + z_array[:-1]) #We want to find the middle of each bin so it lines up with our vol_arr array
```

```
ax1 = plt.subplot(1,1,1) # Define a plotting axis
ax1.plot(z_bin,vol_arr) #line plot
ax1.set_xlabel('Redshift') # set label
ax1.set_ylabel(r'Comoving shell volume (Mpc$^3$)') # r'' allows us to use Latex in strings
```

### Drawing from the distribution

We will utilise the inverse-CDF method to draw from our distribution. Firstly we will generate a CDF, invert it, create an interpolatable function from this inverse, and uniformly draw from it. This will give us redshifts that match our original distribution (I’m just employing a standard stats technique, where the differential comoving volume is the PDF 😐 ).

```
from scipy.interpolate import interp1d #do 1D interpolations
cumul_V = np.cumsum(vol_arr)/max(np.cumsum(vol_arr)) # np.cumsum does the cumulative sum,
# I divide by max() to normalise to 1
cdfV = interp1d(z_bin,cumul_V, fill_value='extrapolate') # Interpolatable function for CDF
# Give it a redshift and it returns a number between 0,1 following the CDF
icdfV = interp1d(cumul_V,z_bin, fill_value='extrapolate')# Interpolatable function for iCDF
# Give it a number between 0,1 it returns a redshift following iCDF
rand01 = np.random.uniform(0,1,1000000) # Generates a uniform distribution of floats
# between 0 and 1, np.random.uniform(start, stop, size), we made 1 million!
randZ = icdfV(rand01) # Interpolate on the iCDF function to give me corresponding redshifts
# randZ will be an array of length = len(rand01)
```
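Before trusting the redshifts, it is worth checking the machinery on a distribution with a known answer. Here is the same inverse-CDF recipe applied to an exponential PDF (purely illustrative, nothing to do with cosmology), whose true median is $\mathrm{ln}(2) \approx 0.693$:

```
import numpy as np
from scipy.interpolate import interp1d

## Inverse-CDF sampling of p(x) = exp(-x) on a grid, mirroring the
## recipe above: cumulative sum -> normalise -> invert -> draw uniforms
x = np.linspace(0, 20, 20001)
pdf = np.exp(-x)
cumul = np.cumsum(pdf)
cumul = cumul/cumul[-1]                       # normalise the CDF to 1
icdf = interp1d(cumul, x, fill_value='extrapolate')
samples = icdf(np.random.uniform(0, 1, 1000000))
print(np.median(samples))                     # should be close to ln(2) ~ 0.693
```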

```
plt.figure(figsize=(15,5))
ax1 = plt.subplot(1,3,1) # we will make a subplot(num_rows, num_cols, cell_number)
ax1.plot(z_bin,cumul_V)
ax1.set_xlabel('Redshift')
ax1.set_ylabel('Volume CDF, normalised')
ax1.set_title('Volume CDF')
ax2 = plt.subplot(1,3,2) #axis for the second cell in subplot
ax2.plot(cumul_V,z_bin)
ax2.set_ylabel('Redshift')
ax2.set_xlabel('Volume CDF, normalised')
ax2.set_title('Volume inverse CDF')
ax3 = plt.subplot(1,3,3) #axis for the 3rd cell in subplot
n, bins, pat = ax3.hist(randZ, bins=np.arange(0,4,0.1), density=True) # density=True normalises the histogram to unit area
ax3.set_xlabel('Redshift')
ax3.set_ylabel('normalised bin count')
ax3.set_title('Redshift Distribution')
```

#### Ta-da!

We now have an array (`randZ`) that contains redshifts with a distribution which matches the differential comoving volume of the universe.

But wait, we only care out to redshift 0.2. Instead of re-doing all of this with z = 0.2 in place of z = 4, let’s just do the last bit again on the inverse CDF: instead of drawing from U(0,1) we’ll draw from U(0,?).

```
newLim = cdfV(0.2) # Go to our CDF function, find the number between 0,1 that corresponds to our new z limit of 0.2.
```

```
randZ02 = icdfV(np.random.uniform(0,newLim,1000000)) # draw a million
```

```
ax1 = plt.subplot(1,1,1)
_ = ax1.hist(randZ02, bins=np.arange(0,0.21,0.005))
ax1.set_xlabel('Redshift')
ax1.set_ylabel('Number of objects')
ax1.set_title('Distribution of redshifts')
```

Beautiful

# Random R.A. and dec. from a spherical distribution

We can’t just randomly draw from a uniform distribution in cartesian space and expect it to be uniform on the sky (a sphere).

Example:

```
RA_silly = np.random.uniform(0,360, 2000)
DEC_silly = np.random.uniform(-90,90, 2000)
fig = plt.figure(figsize=(20,8))
ax1 = fig.add_subplot(1,2,1)
ax1.scatter(RA_silly, DEC_silly, marker='.', color='#9C27B0')
ax1.set_xlim(0,360)
ax1.set_ylim(-90,90)
ax1.set_xlabel('R.A')
ax1.set_ylabel('dec.')
ax1.set_title('Cartesian Projection')
ax2 = fig.add_subplot(1,2,2, projection="mollweide")
ax2.scatter(np.radians(RA_silly)-np.pi, np.radians(DEC_silly), marker='.', color='#9C27B0')
ax2.set_xlabel('R.A')
ax2.set_ylabel('dec.')
ax2.set_title('Spherical Projection')
```

Notice how the cartesian projection is nice and random, but the spherical projection clumps at the poles? We don’t want this! 😱
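The culprit is the solid-angle element:

$\mathrm{d}\Omega = \cos(\mathrm{dec}) \, \mathrm{d}(\mathrm{dec}) \, \mathrm{d}(\mathrm{R.A.})$

Equal steps in dec. cover less and less area as $\cos(\mathrm{dec}) \rightarrow 0$ towards the poles, so drawing dec. uniformly over-populates them.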

## Spherically uniform

We’ll have to do this in radians. Let’s imagine we have transformed to a new coordinate system called the u,v plane.

```
# We'll define the bounds of our space to be transformed into the u,v plane
# For this example its the entire sky, but in principle these can be any
# 'boxes' on the sky.
low_ra = 0.
high_ra = 360.
low_dec = -90
high_dec = 90
## What are our new u,v plane extremes in radians?
umin=np.radians(low_ra)/(2.*np.pi)
umax=np.radians(high_ra)/(2.*np.pi)
vmin=(np.cos(np.radians(90.-low_dec))+1.0)/2.
vmax=(np.cos(np.radians(90.-high_dec))+1.0)/2.
# We can now draw uniformly from u,v space
pRa=np.random.uniform(umin, umax, 2000)
pDec=np.random.uniform(vmin, vmax, 2000)
## Transform back to R.A. and Dec in degrees
sphericalRA = np.degrees(pRa*2.*np.pi)
sphericalDEC = 90.-np.degrees(np.arccos((2.*pDec)-1.))
```
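A quick self-contained sanity check of the dec. transform (my own addition, not part of the recipe): on a uniform sphere, exactly half the area lies within 30° of the equator, because $\sin(30°) = 0.5$:

```
import numpy as np

## Re-draw a large whole-sky sample using the same v -> dec transform
n = 200000
pDec = np.random.uniform(0., 1., n)                 # v covering the full [-90,90] dec range
dec = 90. - np.degrees(np.arccos(2.*pDec - 1.))     # transform back to degrees
frac = np.mean(np.abs(dec) < 30.)                   # fraction within 30 deg of the equator
print(frac)                                         # should be very close to 0.5
```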

```
fig = plt.figure(figsize=(20,8))
ax1 = fig.add_subplot(1,2,1)
ax1.scatter(sphericalRA, sphericalDEC, marker='.', color='#009688')
ax1.set_xlim(0,360)
ax1.set_ylim(-90,90)
ax1.set_xlabel('R.A')
ax1.set_ylabel('dec.')
ax1.set_title('Cartesian Projection')
ax2 = fig.add_subplot(1,2,2, projection="mollweide")
ax2.scatter(np.radians(sphericalRA)-np.pi, np.radians(sphericalDEC), marker='.', color='#009688')
ax2.set_xlabel('R.A')
ax2.set_ylabel('dec.')
ax2.set_title('Spherical Projection')
```

Fantastic stuff, we now know how to create a population of objects uniformly distributed in spherical space.

## Time

I’ve saved the easiest for last. How do we generate a population of explosion times (or peak times)?

```
jd_start = 2455197.5
jd_stop = 2456293.5
randomT = np.random.uniform(jd_start, jd_stop, 100)
```

I won’t bother plotting this to prove to you it’s uniform.

# Putting it all together

Let’s create a function that we can call to output these results. The function outputs only one object at a time, but running it many times reproduces the distributions we have shown so far.

I will repeat many of the lines we have explored above.

```
z_array = np.arange(0.0001, 4, 0.001)
vol_arr = np.diff(cosmo.comoving_volume(z_array))
z_bin = 0.5*(z_array[1:] + z_array[:-1])
cumul_V = np.cumsum(vol_arr)/max(np.cumsum(vol_arr))
cdfV = interp1d(z_bin,cumul_V, fill_value='extrapolate')
icdfV = interp1d(cumul_V,z_bin, fill_value='extrapolate')
def ran_z_RA_dec_T(z_low, z_high, ra_low, ra_high, dec_low, dec_high, T_low, T_high):
    '''Creates a random redshift, R.A., dec., and explosion time.
    The output is drawn from the differential comoving volume of the universe,
    spherically uniform distributions and uniform time.
    z_low, z_high : Redshifts, must be between 0 and infinity
    ra_low, ra_high: 0,360 ra_low<ra_high
    dec_low, dec_high: -90,90 dec_low<dec_high
    T_low, T_high: anything T_low<T_high
    '''
    ## Generate our random redshift
    lower_z = cdfV(z_low)
    upper_z = cdfV(z_high)
    randZ = icdfV(np.random.uniform(lower_z,upper_z))
    ## Generate our random RA and Dec
    umin=np.radians(ra_low)/(2.*np.pi)
    umax=np.radians(ra_high)/(2.*np.pi)
    vmin=(np.cos(np.radians(90.-dec_low))+1.0)/2.
    vmax=(np.cos(np.radians(90.-dec_high))+1.0)/2.
    pRa=np.random.uniform(umin, umax)
    pDec=np.random.uniform(vmin, vmax)
    sphericalRA = np.degrees(pRa*2.*np.pi)
    sphericalDEC = 90.-np.degrees(np.arccos((2.*pDec)-1.))
    ## Generate a random explosion Time
    randomT = np.random.uniform(T_low, T_high)
    return float(randZ), sphericalRA, sphericalDEC, randomT
```

## Make Population

### Method 1: For loops

Let’s make 10,000 objects using a for loop, appending each result to a new list.

```
%%timeit
output_test = []
for k in range(0,10000):
    output_test.append(ran_z_RA_dec_T(0,4,0,360,-90,90,2455197.5,2456293.5))
```

```
1 loop, best of 3: 884 ms per loop
```

### Method 2: map

We’ll use `map` to map an iterable onto our function. We have to set up the iterables, which can be tedious.

```
num = 100000
zl = [0 for i in range(num)]
zh = [4 for i in range(num)]
ral = [0 for i in range(num)]
rah = [360 for i in range(num)]
decl = [-90 for i in range(num)]
dech = [90 for i in range(num)]
Tl = [2455197.5 for i in range(num)]
Th = [2456293.5 for i in range(num)]
```

```
slsn = list(map(ran_z_RA_dec_T, zl, zh, ral, rah, decl, dech, Tl, Th))
```
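Building eight throwaway lists just to feed `map` is clunky. `itertools.repeat` achieves the same thing without them; sketched here with a toy stand-in function, since the pattern is independent of `ran_z_RA_dec_T`:

```
from itertools import repeat

def toy(a, b):            # stand-in for ran_z_RA_dec_T, just to show the pattern
    return a + b

num = 5
out = list(map(toy, repeat(1., num), repeat(2., num)))
print(out)
```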

```
slsn = np.array(slsn) # Convert our output into a numpy array
```

## Check our distributions are ok

```
fig = plt.figure(figsize=(20,5))
ax1 = fig.add_subplot(1,3,1)
ax1.hist(slsn[:,0], bins=np.linspace(0,4,50), color='#FF5722')
ax1.set_xlabel('Redshift')
ax1.set_ylabel('Number')
ax1.set_xlim(0,4)
ax2 = fig.add_subplot(1,3,2, projection="mollweide")
ax2.scatter(np.radians(slsn[:,1])-np.pi, np.radians(slsn[:,2]), marker='.', color='#009688')
ax2.set_xlabel('R.A')
ax2.set_ylabel('dec.')
ax2.set_title('Spherical Projection')
ax3 = fig.add_subplot(1,3,3)
_ = ax3.hist(slsn[:,3], bins=50, color='#607D8B')
ax3.set_xlabel('Julian Date')
ax3.set_ylabel('Number')
ax3.set_xlim(2455197.5,2456293.5)
```

Looks good to me!

# In the next chapter …

The next chapter will explore the method of querying the PTF observing database. The outputs of this chapter (R.A., dec., T) will serve as database query inputs.