
GREAT3

GREAT3 Challenge

HPC Install

See this page.

Challenge Results

See this page.

Prior Catalogs

See this page.

Estimator Validation

See this page.

Toy Studies

GREAT3 Prior Study

See this page.

Theta Bias Study

See this page.

Two Prior Toys

Pick two priors with fluxes of roughly 25 and 35 from cosmos.json and put them in two.json:

    {
      "q": 0.8,
      "flux": 25.0,
      "hlr": 0.3,
      "beta": 0.0,
      "n": 0.95
    },
    {
      "q": 0.2,
      "flux": 35.0,
      "hlr": 0.4,
      "beta": 0.0,
      "n": 0.80
    }

Set the prior width to 25%. The resulting combined flux distribution is shown in the attachment pairflux.png (see also pairgen.png).
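For intuition, here is a minimal sketch of that combined distribution, assuming --sigma-frac 0.25 draws each flux from a Gaussian whose sigma is 25% of the prior's mean flux (the actual sampling in g3toygen.py may differ in detail):

    import numpy as np

    rng = np.random.default_rng(1)
    means = [25.0, 35.0]    # the two prior fluxes from two.json
    sigma_frac = 0.25       # prior width, as in --sigma-frac 0.25

    # Equal-weight mixture of the two flux priors.
    fluxes = np.concatenate([rng.normal(mu, sigma_frac * mu, 50000) for mu in means])
    print(f"combined flux: mean={fluxes.mean():.1f}, std={fluxes.std():.1f}")

Note that the two components overlap substantially at this width.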

Generate 100x100 toys using:

./g3multi.py --split 0:200:10 --run "./g3toygen.py --priors two.json --sigma-frac 0.25 --centroid-shift 1 --rotated --pairs --size 0 --nobs 10 --first-obs NNN --seed 1 --save pair1 > log/pair1.NNN.log" --hpc --N pair1 --q dm
./g3multi.py --split 0:200:10 --run "./g3toygen.py --priors two.json --sigma-frac 0.25 --centroid-shift 1 --rotated --pairs --size 0 --nobs 10 --first-obs NNN --seed 2 --save pair2 > log/pair2.NNN.log" --hpc --N pair2 --q dm
./g3multi.py --split 0:200:10 --run "./g3toygen.py --priors two.json --sigma-frac 0.25 --centroid-shift 1 --rotated --pairs --size 0 --nobs 10 --first-obs NNN --seed 3 --save pair3 > log/pair3.NNN.log" --hpc --N pair3 --q dm
./g3multi.py --split 0:200:10 --run "./g3toygen.py --priors two.json --sigma-frac 0.25 --centroid-shift 1 --rotated --pairs --size 0 --nobs 10 --first-obs NNN --seed 4 --save pair4 > log/pair4.NNN.log" --hpc --N pair4 --q dm
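As an aside, a sketch of the job-splitting convention assumed throughout: --split 0:200:10 launches one job per chunk of 10 observations, with NNN in the --run template replaced by each chunk's first observation index. This mirrors, rather than reproduces, the actual g3multi.py logic:

    # Hypothetical illustration of how --split 0:200:10 expands a --run template.
    template = ("./g3toygen.py --priors two.json --sigma-frac 0.25 "
                "--nobs 10 --first-obs NNN --seed 1 --save pair1")
    start, stop, step = 0, 200, 10
    for first_obs in range(start, stop, step):
        print(template.replace("NNN", str(first_obs)))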

Analyze using the correct two priors:

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors two.json --sigma-frac 0.25 --load pair1 --grid-scan --suffix both.NNN  > log/pair1.both.NNN.log" --hpc --N pair1.both
./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors two.json --sigma-frac 0.25 --load pair2 --grid-scan --suffix both.NNN  > log/pair2.both.NNN.log" --hpc --N pair2.both
./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors two.json --sigma-frac 0.25 --load pair3 --grid-scan --suffix both.NNN  > log/pair3.both.NNN.log" --hpc --N pair3.both
./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors two.json --sigma-frac 0.25 --load pair4 --grid-scan --suffix both.NNN  > log/pair4.both.NNN.log" --hpc --N pair4.both

Analyze with a single (incorrect) flux=30 prior:

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --flux 30 --sigma-frac 0.25 --hlrd 0.5 --qd 0.5 --load pair1 --grid-scan --suffix one.NNN > log/pair1.one.NNN.log" --hpc --N pair1.one
./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --flux 30 --sigma-frac 0.25 --hlrd 0.5 --qd 0.5 --load pair2 --grid-scan --suffix one.NNN > log/pair2.one.NNN.log" --hpc --N pair2.one

Analyze using a single (correct) prior (the flux = 35 one with q = 0.2):

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors 2nd.json --sigma-frac 0.25 --load pair1 --grid-scan --suffix 2nd.NNN  > log/pair1.2nd.NNN.log" --hpc --N pair1.2nd --q free64

Analyze using the correct two priors and finer sampling: increase --nxy from 5 to 7 and --ntheta from 12 to 15, but also increase nfine from 65 to 150 (xy) and 100 to 200 (theta) in prior.py:

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors two.json --sigma-frac 0.25 --load pair1 --grid-scan --nxy 7 --ntheta 15 --suffix fine.NNN  > log/pair1.fine.NNN.log" --hpc --N pair1.fine
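Schematically, --nxy and --ntheta set the coarse grid scanned per prior, while the nfine values in prior.py set the resolution of the finer grids used downstream; a rough sketch with illustrative names (not the actual prior.py API):

    import numpy as np

    nxy, ntheta = 7, 15                # coarse scan, as on the command line
    nfine_xy, nfine_theta = 150, 200   # fine sampling, as edited in prior.py

    xy_coarse = np.linspace(-1.0, 1.0, nxy)
    theta_coarse = np.linspace(0.0, np.pi, ntheta, endpoint=False)
    xy_fine = np.linspace(-1.0, 1.0, nfine_xy)
    theta_fine = np.linspace(0.0, np.pi, nfine_theta, endpoint=False)

    print("coarse points per prior:", nxy * nxy * ntheta)                # 735
    print("fine points per prior:  ", nfine_xy * nfine_xy * nfine_theta)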

Reduce coarse sampling:

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors two.json --sigma-frac 0.25 --load pair1 --grid-scan --nxy 5 --ntheta 12 --suffix fine2.NNN  > log/pair1.fine2.NNN.log" --hpc --N pair1.fine2

Restore the --nxy 7, --ntheta 15 coarse sampling but reduce the fine xy sampling from 150 back to the original 65 (via an update to prior.py). Leave the fine theta sampling at 200 since this should have little effect on the speed:

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors two.json --sigma-frac 0.25 --load pair1 --grid-scan --nxy 7 --ntheta 15 --suffix fine3.NNN  > log/pair1.fine3.NNN.log" --hpc --N pair1.fine3

Back off to --nxy 5, keeping everything else the same:

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors two.json --sigma-frac 0.25 --load pair1 --grid-scan --nxy 5 --ntheta 15 --suffix fine4.NNN  > log/pair1.fine4.NNN.log" --hpc --N pair1.fine4

Go back to --nxy 7 and increase --ntheta from 15 to 20, since the theta sampling seems to be responsible for most of the improvement:

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors two.json --sigma-frac 0.25 --load pair1 --grid-scan --nxy 7 --ntheta 20 --suffix fine5.NNN  > log/pair1.fine5.NNN.log" --hpc --N pair1.fine5

Since that improves the score a lot, increase ntheta further:

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --priors two.json --sigma-frac 0.25 --load pair1 --grid-scan --nxy 7 --ntheta 25 --suffix fine6.NNN  > log/pair1.fine6.NNN.log" --hpc --N pair1.fine6

Results are summarized here.

Single Prior Toys

Generate 100x100 observations with S/N = 30 and seeds 1,2 using:

./g3multi.py --split 0:200:20 --run "./g3toygen.py --verbose --size 0 --first-obs NNN --nobs 20 --flux 30 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --centroid-shift 1 --rotated --pairs --save full1 --seed 1 > log/gen-full1.NNN.log" --hpc --N gen-full1
./g3multi.py --split 0:200:20 --run "./g3toygen.py --verbose --size 0 --first-obs NNN --nobs 20 --flux 30 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --centroid-shift 1 --rotated --pairs --save full2 --seed 2 > log/gen-full2.NNN.log" --hpc --N gen-full2

For comparison, generate 20x20 observations with S/N = 300 (I meant this to be 150 - oops). This should have roughly the same shear signal since flux*sqrt(N) stays constant.
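A quick check of that arithmetic, which also shows where the "oops" came from:

    import math

    # flux * sqrt(N) for the 100x100 grid at flux 30:
    target = 30 * math.sqrt(100 * 100)    # 3000.0
    # matching flux for a 20x20 grid:
    print(target / math.sqrt(20 * 20))    # 150.0, so flux 300 was 2x too high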

./g3multi.py --split 0:200:20 --run "./g3toygen.py --verbose --size 20 --first-obs NNN --nobs 20 --flux 300 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --centroid-shift 1 --rotated --pairs --save hisn1 --seed 1 > log/gen-hisn1.NNN.log" --hpc --N gen-hisn1
./g3multi.py --split 0:200:20 --run "./g3toygen.py --verbose --size 20 --first-obs NNN --nobs 20 --flux 300 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --centroid-shift 1 --rotated --pairs --save hisn2 --seed 2 > log/gen-hisn2.NNN.log" --hpc --N gen-hisn2

Check on running HPC jobs in the dm queue using qstat -q dm. Replace the last two params with --nohup to run jobs on darkmatter instead of HPC.

Analyze with all parameters (flux,theta,x,y) estimated:

./g3multi.py --split 0:200:10 --run "./g3toybash.py --verbose --size 20 --nobs 10 --first-obs NNN --flux 300 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load hisn1 --grid-scan --fast-chisq --fast-init --suffix EstAllNNN > log/EstAll-hisn1.NNN.log" --hpc --N EstAll-hisn1
./g3multi.py --split 0:200:10 --run "./g3toybash.py --verbose --size 20 --nobs 10 --first-obs NNN --flux 300 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load hisn2 --grid-scan --fast-chisq --fast-init --suffix EstAllNNN > log/EstAll-hisn2.NNN.log" --hpc --N EstAll-hisn2

./g3multi.py --split 0:200:10 --run "./g3toybash.py --verbose --size 0 --nobs 10 --first-obs NNN --flux 30 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load full1 --grid-scan --fast-chisq --fast-init --suffix EstAllNNN > log/EstAll-full1.NNN.log" --hpc --N EstAll-full1
./g3multi.py --split 0:200:10 --run "./g3toybash.py --verbose --size 0 --nobs 10 --first-obs NNN --flux 30 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load full2 --grid-scan --fast-chisq --fast-init --suffix EstAllNNN > log/EstAll-full2.NNN.log" --hpc --N EstAll-full2

Same jobs but with theta fixed to its true value for the phi calculation only (add the --use-true-uij parameter):

./g3multi.py --split 0:200:10 --run "./g3toybash.py --verbose --size 20 --nobs 10 --first-obs NNN --flux 300 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load hisn1 --grid-scan --fast-chisq --fast-init --use-true-uij --suffix TrueThNNN > log/TrueTh-hisn1.NNN.log" --hpc --N TrueTh-hisn1
./g3multi.py --split 0:200:10 --run "./g3toybash.py --verbose --size 20 --nobs 10 --first-obs NNN --flux 300 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load hisn2 --grid-scan --fast-chisq --fast-init --use-true-uij --suffix TrueThNNN > log/TrueTh-hisn2.NNN.log" --hpc --N TrueTh-hisn2

./g3multi.py --split 0:200:10 --run "./g3toybash.py --verbose --size 0 --nobs 10 --first-obs NNN --flux 30 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load full1 --grid-scan --fast-chisq --fast-init --use-true-uij --suffix TrueThNNN > log/TrueTh-full1.NNN.log" --hpc --N TrueTh-full1
./g3multi.py --split 0:200:10 --run "./g3toybash.py --verbose --size 0 --nobs 10 --first-obs NNN --flux 30 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load full2 --grid-scan --fast-chisq --fast-init --use-true-uij --suffix TrueThNNN > log/TrueTh-full2.NNN.log" --hpc --N TrueTh-full2

Same jobs but using updated code that does not rescale flux (since this would simplify pruning with multiple priors):

./g3multi.py --split 0:200:10 --run "./g3toybash.py --verbose --size 20 --nobs 10 --first-obs NNN --flux 300 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load hisn1 --grid-scan --fast-chisq --fast-init --suffix NoRescaleNNN > log/NoRescale-hisn1.NNN.log" --hpc --N NoRescale-hisn1 --q free64
./g3multi.py --split 0:200:10 --run "./g3toybash.py --verbose --size 20 --nobs 10 --first-obs NNN --flux 300 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load hisn2 --grid-scan --fast-chisq --fast-init --suffix NoRescaleNNN > log/NoRescale-hisn2.NNN.log" --hpc --N NoRescale-hisn2 --q free64

./g3multi.py --split 0:200:5 --run "./g3toybash.py --verbose --size 0 --nobs 5 --first-obs NNN --flux 30 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load full1 --grid-scan --fast-chisq --fast-init --suffix NoRescaleNNN > log/NoRescale-full1.NNN.log" --hpc --N NoRescale-full1
./g3multi.py --split 0:200:5 --run "./g3toybash.py --verbose --size 0 --nobs 5 --first-obs NNN --flux 30 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load full2 --grid-scan --fast-chisq --fast-init --suffix NoRescaleNNN > log/NoRescale-full2.NNN.log" --hpc --N NoRescale-full2

Finally, analyze using the new prior.Likelihood instead of fisher.Likelihood (aka the fully marginalized prior):

./g3multi.py --split 0:200:5 --run "./g3toybash.py --verbose --size 20 --nobs 5 --first-obs NNN --flux 300 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load hisn1 --grid-scan --suffix EstNewNNN > log/EstNew-hisn1.NNN.log" --hpc --N EstNew-hisn1
./g3multi.py --split 0:200:5 --run "./g3toybash.py --verbose --size 20 --nobs 5 --first-obs NNN --flux 300 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load hisn2 --grid-scan --suffix EstNewNNN > log/EstNew-hisn2.NNN.log" --hpc --N EstNew-hisn2

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --size 0 --nobs 1 --first-obs NNN --flux 30 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load full1 --grid-scan --suffix EstNewNNN > log/EstNew-full1.NNN.log" --hpc --N EstNew-full1
./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --size 0 --nobs 1 --first-obs NNN --flux 30 --sigma-frac 0.1 --hlrd 0.5 --qd 0.5 --load full2 --grid-scan --suffix EstNewNNN > log/EstNew-full2.NNN.log" --hpc --N EstNew-full2

Summarize results using, e.g.

python great3/metric.py -i '/share/dm/all/data/great3/local/control/ground/constant/hisn1_EstAll*.json'

Deep Field Priors

Generate toys using deep003 priors on HPC and two different random seeds:

./g3multi.py --split 0:200:10 --run "./g3toygen.py --priors deep003 --centroid-shift 1 --rotated --pairs --size 0 --nobs 10 --first-obs NNN --seed 1 --save deep3toy1 > log/deep3toy1.NNN.log" --hpc --N deep3toy1
./g3multi.py --split 0:200:10 --run "./g3toygen.py --priors deep003 --centroid-shift 1 --rotated --pairs --size 0 --nobs 10 --first-obs NNN --seed 2 --save deep3toy2 > log/deep3toy2.NNN.log" --hpc --N deep3toy2

Generate a second pair of toys without 90-deg rotated pairs:

./g3multi.py --split 0:200:10 --run "./g3toygen.py --priors deep003 --centroid-shift 1 --pairs --size 0 --nobs 10 --first-obs NNN --seed 3 --save deep3toy3 > log/deep3toy3.NNN.log" --hpc --N deep3toy3
./g3multi.py --split 0:200:10 --run "./g3toygen.py --priors deep003 --centroid-shift 1 --pairs --size 0 --nobs 10 --first-obs NNN --seed 4 --save deep3toy4 > log/deep3toy4.NNN.log" --hpc --N deep3toy4

Test analysis with a flux > 75 cut:

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --min-flux 75 --sigma-frac 0.2 --priors deep003 --load deep3toy1 --grid-scan --fast-chisq --fast-init --suffix fmin75.NNN --save-best > log/d3t1f75.NNN.log" --hpc --N d3t1f75
./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --min-flux 75 --sigma-frac 0.2 --priors deep003 --load deep3toy2 --grid-scan --fast-chisq --fast-init --suffix fmin75.NNN --save-best > log/d3t2f75.NNN.log" --hpc --N d3t2f75

Summarize results with:

python great3/metric.py -i '/share/dm/all/data/great3/local/control/ground/constant/deep3toy1_fmin75.*.json'

Try fitting multi-prior toys with a single prior:

./g3multi.py --split 0:200:5 --run "./g3toybash.py --verbose --first-obs NNN --nobs 5 --flux 30 --sigma-frac 0.25 --hlrd 0.5 --qd 0.5 --load deep3toy1 --grid-scan --fast-chisq --fast-init --suffix one.NNN --flatten-theta-weight 0.01 --save-best > log/d3t1one.NNN.log" --hpc --N d3t1one
./g3multi.py --split 0:200:5 --run "./g3toybash.py --verbose --first-obs NNN --nobs 5 --flux 30 --sigma-frac 0.25 --hlrd 0.5 --qd 0.5 --load deep3toy2 --grid-scan --fast-chisq --fast-init --suffix one.NNN --flatten-theta-weight 0.01 --save-best > log/d3t2one.NNN.log" --hpc --N d3t2one

Test analysis using a randomly chosen subset of 100 priors for each observation:

./g3multi.py --split 0:200:1 --run "./g3toybash.py --verbose --nobs 1 --first-obs NNN --max-priors 100 --sigma-frac 0.2 --priors deep003 --load deep3toy1 --grid-scan --suffix n100.NNN --ntheta 25 --nxy 5 > log/deep3toy1.n100.NNN.log" --hpc --N deep3toy1.n100
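Schematically, --max-priors 100 just draws a random subset of the prior catalog per observation; a sketch assuming the priors load as a JSON list of dicts like two.json above (illustrative, not the actual g3toybash.py code):

    import json
    import numpy as np

    with open("two.json") as f:    # stand-in for the deep003 catalog
        priors = json.load(f)
    rng = np.random.default_rng(0)
    keep = rng.choice(len(priors), size=min(100, len(priors)), replace=False)
    subset = [priors[i] for i in keep]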

Shear Maximum Likelihood Strategies

Generate 10x10 observations with disk-only sources (hlr=0.5",qd=0.5) having arbitrary rotations (in 90-deg pairs) and centroid shifts. Sample the flux from a Gaussian with mean 300 and sigma 30 (rVariance = 0.01) and use the noise and psf models for control/ground/constant:

./g3toygen.py --verbose --size 10 --nobs 0 --flux 300 --r-variance 0.01 --hlrd 0.5 --qd 0.5 --centroid-shift 1 --rotated --pairs --save full
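(For reference, rVariance is the fractional flux variance: (30/300)^2 = 0.01, i.e. a 10% flux scatter.)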

Calculate the NLL using the true (theta,x,y) for each generated stamp and compare different methods for finding the best (g1,g2) for each of the 200 observations:

./g3toybash.py --verbose --size 10 --nobs 0 --flux 300 --r-variance 0.01 --hlrd 0.5 --qd 0.5 --load full <options>

Time     | m                    | c                   | Q       | <options>
00:35:08 | 0.0005121 -0.0015916 | 0.0000349 0.0000540 | 2232.99 | --suffix s0t100 --mn-strategy 0 --mn-tolerance 100
00:38:16 | 0.0005002 -0.0015877 | 0.0000351 0.0000539 | 2241.24 | --suffix s0t50 --mn-strategy 0 --mn-tolerance 50
00:58:17 | 0.0005373 -0.0016128 | 0.0000353 0.0000528 | 2203.93 | --suffix s0t10 --mn-strategy 0 --mn-tolerance 10
05:00:42 | 0.0005456 -0.0016642 | 0.0000355 0.0000522 | 2148.51 | --suffix s1t10 --mn-strategy 1 --mn-tolerance 10
02:39:51 | 0.0005440 -0.0016529 | 0.0000351 0.0000527 | 2160.00 | --suffix s0t1 --mn-strategy 0 --mn-tolerance 1
01:19:07 | 0.0005206 -0.0016675 | 0.0000350 0.0000525 | 2153.54 | --suffix g5 --grid-scan --shear-grid-size 5 --shear-grid-max 0.01
00:52:20 | 0.0005219 -0.0016575 | 0.0000346 0.0000523 | 2165.07 | --suffix g4 --grid-scan --shear-grid-size 4 --shear-grid-max 0.01

Based on these results, set the defaults to --mn-strategy 0 --mn-tolerance 50.

Earlier Studies

Fix S/N = 4000 and the shape to disk-only with hlr = 0.5". Simulate branches based on the measured noise and psf of control/ground/constant, using 10x10 stamps per observation and different treatments of the centroid shift and rotation:

./g3toygen.py --verbose --size 10 --nobs 0 --flux 4000 --hlrd 0.5 <args>

The ground jobs take about 3 minutes on my laptop. Add the --space option to base simulated images on control/space/constant.

Name    | q   | Shifts? | Rotated? | Pairs? | <args>
round   | 1.0 | no      | no       | no     | --qd 1.0 --save round
shifts  | 1.0 | yes     | no       | no     | --qd 1.0 --centroid-shift 1 --save shifts
rotated | 0.5 | no      | yes      | no     | --qd 0.5 --rotated --save rotated
pairs   | 0.5 | no      | yes      | yes    | --qd 0.5 --rotated --pairs --save pairs
full    | 0.5 | yes     | yes      | yes    | --qd 0.5 --centroid-shift 1 --rotated --pairs --save full

Estimate shears using the catalog truth values of {flux,dx,dy,angle} and save results to $GREAT3_ROOT/local/control/ground/constant/obs-<nnn>-0/<name>_truth.json:

./g3toybash.py --verbose --size 10 --nobs 0 --hlrd 0.5 --suffix truth <args>

These jobs take about 68 mins each on my laptop.

Name    | m1       | m2       | c1      | c2      | Q     | m                     | c                     | Q      | <args>
round   | -2.39e-4 | -4.86e-5 | +3.3e-6 | +4.1e-6 | 16041 | -0.0000097 -0.0000302 | 0.0000012 0.0000014   | 109510 | --qd 1.0 --load round
shifts  | -3.47e-5 | +7.96e-5 | +0.5e-6 | -3.1e-6 | 43286 | 0.0001604 -0.0000566  | -0.0000020 0.0000017  | 23246  | --qd 1.0 --load shifts
rotated | -2.26e-4 | -8.85e-5 | +0.2e-6 | +7.8e-6 | 15706 | 0.0000903 -0.0000138  | 0.0000003 0.0000041   | 98461  | --qd 0.5 --load rotated
pairs   | +1.90e-4 | -1.29e-4 | +2.3e-6 | +6.7e-6 | 16650 | 0.0000999 -0.0000876  | 0.0000001 -0.0000024  | 29604  | --qd 0.5 --load pairs
full    | +0.59e-5 | +1.22e-4 | +4.9e-6 | +0.1e-6 | 30483 | 0.0000403 -0.0000229  | -0.0000002 -0.0000018 | 80308  | --qd 0.5 --load full

Calculate the m,c,Q scores in the table above using, e.g.

python great3/metric.py -i $GREAT3_ROOT/local/control/ground/constant/round_truth.json --plot

The quoted values are after psf rotation but with no auto-calibration of the slope. The first set of values (m1, m2, c1, c2, Q) is for ground and the second set (m, c, Q) for space.
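For reference, m and c follow the standard GREAT3 linear bias model, g_obs = (1 + m) g_true + c per shear component; a minimal sketch of such a fit (metric.py's actual implementation, including the Q score, is more involved):

    import numpy as np

    def mc_fit(g_true, g_obs):
        """Least-squares fit of g_obs = (1 + m) * g_true + c for one component."""
        A = np.vstack([g_true, np.ones_like(g_true)]).T
        slope, c = np.linalg.lstsq(A, g_obs, rcond=None)[0]
        return slope - 1.0, c

    # toy usage with fabricated shears
    rng = np.random.default_rng(0)
    g_true = np.linspace(-0.05, 0.05, 200)
    g_obs = 1.0002 * g_true + 1e-5 + rng.normal(0, 1e-5, 200)
    print(mc_fit(g_true, g_obs))    # recovers m ~ 2e-4, c ~ 1e-5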

Data Access

Example command to download data for a single branch from the US mirror:

nohup wget ftp://ftp.great3.caltech.edu/pub/great3/data/public/control-ground-constant.tar.gz &

On darkmatter, the data is accessible at /data/great3.

On HPC, hop over to our GPU node (qrsh -q dm) and you will find the data at /dm/all/data/great3.

Processing Status

branch name | variant         | format   | constpsf | noise | psfoffsets | psfmosaic            | psfchisq
control     | ground,constant | 48,0.2"  | Y,R      | noise | psf psf01  | psf psf01 psf01_deep | psf psf01
control     | ground,variable | 48,0.2"  | Y,R      | noise | psf psf01  | psf psf01 psf01_deep | psf psf01
control     | space,constant  | 96,0.05" | Y        | noise | psf psf01  | psf psf01 psf01_deep | psf psf01
control     | space,variable  | 96,0.05" | Y        | noise | psf psf01  | psf psf01 psf01_deep | psf psf01
real_galaxy | ground,constant | 48,0.2"  | Y,R      | noise | psf psf01  | psf psf01 psf01_deep | psf psf01
real_galaxy | ground,variable | 48,0.2"  | Y,R      | noise | psf psf01  | psf psf01 psf01_deep | psf psf01
real_galaxy | space,constant  | 96,0.05" | Y        | noise | psf psf01  | psf psf01 psf01_deep | psf psf01
real_galaxy | space,variable  | 96,0.05" | Y        | noise | psf psf01  | psf psf01 psf01_deep | psf psf01
multiepoch  | ground,constant | 48,0.2"  | Y,R      | -     | psf psf01  | psf psf01 psf01_deep | psf psf01
multiepoch  | ground,variable | 48,0.2"  | Y,R      | -     | psf psf01  | psf psf01 psf01_deep | psf psf01
multiepoch  | space,constant  | 48,0.1"  | Y        | -     | altpsf     | altpsf               | altpsf
multiepoch  | space,variable  | 48,0.1"  | Y        | -     | altpsf     | altpsf               | altpsf

Format details are here.

Constant PSF Analysis

Use lam = 0.01 for constant ground psfs:

nohup ./g3constpsf.py --verbose --lam 0.01 --name control --save psf01 > $GREAT3_ROOT/log/psf01.control.ground.constant.log &
nohup ./g3constpsf.py --verbose --lam 0.01 --name control --variable --save psf01 > $GREAT3_ROOT/log/psf01.control.ground.variable.log &
nohup ./g3constpsf.py --verbose --lam 0.01 --name real_galaxy --save psf01 > $GREAT3_ROOT/log/psf01.real_galaxy.ground.constant.log &
nohup ./g3constpsf.py --verbose --lam 0.01 --name real_galaxy --variable --save psf01 > $GREAT3_ROOT/log/psf01.real_galaxy.ground.variable.log &
nohup ./g3constpsf.py --verbose --lam 0.01 --name multiepoch --save psf01 > $GREAT3_ROOT/log/psf01.multiepoch.ground.constant.log &
nohup ./g3constpsf.py --verbose --lam 0.01 --name multiepoch --variable --save psf01 > $GREAT3_ROOT/log/psf01.multiepoch.ground.variable.log &
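For context, lam sets the regularization strength in these psf fits; below is a generic ridge-form sketch meant only to illustrate what the lam knob does (an assumption on my part: build_regularizer in g3constpsf.py presumably penalizes image derivatives, per --deriv-order, rather than the plain identity penalty used here):

    import numpy as np

    def regularized_fit(A, y, lam):
        """Solve min ||A x - y||^2 + lam ||x||^2 (illustrative ridge form)."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

    # toy usage: larger lam smooths the solution more
    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 10))
    y = A @ rng.normal(size=10) + rng.normal(scale=0.1, size=100)
    x = regularized_fit(A, y, lam=0.01)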

Use lam = 0.0001 and deriv-order = 3 for single-epoch constant space psfs:

nohup ./g3constpsf.py --verbose --lam 0.0001 --deriv-order 3 --name control --space --save psf > $GREAT3_ROOT/log/psf01.control.space.constant.log &
nohup ./g3constpsf.py --verbose --lam 0.0001 --deriv-order 3 --name control --space --variable --save psf > $GREAT3_ROOT/log/psf01.control.space.variable.log &
nohup ./g3constpsf.py --verbose --lam 0.0001 --deriv-order 3 --name real_galaxy --space --save psf > $GREAT3_ROOT/log/psf01.real_galaxy.space.constant.log &
nohup ./g3constpsf.py --verbose --lam 0.0001 --deriv-order 3 --name real_galaxy --space --variable --save psf > $GREAT3_ROOT/log/psf01.real_galaxy.space.variable.log &

nohup time ./g3constpsf.py --verbose --altfit --max-iter 10000 --max-offset 1.1 --name multiepoch --space --variable --save altpsf > constpsf.altfit.multiepoch.space.variable.log &
nohup time ./g3constpsf.py --verbose --altfit --max-iter 10000 --max-offset 1.1 --name multiepoch --space --save altpsf > constpsf.altfit.multiepoch.space.constant.log &

Combined fits across subfields with the same psf (c3 stands for a combined fit with 3x3 oversampling, etc.):

nohup ./g3constpsf.py --verbose --lam 0.0001 --deriv-order 3 --name control --space --variable --save psfc3 --oversampling 3 --combined > $GREAT3_ROOT/log/psfc3.control.space.variable.log &
nohup ./g3constpsf.py --verbose --lam 0.0001 --deriv-order 3 --name real_galaxy --space --variable --save psfc3 --oversampling 3 --combined > $GREAT3_ROOT/log/psfc3.real_galaxy.space.variable.log &

nohup ./g3constpsf.py --verbose --lam 0.01 --name control --variable --save psfc5 --oversampling 5 --combined > $GREAT3_ROOT/log/psfc5.control.ground.variable.log &
nohup ./g3constpsf.py --verbose --lam 0.01 --name real_galaxy --variable --save psfc5 --oversampling 5 --combined > $GREAT3_ROOT/log/psfc5.real_galaxy.ground.variable.log &
nohup ./g3constpsf.py --verbose --lam 0.01 --name multiepoch --variable --save psfc5 --oversampling 5 --combined > $GREAT3_ROOT/log/psfc5.multiepoch.ground.variable.log &

# these jobs failed with MemoryError in build_regularizer
nohup ./g3constpsf.py --verbose --lam 0.0001 --deriv-order 3 --name control --space --variable --save psfc5 --oversampling 5 --combined > $GREAT3_ROOT/log/psfc5.control.space.variable.log &
nohup ./g3constpsf.py --verbose --lam 0.0001 --deriv-order 3 --name real_galaxy --space --variable --save psfc5 --oversampling 5 --combined > $GREAT3_ROOT/log/psfc5.real_galaxy.space.variable.log &

Noise Studies

Calculate noise RMS and signal-to-noise distributions:

nohup ./g3validate.py --noise --verbose --name control --save noise > $GREAT3_ROOT/log/noise.control.ground.constant.log &
nohup ./g3validate.py --noise --verbose --name control --save noise --space > $GREAT3_ROOT/log/noise.control.space.constant.log &
nohup ./g3validate.py --noise --verbose --name control --save noise --variable > $GREAT3_ROOT/log/noise.control.ground.variable.log &
nohup ./g3validate.py --noise --verbose --name control --save noise --space --variable > $GREAT3_ROOT/log/noise.control.space.variable.log &

nohup ./g3validate.py --noise --verbose --name real_galaxy --save noise > $GREAT3_ROOT/log/noise.real_galaxy.ground.constant.log &
nohup ./g3validate.py --noise --verbose --name real_galaxy --save noise --space > $GREAT3_ROOT/log/noise.real_galaxy.space.constant.log &
nohup ./g3validate.py --noise --verbose --name real_galaxy --save noise --variable > $GREAT3_ROOT/log/noise.real_galaxy.ground.variable.log &
nohup ./g3validate.py --noise --verbose --name real_galaxy --save noise --space --variable > $GREAT3_ROOT/log/noise.real_galaxy.space.variable.log &

nohup ./g3validate.py --noise --verbose --name multiepoch --save noise > $GREAT3_ROOT/log/noise.multiepoch.ground.constant.log &
##nohup ./g3validate.py --noise --verbose --name multiepoch --save noise --space > $GREAT3_ROOT/log/noise.multiepoch.space.constant.log &
nohup ./g3validate.py --noise --verbose --name multiepoch --save noise --variable > $GREAT3_ROOT/log/noise.multiepoch.ground.variable.log &
##nohup ./g3validate.py --noise --verbose --name multiepoch --save noise --space --variable > $GREAT3_ROOT/log/noise.multiepoch.space.variable.log &
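As a rough illustration of the noise RMS measurement (hypothetical: g3validate.py's actual method may differ), the RMS can be estimated robustly from sky pixels:

    import numpy as np

    def noise_rms(sky_pixels):
        """Robust noise RMS via the median absolute deviation (MAD)."""
        mad = np.median(np.abs(sky_pixels - np.median(sky_pixels)))
        return 1.4826 * mad    # MAD -> sigma for Gaussian noise

    # toy usage
    sky = np.random.default_rng(0).normal(0.0, 2.5, size=10000)
    print(noise_rms(sky))      # ~2.5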

Noise attachments are on this page.

Tiled Image Studies

See here.

TODO

Install the optimized sparse matrix solver UMFPACK, which requires CHOLMOD, CAMD, CCOLAMD, COLAMD, and metis-4.0, plus an upgrade to scipy v0.13.0.

Update const psf analysis of variable shear branches to combine the 20x9 stars with the same psf.

Update const psf analysis of multiepoch constant (variable) branches to combine the 6x9 (20x6x9?) stars with the same psf.

Do star centroids follow the dithers in multi-epoch branches? NO

Create json output module that adds cmd-line args and git commit hash to top-level dict and manages default output style.

Investigate why there is so much variation in the psf tails within a sub-field for multiepoch-space-variable. Compare Solver tails with average of the 9 input psfs.

Misc Notes

On 12/16, the "psf" g3constpsf output files for cs* and rs* branches were overwritten. This is how they were restored:

cd into csc (repeat for csv, rsc, rsv):

# the nested quoting replaces the quoted string 'psf' with 'psf01' inside each psf.json
for dir in $(ls -1 .); do sed 's/'"'"'psf'"'"'/'"'"'psf01'"'"'/g' $dir/psf.json > $dir/psf01.json; done
for dir in $(ls -1 .); do rm -f $dir/psf.json; done
for dir in $(ls -1 .); do mv $dir/psf.fits $dir/psf01.fits; done

copy psf* files back to dm (repeat for control-space):

rsync -avzhe ssh --include='psf.json' --include='psf.fits' --include='*/' --exclude='*' real_galaxy/space dm:/data/great3/local/real_galaxy/