2017-09-17

regression

Our data-driven model for stars, The Cannon, is a regression. That is, it models how the labels generate the spectral pixels, under assumptions about the functional form of that generation. I spent part of today building a Jupyter notebook to demonstrate that—when the assumptions underlying the regression are correct—the results of the regression are accurate (and precise). That is, the maximum-likelihood regression estimator is a good one. That isn't surprising; there are very general proofs; but it answers some questions (that my collaborators have) about cases where the labels (the regressors) are correlated in the training set.
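
Here is a minimal sketch of that demonstration (a linear stand-in, not The Cannon itself; the labels, coefficients, and noise level are all made up): even when the training-set labels are strongly correlated, the least-squares (maximum-likelihood) fit recovers the true generative coefficients.

```python
import numpy as np

rng = np.random.default_rng(17)

# training set: two labels (say, Teff-like and logg-like), strongly correlated
n_stars, n_pixels = 500, 50
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
labels = rng.multivariate_normal([0.0, 0.0], cov, size=n_stars)

# true generative model: each pixel is a linear function of the labels plus noise
true_coeffs = rng.standard_normal((2, n_pixels))
noise = 0.05
spectra = labels @ true_coeffs + noise * rng.standard_normal((n_stars, n_pixels))

# maximum-likelihood (least-squares) fit of the coefficients, all pixels at once
fit_coeffs, *_ = np.linalg.lstsq(labels, spectra, rcond=None)

# accurate despite the label correlations (errors at the noise-limited level)
print("max abs error in coefficients:", np.max(np.abs(fit_coeffs - true_coeffs)))
```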

2017-09-15

new parallel-play workshop

Today was the first try at a new group-meeting idea for my group. I invited my NYC close collaborators to my (new) NYU office (which is also right across the hall from Huppenkothen and Leistedt) to work on whatever they are working on. The idea is that we will work in parallel (and independently), but we are all there to answer questions, discuss, debug, and pair-code. It was intimate today, but successful. Megan Bedell (Flatiron) and I debugged a part of her code that infers the telluric absorption spectrum (in a data-driven way, of course). And Elisabeth Andersson (NYU) got kplr and batman installed inside the sandbox that runs her Jupyter notebooks.

2017-09-14

latent variable models, weak lensing

The day started with a call with Bernhard Schölkopf (MPI-IS), Hans-Walter Rix (MPIA), and Markus Bonse (Darmstadt) to discuss taking Christina Eilers's (MPIA) problem of modeling spectra with partial labels over to a latent-variable model, probably starting with the GPLVM. We discussed data format and how we might start. There is a lot of work in astronomy using GANs and deep learning to make data generators. These are great, but we are betting it will be easier to put causal structure that we care about into the latent-variable model.

At Cosmology & Data Group Meeting at Flatiron, the whole group discussed the big batch of weak lensing results released by the Dark Energy Survey last month. A lot of the discussion was about understanding the covariances of the likelihood information coming from the weak lensing. This is a bit hard to understand, because everyone uses highly informative priors (for good reasons, of course) from prior data. We also discussed the multiplicative bias and other biases in shape measurement; how might we constrain these independently from the cosmological parameters themselves? Data simulations, of course, but most of us would like to see a measurement to constrain them.

At the end of Cosmology Meeting, Ben Wandelt (Flatiron) and I spent time discussing projects of mutual interest. In particular we discussed dimensionality reduction related to galaxy morphologies and spatially resolved spectroscopy, in part inspired by the weak-lensing discussion, and also the future of Euclid.

2017-09-13

Gaia, asteroseismology, robots

In our panic about upcoming Gaia DR2, Adrian Price-Whelan and I have established a weekly workshop on Wednesdays, in which we discuss, hack, and parallel-work on Gaia projects in the library at the Flatiron CCA. In our first meeting we just said what we wanted to do, jointly edited a big shared google doc, and then started working. At each workshop meeting, we will spend some time talking and some time working. My plan is to do data-driven photometric parallaxes, and maybe infer some dust.

At the Stars Group Meeting, Stephen Feeney (Flatiron) talked about asteroseismology, where we are trying to get the seismic parameters without ever taking a Fourier Transform. Some of the crowd (Cantiello in particular) suggested that we have started on stars that are too hard; we should choose super-easy, super-bright, super-standard stars to start. Others in the crowd (Hawkins in particular) pointed out that we could be using asteroseismic H-R diagram priors on our inference. Why not be physically motivated? Duh.

At the end of Group Meeting, Kevin Schawinski (ETH) said a few words about auto-encoders. We discussed imposing more causal structure on them, and seeing what happens. He is going down this path. We also veered off into networks-of-autonomous-robots territory for LSST follow-up, keying off remarks from Or Graur (CfA) about time-domain and spectroscopic surveys. Building robots that know about scientific costs and utility is an incredibly promising direction, but hard.

2017-09-12

statistics of power spectra

Daniela Huppenkothen (NYU) came to talk about power spectra and cross-spectra today. The idea of the cross-spectrum is that you multiply one signal's Fourier transform against the complex conjugate of the other's. If the signals are identical, this is the power spectrum. If they differ by phase lags, the answer has an imaginary part, and so on. We then launched into a long conversation about the distribution of cross-spectrum components given distributions for the original signals. In the simplest case, this is about distributions of sums of products of Gaussian-distributed variables, where analytic results are rare. And that's the simplest case!
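
A tiny numpy illustration of the definition (toy sinusoids, not real timing data): the cross-spectrum of two identical-but-phase-lagged signals carries the lag in its complex phase.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1024
t = np.arange(n)
f = 50.0 / n  # an exact Fourier bin, to avoid leakage

# two signals: the same sinusoid, the second lagging by 0.3 rad, plus noise
x = np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(n)
y = np.sin(2 * np.pi * f * t - 0.3) + 0.1 * rng.standard_normal(n)

X = np.fft.rfft(x)
Y = np.fft.rfft(y)

cross = X * np.conj(Y)     # cross-spectrum
power = X * np.conj(X)     # power spectrum (real and non-negative)

k = np.argmax(np.abs(cross))
print("phase lag at peak frequency:", np.angle(cross[k]))  # roughly +0.3 rad
```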

One paradox or oddity that we discussed is the following: In a long time series, imagine that every time point gets a value (flux value, say) that is drawn from a very skew or very non-Gaussian distribution. Now take the Fourier transform. By central-limit reasoning, all the Fourier amplitudes must be very close to Gaussian-distributed! Where did the non-Gaussianity go? After all, the FT is simply a rotation in data space. I think it probably all went into the correlations of the Fourier amplitudes, but how to see that? These are old ideas that are well understood in signal processing, I am sure, but not by me!
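
To convince myself of the central-limit part, here is a tiny numpy check (toy white noise, not a real light curve): each time-domain sample is drawn from a very skewed distribution, yet the pooled real parts of the Fourier coefficients come out with essentially zero skewness.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096

# very skewed per-sample distribution: zero-centered exponential draws
x = rng.exponential(1.0, size=n) - 1.0
print("time-domain skewness:", np.mean(x**3) / np.std(x)**3)   # close to 2

X = np.fft.rfft(x)
re = X.real[1:-1]  # drop the DC and Nyquist bins
print("Fourier-domain skewness:", np.mean(re**3) / np.std(re)**3)  # close to 0
```

Each coefficient is a weighted sum of all n samples, so it is driven toward Gaussianity; the non-Gaussian information presumably survives only in higher-order correlations among the coefficients (bispectrum-like quantities), not in their one-point distributions.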

2017-09-11

EPRV

Today I met with Megan Bedell, who is just about to start work here in the city at the Flatiron Institute. We discussed our summer work on extreme precision radial-velocity measurements. We have come to the realization that we can't write a theory paper on this without dealing with tellurics and continuum, so we decided to face that in the short term. I don't want to get too bogged down, though, because we have a very simple point: Some ways of measuring the radial velocity saturate the Cramér–Rao bound, many do not!
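
Here is the kind of toy calculation behind that point (a made-up spectrum with Gaussian absorption lines; all the numbers are illustrative): the Fisher information for the velocity sums the squared velocity-derivative of the spectrum over pixels, and its inverse square root is the Cramér–Rao bound that any RV-measurement scheme can at best saturate.

```python
import numpy as np

c = 299792458.0  # speed of light, m/s

# toy spectrum: flat continuum with a few Gaussian absorption lines
wave = np.linspace(5000.0, 5010.0, 2000)            # Angstroms
spec = np.ones_like(wave)
for line_center in (5002.0, 5005.5, 5008.0):
    spec -= 0.5 * np.exp(-0.5 * ((wave - line_center) / 0.05) ** 2)

snr = 100.0                        # per-pixel signal-to-noise at the continuum
sigma = np.ones_like(spec) / snr

# a velocity shift v moves a pixel by dlambda = lambda * v / c,
# so the derivative of the spectrum with respect to v is:
dspec_dv = np.gradient(spec, wave) * wave / c

# Cramér–Rao bound on the RV uncertainty (photon-limited)
crb_sigma_v = 1.0 / np.sqrt(np.sum((dspec_dv / sigma) ** 2))
print("best possible RV precision: {:.1f} m/s".format(crb_sigma_v))
```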

2017-09-08

reconstruction, modifications to GR

The day started with a conversation with Elisabeth Andersson (NYU) about possible projects. We tentatively decided to look for resonant planets in the Kepler data. I sent her the Luger paper on TRAPPIST-1.

Before lunch, there was a great Astro Seminar by Marcel Schmittfull (IAS) about using the non-linearities in the growth of large-scale structure to improve measurements of cosmological parameters. He made two clear points (to me): One is that the first-order "reconstruction" methods used to run back the clock on nonlinear clustering can be substantially improved upon (and even small improvements can lead to large improvements in cosmological parameter estimation). The other is that there is as much information about cosmological parameters in the skewness as the variance (ish!). After his talk I asked about improving reconstruction even further using machine learning, which led to a conversation with Marc Williamson (NYU) about a possible pilot project.

In the afternoon, after a talk about crazy black-hole ideas from Ram Brustein (Ben-Gurion), Matt Kleban (NYU) and I discussed the great difficulty of seeing strong-field corrections to general relativity in gravitational-wave measurements. The problem is that the radiation signal is dominated by activity well outside the Schwarzschild radius: Things close to the horizon are highly time-dilated and red-shifted and so don't add hugely to the strong parts of the signal. Most observable signatures of departures from GR are probably already ruled out by other observations! With the standard model, dark matter, dark energy, and GR all looking like they have no observational issues, fundamental physics is looking a little boring right now!

2017-09-07

a non-parametric map of the MW halo

The day started with a call with Ana Bonaca (CfA), in which we discussed generalizing her Milky Way gravitational potential to have more structure, substructure, and freedom. We anticipate that when we increase this freedom, the precision with which any one cold stellar stream constrains the global MW potential should decrease. Eventually, with a very free potential, in principle each stream should constrain the gravitational acceleration field in the vicinity of that stream! If that's true, then a dense network of cold streams throughout the Milky Way halo would provide a non-parametric (ish) map of the acceleration field throughout the Milky Way halo!

In the afternoon I pitched new projects to Kate Storey-Fisher (NYU). She wants to do cosmology! So I pitched the projects I have on foregrounds for next-generation CMB and line-intensity mapping experiments, and my ideas about finding anomalies (and new statistics for parameter estimation) in a statistically responsible way. On the latter, I warned her that some of the relevant work is in the philosophy literature.

2017-09-06

MW dynamics

At Flatiron Stars Group Meeting, Chervin Laporte (Columbia) led a very lively discussion of how the Sagittarius and LMC accretion events into the Milky Way halo might be affecting the Milky Way disk. There can be substantial distortions to the disk from these minor mergers, and some of the action comes from the fact that the merging satellite raises a wake or disturbance in the halo that magnifies the effect of the satellite itself. He has great results that should appear on the arXiv soon.

After that, there were many discussions about things Gaia-related. We decided to start a weekly workshop-like meeting to prepare for Gaia DR2, which is expected in April. We are not ready! But when you are talking about billions of stars, you have to get ready in advance.

One highlight of the day was a brief chat with Sarah Pearson (Columbia), Kathryn Johnston (Columbia), and Adrian Price-Whelan (Princeton) about the formal structure of our cold-stream inference models, and the equivalence (or not) of our methods that run particles backwards in time (to a simpler distribution function) or forwards in time (to a simpler likelihood function). We discussed the possibility of differentiating our codes to permit higher-end sampling. We also discussed the information content in streams (work I have been doing with Ana Bonaca of Harvard) and the toy quality of most of the models we (and others) have been using.

2017-09-05

not much; but which inference is best?

Various tasks involved in the re-start of the academic year took out my research time today. But I did have a productive conversation with Alex Malz (NYU) about his current projects and priorities. One question that Malz asked is: Imagine you have various Bayesian inference methods or systems, each of which performs some (say) Bayesian classification task. Each inference outputs probabilities over classes. How can you tell which inference method is the best? That's a hard problem! If you have fake data, you could ask which puts the highest probabilities on the true answer. Or you could ask which does the best when used in Bayesian decision theory, with some actions (decisions) and some utilities, or a bag of actors with different utilities. After all, different kinds of mistakes cost different actors different amounts! But then how do you tell which inference is best on real (astronomical) data, where you don't know what the true answer is? Is there any strategy? Something about predicting new data? Or is there something clever? I am out of my league here.
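
Here is a toy version of the fake-data case (everything here, including the fake_posteriors helper and its sharpness knob, is made up for illustration): score each method by the mean log probability it assigns to the true class, so that a method that is confidently wrong gets penalized heavily.

```python
import numpy as np

rng = np.random.default_rng(1)

# fake data: 1000 objects, 3 classes, known true classes
n, k = 1000, 3
true_class = rng.integers(0, k, size=n)

def fake_posteriors(true_class, sharpness, rng):
    """Toy 'inference output': class probabilities favoring the truth
    by an amount controlled by `sharpness`."""
    logits = rng.standard_normal((len(true_class), k))
    logits[np.arange(len(true_class)), true_class] += sharpness
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

method_a = fake_posteriors(true_class, sharpness=2.0, rng=rng)
method_b = fake_posteriors(true_class, sharpness=0.5, rng=rng)

def log_score(p, truth):
    """Mean log probability assigned to the true class (higher is better)."""
    return np.mean(np.log(p[np.arange(len(truth)), truth]))

print("method A log score:", log_score(method_a, true_class))
print("method B log score:", log_score(method_b, true_class))
```

On real data, where the truth column is missing, something like this only works if you replace the true class with held-out predictive performance, which is part of why the question is hard.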

2017-09-01

#LennartFest day 3

Today was the last day of a great meeting. Both yesterday and today there were talks about future astrometric missions, including the Gaia extension, and also GaiaNIR, SmallJasmine, and Theia. In his overview talk on the latter, Alberto Krone-Martins put a lot of emphasis on the internal monitoring systems for the design, in which there will be lots of metrology of the spacecraft structure, optics, and camera. He said the value of this was a lesson from Gaia.

This point connects strongly to things I have been working on in self-calibration. In the long run, if a survey is designed properly, it will contain enough redundancy to permit self-calibration. In this sense, the internal monitoring has no long-term value. For example, the Gaia spacecraft includes a basic-angle monitor. But in the end, the data analysis pipeline will determine the basic angle continuously, from the science data themselves. They will not use the monitor data directly in the solution. The reason is: The information about calibration latent in the science data always outweighs what's in the calibration data.
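
A toy self-calibration example (made-up numbers, nothing to do with the actual Gaia pipeline): if every exposure has an unknown zero-point and the stars are observed redundantly, a joint least squares recovers the calibration from the science data alone, with no dedicated calibration hardware in the model.

```python
import numpy as np

rng = np.random.default_rng(5)

n_stars, n_exposures = 200, 20
true_mags = rng.uniform(14.0, 18.0, n_stars)
true_zeropoints = rng.normal(0.0, 0.1, n_exposures)   # unknown per-exposure offsets

# every star observed in every exposure: lots of redundancy
noise = 0.02
obs = (true_mags[:, None] + true_zeropoints[None, :]
       + noise * rng.standard_normal((n_stars, n_exposures)))

# joint linear least squares for all star magnitudes and all zero-points;
# one extra row (sum of zero-points = 0) breaks the overall additive degeneracy
star_idx, exp_idx = np.meshgrid(np.arange(n_stars), np.arange(n_exposures), indexing="ij")
A = np.zeros((n_stars * n_exposures + 1, n_stars + n_exposures))
rows = np.arange(n_stars * n_exposures)
A[rows, star_idx.ravel()] = 1.0
A[rows, n_stars + exp_idx.ravel()] = 1.0
A[-1, n_stars:] = 1.0
b = np.append(obs.ravel(), 0.0)

fit, *_ = np.linalg.lstsq(A, b, rcond=None)
fit_zeropoints = fit[n_stars:]
centered_truth = true_zeropoints - true_zeropoints.mean()
print("rms zero-point error:", np.std(fit_zeropoints - centered_truth))
```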

That said (and Timo Prusti emphasized this to me), the internal monitoring and calibration data are very useful for diagnosing problems as they arise. So I'm not saying you shouldn't value such systems and data; I'm saying that you should still design your projects so that you don't need them at the end of the day. This is exactly how the SDSS imaging-data story played out, and it was very, very good.

I also gave my own talk at the meeting today. My slides are here. I think I surprised some part of the audience when I said that I thought we could do photometric parallax at all magnitudes without ever using any physical or numerical model of stars!

One thing I realized, as I was giving the talk, is that there is a sense in which the data-driven models make very few assumptions indeed. They assume that Gaia's geometric parallax measurements are good, and that its noise model is close to correct. But the rest is just very weak assumptions about functional forms. So there is a sense in which our data-driven model (or a next-generation one) is purely geometric. Photometric parallaxes with a purely geometric basis. Odd to think of that.

At the end of the meeting, Amina Helmi told me about vaex, which is a very fast visualization tool for large data sets, built on clever data structures. I love those!

2017-08-31

#LennartFest day 2

Many great things happened at the meeting today; way too many to mention. Steinmetz showed how good the RAVE-on results are, and nicely described also their limitations. Korn showed an example of an extremely underluminous star, and discussed possible explanations (most of them boring data issues). Brown explained that with a Gaia mission extension, the parameter inference for exoplanet orbit parameters can improve as a huge power (like 4.5?) of mission lifetime. That deserves more thought. Gerhard explained that the MW bar is a large fraction of the mass of the entire disk! Helmi showed plausible halo substructure and got me really excited about getting ready for Gaia DR2. In the questions after her talk, Binney claimed that galaxy halos don't grow primarily by mergers, not even in theory! Hobbs talked about a mission concept for a post-Gaia NIR mission (which would be incredible). He pointed out that the reference frame and stellar positions require constant maintenance; the precision of Gaia doesn't last.

One slightly (and embarrassingly) frustrating thing about the talks today was that multiple discussed open clusters without noting that we found a lot ourselves. And several discussed the local standard of rest without mentioning our value. Now of course I (officially) don't mind; neither of these are top scientific objectives for me (I don't even think the LSR exists). But it is a Zen-like reminder not to be attached to material things (like citations)!

2017-08-30

#LennartFest day 1

I broke my own rules and left #AstroHackWeek to catch up with #LennartFest. The reason for the rule infraction is that the latter meeting is the retirement celebration of Lennart Lindegren (Lund) who is one of the true pioneers in astrometry, and especially astrometry in space and at scale. My loyal reader knows his influence on me!

Talks today were somewhat obscured by my travel exhaustion. But I learned some things! Francois Mignard (Côte d'Azur) gave a nice talk on the reference frame. He started with an argument that we need a frame. I agree that we want inertial proper motions, but I don't agree that they have to be on a coordinate grid. If there is one thing that contemporary physics teaches us it is that you don't need a coordinate system. But the work being done to validate the inertial-ness of the frame is heroic, and important.

Floor van Leeuwen (Cambridge) spoke about star clusters. He hypothesized—and then showed—that proper motions can be as informative about distance as parallaxes, especially for nearby clusters. This meshes with things Boris Leistedt (NYU) and I have been talking about, and I think we can lay down a solid probabilistic method for combining these kinds of information responsibly.

Letizia Capitanio (Paris) reminded us (I guess, but it was new to me) that the Gaia RVS instrument captures a diffuse interstellar band line. This opens up the possibility that we could do kinematic dust mapping with Gaia! She also showed some competitive dust maps based on Gaussian Process inferences.

2017-08-29

#AstroHackWeek day 2

Today Jake vanderPlas (UW) and Boris Leistedt (NYU) gave morning tutorials on Bayesian inference. One of Leistedt's live exercises (done in Jupyter notebooks in real time) involved increasing model complexity in a linear fit and over-fitting. I got sucked into this example:

I made a model where y(x) is a (large) sum of sines and cosines (like a Fourier series). I used way more terms than there are data points, so over-fitting is guaranteed, in the case of maximum likelihood. I then did Bayesian inference with this model, but putting a prior on the coefficients that is more restrictive (more informative) as the wave number increases (or the wavelength decreases). This model is a well-behaved Gaussian process! It was nice to see the continuous evolution from fitting a rigid function to fitting a Gaussian process, all in just a few lines of Python.
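
Here is a minimal sketch of that exercise (not the notebook from the session; the k^{-4} prior scaling is just an illustrative choice): many more Fourier terms than data points, but a per-coefficient Gaussian prior that tightens with wave number, so the posterior mean prediction stays smooth instead of over-fitting.

```python
import numpy as np

rng = np.random.default_rng(3)

# a few noisy data points
n_data = 15
x = np.sort(rng.uniform(0, 1, n_data))
y = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x) + 0.1 * rng.standard_normal(n_data)
sigma_y = 0.1

# design matrix: many more sine/cosine terms than data points
n_terms = 100
ks = np.arange(1, n_terms + 1)
def design(x):
    return np.concatenate([np.sin(2 * np.pi * np.outer(x, ks)),
                           np.cos(2 * np.pi * np.outer(x, ks))], axis=1)
A = design(x)

# Gaussian prior on the coefficients, tighter at higher wave number k;
# this prior is what makes the model a well-behaved Gaussian process
prior_var = np.tile(ks.astype(float) ** -4, 2)

# posterior mean for the coefficients (weighted ridge regression)
ATA = A.T @ A / sigma_y**2
mean_coeff = np.linalg.solve(ATA + np.diag(1.0 / prior_var),
                             A.T @ y / sigma_y**2)

x_grid = np.linspace(0, 1, 200)
y_grid = design(x_grid) @ mean_coeff   # smooth, non-over-fit prediction
print(y_grid[:5])
```

The per-coefficient prior variances play the role of the spectral density of the implied Gaussian-process kernel, which is why tuning how fast they fall with wave number moves you continuously from a rigid fit to a flexible-but-regularized one.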

2017-08-28

#AstroHackWeek day 1

AstroHackWeek 2017 kicked off today, with Adrian Price-Whelan (Princeton) and me doing a tutorial on machine learning. We introduced a couple of ideas and simple methods, and then we set ten (randomly assigned) groups working on five methods on two different data sets. We didn't get very far! But we tried to get the discussion started.

In the afternoon hacking sessions, Price-Whelan and I looked at some suspicious equations in the famous Binney & Tremaine book, 2ed. In Chapter 4, there are lots of integrals of phase-space densities and we developed an argument that some of these equations must have wrong units. We can't be right—Binney & Tremaine can't be wrong—because all of Chapter 4 follows from these equations. But we don't see what we are doing wrong.

[Note added later: We were wrong and B&T is okay; but they don't define their distribution functions very clearly!]