

Quantum computing with minimal resources

Photonics offers unique advantages as a substrate for quantum information processing, but poses fundamental scalability challenges. Nondeterministic schemes carry massive resource overheads, while deterministic schemes require prohibitively many identical quantum emitters to realize sizeable quantum circuits. In this paper, we propose a scalable architecture for a photonic quantum computer which needs only a minimal quantum resource to implement any quantum circuit: a single coherently controlled atom. Optical switches endow a photonic quantum state with a synthetic time dimension by modulating photon-atom couplings. Quantum operations applied to the atomic qubit can be teleported onto the photonic qubits via projective measurement, and arbitrary quantum circuits can be compiled into a sequence of these teleported operators. This design eliminates the need to integrate many identical quantum emitters into a photonic circuit and allows effective all-to-all connectivity between photonic qubits. The proposed device has a machine size independent of quantum circuit depth, does not require single-photon detectors, operates deterministically, and is robust to experimental imperfections.
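To make the teleported-operator idea concrete, here is a minimal numpy sketch of textbook one-bit gate teleportation, loosely analogous to (but not taken from) the paper's mechanism: a Z-rotation applied to an "atom" qubit lands on a second qubit after an entangling CZ gate and a projective measurement, up to a known Pauli byproduct that the measurement outcome tells us how to undo.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]])                 # Pauli X
I2 = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])            # controlled-Z entangling gate

def rz(theta):
    return np.diag([1.0, np.exp(1j * theta)])

def teleport_rz(psi, theta, outcome):
    """One-bit gate teleportation of Rz(theta). Qubit A ("atom") holds
    the input state, qubit B starts in |+>. After CZ, a rotation plus
    Hadamard on A, and a projective measurement of A with the given
    outcome, B is left in X^outcome H Rz(theta)|psi>; the byproduct
    is undone classically using the measurement record."""
    plus = np.ones(2) / np.sqrt(2)
    state = np.kron(psi, plus)                  # joint state, A (x) B
    state = CZ @ state                          # entangle the two qubits
    state = np.kron(H @ rz(theta), I2) @ state  # gate applied only on A
    amps = state[2 * outcome: 2 * outcome + 2]  # B's amplitudes given A = outcome
    amps = amps / np.linalg.norm(amps)
    correction = H @ np.linalg.matrix_power(X, outcome)
    return correction @ amps
```

Either measurement outcome yields the same corrected state, which is what makes the scheme deterministic: no post-selection is needed, only feed-forward of the outcome.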

Photonic quantum programmable gate arrays

Watch my talk at CLEO 2020 on the results from our paper here!

Photonic systems have many unique advantages for quantum information processing, but deterministic multi-photon gates are difficult to implement, and complex quantum circuits can be prohibitively large to realize with free-space optics, since processing happens along the photon's path. In this paper, we present an architecture for a photonic quantum programmable gate array (QPGA) which can be dynamically programmed to implement any quantum operation, in principle deterministically and with perfect fidelity. Our photonic integrated circuit architecture consists of a lattice of beamsplitters and phase shifters, which perform rotations on path-encoded photonic qubits, and embedded quantum emitters, which use a two-photon scattering process to implement two-qubit controlled gates deterministically. We show how to exactly prepare arbitrary quantum states and operators on the device, and we apply machine learning techniques to automatically implement highly compact approximations to important quantum circuits. Our design is, to our knowledge, the first to extend programmable integrated optics to the quantum domain in a manner that is both deterministic and spatially efficient, and ongoing advances in nanophotonic processors and strongly coupled quantum emitters may make a near-term implementation of our design feasible.
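The building block of such a beamsplitter lattice is the Mach-Zehnder interferometer: two 50:50 beamsplitters sandwiching programmable phase shifters. A small numpy sketch (one common parameterization among several) shows how the internal phase steers the block continuously between the "cross" and "bar" states, which is what makes the mesh programmable:

```python
import numpy as np

# 50:50 beamsplitter acting on two waveguide modes
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def phase(phi):
    """Phase shifter on the upper arm."""
    return np.diag([np.exp(1j * phi), 1.0])

def mzi(theta, phi):
    """Mach-Zehnder interferometer: phase shifter, beamsplitter,
    internal phase shifter, beamsplitter. theta sets the effective
    splitting ratio; phi sets a relative output phase."""
    return BS @ phase(theta) @ BS @ phase(phi)
```

With theta = 0 all light crosses between the two waveguides; with theta = pi it stays put (the "bar" state); intermediate values give arbitrary splitting ratios, so a lattice of these blocks can realize any unitary on the path-encoded modes.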

Nanophotonic neural networks

Optical systems are a promising hardware platform for fast and energy-efficient machine learning. A neural network implemented with nanophotonic components requires virtually no energy to operate, produces no waste heat, and can process data at feedthrough rates hundreds of times faster than electronic systems. I developed neuroptica, a popular photonic neural network simulation library. I co-authored a paper addressing architectural and training difficulties with this class of optical devices, and helped to design a new type of physical electro-optic activation function for use in these networks, which has since been experimentally realized.
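The flavor of such a network can be sketched in a few lines of numpy. This is an illustrative stand-in, not neuroptica's API: a unitary matrix plays the role of the programmed beamsplitter mesh, and a made-up intensity-dependent phase shift stands in for the electro-optic activation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """Random unitary via QR decomposition -- a stand-in for the
    transfer matrix of a programmed beamsplitter/phase-shifter mesh."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(a)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def eo_activation(z, alpha=0.1):
    """Toy electro-optic nonlinearity: part of the optical power is
    tapped off and fed back as an intensity-dependent phase shift.
    (Illustrative only, not the activation function from the paper.)"""
    return z * np.exp(-1j * alpha * np.abs(z) ** 2)

def layer(x, U):
    """One optical layer: linear mixing followed by the nonlinearity."""
    return eo_activation(U @ x)
```

Note that both stages here are lossless: the unitary preserves total optical power and the phase-only activation preserves each mode's amplitude, which is the sense in which such a network dissipates essentially no energy in the optical domain.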

Computing photon scattering in arbitrary quantum optical systems

I wrote the scattering module for QuTiP (the Quantum Toolbox in Python). This module uses recent advances in quantum optical theory to solve the generalized problem of computing the scattered state of a driven arbitrary quantum system. I made modifications to the theoretical framework that reduce the space complexity of computing the evolution of a quantum system emitting a fixed number of photons from exponential to polynomial, making the calculations that my module performs tractable. The scattering module was a major feature in the v4.3 release of QuTiP. I also wrote an instructional notebook to demonstrate the capabilities of my module, which is posted on the QuTiP website.
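A back-of-the-envelope counting argument (not the module's internals) illustrates why fixing the photon number matters so much. If each of T coarse-grained time bins can independently contain an emission or not, the state space scales as 2^T; restricting to exactly n emissions leaves only "T choose n" configurations, which grows polynomially in T for fixed n:

```python
from math import comb

def naive_states(time_bins):
    """Each time bin independently occupied or empty: exponential."""
    return 2 ** time_bins

def fixed_emission_states(time_bins, n_photons):
    """Only configurations with exactly n_photons emissions:
    binomial(T, n), i.e. polynomial in T for fixed n."""
    return comb(time_bins, n_photons)
```

For 100 time bins and two emitted photons, the restricted space has 4,950 states versus roughly 1.3e30 for the naive one, which is the difference between a tractable and an intractable calculation.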

A distributed simulation framework for quantum networks and channels

SQUANCH (Simulator for Quantum Networks and CHannels) is an open-source Python framework for creating performant and parallelized simulations of distributed quantum information processing. Although it can be used as a general-purpose quantum computing simulation library, SQUANCH is designed specifically for simulating quantum networks, acting as a sort of "quantum playground" to test ideas for quantum networking protocols. The package includes flexible modules that allow you to easily design and simulate complex multi-party quantum networks, extensible classes for implementing quantum and classical error models for testing error correction protocols, and a multi-threaded framework for manipulating quantum information efficiently. The whitepaper introducing the framework is available on arXiv.
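As a taste of the kind of error model such a framework provides, here is a minimal numpy version (not SQUANCH's API) of a depolarizing channel acting on one half of a Bell pair, together with the resulting entanglement fidelity:

```python
import numpy as np

PAULIS = [np.eye(2),
          np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]

def bell_state():
    """Density matrix of the Bell state (|00> + |11>)/sqrt(2)."""
    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)
    return np.outer(psi, psi)

def depolarize_qubit0(rho, p):
    """Depolarizing channel on the first qubit: with probability p,
    a uniformly random Pauli error X, Y, or Z is applied."""
    out = (1 - p) * rho
    for P in PAULIS[1:]:
        K = np.kron(P, np.eye(2))
        out = out + (p / 3) * (K @ rho @ K.conj().T)
    return out

def fidelity_with_bell(rho):
    """Overlap of rho with the ideal Bell state."""
    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)
    return np.real(psi @ rho @ psi)
```

Every single-qubit Pauli error maps the Bell state to an orthogonal Bell state, so the fidelity degrades linearly as 1 - p; this is exactly the kind of channel behavior one would verify before layering an error correction protocol on top.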

Functional testing for the LSST camera system

The Large Synoptic Survey Telescope (LSST) is a wide-field survey telescope, currently under construction in Chile, that will generate 15 terabytes of astronomical data each night of its operation. When it is completed in 2019, it will have the world's largest CCD camera array at 3.2 gigapixels.
Working with SLAC National Accelerator Laboratory, I authored a functional testing system and accompanying documentation for the camera readout and control boards, which need to operate reliably across a wide range of conditions and temperatures. The testing system checks that all of the low-level board components are working as intended while thermocycling the boards. There are 18 primary tests, which examine the board communication capabilities, the serial and parallel clocks, the output gate/drain and guard/reset drains, and the camera ASPIC noise levels as they respond to changes in temperature. When the design for the readout boards is finalized at Brookhaven later this year and the primary production run is finished, the testing system will be used to check the functionality of every readout board prior to installation in the telescope.
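The overall shape of such a thermocycling harness can be sketched in a few lines. All names here are hypothetical, not from the actual SLAC codebase: the idea is simply to sweep temperature setpoints, run every board test at each one, and record pass/fail results, treating any exception as a failure.

```python
def run_thermocycle(tests, setpoints, set_temperature):
    """Run each board test at every temperature setpoint.

    tests: dict mapping test name -> callable(temperature) -> bool
    setpoints: iterable of temperatures to cycle through
    set_temperature: callable that drives the thermal chamber
    Returns a dict keyed by (test name, temperature) -> pass/fail.
    (Hypothetical sketch of the harness structure, not the real code.)
    """
    results = {}
    for temp in setpoints:
        set_temperature(temp)
        for name, test in tests.items():
            try:
                results[(name, temp)] = bool(test(temp))
            except Exception:
                # a crashed test is a failed test, and the cycle continues
                results[(name, temp)] = False
    return results
```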

Vertex reconstruction with precision timing data

The Phase-II upgrades to the Large Hadron Collider will introduce a variety of new measurement devices to the CMS detector, including the High-Granularity Calorimeter (HGCAL), and will increase the beam luminosity by an order of magnitude. However, this increased luminosity will also increase pileup: events in which multiple collisions happen so close together in space and time that it is difficult to associate the detected particles with their original collision location ("vertex"). A high pileup environment poses particular challenges to distinguishing the two major production mechanisms for the Higgs boson, gluon fusion and vector boson fusion, reducing the accuracy of the current boosted decision tree vertex reconstruction algorithms from >90% to around 30%. Separating the two production mechanisms is of crucial importance for precisely understanding electroweak symmetry breaking.

Using high precision timing measurements on the order of 10 picoseconds from simulated events in the HGCAL, we designed a vertex reconstruction algorithm that requires only the spatiotemporal arrival coordinates of the detected particles to reconstruct the interaction vertex of a collision with a median resolution of 235 micrometers, approximately 162 times better than the original proof-of-concept algorithm from 2012. To do this, we implemented a set of filters to detect poorly reconstructed events and designed a helper algorithm capable of inferring likely interaction vertices given the structural "pointing" data of only a single cluster. We authored a CMS analysis note (mirror) detailing the development and results of our algorithm.
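The core geometric idea can be demonstrated with a toy version of the fit (a simplified illustration, not the analysis-note algorithm): if all particles leave a common beamline vertex at a common time, then for the correct vertex the quantities t_i - d_i/c are all equal, so we scan candidate vertices and keep the one minimizing the spread of inferred emission times.

```python
import numpy as np

C = 0.299792458  # speed of light in m/ns

def fit_vertex(hits, times, z_grid):
    """Toy timing-only vertex fit. For each candidate beamline vertex
    (0, 0, z), the best common emission time t0 is the mean of
    t_i - d_i/C over all hits; the best vertex minimizes the summed
    squared timing residuals."""
    best = (np.inf, None, None)
    for z in z_grid:
        d = np.linalg.norm(hits - np.array([0.0, 0.0, z]), axis=1)
        t0 = np.mean(times - d / C)
        resid = np.sum((times - t0 - d / C) ** 2)
        if resid < best[0]:
            best = (resid, z, t0)
    return best[1], best[2]
```

With noiseless synthetic hits this recovers the true vertex to the resolution of the scan grid; with realistic ~10 ps timing smearing the residual minimum broadens, which is why event filtering and cluster "pointing" information become important in practice.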

Evolution of the length of day

The length of Earth's day has increased from about 7 hours to its current 24-hour value over the last 4.5 billion years. However, geological records indicate that for about a billion years during the Precambrian era, the deceleration of Earth's rotation stalled at a day length of about 21 hours, before resuming about 650 Myr ago.
To understand how this phenomenon of an unchanging day length began and ended, we developed a mathematical model describing how the competing torques of the thermally-driven atmospheric tide and the gravitationally-driven lunar tide affected the evolution of Earth's day length. During the Precambrian era, Earth's decelerating rotation would have passed through a 21-hour day length resonant with the semidiurnal atmospheric thermal tide. Near this point, the atmospheric torque would have been maximized, becoming comparable in magnitude but opposite in direction to the lunar torque, halting Earth's rotational deceleration and maintaining a constant day length.
We developed a computational model to determine necessary conditions for formation and breakage of this resonant effect. Our simulations showed the resonance to be resilient to atmospheric thermal noise but suggested that a sudden large atmospheric temperature increase similar in magnitude and duration to the deglaciation period following a snowball Earth event near the end of the Precambrian would break the resonance, allowing the length of day to resume increasing to its current value. Using the known durations and magnitudes of snowball events throughout geological history, our model generated a simulated day length evolution that closely resembles existing paleorotational data, tentatively identifying the Sturtian glaciation as the event which broke the resonant effect. We published our findings in Geophysical Research Letters (arXiv mirror) in 2016.
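The qualitative behavior of the torque balance can be captured in a toy model (with made-up coefficients, not the tidal torque model from the paper): the lunar tide lengthens the day at a roughly constant rate, while the atmospheric torque opposes it with a peak at the 21-hour resonance. If the peak exceeds the lunar torque, the day length gets trapped just short of resonance; if it does not, the day length passes through and keeps growing.

```python
def day_length_rate(L, lunar=1.0, atm_peak=2.0, L_res=21.0, width=0.5):
    """Net rate of change of day length L (arbitrary units).
    The atmospheric torque is modeled as a Lorentzian peak at the
    resonant day length L_res; all coefficients are illustrative."""
    lorentzian = width ** 2 / ((L - L_res) ** 2 + width ** 2)
    return lunar - atm_peak * lorentzian

def evolve(L0, steps=4000, dt=0.01, **kwargs):
    """Forward-Euler integration of the day length."""
    L = L0
    for _ in range(steps):
        L += dt * day_length_rate(L, **kwargs)
    return L
```

In this toy model the trapped trajectory settles at the stable equilibrium where the two torques balance (just below 21 hours), and weakening the atmospheric peak, loosely analogous to a post-snowball temperature transient detuning the resonance, lets the day length escape and resume lengthening.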