Introduction to QAE and QCompress

What is a quantum autoencoder (QAE)?

[Figure: qae_image.png (schematic of a quantum autoencoder)]

Similar to the idea of classical autoencoders, a quantum autoencoder is a function whose parameters are optimized over a training data set such that, given an n-qubit input |x⟩, the autoencoder attempts to reproduce |x⟩. Part of this process involves expressing the input data set using a smaller number of qubits (m qubits out of n, where m < n). This means that if the QAE is successfully trained, the corresponding circuit represents a compressed encoding of the input |x⟩, which may be useful for applications such as dimension reduction of quantum data. For a more in-depth explanation of the QAE, please refer to the original paper by Romero et al. In addition, we note that this is one possible realization of a “quantum” autoencoder and that there are other proposed models for the quantum autoencoder.
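As a concrete toy illustration of why such compression can work (a plain numpy sketch, not part of QCompress), consider a family of two-qubit product states whose first qubit is always the same: all of the variation lives on the second qubit, so an n = 2, m = 1 compression exists, and undoing the common rotation leaves the first qubit disentangled in |0⟩:

```python
import numpy as np

def ry(theta):
    # Single-qubit RY(theta) rotation matrix
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

zero = np.array([1.0, 0.0])

# Toy data set: two-qubit states whose first qubit is always RY(0.7)|0>.
# Only the second qubit varies, so one qubit suffices to describe each
# state once the common factor is known.
data = [np.kron(ry(0.7) @ zero, ry(beta) @ zero) for beta in (0.1, 0.9, 1.6)]

# Undoing the common rotation maps every data state to |0> on the first
# qubit, which then carries no information (it becomes a "trash" qubit).
undo = np.kron(ry(-0.7), np.eye(2))
for state in data:
    out = undo @ state
    prob_first_qubit_zero = out[0]**2 + out[1]**2
    print(round(prob_first_qubit_zero, 6))  # 1.0 for every state
```

A QAE learns such a disentangling circuit from data rather than being given it.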

QAE model in QCompress

We note that our setup of the quantum autoencoder in QCompress is different from what was proposed by Romero et al. In the original paper, the protocol includes a SWAP test to measure the overlap between the “reference” and “trash” states. However, implementing the SWAP test is generally expensive for today’s quantum processors. Instead, we implement two alternative training schemes, described in Sim et al.

Before going into the details, we use the following naming conventions for the types of qubits involved in the QAE model in QCompress:

[Figure: qae_structure.png (qubit naming conventions for the QAE model in QCompress)]

In the current version of QCompress, there are two main training schemes:

  1. Halfway training (or trash training) - In this scheme, we execute only the state preparation followed by the training circuit and estimate the probability of measuring all 0’s on the “trash” qubits (i.e. the input qubits that are not latent space qubits).
  2. Full training - In this scheme, we execute the entire circuit (state preparation, training, un-training, and un-state preparation) and estimate the probability of measuring all 0’s on the “output” qubits. There are two possible sub-strategies:

2a. Full training with reset: Using the RESET feature in pyQuil, we reset the trash qubits (i.e. the input qubits except the latent space qubits) so that they can serve as the refresh qubits in the latter half of the QAE circuit. Therefore, in total, this method requires n qubits.

2b. Full training without reset: Without the reset feature, we introduce n - m new qubits to serve as the refresh qubits. Therefore, in total, this method requires n + (n - m) = 2n - m qubits.
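The qubit counts implied by the descriptions above can be summarized in a small helper (the function name and scheme labels below are our own, not QCompress API; n is the number of input qubits and m the number of latent space qubits):

```python
def total_qubits(scheme, n, m):
    # Total physical qubits needed for each training scheme,
    # for n input qubits and m latent space qubits (m < n).
    if scheme == "halfway":
        return n                # state preparation + training circuit only
    if scheme == "full_with_reset":
        return n                # trash qubits are reset and reused as refresh qubits
    if scheme == "full_without_reset":
        return n + (n - m)      # n - m fresh refresh qubits are introduced
    raise ValueError(f"unknown scheme: {scheme}")

print(total_qubits("full_without_reset", 4, 1))  # 7
```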

NOTE: For the loss function, we average the losses over the training set and negate the value to cast training as a minimization problem.
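To make the halfway (trash) loss concrete, here is a minimal numpy sketch for n = 2, m = 1, assuming a toy one-parameter training circuit (a single RY rotation on the trash qubit; none of these function names are QCompress API):

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation matrix
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def trash_zero_prob(state, theta):
    # Toy "training circuit": RY(theta) on the trash qubit (first of two
    # qubits), then the probability of measuring |0> on that qubit.
    out = np.kron(ry(theta), np.eye(2)) @ state
    return out[0]**2 + out[1]**2

def halfway_loss(training_states, theta):
    # Average the per-state success probabilities over the training set,
    # then negate so that training becomes a minimization problem.
    return -np.mean([trash_zero_prob(s, theta) for s in training_states])

zero = np.array([1.0, 0.0])
# Training set: trash qubit always rotated by 0.7, latent qubit varies.
states = [np.kron(ry(0.7) @ zero, ry(b) @ zero) for b in (0.2, 1.1)]
print(round(halfway_loss(states, -0.7), 6))  # -1.0: the trash qubit is perfectly emptied
```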

Overview of QCompress

Here, we provide a high-level overview of how to prepare and execute an instance of the QAE algorithm using QCompress. The major steps involved are:

  1. Prepare quantum data: generate a state preparation circuit for each data point in the data set.
  2. Select a parametrized circuit to train the QAE.
  3. Initialize the QAE instance.
  4. Set up the Forest connection: this is where the user decides whether to execute the instance on a simulator or on an actual quantum device via Rigetti’s QCS.
  5. Split data set into training and test sets.
  6. Set initial guess for the parameters, and train the QAE.
  7. Evaluate the QAE performance by predicting against the test set.
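The steps above can be mimicked end-to-end in a self-contained toy sketch (plain numpy, not the QCompress API; the "circuit" has a single hypothetical RY parameter and the "optimizer" is a simple grid scan):

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation matrix
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

zero = np.array([1.0, 0.0])

def loss(states, theta):
    # Halfway-style loss: negated mean probability of |0> on the trash qubit
    u = np.kron(ry(theta), np.eye(2))
    probs = [((u @ s)[0])**2 + ((u @ s)[1])**2 for s in states]
    return -np.mean(probs)

# 1. Prepare quantum data: two-qubit states, trash qubit fixed at RY(0.7)
data = [np.kron(ry(0.7) @ zero, ry(b) @ zero) for b in np.linspace(0.0, 1.5, 8)]

# 5. Split the data set into training and test sets
train, test = data[:6], data[6:]

# 6. "Train": scan the single circuit parameter and keep the best value
grid = np.linspace(-np.pi, np.pi, 201)
best_theta = min(grid, key=lambda t: loss(train, t))

# 7. Evaluate performance on the held-out test set
print(round(best_theta, 2), round(loss(test, best_theta), 3))
```

In QCompress the parametrized circuit has many parameters and a classical optimizer replaces the grid scan, but the train/evaluate structure is the same.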

Examples

We provide several Jupyter notebooks to demonstrate the utility of QCompress. We recommend going through the notebooks in the order shown in the table (top-down).

qae_h2_demo.ipynb
  Simulates the compression of the ground states of the hydrogen molecule. Uses OpenFermion and grove to generate the data. Demonstrates the “halfway” training scheme.

qae_two_qubit_demo.ipynb
  Simulates the compression of a two-qubit data set. Outlines how to run an instance on an actual device. Demonstrates the “full with reset” training scheme.

run_landscape_scan.ipynb
  Shows the user how to run landscape scans for small (few-parameter) instances. Demonstrates the setup of the “full with no reset” training scheme.

How to cite QCompress

When using QCompress for research projects, please cite:

Sukin Sim, Yudong Cao, Jonathan Romero, Peter D. Johnson and Alán Aspuru-Guzik. A framework for algorithm deployment on cloud-based quantum computers. arXiv:1810.10576. 2018.