Deutsch's Algorithm: A Quantum Way to Test Functions

Introduction

Deutsch's algorithm is one of the simplest and earliest examples of a quantum algorithm that outperforms its classical counterpart. It was proposed by David Deutsch in 1985 as a way to demonstrate the power of quantum computation and the limitations of classical computation.

The Problem

Suppose you are given a black box that implements a function f that takes one bit as input and returns one bit as output. The function f is either constant (meaning f(0) = f(1)) or balanced (meaning f(0) ≠ f(1)). Your task is to determine whether f is constant or balanced by querying the black box.

A classical algorithm would need to query the black box twice, once with input 0 and once with input 1, to determine the nature of f. However, a quantum algorithm can do it with only one query, using the power of superposition and interference.
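To make the classical baseline concrete, here is a quick Python sketch (illustrative only; the function names are invented for the example) of the two-query classical test applied to all four possible one-bit functions:

```python
def classify_classically(f):
    # Two queries: evaluate f at both inputs, then compare.
    return "constant" if f(0) == f(1) else "balanced"

# The four possible functions from one bit to one bit
always_zero = lambda x: 0
always_one  = lambda x: 1
identity    = lambda x: x
negation    = lambda x: 1 - x

for f in (always_zero, always_one, identity, negation):
    print(classify_classically(f))
# prints: constant, constant, balanced, balanced
```

No single classical query can distinguish the cases: seeing f(0) = 0, for example, is consistent with both a constant and a balanced function.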

The Algorithm

The quantum algorithm for solving this problem is as follows:

  1. Prepare two qubits, the first in the state |0⟩ and the second in the state |1⟩, so the joint state is |01⟩.

  2. Apply a Hadamard gate to both qubits. This puts the pair into the state 1/2(|0⟩ + |1⟩) ⊗ (|0⟩ - |1⟩) = 1/2(|00⟩ - |01⟩ + |10⟩ - |11⟩).

  3. Apply the oracle U_f, which maps |x⟩|y⟩ to |x⟩|y ⊕ f(x)⟩: the second qubit is flipped if and only if f of the first qubit's value is 1. For a one-bit function, this oracle can be built from an X gate on the second qubit (applied when f(0) = 1) followed by a controlled-NOT from the first qubit to the second (applied when f(0) ≠ f(1)).

  4. Apply a Hadamard gate to the first qubit only.

  5. Measure the first qubit. If the outcome is |0⟩, then f is constant; if the outcome is |1⟩, then f is balanced.
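The steps above can be traced with a small state-vector simulation. This is a NumPy sketch for illustration, not part of the Q# implementation; the helper names are my own, and the first qubit is taken as the most significant bit:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
X = np.array([[0, 1], [1, 0]])                # Pauli-X (NOT) gate
I = np.eye(2)

def oracle(f):
    # U_f maps |x⟩|y⟩ to |x⟩|y ⊕ f(x)⟩; basis index is 2*x + y
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.zeros(4); state[0] = 1  # start in |00⟩...
    state = np.kron(I, X) @ state      # ...flip the second qubit: |01⟩
    state = np.kron(H, H) @ state      # Hadamard both qubits
    state = oracle(f) @ state          # apply U_f
    state = np.kron(H, I) @ state      # Hadamard the first qubit
    # probability that the first qubit measures as |1⟩
    p_one = state[2] ** 2 + state[3] ** 2
    return "balanced" if p_one > 0.5 else "constant"

print(deutsch(lambda x: x))  # prints "balanced"
print(deutsch(lambda x: 0))  # prints "constant"
```

The measurement outcome is deterministic: the probability of reading |1⟩ is exactly 0 for a constant function and exactly 1 for a balanced one.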

The Explanation

To understand why this algorithm works, let's look at how the state of the two qubits evolves throughout the algorithm. We start with |01⟩ and apply Hadamard gates to both qubits:

|01⟩ → 1/2(|0⟩ + |1⟩) ⊗ (|0⟩ - |1⟩) = 1/2(|00⟩ - |01⟩ + |10⟩ - |11⟩)

Then we apply the oracle U_f, which flips the second qubit exactly when f of the first qubit's value is 1. The key observation is what U_f does to a term of the form |x⟩ ⊗ (|0⟩ - |1⟩):

|x⟩ ⊗ (|0⟩ - |1⟩) → |x⟩ ⊗ (|0 ⊕ f(x)⟩ - |1 ⊕ f(x)⟩) = (-1)^f(x) |x⟩ ⊗ (|0⟩ - |1⟩)

where ⊕ denotes addition modulo 2 (or XOR). If f(x) = 0, nothing changes; if f(x) = 1, the two components of the second qubit swap places, which is the same as multiplying the whole term by -1. This effect is called phase kickback: the value of f(x) is deposited as a sign in front of |x⟩, while the second qubit stays in the state 1/√2(|0⟩ - |1⟩). The state after the oracle is therefore:

1/2((-1)^f(0)|0⟩ + (-1)^f(1)|1⟩) ⊗ (|0⟩ - |1⟩)

Factoring out the overall sign (-1)^f(0), which has no observable effect, the first qubit is in one of two states:

  • If f is constant (f(0) = f(1)): 1/√2(|0⟩ + |1⟩)

  • If f is balanced (f(0) ≠ f(1)): 1/√2(|0⟩ - |1⟩)

Next, we apply a Hadamard gate to the first qubit only. This maps:

  • 1/√2(|0⟩ + |1⟩) → |0⟩

  • 1/√2(|0⟩ - |1⟩) → |1⟩

So, up to an overall sign, the first qubit ends in the state |f(0) ⊕ f(1)⟩. The state of the second qubit is irrelevant for the measurement of the first qubit.

Finally, we measure the first qubit. If we get |0⟩, then we know that f(0) ⊕ f(1) = 0, which means that f is constant. If we get |1⟩, then we know that f(0) ⊕ f(1) = 1, which means that f is balanced.
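The identity U_f(|x⟩ ⊗ (|0⟩ - |1⟩)) = (-1)^f(x) |x⟩ ⊗ (|0⟩ - |1⟩), i.e. phase kickback, is the heart of the derivation, and it can be checked numerically. This is a NumPy sketch for illustration; the oracle construction is the standard |x⟩|y⟩ → |x⟩|y ⊕ f(x)⟩ unitary:

```python
import numpy as np

def oracle(f):
    # U_f maps |x⟩|y⟩ to |x⟩|y ⊕ f(x)⟩; basis index is 2*x + y
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

minus = np.array([1, -1]) / np.sqrt(2)  # the state 1/√2(|0⟩ - |1⟩)

# Check the identity for all four one-bit functions and both inputs
for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
    for x in (0, 1):
        ket_x = np.eye(2)[x]               # |x⟩ as a vector
        before = np.kron(ket_x, minus)     # |x⟩ ⊗ 1/√2(|0⟩ - |1⟩)
        after = oracle(f) @ before
        # U_f only multiplies the state by (-1)^f(x)
        assert np.allclose(after, (-1) ** f(x) * before)
print("phase kickback verified")
```

Note that the second register is left unchanged in every case; all the information about f ends up in the sign attached to |x⟩.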

The Implementation

To implement this algorithm in Q#, we define a quantum operation that takes the oracle as input and returns a Boolean value indicating whether the function it represents is constant or balanced. The oracle must have the signature (Qubit, Qubit) => Unit, meaning that it takes two qubits as inputs (the function's input and the target) and returns nothing. It must also be reversible; in Q# this is declared with the is Adj characteristic, which lets the compiler generate the operation's inverse.

Here is one possible implementation of the Deutsch algorithm in Q#:

operation ApplyDeutschsAlgorithm(oracle : ((Qubit, Qubit) => Unit is Adj)) : Bool {
    using (qubits = Qubit[2]) {
        // Prepare |0⟩|1⟩, then apply Hadamards to both qubits
        X(qubits[1]);
        H(qubits[0]);
        H(qubits[1]);

        // Apply the oracle U_f : |x⟩|y⟩ → |x⟩|y ⊕ f(x)⟩
        oracle(qubits[0], qubits[1]);

        // Apply the Hadamard gate to the first qubit to interfere the two branches
        H(qubits[0]);

        // Measure the first qubit
        let result = M(qubits[0]);

        // Reset the qubits to |0⟩ before they are released
        ResetAll(qubits);

        // If the result is Zero, then the function is constant, otherwise it's balanced
        return result == Zero;
    }
}

This operation takes an oracle as input, which is a quantum operation that implements the function you’re testing. The oracle takes two qubits: the first carries the input x, and the second is the target, which it flips if and only if f(x) = 1. The is Adj part of the signature means that the oracle must be adjointable, i.e., reversible.

Here’s an example of how to use this operation with a constant function:

operation ConstantOracle(x : Qubit, y : Qubit) : Unit is Adj {
    // f(0) = f(1) = 0: the target is never flipped, so the function is constant
}

operation TestConstantOracle() : Bool {
    return ApplyDeutschsAlgorithm(ConstantOracle);
}

And here’s an example with a balanced function:

operation BalancedOracle(x : Qubit, y : Qubit) : Unit is Adj {
    // f(x) = x: the target is flipped exactly when x is |1⟩, so the function is balanced
    CNOT(x, y);
}

operation TestBalancedOracle() : Bool {
    return ApplyDeutschsAlgorithm(BalancedOracle);
}

In both cases, you can run the test operation to see if Deutsch’s algorithm correctly identifies the function as constant or balanced. Remember, the algorithm returns true if the function is constant and false if it’s balanced. So TestConstantOracle should return true and TestBalancedOracle should return false.

Conclusion

In this article, we have seen how Deutsch’s algorithm determines whether a function is constant or balanced with only one query to a quantum black box, while a classical algorithm needs two queries. We have explained the intuition behind the algorithm, the steps involved, and an implementation in Q#, and we have seen how it illustrates the power of quantum computation relative to classical computation. Deutsch’s algorithm is one of the simplest and earliest examples of a quantum algorithm, but it is also a precursor to more advanced and useful ones, such as the Deutsch-Jozsa, Simon, and Shor algorithms. These algorithms exploit the quantum phenomena of superposition and interference to achieve dramatic speedups over classical algorithms for certain problems.