Distributed memory
But the entanglement takes longer than the memory holds its state.
If we solve those problems, however, quantum memory offers some rather unusual properties. The process of writing to quantum memory is very similar to the process of quantum teleportation, meaning the contents of memory can potentially be transmitted between different computing facilities. And since the storage device is a quantum object, there’s the possibility that two qubits of memory in different locations can be entangled, essentially de-localizing the qubit’s value and spreading it between the two facilities.
In a demonstration of that promise, Chinese researchers have entangled quantum memories at facilities more than 23 kilometers apart. Separately, they have also performed the entanglement with photons that traveled through kilometers of optical cable. But the process of transmitting and entangling comes with an unfortunate side effect: it takes so long that the memory typically loses its coherence in the meantime.
Quantum city
The basic outlines of the experiment are pretty straightforward for a process that’s somewhat mind-bending. The qubits being used here are small clouds of cold atoms (about a hundred million atoms for each). They are placed in a state where the atoms are indistinguishable from a quantum perspective and thus can be treated as a single quantum object. Because a quantum state will be distributed across all the atoms simultaneously, this provides a bit more stability than other forms of quantum memory. The atom cloud’s state is read and written using photons, and the atoms are placed in an optical cavity that traps these photons. This ensures that the photons have many opportunities to interact with the atom cloud, increasing the efficiency of operations.
When the memory’s state is set by a write photon, the atomic collective emits a second photon that signals success. The polarization of this photon carries information about the state of the atoms, so it serves as a tool for entangling the memory.
Unfortunately, that photon is at a wavelength that isn’t very useful, in that it tends to get lost during transmission. So the researchers sacrificed a bit of efficiency for a lot of utility. They used a device that shifts the wavelength of the photons from the near infrared to the wavelengths used in standard communications fibers. About 50 percent of the photons were lost in the conversion, but the remaining ones can be transmitted with high efficiency across existing fiber networks (provided the right hardware is put in place where the fiber ends).
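To see why trading half the photons for a telecom wavelength is worth it, compare fiber attenuation at the two wavelengths. The sketch below uses typical textbook attenuation values (roughly 3 dB/km for near-infrared light in silica fiber versus about 0.2 dB/km in the telecom band) and an illustrative distance; these numbers are assumptions, not figures from the paper:

```python
# Fiber transmission falls off exponentially with length:
#   transmission = 10 ** (-attenuation_db_per_km * length_km / 10)
# Assumed attenuation: ~3 dB/km near-infrared, ~0.2 dB/km telecom band.

def transmission(attenuation_db_per_km: float, length_km: float) -> float:
    """Fraction of photons surviving a fiber of the given length."""
    return 10 ** (-attenuation_db_per_km * length_km / 10)

LENGTH_KM = 20  # illustrative distance, not the experiment's exact figure

near_ir = transmission(3.0, LENGTH_KM)        # no wavelength conversion
telecom = 0.5 * transmission(0.2, LENGTH_KM)  # 50% conversion loss, then low-loss fiber

print(f"near-IR over {LENGTH_KM} km:  {near_ir:.2e}")   # ~1e-6
print(f"telecom over {LENGTH_KM} km: {telecom:.2e}")    # ~2e-1
```

Even after paying the 50 percent conversion cost up front, the telecom-band photons come out ahead by several orders of magnitude over this distance.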
There are losses from filtering noise and getting photons into the fiber, but the entire process is over 50 percent efficient, end to end. In this case, the two ends were 18 km apart, at the University of Science and Technology of China and the Hefei Software Park.
For the entanglement, the authors created two qubits of quantum memory, generated photons from both, and sent those photons down separate cables to the Software Park. There, the photons were sent through a device that made them impossible to distinguish, entangling them. Since they, in turn, were entangled with the quantum memories that produced them, the two qubits of memory were then entangled. While those memories resided in the same lab, the geometry of the fibers could have been arbitrary; it was equivalent to entangling two bits of memory that were kilometers apart.
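The logic of that heralding step can be sketched with a toy statevector calculation: each memory starts entangled with its photon, and projecting the two photons onto a joint Bell state leaves the memories entangled with each other. This is an idealized projection, not a model of the actual interference hardware:

```python
import numpy as np

# Each memory emits a photon it is entangled with: (|00> + |11>)/sqrt(2),
# written as a 2x2 array indexed [memory, photon].
pair = np.eye(2) / np.sqrt(2)

# Joint state of (memory A, photon A, memory B, photon B).
state = np.einsum('ap,bq->apbq', pair, pair)

# Idealized measurement: project the two photons onto the Bell state
# |Phi+> = (|00> + |11>)/sqrt(2). This "erasure" of which-photon
# information is only possible because the photons are indistinguishable.
bell = np.eye(2) / np.sqrt(2)
memories = np.einsum('apbq,pq->ab', state, bell.conj())

p_success = np.sum(np.abs(memories) ** 2)  # probability of this outcome
memories /= np.sqrt(p_success)             # post-measurement memory state

print("success probability:", p_success)   # 0.25 for this ideal projection
print("memory state:\n", memories)         # (|00> + |11>)/sqrt(2): entangled
```

The surviving memory state is itself a Bell state, even though the two memories never interacted directly; only their photons met.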
That’s a big step up from the previous record of 1.4 km.
To stretch things out a bit, the researchers then turned to a long spool of cable. Two photons were sent down the cable and then manipulated so that it was impossible to determine which path they took through the cable. This again entangled them, and thus the memories that emitted the photons in the first place. The process required that the phase of the incoming photons be tracked, which is notably more difficult, and therefore dropped the overall efficiency.
For a kilometers-long fiber path, this led to some rather low efficiencies, on the order of 10⁻⁴, which means the time to achieve entanglement went up, in this case to over half a second. And that’s a problem, because the typical lifetime of a qubit stored in this memory is measured in microseconds, much shorter than the entanglement process. So the approach definitely falls into the “not quite ready for production” category.
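The mismatch is easy to quantify. With a success probability of order 10⁻⁴ per attempt, the expected entanglement time dwarfs a microseconds-scale memory lifetime. The attempt rate and lifetime below are illustrative assumptions chosen to match the half-second figure in the text, not numbers from the paper:

```python
success_prob = 1e-4        # order of magnitude from the text
attempt_rate_hz = 2e4      # assumed attempt rate, for illustration only
memory_lifetime_s = 70e-6  # assumed "microseconds-scale" lifetime

expected_attempts = 1 / success_prob                   # ~10,000 tries on average
expected_time_s = expected_attempts / attempt_rate_hz  # ~0.5 s

print(f"expected attempts: {expected_attempts:.0f}")
print(f"expected time:     {expected_time_s:.2f} s")
print(f"time / lifetime:   {expected_time_s / memory_lifetime_s:.0f}x")
```

Under these assumptions, entanglement takes thousands of times longer than the memory can hold its state, which is exactly the problem the researchers ran into.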
And that’s unfortunate because the approach opens up a host of very intriguing possibilities. One is that spreading a qubit across two facilities through this delocalization could enable a single quantum calculation to be performed at remote facilities — possibly ones employing different hardware that have distinct strengths and weaknesses. And the researchers note that there’s a technique called entanglement swapping that could extend the distance between memory qubits even further — provided the qubits hold on to their state. But if all of these involve some amount of error, that error will quickly pile up and make the whole thing useless.
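A crude model shows how quickly that error pile-up bites. If each entanglement swap multiplies the overall fidelity by a fixed per-swap factor (a simplifying assumption; real error channels are more complicated, and the 0.95 figure is illustrative, not measured), a long chain of swaps degrades fast:

```python
# Crude multiplicative model: each entanglement swap degrades fidelity by a
# fixed per-swap factor. The 0.95 per-swap fidelity is an illustrative
# assumption, not a value from the experiment.

def fidelity_after_swaps(per_swap_fidelity: float, n_swaps: int) -> float:
    return per_swap_fidelity ** n_swaps

for n in (1, 5, 10, 20):
    print(f"{n:2d} swaps -> fidelity ~ {fidelity_after_swaps(0.95, n):.2f}")
```

Even a modest 5 percent loss per swap leaves only about 60 percent fidelity after ten swaps, which is why the per-step errors have to come down before chaining links together is useful.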
None of this should undercut the achievement demonstrated here, but it does show how far we still have to go. The inefficiencies popping up at every step of the process each represent a distinct engineering and/or physics challenge we have to tackle before any of this can be applicable to the real world.
Nature, 2020. DOI: 10.1038/s41586-020-1976-7