After “quantum supremacy”, the next step is scaling up and mastering the errors that dog qubits. But some researchers reckon the noise might always be too high for useful quantum computers
IT IS 40 years since physicist Richard Feynman pointed out that quantum systems should be able to carry out an entirely new form of computation that outperforms even the most powerful conventional computers. “Feynman argued that quantum computing should offer an exponential speed-up for many classical computations,” says Cristian Calude at the University of Auckland in New Zealand. And with a slew of breakthroughs, quantum computers look like they might now be hitting the big time. Perhaps.
Because they have properties that just don’t exist in the classical world, quantum entities such as atoms, photons and electrons, when used to make quantum bits, or qubits, have access to a different set of routines for information processing – a potentially much more powerful set.
Part of that is down to quantum superposition, which means a qubit can be used to represent a complex combination of the 0 and 1 binary states used in normal computing. That doesn’t mean it is 0 and 1 at the same time. A better way to put it is that it might turn out to be 0 or 1.
Quantum algorithms use a process called “interference” to skew these undefined properties and bias the interactions of multiple qubits in a way that increases the likelihood they will arrive at a final state that contains a solution to the problem they are trying to solve.
That’s where entanglement comes into the mix. The spooky connections between qubits it generates somehow allow for a pattern of interference where the paths leading to each wrong answer destroy one another and cancel out, while the paths leading to the right answer are reinforced. …
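The cancellation described above can be sketched numerically for a single qubit. This toy simulation is an illustration of my own, not something from the article: it tracks a qubit’s two complex amplitudes directly and applies a Hadamard gate twice, so the two paths that lead to the state 1 destructively interfere while the paths to 0 reinforce, returning the qubit to 0 with certainty.

```python
# A minimal sketch of quantum interference, using plain Python lists of
# complex amplitudes rather than any quantum computing library.
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state [amp0, amp1]."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

# Start in |0>: amplitude 1 for measuring 0, amplitude 0 for measuring 1.
state = [1.0, 0.0]

# One Hadamard puts the qubit in an even superposition of 0 and 1:
# each outcome now has probability |0.707|^2 = 0.5.
state = hadamard(state)
print([round(a, 3) for a in state])  # [0.707, 0.707]

# A second Hadamard makes the two paths leading to 1 cancel out
# (destructive interference) while the paths to 0 add up: the "wrong"
# answer's amplitude vanishes and we get 0 back with certainty.
state = hadamard(state)
print([round(a, 3) for a in state])  # [1.0, 0.0]
```

Real quantum algorithms orchestrate the same effect across many entangled qubits, arranging the circuit so that amplitudes for wrong answers cancel while those for the right answer build up.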
Read more at New Scientist