> you can simulate an 8-qubit chip faster and with more fidelity than you can with 8 homegrown qubits.
That is really useful to know! At what point does that stop being true? like, if I had 100 qubits could I still simulate them faster on a classical computer than with actual quantum qubits? (I imagine this is a hard question to answer because it's a complexity theory question, and I guess quantum computers with 100 qubits don't actually exist in real life, but I'd be curious to know what known bounds there are on the answer)
Right now, simulating 40 qubits takes a computer with at least 16 terabytes of RAM (2^40 complex amplitudes at 16 bytes each); 41 qubits takes 32 terabytes. You won't find that in a single machine, but you could get there with enough computers linked together. Big iron doesn't scale exponentially, though, so at some point you run out of atoms in the universe to use as transistors.
And 100 qubits? That needs more than 18 quintillion terabytes of RAM. In other words, it's not going to happen.
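That scaling is easy to check yourself. A minimal sketch, assuming a full state-vector simulation with 16 bytes per complex amplitude (two 64-bit floats), which is what makes the 16 TB figure for 40 qubits come out:

```python
# State-vector memory for an n-qubit simulation:
# 2**n complex amplitudes, assumed 16 bytes each (two float64s).
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

TB = 1024 ** 4
for n in (40, 41, 100):
    print(f"{n} qubits: {state_vector_bytes(n) / TB:.3g} TB")
```

For 40 qubits this gives 16 TB, for 41 it doubles to 32 TB, and for 100 it comes to 2^64 TB, which is where the 18-quintillion-terabyte figure comes from.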
Memory isn't the only cost; more memory also means more time. Every quantum operation [0] needs to touch all of that memory, and if it takes 10 ns to touch one byte, then a single simple operation on 36 qubits takes hours. [1] On a quantum computer, an operation would be on the order of nanoseconds.
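The "hours" figure follows from the same assumptions (16 bytes per amplitude, 10 ns per byte touched, one gate touching the whole state vector):

```python
NS_PER_BYTE = 10  # assumed cost to touch one byte of memory

def gate_time_seconds(n_qubits: int) -> float:
    # One gate that must touch every byte of the state vector.
    state_bytes = (2 ** n_qubits) * 16  # 16 bytes per complex amplitude
    return state_bytes * NS_PER_BYTE * 1e-9

print(f"{gate_time_seconds(36) / 3600:.2f} hours")  # ~3 hours for one gate
```

A 36-qubit state vector is 2^36 × 16 bytes = 1 TB, and touching a terabyte at 10 ns per byte works out to roughly three hours per gate.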
I would say this, as a rough answer to your question: a quantum computer is practically faster and more useful once you can no longer touch the RAM necessary to simulate it within 1 millisecond. Going off our 10 ns rule, 1 ms = 10^6 ns, so at 10 ns per byte that's a budget of 10^5 bytes. Just about 100 KB.
And, drum roll, at 16 bytes per amplitude that's only about 12 qubits [2].
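The crossover estimate, sketched out under the same assumptions (10 ns per byte, 16 bytes per amplitude, a 1 ms budget):

```python
BUDGET_NS = 1_000_000   # 1 ms, our rough "practically useful" threshold
NS_PER_BYTE = 10        # assumed cost to touch one byte
byte_budget = BUDGET_NS // NS_PER_BYTE  # 10**5 bytes = 100 KB

# Largest n whose full state vector (2**n amplitudes, 16 bytes each)
# fits inside the byte budget.
n = 0
while (2 ** (n + 1)) * 16 <= byte_budget:
    n += 1
print(n)  # -> 12
```

A 12-qubit state vector is 2^12 × 16 = 64 KB, inside the 100 KB budget; 13 qubits would need 128 KB and blow it.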
[0] There are some very special cases where this isn't true, but these special cases also don't give you any sort of quantum advantage.
[1] There are some tricks you can play to speed things up, like parallelizing across processors. But now you'll have to factor in communication time, memory, and bandwidth.
[2] One factor I'm not considering is the fidelity. Superconducting qubits can be thought of as analog devices with analog noise.
The largest full quantum computer simulation done so far is 45 qubits, performed on the Cori supercomputer [0][1]. The main limitation in these kinds of exact simulations is memory, and since each additional qubit doubles the state vector, simulating 50 qubits would require 2^5 = 32x the memory of the 45-qubit run, which is infeasible.
There are other methods of approximate simulation, but that gives the cutoff for full, exact simulation of a quantum computer.
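The same back-of-envelope arithmetic as above applies to the record run (assuming 16 bytes per complex amplitude, and noting that each extra qubit doubles the footprint):

```python
PB = 1024 ** 5

def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16  # 16 bytes per complex amplitude

print(state_vector_bytes(45) / PB)                       # 0.5 PB for 45 qubits
print(state_vector_bytes(50) // state_vector_bytes(45))  # 32x more for 50
```

Half a petabyte for 45 qubits is already a large fraction of a top supercomputer's aggregate memory, and the 32x jump to 50 qubits (about 16 PB) puts exact simulation out of reach.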