Based on what evidence I have been able to gather, and assuming Moore’s law holds, computers today are not at all far from human level. A human brain runs approximately 100,000,000,000 (10^11) neurons, each with an average of 1,000 dendrites, at a maximum rate of about 100 impulses per second. Multiplying those together comes out to about 10^16 operations per second, a rather impressive amount of raw power. According to the almighty Wikipedia, HPU4Science is the cheapest currently available source of computing power, at $1.80 per gigaflops (billion operations per second). So we take $1.80 × 10^16 / 10^9 and come up with a cost of about 18 million dollars to run a human brain. In simple terms, this means we are technologically capable of running an upload now.
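As a sanity check on that arithmetic, a few lines of Python reproduce the estimate from the figures quoted above (the neuron count, dendrite count, firing rate, and HPU4Science price are all the rough numbers in this paragraph, not measured values):

```python
NEURONS = 1e11          # ~100 billion neurons in a human brain
DENDRITES = 1_000       # average dendrites per neuron
FIRING_RATE = 100       # maximum impulses per second
COST_PER_GFLOPS = 1.80  # dollars per billion ops/sec (HPU4Science figure)

ops_per_second = NEURONS * DENDRITES * FIRING_RATE  # 10^16 ops/sec
gflops = ops_per_second / 1e9                       # 10^7 gigaflops
cost = gflops * COST_PER_GFLOPS

print(f"{ops_per_second:.0e} ops/sec -> ${cost:,.0f}")  # -> $18,000,000
```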
Let me reiterate that. With today’s computing power, we have the theoretical capability of running a human mind in real time on a computer. Moore’s law states that computing power per dollar doubles every two years, so twenty years from now brings ten doublings, cutting that $18 million by a factor of 1,024: by 2031 it should cost around $20,000, or the price of a new car. Even now, though, our limitation is not computing power. Neither is it memory, not by a long shot. In fact, the amount of memory needed to hold a human mind, neuron structure and all, would cost somewhere around $10,000 today.
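That Moore’s-law projection is easy to check. This sketch assumes a 2011 starting point, which the $20,000-by-2031 figure implies:

```python
cost_today = 18_000_000  # estimated cost of human-brain compute (from above)
years = 2031 - 2011      # assumed starting year: 2011
doublings = years / 2    # Moore's law: power per dollar doubles every 2 years

cost_2031 = cost_today / 2 ** doublings  # 18M / 1024
print(f"${cost_2031:,.0f}")              # -> $17,578, i.e. roughly $20,000
```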
Our real barriers lie in our understanding of neuroscience and our mastery of nanotechnology. Currently, we are only able to emulate circuits of around 25,000 neurons (at least, according to IBM, who also created Watson), or a cat’s brain. It takes an enormous amount of work to do this, as each neuron must be analyzed and a circuit diagram created for it. This has the unfortunate side effect of destroying the neuron structure being copied. While we should have the capability of diagramming and simulating a human brain by 2020, actually being simulated would mean being killed, and the first few attempts would doubtless have problems and fail to run properly.
This is where nanotechnology comes in to save the day. In theory, it should be possible to create tiny machines that are attracted to the electromagnetic fields of neurons, and can thus line up along dendrites and pass along the same signals as the dendrites themselves do. These nanomachines would “learn” from the neurons when to fire, and would eventually match them to arbitrarily high levels of precision. Basically, the skull would at that point contain two brains, one made of neurons, the other made of nanomachines, both doing exactly the same thing.
Now for the good part. The brain made of nanomachines would now be functionally identical to the one made of neurons. The structure and function of each of these machines could be uploaded to a computer, at which point a circuit diagram would be trivial to produce. The computer could then optimize from this diagram, leading to a significant increase in speed. Even if this step were not taken, computer processors run at somewhere around a billion operations per second. Assuming each group of 1,000 neurons shared a processor, each neuron could be updated a million times per second instead of the 100 times per second biology manages, a 10,000x speedup over the original brain. This is nowhere near the theoretical maximum speedup (neurons are about 100x bigger in all dimensions and 1,000,000x slower than the laws of physics say they have to be). Nonetheless, it is a good start.
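The speedup arithmetic works out like this (the 1,000-neurons-per-processor split is the assumption made above, and the model counts one operation per neuron update):

```python
PROC_OPS_PER_SEC = 1e9    # ~a billion operations per second per processor
NEURONS_PER_PROC = 1_000  # assumed: 1,000 neurons share one processor
BIO_RATE = 100            # a biological neuron fires ~100 times per second

updates_per_neuron = PROC_OPS_PER_SEC / NEURONS_PER_PROC  # 10^6 updates/sec
speedup = updates_per_neuron / BIO_RATE

print(f"{speedup:,.0f}x")  # -> 10,000x
```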
As for the timeline, predictions for when uploading should be available range from IBM’s optimistic 2019 to some guesses as late as 2100. I personally would put it sometime in the 2030 to 2050 range. If all of this is starting to sound like science fiction, just remember that predictions of the future tend to undershoot reality. Even twenty-five years ago, nobody would have predicted how ubiquitous the internet would be in our lives.