In case you missed the hubbub, IBM researchers last month announced the creation of a powerful new brain simulation, which was variously reported as being “cat-scale,” an “accurate brain simulation,” a “simulated cat brain,” capable of “matching a cat’s brainpower,” and even “significantly smarter than [a] cat.” Many of the claims go beyond those made by the researchers themselves — although they did court some of the sensationalism by playing up the cat angle in their original paper, which they even titled “The Cat is Out of the Bag.”
Each of these claims is either false or so ill-defined as to be unfalsifiable — and those critics who pointed out the exaggerations deserve kudos.
But this story is really notable not because it is unusual but rather because it is so representative: journalistic sensationalism and scientific spin are par for the course when it comes to artificial intelligence and brain emulation. I would like, then, to attempt to make explicit the premises that underlie the whole-brain emulation project, with the aim of making sense of such claims in a less ad hoc manner than is typical today. Perhaps we can even evaluate them using falsifiable standards, as should be done in a scientific discipline.
How Computers Work
All research in artificial intelligence (AI) and whole-brain emulation proceeds from the same basic premise: that the mind is a computer. (Note that in some projects, the whole mind is presumed to be a computer, while in others, only some subset of the mind is so presumed, e.g. natural language comprehension or visual processing.)
What exactly does this premise mean? Computer systems are governed by layers of abstraction. At its simplest, a physical computer can be understood in terms of four basic layers: at the top, the program; beneath it, the instruction set architecture in which the program is written; beneath that, the processor; and at the bottom, the physical substrate in which the processor is realized.
The layers break down into two software layers and two physical layers. The processor is the device that bridges the divide between software and the physical world. It offers a set of symbolic instructions. But the processor is also a physical object designed to correspond to those symbols. An abacus, for example, can be understood as “just” a wooden frame with beads, but it has been designed to represent numbers, and so can perform arithmetic calculations.
Above the physical/software bridge provided by the processor is the program itself, which is written using instructions in the processor’s programming language, also known as the Instruction Set Architecture (ISA). For example, an x86 processor can execute instructions like “add these two numbers,” “store this number in that location,” and “jump back four instructions,” while a program written for the x86 will be a sequence of such instructions. Such programs could be as simple as an arithmetical calculator or as complex as a web browser.
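To make the relationship between these two layers concrete, here is a minimal sketch in Python (using an invented three-instruction toy ISA rather than real x86) in which the "program" is just a list of instructions and the "processor" is the loop that carries them out:

```python
# A toy instruction set: each instruction is a tuple of an opcode and its
# operands. The "program" layer is a list of such instructions; the
# "processor" layer is the loop that interprets them. (Invented ISA for
# illustration, not real x86.)

def run(program):
    registers = {"r0": 0, "r1": 0}
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "load":          # load an immediate value into a register
            reg, value = args
            registers[reg] = value
        elif op == "add":         # add one register into another
            dst, src = args
            registers[dst] += registers[src]
        elif op == "jump_back":   # jump back n instructions
            (n,) = args
            pc -= n
            continue
        pc += 1
    return registers

# A "program" written against this ISA: compute 2 + 3 into r0.
program = [
    ("load", "r0", 2),
    ("load", "r1", 3),
    ("add", "r0", "r1"),
]
print(run(program))  # {'r0': 5, 'r1': 3}
```

The same program could, in principle, be run by any device that honors these three instructions, which is all the discussion below requires of a "processor."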
Below the level of the processor is the set of properties of the physical world that are irrelevant to the processor’s operation. More specifically, it is the set of properties of the physical processor that do not appear in the scheme relating the ISA to its physical implementation in the processor. So, for example, a physical Turing Machine can be constructed using a length of tape on which symbols are represented magnetically. But one could also make the machine out of a length of paper tape painted different colors to represent different symbols. In each case, the machine has both magnetic and color properties, but which properties are relevant and which are irrelevant to its functioning as a processor depends on the scheme by which the physical/software divide is bridged.
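The point about irrelevant physical properties can be illustrated with an equally small sketch (again a toy example of my own, not a full Turing Machine): the machine's rule refers only to abstract symbols, and whether those symbols happen to be realized as magnetic polarities or paint colors is fixed only by the encoding chosen at the bottom layer.

```python
# A toy one-tape machine that appends one mark to a unary counter. The rule
# refers only to the abstract symbols "mark" and "blank"; the physical
# encoding beneath them is interchangeable.

def run_machine(tape, encoding):
    mark = encoding["mark"]
    pos = 0
    # Scan right until the first non-mark cell, then write a mark there.
    while pos < len(tape) and tape[pos] == mark:
        pos += 1
    if pos == len(tape):
        tape.append(mark)
    else:
        tape[pos] = mark
    return tape

magnetic = {"mark": "N", "blank": "S"}        # magnetized north/south
painted  = {"mark": "red", "blank": "white"}  # painted paper tape

print(run_machine(["N", "N", "S"], magnetic))         # ['N', 'N', 'N']
print(run_machine(["red", "red", "white"], painted))  # ['red', 'red', 'red']
```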
Note the nature of this layered scheme: each layer requires the layer below it, but could function with a different layer below it. Just like the Turing Machine, an ISA can be implemented on many different physical processors, each of which abstracts away a different set of physical properties as irrelevant to its functioning. And a program, in turn, can be written using many different ISAs.
An Ideal Model for Whole-Brain Emulation
In supposing that the mind is a computer, the whole-brain emulation project proceeds on the premise that the computational model thus outlined applies to the mind. That is, it posits a sort of Ideal Model that can, in theory, completely describe the functioning of the mind. The task of the whole-brain emulation project, then, is to “fill in the blanks” of this model by attempting, either explicitly or implicitly, to answer the following four questions:
1. What is the mind’s program? That is, what is the set of instructions by which the brain gives rise to consciousness, qualia, and other mental phenomena?
2. In which instruction set is that program written? That is, what is the syntax of the basic functional unit of the mind?
3. What constitutes the hardware of the mind? That is, what is the basic functional unit of the mind? What structure in the brain implements the ISA of the mind?
4. Which physical properties of the brain are irrelevant to the operation of its basic functional unit? That is, which physical properties of the brain can be left out of a complete simulation of the mind?
We could restate the basic premise of AI as the claim that the mind is an instantiation of a Turing Machine, and then equivalently summarize these four questions by asking: (1) What is the Turing Machine of which the mind is an instantiation? And (2) What physical structure in the brain implements that Turing Machine? When, and only when, these questions can be answered will it be possible to program those answers into a computer, and whole-brain emulation will be achievable.
Limitations of the Ideal Model
You might object that this analysis is far too literal in its treatment of the mind as a computer. After all, don’t AI researchers now appreciate that the mind is squishy, indefinite, and difficult to break into layers (in a way that this smooth, ideal model and “Good Old-Fashioned AI” don’t acknowledge)?
There are two possible responses to this objection. Either mental phenomena (including intelligence, but also consciousness, qualia, and so forth) and the mind as a whole are instantiations of Turing Machines and therefore susceptible to the model and to replication on a computer, or they are not.
If the mind is not an instantiation of a Turing Machine, then the objection is correct, but the highest aspirations of the AI project are impossible.
If the mind is an instantiation of a Turing Machine, then the objection misunderstands the layered nature of physical and computer systems alike. Specifically, the objection understands that AI often proceeds by examining the top layer of the model — the “program” of the mind — but then denies this layer’s relationship to the layers below it. This objection essentially makes the same dualist error often attributed to AI critics like John Searle: it argues that if a computational system can be described at a high level of complexity bearing little resemblance to a Turing Machine, then it does not have some underlying Turing Machine implementation. (There is a deep irony in this objection — about which, more in a later post.)
There is a related question about this Ideal Model: Suppose we can ascertain the Turing Machine of which the mind is an instantiation. And suppose we then execute this program on a digital computer. Will the computer then be a mind? Will it be conscious? This is an open question, and a vexing and tremendously important one, but it is sufficient simply to note here that we do not know for certain whether such a scenario would result in a conscious computer. (If it would not, then certain premises of the Ideal Model would be false — but about this, more, also, in a later post.)
A third note about the model, and a much more immediately relevant one: just as we do not know whether simulating the brain at a low level will give rise to the high-level phenomena of the mind, so even a completely accurate model of the brain, if and when we create one, will not necessarily yield an understanding of the mind. This is, again, because of the layered nature of physical and computational systems. It is just as difficult to understand a low-level simulation of a complex system as it is to understand the original physical system. In either case, higher-level behavior must be additionally understood — just as looking at the instructions executing on a computer processor allows you to completely predict the program’s behavior but does not necessarily allow you to understand its higher-level structure; and just as Newton would not necessarily have discerned his mechanical theories by making a perfectly accurate simulation of an apple falling from a tree. (I explained this layering in more depth in this recent New Atlantis essay.)
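A small illustration of that last point, using Python's standard dis module: the bytecode listing specifies exactly what the Python virtual machine will do, step by step, yet nothing in it announces that the function computes a factorial; that higher-level reading has to be supplied separately.

```python
import dis

def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# The listing below is a complete low-level description of the function's
# execution, but the fact that it computes n! -- its higher-level
# structure -- appears nowhere in the instructions themselves.
dis.dis(factorial)
print(factorial(5))  # 120
```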
Achieving and Approximating the Ideal Model
Again, the claim in this post is that the Ideal Model presented here is the implicit model on which the whole-brain emulation project proceeds. Which brings us back to the “cat-brain” controversy.
When we attempt to analyze how the paper’s authors “fill in the blanks” of the Ideal Model, we see that they seem to define each of the levels (in some cases explicitly, in others implicitly) as follows: (1) the neuron is the basic functional unit of the mind; (2) everything below the level of the neuron is irrelevant; (3) the neuron’s computational power can be accurately replicated by simulating only its electrical action potential; and (4) the program of the mind is encoded in the synaptic connections between neurons. The neuron-level simulation appears to be quite simple, omitting a great deal of detail without justifying the omissions or explaining whether the omitted details are relevant and, if they are, what the effects of leaving them out might be.
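For a sense of what simulating “only the electrical action potential” can amount to, here is a minimal leaky integrate-and-fire point neuron in Python. This is a generic textbook simplification, not the model the paper actually used; its purpose is only to show how much a point-neuron description leaves out.

```python
import random

# A leaky integrate-and-fire "point neuron": one voltage variable, a leak,
# a threshold, and a reset. Everything below this level of description
# (ion channels, dendritic geometry, neurotransmitter chemistry) is simply
# absent from the model. Parameters are arbitrary illustrative values.

def simulate(steps=200, threshold=1.0, leak=0.95, weight=0.08, rate=0.3):
    v = 0.0
    spikes = []
    for t in range(steps):
        v *= leak                   # passive decay toward rest
        if random.random() < rate:  # a presynaptic spike arrives
            v += weight             # weighted synaptic input
        if v >= threshold:          # "action potential": record spike, reset
            spikes.append(t)
            v = 0.0
    return spikes

print(simulate())  # list of time steps at which the model neuron spiked
```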
Aside from the underlying question of whether such an Ideal Model of the mind really exists — that is, of whether the mind is in fact a computer — the most immediate question is: How close have we come to filling in the details of the Ideal Model? As the “cat-brain” example should indicate, the answer is: not very close. As Sally Adee writes in IEEE Spectrum:
Jim Olds (who directs George Mason University’s Krasnow Institute for Advanced Study, and who is a neuroscientist) explains that what neuroscience is sorely lacking is a unifying principle. “We need an Einstein of neuroscience,” he says, “to lay out a fundamental theory of cognition the way Einstein came up with the theory of relativity.” Here’s what he means by that. What aspect of the brain is the most basic element that, when modeled, will result in cognition? Is it a faithful reproduction of the wiring diagram of the brain? Is it the exact ion channels in the neurons?…
No one knows whether, to understand consciousness, neuroscience must account for every synaptic detail. “We do not have a definition of consciousness,” says [Dartmouth Brain Engineering Laboratory Director Richard] Granger. “Or, worse, we have fifteen mutually incompatible definitions.”
The sorts of approximation seen in the “cat-brain” case, then, are entirely understandable and unavoidable in current attempts at whole-brain emulation. The problem is not the state of the art, but the overconfidence in understanding that so often accompanies it. We really have no idea yet how close these projects come to replicating or even modeling the mind. Note carefully that the uncertainty exists particularly at the level of the mind rather than the brain. We have a rather good idea of how much we do and do not know about the brain, and, in turn, how close our models come to simulating our current knowledge of the brain. What we lack is a sense of how this uncertainty aggregates at the level of the mind.
Many defenders of the AI project argue that it is precisely because the brain has turned out to be so “squishy,” indefinite, and unlike a computer, that approximations at the low level are acceptable. Their argument is that the brain is hugely redundant, designed to give rise to order at a high level out of disorder at a low level. This may or may not be the case, but again, if it is, we do not know how this happens or which details at the low level are part of the “disorder” and thus safely left out of a simulation. The aggregate low-level approximations may simply be filtered out as noise at a high level. Alternatively, if the basic premise that the mind is a computer is true, then even minuscule errors in approximation of its basic functional unit may aggregate into wild differences in behavior at the high level, as they easily can when a computer processor malfunctions at a small but regular rate.
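A toy illustration of the two possibilities just described (my own example, not a brain model): the same tiny per-step error is injected into two iterated systems; the contracting one absorbs it as noise, while the chaotic one amplifies it into a completely different trajectory.

```python
# Two iterated maps, each run twice: once exactly, once with a tiny error
# (1e-9) injected at every step. The averaging map forgets the errors; the
# chaotic logistic map amplifies them until the two runs have nothing to do
# with each other. Which regime a brain model is in, we do not know.

def trajectory(step, x0, n=60, noise=0.0):
    x = x0
    for _ in range(n):
        x = step(x) + noise
    return x

averaging = lambda x: 0.5 * x + 0.25          # contracting: errors die out
logistic  = lambda x: 3.99 * x * (1.0 - x)    # chaotic: errors compound

for name, f in [("averaging", averaging), ("logistic", logistic)]:
    exact = trajectory(f, 0.4)
    perturbed = trajectory(f, 0.4, noise=1e-9)
    print(name, abs(exact - perturbed))   # tiny gap vs. order-one gap
```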
Until we have better answers to these questions, most claims like those surrounding the “cat brain” should be regarded as grossly irresponsible. That the simulation in question is “smarter than a cat” or “matches a cat’s brainpower” is almost certainly false (though to my knowledge no efforts have been made to evaluate such claims, even using some sort of feline Turing Test — which, come to think of it, would be great fun to dream up). The claim that the simulation is “cat-scale” could be construed as true only because it is so vaguely defined. Such simulations could rather easily be altered to further simplify the neuron model, shifting computational resources to simulate more neurons, resulting in an “ape-scale” or “human-scale” simulation — and those labels would be just as meaningless.
When reading news reports like many of those about the “cat-brain” paper, the lay public may instinctively take the extravagant claims with a grain of salt, even without knowing the many gaps in our knowledge. But it is unfortunate that reporters and bloggers who should be well-versed in this field peddle baseless sensationalism. And it is unfortunate that some researchers should prey on popular ignorance and press credulity by making these claims. But absent an increase in professional sobriety among journalists and AI researchers, we can only expect, as Jonah Lehrer has noted, many more such grand announcements in the years to come.

10 Comments

  1. OK, the "cat brain" claims are beyond exaggerated, as has been pointed out by people deeply involved in "the AI project." It seems unlikely that the brute-force approach that the IBM project is taking, modeling at the level of microanatomy, will turn out to be the most efficient for the purpose of practical AI. But that is not necessarily the point – the project is more likely to yield better understanding of natural neural systems, which may have some payoff for AI as well.

    Who ever said "the mind" was "a Turing machine"? Theoretical computer science argues that a Turing machine ought to be sufficient for simulating a brain, give or take a polynomial factor of efficiency. But practical computer science recognizes that the TM itself is a horribly inefficient architecture, and practical AI research recognizes that today's standard digital computer architectures may not even be the best way to deploy existing electronic technology for AI and AGI (although it is not yet clear that new chips which are neuromorphic at a low level will immediately be more effective in practice than numerical ANNs, particularly given the economics of semiconductors).

    In any case, it is obvious that the brain is neither a Turing machine nor any type of digital computer like the one I'm typing this reply on. What is not obvious is that a digital computer can't do an effective simulation of a brain. (Is a jet engine a Turing machine? Can a computer simulate a jet engine?)

    Your argument seems to rest ultimately on some unstated belief in the supernatural or extra-physical (or perhaps some quantum voodoo). Do you believe the brain is a physical system? Do you believe it behaves according to the laws of physics? If so, it can be simulated by a sufficiently powerful digital computer. Even if it uses nonlocal quantum effects, which is quite unlikely, it could be simulated by a quantum computer. I know you must have heard these arguments before, so why do you ignore them?

    From a practical standpoint (e.g. for controlling a humanoid robot) the brain can almost certainly be replaced by an artificial neuromorphic computer of far less spatiotemporal complexity (or "computing power") than would be required for "whole brain emulation" at the level of detail attempted in the IBM project. Who cares about simulation fidelity to the level of membranes, receptors and neurotransmitters? Neurons are sufficiently noisy that it can't matter from a functional standpoint if a simulation is exactly biologically correct.

    Relying on arguments against the ultimate success of "the AI project" weakens the critique of transhumanism and sets it up to be undermined by steady and accelerating progress in this field – as well as raising the obvious question, So what are you worried about then? I take AI and AGI seriously, and I am worried.

  2. Mr. Gubrud,

    Thanks very much — I’m grateful for your comment, and for the opportunity to elaborate on my post and to clear up some potential points of confusion.

    First, to clarify some technical language: I hope the post makes clear that neither I nor anyone else to my knowledge has claimed that the mind is literally a Turing Machine, that is, some sort of length of magnetic tape with a read/write head. The question is whether the mind is an instantiation of a theoretical Turing Machine. (I’ve now updated the post to change the two places where, for the sake of length, I at first did not include the phrase “instantiation of a” Turing Machine.) That is, computer science theory states that any computable function can be represented as a Turing Machine. So if the mind can be correctly simulated on a computer, it must be a computable function, and so must have some Turing Machine representation. I did not claim and did not mean to imply that the mind’s computable function would actually be encoded as a Turing Machine in the brain.

    I also was not considering the question of efficiency. You’re quite right to say that Turing Machines are incredibly inefficient; the point is simply that any program that can be executed on a computer has a Turing Machine representation, and such representations are commonly used in C.S. theory to reason about computation. Instead of Turing Machines, I could have equivalently written the post with reference to the lambda calculus, but that is much less commonly discussed.

    I’m afraid that some parts of your comment suggest that you may have somewhat misread my post — such as your objections about the mind obviously not being a digital computer, or about my arguments resting “ultimately on some unstated belief in the supernatural or extra-physical.” First of all, I did not claim that the mind cannot be simulated on a computer or that artificial general intelligence (AGI) is unachievable. I consider it an open question. The question is simply whether the theory of computable functions (Turing Machines or any of the equivalent theories) can entirely encapsulate the functioning of the mind. More generally, we might ask, as you’ve indicated, whether the physical world can be so encapsulated. (We could also ask whether the mind and the physical world in general are simply computations, as some philosophers and transhumanists claim.) Under Newtonian mechanics, physics can be encapsulated through computable functions. But today — to my knowledge — this is an open question given the possible indeterminism of quantum mechanics.

    You also make two claims that I dealt with in the post: that high simulation fidelity of the brain doesn’t matter because neurons are “noisy,” and that if the brain relies on quantum phenomena then we can simply simulate that too. I explicitly responded to the former point and implicitly to the latter in discussing the importance of low-level fidelity in a simulation:

    Many defenders of the AI project argue that it is precisely because the brain has turned out to be so “squishy,” indefinite, and unlike a computer, that approximations at the low level are acceptable. Their argument is that the brain is hugely redundant, designed to give rise to order at a high level out of disorder at a low level. This may or may not be the case, but again, if it is, we do not know how this happens or which details at the low level are part of the “disorder” and thus safely left out of a simulation. The aggregate low-level approximations may simply be filtered out as noise at a high level. Alternatively, if the basic premise that the mind is a computer is true, then even minuscule errors in approximation of its basic functional unit may aggregate into wild differences in behavior at the high level, as they easily can when a computer processor malfunctions at a small but regular rate.

    (Continued in next comment…)

  3. (Continued from previous comment.)

    My main point is simply that, if it is possible for the mind to exist in a computer, then there is some program that sufficiently describes the mind, and so the mind is subject to theories of computation. (Again, the question of whether this is possible I consider to be open.) Given that this is such a highly technical and philosophical question, I would like to see it discussed by AI researchers and journalists in far more technically rigorous terms than is typical today — which, as I’ve indicated, should be possible using theories of computation if AGI is possible.

    The final questions you raise are indeed important. I certainly would not deny that the progress of AI thus far lends some support to the notion that the ultimate goal of AGI is possible. Certainly we have seen that some mental phenomena can be pretty well described as computable functions. But, as I detailed here, the fact that AI enthusiasts have been asserting with confidence for decades that the mind is a computer without really coming close to answering the sort of necessary questions I posed in this post lends some support to the notion that the premises underlying the overall project may be deeply flawed.

    As for your concluding question, while you are quite right to raise these ethical aspects of this discussion, this post was limited almost entirely to technical concerns (except for the parts about journalistic and scientific responsibility in discussing technical claims). You’re right that if my primary aim were to drum up concern over the impending doom of AI, then I would keep quiet about doubts as to whether it is possible — but I think it’s important to have an honest discussion of the technical questions. Overall, though, the work done on this blog and in the journal should make clear that, regardless of whether AGI is ultimately possible, there are still numerous reasons to be concerned about transhumanism and related trends.

  4. Dear Mr. Schulman,

    Thank you for your thoughtful reply. I will confine my response to a few points, for lack of time to do more.

    As a physicist, I don’t know what sort of object “the mind” is, but “the brain” or more precisely, particular brains which exist in the physical world, are familiar things to me. Perhaps by “the mind” you mean some abstract model of the brain’s functioning, i.e. the way it gates and integrates excitation, tunes its synapses and so on. I don’t know what it would mean to say that such a logical model would be “an instantiation of a Turing Machine” but it does not sound correct to me. On the other hand, I do believe that if you could specify such a model, it would be computable either with a Turing Machine, or more reasonably, with an efficient computer of greater capability than any existing today.

    I’m not sure I even know what it would mean to say that a given modern digital computer was “an instantiation of a Turing Machine” but I suspect it would not be correct, since the Turing Machine is a particular architecture for a hypothetical computer and no modern computer has that architecture. As you know, the Church-Turing thesis, which I think remains unproven but is accepted as the foundation of theoretical computer science, holds that any function that is computable by any (non-quantum) machine is computable by a Turing Machine with at most a polynomial factor of inefficiency (number of steps required). From this perspective, all classical computers are computationally equivalent to Turing Machines, but that does not make them Turing Machines, or even instantiations of Turing Machines, unless I am misunderstanding something.

    You say that “if the mind can be correctly simulated on a computer, it must be a computable function, and so it must have some Turing Machine representation.” I think the first part of this sentence is fairly loaded. First of all, again, what do you mean by “the mind”? If you mean “what the brain does,” well, do you mean “exactly what a particular brain does,” do you mean “what a particular brain is more or less likely to do in different situations,” or do you mean “something like what brains do in general”?

    If you mean the first, then I would agree it is extremely unlikely that any machine, or even another identically constituted brain, can ever compute an exact prediction, beyond the level of what we are already able to predict knowing something about someone’s personality (I’m also not sure what you mean by “correctly”). Plain old thermochemical randomness, to say nothing of quantum mechanical uncertainty, prevents this from being possible even in any coherent thought experiment.

    An identically constituted copy of a brain, or a brain simulation of some kind like the transhumanists’ “upload,” would probably behave more or less as you’d expect the original brain to behave, or at least not in some way obviously at odds with expectations of personality. In fact, it would probably do a better job of predicting what a particular individual brain would be more or less likely to do in different situations than could be done on the basis of just knowing a person, since it would have access to “inside information.” But it is very doubtful whether such objects could ever be created.

    On the other hand, if you mean “something like what brains do in general,” at least something good enough to control a humanoid robot and give it humanlike intelligence and behavior, then it seems quite likely that not only can a computer of some kind give a reasonable simulation of human brain behavior, at least for purposes of controlling humanoid robots, but that it need not even be a brain simulation or structured in any way obviously resembling an actual human brain. If you disagree with this, you ought to tell us why, without needing to dispute whether “the mind” is “a computable function.”

  5. So, I claim that not only can the internal process of the human brain be effectively (for practical purposes), if not “correctly,” simulated by a digital computer, but that, more importantly, effective simulations of human behavior can be created without needing to simulate brains, and furthermore will be created, for various purposes, in the quite near future. That is a matter which has very serious consequences, which may safely be discounted only if one disbelieves that it is and will be so.

    I do agree that this requires, not that the behavior of any given brain be exactly predictable by a computable function but that there exist some class of computable functions which constitute functionally effective simulations of human brains, and an even larger class of computable functions which constitute functionally effective simulations of human intelligence and behavior. That does not seem to me an unreasonable hypothesis, and I don’t see an argument in what you’ve written as to why it would be unreasonable.

  6. Dear Mr. Schulman,

    Much as I regret having to take an antagonistic stance, since you and I share in our distaste for the cult of technology, I think your approach to "the project of AI" is completely wrongheaded. As I have indicated, I think the hour is quite late; we are already living in the age of Google and Deep Blue and will soon find ourselves surrounded by natural-language systems, computer vision, and "intelligent agents" doing more and more of our thinking for us. Humanoid robots that navigate the human landscape, and passers of various levels of Turing Test, are likely not more than a decade away. Superhuman intelligence of a kind already exists, and the more dangerous kind, that is willful and actually thinks like humans, is no longer a distant goal. This assessment rests on more evidence than I care to marshal for this post, but it is my assessment; you may disagree of course but the arguments that you have deployed so far strike me as quite confused and certainly not reflective of the current state and pace of technology.

    I went back and reread your “Why Minds Are Not Like Computers” and I must say I was not more impressed than the first time I read it. I think your basic rhetoric is that the terminology of computer science does not describe “the mind” or even the brain very well, and that is of course true, but quite irrelevant. Obviously the brain is not a digital computer, but that does not mean a digital computer cannot in principle simulate a brain to any required level of fidelity. Just as obviously, simulation of a human brain to any reasonable level of fidelity in real time is a task well beyond the state of the art, and will be for a long time to come, but this too is pretty irrelevant to the question of whether and when AI systems will be able to replace human brains for practical purposes.

    You ask, “what degree of fidelity is good enough? Suppose we were to replicate a computer by duplicating its processor; would it be sufficient to have the duplicate correctly reproduce its operations, say, 95 percent of the time? If not, then 99.9 percent?” But you know perfectly well that a digital computer with an error rate of 10^-3 will immediately crash, while it is very unlikely that neurons are deterministic to this level; yet the brain works. This is why it is very unlikely that producing a good functional equivalent of human intelligence requires a very high fidelity brain simulation.
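    The arithmetic behind that observation is easy to make concrete (a rough sketch; the instruction counts are illustrative, not measured):

    ```python
    # Probability that a run of n instructions completes with no error,
    # given an independent per-instruction error rate p = 1e-3. Even a
    # million instructions (a small fraction of a second on a modern
    # processor) has essentially no chance of finishing cleanly.
    p = 1e-3
    for n in (1_000, 1_000_000):
        print(n, (1 - p) ** n)
    # prints ~0.37 for a thousand instructions and 0.0 (mathematically
    # about 1e-435) for a million
    ```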

    You beat the very dead horse of Searle’s Chinese Room argument, which is simply answered by the observation that a Chinese Room which could pass an unrestricted Turing Test is quite impossible. What Searle proposed was to have a guy look up Chinese questions or conversations in a giant reference book. For any lookup table large enough to cover all questions or conversations up to some length within some domain, there is a slightly longer or less restricted question or conversation which would immediately break it. So there is no need for any mysticism about “the whole system” understanding. No way could a setup like this ever pass a real Turing Test. What is required to pass the test is a system that really does understand the questions and can formulate intelligent answers to them, as well as maintain a semblance of humanity.
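    Rough numbers make the point about the lookup table vivid (my own illustrative figures, not Searle's): even a modest vocabulary and a short conversation length give a table far too large for any physical implementation.

    ```python
    # Number of possible word sequences of length 1..L drawn from a
    # vocabulary of V words. With V = 10,000 and L = 20, the count is
    # already on the order of 1e80, roughly the usual estimate for the
    # number of atoms in the observable universe.
    V, L = 10_000, 20
    entries = sum(V ** k for k in range(1, L + 1))
    print(f"{entries:.2e}")   # about 1.00e+80
    ```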

  7. Pylyshyn’s argument is another variation on the cell-by-cell replacement argument deployed by Moravec and others (Moravec argues that you don’t lose consciousness, so you don’t die, so your atman gets transferred to a computer). Your answer is that maybe “the neuron is not a black box”, but here you are grasping at straws. From the point of view of the next neurons, from the point of view of the entire system, it is inescapable that the role of any one neuron in the timescale of conscious thought and action is fully described by some input-output transfer function. Although the way the neuron adjusts that function over time is probably more complicated than the model used in the IBM paper, for example, it must still be a purely local “cellular automaton” or “algorithm” – to use the crude and inappropriate language of computer science, but the important point is that it is local. Unless you believe in spooks.

    You say that "if the neuron is not a black box" then "Pylyshyn’s thought experiment [and Moravec’s] would be a description of the slow death of the brain and the mind, in the same manner as if you were to slowly kill a person’s brain cells, one by one." But why is this not a correct description even if the neuron is just "a black box" in the sense that you mean, and even if it is true that whatever continues to speak does continue meaning something by its words, or that Pylyshyn or Moravec’s computer is in fact conscious and thinks itself the very person who started "the procedure"? The object that remains would not be a human brain, not a human being, not human, whatever it might be. This is just a physical fact. Is it too fantastic for you to imagine that the "continuity of consciousness" does not imply the continuation of life? Must you not resort to unrealistic hypotheses about how the brain works in order to deny the possibility of death unperceived, the voodoo of soul transfer apparently achieved without there needing to exist an actual soul? I don’t know if such a process would ever be physically possible, but I can’t rule it out. What I can rule out is that the end result would mean that a human being is still alive, if it is a physical fact that the human being was destroyed.

    That’s all for now, but I look forward to more interesting discussions in the future.

    Yours,
    Mark A. Gubrud

  8. I would like to add more about the whole uploading mind emulation thing and why I think it unlikely.

    Brains are dynamically reconfigurable. The synaptic connections reconfigure themselves all the time (I think this occurs during sleep and is one of the reasons why we sleep). No semiconductor technology has this dynamism.

    Memory storage is chemical in nature, not electronic. Synapses vary as to chemical type. Also, dendrites are not the only way neurons communicate with each other. They also communicate by diffusion-based chemistry as well.

    There are various kinds of memory storage. There is short-term storage, there is long-term potentiation, then there is the really long term memory which is still not understood. These various kinds of memory as well as communications are interactive with each other.

    The software to simulate all of this would be so complex I cannot imagine it being done in the foreseeable future. The Moore's Law progression has never applied to software, only to semiconductor hardware.

    Having computers that exceed the so-called computational capabilities of human brains is not difficult to imagine. By Moravec's estimates, we already have them. By Kurzweil's estimates, we will have them in 10 years or so. But brains work so differently from semiconductor-based computers that such comparisons are essentially meaningless.
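    For reference, the kind of back-of-envelope such estimates rest on looks roughly like this (order-of-magnitude figures only, not necessarily the exact numbers Moravec or Kurzweil use):

    ```python
    # Commonly cited rough figures for the human brain, multiplied out to a
    # synaptic-events-per-second count (~1e17). The raw number says nothing
    # about how those events are organized, which is why such comparisons
    # with semiconductor computers are of doubtful meaning.
    neurons = 1e11              # ~100 billion neurons
    synapses_per_neuron = 1e4   # ~10,000 synapses per neuron
    max_firing_rate_hz = 1e2    # up to ~100 spikes per second
    print(f"{neurons * synapses_per_neuron * max_firing_rate_hz:.0e} synaptic events/sec")
    ```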

    I expect it to take at least another 50 years for the development of any kind of machine sentience (real AI) and even then, it will be quite different from us.

  9. Mr. Gubrud,

    Thanks again for your comments, and apologies for the delay in responding. I’ve been a bit too busy to give your remarks the kind of full reply they deserve, and frankly don’t think that the comments section of a blog can do justice to such a complex discussion. I hope we’ll get a chance to meet face-to-face to continue this discussion in the not-too-distant future — but in the meantime, maybe I can clarify a few points here.

    First, I don’t want to get too bogged down in the technical language about Turing Machines. In asking whether the mind is a Turing Machine or an instantiation of a Turing Machine, I simply mean to ask whether the mind is a computable function — that is, a function that can be computed by some Turing Machine.

    Next, you seem to see some things in my post and essay that I did not intend to say (and hope are not there). For example, you suggest that I am attempting to present definitive arguments against the possibility of AI. But actually, my purpose is just to look at the technical implications of common assumptions that AI is possible, and then to outline some scenarios under which AI would not be possible. Whether those scenarios describe reality is a philosophical and empirical question that remains the subject of ongoing debate.

    Very briefly on Pylyshyn: If I understand you correctly, your comment speaks to the question of whether a program that is behaviorally equivalent to a human mind constitutes a human mind. This philosophical question is an interesting one, but it isn’t what I was focusing on. Short of engaging in a full discussion, I will just point out that arguing a negative answer to that question requires an even stronger nonreductive stance than the one I have taken.

    On to a broader point: You have a few times suggested that I rely on “mysticism,” “spooks,” “voodoo,” and “the supernatural.” Not believing in strong reductionism — as most philosophers of mind do not — is not equivalent to believing in the supernatural. You remark that, “as a physicist, [you] don’t know what sort of object ‘the mind’ is,” and you suggest that I might mean by “mind” an “abstract model of the brain’s functioning.” By “mind” I don’t mean an object at all. The mind is nonphysical in nature: qualia (e.g. pain, redness) cannot be touched, held, or located in space. The mind is a nonphysical phenomenon that arises out of the physical phenomenon of the brain. To say this is not to affirm “belief in the supernatural or extra-physical,” as you put it in one of your comments. Indeed, the main point of my essay is simply to show how layers of complexity can lead to nonphysical properties and phenomena in systems composed of entirely physical substances. The common analogy between software/hardware and mind/brain is in this respect legitimate. One need not believe in the supernatural, or even the “extra-physical,” to affirm that both minds and software exist, even though both are nonphysical. (If you were to claim that there are no minds, only brains, or that minds are just brains, you would have to essentially make the same claim about software and hardware respectively.)

    (Continued…)

  10. (…continued.)

    Your comment about neuronal simulations is an important one, and I will not dispute that high-fidelity neuronal simulations probably are not necessary for accomplishing whole-brain emulation. I will simply note that I was raising a hypothetical, and that the questions of how much fidelity is necessary and of which brain structures must be modeled remain wide open. But in the absence of some general theory of the brain/mind system, and some standards about such fidelity to go along with it, whole-brain emulation researchers seem prone to underrate the importance of this question (as Henry Markram noted of the “cat brain” researchers).

    In the end, of course, I share your concerns about the applications of such technologies. The subject of our current dispute — that is, whether strong AI is possible — is an important and fascinating one in and of itself, but in many ways it is unimportant to such concerns. Technologies need not be human or be able to think in a way we would recognize as humanlike or conscious in order for them to be of grave concern, and the arguments I am making here are not at all intended to minimize these concerns. As you indicate, the hour is already quite late for us to be asking how such technologies will impact our safety and political order (and much more beyond). It is precisely because I and others consider these questions so important that we continue to explore them on this blog and in the pages of The New Atlantis.

    Yours,
    Ari Schulman
