While reading about quantum computer skepticism, I came across some comments by Scott Aaronson that mention classical computer skepticism:
A: The only reason we don't need fault-tolerance machinery for classical computers is that the components are so reliable, but we haven't been able to build reliable quantum computer components yet. Presumably, if we could build extremely reliable components, we wouldn't need error correction and fault-tolerance technology.
Scott: Yes, that's what I would say. In the early days of classical computing, it wasn't clear at all that reliable components would exist. Von Neumann actually proved a classical analog of the Threshold Theorem, then later, it was found that we didn't need it. He did this to answer skeptics [emphasis mine] who said there was always going to be something making a nest in your JOHNNIAC, insects would always fly into the machine, and that these things would impose a physical limit on classical computation. Sort of feels like history's repeating itself.
I asked about this on the cstheory Stack Exchange and got a reply (from Peter Shor!) that pointed me to some resources.
I think my original question was perhaps a bit misguided and I will need to revise the scope of what I'm asking, but I will share what I have learned.
First, Peter Shor's answer guided me to some publications by Shannon, which in turn opened up a number of other papers to pursue. I wouldn't have found this information without his reply, so I am grateful for the response.
In the sources I found, there is never any hint of skepticism about classical computation. What was a concern, however, was the reliability of complex systems built from unreliable components. Knowing what I do now, I think the question closest to my original inquiry that I am able to answer is something like
Knowing that early electronic components were unreliable, how is it possible to build a machine from unreliable components that happens to work?
I think this is a rather intuitive claim: a complicated system made of unreliable components seems like it would be in a faulty state more often than a simple one, and as the system becomes more complicated it should become less reliable (I think this is what Scott Aaronson was getting at).
Some literature of the time makes this claim directly. Smith says in [13]
However, the problem of making complex electronic equipment reliable still remains. Statistically, the greater the number of parts, the greater the chance of failure.
And Mine says in [7]
During the past several years, the electronic equipment has tended rapidly towards increased size and complexity. The more complex and enormous an equipment is, the more likely it is that any one of its elements will fail. Thus it is unavoidable that electronic equipment become unreliable.
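The arithmetic behind this intuition is easy to make concrete. Here is a minimal sketch, with a failure probability and component counts of my own choosing rather than figures from any of these papers: if a system works only when every one of its n components works, and each component fails independently with probability p, the whole system works with probability (1 - p)^n, which decays exponentially in n.

```python
# Reliability of a serial system: every component must work.
# The value of p and the values of n are illustrative assumptions,
# not historical data.
p = 0.001  # per-component failure probability (assumed)
for n in (10, 1_000, 100_000):
    print(f"n = {n:>6}: system works with probability {(1 - p) ** n:.3g}")
# n =     10: ~0.99
# n =   1000: ~0.368
# n = 100000: ~3.5e-44
```

Even with components that each fail only once in a thousand uses, a machine with a hundred thousand of them in series essentially never works.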
The lecture series from von Neumann [6] is some of the earliest literature addressing how to make a reliable system from unreliable components, though Mullin [11] says the paper is an extension of earlier work by McCulloch and Pitts [1].
In section 10 of the paper, von Neumann says
Section 10 is devoted to a sketch of the statistical analysis necessary to show that, by using large enough bundles of lines, any desired degree of accuracy (i.e. as small a probability of malfunction of the ultimate output of the network as desired) can be obtained with a multiplexed automaton.
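To get a feel for why larger bundles help, here is a rough Monte Carlo sketch of the majority-vote idea. This is only the core intuition, not von Neumann's full multiplexing construction (which also needs restoring organs to keep errors from compounding across stages), and the per-line error rate and bundle sizes here are my own illustrative choices.

```python
import random

def majority_error_rate(n_lines: int, eps: float, trials: int = 100_000) -> float:
    """Carry one bit on n_lines noisy lines, each flipped independently
    with probability eps, and read out the majority vote. Returns the
    fraction of trials in which the majority readout is wrong."""
    wrong = 0
    for _ in range(trials):
        flips = sum(random.random() < eps for _ in range(n_lines))
        if flips > n_lines // 2:  # most lines flipped, so the vote is wrong
            wrong += 1
    return wrong / trials

for n in (1, 11, 101):  # odd bundle sizes avoid ties
    print(f"bundle of {n:>3} lines: error rate ~ {majority_error_rate(n, eps=0.1):.2g}")
# With eps = 0.1, a single line is wrong 10% of the time, a bundle of 11
# is wrong about 0.03% of the time, and a bundle of 101 essentially never.
```

For any per-line error rate below one half, the majority readout is wrong with probability that falls exponentially in the bundle size, which is the "any desired degree of accuracy" in the quote above.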
Despite this possibility, it seems that electronic equipment of the time had rather poor performance characteristics in practice. For example, Carhart says in [3]
Remarkable technological advances have been made in the field of electronics during the past decade. The impact of these developments on military activities has been dramatic and far-reaching. During World War II, radar, the proximity fuze [sic], fire-control equipment, and sonar played crucial roles in winning the victory. Since V-Day, the importance of electronics in warfare has greatly increased. It has brought about revolutionary changes in the rapidity and range of communication, in the speed and precision of controlling modern weapons, and in the detection and tracking of enemy weapons. To realize these potentialities it is imperative that electronic equipment operate reliably in the field. Yet it is generally agreed that present electronic equipment is unreliable to a serious degree. [emphasis original] For example, a recent widely distributed report (Ref. 2) states that in 1950 only about one third of Navy electronic equipment was operating properly.
Interestingly, in the final pages of the report Carhart claims that military equipment is approaching a critical complexity limit in terms of reliability, at least for serial systems. This is the only instance I found of anyone claiming a limit on achievable complexity, though I think it is a rather soft claim, since the author goes on to propose redundancy and parallel designs to increase reliability.
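Carhart's proposed escape is easy to see numerically. Here is a back-of-the-envelope sketch, with figures of my own choosing rather than numbers from the report: backing each stage of a long serial chain with a few parallel spares turns a hopeless system into a dependable one.

```python
# A serial chain of n stages, each built from k components in parallel.
# A stage works if any one of its k components works, so a stage's
# reliability is 1 - (1 - r)**k, and the chain's is that to the n-th power.
# r, n, and k are illustrative assumptions.
r, n = 0.99, 1_000  # per-component reliability, number of serial stages
for k in (1, 2, 3):
    stage = 1 - (1 - r) ** k
    print(f"k = {k}: system reliability ~ {stage ** n:.3f}")
# k = 1: 0.000  (a bare 1000-stage chain almost never works)
# k = 2: 0.905
# k = 3: 0.999
```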
I found a small number of other papers that state this complexity argument only to refute it. For instance, Lipp says in [14]
Although it is commonly believed that a circuit containing a multiplicity of elements is less reliable than a simpler one, this is untrue in a large class of cases.
Moskowitz and McLean say in [12]
Hence, an increase in complexity usually leads, with present design concepts, to a decrease in equipment reliability. Yet reliable operation is as important as the satisfactory electronic solution of the other aspects of the problem.
Can some principle or concept be found in equipment design which will not tend to increase the probability of failure with increase in complexity? It is the purpose of this report to indicate a possible solution.
I haven't found any evidence of skepticism about classical computing. What I did find is a number of papers addressing concerns about system reliability, stemming from von Neumann's original research. Carhart's report was the only instance of a claimed limit on system reliability, and even there the author proposes solutions to the problem.
(The majority of this post is duplicated at https://cstheory.stackexchange.com/questions/40571/reference-request-skeptics-of-classical-threshold-theorm/40632#40632)
[1] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," The Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115–133, Dec. 1943.
https://doi.org/10.1007/BF02478259
[2] A. C. Block, "A Redundancy Analog," IRE Transactions on Reliability and Quality Control, vol. PGRQC-12, pp. 1–7, Nov. 1957.
https://doi.org/10.1109/IRE-PGRQC.1957.5007153
[3] R. R. Carhart, "A Survey of the Current Status of the Electronic Reliability Problem," RAND Corporation Research Memorandum RM-1131, 1953.
https://www.rand.org/pubs/research_memoranda/RM1131.html
[4] Z. W. Birnbaum, J. D. Esary, and S. C. Saunders, "Multi-Component Systems and Structures and Their Reliability," Technometrics, vol. 3, no. 1, pp. 55–77, Feb. 1961.
https://doi.org/10.2307/1266477
[5] R. Gordon, "Optimum Component Redundancy for Maximum System Reliability," Operations Research, vol. 5, no. 2, pp. 229–243, 1957.
http://www.jstor.org/stable/167353
[6] J. von Neumann, "Lectures on Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," Jan. 1952.
http://www.mit.edu/~6.454/papers/pierce_1952.pdf
[7] H. Mine, "Reliability of a Physical System," IRE Transactions on Circuit Theory, vol. 6, no. 5, pp. 138–151, 1959.
https://doi.org/10.1109/TCT.1959.1086604
[8] L. Hellerman and M. P. Racite, "Reliability Techniques for Electronic Circuit Design," IRE Transactions on Reliability and Quality Control, vol. PGRQC-14, pp. 9–16, Sep. 1958.
https://doi.org/10.1109/IRE-PGRQC.1958.5007177
[9] J. C. Hudson and K. C. Kapur, "Reliability theory for multistate systems with multistate components," Microelectronics Reliability, vol. 22, no. 1, pp. 1–7, Jan. 1982.
https://doi.org/10.1016/0026-2714(82)90045-2
[10] E. F. Moore and C. E. Shannon, "Reliable circuits using less reliable relays," Journal of the Franklin Institute, vol. 262, no. 3, pp. 191–208, Sep. 1956.
https://doi.org/10.1016/0016-0032(56)90559-2
[11] A. A. Mullin, "Reliable stochastic sequential switching circuits," Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics, vol. 77, no. 5, pp. 606–611, 1958.
https://doi.org/10.1109/TCE.1958.6372695
[12] F. Moskowitz and J. B. McLean, "Some reliability aspects of systems design," IRE Transactions on Reliability and Quality Control, vol. PGRQC-8, pp. 7–35, Sep. 1956.
https://doi.org/10.1109/IRE-PGRQC.1956.6540347
[13] T. A. Smith, "The background of reliability," IRE Transactions on Reliability and Quality Control, vol. PGRQC-8, pp. 55–58, Sep. 1956.
https://doi.org/10.1109/IRE-PGRQC.1956.6540350
[14] J. P. Lipp, "Topology of Switching Elements vs. Reliability," IRE Transactions on Reliability and Quality Control, vol. PGRQC-10, pp. 21–33, Jun. 1957.
https://doi.org/10.1109/IRE-PGRQC.1957.5007130