Dom suggested that “The distance to our goals always seems further to those realists without sufficient knowledge.” My personal experience in software is that when you are attempting to solve a difficult problem with many unknowns, you cannot predict the complexity of the problem or its solution without actually solving it. Good estimation in a software project is an exercise in pessimism, and I find that my pessimism is never enough! At the beginning of a project I am filled with a youthful exuberance and optimism that blinds me to what comes later: when I am waist-deep in complexity (and office politics), exuberance isn't enough to get me through the project, and progress throttles back to the baseline afforded by grim determination. This is a fair description of the AI research community, and presumably of many other areas of science as well.
A war is equivalent to the deadlines-looming stage of a project, where we always seem to pull miraculous rabbits out of our arses. The plateau of slow progress caused by disillusionment can be seen at work in the AI research community now. Anti-results such as Minsky's proof of the limitations of certain types of neural networks (a single-layer perceptron cannot even compute XOR), together with the failure to make quick progress, have muted the youthful exuberance of the AI community, and it is now in baseline progress mode.
This probing of the unknown reminds me of Turing's Halting Problem: for certain programs, where information is lacking, the only way to work out whether they will halt is to run them. And when a program over-runs, you can't know whether it is about to halt or never will. So there is a basic undecidability about complex software that is reflected in software projects. This is the "software crisis" we were all taught so much about at university - the reason why more than 50% of major projects go over budget, run late, or get scrapped altogether. We were taught that there is no silver bullet, and that the only way to overcome the crisis was to adopt strict programming discipline and use automated proof systems to flag code whose halting and correctness could not be established.
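The "run it and see" predicament can be sketched concretely. Below is a toy Python illustration (my own, not anything from Turing) using the Collatz iteration, whose termination for all inputs is famously unproven: we run the program under a step budget, and when the budget runs out all we learn is "inconclusive" - exactly the over-running situation described above.

```python
def collatz_halts_within(n, max_steps):
    """Run the Collatz iteration from n under a step budget.

    Returns True if the iteration reaches 1 within max_steps steps.
    Returns False when the budget is exhausted - which is
    *inconclusive*: the program might halt on the very next step,
    or it might run forever. We cannot tell without running longer.
    """
    steps = 0
    while n != 1:
        if steps >= max_steps:
            return False  # budget exhausted: no verdict, only a shrug
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return True
```

Starting from 27, the iteration takes 111 steps to reach 1, so a small budget reports failure while a larger one succeeds - the answer you get depends on how long you were willing to wait, which is the estimation problem in miniature.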
But we live in a capitalist economy whose driving force is the market. Software engineers are required to maximize functionality and minimize costs, so formal methods and proofs are not an option. Consequently we make conservative estimates about what is possible with a given number of programmers in a fixed time. I see no difference between this working environment and that of scientists, who are also driven to produce short-term results for their investors with limited resources - they don't even know whether there is a solution to the problem; they just know that there always has been one before.
So, Dom, I think I'm suggesting that rational pessimism is justified when predicting the time taken to perform a poorly specified task of great complexity - and I can prove it!
artificial intelligence Computer Science programming Work