In his 1950 paper 'Computing Machinery and Intelligence', Alan M. Turing challenges Ada Lovelace's assertion that machines cannot surprise us. For Turing, this conviction rests on the false assumption that anything 'surprising' must arise spontaneously from a 'creative mental act', thereby overlooking the extensive training and prior knowledge that such an act presupposes. Yet this pre-existing knowledge is precisely the concern of Bayesian epistemology: by providing a framework for hypothesis formation (i.e. abduction), it enables the generation of new concepts and thus a way to learn how to learn. This paper revisits the question of machine learning to explore whether Bayesian learning, understood as a form of abductive reasoning, can offer an alternative to the current dichotomy between inductive and deductive approaches in machine learning debates. It further argues that machine learning invariably entails a degree of situatedness, as illustrated by Bayesian belief networks, which arguably rely on abductive reasoning. In this way, the discourse on Bayesian learning models can make explicit what often remains implicit in contemporary machine learning debates and methodologies. The aim is to broaden the discourse on machine learning by reinstating historical and socio-technical elements that have been overlooked; in particular, conceptual knowledge derived from existing categories may be what enables machines to learn new things. Returning to Turing's original formulation of the problem, the paper examines the role of abductive reasoning in machine learning in order to determine whether a 'creative mental act' is indeed possible.
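The claim that Bayesian belief networks embody abductive reasoning can be made concrete with a minimal sketch: given observed evidence, candidate hypotheses are ranked by their posterior probability, and the most probable one is selected as the 'best explanation'. The toy network below (a variant of the standard sprinkler example) is purely illustrative; all hypothesis names and probability values are assumptions for the sake of the sketch, not taken from the paper.

```python
# A minimal sketch of abduction as Bayesian inference: given evidence,
# rank candidate hypotheses by posterior probability (Bayes' rule).
# All hypotheses and numbers below are illustrative assumptions.

def posterior(priors, likelihoods, evidence):
    """Return P(hypothesis | evidence) for each hypothesis via Bayes' rule."""
    unnorm = {h: priors[h] * likelihoods[h][evidence] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Candidate explanations for the observation 'grass_wet' (toy belief network).
priors = {"rain": 0.2, "sprinkler": 0.1, "neither": 0.7}
likelihoods = {
    "rain":      {"grass_wet": 0.9},   # P(wet | rain)
    "sprinkler": {"grass_wet": 0.8},   # P(wet | sprinkler)
    "neither":   {"grass_wet": 0.05},  # P(wet | neither)
}

post = posterior(priors, likelihoods, "grass_wet")
best = max(post, key=post.get)  # abduction: pick the best explanation
print(best, round(post[best], 3))  # → rain 0.61
```

The point of the sketch is that the 'surprising' conclusion (rain, despite its low prior) is generated entirely from pre-existing conceptual knowledge encoded in the priors and likelihoods, which is the sense in which Bayesian inference operationalises abduction.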

