Geoffrey Everest Hinton is a British-Canadian computer scientist and cognitive psychologist renowned for his work on artificial neural networks and deep learning. He obtained his BA in Experimental Psychology at the University of Cambridge and his PhD in Artificial Intelligence at the University of Edinburgh. After his doctorate, he held postdoctoral and faculty positions at prestigious institutions, including the University of Sussex, the University of California, San Diego, and Carnegie Mellon University, where he joined the computer science faculty. His long record of scholarship is reflected in his title of Emeritus Professor at the University of Toronto.(1,2) Besides teaching, Dr. Hinton founded the Gatsby Computational Neuroscience Unit at University College London and worked at Google as a distinguished researcher, vice president, and Engineering Fellow, where he continues to research and advance artificial intelligence.(1,2)
Dr. Hinton’s cognitive psychology background helped him see the potential of artificial neural networks and of learning by backpropagation. Although the underlying idea dates to the 1960s, the method was applied, popularized, and elaborated with David Rumelhart and Ronald Williams in 1986 as a training procedure for feedforward multi-layer networks. This breakthrough enabled algorithms to “self-adjust,” bringing their outputs ever closer to the desired ones.(3,4)
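The “self-adjustment” described above can be sketched in a few lines of code. The following is a minimal, illustrative example of backpropagation training a small two-layer feedforward network on the XOR problem; the architecture, learning rate, and iteration count are arbitrary choices for illustration and are not drawn from the 1986 paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a task a single-layer network cannot learn, but a hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for one hidden layer of 4 units.
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(2000):
    # Forward pass: compute hidden activations and the network's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # "Self-adjust": nudge each weight to reduce the remaining error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

After training, the mean squared error has dropped well below its starting value, showing the network refining its output toward the desired one, exactly the behavior the 1986 procedure made practical for multi-layer networks.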
Backpropagation uses the discrepancy between the actual and the desired output to adjust how the input variables are weighted, related, and processed. These adjustments may strengthen or weaken the association of individual input variables, and of combinations of variables, with the output, resembling in some ways how the brain reinforces and corrects connections during learning. This procedure is now standard in many artificial neural networks, where the training of hidden layers has greatly increased the complexity of the tasks artificial intelligence can perform today. However, Dr. Hinton also recognized the limitations of feedforward networks and ambitiously sought an alternative model that could learn internal representations rather than mere input-output mappings. The answer, developed and popularized together with Terrence Sejnowski, is the Boltzmann machine, a recurrent and stochastic neural network. Although rooted in ideas from statistical physics, the Boltzmann machine found its most influential applications in the cognitive sciences. The impact of such models has transcended generations, and this scientist is considered by many the “Godfather of Deep Learning.”(3,5,6,7,8)
This leader’s significant contributions lie in image processing, character recognition, and phoneme/speech recognition. Dr. Hinton researched computer vision, image processing, and character recognition for many years; as early as 1978, he explored the use of artificial neural networks for these purposes. Later, he developed a stochastic approach to image processing that included shape recognition and separation from the background, reduction of illusory conjunctions, and adaptive elastic models for handwritten character recognition.
In 2012, a significant improvement in convolutional architectures made Dr. Hinton and his students Alex Krizhevsky and Ilya Sutskever winners of the ImageNet competition: their program, AlexNet, roughly halved the object-recognition error rate. Contributions to speech and language processing include principles of isolated phoneme recognition, speech recognition and its optimization with time-delay neural networks, and Glove-TalkII, a speech synthesizer driven by a series of neural networks and several wearable sensors.(8,9,10)
This AI mastermind became the Chief Scientific Adviser of the Vector Institute, an initiative of the University of Toronto founded in 2017 with support from the Government of Canada, the Province of Ontario, and Google. This AI-dedicated institution explores the technology’s potential uses in many fields, including healthcare, finance, and materials science. The following year, he published a viewpoint article in JAMA that gives healthcare professionals the basic knowledge needed to understand artificial intelligence principles clearly and concisely. In it, Dr. Hinton explains how AI can change healthcare as we know it, outlines its overall functioning, and offers examples of its applications in predicting adverse events and detecting retinal diseases.(9,11,12)
Additionally, Dr. Hinton is one of the leaders of Google Brain’s Toronto office, one of the most recent locations for this satellite operation. This scientist’s stature is reflected in his widespread recognition, which includes the Association for Computing Machinery A.M. Turing Award in 2018 (jointly with Yoshua Bengio and Yann LeCun), election as an Honorary Foreign Member of the US National Academy of Engineering, and Fellowships of the Cognitive Science Society, the Royal Society (UK), and the Royal Society of Canada.(1,2,9)
Geoffrey Hinton has appeared in Toronto Life Magazine’s list of Toronto’s 50 most influential people in its 2018 and 2019 editions, in The Bloomberg 50 by Bloomberg Businessweek in 2017, and in The Globe and Mail Report on Business’s list of the 50 most influential people in Canadian business that same year.(1)