If there is one aspect of human nature that can be pinpointed as the stimulus for human progress, it is curiosity. Curiosity is the first and simplest emotion we discover in our minds. From our very birth, it becomes our teacher, our guide to the world, our pathway to the experiences that shape who we are and what we do. It transcends geography, religion, ethnicity, income, race, and ideology, belonging instead to a far larger category known as “human”.
In all of the myriad objects we humans have created over the course of our existence, all of the unique stories and situations that have fostered some of the most revolutionary inventions in human history, all of the different solutions we have sought to address our challenges, one question stands out as the crux of change, the marker of a transition from helplessness to hopefulness: “What if?”
From the invention of the astrolabe, which facilitated sea travel, to the invention of satellites, which have enabled and revolutionized telecommunications, we humans have tended to channel our intrinsic curiosity into questioning our observations and perceptions of the natural world. But a more remarkable attribute of human progress is our tendency not only to ask ‘why?’ but also to ask ‘why not?’ That one extra question marks the key difference between progress and retrogression, between revolution and stagnation, between success and failure. Had Einstein not questioned why we cannot travel faster than the speed of light, who knows whether the theory of special relativity would have been developed or adopted? When it comes to the world of digital and computational technology, very few people personify this proclivity to adapt and innovate better than Charles Babbage, a polymath described by many as “the father of the computer.”
Babbage’s work and his inventions are, in more ways than one, symbolic of where we as humans have arrived from a technological standpoint. When Babbage designed his revolutionary Difference Engine to mechanize basic calculations, he had one goal in mind: to eliminate the errors made in the manual computation and transcription of mathematical tables. Although his goal may retrospectively seem rudimentary, at the heart of Babbage’s invention lay an effort to turn the simple, basic observations we make into something complex that removes the need to perform repetitive, mundane tasks. An almost identical approach can be found in the machine learning algorithms for linear and multivariate regression, as well as those for optimization. The methods machine learning systems use to compute the hypothesis function rely on basic matrix calculations and manipulations like transposition and inversion. The vectorized cost function used in linear regression models is nothing more than a compact matrix form of the familiar sum of squared errors. Optimization algorithms like stochastic gradient descent rely on nothing more than basic ideas from differential calculus, and a little common sense, to find local optima effectively. Although these techniques are not overly complicated, they dramatically reduce the time needed to make successful calculations and can thus save substantial time and money.
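The vectorized cost function and gradient update mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation: the toy dataset, learning rate, and iteration count are invented for the example, and batch gradient descent is shown rather than the stochastic variant.

```python
import numpy as np

# Hypothetical toy data: fit y ≈ 2x + 1. First column of X is the bias term.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

def cost(theta, X, y):
    """Vectorized mean-squared-error cost: J = ||X·theta − y||² / (2m)."""
    m = len(y)
    residual = X @ theta - y
    return residual @ residual / (2 * m)

def gradient_descent(X, y, alpha=0.1, iters=1000):
    """Batch gradient descent: repeatedly step against the cost gradient."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        theta -= (alpha / m) * (X.T @ (X @ theta - y))  # gradient in matrix form
    return theta

theta = gradient_descent(X, y)  # converges near [1.0, 2.0] for this data
```

The whole fit is expressed with transposes and matrix products, exactly the kind of mechanical, repetitive arithmetic that the text argues machines exist to take off our hands.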
Perhaps more than the Difference Engine, it is Babbage’s Analytical Engine, his subsequent attempt at an even better calculating machine, which highlights the reach of modern technology. The Analytical Engine is significant not because it was immensely successful in its time, but because, for the first time, it signalled a shift from mechanized arithmetic to general-purpose computation. More important than its remarkable features and abilities was the Analytical Engine’s theorized ability to process conditionally: a computer could, for the first time, really make its own decisions – if X, then Y; while A, do B. While the applications have since grown exponentially, the core concept, the fundamental principle of a computer, has remained unchanged. From the most primitive Turing machine to the most complicated supercomputer, all computers perform, in some capacity, the same fundamental task: use some sort of data (either entered by a user or provided in some other manner) to perform some sort of operation in accordance with a certain set of instructions.
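The control flow hinted at above – “if X, then Y; while A, do B” – is still the backbone of every program. As a toy illustration (not anything Babbage or Lovelace actually wrote), Euclid’s algorithm for the greatest common divisor uses nothing but a loop and a conditional applied to its input data:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: data in, a fixed set of instructions applied, result out."""
    while b != 0:          # "while A, do B": repeat until the remainder is zero
        a, b = b, a % b
    if a < 0:              # "if X, then Y": normalize the sign of the result
        a = -a
    return a
```

A handful of such primitives, composed, is all that separates the Analytical Engine’s design from a modern supercomputer in principle.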
And therein lies both the power and the limitation of modern “smart” technology. No matter the methodology used, the ultimate goal of modern machine learning remains the same: to do what humans do, but to do it better than humans. Yes, infinitely more complicated variations of this goal have been developed; however, at the end of the day, all machine learning algorithms revolve around solving some human problem, overcoming some human challenge. In the traditional sense, then, can computers ever be considered truly intelligent? Will they ever independently ask the vital question: “Why not?” Even as we enter the fourth industrial revolution – one which continually blurs the lines between the physical, digital, and biological spheres – we cannot teach machines to be curious. This inability of computers to form and pursue independent goals, no matter how intelligent we make them, raises an important question: is it curiosity which ultimately makes us human? Is curiosity the dividing line between humanity and humanity’s creation? Perhaps the more pertinent question is whether it is even possible to instill curiosity into something that has been created for the very satiation of that curiosity. In broad and simple terms, machines are made only because someone, at some point, is curious about some idea and no longer wants to remain curious about it. So if something has been built to negate curiosity, how can we teach it to be independently curious itself?
In the mid-1800s, a young woman by the name of Ada Lovelace was tasked with translating a French paper on Babbage’s Analytical Engine into English. Curious and inquisitive by nature, Lovelace was so intrigued by the Engine’s potential that she appended her own notes to the translation, signed only with her initials, suggesting ways in which the Engine could be used and improved. The notes, which ended up being longer than the translation itself, contained what is widely regarded as the first algorithm ever tailored specifically for a computer. Many of Lovelace’s notes on the Analytical Engine contain ideas that form the fundamental pillars of modern software programming, and as such, she is often cited today as the first computer programmer. If Babbage is the paradigm of modern technology, Lovelace can be considered the paragon of that which makes us human: curiosity. And as long as we cannot teach computers to be Lovelace, we will not be able to develop a truly intelligent machine.