We could start connecting the dots as far back as 1943, in the middle of World War II, but movies like Enigma and The Imitation Game have already done that for us. We know the war accelerated the foundations of modern computing: digital machines, cryptography and signal processing. These went on to enable cellular phones, digital photography, the satellite network behind GPS and the internet. So let’s start with this groundwork.
The business model of this age was to figure out a way to apply the internet to a common everyday business like news, clothing or information. Each time someone figured out a way to do this well, they would disrupt the existing players in the domain.
The first wave of these internet companies included AOL, Yahoo, MSN and others. They took what people already did, and gave it back to them, only this time through a web browser running on a computer.
They did well, but only in places where the internet and a computer with a web browser were accessible. With a computer costing in the $600–1,000 range, it was not so accessible in the eastern part of the world.
But still, this proved that a computer with a web browser was useful to many of us, and it drove the price of computers and their parts down each year, until the hardware was cheap enough to actually fit inside a phone-sized device.
Each time, the valuations of such companies would peak at unimaginable heights because of overestimation of what these companies were actually capable of achieving. The very companies that dominated the internet on the strength of a new medium never reached their full potential before another big trend came into the picture and displaced the original disruptors.
The smartphone was a computer made to fit inside your pocket and you could also hold it up to your ear. But it was bad at all the things you would do on a computer, because it had a small screen, a tiny keyboard and a bad web browser.
This unique set of constraints pushed developers to write high-performance native applications that extracted the maximum power from these small devices. But it was not until someone paired them with a great touch-screen interface that these apps began to really shine.
Finally you could do some things more easily than on a computer. Everything was more accessible and ready to go, always available in your pocket. No computer to turn on, no internet to switch on and no desk to sit at. It was alive.
And thus was born our second era of companies. They took everything that people already did, but provided it through the window of the phone. A swipe and a tap could do an endless number of things. Every time someone figured out how to apply this model well to an existing business domain, they succeeded and did well.
And so the wheel spun again. High expectations were set on these companies and their valuations soared. But they were never able to achieve that full potential on the mobile app platform alone. Again, another disruption was lurking around the corner.
This is today.
All this while, as the computer trend and the smartphone trend churned, computing power got cheaper every year. It got so cheap that filling a room with a farm of computers became affordable, and companies began selling conceptual units of computing separately. Now you could buy processing, memory and storage as separate pieces, even though a physical computer bundles all of these together.
The cloud was born. With the click of a button you could order for yourself a virtual computer that had arbitrary processing, memory and storage capability — no physical computer would ever have such a specification. But the cloud made it possible for companies to rent a virtual one for themselves.
And hence what was a constrained resource in the user’s hand became an abundant resource in the company’s server farms. Companies moved as much computing as they could from the user’s device to the server. Image resizing, video processing, sending files, storing data: everything moved to the server.
Computing became so abundant that, instead of writing step-by-step code to do what needed to be done, it was possible to let the computer try a thousand random things and learn from the few tries that actually gave an expected result. So much data had been collected that the computer was simply let loose on it to find patterns and classify the data into meaning.
Given a thousand speech samples of the word Hello, the computer could now learn how to recognise when someone said the word. Given a thousand pictures of a cat, the computer could now make good guesses about which pictures had cats in them.
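The cat-picture idea can be sketched with a toy classifier. Everything below is invented for illustration: the features, the data and the function names are assumptions, and a real system would use far richer features and models. But it shows the shape of the idea, learning from labelled examples instead of hand-written rules.

```python
# A minimal nearest-centroid classifier. "Training" just averages the
# feature vectors seen for each label; "prediction" picks the label
# whose average is closest to a new example.

def train(examples):
    """examples: list of (features, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Pick the label whose centroid is closest to the new features."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], features))

# Pretend features: (ear pointiness, whisker length) for "cat" vs "dog".
data = [((0.9, 0.8), "cat"), ((0.8, 0.9), "cat"),
        ((0.2, 0.1), "dog"), ((0.1, 0.3), "dog")]
model = train(data)
print(predict(model, (0.85, 0.75)))  # prints "cat"
```

With a thousand examples instead of four, and pixel statistics instead of two made-up numbers, this same averaging-and-comparing loop is what lets the computer make good guesses about which pictures have cats in them.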
And thus Machine Intelligence was born.
Traditional software development involved a designer deciding what the flow of a product should be. A few tests and some data would help them decide the right thing to show the user after a certain action. The developer would then implement it using established practices of software engineering. The software would be tested and shipped to the users, each of them receiving roughly the same experience.
Over time data would be collected and work would start on the second version of the software based on insights from the first version of the product. And the cycle would repeat.
Machine Intelligence makes this process outdated. Manually written rule-based programming that says if-this-then-that will soon be a thing of the past. Programmers will instead build systems that understand what people generally do and then reinforce that behaviour through learning.
The software development cycle that used to take six months or even a year would instead happen every minute. The system would learn what the user clicks and what the user ignores, and decide the best thing to show next, tuning itself every time a user clicked somewhere.
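The tune-itself-on-every-click loop can be sketched as a tiny epsilon-greedy learner. This is a toy sketch under invented assumptions, the item names and click rates are made up, but it captures the mechanism: every interaction updates the system's estimate of what works, with no release cycle in between.

```python
import random

class ClickLearner:
    """Mostly show the item with the best observed click rate,
    occasionally explore a random one, and update after every click."""

    def __init__(self, items, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {item: 0 for item in items}
        self.clicks = {item: 0 for item in items}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))  # explore
        # Exploit: unseen items default to rate 1.0 so they get tried first.
        return max(self.shows, key=lambda i:
                   self.clicks[i] / self.shows[i] if self.shows[i] else 1.0)

    def record(self, item, clicked):
        # The whole "release cycle" is this one update, run per interaction.
        self.shows[item] += 1
        if clicked:
            self.clicks[item] += 1

# Simulated users who secretly prefer "headline_b" 70% of the time.
random.seed(0)
true_rates = {"headline_a": 0.2, "headline_b": 0.7}
learner = ClickLearner(list(true_rates))
for _ in range(2000):
    item = learner.choose()
    learner.record(item, random.random() < true_rates[item])

best = max(learner.shows,
           key=lambda i: learner.clicks[i] / max(learner.shows[i], 1))
```

After a couple of thousand simulated users, the learner has shifted almost all of its traffic to the better headline, without anyone shipping a second version of the software.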
This is only one example of applying Machine Intelligence to the software development process. Companies are going to take everything that we have been doing and figure out ways to merge it with Machine Intelligence. Again there will be successful companies and again there will be overblown valuations.
But hey, there is the opportunity to change the world! Heck, why not.
“The business plans of the next 10,000 startups are easy to forecast: take X and add AI” — Kevin Kelly, Wired
In all of this, there is one dot missing. What would be the interface to such an intelligent platform? Each time a new technology arrived, it took a new user interface to benefit from it. The internet needed the graphical user interface with its point-and-click mouse, and mobile apps needed a fluid touch screen. What is the UI of the Machine Intelligence Age?
They, too, are already here: wearables.
Wearables have a bad interface in the traditional sense, if you consider an interface to be a screen, that is. Hence critics of wearables treat the need for a screen as paramount.
But in the age of machine intelligence, the computer already knows the most likely thing you will need and want, and it can learn when that is not the case. Plus, with the many sensors soon to enter our surroundings in the form of internet-connected devices, the computer is far more aware of context and surroundings.
It’s as if it has eyes and ears on what we want and need, though fortunately not literally. It knows when we are home, it knows the temperature, and it knows how to control the air conditioner, so it can put all of these together and do the right thing the moment we get home.
Many movies depict machine intelligence as robots, complete with eyes and ears and hands and legs. That will simply not be the case.
Machine Intelligence Begins — and it is not like in the movies.
If you enjoyed this article, please hit 💚 recommend!
You can follow me on Twitter @paramaggarwal.