Artificial Intelligence receives a great deal of hype, and with good reason. But is it appropriate for your enterprise? For your decision support system, you might want to consider the same technology with a different approach: Intelligence Augmentation.
Artificial intelligence receives a great deal of attention in the media.
The prospects are exciting, but how can a non-software company incorporate Artificial Intelligence into its processes, and why would it want to? AI is a powerful trend, but it is also nebulous, which makes it hard to wield. In this piece I will attempt to define Artificial Intelligence as well as its lesser-known sibling, Intelligence Augmentation (IA). I argue that getting to know Intelligence Augmentation will help you put the power of AI to work in your business. As you will see below, IA has its own lineage, but it can also serve as an approach that leverages artificial intelligence technology to enhance human intelligence rather than replace it, whereas AI, as Herbert Simon put it, is an attempt to put machines in a position to do “any work a man can do.” Think decision support systems rather than human replacement systems.
AI receives the lion’s share of the press and investment, while IA receives the scraps. But perhaps our priorities should be revisited, and IA should receive more attention.
Let us define and compare the two to see which one suits your project’s requirements. Toward the end of this series, I will give recommendations for implementing your idea as a project.
The State of AI Report 2019 defines artificial intelligence as “a broad discipline with the goal of creating intelligent machines, as opposed to the natural intelligence that is demonstrated by humans and animals. It has become a somewhat catch-all term that nonetheless captures the long-term ambition of the field to build machines that emulate and then exceed the full range of human cognition.” The idea and mission of AI have been around for a while. Christopher Neels points to the summer of 1956 as the beginning of artificial intelligence as a field. John McCarthy of Dartmouth College brought together leading mathematicians and scientists for an extended brainstorming session on artificial intelligence. From their proposal: “We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College… the study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Herbert Simon, one of the ambitious and optimistic attendees, predicted that “machines will be capable, within twenty years, of doing any work a man can do.”
Fast forward to 2019. In a KDnuggets article, Andy Cotgreave states: “Artificial Intelligence. It’s not intelligence, it might never be. The work of AI is at best clever use of algorithms to analyse large amounts of data. It has become a lazy term applied to too many tools and gadgets.” He has a point… Does today’s AI qualify as intelligence? According to Andrew Ng, one of the world’s foremost AI executives and educators, “99% of the value created by AI today is through one type of AI, which is learning A→B, or input to output mappings. For example, AI is getting really good at inputting a picture, and outputting, ‘Is it you?’ Zero, One.”
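Ng’s A→B mapping is, in practice, supervised learning: fit a function from labeled input/output examples, then use it to predict the output for new inputs. Here is a minimal sketch of that idea, with purely hypothetical toy data and a one-feature logistic model trained by plain gradient descent:

```python
# Minimal sketch of Ng's "A -> B" mapping: learn an input -> output
# function from labeled examples. Toy data; standard library only.
import math

# Each pair is (A, B): an input feature and a 0/1 label.
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

w, b = 0.0, 0.0  # parameters of a one-feature logistic model

def predict(x):
    """Estimated P(B = 1 | A = x) under the current parameters."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Fit by gradient descent on the log loss.
for _ in range(5000):
    gw = sum((predict(x) - y) * x for x, y in data) / len(data)
    gb = sum((predict(x) - y) for x, y in data) / len(data)
    w -= 1.0 * gw
    b -= 1.0 * gb

# The learned mapping assigns low inputs to B=0 and high inputs to B=1.
print(predict(0.2) < 0.5, predict(0.9) > 0.5)  # True True
```

Real systems replace the single feature with images or text and the tiny model with a deep network, but the shape of the problem, labeled A→B pairs in and a predicted B out, is the same.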
AI is broadly categorized into ANI (Artificial Narrow Intelligence) and AGI (Artificial General Intelligence). AGI is the ability of a machine to perform any general intelligent action; ANI targets specific problems. Rapid progress in ANI has led some to believe, falsely, that we are making comparable progress toward AGI. I think that is the clarification Andy Cotgreave makes above. With AGI, we would move well beyond the stage of an A→B mapping.
But AGI is still a ways off.
Machines excel at problems a human could solve with about one second of thought and for which lots of labeled data is available. What is an example of AI at work today? Photographing vehicles that break simple traffic laws is best accomplished by a machine. Large, constantly updated databases of license plates and vehicle models exist. Waiting at an intersection for lawbreakers is a monotonous job for a human, and issuing tickets by hand is time-consuming. Machines do this very well, and sending the ticket through the mail or electronically is also easy to automate and maintain. Of course, humans should monitor the system, evaluate anomalies, and seek improvements in performance. Such a domain is an excellent space for artificial intelligence in its present state.
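The ticketing flow described above can be sketched as a small decision pipeline. This is a hypothetical illustration, not a real system: a camera event is matched against a plate registry, a ticket is queued for delivery, and anything the machine cannot resolve is routed to a human reviewer (the monitoring role noted above):

```python
# Hedged sketch of the automated-ticketing flow: match a detected plate
# against a registry and queue a ticket; route unknowns to a human.
# All names and data are hypothetical.
plate_registry = {
    "ABC123": "alice@example.com",
    "XYZ789": "bob@example.com",
}

def process_violation(plate, violation="red light"):
    """Return a ticketing decision for one camera-detected violation."""
    owner_contact = plate_registry.get(plate)
    if owner_contact is None:
        # Unknown plate: flag for human review rather than auto-ticketing.
        return {"status": "review", "plate": plate}
    return {
        "status": "ticketed",
        "plate": plate,
        "send_to": owner_contact,
        "violation": violation,
    }

print(process_violation("ABC123")["status"])   # ticketed
print(process_violation("UNKNOWN")["status"])  # review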
How about another example? We turn once again to Andrew Ng:
“Possibly the most lucrative application of [AI] today is online advertising. If you look at all of the large web serving platforms, Google, Facebook, Baidu, Twitter, and so on, almost all of the large ad platforms have a piece of AI technology that takes as input an ad and some information about you the user and tries to output ‘Will you the user click on this ad?’ Because for the online ad platforms, every click is money and so there is a very large incentive to try and show you an ad that you have a 5% chance of clicking on, rather than only a 4.5% chance.”
With online advertising you have an enormous, constantly updated, labeled database of user mouse actions, text-based searches, and social media behavior; the work is monotonous, requires little thought, and human lives aren’t at stake. It’s a perfect use of today’s AI.
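The ad-serving decision Ng describes is again an A→B mapping: features about the ad and the user go in, a predicted click probability comes out, and the platform shows whichever candidate is worth the most. A minimal sketch, with entirely hypothetical ads and numbers mirroring the 5% vs. 4.5% figures above:

```python
# Hedged sketch of the ad-selection step: given each candidate ad's
# model-predicted click-through rate (CTR), show the ad with the highest
# expected revenue (CTR x cost-per-click). All values are hypothetical.
candidate_ads = [
    {"id": "ad_a", "predicted_ctr": 0.045, "cost_per_click": 0.50},
    {"id": "ad_b", "predicted_ctr": 0.050, "cost_per_click": 0.50},
]

def expected_revenue(ad):
    """Expected earnings from one impression of this ad."""
    return ad["predicted_ctr"] * ad["cost_per_click"]

best = max(candidate_ads, key=expected_revenue)
print(best["id"])  # ad_b: at equal cost-per-click, the 5% ad beats the 4.5% ad
```

The hard part in production is the CTR model itself; the selection step shown here is why even a half-percentage-point improvement in prediction translates directly into revenue.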
Many businesses can certainly benefit from the AI-operated ad platforms mentioned above, but what about the rest of the day-to-day operations on the planet? Mr. Ng goes on to say that the real opportunity for AI is outside the software industry. Software corporations attract a great deal of attention, but the rest of the business world is enormous by comparison. How can we put AI technology to work for companies not named Google, Facebook, and Baidu? In the next part of this series, we will look at the definition of “Intelligence Augmentation” and “The Differences between Artificial Intelligence and Augmented Intelligence.”
In the third part of the series, we will finish with recommendations for implementing your idea as a project.
Thanks for reading! If you have comments or questions, please add them below and/or reach out on Twitter at @_joecha_.
And don’t forget to sign up for our newsletter.