If you have access to enough structured or semi-structured data, you can also build these capabilities into your own homegrown software — either to power your products or to improve company operations. Under the right conditions, the AI solutions you build can bring you extraordinary capabilities.
This chapter provides more background on AI, and describes what it takes to build AI solutions.
Artificial intelligence (AI) is a broad class of approaches to learning from large datasets. AI is variously defined as “the science and engineering of making intelligent machines¹,” or “A branch of computer science dealing with the simulation of intelligent behavior in computers,” or “The capacity of a machine to imitate human behavior².” Early experiments in AI involved building a series of if-then statements or a statistical model mapping data to categories. These rules engines, expert systems and knowledge graphs have been referred to as symbolic AI. They follow rules and logic that have been programmed by humans.
More recently, machine learning has emerged as a distinct AI sub-domain. Unlike symbolic AI, which must be explicitly programmed to generate its results, a machine learning program can modify itself when it is exposed to new data. The program can generate its own algorithm, derived from comparing example data to a desired output. By this method, the machine learns automatically. Machine learning models “learn” to solve a problem, such as minimizing error (loss) or pursuing some other objective.
There are many subdomains within machine learning, as shown by this graphic:
At a high level, machine learning may be subdivided into three categories: supervised, unsupervised and reinforcement learning.
In supervised learning, the model learns patterns from past structured and semi-structured data and applies them to new data to predict future events. An initial labeled dataset is used to produce an inferred function that predicts output values. Over time, the algorithm can be further trained as errors are discovered. Supervised learning includes classification problems, such as fraud detection, customer retention, medical diagnosis and assembly-line defect detection; and regression problems, such as weather forecasting, market forecasting and life expectancy estimates.
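To make this concrete, here is a minimal pure-Python sketch of supervised learning: a "decision stump" that infers a threshold from labeled examples and then applies it to new data. The dataset, field meanings and threshold are invented for illustration only.

```python
# A minimal sketch of supervised learning: a decision stump that infers a
# threshold from labeled examples, then applies it to new data.
# The dataset is synthetic and purely illustrative.

def train_stump(examples):
    """Learn the threshold that best separates the two labels."""
    best_threshold, best_errors = None, None
    for t in sorted(x for x, _ in examples):
        errors = sum(1 for x, label in examples if (x >= t) != label)
        if best_errors is None or errors < best_errors:
            best_threshold, best_errors = t, errors
    return best_threshold

# Labeled training data: transaction amounts flagged as fraud (True) or not.
training = [(10, False), (25, False), (40, False), (120, True), (200, True)]
threshold = train_stump(training)

# The inferred function generalizes to new, unseen inputs.
predict = lambda x: x >= threshold
print(predict(30), predict(500))  # → False True
```

The stump is the simplest possible inferred function; real systems learn far richer mappings, but the pattern of "fit on labeled examples, then predict on new data" is the same.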
When data can’t easily be classified or labeled, unsupervised learning methods are required. Here, the model studies the data to draw inferences describing hidden structures in the dataset. Data scientists can use these inferences to guide further training. Sometimes a small set of labeled data can be combined with a large set of unlabeled data to improve results, an approach known as semi-supervised learning.
Examples of unsupervised learning include feature extraction, as used in image recognition, and big data visualization; both belong to a sub-branch called dimensionality reduction. Another sub-branch is clustering, which supports applications such as recommendation engines, customer segmentation refinement and targeted marketing refinement.
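As a toy illustration of clustering, here is a bare-bones k-means sketch in pure Python. The spend figures are invented, and the naive initialization is chosen for readability rather than robustness.

```python
# A toy sketch of unsupervised learning: k-means clustering on unlabeled
# 1-D data (e.g., customer spend). No labels are given; the algorithm
# infers the hidden structure (two groups) on its own.

def kmeans(points, k=2, iterations=10):
    centroids = points[:k]  # naive initialization, for illustration only
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

spend = [12, 15, 11, 14, 95, 99, 102, 97]
print(kmeans(spend))  # two cluster centers: one low-spend, one high-spend
```

A segmentation team would interpret the resulting centers (here, roughly "low spenders" and "high spenders") and act on them, which is exactly the customer-segmentation use case above.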
In reinforcement learning, the model is built to pursue a goal. A reinforcement signal is sent to the machine when its actions lead towards the desired result. The model learns by producing actions and discovering errors and rewards. Through trial and error, machines discover the ideal behavior that maximizes performance. Examples of this type of machine learning include robot navigation, game AI, and real-time decision support systems.
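The trial-and-error loop can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms. The corridor environment, rewards and hyperparameters below are illustrative assumptions, not a production setup.

```python
# A minimal sketch of reinforcement learning: tabular Q-learning on a tiny
# corridor of 5 cells. The agent is rewarded only on reaching the rightmost
# cell, and learns by trial and error which actions maximize reward.
import random

random.seed(0)
n_states, actions = 5, [-1, +1]        # move left or move right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):                   # training episodes
    s = 0
    while s != n_states - 1:
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0
        # Reinforcement signal: nudge the value estimate toward the reward
        # plus the discounted value of the best next action.
        q[(s, a)] += alpha * (reward + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
        s = s2

policy = [max(actions, key=lambda act: q[(s, act)]) for s in range(n_states - 1)]
print(policy)  # the learned behavior: move right from every cell
```

Robot navigation and game AI use the same principle at vastly larger scale, typically with neural networks standing in for the lookup table.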
Deep learning is a subset of machine learning. With deep learning, the model is made up of multiple layers, comprising an artificial neural network. These hidden layers enable the model to learn subtle features in the data. Simple features at one layer are recombined in the next layer, creating more complex patterns. Input data runs through mathematical operations at each layer to progressively refine the model. Deep learning is computationally intensive and requires very large datasets. As computational costs fall and AI-specific technologies mature, the universe of problems deep learning can tackle continues to expand.
These systems require lots of compute power and more training time than other AI methods, but they tend to achieve high levels of accuracy. They perform particularly well on unstructured data, where they can perceive hidden patterns. Deep learning systems are powered by nonlinear algorithms, which find complex nonlinear relationships between input and output features.
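The idea of layers recombining simple features into complex ones can be shown with a tiny hand-wired network. The weights below are set by hand for clarity (in a real deep learning system they would be learned from data), and the network computes XOR, a nonlinear function that no single linear layer can represent.

```python
# A sketch of how hidden layers recombine simple features into complex ones.
# Weights are hand-set for illustration; in practice they are learned.

def step(x):                      # a simple nonlinear activation function
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: two simple features of the input.
    h_or  = step(x1 + x2 - 0.5)   # fires if either input is on
    h_and = step(x1 + x2 - 1.5)   # fires only if both inputs are on
    # Output layer recombines them: "either, but not both".
    return step(h_or - h_and - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # 0, 1, 1, 0
```

Stacking many such layers, with millions of learned weights instead of three hand-set ones, is what lets deep networks capture the subtle nonlinear structure described above.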
Deep learning tackles complex problems such as natural language processing, image recognition, sound recognition and recommender systems. Deep learning systems can recognize spoken words, translate languages, decipher handwriting and extract valuable insights, such as entities and sentiment, from documents. They can be used to discern patterns in the financial markets. Deep learning systems are exceeding human-level performance in an increasing number of tasks. For example, Google’s DeepMind project created AlphaGo, a deep learning model that has defeated two world champions at the game of Go³.
Because of the complexity of deep learning programs, the way the system learns can be opaque, making explainability difficult. It’s hard to forensically identify how the system produced a result. Depending on how the technology is used, this can raise ethical issues. For instance, if a person is denied a loan because of the output of a deep learning program, and the company can’t explain why, is that ethical? You will read more about this in Chapter 29 — Trust.
Robotic Process Automation
Robotic Process Automation is a tool to advance the enterprise’s adaptive imperative in any high-volume, low-variation domain. RPA automates human tasks, improving quality while reducing costs. As long as the domain exhibits stable, repetitive patterns of work that are unlikely to change over time, RPA can be an effective solution.
A company uses RPA tools to configure a software robot so that it receives the right data, transforms it in line with the business logic and initiates the intended output. RPA is used for such applications as accounts payable processing, accounts receivable processing and customer support.
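A minimal sketch of such a robot's receive-transform-route logic follows. The field names and the approval threshold are hypothetical, not drawn from any specific RPA product.

```python
# An illustrative sketch of an RPA-style rule: receive a record, apply fixed
# business logic, and route the output. Field names and the $1,000 threshold
# are hypothetical assumptions for this example.

def process_invoice(invoice):
    # Validate: the unvarying input pattern that RPA depends on.
    required = {"vendor", "amount", "po_number"}
    if not required <= invoice.keys():
        return {"status": "rejected", "reason": "missing fields"}
    # Transform per business logic: auto-approve small invoices,
    # queue larger ones for human review.
    if invoice["amount"] <= 1000:
        return {"status": "approved", "queue": "payment"}
    return {"status": "pending", "queue": "manual_review"}

print(process_invoice({"vendor": "Acme", "amount": 250, "po_number": "PO-7"}))
# → {'status': 'approved', 'queue': 'payment'}
```

Note that every branch is fixed in advance by a human; unlike machine learning, nothing here adapts when the data changes, which is why RPA suits only low-variation work.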
To be executed effectively, RPA solutions need to be built and maintained properly. The design and optimization of RPA solutions needs to be carefully managed by a dedicated domain team close to the point of impact.
Whether you access the power of AI through vendor systems or by building it yourself, the applications are many. Here are some of them (associated with the corresponding operating system or meta system):
As you can see, most operating and meta systems in the enterprise harbor opportunities to apply AI. Of course, every application of AI is an investment decision. Deriving AI benefits from a vendor tool costs less than building AI solutions yourself. But it’s still an investment. The vendor solution must sit inside of a system and a domain, solving real problems for the people managing that domain.
When considering AI, important questions must be asked. To what degree does this proposed AI solution advance the business outcome objectives of the domain? How will it support the domain’s workflows? What skills must the people who work within these workflows possess to take full advantage of this new AI capability? Who will govern data use and address privacy concerns? What will it cost to rent this vendor tool? Is the requisite training data obtainable in enough volume, and can we obtain it at a low enough cost? All of these considerations must be factored in when deciding whether to invest in AI capabilities.
These considerations weigh even more heavily if you build these capabilities yourself. To “do AI well” takes a unique set of data requirements, skills and technical systems. And it will take time to mobilize these resources, train the models and analyze the results. There is no instant fix here. These significant investments of money and time need to be weighed against the benefits achieved.
Let’s dive into some examples of companies that use AI.
For instance, Pinterest uses AI to refine content discovery.
Facebook and other companies leverage machine learning to optimize chat bots:
Ecommerce retailers use machine learning for recommendation engines and to maximize conversion:
Salesforce.com uses AI capabilities to offer lead scoring (predictions of the likelihood a lead will close):
AI can also be used to optimize technical systems, such as with threat detection and fraud detection. As deep learning capabilities improve, cognitive solutions such as automated cyber security response are likely to emerge.
AI capability can be a key source of competitive advantage in the fit systems enterprise. Rich feedback loops drive accelerated learning to enable better and faster decision making. When built with the right human and technical resources and pointed at the right problem domains, AI can supercharge learning and decision making to advance the enterprise, leaving competitors in the dust.
As you contemplate whether it’s time to build AI, machine learning or deep learning capability, a basic consideration is whether your problem domain yields sufficient data to enable analysis. The statistical accuracy of a model depends on the size of the dataset, given the complexity of the problem. As Jason Brownlee noted in his post, “How Much Training Data is Required for Machine Learning⁴?” three factors impact the amount of data you need:
- The complexity of the problem (the unknown underlying function that best relates input variables to the output variable)
- The complexity of the learning algorithm (the algorithm that inductively learns the unknown underlying mapping function from the specific examples in the data)
- The level of precision that is required
As a general rule, the simplest problems will need thousands of examples, average machine learning problems will require hundreds of thousands, and complex deep learning problems will require millions to tens of millions (unless you have access to pre-trained models). You can estimate the data required by building a learning curve: evaluate the model’s skill at producing accurate output as a function of dataset size, then plot the results with dataset size on the x axis and model skill on the y axis. This learning curve will help you identify the point of diminishing returns as dataset size increases.
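A learning curve can be sketched as follows, using a synthetic dataset and a deliberately simple nearest-centroid classifier. All numbers here are illustrative; the point is the shape of the curve, not the values.

```python
# A sketch of building a learning curve: train a nearest-centroid classifier
# on growing subsets of synthetic data and record its skill (accuracy) on a
# held-out test set. Data and model are illustrative assumptions.
import random

random.seed(42)

def sample(n):  # two classes drawn from overlapping Gaussians
    return [(random.gauss(0, 1), 0) for _ in range(n // 2)] + \
           [(random.gauss(3, 1), 1) for _ in range(n // 2)]

test_set = sample(1000)  # held-out data for measuring model skill

def accuracy(train):
    c0 = [x for x, y in train if y == 0]
    c1 = [x for x, y in train if y == 1]
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
    predict = lambda x: 0 if abs(x - m0) < abs(x - m1) else 1
    return sum(predict(x) == y for x, y in test_set) / len(test_set)

# Skill as a function of training set size: the learning curve.
curve = [(n, accuracy(sample(n))) for n in (10, 100, 1000, 10000)]
for n, acc in curve:
    print(n, round(acc, 3))  # skill flattens as dataset size grows
```

On this toy problem the curve flattens quickly because the problem is simple; a harder problem or a more complex model pushes the point of diminishing returns much further out, which is exactly the three-factor tradeoff above.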
Because the amount of data is such a critical factor in AI computations, it’s important to consider every possible means by which you can increase the size of your datasets. Of course, the specifics depend on your application. But consider whether you can find new data sources to increase dataset size. Can you add sensors? Is there public data you can access? Can you gain access to partner data? Can your product platform be instrumented to increase the volume of relevant data?
The more data the better, assuming the cost to acquire that data is acceptable given the return.
If you embark on AI, machine learning or deep learning initiatives, you will create one or more domain teams to do the job. The skills you must assemble to build and manage AI solutions are in heavy demand. For instance, the jobs website Indeed shows that demand for data scientists has grown 344% from 2013 to 2018⁵. The specific composition of the team will be tied to the domain’s business outcome objective, but as a general rule certain skills and roles are required.
Key skills required to build AI, machine learning and deep learning capabilities include:
- Mathematics (algebra, calculus, algorithms)
- Data modeling / evaluation
- Computer science (programming languages such as Python, distributed computing architecture)
- Networks (Bayesian networks, including neural networks)
Machine learning and deep learning derive their power from the algorithms built into the models. For supervised learning, some top algorithms include decision trees, naive Bayes classification, ordinary least squares regression and support vector machines. For unsupervised learning, they include centroid-based algorithms, connectivity-based algorithms, and dimensionality reduction algorithms such as singular value decomposition and principal component analysis.
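As an example of one of these algorithms, here is ordinary least squares regression fit in closed form on a toy dataset (the data points are invented so the fit comes out exactly).

```python
# A sketch of ordinary least squares regression, one of the supervised
# learning algorithms named above, fit in closed form on toy data.

def ols_fit(xs, ys):
    """Return (slope, intercept) minimizing total squared error."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]          # exactly y = 2x + 1
slope, intercept = ols_fit(xs, ys)
print(slope, intercept)        # → 2.0 1.0
```

The closed-form solution (covariance over variance) is what makes OLS so cheap to compute; the other algorithms listed trade that simplicity for the ability to fit more complex relationships.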
Key roles include:
- Data scientists
- Data architects
- Machine learning engineers
- Business intelligence developers
- Visualization designers
- Head of engineering
Data Scientists
A 2018 report from LinkedIn showed a US nationwide data scientist shortage of 150,000, with the shortage rising briskly. See below:
And yet, despite the shortage, the data scientist role is critical. A data scientist collects, analyzes and interprets complex datasets. Data scientists are experienced in statistical computing languages such as Perl, Python, Scala and SQL. They have experience working with unstructured datasets on big data platforms such as Hive, Hadoop, Pig and Spark, and are familiar with MapReduce methods. Data scientists combine mathematics and computer science skill sets. Top data scientists are highly educated and wicked smart, often holding advanced degrees in a quantitative field.
In a post on TowardsDataScience.com, Jeff Hale shared an analysis of the technology skills cited in job listings for data scientists. They include⁶:
Data Architects
The data architect designs and builds the technical systems that enable the execution of AI workflows. Data architects combine strong computer science, algorithmic and statistical skills. Many data architects have completed PhDs in mathematics. They have extensive experience working with Hadoop and Spark systems. They know C++, Scala, Python and Java, and they understand data migration, mining and visualization.
Machine Learning Engineers
Machine learning engineers program systems to apply predictive models and leverage large datasets. They possess strong algorithm and data structure skills and can effectively implement algorithms in a system. They can work in multiple programming languages, such as Python, C++, Java and Scala. They are comfortable building and managing cloud-based distributed systems built via a reactive microservices architecture. They have strong mathematical and analytical skills, and possess relevant domain experience — such as experience with artificial neural networks.
Business Intelligence Developers
BI developers design, model, maintain and analyze structured datasets that support business intelligence objectives. These developers work closely with business users to define data analysis objectives, and then develop analytical outputs that meet them. This includes data warehouse design, SQL queries and moving data into BI tools. BI developers come from a background in computer science or engineering.
Visualization Designers
Visualization designers focus on user experience design. This is often most relevant for product applications, since visualization tools for internal operations can leverage off-the-shelf BI tools.
Head of Engineering
Perhaps the most important role of all is the head of engineering. This person will be responsible for hiring the other members of the team, and therefore needs to possess a mix of the capabilities noted above in the other roles — as well as the experience and leadership skills to attract a great team. When it comes to AI (and, for that matter, any team), the quality of the leader is perhaps the single biggest determinant in the quality of the team.
AI, machine learning, and deep learning are powerful capabilities when in the right hands. If your company is contemplating the development of AI capability, start by clearly identifying your business objective. Make sure the reward is worth the effort and cost. Tightly define the problem domain. And then, most important of all, recruit 10Xers into your data scientist, architect and engineer roles. The better your team, the more analytical capacity you will possess.
If you would like more CEO insights into scaling your revenue engine and building a high-growth tech company, please visit us at CEOQuest.com, and follow us on LinkedIn, Twitter, and YouTube.
- McCarthy, John. “What Is Artificial Intelligence?” Stanford University, November 12, 2017. http://www-formal.stanford.edu/jmc/whatisai/.
- “Artificial Intelligence (AI) vs. Machine Learning vs. Deep Learning.” Skymind. https://skymind.ai/wiki/ai-vs-machine-learning-vs-deep-learning.
- Brownlee, Jason. “How Much Training Data Is Required for Machine Learning?” Machine Learning Mastery, May 22, 2019. https://machinelearningmastery.com/much-training-data-required-machine-learning/.
- “Job Search.” Indeed, n.d. https://www.indeed.com/.
- Hale, Jeff. “The Most in Demand Skills for Data Scientists.” Medium. Towards Data Science, October 15, 2018. https://towardsdatascience.com/the-most-in-demand-skills-for-data-scientists-4a4a8db896db.