What is deep learning?


Deep learning is an artificial intelligence (AI) technique that teaches computers to process data using an algorithm inspired by the human brain. 

Deep learning is a technique within artificial intelligence and machine learning (AI/ML) that helps data scientists collect, analyze, and interpret large amounts of data. Also known as deep neural learning or deep neural networking, it teaches computers to learn through observation, imitating the way humans gain knowledge.

The human brain contains many interconnected neurons, which act as information messengers when the brain is processing information (or data). These neurons use electrical impulses and chemical signals to communicate with one another and transmit information between different areas of the brain. 

Artificial neural networks (ANNs), the underlying architecture behind deep learning, are based on this biological phenomenon but formed from artificial neurons: software modules called nodes. These nodes use mathematical calculations (instead of chemical signals, as in the brain) to communicate and transmit information. This simulated neural network processes data by clustering data points and making predictions.

It may help to think of deep learning as a type of flow chart, starting with an input layer and ending with an output layer. Sandwiched between these two layers are the “hidden layers,” which process information at different levels, adjusting and adapting their behavior as they continuously receive new data. Deep learning models can have hundreds of hidden layers, each of which plays a part in discovering relationships and patterns within the data set.

Starting with the input layer, which is composed of several nodes, data is introduced to the model and categorized accordingly before it’s moved forward to the next layer. The path that the data takes through each layer is based upon the calculations set in place for each node. Eventually, the data moves through each layer, picking up observations along the way that ultimately create the output, or final analysis, of the data.
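
To make that layer-by-layer flow concrete, here is a minimal sketch in NumPy. The layer sizes, the ReLU and sigmoid activations, and the random weights are illustrative assumptions, not part of any particular model:

```python
# A toy feedforward network: input layer -> two hidden layers -> output layer.
# Sizes and activation choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each layer's "calculations" are a weight matrix plus a bias vector.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer: 4 features in, 8 nodes out
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)   # hidden layer
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer: a single prediction

def forward(x):
    h1 = relu(x @ W1 + b1)        # first hidden layer transforms the raw input
    h2 = relu(h1 @ W2 + b2)       # second hidden layer refines those signals
    return sigmoid(h2 @ W3 + b3)  # output layer produces the final analysis

print(forward(rng.normal(size=(1, 4))))  # one sample with 4 input features
```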

Applications that utilize deep learning are already integrated into our daily lives and have uses in many different industries. Generative AI, which now powers many AI tools, is made possible through deep learning.

The use cases for deep learning are forever evolving, but three of the most popular technologies in use today are computer vision, speech recognition, and natural language processing (NLP).

  • Computer vision: Computers can use deep learning techniques to comprehend images the same way humans do. This enables automated content moderation, facial recognition, and image classification.
  • Speech recognition: Pitch, tone, language, and accent can all be analyzed by deep learning models. Not only can this be used to improve customer experience, but it is also helpful from an accessibility standpoint in cases that require real-time transcription.
  • Natural language processing (NLP): Computers use deep learning algorithms to analyze and gather insights from text data and documents. This can aid in summarizing long documents, indexing key phrases that indicate sentiment (such as positive or negative comments), and generating insights for automated virtual assistants and chatbots. NLP is the broader field that encompasses the development and application of large language models (LLMs) to understand and generate human language. A brief example follows this list.
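
As a quick illustration of the NLP item above, the following sketch uses the Hugging Face transformers library's sentiment-analysis pipeline. It assumes `transformers` is installed (along with a backend such as PyTorch) and that the default pretrained model can be downloaded on first run:

```python
# A hedged sketch: sentiment analysis with a pretrained deep learning model.
# Assumes `pip install transformers` plus network access to fetch the
# library's default model the first time it runs.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # a deep learning model under the hood
print(classifier("The support team resolved my issue quickly."))
# Expected shape of output: [{'label': 'POSITIVE', 'score': ...}]
```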
     

Some examples of how industries are utilizing deep learning principles include the following:

  • Customer service: Chatbots, virtual assistants, and dial-in customer service portals utilize tools like speech recognition.
  • Financial services: Predictive analytics drive the algorithmic trading of stocks, assess business risks for loan approvals, detect fraud, and help manage credit and investment portfolios.
  • Healthcare: With the digitization of healthcare records, image recognition applications can support medical imaging specialists by learning to automatically detect red flags that indicate a potential medical diagnosis. See how HCA Healthcare uses predictive analytics to establish a standardized, digital approach to sepsis detection.
  • Media and entertainment: From online shopping to media streaming services, deep learning is being used to track user activity and develop personalized recommendations.
  • Industrial automation: In factories and warehouses, deep learning applications can automatically detect when people or objects are within an unsafe distance of machines, or can assist with quality control or predictive maintenance.
  • Self-driving cars: Automotive researchers use deep learning to train cars to detect objects like stop signs, traffic lights, crosswalks, and pedestrians.
  • Law enforcement: Speech recognition, computer vision, and natural language processing (NLP) can save time and resources by aiding in the analysis of large amounts of data.
  • Aerospace and military: Those monitoring large geographic areas can use deep learning to detect objects, identify areas of interest from afar, and verify safe or unsafe zones for troops.


Deep learning is a specialized form of machine learning and differentiates itself by the type of data it works with and the methods by which it learns.

Classical machine learning algorithms require some human intervention by way of pre-processing data sets before they’re introduced to the model. This means that specific features are defined and labeled from the input data and then organized into tables before being introduced to the machine learning model. Conversely, deep learning algorithms don’t require this same level of pre-processing and are able to comprehend unstructured data such as text documents, images, or audio files.
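
A small sketch of this contrast, using scikit-learn’s digits data set. The hand-made row/column-sum features and the tiny MLPClassifier (a small neural network standing in for a deep model) are illustrative assumptions:

```python
# Classical ML on hand-engineered features vs. a neural network on raw pixels.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
import numpy as np

X, y = load_digits(return_X_y=True)  # 8x8 images flattened to 64 raw pixel values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classical approach: we define the features ourselves (row and column sums).
def handmade_features(images):
    imgs = images.reshape(-1, 8, 8)
    return np.hstack([imgs.sum(axis=1), imgs.sum(axis=2)])  # 16 features per image

clf = LogisticRegression(max_iter=1000)
clf.fit(handmade_features(X_train), y_train)
print("classical ML, hand-made features:", clf.score(handmade_features(X_test), y_test))

# Neural network approach: raw pixels in, no manual feature engineering.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("neural network, raw pixels:", net.score(X_test, y_test))
```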

Deep learning may be preferred to classical machine learning in instances where there is a large amount of data, a lack of background knowledge regarding the subject, or when there is a complex and time-consuming task at hand.

We know that deep learning works by utilizing a structure of nodes that communicate with one another within an artificial neural network. To create an ANN, calculations and parameters must be introduced to the model alongside the data we give it, and precautions must be taken to ensure that these calculations account for bias and variance. 

In the context of machine learning, bias refers to the extent to which your model is making assumptions or generalizations about the data in order to make the target function easier to learn. High bias means that the model is simplifying and creating shortcuts (to a fault) as it processes information. 

In statistics, variance measures how far each data point lies from the mean, or the spread between numbers in a data set. In machine learning, and in opposition to bias, variance refers to how sensitive a model is to the training data. High variance (or sensitivity) means that the model is paying too much attention to detail and missing the underlying patterns in the data set.

In supervised learning, when variance is too high and bias is too low, it’s called overfitting. When bias is high and variance is low, it’s called underfitting. Creating the perfect fit can be difficult, and this phenomenon is commonly referred to as the bias-variance tradeoff. 

Parameters define boundaries, and boundaries are critical for making sense of the enormous amount of data that deep learning algorithms must process. This means that often, overfitting and underfitting can be corrected by using fewer or more parameters, respectively. 
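
A toy illustration of this tradeoff, using NumPy polynomial fits on noisy samples of a sine curve. The degrees chosen (1, 3, 15) are arbitrary stand-ins for too few, about right, and too many parameters:

```python
# Under- and overfitting with polynomial fits of increasing degree.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy training data
x_test = np.linspace(0, 1, 200)
y_true = np.sin(2 * np.pi * x_test)  # the underlying pattern we hope to recover

for degree in (1, 3, 15):
    model = np.polynomial.Polynomial.fit(x, y, degree)  # more degree = more parameters
    error = np.mean((model(x_test) - y_true) ** 2)
    print(f"degree {degree:2d}: test error {error:.3f}")
# Degree 1 should underfit (high bias), degree 15 tends to overfit the noise
# (high variance), and degree 3 lands near the sweet spot.
```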

Accounting for human-related bias

If a deep learning model is trained on data that is statistically biased, or doesn’t provide an accurate representation of the population, the output can be flawed. Unfortunately, existing human bias is often transferred to artificial intelligence, thus creating risk for discriminatory algorithms and biased outputs.

As organizations continue to leverage AI for improved productivity and performance, it’s critical that strategies are put in place to minimize bias. This begins with inclusive design processes and a more thoughtful consideration of representative diversity within the collected data. 

What is a black box?

“Black box” refers to when an AI program performs a task within its neural network and doesn’t show its work. This creates a scenario where no one, including the data scientists and engineers who created the algorithm, can explain exactly how the model arrived at a specific output. The lack of interpretability in black box models can create harmful consequences when used for high-stakes decision making, especially in industries like healthcare, criminal justice, or finance.

Deep learning models are able to perform more complex computing tasks without human intervention, but this means that they require more processing power, sufficient infrastructure, and larger sets of training data. Cloud computing allows teams to access multiple processors at once, such as clusters of GPUs (graphics processing units) and CPUs (central processing units), which creates an ideal environment for complex mathematical operations to be performed.
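
For example, a PyTorch job can opportunistically use whatever accelerator the environment provides. This short sketch (assuming PyTorch is installed) falls back to the CPU when no GPU is present:

```python
# A minimal sketch of accelerator selection in PyTorch: use a GPU when the
# environment (cloud or local) provides one, otherwise fall back to the CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)
y = x @ x  # the matrix multiply runs on whichever device was selected
print(f"ran on: {y.device}")
```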

By designing, developing, and training deep learning models in the cloud, dev teams can scale and distribute workloads with speed and accuracy, while simultaneously cutting down on operating costs.

 

Deep learning and machine learning on the edge

Working in the cloud opens up possibilities for machine learning on the edge. By establishing edge computing hubs connected to public cloud resources, information can be captured and analyzed in real time to assist in operations ranging from supply chain status updates to information on disaster evacuation sites.

 

Red Hat provides the common foundations for your teams to build and deploy AI applications and machine learning (ML) models with transparency and control. 

Red Hat® OpenShift® AI is a platform that can train, prompt-tune, fine-tune, and serve AI models for your unique use case and with your own data.

For large AI deployments, Red Hat OpenShift offers a scalable application platform suitable for AI workloads, complete with access to popular hardware accelerators.

Red Hat is also using our own Red Hat OpenShift AI tools to improve the utility of other open source software, starting with Red Hat Ansible® Lightspeed with IBM watsonx Code Assistant. This service helps automation teams learn, create, and maintain Ansible content more efficiently. It accepts prompts entered by a user and then interacts with IBM watsonx foundation models to produce code recommendations that are then used to create Ansible Playbooks.

Additionally, Red Hat’s partner integrations open the doors to an ecosystem of trusted AI tools built to work with open source platforms.
