
Understanding the Basics of Artificial Intelligence and Machine Learning

by Everyday Junglist 13 days ago in stem

A Short Course

This display of geometry and mathematics should convince you of my credentials. Courtesy of Pixabay.

Artificial Intelligence (AI) and Machine Learning (ML) have been some of the hottest topics around for close to a decade now. If it's hot, it's got to be worth learning about, am I right? Get it, 'learning' about? Learning used to be something that only human beings and some non-human animals could do. Those days are over, my friend; now even machines can learn! Read on if you want to 'learn' more. Oops, there I go again, LOL!

The term "AI" is thrown around casually every day. You hear aspiring developers and aspiring actors/actresses saying they want to learn AI. You hear executives saying they want to implement AI in their services. You hear people saying they want an AI to do their homework and/or job for them. Other people (freaks, weirdos, and losers mostly) say they want an AI as a girlfriend/boyfriend. Sick, right? Finally, some people want an AI to bring about the singularity and usher in a Utopian future. The whouwhatularity? Don't ask, but go ahead and Google it. That is some batshit crazy stuff. But quite often, many of these people don't understand what AI is.

Once you've read this article, you will understand the basics of AI and ML. More importantly, you will understand how Deep Learning, the most popular type of ML, works. Remember where we started: if something is hot and popular it has to be good and totally legit. Most definitely it has to be legit and real and worthwhile, it just has to be, right? Doesn't it? I say yes, so let's 'learn' more. Ha! Zing. There I go again. Not only are you 'learning', you are also laughing. Good stuff.

This guide is intended for everyone, so no advanced mathematics will be involved. That crap is boring anyway, and who needs boring when we are talking about the hottest and most popular subjects in all the world right now? Don't knock the advanced mathematics too much, though, as it's the only thing keeping the house of cards that is the field of artificial intelligence from crashing and burning down around everyone, crushing the hopes and dreams of Silicon Valley playboys, VC hotshots, and tech fanboys from Silicon Valley north to Silicon Valley south. Did I just think that or write it? Oh well, moving on.


The first step towards understanding how Deep Learning works is to grasp the differences between important terms.

Artificial Intelligence vs Machine Learning

Artificial Intelligence is the replication of human intelligence in computers. It is a thing which does not currently, and may never, exist. It is a difficult thing to replicate since we are currently not exactly sure what human "intelligence" actually is and/or what processes in the brain and/or body are responsible for humans having it. But that doesn't matter, because if you believe something can happen, it can happen. All you have to do is be patient and AI will be here for sure next year, or in the next couple of years, or within a decade, or within the century. One of those predictions has to be right, right?

When AI research first started, researchers were trying to replicate human intelligence. They quickly realized that was too hard, so they settled for trying to replicate specific tasks, like playing a game. That was a real son of a bitch too, but it seemed possible, so they introduced a vast number of rules that the computer needed to respect. The computer had a specific list of possible actions and made decisions based on those rules. They called this a different type of computer, a newer computer.
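Just to make that old rule-based style concrete, here is a toy sketch (the domain and rules are entirely made up for illustration, not from the article): a fixed list of possible actions and hand-written rules, with no learning anywhere in sight.

```python
# A toy rule-based "AI": a fixed action list and hard coded rules.
# (Illustrative blackjack-ish domain, invented here.)

ACTIONS = ["hit", "stand"]  # the specific list of possible actions

def decide(hand_total):
    """Pick an action purely by following hand-written rules."""
    if hand_total < 17:
        return "hit"    # rule 1: below 17, take another card
    return "stand"      # rule 2: otherwise, stop

print(decide(12), decide(19))  # hit stand
```

No data, no training, no weights: the programmer's rules are the whole "intelligence."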

Machine Learning refers to the ability of a machine to learn using large data sets instead of hard coded rules. It is also a term made up of two words that, when combined in that order, result in a logical contradiction. Moreover, it is a logical impossibility and thus absurd. Bottom line: machines can't learn, but don't let that stop you from using the term or even earning a degree in the subject. Word of advice: Udacity and Udemy are scams and their nanodegrees are worthless. They can suck my left nut and I want my three grand back, dicks.

ML allows computers to learn by themselves. "Wait a minute," you might be asking yourself, "didn't he just say that was logically impossible? Machines can't learn, that's what he said." I did say that, but I didn't really mean it. Of course machines can learn, it's 2021, not the stone ages like way back in 2010 or something, when machines were machines, men were men, and women were women, except for some women, who were actually men. That one actually caught me off guard some when I had an unfortunate 'date' in Tijuana that went horribly wrong. Back to the topic at hand: this type of learning takes advantage of the processing power of modern computers, which can easily process large data sets.

Supervised learning vs unsupervised learning

Boring. Yawn. Next section, the good stuff.

Now, how does Deep Learning work?

You’re now prepared to understand what Deep Learning is, and how it works.

First and foremost, you may have heard some Debbie Downers and Johnny Jerk-offs saying things like, "deep learning is nothing more than a hedge term designed to mask the failure of AI researchers to make real progress toward its development," and "Deep learning and predictive analytics sound impressive, but they are nothing more than computing that takes very large data sets as inputs, then uses clever mathematical and statistical treatments, which are essentially the rules that make up the algorithms used in programming the computer, to predict outputs. The output prediction can change over time (becoming more accurate, in theory) based on feedback of new information (what is a 'correct' answer and what is a 'wrong' answer) from the various training data sets that are continually fed as inputs." They are jerk-offs and do not know what they are talking about. The machine is learning, learning deeply, very, very deeply.

Deep Learning is a machine learning method. A method to do something that is logically impossible sounds like it would be hard. It is hard, but not impossible. But you just said it was logically impossible? Logically impossible is not the same as impossible impossible, my friend. Read on.

We will learn how deep learning works by building a hypothetical airplane ticket price estimation service. Here's a head scratcher for you. If we are learning how a machine learns, could someone/thing be learning how we learn at the same time, perhaps by setting up a simulated universe with ourselves as simulated beings within it? They then set the universe in motion with a given set of inputs, universal laws, ourselves, etc., hit run, and evaluate the outputs. They then reset the simulation over and over and over, watching as we 'learn' how to learn about deep learning and machine learning. Damn simulation hypothesis again, cool to think about but it also gives me a massive headache every time I do. Let's keep moving.

We want our airplane ticket price estimator to predict the price using the following inputs (we are excluding return tickets for simplicity):

Origin Airport

Destination Airport

Departure Date

Airline

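For the code-curious, here is a minimal sketch of how raw ticket inputs might be turned into plain numbers for the network's input layer. Everything here (the airport/airline vocabularies, the scaling choice, the function name) is hypothetical, just one way to do it.

```python
# Hypothetical feature encoding for the ticket price estimator:
# map raw inputs to a list of floats the input layer can consume.

AIRPORTS = {"JFK": 0, "LHR": 1, "SFO": 2}   # toy vocabulary, made up here
AIRLINES = {"Delta": 0, "United": 1}

def encode_ticket(origin, destination, departure_day_of_year, airline):
    """Return the four input-layer values as a list of floats."""
    return [
        float(AIRPORTS[origin]),
        float(AIRPORTS[destination]),
        departure_day_of_year / 365.0,   # scale the date into [0, 1]
        float(AIRLINES[airline]),
    ]

features = encode_ticket("JFK", "SFO", 200, "Delta")
print(features)  # third value is 200/365, roughly 0.548
```

Real systems use fancier encodings (one-hot vectors, embeddings), but the idea is the same: the input layer only ever sees numbers.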

Neural networks

Let’s look inside the brain of our AI.

Like animals, our estimator AI's brain has neurons. Don't worry about the fact that a neuron is a specialized cell that exists only in biological systems for transmitting nerve impulses, and thus could not possibly be in the "brain" of our AI. We use terms like neuron and brain however we feel like. That's the greatest thing about AI, ML, and DL: we can take terms, words, concepts, even definitions from biology whenever we want to, without a second thought or care in the world. It's fucking great. Unlike many animals, our estimator AI is not intelligent in the least and not capable of learning. At any rate, the neurons are represented by circles. These neurons are inter-connected.

The neurons are grouped into three different types of layers:

1. Input Layer

2. Hidden Layer(s)

3. Output Layer

The input layer receives input data. In our case, we have four neurons in the input layer: Origin Airport, Destination Airport, Departure Date, and Airline. The input layer passes the inputs to the first hidden layer.

The hidden layers perform mathematical computations on our inputs. One of the challenges in creating neural networks is deciding the number of hidden layers, as well as the number of neurons for each layer. Yes, the hidden layers compute. Can you believe that shit? A computer that fucking computes, it's genius. Now that is what I call me some learning.

The output layer returns the output data. In our case, it gives us the price prediction. The results of computation are magically produced in the output layer. Only something with intelligence could do something so miraculous as execute code that tells it to take data, pass it through various algorithms, and spit out other data. The miracle of creation.
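To make the three layers concrete before the magic starts, here is a toy forward pass. The layer sizes match our estimator (four inputs, one price output), but the single hidden layer of three neurons and the random weights are purely illustrative.

```python
import random

# Toy forward pass: input layer -> one hidden layer -> output layer.
random.seed(0)  # fix the randomness so runs are repeatable

def make_layer(n_in, n_out):
    """One fully connected layer: each neuron gets a random weight per input."""
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, inputs):
    """Each neuron outputs the weighted sum of its inputs."""
    return [sum(w * x for w, x in zip(neuron, inputs)) for neuron in layer]

hidden = make_layer(4, 3)   # 4 ticket inputs feed 3 hidden neurons
output = make_layer(3, 1)   # 3 hidden values feed 1 price neuron

ticket = [0.0, 2.0, 0.55, 1.0]  # encoded origin, destination, date, airline
price = forward(output, forward(hidden, ticket))[0]
print(price)  # a meaningless number until the weights are trained
```

The "prediction" here is junk, of course; the weights are random. Training (below) is what turns the junk into something useful.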

So how does it compute the price prediction?

This is where the magic of Deep Learning begins. It is fucking magical as shit.

Each connection between neurons is associated with a weight. This weight dictates the importance of the input value. The initial weights are set randomly. This is really cool, right? Have you ever heard of a weighted data set? I bet not, unless you are some nerd who took any math or science or engineering or any other technical subject anywhere from grade 4 through graduate school. In other words, unless you are a nerd. Or if you had to take a six sigma course at your work for some stupid reason.

When predicting the price of an airplane ticket, the departure date is one of the heavier factors. Hence, the departure date neuron connections will have a big weight. Each neuron has an Activation Function. These functions are hard to understand without mathematical reasoning. Always remember "hard to understand" translates directly to "good for business" in the world of AI, ML, and DL.

Simply put, one of its purposes is to “standardize” the output from the neuron. The other purpose is to give VCs instant hard ons. Once a set of input data has passed through all the layers of the neural network, it returns the output data through the output layer.
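As a sketch of that "standardizing" job (the VC part is on you), the classic sigmoid activation squashes any weighted sum, however large or negative, into the range (0, 1). The sigmoid choice is illustrative; the article doesn't name a specific activation.

```python
import math

def sigmoid(z):
    """Squash any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(weights, inputs, activation=sigmoid):
    """One neuron: weighted sum of inputs, passed through the activation."""
    return activation(sum(w * x for w, x in zip(weights, inputs)))

print(neuron([0.5, -0.2], [1.0, 3.0]))  # about 0.475, strictly between 0 and 1
```

Other activations (ReLU, tanh) do the same basic job of keeping each neuron's output in a tame, predictable range.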

Nothing complicated, right?

Training the Neural Network

Training the AI is the hardest part of Deep Learning. Why? Because:

  • You need a large data set.
  • You need a large amount of computational power.
  • It is boring as a motherfucker.
  • And blah, blah, blah, Gradient Descent.
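Since "blah, blah, blah, Gradient Descent" is doing a lot of work there, here is a minimal sketch of what the blah-blah actually is: nudge a weight downhill against a cost function until the predictions stop being wrong. The one-weight model, the made-up data set, and the learning rate are all hypothetical simplifications.

```python
# Gradient descent on the world's smallest model: price = w * feature.
# Made-up data where the true weight is 2.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (feature, true price)

w = 0.0     # initial weight: a bad guess
lr = 0.05   # learning rate: how big a step to take downhill

for _ in range(200):
    # cost is mean squared error; this is its gradient with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step in the direction that reduces the cost

print(round(w, 3))  # → 2.0, the weight that minimizes the cost
```

A real network repeats exactly this, just with millions of weights at once, which is where the large data set and the large computational power (and the boredom) come in.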

In summary…

  • Deep Learning uses a Neural Network to imitate animal intelligence. It does not matter that we do not know what animal intelligence is or how it comes about, we are able to imitate it because we are intelligent. Get it?
  • There are three types of layers of neurons in a neural network: the Input Layer, the Hidden Layer(s), and the Output Layer.
  • Connections between neurons are associated with a weight, dictating the importance of the input value. Even though a neuron is a cell found only in biological systems we can co-opt the term because we are cool as shit and very popular.
  • Neurons apply an Activation Function on the data to “standardize” the output coming out of the neuron.
  • To train a Neural Network, you need a large data set. Fucking huge.
  • Iterating through the data set and comparing the outputs will produce a Cost Function, indicating how much the AI is off from the real outputs.
  • After every iteration through the data set, the weights between neurons are adjusted using Gradient Descent to reduce the cost function.
  • And holy shit you are learning, deep, so deep, real deep, deep as a motherfucker.


Everyday Junglist

Research scientist (Ph.D. micro/molecular biology), Thought middle manager, Everyday junglist, Selecta (Ret.), Boulderer, Cat lover, Fish hater
