Machine Learning: An Introduction to the Basics and How to Begin
In this blog, we will cover the basics of Machine Learning, learn about the different types of machine learning algorithms, and take a deep dive into linear regression.
What is Machine Learning?
There are many definitions from different resources; let's look at some of them.
According to Arthur Samuel– Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed.
Samuel gave this definition in 1959. He is a well-known name in the fields of machine learning and AI: he popularized machine learning with his self-learning checkers program, and he won the Computer Pioneer Award in 1987.
According to Tom M. Mitchell– A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.
This definition might seem confusing, so let us understand it using our classic checkers example:
- The program learns by playing games of checkers; this is the experience E.
- The machine's task is to learn which moves lead to a win; this is the task T.
- Performance is measured by the probability of the machine winning a game of checkers; this is P.
The probability P improves as the experience E increases: the more experience (or data) the machine gets, the higher its probability of completing the task, that is, winning the game. This is what Tom M. Mitchell's definition means.
Tom M. Mitchell is a famous computer scientist, well known for his contributions to machine learning, AI, and neuroscience. He is also the author of the textbook Machine Learning.
In conclusion, we can say that Machine Learning is the process of continuously training a machine by providing it with experiences, past or present, so that it acts on them to reach a defined result.
Different Types of Machine Learning Algorithms
There are basically two types of learning algorithms:
- Supervised Learning Algorithm
- Unsupervised Learning Algorithm
Supervised Learning Algorithm
In supervised learning, the machine is trained on a dataset in which every input value has a corresponding desired output, so all the input and output values are provided as a dataset. For example, consider training a machine to predict house prices based on the size of the land. In this case, a dataset of house sizes with their corresponding prices is provided, and predictions are made with its help.
The main aim of a supervised learning algorithm is to find a function that maps the input values to the output values.
Its most common use cases are:
- Spam Detection (Emails)
- Fraud Detection
- Image Classification
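To make the idea concrete, here is a minimal sketch of supervised learning, not a production method: the labeled dataset of house sizes and prices is made up for illustration, and the "model" simply returns the price of the nearest labeled example.

```python
# Hypothetical labeled dataset: (house size in sq. ft., price) pairs.
labeled_data = [(500, 50), (1000, 95), (1500, 140), (2000, 190)]

def predict_price(size):
    """Predict a price using the nearest labeled example (1-nearest-neighbour)."""
    nearest_size, nearest_price = min(labeled_data, key=lambda pair: abs(pair[0] - size))
    return nearest_price

print(predict_price(1100))  # nearest labeled size is 1000, so this prints 95
```

Even this toy predictor captures the essence of supervised learning: it can only answer because every input in the dataset came paired with its desired output.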
Unsupervised Learning Algorithm
In unsupervised learning, the algorithm works with a dataset that has no output values. The main idea behind unsupervised learning is to let the algorithm learn from the patterns and structure in the data. The most famous example of unsupervised learning is the Google News algorithm: it crawls the web for news stories and clusters related ones together. Unsupervised learning algorithms are also popular in the medical field, for example clustering people according to their diseases or matching DNA; such problems are well suited to unsupervised algorithms.
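Google's actual clustering pipeline is not public, so as a toy illustration of the idea, here is a minimal one-dimensional k-means sketch with two clusters: no labels are given, yet the algorithm discovers the two groups on its own.

```python
def two_means(points, iters=10):
    """Cluster 1-D points into two groups (a minimal k-means with k = 2)."""
    centroids = [min(points), max(points)]  # start the centroids at the extremes
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            # assign each point to its nearest centroid
            idx = 0 if abs(p - centroids[0]) <= abs(p - centroids[1]) else 1
            clusters[idx].append(p)
        # move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

print(two_means([1, 2, 3, 10, 11, 12]))  # prints [[1, 2, 3], [10, 11, 12]]
```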
Label
A label is the thing we are trying to predict. In terms of linear regression, it is the y variable. For example, in the case of spam email detection, the label will be spam or not spam. In animal name prediction, the label will be the animal's name. In short, it is the output for a particular piece of input data.
Features
Features are the input data, the variables passed in to produce the output. In terms of linear regression, a feature is the x variable. Features vary from project to project: one project may have a single feature while another has n features.
In the example of spam email detection, the features can be:
- Words in the email
- Address of the sender
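One way to picture features and labels together is as a single training example. The field names and values below are invented purely for illustration:

```python
# Hypothetical labeled training example for spam detection.
example = {
    "features": {
        "words": ["win", "free", "prize"],       # words in the email
        "sender": "unknown-sender@example.com",  # address it came from
    },
    "label": "spam",  # the output we want the model to learn to predict
}

print(example["label"])  # prints spam
```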
Model
A model defines the relationship between features and labels. There are two phases in the lifecycle of a model.
The first phase is the training phase, in which the model is trained with input and output values; that is, the model learns from example values.
The second phase is the inference phase, in which the trained model is applied to unlabeled values. This phase is used to generate predictions for an unlabeled dataset.
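The two phases can be sketched with a toy model; the class, its method names, and the numbers are assumptions made up for this example, not a real library API. It learns an average price-per-unit-size during training and applies that rate during inference:

```python
class RatioModel:
    """Toy model: learns a single price-per-size ratio."""

    def train(self, sizes, prices):
        # Training phase: learn from labeled (input, output) examples.
        self.rate = sum(p / s for s, p in zip(sizes, prices)) / len(sizes)

    def infer(self, sizes):
        # Inference phase: generate outputs for unlabeled inputs.
        return [self.rate * s for s in sizes]

model = RatioModel()
model.train(sizes=[100, 200], prices=[200, 400])  # the learned rate is 2.0
print(model.infer([300]))  # prints [600.0]
```

Note the asymmetry: `train` sees both inputs and outputs, while `infer` sees inputs only.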
Different Types of Model
There are basically two types of models:
- Regression Model
- Classification Model
Regression Model- As the name suggests, a regression model predicts continuous values, such as the continuous prediction of house prices at a given place. Another example is calculating the probability of a student failing an exam.
Classification Model- A classification model gives its result as a discrete value. The most common examples of a classification model are: is a given email spam or not? What is the name of the animal shown in the picture? There are many such scenarios where classification is required.
Linear Regression Model
Linear regression is one type of regression model. It resembles the linear equation from algebra, y = mx + c, where y is the dependent variable (the label) and x is the independent variable (the feature). The slope m is represented in machine learning as w, the weight of that variable, and c is the y-intercept, also called the bias.
Linear regression models are used when we know that the relationship between the independent and dependent variables is proportional, that is, when there is a continuous linear relationship between them. Linear regression is used to predict numeric values, not characters or classes.
There are two types of linear equations:
- Simple linear equation
- Multiple linear equations
In the simple linear equation, there is only one feature for a label, so the equation is y = wx + c.
In the multiple linear equation, there are multiple features for a label; in that case, the equation becomes y = c + w1x1 + w2x2 + w3x3 + … + wnxn, where n is the total number of features (independent variables) and w1 to wn are the weights of those features in defining the value of the label.
To understand this more simply, consider a linear regression model for predicting the price of a house, where the independent variables are the location, size, and number of rooms. Each of them will have a different weight in predicting the price; say, the weight of the size is the largest compared to the others.
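The multiple linear equation translates directly into code. The weights and bias below are made-up numbers chosen purely for illustration:

```python
def predict(features, weights, bias):
    """Compute y = c + w1*x1 + w2*x2 + ... + wn*xn."""
    return bias + sum(w * x for w, x in zip(weights, features))

# Hypothetical weights for two features [size, rooms];
# size carries the larger weight, so it influences the price most.
print(predict(features=[2, 3], weights=[10, 5], bias=1))  # 1 + 10*2 + 5*3, prints 36
```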
Loss in a Training Model
Loss is the difference between the actual value and the predicted value. If the two are the same, the model works perfectly and there is no loss. The main aim of training a model is to reduce the loss as much as possible.
Square Loss- Square loss is one of the most popular loss functions. It is calculated as the square of the difference between the label and the prediction:
square loss = (predicted_value - actual_value)²
Mean Squared Error (MSE)- It is the average squared loss per example over the whole dataset.
MSE is very popular in machine learning models, but it is not the best loss function for every use case.
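Both loss functions are straightforward to compute; here is a minimal sketch, with the prediction and actual values made up for illustration:

```python
def square_loss(predicted, actual):
    # (predicted_value - actual_value) squared
    return (predicted - actual) ** 2

def mean_squared_error(predictions, actuals):
    # average squared loss per example over the whole dataset
    return sum(square_loss(p, a) for p, a in zip(predictions, actuals)) / len(actuals)

print(mean_squared_error([2, 4], [1, 2]))  # (1 + 4) / 2, prints 2.5
```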
That's it for this blog. This is part of a series in which we deep dive into machine learning.
In the next blog, we will cover how to reduce loss and the different approaches we can use to do so.
For more blogs visit- Utkarsh Shukla blogs, Utkarsh Shukla Author
Check out my website – https://utkarshshukla.herokuapp.com/