Machine learning usually starts with supervised learning, the learning pattern most people meet first. Here is a brief overview of it.

Terms used here:

The very first thing to keep in mind is how you frame your machine learning model/project: what do you want to achieve from the data? Two common framings are the following:

A **regression** model predicts continuous values. For example, regression models make predictions that answer questions like the following:

- What is the value of a house in California?
- What is the probability that a user will click on this ad?

A **classification** model predicts discrete values. For example, classification models make predictions that answer questions like the following:

- Is a given email message spam or not spam?
- Is this an image of a dog, a cat, or a hamster?
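To make the distinction concrete, here is an illustrative sketch (not a real model): the same made-up linear score can back a regression model, which returns the continuous value directly, or a classification model, which thresholds that value into a discrete class. The function `score` and its coefficients are invented for illustration only.

```python
def score(x):
    # Made-up linear score for illustration purposes.
    return 0.8 * x - 2.0

def regression_predict(x):
    # A regression model outputs a continuous value.
    return score(x)

def classification_predict(x):
    # A classification model outputs a discrete class label.
    return "spam" if score(x) > 0.0 else "not spam"

print(regression_predict(5.0))      # 2.0 (a continuous value)
print(classification_predict(5.0))  # "spam" (a discrete label)
```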

## Descending into ML

**Linear regression** is a method for finding the straight line or hyperplane that best fits a set of points. This module explores linear regression intuitively before laying the groundwork for a machine learning approach to linear regression.


Now you have data and a model. What next? You need to know how good your model's predictions are. This is where the loss function comes in, and it is a fundamental building block of your work as an ML engineer.

True, the line doesn’t pass through every dot, but the line does clearly show the relationship between chirps and temperature. Using the equation for a line, you could write down this relationship as follows:

y = mx + b

where:

- y is the temperature in Celsius—the value we’re trying to predict.
- m is the slope of the line.
- x is the number of chirps per minute—the value of our input feature.
- b is the y-intercept.

By convention in machine learning, you’ll write the equation for a model slightly differently:

y′ = b + w1x1

where:

- y′ is the predicted label (a desired output).
- b is the bias (the y-intercept), sometimes referred to as w0.
- w1 is the weight of feature 1. Weight is the same concept as the “slope” m in the traditional equation of a line.
- x1 is a feature (a known input).

To **infer** (predict) the temperature y′ for a new chirps-per-minute value x1, just substitute the x1 value into this model.
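Inference with this one-feature model is just a substitution. Here is a minimal sketch; the bias and weight values are made up for illustration, not learned from data.

```python
b = 3.0    # bias (the y-intercept)
w1 = 0.2   # weight of feature 1

def predict(x1):
    """Return the predicted label y' for a chirps-per-minute value x1."""
    return b + w1 * x1

print(predict(50))  # 3.0 + 0.2 * 50 -> 13.0 degrees Celsius
```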

Although this model uses only one feature, a more sophisticated model might rely on multiple features, each having a separate weight (w1, w2, etc.). For example, a model that relies on three features might look as follows:

y′ = b + w1x1 + w2x2 + w3x3
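The three-feature model is naturally written as a sum over weight-feature pairs. A minimal sketch, with illustrative (not learned) weights and inputs:

```python
b = 1.0                 # bias
w = [0.5, -0.2, 2.0]    # weights w1, w2, w3 (made up for illustration)

def predict_multi(x):
    """Return y' = b + w1*x1 + w2*x2 + w3*x3 for a feature list x."""
    return b + sum(wi * xi for wi, xi in zip(w, x))

# 1.0 + 0.5*2.0 + (-0.2)*10.0 + 2.0*0.5 = 1.0 + 1.0 - 2.0 + 1.0
print(predict_multi([2.0, 10.0, 0.5]))  # 1.0
```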

**Training** a model simply means learning (determining) good values for all the weights and the bias from labeled examples. In supervised learning, a machine learning algorithm builds a model by examining many examples and attempting to find a model that minimizes loss; this process is called **empirical risk minimization**.

Loss is the penalty for a bad prediction. That is, **loss** is a number indicating how bad the model’s prediction was on a single example. If the model’s prediction is perfect, the loss is zero; otherwise, the loss is greater. The goal of training a model is to find a set of weights and biases that have *low* loss, on average, across all examples.
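A common loss for regression is mean squared error (MSE): the average of the squared differences between predictions and true labels across the examples. A minimal sketch:

```python
def mse(predictions, labels):
    """Mean squared error over a set of examples."""
    n = len(labels)
    return sum((p - y) ** 2 for p, y in zip(predictions, labels)) / n

# A perfect prediction gives zero loss; worse predictions give larger loss.
print(mse([1.0, 2.0], [1.0, 2.0]))  # 0.0
print(mse([1.0, 2.0], [2.0, 4.0]))  # (1 + 4) / 2 = 2.5
```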

# Reducing Loss

To train a model, we need a good way to reduce the model’s loss. An iterative approach is one widely used method for reducing loss, and is as easy and efficient as walking down a hill.

If you think about it closely, our goal is to recover y = m*x + c from y′ = w*x + b. Here both w and b are variables we are trying to learn, so wouldn’t it be nice to have a procedure that tells us whether b and w are improving and slowly converging toward the true values m and c?

This is where gradient descent comes in. Common variants of gradient descent include batch gradient descent, stochastic gradient descent (SGD), and mini-batch SGD.
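The iterative idea can be sketched in a few lines of plain Python: batch gradient descent for the model y′ = w*x + b, minimizing MSE on a tiny hand-made dataset generated from y = 2x + 1. The learning rate and iteration count here are illustrative choices, not tuned values.

```python
# Tiny dataset sampled from the true line y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b = 0.0, 0.0   # start from arbitrary initial values
lr = 0.05         # learning rate (step size)

for _ in range(2000):
    # Gradients of MSE with respect to w and b, averaged over all examples.
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    # Step each parameter a little in the direction that reduces the loss.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward w = 2.0, b = 1.0
```

Each iteration nudges w and b downhill on the loss surface, which is exactly the "walking down a hill" picture from above.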

Exercises using pandas and TensorFlow are included.

This post was inspired by Google’s Machine Learning Crash Course.