Introduction to Normal Random Variables: Overview

In the Exploratory Data Analysis sections of this course, we encountered data sets, such as lengths of human pregnancies, whose distributions naturally followed a symmetric unimodal bell shape, bulging in the middle and tapering off at the ends. Many variables, such as pregnancy lengths, shoe sizes, foot lengths, and other human physical characteristics, exhibit these properties: symmetry indicates that the variable is just as likely to take a value a certain distance below its mean as it is to take a value that same distance above its mean; the bell shape indicates that values closer to the mean are more likely, and it … Continue reading Introduction to Normal Random Variables: Overview
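The symmetry and bell-shape properties described above can be checked in a few lines with the standard library's `NormalDist` (a sketch of my own; the 266-day mean and 16-day standard deviation for pregnancy lengths are illustrative assumptions, not figures from the course):

```python
from statistics import NormalDist

# Illustrative assumption: pregnancy lengths roughly normal with
# mean 266 days and standard deviation 16 days.
X = NormalDist(mu=266, sigma=16)

# Symmetry: the probability of falling 20 days or more below the mean
# equals the probability of falling 20 days or more above it.
below = X.cdf(266 - 20)       # P(X <= 246)
above = 1 - X.cdf(266 + 20)   # P(X >= 286)
print(round(below, 9) == round(above, 9))  # True

# Bell shape: density is highest at the mean and falls off outward.
print(X.pdf(266) > X.pdf(246) > X.pdf(226))  # True
```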

Conditional Probability and Independence Introduction

Introduction In the last section, we established the five basic rules of probability, which include the two restricted versions of the Addition Rule and Multiplication Rule: The Addition Rule for Disjoint Events and the Multiplication Rule for Independent Events. We have also established a General Addition Rule for which the events need not be disjoint. In order to complete our set of rules, we still require a General Multiplication Rule for which the events need not be independent. In order to establish such a rule, however, we first need to understand the important concept of conditional probability. This section will be organized as follows: We’ll first … Continue reading Conditional Probability and Independence Introduction
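The conditional-probability idea the excerpt builds toward can be sketched with a single fair die (my own example, not taken from the course): conditioning on an event B shrinks the sample space to B, so P(A | B) = P(A and B) / P(B).

```python
from fractions import Fraction

# A = "roll is even", B = "roll is at least 4", on one fair die.
outcomes = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {4, 5, 6}

P = lambda event: Fraction(len(event), len(outcomes))

# Conditional probability: P(A | B) = P(A and B) / P(B)
p_A_given_B = P(A & B) / P(B)
print(p_A_given_B)  # 2/3

# A and B are not independent here: P(A | B) != P(A) = 1/2,
# so knowing B occurred changes the probability of A.
print(p_A_given_B == P(A))  # False
```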

Probability Rules

Basic Probability Rules In the previous section we considered situations in which all the possible outcomes of a random experiment are equally likely, and learned a simple way to find the probability of any event in this special case. We are now moving on to learn how to find the probability of events in the general case (when the possible outcomes are not necessarily equally likely), using five basic probability rules. Fortunately, these basic rules of probability are very intuitive, and as long as they are applied systematically, they will let us solve more complicated problems; in particular, those problems … Continue reading Probability Rules
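The two versions of the Addition Rule mentioned above can be verified directly with equally likely outcomes (a small example of my own, using one fair die):

```python
from fractions import Fraction

# One fair die; every event's probability is (favorable / total).
outcomes = {1, 2, 3, 4, 5, 6}
P = lambda e: Fraction(len(e), len(outcomes))

low = {1, 2}        # "roll is at most 2"
high = {5, 6}       # "roll is at least 5"
even = {2, 4, 6}    # "roll is even"

# Addition Rule for Disjoint Events: low and high cannot both happen.
print(P(low | high) == P(low) + P(high))  # True

# General Addition Rule: low and even overlap (they share the outcome 2),
# so the double-counted intersection must be subtracted.
print(P(low | even) == P(low) + P(even) - P(low & even))  # True
```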

Probability A short story

Sample Spaces As we saw in the previous section, probability questions arise when we are faced with a situation that involves uncertainty. Such a situation is called a random experiment, an experiment that produces an outcome that cannot be predicted in advance (hence the uncertainty). Here are a few examples of random experiments: Toss a coin once and record whether you get heads (H) or tails (T). The possible outcomes that this random experiment can produce are: {H, T}. Toss a coin twice. The possible outcomes that this random experiment can produce are: {HH, HT, TH, TT}. Toss a coin 3 … Continue reading Probability A short story
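The coin-toss sample spaces listed above ({H, T}, {HH, HT, TH, TT}, …) can be enumerated programmatically; a minimal sketch using the standard library:

```python
from itertools import product

# Enumerate the sample space for n tosses of a coin: every sequence
# of H's and T's of length n.
def sample_space(n_tosses):
    return ["".join(seq) for seq in product("HT", repeat=n_tosses)]

print(sample_space(1))       # ['H', 'T']
print(sample_space(2))       # ['HH', 'HT', 'TH', 'TT']
print(len(sample_space(3)))  # 8 outcomes for three tosses
```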

Random Variables

In the previous sections we’ve learned principles and tools that help us find probabilities of events in general. Now that we’ve become proficient at doing that, we’ll talk about random variables. Just like any other variable, random variables can take on multiple values. What differentiates random variables from other variables is that the values for these variables are determined by a random trial, random sample, or simulation. The probabilities for the values can be determined by theoretical or observational means. Such probabilities play a vital role in the theory behind statistical inference, our ultimate goal in this course. Introduction We first … Continue reading Random Variables
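A random variable assigns a number to each outcome of a random experiment; its probability distribution follows from the probabilities of those outcomes. A small sketch of my own (not the course's code), with X = number of heads in two fair coin tosses:

```python
from fractions import Fraction
from itertools import product

# Sample space of two fair tosses: HH, HT, TH, TT, all equally likely.
space = ["".join(s) for s in product("HT", repeat=2)]

def X(outcome):
    # The random variable: count the heads in an outcome.
    return outcome.count("H")

# Build the distribution of X by summing over equally likely outcomes.
dist = {}
for outcome in space:
    dist[X(outcome)] = dist.get(X(outcome), 0) + Fraction(1, len(space))

print(dist)  # X = 0 and X = 2 each with probability 1/4, X = 1 with 1/2
```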

Real Time Big Data Processing

Let’s first try to answer the following: what is Big Data in terms of streaming services? Big Data follows the three V’s: velocity, variety, and volume. It is quite understandable that processing this huge amount of data is not possible with traditional software. Still, let’s see how we can process it via the traditional approach. Now let’s understand the cons of this method: a batch is a subset of the whole data, accumulated over time, so it is a lagged representation of the real-time data, and the more lag the more value we have … Continue reading Real Time Big Data Processing
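The lag inherent in batching can be made concrete with a toy sketch (my own illustration, not the post's code): a batch job only processes events once the accumulation window closes, so the earlier events in each window sit waiting the longest.

```python
# Events arrive continuously, one per time unit.
def arriving_events():
    for i in range(6):
        yield {"id": i, "arrived_at": i}

def batch_process(events, window=3):
    """Accumulate events into fixed-size batches and, at each flush,
    record how long every event in the batch waited to be processed."""
    batch, results = [], []
    for event in events:
        batch.append(event)
        if len(batch) == window:
            # The batch is processed only when the window closes; the
            # first event in it has waited (window - 1) time units.
            flush_time = batch[-1]["arrived_at"]
            results.append([flush_time - e["arrived_at"] for e in batch])
            batch = []
    return results

print(batch_process(arriving_events()))  # [[2, 1, 0], [2, 1, 0]]
```

A streaming engine aims to shrink those per-event waits toward zero by processing each event (or micro-batch) as it arrives.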

In Depth Clustering Analysis

Clustering is the unsupervised version of classification: if we have labeled data, we get classification by grouping the data points that share the same label. If we don’t have labels, we use the feature vectors to identify similar data points and group those with the same properties; that is what clustering is. Let me explain with an example: suppose you are living two different lives in two parallel universes. In the first world, you know who your mother is and you love the food your mother cooks … Continue reading In Depth Clustering Analysis
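The grouping-by-feature-similarity idea above is what algorithms like k-means implement. A minimal one-dimensional k-means sketch in pure Python (my own illustration; real projects would use a library implementation such as scikit-learn's):

```python
# Alternate two steps: assign each point to its nearest center,
# then move each center to the mean of its assigned points.
def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center.
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        # Update step: each non-empty cluster's center moves to its mean.
        centers = [sum(ps) / len(ps) for ps in clusters.values() if ps]
    return sorted(centers)

# Unlabeled points that visibly form two groups, near 1 and near 10.
data = [1.0, 1.2, 0.8, 9.8, 10.0, 10.2]
final_centers = kmeans_1d(data, centers=[0.0, 5.0])
print(final_centers)  # two centers, close to 1.0 and 10.0
```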

You Can Blend Apache Spark And Tensorflow To Build Potential Deep Learning Solutions

Before we start our journey, let’s explore what Spark and TensorFlow are, and why we would want to combine them. Apache Spark™ is a unified analytics engine for large-scale data processing. Features: Speed: run workloads up to 100x faster. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG (directed acyclic graph) scheduler, a query optimizer, and a physical execution engine. (Chart: logistic regression runtime in Hadoop and Spark.) Ease of Use: write applications quickly in Java, Scala, Python, R, and SQL. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use … Continue reading You Can Blend Apache Spark And Tensorflow To Build Potential Deep Learning Solutions

Big Data and Machine Learning Fundamentals using GCP for Data Professionals.

You will get a brief overview of GCP and the tools that power machine learning and Big Data on GCP. So what is GCP? GCP stands for Google Cloud Platform; put simply, it is Google’s offering of cloud solutions. Now, I hope you have an understanding of what the cloud is. If you have ever worked in an organisation that has its own data center, you have some idea of the cloud: it simply means being given access to do what you want with a machine, while your physical-level access is restricted. So how do you, as an organisation, benefit … Continue reading Big Data and Machine Learning Fundamentals using GCP for Data Professionals.

End To End Scalable Machine Learning Project On Google Cloud With Beautiful Front End with Big Data-II

This is the last post of our blog’s full hands-on tutorial on machine learning at scale with Big Data on the cloud, so hold on tight as it’s about to finish. If you ditch me like my past girlfriends did, I will still be just as desperate for your response, and I hope that one day you will look at the previous posts I am providing here: Exploring the dataset and visualizing it. Building a machine learning regression model using a DNN. So let’s build the final concepts and serve what we built on the web. First we will talk about: … Continue reading End To End Scalable Machine Learning Project On Google Cloud With Beautiful Front End with Big Data-II