IoT in GCP

Security is critical when deploying and managing an IoT network. Cloud IoT Core provides several security features to protect your IoT network. Devices are authenticated individually, which means that if there is an attack on your IoT network, it is limited to one device and not the whole fleet. There are four public key formats available for devices: RS256 and RS256_X509, and ES256 and ES256_X509. You specify the key format when creating the device. You can also define an expiration time for each device credential (public key); after it expires, the key is ignored but not automatically deleted. If you don’t … Continue reading IoT in GCP
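
Below is a minimal sketch of creating a device with an RS256_X509 credential and an expiration time, assuming the google-cloud-iot Python client; the project, region, registry, device name, certificate path, and expiry date are placeholders, not values from this post.

    import datetime
    from google.cloud import iot_v1

    client = iot_v1.DeviceManagerClient()

    # Placeholder project, region and registry names.
    parent = client.registry_path("my-project", "us-central1", "my-registry")

    with open("rsa_cert.pem") as f:
        certificate = f.read()

    device = {
        "id": "my-device",
        "credentials": [
            {
                # RSA_X509_PEM corresponds to the RS256_X509 key format.
                "public_key": {
                    "format": iot_v1.PublicKeyFormat.RSA_X509_PEM,
                    "key": certificate,
                },
                # Optional expiration time; after this the key is ignored
                # but not deleted.
                "expiration_time": datetime.datetime(
                    2030, 1, 1, tzinfo=datetime.timezone.utc
                ),
            }
        ],
    }

    client.create_device(request={"parent": parent, "device": device})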

Cloud IoT Core

Cloud IoT Core is a fully managed service that allows you to easily and securely connect, manage, and ingest data from millions of globally dispersed devices. Cloud IoT Core, in combination with other services on Google Cloud Platform, provides a complete solution for collecting, processing, analyzing, and visualizing IoT data in real time to support improved operational efficiency. You will transmit telemetry messages from a device, and the device will respond to configuration changes from a server based on real-time data. The devices in this system publish temperature data to their telemetry feeds, and a server consumes the telemetry data … Continue reading Cloud IoT Core
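
As a rough illustration of the device side, the sketch below publishes one temperature reading over the MQTT bridge, assuming the paho-mqtt and PyJWT packages; the project, region, registry and device IDs and the key/CA file paths are placeholders.

    import datetime
    import ssl

    import jwt  # PyJWT
    import paho.mqtt.client as mqtt

    project_id, region = "my-project", "us-central1"
    registry_id, device_id = "my-registry", "my-device"

    # Device-signed JWT (RS256) used as the MQTT password.
    claims = {
        "iat": datetime.datetime.utcnow(),
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=20),
        "aud": project_id,
    }
    with open("rsa_private.pem") as f:
        password = jwt.encode(claims, f.read(), algorithm="RS256")

    client = mqtt.Client(
        client_id=f"projects/{project_id}/locations/{region}"
                  f"/registries/{registry_id}/devices/{device_id}"
    )
    client.username_pw_set(username="unused", password=password)
    client.tls_set(ca_certs="roots.pem", tls_version=ssl.PROTOCOL_TLSv1_2)
    client.connect("mqtt.googleapis.com", 8883)

    # Publish a temperature reading to the device's telemetry topic.
    client.publish(f"/devices/{device_id}/events", '{"temperature": 21.5}', qos=1)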

Query GitHub data using BigQuery

BigQuery is Google’s fully managed, NoOps, low-cost analytics database. With BigQuery you can query terabytes of data without needing a database administrator or any infrastructure to manage. BigQuery uses familiar SQL and a pay-only-for-what-you-use charging model. BigQuery allows you to focus on analyzing data to find meaningful insights. In this post we’ll see how to query the GitHub public dataset to get hands-on experience with it. Sign in to the Google Cloud Platform Console (console.cloud.google.com) and navigate to BigQuery. You can also open the BigQuery web UI directly by entering the following URL in your browser. Accept the terms of service. … Continue reading Query GitHub data using BigQuery
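
As a small taste of what the post walks through, the sketch below runs one query against the GitHub public dataset with the google-cloud-bigquery client; the choice of table and query is only an example.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Count repositories per license in the GitHub public dataset.
    query = """
        SELECT license, COUNT(*) AS repo_count
        FROM `bigquery-public-data.github_repos.licenses`
        GROUP BY license
        ORDER BY repo_count DESC
        LIMIT 10
    """

    for row in client.query(query).result():
        print(f"{row.license}: {row.repo_count}")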

Recommend Products using ML with Cloud SQL and Dataproc

Because our goal is to provide a demo, we use Cloud SQL; otherwise, you can use Cloud Spanner for horizontal scaling. The goal is to: create a Cloud SQL instance; create database tables by importing .sql files from Cloud Storage; populate the tables by importing .csv files from Cloud Storage; allow access to Cloud SQL; and explore the rentals data using SQL statements from Cloud Shell. The GCP Console opens in this tab. Note: you can view the menu with a list of GCP products and services by clicking the Navigation menu at the top left, next to “Google Cloud Platform”. You populate rentals … Continue reading Recommend Products using ML with Cloud SQL and Dataproc
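
Once the rentals data is in place, the recommendation step on Dataproc can be sketched roughly as below with PySpark's ALS; the Cloud SQL IP, credentials, database, table, and column names here are assumptions for illustration, not the exact values used in the lab.

    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("rentals-recommendations").getOrCreate()

    # Read the ratings table from Cloud SQL over JDBC (requires the MySQL
    # connector jar on the cluster's classpath).
    ratings = (
        spark.read.format("jdbc")
        .option("url", "jdbc:mysql://CLOUDSQL_INSTANCE_IP:3306/recommendation_spark")
        .option("driver", "com.mysql.jdbc.Driver")
        .option("dbtable", "Rating")
        .option("user", "root")
        .option("password", "CLOUDSQL_PASSWORD")
        .load()
        .selectExpr(
            "CAST(userId AS INT) AS userId",
            "CAST(accoId AS INT) AS accoId",
            "CAST(rating AS FLOAT) AS rating",
        )
    )

    # Train a collaborative-filtering model and recommend 5 rentals per user.
    als = ALS(userCol="userId", itemCol="accoId", ratingCol="rating",
              coldStartStrategy="drop")
    model = als.fit(ratings)
    model.recommendForAllUsers(5).show(truncate=False)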

Explore a BigQuery Public Dataset

Storing and querying massive datasets can be time consuming and expensive without the right hardware and infrastructure. Google BigQuery is an enterprise data warehouse that solves this problem by enabling super-fast SQL queries using the processing power of Google’s infrastructure. Simply move your data into BigQuery and let us handle the hard work. You can control access to both the project and your data based on your business needs, such as giving others the ability to view or query your data. You access BigQuery through the GCP Console, the command-line tool, or by making calls to the BigQuery REST API using a variety of client libraries such … Continue reading Explore a BigQuery Public Dataset
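
For instance, a minimal sketch with the Python client library (one of several ways to call BigQuery) queries a public dataset like this; the dataset and query are only an example.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Most common names in the usa_names public dataset.
    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 10
    """

    for row in client.query(query).result():
        print(row.name, row.total)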

Dimensionality reduction using sklearn: a way of reducing the burden

Principal component analysis (PCA): PCA is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, PCA is implemented as a transformer object that learns n components in its fit method, and can be used on new data to project it onto these components. PCA centers but does not scale the input data for each feature before applying the SVD. The optional parameter whiten=True makes it possible to project the data onto the singular space while scaling each component to unit variance. The PCA object also provides a probabilistic interpretation of the PCA that can give a likelihood … Continue reading Dimensionality reduction using sklearn: a way of reducing the burden
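
A minimal sketch of what that looks like in scikit-learn, on a made-up dataset:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.RandomState(0)
    X = rng.rand(200, 10)  # toy data: 200 samples, 10 features

    # Learn 2 orthogonal components; whiten=True scales each component
    # to unit variance when projecting.
    pca = PCA(n_components=2, whiten=True)
    X_reduced = pca.fit_transform(X)

    print(X_reduced.shape)                # (200, 2)
    print(pca.explained_variance_ratio_)  # variance explained per component
    print(pca.score(X))                   # average log-likelihood (probabilistic PCA)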

Open Economics for understanding the banking domain and trade system and building ML tools

For building tools related to financial trade analysis, understanding the domain is important. Here we will try to lay out the basics of it, so that domain expertise complements your technical knowledge. Normally we assume the economy of a capitalist country to be closed. This is done to simplify calculations of a country’s GDP, GNP, NI, wages, etc., but in reality this never happens, so let’s jump into the foundations of open economics. An economy interacts with the world in three broad ways. Goods: consumers and firms can choose between domestic and foreign goods. An example would be choosing a book from a local … Continue reading Open Economics for understanding the banking domain and trade system and building ML tools

Machine Learning Crash Course (TensorFlow Examples)

Machine learning, at first glance, starts with supervised learning, so here is a brief overview of the terms used. The very first thing to keep in mind is framing your machine learning model/project: what do you want to achieve from the data? Examples are as follows: A regression model predicts continuous values. For example, regression models make predictions that answer questions like the following: What is the value of a house in California? What is the probability that a user will click on this ad? A classification model predicts discrete values. For example, … Continue reading Machine Learning Crash Course (TensorFlow Examples)
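
To make the distinction concrete, here is a small sketch in TensorFlow/Keras of a regression head versus a binary classification head; the layer sizes and the 8-feature input are arbitrary choices for illustration.

    import tensorflow as tf

    # Regression: predict a continuous value (e.g. a house price).
    regressor = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),  # linear output -> continuous prediction
    ])
    regressor.compile(optimizer="adam", loss="mse")

    # Classification: predict a discrete class (e.g. click / no click).
    classifier = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of positive class
    ])
    classifier.compile(optimizer="adam", loss="binary_crossentropy",
                       metrics=["accuracy"])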

Spark Cluster Overview

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming. Security in Spark is OFF by default. This could mean you are vulnerable to attack by default. Spark uses Hadoop’s client libraries for HDFS and YARN.  Users can also download a “Hadoop free” binary and run Spark with any Hadoop version by augmenting Spark’s classpath. Scala and Java users can … Continue reading Spark Cluster Overview
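
As a quick sketch of those high-level APIs, the PySpark snippet below builds a tiny DataFrame and queries it with Spark SQL; the data is made up for illustration.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("cluster-overview-demo").getOrCreate()

    df = spark.createDataFrame(
        [("sensor-1", 21.5), ("sensor-2", 19.8)],
        ["device", "temperature"],
    )
    df.createOrReplaceTempView("readings")

    # Structured data processing with Spark SQL.
    spark.sql(
        "SELECT device, temperature FROM readings WHERE temperature > 20"
    ).show()

    spark.stop()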

Be different: build a machine learning model with a few extra lines in your SQL query and grab attention

From the introduction you probably get it, and yes, we are talking about BigQuery ML. BigQuery ML enables users to create and execute machine learning models in BigQuery using standard SQL queries. BigQuery ML democratizes machine learning by enabling SQL practitioners to build models using existing SQL tools and skills. BigQuery ML increases development speed by eliminating the need to move data. … Continue reading Be different: build a machine learning model with a few extra lines in your SQL query and grab attention
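
A rough sketch of the idea, run here through the Python BigQuery client (it could equally be pasted into the web UI); the target dataset name is a placeholder, and the training query follows the public Google Analytics sample commonly used in BigQuery ML tutorials.

    from google.cloud import bigquery

    client = bigquery.Client()

    # CREATE MODEL is just SQL with a few extra lines; `my_dataset` must
    # already exist in your project.
    create_model_sql = """
    CREATE OR REPLACE MODEL `my_dataset.sample_model`
    OPTIONS(model_type='logistic_reg') AS
    SELECT
      IF(totals.transactions IS NULL, 0, 1) AS label,
      IFNULL(device.operatingSystem, '') AS os,
      device.isMobile AS is_mobile,
      IFNULL(geoNetwork.country, '') AS country,
      IFNULL(totals.pageviews, 0) AS pageviews
    FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
    WHERE _TABLE_SUFFIX BETWEEN '20160801' AND '20170630'
    """

    client.query(create_model_sql).result()  # waits for training to finish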