Analyzing Financial Time Series Using BigQuery and Cloud Datalab

This solution illustrates the power and utility of BigQuery and Cloud Datalab as tools for quantitative analysis. The solution provides an introduction (this document) and gets you set up to run a notebook-based Cloud Datalab tutorial.

If you’re a quantitative analyst, you use a variety of tools and techniques to mine big data, such as market transaction histories, for information that can give you insight into market trends. Because quotes and trades happen at predictable intervals, such data represents a financial time series that you can analyze by using established techniques, including frequency analysis and moving averages.

But dealing with massive datasets can be challenging. Traditional tools might not scale as the dataset grows. Storage requirements can grow as fast as the dataset, so downloading data to your computer’s hard drive is no longer a workable approach. And it can take a long time to retrieve the right data subsets from a traditional database query.

BigQuery solves these issues by enabling you to run SQL queries and to get results quickly through the processing power of Google’s infrastructure. You can use BigQuery on the web, and you can use it on the command line and through APIs. When combined with other components of Google Cloud Platform (GCP) or third-party tools, BigQuery enables you to build the data-analysis applications you need now yet still be confident that you can scale them in the future.

GCP is also ISO 27001, ISO 27017, ISO 27018, SOC3, FINRA, and PCI compliant.

Objectives

  • Load a dataset into BigQuery.
  • Use BigQuery and Cloud Datalab to query financial time-series data.
  • Visualize your query results in Cloud Datalab.

This tutorial uses the following billable components of Google Cloud Platform:

  • Cloud Datalab: The resources needed to run Cloud Datalab on GCP are billable. These resources include one Compute Engine virtual machine, two persistent disks, and space for Cloud Storage backups.
  • BigQuery: This tutorial stores close to 100 MB of data in BigQuery and processes under 300 MB to execute the queries once. This amount falls well within the free limits (1 TB of query processing per month) that BigQuery provides.

Before you start the tutorial, you need to set up Cloud Datalab.

Use Cloud Shell

Start a Cloud Shell session. (This tutorial uses Cloud Shell; installing and using the Cloud SDK locally is not covered.)

In the Cloud Shell session window, run the following command to create a Cloud Datalab VM instance. Make sure to select a unique name for the instance, which must start with a lowercase letter followed by up to 63 lowercase letters, numbers, or hyphens, and cannot end with a hyphen.

datalab create instance-name

Open your browser to the Cloud Datalab home page by clicking the Web preview button, and then selecting Change port→Port 8081.

On the Cloud Datalab home page, add a new notebook by clicking the add Notebook button at the top left.

Copy the following code into the cell and click Run to execute it; the trailing dot copies the tutorial files into the notebook’s current working directory:

!gsutil cp gs://solutions-public-assets/bigquery-datalab/* .

Return to the original tab to see additional files. Click Analyzing Financial Time Series using BigQuery and Datalab.ipynb to begin working interactively through the tutorial.

Or, follow along and build the notebook yourself:

Financial markets are increasingly global. If you follow the sun from Asia to Europe to the US and so on, you can use information from an earlier time zone to your advantage in a later time zone.

The following table shows a number of stock market indices from around the globe, their closing times in Eastern Standard Time (EST), and the delay in hours between the close of that index and the close of the S&P 500 in New York; EST is the base time zone. For example, Australian markets close for the day 15 hours before US markets close. If the close of the All Ords in Australia is a useful predictor of the close of the S&P 500 for a given day, we can use that information to guide our trading activity. Continuing the Australian example: if the All Ords closes up and we think that means the S&P 500 will close up as well, then we should buy either stocks that compose the S&P 500 or, more likely, an ETF that tracks the S&P 500. In reality the situation is more complex, because there are commissions and taxes to account for. But as a first approximation, we’ll assume that an index closing up indicates a gain, and vice versa (see the short sketch after the table).

Index                          Country      Closing Time (EST)   Hours Before S&P Close
All Ords                       Australia    0100                 15
Nikkei 225                     Japan        0200                 14
Hang Seng                      Hong Kong    0400                 12
DAX                            Germany      1130                 4.5
FTSE 100                       UK           1130                 4.5
NYSE Composite                 US           1600                 0
Dow Jones Industrial Average   US           1600                 0
S&P 500                        US           1600                 0
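
To make the premise concrete, here is a minimal sketch of that naive rule, using made-up up/down flags; this is illustrative only and is not part of the tutorial notebook:

import pandas as pd

# Hypothetical daily outcomes for five trading days: 1 = index closed up, 0 = closed down.
example = pd.DataFrame({
    'aord_up': [1, 0, 1, 1, 0],
    'snp_up':  [1, 0, 1, 0, 0],
})

# Naive rule: if the All Ords closed up, predict that the S&P 500 closes up too.
hit_rate = (example['aord_up'] == example['snp_up']).mean()
print('Naive hit rate:', hit_rate)  # fraction of days the rule would have been right (0.8 here)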

Set up

First, import necessary libraries.

import pandas as pd
from pandas.tools.plotting import autocorrelation_plot
from pandas.tools.plotting import scatter_matrix

import numpy as np
import matplotlib.pyplot as plt

import google.datalab.bigquery as bq  # BigQuery (serverless data warehouse) connector for Cloud Datalab
import tensorflow as tf               # machine learning

The data covers the period from 1/1/2010 to 10/1/2015. It comes from the S&P 500 (S&P), NYSE, Dow Jones Industrial Average (DJIA), Nikkei 225 (Nikkei), Hang Seng, FTSE 100 (FTSE), DAX, and All Ordinaries (AORD) indices.

This data is publicly available and is stored in BigQuery for convenience. The built-in connector functionality in Cloud Datalab can access this data as Pandas DataFrames.

snp = bq.Query.from_table(bq.Table('bingo-ml-1.market_data.snp'), fields=['Date', 'Close']).execute().result().to_dataframe().set_index('Date')
nyse = bq.Query.from_table(bq.Table('bingo-ml-1.market_data.nyse'), fields=['Date', 'Close']).execute().result().to_dataframe().set_index('Date')
djia = bq.Query.from_table(bq.Table('bingo-ml-1.market_data.djia'), fields=['Date', 'Close']).execute().result().to_dataframe().set_index('Date')
nikkei = bq.Query.from_table(bq.Table('bingo-ml-1.market_data.nikkei'), fields=['Date', 'Close']).execute().result().to_dataframe().set_index('Date')
hangseng = bq.Query.from_table(bq.Table('bingo-ml-1.market_data.hangseng'), fields=['Date', 'Close']).execute().result().to_dataframe().set_index('Date')
ftse = bq.Query.from_table(bq.Table('bingo-ml-1.market_data.ftse'), fields=['Date', 'Close']).execute().result().to_dataframe().set_index('Date')
dax = bq.Query.from_table(bq.Table('bingo-ml-1.market_data.dax'), fields=['Date', 'Close']).execute().result().to_dataframe().set_index('Date')
aord = bq.Query.from_table(bq.Table('bingo-ml-1.market_data.aord'), fields=['Date', 'Close']).execute().result().to_dataframe().set_index('Date')

What’s happening here: bq is the Cloud Datalab BigQuery module. Query.from_table points the query at a BigQuery table, fields restricts it to the Date and Close columns, execute().result() runs the query, to_dataframe() pulls the results into a Pandas DataFrame for analysis, and set_index('Date') indexes the frame by date. The same pattern is repeated for each index.
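
Conceptually, each of these calls is equivalent to running a small SQL query and loading the result into a DataFrame. A sketch for one index, assuming the default standard SQL dialect of google.datalab.bigquery:

snp = bq.Query('SELECT Date, Close FROM `bingo-ml-1.market_data.snp`').execute().result().to_dataframe().set_index('Date')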

The closing prices are of interest, so for convenience, extract the closing prices for each of the indices into a single Pandas DataFrame called closing_data. Because not all of the indices have the same number of values, mainly due to bank holidays, forward-fill the gaps. This means that if a value isn’t available for day N, it is filled with the most recent available value, from day N-1 (or N-2, and so on), so that every day contains the latest available value.

closing_data = pd.DataFrame()

closing_data['snp_close'] = snp['Close']
closing_data['nyse_close'] = nyse['Close']
closing_data['djia_close'] = djia['Close']
closing_data['nikkei_close'] = nikkei['Close']
closing_data['hangseng_close'] = hangseng['Close']
closing_data['ftse_close'] = ftse['Close']
closing_data['dax_close'] = dax['Close']
closing_data['aord_close'] = aord['Close']

# Pandas includes a very convenient function for filling gaps in the data.
closing_data = closing_data.fillna(method='ffill')

At this point, you’ve sourced more than five years of time series for eight financial indices, combined the pertinent data into a single data structure, and harmonized the data to have the same number of entries, using only about 20 lines of code in this notebook. And it took roughly 10 seconds to do all of that. That shows how Cloud Datalab can amplify your productivity: first by giving you all the goodness of Python through iPython notebooks, and second by giving you a host of connectors to GCP services. So far you’ve used only BigQuery, but you can also experiment with Google Cloud Storage, and you can expect the number of these connectors to grow.

Exploratory data analysis

Exploratory Data Analysis (EDA) is foundational to working with machine learning, and any other sort of analysis. EDA means getting to know your data, getting your fingers dirty with your data, feeling it and seeing it. The end result is you know your data very well, so when you build models you build them based on an actual, practical, physical understanding of the data, not assumptions or vaguely held notions. You can still make assumptions of course, but EDA means you will understand your assumptions and why you’re making those assumptions.

First, take a look at the data.

closing_data.describe()
         snp_close    nyse_close    djia_close  nikkei_close  hangseng_close   ftse_close     dax_close   aord_close
count  1447.000000   1447.000000   1447.000000   1447.000000     1447.000000  1447.000000   1447.000000  1447.000000
mean   1549.733275   8920.468489  14017.464990  12562.240032    22250.759871  6097.818589   7959.386910  4910.363302
std     338.278280   1420.830375   2522.948044   3653.294124     2024.991482   552.471781   1761.802671   486.961236
min    1022.580017   6434.810059   9686.480469   8160.009766    16250.269531  4805.799805   5072.330078  3927.600098
25%    1271.239990   7668.234863  11987.635254   9463.864746    20850.485351  5677.849853   6448.355225  4483.199951
50%    1433.189941   8445.769531  13323.360352  10815.030273    22444.800781  6005.100098   7426.810059  4899.200195
75%    1875.510010  10370.324707  16413.575196  15215.709961    23447.339844  6620.699951   9403.629883  5345.750000
max    2130.820068  11239.660156  18312.390625  20868.029297    28442.750000  7104.000000  12374.730469  5954.799805

It’s best to scale the data so that, for example, operations involving multiple indices aren’t unduly influenced by a single, massive index.

Plot the data.

pd.concat([closing_data['snp_close'],
  closing_data['nyse_close'],
  closing_data['djia_close'],
  closing_data['nikkei_close'],
  closing_data['hangseng_close'],
  closing_data['ftse_close'],
  closing_data['dax_close'],
  closing_data['aord_close']], axis=1).plot(figsize=(20, 15))

As expected, the indices with smaller values are dwarfed by the larger ones, so the structure isn’t uniformly visible. Divide each value in an individual index by the maximum value for that index, and then replot. The maximum value of each scaled index will be 1.

closing_data['snp_close_scaled'] = closing_data['snp_close'] / max(closing_data['snp_close'])
closing_data['nyse_close_scaled'] = closing_data['nyse_close'] / max(closing_data['nyse_close'])
closing_data['djia_close_scaled'] = closing_data['djia_close'] / max(closing_data['djia_close'])
closing_data['nikkei_close_scaled'] = closing_data['nikkei_close'] / max(closing_data['nikkei_close'])
closing_data['hangseng_close_scaled'] = closing_data['hangseng_close'] / max(closing_data['hangseng_close'])
closing_data['ftse_close_scaled'] = closing_data['ftse_close'] / max(closing_data['ftse_close'])
closing_data['dax_close_scaled'] = closing_data['dax_close'] / max(closing_data['dax_close'])
closing_data['aord_close_scaled'] = closing_data['aord_close'] / max(closing_data['aord_close'])
pd.concat([closing_data['snp_close_scaled'],
  closing_data['nyse_close_scaled'],
  closing_data['djia_close_scaled'],
  closing_data['nikkei_close_scaled'],
  closing_data['hangseng_close_scaled'],
  closing_data['ftse_close_scaled'],
  closing_data['dax_close_scaled'],
  closing_data['aord_close_scaled']], axis=1).plot(figsize=(20, 15))

You can see that, over the five-year period, these indices are correlated. Notice that sudden drops from economic events happened globally, across all indices, and that the indices otherwise exhibited a general rise. This is a good start, though not the complete story. Next, plot autocorrelations for each of the indices. The autocorrelations measure the correlation between current values of an index and lagged values of the same index. The goal is to determine whether the lagged values are reliable indicators of the current values. If they are, then we’ve identified a correlation.

fig = plt.figure()
fig.set_figwidth(20)
fig.set_figheight(15)

_ = autocorrelation_plot(closing_data['snp_close'], label='snp_close')
_ = autocorrelation_plot(closing_data['nyse_close'], label='nyse_close')
_ = autocorrelation_plot(closing_data['djia_close'], label='djia_close')
_ = autocorrelation_plot(closing_data['nikkei_close'], label='nikkei_close')
_ = autocorrelation_plot(closing_data['hangseng_close'], label='hangseng_close')
_ = autocorrelation_plot(closing_data['ftse_close'], label='ftse_close')
_ = autocorrelation_plot(closing_data['dax_close'], label='dax_close')
_ = autocorrelation_plot(closing_data['aord_close'], label='aord_close')

_ = plt.legend(loc='upper right')

You should see strong autocorrelations, positive for around 500 lagged days, then going negative. This tells us something we should intuitively know: if an index is rising it tends to carry on rising, and vice-versa. It should be encouraging that what we see here conforms to what we know about financial markets.

Next, look at a scatter matrix, showing everything plotted against everything, to see how indices are correlated with each other.

_ = scatter_matrix(pd.concat([closing_data['snp_close_scaled'],
  closing_data['nyse_close_scaled'],
  closing_data['djia_close_scaled'],
  closing_data['nikkei_close_scaled'],
  closing_data['hangseng_close_scaled'],
  closing_data['ftse_close_scaled'],
  closing_data['dax_close_scaled'],
  closing_data['aord_close_scaled']], axis=1), figsize=(20, 20), diagonal='kde')


You can see significant correlations across the board, further evidence that the premise is workable and one market can be influenced by another.

As an aside, this process of gradual, incremental experimentation and progress is the best approach and what you probably do normally. With a little patience, we’ll get to some deeper understanding.

The actual value of an index is not that useful for modeling. It can be a useful indicator, but to get to the heart of the matter, we need a time series that is stationary in the mean, that is, with no trend in the data. There are various ways of achieving that, but they all essentially look at differences between values rather than absolute values. For market data, the usual practice is to work with log returns, calculated as the natural logarithm of the index today divided by the index yesterday:

ln(V_t / V_{t-1})
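
For example, with made-up numbers: a close of 2,040 following a previous close of 2,000 gives a log return of ln(2040/2000) ≈ 0.0198, and a close of 1,960 gives ln(1960/2000) ≈ −0.0202. In code, that is simply:

np.log(2040.0 / 2000.0)  # ≈ 0.0198, an up day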

log_return_data = pd.DataFrame()

log_return_data['snp_log_return'] = np.log(closing_data['snp_close']/closing_data['snp_close'].shift())
log_return_data['nyse_log_return'] = np.log(closing_data['nyse_close']/closing_data['nyse_close'].shift())
log_return_data['djia_log_return'] = np.log(closing_data['djia_close']/closing_data['djia_close'].shift())
log_return_data['nikkei_log_return'] = np.log(closing_data['nikkei_close']/closing_data['nikkei_close'].shift())
log_return_data['hangseng_log_return'] = np.log(closing_data['hangseng_close']/closing_data['hangseng_close'].shift())
log_return_data['ftse_log_return'] = np.log(closing_data['ftse_close']/closing_data['ftse_close'].shift())
log_return_data['dax_log_return'] = np.log(closing_data['dax_close']/closing_data['dax_close'].shift())
log_return_data['aord_log_return'] = np.log(closing_data['aord_close']/closing_data['aord_close'].shift())

log_return_data.describe()
_ = pd.concat([log_return_data['snp_log_return'],
  log_return_data['nyse_log_return'],
  log_return_data['djia_log_return'],
  log_return_data['nikkei_log_return'],
  log_return_data['hangseng_log_return'],
  log_return_data['ftse_log_return'],
  log_return_data['dax_log_return'],
  log_return_data['aord_log_return']], axis=1).plot(figsize=(20, 15))

You can see from the plot that the log returns of our indices are similarly scaled and centered, with no visible trend in the data. It’s looking good, so now look at the autocorrelations.

fig = plt.figure()
fig.set_figwidth(20)
fig.set_figheight(15)

_ = autocorrelation_plot(log_return_data['snp_log_return'], label='snp_log_return')
_ = autocorrelation_plot(log_return_data['nyse_log_return'], label='nyse_log_return')
_ = autocorrelation_plot(log_return_data['djia_log_return'], label='djia_log_return')
_ = autocorrelation_plot(log_return_data['nikkei_log_return'], label='nikkei_log_return')
_ = autocorrelation_plot(log_return_data['hangseng_log_return'], label='hangseng_log_return')
_ = autocorrelation_plot(log_return_data['ftse_log_return'], label='ftse_log_return')
_ = autocorrelation_plot(log_return_data['dax_log_return'], label='dax_log_return')
_ = autocorrelation_plot(log_return_data['aord_log_return'], label='aord_log_return')

_ = plt.legend(loc='upper right')

At this point, you’ve done a good enough job of exploratory data analysis. You’ve visualized your data and come to know it better. You’ve transformed it into a form that is useful for modelling, log returns, and looked at how the indices relate to each other. You’ve seen that indices from Europe strongly correlate with US indices, and that indices from Asia/Oceania significantly correlate with those same US indices for a given day. You’ve also seen that lagged values of the log returns do not correlate with today’s values. Summing up:

  • European indices from the same day were a strong predictor for the S&P 500 close.
  • Asian/Oceanian indices from the same day were a significant predictor for the S&P 500 close.
  • Indices from previous days were not good predictors for the S&P close.

What should we think so far?

Cloud Datalab is working great. With just a few lines of code, you were able to munge the data, visualize the changes, and make decisions. You could easily analyze and iterate. This is a common feature of iPython, but the advantage here is that Cloud Datalab is a managed service that you can simply click and use, so you can focus on your analysis.

Feature selection

At this point, we can see a model:

  • We’ll predict whether the S&P 500 close today will be higher or lower than yesterday.
  • We’ll use all our data sources: NYSE, DJIA, Nikkei, Hang Seng, FTSE, DAX, AORD.
  • We’ll use three sets of data points per index: for the non-US indices, days T, T-1, and T-2 (their day-T closes are available before the US markets close); for the US indices, days T-1, T-2, and T-3 (the day-T close of the S&P 500 is what we’re predicting).

Predicting whether the log return of the S&P 500 is positive or negative is a classification problem: we want to choose one option from a finite set of options, in this case positive or negative. This is the simplest case of classification, in which there are only two values to choose from, known as binary classification; the classic model for it is logistic regression.

This uses the findings from our exploratory data analysis, namely that log returns from other regions on a given day are strongly correlated with the log return of the S&P 500, and that the correlations are stronger for regions that are closer to the US with respect to time zones. However, our models also use data beyond those findings. For example, we use data from the past few days in addition to today. There are two reasons for using this additional data. First, we’re adding extra features to the model for the purposes of this solution, to see how things perform; that is not a good reason to add features outside of a tutorial setting. Second, machine learning models are very good at finding weak signals in data.

In machine learning, as in most things, there are subtle tradeoffs happening, but in general good data is better than good algorithms, which are better than good frameworks. You need all three pillars but in that order of importance: data, algorithms, frameworks.

TensorFlow

TensorFlow is an open source software library, initiated by Google, for numerical computation using data flow graphs. TensorFlow is based on Google’s machine learning expertise and is the next generation framework used internally at Google for tasks such as translation and image recognition. It’s a wonderful framework for machine learning because it’s expressive, efficient, and easy to use.

Feature engineering for TensorFlow

From a training and testing perspective, time series data is easy. Training data should come from events that happened before test data events, and be contiguous in time. Otherwise, your model would be trained on events from “the future”, at least as compared to the test data. It would then likely perform badly in practice, because you can’t really have access to data from the future. That means random sampling or cross validation don’t apply to time series data. Decide on a training-versus-testing split, and divide your data into training and test datasets.
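
A minimal sketch of such a chronological split (the tutorial performs the same kind of split later, after the features are built): take the first 80% of rows, in time order, for training and the remaining 20% for testing.

split = int(len(log_return_data) * 0.8)
train_df = log_return_data[:split]   # earlier dates only
test_df = log_return_data[split:]    # later dates only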

In this case, you’ll create the features together with two additional columns:

  • snp_log_return_positive, which is 1 if the log return of the S&P 500 close is positive, and 0 otherwise.
  • snp_log_return_negative, which is 1 if the log return of the S&P 500 close is negative, and 0 otherwise.

Now, logically you could encode this information in one column, named snp_log_return, which is 1 if positive and 0 if negative, but that’s not the way TensorFlow works for classification models. TensorFlow uses the general definition of classification, in which there can be many different potential values to choose from, and a form of encoding for these options called one-hot encoding. One-hot encoding means that each choice is an entry in an array, and the actual value has an entry of 1 with all other values being 0. This encoding (a single 1 in an array of 0s) is used for the inputs of the model, where you categorically know which value is correct. A variation of it is used for the output, where each entry in the array contains the probability of the answer being that choice. You can then choose the most likely value by picking the highest probability, and you also get a measure of the confidence you can place in that answer relative to the other answers.
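
As a small illustration (not part of the tutorial notebook), here is how three hypothetical classes would be one-hot encoded with pandas; the two columns created below play exactly this role for our binary case:

# Hypothetical three-class example: each row gets a 1 in the column for its class.
pd.get_dummies(pd.Series(['up', 'down', 'flat', 'up']))
#    down  flat  up
# 0     0     0   1
# 1     1     0   0
# 2     0     1   0
# 3     0     0   1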

We’ll use 80% of our data for training and 20% for testing.

log_return_data['snp_log_return_positive'] = 0
log_return_data.ix[log_return_data['snp_log_return'] >= 0, 'snp_log_return_positive'] = 1
log_return_data['snp_log_return_negative'] = 0
log_return_data.ix[log_return_data['snp_log_return'] < 0, 'snp_log_return_negative'] = 1

training_test_data = pd.DataFrame(
  columns=[
    'snp_log_return_positive', 'snp_log_return_negative',
    'snp_log_return_1', 'snp_log_return_2', 'snp_log_return_3',
    'nyse_log_return_1', 'nyse_log_return_2', 'nyse_log_return_3',
    'djia_log_return_1', 'djia_log_return_2', 'djia_log_return_3',
    'nikkei_log_return_0', 'nikkei_log_return_1', 'nikkei_log_return_2',
    'hangseng_log_return_0', 'hangseng_log_return_1', 'hangseng_log_return_2',
    'ftse_log_return_0', 'ftse_log_return_1', 'ftse_log_return_2',
    'dax_log_return_0', 'dax_log_return_1', 'dax_log_return_2',
    'aord_log_return_0', 'aord_log_return_1', 'aord_log_return_2'])

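# For each day i (starting at 7 so that all lagged values exist), build one row containing:
#   - the one-hot label for the S&P 500 on day i (positive/negative log return),
#   - the previous three days' log returns (T-1, T-2, T-3) for the US indices,
#   - the same-day and previous two days' log returns (T, T-1, T-2) for the non-US indices.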
for i in range(7, len(log_return_data)):
  snp_log_return_positive = log_return_data['snp_log_return_positive'].ix[i]
  snp_log_return_negative = log_return_data['snp_log_return_negative'].ix[i]
  snp_log_return_1 = log_return_data['snp_log_return'].ix[i-1]
  snp_log_return_2 = log_return_data['snp_log_return'].ix[i-2]
  snp_log_return_3 = log_return_data['snp_log_return'].ix[i-3]
  nyse_log_return_1 = log_return_data['nyse_log_return'].ix[i-1]
  nyse_log_return_2 = log_return_data['nyse_log_return'].ix[i-2]
  nyse_log_return_3 = log_return_data['nyse_log_return'].ix[i-3]
  djia_log_return_1 = log_return_data['djia_log_return'].ix[i-1]
  djia_log_return_2 = log_return_data['djia_log_return'].ix[i-2]
  djia_log_return_3 = log_return_data['djia_log_return'].ix[i-3]
  nikkei_log_return_0 = log_return_data['nikkei_log_return'].ix[i]
  nikkei_log_return_1 = log_return_data['nikkei_log_return'].ix[i-1]
  nikkei_log_return_2 = log_return_data['nikkei_log_return'].ix[i-2]
  hangseng_log_return_0 = log_return_data['hangseng_log_return'].ix[i]
  hangseng_log_return_1 = log_return_data['hangseng_log_return'].ix[i-1]
  hangseng_log_return_2 = log_return_data['hangseng_log_return'].ix[i-2]
  ftse_log_return_0 = log_return_data['ftse_log_return'].ix[i]
  ftse_log_return_1 = log_return_data['ftse_log_return'].ix[i-1]
  ftse_log_return_2 = log_return_data['ftse_log_return'].ix[i-2]
  dax_log_return_0 = log_return_data['dax_log_return'].ix[i]
  dax_log_return_1 = log_return_data['dax_log_return'].ix[i-1]
  dax_log_return_2 = log_return_data['dax_log_return'].ix[i-2]
  aord_log_return_0 = log_return_data['aord_log_return'].ix[i]
  aord_log_return_1 = log_return_data['aord_log_return'].ix[i-1]
  aord_log_return_2 = log_return_data['aord_log_return'].ix[i-2]
  training_test_data = training_test_data.append(
    {'snp_log_return_positive':snp_log_return_positive,
    'snp_log_return_negative':snp_log_return_negative,
    'snp_log_return_1':snp_log_return_1,
    'snp_log_return_2':snp_log_return_2,
    'snp_log_return_3':snp_log_return_3,
    'nyse_log_return_1':nyse_log_return_1,
    'nyse_log_return_2':nyse_log_return_2,
    'nyse_log_return_3':nyse_log_return_3,
    'djia_log_return_1':djia_log_return_1,
    'djia_log_return_2':djia_log_return_2,
    'djia_log_return_3':djia_log_return_3,
    'nikkei_log_return_0':nikkei_log_return_0,
    'nikkei_log_return_1':nikkei_log_return_1,
    'nikkei_log_return_2':nikkei_log_return_2,
    'hangseng_log_return_0':hangseng_log_return_0,
    'hangseng_log_return_1':hangseng_log_return_1,
    'hangseng_log_return_2':hangseng_log_return_2,
    'ftse_log_return_0':ftse_log_return_0,
    'ftse_log_return_1':ftse_log_return_1,
    'ftse_log_return_2':ftse_log_return_2,
    'dax_log_return_0':dax_log_return_0,
    'dax_log_return_1':dax_log_return_1,
    'dax_log_return_2':dax_log_return_2,
    'aord_log_return_0':aord_log_return_0,
    'aord_log_return_1':aord_log_return_1,
    'aord_log_return_2':aord_log_return_2},
    ignore_index=True)
  
training_test_data.describe()

(Output: training_test_data.describe() shows summary statistics for all 26 columns over 1,440 rows; about 57% of days have a positive S&P 500 log return, and the log-return features all have means close to zero.)

Now, create the training and test data.

predictors_tf = training_test_data[training_test_data.columns[2:]]

classes_tf = training_test_data[training_test_data.columns[:2]]

training_set_size = int(len(training_test_data) * 0.8)
test_set_size = len(training_test_data) - training_set_size

training_predictors_tf = predictors_tf[:training_set_size]
training_classes_tf = classes_tf[:training_set_size]
test_predictors_tf = predictors_tf[training_set_size:]
test_classes_tf = classes_tf[training_set_size:]

training_predictors_tf.describe()

(Output: summary statistics for the 24 predictor columns over the 1,152 training rows; the mean log returns are all close to zero.)

test_predictors_tf.describe()

(Output: the corresponding summary statistics for the 288 test rows.)

Next, define a helper function that builds a confusion matrix for a model and prints precision, recall, F1 score, and accuracy:
def tf_confusion_metrics(model, actual_classes, session, feed_dict):
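  # Compare predicted classes (argmax of the model's softmax output) with actual
  # classes (argmax of the one-hot labels), count true/false positives and negatives,
  # and derive precision, recall, F1 score, and accuracy from those counts.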
  predictions = tf.argmax(model, 1)
  actuals = tf.argmax(actual_classes, 1)

  ones_like_actuals = tf.ones_like(actuals)
  zeros_like_actuals = tf.zeros_like(actuals)
  ones_like_predictions = tf.ones_like(predictions)
  zeros_like_predictions = tf.zeros_like(predictions)

  tp_op = tf.reduce_sum(
    tf.cast(
      tf.logical_and(
        tf.equal(actuals, ones_like_actuals), 
        tf.equal(predictions, ones_like_predictions)
      ), 
      "float"
    )
  )

  tn_op = tf.reduce_sum(
    tf.cast(
      tf.logical_and(
        tf.equal(actuals, zeros_like_actuals), 
        tf.equal(predictions, zeros_like_predictions)
      ), 
      "float"
    )
  )

  fp_op = tf.reduce_sum(
    tf.cast(
      tf.logical_and(
        tf.equal(actuals, zeros_like_actuals), 
        tf.equal(predictions, ones_like_predictions)
      ), 
      "float"
    )
  )

  fn_op = tf.reduce_sum(
    tf.cast(
      tf.logical_and(
        tf.equal(actuals, ones_like_actuals), 
        tf.equal(predictions, zeros_like_predictions)
      ), 
      "float"
    )
  )

  tp, tn, fp, fn = \
    session.run(
      [tp_op, tn_op, fp_op, fn_op], 
      feed_dict
    )

  tpfn = float(tp) + float(fn)
  tpr = 0 if tpfn == 0 else float(tp)/tpfn
  fpr = 0 if tpfn == 0 else float(fp)/tpfn

  total = float(tp) + float(fp) + float(fn) + float(tn)
  accuracy = 0 if total == 0 else (float(tp) + float(tn))/total

  recall = tpr
  tpfp = float(tp) + float(fp)
  precision = 0 if tpfp == 0 else float(tp)/tpfp
  
  f1_score = 0 if recall == 0 else (2 * (precision * recall)) / (precision + recall)
  
  print('Precision = ', precision)
  print('Recall = ', recall)
  print('F1 Score = ', f1_score)
  print('Accuracy = ', accuracy)

Binary classification with TensorFlow

Now, get some tensors flowing. The model is binary classification expressed in TensorFlow.

sess = tf.Session()

# Define variables for the number of predictors and number of classes to remove magic numbers from our code.
num_predictors = len(training_predictors_tf.columns) # 24 in the default case
num_classes = len(training_classes_tf.columns) # 2 in the default case

# Define placeholders for the data we feed into the process - feature data and actual classes.
feature_data = tf.placeholder("float", [None, num_predictors])
actual_classes = tf.placeholder("float", [None, num_classes])

# Define a matrix of weights and initialize it with some small random values.
weights = tf.Variable(tf.truncated_normal([num_predictors, num_classes], stddev=0.0001))
biases = tf.Variable(tf.ones([num_classes]))

# Define our model...
# Here we take a softmax regression of the product of our feature data and weights.
model = tf.nn.softmax(tf.matmul(feature_data, weights) + biases)

# Define a cost function (we're using the cross entropy).
cost = -tf.reduce_sum(actual_classes*tf.log(model))

# Define a training step...
# Here we minimize the cost function we just defined by using the Adam optimizer with a learning rate of 0.0001.
training_step = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)

init = tf.initialize_all_variables()
sess.run(init)
WARNING:tensorflow:From <ipython-input-25-cd5471ce00ea>:26: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.

We’ll train our model in the following snippet. The approach of TensorFlow to executing graph operations allows fine-grained control over the process. Any operation you provide to the session as part of the run operation will be executed and the results returned. You can provide a list of multiple operations.
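
For example (a sketch, not a separate cell in the tutorial), a single run call can both execute the training step and return the current value of the cost function:

_, current_cost = sess.run(
  [training_step, cost],
  feed_dict={
    feature_data: training_predictors_tf.values,
    actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)
  })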

You’ll train the model over 30,000 iterations using the full dataset each time. Every 5,000 iterations, you’ll assess the accuracy of the model on the training data to check progress.

correct_prediction = tf.equal(tf.argmax(model, 1), tf.argmax(actual_classes, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

for i in range(1, 30001):
  sess.run(
    training_step, 
    feed_dict={
      feature_data: training_predictors_tf.values, 
      actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)
    }
  )
  if i%5000 == 0:
    print(i, sess.run(
      accuracy,
      feed_dict={
        feature_data: training_predictors_tf.values, 
        actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)
      }
    ))
5000 0.809896
10000 0.859375
15000 0.881076
20000 0.891493
25000 0.896701
30000 0.904514

An accuracy of about 90% on the training data is a good sign, certainly better than random. Next, evaluate the model on the test data.

feed_dict= {
  feature_data: test_predictors_tf.values,
  actual_classes: test_classes_tf.values.reshape(len(test_classes_tf.values), 2)
}

tf_confusion_metrics(model, actual_classes, sess, feed_dict)
Precision =  0.905660377358
Recall =  0.780487804878
F1 Score =  0.838427947598
Accuracy =  0.871527777778

The metrics for this simplest of TensorFlow models are a reasonable baseline: an F1 score of about 0.84 on the test data is decent, but there is room for improvement. That’s partly because of the model’s simplicity and partly because it hasn’t been tuned; selection of hyperparameters is very important in machine learning modelling.
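
For instance, here is a minimal sketch (not part of the original notebook) of a crude sweep over learning rates for the same softmax model; in practice you would compare performance on held-out data and sweep other hyperparameters too:

train_feed = {
  feature_data: training_predictors_tf.values,
  actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)
}
for lr in [0.01, 0.001, 0.0001]:
  step = tf.train.AdamOptimizer(learning_rate=lr).minimize(cost)
  sess.run(tf.global_variables_initializer())  # reset weights, biases, and optimizer state
  for _ in range(5000):
    sess.run(step, feed_dict=train_feed)
  print(lr, sess.run(accuracy, feed_dict=train_feed))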

Feed-forward neural network with two hidden layers

You’ll now build a proper feed-forward neural net with two hidden layers.

sess1 = tf.Session()

num_predictors = len(training_predictors_tf.columns)
num_classes = len(training_classes_tf.columns)

feature_data = tf.placeholder("float", [None, num_predictors])
actual_classes = tf.placeholder("float", [None, 2])

weights1 = tf.Variable(tf.truncated_normal([24, 50], stddev=0.0001))
biases1 = tf.Variable(tf.ones([50]))

weights2 = tf.Variable(tf.truncated_normal([50, 25], stddev=0.0001))
biases2 = tf.Variable(tf.ones([25]))
                     
weights3 = tf.Variable(tf.truncated_normal([25, 2], stddev=0.0001))
biases3 = tf.Variable(tf.ones([2]))

hidden_layer_1 = tf.nn.relu(tf.matmul(feature_data, weights1) + biases1)
hidden_layer_2 = tf.nn.relu(tf.matmul(hidden_layer_1, weights2) + biases2)
model = tf.nn.softmax(tf.matmul(hidden_layer_2, weights3) + biases3)

cost = -tf.reduce_sum(actual_classes*tf.log(model))

train_op1 = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)

init = tf.initialize_all_variables()
sess1.run(init)
WARNING:tensorflow:From <ipython-input-28-dd8f22a83d55>:26: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.

Again, you’ll train the model over 30,000 iterations using the full dataset each time, and every 5,000 iterations you’ll assess the accuracy of the model on the training data to check progress.

correct_prediction = tf.equal(tf.argmax(model, 1), tf.argmax(actual_classes, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

for i in range(1, 30001):
  sess1.run(
    train_op1, 
    feed_dict={
      feature_data: training_predictors_tf.values, 
      actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)
    }
  )
  if i%5000 == 0:
    print(i, sess1.run(
      accuracy,
      feed_dict={
        feature_data: training_predictors_tf.values, 
        actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)
      }
    ))
5000 0.931424
10000 0.934028
15000 0.934028
20000 0.934028
25000 0.934028
30000 0.934028

A significant improvement in accuracy with the training data shows that the hidden layers are adding additional capacity for learning to the model.

Looking at precision, recall, and accuracy, you can see a measurable improvement in performance, but certainly not a step function. This indicates that we’re likely reaching the limits of this relatively simple feature set.

feed_dict= {
  feature_data: test_predictors_tf.values,
  actual_classes: test_classes_tf.values.reshape(len(test_classes_tf.values), 2)
}

tf_confusion_metrics(model, actual_classes, sess1, feed_dict)
Precision =  0.921052631579
Recall =  0.853658536585
F1 Score =  0.886075949367
Accuracy =  0.90625

Conclusion

You’ve covered a lot of ground. You moved from sourcing five years of financial time-series data, to munging that data into a more suitable form. You explored and visualized that data with exploratory data analysis and then decided on a machine learning model and the features for that model. You engineered those features, built a binary classifier in TensorFlow, and analyzed its performance. You built a feed forward neural net with two hidden layers in TensorFlow and analyzed its performance.

How did the technology fare? It should take most people 1.5 to 3 hours to extract the juice from this solution, and none of that time is spent waiting for infrastructure or software; it’s spent reading and thinking. In many organizations, it can take anywhere from days to months to do this sort of data analysis, depending on whether you need to procure any hardware. And you didn’t need to do anything with infrastructure or additional software. Rather, you used a web-based console to direct GCP to set up systems on your behalf, which it did—fully managed, maintained, and supported—freeing you up to spend your time analyzing.

It was also cost effective. If you took your time with this solution and spent three hours to go through it, the cost would be a few pennies.

Cloud Datalab worked admirably, too. iPython/Jupyter has always been a great platform for interactive, iterative work, and a fully managed version of that platform on GCP, with connectors to other GCP technologies such as BigQuery and Google Cloud Storage, is a force multiplier for your analysis needs. If you haven’t used iPython before, this solution might have been eye opening for you. If you’re already familiar with iPython, then you’ll love the connectors to other GCP technologies.
