As we learn Python for the financial world, we will want to look at the frontier being explored: making predictions about securities in a way that resembles human judgment. Aiming for systems that can react and re-learn, scientists are working on deep learning based on the neurons of the brain, using a large collection of neurons, like a neural network, to sense every signal.

A Short Introduction to Neural Networks

(Image: a perceptron node)
  • A neural network is a network or circuit of neurons.
  • An artificial neural network is composed of artificial neurons or nodes.
  • Thus a neural network is either a biological neural network, made up of biological neurons, or an artificial neural network used for solving artificial intelligence (AI) problems.
  • In 1958, Frank Rosenblatt, an American psychologist, conceptualized and tried to build a machine that responds like the human mind. He named his machine the "Perceptron."
  • An artificial neural network in its basic form has three layers of neurons.
  • Information flows from one layer to the next, just as it does in the human brain:
    • The input layer: the data's entry point into the system
    • The hidden layer: where the information gets processed
    • The output layer: where the system decides how to proceed based on the data
  • In an artificial neural network, the artificial neuron receives a stimulus in the form of a signal that is a real number. The output of each neuron is then computed by a nonlinear function of the sum of its inputs.
  • Each neuron adds up the values of all the neurons in the previous layer it is connected to.
  • There are n inputs (x1, x2, x3, ..., xn) coming into the neuron.
  • Each input is multiplied by a variable called a "weight" (w1, w2, w3), which determines the strength of the connection between the two neurons.
  • A bias value may be added to the total value calculated.
  • After all those summations, the neuron finally applies a function called "activation function" to the obtained value.
  • This activation function gives the network its non-linear properties. Three common activation functions (a small numeric sketch follows this list):
    • Sigmoid activation
    • Tanh activation
    • Rectified linear unit (ReLU)
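
To make the computation concrete, here is a minimal sketch of a single artificial neuron in NumPy; the names and input values are illustrative, not from the original article:

    import numpy as np

    def neuron_output(x, w, b, activation):
        # weighted sum of inputs plus bias, passed through the activation
        z = np.dot(w, x) + b
        return activation(z)

    # the three common activation functions listed above
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    tanh = np.tanh
    relu = lambda z: np.maximum(0.0, z)

    x = np.array([0.5, -1.2, 3.0])   # inputs x1, x2, x3
    w = np.array([0.4, 0.3, -0.2])   # weights w1, w2, w3
    b = 0.1                          # bias
    print(neuron_output(x, w, b, sigmoid))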

How does a neural network learn?

(Image: from The Nature of Code)
  • The most basic neural network is called a perceptron.
  • It consists of two neurons in the input column and one neuron in the output column.
  • Training multi-layer perceptron networks is much more complicated. With the simple perceptron, we could easily evaluate how to change the weights according to the error (a small sketch follows this list).
  • The solution to optimizing weights of a multi-layered network is known as backpropagation.
  • The output of the network is generated in the same manner as a perceptron.
  • The inputs multiplied by the weights are summed and fed forward through the network.
  • The difference here is that they pass through additional layers of neurons before reaching the output.
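
As an illustration of changing the weights according to the error, here is a minimal perceptron sketch in NumPy; it uses the classic perceptron learning rule on made-up data, so treat it as an illustrative example rather than the article's code:

    import numpy as np

    def train_perceptron(X, y, lr=0.1, epochs=20):
        # start with zero weights and bias
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                pred = 1 if np.dot(w, xi) + b > 0 else 0  # step activation
                error = target - pred
                w += lr * error * xi   # adjust weights according to the error
                b += lr * error
        return w, b

    # learn a simple AND gate: two inputs, one output
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])
    w, b = train_perceptron(X, y)
    print(w, b)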

Using TensorFlow

  • TensorFlow is a free and open-source software library for machine learning and artificial intelligence.
  • It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.
  • There is an online TensorFlow Playground for you to try, so you need not install TensorFlow on your PC.
  • Let's work through a simple example using TensorFlow:
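
As a quick warm-up, here is a minimal sketch of a few basic TensorFlow 2 operations in eager mode (the values are illustrative):

    import tensorflow as tf

    # basic tensor arithmetic runs eagerly in TensorFlow 2
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    print(tf.add(a, b).numpy())      # 5.0

    # a small matrix product
    m = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    print(tf.matmul(m, m).numpy())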
  • After warming up with the example above, we continue with linear regression using TensorFlow.
    import pandas as pd
    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt

    # this example uses the TF1-style graph API, so eager execution
    # (the TF2 default) must be disabled for placeholders to work
    tf.compat.v1.disable_eager_execution()

    # synthetic data: y = 0.5x + 5 plus Gaussian noise
    x_data = np.linspace(0.0, 10.0, 1000000)
    noise = np.random.randn(len(x_data))
    y_data = (0.5 * x_data) + 5 + noise

    my_data = pd.concat([pd.DataFrame(data=x_data, columns=['XData']),
                         pd.DataFrame(data=y_data, columns=['Y'])], axis=1)
    print(my_data.head())

    batch_size = 8
    m = tf.Variable(0.5)    # initial guess for the slope
    b = tf.Variable(1.0)    # initial guess for the intercept

    # placeholders for one mini-batch of x and y values
    xph = tf.compat.v1.placeholder(tf.float32, [batch_size])
    yph = tf.compat.v1.placeholder(tf.float32, [batch_size])

    with tf.compat.v1.Session() as sess:
        y_model = m * xph + b                           # linear model
        err = tf.reduce_sum(tf.square(yph - y_model))   # squared-error loss
        opt = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.001)
        train = opt.minimize(err)
        init = tf.compat.v1.global_variables_initializer()
        sess.run(init)
        batches = 1000
        for i in range(batches):
            # pick a random mini-batch and run one gradient-descent step
            rand_int = np.random.randint(len(x_data), size=batch_size)
            feed = {xph: x_data[rand_int], yph: y_data[rand_int]}
            sess.run(train, feed_dict=feed)
        model_m, model_b = sess.run([m, b])
        print("model_m : " + str(model_m))
        print("model_b : " + str(model_b))

    # plot a sample of the data with the fitted line
    y_hat = x_data * model_m + model_b
    my_data.sample(n=250).plot(kind='scatter', x='XData', y='Y')
    plt.plot(x_data, y_hat, 'r')
    plt.show()
    model_m : 0.44678143
    model_b : 4.9145107

Finally, you have now learned how to do linear regression with TensorFlow V2; the fitted values above are close to the true slope of 0.5 and intercept of 5 used to generate the data. There is an important thing to note about TensorFlow V2: your program may run into issues when you need to access V1 functions. One way around this is to use the tf.compat.v1 namespace; otherwise, you may have to rewrite the code in V2 format.
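
For readers who prefer the rewrite-to-V2 route, here is a sketch of how the same fit might look in native TensorFlow 2 style, using tf.GradientTape instead of sessions and placeholders; the hyperparameters mirror the example above, but this is an illustrative rewrite, not code from the original exercise:

    import numpy as np
    import tensorflow as tf

    # same synthetic data as before: y = 0.5x + 5 plus noise
    x_data = np.linspace(0.0, 10.0, 1000000).astype(np.float32)
    y_data = (0.5 * x_data) + 5 + np.random.randn(len(x_data)).astype(np.float32)

    m = tf.Variable(0.5)
    b = tf.Variable(1.0)
    opt = tf.keras.optimizers.SGD(learning_rate=0.001)

    batch_size = 8
    for _ in range(1000):
        # pick a random mini-batch
        idx = np.random.randint(len(x_data), size=batch_size)
        xb, yb = x_data[idx], y_data[idx]
        with tf.GradientTape() as tape:
            y_model = m * xb + b                          # linear model
            err = tf.reduce_sum(tf.square(yb - y_model))  # squared-error loss
        # autodiff replaces the graph-mode optimizer op
        grads = tape.gradient(err, [m, b])
        opt.apply_gradients(zip(grads, [m, b]))

    print("model_m :", m.numpy())
    print("model_b :", b.numpy())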

Happy reading and take care.
