I recently upgraded my Keras installation to version 2.6, so I've been revisiting my three basic examples: Iris Dataset (multi-class classification), Boston Housing (regression), and Banknote Authentication (binary classification). In older versions of Keras, you installed the TensorFlow engine first and then the separate Keras library second, but now Keras is included in TensorFlow.

The Banknote Authentication dataset has 1372 items. Each item represents a banknote (think Euro or dollar bill) that is authentic (class 0) or a forgery (class 1). Each line of data has four predictor values: the variance, skewness, kurtosis, and entropy of the banknote image.


A graph of some of the Banknote Authentication data -- just kurtosis and entropy for the first 40 class 0 (authentic) and first 40 class 1 (forgery) items.

The Banknote data can be used as-is because all predictor values are roughly in the same range. However, I normalized each predictor value by dividing by 20 so that all values are between -1 and +1. Then I wrote a little script to randomly split the data into a 1000-item set for training (about 70% of the data) and a 372-item set for testing (about 30%).
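The split script itself isn't shown in this post, but here's a minimal sketch of the idea. The raw-file name banknote_raw.txt and the output paths are my assumptions, not from the original:

# split_banknote.py -- a sketch of the normalize-and-split idea;
# the raw file name and output paths are assumptions
import numpy as np

np.random.seed(1)
data = np.loadtxt(".\\Data\\banknote_raw.txt",
  delimiter='\t', dtype=np.float32)   # hypothetical raw file
data[:, 0:4] /= 20.0    # scale predictors to roughly [-1, +1]
idx = np.random.permutation(len(data))  # shuffle row order
train = data[idx[0:1000]]   # 1000 items for training
test = data[idx[1000:]]     # remaining 372 items for testing
np.savetxt(".\\Data\\banknote_train.txt", train,
  delimiter='\t', fmt='%0.6f')
np.savetxt(".\\Data\\banknote_test.txt", test,
  delimiter='\t', fmt='%0.6f')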

For the neural network binary classifier, I used a 4-(8-8)-1 architecture. I used tanh() activation on the two hidden layers but I could have used relu() activation instead. I used sigmoid activation on the output node so that output values are between 0.0 and 1.0, and then an output value less than 0.5 indicates class 0 = authentic, and an output value greater than 0.5 indicates class 1 = forgery.

import numpy as np
import tensorflow as tf
from tensorflow import keras as K

print("Creating 4-(8-8)-1 neural network ")
g_init = K.initializers.glorot_uniform(seed=1)
model = K.models.Sequential()
model.add(K.layers.Dense(units=8, input_dim=4,
  activation='tanh', kernel_initializer=g_init,
  bias_initializer='zeros'))
model.add(K.layers.Dense(units=8,
  activation='tanh', kernel_initializer=g_init,
  bias_initializer='zeros'))
model.add(K.layers.Dense(units=1,
  activation='sigmoid', kernel_initializer=g_init,
  bias_initializer='zeros'))
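For completeness, here's a tiny sketch (not from the demo program; the helper name output_to_class is mine) of how a sigmoid output value maps to a class label:

def output_to_class(p):
  # p is the sigmoid output, a value between 0.0 and 1.0
  if p < 0.5:
    return 0  # authentic
  return 1    # forgery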

I used explicit Glorot initialization for layer weights and explicit zero-initialization for layer biases. These are the default initialization schemes, so I could have omitted the explicit initialization. I prefer being explicit -- I think it's clearer, and it guards against confusion if the default initialization scheme ever changes.
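For example, because Dense layers default to kernel_initializer='glorot_uniform' and bias_initializer='zeros', the first hidden layer could have been written more briefly (a sketch; note that the fixed seed=1 used above is lost this way):

# same initialization scheme via the Dense defaults -- a sketch;
# there is no way to specify the seed=1 used above
model.add(K.layers.Dense(units=8, input_dim=4,
  activation='tanh'))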

I hit a few minor glitches as expected (deprecated parameter names, etc.) but I was able to fix these glitches quickly. This is a big advantage of experience with Keras or any other machine learning library -- you make fewer mistakes, but more importantly, over time you learn how to correct mistakes quickly.

Good fun!



Dealing with mistakes is a part of any kind of software development, including the development of machine learning systems. According to Wikipedia, cellophane was invented as a result of a mistake.

Cellophane is a thin, transparent sheet made of regenerated wood or cotton cellulose. Cellophane was invented by Jacques Brandenberger in 1900. He was inspired by seeing wine spill on a restaurant tablecloth, and he decided to create a cloth that could deal with that type of mistake.

Cellophane is sometimes used for contemporary women's fashion (left and right images) but has been around for a long time (center image is from 1933).


Code below.

# banknote_tfk.py
# Banknote classification
# Keras 2.6.0 in TensorFlow 2.6.0 ("_tfk")
# Anaconda3-2020.02  Python 3.7.6  Windows 10

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # suppress CPU warn

import numpy as np
import tensorflow as tf
from tensorflow import keras as K

class MyLogger(K.callbacks.Callback):
  def __init__(self, n):
    self.n = n   # print loss and acc every n epochs

  def on_epoch_end(self, epoch, logs={}):
    if epoch % self.n == 0:
      curr_loss = logs.get('loss')
      curr_acc = logs.get('accuracy') * 100
      print("epoch = %4d  loss = %0.6f  acc = %0.2f%%" % \
        (epoch, curr_loss, curr_acc))

def main():
  print("\nBanknote Authentication using Keras example ")
  np.random.seed(1)
  tf.random.set_seed(1)

  # 1. load data
  print("Loading Banknote data into memory ")
  train_file = ".\\Data\\banknote_train.txt"
  train_x = np.loadtxt(train_file, delimiter='\t',
    usecols=[0,1,2,3], dtype=np.float32)
  train_y = np.loadtxt(train_file, delimiter='\t',
    usecols=[4], dtype=np.float32)

  test_file = ".\\Data\\banknote_test.txt"
  test_x = np.loadtxt(test_file, delimiter='\t',
    usecols=[0,1,2,3], dtype=np.float32)
  test_y = np.loadtxt(test_file, delimiter='\t',
    usecols=[4], dtype=np.float32)

  # 2. define 4-(8-8)-1 deep NN model
  print("\nCreating 4-(8-8)-1 neural network ")
  g_init = K.initializers.glorot_uniform(seed=1)
  model = K.models.Sequential()
  model.add(K.layers.Dense(units=8, input_dim=4,
    activation='tanh', kernel_initializer=g_init,
    bias_initializer='zeros'))
  model.add(K.layers.Dense(units=8,
    activation='tanh', kernel_initializer=g_init,
    bias_initializer='zeros'))
  model.add(K.layers.Dense(units=1,
    activation='sigmoid', kernel_initializer=g_init,
    bias_initializer='zeros'))

  # 3. compile model
  opt = K.optimizers.SGD(learning_rate=0.01)
  model.compile(loss='binary_crossentropy',
    optimizer=opt, metrics=['accuracy'])

  # 4. train model
  max_epochs = 100
  log_every = 10
  my_logger = MyLogger(log_every)
  print("\nStarting training ")
  h = model.fit(train_x, train_y, batch_size=32,
    epochs=max_epochs, verbose=0, callbacks=[my_logger])
  print("Training finished ")

  # 5. evaluate model
  # np.set_printoptions(precision=4, suppress=True)
  eval_results = model.evaluate(test_x, test_y, verbose=0)
  print("\nLoss, accuracy on test data: ")
  print("%0.4f %0.2f%%" % (eval_results[0],
    eval_results[1]*100))

  # 6. save model
  print("\nSaving trained model as banknote_model.h5 ")
  # mp = ".\\Models\\banknote_model.h5"
  # model.save(mp)

  # 7. make a prediction
  np.set_printoptions(formatter={'float': '{: 0.4f}'.format})
  inpts = np.array([[0.5, 0.5, 0.5, 0.5]], dtype=np.float32)
  pred = model.predict(inpts)
  print("\nPredicting authenticity for: ")
  print(inpts)
  print("Probability of class 1 (forgery) = %0.4f " % pred[0][0])

if __name__ == "__main__":
  main()
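If the two commented-out save statements in section 6 are uncommented, the trained model can be reloaded in a later session. A minimal sketch (the path is the one from the commented-out code):

# reload the saved model -- a sketch, assuming model.save(mp)
# above was uncommented and run
import numpy as np
from tensorflow import keras as K

model = K.models.load_model(".\\Models\\banknote_model.h5")
pred = model.predict(np.array([[0.5, 0.5, 0.5, 0.5]],
  dtype=np.float32))
print("Probability of forgery = %0.4f " % pred[0][0])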