# Using Machine Learning to Interpolate Values

Machine learning is bursting with potential applications, but one important (and simple!) application is interpolation. In this post, I will demonstrate how a simple neural network can be used to interpolate on 1D or 2D grids. This is of particular interest to me for interpolating large vectors on a 2D spatial grid.

1-D interpolation asks a simple question: given an x value, predict the y value. Since neural networks are thought of as universal function approximators, they should be well suited to this type of problem (spoiler: they are). Let’s check out how this looks in practice.
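Before reaching for a neural network, it is worth keeping the classic baseline in mind: piecewise-linear interpolation between known sample points, e.g. with NumPy’s `np.interp`. This baseline is not part of the notebook, just a minimal point of comparison:

```python
import numpy as np

# Known samples of sin(2*pi*x) on a coarse grid
x_known = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y_known = np.sin(2 * np.pi * x_known)

# Linearly interpolate at a query point halfway between two samples
y_query = np.interp(0.125, x_known, y_known)

# Linear interpolation gives 0.5 here, while the true value
# sin(pi/4) is about 0.707: it misses the curvature between samples.
```

A neural network fit to many noisy samples can capture that curvature, which is the point of the rest of the post.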

But first! Let’s do some imports. We are going to be using keras’ Sequential model for the neural network, numpy for creating our data, matplotlib for visualizing our results, and sklearn for splitting our data into training and testing sets.

```python
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
import matplotlib.pyplot as plt
import sklearn.model_selection as skm
```

Let’s make a simple sine wave with a little bit of random noise.

```python
# Define 1D function we want to learn
def function1d(x):
    return np.sin(2*np.pi*x)

# Create sample
x = np.random.random_sample(1000)
y = function1d(x) + np.random.normal(0, .1, 1000)

# Plot
plt.scatter(x, y, label='Generated Data')
plt.plot(np.linspace(0, 1, 100), function1d(np.linspace(0, 1, 100)),
         label='Real Sine Function', color='r', linestyle='--', linewidth=3)
plt.xlabel('X value')
plt.ylabel('Y value')
plt.title("Randomly Sampled Sine Function")
```

And let’s go ahead and make a really simple neural network with two hidden layers of only 100 neurons each.

```python
### Model creation: adding layers and compilation
model = Sequential()
model.add(Dense(100, input_dim=1, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
```

We will also do the standard data split (i.e., 80% for training and 20% for testing).

```python
X_train, X_test, y_train, y_test = skm.train_test_split(x, y, test_size=0.2)
plt.scatter(X_train, y_train, label='training data')
plt.scatter(X_test, y_test, label='test data')
plt.legend()
plt.xlabel('X value')
plt.ylabel('Y value')
plt.title("Randomly Sampled Sine Function")
```

Now we are going to train our neural network using some standard parameters.

```python
history = model.fit(X_train, y_train, epochs=20, batch_size=4)
```

With the trained model, we can now predict values for our test set. Let’s take a look at how these line up with the actual function.

```python
y_pred = model.predict(X_test)
plt.scatter(X_train, y_train, label='Training Data')
plt.scatter(X_test, y_pred, label='Model Predictions')
plt.plot(np.linspace(0, 1, 100), function1d(np.linspace(0, 1, 100)),
         label='Real Sine Function', color='r', linestyle='--', linewidth=3)
plt.legend()
plt.xlabel('X value')
plt.ylabel('Y value')
plt.title("Randomly Sampled Sine Function")
```

And voila! We can see that even a really simple neural network does a great job at this interpolation problem with noisy data.
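The visual check is convincing, but the fit can also be quantified: `model.evaluate(X_test, y_test)` returns the test MSE directly. As a sketch of what that metric computes (plain NumPy, independent of the trained model), note that since we added Gaussian noise with standard deviation 0.1, a well-fit model’s test MSE should approach the irreducible noise variance of about 0.01:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between two 1-D arrays."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Perfect predictions give MSE 0; the model cannot be expected
# to go below the noise floor of ~0.01.
```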

Let’s check out the 2D case. First, we need to start by making our dataset.

```python
# Define 2D function we want to learn
def function2d(x, y):
    return (x + y) * np.exp(-5.0 * (x**2 + y**2))

# Create sample
x = 2*np.random.random_sample(2000) - 1
y = 2*np.random.random_sample(2000) - 1
z = function2d(x, y)

# Plot
plt.scatter(x, y, c=z)
plt.xlabel('X value')
plt.ylabel('Y value')
plt.title("Randomly Sampled 2D Function")
plt.colorbar()
```

So here we want to take our x and y value pairs (x,y) as input and predict the z-value (i.e. the color). Let’s start by splitting up our data.

```python
X_train, X_test, y_train, y_test = skm.train_test_split(
    np.column_stack((x, y)), z, test_size=0.2)

# Plot
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train)
plt.xlabel('X value')
plt.ylabel('Y value')
plt.title("Training Set")
plt.colorbar()
```

Now we can build our neural network by just changing the input to take two values instead of one.

```python
### Model creation: adding layers and compilation
model2D = Sequential()
model2D.add(Dense(100, input_dim=2, activation='relu'))
model2D.add(Dense(100, activation='relu'))
model2D.add(Dense(1, activation='linear'))
model2D.compile(optimizer='adam', loss='mse', metrics=['mse'])
```

And let’s go ahead and fit our network and see how the output looks.

```python
history = model2D.fit(X_train, y_train, epochs=20, batch_size=4)
z_pred = model2D.predict(X_test).ravel()  # flatten (N, 1) predictions for plotting
plt.scatter(X_test[:, 0], X_test[:, 1], c=z_pred)
plt.xlabel('X value')
plt.ylabel('Y value')
plt.title("Predicted Set")
plt.colorbar()
```

Alright, so it looks promising. Let’s plot this alongside our training data to see if it all looks OK.

```python
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train)
plt.scatter(X_test[:, 0], X_test[:, 1], c=z_pred)
plt.xlabel('X value')
plt.ylabel('Y value')
plt.title("All Data")
plt.colorbar()
```

And everything checks out!
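Once trained, the network can be queried at arbitrary points, which is exactly the interpolation use case from the intro: for example, a dense regular grid over the sampled domain. A minimal sketch of building such a grid (the commented `model2D.predict` line assumes the model trained above; the resolution `n` is an arbitrary choice):

```python
import numpy as np

# Build a dense regular grid of query points covering [-1, 1] x [-1, 1]
n = 50
xs = np.linspace(-1, 1, n)
ys = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(xs, ys)

# Stack into the (n_points, 2) shape the network expects as input
grid_points = np.column_stack((xx.ravel(), yy.ravel()))

# Feed the grid through the trained network (assumes model2D from above):
# z_grid = model2D.predict(grid_points).reshape(n, n)
```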

So in this short article I’ve shown you how a neural network can be used for 1D or 2D interpolation!

You can find the full notebook here:

https://github.com/crhea93/Medium/blob/master/MachineLearning/BasicInterpolation.ipynb