Tip #4: Follow Tested Practices

Go to the beginning of the article

Read the previous tip: Watch Your Own Data

There are some best practices that you should follow regardless of the task or type of DNN model you are using. For example, always split your data into three sets: training, validation, and testing. Always monitor training loss in addition to validation loss to determine if your model is underfitting or overfitting, and be open to adjusting the model architecture and parameters as needed.
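To make the three-way split concrete, here is a minimal sketch using scikit-learn's train_test_split applied twice; the random arrays, the roughly 70/15/15 proportions, and the fixed seed are placeholders for your own data and choices:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy arrays standing in for real features and labels.
X = np.random.rand(1000, 32, 32, 3)    # e.g. 1,000 32x32 RGB images
y = np.random.randint(0, 2, size=1000)

# Carve off the test set first so it stays untouched until the end,
# then split the remainder into training and validation sets.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15 / 0.85, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # roughly 700, 150, 150
```

Setting the test set aside before anything else is deliberate: it should play no role in model selection, only in the final evaluation.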

Be very mindful of the distinction between underfitting and overfitting, as the two are easy to confuse. If the training loss of your model never reaches a sufficiently low level, your model is underfitting, and there is no point in examining the validation loss yet. If the training loss has reached a low level and keeps decreasing with additional epochs, but the validation loss plateaus or starts to rise instead of following it down, then you have an overfitting model.
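As a concrete illustration, here is a hedged Keras sketch (assuming TensorFlow is installed) that records both losses during training and applies the two rules of thumb above; the toy model, the random data, and the 0.5 loss threshold are purely illustrative:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data standing in for a real training/validation split.
X_train = np.random.rand(500, 32, 32, 3)
y_train = np.random.randint(0, 2, size=500)
X_val = np.random.rand(100, 32, 32, 3)
y_val = np.random.randint(0, 2, size=100)

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(16, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Passing validation_data makes Keras record val_loss alongside loss.
history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=20, verbose=0)

train_loss = history.history['loss']
val_loss = history.history['val_loss']

# Crude diagnostics; the threshold is problem-specific.
if min(train_loss) > 0.5:
    print('Training loss never got low: likely underfitting.')
elif val_loss[-1] > min(val_loss):
    print('Validation loss is past its minimum: possible overfitting.')
else:
    print('No obvious red flags so far.')
```

In practice you would plot the two curves over epochs rather than rely on a single threshold, but the History object is where both signals live.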

To fix an underfitting model, increase its capacity: add one or more convolutional layers, or add more filters (neurons) to an existing convolutional layer. To fix an overfitting model, do the opposite: remove a convolutional layer or reduce the number of filters in one. Other techniques for combating overfitting include dropout and weight regularization, sketched below.
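As one possible starting point, here is how dropout and L2 weight regularization might be added to a small Keras model; the 0.25 dropout rate and the 1e-4 penalty are illustrative defaults, not values prescribed by this article:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# A lightly regularized variant of a small CNN: Dropout after
# pooling, plus an L2 penalty on the dense layer's weights.
model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(16, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),  # randomly zeroes 25% of activations in training
    layers.Flatten(),
    layers.Dense(32, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
```

Dropout discourages co-adaptation of activations, while the L2 penalty keeps weights small; both tend to shrink the gap between training and validation loss at some cost in raw training-loss level.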

Read next tip: My Starting-Point CNN Model


Need assistance in your AI/deep learning project? We may be able to help. Take a look at the intro to our bioinformatician team, see some of the advantages of using our team's help here, and check out our FAQ page!

Send us an inquiry, chat with us online (during our business hours 9-5 Mon-Fri U.S. Central Time), or reach us in other ways!


