# Things I have learned about PyTorch and neural networks
## Building models

All model building in PyTorch is based on the following three steps:

1. Start by creating a class that extends the `nn.Module` base class.
2. Define layers as class attributes (a `Sequential` wrapper helps for ease of use).
3. Implement the `.forward()` method.
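The three steps above can be sketched as follows (the class name and layer sizes are my own, not from this repo):

```python
import torch
import torch.nn as nn

# Step 1: extend the nn.Module base class
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Step 2: layers as class attributes, wrapped in Sequential for ease of use
        self.layers = nn.Sequential(
            nn.Linear(4, 8),
            nn.ReLU(),
            nn.Linear(8, 1),
        )

    # Step 3: define how input flows through the layers
    def forward(self, x):
        return self.layers(x)

net = TinyNet()
out = net(torch.randn(3, 4))  # batch of 3 samples, 4 features each
print(out.shape)              # torch.Size([3, 1])
```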
Each layer is just a predefined 'function'. Really, layers are objects that
extend the `nn.Module` base class, so each NN can itself act as a layer in
another NN. For example, I reimplemented an upscaling layer in BasicNeuralNet2.

[I picked up a lot of this info here.](https://deeplizard.com/learn/video/k4jY9L8H89U)
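A minimal sketch of a network acting as a layer inside another network (the `Inner`/`Outer` names are hypothetical, not from this repo):

```python
import torch
import torch.nn as nn

# A complete network in its own right...
class Inner(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.fc(x))

# ...used as an ordinary layer inside another network:
class Outer(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = Inner()        # a full NN as a layer
        self.head = nn.Linear(4, 1)

    def forward(self, x):
        return self.head(self.block(x))

print(Outer()(torch.randn(2, 4)).shape)  # torch.Size([2, 1])
```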
Also, a neural network can return more than a single output, as long as the
loss function used for optimization can consume all of them. Thus I could
write two separate neural networks (such as for launch and partials), and
then write a third NN that binds the two together.
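A sketch of a two-output network whose loss consumes both outputs (the `launch`/`partials` heads here are stand-ins I made up for the two networks mentioned above):

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(4, 8)
        self.launch = nn.Linear(8, 1)    # first output head
        self.partials = nn.Linear(8, 3)  # second output head

    def forward(self, x):
        h = torch.relu(self.shared(x))
        # forward() may return a tuple of outputs
        return self.launch(h), self.partials(h)

net = TwoHeadNet()
launch_out, partials_out = net(torch.randn(5, 4))

# A loss that consumes both outputs; gradients flow into both heads:
loss = launch_out.pow(2).mean() + partials_out.pow(2).mean()
loss.backward()
```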
## Notes on functions
ReLU is a rectified linear unit; it has no trainable parameters. This makes it
good for working as a final cleanup of the launch function, but not so good for
the partial derivatives (its gradient is zero for negative inputs).
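The "no training involved" part is easy to check: `nn.ReLU` just applies `max(0, x)` elementwise and holds no learned weights.

```python
import torch
import torch.nn as nn

relu = nn.ReLU()
print(list(relu.parameters()))             # [] — nothing to train
print(relu(torch.tensor([-2.0, 3.0])))     # tensor([0., 3.])
```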
Linear (`nn.Linear`) is a good but basic layer type.
Upscaling allows you to create more features. Downscaling reduces the number
of features (by throwing data away?). Instead of downscaling, use a linear
layer to change the dimensions.
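A sketch of the last point: a linear layer changes the number of features with learned weights rather than by discarding data (the sizes here are arbitrary examples):

```python
import torch
import torch.nn as nn

down = nn.Linear(16, 4)   # 16 features in, 4 out — the mapping is learned
x = torch.randn(8, 16)    # batch of 8 samples
print(down(x).shape)      # torch.Size([8, 4])
```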
## Remaining Questions
- How do you set it up to run over a set of variables, i.e. batches?
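One possible setup I have seen for this (a sketch, not something verified in this repo): wrap the data in a `Dataset` and let `DataLoader` slice it into batches, since layers like `nn.Linear` accept batched input directly.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

xs = torch.randn(100, 4)  # 100 samples, 4 features
ys = torch.randn(100, 1)

# DataLoader yields (batch_x, batch_y) pairs of batch_size samples each
loader = DataLoader(TensorDataset(xs, ys), batch_size=10, shuffle=True)

for batch_x, batch_y in loader:
    # batch_x has shape [10, 4]; feed it straight into the model
    pass
```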