Tensorflow permute mnist

8/6/2023

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

We're ignoring the warnings and changing the default TensorFlow log level just so we don't get overwhelmed with the output.

Image 2 - A random sample of the wine quality dataset (image by author)

The dataset is mostly clean, but isn't designed for binary classification by default (good/bad wine). We'll address that now, along with numerous other things:

- Delete missing values - There's only a handful of them, so we won't waste time on imputation.
- Handle categorical features - The only one is type, indicating whether the wine is white or red.
- Convert to a binary classification task - We'll declare any wine with a grade of 6 and above as good, and anything below as bad.
- Train/test split - A classic 80:20 split.
- Scale the data - The scale between predictors differs significantly, so we'll use the StandardScaler to bring the values closer.

Here's the entire data preprocessing code snippet:

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = ...
...
X_train, X_test, y_train, y_test = train_test_split(...)

Once again, please refer to the previous article if you want more detailed insights into the logic behind data preprocessing.

How to approach optimizing neural network models?

With that out of the way, let's see how to approach optimizing neural network architectures. The approach to finding the optimal neural network model will have some tweakable constants. Today's network will have 3 hidden layers, with a minimum of 64 and a maximum of 256 nodes per layer. We'll set the step size between nodes to 64, so the possibilities are 64, 128, 192, and 256:

def get_models(..., output_layer_activation: str = 'sigmoid') -> list:
    node_options = list(range(min_nodes_per_layer, max_nodes_per_layer + 1, node_step_size))
    ...
    layer_node_permutations = list(itertools.product(*layer_possibilities))
    for permutation in layer_node_permutations:
        model_name = ''
        for nodes_at_layer in permutation:
            ...Dense(nodes_at_layer, activation=hidden_layer_activation))
            model_name += f'dense...'
    ...

And now, let's finally start the optimization. Keep in mind - the optimization will take some time, as we're training 64 models for 50 epochs.

...sort_values(by='test_accuracy', ascending=False)

Image 9 - Model optimization results (image by author)

It looks like the simplest model resulted in the best accuracy. You could also test the optimization for models with two and four hidden layers, or even more, but I'll leave that up to you. It's just a matter of calling the get_models() function and passing in different parameter values.

And that's all I wanted to cover today.
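Since the preprocessing snippet survives only in fragments, here is a minimal sketch of the five steps listed above. The column names type and quality, the grade-of-6 threshold, the 80:20 split, and the StandardScaler all come from the text; the helper name preprocess, the derived column names, and the random_state are assumptions of mine.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler


def preprocess(df: pd.DataFrame):
    # 1. Delete missing values - only a handful, so no imputation
    df = df.dropna()
    # 2. Handle categorical features - 'type' is either white or red
    df['is_white_wine'] = (df['type'] == 'white').astype(int)
    df = df.drop(columns=['type'])
    # 3. Binary target - a grade of 6 and above counts as a good wine
    df['is_good_wine'] = (df['quality'] >= 6).astype(int)
    df = df.drop(columns=['quality'])
    # 4. Classic 80:20 train/test split
    X = df.drop(columns=['is_good_wine'])
    y = df['is_good_wine']
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    # 5. Scale predictors so their ranges are comparable
    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)
    X_test = scaler.transform(X_test)
    return X_train, X_test, y_train, y_test
```

Any DataFrame with type and quality columns plus numeric predictors can be passed straight in; the scaler is fit on the training split only, so no test-set information leaks into it.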
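The architecture-permutation logic described above can be pieced together into a full sketch. The function name get_models(), the itertools.product call, the node_options range, and the sigmoid output default are from the article; the exact signature, the ReLU hidden activation, and the model-naming scheme are assumptions.

```python
import itertools
import tensorflow as tf


def get_models(num_layers: int = 3,
               min_nodes_per_layer: int = 64,
               max_nodes_per_layer: int = 256,
               node_step_size: int = 64,
               hidden_layer_activation: str = 'relu',
               num_nodes_at_output: int = 1,
               output_layer_activation: str = 'sigmoid') -> list:
    # Node counts each hidden layer may take: 64, 128, 192, 256
    node_options = list(range(min_nodes_per_layer, max_nodes_per_layer + 1, node_step_size))
    # One copy of the options per hidden layer, then every combination
    layer_possibilities = [node_options] * num_layers
    layer_node_permutations = list(itertools.product(*layer_possibilities))

    models = []
    for permutation in layer_node_permutations:
        layers = []
        model_name = 'model'
        for nodes_at_layer in permutation:
            layers.append(tf.keras.layers.Dense(nodes_at_layer,
                                                activation=hidden_layer_activation))
            model_name += f'_dense{nodes_at_layer}'
        layers.append(tf.keras.layers.Dense(num_nodes_at_output,
                                            activation=output_layer_activation))
        models.append(tf.keras.Sequential(layers, name=model_name))
    return models
```

With the defaults, 4 node options across 3 hidden layers gives 4^3 = 64 candidate models, which matches the "64 models for 50 epochs" remark. Calling get_models(num_layers=2) or get_models(num_layers=4) covers the two- and four-layer experiments the article leaves to the reader.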
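The optimization run itself (train each candidate, score it on the test set, sort the results) is only hinted at by the sort_values fragment, so here is one way it could look. The four metric names match the article's sklearn import and the sort_values(by='test_accuracy', ascending=False) call is quoted from it; the loss, optimizer, 0.5 threshold, and the optimize name are assumptions.

```python
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score


def optimize(models, X_train, y_train, X_test, y_test, epochs=50):
    # Train every candidate architecture and score it on the held-out test set
    results = []
    for model in models:
        model.compile(loss='binary_crossentropy', optimizer='adam',
                      metrics=['accuracy'])
        model.fit(X_train, y_train, epochs=epochs, verbose=0)
        # Sigmoid outputs are probabilities; threshold at 0.5 for class labels
        preds = (model.predict(X_test) > 0.5).astype(int).ravel()
        results.append({
            'model_name': model.name,
            'test_accuracy': accuracy_score(y_test, preds),
            'test_precision': precision_score(y_test, preds),
            'test_recall': recall_score(y_test, preds),
            'test_f1': f1_score(y_test, preds),
        })
    # Best architecture first, as in the results table
    return pd.DataFrame(results).sort_values(by='test_accuracy', ascending=False)
```

Feeding it the 64 models from get_models() reproduces the kind of table shown in the optimization-results image, one row per architecture, sorted by test accuracy.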