
Homework #2

Generate two federated learning models to classify the MNIST
dataset (Zaman Umar, 202360116)
Federated Learning Model #1 Configuration Parameters:
The first federated learning model is configured with the following parameters:
Number of layers: 2
First layer: 784×20 (20 neurons)
Second layer: 20×10 (10 neurons)
Number of rounds: 20
Number of local epochs: 5
Number of clients: 10
Learning rate: 0.01
Number of classes: 10
Input size: 784
Batch size: 32
Optimizer: SGD
Federated Learning Model #1 Code
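The original code listing did not survive in this copy, so the following is a minimal sketch of what the training script could look like. It assumes PyTorch and torchvision, an IID split of MNIST across the clients, plain FedAvg with equally weighted clients, and ReLU activations (none of which are stated above); all names are illustrative.

import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

# Hyperparameters taken from the configuration above.
ROUNDS, LOCAL_EPOCHS, CLIENTS = 20, 5, 10
LR, BATCH, INPUT_SIZE, CLASSES = 0.01, 32, 784, 10

class MLP(nn.Module):
    # Two-layer network: 784 -> 20 -> 10, as specified above
    # (the ReLU activation is an assumption).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(INPUT_SIZE, 20), nn.ReLU(),
                                 nn.Linear(20, CLASSES))
    def forward(self, x):
        return self.net(x)

tfm = transforms.ToTensor()
train = datasets.MNIST("data", train=True, download=True, transform=tfm)
test = datasets.MNIST("data", train=False, download=True, transform=tfm)
# Assumed IID partition: split the training set evenly across the clients.
shards = random_split(train, [len(train) // CLIENTS] * CLIENTS)
loaders = [DataLoader(s, batch_size=BATCH, shuffle=True) for s in shards]
test_loader = DataLoader(test, batch_size=256)

def local_train(model, loader):
    # Run the local epochs of SGD on one client's shard; return its weights.
    opt = torch.optim.SGD(model.parameters(), lr=LR)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(LOCAL_EPOCHS):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

global_model = MLP()
for rnd in range(ROUNDS):
    # Each client starts from a copy of the current global weights.
    states = [local_train(copy.deepcopy(global_model), ld) for ld in loaders]
    # FedAvg aggregation: element-wise mean (shards are equally sized).
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    # Evaluate the aggregated global model on the test set.
    global_model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in test_loader:
            correct += (global_model(x).argmax(1) == y).sum().item()
    print(f"round {rnd + 1}: test accuracy {correct / len(test):.4f}")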
Federated Learning Model #2 Configuration Parameters:
The second federated learning model is configured with the following parameters:
Number of layers: 3
First layer: 784×50 (50 neurons)
Second layer: 50×20 (20 neurons)
Third layer: 20×10 (10 neurons)
Number of rounds: 50
Number of local epochs: 5
Number of clients: 10
Learning rate: 0.01
Number of classes: 10
Input size: 784
Batch size: 32
Optimizer: SGD
Federated Learning Model #2 Code
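Model #2 reuses the same FedAvg loop as the sketch above; only the network definition and the round count change. A minimal sketch of the differences, under the same assumptions (PyTorch, assumed ReLU activations):

import torch.nn as nn

ROUNDS = 50  # model #2 runs 50 federated rounds instead of 20

class MLP2(nn.Module):
    # Three-layer network: 784 -> 50 -> 20 -> 10, as specified above.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(784, 50), nn.ReLU(),
                                 nn.Linear(50, 20), nn.ReLU(),
                                 nn.Linear(20, 10))
    def forward(self, x):
        return self.net(x)

global_model = MLP2()  # the remainder of the training loop is unchanged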
Explanations:
Two models were trained with federated learning under the specifications given above. Most
parameters are the same for model #1 and model #2, but a few were changed to check for
performance differences.
Model #1 has two layers with 20 and 10 neurons, respectively, and is trained for 20 rounds. It
achieved an accuracy of 0.09, with a global validation loss of 0.07 and a local training loss of 0.59.
Model #2, on the other hand, has three layers with 50, 20, and 10 neurons, respectively, and is
trained for 50 rounds. It achieved an accuracy of 0.11, with a global validation loss of 0.07 and a
local training loss of 0.46.
Comparing the two runs, increasing the number of layers, neurons per layer, and rounds in
model #2 improved accuracy only marginally (0.11 versus 0.09 for model #1). Both results are
close to the 10% accuracy expected from random guessing over ten classes, so the added depth
and rounds did not yield a meaningful gain.