S.NO EXPERIMENT NAME MARKS SIGNATURE
1. Implement simple vector addition in TensorFlow.
2. Implement a regression model in Keras.
3. Implement a perceptron in TensorFlow/Keras environment.
4. Implement a Feed-Forward Network in TensorFlow/Keras.
5. Implement an Image Classifier using CNN in TensorFlow/Keras.
6. Improve the deep learning model by fine-tuning hyperparameters.
7. Implement a Transfer Learning concept in Image Classification.
8. Use a pre-trained model in Keras for Transfer Learning.
9. Perform Sentiment Analysis using RNN.
10. Implement an LSTM-based Autoencoder in TensorFlow/Keras.
11. Image generation using GAN.
Implement Simple Vector Addition in TensorFlow
Exercise No:
Date:
Aim: Learn how to perform basic mathematical operations on tensors using TensorFlow.
Algorithm:
1. Define two vectors (e.g., vector_a and vector_b).
2. Convert the vectors to TensorFlow tensors using tf.constant.
3. Use the tf.add() function to perform element-wise addition of the two vectors.
4. Print the result.
Steps:
1. vector_a = tf.constant([1, 2, 3])
2. vector_b = tf.constant([4, 5, 6])
3. result = tf.add(vector_a, vector_b)
4. print(result)
Instructions:
1. Install TensorFlow by running:
   pip install tensorflow
2. Create two simple vectors and perform addition.
Code:
import tensorflow as tf
# Create two vectors
vector_a = tf.constant([1, 2, 3], dtype=tf.float32)
vector_b = tf.constant([4, 5, 6], dtype=tf.float32)
# Perform addition
result = tf.add(vector_a, vector_b)
print(result)
Expected Output:
tf.Tensor([5. 7. 9.], shape=(3,), dtype=float32)
VIVA QUESTIONS
Q1: What is TensorFlow, and how does it handle basic operations like vector addition?
A1:
TensorFlow is an open-source machine learning library developed by Google. It provides a
flexible platform for building machine learning models, including neural networks. For basic
operations like vector addition, TensorFlow uses Tensors, which are multidimensional arrays,
to efficiently store and process data. In TensorFlow, vector addition is handled through
functions like tf.add(), which performs element-wise addition of two tensors (or vectors)
efficiently, leveraging hardware acceleration.
Q2: Can you explain how to perform simple vector addition in TensorFlow?
A2:
To perform simple vector addition in TensorFlow, you follow these steps:
1. Import TensorFlow using import tensorflow as tf.
2. Create two vectors (using tf.constant() or tf.Variable()).
3. Use tf.add() to add the two vectors element-wise.
4. Display or return the result.
Example:
import tensorflow as tf
# Create two vectors
vector_a = tf.constant([1, 2, 3])
vector_b = tf.constant([4, 5, 6])
# Add the vectors
result = tf.add(vector_a, vector_b)
# Print the result
print(result)
Q3: What is the difference between tf.add() and the Python + operator when working
with TensorFlow tensors?
A3:
The + operator can be used for element-wise addition in TensorFlow, similar to tf.add().
However, tf.add() is more explicit and often preferred when working with TensorFlow,
especially in more complex graphs or operations that require backward compatibility with
TensorFlow's graph execution. tf.add() is also optimized for TensorFlow’s performance when
dealing with large tensors across multiple devices, such as GPUs.
Example:
result = vector_a + vector_b # Equivalent to tf.add(vector_a, vector_b)
While both the + operator and tf.add() do the same thing in most cases, tf.add() is often used
for clarity and consistency in TensorFlow code.
Q4: How does TensorFlow handle operations like vector addition on GPU or
distributed systems?
A4:
TensorFlow is designed to run efficiently on both CPUs and GPUs, leveraging hardware
acceleration. When performing vector addition (or any operation), TensorFlow automatically
distributes operations across available devices (such as a GPU, if available). This is done
through the tf.device() context, and TensorFlow abstracts away the complexity of device
placement. The library optimizes tensor operations to be parallelized, which can lead to faster
execution, especially when dealing with large datasets or complex computations.
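As a minimal sketch of explicit device placement with tf.device() (only "/CPU:0" is assumed to exist here; "/GPU:0" would be used the same way on a machine with a GPU):

```python
import tensorflow as tf

# TensorFlow places operations on devices automatically, but tf.device()
# lets you pin a computation to a specific device explicitly.
with tf.device('/CPU:0'):
    a = tf.constant([1.0, 2.0, 3.0])
    b = tf.constant([4.0, 5.0, 6.0])
    c = tf.add(a, b)

print(c.device)   # full device string the result tensor lives on
print(c.numpy())  # [5. 7. 9.]
```

On a GPU machine, replacing '/CPU:0' with '/GPU:0' pins the addition to the GPU; without the context manager, TensorFlow chooses the device itself.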
Q5: What data type do you typically use when performing vector operations in
TensorFlow?
A5:
In TensorFlow, the typical data type for performing vector operations is tf.float32, which is
the default type for most tensor operations. However, you can use other data types, such as
tf.int32 or tf.float64, depending on the problem's requirements. It is important to ensure that
the data types of the vectors being added match to avoid errors during execution.
Example:
vector_a = tf.constant([1.0, 2.0, 3.0], dtype=tf.float32)
vector_b = tf.constant([4.0, 5.0, 6.0], dtype=tf.float32)
result = tf.add(vector_a, vector_b)
Q6: What is the purpose of tf.constant() in the context of simple vector addition?
A6:
tf.constant() is used to create a constant tensor that holds fixed values. In the context of vector
addition, tf.constant() allows us to define vectors that do not change during the computation.
This is typically used for input vectors or for operations that are not expected to modify the
tensor values during the execution of the graph.
Example:
vector_a = tf.constant([1, 2, 3])
vector_b = tf.constant([4, 5, 6])
Q7: What will happen if the vectors have different shapes when performing addition in
TensorFlow?
A7:
TensorFlow performs broadcasting, meaning it will try to align tensors of different shapes
according to certain rules. If the vectors have incompatible shapes (for example, different
lengths), TensorFlow will raise an error. However, if one vector is a scalar or can be
broadcast to match the shape of the other vector, the operation will succeed.
For example:
vector_a = tf.constant([1, 2, 3]) # Shape (3,)
vector_b = tf.constant([4]) # Shape (1,)
# This will work because scalar values are broadcast across the tensor
result = tf.add(vector_a, vector_b) # Output: [5, 6, 7]
But if the shapes are incompatible:
vector_a = tf.constant([1, 2, 3]) # Shape (3,)
vector_b = tf.constant([4, 5]) # Shape (2,)
# This will raise an error due to incompatible shapes
Q8: How would you handle errors like shape mismatches or invalid operations in
TensorFlow?
A8:
To handle shape mismatches or invalid operations in TensorFlow:
1. Shape Validation: You can manually check the shapes of the tensors before
performing operations using tf.shape().
2. Reshaping Tensors: If the shapes are mismatched but can be aligned, use operations
like tf.reshape() to adjust the tensor shapes.
3. Exception Handling: Use try-except blocks to catch runtime errors and debug issues.
Example of shape validation:
# Compare the shapes element-wise; reduce_all collapses to one boolean
if tf.reduce_all(tf.shape(vector_a) == tf.shape(vector_b)):
    result = tf.add(vector_a, vector_b)
else:
    print("Shapes are not compatible!")
Q9: Can you perform the vector addition using the tf.function() decorator? Why might
you do that?
A9:
Yes, you can use the tf.function() decorator to convert a Python function into a TensorFlow
graph, which can improve performance by enabling graph optimization and execution in a
more efficient manner.
Example:
@tf.function
def vector_addition(a, b):
    return tf.add(a, b)
result = vector_addition(tf.constant([1, 2, 3]), tf.constant([4, 5, 6]))
Using tf.function() is beneficial for performance in production environments, as it reduces
overhead and accelerates the execution of TensorFlow operations.
Implement a Regression Model in Keras
Exercise No:
Date:
Aim: Implement a simple regression model using Keras to predict continuous values.
Algorithm:
1. Import necessary libraries (tensorflow.keras).
2. Generate or load training data (e.g., X_train and y_train).
3. Define a simple neural network model with one input and output layer.
4. Compile the model with an optimizer (e.g., Adam) and loss function (e.g., mean_squared_error).
5. Train the model using model.fit() on the training data.
Steps:
1. X_train, y_train = generate_synthetic_data()
2. model = Sequential([Dense(1, input_dim=1)])
3. model.compile(optimizer='adam', loss='mean_squared_error')
4. model.fit(X_train, y_train, epochs=10)
Instructions:
1. Import Keras and create a simple regression model using dense layers.
2. Generate synthetic data for training.
3. Code:
import tensorflow as tf
from tensorflow.keras import layers, models
# Sample data
X_train = tf.random.normal([100, 1])
y_train = 2 * X_train + 1
# Create a simple regression model
model = models.Sequential([
    layers.Dense(1, input_dim=1)
])
model.compile(optimizer='adam', loss='mean_squared_error')
# Train the model
model.fit(X_train, y_train, epochs=10)
VIVA QUESTIONS
Q1: What is the primary purpose of a regression model?
A1:
The primary purpose of a regression model is to predict a continuous output variable based
on one or more input features. It is used when the dependent variable is numeric and
continuous, such as predicting house prices, stock prices, or temperature.
Q2: Can you explain how a regression model in Keras is different from a classification
model?
A2:
In a regression model, the output is a continuous value, while in a classification model, the
output is a discrete class label. For regression tasks, the loss function is typically
mean_squared_error (MSE) or mean_absolute_error, while classification uses
categorical_crossentropy or binary_crossentropy depending on the number of classes.
Q3: What activation function is typically used in the output layer of a regression model?
A3:
For regression tasks, the output layer typically has a linear activation function (or no
activation function). This is because the model needs to output continuous values, and a
linear activation does not constrain the output to any range, unlike sigmoid or softmax, which
are used for classification tasks.
Q4: What is the loss function used in regression models, and why?
A4:
The most common loss function for regression models is mean squared error (MSE). MSE
computes the average of the squared differences between predicted and actual values. It is
preferred because it penalizes larger errors more heavily, making the model sensitive to larger
deviations between predicted and actual values.
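The MSE definition above can be checked by hand against the built-in Keras loss; a minimal sketch (the sample values are illustrative):

```python
import tensorflow as tf

y_true = tf.constant([3.0, 5.0, 7.0])
y_pred = tf.constant([2.5, 5.0, 8.0])

# MSE by hand: mean of the squared differences
mse_manual = tf.reduce_mean(tf.square(y_true - y_pred))

# The built-in Keras loss computes the same value
mse_keras = tf.keras.losses.MeanSquaredError()(y_true, y_pred)

print(mse_manual.numpy())  # (0.25 + 0.0 + 1.0) / 3 = 0.41666...
```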
Q5: How do you compile a regression model in Keras?
A5:
To compile a regression model in Keras, you would define the optimizer, loss function, and
any metrics you want to track (though in regression, metrics like accuracy are typically not
used). A typical compilation for a regression model might look like this:
model.compile(optimizer='adam', loss='mean_squared_error')
Here, Adam is used as the optimizer, and mean_squared_error is the loss function.
Q6: How can you prevent overfitting in a regression model?
A6:
Overfitting in regression models can be prevented through several techniques:
1. Regularization: L2 (Ridge) or L1 (Lasso) regularization can be added to the model to
penalize large weights.
2. Dropout: Adding dropout layers can prevent overfitting by randomly setting a
fraction of input units to 0 during training.
3. Early Stopping: Monitor validation loss during training and stop when it stops
improving.
4. Cross-validation: Use cross-validation to ensure that the model generalizes well to
new data.
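A minimal sketch combining the first three techniques (L2 regularization, dropout, and early stopping) on synthetic regression data; the data, layer sizes, and patience value are illustrative:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

# Synthetic regression data (illustrative)
X = tf.random.normal([200, 4])
y = tf.reduce_sum(X, axis=1, keepdims=True)

model = models.Sequential([
    layers.Dense(32, activation='relu', input_shape=(4,),
                 kernel_regularizer=regularizers.l2(0.01)),  # L2 penalty on weights
    layers.Dropout(0.2),  # randomly zeroes 20% of activations during training
    layers.Dense(1)       # linear output for regression
])
model.compile(optimizer='adam', loss='mean_squared_error')

# Stop training once validation loss stops improving
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=3, restore_best_weights=True)
history = model.fit(X, y, validation_split=0.2, epochs=20,
                    callbacks=[early_stop], verbose=0)
print(len(history.history['val_loss']))  # fewer than 20 if stopped early
```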
Q7: How do you evaluate a regression model in Keras?
A7:
In Keras, regression models can be evaluated using the mean squared error (MSE) or mean
absolute error (MAE) as the evaluation metric. These metrics show how close the
predictions are to the actual values. For example:
model.evaluate(X_test, y_test, verbose=1)
Alternatively, custom metrics like R-squared (R²) can also be used to evaluate model
performance, though Keras doesn't directly include it as a built-in metric.
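Since R² is not built in, it can be written as a small custom function; an unweighted, illustrative sketch:

```python
import tensorflow as tf

def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    ss_res = tf.reduce_sum(tf.square(y_true - y_pred))                   # residual sum of squares
    ss_tot = tf.reduce_sum(tf.square(y_true - tf.reduce_mean(y_true)))   # total sum of squares
    return 1.0 - ss_res / ss_tot

y_true = tf.constant([1.0, 2.0, 3.0, 4.0])
y_pred = tf.constant([1.1, 1.9, 3.2, 3.8])
print(r_squared(y_true, y_pred).numpy())  # 0.98 -- close to 1 for a good fit
```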
Q8: Can you explain the role of batch size and epochs in training a regression model?
A8:
- Batch size refers to the number of training samples used in one forward and backward pass. Smaller batch sizes make training more stochastic, which can help with generalization, but too small a batch can increase the variance of the gradient estimates.
- Epochs refers to the number of times the entire training dataset is passed through the model. More epochs allow the model to learn better, but too many epochs may lead to overfitting. You should monitor validation loss and use techniques like early stopping to avoid overfitting.
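The interplay of batch size and epochs can be seen in a minimal fit call (synthetic data; the numbers are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

X = tf.random.normal([128, 1])
y = 2 * X + 1

model = models.Sequential([layers.Dense(1, input_shape=(1,))])
model.compile(optimizer='adam', loss='mean_squared_error')

# batch_size=32 on 128 samples -> 4 weight updates per epoch;
# epochs=5 -> the full dataset is passed through the model 5 times
history = model.fit(X, y, batch_size=32, epochs=5, verbose=0)
print(len(history.history['loss']))  # 5: one loss value per epoch
```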
Q9: What is the significance of the optimizer in a regression model in Keras?
A9:
The optimizer is used to minimize the loss function during training by adjusting the weights
of the model. Common optimizers include Adam, SGD (Stochastic Gradient Descent), and
RMSprop. Adam is often preferred because it adapts the learning rate for each parameter and
converges faster. The optimizer is essential for efficient training, especially when dealing
with large datasets or complex models.
Q10: How would you handle a situation where the target variable in a regression model has a
skewed distribution?
A10:
When the target variable in a regression model has a skewed distribution, several strategies
can be used to address this:
1. Log Transformation: Apply a logarithmic transformation to the target variable to
reduce skewness and make the distribution more normal.
2. Normalization/Standardization: Scale the target variable to improve the learning
dynamics of the model.
3. Resampling: Use techniques like SMOTE or undersampling/oversampling to balance
the dataset.
4. Quantile Regression: Quantile regression can help in cases where the data is heavily
skewed and is more robust to outliers.
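The log-transformation strategy can be sketched with NumPy (the target values are illustrative); the model is trained on the transformed target and predictions are mapped back with the inverse transform:

```python
import numpy as np

# Skewed target values, e.g. prices (illustrative)
y = np.array([1.0, 2.0, 3.0, 10.0, 100.0, 1000.0])

# log1p compresses large values; train the regression model on y_log
y_log = np.log1p(y)

# After predicting in log space, invert with expm1 to recover the scale
y_back = np.expm1(y_log)
print(np.allclose(y_back, y))  # True -- the transform is invertible
```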
Implement a Perceptron in TensorFlow/Keras
Exercise No:
Date:
Aim: Implement a perceptron model to solve simple classification problems.
Algorithm:
1. Import necessary libraries (tensorflow.keras).
2. Define the dataset (e.g., XOR problem).
3. Create a perceptron model with one input and one output neuron.
4. Compile the model using binary_crossentropy as the loss function.
5. Train the model on the dataset for a specified number of epochs.
Steps:
1. X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
2. y_train = [[0], [1], [1], [0]]
3. model = Sequential([Dense(1, input_dim=2, activation='sigmoid')])
4. model.compile(optimizer='adam', loss='binary_crossentropy')
5. model.fit(X_train, y_train, epochs=1000)
Instructions:
1. Use the XOR dataset to train the perceptron.
2. Code:
import tensorflow as tf
from tensorflow.keras import layers, models
# Define the perceptron model
model = models.Sequential([
    layers.Dense(1, input_dim=2, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Sample data: XOR problem (note: XOR is not linearly separable, so a
# single-layer perceptron cannot learn it perfectly; see Q5 below)
X_train = tf.constant([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=tf.float32)
y_train = tf.constant([[0], [1], [1], [0]], dtype=tf.float32)
# Train the model
model.fit(X_train, y_train, epochs=1000)
VIVA QUESTIONS
Q1: What is a perceptron?
A1:
A perceptron is a simple, single-layer neural network used for binary classification tasks. It
consists of an input layer, a weighted sum, an activation function, and an output. It is the
basic building block for more complex neural networks.
Q2: How do you implement a perceptron in Keras?
A2:
In Keras, you can implement a perceptron by defining a Sequential model with one Dense
layer having a sigmoid activation function for binary classification. Example:
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(input_shape,))
])
Q3: What activation function is commonly used in a perceptron?
A3:
The sigmoid activation function is commonly used in the output layer of a perceptron for
binary classification tasks because it outputs values between 0 and 1, representing
probabilities.
Q4: How do you train a perceptron?
A4:
You train a perceptron by defining the loss function (usually binary_crossentropy for binary
classification), an optimizer (e.g., Adam), and fitting the model on the data using model.fit().
Q5: What is the difference between a perceptron and a multi-layer perceptron (MLP)?
A5:
A perceptron has only a single layer of neurons, while an MLP has multiple hidden layers.
MLPs can solve more complex tasks because they have the capability to model non-linear
relationships in the data.
Q6: How do you handle the case where the model is not learning in a perceptron?
A6:
You can try the following:
- Ensure proper initialization of weights.
- Adjust the learning rate (e.g., by using the Adam optimizer).
- Increase the number of training epochs.
- Ensure data is preprocessed (normalized or standardized).
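Adjusting the learning rate means passing an optimizer instance instead of the string 'adam'; a minimal sketch (the rate 0.01 is illustrative, the default is 0.001):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(1, input_shape=(2,), activation='sigmoid')
])

# An optimizer instance exposes the learning rate explicitly,
# here raised from the default 0.001 to 0.01
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
model.compile(optimizer=optimizer, loss='binary_crossentropy',
              metrics=['accuracy'])
print(model.optimizer.learning_rate.numpy())  # ~0.01
```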
Q7: What loss function do you use for a binary classification perceptron?
A7:
For binary classification, binary_crossentropy is typically used as the loss function because
it measures the difference between the predicted probabilities and actual class labels.
Q8: How would you implement a multi-class classification perceptron?
A8:
For multi-class classification, use a softmax activation function in the output layer and
categorical_crossentropy as the loss function.
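A minimal multi-class sketch (the number of classes and input size are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 3  # illustrative

model = models.Sequential([
    # softmax outputs one probability per class
    layers.Dense(num_classes, input_shape=(4,), activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

probs = model(tf.random.normal([2, 4]))
print(probs.shape)  # (2, 3): each row is a probability distribution over classes
```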
Q9: What is the purpose of the bias term in a perceptron?
A9:
The bias term helps shift the activation function to the right or left, allowing the perceptron to
make better decisions by adjusting the decision boundary.
Q10: How would you improve the performance of a perceptron?
A10:
To improve a perceptron's performance, you can:
- Increase the model complexity by adding more hidden layers.
- Tune hyperparameters like the learning rate.
- Use advanced optimizers like Adam.
- Introduce regularization techniques like dropout or L2 regularization.
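The first suggestion can be sketched by extending this exercise's single-layer model into a small MLP, which, unlike the single-layer perceptron, can represent XOR (the hidden-layer size and learning rate are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

X = tf.constant([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=tf.float32)
y = tf.constant([[0], [1], [1], [0]], dtype=tf.float32)

# One hidden layer turns the perceptron into an MLP
model = models.Sequential([
    layers.Dense(8, input_shape=(2,), activation='relu'),  # hidden layer
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.05),
              loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=300, verbose=0)

preds = model.predict(X, verbose=0)
print(preds.round().flatten())  # with enough epochs, often recovers 0, 1, 1, 0
```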
Implement a Feed-Forward Network in
TensorFlow/Keras
Exercise No:
Date:
Aim: Create a multi-layer feed-forward neural network with Keras.
Algorithm:
1. Define the input data (e.g., X_train and y_train).
2. Create a feed-forward neural network with one or more hidden layers.
3. Compile the model with an appropriate optimizer (e.g., adam) and loss function (e.g.,
categorical_crossentropy).
4. Train the model using model.fit().
Steps:
1. model = Sequential([Dense(128, activation='relu', input_dim=input_size), Dense(64,
activation='relu'), Dense(output_size, activation='softmax')])
2. model.compile(optimizer='adam', loss='categorical_crossentropy')
3. model.fit(X_train, y_train, epochs=10)
Instructions:
1. Create a model with multiple layers and train it on any dataset.
2. Code:
import tensorflow as tf
from tensorflow.keras import layers, models
# Define the feed-forward neural network
model = models.Sequential([
    layers.Dense(128, activation='relu', input_dim=64),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
VIVA QUESTIONS
Q1: What is a feed-forward neural network (FNN)?
A1:
A feed-forward neural network is a type of neural network where the data moves in one
direction—from input to output. It consists of an input layer, one or more hidden layers, and
an output layer, with no cycles or loops.
Q2: How do you implement a feed-forward network in Keras?
A2:
In Keras, a feed-forward network can be implemented by stacking dense layers in a
Sequential model. Example:
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(input_dim,)),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
Q3: What activation functions are commonly used in a feed-forward network?
A3:
The most commonly used activation functions are:
- ReLU for hidden layers.
- Sigmoid or softmax for the output layer, depending on whether it's binary or multi-class classification.
Q4: How do you train a feed-forward network?
A4:
You compile the model with an optimizer (e.g., Adam), loss function (e.g.,
binary_crossentropy), and metrics, then use model.fit() to train it on the data.
Q5: What is backpropagation?
A5:
Backpropagation is an algorithm used to update the weights in a neural network by
calculating the gradient of the loss function with respect to the weights. It propagates the
error backwards through the network to optimize weights.
Q6: What is overfitting, and how can it be prevented in a feed-forward network?
A6:
Overfitting occurs when a model learns the training data too well, including noise, leading to
poor generalization. It can be prevented by:
- Using dropout layers.
- Adding L2 regularization.
- Using early stopping.
- Increasing training data.
Q7: What is the role of a loss function in a feed-forward network?
A7:
The loss function measures the difference between predicted values and actual values. During
training, the goal is to minimize the loss function. Common loss functions for classification
include binary_crossentropy or categorical_crossentropy.
Q8: How do you optimize the training process in a feed-forward network?
A8:
Optimization techniques include:
- Using better optimizers like Adam or RMSprop.
- Adjusting the learning rate.
- Using batch normalization to stabilize training.
Q9: What is batch normalization?
A9:
Batch normalization normalizes the input to each layer in the network to improve training
speed and stability by reducing internal covariate shift.
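A minimal sketch of inserting a BatchNormalization layer between a dense layer and its activation (the layer sizes are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(64, input_shape=(8,)),
    layers.BatchNormalization(),   # normalizes the layer's outputs per batch
    layers.Activation('relu'),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')

out = model(tf.random.normal([4, 8]))
print(out.shape)  # (4, 1)
```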
Q10: How would you handle imbalanced data in a feed-forward network?
A10:
You can handle imbalanced data by:
- Using class weights in the loss function.
- Oversampling the minority class or undersampling the majority class.
- Evaluating with imbalance-aware metrics such as precision, recall, or F1-score instead of plain accuracy.
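The class-weight approach can be sketched on an imbalanced toy dataset (all names and numbers are illustrative):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Imbalanced toy data: 90 negatives, 10 positives (illustrative)
X = np.random.randn(100, 4).astype('float32')
y = np.array([0.0] * 90 + [1.0] * 10, dtype='float32')

model = models.Sequential([
    layers.Dense(8, input_shape=(4,), activation='relu'),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Weight the rare class more heavily so its errors count more in the loss
class_weight = {0: 1.0, 1: 9.0}
history = model.fit(X, y, epochs=2, class_weight=class_weight, verbose=0)
```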
Implement an Image Classifier Using CNN in
TensorFlow/Keras
Exercise No:
Date:
Aim: Implement a Convolutional Neural Network (CNN) for image classification.
Algorithm:
1. Load image data (e.g., MNIST or CIFAR).
2. Create a convolutional neural network (CNN) architecture:
o Add convolutional layers.
o Add max-pooling layers.
o Flatten the output.
o Add dense layers for classification.
3. Compile the model with Adam optimizer and appropriate loss function.
4. Train the model on the dataset.
Steps:
1. model = Sequential([Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
MaxPooling2D((2, 2)), Conv2D(64, (3, 3), activation='relu'), MaxPooling2D((2, 2)),
Flatten(), Dense(64, activation='relu'), Dense(10, activation='softmax')])
2. model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
3. model.fit(X_train, y_train, epochs=5)
Instructions:
1. Use the MNIST dataset or any other dataset.
2. Code:
import tensorflow as tf
from tensorflow.keras import layers, models
# Define a simple CNN model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Assuming you have the dataset (e.g., MNIST)
# model.fit(X_train, y_train, epochs=5)
VIVA QUESTIONS
Q1: What is a Convolutional Neural Network (CNN)?
A1:
A CNN is a type of deep learning model primarily used for image classification and computer vision tasks. It consists of convolutional layers that detect spatial hierarchies in images.
Q2: How do you implement a CNN for image classification in Keras?
A2:
In Keras, a CNN can be implemented by stacking convolutional layers followed by pooling
layers and dense layers. Example:
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
Q3: What is the purpose of pooling layers in CNNs?
A3:
Pooling layers are used to reduce the spatial dimensions of the image and down-sample
feature maps, thus reducing computational complexity and controlling overfitting.
Q4: What is the difference between ReLU and Sigmoid activation functions in CNNs?
A4:
ReLU is preferred in CNNs because it allows for faster training by reducing the likelihood of
the vanishing gradient problem. Sigmoid is typically used in binary classification output
layers but is less common in hidden layers due to its tendency to saturate.
Q5: What is the role of a convolutional layer in CNN?
A5:
A convolutional layer applies filters (kernels) to the input image to detect local patterns such
as edges, textures, or shapes. This helps the network to learn spatial hierarchies in the data.
Q6: What is transfer learning in the context of CNNs?
A6:
Transfer learning involves taking a pre-trained CNN model (trained on a large dataset like
ImageNet) and fine-tuning it for a new, often smaller dataset. This reduces training time and
improves performance.
Q7: What optimizer is commonly used in CNNs?
A7:
The Adam optimizer is commonly used in CNNs due to its adaptive learning rate and
efficient performance for image-related tasks.
Q8: What is dropout, and why is it used in CNNs?
A8:
Dropout is a regularization technique where a fraction of neurons are randomly "dropped"
(set to zero) during each training step. It helps prevent overfitting by ensuring that the
network doesn’t become too reliant on any specific neuron.
Q9: How would you evaluate a CNN model?
A9:
You can evaluate a CNN model by checking its accuracy, precision, recall, or F1-score on a
test dataset. In Keras, this is done with model.evaluate(X_test, y_test).
Q10: What is data augmentation in CNNs?
A10:
Data augmentation is a technique to artificially increase the size of the training dataset by applying random transformations like rotations, flips, zooms, and shifts. This helps the model generalize better and prevents overfitting.
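A sketch using the Keras preprocessing layers available in TF 2.x (the transform parameters and batch shape are illustrative); these layers apply random transforms on the fly and are active only when called with training=True:

```python
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip('horizontal'),
    layers.RandomRotation(0.1),  # rotate up to +/-10% of a full turn
    layers.RandomZoom(0.1),
])

images = tf.random.uniform([8, 28, 28, 1])   # a dummy image batch
augmented = augment(images, training=True)   # random transforms applied
print(augmented.shape)  # (8, 28, 28, 1) -- the shape is preserved
```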
Improve the Deep Learning Model by Fine-Tuning
Hyperparameters
Exercise No:
Date:
Aim: Fine-tune the hyperparameters to improve the model’s performance.
Algorithm:
1. Define the model architecture (e.g., neural network or CNN).
2. Choose a range of hyperparameters to optimize, such as learning rate, batch size,
number of layers, or dropout rate.
3. Use tools like GridSearchCV or RandomizedSearchCV to tune hyperparameters.
4. Re-train the model with the best-found hyperparameters.
Steps:
1. model = Sequential([...])
2. param_grid = {'learning_rate': [0.001, 0.01], 'batch_size': [32, 64]}
3. grid_search = GridSearchCV(estimator=model, param_grid=param_grid)
4. grid_search.fit(X_train, y_train)
Instructions:
1. Use a grid search or random search to experiment with hyperparameters like learning
rate, batch size, and model layers.
from sklearn.model_selection import GridSearchCV
from tensorflow.keras import layers, models
# Note: tensorflow.keras.wrappers.scikit_learn was deprecated and has been
# removed in recent TensorFlow releases; on newer versions install scikeras
# and use `from scikeras.wrappers import KerasClassifier` instead.
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

# Define a function to create the model
def create_model(optimizer='adam', dropout_rate=0.2):
    model = models.Sequential([
        layers.Dense(128, activation='relu', input_dim=64),
        layers.Dropout(dropout_rate),
        layers.Dense(64, activation='relu'),
        layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer=optimizer, loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model)
param_grid = {'optimizer': ['adam', 'sgd'], 'dropout_rate': [0.2, 0.5]}
grid = GridSearchCV(estimator=model, param_grid=param_grid)
# grid.fit(X_train, y_train)
VIVA QUESTIONS
Q1: What is transfer learning?
A1:
Transfer learning involves taking a pre-trained model (typically trained on a large dataset like
ImageNet) and fine-tuning it for a new task or dataset. This allows the model to leverage
learned features from the original task and adapt them to the new task, improving
performance and reducing training time.
Q2: Why is transfer learning beneficial for image classification?
A2:
Transfer learning is beneficial because it reduces the need for large amounts of data and
computational resources. Pre-trained models already capture useful features like edges,
textures, and patterns, making them useful for a wide range of image classification tasks.
Q3: How do you implement transfer learning in Keras?
A3:
In Keras, you can implement transfer learning by using a pre-trained model (e.g., VGG16,
ResNet) and fine-tuning it for your specific image classification task. You freeze the weights
of the pre-trained layers and train only the top layers. Example:
base_model = tf.keras.applications.VGG16(weights='imagenet', include_top=False,
                                         input_shape=(224, 224, 3))
base_model.trainable = False  # Freeze pre-trained layers
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax')
])
Q4: What layers of a pre-trained model should be frozen during transfer learning?
A4:
Typically, the initial layers (those closer to the input) are frozen because they capture general
features like edges and textures. The later layers (closer to the output) are more task-specific
and can be fine-tuned to adapt to the new task.
Q5: How would you fine-tune a pre-trained model?
A5:
Fine-tuning involves unfreezing some of the top layers and retraining the entire model with a
lower learning rate. This allows the model to adjust its learned features for the new task while
retaining the knowledge from the original training.
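A sketch of the freeze/unfreeze mechanics; the small Sequential base here is an illustrative stand-in for a real pre-trained model such as VGG16, so the pattern runs without downloading weights:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative stand-in for a pre-trained base (in practice, e.g.
# tf.keras.applications.VGG16(weights='imagenet', include_top=False))
base_model = models.Sequential([
    layers.Conv2D(8, 3, activation='relu', input_shape=(32, 32, 3)),
    layers.Conv2D(16, 3, activation='relu'),
], name='base')

# Phase 1 (feature extraction): freeze the whole base, train only the head
base_model.trainable = False
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation='softmax'),
])

# Phase 2 (fine-tuning): unfreeze only the top of the base and
# recompile with a much lower learning rate
base_model.trainable = True
for layer in base_model.layers[:-1]:
    layer.trainable = False  # keep the early, generic layers frozen

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='categorical_crossentropy', metrics=['accuracy'])
print([l.trainable for l in base_model.layers])  # [False, True]
```

Recompiling after changing trainable flags is required for the change to take effect in training.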
Q6: What are the advantages of using pre-trained models in transfer learning?
A6:
Pre-trained models offer advantages like reduced training time, better generalization with
limited data, and the ability to leverage advanced models trained on massive datasets like
ImageNet, which provides valuable features for many image classification tasks.
Q7: How do you modify the output layer when using transfer learning for a different task?
A7:
You modify the output layer to match the number of classes in your new classification task.
For example, if the original model had 1000 classes, but your task has 10 classes, replace the
final layer with a Dense layer of 10 units and a softmax activation.
Q8: Can you apply transfer learning to a smaller dataset?
A8:
Yes, transfer learning is particularly useful for smaller datasets. It allows the model to benefit
from the knowledge learned from a larger dataset (e.g., ImageNet), reducing the need for a
large amount of labeled data for your task.
Q9: What is the difference between fine-tuning and feature extraction in transfer learning?
A9:
- Fine-tuning: involves unfreezing and retraining the last few layers or the entire model to adapt it to a new task.
- Feature extraction: the pre-trained model is used as a fixed feature extractor, and only the final layers are trained for the new task.
Q10: How do you choose which layers to fine-tune in transfer learning?
A10:
Typically, you start by fine-tuning the deeper layers that capture high-level features, while
freezing the earlier layers. This helps retain the learned features from the original model
while adapting the later layers to the new task.
Implement a Transfer Learning Concept in Image
Classification
Exercise No:
Date:
Aim: Use transfer learning with pre-trained models for image classification exercises.
Algorithm:
1. Choose a pre-trained model (e.g., VGG16, ResNet50) without the top layers.
2. Freeze the layers of the base model.
3. Add custom layers (e.g., fully connected layers) on top of the base model.
4. Compile the model with an optimizer and appropriate loss function.
5. Train the model on the dataset.
Steps:
1. base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224,
224, 3))
2. base_model.trainable = False
3. model = Sequential([base_model, GlobalAveragePooling2D(), Dense(512,
activation='relu'), Dense(10, activation='softmax')])
4. model.compile(optimizer='adam', loss='categorical_crossentropy')
5. model.fit(X_train, y_train, epochs=10)
Instructions:
1. Load a pre-trained model like VGG16 or ResNet50.
2. Freeze the layers of the base model and add custom layers.
3. Code:
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models
# Load the pre-trained VGG16 model
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
# Freeze the base model
base_model.trainable = False
# Add custom layers on top
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(512, activation='relu'),
    layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
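The compiled model can then be trained as outlined in the Steps. The sketch below is self-contained: it uses weights=None and a 32×32 input purely to keep the sketch quick (the exercise uses weights='imagenet' and 224×224), and random placeholder arrays stand in for a real, preprocessed dataset:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Rebuild the model from above; weights=None and a 32x32 input keep the
# sketch fast (use weights='imagenet' and (224, 224, 3) in practice)
base_model = VGG16(weights=None, include_top=False, input_shape=(32, 32, 3))
base_model.trainable = False
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(512, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Random placeholders standing in for a real dataset
X_train = np.random.rand(8, 32, 32, 3).astype('float32')
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 10, size=8), 10)

history = model.fit(X_train, y_train, epochs=1, batch_size=4, verbose=0)
```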
VIVA QUESTIONS
1. What is transfer learning, and how does it apply to image classification?
Answer:
Transfer learning is the process of taking a pre-trained model (usually trained on a large
dataset like ImageNet) and fine-tuning it for a different, but related, task. In image
classification, transfer learning allows the model to use learned features (like edges, textures,
and patterns) from the source dataset to classify images from the target dataset. This is
especially useful when there is limited labeled data for the target task.
2. Why is transfer learning important in image classification?
Answer:
Transfer learning is important because it reduces the need for large datasets and expensive
computational resources. Instead of training a deep neural network from scratch, which
requires a large amount of labeled data and significant computational power, you can
leverage a pre-trained model that already has learned useful features. This allows faster
convergence and better generalization on smaller datasets.
3. How do you implement transfer learning using a pre-trained model in Keras?
Answer:
To implement transfer learning in Keras, you can use the tf.keras.applications module to load
a pre-trained model (such as VGG16, ResNet50, etc.) and fine-tune it for your specific image
classification task. You typically remove the top layer (the fully connected layers) and
replace them with a new layer suited for your task.
Example:
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
# Load a pre-trained VGG16 model without the top layer
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
# Freeze the base model layers
base_model.trainable = False
# Add custom layers on top for your classification task
model = Sequential([
    base_model,
    GlobalAveragePooling2D(),
    Dense(1024, activation='relu'),
    Dense(num_classes, activation='softmax')  # num_classes is the number of classes in your dataset
])
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
4. How do you choose which layers to freeze when using transfer learning?
Answer:
Typically, in transfer learning, you freeze the initial layers of the pre-trained model because
they capture general features like edges, textures, and simple patterns that are useful across
various tasks. The deeper layers, which capture more task-specific patterns, are typically
fine-tuned because they can be adapted to the new task. The exact layers to freeze depend on
the specific task and dataset.
5. What is the role of the include_top argument when using pre-trained models in
Keras?
Answer:
The include_top argument in Keras controls whether the fully connected layers at the top of
the pre-trained model should be included. By setting include_top=False, you remove the final
classification layers and leave only the convolutional base, which you can then adapt to your
own classification task by adding custom layers on top.
Example:
base_model = VGG16(weights='imagenet', include_top=False)
6. When should you unfreeze layers of a pre-trained model, and how does it help?
Answer:
You can unfreeze the top layers of the pre-trained model after training the model on the new
dataset for a few epochs, with the pre-trained layers frozen. Fine-tuning these layers allows
the model to adapt more closely to the new task and dataset, potentially improving
performance. Unfreezing the layers can be done gradually, starting from the top-most layers
or by unfreezing the deeper layers that are more specific to the task.
Example:
for layer in base_model.layers[-10:]:
    layer.trainable = True
7. How do you evaluate the performance of a transfer learning model?
Answer:
You can evaluate the performance of a transfer learning model using standard metrics such as
accuracy, precision, recall, and F1-score. You would typically evaluate the model on a
separate validation or test dataset that was not used during training to ensure that the model
generalizes well to unseen data.
Example:
model.evaluate(test_data, test_labels)
8. Can you use transfer learning with a smaller dataset, and how does it help?
Answer:
Yes, transfer learning is particularly effective when you have a smaller dataset. Pre-trained
models have already learned a wealth of useful features from a large dataset (e.g., ImageNet),
which means that even with a small dataset, the model can perform well because it doesn’t
have to learn from scratch. By fine-tuning the model on your small dataset, it can quickly
adapt to the new task while benefiting from the features learned on the larger dataset.
9. What are some popular pre-trained models you can use for transfer learning in image
classification?
Answer:
Some popular pre-trained models available in Keras for transfer learning include:
VGG16 and VGG19: Convolutional networks known for their simplicity and depth.
ResNet50: A deep network with residual connections that help in training deeper
models.
InceptionV3: A network that uses a mixture of convolutions of different sizes to
capture features at various scales.
Xception: A deep convolutional model using depthwise separable convolutions for
efficiency.
MobileNet: A lightweight model designed for mobile and embedded applications.
DenseNet: A model where each layer is connected to every other layer, facilitating
feature reuse.
10. What is the difference between feature extraction and fine-tuning in transfer
learning?
Answer:
Feature extraction: The pre-trained model is used to extract features from the input
images, and only the top (output) layers are trained for the new task. The pre-trained
layers are kept frozen.
Fine-tuning: Involves unfreezing some or all of the layers of the pre-trained model
and training them along with the new layers. Fine-tuning allows the model to adjust to
the new task, making it more specific to the data.
Feature extraction is often used when the dataset is small or when computational resources
are limited. Fine-tuning is used when you have more data and want to fully adapt the model
to the new task.
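The two regimes can be contrasted directly in code. The sketch below builds the VGG16 architecture with weights=None and a 32×32 input purely to keep it lightweight (use weights='imagenet' and your real input size in practice), and compares how many parameters each regime actually trains:

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# weights=None keeps this sketch light; use weights='imagenet' in practice
base = VGG16(weights=None, include_top=False, input_shape=(32, 32, 3))
head = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation='softmax'),
])

# Feature extraction: the convolutional base stays completely frozen
base.trainable = False
head.compile(optimizer='adam', loss='categorical_crossentropy')
frozen_params = sum(int(tf.size(w)) for w in head.trainable_weights)

# Fine-tuning: unfreeze the base and recompile with a much lower learning rate
base.trainable = True
head.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss='categorical_crossentropy')
finetune_params = sum(int(tf.size(w)) for w in head.trainable_weights)

print(frozen_params < finetune_params)  # True: fine-tuning trains far more weights
```

Note the recompile after changing trainable flags and the much smaller learning rate for fine-tuning, so the pre-trained weights are only nudged, not destroyed.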
Using a Pre-Trained Model on Keras for Transfer
Learning
Exercise No:
Date:
Aim: Use a pre-trained model such as ResNet50 for transfer learning.
Algorithm:
1. Load a pre-trained model such as ResNet50, VGG16, or InceptionV3.
2. Freeze the base model layers and only train the new top layers.
3. Add custom layers like dense or dropout layers on top.
4. Compile and train the model.
Steps:
1. base_model = ResNet50(weights='imagenet', include_top=False)
2. base_model.trainable = False
3. model = Sequential([base_model, GlobalAveragePooling2D(), Dense(1024, activation='relu'), Dense(10, activation='softmax')])
4. model.compile(optimizer='adam', loss='categorical_crossentropy')
5. model.fit(X_train, y_train, epochs=5)
Instructions:
1. Load the ResNet50 model and freeze the base layers.
2. Add custom layers and train the model.
3. Code:
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models
# Load ResNet50 as the base model
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
# Freeze the base model
base_model.trainable = False
# Add custom layers
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1024, activation='relu'),
    layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
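One practical detail the exercise omits: ResNet50 expects its inputs preprocessed the same way as during its ImageNet training, and Keras ships a matching helper for each application model. A small sketch (the image batch below is a random placeholder for real RGB images in [0, 255]):

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import preprocess_input

# Random placeholder standing in for a batch of real RGB images in [0, 255]
images = np.random.randint(0, 256, size=(4, 224, 224, 3)).astype('float32')

# Applies the same channel-wise normalization used when ResNet50 was
# trained on ImageNet; shape is unchanged
batch = preprocess_input(images)
print(batch.shape)  # (4, 224, 224, 3)
```

Feeding raw pixel values to a pre-trained model without this step is a common cause of poor transfer-learning accuracy.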
VIVA QUESTIONS
Q1: What is a pre-trained model?
A1:
A pre-trained model is a model that has been previously trained on a large dataset, such as
ImageNet, and can be reused for tasks similar to the one it was originally trained on. These
models have learned useful features that can be transferred to new tasks.
Q2: How do you load a pre-trained model in Keras?
A2:
In Keras, you can load a pre-trained model using tf.keras.applications, such as:
from tensorflow.keras.applications import VGG16
model = VGG16(weights='imagenet')
Q3: What is the benefit of using a pre-trained model from Keras?
A3:
Using pre-trained models from Keras provides the benefit of leveraging large, publicly
available models trained on large datasets like ImageNet. This reduces the need to train
models from scratch, saving time and computational resources.
Q4: How do you adapt a pre-trained model for your own dataset?
A4:
You can adapt a pre-trained model by removing its top (output) layer and replacing it with a
new layer that matches the number of classes in your dataset. You can then fine-tune the
remaining layers.
Q5: What models are available for transfer learning in Keras?
A5:
Keras provides several pre-trained models like VGG16, ResNet50, InceptionV3, MobileNet,
and Xception. These models can be used for image classification tasks.
Q6: How can you freeze the layers of a pre-trained model?
A6:
You freeze the layers of a pre-trained model by setting the trainable attribute to False. For
example:
for layer in model.layers:
    layer.trainable = False
Q7: What are the typical steps involved in using a pre-trained model for transfer learning?
A7:
The typical steps are:
1. Load a pre-trained model.
2. Freeze the pre-trained layers.
3. Add a custom output layer.
4. Compile and train the model on your dataset.
5. Optionally fine-tune some layers.
Q8: Can you use a pre-trained model for tasks other than image classification?
A8:
Yes, pre-trained models can be used for other tasks like object detection, segmentation, and
even natural language processing tasks, depending on the type of pre-trained model.
Q9: What is the role of the include_top parameter when using pre-trained models in Keras?
A9:
The include_top parameter determines whether to include the fully connected layers at the
top of the pre-trained model. Setting include_top=False removes these layers, allowing you to
add your custom layers.
Q10: How do you fine-tune a pre-trained model?
A10:
Fine-tuning involves unfreezing some of the deeper layers and retraining the model with a
lower learning rate to adapt it to the new task while preserving the features learned from the
original dataset.
Perform Sentiment Analysis Using RNN
Exercise No:
Date:
Aim: Implement a Recurrent Neural Network (RNN) for sentiment analysis on text data.
Algorithm:
1. Load and preprocess text data (e.g., tokenization, padding).
2. Create an RNN model with embedding layers and recurrent layers (e.g., SimpleRNN, LSTM).
3. Compile the model using an appropriate optimizer and loss function.
4. Train the model on the sentiment dataset.
Steps:
1. model = Sequential([Embedding(input_dim=10000, output_dim=128), SimpleRNN(64), Dense(1, activation='sigmoid')])
2. model.compile(optimizer='adam', loss='binary_crossentropy')
3. model.fit(X_train, y_train, epochs=10)
Instructions:
1. Prepare the text data and create the RNN model.
2. Code:
import tensorflow as tf
from tensorflow.keras import layers, models
# Simple RNN model for sentiment analysis
model = models.Sequential([
    layers.Embedding(input_dim=10000, output_dim=128),
    layers.SimpleRNN(64),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
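Training the model above can be sketched end to end. The data here is random integer sequences standing in for tokenized, padded reviews; in practice you would use a real corpus such as the IMDB dataset from tf.keras.datasets:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Same model as above
model = models.Sequential([
    layers.Embedding(input_dim=10000, output_dim=128),
    layers.SimpleRNN(64),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Random stand-ins for tokenized, padded reviews (32 reviews, 50 tokens each)
X_train = np.random.randint(1, 10000, size=(32, 50))
y_train = np.random.randint(0, 2, size=(32,))

model.fit(X_train, y_train, epochs=1, batch_size=8, verbose=0)
preds = model.predict(X_train[:2], verbose=0)
print(preds.shape)  # (2, 1): one sentiment probability per review
```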
VIVA QUESTIONS
Q1: What is sentiment analysis?
A1:
Sentiment analysis is the process of analyzing text data to determine the sentiment or
emotional tone behind the words, often classifying them as positive, negative, or neutral.
Q2: Why is an RNN suitable for sentiment analysis?
A2:
RNNs (Recurrent Neural Networks) are suitable for sentiment analysis because they are
designed to handle sequential data, such as text, and can remember previous words in the
sequence, which is important for understanding the context of sentiment.
Q3: How do you implement a simple RNN for sentiment analysis in Keras?
A3:
In Keras, you can implement an RNN for sentiment analysis by using the SimpleRNN layer.
Example:
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=128),
    tf.keras.layers.SimpleRNN(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
Q4: What is the purpose of the embedding layer in a sentiment analysis model?
A4:
The embedding layer converts integer-encoded words into dense vectors of fixed size,
allowing the model to learn relationships between words in the context of the dataset.
Q5: What is the difference between SimpleRNN and LSTM layers?
A5:
SimpleRNN is a basic recurrent neural network layer, while LSTM (Long Short-Term
Memory) is a more advanced RNN variant that addresses the vanishing gradient problem,
making it better
at learning long-term dependencies.
Q6: How do you prepare text data for sentiment analysis?
A6:
Text data needs to be tokenized and padded. Tokenization converts words into numerical
indices, and padding ensures that all input sequences have the same length.
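One way to do both steps in current Keras is the TextVectorization layer, which builds a vocabulary, maps words to indices, and pads to a fixed length in one pass. A minimal sketch with toy sentences:

```python
import tensorflow as tf

texts = ["the movie was great", "the plot was dull and slow"]  # toy examples

# Tokenizes and pads in one step: each word becomes an integer index,
# and every sequence is padded/truncated to output_sequence_length
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=10000, output_sequence_length=6)
vectorizer.adapt(texts)  # builds the vocabulary from the corpus

padded = vectorizer(texts)
print(padded.shape)  # (2, 6); shorter texts are zero-padded
```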
Q7: How do you handle class imbalance in sentiment analysis?
A7:
Class imbalance can be handled by using techniques like oversampling the minority class,
undersampling the majority class, or using class weights during model training.
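Class weights can be derived directly from the label frequencies. A small sketch with toy imbalanced labels (the 6-vs-2 split is illustrative):

```python
import numpy as np

y_train = np.array([0, 0, 0, 0, 0, 0, 1, 1])  # imbalanced toy labels

# Weight each class inversely to its frequency
counts = np.bincount(y_train)
class_weight = {c: len(y_train) / (2 * n) for c, n in enumerate(counts)}
print(class_weight)  # the minority class gets the larger weight

# Pass to training: model.fit(X_train, y_train, class_weight=class_weight)
```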
Q8: What loss function is used for binary sentiment classification?
A8:
For binary sentiment classification, binary_crossentropy is used as the loss function because
it compares the predicted probability to the actual binary labels (0 or 1).
Q9: How do you evaluate a sentiment analysis model?
A9:
You evaluate the model using metrics like accuracy, precision, recall, and F1-score. These
metrics are used to determine how well the model is classifying sentiment.
Q10: How can you improve the performance of an RNN for sentiment analysis?
A10:
You can improve the model by:
Using pre-trained word embeddings like GloVe or Word2Vec.
Increasing model complexity by using LSTM or GRU layers.
Regularizing the model using dropout or L2 regularization
Implement an LSTM-Based Autoencoder in TensorFlow/Keras
Exercise No:
Date:
Aim: Implement an LSTM-based autoencoder for sequence data.
Algorithm:
1. Define the LSTM autoencoder architecture:
o Encoder with LSTM layers.
o Decoder with LSTM layers that reconstruct the input sequence.
2. Compile the model using an appropriate loss function.
3. Train the model on the dataset.
Steps:
1. input_seq = Input(shape=(100, 1))
2. encoded = LSTM(64, activation='relu')(input_seq)
3. decoded = RepeatVector(100)(encoded)
4. decoded = LSTM(1, activation='sigmoid', return_sequences=True)(decoded)
5. autoencoder = Model(input_seq, decoded)
6. autoencoder.compile(optimizer='adam', loss='mse')
Instructions:
1. Build an autoencoder using LSTM layers.
2. Code:
import tensorflow as tf
from tensorflow.keras import layers, models
# Define the LSTM Autoencoder
input_seq = layers.Input(shape=(100, 1))
# Encoder
encoded = layers.LSTM(64, activation='relu')(input_seq)
# Decoder
decoded = layers.RepeatVector(100)(encoded)
decoded = layers.LSTM(1, activation='sigmoid', return_sequences=True)(decoded)
autoencoder = models.Model(input_seq, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
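A minimal end-to-end training sketch for such an autoencoder, using toy sine-wave sequences scaled into [0, 1]; the sequence length is shortened to 20 here only to keep the sketch quick (the exercise uses 100):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Rebuild the autoencoder from above with a shorter sequence length
timesteps = 20
input_seq = layers.Input(shape=(timesteps, 1))
encoded = layers.LSTM(32, activation='relu')(input_seq)
decoded = layers.RepeatVector(timesteps)(encoded)
decoded = layers.LSTM(1, activation='sigmoid', return_sequences=True)(decoded)
autoencoder = models.Model(input_seq, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

# Toy sine-wave sequences in [0, 1]; input and target are the same
t = np.linspace(0, 2 * np.pi, timesteps)
X = np.stack([0.5 + 0.5 * np.sin(t + phase) for phase in np.random.rand(64)])
X = X[..., np.newaxis]  # shape (64, 20, 1)

autoencoder.fit(X, X, epochs=2, batch_size=16, verbose=0)
recon = autoencoder.predict(X[:1], verbose=0)
print(recon.shape)  # (1, 20, 1): same shape as the input sequence
```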
VIVA QUESTIONS
Q1: What is an autoencoder?
A1:
An autoencoder is a type of neural network that learns to compress data into a
lower-dimensional representation and then reconstruct it back to its original form. It is often
used for unsupervised learning tasks like anomaly detection or data compression.
Q2: Why would you use LSTM layers in an autoencoder?
A2:
LSTM layers are used in an autoencoder for sequential data, such as time-series or text. They
can capture long-term dependencies and patterns, which are essential when reconstructing
sequences with complex temporal relationships.
Q3: How do you build an LSTM-based autoencoder in Keras?
A3:
You can build an LSTM-based autoencoder by using LSTM layers for both the encoder and
decoder. Example:
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, activation='relu', input_shape=(timesteps, features)),
    tf.keras.layers.RepeatVector(timesteps),
    tf.keras.layers.LSTM(64, activation='relu', return_sequences=True),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(features))
])
Q4: What is the encoder-decoder architecture in an autoencoder?
A4:
The encoder compresses the input data into a lower-dimensional space (latent representation),
and the decoder reconstructs the data from the compressed representation.
Q5: How do you train an LSTM-based autoencoder?
A5:
You train the autoencoder using the original data as both the input and the output, minimizing
the reconstruction error using loss functions like mean squared error (MSE).
Q6: What is the difference between an autoencoder and a variational autoencoder?
A6:
A standard autoencoder learns to map inputs to a deterministic latent space, whereas a
variational autoencoder uses probabilistic inference to learn a distribution of the latent space,
allowing it to generate new data.
Q7: How do you prevent overfitting when training an LSTM-based autoencoder?
A7:
To prevent overfitting, you can use techniques like dropout, regularization, or early stopping.
Q8: What type of data is best suited for an LSTM-based autoencoder?
A8:
An LSTM-based autoencoder is well-suited for sequential data like time-series, audio signals,
or text.
Q9: What loss function is used for an autoencoder?
A9:
Common loss functions for autoencoders include mean squared error (MSE) or binary
crossentropy, depending on the type of data (e.g., continuous or binary).
Q10: How would you use an LSTM-based autoencoder for anomaly detection?
A10:
You can use the autoencoder to reconstruct data points. If the reconstruction error is large, it
indicates that the input data is an anomaly, as the model was unable to reconstruct it well
based on its learned patterns.
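The scoring logic can be sketched with plain NumPy. The reconstructions below are synthetic stand-ins for the output of autoencoder.predict, with one sample deliberately corrupted to play the anomaly:

```python
import numpy as np

# reconstructions would come from autoencoder.predict(X); synthetic stand-ins here
X = np.random.rand(100, 20, 1)
reconstructions = X + np.random.normal(0, 0.05, X.shape)  # mostly good reconstructions
reconstructions[0] += 1.0                                  # one deliberately bad sample

# Per-sample reconstruction error (mean squared error over the sequence)
errors = np.mean((X - reconstructions) ** 2, axis=(1, 2))

# Flag samples whose error exceeds e.g. mean + 3 std of the error distribution
threshold = errors.mean() + 3 * errors.std()
anomalies = np.where(errors > threshold)[0]
print(anomalies)  # sample 0 is flagged
```

In practice the threshold is usually chosen from the error distribution on normal (training) data only.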
Image Generation Using GAN
Exercise No:
Date:
Aim: Implement a Generative Adversarial Network (GAN) for generating synthetic images.
Algorithm:
1. Define the generator and discriminator models:
o Generator generates fake images from random noise.
o Discriminator classifies images as real or fake.
2. Compile the discriminator and GAN models.
3. Train the discriminator and generator in a loop:
o Train discriminator on both real and fake images.
o Train generator to deceive the discriminator.
4. Repeat until the generator produces realistic images.
Steps:
1. generator = Sequential([Dense(128, activation='relu', input_dim=100), Dense(784, activation='sigmoid'), Reshape((28, 28, 1))])
2. discriminator = Sequential([Flatten(input_shape=(28, 28, 1)), Dense(128, activation='relu'), Dense(1, activation='sigmoid')])
3. discriminator.compile(optimizer='adam', loss='binary_crossentropy')
4. gan = Model(generator_input, discriminator(generator_output))
5. gan.compile(optimizer='adam', loss='binary_crossentropy')
Instructions:
1. Create the generator and discriminator models.
2. Code:
import tensorflow as tf
from tensorflow.keras import layers, models
# Generator model
def build_generator():
    model = models.Sequential([
        layers.Dense(128, activation='relu', input_dim=100),
        layers.Dense(784, activation='sigmoid'),
        layers.Reshape((28, 28, 1))
    ])
    return model
# Discriminator model
def build_discriminator():
    model = models.Sequential([
        layers.Flatten(input_shape=(28, 28, 1)),
        layers.Dense(128, activation='relu'),
        layers.Dense(1, activation='sigmoid')
    ])
    return model
# Compile models
generator = build_generator()
discriminator = build_discriminator()
discriminator.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Create the GAN model
discriminator.trainable = False
gan_input = layers.Input(shape=(100,))
x = generator(gan_input)
gan_output = discriminator(x)
gan = models.Model(gan_input, gan_output)
gan.compile(optimizer='adam', loss='binary_crossentropy')
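The code above stops at compiling the GAN; an explicit training step makes the adversarial loop concrete. The sketch below uses a tf.GradientTape step (a common, version-robust alternative to the compile-and-freeze pattern), with random pixels standing in for a batch of real MNIST digits scaled to [0, 1]:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Rebuild the generator and discriminator from above (explicit Input layers
# keep the sketch portable across Keras versions)
generator = models.Sequential([
    layers.Input(shape=(100,)),
    layers.Dense(128, activation='relu'),
    layers.Dense(784, activation='sigmoid'),
    layers.Reshape((28, 28, 1)),
])
discriminator = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(real_images):
    batch_size = real_images.shape[0]
    noise = tf.random.normal((batch_size, 100))

    # 1) Discriminator step: push real images toward label 1, fakes toward 0
    with tf.GradientTape() as tape:
        fake_images = generator(noise, training=True)
        real_pred = discriminator(real_images, training=True)
        fake_pred = discriminator(fake_images, training=True)
        d_loss = (bce(tf.ones_like(real_pred), real_pred)
                  + bce(tf.zeros_like(fake_pred), fake_pred))
    d_opt.apply_gradients(zip(tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))

    # 2) Generator step: make the discriminator output "real" for fakes
    with tf.GradientTape() as tape:
        fake_pred = discriminator(generator(noise, training=True), training=True)
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)
    g_opt.apply_gradients(zip(tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return float(d_loss), float(g_loss)

# Random pixels standing in for a batch of real MNIST digits scaled to [0, 1]
real_batch = tf.random.uniform((16, 28, 28, 1))
d_loss, g_loss = train_step(real_batch)
```

Repeating this step over many batches of real images is the whole training loop; generated samples are then obtained with generator(noise).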
VIVA QUESTIONS
Q1: What is a GAN (Generative Adversarial Network)?
A1:
A GAN consists of two neural networks: a generator and a discriminator. The generator
creates fake data, and the discriminator tries to distinguish between real and fake data. They
are trained together, with the generator improving to create more realistic data.
Q2: How do you implement a GAN in Keras?
A2:
A simple GAN can be implemented in Keras by defining a generator and discriminator
model. Then, the two models are trained in a loop where the generator creates fake data, and
the discriminator evaluates the real vs fake distinction. Example:
# Generator
generator = tf.keras.Sequential([...])
# Discriminator
discriminator = tf.keras.Sequential([...])
# GAN model
gan = tf.keras.Sequential([generator, discriminator])
Q3: What are the main components of a GAN?
A3:
A GAN consists of:
Generator: Creates fake data (e.g., images) from random noise.
Discriminator: Evaluates data and determines if it is real or generated (fake).
Q4: How do the generator and discriminator networks work together in a GAN?
A4:
The generator creates fake data, and the discriminator tries to classify it as real or fake. The
generator improves by trying to fool the discriminator, and the discriminator improves by
learning to better distinguish between real and fake data.
Q5: What loss functions are used in GANs?
A5:
The typical loss functions in GANs are:
Binary Crossentropy: For both the generator and discriminator.
Least Squares Loss: Used in LS-GANs to stabilize training.
Q6: How do you train a GAN?
A6:
Training a GAN involves alternating between training the discriminator and the generator.
First, train the discriminator on real and fake data. Then, train the generator to improve its
ability to fool the discriminator.
Q7: What challenges are faced when training GANs?
A7:
Challenges include:
Mode collapse: The generator produces limited variations of fake data.
Training instability: GANs can be difficult to train and require careful tuning of the
generator and discriminator.
Vanishing gradients: This happens when the discriminator becomes too good at
distinguishing real from fake, preventing the generator from learning effectively.
Q8: How do you evaluate the performance of a GAN?
A8:
Evaluating GANs is subjective since they generate images. One common method is to
visually inspect the generated images. Another method is using metrics like the Inception
Score or Fréchet Inception Distance (FID).
Q9: What are some applications of GANs?
A9:
GANs are used for:
Image generation (e.g., creating realistic images from random noise).
Super-resolution (improving the resolution of images).
Data augmentation.
Style transfer.
Image-to-image translation (e.g., generating realistic photos from sketches).
Q10: How do you prevent mode collapse in GANs?
A10:
To prevent mode collapse, you can:
Use techniques like minibatch discrimination, where the discriminator evaluates
multiple examples at once.
Use conditional GANs, where both the generator and discriminator receive additional
information (e.g., class labels).
Use alternative training objectives such as the Wasserstein loss (WGAN), which
provides more stable gradients and reduces mode collapse.