KIET Group of Institutions, Ghaziabad
CSE DEPARTMENT
Internship Report
on
Machine Learning with Python: Foundations
Linked Learning Course
2023
Submitted By:
ABHINAV SINGH
B.Tech(CSE) - 5A
2100290100006
Index for Internship Report
1. Acknowledgement
2. Certificate
3. Introduction to the Internship
4. Details of the Internship
5. Details of technical learning during delivery of tasks
6. Outcome of the Internship
7. Future scope of work
8. Certificate
9. Literature review report
ACKNOWLEDGEMENT
I have got this golden opportunity to express my kind gratitude and sincere thanks to the Head
of my Institution, KIET Group of Institutions, Ghaziabad, and the Head of the Department of
CSE for their kind support and necessary counselling in the preparation of this project report.
I am also indebted to each and every person who, directly or indirectly, contributed to the
making of this project.
I must also acknowledge my deep debt of gratitude to each one of my colleagues who helped
this project come out the way it has. It is hard work, untiring sincere effort, and mutual
cooperation that brought out the project work. Last but not the least, I would like to thank my
parents for their sound counselling and cheerful support. They have always inspired me and
kept my spirits up.
Name of Student – Abhinav Singh
Course and Branch – B.Tech(CSE)
Semester - 5
University Roll No: - 2100290100006
Introduction:
I am pleased to present a comprehensive report on my internship experience, which primarily
focused on the foundations of machine learning. During this internship, I engaged in an
extensive learning program facilitated by LinkedIn Learning, designed to provide a solid
understanding of the core concepts, techniques, and applications of machine learning.
Machine learning is a field of artificial intelligence that empowers computers to learn patterns
and make decisions without explicit programming. It leverages algorithms to analyze data,
identify trends, and improve performance over time. This adaptive process enables machines
to autonomously enhance their understanding and decision-making capabilities, finding
applications in various domains like image recognition, language processing, and predictive
analytics.
Objectives:
The overarching goals of the internship were to:
1. Acquire a fundamental understanding of machine learning concepts.
2. Develop practical skills in applying machine learning algorithms.
3. Gain insights into the real-world applications of machine learning.
4. Enhance my problem-solving capabilities using machine learning techniques.
Learning Platform: LinkedIn Learning:
The chosen platform for this internship was LinkedIn Learning, a widely recognized and
reputable online learning platform. LinkedIn Learning offered a structured curriculum that
covered various aspects of machine learning, ranging from basic concepts to advanced
applications. The courses were led by industry experts, providing a valuable perspective on
the practical aspects of machine learning.
Curriculum Overview:
The curriculum was divided into key modules, each focusing on different aspects of machine
learning. These modules included:
1. Introduction to Machine Learning:
   • Understanding the foundational concepts of machine learning.
   • Differentiating between supervised and unsupervised learning.
2. Machine Learning Algorithms:
   • Exploring popular algorithms such as linear regression, decision trees, and clustering.
   • Hands-on implementation of algorithms using programming languages like Python (see the first sketch after this list).
3. Model Evaluation and Validation:
   • Techniques for assessing model performance.
   • Cross-validation and hyperparameter tuning (also illustrated in the first sketch after this list).
4. Deep Learning and Neural Networks:
   • Introduction to neural networks and deep learning (see the second sketch after this list).
   • Practical applications and case studies.
5. Real-world Applications:
   • Examining how machine learning is applied in diverse industries.
   • Case studies illustrating the impact of machine learning on business and society.
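To make modules 2 and 3 more concrete, the first sketch below shows the kind of exercise covered in the course: training a decision tree classifier in Python with scikit-learn, estimating its performance with cross-validation, and tuning a hyperparameter with a grid search. The dataset, parameter grid, and library choice are illustrative assumptions, not an exact reproduction of the course exercises.

# Illustrative sketch (assumed setup): supervised learning with scikit-learn,
# cross-validation, and hyperparameter tuning on the built-in Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Load a small labelled dataset (features X, target labels y).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Module 2: fit a basic supervised model.
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))

# Module 3: estimate generalization with 5-fold cross-validation.
cv_scores = cross_val_score(DecisionTreeClassifier(random_state=42), X_train, y_train, cv=5)
print("Cross-validation accuracy:", cv_scores.mean())

# Module 3: tune a hyperparameter (tree depth) with a grid search.
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=42),
    param_grid={"max_depth": [2, 3, 4, 5, None]},
    cv=5,
)
grid.fit(X_train, y_train)
print("Best max_depth:", grid.best_params_["max_depth"])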
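For module 4, the second sketch shows a very small feed-forward neural network using scikit-learn's MLPClassifier. This is an assumed, simplified stand-in for the deep learning content (the course's own examples may use different tools); it is intended only to show what defining and training a neural network in Python can look like.

# Illustrative sketch (assumed setup): a tiny feed-forward neural network
# trained on the same Iris data, using scikit-learn's MLPClassifier.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Neural networks are sensitive to feature scale, so standardize the inputs.
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

# Two hidden layers of 16 units each (sizes chosen arbitrarily for illustration).
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=42)
net.fit(X_train, y_train)
print("Neural network test accuracy:", net.score(X_test, y_test))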
Practical Application:
The internship emphasized hands-on experience with real-world projects. I had the
opportunity to work on practical assignments that involved implementing machine learning
algorithms, cleaning and preprocessing data, and interpreting results. This practical
application allowed me to solidify my understanding of the theoretical concepts learned
during the coursework.
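As an example of the cleaning and preprocessing work mentioned above, the following sketch uses pandas and scikit-learn to handle a missing value, encode a categorical column, and scale a numeric feature. The column names and data are hypothetical and are not taken from the actual internship assignments.

# Illustrative sketch (hypothetical data): typical cleaning and preprocessing
# steps before feeding a dataset to a machine learning model.
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical raw data with a missing value and a categorical column.
df = pd.DataFrame({
    "age": [25, 32, None, 40],
    "city": ["Delhi", "Mumbai", "Delhi", "Pune"],
    "purchased": [0, 1, 0, 1],
})

# Fill missing numeric values with the column median.
df["age"] = df["age"].fillna(df["age"].median())

# One-hot encode the categorical column.
df = pd.get_dummies(df, columns=["city"])

# Scale the numeric feature so it has zero mean and unit variance.
df[["age"]] = StandardScaler().fit_transform(df[["age"]])

print(df.head())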
Challenges Faced:
Throughout the internship, I encountered various challenges, including:
1. Complexity of Algorithms:
   • Understanding and implementing advanced algorithms posed challenges that required dedicated effort and research.
2. Data Preprocessing:
   • Cleaning and preprocessing raw data for machine learning models required careful attention and problem-solving skills.
3. Interpreting Results:
   • Extracting meaningful insights from model outputs and making informed decisions based on the results proved to be a learning curve (a brief example of reading model outputs follows this list).
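To illustrate the kind of model output that had to be interpreted, the sketch below prints a confusion matrix and a per-class classification report for a simple classifier. The dataset and model are assumptions chosen only to keep the example self-contained.

# Illustrative sketch (assumed setup): reading a classifier's outputs with
# a confusion matrix and a per-class precision/recall report.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Rows are true classes, columns are predicted classes.
print(confusion_matrix(y_test, y_pred))
# Precision, recall, and F1 score for each class.
print(classification_report(y_test, y_pred))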
Achievements:
Despite the challenges, the internship provided significant achievements, including:
1. Proficiency in Programming:
   • Improved programming skills, particularly in Python, for machine learning implementation.
2. Practical Application:
   • Successfully completed real-world projects, applying machine learning techniques to solve specific problems.
3. Understanding Industry Relevance:
   • Gained insights into how machine learning is applied across different industries, enhancing my awareness of its practical significance.
Conclusion:
In conclusion, the Foundations of Machine Learning internship, conducted through LinkedIn
Learning, has been an enriching experience. The knowledge gained in this program has
equipped me with a solid foundation in machine learning, enabling me to approach real-world
challenges with confidence. I look forward to applying these skills in future endeavors and
contributing to the dynamic field of machine learning.
Thank you for the opportunity to share my internship experience and learnings. I am open to
any questions or further discussions on the topics covered during this internship presentation.
INTERNSHIP CERTIFICATE
Literature Review
Each entry below lists the paper, year, authors, techniques, findings, and shortcomings.

1. Continuous Sign Language Recognition and Its Translation into Intonation-Colored Speech (2023)
   Authors: Nurzada Amangeldy, Aru Ukenova, Gulmira Bekmanova, Bibigul Razakhova, Marek Milosz, and Saule Kudubayeva
   Techniques: sign language recognition, natural language processing, intonational speech synthesis, long short-term memory, spatiotemporal features
   Findings:
   • Integrated Approach for Sign Language Recognition: The research presents an integrated approach that combines morphological, syntactic, and semantic analysis, as well as intonation modeling, for translating continuous sign language into natural language. This integrated approach has practical and social significance.
   • Scientific Novelty in Sign Language Recognition: The study introduces a novel method of continuous sign language recognition by combining multiple modalities, resulting in a high recognition accuracy of 0.95, particularly for the Kazakh language.
   • Integration with NLP Processor: The work successfully integrates a sign language recognizer with an NLP processor to translate recognized sign language sentences into coherent natural language sentences.
   • Intonation Study: The research provides a unique study of the intonation of the Kazakh language based on changes in the frequency of the main tone and sentence members, which can contribute to the synthesis of intonation-colored speech.
   Shortcomings:
   • Quality of Gesture Recording: The study acknowledges limitations related to the quality of gesture recording, where low camera resolution, incorrect camera positioning, low lighting, interference, or noise can negatively impact gesture recognition accuracy.
   • Minimum Frame Requirement: The model requires a sample to contain at least 60 frames, which might be restrictive for certain applications or scenarios with shorter gestures.
   • Specificity to Kazakh Language: While the research is valuable for the Kazakh language, it may not be immediately applicable to other sign languages without further adaptation.
   • Commercialization Potential: While the study mentions the potential for commercialization, it does not provide a detailed plan or discussion of how this will be achieved.

2. Vision-based Hand Gesture Recognition for Indian Sign Language Using Convolution Neural Network (2023)
   Authors: Boinpally Ashwanth, Sri Bhargav Ventrapragada, Shradha Reddy Prodduturi, Jeshwanth Reddy Depa, K. Venkatesh Sharma
   Techniques: Indian sign language recognition, convolutional neural network, image processing, edge detection, hand gesture recognition
   Findings:
   • Effectiveness of CNNs for Hand Gesture Recognition: The study demonstrates that Convolutional Neural Networks (CNNs) are highly effective in recognizing and classifying hand gestures in Indian Sign Language, indicating the potential of deep learning for this task.
   • Method Choice Depends on Requirements: The research highlights that the choice of recognition method for vision-based hand gesture recognition should be based on specific problem requirements and data characteristics. While CNNs generally perform well, other methods like Support Vector Machines (SVM) may be suitable for specific scenarios.
   • Importance of Large and Diverse Datasets: The study underscores the significance of using large and diverse datasets for training and evaluating hand gesture recognition systems. The performance of CNNs is closely related to the quality and size of the training data.
   Shortcomings:
   • Recognition Accuracy Improvement: The study suggests that there is room for improvement in recognition accuracy, particularly for complex and nuanced hand gestures. Future research can focus on developing advanced CNN architectures and incorporating additional modalities to enhance accuracy.
   • Real-time Implementation Challenge: Real-time implementation of hand gesture recognition remains a challenge, especially for resource-constrained devices. The study points to the need for future research to develop efficient and scalable implementations for real-time applications.
   • Specific to Indian Sign Language: The findings of the study are specific to Indian Sign Language, which limits their direct applicability to other sign languages. However, the methods and techniques developed can potentially be extended to improve accessibility for other sign languages.

3. Survey on sign language recognition in context of vision-based and deep learning (2022)
   Authors: S. Subburaj, S. Murugavalli
   Techniques: sign language recognition (SLR), computer vision, neural networks, deep learning, HMM, CNN
   Findings:
   • SLR Evolution: SLR has evolved from static signs to effectively capturing dynamic actions in continuous image sequences.
   • Vision-Based Superiority: Vision-based approaches generally outperform appearance-based ones, driven by deep learning techniques.
   • Vocabulary Expansion: Researchers prioritize creating larger sign language vocabularies, indicating the desire for more comprehensive SLR systems.
   • Dataset Access and Speed: Improved dataset availability and computing speed enhance training opportunities for SLR models.
   • Small Dataset Challenge: Some researchers rely on small, self-made datasets due to the lack of large datasets, especially for specific languages and regions.
   • Language Variation: Sign language variations exist based on grammar and presentation style, affecting SLR systems.
   • Diverse Classification Techniques: Researchers use varied classification methods, making method comparisons subjective.
   • Deep Learning Effectiveness: Deep learning methods, including CNN, RNN, LSTM, and Bi-Directional LSTM, perform well in processing image and video sequences.
   Shortcomings:
   • Subjective Comparisons: Method comparisons lack standard evaluation criteria, introducing subjectivity.
   • Small Datasets: Self-made small datasets pose limitations, potentially affecting generalization.
   • Language Variations: Addressing sign language variations is not explored in detail.
   • Lack of Methodology Details: The paper lacks specifics about the methodologies used in the analyzed publications.
   • Publication Timeframe: The survey is limited to publications from 2010 to 2021, possibly missing recent SLR developments.

4. Sign language recognition system for communicating to people with disabilities (2022)
   Authors: Yulius Obi, Kent Samuel Claudio, Vetri Marvel Budiman, Said Achmad, Aditya Kurniawan
   Techniques: computer vision, convolutional neural networks, American Sign Language (ASL), sign language recognition
   Findings:
   • Dataset Use: The study utilized a Kaggle dataset to develop a hand gesture recognition application.
   • CNN Model: A two-layer Convolutional Neural Network (CNN) model was created and trained for real-time hand gesture recognition.
   • GUI Development: A user-friendly graphical interface was developed for the application.
   • High Accuracy: The application achieved an impressive accuracy rate of 96.3% for recognizing and combining hand gestures into words.
   Shortcomings:
   • Gestures Require Stability: To form letters into words, gestures need to remain stable for a few seconds, leading to potential delays.
   • Background Sensitivity: The model may be sensitive to the background, suggesting a need for background removal methods for robust performance.
   • Speed Optimization: There is a need to speed up the process of forming letters into words to reduce wait times, indicating potential inefficiencies.
   • Model Accuracy: Enhancing accuracy is recommended through the addition of more CNN layers, suggesting the current model may not be fully optimized.
   • Exploration of Alternatives: Considering alternative methods beyond CNN is suggested, implying that alternative techniques might yield better results.

5. Hand Gesture Recognition for Sign Language Using 3DCNN (2020)
   Authors: Muneer Al-Hammadi (Member, IEEE), Ghulam Muhammad (Senior Member, IEEE), Wadood Abdul (Member, IEEE), Mansour Alsulaiman, Mohamed A. Bencherif, and Mohamed Amine Mekhtiche
   Techniques: 3DCNN, computer vision, deep learning
   Findings:
   • 3DCNN for Hand Gesture Recognition: The study explores the use of 3D Convolutional Neural Networks (3DCNN) for recognizing hand gestures.
   • Preprocessing: Preprocessing techniques involve temporal normalization using linear sampling and spatial normalization using face and body ratios.
   • Two Feature Learning Approaches: The first approach employs a single 3DCNN instance to extract features from the entire video. The second uses three 3DCNN instances to capture features from different video regions, followed by fusion.
   • Feature Fusion: Multi-Layer Perceptrons (MLP), Long Short-Term Memory (LSTM), and an autoencoder are employed for feature fusion.
   • Classification with SoftMax: SoftMax activation layers are used for classification in both approaches.
   Shortcomings:
   • Hyperparameter Optimization: The study aims to improve performance through future hyperparameter optimization, indicating potentially suboptimal current performance.
   • Online Testing: Testing the approach with live video feeds is mentioned but lacks results or details, limiting the assessment of real-time applicability.
   • Edge-Cloud Computing: Future use of edge-cloud computing is suggested but not explored, leaving its benefits and feasibility uncertain.
   • Lack of Computational Resource Details: The study does not specify the computational resources required for training and testing 3DCNN models, which is crucial for assessing practicality in real-world applications.
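Several of the reviewed papers (entries 2 and 4 in particular) classify hand-gesture images with small convolutional neural networks. The sketch below shows what such a classifier can look like in Python with Keras; the input size, number of classes, and layer sizes are illustrative assumptions and do not reproduce any of the reviewed models.

# Illustrative sketch (assumed architecture): a small CNN for classifying
# fixed-size hand-gesture images, loosely in the spirit of the CNN-based
# papers reviewed above. Requires TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26           # e.g., one class per static ASL letter (assumption)
INPUT_SHAPE = (64, 64, 1)  # 64x64 grayscale gesture images (assumption)

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    # Two convolution + pooling blocks extract spatial features from the image.
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    # Flatten the feature maps and classify with a softmax output layer.
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
# Training would then call model.fit(train_images, train_labels, ...) on a
# labelled gesture dataset such as the Kaggle data mentioned in entry 4.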