SIDDARDHA MADALA
Mobile : +919318881234
Mail : cddardh@gmail.com
LinkedIn : https://www.linkedin.com/in/cddardh/
Professional Summary

Data Engineer seeking to leverage 4+ years of overall experience in big data, ETL, data
warehousing, and reporting technologies, including around 2 years of experience in big
data environments involving tools such as Hive, Sqoop, and Spark.

Experience in the import and export of data between HDFS and relational database
systems using Sqoop.

Have an in-depth understanding of Hadoop and its various components, such as HDFS,
Job Tracker, Task Tracker, Name Node, Data Node, Resource Manager (YARN),
Application Master, and Node Manager.

Extensive hands-on experience with the ETL process, consisting of data cleaning, data
transformation, and data loading into data warehouses and data marts.

Worked on multiple ETL tools, such as IBM DataStage, Oracle Data Integrator, and
Microsoft MSBI.

Hands-on experience in working with stored procedures and database queries in Oracle
and SQL Server.

Working experience with SAP Business Objects and Power BI reporting tools.
Work Experience
Organisation : Dsmart Systems Pvt Ltd, Hyderabad
Period : Aug-2018 to Dec-2022
Designation : Assistant Consultant
Education
Bachelor of Technology in Computer Science and Engineering from Andhra University in 2017.
Technical Skills
Big Data Technologies : HDFS, Hive, Sqoop, Spark
Cloud Technologies : GCP, Dataproc, BigQuery, Cloud Composer
Languages : Python, C, SQL, Shell
ETL Tools : IBM DataStage, Oracle Data Integrator
Databases : SQL Server, Oracle
Reporting Tools : SAP Business Objects, Power BI
Others : ServiceNow, Jira, GitHub
Projects Handled
6. GDW: Global Data Warehouse Phase 1
Description:
GDW is a single, consolidated, and certified data repository for all analytics and reporting of the
IRM. It has data about debt, orders, spend, storage, activity, and inventory for all countries in a
single data warehouse.
Responsibilities:
1. Involved in importing data from RDBMS systems (Oracle, SQL Server) to HDFS using
Sqoop.
2. Sourced the data from multiple repositories into a Hadoop cluster.
3. Involved in writing Python code using Spark DataFrames and Spark SQL to transform
the data as required (see the sketch after this list).
4. Loaded the data into Hive-managed tables using partitions and buckets.
5. Responsible for creating jobs for dimensions and facts.
6. Worked on the optimization of Spark jobs.
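Illustrative only: a minimal PySpark sketch of the kind of Spark SQL transformation and partitioned, bucketed Hive load described in points 3 and 4 above. The application name, HDFS path, database, table, and column names are assumptions made for the example, not details from the project.

```python
from pyspark.sql import SparkSession

# Hive support lets Spark write managed tables into the Hive warehouse.
spark = (SparkSession.builder
         .appName("gdw_orders_load")      # hypothetical job name
         .enableHiveSupport()
         .getOrCreate())

# Data previously landed in HDFS (e.g. via Sqoop); path and schema are assumed.
spark.read.parquet("/data/raw/orders").createOrReplaceTempView("orders_raw")

# Spark SQL transformation: basic cleaning and type standardisation.
orders_clean = spark.sql("""
    SELECT order_id,
           country_code,
           CAST(order_amount AS DECIMAL(18, 2)) AS order_amount,
           TO_DATE(order_date)                  AS order_date
    FROM orders_raw
    WHERE order_id IS NOT NULL
""")

# Load into a Hive-managed table using partitions and buckets.
(orders_clean.write
     .mode("overwrite")
     .partitionBy("country_code")
     .bucketBy(16, "order_id")
     .sortBy("order_id")
     .saveAsTable("gdw.fact_orders"))
```

Partitioning by country keeps per-country queries pruned, while bucketing on the key can help the downstream dimension and fact jobs join efficiently.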
5. IRM App Support
Description:
IRM App Support includes the support of various ETL and reporting applications that handle the
data for various countries. I was involved in the support of SAP BO and Oracle Data Integrator
applications. The SAP BO application provides Business Objects reports for IME DW, TPR, and
RCO applications.
Responsibilities:
1. Provided ad hoc reports as requested by the users.
2. Managed the admin activities of the SAP BO applications, including user creation and
access management of reports and folders.
3. Monitored the daily, weekly, and monthly publications in BO.
4. Completed the daily jobs in the TPR and RCO applications.
5. Validated the daily data loads and communicated the status to the users.
6. Resolved the support tickets within the SLA.
4. IME SAP BO
Description:
IME DW is the data warehouse of the organization, which deals with records management in the
Europe region, and the IME SAP BO application provides Business Objects reports on storage,
billing, capacity, inventory, and orders for different lines of business in Europe.
Responsibilities:
• Built reports for the new countries added to the IME.
• Scheduled daily, weekly, and monthly SAP Webi reports with publications.
• Created publications for the bulk generation of reports every month.
• Created audit reports on the reports and universes used by users with SAP Query Builder.
3. Yadardh Dashboards
Description:
Yadardh Dashboards track the performance of government child welfare activities and workers.
These Power BI reports are embedded in their respective web applications.
Responsibilities:
• Implemented all the changes to the existing reports as required.
• Created new reports required for the monthly and yearly performance monitoring.
2. Alert Monitoring System (AMS)
Description:
The Alert Monitoring System (AMS) is an automation system under the Anti-Money Laundering
department for a leading bank that has a customer base worldwide. AMS will provide all the details
required to investigate any kind of alert generated in the system.
Responsibilities:
• Loaded the data into Hive tables from a specified data lake.
• Performed different calculations and transformations using HiveQL (see the sketch after
this list).
• Exported the data from Hive tables into XML files.
• Scheduled and monitored the Hive jobs using shell scripts and Control-M.
• Responsible for automating different types of manual activities, such as data cleaning and
processing, using shell scripts.
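Illustrative only: a minimal sketch of the kind of HiveQL calculation described above, expressed here through PySpark's Hive support rather than the Hive CLI used on the project. The database, table, and column names are assumptions made for the example.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("ams_alert_summary")    # hypothetical job name
         .enableHiveSupport()
         .getOrCreate())

# HiveQL-style aggregation over alert data loaded from the data lake
# (the aml database, its tables, and columns are assumed names).
alert_summary = spark.sql("""
    SELECT alert_type,
           alert_date,
           COUNT(*)                 AS alert_count,
           SUM(transaction_amount)  AS total_amount
    FROM aml.alerts
    WHERE alert_date = CURRENT_DATE()
    GROUP BY alert_type, alert_date
""")

# Persist the derived figures for a downstream step such as the XML export.
alert_summary.write.mode("overwrite").saveAsTable("aml.alert_daily_summary")
```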
1. AR Track 1 and 2
Description:
AR Track 1 is an accounts receivables project that deals with the credit data of the organization.
This will replace the manual process of running queries against the transactional system with a
data warehousing system.
Responsibilities:
• Analysed and tuned the existing queries used to extract the data from the source.
• Developed DataStage jobs and loaded the data into the target database.
• Created sub-sequences and master sequences and scheduled the jobs.
• Worked on creating jobs for different dimensions in AR Track 2, which deals with data
warehousing.