
Hello!

I’m Sreenath Vemireddy

Sr. Big Data Developer

Hello! I’m Sreenath Vemireddy.

I am passionate about technology and currently work as a Big Data consultant. I am a quick learner and a team player who gets the job done and delivers on time.

Name: Sreenath Reddy Vemireddy
DOB: 21-03-1993
Email: vemisreenathreddy@gmail.com
Phone: +91 8712210827 (WhatsApp)
Address: India
Status: ______
Sreenath Vemireddy

Summary

6+ years of experience in Big Data with Hadoop, Spark and its components (Spark Core, Spark SQL), Hive, HDFS, and Azure (Data Factory, Blob Storage, SQL Database).

Enthusiastic about working with new technologies, with strong analytical, communication, and interpersonal skills, and the ability to perform well both as part of a team and individually.

My Skills

Hadoop, Spark, PySpark
Cloudera, Hortonworks
Python, Scala, Core Java, Hue, Superset, Hive, Impala, SQL, Shell, Pig, Sqoop
Azure (Data Factory, Blob Storage, SQL Database)
GitHub
PyCharm, IntelliJ, Eclipse, Jupyter Notebook

Experience

Sr. Big Data Consultant

U3 Info Tech Pvt Ltd, Singapore
2022-06 - Current

Working for a banking client on risk model monitoring, validation, and MAS reporting.

Consultant

Ernst & Young LLP, India.
2020-07 - 2022-04

Worked with multiple clients on data migration and reporting projects using cloud and on-premises environments.

Software Engineer

Experis IT Pvt Ltd, India.
2019-05 - 2020-07

Designed and developed frameworks to generate daily, weekly, monthly, and yearly analytical reports per client requirements.

Engineer

RMSI Pvt Ltd, India.
2017-03 - 2019-01

Worked as a Data Engineer, collaborating with various teams to understand requirements and deliver my best work.

Trainee Engineer

TriGeo Technologies Pvt Ltd, India.
2016-06 - 2017-03

Started my career as a Trainee Engineer.

Latest Projects

MAS 637 Reporting

Client: DBS
2023-03 - Current

Description: The MAS 637 reporting project involved analyzing the status of different products, including AUTO, RML, CC, CL, and WM. The analysis focused on the pool IDs associated with each product and determined their status for the previous 12 months as well as the current month. The objective was to identify any significant changes or trends in the status of these pools and share the findings with the Monetary Authority of Singapore (MAS). By fulfilling this responsibility, I contributed to ensuring regulatory compliance and transparency in the financial sector.

Technologies: ADA (Inhouse framework), Python, Spark, Presto, Hive, Hadoop, Jupyter, Airflow, Collibra.

Responsibilities:
   • PySpark Code Developer: Developed PySpark scripts in the ADA framework to analyze pool statuses and identify trends over 12 months (a minimal sketch follows this list).
   • Data Analyst: Conducted comprehensive data analysis using PySpark to identify significant changes and patterns in pool statuses.
   • Compliance Reporter: Generated accurate, compliant reports based on the identified trends, ensuring adherence to regulatory guidelines.
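
For illustration, a minimal PySpark sketch of this kind of 12-month pool-status trend analysis. The table name (risk.pool_status) and columns (pool_id, product, status, report_month) are hypothetical placeholders, not the actual DBS/ADA schema:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("pool_status_trends").enableHiveSupport().getOrCreate()

# Load the current month plus the previous 12 months of pool statuses.
df = (spark.table("risk.pool_status")  # assumed table name
      .where(F.col("report_month") >= F.add_months(F.trunc(F.current_date(), "month"), -12)))

# For each pool, compare each month's status with the previous month's.
w = Window.partitionBy("pool_id").orderBy("report_month")
changes = (df.withColumn("prev_status", F.lag("status").over(w))
             .where(F.col("status") != F.col("prev_status")))

# Summarize status changes per product (AUTO, RML, CC, CL, WM) per month.
(changes.groupBy("product", "report_month")
        .agg(F.count("*").alias("status_changes"))
        .orderBy("product", "report_month")
        .show())
```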

SAS Exit

Client: DBS
2022-06 - Current

Description: SAS Exit Migration is a project in which we migrated all existing SAS scripts to the DBS in-house platform called ADA (Advanced DBS Analytics). The project calculates various reports, especially the impact of default accounts across the products the bank offers to its retail and non-retail customers, such as credit cards, mortgage loans, cash line, and auto loans.

Technologies: SAS, ADA (Inhouse framework), Python, Spark, Presto, Hive, Hadoop, Airflow, Collibra.

Responsibilities:
   • Requirement Analyst: Studied the existing SAS scripts and prepared requirement documents capturing project specifications and objectives.
   • Architect: Designed an architecture in the ADA framework that met the project requirements, ensuring seamless integration and efficient data processing.
   • Metadata Designer: Designed and created metadata structures in Collibra to facilitate data governance and management.
   • PySpark Conversion Specialist: Converted various SAS scripts into PySpark code, leveraging the capabilities of the ADA framework for enhanced data analysis.
   • Airflow Scheduler: Created Directed Acyclic Graphs (DAGs) in Airflow to schedule and automate the execution of the PySpark scripts, ensuring timely data processing (a minimal sketch follows this list).
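
For illustration, a minimal Airflow DAG sketch for scheduling one converted PySpark script. The DAG id, schedule, and spark-submit path are assumptions, not the actual ADA configuration:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="sas_exit_reports",          # hypothetical DAG id
    start_date=datetime(2022, 6, 1),
    schedule_interval="0 2 * * *",      # run daily at 02:00
    catchup=False,
) as dag:
    # Submit one converted PySpark job; the script path is a placeholder.
    run_report = BashOperator(
        task_id="run_default_accounts_report",
        bash_command="spark-submit /opt/jobs/default_accounts_report.py",
    )
```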

CRM (Customer Relationship Management) Data Migration

Client: ICIC
2022-01 - 2022-04

Description: Migrated data from multiple sources to the Azure cloud environment and generated reports on SQL Database.

Technologies: Azure Data Factory, Blob Storage, SQL Database, Oracle, Key Vault, and Vertica.

Responsibilities:
   • Created ADF pipelines to manage multiple sources and destinations.
   • Migrated data from on-premises Vertica and other environments to the cloud.
   • Led the team.

H2H (HANA to Hadoop)

Client: PayPal
2020-07 - 2021-12

Description: H2H is a migration project from SAP HANA to the Hadoop environment. The goal was to migrate all feasible HANA reports to Hadoop with improved approaches and, per client requests, deliver the reports via email, Watch (a web UI), various file types, and Hive tables.

Technologies: PySpark, HDFS, Hive, Python, GIT, Shell and Custom PayPal Frameworks.

Responsibilities:
   • Prepared the business requirement, design, and application documents.
   • Converted SAP HANA reports to Hadoop through system design and analysis.
   • Involved in data loading and pre-processing using Hive and Spark.
   • Contributed to moving data to MongoDB and the Watch reporting UI.
   • Developed most of the reports using Spark for better performance (a minimal sketch follows this list).
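
For illustration, a minimal sketch of a Spark-based daily report of the kind migrated from SAP HANA. The source table (sales.transactions), its columns, and the output path are hypothetical; the real PayPal frameworks and delivery channels differ:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("h2h_daily_report").enableHiveSupport().getOrCreate()

# Aggregate the previous day's transactions, mirroring a HANA report.
report = (spark.table("sales.transactions")  # assumed source table
          .where(F.col("txn_date") == F.date_sub(F.current_date(), 1))
          .groupBy("region")
          .agg(F.sum("amount").alias("total_amount"),
               F.count("*").alias("txn_count")))

# Hand the report over as a single CSV file, one of the delivery formats used.
report.coalesce(1).write.mode("overwrite").csv("/reports/daily_sales", header=True)
```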

IAP (Integrated Analytics Platform)

Client: Abbvie
2019-05 - 2020-07

Description: Integrated Analytics Platform (IAP) is an analytical project to understand patients' feedback within the organization. As an initial task, we took feeds from different source applications, explored the data, and ingested it into the Hadoop environment. The project provides a 360-degree view of patient activities; we process patient events into Hive tables and analyze them.

Technologies: Apache Spark, Hadoop, Hive, Scala, Impala, Shell, Autosys.

Responsibilities:
   • Involved in workflow and pipeline design.
   • Ingested all source application data into the IAP applications.
   • Managed different kinds of data as a data engineer.
   • Involved in Spark application development.
   • Wrote Hive and Impala queries for transforming and analyzing the data (a minimal sketch follows this list).
   • Developed shell scripts for handling Spark jobs.
   • Scheduled the applications using AutoSys, creating JIL files for the jobs.
   • Handled all kinds of report requirements using PySpark.
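
For illustration, a minimal sketch of the Hive-style transformation described above, written in PySpark for consistency with the other examples (the project itself used Scala and Impala); all table and column names are hypothetical placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iap_events").enableHiveSupport().getOrCreate()

# Transform raw patient events into a curated Hive table for analysis.
# iap.patient_events_raw and its columns are assumed names, not the real schema.
spark.sql("""
    CREATE TABLE IF NOT EXISTS iap.patient_events_curated AS
    SELECT patient_id,
           event_type,
           CAST(event_ts AS DATE) AS event_date,
           COUNT(*) OVER (PARTITION BY patient_id) AS events_per_patient
    FROM iap.patient_events_raw
    WHERE event_type IS NOT NULL
""")
```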

Contact Me

Let’s talk

If you like my profile and have a suitable opportunity, please get in touch via my email or contact number.

See you!

Email: vemisreenathreddy@gmail.com
Phone: +91 8712210827 (WhatsApp)
Links: