Soumya Nayak Resume



Post on 14-Jan-2017


Page 1: Soumya Nayak Resume

Soumya Ranjan Nayak
IT ANALYST
Tata Consultancy Services
Ph: +91 8792259451 / 9739844503
Email: [email protected] / [email protected]

OBJECTIVE:
Seeking Data Warehouse Developer and Data Analyst opportunities requiring expertise in enterprise data warehouses and Big Data, where I can apply and grow my technical and domain know-how.

PROFESSIONAL SUMMARY:

Big Data and Cloud Computing
More than 1.5 years of experience with Hadoop on the AWS platform.
Extensively used AWS services such as EMR (managed Hadoop), Data Pipeline, SQS, EC2, S3, Redshift, and RDS (MySQL).
Worked extensively with the Sqoop and Hive frameworks on Amazon EMR.
Used Pig for transformation and consolidation activities in EMR.
Created Pig and Hive UDFs for data lookup and data validation.
Automated processes using shell scripts and Windows batch files, and created back-end processing modules in Python.
Worked with Amazon Redshift for processing large numbers of datasets and large data volumes.

Data Warehouse
2+ years of extensive experience building data warehouses using IBM DataStage.
Strong understanding of the principles of enterprise data warehousing and dimensional modeling.
Worked on end-to-end design of data warehousing solutions, including analyzing mapping documents, analyzing source systems using Information Analyzer, exception handling, error-reprocessing models, and generic ETL processes.
Extensive experience designing and developing DataStage jobs in versions 9.1, 8.7, and 8.5 (IBM InfoSphere Information Server).
Basic knowledge of the Standardize and Investigate QualityStage components.
Knowledge of Information Analyzer and of writing rules for column analysis of source systems.
Good knowledge of shell scripting.

Database
1+ years of experience as a SQL Server developer, analyzing and translating business requirements into SQL Server 2008 and SQL Server 2012 databases.
Experience creating tables, views, triggers, stored procedures, user-defined functions (UDFs), and other T-SQL (DDL, DML) statements for various applications.
Experience writing complex SQL queries involving joins across many tables.
Extensive experience translating business requirements into stored procedures, UDFs, and complex queries.
Performance tuning using SQL Profiler and query optimization techniques.

Technical Skills:

Hadoop and Cloud Ecosystem: Hadoop, Pig, Hive, Sqoop, Oozie, Amazon Web Services
Languages: Python, Core Java, Unix shell scripting, SQL Server programming
Databases: MySQL (RDS), MS SQL Server, DB2
Tools: PyCharm, Eclipse, MS SQL Server, IBM InfoSphere, SQL Developer
Hardware: Hadoop cluster, AWS platform, AIX server
Methodologies: Agile and Waterfall

WORK HISTORY: Tata Consultancy Services (Jan 2012 – Present)

Enterprise Data Services (Oct 2015 – Present)


The Enterprise Data service is a one-stop data brokerage service where project teams can register sources of data as well as consume a variety of sources from different regions and departments within Johnson and Johnson. It is a repository for large quantities and varieties of structured and unstructured data. Hosted on the cloud, teams can leverage the EDL to source and consume different data sets on a pay-as-you-consume pricing model.

Responsibilities:
Responsible for designing, coding, testing, and deploying the back-end module.
Created a back-end module in Python that launches on-demand Hadoop clusters using AWS EMR and deploys workflow definitions using the AWS Data Pipeline service.
Deployed a Sqoop module that consumes data from the varied data sources registered by source owners and loads the data into HDFS (AWS S3).
Created Hive external tables to perform transformation and consolidation activities.
Worked with other AWS services: SQS for message queuing, RDS (MySQL) for the application back end and Hive metastore, and S3 as HDFS and data storage.
Worked on UNIX shell scripting for automation.
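The on-demand EMR launch described above can be sketched with boto3, the AWS SDK for Python. This is a minimal, hypothetical sketch: the cluster name, release label, instance types, roles, and log bucket are illustrative assumptions, not values from the actual project.

```python
def build_emr_request(name, workers, log_uri):
    """Build a boto3 run_job_flow request for an on-demand Hadoop/Hive cluster.

    All concrete values below (release label, instance types, default roles)
    are illustrative defaults, not the project's actual configuration.
    """
    return {
        "Name": name,
        "ReleaseLabel": "emr-5.30.0",
        "LogUri": log_uri,
        "Applications": [{"Name": "Hadoop"}, {"Name": "Hive"}, {"Name": "Sqoop"}],
        "Instances": {
            "InstanceGroups": [
                {"Name": "Master", "InstanceRole": "MASTER",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "Core", "InstanceRole": "CORE",
                 "InstanceType": "m5.xlarge", "InstanceCount": workers},
            ],
            # Terminate the cluster automatically once the submitted work ends.
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }


def launch_cluster(name, workers, log_uri):
    """Submit the request; returns the new cluster's JobFlowId."""
    import boto3  # imported here so the builder above works without the SDK
    emr = boto3.client("emr")
    resp = emr.run_job_flow(**build_emr_request(name, workers, log_uri))
    return resp["JobFlowId"]
```

Keeping the request builder separate from the API call makes the launch logic testable without AWS credentials, which suits the kind of automated back-end module described above.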

Service Now DataLake (Apr 2015 – Oct 2015)

Service Now DataLake is a business-critical application for forecasting, deployment, and production planning. It pulls data from sources such as Service Now, NetCool, and Apptio and processes it into consumable raw files. Business rules are applied to the transformed files, which are then submitted to business analytics to generate metric data such as SRM Lifecycle, SRM Quarterly Ops metrics, and SRM Dashboards.

Responsibilities:
Responsible for the design, coding, testing, and deployment phases of MapReduce programs and Hive queries.
Extensively used Pig scripts for transformation and consolidation activity on the Hadoop cluster.
Created Pig UDFs for data lookup and data validation.
Worked with Amazon Web Services: EMR for Hadoop, Data Pipeline for workflow and scheduling.
Created a consumer-side data loading framework using AWS Redshift.
Worked on AWS Data Pipeline for scheduling the jobs.
Worked on UNIX shell scripting for automation.
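Pig UDFs of the kind mentioned above can be written in Python and registered from a Pig script. The sketch below is a hypothetical data-validation UDF; the field names and validation rules are illustrative, not the project's actual logic. When run inside Pig (via Jython), the `outputSchema` decorator comes from `pig_util`, so a no-op fallback is provided for standalone use:

```python
try:
    # Available when the file is registered inside a Pig script (Jython).
    from pig_util import outputSchema
except ImportError:
    # No-op fallback so the module also runs outside Pig.
    def outputSchema(schema):
        def wrap(func):
            return func
        return wrap


@outputSchema("valid:boolean")
def is_valid_record(account_id, amount):
    """Hypothetical rule: a record is valid if the account id is a
    non-empty string of digits and the amount parses as a non-negative number."""
    if account_id is None or amount is None:
        return False
    if not str(account_id).strip().isdigit():
        return False
    try:
        return float(amount) >= 0.0
    except ValueError:
        return False
```

In a Pig script this would be registered with something like `REGISTER 'validate.py' USING jython AS udfs;` and applied per tuple as `udfs.is_valid_record(id, amt)` (script and alias names here are illustrative).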

Huntington National Bank (Jan 2014 – April 2015)
The project migrated the legacy Customer Information System (CIS) on mainframes to the IBM MDM platform through IBM DataStage. Master data such as Party, Account, Party-to-Account Relationship, and Address from different source systems, such as AFS (financial services), MSP (small loans), ACBS (large loans at the organization level), and TRUST, is fed to the MDM system as XML after the required cleansing and transformations.

Roles and Responsibilities:
Developed various DataStage jobs for different business functions.
Worked on end-to-end design of data warehousing solutions, including analyzing mapping documents, analyzing source systems using Information Analyzer, exception handling, error-reprocessing models, and generic ETL processes.
Worked on all kinds of projects, including new development, enhancements, and critical production support.
Worked as a developer migrating customer, organization, and account data to IBM MDM using IBM DataStage.

Super Partners (Apr 2012 – Nov 2013)
The project migrated Superpartners' clients onto the new administration platform spRIGHT and managed end-to-end system integration, including configuration work, complex integration with various business-critical applications, and execution of complex testing scenarios to ensure the system met Superpartners' business objectives for its business transformation program covering its 16 industry retirement funds.

Roles and Responsibilities:
Worked as a developer in the Workflow and Case Definition team using Microsoft SQL Server 2008/2012 and the Microsoft Windows Workflow Foundation framework.
Extensively worked on translating business requirements into stored procedures, user-defined functions (UDFs), and complex queries.
Performance tuning using SQL Profiler and query optimization techniques.
Responsible for writing and updating technical design documents for workflow and case definitions.
Developed stored procedures to implement business rules for various modules.


EDUCATIONAL QUALIFICATION:

Degree and Date | Institute | Marks | Major/Specialization
Bachelor of Technology, June 2011 | Gandhi Institute of Engineering and Technology, Gunupur (Biju Patnaik University of Technology, Odisha) | 7.5/10 CGPA | Computer Science and Engineering
12th CBSE, June 2007 | DAV Public School, Cuttack, Orissa | 64.4% | Science
10th CBSE, June 2005 | DAV Public School, Cuttack, Orissa | 77.4% | NA

CERTIFICATIONS AND TRAININGS:
IBM Certified Solution Developer - InfoSphere DataStage v8.5: IBM000028984
Oracle Certified Associate (OCA) (1z0-007, 1z0-147): OC0888575
Completed a training course on Big Data and Hadoop at Edureka, Bangalore.

ACHIEVEMENTS:
Won the STAR OF THE QUARTER award for outstanding contribution to CBUS development in 2013.
Selected for the final round of the 3rd All Orissa C-Marathon, organized by Lakshya in 2008, after excelling in two college-level rounds.
Won 2nd prize in C-Mahaguru, 2008, in college.
Participated in "Campus vs. Concepts", "Networking Seminar", "Seminar on Ethical Hacking", and "Linux Basic & Networking Workshop" conducted in college.
Excelled in the district-level badminton tournament and represented Cuttack district at the state level in 2004; played zonal-level badminton in 2006.
Awarded the RAJYA PURASKAR in Bharat Scouts & Guides at school level in 2005.
Awarded Best Scout at school level in 2004.

PERSONAL INFORMATION:
Date of Birth: 1st December 1988
Nationality: Indian
Passport No.: J3048373
Contact Address: FF1 Suraksha Enclave, 7th Main, 20th Cross, BTM 2nd Stage, Bangalore-560076, Karnataka

DECLARATION:

I hereby declare that all the details furnished above are true to the best of my knowledge and belief.

Soumya Ranjan Nayak
IT ANALYST
Tata Consultancy Services, Bangalore