Provides assessment and estimation of the effort needed based on the submitted Business Request Form.
Develops and delivers automated data pipeline (Extract, Transform, Load, or ETL) jobs in the Data Lake within expected timelines.
Coordinates with various support groups to complete assigned automated data pipelines under development.
Prepares the deployment package for endorsement to the Release Testing Team, which includes the Solutions Design Document and Operations Guide.
Provides support during the stabilization phase of pipelines endorsed to production until they are declared BAU.
Reviews and approves builds based on the set framework, standards, and best practices.
Documents source code.
Other job-related activities that may be assigned from time to time.
Job Qualifications / Requirements
Education: Bachelor's or Master's degree in IT, Computer Science, Engineering, or any related course.
Related Work Experience:
5-8 years of Java experience
5 years of DevOps experience (Git, Jenkins, SonarQube, Jira)
8 years of experience in ETL development and management
In-depth experience in ETL design, implementation, and support
Experience with Big Data platforms
Experience establishing and building operational data stores and analytic data stores (enterprise data warehouse)
Knowledge:
In-depth knowledge of Big Data and distributed computing
Advanced technical experience and business knowledge in various SDLC methodologies, including waterfall, iterative, and agile software development life cycles, or related disciplines/processes, is preferred.
Skills:
Must have strong SQL and PL/SQL skills with the ability to solve highly complex challenges.
Must have good communication and interpersonal skills for interacting and collaborating with developers, analysts, and business staff throughout the organization.
Must have the ability to communicate clearly in writing to document data requirements and translate them into technical solutions.
Must be adept at working in a fast-paced environment with tight SLAs.