Job Description:
Are you looking for unlimited opportunities to develop and succeed? With work that challenges and makes a difference, within a flexible and supportive environment, we can help our customers achieve their dreams and aspirations.
The Asia Data Office is looking for an experienced Data Engineer to join our team and support our regional partners. If you have a passion for data and a desire to drive quality data solutions within a data-driven culture, read on to learn more about this opportunity!
We are looking for the right individual: someone who enjoys supporting data operations, has a strong willingness to learn and build data operations skills through team collaboration, is eager to drive and execute our data strategy, is willing to help mature our agile practices, and is looking for an opportunity to take on technical, operational, and informal leadership roles on the team.
Select and integrate the Big Data tools and frameworks required to provide requested capabilities
Implement ETL / ELT flows using Big Data solutions, e.g. NiFi, Spark, Python
Monitor performance and advise on necessary infrastructure changes or code performance tuning
Support business users in the use of the Enterprise Data Lake (EDL) as part of BAU
Perform proofs of concept (POCs) on new integration patterns and solutions
Maintain and monitor the platform stability and performance of the EDL
Attend to incidents and change requests
Write and maintain technical documentation
Perform unit tests and system integration tests
Execute updates, patches, and other activities required to maintain and enhance the operations of the EDL
Support the Agile delivery squads when required
Skills and Qualifications
Proficient understanding of distributed computing principles
Good knowledge of Hadoop clusters and all included services
Ability to resolve ongoing issues with operating the cluster
Proficiency with Hadoop v2, MapReduce, HDFS
Experience with data engineering projects, e.g. integrating data from multiple data sources into a target system
Good knowledge of Big Data querying tools such as Hive and HBase
Knowledge of data streaming (e.g. Kafka, Spark)
Knowledge of any server-side programming language (e.g. Java, Python, Spark, R)
Knowledge of any Change Data Capture (CDC) solution
Experience in working on Agile projects is an advantage.
Experience with Azure and / or other Cloud platforms
Management of open-source databases, including MongoDB and HBase
Experience managing Zeppelin, JupyterHub, or RStudio is a definite added advantage
Education and Experience
Bachelor's degree preferably in Computer Science or Data / Engineering related courses
Recognized qualification in Service Management is helpful
ITIL certification is helpful
3+ years' experience in one or more of the following:
Project management experience is an advantage
Understanding of the insurance and / or financial industry is an asset
Travel up to 10% per year