Build data pipelines using Spark (Scala) on Databricks
Work with multiple file formats (JSON, XML, RDF, reports, etc.) and analyze data where required for further processing.
Perform data manipulations, loading and extracting data from several sources into another schema.
Support business needs and provide solutions to complex problems.
Please be guided:
Flexibility with work location and set up
Positions available in Junior to Senior levels
We are actively pooling for this role; the start date may vary
Experience in building data pipelines using Spark (Scala) on Databricks
Development experience in the Amazon Cloud Environment AWS (S3, EMR, Databricks, Amazon Redshift, Athena)
Experience in working with REST APIs
Understanding of core AWS services and basic AWS architecture best practices
Snowflake experience is a plus
Good knowledge of DevOps
Flexible to any shift depending on business need
Amenable to work within Metro Manila central business districts