Job Description:
- Experience / knowledge of C++
- Experience / knowledge of Docker
- Experience with Unix / Linux systems
- Experience with AWS (EC2, S3, SQS)
- Experience building analytics platforms
- Experience working with data scientists and software engineers
- Experience in data engineering and data analysis
- BSc or MSc in Computer Science, Engineering, Mathematics, or a related field

Minimum Qualifications:
- Candidate must possess at least a Bachelor's Degree or Master's / Postgraduate Degree in Engineering (Computer / Telecommunication), Computer Science / Information Technology, or equivalent.
- At least 2-5 years of working experience in the related field is required for this position.
- Expert in distributed computing technologies such as Spark and Hadoop distributions (MapR, Cloudera, or Hortonworks).
- Experience building real-time pipeline processing using Spark Streaming (or Apache Flink) and Kafka (or Kinesis).
- Experience with Scala and Python - you will need to write, test, and maintain production-quality Scala and Python code.
- Strong software engineering skills - you will be expected to work on the full software life cycle: software design and architecture, coding, unit tests, integration tests, helping to build the CI / CD infrastructure, code deployment, and application monitoring.
- Understanding of machine learning and advanced statistical techniques.
- Exceptional attention to detail.
- Strong presentation and communication skills.
- Ability to work in teams with people of different skills and backgrounds.
- Experience with NoSQL columnar databases (HBase, Cassandra).
- Keen to explore the growing world of AIOps.
- Preferably Staff level (non-management & non-supervisory), specializing in IT / Computer - Software or equivalent.