Hadoop Engineering Admin
AIG Shared Services
Parañaque City, Philippines
12h ago
Source: ictjob.ph

Your future team

  • Our technology teams collaborate with their worldwide colleagues and partners every day to take on the challenges of providing IT support to one of the world's leading financial services firms. We're people who believe that with the right values and hard work, anything is possible.

    We know that when we are at our best, we enable our customers to be at their best and realize their dreams and hoped-for successes.

  • The Information Technology group provides enterprise-wide IT solutions and strategic and procedural support for all of AIG's specialized disciplines, such as policy issuance, premium collection, claims handling, and administration. It enables AIG to deliver business strategies through efficient, world-class IT and operations services, while ensuring the necessary IT risk management and security measures are in place.

    Your contribution at AIG

    As an influencer at AIG, people come to you as a go-to source for help and support because of your deep knowledge and expertise.

    As a more experienced team member, you are capable of driving continual improvement and impacting the way that things get done.

    Because of your influence, whether direct or indirect, we are able to deliver powerful outcomes for our clients.

    Qualifications

    6 years of professional experience supporting medium- to large-scale Linux production environments.

    4 years of professional experience working with Hadoop (HDFS, MapReduce, Spark, Pig, Hive, HBase, Sqoop, Flume, etc.) and the related technology stack.

    Experience building and supporting Hadoop clusters, including design, capacity planning, performance tuning and monitoring.

    A deep understanding of Hadoop design principles, cluster connectivity, security and the factors that affect distributed system performance.

    Solid understanding of automation tools.

    Experience with programming languages such as Python, C, C++, Java, Perl or PHP, as well as UNIX shell scripting.

    Fundamental knowledge of load balancers, firewalls and TCP/IP protocols.

    Knowledge of best practices related to security, performance, and disaster recovery.

    Experience with tool integration, automation and configuration management using Git, SVN and Jira.

    Experience with performance tuning (JVM, JMX, connection pooling) using JConsole or similar profiling tools.

    Prior experience migrating big data platforms from earlier versions to the latest releases.

    Experience with APIs and web services (SOAP and/or REST).

    Strong experience with cluster security tools such as Kerberos, Ranger and Knox.

    Self-driven, with the ability to work independently and as part of a team, and a proven track record of developing and launching products at scale.

    Good understanding of establishing the analytic environments required for structured, semi-structured and unstructured data.

    Ability to draw conclusions and effectively communicate findings to both technical and non-technical team members.

    Ability to serve as the go-to person for internal teams/users and various project groups.

    Cloudera or Hortonworks certification.

    Key Responsibilities

    Manage large-scale Hadoop cluster environments, handling all Hadoop environment builds, including design, capacity planning, cluster setup, performance tuning and ongoing monitoring.

    Evaluate and recommend systems software and hardware for the enterprise system including capacity modeling.

    Responsible for implementation and ongoing administration of Hadoop (Hortonworks HDP 2.3) and Cassandra.

    Work with data delivery teams to set up new Hadoop users; this includes creating Kerberos principals and testing HDFS, Hive and MapReduce access for the new users.
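
    A new-user setup along these lines is typically scripted. The following is a minimal sketch only, assuming a Kerberized HDP cluster; the realm, keytab path and group are placeholders, and the exact steps depend on how the cluster is actually secured:

        # new_hadoop_user.py - hypothetical onboarding sketch, not the actual process.
        import subprocess

        def run(cmd):
            """Run a shell command and fail loudly if it returns non-zero."""
            print("+", cmd)
            subprocess.run(cmd, shell=True, check=True)

        def onboard(user, realm="EXAMPLE.COM"):
            principal = f"{user}@{realm}"
            keytab = f"/etc/security/keytabs/{user}.keytab"   # placeholder path

            # 1. Create the Kerberos principal and export a keytab for the new user.
            run(f'kadmin.local -q "addprinc -randkey {principal}"')
            run(f'kadmin.local -q "xst -k {keytab} {principal}"')

            # 2. Create and hand over the user's HDFS home directory (run as the HDFS superuser).
            run(f"hdfs dfs -mkdir -p /user/{user}")
            run(f"hdfs dfs -chown {user}:hadoop /user/{user}")   # 'hadoop' group is an example

            # 3. Smoke-test access: authenticate as the new principal and list the home directory.
            #    Hive (via beeline) and a sample MapReduce job would be exercised the same way.
            run(f"kinit -kt {keytab} {principal}")
            run(f"hdfs dfs -ls /user/{user}")

        if __name__ == "__main__":
            onboard("newanalyst")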

    Work with core production support personnel in IT and Engineering to automate deployment and operation of the infrastructure.

    Manage, deploy and configure infrastructure with Puppet or other automation toolsets.

    Screen cluster job performance and carry out capacity planning.

    Monitor cluster connectivity and security.

    File system management and monitoring.

    Work diligently with the infrastructure, network, database, application and platform teams to guarantee high data quality and availability.

    Collaborate with application teams to install operating system and big data platform (Hadoop, Cassandra) updates, patches and version upgrades when required.

    Work with platform partners to create trouble tickets and incorporate fixes from the partners into the environment.

    Automate deployment, customization, upgrades and monitoring through DevOps tools.

    Identify hardware and software technical problems, and storage and/or related system malfunctions.

    Create metrics and measures of utilization and performance.
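
    As one hedged illustration only (the NameNode host, port and alert threshold below are placeholders; in practice such metrics would more likely feed Ambari Metrics, Grafana or a similar monitoring stack), HDFS utilization can be read from the NameNode's JMX endpoint:

        # hdfs_capacity_check.py - hypothetical sketch of a utilization measure.
        import json
        import urllib.request

        # Placeholder NameNode address; HDP 2.x exposes JMX over the NameNode web UI port.
        NAMENODE_JMX = ("http://namenode.example.com:50070/jmx"
                        "?qry=Hadoop:service=NameNode,name=FSNamesystemState")

        def hdfs_utilization():
            """Return HDFS used capacity as a fraction of total, plus the live DataNode count."""
            with urllib.request.urlopen(NAMENODE_JMX) as resp:
                bean = json.load(resp)["beans"][0]
            return bean["CapacityUsed"] / bean["CapacityTotal"], bean["NumLiveDataNodes"]

        if __name__ == "__main__":
            used, live = hdfs_utilization()
            print(f"HDFS used: {used:.1%}, live DataNodes: {live}")
            if used > 0.80:   # placeholder threshold
                print("WARNING: plan capacity expansion or cleanup")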

    Plan capacity for, and implement, new or upgraded hardware and software releases, as well as storage infrastructure.

    Research and recommend innovative and, where possible, automated approaches for system administration tasks. Identify approaches that leverage our resources, provide economies of scale, and simplify remote/global support issues.

    Perform other work-related duties as assigned.
