BFSI Big Data Engineer (Hadoop)
- Experience Range: From 3 years
- Job Location: Ho Chi Minh City
- Duty & Responsibilities:
As a Hadoop big data engineer, you will operate and monitor a scalable and resilient data platform based on the Hadoop ecosystem to address business requirements:
– Provide operational excellence, guaranteeing high availability and platform stability
– Perform technical analysis, troubleshooting, and resolution of production incidents, problem tickets, and changes
– Take ownership of data processing/batch job applications from a support & maintenance perspective
– Work on small enhancements: analysis, build/test, and deployment/release support
– Ensure timely service delivery with effective team handling and stakeholder management
- Requirements:
Must have:
– Knowledge of the Hadoop ecosystem, including Spark, HDFS, MapReduce, YARN, etc.
– Proficiency in Python
– Experience monitoring large-scale data processing jobs (batch processing, stream processing)
– Understanding of SLAs and meeting timelines for support activities
Good to have:
– Experience with Hadoop distributions such as Cloudera and Hortonworks, including comparison and feasibility assessment
– Experience with data warehouses
– Experience in ETL
– Experience in data management:
Data quality
Data integration
– Experience with SQL and NoSQL databases
– Experience with SRE, patching & automation: Kubernetes or Docker and containerization
– Experience working with big data in cloud environments
– Experience in backend development using Java
– Experience with data APIs
– Architecture knowledge or experience
- Benefits:
– 100% of salary from the first day at work
– Insurance plan based on full salary + 13th-month salary + performance bonus
– 18 paid leave days per year
– Medical benefits for employee and family
– Fast-paced, flexible, and multinational working environment, with opportunities to travel onsite (in 50 countries)
– Internal training (technical, functional, and English)
– Working Monday to Friday
- Preferred Language for Application: English