BFSI | Cloud Data Engineer | Up to $4,000 & Joining Bonus

  • Experience Range: At least 4 years
  • Job Location: HN/HCMC (HCM preferred)
  • Duty & Responsibilities:

    You will be responsible for building and running data processing pipelines on Google Cloud Platform (GCP).
    – Work with implementation teams from concept to operations, providing deep technical expertise to successfully deploy large-scale data solutions in the enterprise using modern data/analytics technologies on GCP
    – Work with the data team to use GCP efficiently to analyze data, build data models, and generate reports/visualizations
    – Integrate massive datasets from multiple data sources for data modelling
    – Implement DevOps automation for all stages of the data pipelines, from development through to production deployment
    – Formulate business problems as technical data problems, ensuring key business drivers are captured in collaboration with product management
    – Design pipelines and architectures for data processing
    – Extract, load, transform, clean and validate data

  • Requirements:

    * Must-have requirements:
    – At least 2 years of experience with ETL tools (e.g., Informatica, Apache Beam, Kafka)
    – At least 1 year of experience with Google Cloud Platform (BigQuery, Dataproc, Dataflow)
    – Relevant experience in at least one scripting/programming language: Java, Python, or Go
    – Relevant experience with declarative CI/CD (Jenkins, Azure DevOps)
    – Relevant experience with databases, both SQL and NoSQL
    – A strong engineering mindset: automating tasks, identifying use cases and test cases, improving the system, and handling PR/incident resolution and deployments
    * Good to have:
    – Relevant experience with containerization and orchestration: Docker and/or Kubernetes
    – Relevant experience with Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation, Azure ARM templates)
    – Knowledge of or experience with the Big Data / Hadoop ecosystem, including HDFS, MapReduce, YARN, HBase, ZooKeeper, Spark, Pig, Hive, etc.
    – Knowledge of or experience with Hadoop distributions such as Cloudera and Hortonworks
    – Relevant experience in data management:
    o Data Governance
    o Data Architecture
    o Data Modelling
    o Data Quality
    o Data Integration

  • Benefits:

    • 100% of full salary from the 1st day at work
    • Insurance plan based on full salary + 13th-month salary + performance bonus
    • 18 paid leave days per year (12 annual + 6 personal)
    • Medical benefits for employees and their families
    • A fast-paced, flexible, multinational working environment, with opportunities for onsite travel (to 49 countries)
    • Internal training (technical, functional, and English)
    • Working Mondays to Fridays

  • Preferred Language for Application: English
