Data Engineer

Job Category:

BPO / KPO / Shared Services

Client Overview:

Discover amazing career opportunities with a company that is all about helping people achieve their dreams and aspirations!


Our client is a leading insurance, wealth, and asset management group with global headquarters in Canada, operating regionally in Asia and the United States with 35,000 employees across the globe. Its global shared service centres in Quezon City and Lapu-Lapu City provide administrative, finance, investments, analytics, professional services, support, contact centre, and information technology services to companies around the world, putting customers at the forefront.


The company aims to attract, develop and retain a diverse workforce and to foster an inclusive work environment that embraces the strength of cultures and individuals.

Client Industry:


Job Description and Qualification:

Job Description:

  • Designs and implements data architectures in production environments
  • Implements data orchestration pipelines, data sourcing, cleansing, augmentation and quality control processes
  • Translates business needs into data architecture solutions
  • Contributes to overall solution, integration and enterprise architectures
  • Develops data landscape modernization architectures and roadmaps
  • Ensures data quality standards in line with the data management framework
  • Deploys ETL jobs in production

Knowledge & skills

  • Understanding of relational and data warehousing database technology, with working experience in at least one major database platform (e.g., Azure Data Lake, Hadoop, Oracle, SQL Server, Teradata, MySQL, or PostgreSQL)
  • Practical experience with big data processing frameworks and techniques such as HDFS, MapReduce, storage formats (Avro, Parquet), stream processing, etc.
  • Extensive experience in Azure Data Ecosystem (Azure Data Lake, Azure Data Factory, Azure Synapse, Azure Analysis Services)
  • Experience in ETL tools such as Talend, Informatica
  • Solid working knowledge of data processing tools using SQL, Spark, Python or similar open source and commercial technologies
  • Knowledge of Java/Scala, especially in relation to big data open-source software, preferred; knowledge of NiFi a plus
  • Knowledge of non-relational (Cassandra, MongoDB) databases preferred
  • Predictive analytics and machine learning experience (scikit-learn, TensorFlow, MLlib, recommendation systems) preferred
  • Experience with integrating to back-end/legacy environments
  • Experience with industries such as financial institutions, insurance, high tech, and retail/CPG
  • Knowledge of and familiarity with machine learning model applications and production pipelines
  • Good organizational and problem-solving abilities that enable you to manage through creative abrasion
  • Good verbal and written communication; effectively articulates technical vision, possibilities, and outcomes
  • BSc in Computer Science, Statistics, Informatics, Information Systems, Mathematics, or an equivalent quantitative field preferred
Shift: Midshift