Data Engineer (Pioneer Team)

Job Category:

Information Tech

Job Level:

Professional

Open Date:

10-Dec-2018

Location:

Makati City

Close Date:

28-Feb-2019

Client Industry:

Management Services

Job Description and Qualification:


Summary


The Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection across different teams.


Job Description

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and other technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into critical information such as operational efficiency and key business performance metrics.
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools that help analytics and other team members build and optimize processes and products.
  • Work closely with data and analytics team members.

Qualifications

  • Graduate of Computer Science or a related course
  • 4+ years of experience in a Data Engineer role
  • Advanced SQL knowledge, including query authoring and hands-on experience with a variety of relational databases
  • Experience building and optimizing 'big data' pipelines, architectures, and data sets
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Strong analytic skills related to working with unstructured datasets
  • Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management
  • Experience manipulating, processing, and extracting value from large, disconnected datasets
  • Experience with big data tools, relational SQL and NoSQL databases, and data pipeline and workflow management tools
  • Experience with stream-processing systems
  • Experience with object-oriented/functional scripting languages such as Python, Java, C++, or Scala

Salary:

0.00