What you'll do

The Big Data Engineer plays a key role in the Digital Engineering team by supporting the strategy and deliverables related to the Cloud Platform and Data Engineering.
The Big Data Engineer is a core member of the agile teams delivering data pipelines and capabilities within the organization through building and automating data flows, and provides specialised guidance and delivers through self and others to:

- Design, develop and integrate ETL/ELT data pipelines that bring the necessary data from several sources for analysis and for Technology actions;
- Build applications and products that make use of large volumes of data and generate outputs that enable actions creating incremental value;
- Deliver and implement core capabilities (frameworks, platform, development infrastructure, documentation, guidelines and support) to speed up delivery in the Big Data Programme, assuring quality, performance and alignment with the technology blueprint and patterns;
- Support stakeholders and functions in obtaining business value from operational data;
- Design and produce high-performing, stable end-to-end data applications that perform cost-efficient, complex processing of massive volumes of batch and streaming data on a multi-tenant big data platform in the cloud;
- Ingest and automate the flow of the necessary data from local and group sources onto the GCP platform;
- Ensure delivery of solution and use case enablement, GCP project and resource enablement, data source ingestion for Networks sources, application production rollouts, and code/execution optimisation for big data solutions;
- Work with key stakeholders such as the Group Big Data/Neuron team, ADS, ACoE, local market IT and Big Data teams to define the strategy for evolving the Big Data capability, including solution architecture decisions aligned with the platform architecture;
- Investigate and drive adoption of new technologies, identifying where they can bring benefits;
- Ensure common data architecture, structure and definitions, data cleansing and data integrity;
- Support data security and privacy, and thorough documentation processes.

Who you are

- Degree in Computer Science or a related field
- 3 years of hands-on Big Data Engineering experience, ideally in a cloud computing environment
- Knowledge of cloud providers such as GCP, AWS or Azure
- Expertise in developing and optimising Apache Spark
- Skilled in OOP languages (such as Python, Java or Scala), SQL and bash scripting
- Experience implementing large-scale batch and streaming Big Data solutions
- Understanding of big data ecosystems and tools such as Hadoop, Airflow, NiFi, Kafka, and various data formats
- Background in telecommunication networks and their related data sources, including their strengths, weaknesses, semantics, and formats
- Portuguese/English language proficiency