Make an impact by working for sectors where technology is the enabler, everything is ground-breaking and there's a constant need to be innovative.
Be part of a team that combines business knowledge, technological edge and design expertise. Our different backgrounds and know-how are key in developing solutions and experiences for digital clients.
Face challenges and learn other ways of thinking and seeing the world - there's always room for your energy and creativity.
About the role
The Data Engineer is responsible for building and maintaining data platforms and recognizes the importance of data in the areas where it is key to the organization's success. Keeping an eye on the big picture while knowing the details of the business is decisive for this role.
This role is focused on designing, developing, and maintaining the data platform required for data storage, processing, orchestration, and analysis.
The mission involves implementing scalable, performant data pipelines and data integration solutions, agnostic of data sources and technologies, to ensure efficient data flow and high data quality, enabling data scientists, analysts, and other stakeholders to access and analyze data effectively.
As a part of your job, you will:
Design, build, and maintain scalable data platforms;
Collect, process, and analyze large and complex data sets from various sources;
Develop and implement data processing workflows using frameworks such as Apache Spark and Apache Beam;
Collaborate with cross-functional teams to ensure data accuracy and integrity;
Ensure data security and privacy through proper implementation of access controls and data encryption;
Extract data from various sources, including databases, file systems, and APIs;
Monitor system performance and optimize for high availability and scalability.
What are we looking for?
Experience with cloud platforms and services for data engineering, particularly Azure;
Proficiency in programming languages like Python, Java, or Scala;
Experience with big data tools such as Spark, Flink, Kafka, Elasticsearch, Hadoop, Hive, Sqoop, Flume, Impala, Kafka Streams, Kafka Connect, Druid, etc.;
Knowledge of data modeling and database design principles;
Familiarity with data integration and ETL tools (e.g., Apache Kafka, Talend);
Understanding of distributed systems and data processing architectures;
Strong SQL skills and experience with relational and NoSQL databases;
Familiarity with cloud data services such as AWS S3 and Azure Data Factory;
Experience with version control tools such as Git.
Personal traits:
Ability to adapt to different contexts, teams, and clients
Teamwork skills but also sense of autonomy
Motivation for international projects and willingness to travel when required
Willingness to collaborate with other players
Strong communication skills
We want people who like to roll up their sleeves and open their minds. Think this is you? Come join the team!