At Efficio, we're not just a consulting firm; we're a team of dedicated experts committed to transforming procurement and supply chain functions. Our mission is to drive sustainable, measurable value for our clients through innovative solutions and deep industry insights. Join us and be part of a dynamic environment where your skills and ideas make a real impact.
Efficio is the world's largest specialist procurement and supply chain consultancy, with offices across Europe, North America, and the Middle East. We're a diverse team of over 1,000 individuals, representing more than 60 different nationalities and speaking 40+ languages – and we're continuing to grow rapidly!
We believe we can make the world a better place by helping businesses buy better. Buying better isn't just about saving money; it's also about helping businesses improve the world around them by buying products and services from green, ethical, sustainable, diverse, and inclusive suppliers. As the world's largest procurement and supply chain consultancy, we do that by combining our procurement expertise with our powerful technology and data to help our customers make better purchasing decisions. Data is a fundamental ingredient in that decision-making process, and we're now investing heavily in building our in-house data skills, expertise, and infrastructure to rival the best in the world. So, if you'd like to make the world a better place by helping businesses buy better, apply here. Please note this role is a non-consulting position.
In this role, you will design and implement data pipelines on AWS, then maintain and extend them under the guidance of business owners. You will work alongside Data Scientists, Data Engineers, and others as part of our Product and Digital team. This role offers the chance to join Efficio at a pivotal point in our data journey, and to shape and influence our data landscape.
What will you be doing?
- Collect business requirements and translate them into robust and scalable solutions
- Demonstrate an understanding of DataOps and develop use-case-agnostic ETL pipelines
- Take a proactive approach to work and become a go-to expert for data cloud migration
- Work with state-of-the-art tooling on AWS, including Redshift, S3, and ECS
- Collaborate with our infrastructure engineers to provision the next generation of tools
- Develop a sense of ownership to deliver high-quality outcomes for the business
- Make complex data more accessible, understandable, and usable by the organisation

Who we're looking for:
We'd love to hear about any additional skills or experiences you bring to the table. We're particularly interested in:
- An advanced degree in software engineering, computer science, a related technical field, or relevant work experience
- Proven experience building robust and reliable data pipelines that deal with a wide variety of data sources
- Strong experience in Python and knowledge of data wrangling methodologies and common packages
- Proficiency with AWS tools (e.g., S3, Athena, Glue, Lambda, Kinesis) and AWS CI/CD, and a readiness to learn and work with them
- Experience with Docker, Airflow, and SQL (ideally Postgres)
- A fundamental understanding of the programming landscape (e.g., APIs, SQL, database management)
- Great communication skills and comfort working collaboratively in cross-functional teams
- Ability to manage your own workload with a customer focus and a solution-oriented, can-do attitude