What we are looking for
You’ll be joining a major player with a large-scale data science laboratory, where data engineering plays a central role.
In an innovative, AI-oriented environment, you will contribute to building platforms that provide reliable, usable data for advanced use cases in machine learning and artificial intelligence.
You’ll be working in a modern cloud ecosystem with multidisciplinary teams (data scientists, analysts, architects).
We are looking for someone who can move from listening and analysis into a leadership role. We expect our consultants to be persuasive, able to guide our clients and to support design decisions alongside development teams.
Above all, we are seeking individuals who embody an intrapreneurial spirit and who want to be part of a young and fast-growing company.
Do these values resonate with you? Do you want to work autonomously in a fun, relaxed atmosphere? Then read on, because this job may be for you 👋!
What we can achieve together
- Design, develop and maintain high-performance, secure data pipelines
- Help ingest data from multiple sources
- Set up complex transformations to prepare data
- Contribute to the quality and reliability of datasets used by data teams
- Help define and implement best practices in data engineering
- Collaborate with data, product and architecture teams in an Agile environment
- Play the role of technical lead/referent on certain subjects
Experience required
- Master's degree (Bac+5) in computer science, software engineering or equivalent
- Minimum 6 years’ experience in data engineering / software engineering
- Excellent command of Python
- Significant experience with PySpark
- ETL / Data Warehouse experience
- Good understanding of development principles (OOP, SOLID)
- Knowledge of machine learning / AI (a plus)
- Ability to work in a collaborative, Agile environment
- French/English bilingualism appreciated in an international context
Technical environment
- Cloud: AWS
- Data: Databricks, Snowflake
- Languages: Python, PySpark
- Methodology: Agile (Scrum)
What makes the difference
- Project at the heart of data & AI issues
- Modern, scalable technical environment
- High exposure to advanced use cases (ML / GenAI)
- Potential to have a structuring impact on data practice




