The company:
Our client is the world market leader in digital advertising technology, with over 5,000 employees across France and the EU. DTS (Digital to Store) is a subsidiary responsible for developing, testing and deploying Data Science models that create innovative new products and upgrade existing ones, ranging from Real-Time Bidding (RTB) click optimisation to user profiling.
The context:
They are looking for candidates with data engineering experience to develop models that exploit search-engine request data and mobile GPS data as they expand their suite of online advertising offerings.
The role is based at DTS’ charming offices in Soho, London.
The candidate must be able to work in a highly dynamic environment. The successful candidate will gain excellent exposure to new technologies, as well as the opportunity to work in a technically challenging, delivery-focused team.
The missions:
As a Data Scientist within our team, your responsibilities will include:
► Work in a startup-esque environment and take on the responsibility that comes from working as part of a small, growing team.
► Exploit our big data clusters (Spark, Spark Streaming, Singularity, Kafka, ElasticSearch, Python), executing jobs on hundreds of nodes at once.
► Design big data solutions, ensuring models adhere to strategy and meet the requirements of real business problems.
► Work with developers to deploy data science models.
► Collaborate with a team of data scientists and engineers, contributing regularly to our shared knowledge base (Confluence) and code repository (Git).
► Collaborate with Marketing to ensure data science solutions are designed to solve real business problems.
► Be a committed team member who shares information, knowledge, and experience openly.
The profile required:
• Master’s degree, PhD or equivalent in Mathematics, Statistics or IT
• Min. 2 years’ experience in a programming or analytical role
• Experience contributing clean code in a team environment
• Previous experience using MapReduce logic on a multi-node cluster
• Clear and practical communicator, verbally and in writing
• Strong problem-solving ability
• Can work autonomously and find innovative solutions to unsolved problems
• Show initiative, flexibility and resourcefulness
• Ability to take on responsibility for quality, completeness and accuracy of work
• Comfortable using collaboration tools such as Git and Confluence
• Attention to detail
• Experience deploying a data model into production is a strong plus
• Big data technologies: ElasticSearch, Kafka, Singularity
• SQL, R, Java, Tableau
• Experience contributing to a team knowledge base such as Confluence
Contact details for applying: firstname.lastname@example.org