Job description Posted 11 November 2022

Working as part of an Agile Scrum team, you will:


  • Support the implementation of data engineering use cases
  • Map, analyse and document business requirements
  • Work closely with the Product Owner to fully understand client and stakeholder needs, and develop epics and user stories
  • Participate in Agile events, including daily scrums, sprint planning, backlog refinement and retrospectives
  • Drive client sprint demos alongside the scrum team
  • Design, implement and maintain reliable and scalable data and analytics infrastructure, including industrial-scale data and ML pipelines on Azure and AWS data platforms and services; build data ingestion and publishing pipelines; and develop and provision data sets and ML models for wide-scale access


Essential

  • Experience of Big Data technologies – e.g. Hadoop, Hive
  • Experience of MPP (Massively Parallel Processing) databases – e.g. Teradata, Netezza
  • Understanding of Big Data challenges – e.g. large table sizes (depth/width), even distribution of data
  • Programming experience – SQL, Python, PySpark
  • Data pipelining skills – e.g. data blending
  • Data science tooling – e.g. R, SAS
  • Experience working with ETL integration tools – e.g. SSIS, Informatica
  • Visualisation experience
  • Data management experience – e.g. data quality, security
  • Experience of working in a cloud environment (less relevant)
  • Development/delivery methodologies – Agile, SDLC
  • Experience working in a geographically distributed team


Desirable

  • An avid learner, initiative-taker, and team player
  • Knowledge of current technology trends, especially in areas such as AWS, Azure and AI


All strong Data Engineers with 3+ years of experience in the above skills are invited to apply. Rates are competitive, and the contract is intended to run into 2023.