London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
and analysing A/B tests. Strong storytelling and stakeholder-management skills. Full UK work authorization. Desirable: Familiarity with cloud data platforms (Snowflake, Redshift, BigQuery) and ETL tools (dbt, Airflow). Experience with loyalty-programme analytics or CRM platforms. Knowledge of machine-learning frameworks (scikit-learn, TensorFlow) for customer scoring. Technical Toolbox. Data & modeling: SQL, Python/R, pandas, scikit-learn. Dashboarding: Tableau or Power BI. ETL & warehousing: dbt, Airflow, Snowflake/Redshift/BigQuery. Experimentation: A/B testing platforms (Optimizely, VWO, …)
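As an illustration of the A/B-testing side of this role, here is a minimal sketch of a two-variant significance check in Python. The counts and sample sizes are invented, and statsmodels is assumed to be available:

```python
# Hypothetical two-variant conversion test; all numbers are made up.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 468]   # control, variant conversions (assumed)
visitors = [10000, 10000]  # users exposed to each arm (assumed)

# Two-sided z-test on the difference in conversion rates
stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

print(f"control rate: {conversions[0] / visitors[0]:.2%}")
print(f"variant rate: {conversions[1] / visitors[1]:.2%}")
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```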
Dorset, England, United Kingdom Hybrid / WFH Options
Aspire Personnel Ltd
stakeholders, understanding and translating their needs into technical requirements. Possess outstanding communication and interpersonal skills, facilitating clear and effective collaboration within and outside the team. Desirables: Familiarity with the Apache Airflow platform. Basic knowledge of BI tools such as Power BI to support data visualization and insights. Experience with version control using Git for collaborative and organized code…
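For context on the Apache Airflow desirable above, a minimal DAG sketch in Airflow 2.x style. The DAG id, schedule, and task body are illustrative only:

```python
# Minimal Apache Airflow DAG sketch; names and schedule are invented.
# The `schedule` argument assumes Airflow >= 2.4 (older versions use
# `schedule_interval`).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for a real extract/load step
    print("pulling source data...")

with DAG(
    dag_id="example_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```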
Are you a problem-solver with a passion for data, performance, and smart engineering? This is your opportunity to join a fast-paced team working at the forefront of data platform innovation in the financial technology space. You'll tackle…
products or platforms. Strong knowledge of SQL and experience with large-scale relational and/or NoSQL databases. Experience testing data pipelines (ETL/ELT), preferably with tools like Apache Airflow, dbt, Spark, or similar. Proficiency in Python or a similar scripting language for test automation. Experience with cloud platforms (AWS, GCP, or Azure), especially in data-related services…
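A hedged sketch of the pipeline-testing skill this listing asks for: a pytest unit test around a toy pandas transform. The `clean_orders` function is hypothetical, invented purely for illustration:

```python
# Sketch of a data-pipeline test in pytest: validate a transform's
# output contract. `clean_orders` is a hypothetical transform under test.
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    # Toy transform: drop rows with null ids, normalise currency codes
    out = df.dropna(subset=["order_id"]).copy()
    out["currency"] = out["currency"].str.upper()
    return out

def test_clean_orders_drops_null_ids_and_normalises_currency():
    raw = pd.DataFrame(
        {"order_id": [1, None, 3], "currency": ["gbp", "usd", "eur"]}
    )
    result = clean_orders(raw)
    assert result["order_id"].notna().all()          # no null keys survive
    assert set(result["currency"]) == {"GBP", "EUR"}  # codes normalised
```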
DATA ENGINEER - DBT/AIRFLOW/DATABRICKS 4-MONTH CONTRACT £450-550 PER DAY OUTSIDE IR35 This is an exciting opportunity for a Data Engineer to join a leading media organisation working at the forefront of data innovation. You'll play a key role in designing and building the data infrastructure that supports cutting-edge machine learning and LLM … to accelerate delivery of critical pipelines and platform improvements. THE ROLE You'll join a skilled data team to lead the build and optimisation of scalable pipelines using DBT, Airflow, and Databricks. Working alongside data scientists and ML engineers, you'll support everything from raw ingestion to curated layers powering LLMs and advanced analytics. Your responsibilities will include: Building and maintaining production-grade ETL/ELT workflows with DBT and Airflow. Collaborating with AI/ML teams to support data readiness for experimentation and inference. Writing clean, modular SQL and Python code for use in Databricks. Contributing to architectural decisions around pipeline scalability and performance. Supporting the integration of diverse data sources into the platform. Ensuring data quality, observability…
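As a rough sketch of the dbt-plus-Airflow pattern this contract describes, an illustrative DAG that runs an ingestion script and then `dbt build`. The paths, ids, and ingest script are all invented:

```python
# Illustrative Airflow DAG chaining ingestion and a dbt build, in the
# spirit of the DBT/Airflow workflows described above. All paths and
# ids are assumptions, not the client's actual setup.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw",
        bash_command="python /opt/pipelines/ingest.py",  # hypothetical script
    )
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )
    ingest >> dbt_build  # run dbt only after ingestion succeeds
```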
SQL, craft new features. Modelling sprint: run hyper-parameter sweeps or explore heuristic/greedy and MIP/SAT approaches. Deployment: ship a model as a container, update an Airflow (or Azure Data Factory) job. Review: inspect dashboards, compare control vs. treatment, plan next experiment. Tech stack: Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow). SQL (Redshift, Snowflake or similar). AWS SageMaker → Azure ML migration, with Docker, Git, Terraform, Airflow/ADF. Optional extras: Spark, Databricks, Kubernetes. What you'll bring: 3-5+ years building optimisation or recommendation systems at scale. Strong grasp of mathematical optimisation (e.g., linear/integer programming, meta-heuristics) as well as ML. Hands-on cloud ML experience (AWS or Azure). Proven track record…
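To make the "MIP approaches" concrete, a toy integer programme in Python: pick items to maximise value under a weight cap. All numbers are invented, and the `integrality` argument requires SciPy 1.9 or newer:

```python
# Toy knapsack-style integer programme, illustrating the MIP flavour of
# optimisation mentioned above. Values, weights, and capacity are made up.
import numpy as np
from scipy.optimize import linprog

values = np.array([10, 13, 7, 8])
weights = np.array([3, 4, 2, 3])
capacity = 7

res = linprog(
    c=-values,                         # linprog minimises, so negate to maximise
    A_ub=weights.reshape(1, -1),       # total weight must not exceed capacity
    b_ub=[capacity],
    bounds=[(0, 1)] * len(values),     # pick/skip decision per item
    integrality=np.ones(len(values)),  # 1 = integer-constrained variable
    method="highs",
)
print("picked items:", np.flatnonzero(res.x > 0.5), "value:", -res.fun)
```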
data accessibility and quality. Key Skills & Experience Required – About You: Essential: Strong proficiency in Python, SQL, and Jinja. Experience with cloud platforms (preferably AWS). Familiarity with orchestration tools (ideally Airflow). Experience building CI/CD workflows (ideally using GitHub Actions). Up-to-date knowledge of modern data engineering practices. Proven ability to design and develop scalable data pipelines. Strong…
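Since this stack pairs Python, SQL, and Jinja, a minimal sketch of rendering a parameterised SQL template with jinja2. The table and date values are placeholders:

```python
# Minimal Jinja-templated SQL sketch; table and date are placeholder values.
from jinja2 import Template

template = Template(
    """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM {{ table }}
    WHERE order_date >= '{{ start_date }}'
    GROUP BY customer_id
    """
)

# Render the template with concrete (hypothetical) parameters
sql = template.render(table="analytics.orders", start_date="2024-01-01")
print(sql)
```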
work. 6-month contract, likely to extend. Hybrid - 2 days a week onsite in London (likely to go remote). Active SC. Tech stack: Python, SQL, Jinja, AWS or Azure, Airflow, GitHub Actions, Terraform. You'll be designing scalable pipelines, building CI/CD workflows, and collaborating with cross-functional teams. Apply now with your CV and availability.
London, South East, England, United Kingdom Hybrid / WFH Options
US TECH SOLUTIONS LIMITED
robust data solutions. Qualifications: 5+ years of experience with SQL and Python. Strong background in data modeling and data visualization (Tableau, MicroStrategy, etc.). Hands-on experience with Azure Data Factory, Airflow, or similar ETL tools. Ability to work independently while collaborating with cross-functional teams. Experience with data analysis and metric definition is a plus. Nice to Have: Prior experience in support…
standardized Spark SQL and PySpark code for data transformations, ensuring data integrity and accuracy across the pipeline. Automate pipeline orchestration using Databricks Workflows or integration with external tools (e.g., Apache Airflow, Azure Data Factory). Data Ingestion & Transformation: Build scalable data ingestion processes to handle structured, semi-structured, and unstructured data from various sources (APIs, databases, file systems, …)
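A hedged sketch of the standardized PySpark transform style described above: read a semi-structured source, clean it, and write a curated table. Paths, columns, and table names are illustrative, and the Delta format assumes a Databricks-style environment:

```python
# Illustrative PySpark ingest-and-curate step; all paths, columns, and
# table names are assumptions. Writing Delta assumes Databricks or a
# cluster with delta-lake configured.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_orders").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")  # semi-structured source

curated = (
    raw.dropDuplicates(["order_id"])                       # de-duplicate on key
       .withColumn("order_ts", F.to_timestamp("order_ts"))  # enforce types
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())               # guard integrity
)

curated.write.mode("overwrite").format("delta").saveAsTable("curated.orders")
```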
a senior role. Deep expertise with AWS services (S3, Glue, Lambda, SageMaker, Redshift, etc.). Strong Python and SQL skills; experience with PySpark a bonus. Familiarity with containerization (Docker), orchestration (Airflow, Step Functions), and infrastructure as code (Terraform/CDK). Solid understanding of the machine learning model lifecycle and best practices for deployment at scale. Excellent communication skills and the ability…