Slough, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
… pandas, xarray, SciPy/PyMC/PyTorch, or similar). Experience validating models with historical data and communicating results to non-specialists. Exposure to real-time data engineering (Kafka, Airflow, dbt). Track record of turning research code into production services (CI/CD, containers, etc.). Strong SQL and data-management skills; experience querying large analytical databases (Snowflake highly desirable, but …
Slough, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
… years' experience gained in a Hedge Fund, Investment Bank, FinTech or similar. Expertise in Python and SQL, and familiarity with relational and time-series databases. Exposure to Airflow and dbt, as well as Snowflake, Databricks or other cloud data warehouses, preferred. Experience implementing data pipelines from major financial market data vendors (Bloomberg, Refinitiv, FactSet, …). SDLC and DevOps: Git …
Manage deployments with Helm and configuration in YAML. Develop shell scripts and automation for deployment and operational workflows. Work with Data Engineering to integrate and manage data workflows using Apache Airflow and DAG-based models. Perform comprehensive testing, debugging, and optimization of backend components. Required Skills: Bachelor's degree in Computer Science, Software Engineering, or a related field … and YAML for defining deployment configurations and managing releases. Proficiency in shell scripting for automating deployment and maintenance tasks. Understanding of DAG (Directed Acyclic Graph) models and experience with Apache Airflow for managing complex data processing workflows (see the sketch below). Familiarity with database systems (SQL and NoSQL) and proficiency in writing efficient queries. Solid understanding of software development best practices, including …
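As an illustration of the DAG-based workflows this listing describes, here is a minimal Airflow sketch; the DAG id, task names, and shell command are hypothetical assumptions, not taken from the posting.

```python
# Minimal Airflow DAG sketch: a daily ingest -> transform pipeline.
# DAG id, task names, and the bash command are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def transform():
    """Placeholder transform step; real logic would live here."""
    print("transforming ingested data")


with DAG(
    dag_id="daily_ingest",           # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest",
        bash_command="echo 'pulling source files'",  # stand-in for a real ingest script
    )
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    ingest >> transform_task  # DAG edge: transform runs only after ingest succeeds
```

The `>>` operator is how Airflow expresses the DAG edges the listing refers to: each task runs only once its upstream dependencies have completed.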
… in Python with libraries like TensorFlow, PyTorch, or Scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL, and data modeling. Familiarity with cloud platforms (AWS, Azure, GCP) for deploying ML and data solutions. Knowledge of MLOps …
… SQL, craft new features. Modelling sprint: run hyper-parameter sweeps (sketched below) or explore heuristic/greedy and MIP/SAT approaches. Deployment: ship a model as a container, update an Airflow (or Azure Data Factory) job. Review: inspect dashboards, compare control vs. treatment, plan the next experiment. Tech stack: Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow); SQL (Redshift, Snowflake or similar); AWS SageMaker → Azure ML migration, with Docker, Git, Terraform, Airflow/ADF. Optional extras: Spark, Databricks, Kubernetes. What you'll bring: 3-5+ years building optimisation or recommendation systems at scale. Strong grasp of mathematical optimisation (e.g., linear/integer programming, meta-heuristics) as well as ML. Hands-on cloud ML experience (AWS or Azure). Proven … Terraform. SQL mastery for heavy-duty data wrangling and feature engineering. Experimentation chops: offline metrics, online A/B test design, uplift analysis. Production mindset: containerise models, deploy via Airflow/ADF, monitor drift, automate retraining. Soft skills: clear comms, concise docs, and a collaborative approach with DS, Eng & Product. Bonus extras: Spark/Databricks, Kubernetes, big-data panel …
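To make the "modelling sprint" step concrete, here is a minimal hyper-parameter sweep sketch using scikit-learn; the synthetic dataset, model choice, and parameter grid are illustrative assumptions rather than anything specified in the posting.

```python
# Minimal hyper-parameter sweep sketch with scikit-learn.
# Dataset, model, and parameter grid are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for features crafted from SQL extracts.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 10, 30],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,               # 5-fold cross-validation per candidate
    scoring="roc_auc",  # offline metric; an online A/B test would follow
    n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
```

The offline metric here stands in for the "offline metrics" the listing mentions; the winning configuration would then be containerised and promoted via the Airflow/ADF job described above.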
Bracknell, England, United Kingdom Hybrid / WFH Options
Circana, LLC
In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a desire to make a significant impact, we encourage you to apply! Job Responsibilities Data Engineering & Data Pipeline Development: Design, develop, and optimize scalable data workflows using Python, PySpark, and Airflow. Implement real-time and batch data processing using Spark. Enforce best practices for data quality, governance, and security throughout the data lifecycle. Ensure data availability, reliability and performance through … data processing workloads. Implement CI/CD pipelines for data workflows to ensure smooth and reliable deployments. Big Data & Analytics: Build and optimize large-scale data processing pipelines using Apache Spark and PySpark (a brief sketch follows). Implement data partitioning, caching, and performance tuning for Spark-based workloads. Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning …
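For a flavour of the PySpark batch workloads this role describes, here is a minimal read-aggregate-write sketch; the storage paths, table name, and column names are hypothetical assumptions.

```python
# Minimal PySpark batch-transform sketch: read, aggregate, write partitioned output.
# Input/output paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_sales_rollup").getOrCreate()

# Hypothetical raw source; in practice this might be ADLS/abfss paths on Azure.
sales = spark.read.parquet("s3://example-bucket/raw/sales/")

daily = (
    sales
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "store_id")
    .agg(F.sum("amount").alias("revenue"))
)

# Partitioning by day lets downstream readers prune files efficiently --
# one of the performance-tuning practices the listing calls out.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/curated/daily_sales/"
)
spark.stop()
```

A job like this would typically be wrapped in an Airflow task so that scheduling, retries, and data-availability checks are handled by the orchestrator rather than the job itself.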
… technologies – Azure, AWS, GCP, Snowflake, Databricks. Must have hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms), specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF etc. Excellent consulting experience and ability to design and build solutions, actively contributing to RfP responses. Ability to be a SPOC for all technical discussions across industry groups. Excellent design experience, with entrepreneurship skills to own and lead solutions for clients. Ability to define monitoring, alerting and deployment strategies for various services. Experience providing … skills. A minimum of 5 years' experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Designing Databricks-based solutions for …
… driven decision-making. What we'd like to see from you: 3–5 years of experience in data integration, orchestration, or automation roles. Solid experience with orchestration tools (e.g., Apache Airflow, MuleSoft, Dell Boomi, Informatica Cloud). Familiarity with cloud data platforms (e.g., AWS, Microsoft Azure, Google Cloud Platform) and related data movement technologies, including AWS Lambda and …
… Concept. Contributing to AI infrastructure. Building reliable, scalable, and flexible systems. Influencing opinion and decision-making across AI and ML. Skills: Python; SQL/Pandas/Snowflake/Elasticsearch; Airflow/Spark; familiarity with GenAI models/libraries. Requirements: 6+ years of relevant software engineering experience post-graduation. A degree (ideally a Master's) in Computer Science, Physics, Mathematics …
… features. Rapid Prototyping: create interactive AI demos and proofs-of-concept with Streamlit, Gradio, or Next.js for stakeholder feedback; MLOps & Deployment: implement CI/CD pipelines (e.g., GitLab Actions, Apache Airflow), experiment tracking (MLflow; a minimal example follows), and model monitoring for reliable production workflows; Cross-Functional Collaboration: participate in code reviews, architectural discussions, and sprint planning to deliver features end-to-end …
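As a concrete illustration of the experiment tracking this listing mentions, here is a minimal MLflow sketch; the experiment name, parameters, and metric values are hypothetical assumptions.

```python
# Minimal MLflow experiment-tracking sketch.
# Experiment name, params, and metric values are illustrative assumptions.
import mlflow

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    # Hyper-parameters logged once per run.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 300)

    # In a real workflow these values would come from evaluating the model.
    mlflow.log_metric("val_auc", 0.91)
    mlflow.log_metric("val_loss", 0.23)
```

Runs logged this way show up in the MLflow UI for comparison across experiments, which is what makes retraining and model-promotion decisions auditable in a production workflow.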
… like ICE, CME, Reuters, Bloomberg). Candidates should have 3-6 years of relevant experience and ideally some exposure to commodities data sets. Strong Python skills and exposure to Airflow are essential. Please apply if you want to be part of this unique build-out. …
… platforms including AWS, Azure or SAP. ETL/ELT development, data modeling, data integration & ingestion, data manipulation & processing. Version Control & DevOps: skilled in GitHub, GitHub Actions, Azure DevOps. Glue, Airflow, Kinesis, Redshift. SonarQube, PyTest. If you're ready to take on a new challenge and shape data engineering in a trading-first environment, submit your CV today to be …
… core concepts in ML, data science and MLOps. Nice-to-have: built agentic workflows/LLM tool-use. Experience with MLflow, WandB, LangFuse, or other MLOps tools. Experience with Airflow, Spark, Kafka or similar. Why Plexe? Hard problems: we're automating the entire ML/AI lifecycle from data engineering to insights. High ownership: first 5 engineers write the …
Lumi Space is empowering the future prosperity of earth - making space scalable and sustainable using ground-based laser systems. We work with global companies and institutions to build products and services to precisely track satellites and remove the dangers of …
… especially in retail and consumer sectors, and how data supports operational outcomes. Strong coding ability with SQL and Python, as well as experience working with data orchestration tools like Airflow or Dataform. Commercial experience with Spark and Databricks. Familiarity with leading integration and data platforms such as MuleSoft, Talend, or Alteryx. A natural ability to mentor others and provide …