architecture you will:
* Build and orchestrate data pipelines across Snowflake and AWS environments.
* Apply data modeling, warehousing, and architecture principles (Kimball/Inmon).
* Develop pipelines using Python, Spark, and SQL; integrate APIs for seamless workflows.
* Support Machine Learning and AI initiatives, including NLP, Computer Vision, Time Series, and LLMs.
* Implement MLOps, CI/CD pipelines, data testing …

Skills & Experience
* Strong experience in data pipeline development and orchestration.
* Proficient with cloud platforms (Snowflake, AWS fundamentals).
* Solid understanding of data architecture, warehousing, and modeling.
* Programming expertise: Python, Spark, SQL, API integration.
* Knowledge of ML/AI frameworks, MLOps, and advanced analytics concepts.
* Experience with CI/CD, data testing frameworks, and versioning strategies.
* Ability to work effectively …
London, South East, England, United Kingdom Hybrid/Remote Options
Adecco
* Strong experience with orchestration tools (Airflow, Prefect, Dagster).
* Expertise in Docker and Kubernetes.
* Solid understanding of CI/CD principles and tooling.
* Familiarity with open-source data technologies (Spark, Kafka, PostgreSQL).
* Knowledge of Infrastructure as Code (Terraform, Ansible).
* Understanding of data architecture principles.
* Experience with monitoring tools like Grafana and Prometheus.
* Strong leadership skills to guide … on the client's supplier list for this position.

Keywords: Lead DataOps Engineer, DataOps, Data Pipeline Automation, Airflow, Prefect, Dagster, Docker, Kubernetes, EKS, AKS, CI/CD, Terraform, Ansible, Grafana, Prometheus, Spark, Kafka, PostgreSQL, Infrastructure as Code, Cloud Data Engineering, Hybrid Working, Security Clearance, Leadership, DevOps, Observability, Monitoring.
in ML, GenAI, and academic research to infuse innovation.

Technologies and tools you will use and own:
* PyTorch & TensorFlow: for large-scale development and training of deep learning models.
* Spark/Distributed Systems: for large data processing and model training at scale.
* A/B Experimentation Platforms: design, monitor, and analyze online experiments.
* Cloud ML Pipelines and Tools: to …

…/recommender systems at scale.
* Deep understanding of recent LLM and generative AI architectures, with experience fine-tuning and deploying them.
* Experience processing large-scale data via distributed systems (Spark, Hadoop, etc.).
* Excellent communication and collaboration across engineering, analytics, and product teams.
* Track record of impact through production ML systems and/or peer-reviewed publications.

Accommodation requests …