Yarnton, Kidlington, Oxfordshire, England, United Kingdom - Hybrid/Remote Options
Noir
Machine Learning Engineer - AI for Advanced Materials - Oxford/Remote (UK)
(Tech stack: Python, PyTorch, TensorFlow, Scikit-learn, MLflow, Airflow, Docker, Kubernetes, AWS, Azure, GCP, Pandas, NumPy, SciPy, CI/CD, MLOps, Data Visualization, Bayesian Modelling, Probabilistic Programming, Terraform)
We're looking for a Machine Learning Engineer to join a rapidly scaling deep-tech company that's … seeking Machine Learning Engineers with experience in some or all of the following (full training provided to fill any gaps): Python, PyTorch, TensorFlow, Scikit-learn, MLflow, Airflow, Docker, Kubernetes, Pandas, NumPy, SciPy, CI/CD, Data Visualization, Bayesian Modelling, Probabilistic Programming, Terraform, Azure, AWS, GCP, Git, and Agile methodologies. Join a team that's fusing AI, science, and engineering to …
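The listing names the stack without showing any code, so purely as a rough illustration of the kind of work the PyTorch end of that stack implies, here is a minimal training loop on synthetic data. The toy regression task, layer widths, and hyperparameters are assumptions made for the sketch, not details from the role:

```python
import torch
from torch import nn

# Illustrative sketch only: a toy regression problem, not the company's models.
torch.manual_seed(0)
X = torch.randn(256, 8)                              # 256 synthetic samples, 8 features
y = X @ torch.randn(8, 1) + 0.1 * torch.randn(256, 1)  # linear target plus noise

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimiser.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(X), y)    # forward pass and loss
    loss.backward()                # backpropagate
    optimiser.step()               # update weights

print(f"final training loss: {loss.item():.4f}")
```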
Adjust, and other MMPs
- Engineer features for ML models: temporal patterns, user behaviour sequences, campaign attribution
- Ensure data quality through comprehensive validation, deduplication, schema checks, and defensive programming
- Optimise pandas/Polars pipelines for performance - we process millions of events daily
- Handle messy real-world data: duplicates, out-of-order events, schema drift, missing fields, null handling (see the sketch after this listing)
Infrastructure & Orchestration
- Build … with data scientists to productionise ML workflows and model inference
Requirements
The ideal candidate must have...
- 4-6+ years building production data pipelines with demonstrable business impact
- Deep pandas expertise: you've built real ETL pipelines in production, understand vectorization, memory optimization, and performance patterns
- Deep expertise in Python and SQL with high code quality standards (e.g. ruff, mypy, …)
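To make the data quality bullets concrete, here is a minimal pandas sketch of the kind of defensive batch cleaning they describe: a schema check, type coercion, null handling, and deduplication of out-of-order events. The column names (`event_id`, `user_id`, `event_time`, `revenue`) are hypothetical, not taken from the posting:

```python
import pandas as pd

# Hypothetical event schema, invented for this sketch.
EXPECTED_COLUMNS = {"event_id", "user_id", "event_time", "revenue"}

def clean_events(raw: pd.DataFrame) -> pd.DataFrame:
    """Apply basic schema checks, null handling, and dedup to an event batch."""
    # Schema check: fail fast if an expected column is missing (schema drift).
    missing = EXPECTED_COLUMNS - set(raw.columns)
    if missing:
        raise ValueError(f"Schema drift detected, missing columns: {missing}")

    df = raw.copy()
    # Coerce types defensively; unparseable values become NaT/NaN rather than raising.
    df["event_time"] = pd.to_datetime(df["event_time"], errors="coerce")
    df["revenue"] = pd.to_numeric(df["revenue"], errors="coerce").fillna(0.0)

    # Drop rows that cannot be attributed at all (missing ids or timestamps).
    df = df.dropna(subset=["event_id", "user_id", "event_time"])

    # Sort to repair out-of-order arrival, then keep the latest record per event id.
    return (
        df.sort_values("event_time")
          .drop_duplicates("event_id", keep="last")
          .reset_index(drop=True)
    )

if __name__ == "__main__":
    batch = pd.DataFrame(
        {
            "event_id": ["e1", "e1", "e2"],
            "user_id": ["u1", "u1", None],
            "event_time": ["2024-01-02", "2024-01-01", "2024-01-03"],
            "revenue": ["9.99", "9.99", None],
        }
    )
    print(clean_events(batch))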
critical simulation initiatives.
Key Responsibilities
- Design, build, and maintain high-performance ETL/ELT data pipelines using Python and PySpark (see the sketch after this listing).
- Apply expertise in Python's data analysis libraries, including Pandas and NumPy, to perform complex data manipulation, cleansing, and transformation.
- Develop and manage data processing jobs leveraging PySpark for distributed computing across large-scale datasets.
- Implement DevOps practices and tooling …
Skills
- 5+ years of experience in Data Engineering or a related technical field.
- Expert-level proficiency in Python, including a strong command of core concepts and specialized data libraries (Pandas, NumPy).
- Solid hands-on experience with PySpark for building scalable data workflows.
- Strong background in DevOps principles and tools for deploying Python-based data applications (e.g., containerization, CI/ …)
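As an illustration of the PySpark responsibilities listed above, the following is a minimal extract-transform-load sketch: read a batch of records, clean and aggregate them with distributed DataFrame operations, and write partitioned Parquet. The sensor-reading columns and the output path are placeholders invented for the example, not anything from the role:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: in a real pipeline this would read from a lake or warehouse path.
raw = spark.createDataFrame(
    [("2024-01-01", "sensor_a", 12.5),
     ("2024-01-01", "sensor_a", 13.1),
     ("2024-01-02", "sensor_b", None)],
    ["reading_date", "sensor_id", "value"],
)

# Transform: cast, drop incomplete readings, and aggregate per sensor per day.
daily = (
    raw.withColumn("reading_date", F.to_date("reading_date"))
       .dropna(subset=["value"])
       .groupBy("reading_date", "sensor_id")
       .agg(F.avg("value").alias("avg_value"), F.count("*").alias("n_readings"))
)

# Load: write partitioned Parquet (the path is a placeholder).
daily.write.mode("overwrite").partitionBy("reading_date").parquet("/tmp/daily_sensor_readings")

spark.stop()
```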
Job Description: What will make this person successful is their ability to convey the analysis of their data in layman's terms to business executives. Give presentations with confidence and clarity, using an excellent command of the English language. This project is supporting a …
an experienced Data Developer to join their team on a permanent basis.
In order to be successful, you will have the following experience:
- Extensive Data Development background within Python (Pandas)
- Strong SQL skills along with JSON, XML and CSV formats
- Experience of using APIs for data access
- Experience with ETL processes and pipeline development
- SC Cleared
Within this role, you … will be responsible for:
- Cleaning and processing tabular data (Excel, CSV, databases)
- Building data transformation pipelines using Python and pandas (see the sketch after this listing)
- Writing SQL queries to extract and manipulate relational data
- Implementing data validation and quality assurance processes
- Working with JSON, XML, and CSV formats
- Supporting metadata cataloguing and reference data management
- Learning and applying RDF and semantic web concepts
- Collaborating with …
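As a hedged sketch of the tabular cleaning, validation, and SQL work described above, here is a small pandas-to-SQLite pipeline. The CSV file, table, and column names are hypothetical, chosen only to make the example self-contained:

```python
import sqlite3
import pandas as pd

def load_reference_data(csv_path: str, conn: sqlite3.Connection) -> pd.DataFrame:
    """Read a CSV extract, validate it, and load it into a relational table."""
    df = pd.read_csv(csv_path, dtype={"code": "string"})

    # Basic quality checks before anything reaches the database.
    if df["code"].isna().any():
        raise ValueError("Reference codes must not be null")
    if df["code"].duplicated().any():
        raise ValueError("Reference codes must be unique")

    # Load into SQLite, then read it back with a plain SQL query.
    df.to_sql("reference_codes", conn, if_exists="replace", index=False)
    return pd.read_sql(
        "SELECT code, description FROM reference_codes ORDER BY code", conn
    )

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    sample = pd.DataFrame({"code": ["A1", "B2"], "description": ["Alpha", "Beta"]})
    sample.to_csv("reference.csv", index=False)
    print(load_reference_data("reference.csv", conn))
```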
Bristol, Avon, England, United Kingdom - Hybrid/Remote Options
Tank Recruitment
Data Scientist
Location: Hybrid (Greater Bristol Area)
Salary: £54,000
Python - PySpark - Azure - Pandas - Scikit-learn - TensorFlow - PyStats - Data Science - Power BI
We're supporting a growing, forward-thinking organisation in their search for an experienced Data Specialist. This is an exciting opportunity to join a dynamic team at a pivotal point in its growth, helping shape data strategy, deliver … solutions through advanced machine learning models, statistical methods, and high-performance data pipelines.
Skills & Experience Required
To be considered, you will need:
- Strong proficiency in Python and key libraries (pandas, scikit-learn, TensorFlow, PyStats).
- A solid understanding of machine learning techniques and real-world performance trade-offs.
- Experience building and maintaining end-to-end machine learning applications (see the sketch after this listing).
- Hands-on … AI/ML, Data Science, Mathematics, Computer Science or similar discipline.
IoT Data Scientist (Data Science & Data Engineering) - Hybrid (Greater Bristol Area) - £54,000
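To illustrate the "end-to-end machine learning application" requirement in the simplest possible terms, here is a minimal scikit-learn sketch. The synthetic dataset, the model choice, and the split are assumptions made purely for the example:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data; a real role would start from business or IoT data.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Bundling preprocessing and the model in one pipeline keeps training and
# inference consistent, which is what makes the application easy to deploy
# and serve as a single artefact.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
model.fit(X_train, y_train)

print(f"hold-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```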
At Wayve we're committed to creating a diverse, fair and respectful culture that is inclusive of everyone based on their unique skills and perspectives, and regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship …