City of London, London, United Kingdom (Hybrid/Remote Options)
Lorien
visualisation libraries (Matplotlib, Seaborn). SQL for data extraction and manipulation. Experience working with large datasets. Technical Skills Proficiency in cloud computing and Python programming. Familiarity with Python libraries like Pandas, NumPy, scikit-learn. Experience with cloud services for model training and deployment. Machine Learning Fundamentals Statistical concepts for robust data analysis. Linear algebra principles for modelling and optimisation. Calculus for …
City of London, London, United Kingdom (Hybrid/Remote Options)
Harrington Starr
data structures, algorithms, and software engineering best practices Track record of designing scalable, production-grade systems Excellent problem-solving, collaboration, and communication skills Nice to Have Experience with NumPy, Pandas, Cython, or Numba Exposure to market microstructure, risk modelling, or quantitative research Experience developing and maintaining live trading bots or algo execution systems Background in mentoring or technical leadership within …
particularly in recommendation systems and deep learning architectures. Strong understanding of two-tower neural networks, embedding techniques, and ranking models. Proficiency in Python and familiarity with ML libraries, e.g. pandas, numpy, scipy, scikit-learn, tensorflow, pytorch. Familiarity with cloud platforms (GCP, AWS, Azure) and tools like Dataiku. Experience with ML Ops, including model deployment, monitoring, and retraining pipelines. Ability to …
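For illustration only, and not taken from the listing itself: a minimal sketch of the two-tower architecture the role references, in which separate user and item towers map IDs to embeddings and relevance is scored by their dot product. The layer sizes, ID counts, and class names below are arbitrary assumptions.

```python
# Minimal two-tower recommendation sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

class Tower(nn.Module):
    """Maps integer IDs to a dense embedding via an embedding table and a small MLP."""
    def __init__(self, num_ids: int, embed_dim: int = 32, out_dim: int = 16):
        super().__init__()
        self.embedding = nn.Embedding(num_ids, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.embedding(ids))

class TwoTowerModel(nn.Module):
    """User tower and item tower; the relevance score is the embedding dot product."""
    def __init__(self, num_users: int, num_items: int):
        super().__init__()
        self.user_tower = Tower(num_users)
        self.item_tower = Tower(num_items)

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        u = self.user_tower(user_ids)
        v = self.item_tower(item_ids)
        return (u * v).sum(dim=-1)  # one score per (user, item) pair

model = TwoTowerModel(num_users=1_000, num_items=5_000)
scores = model(torch.tensor([1, 2, 3]), torch.tensor([10, 20, 30]))
print(scores.shape)  # torch.Size([3])
```

In practice such a model would be trained with a ranking or contrastive loss over logged interactions; the sketch only shows the forward pass.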
basic AI algorithms and explore practical applications of AI. Overview of AI and Machine Learning Types of machine learning (supervised, unsupervised, reinforcement) Introduction to Python Libraries for AI - NumPy, Pandas, Matplotlib; Scikit-learn for machine learning Building AI Models Data preprocessing Training and evaluating models Advanced Programming for AI Integration This programme aims to equip participants with the knowledge and …
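Purely as an illustrative aside, the "Building AI Models" steps named in this outline (data preprocessing, then training and evaluating a model) might look roughly like the scikit-learn sketch below; the Iris dataset and logistic regression model are assumptions chosen only to keep the example self-contained.

```python
# Illustrative sketch of preprocessing, training, and evaluating a model with scikit-learn.
# Dataset and model choice are assumptions, not part of the programme outline above.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Preprocessing (feature scaling) and the classifier combined in one pipeline
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluation on held-out data
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```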
and pipelines. Required Skills and Qualifications Core Technical Skills Programming: Strong proficiency in Python for data manipulation and scripting; familiarity with standard Python data libraries (e.g., Pandas, NumPy). Database: Expert-level proficiency in SQL (Structured Query Language); experience writing complex joins, stored procedures, and performing performance tuning. Big Data Concepts: Foundational understanding of Big Data …
in Python), Databricks, dbt, Terraform. Advanced knowledge of PostgreSQL, Docker, and CI/CD pipelines. A practical understanding of data modelling, metadata management, and pipeline orchestration. Strong Python skills (Pandas, PySpark, or SQLAlchemy a plus) and SQL. Curiosity about how ML models and BI tools connect back to real-world decisions. Bonus Points Experience building and automating ML deployment pipelines …
Northampton, England, United Kingdom (Hybrid/Remote Options)
Intellect Group
neural networks, NLP, etc.). Hands-on experience with frameworks such as TensorFlow, PyTorch, or scikit-learn. Proficiency in Python and familiarity with common data science libraries (NumPy, pandas, etc.). Solid grasp of statistics, linear algebra, and probability. Excellent problem-solving skills and ability to communicate complex ideas clearly. Desirable Skills Experience with deep learning architectures (CNNs, RNNs …
designing, implementing, and maintaining MLOps processes in a cloud environment (e.g., Azure, AWS, GCP). Technical Skills: Expertise in Python and its ML ecosystem (e.g., TensorFlow, PyTorch, Scikit-learn, Pandas, NumPy). Strong background in statistical analysis, algorithm design, and software engineering best practices. Experience with Docker and Kubernetes for containerization and orchestration. Proficiency with modern version control systems (Git …
of RESTful APIs as well as experience working with both synchronous and asynchronous endpoints Experience with Snowflake or Redshift with a strong understanding of SQL. Proficient in Python and Pandas Experience working with JSON and XML Strong understanding of cloud computing concepts and services (AWS preferably) Experience with Git or equivalent version control systems and CI/CD pipelines. Familiarity …
London, South East, England, United Kingdom (Hybrid/Remote Options)
Method Resourcing
learning models at scale. Deep familiarity with core ML concepts (classification, time-series, statistical modeling) and their real-world tradeoffs. Fluency in Python and commonly used ML libraries (e.g. pandas, scikit-learn; experience with PyTorch or TensorFlow is a plus). Experience with model lifecycle management (MLOps), including monitoring, retraining, and model versioning. Ability to work across data infrastructure, from …
City of London, London, United Kingdom (Hybrid/Remote Options)
Enigma
such as Airflow, Prefect, or Temporal, or by building bespoke pipeline systems for multi-step autonomous processes. You bridge science and engineering — comfortable with scientific computing libraries (NumPy, SciPy, pandas) and familiar with scientific databases and literature formats. What Sets You Apart: You have a research background — perhaps as a former academic researcher or research software engineer in ML/ …
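As a hedged illustration of the "bespoke pipeline systems for multi-step autonomous processes" mentioned above (not anything the company is stated to run), a minimal hand-rolled pipeline can be little more than an ordered list of steps with simple retry handling; the step names and retry policy below are invented for the example.

```python
# Minimal bespoke multi-step pipeline sketch: ordered steps, each retried on failure.
# Step functions, names, and retry counts are illustrative assumptions only.
import time
from typing import Any, Callable

def run_pipeline(steps: list[tuple[str, Callable[[Any], Any]]], payload: Any, retries: int = 2) -> Any:
    for name, step in steps:
        for attempt in range(retries + 1):
            try:
                payload = step(payload)  # each step transforms the payload for the next one
                break
            except Exception as exc:
                if attempt == retries:
                    raise RuntimeError(f"step '{name}' failed after {retries + 1} attempts") from exc
                time.sleep(1)  # fixed backoff before retrying
    return payload

# Toy steps standing in for fetch -> parse -> summarise
result = run_pipeline(
    steps=[
        ("fetch", lambda _: "raw text"),
        ("parse", lambda raw: raw.split()),
        ("summarise", lambda tokens: f"{len(tokens)} tokens"),
    ],
    payload=None,
)
print(result)  # "2 tokens"
```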
data cycle. Proven Experience working with AWS data technologies (S3, Redshift, Glue, Lambda, Lake Formation, CloudFormation), GitHub, CI/CD Coding experience in Apache Spark, Iceberg or Python (Pandas) Experience in change and release management. Experience in Data Warehouse design and data modelling Experience managing Data Migration projects. Cloud data platform development and deployment. Experience of performance tuning in …
City of London, London, United Kingdom (Hybrid/Remote Options)
Intellect Group
applying machine learning or AI techniques Strong programming skills in Python and TypeScript (this is essential) Experience with common Python data/ML libraries (e.g. PyTorch, TensorFlow, scikit-learn, pandas, NumPy) Experience building or contributing to TypeScript codebases (e.g. Node.js backends, React frontends, or internal tools) Hands-on exposure to AWS (e.g. EC2, S3, IAM; bonus points for Lambda, ECS …
technology frameworks with the team Eagerness for working with third-party/open-source technologies Experience with containerization and CI/CD solutions (e.g. Docker, Devise) Experience working with Pandas, Quantitative Analysis Experience working within the Scrum model Enthusiasm to lead, share new ideas, drive processes and technology frameworks with the team Discover what makes Bloomberg unique - watch our podcast …
learning and cutting-edge AI & Agents, honed through extensive practical experience across a range of domains Expert-level proficiency in Python and its data science ecosystem (e.g., scikit-learn, pandas), with the ability to select the right tools for complex problems and set technical standards for the team Advanced, hands-on expertise in SQL and big data platforms like Databricks, used …
quantification, model evaluation, and statistical inference is highly valued. Python expertise: Skilled in building data pipelines and ML models using modern libraries across multiple domains: Data science stack: NumPy, pandas/polars, scikit-learn, XGBoost, LightGBM Deep learning: PyTorch, JAX Statistical programming: NumPyro, PyMC Data skills: Proficient in SQL, with the ability to write efficient, maintainable queries and manage data …
as a software engineer or a data engineer and a strong passion to learn. BS/MS in Computer Science or equivalent experience in related fields. Experience in Python, Pandas, PySpark, and Notebooks. SQL knowledge and experience working with relational databases including schema design, access patterns, query performance optimization, etc. Experience with data pipeline technologies like AWS Glue, Airflow, Kafka …
scale up? Having added nearly 100 people to the business this year, they're looking for Quantitative Developers with proficiency in Python and experience with the Python data libraries (Pandas, NumPy), preferably from the trading world. They are also on the lookout for candidates who: Have deep familiarity with the Python data ecosystem Understanding of Jupyter notebooks Exposure to machine …