for efficiency. Enhance and create advanced data visualisation applications. Requirements: Proficient in Python software development. 3-5 years' experience in software engineering. Experience with libraries/frameworks such as Pandas, NumPy, SciPy, etc. Skilled in data pipeline orchestration libraries (e.g., Airflow, Prefect). Experience with cloud infrastructure (AWS, GCP, Azure). DevOps skills (CI/CD, containerisation). Familiarity …
Abingdon, Oxfordshire, United Kingdom Hybrid/Remote Options
NES Fircroft
Tools for scalable data processing: Kubernetes, Spark • Experience with Java 2D graphics and 3D OpenGL programming. • Experience with scientific computing libraries and frameworks: o Python: NumPy, SciPy, Pandas, TensorFlow (for ML/AI) o C/Java: CUDA (for GPU acceleration) o Angular or React o Microservices: Quarkus, Spring Boot, AWS API Gateway o Docker, Kubernetes. With over …
framework experience (SQLAlchemy). Containerization familiarity with Docker and/or Kubernetes. Cloud platform knowledge (AWS preferred, but also Azure, GCP). Pluses: experience working with data scientists; data library experience (Pandas and/or NumPy); knowledge of microservices architecture and RESTful API design; integration experience with LangChain or similar AI frameworks to build AI-based workflows. Technical: Python, containerization, cloud platform …
and more on reliability and fixes. Key Skills: Investigating and debugging complex data flow and Machine Learning issues within a live, high-impact production environment. Extensive Python, NumPy and Pandas experience is required for this role. You must demonstrate a deep commercial background in the following areas: Extensive Python: very strong, production-level Python coding and debugging skills. Production Environment: proven …
design) Strong background in AI/ML with experience using frameworks such as TensorFlow, PyTorch, or Scikit-learn Proficiency in data handling and manipulation using libraries like NumPy and Pandas Experience with SQL databases for managing and accessing training data Knowledge of model deployment and scaling in enterprise or cloud environments (AWS, Azure, or GCP) Familiarity with containerization and orchestration …
matter expert for pricing models and valuation logic, supporting risk and trading teams globally. Skills and Experience Expert-level Python developer with strong experience in numerical computing (NumPy, SciPy, Pandas). Deep understanding of derivatives pricing theory, volatility modelling, and stochastic calculus. Experience with calibration, curve bootstrapping, and risk measures (Greeks, sensitivities, VaR). Background in pricing and risk models …
implementing ETL/ELT pipelines and data workflows. Hands-on experience with AWS data services, such as S3, Glue, Redshift, Lambda, and IAM. Proficiency in Python for data engineering (pandas, boto3, pyodbc, etc.). Solid understanding of data modeling, relational databases, and schema design. Familiarity with version control, CI/CD, and automation practices. Ability to collaborate with data scientists …
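As an illustration of the kind of pandas transform step such an ETL pipeline involves (a minimal sketch: the column names, values, and the in-memory CSV are hypothetical stand-ins for a real S3 extract):

```python
import io

import pandas as pd

# Hypothetical raw extract; in a real AWS pipeline this CSV would typically
# be fetched from S3 with boto3 (e.g. s3.get_object) before loading.
raw_csv = io.StringIO(
    "order_id,region,amount\n"
    "1,eu,10.0\n"
    "2,us,25.5\n"
    "3,eu,4.5\n"
)

df = pd.read_csv(raw_csv)

# Transform: total revenue per region, ready to load into a warehouse
# table such as Redshift.
totals = df.groupby("region", as_index=False)["amount"].sum()
print(totals.to_dict(orient="records"))
```

In production the same transform would usually run inside an orchestrated task (Glue job or Lambda) rather than a standalone script.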
AI/ML development. Strong proficiency in Python, R, or Java. Experience with machine learning libraries such as TensorFlow, Keras, or Scikit-learn. Familiarity with data processing tools (e.g., Pandas, NumPy). Knowledge of AI model deployment and cloud services (AWS, Google Cloud, Azure). Solid understanding of algorithms and data structures. Excellent analytical skills and problem-solving capability. Strong …
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
continuous improvement initiatives Essential Skills for the AWS Data Engineer: Extensive hands-on experience with AWS data services Strong programming skills in Python (including libraries such as PySpark or Pandas) Solid understanding of data modelling, warehousing and architecture design within cloud environments Experience building and managing ETL/ELT workflows and data pipelines at scale Proficiency with SQL and working …
facing technical capacity before. About the LanceDB Team LanceDB was created by experts with decades of experience building tools for data science and machine learning. From co-authors of pandas to Apache PMC members for HDFS, Arrow, Iceberg and HBase, the LanceDB team has created open-source tools used by millions worldwide.
building and deploying machine learning models (or equivalent experience through research, internships, or significant personal projects). Proficiency in Python and its core data science libraries (e.g., scikit-learn, pandas, numpy). Experience building and maintaining APIs for data-centric applications (e.g., FastAPI, Flask). A strong foundation in software engineering best practices, including version control (Git), testing, and writing …
WHAT WE NEED FROM YOU: Required: 7+ years of professional work experience in Python software development. Extensive experience with core AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn, Pandas, XGBoost). Experience building Python RESTful APIs with FastAPI, ensuring effective communication between backend systems and applications. Experience with relational and/or NoSQL databases (e.g., SQL, Google Firestore, Azure …
learning fundamentals, and experimental design. Experience with predictive modeling techniques such as regression, classification, clustering, or time-series forecasting. Proficiency in Python and experience with data science libraries (e.g., Pandas, NumPy, scikit-learn, XGBoost, PyTorch, TensorFlow). Strong experience with SQL and data manipulation across large datasets. Familiarity with data visualization tools (e.g., Matplotlib, Seaborn, Plotly, Tableau, or Power BI) …
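A minimal sketch of the kind of predictive-modeling task these requirements describe, using scikit-learn's logistic regression on a tiny synthetic dataset (the data and threshold are illustrative assumptions, not anything from the listing):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny, perfectly separable toy dataset: one feature, label is 1
# when the feature value is above ~0.5.
X = np.array([[0.0], [0.1], [0.2], [0.8], [0.9], [1.0]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression()
model.fit(X, y)

# On separable data like this the model classifies the training set
# perfectly; real work would use a held-out test split instead.
print(model.score(X, y))  # → 1.0
```

Regression, clustering, and time-series models follow the same fit/score pattern in scikit-learn, which is why the library is listed alongside XGBoost and the deep learning frameworks.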
in production environments. Strong proficiency in Python and SQL, with plenty of familiarity using popular libraries for machine learning (e.g. scikit-learn, XGBoost, LightGBM, PyTorch) and data manipulation (e.g. Pandas, NumPy, Polars, DuckDB, Dask). Experience applying software engineering best practices to both greenfield and brownfield development (e.g. testing, CI/CD, containerization, observability) Excellent technical communication and collaboration skills …
Python, R, or other programming languages commonly used in machine learning. Experience with libraries such as TensorFlow, PyTorch, scikit-learn, etc. Tools: experience with data processing and manipulation tools (SQL, Pandas, NumPy), cloud platforms (AWS, Azure, GCP), and version control systems (Git). Desirable Skills: knowledge of deep learning, NLP, computer vision, or reinforcement learning. Experience in deploying machine learning models in production environments. Familiarity …
visualisation libraries (Matplotlib, Seaborn). SQL for data extraction and manipulation. Experience working with large datasets. Technical Skills Proficiency in cloud computing and Python programming. Familiarity with Python libraries like Pandas, NumPy, scikit-learn. Experience with cloud services for model training and deployment. Machine Learning Fundamentals Statistical concepts for robust data analysis. Linear algebra principles for modelling and optimisation. Calculus for …
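The "linear algebra principles for modelling" point can be made concrete with a small sketch: ordinary least squares, the linear-algebra core of linear regression, solved directly in NumPy (the data points here are made up for illustration):

```python
import numpy as np

# Fit y ≈ a*x + b by least squares. The data lie exactly on the line
# y = 2x + 1, so the fit should recover a = 2, b = 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

A = np.column_stack([x, np.ones_like(x)])       # design matrix [x | 1]
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # minimises ||A @ coeffs - y||
a, b = coeffs
print(round(a, 6), round(b, 6))  # → 2.0 1.0
```

Libraries like scikit-learn wrap exactly this kind of computation, which is why the fundamentals sections of these listings pair linear algebra with the Python data stack.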
City of London, London, United Kingdom Hybrid/Remote Options
Datatech Analytics
statistics and machine learning techniques (supervised and unsupervised learning, natural language processing, Bayesian statistics, time-series forecasting, collaborative filtering, etc.) • Proficiency in Python with familiarity with ML libraries (e.g. pandas, numpy, scipy, scikit-learn, tensorflow, pytorch) • Familiarity with cloud platforms (GCP, AWS, Azure) and tools like Dataiku and Databricks • Experience with MLOps, including model deployment, monitoring, and retraining pipelines • Ability …
Computational Biology, or related field 2+ years of hands-on experience with PyTorch and/or JAX for deep learning applications Strong proficiency in Python scientific computing stack (NumPy, Pandas, scikit-learn) Experience with distributed computing and GPU optimization techniques Familiarity with protein structure analysis, computational biology, or analogous problems in natural sciences. Understanding of modern deep learning architectures and …