Director, AIML & Scientific Computing Optimization
Site Name: USA - Washington - Seattle-Onyx, UK - London, USA - Pennsylvania - Upper Providence
Posted Date: Apr 4 2025
The Onyx Research Data Tech organization is GSK's Research data ecosystem, which has the … at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real time. Our AIML & Scientific Computing Optimization team is focused on optimizing first-in-class Compute and AIML platforms that accelerate application development, scale up computational experiments, and integrate all … application deployments. The optimization team's focus is on maximizing the scale and performance of all aspects of the platforms. A Director of AIML & Scientific Computing Optimization is a deeply technical leader. They consistently deliver major compute and AIML platform features and solutions with cross-organizational impact and value. They …
in one or more programming languages. Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing, Microservices …
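As a hedged illustration of the Apache Spark style of parallel data work listed above, a minimal PySpark aggregation might look like the sketch below; the dataset path and column names are assumptions for illustration only, not anything from the listing.

```python
# Illustrative sketch only: a minimal PySpark aggregation.
# The parquet path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# Spark partitions the input and runs the aggregation in parallel across executors.
events = spark.read.parquet("events.parquet")  # hypothetical dataset
daily = (
    events
    .groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
    .agg(F.count("*").alias("n_events"))
    .orderBy("event_date")
)
daily.show(10)
spark.stop()
```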
building performant, scalable systems. Deep understanding of core Python, including its strengths in data manipulation, asynchronous programming, and performance optimization. Experience with distributed systems, parallel computing, and high-performance processing of large datasets. Strong experience in data pipelines, working with tools such as Pandas, NumPy, and SQL/…
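As a hedged sketch of the kind of high-performance pandas/NumPy pipeline work described above (vectorised arithmetic plus chunked reads so a large dataset never has to fit in memory at once), consider the following; the file name, columns, and chunk size are illustrative assumptions.

```python
# Minimal sketch of a vectorised, chunked pandas/NumPy pipeline.
# "trades.csv", its columns, and the chunk size are hypothetical.
import numpy as np
import pandas as pd

def summarise(chunk: pd.DataFrame) -> pd.DataFrame:
    # Vectorised arithmetic instead of Python-level row loops.
    chunk["notional"] = chunk["price"].to_numpy() * chunk["quantity"].to_numpy()
    return chunk.groupby("symbol", as_index=False)["notional"].sum()

# Stream the file in chunks so the whole dataset never sits in memory at once.
parts = [
    summarise(chunk)
    for chunk in pd.read_csv("trades.csv", chunksize=1_000_000)
]
totals = pd.concat(parts).groupby("symbol", as_index=False)["notional"].sum()
print(totals.head())
```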
technologies. Leverage new SQL Server features such as Columnstore Indexes, In-Memory OLTP, Incremental statistics, Trace Flags, SQL CLR functions, window aggregate functions, and parallel computing algorithms to reduce the processing time of multi-billion-row data sets. Contribute to the automation capabilities of the team. Implement techniques …
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Low Carbon Contracts Company
with analysts and stakeholders to gather requirements and drive projects from inception to deployment. Your expertise in object-oriented software engineering, quantitative modelling, cloud computing, and data analysis will be essential in enhancing the models that underpin our cashflow and pricing engines. You will also conduct ad-hoc analysis … stakeholders Skills, Knowledge, and Expertise A degree in a highly numerate subject is essential At least 2 years of Python development experience, including scientific computing and data science libraries (NumPy, pandas, SciPy, PySpark) Strong understanding of object-oriented design principles for usability and maintainability Experience with Git in a … version-controlled environment Knowledge of parallel computing techniques (Python multiprocessing, Apache Spark) and performance optimization Understanding of data structures and algorithms Problem-solving mindset with enthusiasm for tackling technical challenges Ability to communicate complex technical concepts effectively to non-technical audiences Experience with cloud platforms (Azure, AWS, GCP …
Leeds, Yorkshire, United Kingdom Hybrid / WFH Options
Low Carbon Contracts Company
wider business to gather requirements, driving projects from inception to deployment. You will leverage your expertise across object-oriented software engineering, quantitative modelling, cloud computing, and data analysis to help improve the models underpinning our most business-critical cashflow and pricing engines. You will come up with ad-hoc … Expertise A good first degree or higher degree in a highly numerate subject is essential Minimum 2 years' experience in Python development, including scientific computing and data science libraries (NumPy, pandas, SciPy, PySpark) Solid understanding of object-oriented software engineering design principles for usability, maintainability and extensibility Experience working … with Git in a version-controlled environment Good knowledge of parallel computing techniques (Python multiprocessing, Apache Spark), and performance profiling and optimisation Good understanding of data structures and algorithms An enthusiastic problem-solving mindset with a desire to solve technical problems and model/forecast intricate real-life …
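As a rough illustration of the "Python multiprocessing" technique named in the requirements above, here is a minimal sketch that fans a CPU-bound calculation out across cores; the pricing function and scenario inputs are hypothetical stand-ins, not anything from the actual role.

```python
# Minimal multiprocessing sketch: parallelise a CPU-bound calculation across cores.
# price_scenario and the scenario list are toy placeholders.
from multiprocessing import Pool

def price_scenario(rate: float) -> float:
    # Placeholder for a CPU-bound cashflow/pricing calculation.
    return sum((1.0 + rate) ** -t for t in range(1, 31))

if __name__ == "__main__":
    scenarios = [0.01 * i for i in range(1, 101)]
    with Pool() as pool:                      # one worker per CPU core by default
        prices = pool.map(price_scenario, scenarios)
    print(f"priced {len(prices)} scenarios; first value {prices[0]:.4f}")
```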
Guildford, Surrey, United Kingdom Hybrid / WFH Options
Ecm Selection
Qt, QML); 3D graphics toolkits (OpenGL, Vulkan or shaders); CI experience (CMake, JIRA, Git, Jenkins); GIS development tools (GDAL API, MapBox API); multithreading/parallel computing (GPU programming or CUDA); MATLAB/Python scripting for mathematical/geology problems would be advantageous. Due to specific requirements, applicants without …
and TensorFlow. Understanding of machine learning algorithms, including model training and inference, and how to optimize these for GPU-based computation. Strong knowledge of parallel computing, vectorization, and multi-core systems for high-performance computing (HPC). Experience with profiling tools (e.g., NVIDIA Nsight, gdb, perf) and … keen interest in optimizing systems for ML workloads. A passion for machine learning, AI, and innovative technology. Nice to Have: Experience with high-performance computing (HPC) and large-scale distributed systems. Knowledge of AI/ML libraries such as cuDNN, TensorRT, or other GPU-accelerated libraries. Familiarity with low …
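As a hedged illustration of the GPU offload and measurement work described above, the sketch below moves a computation onto a GPU with PyTorch and times the kernel; the matrix shapes are arbitrary, and this toy timing loop is not a substitute for Nsight or perf.

```python
# Illustrative sketch: run a matrix multiply on the GPU (if present) and time it.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

if device == "cuda":
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    c = a @ b                      # executes as a parallel kernel on the GPU
    end.record()
    torch.cuda.synchronize()       # wait for the asynchronous kernel before reading the timer
    print(f"matmul took {start.elapsed_time(end):.2f} ms on {torch.cuda.get_device_name(0)}")
else:
    c = a @ b
    print("CUDA not available; ran on CPU")
```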
higher education, scientific, research and development, and a host of enterprise-based firms. You will be responsible for optimizing, managing, and scaling high-performance computing (HPC) environments. Our client is at the forefront of innovation, pushing the boundaries of what's possible with high-performance computing. Their team of … utilization. Crafting and implementing monitoring solutions to identify and address potential issues proactively. Collaborating with researchers, engineers, and developers to optimize application performance and parallel computing efficiency. Troubleshooting and resolving system, network, and software problems to keep operations running smoothly. Implementing security measures to protect sensitive data and …
seaborn, or Streamlit. Fluent in Python, with experience working with data processing libraries such as Pandas. Strong SQL skills, a good understanding of Linux, parallel computing tools, and experience with Git, Jira, and Confluence. A demonstrated interest in sustainability, systematic investing, and a willingness to undertake self-study …
Level executives. This requires deep familiarity across the stack - compute infrastructure (Amazon EC2, Amazon EKS), ML frameworks (PyTorch, JAX), orchestration layers (Kubernetes and Slurm), parallel computing (NCCL, MPI), MLOps, through to Amazon SageMaker HyperPod and Amazon Bedrock, as well as target use cases in the cloud. This is an … stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and … and at home, there's nothing we can't achieve in the cloud. - 10+ years of experience in specific technology domain areas (e.g. software development, cloud computing, systems engineering, infrastructure, security, networking, data & analytics) - Bachelor's degree in computer science, engineering, mathematics or equivalent - Experience developing technology solutions and evangelising …
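As a hedged sketch of how the PyTorch and NCCL layers of the stack above fit together, the following shows one rank of a data-parallel training step; the model is a toy placeholder, and the launch details (e.g. torchrun under Slurm or Kubernetes) are assumptions rather than anything prescribed by the listing.

```python
# Minimal sketch of one rank of a PyTorch DistributedDataParallel step over NCCL.
# The Linear model and input are toy placeholders; rank/world-size settings are
# assumed to be supplied by the launcher (e.g. torchrun).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    dist.init_process_group(backend="nccl")          # NCCL handles inter-GPU collectives
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
    loss = model(x).square().mean()
    loss.backward()                                   # gradients all-reduced via NCCL
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()   # typically launched with: torchrun --nproc_per_node=<gpus> script.py
```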
or sensor simulation. Experience implementing optical methods such as raytracing, finite difference time domain (FDTD), beam propagation method (BPM), or mode solvers. Experience with parallel computing.
COMPANY INFORMATION
All Silvaco salary ranges are determined by role, level and geographic location. Within the range, individual pay is determined by work …