HTML/JavaScript, REST APIs, authentication protocols, data formats (CSV, JSON, XML, Parquet). Experience with time-series databases (e.g., InfluxDB, kdb+, TimescaleDB) and real-time data processing. Familiarity with distributed computing and data warehousing technologies (e.g., Spark, Snowflake, Delta Lake). Strong understanding of data governance, master data management, and data quality frameworks. Excellent communication and stakeholder management More ❯
or journals - Experience programming in Java, C++, Python or related language - Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing PREFERRED QUALIFICATIONS - Experience using Unix/Linux - Experience in professional software development Our inclusive culture empowers Amazonians to deliver the best results for More ❯
to design and build scalable, high-performance data solutions. Data Modelling & Warehouse Design : Proficiency in data modelling, warehouse design, and database optimization, with examples of logical and physical models. Distributed Data Systems : Experience in deploying, managing, and tuning distributed systems for optimal reliability and performance. Coding & Development Practices : Demonstrated coding expertise with modular, reusable, and efficient code in … engineering problems. Architecture for Scale : Design scalable, complex data architectures that provide cross-team value. Data Modelling & Governance : Establish standards in logical and physical data modelling and data governance. Distributed Computing : Employ parallel processing, streaming, and batch workflows to manage large data volumes effectively. ETL & Workflow Automation : Build ETL processes and automated workflows for efficient data movement. System More ❯
edge technology on the market right now. Main Responsibilities Develop scalable systems and reusable components that serve as reference points for new team members. Create efficient data workflows using distributed computing tools, particularly for large-scale processing and aggregation. Build responsive, asynchronous APIs and backend processes capable of handling high data volumes with minimal delay. Utilise AI-powered … engineering principles such as test-driven development and modular design. Preferred Background Hands-on experience with Spark and Scala in commercial environments. Familiarity with Java and Python. Exposure to distributed data systems and cloud storage platforms. Experience designing data schemas and analytical databases. Use of AI tools to streamline development and debugging. Strong grasp of algorithmic thinking and performance More ❯
skills required to triage and resolve complex production issues and operate well in a fast-paced, high-pressure environment. A propensity to automate manual tasks, appreciation for large-scale, distributed computing systems, and a willingness to develop using a wide range of languages and frameworks will be necessary to succeed in the role. As part of a global … to quickly identify scope and impact of issues during high-pressure situations Solid communication and interpersonal skills Ability to multi-task and prioritize tasks effectively Preferred Qualifications Experience with distributed systems design, maintenance, and troubleshooting. Hands-on experience with debugging and optimizing code, as well as automation. Knowledge of financial markets FIX protocol knowledge ABOUT GOLDMAN SACHS At Goldman More ❯
application platform development, web and mobile development, cloud, integration, security, etc. Application development experience with at least one of the cloud providers - Amazon AWS or MS Azure Understanding of distributed computing paradigms and exposure to building highly scalable systems. Experience with platform modernization and cloud migration projects Expertise in Agile development methodologies like TDD, BDD, Performance/Load More ❯
you're comfortable developing or learning to develop custom metrics, identifying biases, and quantifying data quality. Strong Python skills for Data & Machine Learning, familiarity with PyTorch and TensorFlow. Experience with distributed computing and big data, scaling ML pipelines for large datasets. Familiarity with cloud-based deployment (such as AWS, GCP, Azure, or Modal). Experience in fast-moving AI, ML More ❯
Comfortable writing code in either Python or Scala Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one Deep experience with distributed computing with Apache Spark and knowledge of Spark runtime internals Familiarity with CI/CD for production deployments Working knowledge of MLOps Design and deployment of performant end More ❯
ML-specific operators AI model serving experience with modern inference servers and API gateways for AI applications Nice to have: Infrastructure as Code experience with Terraform, Ansible, or CloudFormation Distributed computing experience with Databricks, Ray, or Spark for large-scale AI workloads AI safety & governance experience with model evaluation, bias detection, and responsible AI practices Multi-modal AI More ❯
experiences, don't let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. … hands on experience with Python to build, train, and evaluate models Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing Experience with design, development, and optimization of generative AI solutions, algorithms, or technologies Experience in patents or publications at peer-reviewed conferences or More ❯
business application - Experience programming in Java, C++, Python or related language - Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing PREFERRED QUALIFICATIONS - Experience using Unix/Linux - Experience in professional software development Amazon is an equal opportunities employer. We believe passionately that employing More ❯
Big Data Analytics. C++, Java, Python, Shell Script, R, Matlab, SAS Enterprise Miner Elasticsearch and understanding of the Hadoop ecosystem Experience working with large data sets, experience working with distributed computing tools like Map/Reduce, Hadoop, Hive, Pig etc. Advanced use of Excel spreadsheets for analytical purposes An MSc or PhD in Data Science or an … analytical subject (Physics, Mathematics, Computing) or other quantitative discipline would be handy. The position is based in Docklands, London. This is a 3-6 month contract assignment. Please send your CV to us in Word format along with daily rate and availability details. More ❯
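The Map/Reduce pattern named in the listing above can be sketched in plain Python, with no Hadoop cluster assumed: map each record to (key, value) pairs, group (shuffle) by key, then reduce each group. The classic word-count example, purely illustrative:

```python
# Minimal, framework-free sketch of the map/reduce pattern:
# map -> shuffle (group by key) -> reduce. Not Hadoop itself.
from collections import defaultdict
from functools import reduce

def map_phase(records):
    """Map step: emit a (word, 1) pair for every word in every record."""
    for line in records:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle step: group all values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: sum each key's values to get per-word counts."""
    return {key: reduce(lambda a, b: a + b, values)
            for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data big", "data tools"])))
print(counts)  # {'big': 2, 'data': 2, 'tools': 1}
```

In a real Hadoop or Spark job the shuffle is performed by the framework across machines; only the map and reduce functions are user code.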
Amazon! Key job responsibilities • Collaborate with experienced cross-disciplinary Amazonians to conceive, design, and bring to market innovative products and services. • Design and build innovative technologies in a large distributed computing environment and help lead fundamental changes in the industry. • Create solutions to run predictions on distributed systems with exposure to innovative technologies at incredible scale and … speed. • Build distributed storage, index, and query systems that are scalable, fault-tolerant, low cost, and easy to manage/use. • Work in an agile environment to deliver high quality software. BASIC QUALIFICATIONS - Graduated less than 24 months ago or about to complete a Bachelor's or Master's Degree in Computer Science, Computer Engineering, or related fields at … time of application - Knowledge of Computer Science fundamentals - Programming experience in C or Java or Rust - Knowledge in databases PREFERRED QUALIFICATIONS - Previous technical internship(s) if applicable - Experience with distributed, multi-tiered systems, algorithms, and relational databases - Experience with techniques such as linear programming and nonlinear optimisation - Ability to effectively articulate technical challenges and solutions - Adept at handling ambiguous or undefined More ❯
expertise in Spark ML to work with a leading financial organisation on a global programme of work. The role involves predictive modeling, and deploying training and inference pipelines on distributed systems such as Hadoop. The ideal candidate will design, implement, and optimise machine learning solutions for large-scale data processing and predictive analytics. Role: Develop and implement machine learning … models using Spark ML for predictive analytics Design and optimise training and inference pipelines for distributed systems (e.g., Hadoop) Process and analyse large-scale datasets to extract meaningful insights and features Collaborate with data engineers to ensure seamless integration of ML workflows with data pipelines Evaluate model performance and fine-tune hyperparameters to improve accuracy and efficiency Implement scalable … solutions for real-time and batch inference Monitor and troubleshoot deployed models to ensure reliability and performance Stay updated with advancements in machine learning frameworks and distributed computing technologies Experience: Proficiency in Apache Spark and Spark MLlib for machine learning tasks Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering) Experience with distributed systems like Hadoop More ❯
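The hyperparameter fine-tuning step in the role above (evaluate candidates, keep the best on held-out data) can be sketched framework-free in plain Python. This is not Spark ML itself — in the role this would use Spark's `ParamGridBuilder`/`CrossValidator` over distributed data — and the threshold "model" and data here are invented for illustration:

```python
# Toy sketch of hyperparameter selection: score each candidate on a
# validation split and keep the best. (Illustrative only; Spark ML's
# CrossValidator does this over a distributed dataset.)

def evaluate(threshold, data):
    """Toy 'model': classify x >= threshold as positive; return accuracy."""
    correct = sum((x >= threshold) == label for x, label in data)
    return correct / len(data)

def tune(candidates, validation_data):
    """Return the candidate threshold with the best validation accuracy."""
    scores = {t: evaluate(t, validation_data) for t in candidates}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Hypothetical validation data: (feature, label) pairs.
validation = [(0.2, False), (0.4, False), (0.6, True), (0.9, True)]
best_t, best_acc = tune([0.3, 0.5, 0.7], validation)
print(best_t, best_acc)  # 0.5 1.0
```

The same select-best-on-validation loop underlies grid search in any ML framework; only the model, metric, and execution engine change.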
bold vision and lead teams through high impact programs using new technology? Would you like to gain the deepest customer and partner insights on maximizing the value of cloud computing technologies? At AWS, we're hiring a highly technical Senior Cloud Consultant to build innovative solutions with our Global customers and partners. Our consultants deliver meaningful business outcomes to … hands-on delivery teams to accelerate their adoption of new technologies and practices. Delivery: Engagements may include on-site projects proving the use of AWS services to support new distributed computing solutions that often span private cloud and public cloud services. Engagements may include migration of existing applications and development of new applications using AWS cloud services. Insights … experiences, don't let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. More ❯
AI revolution! About DeepPCB: DeepPCB is InstaDeep's AI-powered Place & Route PCB (Printed Circuit Board) design tool. We use a combination of deep reinforcement learning and high-performance computing to automate and scale PCB place-and-route workflows, accelerating hardware innovation globally. We are looking for a Machine Learning Engineer to join the DeepPCB team and help push … engineers to bring ideas to life. Responsibilities: Develop scalable and efficient machine learning algorithms to tackle PCB place-and-route challenges. Adapt and optimize ML models for large-scale distributed computing environments (e.g., GPUs, multi-node clusters). Build, test, and deploy robust production-level ML systems integrated into the DeepPCB platform. Collaborate with research scientists, software engineers … thrive in a fast-paced, collaborative, and dynamic environment. Nice to haves: Prior experience with PCB design, EDA tools, or related optimization problems. Hands-on experience in high-performance computing environments (e.g., Kubernetes, Ray, Dask). Contributions to open-source projects, publications, or top placements in ML competitions (e.g., Kaggle). Expertise in related fields such as Computer Vision More ❯
through writing, visualisations, or presentations Strong organisational skills with experience in balancing multiple projects Familiarity with Posit Connect, workflow orchestration tools (e.g., Airflow), AWS services (e.g., SageMaker, Redshift), or distributed computing tools (e.g., Spark, Kafka) Experience in a media or newsroom environment Agile team experience Advanced degree in Maths, Statistics, or a related field What's in it More ❯
RAG). Familiarity with feature stores, vector databases, or embedding pipelines. Understanding of data quality, lineage, and governance best practices. Exposure to cloud platforms (AWS, Azure, or GCP) and distributed compute environments. More ❯
complement Ripple's Payments, Custody and Stablecoin business units WHAT YOU'LL DO: Be an ambitious builder, working up and down the stack, mixing software engineering, data engineering, and distributed systems knowledge to build modern enterprise payment applications. Build reliable, high-throughput, low-latency microservices to power a diverse range of trading use cases Engage in the complete software … Proactively identify customer and infrastructure difficulties and drive corresponding solutions. WHAT YOU'LL BRING: 2+ years of hands-on Software Development experience within the trading domain on large scale distributed systems, primarily in Java or similar (Golang, Scala etc). Experience working in a Front-Office Trading environment and building financial/trading systems Comfortable in a fast-paced … environment FX and/or Crypto Trading experience Experience in building transactional systems backed by modern persistence technologies (Aurora, DynamoDB etc.) Experience with Agile development of distributed services, with a focus on robust software design, scalability and security. Experience building and deploying containerised applications into modern distributed computing environments (Kubernetes, Nomad etc.) Eagerness to work openly and More ❯
to expert in one or more technical areas. Design, implement and deliver performant and scalable algorithms based on state-of-the-art machine learning and neural network methodologies using distributed computing systems (CPUs, GPUs, TPUs, Cloud, etc.). Conduct rigorous data analysis and statistical modelling to explain and improve models. Report results clearly and efficiently, both internally and … on application. Nice to haves: Knowledge in areas around immunology, proteomics, and computer vision. Knowledge in molecular biology, biochemistry, structural biology, or a related discipline. Experience with high-performance computing or MLOps. Our commitment to our people We empower individuals to celebrate their uniqueness here at InstaDeep. Our team comes from all walks of life, and we're proud More ❯
Experience in validating and QC'ing complex genomic datasets. Highly proficient in Python with solid command line knowledge and Unix skills. Highly proficient working with cloud environments (ideally Azure), distributed computing and optimising workflows and pipelines. Experience working with common data transformation and storage formats, e.g. Apache Parquet, Delta tables. Strong experience working with containerisation (e.g. Docker) and More ❯
into stories and epics. Broad knowledge of IT including Windows/Linux and general networking Good understanding of networking (TCP and UDP) and multicast data delivery Good understanding of distributed server architectures running on Linux Highly Desirable Skills Understanding of ultra-low latency distributed computing environments. Familiarity with FIX trading protocol and market data systems. Previous experience More ❯