Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
NLP PEOPLE
Job Specification: Machine Learning Engineer (NLP, PyTorch) Location: Bristol, UK (Hybrid - 2 days per week in the office) About the Role I'm looking for an NLP Engineer to join a forward-thinking company that specialises in advanced risk analytics …
and implementation experience with Microsoft Fabric (preferred), along with familiarity with Azure Synapse and Databricks. Experience in core data platform technologies and methods including Spark, Delta Lake, Medallion Architecture, pipelines, etc. Experience leading medium to large-scale cloud data platform implementations, guiding teams through technical challenges and ensuring alignment … Strong knowledge of CI/CD practices for data pipelines, ensuring automated, repeatable, and scalable deployments. Familiarity with open-source data tools such as Spark, and an understanding of how they complement cloud data platforms. Experience creating and maintaining structured technical roadmaps, ensuring successful delivery and future scalability of …
City, Edinburgh, United Kingdom Hybrid / WFH Options
ENGINEERINGUK
years in software engineering, with 3+ years in API-backed ML deployment. Strong programming skills in Python. Significant experience with SQL (e.g., RDBMS, Spark, Presto, or BigQuery). Experience with machine learning, optimization, and data manipulation tools (e.g., scikit-learn, XGBoost, cvxpy, Pandas, Spark, or PyTorch).
and build something meaningful. 🛠️ What you'll do: Lead and mentor a junior-heavy team of data engineers Build and scale robust pipelines using Spark, Kafka and Delta Lake Define test-driven, documented and repeatable engineering practices Work closely with AI, research and DevOps to deliver products and insights … do it: 2 days a week in the office (1 day in Oxford and 1 day in London) 🧰 Tech you'll use: Python, SQL, Spark, Kafka, Kubernetes, Docker, Airflow, RabbitMQ, AWS, Delta Lake ✅ You'll thrive here if you: Believe in clean code, strong documentation and a test-first …
systems, tools, and validation strategies to support new features Help design/build distributed real-time systems and features Use big data technologies (e.g. Spark, Hadoop, HBase, Cassandra) to build large-scale machine learning pipelines Develop new systems on top of real-time streaming technologies (e.g. Kafka, Flink) Minimum … in Java, Shell, Python development Excellent knowledge of relational databases, SQL and ORM technologies (JPA2, Hibernate) is a plus. Experience in Cassandra, HBase, Flink, Spark or Kafka is a plus. Experience in the Spring Framework is a plus. Experience with test-driven development is a plus. We offer a …
to join its innovative team. This role requires hands-on experience with machine learning techniques and proficiency in data manipulation libraries such as Pandas, Spark, and SQL. As a Data Scientist at PwC, you will work on cutting-edge projects, using data to drive strategic insights and business decisions. … e.g. scikit-learn) and deep learning frameworks such as PyTorch and TensorFlow. Understanding of machine learning techniques. Experience with data manipulation libraries (e.g. Pandas, Spark, SQL). Git for version control. Cloud experience (we use Azure/GCP/AWS). Skills we'd also like to hear about …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Yelp USA
Summary Yelp engineering culture is driven by our values: we're a cooperative team that values individual authenticity and encourages creative solutions to problems. All new engineers deploy working code their first week, and we strive to broaden individual impact …
technical processes in areas like cloud, AI, machine learning, or mobile platforms. Ability to solve data-oriented problems using technologies like SQL, Relational DB, Spark, NoSQL, with performance optimization. Preferred qualifications, capabilities, and skills Experience with Spark performance tuning on large datasets. Experience delivering production changes using Scala …
Experience in patents or publications at top-tier peer-reviewed conferences or journals. PREFERRED QUALIFICATIONS Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience with large-scale distributed systems such as Hadoop, Spark, etc. Amazon is an equal opportunities employer.
neural deep learning methods and machine learning - Experience with prompting techniques for LLMs PREFERRED QUALIFICATIONS - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. - PhD in math/statistics/…
Responsibilities: Analyze, design, develop, and test software solutions. Utilize Spring Boot, Spark (Big Data), and Message Bus Architecture. Work with containerization technologies like Kubernetes. Manage cloud infrastructure on AWS. Implement and maintain CI/CD pipelines using Jenkins. Qualifications: Bachelor's degree in Computer Science, Information Technology, or related … field. Proficiency in software engineering with Java and Spring, or other major programming languages. Preferred experience with Spring Boot, Spark (Big Data), containerization, AWS, and CI/CD pipelines. More ❯