Central London, London, United Kingdom Hybrid / WFH Options
167 Solutions Ltd
develop scalable solutions that enhance data accessibility and efficiency across the organisation. Key Responsibilities Design, build, and maintain data pipelines using SQL, Python, and Spark. Develop and manage data warehouse and lakehouse solutions for analytics, reporting, and machine learning. Implement ETL/ELT processes using tools such as … Apache Airflow, AWS Glue, and Amazon Athena. Work with cloud-native technologies to support scalable, serverless architectures. Collaborate with data science teams to streamline feature engineering and model deployment. Ensure data governance, lineage, and compliance best practices. Mentor and support team members in data engineering best practices. … Skills & Experience Required 6+ years of experience in data engineering within large-scale digital environments. Strong programming skills in Python, SQL, and Spark (Spark SQL). Expertise in Snowflake and modern data architectures. Experience designing and managing data pipelines, ETL, and ELT workflows. Knowledge of AWS services such as …
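As a rough sketch of the ETL/ELT orchestration this role describes — assuming Apache Airflow 2.x, with a hypothetical DAG name, task callables, and storage locations — a daily pipeline might look like this:

```python
# Hypothetical daily ELT DAG (Airflow 2.x); names and paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**_):
    # Pull the previous day's raw orders from the source system into S3 (placeholder).
    print("extracting raw orders to s3://example-raw/orders/")


def transform_orders(**_):
    # Clean and conform the raw data, e.g. by submitting a Spark or Glue job (placeholder).
    print("transforming orders into curated Parquet")


def load_orders(**_):
    # Publish the curated data to the warehouse/lakehouse layer queried via Athena (placeholder).
    print("loading curated orders for analytics")


with DAG(
    dag_id="daily_orders_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    extract >> transform >> load
```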
and infrastructure. Skills and Experience Required: Proficiency in SQL, Python, Scala, and R. Experience with Big Data technologies such as Microsoft Fabric, Microsoft Synapse, Spark, and Kafka. Familiarity with database management systems including SQL Server and PostgreSQL. Knowledge of data integration and ETL tools (e.g., Talend, Informatica, Apache …
of this team, you will be working on a plethora of services such as Glue (ETL service), Athena (interactive query service), Managed Workflows for Apache Airflow, etc. Understanding of ETL (Extract, Transform, Load). Creation of ETL pipelines to extract and ingest data into a data lake/warehouse with simple … and managing large data sets from multiple sources. Ability to read and understand Python and Scala code. Understanding of distributed computing environments. Proficient in Spark, Hive, and Presto. Experience working with Docker, Python, and shell scripting. Customer service experience/strong customer focus. Prior working experience with AWS - any … environments and excellent Linux/Unix system administrator skills. PREFERRED QUALIFICATIONS - Proficient in Hadoop MapReduce and its ecosystem (ZooKeeper, HBase, HDFS, Pig, Hive, Spark, etc.). - Good understanding of ETL principles and how to apply them within Hadoop. - Prior working experience with AWS - any or all of EC2 …
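A minimal sketch of the ETL pipeline work this posting mentions — extracting raw files and ingesting them into a data lake with PySpark. The bucket names and columns are hypothetical:

```python
# Minimal PySpark ingestion sketch; buckets, paths, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Extract: read raw CSV files landed by upstream systems.
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: deduplicate and derive a partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet into the data lake, queryable via Athena/Glue.
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-lake-bucket/curated/orders/"))

spark.stop()
```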
that will impact millions of users, then this is the place for you! THE MAIN RESPONSIBILITIES FOR THIS POSITION INCLUDE: Support Java-based applications & Spark/Flink jobs on bare metal, AWS & Kubernetes. Understand the application requirements (performance, security, scalability, etc.) and assess the right services/topology on AWS … and understanding of SRE principles & goals along with prior on-call experience. Deep understanding and experience in one or more of the following - Hadoop, Spark, Flink, Kubernetes, AWS. The ability to design, author, and release code in any language (Go, Python, Ruby, or Java). Preferred Qualifications Fast learner …
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
NLP PEOPLE
Job Specification: Machine Learning Engineer (NLP) (PyTorch) Location: Bristol, UK (Hybrid - 2 days per week in the office) About the Role I'm looking for an NLP Engineer to join a forward-thinking company that specialises in advanced risk analytics …
as a Senior Data Scientist with Machine Learning expertise Strong understanding of ML models and observability tools Proficiency in Python and SQL Experience with Spark and Apache Airflow Knowledge of ML frameworks (PyTorch, TensorFlow, Scikit-Learn) Experience with cloud platforms, preferably AWS Experience with containerization technologies Useful information …
City of London, London, United Kingdom Hybrid / WFH Options
Cathcart Technology
Prior Senior Data Scientist with Machine Learning experience ** Strong understanding and experience with ML models and ML observability tools ** Strong Python and SQL experience ** Spark/Apache Airflow ** ML framework experience (PyTorch/TensorFlow/Scikit-Learn) ** Experience with cloud platforms (preferably AWS) ** Experience with containerisation technologies …
continue our growth, we are recruiting a Senior Software Engineer focusing on Python for our Software Team. Our Tech Stack: AWS, Athena SQL, Athena Spark, ECS, Azure, Azure Synapse SQL & Spark, Python, Flask, FastAPI, Redis, Postgres, React, Plotly, Docker. We will potentially add GCP and on-premise … of seven years' experience working in Software Engineering, with at least three of these in Python. You've got two years' experience working with Spark, preferably PySpark. You've had the opportunity to work on Cloud Infrastructure, whether it be AWS, Azure or GCP. You've got experience with …
design, implementation, testing, and support of next-generation features related to Dremio's Query Planner and Reflections technologies. Work with open-source projects like Apache Calcite and Apache Iceberg. Use modular design patterns to deliver an architecture that's elegant, simple, extensible and maintainable. Solve complex technical problems … such as S3, ADLS, or HDFS. Experience with AWS, Azure, and Google Cloud Platform and a background in large-scale data processing systems (e.g., Hadoop, Spark, etc.) is a plus. Ability to scope and plan solutions for big problems and mentor others to do the same. Interested and motivated to be … distributed query engines. Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, storage systems, heap management, Apache Arrow, SQL operators, caching techniques, and disk spilling. Hands-on experience with multi-threaded and asynchronous programming models …
discipline. Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate. Hands-on experience in Java, Spark, and Scala (or Java). Production-scale hands-on experience writing data pipelines using Spark or other distributed real-time/batch processing frameworks. …
learning systems. Strong expertise in ML/DL/LLM algorithms, model architectures, and training techniques. Proficiency in programming languages such as Python, SQL, Spark, PySpark, TensorFlow, or equivalent analytical/model-building tools. Familiarity with tools and technologies related to LLMs. Ability to work independently while also thriving … in a collaborative team environment. Experience with GenAI/LLM projects. Familiarity with distributed data/computing tools (e.g., Hadoop, Hive, Spark, MySQL). Background in financial services, including banking or risk management. Knowledge of capital markets and financial instruments, along with modelling expertise. If you are a forward …
models using SQL, Python, and R. Build interactive dashboards and data visualisations with Power BI and Tableau. Process and analyse large-scale datasets using Spark and Hadoop. Work with cloud platforms such as AWS and Azure to enhance data capabilities. Translate complex data insights into actionable recommendations for stakeholders. … large datasets for business impact. Strong problem-solving skills and adaptability to client challenges. Nice-to-Have: Experience with big data technologies such as Spark and Hadoop. Background in public sector, National Security, or Defence projects. Familiarity with GDPR and data privacy regulations. Existing security clearance or eligibility to …
City of London, England, United Kingdom Hybrid / WFH Options
Henderson Scott
You'll Use: Languages & Tools: SQL, Python, Power BI/Tableau, XML, JavaScript Platforms & Frameworks: Azure Data Services, Microsoft Fabric (nice to have), Hadoop, Spark Reporting & Visualization: Power BI, Tableau, Business Objects Methodologies: Agile/Scrum, CI/CD pipelines What You'll Be Doing: Designing and building robust … Python, and BI platforms like Tableau or Power BI Strong background in data warehousing, data modelling, and statistical analysis Experience with distributed computing (Hadoop, Spark) and data profiling Skilled at explaining complex technical concepts to non-technical audiences Hands-on experience with Azure Data Services (or similar cloud platforms …
and implementation experience with Microsoft Fabric (preferred), along with familiarity with Azure Synapse and Databricks. Experience in core data platform technologies and methods including Spark, Delta Lake, Medallion Architecture, pipelines, etc. Experience leading medium to large-scale cloud data platform implementations, guiding teams through technical challenges and ensuring alignment … Strong knowledge of CI/CD practices for data pipelines, ensuring automated, repeatable, and scalable deployments. Familiarity with open-source data tools such as Spark, and an understanding of how they complement cloud data platforms. Experience creating and maintaining structured technical roadmaps, ensuring successful delivery and future scalability of …
Senior Machine Learning Operations Specialist Posting Date: 18 Apr 2025 Function: Brand and Marketing Unit: Consumer Location: One Braham (4140), London, United Kingdom Salary: Competitive with great benefits London, Birmingham or Bristol 3 days in the office/2 days …
rekindle program. Note: For more details on the rekindle program, please visit the rekindle program page. Amazon's India Ad Tech organization is seeking a highly quantitative, enthusiastic Data Engineer to drive the development of Ads analytics and insights. Be a part of …
to enhance the user experience. Key skills: Senior Data Scientist experience Commercial experience in Generative AI and recommender systems Strong Python and SQL experience Spark/Apache Airflow LLM experience MLOps experience AWS Additional information: This role offers a strong salary of up to £95,000 (Depending on …
Central London, London, United Kingdom Hybrid / WFH Options
Cathcart Technology
to enhance the user experience. Key skills: ** Senior Data Scientist experience ** Commercial experience in Generative AI and recommender systems ** Strong Python and SQL experience ** Spark/Apache Airflow ** LLM experience ** MLOps experience ** AWS Additional information: This role offers a strong salary of up to £95,000 (Depending on …
City, Edinburgh, United Kingdom Hybrid / WFH Options
ENGINEERINGUK
years in software engineering, with 3+ years in API-backed ML deployment. Strong programming language skills in Python. Significant experience with SQL (e.g., RDBMS, Spark, Presto, or BigQuery). Experience with machine learning, optimization, and data manipulation tools (e.g., scikit-learn, XGBoost, cvxpy, Pandas, Spark, or PyTorch). …
developing Java systems. Proven track record of leading a team and delivering projects with a commercial mindset. Prior experience with Event Sourcing (Kafka, Akka, Spark) and data distribution-based architectures. Experience with NoSQL (Mongo, Elastic, Hadoop), in-memory (MemSQL, Ignite) and relational (Sybase, DB2, Sybase IQ) data store solutions. Strong … Strong communication skills and the ability to work in a team. Strong analytical and problem-solving skills. PREFERRED QUALIFICATIONS Experience with Kubernetes deployment architectures Apache NiFi experience Experience building trading controls within an investment bank ABOUT GOLDMAN SACHS At Goldman Sachs, we commit our people, capital and ideas to …
powered by best-in-class understanding of customer behavior and automation. Our work spans multiple technical disciplines: from deep-dive analytics using SQL and Spark SQL for large-scale data processing, to building automated marketing solutions with Python, Lambda, React.js, and leveraging internal personalisation toolkits to create and deploy … data science, machine learning and data mining. Experience with theory and practice of design of experiments and statistical analysis of results. Experience with Python, Spark SQL, QuickSight, AWS Lambda & React.js - core tools of the team. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is …
drive improvements in how millions of customers discover and evaluate products. Our work spans multiple technical disciplines: from deep-dive analytics using SQL and Spark SQL for large-scale data processing, to building automated marketing solutions with Python, Lambda, React.js, and leveraging internal personalisation toolkits to create and deploy … data science, machine learning and data mining. - Experience with theory and practice of design of experiments and statistical analysis of results. - Experience with Python, Spark SQL, QuickSight, AWS Lambda & React.js - core tools of the team. …
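A minimal sketch of the deep-dive Spark SQL analytics both of these Amazon roles describe; the dataset path, view name, and columns are hypothetical.

```python
# Hypothetical Spark SQL aggregation over a large event dataset.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("campaign_analytics").getOrCreate()

# Read a partitioned event table from the lake and expose it to SQL.
events = spark.read.parquet("s3://example-analytics-bucket/events/")
events.createOrReplaceTempView("events")

# Summarise reach and purchases per campaign over the last 28 days.
summary = spark.sql("""
    SELECT campaign_id,
           COUNT(DISTINCT customer_id)                          AS reached_customers,
           SUM(CASE WHEN action = 'purchase' THEN 1 ELSE 0 END) AS purchases
    FROM events
    WHERE event_date >= date_sub(current_date(), 28)
    GROUP BY campaign_id
    ORDER BY purchases DESC
""")
summary.show(20, truncate=False)

spark.stop()
```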