PySpark Jobs in London

76 to 100 of 111 PySpark Jobs in London

Lead AWS Data Engineer

London Area, United Kingdom
F5 Consultants
ensuring effective collaboration. Design, develop, and optimise scalable data pipelines and infrastructure using AWS (Glue, Athena, Redshift, Kinesis, Step Functions, Lake Formation). Utilise PySpark for distributed data processing, ETL, SQL querying, and real-time data streaming. Establish and enforce best practices in data engineering, coding standards, and architecture … expertise in AWS Data Services, including Glue, Athena, Redshift, Kinesis, Step Functions, Lake Formation and data lake design. Strong programming skills in Python and PySpark for data processing and automation. Extensive SQL experience (Spark-SQL, MySQL, Presto SQL) and familiarity with NoSQL databases (DynamoDB, MongoDB, etc.). Proficiency in …
Posted:
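For readers unfamiliar with the stack this role lists, here is a minimal, hypothetical sketch of a PySpark ETL written as an AWS Glue job script. The database, table, column, and bucket names are invented for illustration and are not taken from the advert.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (hypothetical names)
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
).toDF()

# Example transformation: daily revenue per region
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("region", "order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Write partitioned Parquet back to S3 so Athena can query it
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_revenue/"
)

job.commit()
```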

Senior Software Engineer (Data), Software - Europe Remote / London Hybrid

London, United Kingdom
Hybrid / WFH Options
Monolithai
Are you passionate about revolutionising engineering with AI? Here at Monolith AI we're on a mission to empower engineers to use AI to solve even their most intractable physics problems. We've doubled in size over the last four …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Engineer - Streaming Platform

London
Hybrid / WFH Options
Starling Bank
Responsibilities: Analyse data from source databases (PostgreSQL) to understand structures, relationships and semantics. Design and implement streaming data pipelines using AWS EMR and PySpark to generate real-time (fast-moving) features for the feature store. Develop and maintain batch processing pipelines using DBT and BigQuery to generate batch … ensure efficient integration into Feast feature store. Requirements: Good knowledge of programming languages such as Python or Java. Strong experience with streaming technologies (Spark, PySpark, Flink, KSQL or similar) for developing data transformation pipelines. Solid understanding and practical experience with SQL and relational databases (PostgreSQL preferred). Proficiency with …
Employment Type: Permanent
Posted:
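To make the streaming side of this role concrete, below is a minimal Spark Structured Streaming sketch that derives a fast-moving feature from a Kafka topic. The topic name, event schema, broker address, and sink paths are assumptions made for illustration, not details from the advert.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("realtime-features").getOrCreate()

# Hypothetical schema for transaction events
event_schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read raw events from a Kafka topic (names are illustrative)
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Fast-moving feature: spend per account over a sliding 10-minute window
spend_feature = (
    events
    .withWatermark("event_time", "15 minutes")
    .groupBy(F.window("event_time", "10 minutes", "1 minute"), "account_id")
    .agg(F.sum("amount").alias("spend_10m"))
)

# Write to a sink a feature store could ingest from
query = (
    spend_feature.writeStream
    .outputMode("append")
    .format("parquet")
    .option("path", "s3://example-feature-bucket/spend_10m/")
    .option("checkpointLocation", "s3://example-feature-bucket/_checkpoints/spend_10m/")
    .start()
)
query.awaitTermination()
```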

Data Engineer Manager

London, United Kingdom
Hybrid / WFH Options
Low Carbon Contracts Company
Data Engineer Manager. Department: Tech Hub. Employment Type: Permanent. Location: London. Reporting To: Lead Data Engineer. Description: Contract type: Permanent. Hours: Full time, 37.5 hours per week. Salary: circa £80,000 depending on experience. Location: Canary Wharf. WFH policy: Employees …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Data Scientist

London Area, United Kingdom
Hybrid / WFH Options
Harnham
across clustering, propensity modelling, regression, and NLP. Providing insights on their customers, pricing strategies, and the target audience. YOUR SKILLS AND EXPERIENCE: Python/PySpark experience is essential to create propensity models and clustering. NLP experience is a plus. Commercial awareness and insights experience is needed for this role. …
Posted:

Senior Data Scientist

London Area, United Kingdom
Hybrid / WFH Options
Harnham
both short and long-term projects across clustering, propensity modelling, regression, and NLP. Occasionally building dashboards for clients. YOUR SKILLS AND EXPERIENCE: Python/PySpark experience is essential to create propensity models and clustering. NLP experience is a plus. Commercial awareness and insights experience is needed for this role. …
Posted:

Senior Data Engineer

London Area, United Kingdom
Mastek
performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout … years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data …
Posted:
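As a small, self-contained sketch of the clean/enrich/aggregate pattern this role describes, the snippet below shows one way it might look in PySpark on Databricks. The mount paths, column names, and output table are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transform-example").getOrCreate()

# Hypothetical inputs: raw orders plus a customer reference table
orders = spark.read.parquet("/mnt/raw/orders")
customers = spark.read.parquet("/mnt/reference/customers")

cleaned = (
    orders
    .dropDuplicates(["order_id"])                      # remove duplicate events
    .filter(F.col("amount").isNotNull())               # basic data-quality rule
    .withColumn("order_date", F.to_date("order_ts"))   # standardise types
)

# Enrich with customer attributes, then aggregate per segment and day
enriched = cleaned.join(customers.select("customer_id", "segment"), "customer_id", "left")

daily_by_segment = (
    enriched.groupBy("segment", "order_date")
    .agg(
        F.countDistinct("order_id").alias("orders"),
        F.sum("amount").alias("revenue"),
    )
)

# Persist as a Delta table (assumes a Databricks/Delta Lake environment)
daily_by_segment.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue_by_segment")
```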

Senior Data Engineer

london, south east england, United Kingdom
Mastek
performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout … years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data More ❯
Posted:

DATA ENGINEER - SC CLEARED

London, United Kingdom
Hybrid / WFH Options
Onyx-Conseil
JSON and Spark), and schema management. Key Skills and Experience: Strong understanding of complex JSON manipulation; experience with Data Pipelines using custom Python/PySpark frameworks; knowledge of the 4 core Data categories (Reference, Master, Transactional, Freeform) and handling Reference Data; understanding of Data Security principles, access controls, GDPR … browser-based IDEs like Jupyter Notebooks; familiarity with Agile methodologies (SAFe, Scrum, JIRA). Languages and Frameworks: JSON, YAML, Python (advanced proficiency, Pydantic bonus), SQL, PySpark, Delta Lake, Bash, Git, Markdown, Scala (bonus), Azure SQL Server (bonus). Technologies: Azure Databricks, Apache Spark, Delta Tables, data processing with Python, PowerBI (Data …
Employment Type: Permanent
Salary: GBP Annual
Posted:
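Complex JSON manipulation is the core of this role, so here is a minimal sketch of nested-JSON flattening with PySpark on Databricks. The schema, landing path, and output table are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("json-flatten").getOrCreate()

# Hypothetical nested JSON: one record per customer, each with a list of addresses
raw = spark.read.json("/mnt/landing/customers/*.json", multiLine=True)

# Explode the nested array and pull nested fields up to top-level columns
flattened = (
    raw
    .select("customer_id", F.explode_outer("addresses").alias("addr"))
    .select(
        "customer_id",
        F.col("addr.type").alias("address_type"),
        F.col("addr.postcode").alias("postcode"),
    )
)

# Persist as a Delta table so downstream schema management stays explicit
flattened.write.format("delta").mode("overwrite").saveAsTable("reference.customer_addresses")
```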

Data Engineer (SC cleared)

City of London, London, United Kingdom
Experis
schemas (both JSON and Spark), schema management etc. Strong understanding of complex JSON manipulation. Experience working with Data Pipelines using custom Python/PySpark frameworks. Strong understanding of the 4 core Data categories (Reference, Master, Transactional, Freeform) and the implications of each, particularly managing/handling Reference Data. … write basic scripts) LANGUAGES/FRAMEWORKS: JSON, YAML, Python (as a programming language, not just able to write basic scripts), Pydantic experience. DESIRABLE: SQL, PySpark, Delta Lake, Bash (both CLI usage and scripting), Git, Markdown, Scala. DESIRABLE: Azure SQL Server as a HIVE Metastore. DESIRABLE TECHNOLOGIES: Azure Databricks, Apache …
Employment Type: Contract
Rate: £640 - £641 per day
Posted:

Customer Data Analytics Lead

London Area, United Kingdom
Montash
Define technical standards and drive excellence in engineering practices. Architect and oversee the development of cloud-native data infrastructure and pipelines using Databricks, Python, PySpark, and Delta Lake. Guide the implementation of embedded analytics, headless APIs, and real-time dashboards for customer-facing platforms. Partner with Product Owners … 5+ years in data/analytics engineering, including 2+ years in a leadership or mentoring role. Strong hands-on expertise in Databricks, Spark, Python, PySpark, and Delta Live Tables. Experience designing and delivering scalable data pipelines and streaming data processing (e.g., Kafka, AWS Kinesis, or Azure Stream Analytics …
Posted:

Jr Data Engineer London Hybrid

London, United Kingdom
Hybrid / WFH Options
DataBuzz
will play a crucial role in designing, developing, and maintaining data architecture and infrastructure. The successful candidate should possess a strong foundation in Python, PySpark, SQL, and ETL processes, with a demonstrated ability to implement solutions in a cloud environment. Position - Jr Data Engineer. Location - London. Job Type - Hybrid … Permanent. Mandatory Skills: Design, build, and maintain data pipelines using Python, PySpark and SQL. Develop and maintain ETL processes to move data from various data sources to our data warehouse on AWS/Azure/GCP. Collaborate with data scientists and business analysts to understand their data needs & develop solutions that … of our data solutions. Qualifications: Minimum 4+ years of total experience. At least 2+ years of hands-on experience with the mandatory skills - Python, PySpark, SQL.
Employment Type: Permanent
Salary: £40000 - £50000/annum
Posted:

Senior Data Engineer London Hybrid

London, United Kingdom
Hybrid / WFH Options
DataBuzz
will play a crucial role in designing, developing, and maintaining data architecture and infrastructure. The successful candidate should possess a strong foundation in Python, PySpark, SQL, and ETL processes, with a demonstrated ability to implement solutions in a cloud environment. Position - Senior Data Engineer. Experience - 6+ years. Location - London … Job Type - Hybrid, Permanent. Mandatory Skills: Design, build, and maintain data pipelines using Python, PySpark and SQL. Develop and maintain ETL processes to move data from various data sources to our data warehouse on AWS/Azure/GCP. Collaborate with data scientists and business analysts to understand their data … of our data solutions. Qualifications: Minimum 6+ years of total experience. At least 4 years of hands-on experience with the mandatory skills - Python, PySpark, SQL.
Employment Type: Permanent
Salary: £60000 - £70000/annum
Posted:

Data Architect (SC Cleared)

London, United Kingdom
Scrumconnect Limited
and manage data solutions that align with business needs and industry standards. The ideal candidate will have expertise in Java, SQL, Python, and Spark (PySpark & SparkSQL) while also being comfortable working with Microsoft Power Platform. Experience with Microsoft Purview is a plus. The role requires strong communication skills to … data standards. Key Responsibilities: 1. Data Architecture & Engineering: Design and implement scalable data architectures that align with business objectives. Work with Java, SQL, Python, PySpark, and SparkSQL to build robust data pipelines. Develop and maintain data models tailored to organizational needs. Reverse-engineer data models from existing live systems. …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Head of Data & AI

London Area, United Kingdom
Careerwise
ensuring data assets are leveraged for maximum operational and commercial impact. Technology Ownership: Stay current with the latest developments in AI/ML, Databricks, PySpark, and Power BI; evaluate and integrate advancements to improve data pipelines, data science workflows, and reporting capabilities. Data Infrastructure: Oversee the design, implementation, and … cybersecurity, or software distribution. Should have experience in GenAI, Graph, Neo4j, Azure Databricks. Expertise in cloud-native data platforms, with strong proficiency in Databricks, PySpark, and Power BI. Solid understanding of AI/ML applications in real-world business use cases. Strong knowledge of data governance, data warehousing …
Posted:

Senior Software Engineer - Distributed Systems (AI Enablement)

London, UK
myGwork
using Java-based microservices and Python batch processing to support AI guardrails, evaluation, and observability. Data Pipelines: Create and maintain robust data pipelines using PySpark and Databricks, ensuring efficient and reliable data flow across AI systems. Vendor Integration: Identify and leverage vendor capabilities (e.g., AWS, Databricks, and other cloud … with distributed systems engineering, including designing and implementing Java-based microservices and Python batch jobs. Data Engineering Skills: Proficiency in building data pipelines using PySpark and Databricks, with a strong understanding of data flow and processing. Cloud Vendor Experience: Hands-on experience leveraging vendor technologies like AWS and Databricks …
Posted:

Data Engineering Senior Manager

London, United Kingdom
Accenture
of data for critical client projects. How to develop and enhance your knowledge of agile ways of working and working in an open source stack (PySpark/PySQL). Quality engineering professionals utilize Accenture delivery assets to plan and implement quality initiatives to ensure solution quality throughout delivery. As a … team members to provide regular progress updates and raise any risks/concerns/issues. Core skills we're working with include: Palantir, Python, PySpark/PySQL, AWS or GCP. What's in it for you: At Accenture, in addition to a competitive basic salary, you will also have …
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Engineer

London Area, United Kingdom
Hybrid / WFH Options
Burns Sheehan
Senior Data Engineer 💰 £90,000-£100,000 + 10% bonus 🖥️ Databricks, Snowflake, Terraform, PySpark, Azure 🌍 London, hybrid working (2 days in office) 🏠 Leading property data & risk software company. We are partnered with an industry-leading property data and risk software company, helping businesses in the UK make better decisions. … are experienced with Databricks and Lakehouse architecture for efficient data management. You are experienced in building and optimising data pipelines and have strong SQL and PySpark skills. You have a strong understanding of the Azure stack. You have experience with DevOps practices and infrastructure-as-code, preferably Terraform. You have … membership, gym on-site, cycle to work and electric car schemes. …
Posted:

Senior Data Engineer

London, United Kingdom
Chambers & Partners
contract data engineers to supplement existing team during implementation phase of new data platform. Main Duties and Responsibilities: Write clean and testable code using PySpark and SparkSQL scripting languages, to enable our customer data products and business applications. Build and manage data pipelines and notebooks, deploying code in a … Experience: Excellent understanding of Data Lakehouse architecture built on ADLS. Excellent understanding of data pipeline architectures using ADF and Databricks. Excellent coding skills in PySpark and SQL. Excellent technical governance experience such as version control and CI/CD. Strong understanding of designing, constructing, administering, and maintaining data warehouses …
Employment Type: Permanent
Salary: GBP Annual
Posted:
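To illustrate the PySpark-plus-SparkSQL notebook style this role calls for, one common pattern is to register a DataFrame as a temporary view and finish the logic in SQL. The layer and table names below are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lakehouse-notebook").getOrCreate()

# Hypothetical bronze-layer table in the lakehouse
events = spark.read.table("bronze.web_events").filter(F.col("event_type").isNotNull())

# Mix of DataFrame API and SparkSQL: register a view, then aggregate in SQL
events.createOrReplaceTempView("web_events_clean")

daily_counts = spark.sql("""
    SELECT event_type,
           CAST(event_ts AS DATE) AS event_date,
           COUNT(*)               AS events
    FROM web_events_clean
    GROUP BY event_type, CAST(event_ts AS DATE)
""")

# Write to the silver layer as Delta, ready for BI consumption (assumes Databricks/Delta Lake)
daily_counts.write.format("delta").mode("overwrite").saveAsTable("silver.daily_event_counts")
```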

Data Engineer PySpark AWS - Relocate to Dubai

West London, London, United Kingdom
Hybrid / WFH Options
Client Server
Data Engineer (PySpark AWS Glue) London/WFH to £65k. Are you a technologist Data Engineer looking for an opportunity to work with modern technology with continual learning and development opportunities? You could be joining a global events management company that specialise in large scale corporate and prestige event … enable marketing to send email campaigns of c. 2.5 million at a time. There's a modern tech stack; you'll mainly be using Python, PySpark and AWS services but will also gain skills with AWS Glue, AWS Lambda, Airflow, CI/CD and GitHub. You'll be working with … a 2.1 or above in a STEM discipline, Computer Science preferred. You have commercial experience as a Data Engineer. You have strong Python and PySpark skills. You have AWS Glue experience. You're collaborative with great communication skills and enthusiasm to learn and progress as a Data Engineer. Experience …
Employment Type: Permanent, Work From Home
Salary: £65,000
Posted:
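As a hedged sketch of the kind of campaign-audience preparation a pipeline like this might perform, the PySpark snippet below dedupes and batches a large recipient list. All column names, paths, and the batch count are assumptions for illustration, not details from the advert.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("campaign-audience").getOrCreate()

# Hypothetical contact data landed in S3 by an upstream Glue job
contacts = spark.read.parquet("s3://example-marketing-bucket/contacts/")

audience = (
    contacts
    .filter(F.col("opted_in"))                                # respect marketing consent
    .dropDuplicates(["email"])                                # one send per address
    .withColumn("send_batch", F.abs(F.hash("email")) % 50)   # split millions of recipients into batches
)

# Write one folder per batch so the email platform can pick them up incrementally
(
    audience.select("email", "first_name", "send_batch")
    .write.mode("overwrite")
    .partitionBy("send_batch")
    .parquet("s3://example-marketing-bucket/campaign_audience/")
)
```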

PySpark salary percentiles in London:
10th Percentile: £60,800
25th Percentile: £91,875
Median: £110,000
75th Percentile: £138,750
90th Percentile: £147,500