PySpark Jobs in England

51 to 75 of 90 PySpark Jobs in England

Cloud Data Engineer

Knutsford, England, United Kingdom
Hybrid / WFH Options
Capgemini
… warehousing solutions. Familiarity with DBT (Data Build Tool) for managing transformations in the data pipeline. Strong programming skills in technologies such as Python, Scala, Spark, PySpark or Ab Initio, plus Glue, Starburst, Snowflake, Redshift and hybrid (on-prem to cloud) environments, for building data pipelines and ETL processes. Experience with data pipeline …

Cloud Data Engineer

Northamptonshire, England, United Kingdom
Hybrid / WFH Options
Capgemini
… warehousing solutions. Familiarity with DBT (Data Build Tool) for managing transformations in the data pipeline. Strong programming skills in technologies such as Python, Scala, Spark, PySpark or Ab Initio, plus Glue, Starburst, Snowflake, Redshift and hybrid (on-prem to cloud) environments, for building data pipelines and ETL processes. Experience with data pipeline …

Technical Product Lead - Active SC Mandatory

London Area, United Kingdom
Infoplus Technologies UK Limited
… Banking and Financial Services sector is advantageous. • Deep knowledge of or experience with as many of the following as possible: • Azure Cloud Data Components | Databricks | Python | PySpark | Terraform | APIs | Lakehouse | Data Mesh | NoSQL DBs | GitHub. Person Specification: • Self-motivated, with a desire to learn new skills and embrace new technologies in …

Engineering Delivery Lead (AWS/Snowflake/Python/Scala) - UK based - 6 Month contract + extensions - £450-550/day + negotiable

England, United Kingdom
Orbis Group
… experience leading a data engineering team. Key Tech: - AWS (S3, Glue, EMR, Athena, Lambda) - Snowflake, Redshift - DBT (Data Build Tool) - Programming: Python, Scala, Spark, PySpark or Ab Initio - Data pipeline orchestration (Apache Airflow) - Knowledge of SQL. This is a 6-month initial contract with a trusted client of ours. …

Data Scientist / Data Engineer

South Harting, England, United Kingdom
Hybrid / WFH Options
Adecco
… insurance sector is advantageous. Education: a degree in Computer Science, Data Science, Engineering, or a related field. Technical skills: strong proficiency in Python, SQL, PySpark, and Databricks. Demonstrated experience with modern NLP techniques and tools. Proven ability to create and manage data quality metrics and dashboards. Experience working in …

Data Engineer

London Area, United Kingdom
Harrington Starr
… start interviewing ASAP. Responsibilities: Azure cloud data engineering using Azure Databricks; data warehousing; data engineering. Very strong with the Microsoft stack. ESSENTIAL: knowledge of PySpark clusters; Python & C# scripting experience; experience of message queues (Kafka); experience of containerization (Docker); FINANCIAL SERVICES EXPERIENCE (energy/commodities trading). If you have …
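For context on the PySpark and Kafka requirements above (an illustrative sketch, not part of the advert): reading a Kafka topic with Spark Structured Streaming can be as small as the snippet below. The broker address, topic name and checkpoint path are hypothetical, and it assumes the Spark-Kafka connector package is available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Hypothetical broker and topic names, for illustration only.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "trades")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

# Echo the raw messages to the console; a real job would parse the
# payload and land it in storage instead.
query = (
    events.writeStream.format("console")
    .option("checkpointLocation", "/tmp/checkpoints/trades")
    .start()
)
query.awaitTermination()
```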

Lead Data Engineer

London, England, United Kingdom
Arrows
… preferably GCP | Expertise in event-driven data integrations and click-stream ingestion | Proven ability in stakeholder management and project leadership | Proficiency in SQL, Python, PySpark | Solid background in data pipeline orchestration, data access, and retention tooling | Demonstrable impact on infrastructure scalability and data privacy initiatives | Collaborative spirit | Innovative problem …

Data Engineer

London Area, United Kingdom
Prism Digital
Lead Azure Data Engineer | PySpark (Python) & Synapse | Tech for Good/Charity … people comprising developers, data engineers, QA and DevOps. Essential skills required: Azure – solid experience of the Azure Data ecosystem; Python – ESSENTIAL, as PySpark is used heavily (you will be tested on PySpark); Azure Synapse – ESSENTIAL, as it is used heavily; Spark; Azure Data Lake/Databricks/… architecture; familiar with Synapse CI/CD; experience with Azure Purview or another governance tool; familiar with building catalogs and lineage. Lead Azure Data Engineer | PySpark (Python), Synapse, Data Lake | Tech for Good/Charity …

Mid Level Data Scientist (Databricks / PySpark)

South Harting, England, United Kingdom
Adecco
… insurance sector is advantageous. Education: a degree in Computer Science, Data Science, Engineering, or a related field. Technical skills: strong proficiency in Python, SQL, PySpark, and Databricks. Demonstrated experience with modern NLP techniques and tools. Proven ability to create and manage data quality metrics and dashboards. Experience working in …

Senior Data Engineer

Penrith, Cumbria, United Kingdom
Hybrid / WFH Options
Computer Futures
… Knowledge of Spark architecture and modern data warehouse/data lake/lakehouse techniques. Build transformation tables using SQL. Moderate-level knowledge of Python/PySpark or an equivalent programming language. Power BI Data Gateways and Dataflows, permissions. Creation, utilisation, optimisation and maintenance of relational SQL and NoSQL databases. Experienced working with …
Employment Type: Permanent
Salary: £40,000 - £65,000/annum + benefits
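As a rough, illustrative sketch of the "build transformation tables using SQL" requirement above (not taken from the advert): register source data as a temporary view and materialise a transformation table with Spark SQL. The table and column names are invented.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("transformation-table-sketch").getOrCreate()

# Hypothetical raw orders data standing in for a source table.
orders = spark.createDataFrame(
    [(1, "2024-01-05", 120.0), (2, "2024-01-06", 80.5), (1, "2024-02-01", 35.0)],
    ["customer_id", "order_date", "amount"],
)
orders.createOrReplaceTempView("raw_orders")

# Build the transformation table in plain SQL: monthly spend per customer.
monthly_spend = spark.sql("""
    SELECT customer_id,
           date_format(to_date(order_date), 'yyyy-MM') AS order_month,
           SUM(amount) AS total_spend
    FROM raw_orders
    GROUP BY customer_id, date_format(to_date(order_date), 'yyyy-MM')
""")

# Persist the result to the configured catalogue/warehouse.
spark.sql("CREATE DATABASE IF NOT EXISTS analytics")
monthly_spend.write.mode("overwrite").saveAsTable("analytics.monthly_spend")
```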

Senior Data Engineer

Chiswick, England, United Kingdom
Square One Resources
… SQL Server and relational databases. Solid understanding of the Azure data engineering stack, including Azure Synapse and Azure Data Lake. Programming skills in Python, PySpark, and T-SQL. Nice-to-haves: familiarity with broader Azure Data Solutions, such as Azure ML Studio. Previous experience with Azure DevOps and knowledge …

Lead Data Engineer

London Area, United Kingdom
Tredence Inc
… Azure Cloud platform. Knowledge of orchestrating workloads in the cloud. Ability to set and lead the technical vision while balancing business drivers. Strong experience with PySpark and Python programming. Proficiency with APIs, containerization and orchestration is a plus. Qualifications: Bachelor's and/or Master's degree. About you: You are …

Data Engineer

Derby, England, United Kingdom
Mirai Talent
… related field. Certifications such as Azure Data Engineer Associate are desirable. Knowledge of data ingestion methods for real-time and batch processing. Proficiency in PySpark and debugging Apache Spark workloads. What's in it for you? Annual bonus scheme – up to 10%; excellent pension scheme; flexible working; enhanced family …

Senior Data Engineer

London Area, United Kingdom
Hybrid / WFH Options
MBN Solutions
… Quality and Information Security principles. Experience with Azure and ETL tools such as ADF and Databricks. Advanced database and SQL skills, along with SQL, Python, PySpark and Spark SQL. Strong understanding of data model design and implementation principles. Data warehousing design patterns and implementation. Benefits: £50-£60k DOE, mainly home-based …

Data Engineer

London Area, United Kingdom
Harrington Starr
… of Python. Experience developing in the cloud (AWS preferred). Solid understanding of libraries like Pandas and NumPy. Experience with data warehousing tools like Snowflake, PySpark and Databricks. Commercial experience with performant database programming in SQL. Ability to solve complex technical issues, anticipating risks before they arise. Please apply today. …

Machine Learning Engineer

City Of London, England, United Kingdom
Hybrid / WFH Options
RJC Group
… experience. Data access methods (SQL, GraphQL, APIs). Beneficial requirements: experience with data science tools and algorithms; manipulation technologies (e.g., WebSockets, Kafka, Spark); TensorFlow, Pandas, PySpark and scikit-learn would be great. Salary up to £75K + 20% bonus and benefits package. We have interview slots lined up for later …

Data Scientist

Ipswich, England, United Kingdom
AXA UK
… with operational stakeholders, proposing how a machine learning model could benefit the process, and finally building the model to realise the benefit. Experience using PySpark to process large-scale data would be advantageous, particularly within the Databricks platform. Familiarity with insurance claims data is preferred but not essential. …
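Purely as context for the PySpark-on-Databricks requirement above, and not part of the advert: a minimal aggregation over claims-like data might look like the sketch below. The file path and column names are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-aggregation-sketch").getOrCreate()

# Hypothetical Parquet dataset of claims; on Databricks this could equally
# be a DBFS path or a catalogue table.
claims = spark.read.parquet("/data/claims/2024/")

# Claim count and average settled amount per claim type.
summary = (
    claims.filter(F.col("status") == "settled")
    .groupBy("claim_type")
    .agg(
        F.count("*").alias("n_claims"),
        F.round(F.avg("settled_amount"), 2).alias("avg_settled_amount"),
    )
    .orderBy(F.desc("n_claims"))
)

summary.show(truncate=False)
```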

Machine Learning Engineer

London Area, United Kingdom
Harnham
… Strong experience in data pipelines and deploying ML models. Preference for experience in retail/marketing, but not required. Tech across: Python, AWS, Databricks, PySpark, A/B testing, MLflow, APIs. Experience in feature engineering and third-party data. Apply below. …
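For orientation on the MLflow experience mentioned above (illustrative only, not from the advert): a minimal MLflow tracking run looks like the sketch below. The experiment name, parameters and metric values are made up.

```python
import mlflow

# Hypothetical experiment name and values, purely illustrative.
mlflow.set_experiment("uplift-model-sketch")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("features", "recency,frequency,monetary")
    mlflow.log_metric("auc", 0.71)
    mlflow.log_metric("lift_at_10pct", 1.8)
```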

Data Engineer

London Area, United Kingdom
Axtria - Ingenious Insights
… Mart. Utilize vector databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages and tools including Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement machine learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/… environments, Azure Data Lake, Azure Data Factory, microservices architecture. Experience with vector databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps tools such …

Data Pipeline Engineer

London Area, United Kingdom
Hexegic
… not received on time. Communicating outages to the end users of a data pipeline. What we value: comfortable reading and writing code in Python, PySpark and Java. Basic understanding of Spark and an interest in learning the basics of tuning Spark jobs. Data pipeline monitoring team members should be able …
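As a loose illustration of the "basics of tuning Spark jobs" mentioned above (not part of the advert): a few common starting points are shown below, such as the shuffle partition count, caching a reused DataFrame and inspecting a plan. The values and path are arbitrary examples, not recommendations.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-basics-sketch").getOrCreate()

# Lower the shuffle partition count for a modest dataset; the default
# of 200 is often too high for small jobs.
spark.conf.set("spark.sql.shuffle.partitions", "64")

# Hypothetical input path.
df = spark.read.parquet("/data/events/")

# Cache a DataFrame that several downstream aggregations reuse.
df.cache()
daily = df.groupBy("event_date").count()
by_type = df.groupBy("event_type").count()

# Inspect the physical plan to see scans and shuffles before running at scale.
daily.explain()

daily.show()
by_type.show()
```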

Data Engineer

London Area, United Kingdom
Mars
… to the table. Key responsibilities: engineer and orchestrate data flows & pipelines in a cloud environment using a progressive tech stack, e.g. Databricks, Spark, Python, PySpark, Delta Lake, SQL, Logic Apps, Azure Functions, ADLS, Parquet, Neo4j, Flask. Ingest and integrate data from a large number of disparate data sources. Design … Spark/Databricks or similar. Experience working in a cloud environment (Azure, AWS, GCP). Experience in at least one of: Python (or similar), SQL, PySpark. Experience in building data pipeline/ETL/ELT solutions. Ability and strong desire to research and learn new technologies and languages. Interest in …
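To ground the Databricks/PySpark/Delta Lake stack named above (an illustrative sketch, not the employer's code): landing raw Parquet data as a partitioned Delta table might look like this. The paths are hypothetical, and it assumes a cluster, e.g. Databricks, where the Delta Lake libraries are already configured.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a session already exists; getOrCreate() simply reuses it.
spark = SparkSession.builder.appName("delta-landing-sketch").getOrCreate()

# Hypothetical raw Parquet drop from an upstream source.
raw = spark.read.parquet("/mnt/raw/sales/")

# Light standardisation before landing the data.
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("ingested_at", F.current_timestamp())
)

# Write a partitioned Delta table (requires Delta Lake on the cluster).
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/sales_delta/")
)
```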

Big Data Engineer - Python, PySpark

City of London, London, United Kingdom
McGregor Boyall Associates Limited
… a strong background in business change and transformation, focused expressly on data analytics and big data platforms. 5+ years of big data experience utilising PySpark. 5+ years of managing data analytics projects within a financial domain (banking/investments). Background within investment management, financial services, etc. Project management experience …
Employment Type: Permanent
Salary: £600 - £800 per day

Data Engineer - AWS

London, United Kingdom
Boston Hale
… performance, scalability, and reliability. Technical skills required: Redshift; Glue (inc. Glue Studio, Glue Data Quality, Glue DataBrew); Step Functions; Athena; Lambda; Kinesis; Python, Spark, PySpark, SQL. Your contributions as a Data Engineer will directly impact the organization's operations and revenue. In addition to a competitive annual salary, we …
Employment Type: Permanent
Salary: £70,000

Senior Data Scientist

London, Tottenham Court Road, United Kingdom
Be-IT Resourcing Ltd
… engineering leaders/stakeholders in decision-making and implementing the models into production. You will need hands-on skills in Python and PySpark, experience working in a cloud environment, and knowledge of development tools like Git or Docker. You can also expect to work with the latest …
Employment Type: Permanent

AWS Data Engineer

SW19, Dundonald, Greater London, United Kingdom
DataBuzz
… Engineer, you will be pivotal in designing, developing, and maintaining data architecture and infrastructure. The ideal candidate should have a strong foundation in Python, PySpark, SQL, and ETL processes, along with proven experience implementing solutions in a cloud environment. Roles & responsibilities: experienced Data Engineer with a background in … and mastering to management and distribution of large datasets. Mandatory skills: 6+ years of experience designing, building, and maintaining data pipelines using Python, PySpark and SQL. Develop and maintain ETL processes to move data from various data sources to our data warehouse on AWS. Collaborate with data scientists …
Employment Type: Permanent
Salary: £50,000 - £55,000/annum
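As a hedged sketch of the Python/PySpark/SQL pipeline work described above (illustrative only, not from the advert): the example below reads CSV files from S3, applies a SQL clean-up and writes Parquet back to S3. The bucket names and columns are invented, it assumes the cluster has the Hadoop S3 connector and credentials configured, and loading into a warehouse such as Redshift would normally be a separate COPY or JDBC step.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Hypothetical S3 locations; s3a:// access assumes the Hadoop AWS
# connector and credentials are configured on the cluster.
source_path = "s3a://example-raw-bucket/customers/*.csv"
target_path = "s3a://example-curated-bucket/customers_clean/"

customers = spark.read.option("header", True).csv(source_path)
customers.createOrReplaceTempView("customers")

# SQL-side clean-up: deduplicate and keep only active records.
clean = spark.sql("""
    SELECT DISTINCT customer_id, email, country
    FROM customers
    WHERE status = 'active'
""")

# Land the curated data as Parquet, ready for a warehouse COPY/load step.
clean.write.mode("overwrite").parquet(target_path)
```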

PySpark (England)
10th Percentile: £52,500
25th Percentile: £57,500
Median: £80,000
75th Percentile: £92,500
90th Percentile: £110,500