backend development focused on data platforms. Strong hands-on experience with AWS services, especially Glue, Athena, Lambda, and S3. Proficient in Python (ideally PySpark) and modular SQL for transformations and orchestration. Solid grasp of data modeling (partitioning, file formats like Parquet, etc.). Comfort with CI/CD …
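The partitioning mentioned above refers to the Hive-style directory layout that Glue and Athena expect for partitioned Parquet data in S3. A minimal plain-Python sketch of that layout follows; the bucket, table, and column names are hypothetical, chosen only for illustration:

```python
# Sketch of the Hive-style partition layout that Glue/Athena expect for
# partitioned Parquet data. Bucket, table, and column names are hypothetical.

def partition_path(bucket: str, table: str, partitions: dict) -> str:
    """Build an S3 prefix like s3://bucket/table/year=2024/month=03/."""
    parts = "/".join(f"{col}={val}" for col, val in partitions.items())
    return f"s3://{bucket}/{table}/{parts}/"

path = partition_path("analytics-lake", "events", {"year": "2024", "month": "03"})
print(path)  # s3://analytics-lake/events/year=2024/month=03/
```

Laying files out under prefixes like this is what lets Athena prune partitions: a query filtered on `year` and `month` only scans the matching prefixes.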
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Citigroup Inc
in Java Some knowledge in Hadoop, Hive, SQL, Spark Understanding of Unix Shell Scripting CI/CD Pipeline Maven or Gradle experience Predictive analytics (desirable) PySpark (desirable) Trade Surveillance domain knowledge (desirable) Education: Bachelor’s/University degree or equivalent experience What we’ll provide you: By joining Citi, you …
TeamCity, Jenkins Knowledge of containerization and Azure services (Service Bus, Function Apps, ADFs) Understanding of data technologies like Data Warehouse, Snowflake, ETL, Data Pipelines, PySpark, Delta tables, Parquet, columnar formats Strong SQL and stored procedures knowledge Ability to lead large-scale performance and automation testing Experience administering testing environments …
London, England, United Kingdom Hybrid / WFH Options
Creative Assembly
valuable insights Desirable Experience working with enterprise data warehouse or data lake platforms Experience working with a cloud platform such as AWS Have used PySpark for data manipulation Previous exposure to game or IoT telemetry events and how such data is generated Knowledge of best practices involving data governance …
tools like TeamCity, Jenkins Knowledge of containerization, Azure services (Service Bus, Function Apps, ADFs) Understanding of data technologies (Data Warehouse, Snowflake, ETL, Data Pipelines, PySpark, Delta tables, Parquet) Strong SQL and stored procedures skills Experience leading performance and automation testing for large projects Ability to manage testing environments within …
London, England, United Kingdom Hybrid / WFH Options
Movera
experience working with Azure Git/DevOps Repos experience Demonstration of problem solving ability Synapse Analytics or similar experience - desirable Visual Files experience - desirable PySpark/Python experience - desirable PowerShell experience - desirable What we offer: We aim to reward your hard work generously. You’ll be greeted in our …
strong knowledge of data product development & management best practices. Primary technical skills required: T-SQL, Azure Data Lake, Azure Synapse Analytics, Apache Spark/PySpark, Azure Data Factory, and Power BI. Azure Analysis Services is a nice to have. Extensive experience developing SQL relational databases and data warehousing technologies. …
City of London, London, United Kingdom Hybrid / WFH Options
un:hurd music
data pipelines by collecting high-quality, consistent data from external APIs and ensuring seamless incorporation into existing systems. Big Data Management and Storage : Utilize PySpark for scalable processing of large datasets, implementing best practices for distributed computing. Optimize data storage and querying within a data lake environment to enhance …
clearance at SC level. You must be SC cleared to be considered for the role. Tasks and Responsibilities: Engineering: Ingestion configuration. Write Python/PySpark and Spark SQL code for validation/curation in notebooks. Create data integration test cases. Implement or amend worker pipelines. Implement data validation …
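The validation/curation step described in this listing could, in a much simplified form, look like the following plain-Python sketch. The field names and rules are hypothetical; in the actual pipeline the equivalent checks would be expressed as PySpark column expressions over a DataFrame:

```python
# Simplified sketch of a row-level validation step; field names and rules
# are hypothetical. In a real pipeline the same checks would be written as
# PySpark column expressions rather than per-row Python.

def validate_row(row: dict) -> list:
    """Return a list of rule violations for one record (empty list = valid)."""
    errors = []
    if not row.get("record_id"):
        errors.append("record_id missing")
    if row.get("quantity") is not None and row["quantity"] < 0:
        errors.append("quantity negative")
    return errors

def split_valid_invalid(rows):
    """Partition records into curated (valid) and quarantined (invalid) sets."""
    valid, invalid = [], []
    for row in rows:
        (valid if not validate_row(row) else invalid).append(row)
    return valid, invalid

good, bad = split_valid_invalid([
    {"record_id": "R1", "quantity": 10},
    {"record_id": "", "quantity": -5},
])
print(len(good), len(bad))  # 1 1
```

Separating curated from quarantined records like this is a common pattern: valid rows flow on to downstream pipelines while invalid ones are retained for inspection, which is also what the data integration test cases would assert against.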
a modern tech stack including SQL, Python, Airflow, Kubernetes, and various other cutting-edge technologies. You'll work with tools like dbt on Databricks, PySpark, Streamlit, and Django, ensuring robust data infrastructure that powers business-critical operations. What makes this role particularly exciting is the combination of technical depth …
System Integration, Application Development or Data-Warehouse projects, across technologies used in the enterprise space. Software development experience using: Object-oriented languages (e.g., Python, PySpark) and frameworks Stakeholder Management Expertise in relational and dimensional modelling, including big data technologies. Exposure across all the SDLC process, including testing and deployment. …
Exeter, England, United Kingdom Hybrid / WFH Options
MBN Solutions
Quality and Information Security principles Experience with Azure, ETL Tools such as ADF and Databricks Advanced Database and SQL skills, along with SQL, Python, PySpark, Spark SQL Strong understanding of data model design and implementation principles Data warehousing design patterns and implementation Mainly home based working. Twice a month …
London, England, United Kingdom Hybrid / WFH Options
J.D. Power
in backend development. A strong foundation in data engineering , including designing practical data schemas and working with large-scale distributed systems. Proficiency in Python, PySpark, SQL, C#, .NET, and TypeScript, and a willingness to quickly learn new technologies. Experience with Agile processes , source control (Git) , and delivering web-based …
Knowledge of node-based architecture, graph databases and languages - Neptune, Neo4j, Gremlin, Cypher Experience 8+ years of experience with Databricks product suite, Spark, Scala, PySpark, Python 8+ years of experience in SQL and database technologies like Snowflake or equivalent. 5+ years of experience with AWS services (EKS, S3, EC2 …
coding practices. Required Technical Skills: Proven experience in data warehouse architecture and implementation. Expertise in designing and configuring Azure-based deployment pipelines. SQL, Python, PySpark Azure Data Lake + Databricks Traditional ETL tool This is an excellent opportunity for a talented Senior Data Engineer to join a business who are …
Gildersome, England, United Kingdom Hybrid / WFH Options
Stark Danmark A/S
of MLOps and associated tools such as Azure DevOps/GitHub, MLflow, Azure ML. Experience working with large datasets/big data architectures, particularly PySpark/Databricks. Experience deploying container technologies (e.g. Docker, Kubernetes). Experience playing a lead role on technical AI projects. Excellent communication skills with both …
Some Other Highly Valued Skills Include Understanding of code best practices including Git. Familiarity with AWS data science services, including SageMaker. Experience working with PySpark to process data at scale. Proficiency in data visualization tools, e.g. Tableau. You may be assessed on key critical skills relevant for success in …