Manchester, Lancashire, United Kingdom Hybrid / WFH Options
First Central Services
Location: Guernsey, Haywards Heath, Home Office (Remote) or Manchester
Salary: £85,000 - £100,000, depending on experience
Department: Technology and Data
We're First Central Insurance & Technology Group (First Central for short), an innovative, market-leading insurance company. We protect …
closely with teams across trading, finance, compliance, and ops. Profile: strong experience implementing Snowflake in a lead or senior capacity; solid background in Python, PySpark, and Spark; hands-on platform setup, ideally with a DevOps-first approach; exposure to AWS environments; experience working with data from trading platforms … or within commodities, banking, or financial services. Tech environment: Primary platform: Snowflake. Other tech: DBT, Databricks, Spark, PySpark, Python. Cloud: AWS (preferred), private cloud storage. Data sources: financial/trading systems …
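Given the Snowflake-centric stack above, here is a minimal sketch of querying Snowflake from Python with the snowflake-connector-python package; the account, credentials, warehouse, database, and table name are all hypothetical placeholders, not details from the listing.

```python
# Illustrative sketch only: querying Snowflake from Python. Every
# credential and identifier below is an assumed placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",    # assumed
    user="example_user",          # assumed
    password="example_password",  # assumed
    warehouse="ANALYTICS_WH",     # assumed
    database="TRADING",           # assumed
)
try:
    cur = conn.cursor()
    cur.execute("SELECT trade_id, notional FROM trades LIMIT 10")
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```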
schemas (both JSON and Spark), schema management, etc. Strong understanding of complex JSON manipulation. Experience working with data pipelines using a custom Python/PySpark framework. Strong understanding of the 4 core data categories (Reference, Master, Transactional, Freeform) and the implications of each, particularly managing/handling Reference Data. … write basic scripts) LANGUAGES/FRAMEWORKS: JSON, YAML, Python (as a programming language, not just able to write basic scripts), Pydantic experience. DESIRABLE: SQL, PySpark, Delta Lake, Bash (both CLI usage and scripting), Git, Markdown, Scala. DESIRABLE: Azure SQL Server as a HIVE Metastore. DESIRABLE TECHNOLOGIES: Azure Databricks, Apache …
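As a minimal sketch of the JSON-plus-Spark schema management this listing describes, assuming a hypothetical "Event" record: a Pydantic model validates raw JSON, and an explicit Spark StructType (rather than schema inference) governs the DataFrame. The Pydantic v2 API (model_validate_json, model_dump) is assumed.

```python
# Illustrative sketch: validating JSON with Pydantic before loading it
# into Spark under an explicit, version-controlled schema. The "Event"
# record and its fields are hypothetical, not taken from the listing.
from pydantic import BaseModel
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, LongType


class Event(BaseModel):
    event_id: str
    source: str
    ts: int  # epoch milliseconds


# Matching Spark schema, managed explicitly rather than inferred.
EVENT_SCHEMA = StructType([
    StructField("event_id", StringType(), nullable=False),
    StructField("source", StringType(), nullable=False),
    StructField("ts", LongType(), nullable=False),
])

raw = ['{"event_id": "e1", "source": "api", "ts": 1700000000000}']
events = [Event.model_validate_json(r) for r in raw]
rows = [(e.event_id, e.source, e.ts) for e in events]

spark = SparkSession.builder.appName("schema-demo").getOrCreate()
df = spark.createDataFrame(rows, schema=EVENT_SCHEMA)
df.show()
```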
Define technical standards and drive excellence in engineering practices. Architect and oversee the development of cloud-native data infrastructure and pipelines using Databricks, Python, PySpark, and Delta Lake. Guide the implementation of embedded analytics, headless APIs, and real-time dashboards for customer-facing platforms. Partner with Product Owners … 5+ years in data/analytics engineering, including 2+ years in a leadership or mentoring role. Strong hands-on expertise in Databricks, Spark, Python, PySpark, and Delta Live Tables. Experience designing and delivering scalable data pipelines and streaming data processing (e.g., Kafka, AWS Kinesis, or Azure Stream Analytics …
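For the streaming-into-Delta work this role describes, a minimal PySpark Structured Streaming sketch follows; the Kafka broker, topic, and storage paths are hypothetical, and it assumes the Kafka connector and Delta Lake packages are on the Spark classpath.

```python
# Illustrative sketch: reading a Kafka topic with Structured Streaming
# and appending it to a Delta table. Broker, topic, and paths are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed endpoint
    .option("subscribe", "orders")                     # assumed topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")  # assumed path
    .outputMode("append")
    .start("/tmp/delta/orders")                               # assumed path
)
query.awaitTermination()
```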
Azure data engineering and cloud-native development. Showcase strong, hands-on experience with Microsoft Fabric, including Lakehouse (Delta format), OneLake, Pipelines & Dataflows Gen2, Notebooks (PySpark), Power BI & Semantic Models. Possess a solid understanding of data integration patterns, ETL/ELT, and modern data architectures. Be familiar with CI/CD practices in a data engineering context. Have excellent SQL and Spark (PySpark) skills. Have experience working with large, complex datasets and performance optimization. Exhibit strong leadership and communication skills. Demonstrate previous experience in leading and mentoring a technical team. Be Microsoft Certified (Azure Data Engineer Associate …
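A short sketch of the kind of Fabric notebook cell this listing implies, mixing PySpark with Spark SQL against a Lakehouse Delta table; `spark` is the session Fabric notebooks provide, and the table name is hypothetical.

```python
# Illustrative Fabric notebook cell: `spark` is the pre-provisioned
# session; the Lakehouse table name is a hypothetical placeholder.
df = spark.read.table("lakehouse_demo.sales")  # assumed table
df.createOrReplaceTempView("sales")

top = spark.sql("""
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
""")
top.show()
```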
Data is the new Oil. Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to build big-data solutions that process billions of records a day in a scalable …
to London) Duration: 9 months+ contract, Inside IR35. Role summary: We are seeking a versatile and experienced Data Engineer with a strong foundation in Python, PySpark, and modern data platforms. This role demands hands-on experience with CI/CD automation, unit testing, and working within Azure environments, both through … testing • OpenTelemetry (exposure) • Poetry • VS Code, Dev Containers • SQL querying • CI/CD tools • ADO/GitLab • Pipelines for automation. Data engineering (highly desirable): • PySpark • SparkSQL • Data file formats like Delta, Parquet. Fabric (not absolutely required but desirable): • Fabric Notebooks • Data Factory pipelines • Kusto • Dataflow Gen2. Generalist …
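Since this role stresses unit testing alongside PySpark, here is a minimal pytest sketch of the pattern; the filter_active transformation and fixture data are hypothetical, illustrating the approach rather than any real codebase.

```python
# Illustrative sketch: unit-testing a small PySpark transformation with
# pytest. The transformation and test data are hypothetical.
import pytest
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql.functions import col


def filter_active(df: DataFrame) -> DataFrame:
    """Keep only rows flagged as active."""
    return df.filter(col("active"))


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps the test suite fast.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_filter_active(spark):
    df = spark.createDataFrame([("a", True), ("b", False)], ["id", "active"])
    assert filter_active(df).count() == 1
```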
and manage data solutions that align with business needs and industry standards. The ideal candidate will have expertise in Java, SQL, Python, and Spark (PySpark & SparkSQL), while also being comfortable working with Microsoft Power Platform. Experience with Microsoft Purview is a plus. The role requires strong communication skills to … data standards. Key responsibilities: 1. Data architecture & engineering: design and implement scalable data architectures that align with business objectives; work with Java, SQL, Python, PySpark, and SparkSQL to build robust data pipelines; develop and maintain data models tailored to organizational needs; reverse-engineer data models from existing live systems. …
WeDo has partnered with a leading fintech scale-up that is looking to scale its Data Engineering Practice. Who are they? Imagine a finance app that's like your smart, low-fee wingman: budgeting help, savings boosts, and no confusing …
Greater London, England, United Kingdom Hybrid / WFH Options
trg.recruitment
Rate: Up to £600 per day 📆 Contract: 6 months (Outside IR35, potential to go perm) 🛠 Tech Stack: Azure Data Factory, Synapse, Databricks, Delta Lake, PySpark, Python, SQL, Event Hub, Azure ML, MLflow We’ve partnered with a new AI-first professional services consultancy that’s taking on the Big … and supporting team capability development What You Need: ✔ 5+ years in data engineering or backend cloud development ✔ Strong Python, SQL, and Databricks skills (especially PySpark & Delta Lake) ✔ Deep experience with Azure: Data Factory, Synapse, Event Hub, Azure Functions ✔ Understanding of MLOps tooling like MLflow and integration with AI pipelines …
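Given the MLflow mention in this stack, a minimal tracking sketch follows; the run name, parameter, and metric are hypothetical, and it assumes a default local MLflow setup (no tracking server configured).

```python
# Illustrative sketch: logging a run to MLflow. Names and values are
# hypothetical; without a tracking server, MLflow writes to ./mlruns.
import mlflow

with mlflow.start_run(run_name="baseline-demo"):
    mlflow.log_param("model_type", "gradient_boosting")  # assumed param
    mlflow.log_metric("rmse", 0.42)                      # assumed metric
```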
of data for critical client projects. How to develop and enhance your knowledge of agile ways of working and working in an open source stack (PySpark/PySQL). Quality engineering professionals utilize Accenture delivery assets to plan and implement quality initiatives to ensure solution quality throughout delivery. As a … practices and contribute to data analytics insights and visualization concepts, methods, and techniques. We are looking for experience in the following skills: Palantir, Python, PySpark/PySQL, AWS or GCP. Set yourself apart: Palantir Certified Data Engineer; certified cloud data engineering (preferably AWS). What's in it for you …
Senior Data Engineer Wiltshire - 3 days in office £65,000 About The Company The company operates in both B2B and D2C markets, providing food solutions to institutions and individuals. With over 30 years of experience and a presence in 400 …
Birmingham, England, United Kingdom Hybrid / WFH Options
Nine Twenty Recruitment
complex data needs Developing and optimising data pipelines with Azure Data Factory and Azure Synapse Analytics Working with Spark notebooks in Microsoft Fabric, using PySpark, Spark SQL, and potentially some Scala Creating effective data models, reports, and dashboards in Power BI using DAX (and possibly M) Supporting data governance … structure is 90% SQL-based. Basic familiarity with Python (we're all at beginner level, but it's occasionally required). Openness to working with PySpark and potentially Scala (no prior Scala experience required, but a willingness to learn is appreciated). Comfortable working directly with clients and managing technical discussions …
ll Learn: Support data profiling, ingestion, collation, and storage for critical client projects. Enhance your knowledge of agile methodologies and open source stacks like PySpark/PySQL. Utilize Accenture's delivery assets for quality initiatives to ensure solution excellence throughout delivery. As a Data Engineering Manager, your responsibilities include … practices, contributing to analytics insights, visualization concepts, and techniques. Communicating progress, risks, and issues to project leads and team members. Core skills: Palantir, Python, PySpark/PySQL, AWS or GCP. Benefits: In addition to a competitive salary, Accenture offers extensive benefits including 30 days' vacation, private medical insurance, and …
contract data engineers to supplement the existing team during the implementation phase of a new data platform. Main duties and responsibilities: Write clean and testable code using PySpark and SparkSQL scripting languages to enable our customer data products and business applications. Build and manage data pipelines and notebooks, deploying code in a … Experience: Excellent understanding of Data Lakehouse architecture built on ADLS. Excellent understanding of data pipeline architectures using ADF and Databricks. Excellent coding skills in PySpark and SQL. Excellent technical governance experience such as version control and CI/CD. Strong understanding of designing, constructing, administering, and maintaining data warehouses …
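As a minimal sketch of the ADLS-backed Lakehouse work described here, assuming hypothetical storage-account and container paths and a Databricks cluster with storage auth already configured: a bronze Delta table is read, lightly curated, and written back as a silver table.

```python
# Illustrative sketch: bronze-to-silver Delta curation over ADLS.
# The storage account, containers, and key column are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load(
    "abfss://bronze@examplelake.dfs.core.windows.net/customers"  # assumed path
)

# Simple curation step: de-duplicate on the business key.
silver = bronze.dropDuplicates(["customer_id"])  # assumed key column

silver.write.format("delta").mode("overwrite").save(
    "abfss://silver@examplelake.dfs.core.windows.net/customers"  # assumed path
)
```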
Senior Data Analyst - Pricing Data Engineering & Automation, CUO Global Pricing Let's care for tomorrow. Whether it's aircraft, international business, offshore wind parks or Hollywood film productions, Allianz Commercial has an extensive range of risks covered when it comes …
Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Corecom Consulting
modern tech in a collaborative environment. What You'll Do: Design, build, and maintain scalable data pipelines Optimize and automate data workflows Work with PySpark, Python, and SQL to process and manage large datasets Collaborate in a cloud-based environment to deliver efficient and reliable data solutions What We're Looking For: Proven experience with Python, PySpark, and SQL Strong understanding of data engineering principles and cloud infrastructure Ability to work collaboratively and communicate technical concepts clearly A passion for clean, efficient, and scalable code Why Join Us? Supportive team environment with a strong focus on innovation. Opportunities …
Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Client Server
e.g. Data Science, Mathematics, Statistics, Physics, Computer Science, Informatics or Engineering You have strong experience with analytics and data manipulation software e.g. R, Python, PySpark, SAS, SQL, SPSS You take a consultative approach and have polished communication and stakeholder management skills You're able to work independently and take … and inclusive environment Health and wellbeing support Volunteering opportunities Pension Apply now to find out more about this Data Scientist/Consultant (R, Python, PySpark, SAS, SQL, SPSS) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn. …