Newcastle upon Tyne, Tyne & Wear Hybrid / WFH Options
Client Server
Data Engineer (Python SparkSQL) *Newcastle Onsite* to £70k. Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing, an established Hedge Fund client as a partner … by minimum AAB grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark, and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience … earn a competitive salary (to £70k) plus significant bonus and benefits package. Apply now to find out more about this Data Engineer (Python SparkSQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn.
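As a rough illustration of the SQL, SparkSQL, PySpark, and Pandas combination this role names, here is a minimal sketch; the dataset path, view name, and columns are hypothetical:

```python
# Minimal sketch of SparkSQL feeding a Pandas analysis step.
# The "trades" dataset, its path, and its columns are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()

# Register a DataFrame as a temporary view so it can be queried with SparkSQL.
trades = spark.read.parquet("s3://bucket/trades/")  # hypothetical path
trades.createOrReplaceTempView("trades")

# Aggregate at scale with SparkSQL, then hand the small result to Pandas.
daily = spark.sql("""
    SELECT trade_date, SUM(notional) AS total_notional
    FROM trades
    GROUP BY trade_date
""")
summary_pdf = daily.toPandas()  # Pandas DataFrame for plotting or further analysis
print(summary_pdf.head())
```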
and automation. What You Bring: Solid understanding of data modelling, data warehousing principles, and Lakehouse architecture. Expert knowledge of ETL using Azure Databricks (Spark, SparkSQL, Python, SQL) and ETL/ELT design patterns. Strong Databricks, SQL, and Python skills …
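A minimal sketch of the Databricks ETL/ELT pattern this listing describes, assuming a Lakehouse with Delta tables; the paths, table names, and columns are hypothetical:

```python
# Sketch of a Databricks-style ETL step: raw files in, cleaned Delta table out.
# Paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.json("/mnt/landing/orders/")           # extract
clean = (raw
         .dropDuplicates(["order_id"])                  # transform: dedupe
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .filter(F.col("amount") > 0))
(clean.write
      .format("delta")                                  # load into the Lakehouse
      .mode("overwrite")
      .saveAsTable("silver.orders"))
```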
Profile: Strong technical background in SQL, scripting languages (Python, TypeScript, JavaScript), databases, ML/LLM models, and big data technologies like Apache Spark (PySpark, SparkSQL). Self-starter with the ability to work from requirements to solutions. Effective communicator, passionate learner … Set up data infrastructure, pipelines, and permissions for ML training and inference. Collaborate with technology teams to extract, transform, and load data using SQL, scripting, and AWS big data tools. Communicate effectively and deliver high-quality results in a fast-paced environment. Drive operational excellence through automation and … SQL. Experience with scripting languages such as Python or KornShell. Knowledge of query languages like SQL, PL/SQL, HiveQL, SparkSQL, Scala. Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Additional Information: Amazon's inclusive culture empowers employees to deliver exceptional …
architectures that business engineering teams buy into and build their applications around. Required Qualifications, Capabilities, and Skills: Experience across the data lifecycle with Spark-based frameworks for end-to-end ETL, ELT, and reporting solutions using key components like SparkSQL and Spark Streaming. … end-to-end engineering experience supported by excellent tooling and automation. Preferred Qualifications, Capabilities, and Skills: Good understanding of the Big Data stack (Spark/Iceberg). Ability to learn new technologies and patterns on the job and apply them effectively. Good understanding of established patterns, such as …
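A minimal sketch of the Spark Streaming component named above, using Spark Structured Streaming; the built-in "rate" source stands in for a real feed (e.g. Kafka or Kinesis), so the numbers are synthetic:

```python
# Sketch of Spark Structured Streaming with a windowed aggregation.
# The built-in "rate" source is a stand-in for a real stream source.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

stream = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Count events per 10-second window; the watermark bounds how late data may arrive.
counts = (stream
          .withWatermark("timestamp", "30 seconds")
          .groupBy(F.window("timestamp", "10 seconds"))
          .count())

query = (counts.writeStream
               .outputMode("update")
               .format("console")
               .start())
query.awaitTermination()
```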
role: This role sits within the Group Enterprise Systems (GES) Technology team. The right candidate would be an experienced Microsoft data warehouse developer (SQL Server, SSIS, SSAS) who can work both independently and as a member of a team to deliver enterprise-class data warehouse solutions and analytics … data models. Build and maintain automated pipelines to support data solutions across BI and analytics use cases. Work with enterprise-grade technology, primarily SQL Server 2019 and potentially Azure technologies. Build patterns, common ways of working, and standardized data pipelines for DLG to ensure consistency across the organization. … Essential Core Technical Experience: 5 to 10+ years' extensive experience in SQL Server data warehouse or data provisioning architectures. Advanced SQL query writing and SQL procedure experience. Experience developing ETL solutions in SQL Server, including SSIS and T-SQL. Experience in Microsoft BI technologies …
Central London, London, United Kingdom Hybrid / WFH Options
167 Solutions Ltd
engineers to develop scalable solutions that enhance data accessibility and efficiency across the organisation. Key Responsibilities: Design, build, and maintain data pipelines using SQL, Python, and Spark. Develop and manage data warehouse and lakehouse solutions for analytics, reporting, and machine learning. Implement ETL/ELT … 6+ years of experience in data engineering within large-scale digital environments. Strong programming skills in Python, SQL, and Spark (SparkSQL). Expertise in Snowflake and modern data architectures. Experience designing and managing data pipelines, ETL, and ELT workflows. Knowledge of AWS services such as …
scalable data pipelines and infrastructure using AWS (Glue, Athena, Redshift, Kinesis, Step Functions, Lake Formation). Utilise PySpark for distributed data processing, ETL, SQL querying, and real-time data streaming. Establish and enforce best practices in data engineering, coding standards, and architecture guidelines. Build and manage data lake … Redshift, Kinesis, Step Functions, Lake Formation, and data lake design. Strong programming skills in Python and PySpark for data processing and automation. Extensive SQL experience (Spark-SQL, MySQL, Presto SQL) and familiarity with NoSQL databases (DynamoDB, MongoDB, etc.). Proficiency in Infrastructure …
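A minimal skeleton of the AWS Glue PySpark work this role describes; it only runs inside the Glue job runtime, and the database, table, and S3 path names are hypothetical:

```python
# Skeleton of an AWS Glue PySpark job (runs inside the Glue runtime, not locally).
# Database, table, and path names are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, filter with plain Spark, write back to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="analytics", table_name="events")
df = dyf.toDF().filter("event_type = 'purchase'")
df.write.mode("append").parquet("s3://my-lake/curated/purchases/")

job.commit()
```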
align with business needs and industry standards. The ideal candidate will have expertise in Java, SQL, Python, and Spark (PySpark & SparkSQL), while also being comfortable working with Microsoft Power Platform. Experience with Microsoft Purview is a plus. The role requires strong communication skills to collaborate effectively … 1. Data Architecture & Engineering: Design and implement scalable data architectures that align with business objectives. Work with Java, SQL, Python, PySpark, and SparkSQL to build robust data pipelines. Develop and maintain data models tailored to organizational needs. Reverse-engineer data models from existing live systems. Utilize Microsoft Power … solutions with business goals. Analyze and mitigate the impact of data standard breaches. Required Skills & Qualifications: Strong proficiency in Java, SQL, Python, SparkSQL, and PySpark. Experience with Microsoft Power Platform (PowerApps, Power Automate, etc.). Good understanding of data governance, metadata management, and compliance frameworks. Ability to communicate …
performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, SparkSQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Ability to program in languages such as … SQL, Python, R, YAML, and JavaScript. Data Integration: Integrate data from various sources, including relational databases, APIs, and streaming data sources. Implement data integration patterns and best practices. Work with API developers to ensure seamless data exchange. Data Quality & Governance: Hands-on experience using Azure Purview for …
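A minimal sketch of the in-pipeline data quality checks this listing calls for, written in PySpark; the source and target table names, columns, and rules are hypothetical:

```python
# Sketch of simple data quality checks gating a Delta write.
# Table names, columns, and the two rules are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("bronze.customers")  # hypothetical source table

# Rule 1: the primary key must be unique. Rule 2: email must be non-null.
dup_count = (df.groupBy("customer_id").count()
               .filter(F.col("count") > 1).count())
null_emails = df.filter(F.col("email").isNull()).count()

if dup_count or null_emails:
    raise ValueError(
        f"DQ failure: {dup_count} duplicate keys, {null_emails} null emails")

df.write.format("delta").mode("overwrite").saveAsTable("silver.customers")
```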
tools to manage the platform, ensuring resilience and optimal performance are maintained. Data Integration and Transformation: Integrate and transform data from multiple organisational SQL databases and SaaS applications using end-to-end dependency-based data pipelines, to establish an enterprise source of truth. Create ETL and ELT processes using Azure Databricks, ensuring audit-ready financial data pipelines and secure data exchange with Databricks Delta Sharing and SQL Warehouse endpoints. Governance and Compliance: Ensure compliance with information security standards in our highly regulated financial landscape by implementing Databricks Unity Catalog for governance, data quality monitoring, and ADLS … architecture. Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, and Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, and Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog …
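For the Delta Sharing piece, a minimal consumer-side sketch using the open-source `delta-sharing` Python client; the profile file and the share/schema/table names are hypothetical, and the provider must have issued the credentials file:

```python
# Sketch of consuming a table exposed via Databricks Delta Sharing.
# Requires the `delta-sharing` PyPI package; the profile file and
# share/schema/table names below are hypothetical.
import delta_sharing

profile = "config.share"  # credentials file issued by the data provider
table_url = f"{profile}#finance_share.gold.daily_positions"

# Load the shared table directly into Pandas (Spark loading is also supported).
pdf = delta_sharing.load_as_pandas(table_url)
print(pdf.head())
```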
S3, Glue, Redshift, etc. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Collaborate with Data Scientists and … BI infrastructure, including Data Warehousing, reporting, and analytics platforms. Contribute to the development of the BI tools, skills, culture, and impact. Write advanced SQL queries and Python code to develop solutions. A day in the life of this role requires you to live at the intersection of data … warehousing, and building ETL pipelines. Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience …
Azure Databricks, Azure Data Factory, Delta Lake, Azure Data Lake (ADLS), Power BI. Solid hands-on experience with Azure Databricks, PySpark coding, and SparkSQL coding - must have. Very good knowledge of data warehousing skills, including dimensional modeling, slowly changing dimension patterns, and time travel. Experience …
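A minimal sketch of the Delta Lake time travel skill named above, e.g. for auditing how a slowly changing dimension looked in the past; the table path, version, and timestamp are hypothetical:

```python
# Sketch of Delta Lake "time travel" reads. Path, version number,
# and timestamp are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the table as of an earlier version ...
v0 = (spark.read.format("delta")
      .option("versionAsOf", 0)
      .load("/mnt/gold/dim_customer"))

# ... or as of a point in time, e.g. to audit a slowly changing dimension.
asof = (spark.read.format("delta")
        .option("timestampAsOf", "2024-01-01")
        .load("/mnt/gold/dim_customer"))
print(v0.count(), asof.count())
```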
London, South East England, United Kingdom Hybrid / WFH Options
La Fosse
technical and non-technical teams. Troubleshoot issues and support wider team adoption of the platform. What You’ll Bring: Proficiency in Python, PySpark, SparkSQL, or Java. Experience with cloud tools (Lambda, S3, EKS, IAM). Knowledge of Docker, Terraform, GitHub Actions. Understanding of data quality frameworks …
engineers to supplement the existing team during the implementation phase of a new data platform. Main Duties and Responsibilities: Write clean and testable code using PySpark and SparkSQL scripting languages to enable our customer data products and business applications. Build and manage data pipelines and notebooks, deploying code in a structured, trackable, and …
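A minimal sketch of what "clean and testable" PySpark can look like: a pure transformation function that a unit test exercises with a tiny in-memory DataFrame. The function, columns, and test data are hypothetical:

```python
# Sketch of testable PySpark: a pure transformation plus a unit test.
# Column names and the test fixture are hypothetical examples.
from pyspark.sql import SparkSession, DataFrame, functions as F

def active_customers(df: DataFrame) -> DataFrame:
    """Keep active rows and normalise the email column."""
    return (df.filter(F.col("status") == "active")
              .withColumn("email", F.lower(F.trim("email"))))

def test_active_customers():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame(
        [("a@X.com ", "active"), ("b@y.com", "closed")], ["email", "status"])
    out = active_customers(df).collect()
    assert [r.email for r in out] == ["a@x.com"]

test_active_customers()
```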
many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment. BASIC QUALIFICATIONS - Experience with SQL - 1+ years of data engineering experience - Experience with data modeling … warehousing and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark …
batch mechanism. Good data modeling skills with knowledge of various industry standards such as dimensional modeling, star schemas, etc. Proficient in writing performant SQL when working with large data volumes. Experience designing and operating very large data warehouses. Experience with scripting for automation (e.g., UNIX shell scripting, Python). … warehousing and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark …
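A minimal sketch of the star-schema querying described above: a fact table joined to a dimension and aggregated by a dimension attribute, expressed in SparkSQL. The table paths, view names, and columns are hypothetical:

```python
# Sketch of a star-schema query: fact table joined to a dimension,
# aggregated by a dimension attribute. All names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.read.parquet("/warehouse/fact_sales").createOrReplaceTempView("fact_sales")
spark.read.parquet("/warehouse/dim_product").createOrReplaceTempView("dim_product")

revenue = spark.sql("""
    SELECT p.category, SUM(f.quantity * f.unit_price) AS revenue
    FROM fact_sales f
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY p.category
    ORDER BY revenue DESC
""")
revenue.show()
```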
support ambitious data initiatives and future projects. IT Manager - The Skills You'll Need to Succeed: Mastery of Databricks, Python/PySpark, and SQL/SparkSQL. Experience in Big Data/ETL (Spark and Databricks preferred). Expertise in Azure. Proficiency with version control systems (Git …
scale, high-volume, high-performance data structures for analytics and reporting. Implement data structures using best practices in data modeling, ETL processes, SQL, AWS Redshift, and OLAP technologies. Model data and metadata for ad hoc and pre-built reporting. Work with product tech teams and build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark. Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Interface with business customers, gathering requirements and delivering complete reporting solutions. Collaborate with Analysts, Business Intelligence Engineers, SDEs, and Product Managers to … warehousing and building ETL pipelines. Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). PREFERRED QUALIFICATIONS - Bachelor's degree. Our inclusive culture empowers Amazonians to deliver the best …
deliver accurate and timely data and reporting to meet or exceed SLAs. BASIC QUALIFICATIONS - 4+ years of data engineering experience - 4+ years of SQL experience - Experience with data modeling … warehousing, and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose …
management and governance, guide in structuring cloud environments, and support data initiatives and future projects. Qualifications: Proficiency in Databricks, Python/PySpark, and SQL/SparkSQL. Experience with Big Data/ETL processes, preferably Spark and Databricks. Expertise in the Azure cloud platform. Knowledge of version control …