Newcastle upon Tyne, Tyne & Wear Hybrid / WFH Options
Client Server
Data Engineer (Python SparkSQL) *Newcastle Onsite* to £70k Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing, an established Hedge Fund client as a partner … by minimum AAB grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience … earn a competitive salary (to £70k) plus significant bonus and benefits package. Apply now to find out more about this Data Engineer (Python SparkSQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn.
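To give a concrete flavour of the stack this role names, here is a minimal, hypothetical sketch (not taken from the firm's codebase) combining Spark SQL with a Pandas hand-off; the table and column names are invented:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("trades-demo").getOrCreate()

# Hypothetical trades data; in practice this would be read from storage.
trades = spark.createDataFrame(
    [("AAPL", 100, 189.5), ("AAPL", 50, 190.1), ("MSFT", 75, 410.2)],
    ["symbol", "qty", "price"],
)
trades.createOrReplaceTempView("trades")

# Spark SQL aggregation, then hand off to Pandas for local analysis.
summary = spark.sql("""
    SELECT symbol,
           SUM(qty)         AS total_qty,
           SUM(qty * price) AS notional
    FROM trades
    GROUP BY symbol
""")
summary_pd = summary.toPandas()  # small result set, safe to collect locally
print(summary_pd)
```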
experiences, powered by best-in-class understanding of customer behavior and automation. Our work spans multiple technical disciplines: from deep-dive analytics using SQL and SparkSQL for large-scale data processing, to building automated marketing solutions with Python, Lambda, React.js, and leveraging internal … results to senior leadership. Experience with data visualization using Tableau, QuickSight, or similar tools. Experience in scripting for automation (e.g., Python) and advanced SQL skills. Experience programming to extract, transform and clean large (multi-TB) data sets. Experience with statistical analytics and programming languages such as R, Python … science, machine learning and data mining. Experience with theory and practice of design of experiments and statistical analysis of results. Experience with Python, SparkSQL, QuickSight, AWS Lambda & React.js (the team's core tools). Amazon is an equal opportunities employer. We believe passionately that employing a diverse …
we drive improvements in how millions of customers discover and evaluate products. Our work spans multiple technical disciplines: from deep-dive analytics using SQL and SparkSQL for large-scale data processing, to building automated marketing solutions with Python, Lambda, React.js, and leveraging internal … results to senior leadership. - Experience with data visualization using Tableau, QuickSight, or similar tools. - Experience in scripting for automation (e.g., Python) and advanced SQL skills. - Experience programming to extract, transform and clean large (multi-TB) data sets. - Experience with statistical analytics and programming languages such as R, Python … science, machine learning and data mining. - Experience with theory and practice of design of experiments and statistical analysis of results. - Experience with Python, SparkSQL, QuickSight, AWS Lambda & React.js (the team's core tools).
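Both Amazon listings above emphasise extracting, transforming and cleaning multi-TB datasets with SQL and SparkSQL. A small illustrative sketch of such a step follows; the S3 paths and schema are placeholders, not Amazon's actual pipeline:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream-etl").getOrCreate()

# Placeholder path; a multi-TB source would typically be partitioned Parquet on S3.
events = spark.read.parquet("s3://example-bucket/clickstream/")

clean = (
    events
    .dropDuplicates(["event_id"])                     # de-duplicate on a business key
    .filter(F.col("event_ts").isNotNull())            # drop malformed rows
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

# Write back partitioned by date so downstream SQL scans stay cheap.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/clickstream_clean/"
)
```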
architectures that business engineering teams buy into and build their applications around. Required Qualifications, Capabilities, and Skills: Experience across the data lifecycle with Spark-based frameworks for end-to-end ETL, ELT & reporting solutions using key components like SparkSQL & Spark Streaming. … end-to-end engineering experience supported by excellent tooling and automation. Preferred Qualifications, Capabilities, and Skills: Good understanding of the Big Data stack (Spark/Iceberg). Ability to learn new technologies and patterns on the job and apply them effectively. Good understanding of established patterns, such as …
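As a rough illustration of pairing Spark SQL functions with streaming (shown here with the newer Structured Streaming API rather than the legacy DStream-based Spark Streaming), the broker, topic, and windowing choices below are invented, and the Kafka connector package must be on the Spark classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-etl").getOrCreate()

# Hypothetical Kafka source; broker and topic names are placeholders.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers bytes; cast the payload and keep the ingest timestamp.
orders = raw.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_ts"),
)

# Micro-batch aggregation written to a reporting sink.
counts = orders.groupBy(F.window("event_ts", "5 minutes")).count()
query = (
    counts.writeStream.outputMode("complete")
    .format("console")  # a real job would target Iceberg/Delta or a warehouse
    .start()
)
query.awaitTermination()
```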
role: This role sits within the Group Enterprise Systems (GES) Technology team. The right candidate would be an experienced Microsoft data warehouse developer (SQL Server, SSIS, SSAS) who can work both independently and as a member of a team to deliver enterprise-class data warehouse solutions and analytics … data models. Build and maintain automated pipelines to support data solutions across BI and analytics use cases. Work with enterprise-grade technology, primarily SQL Server 2019 and potentially Azure technologies. Build patterns, common ways of working, and standardized data pipelines for DLG to ensure consistency across the organization. … Essential Core Technical Experience: 5 to 10+ years' extensive experience in SQL Server data warehouse or data provisioning architectures. Advanced SQL query writing and SQL procedure experience. Experience developing ETL solutions in SQL Server, including SSIS and T-SQL. Experience in Microsoft BI technologies …
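Since the rest of this page leans on Python, here is a hedged sketch of driving one T-SQL warehouse step from Python with pyodbc; the server, database, and table names are hypothetical, and a real SSIS-based solution would orchestrate this differently:

```python
import pyodbc

# Placeholder connection string; real credentials would come from a secret store.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dwh-sql01;DATABASE=EnterpriseDW;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# A typical incremental-load step: merge staged rows into a dimension table.
cursor.execute("""
    MERGE dbo.DimCustomer AS tgt
    USING staging.Customer AS src
        ON tgt.CustomerKey = src.CustomerKey
    WHEN MATCHED THEN
        UPDATE SET tgt.Email = src.Email
    WHEN NOT MATCHED THEN
        INSERT (CustomerKey, Email) VALUES (src.CustomerKey, src.Email);
""")
conn.commit()
conn.close()
```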
Central London, London, United Kingdom Hybrid / WFH Options
167 Solutions Ltd
engineers to develop scalable solutions that enhance data accessibility and efficiency across the organisation. Key Responsibilities: Design, build, and maintain data pipelines using SQL, Python, and Spark. Develop and manage data warehouse and lakehouse solutions for analytics, reporting, and machine learning. Implement ETL/ELT … 6+ years of experience in data engineering within large-scale digital environments. Strong programming skills in Python, SQL, and Spark (SparkSQL). Expertise in Snowflake and modern data architectures. Experience designing and managing data pipelines, ETL, and ELT workflows. Knowledge of AWS services such as …
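A minimal ELT sketch in the spirit of this role, assuming a lakehouse table format (such as Delta or Iceberg) whose catalog supports CREATE OR REPLACE TABLE; all paths, schemas, and names are invented:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-elt").getOrCreate()

# ELT style: land raw data first, then transform inside the platform with SQL.
spark.read.json("s3://example-bucket/raw/orders/").createOrReplaceTempView("raw_orders")

spark.sql("""
    CREATE OR REPLACE TABLE analytics.orders_daily AS
    SELECT order_date,
           COUNT(*)    AS orders,
           SUM(amount) AS revenue
    FROM raw_orders
    WHERE status = 'COMPLETE'
    GROUP BY order_date
""")
```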
scalable data pipelines and infrastructure using AWS (Glue, Athena, Redshift, Kinesis, Step Functions, Lake Formation). Utilise PySpark for distributed data processing, ETL, SQL querying, and real-time data streaming. Architect and implement robust data solutions for analytics, reporting, machine learning, and data science initiatives. Establish and enforce … including Glue, Athena, Redshift, Kinesis, Step Functions, and Lake Formation. Strong programming skills in Python and PySpark for data processing and automation. Extensive SQL experience (Spark-SQL, MySQL, Presto SQL) and familiarity with NoSQL databases (DynamoDB, MongoDB, etc.). Proficiency in Infrastructure …
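For illustration, a skeletal AWS Glue PySpark job along the lines this role describes; it only runs inside the Glue environment, and the catalog database, table, and S3 path are placeholders:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard boilerplate for a Glue PySpark job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, transform with Spark SQL, write to S3 for Athena.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)
dyf.toDF().createOrReplaceTempView("orders")
daily = glue_context.spark_session.sql(
    "SELECT order_date, SUM(amount) AS revenue FROM orders GROUP BY order_date"
)
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")
job.commit()
```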
align with business needs and industry standards. The ideal candidate will have expertise in Java, SQL, Python, and Spark (PySpark & SparkSQL), while also being comfortable working with Microsoft Power Platform. Experience with Microsoft Purview is a plus. The role requires strong communication skills to collaborate effectively … 1. Data Architecture & Engineering: Design and implement scalable data architectures that align with business objectives. Work with Java, SQL, Python, PySpark, and SparkSQL to build robust data pipelines. Develop and maintain data models tailored to organizational needs. Reverse-engineer data models from existing live systems. Utilize Microsoft Power … solutions with business goals. Analyze and mitigate the impact of data standard breaches. Required Skills & Qualifications: Strong proficiency in Java, SQL, Python, SparkSQL, and PySpark. Experience with Microsoft Power Platform (PowerApps, Power Automate, etc.). Good understanding of data governance, metadata management, and compliance frameworks. Ability to communicate …
tools to manage the platform, ensuring resilience and optimal performance are maintained. Data Integration and Transformation: Integrate and transform data from multiple organisational SQL databases and SaaS applications using end-to-end, dependency-based data pipelines to establish an enterprise source of truth. Create ETL and ELT processes … using Azure Databricks, ensuring audit-ready financial data pipelines and secure data exchange with Databricks Delta Sharing and SQL Warehouse endpoints. Governance and Compliance: Ensure compliance with information security standards in our highly regulated financial landscape by implementing Databricks Unity Catalog for governance, data quality monitoring, and ADLS … architecture. Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, and Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog …
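A brief hypothetical sketch of a Databricks pipeline step of the kind described here; it assumes the Databricks runtime (which provides the `spark` session) and invented Unity Catalog three-level table names:

```python
from pyspark.sql import functions as F

# On Databricks, a SparkSession named `spark` is provided by the runtime.
# Unity Catalog uses three-level names: <catalog>.<schema>.<table> (placeholders here).
finance = spark.read.table("erp.raw.gl_transactions")

monthly = (
    finance
    .withColumn("month", F.date_trunc("month", "posting_date"))
    .groupBy("month", "account_code")
    .agg(F.sum("amount").alias("amount"))
)

# Delta gives ACID writes plus an audit trail via table history / time travel,
# which helps keep financial pipelines audit-ready.
monthly.write.format("delta").mode("overwrite").saveAsTable(
    "finance.curated.gl_monthly"
)
```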
Experience with data modeling, warehousing, and building ETL pipelines. Proficiency in query languages such as SQL, PL/SQL, HiveQL, SparkSQL, or Scala. Experience with scripting languages like Python or KornShell. Knowledge of writing and optimizing SQL queries for large-scale, complex datasets. Experience … with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with ETL tools like Informatica, ODI, SSIS, BODI, or DataStage. Our inclusive culture empowers Amazon employees to deliver the best results for our customers. If you have a disability and need workplace accommodations during the application and …
Azure Databricks, Azure Data Factory, Delta Lake, Azure Data Lake (ADLS), Power BI. Solid hands-on experience with Azure Databricks, PySpark coding and SparkSQL coding (must have). Very good knowledge of data warehousing skills, including dimensional modeling, slowly changing dimension patterns, and time travel. Experience …
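To illustrate the slowly changing dimension and time-travel ideas this listing calls out, a sketch using Delta Lake follows; the path and data are invented, the dimension table is assumed to already exist, and this shows an SCD Type 1 style upsert (a Type 2 pattern would additionally close out superseded rows):

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("scd-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
    .getOrCreate()
)

# Time travel: read the dimension as it looked at an earlier version.
dim_v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/dim_customer")

# SCD-style upsert: update changed rows, insert new ones.
tgt = DeltaTable.forPath(spark, "/tmp/dim_customer")
updates = spark.createDataFrame(
    [(1, "alice@new.example")], ["customer_id", "email"]
)
(
    tgt.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```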
technical and non-technical teams. Troubleshoot issues and support wider team adoption of the platform. What You’ll Bring: Proficiency in Python, PySpark, SparkSQL or Java. Experience with cloud tools (Lambda, S3, EKS, IAM). Knowledge of Docker, Terraform, GitHub Actions. Understanding of data quality frameworks …
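As a taste of what a data quality framework boils down to, here is a deliberately minimal hand-rolled sketch; production teams would more likely reach for a framework such as Great Expectations or Deequ, and the path and rules below are invented:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("s3://example-bucket/curated/customers/")  # placeholder path

# A minimal rule set; each check evaluates to True when the data passes.
checks = {
    "no_null_ids": df.filter(F.col("customer_id").isNull()).count() == 0,
    "unique_ids": df.count() == df.select("customer_id").distinct().count(),
    "valid_email": df.filter(~F.col("email").contains("@")).count() == 0,
}

failures = [name for name, passed in checks.items() if not passed]
if failures:
    raise ValueError(f"Data quality checks failed: {failures}")
```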
in this area. Able to demonstrate expertise in identifying and resolving data quality issues in datasets at rest and in flight. An expert SQL coder and at ease writing Linux shell scripts. Experienced with automated build and test processes utilizing RLM, Jenkins, Lightspeed, and Harness. Strong knowledge in … agile development methodologies. Prior work on cloud computing platforms. Hands-on experience with other big data tools such as Oozie, YARN, Spark, SparkSQL, Flume, Sqoop2, Pig, Drill, Kafka, Elastic. Familiar with the financial services industry and/or regulatory environments. Able to demonstrate active participation in the big …
engineers to supplement the existing team during the implementation phase of a new data platform. Main Duties and Responsibilities: Write clean and testable code using the PySpark and SparkSQL scripting languages to enable our customer data products and business applications. Build and manage data pipelines and notebooks, deploying code in a structured, trackable and …
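The listing's emphasis on clean, testable PySpark code usually means keeping transformations as pure functions so they can be unit-tested; a small illustrative example follows (names invented, test written in pytest style):

```python
from pyspark.sql import DataFrame, SparkSession, functions as F


def add_order_totals(orders: DataFrame) -> DataFrame:
    """Pure transformation: easy to unit-test because it takes and returns DataFrames."""
    return orders.withColumn("total", F.col("qty") * F.col("unit_price"))


def test_add_order_totals():
    spark = SparkSession.builder.master("local[1]").appName("test").getOrCreate()
    orders = spark.createDataFrame([(2, 5.0)], ["qty", "unit_price"])
    result = add_order_totals(orders).collect()
    assert result[0]["total"] == 10.0
```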
Implementation experience with AWS services - Hands-on experience leading large-scale global data warehousing and analytics projects. - Experience using some of the following: Apache Spark/Hadoop, Flume, Kinesis, Kafka, Oozie, Hue, Zookeeper, Ranger, Elasticsearch, Avro, Hive, Pig, Impala, SparkSQL, Presto, PostgreSQL, Amazon …
many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment. BASIC QUALIFICATIONS - Experience with SQL - 1+ years of data engineering experience - Experience with data modeling … warehousing and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark …
availability and accessibility. Experience & Skills: Strong experience in data engineering. At least some commercial hands-on experience with Azure data services (e.g., Apache Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such as PySpark … Python (with Pandas if no PySpark), T-SQL, and SparkSQL. Strong understanding of data modeling, ETL processes, and data warehousing concepts. Knowledge of CI/CD pipelines and version control (e.g., Git). Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Ability to manage multiple …
warehousing and building ETL pipelines. Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). PREFERRED QUALIFICATIONS Experience with big data technologies such as: Hadoop, Hive, Spark …
scale, high-volume, high-performance data structures for analytics and reporting. Implement data structures using best practices in data modeling, ETL processes, SQL, AWS Redshift, and OLAP technologies. Model data and metadata for ad hoc and pre-built reporting. Work with product tech teams and build robust … and scalable data integration (ETL) pipelines using SQL, Python and Spark. Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Interface with business customers, gathering requirements and delivering complete reporting solutions. Collaborate with Analysts, Business Intelligence Engineers, SDEs, and Product Managers to … warehousing and building ETL pipelines. Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). PREFERRED QUALIFICATIONS Bachelor's degree. Our inclusive culture empowers Amazonians to deliver the best …
deliver accurate and timely data and reporting to meet or exceed SLAs. Minimum Requirements: - 4+ years of data engineering experience - 4+ years of SQL experience - Experience with data modeling … warehousing, and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and …
Your qualifications and experience: You are a pro at using SQL for data manipulation (at least one of PostgreSQL, MSSQL, Google BigQuery, SparkSQL). Modelling & Statistical Analysis experience, ideally customer-related. Coding skills in at least one of Python, R, Scala, C, Java or JS. Track record of using … in one or more programming languages. Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing …
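A short illustrative example of the kind of SQL data manipulation this listing expects, written here against SparkSQL, though the window-function syntax is broadly portable to PostgreSQL or BigQuery; the customer data is invented:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("customer-model").getOrCreate()

# Hypothetical transactions table; in practice this could live in PostgreSQL,
# BigQuery, or a Spark catalog.
spark.createDataFrame(
    [("c1", "2024-01-05", 20.0), ("c1", "2024-02-11", 35.0), ("c2", "2024-01-20", 15.0)],
    ["customer_id", "txn_date", "amount"],
).createOrReplaceTempView("transactions")

# Window functions: per-customer running spend and visit sequence.
spark.sql("""
    SELECT customer_id,
           txn_date,
           SUM(amount) OVER (PARTITION BY customer_id ORDER BY txn_date)
               AS running_spend,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY txn_date)
               AS visit_number
    FROM transactions
""").show()
```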