6 of 6 Spark SQL Jobs in Central London
City of London, London, United Kingdom Mastek
platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines.

Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle.

Azure … Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Ability to program in languages such as SQL, Python, R, YAML, and JavaScript.

Data Integration: Integrate data from various sources … practices.

Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3 years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory and Azure Blob Storage. …
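The "data quality checks and validation rules" this role asks for can be sketched in plain Python. The column names, reference list, and quarantine pattern below are hypothetical illustrations; in a real Databricks pipeline the same predicates would typically become PySpark filter expressions or Delta Lake CHECK constraints.

```python
# Minimal sketch of row-level data quality rules. The fields "order_id",
# "amount", and "country" and the reference set are invented for illustration.

VALID_COUNTRIES = {"GB", "US", "DE"}

def validate_row(row):
    """Return a list of rule violations for one record (empty means clean)."""
    errors = []
    if not row.get("order_id"):
        errors.append("order_id missing")
    amount = row.get("amount")
    if amount is None or amount < 0:
        errors.append("amount null or negative")
    if row.get("country") not in VALID_COUNTRIES:
        errors.append("country not in reference list")
    return errors

def split_clean_and_quarantine(rows):
    """Route clean rows onward and failing rows to a quarantine list."""
    clean, quarantine = [], []
    for row in rows:
        (clean if not validate_row(row) else quarantine).append(row)
    return clean, quarantine
```

Quarantining rather than dropping bad rows is a common choice: it keeps the main pipeline deterministic while preserving failures for later inspection.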
City of London, London, United Kingdom CACTUS
Pipeline Development: Design and maintain ETL pipelines for diverse data sources (APIs, databases, file systems). Ensure reliability, scalability, and performance.

Data Transformation & Processing: Implement data transformation using Spark (PySpark/Scala) and related tools. Conduct data cleaning, validation, and enrichment.

Azure Databricks Implementation: Work with Unity Catalog, Delta Lake, and Spark SQL. Optimize … and follow best practices in Databricks development. Program using SQL, Python, R, YAML, JavaScript.

Data Integration: Integrate from various structured/unstructured sources (CSV, JSON, Parquet, Delta). Enable downstream access for dashboards and reporting.

Data Quality & Monitoring: Use Azure Purview for governance and quality. Develop and maintain monitoring and validation processes.

Collaboration & Communication: Work closely with … workflows. Support automation across engineering tasks.

Essential Skills & Experience: 5+ years in data engineering or related roles. Strong grasp of data engineering concepts and principles. Proficiency in Python, SQL, and ideally Spark (PySpark). Experience with Azure Databricks, Delta Lake, and data architecture. Familiarity with Azure cloud, version control (e.g., Git), and DevOps pipelines. …
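The cleaning-and-enrichment step described above can be illustrated with a small pure-Python transform. The field names and the region lookup are invented for illustration; in Databricks this would usually be expressed as PySpark `withColumn` calls or a join against a reference table.

```python
# Illustrative clean-and-enrich transform. The "region" lookup and the field
# names are hypothetical stand-ins for whatever reference data the real
# pipeline joins against.

REGION_BY_COUNTRY = {"GB": "EMEA", "DE": "EMEA", "US": "AMER"}

def clean_and_enrich(record):
    """Normalise raw fields, then enrich with a region derived from country."""
    country = (record.get("country") or "").strip().upper()
    return {
        "customer_id": str(record.get("customer_id", "")).strip(),
        "country": country,
        "region": REGION_BY_COUNTRY.get(country, "UNKNOWN"),
        "amount": round(float(record.get("amount", 0.0)), 2),
    }
```

Defaulting unknown countries to "UNKNOWN" rather than raising keeps the transform total, so one malformed record cannot stall a batch.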
City of London, England, United Kingdom Whitehall Resources Ltd
• Design and deliver quality solutions independently
• Lead a team of Data Engineers and deliver solutions as a team

Key skills/knowledge/experience:
• Proficient in PySpark, Python, and SQL, with at least 5 years of experience
• Working experience with the Palantir Foundry platform is a must
• Experience designing and implementing data analytics solutions on enterprise data platforms and distributed computing (Spark … customer requirements into a best-fit design and architecture
• Demonstrated experience in end-to-end data management, data modelling, and data transformation for analytical use cases
• Proficient in SQL (Spark SQL preferred)
• Experience with JavaScript/HTML/CSS a plus
• Experience working in a cloud environment such as Azure or AWS is …
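The "SQL (Spark SQL preferred)" requirement above usually means aggregation queries of the GROUP BY shape. As a sketch, here is a pure-Python equivalent of a hypothetical Spark SQL query (the table and column names are invented):

```python
from collections import defaultdict

# Pure-Python equivalent of the (illustrative) Spark SQL query:
#   SELECT dept, SUM(amount) AS total FROM payments GROUP BY dept

def group_by_sum(rows, key="dept", value="amount"):
    """Aggregate a list of dicts, summing `value` per distinct `key`."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)
```

On a real cluster the same logic would run distributed via `spark.sql(...)` or `df.groupBy("dept").sum("amount")`; the point of the sketch is only the shape of the computation.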
City of London, London, United Kingdom Hybrid / WFH Options La Fosse
AWS, Snowflake, etc. Collaborate across technical and non-technical teams. Troubleshoot issues and support wider team adoption of the platform.

What You’ll Bring:
Proficiency in Python, PySpark, Spark SQL, or Java
Experience with cloud tools (Lambda, S3, EKS, IAM)
Knowledge of Docker, Terraform, GitHub Actions
Understanding of data quality frameworks
Strong communicator and team player
…
City of London, England, United Kingdom JR United Kingdom
GCP Data Engineer (Java, Spark, ETL), London (City of London)
Client: Staffworx
Location: London (City of London), United Kingdom
Job Category: Other
EU work permit required: Yes
Posted: 16.06.2025; Expiry Date: 31.07.2025

Job Description: Proficiency in programming languages such as Python, PySpark, and Java; Spark SQL; GCP BigQuery; version control tools (Git, GitHub) and automated deployment tools; Google Cloud Platform services, Pub/Sub, BigQuery Streaming, and related technologies; deep understanding of real-time data processing and …
City of London, London, United Kingdom Staffworx
Pool - GCP Data Engineer, London, hybrid role - digital Google Cloud transformation programme. Proficiency in programming languages such as Python, PySpark, and Java; develop ETL processes for data ingestion and preparation; Spark SQL; Cloud Run, Dataflow, Cloud Storage; GCP BigQuery; Google Cloud Platform Data Studio; Unix/Linux platform; version control tools (Git, GitHub) and automated deployment tools; Google Cloud Platform services, Pub/Sub, BigQuery …
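Real-time pipelines on Pub/Sub, as listed in the two GCP roles above, deliver messages at-least-once, so consumers commonly deduplicate redelivered messages. A minimal in-memory sketch follows; the TTL value and message IDs are illustrative, and a production Dataflow job would instead rely on windowing and an exactly-once sink such as BigQuery's storage write API.

```python
import time

# Minimal deduplication sketch for an at-least-once streaming consumer.
# Recently seen message IDs are remembered for a TTL window, after which
# they are evicted so memory stays bounded. TTL and IDs are illustrative.

class Deduplicator:
    def __init__(self, ttl_seconds=600.0):
        self.ttl = ttl_seconds
        self.seen = {}  # message_id -> first-seen timestamp

    def is_new(self, message_id, now=None):
        """Return True the first time an ID is seen within the TTL window."""
        now = time.time() if now is None else now
        # Evict expired entries before checking.
        self.seen = {m: t for m, t in self.seen.items() if now - t < self.ttl}
        if message_id in self.seen:
            return False
        self.seen[message_id] = now
        return True
```

Note the trade-off: once an ID ages out of the window, a very late redelivery would be processed again, which is why this pattern is paired with idempotent writes downstream.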