Permanent Spark SQL Jobs in the Thames Valley

9 of 9 Permanent Spark SQL Jobs in the Thames Valley

Senior Data Engineer

Slough, South East England, United Kingdom
Mastek
… performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines.
Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle.
Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Ability to program in languages such as SQL, Python, R, YAML, and JavaScript.
Data Integration: Integrate data from various sources, including relational databases, APIs, and streaming data sources. Implement data integration patterns and best practices. Work with API developers to ensure seamless data exchange.
Data Quality & Governance: Hands-on experience using Azure Purview for …
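The listing above asks for data quality checks and validation rules inside data pipelines. As an illustration only (not part of the listing), a minimal, framework-agnostic sketch of such rules in plain Python might look like the following — the rule set, field names, and thresholds are all hypothetical:

```python
# Sketch of row-level data quality checks, as described in the listing.
# Rules, field names, and bounds are hypothetical examples.

def check_not_null(row, field):
    """Rule: the field must be present and non-empty."""
    return row.get(field) not in (None, "")

def check_range(row, field, lo, hi):
    """Rule: the field must be a number within [lo, hi]."""
    try:
        return lo <= float(row[field]) <= hi
    except (KeyError, TypeError, ValueError):
        return False

def validate(rows, rules):
    """Split rows into (valid, rejected) according to all rules."""
    valid, rejected = [], []
    for row in rows:
        (valid if all(rule(row) for rule in rules) else rejected).append(row)
    return valid, rejected

rows = [
    {"id": "1", "amount": "42.5"},
    {"id": "", "amount": "10"},   # fails the not-null rule on id
    {"id": "3", "amount": "-7"},  # fails the range rule on amount
]
rules = [
    lambda r: check_not_null(r, "id"),
    lambda r: check_range(r, "amount", 0, 1000),
]
valid, rejected = validate(rows, rules)
```

In a real pipeline the rejected rows would typically be routed to a quarantine table for review rather than silently dropped.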

Data Engineer - PySpark - Palantir

Slough, South East England, United Kingdom
Cognitive Group | Part of the Focus Cloud Group
… agile team. Mentor and lead junior Data Engineers, fostering collaboration and knowledge sharing.
Skills & Experience: Minimum 5 years' experience in PySpark, Python, and SQL. Proven experience with the Palantir Foundry platform. Strong background in enterprise data analytics and distributed computing frameworks (Spark/Hive/Hadoop preferred). Demonstrated ability to design end-to-end data management and transformation solutions. Proficient in Spark SQL and familiar with cloud platforms such as Azure or AWS. Experience with Scrum/Agile methodologies. Knowledge of JavaScript/HTML/CSS and Kubernetes is a plus. Bachelor's …

Data Engineer

Reading, South East England, United Kingdom
Realm
… deployment. Clear communication and the capacity to articulate technical choices effectively are crucial.
Must-Have Skills: 3+ years Databricks (Lakehouse, Delta Lake, PySpark, Spark SQL); 5+ years SQL; Python; Azure; excellent client-facing communication skills; experience deploying Databricks pipelines; experience provisioning Databricks as code …

Data Engineer

Slough, South East England, United Kingdom
Realm
… deployment. Clear communication and the capacity to articulate technical choices effectively are crucial.
Must-Have Skills: 3+ years Databricks (Lakehouse, Delta Lake, PySpark, Spark SQL); 5+ years SQL; Python; Azure; excellent client-facing communication skills; experience deploying Databricks pipelines; experience provisioning Databricks as code …

Data Engineer

High Wycombe, South East England, United Kingdom
Realm
… deployment. Clear communication and the capacity to articulate technical choices effectively are crucial.
Must-Have Skills: 3+ years Databricks (Lakehouse, Delta Lake, PySpark, Spark SQL); 5+ years SQL; Python; Azure; excellent client-facing communication skills; experience deploying Databricks pipelines; experience provisioning Databricks as code …

Data Engineer

Oxford District, South East England, United Kingdom
Realm
… deployment. Clear communication and the capacity to articulate technical choices effectively are crucial.
Must-Have Skills: 3+ years Databricks (Lakehouse, Delta Lake, PySpark, Spark SQL); 5+ years SQL; Python; Azure; excellent client-facing communication skills; experience deploying Databricks pipelines; experience provisioning Databricks as code …

Data Engineer

Milton Keynes, South East England, United Kingdom
Realm
… deployment. Clear communication and the capacity to articulate technical choices effectively are crucial.
Must-Have Skills: 3+ years Databricks (Lakehouse, Delta Lake, PySpark, Spark SQL); 5+ years SQL; Python; Azure; excellent client-facing communication skills; experience deploying Databricks pipelines; experience provisioning Databricks as code …

Data Engineer

Slough, South East England, United Kingdom
Hybrid / WFH Options
La Fosse
… technical and non-technical teams. Troubleshoot issues and support wider team adoption of the platform.
What You'll Bring: Proficiency in Python, PySpark, Spark SQL, or Java; experience with cloud tools (Lambda, S3, EKS, IAM); knowledge of Docker, Terraform, GitHub Actions; understanding of data quality frameworks …

GCP Data Engineer (Java, Spark, ETL)

Slough, South East England, United Kingdom
Staffworx
Hybrid role on a digital Google Cloud transformation programme. Proficiency in programming languages such as Python, PySpark, and Java; develop ETL processes for data ingestion and preparation. Spark SQL, Cloud Run, Dataflow, Cloud Storage, GCP BigQuery, Data Studio, Unix/Linux platform, version control tools (Git, GitHub), automated deployment tools, Google Cloud Platform …
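This listing mentions developing ETL processes for data ingestion and preparation. As a purely illustrative, framework-agnostic sketch of the extract-transform-load pattern (the CSV fields and aggregation are hypothetical and not taken from the listing):

```python
import csv
import io

# Tiny ETL sketch: extract rows from CSV text, transform (trim and cast
# types), and load aggregated totals into an in-memory "sink".
# Field names are hypothetical examples.

RAW = """user_id,spend
1, 10.50
2, 3.25
1, 4.25
"""

def extract(text):
    """Extract: parse CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: trim whitespace and cast spend to float."""
    return [
        {"user_id": r["user_id"].strip(), "spend": float(r["spend"])}
        for r in rows
    ]

def load(rows):
    """Load: aggregate spend per user into a dict acting as the sink."""
    sink = {}
    for r in rows:
        sink[r["user_id"]] = sink.get(r["user_id"], 0.0) + r["spend"]
    return sink

totals = load(transform(extract(RAW)))
```

In a production GCP setting the same three stages would typically map onto Cloud Storage (extract), Dataflow or Spark (transform), and BigQuery (load), but the control flow is the same idea.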