10 of 10 PySpark Jobs in Slough
Slough, South East England, United Kingdom · Hybrid / WFH Options · Publicis Production
… data modeling, data warehousing concepts, and distributed systems. Excellent problem-solving skills and the ability to independently design, build, and validate output data. Deep proficiency in Python (including PySpark), SQL, and cloud-based data engineering tools. Expertise in multiple cloud platforms (AWS, GCP, or Azure) and in managing cloud-based data infrastructure. Strong background in database technologies (SQL Server …
Slough, South East England, United Kingdom · DVF Recruitment
… Microsoft Azure data services (Azure Data Factory, Azure Data Fabric, Azure Synapse Analytics, Azure SQL Database). Experience building ELT/ETL pipelines and managing data workflows. Proficiency in PySpark, Python, SQL, or Scala. Strong data modelling and relational database knowledge. Solid understanding of GDPR and UK data protection. Preferred: Power BI experience; familiarity with legal industry platforms; awareness …
Slough, South East England, United Kingdom · KDR Talent Solutions
… within Microsoft Azure data tools (Azure Data Factory, Azure Synapse, or Azure SQL). ✅ Dimensional modelling expertise for analytics use cases. ✅ Strong ETL/ELT development skills. ✅ Python/PySpark experience. ✅ Experience with CI/CD methodologies for data platforms. ✅ Deep knowledge of SQL. ✅ Extensive London Markets experience. Why Join? 🚀 New Projects – Work on a new data platform, shaping …
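For context, a minimal PySpark sketch of the kind of dimensional-modelling work this advert describes: loading a staging extract into a simple fact table joined to a date dimension. The paths, table and column names (staging policies, dim_date, fct_policies) are illustrative assumptions, not taken from the advert.

    # Illustrative star-schema load in PySpark; names and paths are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dim-model-sketch").getOrCreate()

    staging = spark.read.parquet("/lake/staging/policies")     # hypothetical staging extract
    dim_date = spark.read.parquet("/lake/warehouse/dim_date")  # hypothetical date dimension

    # Cast measures, resolve the date-dimension key, and keep only fact columns.
    fact = (
        staging
        .withColumn("premium_gbp", F.col("premium").cast("decimal(18,2)"))
        .join(dim_date, F.to_date("policy_start_ts") == F.col("calendar_date"), "left")
        .select("policy_id", "date_key", "premium_gbp")
    )

    fact.write.mode("overwrite").parquet("/lake/warehouse/fct_policies")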
Slough, South East England, United Kingdom · Hybrid / WFH Options · Amarji
… paced and rapidly evolving start-up environment. Skills: data engineering; Microsoft Fabric (Snowflake, Databricks considered); Power BI; DAX; API; M Query; Python; SQL; data pipelines/Dataflow Gen2/PySpark notebooks; data modelling. Benefits: mainly fully remote position, with the flexibility to work from home or any location that suits you best; occasional requirement to visit client sites for …
Slough, South East England, United Kingdom · HD TECH Recruitment
… engineering and Azure cloud data technologies. You must be confident working across: Azure data services, including Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Microsoft Fabric (desirable); Python and PySpark for data engineering, transformation, and automation; ETL/ELT pipelines across diverse structured and unstructured data sources; data lakehouse and data warehouse architecture design; Power BI for enterprise-grade …
Slough, South East England, United Kingdom · Hybrid / WFH Options · Hexegic
… and validate data models and outputs. Set up monitoring and ensure data health for outputs. What we are looking for: proficiency in Python, with experience in Apache Spark and PySpark; previous experience with data analytics software; ability to scope new integrations and translate user requirements into technical specifications. What’s in it for you? Base salary of …
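As a rough illustration of the monitoring and data-health responsibility mentioned above, a minimal PySpark check might look like the sketch below; the output path, the record_id column, and the pass/fail conditions are assumptions for illustration only.

    # Minimal data-health check on a published output; names and paths are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("data-health-sketch").getOrCreate()
    df = spark.read.parquet("/data/outputs/daily")  # hypothetical output location

    total = df.count()
    null_ids = df.filter(F.col("record_id").isNull()).count()
    dupes = total - df.dropDuplicates(["record_id"]).count()

    # Fail loudly so an orchestrator or scheduler can surface the alert.
    if total == 0 or null_ids > 0 or dupes > 0:
        raise ValueError(f"Data health check failed: rows={total}, null_ids={null_ids}, duplicates={dupes}")
    print(f"Data health OK: rows={total}")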
Slough, South East England, United Kingdom · Cint
… exchange platforms. Knowledge of dynamic pricing models. Experience with Databricks for scalable data processing and machine learning workflows. Experience working with big data technologies (e.g., Spark, PySpark). Experience with online market research methods/products. Additional Information – Our Values: collaboration is our superpower; we uncover rich perspectives across the world; success happens together; we deliver …
Slough, South East England, United Kingdom · HCLTech
… and experienced AWS Lead Data Engineer who will build and lead the development of scalable data pipelines and platforms on AWS. The ideal candidate will have deep expertise in PySpark, Glue, Athena, AWS Lake Formation, data modelling, DBT, Airflow, and Docker, and will be responsible for driving best practices in data engineering, governance, and DevOps. Key Responsibilities: • Lead the design and … implementation of scalable, secure, and high-performance data pipelines using PySpark and AWS Glue. • Architect and manage data lakes using AWS Lake Formation, ensuring proper access control and data governance. • Develop and optimize data models (dimensional and normalized) to support analytics and reporting. • Collaborate with analysts and business stakeholders to understand data requirements and deliver robust solutions. • Implement and … Engineering, or related field. • 10+ years of experience in data engineering. • Strong hands-on experience with AWS services: S3, Glue, Lake Formation, Athena, Redshift, Lambda, IAM, CloudWatch. • Proficiency in PySpark, Python, DBT, Airflow, Docker, and SQL. • Deep understanding of data modelling techniques and best practices. • Experience with CI/CD tools and version control systems like Git. • Familiarity with …
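For orientation, here is a minimal sketch of the kind of PySpark batch job these responsibilities describe: landing raw events from S3 as partitioned Parquet in a curated zone. The bucket names, columns and partitioning scheme are assumptions for illustration; a production Glue job would additionally wire this into the Glue job and Data Catalog machinery.

    # Illustrative raw-to-curated PySpark job; buckets and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("s3-ingest-sketch").getOrCreate()

    raw = spark.read.json("s3://example-raw-bucket/events/")   # hypothetical raw zone

    # Basic hygiene: drop null keys, parse timestamps, derive a partition column, dedupe.
    clean = (
        raw
        .filter(F.col("event_id").isNotNull())
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .withColumn("event_date", F.to_date("event_ts"))
        .dropDuplicates(["event_id"])
    )

    (clean.write
          .mode("append")
          .partitionBy("event_date")
          .parquet("s3://example-curated-bucket/events/"))     # hypothetical curated zone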
Slough, South East England, United Kingdom · Morela Solutions
… on experience or strong interest in working with Foundry as a core platform. Forward-Deployed Engineering – delivering real-time solutions alongside users and stakeholders. Broader Skillsets of Interest: Python & PySpark – for data engineering and workflow automation; Platform Engineering – building and maintaining scalable, resilient infrastructure; Cloud (AWS preferred) – deploying and managing services in secure environments; Security Engineering & Access Control – designing …
Slough, South East England, United Kingdom · Hellowork Consultants
… Proven experience with Fabric implementations (not Snowflake, Databricks, or other platforms); deep understanding of Fabric ecosystem components and best practices; experience with medallion architecture implementation in Fabric. Technical Skills: PySpark – advanced proficiency in PySpark for data processing; Data Engineering – ETL/ELT pipeline development and optimization; Real-time Processing – experience with streaming data and real-time analytics; Performance …
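As a rough illustration of the medallion pattern referenced above, a bronze-to-silver step in a Fabric PySpark notebook might look like the sketch below (Fabric notebooks provide a ready-made spark session); the table names bronze_sales/silver_sales and the columns are assumptions, not from the advert.

    # Bronze-to-silver cleanup sketch for a medallion lakehouse layout; names are hypothetical.
    from pyspark.sql import functions as F

    bronze = spark.read.table("bronze_sales")  # `spark` is provided by the notebook runtime

    # Deduplicate, enforce types, and drop rows without a business key.
    silver = (
        bronze
        .dropDuplicates(["order_id"])
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .filter(F.col("order_id").isNotNull())
    )

    silver.write.format("delta").mode("overwrite").saveAsTable("silver_sales")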