and analytics in decision-making processes. • Experience of working in the Construction/Engineering Industry desirable but not essential. • Any experience with ETL, Databricks, PySpark, Power Apps and Power Automate would be advantageous. What’s in it for you? In addition to an attractive salary, we offer a significant benefits package more »
an Agile way. Who are we looking for? • Degree in Computer Science, Information Systems, Data Science, or a related field. • Experience with Databricks, Dataverse, PySpark, Synapse and Power Automate • Experience with integrating SAP end-to-end advantageous • Experience with data warehouse technologies and data integration processes. • Knowledge of ETL (Extract more »
preferably GCP | Expertise in event-driven data integrations and click-stream ingestion | Proven ability in stakeholder management and project leadership | Proficiency in SQL, Python, PySpark | Solid background in data pipeline orchestration, data access, and retention tooling | Demonstrable impact on infrastructure scalability and data privacy initiatives | Collaborative spirit | Innovative problem more »
scientific Python toolset. Our tech stack includes Airbyte for data ingestion, Prefect for pipeline orchestration, AWS Glue for managed ETL, along with Pandas and PySpark for pipeline logic implementation. We utilize Delta Lake and PostgreSQL for data storage, emphasizing the importance of data integrity and version control in our … testing frameworks. Proficiency in Python and familiarity with modern software engineering practices, including 12factor, CI/CD, and Agile methodologies. Deep understanding of Spark (PySpark), Python (Pandas), orchestration software (e.g. Airflow, Prefect) and databases, data lakes and data warehouses. Experience with cloud technologies, particularly AWS Cloud services, with a more »
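The "12factor" practice named in the listing above is concrete enough to illustrate. A minimal sketch of 12-factor configuration, where settings are read from environment variables rather than hard-coded; the variable names and defaults here are invented for illustration, not taken from the posting:

```python
import os

def load_pipeline_config(env=os.environ):
    """Read pipeline settings from the environment, per the 12-factor
    config principle: configuration lives in env vars, not in code."""
    # Variable names below are illustrative, not from any real pipeline.
    return {
        "db_url": env.get("PIPELINE_DB_URL", "postgresql://localhost/dev"),
        "batch_size": int(env.get("PIPELINE_BATCH_SIZE", "500")),
        "dry_run": env.get("PIPELINE_DRY_RUN", "0") == "1",
    }

# With no variables set, the defaults apply:
config = load_pipeline_config(env={})
```

The same idea is what lets one artifact run unchanged in dev, staging, and production: only the environment differs.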
United Kingdom Information Technology (IT) Group Functions Job Reference # 294845BR City London Job Type Full Time Your role Do you have a proven track record of building big data solutions? Are you confident at iteratively refining user requirements and removing more »
Senior Data Engineer Up to £70k plus bonus Manchester Are you looking to take your Data Engineer career to the next level? This company uses extremely modern technologies, and you can be certain you will grow within a technical environment. more »
in STEM subjects. Strong experience in data pipelines and deploying ML models Preference for experience in retail/marketing Tech across: Python, AWS, Databricks, PySpark, A/B Testing, MLflow, APIs Experience in feature engineering and third-party data Apply below more »
Cycle · Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security Preferred qualifications, capabilities, and skills: · Skilled with Python or PySpark · Exposure to cloud technologies (Airflow, Astronomer, Kubernetes, AWS, Spark, Kafka) · Experience with Big Data solutions or relational DBs. · Experience in the Financial Services Industry is more »
Birmingham, Midlands, United Kingdom Hybrid / WFH Options
Lorien
of this is a strong preference. However other Cloud platforms like AWS/GCP are acceptable. • Coding Languages - Experience using Python with data (Pandas, PySpark) would be an advantage. Other languages such as C# would be beneficial but not essential. Their lovely offices are based in the West Midlands more »
experience leading a data engineering team. Key Tech: - AWS (S3, Glue, EMR, Athena, Lambda) - Snowflake, Redshift - DBT (Data Build Tool) - Programming: Python, Scala, Spark, PySpark or Ab Initio - Data pipeline orchestration (Apache Airflow) - Knowledge of SQL This is a 6 month initial contract with a trusted client of ours. more »
start interviewing ASAP. Responsibilities: Azure Cloud Data Engineering using Azure Databricks; Data Warehousing; Data Engineering. Very strong with the Microsoft Stack. ESSENTIAL knowledge of PySpark clusters. Python & C# scripting experience. Experience of message queues (Kafka). Experience of containerization (Docker). FINANCIAL SERVICES EXPERIENCE (Energy/commodities trading). If you have more »
such as Code Repo, Code Workbook, Pipeline Build, migration techniques, Data Connection and Security setup. Design and develop Data Pipelines, and have excellent skills in PySpark and Spark SQL, hands-on with code build and deployment in Palantir. Must lead a team of 6-7 technical associates with PySpark more »
Knowledge of Spark architecture and modern Data Warehouse/Data Lake/Lakehouse techniques Build transformation tables using SQL. Moderate-level knowledge of Python/PySpark or equivalent programming language. Power BI Data Gateways and DataFlows, permissions. Creation, utilisation, optimisation and maintenance of Relational SQL and NoSQL databases. Experienced working with more »
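The "build transformation tables using SQL" requirement above can be sketched in miniature. Assuming nothing about the actual warehouse, the same pattern works against an in-memory SQLite database; the table and column names here are invented for illustration:

```python
import sqlite3

# Illustrative only: derive a transformation (summary) table with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [("North", 120.0), ("North", 80.0), ("South", 50.0)],
)
# The transformation step: aggregate raw rows into a reporting table.
conn.execute(
    """CREATE TABLE orders_by_region AS
       SELECT region, SUM(amount) AS total_amount
       FROM raw_orders
       GROUP BY region"""
)
rows = dict(conn.execute("SELECT region, total_amount FROM orders_by_region"))
```

In a warehouse such as Synapse or Databricks the same shape appears as `CREATE TABLE ... AS SELECT` over much larger sources; only the engine changes.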
DevOps and CI/CD tools, Azure Cloud, Microsoft Fabric, Azure Services, Apache Spark, Experience of using IaC (Terraform, APIs), Data Engineer, Big Data, PySpark Location - Edinburgh (Preferred)/London JOB DETAILS You will be responsible for the platform and its integration with enterprise services & capabilities that ensure the more »
requiring 2-3 days onsite) and is paying up to £110,000 per annum Key Skills Strong commercial experience with Python/SQL/PySpark Knowledge of converting business requirements to engineering processes Azure environment - Databricks & Data Factory Industry experience with Insurance would be highly desirable The processing more »
data integration pipelines, transformations, pipeline scheduling, Ontology, and applications in Palantir Foundry Design, develop and deploy data solutions in Palantir with excellent skills in PySpark and Spark SQL for data transformations Experience in designing and building interactive data applications working with Ontology, actions, functions, object views, automate, indexing, data more »
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
Glue and SageMaker Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation) Data processing frameworks such as pandas, Spark and PySpark Machine learning concepts like model training, model registry, model deployment and monitoring Development and CI/CD tools (we use GitHub, CodePipeline and CodeBuild more »
levels of experience within data engineering. Experience deploying pipelines within Azure Databricks in line with the medallion architecture framework. Experience using SQL, Python and PySpark to build data engineering pipelines. Understanding of how to define best practices in relation to documentation standards as well as code standards. Understanding of more »
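The medallion architecture named above layers data as bronze (raw), silver (validated), gold (aggregated). A library-free sketch of that layering, assuming invented record shapes; in Databricks each layer would be a Delta table and the transforms would be PySpark, not plain Python:

```python
# Bronze layer: raw records exactly as ingested (note the bad row).
bronze = [
    {"user": "a", "amount": "10"},
    {"user": "b", "amount": "not-a-number"},
    {"user": "a", "amount": "5"},
]

def to_silver(records):
    """Silver layer: validated, typed records; invalid rows are dropped."""
    out = []
    for r in records:
        try:
            out.append({"user": r["user"], "amount": float(r["amount"])})
        except ValueError:
            continue  # a real pipeline would route this to a quarantine table
    return out

def to_gold(records):
    """Gold layer: business-level aggregate (total spend per user)."""
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
```

The value of the layering is that each table has a clear contract: bronze is append-only history, silver is trustworthy input for analysts, and gold is what dashboards read.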
SQL Server and relational databases. Solid understanding of the Azure data engineering stack, including Azure Synapse and Azure Data Lake. Programming skills in Python, PySpark, and T-SQL. Nice to haves: Familiarity with broader Azure Data Solutions, such as Azure ML Studio. Previous experience with Azure DevOps and knowledge more »
London (Chiswick), South East England, United Kingdom
Square One Resources
related field Certifications such as Azure Data Engineer Associate are desirable. Knowledge of data ingestion methods for real-time and batch processing Proficiency in PySpark and debugging Apache Spark workloads. What’s in it for you? Annual bonus scheme – up to 10% Excellent pension scheme Flexible working Enhanced family more »
Azure Cloud platform Knowledge of orchestrating workloads on cloud Ability to set and lead the technical vision while balancing business drivers Strong experience with PySpark, Python programming Proficiency with APIs, containerization and orchestration is a plus Qualifications: Bachelor's and/or Master's degree About you: You are more »