Mid- or Senior-level Data Scientist. Solid knowledge of Data Engineering principles, including productionisation. Technical experience with some or all of the following: Python, PySpark, scikit-learn, pandas, Azure Data Services, Databricks. If this sounds of interest, please apply.
City of London, London, United Kingdom (Hybrid / WFH options)
…Modeling within a cloud-based data platform. Strong experience with the SQL Server and Azure data engineering stack, including Azure Synapse and Azure Data Lake; Python, PySpark and T-SQL. In return you will be offered a competitive salary and benefits package, remote working options and an opportunity to work with …
…get the most from their data. They are looking for someone with core skills in SQL, complemented with Azure experience (Azure Data Factory, Databricks, PySpark etc.). This is a very exciting time to join as they shake things up across the industry, so please get in touch ASAP to … working with and modelling data warehouses. Skills & Qualifications: strong technical expertise using SQL Server and Azure Data Factory for ETL; solid experience with Databricks, PySpark etc.; understanding of Agile methodologies, including use of Git; experience with Python and Spark. Benefits: £65,000 - £75,000 plus bonus. To apply for this …
PySpark Developer - 6 Month Contract - Inside IR35 - South of England (Hybrid). As a Python Lead Software Engineer, you will be an integral part of an agile team, dedicated to enhancing, constructing, and delivering top-tier technology products. Your focus will be on ensuring the secure, stable, and scalable development … months. Location: South of England / WFH / 2 days onsite. Day Rate: up to £460 per day (Inside IR35). Start Date: ASAP.
Saffron Walden, Essex, South East, United Kingdom (Hybrid / WFH options)
EMBL-EBI
…expertise and requirements. You have: a BSc or MSc in computer science or a related field; expertise in Python, including popular Python libraries (NumPy, Pandas, PySpark) and frameworks (Django, Django REST Framework, FastAPI); hands-on experience with both relational (e.g. PostgreSQL) and non-relational (e.g. Elasticsearch, Redis) databases; strong …
*Liverpool office 2-3 days a week - rest remote* Senior Data Engineer: data, data modelling, migration, ETL, ETL tooling (Informatica IICS & IDMC), SQL, Python or PySpark, Agile, migrating data from on-prem to cloud. *Informatica (IICS & IDMC) is essential.* A top reinsurance firm is looking for a Senior/Lead … cloud and tooling - Informatica and Azure are highly desired.
…SQL Server and relational databases. Solid understanding of the Azure data engineering stack, including Azure Synapse and Azure Data Lake. Programming skills in Python, PySpark, and T-SQL. Nice to have: familiarity with broader Azure Data Solutions, such as Azure ML Studio; previous experience with Azure DevOps and knowledge …
…related field. Certifications such as Azure Data Engineer Associate are desirable. Knowledge of data ingestion methods for real-time and batch processing. Proficiency in PySpark and debugging Apache Spark workloads. What's in it for you? Annual bonus scheme (up to 10%), excellent pension scheme, flexible working, enhanced family …
…Azure Cloud platform. Knowledge of orchestrating workloads in the cloud. Ability to set and lead the technical vision while balancing business drivers. Strong experience with PySpark and Python programming. Proficiency with APIs, containerization and orchestration is a plus. Qualifications: Bachelor's and/or Master's degree. About you: you are …
…Quality and Information Security principles. Experience with Azure and ETL tools such as ADF and Databricks. Advanced database and SQL skills, along with Python, PySpark and Spark SQL. Strong understanding of data model design and implementation principles; data warehousing design patterns and implementation. Benefits: £50-£60k DOE, mainly home based …
…a complete greenfield re-architecture from the ground up in Microsoft Azure. The tech you'll be playing with: Azure Data Factory, Azure Databricks, PySpark, SQL, dbt. What you need to bring: 1-3 years' experience in building data pipelines; SQL experience in data warehousing; Python experience would …
…of Python. Experience developing in the cloud (AWS preferred). Solid understanding of libraries like Pandas and NumPy. Experience with data warehousing tools like Snowflake, PySpark and Databricks. Commercial experience with performant database programming in SQL. Ability to solve complex technical issues and to understand risks before they arise. Please apply today.
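The "Pandas and NumPy" skill these ads keep asking for is everyday data wrangling: cleaning missing values and aggregating. A minimal, hypothetical sketch (the column names and data below are invented for illustration and come from no particular role):

```python
import numpy as np
import pandas as pd

# Hypothetical orders data; names and values are invented for illustration.
orders = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "b"],
    "amount": [10.0, 20.0, 5.0, np.nan, 15.0],
})

# Typical cleaning + aggregation: fill missing values, then summarise
# per customer with several aggregates at once.
orders["amount"] = orders["amount"].fillna(0.0)
summary = orders.groupby("customer")["amount"].agg(["sum", "mean"])
print(summary)
```

In interviews for roles like these, the groupby-aggregate pattern above (and knowing that `fillna` is a choice, not a default) is about the level of fluency implied.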
…and garnering support for data engineering initiatives across different departments and levels of the organisation. A strong understanding of technologies such as AWS cloud, Glue, PySpark, Python, SQL and database design concepts, or experience in implementing data pipelines, would be … What you'll get for this role: our purpose - with …
…or similar frameworks. Problem-solving: strong problem-solving skills and the ability to think quickly in a dynamic environment. Technical skills: proficiency in Python, PySpark, Synapse and Databricks; strong SQL skills; CI/CD practices; PyTest or similar. Desirable skills and experience: Azure/cloud services; Jenkins, Octopus and Git.
Strong experience in data pipelines and deploying ML models. Preference for experience in retail/marketing, but not required. Tech across: Python, AWS, Databricks, PySpark, A/B testing, MLflow, APIs. Experience in feature engineering and third-party data. Apply below.
…incorrect or not received on time; outages with the end users of a data pipeline. What We Value: reading and writing code in Python, PySpark and Java; an understanding of Spark and an interest in learning the basics of tuning Spark jobs; pipeline monitoring - team members should be able to use …
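The "basics of tuning Spark jobs" mentioned above usually start with a handful of configuration knobs. A sketch of a conf mapping one might pass to a Spark session builder; the keys are standard Spark settings, but the values are illustrative only, not recommendations for any workload:

```python
# Common Spark tuning knobs; values are illustrative, not recommendations.
spark_conf = {
    # Number of partitions used for shuffles (joins, aggregations).
    "spark.sql.shuffle.partitions": "200",
    # Let Adaptive Query Execution resize shuffle partitions at runtime.
    "spark.sql.adaptive.enabled": "true",
    # Memory and cores per executor.
    "spark.executor.memory": "4g",
    "spark.executor.cores": "2",
}

# With pyspark installed, these would be applied roughly like:
#   builder = SparkSession.builder
#   for key, value in spark_conf.items():
#       builder = builder.config(key, value)
```

Shuffle partition count and executor sizing are typically the first things checked when a job is slow or spilling to disk.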
…advice to analytical users on how they can access and utilise the new datasets. Qualities: comfortable with Python, ideally with Apache Spark and PySpark experience; previous data analytics software experience; able to scope new integrations and translate analytical user needs into technical requirements; UK based - data analytics system can …
…to the table. Key Responsibilities: engineer and orchestrate data flows and pipelines in a cloud environment using a progressive tech stack, e.g. Databricks, Spark, Python, PySpark, Delta Lake, SQL, Logic Apps, Azure Functions, ADLS, Parquet, Neo4j, Flask; ingest and integrate data from a large number of disparate data sources; design … Spark/Databricks or similar; experience working in a cloud environment (Azure, AWS, GCP); experience in at least one of Python (or similar), SQL, PySpark; experience in building data pipeline/ETL/ELT solutions; ability and strong desire to research and learn new technologies and languages; interest in …
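The "ingest and integrate data from disparate sources" responsibility above is, at its core, a merge-validate-transform loop. A toy sketch in plain Python; real pipelines in these stacks would use Spark DataFrames and Delta tables, and every source name and record here is invented for illustration:

```python
def ingest(sources):
    """Merge records from several 'source systems', tagging provenance."""
    records = []
    for name, rows in sources.items():
        for row in rows:
            records.append({**row, "source": name})
    return records

def transform(records):
    """Drop rows with missing amounts and normalise the amount field."""
    return [
        {**r, "amount": round(float(r["amount"]), 2)}
        for r in records
        if r.get("amount") is not None
    ]

# Hypothetical source systems feeding the pipeline.
sources = {
    "crm": [{"id": 1, "amount": "10.5"}],
    "billing": [{"id": 2, "amount": None}, {"id": 3, "amount": "7.25"}],
}
loaded = transform(ingest(sources))
print(loaded)  # the row with a missing amount is dropped
```

The same shape scales up directly: `ingest` becomes a set of source readers, `transform` becomes DataFrame operations, and the final write targets a warehouse or Delta Lake table.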