good communicator with your team. Key Skills: Python at a software-engineering level, including unit and integration testing experience; distributed computing knowledge via PySpark or Scala, able to debug jobs in the Spark UI and optimise accordingly; AWS experience; good understanding of data modelling and change data …
queries for huge datasets. Has a solid understanding of blockchain ecosystem elements such as DeFi, exchanges, wallets, smart contracts, mixers and privacy services. Databricks and PySpark; analysing blockchain data; building and maintaining data pipelines; deploying machine learning models; use of graph analytics and graph neural networks. If this sounds like …
essential skills: Typical data engineering experience required (3+ years). Strong knowledge and experience of: Azure Data Factory and Synapse data solution provision; Power BI; Python; PySpark (preference will be given to those who hold relevant certifications); proficiency in SQL; knowledge of Terraform; ability to develop and deliver complex visualisation and reporting …
skills: Typical data engineering experience required (3+ years). Strong knowledge and experience of: Azure Data Factory and Synapse data solution provision; Azure DevOps; Power BI; Python; PySpark (preference will be given to those who hold relevant certifications); proficiency in SQL; knowledge of Terraform; ability to develop and deliver complex visualisation and reporting …
in programming languages and data structures such as SAS, Python, R, and SQL is key. With a Python background: familiarity with pandas/polars/pyspark and pytest; understanding of OOP principles; Git version control; knowledge of the following frameworks a plus: pydantic, pandera, sphinx. Additionally, experience in any or all …
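The validation idea behind frameworks like pydantic can be illustrated with a minimal, stdlib-only sketch (this is an analogue using dataclasses, not pydantic's actual API; the `Trade` record and its fields are invented for illustration):

```python
from dataclasses import dataclass

# Stdlib sketch of typed-record validation: reject bad values at
# construction time, the core idea pydantic formalises.
@dataclass
class Trade:
    ticker: str
    quantity: int

    def __post_init__(self):
        if not self.ticker:
            raise ValueError("ticker must be non-empty")
        if self.quantity <= 0:
            raise ValueError("quantity must be positive")

trade = Trade(ticker="VOD.L", quantity=100)
print(trade)  # Trade(ticker='VOD.L', quantity=100)
```

A real pydantic model adds type coercion, JSON parsing, and schema export on top of this pattern.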
Engineering experience required. ACTIVE SC is mandatory. Essential requirements: Azure Data Factory and Synapse data solution provision; Azure DevOps; Microsoft Azure; Power BI; Python; PySpark; dimensional data models; semantic data models, including integration to Power BI; data engineering capabilities; business analysis to understand service needs and document them accurately …
requirements. Preferred skills and experience: Databricks; Azure Data Factory; data lakehouse medallion architecture; Microsoft Azure; T-SQL development (MS SQL Server 2005 onwards); Python, PySpark. Experience of the following systems would also be advantageous: Azure DevOps; MDS; Kimball dimensional modelling methodology; Power BI; Unity Catalog; Microsoft Fabric. Experience of …
and access controls. Monitor and optimize performance of data workflows using CloudWatch, AWS Step Functions, and performance tuning techniques. Automate data processes using Python, PySpark, SQL, or AWS SDKs. Collaborate with cross-functional teams to support AI/ML, analytics, and business intelligence initiatives. Maintain and enhance CI/… a cloud environment. Required Skills & Qualifications: 5+ years of experience in data engineering with a strong focus on AWS cloud technologies. Proficiency in Python, PySpark, SQL, and AWS Glue for ETL development. Hands-on experience with AWS data services, including Redshift, Athena, Glue, EMR, and Kinesis. Strong knowledge of More ❯
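The "automate data processes using Python and SQL" duty these listings describe can be sketched as a tiny ETL step (a hedged, stdlib-only illustration using sqlite3 in place of Redshift/Athena; table and column names are invented, not any employer's actual pipeline):

```python
import sqlite3

# Minimal ETL sketch: load raw rows, apply a SQL transform, and
# materialise a cleaned aggregate table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("u1", 10.0), ("u1", 5.5), ("u2", None)],  # None simulates a bad record
)
# Transform: drop null amounts, aggregate per user.
conn.execute(
    """CREATE TABLE daily_totals AS
       SELECT user_id, SUM(amount) AS total
       FROM raw_events
       WHERE amount IS NOT NULL
       GROUP BY user_id"""
)
totals = dict(conn.execute("SELECT user_id, total FROM daily_totals"))
print(totals)  # {'u1': 15.5}
```

In an AWS Glue or PySpark job the same load/filter/aggregate shape applies, just expressed against DataFrames instead of a local database.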
with proficiency in designing and implementing CI/CD pipelines in cloud environments. Excellent practical expertise in performance tuning and system optimisation. Experience with PySpark and Azure Databricks for distributed data processing and large-scale data analysis. Proven experience with web frameworks, including knowledge of Django and experience with …
apply! A degree in Mathematics, Engineering, Statistics, Computer Science, Physics, or a related field. An advanced degree is highly preferred. Proficient in Python and PySpark; experience with SQL or similar querying languages. Solid foundation in machine learning principles, including model evaluation, optimization, and deployment best practices. Self-motivated, collaborative More ❯
following areas: data warehousing (Databricks); data modelling (medallion architecture, facts/dimensions); Azure data stack (Data Factory/Synapse); visualisation (Power BI); coding best practice (Python, PySpark); real-time processing (SQL); insurance/finance experience; startup experience; leadership/line-management experience. What's in it for you? Remote-first working …
using data engineering, statistical, and ML/AI approaches to uncover data patterns and build models. We use the Microsoft tech stack, including Azure Databricks (PySpark, Python), and we are expanding our data science capabilities. To be successful in the role, you will need extensive experience in data …
huge datasets. Has a solid understanding of blockchain ecosystem elements such as DeFi, exchanges, wallets, smart contracts, mixers and privacy services. Bonus experience: Databricks and PySpark; analysing blockchain data; building and maintaining data pipelines; deploying machine learning models; use of graph analytics and graph neural networks; following funds on chain …
schemas (both JSON and Spark), schema management, etc. Strong understanding of complex JSON manipulation. Experience working with data pipelines built on custom Python/PySpark frameworks. Strong understanding of the four core data categories (reference, master, transactional, free-form) and the implications of each, particularly managing/handling reference data.
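The "complex JSON manipulation" these pipelines call for often reduces to flattening nested structures into tabular keys before loading. A minimal stdlib sketch (the sample payload and field names are invented for illustration):

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested JSON objects into dotted keys."""
    out = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, path))
        else:
            out[path] = value
    return out

raw = json.loads('{"trade": {"id": 7, "legs": {"pay": "GBP"}}, "source": "api"}')
print(flatten(raw))
# {'trade.id': 7, 'trade.legs.pay': 'GBP', 'source': 'api'}
```

In Spark the equivalent is selecting nested columns (`col("trade.legs.pay")`) or exploding arrays, but the flattening logic is the same idea.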
personal development. What's in it for you? Up to £90k; bonus scheme. Skills and experience: experience using modern technologies such as Python, PySpark and Databricks; experience using advanced SQL; experience with cloud computing, preferably Azure; experience working on loyalty-scheme projects. If you would like to …
Bachelor's or Master's degree in Mathematics, Science, Statistics, Economics, or a related field, or equivalent professional experience. Experience with SQL; experience with Python/PySpark/R for structuring, transforming, and visualizing large data sets is a plus. Knowledge of statistical and modeling techniques. Strong analytical problem-solving skills …
data into a unified and reliable asset. Projects natural confidence in communication and has strong stakeholder-management skills. Strong proficiency with Python tools such as Pandas, NumPy and PySpark for data analysis and cleaning. Maintains solid knowledge of automation tools and other emerging technologies, including AI platforms and LLMs. Organised …
sprint planning sessions. Monitor data pipeline executions and investigate test failures or anomalies. Document test results, defects, and quality metrics. Preferred qualifications: experience with PySpark or notebooks in Databricks; exposure to Azure DevOps, unit-testing frameworks, or Great Expectations for data testing; knowledge of data warehousing or medallion architecture …
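The data-testing idea that tools like Great Expectations formalise — assert expectations about a batch before it flows downstream — can be sketched with plain Python (a hedged, stdlib-only analogue, not the Great Expectations API; column names are invented):

```python
# Data-quality gate: return failed expectations for a batch of rows;
# an empty list means the batch passes.
rows = [
    {"order_id": 1, "amount": 20.0},
    {"order_id": 2, "amount": 13.5},
]

def check_batch(batch):
    """Evaluate expectations against a batch; return failure messages."""
    failures = []
    ids = [r["order_id"] for r in batch]
    if len(ids) != len(set(ids)):
        failures.append("order_id values must be unique")
    if any(r["amount"] is None or r["amount"] < 0 for r in batch):
        failures.append("amount must be non-negative and not null")
    return failures

print(check_batch(rows))  # [] -- batch passes
```

In a pipeline, a non-empty result would fail the run (or quarantine the batch) and feed the defect metrics mentioned above.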