Data Lake Storage, Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Azure SQL Database, Azure Stream Analytics, etc.
· Strong Python or Scala with Spark/PySpark experience
· Experience with relational and NoSQL databases
· Significant experience and in-depth knowledge of creating data pipelines and associated design principles, standards, Data …
Birmingham, England, United Kingdom Hybrid / WFH Options
Lorien
experience of this is a strong preference. However, other cloud platforms like AWS/GCP are acceptable. Coding - Experience using Python with data (Pandas, PySpark) would be an advantage. Other languages such as C# would be beneficial but not essential. What’s next? If you believe you have the …
Ideal Candidate Profile: Proven track record in big data engineering with a solid understanding of ETL pipelines and data system projections. Proficiency in Python, PySpark and SQL, and familiarity with data science tools such as R and ML/AI frameworks. Strong foundation in database management (SQL and NoSQL databases such as Aurora …
experience with data modelling, data warehousing, and ETL/ELT processes. Fluency and development experience in at least one of the following: Java, Python, PySpark or Scala. Experience working with a variety of data formats such as JSON, Parquet, XML etc. Experience with or developed understanding of the application of ETL …
of this is a strong preference. However other Cloud platforms like AWS/GCP are acceptable. · Coding Languages - Experience using Python with data (Pandas, PySpark) would be an advantage. Other languages such as C# would be beneficial but not essential. · Machine Learning and Data Science Tools - Any experience in …
requirements and deliver solutions that drive business value. Requirements: 7+ years in a Data Engineering role. Excellent proficiency in SQL, Python, Microsoft Azure, Databricks, PySpark. Experience managing a team. Details: Start Date: ASAP. Duration: 3 months, option for permanent extension. Day rate: Up to £400 (Ltd), depending on experience. Annual …
Cardiff, Wales, United Kingdom Hybrid / WFH Options
Creditsafe
levels of the organisation to achieve the business's objectives. We use modern tools and methodologies, leveraging cloud services (AWS), Step Functions, Glue, Athena, PySpark and of course, SQL and Python. We work in an agile manner, delivering iteratively through a metadata-driven approach which allows us to generate …
Lead Data Engineer: We need some strong Data Engineer profiles; they need good experience with PySpark, Python, SQL, ADF and preferably Databricks experience. Job description: Building new data pipelines and optimizing data flows using the Azure cloud stack. Building data products from scratch. Support Business Analysts and Data Architects …
in key technologies related to Data Management, e.g. SQL, Spark, Python
· Experience with the Azure cloud platform
· Experience with advanced Python libraries, i.e. Pandas, PySpark
· Experience with development best practices (git, testing, coding standards, CI/CD, documentation, code refactoring...)
· A love of data and understanding of some algorithmic …
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment. Skilled in at least one of Python, PySpark, SQL or similar. Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive …
cross-functional teams entrusted with business-critical platforms.
Desirable skills & experience:
· Working to an Agile methodology and familiarity with Azure DevOps
· Deep automation knowledge with Python
· Skilled in PySpark and Synapse
· Experience with data modelling and visualisation in Power BI (or alternative)
· A strong understanding of architecting data platforms, BI, MI, or analytics solutions
· Strong …
applied machine learning, probability, statistics, and quantitative risk modelling. High proficiency in Python & SQL. Experience with big data technologies and tools, particularly Databricks and PySpark, is highly desirable. Experience in agile software development processes is a plus. Experience in insurance, cyber, or a related domain is ideal. Understanding of …
a generous benefits package. Technical Experience Required: Demonstrated expertise in data engineering with a focus on Azure services. Proficiency in SQL, Azure Databricks, and PySpark, handling unstructured/semi-structured data, with experience of schema evolution and/or serialisation. Extensive experience in building and optimizing data pipelines. Experience …
and SQL. Exposure to developing in a cloud platform such as AWS, GCP or Azure. Knowledge of big data technologies, e.g. Trino, Hadoop or PySpark. Ability to build trusted and credible relationships with your peers, stakeholders, and customers. Analytical thinker and natural problem solver. If this sounds like you …
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
that are comfortable with terms like: Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLM), NLP, AI, Machine Learning, MLOps, Python, PySpark, and Azure. Flexible Working: The role is 60% work from home. Sponsorship: Sadly, sponsorship isn't available for these roles. About You: Bachelor's degree …
the insurance domain is advantageous. Education: A degree in Computer Science, Data Science, Engineering, or a related discipline. Technical Skills: Proficient in Python, SQL, PySpark, and Databricks. Demonstrated proficiency in modern NLP techniques and tools. Proven track record in developing and managing data quality metrics and dashboards. Experience collaborating …
Engineering Hands-on experience in designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Azure Synapse, Python, PySpark etc. Experience being customer-facing on numerous data-focused projects with a consultative approach. Ability to deliver high- to low-level designs for Data …
in SQL Server and/or Azure (Data Factory, Synapse) and the associated ETL technologies. Experience of Continuous Integration/Continuous Delivery (CI/CD). PySpark experience. If you are interested, please call me or email me on (see below)! Thanks …
and ensuring best practices are understood and followed. Technical Skills and Qualifications: Expert knowledge in Python including libraries/frameworks such as pandas, numpy, pyspark. Good understanding of OOP, software design patterns, and SOLID principles. Good experience in Docker. Good experience in Linux. Good experience in Airflow. Good knowledge …
Greater Manchester, England, United Kingdom Hybrid / WFH Options
Blue Wolf Digital
to join their team. The primary focus of the role is Databricks data engineering. You will be building data pipelines using Databricks, coding using PySpark, and supporting internal applications. You will also be using Python for data transformations and working across the Azure Data Platform. Must Have: Strong Databricks …
Coventry, West Midlands, West Midlands (County), United Kingdom
Investigo
platform, driving cost optimisation opportunities. Provide expertise in AWS monitoring and optimisation, optimising databases and ETL pipelines. Utilise programming languages such as Python and PySpark to transform big data into manageable datasets. Contribute to the development of interactive dashboards and provide expert analysis across program lifecycles. Transform technical data …
month contract. Essential Skills: Insurance/Financial Services experience is highly desirable. Design efficient and scalable data models. Azure Databricks (preferably), SQL, Python, PySpark, dimensional/star schema data modelling. Understanding of Conceptual, Logical and Physical Data Models. Experience in ETL/ELT as well as Entity Relationship (ER) …
Power BI would also be useful. Engineer with past experience with Java, Data, and Infrastructure (DevOps). Java is a key skill.
· Programming: Java, Python, PySpark
· Storage Mechanisms: MongoDB, Redshift, AWS S3
· Cloud Environments/Infra: AWS (required), [AWS Lambda, Terraform] (nice to have)
· Data Platforms: Creating data pipelines within …