value through improved data handling and analysis. Responsibilities: Build predictive models using machine-learning techniques that generate data-driven insights on modern data platforms (Spark, Hadoop and other MapReduce tools); develop and productionize containerized algorithms for deployment in hybrid cloud environments (GCP, Azure); connect and blend data from …
quality testing frameworks. Proficiency in Python and familiarity with modern software engineering practices, including 12-factor, CI/CD, and Agile methodologies. Deep understanding of Spark (PySpark), Python (Pandas), orchestration software (e.g. Airflow, Prefect), and databases, data lakes and data warehouses. Experience with cloud technologies, particularly AWS Cloud services, with …
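Several of these listings mention orchestration tools such as Airflow or Prefect. Their core idea — a pipeline declared as tasks with dependencies, executed in a valid order — can be sketched with nothing but the Python standard library (the task names and dependency graph here are invented for illustration, not taken from any listing):

```python
from graphlib import TopologicalSorter

results = []

# Toy "tasks": a real orchestrator would run Spark jobs, SQL, etc.
tasks = {
    "extract": lambda: results.append("extract"),
    "transform": lambda: results.append("transform"),
    "load": lambda: results.append("load"),
}

# "transform" depends on "extract"; "load" depends on "transform".
deps = {"transform": {"extract"}, "load": {"transform"}}

# Run every task once its dependencies have completed.
for name in TopologicalSorter(deps).static_order():
    tasks[name]()

print(results)  # ['extract', 'transform', 'load']
```

Airflow and Prefect add scheduling, retries and monitoring on top of this same dependency-graph model.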
London, Liverpool, Merseyside, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions
rate of £250-£400, falling inside IR35 regulations. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes using AWS, Databricks, Python, Spark, and SQL. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions. Optimize and troubleshoot data … Glue). Hands-on experience with Databricks for data processing and analytics. Proficient in Python programming for data manipulation and automation. Solid understanding of Apache Spark for big data processing. Strong SQL skills for data querying, transformation, and analysis. Excellent problem-solving abilities and attention to detail. Ability …
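The pipeline responsibilities above follow the extract–transform–load pattern. A minimal, framework-agnostic sketch using only Python's standard library (the CSV schema, currency filter and in-memory database are hypothetical; the roles above would use Spark/Databricks and a real warehouse rather than sqlite3):

```python
import csv
import sqlite3
from io import StringIO

# Hypothetical raw input; in practice this would arrive via S3, Glue, etc.
RAW_CSV = """order_id,amount,currency
1,10.50,GBP
2,3.20,GBP
3,7.00,USD
"""

def extract(source: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(StringIO(source)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: keep GBP orders and cast amounts to numbers."""
    return [(int(r["order_id"]), float(r["amount"]))
            for r in rows if r["currency"] == "GBP"]

def load(rows: list[tuple]) -> sqlite3.Connection:
    """Load: write cleaned rows into a (here in-memory) warehouse table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return con

con = load(transform(extract(RAW_CSV)))
total = con.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(round(total, 2))  # 13.7
```

The same three-stage shape scales up directly: a PySpark job swaps the CSV reader for `spark.read`, the list comprehension for DataFrame transformations, and the sqlite3 insert for a write to Delta or a warehouse table.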
a qualified Data Engineer to join our team, where your responsibilities will include: Designing, optimizing, and maintaining scalable data pipelines and ETL processes using Spark, ensuring streamlined data processing and integration. Collaborating cross-functionally to translate complex data requirements into actionable technical solutions that drive business objectives. Leveraging Microsoft … the Midlands. Ideal Candidate Profile: We are seeking an individual who has the following attributes: Proven expertise as a Data Engineer, demonstrating proficiency in Apache Spark and cloud-based technologies, particularly Microsoft Azure and Databricks. Strong programming skills, with a focus on Python, along with proficiency in ETL …
platform (preferably GCP). BSc/MSc in computer science, maths, physics or a STEM subject. Basic knowledge of statistics and machine learning. Experience with Spark, Apache services, ETL tools, data visualization and dashboards. Experience with streamed data processing, parallel compute, and/or event-based architectures. Experience with …
and libraries for geospatial data analysis & modelling. Experience with cloud computing platforms such as Azure, AWS, Google Cloud, and distributed computing frameworks such as Apache Spark, for processing large geospatial datasets. Familiarity with geospatial databases and data visualisation tools such as Tableau, QGIS & ArcGIS. Knowledge of satellite imagery analysis …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
LV= General Insurance
and libraries for geospatial data analysis & modelling. Experience with cloud computing platforms such as Azure, AWS, Google Cloud, and distributed computing frameworks such as Apache Spark, for processing large geospatial datasets. Familiarity with geospatial databases and data visualisation tools such as Tableau, QGIS & ArcGIS. Knowledge of satellite imagery analysis …
Bournemouth, Dorset, South West, United Kingdom Hybrid / WFH Options
LV= General Insurance
and libraries for geospatial data analysis & modelling. Experience with cloud computing platforms such as Azure, AWS, Google Cloud, and distributed computing frameworks such as Apache Spark, for processing large geospatial datasets. Familiarity with geospatial databases and data visualisation tools such as Tableau, QGIS & ArcGIS. Knowledge of satellite imagery analysis …
leading business intelligence platform (e.g. Microsoft, Crystal, Qlik, SAP, Tableau). Good understanding of open-source, big data, and cloud data platforms (e.g. Hadoop, Spark, Hive, Pentaho, AWS, Azure); given a business problem, you can analyse and evaluate options and recommend solutions. Proven experience in designing, building and maintaining …
tooling. Scripting experience (Python, Perl, Bash, etc.). ELK (Elastic stack). JavaScript. Cypress. Linux experience. Search engine technology (e.g., Elasticsearch). Big data technology experience (Hadoop, Spark, Kafka, etc.). Microservice and cloud-native architecture. Desirable Skills: Able to demonstrate experience of troubleshooting and diagnosis of technical issues. Able to demonstrate excellent …
etc). Experience with SQL and query design on large, complex datasets. Experience with cloud and big-data tools and frameworks like Databricks/Spark, Airflow, Snowflake, etc. Expertise designing and developing with distributed data processing platforms like Databricks/Spark. Experience using ELT/ETL tools such as …
London (city), London, England Hybrid / WFH Options
T Rowe Price
or similar, with 6+ years of professional experience. A good understanding of modern lakehouse architectures and corresponding technologies, such as Dremio, Snowflake, Iceberg, (Py)Spark/Glue/EMR, dbt and Airflow/Dagster. Experience with Cloud providers. Familiarity with AWS S3, ECS and EC2/Fargate would be …
managers, to understand data requirements and deliver high-quality solutions, as well as architecting data ingestion, transformation, and storage processes using tools such as Apache Spark, Azure Data Factory, and other similar technologies. Other core duties include optimizing data pipeline performance and ensuring data accuracy, reliability, and timely delivery. … Services Certifications in relevant technologies, such as Azure Data Engineer or Databricks Certified Developer. Experience with real-time data processing and streaming technologies like Apache Kafka or Azure Event Hubs. Knowledge of data visualization tools, such as Power BI or Tableau. Contributions to open-source projects or active participation …
engineers of varying levels of experience. Flexibility and willingness to adapt to new software and techniques. Nice to Have: Experience working with projects in Apache Spark, Databricks or similar. Expert cloud platform knowledge, e.g. Azure. What will be your key responsibilities? A technical expert and leader on the …
Guildford, England, United Kingdom Hybrid / WFH Options
Hawksworth
warehousing and ETL frameworks. Proficiency in working with relational databases (e.g., Oracle, PostgreSQL), Parquet/Delta files and big data technologies (e.g. Synapse, Hadoop, Spark, Kafka). Experience working with Microsoft Azure and associated data services. Strong analytical and data interpretation skills, with the ability to communicate findings to technical …
machine learning techniques, deep learning, graph data analytics, statistical analysis, time series, geospatial, NLP, sentiment analysis, pattern detection, etc.) Experience using Python, R or Spark to extract insights from data. Knowledge of SQL for accessing and processing data (PostgreSQL preferred, but general SQL knowledge more important). Experience using the …
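"Extracting insights from data" with SQL, as the listing above asks, typically means aggregation queries of this shape. A small hedged sketch — the claims table, figures and regions are invented for illustration; the SQL is standard and would run equally on the PostgreSQL the ad prefers (shown here with sqlite3 for portability):

```python
import sqlite3

# Hypothetical claims table to illustrate a basic analytical query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE claims (region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO claims VALUES (?, ?)",
    [("North", 120.0), ("North", 80.0), ("South", 50.0)],
)

# Insight: average claim amount per region, highest first.
rows = list(con.execute("""
    SELECT region, AVG(amount) AS avg_amount
    FROM claims
    GROUP BY region
    ORDER BY avg_amount DESC
"""))
print(rows)  # [('North', 100.0), ('South', 50.0)]
```

The same GROUP BY/aggregate pattern is what Pandas expresses as `df.groupby("region")["amount"].mean()` and Spark as `df.groupBy("region").avg("amount")`.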
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra • MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases • Programming with languages and frameworks such as Python or Spark • Amazon Web Services, Microsoft Azure or Google Cloud, and distributed processing technologies such as Hadoop. Benefits: • Base Salary …
to: Backend technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js). Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in …
Azure Synapse Analytics. Strong SQL and Python skills. Experience with data modeling, ETL processes, and data warehousing. Knowledge of big data technologies such as Spark and Hadoop is a plus. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Experience in the healthcare sector is …
and coding environments. Bonus Skills: Python/PHP/TypeScript/ReactJS; AI/ML models and usage; ETL pipelines in AWS (Glue/Apache Spark); API load testing. If you would like more information on the role, or would like to apply, please send your CV …
with Git for version control and project management, alongside some knowledge of Linux/Shell. Data platform familiarity: previous experience of working with both Apache Spark and MapReduce data processing and analytics frameworks. Reporting expertise: experience with Tableau, Power BI and Excel, alongside notebooks for experiment documentation. What …
Platforms. Must have 8+ years' experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management). Must have 3+ …