3 of 3 Apache Spark Jobs in Avon

Data Pipeline Engineer On prem to AWS migration

Hiring Organisation
Pontoon
Location
Bristol, Avon, England, United Kingdom
Employment Type
Contractor
Contract Rate
£700 - £750 per day
experience in cloud-based data services. Proficiency in SQL and Python for data manipulation and transformation. Experience with modern data engineering tools, including Apache Spark, Kafka, and Airflow. Strong understanding of data modelling, schema design, and data warehousing concepts. Familiarity with data governance, privacy, and compliance frameworks (e.g. ...

Data Science and AI Industrial Placement

Hiring Organisation
Lloyds Banking Group
Location
Bristol, Avon, South West, United Kingdom
Employment Type
Contract, Work From Home
Contract Rate
£24,000
modern cloud engineering practices. Wherever you land, you'll be working with some of the biggest datasets in the UK, using everything from Spark and statistical methods to domain knowledge and emerging GenAI applications. The work you could be doing: Design and deploy machine learning models for fraud detection, credit risk, customer segmentation, and behavioural analytics using scalable frameworks like TensorFlow, PyTorch, and XGBoost. Engineer robust data pipelines and ML workflows using Apache Spark, Vertex AI, and CI/CD tooling to ensure seamless model delivery and monitoring. Apply advanced techniques in deep learning, natural language processing ...

Data Engineer

Hiring Organisation
Indotronix Avani UK Ltd
Location
Bristol, Avon, South West, United Kingdom
Employment Type
Permanent
Salary
£45,000
high-quality, secure, and scalable data solutions, gaining exposure to multiple cloud platforms (AWS, Azure, OCI, GCP) and technologies such as Palantir Foundry, Databricks, Spark, and Airflow. Other times you will be working directly with the client, using your analytical and data skills to support their decision-making. … Eligible for UK Security Clearance (or 5+ years UK residency). Desirable Skills & Exposure: Familiarity with data lakes, warehouses, or streaming architectures (e.g. Databricks, Spark, Kafka). Experience with ETL tools, orchestration frameworks, or automation pipelines. Understanding of version control and CI/CD (e.g. GitHub, Azure DevOps). ...