In order to be successful, you will have the following experience:
- Extensive AI & data development background
- Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred)
- Strong experience with data management and processing pipelines
- Algorithm development and knowledge of graphs will be beneficial
- SC Clearance is essential

Within this role, you … will be responsible for:
- Supporting the development and delivery of AI solutions to a Government customer
- Designing, developing, and maintaining data processing pipelines using Apache Spark
- Implementing ETL/ELT workflows to extract, transform, and load large-scale datasets efficiently
- Developing and optimising Python-based applications for data ingestion
- Collaborating on the development of machine learning models
- Ensuring data … to the design of data architectures, storage strategies, and processing frameworks
- Working with cloud data platforms (e.g., AWS, Azure, or GCP) to deploy scalable solutions
- Monitoring, troubleshooting, and optimising Spark jobs for performance and cost efficiency
- Liaising with customer and internal stakeholders on a regular basis

This represents an excellent opportunity to secure a long-term contract, within a
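The ETL/ELT responsibilities above (extract raw records, transform them, and load a clean subset) can be sketched minimally. The listing names Apache Spark, but a pandas version keeps the sketch self-contained; the column names and cleaning rules here are illustrative assumptions, not taken from any real customer pipeline.

```python
# Minimal ETL sketch: extract -> transform -> load-ready output.
# Schema and rules are hypothetical examples for illustration only.
import pandas as pd
from io import StringIO

# Stand-in for a real source (file, API response, or data lake path)
raw = StringIO("id,amount,country\n1,10.5,UK\n2,,UK\n3,7.0,FR\n")

def etl(source) -> pd.DataFrame:
    df = pd.read_csv(source)               # extract
    df = df.dropna(subset=["amount"])      # transform: drop incomplete rows
    df["amount"] = df["amount"].astype(float)
    return df[df["country"] == "UK"]       # filter to the target subset

result = etl(raw)
```

In a Spark deployment the same shape applies, with `pandas` calls swapped for the PySpark DataFrame API (`spark.read.csv`, `dropna`, `filter`) so the transform distributes across the cluster.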
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Aspire Personnel Ltd
stakeholders in a fast-paced environment
- Experience in the design and deployment of production data pipelines, from ingestion to consumption, within a big data architecture, using Java, Python, Scala, Spark, and SQL
- Experience performing tasks such as writing scripts, extracting data using APIs, writing SQL queries, etc.
- Experience in processing large amounts of structured and unstructured data, including integrating data
best practices.
- Ability to communicate technical concepts clearly to both technical and non-technical stakeholders
- Experience working with large datasets and distributed computing tools such as Python, SQL, Hadoop, Spark, and optimisation software

As a precondition of employment for this role, you must be eligible and authorised to work in the United Kingdom.

What we offer: At AXA UK