In order to be successful, you will have the following experience:
- Extensive AI and data development background
- Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred)
- Strong experience with data management and processing pipelines
- Algorithm development and knowledge of graphs will be beneficial
- SC Clearance is essential

Within this role, you will be responsible for:
- Supporting the development and delivery of AI solutions to a Government customer
- Designing, developing, and maintaining data processing pipelines using Apache Spark
- Implementing ETL/ELT workflows to extract, transform, and load large-scale datasets efficiently
- Developing and optimizing Python-based applications for data ingestion
- Collaborating on the development of machine learning models
- Ensuring data … to the design of data architectures, storage strategies, and processing frameworks
- Working with cloud data platforms (e.g., AWS, Azure, or GCP) to deploy scalable solutions
- Monitoring, troubleshooting, and optimizing Spark jobs for performance and cost efficiency
- Liaising with customer and internal stakeholders on a regular basis

This represents an excellent opportunity to secure a long-term contract.
Stroud, England, United Kingdom Hybrid / WFH Options
Ecotricity
…for best practice and technical excellence, and be a person who actively looks for continual improvement opportunities.

Knowledge and skills:
- Experience as a Data Engineer or Analyst
- Databricks/Apache Spark
- SQL/Python
- Bitbucket/GitHub

Advantageous:
- dbt
- AWS
- Azure DevOps
- Terraform
- Atlassian (Jira, Confluence)

About Us: What's in it for you... Healthcare plan, life assurance
…best practices. Ability to communicate technical concepts clearly to both technical and non-technical stakeholders. Experience working with large datasets and distributed computing tools such as Python, SQL, Hadoop, Spark, and optimisation software. As a precondition of employment for this role, you must be eligible and authorised to work in the United Kingdom. What we offer: At AXA UK …