experience in a leadership or technical lead role, with official line management responsibility. Strong experience with modern data stack technologies, including Python, Snowflake, AWS (S3, EC2), Terraform, Airflow, dbt, Apache Spark, Apache Iceberg, and Postgres. Skilled in balancing technical excellence with business priorities in a fast-paced environment. Strong communication and stakeholder management skills, able to translate …
/MS in Computer Science, Software Engineering, or equivalent technical discipline. 8+ years of hands-on experience building large-scale distributed data pipelines and architectures. Expert-level knowledge of Apache Spark, PySpark, and Databricks, including experience with Delta Lake, Unity Catalog, MLflow, and Databricks Workflows. Deep proficiency in Python and SQL, with proven experience building modular, testable, reusable pipeline …
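For readers unfamiliar with the pattern, a "modular, testable, reusable" Spark pipeline step usually means pure transformation functions that can be unit-tested without any I/O. A minimal sketch follows, assuming a PySpark environment with Delta Lake configured; the lake paths and column names (order_id, updated_at) are hypothetical.

```python
# Illustrative sketch only: a pure, unit-testable transformation wired into a
# Delta Lake read/write. Paths and column names are hypothetical.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def deduplicate_latest(df: DataFrame, key: str, ts_col: str) -> DataFrame:
    """Keep only the most recent row per key; a pure function, easy to test."""
    w = Window.partitionBy(key).orderBy(F.col(ts_col).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .filter(F.col("_rn") == 1)
              .drop("_rn"))

if __name__ == "__main__":
    spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()
    raw = spark.read.format("delta").load("/lake/bronze/orders")  # assumed path
    clean = deduplicate_latest(raw, key="order_id", ts_col="updated_at")
    clean.write.format("delta").mode("overwrite").save("/lake/silver/orders")
```

Because deduplicate_latest takes and returns a DataFrame, it can be exercised in a unit test against a small in-memory DataFrame without touching storage.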
field. Technical Skills Required: Hands-on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). Experience with Apache Spark or any other distributed data programming framework. Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. Experience with cloud infrastructure like AWS … Skills Hands-on development experience in an airline, e-commerce or retail industry. Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. Experience implementing end-to-end monitoring, quality checks, lineage tracking and automated alerts to ensure reliable and trustworthy data across the platform. Experience of …
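For the event-driven ingestion pattern named above (Spark Structured Streaming consuming from Kafka), a minimal job might look like the sketch below. The broker address, topic name, and lake paths are hypothetical placeholders, and it assumes the spark-sql-kafka connector and Delta Lake are available on the classpath.

```python
# Hedged sketch of Kafka -> Spark Structured Streaming -> Delta ingestion.
# Broker, topic, and paths are assumptions, not a real deployment.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "booking-events")             # hypothetical topic
          .load()
          .select(F.col("key").cast("string"),
                  F.col("value").cast("string"),
                  "timestamp"))

query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/lake/_checkpoints/bookings")  # needed for recovery
         .outputMode("append")
         .start("/lake/bronze/booking_events"))
query.awaitTermination()
```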
Bonus) Certifications such as AWS Certified Data Engineer, Google Cloud Data Engineer, or Microsoft Azure Data Fundamentals (even if in progress). Experience with big data tools like Hadoop, Spark, or Kafka. Familiarity with data visualization tools (Tableau, Power BI, Looker). Hands-on experience in API integrations, streaming data, or real-time processing. Participation in data hackathons …
practices such as testing, version control, and CI/CD. Hands-on experience building scalable data pipelines in a modern cloud environment (e.g., dbt, AWS Glue, AWS Lake Formation, Apache Spark, Amazon Redshift). Strong understanding of data modeling, ELT design patterns, data governance, and security best practices. Bonus: Experience with reverse ETL tools (e.g., Census). Knowledge of …
models to solve complex business problems. Strong background in statistical analysis, data mining, and feature engineering for large-scale structured and unstructured datasets. Experience working with big data platforms (Spark, Hadoop) and integrating with cloud environments (AWS, Azure, GCP). Proficiency in building data pipelines and ETL workflows, and in collaborating with data engineers on scalable data solutions. Expertise in data …
external suppliers, with annual budgets ranging from £1M to £2M+. Essential Skills: Proven experience as a Data Engineer (or a similar/related role). Experience with Azure Data Factory, Databricks, or Apache Spark, following modern ETL/ELT principles. Experience using programming languages such as Python, Scala and SQL. Demonstrable knowledge of data modelling and data warehousing within platforms …
Oracle, SQL Server, PostgreSQL). Excellent problem-solving, communication, and stakeholder management skills. Good to Have: Exposure to data governance and compliance frameworks. Knowledge of modern streaming platforms (Kafka, Spark Streaming, etc.). Experience in designing end-to-end enterprise data strategies. Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified by Great Place To …
best practices for data security and compliance. Collaborate with stakeholders and external partners. Skills & Experience: Strong experience with AWS data technologies (e.g., S3, Redshift, Lambda). Proficient in Python, Apache Spark, and SQL. Experience in data warehouse design and data migration projects. Cloud data platform development and deployment. Expertise across data warehouse and ETL/ELT development in …
City of London, Greater London, UK Hybrid / WFH Options
Areti Group | B Corp
pace with evolving technologies and techniques. Candidate Profile: 1+ years' experience in a data-related role (Analyst, Engineer, Scientist, Consultant, or Specialist). Experience with technologies such as Python, SQL, Spark, Power BI, AWS, Azure, or GCP. Strong analytical and problem-solving skills. Comfortable working directly with clients and stakeholders. Excellent communication and teamwork abilities. Must hold active SC or …
their application in cloud environments (Azure would be ideal). Proficiency in ETL/ELT processes, data integration, and engineering tools. Hands-on experience with Python, Airflow, Snowflake, Databricks, and Spark. The values and ethos of our business: Innovation with real purpose and for real results. Support one another: pull together and be helpful. We are working hard but having …
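On the orchestration side of that stack, a minimal Airflow DAG chaining an extract step into a load step might look like this sketch. The task bodies are placeholder stubs; a real pipeline would use Snowflake or Databricks operators rather than plain Python callables.

```python
# Minimal, hypothetical Airflow DAG sketch: daily extract -> load.
# Assumes Airflow 2.4+ (older versions use schedule_interval); task bodies are stubs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull data from a source system (e.g., an API or database).
    print("extracting...")

def load():
    # Placeholder: load the extracted data into a warehouse such as Snowflake.
    print("loading...")

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```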
City of London, Greater London, UK Hybrid / WFH Options
Deloitte
solutions from structured and unstructured data. Build data pipelines, models, and AI applications using cloud platforms and frameworks such as Azure AI/ML Studio, AWS Bedrock, GCP Vertex, Spark, TensorFlow, PyTorch, etc. Build and deploy production-grade fine-tuned LLMs and complex RAG architectures. Create and manage complex, robust prompts across the GenAI solutions. Communicate effectively …
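As a toy illustration of the RAG architecture mentioned here: retrieve the documents most relevant to a query, then build a grounded prompt for the model. The sketch below substitutes TF-IDF retrieval for an embedding model and vector store, and a stub for the LLM call; a production system would use a hosted model (e.g., via Bedrock or Vertex) instead.

```python
# Toy RAG sketch: TF-IDF retrieval stands in for embeddings + a vector store,
# and the LLM call is a placeholder stub. Documents are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 by phone.",
    "Accounts can be closed from the settings page.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # placeholder: a real system would send this prompt to an LLM

print(answer("How long do refunds take?"))
```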
Understanding of reproducibility, testing, and modern software practices. Strong communication skills and a collaborative mindset. Desirable: Background in healthcare, research environments, or data science teams. Experience working with EHR data, Spark, or optimised SQL pipelines. Location & Working Pattern: This is a hybrid role, with 1-2 days per week on-site in either the Oxford or London office. Team days and …
City of London, Greater London, UK Hybrid / WFH Options
Areti Group | B Corp
across engineering, security, and product teams to deliver at pace and scale. The toolkit you'll use: Data Science & Engineering: Python (NumPy, Pandas, scikit-learn, PyTorch/TensorFlow), SQL, NoSQL, Spark, big data ecosystems. Visualisation & APIs: REST/JSON, Postman, Flask/FastAPI, Power BI/Tableau, D3.js. DevOps & Cloud: CI/CD, Docker, AWS (S3, Lambda, SageMaker), Kubernetes, Terraform … project experience. Experience with Palantir Foundry (full training provided). Familiarity with AI/ML Ops pipelines, real-time analytics, or edge deployments. Big data stack knowledge (e.g., Hadoop, Spark, Kafka). GenAI/LLM experience (e.g., AWS Bedrock, LangChain). Why this is a great move: Mission & impact: Work on projects where data-driven decisions have real-world …
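As a small illustration of the REST/JSON serving slice of that toolkit, a FastAPI endpoint wrapping a scikit-learn model might look like the following sketch. The model artifact and flat feature vector are hypothetical assumptions; it would be run with uvicorn.

```python
# Minimal model-serving sketch with FastAPI. "model.joblib" is a hypothetical
# pre-trained scikit-learn artifact; run with: uvicorn app:app --reload
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed output of an earlier training job

class Features(BaseModel):
    values: list[float]  # assumed flat feature vector

@app.post("/predict")
def predict(features: Features) -> dict:
    # scikit-learn expects a 2D array: one row per sample
    score = float(model.predict([features.values])[0])
    return {"prediction": score}
```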
Science degree (ideally MSc and above, but all backgrounds considered). Commercial Data Science experience. Excellent Python & SQL skills. Strong machine learning & statistical knowledge. Ideally LLM/GenAI experience. Spark and/or Databricks experience would be a lovely bonus. If this role interests you and you would like to learn more, please apply here or contact us via …
International, a global network of top-tier JD Edwards consultancies, we're able to deliver even broader, more impactful solutions to our clients worldwide. Our mission is to enable agility, spark growth, and future-proof organisations through Oracle Cloud Fusion and JD Edwards solutions. We're more than consultants: we're problem solvers, trusted advisors, and long-term partners to our clients. If …
City of London, Greater London, UK Hybrid / WFH Options
Formula Recruitment
Engineer to join the team and contribute to a cutting-edge platform for analytics and machine learning. They are looking for a skilled data engineer with experience in Databricks, Spark, and Python, who can deliver high-impact data products. This role offers the opportunity to work alongside an exciting, collaborative team with a clear roadmap for growth and the … wider technology teams. Key Skills: Experience with Azure cloud data lakes and services (Data Factory, Synapse, Databricks). Skilled in ETL/ELT pipeline development and big data tools (Spark, Hadoop, Kafka). Strong Python/PySpark programming and advanced SQL with query optimisation. Experience with relational, NoSQL, and graph databases. Familiar with CI/CD, version control, and …