London, South East England, United Kingdom Hybrid / WFH Options
Peaple Talent
Azure or AWS. Strong experience designing and delivering data solutions in Databricks. Proficient with SQL and Python. Experience with Big Data technologies such as Apache Spark or PySpark. Strong communication skills, engaging effectively with senior stakeholders. Nice to haves: Azure/AWS Data Engineering certifications, Databricks certifications. What's …
Python. Develop real-time streaming features using big data tools such as Spark. SKILLS AND EXPERIENCE Extensive experience using big data tools such as Apache Spark. Experience working in and maintaining a GCP database. Strong Python coding background. Good knowledge of working with SQL. THE BENEFITS Generous holiday plan. …
within a Google Cloud Platform environment Drive continuous improvements to architecture, infrastructure, and workflow automation Core Tech Stack: Must-have: Google Cloud Platform (GCP), Apache Airflow Nice-to-have: dbt, Terraform, Kubernetes Bonus: Familiarity with, or curiosity about, generative AI tools (e.g. ChatGPT) Ideal Candidate: 4+ years' experience in a …
synthesis prediction, including using QM toolkits (e.g., PSI4, Orca, Gaussian). Experience with data curation and processing from heterogeneous sources; familiarity with tools like Apache Spark or Hadoop. Proficiency with cloud platforms (AWS, GCP, Azure). Familiarity with major machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Open …
or similar role. Proficiency with Databricks and its ecosystem. Strong programming skills in Python, R, or Scala. Experience with big data technologies such as Apache Spark and Databricks. Knowledge of SQL and experience with relational databases. Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud). Strong analytical and problem …
with toolkits such as BioPython. AI protein dynamics. Integrative structural modelling. Experience with data curation and processing from heterogeneous sources; familiarity with tools like Apache Spark or Hadoop. Proficiency with cloud platforms (AWS, GCP, Azure). Familiarity with major machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch). *NGS …
to Have: Experience in retail and/or e-commerce. Knowledge of Big Data, Distributed Computing, and streaming technologies like Spark Structured Streaming or Apache Flink. Additional programming skills in PowerShell or Bash. Understanding of Databricks Ecosystem components. Familiarity with Data Observability or Data Quality Frameworks. …
Proficient in one of the deep learning stacks such as PyTorch or TensorFlow. Working knowledge of parallelisation and async paradigms in Python, and of frameworks such as Spark, Dask, or Ray. An awareness of and interest in economic, financial, and general business concepts and terminology. Excellent written and verbal command of English. Strong problem-solving …
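The "async paradigms in Python" named above can be sketched with the standard-library asyncio module; the coroutine below is a toy stand-in for an I/O-bound call, and all names are invented for illustration:

```python
import asyncio

# Toy coroutine standing in for an I/O-bound call (e.g. an API request).
async def fetch_square(i: int) -> int:
    await asyncio.sleep(0.01)  # simulate network latency
    return i * i

async def main() -> list[int]:
    # gather() schedules the coroutines concurrently rather than awaiting
    # each one sequentially, so total wall time is roughly one sleep, not five.
    return await asyncio.gather(*(fetch_square(i) for i in range(5)))

print(asyncio.run(main()))  # → [0, 1, 4, 9, 16]
```

The same fan-out pattern is what Spark, Dask, and Ray generalise across processes and machines.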
knowledge of geoscience and well data (Geology, Geophysics, Petrophysics, Wells, etc.). Familiarity with data visualization tools (Power BI, Tableau) or workflow automation tools (Apache Airflow). Exposure to Azure SQL, cloud storage solutions, or geoscience data platforms. Previous internship or coursework involving data reconciliation, validation, or geoscience records …
and conduct unit testing. Required qualifications to be successful in this role • Proficient in Java, preferably Kotlin, with experience with Java 11, Gradle, and Apache Spark • Experience in GCP, preferably BigQuery and Cloud Composer • Experience with CI/CD, preferably GitHub and GitHub Actions • Experience with Agile is …
and ML scientists to plan the architecture for end-to-end machine learning workflows. Implement scalable training and deployment pipelines using tools such as Apache Airflow and Kubernetes. Perform comprehensive testing to ensure reliability and accuracy of deployed models. Develop instrumentation and automated alerts to manage system health and …
data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our …
and prompt-based LLM workflows using containerised services. Ensure reliable model integration across real-time APIs and batch processing systems. Pipeline Automation & MLOps: Use Apache Airflow (or similar) to orchestrate ETL and ML workflows. Leverage MLflow or other MLOps tools to manage model lifecycle tracking, reproducibility, and deployment. Create …
experience deploying ML models into production environments, including both batch and real-time/streaming contexts. Proficiency working with distributed computing frameworks such as Apache Spark, Dask, or similar. Experience with cloud-native ML deployment, particularly on AWS, using services like ECS, EKS, Fargate, Lambda, S3, and more. Familiarity …
modelling: experience delivering physical, logical, or conceptual data models. Data design: experience developing data warehouses, lakehouses, or data lakes. Use of tools such as Apache Atlas, Hive Metastore, AWS Glue/DataZone. Data standards: experience driving data standards. If you are interested in working for a government client on …
technologies. IT WOULD BE NICE FOR THE DATA ENGINEER TO HAVE: Cloud-based architectures. Microservice or serverless architectures. Messaging/routing technologies such as Apache NiFi/RabbitMQ. TO BE CONSIDERED: Please apply online or email me directly at . For further information, call me at or . I …
Languages: Python, Bash, Go. Network Modelling: YANG. API Protocols: RESTCONF, NETCONF. Platforms: ServiceNow, GitHub, Azure, AWS. Data: XML, JSON. Other: Azure DevOps, Git, Linux, Apache, MySQL. Ideal Candidate: Strong experience in automation and systems integration. Proficient in Python and automation using Ansible. Familiarity with ServiceNow, GitHub workflows, and network …
applications. Experience with NoSQL and in-memory databases (MongoDB, CouchDB, Redis, or others). Experience with analytical databases and with processing data at large scale (ClickHouse, Apache Druid, or others). Experience with analyzing and tuning database queries. Experience with Event-Driven Architecture.
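The "analyzing and tuning database queries" skill above can be illustrated with SQLite's EXPLAIN QUERY PLAN via the standard-library sqlite3 module; the table and index names here are invented for the example, and the same idea (inspecting whether a query hits an index or scans the table) carries over to ClickHouse and other engines:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders(id INTEGER PRIMARY KEY, customer TEXT)")
con.execute("CREATE INDEX idx_customer ON orders(customer)")

# EXPLAIN QUERY PLAN reports how SQLite will execute the query:
# a SEARCH using an index is fast; a SCAN reads the whole table.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("acme",)
).fetchall()
detail = plan[0][3]
print(detail)  # e.g. "SEARCH orders USING INDEX idx_customer (customer=?)"
```

Dropping the index (or filtering on an unindexed column) changes the plan to a full-table SCAN, which is the kind of regression query tuning aims to catch.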