security and Identity and Access Management (IAM). Knowledge of vector databases and retrieval-augmented generation (RAG). Data pipeline development experience using Airflow, Spark, or similar tools. Experience with AWS Lambda and DynamoDB Streams. Why you'll love working at WorkBuzz - Our culture is fast-paced and dynamic …
Microservice-based architecture - Experience with Oracle databases and SQL knowledge - Preferable to have: exposure to Kafka/Hadoop/React JS/Elasticsearch/Spark - Good to have: exposure to Fabric/Kubernetes/Docker/Helm …
systems. Influence opinion and decision-making across AI and ML. Skills: Python, SQL/Pandas/Snowflake/Elasticsearch, Docker/Kubernetes, Airflow/Spark, familiarity with GenAI models/libraries. Requirements: 4+ years of relevant data engineering experience post-graduation; a degree (ideally a Master’s) in Computer …
opportunities and enhancing the platform’s capabilities. SKILLS AND EXPERIENCE: The successful Senior Data Engineer will have the following skills and experience: Python, SQL, Spark, Snowflake, AWS. Ideally you will also have: DBT, Jenkins, Git, Tableau. BENEFITS: The successful Lead Data Engineer will receive the following benefits: Salary between …
programming, ideally Python, and the ability to quickly pick up handling large data volumes with modern data processing tools, e.g. by using Hadoop/Spark/SQL. Experience with or ability to quickly learn open-source software including machine learning packages, such as Pandas and scikit-learn, along …
Experience in building machine learning models for business applications. Experience in applied research. PREFERRED QUALIFICATIONS: Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience in implementing Computer Vision algorithms. Amazon is an equal opportunities employer. We believe passionately that employing a …
programming, ideally Python, and the ability to quickly pick up handling large data volumes with modern data processing tools, e.g. by using Hadoop/Spark/SQL. Experience with or ability to quickly learn open-source software including machine learning packages, such as Pandas, scikit-learn, along with data …
Experience in building machine learning models for business applications. Experience in applied research. PREFERRED QUALIFICATIONS: Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience in implementing Computer Vision algorithms. Amazon is an equal opportunities employer. We believe passionately that employing a …
and monitor the accuracy and effectiveness of external price forecast models. Develop analytics related to Power and Gas fundamentals, such as system margin analysis, spark and dark spread analysis, fuel cost adjustment forecasts, maintenance-programme-adjusted system availability, and sensitivity and price visualisation tools. Explore innovative ideas using advanced statistical …
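For context on the spread analytics named in the listing above: the spark spread is the margin between the power price and the gas cost of generating that power, while the dark spread uses coal as the fuel. A minimal sketch, assuming prices in GBP/MWh and typical thermal efficiencies (all figures and function names below are illustrative, not taken from the listing):

```python
# Minimal sketch of spark/dark spread calculations; the prices and
# efficiency figures are illustrative assumptions, not from the listing.

def spark_spread(power_price: float, gas_price: float, efficiency: float = 0.50) -> float:
    """Spark spread (GBP/MWh): power price minus the gas cost of producing
    1 MWh of electricity at the given thermal efficiency (~50% for a CCGT)."""
    return power_price - gas_price / efficiency

def dark_spread(power_price: float, coal_price: float, efficiency: float = 0.35) -> float:
    """Dark spread (GBP/MWh): power price minus the coal cost of producing
    1 MWh of electricity at the given thermal efficiency (~35% for a coal plant)."""
    return power_price - coal_price / efficiency

if __name__ == "__main__":
    # Hypothetical prices, all expressed per MWh.
    print(f"Spark spread: {spark_spread(85.0, 35.0):.2f} GBP/MWh")
    print(f"Dark spread:  {dark_spread(85.0, 25.0):.2f} GBP/MWh")
```

A "clean" spark or dark spread would additionally subtract the cost of carbon allowances per MWh generated.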
and conduct unit testing. Required qualifications to be successful in this role: Proficient in Java, preferably Kotlin, with experience with Java 11, Gradle and Apache Spark. Experience in GCP, preferably BigQuery and Cloud Composer. Experience with CI/CD, preferably GitHub and GitHub Actions. Experience with Agile is …
with uncertainty quantification and performance estimation (e.g., cross-validation, bootstrapping, Bayesian credible intervals). Familiarity with database and data processing tools (e.g., SQL, MongoDB, Spark, Pandas). Ability to translate ambiguous business problems into structured, measurable, and data-driven approaches. Preferred Qualifications: M.Sc or PhD in Statistics, Electrical Engineering …
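As an illustration of the bootstrapping approach to performance estimation named above, here is a minimal sketch that resamples a held-out test set to put a confidence interval around a model's accuracy. The dataset, model choice, and 95% interval level are hypothetical, not specified by the listing:

```python
# Minimal sketch: bootstrap confidence interval for a model's accuracy.
# Data, model, and interval level are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = model.predict(X_test)

rng = np.random.default_rng(0)
scores = []
for _ in range(1000):
    # Resample the test set with replacement and re-score the fixed model.
    idx = rng.integers(0, len(y_test), len(y_test))
    scores.append(accuracy_score(y_test[idx], preds[idx]))

lo, hi = np.percentile(scores, [2.5, 97.5])
print(f"Accuracy: {accuracy_score(y_test, preds):.3f}, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```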
Our systems are self-healing, responding gracefully to extreme loads or unexpected input. We use modern languages like Kotlin and Scala; data technologies such as Kafka, Spark, MLflow, Kubeflow, VastStorage and StarRocks; and agile development practices. Most importantly, we hire great people from around the world and empower them to be successful.
exposure to Agile methodologies or similar technologies (training provided if necessary). • Advanced Technologies: Knowledge of big data technologies (HBase, Hadoop), machine learning frameworks (Spark), or orbit dynamics is of interest. Why CGI Secure Space Systems? Join a team that's at the forefront of space technology innovation. At …
Cassandra). • In-depth knowledge of data warehousing concepts and tools (e.g., Redshift, Snowflake, Google BigQuery). • Experience with big data platforms (e.g., Hadoop, Spark, Kafka). • Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Google Cloud). • Expertise in ETL tools and processes (e.g., Apache …
have experience architecting data pipelines and are self-sufficient in getting the data you need to build and evaluate models, using tools like Dataflow, Apache Beam, or Spark. You care about agile software processes, data-driven development, reliability, and disciplined experimentation. You have experience and passion for fostering … Platform is a plus. Experience with building data pipelines and getting the data you need to build and evaluate your models, using tools like Apache Beam/Spark is a plus. Where You'll Be: This role is based in London (UK). We offer you the flexibility to …
partners. Previous experience in a Data Engineering role: Passion for data and industry best practices in a dynamic environment. Proficiency in technologies such as Spark/PySpark, Azure Data services, Python or Scala, SQL, testing frameworks, open table formats, CI/CD workflows, and cloud infrastructure management. Excellent communication … and agile methodologies. Nice to Have: Experience in retail and/or e-commerce. Knowledge of Big Data, Distributed Computing, and streaming technologies like Spark Structured Streaming or Apache Flink. Additional programming skills in PowerShell or Bash. Understanding of Databricks Ecosystem components. Familiarity with Data Observability or Data …
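Several of these listings name Spark Structured Streaming; for reference, here is a minimal sketch of a streaming job reading from Kafka. The broker address, topic, and output paths are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster:

```python
# Minimal sketch: Spark Structured Streaming job reading a Kafka topic.
# Broker, topic, and paths are hypothetical; requires the spark-sql-kafka connector.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("orders-stream")  # hypothetical application name
         .getOrCreate())

orders = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
          .option("subscribe", "orders")                        # assumed topic
          .load()
          .selectExpr("CAST(value AS STRING) AS payload"))

query = (orders.writeStream
         .format("parquet")
         .option("path", "/tmp/orders")             # assumed sink path
         .option("checkpointLocation", "/tmp/chk")  # required for streaming sinks
         .start())

query.awaitTermination()
```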
s Degree. Required technical and professional expertise: Design, develop, and maintain Java-based applications for processing and analyzing large datasets, utilizing frameworks such as Apache Hadoop, Spark, and Kafka. Collaborate with cross-functional teams to define, design, and ship data-intensive features and services. Optimize existing data processing … Technology, or a related field, or equivalent experience. Experience in Big Data Java development. In-depth knowledge of Big Data frameworks, such as Hadoop, Spark, and Kafka, with a strong emphasis on Java development. Proficiency in data modeling, ETL processes, and data warehousing concepts. Experience with data processing languages …
design, implementation and technical oversight of an internal data ingestion platform. My client is looking for an individual that has the following: Experience with Apache Spark, Kafka, and Airflow for data ingestion and orchestration. Experience in designing and deploying solutions on Azure Kubernetes Service (AKS). Knowledge of …
data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of …
data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of …
data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits: At Databricks, we strive to provide comprehensive …
Experience in building machine learning models for business applications - Experience in applied research. PREFERRED QUALIFICATIONS: - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience in implementing Computer Vision algorithms.