experience in machine learning frameworks, including architectural design and data platforms. Knowledge of cloud platforms (AWS, Azure, or GCP) and data engineering tools (e.g., Spark, Kafka). Exceptional communication skills, with the ability to influence technical and non-technical stakeholders alike. …
DDL, MDX, HiveQL, SparkSQL, Scala) Experience with one or more scripting languages (e.g., Python, KornShell) Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing …
managing technical teams. Designing and architecting data and analytic solutions. Developing data processing pipelines in Python for Databricks, including many of the following technologies: Spark, Delta, Delta Live Tables, PyTest, Great Expectations (or similar). Building and orchestrating data and analytical processing for streaming data with technologies such as …
Purview or equivalent for data governance and lineage tracking Experience with data integration, MDM, governance, and data quality tools. Hands-on experience with Apache Spark, Python, SQL, and Scala for data processing. Strong understanding of Azure networking, security, and IAM, including Azure Private Link, VNETs, Managed Identities …
Degree such as Maths, Physics, Computer Science, Engineering, etc. Software Development experience in Python or Scala An understanding of Big Data technologies such as Spark, messaging services like Kafka or RabbitMQ, and workflow management tools like Airflow SQL & NoSQL expertise, ideally including Postgres, Redis, MongoDB, etc. Experience with AWS …
processing frameworks such as Kafka, NoSQL, Airflow, TensorFlow, or Spark. Finally, experience with cloud platforms like AWS or Azure, including data services such as Apache Airflow, Athena, or SageMaker, is essential for the Mid-level. The Role: Build and maintain scalable data pipelines. Design/implement optimised data architecture. …
Kinesis, Step Functions, Lake Formation and data lake design. Strong programming skills in Python and PySpark for data processing and automation. Extensive SQL experience (Spark SQL, MySQL, Presto SQL) and familiarity with NoSQL databases (DynamoDB, MongoDB, etc.). Proficiency in Infrastructure-as-Code (Terraform, CloudFormation) for automating AWS data …
Extensive development experience using SQL. Hands-on experience with MPP databases such as Redshift, BigQuery, or Snowflake, and modern transformation/query engines like Spark, Flink, Trino. Familiarity with workflow management tools (e.g., Airflow) and/or dbt for transformations. Comprehensive understanding of modern data platforms, including data governance …
or equivalent experience Good data modelling, software engineering knowledge, and strong knowledge of ML packages and frameworks Skilful in writing well-engineered code using Spark, and advanced SQL and Python coding skills Experienced in working with Azure Databricks Proven experience working with Data Scientists to deliver best-in-class …
to influence. A drive to learn new technologies and techniques. Experience in, and aptitude for, research and openness to new technologies. Experience with Azure, Spark (PySpark), and Kubeflow desirable. We pay competitive salaries based on candidates' experience. Along with this, you will be entitled to an award …
in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience Experience with databases (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) Experience in consulting, design, and implementation of serverless distributed solutions Experience in software development with an object-oriented language PREFERRED QUALIFICATIONS AWS experience …
growing finance user base, come join us! BASIC QUALIFICATIONS - 4+ years of data engineering experience - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience with SQL - Experience with data modeling, warehousing and building ETL pipelines PREFERRED QUALIFICATIONS - Experience with AWS technologies like Redshift, S3, AWS Glue …
Systems, Cloudera/Hortonworks, AWS EMR, GCP Dataproc or GCP Cloud Data Fusion. Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub and Spark Streaming. Experience of working with CI/CD technologies (Git, Jenkins, Spinnaker, GCP Cloud Build, Ansible, etc.) and experience building and deploying solutions to …
including OAuth, JWT, and data encryption. Fluent in English with strong communication and collaboration skills. Preferred Qualifications Experience with big data processing frameworks like Apache Flink or Spark. Familiarity with machine learning models and AI-driven analytics. Understanding of front-end and mobile app interactions with backend services. Expertise …
clients to deliver these analytical solutions Collaborate with stakeholders and customers to ensure successful project delivery Write production-ready code in SQL, Python, and Spark following software engineering best practices Coach team members in machine learning and statistical modelling techniques Who we are looking for We are looking for …
awareness, able to prioritise across several projects and to lead and coordinate larger initiatives. Good Python and SQL skills, experience with the AWS stack, Spark, Databricks and/or Snowflake desirable. Solid understanding of statistical modelling and machine learning algorithms, and experience deploying and managing models in production. Experience …
working within a globally distributed team. A background in some of the following is a bonus: Java experience Python experience Ruby experience Big data technologies: Spark, Trino, Kafka Financial Markets experience SQL: Postgres, Oracle Cloud-native deployments: AWS, Docker, Kubernetes Observability: Splunk, Prometheus, Grafana For more information about DRW's …
working with hierarchical reference data models. Proven expertise in handling high-throughput, real-time market data streams. Familiarity with distributed computing frameworks such as Apache Spark. Operational experience supporting real-time systems. Equal Opportunity Workplace We are proud to be an equal opportunity workplace. We do not discriminate based …
building ETL pipelines - Experience with SQL - Experience mentoring team members on best practices PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience operating large data warehouses Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our …
in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience Experience with databases (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) Experience in consulting, design and implementation of serverless distributed solutions Experience in software development with an object-oriented language. AWS experience preferred, with …
etc.) Have experience productionising machine learning models Are an expert in one of predictive modelling, classification, regression, optimisation or recommendation systems Have experience with Spark Have knowledge of DevOps technologies such as Docker and Terraform, and MLOps practices and platforms like MLflow Have experience with agile delivery …
as well as hands-on experience with AWS services like SageMaker and Bedrock, and programming skills such as Python, R, SQL, Java, Julia, Scala, Spark/NumPy/pandas/scikit, JavaScript Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a …
like Ansible, Terraform, Docker, Kafka, Nexus Experience with observability platforms: InfluxDB, Prometheus, ELK, Jaeger, Grafana, Nagios, Zabbix Familiarity with Big Data tools: Hadoop, HDFS, Spark, HBase Ability to write code in Go, Python, Bash, or Perl for automation. Work Experience: 5-7+ years of proven experience in previous …