open-source ETL, and data pipeline orchestration tools such as Apache Airflow and NiFi. Experience with large-scale/Big Data technologies such as Hadoop, Spark, Hive, Impala, PrestoDB, Kafka. Experience with workflow orchestration tools like Apache Airflow. Experience with containerisation using Docker and deployment on Kubernetes. Experience with …
or custom scripts. Familiarity with ELT (Extract, Load, Transform) processes is a plus. Big Data Technologies: Familiarity with big data frameworks such as Apache Hadoop and Apache Spark, including experience with distributed computing and data processing. Cloud Platforms: Proficient in using cloud platforms (e.g., AWS, Google Cloud Platform, Microsoft …
Platform (GCP). Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at …
Solid understanding of ETL processes, data modeling, and data warehousing. Familiarity with SQL and relational databases. Knowledge of big data technologies, such as Spark, Hadoop, or Kafka, is a plus. Strong problem-solving skills and the ability to work in a collaborative team environment. Excellent verbal and written communication …
Azure, or GCP. Hands-on experience with AI/ML workflows or deploying machine learning models in production. Knowledge of big data technologies like Hadoop, Hive, or Spark. Familiarity with MLOps tools and practices, such as MLflow, Kubeflow, or DataRobot. Education: Bachelor’s degree in Computer Science, Software Engineering …
GCP). Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at …
Strong communication skills and adaptability in dynamic environments. Nice-to-Have Requirements: Experience in designing scalable data solutions. Knowledge of big data technologies like Hadoop, Kafka, or Spark. Experience with data visualization tools such as Tableau or Power BI. Hands-on experience with data warehousing solutions like Snowflake or …
a team-oriented environment. Preferred Skills: Experience with programming languages such as Python or R for data analysis. Knowledge of big data technologies (e.g., Hadoop, Spark) and data warehousing concepts. Familiarity with cloud data platforms (e.g., Azure, AWS, Google Cloud) is a plus. Certification in BI tools, SQL, or …
Several years of experience in data engineering or a related field, with expertise in designing scalable data solutions. Familiarity with big data technologies like Hadoop, Kafka, or Spark for processing large-scale data. Experience with data visualization tools such as Tableau, Power BI, or similar platforms for building reports …
problem-solving, and critical thinking skills. 8. Experience with social media analytics and understanding of user behaviour. 9. Familiarity with big data technologies, such as Apache Hadoop, Apache Spark, or Apache Kafka. 10. Knowledge of AWS machine learning services, such as Amazon SageMaker and Amazon Comprehend. 11. Experience with data governance and security …
London, England, United Kingdom Hybrid / WFH Options
Luupli
problem-solving, and critical thinking skills. 8. Experience with social media analytics and understanding of user behaviour. 9. Familiarity with big data technologies, such as Apache Hadoop, Apache Spark, or Apache Kafka. 10. Knowledge of AWS machine learning services, such as Amazon SageMaker and Amazon Comprehend. 11. Experience with data governance and security …
London, England, United Kingdom Hybrid / WFH Options
Trudenty
large-scale data. Experience with ETL processes for data ingestion and processing. Proficiency in Python and SQL. Experience with big data technologies like Apache Hadoop and Apache Spark. Familiarity with real-time data processing frameworks such as Apache Kafka or Flink. MLOps & Deployment: Experience deploying and maintaining large-scale …
Oracle, SQL Server, PostgreSQL) and data warehousing technologies. Experience with cloud-based data solutions (AWS, Azure, GCP). Familiarity with big data technologies like Hadoop, Spark, and Kafka. Technical Skills: Proficiency in data modelling (ERD, normalization) and data warehousing concepts. Strong understanding of ETL frameworks and tools (e.g., Talend …
Data Analytics - Specialty or AWS Certified Solutions Architect - Associate. Experience with Airflow for workflow orchestration. Exposure to big data frameworks such as Apache Spark, Hadoop, or Presto. Hands-on experience with machine learning pipelines and AI/ML data engineering on AWS. Benefits: Competitive salary and performance-based bonus …
or Django, Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow, or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on coding experience …
data engineering roles with progressively increasing responsibility. Proven experience designing and implementing complex data pipelines at scale. Strong knowledge of distributed computing frameworks (Spark, Hadoop ecosystem). Experience with cloud-based data platforms (AWS, Azure, GCP). Proficiency in data orchestration tools (Airflow, Prefect, Dagster, or similar). Solid programming skills in …
in the Palantir Foundry platform is a must. • Experience designing and implementing data analytics solutions on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred). • Proven track record of understanding and transforming customer requirements into a best-fit design and architecture. • Demonstrated experience in end-to-end …
platforms like GCP, AWS, or Azure for data storage and processing. Experience with high-throughput data streams. Knowledge of big data technologies (e.g., Spark, Hadoop) and ETL pipelines. Experience with workflow management tools. Knowledge/experience with data storage management systems (ZFS/RAID, magnetic tape drives, etc.).
Python and R, and ML libraries (TensorFlow, PyTorch, scikit-learn). Hands-on experience with cloud platforms (Azure ML) and big data ecosystems (e.g., Hadoop, Spark). Strong understanding of CI/CD pipelines, DevOps practices, and infrastructure automation. Familiarity with database systems (SQL Server, Snowflake) and API integrations.
roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design …
and other programming skills (Spark/Scala desirable). Experience both using and building APIs. Strong SQL background. Some exposure to big data technologies (Hadoop, Spark, Presto, etc.). Works well both collaboratively and independently, with a proven ability to form and manage strong relationships within the organisation and with clients.