Solid understanding of ETL processes, data modeling, and data warehousing. Familiarity with SQL and relational databases. Knowledge of big data technologies, such as Spark, Hadoop, or Kafka, is a plus. Strong problem-solving skills and the ability to work in a collaborative team environment. Excellent verbal and written communication skills.
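As an illustration of the ETL skills this role asks for, here is a minimal extract-transform-load sketch in Python; the input file, column names, and SQLite target are assumptions for the example, not details from the posting.

```python
import sqlite3

import pandas as pd

# Extract: read raw order data (orders.csv is a hypothetical input file).
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Transform: standardise column names and derive a revenue measure.
orders.columns = [c.strip().lower() for c in orders.columns]
orders["revenue"] = orders["quantity"] * orders["unit_price"]

# Load: write the cleaned table into a relational store.
with sqlite3.connect("warehouse.db") as conn:
    orders.to_sql("fact_orders", conn, if_exists="replace", index=False)
```

The same extract/transform/load shape carries over to production stacks; only the sources and the warehouse target change.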
London, South East England, United Kingdom (Hybrid / WFH Options)
Careerwise
S3, BigQuery, Redshift, Data Lakes). Expertise in SQL for querying large datasets and optimizing performance. Experience working with big data technologies such as Hadoop, Apache Spark, and other distributed computing frameworks. Solid understanding of machine learning algorithms, data preprocessing, model tuning, and evaluation. Experience working with LLMs.
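To make the query-optimization requirement concrete, a small sketch using SQLite's EXPLAIN QUERY PLAN; SQLite is used only so the example is self-contained, the table and data are invented, and the same indexing idea applies to warehouses like Redshift or BigQuery.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades (symbol, qty) VALUES (?, ?)",
    [("ABC" if i % 2 else "XYZ", i) for i in range(10_000)],
)

query = "SELECT SUM(qty) FROM trades WHERE symbol = 'ABC'"

# Without an index the filter forces a full table scan.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# A covering index on (symbol, qty) lets the engine answer from the index alone.
conn.execute("CREATE INDEX idx_trades_symbol_qty ON trades (symbol, qty)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```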
AWS Certified Data Engineer, AWS Certified Data Analytics, or AWS Certified Solutions Architect. Experience with big data tools and technologies like Apache Spark, Hadoop, and Kafka. Knowledge of CI/CD pipelines and automation tools such as Jenkins or GitLab CI. About Adastra: For more than 25 years …
roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design.
experience in data engineering, including working with AWS services. Proficiency in AWS services like S3, Glue, Redshift, Lambda, and EMR. Knowledge of Cloudera-based Hadoop is a plus. Strong ETL development skills and experience with data integration tools. Knowledge of data modeling, data warehousing, and data transformation techniques.
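For the S3 side of the AWS services listed above, a minimal read with boto3; it assumes AWS credentials are already configured in the environment, and the bucket and key names are hypothetical.

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Fetch one raw object from a (hypothetical) data-lake bucket.
obj = s3.get_object(Bucket="my-data-lake", Key="raw/2024/orders.csv")
body = obj["Body"].read().decode("utf-8")
print(body.splitlines()[:5])  # peek at the first few records
```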
Service Catalogue, CloudFormation, Lake Formation, SNS, SQS, EventBridge. Language & scripting: Python and Spark. ETL: DBT. Good to have: Airflow, Snowflake, big data (Hadoop), and Teradata. Responsibilities: Serve as the primary point of contact for all AWS-related data initiatives and projects; responsible for leading a team of …
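Since Airflow appears in the good-to-have list, here is a minimal DAG sketch, assuming Airflow 2.4 or later; the DAG id and task bodies are placeholders, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # placeholder: pull data from a source system

def transform():
    ...  # placeholder: clean and reshape the extracted data

with DAG(
    dag_id="daily_aws_ingest",       # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # the 'schedule' argument requires Airflow >= 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task   # extract runs before transform
```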
Experience in commodities markets or broader financial markets. Knowledge of quantitative modeling, risk management, or algorithmic trading. Familiarity with big data technologies like Kafka, Hadoop, Spark, or similar. Why Work With Us? Impactful Work: Directly influence the profitability of the business by building technology that drives trading decisions.
contributions to the delivery process, manage tasks, and update teams on progress. Skills & Experience: Proven experience as a Data Engineer with expertise in Databricks and Hadoop/Spark. Strong programming skills in Python, Scala, or SQL, with knowledge of CI/CD platforms. Proficiency with distributed computing frameworks and cloud platforms.
Richmond, North Yorkshire, United Kingdom
Datix Limited
programming languages, specifically Python and SQL. Expertise in data management, data architecture, and data visualization techniques. Experience with data processing frameworks like Apache Spark, Hadoop, or Flink. Strong understanding of database systems (SQL and NoSQL) and data warehousing technologies. Familiarity with cloud computing platforms (AWS, Azure) and data security.
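To illustrate the Spark item in these requirements, a small PySpark aggregation sketch; the S3 paths, schema, and column names are invented for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-rollup").getOrCreate()

# Hypothetical input: an event table stored as Parquet on S3.
events = spark.read.parquet("s3a://example-bucket/events/")

daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "event_type")
    .agg(
        F.count("*").alias("events"),
        F.countDistinct("user_id").alias("unique_users"),
    )
)

# Write the rollup back out, partitioned for downstream query performance.
daily.write.mode("overwrite").partitionBy("day").parquet("s3a://example-bucket/daily_rollup/")
```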
or Amazon QuickSight. Programming Languages: Familiarity with Python or R for data manipulation and analysis. Big Data Technologies: Experience with big data technologies like Hadoop or Spark. Data Governance: Understanding of data governance and data quality management. A Bit About Us: When it comes to appliances and electricals, we …
Birmingham, Staffordshire, United Kingdom (Hybrid / WFH Options)
Yelp USA
recommending scalable, creative solutions. Exposure to some of the following technologies: Python, AWS Redshift, AWS Athena/Apache Presto, big data technologies (e.g., S3, Hadoop, Hive, Spark, Flink, Kafka, etc.), NoSQL systems like Cassandra; DBT is nice to have. What you'll get: Full responsibility for projects from day one.
frameworks like TensorFlow, Keras, or PyTorch. Knowledge of data analysis and visualization tools (e.g., Pandas, NumPy, Matplotlib). Familiarity with big data technologies (e.g., Hadoop, Spark). Excellent problem-solving skills and attention to detail. Ability to work independently and as part of a team. Preferred Qualifications: Experience with …
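As a small example of the Pandas/NumPy/Matplotlib stack named here, a quick look at synthetic time-series data; the dataset and columns are invented for the sketch.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic sensor readings stand in for a real dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "t": pd.date_range("2024-01-01", periods=200, freq="h"),
    "reading": rng.normal(0.0, 1.0, 200).cumsum(),
})

# Basic preprocessing: smooth with a rolling mean, then inspect visually.
df["smoothed"] = df["reading"].rolling(window=12, min_periods=1).mean()
df.plot(x="t", y=["reading", "smoothed"], figsize=(8, 3))
plt.tight_layout()
plt.show()
```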
Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience with the following software/tools: big data tools such as Hadoop, Spark, Kafka, etc.; relational SQL and NoSQL databases, including Postgres and Cassandra; data pipeline and workflow management tools such as Azkaban, Luigi, …
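For the Kafka item, a minimal produce/consume round trip using the kafka-python client; it assumes a broker on localhost:9092, and the topic name and payload are hypothetical.

```python
import json

from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

# Produce one JSON-encoded event to a hypothetical topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("user-events", {"user_id": 42, "action": "login"})
producer.flush()

# Consume it back, giving up after 5 seconds if nothing arrives.
consumer = KafkaConsumer(
    "user-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # {'user_id': 42, 'action': 'login'}
    break
```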
Knowledge of cloud platforms (e.g., Azure). Familiarity with containerization is a plus (e.g., Docker, Kubernetes). Knowledge of big data technologies (e.g., Hadoop, Spark). Knowledge of data lifecycle management. Strong problem-solving skills and attention to detail. Ability to work in an agile development environment.
Engineering, Mathematics, or related field. Proven experience (5+ years) in developing and deploying data engineering pipelines and products. Strong proficiency in Python. Experience with Hadoop, Kafka, or Spark. Experience leading/mentoring junior team members. Strong communication and interpersonal skills, with the ability to effectively communicate complex technical concepts.
information with attention to detail and accuracy. Adept at queries, report writing, and presenting findings. Experience working with large datasets and distributed computing tools (Hadoop, Spark, etc.). Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests, etc.). Experience with data profiling tools and processes.
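To ground the statistical-tests item, a minimal two-sample t-test with SciPy on synthetic data; the metric, means, and sample sizes are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(100.0, 15.0, size=500)  # synthetic baseline metric
variant = rng.normal(103.0, 15.0, size=500)  # synthetic treatment metric

# Welch's t-test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(control, variant, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```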
with source control tools (e.g., Git) and CI/CD pipelines. Desirable Skills: Familiarity with big data or NoSQL technologies (e.g., MongoDB, Cosmos DB, Hadoop). Exposure to data analytics tools (Power BI, Tableau) or machine learning workflows. Knowledge of data governance, GDPR, and data compliance practices. Why Join …
data governance. Cloud Computing: AWS, Azure, Google Cloud for scalable data solutions. API Strategy: Robust APIs for seamless data integration. Data Architecture: Finbourne LUSID, Hadoop, Spark, Snowflake for managing large volumes of investment data. Cybersecurity: Strong data security measures, including encryption and IAM. AI and Machine Learning: Predictive analytics …
technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.). Preferred qualifications, capabilities, and skills: Knowledge of AWS; knowledge of Databricks; understanding of Cloudera Hadoop, Spark, HDFS, HBase, and Hive; understanding of Maven or Gradle. About the Team: J.P. Morgan is a global leader in financial services, providing strategic advice …
automation & configuration management: Ansible (plus Puppet, SaltStack), Terraform, CloudFormation. NodeJS, React/Material-UI (plus Angular), Python, JavaScript. Big data processing and analysis, e.g. Apache Hadoop (CDH), Apache Spark. Red Hat Enterprise Linux, CentOS, Debian, or Ubuntu. Java 8, Spring Framework (preferably Spring Boot), AMQP (RabbitMQ). Open source technologies. Experience of …
other engineers on the team to elevate technology and consistently apply best practices. Qualifications for Software Engineer: Hands-on experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc. Strong DevOps focus and experience building and deploying infrastructure with cloud deployment technologies like …
Tableau, Looker, or QlikSense. Ability to create well-documented, scalable, and reusable data solutions. Desirable Skills: Experience with big data technologies such as Hadoop, MapReduce, or Spark. Exposure to microservice-based data APIs. Familiarity with data solutions in other public cloud platforms. AWS certifications (e.g., Solutions Architect).
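Since Hadoop/MapReduce recurs throughout these listings, here is the classic word-count pattern expressed with PySpark RDDs; this is a sketch over in-memory data, whereas a real job would read from HDFS or S3.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()
sc = spark.sparkContext

lines = sc.parallelize(["to be or not to be", "that is the question"])

# Classic MapReduce shape: map each word to (word, 1), then reduce by key.
counts = (
    lines.flatMap(lambda line: line.split())  # map: split lines into words
         .map(lambda word: (word, 1))         # map: emit (word, 1) pairs
         .reduceByKey(lambda a, b: a + b)     # reduce: sum counts per word
)
print(dict(counts.collect()))  # e.g. {'to': 2, 'be': 2, ...}
```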