e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or Datastage. Our inclusive culture empowers Amazonians to deliver the best results for our customers. …
London, England, United Kingdom Hybrid / WFH Options
Made Tech Limited
As a Lead Data Engineer or architect at Made Tech, you'll play a pivotal role in helping public sector organisations become truly data-led by equipping them with robust data platforms. You'll also join a data team on …
is a plus; familiarity with Java is a big plus. Experience using the ELK Stack (Elasticsearch, Logstash, Kibana). Experience using business intelligence tools (e.g., Tableau) and data frameworks (e.g., Hadoop). Experience with AI/ML-based solutions for data analysis is a big plus. Analytical mind and business acumen. Strong math skills (e.g., statistics, algebra). To be …
the real estate, facilities management, or related industries. Certification in relevant areas (e.g., AWS Certified Data Analytics, Google Data Analytics Professional Certificate). Familiarity with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure). Experience with data visualization design principles and storytelling techniques. Knowledge of agile methodologies and project management. Strategic thinking with the ability …
to tackle business problems. Comfort with rapid prototyping and disciplined software development processes. Experience with Python, ML libraries (e.g., spaCy, NumPy, SciPy, Transformers), data tools and technologies (Spark, Hadoop, Hive, Redshift, SQL), and toolkits for ML and deep learning (SparkML, TensorFlow, Keras). Demonstrated ability to work on multi-disciplinary teams with diverse skill sets. Deploying machine learning models …
Qualifications: PhD degree in Computer Science, Engineering, Mathematics, Physics, or a related field. Hands-on experience with LLMs, RAG, LangChain, or LlamaIndex. Experience with big data technologies such as Hadoop, Spark, or Kafka. The estimated total compensation range for this position is $75,000 - $90,000 (USD base plus bonus). Actual compensation for the position is based on …
Cambridge, England, United Kingdom Hybrid / WFH Options
Bit Bio
AWS. Working with a variety of stakeholders and cross-functional teams, performing analysis of their data requirements and documenting them. Big data tools and stream-processing systems such as Hadoop, Spark, Kafka, Storm, and Spark Streaming. Relational SQL and NoSQL databases, including Postgres and Cassandra. Experience designing and implementing knowledge graphs for data integration and analysis. Data pipeline and workflow …
The job you're considering: The Cloud Data Platforms team is part of the Insights and Data Global Practice and has seen …
Greater Bristol Area, United Kingdom Hybrid / WFH Options
LHH
oriented with a strong focus on data accuracy, quality, and reliability. Desirable (Nice to Have): Background in defence, government, or highly regulated sectors. Familiarity with Apache Kafka, Spark, or Hadoop. Experience with Docker and Kubernetes. Use of monitoring/alerting tools such as Prometheus, Grafana, or ELK. Understanding of machine learning algorithms and data science workflows. Proven ability to …
London, England, United Kingdom Hybrid / WFH Options
LHH
oriented with a strong focus on data accuracy, quality, and reliability. Desirable (Nice to Have): Background in defence, government, or highly regulated sectors. Familiarity with Apache Kafka, Spark, or Hadoop. Experience with Docker and Kubernetes. Use of monitoring/alerting tools such as Prometheus, Grafana, or ELK. Understanding of machine learning algorithms and data science workflows. Proven ability to …
application. Deep understanding of software architecture, object-oriented design principles, and data structures. Extensive experience in developing microservices using Java and Python. Experience with distributed computing frameworks such as Hive/Hadoop and Apache Spark. Good experience in test-driven development and automating test cases using Java/Python. Experience in SQL/NoSQL (Oracle, Cassandra) database design. Demonstrated ability to be …
manipulation and analysis using libraries such as Pandas, NumPy, and SQLAlchemy. Extensive experience with the Dash framework for building web applications. In-depth knowledge of Impala or other SQL-on-Hadoop query engines. Understanding of web development concepts (HTML, CSS, JavaScript). Proficiency in data visualization libraries (Plotly, Seaborn). Solid understanding of database design principles and normalization. Experience with …
Cloudera Data Platform (CDP) solutions, ensuring scalability, performance, and security. Strong Cloudera experience with expertise in Cloudera Data Platform (CDP), Cloudera Manager, and Cloudera Navigator. Strong knowledge of the Hadoop ecosystem and related technologies such as HDFS, YARN, Hive, Impala, Spark, and Kafka. Strong AWS services/architecture experience with hands-on expertise in cloud-based deployments (AWS) … data lakes, ETL pipelines, and streaming architectures. Strong data architecture and solutions experience, collaborating with stakeholders to define big data architecture, data ingestion strategies, and governance frameworks. Optimize Hadoop ecosystem components to enhance data processing capabilities. Ensure best practices for data security, access controls, and compliance, including Kerberos, TLS, and Ranger policies. Lead migration projects from legacy …
or more scripting languages (e.g., Python, KornShell) - 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience with big data processing technology (e.g., Hadoop or Apache Spark), data warehouse technical architecture, infrastructure components, ETL, and reporting/analytic tools and environments. Our …
of the following architectural frameworks (TOGAF, Zachman, FEAF). Cloud experience: AWS or GCP preferred, particularly around migrations and cloud architecture. Good technical knowledge and understanding of big data frameworks such as Hadoop, Cloudera, etc. Deep technical knowledge of database development, design, and migration. Experience of deployment in cloud using Terraform or CloudFormation. Automation or scripting experience using languages such as Python … monitoring of hybrid on-premise and cloud data solutions. Working with a variety of enterprise-level organisations to understand and analyse existing on-prem environments such as Oracle, Teradata & Hadoop, and be able to design and plan migrations to AWS or GCP. Deep understanding of high- and low-level designs and architecture solutions. Developing database scripts to migrate …
London, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
Familiarity with SQL and relational database systems (e.g., PostgreSQL, MySQL). Exposure to cloud platforms such as AWS, Azure, or GCP. Experience with big data tools such as Spark and Hadoop. Previous experience working with financial data, including understanding of financial metrics and industry trends.
relational and non-relational databases. Qualifications/Nice to have: Experience with a messaging middleware platform such as Solace, Kafka, or RabbitMQ. Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark) …
agnostic approach to machine learning technologies. Proficiency in Python. Expertise in machine learning frameworks (e.g., TensorFlow, PyTorch, XGBoost). Strong knowledge of data engineering tools and technologies (e.g., Spark, Hadoop, SQL). Experience with cloud platforms such as AWS, Azure, or Google Cloud. Understanding of industry regulations, compliance, and ethical considerations (e.g., GDPR, HIPAA, data ethics). Exceptional communication …
Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions - Experience building large-scale, high-throughput, 24x7 data systems - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience providing technical leadership and mentoring other engineers on data engineering best practices. Our inclusive culture empowers Amazonians to deliver the best results for our customers.
London, England, United Kingdom Hybrid / WFH Options
Enigma
in Computer Science, Engineering, or a related field. • 3+ years of experience as a Software Engineer with a strong focus on data work. • Strong proficiency in leading big data technologies (e.g., Hadoop, Spark, Hive). • Familiarity with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). • Strong problem-solving skills and attention to detail. • Excellent communication and collaboration skills. Bonus: • Experience working …
Manchester, England, United Kingdom Hybrid / WFH Options
CMSPI
with innovative ideas or examples of coding challenges or competitions. Highly desirable skills: Familiarity with Agile practices in a collaborative team environment. Exposure to big data tools, such as Hadoop and Spark, for handling large-scale datasets. Experience with cloud platforms like Microsoft Azure. Benefits: Comprehensive payments-industry training by in-house and industry experts. Excellent performance-based earning …
visualization tools such as Tableau, Power BI, or similar to effectively present validation results and insights. Nice-to-Have Requirements: Familiarity with big data tools and technologies such as Hadoop, Kafka, and Spark. Familiarity with data governance frameworks and validation standards in the energy sector. Knowledge of distributed computing environments and model deployment at scale. Strong communication skills, with …
and orchestration tools like Kubernetes * Understanding of CI/CD pipelines and DevOps practices * Knowledge of security best practices and data privacy considerations * Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus * Basic understanding of machine learning concepts and their software engineering implications. Key job responsibilities: 1. Design and implement robust, scalable architectures for …