West Bromwich, England, United Kingdom Hybrid / WFH Options
Leonardo UK Ltd
Docker; experience with NLP and/or computer vision; exposure to cloud technologies (e.g. AWS and Azure); exposure to big data technologies; exposure to Apache products, e.g. Hive, Spark, Hadoop, NiFi; programming experience in other languages. This is not an exhaustive list, and we are keen to hear from you even if you don’t tick every box. …
SQL proficiency, with experience in MS SQL Server or PostgreSQL; familiarity with platforms like Databricks and Snowflake for data engineering and analytics; experience working with big data technologies (e.g., Hadoop, Apache Spark); familiarity with NoSQL databases (e.g., columnar or graph databases like Cassandra, Neo4j); research experience with peer-reviewed publications; certifications in cloud-based machine learning services (AWS, Azure …
Job summary: We are seeking 3 Data Engineers to join our defence & security client on a contract basis. Key skills required for this role: DV cleared, Data Engineer, ETL, Elastic Stack, Apache NiFi. Important: DV Cleared - Data Engineer - ELK & NiFi.
…closely with data engineers to build and optimize data pipelines that facilitate the processing and analysis of large datasets; utilize cloud platforms and big data technologies (e.g., AWS, Azure, Hadoop, Spark) for efficient data processing and model deployment; design and implement robust data storage, retrieval, and management strategies for research datasets; create compelling data visualizations and reports that convey … or MATLAB for developing and testing models; experience with data visualization tools (e.g., Matplotlib, Seaborn, Tableau, Power BI) to present insights effectively; strong understanding of big data technologies (e.g., Hadoop, Spark) and cloud platforms (AWS, Azure, Google Cloud); familiarity with NLP, computer vision, or other specialized techniques relevant to computer and information research; experience with version control systems (e.g. …
…native environments. · Familiarity with containerization (Docker, Kubernetes) and DevOps pipelines. · Exposure to security operations center (SOC) tools and SIEM platforms. · Experience working with big data platforms such as Spark, Hadoop, or Elastic Stack.
…in a team environment. PREFERRED QUALIFICATIONS: - Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions - Familiarity with big data technologies (Hadoop, Spark, etc.) - Knowledge of data security and privacy best practices - Strong problem-solving and analytical skills - Excellent written and verbal communication skills. Our inclusive culture empowers Amazonians to deliver …
…or other data visualization administration • Experience completing Databricks development and/or administrative tasks • Familiarity with some of these tools: DB2, Oracle, SAP, Postgres, Elasticsearch, Glacier, Cassandra, DynamoDB, Hadoop, Splunk, SAP HANA, Databricks • Experience working with federal government clients. Security Clearance: Active CBP Public Trust required. SALARY RANGE: $130,000 to $140,000 annually. Benefits available. For consideration …
…in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google …
London, England, United Kingdom Hybrid / WFH Options
HipHopTune Media
…learning. Experience in using advanced features of cloud platforms (AWS, Azure, Google Cloud) such as machine learning services and automated data pipeline tools. Familiarity with big data frameworks like Hadoop or Spark is beneficial. Skills in advanced data visualization tools and software beyond basic reporting, such as Tableau, Power BI, or even more sophisticated interactive web visualization frameworks like …
…development, e.g. R, Python; strong knowledge of deploying end-to-end machine learning models in Databricks utilizing PySpark, MLflow and workflows; strong knowledge of data platforms and tools, including Hadoop, Spark, SQL, and NoSQL databases. Communicate algorithmic solutions in a clear, understandable way. Leverage data visualization techniques and tools to effectively demonstrate patterns, outliers and exceptional conditions in the …
…modern data architectures, Lambda-type architectures - Proficiency in writing and optimizing SQL - Knowledge of AWS services including S3, Redshift, EMR, Kinesis and RDS - Experience with open source data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.) - Ability to write code in Python, Ruby, Scala or other platform-related big data technology - Knowledge of professional software engineering practices & best practices for …
Bristol, England, United Kingdom Hybrid / WFH Options
Lloyds Bank plc
…exciting new technologies to design and build scalable real-time data applications. Spanning the full data lifecycle and experience using a mix of modern and traditional data platforms (e.g. Hadoop, Kafka, GCP, Azure, Teradata, SQL Server), you’ll get to work building capabilities with horizon-expanding exposure to a host of wider technologies and careers in data. Helping in … and non-relational databases to build data solutions, such as SQL Server/Oracle, experience with relational and dimensional data structures. Experience in using distributed frameworks (Spark, Flink, Beam, Hadoop). Proficiency in infrastructure as code (IaC) using Terraform. Experience with CI/CD pipelines and related tools/frameworks. Containerisation: good knowledge of containers (Docker, Kubernetes, etc. …
…DL libraries like TensorFlow, PyTorch, or JAX. Knowledge of data analytics concepts, including data warehouse technical architectures, ETL and reporting/analytic tools and environments (such as Apache Beam, Hadoop, Spark, Pig, Hive, MapReduce, Flume). Customer-facing experience of discovery, assessment, execution, and operations. Demonstrated excellent communication, presentation, and problem-solving skills. Experience in project governance and enterprise …
…DevOps/Cloud/Software engineer. Proficiency in programming languages such as Python, Java, or Scala. Strong experience with relational databases (e.g., PostgreSQL, MySQL) and big data technologies (e.g., Hadoop, Spark). Experienced with Elasticsearch and Cloud Search. Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform. Experience with data pipeline orchestration tools (e.g. …
Proven experience as a Data Engineer, preferably in a freelance or consulting capacity. Strong expertise in SQL, Python, and/or Scala. Experience with big data technologies (Spark, Hadoop, Kafka) is a plus. Hands-on experience with cloud platforms (AWS, Azure, Google Cloud). Knowledge of ETL tools, data warehouses (Snowflake, BigQuery, Redshift) and pipeline orchestration (Airflow, dbt …
…functional teams. Preferred Skills: High-Performance Computing (HPC) and AI workloads for large-scale enterprise solutions. NVIDIA CUDA, cuDNN, TensorRT experience for deep learning acceleration. Big data platforms (Hadoop, Spark) for AI-driven analytics in professional services. Please share CV at payal.c@hcltech.com.
English is required. Preferred Skills: Experience in commodities markets or broader financial markets. Knowledge of quantitative modeling, risk management, or algorithmic trading. Familiarity with big data technologies like Kafka, Hadoop, Spark, or similar. Why Work With Us? Impactful Work: Directly influence the profitability of the business by building technology that drives trading decisions. Innovative Culture: Be part of a …
…by championing best practices in coding, architecture, and performance. Foster a team culture focused on continuous improvement, where learning is encouraged. Leverage Big Data Technologies: Utilise tools such as Hadoop, Spark, and Kafka to design and manage large-scale on-prem data processing systems. Collaboration: Collaborate with cross-functional teams and stakeholders to deliver high-impact solutions that align …
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
Description: As a Lead Data Engineer or architect at Made Tech, you'll play a pivotal role in helping public sector organisations become truly data-led, by equipping them with robust …
London, England, United Kingdom Hybrid / WFH Options
Aecom
…languages such as Python, R, and SQL. + In-depth experience with data manipulation and visualization libraries (e.g., Pandas, NumPy, Matplotlib, etc.). + Solid understanding of big data technologies (e.g., Hadoop, Spark) and cloud platforms (AWS, Azure, Google Cloud). + Strong expertise in the full data science lifecycle: data collection, preprocessing, model development, deployment, and monitoring. + Experience in leading teams …
…orchestration tools like Apache Airflow or Cloud Composer. Hands-on experience with one or more of the following GCP data processing services: Dataflow (Apache Beam), Dataproc (Apache Spark/Hadoop), or Composer (Apache Airflow). Proficiency in at least one scripting/programming language (e.g., Python, Java, Scala) for data manipulation and pipeline development. Scala is mandated in some cases.