Stevenage, Hertfordshire, South East, United Kingdom (Hybrid/Remote Options)
MBDA
e.g. MS SQL, Oracle...) NoSQL technology skills (e.g. MongoDB, InfluxDB, Neo4J...) Data exchange and processing skills (e.g. ETL, ESB, API...) Development skills (e.g. Python) Big data technologies knowledge (e.g. Hadoop stack) Knowledge of NLP (Natural Language Processing) Knowledge of OCR (Optical Character Recognition) Knowledge of Generative AI (Artificial Intelligence) would be advantageous Experience in containerisation technologies (e.g. Docker) would be advantageous…
with a strong understanding of security and compliance requirements Proven track record in managing data systems and ensuring data security Technical Skills: Expertise in data management tools (e.g., SQL, Hadoop, Tableau, etc.) Experience with data encryption, data protection technologies, and access control systems. Strong understanding of database management systems (DBMS) and cloud platforms (e.g., AWS, Azure) Clearance: Must possess…
Docker and orchestration tools like Kubernetes. Familiarity with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation. Knowledge of data engineering and experience with big data technologies like Hadoop, Spark, or Kafka. Experience with CI/CD pipelines and automation, such as using Jenkins, GitLab, or CircleCI. As an equal opportunities employer, we welcome applications from individuals of…
years in a managerial role. Proven ability to manage and mentor a team of Backline Engineers, guiding career development. Strong technical expertise in Apache Spark, Databricks Runtime, Delta Lake, Hadoop, and cloud platforms (AWS, Azure, GCP) to troubleshoot complex customer issues. Ability to oversee and drive customer escalations, ensuring seamless coordination between Frontline Support and Backline Engineering. Experience in…
APIs, data structures, algorithms, Collections, multi-threading, memory management and concurrency Preferred Qualifications: Sound knowledge of software engineering design patterns and practices Experience in the big data ecosystem using Hadoop, Spark and Scala, with Python packages and libraries for large-scale data Good understanding of Agile software development frameworks Strong communication and analytical skills Ability to work in teams in…
scale processing. The qualified candidate will have experience with database systems, Azure cloud storage, and significant exposure to or experience with modern big data processing tools (such as Spark, Hadoop, and Databricks). Candidates will be expected to be able to design and implement data solutions that use the following Azure services: Azure Cosmos DB, Azure SQL Database, Azure…
Stevenage, Hertfordshire, South East, United Kingdom (Hybrid/Remote Options)
Henderson Scott
with SQL & NoSQL databases (e.g. MS SQL, MongoDB, Neo4J) Python skills for scripting and automation ETL and data exchange experience (e.g. APIs, ESB tools) Knowledge of Big Data (e.g. Hadoop) Curiosity about AI, particularly NLP, OCR or Generative AI Bonus Points For: Docker/containerisation experience Any previous work in industrial, aerospace or secure environments Exposure to tools like…
technologies (e.g., MongoDB, InfluxDB, Neo4J). Experience with data exchange and processing (ETL, ESB, APIs). Proficiency in Python or similar programming languages. Familiarity with big data frameworks (e.g., Hadoop ecosystem). Desirable Skills: Understanding of NLP (Natural Language Processing) and OCR (Optical Character Recognition). Exposure to Generative AI concepts and tools. Experience with containerisation (e.g., Docker).
Familiarity with and experience of using UNIX Knowledge of CI toolsets Good client-facing skills and problem-solving aptitude DevOps knowledge of SQL, Oracle DB, Postgres, ActiveMQ, Zabbix, Ambari, Hadoop, Jira, Confluence, BitBucket, ActiviBPM, Oracle SOA, Azure, SQL Server, IIS, AWS, Grafana, Oracle BPM, Jenkins, Puppet, CI and other cloud technologies. Benefits Include: Contributory pension scheme Employee Assistance Program Medical…
City of London, London, United Kingdom (Hybrid/Remote Options)
Harnham
CD, and infrastructure-as-code Docker; Kubernetes (EKS, GKE, AKS); Jenkins, GitLab CI, or GitHub Actions; Terraform or CloudFormation; Prometheus, Grafana, Datadog, or New Relic; Slurm, Torque, LSF; MPI; Hadoop or Spark Experience with high-performance computing, distributed systems, and observability tools Strong communication and executive presence, with the ability to translate complex technical concepts for…
across a global estate Strong domain knowledge of Data & AI, including familiarity with cloud data and AI platforms (e.g. MS Azure, AWS, Google Cloud) and big data technologies (e.g. Hadoop, Spark) Excellent leadership, communication, and interpersonal skills Ability to think strategically and solve complex business scenarios Working knowledge of TOGAF, DAMA-DMBOK, or similar frameworks is a plus. We are…
technologies into production systems. • Design and implement scalable cloud-based and on-premises data architectures using platforms like Azure, AWS, or Google Cloud. • Work with big data technologies (e.g., Hadoop, Spark) and data lake architectures to ensure the organization's data can be ingested, processed, and analyzed at scale. • Manage the integration of AI models and algorithms into big … AI/ML technologies (e.g., TensorFlow, PyTorch, Scikit-learn, Keras) and understanding of deep learning and natural language processing (NLP). • Strong understanding of big data platforms such as Hadoop, Apache Spark, and data lakes. • Hands-on experience with cloud platforms (Azure, AWS, or Google Cloud) for building scalable data solutions. • Proficiency in data modeling, data warehousing, and ETL…
Fort Lauderdale, Florida, United States (Hybrid/Remote Options)
Vegatron Systems
PARTY WORK AUTHORIZATION: US Citizen or GC, GC-EAD, H4 & L2 EAD; NO H1B & OPT/CPT ACCEPTED BY CLIENT RATE: $OPEN - DOE JOB TITLE: Data Engineer Big Data Hadoop JOB DESCR: Candidates will start out REMOTE WORK and then will eventually be sitting in Ft. Lauderdale, FL. Candidates should be senior Data Engineers with big data tools (Hadoop … with a heavy emphasis on cloud and big data technologies • Healthcare knowledge and experience heavily preferred Knowledge and Skills: • Knowledge of new and emerging data and analytics technologies (e.g. Hadoop, Spark, Elastic, AWS, etc.) • Experience working and developing in cloud platforms such as AWS, Azure, or Google Cloud • Experience with big data tools: Hadoop, Spark, Kafka, etc. • Past experience in executing and delivering solutions in an Agile scrum development environment is preferred…
Job Description: Scala/Spark • Good big data resource with the below skillset: Java, big data technologies. • Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in big data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage. • Consistently demonstrates clear and concise written and verbal communication • A history of delivering against agreed objectives…
the development process of the AB testing framework. Build services which respond to batch and real-time data to safely roll out features and experiments using a technology stack of AB testing, Hadoop, Spark, Flink, HBase, Druid, Python, Java, distributed systems, React and statistical analysis. Work closely with partners to implement sophisticated statistical methodology into the platform. Telecommuting is permitted. Minimum Requirements…
JavaScript framework (e.g. React, Backbone, AngularJS) • Knowledge of core CS concepts such as common data structures and algorithms • Full-stack experience developing in Scala/Python and working with Hadoop and related tools is a plus • Code samples from private GitHub repos, side projects, and open source project contributions are a plus Work Authorization: US Citizen, Green Card…
succeed. Qualifications About You Experience with implementing RPA and/or BI tools and/or building Dashboards, Apps and Action Flows. Knowledge of either Python, R, SQL, Spark, Hadoop, MATLAB, Git. Excellent communication and presentation abilities Ability to translate complex technical concepts into clear business language that can be tailored depending on the audience Change management experience across…
functional teams Understanding of Agile software development methodologies Desired skills: Experience with Docker, Kubernetes, OpenShift Enterprise Database Administration such as MySQL, Oracle, MariaDB, Microsoft SQL Server Experience with Hadoop distributions in the cloud (AWS, Azure, Google) is a plus Experience with Apache Tomcat Experience with Terraform Pay Range: $63.68-$71.68 Only candidates available and ready to work directly…
Java Backend Engineer, Sunnyvale, CA. Long term. Very good Core Java backend engineers with strong problem-solving skills. Experience in Hadoop, Big Data and Spark. Strong experience with databases. The ideal candidate will have 2-5 years of backend experience.
by analyzing Shell/Hive scripts, Informatica workflows and SQL Able to work with team members and support partners located in multiple geographical locations Unix, PL/SQL, Autosys, Informatica, Hadoop, Spark Pay range: $48.85 - $56.85 Only candidates available and ready to work directly as Genesis10 employees will be considered for this position. If you have the described qualifications and…