Northampton, Northamptonshire, East Midlands, United Kingdom Hybrid / WFH Options
Data Inc. (UK) Ltd
contractors. Skill Set & Experience: We are specifically looking for a Scala Data Engineer, not an application developer. The candidate must have experience migrating from Hadoop to the Cloud using Scala. Strong experience in Data Pipeline creation is essential. Candidates should have Big Data experience. Please ensure they … similar Data Engineering role before sharing their details with us. Keywords for Search: When reviewing CVs, please look for relevant technologies such as: Spark, Hadoop, Big Data, Scala, Spark-Scala, Data Engineer, ETL, AWS (S3, EMR, Glue ETL). Interview Process: The client will conduct an interview round that …
Willingness to be a committer/contributor to open source applications. Java programming for distributed systems, with experience in networking and multi-threading. Apache Hadoop, Apache Accumulo, Apache NiFi. Agile development experience. Well-grounded in Linux fundamentals and knowledge of at least one scripting language (e.g., Python, Ruby, Perl …
privacy training for employees with access to PII. Adhere to privacy and safeguarding requirements for PII, including encryption and breach reporting. Maintain the Apache Hadoop ecosystem, utilizing HBase, MapReduce, and Spark. Manage ETL processes using Linux shell scripting, Perl, Python, and Apache Airflow. Utilize AWS services such as CloudWatch …
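The ETL duties this listing describes (shell and Python scripting rather than Airflow itself) follow the classic extract/transform/load pattern. A minimal sketch in plain Python, with invented CSV fields and aggregation logic purely for illustration:

```python
import csv
import io

def extract(csv_text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalise names and cast the amount field to float."""
    return [
        {"name": r["name"].strip().title(), "amount": float(r["amount"])}
        for r in rows
    ]

def load(rows):
    """Load: here, simply aggregate into a name -> total mapping."""
    totals = {}
    for r in rows:
        totals[r["name"]] = totals.get(r["name"], 0.0) + r["amount"]
    return totals

raw = "name,amount\n alice ,10.5\nBOB,2\nalice,4.5\n"
result = load(transform(extract(raw)))
```

In a production pipeline the load step would write to HBase or S3 rather than return a dict, and a scheduler such as Airflow would chain the three stages as tasks.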
West Midlands, England, United Kingdom Hybrid / WFH Options
Aubay UK
like Tableau and Power BI. Proficiency in analytics platforms such as SAS and Python. Familiarity with Amazon Elastic File System (EFS), S3 Storage, and Hadoop Distributed File System (HDFS). Key Role Responsibilities Lead the design and development of large-scale data solutions, ensuring they meet business objectives and …
A Master's in Computer Science or a related discipline from an accredited college or university may be substituted for one (1) year of experience. A Cloudera Certified Hadoop Developer certification may be substituted for one (1) year of Cloud experience. This position is contingent on funding and may not be filled immediately.
Swagger, Git, Subversion, Maven, Jenkins, Gradle, Nexus, Eclipse, IntelliJ, Ext JS, jQuery, and D3. Cloud technologies: Pig, Hive, Apache Spark, Azure Databricks, Storm, HBase, Hadoop Distributed File System, and MapReduce. Open-source virtual machines and Cloud-based … This position is contingent on funding and may not be filled immediately.
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Investigo
visualisations, ML model interpretation, and KPI tracking. Deep knowledge of feature engineering, model deployment, and MLOps best practices. Experience with big data processing (Spark, Hadoop) and cloud-based data science environments. Other: Ability to integrate ML workflows into large-scale data pipelines. Strong experience in data preprocessing, feature selection …
at the heart of this business, and you can expect to work with a cutting-edge range of technologies, including big data tools (Spark, Hadoop) and cloud platforms (Microsoft Azure, AWS). If you are eager to grow in these areas, comprehensive, top-tier training will be provided. Key …
Full-Stack Web Frameworks: Django, Flask. Front-End Frameworks: React, Angular, TypeScript. Cloud: OpenStack, AWS, Azure. DevOps: Docker, Ansible, GitLab. Big Data: Spark, Elasticsearch, Hadoop, Neo4j. Geo-related mapping experience. Apply today to make a real impact with cutting-edge technology and innovative solutions.
requiring data analysis and visual support. Skills: • Experienced in either programming languages such as Python and/or R, big data tools such as Hadoop, or data visualization tools such as Tableau. • The ability to communicate effectively in writing, including conveying complex information and promoting in-depth engagement on …
experience in micro service architecture using Spring Framework, Spring Boot, Tomcat, AWS, Docker Container or Kubernetes solutions. 5. Demonstrated experience in big data solutions (Hadoop Ecosystem, MapReduce, Pig, Hive, DataStax, etc.) in support of a screening and vetting mission.
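The Hadoop-ecosystem experience this listing asks for centres on the map/shuffle/reduce pattern. A single-process sketch of that pattern in Python, using an invented word-count example (the canonical MapReduce demonstration):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document.
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group intermediate (key, value) pairs by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts emitted for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big pipelines", "data pipelines at scale"]
pairs = list(chain.from_iterable(map_phase(d) for d in docs))
counts = reduce_phase(shuffle(pairs))
```

On a real cluster, Hadoop distributes the map and reduce phases across nodes and performs the shuffle over the network; the per-record logic is the same.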
and strategizing the roadmap around on-premise and cloud solutions. Experience in designing and developing real-time data processing pipelines. Expertise in working with Hadoop data platforms and technologies like Kafka, Spark, Impala, Hive, and HDFS in multi-tenant environments. Expert in Java programming, SQL, and shell scripting; DevOps …
Active TS/SCI Clearance with Full Scope Polygraph. Preferred Skills: Microservices, REST, JSON, XML. CI/CD, Docker, Kubernetes, Jenkins. Big Data technologies (Hadoop, Kafka, Spark). CISSP, Java, AWS certifications a plus. Ready to Transform Your Career? Join Trinity Enterprise Services, where professional growth meets personal fulfillment.
with, and understanding of, algorithms for classification, regression, clustering, and anomaly detection. Knowledge of relational databases, including SQL and large-scale distributed systems (e.g. Hadoop), expertise with statistical data analysis (e.g. linear models, multivariate analysis, stochastic models, sampling methods), and demonstrated effectiveness in collecting information and accurately representing/ …
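The "linear models" this listing mentions are the simplest case of the regression algorithms it asks for. A minimal ordinary-least-squares fit, implemented from scratch in Python with invented data points for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for the simple linear model y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y divided by variance of x gives the slope.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # lies exactly on y = 2x + 1
slope, intercept = fit_line(xs, ys)
```

In practice a candidate would reach for scikit-learn or statsmodels rather than hand-rolling this, but the closed-form solution above is what those libraries compute for the univariate case.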
Python, and TypeScript. -Participate in software development to support innovation and enhancement of customer applications. -Experience with big data tools such as Spark, Hadoop, Cassandra, DynamoDB, Kinesis, Solr, Elasticsearch. -Experience with at least one of HTML, CSS, JavaScript, and at least one modern framework such as …
data. Experience with Linux and cloud environments. Data visualisation technologies (e.g. Amazon QuickSight, Tableau, Looker, Qlik Sense). Desirable experience: Familiarity with large-data techniques (Hadoop, MapReduce, Spark, etc.). Familiarity with providing data via a microservice API. Experience with other public cloud data lakes. AWS certifications (particularly Solutions Architect Associate …
understanding of data and creation of reports and actionable intelligence. Required qualifications to be successful in this role: Data analysis experience using SQL, ideally in Hadoop or other Big Data environments. Experience with ETL, including projects involving data mappings and transformations. Analytical problem …
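The "data mappings and transformations" this listing refers to are typically specified as source-to-target field maps in an ETL mapping document. A small Python sketch of applying such a mapping; the field names and transforms are hypothetical, invented only to show the shape of the work:

```python
# Hypothetical source-to-target field mapping, as found in an ETL mapping spec.
FIELD_MAP = {"cust_nm": "customer_name", "ord_amt": "order_amount"}

def apply_mapping(record, field_map, transforms=None):
    """Rename source fields to target names and apply optional per-field transforms."""
    transforms = transforms or {}
    out = {}
    for src, tgt in field_map.items():
        value = record.get(src)
        fn = transforms.get(tgt)
        out[tgt] = fn(value) if fn else value
    return out

source = {"cust_nm": "  Jones  ", "ord_amt": "42.50"}
mapped = apply_mapping(
    source,
    FIELD_MAP,
    transforms={"customer_name": str.strip, "order_amount": float},
)
```

The same rename-and-cast pattern scales up directly to SQL `SELECT ... AS ...` projections or Spark column expressions in a Big Data environment.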
Experience with programming, ideally Python, and the ability to quickly pick up handling large data volumes with modern data processing tools, e.g. Hadoop/Spark/SQL. Experience with, or the ability to quickly learn, open-source software including machine learning packages such as pandas and scikit-learn, along …
tool frameworks, e.g. MLflow and W&B. Nice to Have: Strong knowledge and deep experience of toolchains: Java, SQL, JavaScript, D3.js, Bash. Data processing: Hadoop, Spark, Kafka, Hive, NumPy, pandas, Matplotlib. Mandatory Certification: Microsoft Certified Azure Data Scientist Associate. Benefits/perks listed below may vary depending on the …
Cloud-based or SaaS products and a good understanding of Digital Marketing and Marketing Technologies. Experience working with Big Data technologies (such as Hadoop, MapReduce, Hive/Pig, Cassandra, MongoDB, etc.). An understanding of web technologies such as JavaScript, Node.js, and HTML. Some level of understanding or experience …
in solving real-world business problems using machine learning, deep learning, data mining, and statistical algorithms. Strong hands-on programming skills in Python, SQL, Hadoop/Hive. Additional knowledge of Spark, Scala, R, Java desired but not mandatory. Strong analytical thinking. Ability to creatively solve business problems, innovating new …
understanding of Java and its ecosystems, including experience with popular Java frameworks (e.g. Spring, Hibernate). Familiarity with big data technologies and tools (e.g. Hadoop, Spark, NoSQL databases). Strong experience with Java development, including design, implementation, and testing of large-scale systems. Experience working on public sector projects …
science and mathematical concepts, practices, and procedures associated with the analysis of large datasets, from indexing schemes to statistical tests. Knowledge of Cloud computing, Hadoop or MapReduce, Netezza, Vertica, BigTable, Pig, Teradata, or Amazon Web Services. Ability to obtain a security clearance. Apply expert knowledge in computer science, particularly …