such as Python, Java, or Scala, and experience with ML frameworks like TensorFlow, PyTorch, or scikit-learn. Experience with large-scale distributed systems and big data technologies (e.g., Spark, Hadoop, Kafka). Accommodation requests: if you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out.
native tech stack in designing and building data & AI solutions. Experience with data modeling, ETL processes, and data warehousing. Knowledge of big data tools and frameworks such as Spark, Hadoop, or Kafka.
Stevenage, England, United Kingdom Hybrid / WFH Options
Henderson Scott
with SQL & NoSQL databases (e.g. MS SQL, MongoDB, Neo4J). Python skills for scripting and automation. ETL and data exchange experience (e.g. APIs, ESB tools). Knowledge of Big Data (e.g. Hadoop). Curiosity about AI, particularly NLP, OCR or Generative AI. 🌟 Bonus Points For: Docker/containerisation experience; any previous work in industrial, aerospace or secure environments; exposure to tools like …
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Experis - ManpowerGroup
Using machine learning tools to select features, create and optimize classifiers. Qualifications: Programming skills - knowledge of statistical programming languages like Python, and database query languages like SQL, Hive/Hadoop or Pig, is desirable; familiarity with Scala and Java is an added advantage. Statistics - good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, etc. Proficiency …
SQL, Oracle) and NoSQL skills (e.g. MongoDB, Neo4J). Proficiency in data exchange and processing tools (ETL, ESB, APIs). Development skills in Python and familiarity with Big Data technologies (e.g. Hadoop). Knowledge of NLP and OCR; Generative AI expertise advantageous. Understanding of containerisation (Docker) and, ideally, the industrial/defence sector. The Package: Company bonus of up to £2,500 (based …
real-time intelligence, BI, Copilot), Azure Databricks, Purview Data Governance, and Azure databases: SQL DB, Cosmos DB, PostgreSQL. Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop, and BI solutions. Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums. Qualifications: proven technical pre-sales or technical consulting experience, OR a Bachelor's degree in …
NoSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J). Data exchange and processing skills (e.g. ETL, ESB, API). Development skills (e.g. Python). Big Data technologies knowledge (e.g. Hadoop stack). This position offers a lucrative benefits package, which includes but is not limited to: salary circa £45,000 - £55,000 (depending on experience); bonus scheme of up to …
Agile working practices; CI/CD tooling; scripting experience (Python, Perl, Bash, etc.); ELK (Elastic stack); JavaScript; Cypress; Linux experience; search engine technology (e.g., Elasticsearch); Big Data technology experience (Hadoop, Spark, Kafka, etc.); microservice and cloud-native architecture. Desirable skills: able to demonstrate experience of troubleshooting and diagnosis of technical issues; able to demonstrate excellent team-working skills; strong …
and machine learning. PREFERRED QUALIFICATIONS - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers …
Previous experience as a Data Engineer (3-5 years); deep expertise in designing and implementing solutions on Google Cloud; strong interpersonal and stakeholder management skills; in-depth knowledge of Hadoop, Spark, and similar frameworks; in-depth knowledge of programming languages including Java; expert in cloud-native technologies, IaC, and Docker tools; excellent project management skills; excellent communication skills; proactivity …
models in production and adjusting model thresholds to improve performance. Experience designing, running, and analyzing complex experiments or leveraging causal inference designs. Experience with distributed tools such as Spark, Hadoop, etc. A PhD or MS in a quantitative field (e.g., Statistics, Engineering, Mathematics, Economics, Quantitative Finance, Sciences, Operations Research). Office-assigned Stripes spend at least 50% of the time …
NLP technologies. Proficiency in Python and/or Scala; experience with ML libraries such as TensorFlow, PyTorch, HuggingFace, or scikit-learn. Experience with Databricks, distributed data systems (e.g., Spark, Hadoop), and cloud platforms (AWS, GCP, or Azure). Ability to thrive in ambiguous environments, working closely with cross-functional teams to define and deliver impactful solutions. Strong communication skills with …
junior engineers and establishing best practices within a team. Experience in the following would be beneficial: working with big data technologies, ideally Spark, but experience with others such as Hadoop or Elasticsearch is also valuable; exposure to DevOps tooling and practices (we care deeply about automation and tooling, and even have a dedicated Developer Tooling team); building data processing …
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Curo Resourcing Ltd
and CI/CD paradigms and systems such as Ansible, Terraform, Jenkins, Bamboo, Concourse, etc. Observability - SRE. Big Data solutions (ecosystems) and technologies such as Apache Spark and the Hadoop ecosystem. Excellent knowledge of YAML or similar languages. The following technical skills & experience would be desirable: JupyterHub awareness; RabbitMQ or other common queue technology, e.g. ActiveMQ; NiFi; Rego …
Good to have. Interview includes a coding test. Job Description: Scala/Spark • Good Big Data resource with the below skillset: Spark, Scala, Hive/HDFS/HQL • Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage • Consistently demonstrates clear and concise written …