… team • Experience in A/B hypothesis testing (confidence intervals and p-values, linear regression to generalised additive models, time series) • Knowledge of Spark, Hadoop or a similar platform • Experience with CI/CD, including Git and Docker • Experience with data visualisation platforms like Tableau, Power BI, Looker Studio, etc. • Experience …
… at scale. What we expect from you: • Strong experience building Python packages, installable with pip/conda • Experience processing big data, ideally in a Hadoop/Spark environment • Experience working with relational databases and SQL-like operations • Experience with Airflow/orchestration tooling is beneficial • Understanding of Continuous Integration …
… consulting environment • Current or previous consulting experience highly desirable • Experience of working with companies in the finance sector highly desirable • Platform implementation experience (Apache Hadoop, Kafka, Storm, Spark, Elasticsearch and others) • Experience around data integration & migration, data governance, data mining, data visualisation and database modelling in an agile delivery …
… Work On: • Data Warehousing • ETL (Extract, Transform, Load) Processes • Data Modelling and Database Management • Data Pipeline Development • Data Quality Assurance • Big Data Technologies (e.g., Hadoop, Spark) • Data Visualization Roles & Responsibilities: • Collaborate with experienced data engineering professionals and global team members. • Participate in designing and implementing data warehousing solutions. • Develop …
… scaling, and management of containerized applications. Red Hat/Linux: Familiarity with Red Hat or other Linux distributions, including system administration, scripting, and troubleshooting. Desirable Skills: Hadoop: Understanding of Hadoop ecosystem components and their applications in big data processing. Accumulo: Experience with Apache Accumulo, a sorted, distributed key/value …
Stevenage, England, United Kingdom Hybrid / WFH Options
Capgemini Engineering
… visualization tools (e.g., Matplotlib, Seaborn, Tableau). • Ability to work independently and lead projects from inception to deployment. • Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure) is desirable. • MSc or PhD in Computer Science, Data Science, Artificial Intelligence, or a related field is …
… to 10 years' IT Architecture experience working in a software development, technical project management, digital delivery, or technology consulting environment • Platform implementation experience (Apache Hadoop, Kafka, Storm, Spark, Elasticsearch and others) • Experience around data integration & migration, data governance, data mining, data visualisation and database modelling in an agile delivery …
A very exciting opportunity! The following skills/experience are required: • Strong Data Architect background • Experience in data technologies including Finbourne LUSID, Snowflake, Hadoop and Spark • Experience in cloud platforms: AWS, Azure or GCP • Previous experience in Financial Services: understanding of data requirements for equities, fixed income, private assets …
Darlington, County Durham, United Kingdom Hybrid / WFH Options
Additional Resources
Excellent problem-solving abilities with the capacity to convert business requirements into analytical solutions. Experience with big data technologies and distributed computing frameworks (e.g., Hadoop, Spark) would be beneficial. Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and knowledge of database systems (SQL, NoSQL) would be preferred. What's …
Data Scientist Location: Wokingham Duration: 6 Months We are seeking a skilled and experienced Data Scientist with expertise in time series-based predictive analysis and strong proficiency in Python & MLOps. As a Data Scientist, you will be responsible for analysing …
PySpark Developer - Hybrid in Knutsford, Cheshire - Inside IR35 JD - Professional & Technical Skills: • Required Skill: Expert proficiency in PySpark, Hadoop, Python, Git • Strong understanding of data engineering concepts and ETL processes, data warehousing, data pipelines, SQL • Additional Good-To-Have Skills: AWS Glue, knowledge of implementing ETL and data pipelines …
Desirable Skills: AWS skills and accreditation. Knowledge of COTS products such as Elasticsearch, NiFi, Rabbit, Kafka, MongoDB, Hadoop, Ansible, Git, and Kubernetes. Experience with dashboard monitoring and alerting tools (Grafana, Splunk, Prometheus). Familiarity with on-premise to cloud application migration. Full UK driving licence and access to a … written reports. Active security clearance is essential. …
… and data integration workflows. Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of database systems (e.g., SQL, NoSQL, Hadoop, Spark) and data analysis tools. Nice to have: Knowledge of data visualization tools (e.g., Tableau, Power BI). Construction, Transport or Rail industry experience.
… container-based applications in a microservices architecture, using state-of-the-art software engineering best practices. You will have expertise with Big Data Hadoop platforms such as Databricks, Cloudera, Teradata, etc., and a solid fundamental understanding of the Hadoop architecture. You will have creativity and a passion for tackling … query languages (specifically Hive/SparkSQL and ANSI SQL). Experience building large-scale Spark 3.x applications and data pipelines, ideally with batch processing running on Hadoop clusters. Ideally, you will have experience with messaging queues such as Kafka, RabbitMQ or JMS, and with the reactive architecture paradigm. Experience designing and developing highly available …
SN25, Upper Stratton, Borough of Swindon, Wiltshire, United Kingdom
BG Automotive
ABOUT BG AUTOMOTIVE BG Automotive (BGA) is a leader in the Automotive Aftermarket spares industry, catering to both UK and export markets. At BGA, you will join a dynamic environment where innovation and data-driven decision-making are at the …
… Enthusiasm for learning and adapting to new technologies Desired Skills: • Knowledge of machine learning algorithms and their applications • Experience with big data technologies (e.g., Hadoop, Spark) • Understanding of financial services industry trends and challenges • Familiarity with cloud platforms (AWS, Azure, or GCP) • Interest in cybersecurity and its intersection with …
… Analysis: Lead data-driven analysis to pinpoint issues, develop strategies, and drive performance improvements. Data Expertise: Manage large, complex data sets, using tools like Hadoop and cloud technologies to generate valuable insights. AI & Machine Learning: Leverage AI, predictive models, and machine learning to deliver cutting-edge solutions for business …
City Of Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
Salary: £85,000 to £100,000 depending on experience Location: Hybrid working in either Bristol, Manchester or London. Are you a skilled Data Engineer who can deliver Data Platforms? Are you familiar with working on agile delivery-led projects? Data Engineers …
AWS Data Engineer Salary: £50,000 - £95,000 + Bonus + Pension + Private Healthcare Location: London/UK-wide - Hybrid working * To be successfully appointed to this role, you must be eligible for Security Check (SC) and/or …
Vice President Data & AI London based We are searching for a Vice President of Data and Artificial Intelligence - someone with hands-on experience designing AI solutions to solve complex business problems. Your new role is a leadership position at a …
… or Django, Docker • Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo • Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. • Data acquisition, development of data sets and improving data quality • Preparing data for predictive and prescriptive modelling • Hands-on coding experience …
… or Tableau. Experience with real-time analytics and event-driven architectures using tools such as Apache Kafka. Background in big data technologies such as Hadoop, HBase, or Cassandra.
… experience in big data technologies, with at least 3 years of experience in Apache Spark and Cloudera. Strong knowledge of big data technologies, including Hadoop, Hive, HBase, Kafka, and YARN. Excellent programming skills in Python and/or Scala. Strong analytical, problem-solving, and communication skills. Bachelor’s or …
… and classification techniques and algorithms • Fluency in a programming language (Python, C, C++, Java, SQL) • Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau …
… senior data leadership role, with a track record of managing and scaling data teams. Technical Skills: Expertise in data engineering, big data technologies (e.g., Hadoop, Spark), and cloud services (e.g., AWS, Google Cloud). Analytical Skills: Strong analytical mindset with the ability to translate complex data into actionable insights.