build solutions that offer a best-in-class experience to our risk managers and analysts. Our technology stack also includes Sybase IQ, MongoDB, Presto on Hadoop clusters, Elasticsearch, Kafka, Angular, React and Tableau. We do extensive domain modelling using the firm-developed open source platform called Alloy.
database management. Cloud Platform: AWS for cloud infrastructure. Programming Languages: JavaScript for front-end development and Java for back-end processes. Big Data Technologies: Hadoop, Spark, or Kafka for handling large-scale data processing. What We Need from You: Essential Skills: Technical Proficiency: Expertise in React.js, front-end technologies
one leading business intelligence platform (e.g. Microsoft, Crystal, Qlik, SAP, Tableau). Good understanding of open source, big data, and cloud data platforms (e.g. Hadoop, Spark, Hive, Pentaho, AWS, Azure); given a business problem, you can analyse and evaluate options and recommend solutions. Proven experience in designing, building and
Director of Data & AI, London based. We are searching for a Director of Data and Artificial Intelligence: someone with hands-on experience designing AI solutions to solve complex business problems. Your new role is a leadership position at a business
on experience with analytic tools like R and Python, and visualization tools like Tableau and Power BI. Exposure to cloud platforms and big data systems such as Hadoop HDFS, and Hive is a plus. Ability to work with IT and Data Engineering teams to help embed analytic outputs in business processes. Graduate
London, England, United Kingdom Hybrid / WFH Options
Global Relay
JMeter or similar tools; web services technology such as REST, JSON or Thrift; testing web applications with Selenium WebDriver; big data technology such as Hadoop, MongoDB, Kafka or SQL; network principles and protocols such as HTTP, TLS and TCP; continuous integration systems such as Jenkins or Bamboo; continuous delivery
through improved data handling and analysis. Responsibilities: Build predictive models using machine-learning techniques that generate data-driven insights on modern data platforms (Spark, Hadoop and other map-reduce tools); Develop and productionize containerized algos for deployment in hybrid cloud environments (GCP, Azure); Connect and blend data from various
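The map-reduce model that Spark and Hadoop implement at scale can be sketched in plain Python, as a word count over a handful of documents (the document list and function names here are illustrative, not part of any specific listing's codebase):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Map: emit a (word, 1) pair for every word in one document.
    return [(word.lower(), 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group intermediate values by key, as the framework would
    # between the map and reduce stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine the grouped values for each key (here, sum the counts).
    return {key: sum(values) for key, values in groups.items()}

docs = ["spark and hadoop", "hadoop on hadoop"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts["hadoop"])  # 3
```

In a real Spark or Hadoop job the map and reduce functions run in parallel across partitions of the data, and the shuffle moves intermediate pairs between machines; the logical structure is the same as this single-process sketch.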
Must have 8+ years' experience with relational databases like Oracle, NoSQL databases and/or big data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other open source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management). Must have 3+ years
or Django, Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on coding experience
ETL processes, and data warehousing. - Significant exposure and hands-on experience with at least 2 of the programming languages Python, Java, Scala, GoLang. - Significant experience with Hadoop, Spark and other distributed processing platforms and frameworks. - Experience working with open table/storage formats like Delta Lake, Apache Iceberg or Apache Hudi.
phases of projects through prototyping, architectural design and delivery. You will be working with Azure tools such as Databricks, Data Factory as well as Hadoop to create big data environments which, in turn, will help businesses to gain greater insight into their big data repositories. RESPONSIBILITIES Working on projects
Data Analytics stack (IS, AS, RS); Power BI, DAX; MDS; Azure Data Lakes. Supporting: Azure ML; .NET/HTML5; Azure infrastructure; R, Python; PowerShell; Hadoop, Data Factory. Principles: Data Modelling; Data Warehouse Theory; Data Architecture; Master Data Management; Data Science. WHY ADATIS? There’s a long list of reasons
services or hedge fund industry. Technical Skills: Proficiency in Python and SQL. Experience with relational and NoSQL databases. Knowledge of big data frameworks (e.g., Hadoop, Spark, Kafka). Understanding of financial markets and trading systems. Strong analytical, problem-solving, and communication skills. Familiarity with DevOps tools and practices. This
limited to: Back-end technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js). Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency
TensorFlow, PyTorch). Solid understanding of ML and data pipeline architectures and best practices. Experience with big data technologies and distributed computing (e.g., Spark, Hadoop) is a plus. Proficient in SQL and experience with relational databases. Strong analytical and problem-solving skills, with a keen attention to detail. Knowledge
clustering and classification techniques, and algorithms. Fluency in a programming language (Python, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau)
Leading ecommerce client are now searching for a Senior Data Engineer to contribute towards the delivery of their data strategy. This engineer will shape our client’s data function, delivering end-to-end solutions for an array of customer data
London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
models, ETL processes, and data warehousing solutions. Programming: Utilize Python, Java, Scala, or GoLang to build and optimize data pipelines. Distributed Processing: Work with Hadoop, Spark, and other platforms for large-scale data processing. Real-Time Data Streaming: Develop and manage pipelines using CDC, Kafka, and Apache Spark. Database
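A change-data-capture (CDC) pipeline of the kind mentioned above, at its core, replays a stream of change events against a target store. A minimal single-process sketch in plain Python (the event shape, `op`/`key`/`row` field names, and sample data are simplifying assumptions, not a specific Kafka or CDC-tool format):

```python
def apply_cdc_event(table, event):
    # Apply one change event to an in-memory "table" keyed by primary key.
    # Assumed event shape: {"op": "insert"|"update"|"delete", "key": ..., "row": ...}
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        table[key] = event["row"]   # upsert the latest row image
    elif op == "delete":
        table.pop(key, None)        # tolerate deletes for keys never seen
    return table

table = {}
events = [
    {"op": "insert", "key": 1, "row": {"name": "alice"}},
    {"op": "update", "key": 1, "row": {"name": "alicia"}},
    {"op": "insert", "key": 2, "row": {"name": "bob"}},
    {"op": "delete", "key": 2, "row": None},
]
for e in events:
    apply_cdc_event(table, e)
print(table)  # {1: {'name': 'alicia'}}
```

In production the event stream would arrive via Kafka and the "table" would be a warehouse or lake table, but the invariant is the same: applying the ordered change log reproduces the current state of the source.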
Senior Software Engineer - Data. Initial 6-month contract role. Remote working. £525 - £625, Inside IR35. We're recruiting on behalf of an IT Services Provider for a Senior Software Engineer (Data) to design and develop modern data management tools. As
such as TensorFlow, PyTorch, or Scikit-learn. Strong knowledge of statistical modelling, data mining, and data visualization techniques. Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure). Strong problem-solving skills and the ability to think critically and creatively. Excellent analytical skills
Azure SQL Data Warehouse, Azure Data Lake, AWS S3, AWS RDS, AWS Lambda or similar. Have experience with open source big data products, e.g. Hadoop, Hive, Pig, Impala or similar. Have experience with open source non-relational or NoSQL data repositories such as MongoDB, Cassandra, Neo4J or similar. Be
Bedford, Bedfordshire, United Kingdom Hybrid / WFH Options
Understanding Recruitment
ML frameworks (TensorFlow, PyTorch etc.); MLOps experience. Nice to have: familiarity with Git or other version control systems; computer vision library exposure; understanding of big data technologies (Hadoop, Spark etc.); experience with cloud platforms (AWS, GCP or Azure). This is a fully remote role, but may require very occasional travel (once a
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
and NoSQL databases • Programming languages such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud and distributed processing technologies such as Hadoop. Benefits: • Base Salary: £45,000 - £75,000 (DoE) • Discretionary Bonus: Circa 10% per annum • DV Bonus: Circa £5,000 • Flex Fund: £5,000 • Health: Private