Technical Discipline. Technical Expertise: Proficiency in SQL and experience with cloud-based data pipelines (Azure, AWS, GCP). Familiarity with big data tools like Hadoop and Spark. Data Management Skills: Hands-on experience working with large data sets, data pipelines, workflow management tools, and Azure cloud services. Exposure to …
Cloud ML Engine, Azure Data Lake, Azure Databricks or GCP Cloud Dataproc. Familiarity with big data technologies and distributed computing frameworks, such as Hadoop, Spark, or Apache Flink. Experience scaling an “API-Ecosystem”, designing and implementing “API-First” integration patterns. Experience working with authentication and authorisation protocols/ …
and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau …
within a typical retail trading environment is key. Experience required: A background in leveraging hands-on skills using tools such as Python, R, Spark, Hadoop, SQL and cloud-based platforms such as GCP, Azure and AWS to manipulate and analyse various data sets in large volumes. Background in data …
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
and NoSQL databases • Programming languages such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud and distributed processing technologies such as Hadoop Benefits: • Base Salary: £45,000 - £75,000 (DoE) • Discretionary Bonus: Circa 10% per annum • DV Bonus: Circa £5,000 • Flex Fund: £5,000 • Health: Private …
proof of concepts. Develop monitoring strategies for infrastructure, platforms and applications, aligning with enterprise strategy and overall industry trends. Big data technologies such as Hadoop, Spark, Kafka, etc. Hadoop: 5+ years. Kafka: 3+ years. Spark: 4+ years. PySpark: 3+ years …
data solutions (AWS, Azure or GCP), engineering languages including Python, SQL, Java, and pipeline management tools, e.g. Apache Airflow. Familiarity with big data technologies such as Hadoop or Spark. If this opportunity is of interest, or you know anyone who would be interested in this role, please send your CV and …
solving skills and creativity. Google Cloud Professional Cloud Architect or Professional Cloud Developer certification. Highly desirable: hands-on experience with ETL tools, Hadoop-based technologies (e.g., Spark), and batch/streaming data pipelines (e.g., Beam, Flink). Proven expertise in designing and constructing data lakes and data …
table and be open to expanding your skills further. With industrial experience in AWS/GCP/Azure and familiarity with data products like Hadoop, Spark, and PostgreSQL, you'll thrive in our data-driven environment. Your problem-solving skills and meticulous attention to detail will make you a …
experience. Demonstrate in-depth knowledge of large-scale data platforms (Databricks, Snowflake) and cloud-native tools (Azure Synapse, Redshift). Experience of analytics technologies (Spark, Hadoop, Kafka). Have familiarity with Data Lakehouse architecture, SQL Server, DataOps, and data lineage concepts …
Data and Artificial Intelligence, Senior Vice President. We are searching for a Senior Vice President of Data and Artificial Intelligence: someone with hands-on experience designing AI solutions to solve complex business problems. Your new role is a leadership position …
rulemaking. What you'll need to succeed: Extensive business and data analysis experience. Strong SQL and Excel skills. Data visualisation experience. Experience with Python, Hadoop or Big Data. What you'll get in return: An exciting opportunity to join an international organisation working with a major financial services organisation.
in Apache Iceberg, Spark, Big Data. 3+ years of Big Data project development experience. Hands-on experience in areas such as Apache Iceberg & Spark, Hadoop, Hive. Must have knowledge of at least one database, e.g. Postgres, Oracle, MongoDB. Excellent knowledge of SDLC processes and DevOps (Jira, Jenkins pipelines). Working in Agile …
would be an advantage. Data visualization – tools like Tableau. Master data management (MDM) – concepts and expertise in tools like Informatica & Talend MDM. Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive. Data processing frameworks – Spark & Spark Streaming. Hands-on experience with multiple databases like PostgreSQL …
experience. Experience with cloud computing platforms such as AWS, Azure, or GCP (Google Cloud Platform). Familiarity with big data technologies such as Apache Hadoop, Spark, or Kafka. Experience deploying machine learning models in production environments. Contributions to open-source machine learning projects or research publications in relevant conferences …
learn). Understanding of database technologies (ETL) and SQL proficiency for data manipulation, data mining and querying. Knowledge of big data tools (Spark or Hadoop a plus). Power BI, dashboard design/development. Regulatory Awareness/Compliance: Uphold regulatory/compliance requirements relevant to your role, escalating areas …
the following platforms: MySQL or Cassandra. Experience of developing and deploying applications into AWS or a private cloud. Exposure to any of the following: Hadoop, JMS, ZooKeeper, Spring, JavaScript, UI development. Our Offer to You: An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our …
/Kotlin. Familiarity with Kotlin or willingness to learn. Industrial experience with AWS/GCP/Azure. Knowledge of common data products such as Hadoop, Spark, Airflow, PostgreSQL, S3, etc. Problem-solving/troubleshooting skills and attention to detail. 👋 About Us High-quality data access and provisioning shouldn't …
and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau …
Bedford, England, United Kingdom Hybrid / WFH Options
Understanding Recruitment
PyTorch etc.) MLOps experience. Nice to have: Familiarity with Git or other version control systems. Computer vision library exposure. Understanding of big data technologies (Hadoop, Spark, etc.). Experience with cloud platforms (AWS, GCP or Azure). This is a fully remote role, but may require very occasional travel (once a …
experience in ETL technical design, automated data quality testing, QA, documentation, data warehousing, data modelling, and data wrangling. Proficiency in RDBMS, ETL pipelines, Python, Hadoop, SQL, and a solid grasp of modern code development practices. Ability to manage multiple data and analytic systems with an awareness of decentralised data …
phases of projects through prototyping, architectural design and delivery. You will be working with Azure tools such as Databricks and Data Factory, as well as Hadoop, to create big data environments which, in turn, will help businesses gain greater insight into their big data repositories. RESPONSIBILITIES Working on projects …
best-of-breed Java toolsets - focused on microservices architectures, powerful front- and backend frameworks, RESTful services, and everything from NoSQL databases like MongoDB and Hadoop and high-performance data grids like Hazelcast, to multi-node relational systems. You will be working in a Scrum team with cross-functional skills in …
Data Analytics stack (IS, AS, RS), Power BI, DAX, MDS, Azure Data Lakes. Supporting: Azure ML, .NET/HTML5, Azure infrastructure, R, Python, PowerShell, Hadoop, Data Factory. Principles: Data Modelling, Data Warehouse Theory, Data Architecture, Master Data Management, Data Science. WHY ADATIS? There’s a long list of reasons …