variety of technical and executive audiences, both written and verbal. Preferred (but not required) to have: Hands-on experience with Python; Experience working with modern data technology (e.g. dbt, Spark, containers, DevOps tooling, orchestration tools, git, etc.); Experience with data science and machine learning technology. People want to buy from people who understand them. Our Solutions Engineers build connections More ❯
hold or be willing to gain a UK Security Clearance. Preferred (but not required) to have: Hands-on experience with Python; Experience working with modern data technology (e.g. dbt, Spark, containers, DevOps tooling, orchestration tools, git, etc.); Experience with AI, data science and machine learning technologies. People want to buy from people who understand them. Our Solution Engineers build More ❯
deploying models in production and adjusting model thresholds to improve performance Experience designing, running, and analyzing complex experiments or leveraging causal inference designs Experience with distributed tools such as Spark, Hadoop, etc. A PhD or MS in a quantitative field (e.g., Statistics, Engineering, Mathematics, Economics, Quantitative Finance, Sciences, Operations Research) Office-assigned Stripes spend at least 50% of the More ❯
current cyber security threats, actors and their techniques. Experience with data science, big data analytics technology stack, analytic development for endpoint and network security, and streaming technologies (e.g., Kafka, Spark Streaming, and Kinesis). Strong sense of ownership combined with collaborative approach to overcoming challenges and influencing organizational change. Amazon is an equal opportunities employer. We believe passionately that More ❯
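As a rough illustration of the streaming stack this listing names (Kafka plus Spark Streaming), here is a minimal PySpark Structured Streaming sketch that counts endpoint events per host over a window; the broker address, topic name, and JSON field are assumptions for illustration, not details from the posting:

```python
# Minimal sketch: consuming endpoint telemetry from Kafka with Spark
# Structured Streaming. Broker, topic, and field names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("security-telemetry").getOrCreate()

# Read the raw event stream; "endpoint-events" is an assumed topic name.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "endpoint-events")
    .load()
)

# Kafka delivers bytes; cast the payload to string and count events per
# host over a 5-minute window as a simple anomaly-spotting aggregate.
counts = (
    events.select(
        F.col("value").cast("string").alias("raw"),
        F.col("timestamp"),
    )
    .withColumn("host", F.get_json_object("raw", "$.host"))
    .groupBy(F.window("timestamp", "5 minutes"), "host")
    .count()
)

# Emit updated counts to the console; a real job would write to a sink.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```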
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
WorksHub
that help us achieve our objectives. So each team leverages the technology that fits their needs best. You'll see us working with data processing/streaming technologies like Kinesis, Spark and Flink; application technologies like PostgreSQL, Redis & DynamoDB; and breaking things using in-house chaos principles and tools such as Gatling to drive load, all deployed and hosted on More ❯
the latest tech, serious brain power, and deep knowledge of just about every industry. We believe a mix of data, analytics, automation, and responsible AI can do almost anything: spark digital metamorphoses, widen the range of what humans can do, and breathe life into smart products and services. Want to join our crew of sharp analytical minds? You'll More ❯
priorities aimed at maximizing value through data utilization. Knowledge/Experience Expertise in Commercial/Procurement Analytics. Experience in SAP (S/4 Hana). Experience with Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL … processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery). Experience with DataOps practices and tools, including CI/CD for data pipelines. More ❯
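The pipeline skills this listing asks for (Spark/Databricks ETL over cloud storage) might look like the following minimal PySpark sketch; the bucket paths, column names, and transform are illustrative assumptions, not the employer's actual pipeline:

```python
# Minimal ETL sketch in PySpark: extract from raw storage, apply a
# simple transform, and load curated output. All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("procurement-etl").getOrCreate()

# Extract: hypothetical S3 path holding raw procurement records.
raw = spark.read.json("s3://example-bucket/procurement/raw/")

# Transform: normalise a column name, derive total spend, drop junk rows.
clean = (
    raw.withColumnRenamed("vendorName", "vendor")
    .withColumn("total_spend", F.col("unit_price") * F.col("quantity"))
    .filter(F.col("total_spend") > 0)
)

# Load: write partitioned Parquet that a warehouse table can sit over.
clean.write.mode("overwrite").partitionBy("vendor").parquet(
    "s3://example-bucket/procurement/curated/"
)
```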
listed below. AI techniques (supervised and unsupervised machine learning, deep learning, graph data analytics, statistical analysis, time series, geospatial analysis, NLP, sentiment analysis, pattern detection, etc.) Python, R, or Spark for data insights Databricks/DataIQ; SQL for data access and processing (PostgreSQL preferred, but general SQL knowledge is important) Latest Data Science platforms (e.g., Databricks, Dataiku, AzureML … SageMaker) and frameworks (e.g., TensorFlow, MXNet, scikit-learn) Software engineering practices (coding standards, unit testing, version control, code review) Hadoop distributions (Cloudera, Hortonworks), NoSQL databases (Neo4j, Elastic), streaming technologies (Spark Streaming) Data manipulation and wrangling techniques Development and deployment technologies (virtualisation, CI tools like Jenkins, configuration management with Ansible, containerisation with Docker, Kubernetes) Data visualization skills (JavaScript preferred) Experience More ❯
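To make the framework list concrete, here is a minimal scikit-learn sketch of the kind of NLP sentiment pipeline the listing gestures at; the toy texts and labels are invented for illustration:

```python
# Minimal sketch of an NLP sentiment pipeline with scikit-learn,
# one of the frameworks the listing names. Data here is toy data.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great service", "terrible delay", "very happy", "awful support"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# A single version-controlled, unit-testable pipeline object:
# TF-IDF vectoriser feeding a logistic regression classifier.
model = Pipeline(
    [
        ("tfidf", TfidfVectorizer()),
        ("clf", LogisticRegression()),
    ]
)
model.fit(texts, labels)
print(model.predict(["happy with the service"]))  # e.g. [1]
```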
researching new technologies and software versions Working with cloud technologies and different operating systems Working closely alongside Data Engineers and DevOps engineers Working with big data technologies such as Spark Demonstrating stakeholder engagement by communicating with the wider team to understand the functional and non-functional requirements of the data and the product in development and its relationship to … networks into production Experience with Docker Experience with NLP and/or computer vision Exposure to cloud technologies (e.g. AWS and Azure) Exposure to big data technologies Exposure to Apache products, e.g. Hive, Spark, Hadoop, NiFi Programming experience in other languages This is not an exhaustive list, and we are keen to hear from you even if you More ❯
education None Preferred education Bachelor's Degree Required technical and professional expertise Design, develop, and maintain Java-based applications for processing and analyzing large datasets, utilizing frameworks such as Apache Hadoop, Spark, and Kafka. Collaborate with cross-functional teams to define, design, and ship data-intensive features and services. Optimize existing data processing pipelines for efficiency, scalability, and … degree in Computer Science, Information Technology, or a related field, or equivalent experience. Experience in Big Data Java development. In-depth knowledge of Big Data frameworks, such as Hadoop, Spark, and Kafka, with a strong emphasis on Java development. Proficiency in data modeling, ETL processes, and data warehousing concepts. Experience with data processing languages like Scala, Python, or SQL. More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
options Hybrid working - 1 day a week in a central London office High-growth scale-up with a strong mission and serious funding Modern tech stack: Python, SQL, Snowflake, Apache Iceberg, AWS, Airflow, dbt, Spark Work cross-functionally with engineering, product, analytics, and data science leaders What You'll Be Doing Lead, mentor, and grow a high-impact More ❯
systems, with a focus on data quality and reliability. Design and manage data storage solutions, including databases, warehouses, and lakes. Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads. Operations & Tooling Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency. Implement data governance, access controls, and … ELT pipelines and data architectures. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch. Strong understanding of data governance, security, and best practices for data quality. Effective communicator with the ability to work across technical and non-technical teams. … Additional Strengths Experience with orchestration tools like Apache Airflow. Knowledge of real-time data processing and event-driven architectures. Familiarity with observability tools and anomaly detection for production systems. Exposure to data visualization platforms such as Tableau or Looker. Relevant cloud or data engineering certifications. What we offer: A collaborative and transparent company culture founded on Integrity, Innovation and More ❯
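Where this listing mentions AWS Batch for large-scale data workloads, a job submission might look like the following boto3 sketch; the region, queue, job definition, and command are placeholder assumptions, not real resources:

```python
# Sketch: submitting a containerised data job to AWS Batch with boto3.
# All resource names below are hypothetical placeholders.
import boto3

batch = boto3.client("batch", region_name="eu-west-1")

response = batch.submit_job(
    jobName="nightly-compaction",         # hypothetical job name
    jobQueue="data-processing-queue",     # assumed existing job queue
    jobDefinition="spark-batch-job:1",    # assumed job definition:revision
    containerOverrides={
        # Override the container command to run a specific pipeline date.
        "command": ["python", "run_pipeline.py", "--date", "2024-01-01"],
    },
)
print(response["jobId"])  # track the submitted job by its returned ID
```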
We strive to build an inclusive environment reflecting the patients and communities we serve. Join our Novartis Network: Not the right role? Sign up to stay connected: Skills Desired: Apache Spark, AI, Big Data, Data Governance, Data Literacy, Data Management, Data Quality, Data Science, Data Strategy, Data Visualization, Machine Learning, Python, R, Statistical Analysis More ❯
Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that More ❯
in prompt engineering, RAG, guardrail design, orchestration, and tools like LangGraph or Semantic Kernel. Deep knowledge of ML model development, deployment, and evaluation. Proficiency in Python, PyTorch, TensorFlow, SQL, Spark, and AWS tools like SageMaker and Bedrock. Understanding of scalable data infrastructure and cloud architecture. Location & Relocation: This role is based in Sydney, Australia. We offer full relocation More ❯
experience as a Data Engineer (3-5 years); Deep expertise in designing and implementing solutions on Google Cloud; Strong interpersonal and stakeholder management skills; In-depth knowledge of Hadoop, Spark, and similar frameworks; In-depth knowledge of programming languages including Java; Expert in cloud-native technologies, IaC, and Docker tools; Excellent project management skills; Excellent communication skills; Proactivity; Business More ❯
deep learning methods and machine learning PREFERRED QUALIFICATIONS - Experience with popular deep learning frameworks such as MXNet and TensorFlow - Experience with large-scale distributed systems such as Hadoop, Spark, etc. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your experience and More ❯
have Experience with Identity vendors Experience in online survey methodologies Experience in Identity graph methodologies Ability to write and optimize SQL queries Experience working with big data technologies (e.g. Spark) Additional Information Our Values Collaboration is our superpower We uncover rich perspectives across the world Success happens together We deliver across borders. Innovation is in our blood We’re More ❯
Our mission is to improve society's experience with software. Come join one of the fastest-growing startups, supported by best-in-class institutions like Battery Ventures, Salesforce Ventures, Spark Capital and Meritech. You will gain experience in a diverse and exciting set of technologies and clients and have a real impact on Pendo's future. Our culture is More ❯
audiences alike. It would be great if you have: Built search-related products, e.g. chatbots Exposure to building data products that use generative AI and LLMs Previous experience using Spark (either via Scala or PySpark) Experience with statistical methods like regression, GLMs or experiment design and analysis; shipping productionized machine learning systems or other advanced techniques are also welcome More ❯
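Combining this listing's two asks (PySpark experience and GLM-style statistical methods), a minimal Spark ML sketch could look like the following; the toy data and column names are assumptions for illustration:

```python
# Minimal sketch: fitting a Gaussian GLM with Spark ML (PySpark).
# The tiny dataset and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import GeneralizedLinearRegression

spark = SparkSession.builder.appName("glm-sketch").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 2.0, 10.0), (2.0, 1.0, 12.0), (3.0, 4.0, 20.0), (4.0, 3.0, 22.0)],
    ["x1", "x2", "y"],
)

# Assemble raw columns into the single features vector Spark ML expects.
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
train = assembler.transform(df)

# Gaussian family with identity link is ordinary linear regression.
glm = GeneralizedLinearRegression(family="gaussian", link="identity", labelCol="y")
model = glm.fit(train)
print(model.coefficients, model.intercept)
```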
influencing C-suite executives and driving organizational change • Bachelor's degree, or 7+ years of professional or military experience • Experience in technical design, architecture and databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) • Experience implementing serverless distributed solutions • Software development experience with object-oriented languages and deep expertise in AI/ML PREFERRED QUALIFICATIONS • Proven ability to shape market segments More ❯
South West London, London, United Kingdom Hybrid / WFH Options
JAM Recruitment Ltd
and applying best practices in security and compliance, this role offers both technical depth and impact. Key Responsibilities Design & Optimise Pipelines - Build and refine ETL/ELT workflows using Apache Airflow for orchestration. Data Ingestion - Create reliable ingestion processes from APIs and internal systems, leveraging tools such as Kafka, Spark, or AWS-native services. Cloud Data Platforms - Develop … DAGs and configurations. Security & Compliance - Apply encryption, access control (IAM), and GDPR-aligned data practices. Technical Skills & Experience Proficient in Python and SQL for data processing. Solid experience with Apache Airflow - writing and configuring DAGs. Strong AWS skills (S3, Redshift, etc.). Big data experience with Apache Spark. Knowledge of data modelling, schema design, and partitioning. Understanding of More ❯
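For context on the Airflow DAG work this listing centres on, here is a minimal ETL DAG sketch (Airflow 2.x style, assuming Airflow 2.4+ for the schedule argument); the task bodies and names are placeholders, not the employer's actual pipeline:

```python
# Minimal Airflow DAG sketch for an extract -> transform -> load flow.
# Requires Airflow 2.4+ for the "schedule" argument; task logic is stubbed.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull records from source API")  # placeholder extract step


def transform():
    print("clean and reshape records")     # placeholder transform step


def load():
    print("write records to Redshift")     # placeholder load step


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Orchestration: extract runs before transform, which runs before load.
    t_extract >> t_transform >> t_load
```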