systems, with a focus on data quality and reliability. Design and manage data storage solutions, including databases, warehouses, and lakes. Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads. Operations & Tooling Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency. Implement data governance, access controls, and … ELT pipelines and data architectures. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch. Strong understanding of data governance, security, and best practices for data quality. Effective communicator with the ability to work across technical and non-technical teams. … Additional Strengths Experience with orchestration tools like Apache Airflow. Knowledge of real-time data processing and event-driven architectures. Familiarity with observability tools and anomaly detection for production systems. Exposure to data visualization platforms such as Tableau or Looker. Relevant cloud or data engineering certifications. What we offer: A collaborative and transparent company culture founded on Integrity, Innovation and More ❯
these roles include: Multiple Databricks projects delivered Excellent consulting and client-facing experience 7-10+ years' experience of Consulting in Data Engineering, Data Platform and Analytics Deep experience with Apache Spark, PySpark CI/CD for Production deployments Working knowledge of MLOps Strong experience with Optimisations for performance and scalability These roles will be paid at circa More ❯
We strive to build an inclusive environment reflecting the patients and communities we serve. Join our Novartis Network: Not the right role? Sign up to stay connected: Skills Desired: Apache Spark, AI, Big Data, Data Governance, Data Literacy, Data Management, Data Quality, Data Science, Data Strategy, Data Visualization, Machine Learning, Python, R, Statistical Analysis More ❯
Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that More ❯
in prompt engineering, RAG, guardrail design, orchestration, and tools like LangGraph or Semantic Kernel. Deep knowledge of ML model development, deployment, and evaluation. Proficiency in Python, PyTorch, TensorFlow, SQL, Spark, and AWS tools like SageMaker and Bedrock. Understanding of scalable data infrastructure and cloud architecture. Location & Relocation: This role is based in Sydney, Australia. We offer full relocation More ❯
experience as a Data Engineer (3-5 years); Deep expertise in designing and implementing solutions on Google Cloud; Strong interpersonal and stakeholder management skills; In-depth knowledge of Hadoop, Spark, and similar frameworks; In-depth knowledge of programming languages including Java; Expert in cloud-native technologies, IaC, and Docker tools; Excellent project management skills; Excellent communication skills; Proactivity; Business More ❯
deep learning methods and machine learning PREFERRED QUALIFICATIONS - Experience with popular deep learning frameworks such as MXNet and TensorFlow - Experience with large-scale distributed systems such as Hadoop, Spark etc. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your experience and More ❯
have Experience with Identity vendors Experience in online survey methodologies Experience in Identity graph methodologies Ability to write and optimize SQL queries Experience working with big data technologies (e.g. Spark) Additional Information Our Values Collaboration is our superpower We uncover rich perspectives across the world Success happens together We deliver across borders. Innovation is in our blood We’re More ❯
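The SQL-optimisation skill called for above can be illustrated with a minimal, self-contained sketch (table and column names are hypothetical): adding an index changes SQLite's query plan for a filtered query from a full-table scan to an index search.

```python
import sqlite3

# In-memory database with a hypothetical events table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i % 100, "click") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"

# Without an index, filtering on user_id scans the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# Adding an index lets SQLite search the index instead of scanning.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[0][3])  # a SCAN step
print(plan_after[0][3])   # a SEARCH ... USING ... INDEX step
```

The same habit — reading the plan before and after a schema or query change — carries over to the warehouse engines these roles mention.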
Our mission is to improve society's experience with software. Come join one of the fastest-growing startups, supported by best-in-class institutions like Battery Ventures, Salesforce Ventures, Spark Capital and Meritech. You will gain experience in a diverse and exciting set of technologies and clients and have a real impact on Pendo's future. Our culture is More ❯
audiences alike It would be great if you have: Built search related products e.g. chatbots Exposure to building data products that use generative AI and LLMs Previous experience using Spark (either via Scala or PySpark) Experience with statistical methods like regression, GLMs or experiment design and analysis, shipping productionized machine learning systems or other advanced techniques are also welcome More ❯
influencing C-suite executives and driving organizational change • Bachelor's degree, or 7+ years of professional or military experience • Experience in technical design, architecture and databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) • Experience implementing serverless distributed solutions • Software development experience with object-oriented languages and deep expertise in AI/ML PREFERRED QUALIFICATIONS • Proven ability to shape market segments More ❯
the following areas: Software Design or Development, Content Distribution/CDN, Scripting/Automation, Database Architecture, Cloud Architecture, Cloud Migrations, IP Networking, IT Security, Big Data/Hadoop/Spark, Operations Management, Service Oriented Architecture etc. - Experience in a 24x7 operational services or support environment. - Experience with AWS Cloud services and/or other Cloud offerings. Our inclusive culture More ❯
and applying best practices in security and compliance, this role offers both technical depth and impact. Key Responsibilities Design & Optimise Pipelines - Build and refine ETL/ELT workflows using Apache Airflow for orchestration. Data Ingestion - Create reliable ingestion processes from APIs and internal systems, leveraging tools such as Kafka, Spark, or AWS-native services. Cloud Data Platforms - Develop … DAGs and configurations. Security & Compliance - Apply encryption, access control (IAM), and GDPR-aligned data practices. Technical Skills & Experience Proficient in Python and SQL for data processing. Solid experience with Apache Airflow - writing and configuring DAGs. Strong AWS skills (S3, Redshift, etc.). Big data experience with Apache Spark. Knowledge of data modelling, schema design, and partitioning. Understanding of More ❯
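Orchestration with Airflow, as described above, ultimately reduces to executing tasks in an order that respects the DAG's dependencies. A stdlib-only sketch of that ordering idea (task names are hypothetical; a real pipeline would declare an `airflow.DAG` with operators):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: two ingestion tasks feed a transform, which feeds a load.
# Each key maps a task to the set of tasks it depends on.
dag = {
    "transform": {"extract_api", "extract_db"},
    "load": {"transform"},
}

# static_order() yields tasks in an order that satisfies every dependency --
# the per-run guarantee an orchestrator like Airflow provides (plus retries,
# scheduling, and backfills on top).
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Independent tasks (the two extracts here) may come out in either order, which is exactly what lets an orchestrator run them in parallel.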
Oversee pipeline performance, address issues promptly, and maintain comprehensive data documentation. What You'll Bring Technical Expertise: Proficiency in Python and SQL; experience with data processing frameworks such as Airflow, Spark, or TensorFlow. Data Engineering Fundamentals: Strong understanding of data architecture, data modelling, and scalable data solutions. Backend Development: Willingness to develop proficiency in backend technologies (e.g., Python with Django … to support data pipeline integrations. Cloud Platforms: Familiarity with AWS or Azure, including services like Apache Airflow, Terraform, or SageMaker. Data Quality Management: Experience with data versioning and quality assurance practices. Automation and CI/CD: Knowledge of build and deployment automation processes. Experience within MLOps A 1st class Data degree from one of the UK's top 15 Universities More ❯
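The data-versioning practice mentioned above is commonly implemented by content-addressing: hashing a canonical serialisation of the dataset so that any change in the data yields a new version id. A minimal sketch, assuming records are JSON-serialisable dicts (function and field names are hypothetical):

```python
import hashlib
import json

def dataset_version(records: list) -> str:
    """Derive a deterministic version id from dataset contents.

    Records are serialised with sorted keys so the hash depends only on
    the data itself, not on dict insertion order.
    """
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:12]

v1 = dataset_version([{"id": 1, "score": 0.9}])
v2 = dataset_version([{"score": 0.9, "id": 1}])   # same data, different key order
v3 = dataset_version([{"id": 1, "score": 0.95}])  # changed value

print(v1 == v2)  # True: version is content-addressed
print(v1 == v3)  # False: any data change produces a new version
```

Tools like DVC and Delta Lake build richer versioning on the same fingerprinting principle.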
discipline At least AAB at A-Level or equivalent UCAS points (please ensure A-Level grades are included on your CV). Outstanding customer-facing skills with a sales spark A motivated self-starter with a problem-solving attitude Strong aptitude for picking up technologies Ability to work with autonomy and as part of a team Great communication skills More ❯
to be a part of it! Our Future, Our Responsibility - Inclusion and Diversity at Future We embrace and celebrate diversity, making it part of who we are. Different perspectives spark ideas, fuel creativity, and push us to innovate. That's why we're building a workplace where everyone feels valued, respected, and empowered to thrive. When it comes to More ❯
Bromsgrove, Worcestershire, United Kingdom Hybrid / WFH Options
Talk Recruitment
Stress-testing, performance-tuning, and optimization skills. Debugging in multi-threaded environments. Eligible to work in the UK. Desirable Skills: Technologies such as Zookeeper, Terraform, Ansible, Cassandra, RabbitMQ, Kafka, Spark, Redis, MongoDB, Cosmos DB, Xsolla Backend (AcceleratXR), Pragma, Playfab, Epic Online Services, Unity Game Services, Firebase, Edgegap, Photon. Game engine experience with Unreal or Unity. Web application development experience (NodeJS More ❯
and a strong background in using data to influence decisions and behaviours Experience with (in rough priority order): SQL (experience in writing performant queries) Python & DS libraries (sklearn, pandas, spark, etc) Data transformation Data visualisation & storytelling Any of the following would be a bonus DBT Experience working with ambiguity in a scale-up (or scale-up-like) environment Passion More ❯
on experience across AWS Glue, Lambda, Step Functions, RDS, Redshift, and Boto3. Proficient in one of Python, Scala or Java, with strong experience in Big Data technologies such as Spark, Hadoop, etc. Practical knowledge of building real-time event streaming pipelines (e.g., Kafka, Spark Streaming, Kinesis). Proven experience developing modern data architectures including Data Lakehouse and Data … and data governance including GDPR. Bonus Points For Expertise in Data Modelling, schema design, and handling both structured and semi-structured data. Familiarity with distributed systems such as Hadoop, Spark, HDFS, Hive, Databricks. Exposure to AWS Lake Formation and automation of ingestion and transformation layers. Background in delivering solutions for highly regulated industries. Passion for mentoring and enabling data More ❯
business transformation. You'll also contribute to best practice implementation and continuous improvement within cross-functional engineering teams. What You'll Do Design and develop robust pipelines using Delta Lake, Spark Structured Streaming, and Unity Catalog Build real-time event-driven solutions with tools such as Kafka and Azure Event Hubs Apply DevOps principles to develop CI/CD pipelines … and GDPR-compliant solutions Working knowledge of DevOps tools and CI/CD processes Bonus Points For Development experience in Scala or Java Familiarity with Cloudera, Hadoop, HIVE, and Spark ecosystem Understanding of data privacy regulations, including GDPR, and experience working with sensitive data Ability to learn and adapt new technologies quickly to meet business needs Collaborative mindset with More ❯
Role: Data Engineer Role type: Permanent Location: UK or Greece Preferred start date: ASAP LIFE AT SATALIA As an organization, we push the boundaries of data science, optimization, and artificial intelligence to solve the hardest problems in industry. Satalia is More ❯