data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of …
Microservice-based architecture - Experience with Oracle databases and SQL - Preferable to have - exposure to Kafka/Hadoop/React JS/Elasticsearch/Spark - Good to have - exposure to Fabric/Kubernetes/Docker/Helm …
opportunities and enhancing the platform’s capabilities SKILLS AND EXPERIENCE The successful Senior Data Engineer will have the following skills and experience: Python SQL Spark Snowflake AWS Ideally you will also have: DBT Jenkins Git Tableau BENEFITS The successful Senior Data Engineer will receive the following benefits: Salary between …
London, South East England, United Kingdom (Hybrid / WFH Options)
Harnham
generative AI applications in media (e.g., content summarisation, auto-tagging, script generation). Expertise in Python, with knowledge of data engineering tools (e.g., SQL, Spark, Airflow) a plus. Understanding of media workflows, digital publishing, recommendation systems, and audience analytics. Strong stakeholder management and communication skills to translate technical insights …
programming, ideally Python, and the ability to quickly pick up handling large data volumes with modern data processing tools, e.g. by using Hadoop/Spark/SQL. Experience with or ability to quickly learn open-source software including machine learning packages, such as Pandas and scikit-learn, along …
Experience in building machine learning models for business applications Experience in applied research PREFERRED QUALIFICATIONS Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience in implementing Computer Vision algorithms Amazon is an equal opportunities employer. We believe passionately that employing a …
programming, ideally Python, and the ability to quickly pick up handling large data volumes with modern data processing tools, e.g. by using Hadoop/Spark/SQL. Experience with or ability to quickly learn open-source software including machine learning packages, such as Pandas, scikit-learn, along with data …
a track record of thought leadership and contributions that have advanced the field. - 2+ years' experience with large-scale distributed systems such as Hadoop, Spark, etc. - Excellent written and spoken communication skills Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does …
from start to finish. Key job responsibilities Build and operate our foundational data infrastructure using AWS services such as Redshift, S3, Step Functions, Glue, Spark, Flink, Kinesis, and large-scale event stores. Develop and scale our ingestion pipelines for speed, reliability, and multi-tenancy, supporting various data sources and …
and conduct unit testing. Required qualifications to be successful in this role: Proficient in Java, preferably Kotlin, with experience on Java 11, Gradle and Apache Spark. Experience in GCP, preferably BigQuery and Cloud Composer. Experience with CI/CD, preferably GitHub and GitHub Actions. Experience with Agile is …
A-Level or equivalent UCAS points (please ensure A-Level grades are included on your CV). Outstanding customer-facing skills with a sales spark. A motivated self-starter with a problem-solving attitude. Strong aptitude for picking up technologies. Ability to work with autonomy and as part of …
Our systems are self-healing, responding gracefully to extreme loads or unexpected input. We use modern languages like Kotlin and Scala; data technologies such as Kafka, Spark, MLflow, Kubeflow, VastStorage, and StarRocks; and agile development practices. Most importantly, we hire great people from around the world and empower them to be successful.
the following areas: software design or development, content distribution/CDN, scripting/automation, database architecture, IP networking, IT security, Big Data/Hadoop/Spark, operations management, service-oriented architecture - Experience with AWS services or other cloud offerings - Experience with operational parameters and troubleshooting for three (3) of the …
solutions such as OpenStack, MicroCloud and Ceph, and solutions that could be deployed either on-premises or in public clouds such as Kubernetes, Kubeflow, Spark, PostgreSQL, etc. The team works hands-on with the technologies by deploying, testing and handing over the solution to our support or managed services …
from concept to completion. You have extensive, up-to-date knowledge and hands-on experience with machine learning and data engineering technologies (e.g., Kubernetes, Spark, Databricks, GPU). You have a proactive, problem-solving mindset with the ability to simplify complexity and translate high-level business requirements into specific projects …
building and managing real-time data pipelines across a track record of multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Programming skills … or a similar language. Proficiency in database technologies (SQL, NoSQL, time-series databases) and data modelling. Strong understanding of data pipeline orchestration tools (e.g., Apache Airflow, Kubernetes). You thrive when working as part of a team. Comfortable in a fast-paced environment. Have excellent written and verbal English …
machine learning techniques, deep learning, graph data analytics, statistical analysis, time series, geospatial, NLP, sentiment analysis, pattern detection). Proficiency in Python, R, or Spark to extract insights from data. Experience with Databricks/Data QI and SQL for accessing and processing data (PostgreSQL preferred but general SQL … version control, code review). Experience with Hadoop (especially the Cloudera and Hortonworks distributions), other NoSQL (especially Neo4j and Elastic), and streaming technologies (especially Spark Streaming). Deep understanding of data manipulation/wrangling techniques. Experience using development and deployment technologies (e.g. Vagrant, VirtualBox, Jenkins, Ansible, Docker, Kubernetes).