economic and data science models into production. We are looking for an individual who is interested in working with the latest big data technology (Spark, EMR, Glue, SageMaker, and Airflow) and in collaborating with Economists and Scientists to create scalable solutions for our multiple Retail Businesses. Key job responsibilities - Partnering … as Python, Java, Scala, or NodeJS - Experience mentoring team members on best practices PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation More ❯
ECOM are pleased to be exclusively recruiting for a Senior Data Engineer here in Manchester. You'll join a team where your work reaches millions. This role is within a forward-thinking, industry-leading company, offering a dynamic environment where you More ❯
data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake and MLflow. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of More ❯
problem. Right now, we use: A variety of languages, including Java and Go for the backend and TypeScript for the frontend Open-source technologies like Cassandra, Spark, Elasticsearch, React, and Redux Industry-standard build tooling, including Gradle, Webpack, and GitHub What We Value Ability to communicate and collaborate with a variety More ❯
Computer Science, Engineering, Mathematics, or a related field - Data Warehousing experience with Redshift, Teradata - Experience with workflow management platforms for data engineering pipelines (e.g. Apache Airflow) - Experience with Big Data Technologies (Spark, Hadoop, Hive, Pig, etc.) - Experience building/operating highly available, distributed systems of data extraction, ingestion More ❯
quality data solutions. Automation: Implement automation processes and best practices to streamline data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera - Spark, Hive, Impala, HDFS, Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's More ❯
efficient data models for real-time analytics. Proven experience in managing real-time data pipelines across multiple initiatives. Expertise in distributed streaming platforms (Kafka, Spark Streaming, Flink). Experience with GCP (preferred), AWS, or Azure for real-time data ingestion and storage. Strong programming skills in Python, Java, or Scala. Proficiency in SQL, NoSQL, and time-series databases. Knowledge of orchestration tools (Apache Airflow, Kubernetes). If you are a passionate and experienced Senior Data Engineer seeking a Lead role, or a Lead Data Engineer aiming to make the kind of impact you made in a previous position, we More ❯
in Scala, Python and/or Java. Strong experience with SQL, including querying, optimizing, and managing databases. Experience with data processing platforms such as Spark, Hadoop. Demonstrated experience with GCP services such as DataProc, BigQuery, GCS, IAM, and others, and/or their AWS equivalents. Work well as an … implement elegant solutions for them. Are a data enthusiast who wants to be surrounded by brilliant teammates and huge challenges. Bonus Points: Experience with Apache Airflow, including designing, managing, and troubleshooting DAGs and data pipelines. Experience with CI/CD pipelines and tools like Jenkins, including automating the process More ❯
strongly preferred; other languages include Java, Scala, TypeScript, C++, C#). Experience using big data technologies in cloud environments to build data pipelines (e.g. Spark, EMR, Lambda, etc.). Excellent communication, organization, and prioritization skills, with a strong ability to deliver results within tight timelines. Passionate about working with … to ensure secure and efficient data operations that support business growth and strategic objectives. Writing code - lots of it! We use Python, Java, TypeScript, Spark, and SQL, welcoming engineers from diverse programming backgrounds who are passionate about building robust data solutions. Design, architect, and implement scalable, maintainable data pipelines More ❯
to ensure secure and efficient data operations that support business growth and strategic objectives. Writing code - lots of it! We use Python, Java, TypeScript, Spark, and SQL, welcoming engineers from diverse programming backgrounds who are passionate about building robust data solutions. Design, architect, and implement scalable, maintainable data pipelines … preferred with other languages including Java, Scala, TypeScript, C++, C#). Experience using big data technologies in cloud environments to build data pipelines (e.g. Spark, EMR, Lambda etc.). Excellent communication, organisation and prioritisation skills, and have a strong ability to deliver results within tight timelines. Passionate to work More ❯
have experience architecting data pipelines and are self-sufficient in getting the data you need to build and evaluate models, using tools like Dataflow, Apache Beam, or Spark. You care about agile software processes, data-driven development, reliability, and disciplined experimentation. You have experience and passion for fostering collaborative … Platform is a plus Experience with building data pipelines and getting the data you need to build and evaluate your models, using tools like Apache Beam/Spark is a plus Where You'll Be This role is based in London (UK). We offer you the flexibility More ❯
Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools: Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. Experience with AWS cloud services: EC2, EMR, RDS, Redshift. Experience with stream-processing systems: Storm, Spark Streaming, etc. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc. Salary: 30,000 per annum + benefits Apply For This Job If you would like to apply More ❯
With us, you'll do meaningful work from Day 1. Our collaborative culture is built on three core behaviors: We Play to Win, We Get Better Every Day & We Succeed Together. And we mean it - we want you to grow More ❯
As a Senior BI Developer, you will be at the forefront of creating Analytical Solutions and insights into a wide range of business processes throughout the organisation and playing a core role in our strategic initiatives to enhance data-driven More ❯
About Us The Company: Dotdigital is a thriving global community of passionate, dedicated professionals, committed to the collective success of the organization and its clients. Our core principles of innovation, teamwork, and client-focused solutions drive us to approach challenges More ❯
areas of Data Mining, Classical Machine Learning, Deep Learning, NLP and Computer Vision. Experience with Large Scale/Big Data technology, such as Hadoop, Spark, Hive, Impala, PrestoDB. Hands-on capability developing ML models using open-source frameworks in Python and R and applying them on real client use cases. Proficient in one of the deep learning stacks such as PyTorch or TensorFlow. Working knowledge of parallelisation and async paradigms in Python, Spark, Dask, and Ray. An awareness and interest in economic, financial and general business concepts and terminology. Excellent written and verbal command of English. Strong More ❯
Join our dynamic team at Baseten, where we're revolutionizing AI deployment with cutting-edge inference infrastructure. Backed by premier investors such as IVP, Spark Capital, Greylock, and Conviction, we're trusted by leading enterprises and AI-driven innovators, including Descript, Bland.ai, Patreon, Writer, and Robust Intelligence, to deliver More ❯
Central London, London, United Kingdom Hybrid / WFH Options
167 Solutions Ltd
develop scalable solutions that enhance data accessibility and efficiency across the organisation. Key Responsibilities Design, build, and maintain data pipelines using SQL, Python, and Spark. Develop and manage data warehouse and lakehouse solutions for analytics, reporting, and machine learning. Implement ETL/ELT processes using tools such as Apache Airflow, AWS Glue, and Amazon Athena. Work with cloud-native technologies to support scalable, serverless architectures. Collaborate with data science teams to streamline feature engineering and model deployment. Ensure data governance, lineage, and compliance best practices. Mentor and support team members in data engineering best practices. Skills & Experience Required 6+ years of experience in data engineering within large-scale digital environments. Strong programming skills in Python, SQL, and Spark (Spark SQL). Expertise in Snowflake and modern data architectures. Experience designing and managing data pipelines, ETL, and ELT workflows. Knowledge of AWS services such as More ❯
of this team, you will be working on a plethora of services such as Glue (ETL service), Athena (interactive query service), Amazon Managed Workflows for Apache Airflow (MWAA), etc. Understanding of ETL (Extract, Transform, Load). Creation of ETL pipelines to extract and ingest data into a data lake/warehouse with simple … and managing large data sets from multiple sources. Ability to read and understand Python and Scala code. Understanding of distributed computing environments. Proficient in Spark, Hive, and Presto. Experience working with Docker, Python, and shell scripting. Customer service experience/strong customer focus. Prior working experience with AWS - any … environments and excellent Linux/Unix system administrator skills. PREFERRED QUALIFICATIONS - Proficient in Hadoop MapReduce and its ecosystem (ZooKeeper, HBase, HDFS, Pig, Hive, Spark, etc.). - Good understanding of ETL principles and how to apply them within Hadoop. - Prior working experience with AWS - any or all of EC2 More ❯
that will impact millions of users, then this is the place for you! THE MAIN RESPONSIBILITIES FOR THIS POSITION INCLUDE: Support Java-based applications & Spark/Flink jobs on bare metal, AWS & Kubernetes. Understand the application requirements (Performance, Security, Scalability etc.) and assess the right services/topology on AWS … and understanding of SRE principles & goals along with prior on-call experience. Deep understanding and experience in one or more of the following - Hadoop, Spark, Flink, Kubernetes, AWS. The ability to design, author, and release code in any language (Go, Python, Ruby, or Java). Preferred Qualifications Fast learner More ❯
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
NLP PEOPLE
Job Specification: Machine Learning Engineer (NLP) (Pytorch) Location: Bristol, UK (Hybrid - 2 days per week in the office) About the Role I'm looking for an NLP Engineer to join a forward-thinking company that specialises in advanced risk analytics More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Cathcart Technology
Prior Senior Data Scientist with Machine Learning experience ** Strong understanding and experience with ML models and ML observability tools ** Strong Python and SQL experience ** Spark/Apache Airflow ** ML framework experience (PyTorch/TensorFlow/Scikit-Learn) ** Experience with cloud platforms (preferably AWS) ** Experience with containerisation technologies More ❯