etc.) Proficiency in one or more scripting languages (Shell, Python, Ruby, etc.). Experience with source control management (Git, SVN). Demonstrable Linux systems administration (Ubuntu, CentOS). Experience with web servers (Apache, NGINX). Excellent written and verbal communication. Experience producing technical and process documentation. Qualification: a Bachelor of Science degree in Computer Science, Management Information Systems, or a related field is desirable but … not essential. Nice to have but not essential: · service monitoring and graphing tools (Prometheus + Grafana, Nagios, Datadog); the Elastic stack; repository solutions (JFrog Artifactory, JFrog Bintray); OpenVPN; Apache Tomcat; messaging streams and communication platforms (RabbitMQ, Postfix, Mandrill); databases (MongoDB, PostgreSQL, or MySQL — note MongoDB is a document store rather than a SQL database); microservice architecture; Apache Kafka. Our Values: We work together. We believe in people. We won't …
Engineer with an Azure focus, you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. Skills/Experience: Design and build … high-performance data pipelines: utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Develop and maintain secure data warehouses and data lakehouses: implement data models, data quality checks, and governance practices to ensure reliable and accurate data. Build and deploy AI/ML models: integrate machine learning into … and best practices, with a focus on how AI can support you in your delivery work. Solid experience as a Data Engineer or in a similar role. Proven expertise in Databricks, Apache Spark, and data pipeline development, plus a strong understanding of data warehousing concepts and practices. Experience with the Microsoft Azure cloud platform, including Azure Data Lake Storage, Databricks, and Azure Data …
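The extract-transform-load duty described above has a simple underlying shape. In a real Databricks deployment this would be PySpark DataFrames reading from and writing to Azure Data Lake Storage; the pure-Python sketch below (all names are illustrative, not taken from the listing) just shows the three stages and where a basic data quality check sits.

```python
# Minimal ETL sketch: extract raw records, transform/clean them, load to a target.
# On Databricks this would be spark.read -> DataFrame transforms -> df.write,
# but the three-stage structure is the same.

def extract(raw_rows):
    """Extract: parse raw CSV-like strings into dict records."""
    for row in raw_rows:
        name, amount = row.split(",")
        yield {"name": name.strip(), "amount": amount.strip()}

def transform(records):
    """Transform: cast types, drop invalid rows, normalise a field."""
    for rec in records:
        try:
            amount = float(rec["amount"])
        except ValueError:
            continue  # data quality check: skip unparseable amounts
        yield {"name": rec["name"].lower(), "amount": amount}

def load(records, target):
    """Load: append cleaned records to the target store (a list here)."""
    target.extend(records)
    return target

warehouse = []
load(transform(extract(["Alice, 10.5", "Bob, oops", "Carol, 3"])), warehouse)
# warehouse now holds the two valid, cleaned records
```

The same separation of stages is what makes a pipeline testable: each stage can be exercised on small in-memory data before it runs against the lake.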
Data Engineer/PostgreSQL/SQL/Data Pipelines/Apache Superset/PowerBI/Tableau/Terraform. Data Engineer (Outside IR35 contract role). Determination: Outside IR35. Day rate: up to £575 per day. Location: Hybrid, Zone 1. Duration: 3 months (initial). Job title: Data Engineer. About the role: We're on the lookout for an experienced Data Engineer … for good space. You'll be involved in the full end-to-end process, building data pipelines and dashboards. Requirements: 5+ years' experience with PostgreSQL, SQL & Terraform. Demonstrable experience building data pipelines from scratch. 3+ years' dashboarding/building dashboards (Apache … If you're an experienced data engineer with experience building data pipelines, please apply, or send your CV directly to callum@acquiredtalent.co.uk.
City of London, London, United Kingdom Hybrid / WFH Options
Acquired Talent Ltd
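A pipeline that feeds a dashboard, as this role describes, typically ends in a SQL aggregate that the dashboarding tool (Superset, Power BI, Tableau) renders as a chart. A minimal sketch, with SQLite standing in for PostgreSQL and invented table/column names:

```python
import sqlite3

# Load pipeline output into a table, then run the kind of aggregate
# query a Superset / Power BI chart would be built on.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("north", 10.0), ("north", 5.0), ("south", 7.5)],
)

# Dashboard query: total amount per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM events "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('north', 15.0), ('south', 7.5)]
```

In practice the table would live in PostgreSQL and be populated by the pipeline on a schedule, with the dashboard tool holding only the query and the chart definition.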
an Azure and Databricks focus, you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. Duties: Design and build high-performance … data platforms: utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Design and oversee the delivery of secure data warehouses and data lakehouses: implement data models, data quality checks, and governance practices to ensure reliable and accurate data. Ability to design, build, and deploy AI/ML models … to ensure successful data platform implementations. Your Skills and Experience: Solid experience as a Data Architect, with experience designing, developing, and implementing Databricks solutions. Proven expertise in Databricks, Apache Spark, and data platforms, with a strong understanding of data warehousing concepts and practices. Experience with the Microsoft Azure cloud platform, including Azure Data Lake Storage, Databricks, and Azure Data …
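The "data quality checks and governance practices" duty above usually reduces to rule-based validation applied before data is published, with failing records quarantined for review. A hedged, pure-Python sketch of that pattern (on Databricks, teams would often use Delta Lake constraints or a validation library instead; rule names and fields here are invented):

```python
# Simple rule-based data quality checks: each rule returns True when a
# record passes; failing records are quarantined rather than loaded.

RULES = {
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
    "id_present": lambda r: bool(r.get("id")),
}

def validate(records):
    """Split records into (passed, quarantined-with-reasons)."""
    passed, quarantined = [], []
    for rec in records:
        failures = [name for name, rule in RULES.items() if not rule(rec)]
        if failures:
            quarantined.append((rec, failures))
        else:
            passed.append(rec)
    return passed, quarantined

good, bad = validate([
    {"id": "a1", "amount": 12.0},
    {"id": "", "amount": -3.0},  # fails both rules
])
```

Recording *which* rule failed, not just that a record failed, is what turns a filter into a governance practice: the quarantine output becomes an auditable data-quality report.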
In order to be successful, you will have the following experience: Extensive AI & data development background. Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred). Strong experience with data management and processing pipelines. Algorithm development and knowledge of graphs will be beneficial. SC Clearance is essential. Within this role, you will … be responsible for: Supporting the development and delivery of an AI solution to a Government customer. Design, develop, and maintain data processing pipelines using Apache Spark. Implement ETL/ELT workflows to extract, transform, and load large-scale datasets efficiently. Develop and optimize Python-based applications for data ingestion. Collaborate on the development of machine learning models. Ensure data quality, integrity …
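Since this listing calls out algorithm development and graph knowledge specifically, the kind of baseline it implies is worth making concrete: a breadth-first traversal over an adjacency-list graph in plain Python (the graph itself is invented for illustration).

```python
from collections import deque

def bfs_order(graph, start):
    """Return nodes reachable from `start` in breadth-first order.

    `graph` is an adjacency list: {node: [neighbour, ...]}.
    """
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return order

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_order(g, "a"))  # ['a', 'b', 'c', 'd']
```

The `seen` set is the important detail: it guarantees each node is enqueued once even when multiple paths reach it, which keeps the traversal linear in edges plus nodes.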
optimizing scalable data solutions using the Databricks platform. YOUR PROFILE: Lead the migration of existing AWS-based data pipelines to Databricks. Design and implement scalable data engineering solutions using Apache Spark on Databricks. Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. Optimize the performance and cost-efficiency of Databricks workloads. Develop and maintain … within Databricks. Provide technical mentorship and guidance to junior engineers.
Data Engineering (Python Automation Lead). Work location: Austin, TX. Job description: Seeking a Senior Specialist with 7 to 11 years of experience in Python and data technologies, including Flask, Apache Spark, Scala, and Nginx, to design and implement scalable, data-driven solutions. Develop and maintain high-performance data processing applications using Apache Spark and Scala. Build and deploy … rigorous testing and best practices. Stay updated with the latest trends and advancements in data engineering and Python ecosystems. Lead the design and development of data processing pipelines leveraging Apache Spark and Scala. Architect and implement backend services and APIs using Flask to support data-driven applications. Manage deployment and configuration of Nginx servers to ensure high availability and …
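The Flask-behind-Nginx layering this role describes is, at bottom, a WSGI application served by an app server with Nginx reverse-proxying in front. The stdlib sketch below shows the WSGI contract that Flask implements — in the real stack you would write a Flask view instead; the endpoint and payload here are illustrative only.

```python
import json

def app(environ, start_response):
    """Tiny WSGI application: JSON health endpoint, 404 otherwise.

    Flask wraps exactly this callable contract; in production, Nginx sits
    in front as a reverse proxy, forwarding requests to the WSGI server.
    """
    if environ.get("PATH_INFO") == "/health":
        body = json.dumps({"status": "ok"}).encode()
        status = "200 OK"
    else:
        body = json.dumps({"error": "not found"}).encode()
        status = "404 Not Found"
    start_response(status, [("Content-Type", "application/json"),
                            ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app directly, the way a WSGI server (or a test client) would.
captured = {}
def start_response(status, headers):
    captured["status"] = status

result = b"".join(app({"PATH_INFO": "/health"}, start_response))
```

Calling the app as a plain function is also how framework test clients work, which is why WSGI services are easy to unit-test without starting a server.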
Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making. Key Responsibilities: Develop, optimize, and … maintain data pipelines and ETL processes using Apache Spark and Scala. Design scalable and robust data processing solutions for batch and real-time data. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Perform data ingestion, transformation, and cleansing from various structured and unstructured sources. Monitor and troubleshoot Spark jobs, ensuring high performance and …
london (city of london), south east england, united kingdom
Capgemini
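"Ingestion, transformation, and cleansing from various structured and unstructured sources", as the responsibilities above put it, usually means normalising heterogeneous inputs into one schema before the main Spark transforms run. A small pure-Python sketch of that normalisation step (in the role itself this would be Spark/Scala; the field names are invented):

```python
import json

def normalise(record):
    """Coerce a raw input (JSON string or CSV line) into one schema."""
    if record.lstrip().startswith("{"):          # structured: JSON object
        data = json.loads(record)
        return {"user": data["user"].strip().lower(),
                "score": float(data["score"])}
    user, score = record.split(",")              # semi-structured: CSV line
    return {"user": user.strip().lower(), "score": float(score)}

raw = ['{"user": " Ada ", "score": "9"}', "Grace, 7.5"]
cleaned = [normalise(r) for r in raw]
print(cleaned)  # [{'user': 'ada', 'score': 9.0}, {'user': 'grace', 'score': 7.5}]
```

Pushing all format detection into one normalisation function keeps the downstream pipeline schema-stable: every later stage sees the same record shape regardless of source.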