microservice architecture, API development. Machine Learning (ML): • Deep understanding of machine learning principles, algorithms, and techniques. • Experience with popular ML frameworks and libraries like TensorFlow, PyTorch, scikit-learn, or Apache Spark. • Proficiency in data preprocessing, feature engineering, and model evaluation. • Knowledge of ML model deployment and serving strategies, including containerization and microservices. • Familiarity with ML lifecycle management, including versioning …
networks into production; Experience with Docker; Experience with NLP and/or computer vision; Exposure to cloud technologies (e.g. AWS and Azure); Exposure to big data technologies; Exposure to Apache products, e.g. Hive, Spark, Hadoop, NiFi; Programming experience in other languages. This is not an exhaustive list, and we are keen to hear from you even if you don't …
primarily GCP. Experience with some or all of the services below would put you at the top of our list: Google Cloud Storage, Google Data Transfer Service, Google Dataflow (Apache Beam), Google Pub/Sub, Google Cloud Run, BigQuery or any RDBMS, Python, Debezium/Kafka, dbt (Data Build Tool). Interview process: Interviewing is a two-way process and we want you …
ability to learn others as needed: Distributed or large-scale systems; MySQL/SQL database design, query optimization, and administration; Web development using HTML, CSS, JavaScript, Vue/React; Apache web server and related modules; Cloud platforms such as AWS, Google Cloud, Azure; CI/CD pipeline setup, testing, and administration; Networking and firewall configuration; Natural language processing. Responsibilities …
of automation IT WOULD BE NICE FOR THE SENIOR SOFTWARE ENGINEER TO HAVE: Cloud-based experience; Microservice architecture or serverless architecture; Big Data/Messaging technologies such as Apache NiFi/MiNiFi/Kafka TO BE CONSIDERED: Please either apply by clicking online or emailing me directly to For further information please call me on 07704 152 640. …
San Diego, California, United States Hybrid / WFH Options
Gridiron IT Solutions
or Iterative) Scripting and other languages (e.g., sh, csh, bash, ksh, make, imake, XML, HTML, CSS, and/or Perl); Development tools and services (e.g., Eclipse, Spring Framework, JBoss, Apache, Tomcat, Maven, Ant, and/or automated test tools); Familiarity with server-side Java/JEE development; User Interface development tools for the JEE stack; Java Frameworks such as …
and static site generation (SSG) in Next.js. Experience with testing frameworks like Jest, Cypress, or React Testing Library. Experience with authentication strategies using OAuth, JWT, or Cognito. Familiarity with Apache Spark/Flink for real-time data processing is an advantage. Hands-on experience with CI/CD tools. Commercial awareness and knowledge of the public sector. Excellent communicator, able …
for real-world applications such as fraud detection, network analysis, and knowledge graphs. - Optimize performance of graph queries and design for scalability. - Support ingestion of large-scale datasets using Apache Beam, Spark, or Kafka into GCP environments. - Implement metadata management, security, and data governance using Data Catalog and IAM. - Work across functional teams and clients in diverse EMEA time … for large datasets. Expertise in BigQuery, including advanced SQL, partitioning, clustering, and performance tuning. Hands-on experience with at least one of the following GCP data processing services: Dataflow (Apache Beam), Dataproc (Apache Spark/Hadoop), or Composer (Apache Airflow). Proficiency in at least one scripting/programming language (e.g., Python, Java, Scala) for data manipulation …
system performance and functionality. Requirements: - Active Top Secret/SCI Eligibility Clearance. - Minimum of 8 years of experience in data engineering or related work. - Proficiency in Java, AWS, Python, Apache Spark, Linux, Git, Maven, and Docker. - Experience maintaining an Apache Hadoop ecosystem using tools like HBase, MapReduce, and Spark. - Knowledge of ETL processes utilizing Linux shell scripting, Perl … Python, and Apache Airflow. - Experience with AWS services such as CloudWatch, CloudTrail, ELB, EMR, KMS, SQS, SNS, and Systems Manager. - Experience in supporting, maintaining, and migrating JavaFX applications to modern cloud-native solutions. - Strong decision-making skills and domain knowledge. - Bachelor's Degree in a related field OR an additional 4 years of relevant experience in lieu of a …
utilizing the Django web framework for the backend and React for developing the client-facing portion of the application. Create extract, transform, and load (ETL) pipelines using Hadoop and Apache Airflow for various production big data sources to fulfill intelligence data availability requirements. Automate retrieval of data from various sources via API and direct database queries for intelligence analysts … iterations. Support capabilities briefings for military personnel. Required Qualifications: Bachelor's degree in a related field preferred; Active TS/SCI required. Preferred Qualifications: Windows 7/10, MS Project; Apache Airflow; Python, Java, JavaScript, React, Flask, HTML, CSS, SQL, R, Docker, Kubernetes, HDFS, Postgres, Linux; AutoCAD; JIRA, GitLab, Confluence. Also looking for a Senior Developer at a higher compensation …
Demonstrated experience with Data Quality and Data Governance concepts. Demonstrated experience maintaining, supporting, and improving the ETL process through the implementation and standardization of data flows with Apache NiFi and other ETL tools. Demonstrated experience with Apache Spark …
with Data Quality and Data Governance concepts. 11. (Desired) Demonstrated experience maintaining, supporting, and improving the ETL process through the implementation and standardization of data flows with Apache NiFi and other ETL tools. 12. (Desired) Demonstrated experience with Apache Spark …
technologies – Azure, AWS, GCP, Snowflake, Databricks. Must have hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms), specifically in Big Data processing services (Apache Spark, Beam, or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka … minimum of 5 years’ experience in a similar role. Ability to lead and mentor the architects. Required Skills: Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF. Preferred Skills: Designing Databricks-based solutions …
in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management. Proficiency in Python, SQL, Scala, or Java. Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory. Strong understanding of data architecture principles, data modelling, and data governance. Experience with cloud-based data platforms, including Azure and/or …
City of London, Greater London, UK Hybrid / WFH Options
SGI
for deployment and workflow orchestration. Solid understanding of financial data and modelling techniques (preferred). Excellent analytical, communication, and problem-solving skills. Experience with data engineering and ETL tools such as Apache Airflow or custom ETL scripts. Strong problem-solving skills with a keen analytical mindset, especially in handling large data sets and complex data transformations. Strong experience in setting up …
teams • Mentor junior developers Requirements: • British-born sole UK National with active SC or DV Clearance • Strong Java skills, familiarity with Python • Experience in Linux, Git, CI/CD, Apache NiFi • Knowledge of Oracle, MongoDB, React, Elasticsearch • Familiarity with AWS (EC2, EKS, Fargate, S3, Lambda) • Active DV Clearance. If you do not meet all requirements, still feel free to …
technical and professional experience Preferred Skills: Experience working within the public sector. Knowledge of cloud platforms (e.g., IBM Cloud, AWS, Azure). Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop). Understanding of data warehousing concepts and experience with tools like IBM Cognos or Tableau. Certifications: While not required, the following certifications would be highly beneficial: … ABOUT BUSINESS UNIT IBM Consulting is IBM's consulting and global professional services business, with market-leading capabilities in …
City of London, London, United Kingdom Hybrid / WFH Options
Signify Technology
data loads, and data pipeline monitoring. Develop and optimise data pipelines for integrating structured and unstructured data from various internal and external sources. Leverage big data technologies such as Apache Spark, Kafka, and Scala to build robust and scalable data processing systems. Write clean, maintainable code in Python or Scala to support data transformation, orchestration, and integration tasks. Work …
Gloucester, Gloucestershire, South West, United Kingdom Hybrid / WFH Options
Anson Mccade
tools like JUnit, Git, Jira, MongoDB, and React; Familiarity with cloud platforms (especially AWS), microservices, and containerisation; DV clearance (or eligibility to obtain it). Nice to Have: Experience with Apache NiFi, JSF, Hibernate, Elasticsearch, Kibana, or AWS services like EC2, Lambda, EKS; CI/CD pipeline expertise using GitLab; Knowledge of secure, scalable architectures for cloud deployments. O.K. I …