Proven desire to expand your cloud/platform engineering capabilities Experience working with Big Data Experience with data storage technologies: Delta Lake, Iceberg, Hudi Proven knowledge and understanding of Apache Spark, Databricks or Hadoop Ability to take business requirements and translate these into tech specifications Competence in evaluating and selecting development tools and technologies Sound like the role you …
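For illustration, a minimal sketch of writing and reading a Delta Lake table from PySpark — one of the storage formats this listing names. The table path is a placeholder assumption, and the session config is the standard delta-spark bootstrap, not anything specific to this employer.

```python
# Minimal Delta Lake sketch. Assumes the delta-spark package is installed;
# the path "/tmp/events_delta" is a placeholder, not from the listing.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Write a small DataFrame as a Delta table, then read it back.
df = spark.createDataFrame([(1, "click"), (2, "view")], ["id", "event"])
df.write.format("delta").mode("overwrite").save("/tmp/events_delta")
spark.read.format("delta").load("/tmp/events_delta").show()
```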
microservice architecture, API development. Machine Learning (ML): • Deep understanding of machine learning principles, algorithms, and techniques. • Experience with popular ML frameworks and libraries like TensorFlow, PyTorch, scikit-learn, or Apache Spark. • Proficiency in data preprocessing, feature engineering, and model evaluation. • Knowledge of ML model deployment and serving strategies, including containerization and microservices. • Familiarity with ML lifecycle management, including versioning …
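For illustration, a minimal scikit-learn sketch of the preprocessing-to-evaluation flow this listing describes; the dataset and hyperparameters are illustrative only.

```python
# Preprocessing -> scaling -> model -> cross-validated evaluation, the shape
# of workflow the listing asks for. Dataset and settings are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Bundling the scaler and estimator in one pipeline keeps preprocessing
# inside each cross-validation fold, avoiding data leakage.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```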
networks into production Experience with Docker Experience with NLP and/or computer vision Exposure to cloud technologies (e.g. AWS and Azure) Exposure to Big Data technologies Exposure to Apache products, e.g. Hive, Spark, Hadoop, NiFi Programming experience in other languages This is not an exhaustive list, and we are keen to hear from you even if you don't …
primarily GCP. Experience with some or all of the services below would put you at the top of our list: Google Cloud Storage Google Data Transfer Service Google Dataflow (Apache Beam) Google Pub/Sub Google Cloud Run BigQuery or any RDBMS Python Debezium/Kafka dbt (data build tool) Interview process Interviewing is a two-way process and we want you …
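For illustration, a minimal Apache Beam pipeline sketch of the kind Google Dataflow executes. It runs locally on the default DirectRunner, and the input words are placeholders; on Dataflow you would additionally pass runner, project, and region options via PipelineOptions.

```python
# Minimal Beam word-count sketch: create elements, pair each with 1,
# and sum per key. Runs on the local DirectRunner by default.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["alpha", "beta", "alpha"])
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```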
ability to learn others as needed: Distributed or large-scale systems MySQL/SQL database design, query optimization, and administration Web development using HTML, CSS, JavaScript, Vue/React Apache web server and related modules Cloud platforms such as AWS, Google Cloud, Azure CI/CD pipeline setup, testing, and administration Networking and firewall configuration Natural language processing Responsibilities …
of automation IT WOULD BE NICE FOR THE SENIOR SOFTWARE ENGINEER TO HAVE: Cloud-based experience Microservice architecture or serverless architecture Big Data/Messaging technologies such as Apache NiFi/MiNiFi/Kafka TO BE CONSIDERED: Please either apply by clicking online or email me directly at … For further information, please call me on 07704 152 640.
San Diego, California, United States Hybrid / WFH Options
Gridiron IT Solutions
or Iterative) Scripting and other languages (e.g., sh, csh, bash, ksh, make, imake, XML, HTML, CSS, and/or Perl) Development tools and services (e.g., Eclipse, Spring Framework, JBoss, Apache, Tomcat, Maven, Ant and/or automated test tools) Familiarity with server-side Java/JEE development User Interface development tools for the JEE stack Java Frameworks such as …
and static site generation (SSG) in Next.js Experience with testing frameworks like Jest, Cypress, or React Testing Library. Experience with authentication strategies using OAuth, JWT, or Cognito Familiarity with Apache Spark/Flink for real-time data processing is an advantage. Hands-on experience with CI/CD tools Commercial awareness and knowledge of the public sector. Excellent communicator, able …
system performance and functionality. Requirements: -Active Top Secret/SCI Eligibility Clearance. -Minimum of 8 years of experience in data engineering or related work. -Proficiency in Java, AWS, Python, Apache Spark, Linux, Git, Maven, and Docker. -Experience maintaining an Apache Hadoop ecosystem using tools like HBase, MapReduce, and Spark. -Knowledge of ETL processes utilizing Linux shell scripting, Perl … Python, and Apache Airflow. -Experience with AWS services such as CloudWatch, CloudTrail, ELB, EMR, KMS, SQS, SNS, and Systems Manager. -Experience in supporting, maintaining, and migrating JavaFX applications to modern cloud-native solutions. -Strong decision-making skills and domain knowledge. -Bachelor's degree in a related field OR an additional 4 years of relevant experience in lieu of a …
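For illustration, a minimal Apache Airflow DAG sketch of the ETL shape this listing describes. The DAG id, schedule, and task bodies are placeholder assumptions (and the `schedule` keyword assumes Airflow 2.4+), not the employer's actual pipeline.

```python
# Minimal extract -> transform -> load DAG. Task bodies are stubs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull records from the source system")


def transform():
    print("clean and reshape the records")


def load():
    print("write the records to the warehouse")


with DAG(dag_id="example_etl", start_date=datetime(2024, 1, 1),
         schedule="@daily", catchup=False) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```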
Kafka (MSK) team! We are seeking builders for our Amazon MSK service, a fully managed service that makes it easy for customers to build and run applications that use Apache Kafka to process streaming data. We are looking for engineers who are enthusiastic about data streaming, and are as passionate about contributing to open source as they are about … second, enjoys solving complex software problems, and possesses analytical, design and problem-solving skills. Ideally you have an in-depth understanding of streaming data technologies like Amazon Kinesis or Apache Kafka, and experience with open-source data processing frameworks like Apache Spark, Apache Flink, or Apache Storm. Your responsibilities will include collaborating with other engineers to … Amazon MSK) Launch AWS re:Invent 2020: How Goldman Sachs uses an Amazon MSK backbone for Transaction Banking Platform AWS re:Invent 2020: How New Relic is migrating its Apache Kafka cluster to Amazon MSK AWS re:Invent 2021: How Coinbase uses Amazon MSK as an event store for applications MSK Tiered Storage: Optimize cost and improve Kafka scalability …
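For illustration, a minimal consumer sketch using the kafka-python client — the client-side shape of the Apache Kafka applications MSK hosts, not the MSK service code itself. The topic, group id, and broker address are placeholder assumptions; an MSK cluster supplies its own bootstrap broker string.

```python
# Minimal Kafka consumer sketch with kafka-python. All names are placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                              # placeholder topic
    bootstrap_servers=["localhost:9092"],  # placeholder; MSK provides a bootstrap string
    group_id="orders-processor",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    # Each record carries partition/offset metadata alongside the payload.
    print(message.partition, message.offset, message.value)
```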
utilizing the Django web framework for the backends and React for developing the client-facing portion of the application Create, extract, transform, and load (ETL) pipelines using Hadoop and Apache Airflow for various production big data sources to fulfill intelligence data availability requirements Automate retrieval of data from various sources via API and direct database queries for intelligence analysts … iterations Support capabilities briefings for military personnel Required Qualifications: Bachelor's degree in a related field preferred Active TS/SCI Required Preferred Qualifications: Windows 7/10, MS Project, Apache Airflow, Python, Java, JavaScript, React, Flask, HTML, CSS, SQL, R, Docker, Kubernetes, HDFS, Postgres, Linux, AutoCAD, JIRA, Gitlab, Confluence Also looking for a Senior Developer at a higher compensation …
Demonstrated experience with Data Quality and Data Governance concepts and experience. Demonstrated experience maintaining, supporting, and improving the ETL process through the implementation and standardization of data flows with Apache NiFi and other ETL tools. Demonstrated experience with Apache Spark …
with Data Quality and Data Governance concepts and experience. 11. (Desired) Demonstrated experience maintaining, supporting, and improving the ETL process through the implementation and standardization of data flows with Apache NiFi and other ETL tools. 12. (Desired) Demonstrated experience with Apache Spark …
technologies – Azure, AWS, GCP, Snowflake, Databricks Must Have: Hands-on experience on at least 2 Hyperscalers (GCP/AWS/Azure platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis/MQ/Event Hubs, Kafka … skills. A minimum of 5 years' experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF Desirable Skills: Designing Databricks-based solutions …
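For illustration, a minimal Apache Spark batch job sketch of the kind these Big Data processing services run. The input and output paths and the `timestamp` column are placeholder assumptions; on Dataproc, EMR, or Synapse the session and storage URIs come from the platform.

```python
# Minimal PySpark batch rollup: read JSON events, count per day and type,
# write Parquet. Paths and schema are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

events = spark.read.json("/data/events/*.json")  # placeholder input path
daily = (
    events
    .withColumn("day", F.to_date("timestamp"))   # assumes a "timestamp" column
    .groupBy("day", "event_type")
    .agg(F.count("*").alias("events"))
)
daily.write.mode("overwrite").parquet("/data/rollups/daily")  # placeholder output
```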
in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management Proficiency in Python, SQL, Scala, or Java Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory Strong understanding of data architecture principles, data modelling, and data governance Experience with cloud-based data platforms, including Azure and/or …
and optimize system performance using Amazon CloudWatch and Dynatrace. Collaborate with DevOps teams to build CI/CD pipelines using GitLab. Support big data processing using Amazon EMR and Apache Spark. Integrate data orchestration tools like Apache Airflow and data warehousing with Amazon Redshift. Work with data governance tools such as Informatica (EDC, AXON, IDQ) and Denodo. Enable …
technical and professional experience Preferred Skills: Experience working within the public sector. Knowledge of cloud platforms (e.g., IBM Cloud, AWS, Azure). Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop). Understanding of data warehousing concepts and experience with tools like IBM Cognos or Tableau. Certifications: While not required, the following certifications would be highly beneficial: … ABOUT BUSINESS UNIT IBM Consulting is IBM's consulting and global professional services business, with market-leading capabilities in …
Gloucester, Gloucestershire, South West, United Kingdom Hybrid / WFH Options
Anson Mccade
tools like JUnit, Git, Jira, MongoDB, and React Familiarity with cloud platforms (especially AWS), microservices, and containerisation DV clearance (or eligibility to obtain it) Nice to Have: Experience with Apache NiFi, JSF, Hibernate, Elasticsearch, Kibana, or AWS services like EC2, Lambda, EKS CI/CD pipeline expertise using GitLab Knowledge of secure, scalable architectures for cloud deployments O.K. I …
Science or a related field. Experience working on and shipping live service games. Experience working on Spring Boot projects. Experience deploying software/services on Kubernetes. Experience working with Apache Spark and Iceberg.
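For illustration, a minimal sketch of pointing Spark at a local Apache Iceberg catalog and writing a table. The catalog name and warehouse path are placeholder assumptions, and running it requires the matching iceberg-spark-runtime jar on the Spark classpath.

```python
# Minimal Spark + Iceberg sketch using a local Hadoop catalog.
# Catalog name "local" and the warehouse path are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-demo")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg_warehouse")
    .getOrCreate()
)

# Create, populate, and read back an Iceberg table.
spark.sql("CREATE TABLE IF NOT EXISTS local.db.matches (id BIGINT, mode STRING) USING iceberg")
spark.sql("INSERT INTO local.db.matches VALUES (1, 'ranked')")
spark.sql("SELECT * FROM local.db.matches").show()
```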
NoSQL databases such as MongoDB, Elasticsearch, MapReduce, and HBase. Demonstrated experience maintaining, upgrading, troubleshooting, and managing software, hardware and networks (specifically the hardware networks piece). Demonstrated experience with Apache NiFi. Demonstrated experience with the Extract, Transform, and Load (ETL) processes.
Newcastle upon Tyne, Tyne and Wear, Tyne & Wear, United Kingdom
Randstad Technologies Recruitment
institutions, alongside a proven record of relevant professional experience. Proven experience in a data specialist role with a passion for solving data-related problems. Expertise in SQL, Python, and Apache Spark, with experience working in a production environment. Familiarity with Databricks and Microsoft Azure is a plus. Financial Services experience is a bonus, but not required. Strong verbal and …
are constantly looking for components to adopt in order to enhance our platform. What you'll do: Develop across our evolving technology stack - we're using Python, Java, Kubernetes, Apache Spark, Postgres, ArgoCD, Argo Workflows, Seldon, MLflow and more. We are migrating into AWS cloud and adopting many services that are available in that environment. You will have the …
and engineering practices. Key competencies include: Microsoft Fabric expertise : Designing and delivering data solutions using Microsoft Fabric, including Pipelines, Notebooks, Dataflows Gen2. Programming and query languages : Proficiency in Python, Apache Spark, KQL (Kusto Query Language). End-to-end data solution delivery : Experience with Data Governance, Migration, Modelling, ETL/ELT, Data Lakes, Warehousing, MDM, and BI. Engineering delivery …