processes and use of modern data analytics technology. BASIC QUALIFICATIONS - Experience in processing data with a massively parallel technology (such as Redshift, Teradata, Netezza, Spark, or a Hadoop-based big data solution) - Experience in relational database technology (such as Redshift, Oracle, MySQL, or MS SQL) - Experience in developing and operating …
related industries. Certification in relevant areas (e.g., AWS Certified Data Analytics, Google Data Analytics Professional Certificate). Familiarity with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure). Experience with data visualization design principles and storytelling techniques. Knowledge of agile methodologies and project management. Strategic …
the big 3 cloud ML stacks (AWS, Azure, GCP). Hands-on experience with open-source ETL and data pipeline orchestration tools such as Apache Airflow and NiFi. Experience with large-scale/Big Data technologies such as Hadoop, Spark, Hive, Impala, PrestoDB, Kafka. Experience with workflow orchestration tools like Apache Airflow. Experience with containerisation using Docker and deployment on Kubernetes. Experience with NoSQL and graph databases. Unix server administration and shell scripting experience. Experience in building scalable data pipelines for highly unstructured data. Experience in building DWH and data lake architectures. Experience in working in cross …
Azure or AWS. Strong experience designing and delivering data solutions in Databricks. Proficient with SQL and Python. Experience using Big Data technologies such as Apache Spark or PySpark. Great communication skills, engaging effectively with senior stakeholders. Nice to haves: Azure/AWS Data Engineering certifications, Databricks certifications. What …
methods for synthesis prediction using tools like PSI4, Orca, or Gaussian. Big Data: Experience curating and processing data from diverse sources; exposure to Apache Spark or Hadoop is beneficial. Cloud Platforms: Proficiency with AWS, GCP, or Azure. ML Frameworks: Hands-on with scikit-learn, TensorFlow, PyTorch …
Stroud, England, United Kingdom Hybrid / WFH Options
Data Engineer
excellence and be someone who actively looks for continual improvement opportunities. Knowledge and skills: Experience as a Data Engineer or Analyst; Databricks/Apache Spark; SQL/Python; BitBucket/GitHub. Advantageous: dbt; AWS; Azure DevOps; Terraform; Atlassian (Jira, Confluence). About Us: What's in it for …
databases. Skilled in Python, Java, or Scala for data pipeline development. Experienced with BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub. Exposure to Hadoop, Spark, Kafka. Data Engineer - GCP & Python. Location: London, UK. Type: Hybrid (3 days onsite, 2 remote). Employment: Full-time, Permanent/Fixed Term. We're …
HBase, Elasticsearch). Build, operate, maintain, and support cloud infrastructure and data services. Automate and optimize data engineering pipelines. Utilize big data technologies (Databricks, Spark). Develop custom security applications, APIs, AI/ML models, and advanced analytic technologies. Experience with threat detection in Azure Sentinel, Databricks, MPP databases …
knowledge of data modelling, warehousing, and real-time analytics. Proficiency in SQL, Python, Java, or similar programming languages. Familiarity with big data technologies (e.g., Spark, Hadoop) and BI tools (e.g., Power BI, Tableau). Excellent problem-solving and stakeholder engagement skills. Desirable: Experience in research-driven or complex data …
Knutsford Contract Role. Job Description: AWS Services: Glue, Lambda, IAM, Service Catalogue, CloudFormation, Lake Formation, SNS, SQS, EventBridge. Language & Scripting: Python and Spark. ETL: dbt. Good to Have: Airflow, Snowflake, Big Data (Hadoop), and Teradata. Responsibilities: Serve as the primary point of contact for all AWS-related …
Cassandra, and Redis. In-depth knowledge of ETL/ELT pipelines, data transformation, and storage optimization. Skilled in working with big data frameworks like Spark, Flink, and Druid. Hands-on experience with both bare-metal and AWS environments. Strong programming skills in Python, Java, and other relevant languages. Proficiency …
Python or KornShell. Knowledge of writing and optimizing SQL queries for large-scale, complex datasets. Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with ETL tools like Informatica, ODI, SSIS, BODI, or DataStage. Our inclusive culture empowers Amazon employees to deliver the best results for …
related field. 5+ years of experience in data engineering and data quality. Strong proficiency in Python/Java, SQL, and data processing frameworks including Apache Spark. Knowledge of machine learning and its data requirements. Attention to detail and a strong commitment to data integrity. Excellent problem-solving skills and …
Systems, Cloudera/Hortonworks, AWS EMR, GCP Dataproc, or GCP Cloud Data Fusion. Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub, and Spark Streaming. Experience working with CI/CD technologies (Git, Jenkins, Spinnaker, GCP Cloud Build, Ansible, etc.) and experience building and deploying solutions to …
HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell). PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing …
communication and collaboration skills. Azure certifications such as Azure Data Engineer Associate or Azure Solutions Architect Expert. Experience with big data technologies like Hadoop, Spark, or Databricks. Familiarity with machine learning and AI concepts. If you encounter any suspicious mail, advertisements, or persons who offer jobs at Wipro, please …
to support business insights, analytics, and other data-driven initiatives. Job Specification (Technical Skills): Cloud Platforms: Expert-level proficiency in Azure (Data Factory, Databricks, Spark, SQL Database, DevOps/Git, Data Lake, Delta Lake, Power BI), with working knowledge of Azure WebApp and Networking. Conceptual understanding of Azure AI …
Nottingham, Nottinghamshire, East Midlands, United Kingdom Hybrid / WFH Options
Profile 29
proposal development. Experience in Data & AI architecture and solution design. Experience working for a consultancy or agency. Experience with data engineering tools (SQL, Python, Spark). Hands-on experience with cloud platforms (Azure, AWS, GCP). Hands-on experience with data platforms (Azure Synapse, Databricks, Snowflake). Ability to translate clients' business …