Job ID: R1546-1222-12120-DE
Role: Data Engineer - AWS
No. of Positions: 01
Experience: 5-10 Years
Technical Competencies: Data Warehousing, Big Data, AWS Cloud, SDLC Process
As a Data Engineer you will work on applications that communicate closely with systems such as air filters, drones, engineering sensors (IoT), geospatial systems, and embedded applications. You will help integrate, control, optimize, and process the data from these devices. You will build systems that help the business improve the efficiency and effectiveness of its engineering systems.
Responsibilities:
- Building the data lake using AWS technologies such as S3, EKS, ECS, AWS Glue, AWS KMS, Amazon Kinesis Data Firehose, and EMR
- Developing sustainable, scalable, and adaptable data pipelines
- Operationalizing data pipelines to support advanced analytics and decision-making
- Building data APIs and data delivery services to support critical operational and analytical applications
- Leveraging Databricks Lakehouse functionality as needed to build common/conformed layers within the data lake
- Contributing to the design of robust systems with an eye on the long-term maintenance and support of the application
- Building reusable code modules to solve problems across the team and organization
- Handling multiple functions and roles across projects and Agile teams
Requirements:
- At least 2-3 years' experience designing and developing data pipelines for data ingestion or transformation using AWS technologies
- At least 1 year's experience with the following Big Data concepts: file formats (Parquet, Avro, ORC), resource management, and distributed processing
- At least 2-3 years' experience developing applications using monitoring, build tools, version control, unit testing, TDD, and change management to support DevOps
- At least 3 years' experience with SQL and shell scripting
- At least 2 years' experience with Spark programming (PySpark or Scala)
- At least 2 years’ experience with Databricks implementations
- Familiarity with Delta Lake and lakehouse concepts and technologies
- At least 1 year's experience designing, building, and deploying production-level data pipelines using tools from the Hadoop stack; comfortable developing applications with tools such as Hive/Impala, HBase, Oozie, Spark, NiFi, Apache Beam, and Apache Airflow
- At least 2 years' experience with Microsoft Azure, Amazon Web Services (AWS), Google Cloud, or another public cloud service
- At least 1 year's experience with streaming using Spark, Flink, or Kafka
- Hands-on experience with OOAD and OOP
- Exposure to AWS cloud environment
- Knowledge of Agile Scrum and the SDLC process
Job Category: Digital Engineering