Job Description
Role: AWS Databricks Data Engineer Lead
Location: USA - Hartford, CT
Must Have: AWS Glue, Databricks, Python
Good to Have: Spark, Snowflake
Overview: The Data Engineering team constructs pipelines that contextualize and provide easy access to data across the entire enterprise. As a Data Engineer, you will play a key role in growing and transforming the analytics landscape.
We are looking for a Data Engineer Lead with experience building ingestion solutions to integrate third-party data using Databricks.
Requirements:
- Strong Python skills
- AWS Cloud infrastructure knowledge, including commonly used services such as AWS Glue, S3, IAM, and Airflow
- Experience building data pipelines using Databricks on AWS
- Knowledge of and hands-on experience with Snowflake
- Experience with Agile methodologies and Atlassian tools such as JIRA
- Expertise with version control tools such as Git and Bitbucket
- Experience with CI/CD using Kubernetes and Git, and with monitoring and alerting tools
- Experience with data migration from on-premises databases to AWS Cloud (S3)
Roles & Responsibilities:
- Acts as a single point of contact for Databricks projects for the customer
- Provides innovative and cost-effective solutions using AWS, Databricks, and Python
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit
- As a leader in Cloud Engineering, you will be responsible for overseeing development
- Learns and adapts quickly to new technologies as business needs require
- Develops a team focused on operational excellence, building tools and capabilities that development teams leverage to maintain high levels of performance, scalability, security, and availability
- Understands where to obtain the information needed to make appropriate decisions
- Demonstrates the ability to break down a problem into manageable pieces and implement effective, timely solutions
- Identifies the problem versus the symptoms
Skills:
- The candidate must have 3-5 years of experience with Databricks, AWS, and Python
- Hands-on experience with the AWS Cloud platform, especially S3, Glue, and Lambda
- Experience with Spark scripting
- Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
- Strong experience with relational databases and data access methods, especially SQL
- Knowledge of AWS architecture and design
Minimum Qualifications:
- Bachelor's degree or equivalent training with data tools, techniques, and manipulation
- Four years of data engineering or equivalent experience
Skills: Databricks, Spark SQL, Scala/Python/Java
VDart