Job Description
We are seeking a skilled Databricks Engineer to join our data engineering team and support the design, development, and optimization of scalable data pipelines and analytics solutions. The ideal candidate will have strong hands-on experience with Databricks, Python, PySpark, and SQL, along with exposure to modern cloud-based data platforms.
You will work closely with data architects, analysts, and business stakeholders to deliver reliable, high-performance data solutions that support analytics and reporting initiatives.
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT data pipelines using Databricks
- Build and optimize data transformations using Python, PySpark, and SQL
- Ingest and process data from multiple sources into Data Lakes
- Work with structured and semi-structured datasets for analytics use cases
- Integrate Databricks workflows with Snowflake and other analytical platforms
- Optimize job performance, cluster usage, and query execution
- Collaborate with cross-functional teams to understand data requirements and deliver solutions
- Ensure data quality, reliability, and documentation standards are met
Required Skills & Qualifications
- Strong hands-on experience as a Data Engineer / Databricks Engineer
- Proficiency in Python and PySpark
- Advanced SQL skills for data transformation and analysis
- Experience working with Databricks in production environments
- Solid understanding of Data Lakes and modern data architectures
- Experience with Snowflake for analytics and reporting
- Exposure to Azure Data Factory or similar data orchestration tools
- Ability to work with large datasets and optimize data processing workflows
- Strong problem-solving and communication skills