- Design and maintain cloud, on-premises, and hybrid data platform architectures
- Identify, design, and implement full data warehouse and Big Data solutions
- Design and maintain optimal data pipeline architectures for batch and real-time processing
- Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work in a customer-facing environment alongside technical and business stakeholders to assist with data-related needs
- Manage the workload of team members through productivity and collaboration tools such as DevOps and JIRA
- Conduct regular work reviews and coaching of team members
- Provide regular, effective progress updates to Project Managers and work closely with them to manage any delivery risks or issues
- Take responsibility for making key decisions to ensure the successful implementation of all initiatives
- Contribute to and support the business development team with solution design, budgeting, and high-level scheduling
- Strong hands-on experience implementing modern data platform solutions
- Experience supporting and working with cross-functional teams in a dynamic environment
- Capable of leading data initiatives and communicating clearly with business stakeholders
- Ability to work efficiently without supervision
- Advanced working knowledge of SQL and Python, with experience in relational and unstructured databases, query authoring (SQL), and PySpark, as well as working familiarity with a variety of databases
- Experience building and optimizing data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management
- Knowledge of the technical aspects of data governance, including glossary, metadata, data quality, master data management, and data lifecycle management
- Knowledge of security, authentication, and authorization
- Experience working in an Agile/Scrum setup
- Experience with Azure DevOps or other DevOps practices
- Experience building and maintaining data lakes, including Azure Data Lake and Azure Synapse environments
- Experience with data lineage and data cleansing tools (CluedIn preferred)
- Bachelor of Science degree in IT/Computer Science/Computer Engineering.
- Minimum 7 years of relevant experience in data analysis and BI solutions preferred.
- Azure certified (Data track), e.g., DP-900, PL-200, DA-100.
- Experience with Azure-based data platforms, mainly Azure Data Factory, Azure SQL, and Azure Synapse
- Experience in Power BI and Cognos.