NMB Bank Plc is a full-service commercial bank based in Tanzania, providing a wide array of financial services and products to retail customers, small and medium-sized enterprises (SMEs), corporations, and government institutions.
NMB has been recognized with numerous accolades, including being named “Best Bank in Tanzania” by Euromoney multiple times.
Job Purpose:
Design, develop, and optimize data architectures to ensure effective data collection, storage, and processing that facilitate deeper analysis and reporting. Ensure the performance and quality of data pipelines, enabling analysts and data scientists to use the data efficiently.
Maintain the data warehouse & BI platform and implement the big data strategy for the Bank.
Main Responsibilities:
- Design, implement, and maintain the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from a wide variety of sources into the Data Warehouse using SQL and big data technologies, with a focus on process improvement, automation, and optimized data delivery.
- Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
- Design and build solutions that empower users to meet their self-service analytics needs.
- Assemble large, complex data sets that meet functional and non-functional business requirements, and design custom ETL and ELT processes in support of those requirements.
- Implement enhancements and new features across data infrastructure systems, including the Data Warehouse, ETL, Master Data Management (MDM), and BI platforms.
- Maintain the overall data infrastructure systems (ETL, data warehouse, BI, MDM, credit systems) to ensure 24/7 availability and good performance.
- Design and implement data products and features in collaboration with product owners, data analysts, and business partners using Agile / Scrum methodology
- Design and build an optimal organizational data infrastructure and architecture for the extraction, transformation, and loading of large data volumes from a wide variety of sources, using SQL and big data technologies on Azure and AWS.
- Work with the architecture and other teams to ensure quality solutions are implemented and engineering best practices are defined and adhered to.
- Anticipate, identify, and solve issues concerning data management to improve data quality.
- Work with analytics tools that utilize the data pipeline to provide actionable insights into operational efficiency and other key business performance metrics.
- Advise on the best tools, services, and resources for building robust data pipelines covering data ingestion, connection, transformation, and distribution.
- Transform data and engineer new features for machine learning models.
- Perform deep-dive analysis, including the application of advanced analytical techniques, to solve some of the more critical and complex business problems.
- Create new methods to visualize core business metrics through reports, dashboards, and analytics tools.
Knowledge and Skills:
- ETL, data warehousing, BI & data analytics
- Advanced knowledge of SQL, DAX, M, and Python
- Programming languages, e.g. SQL, Python, and R
- Excellent analytical, creative, and problem-solving skills.
- Strong communication and collaboration skills, including the ability to communicate effectively with both technical and non-technical audiences.
- Basic skills in data modeling, data transformation, and deriving strategic insights.
- Strong understanding of the data lifecycle in decision-making processes
- Extensive experience in ETL processes for seamless data movement across systems.
- Familiarity with BigQuery for processing and analyzing large datasets.
- Ability to design robust structures for efficient data storage and retrieval.
- Ability to work independently with minimal supervision and to lead a team.
Qualifications and Experience:
- BSc in Computer Science, Computer Engineering, Data Science, or a relevant field.
- Strong programming experience in SQL, Python, and R.
- At least 5 years of experience in data engineering, data warehousing, and business intelligence.
- Experience in building and optimizing data warehouse and big data pipeline architectures.
- Experience with the design, development, and maintenance of ETL tools.
- Experience with the maintenance and troubleshooting of BI platforms.
- Experience with SQL and NoSQL databases.
- Experience in data mining and machine learning.
- Experience with data governance and data security.