Associate Consultant MC Core-Modeling

Roles & responsibilities 

— Assess, capture, and translate complex business issues and solution requirements into structured technical tasks for the data engineering team, including rapid learning of industry standards and development of effective work stream plans 

— Design, build, launch, optimize, and extend full-stack data and business intelligence solutions spanning extraction, storage, complex transformation, and visualization layers

— Support the build of big data environments that enable analytics solutions on a variety of big data platforms, including assessing the usefulness of new technologies and advocating for their adoption

— Continuously focus on opportunities to improve the processes and efficiencies of the data pipelines 

— Continuously focus on improving data quality and reliability of existing pipelines

— Work with a variety of stakeholders and teams to build and modify data pipelines as required to meet business needs

— Create data access tools for the analytics and data science team to leverage the data and develop new solutions

— Conduct code reviews, assist other developers, and train team members, as required

— Ensure that developed systems comply with industry standards and best practices, and that they meet the project's requirements.

 

This role is for you if you have the educational qualifications below

  • Bachelor’s in computer science engineering or equivalent, or relevant experience
  • Certification in cloud technologies, especially Azure; working knowledge of MS Fabric would be good to have

Work experience

  • 2-3+ years of development experience building and maintaining ETL/ELT pipelines that operate on a variety of sources, such as APIs, MS Fabric, FTP sites, cloud-based blob stores, and databases (relational and non-relational)

  • Experience working with operational programming tasks, such as version control, CI/CD, testing, and quality assurance
  • Experience with Apache data projects (Hadoop, Spark, Hive, Airflow) or cloud platform equivalents (Databricks, Azure Data Lake Services, Azure Data Factory), and with one or more of the following programming languages: Python, Scala, R, Java, Golang, Kotlin, C, or C++
  • Experience with SDLC methodologies, particularly Agile, and project management tools, preferably Azure DevOps
  • Design, develop, and implement high-performance Python code with a focus on efficiency, scalability, and performance
  • Expert level in using standard ML libraries such as NumPy and Pandas
  • Advanced level in using the ML frameworks PyTorch / TensorFlow on the Azure cloud ecosystem
  • Collaborate closely with data scientists, machine learning engineers, and other stakeholders to understand their requirements and translate these into data-driven solutions
  • Troubleshoot, debug, and resolve any issues that occur within generative AI system development, ensuring that the models perform optimally
  • Document all processes, specifications, and training procedures, and maintain stringent version control to ensure the high quality, accuracy, and replicability of generative models
  • Power BI (Fabric), DAX, Power Query

