Employment Information


Industry : Data Science

Salary : Not Disclosed

Job Type : Permanent

Updated on : 17-05-2023

Job Level : Mid Level

Experience : 4 - 6 Years

Deadline : 25-02-2023

Location : Hyderabad

Job Description

Strong experience in Python programming, MongoDB and MySQL databases, and data tools (Kafka, Spark, and Hadoop) is required.

1. Assemble large, complex data sets that meet functional and non-functional business requirements.
2. Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
3. Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS and SQL technologies (a minimal sketch follows this list).
4. Build analytical tools that utilize the data pipeline, providing actionable insight into key business performance metrics, including operational efficiency and customer acquisition.
5. Develop algorithms to transform data into useful, actionable information.
6. Build, test, and maintain database pipeline architectures.
7. Collaborate with management to understand company objectives.
8. Create new data validation methods and data analysis tools.
9. Ensure compliance with data governance and security policies.
10. Identify opportunities for data acquisition.
11. Collaborate with data scientists and architects on several projects.
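
As a flavour of the pipeline work described in item 3, here is a minimal, purely illustrative Python sketch of one ingestion step: consuming JSON events from a Kafka topic and loading a trimmed-down document into MongoDB. The topic name, connection strings, and field names are assumptions made for the example, not details taken from this posting.

    # Illustrative only: Kafka-to-MongoDB ingestion sketch (kafka-python + pymongo).
    import json

    from kafka import KafkaConsumer
    from pymongo import MongoClient

    consumer = KafkaConsumer(
        "events",                            # hypothetical topic name
        bootstrap_servers="localhost:9092",  # hypothetical broker address
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    collection = MongoClient("mongodb://localhost:27017")["analytics"]["events"]

    for message in consumer:
        record = message.value
        # Simple transformation step: keep only the fields downstream reports need.
        doc = {"user_id": record.get("user_id"), "amount": record.get("amount")}
        collection.insert_one(doc)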

Qualification

Bachelor's and/or Advanced Degree in any Specialization

Skills Required

Python, MongoDB, MySQL, Kafka, Spark

Get ready to start your ONPASSIVE journey

Come and be a part of a revolutionary technology world that will change the future. If you are in it, we will help you win it! We provide career opportunities that steer your career in the right direction and help you achieve your professional goals.

