Data Integration Jobs In Gurgaon

Ready to make a difference in your career, your life, the world?

At Virtusa, your passion will drive the kind of innovation that transforms industries, economies, and lives throughout the world. Join Virtusa, a global leader in digital business strategy, digital engineering, and IT services and solutions. Explore all the ways you can make a difference. Learn more about working at Virtusa.

Who we are

Businesses today require transformational change at a scale and speed that defies traditional ways of working. We spark change through our Digital Transformation Studio that delivers deep digital engineering and industry expertise through client-specific and integrated agile scrum teams.

Virtusa helps businesses move forward faster by combining deep industry expertise with frictionless technology delivery.


Join Our Team

We’re always on the lookout for new talent. Check out our current openings and apply today!

Results (1-4 of 4)

JOB TITLE
CATEGORY
LOCATION
JOB ID

Data Engineer

P3-C3-STS

Bechtel Corporation is seeking a talented and ambitious big data engineer to join our Big Data and A.I. Center of Excellence (BDAC) team. BDAC designs, develops, and deploys industry-leading data science and big data engineering solutions, using Artificial Intelligence (AI), Machine Learning (ML), and big data platforms and technologies, to increase efficiency in complex work processes, enable and empower data-driven decision making, planning, and execution throughout the lifecycle of projects, and improve outcomes for the Bechtel organization and its customers.

Who you are: You yearn to be part of groundbreaking projects that deliver world-class solutions on schedule. You are motivated to find opportunity in, and develop solutions for, evolving challenges; you are passionate about your craft and driven to deliver exceptional results. You love to learn modern technologies and guide other engineers to raise the bar on your team. You are imaginative and engaged about intuitive user interfaces, as well as new and emerging concepts and techniques.

Job Responsibilities:
- Big data design and analysis, data modeling, development, deployment, and CI/CD operations of big data pipelines.
- Collaborate with a team of other data engineers, data scientists, and business subject matter experts to process data and prepare data sources.
- Mentor other data engineers to develop a world-class data engineering team.
- Ingest, process, and model data from heterogeneous data sources to support data science projects.

Basic Qualifications:
- Bachelor's degree or higher in Computer Science, or an equivalent degree, and 3-10 years of related working experience.
- In-depth experience with a big data cloud platform, preferably Azure.
- Strong grasp of programming languages (Python, PySpark, or equivalent) and a willingness to learn new ones.
- Experience writing database-heavy services or APIs.
- Experience building and optimizing data pipelines, architectures, and data sets.
- Working knowledge of queueing, stream processing, and highly scalable data stores.
- Experience working with and supporting cross-functional teams.
- Strong understanding of structuring code for testability.

Preferred Qualifications:
- Professional experience implementing and maintaining MLOps pipelines in MLflow or AzureML.
- Professional experience implementing data ingestion pipelines using Data Factory.
- Professional experience with Databricks and coding with notebooks.
- Professional experience processing and manipulating data using SQL and Python code.
- Professional experience with user training, customer support, and coordination with cross-functional teams.
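The ingest-process-model flow this role describes can be sketched in a few lines of plain Python. This is an illustrative sketch only, not Bechtel's or BDAC's actual stack; all record fields and function names are invented for the example, and a production pipeline would use PySpark or similar rather than generators.

```python
# Minimal generator-based pipeline: ingest -> clean -> model.
# Field names ("sensor_id", "value") are invented for illustration.

def ingest(rows):
    """Yield raw records from a source (here, an in-memory list)."""
    for row in rows:
        yield row

def clean(records):
    """Drop records missing a required field and normalize types."""
    for rec in records:
        if rec.get("sensor_id") is None:
            continue
        yield {"sensor_id": str(rec["sensor_id"]),
               "value": float(rec.get("value", 0.0))}

def model(records):
    """Aggregate cleaned records into per-sensor running totals."""
    totals = {}
    for rec in records:
        totals[rec["sensor_id"]] = totals.get(rec["sensor_id"], 0.0) + rec["value"]
    return totals

raw = [{"sensor_id": 1, "value": "2.5"},
       {"sensor_id": None, "value": "9.9"},   # dropped by clean()
       {"sensor_id": 1, "value": "1.5"},
       {"sensor_id": 2, "value": "4.0"}]

totals = model(clean(ingest(raw)))
```

Because each stage is a generator, records stream through one at a time, which is the same shape a PySpark or stream-processing job gives you at scale.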
CREQ206518
Data Integration
India, Gurgaon

Azure DE

We are seeking an experienced and knowledgeable Azure Data Engineer with expert-level skills in Azure Databricks and Delta Architecture, specifically focused on supply chain and clinical trial data. As an Azure Data Engineer, you will be responsible for designing, developing, and maintaining our Azure data platform for supply chain and clinical trial data using state-of-the-art technologies and best practices.

Responsibilities:
- Design, develop, and implement end-to-end data solutions on the Azure platform, leveraging Azure Databricks and Delta Architecture for supply chain and clinical trial data.
- Architect, build, and optimize data pipelines and ETL processes to ingest, transform, and load supply chain and clinical trial data from various sources into Azure Databricks.
- Collaborate with stakeholders from the supply chain and clinical trial domains to understand data requirements and translate them into technical solutions.
- Build scalable and efficient data models using Delta Lake for structured and unstructured data on Azure Databricks.
- Develop and implement data ingestion frameworks, data validation processes, and data quality controls specific to supply chain and clinical trial data.
- Optimize performance and troubleshoot issues within Azure Databricks by tuning Spark configurations, optimizing query performance, and implementing caching strategies.
- Monitor, manage, and ensure the security, reliability, and availability of data pipelines and the Azure data platform.
- Implement best practices in data governance, data security, and data privacy, ensuring compliance with regulatory requirements.
- Collaborate with infrastructure and DevOps teams to ensure effective deployment, configuration, and management of Azure Databricks clusters.
- Stay up to date with the latest advancements and trends in Azure Databricks, Delta Architecture, and other relevant Azure data technologies, proactively bringing in new ideas and solutions to enhance the data platform.

Requirements:
- Bachelor's or Master's degree in computer science, information systems, or a closely related field.
- Strong experience as a data engineer, with a focus on Azure data solutions, specifically in the supply chain and clinical trial domains.
- Expert-level skills in Azure Databricks and Delta Architecture for processing supply chain and clinical trial data.
- Proficiency in Apache Spark, Scala, Python, SQL, and other relevant programming languages for data processing and analytics.
- In-depth knowledge of data warehousing concepts, data modeling, and ETL processes.
- Experience with data integration and data ingestion techniques using various technologies and protocols (e.g., REST APIs, Kafka, Azure Event Hubs).
- Strong understanding of data governance, data security, and data privacy requirements specific to supply chain and clinical trial data.
- Familiarity with Azure cloud services such as Azure Data Factory, Azure SQL Database, Azure Storage, and Azure Functions.
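A core Delta Lake skill this role calls for is the upsert ("merge") pattern: matched keys are updated, unmatched keys are inserted. The sketch below mimics that semantics in plain Python so the behavior is visible; the table and column names ("shipments", "shipment_id", "status") are invented, and real Delta code would use `MERGE INTO` in SQL or `DeltaTable.merge` in PySpark rather than dicts.

```python
# Plain-Python sketch of Delta-Lake-style MERGE (upsert) semantics:
# rows whose key matches are updated, new keys are inserted.
# All names here are invented for illustration.

def merge_upsert(target, updates, key):
    """Merge `updates` into `target`; both are lists of dicts keyed by `key`."""
    index = {row[key]: row for row in target}
    for row in updates:
        index[row[key]] = row          # matched -> update, unmatched -> insert
    return list(index.values())

shipments = [{"shipment_id": "S1", "status": "in_transit"},
             {"shipment_id": "S2", "status": "pending"}]
incoming  = [{"shipment_id": "S2", "status": "delivered"},   # update
             {"shipment_id": "S3", "status": "pending"}]     # insert

merged = merge_upsert(shipments, incoming, key="shipment_id")
```

In Delta Lake the same operation is transactional and scales to large tables, but the matched/not-matched logic is exactly this.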
CREQ208844
Data Integration
India, Gurgaon

DBT Architect

Mandatory Skills: AWS Data Services, DBT Core, Airflow, Spark, Power BI, Data Lake, GIT, Architecture & Design, SQL
Secondary Skills: Release Management, Project Management, Azure Data Services

Job Summary: We are seeking a highly skilled Data Build Tool (DBT) Architect to join our team. The ideal candidate will have extensive experience in designing, implementing, and managing data build tools and frameworks. This role involves leading the architecture and development of scalable data solutions to support our data-driven initiatives.

Key Responsibilities:
- Architect and Design Data Solutions: Lead the design and architecture of data build tools and frameworks to support data integration, transformation, and loading processes.
- Cloud Integration: Integrate data build tools with cloud platforms, primarily AWS, to ensure seamless data processing and storage.
- Performance Optimization: Optimize data build processes for performance, scalability, and cost-efficiency.
- Agile Collaboration: Work closely with data engineers, data analysts, and other stakeholders to understand data requirements and deliver robust solutions.
- Automation: Implement automation for data build processes to enhance efficiency and reduce manual intervention.
- Security and Compliance: Ensure all data build tools and processes adhere to security and compliance standards.
- Troubleshooting: Identify and resolve issues related to data build tools and processes.

Qualifications:
- Experience: Minimum of 5 years of experience in data engineering, with a focus on data build tools and architecture.
- Technical Skills: Expertise in DBT (data build tool), cloud platforms (AWS), and programming languages (Python, SQL).
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Certifications: AWS Certified Solutions Architect or AWS Certified Data Analytics is a plus.
- Soft Skills: Strong problem-solving skills, excellent communication, and the ability to lead and collaborate with cross-functional teams.

Preferred Qualifications:
- Experience with big data technologies like Hadoop, Spark, or Kafka.
- Familiarity with containerization and orchestration tools like Podman, Rancher, and Kubernetes.
- Knowledge of data warehousing and related tooling: data integration, data transformation, orchestration, and reporting.
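The orchestration idea underlying DBT and Airflow is that each model declares its upstream dependencies and the runner executes models in dependency order. A minimal sketch of that ordering, assuming an acyclic graph and invented model names (real dbt infers the graph from `{{ ref(...) }}` calls in model SQL):

```python
# Sketch of dbt/Airflow-style run ordering: depth-first topological sort
# over {model: [upstream models]}. Assumes the graph is acyclic;
# model names are invented for illustration.

def run_order(deps):
    """Return models sorted so every model runs after its upstreams."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for upstream in deps.get(node, []):
            visit(upstream)          # run upstreams first
        order.append(node)

    for node in deps:
        visit(node)
    return order

models = {
    "stg_orders": [],
    "stg_customers": [],
    "fct_orders": ["stg_orders", "stg_customers"],
    "rpt_revenue": ["fct_orders"],
}
order = run_order(models)
```

A production tool adds cycle detection, parallel execution of independent branches, and incremental skipping of unchanged models, but the scheduling core is this sort.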
CREQ207825
Data Integration
India, Gurgaon

Enjoy the freedom to innovate

At Virtusa, we're thinkers and doers; we thrive on collaboration, competition, and endless curiosity. We get stuff done, together. And we're looking for people with bold, fresh ideas, and that certain spark, who embody what it takes to be a #Virtusan and can move at the speed of change.
