The Evolution from Data Analytics to AI/ML, Transforming Engineering Practices: Deepika Verma, Director of Engineering, Walmart

As someone who has witnessed the remarkable transformation of data analytics over the past decade, Deepika Verma, Director of Engineering at Walmart, believes that we’re at a pivotal moment where traditional business intelligence is evolving into sophisticated AI and machine learning capabilities. The transition isn’t just about adopting new tools; it’s about fundamentally reimagining how we extract value from data and make decisions that drive engineering excellence. In a compelling conversation with ilouge Media, Deepika Verma shares her insights on how AI and machine learning are revolutionizing the way engineers innovate, optimize, and deliver impact.

How do you see AI and machine learning reshaping core engineering practices over the next few years?

AI and ML are revolutionizing engineering practices in ways that would have seemed impossible just a few years ago. The shift from reactive to predictive engineering is perhaps the most significant change that we are witnessing. Traditional data analytics helped us understand what happened, but AI-powered systems now enable us to predict what will happen and automatically respond to emerging issues.

The integration of AI into traditional engineering workflows is creating what I call “smart engineering,” where automation becomes increasingly intelligent through continuous data feedback loops. In my experience, this shift means engineers will spend less time on repetitive design iterations and more time on strategic, creative problem-solving, with AI handling the heavy lifting of data analysis and pattern recognition that once required extensive manual effort. Data insights, when powered by AI, become actionable intelligence that directly improves engineering outcomes.

What are the biggest challenges engineering teams face when integrating AI/ML solutions into existing systems?

The biggest hurdles engineering teams encounter when bringing AI and machine learning into their work come down to two main issues: data and integration.

First, data quality and availability can make or break a project. In my experience, data quality remains the most critical barrier: existing data may be siloed, unstructured, or of inconsistent quality, all of which hampers model training. Engineering teams often have to spend significant time cleaning data, filling gaps, and validating it before they can even train a model.
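A minimal sketch of what such a pre-training cleaning and validation pass can look like. The field names, the plausible-value range, and the record shape here are hypothetical, chosen only to illustrate the kind of checks teams end up writing:

```python
# Minimal sketch of a pre-training data validation pass.
# Field names and the 0-100 range are illustrative assumptions.
REQUIRED_FIELDS = {"sensor_id", "timestamp", "reading"}

def clean_records(records):
    """Drop incomplete rows, coerce types, and reject out-of-range readings."""
    cleaned = []
    for row in records:
        # Reject rows with missing required fields or missing values (gaps).
        if not REQUIRED_FIELDS.issubset(row) or row["reading"] is None:
            continue
        reading = float(row["reading"])
        # Validate against an assumed plausible range before training.
        if not (0.0 <= reading <= 100.0):
            continue
        cleaned.append({**row, "reading": reading})
    return cleaned

raw = [
    {"sensor_id": "a1", "timestamp": 1, "reading": "42.5"},
    {"sensor_id": "a2", "timestamp": 2, "reading": None},   # gap
    {"sensor_id": "a3", "timestamp": 3, "reading": "999"},  # out of range
]
print(len(clean_records(raw)))  # only the first row survives
```

In practice this logic usually lives in a data pipeline with richer schema tooling, but the shape of the work (reject, coerce, validate) is the same.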

Second, integrating AI into existing workflows is far from straightforward. AI development isn’t a linear, step-by-step process like traditional engineering projects. From feeding models new information to setting up AI agents and handling unexpected outputs, there are countless “what-if” scenarios where things can go off track. The unpredictable nature of AI doesn’t fit neatly into engineering’s usual precision, so teams have to create new ways to test and check AI systems that can handle their uncertain outputs.
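One way teams handle that uncertainty is to test structure, ranges, and invariants of AI output rather than exact values. The sketch below is purely illustrative; the fake model, field names, and bounds are assumptions, not a description of any real system:

```python
# Sketch of tolerance-based checks for non-deterministic model output.
# The fake model, its schema, and the bounds are illustrative assumptions.
import random

def fake_model(prompt, seed):
    """Stand-in for a model whose exact output varies run to run."""
    random.seed(seed)
    return {"score": 0.8 + random.uniform(-0.05, 0.05), "label": "ok"}

def check_output(result):
    """Validate structure and ranges instead of asserting exact values."""
    assert set(result) == {"score", "label"}        # schema check
    assert 0.0 <= result["score"] <= 1.0            # range check
    assert result["label"] in {"ok", "flagged"}     # closed vocabulary

for seed in range(5):
    check_output(fake_model("deploy summary", seed))
print("all checks passed")
```

The design choice is the point: exact-match assertions that work for deterministic code break against probabilistic systems, so the test contract shifts to properties the output must always satisfy.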

How do you prioritize AI/ML initiatives within your technology roadmap—what factors drive those decisions?

When prioritizing AI/ML initiatives, I believe successful organizations follow a structured approach that balances strategic alignment with technical feasibility. Technical feasibility assessment should examine alignment with current data assets, required technology complexity and implementation challenges.

In my experience, the most effective approach is using a value versus effort matrix combined with strategic alignment criteria. High-value, low-effort initiatives become your quick wins, while high-value, high-effort projects represent your long-term investments. I’ve found that successful AI roadmaps typically follow a phased approach, starting with near-term initiatives that leverage existing data analytics capabilities and gradually building toward more sophisticated machine learning applications. I particularly emphasize time-to-value, as AI/ML projects that can demonstrate clear benefits within 90 days help build organizational confidence and secure buy-in for more ambitious initiatives.
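The matrix itself can be made concrete with a small scoring pass. The initiative names, scores, and cutoffs below are purely illustrative assumptions, not any actual roadmap:

```python
# Sketch of a value-vs-effort prioritization pass (all scores illustrative).
initiatives = [
    {"name": "demand forecasting", "value": 8, "effort": 3},
    {"name": "LLM code assistant", "value": 9, "effort": 8},
    {"name": "report automation", "value": 4, "effort": 2},
]

def classify(item, value_cut=6, effort_cut=5):
    """Bucket an initiative into the four quadrants of the matrix."""
    high_value = item["value"] >= value_cut
    high_effort = item["effort"] >= effort_cut
    if high_value and not high_effort:
        return "quick win"
    if high_value and high_effort:
        return "long-term investment"
    if not high_value and not high_effort:
        return "fill-in"
    return "reconsider"

# Rank by value per unit effort, then label each quadrant.
for item in sorted(initiatives, key=lambda i: i["value"] / i["effort"], reverse=True):
    print(item["name"], "->", classify(item))
```

The real decision inputs (strategic alignment, data readiness, time-to-value) are qualitative, but scoring them on a shared scale is what makes the quadrants comparable across teams.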

Can you share an example of a successful AI/ML deployment and the key factors that made it work?

One example of a successful AI/ML project is a rule-based deployment analyzer. It uses a powerful generative AI/ML engine to turn metrics, logs and traces into automated “guardrails” that stop non-compliant deployments before they reach production. Some of the key factors that made it work are:

  • Shift-left enforcement: rather than relying on post-deployment audits, it checks rule compliance early, during change creation and deployment, so violations are caught and fixed before code reaches production.
  • Seamless integration into developers’ everyday tools ensures compliance checks happen where they work.
  • Continuous monitoring of live data streams enables instant blocking of bad deployments, contextual alerts and automated ticket creation within minutes.
  • Real-time BI reports and interactive dashboards deliver live metrics on deployment frequency, failure rates and top rule violations, giving teams end-to-end visibility so they can proactively optimize their processes.
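The guardrail idea above can be sketched as a minimal compliance check. The rule names and the shape of the change record here are illustrative assumptions; the analyzer described in this interview is far richer, layering generative AI over live metrics, logs and traces:

```python
# Minimal sketch of a rule-based deployment guardrail.
# Rule names and the change-record fields are hypothetical.
RULES = {
    "has_rollback_plan": lambda change: bool(change.get("rollback_plan")),
    "error_rate_below_threshold": lambda change: change.get("error_rate", 1.0) < 0.01,
    "tests_passed": lambda change: change.get("tests_passed", False),
}

def evaluate_deployment(change):
    """Return (allowed, violations) for a proposed change record."""
    violations = [name for name, check in RULES.items() if not check(change)]
    return (not violations, violations)

change = {"rollback_plan": "revert-sha", "error_rate": 0.002, "tests_passed": True}
allowed, violations = evaluate_deployment(change)
print(allowed, violations)  # True []
```

In a real pipeline the `evaluate_deployment` step would run inside CI/CD, block the release on a non-empty violation list, and open a ticket with the contextual alert, as the bullets above describe.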

How do you ensure the ethical and responsible use of AI in product development and decision-making?

In my experience, ensuring ethical and responsible AI use in product development requires a comprehensive, multi-layered approach that integrates ethical considerations throughout the entire AI lifecycle. This begins with rigorous data governance practices and quality engineering to test AI systems for bias detection and mitigation, ensuring training data diversity and fairness in model outcomes.
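One concrete form that bias detection can take is a demographic-parity check on model outcomes: comparing positive-outcome rates across groups. The group labels, outcomes, and tolerance below are hypothetical, shown only to make the idea tangible:

```python
# Sketch of a simple fairness check: demographic parity difference.
# Group labels, outcome data, and the tolerance are illustrative assumptions.
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_by_group):
    """Max difference in positive-outcome rates across groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

outcomes = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]}
gap = parity_gap(outcomes)
print(round(gap, 2))  # 0.5
```

Demographic parity is only one of several fairness definitions, and where to set the acceptable gap is a policy decision, not a purely technical one, which is exactly why the governance layers described above matter.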

Transparency and explainability are fundamental—stakeholders must understand how AI systems make decisions. This requires implementing robust human oversight frameworks with clear accountability mechanisms, continuous performance monitoring to detect anomalies and regular auditing to identify potential ethical issues.

Data privacy and security become especially critical when transitioning from traditional analytics to AI-powered systems, necessitating privacy-by-design principles and robust data protection measures built into AI systems from the ground up. Success depends on fostering cross-functional collaboration between AI developers, cybersecurity professionals and regulatory agencies to create an ecosystem of responsible innovation where security and ethical concerns are addressed proactively rather than reactively. Ultimately, this requires cultivating an organizational culture that ensures responsible AI practices become integral to the development process, rather than an afterthought.

