Revamping Employee Engagement Through Big Data: A Close Look Into Antony Akisetty’s Work In AI And Analytics
An established Data Scientist, Antony Akisetty has spent more than 13 years applying AI and analytics solutions across the healthcare, manufacturing, and public safety industries.

Amgen and Eastern Michigan University are just some of the places where Antony Akisetty has applied AI-powered technologies in healthcare and business settings. In the public safety sector, he developed solutions that made a difference on multiple levels. His career shows a remarkable ability to take deep, difficult chunks of data and break them down into useful business insights. For his cutting-edge work optimizing data pipelines and applying machine learning, he has received several accolades, and he has led transformational changes to marketing data systems that achieved notable increases in functionality and performance. He recently won the prestigious Outstanding Leadership Award at the Marketing 2.0 Conference for his work with over 500 enterprise clients and on major AI process transformations that netted them over $2 million in savings.
Q 1: What details can you share on how the transition from working in healthcare to manufacturing transformed the way you look at data science?
A: My work at Amgen, in a rigorous healthcare environment, prepared me well for dealing with complex, high-volume data under strict accuracy requirements. I maintained 99.9% uptime for critical data systems while automating ETL processes over billions of rows, which deepened my understanding of the interconnections involved in working with vast amounts of data. For instance, Amgen utilized our technologies to decrease patient readmission rates by 20% and improve patient compliance with medication management by 15%. Moving into manufacturing, I introduced AI-based systems that decreased faults by 25% during my time there. At Accuride International, I developed predictive maintenance systems that improved overall equipment effectiveness by 12% while decreasing equipment downtime by 30%. The magnitude of the change has been in achieving these levels of accuracy and reliability while deploying these methods in real-time manufacturing settings.
Q 2: Could you provide details of your work in conversational AI and natural language processing?
A: My work leading customer service initiatives includes designing and deploying advanced chatbots that reduced service requests from 135 down to 63 while boosting operational performance by 65%. In previous projects, I built NLP-to-SQL pipelines using pandas dataframe agents and implemented several RAG approaches with Q&A chains, among others. More recently, while leading the development of agentic chatbots for the New York City Fire Department (FDNY), I focused on making these tools available to firefighters, EMS responders, and other practitioners, allowing rapid access to crucial data during brief but critical decisions in emergency settings. That work emphasized the requirement that systems correctly answer practical user questions and verify the relevance of each answer.
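The "answer, then verify relevance" pattern he describes can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration of a retrieval-augmented Q&A step, not the FDNY system: it retrieves the best-matching document by keyword overlap and refuses to answer when the relevance score is too low.

```python
# Minimal retrieval-augmented Q&A sketch (illustrative only): retrieve the
# most relevant document by token overlap, then answer only if the retrieved
# context clears a relevance threshold.

def tokenize(text):
    return set(text.lower().split())

def retrieve(question, documents):
    """Return the document with the highest token overlap with the question."""
    q_tokens = tokenize(question)
    scored = [(len(q_tokens & tokenize(doc)), doc) for doc in documents]
    score, best = max(scored, key=lambda pair: pair[0])
    # Relevance = fraction of question tokens found in the chosen document.
    relevance = score / max(len(q_tokens), 1)
    return best, relevance

def answer(question, documents, min_relevance=0.3):
    """Answer from retrieved context only if it clears the relevance bar."""
    context, relevance = retrieve(question, documents)
    if relevance < min_relevance:
        return None, relevance  # refuse rather than answer off-topic
    return f"Based on: {context}", relevance

docs = [
    "hydrant locations for station 12 are listed in the district map",
    "ladder truck maintenance is scheduled every 90 days",
]
resp, rel = answer("where are hydrant locations for station 12", docs)
```

A production system would replace the keyword overlap with embedding similarity and the template answer with an LLM call, but the refusal gate on relevance is the same idea.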
Q 3: How do you process real time data in a manufacturing setting?
A: I have used Golang and gRPC to develop real-time monitoring systems that cut equipment downtime by thirty percent. At Amgen, I handled real-time data pipelines that managed millions of records from numerous sources. My time at WME Group involved implementing real-time pipelines with Kafka and PySpark, which cut data processing time by fifty percent. The core of this task was maintaining the balance between speed and accuracy, which I managed in order to help speed things up. Over time, I have also gained experience meeting deadlines while delivering solutions that assist the relevant parties in their decision-making processes.
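The kind of micro-batch aggregation such Kafka-and-PySpark pipelines perform can be illustrated with a tumbling window. This is a pure-Python sketch of the concept under assumed event shapes, not the production pipeline described above: events are bucketed into fixed-size time windows and counted per key.

```python
# Simplified tumbling-window aggregation: bucket (timestamp, key) events into
# fixed windows and count occurrences per key within each window. Illustrates
# the streaming-aggregation idea behind Kafka/PySpark pipelines.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (epoch_seconds, key) events into fixed windows; count per key."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Simulated sensor events: (epoch seconds, machine id) -- hypothetical names.
events = [(0, "press-1"), (3, "press-1"), (7, "lathe-2"),
          (12, "press-1"), (14, "lathe-2"), (21, "press-1")]
result = tumbling_window_counts(events, window_seconds=10)
# Windows cover 0-9s, 10-19s, and 20-29s.
```

In Spark Structured Streaming the same logic would be a `groupBy` over a window column; the speed-versus-accuracy balance he mentions typically shows up in choosing window size and how long to wait for late-arriving events.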
Q 4: How do you integrate quality control with machine learning?
A: Machines have long struggled with imperfections in image recognition and classification work. Over years of machine learning development at WME Group, I used several models such as Random Forest, Gradient-Boosted Trees, and Logistic Regression. There I also implemented marketing mix modeling (MMM) techniques that improved marketing ROI by over 18 percent, earning great profits for the firm. The central focus of my work has always been to create models that produce interpretable results coupled with accurate prediction capabilities.
Q 5: How do you optimize your data pipeline design to ensure efficient data transfer?
A: While working at Amgen, I handled multiple streams of healthcare data and achieved 99.9% system uptime; that is just one instance in which pipeline optimization has been a constant necessity throughout my career. I have also designed and deployed Kafka, PySpark, and Step Functions to provide real-time GenAI data pipelines processing 150 million records from over 30 sources. Along with this, infrastructure cost was reduced by 30% as a result of migration to and optimization in the cloud. This reflects a fundamental principle: efficient, effective pipelines are designed to scale in performance while handling an increasing load.
Q 6: How do you go about integrating MLOps into model deployment?
A: In my opinion, MLOps is an ecosystem of best practices and tools built to manage the lifecycle of models. I have taken the initiative to reinforce consistency by enabling pre-deployment assessments of models, training code, and data in the CI/CD pipeline. In my most recent effort with the FDNY, I assisted with the strategy of creating MLOps systems that would enhance EMS service through data-driven techniques and models. As a Certified Scrum Master and Product Owner, I encouraged experimentation among teams while strictly adhering to policies, which increased productivity by over 40% while cutting project delivery time by up to 20%.
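A pre-deployment assessment of the kind he describes can be sketched as a gate that a CI/CD pipeline runs before promoting a model. The function names, checks, and thresholds here are illustrative assumptions, not the FDNY configuration: the candidate model must pass a schema check and match or beat the current baseline on a holdout set.

```python
# Hypothetical CI/CD deployment gate: block promotion unless the holdout
# data matches the expected schema and the candidate model is at least as
# accurate as the current baseline.

def validate_schema(batch, expected_columns):
    """Every row must carry exactly the expected set of columns."""
    return all(set(row) == set(expected_columns) for row in batch)

def accuracy(model, batch):
    correct = sum(1 for row in batch if model(row) == row["label"])
    return correct / len(batch)

def deployment_gate(candidate, baseline, holdout, expected_columns):
    """Return True only if both the schema and accuracy checks pass."""
    if not validate_schema(holdout, expected_columns):
        return False
    return accuracy(candidate, holdout) >= accuracy(baseline, holdout)

# Toy holdout set and models (stand-ins for real artifacts in a pipeline).
holdout = [{"x": 0, "label": 0}, {"x": 1, "label": 1}, {"x": 2, "label": 1}]
baseline = lambda row: 0                    # always predicts 0
candidate = lambda row: int(row["x"] > 0)   # classifies all rows correctly
ok = deployment_gate(candidate, baseline, holdout, ["x", "label"])
```

In a real pipeline each check would be a CI job (data validation, training reproducibility, evaluation) whose failure stops the deployment stage.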
Q 7: What role do cloud technologies play in your data science projects?
A: I have utilized GCP technologies like BigQuery, Vertex AI, and Cloud SQL for data storage and processing, and I assisted in a migration from Oracle to Redshift that saved $678,000 annually while working at Amgen. Such technologies have become profoundly important in my work with various companies. This experience has helped me successfully migrate to and utilize AWS and GCP services while drastically cutting infrastructure costs and optimizing systems.
Q 8: What are your thoughts on the process of statistical hypothesis testing and conducting experiments?
A: A solid statistical foundation is integral to data science; it has enabled me to design more accurate A/B tests and power analyses. At my previous companies, Kayak and the FDNY, I worked on resource deployment during emergency cases for extended periods, gathering a multitude of data over the years and working to extract important results from it. The goal is to obtain mathematically sound results that can inform major business decisions, while remaining aware of potential sources of bias and variance.
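The power-analysis step he mentions answers a concrete question: how many samples per arm does an A/B test need to detect a given effect? Below is a minimal sketch of the standard two-proportion sample-size formula under the normal approximation; the conversion rates in the example are invented for illustration and are not tied to any study from the interview.

```python
# Sample size per arm for a two-proportion A/B test (normal approximation):
# n = (z_{1-a/2} * sqrt(2*pbar*(1-pbar)) + z_{power} * sqrt(p1*q1 + p2*q2))^2
#     / (p1 - p2)^2
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Required n per group to detect p1 vs p2 at the given alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Example: detecting a lift from a 10% to a 12% conversion rate.
n = sample_size_two_proportions(0.10, 0.12)
```

Note the practical intuition: small lifts demand far larger samples, which is exactly why running the power analysis before the experiment, rather than after, matters.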
Q 9: What strategies do you incorporate in managing IoT and edge computing for a Manufacturing Setup?
A: Deploying IoT and edge computing in industrial setups comes with its fair share of complications. Nevertheless, I was able to implement a scalable backend service for IoT-centric inventory control using Golang and gRPC, which improved stock accuracy by 95% and reduced the incidence of stockouts by 30%. The primary challenge is ensuring accurate real-time reporting and protecting data integrity across the various systems designed to operate at or utilize the edge.
Q 10: Do you envision any growth opportunities for data science and AI in manufacturing sectors?
A: Data science will become much more sophisticated as predictive models evolve into fully fledged AI products, covering everything from condition monitoring and maintenance to quality inspection tasks. Moreover, I expect more fusion of GenAI with classical AI and advanced IoT, as well as a strong focus on automated decision platforms across the whole spectrum of manufacturing, pharmaceuticals, and the public sector. The difficult part will be deploying these technologies while keeping them reliable and interpretable.
About Antony Akisetty
Antony Akisetty is a remarkable Data Scientist with over thirteen years of experience applying AI and analytics solutions in the domains of healthcare, manufacturing, and public safety. He holds an EMBA from Quantic School of Business and Technology, as well as an MS in Data Analytics from Southern New Hampshire University, giving him both technical and business-strategy skills. In his research and projects, he has achieved considerable enhancements of operational efficiency and effectiveness, such as a 25% reduction in errors through the implementation of AI-powered technology and a 65% improvement in service operations through the introduction of conversational AI. Recently awarded the Outstanding Leadership Award at the Marketing 2.0 Conference, Antony remains on the cutting edge of data science and delivers quantitative business value across numerous sectors.