Are you ready to take your DevOps practice to a whole new level? If so, this course is perfect for you!
You may already be familiar with the term “DevOps lifecycle” and its different stages, including planning, development, testing, deployment, and monitoring. But have you ever wondered how these stages are implemented in real-world organizations?
If so, this course has got you covered.
In this hands-on course, we’ll walk you through an end-to-end DevOps project in a sprint format, implementing the setup step by step in a GCP cloud environment.
Are you ready to take your DevOps skills to the next level and impress potential employers in your next interview?
Then, join us on this exciting journey and gain an in-depth understanding of a real-world DevOps project setup.
All you need is a basic understanding of:
Take advantage of this incredible opportunity to advance your DevOps career today!
Raghunandana Krishnamurthy, currently serving as a Staff Data Engineer and MLOps Engineer at Talabat, is renowned for his expertise in the fields of data engineering and machine learning operations. He has a strong background in both GCP and AWS platforms, utilizing tools like SageMaker and VertexAI to accelerate model development and deployment.
His experience at HelloFresh as a Senior Data Engineer involved migrating legacy ETL workloads to AWS EMR and managing a hybrid data infrastructure, showcasing his proficiency with big data cloud stacks. He was also responsible for creating visibility into ETL pipelines through monitoring and alerting with Prometheus, Grafana, and other tools.
At Careem, Raghunandana was a Data Engineering Technical Lead, focusing on big data platform development and ensuring the health and alignment of the growing team. His responsibilities included maintaining big data platforms and ensuring data quality for analytics, machine learning, and AI applications.
His tenure at Cerner Corporation as a Big Data Engineer further highlights his deep understanding of Hadoop systems and DevOps culture. There, he was involved in developing, maintaining, and upgrading Hadoop clusters, as well as building scalable distributed systems.