Last year, during the summer break, I went to IIT Hyderabad for my internship.

Thirty kilometres from the main city lay a concrete marvel, still under construction, with around ten years to go before completion. I reached the college around 8 PM, got my room allotted, and unpacked my stuff. I was lost and nervous, but excited.


An Evening at IIT Hyderabad


The next day, I met my PhD mentor, Charantej Reddy Pochimreddy, since my guide was not available. In the meantime, Charan gave me a tour of the lab and walked me through the work they do there. Their research aimed mainly at signal processing and the theoretical understanding of neural networks; they also studied adversarial machine learning. Charan suggested some readings and asked me to list all my ideas and formulate a research proposal from them. For the rest of the day, I explored cleverhans.io to get a sense of the dimensions of adversarial machine learning. The day after, I met Prof. Aditya Siripuram and showed him the proposal. He suggested an application-based project instead of diving into theoretical approaches.

Finally, after rigorous first-principles questioning, we developed an idea: generate the training dataset of a black-box classifier. Charan's experience was quite helpful; he had already submitted some research papers. Our initial readings were based on decision trees and forming fuzzy images. We scraped some more papers and found a really fascinating GAN-based architecture.

My daily routine revolved around mess-lab-mess-lab-mess-room. Soon I met Divyansh Tiwari, another intern, from Jabalpur, who at first thought I was an M.Tech student. As usual, I was running late, and so was Divyansh. Day in, day out, Divyansh became my first friend at IIT Hyderabad, along with his gang: Sonam, a mechanical engineering intern; Kailash, a skeptical, athletic civil engineering intern; and Chirag, another mechanical engineering intern. Chirag and Sonam worked under the same professor. On June 20, Kakul Shrivastava, aka Shrini, a hardworking introvert, came to IIT Hyderabad for an internship. For those who don't know, Kakul is from ZHCET, AMU.

Simultaneously, our project was slowly blooming. I aimed to develop an architecture similar to a Generative Adversarial Network to learn the feature space of the classifier. The classifier distinguished between images of Orlando Bloom and Keira Knightley. Once the classifier was trained, we attached it in parallel to the discriminator, with the classifier's weights frozen. To understand the feature space of the classifier, we devised a weighted loss with a dynamic scaling factor. As the generator produces an image, we pass it through both the discriminator and the classifier. The loss from the discriminator is multiplied by alpha, the loss from the classifier by (1 - alpha), and the sum is backpropagated. Initially alpha was set to 1, and it changed as the model got better; this made sure that the classifier gradually took over the role of the discriminator. Now that the whole foundation was laid, our aim was to stabilize the GAN.
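For the curious, here is a rough PyTorch sketch of that weighted generator loss. The names (generator, discriminator, classifier, target_labels) and the exact form of each loss term are illustrative, not our actual code:

```python
import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, classifier, z, target_labels, alpha):
    """Weighted generator loss: alpha scales the adversarial term,
    (1 - alpha) scales the classifier term. alpha starts at 1 and is
    decayed as training progresses."""
    fake = generator(z)

    # Non-saturating adversarial loss: push D(fake) toward "real".
    d_logits = discriminator(fake)
    adv_loss = F.binary_cross_entropy_with_logits(
        d_logits, torch.ones_like(d_logits))

    # Classifier loss: push generated images toward the target class.
    # The classifier stays frozen (requires_grad=False on its weights),
    # but gradients still flow through it back into the generator.
    clf_logits = classifier(fake)
    clf_loss = F.cross_entropy(clf_logits, target_labels)

    return alpha * adv_loss + (1 - alpha) * clf_loss
```

A simple decay of alpha over epochs is one way to shift the weight from the discriminator toward the classifier as the model improves.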

Meanwhile, I enjoyed the peaceful life: a certain chill, a sobriety. I enjoyed three free coffees daily at Shiru's Cafe and had endless discussions about the weirdness of the shape of the building. Eventually, the other interns left; only Kakul and I remained. Interestingly, we both worked on the same floor, in two different labs at diagonally opposite ends. We used to share breakfast, lunch, dinner, and evening coffee. It was fun. She is one of the greatest friends I have ever had; I bonded quite well with her, and she is always available to help. She is like the President of the Happy Club: no expectations, no demands. Probably the nicest person I have met.

But my project wasn't sailing so smoothly. GANs are very hard to stabilize: the most fragile neural networks I have ever played with. Even a slight change in a hyperparameter and a GAN flips out. Most of my time went into stabilizing the GAN and cleaning our data. For a month or so I tried to fix the three very basic problems of GANs: mode collapse, diminished gradients, and non-convergence. To fix these, I tried the Wasserstein GAN, Wasserstein GAN with Gradient Penalty, Energy-Based GAN, and Spectral Normalization GAN. Most of the time I could address only two of the three problems, but with the Spectral Normalization GAN there was a substantial improvement in the results.
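Spectral normalization also turned out to be the easiest of these to bolt on. In PyTorch it amounts to wrapping each of the discriminator's layers; the architecture below is an illustrative sketch, not the network we actually used:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Spectral normalization constrains each layer's largest singular value
# to 1, keeping the discriminator roughly 1-Lipschitz, which combats
# diminished gradients and helps convergence.
discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),    # 64x64 -> 32x32
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),  # 32x32 -> 16x16
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(128, 256, 4, stride=2, padding=1)), # 16x16 -> 8x8
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    spectral_norm(nn.Linear(256 * 8 * 8, 1)),  # assumes 64x64 RGB inputs
)
```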

Soon I had to leave: the vibe, the calmness, the food, the melancholy, and most importantly the late-night shows with Kakul. We were both going to meet in college, but… Period.