Inspired by the trend started by Derek Sivers, this page details what I’m up to right now. Starting in 2022, I will try to update this page every month, jotting down the problems I’m pondering at the time.
The format of this page is a lil informal, so please bear with me!
 

Monthly Updates

10/03/22

Questions: What features would one expect an open-domain dialogue system to have in order to behave like a human?
Thoughts: I believe that, first and foremost, if the model is capable of truly perceiving the user’s prompt, it would lead to a more nuanced representation of the user’s query within the system. Only then can one work on improving the available methods.
Promising Directions: Once you have an honest and practical representation of an input, the agent has to have some decision-making capabilities (see what I did there! 😬 of course this leads to DRL) to make smart choices while conversing with the user.
What’s Next?: Well, meta-learning is what I am currently studying, with a flavour of reinforcement learning. Hopefully, this will lead me to the concepts behind the ‘smart decision-making’ aspect of the architecture. Fingers crossed!

09/01/22

Working on open-domain dialogue systems.
Questions: How can we measure the information density of the replies generated by dialogue systems? Is the information in the responses traded off against the quality and coherence of the conversation?
Thoughts: There are a lot of metrics to discuss, but no single one captures the overall quality of a conversation.
Promising Directions: Why didn’t anyone tell me about Gricean Maxims before? 🤷🏻‍♂️
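Since people keep asking what I mean by “information density”: one crude proxy for informativeness that shows up in the dialogue literature is distinct-n, the ratio of unique n-grams to total n-grams across a set of replies. Low values flag the bland “I don’t know” responses that plague open-domain systems. Here’s a toy sketch of my own (naive whitespace tokenisation, not a polished metric):

```python
from collections import Counter

def distinct_n(texts, n=2):
    """Fraction of unique n-grams across a list of replies.

    A higher score means more lexically diverse (roughly, more
    informative) replies; a system that repeats the same safe
    answer scores low. Tokenisation here is just lowercased
    whitespace splitting, which is a deliberate simplification.
    """
    ngrams = Counter()
    for text in texts:
        tokens = text.lower().split()
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

replies = ["i don't know", "i don't know", "the weather is sunny today"]
print(distinct_n(replies, n=2))  # → 0.75 (6 unique bigrams out of 8)
```

It obviously says nothing about coherence, which is exactly the trade-off I’m puzzling over.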

17/10/21

Working on mitigating racial bias in language models.
Observations: I succeeded in reducing the overall bias of a given masked language model. The proposed deep reinforcement learning architecture is working well. However, there is one problem. What’s that?
Problem: These AI systems are smart. How so? Well, drop me an email to find out. It’s not that straightforward. 😉
 

 
©Rameez Qureshi
 