When I came into the lab today, I realized there was one thing my program still couldn't do: find the center of the pupil. When I discussed this issue before, I was told I could do one of three things: research the Hough Circle transform and try to implement it, use Aayush's code, or extract the center from the Pupil Labs cameras. Since no one was in the lab when I came in, I tried to implement the Hough Circle transform myself. By the middle of the day, I had a program that could reliably draw two circles where the pupil was. This happens because, as I quickly realized, the pupil is an ellipse, not a circle. I treated the centers of the two circles as the foci of an ellipse, and then, using some math I learned in Pre-Calculus this year, I fit an ellipse around the pupil. The only thing left to do now is to add all of this new code as another function in my main program for determining torsional eye movements. I have attached below a picture that my new program outputs.
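For anyone curious, the Pre-Calculus part boils down to this: the ellipse's center is the midpoint of the two foci, half the focal distance gives c, the distance sum from any boundary point gives 2a, and b = sqrt(a^2 - c^2). Here is a minimal sketch of that math; the function and the sample numbers are illustrative, not my actual lab code:

```python
import math

def ellipse_from_foci(f1, f2, edge_point):
    """Fit an ellipse given its two foci and one point on its boundary.

    For any point P on the ellipse, dist(P, f1) + dist(P, f2) = 2a.
    The center is the midpoint of the foci, c is half the focal
    distance, and b = sqrt(a^2 - c^2).
    """
    cx = (f1[0] + f2[0]) / 2
    cy = (f1[1] + f2[1]) / 2
    c = math.dist(f1, f2) / 2
    two_a = math.dist(edge_point, f1) + math.dist(edge_point, f2)
    a = two_a / 2
    b = math.sqrt(a * a - c * c)
    # Orientation of the major axis, in degrees.
    angle = math.degrees(math.atan2(f2[1] - f1[1], f2[0] - f1[0]))
    return (cx, cy), a, b, angle

# Foci at (-3, 0) and (3, 0); the point (5, 0) lies on the ellipse,
# so 2a = 8 + 2 = 10, giving a = 5, c = 3, b = 4.
center, a, b, angle = ellipse_from_foci((-3, 0), (3, 0), (5, 0))
```

In my real program the two foci come from the Hough circle centers and the boundary point from the detected pupil edge.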
Today was the last day Jeff Pelz would be in the lab for a week, so I spent the time reviewing both my program and my presentation with him. When he reviewed my code, it was hard for him to understand because I had not used descriptive variable names. We also discussed other ways to clean up my code, like writing comments as I go and including more whitespace. Along with these changes, I also wrote functions so the different tasks my program accomplishes are separated into easy-to-read chunks. When we went over my presentation, I got a lot of good advice from both Aayush and Jeff. I spent the rest of the day revising both my program and my presentation to make them as understandable as possible.
At the beginning of today, I caught up on a lot of overdue blog posts and worked on the presentation I will give in a week and two days. After I got all of this work out of the way, I worked on getting my torsional eye movement program to work with real eyes and masks provided by Aayush. I did not have a working program by the end of the day, but after I went home I found the bug and fixed it before I came in the next day.
Today I wrote more lines of code than on any other day of this internship. I started off talking with Dr. Pelz about ways to induce torsional eye movements. He said that I could get ground truth manually from torsional eye movements, so it would be okay to have the subject tilt their head. I decided to write a program that rotates a mask about the pupil until it matches the new image. I determined how similar the two images were by subtracting them and taking the average brightness of the difference image. Most of my day was spent writing and debugging this code. Some logic errors occurred because I am accustomed to Java and not Python, but by the end of the day I had a working program. Attached below are the results of the program when I artificially rotated the image three degrees and checked every 0.05 degrees.
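The core of the approach can be sketched in a few lines. This is only an illustration in plain NumPy (nearest-neighbor rotation and a synthetic bar image instead of my real mask and eye frames): rotate the mask about a center point through a range of candidate angles, and keep the angle whose mean absolute difference from the target frame is smallest.

```python
import numpy as np

def rotate_about(img, angle_deg, center):
    """Rotate a grayscale image about a point (nearest-neighbor resampling)."""
    theta = np.deg2rad(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-map each output pixel back into the source image.
    x0, y0 = center
    xs_src = cos_t * (xs - x0) + sin_t * (ys - y0) + x0
    ys_src = -sin_t * (xs - x0) + cos_t * (ys - y0) + y0
    xs_src = np.clip(np.round(xs_src).astype(int), 0, w - 1)
    ys_src = np.clip(np.round(ys_src).astype(int), 0, h - 1)
    return img[ys_src, xs_src]

def best_rotation(mask, frame, center, angles):
    """Return the candidate angle whose rotated mask differs least from frame."""
    scores = [np.mean(np.abs(rotate_about(mask, a, center).astype(float)
                             - frame.astype(float)))
              for a in angles]
    return angles[int(np.argmin(scores))]

# Synthetic check: a bright bar rotated by 3 degrees about the image
# center, searched in 0.05-degree steps from -5 to +5 degrees.
mask = np.zeros((101, 101), dtype=np.uint8)
mask[48:53, 20:81] = 255
frame = rotate_about(mask, 3.0, (50, 50))
angles = [round(k * 0.05, 2) for k in range(-100, 101)]
found = best_rotation(mask, frame, (50, 50), angles)
```

In practice a subpixel-accurate rotation (e.g. OpenCV's warpAffine) would be used instead of this nearest-neighbor remap, but the search loop is the same idea.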
Although I was able to get a considerable amount of work done, today felt slow. There are no morning meetings this week because too many interns have other commitments. At the very start of the day, I brainstormed different ways to induce torsional eye movements. I didn't want to have the subject move their head, because that is less reliable than inducing torsional eye movements with something that can be measured easily, like a motor. However, I could not seem to induce any measurable torsional eye movement by modifying a stimulus. After debugging my edge detection code, I got it to work perfectly. After all the time I spent on it, it was very satisfying to see what my program could do. I have attached pictures of my edge detection program along with the outputs it creates. Tomorrow I will do more work in OpenCV relevant to torsional eye movements and find a way to induce them.
Today my family took a one-day vacation: we went kayaking on Canadice Lake and swam at Stony Brook State Park. However, over the weekend I made significant progress on my Python program in OpenCV that uses a kernel to find edges in an image or apply a Gaussian blur.
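The kernel idea is the same for both jobs: slide a small matrix over the image and take a weighted sum at each pixel. A rough NumPy-only sketch (the step image and kernels here are toy examples, not my actual program):

```python
import numpy as np

def convolve2d(img, kernel):
    """Apply a small kernel to a grayscale image (zero padding, stride 1).

    This is cross-correlation, the same convention cv2.filter2D uses.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            # Each kernel tap adds a shifted, weighted copy of the image.
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# A horizontal Sobel kernel responds strongly to vertical edges...
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
# ...while a normalized 3x3 Gaussian kernel gives a mild blur.
gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16

# A step image: dark on the left half, bright on the right half.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = convolve2d(img, sobel_x)
blurred = convolve2d(img, gauss)
```

The Sobel output is large only along the brightness step, while the Gaussian output smooths it; swapping in a bigger Gaussian kernel blurs more.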
The Undergrad Research Symposium was today! I was able to go to six presentations and learn a lot, as well as continue manipulating images in OpenCV. The first three presentations I went to were all about neural networks and machine learning. These were a little hard to understand, but I was able to learn something from each of them. My favorite was about trying to identify the difference between positive and negative laughter while subjects watched various videos. The last three presentations were about mechanical engineering and astronomy. These interested me because I want to major in mechanical engineering, and I was able to learn a lot. One presentation in particular surprised me because it was given by a high school student, and he was very well-spoken and knowledgeable. In between presentations, I was also able to work on OpenCV. By the end of the day, I had finished the basics (rotating, cropping, scaling, and flipping) and had started to work on the worksheet Aayush gave me. This involved converting images to grayscale, increasing the contrast of images by converting from BGR to HSV, and starting to make a kernel that I could apply to images. Today was a fun day, and I am now very interested in using more OpenCV.
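The basics really are just array operations. Here is a tiny illustrative sketch in plain NumPy (in my actual work I used OpenCV functions; the 2x2 image below is made up):

```python
import numpy as np

# A tiny 2x2 "BGR" image (OpenCV stores channels blue-green-red).
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

flipped = img[:, ::-1]   # horizontal flip is just reversed columns
cropped = img[0:1, 0:1]  # cropping is plain array slicing
rotated = np.rot90(img)  # a 90-degree rotation

# Luminance-weighted grayscale, using the same weights OpenCV's
# BGR -> GRAY conversion uses: 0.114*B + 0.587*G + 0.299*R.
b, g, r = img[..., 0], img[..., 1], img[..., 2]
gray = np.round(0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)
```

Arbitrary-angle rotation and scaling need resampling, which is where OpenCV's built-in functions come in.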
I was able to get a lot of work done today! At the beginning of the day, I decided to put off working with OpenCV and focus entirely on filling out the datasheet I made the template for yesterday. Some small changes needed to be made to the math and the overall layout of the document because of mistakes I made yesterday. After those changes, entering the data was relatively straightforward and simple. Attached below is the document with all the final data in it, along with some histograms and metrics on the sides. The average, standard deviation, and other metrics for the uncorrected data are highlighted in red, while the corresponding values in green have been corrected for the refresh rate of the iPad. After lunch, I started learning the basics of OpenCV, and pretty soon I had an image with a circle and a square drawn on it. I found image processing with OpenCV very interesting, so I asked if there was anything else I should learn. Aayush then gave me a big list of tasks I could work on in OpenCV over the next few days.
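Drawing the shapes turned out to be simple. OpenCV provides cv2.circle and cv2.rectangle for this; as an illustration, here is the same effect in NumPy alone (the canvas size, positions, and intensities are made up):

```python
import numpy as np

# Blank 100x100 grayscale canvas.
canvas = np.zeros((100, 100), dtype=np.uint8)

# A filled square is plain slicing.
canvas[10:31, 10:31] = 255

# A filled circle of radius 15 centered at (x=70, y=60): select every
# pixel within that distance of the center using a boolean mask.
ys, xs = np.mgrid[0:100, 0:100]
circle = (xs - 70) ** 2 + (ys - 60) ** 2 <= 15 ** 2
canvas[circle] = 128
```

The OpenCV calls do the same pixel-filling, plus extras like line thickness and anti-aliasing.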
Today was disappointing. After completing the dataset I mentioned yesterday, it was clear that changes needed to be made to the experiment. Multiple solutions were proposed, including using a different monitor and switching from Keynote to Python. We ended up using Python to control the slide changes but continued to use the iPad. This meant less work for me because I did not need to come up with a new parametric equation to describe the refresh rate of the display. Making the new slides required many lines of code, so, to help me understand the new system, Aayush gave me the task of researching how to read images in Python as well as how to draw circles and squares. Most of the day was spent brainstorming a new solution, going to a master's thesis defense, and creating an Excel template where over 500 data points will be entered. See attached.
Today was a Monday. Over the weekend, Aayush had sent me an email with a video to analyze frame by frame, so I was able to start on that right after our morning meeting. I spent a lot of time setting up the Excel sheet to compile the data in. After setting up the sheet, actually entering all the data was a relatively arduous process. By the end of the day, I had collected 101 data points for Aayush's experiment, and it was important that I record all these data points for a second video as well. I started by double-checking any outliers I found in the first data set. Four outliers were found, which meant more work for both Aayush and myself to fix the problem. Attached below are 202 color-coded data points, along with 2 histograms.