UAV Targeting System
Individual project for CSE190a at UCSD, Winter Quarter 2008. The project is based on the design and requirements specified by the UCSD AUVSI team, which is entering the AUVSI UAS competition for UAV reconnaissance.
Monday, March 17, 2008
Wednesday, March 12, 2008
Monday, March 10, 2008
Location
Over the weekend I have also implemented a location-based tool that lets me visualise where the detected image targets think they actually are. I simply took the best-matching window and drew a red square around it so that the chosen location can be visualised. The coordinates of this location are taken from the centre of the chosen window and printed to the output file.
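The best-matching window step can be sketched as a sliding-window search over the hue channel, scoring each window's histogram against a model histogram with the chi-squared distance. This is only a sketch under my own assumptions (the window size, step, and `best_window` name are hypothetical, not the project's actual code):

```python
import numpy as np

def best_window(image_hues, model_hist, win=32, step=8, bins=32):
    """Slide a window over a hue-channel image and return the
    position of the window whose hue histogram best matches the
    model histogram under the chi-squared distance."""
    h, w = image_hues.shape
    best_pos, best_dist = None, float("inf")
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = image_hues[y:y + win, x:x + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            hist = hist / max(hist.sum(), 1)
            # Chi-squared distance between window and model histograms
            d = 0.5 * np.sum((hist - model_hist) ** 2
                             / (hist + model_hist + 1e-10))
            if d < best_dist:
                best_pos, best_dist = (x, y), d
    return best_pos, best_dist
```

The returned position is where the red square would be drawn, and its centre gives the coordinates written to the output file.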
To get the distance from the centre, we take the tangent of half the field of view times the altitude, which gives half the ground distance the whole image covers. From a target's pixel distance we can then work out how far it lies from the image centre, which corresponds (approximately) to the GPS location of the plane.
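As a worked sketch of this geometry (assuming the camera points straight down; the function and parameter names are my own, not the project's):

```python
import math

def pixel_to_ground_offset(px, py, width, height, altitude_m, fov_deg):
    """Approximate ground offset (metres) of a pixel from the point
    directly below the plane, assuming a straight-down camera."""
    # Ground width covered by the whole image: 2 * altitude * tan(FOV/2)
    ground_width = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    metres_per_pixel = ground_width / width
    # Pixel offset from the image centre, scaled to metres
    dx = (px - width / 2.0) * metres_per_pixel
    dy = (py - height / 2.0) * metres_per_pixel
    return dx, dy
```

A pixel at the image centre maps to zero offset, i.e. the plane's own (approximate) GPS position.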
Image location boxes can be viewed below:
Blurred Test Solves Several Cases
Testing the blurred images again, I found a human error in my code caused by reusing code from elsewhere in the program. It is now fixed and actually shows a good improvement: all images that are truly present are classified as Present. I have yet to see a test/sample case where the images have been incorrectly classified.
With the new blurred-image code, the system is now also picking up the previously blurred images, showing that a careful selection of varying conditions, from illumination to blur tests, gives results that are usable throughout the system. Ideally a larger training set would be used rather than the minimal 12 training images I am currently working with, yet this gives reasonable results to the minimum degree of accuracy required: 100% coverage of all target images, while still letting through a few similar images that are not truly targets (i.e. false positives).
2: 0.119070 Present - Normal
3: 0.128092 Present - Normal
4: 0.082883 Present - Normal
5: 0.131328 Present - Normal
6: 0.072161 Present - Dark
7: 0.053662 Present - Normal
8: 0.159959 Present - Normal
9: 1.000000 Empty
10: 0.113700 Present - Normal
11: 0.156434 Present - Normal
12: 1.000000 Empty
13: 0.045220 Present - Normal
14: 0.077655 Present - Blurred Test //Old Blurry image now found
15: 0.099331 Present - Blurred Test //Old Blurry image now found
16: 1.000000 Empty
17: 1.000000 Empty
18: 1.000000 Empty
19: 0.181530 Present - Normal
20: 0.187497 Present - Normal
21: 0.183492 Present - Normal
22: 0.154783 Present - Normal //Non blurry version of image
23: 0.140616 Present - Normal //Non blurry version of image
24: 0.163971 Present - Normal //New Blurry image found as a normal instead
25: 0.095063 Present - Blurred Test //New Blurry image found
Blurred Image Results
The blurred image results are quite bizarre. All the cases of blurred targets are found, as are the base cases. However, these blurred images are not being matched to the blurred test images; they are instead classified by the light and normal image sets. Very peculiar, so I am now debugging those cases to see what the results are for just the blurred cases and what the return value is in this situation.
(Trying to add pictures, but Google has a network error...)
Ahh, I solved it another way; apparently Google died this morning.
Wednesday, March 5, 2008
Improvements To Illumination Variation
I have made some changes to the base images used for the illumination variation. However, I found that the system required some tweaking to the range of what is classified as a target, as the average chi-squared difference between the tests was too strict. To loosen this I simply set the boundary to the larger of the average chi-squared or 0.1.
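A minimal sketch of this loosened boundary (the `chi_squared` and `classify` helpers are my own naming, assuming normalised histograms, not the project's actual code):

```python
import numpy as np

def chi_squared(h1, h2, eps=1e-10):
    """Chi-squared distance between two normalised histograms."""
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def classify(score, avg_chi):
    """Classify a test score as Present/Empty using the loosened
    boundary: the larger of the average chi-squared or 0.1."""
    threshold = max(avg_chi, 0.1)
    return "Present" if score < threshold else "Empty"
```

With a small average chi-squared (e.g. 0.032), the 0.1 floor does the work; with a large average, the average itself becomes the boundary.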
By doing so I get a better classification result for the images.
AVC: 0.191358 AVD: 0.032299 AVL: 0.014278
2: 0.109081 Present - Normal
3: 0.045300 Present - Light
4: 0.022777 Present - Light
5: 0.058292 Present - Light
6: 0.083343 Present - Dark
7: 0.030796 Present - Light
8: 0.063477 Present - Light
9: 1.000000 Empty //Actually Present, just extremely bright.
10: 0.060175 Present - Light
11: 0.167384 Present - Normal
12: 1.000000 Empty //Actually Present, just extremely dark.
13: 0.156815 Present - Normal
14: 1.000000 Empty //Actually Present; image is blurred.
15: 1.000000 Empty //Actually Present; image is blurred.
16: 1.000000 Empty
17: 1.000000 Empty
18: 1.000000 Empty
19: 0.187468 Present - Normal //Actually Empty image of bag strap
20: 0.171188 Present - Normal //Actually Empty image of chemistry book
21: 0.186641 Present - Normal //Actually Empty image of penny
I see this as a massive improvement. I was always dubious about blurring, and now I can see clearly that this is an issue I still have to fix. I hope that a test with an unblurred version of such an image will be picked up; that would prove to me that blurring is the issue, and not the classification of that model/style of image.
Immediate Possibilities:
- Test Blur Theory Against Non Blurry Images
- Check Dark Images for Improvements
- Target Location
Varying Illumination - Color Constancy Issues
After spending most of the weekend reading color constancy papers, and finding things that seem quite damaging to my methods, I have come across one great big artifact of them. When giving my system samples of varying lighting conditions, artificially created as has been done before, I found that although the samples were taken under similar conditions, with the same camera, in virtually the same place in the room, they have massively different values in the hue channel, thus ruining my results. I do believe this is due to poor selection of target images, yet it does highlight that if such an image had appeared in the live data set rather than the training one, it would have been misclassified. This is the general issue with color constancy: human perception does not necessarily match the machine's representation of the image.
The images below are my samples for dark targets of a bottletop. Clearly they all contain something circular, but it is hard to see; possibly red, but not a clear view.
Yet in the hue channel we can clearly see that there is a massive difference between these images, as the right-hand column is completely variant. I believe this to be color constancy, or rather the lack of color constancy, in effect.
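The hue-channel comparison can be sketched as follows (a minimal sketch with my own `hue_histogram` name, assuming RGB values in [0, 1]): building a hue histogram per sample, then comparing two samples of the same object with the chi-squared distance, exposes exactly this lighting-induced drift.

```python
import colorsys
import numpy as np

def hue_histogram(rgb_image, bins=32):
    """Normalised hue histogram of an H x W x 3 RGB image whose
    channel values lie in [0, 1]."""
    flat = rgb_image.reshape(-1, 3)
    # Hue is the first component of the HSV conversion, in [0, 1)
    hues = np.array([colorsys.rgb_to_hsv(r, g, b)[0] for r, g, b in flat])
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()
```

Two photos of the same bottletop under different lighting can produce histograms with mass in entirely different bins, which is why the chi-squared scores blow up despite the scenes looking alike to a human.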