Due Date

11:59PM, Thursday, February 28, 2019

Introduction

This homework reviews some important concepts related to Project 2 (Panorama Stitching). It is composed of a coding problem and two short answer questions.

Part 1: Coding - Harris Corner Detection (50 pts)

In Project 2, you’ll be using Matlab’s built-in corner detector, via the cornermetric function. In this coding assignment, we’ll ask you to implement a simple corner detector yourself: the “Harris” corner detector.

Check out these slides from Penn State’s CSE486 for an excellent overview of how the Harris Corner Detector works.

What you need to do.

Write a Matlab function corner_response = myharris(I, window_size, corner_thresh) that implements Harris corner detection. To make things a bit easier, you don't need to implement the final non-maximal suppression step; your function should simply return a heatmap of corner response for a given image, window size, and corner threshold.
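
For reference, here is a minimal sketch of one way the function could be structured, assuming a grayscale image of class double, a simple box window, and the standard Harris response R = det(M) - k*trace(M)^2 with k = 0.04 (the value of k and the choice of box window are assumptions, not requirements of the assignment):

    function corner_response = myharris(I, window_size, corner_thresh)
    % Sketch of a Harris corner response map (no non-maximal suppression).
    % Assumes I is a grayscale image of class double.

    % Image gradients along x (columns) and y (rows).
    [Ix, Iy] = gradient(I);

    % Products of gradients at each pixel.
    Ixx = Ix .^ 2;
    Iyy = Iy .^ 2;
    Ixy = Ix .* Iy;

    % Sum the products over a local window (box filter of size window_size).
    w = ones(window_size) / window_size^2;
    Sxx = conv2(Ixx, w, 'same');
    Syy = conv2(Iyy, w, 'same');
    Sxy = conv2(Ixy, w, 'same');

    % Harris response R = det(M) - k * trace(M)^2, with k = 0.04 (assumed).
    k = 0.04;
    detM   = Sxx .* Syy - Sxy .^ 2;
    traceM = Sxx + Syy;
    R = detM - k * traceM .^ 2;

    % Keep only responses above the threshold to form the heatmap.
    corner_response = R .* (R > corner_thresh);
    end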

See “Submission Guidelines” for instructions on what to include in your report.

Part 2: Short Answer

Question 1: Harris Corner Detector Properties (15 pts)

Corners are useful for matching features between different images of the same scene, which might have changes in lighting, viewpoint, etc. Consider the following simple changes that we might expect to occur between two images of a given scene:

  • image rotation
  • image scaling
  • incrementing all pixel values by a constant
  • multiplying all pixel values by a constant

Based on your understanding of the Harris Corner Detector, how robust do you think it would be to each of these changes? For example, if an image were rotated, would the Harris corner detector identify the same corners before and after rotation?

Answer briefly, in two paragraphs or fewer.

Question 2: Image Warping and Invertible Transformations (35 pts)

Given a digital image I and an invertible transformation T of the form x' = T(x), we would like to compute the warped image I', whereby each point x in the original image is transformed to its new location x' = T(x).

This type of image warping is exactly what the Matlab imwarp function does, for example.
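
For instance, a call along the following lines applies such an affine warp with imwarp (the 30-degree rotation and the built-in cameraman.tif sample image are arbitrary choices for illustration; this requires the Image Processing Toolbox):

    I = imread('cameraman.tif');            % built-in sample image (illustrative)
    theta = 30;                             % arbitrary rotation angle in degrees
    A = [ cosd(theta)  sind(theta)  0; ...
         -sind(theta)  cosd(theta)  0; ...
          0            0            1];     % 3x3 affine matrix (row-vector convention)
    tform = affine2d(A);
    J = imwarp(I, tform);
    imshowpair(I, J, 'montage');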

We could envision a somewhat straightforward algorithm for performing this image warp: for each pixel location x in the original image, compute the nearest pixel location to the transformed point x' = T(x) in the warped image, and copy the color found at x into the warped image at that location.

However, the vastly preferable algorithm is to loop over the destination pixels in the warped image, and use the inverse transformation to identify the nearest pixel in the source image and copy the color from that source pixel to the destination.
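
For concreteness, here is a minimal sketch of this destination-to-source loop, using nearest-neighbor sampling and assuming a grayscale source image and a 3x3 homogeneous transformation matrix T (the function name inverse_warp and these representation choices are illustrative assumptions, not part of the assignment):

    function warped = inverse_warp(src, T, out_size)
    % Backward warping: loop over destination pixels and sample the source.
    warped = zeros(out_size, 'like', src);
    Tinv = inv(T);
    for yd = 1:out_size(1)
        for xd = 1:out_size(2)
            % Map the destination pixel back into the source image.
            p  = Tinv * [xd; yd; 1];
            xs = round(p(1) / p(3));
            ys = round(p(2) / p(3));
            % Copy the nearest source pixel if it lies inside the image.
            if xs >= 1 && xs <= size(src, 2) && ys >= 1 && ys <= size(src, 1)
                warped(yd, xd) = src(ys, xs);
            end
        end
    end
    end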

What is the difference between the two approaches? Why is the second one preferable? Please answer in no more than a paragraph.

Submission Guidelines

We will deduct points if your homework doesn’t follow these specifications.

File tree and naming

Your submission on Canvas must be a zip file, following the naming convention YourDirectoryID_hw2.zip. For example, xyz123_hw2.zip. The file must have the following directory structure, based on the starter files:

YourDirectoryID_hw2.zip/

  • myharris.m
  • report.pdf

Report

Please run your corner detector on these images (also used in Project 2) for several different window sizes and corner threshold values, and include the results in your report.
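
For example, a sweep along these lines produces the response maps to include (the filename and the particular window sizes and thresholds below are placeholders; substitute the provided images and whatever values you find informative):

    % Hypothetical filename; substitute the provided Project 2 images.
    I = im2double(imread('input1.jpg'));
    if size(I, 3) == 3
        I = rgb2gray(I);                    % requires the Image Processing Toolbox
    end
    for window_size = [3 5 9]
        for corner_thresh = [1e-4 1e-3 1e-2]
            R = myharris(I, window_size, corner_thresh);
            figure; imagesc(R); axis image; colorbar;
            title(sprintf('window = %d, thresh = %g', window_size, corner_thresh));
        end
    end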

Also include your responses to the short answer questions from part 2.

As usual, your report must consist of full English sentences, not commented code.

Collaboration Policy

You are encouraged to discuss the ideas with your peers. However, the code should be your own, and should be the result of you exercising your own understanding of it. If you reference anyone else's code in writing your project, you must properly cite it in your code (in comments) and in your writeup. For the full honor code, refer to the CMSC426 Spring 2019 website.