
Sudoku Visual Basic Code: A Fun and Challenging Project for Beginners



My implementation of a Sudoku solver. It is not the most naive approach, but it still performs an exhaustive search with some assistance from a heap. The only constraints I have used are the basic rules of Sudoku (a number can occur only once in each row, column, and box). There are probably further techniques or deductions that could improve it, but before going there I would like to get the current version as optimized as possible. I would appreciate any advice on how to make it faster and how to bring my code in line with modern C++ best practices. Thank you for your time!
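
For anyone looking for a starting point, below is a minimal plain-backtracking sketch, not the heap-assisted exhaustive search described above. It assumes a 9x9 grid held in a std::array with 0 marking empty cells; the Grid alias and function names are illustrative only.

    #include <array>

    // Minimal backtracking solver (illustrative; not the heap-assisted search above).
    // The grid is 9x9 with 0 marking empty cells.
    using Grid = std::array<std::array<int, 9>, 9>;

    // Basic Sudoku constraint: v may appear only once in the row, column,
    // and 3x3 box containing cell (r, c).
    bool CanPlace(const Grid& g, int r, int c, int v)
    {
        for (int i = 0; i < 9; ++i)
            if (g[r][i] == v || g[i][c] == v) return false;
        const int br = (r / 3) * 3, bc = (c / 3) * 3;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                if (g[br + i][bc + j] == v) return false;
        return true;
    }

    // Fill empty cells in scan order; backtrack on dead ends.
    bool Solve(Grid& g, int cell = 0)
    {
        if (cell == 81) return true;             // every cell assigned
        const int r = cell / 9, c = cell % 9;
        if (g[r][c] != 0) return Solve(g, cell + 1);
        for (int v = 1; v <= 9; ++v) {
            if (CanPlace(g, r, c, v)) {
                g[r][c] = v;
                if (Solve(g, cell + 1)) return true;
                g[r][c] = 0;                     // undo and try the next value
            }
        }
        return false;                            // no value fits: backtrack
    }

A heap (for example, one ordering empty cells by how few candidates remain) would mainly change which cell is tried next; the constraint check and the undo step stay the same.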







  • Assignments: HW#1 (Floodfill) Implement the floodfill algorithm in C/C++. Create an executable that allows the user to choose the filename and seed point; it is okay if you hardcode the new color. The application should load the image from disk, display the original image, run the algorithm, and display the resulting image. (The specific interface is up to you: Either use command-line parameters, such as: filename x y (in that order), where 'filename' is the image filename and (x,y) are the coordinates of the seed point; Or use a windows-based interface, such as CFileDialog for selecting the file and GrabMouseClick for getting the seed point.) To create a console app in Visual C++ 6.0, follow these instructions: File -> New -> Project -> Win32 Console Application. Give it a name and keep the checkbox on "Create new workspace". Choose "An application that supports MFC." Now compile and run (Build -> Build ..., and Build -> Execute, or F7 and Ctrl-F5). Under FileView -> Source Files you will find the main cpp file. (Also, I would recommend that you turn off Precompiled Headers: Project -> Settings -> C/C++ -> Precompiled headers -> Not using precompiled headers. Before you click on the radio button, though, first select All configurations in the drop down box so that both Debug and Release versions are affected.)

  • The images that the grader will use to test your code are quantized.pgm, tillman.ppm, and others that are similar.

  • Your code should work for either grayscale or color images, and it should allow the new value to be a Bgr color (first convert the grayscale image to Bgr).

  • For simplicity, use 4-neighbor connectedness (but 8-connected is fine, too, if you want to do a little additional work).

  • To make memory management easier, feel free to use std::stack or std::vector. (A minimal stack-based floodfill sketch appears after this assignment's items.)

  • A tutorial on the Blepo library will be given in class. You may use any part of the library except the Floodfill function itself.

  • No report is due for this assignment.
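
For the flood fill and the std::stack suggestion above, here is a minimal 4-connected sketch. The Image struct is a hypothetical stand-in for whatever image class you actually use (for example, Blepo's ImgGray); adapt the pixel accessors to your own code.

    #include <stack>
    #include <utility>
    #include <vector>

    // Hypothetical grayscale image holder; replace with your own image class.
    struct Image {
        int width, height;
        std::vector<unsigned char> data;                  // width * height pixels, row-major
        unsigned char& at(int x, int y) { return data[y * width + x]; }
    };

    // 4-connected flood fill from (seed_x, seed_y), using an explicit stack
    // instead of recursion so deep fills cannot overflow the call stack.
    void FloodFill(Image& img, int seed_x, int seed_y, unsigned char new_val)
    {
        const unsigned char old_val = img.at(seed_x, seed_y);
        if (old_val == new_val) return;                   // nothing to do

        std::stack<std::pair<int, int> > frontier;
        frontier.push(std::make_pair(seed_x, seed_y));

        while (!frontier.empty()) {
            const int x = frontier.top().first;
            const int y = frontier.top().second;
            frontier.pop();
            if (x < 0 || x >= img.width || y < 0 || y >= img.height) continue;
            if (img.at(x, y) != old_val) continue;        // already filled or a different region
            img.at(x, y) = new_val;
            frontier.push(std::make_pair(x + 1, y));      // 4-neighbor connectedness
            frontier.push(std::make_pair(x - 1, y));
            frontier.push(std::make_pair(x, y + 1));
            frontier.push(std::make_pair(x, y - 1));
        }
    }

For a color image, store the old and new values as Bgr triples and compare all three channels; the traversal itself does not change.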

  • HW#2 (Fruit classification) Write code to automatically detect and classify fruit on a dark background. Implement the Ridler-Calvard algorithm to automatically compute a threshold value. However, you will notice that the thresholded image does not look very good. To fix this problem, implement double thresholding using two thresholds that are constant offsets from the automatic threshold returned from Ridler-Calvard. I.e., if Ridler-Calvard returns a value of t, then your high threshold will be t+th, and your low threshold will be t-tl, where th and tl are two offset values. To make your life easier, you may determine th and tl by trial and error, after which you may hardcode them in your code. (A minimal Ridler-Calvard sketch appears after this assignment's items.)

  • At any point before or during thresholding, perform noise removal (if needed) using your own combination of erosion / dilation / opening / closing.

  • Implement connected components (by repeated applications of floodfill, if you wish) to detect and count the foreground regions of the graylevel image, distinguishing them from the background. Hint: If you implement the classic connected components algorithm (which is not recommended), use an ImgInt rather than an ImgGray for the output labels, since there is a good chance of having more than 256 regions, even if there are only a small number of objects in the image.

  • Compute the properties of each foreground region, including zeroth-, first-, and second-order moments (regular and centralized)

  • compactness (To compute the perimeter, apply the logical XOR to the thresholded image and the result of eroding this image with a 3x3 structuring element of all ones; the result will be the number of 8-connected foreground boundary pixels.)

  • eccentricity (or elongatedness), using eigenvalues

  • direction, using either eigenvectors (PCA) or the moments formula (they are equivalent); a sketch computing the moments, direction, and elongatedness appears after this assignment's items

  • Using a combination of these properties or others that you develop, write an algorithm to automatically classify each piece of fruit into one of three categories: apple, grapefruit, and banana.

  • Also detect the banana stem.

  • Your output should look like this: One figure window should show the original image. Another four figures should show the result of thresholding the image with the three thresholds t, t+th, and t-tl, along with the double-thresholding procedure. Use Figure::SetTitle() to set the title of each figure to an appropriate human-readable string that indicates what is being displayed. If you want to display additional intermediate results in additional figures, feel free; but be sure to include the five figures just mentioned.

  • In a final figure, display the original image with a one-pixel-thick boundary overlaid on each object, the color of the boundary indicating the type of fruit: Red indicates apple, green indicates grapefruit, and yellow indicates banana. For each object, draw a cross at its centroid and draw two perpendicular lines (with appropriate lengths) to indicate the major and minor axes. Indicate the banana stem by coloring with magenta the boundary pixels in that portion of the banana.

  • Print out all the region properties you computed, either on the console window (using printf, for example) or in the dialog window (using SetWindowText).

  • The grader will test your code on the images fruit1.pgm and fruit2.pgm (or, in BMP format, fruit1.bmp and fruit2.bmp), along with other similar images (same scale and lighting conditions, but the image dimensions, rotation, and number of fruit instances may change). The same algorithm parameters should be used for all objects and for both images.

  • For this assignment, you may not use any Blepo functionality contained or prototyped in ImageAlgorithms.h. You may, however, use functions in ImageOperations.h, except for the dilation and erosion functions. As a strategy, you may find it helpful to use the dilation, erosion, Floodfill and/or ConnectedComponents in Blepo in the initial debugging of the rest of your code before you write your own versions. However, if you use these functions in the code you turn in, you will incur a loss of points.

  • No report is due for this assignment.
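
For the Ridler-Calvard step in this assignment, here is a minimal isodata-style sketch that works from a 256-bin histogram; the histogram computation and the function name are assumptions to adapt to your own code.

    #include <cmath>
    #include <vector>

    // Ridler-Calvard (isodata) threshold from a 256-bin histogram,
    // where hist[i] is the number of pixels with gray level i.
    double RidlerCalvard(const std::vector<double>& hist)
    {
        // Initialize the threshold with the global mean gray level.
        double sum = 0.0, count = 0.0;
        for (int i = 0; i < 256; ++i) { sum += i * hist[i]; count += hist[i]; }
        double t = sum / count;

        for (;;) {
            // Mean of the pixels below the threshold and of those at or above it.
            double lo_sum = 0.0, lo_cnt = 0.0, hi_sum = 0.0, hi_cnt = 0.0;
            for (int i = 0; i < 256; ++i) {
                if (i < t) { lo_sum += i * hist[i]; lo_cnt += hist[i]; }
                else       { hi_sum += i * hist[i]; hi_cnt += hist[i]; }
            }
            const double mu_lo = (lo_cnt > 0) ? lo_sum / lo_cnt : 0.0;
            const double mu_hi = (hi_cnt > 0) ? hi_sum / hi_cnt : 0.0;
            const double t_new = 0.5 * (mu_lo + mu_hi);   // midpoint of the two class means
            if (std::fabs(t_new - t) < 0.5) return t_new; // converged
            t = t_new;
        }
    }

The double thresholding then uses the returned value plus th and minus tl as the high and low thresholds, with th and tl found by trial and error as described above.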
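
For the region-properties items above (moments, direction, eccentricity), here is a sketch that accumulates the moments of one labeled region from a row-major label buffer. The struct and function names are illustrative, and the elongatedness below is one common eigenvalue-ratio definition; substitute whichever definition the course uses.

    #include <cmath>
    #include <vector>

    struct RegionProps {
        double m00, xc, yc;        // zeroth-order moment (area) and centroid
        double mu20, mu02, mu11;   // centralized second-order moments
        double theta;              // direction of the major axis, in radians
        double elongatedness;      // ratio of the eigenvalues of the second-moment matrix
    };

    RegionProps ComputeProps(const std::vector<int>& labels, int width, int height, int label)
    {
        // Raw (regular) moments.
        double m00 = 0, m10 = 0, m01 = 0, m20 = 0, m02 = 0, m11 = 0;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                if (labels[y * width + x] == label) {
                    m00 += 1;              m10 += x;              m01 += y;
                    m20 += double(x) * x;  m02 += double(y) * y;  m11 += double(x) * y;
                }

        RegionProps p;
        p.m00 = m00;
        p.xc = m10 / m00;  p.yc = m01 / m00;
        // Centralized moments from the raw ones.
        p.mu20 = m20 - p.xc * m10;
        p.mu02 = m02 - p.yc * m01;
        p.mu11 = m11 - p.xc * m01;
        // Direction from the moments formula (equivalent to PCA on the covariance matrix).
        p.theta = 0.5 * std::atan2(2.0 * p.mu11, p.mu20 - p.mu02);
        // Eigenvalues of the 2x2 second-moment matrix give the major/minor axis spread.
        const double root = std::sqrt(4.0 * p.mu11 * p.mu11
                                      + (p.mu20 - p.mu02) * (p.mu20 - p.mu02));
        const double lmax = 0.5 * (p.mu20 + p.mu02 + root);
        const double lmin = 0.5 * (p.mu20 + p.mu02 - root);
        p.elongatedness = (lmin > 0) ? lmax / lmin : 0.0;
        return p;
    }

Compactness is computed separately from the perimeter obtained with the XOR-and-erosion step described above.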

  • HW#3 (Canny edge detection) Implement the Canny edge detector. Your code should accept a single scale parameter (sigma) as input. There should be three steps to your code: gradient estimation, non-maximum suppression, and thresholding with hysteresis (i.e., double-thresholding). For the gradient estimation, convolve the image with the derivative of a Gaussian (i.e., convolve with a 2D Gaussian derivative, implemented using the separable property), rather than computing finite differences in the smoothed image. Do not worry about image borders; the simplest solution is to simply set the border pixels in the convolution result to zero rather than extending the image. Automatically compute the threshold values based upon image statistics. Run your code on the following images: cat.pgm and cameraman.pgm. Display intermediate results (e.g., the two x- and y- gradient components, the gradient magnitude and angle, and the edges before thresholding) in separate figures, in addition to the final result. (A separable Gaussian-derivative convolution sketch appears after this assignment's items.)

  • Implement the chamfer distance algorithm with the Manhattan distance. Compute the chamfer distance of the Canny edges of the cherrypepsi.jpg image, then perform an exhaustive search (for simplicity, only consider locations for which the template is completely in bounds) for the best location of the cherrypepsi_template.jpg template. Convert from color to grayscale before computing the edges. Display the resulting probability map by summing the distances to the edges, and (in a separate window) overlay on the original image the rectangle corresponding to the peak. (A two-pass Manhattan distance-transform sketch appears after this assignment's items.)

  • For this assignment, you may not use any Blepo functionality contained or prototyped in ImageAlgorithms.h (e.g., Chamfer), and you may not use the Gauss*, Grad*, Convolve, Correlate, Smooth, etc. functions prototyped in ImageOperations.h.

  • Write a report describing your approach, including your algorithms and methodology, experimental results, and discussion. Be sure to show the effect of the scale parameter on the output for at least one image.
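
For the gradient-estimation step of the Canny detector above, here is a sketch of the separable Gaussian-derivative convolution for the x-gradient (swap the two 1-D kernels for the y-gradient). Images are assumed to be row-major float buffers, border pixels are simply left at zero as the assignment allows, and the kernel radius of 2.5*sigma is a common choice rather than a requirement.

    #include <cmath>
    #include <vector>

    // x-gradient via separable convolution: derivative-of-Gaussian along rows,
    // Gaussian smoothing along columns. Border pixels remain zero.
    std::vector<float> GradientX(const std::vector<float>& img, int w, int h, double sigma)
    {
        const int half = static_cast<int>(std::ceil(2.5 * sigma));
        std::vector<double> g(2 * half + 1), dg(2 * half + 1);
        double sum = 0.0;
        for (int i = -half; i <= half; ++i) {
            const double e = std::exp(-(i * i) / (2.0 * sigma * sigma));
            g[i + half] = e;                     // Gaussian (normalized below)
            dg[i + half] = -i * e;               // derivative of the Gaussian
            sum += e;
        }
        for (int i = 0; i < 2 * half + 1; ++i) g[i] /= sum;

        std::vector<float> tmp(w * h, 0.0f), out(w * h, 0.0f);
        // Horizontal pass with the derivative kernel.
        for (int y = 0; y < h; ++y)
            for (int x = half; x < w - half; ++x) {
                double acc = 0.0;
                for (int k = -half; k <= half; ++k)
                    acc += dg[k + half] * img[y * w + (x - k)];
                tmp[y * w + x] = static_cast<float>(acc);
            }
        // Vertical pass with the smoothing kernel.
        for (int y = half; y < h - half; ++y)
            for (int x = 0; x < w; ++x) {
                double acc = 0.0;
                for (int k = -half; k <= half; ++k)
                    acc += g[k + half] * tmp[(y - k) * w + x];
                out[y * w + x] = static_cast<float>(acc);
            }
        return out;
    }

The gradient magnitude and angle then follow from the two components with sqrt and atan2.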
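
For the chamfer-matching item above, here is a two-pass Manhattan (city-block) distance transform over a binary edge map; the buffer layout and function name are assumptions.

    #include <algorithm>
    #include <vector>

    // Two-pass chamfer (Manhattan) distance transform. 'edges' is a binary edge
    // map (nonzero = edge pixel), row-major, width*height. The result holds each
    // pixel's city-block distance to the nearest edge pixel.
    std::vector<int> ChamferManhattan(const std::vector<unsigned char>& edges, int width, int height)
    {
        const int INF = width + height;                 // larger than any possible distance
        std::vector<int> d(width * height);
        for (int i = 0; i < width * height; ++i)
            d[i] = edges[i] ? 0 : INF;

        // Forward pass: propagate from the left and top neighbors.
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                int& v = d[y * width + x];
                if (x > 0) v = std::min(v, d[y * width + (x - 1)] + 1);
                if (y > 0) v = std::min(v, d[(y - 1) * width + x] + 1);
            }

        // Backward pass: propagate from the right and bottom neighbors.
        for (int y = height - 1; y >= 0; --y)
            for (int x = width - 1; x >= 0; --x) {
                int& v = d[y * width + x];
                if (x < width - 1)  v = std::min(v, d[y * width + (x + 1)] + 1);
                if (y < height - 1) v = std::min(v, d[(y + 1) * width + x] + 1);
            }
        return d;
    }

The matching score at each candidate location is then the sum of these distances at the template's edge pixels, and the best location is the minimum of that map.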

  • HW#4 (Image segmentation) Implement the Felzenszwalb-Huttenlocher minimum spanning tree segmentation algorithm (Efficient Graph-Based Image Segmentation, IJCV 2004). It is crucial that you first smooth the image by convolving with a Gaussian, even with a tiny variance of 0.5. This must be done with an ImgFloat so that your resulting image has floating point values. If your image has integer values, you will not get good results, because the algorithm will not be able to perform any additional merging once a region has reached the size of the scale parameter; in other words, a maximum size will be enforced. Hint: First extract the three color channels from the Bgr image using ExtractBgr(); then Convert() each one to floating point; then call Smooth() on each result.

  • When your algorithm is finished, you will probably still have some tiny regions due to image noise. One way to remove these is to enforce a minimum size. Sequentially consider each edge, and if the two pixels are in different regions and at least one of the regions is smaller than the minimum size, then merge them.

  • Note that the algorithm uses a disjoint set data structure, as in classic union-find connected components. In your disjoint set data structure, be sure to store both the maximum edge weight and the count at the root index, not somewhere else. While it is easy for the merge operation to maintain these values at the root index, it is nearly impossible (and not necessary) to maintain them for all other pixels in the region. Therefore, the values at the root index for any given region should always be used. (A union-find sketch with these two values stored at the root appears after this assignment's items.)

  • Your program should take a grayscale or color image as input and display the output in three different formats: boundaries overlaid on the original image, pseudocolored output indicating the regions, and all the pixels in each region colored with the mean color of the region.

  • Your program should also accept a single integer scale parameter k.

  • For this assignment, you may not use any Blepo functionality contained or prototyped in ImageAlgorithms.h.

  • Test your algorithm on the following images: holes.pgm, monalisa.jpg, mandrill.ppm, as well as a few images of your own.

  • Write a report describing your approach, including your algorithms and methodology, experimental results, and discussion.
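
For the disjoint-set note above, here is a union-find sketch that keeps the region size and the largest accepted edge weight only at the root index, as the assignment requires. It assumes edges are processed in increasing order of weight (as in Felzenszwalb-Huttenlocher), which is why the weight passed to Union can simply overwrite the stored maximum; names are illustrative.

    #include <numeric>
    #include <utility>
    #include <vector>

    class DisjointSet {
    public:
        explicit DisjointSet(int n) : parent_(n), size_(n, 1), max_weight_(n, 0.0f)
        {
            std::iota(parent_.begin(), parent_.end(), 0);   // each pixel starts as its own region
        }

        // Find the root of x, with path halving to keep trees shallow.
        int Find(int x)
        {
            while (parent_[x] != x) {
                parent_[x] = parent_[parent_[x]];
                x = parent_[x];
            }
            return x;
        }

        int Size(int root) const { return size_[root]; }
        float MaxWeight(int root) const { return max_weight_[root]; }

        // Merge the regions containing a and b along an edge of weight w;
        // returns the root of the merged region.
        int Union(int a, int b, float w)
        {
            int ra = Find(a), rb = Find(b);
            if (ra == rb) return ra;
            if (size_[ra] < size_[rb]) std::swap(ra, rb);   // union by size
            parent_[rb] = ra;
            size_[ra] += size_[rb];
            max_weight_[ra] = w;    // valid because edges arrive in increasing weight order
            return ra;
        }

    private:
        std::vector<int> parent_, size_;
        std::vector<float> max_weight_;
    };

The merge test for an edge of weight w between regions C1 and C2 is then w <= min(MaxWeight(C1) + k/Size(C1), MaxWeight(C2) + k/Size(C2)), using only the values stored at the two roots.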

  • HW#5 (Stereo matching) Implement correlation-based matching of rectified stereo images. The resulting disparity map should be the same size as the two input images, although the values at the left edge will be erroneous. Match from left to right (i.e., for each window in the left image, search in the right image), so that the disparity map is with respect to the left image. Recall that a (left) disparity map D(x,y) between a left image L and a right image R that have been rectified is an array such that the pixel corresponding to L(x,y) is R(x-D(x,y), y). Implement the left-to-right consistency check, retaining a value in the left disparity map only if the corresponding point in the right disparity map yields the negative of that disparity. The resulting disparity map should be valid only at the pixels that pass the consistency check; set other pixels to zero.

  • Your code should be as efficient as possible, on the order of several frames per second. (Hint: First compute the dissimilarities of all the pixels for each disparity, storing the results in an array of images; then convolve each image with a summing kernel (all ones) in both directions. Further speedup can be obtained using mmx_diff and xmm_diff in Blepo, but this is not required.)

  • Suggestion: use SAD (sum of absolute differences) to match raw intensities and use a window size of 5x5. (A brute-force SAD sketch with the left-right consistency check appears after this assignment's items.)

  • Run your code on tsukuba_left.pgm and tsukuba_right.pgm. Show the results both with and without the consistency check. What kind of errors do you notice? Now run the algorithm on lamp_left.pgm and lamp_right.pgm. What happens? Why is this image difficult?

  • Your code should output a PLY file that can be read by MeshLab. This will enable you to visualize the matching results in 3D. Here is an example PLY file created from a set of Kermit images. PLY files are ASCII files with a simple format: In the header you specify the number of vertices, along with the properties stored for each vertex (e.g., x y z nx ny nz r g b); then after the header there is one line per vertex. For your assignment, you should just output six columns (x y z r g b) for each matched pixel, ignoring the normal components. You can use either perspective or orthographic projection to get your x,y,z coordinates. Orthographic is simpler and will lead to a more aesthetically pleasing point cloud, but it is less accurate mathematically. (A minimal ASCII PLY writer sketch appears after this assignment's items.)

  • Your stereo matching code does not have to work on color images, but color will make your PLY file more pleasant to look at, if you care to use it: tsukuba_left.ppm and tsukuba_right.ppm.

  • Take a look at the results of the latest stereo research at (click on the "Evaluation" tab). Look only at the column (all) under the column Tsukuba. What errors do you see in the best algorithm (the one with minimum error in this column)? What does this tell you about the difficulty of the problem?

  • Write a report describing your approach, including your algorithm and methodology, experimental results, and discussion.
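
For the SAD suggestion above, here is a brute-force sketch of the left disparity map and the consistency check; the incremental sliding-sum version the assignment hints at is faster but follows the same logic. Grayscale images are assumed to be row-major 8-bit buffers, the right disparity map is computed by the symmetric routine, and names are illustrative.

    #include <climits>
    #include <cstdlib>
    #include <vector>

    // Sum of absolute differences over a 5x5 window centered at column xa in image a
    // and column xb in image b, both on row y.
    int Sad5x5(const std::vector<unsigned char>& a, const std::vector<unsigned char>& b,
               int w, int xa, int xb, int y)
    {
        int sum = 0;
        for (int dy = -2; dy <= 2; ++dy)
            for (int dx = -2; dx <= 2; ++dx)
                sum += std::abs(int(a[(y + dy) * w + xa + dx]) - int(b[(y + dy) * w + xb + dx]));
        return sum;
    }

    // Left disparity map: L(x, y) is matched against R(x - d, y) for d in [0, max_disp].
    std::vector<int> LeftDisparity(const std::vector<unsigned char>& L,
                                   const std::vector<unsigned char>& R,
                                   int w, int h, int max_disp)
    {
        std::vector<int> disp(w * h, 0);
        for (int y = 2; y < h - 2; ++y)
            for (int x = 2; x < w - 2; ++x) {
                int best_d = 0, best_cost = INT_MAX;
                for (int d = 0; d <= max_disp && x - d >= 2; ++d) {
                    const int cost = Sad5x5(L, R, w, x, x - d, y);
                    if (cost < best_cost) { best_cost = cost; best_d = d; }
                }
                disp[y * w + x] = best_d;
            }
        return disp;
    }

    // Left-right consistency check: keep a left disparity d at (x, y) only if the
    // right-image map at (x - d, y) stores the same magnitude; otherwise set zero.
    void ConsistencyCheck(std::vector<int>& left, const std::vector<int>& right, int w, int h)
    {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                const int d = left[y * w + x];
                if (x - d < 0 || right[y * w + (x - d)] != d)
                    left[y * w + x] = 0;
            }
    }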
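
For the PLY output item above, here is a minimal ASCII PLY writer for colored points (x y z r g b, no normals) that MeshLab can open; the Point struct is an assumption for this sketch.

    #include <cstdio>
    #include <vector>

    struct Point { float x, y, z; unsigned char r, g, b; };   // one matched pixel

    bool WritePly(const char* filename, const std::vector<Point>& pts)
    {
        FILE* f = std::fopen(filename, "w");
        if (!f) return false;
        // Header: vertex count and the per-vertex properties, in order.
        std::fprintf(f, "ply\nformat ascii 1.0\n");
        std::fprintf(f, "element vertex %u\n", static_cast<unsigned>(pts.size()));
        std::fprintf(f, "property float x\nproperty float y\nproperty float z\n");
        std::fprintf(f, "property uchar red\nproperty uchar green\nproperty uchar blue\n");
        std::fprintf(f, "end_header\n");
        // One line per vertex: x y z r g b.
        for (size_t i = 0; i < pts.size(); ++i)
            std::fprintf(f, "%f %f %f %d %d %d\n",
                         pts[i].x, pts[i].y, pts[i].z, pts[i].r, pts[i].g, pts[i].b);
        std::fclose(f);
        return true;
    }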

  • HW#6 (Lucas-Kanade) Implement Lucas-Kanade feature point detection and tracking. Detection. For each pixel in a graylevel image, construct the 2x2 covariance matrix of the gradients in the 5x5 window surrounding the pixel. Then compute the minimum eigenvalue of the gradient covariance matrix for each pixel. Perform non-maximal suppression to detect the n most salient features, separated from each other by a distance of at least k pixels, where n=100 and k=8.

  • Tracking. For each feature, track its location from one image frame to the next by iteratively solving the Lucas-Kanade equation Zd=e, where Z is the 2x2 gradient covariance matrix and e is the 2x1 vector of gradients multiplied by the temporal derivative. Display a movie of the original images with features overlaid. You will want to smooth the images first by convolving with a Gaussian to increase the basin of attraction, particularly to handle swift camera motion, and you should use a large window size, e.g., 11x11 or 17x17, for the same reason. For more details, you may want to refer to Jean-Yves Bouguet's technical report (but ignore the pyramidal part) or the KLT references. Keep your feature coordinates as floating point values throughout the tracking process, only rounding for display purposes. (A sketch of a single Zd=e iteration appears after this assignment's items.)

  • Run your code on the following image sequences: flowergarden.zip and statue_sequence.zip, overlaying the features on the original images. Your code will be tested on these images.

  • For this assignment you may not use any of the Lucas-Kanade or KLT implementations in Blepo, or any other existing implementations of Lucas-Kanade. You also may not use any of the Interp functions.

  • Write a report describing your approach, including your algorithm and methodology, experimental results, and discussion.
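
For the tracking equation Zd=e above, here is a sketch of one iteration for a single feature: the 2x2 gradient covariance matrix and the right-hand side are accumulated over the window, and the 2x2 system is solved in closed form. The gradient and temporal-difference arrays, their sign convention, and the function name are assumptions; check the signs against the derivation you follow (e.g., Bouguet's report).

    #include <cmath>

    // One Lucas-Kanade update for a single feature window of n samples.
    // gx, gy: spatial gradients of the first image; it: temporal difference
    // (second image minus first), all sampled over the same window.
    // On success, (dx, dy) receives the displacement increment.
    bool LucasKanadeStep(const float* gx, const float* gy, const float* it, int n,
                         float& dx, float& dy)
    {
        double zxx = 0, zxy = 0, zyy = 0, ex = 0, ey = 0;
        for (int i = 0; i < n; ++i) {
            zxx += double(gx[i]) * gx[i];            // Z = sum of [gx gy]^T [gx gy]
            zxy += double(gx[i]) * gy[i];
            zyy += double(gy[i]) * gy[i];
            ex  -= double(gx[i]) * it[i];            // e = -sum(gradient * temporal derivative)
            ey  -= double(gy[i]) * it[i];
        }
        const double det = zxx * zyy - zxy * zxy;
        if (std::fabs(det) < 1e-9) return false;     // Z not invertible: point is untrackable
        // Closed-form solution of the 2x2 system Z d = e.
        dx = static_cast<float>(( zyy * ex - zxy * ey) / det);
        dy = static_cast<float>((-zxy * ex + zxx * ey) / det);
        return true;
    }

Each iteration re-samples the second image at the feature's updated floating-point location and repeats until the increment is small.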

Grading standard:

A. Report is coherent, concise, clear, and neat, with correct grammar and punctuation. Code works correctly the first time and achieves good results on both images.

B. Report adequately describes the work done, and code generally produces good results. There are a small number of defects either in the implementation or the writeup, but the essential components are there.

C. Report or code is inadequate. The report contains major errors or is illegible, the code does not run or produces significantly flawed results, or instructions are not followed.

D or F. Report or code not attempted, not turned in, or contains extremely serious deficiencies.

Detailed grading breakdown is available in the grading chart.

