Open Source Computer Vision Library (OpenCV)
License: New and Simplified BSD licenses
Web Page: http://code.opencv.org/projects/opencv/wiki/GSoC_2014
Mailing List: email@example.com

The Open Source Computer Vision Library (OpenCV) is a comprehensive computer vision and machine learning library (over 2,500 functions) written in C++ and C, with additional Python and Java interfaces. It officially supports Linux, Mac OS, Windows, Android and iOS. OpenCV has specific optimizations for SSE instructions, CUDA and especially Tegra. OpenCV is now supported by a non-profit foundation, OpenCV.org. It has an active user group of 55 thousand members and over 6 million downloads. OpenCV is used for everything from gesture recognition and Android/iPhone vision apps up to medical imaging, robotics, mine safety and Google Street View.
- Computational Photography: Image Decomposition and Color Algorithms 1) Implementation of intrinsic image decomposition, usually defined as the separation of the illumination (shading) and reflectance components of an input photograph. 2) Implementation of color transfer algorithms. 3) Implementation of color constancy algorithms.
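The statistics-based family of color transfer algorithms in 2) can be illustrated with a minimal NumPy sketch. This is illustrative only, not the proposed implementation: Reinhard-style transfer is normally performed in a decorrelated color space such as lαβ, but per-channel matching of means and standard deviations shows the core idea.

```python
import numpy as np

def color_transfer(source, target):
    """Match each channel's mean and standard deviation in `source`
    to those of `target` (Reinhard-style statistics transfer).
    Illustrative sketch; real implementations work in lab/lalphabeta."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        scale = t_std / s_std if s_std > 1e-8 else 1.0
        # Shift and rescale the channel so its statistics match the target's
        out[..., c] = (src[..., c] - s_mean) * scale + t_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

After the transfer, the result has (up to rounding and clipping) the target image's per-channel color statistics while keeping the source's structure.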
- Custom Calibration Pattern With a calibrated camera, a custom pattern can be used to track the position and orientation of the camera relative to the pattern. The same pattern can also serve as a new calibration target for estimating the camera's intrinsic parameters. This is especially interesting for mobile phone positioning and trajectory tracking.
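Tracking a planar custom pattern essentially reduces to estimating the homography between pattern coordinates and their image projections, from which camera pose (and, over several views, the intrinsics) can be derived. A minimal NumPy sketch of the Direct Linear Transform (DLT) homography step — illustrative only, not the proposed code:

```python
import numpy as np

def estimate_homography(pattern_pts, image_pts):
    """DLT: estimate the 3x3 homography H mapping planar pattern points
    (x, y) to image points (u, v), from >= 4 correspondences.
    Each correspondence contributes two linear equations in the 9
    entries of H; the solution is the null vector of the stacked system."""
    A = []
    for (x, y), (u, v) in zip(pattern_pts, image_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Smallest-singular-vector solution of A h = 0
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

In practice one would add normalization of the input points and a robust (RANSAC) outer loop, as OpenCV's findHomography does.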
- Dense Tracking and Mapping (DTAM) for OpenCV The Dense Tracking and Mapping (DTAM) algorithm allows real-time camera pose tracking and dense 3D reconstruction using only video from a camera. DTAM is a variational algorithm, a major advance in both detail and robustness over current common algorithms such as PTAM, and it does not require feature tracking. DTAM has applications in augmented reality, robotics, and 3D scanning.
- Extension of OpenCV-Python Bindings and Tutorials The aims of this project include 1) extending the OpenCV-Python bindings to cover new OpenCV modules, 2) adding more OpenCV-Python tutorials, 3) making the OpenCV-Python bindings development more automated, and 4) fixing bugs, missing documentation, etc.
- Fastest Pedestrian Detector in the West Implementation The goal of the project is to implement a cascade pedestrian detector based on integral channel features. The detector must meet three requirements: 1) high detection quality, 2) high detection speed, and 3) it must be well tested, documented and easy to use.
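The "integral channel features" at the heart of this detector rely on integral images (summed-area tables), which turn the sum over any rectangle of a feature channel into an O(1) lookup — that is what makes evaluating thousands of rectangular features per window fast. A minimal NumPy sketch of that building block (illustrative only):

```python
import numpy as np

def integral_image(channel):
    """Summed-area table with an extra zero row and column on top/left,
    so rectangle sums need no boundary special cases."""
    ii = np.zeros((channel.shape[0] + 1, channel.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(channel, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of channel[top:bottom, left:right] in O(1),
    via the classic four-corner lookup."""
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]
```

The full detector computes such integral images over several channels (gradient histograms, gradient magnitude, color) and feeds rectangle sums to a boosted cascade.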
- Geometrical-primitives based localization system in indoor environments The idea is to use pictures taken with a smart device to construct a map of an indoor environment, by extracting landmarks from the pictures using geometrical primitives.
- Improve and expand Scene Text Detection module in OpenCV This is a proposal for the improvement and expansion of the work done during the last GSoC edition on scene text detection. The main goals are: implementing a text grouping algorithm that combines the multiple segmentations provided by the current implementation, and adding a character recognition (OCR) module. An optimization task is also proposed, to make the detection algorithm implemented last year as fast as possible.
- Learning based trackers I propose to implement the tracker described in the reference paper by Kalal and co-authors. The algorithm is known as TLD (Tracking-Learning-Detection) and is also sometimes advertised under the name "Predator". It is one of the state-of-the-art methods. What further adds to its importance is that TLD can be seen not only as a tracker, but as a novel framework that combines machine learning, a tracker and a detector.
- Local feature descriptors The goal of the project is to implement a set of complementary improvements concerning local feature descriptors in OpenCV, namely: updating the local feature detector/descriptor evaluation tools; porting or implementing a new local feature algorithm, most probably AKAZE; using binary descriptors to improve stereo correspondence algorithms; porting the LBD line descriptor; unifying some of the "detection - description - matching" approaches; and rewriting the tutorials on local features.
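The binary descriptors mentioned here (ORB, AKAZE and LBD all produce bit strings) are matched by Hamming distance rather than Euclidean distance. A minimal NumPy sketch of brute-force Hamming matching over descriptors packed as uint8 rows, as OpenCV stores them — illustrative only; in OpenCV this is what BFMatcher with NORM_HAMMING does:

```python
import numpy as np

def hamming_match(desc1, desc2):
    """Brute-force Hamming matching of binary descriptors stored as
    uint8 arrays (e.g. n x 32 rows for 256-bit ORB descriptors).
    Returns, for each row of desc1, the index of and distance to its
    nearest row in desc2."""
    # Popcount lookup table for one byte
    popcount = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)
    # XOR every pair of descriptors, then count differing bits per byte
    xor = desc1[:, None, :] ^ desc2[None, :, :]
    dist = popcount[xor].sum(axis=2, dtype=np.int64)
    return dist.argmin(axis=1), dist.min(axis=1)
```

A production matcher would add the ratio test or cross-checking on top of the raw nearest-neighbor indices.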
- Matting Laplacian I propose to implement the recently introduced matting Laplacian, which is useful for many image editing tasks such as segmentation and colorization, and can be viewed as a general energy optimization with a specific pairwise term and an arbitrary data term.
- New edge-aware filters for OpenCV Edge-aware filters are a very powerful tool for various applications. Nevertheless, OpenCV today offers only one simple method: bilateral filtering. Recently, many novel edge-aware filters have been proposed, for example the Guided Filter, the Domain Transform Filter and the Constant Time Weighted Median Filter. These methods have lower computational complexity and in some applications produce better results than the bilateral filter, so they should be available in OpenCV.
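As an illustration of how simple some of these filters are, here is a minimal single-channel NumPy sketch of the Guided Filter (He et al.), built entirely from box (mean) filters. This is illustrative, unoptimized code under the assumption of float images in [0, 1], not the proposed OpenCV implementation:

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window, clipped at the borders,
    computed in O(1) per pixel via an integral image."""
    h, w = img.shape
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            t, b = max(i - r, 0), min(i + r + 1, h)
            l, rt = max(j - r, 0), min(j + r + 1, w)
            out[i, j] = (ii[b, rt] - ii[t, rt] - ii[b, l] + ii[t, l]) / ((b - t) * (rt - l))
    return out

def guided_filter(I, p, r=3, eps=1e-3):
    """Guided filter: edge-preserving smoothing of p, guided by I.
    Locally fits q = a*I + b in each window, then averages a and b."""
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    var_I = box_filter(I * I, r) - mean_I ** 2
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # eps controls edge preservation
    b = mean_p - a * mean_I
    return box_filter(a, r) * I + box_filter(b, r)
```

When I and p are the same image, the filter behaves as an edge-preserving smoother: flat regions (low variance) are averaged, strong edges (high variance) are kept.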
- Optical Flow Estimation – state-of-the-art algorithm and testing framework Expected project results include: an implementation of the MDP-Flow2 algorithm, performing significantly better than any optical flow method currently implemented in OpenCV; and a testing framework for optical flow estimation functions in OpenCV, based on the Middlebury methodology and dataset. If there is enough time: an original modification of the algorithm (avoiding SIFT, to be patent-free), and a CUDA-enabled version of it.
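For the testing framework, the Middlebury methodology scores a flow field chiefly by average endpoint error and average angular error against ground truth. A minimal NumPy sketch of the two metrics (illustrative; the dataset loading and ranking machinery around them is the actual project work):

```python
import numpy as np

def average_endpoint_error(flow_est, flow_gt):
    """Mean Euclidean distance between estimated and ground-truth flow
    vectors; both fields have shape (H, W, 2) with (u, v) components."""
    diff = flow_est - flow_gt
    return np.sqrt((diff ** 2).sum(axis=2)).mean()

def average_angular_error(flow_est, flow_gt):
    """Mean angle (radians) between the space-time vectors (u, v, 1),
    the classic Barron et al. angular error used by Middlebury."""
    u1, v1 = flow_est[..., 0], flow_est[..., 1]
    u2, v2 = flow_gt[..., 0], flow_gt[..., 1]
    num = u1 * u2 + v1 * v2 + 1.0
    den = np.sqrt((u1 ** 2 + v1 ** 2 + 1.0) * (u2 ** 2 + v2 ** 2 + 1.0))
    return np.arccos(np.clip(num / den, -1.0, 1.0)).mean()
```

The appended 1.0 in the angular error makes the metric well defined even where the true flow is zero.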
- Real Time Pose Estimation Tutorial and DLS implementation In this proposal I give a brief outline of the schedule of my project for the Google Summer of Code 2014. First, I will contribute OpenCV tutorials on using the 3D reconstruction API, to guide developers in estimating the 3D position of an object from a 2D image. Second, I will implement the Direct Least-Squares (DLS) method in OpenCV.
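For context, the simplest linear relative of the PnP problem is DLT camera resection: estimating the full 3x4 projection matrix from 3D-2D correspondences. Methods such as DLS (and OpenCV's solvePnP) solve the harder calibrated version, where the rotation must be a valid rotation matrix. A minimal NumPy sketch of the linear step only — illustrative, not the DLS method itself:

```python
import numpy as np

def dlt_projection(points_3d, points_2d):
    """Linear camera resection: estimate the 3x4 projection matrix P
    from >= 6 non-coplanar 3D-2D correspondences. Each correspondence
    yields two linear equations in the 12 entries of P."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Null vector of the stacked system, via SVD
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    return Vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Project a 3D point with P and dehomogenize to pixel coordinates."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```

P is recovered only up to scale, which the dehomogenization in project cancels; decomposing P into K[R|t] (or enforcing a valid R directly, as DLS does) is the next step a tutorial would cover.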
- Recognition and Pose Estimation of 3D Objects through 3D Features Enabling robots to automatically locate and pick up randomly placed objects from a bin is crucial in factory automation, replacing tedious and heavy manual labor. Similar tools can also guide impaired people or robots through unstructured environments (automated navigation). This makes 3D matching a ubiquitous necessity. Within the context of this application, I propose an OpenCV implementation of a 3D object recognition and pose estimation algorithm using 3D point features.
- Saliency-Based Improvements for Tracking Purposes Many online tracking algorithms suffer from two kinds of problems: the inability to handle variation in the target's scale during a video sequence, and the selection of positive and negative samples for training the classifiers. I would like to address these problems by developing saliency-based target scale detection and cooperative sample selection, and by creating saliency algorithms better suited to the tracking context, giving greater prominence to motion features.