Tuesday, January 20, 2015

From feature descriptors to deep learning: 20 years of computer vision

We all know that deep convolutional neural networks have produced some stellar results on object detection and recognition benchmarks in the past two years (2012-2014), so you might wonder: what did the earlier object recognition techniques look like? How do the designs of earlier recognition systems relate to the modern multi-layer convolution-based framework?

Let's take a look at some of the big ideas in Computer Vision from the last 20 years.

The rise of the local feature descriptors: ~1995 to ~2000
When SIFT (an acronym for Scale Invariant Feature Transform) was introduced by David Lowe in 1999, the world of computer vision research changed almost overnight. It was a robust solution to the problem of comparing image patches. Before SIFT entered the game, people were just using SSD (sum of squared distances) to compare patches and not giving it much thought.
The SIFT recipe: gradient orientations, normalization tricks

SIFT is something called a local feature descriptor -- it is one of those research findings which is the result of one ambitious man hackplaying with pixels for more than a decade.  Lowe and the University of British Columbia got a patent on SIFT and Lowe released a nice compiled binary of his very own SIFT implementation for researchers to use in their work.  SIFT allows a point inside an RGB image to be represented robustly by a low dimensional vector.  When you take multiple images of the same physical object while rotating the camera, the SIFT descriptors of corresponding points are very similar in their 128-D space.  At first glance it seems silly that you need to do something as complex as SIFT, but believe me: just because you, a human, can look at two image patches and quickly "understand" that they belong to the same physical point does not mean the task is easy for machines.  SIFT had massive implications for the geometric side of computer vision (stereo, Structure from Motion, etc) and later became the basis for the popular Bag of Words model for object recognition.
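
To make the idea concrete, here is a minimal sketch of SIFT matching using OpenCV's Python bindings (not Lowe's original binary); the image filenames are placeholders and the 0.75 ratio-test threshold is just Lowe's usual rule of thumb.

```python
# Minimal sketch: matching SIFT descriptors between two views of the same scene.
# Assumes a recent OpenCV build with cv2.SIFT_create available;
# 'view1.jpg' and 'view2.jpg' are placeholder image paths.
import cv2

img1 = cv2.imread('view1.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('view2.jpg', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)   # each descriptor is a 128-D vector
kp2, desc2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: keep a match only if the best neighbor is clearly
# better than the second best.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(desc1, desc2, k=2)
        if m.distance < 0.75 * n.distance]
print(len(good), "putative correspondences")
```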

Seeing a technique like SIFT dramatically outperform an alternative method like Sum-of-Squared-Distances (SSD) Image Patch Matching firsthand is an important step in every aspiring vision scientist's career. And SIFT isn't just a vector of filter bank responses; the binning and normalization steps are very important. It is also worth noting that while SIFT was initially (in its published form) applied to the output of an interest point detector, later it was found that the interest point detection step was not important in categorization problems.  For categorization, researchers eventually moved towards vector quantized SIFT applied densely across an image.

I should also mention that other descriptors such as Spin Images (see my 2009 blog post on spin images) came out a little bit earlier than SIFT, but because Spin Images were solely applicable to 2.5D data, this feature's impact wasn't as great as that of SIFT. 

The modern dataset (aka the hardening of vision as science): ~2000 to ~2005
Homography estimation, ground-plane estimation, robotic vision, SfM, and all other geometric problems in vision greatly benefited from robust image features such as SIFT.  But towards the end of the 1990s, it was clear that the internet was the next big thing.  Images were going online. Datasets were being created.  And no longer was the current generation solely interested in structure recovery (aka geometric) problems.  This was the beginning of the large-scale dataset era with Caltech-101 slowly gaining popularity and categorization research on the rise. No longer were researchers evaluating their own algorithms on their own in-house datasets -- we now had a more objective and standard way to determine if yours is bigger than mine.  Even though Caltech-101 is considered outdated by 2015 standards, it is fair to think of this dataset as the Grandfather of the more modern ImageNet dataset. Thanks Fei-Fei Li.

Category-based datasets: the infamous Caltech-101 TorralbaArt image

Bins, Grids, and Visual Words (aka Machine Learning meets descriptors): ~2000 to ~2005
After the community shifted towards more ambitious object recognition problems and away from geometry recovery problems, we had a flurry of research in Bag of Words, Spatial Pyramids, Vector Quantization, as well as machine learning tools used in any and all stages of the computer vision pipeline.  Raw SIFT was great for wide-baseline stereo, but it wasn't powerful enough to provide matches between two distinct object instances from the same visual object category.  What was needed was a way to encode the following ideas: object parts can deform relative to each other and some image patches can be missing.  Overall, a much more statistical way to characterize objects was needed.

Visual Words were introduced by Josef Sivic and Andrew Zisserman in approximately 2003 and this was a clever way of taking algorithms from large-scale text matching and applying them to visual content.  A visual dictionary can be obtained by performing unsupervised learning (basically just K-means) on SIFT descriptors, which maps these 128-D real-valued vectors to integers (cluster center assignments).  A histogram of these visual words is a fairly robust way to represent images.  Variants of the Bag of Words model are still heavily utilized in vision research.
Josef Sivic's "Video Google": Matching Graffiti inside the Run Lola Run video
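
For flavor, here is a rough sketch of the visual dictionary recipe described above, using scikit-learn's K-means in place of whatever clustering code people used back then; the stacked descriptor array and the vocabulary size are placeholders.

```python
# Minimal sketch of a visual dictionary + bag-of-words histogram.
# Assumes SIFT descriptors from many training images are stacked in a
# numpy array of shape (n_descriptors, 128).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, num_words=1000):
    """Cluster 128-D SIFT descriptors into 'visual words' (cluster centers)."""
    kmeans = KMeans(n_clusters=num_words, n_init=10, random_state=0)
    kmeans.fit(all_descriptors)
    return kmeans

def bag_of_words(kmeans, image_descriptors):
    """Quantize each descriptor to its nearest word and histogram the counts."""
    words = kmeans.predict(image_descriptors)          # 128-D vectors -> integers
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()                           # L1-normalized histogram
```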

Another idea which was gaining traction at the time was the idea of using some sort of binning structure for matching objects.  Caltech-101 images mostly contained objects, so these grids were initially placed around entire images, and later on they would be placed around object bounding boxes.  Here is a picture from Kristen Grauman's famous Pyramid Match Kernel paper which introduced a powerful and hierarchical way of integrating spatial information into the image matching process.

Grauman's Pyramid Match Kernel for Improved Image Matching
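
The details of the pyramid match kernel are in Grauman's paper, but the core binning idea reduces to something like the sketch below: histograms of visual words computed over grids of increasing resolution and then concatenated. The word ids, normalized locations, and grid levels here are assumptions for illustration, not her exact formulation.

```python
# Rough sketch of spatial-pyramid-style binning: histograms of visual words
# computed over a 1x1, 2x2, and 4x4 grid, then concatenated.
# Assumes 'words' are integer word ids and 'xy' their normalized [0, 1) locations.
import numpy as np

def spatial_pyramid(words, xy, vocab_size, levels=(1, 2, 4)):
    feats = []
    for g in levels:
        cell = np.minimum((xy * g).astype(int), g - 1)        # which grid cell each word falls in
        for cx in range(g):
            for cy in range(g):
                mask = (cell[:, 0] == cx) & (cell[:, 1] == cy)
                feats.append(np.bincount(words[mask], minlength=vocab_size))
    feats = np.concatenate(feats).astype(float)
    return feats / max(feats.sum(), 1.0)                      # normalized pyramid histogram
```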


At some point it was not clear whether researchers should focus on better features, better comparison metrics, or better learning; in the mid-2000s, it wasn't obvious whether young PhD students should spend more time concocting new descriptors or kernelizing their support vector machines to death.

Object Templates (aka the reign of HOG and DPM): ~2005 to ~2010
Around 2005, a young researcher named Navneet Dalal showed the world just what could be done with his own new badass feature descriptor, HOG.  (It is sometimes written as HoG, but because it is an acronym for “Histogram of Oriented Gradients” it should really be HOG. The confusion must have come from an earlier approach called DoG, which stood for Difference of Gaussians, in which case the “o” should definitely be lower case.)

Navneet Dalal's HOG Descriptor


HOG came at a time when everybody was applying spatial binning to bags of words, using multiple layers of learning, and making their systems overly complicated. Dalal’s ingenious descriptor was actually quite simple.  The seminal HOG paper was published in 2005 by Navneet and his PhD advisor, Bill Triggs. Triggs was already known for his earlier work on geometric vision, and Dr. Dalal got his fame from his newly found descriptor.  HOG was initially applied to the problem of pedestrian detection, and one of the reasons it became so popular was that the machine learning tool used on top of HOG was quite simple and well understood: the linear Support Vector Machine.
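
Here is a hedged sketch of that HOG + linear SVM recipe, using scikit-image and scikit-learn rather than Dalal's original code; the window size, labels, and C value are placeholders, and a real pedestrian detector would also mine hard negatives.

```python
# Sketch of the HOG + linear SVM recipe, with scikit-image and scikit-learn
# standing in for Dalal's original implementation. 'windows' would be
# 128x64 grayscale pedestrian-sized crops and 'labels' their 0/1 annotations.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def describe(window):
    # 9 orientation bins, 8x8-pixel cells, 2x2-cell normalized blocks,
    # roughly following the original HOG parameters.
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_detector(windows, labels):
    X = np.array([describe(w) for w in windows])
    clf = LinearSVC(C=0.01)
    clf.fit(X, labels)
    return clf          # score new windows with clf.decision_function(...)
```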

I should point out that in 2008, a follow-up paper on object detection, which introduced a technique called the Deformable Parts-based Model (or DPM as we vision guys call it), helped reinforce the popularity and strength of the HOG technique. I personally jumped on the HOG bandwagon in about 2008.  During my first few years as a grad student (2005-2008) I was hackplaying with my own vector-quantized filter bank responses, and definitely developed some strong intuition regarding features.  In the end I realized that my own features were only "okay," and because I was applying them to the outputs of image segmentation algorithms they were extremely slow.  Once I started using HOG, it didn’t take me long to realize there was no going back to custom, slow features.  Once I started using a multiscale feature pyramid with a slightly improved version of HOG introduced by master hackers such as Ramanan and Felzenszwalb, I was processing images at 100x the speed of multiple segmentations + custom features (my earlier work).
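
A multiscale feature pyramid with a sliding window boils down to something like the toy sketch below (this is not the Felzenszwalb/Ramanan code; a real implementation computes HOG once per pyramid level and slides over the feature map instead of re-describing every patch). The window size, stride, and scales are illustrative defaults, and score_window stands in for any window classifier such as the HOG + linear SVM above.

```python
# Toy sketch of multiscale sliding-window detection over an image pyramid.
# Assumes a grayscale image and a user-supplied scoring function.
import numpy as np
from skimage.transform import rescale

def detect(image, score_window, win=(128, 64), stride=8, scales=(1.0, 0.8, 0.64, 0.5)):
    detections = []
    for s in scales:
        scaled = rescale(image, s)                      # one level of the pyramid
        H, W = scaled.shape[:2]
        for y in range(0, H - win[0], stride):
            for x in range(0, W - win[1], stride):
                patch = scaled[y:y + win[0], x:x + win[1]]
                score = score_window(patch)
                if score > 0:                           # SVM margin used as a threshold
                    # map the box back to original-image coordinates
                    detections.append((score, x / s, y / s, win[1] / s, win[0] / s))
    return sorted(detections, reverse=True)
```
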
The infamous Deformable Part-based Model (for a Person)

DPM was the reigning champ on the PASCAL VOC challenge, and one of the reasons why it became so popular was the excellent MATLAB/C++ implementation by Ramanan and Felzenszwalb.  I still know many researchers who never fully acknowledged what releasing such great code really meant for the fresh generation of incoming PhD students, but at some point it seemed like everybody was modifying the DPM codebase for their own CVPR attempts.  Too many incoming students were lacking solid software engineering skills, and giving them the DPM code was a surefire way to get some experiments up and running.  Personally, I never jumped on the parts-based methodology, but I did take apart the DPM codebase several times.  However, when I put it back together, the Exemplar-SVM was the result.

Big data, Convolutional Neural Networks and the promise of Deep Learning: ~2010 to ~2015
Sometime around 2008, it was pretty clear that scientists were getting more and more comfortable with large datasets.  It wasn’t just the rise of “Cloud Computing” and “Big Data”; it was the rise of the data scientists.  Hacking on equations by morning, developing a prototype during lunch, deploying large scale computations in the evening, and integrating the findings into a production system by sunset.  During my two summers at Google Research, I saw lots of guys who had made their fame as vision hackers.  But they weren’t just writing “academic” papers at Google -- they were sharding datasets with one hand, compiling results for their managers, writing Borg scripts in their sleep, and piping results into gnuplot (because Jedis don’t need GUIs?). It was pretty clear that big data and a DevOps mentality were here to stay, and that the vision researcher of tomorrow would be quite comfortable with large datasets.  No longer did you need one guy with a mathy PhD, one software engineer, one manager, and one tester.  There were plenty of guys who could do all of those jobs.

Deep Learning: 1980s - 2015
2014 was definitely a big year for Deep Learning.  What’s interesting about Deep Learning is that it is a very old technique.  What we're seeing now is essentially the Neural Network 2.0 revolution -- but this time around, we're 20 years ahead R&D-wise and our computers are orders of magnitude faster.  And what’s funny is that the guys who were championing such techniques in the early 90s were the same guys we were laughing at in the late 90s (because clearly convex methods were superior to the magical NN learning-rate knobs). I guess they really had the last laugh, because eventually these relentless neural network gurus became the ones we now all look up to.  Geoffrey Hinton, Yann LeCun, Andrew Ng, and Yoshua Bengio are the 4 Titans of Deep Learning.  By now, just about everybody has jumped ship to become a champion of Deep Learning.

But with Google, Facebook, Baidu, and a multitude of little startups riding the Deep Learning wave, who will rise to the top as the master of artificial intelligence?


How do today's deep learning systems resemble the recognition systems of yesteryear?
Multiscale convolutional neural networks aren't all that different from the feature-based systems of the past.  The first-level neurons in deep learning systems learn to utilize gradients in a way that is similar to hand-crafted features such as SIFT and HOG.  Objects used to be found in a sliding-window fashion, but now it is easier and sexier to think of this operation as convolving an image with a filter. Some of the best detection systems used to use multiple linear SVMs, combined in some ad-hoc way, and now we are essentially using even more of such linear decision boundaries.  Deep learning systems can be thought of as multiple stages of applying linear operators and piping their outputs through a non-linear activation function; in that sense, deep learning is more similar to a clever combination of linear SVMs than to a memory-ish kernel-based learning system.
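
To make the "stacked linear operators plus non-linearities" point concrete, here is a toy numpy sketch of two convolution + ReLU stages. The filters are random rather than learned, which is exactly the part deep learning fills in; nothing about the sizes or values below is meant to reflect any particular network.

```python
# Toy numpy sketch of "linear operator, then non-linearity, repeated":
# two stages of valid 2-D convolution followed by ReLU.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

image = np.random.rand(32, 32)
w1 = np.random.randn(5, 5)      # first-stage filter (plays the role of a gradient-like feature)
w2 = np.random.randn(3, 3)      # second-stage filter
features = relu(conv2d(relu(conv2d(image, w1)), w2))
print(features.shape)           # (26, 26) after two valid convolutions
```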

Features these days aren't engineered by hand.  However, architectures of Deep systems are still being designed manually -- and it looks like the experts are the best at this task.  The operations on the inside of both classic and modern recognition systems are still very much the same.  You still need to be clever to play in the game, but now you need a big computer. There's still a lot of room for improvement, so I encourage all of you to be creative in your research.

Research-wise, it never hurts to know where we have been before so that we can better plan for our journey ahead.  I hope you enjoyed this brief history lesson and the next time you look for insights in your research, don't be afraid to look back.

To learn more about computer vision techniques:

Some Computer Vision datasets:
Caltech-101 Dataset
ImageNet Dataset

To learn about the people mentioned in this article:
Kristen Grauman (creator of Pyramid Match Kernel, Prof at Univ of Texas)
Bill Triggs (co-creator of HOG, Researcher at INRIA)
Navneet Dalal (co-creator of HOG, now at Google)
Yann LeCun (one of the Titans of Deep Learning, at NYU and Facebook)
Geoffrey Hinton (one of the Titans of Deep Learning, at Univ of Toronto and Google)
Andrew Ng (leading the Deep Learning effort at Baidu, Prof at Stanford)
Yoshua Bengio (one of the Titans of Deep Learning, Prof at U Montreal)
Deva Ramanan (one of the creators of DPM, Prof at UC Irvine)
Pedro Felzenszwalb (one of the creators of DPM, Prof at Brown)
Fei-Fei Li (Caltech-101 and ImageNet, Prof at Stanford)
Josef Sivic (Video Google and Visual Words, Researcher at INRIA/ENS)
Andrew Zisserman (Geometry-based methods in vision, Prof at Oxford)
Andrew E. Johnson (SPIN Images creator, Researcher at JPL)
Martial Hebert (Geometry-based methods in vision, Prof at CMU)





Thursday, November 27, 2014

Barcodes: Realtime Training and Detection with VMX

In this VMX screencast, witness the creation of a visual barcode detection program in under 9 minutes. You can see the entire training procedure -- creating an initial data set of labeled barcodes, improving the detector via a 5 minute interactive learning step, and finishing off with a qualitative evaluation of the trained barcode detector.


The inspiration came after reading Dr. Rosebrock's blog post on detecting barcodes using OpenCV and Python (http://www.pyimagesearch.com/2014/11/24/detecting-barcodes-images-python-opencv/).  While the code presented in Rosebrock's blog post is quite simple, it is most definitely domain-specific.  Different domain-specific programs must be constructed for different objects.  In other words, different kinds of morphological operations, features, and thresholds must be used for detecting different objects and it is not even clear how you would construct the rules to detect a complex object such as a "monkey."  If you are just getting started with programming and want to learn how to construct some of these domain-specific programs, you're just going to have to subscribe to http://www.pyimagesearch.com/.
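
For readers who want to see what such a domain-specific program looks like, here is a rough sketch in the spirit of that post (this is not Rosebrock's exact code, and it assumes OpenCV 4's findContours signature). Notice how many hand-tuned thresholds and kernel sizes it takes to handle even a single object category.

```python
# Rough sketch of a domain-specific barcode localizer: barcodes are regions
# with strong vertical bar structure, so gradients + morphology + a
# largest-contour heuristic can find them. Every constant below is hand-tuned.
import cv2
import numpy as np

def find_barcode_region(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = cv2.convertScaleAbs(cv2.subtract(gx, gy))     # emphasize vertical bars
    blurred = cv2.blur(grad, (9, 9))
    _, thresh = cv2.threshold(blurred, 225, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
    closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    closed = cv2.dilate(cv2.erode(closed, None, iterations=4), None, iterations=4)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(biggest)                     # (x, y, w, h)
```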

Writing these kinds of vision programs is hard.  Unless... you address the problem with some advanced machine learning techniques.  Applying machine learning to visual problems is "the backbone" of what we do at vision.ai and computer vision research has been a personal passion of mine for over a decade.  So I decided to take our most recent piece of vision tech for a spin.  We try not to code while on vacation (a good team needs good rest), and I don't consider using our GUI-based VMX software as hardcore as "coding."  Unlike traditional vision systems whose operation might leave you with an engineering-hangover, using VMX is more akin to playing Minecraft.  I figured that playing a video game or two on vacation is permissible.

Eliminating the residual sunscreen from my hands, I rebooted my soul with an iced gulp of Spice Isle Coffee and fired up my trusty Macbook Pro.  I then grabbed the first few vacation-themed objects from the kitchen. (And yes, I'm on vacation for Thanksgiving -- the objects include canned fruit, sunscreen, and a bottle of booze.)  Then it was time to throw the barcode detection problem at VMX.

Step 1: Barcode Initial Selections
30 seconds worth of initial clicks followed by several minutes worth of waving objects in front of the webcam is not hard work.  5 minutes later we have a sexy barcode detector.  Not too bad for computer vision in a non-laboratory setting.  While on vacation, I don't have access to a lab and neither should you.  A sun-filled patio will have to suffice.  In fact, it was so bright outside that I had to wear sunglasses the entire time. (Towards the end of the video, a "sunglasses" detector makes a cameo.)

Please note that the barcode is not actually "read" (so this program can't tell whether the region corresponds to canned pineapples or sunscreen); the region of interest is simply detected and tracked in real time.

Final Step: Tweaking Learned Positives and Negatives
This video is an example of a pure machine-learning based approach to barcode detection.  The underlying algorithm can be used to learn just about any visual concept you're interested in detecting.  A barcode is just like a face or a car -- it is a 2D pattern which can be recognized by machines.  Throughout my career I've trained thousands of detectors (mostly in an academic setting).  VMX is the most fun I've ever had with object recognition, and it lets me train detectors without having to worry about the mathematical details.  Once you get your own copy of VMX, what will you train?

To learn how to get your hands on VMX, sign up on the mailing list at http://vision.ai or if you're daring enough, you can purchase an early beta license key from https://beta.vision.ai.

So what's next?  Should I build a boat detector? Maybe I should train a detector to let me know when I run low on Spice Isle Coffee? Or how about going on a field trip and counting bikinis on the beach?

Sunday, October 26, 2014

VMX is ready

I haven't posted anything here in the last few months, so let me give you guys a brief update. VMX has matured since the Prototype stage last year and the vision.ai team has already started circulating some beta versions of our software.

For those of you who don't remember, last year I decided to leave my super-academic life at MIT and go the startup route, focusing on vision, learning, and automation.  Our goal is to make building and deploying vision applications as easy as pie. We want to be the Heroku of computer vision.  Personally, I've always wanted to expose the magic of vision to a broader audience.  I don't know if the robots of the future are going to have two legs, four arms, or whether they will forever be airborne -- but I can tell you that these creatures are going to have to perceive the world around them. 2014 is not a bad place to be for a vision company.

VMX, the suite of vision and automation tools which we showcased last year in our Kickstarter campaign, is going live very soon.  VMX will be vision.ai's first product.  While VMX doesn't do everything vision-related (there's OpenCV for that), it makes training visual object detectors really easy.  Whether you're just starting out with vision or AI, have a killer vision-app idea, or want to automate more things in your home, you're gonna want to experience VMX yourself.



We will be providing a native installer for Mac OS X as well as a single-command installer for Linux machines based on Docker. VMX will run on your machine without an internet connection (the download plus all dependencies plus all necessary pre-trained files is approximately 2GB, and an activation license will cost between $100 and $1000).  The VMX App Builder runs in your browser, is built in AngularJS, and our REST API will allow you to write your own scripts/apps in any language you like.  We even have lots of command line examples if you're a curl-y kind of guy/gal. If there's sufficient demand, we'll work on a native Windows installer.

We have been letting some of our close friends and colleagues beta-test our software and we're confident you're going to love it.  If you would like to beta-test our software, please sign up on the vision.ai mailing list and send us a beta-key request.  We have a limited number of beta-testing keys, so I'm sorry if we don't get back to you.  If you want a hands-on demo by one of the VMX creators, we are more than happy to take a hacking break and show off some VMX magic.  We can be found in Boston, MA and/or Burlington, VT.  If you're thinking of competing in a Hackathon near one of our offices, drop us a line and we'll try to send a vision.ai jedi your way.

Geoff has been championing Docker for the last year and he's done amazing things Dockerizing our build pipeline while I refactored the vision engine API using some ideas I picked up from Haskell, and made considerable performance tweaks to the underlying learning algorithm.  I spent a few months toying with different deep network representations, and modernized the internal representation so I can find another deep learning guru to help us out with R&D in 2015.

4 VMXserver processes running on a Macbook Pro

We're going to release plenty of pre-trained models plus all the tools and video tutorials you'll need to create your own models from scratch.  

We will be offering a $100 personal license and a $1000 professional license of VMX.  Beta testers get a personal license in return for helping find installation bugs. Internally, we are at version 0.1.3 of VMX, and once we attain 90%+ code coverage we will have VMX 1.0 sometime in early 2015.  We typically release stable versions every month and bleeding-edge development builds every week.

The future of vision.ai 

In the upcoming months, we'll be perfecting our cloud-based deployment platform, so if you're interested in building on top of our vision.ai infrastructure or want to have fun running some massively parallel vision computations with us, just shoot us an email.



Monday, January 20, 2014

Sponsor Your Favorite Object Detector + VMX Smile Detector

Many of you asked if the VMX Project will come with an initial set of object detectors. Yes! VMX will come equipped with a library of pre-trained object detectors. We are committed to providing you with an amazing VMX computer vision experience and want to give you as much as possible when you start using VMX.




Today, we’d like to introduce a special “sponsor your favorite object detector” reward. We’re introducing a new $300 pledge level to our Kickstarter page, one which lets you sponsor an object detector that will be included in the VMX pre-trained object library. In addition to sponsorship, you will obtain all the other perks of being a $300 level backer: 650 Compute Hours, a local VMX install, early access, the VMX cookbook, and VMX developer status. By sponsoring a detector, your name will appear inside the model library when a VMX user mouses over your favorite detector. This is your chance to make a pledge which will have an everlasting effect on our project. Consider the number of people that will at some point use a generic car detector! Each time they visit the VMX model library, you will have your own claim to fame. “Look mom, I sponsored the car detector!”
We have 100 slots for the $300 “sponsor an object detector” reward, and the name of the backer sponsoring an object detector will appear when you mouse over the object model in the model library. This way, your name will be inside the VMX webapp model library, in addition to the wall of backers on our company page. You will be able to choose your name, your best friend’s name, your twitter handle (such as @quantombone), or your nickname. Sorry, no profanity allowed.



We will release the list of 100 object detectors which will come with VMX at the end of January. Sponsors will get the chance to choose their object detectors on a first-come-first-serve basis. If you are the first one to become a sponsor, you will get to choose “face,” “car,” “guitar” or whatever other object you might be excited about! As always, you can change your pledge level and reward.  So act now, and don’t forget that by sponsoring an object detector you are helping our dream project come to life!
And for those of you interested in seeing more VMX action shots, here's a new video showing off VMX detecting smiles.  This one was taken with Tom's iPhone because the screencapture software on his computer slows everything down.  No post-processing, this is as fast as the prototype runs. Enjoy!


(Cross-posted from the VMX Project Kickstarter Update #10)

Tuesday, January 14, 2014

10% of our Kickstarter campaign total will go to free High School Student technology licenses

Dear Kickstarters, technology enthusiasts, and STEM educators,
We’re happy to announce a new reward in our Kickstarter project, one designed to give high school students free access to our robotic vision technology. If we reach our Kickstarter campaign milestone of $100K, we will give 10% of Kickstarter-generated funds to high school students and clubs in the form of software licenses. $100K raised will translate to 100 single-machine VMX licenses given out to 100 different high schools and clubs during the Summer of 2014, free of charge. Optionally, qualifying high schools can choose to claim 100 VMX Compute hours if they have a problem with local performance, don’t have access to a Linux machine, and/or their security policy doesn’t allow virtual machines.
Our Kickstarter project, the VMX Project, is an easy-to-use and fully trainable computer vision programming environment. With VMX, you can teach your computer to recognize objects using the webcam. We’ve already surpassed the 30% funding milestone and generated lots of great ideas from our community. Ideas ranging from medical disease diagnosis and 3D object reconstruction to smart wine inventory management. By bringing a computer vision app-building environment to students, we’re excited about the prospect of giving teens a sandbox for innovation -- an ecosystem to achieve their own technology-oriented Eureka moments. So whether a student decides to study computer science in college or comes up with the next great startup idea, we want to give them a headache-free entry to the world of computer vision.
If you want to learn more about the VMX Project, please see our Kickstarter page: 
The VMX High School Program is designed to give a limited number of students and student clubs free access to VMX in-browser object recognition technology. We understand that “Computer Vision for Everyone” needs to include a broader range of individuals, individuals with little or no spending income. We’re committed to letting those who can be most influenced by new technology, the young innovators inside our classrooms, get access to our technology.
By supporting our Kickstarter campaign, you are backing our vision of bringing computer vision technology to the masses. So whether you want VMX for your own creative use or want to give something to your community, we hope you’ll appreciate our new VMX Project High School Program reward and back our project. In addition, backers of our project will be able to donate any of their unused Compute Hours into the Eureka fund so that additional high school students get access to our technology.
If you are a high school student or high school teacher and would like to get some cool computer vision technology for your school, please send an email to “admin@vision.ai” with “VMX Project High School Program” in the title, briefly describing what you’d like to do with VMX, your age, and your school name. To generate interest among your students and friends, share our VMX Kickstarter video with your classroom and have one of your students email us with their idea.
Kickstarter is all-or-nothing, so we need to reach the $100K funding milestone to make this project a reality.
We are excited that as software developers, our creations have the potential to spread rapidly. But we want to make sure that one valuable demographic, creative high school students, isn’t left behind. Help spread the word about VMX using social networking and let’s make 2014 the year of new technology by bringing computer vision technology to the masses.
Sincerely, 
Tomasz Malisiewicz, PhD 
Co-Founder of vision.ai

Sunday, January 12, 2014

Can a person-specific face recognition algorithm be used to determine a person's race?

It's a valid question: can a person-specific face recognition algorithm be used to determine a person's race?

I trained two separate person-specific face detectors.  For each detector I used videos of the target person's face to generate positive examples and faces from [google image search for "faces"] as negative examples.  This is a fairly straightforward machine learning problem: find a decision boundary between the positive examples and the negative examples.  I used the VMX Project recognition algorithm which learns from videos with minimal human supervision.  In both cases, I used the VMX webapp for training (training each detector took roughly 20 minutes from scratch).  In fact, I didn't even have to touch the command line.  Since videos were used as input, what I created are essentially full-blown sliding window detectors, meaning that they scan an entire image and can even find small faces. I then ran this detector on the large average male face image.  This average face image has been around the internet for a while now and it was created by averaging people's faces.  By running the algorithm on this one image, it analyzed all of the faces contained inside and I was able to see which country returned the highest scoring detection!


Experiment #1
For the first experiment, I used a video of my own face.  Because I was using a live video stream, I was able to move my face around so that the algorithm saw lots of different viewing conditions.  Here is the output.  Notice the green box around "Poland."  Pretty good guess, especially since I moved from Poland to the US when I was 8.


Here is a 5 min video (VMX screencapture) of me running the "Tomasz" (that's my name in case you don't know) detector as I fly around the average male image.  You can see the scores on lots of different races.  High scoring detections are almost always on geographically relevant races.


Experiment #2
For the second target, I used a few videos of Andrew Ng to get positives.  For those of you who don't know, Andrew Ng is a machine learning researcher, entrepreneur, professor at Stanford, and MOOC visionary.  Here is the result.  Notice the green box around "Japan."  Very reasonable answer -- especially since I didn't give the algorithm any extra Asian faces as negatives.


Here is a 5 min video (VMX screencapture) of me running the "Andrew Ng" detector as I fly around the average male image.



In conclusion, person-specific face detectors from VMX can be used to help determine a person's race.  At least the two VMX face detectors I trained behaved as expected.  This is far from a full-out study, but I only had the chance to try it out on two subjects and wanted to share what I found.  The underlying algorithm inside VMX is a non-parametric exemplar-based model.  During training, the algorithm uses ideas from max-margin learning to create a separator between the positives and negatives.
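
The VMX internals aren't spelled out here, but for flavor, here is a textbook exemplar-style sketch of the "max-margin separator" idea (not our production code): one linear classifier per positive exemplar, trained against a shared pool of negatives. The feature matrices and regularization constants are placeholders.

```python
# Flavor-only sketch of an exemplar-style max-margin model (not the VMX code):
# train one linear classifier per positive exemplar against a shared negative pool.
# 'positive_feats' and 'negative_feats' are placeholder feature matrices.
import numpy as np
from sklearn.svm import LinearSVC

def train_exemplar_svms(positive_feats, negative_feats, C_pos=0.5, C_neg=0.01):
    exemplars = []
    for pos in positive_feats:
        X = np.vstack([pos[None, :], negative_feats])
        y = np.array([1] + [0] * len(negative_feats))
        # weight the single positive much more heavily than the many negatives
        clf = LinearSVC(C=C_neg, class_weight={1: C_pos / C_neg, 0: 1.0})
        clf.fit(X, y)
        exemplars.append(clf)
    return exemplars

def score(exemplars, feat):
    # non-parametric flavor: the detection score is the best exemplar's margin
    return max(clf.decision_function(feat[None, :])[0] for clf in exemplars)
```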

If you've been following up on my computer vision research projects, you should have a good idea of how these things work.  I want to mention that while I showcase VMX being used for face detection, there is nothing face-specific inside the algorithm.  The same representation is used for bottles, cars, hands, mouths, etc.  VMX is a general purpose object recognition ecosystem and we're excited to finally be releasing this technology to the world.

There are lots of cool applications of VMX detectors.  What app will you build?

To learn more about VMX and get in on the action, simply check out the VMX Kickstarter project and back our campaign.

Tuesday, January 07, 2014

Tracking points in a live camera feed: A behind-the-scenes look at the VMX Project webapp

In our computer vision startup, vision.ai, we're using open-source tools to create a one-of-a-kind object recognition experience.  Our goal is to make state-of-the-art visual object recognition as easy as waving an object in front of your laptop's or smartphone's camera.  We've made a webapp and programming environment called VMX that allows you to teach your computer about objects without any advanced programming, nor any bulky software installations -- you'll finally be able to put your computer's new visual reasoning abilities to good use.  Today's blog post is about some of the underlying technology that we used to build the VMX prototype.  (To learn about the entire project and how you can help, please visit VMX Project on Kickstarter.)

The VMX project utilizes many different programming languages and technologies.  Many of the behind-the-scenes machine learning algorithms have been developed in our lab, but to make a good product it takes more than just robust backend algorithms.  On the front-end, the two key open source (MIT licensed) projects we rely on are AngularJS and JSFeat. AngularJS is an open-source JavaScript framework, maintained by Google, that assists with running single-page applications.  Today's focus will be on JSFeat, the Javascript Computer Vision Library we use inside the front-end webapp.  What is JSFeat?  Quoting Eugene Zatepyakin, the author of JSFeat, "The project aim is to explore JS/HTML5 possibilities using modern & state-of-art computer vision algorithms."

We use the JSFeat library to track points inside the video stream.  Below is a YouTube video of our webapp in action, where we enabled the "debug display" to show you what is happening to tracked points behind the scenes.  The blue points are being tracked inside the browser, the green box is the output of our object detection service (already trained on my face), and the black box is the interpolated result which integrates the backend service and the frontend tracker.



The tracker calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.  The algorithm basically looks at two consecutive video frames and determines how points move by using a straightforward least-squares optimization method. The Lucas-Kanade algorithm is a classic in the computer vision community -- to learn more see the Lucas-Kanade Wikipedia page or take a graduate level computer vision course. Alternatively, if you find me on the street and ask nicely, I might give you an impromptu lecture on optical flow.

Instead of using interest points, in our prototype video we used a regularly spaced grid of points covering the entire video stream.  This grid gets re-initialized every N seconds.  It avoids the extra expense of finding interest points inside every frame.  NOTE: inside our vision.ai computer vision lab, we are incessantly experimenting with better ways of integrating point tracks with strong object detector results.  What you're seeing is just an early snapshot of the technology in action.
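
If you want to play with the same idea outside the browser, here is a sketch using OpenCV's pyramidal Lucas-Kanade as a stand-in for JSFeat's JavaScript implementation: seed a regular grid of points and track them frame to frame (a real app would also re-seed the grid every N seconds, as described above). The webcam index, grid step, and window size are just illustrative defaults.

```python
# Sketch of grid-based point tracking with pyramidal Lucas-Kanade optical flow.
# OpenCV's calcOpticalFlowPyrLK plays the role of the JSFeat tracker here.
import cv2
import numpy as np

def make_grid(h, w, step=32):
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    return np.float32(np.dstack([xs, ys]).reshape(-1, 1, 2))

cap = cv2.VideoCapture(0)                 # default webcam
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
points = make_grid(*prev.shape)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, points, None,
                                                  winSize=(15, 15), maxLevel=3)
    for p, s in zip(new_pts, status):
        if s:                             # draw only successfully tracked points
            cv2.circle(frame, tuple(np.int32(p.ravel())), 2, (255, 0, 0), -1)
    cv2.imshow('lk-grid', frame)
    prev, points = gray, new_pts
    if cv2.waitKey(1) == 27:              # Esc quits; a full app would also
        break                             # re-initialize the grid periodically
```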

To play with a Lucas-Kanade tracker, take a look at the JSFeat demo page which runs a point tracker directly inside your browser.  You'll have to click on points, one at a time.  You'll need Google Chrome or Firefox (just like our VMX project), and this will give you a good sense of what using VMX is going to be like once it is available.


To summarize, there are lots of great computer vision tools out there, but none of these tools can give you a comprehensive object recognition system which requires little-to-no programming experience.  There is a lot of work needed to put together appropriate machine learning algorithms, object detection libraries, web services, trackers, video codecs, etc.  Luckily, the team at vision.ai loves both code and machine learning.  In addition, having spent the last 10 years of my life working as a researcher in Computer Vision doesn't hurt.

Getting a PhD in Computer Vision and learning how all of these technologies work is a truly amazing experience.  I encourage many students to undertake this 6+ year journey and learn all about computer vision.  But I know the PhD path is not for everybody.  That's why we've built VMX: so the rest of you can enjoy the power of industrial-grade computer vision algorithms and the ease of intuitive web-based interfaces, without the expertise needed to piece together many different technologies.  The number of applications of computer vision tech is astounding, and it is a shame that this kind of technology wasn't delivered with a lower barrier to entry earlier.

With VMX, we're excited that the world is going to experience visual object recognition the way it was meant to be experienced.  But for that to happen, we still need your support.  Check out our VMX Project on Kickstarter (the page has lots of additional VMX in action videos), and help spread the word.