Merged
Commits
24 commits
cd953c8
Add and test Net::HasBlob and GetBlob to simplify feature extraction
kloudkl Feb 23, 2014
760d098
Add and test Net::HasLayer and GetLayerByName
kloudkl Feb 23, 2014
e76f7dc
Add image retrieval example
kloudkl Feb 23, 2014
f0336e1
Add feature extraction example
kloudkl Feb 23, 2014
b7b9dd8
Add feature binarization example
kloudkl Feb 23, 2014
fc740a3
Simplify image retrieval example to use binary features directly
kloudkl Feb 23, 2014
4de8280
Add __builtin_popcount* based fast Hamming distance math function
kloudkl Feb 25, 2014
dfe6380
Fix bugs in the feature extraction example
kloudkl Feb 25, 2014
01bb481
Enhance help, log message & format of the feature extraction example
kloudkl Feb 25, 2014
cfb2f91
Fix bugs of the feature binarization example
kloudkl Feb 25, 2014
23eecde
Fix bugs in the image retrieval example
kloudkl Feb 25, 2014
dd13fa0
Fix saving real valued feature bug in the feature extraction example
kloudkl Feb 25, 2014
706a926
Change feature binarization threshold to be the mean of all the values
kloudkl Feb 25, 2014
f97e87b
Save and load data correctly in feat extracion, binarization and IR demo
kloudkl Feb 26, 2014
c60d551
Move extract_features, binarize_features, retrieve_images to tools/
kloudkl Feb 26, 2014
8e7153b
Use lowercase underscore naming convention for Net blob & layer getters
kloudkl Feb 26, 2014
5bcdebd
Fix cpplint errors for Net, its tests and feature related 3 examples
kloudkl Feb 26, 2014
6a60795
Don't create a new batch after all the feature vectors have been saved
kloudkl Mar 17, 2014
25b6bcc
Add a python script to generate a list of all the files in a directory
kloudkl Mar 17, 2014
a2ad3c7
Add documentation for the feature extraction demo
kloudkl Mar 17, 2014
a967cf5
Move binarize_features, retrieve_images to examples/feauture_extraction
kloudkl Mar 18, 2014
44ebe29
Removing feature binarization and image retrieval examples
kloudkl Mar 19, 2014
c7201f7
Change generate file list python script path in feature extraction doc
kloudkl Mar 19, 2014
72c8c9e
Explain how to get the mean image of ILSVRC
kloudkl Mar 19, 2014
61 changes: 61 additions & 0 deletions docs/feature_extraction.md
@@ -0,0 +1,61 @@
---
layout: default
title: Caffe
---

Extracting Features Using Pre-trained Model
===========================================

Caffe stands for Convolution Architecture For Feature Extraction. Extracting features with a pre-trained model is one of the most frequently requested capabilities.

Because of the record-breaking image classification accuracy and the flexible domain adaptability of [the network architecture proposed by Krizhevsky, Sutskever, and Hinton](http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf), Caffe provides a pre-trained reference image model to save you from days of training.

For detailed usage information about the tools involved, please consult their source code, which documents everything you need to know.

Get the Reference Model
-----------------------

Assume you are in the root directory of Caffe.

    cd models
    ./get_caffe_reference_imagenet_model.sh

Once the download finishes, you will have models/caffe_reference_imagenet_model.

Preprocess the Data
-------------------

Generate a list of the files to process.

    examples/feature_extraction/generate_file_list.py /your/images/dir /your/images.txt

The network definition of the reference model only accepts 256*256 pixel images stored in the leveldb format. First, resize your images if they do not match the required size.

    build/tools/resize_and_crop_images.py --num_clients=8 --image_lib=opencv --output_side_length=256 --input=/your/images.txt --input_folder=/your/images/dir --output_folder=/your/resized/images/dir_256_256

Set num_clients to the number of CPU cores on your machine. On Linux, run "nproc" or "cat /proc/cpuinfo | grep processor | wc -l" to find this number.

    examples/feature_extraction/generate_file_list.py /your/resized/images/dir_256_256 /your/resized/images_256_256.txt
    build/tools/convert_imageset /your/resized/images/dir_256_256 /your/resized/images_256_256.txt /your/resized/images_256_256_leveldb 1

In practice, subtracting the mean image of a dataset significantly improves classification accuracy. Download the mean image of the ILSVRC dataset.

    data/ilsvrc12/get_ilsvrc_aux.sh

You can directly use the imagenet_mean.binaryproto in the network definition proto. If you have a large number of images, you can also compute the mean of all the images.

    build/tools/compute_image_mean.bin /your/resized/images_256_256_leveldb /your/resized/images_256_256_mean.binaryproto

Define the Feature Extraction Network Architecture
--------------------------------------------------

If you do not want to change the reference model network architecture, simply copy examples/imagenet into examples/your_own_dir. Then point the source and meanfile fields of the data layer in imagenet_val.prototxt to /your/resized/images_256_256_leveldb and /your/resized/images_256_256_mean.binaryproto, respectively.
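After those edits, the relevant part of the data layer looks roughly like the sketch below. This is a hedged illustration in the V0 prototxt style of the era, not a copy of the shipped imagenet_val.prototxt; field names and surrounding fields may differ in your copy of the file.

```
layers {
  layer {
    name: "data"
    type: "data"
    source: "/your/resized/images_256_256_leveldb"
    meanfile: "/your/resized/images_256_256_mean.binaryproto"
    batchsize: 50
    # keep the remaining fields (cropsize, mirror, ...) as in the original file
  }
  top: "data"
  top: "label"
}
```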

Extract Features
----------------

Now everything necessary is in place.

    build/tools/extract_features.bin models/caffe_reference_imagenet_model examples/feature_extraction/imagenet_val.prototxt fc7 examples/feature_extraction/features 10

The feature blob extracted here is fc7, which represents the highest-level features of the reference model. Any other blob name is also applicable. The last parameter is the number of data mini-batches to process.
25 changes: 25 additions & 0 deletions examples/feature_extraction/generate_file_list.py
@@ -0,0 +1,25 @@
#!/usr/bin/env python
import os
import sys

def help():
    print 'Usage: ./generate_file_list.py file_dir file_list.txt'
    exit(1)

def main():
    if len(sys.argv) < 3:
        help()
    file_dir = sys.argv[1]
    file_list_txt = sys.argv[2]
    if not os.path.exists(file_dir):
        print 'Error: file dir does not exist ', file_dir
        exit(1)
    file_dir = os.path.abspath(file_dir) + '/'
    with open(file_list_txt, 'w') as output:
        for root, dirs, files in os.walk(file_dir):
            for name in files:
                # Strip the directory prefix to get a path relative to file_dir.
                file_path = os.path.join(root, name).replace(file_dir, '')
                output.write(file_path + '\n')

if __name__ == '__main__':
    main()
9 changes: 9 additions & 0 deletions include/caffe/net.hpp
@@ -82,6 +82,13 @@ class Net {
  inline int num_outputs() { return net_output_blobs_.size(); }
  inline vector<Blob<Dtype>*>& input_blobs() { return net_input_blobs_; }
  inline vector<Blob<Dtype>*>& output_blobs() { return net_output_blobs_; }
  // has_blob and blob_by_name are inspired by
  // https://github.com/kencoken/caffe/commit/f36e71569455c9fbb4bf8a63c2d53224e32a4e7b
  // Access intermediary computation layers, testing with centre image only
  bool has_blob(const string& blob_name);
  const shared_ptr<Blob<Dtype> > blob_by_name(const string& blob_name);
  bool has_layer(const string& layer_name);
  const shared_ptr<Layer<Dtype> > layer_by_name(const string& layer_name);

 protected:
  // Function to get misc parameters, e.g. the learning rate multiplier and
@@ -91,11 +98,13 @@
  // Individual layers in the net
  vector<shared_ptr<Layer<Dtype> > > layers_;
  vector<string> layer_names_;
  map<string, int> layer_names_index_;
  vector<bool> layer_need_backward_;
  // blobs stores the blobs that store intermediate results between the
  // layers.
  vector<shared_ptr<Blob<Dtype> > > blobs_;
  vector<string> blob_names_;
  map<string, int> blob_names_index_;
  vector<bool> blob_need_backward_;
  // bottom_vecs stores the vectors containing the input for each layer.
  // They don't actually host the blobs (blobs_ does), so we simply store
4 changes: 4 additions & 0 deletions include/caffe/util/math_functions.hpp
@@ -1,4 +1,5 @@
// Copyright 2013 Yangqing Jia
// Copyright 2014 kloudkl@github

#ifndef CAFFE_UTIL_MATH_FUNCTIONS_H_
#define CAFFE_UTIL_MATH_FUNCTIONS_H_
@@ -100,6 +101,9 @@ Dtype caffe_cpu_dot(const int n, const Dtype* x, const Dtype* y);
template <typename Dtype>
void caffe_gpu_dot(const int n, const Dtype* x, const Dtype* y, Dtype* out);

template <typename Dtype>
int caffe_hamming_distance(const int n, const Dtype* x, const Dtype* y);

} // namespace caffe


42 changes: 42 additions & 0 deletions src/caffe/net.cpp
@@ -162,6 +162,12 @@ void Net<Dtype>::Init(const NetParameter& in_param) {
    LOG(INFO) << "This network produces output " << *it;
    net_output_blobs_.push_back(blobs_[blob_name_to_idx[*it]].get());
  }
  for (size_t i = 0; i < blob_names_.size(); ++i) {
    blob_names_index_[blob_names_[i]] = i;
  }
  for (size_t i = 0; i < layer_names_.size(); ++i) {
    layer_names_index_[layer_names_[i]] = i;
  }
  GetLearningRateAndWeightDecay();
  LOG(INFO) << "Network initialization done.";
  LOG(INFO) << "Memory required for Data " << memory_used*sizeof(Dtype);
@@ -327,6 +333,42 @@ void Net<Dtype>::Update() {
}
}

template <typename Dtype>
bool Net<Dtype>::has_blob(const string& blob_name) {
  return blob_names_index_.find(blob_name) != blob_names_index_.end();
}

template <typename Dtype>
const shared_ptr<Blob<Dtype> > Net<Dtype>::blob_by_name(
    const string& blob_name) {
  shared_ptr<Blob<Dtype> > blob_ptr;
  if (has_blob(blob_name)) {
    blob_ptr = blobs_[blob_names_index_[blob_name]];
  } else {
    blob_ptr.reset((Blob<Dtype>*)(NULL));
    LOG(WARNING) << "Unknown blob name " << blob_name;
  }
  return blob_ptr;
}

template <typename Dtype>
bool Net<Dtype>::has_layer(const string& layer_name) {
  return layer_names_index_.find(layer_name) != layer_names_index_.end();
}

template <typename Dtype>
const shared_ptr<Layer<Dtype> > Net<Dtype>::layer_by_name(
    const string& layer_name) {
  shared_ptr<Layer<Dtype> > layer_ptr;
  if (has_layer(layer_name)) {
    layer_ptr = layers_[layer_names_index_[layer_name]];
  } else {
    layer_ptr.reset((Layer<Dtype>*)(NULL));
    LOG(WARNING) << "Unknown layer name " << layer_name;
  }
  return layer_ptr;
}

INSTANTIATE_CLASS(Net);

} // namespace caffe
77 changes: 77 additions & 0 deletions src/caffe/test/test_math_functions.cpp
@@ -0,0 +1,77 @@
// Copyright 2014 kloudkl@github

#include <stdint.h> // for uint32_t & uint64_t

#include "gtest/gtest.h"
#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/filler.hpp"
#include "caffe/util/math_functions.hpp"

#include "caffe/test/test_caffe_main.hpp"

namespace caffe {

template<typename Dtype>
class MathFunctionsTest : public ::testing::Test {
 protected:
  MathFunctionsTest()
      : blob_bottom_(new Blob<Dtype>()),
        blob_top_(new Blob<Dtype>()) {
  }

  virtual void SetUp() {
    Caffe::set_random_seed(1701);
    this->blob_bottom_->Reshape(100, 70, 50, 30);
    this->blob_top_->Reshape(100, 70, 50, 30);
    // fill the values
    FillerParameter filler_param;
    GaussianFiller<Dtype> filler(filler_param);
    filler.Fill(this->blob_bottom_);
    filler.Fill(this->blob_top_);
  }

  virtual ~MathFunctionsTest() {
    delete blob_bottom_;
    delete blob_top_;
  }

  // http://en.wikipedia.org/wiki/Hamming_distance
  int ReferenceHammingDistance(const int n, const Dtype* x, const Dtype* y);

  Blob<Dtype>* const blob_bottom_;
  Blob<Dtype>* const blob_top_;
};

#define REF_HAMMING_DIST(float_type, int_type) \
template<> \
int MathFunctionsTest<float_type>::ReferenceHammingDistance(const int n, \
                                                            const float_type* x, \
                                                            const float_type* y) { \
  int dist = 0; \
  int_type val; \
  for (int i = 0; i < n; ++i) { \
    val = static_cast<int_type>(x[i]) ^ static_cast<int_type>(y[i]); \
    /* Count the number of set bits */ \
    while (val) { \
      ++dist; \
      val &= val - 1; \
    } \
  } \
  return dist; \
}

REF_HAMMING_DIST(float, uint32_t);
REF_HAMMING_DIST(double, uint64_t);

typedef ::testing::Types<float, double> Dtypes;
TYPED_TEST_CASE(MathFunctionsTest, Dtypes);

TYPED_TEST(MathFunctionsTest, TestHammingDistance) {
  int n = this->blob_bottom_->count();
  const TypeParam* x = this->blob_bottom_->cpu_data();
  const TypeParam* y = this->blob_top_->cpu_data();
  CHECK_EQ(this->ReferenceHammingDistance(n, x, y),
           caffe_hamming_distance<TypeParam>(n, x, y));
}

} // namespace caffe