Caffe (Convolutional Architecture for Fast Feature Embedding) is a deep learning framework originally developed at the University of California, Berkeley.
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. - Official website
Caffe allows switching between CPU and GPU computation by setting a single flag, and it is among the fastest ConvNet implementations available.
Caffe can process over 60M images per day with a single NVIDIA K40 GPU*. That’s 1 ms/image for inference and 4 ms/image for learning and more recent library versions and hardware are faster still. - Official website
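As a rough sketch of the single-flag CPU/GPU switch mentioned above, the pycaffe interface exposes `caffe.set_mode_cpu()` and `caffe.set_mode_gpu()`; the network definition and weight file names below are placeholders, not files shipped with Caffe.

```python
import caffe

# Flip this one flag to move the whole pipeline between CPU and GPU.
USE_GPU = True

if USE_GPU:
    caffe.set_device(0)   # GPU id to use
    caffe.set_mode_gpu()
else:
    caffe.set_mode_cpu()

# Placeholder file names -- substitute your own model definition and weights.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)
```

The rest of the script (loading data, running `net.forward()`, reading blobs) is identical in either mode; only the mode call changes.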
# | Minimum | Recommended | Optimal |
---|---|---|---|
1 | CUDA v5.5 and v5.0 (considered legacy) | CUDA v6.* | CUDA v7+ |
2 | Basic Linear Algebra Subprograms (BLAS) via ATLAS, MKL, or OpenBLAS; Boost >= 1.55; protobuf, glog, gflags, hdf5 | Minimum plus (optional): OpenCV >= 2.4 (including 3.0); IO libraries: lmdb, leveldb (note: leveldb requires snappy); cuDNN for GPU acceleration (v6) | |
3 | For Python Caffe: Python 2.7 or Python 3.3+, numpy (>= 1.7), boost-provided boost.python; for MATLAB Caffe: MATLAB with the mex compiler | | |
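A minimal sanity check of the Python-interface prerequisites in row 3 might look like the following sketch; it assumes only that pycaffe has been built (via boost.python) and is on `PYTHONPATH`.

```python
import sys

# Row 3: Python 2.7 or Python 3.3+ is required.
assert sys.version_info[:2] == (2, 7) or sys.version_info >= (3, 3), \
    "Python 2.7 or 3.3+ is required"

# Row 3: numpy >= 1.7 is required.
import numpy
assert tuple(int(x) for x in numpy.__version__.split('.')[:2]) >= (1, 7), \
    "numpy >= 1.7 is required"

# Importing caffe succeeds only if pycaffe was built and is on PYTHONPATH.
import caffe
print('numpy', numpy.__version__)
print('caffe imported from', caffe.__file__)
```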