Caffe
Caffe is a deep learning framework made with expression, speed, and modularity in mind
Features
Caffe (Convolutional Architecture for Fast Feature Embedding) is a deep learning framework, originally developed at University of California, Berkeley.
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. - Official website
Caffe allows switching between CPU and GPU by setting a single flag. Caffe is among the fastest ConvNet implementations available.
Caffe can process over 60M images per day with a single NVIDIA K40 GPU. That’s 1 ms/image for inference and 4 ms/image for learning, and more recent library versions and hardware are faster still. - Official website
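The CPU/GPU switch mentioned above is exposed as a single field in the solver definition. A minimal sketch of a solver prototxt (the file names and hyperparameter values here are illustrative, not from the source):

```prototxt
# solver.prototxt -- illustrative values; solver_mode is the point here
net: "train_val.prototxt"          # hypothetical network definition file
base_lr: 0.01
max_iter: 10000
snapshot_prefix: "snapshots/caffe_train"
# The single flag: flip between CPU and GPU execution
solver_mode: GPU                   # or: CPU
```

From the Python interface the same switch is made at runtime with `caffe.set_mode_gpu()` or `caffe.set_mode_cpu()` (plus `caffe.set_device(0)` to select a particular GPU).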
System Requirements
| # | Minimum | Recommended | Optimal |
|---|---------|-------------|---------|
| 1 | CUDA v5.5 and v5.0 (considered legacy) | CUDA v6.* | CUDA v7+ |
| 2 | Basic Linear Algebra Subprograms via ATLAS, MKL, or OpenBLAS; Boost >= 1.55; protobuf, glog, gflags, hdf5 | OpenCV >= 2.4 including 3.0; IO libraries lmdb and leveldb (note: leveldb requires snappy); cuDNN for GPU acceleration (v6) | |
| 3 | Python 2.7 or Python 3.3+, numpy (>= 1.7), boost-provided boost.python; for MATLAB Caffe: MATLAB with the mex compiler | | |
Developer
Yangqing Jia (original developer), Berkeley Vision and Learning Center (BVLC), now Berkeley AI Research (BAIR)
Written in
C++, Python, CUDA
Initial Release
10 October 2013
Alternatives
Deep Learning
OpenNN
Apache MXNet (Incubating)
Apache SystemML
Eclipse Deeplearning4j
PyTorch
TensorFlow
The Microsoft Cognitive Toolkit
Torch
Weka