Caffe

A deep learning framework made with expression, speed, and modularity in mind


Overview

Caffe (Convolutional Architecture for Fast Feature Embedding) is a deep learning framework originally developed at the University of California, Berkeley.

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. - Official website

Caffe lets you switch between CPU and GPU computation by setting a single flag, and it is among the fastest ConvNet implementations available.
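For example, a minimal sketch of the device switch through the pycaffe interface (the prototxt and weights file names are placeholders, not files shipped with Caffe):

    import caffe

    # The device is chosen with a single call; everything else stays the same.
    caffe.set_mode_cpu()        # run on the CPU
    # caffe.set_mode_gpu()      # ...or run on the GPU
    # caffe.set_device(0)       #    optionally selecting GPU 0 first

    # Load a trained network in deploy/test phase (placeholder file names).
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

The command-line caffe tool and the solver definition expose the same switch (the -gpu flag and solver_mode: CPU / solver_mode: GPU, respectively).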

Caffe can process over 60M images per day with a single NVIDIA K40 GPU. That's 1 ms/image for inference and 4 ms/image for learning, and more recent library versions and hardware are faster still. - Official website
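As a quick sanity check on how the per-image latencies relate to the daily throughput figure (plain arithmetic assuming continuous 24-hour operation; the quoted numbers are rounded):

    SECONDS_PER_DAY = 24 * 60 * 60           # 86,400 s

    # per-image latency -> images per day
    print(SECONDS_PER_DAY * 1000 / 1.0)      # 1 ms/image -> 86,400,000 images/day
    print(SECONDS_PER_DAY * 1000 / 4.0)      # 4 ms/image -> 21,600,000 images/day

    # daily throughput -> per-image latency
    print(SECONDS_PER_DAY * 1000 / 60e6)     # 60M images/day -> ~1.44 ms/image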

Documentation | Mailing list (Users)


System Requirements

1. CUDA - Minimum: v5.5 and v5.0 (considered legacy); Recommended: v6.*; Optimal: v7+
2. Libraries - Minimum: Basic Linear Algebra Subprograms via ATLAS, MKL, or OpenBLAS; Boost >= 1.55; protobuf, glog, gflags, hdf5. Recommended: the minimum plus, optionally, OpenCV >= 2.4 (including 3.0), the IO libraries lmdb and leveldb (note: leveldb requires snappy), and cuDNN for GPU acceleration (v6)
3. Interfaces - For Python Caffe: Python 2.7 or Python 3.3+, numpy (>= 1.7), and boost-provided boost.python. For MATLAB Caffe: MATLAB with the mex compiler
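A minimal sketch of checking the Python-side prerequisites listed above (it assumes pycaffe has been built and added to PYTHONPATH; the version thresholds simply mirror requirement 3):

    import sys
    import numpy as np

    # Python 2.7 or 3.3+, numpy >= 1.7 (requirement 3 above).
    assert sys.version_info[:2] == (2, 7) or sys.version_info >= (3, 3), "unsupported Python version"
    assert tuple(int(p) for p in np.__version__.split('.')[:2]) >= (1, 7), "numpy >= 1.7 required"

    try:
        import caffe   # only importable if pycaffe was built and is on PYTHONPATH
        print("pycaffe import OK")
    except ImportError as err:
        print("pycaffe not available:", err)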

Ratings

4.00 / 5

InfoWorld: 4/5, based on a professional's opinion

Developer

Yangqing Jia (original developer), Berkeley Vision and Learning Center (BVLC) / Berkeley AI Research (BAIR)

Written in

C++, Python, CUDA

Initial Release

10 October 2013

License

BSD 2-Clause