A Performance Comparison of Popular Deep Learning Frameworks


A Zhihu thread compares the various deep learning frameworks:

        How should one choose among the many neural network frameworks such as Chainer, Caffe, Torch, and MXNet?

Four months ago someone asked for the comparison to be updated; as far as I can tell, it still has not been.

        Evaluation of Deep Learning Toolkits


Original text:

Abstract. In this study, I evaluate some popular deep learning toolkits. The candidates are listed in alphabetical order: Caffe, CNTK, TensorFlow, Theano, and Torch. This is a dynamic document and the evaluation, to the best of my knowledge, is based on the current state of their code.

I also provide ratings in some areas because for a lot of people, ratings are useful. However, keep in mind that ratings are inherently subjective [1].

If you find something wrong or inadequate, please help improve by filing an issue.

      This article compares the Caffe, CNTK, TensorFlow, Theano, and Torch frameworks. Corrections are welcome!


Table of contents

  1. Modeling Capability   
  2. Interfaces
  3. Model Deployment
  4. Performance
  5. Architecture
  6. Ecosystem
  7. Cross-platform

         


1. Modeling Capability

   

In this section, we evaluate each toolkit's ability to train common and state-of-the-art networks without writing too much code. Some of these networks are:

  • ConvNets: AlexNet, OxfordNet, GoogleNet
  • RecurrentNets: plain RNN, LSTM/GRU, bidirectional RNN
  • Sequential modeling with attention.

In addition, we also evaluate the flexibility to create a new type of model.


Caffe


Caffe is perhaps the first mainstream industry-grade deep learning toolkit, started in late 2013, due to its excellent convnet implementation (at the time). It is still the most popular toolkit within the computer vision community, with many extensions being actively added.

However, its support for recurrent networks, and language modeling in general, is poor, due to its legacy architecture, whose limitations are detailed in the Architecture section.

CNTK


CNTK is a deep learning system started by the speech people who started the deep learning craze, and it has grown into a more general, platform-independent deep learning system. It is better known in the speech community than in the general deep learning community.

In CNTK (as in TensorFlow and Theano), a network is specified as a symbolic graph of vector operations, such as matrix add/multiply or convolution. A layer is just a composition of those operations. The fine granularity of the building blocks (operations) allows users to invent new complex layer types without implementing them in a low-level language (as in Caffe).
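The fine-grained-building-blocks idea can be sketched in plain Python with NumPy (the layer, shapes, and ReLU choice below are illustrative, not CNTK's actual API): a "layer" is nothing but a composition of primitive vector operations.

```python
import numpy as np

def fully_connected(x, W, b):
    # A "layer" is just a composition of primitive ops:
    # matrix multiply, add, and an elementwise nonlinearity.
    return np.maximum(x @ W + b, 0.0)  # ReLU(xW + b)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # batch of 4, input dim 8
W = rng.standard_normal((8, 3))   # weights: 8 -> 3
b = np.zeros(3)

h = fully_connected(x, W, b)
print(h.shape)  # (4, 3)
```

Because the graph is built from such primitives, a user can define a new layer type entirely at this level, without dropping down to C++.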

As of today, CNTK is not usable for a variety of tasks, such as sequence-to-sequence.

TensorFlow


State-of-the-art models

  • RNN API and implementation are suboptimal. The team has also commented about it here and here.
  • Bidirectional RNN not available yet
  • No 3D convolution, which is useful for video recognition

New models. Since TF uses the symbolic-graph-of-vector-operations approach, specifying a new network is fairly easy. Although it doesn't support symbolic loops yet (at least not well tested/documented, as of 05/2016), RNNs can be made easy and efficient using the bucketing trick.
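The bucketing trick can be sketched framework-independently: sequences are grouped into a few fixed lengths and padded up, so that only a handful of static unrolled graphs is ever needed (the bucket sizes and pad token below are made-up values for illustration):

```python
def assign_bucket(seq_len, buckets=(5, 10, 20, 40)):
    # Return the smallest bucket that fits the sequence; the
    # sequence is then padded to that length, so only
    # len(buckets) static graphs are ever built.
    for b in buckets:
        if seq_len <= b:
            return b
    raise ValueError(f"sequence of length {seq_len} exceeds largest bucket")

def pad(seq, bucket, pad_token=0):
    return seq + [pad_token] * (bucket - len(seq))

sentence = [3, 1, 4, 1, 5, 9, 2]       # token ids, length 7
bucket = assign_bucket(len(sentence))  # smallest bucket >= 7 is 10
padded = pad(sentence, bucket)
print(bucket, len(padded))             # 10 10
```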

However, TF has a major weakness in terms of modeling flexibility. Every computational flow has to be constructed as a static graph. That makes some computations difficult, such as beam search (which is used frequently in sequence prediction tasks).
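To see why a static graph makes beam search awkward, note that its control flow prunes and re-ranks hypotheses based on runtime scores. As an imperative sketch (the scoring function is a stub standing in for a real decoder, not any framework's API):

```python
import math

def step_scores(prefix):
    # Stub for a model's next-token log-probabilities; a real
    # decoder would run a network here. Vocabulary of size 3.
    return [math.log(p) for p in (0.6, 0.3, 0.1)]

def beam_search(beam_width=2, max_len=3):
    beams = [([], 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, lp in enumerate(step_scores(seq)):
                candidates.append((seq + [tok], score + lp))
        # Data-dependent pruning: keep only the best hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams

best_seq, best_score = beam_search()[0]
print(best_seq)  # [0, 0, 0] with this toy distribution
```

The sort-and-prune step depends on values only known at runtime, which is exactly what is hard to express in a graph declared ahead of time.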

Theano


State-of-the-art models. Theano has implementations for most state-of-the-art networks, either in the form of a higher-level framework (e.g. Blocks, Keras, etc.) or in pure Theano.

New models. Theano pioneered the trend of using a symbolic graph for programming a network. Theano's symbolic API supports looping control, so-called scan, which makes implementing RNNs easy and efficient. Users don't always have to define a new model at the tensor-operations level. There are a few higher-level frameworks, mentioned above, which make model definition and training simpler.
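What scan provides is, in effect, a symbolic version of the loop below. This NumPy sketch of a vanilla RNN step (shapes and the tanh cell are illustrative) shows the computation that gets looped over time:

```python
import numpy as np

def rnn(inputs, h0, Wxh, Whh, b):
    # scan-like loop: apply the same step at every timestep,
    # threading the hidden state through.
    h = h0
    states = []
    for x_t in inputs:                  # inputs: (T, input_dim)
        h = np.tanh(x_t @ Wxh + h @ Whh + b)
        states.append(h)
    return np.stack(states)             # (T, hidden_dim)

rng = np.random.default_rng(1)
T, d_in, d_h = 6, 4, 5
states = rnn(rng.standard_normal((T, d_in)),
             np.zeros(d_h),
             rng.standard_normal((d_in, d_h)),
             rng.standard_normal((d_h, d_h)),
             np.zeros(d_h))
print(states.shape)  # (6, 5)
```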

Torch

State-of-the-art models

  • Excellent for conv nets. It's worth noting that temporal convolution can be done in TensorFlow/Theano via conv2d, but that's a trick. The native interface for temporal convolution in Torch makes it slightly more intuitive to use.
  • Rich set of RNNs available through a non-official extension [2]

New models. In Torch, there are multiple ways (stack of layers or graph of layers) to define a network but essentially, a network is defined as a graph of layers. Because of this coarser granularity, Torch is sometimes considered less flexible because for new layer types, users have to implement the full forward, backward, and gradient input update.
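To make that cost concrete, here is a minimal NumPy sketch of the contract a new layer type must satisfy: forward, backward (gradient w.r.t. input), and parameter gradients, all written by hand (the class and variable names are illustrative, not Torch's API):

```python
import numpy as np

class Linear:
    """A new layer type must supply forward, backward, and
    parameter-gradient computations itself."""
    def __init__(self, W, b):
        self.W, self.b = W, b

    def forward(self, x):
        self.x = x                     # cache input for backward
        return x @ self.W + self.b

    def backward(self, grad_out):
        self.dW = self.x.T @ grad_out  # parameter gradients
        self.db = grad_out.sum(axis=0)
        return grad_out @ self.W.T     # gradient w.r.t. input

rng = np.random.default_rng(2)
layer = Linear(rng.standard_normal((3, 2)), np.zeros(2))
x = rng.standard_normal((5, 3))
y = layer.forward(x)
dx = layer.backward(np.ones_like(y))
print(y.shape, dx.shape)  # (5, 2) (5, 3)
```

With an operation-level graph (CNTK/TF/Theano), the backward pass would instead be derived automatically from the forward composition.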

However, unlike Caffe, defining a new layer in Torch is much easier because you don't have to program in C++. Plus, in Torch, the difference between new layer definition and network definition is minimal. In Caffe, layers are defined in C++ while networks are defined via Protobuf.

Torch is more flexible than TensorFlow and Theano in that it is imperative while TF/Theano are declarative (i.e. one has to declare a computational graph). That makes some operations, e.g. beam search, much easier to do in Torch.




Left: graph model of CNTK/Theano/TensorFlow; Right: graph model of Caffe/Torch



2. Interfaces

Caffe

Caffe has a pycaffe interface, but that's a mere secondary alternative to the command-line interface. The model has to be defined in protobuf (usually with a plain text editor), even if you use pycaffe.
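For concreteness, a Caffe model fragment in protobuf text format looks roughly like this (the layer names and parameter values are illustrative; the authoritative schema is caffe.proto):

```protobuf
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "data"
  top: "fc1"
  inner_product_param {
    num_output: 128
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "fc1"
  top: "fc1"
}
```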


CNTK

The way to use CNTK, similar to Caffe, is to specify a config file and run command line. CNTK is slightly worse than Caffe because there's no Python or any other high-level language interface.


TensorFlow

TF supports two interfaces: Python and C++. This means that you can do experiments in a rich, high-level environment and deploy your model in an environment that requires native code or low latency.

It would be perfect if TF supported F# or TypeScript. The lack of static typing in Python is just ... painful :).


(Translator: TensorFlow supporting F# would be a brain-dead idea!!!)

Theano

Python

Torch

Torch runs on LuaJIT, which is amazingly fast (comparable with industrial languages such as C++/C#/Java). Hence developers don't have to think about symbolic programming, which can be limited. They can just write all kinds of computations without worrying about performance penalty.

However, let's face it, Lua is not yet a mainstream language.




3. Model Deployment

How easy is it to deploy a new model?

Caffe

Caffe is C++-based and can be compiled on a variety of devices. It is cross-platform (a Windows port is available and maintained here), which makes Caffe the best choice with respect to deployment.


CNTK

Like Caffe, CNTK is also C++-based and cross-platform. Hence, deployment should be easy in most cases. However, to my understanding, it doesn't work on the ARM architecture, which limits its capability on mobile devices.


TensorFlow

TF supports a C++ interface and the library can be compiled/optimized on ARM architectures because it uses Eigen (instead of a BLAS library). This means that you can deploy your trained models on a variety of devices (servers or mobile devices) without having to implement a separate model decoder or load a Python/LuaJIT interpreter [3].

TF doesn't work on Windows yet so TF models can't be deployed on Windows devices though.


Theano

The lack of low-level interface and the inefficiency of Python interpreter makes Theano less attractive for industrial users. For a large model, the overhead of Python isn’t too bad but the dogma is still there.

The cross-platform nature (mentioned below) enables a Theano model to be deployed in a Windows environment, which helps it gain some points.

Torch

Torch requires LuaJIT to run models. This makes it less attractive than the bare-bones C++ support of Caffe/CNTK/TF. It's not just the performance overhead, which is minimal. The bigger problem is integration, at the API level, with a larger production pipeline.




4. Performance

Single-GPU

All of these toolkits call cuDNN so as long as there’s no major computations or memory allocations at the outer level, they should perform similarly.

Soumith@FB has done some benchmarking for ConvNets. Deep learning is not just about feedforward convnets, not just about ImageNet, and certainly not just about a few passes over the network. However, Soumith's benchmark is the only notable one as of today, so we will base the single-GPU performance rating on it.

TensorFlow and Torch


TensorFlow used to be slow when it first came out but as of 05/2016, it has reached the ballpark of other frameworks in terms of ConvNet speed. This is not surprising because every framework nowadays calls CuDNN for the actual computations.

Here's my latest micro benchmark of TensorFlow 0.8 vs before. The measurement is latency, in milliseconds, for one full minibatch forward-backward pass on a single Titan X GPU.

Network        TF 0.6 [ref]   TF 0.8 [my run]   Torch FP32 [my run]
AlexNet        292            97                81
Inception v1   1237           518               470

Theano


On big networks, Theano’s performance is on par with Torch7, according to this benchmark. The main issue of Theano is startup time, which is terrible, because Theano has to compile C/CUDA code to binary. We don’t always train big models. In fact, DL researchers often spend more time debugging than training big models. TensorFlow doesn’t have this problem. It simply maps the symbolic tensor operations to the already-compiled corresponding function calls.

Even import theano takes time because this import apparently does a lot of stuff. Also, after importing Theano, you are stuck with a pre-configured device (e.g. GPU0).


Multi-GPU

TBD


5. Architecture

Developer Zone

Caffe


Caffe's architecture was considered excellent when it was born, but by modern standards it is considered average. The main pain points of Caffe are its layer-wise design in C++ and the protobuf interface for model definition.

Layer-wise design. The building block of a network in Caffe is layer.

  • For new layer types, you have to define the full forward, backward, and gradient update. You can see an already long list of layers implemented in (official) Caffe.
  • What's worse is that if you want to support both CPU and GPU, you need to implement extra functions, e.g. Forward_gpu and Backward_gpu.
  • Worse, you need to assign an int id to your layer type and add it to the proto file. If your pull request is not merged early, you may need to change the id because someone else has already claimed it.

Protobuf. Caffe has a pycaffe interface, but that's a mere replacement of the command-line interface. The model has to be defined in protobuf (usually with a plain text editor), even if you use pycaffe.

[Copied from my own answer on Quora]

CNTK

To be updated ...

TensorFlow

TF has a clean, modular architecture with multiple frontends and execution platforms. Details are in the white paper.

Theano

The architecture is fairly hacky: the whole code base is Python where C/CUDA code is packaged as Python string. This makes it hard to navigate, debug, refactor, and hence contribute as developers.

Torch

Torch7 and nn libraries are also well-designed with clean, modular interfaces.

6. Ecosystem

  • Caffe and CNTK: C++
  • TensorFlow: Python and C++
  • Theano: Python
  • Torch: Lua is not a mainstream language and hence libraries built for it are not as rich as ones built for Python.

7. Cross-platform

Caffe, CNTK, and Theano work on all OSes. TensorFlow and Torch do not work on Windows and there's no known plan to port from either camp.



Reviewer's summary (not from the original author):

           For individual experimenters, Caffe is strongly recommended for CNNs; for researchers working with RNNs, Theano and TensorFlow are recommended.

           For parallel and distributed systems, Caffe is recommended for CNNs and TensorFlow for RNNs.


Karpathy's take:
  1. For feature extraction or fine-tuning known models, use Caffe.
  2. For more complex applications on top of pre-trained models, use Torch.
  3. For writing your own network layers, use Torch.
  4. For heavy use of RNNs, use Theano or TensorFlow.
  5. For large-scale or parallel model training, use TensorFlow.


Footnotes

[1] Note that I don’t aggregate ratings because different users/developers have different priorities.

[2] Disclaimer: I haven’t analyzed this extension carefully.

[3] See my blog post for why this is desirable.



Author: wishchin
Original link: https://blog.csdn.net/wishchin/article/details/51853599

  • Published 2019-11-07
  • Category: Deep Learning
