
Visualize Features of a Convolutional Neural Network

This example shows how to visualize the features learned by convolutional neural networks.

Convolutional neural networks use features to classify images. The network learns these features itself during training. What the network learns during training is sometimes unclear. However, you can use the deepDreamImage function to visualize the learned features.

Convolutional layers output multiple 2-D arrays. Each array (or channel) corresponds to a filter applied to the layer input. The channels output by the fully connected layers at the end of the network correspond to high-level combinations of the features learned by earlier layers.

You can visualize what the learned features look like by using deepDreamImage to generate images that strongly activate a particular channel of the network layers.

This example requires Neural Network Toolbox™, Image Processing Toolbox™, and the Neural Network Toolbox Model for AlexNet Network support package.

Load Pretrained Network

Load a pretrained AlexNet network.

net = alexnet;

Visualize Convolutional Layers

There are five 2-D convolutional layers in the AlexNet network. The convolutional layers towards the beginning of the network have a small receptive field size and learn small, low-level features. The layers towards the end of the network have larger receptive field sizes and learn larger features.

Using the Layers property, view the network architecture and locate the convolutional layers. The 2-D convolutional layers are layers 2, 6, 10, 12, and 14.

net.Layers
ans =

  25x1 Layer array with layers:

     1   'data'     Image Input                   227x227x3 images with 'zerocenter' normalization
     2   'conv1'    Convolution                   96 11x11x3 convolutions with stride [4 4] and padding [0 0]
     3   'relu1'    ReLU                          ReLU
     4   'norm1'    Cross Channel Normalization   cross channel normalization with 5 channels per element
     5   'pool1'    Max Pooling                   3x3 max pooling with stride [2 2] and padding [0 0]
     6   'conv2'    Convolution                   256 5x5x48 convolutions with stride [1 1] and padding [2 2]
     7   'relu2'    ReLU                          ReLU
     8   'norm2'    Cross Channel Normalization   cross channel normalization with 5 channels per element
     9   'pool2'    Max Pooling                   3x3 max pooling with stride [2 2] and padding [0 0]
    10   'conv3'    Convolution                   384 3x3x256 convolutions with stride [1 1] and padding [1 1]
    11   'relu3'    ReLU                          ReLU
    12   'conv4'    Convolution                   384 3x3x192 convolutions with stride [1 1] and padding [1 1]
    13   'relu4'    ReLU                          ReLU
    14   'conv5'    Convolution                   256 3x3x192 convolutions with stride [1 1] and padding [1 1]
    15   'relu5'    ReLU                          ReLU
    16   'pool5'    Max Pooling                   3x3 max pooling with stride [2 2] and padding [0 0]
    17   'fc6'      Fully Connected               4096 fully connected layer
    18   'relu6'    ReLU                          ReLU
    19   'drop6'    Dropout                       50% dropout
    20   'fc7'      Fully Connected               4096 fully connected layer
    21   'relu7'    ReLU                          ReLU
    22   'drop7'    Dropout                       50% dropout
    23   'fc8'      Fully Connected               1000 fully connected layer
    24   'prob'     Softmax                       softmax
    25   'output'   Classification Output         crossentropyex with 'tench', 'goldfish', and 998 other classes
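Rather than reading the printed list, you can also locate the convolutional layers programmatically. This is a minimal sketch, assuming net is the AlexNet network loaded above; it filters the Layers array by layer class.

```matlab
% Find the indices of the 2-D convolutional layers by checking the class
% of each layer in the array. For AlexNet this returns [2; 6; 10; 12; 14].
convIdx = find(arrayfun(@(l) isa(l,'nnet.cnn.layer.Convolution2DLayer'), ...
    net.Layers))
```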

Features on Convolutional Layer 1

Set layer to be the first convolutional layer. This layer is the second layer in the network and is named 'conv1'.

layer = 2;
name = net.Layers(layer).Name

name =

conv1

Visualize the first 56 features learned by this layer using deepDreamImage by setting channels to be the vector of indices 1:56. Set 'PyramidLevels' to 1 so that the images are not scaled. To display the images together, you can use montage (Image Processing Toolbox).

By default, deepDreamImage uses a compatible GPU, if available. Otherwise it uses the CPU. A CUDA® enabled NVIDIA® GPU with compute capability 3.0 or higher is required for running on a GPU.
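If you want to confirm which device will be used before generating images, you can query the GPU from MATLAB. This is a small sketch, assuming Parallel Computing Toolbox is installed (gpuDeviceCount and gpuDevice are part of that toolbox).

```matlab
% Report the GPU that deepDreamImage would use, or note the CPU fallback.
if gpuDeviceCount > 0
    d = gpuDevice;
    fprintf('GPU: %s (compute capability %s)\n', d.Name, d.ComputeCapability)
else
    disp('No compatible GPU detected; deepDreamImage will run on the CPU.')
end
```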

channels = 1:56;
I = deepDreamImage(net,layer,channels, ...
    'PyramidLevels',1);
figure
montage(I)
title(['Layer ',name,' Features'])
|==============================================|
|  Pyramid  |  Iteration  |     Activation    |
|   Level   |             |      Strength     |
|==============================================|
|     1     |      1      |        2.46       |
|     1     |      2      |       38.02       |
|     1     |      3      |       73.59       |
|     1     |      4      |      109.16       |
|     1     |      5      |      144.72       |
|     1     |      6      |      180.29       |
|     1     |      7      |      215.86       |
|     1     |      8      |      251.42       |
|     1     |      9      |      286.99       |
|     1     |     10      |      322.56       |
|==============================================|

These images mostly contain edges and colors, which indicates that the filters at layer 'conv1' are edge detectors and color filters. The edge detectors are at different angles, which allows the network to construct more complex features in later layers.

Features on Convolutional Layer 2

These features are created using the features from layer 'conv1'. The second convolutional layer is named 'conv2' and corresponds to layer 6. Visualize the first 30 features learned by this layer by setting channels to be the vector of indices 1:30.

layer = 6;
channels = 1:30;
I = deepDreamImage(net,layer,channels, ...
    'PyramidLevels',1);
figure
montage(I)
name = net.Layers(layer).Name;
title(['Layer ',name,' Features'])
|==============================================|
|  Pyramid  |  Iteration  |     Activation    |
|   Level   |             |      Strength     |
|==============================================|
|     1     |      1      |        3.86       |
|     1     |      2      |        4.87       |
|     1     |      3      |       14.14       |
|     1     |      4      |       21.64       |
|     1     |      5      |       27.33       |
|     1     |      6      |       31.88       |
|     1     |      7      |       35.66       |
|     1     |      8      |       39.32       |
|     1     |      9      |       41.96       |
|     1     |     10      |       44.27       |
|==============================================|

Features on Convolutional Layers 3-5

For each of the remaining convolutional layers, visualize the first 30 learned features. To suppress detailed output on the optimization process, set 'Verbose' to false in the call to deepDreamImage. Notice that the layers deeper into the network yield more detailed filters.

layers = [10 12 14];
channels = 1:30;

for layer = layers
    I = deepDreamImage(net,layer,channels, ...
        'Verbose',false, ...
        'PyramidLevels',1);

    figure
    montage(I)
    name = net.Layers(layer).Name;
    title(['Layer ',name,' Features'])
end

Visualize Fully Connected Layers

There are three fully connected layers in the AlexNet model. The fully connected layers are towards the end of the network and learn high-level combinations of the features learned by the earlier layers.

Select the first two fully connected layers (layers 17 and 20).

layers = [17 20];

For each of these layers, use deepDreamImage to visualize the first six features. Set 'NumIterations' to 50 in the call to deepDreamImage to produce more detailed images. The images generated from the final fully connected layer correspond to the image classes.

channels = 1:6;

for layer = layers
    I = deepDreamImage(net,layer,channels, ...
        'Verbose',false, ...
        'NumIterations',50);

    figure
    montage(I)
    name = net.Layers(layer).Name;
    title(['Layer ',name,' Features'])
end

To produce the images that most resemble each class, select the final fully connected layer, and set channels to be the indices of the classes.

layer = 23;
channels = [9 188 231 563 855 975];

The classes are stored in the ClassNames property of the output layer (the last layer). You can view the names of the selected classes by selecting the entries in channels.

net.Layers(end).ClassNames(channels)
ans =

  1x6 cell array

  Columns 1 through 4

    'hen'    'Yorkshire terrier'    'Shetland sheepdog'    'fountain'

  Columns 5 through 6

    'theater curtain'    'geyser'

Generate detailed images that strongly activate these classes.

I = deepDreamImage(net,layer,channels, ...
    'Verbose',false, ...
    'NumIterations',50);
figure
montage(I)
name = net.Layers(layer).Name;
title(['Layer ',name,' Features'])

See Also


Related Topics
