VGGNet
- In 2014, VGGNet was far simpler than GoogLeNet, with only a small gap in performance.
- Focused on how network depth affects model performance.
- Fixed the convolution kernel size at 3x3.
- Large kernels shrink the feature map too quickly, which makes deeper networks hard to build and also costs more parameters and computation.
- Uses a single uniform kernel size, padding, and stride throughout, yet this simple design outperforms AlexNet.
- Instead of AlexNet's large-receptive-field kernels (11x11, 5x5), it stacks 3x3 kernels in sequence, covering the same receptive field with fewer parameters than a single large kernel. (The receptive field is the input region from which a feature is computed.)
- Deeper than AlexNet, with more channels and more layers.
- The most widely used variants are VGG16, with 16 weight layers, and VGG19, with 19 weight layers.
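The claims above about stacked 3x3 kernels can be checked with quick arithmetic. A minimal sketch, assuming equal input/output channel counts and ignoring biases:

```python
# Receptive field of n stacked 3x3 convs with stride 1: 3 + 2*(n-1).
def stacked_3x3_rf(n):
    return 3 + 2 * (n - 1)

# Weight count of a single conv layer with C input and C output channels (no bias).
def conv_params(kernel, channels):
    return kernel * kernel * channels * channels

channels = 64
one_5x5 = conv_params(5, channels)       # 25 * C^2 = 102400
two_3x3 = 2 * conv_params(3, channels)   # 18 * C^2 = 73728

print(stacked_3x3_rf(2))  # 5 -> same receptive field as one 5x5 kernel
print(stacked_3x3_rf(3))  # 7 -> same as one 7x7 kernel
print(one_5x5, two_3x3)   # two 3x3 convs need only 18/25 of the weights
```

Two stacked 3x3 convs also interleave an extra ReLU, adding non-linearity on top of the parameter savings.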
import numpy as np
import pandas as pd
import os
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
input_tensor = Input(shape=(224, 224, 3))
base_model = VGG16(input_tensor=input_tensor, include_top=True, weights='imagenet')
model = Model(inputs=input_tensor, outputs=base_model.output)
model.summary()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels.h5
553467904/553467096 [==============================] - 2s 0us/step
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
flatten (Flatten) (None, 25088) 0
fc1 (Dense) (None, 4096) 102764544
fc2 (Dense) (None, 4096) 16781312
predictions (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
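The `Total params` figure in the summary can be reproduced by hand: each Conv2D contributes `k*k*C_in*C_out + C_out` (weights plus biases), and the three Dense layers at the end dominate the count. A quick arithmetic check:

```python
def conv2d_params(k, c_in, c_out):
    # weights + biases of one Conv2D layer
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# (C_in, C_out) for the 13 conv layers of VGG16, block by block
convs = [(3, 64), (64, 64),                    # block1
         (64, 128), (128, 128),                # block2
         (128, 256), (256, 256), (256, 256),   # block3
         (256, 512), (512, 512), (512, 512),   # block4
         (512, 512), (512, 512), (512, 512)]   # block5

total = sum(conv2d_params(3, c_in, c_out) for c_in, c_out in convs)
total += dense_params(7 * 7 * 512, 4096)   # fc1: 102,764,544
total += dense_params(4096, 4096)          # fc2: 16,781,312
total += dense_params(4096, 1000)          # predictions: 4,097,000

print(total)  # 138357544, matching the summary above
```

Note that the fully connected head alone accounts for roughly 123M of the 138M parameters.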
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Conv2D, Dropout, Flatten, Activation, MaxPooling2D, GlobalAveragePooling2D
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint, LearningRateScheduler
def create_vggnet(in_shape=(224, 224, 3), n_classes=10):
    input_tensor = Input(shape=in_shape)

    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(input_tensor)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)

    x = GlobalAveragePooling2D()(x)
    x = Dropout(0.5)(x)
    x = Dense(units=120, activation='relu')(x)
    x = Dropout(0.5)(x)
    # Final softmax layer
    output = Dense(units=n_classes, activation='softmax')(x)

    model = Model(inputs=input_tensor, outputs=output)
    model.summary()
    return model
model = create_vggnet(in_shape=(224, 224, 3), n_classes=10)
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 224, 224, 3)] 0
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
global_average_pooling2d (G (None, 512) 0
lobalAveragePooling2D)
dropout (Dropout) (None, 512) 0
dense (Dense) (None, 120) 61560
dropout_1 (Dropout) (None, 120) 0
dense_1 (Dense) (None, 10) 1210
=================================================================
Total params: 14,777,458
Trainable params: 14,777,458
Non-trainable params: 0
_________________________________________________________________
Treating each run of consecutive Conv layers in VGG16 as one block, define a conv_block() function that builds such a block.
- conv_block() takes the input feature map, the filter count and kernel size (fixed at 3x3) for the Conv operations, and the strides used by the MaxPooling that shrinks the output feature map.
- The repeats argument sets how many Conv operations are applied in sequence.
from tensorflow.keras.layers import Conv2D, Dense, MaxPooling2D, GlobalAveragePooling2D, Input
from tensorflow.keras.models import Model

# Apply a Conv operation with 3x3 kernels (default) and `filters` filters to the input tensor
# `repeats` times in a row to produce the output feature map.
# A final MaxPooling(2x2) with the given strides halves the output feature map;
# the strides argument is the one used by MaxPooling.
def conv_block(tensor_in, filters, kernel_size, repeats=2, pool_strides=(2, 2), block_id=1):
    '''
    Parameters
    tensor_in: input image tensor (for the first block) or input feature map tensor (for later blocks)
    filters: number of Conv filters
    kernel_size: Conv kernel size
    repeats: number of Conv operations applied (number of Conv2D layers)
    pool_strides: strides for MaxPooling. The Conv strides are (1, 1).
    '''
    x = tensor_in

    # Apply the same Conv operation `repeats` times.
    for i in range(repeats):
        # Name each Conv layer
        conv_name = 'block' + str(block_id) + '_conv' + str(i + 1)
        x = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu', padding='same', name=conv_name)(x)

    # Apply max pooling to halve the output feature map, using the strides passed in.
    x = MaxPooling2D((2, 2), strides=pool_strides, name='block' + str(block_id) + '_pool')(x)

    return x
Use conv_block() to create a convolution block and inspect it
input_tensor = Input(shape=(224, 224, 3), name='test_input')
x = conv_block(tensor_in=input_tensor, filters=64, kernel_size=(3, 3), repeats=3, pool_strides=(2, 2), block_id=1)
conv_layers = Model(inputs=input_tensor, outputs=x)
conv_layers.summary()
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
test_input (InputLayer) [(None, 224, 224, 3)] 0
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
block1_conv3 (Conv2D) (None, 224, 224, 64) 36928
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
=================================================================
Total params: 75,648
Trainable params: 75,648
Non-trainable params: 0
_________________________________________________________________
Build the VGG16 model.
- Use the conv_block() defined above to create the feature maps block by block, for a total of 5 blocks.
- Blocks 1 to 4 double the filter count and halve the spatial size of the feature map; block 5 keeps the filter count and only halves the size.
- Because the fully connected layers in the paper's architecture require a huge number of parameters, GlobalAveragePooling is applied instead.
def create_vggnet_by_block(in_shape=(224, 224, 3), n_classes=10):
    input_tensor = Input(shape=in_shape, name='Input Tensor')

    # Run (input image tensor or feature map) -> Conv -> ReLU twice in sequence;
    # the output feature map has 64 filters, and MaxPooling halves its size.
    x = conv_block(input_tensor, filters=64, kernel_size=(3, 3), repeats=2, pool_strides=(2, 2), block_id=1)

    # Repeat Conv twice; double the filters (128) and halve the size of the feature map.
    x = conv_block(x, filters=128, kernel_size=(3, 3), repeats=2, pool_strides=(2, 2), block_id=2)

    # Repeat Conv three times; double the filters (256) and halve the size.
    x = conv_block(x, filters=256, kernel_size=(3, 3), repeats=3, pool_strides=(2, 2), block_id=3)

    # Repeat Conv three times; double the filters (512) and halve the size.
    x = conv_block(x, filters=512, kernel_size=(3, 3), repeats=3, pool_strides=(2, 2), block_id=4)

    # Repeat Conv three times; keep the filters at 512 and halve the size.
    x = conv_block(x, filters=512, kernel_size=(3, 3), repeats=3, pool_strides=(2, 2), block_id=5)

    # Use GlobalAveragePooling in place of Flatten.
    x = GlobalAveragePooling2D()(x)
    x = Dropout(0.5)(x)
    x = Dense(units=120, activation='relu')(x)
    x = Dropout(0.5)(x)
    # Final softmax layer
    output = Dense(units=n_classes, activation='softmax')(x)

    # Create and return the model.
    model = Model(inputs=input_tensor, outputs=output, name='vgg_by_block')
    model.summary()
    return model
model = create_vggnet_by_block(in_shape=(224, 224, 3), n_classes=10)
Model: "vgg_by_block"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Input Tensor (InputLayer) [(None, 224, 224, 3)] 0
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
global_average_pooling2d_1 (None, 512) 0
(GlobalAveragePooling2D)
dropout_2 (Dropout) (None, 512) 0
dense_2 (Dense) (None, 120) 61560
dropout_3 (Dropout) (None, 120) 0
dense_3 (Dense) (None, 10) 1210
=================================================================
Total params: 14,777,458
Trainable params: 14,777,458
Non-trainable params: 0
_________________________________________________________________
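The payoff of the GlobalAveragePooling head is visible in the totals above: the convolutional backbone (~14.7M params) is unchanged from VGG16, but the classifier head shrinks from roughly 123.6M parameters (the paper's FC head) to about 63K. A quick comparison:

```python
def dense_params(n_in, n_out):
    # weights + biases of one Dense layer
    return n_in * n_out + n_out

# Original VGG16 head: Flatten(7*7*512) -> 4096 -> 4096 -> 1000
fc_head = (dense_params(7 * 7 * 512, 4096)
           + dense_params(4096, 4096)
           + dense_params(4096, 1000))

# GAP head above: GlobalAveragePooling(512) -> 120 -> 10
gap_head = dense_params(512, 120) + dense_params(120, 10)

print(fc_head)   # 123642856
print(gap_head)  # 62770 (61560 + 1210, matching the summary)
```

This is why the custom model's total is 14.8M rather than VGG16's 138M.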
Train and evaluate the VGG16 model on the CIFAR10 dataset
IMAGE_SIZE = 128
BATCH_SIZE = 64
Data preprocessing/encoding/scaling functions and CIFAR_Dataset used previously
import tensorflow as tf
import numpy as np
import pandas as pd
import random as python_random
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import Sequence
import cv2
import sklearn

def zero_one_scaler(image):
    return image / 255.0

def get_preprocessed_ohe(images, labels, pre_func=None):
    # If a preprocessing function is given, use it to scale the image array.
    if pre_func is not None:
        images = pre_func(images)
    # Apply one-hot encoding
    oh_labels = to_categorical(labels)
    return images, oh_labels

# Apply preprocessing and OHE to the train/validation/test sets and return them.
def get_train_valid_test_set(train_images, train_labels, test_images, test_labels, valid_size=0.15, random_state=2021):
    # Apply OHE to the train and test sets (here scaling is deferred to the Dataset's pre_func).
    train_images, train_oh_labels = get_preprocessed_ohe(train_images, train_labels)
    test_images, test_oh_labels = get_preprocessed_ohe(test_images, test_labels)
    # Split part of the training data off as a validation set.
    tr_images, val_images, tr_oh_labels, val_oh_labels = train_test_split(train_images, train_oh_labels, test_size=valid_size, random_state=random_state)
    return (tr_images, tr_oh_labels), (val_images, val_oh_labels), (test_images, test_oh_labels)
from tensorflow.keras.utils import Sequence
import cv2
import sklearn

# The images_array and labels arguments both come in as numpy arrays.
# images_array holds the full set of 32x32 image arrays.
class CIFAR_Dataset(Sequence):
    def __init__(self, images_array, labels, batch_size=BATCH_SIZE, augmentor=None, shuffle=False, pre_func=None):
        '''
        Parameters
        images_array: array of the original 32x32 images
        labels: labels for those images
        batch_size: number of records fetched per __getitem__(self, index) call
        augmentor: albumentations object
        shuffle: whether to shuffle the training data at the end of each epoch
        '''
        # Store the constructor arguments as instance variables.
        self.images_array = images_array
        self.labels = labels
        self.batch_size = batch_size
        self.augmentor = augmentor
        self.pre_func = pre_func
        # For training data
        self.shuffle = shuffle
        if self.shuffle:
            # Shuffle once at creation time.
            # self.on_epoch_end()
            pass

    # A Sequence-derived Dataset processes its input in batch_size units.
    # __len__() reports how many batches the full dataset yields.
    def __len__(self):
        # Divide the total record count by batch_size, rounding up when it
        # does not divide evenly.
        return int(np.ceil(len(self.labels) / self.batch_size))

    # Fetch batch_size worth of image/label data, transform it, and return it.
    # index indicates which batch to return; the corresponding batch_size
    # records are processed and returned as transformed arrays.
    def __getitem__(self, index):
        # Fetch batch_size consecutive records:
        # array[index*batch_size : (index+1)*batch_size]
        images_fetch = self.images_array[index * self.batch_size:(index + 1) * self.batch_size]
        label_batch = None
        if self.labels is not None:
            label_batch = self.labels[index * self.batch_size:(index + 1) * self.batch_size]

        # If an albumentations augmentor was passed at creation, use it to transform the images.
        # albumentations transforms one image at a time, so iterate over the batch.
        # image_batch holds the transformed images as float32.
        image_batch = np.zeros((images_fetch.shape[0], IMAGE_SIZE, IMAGE_SIZE, 3), dtype='float32')

        # For each record: resize -> augment (if augmentor is not None) -> store in image_batch.
        for image_index in range(images_fetch.shape[0]):
            # Resize the original image to IMAGE_SIZE x IMAGE_SIZE.
            image = cv2.resize(images_fetch[image_index], (IMAGE_SIZE, IMAGE_SIZE))
            # Apply the augmentor if one was given.
            if self.augmentor is not None:
                image = self.augmentor(image=image)['image']
            # Apply the scaling function if one was given.
            if self.pre_func is not None:
                image = self.pre_func(image)
            # Store the transformed image.
            image_batch[image_index] = image

        return image_batch, label_batch

    # Called by the model's fit() at the end of every epoch.
    def on_epoch_end(self):
        if self.shuffle:
            # Shuffle the image array and labels together, keeping pairs aligned;
            # scikit-learn's utils.shuffle provides this.
            self.images_array, self.labels = sklearn.utils.shuffle(self.images_array, self.labels)
        else:
            pass
One-hot encoding, scaling, and train/validation/test split
# Reload CIFAR10 and apply scaling/OHE preprocessing to build the train/validation/test sets.
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
print(train_images.shape, train_labels.shape, test_images.shape, test_labels.shape)
(tr_images, tr_oh_labels), (val_images, val_oh_labels), (test_images, test_oh_labels) = \
get_train_valid_test_set(train_images, train_labels, test_images, test_labels, valid_size=0.2, random_state=2021)
print(tr_images.shape, tr_oh_labels.shape, val_images.shape, val_oh_labels.shape, test_images.shape, test_oh_labels.shape)
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 6s 0us/step
(50000, 32, 32, 3) (50000, 1) (10000, 32, 32, 3) (10000, 1)
(40000, 32, 32, 3) (40000, 10) (10000, 32, 32, 3) (10000, 10) (10000, 32, 32, 3) (10000, 10)
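With 40,000 training images and BATCH_SIZE=64, the Sequence's `__len__` yields 625 batches per epoch, which is the same ceil-division the class performs and matches the `625/625` (and `157/157` for the 10,000-image sets) seen in the logs later:

```python
import numpy as np

BATCH_SIZE = 64

def num_batches(n_samples, batch_size=BATCH_SIZE):
    # same calculation as CIFAR_Dataset.__len__
    return int(np.ceil(n_samples / batch_size))

print(num_batches(40000))  # 625 training steps per epoch
print(num_batches(10000))  # 157 validation/test steps
```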
Create the training and validation CIFAR_Dataset objects
- The 32x32 image arrays are resized to 128x128 one batch at a time.
- For scaling, the per-channel means used in the original VGG implementation, mean = [103.939, 116.779, 123.68], are subtracted.
from tensorflow.keras.applications.vgg16 import preprocess_input as vgg_preprocess
tr_ds = CIFAR_Dataset(tr_images, tr_oh_labels, batch_size=BATCH_SIZE, augmentor=None, shuffle=True, pre_func=vgg_preprocess)
val_ds = CIFAR_Dataset(val_images, val_oh_labels, batch_size=BATCH_SIZE, augmentor=None, shuffle=False, pre_func=vgg_preprocess)
print(next(iter(tr_ds))[0].shape, next(iter(val_ds))[0].shape)
print(next(iter(tr_ds))[1].shape, next(iter(val_ds))[1].shape)
# per-channel means subtracted: mean = [103.939, 116.779, 123.68]
print(next(iter(tr_ds))[0][0])
(64, 128, 128, 3) (64, 128, 128, 3)
(64, 10) (64, 10)
[[[ 73.061 57.221 40.32 ]
[ 73.061 57.221 40.32 ]
[ 70.061 54.221 38.32 ]
...
[-34.939003 -42.779 -50.68 ]
[-35.939003 -44.779 -53.68 ]
[-35.939003 -44.779 -53.68 ]]
[[ 73.061 57.221 40.32 ]
[ 73.061 57.221 40.32 ]
[ 70.061 54.221 38.32 ]
...
[-34.939003 -42.779 -50.68 ]
[-35.939003 -44.779 -53.68 ]
[-35.939003 -44.779 -53.68 ]]
[[ 75.061 59.221 42.32 ]
[ 75.061 59.221 42.32 ]
[ 72.061 56.221 40.32 ]
...
[-34.939003 -42.779 -50.68 ]
[-35.939003 -44.779 -52.68 ]
[-35.939003 -44.779 -52.68 ]]
...
[[120.061 102.221 109.32 ]
[120.061 102.221 109.32 ]
[116.061 99.221 107.32 ]
...
[-35.939003 -44.779 -55.68 ]
[-34.939003 -43.779 -53.68 ]
[-34.939003 -43.779 -53.68 ]]
[[121.061 103.221 110.32 ]
[121.061 103.221 110.32 ]
[117.061 100.221 107.32 ]
...
[-36.939003 -45.779 -56.68 ]
[-35.939003 -44.779 -54.68 ]
[-35.939003 -44.779 -54.68 ]]
[[121.061 103.221 110.32 ]
[121.061 103.221 110.32 ]
[117.061 100.221 107.32 ]
...
[-36.939003 -45.779 -55.68 ]
[-35.939003 -44.779 -54.68 ]
[-35.939003 -44.779 -54.68 ]]]
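The printed values can be checked by hand: vgg16's preprocess_input (its 'caffe' mode) converts RGB to BGR and subtracts the ImageNet per-channel means [103.939, 116.779, 123.68]. A minimal pure-Python sketch of that per-pixel transformation (the real function also handles batches and dtypes):

```python
VGG_BGR_MEAN = (103.939, 116.779, 123.68)  # means for B, G, R channels

def vgg_preprocess_pixel(rgb):
    """RGB pixel -> BGR pixel with the per-channel mean subtracted."""
    r, g, b = rgb
    return (b - VGG_BGR_MEAN[0], g - VGG_BGR_MEAN[1], r - VGG_BGR_MEAN[2])

# The first pixel printed above, [73.061, 57.221, 40.32], corresponds
# to an original (resized, uint8) RGB value of (164, 174, 177):
print(vgg_preprocess_pixel((164, 174, 177)))
```

This also explains why the Dataset's output contains negative values: they are mean-centered, not [0, 1]-scaled.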
Create the VGG16 model, then train and validate it
- Set the initial learning_rate to 0.0001 instead of the usual 0.001
vgg_model = create_vggnet_by_block(in_shape=(128, 128, 3), n_classes=10)
vgg_model.compile(optimizer=Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])

# If validation loss does not improve within 3 epochs, reduce the learning rate to learning_rate * 0.2.
rlr_cb = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=3, mode='min', verbose=1)
ely_cb = EarlyStopping(monitor='val_loss', patience=10, mode='min', verbose=1)

history = vgg_model.fit(tr_ds, epochs=5,
                        # steps_per_epoch=int(np.ceil(tr_images.shape[0]/BATCH_SIZE)),
                        validation_data=val_ds,
                        # validation_steps=int(np.ceil(val_images.shape[0]/BATCH_SIZE)),
                        callbacks=[rlr_cb, ely_cb]
                        )
Model: "vgg_by_block"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Input Tensor (InputLayer) [(None, 128, 128, 3)] 0
block1_conv1 (Conv2D) (None, 128, 128, 64) 1792
block1_conv2 (Conv2D) (None, 128, 128, 64) 36928
block1_pool (MaxPooling2D) (None, 64, 64, 64) 0
block2_conv1 (Conv2D) (None, 64, 64, 128) 73856
block2_conv2 (Conv2D) (None, 64, 64, 128) 147584
block2_pool (MaxPooling2D) (None, 32, 32, 128) 0
block3_conv1 (Conv2D) (None, 32, 32, 256) 295168
block3_conv2 (Conv2D) (None, 32, 32, 256) 590080
block3_conv3 (Conv2D) (None, 32, 32, 256) 590080
block3_pool (MaxPooling2D) (None, 16, 16, 256) 0
block4_conv1 (Conv2D) (None, 16, 16, 512) 1180160
block4_conv2 (Conv2D) (None, 16, 16, 512) 2359808
block4_conv3 (Conv2D) (None, 16, 16, 512) 2359808
block4_pool (MaxPooling2D) (None, 8, 8, 512) 0
block5_conv1 (Conv2D) (None, 8, 8, 512) 2359808
block5_conv2 (Conv2D) (None, 8, 8, 512) 2359808
block5_conv3 (Conv2D) (None, 8, 8, 512) 2359808
block5_pool (MaxPooling2D) (None, 4, 4, 512) 0
global_average_pooling2d_3 (None, 512) 0
(GlobalAveragePooling2D)
dropout_6 (Dropout) (None, 512) 0
dense_6 (Dense) (None, 120) 61560
dropout_7 (Dropout) (None, 120) 0
dense_7 (Dense) (None, 10) 1210
=================================================================
Total params: 14,777,458
Trainable params: 14,777,458
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
625/625 [==============================] - 93s 147ms/step - loss: 1.9802 - accuracy: 0.2278 - val_loss: 1.6182 - val_accuracy: 0.3855 - lr: 1.0000e-04
Epoch 2/5
625/625 [==============================] - 91s 146ms/step - loss: 1.5350 - accuracy: 0.4226 - val_loss: 1.3120 - val_accuracy: 0.5140 - lr: 1.0000e-04
Epoch 3/5
625/625 [==============================] - 91s 146ms/step - loss: 1.1867 - accuracy: 0.5784 - val_loss: 0.9783 - val_accuracy: 0.6521 - lr: 1.0000e-04
Epoch 4/5
625/625 [==============================] - 91s 146ms/step - loss: 0.9371 - accuracy: 0.6797 - val_loss: 0.7892 - val_accuracy: 0.7246 - lr: 1.0000e-04
Epoch 5/5
625/625 [==============================] - 91s 145ms/step - loss: 0.7610 - accuracy: 0.7433 - val_loss: 0.7236 - val_accuracy: 0.7524 - lr: 1.0000e-04
from tensorflow.keras.applications.vgg16 import preprocess_input as vgg_preprocess
test_ds = CIFAR_Dataset(test_images, test_oh_labels, batch_size=BATCH_SIZE, augmentor=None, shuffle=False, pre_func=vgg_preprocess)
vgg_model.evaluate(test_ds)
157/157 [==============================] - 7s 44ms/step - loss: 0.7347 - accuracy: 0.7457
[0.7346897125244141, 0.7457000017166138]