# What is Perceptual Loss? A Clear Guide to Reducing VAE Blur with CIFAR-10

When you use a VAE (Variational Autoencoder) for image tasks, you run into the problem that **the output images tend to be blurry**. This happens because directly minimizing a per-pixel reconstruction error tends to produce outputs that average over the many plausible versions of fine detail. The tendency is especially noticeable on natural images, where shapes and backgrounds vary widely. One way to mitigate this is to introduce a **Perceptual Loss**. Instead of comparing raw pixel values, a perceptual loss compares images through the **difference in features** captured by a pretrained image-classification model. In practice, perceptual losses preserve detail better than pixel-based losses, and studies applying them to VAEs report improvements in visual naturalness and perceptual quality.

This article explains the idea behind Perceptual Loss and walks through how to apply it when training a VAE on CIFAR-10. It is aimed at readers who understand VAE training but find that reconstructions of natural images come out blurry, and who want an approachable next step.
## What is Perceptual Loss?

When a VAE processes natural images, it tends to output something like an average over the many plausible versions of fine detail, so contours soften and textures get lost. What matters here is to evaluate not just "how closely do the pixel values match" but "**how similar do the images look**". Johnson et al. showed that for image transformation and super-resolution, using a **Perceptual Loss**, which measures the difference between images in the feature space of a pretrained network rather than with a pixel-based loss, yields results that preserve more detail.

![](https://blogger.googleusercontent.com/img/a/AVvXsEicSqF9XDakzuTROarHU_YvQTmGgjJNaVZDIJs6pwp9X_cqgGdsf3BM5qUcOrCW-eTrAmHQGcJRdCUGEyPicw61i7SPdXiqUUh2DqEzHUa6J68KLlqw3U5izSxJBqy-dRIjmYaOj6-HJwgicePi-IA4Il5aA_MLI4iz7dOHEilL4gdc4y6lfYZs6eHXTC4W=w400-h375)

Super-resolution example. Johnson, J., Alahi, A., & Fei-Fei, L. (2016). Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In European Conference on Computer Vision (ECCV 2016).
A perceptual loss typically uses a pretrained CNN such as **VGG-Net**. Both the original image and the generated image are fed into VGG-Net, and the outputs of intermediate layers are extracted and compared. Shallow layers tend to represent features such as edges and local textures, while deeper layers represent higher-level features based on wider receptive fields, so minimizing the difference at intermediate layers preserves **visual naturalness** better than plain pixel matching. Deep features have also been reported to be effective as a similarity metric that is close to human perception.

![](https://blogger.googleusercontent.com/img/a/AVvXsEg4CyhofWSBQlz5tNNpWV_7rfpSNzUTcrJ0NPy052_pqEaYFFdYmCI_geCRR8R8NZKWUqBYCbcDQBmoqgWUSX0FKOowDT15NXOxfaM_DBL2s5OtyIj3neloGdi1bjRuD4JENvuSg5Ey9h0Rifl2NUZ74s-K31sKW563BvPNp9kU-knVapunXSDmWKKldk1t=w400-h371)

PVAE: a plain VAE; VAE-123 and VAE-345 are models using perceptual loss. Hou, X., Shen, L., Sun, K., & Qiu, G. (2016). Deep Feature Consistent Variational Autoencoder. arXiv preprint arXiv:1610.00291.
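To make "intermediate-layer features" concrete, the short sketch below pulls three activation maps out of torchvision's pretrained VGG16 (indices 3, 8, and 15 in `vgg16().features` correspond to relu1_2, relu2_2, and relu3_3; the dummy input and printouts are just for illustration):

```python
import torch
from torchvision.models import vgg16, VGG16_Weights

# Pretrained VGG16 feature extractor; eval mode, no gradients needed
features = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()

x = torch.rand(1, 3, 224, 224)  # dummy ImageNet-sized input
activations = []
with torch.no_grad():
    for i, layer in enumerate(features):
        x = layer(x)
        if i in (3, 8, 15):  # relu1_2, relu2_2, relu3_3
            activations.append(x)

for a in activations:
    print(a.shape)
# torch.Size([1, 64, 224, 224])  shallow: edges, local texture
# torch.Size([1, 128, 112, 112])
# torch.Size([1, 256, 56, 56])   deeper: wider receptive field
```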
## The VAE Loss Function with Perceptual Loss

Introducing a perceptual loss into a VAE is conceptually simple: on top of the original VAE loss terms, the **reconstruction error** and the **KL divergence**, you add the **Perceptual Loss**.

Written as a formula:

$$L = L_{\mathrm{rec}} + \beta L_{\mathrm{KL}} + \lambda L_{\mathrm{perc}}$$

where

- $L_{\mathrm{rec}}$: the pixel-based reconstruction error
- $L_{\mathrm{KL}}$: the KL divergence that regularizes the latent distribution
- $L_{\mathrm{perc}}$: the difference in intermediate-layer features of, e.g., VGG-Net
- $\beta, \lambda$: the weights of the loss terms

The perceptual loss itself can be written, for example using feature maps from several VGG layers, as a measure of the difference between the original image $x$ and the reconstruction $\hat{x}$:

$$L_{\mathrm{perc}} = \sum_l \lVert \phi_l(x) - \phi_l(\hat{x}) \rVert_1$$

where $\phi_l(\cdot)$ is the output of the $l$-th layer of a pretrained CNN. It can be implemented with either an L1 or an L2 norm; the important point is that **features, not pixel values, are compared**. Johnson et al. used this idea for image transformation, and Hou et al. applied it to VAE training.
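For reference, with the usual diagonal-Gaussian encoder $q(z \mid x) = \mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$ and a standard normal prior, the KL term has a closed form, which is exactly what the implementation below computes:

$$L_{\mathrm{KL}} = -\frac{1}{2} \sum_{j} \left( 1 + \log \sigma_j^2 - \mu_j^2 - \sigma_j^2 \right)$$

where $\mu_j$ and $\sigma_j^2$ are the mean and variance the encoder predicts for latent dimension $j$.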
## Implementation Example

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, Subset

import torchvision
import torchvision.transforms as transforms
from torchvision.models import vgg16, VGG16_Weights

import matplotlib.pyplot as plt


# ---------- Check the compute environment ----------
print("PyTorch version:", torch.__version__)
print("Torchvision version:", torchvision.__version__)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)


# ---------- Prepare the dataset ----------
transform = transforms.ToTensor()

train_dataset = torchvision.datasets.CIFAR10(
    root="./data",
    train=True,
    download=True,
    transform=transform,
)

test_dataset = torchvision.datasets.CIFAR10(
    root="./data",
    train=False,
    download=True,
    transform=transform,
)

# Check the CIFAR-10 classes
print("classes:", train_dataset.classes)
# ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

# Label index of the class to restrict training to
# (swap in "automobile" etc. to change the domain)
target_class = "horse"
target_label = train_dataset.class_to_idx[target_class]
print("target_label:", target_label)

# Indices belonging to the target class only
train_indices = [i for i, label in enumerate(train_dataset.targets) if label == target_label]
test_indices = [i for i, label in enumerate(test_dataset.targets) if label == target_label]

# Restrict the datasets with Subset
train_subset = Subset(train_dataset, train_indices)
test_subset = Subset(test_dataset, test_indices)

train_loader = DataLoader(train_subset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_subset, batch_size=64, shuffle=False)

print("Training samples:", len(train_subset))
print("Test samples:", len(test_subset))


# ---------- Inspect the data ----------
images, labels = next(iter(train_loader))

print("images shape:", images.shape)
print("labels shape:", labels.shape)
print("First label:", labels[0].item())

plt.figure(figsize=(10, 4))
for i in range(8):
    plt.subplot(2, 4, i + 1)
    plt.imshow(images[i].numpy().transpose((1, 2, 0)))
    plt.title(f"label: {labels[i].item()}")
    plt.axis("off")
plt.tight_layout()
plt.show()

# ---------- Define the VAE model ----------
class CNNVAE(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.latent_dim = latent_dim

        # ===== Encoder =====
        self.enc_conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)   # 32x32 -> 32x32
        self.enc_conv2 = nn.Conv2d(32, 32, kernel_size=3, padding=1)  # 32x32 -> 32x32
        self.enc_pool1 = nn.MaxPool2d(2)                              # 32x32 -> 16x16

        self.enc_conv3 = nn.Conv2d(32, 64, kernel_size=3, padding=1)  # 16x16 -> 16x16
        self.enc_conv4 = nn.Conv2d(64, 64, kernel_size=3, padding=1)  # 16x16 -> 16x16
        self.enc_pool2 = nn.MaxPool2d(2)                              # 16x16 -> 8x8

        self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)

        # ===== Decoder =====
        self.fc_dec = nn.Linear(latent_dim, 64 * 8 * 8)

        self.up1 = nn.Upsample(scale_factor=2, mode="nearest")        # 8x8 -> 16x16
        self.dec_conv1 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.dec_conv2 = nn.Conv2d(64, 32, kernel_size=3, padding=1)

        self.up2 = nn.Upsample(scale_factor=2, mode="nearest")        # 16x16 -> 32x32
        self.dec_conv3 = nn.Conv2d(32, 32, kernel_size=3, padding=1)
        self.dec_conv4 = nn.Conv2d(32, 16, kernel_size=3, padding=1)

        self.dec_conv5 = nn.Conv2d(16, 3, kernel_size=3, padding=1)

    def encode(self, x):
        # Input: [N, 3, 32, 32]
        x = F.relu(self.enc_conv1(x))       # [N, 32, 32, 32]
        x = F.relu(self.enc_conv2(x))       # [N, 32, 32, 32]
        x = self.enc_pool1(x)               # [N, 32, 16, 16]

        x = F.relu(self.enc_conv3(x))       # [N, 64, 16, 16]
        x = F.relu(self.enc_conv4(x))       # [N, 64, 16, 16]
        x = self.enc_pool2(x)               # [N, 64, 8, 8]

        x = torch.flatten(x, start_dim=1)   # [N, 64*8*8]
        mu = self.fc_mu(x)                  # [N, latent_dim]
        logvar = self.fc_logvar(x)          # [N, latent_dim]
        return mu, logvar

    def reparameterize(self, mu, logvar):
        # Reparameterization trick: z = mu + eps * sigma, eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        z = mu + eps * std
        return z

    def decode(self, z):
        # Input: [N, latent_dim]
        x = self.fc_dec(z)                  # [N, 64*8*8]
        x = x.view(-1, 64, 8, 8)            # [N, 64, 8, 8]

        x = self.up1(x)                     # [N, 64, 16, 16]
        x = F.relu(self.dec_conv1(x))       # [N, 64, 16, 16]
        x = F.relu(self.dec_conv2(x))       # [N, 32, 16, 16]

        x = self.up2(x)                     # [N, 32, 32, 32]
        x = F.relu(self.dec_conv3(x))       # [N, 32, 32, 32]
        x = F.relu(self.dec_conv4(x))       # [N, 16, 32, 32]

        x = torch.sigmoid(self.dec_conv5(x))  # [N, 3, 32, 32]
        return x

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        recon = self.decode(z)
        return recon, mu, logvar


# ---------- Define the perceptual loss ----------
class VGGPerceptualLoss(nn.Module):
    def __init__(self, resize_to_224=True):
        super().__init__()
        self.resize_to_224 = resize_to_224

        vgg = vgg16(weights=VGG16_Weights.DEFAULT).features

        # Three easy-to-use shallow-to-middle blocks
        self.blocks = nn.ModuleList([
            vgg[:4].eval(),    # up to relu1_2
            vgg[4:9].eval(),   # up to relu2_2
            vgg[9:16].eval(),  # up to relu3_3
        ])

        # Freeze the VGG weights
        for block in self.blocks:
            for p in block.parameters():
                p.requires_grad = False

        # ImageNet normalization constants expected by VGG16
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def preprocess(self, x):
        if self.resize_to_224:
            x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
        x = (x - self.mean) / self.std
        return x

    def forward(self, pred, target):
        # No gradients should flow into the target side
        target = target.detach()

        pred = self.preprocess(pred)
        target = self.preprocess(target)

        loss = 0.0
        x = pred
        y = target

        # Accumulate L1 feature differences over the selected blocks
        for block in self.blocks:
            x = block(x)
            y = block(y)
            loss = loss + F.l1_loss(x, y)

        return loss

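# Optional sanity check (not required for training): with identical inputs
# the perceptual loss should be exactly zero, and it should grow as the two
# inputs diverge.
_check_fn = VGGPerceptualLoss(resize_to_224=False)
_x = torch.rand(2, 3, 32, 32)
print("perc(x, x)     =", float(_check_fn(_x, _x)))                   # 0.0
print("perc(x, noise) =", float(_check_fn(_x, torch.rand_like(_x))))  # > 0
del _check_fn, _x
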
# ---------- Total loss (reconstruction error + KL divergence + perceptual) ----------
def vae_perceptual_loss(recon_x, x, mu, logvar, perceptual_fn, beta=1e-3, lambda_perc=0.1):
    # Pixel-level reconstruction error
    recon_loss = F.binary_cross_entropy(recon_x, x, reduction="sum")

    # KL divergence
    kl_loss = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    # Perceptual loss
    perc_loss = perceptual_fn(recon_x, x)

    total_loss = recon_loss + beta * kl_loss + lambda_perc * perc_loss
    return total_loss, recon_loss, kl_loss, perc_loss


# ---------- Build the model to train ----------
latent_dim = 64

model = CNNVAE(latent_dim=latent_dim).to(device)
perceptual_fn = VGGPerceptualLoss(resize_to_224=False).to(device)

optimizer = optim.Adam(model.parameters(), lr=1e-3)

print(model)


# ---------- Training ----------
train_losses = []
train_recon_losses = []
train_kl_losses = []

num_epochs = 100
beta = 1.0
lambda_perc = 10000

for epoch in range(num_epochs):
    model.train()

    train_loss = 0.0
    train_recon = 0.0
    train_kl = 0.0

    for images, _ in train_loader:
        images = images.to(device)

        optimizer.zero_grad()

        recon, mu, logvar = model(images)

        loss, recon_loss, kl_loss, perc_loss = vae_perceptual_loss(
            recon, images, mu, logvar,
            perceptual_fn=perceptual_fn,
            beta=beta,
            lambda_perc=lambda_perc,
        )

        loss.backward()
        optimizer.step()

        train_loss += loss.item()
        train_recon += recon_loss.item()
        train_kl += kl_loss.item()

    # Per-sample averages (divide by the subset size, not the full CIFAR-10)
    avg_loss = train_loss / len(train_subset)
    avg_recon = train_recon / len(train_subset)
    avg_kl = train_kl / len(train_subset)

    train_losses.append(avg_loss)
    train_recon_losses.append(avg_recon)
    train_kl_losses.append(avg_kl)

    print(
        f"Epoch [{epoch+1}/{num_epochs}] "
        f"Loss: {avg_loss:.4f} | Recon: {avg_recon:.4f} | KL: {avg_kl:.4f}"
    )


# ---------- Reconstruct test images ----------
model.eval()

images, _ = next(iter(test_loader))
images = images[:8].to(device)

with torch.no_grad():
    recon, mu, logvar = model(images)

images = images.cpu()
recon = recon.cpu()

plt.figure(figsize=(12, 4))
for i in range(8):
    # Original image
    plt.subplot(2, 8, i + 1)
    plt.imshow(images[i].numpy().transpose((1, 2, 0)))
    plt.title("Original")
    plt.axis("off")

    # Reconstructed image
    plt.subplot(2, 8, 8 + i + 1)
    plt.imshow(recon[i].numpy().transpose((1, 2, 0)))
    plt.title("Recon")
    plt.axis("off")

plt.tight_layout()
plt.show()


# ---------- Check the continuity of the latent space ----------
with torch.no_grad():
    mu1, logvar1 = model.encode(images[0:1].to(device))
    mu2, logvar2 = model.encode(images[1:2].to(device))

    z1 = model.reparameterize(mu1, logvar1)
    z2 = model.reparameterize(mu2, logvar2)

N = 6
plt.figure(figsize=(15, 4))
for i in range(N):
    # Linear interpolation between the two latent vectors
    z = z1 * (i / (N - 1)) + z2 * (1 - i / (N - 1))
    with torch.no_grad():
        y = model.decode(z)
    y = y.cpu()[0]

    plt.subplot(1, N, i + 1)
    plt.imshow(y.numpy().transpose((1, 2, 0)))
    plt.axis("off")
plt.show()

# ---------- Generate images from random latent vectors ----------
N = 6
plt.figure(figsize=(12, 4))
for i in range(2 * N):
    z = torch.randn(1, latent_dim, device=device)
    with torch.no_grad():
        y = model.decode(z)
    y = y.cpu()[0]

    plt.subplot(2, N, i + 1)
    plt.imshow(y.numpy().transpose((1, 2, 0)))
    plt.axis("off")
plt.show()
```

First, we set up a VAE with the usual encoder and decoder. Next, we load a **pretrained VGG16 (VGG19 also works)** and keep it frozen rather than training it. Both the original image and the VAE's reconstruction are then passed through VGG, and the feature maps of the chosen intermediate layers are extracted. Finally, the difference between those feature maps is computed as the perceptual loss and added to the VAE loss.

There are three key implementation points:

- Do not update the VGG weights
- Normalize the input images to the format VGG expects
- Do not make the perceptual-loss weight $\lambda$ too large

If $\lambda$ is too large, the reconstructions can end up "similar in features but broken in color and overall balance". If it is too small, the result barely differs from a plain VAE. Balancing the **weights of the reconstruction error, the KL divergence, and the perceptual loss** is therefore important.
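Because the reconstruction term here is a sum over all pixels while the perceptual term is a mean over feature maps, the terms live on very different scales. Before committing to $\beta$ and $\lambda$, it can help to print the raw magnitude of each term on a single batch. A minimal sketch reusing the functions defined above (the weights that actually work will depend on your data and model):

```python
# Compare the scale of each weighted loss term on one batch; ideally
# beta*KL and lambda_perc*perc are neither negligible nor dominant
# relative to the reconstruction term.
images, _ = next(iter(train_loader))
images = images.to(device)

with torch.no_grad():
    recon, mu, logvar = model(images)
    _, recon_loss, kl_loss, perc_loss = vae_perceptual_loss(
        recon, images, mu, logvar,
        perceptual_fn=perceptual_fn, beta=beta, lambda_perc=lambda_perc,
    )

print(f"recon: {recon_loss.item():.1f} | "
      f"beta*KL: {beta * kl_loss.item():.1f} | "
      f"lambda*perc: {lambda_perc * perc_loss.item():.1f}")
```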
## Results

First, here are the results of training on all ten CIFAR-10 classes together. Compared with the variant without perceptual loss, **the contours and overall shapes are more coherent and the blur is clearly reduced**. This is plausibly the effect of learning to match not only pixels but also features in feature space.

![](https://blogger.googleusercontent.com/img/a/AVvXsEhXdPVjrb1yVkJGIN0nxuZ1ONwFSjlLF-yejSdUgVhqCHnCwcx828E0l3BlMoP16wu9akhwD1SMQcSDaBBeEXpPDkgkdFfC9p4u9WPHOtGzt5DBLvU8rJCExWGdqZbPZoEAaKbjbYW-arf5D8ESwAVTJ5p29N0XuOgOGRUM9iTCHi00xUTZIJgcIuCFguBD=w640-h338)

Top: without perceptual loss. Bottom: with perceptual loss.

Next, to check the continuity of the latent space, we look at interpolations between the latent vectors of two images, and at images generated from random latent vectors. In the interpolation the images change smoothly, so **the continuity of the latent space** is preserved. On the other hand, images generated from random latent vectors show rough shapes but remain somewhat indistinct.

![](https://blogger.googleusercontent.com/img/a/AVvXsEhYjKIYMvpLdjnbhmTdrp7DLXnWWZq3Zt3C_pWcYK2nl0k5oBWgoh7jDd6NVOTxfQsMNRiihwEzvcB1k16cmZ8gXld9-yWm2vdo7_X-0InVgNWE98F1MTGpIIOp0QIk5cXgT8sJ84w0cFpPHd0HBypuLpyk1Cvp65BHHZOh1uc8zIdrL7FIbF5qN96aQ3LN=w640-h434)

Experiment with all CIFAR-10 classes. Top: checking the continuity of the latent space. Bottom: generation from random latent vectors.

One likely reason is that **the latent vector is relatively small at 64 dimensions and the model itself is simple**. CIFAR-10 consists of natural images with large visual variation even within a class, so covering it all with limited capacity is not easy. Moreover, classes that look very different, such as airplanes and frogs, are packed into a single continuous latent space, so intermediate latent vectors tend to decode to somewhat ambiguous images.

We therefore also trained the VAE on **a domain restricted to a single class**, such as "horse" or "car". In that case the images are reconstructed noticeably more sharply, and even images generated from random latent vectors have shapes that are recognizable to some degree. This is likely because the variation in the training data shrinks, limiting the variability the latent space has to represent.
![](https://blogger.googleusercontent.com/img/a/AVvXsEh01M7lOs3kTHJ2KiO5-vNbnbXCNbbbOUk7CKL0v28u6_CY2SVNpRwvh95DkT3ZoG32s3WJYuiqqpV_mkhVoo0MCRfJA760Jap42REgGzsBYfw0XFwEQx2lJJFCMWbt6hB6u7fZqyVlEklXP91MvSh5rPVew-dFw2AlLM94nFGGlkvIuDHFoYfnymZED2o6=w640-h434)

![](https://blogger.googleusercontent.com/img/a/AVvXsEgS1YWTJ9BNUUNkHWQDoJdGWcV8G3n7byUQ4kBgg5UTO0LPlUeqj_mm88BnmhBQT_Zdc0NZzGO3KasZtcMYFMoZLKYLnFYCEimBXCiTqewXNDXAA9J77Vmt4alyXNF0hUfGhNiLYW3jcTdhAlaa8BVUcooOc0qzDIMRyQirG6hkhbQIHwiBBmOEWKI0RaK7=w640-h434)

Experiments on CIFAR-10 horses and cars. Top: checking the continuity of the latent space. Bottom: generation from random latent vectors.

## Summary

This article explained how to introduce a **Perceptual Loss** to address the **blurriness** that commonly appears when a VAE handles natural images.

A perceptual loss passes both the original and the generated image through a pretrained CNN and trains the model to minimize the **difference in intermediate-layer features**. This makes it easier to preserve edge and shape information that plain pixel matching misses, and thus to improve the visual quality of VAE reconstructions. Perceptual losses have proven effective in image transformation and super-resolution, and studies applying them to VAEs also report more natural-looking results and higher perceptual quality.
The results here point the same way: natural images with large variation, such as the full CIFAR-10, remain difficult, but adding a perceptual loss makes it easier to reduce blur compared with a plain VAE. The improvement is especially visible when training is restricted to a single class, which makes this **an extension that is easy to try first in research settings**.

If you are struggling with blurry VAE outputs, **adding a Perceptual Loss** is worth considering first. It is relatively easy to implement, yet tends to deliver a visible improvement in output quality.