{"id":34,"date":"2026-05-07T22:30:11","date_gmt":"2026-05-07T13:30:11","guid":{"rendered":"https:\/\/material-ai-lab.com\/?p=34"},"modified":"2026-05-07T22:30:24","modified_gmt":"2026-05-07T13:30:24","slug":"34","status":"publish","type":"post","link":"https:\/\/material-ai-lab.com\/?p=34","title":{"rendered":"VAE\u3068\u306f\uff1fMNIST\u3068CIFAR-10\u3067\u5b66\u3076\u753b\u50cf\u751f\u6210\u306e\u57fa\u790e"},"content":{"rendered":"\n<p>VAE\uff08\u5909\u5206\u30aa\u30fc\u30c8\u30a8\u30f3\u30b3\u30fc\u30c0\u30fc\u3001Variational Autoencoder\uff09\u306f\u3001\u5165\u529b\u30c7\u30fc\u30bf\u3092\u5727\u7e2e\u3057\u3066\u7279\u5fb4\u3092\u62bd\u51fa\u3057\u3001\u305d\u306e\u60c5\u5831\u3092\u3082\u3068\u306b\u30c7\u30fc\u30bf\u3092\u518d\u69cb\u6210\u3059\u308b\u6a5f\u68b0\u5b66\u7fd2\u30e2\u30c7\u30eb\u3067\u3059\u3002<strong>\u6b21\u5143\u5727\u7e2e\u3001\u753b\u50cf\u751f\u6210\u3001\u7570\u5e38\u691c\u77e5\u3001\u30c7\u30fc\u30bf\u88dc\u5b8c<\/strong>\u306a\u3069\u3001\u3055\u307e\u3056\u307e\u306a\u7528\u9014\u3067\u4f7f\u308f\u308c\u3066\u3044\u307e\u3059\u3002<\/p>\n\n\n\n<p>\u672c\u8a18\u4e8b\u3067\u306f\u3001\u624b\u66f8\u304d\u6587\u5b57\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u3067\u3042\u308b<strong>MNIST<\/strong>\u3068\u3001\u8eca\u3084\u99ac\u306a\u3069\u306e\u81ea\u7136\u753b\u50cf\u3092\u542b\u3080<strong>CIFAR-10<\/strong>\u3092\u7528\u3044\u3066\u3001VAE\u306b\u3088\u308b\u753b\u50cf\u751f\u6210\u306e\u57fa\u672c\u3092\u5206\u304b\u308a\u3084\u3059\u304f\u8aac\u660e\u3057\u307e\u3059\u3002\u307e\u305f\u3001<strong>\u306a\u305cCIFAR-10\u3067\u306f\u753b\u50cf\u304c\u307c\u3084\u3051\u3084\u3059\u3044\u306e\u304b<\/strong>\u3001\u305d\u3057\u3066<strong>\u305d\u306e\u6539\u5584\u65b9\u6cd5<\/strong>\u306b\u3064\u3044\u3066\u3082\u89e3\u8aac\u3057\u307e\u3059\u3002<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. 
<h2 class="wp-block-heading">1. First, the Standard AE (Autoencoder)</h2>

<p>To understand the VAE, it helps to first know the standard AE (autoencoder).</p>

<p>An AE is a model that <strong>compresses the input data and then reconstructs the original data from the compressed representation</strong>. It consists of two main parts:</p>

<ul class="wp-block-list">
<li><strong>Encoder</strong>: compresses the input data into latent variables</li>

<li><strong>Decoder</strong>: reconstructs the original data from the latent variables</li>
</ul>

<p>For example, given an input image, the encoder summarizes the image's features into a small number of values, and the decoder reproduces an image close to the original from those values.</p>

<figure class="wp-block-image"><img decoding="async" src="https://blogger.googleusercontent.com/img/a/AVvXsEizeHy4KKqSFZZeUFYtU3EygeVBTjRyB-ne_kLTol-SfruMrCYmmZhj44WEpl-hccgM7N8f4eA8PqOcKgifALQ2G4cgDKUVuNW7me5Os_3g3ovAlJ21hi4gqLxSq_znMV8Dh9Y2E0rX8NUHS-wsAipN9XA2u9U-swowAFiQig6kZa5mBjgbY3gZf_rf1pHS=w593-h178" alt=""/></figure>

<p>With this mechanism, an AE can compress image features effectively. The standard AE has a weakness, however: the latent space easily becomes a mere "storage place for compressed features", so <strong>slightly changing a latent variable does not necessarily produce a meaningful change</strong> in the output. In other words, the latent space is not guaranteed to be smooth enough to be useful for image generation. A minimal AE sketch follows.</p>
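<p>To make the encoder/decoder structure concrete, here is a minimal fully connected autoencoder sketch for 28x28 grayscale images. The layer widths and the 16-dimensional latent code are illustrative choices, not taken from the models used later in this article.</p>

<pre class="wp-block-code"><code>import torch
import torch.nn as nn

# Minimal fully connected autoencoder for 28x28 grayscale images.
# Layer widths and the 16-dimensional latent code are illustrative choices.
class SimpleAE(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        # Encoder: compress 784 pixels down to a latent vector
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstruct 784 pixels from the latent vector
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 28 * 28),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)                           # [N, latent_dim]
        recon = self.decoder(z).view(-1, 1, 28, 28)   # [N, 1, 28, 28]
        return recon, z

# Quick shape check with dummy data
x = torch.rand(4, 1, 28, 28)
recon, z = SimpleAE()(x)
print(z.shape, recon.shape)   # torch.Size([4, 16]) torch.Size([4, 1, 28, 28])</code></pre>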
<p>The VAE was designed to improve on this weakness.</p>

<h2 class="wp-block-heading">2. The VAE (Variational Autoencoder)</h2>

<p>The VAE is an extension of the AE that is <strong>designed so that the latent space is continuous</strong>. In a standard AE, the encoder maps an input image directly to a single latent vector; the VAE takes a slightly different view.</p>

<p>Instead of outputting the latent variable as a single point, the VAE encoder outputs a <strong>mean μ and a variance σ² (or standard deviation)</strong>, and represents the input as a distribution around that point in latent space. In other words, it expresses probabilistically that "this image probably lies somewhere around here in the latent space".</p>

<figure class="wp-block-image"><img decoding="async" src="https://blogger.googleusercontent.com/img/a/AVvXsEgs7yrbFx_8eW0eIngcyc5LsDbPwd2mgpYod-GWKvniKTl99D1Cd8pR3uSQ7_lgnE8aQdTeDCT6NPpgbiii43jbE7ASH6bIdumvCsmzFuYjwql2Fw6H-Fayg5xNT3evBS4XL5eq5QOJa0a-ecTZU9sMtE9C4SBbFMGaPrKwUEQzDJ0LLXRxpbTw7ehG3Qn1=w640-h182" alt=""/></figure>

<p>This makes the latent space smooth, so gradually changing the latent variables tends to change the generated image in a natural way. This is one reason the VAE is well suited to <strong>image generation</strong>.</p>

<h3 class="wp-block-heading">What Is the Reparameterization Trick?</h3>

<p>There is one problem, however. The VAE needs to sample a random value from the distribution, and gradients cannot flow through a raw sampling operation, which makes training by backpropagation difficult. This is where the <strong>Reparameterization Trick</strong> comes in.</p>
<p>Instead of sampling the latent variable z directly at random, the VAE writes it as</p>

<p>z = μ + σ · ε</p>

<p>where ε is a random value sampled from the standard normal distribution N(0, 1). This isolates the randomness in ε while letting gradients flow with respect to μ and σ, so the VAE can be trained just like an ordinary neural network.</p>

<p>The VAE loss function consists of two main terms:</p>

<ul class="wp-block-list">
<li><strong>Reconstruction error</strong>: how well the input image was reproduced</li>

<li><strong>KL divergence</strong>: a term that pushes the latent distribution toward the standard normal distribution</li>
</ul>

<p>By balancing these two terms, the VAE tries to achieve both good reconstruction and an easy-to-handle latent space. The short sketch below shows the trick and the two loss terms in isolation.</p>
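<p>Here is a small, self-contained sketch of the reparameterization trick and the two loss terms, shown before the full training scripts. The tensor shapes and the use of binary cross-entropy mirror the models later in this article; the dummy encoder outputs and the tiny stand-in decoder are only for illustration.</p>

<pre class="wp-block-code"><code>import torch
import torch.nn as nn
import torch.nn.functional as F

# Dummy encoder outputs for a batch of 4 samples and an 8-dimensional latent space
mu = torch.zeros(4, 8, requires_grad=True)
logvar = torch.zeros(4, 8, requires_grad=True)

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1).
# The randomness is isolated in eps, so gradients can flow through mu and sigma.
std = torch.exp(0.5 * logvar)
eps = torch.randn_like(std)
z = mu + eps * std

# Tiny stand-in decoder: maps z to a 28x28 "image" with values in [0, 1]
decoder = nn.Sequential(nn.Linear(8, 28 * 28), nn.Sigmoid())
recon = decoder(z).view(-1, 1, 28, 28)

target = torch.rand(4, 1, 28, 28)   # stand-in for the input image

# Reconstruction error (binary cross-entropy, summed) + closed-form KL divergence
recon_loss = F.binary_cross_entropy(recon, target, reduction="sum")
kl_loss = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
total_loss = recon_loss + kl_loss

total_loss.backward()   # succeeds: the sampling step is differentiable w.r.t. mu and logvar
print(mu.grad.shape)    # torch.Size([4, 8])</code></pre>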
<h2 class="wp-block-heading">3. Training a VAE on MNIST</h2>

<p>First, let's train the VAE on MNIST. MNIST is a dataset of handwritten digit images from 0 to 9; it is grayscale and structurally quite simple, which makes it very well suited as an introduction to VAEs.</p>

<pre class="wp-block-code"><code>import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader

import torchvision
import torchvision.transforms as transforms

import matplotlib.pyplot as plt
import numpy as np


# ---------- Check the compute device ----------
print("PyTorch version:", torch.__version__)
print("Torchvision version:", torchvision.__version__)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)


# ---------- Prepare the dataset ----------
transform = transforms.ToTensor()

train_dataset = torchvision.datasets.MNIST(
    root="./data",
    train=True,
    download=True,
    transform=transform
)

test_dataset = torchvision.datasets.MNIST(
    root="./data",
    train=False,
    download=True,
    transform=transform
)

train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)

print("Number of training samples:", len(train_dataset))
print("Number of test samples:", len(test_dataset))


# ---------- Inspect the data ----------
images, labels = next(iter(train_loader))

print("images shape:", images.shape)
print("labels shape:", labels.shape)
print("First label:", labels[0].item())

plt.figure(figsize=(10, 4))
for i in range(8):
    plt.subplot(2, 4, i + 1)
    plt.imshow(images[i].squeeze(), cmap="gray")
    plt.title(f"label: {labels[i].item()}")
    plt.axis("off")
plt.tight_layout()
plt.show()


# ---------- Define the VAE model ----------
class CNNVAE(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.latent_dim = latent_dim

        # ===== Encoder =====
        self.enc_conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)   # 28x28 -> 28x28
        self.enc_pool1 = nn.MaxPool2d(2)                              # 28x28 -> 14x14

        self.enc_conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)  # 14x14 -> 14x14
        self.enc_pool2 = nn.MaxPool2d(2)                              # 14x14 -> 7x7

        self.fc_mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_logvar = nn.Linear(64 * 7 * 7, latent_dim)

        # ===== Decoder =====
        self.fc_dec = nn.Linear(latent_dim, 64 * 7 * 7)

        self.up1 = nn.Upsample(scale_factor=2, mode="nearest")        # 7x7 -> 14x14
        self.dec_conv1 = nn.Conv2d(64, 32, kernel_size=3, padding=1)

        self.up2 = nn.Upsample(scale_factor=2, mode="nearest")        # 14x14 -> 28x28
        self.dec_conv2 = nn.Conv2d(32, 16, kernel_size=3, padding=1)

        self.dec_conv3 = nn.Conv2d(16, 1, kernel_size=3, padding=1)

    def encode(self, x):
        # Input: [N, 1, 28, 28]
        x = F.relu(self.enc_conv1(x))       # [N, 32, 28, 28]
        x = self.enc_pool1(x)               # [N, 32, 14, 14]

        x = F.relu(self.enc_conv2(x))       # [N, 64, 14, 14]
        x = self.enc_pool2(x)               # [N, 64, 7, 7]

        x = torch.flatten(x, start_dim=1)   # [N, 64*7*7]
        mu = self.fc_mu(x)                  # [N, latent_dim]
        logvar = self.fc_logvar(x)          # [N, latent_dim]
        return mu, logvar

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        z = mu + eps * std
        return z

    def decode(self, z):
        # Input: [N, latent_dim]
        x = self.fc_dec(z)                  # [N, 64*7*7]
        x = x.view(-1, 64, 7, 7)            # [N, 64, 7, 7]

        x = self.up1(x)                     # [N, 64, 14, 14]
        x = F.relu(self.dec_conv1(x))       # [N, 32, 14, 14]

        x = self.up2(x)                     # [N, 32, 28, 28]
        x = F.relu(self.dec_conv2(x))       # [N, 16, 28, 28]

        x = torch.sigmoid(self.dec_conv3(x))  # [N, 1, 28, 28]
        return x

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        recon = self.decode(z)
        return recon, mu, logvar

# ---------- Instantiate the model ----------
latent_dim = 8  # latent dimension
model = CNNVAE(latent_dim=latent_dim).to(device)
print(model)


# ---------- Loss function (reconstruction error + KL divergence) ----------
def vae_loss_function(recon_x, x, mu, logvar):
    # Reconstruction error
    recon_loss = F.binary_cross_entropy(recon_x, x, reduction="sum")

    # KL divergence
    kl_loss = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    total_loss = recon_loss + kl_loss
    return total_loss, recon_loss, kl_loss


# ---------- Optimizer ----------
optimizer = optim.Adam(model.parameters(), lr=0.001)


# ---------- Training ----------
num_epochs = 10

for epoch in range(num_epochs):
    model.train()

    train_loss = 0.0
    train_recon = 0.0
    train_kl = 0.0

    for images, _ in train_loader:
        images = images.to(device)

        optimizer.zero_grad()

        recon, mu, logvar = model(images)
        loss, recon_loss, kl_loss = vae_loss_function(recon, images, mu, logvar)

        loss.backward()
        optimizer.step()

        train_loss += loss.item()
        train_recon += recon_loss.item()
        train_kl += kl_loss.item()

    avg_loss = train_loss / len(train_dataset)
    avg_recon = train_recon / len(train_dataset)
    avg_kl = train_kl / len(train_dataset)

    print(
        f"Epoch [{epoch+1}/{num_epochs}] "
        f"Loss: {avg_loss:.4f} | Recon: {avg_recon:.4f} | KL: {avg_kl:.4f}"
    )


# ---------- Reconstruction test ----------
model.eval()

images, _ = next(iter(test_loader))
images = images[:8].to(device)

with torch.no_grad():
    recon, mu, logvar = model(images)

images = images.cpu()
recon = recon.cpu()

plt.figure(figsize=(12, 4))
for i in range(8):
    # Original image
    plt.subplot(2, 8, i + 1)
    plt.imshow(images[i].squeeze(), cmap="gray")
    plt.title("Original")
    plt.axis("off")

    # Reconstructed image
    plt.subplot(2, 8, 8 + i + 1)
    plt.imshow(recon[i].squeeze(), cmap="gray")
    plt.title("Recon")
    plt.axis("off")

plt.tight_layout()
plt.show()


# ---------- Generate images from latent vectors ----------
model.eval()

with torch.no_grad():
    z = torch.randn(16, latent_dim).to(device)
    samples = model.decode(z).cpu()

plt.figure(figsize=(8, 8))
for i in range(16):
    plt.subplot(4, 4, i + 1)
    plt.imshow(samples[i].squeeze(), cmap="gray")
    plt.axis("off")
plt.tight_layout()
plt.show()</code></pre>

<p>Training a VAE on MNIST shows that <strong>even with a 2-dimensional latent space, the images can be reconstructed reasonably well</strong>. Increasing the latent dimension to 4 and then 8 makes the outlines and finer details of the digits reconstruct more stably.</p>

<figure class="wp-block-image"><img decoding="async" src="https://blogger.googleusercontent.com/img/a/AVvXsEgqVNWzECw0sE9N43aiaKVMKHOcQbRjgf9rXAkHOxFOOsK6gLFlr_C5k-ZXIe8izcyQT7PoMRifIJpKty1AmVMU5t-EinOgGR3dldnfNc7FRUYh20Vc_DN2HcFTLrbyqRYT7UpNRZRvvt9B3AYSTimikYPdl4VppIlqC_MyJhS61JaiWMRppm4siHBB1iZj=w640-h522" alt=""/></figure>

<p>Top: original images; bottom: reconstructed images.</p>

<p>This is because MNIST is a relatively simple dataset: there are only 10 kinds of digits and the background is almost constant. The amount of information the latent space has to hold is therefore small, and a VAE can capture the features easily.</p>
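<p>One way to see the smoothness of the latent space for yourself is to interpolate between two test digits. The following sketch assumes the trained <code>model</code>, <code>test_loader</code>, and <code>device</code> from the script above; it encodes two images, walks linearly between their latent means, and decodes each intermediate point.</p>

<pre class="wp-block-code"><code>import torch
import matplotlib.pyplot as plt

# Assumes `model`, `test_loader`, and `device` from the MNIST script above.
model.eval()

images, _ = next(iter(test_loader))
x0 = images[0:1].to(device)   # first test image
x1 = images[1:2].to(device)   # second test image

with torch.no_grad():
    mu0, _ = model.encode(x0)
    mu1, _ = model.encode(x1)

    # Walk linearly from mu0 to mu1 in latent space and decode each point
    steps = 8
    plt.figure(figsize=(12, 2))
    for i in range(steps):
        t = i / (steps - 1)
        z = (1 - t) * mu0 + t * mu1
        img = model.decode(z).cpu().squeeze()
        plt.subplot(1, steps, i + 1)
        plt.imshow(img, cmap="gray")
        plt.axis("off")
    plt.tight_layout()
    plt.show()</code></pre>

<p>If the latent space is smooth, the intermediate images morph gradually from one digit into the other instead of jumping abruptly.</p>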
<h2 class="wp-block-heading">4. Training a VAE on CIFAR-10</h2>

<p>Next, we train the VAE on CIFAR-10, a natural-image dataset containing cars, horses, birds, cats, and so on. It has 10 classes, just like MNIST, but the images are far more difficult.</p>

<pre class="wp-block-code"><code>import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader

import torchvision
import torchvision.transforms as transforms

import matplotlib.pyplot as plt
import numpy as np


# ---------- Check the compute device ----------
print("PyTorch version:", torch.__version__)
print("Torchvision version:", torchvision.__version__)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)


# ---------- Prepare the dataset ----------
transform = transforms.ToTensor()

train_dataset = torchvision.datasets.CIFAR10(
    root="./data",
    train=True,
    download=True,
    transform=transform
)

test_dataset = torchvision.datasets.CIFAR10(
    root="./data",
    train=False,
    download=True,
    transform=transform
)

train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)

print("Number of training samples:", len(train_dataset))
print("Number of test samples:", len(test_dataset))


# ---------- Inspect the data ----------
images, labels = next(iter(train_loader))

print("images shape:", images.shape)
print("labels shape:", labels.shape)
print("First label:", labels[0].item())

plt.figure(figsize=(10, 4))
for i in range(8):
    plt.subplot(2, 4, i + 1)
    plt.imshow(images[i].numpy().transpose((1, 2, 0)))
    plt.title(f"label: {labels[i].item()}")
    plt.axis("off")
plt.tight_layout()
plt.show()


# ---------- Define the VAE model ----------
class CNNVAE(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.latent_dim = latent_dim

        # ===== Encoder =====
        self.enc_conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)   # 32x32 -> 32x32
        self.enc_pool1 = nn.MaxPool2d(2)                              # 32x32 -> 16x16

        self.enc_conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)  # 16x16 -> 16x16
        self.enc_pool2 = nn.MaxPool2d(2)                              # 16x16 -> 8x8

        self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)

        # ===== Decoder =====
        self.fc_dec = nn.Linear(latent_dim, 64 * 8 * 8)

        self.up1 = nn.Upsample(scale_factor=2, mode="nearest")        # 8x8 -> 16x16
        self.dec_conv1 = nn.Conv2d(64, 32, kernel_size=3, padding=1)

        self.up2 = nn.Upsample(scale_factor=2, mode="nearest")        # 16x16 -> 32x32
        self.dec_conv2 = nn.Conv2d(32, 16, kernel_size=3, padding=1)

        self.dec_conv3 = nn.Conv2d(16, 3, kernel_size=3, padding=1)

    def encode(self, x):
        # Input: [N, 3, 32, 32]
        x = F.relu(self.enc_conv1(x))       # [N, 32, 32, 32]
        x = self.enc_pool1(x)               # [N, 32, 16, 16]

        x = F.relu(self.enc_conv2(x))       # [N, 64, 16, 16]
        x = self.enc_pool2(x)               # [N, 64, 8, 8]

        x = torch.flatten(x, start_dim=1)   # [N, 64*8*8]
        mu = self.fc_mu(x)                  # [N, latent_dim]
        logvar = self.fc_logvar(x)          # [N, latent_dim]
        return mu, logvar

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        z = mu + eps * std
        return z

    def decode(self, z):
        # Input: [N, latent_dim]
        x = self.fc_dec(z)                  # [N, 64*8*8]
        x = x.view(-1, 64, 8, 8)            # [N, 64, 8, 8]

        x = self.up1(x)                     # [N, 64, 16, 16]
        x = F.relu(self.dec_conv1(x))       # [N, 32, 16, 16]

        x = self.up2(x)                     # [N, 32, 32, 32]
        x = F.relu(self.dec_conv2(x))       # [N, 16, 32, 32]

        x = torch.sigmoid(self.dec_conv3(x))  # [N, 3, 32, 32]
        return x

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        recon = self.decode(z)
        return recon, mu, logvar

# ---------- Instantiate the model ----------
latent_dim = 64
model = CNNVAE(latent_dim=latent_dim).to(device)
print(model)


# ---------- Loss function (reconstruction error + KL divergence) ----------
def vae_loss_function(recon_x, x, mu, logvar):
    # Reconstruction error
    recon_loss = F.binary_cross_entropy(recon_x, x, reduction="sum")

    # KL divergence
    kl_loss = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    total_loss = recon_loss + kl_loss
    return total_loss, recon_loss, kl_loss


# ---------- Optimizer ----------
optimizer = optim.Adam(model.parameters(), lr=0.001)


# ---------- Training ----------
train_losses = []
train_recon_losses = []
train_kl_losses = []

num_epochs = 100

for epoch in range(num_epochs):
    model.train()

    train_loss = 0.0
    train_recon = 0.0
    train_kl = 0.0

    for images, _ in train_loader:
        images = images.to(device)

        optimizer.zero_grad()

        recon, mu, logvar = model(images)
        loss, recon_loss, kl_loss = vae_loss_function(recon, images, mu, logvar)

        loss.backward()
        optimizer.step()

        train_loss += loss.item()
        train_recon += recon_loss.item()
        train_kl += kl_loss.item()

    avg_loss = train_loss / len(train_dataset)
    avg_recon = train_recon / len(train_dataset)
    avg_kl = train_kl / len(train_dataset)

    train_losses.append(avg_loss)
    train_recon_losses.append(avg_recon)
    train_kl_losses.append(avg_kl)

    print(
        f"Epoch [{epoch+1}/{num_epochs}] "
        f"Loss: {avg_loss:.4f} | Recon: {avg_recon:.4f} | KL: {avg_kl:.4f}"
    )


# ---------- Reconstruction test ----------
model.eval()

images, _ = next(iter(test_loader))
images = images[:8].to(device)

with torch.no_grad():
    recon, mu, logvar = model(images)

images = images.cpu()
recon = recon.cpu()

plt.figure(figsize=(12, 4))
for i in range(8):
    # Original image
    plt.subplot(2, 8, i + 1)
    plt.imshow(images[i].numpy().transpose((1, 2, 0)))
    plt.title("Original")
    plt.axis("off")

    # Reconstructed image
    plt.subplot(2, 8, 8 + i + 1)
    plt.imshow(recon[i].numpy().transpose((1, 2, 0)))
    plt.title("Recon")
    plt.axis("off")

plt.tight_layout()
plt.show()</code></pre>

<p>Training a VAE on CIFAR-10, the reconstruction quality does improve as the latent dimension increases. Beyond a certain point, however, <strong>the improvement plateaus and the images tend to stay blurry overall</strong>.</p>

<figure class="wp-block-image"><img decoding="async" src="https://blogger.googleusercontent.com/img/a/AVvXsEhkyu2Iib5T5hv55aIugRMs_8YKLVog-E--TsVO7Xoi81RYvp2reNzBxGNzjEMPoPfk4jUL1Jy3SIxSitp8nvtppIHgfCxVPLpe4LYDDOdqPya7l7BBHIW9E300J8QeMpBfdxcAoecXsVkbaDCaAVNU57V9eg9nWMSOWYE0j3W6kngZWOBuUd-8zjstiw1z=w640-h192" alt=""/></figure>

<p>Top: original images; bottom: reconstructed images. This is the best configuration found here: latent dimension = 64, encoder CNN depth = 2, decoder CNN depth = 2.</p>

<p>Increasing the latent dimension or deepening the layers does bring some improvement, but reproducing images as crisp as the MNIST results is not easy.</p>

<figure class="wp-block-image"><img decoding="async" src="https://blogger.googleusercontent.com/img/a/AVvXsEiBR4SecdqyiY-hv1rjWCNa8LprwWxrnAWZwVDFbJ6GRAFpz7rbJQbtVsN9Kl3BWDBwihHQyYoHOm8cUIpAHC4vX2UK03jShOI63svWBLCL8u_3Bdf8LgcZhdPaoDG5kFNrP-El_OrPtlj2TjjybG3P89EWV3_qaLGlQx5ptHdMuCDSqBnjJ5BkPMEanr0P=w640-h254" alt=""/></figure>

<p>Optimization results. Latent dimensions above 64 do not improve things much, and the depth of the layers does not contribute greatly either.</p>
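<p>The CIFAR-10 script above records per-epoch averages in <code>train_losses</code>, <code>train_recon_losses</code>, and <code>train_kl_losses</code> but never plots them. A minimal plotting snippet such as the following, run after training, makes it easier to see whether the reconstruction term has plateaued.</p>

<pre class="wp-block-code"><code>import matplotlib.pyplot as plt

# Assumes the lists collected by the CIFAR-10 training loop above.
epochs = range(1, len(train_losses) + 1)

plt.figure(figsize=(8, 4))
plt.plot(epochs, train_losses, label="Total")
plt.plot(epochs, train_recon_losses, label="Reconstruction")
plt.plot(epochs, train_kl_losses, label="KL")
plt.xlabel("Epoch")
plt.ylabel("Loss per sample")
plt.legend()
plt.tight_layout()
plt.show()</code></pre>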
<h2 class="wp-block-heading">5. Why Does CIFAR-10 Produce Blurry Images?</h2>

<p>The reason is that <strong>the difficulty of the task differs greatly between MNIST and CIFAR-10</strong>.</p>

<ul class="wp-block-list">
<li><strong>MNIST</strong>: only simple black-and-white handwritten digits have to be reconstructed</li>

<li><strong>CIFAR-10</strong>: natural images whose object shapes, colors, backgrounds, and viewpoints vary widely have to be reconstructed</li>
</ul>

<p>In MNIST, the range of image patterns is fairly limited. In CIFAR-10, by contrast, images within the same class can look very different, with varied backgrounds and compositions. In short, <strong>natural images carry a very large amount of information</strong>.</p>

<p>As a result, when a complex image like those in CIFAR-10 is compressed into a limited number of latent dimensions and then reconstructed, fine details are inevitably lost.</p>

<p>In addition, a VAE usually uses a pixel-wise difference as the reconstruction error. With such a loss, the output tends to look like an average over the several plausible detail patterns, which is another major reason the images become blurry. The small numerical example below illustrates this averaging effect.</p>

<p>Furthermore, to keep the latent space smooth and easy to handle, the VAE also imposes the KL-divergence constraint. This constraint helps generation, but it can work against sharpness if we only judge by the fineness of the reconstructions. In other words, <strong>there is a trade-off between ease of generation and sharpness</strong>.</p>
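<p>The following toy calculation (not from the article's experiments) shows why a per-pixel loss favors averaged, blurry outputs. Suppose a detail that the latent code cannot resolve means a pixel is equally likely to be black or white across plausible sharp images; the single value that minimizes the expected per-pixel error is the blurry average, not either sharp choice.</p>

<pre class="wp-block-code"><code>import numpy as np

# A pixel that is equally likely to be black (0.0) or white (1.0)
# across plausible sharp images, because the latent code cannot resolve it.
pixel_values = np.array([0.0, 1.0])

# Compare a few candidate decoder outputs against both possibilities.
for v in [0.0, 0.5, 1.0]:
    expected_sq_error = np.mean((v - pixel_values) ** 2)
    print(f"output {v:.1f} -> expected squared error {expected_sq_error:.2f}")

# output 0.0 -> expected squared error 0.50
# output 0.5 -> expected squared error 0.25   (the blurry average wins)
# output 1.0 -> expected squared error 0.50
# Binary cross-entropy behaves the same way: its expectation over the two
# equally likely targets is also minimized at 0.5.</code></pre>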
<h2 class="wp-block-heading">6. How Can the VAE's Blurriness Be Overcome?</h2>

<p>There are several ways to reduce the blurriness of VAE outputs:</p>

<ul class="wp-block-list">
<li><strong>β-VAE / re-weighting the KL term</strong>: adjust the weight of the KL divergence to balance the properties of the latent space against reconstruction quality</li>

<li><strong>Introducing a perceptual loss</strong>: compare images not only pixel by pixel but also in terms of image features, which makes the outputs look more natural</li>

<li><strong>VQ-VAE</strong>: use a discrete latent representation instead of a continuous latent space, which makes it easier to learn sharper representations</li>

<li><strong>Diffusion models</strong>: generate high-quality images by repeatedly removing noise; a representative family of generative models separate from the VAE</li>
</ul>

<h2 class="wp-block-heading">7. What Is Recommended for Researchers Working on Image Tasks?</h2>

<p>When a VAE is used for research, especially in chemistry or materials science, a major advantage is that <strong>the latent variables are easy to handle as continuous vectors</strong>. If you plan to analyze the latent space or run optimization over latent variables, this property is extremely convenient.</p>

<p>For that reason, the two methods that are easiest for researchers to try first are the following; a minimal sketch combining both appears at the end of this section.</p>

<ul class="wp-block-list">
<li><strong>Adjusting the KL coefficient</strong>: easy to implement and easy to balance against reconstruction</li>

<li><strong>Introducing a perceptual loss</strong>: tends to improve how natural the images look</li>
</ul>

<p>VQ-VAE and diffusion models, on the other hand, are powerful for high-quality image generation, but their architecture and training are more involved, and they can be harder to use in studies that emphasize interpreting or manipulating the latent space.</p>
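<p>As a concrete starting point, here is a minimal sketch of how the loss function from the scripts above could be modified to add a KL weight β and a simple perceptual term based on pretrained VGG16 features. This is not the loss used for the article's results: the <code>vgg16</code>/<code>VGG16_Weights</code> API assumes torchvision 0.13 or later, the weight values are illustrative rather than tuned, and proper ImageNet normalization of the VGG input is omitted for brevity.</p>

<pre class="wp-block-code"><code>import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen feature extractor for the perceptual term (first few VGG16 layers).
# For 3-channel images such as CIFAR-10; move it to the model's device with
# vgg_features.to(device) before training.
vgg_features = vgg16(weights=VGG16_Weights.DEFAULT).features[:9].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def vae_loss_perceptual(recon_x, x, mu, logvar, beta=0.5, lambda_percep=1.0):
    # Pixel-wise reconstruction error, as in the original scripts
    recon_loss = F.binary_cross_entropy(recon_x, x, reduction="sum")

    # Perceptual term: compare VGG feature maps of reconstruction and input
    feat_recon = vgg_features(recon_x)
    feat_x = vgg_features(x)
    percep_loss = F.mse_loss(feat_recon, feat_x, reduction="sum")

    # KL divergence, down-weighted by beta (beta &lt; 1 favors sharper reconstructions)
    kl_loss = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    total = recon_loss + lambda_percep * percep_loss + beta * kl_loss
    return total, recon_loss, percep_loss, kl_loss</code></pre>

<p>In the training loop, this function would simply replace <code>vae_loss_function</code>; β and the perceptual weight are hyperparameters that need to be tuned for the dataset at hand.</p>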
href=\"https:\/\/www.blogger.com\/blog\/post\/edit\/7026973157148678023\/4462177816909944976#\"><img decoding=\"async\" src=\"https:\/\/blogger.googleusercontent.com\/img\/a\/AVvXsEjMTIdGqjkvWGj3zilKgahdWn93ar9H3O7KA2jewIRoTn2M0s_-ewsX-Sw5IvPE4Lb6RJqd3P5S94E0fX76_q4fJ6feY5t7rQU07hwxovVozig6RvRyIJeBZ2Tq-DqLuH-bUShUNd5AJ4D0iW7HysdisIFJWFTJNSCN5my7Zy3z_Vt_QpxLVCrI9Pa3w05I=w640-h340\" alt=\"\"\/><\/a><\/figure>\n\n\n\n<p>\u4f8b Perceptual loss\u3092\u5c0e\u5165\u3057\u305fCNN-VAE\u3002\u660e\u78ba\u306b\u9bae\u660e\u306b\u518d\u69cb\u6210\u3055\u308c\u3066\u3044\u308b\u3002<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">8. \u307e\u3068\u3081<\/h2>\n\n\n\n<p>\u672c\u8a18\u4e8b\u3067\u306f\u3001VAE\u3092MNIST\u3068CIFAR-10\u3067\u5b66\u7fd2\u3055\u305b\u305f\u3068\u304d\u306e\u9055\u3044\u3092\u901a\u3057\u3066\u3001VAE\u306e\u57fa\u672c\u3068\u9650\u754c\u3092\u8aac\u660e\u3057\u307e\u3057\u305f\u3002<\/p>\n\n\n\n<p>MNIST\u3067\u306f\u753b\u50cf\u306e\u69cb\u9020\u304c\u6bd4\u8f03\u7684\u5358\u7d14\u3067\u3042\u308b\u305f\u3081\u3001VAE\u3067\u3082\u5c11\u306a\u3044\u6f5c\u5728\u6b21\u5143\u3067\u3046\u307e\u304f\u518d\u69cb\u6210\u3067\u304d\u307e\u3059\u3002\u4e00\u65b9\u3001CIFAR-10\u306e\u3088\u3046\u306a\u81ea\u7136\u753b\u50cf\u3067\u306f\u3001\u5f62\u72b6\u3084\u80cc\u666f\u306e\u591a\u69d8\u6027\u304c\u5927\u304d\u304f\u3001\u9650\u3089\u308c\u305f\u6f5c\u5728\u7a7a\u9593\u306b\u60c5\u5831\u3092\u62bc\u3057\u8fbc\u3081\u308b\u306e\u304c\u96e3\u3057\u3044\u305f\u3081\u3001\u753b\u50cf\u304c\u307c\u3084\u3051\u3084\u3059\u304f\u306a\u308a\u307e\u3059\u3002<\/p>\n\n\n\n<p>\u3053\u308c\u306f\u5358\u306a\u308b\u5b9f\u88c5\u306e\u554f\u984c\u3067\u306f\u306a\u304f\u3001<strong>VAE\u306e\u300c\u5727\u7e2e\u3057\u306a\u304c\u3089\u6ed1\u3089\u304b\u306a\u6f5c\u5728\u7a7a\u9593\u3092\u4f5c\u308b\u300d\u3068\u3044\u3046\u6027\u8cea\u305d\u306e\u3082\u306e\u306b\u7531\u6765\u3059\u308b\u8ab2\u984c<\/strong>\u3067\u3059\u3002<\/p>\n\n\n\n<p>\u305d\u306e\u6539\u5584\u65b9\u6cd5\u3068\u3057\u3066\u306f\u3001<strong>KL\u4fc2\u6570\u306e\u8abf\u6574\u3001Perceptual Loss\u306e\u5c0e\u5165\u3001VQ-VAE\u3001Diffusion\u30e2\u30c7\u30eb<\/strong>\u306a\u3069\u304c\u3042\u308a\u307e\u3059\u3002\u7279\u306b\u7814\u7a76\u7528\u9014\u3067\u306f\u3001\u5b9f\u88c5\u306e\u3057\u3084\u3059\u3055\u3068\u6f5c\u5728\u7a7a\u9593\u306e\u6271\u3044\u3084\u3059\u3055\u306e\u30d0\u30e9\u30f3\u30b9\u304b\u3089\u3001\u307e\u305a\u306f<strong>KL\u4fc2\u6570\u306e\u8abf\u6574\u3068Perceptual Loss<\/strong>\u3092\u8a66\u3059\u306e\u304c\u73fe\u5b9f\u7684\u3067\u3059\u3002<\/p>\n\n\n\n<p>VAE\u306f\u30b7\u30f3\u30d7\u30eb\u3067\u7406\u89e3\u3057\u3084\u3059\u3044\u4e00\u65b9\u3067\u3001\u751f\u6210\u30e2\u30c7\u30eb\u3068\u3057\u3066\u306e\u91cd\u8981\u306a\u8003\u3048\u65b9\u304c\u591a\u304f\u8a70\u307e\u3063\u3066\u3044\u307e\u3059\u3002\u753b\u50cf\u751f\u6210AI\u3092\u5b66\u3076\u6700\u521d\u306e\u4e00\u6b69\u3068\u3057\u3066\u3001\u4eca\u3067\u3082\u975e\u5e38\u306b\u6709\u7528\u306a\u30e2\u30c7\u30eb\u3067\u3059\u3002<\/p>\n","protected":false},"excerpt":{"rendered":"<p>VAE\uff08\u5909\u5206\u30aa\u30fc\u30c8\u30a8\u30f3\u30b3\u30fc\u30c0\u30fc\u3001Variational 