Jekyll2023-08-29T19:14:30+00:00https://jreypo.io/feed.xmlJuanma’s BlogJuan Manuel ReyAutomate your Azure Bastion tunnels with a shell script2023-08-29T17:10:00+00:002023-08-29T17:10:00+00:00https://jreypo.io/2023/08/29/automate-your-azure-bastion-tunnels-with-a-shell-script<p>The <a href="https://learn.microsoft.com/en-us/azure/bastion/">Azure Bastion</a> service enables connectivity to Windows and Linux virtual machines running on Azure without the need to have RDP or SSH ports open to the public. By default an Azure Bastion connection to a VM is opened in a new tab in the browser, but Bastion also gives you the possibility of establishing a tunnel using the Azure CLI and connecting with native RDP or SSH clients.</p>
<p>I run many virtual machines in Azure for my day-to-day work, from jumpboxes to development VMs, and I use Azure Bastion all the time to access them. To quickly automate establishing tunnels I created the shell script below. I use it from macOS since I have a MacBook Pro as my daily driver, but it can also be run on any Linux instance or in WSL.</p>
<p>The only requirement is to have the <a href="https://learn.microsoft.com/en-us/cli/azure/">Azure CLI</a> installed and configured.</p>
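<p>To double-check that everything is in place you can run something like the commands below. The subscription ID is just a placeholder, and depending on your Azure CLI version the <code class="language-plaintext highlighter-rouge">az network bastion</code> commands may require the <code class="language-plaintext highlighter-rouge">bastion</code> extension.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az --version
az login
az account set --subscription "my-subscription-id"
az extension add --name bastion
</code></pre></div></div>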
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="c">#</span>
<span class="c"># Azure Bastion tunnel script by @jreypo</span>
<span class="c">#</span>
<span class="nb">echo</span> <span class="nt">-n</span> <span class="s2">"Enter resource group name: "</span>
<span class="nb">read </span>rg
<span class="nb">echo</span> <span class="nt">-n</span> <span class="s2">"Enter Bastion name: "</span>
<span class="nb">read </span>bastion
<span class="nb">echo</span> <span class="nt">-n</span> <span class="s2">"Enter virtual machine name: "</span>
<span class="nb">read </span>vmname
<span class="nb">echo</span> <span class="nt">-n</span> <span class="s2">"Enter remote resource port: "</span>
<span class="nb">read </span>resourceport
<span class="nb">echo</span> <span class="nt">-n</span> <span class="s2">"Enter local port: "</span>
<span class="nb">read </span>port
<span class="nv">vmid</span><span class="o">=</span><span class="si">$(</span>az vm show <span class="nt">--resource-group</span> <span class="nv">$rg</span> <span class="nt">--name</span> <span class="nv">$vmname</span> <span class="nt">--query</span> <span class="nb">id</span> <span class="nt">--output</span> tsv<span class="si">)</span>
az network bastion tunnel <span class="nt">--resource-group</span> <span class="nv">$rg</span> <span class="nt">--target-resource-id</span> <span class="nv">$vmid</span> <span class="nt">--resource-port</span> <span class="nv">$resourceport</span> <span class="nt">--port</span> <span class="nv">$port</span> <span class="nt">--name</span> <span class="nv">$bastion</span>
</code></pre></div></div>
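<p>Once the tunnel is established, point your native client at the chosen local port on <code class="language-plaintext highlighter-rouge">localhost</code>. For example, assuming you picked resource port 22 and local port 2222, and replacing the username with your VM user, an SSH session would look like this.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh azureuser@127.0.0.1 -p 2222
</code></pre></div></div>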
<p>Hope it helps. I created a <a href="https://gist.github.com/jreypo/babf475d1a2b678b7dc1347f6dc78f9f">Gist on GitHub</a> with the script code in case you have any feedback or find any issues. Comments are welcome as always.</p>
<p>–Juanma</p>Juan Manuel ReyA look into CBL-Mariner, Microsoft’s internal Linux distribution2021-07-09T12:45:00+00:002021-07-09T12:45:00+00:00https://jreypo.io/2021/07/09/a-look-into-cbl-mariner-microsoft-internal-linux-distribution<p>Mariner, or more exactly CBL-Mariner, where CBL stands for <em>Common Base Linux</em>, is a Linux distribution created by Microsoft’s Linux System Group, the same team at Microsoft that created the <a href="https://github.com/microsoft/WSL2-Linux-Kernel">Linux kernel used for Windows Subsystem for Linux version 2</a>, or WSL2. The goal of Mariner is to serve as an internal Linux distribution for Microsoft’s engineering teams to build cloud infrastructure and edge products and services.</p>
<p>Of course Mariner is open source and it has its own repo under <a href="https://github.com/microsoft/CBL-Mariner">Microsoft’s GitHub</a> organization. No ISOs or images of Mariner are provided; however, the repo has instructions to build them on Ubuntu 18.04. There is a series of prerequisites listed on this <a href="https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/building/prerequisites.md">GitHub page</a> that roughly includes Docker, RPM tools, ISO build tools and Golang, amongst others.</p>
<p>The build process for an ISO is very straightforward; it relies on pre-compiled RPM packages from the <a href="https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/building/prerequisites.md">CBL-Mariner package repository</a>. Since I wanted to install Mariner on my vSphere 7 homelab I chose to create the ISO.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/microsoft/CBL-Mariner.git
cd CBL-Mariner/toolkit
sudo make iso REBUILD_TOOLS=y REBUILD_PACKAGES=n CONFIG_FILE=./imageconfigs/full.json
</code></pre></div></div>
<h2 id="installation-process">Installation process</h2>
<p>In my vSphere lab I created a couple of new VMs, set the guest OS to <code class="language-plaintext highlighter-rouge">Other 5.x or later Linux (64-bit)</code> and configured the hardware with 1 vCPU, 2GB of RAM and a 16GB disk. This is enough for a simple test.</p>
<p>The installation process gives you the option of text or graphical mode. I chose the graphical one since I was curious whether it was based on Fedora’s Anaconda or another installer.</p>
<p><a href="https://jreypo.io/assets/images/cbl-mariner-graphic-installer.png"><img src="/assets/images/cbl-mariner-graphic-installer.png" alt="" /></a></p>
<p>There are two types of installations:</p>
<ul>
<li>Core</li>
<li>Full</li>
</ul>
<p>The installation is very fast in both cases: it took around 29 seconds for the Core option and around 76 seconds for the Full one. During the process it will ask you for the typical parameters like user, partitioning, etc.</p>
<ul>
<li>Partition configuration</li>
</ul>
<p><a href="https://jreypo.io/assets/images/cbl-mariner-partition-config.png"><img src="/assets/images/cbl-mariner-partition-config.png" alt="" /></a></p>
<ul>
<li>System configuration</li>
</ul>
<p><a href="https://jreypo.io/assets/images/cbl-mariner-system-install.png"><img src="/assets/images/cbl-mariner-system-install.png" alt="" /></a></p>
<h2 id="cbl-mariner-overview">CBL-Mariner overview</h2>
<p>CBL-Mariner feels very similar to other Linux distros like Fedora or Photon-OS, which is expected since the <a href="https://github.com/microsoft/CBL-Mariner#acknowledgments">Acknowledgments</a> section of its GitHub repo lists both projects; the team used their SPEC files as a starting point and reference.</p>
<p>As you would expect in any modern Linux distro, <code class="language-plaintext highlighter-rouge">systemd</code> is used as CBL-Mariner’s system manager. After installing my Mariner VM I had to access it through the vSphere console because no SSH daemon is installed by default, but it can easily be installed using <code class="language-plaintext highlighter-rouge">tdnf</code>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo tdnf install -y openssh-server
sudo systemctl enable --now sshd.service
</code></pre></div></div>
<h3 id="package-and-update-system">Package and update system</h3>
<p>CBL-Mariner’s package system is RPM-based. The package update system uses both <code class="language-plaintext highlighter-rouge">dnf</code> and <code class="language-plaintext highlighter-rouge">tdnf</code>; <a href="https://github.com/vmware/tdnf">Tiny DNF</a> is a package manager based on <code class="language-plaintext highlighter-rouge">dnf</code> that comes from VMware’s Photon OS.</p>
<p>CBL-Mariner also supports an image-based update mechanism for atomic servicing and rollback using <a href="https://rpm-ostree.readthedocs.io/en/stable/">RPM-OSTree</a>; <code class="language-plaintext highlighter-rouge">rpm-ostree</code> is an open source tool based on <a href="https://ostreedev.github.io/ostree/introduction/">OSTree</a> to manage bootable, immutable, versioned filesystem trees. The idea behind rpm-ostree is to use a client-server architecture to keep Linux hosts updated and in sync with the latest packages in a reliable manner.</p>
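<p>As a rough sketch, on an rpm-ostree managed system checking the deployed tree and applying an update looks like the commands below; note that this assumes an image built with the rpm-ostree flow rather than the standard ISO used above.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>rpm-ostree status
sudo rpm-ostree upgrade
sudo systemctl reboot
</code></pre></div></div>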
<p>In terms of software available after the installation there are two package repositories, <code class="language-plaintext highlighter-rouge">base</code> and <code class="language-plaintext highlighter-rouge">update</code>, configured in the system.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vadmin@cbl-mariner [ ~ ]$ sudo tdnf repolist
Loaded plugin: tdnfrepogpgcheck
repo id                     repo name                                   status
mariner-official-base       CBL-Mariner Official Base 1.0 x86_64        enabled
mariner-official-update     CBL-Mariner Official Update 1.0 x86_64      enabled
vadmin@cbl-mariner [ ~ ]$
</code></pre></div></div>
<p>Around 3300 packages are available between both repositories. In my case it was a very pleasant surprise to find the <code class="language-plaintext highlighter-rouge">open-vm-tools</code> package; since I run my CBL-Mariner instances on vSphere it is fantastic to have the VMware Tools packages available.</p>
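<p>Installing them is just another <code class="language-plaintext highlighter-rouge">tdnf</code> transaction; the service unit name below is the one commonly used by <code class="language-plaintext highlighter-rouge">open-vm-tools</code>, so adjust it if your build names it differently.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo tdnf install -y open-vm-tools
sudo systemctl enable --now vmtoolsd.service
</code></pre></div></div>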
<h3 id="security-by-default">Security by default</h3>
<p>CBL-Mariner follows the secure-by-default principle; most aspects of the OS have been built with an emphasis on security. It comes with a hardened kernel, signed updates, ASLR, compiler-based hardening and tamper-resistant logs, amongst many other features.</p>
<p>All Mariner security features are listed in <a href="https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/security/security-features.md">CBL-Mariner’s GitHub repository</a>.</p>
<p>I hope this quick overview of CBL-Mariner has been interesting. I encourage you to look at Mariner’s GitHub repo and to create your own ISO and/or VHDX images.</p>
<p>Stay safe.</p>
<p>–Juanma</p>Juan Manuel ReyRunning pi-hole as a podman container in Fedora2021-03-12T18:00:00+00:002021-03-12T18:00:00+00:00https://jreypo.io/2021/03/12/running-pihole-as-a-podman-container-in-fedora<p>I run Pi-Hole at home to filter and block ad traffic; it is a fantastic piece of software that helps keep me and my son away from unwanted ads. It can be run on a VM or, even better, in a <a href="https://github.com/pi-hole/docker-pi-hole">Docker container</a>. If you are not using it yet I encourage you to visit the <a href="https://pi-hole.net/">Pi-Hole homepage</a>, take a look at the project and donate or contribute on <a href="https://github.com/pi-hole">GitHub</a> to keep it running.</p>
<p>For the last year and a half or so I have been running Pi-Hole on a CentOS 7 virtual machine. It has been working great, but I wanted to move it off my ESXi host and separate it from my lab workloads. Many people use a Raspberry Pi to run it and my original intention was exactly that, using <a href="https://www.portainer.io/">Portainer</a> to manage the container like I was doing on CentOS; however, I had a <a href="https://www.gigabyte.com/Mini-PcBarebone/GB-BXBT-2807-rev-10">GIGABYTE GB-BXBT-2807</a> mini PC lying around. This little fella used to be my media center; it is a NUC-like machine with a dual-core Intel Celeron N2807 CPU, 8GB of RAM and a 60GB SSD. It has enough room not only for Pi-Hole but also for other workloads I am planning to run to support my home network.</p>
<p>I decided to move from CentOS to Fedora Server, so I installed version 33. Fedora 33 comes with a caveat: since it uses <a href="https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html">cgroups v2</a>, Docker does not work. But that is not the end of the world because <a href="https://podman.io/">Podman</a> is available instead. I have used it in the past for testing purposes but never to run any serious workloads at home; there are ways to install Docker on Fedora 33, but I decided to use the default option and try running Pi-Hole on it.</p>
<p>If you have never heard about Podman, or Pod Manager, it is a daemonless container engine for OCI containers on Linux, originally developed by Red Hat as an open source project and intended to replace Docker in the Fedora/CentOS/RHEL ecosystem.</p>
<h2 id="prepare-the-server">Prepare the server</h2>
<p>Install Fedora 33 in a virtual machine or, as in my case, on a physical system. It is up to you to decide the type of installation, but for me a minimal installation is more than enough and keeps the system and its attack surface as small as possible.</p>
<p>After the OS is installed run <code class="language-plaintext highlighter-rouge">dnf update</code> to get the latest Fedora updates and install Podman.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo dnf update -y
sudo dnf install -y podman
</code></pre></div></div>
<p>Next we need to adjust the Fedora network configuration. By default Fedora Server comes with <a href="https://wiki.gnome.org/Projects/NetworkManager/"><strong>NetworkManager</strong></a> to manage networking and the <code class="language-plaintext highlighter-rouge">systemd-resolved</code> service enabled. We will keep <strong>NetworkManager</strong> but will configure static <code class="language-plaintext highlighter-rouge">ipv4</code> addressing and DNS servers using the <code class="language-plaintext highlighter-rouge">nmcli</code> tool, disable <code class="language-plaintext highlighter-rouge">systemd-resolved</code> and adjust FirewallD.</p>
<h3 id="set-an-static-ip-address">Set an static IP address</h3>
<p>Modify your existing connection with <code class="language-plaintext highlighter-rouge">nmcli</code>. My connection is <code class="language-plaintext highlighter-rouge">ens192</code>, but you should run <code class="language-plaintext highlighter-rouge">nmcli connection</code> to get a list of the existing connections and use the appropriate one.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo nmcli connection modify ens192 ipv4.method manual
sudo nmcli connection modify ens192 ipv4.addresses 192.168.1.94/24
sudo nmcli connection modify ens192 ipv4.gateway 192.168.1.1
</code></pre></div></div>
<h3 id="configure-dns">Configure DNS</h3>
<p>Configure DNS and stop and disable <code class="language-plaintext highlighter-rouge">systemd-resolved</code> service.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo nmcli connection modify ens192 ipv4.dns "8.8.8.8 8.8.4.4"
sudo systemctl disable systemd-resolved
sudo systemctl stop systemd-resolved
sudo unlink /etc/resolv.conf
sudo systemctl restart NetworkManager
</code></pre></div></div>
<h3 id="configure-firewalld">Configure FirewallD</h3>
<p>Configure <a href="https://firewalld.org/">FirewallD</a> by adding rules to enable access to Pi-Hole TCP and UDP ports. If you want to know more about <code class="language-plaintext highlighter-rouge">firewalld</code> review my article <a href="/2015/04/08/firewalld-quickstart-guide/">here</a>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo firewall-cmd --zone=FedoraServer --add-port=80/tcp
sudo firewall-cmd --zone=FedoraServer --add-port=443/tcp
sudo firewall-cmd --zone=FedoraServer --add-port=53/tcp
sudo firewall-cmd --zone=FedoraServer --add-port=53/udp
sudo firewall-cmd --zone=FedoraServer --add-port=67/udp
sudo firewall-cmd --permanent --zone=FedoraServer --add-port=53/udp
sudo firewall-cmd --permanent --zone=FedoraServer --add-port=53/tcp
sudo firewall-cmd --permanent --zone=FedoraServer --add-port=443/tcp
sudo firewall-cmd --permanent --zone=FedoraServer --add-port=67/udp
sudo firewall-cmd --permanent --zone=FedoraServer --add-port=80/tcp
</code></pre></div></div>
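<p>You can confirm that the ports are open in both the runtime and the permanent configuration with:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo firewall-cmd --zone=FedoraServer --list-ports
sudo firewall-cmd --permanent --zone=FedoraServer --list-ports
</code></pre></div></div>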
<h2 id="run-pi-hole">Run Pi-Hole</h2>
<p>Pi-Hole will need two container volumes to persist data; we can create them with <code class="language-plaintext highlighter-rouge">podman</code> in the same way as with the <code class="language-plaintext highlighter-rouge">docker</code> CLI.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo podman volume create pihole_pihole
sudo podman volume create pihole_dnsmasq
</code></pre></div></div>
<p>Pull the latest <code class="language-plaintext highlighter-rouge">pihole</code> container image.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo podman pull pihole/pihole
</code></pre></div></div>
<p>Run the <code class="language-plaintext highlighter-rouge">pihole</code> container to test that it works.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>podman run --name=pihole \
--hostname=pi-hole \
--cap-add=NET_ADMIN \
--dns=127.0.0.1 \
--dns=1.1.1.1 \
-e TZ=Europe/Madrid \
-e SERVERIP=192.168.1.20 \
-e WEBPASSWORD=my_pihole_password \
-e DNS1=1.1.1.1 \
-e DNS2=1.0.0.1 \
-e DNSSEC=true \
-e CONDITIONAL_FORWARDING=true \
-e CONDITIONAL_FORWARDING_IP=192.168.1.1 \
-e CONDITIONAL_FORWARDING_DOMAIN=lan \
-e TEMPERATUREUNIT=c \
-v pihole_pihole:/etc/pihole:Z \
-v pihole_dnsmasq:/etc/dnsmasq.d:Z \
-p 80:80/tcp \
-p 443:443/tcp \
-p 67:67/udp \
-p 53:53/tcp \
-p 53:53/udp \
pihole/pihole
</code></pre></div></div>
<p>You can verify that the container is running with <code class="language-plaintext highlighter-rouge">sudo podman ps</code>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3c0f7b90e121 docker.io/pihole/pihole:latest 2 weeks ago Up 2 weeks ago 0.0.0.0:53->53/tcp, 0.0.0.0:53->53/udp, 0.0.0.0:67->67/udp, 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp pihole
</code></pre></div></div>
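<p>A quick way to confirm that Pi-Hole is actually answering DNS queries is to point <code class="language-plaintext highlighter-rouge">dig</code> at the server IP address configured earlier, 192.168.1.94 in my case.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dig @192.168.1.94 pi-hole.net
</code></pre></div></div>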
<h2 id="configure-pi-hole-as-systemd-service">Configure Pi-Hole as systemd service</h2>
<p>We have verified that Pi-Hole can be run as a Podman container; however, if the server goes down or we reboot it for whatever reason we will need to run the container manually again. On CentOS with Docker I had this solved by using Portainer, but since Portainer needs to run as a privileged container it cannot be run with Podman. Instead I decided to create a systemd service that will run the container automatically during server startup.</p>
<p>Create your service unit file in <code class="language-plaintext highlighter-rouge">/etc/systemd/system</code>; I named mine <code class="language-plaintext highlighter-rouge">pi-hole.service</code>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span>Unit]
<span class="nv">Description</span><span class="o">=</span>Pi-Hole Podman Container
<span class="nv">After</span><span class="o">=</span>firewalld.service
<span class="o">[</span>Service]
<span class="nv">ExecStart</span><span class="o">=</span>/usr/bin/podman run <span class="nt">--name</span><span class="o">=</span>pihole <span class="nt">--hostname</span><span class="o">=</span>pi-hole <span class="nt">--cap-add</span><span class="o">=</span>NET_ADMIN <span class="nt">--dns</span><span class="o">=</span>127.0.0.1 <span class="nt">--dns</span><span class="o">=</span>1.1.1.1 <span class="nt">-e</span> <span class="nv">TZ</span><span class="o">=</span>Europe/Madrid <span class="nt">-e</span> <span class="nv">SERVERIP</span><span class="o">=</span>192.168.1.20 <span class="nt">-e</span> <span class="nv">WEBPASSWORD</span><span class="o">=</span>my_pihole_password <span class="nt">-e</span> <span class="nv">DNS1</span><span class="o">=</span>1.1.1.1 <span class="nt">-e</span> <span class="nv">DNS2</span><span class="o">=</span>1.0.0.1 <span class="nt">-e</span> <span class="nv">DNSSEC</span><span class="o">=</span><span class="nb">true</span> <span class="nt">-e</span> <span class="nv">CONDITIONAL_FORWARDING</span><span class="o">=</span><span class="nb">true</span> <span class="nt">-e</span> <span class="nv">CONDITIONAL_FORWARDING_IP</span><span class="o">=</span>192.168.1.1 <span class="nt">-e</span> <span class="nv">CONDITIONAL_FORWARDING_DOMAIN</span><span class="o">=</span>lan <span class="nt">-e</span> <span class="nv">TEMPERATUREUNIT</span><span class="o">=</span>c <span class="nt">-v</span> pihole_pihole:/etc/pihole:Z <span class="nt">-v</span> pihole_dnsmasq:/etc/dnsmasq.d:Z <span class="nt">-p</span> 80:80/tcp <span class="nt">-p</span> 443:443/tcp <span class="nt">-p</span> 67:67/udp <span class="nt">-p</span> 53:53/tcp <span class="nt">-p</span> 53:53/udp pihole/pihole
<span class="nv">ExecStop</span><span class="o">=</span>/usr/bin/podman stop <span class="nt">-t</span> 2 pihole
<span class="nv">ExecStopPost</span><span class="o">=</span>/usr/bin/podman <span class="nb">rm </span>pihole
<span class="o">[</span>Install]
<span class="nv">WantedBy</span><span class="o">=</span>multi-user.target
</code></pre></div></div>
<p>After creating the service unit file and before starting the service, configure SELinux to allow <code class="language-plaintext highlighter-rouge">systemd</code> to load containers.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>setsebool -P container_manage_cgroup on
</code></pre></div></div>
<p>Enable and start the new service and reboot to verify it runs at boot.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo systemctl enable pi-hole.service
sudo systemctl start pi-hole.service
</code></pre></div></div>
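<p>A quick status check, and an optional reboot if you want to confirm the container comes up on its own:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo systemctl status pi-hole.service
sudo reboot
</code></pre></div></div>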
<p>Access your Pi-Hole at <code class="language-plaintext highlighter-rouge">http://pi_hole_ip/admin</code>, log in and see the magic happen :)</p>
<p><a href="https://jreypo.io/assets/images/pi-hole.png"><img src="/assets/images/pi-hole.png" alt="" /></a></p>
<p>Finally, I installed the <code class="language-plaintext highlighter-rouge">cockpit-podman</code> package to keep track of this and future Podman containers from <a href="https://cockpit-project.org/">Cockpit</a>.</p>
<p><a href="https://jreypo.io/assets/images/cockpit-podman.png"><img src="/assets/images/cockpit-podman.png" alt="" /></a></p>
<p>Please leave any comments about your usage of Pi-Hole, your experience with Podman, etc.</p>
<p>–Juanma</p>Juan Manuel ReyA first look into Azure VMware Solution2021-02-19T14:00:00+00:002021-02-19T14:00:00+00:00https://jreypo.io/2021/02/19/a-first-look-into-azure-vmware-solution<p>As I said in my previous post, I moved last year to Microsoft Azure engineering in the Azure VMware Solution product group, so it makes total sense that my first post in the new era of the blog is about AVS. Let’s begin!</p>
<h1 id="what-is-azure-vmware-solution">What is Azure VMware Solution?</h1>
<p>Well, that’s easy. Azure VMware Solution, or AVS, is a first-party Azure service that allows customers to run native VMware workloads on Azure. The important part is <strong>first-party</strong>, because <strong>AVS is Azure</strong>; it is not a third-party or partner-delivered service. Microsoft operates and supports the service, which has been built in collaboration with VMware. It provides the customer with a vSphere-based Private Cloud, built on dedicated hardware in an Azure region.</p>
<p>All provisioned private clouds have vCenter Server, vSAN, vSphere, and NSX-T. You can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. VMware HCX Advanced is also provided in the AVS software stack to enable the workload migration scenarios and limited disaster recovery.</p>
<p>Today the service is available in East US, West US, North Central US, Canada Central, UK South, West Europe, Japan East and Australia East regions, with more Azure regions coming in the near future.</p>
<h2 id="avs-service-components">AvS Service Components</h2>
<p>AVS comes with the following VMware products bundled and licensed; there is no need for the customer to buy additional licenses separately from VMware.</p>
<ul>
<li>vSphere 6.7 Update 3 Enterprise Plus</li>
<li>VSAN 6.7 Enterprise</li>
<li>NSX-T 2.5.2</li>
<li>HCX R139 Advanced</li>
</ul>
<p>On the hardware side AV36 is the only SKU available today to deploy your AVS Private Cloud.</p>
<ul>
<li><strong>CPU</strong> - Intel Xeon Gold 6140 2.3 GHz</li>
<li><strong>Memory</strong> - 576 GB</li>
<li><strong>Storage vSAN Caching</strong> - 2 × 1.6 TB NVMe</li>
<li><strong>Storage vSAN Capacity</strong> - 8 × 1.92 TB SSD</li>
<li><strong>Network</strong> - 2 Dual Port 25 GbE</li>
</ul>
<h2 id="avs-architecture">AVS Architecture</h2>
<p>As seen in the previous section AVS is built on top of VMware Cloud Foundation, deployed on dedicated, bare-metal Azure hosts. The architecture of the service is more or less like this one.</p>
<p><a href="https://jreypo.io/assets/images//avs_architecture.png"><img src="/assets/images/avs_architecture.png" alt="" /></a></p>
<p>To enable the connectivity between AVS workloads and the main Azure fabric <a href="https://docs.microsoft.com/en-us/azure/expressroute/expressroute-global-reach">ExpressRoute Global Reach</a> is used. ExpressRoute is a dedicated line that enables customers to connect their on-premises environment to Azure, and Global Reach is an ExpressRoute add-on that allows linking ExpressRoute circuits together to make a private network between customer on-premises networks; in this case it is used to link the AVS ExpressRoute circuit with an existing customer circuit. Since transitive routing between circuits is not enabled on Azure ExpressRoute Gateways, the usage of Global Reach is mandatory in order to interconnect an on-premises vSphere environment and AVS.</p>
<p>ExpressRoute Global Reach is needed as well for VMware HCX since it is not supported over an Azure Site-to-Site VPN connection.</p>
<p><a href="https://docs.microsoft.com/en-gb/azure/virtual-wan/">Azure Virtual WAN</a> acts as a communications hub between on-premises and Azure IaaS and PaaS services, for AVS running workloads Azure VWAN will provide Public IP capabilities through its integrated Azure Firewall.</p>
<h1 id="getting-started-with-the-service">Getting started with the service</h1>
<p>Getting started with the AVS service requires your Azure subscription to be whitelisted and quota to be assigned; follow the instructions in the <a href="https://docs.microsoft.com/en-us/azure/azure-vmware/enable-azure-vmware-solution">AVS documentation</a>. With the quota assigned, register the Azure VMware Solution resource provider using the Azure CLI.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az provider register -n Microsoft.AVS --subscription <your subscription ID>
</code></pre></div></div>
<p>Check the registration state until it appears as <code class="language-plaintext highlighter-rouge">Registered</code>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az provider show -n Microsoft.AVS
Namespace RegistrationPolicy RegistrationState
------------- -------------------- -------------------
Microsoft.AVS RegistrationRequired Registered
</code></pre></div></div>
<p>Once the provider is registered go to the <a href="https://portal.azure.com">Azure Portal</a>; on the main screen click on <strong>Create a Resource</strong> and search for Azure VMware Solution to deploy a new Private Cloud.</p>
<p><a href="https://jreypo.io/assets/images/avs-create-screen.png"><img src="/assets/images/avs-create-screen.png" alt="" /></a></p>
<p>In the above screen the following parameters are needed.</p>
<ul>
<li><strong>Resource Group</strong> – The Azure resource group that will contain the AVS Private Cloud resource.</li>
<li><strong>Location</strong> – Azure region to deploy the new Private Cloud.</li>
<li><strong>Resource Name</strong> – Name for the Private Cloud.</li>
<li><strong>SKU</strong> – Node type to deploy, currently there is only one <strong>AV36</strong>.</li>
<li><strong>ESXi Hosts</strong> – Number of hosts to deploy; the minimum is 3 and it can later be scaled out to a maximum of 16 hosts per cluster, with a maximum of 4 clusters.</li>
<li><strong>vCenter admin Password</strong> – Password for <code class="language-plaintext highlighter-rouge">cloudadmin@vsphere.local</code> user to access vCenter Server. Full admin rights are not granted for vCenter.</li>
<li><strong>NSX-T Manager Password</strong> – Password for NSX-T admin user.</li>
<li><strong>Address Block</strong> – CIDR block used when deploying management components; it requires a /22 segment and only <a href="https://tools.ietf.org/html/rfc1918">RFC 1918</a> private address spaces are permitted. See the table below using 172.16.0.0/22 as the CIDR.</li>
<li><strong>Virtual Network</strong> – Select an existing VNET or create a new one, this VNET will be later used to deploy an ExpressRoute Gateway and a jumpbox to be able to access the Private Cloud management components.</li>
</ul>
<table>
<thead>
<tr>
<th>Network usage</th>
<th>Subnet</th>
</tr>
</thead>
<tbody>
<tr>
<td>Private cloud management</td>
<td><code class="language-plaintext highlighter-rouge">172.16.0.0/26</code></td>
</tr>
<tr>
<td>HCX Mgmt Migrations</td>
<td><code class="language-plaintext highlighter-rouge">172.16.0.64/26</code></td>
</tr>
<tr>
<td>Global Reach Reserved</td>
<td><code class="language-plaintext highlighter-rouge">172.16.0.128/26</code></td>
</tr>
<tr>
<td>ExpressRoute Reserved</td>
<td><code class="language-plaintext highlighter-rouge">172.16.0.192/27</code></td>
</tr>
<tr>
<td>ExpressRoute peering</td>
<td><code class="language-plaintext highlighter-rouge">172.16.1.0/25</code></td>
</tr>
<tr>
<td>vMotion Network</td>
<td><code class="language-plaintext highlighter-rouge">172.16.1.128/25</code></td>
</tr>
<tr>
<td>Replication Network</td>
<td><code class="language-plaintext highlighter-rouge">172.16.2.0/25</code></td>
</tr>
<tr>
<td>vSAN</td>
<td><code class="language-plaintext highlighter-rouge">172.16.2.128/25</code></td>
</tr>
<tr>
<td>HCX Uplink</td>
<td><code class="language-plaintext highlighter-rouge">172.16.3.0/26</code></td>
</tr>
<tr>
<td>Reserved</td>
<td><code class="language-plaintext highlighter-rouge">172.16.3.64/26</code></td>
</tr>
<tr>
<td>Reserved</td>
<td><code class="language-plaintext highlighter-rouge">172.16.3.128/26</code></td>
</tr>
<tr>
<td>Reserved</td>
<td><code class="language-plaintext highlighter-rouge">172.16.3.192/26</code></td>
</tr>
</tbody>
</table>
<p>After filling in all the required parameters, launch the private cloud creation. The deployment will take a couple of hours to complete.</p>
<p>Once the deployment is complete the quickest way to access your private cloud is through a jumpbox: deploy a Windows virtual machine connected to a subnet of the VNet we created, or selected, during the AVS private cloud deployment. RDP into the jumpbox or, even better, use <a href="https://azure.microsoft.com/en-us/services/azure-bastion/">Azure Bastion</a> to avoid exposing the VM publicly.</p>
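<p>Any method of deploying the jumpbox works; as a rough sketch with the Azure CLI it could look like the example below, where the resource group, VM name, VNet, subnet and credentials are placeholders to replace with your own values.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az vm create --resource-group avs-jumpbox-rg \
  --name avs-jumpbox \
  --image Win2019Datacenter \
  --vnet-name avs-vnet \
  --subnet jumpbox-subnet \
  --admin-username azureuser \
  --admin-password 'use-a-strong-password'
</code></pre></div></div>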
<p>Grab the vCenter Server and NSX-T Manager URLs and credentials from the Azure portal in the <strong>Identity</strong> blade of the AVS private cloud.</p>
<p><a href="https://jreypo.io/assets/images/avs-identity.png"><img src="/assets/images/avs-identity.png" alt="" /></a></p>
<p>From the desktop of the jumpbox access vCenter Server and NSX-T Manager to verify that everything is up and running.</p>
<p><a href="https://jreypo.io/assets/images/avs-bastion.png"><img src="/assets/images/avs-bastion.png" alt="" /></a></p>
<p>At this point the next steps will be to configure NSX-T DHCP and network segments and deploy a virtual machine; I encourage you to review the <a href="https://docs.microsoft.com/en-us/azure/azure-vmware/">Azure VMware Solution documentation</a> for the details. I will go deeper into NSX-T and the general network architecture in AVS in a future post.</p>
<p>Thanks for staying to the end and again please stay safe out there.</p>
<p>–Juanma</p>Juan Manuel ReyA new chapter in my life, a new era for the blog2021-01-07T23:10:00+00:002021-01-07T23:10:00+00:00https://jreypo.io/2021/01/07/a-new-chapter-in-my-life-a-new-era-for-the-blog<p>It has been almost 13 months since my last publication on the blog; you could say it has been a lost year for my small corner of the Internet, and that would be true. However, it has not been a lost year for me personally, not at all. Many things happened during all these months, the most obvious of course being the COVID-19 world health crisis. I am grateful because my family and I are doing fine, and I sincerely hope that you and your families are all healthy and safe.</p>
<p>On the professional side 2020 has been a tipping point for me. I started the year with some incredible news about being selected for an internal program at Microsoft for architects in the field that would allow me to join an Azure engineering team for a full semester. Yes, working in an Azure product group for six months! The service was none other than <a href="https://azure.microsoft.com/en-us/services/azure-vmware/">Azure VMware Solution</a>; imagine, I was given the opportunity to work with two of my favorite technologies at the same time. But those were not the best news, because the new iteration of the AVS service was being planned and developed at that time. I joined the team at the perfect time!</p>
<p>My tenure in the program kicked off in January. I was able to spend two full weeks in Redmond between January and February working face to face with the team and learning everything planned and in development for the whole year, and the rest of the time I worked remotely from Madrid. I started to work on different parts of the service, learning a lot from my new colleagues and actively participating in the first limited private preview. Then COVID happened and we all went fully remote, as we still are. But of course that did not stop us; the team doubled the effort and we got the service to public preview, <a href="https://azure.microsoft.com/en-us/blog/microsoft-announces-next-evolution-of-azure-vmware-solution/">announcement included</a>, just before <strong>Microsoft Build</strong>.</p>
<p>It was an amazing experience for me, learning from all the great engineers and PMs in the team and of course learning all the Azure internals; I was like a little kid every time I learned something new about our internal architecture, the networking details, etc. In June I got even better news: the program was extended for a few more months to give us the opportunity to finish the work we were doing with our engineering peers before going back to our original field teams. And finally, at the end of September during <a href="https://myignite.microsoft.com/home">Microsoft Ignite</a>, the <a href="https://azure.microsoft.com/en-us/blog/the-new-azure-vmware-solution-is-now-generally-available/">service went into GA</a>! I could not be happier; in nine months I had been able to witness and be part of the GA process for an Azure service. And then it happened, I was offered a permanent position in the Azure VMware Solution team… <strong>YES!</strong></p>
<p><a href="https://jreypo.io/assets/images/oh-yeah.jpg"><img src="/assets/images/oh-yeah.jpg" alt="" /></a></p>
<p>The two managers in the team and our director gave me the incredible opportunity of joining their team, the same team I had been working with for nine months. In November I officially joined as Senior Program Manager. If you are asking yourself if I will move to Redmond when all this COVID shit is over, the answer is no; I will stay in Madrid, which is perfect for me since I cannot move to the US for personal reasons, at least for now.</p>
<p>So reflecting back, 2020 has been a great year for me in many ways. I have been able to achieve a goal I set many years ago when I was working at VMware: to join an engineering team and stay in Spain at the same time. I cannot express how grateful I am for this, especially to Eduardo, Ram and Brett who decided to trust me, and to my former manager Manuel who supported me in this journey.</p>
<p>Of course as you can imagine my involvement with the AVS team in 2020 was the main reason for not updating my blog; I was focused on working on the service and of course everything I was working on was confidential. Along this past year I have been thinking a lot about what to do with the blog. My first thought was to follow the trend, migrate it to <a href="https://gohugo.io/">Hugo</a> and move it out of GitHub Pages, but to be honest I have no issues with Jekyll and GitHub Pages provides me with all I need for now; maybe in the future I will consider moving it to Azure Blob Storage, but we will see.</p>
<p>What the blog really needed was a facelift. I was still running an old version of the <a href="https://mmistakes.github.io/minimal-mistakes/">Minimal Mistakes</a> theme with some custom modifications and CSS style sheets. I considered using a new Jekyll theme but in the end I decided to update the theme to its latest version. Upgrading it was relatively easy, although it was a manual process since I was running a very old version. I did not have to implement any extra configuration since the current version provides everything I need. It took me a couple of days to migrate and review all the articles, fix <a href="https://disqus.com/">DISQUS</a> comments and fix some old typos in a few articles. I will continue to review it in the coming weeks since I want to clear up the mess I have in the content tags and post categories.</p>
<p>Regarding the content, my intention is to continue writing about my work since that has been the main theme of the blog since I started it in 2009, and I will continue to write about Kubernetes because it is still part of my professional life. VMware content of course will come back to some extent, and I want to expand the content to other areas I am interested in like distributed systems, software development, blockchain, Linux, my personal 3D printing and maker projects at home and many more.</p>
<p>Again, please stay safe, take care of yourselves and your families and let 2021 begin; I am sure it will be fantastic!</p>
<p>–Juanma</p>Juan Manuel ReyDeploying a Kubernetes cluster in Azure using kubeadm2019-12-12T23:15:00+00:002019-12-12T23:15:00+00:00https://jreypo.io/2019/12/12/deploying-a-kubernetes-cluster-in-azure-using-kubeadm<p>The easiest way to have a <a href="https://kubernetes.io/">Kubernetes</a> cluster up and running in Azure in a short amount of time is by using the <a href="https://azure.microsoft.com/es-es/services/kubernetes-service/">AKS service</a>; if you want more granular control of your cluster or a more customized cluster you can always use <a href="https://github.com/azure/aks-engine">AKS-Engine</a>.</p>
<p>However this time I wanted to take a different approach and use a more widely used tool, very popular amongst the Kubernetes community: <code class="language-plaintext highlighter-rouge">kubeadm</code>. I like <code class="language-plaintext highlighter-rouge">kubeadm</code> as a fantastic way to learn the internals of Kubernetes, and I also used the content of this post as part of the preparation for the <a href="https://www.cncf.io/certification/cka/">CKA certification</a> exam, which I am planning to take in December or January.</p>
<h1 id="create-azure-infrastructure">Create Azure infrastructure</h1>
<p>The first thing we must do is create the necessary infrastructure in our subscription; this includes the instances and the network.</p>
<p>Create the resource group.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az group create --name k8s-lab-rg3 --location westeurope
</code></pre></div></div>
<p>Create a VNet; during its creation we will also declare the subnet for our cluster.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az network vnet create --name k8s-lab-vnet --resource-group k8s-lab-rg3 --location westeurope --address-prefixes 172.10.0.0/16 --subnet-name k8s-lab-net1 --subnet-prefixes 172.10.1.0/24
</code></pre></div></div>
<p>Create the instances, one master and three nodes. For my lab I am using the latest <code class="language-plaintext highlighter-rouge">UbuntuLTS</code> image and <code class="language-plaintext highlighter-rouge">Standard_DS2_v2</code> for the size of the instances.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="nv">RG</span><span class="o">=</span>k8s-lab-rg3
<span class="nv">LOCATION</span><span class="o">=</span>westeurope
<span class="nv">SUBNET</span><span class="o">=</span><span class="si">$(</span>az network vnet show <span class="nt">--name</span> k8s-lab-vnet <span class="nt">-g</span> <span class="nv">$RG</span> <span class="nt">--query</span> subnets[0].id <span class="nt">-o</span> tsv<span class="si">)</span>
<span class="c"># Master instance</span>
<span class="nb">echo</span> <span class="s2">"Creating Kubernetes Master"</span>
az vm create <span class="nt">--name</span> kube-master <span class="se">\</span>
<span class="nt">--resource-group</span> <span class="nv">$RG</span> <span class="se">\</span>
<span class="nt">--location</span> <span class="nv">$LOCATION</span> <span class="se">\</span>
<span class="nt">--image</span> UbuntuLTS <span class="se">\</span>
<span class="nt">--admin-user</span> azureuser <span class="se">\</span>
<span class="nt">--ssh-key-values</span> ~/.ssh/id_rsa.pub <span class="se">\</span>
<span class="nt">--size</span> Standard_DS2_v2 <span class="se">\</span>
<span class="nt">--data-disk-sizes-gb</span> 10 <span class="se">\</span>
<span class="nt">--subnet</span> <span class="nv">$SUBNET</span> <span class="se">\</span>
<span class="nt">--public-ip-address-dns-name</span> kube-master-lab
<span class="c"># Nodes intances</span>
az vm availability-set create <span class="nt">--name</span> kubeadm-nodes-as <span class="nt">--resource-group</span> <span class="nv">$RG</span>
<span class="k">for </span>i <span class="k">in </span>0 1 2<span class="p">;</span> <span class="k">do
</span><span class="nb">echo</span> <span class="s2">"Creating Kubernetes Node </span><span class="k">${</span><span class="nv">i</span><span class="k">}</span><span class="s2">"</span>
az vm create <span class="nt">--name</span> kube-node-<span class="k">${</span><span class="nv">i</span><span class="k">}</span> <span class="se">\</span>
<span class="nt">--resource-group</span> <span class="nv">$RG</span> <span class="se">\</span>
<span class="nt">--location</span> <span class="nv">$LOCATION</span> <span class="se">\</span>
<span class="nt">--availability-set</span> kubeadm-nodes-as <span class="se">\</span>
<span class="nt">--image</span> UbuntuLTS <span class="se">\</span>
<span class="nt">--admin-user</span> azureuser <span class="se">\</span>
<span class="nt">--ssh-key-values</span> ~/.ssh/id_rsa.pub <span class="se">\</span>
<span class="nt">--size</span> Standard_DS2_v2 <span class="se">\</span>
<span class="nt">--data-disk-sizes-gb</span> 10 <span class="se">\</span>
<span class="nt">--subnet</span> <span class="nv">$SUBNET</span> <span class="se">\</span>
<span class="nt">--public-ip-address-dns-name</span> kube-node-lab-<span class="k">${</span><span class="nv">i</span><span class="k">}</span>
<span class="k">done
</span>az vm list <span class="nt">--resource-group</span> <span class="nv">$RG</span> <span class="nt">-d</span>
</code></pre></div></div>
<h1 id="prepare-the-cluster-master-and-node-instances">Prepare the cluster master and node instances</h1>
<p>With the instances up and running we need to install the software we will use to create our Kubernetes cluster.</p>
<p>Access the master and install <code class="language-plaintext highlighter-rouge">docker</code>; remember to install a Docker release validated for Kubernetes. In my case I will use Kubernetes 1.16 and Docker 18.09.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
azureuser@kube-master-lab:~$ sudo apt-key fingerprint 0EBFCD88
pub rsa4096 2017-02-22 [SCEA]
9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid [ unknown] Docker Release (CE deb) <docker@docker.com>
sub rsa4096 2017-02-22 [S]
azureuser@kube-master-lab:~$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
azureuser@kube-master-lab:~$ sudo apt-get update
...
azureuser@kube-master-lab:~$ sudo apt-get install -y docker-ce=5:18.09.9~3-0~ubuntu-bionic docker-ce-cli containerd.io
</code></pre></div></div>
<p>Configure Docker daemon for Kubernetes.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
azureuser@kube-master-lab:~$ sudo mkdir -p /etc/systemd/system/docker.service.d
azureuser@kube-master-lab:~$ sudo systemctl daemon-reload
azureuser@kube-master-lab:~$ sudo systemctl restart docker
azureuser@kube-master-lab:~$
</code></pre></div></div>
<p>Configure the Kubernetes <code class="language-plaintext highlighter-rouge">apt</code> repo and install <code class="language-plaintext highlighter-rouge">kubeadm</code>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ sudo apt-get install -y apt-transport-https
...
azureuser@kube-master-lab:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
azureuser@kube-master-lab:~$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
azureuser@kube-master-lab:~$ sudo apt-get update
...
azureuser@kube-master-lab:~$ sudo apt-get install -y kubelet kubeadm kubectl
...
azureuser@kube-master-lab:~$ sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
azureuser@kube-master-lab:~$
</code></pre></div></div>
<p>Repeat the same process for each of the nodes either manually or by using the below script, which can also be found as a <a href="https://gist.github.com/jreypo/8264157231a649fe4d65762917d6a27f">Gist on my GitHub</a>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="nb">echo</span> <span class="s2">"Installing Docker..."</span>
curl <span class="nt">-fsSL</span> https://download.docker.com/linux/ubuntu/gpg | <span class="nb">sudo </span>apt-key add -
<span class="nb">sudo </span>apt-key fingerprint 0EBFCD88
<span class="nb">sudo </span>add-apt-repository <span class="s2">"deb [arch=amd64] https://download.docker.com/linux/ubuntu </span><span class="si">$(</span>lsb_release <span class="nt">-cs</span><span class="si">)</span><span class="s2"> stable"</span>
<span class="nb">sudo </span>apt-get update <span class="o">&&</span> <span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> docker-ce<span class="o">=</span>5:18.09.9~3-0~ubuntu-bionic docker-ce-cli containerd.io
<span class="nb">echo</span> <span class="s2">"Configuring Docker..."</span>
<span class="nb">sudo cat</span> <span class="o">></span> /etc/docker/daemon.json <span class="o"><<</span><span class="no">EOF</span><span class="sh">
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
</span><span class="no">EOF
</span><span class="nb">sudo mkdir</span> <span class="nt">-p</span> /etc/systemd/system/docker.service.d
<span class="nb">sudo </span>systemctl daemon-reload
<span class="nb">sudo </span>systemctl restart docker
<span class="nb">echo</span> <span class="s2">"Installing Kubernetes components..."</span>
<span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> apt-transport-https
curl <span class="nt">-s</span> https://packages.cloud.google.com/apt/doc/apt-key.gpg | <span class="nb">sudo </span>apt-key add
<span class="nb">cat</span> <span class="o"><<</span><span class="no">EOF</span><span class="sh"> | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
</span><span class="no">EOF
</span><span class="nb">sudo </span>apt-get update <span class="o">&&</span> <span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> kubelet kubeadm kubectl
<span class="nb">sudo </span>apt-mark hold kubelet kubeadm kubectl
</code></pre></div></div>
<h1 id="create-the-cluster">Create the cluster</h1>
<h2 id="create-kubeadm-configuration">Create <code class="language-plaintext highlighter-rouge">kubeadm</code> configuration</h2>
<p>To bootstrap a cluster integrated with Azure, that is, using the Azure cloud provider, with <code class="language-plaintext highlighter-rouge">kubeadm</code> we will need a <code class="language-plaintext highlighter-rouge">kubeadm</code> configuration file. In this file we will specify the Controller Manager and API Server parameters, instructing <code class="language-plaintext highlighter-rouge">kubeadm</code> to configure them with the <code class="language-plaintext highlighter-rouge">--cloud-provider=azure</code> flag. For more information on Kubernetes cloud providers with <code class="language-plaintext highlighter-rouge">kubeadm</code> review the official <a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/?source=post_page-----357210e2eb50----------------------#kubeadm">Kubernetes documentation</a>.</p>
<p>Below is my <code class="language-plaintext highlighter-rouge">kubeadm.yaml</code> configuration file; you can use it and adjust the networking parameters to your preference.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kubeadm.k8s.io/v1beta2</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">InitConfiguration</span>
<span class="na">nodeRegistration</span><span class="pi">:</span>
<span class="na">kubeletExtraArgs</span><span class="pi">:</span>
<span class="na">cloud-provider</span><span class="pi">:</span> <span class="s2">"</span><span class="s">azure"</span>
<span class="na">cloud-config</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/etc/kubernetes/cloud.conf"</span>
<span class="nn">---</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kubeadm.k8s.io/v1beta2</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">ClusterConfiguration</span>
<span class="na">kubernetesVersion</span><span class="pi">:</span> <span class="s">v1.13.0</span>
<span class="na">apiServer</span><span class="pi">:</span>
<span class="na">extraArgs</span><span class="pi">:</span>
<span class="na">cloud-provider</span><span class="pi">:</span> <span class="s2">"</span><span class="s">azure"</span>
<span class="na">cloud-config</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/etc/kubernetes/cloud.conf"</span>
<span class="na">extraVolumes</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">cloud</span>
<span class="na">hostPath</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/etc/kubernetes/cloud.conf"</span>
<span class="na">mountPath</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/etc/kubernetes/cloud.conf"</span>
<span class="na">controllerManager</span><span class="pi">:</span>
<span class="na">extraArgs</span><span class="pi">:</span>
<span class="na">cloud-provider</span><span class="pi">:</span> <span class="s2">"</span><span class="s">azure"</span>
<span class="na">cloud-config</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/etc/kubernetes/cloud.conf"</span>
<span class="na">extraVolumes</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">cloud</span>
<span class="na">hostPath</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/etc/kubernetes/cloud.conf"</span>
<span class="na">mountPath</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/etc/kubernetes/cloud.conf"</span>
<span class="na">networking</span><span class="pi">:</span>
<span class="na">serviceSubnet</span><span class="pi">:</span> <span class="s2">"</span><span class="s">10.12.0.0/16"</span>
<span class="na">podSubnet</span><span class="pi">:</span> <span class="s2">"</span><span class="s">10.11.0.0/16"</span>
</code></pre></div></div>
<p>Next create the <code class="language-plaintext highlighter-rouge">/etc/kubernetes/cloud.conf</code> file; it will contain the configuration for the Azure Cloud Provider.</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nl">"cloud"</span><span class="p">:</span><span class="s2">"AzurePublicCloud"</span><span class="p">,</span><span class="w">
</span><span class="nl">"tenantId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"xxxx"</span><span class="p">,</span><span class="w">
</span><span class="nl">"subscriptionId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"xxxx"</span><span class="p">,</span><span class="w">
</span><span class="nl">"aadClientId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"xxxx"</span><span class="p">,</span><span class="w">
</span><span class="nl">"aadClientSecret"</span><span class="p">:</span><span class="w"> </span><span class="s2">"xxxx"</span><span class="p">,</span><span class="w">
</span><span class="nl">"resourceGroup"</span><span class="p">:</span><span class="w"> </span><span class="s2">"k8s-lab-rg4"</span><span class="p">,</span><span class="w">
</span><span class="nl">"location"</span><span class="p">:</span><span class="w"> </span><span class="s2">"westeurope"</span><span class="p">,</span><span class="w">
</span><span class="nl">"vmType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"standard"</span><span class="p">,</span><span class="w">
</span><span class="nl">"subnetName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"k8s-lab-net1"</span><span class="p">,</span><span class="w">
</span><span class="nl">"securityGroupName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"kube-masterNSG"</span><span class="p">,</span><span class="w">
</span><span class="nl">"vnetName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"k8s-lab-vnet"</span><span class="p">,</span><span class="w">
</span><span class="nl">"vnetResourceGroup"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
</span><span class="nl">"routeTableName"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
</span><span class="nl">"primaryAvailabilitySetName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"kubeadm-nodes-as"</span><span class="p">,</span><span class="w">
</span><span class="nl">"primaryScaleSetName"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudProviderBackoffMode"</span><span class="p">:</span><span class="w"> </span><span class="s2">"v2"</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudProviderBackoff"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudProviderBackoffRetries"</span><span class="p">:</span><span class="w"> </span><span class="mi">6</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudProviderBackoffDuration"</span><span class="p">:</span><span class="w"> </span><span class="mi">5</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudProviderRatelimit"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudProviderRateLimitQPS"</span><span class="p">:</span><span class="w"> </span><span class="mi">10</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudProviderRateLimitBucket"</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudProviderRatelimitQPSWrite"</span><span class="p">:</span><span class="w"> </span><span class="mi">10</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudProviderRatelimitBucketWrite"</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w">
</span><span class="nl">"useManagedIdentityExtension"</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="p">,</span><span class="w">
</span><span class="nl">"userAssignedIdentityID"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
</span><span class="nl">"useInstanceMetadata"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"loadBalancerSku"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Basic"</span><span class="p">,</span><span class="w">
</span><span class="nl">"disableOutboundSNAT"</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="p">,</span><span class="w">
</span><span class="nl">"excludeMasterFromStandardLB"</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="p">,</span><span class="w">
</span><span class="nl">"providerVaultName"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
</span><span class="nl">"maximumLoadBalancerRuleCount"</span><span class="p">:</span><span class="w"> </span><span class="mi">250</span><span class="p">,</span><span class="w">
</span><span class="nl">"providerKeyName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"k8s"</span><span class="p">,</span><span class="w">
</span><span class="nl">"providerKeyVersion"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
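<p>The tenant and subscription IDs can be obtained quickly with Azure CLI if you do not have them at hand, for example with the two queries below; the service principal values come from your own Azure AD app registration.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az account show --query tenantId -o tsv
xxxx
$ az account show --query id -o tsv
xxxx
</code></pre></div></div>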
<h2 id="bootstrap-the-master-node">Bootstrap the master node</h2>
<p>Initialize the master, or control plane node, passing <code class="language-plaintext highlighter-rouge">kubeadm.yaml</code> as the configuration parameter. Make sure that the instance name in Azure is the same as the hostname or <code class="language-plaintext highlighter-rouge">kubeadm</code> will fail to initialize the <code class="language-plaintext highlighter-rouge">kubelet</code>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ sudo kubeadm init --config kubeadm.yml
[init] Using Kubernetes version: v1.16.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
...
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.10.1.4:6443 --token 81l08m.0g09hbdekfxczgs0 \
--discovery-token-ca-cert-hash sha256:a868e59818db186a2cb03a32c2478d7abafbf4ceae471532e1152fb4949298fd
azureuser@kube-master-lab:~$
</code></pre></div></div>
<p>As the output suggests create a <code class="language-plaintext highlighter-rouge">kubeconfig</code> file to start using the cluster.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ mkdir -p $HOME/.kube
azureuser@kube-master-lab:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
azureuser@kube-master-lab:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
azureuser@kube-master-lab:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-master NotReady master 12m v1.16.3 172.10.1.4 51.144.178.87 Ubuntu 18.04.3 LTS 5.0.0-1025-azure docker://18.9.9
azureuser@kube-master:~$
</code></pre></div></div>
<h2 id="install-a-networking-add-on">Install a networking add-on</h2>
<p>Next is to install a networking add-on in the master. In my example I am going to use <a href="https://www.projectcalico.org/">Calico</a> but you can choose another if you want. First retrieve the <code class="language-plaintext highlighter-rouge">calico.yaml</code> manifest from <code class="language-plaintext highlighter-rouge">https://docs.projectcalico.org/v3.8/manifests/calico.yaml</code>, edit it and replace the <code class="language-plaintext highlighter-rouge">CALICO_IPV4POOL_CIDR</code> value of <code class="language-plaintext highlighter-rouge">192.168.0.0/16</code> with the one from the <code class="language-plaintext highlighter-rouge">podSubnet</code> property defined in our <code class="language-plaintext highlighter-rouge">kubeadm.yaml</code> file, in my case <code class="language-plaintext highlighter-rouge">10.11.0.0/16</code>.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">CALICO_IPV4POOL_CIDR</span>
<span class="na">value</span><span class="pi">:</span> <span class="s2">"</span><span class="s">10.11.0.0/16"</span>
</code></pre></div></div>
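<p>As a reference, this is roughly how the manifest can be fetched and patched from the master before applying it; adjust the CIDR if your <code class="language-plaintext highlighter-rouge">podSubnet</code> differs.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ curl -O https://docs.projectcalico.org/v3.8/manifests/calico.yaml
azureuser@kube-master-lab:~$ sed -i 's|192.168.0.0/16|10.11.0.0/16|' calico.yaml
</code></pre></div></div>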
<p>Then apply the manifest with <code class="language-plaintext highlighter-rouge">kubectl</code>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
azureuser@kube-master-lab:~$
azureuser@kube-master-lab:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-55754f75c-b42wk 1/1 Running 0 108s
calico-node-gmv55 1/1 Running 0 109s
coredns-5644d7b6d9-2l49m 1/1 Running 0 9m12s
coredns-5644d7b6d9-hhqq2 1/1 Running 0 9m12s
etcd-kube-master 1/1 Running 0 8m12s
kube-apiserver-kube-master 1/1 Running 0 8m16s
kube-controller-manager-kube-master 1/1 Running 0 8m11s
kube-proxy-r4pfn 1/1 Running 0 9m12s
kube-scheduler-kube-master 1/1 Running 0 8m23s
azureuser@kube-master-lab:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-master Ready master 11m v1.16.3 172.10.1.4 51.144.178.87 Ubuntu 18.04.3 LTS 5.0.0-1025-azure docker://18.9.9
azureuser@kube-master-lab:~$
</code></pre></div></div>
<h2 id="bootstrap-the-nodes">Bootstrap the nodes</h2>
<p>SSH into the first node and execute the <code class="language-plaintext highlighter-rouge">kubeadm join</code> command from the master <code class="language-plaintext highlighter-rouge">kubeadm init</code> output. If you did not take note of the command, or more than 24 hours have passed, do not worry since we can easily reconstruct it with the following commands.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ kubeadm token create
bvbjmy.z2m3y0mu9gtvar3s
azureuser@kube-master-lab:~$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
bvbjmy.z2m3y0mu9gtvar3s 23h 2019-12-12T10:15:21Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
azureuser@kube-master-lab:~$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
> openssl dgst -sha256 -hex | sed 's/^.* //'
b5b9429546c8cdf4accf006250558551240a56371528f8bff1a85e401fea4be2
azureuser@kube-master-lab:~$
</code></pre></div></div>
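<p>Alternatively, <code class="language-plaintext highlighter-rouge">kubeadm</code> can print a ready-to-use join command in a single step, which saves reconstructing the token and hash by hand; the token and hash below are placeholders.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ sudo kubeadm token create --print-join-command
kubeadm join 172.10.1.4:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
</code></pre></div></div>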
<p>Now log back into your first node and execute the <code class="language-plaintext highlighter-rouge">kubeadm join</code> command using the token as the argument for the <code class="language-plaintext highlighter-rouge">--token</code> option and the hash with the format <code class="language-plaintext highlighter-rouge">sha256:<your_hash></code> as the argument for the <code class="language-plaintext highlighter-rouge">--discovery-token-ca-cert-hash</code> option.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-node-0:~$ sudo kubeadm join 172.10.1.4:6443 --token bvbjmy.z2m3y0mu9gtvar3s --discovery-token-ca-cert-hash sha256:b5b9429546c8cdf4accf006250558551240a56371528f8bff1a85e401fea4be2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
azureuser@kube-node-0:~$
</code></pre></div></div>
<p>Repeat the process for each node. You can verify the status of the nodes from the master with <code class="language-plaintext highlighter-rouge">kubectl get nodes</code>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-master Ready master 8d v1.16.3 172.10.1.4 51.144.178.87 Ubuntu 18.04.3 LTS 5.0.0-1025-azure docker://18.9.9
kube-node-0 Ready <none> 8m7s v1.16.3 172.10.1.5 <none> Ubuntu 18.04.3 LTS 5.0.0-1025-azure docker://18.9.9
kube-node-1 Ready <none> 91s v1.16.3 172.10.1.6 <none> Ubuntu 18.04.3 LTS 5.0.0-1025-azure docker://18.9.9
kube-node-2 NotReady <none> 19s v1.16.3 172.10.1.7 <none> Ubuntu 18.04.3 LTS 5.0.0-1025-azure docker://18.9.9
azureuser@kube-master-lab:~$
</code></pre></div></div>
<p>With the nodes properly joined the cluster is ready to be used.</p>
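<p>As a quick smoke test, not part of the procedure itself, you can deploy something simple and expose it through a <code class="language-plaintext highlighter-rouge">LoadBalancer</code> service, which also exercises the Azure cloud provider integration; the deployment name and image are just examples.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azureuser@kube-master-lab:~$ kubectl create deployment nginx --image=nginx
azureuser@kube-master-lab:~$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
azureuser@kube-master-lab:~$ kubectl get service nginx -w
</code></pre></div></div>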
<p>Thanks for reading! Hope this whole procedure has been helpful and instructive. As always, if you have any questions or suggestions please leave them in the comments or reach out to me on Twitter.</p>
<p>–Juanma</p>Juan Manuel ReyThe easiest way to have a Kubernetes cluster up and running in Azure in a short amount of time is by using AKS service, also if you want a more granular control of your cluster or a more customized cluster you can always use AKS-Engine.Getting started with Azure Red Hat OpenShift2019-11-18T10:45:00+00:002019-11-18T10:45:00+00:00https://jreypo.io/2019/11/18/getting-started-with-azure-red-hat-openshift<p>Azure Red Hat OpenShift was <a href="https://azure.microsoft.com/en-us/blog/openshift-on-azure-the-easiest-fully-managed-openshift-in-the-cloud/">announced</a> during last year Red Hat Summit, since then the service has been first in private preview and then during this year Red Hat Summit the service <a href="https://azure.microsoft.com/en-us/blog/generally-available-azure-red-hat-openshift/">was declared GA</a>.</p>
<p>After the GA, however, not every Microsoft and Red Hat customer was able to freely try it in the classic pay-as-you-go model; instead there was a requirement to reserve four application nodes up-front for the first cluster. This requirement has finally been lifted during Microsoft Ignite and from now on you can deploy Azure Red Hat OpenShift, or ARO for short, in a PAYG fashion like it should be with any cloud service.</p>
<p>With this in mind I am going to describe the process to deploy your first ARO cluster.</p>
<h1 id="prerequisites">Prerequisites</h1>
<p>Before deploying a cluster we will need to meet several requirements in order to integrate the authentication with Azure Active Directory. All the prerequisite processes using the Azure Portal are very well described in the ARO documentation <a href="https://docs.microsoft.com/en-us/azure/openshift/howto-aad-app-configuration#create-an-azure-ad-app-registration">here</a>, so instead of repeating that part I will demonstrate the process using Azure CLI.</p>
<p>We will need to perform two distinct operations:</p>
<ul>
<li>Create an Azure AD security group</li>
<li>Create an Azure AD app registration</li>
</ul>
<h2 id="create-an-azure-ad-security-group">Create an Azure AD security group</h2>
<p>First, create the Azure AD security group.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az ad group create --display-name aro-group --mail-nickname aro-group -o json
{
"deletionTimestamp": null,
"description": null,
"dirSyncEnabled": null,
"displayName": "aro-group",
"lastDirSyncTime": null,
"mail": null,
"mailEnabled": false,
"mailNickname": "aro-group",
"objectId": "4f2aefef-a111-4f84-b980-99c989a4e0cc",
"objectType": "Group",
"odata.metadata": "https://graph.windows.net/b40a365a-9f29-4480-b1f9-28f9179421de/$metadata#directoryObjects/@Element",
"odata.type": "Microsoft.DirectoryServices.Group",
"onPremisesDomainName": null,
"onPremisesNetBiosName": null,
"onPremisesSamAccountName": null,
"onPremisesSecurityIdentifier": null,
"provisioningErrors": [],
"proxyAddresses": [],
"securityEnabled": true
}
</code></pre></div></div>
<p>With the group created take note of the <code class="language-plaintext highlighter-rouge">objectId</code> property, we will need it later to create the cluster. Next step is to add the accounts of the future ARO administrators to this group, in my case I will add a custom account I created in my tenant.</p>
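<p>If you do not have the object ID of the account at hand, it can be retrieved with an Azure CLI query; for example, something like the following for my <code class="language-plaintext highlighter-rouge">aroadmin</code> account.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az ad user list --query "[?userPrincipalName=='aroadmin@jreypo.onmicrosoft.com'].objectId" -o tsv
804fb062-a879-4e72-aaca-8be50e6dc49a
</code></pre></div></div>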
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az ad group member add --group aro-group --member-id 804fb062-a879-4e72-aaca-8be50e6dc49a
$ az ad group member list --group aro-group --query '[].{Name:displayName, NickName:mailNickname, Id:objectId}' -o json
[
{
"Id": "804fb062-a879-4e72-aaca-8be50e6dc49a",
"Name": "ARO Admin",
"NickName": "aroadmin"
}
]
</code></pre></div></div>
<h2 id="create-an-azure-ad-app">Create an Azure AD app</h2>
<p>To be able to integrate our cluster with Azure Active Directory we will need to create an Azure AD application.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ appId=$(az ad app create --display-name aro-aad --query appId -o tsv)
$ echo $appId
57887cd0-868b-4e0d-88a7-519a0ad590cf
</code></pre></div></div>
<h3 id="confgure-the-application">Confgure the application</h3>
<p>With our application created we need to configure it. First, set the owner, which in my case is the <code class="language-plaintext highlighter-rouge">aroadmin</code> user.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az ad app owner add --id $appId --owner-object-id 804fb062-a879-4e72-aaca-8be50e6dc49a
$ az ad app owner list --id $appId --query '[].{Name:displayName}' -o json
[
{
"Name": "ARO Admin"
}
]
</code></pre></div></div>
<p>Next step is to add the permissions to the specific Azure Active Directory Graph APIs. We need to add permissions for <code class="language-plaintext highlighter-rouge">Directory.Read.All</code> and <code class="language-plaintext highlighter-rouge">User.Read</code>. The first one is defined as an <em>Application</em> type and will allow the app to <em>Read directory data</em>. The second will be granted as <em>Delegated</em> and will enable the app to <em>Sign in and read user profile</em>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az ad app permission add --id $appId --api 00000002-0000-0000-c000-000000000000 --api-permissions 311a71cc-e848-46a1-bdf8-97ff7156d8e6=Scope 5778995a-e1bf-45b8-affa-663a9f3f4d04=Role
</code></pre></div></div>
<p>Let’s dig a bit into the previous command’s options before we continue.</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">00000002-0000-0000-c000-000000000000</code> represents Azure Active Directory Graph API</li>
<li><code class="language-plaintext highlighter-rouge">311a71cc-e848-46a1-bdf8-97ff7156d8e6</code> corresponds to <code class="language-plaintext highlighter-rouge">User.Read</code> API permission and the <code class="language-plaintext highlighter-rouge">Scope</code> option indicates this would be a delegated permission.</li>
<li><code class="language-plaintext highlighter-rouge">5778995a-e1bf-45b8-affa-663a9f3f4d04</code> corresponds to <code class="language-plaintext highlighter-rouge">Directory.Read.All</code> permission and the <code class="language-plaintext highlighter-rouge">Role</code> option tells Azure AD API this would be an application permission.</li>
</ul>
<p>To finish the API permissions we need to grant admin consent for this application. In order to grant the consent you need to be an Azure AD admin or request an admin to do it.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az ad app permission admin-consent --id $appId
$ az ad app permission list --id $appId -o json
[
{
"additionalProperties": null,
"expiryTime": "2020-05-15T16:17:56.6007636",
"resourceAccess": [
{
"additionalProperties": null,
"id": "311a71cc-e848-46a1-bdf8-97ff7156d8e6",
"type": "Scope"
},
{
"additionalProperties": null,
"id": "5778995a-e1bf-45b8-affa-663a9f3f4d04",
"type": "Role"
}
],
"resourceAppId": "00000002-0000-0000-c000-000000000000"
}
]
</code></pre></div></div>
<p>Generate a secure password by any method and add it as a secret to the application.</p>
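<p>Any generator will do; for instance, a random value can be produced from the shell with OpenSSL.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ openssl rand -base64 32
</code></pre></div></div>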
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az ad app update --id $appId --password <SUPER_SECRET_PASSWORD>
</code></pre></div></div>
<p>Finally update the application to disable <a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-implicit-grant-flow">implicit grant flow</a> using ID Tokens.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az ad app update --id $appId --set oauth2AllowIdTokenImplicitFlow=false
</code></pre></div></div>
<p>There is one more setting to configure, the Redirect URI, but we cannot set it until the cluster is created.</p>
<h1 id="creating-the-cluster">Creating the cluster</h1>
<p>Once all prerequisites are finished we can proceed with the cluster creation. Create a new resource group; this resource group will not contain the resources of the cluster, but I will explain this later.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az group create --name arorg --location westeurope
</code></pre></div></div>
<p>The parameters needed for the create operation are:</p>
<ul>
<li>Cluster name.</li>
<li>Resource group.</li>
  <li>Azure AD Application ID - The ID of the application we created in the previous section.</li>
<li>Azure AD Application secret - The application password we created.</li>
  <li>Azure AD Tenant ID - The Azure AD tenant ID the cluster will be integrated with.</li>
<li>Customer Admin Group ID - The ID of the Azure AD security group created before that includes all the admin accounts.</li>
  <li>Application Node count - By default it is 4, but in my case I am deploying just two application nodes.</li>
</ul>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az openshift create -n aro-cl1 -g arorg -c 2 --aad-client-app-id $apId --aad-client-app-secret <SUPER_SECRET_PASSWORD> --aad-tenant-id b40a365a-9f29-4480-b1f9-2gf9179421de --customer-admin-group-id 4f2aefef-a111-4f84-b980-99c989a4e0cc -l westeurope
$ az openshift list
Name Location ResourceGroup OpenShiftVersion ProvisioningState PublicHostname
------- ---------- --------------- ------------------ ------------------- ---------------------------------------------------
aro-cl1 westeurope arorg v3.11 Succeeded openshift.3a298ccfc966456481fe.westeurope.azmosa.io
</code></pre></div></div>
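<p>Take note of the <em>PublicHostname</em> value shown above, we will need it for the redirect URI. If you did not capture it from the table it should also be retrievable directly with a query similar to this one.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az openshift show -n aro-cl1 -g arorg --query publicHostname -o tsv
openshift.3a298ccfc966456481fe.westeurope.azmosa.io
</code></pre></div></div>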
<h2 id="update-the-application-redirect-uri">Update the application redirect URI</h2>
<p>Update the Azure AD app registration with the URL <code class="language-plaintext highlighter-rouge">https://<aro_public_hostname>/oauth2callback/Azure%20AD</code>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az ad app update --id $appId --reply-urls https://openshift.3a298ccfc966456481fe.westeurope.azmosa.io/oauth2callback/Azure%20AD
$ az ad app show --id $appId --query replyUrls -o json
[
"https://openshift.3a298ccfc966456481fe.westeurope.azmosa.io/oauth2callback/Azure%20AD"
]
</code></pre></div></div>
<h1 id="access-the-cluster">Access the cluster</h1>
<p>To access our newly deployed ARO cluster and start working on it we have two main methods.</p>
<ul>
<li>OpenShift Console</li>
<li>OpenShift CLI</li>
</ul>
<h2 id="openshift-console">OpenShift Console</h2>
<p>From your favorite browser open the URL <code class="language-plaintext highlighter-rouge">https://<aro_public_hostname></code>; you will be prompted to log in with your Azure AD user. Use one of the ARO administrator accounts for this first login.</p>
<p><a href="https://jreypo.io/assets/images/aro_console.png"><img src="/assets/images/aro_console.png" alt="" title="Azure Red Hat OpenShift Console" /></a></p>
<p>From here we can navigate to either <em>Cluster Console</em>, <em>Application Console</em> or <em>Service Catalog</em>, which is the main landing page. I will not go into detail on all those sections since they are perfectly detailed in the <a href="https://docs.openshift.com/">OpenShift documentation</a>.</p>
<h2 id="openshift-cli">OpenShift CLI</h2>
<p>The OpenShift command line is <code class="language-plaintext highlighter-rouge">oc</code>, which is basically <code class="language-plaintext highlighter-rouge">kubectl</code> with additional functionality for OpenShift. To get the <code class="language-plaintext highlighter-rouge">oc</code> tool access the <em>Command Line Tools</em> page; the link can be found in the upper right part of the <em>Service Catalog</em> area as shown in the following screen capture.</p>
<p><a href="https://jreypo.io/assets/images/command_line_tools.png"><img src="/assets/images/command_line_tools.png" alt="" /></a></p>
<p>To log into our ARO cluster with <code class="language-plaintext highlighter-rouge">oc</code> we will use a token. Get your token again from the <em>Service Catalog</em> area in the upper right corner by clicking on your user name and then on <em>Copy Login Command</em>.</p>
<p><a href="https://jreypo.io/assets/images/login_command.png"><img src="/assets/images/login_command.png" alt="" /></a></p>
<p>Paste the login command in your shell and execute it to log into the cluster.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ oc login https://openshift.3a298ccfc966456481fe.westeurope.azmosa.io --token=bK2dcoYJuWgEGcwOJwuKOtdGMQf_zuUxYBwb_fsu394
Logged into "https://openshift.3a298ccfc966456481fe.westeurope.azmosa.io:443" as "aroadmin@jreypo.onmicrosoft.com" using the token provided.
You have access to the following projects and can switch between them with 'oc project <projectname>':
* openshift
Using project "openshift".
$ oc get nodes
NAME STATUS ROLES AGE VERSION
compute-1573691909-000000 Ready compute 3d v1.11.0+d4cacc0
compute-1573691909-000001 Ready compute 3d v1.11.0+d4cacc0
infra-1573691909-000000 Ready infra 3d v1.11.0+d4cacc0
infra-1573691909-000001 Ready infra 3d v1.11.0+d4cacc0
infra-1573691909-000002 Ready infra 3d v1.11.0+d4cacc0
master-000000 Ready master 3d v1.11.0+d4cacc0
master-000001 Ready master 3d v1.11.0+d4cacc0
master-000002 Ready master 3d v1.11.0+d4cacc0
</code></pre></div></div>
<p>And that’s it, we can now start developing and deploying applications in our new shiny ARO cluster. In future posts I will show what kind of operations can be done in the cluster from the administrator perspective and will go deeper into ARO internals.</p>
<p>Comments are welcome. Take care!</p>
<p>–Juanma</p>Juan Manuel ReyAzure Red Hat OpenShift was announced during last year Red Hat Summit, since then the service has been first in private preview and then during this year Red Hat Summit the service was declared GA.AKS integration with Azure Container Registry2019-10-16T14:28:00+00:002019-10-16T14:28:00+00:00https://jreypo.io/2019/10/16/aks-integration-with-azure-container-registry<p>Since the beginning of the service AKS has been able to use Azure Container Registry, or ACR, to pull the container images used in a deployment initiated either by an engineer or by a CD pipeline. However very recently the team shipped a new tight integration between AKS and ACR.</p>
<p>You must have at least Azure CLI 2.0.73 installed on your laptop or use Azure Cloud Shell.</p>
<h2 id="integrate-acr-during-aks-cluster-creation">Integrate ACR during AKS cluster creation</h2>
<p>This new integration allows for an easy setup of the authentication mechanism during cluster creation and makes it easy to enable it for an existing cluster as well. During the creation operation it is as simple as using the <code class="language-plaintext highlighter-rouge">--attach-acr</code> option with the registry name as the parameter.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az aks create -n aks-cl1 -g k8s-demo-rg2 --dns-name-prefix aks-cl1 --admin-username azuser -l westeurope --attach-acr acr-demo-1
</code></pre></div></div>
<h2 id="integrate-acr-with-an-existing-aks-cluster">Integrate ACR with an existing AKS cluster</h2>
<p>A common scenario would be to have an already existing ACR and AKS cluster that you want to have integrated. To do so we must perform an <code class="language-plaintext highlighter-rouge">update</code> operation with the same <code class="language-plaintext highlighter-rouge">--attach-acr</code> option.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az aks update --name aks-demo2 --resource-group aksdemo2-rg --attach-acr acr-demo-1
AAD role propagation done[############################################] 100.0000%┌
$
</code></pre></div></div>
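<p>To quickly verify the integration you could, for example, import a public image into the registry and deploy it from the cluster; the registry, deployment and image names below are placeholders, replace them with your own.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az acr import --name <registry-name> --source docker.io/library/nginx:latest --image nginx:v1
$ kubectl create deployment acr-test --image=<registry-name>.azurecr.io/nginx:v1
$ kubectl get pods -l app=acr-test
</code></pre></div></div>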
<p>–Juanma</p>Juan Manuel ReySince the beginning of the service AKS has been able to use Azure Container Registry, or ACR, to pull the container images used in a deployment initiated either by an engineer or by a CD pipeline. However very recently the team shipped a new tight integration between AKS and ACR.Scaling operations with AKS-Engine2019-06-13T04:12:00+00:002019-06-13T04:12:00+00:00https://jreypo.io/2019/06/13/scaling-operations-with-aks-engine<p>After reviewing how to perform Kubernetes version upgrades with AKS-Engine in a <a href="/2019/02/07/kubernetes-version-upgrade-with-aks-engine/">previous post</a> the next logical step is to show how to scale our Kubernetes clusters with AKS-Engine. I will cover manual scaling of the cluster; of course you can always deploy and configure the <a href="https://github.com/kubernetes/autoscaler">Kubernetes Cluster Autoscaler</a>.</p>
<p>There are several scaling scenarios that can be achieved using AKS-Engine:</p>
<ul>
<li>Resize an existing nodepool.</li>
<li>Add a new nodepool.</li>
<li>Remove a nodepool.</li>
</ul>
<p>Keep in mind also that the scaling operations will require the API model file used originally to deploy the cluster.</p>
<h1 id="resize-an-existing-nodepool">Resize an existing nodepool</h1>
<p>To resize an existing nodepool the best way is to use the <code class="language-plaintext highlighter-rouge">aks-engine scale</code> command. The arguments are very similar to the ones used for the upgrade and include:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">--deployment-dir</code> The location of the output files creted during the generate operation.</li>
<li><code class="language-plaintext highlighter-rouge">--auth-method</code> The authentication method, can be <code class="language-plaintext highlighter-rouge">client_secret</code> or <code class="language-plaintext highlighter-rouge">client_certificate</code>, in our example we are using the first one.</li>
<li><code class="language-plaintext highlighter-rouge">--client-id</code> AAD Service Principal ID.</li>
<li><code class="language-plaintext highlighter-rouge">--client-secret</code> AAD Service Principal Secret.</li>
<li><code class="language-plaintext highlighter-rouge">--subscription-id</code> Azure Subscription ID.</li>
<li><code class="language-plaintext highlighter-rouge">--location</code> Location, the Azure region where the cluster is deployed on.</li>
<li><code class="language-plaintext highlighter-rouge">--master-FQDN</code> FQDN of the master.</li>
<li><code class="language-plaintext highlighter-rouge">--resource-group</code> Resource group.</li>
<li><code class="language-plaintext highlighter-rouge">--node-pool</code> Nodepool name.</li>
<li><code class="language-plaintext highlighter-rouge">--new-node-count</code> New number of nodes.</li>
</ul>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aks-engine scale --location westeurope --subscription-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx --resource-group k8s-lab-cl2 --node-pool agentpool1 --master-FQDN https://my-cluster.westeurope.cloudapp.azure.com --new-node-count 4 --auth-method client_secret --client-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx --client-secret xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --deployment-dir ./_output/k8s-lab-cl2/
INFO[0000] validating...
INFO[0014] Name suffix: 44862260
INFO[0014] Found no resources with type Microsoft.Network/routeTables in the template. source="scaling command line"
INFO[0014] Starting ARM Deployment (k8s-lab-cl2-1860437642). This will take some time...
INFO[0289] Finished ARM Deployment (k8s-lab-cl2-1860437642). Succeeded
</code></pre></div></div>
<p>After the command is completed successfully, check that a new node has been added to the cluster.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-agentpool1-44862260-vmss000003 Ready agent 60d v1.12.2
k8s-agentpool1-44862260-vmss000004 Ready agent 60d v1.12.2
k8s-agentpool1-44862260-vmss000005 Ready agent 60d v1.12.2
k8s-agentpool1-44862260-vmss000006 Ready agent 3m8s v1.12.2
k8s-master-44862260-0 Ready master 61d v1.12.2
</code></pre></div></div>
<h1 id="add-a-new-nodepool">Add a new nodepool</h1>
<p>To add a new <code class="language-plaintext highlighter-rouge">nodepool</code> to your cluster you will need to edit the <code class="language-plaintext highlighter-rouge">apimodel.json</code> file, in the <code class="language-plaintext highlighter-rouge">_output/<cluster-fqdn></code> directory, and add a <code class="language-plaintext highlighter-rouge">nodepool</code> entry in the <code class="language-plaintext highlighter-rouge">agentPoolProfiles </code> array. For example my current <code class="language-plaintext highlighter-rouge">agentPoolProfiles</code> looks like this one:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">"agentPoolProfiles"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"agentpool1"</span><span class="p">,</span><span class="w">
</span><span class="nl">"count"</span><span class="p">:</span><span class="w"> </span><span class="mi">4</span><span class="p">,</span><span class="w">
</span><span class="nl">"vmSize"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Standard_B2s"</span><span class="p">,</span><span class="w">
</span><span class="nl">"osType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Linux"</span><span class="p">,</span><span class="w">
</span><span class="nl">"availabilityProfile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"VirtualMachineScaleSets"</span><span class="p">,</span><span class="w">
</span><span class="nl">"storageProfile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ManagedDisks"</span><span class="p">,</span><span class="w">
</span><span class="nl">"distro"</span><span class="p">:</span><span class="w"> </span><span class="s2">"aks"</span><span class="p">,</span><span class="w">
</span><span class="nl">"kubernetesConfig"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"kubeletConfig"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"--address"</span><span class="p">:</span><span class="w"> </span><span class="s2">"0.0.0.0"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--allow-privileged"</span><span class="p">:</span><span class="w"> </span><span class="s2">"true"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--anonymous-auth"</span><span class="p">:</span><span class="w"> </span><span class="s2">"false"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--authorization-mode"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Webhook"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--azure-container-registry-config"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/etc/kubernetes/azure.json"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--cgroups-per-qos"</span><span class="p">:</span><span class="w"> </span><span class="s2">"true"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--client-ca-file"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/etc/kubernetes/certs/ca.crt"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--cloud-config"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/etc/kubernetes/azure.json"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--cloud-provider"</span><span class="p">:</span><span class="w"> </span><span class="s2">"azure"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--cluster-dns"</span><span class="p">:</span><span class="w"> </span><span class="s2">"10.0.0.10"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--cluster-domain"</span><span class="p">:</span><span class="w"> </span><span class="s2">"cluster.local"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--enforce-node-allocatable"</span><span class="p">:</span><span class="w"> </span><span class="s2">"pods"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--event-qps"</span><span class="p">:</span><span class="w"> </span><span class="s2">"0"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--eviction-hard"</span><span class="p">:</span><span class="w"> </span><span class="s2">"memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--feature-gates"</span><span class="p">:</span><span class="w"> </span><span class="s2">"PodPriority=true"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--image-gc-high-threshold"</span><span class="p">:</span><span class="w"> </span><span class="s2">"85"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--image-gc-low-threshold"</span><span class="p">:</span><span class="w"> </span><span class="s2">"80"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--image-pull-progress-deadline"</span><span class="p">:</span><span class="w"> </span><span class="s2">"30m"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--keep-terminated-pod-volumes"</span><span class="p">:</span><span class="w"> </span><span class="s2">"false"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--kubeconfig"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/var/lib/kubelet/kubeconfig"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--max-pods"</span><span class="p">:</span><span class="w"> </span><span class="s2">"30"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--network-plugin"</span><span class="p">:</span><span class="w"> </span><span class="s2">"cni"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--node-status-update-frequency"</span><span class="p">:</span><span class="w"> </span><span class="s2">"10s"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--non-masquerade-cidr"</span><span class="p">:</span><span class="w"> </span><span class="s2">"0.0.0.0/0"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--pod-infra-container-image"</span><span class="p">:</span><span class="w"> </span><span class="s2">"k8s.gcr.io/pause-amd64:3.1"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--pod-manifest-path"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/etc/kubernetes/manifests"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--pod-max-pids"</span><span class="p">:</span><span class="w"> </span><span class="s2">"100"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"acceleratedNetworkingEnabled"</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="p">,</span><span class="w">
</span><span class="nl">"acceleratedNetworkingEnabledWindows"</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="p">,</span><span class="w">
</span><span class="nl">"fqdn"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
</span><span class="nl">"preProvisionExtension"</span><span class="p">:</span><span class="w"> </span><span class="kc">null</span><span class="p">,</span><span class="w">
</span><span class="nl">"extensions"</span><span class="p">:</span><span class="w"> </span><span class="p">[],</span><span class="w">
</span><span class="nl">"singlePlacementGroup"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">]</span><span class="err">,</span><span class="w">
</span></code></pre></div></div>
<p>I will need to copy the <code class="language-plaintext highlighter-rouge">agentpool1</code> entry and modify it accordingly.</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"agentpool2"</span><span class="p">,</span><span class="w">
</span><span class="nl">"count"</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w">
</span><span class="nl">"vmSize"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Standard_DS4_v2s"</span><span class="p">,</span><span class="w">
</span><span class="nl">"osType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Linux"</span><span class="p">,</span><span class="w">
</span><span class="nl">"availabilityProfile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"VirtualMachineScaleSets"</span><span class="p">,</span><span class="w">
</span><span class="nl">"storageProfile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ManagedDisks"</span><span class="p">,</span><span class="w">
</span><span class="nl">"distro"</span><span class="p">:</span><span class="w"> </span><span class="s2">"aks"</span><span class="p">,</span><span class="w">
</span><span class="nl">"kubernetesConfig"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"kubeletConfig"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"--address"</span><span class="p">:</span><span class="w"> </span><span class="s2">"0.0.0.0"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--allow-privileged"</span><span class="p">:</span><span class="w"> </span><span class="s2">"true"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--anonymous-auth"</span><span class="p">:</span><span class="w"> </span><span class="s2">"false"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--authorization-mode"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Webhook"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--azure-container-registry-config"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/etc/kubernetes/azure.json"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--cgroups-per-qos"</span><span class="p">:</span><span class="w"> </span><span class="s2">"true"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--client-ca-file"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/etc/kubernetes/certs/ca.crt"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--cloud-config"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/etc/kubernetes/azure.json"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--cloud-provider"</span><span class="p">:</span><span class="w"> </span><span class="s2">"azure"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--cluster-dns"</span><span class="p">:</span><span class="w"> </span><span class="s2">"10.0.0.10"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--cluster-domain"</span><span class="p">:</span><span class="w"> </span><span class="s2">"cluster.local"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--enforce-node-allocatable"</span><span class="p">:</span><span class="w"> </span><span class="s2">"pods"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--event-qps"</span><span class="p">:</span><span class="w"> </span><span class="s2">"0"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--eviction-hard"</span><span class="p">:</span><span class="w"> </span><span class="s2">"memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--feature-gates"</span><span class="p">:</span><span class="w"> </span><span class="s2">"PodPriority=true"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--image-gc-high-threshold"</span><span class="p">:</span><span class="w"> </span><span class="s2">"85"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--image-gc-low-threshold"</span><span class="p">:</span><span class="w"> </span><span class="s2">"80"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--image-pull-progress-deadline"</span><span class="p">:</span><span class="w"> </span><span class="s2">"30m"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--keep-terminated-pod-volumes"</span><span class="p">:</span><span class="w"> </span><span class="s2">"false"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--kubeconfig"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/var/lib/kubelet/kubeconfig"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--max-pods"</span><span class="p">:</span><span class="w"> </span><span class="s2">"110"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--network-plugin"</span><span class="p">:</span><span class="w"> </span><span class="s2">"cni"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--node-status-update-frequency"</span><span class="p">:</span><span class="w"> </span><span class="s2">"10s"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--non-masquerade-cidr"</span><span class="p">:</span><span class="w"> </span><span class="s2">"0.0.0.0/0"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--pod-infra-container-image"</span><span class="p">:</span><span class="w"> </span><span class="s2">"k8s.gcr.io/pause-amd64:3.1"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--pod-manifest-path"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/etc/kubernetes/manifests"</span><span class="p">,</span><span class="w">
</span><span class="nl">"--pod-max-pids"</span><span class="p">:</span><span class="w"> </span><span class="s2">"100"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span></code></pre></div></div>
<p>Once the <code class="language-plaintext highlighter-rouge">apimodel</code> file is modified run <code class="language-plaintext highlighter-rouge">aks-engine generate --api-model _output/<clustername>/apimodel.json</code>. This operation will update the original <code class="language-plaintext highlighter-rouge">azuredeploy.json</code> and <code class="language-plaintext highlighter-rouge">azuredeploy.parameters.json</code> files used during the ARM template deployment.</p>
<p>After the <code class="language-plaintext highlighter-rouge">aks-engine generate</code> operation is done, run <code class="language-plaintext highlighter-rouge">az group deployment create --template-file _output/<clustername>/azuredeploy.json --parameters _output/<clustername>/azuredeploy.parameters.json --resource-group <my-resource-group></code>.</p>
<h1 id="remove-a-nodepool">Remove a nodepool</h1>
<p>Removing a nodepool from an existing cluster is very similar to the adding operation: just edit the <code class="language-plaintext highlighter-rouge">_output/<clustername>/apimodel.json</code> file, remove the nodepool entry and then run the <code class="language-plaintext highlighter-rouge">aks-engine generate</code> and <code class="language-plaintext highlighter-rouge">az group deployment create</code> commands again.</p>
<p>However there is a catch: you have to manually drain the nodes in your nodepool before executing <code class="language-plaintext highlighter-rouge">az group deployment create</code>. After the operation is finished, review your resource group to verify that every related resource has been correctly removed.</p>
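<p>A minimal sketch of that drain step, assuming a node from the pool being removed; repeat it for every node in the nodepool.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kubectl drain <node-name> --ignore-daemonsets --delete-local-data
$ kubectl delete node <node-name>
</code></pre></div></div>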
<p>Hope the post helps to clarify the different scaling scenarios with AKS-Engine. Comments as always are welcome.</p>
<p>–Juanma</p>Juan Manuel ReyAfter reviewing how to perform Kubernetes version upgrades with AKS-Engine in a previous post the next logical step is to show how to scale our Kubernetes clusters with AKS-Engine. I will cover manual scaling of the cluster; of course you can always deploy and configure the Kubernetes Cluster Autoscaler.Understanding AKS built-in roles2019-04-10T08:55:00+00:002019-04-10T08:55:00+00:00https://jreypo.io/2019/04/10/understanding-aks-built-in-roles<p>Every Azure Kubernetes Service cluster comes with two built-in roles in Azure RBAC:</p>
<ul>
<li>Azure Kubernetes Service Cluster Admin Role</li>
<li>Azure Kubernetes Service Cluster User Role</li>
</ul>
<p>The confusion is that many people tend to believe that these roles are Kubernetes RBAC roles or related to it in some way. I’ve been asked about this so many times that I decided to write a quick post to clarify what these roles are and the use cases for them.</p>
<p>Both roles are intended exclusively to be used for retrieving credentials with the <code class="language-plaintext highlighter-rouge">az aks get-credentials</code> command.</p>
<ul>
<li>Admin Role allows access to the <code class="language-plaintext highlighter-rouge">Microsoft.ContainerService/managedClusters/listClusterAdminCredential/action</code> API to get the cluster administrator credentials.</li>
<li>User Role permits access to the <code class="language-plaintext highlighter-rouge">Microsoft.ContainerService/managedClusters/listClusterUserCredential/action</code> API and retrieve the cluster user credentials.</li>
</ul>
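<p>As a quick illustration, the same <code class="language-plaintext highlighter-rouge">az aks get-credentials</code> command retrieves one set of credentials or the other depending on the flag; the cluster and resource group names here are just examples.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ az aks get-credentials --resource-group aks-demo-rg --name aks-demo          # requires the Cluster User Role
$ az aks get-credentials --resource-group aks-demo-rg --name aks-demo --admin  # requires the Cluster Admin Role
</code></pre></div></div>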
<p>In both cases, using the <code class="language-plaintext highlighter-rouge">az aks get-credentials</code> command, the credentials are merged into a new or an existing <code class="language-plaintext highlighter-rouge">kubeconfig</code> file. Keep in mind that the user credentials aren’t limited to a specific namespace but will have access to all namespaces and will be able to deploy workloads in any of them. To restrict that kind of access you will need to implement RBAC in the cluster, which combined with AAD can be very useful to limit the access of your developers and users to their specific namespaces; I will go into the details of this configuration in a future post.</p>
<p>Finally, neither of these roles will by itself allow you to perform operations like scaling, creating or deleting an AKS cluster. To be able to perform those operations an Azure user will need to have the Contributor role on the AKS resource group or the whole subscription.</p>
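<p>For completeness, this is roughly how either role can be granted to a user scoped to a specific cluster; the names and the object ID are placeholders.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aksid=$(az aks show --resource-group aks-demo-rg --name aks-demo --query id -o tsv)
$ az role assignment create --assignee <user-object-id> --role "Azure Kubernetes Service Cluster User Role" --scope $aksid
</code></pre></div></div>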
<p>–Juanma</p>Juan Manuel ReyEvery Azure Kubernetes Service cluster comes with two built-in roles in Azure RBAC: