You need to get the CentOS systems to the latest update level. If the systems have been unloved, you will likely find that they can no longer reach the repo servers, because mirrors for end-of-life CentOS releases are removed from the mirrorlist.
In each repo file under /etc/yum.repos.d/, change the baseurl to point at http://vault.centos.org/ and comment out the mirrorlist line.
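A quick way to make that edit across the stock repo files is with sed; treat the exact filenames and the release path as assumptions to check against your system first:

# Comment out the mirrorlist lines and point baseurl at the vault
sudo sed -i -e 's/^mirrorlist=/#mirrorlist=/' \
            -e 's|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|' \
            /etc/yum.repos.d/CentOS-*.repo

After that, a yum clean all followed by yum update should pull packages from the vault.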
You can run containers on container orchestration platforms, and of course you can do that with phpIPAM as well, but in my case I just wanted the convenience of the container packaging approach on a single Linux host, without the overheads of a K8s-style platform.
I was using a RHEL derivative, AlmaLinux 9.0 in this case, and Podman rather than Docker.
I did want to use the docker-compose approach to configuring and maintaining the application; the compose format makes it really quite simple to deploy and maintain simple container applications hosted on a single system.
Since I was using Podman rather than Docker, I found that a tool called podman-compose can drive Podman to deliver the outcome you'd expect from a docker-compose file.
First, get podman and pip3 installed:
yum install podman python3-pip
Then it's simple to install podman-compose:
pip3 install podman-compose
With a docker-compose.yml file similar to the following (change the default passwords I've put in the file), you can get going very quickly.
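Here's a minimal sketch of such a compose file, assuming the phpipam/phpipam-www image from Docker Hub and a MariaDB backend; the image tags, port and environment variable names should be checked against the phpIPAM image documentation:

version: "3"
services:
  phpipam-web:
    image: phpipam/phpipam-www:latest
    ports:
      - "80:80"
    environment:
      - TZ=Australia/Melbourne
      - IPAM_DATABASE_HOST=phpipam-mariadb
      - IPAM_DATABASE_PASS=ChangeMePlease
    depends_on:
      - phpipam-mariadb
  phpipam-mariadb:
    image: mariadb:10
    environment:
      - MYSQL_ROOT_PASSWORD=ChangeMeToo
    volumes:
      - phpipam-db-data:/var/lib/mysql
volumes:
  phpipam-db-data:

Bringing it all up is then a single command:

podman-compose up -d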
Then you connect to the IP address of your underlying system and work through the installation dialog. You should only need to enter the MySQL/MariaDB username and password; everything else should be pre-filled with the correct information.
I’ve been doing some work on Oracle’s Cloud as they provide a decent free tier to experiment with. I’ve been very pleasantly surprised with OCI and will likely move some of my personal workloads there.
It wasn't without a bit of a head-scratching experience though, when I was trying to get application connectivity between two OCI instances on the same private 10.0.0.0/24 network I had created.
e.g.
curl http://10.0.0.53/
curl: (7) Failed to connect to 10.0.0.53 port 80: No route to host
My first thought was the cloud ingress rules, but I'd already added permissive ingress rules to the subnet's security list as a first desperate attempt to get things working.
Try again. Still no route!
What I discovered is that the OCI-supplied images (I was using the Ampere Ubuntu 20.04 image in this case) have an interesting set of iptables rules baked in.
root@blog:~# cat /etc/iptables/rules.v4
# CLOUD_IMG: This file was created/modified by the Cloud Image build process
# iptables configuration for Oracle Cloud Infrastructure
# See the Oracle-Provided Images section in the Oracle Cloud Infrastructure
# documentation for security impact of modifying or removing these rule
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [463:49013]
:InstanceServices - [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p udp --sport 123 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
#-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -d 169.254.0.0/16 -j InstanceServices
.
.
.
I've commented out the offending line above. With OCI-supplied images, even though the default policy on the INPUT chain is ACCEPT, they place a reject-with icmp-host-prohibited rule at the end of the chain, which effectively rejects everything not specifically allowed earlier (such as by the port 22 rule on the line before).
My two options were to either put in my specific allows (the right thing to do) or remove the reject and fall back to the INPUT chain's default policy. I chose the latter, as I was experimenting in this case, and kept the information at my fingertips for more 'production-like' deployments.
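Having edited /etc/iptables/rules.v4, the change can be applied without a reboot. On Ubuntu the iptables-persistent package provides netfilter-persistent for this; a sketch, assuming that package is installed (its presence is implied by /etc/iptables/rules.v4 existing):

# Reload the persistent ruleset after editing /etc/iptables/rules.v4
sudo netfilter-persistent reload

# Confirm the REJECT rule is gone from the INPUT chain
sudo iptables -L INPUT --line-numbers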
The end result: communication between the two OCI Ubuntu instances over the private network now works fine.
Caveat: In my case I understood the risks associated with removing the reject for my use case. Please perform your own due diligence for yours; you're probably better off specifically adding the communication rules you want to allow.
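If you go down that (better) path, a rule like the following, inserted above the REJECT, is the shape of it; the port and source subnet here are just my example:

# Allow new inbound HTTP from the private subnet, inserted before the REJECT rule
# (the position number depends on where the REJECT sits in your ruleset)
sudo iptables -I INPUT 6 -p tcp -s 10.0.0.0/24 --dport 80 -m state --state NEW -j ACCEPT

# Persist the running ruleset so it survives a reboot
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'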
It’s a simple issue to resolve, but just a little annoying.
The CVO doesn't complete because the cluster-monitoring-operator pod rollout is stuck with the error message CreateContainerConfigError.
The actual error shows:
Error: container has runAsNonRoot and image has non-numeric user (nobody), cannot verify user is non-root
This is still an open issue with Red Hat and it's being tracked via this BZ. It is, however, easily corrected by deleting the offending pod and letting it get re-created.
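Something like the following does it; the exact pod name suffix will differ in your cluster, so look it up first:

# Find the stuck pod
oc get pods -n openshift-monitoring | grep cluster-monitoring-operator

# Delete it and let the deployment re-create it
oc delete pod -n openshift-monitoring cluster-monitoring-operator-<pod-suffix>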
Don't believe sites telling you it's hard to install .NET Core on a Raspberry Pi 4 – it isn't, and hopefully I haven't tempted fate by showing how simple it is 🙂
pi@raspberrypi:~ $ lsb_release -a
No LSB modules are available.
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 10.3 (buster)
Release: 10.3
Codename: buster
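The dotnet-install.sh script itself comes from Microsoft; I fetched it with curl first (https://dot.net/v1/dotnet-install.sh is the documented short URL for it, but check Microsoft's current install docs):

pi@raspberrypi:~ $ curl -L https://dot.net/v1/dotnet-install.sh -o dotnet-install.sh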
pi@raspberrypi:~ $ bash dotnet-install.sh
dotnet_install: Warning: Unable to locate zlib. Probable prerequisite missing; install zlib.
dotnet-install: Downloading link: https://dotnetcli.azureedge.net/dotnet/Sdk/3.1.201/dotnet-sdk-3.1.201-linux-arm.tar.gz
dotnet-install: Extracting zip from https://dotnetcli.azureedge.net/dotnet/Sdk/3.1.201/dotnet-sdk-3.1.201-linux-arm.tar.gz
dotnet-install: Adding to current process PATH: `/home/pi/.dotnet`. Note: This change will be visible only when sourcing script.
dotnet-install: Installation finished successfully.
Then it’s just a simple case of adding /home/pi/.dotnet to your system PATH variable.
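For example, by appending something like this to ~/.bashrc (setting DOTNET_ROOT as well, so the tooling can find the SDK):

export DOTNET_ROOT=/home/pi/.dotnet
export PATH=$PATH:/home/pi/.dotnet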
pi@raspberrypi:~ $ dotnet --version
3.1.201
Wondering about the missing zlib prerequisite – so was I. The installer checks for zlib via an ldconfig command, but the zlib1g-dev package installs the library as libz, so the check fails. I'll update this post if I run into any actual problems.
Always fun to strike problems in what should be the simplest of things. I wanted to add Ansible Tower as a provider in ManageIQ (CloudForms would behave the same way), but the credential validation kept failing when I pointed ManageIQ at the Tower hostname.
So, what is a person to do? Hit the Google. Eventually I came across this bugzilla item https://bugzilla.redhat.com/show_bug.cgi?id=1740860 and it gave a hint: specify /api/v2 in the URL given to ManageIQ rather than just the base hostname, e.g. https://blah…./api/v2
Tried it, it worked! My credential validated and a provider refresh was automatically initiated and all my Ansible Tower templates and inventories were discovered correctly.
So yes, that is quite a specific title for a blog post. The path leading to it wasn’t as succinct, but it was an enjoyable journey.
Firstly, VMware provides a fine PowerCLI container built on top of Photon OS, but being me I thought, "Hey, I wonder if I can get the same thing with a Red Hat Universal Base Image (UBI)?" And so my journey began.
I decided I'd use the VMware Dockerfile as the starting point, but I wanted to build it using buildah and run it using podman – because I'd like to know (you can see a pattern here).
The original Dockerfile is accessible here, or here’s a local copy.
I've made a few changes, some cosmetic due to the way I like to lay out my Dockerfiles, but the outcome is similar. My Dockerfile is below, or you can find it over at my github account. Using the default RHEL 7 UBI (sadly Microsoft doesn't have PowerShell for RHEL 8 as yet) I was able to build the image at around 567 MB, whereas the Photon OS image is around 362 MB. Not a bad result given how little effort (none) I've put into making it as small as possible.
As you can see in the Dockerfile, I'm simply installing PowerShell from the Microsoft repository on top of the RHEL 7 UBI image and then (via PowerShell) installing the PowerCLI, PowerNSX and PowervRA modules from the upstream PowerShell Gallery.
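A minimal sketch of that kind of Dockerfile follows; the exact repo path on packages.microsoft.com is an assumption to verify before use:

FROM registry.access.redhat.com/ubi7/ubi

# Add the Microsoft package repository and install PowerShell
RUN curl -sSL https://packages.microsoft.com/config/rhel/7/prod.repo -o /etc/yum.repos.d/microsoft.repo && \
    yum install -y powershell && \
    yum clean all

# Install the VMware modules from the upstream PowerShell Gallery
RUN pwsh -Command "Set-PSRepository -Name PSGallery -InstallationPolicy Trusted; \
    Install-Module -Name VMware.PowerCLI,PowerNSX,PowervRA -Scope AllUsers"

CMD ["pwsh"]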
Building it with buildah is trivial.
buildah build-using-dockerfile -t rcli .
And to run it via podman (a trivial example):
[gocallag@orac8 rhel7]$ podman run -it rcli pwsh
PowerShell 6.2.3
Copyright (c) Microsoft Corporation. All rights reserved.
https://aka.ms/pscore6-docs
Type 'help' to get help.
PS /root> Get-VM # plus a couple of tabs to force auto-completion of the command
Get-VM Get-VmfsDatastoreInfo Get-VMHostPatch
Get-VMByToolsInfo Get-VMGuest Get-VMHostPciDevice
Get-VMCCommand Get-VMHost Get-VMHostProfile
Get-VMCEdge Get-VMHostAccount Get-VMHostProfileImageCacheConfiguration
Get-VMCEdgeNic Get-VMHostAdvancedConfiguration Get-VMHostProfileRequiredInput
Get-VMCEdgeNicStat Get-VMHostAttributes Get-VMHostProfileStorageDeviceConfiguration
Get-VMCEdgeStatus Get-VMHostAuthentication Get-VMHostProfileUserConfiguration
Get-VMCEdgeUplinkStat Get-VMHostAvailableTimeZone Get-VMHostProfileVmPortGroupConfiguration
Get-VMCFirewallRule Get-VMHostBirthday Get-VMHostRoute
Get-VMCLogicalNetwork Get-VMHostDiagnosticPartition Get-VMHostService
Get-VMCOrg Get-VMHostDisk Get-VMHostSnmp
Get-VMCPSettings Get-VMHostDiskPartition Get-VMHostStartPolicy
Get-VMCSDDC Get-VMHostFirewallDefaultPolicy Get-VMHostStorage
Get-VMCSDDCCluster Get-VMHostFirewallException Get-VMHostSysLogServer
Get-VMCSDDCDefaultCredential Get-VMHostFirmware Get-VMmaxIOPS
Get-VmcSddcNetworkService Get-VMHostFirmwareVersion Get-VMQuestion
Get-VMCSDDCPublicIP Get-VMHostHardware Get-VMResourceConfiguration
Get-VMCSDDCVersion Get-VMHostHba Get-VMStartPolicy
Get-VmcService Get-VMHostImageProfile Get-VMToolsGuestInfo
Get-VMCTask Get-VMHostMatchingRules Get-VMToolsInfo
Get-VMCVMHost Get-VMHostModule Get-VMToolsInstallLastError
Get-VMEncryptionInfo Get-VMHostNetwork Get-VMToolsUpgradePolicy
Get-VMEvcMode Get-VMHostNetworkAdapter
Get-VmfsDatastoreIncrease Get-VMHostNtpServer
You're likely, possibly, most likely not wondering if I have anything planned for this container. The answer is yes, but it will be the subject of later posts. I'm a big fan of the ability to run PowerCLI via PowerShell on Linux, and doing it via a container is a very neat packaging solution. Sure, I could've used the VMware container (kudos to them for creating it), but I now know more than I did this morning, and that's the result I was aiming for.
The section 'Authenticating with Azure' in the Ansible guide sounds like the right place, but you can't use your AD username/password from Ansible because you turned on 2FA – you turned it on, RIGHT? So the option left to you is to create a Service Principal (SP).
Note: having 2FA on your account is what you should be doing, so don’t turn it off.
It's quite simple to create a credential for Ansible to use when connecting to Azure. Simply fire up the Cloud Shell (awesome feature BTW, Microsoft) and create a Service Principal (SP).
But hang on, what is a Service Principal? The Ansible guide refers you to the Azure documentation over at https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal which you will read, and, if you're like me, you'll wonder what you actually just read. Have no fear: as I mentioned above, you can use a simple Azure CLI command (via the Cloud Shell you just started) to create the Service Principal. Think of the Service Principal as a credential an application (in this case Ansible) can use to access the Azure service(s).
# Note: --password is optional; if you don't specify one, it will be generated for you
geoff@Azure:~$ az ad sp create-for-rbac --name svc-ansible-azure --password 'ALovelyComplexPasswor@'
Changing "svc-ansible-azure" to a valid URI of "http://svc-ansible-azure", which is the required format used for service principal names
Creating a role assignment under the scope of "/subscriptions/88888888-4444-4444-4444-cccccccccccc"
Retrying role assignment creation: 1/36
Retrying role assignment creation: 2/36
{
"appId": "appid888-4444-4444-4444-cccccccccccc",
"displayName": "svc-ansible-azure",
"name": "http://svc-ansible-azure",
"password": "password-4444-4444-4444-cccccccccccc",
"tenant": "tenant88-4444-4444-4444-cccccccccccc"
}
geoff@Azure:~$
If you want to see what that command just did in the Azure portal, head over to the Azure Active Directory -> App registrations blade, and there you can see the Service Principal you just created.
So what do you do with the new credential?
The Ansible Azure scenario guide has a section on what to do, however, it’s a bit too vague for me.
Using Environment Variables
To pass service principal credentials via the environment, define the following variables: AZURE_CLIENT_ID, AZURE_SECRET, AZURE_SUBSCRIPTION_ID and AZURE_TENANT.
For your sanity, the mapping from the az output is:
AZURE_CLIENT_ID ==> appId
AZURE_SECRET ==> password
AZURE_TENANT ==> tenant
The remaining item, AZURE_SUBSCRIPTION_ID, is exactly that; you can also get it from the Cloud Shell as follows:
geoff@Azure:~$ az account list
[
{
"cloudName": "AzureCloud",
"id": "subscrip-4444-4444-4444-cccccccccccc
"isDefault": true,
.
.
.
In this case AZURE_SUBSCRIPTION_ID ==> id – whichever id in your account is valid for your use case.
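Pulling it together, an ad-hoc Ansible run against Azure just needs these in the environment (the values here are the placeholders from the output above):

export AZURE_CLIENT_ID='appid888-4444-4444-4444-cccccccccccc'
export AZURE_SECRET='password-4444-4444-4444-cccccccccccc'
export AZURE_TENANT='tenant88-4444-4444-4444-cccccccccccc'
export AZURE_SUBSCRIPTION_ID='subscrip-4444-4444-4444-cccccccccccc'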
If you want to add these credentials into Ansible Tower, simply create a Credential of type Microsoft Azure Resource Manager and use the values you’ve deduced above. Ansible Tower will automatically translate them into Environment Variables for your Tower template execution.