Thursday, December 30, 2021

Setting Up a Networked SunOS 4.1.4 VM in qemu

If you're only interested in the implementation, skip ahead to the Setup section.

A Little History

I started my UNIX career at the US National Institutes of Health (NIH). When I first started there, in 1992, I was a student intern with the Office of the Director. I returned, as a student assistant computer specialist, in 1995. At that time, I was given access to an amazing variety of state of the art equipment, including a pair of Sun SPARCstation 2s and a SPARCstation 10, which were exclusively for my use. Given the time period, I installed Solaris 2.x on them, starting with release 2.5. There were still machines around, mostly SS1, SS1+, and SS2 models, running SunOS 4.x, but I didn't have much interaction with them. They were mostly AFS (Andrew file system, an early distributed filesystem standard, created by Carnegie Mellon University) hosts, and were old tech, to me. So, while I saw fellow employees using them, I didn't pay a lot of attention to how they worked.

Interest Surfaces

I have always had an interest in what would be considered vintage equipment, since I didn't have the means to purchase state of the art machines. The first UNIX box that I personally owned was a DECstation 5000/120 that I purchased from Terrapin Trader, the surplus sales group at the University of Maryland, College Park. That machine didn't come with a complete operating system, so I quickly located The NetBSD Project, and the nascent pmax port, just before its first release, in NetBSD 1.1. I did some basic testing, and got my system working around November of 1997. This was my first real interaction with BSD UNIX, in any form. I quickly came to appreciate the extremely open and collaborative nature of NetBSD, and have enjoyed it, in many forms, from then until now. My interest in NetBSD led to investigations into the history of BSD, in general, and to my presentation on "Unusual UNIXes" at the Vintage Computer Festival East 2019.

Setup

So, let's get rolling with some actual shell! To get started, make sure that you have qemu built and installed for your system.

I use qemu on a ton of platforms, but as I type this, I'm on vacation, so I am currently using an Apple MacBook Air (2020/M1). So, I built qemu (with GL acceleration), following the instructions from knazarov's homebrew-qemu-virgl repo. Once I had that running, I created a new folder to store my SPARCstation 5 VM in, and grabbed the necessary components:

$ mkdir sunos-414-sparc
$ cd sunos-414-sparc
$ wget https://fsck.technology/software/Sun%20Microsystems/SunOS%20Install%20Media/SunOS%204.1.4%20SPARC%20%28CD%29/sunos_4.1.4_install.iso
$ wget https://github.com/itomato/NeXTSPARC/raw/master/ROMs/SPARC/ss5.bin

Once I had those, I started the configuration of the VM. Let me first thank KEKLKAKL, who provided a great starting point for this exercise. To get started, I generated the disk that I would be installing SunOS 4.1.4 on, and created my launch script:

$ qemu-img create -f qcow2 -o compat=1.1 sunos414.img 2G
$ cat << 'EOF' > run.sh
> #!/bin/bash
> qemu-system-sparc -L . \
>   -bios ss5.bin \
>   -m 32 \
>   -M SS-5 \
>   -drive file=sunos414.img,format=qcow2,media=disk \
>   -drive file=sunos_4.1.4_install.iso,format=raw,media=cdrom \
>   -net nic \
>   -net user,net=192.168.76.0/24,hostfwd=tcp::2222-:22
> EOF
$ chmod +x run.sh

Notice that I have changed the default network range that qemu provides for user (SLIRP) networking. I did this because I was having issues getting SunOS to set the netmask correctly for qemu's default 10.0.2.0/24 subnet. I then launched the installation with ./run.sh, and followed KEKLKAKL's excellent walk-through of the installation of SunOS 4.1.4. During the installation, I chose 192.168.76.15 as the IP address for the VM.

Once that was complete, I rebooted the system, and configured the default gateway. This is a little more difficult than it is in Solaris 2.x/SunOS 5.x, as there is no /etc/defaultrouter file. Instead, we add the following to /etc/rc.local:

# echo "route add net 0.0.0.0 192.168.76.2 1" >> /etc/rc.local

This adds a default route (0.0.0.0 network) with 192.168.76.2 as the gateway and a metric of 1, since there is one hop to that gateway.
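Since rc.local only runs at boot, the same route can also be added immediately and then verified. Run these as root on the guest; the exact netstat output format varies between SunOS releases:

```shell
# Add the default route right now, without rebooting:
route add net 0.0.0.0 192.168.76.2 1

# Confirm it took: look for a default/0.0.0.0 entry via 192.168.76.2.
netstat -rn
```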

DNS on SunOS 4.1.4

By default, SunOS 4.x does not do DNS, except as a fall-back from NIS. Yes, Sun really wanted everyone to get onboard with NIS, and even expected you to use it for TCP/IP name resolution. So, your options are:

  1. install a NIS/YP server as a DNS proxy
  2. do some library hacking to change the resolution priorities

I chose option 2, as that sounded a lot more straightforward. Sun actually wrote up a great document on how to do just that, and I have captured it here. I created the resolv.conf as follows:

# echo "nameserver 192.168.76.3" > /etc/resolv.conf
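A couple of quick checks from the guest confirm both halves of the setup (the hostname is just an example):

```shell
# Ask the nameserver directly; this works regardless of the libc
# resolution order:
nslookup netbsd.org

# Exercise gethostbyname() itself, which is what the library change
# above affects:
ping netbsd.org
```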

Once you do that, you have a working emulated SPARCstation 5, running SunOS 4.1.4, with functional TCP/IP!
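One last note on the hostfwd rule in the launch script: stock SunOS 4.1.4 does not ship an SSH daemon, so the tcp::2222-:22 forward only becomes useful once you install one in the guest. With that caveat, the guest is reachable from the host like so:

```shell
# Host port 2222 is forwarded to port 22 in the guest by the
# hostfwd=tcp::2222-:22 option; 'root' is just an example user.
ssh -p 2222 root@localhost
```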


Monday, September 6, 2021

Installing a Real SSL Certificate on a UniFi Cloud Key

I recently decided that I was no longer going to tolerate my UniFi Cloud Key's web management tool using a self-signed SSL certificate. I have, many times in the past, replaced the supplied certificates on the self-installed UniFi Controller software that I have run on various Raspberry Pis and Linux VMs. The process was never all that cumbersome: update the `keystore.jks` file, and restart the UniFi Controller.

However, the process seems to be very different on the Cloud Key Gen 2. I'm not sure if the same goes for the Gen 1 Cloud Key, but it seems likely, as Ubiquiti tends to update the software on all devices with each release. I'd have no problem with this, but the process is completely undocumented.

After many, many hours of poking and prodding my Cloud Key, and much searching of the UniFi Community forum, I found the following fantastic post, from the user loafbread:

https://community.ui.com/questions/Install-a-Commercial-Wildcard-SSL-Certificate-on-Cloud-Key-and-Controller/040c640e-5c48-4477-82dd-aff56178d3f3#answer/b3bb7541-51d0-432a-a33b-b9864615604d

So, it seems that Ubiquiti has made the SSL certificate process much easier. Fantastic, of course, but they somehow failed to document that fact. To make sure that this critical bit of information is maintained, I will summarize the process, here:

  1. copy your PEM-format certificates to your Cloud Key
    1. you will need both the certificate itself and the full CA chain certificate files
  2. back up the following files:
    1. cp -p /data/unifi-core/config/unifi-core.crt /data/unifi-core/config/unifi-core.crt.orig
    2. cp -p /data/unifi-core/config/unifi-core.key /data/unifi-core/config/unifi-core.key.orig
  3. replace the unifi-core.crt file with your full CA certificate chain and the unifi-core.key file with your private key
  4. restart the unifi-core service
    1. systemctl restart unifi-core.service
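The steps above can be sketched as a small POSIX shell function. This is my own sketch, not an official tool: the PEM filenames are placeholder arguments, and CONFIG_DIR is overridable so the function can be tried somewhere other than a Cloud Key.

```shell
#!/bin/sh
set -eu

# Sketch of the backup-and-replace steps above. $1 is the full CA chain
# (PEM), $2 is the private key (PEM); both filenames are placeholders.
replace_unifi_cert() {
    config_dir="${CONFIG_DIR:-/data/unifi-core/config}"

    # Step 2: back up the originals, preserving mode and timestamps (-p).
    cp -p "$config_dir/unifi-core.crt" "$config_dir/unifi-core.crt.orig"
    cp -p "$config_dir/unifi-core.key" "$config_dir/unifi-core.key.orig"

    # Step 3: install the new chain and key under the expected names.
    cp "$1" "$config_dir/unifi-core.crt"
    cp "$2" "$config_dir/unifi-core.key"

    # Step 4: restart unifi-core to load the new certificate. Skipped
    # quietly where systemctl is unavailable (e.g. testing off-device).
    if command -v systemctl >/dev/null 2>&1; then
        systemctl restart unifi-core.service || true
    fi
}
```

On the Cloud Key itself, after copying the PEM files over (step 1), run it as root, e.g. `replace_unifi_cert /tmp/fullchain.pem /tmp/privkey.pem`.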

So, the process isn't difficult; if only it were documented.

Oh well.

Sunday, January 17, 2021

Building Python 3.9.1 for Solaris 10

I won't get into the whys or the wherefores, but I needed to build Python 3.9.1 for Solaris 10, using gcc-5.5. I attempted the build using the guide found here, but ran into an issue with setup.py that prevented the socket module from building.

To get past this, I patched setup.py, with the patch I posted here.

I then used the following configure line, to set up the build:

$ ./configure --prefix=/opt/python3 --with-openssl=/opt/csw/ --enable-optimizations \
    LDFLAGS='-L/opt/local/lib -I/opt/csw/include/ncurses -I/opt/csw/include -L/opt/csw/lib -R/opt/local/lib' \
    PKG_CONFIG_PATH=/opt/csw/lib/amd64/pkgconfig/ \
    CPPFLAGS='-L/opt/local/lib -I/opt/csw/include -I/opt/csw/include/ncurses -L/opt/csw/lib -R/opt/local/lib'

After that, everything worked fine.
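As a quick smoke test of the finished build (paths as in the configure line above), importing the modules that were the sticking points is enough:

```shell
# socket is the module that originally failed to build; ssl confirms
# that the OpenCSW OpenSSL was found:
/opt/python3/bin/python3 -c 'import socket, ssl; print(ssl.OPENSSL_VERSION)'
```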

Friday, December 18, 2020

Samba and Windows 98

I've got a Windows 98 host here (yes, really, in 2020!) that I use to read and write floppy disks for my various vintage systems. For those who are curious, it is (at the moment):

Compaq Deskpro EN (chassis only)
Gigabyte GA-6VEM Socket 370 mainboard
Pentium 3 1.4 GHz
1 GB RAM
60 GB SATA2 SSD with IDE/SATA bridge
1.44 MB and 360 kB floppy drives

I was having issues with the machine being unable to connect to my Fedora-based NAS.

The solution was suggested here, but I needed to add one more configuration item to /etc/samba/smb.conf:

[global]
# allow the pre-SMB2 (NT1/CIFS) dialect that Windows 98 speaks
server min protocol = NT1
# re-enable the old LANMAN and NTLMv1 authentication methods
lanman auth = yes
ntlm auth = yes
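It's worth validating the edited configuration before restarting Samba; testparm ships with Samba, and the service name below is the Fedora one:

```shell
# Parse smb.conf and print the effective settings; syntax errors are
# reported here instead of at service start:
testparm -s

# Apply the change:
sudo systemctl restart smb.service
```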

That solved the problem and allowed my 98 host to connect and mount my shares. Of course, enabling LANMAN auth is a significant security hole, but on my internal network, I'm not too worried.


Friday, December 6, 2019

Beta Testing the Engravinator, Part 2

Just a quick update on the Engravinator...I finished printing the parts for the build early this morning, and I was really happy with how they came out:


A lot of that comes down to how Adam (the creator) laid these parts out for printing.  For those who are unaware, the orientation of a part on the build plate has a lot to do with how strong it is under torsion, compression, tension, and shear along the X, Y, and Z axes.  This is an especially big deal, due to the additive, layer-by-layer nature of the plastic 3D printing process.

I also assembled the aluminum extrusion frame of the engraver.  Again, I can't say enough positive things about the quality of the kit and the thoughtfulness of the engineering.  Instead of the usual hammer nuts or t-nuts, Adam included sprung post-insertion nuts, which can be added after the extrusions are assembled, and which don't slide around or fall out when you rotate the item being built.  These cost a bit more, but are really helpful!

Here is the completed frame, which is as far as I have gotten so far:


I'll continue to post here, as I build the kit, and give my thoughts on it.

Thursday, December 5, 2019

Beta Testing the Engravinator, Part 1

Anyone who knows me is aware that I'm a strong (if not completely consistent!) backer of Open Source and Free Software.  That's one of the major reasons why I work for Red Hat.  Given this, I'm a frequent contributor to the Electronic Frontier Foundation (and you should be, too!) and to Wikipedia, and I just signed up for a monthly contribution to the Internet Archive.  What all of these things have in common is that they are part of the larger Free Culture movement.  I'm also a passionate maker-of-things, a heavy user of 3D printing, and an occasional user of CNC wood and metal cutting.

All of that is a rather long-winded way of explaining my significant interest in the Engravinator.  The Engravinator is an Open Source hardware project that was started in 2018 by my fellow Red Hatter, Adam Haile.  You can read his blog here, where he discusses the various other projects that he has created.  I met Adam at the second East Coast RepRap Festival (ERRF), back in October.  He was showing his prototype laser engraver, and I was immediately impressed with the quality and thoughtful engineering that had clearly gone into the device.


So, when Adam told me that there was a beta test group being set up, to test out the process for building the machine, I jumped at the opportunity.  After an amazingly quick kitting process, where Adam was inundated with parts, he finished the kits over the Thanksgiving holiday.  Mine arrived on December 3rd, and I quickly realized that I had forgotten to print the required 3D-printed parts!  I finished the first set (the core components) early this morning (in Prusament Galaxy Black, perfect for a Baltimore Orioles fan!), and I think that they came out amazingly well:


I'm now running the print for the electronics enclosure on my Prusa i3 Mk2.5, and am hoping to finish all the prints by late tonight:


If you're as excited as I am about this project, join the forum, and discuss this amazing project with the rest of us!

Saturday, November 23, 2019

Fedora, Ansible and vSphere

I'm working on testing OpenShift 4.2 on vSphere in my home lab.  Eventually, this will lead to an OCP 4.2 workshop, but first I need to be able to do repeated builds easily.

There is a really nice Ansible-based deployer, which uses Terraform, already available here.  However, I wanted to use pure Ansible, so I, as usual, made more work for myself.

I started going through the Ansible/vSphere configuration document, here.  I quickly discovered that the instructions, while complete, didn't seem to work on Fedora 30 or 31.  The issue turned out to be that the built-in Ansible didn't find the vSphere Automation SDK, even when it was installed into an activated virtualenv.

To work around this, I installed Ansible itself in the virtualenv.  So the complete steps to a working Ansible/vSphere integration are:

$ virtualenv ansible
$ source ansible/bin/activate
$ pip install ansible
$ git clone https://github.com/vmware/vsphere-automation-sdk-python.git
$ cd vsphere-automation-sdk-python/
$ pip install --upgrade --force-reinstall -r requirements.txt --extra-index-url file:///~/vsphere-automation-sdk-python/lib

Once that is complete, you can use the vSphere dynamic inventory plugin by doing the following:

$ cat << EOF > ansible.cfg
[inventory]
enable_plugins = vmware_vm_inventory
EOF
$ cat << EOF > inventory.vmware.yml
plugin: vmware_vm_inventory
strict: False
hostname: vcenter
username: <vCenter admin user>
password: <vCenter admin password>
validate_certs: False
with_tags: True
EOF

Then, you can run an inventory query against your vCenter server:

$ ansible-inventory -i inventory.vmware.yml --list
{
    "_meta": {
        "hostvars": {
            "Fedora 31 (64-bit)_420ce9bb-dce1-05c6-33f5-6f2072436499": {
                "ansible_host": "10.0.1.66",
                "config.cpuHotAddEnabled": false,
                "config.cpuHotRemoveEnabled": false,
                "config.hardware.numCPU": 1,
                "config.instanceUuid": "500c0fdc-f1a7-1d79-d0a3-642e8e26642c",
                "config.name": "Fedora 31 (64-bit)",
                "config.template": false,
                "guest.guestId": "fedora64Guest",
                "guest.guestState": "running",
                "guest.hostName": "fedora31.jajcs.loc",
                "guest.ipAddress": "10.0.1.66",
                "name": "Fedora 31 (64-bit)",
                "runtime.maxMemoryUsage": 2048
            },
            "VMware vCenter Server Appliance_564d4d0f-52f6-5d7d-bfc1-6641b464586b": {
                "ansible_host": "10.0.1.15",
                "config.cpuHotAddEnabled": true,
                "config.cpuHotRemoveEnabled": true,
                "config.hardware.numCPU": 4,
                "config.instanceUuid": "524f8660-11e1-df20-ab50-049c484fa387",
                "config.name": "VMware vCenter Server Appliance",
                "config.template": false,
                "guest.guestId": "vmwarePhoton64Guest",
                "guest.guestState": "running",
                "guest.hostName": "vcenter.jajcs.loc",
                "guest.ipAddress": "10.0.1.15",
                "name": "VMware vCenter Server Appliance",
                "runtime.maxMemoryUsage": 16384
            }
        }
    },
    "all": {
        "children": [
            "fedora64Guest",
            "other3xLinux64Guest",
            "poweredOn",
            "ungrouped"
        ]
    },
    "fedora64Guest": {
        "hosts": [
            "Fedora 31 (64-bit)_420ce9bb-dce1-05c6-33f5-6f2072436499"
        ]
    },
    "other3xLinux64Guest": {
        "hosts": [
            "VMware vCenter Server Appliance_564d4d0f-52f6-5d7d-bfc1-6641b464586b"
        ]
    },
    "poweredOn": {
        "hosts": [
            "Fedora 31 (64-bit)_420ce9bb-dce1-05c6-33f5-6f2072436499",
            "VMware vCenter Server Appliance_564d4d0f-52f6-5d7d-bfc1-6641b464586b"
        ]
    }
}
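With the inventory working, the auto-generated groups in that output can be targeted directly; for example (assuming SSH access to, and Python on, the guests):

```shell
# Show the group/host tree for one of the generated groups:
ansible-inventory -i inventory.vmware.yml --graph poweredOn

# Ad-hoc ping of every powered-on VM:
ansible -i inventory.vmware.yml poweredOn -m ping
```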

Good luck!