A Modest Proposal

No, this doesn’t involve eating the Irish, nor is it satire. But it does involve the killing of certain ‘sacred’ cows, so not a completely bad title…

Having lived in both the Windows and macOS admin worlds for three decades, I’ve had some time to deal with the foibles of both, and while Windows is a capable, usable, feature-rich OS, it is also a gods-damned mess in ways that are, in 2022, almost 2023, inexcusable. The registry, the issues with library and runtime versions…no matter how hard deploying software on macOS can be, it is orders of magnitude easier than on Windows. So with that in mind, looking at it from a macOS perspective, how can we solve this problem in a sane manner? What ideas can we steal from macOS and Linux to help, while applying them in a way that works for Windows?

Get off the 32-bit pot

It is 2022, almost 2023. MS needs to just pull an Apple and say “As of <version>, we will no longer support 32-bit anything in Windows.” OEMs/ISVs will either go along or they won’t, but hanging on to 32-bit support is no longer justifiable given the amount of work it creates. Stop it. Stop it for every application too. The fact that there’s even an option for a current 32-bit Office (and I would be thrilled to now be wrong about that) is even more inexcusable. Dumping all the 32-bit legacy code would be a massive improvement on every level, including OS layout, security, simplicity, etc. That users still have to care about the “bittedness” of their OS or applications? Stupid. Like straight-up stupid. Yes, I know, enterprise customers still have 32-bit apps they use.

If they’re big enough to try to stop this, they’re big enough to have or hire the staff to update their crap. It’s 2022 almost 2023, there is no justification for 32-bit Windows or Windows apps. Just. Stop. It. No, don’t even allow them to run sandboxed. Put them to rest like they should have been years ago. Spine up and do it.

Partition, Partition, Partition

Not in the filesystem sense, but in the OS structure sense. The way Windows as an OS is set up, the difference between user and system data is not as clear as it should be. But that’s something that can be fixed.

First, let’s create some core structures, some of which are already there:

  • Windows
  • AD
  • MyComputer
  • Users

Okay, admittedly the AD/MyComputer names are lame, but given MS branding over the years, they’re not as bad as some of what MS has come up with (anyone want to get a brown Zune squirt?). So what are the purposes of each of these partitions?


Windows

This is the local OS, as it is now. But we’re going to steal an idea from Apple and make C:\Windows read-only. Like hardcore. There’s no reason not to; it’s a perfectly usable method, as Apple has shown, and while it can cause minor bumps, the management tools on Windows are, or should be, mature enough to handle this. The only thing that goes in here is installed during the OS install. By actual OS installers, not imaging. To be blunt, imaging needs to die, but that means MS has to stop allowing OEMs to modify Windows to add in their own nonsense as part of the OS. Install it somewhere else; the Windows OS install has to be carved in stone, as it were, by MS.

This also means MS has to stop tarting up the OS install itself. The Xbox stuff and the other non-OS-essential items don’t have to be eliminated as products, but they can’t be a part of the core OS. Windows as an OS needs to be slimmed down. This would be a huge boon to enterprises everywhere, especially in high-security areas. If they can rely on the OS they get on the machine being just the OS, and everything outside of Windows being removable without causing issues with the operation of the computer, a major need for imaging (the “de-tarting” of the OS) goes away. The OS is the OS, the OS directory structure is read-only, periodt. I’m not saying do the partition tricks Apple does, although those have much to speak for them in terms of locking down the OS, but at the very least, you should never be a single password away from having your computer owned at “burn it to the ground to fix it” levels.

Again, the only thing in C:\Windows is the OS, the things needed to be a copy of Windows. Nothing more. Also, stop with the stupid stub files for Office; does anyone not despise those?


AD

This is a nod to the ubiquity of, and integration with, AD that is a part of the Windows world. This is where everything required for AD management goes, and it’s created only on binding with AD. Policies? Here. Configs? Here. Device management configs? Here. Needed scripts? Here. Once it’s in place, the only source that can modify it is AD. No local fuqery allowed. The only thing a local admin can do is wipe the drive to get rid of it. If you want to delete the directory/unbind from AD, the minimum privileges required for a local interactive user should be Enterprise Admins. Yes, you have to lock this down like that to make it work. If the machine is removed from AD remotely via AD tools/processes, then part of that is deleting the C:\AD directory.

Which also means that any AD-only users lose their ability to log in. This would require the thought process of “do we want to allow this user to be a hybrid AD/local user?”, which should require some thought. The cases where this would be a problem should be relatively small, but it has to be the starting point. Yes, I am quite serious about how hard I want to lock out local admins/users from being able to modify C:\AD. It’s needed, and since the only way to create C:\AD requires AD (or some other LDAP server that can play AD games correctly), the legitimate needs for a local machine admin/user to be able to modify/delete C:\AD are small.


MyComputer

This is analogous to /Library on macOS. This is for local settings/policies that affect every user on a specific computer. If it affects all users, or is a system setting, it goes here. It would exist on every Windows computer, and obviously local admins can mess with it, but you’d have to have local admin rights to do so. Note that nothing here should be required to boot the computer and log in. But if you just delete stuff, your apps or local settings may get very strange. No, it’s not hidden. Hidden directories for this kind of thing are silly, and I very much include hiding ~/Library on macOS in this.

But yes, application settings, login settings for all users, etc., that all goes here. It’s basically the current C:\ProgramData directory, with some updates. Like not being hidden.


Users

I hope this is obvious, since it already exists. This is for user data, including per-user installed applications. We should already understand this concept, so I’m not going to go into details. There will be one significant change that I’ll discuss in a bit, but it’s a good change that will make things like user migration easier. I will say: get rid of the stupid Roaming/Local/LocalLow split; there’s little need for that.

Whither Applications?

There’s no real need to change this; C:\Program Files works fine. What I will say, again, is: get rid of 32-bit support. That C:\Program Files (x86) still exists, and is still needed, is an embarrassment. So we keep C:\Program Files, but we’re going to take an idea from macOS and modify it a bit to make certain things easier: all applications go in their own folder in C:\Program Files, and all application-specific data, files, and libraries go in the application folder. No more dropping an executable in C:\Program Files and then vomiting library files everywhere, including runtimes that other apps use because “oh, that VC++ runtime is there, no need for me to install mine.”

If your application needs 34 runtimes, they go in your application’s folder. Not in C:\MyComputer, that’s only for things needed by every user on the computer. That your application needs a specific version of the .NET runtime that is different than the OS version? That’s fine, but it goes in your application’s folder, periodt, and only your app can access stuff in that folder. This has a number of effects that are hugely positive:

  1. Uninstalling becomes orders of magnitude simpler. Along with any directories you put in C:\MyComputer, the entire uninstall process no longer requires complex executables. A handful of Remove-Item statements in a PowerShell script are all you need. (Yes, I know about the elephant. Patience, children, patience.)
  2. Installing is simpler. You copy a folder to C:\Program Files, put some shortcuts on the current user’s desktop (other users, if this is a shared-use machine, can be handled via a First Run action), add shortcuts in the Start Menu, create any necessary services, and you’re done. Everything else can, and should, be handled by a First Run action on that computer when the human initiates it. So the only reason for .msi at all is integration into managed deployment systems, and those become so much simpler. Also, it helps put an end to setup.exe, the bane of admins everywhere. There is nothing about your installer so clever that you need to write your own. Wank somewhere else, not on my computer.
  3. It makes troubleshooting easier, because it creates known places for all your stuff. You can make assumptions.
  4. It makes reinstalls easier.
  5. It makes updates easier.
  6. It removes the need for the OS to manage application-specific library needs.
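
Under that layout, an uninstall really could be a handful of Remove-Item statements. A hedged sketch, where the app name and every path are hypothetical:

```powershell
# Hypothetical uninstall for "SomeApp" under the proposed layout.
# The app lives entirely in its own folder, so removal is just deleting known paths.
Remove-Item -Path 'C:\Program Files\SomeApp' -Recurse -Force
Remove-Item -Path 'C:\MyComputer\SomeApp' -Recurse -Force -ErrorAction SilentlyContinue   # machine-wide settings, if any
Remove-Item -Path "$env:USERPROFILE\Desktop\SomeApp.lnk" -Force -ErrorAction SilentlyContinue
Remove-Item -Path "$env:ProgramData\Microsoft\Windows\Start Menu\Programs\SomeApp.lnk" -Force -ErrorAction SilentlyContinue
```

That’s the whole uninstaller: no custom executable, no registry surgery, just known locations.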

This is such an obvious change that I’m really surprised it hasn’t happened. It simplifies so many things, it removes so much confusion. Seriously, this change alone makes things ridiculously better.

So about that elephant…

So what about the registry? How do we update that for this brave new world?

We don’t. We kill it. We remove it. We obliterate it. We treat it like the Death Star treated Alderaan.

There’s nothing about the registry that is objectively good. It’s awful on every level, and if you look at the actual data it contains, a huge part of that? File paths. Which are better managed in literally any other way, via settings files. Text, XML, JSON, I don’t care. There is nothing good about the registry. It’s a trivially modified place for critical system settings, it’s a hard-to-read, hard-to-use database that is a glorious target for every bad actor out there…it was never really a good idea. Just admit it and make it go away.
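
For illustration, a hypothetical per-application settings file (every name and key here is made up) covering the kind of data the registry mostly holds today:

```json
{
  "appVersion": "4.2.0",
  "dataDirectory": "C:\\Program Files\\SomeApp\\Data",
  "recentFiles": [],
  "logLevel": "warn"
}
```

A file like this lives in the app’s own folder, is human-readable, and deleting the app deletes its settings with it.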

If you get rid of the registry, a lot of things get easier. Installing software. Uninstalling software. Updating software. Migrating users to a new machine. Removing a user from a machine. Adding a user to a machine. In fact, almost everything that uses the registry now gets easier if you remove the registry.

Just kill it. Kill it dead. It was never a good idea, and the only thing the Windows registry does better than any other method is be the Windows registry. Make it stop; delete it from the computer/IT lexicon. If the path “HKLM:\…” is never seen again, it will be a good day. Kill the registry. Do it.

This is not complete

Obviously, this is a broad strokes post. The details on this are numerous and important, but I really don’t like to complain without at least attempting to offer a solution, and I think this is not the worst attempt.

The question is, does anyone on the Windows team have the spine to actually make the changes to make things better, or are they too stuck in “we can eventually fix it without changing anything”-ville? Because if that’s the case, Windows will never get better. We’ve watched decades of failed incrementalism on the platform. Time to blow some things up and make it actually better.


Application Scripting is Weird

There’s a tendency in the Apple world to paint AppleScript as some uniquely weird, inconsistent language. I’m usually amused by that, because the person doing so will then hold up shell as an example of a consistent language. Which is highly amusing.

But here’s the thing: the core language, the core AppleScript syntax, is really quite consistent. It’s when you get into scripting applications that things get weird, because app devs are not consistent in how they implement things.

So let’s take a look at it via Excel, which has the advantage of being scriptable in wildly different languages on different platforms. We’re going to do a fairly simple set of steps:

  1. Open an Excel file
  2. Set a range of columns to be formatted as a table
  3. Sort that table by the first column, ascending


Here’s how we do this in AppleScript:

set theExcelFile to "pathtofile" as POSIX file
tell application "Microsoft Excel"
     open theExcelFile
     set theWorkbook to the active workbook
     set theWorksheet to active sheet of theWorkbook
     set theRange to get entire column of range ("A:H")
     set theList to make new list object at theWorksheet with properties {range object:theRange}
     set theSort to sort object of theList
     set theSortRange to sortrange of theSort
     sort theSortRange key1 (column "$A:$A" of theSortRange) order1 sort ascending
end tell

I mean, if you know Excel, and how “Format as Table” actually creates a list object, and sorts are weird within a table/list object, and you script Excel a LOT, this makes sense. You:

  1. Create the path to the file
  2. Tell Excel to start/activate
  3. Tell Excel to open the file
  4. Create a reference to the active workbook
  5. Create a reference to the active (work) sheet of the active workbook reference
  6. Create a range of columns
  7. Make a new list object (format as table) in the active sheet for the range you just created and create a reference to that list object
  8. Create a reference to the built-in sort object of the list object
  9. Create a reference to the sortrange of that sort object reference
  10. Sort the sortrange reference by the first column of the sortrange in ascending order

Okay, so what about, say, PowerShell on Windows? That has to be way less application-specific, right? Surely it’s not that weird…


Here’s the same thing in PowerShell:

$fileToOpen = "fullpathtofile"
$excelObject = New-Object -ComObject Excel.Application
$excelFileObject = $excelObject.Workbooks.Open($fileToOpen)
$excelFileWorksheet = $excelFileObject.ActiveSheet
$excelFileWorksheetRange = $excelFileWorksheet.Range("A1","H1").EntireColumn
$excelFileTableObject = $excelFileWorksheet.ListObjects.Add([Microsoft.Office.Interop.Excel.XlListObjectSourceType]::xlSrcRange,$excelFileWorksheetRange,$null,[Microsoft.Office.Interop.Excel.XlYesNoGuess]::xlYes)
$excelFileTableObject.Sort.Apply() # actually perform the sort
$excelObject.Visible = $true

Obviously this is totally different, because here we:

  1. Create the path to the file
  2. Tell Excel to start/activate
  3. Tell Excel to open the file
  4. Create a reference to the active workbook
  5. Create a reference to the active (work) sheet of the active workbook reference
  6. Create a range of columns
  7. Make a new list object (format as table) in the active sheet for the range you just created and create a reference to that list object
  8. Create a sort object made up of the list object specifying the column to search on and how
  9. Apply the sort object to the list object
  10. Actually make the Excel file visible

Oh yeah, that’s totally different and that syntax is just as bog-standard PowerShell as can be, unlike that Excel AppleScript which has nothing to do with core AppleScript. 🙄

If you were to do the same thing in Numbers, you’d see a similar syntax, because applications have specific needs that a core language for an OS does not. Any language that can be extended to fit the needs of a specific application is going to get weird based on the needs, features, and naming conventions of the application. For example, “Table” in Excel can cover a lot of very different things. “Table” in Numbers covers basically one thing, it’s not like Numbers has Pivot Tables. So doing table operations in Numbers is similar, but not identical to format as table in Excel.

Any language supporting application scripting is going to get weird as more applications use it. It’s the unavoidable nature of the beast.

Get-Macinfo Update

tl;dr, updated for Apple Silicon

During my talk at JNUC, a few folks pointed out that my Get-Macinfo script didn’t work well on Apple Silicon. I wasn’t surprised, but as I don’t have an Apple Silicon Mac, I couldn’t exactly test for that. However, some of y’all really came through with details on command results, and with the help of folks, in particular Kelly Dickson and Dr. Michael Richmond, I was able to get the info I needed.

For Apple Silicon, in the system profiler hardware report, the following values:

  • CPU Speed
  • CPU Count
  • L2 Cache
  • L3 Cache
  • Hyperthreading

don’t exist. Not a shock, but as that query is dumped into an array, missing 5 items meant my array references were all wrong.
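
One way to make that kind of parsing resilient (a hedged sketch, not the actual Get-Macinfo code, and the key names are examples) is to key on the field names instead of array positions:

```powershell
# Parse system_profiler's "Key: Value" lines into a hashtable so that
# fields missing on Apple Silicon don't shift every later index.
$hw = @{}
system_profiler SPHardwareDataType | ForEach-Object {
    if ($_ -match '^\s*(.+?):\s+(.+)$') {
        $hw[$Matches[1]] = $Matches[2]
    }
}

# Keys absent on a given architecture simply return $null
# instead of silently handing you the wrong field.
$hw['Model Name']
$hw['Hyper-Threading Technology']
```

With a lookup table, a missing field is a $null you can test for, not an off-by-five bug.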

I’ve got the first update for Apple Silicon up at ye olde github site, so anyone with an Apple Silicon Mac who wants to look at it and feels like installing/running PowerShell on their Mac (if they don’t already have it) can beat on it. It still seems to work correctly on Intel.

Again, thanks to everyone who helped out, it’s really appreciated, and if anyone has anything they’d like to see added to the list of things Get-Macinfo reports on, I’m happy to add where I can.


JNUC 2022

Sitting in the airport with an hour to kill, I thought I’d jot down my thoughts. First, I really enjoyed the conference. The Jamf folks did a solid job, San Diego was great, the hotel location was perfect, and having an event at a Naval Aviation museum? PLANE NERDGASM. Even as a non-Jamf user/admin, the sessions were varied enough that I had no trouble basically double-booking myself. Really good choice, even if I am slightly biased.

Speaker Thoughts

As a speaker, I’ve a couple of minor nits: the speaker room, the “Green Room”, was a bit sparse. It could use some sprucing up and more coffee (to be fair, there’s never enough coffee as far as I’m concerned, so don’t take that too seriously; I’m an E D G E C A S E when it comes to coffee consumption). But a bit larger and a few more amenities would be appreciated; it’s good to have a place where one can quietly go over a presentation one last time.

On the flip side of that, the ability to see, almost in real time, how many people attended vs. registered is really useful. I was blown away at the numbers my PowerShell session pulled; I honestly didn’t expect more than about ten people to show up. That feature is really useful for speakers, so kudos to Jamf on that one. The pre-show prep covered literally months, but that allowed it all to be spaced out and unhurried, which is something I really liked. Whoever set that up did a fantastic job of making it really clear what I needed to do and when, and that prep was a massive help for me, so thank you all.

However, while I understand that 30 minute sessions need to be tight, the inability to have any live demos during a presentation was a killjoy and a half. I’d timed my session out with demos in mind, and without those, I went from a solid 20+ to an okay 15+ minute session. I’m good at vamping, but that’s a lot of time to tap-dance through. Being able to demo things, especially when talking about something like PowerShell or similar that a lot of the audience is unfamiliar with is critical. I hope that for 2023, Jamf adjusts their setup to allow for better demos.

Other than the demos issue, the overall presentation was fantastic. The speaker monitors were perfectly placed, so there was never a need to do the “turn head to see what you’re talking about” dance. Having the presentation display there so I could also see my notes was really useful, and greatly appreciated.

Overall as a speaker, I think other than the very minor nit of the speaker room, and the demos issue, Jamf did a great job here.

Attendee Thoughts

Knocked it out of the park. The location, as I said, was amazing. Being able to run along the bay between the end of sessions and any after events was a really great way to unwind a bit, and the solid spacing between the end of sessions and after events was hugely appreciated. That’s something a lot of conferences overlook, and after two days of non-stop go from 7/8am to 11pm or later, one runs out of energy for the last day or so. Having that break kept that from happening; please don’t lose that.

The room setups were great, easy to see and hear from everywhere in the room, and the chairs were quite comfortable. Having a good bit of elbow room at the tables in the rooms was so nice, like just so nice. Having breakfast and lunch provided was nice, the quality of the food was aces. The overall conference had a hard WWDC vibe, but from the older days when you got actual decent food, not box lunches that make USAF flightline meals look luxurious. If you’re going to provide food, spend more than a buck a meal, and Jamf did this about perfectly. The plethora of coffee stations, again, I loved. Could have used more. And bigger cups. But that’s (literally) just me.

Finally, the attendees. First, to everyone (and there were a lot of you) who told me how much they enjoyed my session: thank you so much. I was kind of unsure how such a non-Jamf-specific session would be received at all, and given it was on PowerShell, I had braced myself to be mostly ignored. Instead, I had an almost packed room for the ‘live’ session, and twice that for the virtual, and oh my goodness, thank you all SO MUCH, it means a lot. That sense of community I’d been missing for the last few years was there in abundance, and it really, really felt good. Y’all are amazing and wonderful.


For my first-ever JNUC, I was genuinely impressed on every level. The few things that stood out as less-than-amazing did so more because everything else was so good, and I think all of them are fixable without a huge amount of work or expense. I really enjoyed the experience as both a speaker and attendee, and I am absolutely figuring out how I can make 2023 work.

Finally, dear Jamf folks, my loves, my sweets…if you’re looking to hold future JNUCs in places that aren’t Minneapolis, I understand Kansas City is a conveniently-located venue. Middle of the country, good-sized airport, lots of hotels, and for the big event, an absolutely amazing art museum that regularly hosts such things, or a just-as-amazing train station, and literally, the best chocolatier in the country if not the planet. Oh and a few solid hotels and convention centers. Just sayin’…;-P

Azure Management tip for PowerShell on MacOS

The other day I was messaging back and forth with a good friend and former minion who was talking about a roadblock he’d hit with trying to use PowerShell on a Mac to manage Azure servers. We talked a bit, and then I went hunting and stumbled on a way to do this, so I thought I should share it with you.

In this specific case, he was trying to manage his Exchange instance, and when he’d run Connect-ExchangeOnline, he’d get the web auth dialog, authenticate, and then get the following error:

Exception: This parameter set requires WSMan, and no supported WSMan client library was found. WSMan is either not installed, or unavailable for this system.

I tried it myself and got the same error, so I started poking. A bit of searching on the PowerShell Gallery led me to PSWSMan, and the docs for that helped me then run Install-WSMan (part of this on the Mac uses MacPorts, FYI). You have to do the install as root, but once you’ve installed and enabled the modules, you should be able to do many Azure things via PowerShell from your Mac.
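
Roughly, the sequence looked like this (a hedged sketch from memory; check the PSWSMan docs for the current steps, and note that Install-WSMan needs to run as root):

```powershell
# Install the PSWSMan module from the PowerShell Gallery,
# then swap in its WSMan client libraries.
Install-Module -Name PSWSMan -Scope CurrentUser
Install-WSMan   # run this one from a root session, e.g. sudo pwsh

# After restarting pwsh, Connect-ExchangeOnline should get past the WSMan error.
Connect-ExchangeOnline -UserPrincipalName 'user@example.com'   # hypothetical UPN
```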

If you’re ever trying to find a PowerShell module, I cannot recommend PowerShell Gallery enough as a starting point, it’s an amazing resource.

On Applying to MacIT Companies

Sorry about the title being kind of meh, but it’s not that important. I wanted to spend some time on three companies in the macOS space, specifically, what it’s like trying to get hired by them.

Note: I’ve never been hired by any of them, but I’ve done some work here and there for and with two of them (Apple and Jamf).

I don’t have any particular beef with any of them. I don’t fit a lot of molds; I don’t have any one thing you can point at and say “this is what John is better at than everyone else.” My background has depth in breadth, if that makes any sense, so I get that’s weird given the modern “you must focus on one thing to truly master it” mindset. I’m also sans degree, and even with my experience (almost 30 years), that’s an issue. Always will be, perhaps because I’ve got so much experience.

The three companies in question are Jamf, Apple, and Kandji, and I’m going to talk about how they manage applicants they reject. I think this is important; it’s like a “waiter test”, wherein you see how someone treats people they don’t have to be kind to. It’s also similar to the “do you put your shopping cart back in the corral when you’re done” experiment.


Kandji

Of the three, Kandji is a clear winner based on what is, for me, the most critical of all metrics: ghosting. Kandji has never ghosted me. When I send in an application, I get a quick “we have your application” response, so I know they got it. When they rejected me, I got a clearly automated, but still well-written, email to that effect. Is it hand-written? No, but that’s less important than them taking the time to configure whatever ATS they may have to send out that email. It’s a small thing, really, to automate such a thing, but it’s important. It resonates with the values they claim to have. Even in saying “no”, they are being kind, they are treating me like a human being, they are meeting the basic level of good treatment. Even being rejected, I feel good about them, because based on that alone, they have baked at least some basic decency into their system. Huge.

Their application process is really simple and clear. I don’t need to create an account to apply. I can just upload my resume without dealing with “what version of what resume” etc. I don’t have to re-enter information that is obviously in my resume.

Applying for a gig at Kandji reeks of someone not treating the applicant as some form of show dog. Highly recommend, 10/10, would apply again.


Apple

Apple is a mixed bag, because their sheer size forces a certain amount of complexity upon them, and the space they hold at the literal center of the MacIT world makes it tempting to overlook their errors. Fortunately, I don’t have to overlook them, and I think no one is done any service by having their errors ignored. You can’t improve what you don’t know is suboptimal.

In terms of ghosting, Apple only gets second place because, on occasion, they have not ghosted me. Like…maybe 3-4 times (out of over 30 applications, yes, I’m sure about that number), but that counts. Were it not for that, they’d be tied for last; they ghost pretty relentlessly, and there’s no excuse for it. Setting up an automated rejection email is trivial, especially for a company of Apple’s size with the in-house resources they’ve access to. It’s inexcusable, to be honest, and Deirdre O’Brien really needs to fix this. All the “Apple cares” marketing in the world doesn’t make up for the inability to treat applicants humanely and kindly, especially when rejecting them.

Apple’s application process is definitely complicated, but I give them a bit of a bye on some of it, they are a massive company, the volume of applications they get in a given hour would overwhelm some smaller outfits. The way they manage resumes is annoying, as you can only have one in your profile, which makes tweaking a resume for a given position hard, since in theory, that resume is used to potentially vet you for other positions. The education section puts the last school you entered at the bottom of the list instead of the top, so if you enter a new school, the only way to have that info at the top is to manually rebuild the list. If you’re an educational itinerant wanderer like me, re-entering 5-6 schools is tedious.

The past employment section is annoying for much the same reasons as the education section, only more so. I have 11 or so employers across 30+ years, but if I enter a new one, it’s…at the bottom. Even if it’s current. S I G H. Come on, Apple, you can do better; I know you know how. But again, if I’ve given you a resume, why is this section required?

You can have a cover letter saved, but only one, so that’s kind of pointless. Once your profile is set, applying is pretty straightforward. Do you want to use your resume and a custom cover letter, do you want to apply with LinkedIn? That part of the process is pretty easy. Just don’t expect any communications if you’re rejected. Apple has a ghosting problem with rejected applicants.


Jamf

I can safely say Jamf has not ghosted me exactly once. I had one phone screen, where we couldn’t come to terms on me relocating from Tallahassee to Atlanta, and as I couldn’t, and they required it, that was a hard “no” for both of us. Which happens; sometimes you can’t easily relocate. I’ve hit that with different companies before. I was a single parent and an only child; relocating was difficult. No harm, no foul.

But when it comes to ghosting rejected applicants, Jamf is relentless. Other than that one phone screen, I’ve never gotten a rejection email from Jamf. They always ghost, to the point where I don’t think I’d be willing to apply anymore, because the ghosting issue is that bad. That may not be a common response, but I doubt I’m unique here. Again, Jamf is not a wee company with severe resource limitations. They’re fairly large; they could fix this, but evidently it’s not something they care to fix, so ¯\_(ツ)_/¯. As with Apple, Michelle Bucaria over at Jamf should have her team fix this; it’s inexcusable to not have an automatic rejection email sent out.

Their overall process is much improved over what it was a few years back; at this point, it’s basically the same as Kandji’s. Attach resume and cover letter, fill in some mandatory questions, fill in some other questions that may not need to be there, but there’s a really small number of them, so eh, no biggie. Really, Jamf’s biggest problem is the ghosting. (Which they could have fixed, but at this point, the process of finding out is of little interest to me anymore. That’s not to say I wouldn’t work for them, but I’ve shown my interest enough over the last decade or more; they know how to find me if they want me.)


So of the three, Kandji, Apple, Jamf, I absolutely recommend applying to Kandji. Easy process, and the company has a kind vibe I dig. If you’re in IT in the macOS or i(Pad)OS worlds, eventually you’ll apply to Apple or Jamf. Just be sure you’re okay with ghosting if you do.

Oh, and none of the three post salary ranges for positions. Which, honestly, is kind of lame, but of the three, I think Kandji would be the likeliest to fix that; it’s literally the only knock I can come up with for them.

Some Reasons why Installers are a mess

The other day, I was kvetching about installers (if you’re new here, I do that. A lot. You get used to it) and got an interesting reply on Twitter:

Installers should be a 100% solved problem these days. Use a freakin’ msi on Windows, and use a pkg on macOS and all will be right with the world. Do not use bloated Java-based installers. Yes, I’m looking at you, InstallAnywhere and whatever they use (used to use?) for Archicad.

So the thing is, yes and no. Installers are, as this person said, something that should be a solved problem, but they are not, and the problem is far, far worse on Windows, even if one uses MSIs, than it is on macOS. Like far worse, for a variety of reasons.

First, and this exists regardless of platform: outside of a very small number of companies, the amount of care put into installers is basically zero. It’s scutwork, it generates zero direct income, it’s the first circle of hell for interns, etc. One of the best reasons for drag-and-drop installs on macOS is that they literally avoid installer executables.

The problem with this is that an application with unit tests galore, UI/UX research by the pound, and endless resources devoted to making sure it runs in a way that makes angels sing gets installed by an executable that takes all of that and shoves it into some of the most awful UI imaginable, with almost zero testing on anything but the programmer’s machine. (I have literally dealt with installers that had hardcoded paths to the dev’s machine in the config files. More than once.)

It is also a place where the worst sorts of unnecessary cleverness lives, especially if we’re talking about high-end applications that have roaming license servers. In some cases, I have seen installers for Linux servers that:

  1. Require manual, undocumented hacks to install on Red Hat
  2. Don’t have a command-line option, so the only documented way to install is to install a GUI. On a Linux server. To install software. Shoot. Me.

There is no realm of software that can begin to touch installers for sheer, mind-numbing WAT. The things I have seen…

But even if you do things “the right way”, there are a lot of problems. I’m going to talk about Windows here, because it’s…well, honestly, it’s easy.

First, MSIs are not a magical cure-all. For example, I have seen MSI after MSI that has to be run as an admin, but if you’re not physically logged in as an admin and double-click the MSI, you’re never asked to elevate to run the installer; it just fails with a mostly useless error message. This is a trivial thing to avoid, and yet it happens a lot. I’m pretty sure I know why: the one dev building the installer logs in as an admin, so the problem never shows up for them.

Sometimes you get a “you need to be an admin, start over” dialog. Thanks a pantload Chet, you could have handled this as part of the install. But you didn’t, and now you suck.

You end up getting really good at running msiexec from administrator PowerShell windows after a while.

MSI has a fairly standard set of switches for quiet installs and the like, but a lot of MSI installers don’t use all of them for some reason. So the install either fails (sans logging, and logging on Windows is awful) or it runs in default mode with nary an error message to be found.
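For the curious, the standard switches look something like this. A minimal sketch, with a made-up package name and log path; it builds and prints the command rather than running it, since the real thing has to be run from an elevated prompt on an actual Windows box.

```shell
# Hypothetical example: a quiet, no-reboot MSI install with verbose logging.
# "app.msi" and the log path are placeholders, not a real package.
MSI="app.msi"
LOG="C:/Temp/app-install.log"

# /i = install, /qn = fully quiet (no UI), /norestart = suppress reboots,
# /l*v = verbose log to the given file. These are the switches a
# deployment-friendly MSI should honor.
CMD="msiexec /i $MSI /qn /norestart /l*v $LOG"

# From an elevated PowerShell or cmd window you would run this directly;
# here we just print it.
echo "$CMD"
```

When an installer honors these, a deployment tool can run it unattended and you get a log to read when it fails. When it doesn’t, you get the default-mode silence described above.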

Did I mention installers are regularly awful? Because they are. Autodesk, for example, has arbitrary filename length limits for its installers. Limits that it regularly violates. Let me say that again: the company that sets the rules for how long a filename can be for its installer to work names files that break that rule. Not filename and path, just the filename. Make it make sense.

But even if you don’t hit that kind of nonsense, even if the MSI is perfect, there’s the library issue, also known as the endless strings of VC++ and .NET runtimes that have to be installed, often in a specific order. When you uninstall, those stay behind, because if some other app is also using them (and you have no way of knowing this), removing them breaks other things, often, again, with no useful error message.

This is one place where the package format macOS uses is literal brilliance, a brilliance I did not truly appreciate until I had to deal with Windows deployments again. In some cases, we’re talking about nine or more separate MSIs that have to run in a specific order before the actual installer for the app you want to install can run. None of these will be visible to the user in Settings or the Control Panel. So when you “uninstall” the application, you’re only uninstalling the actual application, not the n things that were actually installed. Because there’s no safe way to do that in Windows.
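The prerequisite dance described above can be sketched like this. All the package names here are invented for illustration; the point is the shape of the problem: an ordered chain of runtime installers that must each succeed before the app’s own installer runs, none of which the uninstaller will ever touch.

```shell
# Hypothetical sketch: runtime MSIs must be installed in order before the
# application installer can run. Every package name below is made up.
PREREQS="vcredist2015_x64.msi dotnet-runtime-6.msi licensing-runtime.msi"
APP="bigcadapp.msi"

for PKG in $PREREQS; do
  # In real life this would be: msiexec /i "$PKG" /qn /norestart, checking
  # the exit code each time, because one failed prerequisite breaks
  # everything that comes after it.
  echo "installing prerequisite: $PKG"
done

echo "installing application: $APP"
# Uninstalling $APP later removes only $APP. The prerequisites stay behind,
# because there is no safe way to know if anything else depends on them.
```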

On macOS, you just shove all the libraries into the application bundle and you’re good to go. For example, Adobe Photoshop 2022 has 121 frameworks, 8 “required” plugins, 52+ other “required” pieces of code and image files, and hundreds of various localization files. All of them are in the application bundle. There are some other things outside it in the Applications folder, some config files in ~/Library/Application Support/Adobe, some in /Library/Application Support/Adobe, and really, that’s mostly it. Compared to how things work on Windows, that’s almost nothing.
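The contrast in deployment terms: because everything lives in the bundle, a macOS install is typically a single flat pkg and a single command. A minimal sketch, with a made-up package name; it builds and prints the command rather than running it, since the real invocation needs root on an actual Mac.

```shell
# Hypothetical example: one pkg carries the app bundle and everything in it.
# "BigApp.pkg" is a placeholder, not a real package.
PKG="BigApp.pkg"

# The real invocation would be: sudo installer -pkg "$PKG" -target /
# One command, one package, no runtime chain to sequence.
CMD="installer -pkg $PKG -target /"
echo "$CMD"
```

Compare that with nine-plus MSIs in a mandatory order, and the brilliance of the bundle model gets very concrete very fast.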

There’s also, on Windows, no good way around it. The architecture of the OS forces you into doing stuff like the VC++ redistributable/.NET Runtime dance. You don’t have a choice, because you absolutely can’t make assumptions about anything. Linux is literally more predictable than Windows.

However, that being said, there’s ways to make things easier.

  1. Fully support deployment tools, and no, I do not mean your weird .hta file. I mean SCCM/Intune/MEM, or other managed deployment tools. Out of the box, with support documentation. Autodesk is particularly good here. Solidworks is particularly not good here. If you create the software, it’s your job to make it work with managed deployment tools. Not the IT department’s, not the VAR’s, yours. If you cannot actually test with any deployment tools, then at the very least, do the work so that your installer can be slotted into said tools without weird dances and undocumented tricks.
  2. When you uninstall, clean up after yourself as much as possible. You can at least delete the application folders. That’s not too much to ask. Mathsoft is a rather annoying offender with this one.
  3. Make it easy to slipstream/update your deployment point. I should not have to build a new deployment point from scratch just to add a .x update file to it. Yes, that’s not zero work for you, but it’s more work for me when I have to push that out to a few thousand machines, and I should start charging consulting fees to ISVs that make this harder than it should be.
  4. If on Windows, use the registry as sparingly as possible, and for the love of christ, do not put application-level variables needed to run the app in the user-specific hive. Deployment tools don’t run as a logged-in user; doing stuff like that adds a lot of work to deployments. There’s no good reason to do this. Stop it.
  5. Avoid environment variables. Those suck to manage outside of your installer.
  6. Document your installers and the process in excruciating detail. Seriously, we don’t mind.
  7. Virtual machines are literally free. It costs only time to test your installer basics. Do that.
  8. If your installer doesn’t work perfectly smoothly in fully manual mode (manual double-click on the installer file), it is not ready to go. Periodt.
  9. If there are multiple post-install steps that have to be done before the installer quits, those aren’t post-install steps. Those are installer steps. Automate them as much as possible. Don’t pause things just so I can click “next”. If I have to click next to finish the install, then just assume “next” will always be clicked, and bake those steps into the installer. Don’t make people click when there’s no real need.
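Points 1 and 8 above boil down to something like this sketch of a deployment-tool-friendly install wrapper, the kind of thing SCCM or Intune would invoke. Everything here is a placeholder; the installer call is stubbed out so the shape is visible: run silently, log, and surface the real exit code instead of swallowing it.

```shell
# Hypothetical sketch of a managed-deployment install wrapper.
# Names and paths are placeholders; run_installer stands in for msiexec.
MSI="app.msi"
LOG="C:/Temp/app-install.log"

run_installer() {
  # Stand-in for: msiexec /i "$MSI" /qn /norestart /l*v "$LOG"
  # For real msiexec, exit code 0 = success and 3010 = success but a
  # reboot is required; anything else is a failure.
  return 0
}

run_installer
RC=$?
if [ "$RC" -eq 0 ] || [ "$RC" -eq 3010 ]; then
  echo "install ok (exit $RC)"
else
  # A deployment tool can only report what you give it, so pass the
  # failure and the log location along instead of eating them.
  echo "install failed (exit $RC), see $LOG" >&2
fi
```

If the wrapper has to pause for a human to click “next”, it fails point 9 too; everything between launch and exit code should be automated.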

Finally, the installer is not only code, it is the first code your customers will run. Why do you want them to hate you that fast?

OS X Server is Gone

“I read the news today, oh boy…”

Trite, but kind of how I feel. I get that it’s weird to feel anything for a product that, in all honesty, had ceased to be much of anything over the last few years. But for those not in the “greybeard” section of macOS née Mac OS X Server née Rhapsody, it’s hard to explain what that product, which, mind you, used to be neither free nor even cheap, meant to a lot of people. Especially those of us coming from the “dark ages” of AppleShareIP et al. There’s not a lot these days that creates the kind of community OS X Server did. It was a confluence of a lot of things that I don’t think could exist today.

OSXS, as it was more commonly known, started the careers of so many people and jumpstarted the careers of many others, like, well, me. It did something that hadn’t happened in the server space in years: it created something new. No one was an expert at it when it came out, so everyone was reset to equal, and that created so many ways for products and people to flourish. It was a boost to the people already familiar with Unix-based server OSes who understood a bit of LDAP (especially after 10.5, when NetInfo was finally put out to pasture. Oh my, the parties about that.)

It wasn’t magical, right? The product itself was always kind of an afterthought, and you could tell which part of it Apple was using to sell Macs in any given year. NetBoot was huge for a long, long time, then Open Directory, then other things. For orgs that didn’t want to move from NT Domains to Active Directory, or couldn’t, it was a way to delay that move. And along with the Xserve and the Xserve RAID, it gave Apple at least the ability to say “We have a place in the server room.” Which, in the halcyon days before we handed our entire infrastructure to Amazon and/or Microsoft Azure, was important.

There were a lot of people who learned how to be sysadmins because of that product. Which I think created the biggest thing about OSXS: the community. It was a weird community, but it was definitely there, and at times, it was just the most amazing, warm, welcoming thing. Ten seconds later, you wanted to burn it to the ground, but it was awesome just a little more than it was awful. There’s nothing like that any more. No, the MacAdmins Slack channel ain’t it. Not even close.

I met a lot of people who I’m still close to, or at least in touch with, because of that product. I’m glad it existed the way it did, and yes, it was time for it to go. But for a while, it was a really cool thing to be a part of, and I guess that’s the best thing you can say about any software. So to Eric, Doug, DaveO, Tom, Richard, all the people who helped create and make OSXS and other folks like Chuck, Bartosh, Schoun, Kevin, Greg, Josh, Nigel, Joel, Andrina, Pam, Sara, Mark, and so many others, I will always feel privileged to have been able to share in such a special, weird, and regularly wonderful community with y’all.


(Not) Fried Ice Cream

It’s not really fried, it just looks that way when you’re done. There are variations that do actually fry the ice cream, however this isn’t one of them. This is a bit tedious, so you probably aren’t going to make them all the time, but for a special occasion, they rock mightily.

• 1 half gallon Breyer’s Vanilla Ice Cream
• Whipped cream, preferably hand-whipped, but good Cool Whip works too
• 1 box Special K cereal, crushed well, but not to a powder.
• Chocolate Fudge topping
• Molasses
• Wide and Shallow Sundae dishes

Keep all the ingredients but the ice cream in the refrigerator until you use them, and put them back as soon as you’re done. Obviously, keep the ice cream in the freezer. The harder you can freeze the ice cream, the better. You really want to make these one at a time so that the ice cream doesn’t have a chance to melt any more than necessary.

Scoop enough ice cream so that you can make a ball of ice cream about the size of a softball. Once you’re done, put the ice cream ball in the sundae dish, and put both in the freezer until it’s hard frozen again.

Once the ice cream is re-frozen, then you coat the ice cream ball in molasses, and roll it in the crushed Special K until it’s thoroughly covered, i.e. you can’t see ice cream or molasses. Put the ball back in the dish, put the dishes back in the freezer, and let it reset.

The last two steps are done right before serving.

Pull the dishes with the Special K-covered ice cream out of the freezer, cover the exposed part of the ice cream with the chocolate fudge topping, add the whipped cream, and serve.