Adobe Flash is dead – let’s get it removed

After 24 years and 1,078 known security vulnerabilities, Adobe Flash has reached end-of-life status as of December 31, 2020.


To assist with the removal process, I’ve written a script which completely uninstalls Adobe Flash and its associated components. For more details, please see below the jump.

This script is designed to uninstall the Adobe Flash plug-ins and their associated components. As part of this, it performs the following actions:

  1. Stops the Adobe Flash Install Manager process.
  2. Unloads the LaunchDaemon used by the Adobe Flash update process, if it is loaded.
  3. Removes the Adobe Flash plug-ins, Adobe Flash Install Manager and their associated components.
  4. Removes the Adobe Flash preference pane settings at the user level.
  5. Forgets the installer package receipts.

The script is available below and at the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/uninstallers/adobe_flash_uninstall

#!/bin/bash
# This script uninstalls Adobe Flash software
AdobeFlashUninstall (){
echo "Uninstalling Adobe Flash software…"
# kill the Adobe Flash Player Install Manager
echo "Stopping Adobe Flash Install Manager."
killall "Adobe Flash Player Install Manager"
if [[ -f "/Library/LaunchDaemons/com.adobe.fpsaud.plist" ]]; then
echo "Stopping Adobe Flash update process."
/bin/launchctl bootout system "/Library/LaunchDaemons/com.adobe.fpsaud.plist"
fi
if [[ -f "/Library/Application Support/Macromedia/mms.cfg" ]]; then
echo "Deleting Adobe Flash update preferences."
rm "/Library/Application Support/Macromedia/mms.cfg"
fi
if [[ -e "/Library/Application Support/Adobe/Flash Player Install Manager/fpsaud" ]]; then
echo "Deleting Adobe software update app and support files."
rm "/Library/LaunchDaemons/com.adobe.fpsaud.plist"
rm "/Library/Application Support/Adobe/Flash Player Install Manager/FPSAUConfig.xml"
rm "/Library/Application Support/Adobe/Flash Player Install Manager/fpsaud"
fi
if [[ -e "/Library/Internet Plug-Ins/Flash Player.plugin" ]]; then
echo "Deleting NPAPI browser plug-in files."
rm -Rf "/Library/Internet Plug-Ins/Flash Player.plugin"
rm -Rf "/Library/Internet Plug-Ins/Flash Player Enabler.plugin"
rm "/Library/Internet Plug-Ins/flashplayer.xpt"
fi
if [[ -e "/Library/Internet Plug-Ins/PepperFlashPlayer/PepperFlashPlayer.plugin" ]]; then
echo "Deleting PPAPI browser plug-in files."
rm -Rf "/Library/Internet Plug-Ins/PepperFlashPlayer/PepperFlashPlayer.plugin"
rm "/Library/Internet Plug-Ins/PepperFlashPlayer/manifest.json"
fi
if [[ -e "/Library/PreferencePanes/Flash Player.prefPane" ]]; then
echo "Deleting Flash Player preference pane from System Preferences."
rm -Rf "/Library/PreferencePanes/Flash Player.prefPane"
fi
# Removing Adobe Flash preference pane settings at user level
allLocalUsers=$(/usr/bin/dscl . -list /Users UniqueID | awk '$2>500 {print $1}')
for userName in ${allLocalUsers}; do
# get path to user's home directory
userHome=$(/usr/bin/dscl . -read "/Users/$userName" NFSHomeDirectory 2>/dev/null | /usr/bin/sed 's/^[^\/]*//g')
/usr/bin/defaults delete "${userHome}/Library/Preferences/com.apple.systempreferences.plist" com.adobe.preferences.flashplayer 2>/dev/null
done
# Remove receipts
rm -Rf /Library/Receipts/*FlashPlayer*
pkgutil --forget com.adobe.pkg.FlashPlayer >/dev/null 2>&1
pkgutil --forget com.adobe.pkg.PepperFlashPlayer >/dev/null 2>&1
# Remove Adobe Flash Player Install Manager.app
if [[ -e "/Applications/Utilities/Adobe Flash Player Install Manager.app" ]]; then
echo "Deleting the Adobe Flash Player Install Manager app."
rm -Rf "/Applications/Utilities/Adobe Flash Player Install Manager.app"
fi
echo "Uninstall completed successfully."
}
# Set exit error code
ERROR=0
# Check to see if Adobe Flash software is installed by locating either the Flash NPAPI or PPAPI browser
# plug-ins in /Library/Internet Plug-Ins or the Adobe Flash Player Install Manager.app in /Applications/Utilities
if [[ -e "/Library/Internet Plug-Ins/Flash Player.plugin" ]] || [[ -e "/Library/Internet Plug-Ins/PepperFlashPlayer/PepperFlashPlayer.plugin" ]] || [[ -e "/Applications/Utilities/Adobe Flash Player Install Manager.app" ]]; then
# Run the Adobe Flash Player software uninstaller
AdobeFlashUninstall
if [[ $? -eq 0 ]]; then
echo "Adobe Flash Player uninstalled successfully."
else
echo "Error: Failed to uninstall Adobe Flash Player software."
ERROR=1
fi
else
echo "Error: Adobe Flash Player software is not installed."
ERROR=1
fi
exit $ERROR
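
Since the script removes files from privileged locations, it needs to be run with root privileges. For example, assuming you’ve saved a copy locally as uninstall_adobe_flash.sh (a filename of my choosing; adjust the name and path for your copy):

sudo /path/to/uninstall_adobe_flash.sh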

Setting up AutoPkg, AutoPkgr and JSSImporter on an Amazon Web Services macOS EC2 instance

One of the outcomes of Amazon Web Services’ recent re:Invent conference was AWS’s announcement that, as of November 30th, macOS EC2 instances would be available as on-demand instances or as part of one of AWS’s reduced-cost plans for those who need them long-term.

There are a few differences between AWS’s macOS offerings and their Linux and Windows offerings. macOS EC2 instances are set up to run on actual Apple hardware, as opposed to being completely virtualized. This means that there are the following dependencies to be aware of:

  1. macOS EC2 instances must run on dedicated hosts (AWS has stated these are Mac Minis)
  2. One macOS EC2 instance can be provisioned per dedicated host.

AWS has also stipulated that dedicated hosts for macOS EC2 instances have a minimum billing duration of 24 hours. That means that even if your dedicated host was only up and running for one hour, you will be billed as if it was running for 24 hours.

For now, only certain AWS regions have EC2 Mac instances available. As of December 20th, 2020, macOS EC2 instances are available in the following AWS Regions:

  • US-East-1 (Northern Virginia)
  • US-East-2 (Ohio)
  • US-West-2 (Oregon)
  • EU-West-1 (Ireland)
  • AP-Southeast-1 (Singapore)

The macOS EC2 instances at this time support two versions of macOS:

  • macOS Mojave 10.14
  • macOS Catalina 10.15

macOS Big Sur is not yet supported as of December 20th, 2020, but AWS has stated that Big Sur support will be coming shortly.

By default, macOS EC2 instances include pre-installed software such as the AWS Command Line Interface (AWS CLI), Command Line Tools for Xcode and Homebrew.

For folks looking to build services or do continuous integration testing on macOS, it’s clear that AWS went to considerable lengths to have macOS EC2 instances be as fully-featured as their other EC2 offerings. Amazon has also either made it possible to install the tools you need or just went ahead and installed them for you. They’ve also included drivers for their faster networking options and made it possible to manage and monitor Mac EC2 instances using AWS’s tools just like their Linux and Windows EC2 instances.

That said, all of this comes with a price tag. Here’s how it works out (all figures expressed in US dollars):

mac1 Dedicated Hosts (on-demand pricing):

$1.083/hour (currently with a 24-hour minimum charge, after which billing is by the second)
$25.99/day
$181.93/week
$9493.58/year

Now, you can sign up for an AWS Savings Plan and save some money by paying up-front for one year or three years. Paying for three years, all cash up front, is the cheapest option currently available:

$0.764/hour
$18.33/day
$128.31/week
$6697.22/year

Now some folks are going to look at that and have a heart attack, while others are going to shrug because the money involved amounts to a rounding error on their existing AWS bill. I’m mainly going through this to point out that hosting Mac services on AWS is going to come with costs. None of AWS’s existing Mac offerings are part of AWS’s Free Tier.

OK, so we’ve discussed a lot of the background but let’s get to the point: How do you set up AutoPkg to run in the AWS cloud? For more details, please see below the jump.

If you’ve worked with Amazon Web Service’s EC2 service previously, getting AutoPkg up and running in AWS should be fairly straightforward. That said, if you haven’t worked with either AWS or EC2 before, there may be a bit of a learning curve. For folks in this situation, I gave a talk on Amazon Web Services which should help get you started:

Getting Started with Amazon Web Services: http://docs.macsysadmin.se/2018/video/Day4Session4.mp4

In this example, I’m going to set up a macOS EC2 instance with the following:

  • git
  • AutoPkg
  • AutoPkgr
  • JSSImporter

Pre-requisites:

  • An Amazon Web Services account
  • Money (at least $25.99)

Setting up a dedicated host

To run a macOS instance in EC2, you need to first choose an actual Mac Mini to run that instance on. Amazon refers to this as a dedicated host and the process looks like this:

1. Open the Amazon EC2 web console at https://console.aws.amazon.com/ec2/.

2. In the navigation pane, choose Dedicated Hosts.


3. Choose Allocate Dedicated Host and then do the following:


For Name Tag, give it an appropriate name.


For Instance family, choose mac1.


For Support multiple instance types, uncheck the Enable checkbox.

For Instance type, select mac1.metal.


For Availability Zone, choose the Availability Zone for the Dedicated Host. (For this example, I’m in US-East-2 and I’m choosing us-east-2b.)


For Instance auto-placement, do not check anything.
For Host recovery, do not check anything.
For Quantity, keep 1.


Click the Allocate button. (This is the part where Amazon charges you $25.99)


At this point, the Dedicated Host should be created.
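
For those who prefer the command line, the same allocation can also be sketched with the AWS CLI. This is an equivalent of the console steps above, using the same instance type and Availability Zone (adjust both for your own setup):

aws ec2 allocate-hosts --instance-type mac1.metal --availability-zone us-east-2b --auto-placement off --quantity 1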


 

Setting up a macOS EC2 instance

If you haven’t previously done so, set up an AWS SSH key pair for use with EC2 instances:

https://docs.aws.amazon.com/cli/latest/userguide/cli-services-ec2-keypairs.html
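
If you’d rather create the key pair from the command line, here’s a minimal AWS CLI sketch which saves the private key using the same name and location used in the SSH examples later in this post:

aws ec2 create-key-pair --key-name AutoPkg_SSH_Keypair --query 'KeyMaterial' --output text > ~/.ssh/AutoPkg_SSH_Keypair.pem
chmod 400 ~/.ssh/AutoPkg_SSH_Keypair.pem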

Once your keypair has been created, select the Dedicated Host that you created and then do the following:

Choose Actions, Launch instances onto host.


Select a macOS AMI. For this example, I’m selecting macOS Catalina 10.15.7.


Select the mac1.metal instance type.


Click the Next: Configure Instance Details button.


On the Configure Instance Details page, verify the following:

Tenancy: Set as dedicated host.

 


Host is set as the Dedicated Host you created.


Update Affinity as needed. Mine is set to Off.

In User Data, I have a script that the Mac EC2 instance can run at boot.

This user data script configures the Mac EC2 instance with the following:

  • A password for the default ec2-user account
  • Automatic login for the default ec2-user account
  • git
  • AutoPkg
  • AutoPkgr
  • JSSImporter

Once these tools and modules are installed, the script configures AutoPkg to use the recipe repos defined in the AutoPkg repos section.

If you want to use this user data script, it’s available from the following address on GitHub:

https://github.com/rtrouton/aws_scripts/tree/master/setup_mac_ec2_instance_for_autopkg

Before adding the user data script to the instance build process, check the variables in the script and verify that they are set up the way you want. Also be aware that EC2 user data has an upper size limit of 16 KB, so the script must stay under that size.
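
To verify that the script is under that size limit before adding it, you can check its byte count (the path below is a placeholder):

/usr/bin/wc -c /path/to/user_data_script.sh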


 

From there, either copy and paste the script into the available user data blank or select the user data script as a file.


Double-check your Tenancy, Host and User Data settings to make sure everything is set as desired, then click the Next: Add Storage button.


Set how much storage you want. For this example, I’m setting it at 60 GBs of storage.


 

Note: Depending on how many AutoPkg recipes you’re running and the size of the installers, you may want to double or even triple the amount of storage I’m setting. Another thing to be aware of is that the instance’s boot volume will need to be resized to recognize the additional space. If you use the user data script linked above, boot volume resizing is included as part of the script’s run.

Once storage is set, click the Next: Add Tags button.


Set tags as desired, then click the Next: Security Group button.


Choose the options to set a security group as desired.


If you don’t have a security group available, I recommend creating one which allows SSH access only from your IP address. Once the security group is set, click the Review and Launch button.


Review your instance’s settings and make sure everything is OK. Once you’re sure, click the Launch button.


When prompted, select your SSH keypair, then click the Launch instances button.


Your Mac instance will now launch on the dedicated host. To see it in the Instances list, click the View instances on host button.


To find out its public DNS address and other useful information, click on the instance ID.


Wait about fifteen minutes for your instance to finish setting itself up. After that you should be able to connect to it via SSH and (assuming you configured the right variables for VNC access) also via remote screen sharing.
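
While you wait, you can also check on the instance’s state from your own Mac using the AWS CLI (the instance ID below is a placeholder):

aws ec2 describe-instances --instance-ids i-0abcd1234efgh5678 --query 'Reservations[].Instances[].State.Name' --output text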

Connecting to the macOS EC2 instance following setup

Following setup, you can connect to the newly-built EC2 instance via SSH. To do so, open Terminal and use the following SSH command:

ssh -i /path/my-key-pair.pem ec2-user@my-instance-public-dns-name

For example, if your SSH keypair was stored in ~/.ssh and named AutoPkg_SSH_Keypair.pem, you would use the following command to connect to a macOS EC2 instance whose address is ec2-3-23-97-197.us-east-2.compute.amazonaws.com:

ssh -i ~/.ssh/AutoPkg_SSH_Keypair.pem ec2-user@ec2-3-23-97-197.us-east-2.compute.amazonaws.com

No password is needed in this case, as you are using your SSH keypair to authenticate the SSH session.


 

To connect via VNC, I recommend setting up VNC to run over an SSH tunnel. The reason for this is that VNC by default does not encrypt its traffic so all network communication between you and the instance (including any passwords) would be sent in the clear. Using an SSH tunnel will allow you to wrap this unencrypted traffic inside SSH’s encryption, which should secure it against third parties.

To set up VNC to run inside an SSH tunnel, you will need to first set up a password for the ec2-user account if you haven’t done so already. You can do this by connecting to the instance via SSH and running the following passwd command:

sudo passwd ec2-user


Once the command has been run, follow the prompts to change the password. Once the password is set up, run the following SSH command on your end:

ssh -L 5900:localhost:5900 -i /path/my-key-pair.pem ec2-user@my-instance-public-dns-name

For example, if your SSH keypair was stored in ~/.ssh on your Mac and named AutoPkg_SSH_Keypair.pem, you would use the following command to set up an SSH tunnel for VNC between your Mac and a macOS EC2 instance whose address is ec2-3-23-97-197.us-east-2.compute.amazonaws.com:

ssh -L 5900:localhost:5900 -i ~/.ssh/AutoPkg_SSH_Keypair.pem ec2-user@ec2-3-23-97-197.us-east-2.compute.amazonaws.com

Once that’s done, do the following:

1. In the Finder, under the Go menu, select Connect to Server.
2. In the Connect to Server window, enter the following:

vnc://localhost:5900


When prompted, use the following username and password:

Username: ec2-user
Password: Whatever password you defined in the script for the ec2-user account to use.


Once connected, you’ll be able to work with the Mac instance like you would any other remotely-accessible Mac.


In the case of an AutoPkg server built using the user data script I linked to above, you could open AutoPkgr and start setting up your recipes to begin scheduled runs.


Resizing an AWS macOS EC2 instance’s boot drive to use all available disk space

I’ve started working with Amazon Web Services’ new macOS EC2 instances and after a while, I noticed that no matter how much EBS drive space I assigned to an EC2 instance running macOS, the instance would only have around 30 GBs of usable space. In this example, I had assigned around 200 GBs of EBS storage, but the APFS container was only using around 30 GBs of the available space.


After talking with AWS Support, there’s a fix for this using APFS container resizing. This is a topic I’ve discussed previously in the context of resizing boot drives for virtual machines. For more details, see below the jump.

To resize a macOS EC2 instance’s boot volume, you need to do two things:

1. Identify the appropriate APFS container:

APFS containers act as storage pools for APFS volumes. APFS volumes are what act as the mounted filesystem, where you store your files, directories, metadata, etc. When you grow the APFS container, the APFS volumes will likewise get additional space.

To identify the container for the instance’s boot volume, use the command shown below:

/usr/sbin/diskutil list physical external | awk '/Apple_APFS/ {print $7}'


2. Once the appropriate APFS container has been identified, use diskutil to resize the container with all available disk space.

You can specify a size of zero (0) to grow the targeted container using all unallocated drive space.

/usr/sbin/diskutil apfs resizeContainer apfs_container_id_goes_here 0
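
Putting the identification and resizing steps together, here’s a minimal sketch which assumes the diskutil / awk command above returns a single APFS physical store identifier:

# Capture the APFS physical store identifier, then grow the container to use all free space
apfs_container=$(/usr/sbin/diskutil list physical external | awk '/Apple_APFS/ {print $7}')
/usr/sbin/diskutil apfs resizeContainer "$apfs_container" 0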

In this example, I have an instance where my APFS-formatted boot drive is using 32 GBs of space, but the instance has 200 GBs of available EBS disk space.

Assuming that the command above gave us disk1s2 as a result, the command shown below can be used to resize the boot drive’s APFS container with all available disk space.

/usr/sbin/diskutil apfs resizeContainer disk1s2 0
ec2-user@ip-172-31-23-238 ~ % /usr/sbin/diskutil apfs resizeContainer disk1s2 0
Started APFS operation
Aligning grow delta to 182,536,110,080 bytes and targeting a new physical store size of 214,538,608,640 bytes
Determined the maximum size for the targeted physical store of this APFS Container to be 214,537,580,544 bytes
Resizing APFS Container designated by APFS Container Reference disk2
The specific APFS Physical Store being resized is disk1s2
Verifying storage system
Using live mode
Performing fsck_apfs -n -x -l -S /dev/disk1s2
Checking the container superblock
Checking the EFI jumpstart record
Checking the space manager
Checking the space manager free queue trees
Checking the object map
Checking volume
Checking the APFS volume superblock
The volume Macintosh HD – Data was formatted by newfs_apfs (1412.141.1) and last modified by apfs_kext (1412.141.1)
Checking the object map
Checking the snapshot metadata tree
Checking the snapshot metadata
Checking the extent ref tree
Checking the fsroot tree
Checking volume
Checking the APFS volume superblock
The volume Preboot was formatted by diskmanagementd (1412.141.1) and last modified by apfs_kext (1412.141.1)
Checking the object map
Checking the snapshot metadata tree
Checking the snapshot metadata
Checking the extent ref tree
Checking the fsroot tree
Checking volume
Checking the APFS volume superblock
The volume Recovery was formatted by diskmanagementd (1412.141.1) and last modified by apfs_kext (1412.141.1)
Checking the object map
Checking the snapshot metadata tree
Checking the snapshot metadata
Checking the extent ref tree
Checking the fsroot tree
Checking volume
Checking the APFS volume superblock
The volume VM was formatted by diskmanagementd (1412.141.1) and last modified by apfs_kext (1412.141.1)
Checking the object map
Checking the snapshot metadata tree
Checking the snapshot metadata
Checking the extent ref tree
Checking the fsroot tree
Checking volume
Checking the APFS volume superblock
The volume Macintosh HD was formatted by diskmanagementd (1412.141.1) and last modified by apfs_kext (1412.141.1)
Checking the object map
Checking the snapshot metadata tree
Checking the snapshot metadata
Checking the extent ref tree
Checking the fsroot tree
Verifying allocated space
The volume /dev/disk1s2 appears to be OK
Storage system check exit code is 0
Growing APFS Physical Store disk1s2 from 32,002,498,560 to 214,538,608,640 bytes
Modifying partition map
Growing APFS data structures
Finished APFS operation
ec2-user@ip-172-31-23-238 ~ %


Once the container resizing has completed, the OS should recognize and be able to use the newly allocated space.


This can be confirmed by other disk space measuring tools.

ec2-user@ip-172-31-23-238 ~ % df -h
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk2s5 200Gi 10Gi 181Gi 6% 488252 2094615348 0% /
devfs 186Ki 186Ki 0Bi 100% 642 0 100% /dev
/dev/disk2s1 200Gi 5.7Gi 181Gi 4% 161309 2094942291 0% /System/Volumes/Data
/dev/disk2s4 200Gi 2.0Gi 181Gi 2% 1 2095103599 0% /private/var/vm
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /System/Volumes/Data/home
ec2-user@ip-172-31-23-238 ~ %


Identifying Universal 2 apps on macOS Mojave and later

As Apple introduces its new Apple Silicon Macs, it’s important that Mac admins be able to identify whether their environment’s software will run natively on both Intel and Apple Silicon as Universal 2 apps, or whether their Apple Silicon Macs will first need Apple’s Rosetta 2 translation service installed to allow those apps to run.

To assist with this identification effort, Apple has provided two tools:

  • lipo
  • file

Both have been around for a while and initially helped identify the original Universal binaries, which were compiled to support both PowerPC and Intel processors. They’ve now been updated for this new processor transition and either will be able to identify if an app’s binary was compiled for the following:

  • x86_64 (Intel)
  • arm64 (Apple Silicon)
  • Both x86_64 and arm64 (Universal 2)

For more details, please see below the jump.

To identify if an app is Intel-only or Universal using the lipo tool, please use the command shown below:

lipo -detailed_info /path/to/binary

For example, on macOS Catalina 10.15.7 Apple’s Safari browser is an Intel-only binary, since macOS Catalina won’t run on an Apple Silicon Mac. Running lipo on macOS Catalina 10.15.7’s Safari should produce output similar to what’s shown below:

username@computername ~ % lipo -detailed_info /Applications/Safari.app/Contents/MacOS/Safari
input file /Applications/Safari.app/Contents/MacOS/Safari is not a fat file
Non-fat file: /Applications/Safari.app/Contents/MacOS/Safari is architecture: x86_64
username@computername ~ %

Likewise, Jamf Pro 10.25.2’s jamf binary now supports both Intel and Apple Silicon. Running the lipo command described above should produce output similar to what’s shown below:

username@computername ~ % lipo -detailed_info /usr/local/jamf/bin/jamf
Fat header in: /usr/local/jamf/bin/jamf
fat_magic 0xcafebabe
nfat_arch 2
architecture x86_64
cputype CPU_TYPE_X86_64
cpusubtype CPU_SUBTYPE_X86_64_ALL
capabilities 0x0
offset 16384
size 6441136
align 2^14 (16384)
architecture arm64
cputype CPU_TYPE_ARM64
cpusubtype CPU_SUBTYPE_ARM64_ALL
capabilities 0x0
offset 6471680
size 6121168
align 2^14 (16384)
username@computername ~ %

To identify if an app is Intel-only or Universal using the file tool, please use the command shown below:

file /path/to/binary

Running the file command described above on macOS Catalina 10.15.7’s Safari should produce output similar to what’s shown below:

username@computername ~ % file /Applications/Safari.app/Contents/MacOS/Safari
/Applications/Safari.app/Contents/MacOS/Safari: Mach-O 64-bit executable x86_64
username@computername ~ %

Running the file command described above on the Jamf Pro 10.25.2 jamf binary should produce output similar to what’s shown below:

username@computername ~ % file /usr/local/jamf/bin/jamf
/usr/local/jamf/bin/jamf: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64]
/usr/local/jamf/bin/jamf (for architecture x86_64):	Mach-O 64-bit executable x86_64
/usr/local/jamf/bin/jamf (for architecture arm64):	Mach-O 64-bit executable arm64
username@computername ~ %

For more information about app testing, Howard Oakley has a blog post discussing the lipo tool in more detail, which I recommend checking out. I’ve linked to it below:

Magic, lipo and testing for Universal binaries:
https://eclecticlight.co/2020/07/24/magic-lipo-and-testing-for-universal-binaries/
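
If you want to survey every app in /Applications at once, a rough script along these lines can help. This is a sketch of my own rather than anything from Apple’s documentation; it assumes each app’s Info.plist uses the standard CFBundleExecutable key and relies on lipo’s -archs option to list architectures:

#!/bin/bash
# Report the architectures of each app binary found in /Applications.
for app in /Applications/*.app; do
  # Look up the app's executable name from its Info.plist
  binary_name=$(/usr/bin/defaults read "$app/Contents/Info" CFBundleExecutable 2>/dev/null)
  binary_path="$app/Contents/MacOS/$binary_name"
  if [[ -f "$binary_path" ]]; then
    echo "$app: $(/usr/bin/lipo -archs "$binary_path" 2>/dev/null)"
  fi
done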

Installing Rosetta 2 on Apple Silicon Macs

With Apple now officially selling Apple Silicon Macs, there’s a design decision which Apple made with macOS Big Sur that may affect various Mac environments:

At this time, macOS Big Sur does not install Rosetta 2 by default on Apple Silicon Macs.

Rosetta 2 is Apple’s software solution for aiding in the transition from Macs running on Intel processors to Macs running on Apple Silicon processors. It allows most Intel apps to run on Apple Silicon without issues, which provides time for vendors to update their software to a Universal build which can run on both Intel and Apple Silicon.

Without Rosetta 2 installed, Intel apps do not run on Apple Silicon. So for those folks who need Rosetta 2, how do you install it? For more details, please see below the jump.

You can install Rosetta 2 on Apple Silicon Macs using the softwareupdate command. To install Rosetta 2, run the following command with root privileges:

/usr/sbin/softwareupdate --install-rosetta

Installing this way will cause an interactive prompt to appear, asking you to agree to the Rosetta 2 license. If you want to perform a non-interactive install, please run the following command with root privileges to install Rosetta 2 and agree to the license in advance:

/usr/sbin/softwareupdate --install-rosetta --agree-to-license
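
To confirm afterwards that Rosetta 2 is present, one simple check is to look for Rosetta’s LaunchDaemon, which is the same file my script below tests for:

if [[ -f "/Library/Apple/System/Library/LaunchDaemons/com.apple.oahd.plist" ]]; then
  echo "Rosetta 2 is installed."
fi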

Having the non-interactive method for installing Rosetta 2 available makes it easier to script the installation process. My colleague Graham Gilbert has written a script for handling this process and discussed it here:

https://grahamgilbert.com/blog/2020/11/13/installing-rosetta-2-on-apple-silicon-macs/

I’ve written a similar script to Graham’s, which is available below and from the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/install_rosetta_on_apple_silicon

#!/bin/bash
# Installs Rosetta as needed on Apple Silicon Macs.
exitcode=0
# Determine OS version
# Save current IFS state
OLDIFS=$IFS
IFS='.' read osvers_major osvers_minor osvers_dot_version <<< "$(/usr/bin/sw_vers -productVersion)"
# restore IFS to previous state
IFS=$OLDIFS
# Check to see if the Mac is reporting itself as running macOS 11
if [[ ${osvers_major} -ge 11 ]]; then
  # Check to see if the Mac needs Rosetta installed by testing the processor
  processor=$(/usr/sbin/sysctl -n machdep.cpu.brand_string | grep -o "Intel")
  if [[ -n "$processor" ]]; then
    echo "$processor processor installed. No need to install Rosetta."
  else
    # Check for the Rosetta LaunchDaemon. If no LaunchDaemon is found,
    # perform a non-interactive install of Rosetta.
    if [[ ! -f "/Library/Apple/System/Library/LaunchDaemons/com.apple.oahd.plist" ]]; then
      /usr/sbin/softwareupdate --install-rosetta --agree-to-license
      if [[ $? -eq 0 ]]; then
        echo "Rosetta has been successfully installed."
      else
        echo "Rosetta installation failed!"
        exitcode=1
      fi
    else
      echo "Rosetta is already installed. Nothing to do."
    fi
  fi
else
  echo "Mac is running macOS $osvers_major.$osvers_minor.$osvers_dot_version."
  echo "No need to install Rosetta on this version of macOS."
fi
exit $exitcode

Preventing the macOS Big Sur upgrade advertisement from appearing in the Software Update preference pane on macOS Catalina

Not yet ready for macOS Big Sur in your environment, but you’ve trained your folks to look at the Software Update preference pane to see if there are available updates? One of the ways Apple is advertising the macOS Big Sur upgrade is via the Software Update preference pane:


You can block it from appearing using the softwareupdate --ignore command, but for macOS Catalina, Mojave and High Sierra, that command now requires one of the following enrollments as a prerequisite:

  • Apple Business Manager enrollment
  • Apple School Manager enrollment
  • Enrollment in a user-approved MDM

For more information on this, please reference the following KBase article: https://support.apple.com/HT210642 (search for the following: Major new releases of macOS can be hidden when using the softwareupdate(8) command).

For more details, please see below the jump.

Once that prerequisite condition has been satisfied, run the following command with root privileges:

softwareupdate --ignore "macOS Big Sur"

You should see text appear which looks like this:

Ignored updates:
(
"macOS Big Sur"
)


The advertisement banner should now be removed from the Software Update preference pane.


Note: If the prerequisite condition has not been fulfilled, running the softwareupdate --ignore command will have no effect.

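
If you later want the macOS Big Sur advertisement back, you can clear the ignored updates list by running the following command with root privileges:

softwareupdate --reset-ignored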

Detecting kernel panics using Jamf Pro

Kernel panics have (mostly) become rarer on the Mac platform. They are computer errors from which the operating system cannot safely recover without risking major data loss. Since a kernel panic means that the system has to halt or automatically reboot, it is a major inconvenience to the user of the computer.


Kernel panics are always the result of a software bug, either in Apple’s code or in the code of a third party’s kernel extension. Since they always stem from bugs and cause work interruptions, it’s a good idea to get on top of kernel panic issues as quickly as possible. To assist with this, a Jamf Pro Extension Attribute has been written to detect whether a kernel panic has taken place. For more details, please see below the jump.

When a Mac has a kernel panic, the information from the panic is logged to a log file in /Library/Logs/DiagnosticReports. This log file will be named something similar to this:

Kernel-date-goes-here.panic

The Extension Attribute is based on an earlier example posted by Mike Morales on the Jamf Nation forums. It performs the following tasks:

  1. Checks to see if there are any logs with a .panic file extension in /Library/Logs/DiagnosticReports.
  2. If there are, checks to see which are from the past seven days.
  3. Outputs a count of how many .panic logs were generated in the past seven days.

To test the Extension Attribute, it is possible to force a kernel panic on a Mac. To do this, please use the process shown below:

1. Disable System Integrity Protection
2. Run the following command with root privileges:

dtrace -w -n "BEGIN{ panic();}"


3. After the kernel panic, run a Jamf Pro inventory update.

After the inventory update, it should show that at least one kernel panic had occurred on that Mac. For more information about kernel panics, please see the link below:

https://developer.apple.com/library/content/technotes/tn2004/tn2118.html

The Extension Attribute is available below and at the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Extension_Attributes/kernel_panic_detection

#!/bin/bash
# Detects kernel panics which occurred in the last seven days.
#
# Original idea and script from here:
# https://www.jamf.com/jamf-nation/discussions/23976/kernal-panic-reporting#responseChild145035
#
# This Jamf Pro Extension Attribute is designed to
# check the contents of /Library/Logs/DiagnosticReports
# and report on how many log files with the file suffix
# of ".panic" were created in the previous seven days.
PanicLogCount=$(/usr/bin/find /Library/Logs/DiagnosticReports -Btime -7 -name "*.panic" | grep -c .)
echo "<result>$PanicLogCount</result>"
exit 0


Extension attributes for Jamf Protect

I’ve started working with Jamf Protect and, as part of that, I found that I needed to be able to report the following information about Jamf Protect to Jamf Pro:

  1. Is the Jamf Protect agent installed on a particular Mac?
  2. Is the Jamf Protect agent running on a particular Mac?
  3. Which Jamf Protect server is a particular Mac handled by?

To address these needs, I’ve written three Jamf Pro extension attributes which display the requested information as part of a Mac’s inventory record in Jamf Pro. For more details, please see below the jump:

The three Extension Attributes do the following:

jamf_protect_installed.sh: Checks to see if Jamf Protect is installed and the agent is able to run.

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Extension_Attributes/jamf_protect_installed


jamf_protect_status.sh: Checks and validates the following:

  • Jamf Protect is installed
  • The Jamf Protect processes are running

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Extension_Attributes/jamf_protect_status


jamf_protect_server.sh: Checks to see if Jamf Protect’s protectctl tool is installed on a particular Mac. If the protectctl tool is installed, it checks for and displays the Jamf Protect tenant name.

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Extension_Attributes/jamf_protect_server

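
As a rough illustration of the pattern these extension attributes follow, here is a minimal sketch. It is not one of the actual scripts linked above, and it assumes the protectctl tool installs to /usr/local/bin/protectctl (adjust the path for your environment):

#!/bin/bash
# Minimal sketch: report whether the Jamf Protect protectctl tool is present.
# The /usr/local/bin/protectctl path is an assumption; adjust as needed.
if [[ -x "/usr/local/bin/protectctl" ]]; then
  echo "<result>Installed</result>"
else
  echo "<result>Not Installed</result>"
fi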

Remotely gathering sysdiagnose files and uploading them to S3

One of the challenges for helpdesks, with folks now working remotely instead of in offices, has been that it’s now harder to gather logs from users’ Macs. A particular challenge for those working with AppleCare Enterprise Support has been requests for sysdiagnose logfiles.

The sysdiagnose tool is used for gathering a large amount of diagnostic files and logging, but the resulting output file is often a few hundred megabytes in size. This is usually too large to email, so alternate arrangements have to be made to get it off of the Mac in question and upload it to a location where the person needing the logs can retrieve them.
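
For reference, a sysdiagnose file can be generated manually by running the following command with root privileges (here writing the output to /tmp, a directory of my choosing):

/usr/bin/sysdiagnose -f /tmp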

After needing to gather sysdiagnose files a few times, I’ve developed a scripted solution which does the following:

  • Collects a sysdiagnose file.
  • Creates a read-only compressed disk image containing the sysdiagnose file.
  • Uploads the compressed disk image to a specified S3 bucket in Amazon Web Services.
  • Cleans up the directories and files created by the script.

For more details, please see below the jump.

Pre-requisites

You will need to provide the following information to successfully upload the sysdiagnose file to an S3 bucket:

  • S3 bucket name
  • AWS region for the S3 bucket
  • AWS programmatic user’s access key and secret access key
  • The S3 ACL used on the bucket

The AWS programmatic user must have at minimum the following access rights to the specified S3 bucket:

  • s3:ListBucket
  • s3:PutObject
  • s3:PutObjectAcl

The AWS programmatic user must have at minimum the following access rights to all S3 buckets in the account:

  • s3:ListAllMyBuckets

These access rights will allow the AWS programmatic user the ability to do the following:

  1. Identify the correct S3 bucket
  2. Write the uploaded file to the S3 bucket

Note: The AWS programmatic user would not have the ability to read the contents of the S3 bucket.

Information on S3 ACLs can be found via the link below:
https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

In an S3 bucket’s default configuration, where all public access is blocked, the ACL should be the one listed below:

private

Using the script

Once you have the S3 bucket and AWS programmatic user set up, you will need to configure the user-editable variables in the script:

# User-editable variables
s3AccessKey="add_AWS_access_key_here"
s3SecretKey="add_AWS_secret_key_here"
s3acl="add_AWS_S3_ACL_here"
s3Bucket="add_AWS_S3_bucket_name_here"
s3Region="add_AWS_S3_region_here"


For example, if you set up the following S3 bucket and user access:

What: S3 bucket named sysdiagnose-log-s3-bucket
Where: AWS’s US-East-1 region
ACL configuration: Default ACL configuration with all public access blocked
AWS access key: AKIAX0FXU19HY2NLC3NF
AWS secret access key: YWRkX0FXU19zZWNyZXRfa2V5X2hlcmUK

The user-editable variables should look like this:

# User-editable variables
s3AccessKey="AKIAX0FXU19HY2NLC3NF"
s3SecretKey="YWRkX0FXU19zZWNyZXRfa2V5X2hlcmUK"
s3acl="private"
s3Bucket="sysdiagnose-log-s3-bucket"
s3Region="us-east-1"


Note: The S3 bucket, access key and secret access key information shown above is no longer valid.

The script can be run manually or by a systems management tool. I’ve tested it with Jamf Pro and it appears to work without issue.

When run manually in Terminal, you should see the following output.

username@computername ~ % sudo /Users/username/Desktop/remote_sysdiagnose_collection.sh
Password:
Progress:
[|||||||||||||||||||||||||||||||||||||||100%|||||||||||||||||||||||||||||||||||]
Output available at '/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/logresults-20201016144407.1wghyNXE/sysdiagnose-VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.tar.gz'.
………………………………………………………..
created: /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sysdiagnoselog-20201016144407.VQgd61kP/VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg
Uploading: /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sysdiagnoselog-20201016144407.VQgd61kP/VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg (application/octet-stream) to sysdiagnose-log-s3-bucket:VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg
######################################################################### 100.0%
VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg uploaded successfully to sysdiagnose-log-s3-bucket.
username@computername ~ %


Once the script runs, you should see a disk image file appear in the S3 bucket with a name automatically generated using the following information:

Mac’s serial number – Mac’s hardware UUID – Year-Month-Day-Hour-Minute-Second


Once downloaded, the sysdiagnose file is accessible by mounting the disk image.


The script is available below, and at the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/remote_sysdiagnose_collection

#!/bin/bash
# Log collection script which performs the following tasks:
#
# * Collects a sysdiagnose file.
# * Creates a read-only compressed disk image containing the sysdiagnose file.
# * Uploads the compressed disk image to a specified S3 bucket.
# * Cleans up the directories and files created by the script.
#
# You will need to provide the following information to successfully upload
# to an S3 bucket:
#
# S3 bucket name
# AWS region for the S3 bucket
# AWS programmatic user's access key and secret access key
# The S3 ACL used on the bucket
#
# The AWS programmatic user must have at minimum the following access rights to the specified S3 bucket:
#
# s3:ListBucket
# s3:PutObject
# s3:PutObjectAcl
#
# The AWS programmatic user must have at minimum the following access rights to all S3 buckets in the account:
#
# s3:ListAllMyBuckets
#
# These access rights will allow the AWS programmatic user the ability to do the following:
#
# A. Identify the correct S3 bucket
# B. Write the uploaded file to the S3 bucket
#
# Note: The AWS programmatic user would not have the ability to read the contents of the S3 bucket.
#
# Information on S3 ACLs can be found via the link below:
# https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
#
# By default, the ACL should be the one listed below:
#
# private
#
# User-editable variables
s3AccessKey="add_AWS_access_key_here"
s3SecretKey="add_AWS_secret_key_here"
s3acl="add_AWS_S3_ACL_here"
s3Bucket="add_AWS_S3_bucket_name_here"
s3Region="add_AWS_S3_region_here"
# It should not be necessary to edit any of the variables below this line.
error=0
date=$(date +%Y%m%d%H%M%S)
serial_number=$(ioreg -c IOPlatformExpertDevice -d 2 | awk -F\" '/IOPlatformSerialNumber/{print $(NF-1)}')
hardware_uuid=$(ioreg -ad2 -c IOPlatformExpertDevice | xmllint --xpath '//key[.="IOPlatformUUID"]/following-sibling::*[1]/text()' -)
results_directory=$(mktemp -d -t logresults-${date})
sysdiagnose_name="sysdiagnose-${serial_number}-${hardware_uuid}-${date}.tar.gz"
dmg_name="${serial_number}-${hardware_uuid}-${date}.dmg"
dmg_file_location=$(mktemp -d -t sysdiagnoselog-${date})
fileName=$(echo "$dmg_file_location"/"$dmg_name")
contentType="application/octet-stream"
LogGeneration()
{
/usr/bin/sysdiagnose -f ${results_directory} -A "$sysdiagnose_name" -u -b
if [[ -f "$results_directory/$sysdiagnose_name" ]]; then
/usr/bin/hdiutil create -format UDZO -srcfolder ${results_directory} ${dmg_file_location}/${dmg_name}
else
echo "ERROR! Log file not created!"
error=1
fi
}
S3Upload()
{
# S3Upload function taken from the following site:
# https://very.busted.systems/shell-script-for-S3-upload-via-curl-using-AWS-version-4-signatures
usage()
{
cat <<USAGE
Simple script uploading a file to S3. Supports AWS signature version 4, custom
region, permissions and mime-types. Uses Content-MD5 header to guarantee
uncorrupted file transfer.
Usage:
`basename $0` aws_ak aws_sk bucket srcfile targfile [acl] [mime_type]
Where <arg> is one of:
aws_ak access key ('' for upload to public writable bucket)
aws_sk secret key ('' for upload to public writable bucket)
bucket bucket name (with optional @region suffix, default is us-east-1)
srcfile path to source file
targfile path to target (dir if it ends with '/', relative to bucket root)
acl s3 access permissions (default: public-read)
mime_type optional mime-type (tries to guess if omitted)
Dependencies:
To run, this shell script depends on command-line curl and openssl, as well
as standard Unix tools
Examples:
To upload file '~/blog/media/image.png' to bucket 'storage' in region
'eu-central-1' with key (path relative to bucket) 'media/image.png':
`basename $0` ACCESS SECRET storage@eu-central-1 \\
~/blog/image.png media/
To upload file '~/blog/media/image.png' to public-writable bucket 'storage'
in default region 'us-east-1' with key (path relative to bucket) 'x/y.png':
`basename $0` '' '' storage ~/blog/image.png x/y.png
USAGE
exit 0
}
guessmime()
{
mime=`file -b --mime-type $1`
if [ "$mime" = "text/plain" ]; then
case $1 in
*.css) mime=text/css;;
*.ttf|*.otf) mime=application/font-sfnt;;
*.woff) mime=application/font-woff;;
*.woff2) mime=font/woff2;;
*rss*.xml|*.rss) mime=application/rss+xml;;
*) if head $1 | grep '<html.*>' >/dev/null; then mime=text/html; fi;;
esac
fi
printf "$mime"
}
if [ $# -lt 5 ]; then usage; fi
# Inputs.
aws_ak="$1" # access key
aws_sk="$2" # secret key
bucket=`printf $3 | awk 'BEGIN{FS="@"}{print $1}'` # bucket name
region=`printf $3 | awk 'BEGIN{FS="@"}{print ($2==""?"us-east-1":$2)}'` # region name
srcfile="$4" # source file
targfile=`echo -n "$5" | sed "s/\/$/\/$(basename $srcfile)/"` # target file
acl=${6:-'public-read'} # s3 perms
mime=${7:-"`guessmime "$srcfile"`"} # mime type
md5=`openssl md5 -binary "$srcfile" | openssl base64`
# Create signature if not public upload.
key_and_sig_args=''
if [ "$aws_ak" != "" ] && [ "$aws_sk" != "" ]; then
# Need current and file upload expiration date. Handle GNU and BSD date command style to get tomorrow's date.
date=`date -u +%Y%m%dT%H%M%SZ`
expdate=`if ! date -v+1d +%Y-%m-%d 2>/dev/null; then date -d tomorrow +%Y-%m-%d; fi`
expdate_s=`printf $expdate | sed s/-//g` # without dashes, as we need both formats below
service='s3'
# Generate policy and sign with secret key following AWS Signature version 4, below
p=$(cat <<POLICY | openssl base64
{ "expiration": "${expdate}T12:00:00.000Z",
"conditions": [
{"acl": "$acl" },
{"bucket": "$bucket" },
["starts-with", "\$key", ""],
["starts-with", "\$content-type", ""],
["content-length-range", 1, `ls -l -H "$srcfile" | awk '{print $5}' | head -1`],
{"content-md5": "$md5" },
{"x-amz-date": "$date" },
{"x-amz-credential": "$aws_ak/$expdate_s/$region/$service/aws4_request" },
{"x-amz-algorithm": "AWS4-HMAC-SHA256" }
]
}
POLICY
)
# AWS4-HMAC-SHA256 signature
s=`printf "$expdate_s" | openssl sha256 -hmac "AWS4$aws_sk" -hex | sed 's/(stdin)= //'`
s=`printf "$region" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
s=`printf "$service" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
s=`printf "aws4_request" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
s=`printf "$p" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
key_and_sig_args="-F X-Amz-Credential=$aws_ak/$expdate_s/$region/$service/aws4_request -F X-Amz-Algorithm=AWS4-HMAC-SHA256 -F X-Amz-Signature=$s -F X-Amz-Date=${date}"
fi
# Upload. Supports anonymous upload if bucket is public-writable, and keys are set to ''.
echo "Uploading: $srcfile ($mime) to $bucket:$targfile"
curl \
-# -k \
-F key=$targfile \
-F acl=$acl \
$key_and_sig_args \
-F "Policy=$p" \
-F "Content-MD5=$md5" \
-F "Content-Type=$mime" \
-F "file=@$srcfile" \
https://${bucket}.s3.amazonaws.com/ | cat # pipe through cat so curl displays upload progress bar, *and* response
}
CleanUp()
{
if [[ -d ${results_directory} ]]; then
/bin/rm -rf ${results_directory}
fi
if [[ -d ${dmg_file_location} ]]; then
/bin/rm -rf ${dmg_file_location}
fi
}
LogGeneration
if [[ -f ${fileName} ]]; then
S3Upload "$s3AccessKey" "$s3SecretKey" "$s3Bucket"@"$s3Region" ${fileName} "$dmg_name" "$s3acl" "$contentType"
if [[ $? -eq 0 ]]; then
echo "$dmg_name uploaded successfully to $s3Bucket."
else
echo "ERROR! Upload of $dmg_name failed!"
error=1
fi
else
echo "ERROR! Creating $dmg_name failed! No upload attempted."
error=1
fi
CleanUp
exit $error

“Getting Started with Amazon Web Services” encore presentation at MacSysAdmin 2020

The MacSysAdmin conference, like many conferences in 2020, has moved to an online format for this year. The MacSysAdmin 2020 organizers have also decided to feature both sessions that are new for the 2020 conference and encore presentations of sessions given at past MacSysAdmin conferences.

I was pleased to see that my “Getting Started with Amazon Web Services” session from MacSysAdmin 2018 made the cut for MacSysAdmin 2020. For those interested, my session will be available for viewing this Friday, October 9th.