Identifying Universal 2 apps on macOS Mojave and later

As Apple introduces its new Apple Silicon Macs, it’s important for Mac admins to be able to identify whether their environment’s software will run natively on both Intel and Apple Silicon as Universal 2 apps, or whether Apple’s Rosetta 2 translation service will need to be installed first on their Apple Silicon Macs to allow those apps to run.

To assist with this identification effort, Apple has provided two tools:

  • lipo
  • file

Both tools have been around for a while and initially helped identify the original Universal binaries, which were compiled to support both PowerPC and Intel processors. They’ve now been updated for this new processor transition, and either one can identify whether an app’s binary was compiled for the following:

  • x86_64 (Intel)
  • arm64 (Apple Silicon)
  • Both x86_64 and arm64 (Universal 2)

For more details, please see below the jump.

To identify if an app is Intel-only or Universal using the lipo tool, please use the command shown below:

lipo -detailed_info /path/to/binary

For example, on macOS Catalina 10.15.7, Apple’s Safari browser is an Intel-only binary, since macOS Catalina won’t run on an Apple Silicon Mac. Running lipo on macOS Catalina 10.15.7’s Safari should produce output similar to what’s shown below:

username@computername ~ % lipo -detailed_info /Applications/Safari.app/Contents/MacOS/Safari
input file /Applications/Safari.app/Contents/MacOS/Safari is not a fat file
Non-fat file: /Applications/Safari.app/Contents/MacOS/Safari is architecture: x86_64
username@computername ~ %

Likewise, Jamf Pro 10.25.2’s jamf binary now supports both Intel and Apple Silicon. Running the lipo command described above should produce output similar to what’s shown below:

username@computername ~ % lipo -detailed_info /usr/local/jamf/bin/jamf
Fat header in: /usr/local/jamf/bin/jamf
fat_magic 0xcafebabe
nfat_arch 2
architecture x86_64
cputype CPU_TYPE_X86_64
cpusubtype CPU_SUBTYPE_X86_64_ALL
capabilities 0x0
offset 16384
size 6441136
align 2^14 (16384)
architecture arm64
cputype CPU_TYPE_ARM64
cpusubtype CPU_SUBTYPE_ARM64_ALL
capabilities 0x0
offset 6471680
size 6121168
align 2^14 (16384)
username@computername ~ %

To identify if an app is Intel-only or Universal using the file tool, please use the command shown below:

file /path/to/binary

Running the file command described above on macOS Catalina 10.15.7’s Safari should produce output similar to what’s shown below:

username@computername ~ % file /Applications/Safari.app/Contents/MacOS/Safari
/Applications/Safari.app/Contents/MacOS/Safari: Mach-O 64-bit executable x86_64
username@computername ~ %

Running the file command described above on the Jamf Pro 10.25.2 jamf binary should produce output similar to what’s shown below:

username@computername ~ % file /usr/local/jamf/bin/jamf
/usr/local/jamf/bin/jamf: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64]
/usr/local/jamf/bin/jamf (for architecture x86_64):	Mach-O 64-bit executable x86_64
/usr/local/jamf/bin/jamf (for architecture arm64):	Mach-O 64-bit executable arm64
username@computername ~ %
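
Beyond the two Apple tools, you can also script this check. The sketch below is my own example, not an Apple-provided tool: it uses lipo’s -archs flag, which prints just the list of architectures in a binary, to classify the result:

#!/bin/bash
# Sketch: classify a Mach-O binary as Intel-only, Apple Silicon-only or Universal,
# using the architecture list printed by "lipo -archs".
# Usage: ./check_arch.sh /path/to/binary
binary="$1"
archs=$(/usr/bin/lipo -archs "$binary" 2>/dev/null)
case "$archs" in
   *x86_64*arm64*|*arm64*x86_64*) echo "Universal: $archs";;
   *x86_64*) echo "Intel-only: $archs";;
   *arm64*) echo "Apple Silicon-only: $archs";;
   *) echo "Unable to read architectures for: $binary";;
esac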

For more information about app testing, Howard Oakley has a blog post discussing the lipo tool in more detail, which I recommend checking out. I’ve linked to it below:

Magic, lipo and testing for Universal binaries:
https://eclecticlight.co/2020/07/24/magic-lipo-and-testing-for-universal-binaries/

Installing Rosetta 2 on Apple Silicon Macs

With Apple now officially selling Apple Silicon Macs, there’s a design decision which Apple made with macOS Big Sur that may affect various Mac environments:

At this time, macOS Big Sur does not install Rosetta 2 by default on Apple Silicon Macs.

Rosetta 2 is Apple’s software solution for aiding in the transition from Macs running on Intel processors to Macs running on Apple Silicon processors. It allows most Intel apps to run on Apple Silicon without issues, which buys time for vendors to update their software to a Universal build that can run on both Intel and Apple Silicon.

Without Rosetta 2 installed, Intel apps do not run on Apple Silicon. So for those folks who need Rosetta 2, how do you install it? For more details, please see below the jump.

You can install Rosetta 2 on Apple Silicon Macs using the softwareupdate command. To install Rosetta 2, run the following command with root privileges:

/usr/sbin/softwareupdate --install-rosetta

Installing this way will cause an interactive prompt to appear, asking you to agree to the Rosetta 2 license. If you want to perform a non-interactive install, please run the following command with root privileges to install Rosetta 2 and agree to the license in advance:

/usr/sbin/softwareupdate --install-rosetta --agree-to-license
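
If you only want to check whether Rosetta 2 is already in place on a given Mac, one quick approach (my own sketch, based on Rosetta 2’s oahd launch daemon) is to look for the daemon’s process or its LaunchDaemon plist:

# Check for Rosetta 2's oahd daemon process
/usr/bin/pgrep oahd >/dev/null && echo "Rosetta 2 appears to be installed and running."

# Check for the Rosetta 2 LaunchDaemon plist (the same check used by the script below)
[[ -f "/Library/Apple/System/Library/LaunchDaemons/com.apple.oahd.plist" ]] && echo "Rosetta 2 LaunchDaemon plist found."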

Having the non-interactive method for installing Rosetta 2 available makes it easier to script the installation process. My colleague Graham Gilbert has written a script for handling this process and discussed it here:

https://grahamgilbert.com/blog/2020/11/13/installing-rosetta-2-on-apple-silicon-macs/

I’ve written a similar script to Graham’s, which is available below and from the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/install_rosetta_on_apple_silicon

#!/bin/bash

# Installs Rosetta as needed on Apple Silicon Macs.

exitcode=0

# Determine OS version
# Save current IFS state
OLDIFS=$IFS

IFS='.' read osvers_major osvers_minor osvers_dot_version <<< "$(/usr/bin/sw_vers -productVersion)"

# restore IFS to previous state
IFS=$OLDIFS

# Check to see if the Mac is reporting itself as running macOS 11
if [[ ${osvers_major} -ge 11 ]]; then

  # Check to see if the Mac needs Rosetta installed by testing the processor
  processor=$(/usr/sbin/sysctl -n machdep.cpu.brand_string | grep -o "Intel")

  if [[ -n "$processor" ]]; then
    echo "$processor processor installed. No need to install Rosetta."
  else

    # Check Rosetta LaunchDaemon. If no LaunchDaemon is found,
    # perform a non-interactive install of Rosetta.
    if [[ ! -f "/Library/Apple/System/Library/LaunchDaemons/com.apple.oahd.plist" ]]; then
      /usr/sbin/softwareupdate --install-rosetta --agree-to-license

      if [[ $? -eq 0 ]]; then
        echo "Rosetta has been successfully installed."
      else
        echo "Rosetta installation failed!"
        exitcode=1
      fi
    else
      echo "Rosetta is already installed. Nothing to do."
    fi
  fi
else
  echo "Mac is running macOS $osvers_major.$osvers_minor.$osvers_dot_version."
  echo "No need to install Rosetta on this version of macOS."
fi

exit $exitcode

Preventing the macOS Big Sur upgrade advertisement from appearing in the Software Update preference pane on macOS Catalina

Not yet ready for macOS Big Sur in your environment, but you’ve trained your folks to look at the Software Update preference pane to see if there are available updates? One of the ways Apple is advertising the macOS Big Sur upgrade is via the Software Update preference pane:

You can block it from appearing using the softwareupdate --ignore command, but for macOS Catalina, Mojave and High Sierra, that command now requires one of the following enrollments as a pre-requisite:

  • Apple Business Manager enrollment
  • Apple School Manager enrollment
  • Enrollment in a user-approved MDM

For more information on this, please reference the following KBase article: https://support.apple.com/HT210642 (search for the following: Major new releases of macOS can be hidden when using the softwareupdate(8) command).

For more details, please see below the jump.

Once that pre-requisite condition has been satisfied, run the following command with root privileges:

softwareupdate --ignore "macOS Big Sur"

You should see text appear which looks like this:

Ignored updates:
(
"macOS Big Sur"
)

The advertisement banner should now be removed from the Software Update preference pane.

Note: If the pre-requisite condition has not been fulfilled, running the softwareupdate --ignore command will have no effect.
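
If you later decide to allow the macOS Big Sur advertisement to appear again, softwareupdate can also reset the ignored updates list. To do so, run the following command with root privileges:

softwareupdate --reset-ignored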

Detecting kernel panics using Jamf Pro

Something that has (mostly) become rarer on the Mac platform is the kernel panic, a computer error from which the operating system cannot safely recover without risking major data loss. Since a kernel panic means that the system has to halt or automatically reboot, it is a major inconvenience to the user of the computer.

Kernel panics are almost always the result of a software bug, either in Apple’s code or in the code of a third-party kernel extension. Since they stem from bugs and cause work interruptions, it’s a good idea to get on top of kernel panic issues as quickly as possible. To assist with this, a Jamf Pro Extension Attribute has been written to detect if a kernel panic has taken place. For more details, please see below the jump.

When a Mac has a kernel panic, the information from the panic is logged to a log file in /Library/Logs/DiagnosticReports. This log file will be named something similar to this:

Kernel-date-goes-here.panic
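
If you want to check a Mac by hand, listing any stored panic logs is a one-liner; no output means no panic logs are present:

ls /Library/Logs/DiagnosticReports/*.panic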

The Extension Attribute is based on an earlier example posted by Mike Morales on the Jamf Nation forums. It performs the following tasks:

  1. Checks to see if there are any logs in /Library/Logs/DiagnosticReports with a .panic file extension.
  2. If there are, checks to see which are from the past seven days.
  3. Outputs a count of how many .panic logs were generated in the past seven days.

To test the Extension Attribute, it is possible to force a kernel panic on a Mac. To do this, please use the process shown below:

1. Disable System Integrity Protection
2. Run the following command with root privileges:

dtrace -w -n "BEGIN{ panic();}"

3. After the kernel panic, run a Jamf Pro inventory update.

After the inventory update, the Mac’s inventory record should show that at least one kernel panic has occurred on that Mac. For more information about kernel panics, please see the link below:

https://developer.apple.com/library/content/technotes/tn2004/tn2118.html

The Extension Attribute is available below and at the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Extension_Attributes/kernel_panic_detection

#!/bin/bash
# Detects kernel panics which occurred in the last seven days.
#
# Original idea and script from here:
# https://www.jamf.com/jamf-nation/discussions/23976/kernal-panic-reporting#responseChild145035
#
# This Jamf Pro Extension Attribute is designed to
# check the contents of /Library/Logs/DiagnosticReports
# and report on how many log files with the file suffix
# of ".panic" were created in the previous seven days.
PanicLogCount=$(/usr/bin/find /Library/Logs/DiagnosticReports -Btime -7 -name "*.panic" | grep -c .)
echo "<result>$PanicLogCount</result>"
exit 0

Extension attributes for Jamf Protect

I’ve started working with Jamf Protect and, as part of that, I found that I needed to be able to report the following information about Jamf Protect to Jamf Pro:

  1. Is the Jamf Protect agent installed on a particular Mac?
  2. Is the Jamf Protect agent running on a particular Mac?
  3. Which Jamf Protect server is a particular Mac handled by?

To address these needs, I’ve written three Jamf Pro extension attributes which display the requested information as part of a Mac’s inventory record in Jamf Pro. For more details, please see below the jump.

The three Extension Attributes do the following:

jamf_protect_installed.sh: Checks to see if Jamf Protect is installed and the agent is able to run.

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Extension_Attributes/jamf_protect_installed

jamf_protect_status.sh: Checks and validates the following:

  • Jamf Protect is installed
  • The Jamf Protect processes are running

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Extension_Attributes/jamf_protect_status

jamf_protect_server.sh: Checks to see if Jamf Protect’s protectctl tool is installed on a particular Mac. If the protectctl tool is installed, the script checks for and displays the Jamf Protect tenant name.

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Extension_Attributes/jamf_protect_server
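
For illustration, here’s a minimal sketch of the general approach these extension attributes take. It assumes Jamf Protect’s protectctl tool installs to /usr/local/bin/protectctl; the scripts linked above are the full versions:

#!/bin/bash
# Minimal sketch of a Jamf Protect install-detection Extension Attribute.
# Assumption: the protectctl tool installs to /usr/local/bin/protectctl.
if [[ -x "/usr/local/bin/protectctl" ]]; then
   echo "<result>Installed</result>"
else
   echo "<result>Not Installed</result>"
fi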

Remotely gathering sysdiagnose files and uploading them to S3

One of the challenges for helpdesks, with folks now working remotely instead of in offices, has been that it’s now harder to gather logs from users’ Macs. A particular challenge for those folks working with AppleCare Enterprise Support has been requests for sysdiagnose logfiles.

The sysdiagnose tool gathers a large number of diagnostic files and logs, but the resulting output file is often a few hundred megabytes in size. This is usually too large to email, so alternate arrangements have to be made to get it off the Mac in question and upload it to a location where the person needing the logs can retrieve them.
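
For reference, a sysdiagnose file can be generated manually by running the following command with root privileges, with the output landing in the directory specified by the -f option. The script shown later in this post also adds sysdiagnose’s -u and -b flags to avoid interactive prompting:

/usr/bin/sysdiagnose -f /private/tmp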

After needing to gather sysdiagnose files a few times, I’ve developed a scripted solution which does the following:

  • Collects a sysdiagnose file.
  • Creates a read-only compressed disk image containing the sysdiagnose file.
  • Uploads the compressed disk image to a specified S3 bucket in Amazon Web Services.
  • Cleans up the directories and files created by the script.

For more details, please see below the jump.

Pre-requisites

You will need to provide the following information to successfully upload the sysdiagnose file to an S3 bucket:

  • S3 bucket name
  • AWS region for the S3 bucket
  • AWS programmatic user’s access key and secret access key
  • The S3 ACL used on the bucket

The AWS programmatic user must have at minimum the following access rights to the specified S3 bucket:

  • s3:ListBucket
  • s3:PutObject
  • s3:PutObjectAcl

The AWS programmatic user must have at minimum the following access rights to all S3 buckets in the account:

  • s3:ListAllMyBuckets

These access rights will allow the AWS programmatic user the ability to do the following:

  1. Identify the correct S3 bucket
  2. Write the uploaded file to the S3 bucket

Note: The AWS programmatic user would not have the ability to read the contents of the S3 bucket.

Information on S3 ACLs can be found via the link below:
https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

In an S3 bucket’s default configuration, where all public access is blocked, the ACL should be the one listed below:

private

Using the script

Once you have the S3 bucket and AWS programmatic user set up, you will need to configure the user-editable variables in the script:

# User-editable variables
s3AccessKey="add_AWS_access_key_here"
s3SecretKey="add_AWS_secret_key_here"
s3acl="add_AWS_S3_ACL_here"
s3Bucket="add_AWS_S3_bucket_name_here"
s3Region="add_AWS_S3_region_here"

For example, if you set up the following S3 bucket and user access:

What: S3 bucket named sysdiagnose-log-s3-bucket
Where: AWS’s US-East-1 region
ACL configuration: Default ACL configuration with all public access blocked
AWS access key: AKIAX0FXU19HY2NLC3NF
AWS secret access key: YWRkX0FXU19zZWNyZXRfa2V5X2hlcmUK

The user-editable variables should look like this:

# User-editable variables
s3AccessKey="AKIAX0FXU19HY2NLC3NF"
s3SecretKey="YWRkX0FXU19zZWNyZXRfa2V5X2hlcmUK"
s3acl="private"
s3Bucket="sysdiagnose-log-s3-bucket"
s3Region="us-east-1"

Note: The S3 bucket, access key and secret access key information shown above is no longer valid.

The script can be run manually or by a systems management tool. I’ve tested it with Jamf Pro and it appears to work without issue.

When run manually in Terminal, you should see the following output.

username@computername ~ % sudo /Users/username/Desktop/remote_sysdiagnose_collection.sh
Password:
Progress:
[|||||||||||||||||||||||||||||||||||||||100%|||||||||||||||||||||||||||||||||||]
Output available at '/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/logresults-20201016144407.1wghyNXE/sysdiagnose-VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.tar.gz'.
………………………………………………………..
created: /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sysdiagnoselog-20201016144407.VQgd61kP/VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg
Uploading: /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sysdiagnoselog-20201016144407.VQgd61kP/VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg (application/octet-stream) to sysdiagnose-log-s3-bucket:VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg
######################################################################### 100.0%
VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg uploaded successfully to sysdiagnose-log-s3-bucket.
username@computername ~ %

Once the script runs, you should see a disk image file appear in the S3 bucket with a name automatically generated using the following information:

Mac’s serial number – Mac’s hardware UUID – Year-Month-Day-Hour-Minute-Second

Once downloaded, the sysdiagnose file is accessible by mounting the disk image.

The script is available below, and at the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/remote_sysdiagnose_collection

#!/bin/bash
# Log collection script which performs the following tasks:
#
# * Collects a sysdiagnose file.
# * Creates a read-only compressed disk image containing the sysdiagnose file.
# * Uploads the compressed disk image to a specified S3 bucket.
# * Cleans up the directories and files created by the script.
#
# You will need to provide the following information to successfully upload
# to an S3 bucket:
#
# S3 bucket name
# AWS region for the S3 bucket
# AWS programmatic user's access key and secret access key
# The S3 ACL used on the bucket
#
# The AWS programmatic user must have at minimum the following access rights to the specified S3 bucket:
#
# s3:ListBucket
# s3:PutObject
# s3:PutObjectAcl
#
# The AWS programmatic user must have at minimum the following access rights to all S3 buckets in the account:
#
# s3:ListAllMyBuckets
#
# These access rights will allow the AWS programmatic user the ability to do the following:
#
# A. Identify the correct S3 bucket
# B. Write the uploaded file to the S3 bucket
#
# Note: The AWS programmatic user would not have the ability to read the contents of the S3 bucket.
#
# Information on S3 ACLs can be found via the link below:
# https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
#
# By default, the ACL should be the one listed below:
#
# private
#
# User-editable variables
s3AccessKey="add_AWS_access_key_here"
s3SecretKey="add_AWS_secret_key_here"
s3acl="add_AWS_S3_ACL_here"
s3Bucket="add_AWS_S3_bucket_name_here"
s3Region="add_AWS_S3_region_here"
# It should not be necessary to edit any of the variables below this line.
error=0
date=$(date +%Y%m%d%H%M%S)
serial_number=$(ioreg -c IOPlatformExpertDevice -d 2 | awk -F\" '/IOPlatformSerialNumber/{print $(NF-1)}')
hardware_uuid=$(ioreg -ad2 -c IOPlatformExpertDevice | xmllint --xpath '//key[.="IOPlatformUUID"]/following-sibling::*[1]/text()' -)
results_directory=$(mktemp -d -t logresults-${date})
sysdiagnose_name="sysdiagnose-${serial_number}-${hardware_uuid}-${date}.tar.gz"
dmg_name="${serial_number}-${hardware_uuid}-${date}.dmg"
dmg_file_location=$(mktemp -d -t sysdiagnoselog-${date})
fileName=$(echo "$dmg_file_location"/"$dmg_name")
contentType="application/octet-stream"
LogGeneration()
{
/usr/bin/sysdiagnose -f ${results_directory} -A "$sysdiagnose_name" -u -b
if [[ -f "$results_directory/$sysdiagnose_name" ]]; then
/usr/bin/hdiutil create -format UDZO -srcfolder ${results_directory} ${dmg_file_location}/${dmg_name}
else
echo "ERROR! Log file not created!"
error=1
fi
}
S3Upload()
{
# S3Upload function taken from the following site:
# https://very.busted.systems/shell-script-for-S3-upload-via-curl-using-AWS-version-4-signatures
usage()
{
cat <<USAGE
Simple script uploading a file to S3. Supports AWS signature version 4, custom
region, permissions and mime-types. Uses Content-MD5 header to guarantee
uncorrupted file transfer.
Usage:
`basename $0` aws_ak aws_sk bucket srcfile targfile [acl] [mime_type]
Where <arg> is one of:
aws_ak access key ('' for upload to public writable bucket)
aws_sk secret key ('' for upload to public writable bucket)
bucket bucket name (with optional @region suffix, default is us-east-1)
srcfile path to source file
targfile path to target (dir if it ends with '/', relative to bucket root)
acl s3 access permissions (default: public-read)
mime_type optional mime-type (tries to guess if omitted)
Dependencies:
To run, this shell script depends on command-line curl and openssl, as well
as standard Unix tools
Examples:
To upload file '~/blog/media/image.png' to bucket 'storage' in region
'eu-central-1' with key (path relative to bucket) 'media/image.png':
`basename $0` ACCESS SECRET storage@eu-central-1 \\
~/blog/image.png media/
To upload file '~/blog/media/image.png' to public-writable bucket 'storage'
in default region 'us-east-1' with key (path relative to bucket) 'x/y.png':
`basename $0` '' '' storage ~/blog/image.png x/y.png
USAGE
exit 0
}
guessmime()
{
mime=`file -b --mime-type $1`
if [ "$mime" = "text/plain" ]; then
case $1 in
*.css) mime=text/css;;
*.ttf|*.otf) mime=application/font-sfnt;;
*.woff) mime=application/font-woff;;
*.woff2) mime=font/woff2;;
*rss*.xml|*.rss) mime=application/rss+xml;;
*) if head $1 | grep '<html.*>' >/dev/null; then mime=text/html; fi;;
esac
fi
printf "$mime"
}
if [ $# -lt 5 ]; then usage; fi
# Inputs.
aws_ak="$1" # access key
aws_sk="$2" # secret key
bucket=`printf $3 | awk 'BEGIN{FS="@"}{print $1}'` # bucket name
region=`printf $3 | awk 'BEGIN{FS="@"}{print ($2==""?"us-east-1":$2)}'` # region name
srcfile="$4" # source file
targfile=`echo -n "$5" | sed "s/\/$/\/$(basename $srcfile)/"` # target file
acl=${6:-'public-read'} # s3 perms
mime=${7:-"`guessmime "$srcfile"`"} # mime type
md5=`openssl md5 -binary "$srcfile" | openssl base64`
# Create signature if not public upload.
key_and_sig_args=''
if [ "$aws_ak" != "" ] && [ "$aws_sk" != "" ]; then
# Need current and file upload expiration date. Handle GNU and BSD date command style to get tomorrow's date.
date=`date -u +%Y%m%dT%H%M%SZ`
expdate=`if ! date -v+1d +%Y-%m-%d 2>/dev/null; then date -d tomorrow +%Y-%m-%d; fi`
expdate_s=`printf $expdate | sed s/-//g` # without dashes, as we need both formats below
service='s3'
# Generate policy and sign with secret key following AWS Signature version 4, below
p=$(cat <<POLICY | openssl base64
{ "expiration": "${expdate}T12:00:00.000Z",
"conditions": [
{"acl": "$acl" },
{"bucket": "$bucket" },
["starts-with", "\$key", ""],
["starts-with", "\$content-type", ""],
["content-length-range", 1, `ls -l -H "$srcfile" | awk '{print $5}' | head -1`],
{"content-md5": "$md5" },
{"x-amz-date": "$date" },
{"x-amz-credential": "$aws_ak/$expdate_s/$region/$service/aws4_request" },
{"x-amz-algorithm": "AWS4-HMAC-SHA256" }
]
}
POLICY
)
# AWS4-HMAC-SHA256 signature
s=`printf "$expdate_s" | openssl sha256 -hmac "AWS4$aws_sk" -hex | sed 's/(stdin)= //'`
s=`printf "$region" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
s=`printf "$service" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
s=`printf "aws4_request" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
s=`printf "$p" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
key_and_sig_args="-F X-Amz-Credential=$aws_ak/$expdate_s/$region/$service/aws4_request -F X-Amz-Algorithm=AWS4-HMAC-SHA256 -F X-Amz-Signature=$s -F X-Amz-Date=${date}"
fi
# Upload. Supports anonymous upload if bucket is public-writable, and keys are set to ''.
echo "Uploading: $srcfile ($mime) to $bucket:$targfile"
curl \
-# -k \
-F key=$targfile \
-F acl=$acl \
$key_and_sig_args \
-F "Policy=$p" \
-F "Content-MD5=$md5" \
-F "Content-Type=$mime" \
-F "file=@$srcfile" \
https://${bucket}.s3.amazonaws.com/ | cat # pipe through cat so curl displays upload progress bar, *and* response
}
CleanUp()
{
if [[ -d ${results_directory} ]]; then
/bin/rm -rf ${results_directory}
fi
if [[ -d ${dmg_file_location} ]]; then
/bin/rm -rf ${dmg_file_location}
fi
}
LogGeneration
if [[ -f ${fileName} ]]; then
S3Upload "$s3AccessKey" "$s3SecretKey" "$s3Bucket"@"$s3Region" ${fileName} "$dmg_name" "$s3acl" "$contentType"
if [[ $? -eq 0 ]]; then
echo "$dmg_name uploaded successfully to $s3Bucket."
else
echo "ERROR! Upload of $dmg_name failed!"
error=1
fi
else
echo "ERROR! Creating $dmg_name failed! No upload attempted."
error=1
fi
CleanUp
exit $error

“Getting Started with Amazon Web Services” encore presentation at MacSysAdmin 2020

The MacSysAdmin conference, like many conferences in 2020, has moved to an online format for this year. The MacSysAdmin 2020 organizers have also decided to feature both new sessions for the 2020 conference and encore presentations of sessions given at past MacSysAdmin conferences.

I was pleased to see that my “Getting Started with Amazon Web Services” session from MacSysAdmin 2018 made the cut for MacSysAdmin 2020. For those interested, my session will be available for viewing this Friday, October 9th.

Backing up Jamf Pro Self Service bookmarks

As part of working with Jamf Pro, I prefer to be able to save as much of the existing configuration of it as possible. Normally I can do this via the Jamf Pro Classic API and I have a number of blog posts showing how I use the API to create backups of my Jamf Pro configuration.

However, one set of data which is not accessible via the API is the Self Service bookmarks.

If I want to back up this information, is there a way outside of the API? It turns out that there is. For more details, please see below the jump.

After some digging around, I discovered that the Self Service bookmarks are automatically downloaded from the Jamf Pro server and stored locally on each Mac in the following directory:

/Library/Application Support/JAMF/Self Service/Managed Plug-ins

In this directory, there are .plist files named with the Jamf Pro ID number of the relevant Self Service bookmark.
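
For example, reading a bookmark’s user-facing name is a defaults one-liner. The ID number 1 here is hypothetical, so substitute a file name you actually see in the directory:

defaults read "/Library/Application Support/JAMF/Self Service/Managed Plug-ins/1.plist" title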

To make backups of the Self Service bookmarks, I’ve written a script which performs the following tasks:

  1. If necessary, create a directory for storing backup copies of the Self Service bookmark files.
  2. Make copies of the Self Service bookmark files.
  3. Name the copied files using the title of the Self Service bookmark.
  4. Store the copied bookmarks in the specified directory.

Once the script is run, you should see copies of the Self Service bookmark files appearing in the script-specified location.

This location can be set manually or created automatically by the script.

The script is available below, and at the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Scripts/Jamf_Pro_Self_Service_Bookmark_Backup

Jamf_Pro_Self_Service_Bookmark_Backup.sh:

#!/bin/bash

# This script is designed to do the following:
#
# 1. If necessary, create a directory for storing backup copies of Jamf Pro Self Service bookmark files.
# 2. Make copies of the Self Service bookmark files.
# 3. Name the copied files using the title of the Self Service bookmark.
# 4. Store the copied bookmarks in the specified directory.
#
# If you choose to specify a directory to save the Self Service bookmarks into,
# please enter the complete directory path into the SelfServiceBookmarkBackupDirectory
# variable below.

SelfServiceBookmarkBackupDirectory=""

# If the SelfServiceBookmarkBackupDirectory isn't specified above, a directory will be
# created and the complete directory path displayed by the script.

error=0

if [[ -z "$SelfServiceBookmarkBackupDirectory" ]]; then
   SelfServiceBookmarkBackupDirectory=$(mktemp -d)
   echo "A location to store copied bookmarks has not been specified."
   echo "Copied bookmarks will be stored in $SelfServiceBookmarkBackupDirectory."
fi

self_service_bookmarks="/Library/Application Support/JAMF/Self Service/Managed Plug-ins"

for bookmark in "$self_service_bookmarks"/*.plist
do
   echo "Processing $bookmark file…"
   bookmark_name=$(/usr/bin/defaults read "$bookmark" title)
   cat "$bookmark" > "$SelfServiceBookmarkBackupDirectory/${bookmark_name}.plist"
   if [[ $? -eq 0 ]]; then
      echo "$bookmark_name.plist processed and stored in $SelfServiceBookmarkBackupDirectory."
   else
      echo "ERROR! Problem occurred when processing $bookmark file!"
      error=1
   fi
done

exit $error
Clearing failed MDM commands on Jamf Pro

For a variety of reasons, MDM commands sent out from an MDM server can fail to run correctly on a Mac. Many times, these MDM commands will not be re-sent unless the failure is cleared. With the failure cleared, the MDM server will not have a record of sending the MDM command and should try again.

On Jamf Pro, there are a couple of ways you can clear failed MDM commands. The first is a manual process which uses the Jamf Pro admin console. The second uses the Jamf Pro Classic API and can be automated. For more details, please see below the jump.

Clearing failed MDM commands using the Jamf Pro admin console

To clear failed MDM commands using the admin console, please use the procedure shown below.

1. Run a search for the computers you want to clear.

Note: If you search with no criteria, the search results will list all Macs enrolled with the Jamf Pro server.

2. Once you have the desired list, click the Action button.

3. Select Cancel Remote Commands and click the Next button.

4. Select Cancel All Failed Commands and click the Next button.

5. Once all failed commands have been cleared, click the Done button.

Clearing failed MDM commands using the Jamf Pro Classic API

You can also use the Jamf Pro Classic API to script an automatic clearing of failed MDM commands at whatever interval is desired. There are numerous ways to make this work, with my approach being the following:

1. Write a script designed to run via a Jamf Pro policy on individual Macs to perform the following tasks:

a. Use the API and the Mac’s hardware UUID to identify the Mac’s computer ID in Jamf Pro.
b. Use the API and the Mac’s hardware UUID to download the list of failed MDM commands.
c. Use the API and the Mac’s Jamf Pro computer ID to clear all failed MDM commands associated with that Jamf Pro computer ID (a sketch of these steps appears after the privileges list below).

Note: For those who haven’t used the Jamf Pro Classic API before, you will need to provide a username and password to the script. This is a security risk, so my recommendation is to carefully evaluate if the risk is worth it for your environment. If it’s not, don’t use this approach.

One way to mitigate this risk is to set up a dedicated account with the least privileges necessary to accomplish the task of clearing the failed MDM commands. This method does not eliminate the risk, but it may reduce it to one acceptable in your environment.

In my testing, the least privileges are the following:

In Jamf Pro Server Objects:

Computers: Read

In Jamf Pro Server Actions:

Flush MDM Commands
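
With a least-privilege account in place, here’s a minimal sketch of what steps a through c can look like using curl against the Classic API. The server address and credentials below are placeholders, and the full version is in the script linked at the end of this section:

#!/bin/bash
# Minimal sketch of steps a-c; the full script is linked below.
# The Jamf Pro URL, username and password are placeholders.
jamfpro_url="https://jamf.pro.server.goes.here"
jamfpro_user="username_goes_here"
jamfpro_password="password_goes_here"

# a. Use the Mac's hardware UUID to identify its Jamf Pro computer ID.
uuid=$(/usr/sbin/ioreg -rd1 -c IOPlatformExpertDevice | /usr/bin/awk -F'"' '/IOPlatformUUID/{print $4}')
computer_id=$(/usr/bin/curl -sfu "$jamfpro_user:$jamfpro_password" -H "Accept: application/xml" "${jamfpro_url}/JSSResource/computers/udid/${uuid}" | /usr/bin/xmllint --xpath '//computer/general/id/text()' -)

# b. Download the list of MDM commands (including failed ones) for review.
/usr/bin/curl -sfu "$jamfpro_user:$jamfpro_password" -H "Accept: application/xml" "${jamfpro_url}/JSSResource/computerhistory/udid/${uuid}/subset/Commands"

# c. Clear all failed MDM commands associated with that computer ID.
/usr/bin/curl -sfu "$jamfpro_user:$jamfpro_password" -X DELETE "${jamfpro_url}/JSSResource/commandflush/computers/id/${computer_id}/status/Failed"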

2. Set up a Jamf Pro computer policy with the following components:

Script: The script to clear failed MDM commands
Trigger: Recurring Check-In
Execution Frequency: Once every day

Note: Execution Frequency can be set as desired for a longer interval, like Once every week or Once every month.

The script is available from the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Scripts/clear_failed_Jamf_Pro_mdm_commands

Uninstalling macOS system extensions

With the ongoing change from kernel extensions to system extensions, one new thing Mac admins will need to learn is how to uninstall system extensions. Fortunately, Apple has provided a tool as of macOS Catalina that assists with this: systemextensionsctl

If you run the systemextensionsctl command by itself, you should get the following information about usage:

systemextensionsctl: usage:
	systemextensionsctl developer [on|off]
	systemextensionsctl list [category]
	systemextensionsctl reset  - reset all System Extensions state
	systemextensionsctl uninstall <teamId> <bundleId>; can also accept '-' for teamID

The last verb, uninstall, is what allows us to remove system extensions. For more details, please see below the jump.

To uninstall a system extension using systemextensionsctl, you need to provide the following:

  • Team identifier of the certificate used to sign the system extension
  • Bundle identifier for the system extension
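
Both identifiers also appear in the output of systemextensionsctl list, which displays the currently installed system extensions along with their team and bundle identifiers:

systemextensionsctl list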

Locating Team and bundle identifiers

You can identify team and bundle identifiers by locating the system extension in question inside the application and running the following commands:

To identify the Team identifier:

codesign -dvvv /path/to/name_goes_here.systemextension 2>&1 | awk -F= '/^TeamIdentifier/ {print $NF}'

To identify the bundle identifier:

codesign -dvvv /path/to/name_goes_here.systemextension 2>&1 | awk -F= '/^Identifier/ {print $NF}'

For example, Microsoft Defender ATP currently has several system extensions within its application bundle:

  • /Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.epsext.systemextension
  • /Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.netext.systemextension
  • /Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.tunnelext.systemextension

To find the bundle identifier for the com.microsoft.wdav.epsext.systemextension system extension, run the command shown below:

codesign -dvvv "/Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.epsext.systemextension" 2>&1 | awk -F= '/^Identifier/ {print $NF}'

That should give you the following output:

username@computername ~ % codesign -dvvv "/Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.epsext.systemextension" 2>&1 | awk -F= '/^Identifier/ {print $NF}'
com.microsoft.wdav.epsext
username@computername ~ %

To find the Team identifier for the com.microsoft.wdav.epsext.systemextension system extension, run the command shown below:

codesign -dvvv "/Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.epsext.systemextension" 2>&1 | awk -F= '/^TeamIdentifier/ {print $NF}'

That should give you the following output:

username@computername ~ % codesign -dvvv "/Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.epsext.systemextension" 2>&1 | awk -F= '/^TeamIdentifier/ {print $NF}'
UBF8T346G9
username@computername ~ %

Uninstalling a system extension

Once you have both, you can run the following command with root privileges to uninstall a system extension:

systemextensionsctl uninstall Team_Identifier_Goes_Here Bundle_Identifier_Goes_Here

For example, if you wanted to uninstall Microsoft Defender’s com.microsoft.wdav.epsext.systemextension system extension, you would run the following command with root privileges:

systemextensionsctl uninstall UBF8T346G9 com.microsoft.wdav.epsext
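
If you wanted to remove all three of the Defender system extensions listed earlier, a loop like the sketch below could handle it. Note that I’m assuming the netext and tunnelext bundle identifiers follow the same pattern as their file names:

#!/bin/bash
# Sketch: uninstall all three Microsoft Defender system extensions.
# Run with root privileges. The team ID comes from the codesign output above;
# the netext and tunnelext bundle IDs are assumed from the extension file names.
team_id="UBF8T346G9"
for bundle_id in com.microsoft.wdav.epsext com.microsoft.wdav.netext com.microsoft.wdav.tunnelext; do
   /usr/bin/systemextensionsctl uninstall "$team_id" "$bundle_id"
done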

Note: As of September 1, 2020, running the systemextensionsctl uninstall command requires System Integrity Protection (SIP) to be disabled. This limitation is supposed to be removed by Apple at some point in the very near future.