Remotely gathering sysdiagnose files and uploading them to S3

One of the challenges for helpdesks, now that folks are working remotely instead of in offices, is that it’s harder to gather logs from users’ Macs. A particular challenge for those working with AppleCare Enterprise Support has been requests for sysdiagnose logfiles.

The sysdiagnose tool gathers a large number of diagnostic files and logs, but the resulting output file is often a few hundred megabytes in size. That’s usually too large to email, so alternate arrangements have to be made to get it off of the Mac in question and upload it to a location where the person who needs the logs can retrieve them.
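
For reference, a sysdiagnose file can also be generated manually by running a command like the one below with root privileges. The -f option specifies the directory where the resulting .tar.gz file is written; without it, sysdiagnose writes to /var/tmp:

sudo sysdiagnose -f ~/Desktop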

After needing to gather sysdiagnose files a few times, I’ve developed a scripted solution which does the following:

  • Collects a sysdiagnose file.
  • Creates a read-only compressed disk image containing the sysdiagnose file.
  • Uploads the compressed disk image to a specified S3 bucket in Amazon Web Services.
  • Cleans up the directories and files created by the script.

For more details, please see below the jump.

Prerequisites

You will need to provide the following information to successfully upload the sysdiagnose file to an S3 bucket:

  • S3 bucket name
  • AWS region for the S3 bucket
  • AWS programmatic user’s access key and secret access key
  • The S3 ACL used on the bucket

The AWS programmatic user must have at minimum the following access rights to the specified S3 bucket:

  • s3:ListBucket
  • s3:PutObject
  • s3:PutObjectAcl

The AWS programmatic user must have at minimum the following access rights to all S3 buckets in the account:

  • s3:ListAllMyBuckets

These access rights will allow the AWS programmatic user the ability to do the following:

  1. Identify the correct S3 bucket
  2. Write the uploaded file to the S3 bucket

Note: The AWS programmatic user would not have the ability to read the contents of the S3 bucket.
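
For those managing the AWS side with the AWS CLI, a minimal sketch of an IAM policy granting those rights could look like the example below. The bucket name, IAM user name and policy name are placeholders rather than values taken from the script:

# Write out the policy document, then attach it to the programmatic user as an inline policy.
cat > sysdiagnose-upload-policy.json << 'POLICY'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*" },
    { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::bucket_name_goes_here" },
    { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:PutObjectAcl" ], "Resource": "arn:aws:s3:::bucket_name_goes_here/*" }
  ]
}
POLICY
aws iam put-user-policy --user-name programmatic_user_goes_here --policy-name sysdiagnose-upload --policy-document file://sysdiagnose-upload-policy.json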

Information on S3 ACLs can be found via the link below:
https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

In an S3 bucket’s default configuration, where all public access is blocked, the ACL should be the one listed below:

private

Using the script

Once you have the S3 bucket and AWS programmatic user set up, you will need to configure the user-editable variables in the script:

# User-editable variables
s3AccessKey="add_AWS_access_key_here"
s3SecretKey="add_AWS_secret_key_here"
s3acl="add_AWS_S3_ACL_here"
s3Bucket="add_AWS_S3_bucket_name_here"
s3Region="add_AWS_S3_region_here"

For example, if you set up the following S3 bucket and user access:

What: S3 bucket named sysdiagnose-log-s3-bucket
Where: AWS’s US-East-1 region
ACL configuration: Default ACL configuration with all public access blocked
AWS access key: AKIAX0FXU19HY2NLC3NF
AWS secret access key: YWRkX0FXU19zZWNyZXRfa2V5X2hlcmUK

The user-editable variables should look like this:

# User-editable variables
s3AccessKey="AKIAX0FXU19HY2NLC3NF"
s3SecretKey="YWRkX0FXU19zZWNyZXRfa2V5X2hlcmUK"
s3acl="private"
s3Bucket="sysdiagnose-log-s3-bucket"
s3Region="us-east-1"

Note: The S3 bucket, access key and secret access key information shown above is no longer valid.

The script can be run manually or by a systems management tool. I’ve tested it with Jamf Pro and it appears to work without issue.

When run manually in Terminal, you should see output similar to the following:

username@computername ~ % sudo /Users/username/Desktop/remote_sysdiagnose_collection.sh
Password:
Progress:
[|||||||||||||||||||||||||||||||||||||||100%|||||||||||||||||||||||||||||||||||]
Output available at '/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/logresults-20201016144407.1wghyNXE/sysdiagnose-VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.tar.gz'.
………………………………………………………..
created: /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sysdiagnoselog-20201016144407.VQgd61kP/VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg
Uploading: /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sysdiagnoselog-20201016144407.VQgd61kP/VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg (application/octet-stream) to sysdiagnose-log-s3-bucket:VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg
######################################################################### 100.0%
VMDuaUp36s8k-564DA5F0-0D34-627B-DE5E-A7FA6F7AF30B-20201016144407.dmg uploaded successfully to sysdiagnose-log-s3-bucket.
username@computername ~ %

Once the script runs, you should see a disk image file appear in the S3 bucket with a name automatically generated using the following information:

Mac’s serial number – Mac’s hardware UUID – Year-Month-Day-Hour-Minute-Second

Once downloaded, the sysdiagnose file is accessible by mounting the disk image.
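
If you prefer to stay in Terminal, the disk image can also be mounted and the sysdiagnose archive expanded from the command line. A quick sketch, with placeholder file and volume names:

hdiutil attach ~/Downloads/dmg_name_goes_here.dmg
tar -xzf "/Volumes/volume_name_goes_here/sysdiagnose_name_goes_here.tar.gz" -C ~/Desktop
hdiutil detach "/Volumes/volume_name_goes_here"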

The script is available below, and at the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/remote_sysdiagnose_collection

#!/bin/bash
# Log collection script which performs the following tasks:
#
# * Collects a sysdiagnose file.
# * Creates a read-only compressed disk image containing the sysdiagnose file.
# * Uploads the compressed disk image to a specified S3 bucket.
# * Cleans up the directories and files created by the script.
#
# You will need to provide the following information to successfully upload
# to an S3 bucket:
#
# S3 bucket name
# AWS region for the S3 bucket
# AWS programmatic user's access key and secret access key
# The S3 ACL used on the bucket
#
# The AWS programmatic user must have at minimum the following access rights to the specified S3 bucket:
#
# s3:ListBucket
# s3:PutObject
# s3:PutObjectAcl
#
# The AWS programmatic user must have at minimum the following access rights to all S3 buckets in the account:
#
# s3:ListAllMyBuckets
#
# These access rights will allow the AWS programmatic user the ability to do the following:
#
# A. Identify the correct S3 bucket
# B. Write the uploaded file to the S3 bucket
#
# Note: The AWS programmatic user would not have the ability to read the contents of the S3 bucket.
#
# Information on S3 ACLs can be found via the link below:
# https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
#
# By default, the ACL should be the one listed below:
#
# private
#
# User-editable variables
s3AccessKey="add_AWS_access_key_here"
s3SecretKey="add_AWS_secret_key_here"
s3acl="add_AWS_S3_ACL_here"
s3Bucket="add_AWS_S3_bucket_name_here"
s3Region="add_AWS_S3_region_here"
# It should not be necessary to edit any of the variables below this line.
error=0
date=$(date +%Y%m%d%H%M%S)
serial_number=$(ioreg -c IOPlatformExpertDevice -d 2 | awk -F\" '/IOPlatformSerialNumber/{print $(NF-1)}')
hardware_uuid=$(ioreg -ad2 -c IOPlatformExpertDevice | xmllint --xpath '//key[.="IOPlatformUUID"]/following-sibling::*[1]/text()' -)
results_directory=$(mktemp -d -t logresults-${date})
sysdiagnose_name="sysdiagnose-${serial_number}${hardware_uuid}${date}.tar.gz"
dmg_name="${serial_number}${hardware_uuid}${date}.dmg"
dmg_file_location=$(mktemp -d -t sysdiagnoselog-${date})
fileName=$(echo "$dmg_file_location"/"$dmg_name")
contentType="application/octet-stream"
LogGeneration()
{
/usr/bin/sysdiagnose -f ${results_directory} -A "$sysdiagnose_name" -u -b
if [[ -f "$results_directory/$sysdiagnose_name" ]]; then
/usr/bin/hdiutil create -format UDZO -srcfolder ${results_directory} ${dmg_file_location}/${dmg_name}
else
echo "ERROR! Log file not created!"
error=1
fi
}
S3Upload()
{
# S3Upload function taken from the following site:
# https://very.busted.systems/shell-script-for-S3-upload-via-curl-using-AWS-version-4-signatures
usage()
{
cat <<USAGE
Simple script uploading a file to S3. Supports AWS signature version 4, custom
region, permissions and mime-types. Uses Content-MD5 header to guarantee
uncorrupted file transfer.
Usage:
`basename $0` aws_ak aws_sk bucket srcfile targfile [acl] [mime_type]
Where <arg> is one of:
aws_ak access key ('' for upload to public writable bucket)
aws_sk secret key ('' for upload to public writable bucket)
bucket bucket name (with optional @region suffix, default is us-east-1)
srcfile path to source file
targfile path to target (dir if it ends with '/', relative to bucket root)
acl s3 access permissions (default: public-read)
mime_type optional mime-type (tries to guess if omitted)
Dependencies:
To run, this shell script depends on command-line curl and openssl, as well
as standard Unix tools
Examples:
To upload file '~/blog/media/image.png' to bucket 'storage' in region
'eu-central-1' with key (path relative to bucket) 'media/image.png':
`basename $0` ACCESS SECRET storage@eu-central-1 \\
~/blog/image.png media/
To upload file '~/blog/media/image.png' to public-writable bucket 'storage'
in default region 'us-east-1' with key (path relative to bucket) 'x/y.png':
`basename $0` '' '' storage ~/blog/image.png x/y.png
USAGE
exit 0
}
guessmime()
{
mime=`file -b --mime-type $1`
if [ "$mime" = "text/plain" ]; then
case $1 in
*.css) mime=text/css;;
*.ttf|*.otf) mime=application/font-sfnt;;
*.woff) mime=application/font-woff;;
*.woff2) mime=font/woff2;;
*rss*.xml|*.rss) mime=application/rss+xml;;
*) if head $1 | grep '<html.*>' >/dev/null; then mime=text/html; fi;;
esac
fi
printf "$mime"
}
if [ $# -lt 5 ]; then usage; fi
# Inputs.
aws_ak="$1" # access key
aws_sk="$2" # secret key
bucket=`printf $3 | awk 'BEGIN{FS="@"}{print $1}'` # bucket name
region=`printf $3 | awk 'BEGIN{FS="@"}{print ($2==""?"us-east-1":$2)}'` # region name
srcfile="$4" # source file
targfile=`echo -n "$5" | sed "s/\/$/\/$(basename $srcfile)/"` # target file
acl=${6:-'public-read'} # s3 perms
mime=${7:-"`guessmime "$srcfile"`"} # mime type
md5=`openssl md5 -binary "$srcfile" | openssl base64`
# Create signature if not public upload.
key_and_sig_args=''
if [ "$aws_ak" != "" ] && [ "$aws_sk" != "" ]; then
# Need current and file upload expiration date. Handle GNU and BSD date command style to get tomorrow's date.
date=`date -u +%Y%m%dT%H%M%SZ`
expdate=`if ! date -v+1d +%Y-%m-%d 2>/dev/null; then date -d tomorrow +%Y-%m-%d; fi`
expdate_s=`printf $expdate | sed s/-//g` # without dashes, as we need both formats below
service='s3'
# Generate policy and sign with secret key following AWS Signature version 4, below
p=$(cat <<POLICY | openssl base64
{ "expiration": "${expdate}T12:00:00.000Z",
"conditions": [
{"acl": "$acl" },
{"bucket": "$bucket" },
["starts-with", "\$key", ""],
["starts-with", "\$content-type", ""],
["content-length-range", 1, `ls -l -H "$srcfile" | awk '{print $5}' | head -1`],
{"content-md5": "$md5" },
{"x-amz-date": "$date" },
{"x-amz-credential": "$aws_ak/$expdate_s/$region/$service/aws4_request" },
{"x-amz-algorithm": "AWS4-HMAC-SHA256" }
]
}
POLICY
)
# AWS4-HMAC-SHA256 signature
s=`printf "$expdate_s" | openssl sha256 -hmac "AWS4$aws_sk" -hex | sed 's/(stdin)= //'`
s=`printf "$region" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
s=`printf "$service" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
s=`printf "aws4_request" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
s=`printf "$p" | openssl sha256 -mac HMAC -macopt hexkey:"$s" -hex | sed 's/(stdin)= //'`
key_and_sig_args="-F X-Amz-Credential=$aws_ak/$expdate_s/$region/$service/aws4_request -F X-Amz-Algorithm=AWS4-HMAC-SHA256 -F X-Amz-Signature=$s -F X-Amz-Date=${date}"
fi
# Upload. Supports anonymous upload if bucket is public-writable, and keys are set to ''.
echo "Uploading: $srcfile ($mime) to $bucket:$targfile"
curl \
-# -k \
-F key=$targfile \
-F acl=$acl \
$key_and_sig_args \
-F "Policy=$p" \
-F "Content-MD5=$md5" \
-F "Content-Type=$mime" \
-F "file=@$srcfile" \
https://${bucket}.s3.amazonaws.com/ | cat # pipe through cat so curl displays upload progress bar, *and* response
}
CleanUp()
{
if [[ -d ${results_directory} ]]; then
/bin/rm -rf ${results_directory}
fi
if [[ -d ${dmg_file_location} ]]; then
/bin/rm -rf ${dmg_file_location}
fi
}
LogGeneration
if [[ -f ${fileName} ]]; then
S3Upload "$s3AccessKey" "$s3SecretKey" "$s3Bucket"@"$s3Region" ${fileName} "$dmg_name" "$s3acl" "$contentType"
if [[ $? -eq 0 ]]; then
echo "$dmg_name uploaded successfully to $s3Bucket."
else
echo "ERROR! Upload of $dmg_name failed!"
error=1
fi
else
echo "ERROR! Creating $dmg_name failed! No upload attempted."
error=1
fi
CleanUp
exit $error

“Getting Started with Amazon Web Services” encore presentation at MacSysAdmin 2020

The MacSysAdmin conference, like many conferences in 2020, has moved to an online format for this year. The MacSysAdmin 2020 organizers have also decided to have both sessions that are new for the 2020 conference as well as give an encore performance for sessions given at past MacSysAdmin conferences.

I was pleased to see that my “Getting Started with Amazon Web Services” session from MacSysAdmin 2018 made the cut for MacSysAdmin 2020. For those interested, my session will be available for viewing this Friday, October 9th.

Backing up Jamf Pro Self Service bookmarks

As part of working with Jamf Pro, I prefer to be able to save as much of the existing configuration of it as possible. Normally I can do this via the Jamf Pro Classic API and I have a number of blog posts showing how I use the API to create backups of my Jamf Pro configuration.

However, one set of data which is not accessible via the API is the Self Service bookmarks.

If I want to back up this information, is there a way outside of the API? It turns out that there is. For more details, please see below the jump.

After some digging around, I discovered that the Self Service bookmarks are automatically downloaded from the Jamf Pro server and stored locally on each Mac in the following directory:

/Library/Application Support/JAMF/Self Service/Managed Plug-ins

In this directory, there are .plist files named with the Jamf Pro ID number of the relevant Self Service bookmark.
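
For example, to check which Self Service bookmark a given file corresponds to, you can read its title key with defaults. The ID number 21 below is just an example:

/usr/bin/defaults read "/Library/Application Support/JAMF/Self Service/Managed Plug-ins/21.plist" title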

To make backups of the Self Service bookmarks, I’ve written a script which performs the following tasks:

  1. If necessary, create a directory for storing backup copies of the Self Service bookmark files.
  2. Make copies of the Self Service bookmark files.
  3. Name the copied files using the title of the Self Service bookmark.
  4. Store the copied bookmarks in the specified directory.

Once the script is run, you should see copies of the Self Service bookmark files appearing in the script-specified location.

This location can be set manually or created automatically by the script.

The script is available below, and at the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Scripts/Jamf_Pro_Self_Service_Bookmark_Backup

Jamf_Pro_Self_Service_Bookmark_Backup.sh:

#!/bin/bash
# This script is designed to do the following:
#
# 1. If necessary, create a directory for storing backup copies of Jamf Pro Self Service bookmark files.
# 2. Make copies of the Self Service bookmark files.
# 3. Name the copied files using the title of the Self Service bookmark.
# 4. Store the copied bookmarks in the specified directory.
#
# If you choose to specify a directory to save the Self Service bookmarks into,
# please enter the complete directory path into the SelfServiceBookmarkBackupDirectory
# variable below.
SelfServiceBookmarkBackupDirectory=""
# If the SelfServiceBookmarkBackupDirectory isn't specified above, a directory will be
# created and the complete directory path displayed by the script.
error=0
if [[ -z "$SelfServiceBookmarkBackupDirectory" ]]; then
SelfServiceBookmarkBackupDirectory=$(mktemp -d)
echo "A location to store copied bookmarks has not been specified."
echo "Copied bookmarks will be stored in $SelfServiceBookmarkBackupDirectory."
fi
self_service_bookmarks="/Library/Application Support/JAMF/Self Service/Managed Plug-ins"
for bookmark in "$self_service_bookmarks"/*.plist
do
echo "Processing "$bookmark" file…"
bookmark_name=$(/usr/bin/defaults read "$bookmark" title)
cat "$bookmark" > "$SelfServiceBookmarkBackupDirectory/${bookmark_name}.plist"
if [[ $? -eq 0 ]]; then
echo "$bookmark_name.plist processed and stored in $SelfServiceBookmarkBackupDirectory."
else
echo "ERROR! Problem occurred when processing $self_service_bookmarks/$bookmark file!"
error=1
fi
done
exit $error

Clearing failed MDM commands on Jamf Pro

For a variety of reasons, MDM commands sent out from an MDM server can fail to run correctly on a Mac. Many times, these MDM commands will not be re-sent unless the failure is cleared. With the failure cleared, the MDM server will not have a record of sending the MDM command and should try again.

On Jamf Pro, there are a couple of ways you can clear failed MDM commands. The first is a manual process which uses the Jamf Pro admin console. The second uses the Jamf Pro Classic API and can be automated. For more details, please see below the jump.

Clearing failed MDM commands using the Jamf Pro admin console

To clear failed MDM commands using the admin console, please use the procedure shown below.

1. Run a search for the computers you want to clear.

Note: If you search with no criteria, the search results will list all Macs enrolled with the Jamf Pro server.

2. Once you have the desired list, click the Action button.

3. Select Cancel Remote Commands and click the Next button.

4. Select Cancel All Failed Commands and click the Next button.

5. Once all failed commands have been cleared, click the Done button.

Clearing failed MDM commands using the Jamf Pro Classic API

You can also use the Jamf Pro Classic API to script an automatic clearing of failed MDM commands at whatever interval is desired. There are numerous ways to make this work, with my approach being the following (a rough sketch of the API calls involved appears after the steps below):

1. Write a script designed to run via a Jamf Pro policy on individual Macs to perform the following tasks:

a. Use the API and the Mac’s hardware UUID to identify the Mac’s computer ID in Jamf Pro.
b. Use the API and the Mac’s hardware UUID to download the list of failed MDM commands.
c. Use the API and the Mac’s Jamf Pro computer ID to clear all failed MDM commands associated with that Jamf Pro computer ID.

Note: For those who haven’t used the Jamf Pro Classic API before, you will need to provide a username and password to the script. This is a security risk, so my recommendation is to carefully evaluate if the risk is worth it for your environment. If it’s not, don’t use this approach.

One way to mitigate this risk is to set up a dedicated account with the least privileges necessary to accomplish the task of clearing the failed MDM commands. This method does not eliminate the risk, but it may reduce it to one acceptable in your environment.

In my testing, the least privileges are the following:

In Jamf Pro Server Objects:

Computers: Read

In Jamf Pro Server Actions:

Flush MDM Commands

2. Set up a Jamf Pro computer policy with the following components:

Script: The script to clear failed MDM commands
Trigger: Recurring Check-In
Execution Frequency: Once every day

Note: Execution Frequency can be set as desired for a longer interval, like Once every week or Once every month.
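
For reference, here is a rough sketch of what the API calls in steps a through c could look like using curl. This is not the script from the GitHub repository linked below: the server address and account credentials are placeholders, and the endpoints my script uses may differ.

#!/bin/bash
# Sketch only - replace the placeholders below with your Jamf Pro server and API account details.
jamfpro_url="https://jamf.pro.server.goes.here"
jamfpro_user="api_account_goes_here"
jamfpro_password="api_password_goes_here"
# a. Use the Mac's hardware UUID to look up its Jamf Pro computer ID.
hardware_uuid=$(/usr/sbin/system_profiler SPHardwareDataType | /usr/bin/awk '/Hardware UUID/ {print $NF}')
computer_id=$(/usr/bin/curl -sfu "${jamfpro_user}:${jamfpro_password}" -H "Accept: application/xml" "${jamfpro_url}/JSSResource/computers/udid/${hardware_uuid}/subset/general" | /usr/bin/xmllint --xpath '//computer/general/id/text()' -)
# b. Download the command history for that computer, which includes any failed MDM commands.
/usr/bin/curl -sfu "${jamfpro_user}:${jamfpro_password}" -H "Accept: application/xml" "${jamfpro_url}/JSSResource/computerhistory/id/${computer_id}/subset/Commands"
# c. Clear all failed MDM commands associated with that computer ID.
/usr/bin/curl -sfu "${jamfpro_user}:${jamfpro_password}" -X DELETE "${jamfpro_url}/JSSResource/commandflush/computers/id/${computer_id}/status/Failed"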

The script is available from the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Scripts/clear_failed_Jamf_Pro_mdm_commands

Uninstalling macOS system extensions

With the ongoing change from kernel extensions to system extensions, one new thing Mac admins will need to learn is how to uninstall system extensions. Fortunately, Apple has provided a tool as of macOS Catalina that assists with this: systemextensionsctl

If you run the systemextensionsctl command by itself, you should get the following information about usage:

systemextensionsctl: usage:
	systemextensionsctl developer [on|off]
	systemextensionsctl list [category]
	systemextensionsctl reset  - reset all System Extensions state
	systemextensionsctl uninstall <teamID> <bundleID>; can also accept '-' for teamID

The last verb, uninstall, is what allows us to remove system extensions. For more details, please see below the jump.

To uninstall a system extension using systemextensionsctl, you need to provide the following:

  • Team identifier of the certificate used to sign the system extension
  • Bundle identifier for the system extension

Locating Team and bundle identifiers

You can identify team and bundle identifiers by locating the system extension in question inside the application and running the following commands:

To identify the Team identifier:

codesign -dvvv /path/to/name_goes_here.systemextension 2>&1 | awk -F= '/^TeamIdentifier/ {print $NF}'

To identify the bundle identifier:

codesign -dvvv /path/to/name_goes_here.systemextension 2>&1 | awk -F= '/^Identifier/ {print $NF}'

For example, Microsoft Defender ATP currently has several system extensions within its application bundle:

  • /Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.epsext.systemextension
  • /Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.netext.systemextension
  • /Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.tunnelext.systemextension

To find the bundle identifier for the com.microsoft.wdav.epsext.systemextension system extension, run the command shown below:

codesign -dvvv "/Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.epsext.systemextension" 2>&1 | awk -F= '/^Identifier/ {print $NF}'

That should give you the following output:

username@computername ~ % codesign -dvvv "/Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.epsext.systemextension" 2>&1 | awk -F= '/^Identifier/ {print $NF}'
com.microsoft.wdav.epsext
username@computername ~ %

To find the Team identifier for the com.microsoft.wdav.epsext.systemextension system extension, run the command shown below:

codesign -dvvv "/Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.epsext.systemextension" 2>&1 | awk -F= '/^TeamIdentifier/ {print $NF}'

That should give you the following output:

username@computername ~ % codesign -dvvv "/Applications/Microsoft Defender ATP.app/Contents/Library/SystemExtensions/com.microsoft.wdav.epsext.systemextension" 2>&1 | awk -F= '/^TeamIdentifier/ {print $NF}'
UBF8T346G9
username@computername ~ %

Uninstalling a system extension

Once you have both, you can run the following command with root privileges to uninstall a system extension:

systemextensionsctl uninstall Team_Identifier_Goes_Here Bundle_Identifier_Goes_Here

For example, if you wanted to uninstall Microsoft Defender’s com.microsoft.wdav.epsext.systemextension system extension, you would run the following command with root privileges:

systemextensionsctl uninstall UBF8T346G9 com.microsoft.wdav.epsext
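
Putting those pieces together, a short script along the lines of the sketch below can look up both identifiers for a given system extension bundle and then uninstall it. The extension path is a placeholder, and the script needs to be run with root privileges:

#!/bin/bash
# Sketch: uninstall a system extension, given the path to its .systemextension bundle.
extension_path="/path/to/name_goes_here.systemextension"
# Read the Team identifier and bundle identifier from the extension's code signature.
team_id=$(codesign -dvvv "$extension_path" 2>&1 | awk -F= '/^TeamIdentifier/ {print $NF}')
bundle_id=$(codesign -dvvv "$extension_path" 2>&1 | awk -F= '/^Identifier/ {print $NF}')
echo "Uninstalling ${bundle_id} (Team ID: ${team_id})"
systemextensionsctl uninstall "$team_id" "$bundle_id"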

Note: As of September 1, 2020, running the systemextensionsctl uninstall command requires System Integrity Protection (SIP) to be disabled. This limitation is supposed to be removed by Apple at some point in the very near future.

 

Speaking at Jamf Nation User Conference 2020

I’ll be speaking about how SAP transitioned to an at-home workforce this year at Jamf Nation User Conference 2020, which is being held online from September 29th – October 1st, 2020. For those interested, my talk will be on Tuesday, September 29th from 12:30pm – 1:00pm CDT.

For a description of what I’ll be talking about, please see the SAP in the Haus – How SAP transitioned its global workforce to working from home session description. You can see the whole list of JNUC sessions here on the Sessions page.

If you haven’t already signed up, the conference is free and there’s still time to register. You can do that via the link below:

https://www.jamf.com/events/jamf-nation-user-conference/2020/registration/

Running recoverydiagnose in macOS Recovery

Most Mac admins, especially those who file bug reports or who work with AppleCare Enterprise, are familiar with running the sysdiagnose tool to gather diagnostic information about a Mac they’re working on. Running sysdiagnose will trigger a large number of macOS’s performance and problem tracing tools and use their reports to assemble what amounts to a snapshot of your Mac’s complete state at the time you ran the sysdiagnose tool, which can be very useful to developers trying to trace down why a particular problem is occurring.

However, this tool only applies to a Mac’s regular OS. What if the problem you’re seeing is in the macOS Recovery environment? In that case, you can run the recoverydiagnose tool in macOS Recovery to gather similar data specifically for macOS Recovery-related problems. For more details, please see below the jump.

Note: macOS Recovery uses read-only storage and you won’t be able to save anything to it. As a consequence, you will need to have writable storage available to store the data which is being assembled and stored by the recoverydiagnose tool. This can be an external USB or Thunderbolt drive or even network storage, depending on what’s available.

Running recoverydiagnose

1. Boot to macOS Recovery

2. Connect storage that you can read and write to.

3. Open Terminal.

4. Run the recoverydiagnose tool and specify where you want to store the assembled data.

Normally, you would run a command similar to the one below:

recoverydiagnose -f /path/to/logging/directory

For example, if you have an attached USB drive named Data and want to store the data there, the command would look like this:

recoverydiagnose -f /Volumes/Data

Information about what data is being gathered will be displayed and give you the chance to opt out.

If you choose to continue, recoverydiagnose will gather its data and store it in the specified destination.

For more information about this tool, run the recoverydiagnose command without specifying any options. This will display the recoverydiagnose usage documentation.

PkgSigner AutoPkg processor updated for Python 3

A while back, I discussed how to incorporate installer package signing into AutoPkg workflows. The PkgSigner processor used in this workflow was originally written by Paul Suh and it uses Apple’s productsign tool to access a Developer ID Installer certificate stored in the login keychain.

Like other processors and AutoPkg itself, PkgSigner needed updating to Python 3 when Python 2 reached end-of-life in April 2020. This updating process has been completed, thanks to Nick McDonald. To make sure PkgSigner is consistently using the same Python environment across machines, PkgSigner has also been set to use the Python 3 install bundled with AutoPkg.

For those who need it, I have a copy of the PkgSigner processor available via the link below:

https://github.com/rtrouton/AutoPkg_Processors/tree/master/PkgSigner

Enabling diagnostic logging for Microsoft Outlook 2019

I was recently asked for assistance with a way to enable diagnostic logging for Microsoft Outlook 2019 for macOS.

I had seen Microsoft’s KBase article on how to do it, where it references enabling logging via the Outlook preferences:

https://support.microsoft.com/en-us/help/2872257/how-to-enable-logging-in-outlook-for-mac

However, the KBase article only references how to enable this logging via the GUI and does not show how to do this via the command line. Fortunately, my colleague @golby knew which settings could be enabled from the command line to produce the requested logging. For more details, please see below the jump.

The following defaults command can be run to enable Outlook’s diagnostic logging for the logged-in user:

defaults write com.microsoft.Outlook LogForTroubleshooting -bool TRUE

The following defaults command can be run to disable Outlook’s diagnostic logging for the logged-in user:

defaults write com.microsoft.Outlook LogForTroubleshooting -bool FALSE

Once the logging is enabled, the logs are stored in the following location:

/Users/username_goes_here/Library/Containers/com.microsoft.Outlook/Data/Library/Logs/
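
To verify the current state of the setting and check for new log files, commands like the ones below can be run as the logged-in user. The defaults read command should return 1 when logging is enabled:

defaults read com.microsoft.Outlook LogForTroubleshooting

ls ~/Library/Containers/com.microsoft.Outlook/Data/Library/Logs/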

To help with enabling the logging, I’ve built a configuration profile. It’s available via the link below:

https://github.com/rtrouton/profiles/tree/master/EnableMicrosoftOutlookLogging

create_macos_vm_install_dmg updated for macOS Big Sur installer disk images

As part of testing macOS Big Sur 11.0.0, I’ve updated my create_macos_vm_install_dmg script. For more details, please see below the jump.

Downloading the script:

The create_macos_vm_install_dmg script is available from the following location:

https://github.com/rtrouton/create_macos_vm_install_dmg

Using the script:

Once you have the script downloaded, run the create_macos_vm_install_dmg script with two arguments:

  1. The path to an Install macOS.app.
  2. A directory to store the completed disk image in.

Example usage:

If you have a macOS Big Sur Beta installer available, run this command:

/path/to/create_macos_vm_install_dmg.sh "/Applications/Install macOS Beta.app" /path/to/output_directory

If you chose not to create the .iso file, this should produce a .dmg file inside output_directory that’s named something similar to macOS_1100_installer.dmg.

(Note: the WWDC beta identifies itself as 10.16, so a disk image of Big Sur’s WWDC beta will be named macOS_1016_installer.dmg.)

If you chose to create the .iso disk image, you should have two files inside the chosen directory: macOS_1100_installer.dmg and macOS_1100_installer.iso

Creating a VM with the OS installer disk image using VMware Fusion 11.x

1. Launch VMware Fusion 11.x

2. In VMware Fusion, select New… under the File menu to set up a new VM

3. In the Select the Installation method window, select Install from disc or image.

4. In the Create a New Virtual Machine window, click on Use another disc or disc image…

5. Select your macOS installer disk image file and click on the Open button.

6. You’ll be taken back to the Create a New Virtual Machine window. Verify that the disk image file you want is selected, then click the Continue button.

7. In the Choose Operating System window, set the OS as appropriate, then click the Continue button.

In this example, I’m setting it as follows:

  • Operating System: Apple OS X
  • Version: macOS 10.15

8. In the Finish window, select Customize Settings if desired. Otherwise, click Finish.

9. Save the VM file in a convenient location.

The VM is now configured and set to use the macOS installer disk image. To install macOS, start the VM and then run through the normal installation process when prompted.