Backing up a Jamf Pro database hosted in Amazon Web Services’ RDS service to an S3 bucket

For those using Amazon Web Services to host Jamf Pro, one of the issues you may run into is how to get backups of your Jamf Pro database that you can actually access. AWS’s RDS service automatically backs up your database to S3, but you don’t get direct access to the S3 bucket where those backups are stored.

If you want an accessible backup of your RDS-hosted MySQL database, Amazon provides the option of exporting a database snapshot to an S3 bucket in your AWS account. This process exports your data in Apache Parquet format instead of as a MySQL database export file.

However, it’s also possible to create and use an EC2 instance to perform the following tasks:

  1. Connect to your RDS-hosted MySQL database.
  2. Create a backup of your MySQL database using the mysqldump tool.
  3. Store the backup in an S3 bucket of your choosing.

For more details, please see below the jump.

Setting up the backup server

In order to run the backups, you’ll need to set up several resources in AWS. Please use the procedure below to create the necessary resources:

1. Create an S3 bucket to store your MySQL backups in.

2. Set up an IAM role which allows an EC2 instance to have read/write access to the S3 bucket where you’ll be storing the backups.

3. Create an EC2 instance running Linux.

Note: This instance will need to have enough free space to store a complete backup of your database, so I recommend looking at the size of your database and choosing an appropriate amount of disk space when you’re setting up the new instance.

4. Install the MySQL client tools (which provide the mysql, mysqldump and mysql_config_editor tools used below) and the AWS command line tools on your Linux EC2 instance.

5. Attach the IAM role to your EC2 instance.

6. Create a VPC Security Group which allows your EC2 instance and RDS-hosted database to successfully communicate with each other.

Note: If you’re running Jamf Pro in AWS and you’re hosting your database in RDS, you likely have a security group like this set up already. Otherwise, your Jamf Pro server wouldn’t be able to communicate with the database.

7. Add the EC2 instance to the VPC Security Group which allows access to your RDS database.

Once all of the preparation work has been completed, use the following procedure to set up the backup process:

Note: For the purposes of this post, I’m using Red Hat Enterprise Linux (RHEL) as the Linux distro. If using another Linux distro, be aware that you may need to make adjustments for application binaries being stored in different locations than they are on RHEL.

Setting up MySQL authentication

1. Log into your EC2 instance.

2. Run the following command to change to a shell which has root privileges.

sudo -s

3. Create a MySQL connection named local using a command similar to the one below:

mysql_config_editor set --login-path=local --host=rds.database.server.url.goes.here --user=username --password

You’ll then be prompted for the password to the Jamf Pro database.

For example, if your Jamf Pro database has the following RDS URL and username:

  • URL: jamfprodb.dcjkudz4hlph.eu-west-1.rds.amazonaws.com
  • Username: jamfsw03

The following command would be used to create the MySQL connection:

mysql_config_editor set --login-path=local --host=jamfprodb.dcjkudz4hlph.eu-west-1.rds.amazonaws.com --user=jamfsw03 --password

Running this command should create a file named .mylogin.cnf in root’s home directory. To see the contents of the MySQL connection file and verify that it’s set up correctly, run the following command:

mysql_config_editor print --login-path=local

That should produce output which looks similar to what’s shown below:

[local]
user = jamfsw03
password = *****
host = jamfprodb.dcjkudz4hlph.eu-west-1.rds.amazonaws.com

Note: The reason for creating the MySQL connection is so we don’t need to store the database password as plaintext in the script.

Creating the backup script

1. Once the MySQL connection has been created, copy the script below and store it as /usr/local/bin/aws_mysql_database_backup.sh.

This script has several variables that will need to be edited. For example, if your Jamf Pro database is named jamfprodb, the S3 bucket you created is named jamfpro-database-backup and the MySQL connection you set up is named local, the following variables would look like this:

# Enter name of the RDS database being backed up

database_name=jamfprodb

# Enter name of the S3 bucket

S3_bucket=jamfpro-database-backup

# Enter the MySQL connection name

mysql_connection_name=local

This script is also available via the link below:

https://github.com/rtrouton/aws_scripts/tree/master/rds_mysql_backup_to_s3_bucket

2. Make the script executable by running the following command with root privileges:

chmod 755 /usr/local/bin/aws_mysql_database_backup.sh

3. Ensure that root owns the file by running the following command with root privileges:

chown root:root /usr/local/bin/aws_mysql_database_backup.sh

Note: The mysqldump command used in the script is set up with the following options:

  • --max-allowed-packet=1024M
  • --single-transaction
  • --routines
  • --triggers

--max-allowed-packet=1024M: This raises mysqldump’s packet buffer limit from its default of 4 MB to the 1 GB specified by the max_allowed_packet value.

--single-transaction: Generates a checkpoint that allows the dump to capture all data prior to the checkpoint while receiving incoming changes. Those incoming changes do not become part of the dump. That ensures the same point-in-time for all tables.

--routines: Dumps all stored procedures and stored functions.

--triggers: Dumps all triggers for each table that has them.

These options are designed for use with InnoDB tables and provide an exact point-in-time snapshot of the data in the database. These options also do not require the MySQL tables to be locked, which in turn allows the Jamf Pro database to continue to work normally while the backup is taking place.
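Putting those pieces together, the core logic of the backup script looks something like the sketch below. This is a simplified illustration rather than the full script linked above, and it assumes the AWS command line tools are installed, the EC2 instance’s IAM role grants write access to the bucket, and the variable values match the earlier examples:

```shell
#!/bin/bash

# Name of the RDS database being backed up
database_name=jamfprodb

# Name of the S3 bucket
S3_bucket=jamfpro-database-backup

# Name of the MySQL connection created with mysql_config_editor
mysql_connection_name=local

# Timestamped filename for the compressed backup
backup_file="/tmp/${database_name}-$(date +%Y-%m-%d-%H%M%S).sql.gz"

# Dump the database using the stored credentials, then compress the output.
# The --login-path option must be listed first.
mysqldump --login-path="$mysql_connection_name" \
          --max-allowed-packet=1024M \
          --single-transaction \
          --routines \
          --triggers \
          "$database_name" | gzip > "$backup_file"

# Copy the compressed backup to the S3 bucket
aws s3 cp "$backup_file" "s3://${S3_bucket}/"

# Remove the local copy once the upload has completed
rm -f "$backup_file"
```

The actual script adds logging and error handling around this core, so treat the above as a map of the moving parts rather than a drop-in replacement.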

Scheduling the database backup

You can set up a nightly database backup using cron. For example, if you wanted to set up a database backup to run daily at 11:30 PM, you can use the procedure below to set that up.

1. Export existing crontab by running the following command with root privileges:

crontab -l > /tmp/crontab_export

2. Export new crontab entry to exported crontab file by running the following command with root privileges:

echo "30 23 * * * /usr/local/bin/aws_mysql_database_backup.sh 2>&1" >> /tmp/crontab_export

3. Install new cron file using exported crontab file by running the following command with root privileges:

crontab /tmp/crontab_export


Once everything is set up and ready to go, you should see your database backups and associated logs begin to appear in your S3 bucket.


Creating root-level directories and symbolic links on macOS Catalina

One of the changes which came with macOS Catalina was the introduction of a read-only root volume for the OS. For users or environments accustomed to adding directories to the root level of the boot drive, this change meant they could no longer do that.

To address this need, Apple added a new method for creating directories at the root level which leverages Apple File System’s new firmlink functionality. Firmlinks are new in macOS Catalina and are similar in function to Unix symbolic links, but instead of only allowing travel one way (from source to destination) firmlinks allow bi-directional travel.

The use of firmlinks is exclusively reserved for the OS’s own use, but Apple has also made available what are called synthetic firmlinks. These synthetic firmlinks are how the OS enables folks to create directories and symbolic links on the read-only boot volume. For more details, please see below the jump.

To create a synthetic firmlink, you need to do the following:

1. Create a file in the /etc directory named synthetic.conf.
2. Make sure /etc/synthetic.conf has the following permissions:

  • root: read, write
  • wheel: read
  • everyone: read

3. In /etc/synthetic.conf, define the name(s) of the empty directory or symbolic link you want to have appear at the root level.

4. After all desired entries have been made, save the /etc/synthetic.conf file.

5. Restart the Mac to apply the changes.
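The permissions listed in step 2 correspond to owner root with read/write access and read-only access for the wheel group and everyone else, i.e. mode 644. Assuming you’re working in a root shell, they can be set like this:

```shell
# Set ownership to the root user and wheel group
chown root:wheel /etc/synthetic.conf

# root: read/write; wheel and everyone else: read-only
chmod 644 /etc/synthetic.conf
```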

For example, an entry in /etc/synthetic.conf consisting of just a name creates an empty directory with that name at the root level, while a name followed by a tab and a path creates a symbolic link.

Note: In those cases where you’re creating a symbolic link and are including a path, the start point for the directory path is not /. Instead, it is the next directory level down.

To show how this works, I’ve created a directory containing installer packages located at /Users/Shared/installers.


To create a symbolic link at the root level named installers which points to /Users/Shared/installers, I would do the following:

1. Create the /etc/synthetic.conf file if it doesn’t already exist.
2. Add the following entry to the /etc/synthetic.conf file:

installers	Users/Shared/installers


3. Reboot the Mac.

Note: Whoever designed this came down on the “tabs” side of the “tabs vs. spaces” debate. When creating the separation between installers and Users/Shared/installers in the /etc/synthetic.conf file, you need to use a tab. If you use spaces instead, the synthetic firmlink won’t be created.

After the reboot, you should see a symbolic link named installers at the root level of the boot volume. When you navigate to it, you should see the contents of /Users/Shared/installers.


To remove the symbolic link, remove the relevant entry from /etc/synthetic.conf and then restart. After the reboot, the installers symbolic link should be missing from the root level of the boot volume.


For more information, please see the synthetic.conf man page. This is available by entering the following command in Terminal on macOS Catalina:

man synthetic.conf

A beginner’s guide to the Jamf Pro Classic API

When working with Jamf Pro, one way to save yourself a lot of clicking in the admin console is to use one of the two current Jamf Pro APIs. Both APIs are REST APIs, which means they can perform requests and receive responses via HTTP methods like GET, PUT, POST and DELETE. That means that the curl tool can be used to send commands to and receive information from a Jamf Pro server.

The two APIs are as follows:

  • Classic API
  • Jamf Pro API (formerly known as the Universal API)

Classic API

This API is the original one which Jamf Pro started with and it is slated for eventual retirement. This API is designed to work with XML and JSON.

The base URL for the Classic API is located at /JSSResource on your Jamf Pro server. If your Jamf Pro server is https://server.name.here:8443, that means that the API base URL is as follows:

https://server.name.here:8443/JSSResource

To help you become familiar with the API, Jamf includes documentation and “Try it out” functionality at the following URL on your Jamf Pro server:

https://server.name.here:8443/api

The Classic API is designed to work with usernames and passwords for authentication, with the username and password being passed as part of the curl command.

Examples: https://developer.jamf.com/apis/classic-api/index

Jamf Pro API

This API is in beta and is designed to be an eventual replacement for the Classic API. This API is designed to work with JSON.

The base URL for the Jamf Pro API is located at /uapi on your Jamf Pro server. If your Jamf Pro server is https://server.name.here:8443, that means that the API base URL is as follows:

https://server.name.here:8443/uapi

To help you become familiar with the API, Jamf includes documentation and “Try it out” functionality at the following URL on your Jamf Pro server:

https://server.name.here:8443/uapi/docs

The Jamf Pro API is designed to work with token-based authentication, with a Jamf Pro username and password used to initially generate the necessary token. These tokens are time-limited and expire after 30 minutes. However, you can generate a new token for API authentication using the existing token’s credentials. The new token generation process does the following:

  1. Creates a new token with the same access rights as the existing token.
  2. Invalidates the existing token.

Jamf Pro API examples: https://developer.jamf.com/apis/jamf-pro-api/index
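As a sketch of what the token workflow can look like with curl: the endpoint paths below reflect the beta-era /uapi routes and may change, so treat them as assumptions and check your server’s /uapi/docs page for the current ones.

```shell
# Request a new token using a Jamf Pro username and password
curl -su username:password "https://server.name.here:8443/uapi/auth/tokens" -X POST

# Generate a replacement token (invalidating the existing one)
# using the existing token instead of a username and password
curl -s "https://server.name.here:8443/uapi/auth/keepAlive" -H "Authorization: Bearer api_token_goes_here" -X POST
```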

For more details, please see below the jump.

Of the two, the Classic API is currently the one most used by Jamf Pro admins and the one I’ll be focusing on, using XML for input and output. The reasons that the Classic API is most used at this time are the following:

  • The Classic API has been around the longest, so more Jamf Pro admins are familiar with it.
  • Both XML input and output are supported, along with JSON output:
    • Various tools installed as part of macOS allow XML parsing and manipulation when using Bash shell scripting.
    • No tools installed as part of macOS allow JSON parsing and manipulation when using Bash shell scripting.

Update – January 3, 2020: As Graham Pugh pointed out in the comments, I made a mistake originally on what can be done with JSON, where I stated that both JSON input and output were supported. While you can both input and output XML, JSON can only be output. I’m updating the post to correct this.

There are tools available for macOS which allow easy JSON parsing and manipulation, with jq being an excellent example. However, they are not installed as part of macOS Catalina or earlier, which means it’s up to the Mac admin to make sure the relevant JSON parsing tools are installed and up to date on their managed Macs.

In contrast, a number of XML parsing tools (like xmllint and xpath) are installed as part of macOS Catalina and earlier, which means the Mac admin can currently rely on them being available when running API-using scripts on managed Macs.

When using the Classic API, there are four commands available:

  • DELETE
  • GET
  • PUT
  • POST

DELETE = Deletes data from Jamf Pro
GET = Retrieves data from Jamf Pro
PUT = Updates data on Jamf Pro
POST = Creates new data on Jamf Pro

When sending one of these commands to Jamf Pro, you must include the following:

  1. Tool being used – In this case, we’re using curl.
  2. Authentication – In this case, we’re using the username and password of a Jamf Pro user with the correct privileges to run the API command.
  3. URL – We’re using the API base URL followed by the specific API endpoint and data identifier.
  4. -X or --request – We’re using this curl option because we’re sending a request for Jamf Pro to do something.
  5. Command being sent – This will be DELETE, GET, PUT or POST.

For all commands except DELETE, we also need to specify a header, as this tells Jamf Pro whether we’re using XML or JSON. Without this header, you should get XML back by default, but Jamf Pro may send JSON instead. By explicitly specifying XML or JSON in the header, we avoid this issue.

The reason why DELETE is an exception is that we’re not sending or receiving any XML or JSON data. Instead, the Jamf Pro server receives and executes the command to delete the specified data.

Headers:

GET

The header should look like this for XML output:

-H "accept: application/xml"

The header should look like this for JSON output:

-H "accept: application/json"


PUT

The header should look like this for XML input:

-H "content-type: application/xml"


POST

The header should look like this for XML input:

-H "content-type: application/xml"


If you look closely, GET is using different headers than PUT and POST are:

GET

-H "accept: application/xml"

PUT / POST

-H "content-type: application/xml"

Why? It has to do with the direction the data is expected to flow. With GET, you’re downloading data from the server; with PUT / POST, you’re uploading data to the server. So with a GET command, setting accept: as part of the header lets the Jamf Pro server know how you want to receive the data. For PUT / POST, setting content-type: as part of the header lets the Jamf Pro server know what kind of content to expect in the data being uploaded to it.

Using GET

Let’s take a look at some GET examples, using Jamf’s tryitout.jamfcloud.com server. This server doesn’t require authentication, but I’m going to add the curl options for sending a username and password as part of the command so that the command matches what would be sent to a normal Jamf Pro server.

curl -su username:password "https://tryitout.jamfcloud.com/JSSResource/accounts" -H "accept: application/xml" -X GET

When I run that, I get the following XML output:

That could use some improvement for readability, so next let’s pipe it through xmllint’s formatting option to make it look nicer.

curl -su username:password "https://tryitout.jamfcloud.com/JSSResource/accounts" -H "accept: application/xml" -X GET | xmllint --format -

Note: When using xmllint’s formatting option, you normally specify the file being formatted: xmllint --format /path/to/filename.xml. In order to have it format standard input, as we’re doing in this example by piping the output to xmllint, the filename used is a single dash ( - ).

When I run that, I get the following output:

That output lists all accounts and gives me two pieces of information about each account:

  • ID
  • Name

Normally, the API only gives me the option of pulling additional data about something specific by using its ID number. However, for accounts, I’m also given the option of doing a lookup by account name.

From there, I can pull out information about the following account using either the username or ID:

Username: jnuc
ID: 3

By ID: https://tryitout.jamfcloud.com/JSSResource/accounts/userid/3

curl -su username:password "https://tryitout.jamfcloud.com/JSSResource/accounts/userid/3" -H "accept: application/xml" -X GET | xmllint --format -

By Name: https://tryitout.jamfcloud.com/JSSResource/accounts/username/jnuc

curl -su username:password "https://tryitout.jamfcloud.com/JSSResource/accounts/username/jnuc" -H "accept: application/xml" -X GET | xmllint --format -

Since both API requests are ultimately referring to the same data, you should get identical output:

Using POST

When you want to upload all-new data to the Jamf Pro server via the API, you would use the POST command. This command requires that the data be sent along with the API command, so you would need to have either the XML written out as part of the API command or in a file. One important thing to know when using POST is that the ID used is always going to be the number 0. Jamf Pro will interpret an ID of zero as meaning that Jamf Pro should assign the next available ID number to the uploaded data.

For example, if you want to create a new department named Art on your Jamf Pro server, you could use a command like the one shown below:

In this example, the XML being used is pretty simple so we’re flattening out the necessary XML into one line and including it in the command. We’re also using curl’s -d option, which tells curl that it will be transmitting data along with the rest of the command.
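Such a command might look like the following (the server name and credentials are placeholders):

```shell
# Create a new department named Art; the ID of 0 tells Jamf Pro
# to assign the next available ID number to the new department
curl -su username:password "https://server.name.here/JSSResource/departments/id/0" -H "content-type: application/xml" -X POST -d "<department><name>Art</name></department>"
```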

If the data being sent along is a little unwieldy to include with the command, curl has a -T option for uploading a file. For example, if you wanted to create a smart group, you could write out the necessary XML into a file like the one shown below:

Once you have the file ready, you could use a command like the one shown below to create the smart group:

curl -su username:password "https://server.name.here/JSSResource/computergroups/id/0" -H "content-type: application/xml" -X POST -T /path/to/filename.xml

Using PUT

When you want to update existing data on the Jamf Pro server via the API, you would use the PUT command. When using this, you would be targeting some existing data and changing one of its existing attributes. A good example would be changing the status of a policy from enabled to disabled. To do this with a policy which has an ID number of 27, you could use a command like the one shown below:
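A sketch of such a command, flattening the XML into the command itself as in the POST example (server name and credentials are placeholders):

```shell
# Disable policy ID 27 by setting its enabled flag to false
curl -su username:password "https://server.name.here/JSSResource/policies/id/27" -H "content-type: application/xml" -X PUT -d "<policy><general><enabled>false</enabled></general></policy>"
```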

Similar to the earlier POST example which uses an XML file, you can also use the -T option with PUT to upload a file. For example, if you wanted to update the distribution point associated with a network segment which has an ID number of 561, you could add the necessary data into an XML file like the one shown below:

Once you have the file ready, you could use a command like the one shown below to update the network segment:

curl -su username:password "https://server.name.here/JSSResource/networksegments/id/561" -H "content-type: application/xml" -X PUT -T /path/to/filename.xml

Using DELETE

When you want to delete data from the Jamf Pro server via the API, you would use the DELETE command. All you generally need with the DELETE command is the identifier for the data you want to remove, so the commands are simpler. No header info or specifying if you want to use XML or JSON is required.

For example, if you wanted to delete an existing computer inventory record which has the ID number of 1024, you can use a command like the one shown below to do so:

curl -su username:password "https://server.name.here/JSSResource/computers/id/1024" -X DELETE

Similarly, if you wanted to delete an existing network segment which has the ID number of 22, you could use a command like the one shown below:

curl -su username:password "https://server.name.here/JSSResource/networksegments/id/22" -X DELETE

Moving on to more advanced usage

You can use the Jamf Pro Classic API with scripting and other automation tools to accomplish some truly amazing administrative feats with Jamf Pro and the Jamf Pro API beta looks to build on that strong foundation. While the information in this post won’t solve all of your API-related issues, it does hopefully provide enough foundational support to get you started with using the Jamf Pro Classic API. Good luck!

Apple Device Management book now available for purchase from Apple Books

For the folks who have asked about eBook purchasing options for the Apple Device Management book I wrote with my colleague Charles Edge, I’m happy to say that you can now also get our book from Apple Books.


That means that you can now get it in digital format from Amazon (for the Kindle), Apress (in PDF and ePub format) and Apple Books.

If you’re looking for a digital copy, there are lots of options available. Hopefully, you’ll find one that works best for you.

Apple Device Management book now available for purchase from Amazon and Apress

I’ve been working on a new book with my colleague Charles Edge and I’m delighted to announce it’s now available for regular sale from both Amazon and Apress, our publisher!

Amazon: https://www.amazon.com/Apple-Device-Management-Managing-AppleTVs/dp/1484253876

Apress: https://www.apress.com/us/book/9781484253878

This quality item is suitable for any gift-giving occasion (including the ones occurring this week!) in addition to being the perfect something for yourself. For those who have asked about it being available in electronic format, it’s available in the following formats depending on the seller:

  • Amazon: Available for the Kindle
  • Apress: Available in PDF and ePub format

Deploying Terminal profile settings using macOS configuration profiles

A number of Mac admins have their Terminal appearance settings configured just the way they like them, but it can be a bit of manual work to export and import them. After having to manually configure and export these settings more than a few times, I wanted to see if it was possible to export these settings in a way to make it easy to convert into a configuration profile.

With a little work and research, I was able to write a script which handled exporting the Terminal profile I wanted into a properly formatted plist file. For more details, please see below the jump.

The script I wrote is named Export_Mac_Terminal_Profiles and it is a .command file, which means it can be run by double-clicking on it. To use it, please use the following procedure:

  1. Identify the name of the Terminal profile you want to export.
  2. Double-click on the Export_Mac_Terminal_Profiles.command script.
  3. Enter the name of the Terminal profile you want to export.
  4. Decide if you want the exported Terminal profile to be set up as a default profile. By specifying it as a default profile, the exported Terminal profile will be configured as both a startup profile and as a default profile.

In this example, I’ve configured a custom Terminal profile named Documentation in my account’s Terminal settings and want to export it for use with a configuration profile.


To export it, I followed the procedure described above and entered the following when prompted:

Name: Documentation
Default profile: 1 (which configures the exported profile to be set as both a startup profile and as a default profile.)
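For reference, designating a profile as the default and startup profile corresponds to the Default Window Settings and Startup Window Settings keys in the com.apple.Terminal preference domain. Setting them manually for the Documentation profile would look like this (this is what the exported plist encodes; you don’t need to run these commands when using the script):

```shell
# Set the named Terminal profile as the default profile for new windows
defaults write com.apple.Terminal "Default Window Settings" "Documentation"

# Set the named Terminal profile as the startup profile
defaults write com.apple.Terminal "Startup Window Settings" "Documentation"
```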


When the script finished running, it opened a Finder window showing me a com.apple.Terminal.plist file.


This plist file contained all of the settings needed to create a configuration profile which did the following:

  1. Install the Documentation Terminal profile
  2. Configure the Documentation Terminal profile as both a startup profile and as a default profile.

From there, I used Tim Sutton‘s mcxToProfile tool to create a configuration profile from the exported com.apple.Terminal.plist file.
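In case it’s useful, an mcxToProfile run for this plist might look like the command below, where the identifier is an arbitrary reverse-DNS value of your choosing (the path shown is a placeholder):

```shell
# Build a configuration profile from the exported Terminal settings
./mcxToProfile.py --plist /path/to/com.apple.Terminal.plist --identifier com.company.terminalprofile
```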


Once I had the configuration profile, I verified that I was able to install it and that the Documentation Terminal profile was now installed and set as the default profile.


However, one side effect I noticed was that installing a Terminal profile using a configuration profile resulted in all other Terminal profiles vanishing from the Terminal preferences.


To restore copies of the OS-provided Terminal profiles, click on the Profiles window’s cog wheel and select Restore Default Profiles.


This will restore the OS-installed Terminal profiles in their default configuration. This restore process will not affect the Terminal profile installed by the configuration profile.


The Export_Mac_Terminal_Profiles script is available below and also on GitHub at the following address:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Export_Mac_Terminal_Profiles

The example configuration profile for the Documentation Terminal profile is also available below.

Enabling or disabling all Jamf Pro policies using the API

Every so often, it may be useful to be able to enable or disable all of your current Jamf Pro policies. In those cases, depending on how many policies you have, it can be tedious to have to do them one at a time using the admin console.

However, with the right API calls in a script, it’s straightforward to perform these tasks using the Jamf Pro API. For more information, please see below the jump.

To disable a Jamf Pro policy, you can use curl to send an API call similar to the one shown below:

As an example, here’s how the API call would look if using the following information to disable a specified Jamf Pro policy:

  • Jamf Pro server: https://jamfpro.demo.com
  • Jamf Pro username: jpadmin
  • Jamf Pro username’s password: Password1234
  • Jamf Pro policy ID number: 27
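With those values, the disable call might look like the sketch below, which uses the Classic API to set the policy’s enabled flag to false:

```shell
curl -su jpadmin:Password1234 "https://jamfpro.demo.com/JSSResource/policies/id/27" -H "content-type: application/xml" -X PUT -d "<policy><general><enabled>false</enabled></general></policy>"
```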

To enable a Jamf Pro policy, you can use curl to send an API call similar to the one shown below:

If using the same information as the example above, here’s how the API call would look when enabling the specified Jamf Pro policy:
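Using the same placeholder values, the enable call might look like this, with the enabled flag set to true instead:

```shell
curl -su jpadmin:Password1234 "https://jamfpro.demo.com/JSSResource/policies/id/27" -H "content-type: application/xml" -X PUT -d "<policy><general><enabled>true</enabled></general></policy>"
```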

Since running the API calls individually may get tedious, I’ve written a couple of scripts to assist with these tasks:

  • Jamf-Pro-Disable-All-Policies.sh
  • Jamf-Pro-Enable-All-Policies.sh

These scripts are designed to use the Jamf Pro API to do the following:

  • Identify the individual IDs of the computer policies stored on a Jamf Pro server
    • If running Jamf-Pro-Disable-All-Policies.sh, disable the policy using its policy ID
    • If running Jamf-Pro-Enable-All-Policies.sh, enable the policy using its policy ID
  • Display HTTP return code and API output

When the script is run, you should see output similar to that shown below. Because these scripts affect all policies on the Jamf Pro server, the scripts will ask you to confirm that you want to do this by typing the following when prompted:

YES

Any other input will cause the scripts to exit.


If the script is successful, you should see output like this for each policy. In this case, this is output from Jamf-Pro-Disable-All-Policies.sh:



The policy enabling and disabling scripts are available from the following addresses on GitHub:

Session videos from Jamf Nation User Conference 2019 now available

Jamf has posted the session videos from Jamf Nation User Conference 2019, including the video for my “MDM: From Nice-To-Have to Necessity” session.

For those interested, all of the JNUC 2019 session videos are available on YouTube. For convenience, I’ve linked my session here.

Identifying Self Service policies with missing icons

As part of setting up Self Service policies in Jamf Pro, the usual practice is to include an icon to help the user distinguish between various Self Service policies.


However, when copying policy information via the API, a Self Service policy’s icon is sometimes not copied along with the rest of the policy. When this happens, it can be hard to figure out later which policies were missed.

To help with situations like this, I have a script which does the following:

  1. Checks all policies on a Jamf Pro server.
  2. Identifies which ones are Self Service policies that do not have icons.
  3. Displays a list of the relevant policies.

For more details, please see below the jump.

The script is named Jamf_Pro_Detect_Self_Service_Policies_Without_Icons.sh. For authentication, the script can accept hard-coded values in the script, manual input or values stored in a ~/Library/Preferences/com.github.jamfpro-info.plist file.

The plist file can be created by running the following commands and substituting your own values where appropriate:

To store the Jamf Pro URL in the plist file:

defaults write com.github.jamfpro-info jamfpro_url https://jamf.pro.server.goes.here:port_number_goes_here

To store the account username in the plist file:

defaults write com.github.jamfpro-info jamfpro_user account_username_goes_here

To store the account password in the plist file:

defaults write com.github.jamfpro-info jamfpro_password account_password_goes_here
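Conceptually, the detection logic in the script works along the lines of the simplified sketch below. This is not the actual script: it assumes xmllint is available and that the jamfpro_url, jamfpro_user and jamfpro_password values have already been read into shell variables.

```shell
# Get the list of all policy IDs on the Jamf Pro server
policy_ids=$(curl -su "$jamfpro_user:$jamfpro_password" "$jamfpro_url/JSSResource/policies" -H "accept: application/xml" | xmllint --format - | awk -F'[<>]' '/<id>/ {print $3}')

for policy_id in $policy_ids; do
  # Download the individual policy record
  policy_xml=$(curl -su "$jamfpro_user:$jamfpro_password" "$jamfpro_url/JSSResource/policies/id/$policy_id" -H "accept: application/xml")

  # Check if the policy is displayed in Self Service
  self_service=$(echo "$policy_xml" | xmllint --xpath "string(//self_service/use_for_self_service)" -)

  # Check if an icon has been assigned to the policy
  icon=$(echo "$policy_xml" | xmllint --xpath "string(//self_service/self_service_icon/id)" -)

  if [[ "$self_service" = "true" ]] && [[ -z "$icon" ]]; then
    echo "Policy $policy_id is a Self Service policy with no icon."
  fi
done
```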

When the script is run, you should see output similar to that shown below.


The script is available below, and at the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Scripts/Jamf_Pro_Detect_Self_Service_Policies_Without_Icons

Identifying vendors of installed Java JDKs using Jamf Pro

Since Oracle’s license change for Java 11 and later took effect in October 2018, with Oracle now charging for the production use of its Java releases, the number of open source (and free) OpenJDK distributions has increased dramatically.

Before the license change, most Mac admins would only install Oracle Java on those Macs which needed Java. Now, the list of available vendors has broadened to include AdoptOpenJDK, Amazon, Azul, SAP and other OpenJDK distributions.

Note: There may be even more OpenJDK distributions available for macOS, but these are the ones I know of.

To help Jamf Pro admins keep track of which vendors’ Java distributions are installed on their Macs, I’ve written a Jamf Pro Extension Attribute to help identify them. For more details, please see below the jump.

This Jamf Pro Extension Attribute verifies if a Java JDK is installed. Once the presence of an installed JDK has been verified using the java_home tool, the JDK is checked for the vendor information. The EA will return one of the following values:

  • None
  • AdoptOpenJDK
  • Amazon
  • Apple
  • Azul
  • OpenJDK
  • Oracle
  • SAP
  • Unknown

The returned values indicate the following:

None = No Java JDK is installed.
AdoptOpenJDK = AdoptOpenJDK is the Java JDK vendor.
Amazon = Amazon is the Java JDK vendor.
Apple = Apple is the Java JDK vendor.
Azul = Azul is the Java JDK vendor.
OpenJDK = OpenJDK is the Java JDK vendor.
Oracle = Oracle is the Java JDK vendor.
SAP = SAP is the Java JDK vendor.
Unknown = There is a Java JDK installed, but it is not from one of the listed vendors.
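As a rough sketch of how a vendor check like this can work: the version below is a simplified illustration rather than the actual Extension Attribute, and it assumes the installed JDK ships a release file with an IMPLEMENTOR line, which some distributions (such as Apple’s legacy Java 6) do not include.

```shell
#!/bin/bash

# Check for an installed JDK using macOS's java_home tool
java_home=$(/usr/libexec/java_home 2>/dev/null)

if [[ -z "$java_home" ]]; then
    result="None"
else
    # Read the vendor string from the JDK's release file
    implementor=$(awk -F'"' '/^IMPLEMENTOR=/ {print $2}' "$java_home/release" 2>/dev/null)

    # Map the vendor string to one of the EA's reported values
    # (the actual EA also distinguishes plain OpenJDK builds)
    case "$implementor" in
        *AdoptOpenJDK*) result="AdoptOpenJDK" ;;
        *Amazon*)       result="Amazon" ;;
        *Apple*)        result="Apple" ;;
        *Azul*)         result="Azul" ;;
        *Oracle*)       result="Oracle" ;;
        *SAP*)          result="SAP" ;;
        *)              result="Unknown" ;;
    esac
fi

echo "<result>$result</result>"
```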

The Extension Attribute is available below, and at the following address on GitHub:

https://github.com/rtrouton/rtrouton_scripts/tree/master/rtrouton_scripts/Casper_Extension_Attributes/java_jdk_vendor