So I had 250GB of data to upload to Google G Suite.

Some ~200GB is pictures; the rest is a mix of documents and an assortment of MacBook home-directory detritus.

So I wanted to upload the photos and get them to automatically appear in Google Photos.

Google Backup and Sync is supposedly the application Google recommends for uploading and syncing files from multiple devices.

The problem is that Google Backup and Sync seems to have so many things on its mind that it can’t concentrate on just getting your data up to the cloud.

The first problem with using Backup and Sync to upload data is the atrociously slow Australian internet. At Melbourne DSL speeds, the best-case scenario meant it would take months to copy the data up. And that is before factoring in that Backup and Sync stops and restarts at every reboot.
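As a rough sanity check on the "months" claim, here is the back-of-the-envelope arithmetic. The ~1 Mbps upload rate is an assumption on my part (typical of ADSL2+ in Australia), not a measured figure:

```python
# Back-of-the-envelope estimate of upload time over DSL.
# The 1 Mbps sustained upload rate is an ASSUMPTION (typical ADSL2+),
# not a measured speed.

DATA_GB = 250        # total data to upload
UPLOAD_MBPS = 1.0    # assumed sustained upload speed

total_bits = DATA_GB * 1e9 * 8            # decimal GB -> bits
seconds = total_bits / (UPLOAD_MBPS * 1e6)
days = seconds / 86400

print(f"{days:.0f} days of uninterrupted uploading")  # -> 23 days
```

Twenty-three days of *uninterrupted* uploading, best case. Factor in restarts, rescans, and real-world speeds below the line rate, and "months" is not an exaggeration.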

So as a workaround I thought that if I copied the data to the Amazon cloud and then fired up a t2.medium EC2 Windows instance, the Google Backup and Sync client would make short work of copying the data to Google Drive and Photos.

Sadly this wasn’t the case. The copy takes days. The files I was copying include many thousands of images that are too small to upload (a side effect of them coming from the Mac Photos Library.photoslibrary bundle), so Backup and Sync considers copying each one up to Google but then has to laboriously skip them all again every time you restart or reboot the machine running it.

Google’s answer to any problem you have with Backup and Sync is: reboot, disconnect your Google account, remove and re-install, wait. Basically the Google forums sound like an episode of The IT Crowd. The advice is repeatedly: “Have you tried switching it off and back on again?”

Frustratingly the assistance on the forums seems to have been filtered through the Sales, Marketing & Legal departments. So real help is buried.

It’s been really disappointing. I am starting to think that in future I would rather use OneDrive or Dropbox. Anything but Backup & Sync.

The Backup and Sync client is also resource-hungry. I increased my AWS EC2 instance from a t2.medium (2 vCPUs, 4GB RAM) to a t2.large (2 vCPUs, 8GB RAM) but found this didn’t help. Backup and Sync wasn’t using a lot of RAM, perhaps 1.5-1.8GB, but it was pegging the t2.medium / t2.large instance’s CPU at 90-100% utilisation.

So one thing that I did find helped was changing to a c5.large instance type (2 vCPUs with 4GB RAM, but with 3.0GHz CPUs instead of the 2.4GHz CPUs the t2 instances had). This brought CPU utilisation down to around 50-60%, and so Backup and Sync finally wasn’t running like molasses in winter.

Hopefully this will allow it to churn through the files and folders it should be uploading and finally get to the point where it tells me it’s synced…

Update! So finally it’s finished.

The EC2 instance was created at “2018-04-23T01:04:45.000Z” and the upload and sync completed at “Wed 9 May 2018 11:17:16 UTC”. So, after doing all its “upload skipped” (too small) and “upload skipped already uploaded” busywork:

Just over 16 days to copy 153GB.
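Working out the effective throughput from those two timestamps (taking 153GB as decimal gigabytes) gives a number that makes the whole exercise look even worse:

```python
# Effective throughput implied by the figures above: 153GB moved between
# the instance's creation time and the final "synced" message.
from datetime import datetime, timezone

start = datetime(2018, 4, 23, 1, 4, 45, tzinfo=timezone.utc)
end = datetime(2018, 5, 9, 11, 17, 16, tzinfo=timezone.utc)

elapsed_s = (end - start).total_seconds()
bits = 153 * 1e9 * 8                 # 153 decimal GB in bits
mbps = bits / elapsed_s / 1e6        # average megabits per second

print(f"{elapsed_s / 86400:.1f} days, ~{mbps:.2f} Mbps average")
# -> 16.4 days, ~0.86 Mbps average
```

Under one megabit per second, sustained, from a machine sitting inside AWS with a fat pipe to the internet. My home DSL could nearly have matched that.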

So in the end my question is “How the heck can a global company create a product that wastes so much time and resources?”.