Simplifying Repetitive Tasks with CRON Jobs: My Backend Story

When I was tasked with developing the API for my church website, I was both honored and excited. It wasn’t just a project—it was recognition that my church saw me as the tech guy, their very own “tech bro.” Naturally, I was determined to do a stellar job. I rolled up my sleeves, fired up my laptop, and chose my trusty tools: NestJS for the framework and MongoDB for data storage.

The project itself was straightforward—or so I thought. The website’s purpose was to give church members easy access to information about upcoming meetings, allow them to register for these events, and provide downloadable access to sermons. On the flip side, it also needed to empower church leaders with valuable data on registration stats and attendee numbers.

Most of these requirements were a breeze to implement. Meetings, registration, analytics? No problem. But the sermon implementation? That turned out to be my Achilles' heel. Fetching, organizing, and updating sermon data from Google Drive was a whole different beast, and I soon realized I’d signed up for a challenge far bigger than I’d anticipated.

When I first developed the application, I ran into two major roadblocks on the backend. The first was fetching sermons from Google Drive and organizing them in the database by the date they were preached and the series they belonged to. The second was figuring out how to update this information without messing up the existing records—especially ensuring that the file links stayed intact.

By June or July last year, I cracked the first problem. But the second one? I just didn’t have the bandwidth to solve it properly. So, I resorted to a tedious manual process that went something like this:

1. First, I’d manually run the code to fetch the sermon data from Google Drive.

2. Then, I’d turn to ChatGPT to help reformat the data into my desired schema.

3. After that, I’d update my local database with the new records and export everything as a JSON file.

4. Finally, I’d swap out the old sermon data in the production database with the updated version.

As you can imagine, this was a nightmare—time-consuming and full of opportunities for mistakes. I knew I needed a better solution. Enter CRON jobs, my unsung hero. For those who aren’t familiar, CRON is like having a personal assistant for your server—a way to automate repetitive tasks at scheduled intervals. It’s a DevOps staple, and in this case, it was exactly what I needed to save myself from the endless cycle of manual updates.
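
If you've never written one, a cron schedule is just five space-separated fields: minute, hour, day of month, month, and day of week. Since my stack was NestJS, the sketches in this post use the @nestjs/schedule package; class and method names are made up for illustration.

```typescript
import { Injectable, Logger } from '@nestjs/common';
import { Cron } from '@nestjs/schedule';

@Injectable()
export class ReminderService {
  private readonly logger = new Logger(ReminderService.name);

  // '0 8 * * 1' reads: minute 0, hour 8, any day of month, any month,
  // day of week 1 (Monday). In plain English: every Monday at 8:00 AM.
  @Cron('0 8 * * 1')
  sendWeeklyReminder() {
    this.logger.log('Time to send the weekly reminder!');
  }
}
```

For the decorator to actually fire, ScheduleModule.forRoot() has to be registered in the app module.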

Here’s how I finally tackled the issue once and for all:

1. Fetching and Organizing: I wrote a script to fetch the sermon files from Google Drive, organize them by the year they were preached, and save them in a folder on the server (the first sketch after this list shows the idea). Each file included details like the sermon name, file ID, upload date, whether it was part of a series, the download URL, and more.

2. Database Updates: Next, I created another script to process these files. It would read the contents, check the database to see if a record already existed, and either update it or create a new one as needed (the second sketch below shows this upsert pattern).

3. Cleanup: After the database was updated, the script would delete the files from the server to ensure there was no outdated data hanging around for the next run (third sketch below).

4. Automation: Finally, I scheduled this entire workflow to run every two weeks using CRON (the last sketch below ties everything together). Now, the database stays fresh and up-to-date without me lifting a finger.
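
Here's roughly what step 1 looks like as a simplified sketch with the googleapis package. The folder ID and credentials file are placeholders, and my real script also captured series information:

```typescript
import { google } from 'googleapis';
import { promises as fs } from 'fs';

const SERMONS_FOLDER_ID = '<your-drive-folder-id>'; // placeholder

export async function fetchSermonMetadata(): Promise<void> {
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account.json', // assumed service-account credentials
    scopes: ['https://www.googleapis.com/auth/drive.readonly'],
  });
  const drive = google.drive({ version: 'v3', auth });

  // List every non-trashed file in the sermons folder, with just the fields we need
  const res = await drive.files.list({
    q: `'${SERMONS_FOLDER_ID}' in parents and trashed = false`,
    fields: 'files(id, name, createdTime, webContentLink)',
    pageSize: 1000,
  });

  // Group the files by year (derived here from the upload date for simplicity)
  const byYear: Record<string, object[]> = {};
  for (const file of res.data.files ?? []) {
    const year = (file.createdTime ?? '').slice(0, 4) || 'unknown';
    (byYear[year] ??= []).push({
      name: file.name,
      fileId: file.id,
      uploadDate: file.createdTime,
      downloadUrl: file.webContentLink,
    });
  }

  // Stage one JSON file per year on the server for the update step
  await fs.mkdir('sermon-staging', { recursive: true });
  for (const [year, sermons] of Object.entries(byYear)) {
    await fs.writeFile(
      `sermon-staging/${year}.json`,
      JSON.stringify(sermons, null, 2),
    );
  }
}
```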
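
Step 2 then walks those staged files and upserts each record. A minimal sketch with Mongoose, where SermonModel is a hypothetical model keyed on the Drive file ID so existing records, and the links they hold, survive the update:

```typescript
import { promises as fs } from 'fs';
import * as path from 'path';
import { SermonModel } from './sermon.model'; // hypothetical Mongoose model

export async function syncSermonsToDatabase(
  stagingDir = 'sermon-staging',
): Promise<void> {
  for (const fileName of await fs.readdir(stagingDir)) {
    const raw = await fs.readFile(path.join(stagingDir, fileName), 'utf8');
    for (const sermon of JSON.parse(raw)) {
      // Upsert keyed on the Drive file ID: update the record if it exists,
      // create it if it doesn't, so nothing already in the database is lost
      await SermonModel.updateOne(
        { fileId: sermon.fileId },
        { $set: sermon },
        { upsert: true },
      );
    }
  }
}
```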
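
Step 3 is the smallest piece: once the sync succeeds, wipe the staging folder so nothing stale leaks into the next run.

```typescript
import { promises as fs } from 'fs';

export async function clearStaging(stagingDir = 'sermon-staging'): Promise<void> {
  // Remove the staging folder and its contents; the fetch step recreates it
  await fs.rm(stagingDir, { recursive: true, force: true });
}
```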
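
Finally, step 4 ties it all together in a NestJS service. One wrinkle worth flagging: standard cron syntax has no true "every two weeks", so a common approximation (used here) is to fire on the 1st and 15th of each month. The service name is illustrative, and the three functions are the sketches above:

```typescript
import { Injectable, Logger } from '@nestjs/common';
import { Cron } from '@nestjs/schedule';
// The three steps sketched earlier, assumed to live in a sibling module
import { fetchSermonMetadata, syncSermonsToDatabase, clearStaging } from './sermon-sync';

@Injectable()
export class SermonSyncService {
  private readonly logger = new Logger(SermonSyncService.name);

  // Midnight on the 1st and 15th of every month: roughly every two weeks
  @Cron('0 0 1,15 * *')
  async syncSermons(): Promise<void> {
    this.logger.log('Starting sermon sync...');
    await fetchSermonMetadata();   // step 1: pull and stage Drive metadata
    await syncSermonsToDatabase(); // step 2: upsert records
    await clearStaging();          // step 3: clean up the staging folder
    this.logger.log('Sermon sync complete.');
  }
}
```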

What was once a tedious manual process is now a streamlined, automated task. CRON jobs didn’t just save me time—they gave me my sanity back.