During my internship with Universal Creative in summer 2024, I worked with a version control and workflow management software called M-Files. An Excel workbook referred to as the Rosetta Stone was used to track the status of the safety-related tests performed during the commissioning of our attractions. One of my daily tasks was to pull a report from M-Files on each test's status, update the Rosetta Stone, calculate the daily progress made toward test completion, and record that progress in a separate burndown workbook.
At the suggestion of my direct manager, I looked into automating this process and eventually created a Python program with a GUI and persistent settings/saved data that completes the entire process with a single button click. A re-creation of the program UI is shown below.
At startup the program checks for a local saved-data file from the last use; if it is present, the UI fields are auto-populated from the saved settings, and if not, the fields are left blank and the file is created for next time. Once all relevant fields are filled in, including a username and password, server connection info, and the locations of the burndown and Rosetta Stone workbooks, auto-mode can be started.
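A minimal sketch of that startup check is shown here, assuming a JSON settings file; the file name and field set are placeholders rather than the real program's schema, and since the original write-up doesn't say whether credentials were persisted, the password is left out of the saved data.

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path

SETTINGS_FILE = Path("saved_settings.json")  # hypothetical saved-data file name


@dataclass
class Settings:
    username: str = ""
    server_url: str = ""
    rosetta_stone_path: str = ""
    burndown_path: str = ""


def load_settings() -> Settings:
    """Load settings from the last run, or create a blank file on first use."""
    if SETTINGS_FILE.exists():
        return Settings(**json.loads(SETTINGS_FILE.read_text()))
    settings = Settings()
    SETTINGS_FILE.write_text(json.dumps(asdict(settings), indent=2))
    return settings


def save_settings(settings: Settings) -> None:
    """Persist the current field values so the UI can be auto-populated next time."""
    SETTINGS_FILE.write_text(json.dumps(asdict(settings), indent=2))
```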
In auto-mode the program connects to the server, pulls metadata on all tests in the vault from the M-Files REST API, and makes additional API calls for any tests whose status needs to be updated. This data is then written to a dedicated sheet in the Rosetta Stone workbook, and the test list from the Rosetta Stone is read to filter out any untracked tests still present in M-Files. Finally, the total number of tests in each category of interest is calculated and written to the next available line in the burndown workbook, which the user can then open in Excel and read without completing any part of this process manually.
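The back half of that pipeline, writing the pulled data into the Rosetta Stone and appending totals to the burndown, could be sketched roughly as below. The sheet names, status categories, and use of openpyxl are assumptions for illustration, not the real workbook layouts, which differed between the two test groups.

```python
from collections import Counter
import openpyxl


def update_workbooks(tests, settings):
    """Write pulled test data to the Rosetta Stone and append category totals to the burndown."""
    wb = openpyxl.load_workbook(settings.rosetta_stone_path)
    data_sheet = wb["MFiles Data"]   # hypothetical sheet holding the raw M-Files pull
    test_list = wb["Test List"]      # hypothetical sheet listing the tracked tests

    # Keep only tests that the Rosetta Stone actually tracks.
    tracked_ids = {row[0] for row in test_list.iter_rows(min_row=2, max_col=1, values_only=True)}
    tracked = [t for t in tests if t["id"] in tracked_ids]

    for i, test in enumerate(tracked, start=2):
        data_sheet.cell(row=i, column=1, value=test["id"])
        data_sheet.cell(row=i, column=2, value=test["status"])
    wb.save(settings.rosetta_stone_path)

    # Append one row of category totals to the next free line of the burndown workbook.
    counts = Counter(t["status"] for t in tracked)
    burndown = openpyxl.load_workbook(settings.burndown_path)
    sheet = burndown.active
    next_row = sheet.max_row + 1
    for col, status in enumerate(["Not Started", "In Progress", "Complete"], start=2):
        sheet.cell(row=next_row, column=col, value=counts.get(status, 0))
    burndown.save(settings.burndown_path)
```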
A major roadblock in developing this program was the limitation on what data could be read from the M-Files API. By reading the API documentation and experimenting with the call parameters, I was able to get a list of all files in the test list along with some metadata, such as the timestamp of the most recent update, but this data did not include the current status of any file. A separate API call had to be made to check each status, which for a group of roughly 1,800 tests meant 1,800 API calls and a total run time of over six minutes. By keeping a local CSV file containing each test's ID, last-modified date, and last known status, I built a system that compares the last-modified timestamp of each test against the locally stored data and requests an update only for tests that had changed or been added since the program was last run. This reduced the run time from over six minutes to as little as 15 seconds and eliminated potentially thousands of unnecessary API calls.
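A simplified version of that caching scheme, assuming a small CSV keyed on test ID, might look like the following; the file name and field names are placeholders, and fetch_status stands in for the per-test M-Files status call rather than a real endpoint.

```python
import csv
from pathlib import Path

CACHE_FILE = Path("test_status_cache.csv")  # hypothetical local cache file


def load_cache():
    """Read the local cache mapping test ID to its last-modified timestamp and last known status."""
    if not CACHE_FILE.exists():
        return {}
    with CACHE_FILE.open(newline="") as f:
        return {row["id"]: row for row in csv.DictReader(f)}


def resolve_statuses(tests, fetch_status):
    """Call the status API only for tests that are new or modified since the last run."""
    cache = load_cache()
    resolved = []
    for test in tests:
        cached = cache.get(test["id"])
        if cached and cached["last_modified"] == test["last_modified"]:
            status = cached["status"]            # unchanged: reuse the cached status, no API call
        else:
            status = fetch_status(test["id"])    # changed or new: one extra API call
        resolved.append({**test, "status": status})

    # Rewrite the cache so the next run starts from the latest known state.
    with CACHE_FILE.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "last_modified", "status"], extrasaction="ignore")
        writer.writeheader()
        writer.writerows(resolved)
    return resolved
```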
During my internship I was able to validate all of the program's features and confirm that changes to the M-Files vault were handled correctly with no manual intervention needed. The program became flexible enough to track two different test groups, saving to two differently formatted sets of Rosetta Stone and burndown workbooks without overwriting or corrupting any data. A future revision would have standardized the format of the two Excel workbooks, but as my summer internship drew to a close, that was a task I had to leave unfinished. I left behind a well-documented code library along with a user guide and references for anyone maintaining or updating the code, with the hope that it would evolve and continue to prove useful in future projects even though I could not be there to implement new features myself.