I recently did a small proof-of-concept project with a friend in the ISP business. We took some "difficult" backup files from a tool (which turned out to be oddly named .csv files) and put together a Python/pandas/DuckDB solution that lets operators read those strange backup files and build a wide table of handy reference information.
Here's my direct GitHub link to the saved .html file:
and here's an HTML preview via a third-party service that renders the link above as if you had opened the file locally (as I usually do =] ):
3rd party html preview of the jupyter lab file
Though this is a simple demonstration, I hope you find it helpful. For those who aren't in the data field, it could be especially useful if it helps them better organize disparate data as provided by their utilities 🤔😅!
We also took care to ensure personal data and privacy were respected. If you see something of concern, please let me know, but I think this is more diagnostic data at this point than anything specific.
Lastly, what's not shown here is the trial and error that led to this point. The published document is cleaned up, but there were plenty of failed attempts before the syntax to load and display the data worked as desired. We also spent a good amount of time using dtale.show() to explore the dataframes interactively and see which data could be useful and which was extra compared to the desired base dataset. We iteratively tested and built upon the SQL query as we progressed until it reached its final form, published here.
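dtale.show(df) opens a browser UI for exactly this kind of column triage. As a non-interactive sketch of the same step (with a hypothetical DataFrame standing in for one of the loaded backup files), plain pandas can surface columns that carry no information and are candidates for dropping:

```python
import pandas as pd

# Hypothetical frame standing in for one of the loaded backup files.
df = pd.DataFrame(
    {
        "account": ["a1", "a2", "a3", "a4"],
        "plan": ["fiber", "fiber", "dsl", "fiber"],
        "unused_flag": [0, 0, 0, 0],  # constant column: a drop candidate
    }
)

# Constant columns have exactly one unique value.
unique_counts = df.nunique()
drop_candidates = unique_counts[unique_counts <= 1].index.tolist()
print(drop_candidates)
```

In the interactive version, dtale makes the same call: you eyeball the column summaries, note what looks constant or redundant, and trim the base dataset accordingly before refining the SQL.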
This is based on Anaconda Python for Windows, with JupyterLab as the IDE of choice. By using the third-party preview shared above, or by downloading a copy of the provided .html webpage, you can see which libraries were used and the methods employed.