markmac99
Now with added containers!

Jul 23, 2022

Yahoo! I managed to get the distributed processing working :)


The approach I chose was to use AWS's container service (ECS). I made a few tweaks to the correlation library so that, with an optional parameter, it can dump candidate matches out to files (in Python pickle format). These are then distributed to containers in groups of 20, cutting runtime and cost dramatically. Previously it took about one minute per match, so a busy night of Perseids might take 6-8 hours (and cost $$$$). Now the workload processes in about 30 minutes regardless of the number of matches to check, and it's also about half as costly.
