Sometimes you are away from your preferred machine for running Racket, or maybe you are a student who needs to share computing resources. In these scenarios and others, it can be useful to have other options.
Oh, excellent! I'd been a long-time lurker on the Racket mailing list, and this was the first thread on the Discourse server that caught my eye!
Apologies if this posting is too commercial for this forum, but I thought I'd mention something I just started tinkering with in my off hours. (Disclaimer: this is tangentially related to my day job working on open data at AWS, but opinions are solely mine, etc., etc.) AWS recently started giving away free Jupyter Notebook instances that are fairly powerful (16 GB RAM, optional GPU currently). To go along with that, there is a GitHub repository of examples, including examples in languages other than the default Python.
I've put together a draft notebook that installs Racket, the IRacket Jupyter kernel, and some dependencies in a mostly sandboxed Conda environment. I've also started putting together a notebook that illustrates using Racket with Jupyter / SageMaker Studio Lab.
Unfortunately, I'm still pretty much a novice at Racket, so if anyone has suggestions about something interesting to show off in Racket (or, better yet, wants to make a pull request), I'm all ears. In particular, anything showing off data processing, data visualization, or any aspect of statistics or machine learning would be of interest.
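To make that request a bit more concrete, here is the flavor of cell I have in mind. This is just a sketch over made-up sample data, not something that is in the notebook yet, using the math/statistics and plot libraries that ship with the main Racket distribution:

```racket
#lang racket
;; Sketch of a possible notebook cell: summary statistics plus a quick
;; visualization. The data below is made up; a real cell would load an
;; actual dataset instead.
(require math/statistics
         plot/pict)   ; pict-based plots, handy in a headless notebook

;; Stand-in sample: 500 sums of three uniform draws (roughly bell-shaped).
(define xs
  (build-list 500 (lambda (_) (+ (random) (random) (random)))))

(printf "n = ~a\nmean = ~a\nstddev = ~a\n"
        (length xs) (mean xs) (stddev xs))

;; The resulting pict should render inline as the cell's output in IRacket.
(plot (density xs)
      #:title "Density of the sample"
      #:x-label "value"
      #:y-label "density")
```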
Relatedly, if people are doing work in the cloud and are looking for real datasets, representing a wide variety of domains, formats, and volumes, to work with in situ, my team at work runs the Registry of Open Data on AWS. I selfishly want to see more usage of Racket when working with scientific data! If anyone finds themselves doing something interesting with open data & Racket, please let me know; I'd love to get your work included as a usage example or even blog about it with you.
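And if anyone wants to poke at one of those datasets from Racket without pulling in any AWS-specific tooling: many Registry datasets sit in publicly readable S3 buckets, which can be fetched over plain HTTPS. Here is a rough sketch; the bucket name and key below are placeholders, not a real dataset.

```racket
#lang racket
;; Sketch: fetch an object from a *public* open-data S3 bucket over plain
;; HTTPS -- no AWS SDK or credentials needed. The bucket and key here are
;; hypothetical placeholders; substitute a real public dataset.
(require net/url
         racket/port)

(define object-url
  (string->url
   "https://example-open-data-bucket.s3.amazonaws.com/path/to/sample.csv"))

;; Download the body as a string and peek at the first few lines.
(define body
  (call/input-url object-url get-pure-port port->string))

(for ([line (in-list (string-split body "\n"))]
      [_ (in-range 5)])
  (displayln line))
```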
I’ve added the link to your draft notebook to the wiki page. I’d love to see it in action; perhaps you would be willing to do a brief screen share at the next Racket meetup? (Sat 7 May, 18:00 UTC)
Let us know when the notebook that illustrates using Racket with Jupyter / SageMaker Studio Lab is ready and I’ll add that too.
I'm happy to show off this notebook at the upcoming meetup, though at present it doesn't "do" much beyond preparing the environment. If nothing else, it could serve as an orientation to integrating Racket into a popular environment for data analysis and ML (Jupyter, that is). I think that's an interesting topic, but I'll let you decide whether it would interest attendees.
As for the disaster response hackathon, unfortunately that contest ended in February; it looks like I need to remind that team to update their GitHub README!
Perhaps as a consolation prize, I can offer that our Open Data team covers the cost of hosting useful or interesting datasets via AWS (S3, specifically). In addition, we can generally help out with AWS credits for computing workloads that directly support the creation or optimization of datasets for more straightforward data analysis.
So, if showing off how to build and run a large-scale data processing pipeline with Racket on AWS is of interest, we can likely help out by footing the bill for hosting the data as well as with credits to compute and process it. We can also help promote and show off pipelines, tutorials, and other Racket-based usage examples involving open data, either through our website or by collaborating on a post to one of the various AWS blogs. I have dozens of ideas for datasets posing varying levels of data-processing challenge, if folks are interested.
Apologies if this feels like me backing into a request for proposals.
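To give a flavor of the kind of pipeline stage I'm imagining, here is a rough sketch: stream a large delimited file line by line and accumulate running statistics, so the whole file never has to fit in memory. The file name, separator, and column index are made up for illustration.

```racket
#lang racket
;; Rough sketch of one batch stage in a larger pipeline: stream a big
;; delimited text file line by line, parse one numeric column, and keep
;; running statistics without holding the whole file in memory.
;; The path, separator, and column index are made-up placeholders.
(require math/statistics)

(define (column-stats path #:column [idx 2] #:separator [sep ","])
  (call-with-input-file path
    (lambda (in)
      (void (read-line in))                    ; skip a header row
      (for/fold ([stats empty-statistics])
                ([line (in-lines in)])
        (define fields (string-split line sep))
        (define value
          (and (> (length fields) idx)
               (string->number (list-ref fields idx))))
        (if value
            (update-statistics stats value)
            stats)))))

;; Example use, with a placeholder file name:
;; (define s (column-stats "measurements.csv" #:column 3))
;; (list (statistics-count s) (statistics-mean s) (statistics-stddev s))
```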
Compiler Explorer now supports Racket (more info in the thread below), and since it's a form of Racket in the cloud, I've gone ahead and added it to the wiki page.