December 6

Hi everyone! Hope you’re well 🙂

Deserted Chateau (ArtCentral)

I announced it on Twitter last week! ArtCentral finally has a proper name. The logo is still being finalised, more on that later.

You can see the announcement tweet here, with some extra info about the website plans underneath it. https://twitter.com/antsstyle/status/1598011858455527426 

Image resizing: search and user thumbnails

As part of improving the gallery layout, and also in preparation for implementing cropped thumbnail galleries (an alternate gallery layout for user profiles, when artists want it), I had to change some of my serverless Lambda code and related pieces to handle cropping specific areas of an image. Easy enough in theory, but it presented some complications with the existing code, which are now solved.

The difference between “search” and “user” thumbnails:

  • User thumbnails are ones a user crops themselves, by selecting the area of the image they want to be displayed in their gallery preview. They’re only displayed in cropped thumbnail galleries.

  • Search thumbnails are generated automatically for normal gallery types and for search results, when an artwork has a very extreme aspect ratio; only the center area is taken, with no user input. For example, a 5000x1000 artwork would have its center area used as its preview, so perhaps the central 2000x1000 or 2500x1000 region.

The “search” thumbnail type exists to prevent the dynamic resizing galleries from adopting very awkward dimensions. Artworks that are very tall and thin, for instance, cause a problem: either you have to resize a row to be really small to fit it (making that artwork tiny) or you have to make the other elements in the row very big, neither of which is a favourable outcome. As such those galleries work best when there is only a certain range of aspect ratios in them, necessitating cropping some more extreme artwork sizes.
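
To make that concrete, here’s a minimal sketch of the center-crop logic, assuming the sharp library (a common choice for image processing in Lambda; the real resizer code isn’t shown here, and the 2:1 cut-off below is just an illustrative threshold, not the site’s actual one):

```typescript
import sharp from "sharp";

// Hypothetical threshold: aspect ratios beyond 2:1 get center-cropped.
const MAX_ASPECT_RATIO = 2;

async function makeSearchThumbnail(input: Buffer): Promise<Buffer> {
  const image = sharp(input);
  const { width, height } = await image.metadata();
  if (!width || !height) return input;

  // Very wide artwork: keep only the central strip.
  if (width / height > MAX_ASPECT_RATIO) {
    const cropWidth = Math.round(height * MAX_ASPECT_RATIO);
    const left = Math.round((width - cropWidth) / 2);
    return image.extract({ left, top: 0, width: cropWidth, height }).toBuffer();
  }

  // Very tall artwork: keep only the central band.
  if (height / width > MAX_ASPECT_RATIO) {
    const cropHeight = Math.round(width * MAX_ASPECT_RATIO);
    const top = Math.round((height - cropHeight) / 2);
    return image.extract({ left: 0, top, width, height: cropHeight }).toBuffer();
  }

  // Aspect ratio is within range: no crop needed.
  return input;
}
```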

Image resizing: further improvements

You would think I’d spent enough time on this, but no 😀 At present, this is one of the few areas of the site with a significant delay for users (when submitting), so I wanted to improve it as much as I can.

Under the system used up to now, resizing an image to 4k, 1080p and 480p took approximately 10 seconds in the worst case, which happens with large images like 40MB PNGs. This means that upon pressing submit, the user could wait up to 10 seconds before being redirected to the artwork display page; that’s a relatively long time. To show how this has improved, I’ll walk through the code changes so far.

Version 1 of the image resizing code basically did a simple for loop of “get the image from cloud storage, resize it to the given size, then upload the resized image to cloud storage” for each size required.

Version 2 improved on this by ordering the resizes, allowing each resize to be performed one after the other on the same image buffer, without having to keep reloading the image from cloud storage each time. This reduced total time taken by around 20% on larger images.
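
As a rough sketch, version 2 looked something like the following, assuming sharp for the resizing; the target widths are illustrative, and uploadResized is a hypothetical callback standing in for the real cloud storage upload:

```typescript
import sharp from "sharp";

// Illustrative target widths for 4k, 1080p and 480p, in descending order.
const TARGET_WIDTHS = [3840, 1920, 854];

async function resizeAll(
  original: Buffer, // downloaded from cloud storage once, up front
  uploadResized: (width: number, data: Buffer) => Promise<void>
): Promise<void> {
  let buffer = original;

  for (const width of TARGET_WIDTHS) {
    // Each resize starts from the previous (larger) result, so later
    // resizes operate on progressively smaller inputs and nothing is
    // re-downloaded from cloud storage.
    buffer = await sharp(buffer).resize({ width }).toBuffer();
    await uploadResized(width, buffer);
  }
}
```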

For version 3, which I’ve just implemented, I came to realise something: when I first wrote this code, I hadn’t realised that AWS Lambda’s limits are more generous than the initial ones on my account. At the time I was limited to 10 concurrent function executions, which meant it was best to get all the resizes done in one script call; that’s also the most cost-efficient way to do it, due to how Lambda is priced. However, it’s possible to make it faster (and thus give the user less delay) by splitting the work up into multiple separate operations. My concurrent execution limit is now 1000, which makes it much easier to issue multiple calls at once.

Version 3 of the image resizer code works fundamentally differently to the previous versions. Instead of calling one script and waiting synchronously for the result, it calls the script X times asynchronously to perform each resize independently of the others using promises; in other words, AWS Lambda is performing 3-5 resize operations at the same time. This means that instead of my webserver waiting 10 seconds for the resize to complete, it only has to wait as long as the longest operation (in the worst case this is resizing to 4k, which is now around 5.5 seconds).
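
Here’s a minimal sketch of what that looks like from the webserver’s side, assuming the AWS SDK v3 Lambda client; the function name, region and payload shape are illustrative assumptions rather than the real ones:

```typescript
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({ region: "eu-west-1" }); // region is an assumption
const TARGET_SIZES = ["4k", "1080p", "480p"];

async function resizeInParallel(artworkKey: string): Promise<void> {
  // Fire one invocation per target size; they all run concurrently in Lambda.
  const invocations = TARGET_SIZES.map((size) =>
    lambda.send(
      new InvokeCommand({
        FunctionName: "resize-artwork", // hypothetical function name
        InvocationType: "RequestResponse",
        Payload: Buffer.from(JSON.stringify({ key: artworkKey, size })),
      })
    )
  );

  // The webserver only waits as long as the slowest single resize.
  await Promise.all(invocations);
}
```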

Additionally, I increased the function memory from 1769MB to 10240MB. In AWS Lambda, you’re granted CPU power proportional to the memory allocated. When I did extensive testing before, allocating more than 1769MB gave diminishing returns, and that’s still true, but I realised it’s still beneficial enough to be worth it. The free tier (the free Lambda compute you can use every month) allows over 3000 artworks per month to be processed at no charge, and every ~3000 artworks after that will cost around $7. Much later on, if the website gets bigger, it would make sense to move back to lower memory, but for now the faster speed is worth it.
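
For anyone curious how those numbers fall out of Lambda’s pricing, here’s a rough back-of-the-envelope check, assuming the standard x86 rate of about $0.0000166667 per GB-second and the 400,000 GB-second monthly free tier; the ~13 seconds of total compute per artwork is an assumption inferred from the figures above, not a measured value:

```typescript
const MEMORY_GB = 10.24;                  // 10240MB allocation
const FREE_TIER_GB_SECONDS = 400_000;     // monthly Lambda free tier
const PRICE_PER_GB_SECOND = 0.0000166667; // standard x86 on-demand rate
const COMPUTE_SECONDS_PER_ARTWORK = 13;   // assumed total across all resizes

// Roughly 3000 artworks fit inside the free tier each month...
const freeArtworksPerMonth =
  FREE_TIER_GB_SECONDS / (MEMORY_GB * COMPUTE_SECONDS_PER_ARTWORK); // ≈ 3005

// ...and each further batch of ~3000 artworks costs around $7.
const costPer3000Artworks =
  3000 * MEMORY_GB * COMPUTE_SECONDS_PER_ARTWORK * PRICE_PER_GB_SECOND; // ≈ $6.65

console.log({ freeArtworksPerMonth, costPer3000Artworks });
```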

I've experimented with compiling the image resizing code to run on ARM architecture instead of x86; AWS Lambda has performance improvements and lower prices on ARM, but actually getting a Node.js project to compile in this scenario hasn't turned out to be the simplest thing ever. I'm going to spend a little more time here and there seeing if it can be done easily, but it's not essential by any means.

CloudFront and S3 changes

I made some minor changes to how the CDN and file storage are handled, to accommodate AWS’ newer access policies. At some point I also need to decide where the website’s infrastructure (file hosting buckets, webservers etc) will be physically located. The two main choices:

  • USA: Likely to be closer to some of the core audience, and it’s where most things are currently located for the testing setup I have so far. That said, US data protection laws are a bit lax, and I think users might feel more protected with another option.

  • Ireland (EU): The tiniest bit more expensive, but basically negligible. Being in the EU means stronger data protection laws, though it’ll mean a tiny bit more latency for US users (likely a negligible issue). I will probably go with this option, so I’ve been preparing to move the file storage buckets and other pieces over to this region; I still have to double-check whether any parts of my code or elsewhere need adjusting for this, but I think I know all the configs that have to change.

Appearance and styling

I'm in the process of updating the website's looks, after getting some helpful advice from the graphic designer I'm commissioning for the logo (the wonderful @GrandSageArt). I'm part of the way through this, and things are going well so far.