<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Spacerunners Blog]]></title><description><![CDATA[Spacerunners sits at the intersection of fashion, AI, and web3. Our core product, ablo.ai, is a community driven marketplace for fashion and design that uses AI.]]></description><link>https://blog.ablo.ai</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1726852708153/230ed554-1092-4def-b2ff-6f05c2115337.jpeg</url><title>Spacerunners Blog</title><link>https://blog.ablo.ai</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 22:30:54 GMT</lastBuildDate><atom:link href="https://blog.ablo.ai/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How we implemented multiselect in the Ablo Editor with fabric.js]]></title><description><![CDATA[1. Introduction
One of the most intuitive features in modern design tools is the ability to select and manipulate multiple objects simultaneously. Whether you're moving several graphics together, changing their size or deleting a group of items at on...]]></description><link>https://blog.ablo.ai/how-we-implemented-multiselect-in-the-ablo-editor-with-fabricjs</link><guid isPermaLink="true">https://blog.ablo.ai/how-we-implemented-multiselect-in-the-ablo-editor-with-fabricjs</guid><category><![CDATA[fabricjs]]></category><category><![CDATA[canvas]]></category><category><![CDATA[React]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Jason Ladias]]></dc:creator><pubDate>Tue, 23 Dec 2025 14:31:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766485836408/daf69fbc-f837-4e85-8236-9b00fecbfc2e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-1-introduction">1. Introduction</h2>
<p>One of the most intuitive features in modern design tools is the ability to select and manipulate multiple objects simultaneously. Whether you're moving several graphics together, changing their size or deleting a group of items at once, multiselect dramatically improves workflow efficiency.</p>
<p>In this article, we'll explore how we implemented multiselect functionality in the Ablo editor, the challenges we faced, and why this feature makes the editing experience so much more powerful and intuitive.</p>
<h2 id="heading-2-before-multiselect-the-single-selection-era">2. Before Multiselect: The Single-Selection Era</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766488303208/4cd1239f-65b8-4e91-961c-390d08943595.gif" alt class="image--center mx-auto" /></p>
<p>Previously, the Ablo editor operated in a single-selection mode. Users could only interact with one object at a time on the canvas. This meant that common workflows like:</p>
<ul>
<li><p>Moving multiple graphics together</p>
</li>
<li><p>Deleting a group of objects</p>
</li>
<li><p>Copying multiple items at once</p>
</li>
</ul>
<p>...required users to perform these operations one object at a time, which was tedious and time-consuming. Each operation required clicking an object, performing the action, then repeating for the next object.</p>
<h2 id="heading-3-how-single-selection-worked-under-the-hood">3. How Single Selection Worked Under The Hood</h2>
<p>In the single-selection implementation, object selection was handled through mouse event listeners. When a user clicked on the canvas, the system would:</p>
<ul>
<li><p><strong>Detect the click target</strong>: On <code>mouse:up</code> events, the code checked if the click hit an object (<code>e.target</code>)</p>
</li>
<li><p><strong>Set the active object in Canvas</strong>: If an object was clicked and it was selectable, it would be set as the active object:</p>
<ul>
<li><pre><code class="lang-javascript">    canvas.on(<span class="hljs-string">'mouse:up'</span>, <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params">e</span>) </span>{
      <span class="hljs-keyword">if</span> (e.target?.selectable) {
         canvas.setActiveObject(e.target);
         canvas.renderAll();
      }
    });
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Update React state</strong>: The active object was then stored in React state and used throughout the editor.</p>
<ul>
<li><pre><code class="lang-javascript">    <span class="hljs-keyword">const</span> activeObj = canvas.getActiveObject();
    setActiveObject(activeObj || <span class="hljs-literal">null</span>);
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Handle deselection</strong>: Clicking on empty canvas space would deselect the current object, clearing the active state</p>
</li>
</ul>
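<p>The selection and deselection flow above can be sketched end to end. The <code>CanvasStub</code> below is a hypothetical stand-in for <code>fabric.Canvas</code>, just enough to exercise the handler logic described in this section:</p>

```javascript
// CanvasStub is a minimal stand-in for fabric.Canvas (illustration only).
class CanvasStub {
  constructor() { this.activeObject = null; this.handlers = {}; }
  on(event, fn) { this.handlers[event] = fn; }
  fire(event, e) { this.handlers[event]?.(e); }
  setActiveObject(obj) { this.activeObject = obj; }
  discardActiveObject() { this.activeObject = null; }
  getActiveObject() { return this.activeObject; }
  renderAll() {} // no-op in the stub
}

const canvas = new CanvasStub();
canvas.on('mouse:up', (e) => {
  if (e.target?.selectable) {
    canvas.setActiveObject(e.target); // clicked a selectable object: select it
  } else if (!e.target) {
    canvas.discardActiveObject();     // clicked empty space: deselect
  }
  canvas.renderAll();
});
```

<p>Clicking an object makes it the active object; clicking empty canvas (no <code>e.target</code>) clears the active state, which is the deselection behavior described above.</p>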
<p>This approach worked well for single-object interactions, but it meant that selecting multiple objects required a sequential process: click object A, perform action, click object B, perform action, and so on. There was no way to select multiple objects simultaneously and operate on them as a group.</p>
<h2 id="heading-4-the-solution-fabricjs-activeselection">4. The Solution: Fabric.js ActiveSelection</h2>
<p>Fabric.js, the powerful canvas library that powers our editor, provides a built-in solution for multiselect through the <a target="_blank" href="https://fabricjs.com/api/classes/activeselection/"><strong>ActiveSelection class</strong></a>. This class represents a temporary group of selected objects that can be manipulated together without permanently grouping them.</p>
<p>The key insight is that <code>ActiveSelection</code> behaves like a regular fabric object in many ways: it can be moved, scaled, rotated, and transformed. But it's actually a container that holds references to multiple objects. When you perform operations on an <code>ActiveSelection</code>, fabric.js intelligently applies those operations to all contained objects.</p>
<h3 id="heading-41-enable-selection-on-fabricjs-canvas">4.1. Enable Selection on fabric.js canvas</h3>
<p>The first step was to enable canvas selection specifically for desktop users. In our canvas initialization code, we conditionally enable selection based on the device type:</p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> isCanvasSelectionEnabled = isSelectionEnabled &amp;&amp; !isMobile;

  canvas = <span class="hljs-keyword">new</span> fabric.Canvas(<span class="hljs-string">`<span class="hljs-subst">${canvasName}</span>`</span>, {
    width,
    height,
    <span class="hljs-attr">selection</span>: isCanvasSelectionEnabled,
    ...
  });
</code></pre>
<p>This ensures that:</p>
<ul>
<li><p>Desktop users get the full multiselect experience with drag-to-select and modifier key support</p>
</li>
<li><p>Mobile users maintain the touch-friendly single-tap selection behavior</p>
</li>
<li><p>The feature can be toggled if needed via the <code>isSelectionEnabled</code> parameter, as is done for most mini versions of the editor</p>
</li>
</ul>
<h3 id="heading-42-why-fabricjs-getactiveobject-can-handle-both-single-amp-multi-select">4.2. Why fabric.js <code>getActiveObject()</code> Can Handle Both Single &amp; Multi Select</h3>
<p>A key aspect of fabric.js's selection system is that <code>canvas.getActiveObject()</code> can return either a single <code>fabric.Object</code> or a <code>fabric.ActiveSelection</code>, depending on how many objects are currently selected. This is handled automatically by fabric.js:</p>
<ul>
<li><p><strong>Single object selected</strong>: When a user clicks on one object, <code>getActiveObject()</code> returns that object directly (e.g., <code>fabric.Image</code>, <code>fabric.IText</code>, etc.)</p>
</li>
<li><p><strong>Multiple objects selected</strong>: When a user drags to select multiple objects or uses modifier keys to add objects to the selection, fabric.js automatically creates an <code>ActiveSelection</code> instance that wraps all selected objects. In this case, <code>getActiveObject()</code> returns the <code>ActiveSelection</code> container.</p>
</li>
</ul>
<p>This design means we don't need to manually track which objects are selected, as fabric.js manages the selection state internally. When selection is enabled on the canvas (<code>selection: true</code>), fabric.js handles the complexity of:</p>
<ul>
<li><p>Creating selection rectangles when dragging</p>
</li>
<li><p>Managing modifier key behavior (Shift + Click for adding to selection)</p>
</li>
<li><p>Automatically wrapping multiple selections in an <code>ActiveSelection</code></p>
</li>
<li><p>Returning the appropriate type based on selection count</p>
</li>
</ul>
<p>This is why we can simply call <code>canvas.getActiveObject()</code> anywhere in our code and check the type. Fabric.js has already done the heavy lifting of managing the selection state.</p>
<h3 id="heading-43-type-system-updates">4.3. Type System Updates</h3>
<p>To properly handle multiselect throughout our codebase, we updated our type definitions to recognize <code>ActiveSelection</code> as a valid active object type:</p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> [activeObject, setActiveObject] = useState&lt;fabric.Object | fabric.ActiveSelection | <span class="hljs-literal">null</span>&gt;(<span class="hljs-literal">null</span>);
</code></pre>
<h3 id="heading-44-detecting-and-normalizing-selections">4.4. Detecting and Normalizing Selections</h3>
<p>Throughout the codebase, we use a consistent pattern to detect when the user has selected multiple objects and normalize them for processing:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> isMultiSelect = activeObject <span class="hljs-keyword">instanceof</span> fabric.ActiveSelection;
<span class="hljs-keyword">const</span> objectsToProcess: fabric.Object[] = isMultiSelect
    ? activeObject.getObjects()
    : [activeObject];
</code></pre>
<p>The key insight of this pattern is that it <strong>normalizes the selection into an array</strong>, regardless of whether we're dealing with a single object or multiple objects. This normalization allows us to:</p>
<ul>
<li><p>Check if we're dealing with a multiselect using <code>instanceof fabric.ActiveSelection</code></p>
</li>
<li><p>Extract the individual objects, either from the <code>ActiveSelection</code> container or wrap the single object in an array</p>
</li>
<li><p>Process all objects uniformly using array methods like <code>forEach</code>, <code>map</code>, or <code>filter</code></p>
</li>
</ul>
<p>By converting both cases into an array, we can write operation logic once that works for both single and multiple selections, making the code more maintainable and consistent.</p>
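<p>As a self-contained illustration of this pattern, here is a minimal sketch using stand-in classes (the real editor uses <code>fabric.Object</code> and <code>fabric.ActiveSelection</code>; the class and helper names below are hypothetical):</p>

```javascript
// Stand-ins for fabric.Object / fabric.ActiveSelection, for illustration only.
class FabricObject {}
class ActiveSelection extends FabricObject {
  constructor(objects) { super(); this._objects = objects; }
  getObjects() { return this._objects; }
}

// Normalize any selection (single object, multiselect, or nothing)
// into a flat array, so operation logic can be written once.
function toObjectArray(activeObject) {
  if (!activeObject) return [];
  return activeObject instanceof ActiveSelection
    ? activeObject.getObjects()
    : [activeObject];
}

// Any operation can now ignore the single-vs-multi distinction:
function countSelected(activeObject) {
  return toObjectArray(activeObject).length;
}
```

<p>Every operation that consumes <code>toObjectArray</code> works identically whether the user selected one object or twenty.</p>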
<h2 id="heading-5-the-challenges-not-so-easy-as-it-sounds">5. The Challenges: Not As Easy As It Sounds</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766496668631/d3d624bb-dcd9-4a66-a0ed-0412e96d73f8.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-51-mobile-considerations">5.1. Mobile Considerations</h3>
<p>One of the first decisions we had to make was how multiselect should work on mobile devices. We decided to move fast and ship multiselect as a desktop-only feature for the first version.</p>
<p><strong>Why?</strong> Mobile devices have different interaction patterns, touch gestures, smaller screens, and different user expectations. The drag-to-select behavior that works so well on desktop with a mouse doesn't translate naturally to touch interfaces and might conflict with gestures like panning.</p>
<p>By keeping multiselect desktop-only initially, we could:</p>
<ul>
<li><p>Focus on perfecting the desktop experience first</p>
</li>
<li><p>Avoid complicating the mobile touch interactions</p>
</li>
<li><p>Maintain the existing, optimized mobile selection behavior</p>
</li>
</ul>
<p>This decision allowed us to ship a polished desktop feature faster while keeping the door open for future mobile enhancements if user feedback indicates a need.</p>
<h3 id="heading-52-operation-consistency">5.2. Operation Consistency</h3>
<p>One of the main challenges was ensuring that operations like copy, delete, and layer management work correctly with multiple objects. For example, when copying multiple objects, we need to preserve their relative positions and transformations.</p>
<p><strong>Solution</strong>: We handle this by checking for multiselect and processing all objects in the selection one by one:</p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> handleRemoveActiveObject = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> activeObject = canvas.getActiveObject();
    <span class="hljs-keyword">if</span> (!activeObject) <span class="hljs-keyword">return</span>; <span class="hljs-comment">// nothing selected: nothing to remove</span>

    <span class="hljs-keyword">const</span> objectsToProcess =
      activeObject <span class="hljs-keyword">instanceof</span> fabric.ActiveSelection ? activeObject.getObjects() : [activeObject];

    objectsToProcess.forEach(<span class="hljs-function">(<span class="hljs-params">obj: fabric.<span class="hljs-built_in">Object</span></span>) =&gt;</span> {
      canvas.remove(obj);
    });

    canvas.discardActiveObject();
    canvas.renderAll();

    saveState();
  };
</code></pre>
<p>This pattern <strong>respects the DRY principle</strong> by normalizing the selection into an array format upfront. Instead of having conditional checks scattered throughout the codebase, e.g. checking <code>if (isMultiSelect)</code> in every operation, we perform a single check at the beginning and then process all objects uniformly.</p>
<h3 id="heading-53-complex-operations-and-scope-management">5.3. Complex Operations and Scope Management</h3>
<p>Not all operations make sense or are feasible with multiselect. Some features, like cropping, are inherently single-object operations that don't translate well to multiple selections.</p>
<p><strong>Decision 1</strong>: We left certain operations out of scope for the first version. For example, the crop tool is disabled when multiple objects are selected.</p>
<p><strong>Decision 2</strong>: Some operations, like copying, need a totally different implementation for single and multiselect and can’t be processed by the exact same algorithm.</p>
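<p>One reason copying differs (an assumption based on fabric's grouping model, not something spelled out above) is that objects inside an <code>ActiveSelection</code> report <code>left</code>/<code>top</code> relative to the selection's center rather than to the canvas, so clones must be translated back into canvas coordinates. A pure sketch of that translation, with plain objects standing in for fabric objects:</p>

```javascript
// Hypothetical helper: convert an object's selection-relative position back
// to absolute canvas coordinates. Assumes child coords are measured from the
// selection's center, per fabric's grouping model.
function toCanvasPosition(obj, selection) {
  const centerX = selection.left + selection.width / 2;
  const centerY = selection.top + selection.height / 2;
  return { left: centerX + obj.left, top: centerY + obj.top };
}
```

<p>Without such a conversion, a naive single-object copy routine would drop every clone near the canvas origin instead of preserving the group's layout.</p>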
<h2 id="heading-6-why-its-awesome">6. Why It’s Awesome</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766496774119/7e460b9a-1237-44ec-abff-7be8b5b4eb90.gif" alt class="image--center mx-auto" /></p>
<p><strong>A. Intuitive Desktop Experience</strong>: Multiselect works exactly as users expect from modern design tools. You can:</p>
<ul>
<li><p><strong>Drag to select</strong>: Click and drag on the canvas to create a selection rectangle</p>
</li>
<li><p><strong>Modifier keys</strong>: Hold Shift and click to add objects to your selection</p>
</li>
<li><p><strong>Visual feedback</strong>: Selected objects are clearly highlighted with selection handles</p>
</li>
</ul>
<p>This matches the behavior users are familiar with from modern tools like Figma.</p>
<p><strong>B. Workflow Efficiency</strong>: The time savings are significant. Consider a scenario where you need to:</p>
<ul>
<li><p>Move 5 graphics to a new position</p>
</li>
<li><p>Delete 4 unwanted objects</p>
</li>
</ul>
<p><strong>Before</strong>: 9 separate operations (select, act, repeat). <strong>After</strong>: 2 operations (multiselect, act, done).</p>
<p><strong>C. Preserves Relationships and Alignment:</strong> One of the most powerful aspects of multiselect is how it maintains the spatial relationships between objects. When you select multiple elements and move them together, their relative positions are preserved perfectly.</p>
<h2 id="heading-7-conclusion">7. Conclusion</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766498165741/feca5db6-f860-4892-a2ca-4174dddd847b.png" alt class="image--center mx-auto" /></p>
<p>The multiselect feature represents a significant improvement in the Ablo editor's usability. By leveraging fabric.js's <code>ActiveSelection</code> class and thoughtfully handling edge cases, we've created an intuitive, powerful feature that matches user expectations from modern design tools.</p>
<p>The implementation demonstrates how a well-designed library feature (fabric.js's ActiveSelection) can be integrated into a complex application. The result is a feature that feels natural, saves users time, and makes the editor more powerful without adding complexity to the user interface.</p>
]]></content:encoded></item><item><title><![CDATA[How we implemented cropping on the canvas with Fabric.js]]></title><description><![CDATA[Introduction
SpaceRunners has a design tool where artists can create custom designs in a limited drawing area on any physical object. We already described the core concepts of the tool in previous articles on this blog. A core feature that every desi...]]></description><link>https://blog.ablo.ai/how-we-implemented-cropping-on-the-canvas-with-fabricjs</link><guid isPermaLink="true">https://blog.ablo.ai/how-we-implemented-cropping-on-the-canvas-with-fabricjs</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[fabric]]></category><category><![CDATA[canvas]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Mihovil Kovacevic]]></dc:creator><pubDate>Fri, 19 Dec 2025 19:17:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766156856104/b5ff05bb-af1f-4867-b111-a0fef44aa0f0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>SpaceRunners has a design tool where artists can create custom designs in a limited drawing area on any physical object. We already described the core concepts of the tool in previous articles on this blog. A core feature that every design tool must have by default is cropping. However, there's no native cropping feature in Fabric.js where one can just call a few functions and call it a day. We had to implement it ourselves using several steps. This entire process has sparse and incomplete documentation online. This blog post will try to give practical tips to anyone who encounters the same problems.</p>
<p>The basic idea of cropping in Fabric.js is to create a crop mask shape when the crop is initiated and then let the user move the mask around the drawable area. The user positions the mask over the exact area to be cropped and then confirms the action. Confirmation can be done by any combination of keys, clicks, or simply a button that appears somewhere in the UI. Whatever is inside the mask stays on the canvas and whatever is left out is removed. The crop mask is applied by setting it as the clip path of the original image, and the canvas is then exported to a data URL. The clip path is a Fabric.js concept that lets you specify which part of an object (or the canvas) is visible in the browser or in an exported image. The following sections describe this process in detail, together with some code examples.</p>
<h3 id="heading-crop-mask">Crop Mask</h3>
<p>In our design tool, the crop is initiated by selecting an image and clicking on a specialized “Crop” icon in the toolbar, as displayed in the picture below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766158445699/9f809e47-7220-45a2-902a-14937b3191f1.png" alt class="image--center mx-auto" /></p>
<p>After the crop is initiated, the design tool offers different crop shapes that can be applied to the image. Imagine this shape as a mold on an image. In the image below, which has a rectangle crop shape, you can see the rectangle crop mask over the image. After applying the crop mask, everything inside this rectangle marked by the white dots will stay on the canvas and everything else will get cropped out.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766158514822/1fec06ec-2fd7-4771-b2e5-c19fe5c6e80e.png" alt class="image--center mx-auto" /></p>
<p>If we apply a heart shape you’ll notice how the crop mask will allow you to cut out heart shapes from the original image:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766159078097/63ed042d-d33b-407e-8628-021bb9633563.png" alt class="image--center mx-auto" /></p>
<p>Our supported crop shapes are Rectangle, Circle, Rounded Rectangle, Heart, and Star. Fabric.js already has native shapes for everything except the Heart shape. For custom shapes such as Heart, or whatever else you may want to use, you can use Fabric.js's Path object, which takes an SVG path. When working with SVG icons, make sure you include just the path data (not the entire SVG markup), that it's properly formatted, and that it consists of a single path with no embedded images. Here's the code that shows how we generate the crop mask based on the selected shape:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> heartSVGPath =
  <span class="hljs-string">'M 272.70141,238.71731 \
      C 206.46141,238.71731 152.70146,292.4773 152.70146,358.71731  \
      C 152.70146,493.47282 288.63461,528.80461 381.26391,662.02535 \
      C 468.83815,529.62199 609.82641,489.17075 609.82641,358.71731 \
      C 609.82641,292.47731 556.06651,238.7173 489.82641,238.71731  \
      C 441.77851,238.71731 400.42481,267.08774 381.26391,307.90481 \
      C 362.10311,267.08773 320.74941,238.7173 272.70141,238.71731  \
      Z '</span>;

<span class="hljs-keyword">const</span> CropMaskProps = {
  isCropMask: <span class="hljs-literal">true</span>,
  fill: <span class="hljs-string">'rgba(0,0,0,0.3)'</span>,
  stroke: <span class="hljs-string">'black'</span>,
  opacity: <span class="hljs-number">1</span>,
  originX: <span class="hljs-string">'left'</span>,
  originY: <span class="hljs-string">'top'</span>,
  hasRotatingPoint: <span class="hljs-literal">false</span>,
  transparentCorners: <span class="hljs-literal">false</span>,
  cornerColor: <span class="hljs-string">'white'</span>,
  cornerStrokeColor: <span class="hljs-string">'black'</span>,
  borderColor: <span class="hljs-string">'black'</span>,
  cornerSize: <span class="hljs-number">20</span> * <span class="hljs-number">3</span>,
  padding: <span class="hljs-number">0</span>,
  height: <span class="hljs-number">150</span>,
  width: <span class="hljs-number">150</span>,
  cornerStyle: <span class="hljs-string">'circle'</span>,
  borderDashArray: [<span class="hljs-number">5</span>, <span class="hljs-number">5</span>],
  excludeFromExport: <span class="hljs-literal">true</span>,
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> getCropMaskShape = <span class="hljs-function">(<span class="hljs-params">
  shapeName: CropShape,
  width: <span class="hljs-built_in">number</span>,
  left: <span class="hljs-built_in">number</span>,
  top: <span class="hljs-built_in">number</span>
</span>) =&gt;</span> {
  <span class="hljs-keyword">let</span> shape;

  <span class="hljs-keyword">if</span> (shapeName === CropShape.RECTANGLE) {
    shape = <span class="hljs-keyword">new</span> fabric.Rect({ ...CropMaskProps, left, centeredScaling: <span class="hljs-literal">true</span>, top });
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (shapeName === CropShape.CIRCLE) {
    shape = <span class="hljs-keyword">new</span> fabric.Circle({
      ...CropMaskProps,
      width: <span class="hljs-literal">undefined</span>,
      height: <span class="hljs-literal">undefined</span>,
      radius: width / <span class="hljs-number">2</span>,
    });
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (shapeName === CropShape.ROUNDED_RECTANGLE) {
    shape = <span class="hljs-keyword">new</span> fabric.Rect({
      ...CropMaskProps,
      rx: <span class="hljs-number">20</span>,
      ry: <span class="hljs-number">20</span>,
    });
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (shapeName === CropShape.STAR) {
    shape = <span class="hljs-keyword">new</span> fabric.Star({
      ...CropMaskProps,
      width: <span class="hljs-number">200</span>,
      height: <span class="hljs-number">200</span>,
    });
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (shapeName === CropShape.HEART) {
    shape = <span class="hljs-keyword">new</span> fabric.Path(heartSVGPath, {
      ...CropMaskProps,
      width: <span class="hljs-number">200</span>,
      height: <span class="hljs-number">200</span>,
    });
  }

  shape
    .scaleToWidth(width)
    .set({
      left,
      centeredScaling: <span class="hljs-literal">true</span>,
      top,
    })
    .setCoords();

  <span class="hljs-keyword">return</span> shape;
};
</code></pre>
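<p>Both of the crop-mask functions lean on fabric's scaling model: <code>scaleToWidth(target)</code> and the <code>getScaledWidth()</code>/<code>getScaledHeight()</code> getters. As a rough sketch of that relationship (plain objects stand in for fabric objects, and we assume <code>scaleToWidth</code> scales uniformly):</p>

```javascript
// scaleToWidth(target) picks a uniform scale factor so that the rendered
// width equals target (assumption: uniform scaling, as in fabric).
function scaleToWidth(shape, target) {
  const factor = target / shape.width;
  return { ...shape, scaleX: factor, scaleY: factor };
}

// getScaledWidth/getScaledHeight are the base size times the current scale.
function getScaledWidth(shape) { return shape.width * shape.scaleX; }
function getScaledHeight(shape) { return shape.height * shape.scaleY; }
```

<p>This is why <code>getAppliedCropMask</code> reads <code>getScaledWidth()</code> rather than <code>width</code>: after the user resizes the mask, the base dimensions are unchanged and only the scale factors reflect the new size.</p>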
<p>Another important consideration that comes into effect at the end of cropping is that you must use similar logic to get the applied crop mask that will be used as a clip path to cut out the cropped area and remove the remainder. The original crop mask, marked by white circles as shown in the previous images, is draggable and also resizable. After the user performs any of these operations and decides on the final look of the mask, we must call another function that will take the coordinates of the live mask together with its absolute position on the canvas and then generate a new shape that is used just for cropping. Here's the code that achieves just that:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> getAppliedCropMask = <span class="hljs-function">(<span class="hljs-params">shapeName: CropShape, croppingMask: CanvasObject</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> commonProps = {
    left: croppingMask.left,
    top: croppingMask.top,
    originX: <span class="hljs-string">'left'</span>,
    originY: <span class="hljs-string">'top'</span>,
    absolutePositioned: <span class="hljs-literal">true</span>,
  };

  <span class="hljs-keyword">let</span> cropMask;

  <span class="hljs-keyword">if</span> (shapeName === CropShape.RECTANGLE) {
    cropMask = <span class="hljs-keyword">new</span> fabric.Rect({
      ...commonProps,
      width: croppingMask.getScaledWidth(),
      height: croppingMask.getScaledHeight(),
    });
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (shapeName === CropShape.CIRCLE) {
    cropMask = <span class="hljs-keyword">new</span> fabric.Circle({
      ...commonProps,
      radius: croppingMask.radius * croppingMask.scaleX,
    });
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (shapeName === CropShape.ROUNDED_RECTANGLE) {
    cropMask = <span class="hljs-keyword">new</span> fabric.Rect({
      ...commonProps,
      rx: croppingMask.rx,
      ry: croppingMask.ry,
      width: croppingMask.getScaledWidth(),
      height: croppingMask.getScaledHeight(),
    });
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (shapeName === CropShape.STAR) {
    cropMask = <span class="hljs-keyword">new</span> fabric.Star({
      ...commonProps,
      width: croppingMask.getScaledWidth(),
      height: croppingMask.getScaledHeight(),
    });
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (shapeName === CropShape.HEART) {
    cropMask = <span class="hljs-keyword">new</span> fabric.Path(heartSVGPath, {
      ...commonProps,
      originX: <span class="hljs-string">'left'</span>,
      originY: <span class="hljs-string">'top'</span>,
    });

    cropMask.scaleToWidth(croppingMask.getScaledWidth());
  }

  <span class="hljs-keyword">return</span> cropMask;
};
</code></pre>
<h3 id="heading-applying-the-mask">Applying the mask</h3>
<p>After the mask is positioned at the desired place and scaled to the desired dimensions, the user can confirm the crop. At that point, we must perform a series of operations that will generate a new cropped output. Here's the code, with explanations of each line below it:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> cropMask = getAppliedCropMask(shapeToUse, croppingMask);

imageToCrop.clipPath = cropMask;

canvas.remove(croppingMask);
removeDrawingArea(canvas);

canvas.renderAll();

<span class="hljs-keyword">const</span> cropped = <span class="hljs-keyword">new</span> Image();

<span class="hljs-keyword">const</span> backgroundImage = canvas.backgroundImage;
<span class="hljs-keyword">const</span> overlayImage = canvas.overlayImage;

canvas.backgroundImage = <span class="hljs-literal">null</span>;
canvas.overlayImage = <span class="hljs-literal">null</span>;

<span class="hljs-keyword">const</span> originalViewportTransform = canvas.viewportTransform;

canvas.viewportTransform = [<span class="hljs-number">1</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>];

cropped.src = canvas.toDataURL({
  left: cropMask.left,
  top: cropMask.top,
  width: cropMask.width * cropMask.scaleX,
  height: cropMask.height * cropMask.scaleY,
  multiplier: <span class="hljs-number">5</span>,
  format: <span class="hljs-string">'png'</span>,
  quality: <span class="hljs-number">0.99</span>,
});

canvas.viewportTransform = originalViewportTransform;

canvas.backgroundImage = backgroundImage;
canvas.overlayImage = overlayImage;

cropped.onload = <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> image = <span class="hljs-keyword">new</span> fabric.Image(cropped);

  image.left = cropMask.left + (cropMask.width * cropMask.scaleX) / <span class="hljs-number">2</span>;
  image.top = cropMask.top + (cropMask.height * cropMask.scaleY) / <span class="hljs-number">2</span>;
  image.aiImage = activeObject.aiImage;
  image.setCoords();
  image.scaleToWidth(cropMask.width * cropMask.scaleX);

  image.set(<span class="hljs-string">'radius'</span>, (cropMask.width * cropMask.scaleX) / <span class="hljs-number">2</span>);
  image.set(<span class="hljs-string">'originX'</span>, <span class="hljs-string">'center'</span>);
  image.set(<span class="hljs-string">'originY'</span>, <span class="hljs-string">'center'</span>);

  canvas.add(image);
  canvas.remove(imageToCrop);
  canvas.discardActiveObject();

  onCrop(image);
};

setCroppingMask(<span class="hljs-literal">null</span>);
setImageToCrop(<span class="hljs-literal">null</span>);
</code></pre>
<p>Here are the steps with explanations:</p>
<ol>
<li><p><code>const cropMask = getAppliedCropMask(shapeToUse, croppingMask);</code><br /> <code>imageToCrop.clipPath = cropMask;</code>  </p>
<p> - This builds the crop mask, absolutely positioned on the canvas, and sets it as the image's clip path; it tells the image export function which part of the original canvas to keep.</p>
</li>
<li><pre><code class="lang-typescript"> canvas.remove(croppingMask);
 removeDrawingArea(canvas);

 canvas.renderAll();

 <span class="hljs-keyword">const</span> cropped = <span class="hljs-keyword">new</span> Image();

 <span class="hljs-keyword">const</span> backgroundImage = canvas.backgroundImage;
 <span class="hljs-keyword">const</span> overlayImage = canvas.overlayImage;

 canvas.backgroundImage = <span class="hljs-literal">null</span>;
 canvas.overlayImage = <span class="hljs-literal">null</span>;
</code></pre>
</li>
</ol>
<p>This code temporarily removes from the canvas all elements that shouldn't appear in the output: the draggable cropping mask, the drawing area lines, and the background/overlay images (the template images). The background and overlay images are saved first so they can be restored once the clipped canvas has been exported.</p>
<ol start="3">
<li><pre><code class="lang-typescript"> <span class="hljs-keyword">const</span> originalViewportTransform = canvas.viewportTransform;

 canvas.viewportTransform = [<span class="hljs-number">1</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>];
</code></pre>
<p> One tricky thing is that we must also adjust the canvas viewport at this step while preserving the original viewport. That's why we use the viewportTransform property on the canvas that is defined like this in the source docs:  </p>
<p> <code>/**</code></p>
<ul>
<li><p><code>The transformation (a Canvas 2D API transform matrix) which focuses the viewport</code></p>
</li>
<li><p><code>@type Array</code></p>
</li>
<li><p><code>@example Default transform</code></p>
</li>
<li><p><code>canvas.viewportTransform = [1, 0, 0, 1, 0, 0];</code></p>
</li>
<li><p><code>@example Scale by 70% and translate toward bottom-right by 50, without skewing</code></p>
</li>
<li><p><code>canvas.viewportTransform = [0.7, 0, 0, 0.7, 50, 50]; */ viewportTransform: TMat2D;      **/</code></p>
</li>
</ul>
</li>
</ol>
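To make the viewport reset concrete, here is a small sketch (plain TypeScript, independent of Fabric.js) of how a transform matrix `[a, b, c, d, e, f]` maps a point; resetting to the identity matrix means canvas coordinates and exported-image coordinates line up:

```typescript
type TMat2D = [number, number, number, number, number, number];

// A Canvas 2D matrix maps (x, y) to (a*x + c*y + e, b*x + d*y + f).
function applyTransform([a, b, c, d, e, f]: TMat2D, x: number, y: number): [number, number] {
  return [a * x + c * y + e, b * x + d * y + f];
}

const identity: TMat2D = [1, 0, 0, 1, 0, 0];
const zoomedIn: TMat2D = [0.7, 0, 0, 0.7, 50, 50]; // 70% zoom, shifted by 50

applyTransform(identity, 100, 100); // → [100, 100]
applyTransform(zoomedIn, 100, 100); // → [120, 120]
```

With a non-identity viewport in place, the `left`/`top` passed to `toDataURL` would describe a different region than the crop mask's on-screen position, which is exactly why the transform is reset before the export.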
<ol start="4">
<li><pre><code class="lang-typescript"> cropped.src = canvas.toDataURL({
   left: cropMask.left,
   top: cropMask.top,
   width: cropMask.width * cropMask.scaleX,
   height: cropMask.height * cropMask.scaleY,
   multiplier: <span class="hljs-number">5</span>,
   format: <span class="hljs-string">'png'</span>,
   quality: <span class="hljs-number">0.99</span>,
 });

 canvas.viewportTransform = originalViewportTransform;

 canvas.backgroundImage = backgroundImage;
 canvas.overlayImage = overlayImage;
</code></pre>
<p> The <code>canvas.toDataURL</code> function generates an image from the area of the canvas defined by the crop mask. We set the result as the <code>src</code> of a new Image element, which will replace the original one. After calling <code>toDataURL()</code>, we restore the viewport and the background and overlay images so that the user can continue using the canvas normally. Importantly, this whole operation is fast enough that the user never notices any visual flicker.</p>
</li>
<li><pre><code class="lang-typescript"> cropped.onload = <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params"></span>) </span>{
   <span class="hljs-keyword">const</span> image = <span class="hljs-keyword">new</span> fabric.Image(cropped);

   image.left = cropMask.left + (cropMask.width * cropMask.scaleX) / <span class="hljs-number">2</span>;
   image.top = cropMask.top + (cropMask.height * cropMask.scaleY) / <span class="hljs-number">2</span>;
   image.aiImage = activeObject.aiImage;
   image.setCoords();
   image.scaleToWidth(cropMask.width * cropMask.scaleX);

   image.set(<span class="hljs-string">'radius'</span>, (cropMask.width * cropMask.scaleX) / <span class="hljs-number">2</span>);
   image.set(<span class="hljs-string">'originX'</span>, <span class="hljs-string">'center'</span>);
   image.set(<span class="hljs-string">'originY'</span>, <span class="hljs-string">'center'</span>);

   canvas.add(image);
   canvas.remove(imageToCrop);
   canvas.discardActiveObject();

   onCrop(image);
 };
</code></pre>
<p> The last step is to position the new Image element exactly where the crop mask was, so that the cropped element stays in place and doesn't jump around. Notice that all the dimensions and coordinates of the image are derived from their crop mask counterparts. We then add the cropped image to the canvas, remove the original one, and call any handlers that persist this new state on the backend (although that part is unrelated to the canvas).</p>
</li>
</ol>
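The two pieces of arithmetic used above, the pixel size of the exported bitmap and the centre point at which the replacement image is placed, can be sketched in plain TypeScript (the `MaskBox` shape and function names are illustrative, not from the actual codebase):

```typescript
interface MaskBox {
  left: number;
  top: number;
  width: number;
  height: number;
  scaleX: number;
  scaleY: number;
}

// toDataURL exports the crop rectangle scaled up by the multiplier.
function exportSize(mask: MaskBox, multiplier: number) {
  return {
    width: mask.width * mask.scaleX * multiplier,
    height: mask.height * mask.scaleY * multiplier,
  };
}

// With originX/originY set to 'center', the image's left/top must be the
// mask's centre point, computed from each dimension's own scale factor.
function maskCenter(mask: MaskBox) {
  return {
    left: mask.left + (mask.width * mask.scaleX) / 2,
    top: mask.top + (mask.height * mask.scaleY) / 2,
  };
}

const mask: MaskBox = { left: 10, top: 20, width: 200, height: 100, scaleX: 1.5, scaleY: 1.5 };
exportSize(mask, 5); // → { width: 1500, height: 750 }
maskCenter(mask);    // → { left: 160, top: 95 }
```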
<h3 id="heading-conclusion">Conclusion</h3>
<p>In this blog post we demonstrated how to implement the cropping operation on a Fabric.js canvas and how to work with custom shapes. It’s very important to get all the details right, because even one small mistake can push elements out of bounds or give them the wrong proportions. You can see the full source code for our entire design tool in our open-source repository:</p>
<p><a target="_blank" href="https://github.com/Space-Runners/ablo.ai/pulls">https://github.com/Space-Runners/ablo.ai/pulls</a></p>
<p>If you follow all of these steps and cross-reference your implementation with our publicly available code, you’ll get it right. The best approach is to work backwards: start from our working implementation and adjust it piece by piece to fit your needs.</p>
]]></content:encoded></item><item><title><![CDATA[Streamlining API Development: Generating API Client from Swagger Documentation]]></title><description><![CDATA[At Ablo, we build products that operate at the intersection of e-commerce, design, and scale. Our core platform consists of one main frontend application and one main backend service, but our architecture extends beyond that. We also develop dedicate...]]></description><link>https://blog.ablo.ai/streamlining-api-development</link><guid isPermaLink="true">https://blog.ablo.ai/streamlining-api-development</guid><category><![CDATA[OpenApi]]></category><category><![CDATA[swagger]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[API clients]]></category><category><![CDATA[nestjs]]></category><category><![CDATA[developer experience]]></category><category><![CDATA[APIs]]></category><category><![CDATA[spacerunners]]></category><category><![CDATA[REST API]]></category><category><![CDATA[GraphQL]]></category><dc:creator><![CDATA[Okan Aslan]]></dc:creator><pubDate>Thu, 18 Dec 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766056594893/d07f0c84-01bf-4a0b-bc13-45521570ffa1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At Ablo, we build products that operate at the intersection of e-commerce, design, and scale. Our core platform consists of one main frontend application and one main backend service, but our architecture extends beyond that. We also develop dedicated projects for enterprise customers, each with its own repository and deployment lifecycle, while still depending on our main backend APIs. <strong>This hybrid setup gives us flexibility, but it also introduces additional complexity when it comes to API contracts and consistency.</strong></p>
<p>On the backend, we use <code>NestJS</code>, and we rely on <code>@nestjs/swagger</code> to generate OpenAPI definitions directly from code annotations. Like many teams, we initially treated Swagger primarily as a documentation tool. Swagger UI became the default reference point for frontend developers, and we maintained a Postman collection for testing and exploration. <strong>While this setup worked, it positioned our API definitions as a reference rather than a source of truth.</strong></p>
<p>As the team grew and we started delivering features at a faster pace, we also found ourselves frequently updating and refactoring existing logic. <strong>This rapid development cycle made it increasingly difficult to track API changes and catch breaking updates early.</strong> Without a strongly enforced contract between backend and frontend, small changes could slip through unnoticed, leading to inconsistencies that were time-consuming to debug and reduced overall development confidence.</p>
<p>This post explains how we moved from <strong>“Swagger UI as documentation” to automated OpenAPI generation and consumption</strong> across environments, and how that shift fundamentally improved our API reliability, frontend integration, and overall developer experience.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765975351555/dcd14a5a-1238-4d23-bed5-7658a5f983d6.png" alt="Our SwaggerUI docs" class="image--center mx-auto" /></p>
<h2 id="heading-our-setup">Our Setup</h2>
<p>On the backend, our architecture is built heavily around <code>NestJS</code>. We use <code>@nestjs/swagger</code> to generate OpenAPI definitions directly from decorators and annotations in our codebase. For request validation and data transformation, we rely on <code>class-validator</code> and <code>class-transformer</code>, which gives us strong runtime guarantees for incoming data. <strong>We manually defined DTOs for most endpoints, but this practice was not consistently enforced across the codebase.</strong> In many cases, we had well-defined input types and validations, but response schemas were either loosely defined or missing altogether.</p>
<p>On the frontend, we maintained our own TypeScript interfaces to represent backend responses. These types were manually updated and not automatically synchronized with the backend. <strong>As the API evolved, this led to gaps: some fields were missing, others were deprecated but still referenced, and certain commonly used entities existed in multiple slightly different definitions.</strong> API requests were constructed manually, along with response type casting, and frontend-side validation was minimal.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.ablo.ai/our-tech-stack-at-space-runners">https://blog.ablo.ai/our-tech-stack-at-space-runners</a></div>
<p> </p>
<p>This setup worked in the early stages, but as the number of endpoints and feature iterations increased, the lack of a single, enforced API contract between backend and frontend started to create friction and uncertainty across teams.</p>
<pre><code class="lang-typescript"><span class="hljs-meta">@ApiOperation</span>({ description: <span class="hljs-string">'Get list of brands with pagination'</span> })
<span class="hljs-meta">@ApiQuery</span>({ <span class="hljs-keyword">type</span>: GetBrandsQuery })
<span class="hljs-meta">@ApiResponse</span>({
  status: HttpStatus.OK,
  <span class="hljs-keyword">type</span>: [BrandDto]
})
<span class="hljs-meta">@UseGuards</span>(UserGuard)
<span class="hljs-meta">@ApiSecurity</span>(<span class="hljs-string">'auth'</span>)
<span class="hljs-meta">@Get</span>()
<span class="hljs-meta">@UsePipes</span>(<span class="hljs-keyword">new</span> ValidationPipe())
<span class="hljs-keyword">async</span> getBrand(
  <span class="hljs-meta">@Request</span>() req: UserAuthRequest,
  <span class="hljs-meta">@Query</span>() query: GetBrandsQuery,
): <span class="hljs-built_in">Promise</span>&lt;BrandDto[]&gt; {
  <span class="hljs-comment">// Service Calls</span>
}
</code></pre>
<h2 id="heading-automating-api-client-generation">Automating API Client Generation</h2>
<p><strong>One of the key decisions we made was to keep the backend changes minimal.</strong> We didn’t introduce any new packages or tooling on the backend. Since we were already using <code>@nestjs/swagger</code>, all the information we needed was already there. Instead, we exposed a new internal endpoint that returns the JSON version of our OpenAPI 3.0 specification. To secure this endpoint, we added a simple header-based secret check, ensuring that the schema is only accessible to internal tooling. This approach allowed us to treat the OpenAPI definition as a first-class artifact of our backend without changing how developers write endpoints. <strong>The backend continues to generate the schema from annotations, but now it can also be consumed programmatically.</strong></p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> allDocs = SwaggerModule.createDocument(
  app,
  <span class="hljs-keyword">new</span> DocumentBuilder()
    .setTitle(<span class="hljs-string">'Ablo API'</span>)
    .build(),
  { include: [...publicModules, ...privateModules] }
)
app.use(<span class="hljs-string">'/'</span>, <span class="hljs-function">(<span class="hljs-params">req: Request, res: Response</span>) =&gt;</span> {
  <span class="hljs-keyword">if</span> (req.headers[<span class="hljs-string">'secret'</span>] !== SECRET) {
    <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">401</span>).send({ message: <span class="hljs-string">'Unauthorized'</span> })
  }
  res.setHeader(<span class="hljs-string">'Content-Type'</span>, <span class="hljs-string">'application/json'</span>)
  res.send(allDocs)
})
</code></pre>
<p>On the frontend, we introduced <a target="_blank" href="https://www.npmjs.com/package/swagger-typescript-api"><code>swagger-typescript-api</code></a> to convert the OpenAPI JSON schema into fully typed TypeScript API clients and models. This tool generates request functions, response types, and shared models directly from the OpenAPI definition, removing the need for manual type maintenance. <strong>The generated output becomes the single source of truth for frontend–backend communication.</strong></p>
<p>To make this process repeatable and easy to run, we added a small set of NPM scripts to our frontend project. These scripts fetch the latest OpenAPI schema from the backend, format it, and generate TypeScript definitions automatically:</p>
<pre><code class="lang-json"><span class="hljs-comment">// package.json</span>
{
  <span class="hljs-attr">"api"</span>: <span class="hljs-string">"npm run api:fetch &amp;&amp; npm run api:format &amp;&amp; npm run api:generate"</span>,
  <span class="hljs-attr">"api:fetch"</span>: <span class="hljs-string">"curl -s $API_DOCS_URL -o ./src/api/docs.json"</span>,
  <span class="hljs-attr">"api:format"</span>: <span class="hljs-string">"prettier --write ./src/api/docs.json"</span>,
  <span class="hljs-attr">"api:generate"</span>: <span class="hljs-string">"npx swagger-typescript-api generate --path ./src/api/docs.json -o ./src/api"</span>
}
</code></pre>
<p>Once generated, the API is consumed through a single, centralized client instance. This client handles base configuration and shared security concerns, such as authentication headers, so individual API calls remain clean and consistent:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// src/api/index.ts</span>
<span class="hljs-keyword">const</span> AbloAPI = <span class="hljs-keyword">new</span> Api({
  baseUrl: Config.API_URL,
  securityWorker: <span class="hljs-function">(<span class="hljs-params">data</span>) =&gt;</span> {
    <span class="hljs-keyword">return</span> {
      headers: {
        Authorization: <span class="hljs-string">`Bearer <span class="hljs-subst">${data?.token}</span>`</span>,
      },
    };
  },
});
</code></pre>
<p>With this setup in place, consuming backend APIs on the frontend becomes a simple, type-safe function call. For example, fetching brands no longer requires manually constructing requests or maintaining separate response types:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// src/pages/brand.ts</span>
<span class="hljs-keyword">const</span> brands = <span class="hljs-keyword">await</span> AbloAPI.brands.brandControllerFindAll({
  active: isFeatured,
  limit,
  skip,
});
</code></pre>
<p>All request and response definitions are automatically generated inside <code>Api.ts</code>. Each endpoint includes typed query parameters, return types, and metadata derived directly from the OpenAPI schema:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// src/api/Api.ts</span>
...
brands = {
  <span class="hljs-comment">/**
   * Get list of brands with pagination
   *
   * @tags Brands
   * @name BrandControllerFindAll
   * @request GET:/brands
   */</span>
  brandControllerFindAll: <span class="hljs-function">(<span class="hljs-params">
    query: {
      active: <span class="hljs-built_in">boolean</span>;
      limit: <span class="hljs-built_in">number</span>;
      skip: <span class="hljs-built_in">number</span>;
    },
    params: RequestParams = {},
  </span>) =&gt;</span>
    <span class="hljs-built_in">this</span>.request&lt;BrandDto[], <span class="hljs-built_in">any</span>&gt;({
      path: <span class="hljs-string">`/brands`</span>,
      method: <span class="hljs-string">"GET"</span>,
      query: query,
      format: <span class="hljs-string">"json"</span>,
      ...params,
    }),
...
</code></pre>
<h2 id="heading-challenges-during-adoption">Challenges During Adoption</h2>
<h3 id="heading-challenge-1-non-standard-endpoint-definitions-on-the-backend">Challenge 1: Non-Standard Endpoint Definitions on the Backend</h3>
<p>Our first major obstacle was consistency on the backend. Although we were already generating OpenAPI definitions through <code>@nestjs/swagger</code>, our controller implementations were not standardized. Over time, endpoints accumulated a large number of decorators, often applied inconsistently across controllers. <strong>To solve this, we introduced a single, standardized decorator that encapsulates the full endpoint definition</strong>: routing method and path, Swagger response definitions, request body/query/param metadata, guards, roles, and additional decorators. Below is the core implementation of our <code>EndpointDefinition</code> decorator. It converts a single definition object into the appropriate NestJS and Swagger decorators and applies them via <code>applyDecorators</code>:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// src/endpoint/index.ts</span>
<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">EndpointDefinition</span>(<span class="hljs-params">
  definition: EndpointDefinition
</span>): <span class="hljs-title">MethodDecorator</span> </span>{
  <span class="hljs-keyword">const</span> decorators: <span class="hljs-built_in">Array</span>&lt;
    ClassDecorator | MethodDecorator | PropertyDecorator
  &gt; = []
  <span class="hljs-keyword">switch</span> (definition.method) {
    <span class="hljs-keyword">case</span> HttpMethod.get:
      decorators.push(Get(definition.path))
      <span class="hljs-keyword">break</span>
    ...
  }
  definition.responses.forEach(<span class="hljs-function"><span class="hljs-params">response</span> =&gt;</span>  decorators.push(ApiResponse(response)))
  <span class="hljs-keyword">if</span> (definition.body) decorators.push(ApiBody(definition.body))
  <span class="hljs-keyword">if</span> (definition.query) decorators.push(ApiQuery(definition.query))
  <span class="hljs-keyword">if</span> (definition.params) decorators.push(ApiParam(definition.params))
  decorators.push(ApiOperation({ description: definition.description }))
  decorators.push(UseGuards(...definition.guards))
  decorators.push(...definition.extra)
  <span class="hljs-keyword">return</span> applyDecorators(...decorators)
}
</code></pre>
<p>Once this abstraction was in place, controller methods became significantly easier to scan and maintain. For example, the <code>getBrands</code> endpoint moved from a long list of annotations into a single reusable definition that fully describes the endpoint contract:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// src/definitions/brand.ts</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> GetBrandsResponse <span class="hljs-keyword">extends</span> BrandDto {}
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> GetBrandsQuery {
  <span class="hljs-comment">// Query properties for listing brands (pagination, filters)</span>
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> GetBrandsDefinition = EndpointDefinition({
  method: HttpMethod.get,
  path: <span class="hljs-string">'/brands'</span>,
  description: <span class="hljs-string">'Get list of brands with pagination'</span>,
  guards: [UserGuard],
  responses: [
    { status: HttpStatus.OK, <span class="hljs-keyword">type</span>: GetBrandsResponse, isArray: <span class="hljs-literal">true</span> }
  ],
  extra: [UsePipes(<span class="hljs-keyword">new</span> ValidationPipe())]
})
</code></pre>
<pre><code class="lang-typescript"><span class="hljs-comment">// src/controllers/brand.ts</span>
<span class="hljs-meta">@GetBrandsDefinition</span>
<span class="hljs-keyword">async</span> getBrand(
  <span class="hljs-meta">@Request</span>() req: UserAuthRequest,
  <span class="hljs-meta">@Query</span>() query: GetBrandsQuery,
): <span class="hljs-built_in">Promise</span>&lt;GetBrandsResponse[]&gt; {
  <span class="hljs-comment">// Service Calls</span>
}
</code></pre>
<p>This pattern gave us two immediate benefits. First, <strong>controllers became cleaner and more uniform</strong>, which reduced review overhead and made it easier to spot missing pieces. Second, the OpenAPI schema quality improved because endpoint metadata was consistently defined in one place.</p>
<h3 id="heading-challenge-2-adopting-generated-api-types-on-the-frontend">Challenge 2: Adopting Generated API Types on the Frontend</h3>
<p>The second major challenge emerged on the frontend once we started consuming the generated OpenAPI types. Before this change, we relied on a small set of manually maintained TypeScript interfaces to represent most backend entities. These types were shared across multiple endpoints, pages, and components, and over time they drifted away from the actual API behavior. <strong>In particular, many fields that were optional in practice were not marked as optional in TypeScript, which masked inconsistencies until runtime.</strong></p>
<p>When we switched to using the generated API definitions, these mismatches surfaced immediately as TypeScript errors. The compiler started flagging missing fields, incorrect assumptions, and invalid type usage across the application. While this was ultimately a positive outcome, it made the initial adoption phase challenging, especially for entities that were reused extensively across the UI.</p>
<p><strong>To manage this, we avoided a big-bang migration. Instead, we updated the frontend incrementally, focusing on one entity at a time.</strong> For each entity, we aligned components, pages, and data flows with the generated types before moving on to the next. This was particularly time-consuming for core entities that appeared in multiple views and business flows, but it allowed us to make progress without blocking feature development.</p>
<p>As a result, the frontend currently uses a mix of manually defined types and generated API types. While this is not the final state we’re aiming for, it provides a practical transition path. Our long-term goal is to rely entirely on generated types as the single source of truth, but we expect this migration to happen gradually as the codebase continues to evolve.</p>
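One way to picture the incremental, entity-by-entity migration is a transition alias: re-export the generated DTO under the legacy type name so call sites move over gradually. The names below are purely illustrative, not from the actual codebase:

```typescript
// What swagger-typescript-api might emit for a brand, with the optionality
// that the old hand-written interface hid now made explicit.
interface BrandDto {
  id: string;
  name: string;
  logoUrl?: string; // optional in practice; the manual type claimed it was required
}

// Legacy alias kept during the migration; delete once all usages move over.
type Brand = BrandDto;

// The compiler now forces callers to handle the optional field.
function brandLabel(brand: Brand): string {
  return brand.logoUrl ?? "no-logo";
}
```

Each entity migrated this way turns a runtime surprise (a missing field) into a compile-time error at the exact call sites that need attention.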
<h2 id="heading-results">Results</h2>
<h3 id="heading-improved-api-standardization-and-contract-quality">Improved API Standardization and Contract Quality</h3>
<p>Standardizing endpoint definitions led us to be much more deliberate about request and response contracts. Responses that were previously implicit or loosely defined are now explicitly documented in OpenAPI, which improved clarity, <strong>reduced accidental breaking changes</strong>, and made backend development more consistent and reviewable.</p>
<h3 id="heading-stronger-reliability-between-backend-and-frontend">Stronger Reliability Between Backend and Frontend</h3>
<p>Using generated API clients and types significantly improved reliability between backend and frontend by catching mismatches at compile time instead of runtime. While migrating from manually maintained types introduced some short-term friction and type errors, <strong>these issues consistently exposed real inconsistencies</strong> and resulted in more robust frontend implementations.</p>
<h3 id="heading-better-visibility-into-api-usage-and-optimization-opportunities">Better Visibility into API Usage and Optimization Opportunities</h3>
<p>Typed API usage made it easier to see which endpoints are used by which screens and which response fields are actually required. Although we haven’t fully leveraged this yet, it creates a strong foundation for future optimizations such as reducing response sizes and aligning endpoints more closely with real usage patterns, <strong>bringing some GraphQL-like benefits to our REST setup.</strong></p>
<h3 id="heading-daily-developer-experience">Daily Developer Experience</h3>
<p>From a frontend perspective, the developer experience has improved notably. Based on Jason’s experience, frontend developers now care less about plugging in API endpoints and can almost immediately start using them by simply wrapping them with React Query. This saves time and allows the team to focus on how data is consumed rather than on correctly fetching it. Additionally, pulling types directly from the OpenAPI Specification has elevated frontend type safety, as changes are now reflected immediately instead of requiring manual updates.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.ablo.ai/how-we-work">https://blog.ablo.ai/how-we-work</a></div>
<p> </p>
<h2 id="heading-whats-next"><strong>What’s Next</strong></h2>
<p>One natural next step is to <strong>package our generated API definitions as a versioned NPM package</strong> and publish a new release on every merge. This would clearly shift ownership of API contracts to the backend and allow frontend projects to consume a specific, immutable version of the API. The value of this approach increases as the number of backend and frontend services grows, especially in a multi-repo environment.</p>
<p>Another opportunity is <strong>reusing these API definitions across our other backend projects</strong>. Since some of our enterprise-facing services already depend on the main backend for certain operations, sharing a single, typed API contract would reduce duplication and make cross-service communication more explicit and reliable.</p>
<p>Longer term, we could move away from generating OpenAPI definitions via <code>@nestjs/swagger</code> annotations and instead <strong>generate OpenAPI JSON directly from TypeScript types</strong>. While this would create a cleaner, type-first contract model, it would require introducing additional tooling on the backend and refactoring all existing endpoints. Given the current cost and limited immediate benefit, this is not something we plan to pursue right now but it remains a potential direction if our architecture evolves further.</p>
]]></content:encoded></item><item><title><![CDATA[Search using PostgreSQL GIN indices]]></title><description><![CDATA[Ablo.ai, like most applications worldwide, provides a search feature to look for various products or creators. As with any startup, Ablo.ai initially implemented search by simply filtering the PostgreSQL database using full-text search with LIKE stat...]]></description><link>https://blog.ablo.ai/search-using-postgresql-gin-indices</link><guid isPermaLink="true">https://blog.ablo.ai/search-using-postgresql-gin-indices</guid><category><![CDATA[PostgreSQL]]></category><category><![CDATA[typeorm]]></category><category><![CDATA[Databases]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[software development]]></category><category><![CDATA[backend]]></category><dc:creator><![CDATA[Jurgis Petrauskas-Mittas]]></dc:creator><pubDate>Thu, 09 Oct 2025 15:39:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760017266118/1206de77-3b4a-4139-a471-300205ad7d3a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Ablo.ai</em>, like most applications worldwide, provides a search feature to look for various products or creators. As with any startup, <a target="_blank" href="http://Ablo.ai"><em>Ablo.ai</em></a> initially implemented search by simply filtering the PostgreSQL database using full-text search with <code>LIKE</code> statements. This approach works relatively fast only when the data size is limited. As the data scales and additional filtering features are added, PostgreSQL's default filtering becomes exponentially slower, making our API latency times unbearable. To solve problems like this, people typically think of tools like Elasticsearch, Solr, or other third-party services. However, we decided to work with what we had and exploit PostgreSQL to its fullest. 
This blog describes our research and decisions to choose PostgreSQL and GIN (<a target="_blank" href="https://www.postgresql.org/docs/current/gin.html">Generalized Inverted Index</a>) indices as our search engine over more industry-standard and popular tools.</p>
<h2 id="heading-why-choose-postgresql-gin-indexes-full-text-search">Why choose PostgreSQL GIN indexes for full-text search</h2>
<p>The following image shows the GIN index data structure (<a target="_blank" href="https://pganalyze.com/blog/gin-index">https://pganalyze.com/blog/gin-index</a>):</p>
<p><a target="_blank" href="https://pganalyze.com/blog/gin-index"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760015171970/f7f37b0b-f6c2-4ace-a687-a532c589c508.png" alt class="image--center mx-auto" /></a></p>
<p>Choosing PostgreSQL GIN indexes for full-text search is mainly driven by ease of implementation and cost-effectiveness, as it provides indexed search speed directly within PostgreSQL without additional components. Since our codebase already runs on TypeORM with PostgreSQL, it only requires setting up the appropriate columns with a GIN index. This approach avoids the need to collect meaningful data, send it to third-party services, and then query those services via APIs or SDKs.</p>
<p>Given that we already have a PostgreSQL container with ample unused space, potential issues related to adding indices—such as increased storage—are minimal. PostgreSQL indices store extra data structures to enable faster data retrieval, which speeds up queries but also consumes additional storage. For example, GitLab encountered issues with large GIN indexes causing occasional slow updates due to the overhead of cleaning up the GIN pending list, sometimes resulting in multi-second stalls during operations (<a target="_blank" href="https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4725#note_596146675">source</a>). These issues arose from the heavy write workload on their large indices, requiring complex tuning and maintenance strategies to keep performance acceptable. However, this situation does not apply to us for the foreseeable future, as our database workload and size are much smaller, and we have sufficient resources to manage GIN index overhead without experiencing such bottlenecks.</p>
<p>By default, PostgreSQL uses B-tree indices, which employ a balanced tree structure where each index entry points to a single row. B-tree indices work best with straightforward data types like numbers, dates, and single values. However, when dealing with complex data types that hold multiple values in one column—such as arrays, JSON documents, or full-text search data—GIN indexes are the better option as they efficiently handle these multi-valued structures.</p>
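Conceptually, a GIN index over a <code>tsvector</code> is an inverted index: a map from each lexeme to the set of rows that contain it, so a search term becomes a single lookup instead of a scan over every row. A toy sketch in TypeScript (illustrative only; PostgreSQL's real implementation stores keys in a B-tree with compressed posting lists):

```typescript
// Build an inverted index mapping each lexeme to the ids of the rows
// containing it. Tokenization here is a naive lowercase word split;
// PostgreSQL's to_tsvector also applies stemming and stop-word removal.
function buildInvertedIndex(rows: Map<number, string>): Map<string, Set<number>> {
  const index = new Map<string, Set<number>>();
  for (const [id, text] of rows) {
    for (const lexeme of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!index.has(lexeme)) index.set(lexeme, new Set());
      index.get(lexeme)!.add(id);
    }
  }
  return index;
}

const index = buildInvertedIndex(new Map([
  [1, "Red hoodie with dragon print"],
  [2, "Blue dragon t-shirt"],
]));

index.get("dragon"); // contains row ids 1 and 2
index.get("hoodie"); // contains row id 1 only
```

A query then intersects the posting sets of its terms, which is why lookups stay fast even as the table grows: cost scales with the number of matching rows, not the number of total rows.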
<p>Using third-party search tools like Elasticsearch provides powerful, highly scalable full-text search capabilities with advanced features such as fuzzy matching, ranking algorithms, autocomplete, and complex aggregations. These tools are built specifically for search, offering fast and relevant results even on massive datasets. However, they bring overhead in the form of additional infrastructure to deploy and maintain, requiring data synchronization between your main database and the search cluster, which increases complexity and operational costs. Moreover, you must manage separate backups, security, and ensure consistency between data stores, which can complicate your system architecture compared to using built-in database search features.</p>
<p>So, we decided to give GIN a try.</p>
<h2 id="heading-setting-up-gin-in-typeorm">Setting up GIN in TypeORM</h2>
<p>By creating a generated column in the same table that combines text fields such as name and description into a single <code>tsvector</code> using PostgreSQL’s <code>to_tsvector</code> with a specified configuration (e.g., ‘english’), we can automatically update this column on each insert or update. This generated column can then be indexed with a GIN index, which is designed for sub-dividable data like full-text lexemes, enabling fast and efficient search queries without manually converting text to vectors on each request.</p>
<p>Key benefits include automatic updates of the full-text vector on data changes, faster search execution via the GIN index, and simpler queries, since they can search the indexed generated column directly rather than calling <code>to_tsvector</code> dynamically. While generated columns must reference only columns from the same table (thus requiring separate columns and indexes for related entities like design and template), this approach still avoids the overhead of on-the-fly vector calculation and provides clear performance advantages.</p>
<p>An example from the Design entity, with a generated <code>searchVector</code> column and the corresponding GIN index, demonstrates this setup:</p>
<pre><code class="lang-typescript"><span class="hljs-meta">@Index</span>(<span class="hljs-string">'idx_gin_design_search_vector'</span>, [<span class="hljs-string">'searchVector'</span>])
<span class="hljs-meta">@Column</span>({
  <span class="hljs-keyword">type</span>: <span class="hljs-string">'tsvector'</span>,
  generatedType: <span class="hljs-string">'STORED'</span>,
  asExpression: <span class="hljs-string">`to_tsvector('english', COALESCE(name, '') || ' ' || COALESCE(description, ''))`</span>,
  nullable: <span class="hljs-literal">false</span>,
  select: <span class="hljs-literal">false</span>,
  insert: <span class="hljs-literal">false</span>,
  update: <span class="hljs-literal">false</span>
})
searchVector: <span class="hljs-built_in">string</span>;
</code></pre>
<p>After generating a migration for the <code>searchVector</code> column, a couple of inaccuracies were spotted. TypeORM does not support creating GIN indexes natively, so we had to manually tweak the index creation SQL to include the GIN index type. Additionally, when creating a generated column, TypeORM inserts full details from the database into the <code>typeorm_metadata</code> table, including the database name. This means that if your local database name differs from the initial database used for migrations, TypeORM will regenerate all code for that generated column whenever new migrations are created.</p>
<p>The fixed migration looks like this (<code>USING GIN</code> is a crucial tweak; without it, a B-tree index is created instead):</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> up(queryRunner: QueryRunner): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
    <span class="hljs-keyword">await</span> queryRunner.query(<span class="hljs-string">`
            ALTER TABLE "design"
            ADD "search_vector" tsvector GENERATED ALWAYS AS (
                    to_tsvector(
                        'english',
                        COALESCE(name, '') || ' ' || COALESCE(description, '')
                    )
                ) STORED NOT NULL
        `</span>)
    <span class="hljs-keyword">await</span> queryRunner.query(
      <span class="hljs-string">`
            INSERT INTO "typeorm_metadata"(
                    "database",
                    "schema",
                    "table",
                    "type",
                    "name",
                    "value"
                )
            VALUES ($1, $2, $3, $4, $5, $6)
        `</span>,
      [
        <span class="hljs-string">'image_generator'</span>,
        <span class="hljs-string">'public'</span>,
        <span class="hljs-string">'design'</span>,
        <span class="hljs-string">'GENERATED_COLUMN'</span>,
        <span class="hljs-string">'search_vector'</span>,
        <span class="hljs-string">"to_tsvector('english', COALESCE(name, '') || ' ' || COALESCE(description, ''))"</span>
      ]
    )

    <span class="hljs-keyword">await</span> queryRunner.query(<span class="hljs-string">`
      CREATE INDEX "idx_gin_design_search_vector" 
      ON "design" USING GIN ("search_vector")
  `</span>)
  }
</code></pre>
<p>The data stored into <code>search_vector</code> for design with name <code>Heart Eyes Bunny</code> and description: <code>A cute pink bunny with heart-shaped eyes radiating love. Perfect for expressing affection in a fun way!</code> looks like this: <code>'affect':18 'bunni':3,7 'cute':5 'express':17 'eye':2,12 'fun':21 'heart':1,10 'heart-shap':9 'love':14 'perfect':15 'pink':6 'radiat':13 'shape':11 'way':22</code>.</p>
<p>The text represents a <code>tsvector</code> column value, which stores a processed, searchable version of a text document for full-text search. Each word (called a lexeme), like ‘affect’, ‘bunni’, or ‘cute’, is listed with numbers that indicate the positions where that word appears in the original text. This positional data helps PostgreSQL optimize phrase and proximity searches. The GIN index uses these lexemes and their positions to quickly find rows containing the searched words without scanning the entire text, making full-text search efficient and fast.</p>
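<p>To make that format concrete, here is a small standalone sketch (not part of our codebase) that parses a <code>tsvector</code>’s text representation into a lexeme-to-positions map. It assumes plain positional entries like the example above, without the optional A–D weight labels that <code>tsvector</code> entries can also carry.</p>

```typescript
// Parse a tsvector's text form, e.g. "'bunni':3,7 'heart':1,10",
// into { lexeme: [positions] } to illustrate what GIN indexes over.
function parseTsvector(value: string): { [lexeme: string]: number[] } {
  const out: { [lexeme: string]: number[] } = {};
  // each entry is a quoted lexeme, a colon, and comma-separated positions
  const entry = /'([^']+)':([0-9,]+)/g;
  let match: RegExpExecArray | null;
  while ((match = entry.exec(value)) !== null) {
    out[match[1]] = match[2].split(',').map(Number);
  }
  return out;
}

const vector = parseTsvector("'bunni':3,7 'cute':5 'heart':1,10");
console.log(vector['heart']); // [1, 10]: 'heart' appears at positions 1 and 10
```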
<p>Finally, we update the API to handle search:</p>
<pre><code class="lang-typescript">      <span class="hljs-keyword">const</span> sanitizedSearch = sanitizeSearchString(options.search)
      <span class="hljs-keyword">const</span> searchTerm = <span class="hljs-built_in">this</span>.prepareSearchTerm(sanitizedSearch)
      <span class="hljs-keyword">const</span> fullTextExpr = <span class="hljs-string">`
        "design"."search_vector" @@ to_tsquery(:query)
      `</span>
      queryBuilder
        .addSelect(<span class="hljs-string">`similarity(design.name, :searchTerm)`</span>, <span class="hljs-string">'name_similarity'</span>)
        .addSelect(fullTextExpr, <span class="hljs-string">'fulltext_match'</span>)
        .andWhere(<span class="hljs-string">`(<span class="hljs-subst">${fullTextExpr}</span> OR design.name ILIKE :searchPattern)`</span>, {
          query: searchTerm,
          searchPattern: <span class="hljs-string">`%<span class="hljs-subst">${sanitizedSearch}</span>%`</span>,
          searchTerm: sanitizedSearch
        })
        .orderBy(<span class="hljs-string">'fulltext_match'</span>, <span class="hljs-string">'DESC'</span>)
        .addOrderBy(<span class="hljs-string">'name_similarity'</span>, <span class="hljs-string">'DESC'</span>)
        .addOrderBy(<span class="hljs-string">'design.createdAt'</span>, <span class="hljs-string">'DESC'</span>)
</code></pre>
<p>This code performs a full-text search combined with a similarity ranking and fallback pattern match on the “design” entity. First, it sanitizes the search input and prepares it as a PostgreSQL <code>tsquery</code> term. Then, it constructs a query that checks if the <code>search_vector</code> (a precomputed tsvector column) matches the search term using full-text search operators. The query adds two computed columns: <code>fulltext_match</code> to indicate if the full-text search matched, and <code>name_similarity</code> to measure similarity between the search term and the design name. It filters results where either the full-text search matches or the name contains the search text pattern (<code>ILIKE</code>). Finally, it orders results by the full-text match flag first (descending), then by the similarity score, and lastly by creation date, ensuring the most relevant and recent records appear first.</p>
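<p>The <code>sanitizeSearchString</code> and <code>prepareSearchTerm</code> helpers are not shown above; a plausible sketch of what they need to do (an illustration, not our exact implementation) is to strip characters that are special to <code>tsquery</code> and <code>ILIKE</code>, then turn the remaining words into AND-ed prefix matches:</p>

```typescript
// Hypothetical helpers: the real implementations are not shown in this post.
// Strip characters that have special meaning in tsquery or LIKE patterns.
function sanitizeSearchString(input: string): string {
  return input.replace(/[&|!():*\\%_]/g, ' ').replace(/\s+/g, ' ').trim();
}

// Join the remaining words into a tsquery with prefix matching,
// so "cute bun" matches lexemes starting with "bun".
function prepareSearchTerm(sanitized: string): string {
  return sanitized
    .split(' ')
    .filter((word) => word.length > 0)
    .map((word) => word + ':*')
    .join(' & ');
}

console.log(prepareSearchTerm(sanitizeSearchString('cute & bunny!')));
// cute:* & bunny:*
```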
<h2 id="heading-the-results-of-the-api">The results of the API</h2>
<p>We haven’t noticed any slowdown in our POST or PATCH APIs in practice—though theoretically they may take slightly longer, the difference is imperceptible. Meanwhile, our search performs quickly and reliably, consistently returning accurate results for both Products and Creators. The API response time typically stays within 200-250 ms, and with caching in place, it becomes even faster over time as more requests are served directly from cache, improving efficiency with continued use.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760015870331/9d2f90f0-3ac0-4cae-ab0d-c66f0fc2ee60.png" alt class="image--center mx-auto" /></p>
<p>And the SQL query consistently completes in around 0.041 s (0.001 s fetch), even when executed against a server in a different region:</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">SELECT</span>
  design.*,
  similarity(design.name, <span class="hljs-string">'dogs'</span>) <span class="hljs-keyword">AS</span> name_similarity,
  (design.search_vector @@ to_tsquery(<span class="hljs-string">'dogs'</span>)) <span class="hljs-keyword">AS</span> fulltext_match
<span class="hljs-keyword">FROM</span>
  design
<span class="hljs-keyword">WHERE</span>
  (design.search_vector @@ to_tsquery(<span class="hljs-string">'dogs'</span>) <span class="hljs-keyword">OR</span> design.name <span class="hljs-keyword">ILIKE</span> <span class="hljs-string">'%dogs%'</span>)
<span class="hljs-keyword">ORDER</span> <span class="hljs-keyword">BY</span>
  fulltext_match <span class="hljs-keyword">DESC</span>,
  name_similarity <span class="hljs-keyword">DESC</span>,
  design.created_at <span class="hljs-keyword">desc</span>
<span class="hljs-keyword">limit</span> <span class="hljs-number">5</span>;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760016281012/6587a4d3-c7da-4988-ab83-d2d86a0dab78.png" alt class="image--center mx-auto" /></p>
<p>The query plan shows that the GIN index on the <code>search_vector</code> column is effectively utilized to accelerate full-text search. The plan includes a Bitmap Index Scan on the GIN index followed by a Bitmap Heap Scan on the <code>design</code> table, filtering rows that match the full-text condition or the trigram index on the <code>name</code> column. The sorting is performed with a top-N heapsort based on the full-text match, similarity score, and creation date. The entire query runs efficiently with an execution time of just over 2 milliseconds:</p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">Limit</span>  (<span class="hljs-keyword">cost</span>=<span class="hljs-number">341.80</span>.<span class="hljs-number">.341</span><span class="hljs-number">.81</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">5</span> width=<span class="hljs-number">551</span>) (actual <span class="hljs-type">time</span>=<span class="hljs-number">2.305</span>.<span class="hljs-number">.2</span><span class="hljs-number">.307</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">5</span> loops=<span class="hljs-number">1</span>)
  -&gt;  Sort  (<span class="hljs-keyword">cost</span>=<span class="hljs-number">341.80</span>.<span class="hljs-number">.342</span><span class="hljs-number">.15</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">139</span> width=<span class="hljs-number">551</span>) (actual <span class="hljs-type">time</span>=<span class="hljs-number">2.305</span>.<span class="hljs-number">.2</span><span class="hljs-number">.306</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">5</span> loops=<span class="hljs-number">1</span>)
        Sort Key: ((search_vector @@ to_tsquery(<span class="hljs-string">'dogs'</span>::<span class="hljs-type">text</span>))) <span class="hljs-keyword">DESC</span>, (similarity((<span class="hljs-type">name</span>)::<span class="hljs-type">text</span>, <span class="hljs-string">'dogs'</span>::<span class="hljs-type">text</span>)) <span class="hljs-keyword">DESC</span>, created_at <span class="hljs-keyword">DESC</span>
        Sort <span class="hljs-keyword">Method</span>: top-N heapsort  Memory: <span class="hljs-number">29</span>kB
        -&gt;  Bitmap Heap Scan <span class="hljs-keyword">on</span> design  (<span class="hljs-keyword">cost</span>=<span class="hljs-number">119.06</span>.<span class="hljs-number">.339</span><span class="hljs-number">.49</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">139</span> width=<span class="hljs-number">551</span>) (actual <span class="hljs-type">time</span>=<span class="hljs-number">1.171</span>.<span class="hljs-number">.2</span><span class="hljs-number">.023</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">135</span> loops=<span class="hljs-number">1</span>)
              Recheck Cond: ((search_vector @@ to_tsquery(<span class="hljs-string">'dogs'</span>::<span class="hljs-type">text</span>)) <span class="hljs-keyword">OR</span> ((<span class="hljs-type">name</span>)::<span class="hljs-type">text</span> ~~* <span class="hljs-string">'%dogs%'</span>::<span class="hljs-type">text</span>))
              Heap Blocks: exact=<span class="hljs-number">108</span>
              -&gt;  BitmapOr  (<span class="hljs-keyword">cost</span>=<span class="hljs-number">119.06</span>.<span class="hljs-number">.119</span><span class="hljs-number">.06</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">139</span> width=<span class="hljs-number">0</span>) (actual <span class="hljs-type">time</span>=<span class="hljs-number">1.132</span>.<span class="hljs-number">.1</span><span class="hljs-number">.133</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">0</span> loops=<span class="hljs-number">1</span>)
                    -&gt;  Bitmap <span class="hljs-keyword">Index</span> Scan <span class="hljs-keyword">on</span> idx_gin_design_search_vector  (<span class="hljs-keyword">cost</span>=<span class="hljs-number">0.00</span>.<span class="hljs-number">.24</span><span class="hljs-number">.38</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">138</span> width=<span class="hljs-number">0</span>) (actual <span class="hljs-type">time</span>=<span class="hljs-number">0.371</span>.<span class="hljs-number">.0</span><span class="hljs-number">.372</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">135</span> loops=<span class="hljs-number">1</span>)
                          <span class="hljs-keyword">Index</span> Cond: (search_vector @@ to_tsquery(<span class="hljs-string">'dogs'</span>::<span class="hljs-type">text</span>))
                    -&gt;  Bitmap <span class="hljs-keyword">Index</span> Scan <span class="hljs-keyword">on</span> trgm_idx_name  (<span class="hljs-keyword">cost</span>=<span class="hljs-number">0.00</span>.<span class="hljs-number">.94</span><span class="hljs-number">.61</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">1</span> width=<span class="hljs-number">0</span>) (actual <span class="hljs-type">time</span>=<span class="hljs-number">0.760</span>.<span class="hljs-number">.0</span><span class="hljs-number">.760</span> <span class="hljs-keyword">rows</span>=<span class="hljs-number">8</span> loops=<span class="hljs-number">1</span>)
                          <span class="hljs-keyword">Index</span> Cond: ((<span class="hljs-type">name</span>)::<span class="hljs-type">text</span> ~~* <span class="hljs-string">'%dogs%'</span>::<span class="hljs-type">text</span>)
Planning <span class="hljs-type">Time</span>: <span class="hljs-number">0.406</span> ms
Execution <span class="hljs-type">Time</span>: <span class="hljs-number">2.355</span> ms
</code></pre>
<p>This demonstrates how GIN indexes combined with similarity and trigram indexes can deliver fast, accurate full-text search results with efficient query execution.</p>
<p><a target="_blank" href="https://ablo.ai/"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760015744805/33fd414e-ddef-40ab-b82d-9da6b793b897.png" alt class="image--center mx-auto" /></a></p>
<p><a target="_blank" href="https://ablo.ai/"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760015770328/218263bf-c595-4793-9f33-996bf77d68ff.png" alt class="image--center mx-auto" /></a></p>
<h2 id="heading-to-sum-up">To Sum Up</h2>
<p>To sum up, at Ablo AI, we initially used PostgreSQL’s simple LIKE-based search, which worked only for small datasets. As we scaled and added features, this approach became too slow and caused high API latency. Instead of adopting third-party search tools like Elasticsearch, we decided to maximize PostgreSQL’s native capabilities by using GIN indexes for full-text search. This approach fits well with our existing TypeORM setup, requires less infrastructure overhead, and delivers fast, automated search vector updates and queries. We created generated columns combining text fields into <code>tsvector</code> columns that are automatically updated and indexed with GIN. While we had to manually adjust SQL to correctly create GIN indexes due to TypeORM limitations, the benefits are clear: fast, consistent search performance with minimal maintenance complexity. The <code>tsvector</code> columns store tokenized words with positions, allowing GIN to efficiently locate matches. Our API combines full-text search with similarity ranking and pattern matching, delivering results within 200-250 ms, with query plans confirming the effective use of GIN and trigram indexes in just a few milliseconds. This demonstrates how harnessing PostgreSQL’s full-text search capabilities with GIN indexes can provide scalable, high-performance search without the complexity of external search services.</p>
<h3 id="heading-sources">Sources</h3>
<ul>
<li><p><a target="_blank" href="https://www.postgresql.org/docs/current/gin.html">https://www.postgresql.org/docs/current/gin.html</a> [2025-10-09]</p>
</li>
<li><p><a target="_blank" href="https://pganalyze.com/blog/gin-index">https://pganalyze.com/blog/gin-index</a> [2025-10-09]</p>
</li>
<li><p><a target="_blank" href="https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4725#note_596146675">https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4725#note_596146675</a> [2025-10-09]</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Code Splitting with Vite SSR (Server Side Rendering)]]></title><description><![CDATA[1. What is code splitting and why you should care
As your app grows, features, utilities, third-party packages pile up and the amount of JavaScript you ship grows with them. That extra JS has to be downloaded, parsed, and executed before users can fu...]]></description><link>https://blog.ablo.ai/code-splitting-with-vite-ssr-server-side-rendering</link><guid isPermaLink="true">https://blog.ablo.ai/code-splitting-with-vite-ssr-server-side-rendering</guid><category><![CDATA[vite]]></category><category><![CDATA[SSR]]></category><category><![CDATA[code splitting]]></category><category><![CDATA[React]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Jason Ladias]]></dc:creator><pubDate>Tue, 07 Oct 2025 17:06:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759843224769/a60c6773-b900-47f2-a6c4-9c252372264e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-1-what-is-code-splitting-and-why-you-should-care">1. What is code splitting and why you should care</h2>
<p>As your app grows, features, utilities, third-party packages pile up and the amount of JavaScript you ship grows with them. That extra JS has to be <strong>downloaded, parsed, and executed</strong> before users can fully interact (and in many SPAs, before the initial screen hydrates). So, the more you ship, the slower the startup? Pretty much yes, unless you do some <em>code splitting</em>.</p>
<p>Think of <em>code splitting</em> as slicing your app into smaller bundles that the browser <strong>loads</strong> <strong>only when they’re needed</strong> (for the current route or interaction). This optimization shortens the initial load and directly improves metrics measured by tools like <strong>Lighthouse</strong>, <a target="_blank" href="https://developer.chrome.com/docs/lighthouse/overview">Google’s automated tool for auditing web performance and accessibility</a>. Strong Lighthouse scores often reflect healthy <a target="_blank" href="https://web.dev/articles/vitals"><strong>Core Web Vitals</strong></a>, metrics like <a target="_blank" href="https://web.dev/articles/lcp">Largest Contentful Paint (LCP)</a> and <a target="_blank" href="https://web.dev/articles/cls">Cumulative Layout Shift (CLS)</a> that affect both user experience and search ranking. You can read more about code splitting in the <a target="_blank" href="https://react.dev/learn/build-a-react-app-from-scratch#code-splitting">React docs</a>.</p>
<h2 id="heading-2-react-based-code-splitting">2. React-based code splitting</h2>
<h3 id="heading-a-route-based-splitting">a. Route-based splitting</h3>
<p>This is the classic move: split by page or route. Each route’s UI becomes its own bundle, loaded only when the user navigates there. It’s clean because routes are natural boundaries in your app. This is often the first cut people make.</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// Assume a router setup with React Router</span>
<span class="hljs-keyword">const</span> Home = React.lazy(<span class="hljs-function">() =&gt;</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">'./pages/Home'</span>));
<span class="hljs-keyword">const</span> Profile = React.lazy(<span class="hljs-function">() =&gt;</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">'./pages/Profile'</span>));

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">AppRouter</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">Suspense</span> <span class="hljs-attr">fallback</span>=<span class="hljs-string">{</span>&lt;<span class="hljs-attr">div</span>&gt;</span>Loading…<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>}&gt;
      <span class="hljs-tag">&lt;<span class="hljs-name">Routes</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">Route</span> <span class="hljs-attr">path</span>=<span class="hljs-string">"/"</span> <span class="hljs-attr">element</span>=<span class="hljs-string">{</span>&lt;<span class="hljs-attr">Home</span> /&gt;</span>} /&gt;
        <span class="hljs-tag">&lt;<span class="hljs-name">Route</span> <span class="hljs-attr">path</span>=<span class="hljs-string">"/profile"</span> <span class="hljs-attr">element</span>=<span class="hljs-string">{</span>&lt;<span class="hljs-attr">Profile</span> /&gt;</span>} /&gt;
      <span class="hljs-tag">&lt;/<span class="hljs-name">Routes</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">Suspense</span>&gt;</span></span>
  );
}
</code></pre>
<h3 id="heading-b-component-level-splitting">b. Component-level splitting</h3>
<p>Within a route, you may have heavy components (charts, editors, modals, maps). Use lazy loading on those so you avoid shipping them by default. Even if the user never opens a modal or expands the editor, they won’t incur that cost.</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Dashboard</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> HeavyChart = React.lazy(<span class="hljs-function">() =&gt;</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">'./charts/HeavyChart'</span>));
  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">h2</span>&gt;</span>Your activity<span class="hljs-tag">&lt;/<span class="hljs-name">h2</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">Suspense</span> <span class="hljs-attr">fallback</span>=<span class="hljs-string">{</span>&lt;<span class="hljs-attr">div</span>&gt;</span>Chart loading…<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>}&gt;
        <span class="hljs-tag">&lt;<span class="hljs-name">HeavyChart</span> /&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">Suspense</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
  );
}
</code></pre>
<h3 id="heading-c-library-splitting">c. Library splitting</h3>
<p>You can extract large third-party libraries (Stripe, DnD, fabric, data viz libs) into their own chunks. They’re loaded only when features requiring them activate. This lets your core app stay lean, while these “big rocks” sit dormant until needed.</p>
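<p>As a quick illustration (a sketch, not our actual code), the trick is to reference such a library only through a dynamic <code>import()</code>, which tells the bundler to emit it as a separate chunk, and to memoize the load so repeated activations reuse the same module:</p>

```typescript
// Wrap a dynamic import so the chunk is fetched at most once,
// and only when the feature is first activated.
function lazyOnce(loader: any) {
  let cached: any = null;
  return function () {
    if (cached === null) {
      cached = loader(); // the import starts here, not at app startup
    }
    return cached;
  };
}

// e.g. const getStripe = lazyOnce(() => import('@stripe/stripe-js'));
// Calling getStripe() inside a checkout click handler fetches the
// Stripe chunk on first use and reuses the same promise afterwards.
```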
<h2 id="heading-3-route-based-splitting-and-vite-ssr-are-not-good-friends">3. Route Based Splitting and Vite SSR are not good friends</h2>
<p>We built our stack on <strong>Vite’s SSR</strong>, expecting route-based splitting to mostly work out of the box. Instead, we ended up staring at broken routes, weird freezes, and more console errors than confidence. The problems we encountered boiled down to <strong>three systemic pains</strong>:</p>
<ul>
<li><p><strong>Hydration mismatches</strong>: the HTML sent by our server would sometimes diverge from what React expected on the client, triggering errors like <em>“Hydration failed”</em>. In SSR environments, mismatches aren’t rare. To tame this, we looked at libraries that advertise smoother SSR + lazy boundary integration. <a target="_blank" href="https://loadable-components.com/docs/server-side-rendering/"><strong>Loadable Components</strong></a>, for instance, presents itself as a better fit for SSR than <code>React.lazy</code>, offering tooling to help align server and client rendering.</p>
</li>
<li><p><strong>Tooling friction with SSR + lazy libraries</strong>: We experimented with Loadable Components, but moved away <a target="_blank" href="https://github.com/gregberge/loadable-components/issues/833">since it is not compatible with Vite</a>. Then we also tried <a target="_blank" href="https://github.com/wille/vite-preload">Vite preload</a>, but wiring it into our stack wasn’t as smooth as the docs suggest. Parts of our app became unresponsive, and we had to build custom logic to patch over some of those issues. What stabilized eventually cracked again when authentication flows were involved.</p>
</li>
<li><p><strong>Auth logic</strong>: Even when chunking <em>worked</em>, some user flows, especially around login, started failing. For example, after authentication the app sometimes treated the user as still unauthenticated.</p>
</li>
</ul>
<p>Together, these pains taught us that “route splitting under Vite SSR” is far from a lightweight refactor or a quick win. It demands deep orchestration between routing, lazy loading, manifest wiring, and authentication. Trying to build it from scratch felt a lot like rewriting your own framework. So we decided to look elsewhere for performance gains.</p>
<h2 id="heading-4-what-we-shipped-good-old-library-splitting">4. What we shipped: Good old library splitting</h2>
<p>We took a pragmatic approach, not trying to split everything, but isolating what hurt us most. The ultimate goal was to reduce our <strong>main bundle size</strong> and shave milliseconds off startup. That’s why we pivoted toward <strong>library splitting</strong> instead of wrestling with full route-based code splitting under SSR.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> MODULES_FOR_SEPARATE_CHUNKS = [
  <span class="hljs-string">'@stripe/stripe-js'</span>,
  <span class="hljs-string">'@stripe/react-stripe-js'</span>,
  ... <span class="hljs-comment">// rest of heavy libraries</span>
];

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> defineConfig({
  <span class="hljs-attr">build</span>: {
    <span class="hljs-attr">rollupOptions</span>: {
      <span class="hljs-attr">output</span>: {
        manualChunks(id) {
          <span class="hljs-keyword">const</span> ROLLUP_COMMON_MODULES = [
            <span class="hljs-string">'vite/preload-helper'</span>,
            <span class="hljs-string">'vite/modulepreload-polyfill'</span>,
            <span class="hljs-string">'vite/dynamic-import-helper'</span>,
            <span class="hljs-string">'commonjsHelpers'</span>,
            <span class="hljs-string">'commonjs-dynamic-modules'</span>,
            <span class="hljs-string">'__vite-browser-external'</span>,
          ];

          <span class="hljs-keyword">if</span> (
            id.includes(<span class="hljs-string">'node_modules'</span>) &amp;&amp;
            MODULES_FOR_SEPARATE_CHUNKS.find(<span class="hljs-function">(<span class="hljs-params"><span class="hljs-built_in">module</span></span>) =&gt;</span> id.includes(<span class="hljs-built_in">module</span>))
          ) {
            <span class="hljs-keyword">return</span> id.toString().split(<span class="hljs-string">'node_modules/'</span>)[<span class="hljs-number">1</span>].split(<span class="hljs-string">'/'</span>)[<span class="hljs-number">0</span>].toString();
          }

          <span class="hljs-keyword">if</span> (
            id.includes(<span class="hljs-string">'node_modules'</span>) ||
            ROLLUP_COMMON_MODULES.some(<span class="hljs-function">(<span class="hljs-params">commonModule</span>) =&gt;</span> id.includes(commonModule))
          ) {
            <span class="hljs-keyword">return</span> <span class="hljs-string">'vendor'</span>;
          }
        },
      },
    },
  },
  ...<span class="hljs-comment">//rest of config</span>
})
</code></pre>
<p>In our <code>vite.config.js</code>, we declared a <code>MODULES_FOR_SEPARATE_CHUNKS</code> array that lists heavyweight dependencies we want carved out of the main bundle. In the <code>manualChunks(id)</code> function, we check if the module path (<code>id</code>) comes from <code>node_modules</code> and matches one of those listed modules. If so, we return the module’s name so Vite/Rollup builds it into its own chunk. For anything else in <code>node_modules</code>, we funnel them into a shared <code>vendor</code> chunk. The result: heavy libraries live in isolated bundles only fetched when needed, while the core of our app stays lean.</p>
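<p>One detail worth noting: the chunk name above is the first path segment after <code>node_modules/</code>, so a scoped package such as <code>@stripe/stripe-js</code> yields a chunk named <code>@stripe</code>, grouping every package in that scope together. If you prefer one chunk per scoped package, a small variant (a sketch, assuming the usual <code>id</code> shape Rollup provides) could keep both segments:</p>

```typescript
// Derive a chunk name from a Rollup module id, keeping the full
// "@scope/name" for scoped packages instead of just the scope.
function packageNameFromId(id: string): string {
  const tail = id.split('node_modules/').pop() as string;
  const parts = tail.split('/');
  return parts[0].startsWith('@') ? parts[0] + '/' + parts[1] : parts[0];
}

console.log(packageNameFromId('/app/node_modules/@stripe/stripe-js/dist/index.js'));
// @stripe/stripe-js
```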
<p>A few caveats worth knowing: Vite/Rollup’s manual chunking isn’t foolproof. Issues like chunks loading earlier than expected or duplication sometimes appear in real-world setups. Here is a link to the issue on GitHub: <a target="_blank" href="https://github.com/vitejs/vite/issues/5189">https://github.com/vitejs/vite/issues/5189</a></p>
<h2 id="heading-5-faster-load-times">5. Faster load times</h2>
<p>After deploying our optimizations, our app’s load time indeed felt significantly faster. We started seeing Lighthouse scores pop into the <strong>90s</strong> on several pages. That kind of result signals you’re in the “green zone” for performance (<a target="_blank" href="https://developer.chrome.com/docs/lighthouse/performance/performance-scoring">Lighthouse marks 90+ as “good”</a>). These results confirmed that our surgical splitting, deferring of heavy scripts, and chunking strategy weren’t just theoretical; they moved practical metrics.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759852042256/5e179fc6-8cde-42e5-8430-a0a3433ce125.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759852062364/bc1ed922-d880-4f6a-92e1-1d86762ffa21.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759852086109/41d1998a-e88b-4916-b6ba-04422ca27fc6.png" alt class="image--center mx-auto" /></p>
<p>To be precise, the scores referenced here were measured on <strong>October 7, 2025</strong>, giving us a date-stamped benchmark for comparison.</p>
<h2 id="heading-6-whats-next">6. What’s Next?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759853883721/cecba407-4b4e-4209-963b-1fe8c1772a5b.png" alt class="image--center mx-auto" /></p>
<p>We started off looking for a <em>quick win</em>: add route splitting, pull the lever, get performance. But Vite SSR &amp; route code splitting turned out to be a beast: hydration mismatches, odd auth bugs, and tooling friction were true boss fights. By focusing instead on surgical cuts, deferring third-party scripts, isolating heavy libraries, and keeping the main bundle lean, we achieved really solid results.</p>
<p>Of course, optimizations never really stop. We can continue pushing: splitting out more chunks, preloading fonts only in the editor, asynchronously loading those small CSS files only when needed, tightening cache lifetimes, and squashing layout shifts. And if those efforts don’t feel enough, migrating to a purpose-built framework remains on the table as a fallback, though not our only path forward.</p>
]]></content:encoded></item><item><title><![CDATA[Transitioning Your Testing Framework: Jest to Vitest]]></title><description><![CDATA[In backend development, testing is more than just a best practice; it's a safeguard for stability, data integrity, and system resilience. While unit tests help catch issues in isolated functions, they often fall short in revealing real world integrat...]]></description><link>https://blog.ablo.ai/jest-to-vitest-in-nestjs</link><guid isPermaLink="true">https://blog.ablo.ai/jest-to-vitest-in-nestjs</guid><category><![CDATA[Jest]]></category><category><![CDATA[vitest]]></category><category><![CDATA[vite]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Testing Library]]></category><category><![CDATA[migration]]></category><category><![CDATA[spacerunners]]></category><dc:creator><![CDATA[Okan Aslan]]></dc:creator><pubDate>Mon, 21 Jul 2025 15:15:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753019855860/eb628e28-956c-46cf-8764-e972b35cf4d0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In backend development, testing is more than just a best practice; it's a safeguard for stability, data integrity, and system resilience. While unit tests help catch issues in isolated functions, they often fall short in revealing real world integration problems that arise when multiple components interact. <strong>This is where end-to-end (E2E) testing shines. It not only validates logic, but checks behavior across authentication, APIs, databases, and even external services.</strong></p>
<p>At Space Runners, we’ve embraced comprehensive end-to-end testing as a pillar of our <a target="_blank" href="https://blog.ablo.ai/how-we-work">engineering culture</a>. Our E2E test suite covers over 90% of the backend logic, helping us catch regressions early, ensure contract stability, and build with confidence. But as our systems and team scaled, so did the need for faster, <strong>more efficient testing infrastructure without compromising on coverage or reliability</strong>.</p>
<h2 id="heading-why-we-considered-a-change">Why We Considered a Change</h2>
<p>While our test suite was comprehensive, it came at a cost: <strong>our CI pipeline was taking nearly 15 minutes to complete.</strong> This delay significantly slowed down our development feedback loop, especially painful during hotfixes or urgent releases. We found ourselves either waiting unproductively for builds to finish or, worse, skipping tests to save time. Neither option was sustainable.</p>
<p>After evaluating alternatives, we decided to migrate our backend tests from Jest to Vitest. The transition was appealing for several reasons: <strong>Vitest shares a nearly identical API with Jest, making it easy to swap out without massive refactoring.</strong> It's also built on top of the Vite ecosystem, which continues to gain traction in the frontend and backend communities alike. With strong community adoption, active development, and <a target="_blank" href="https://v0.vitest.dev/guide/migration.html#migrating-from-jest">detailed migration guides</a>, Vitest provided a smooth and well supported path forward.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753027077674/8df7865a-ab02-4376-8d8e-851e77cf90d2.png" alt="Our CI pipeline durations when we were using Jest which is almost 15 minutes" class="image--center mx-auto" /></p>
<h2 id="heading-configuring-vitest-for-our-stack">Configuring Vitest for our Stack</h2>
<p>Getting Vitest up and running in our backend wasn’t entirely plug-and-play, but the process was manageable thanks to our existing setup. Since <strong>we were already running Jest with SWC</strong>, we had most of the necessary infrastructure in place. The primary additions were a new <code>vitest.config.ts</code> file and an update to our <code>tsconfig.json</code> to include Vitest’s global types via <code>"types": ["vitest/globals"]</code>.</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"compilerOptions"</span>: {
    ...
    <span class="hljs-attr">"emitDecoratorMetadata"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"experimentalDecorators"</span>: <span class="hljs-literal">true</span>,
    ...
    <span class="hljs-attr">"types"</span>: [<span class="hljs-string">"vitest/globals"</span>]
  }
}
</code></pre>
<p>By default, <strong>Vitest uses ESBuild to transform code. However, because our NestJS + TypeORM backend relies heavily on metadata</strong> (especially for decorators), we needed a transformer that supports it properly. ESBuild lacks full metadata support, so <strong>we opted to use SWC instead</strong>. The <code>unplugin-swc</code> package made this integration straightforward, allowing us to plug SWC into Vitest without needing to change our existing build pipeline.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// vitest.config.ts</span>
<span class="hljs-keyword">import</span> swc <span class="hljs-keyword">from</span> <span class="hljs-string">'unplugin-swc'</span>
<span class="hljs-keyword">import</span> { defineConfig } <span class="hljs-keyword">from</span> <span class="hljs-string">'vitest/config'</span>

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> defineConfig({
  test: {
    globals: <span class="hljs-literal">true</span>,
    environment: <span class="hljs-string">'node'</span>,
    pool: <span class="hljs-string">'threads'</span>,
    poolOptions: {
      threads: {
        singleThread: <span class="hljs-literal">true</span>
      }
    },
  },
  plugins: [swc.vite()]
})
</code></pre>
<p>To ensure correct metadata handling, our <code>.swcrc</code> configuration includes <code>"legacyDecorator": true</code> and <code>"decoratorMetadata": true</code>. These flags are essential in projects that use TypeORM entities and decorators, like ours.</p>
<pre><code class="lang-json"><span class="hljs-comment">// .swcrc</span>
{
  <span class="hljs-attr">"$schema"</span>: <span class="hljs-string">"https://swc.rs/schema.json"</span>,
  <span class="hljs-attr">"sourceMaps"</span>: <span class="hljs-literal">true</span>,
  <span class="hljs-attr">"jsc"</span>: {
    <span class="hljs-attr">"parser"</span>: {
      <span class="hljs-attr">"syntax"</span>: <span class="hljs-string">"typescript"</span>,
      <span class="hljs-attr">"decorators"</span>: <span class="hljs-literal">true</span>,
      <span class="hljs-attr">"dynamicImport"</span>: <span class="hljs-literal">true</span>
    },
    <span class="hljs-attr">"transform"</span>: {
      <span class="hljs-attr">"legacyDecorator"</span>: <span class="hljs-literal">true</span>,
      <span class="hljs-attr">"decoratorMetadata"</span>: <span class="hljs-literal">true</span>
    }
  }
}
</code></pre>
<p>Lastly, due to some setup constraints with our current test structure, we opted to run tests in a single-threaded pool. While Vitest supports parallel test execution, we deferred that optimization to avoid introducing flaky test behavior before stabilizing the new setup.</p>
<h2 id="heading-challenges-in-the-migration">Challenges in the Migration</h2>
<p>At first glance, migrating from Jest to Vitest looked relatively simple. Both frameworks share nearly identical testing APIs. However, once we began running tests in our actual backend environment, deeper issues started to surface. The core challenges came down to the specifics of how Vitest handles module resolution, metadata, and ESM mocking.</p>
<h3 id="heading-1-typeorm-compatibility">1. TypeORM Compatibility</h3>
<p>Our backend is built on NestJS and TypeORM, which heavily depend on decorators and runtime metadata. This became one of the more stubborn issues in the migration. When configuring TypeORM in tests, our original approach was to pass file path patterns (e.g., <code>**/*.entity.ts</code>) to define the entity list. However, Vitest’s transformation layer struggled to preserve metadata information when resolving those files dynamically.</p>
<p>To resolve this, we switched to importing every entity manually and passing them as an array to the TypeORM configuration. While this is slightly more verbose and rigid, it ensured that decorator metadata was preserved and the ORM could correctly reflect on the entities. We also tried referencing compiled JavaScript paths, but this too resulted in missing metadata issues.</p>
<p>One caveat with this approach is that every entity must be explicitly included. Missing even one results in confusing errors or unexpected runtime failures. We’ve since made this part of our setup more maintainable by centralizing entity imports in a shared module.</p>
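<p>A minimal sketch of that centralization, using stand-in classes instead of our real TypeORM entities (the names and file layout are illustrative):</p>

```typescript
// entities/index.ts (sketch) — in the real app these are classes
// decorated with TypeORM's @Entity(); plain stand-ins are used here.
class UserEntity {}
class DesignEntity {}

// Single source of truth: both the application DataSource and the
// test configuration consume this array, so decorator metadata is
// preserved and a new entity only has to be registered in one place.
export const ALL_ENTITIES = [UserEntity, DesignEntity];

// Hypothetical usage in a TypeORM config:
//   new DataSource({ ...connectionOptions, entities: ALL_ENTITIES })
```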
<h3 id="heading-2-mocking-esm-modules">2. Mocking ESM Modules</h3>
<p>Another tricky area was mocking third-party ESM modules. In our case, Vitest wasn’t reliably able to mock certain ESM-only packages, like <code>potrace</code>. Direct mocks would either fail silently or break at runtime.</p>
<p>Our workaround was to introduce internal wrapper functions around these modules. Instead of importing <code>potrace</code> directly in our tests or application logic, we created utility helpers that internally used <code>potrace</code>, and then mocked those helpers in tests. This added a small layer of indirection, but it allowed us to keep tests deterministic and compatible with Vitest’s mocking system. Thankfully, the number of affected modules was small, and we were able to refactor them without much disruption.</p>
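<p>The wrapper idea can be sketched like this, with a stand-in for <code>potrace</code> (the helper name and callback shape here are illustrative, not our actual code):</p>

```typescript
// trace.helper.ts (sketch) — "fakePotrace" stands in for the real
// ESM-only potrace module; the helper name is illustrative.
type TraceCallback = (err: Error | null, svg?: string) => void;

const fakePotrace = {
  trace(_buf: Uint8Array, cb: TraceCallback): void {
    cb(null, '<svg/>');
  },
};

// Application code imports this helper instead of potrace directly.
// Tests then mock the helper itself rather than the ESM module, e.g.:
//   vi.mock('./trace.helper', () => ({ traceImage: vi.fn() }))
export function traceImage(buf: Uint8Array): Promise<string> {
  return new Promise((resolve, reject) => {
    fakePotrace.trace(buf, (err, svg) =>
      err ? reject(err) : resolve(svg as string)
    );
  });
}
```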
<h2 id="heading-results-of-the-migration">Results of the Migration</h2>
<p>Despite some initial hiccups during the migration, the results have been undeniably worth it. Our CI runtime dropped from 15 minutes to just 4, and local test runs now complete in around one minute. This dramatic improvement in speed has made testing a frictionless part of development. We are no longer hesitant to run the full test suite locally, and urgent deployments or hotfixes are no longer bottlenecked by slow CI feedback.</p>
<p>Another key benefit was test monitoring support. Vitest integrates smoothly with tools that allow us to track test performance over time and gather insights on memory leaks and coverage changes. This gives us visibility we previously lacked, helping us proactively improve test quality.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753027163746/10255a57-9475-4356-a136-0cdd6dac8e70.png" alt="Our current CI pipeline durations with Vitest which is under 5 minutes" class="image--center mx-auto" /></p>
<h2 id="heading-whats-next">What’s Next?</h2>
<p>The migration from Jest to Vitest was absolutely worth it. Although there were technical hurdles, mainly around <strong>TypeORM</strong> and <strong>ESBuild</strong> quirks, the performance gains and improved developer experience have made a measurable difference. Our tests are faster, lighter, and better integrated with modern tooling.</p>
<p>We’re now exploring <strong>parallel test execution</strong>, which could further reduce CI times. However, running E2E tests in parallel introduces complexity, particularly around database state isolation. Solving these challenges is our next focus as we continue to scale our infrastructure.</p>
]]></content:encoded></item><item><title><![CDATA[Server-Side Rendering (SSR) with Vite]]></title><description><![CDATA[Introduction
Server side rendering (SSR) is a technique where a page with all of its content is generated on the server and sent to the browser fully populated. This technique helps with SEO and makes the site feel snappier on initial load, although ...]]></description><link>https://blog.ablo.ai/server-side-rendering-ssr-with-vite</link><guid isPermaLink="true">https://blog.ablo.ai/server-side-rendering-ssr-with-vite</guid><category><![CDATA[SSR]]></category><category><![CDATA[vite]]></category><category><![CDATA[e-commerce]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Mihovil Kovacevic]]></dc:creator><pubDate>Mon, 16 Jun 2025 21:09:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750100179871/c2b57075-e048-4098-a48a-c9ffdba43dfc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Server side rendering (SSR) is a technique where a page with all of its content is generated on the server and sent to the browser fully populated. This technique helps with SEO and makes the site feel snappier on initial load, although it adds implementation complexity. In the old days of the web everything was server side generated, but then single page apps (SPAs) were introduced and they offered a much better user experience. These days modern frameworks based on React use a hybrid approach which works like this:</p>
<ol>
<li><p>A page is generated on the server, populated with data and sent to the browser</p>
</li>
<li><p>On the browser side the page is visible immediately and then the client side script adds interactivity and event handlers</p>
</li>
<li><p>As you navigate through the app with links you get the interactivity of a SPA</p>
</li>
<li><p>Internal pages such as dashboards and editors where SEO doesn't matter are just regular SPAs</p>
</li>
</ol>
<p>This article gives a high-level overview of how to take an SPA built with the popular build tool Vite and add SSR to it using native Vite tools, without committing to a framework. If you browse Reddit and various forums, you’ll find many teams asking questions like: “How do I build the home page with SSR and the rest of the app with Vite?” There aren’t many resources with definitive answers beyond confirming that it’s possible. This article aims to shed some light on the entire process and how it worked out for us.</p>
<h2 id="heading-motivation-for-ssr-in-ablo">Motivation for SSR in Ablo</h2>
<p>Ablo offers a comprehensive editor tool where artists create beautiful designs, but it also offers a storefront where consumers can buy these designs. The latter has to be indexable by Google and easily discoverable on the web because we want to maximize the traffic we get on these pages. Maximizing traffic on every e-commerce-related page means more people entering the sales funnel and ultimately more sales. These are pages used for marketing, such as the home page, product listings, or product details. If we only had these kinds of pages, we would be better served by a framework such as NextJS or Remix.</p>
<p>However, the bulk of our app is the entire ecosystem around the editor and all the admin pages focused around publishing and managing designs. At the time of this writing we’re about to build a huge creator hub module with tools which artists will use to maximize their earnings. All of these pages are behind a login and use client-side libraries such as fabric.js or later on charting libraries for dashboards. They aren’t indexed by Google so they don’t need SEO considerations and SSR support. Since its inception Ablo has been a pure SPA centered around design tools and AI image generation. The merch shop and e-commerce aspect came later. Like many SPAs out there, Ablo uses Vite as its build tool because of unparalleled simplicity, flexibility, and speed of development. Unlike frameworks it doesn’t force you into a specific paradigm. It just does its job and gets out of the way.</p>
<p>Like many apps, when Ablo recently got the requirement to have a subset of its pages rendered on the server, we evaluated different solutions. Thankfully, Vite has SSR support and although it’s not a fully fledged SSR framework, it provides tools to effectively and easily build an SSR solution while maintaining all the benefits previously mentioned. For us this meant that we got to keep our existing workflows and implement SSR on a few of our pages with minimal effort.</p>
<p>A worthy mention is <code>vite-ssr</code>, a framework for SSR with Vite. Their site has a comprehensive comparison with popular frameworks and it makes a great sales pitch on all of its benefits. However, we decided not to go with this solution because it’s paid and we saw a clear path on how to achieve our goals with a custom solution.</p>
<h2 id="heading-measuring-the-results">Measuring the results</h2>
<p>Our end goal with SSR was to improve page load speed and ultimately SEO, because the latter depends on the former. There are many free and paid tools online that measure page load speed, but the first step is to start with Google’s Lighthouse. It can be run from Chrome DevTools to get fast feedback during development on how different approaches affect performance. Here’s our score for Ablo’s home page as a pure SPA without SSR:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750099873262/283c85d8-f29d-4019-a49f-49dea2e820d1.png" alt class="image--center mx-auto" /></p>
<p>As you can see, it’s not very good. This was our starting point. After we implemented SSR and also added a couple of optimizations on how we store and serve images, it’s much better:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750100170275/0635ecc4-ff21-4521-aff2-0b84bccbc831.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-getting-started">Getting started</h2>
<p>Vite has good documentation on how to work with SSR, but it goes into too much detail on some things, so it’s easy to miss what’s important. We’ll present a bare-bones version that’s already in production in our case, works well, and took a couple of days to implement. Assuming you already have a Vite app, the first thing is to install Express (<code>npm i express</code>), because you’ll need a server to serve the server-rendered pages. Here’s the basic idea of SSR with Vite:</p>
<ol>
<li><p>In dev mode you use Vite’s Hot Module Replacement (HMR) server as Express middleware. This allows for instant feedback in the browser as you make changes to the code. That’s a feature everyone is used to when working with Vite, and you can still use it.</p>
</li>
<li><p>In production the Vite app is built into a <code>dist</code> folder as usual, without SSR. However, in this case the Express server reads this HTML and pre-renders data before sending it to the client.</p>
</li>
<li><p>There are two entry points to the app: <code>entry-client.tsx</code> and <code>entry-server.tsx</code> . For existing Vite apps the client entry point is the same main file which they currently use. The server entry point has all the pre-rendering logic and a static router for routes with SSR.</p>
</li>
<li><p>In production the client entry point is built into the client bundle as described in point 2, and the server side is also built into its own distribution folder.</p>
</li>
</ol>
<p>The entry point to the server (Express) side is the <code>server.js</code> file. It’s a simple file which embeds the logic described above.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> express <span class="hljs-keyword">from</span> <span class="hljs-string">'express'</span>;

<span class="hljs-keyword">import</span> fs <span class="hljs-keyword">from</span> <span class="hljs-string">'node:fs/promises'</span>;

<span class="hljs-keyword">const</span> base = process.env.BASE || <span class="hljs-string">'/'</span>;

<span class="hljs-keyword">const</span> isProduction = process.env.NODE_ENV === <span class="hljs-string">'production'</span>;

<span class="hljs-comment">// Cached production assets</span>
<span class="hljs-keyword">const</span> templateHtml = isProduction ? <span class="hljs-keyword">await</span> fs.readFile(<span class="hljs-string">'./dist/client/index.html'</span>, <span class="hljs-string">'utf-8'</span>) : <span class="hljs-string">''</span>;

<span class="hljs-comment">// Create http server</span>
<span class="hljs-keyword">const</span> app = express();

<span class="hljs-comment">// Add Vite or respective production middlewares</span>
<span class="hljs-comment">/** @type {import('vite').ViteDevServer | undefined} */</span>
<span class="hljs-keyword">let</span> vite;

<span class="hljs-keyword">if</span> (!isProduction) {
  <span class="hljs-comment">// In development we use the good old Vite HMR server, but as an Express middleware here</span>
  <span class="hljs-keyword">const</span> { createServer } = <span class="hljs-keyword">await</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">'vite'</span>);

  vite = <span class="hljs-keyword">await</span> createServer({
    <span class="hljs-attr">server</span>: { <span class="hljs-attr">middlewareMode</span>: <span class="hljs-literal">true</span> },
    <span class="hljs-attr">appType</span>: <span class="hljs-string">'custom'</span>,
    base,
  });
  app.use(vite.middlewares);
} <span class="hljs-keyword">else</span> {
  <span class="hljs-keyword">const</span> compression = (<span class="hljs-keyword">await</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">'compression'</span>)).default;
  <span class="hljs-keyword">const</span> sirv = (<span class="hljs-keyword">await</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">'sirv'</span>)).default;
  app.use(compression());
  app.use(base, sirv(<span class="hljs-string">'./dist/client'</span>, { <span class="hljs-attr">extensions</span>: [] }));
}

<span class="hljs-comment">// This is a catch-all route used as an entry point to render the initial page</span>
app.use(<span class="hljs-string">'*all'</span>, <span class="hljs-keyword">async</span> (req, res, next) =&gt; {
  <span class="hljs-keyword">const</span> url = req.originalUrl;

  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">let</span> template;
    <span class="hljs-comment">/** @type {import('./src/entry-server.js').render} */</span>
    <span class="hljs-keyword">let</span> render;

    <span class="hljs-keyword">if</span> (!isProduction) {
      template = <span class="hljs-keyword">await</span> fs.readFile(<span class="hljs-string">'./index.html'</span>, <span class="hljs-string">'utf-8'</span>);
      template = <span class="hljs-keyword">await</span> vite.transformIndexHtml(url, template);
      <span class="hljs-comment">// The entry point to SSR for the initial load in dev mode</span>
      render = (<span class="hljs-keyword">await</span> vite.ssrLoadModule(<span class="hljs-string">'/src/entry-server.tsx'</span>)).render;
    } <span class="hljs-keyword">else</span> {
      <span class="hljs-comment">// In production the entry-server file is built into a distribution folder</span>
      template = templateHtml;
      render = (<span class="hljs-keyword">await</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">'./dist/server/entry-server.js'</span>)).render;
    }

    <span class="hljs-comment">// The entry-server file always has to implement a "render" method which is used to generate</span>
    <span class="hljs-comment">// the HTML that will be returned to the browser on initial load</span>
    <span class="hljs-keyword">const</span> {
      head,
      <span class="hljs-attr">html</span>: appHtml,
      <span class="hljs-attr">dehydratedState</span>: initialData,
    } = <span class="hljs-keyword">await</span> render({
      <span class="hljs-attr">path</span>: url.split(<span class="hljs-string">'?'</span>)[<span class="hljs-number">0</span>],
      <span class="hljs-attr">userAgent</span>: req.headers[<span class="hljs-string">'user-agent'</span>],
    });

    <span class="hljs-comment">// The Helmet part is to support meta tags, which is a topic for a future article</span>
    <span class="hljs-keyword">const</span> html = template
      .replace(<span class="hljs-string">`&lt;!--helmet-outlet--&gt;`</span>, <span class="hljs-function">() =&gt;</span> head)
      <span class="hljs-comment">// The index html has a comment area which is replaced with the actual HTML on initial load</span>
      .replace(<span class="hljs-string">`&lt;!--ssr-outlet--&gt;`</span>, <span class="hljs-function">() =&gt;</span> appHtml)
      <span class="hljs-comment">// This part is to inject preloaded data into the page immediately, without fetching it on</span>
      <span class="hljs-comment">// client side. For example, you preload products on the server and inject the entire </span>
      <span class="hljs-comment">// resulting JSON into the HTML which is then picked up by the client side code</span>
      .replace(
        <span class="hljs-string">'&lt;!--dehydrated-state--&gt;'</span>,
        <span class="hljs-string">`&lt;script&gt;window.__REACT_QUERY_STATE__ = <span class="hljs-subst">${<span class="hljs-built_in">JSON</span>.stringify(initialData)}</span>&lt;/script&gt;`</span>
      );

    <span class="hljs-comment">// Send the rendered HTML back.</span>
    res.status(<span class="hljs-number">200</span>).set({ <span class="hljs-string">'Content-Type'</span>: <span class="hljs-string">'text/html'</span> }).end(html);
  } <span class="hljs-keyword">catch</span> (e) {
    <span class="hljs-comment">// If an error is caught, let Vite fix the stack trace so it maps back</span>
    <span class="hljs-comment">// to your actual source code.</span>
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Err'</span>, e);
    vite?.ssrFixStacktrace(e); <span class="hljs-comment">// vite is undefined in production, so guard the call</span>
    next(e);
  }
});

app.listen(<span class="hljs-number">5173</span>);
</code></pre>
<p>There are two important parts in the code above:</p>
<ol>
<li><code>&lt;!--ssr-outlet--&gt;</code> - this is a comment in the index.html which tells the server where to inject the pre-rendered HTML. This is a fully populated document structure which you can see coming back from the server if you filter by “Doc” in Chrome DevTools:</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750101894596/84013884-e6ba-45cc-8e9b-4f1fe5a2dabb.png" alt class="image--center mx-auto" /></p>
<p>2. <code>&lt;!--dehydrated-state--&gt;</code> - this comment marks the area where the server will inject data pre-loaded on the server. For example, on our Merch Shop page above we have a list of brands. In an SPA, the page would load these brands from the server on mount and show a loading spinner. With SSR we preload the brands on the server, but we have to inject them somehow into the client side code. We do that by serializing the state and replacing this comment with the serialized result. On the client, a library like <code>react-query</code> picks up this state and just renders the data. This is a topic for the next installment of this article series.</p>
<h2 id="heading-server-entry">Server entry</h2>
<pre><code class="lang-typescript"><span class="hljs-comment">// src/entry-server.tsx</span>
<span class="hljs-keyword">import</span> { renderToString } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-dom/server'</span>;
<span class="hljs-keyword">import</span> { ChakraProvider } <span class="hljs-keyword">from</span> <span class="hljs-string">'@chakra-ui/react'</span>;
<span class="hljs-keyword">import</span> { dehydrate, QueryClient, QueryClientProvider } <span class="hljs-keyword">from</span> <span class="hljs-string">'@tanstack/react-query'</span>;
<span class="hljs-keyword">import</span> { Route, StaticRouter } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-router-dom'</span>;

<span class="hljs-keyword">import</span> theme <span class="hljs-keyword">from</span> <span class="hljs-string">'@/theme'</span>;

<span class="hljs-keyword">import</span> HomeSignedIn <span class="hljs-keyword">from</span> <span class="hljs-string">'./views/home/HomeSignedIn'</span>;
<span class="hljs-keyword">import</span> SelectTemplatePage <span class="hljs-keyword">from</span> <span class="hljs-string">'./views/template/SelectTemplatePage'</span>;
<span class="hljs-keyword">import</span> { getCategories } <span class="hljs-keyword">from</span> <span class="hljs-string">'./api/templates'</span>;

<span class="hljs-keyword">import</span> <span class="hljs-string">'./index.css'</span>;

<span class="hljs-keyword">import</span> ProductsPageAuthenticated <span class="hljs-keyword">from</span> <span class="hljs-string">'./views/products/ProductsPageAuthenticated'</span>;

<span class="hljs-keyword">import</span> ProductDetailsPage <span class="hljs-keyword">from</span> <span class="hljs-string">'./views/product/ProductDetailsPage'</span>;

<span class="hljs-keyword">import</span> { Helmet } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-helmet'</span>;
<span class="hljs-keyword">import</span> CreatorHubSignedIn <span class="hljs-keyword">from</span> <span class="hljs-string">'./views/creator-hub/CreatorHubSignedIn'</span>;

<span class="hljs-keyword">interface</span> IRenderProps {
  path: <span class="hljs-built_in">string</span>;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">render</span>(<span class="hljs-params">{ path }: IRenderProps</span>) </span>{
  <span class="hljs-keyword">const</span> queryClient = <span class="hljs-keyword">new</span> QueryClient({
    defaultOptions: {
      queries: {
        staleTime: <span class="hljs-number">1000</span> * <span class="hljs-number">60</span> * <span class="hljs-number">5</span>, <span class="hljs-comment">// 5 minutes</span>
      },
    },
  });

  <span class="hljs-keyword">await</span> queryClient.prefetchQuery([<span class="hljs-string">'templates'</span>, <span class="hljs-string">'categories'</span>], <span class="hljs-function">() =&gt;</span> getCategories());

  <span class="hljs-keyword">const</span> html = renderToString(
    &lt;ChakraProvider theme={theme}&gt;
      &lt;QueryClientProvider client={queryClient}&gt;
        &lt;StaticRouter location={path}&gt;
          &lt;Route exact path=<span class="hljs-string">"/"</span> render={<span class="hljs-function">() =&gt;</span> &lt;HomeSignedIn /&gt;}&gt;&lt;/Route&gt;
          &lt;Route
            exact
            path=<span class="hljs-string">"/shop/category/:categorySlug"</span>
            render={<span class="hljs-function">() =&gt;</span> &lt;ProductsPageAuthenticated /&gt;}
          &gt;&lt;/Route&gt;
          &lt;Route
            path=<span class="hljs-string">"/shop/community/:brandIdOrSlug/:categorySlug"</span>
            render={<span class="hljs-function">() =&gt;</span> &lt;ProductsPageAuthenticated /&gt;}
          &gt;&lt;/Route&gt;
          &lt;Route path=<span class="hljs-string">"/shop/:idOrSlug"</span> render={<span class="hljs-function">() =&gt;</span> &lt;ProductDetailsPage /&gt;}&gt;&lt;/Route&gt;
          &lt;Route path=<span class="hljs-string">"/design-studio"</span> render={<span class="hljs-function">() =&gt;</span> &lt;SelectTemplatePage /&gt;}&gt;&lt;/Route&gt;
          &lt;Route path=<span class="hljs-string">"/creator-hub"</span> render={<span class="hljs-function">() =&gt;</span> &lt;CreatorHubSignedIn /&gt;}&gt;&lt;/Route&gt;
        &lt;/StaticRouter&gt;
      &lt;/QueryClientProvider&gt;
    &lt;/ChakraProvider&gt;
  );

  <span class="hljs-keyword">const</span> helmet = Helmet.renderStatic();

  <span class="hljs-keyword">const</span> head = <span class="hljs-string">`
      <span class="hljs-subst">${helmet.title.toString()}</span>
            <span class="hljs-subst">${helmet.meta.toString()}</span>
            <span class="hljs-subst">${helmet.link.toString()}</span>
            `</span>;

  <span class="hljs-keyword">const</span> dehydratedState = dehydrate(queryClient);

  <span class="hljs-keyword">return</span> {
    head,
    html,
    dehydratedState,
  };
}
</code></pre>
<p>The server entry file looks like a normal client-side entry file for a React app, with a top-level router. The difference is that it uses the <code>renderToString</code> method to turn the React component tree into raw HTML. We use Chakra UI as our styling library, and you can see how seamlessly it works with SSR. Another thing to notice is how we pre-populate the <code>react-query</code> query client: we make the same API calls on the server as we would on the client and store the results in <code>react-query</code>’s internal state. At the end we pull all of the query client’s internal state into <code>dehydratedState</code>, which is then used in <code>entry-client</code> as you’ll see in the next section.</p>
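<p>The three strings returned by <code>render</code> still need to be stitched into the HTML document the server sends to the browser, with <code>dehydratedState</code> serialized into the <code>window.__REACT_QUERY_STATE__</code> global that the client entry reads. Below is a minimal, dependency-free sketch of what that step could look like; the <code>renderHtmlDocument</code> name and the placeholder comments are illustrative assumptions, not code from our repo:</p>

```typescript
// Illustrative helper (name and placeholders are assumptions): combines
// the output of render() with an HTML template.
interface IRenderResult {
  head: string;
  html: string;
  dehydratedState: unknown;
}

export function renderHtmlDocument(template: string, result: IRenderResult): string {
  // Escape "<" so a "</script>" inside the serialized state cannot
  // terminate the inline <script> tag early (a classic SSR XSS footgun).
  const stateJson = JSON.stringify(result.dehydratedState).replace(/</g, '\\u003c');

  return template
    .replace('<!--app-head-->', result.head)
    .replace('<!--app-html-->', result.html)
    .replace(
      '<!--app-state-->',
      `<script>window.__REACT_QUERY_STATE__ = ${stateJson};</script>`
    );
}
```

<p>The template here would simply be your <code>index.html</code> with matching placeholder comments inside <code>&lt;head&gt;</code> and around the root <code>&lt;div&gt;</code>.</p>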
<h2 id="heading-client-entry">Client entry</h2>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> React <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> { hydrateRoot } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-dom/client'</span>;

<span class="hljs-keyword">import</span> { isAxiosError } <span class="hljs-keyword">from</span> <span class="hljs-string">'axios'</span>;

<span class="hljs-keyword">import</span> { BrowserRouter, Route, Switch } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-router-dom'</span>;

<span class="hljs-keyword">import</span> { IntercomProvider } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-use-intercom'</span>;

<span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> Sentry <span class="hljs-keyword">from</span> <span class="hljs-string">'@sentry/react'</span>;

<span class="hljs-keyword">import</span> AdminDashboard <span class="hljs-keyword">from</span> <span class="hljs-string">'@/layouts/admin'</span>;
<span class="hljs-keyword">import</span> Auth <span class="hljs-keyword">from</span> <span class="hljs-string">'@/layouts/auth'</span>;
<span class="hljs-keyword">import</span> { ChakraProvider } <span class="hljs-keyword">from</span> <span class="hljs-string">'@chakra-ui/react'</span>;
<span class="hljs-keyword">import</span> ResetPasswordPage <span class="hljs-keyword">from</span> <span class="hljs-string">'@/views/auth/reset-password'</span>;
<span class="hljs-keyword">import</span> theme <span class="hljs-keyword">from</span> <span class="hljs-string">'@/theme'</span>;

<span class="hljs-keyword">import</span> { QueryClient, QueryClientProvider, DehydratedState, Hydrate } <span class="hljs-keyword">from</span> <span class="hljs-string">'@tanstack/react-query'</span>;
<span class="hljs-keyword">import</span> { GoogleOAuthProvider } <span class="hljs-keyword">from</span> <span class="hljs-string">'@react-oauth/google'</span>;

<span class="hljs-keyword">import</span> Config <span class="hljs-keyword">from</span> <span class="hljs-string">'./config'</span>;
<span class="hljs-keyword">import</span> { PageTracker } <span class="hljs-keyword">from</span> <span class="hljs-string">'./analytics/PageTracker'</span>;

<span class="hljs-keyword">import</span> <span class="hljs-string">'./index.css'</span>;
<span class="hljs-keyword">import</span> { Helmet } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-helmet'</span>;
<span class="hljs-keyword">import</span> DEFAULT_TOAST_OPTIONS <span class="hljs-keyword">from</span> <span class="hljs-string">'./theme/toast'</span>;

<span class="hljs-keyword">declare</span> <span class="hljs-built_in">global</span> {
  <span class="hljs-keyword">interface</span> Window {
    __REACT_QUERY_STATE__: DehydratedState;
  }
}

<span class="hljs-keyword">const</span> { ENVIRONMENT, GOOGLE_CLIENT_ID, INTERCOM_APP_ID, SENTRY_DSN } = Config;

<span class="hljs-keyword">const</span> container = <span class="hljs-built_in">document</span>.getElementById(<span class="hljs-string">'app'</span>)!;

<span class="hljs-keyword">if</span> (<span class="hljs-keyword">import</span>.meta.env.PROD) {
  Sentry.init({
    dsn: SENTRY_DSN,
    environment: ENVIRONMENT,
    integrations: [<span class="hljs-keyword">new</span> Sentry.BrowserTracing(), <span class="hljs-keyword">new</span> Sentry.Replay()],
    tracesSampleRate: <span class="hljs-number">0</span>,
    <span class="hljs-comment">// Session Replay</span>
    replaysSessionSampleRate: <span class="hljs-number">0.1</span>, <span class="hljs-comment">// This sets the sample rate at 10%. You may want to change it to 100% while in development and then sample at a lower rate in production.</span>
    replaysOnErrorSampleRate: <span class="hljs-number">1.0</span>, <span class="hljs-comment">// If you're not already sampling the entire session, change the sample rate to 100% when sampling sessions where errors occur.</span>
  });
}

<span class="hljs-keyword">const</span> MAX_RETRIES = <span class="hljs-number">6</span>;
<span class="hljs-keyword">const</span> HTTP_STATUS_TO_NOT_RETRY = [<span class="hljs-number">400</span>, <span class="hljs-number">401</span>, <span class="hljs-number">403</span>, <span class="hljs-number">404</span>, <span class="hljs-number">409</span>];

<span class="hljs-keyword">const</span> dehydratedState = <span class="hljs-built_in">window</span>.__REACT_QUERY_STATE__;

<span class="hljs-keyword">const</span> queryClient = <span class="hljs-keyword">new</span> QueryClient({
  defaultOptions: {
    queries: {
      refetchOnWindowFocus: <span class="hljs-literal">false</span>,
      retry: <span class="hljs-function">(<span class="hljs-params">failureCount, error</span>) =&gt;</span> {
        <span class="hljs-keyword">if</span> (failureCount &gt; MAX_RETRIES) {
          <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
        }

        <span class="hljs-keyword">if</span> (isAxiosError(error) &amp;&amp; HTTP_STATUS_TO_NOT_RETRY.includes(error.response?.status ?? <span class="hljs-number">0</span>)) {
          <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Aborting retry due to <span class="hljs-subst">${error.response?.status}</span> status`</span>);
          <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
        }

        <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
      },
    },
  },
});

hydrateRoot(
  container,
  &lt;React.StrictMode&gt;
    &lt;IntercomProvider appId={INTERCOM_APP_ID}&gt;
      &lt;GoogleOAuthProvider clientId={GOOGLE_CLIENT_ID}&gt;
        &lt;QueryClientProvider client={queryClient}&gt;
          &lt;Hydrate state={dehydratedState}&gt;
            &lt;ChakraProvider theme={theme} toastOptions={{ defaultOptions: DEFAULT_TOAST_OPTIONS }}&gt;
              &lt;BrowserRouter&gt;
                &lt;PageTracker /&gt;
                &lt;Helmet&gt;
                  {ENVIRONMENT !== <span class="hljs-string">'production'</span> ? &lt;meta content=<span class="hljs-string">"noindex"</span> name=<span class="hljs-string">"robots"</span> /&gt; : <span class="hljs-literal">null</span>}
                  &lt;title&gt;ABLO – AI‑Powered Fashion &amp; Custom Merch&lt;/title&gt;
                  &lt;meta
                    name=<span class="hljs-string">"description"</span>
                    content=<span class="hljs-string">"Create premium fashion with AI. Collaborate with iconic brands &amp; design merch in minutes. Shop unique pieces from the creators you love."</span>
                  /&gt;
                &lt;/Helmet&gt;
                &lt;Switch&gt;
                  &lt;Route path={<span class="hljs-string">`/auth`</span>} component={Auth} /&gt;
                  &lt;Route path={<span class="hljs-string">`/reset-password`</span>} component={ResetPasswordPage} /&gt;
                  &lt;Route path={<span class="hljs-string">`/`</span>} component={AdminDashboard} /&gt;
                &lt;/Switch&gt;
              &lt;/BrowserRouter&gt;
            &lt;/ChakraProvider&gt;
          &lt;/Hydrate&gt;
        &lt;/QueryClientProvider&gt;
      &lt;/GoogleOAuthProvider&gt;
    &lt;/IntercomProvider&gt;
  &lt;/React.StrictMode&gt;
);
</code></pre>
<p>The client entry file is just a regular React app, except that, as described before, it loads the server query client’s state into the client-side query client: the data we preloaded and serialized on the server is read from <code>window.__REACT_QUERY_STATE__</code> and passed to the <code>Hydrate</code> component, so it’s available immediately without refetching. This will be explained in more detail in a future article.</p>
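<p>One detail worth guarding against: when the page is served without SSR (for example via plain <code>npm run dev</code>), <code>window.__REACT_QUERY_STATE__</code> won’t exist. A small defensive read like the following (a sketch; the helper name is ours, not from the codebase) lets the app fall back to normal client-side fetching:</p>

```typescript
// Hypothetical helper: read the state injected by the server, if any.
// When the global is absent (page served without SSR), return undefined so
// <Hydrate> becomes a no-op and react-query fetches on the client as usual.
function readDehydratedState<T = unknown>(): T | undefined {
  const w = (globalThis as any).window;
  return w ? (w.__REACT_QUERY_STATE__ as T | undefined) : undefined;
}
```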
<h2 id="heading-new-commands-in-packagejson">New commands in package.json</h2>
<p>Here’s the end state of our scripts in package.json:</p>
<pre><code class="lang-typescript"> <span class="hljs-string">"scripts"</span>: {
    <span class="hljs-string">"dev"</span>: <span class="hljs-string">"vite"</span>,
    <span class="hljs-string">"lint"</span>: <span class="hljs-string">"eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 20"</span>,
    <span class="hljs-string">"typecheck"</span>: <span class="hljs-string">"tsc --noEmit"</span>,
    <span class="hljs-string">"build:library"</span>: <span class="hljs-string">"tsc &amp;&amp; vite build --config vite-lib.config.ts"</span>,
    <span class="hljs-string">"build:fabric"</span>: <span class="hljs-string">"cd node_modules/fabric &amp;&amp; npm run build_with_gestures"</span>,
    <span class="hljs-string">"preview"</span>: <span class="hljs-string">"vite preview"</span>,
    <span class="hljs-string">"build:client"</span>: <span class="hljs-string">"vite build --outDir dist/client"</span>,
    <span class="hljs-string">"build:server"</span>: <span class="hljs-string">"vite build --ssr src/entry-server.tsx --outDir dist/server"</span>,
    <span class="hljs-string">"build:ssr"</span>: <span class="hljs-string">"npm run build:client &amp;&amp; npm run build:server"</span>,
    <span class="hljs-string">"build"</span>: <span class="hljs-string">"vite build"</span>,
    <span class="hljs-string">"start"</span>: <span class="hljs-string">"node server.js"</span>,
    <span class="hljs-string">"start:prod"</span>: <span class="hljs-string">"NODE_ENV=production npm run start"</span>
  },
</code></pre>
<p>In development, when you don’t need to test or work with SSR, use <code>npm run dev</code>. When you need to work with SSR in development, use <code>npm run start</code>. For production, first build the app with <code>npm run build:ssr</code> and then run <code>npm run start:prod</code>.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>This covers the basic strategies for implementing SSR with Vite and should get you up and running with your own implementation. Future installments of this article series will go deeper into specific concepts and details, such as working with data.</p>
]]></content:encoded></item><item><title><![CDATA[How We Work]]></title><description><![CDATA[We on the engineering team have put a lot of thought into how to best work together to ensure we iterate quickly with quality, while not burning out. We’ve formulated this working agreement that I’d love to share and get any suggestions or feedback o...]]></description><link>https://blog.ablo.ai/how-we-work</link><guid isPermaLink="true">https://blog.ablo.ai/how-we-work</guid><category><![CDATA[engineering-management]]></category><category><![CDATA[agile]]></category><dc:creator><![CDATA[Karim Varela]]></dc:creator><pubDate>Fri, 18 Apr 2025 17:48:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1744998390371/e68363cb-f5b2-4450-977c-4d0caf8bc03d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We on the engineering team have put a lot of thought into how to best work together to ensure we iterate quickly with quality, while not burning out. We’ve formulated this working agreement that I’d love to share and get any suggestions or feedback on. We’re always thinking about how we can do better!</p>
<p>v1.3</p>
<h1 id="heading-overview">Overview</h1>
<p>To build a well-designed, efficiently functioning engineering team, we should agree on <strong><em>how</em></strong> we work with each other. These principles will guide us in day-to-day decision-making and ensure we know <strong><em>what to expect from each other</em></strong>. This is a living document and will evolve as the team evolves, with everybody's input.</p>
<h1 id="heading-workflow">Workflow</h1>
<h2 id="heading-standup-daily-sync">Standup / Daily Sync</h2>
<h3 id="heading-asynchronous-written-standup">Asynchronous Written Standup</h3>
<p>Whenever you start work each day, post a detailed status update in the Slack <a target="_blank" href="https://spacerunnersworkspace.slack.com/archives/C05KZ398NUX">#standup</a> channel that includes:</p>
<ol>
<li><p>What you accomplished previously / yesterday</p>
</li>
<li><p>What you plan on accomplishing today / next</p>
<ol>
<li>Indicate your top focus by putting a <code>(TF)</code> next to the item that's most important today</li>
</ol>
</li>
<li><p>Any blockers, dependencies, or obstacles in your way</p>
<ol>
<li><p>Call out any team members you're depending on</p>
</li>
<li><p>Help team resolve blockers asap!</p>
</li>
</ol>
</li>
</ol>
<h3 id="heading-synchronous-video-standup">Synchronous Video Standup</h3>
<p>For our synchronous video standup, you should focus on just 2 things:</p>
<p>1. The most important thing you want to accomplish today (TF)</p>
<p>2. Any obstacles in your way to accomplish that thing</p>
<p>If anybody has any blockers, the team should rally to remove those blockers and get everybody productive asap.</p>
<h2 id="heading-sprintly-recurring-events-amp-ceremonies">Sprintly Recurring Events &amp; Ceremonies</h2>
<h3 id="heading-first-monday">First Monday</h3>
<ul>
<li><strong>Sprint kick off / planning meeting - 1h</strong></li>
</ul>
<p><em>Finalize previous sprint and review tasks to be delivered in current sprint. Ensure estimates are accurate and tasks are well-defined. Commit to delivery.</em></p>
<ul>
<li><strong>Push to Production - 1h</strong></li>
</ul>
<p><em>As long as no critical or high-priority bugs exist. If P0 or P1 bugs exist, we need to fix them asap before we can release to production.</em></p>
<h3 id="heading-first-friday">First Friday</h3>
<ul>
<li><strong>Mid-Sprint Checkin - 30m</strong></li>
</ul>
<p><em>Check-in during standup to see if we're on track for the week to deliver all our commitments for the most important epics on the roadmap. If it looks like anything will slip, see if we can potentially shuffle around any work.</em></p>
<h3 id="heading-2nd-thursday">2nd Thursday</h3>
<ul>
<li><strong>Demo Prep</strong></li>
</ul>
<p><em>Take some time to prepare for your demo so that you can successfully show off what you've accomplished in the sprint.</em></p>
<ul>
<li><strong>Demo</strong></li>
</ul>
<p><em>Demo the most impactful things you've accomplished in the sprint. Limit it to 5m and really highlight the most impactful things. You don't need to demo every single thing you did, especially not bug fixes (unless they're very impactful).</em></p>
<ul>
<li><strong>EOD code freeze</strong></li>
</ul>
<p><em>Wrap up any PRs you're working on, help other devs get their code merged in, and start testing and preparing your demo.</em></p>
<h3 id="heading-2nd-friday">2nd Friday</h3>
<ul>
<li><strong>Internal Testing</strong></li>
</ul>
<p><em>We should focus on testing the changes, fixes, and updates we just implemented in the last sprint. We should ensure the team is equally split between testing mobile, desktop, and BE. We should test on staging builds.</em></p>
<ul>
<li><strong>Next Sprint Proposal and Estimation</strong></li>
</ul>
<p><em>Karim will work with Drew on the roadmap and propose tasks for the next sprint, and the team should give their own estimates. All tasks should be reviewed and estimates completed</em> <strong><em>before</em></strong> <em>the sprint kickoff session on Monday.</em></p>
<h2 id="heading-planning">Planning</h2>
<p>In order to work effectively, we must ensure that the work we plan to do in any given sprint is clearly defined and actionable. That means that before we commit to doing some work in a sprint, we've fully digested the task and asked any clarifying questions we need to. It also means that we've called out any dependencies and worked with product or engineering leadership to ensure those dependencies are prioritized and actionable by the right person or group.</p>
<p>We should NOT commit to doing work in a sprint if it's not actionable.</p>
<p><em>(Unless it's an emergency and/or it's super important that the work is delivered in this sprint. If we have dependencies in a sprint that need research to properly estimate, we can adjust sprint commitments after necessary research is done.)</em></p>
<h3 id="heading-tech-specs">Tech-Specs</h3>
<p>For any non-trivial piece of technical work, and especially for any new APIs or updates to existing APIs, we should write a tech spec to align on <strong><em>how</em></strong> we're going to do the work.</p>
<p>This will ensure everybody is happy, or at least on board, with the implementation details and will help to reduce technical debt going forward. As with everything we do as a startup, there needs to be a balance between doing things the safest / most optimal / most scalable / most correct way vs moving fast. We can talk about those trade-offs in the tech spec and associated meetings (if necessary).</p>
<h2 id="heading-individual-task-workflow">Individual Task Workflow</h2>
<ol>
<li>Move your highest priority task from 'TO DO' to 'IN PROGRESS'</li>
</ol>
<p><em>Humans are by nature single-threaded: you can only do one thing at once, so ensure you only have one task IN PROGRESS at any given time. Anybody in the company should be able to look at our sprint board and know exactly what any of us is working on this very instant.</em></p>
<ol start="2">
<li><p>For any coding task, no matter how small, create a branch for your work using the ClickUp UI.</p>
</li>
<li><p>We should shoot for smaller, more focused PRs.</p>
</li>
</ol>
<p><em>PRs should accomplish a single thing. Smaller PRs are easier to digest, test, and review than large PRs. This ensures quality and alignment. Totally fine to have multiple PRs per ticket - not totally fine to have multiple tickets per PR.</em></p>
<ol start="4">
<li>Commit early and often.</li>
</ol>
<p><em>Commits should be for small subtasks, and the message should be a short description of what you did.</em></p>
<ol start="5">
<li>When a coding task is done, create a PR</li>
</ol>
<p><em>ClickUp should automatically move the task to REVIEW. When the PR is merged, ClickUp should automatically move the task to STAGING.</em></p>
<ol start="6">
<li><p>Explicitly request review from at least one contributor to the repo so they get a notification about it</p>
</li>
<li><p>If changes are requested, as soon as you make changes and are ready for another review, click the button in Github to re-request review so the reviewer is notified.</p>
</li>
</ol>
<p><img src="https://t9003194404.p.clickup-attachments.com/t9003194404/d0402c5b-7cf4-4ebd-b48e-8108e21c7251/image.png" alt /></p>
<h1 id="heading-communication">Communication</h1>
<p>Since we are a globally distributed team, we should default to asynchronous, written communication via Slack. If there are more than a few back-and-forths in Slack, that's a sign you should handle the conversation synchronously, either by huddling immediately or by scheduling a future meeting if necessary. If you need to schedule a meeting, just find free time on people's calendars. We should expect that people's calendars are up to date and reflect their availability.</p>
<p>In addition, since you can't always count on a quick response and back and forth with someone on Slack, we must ensure our communication is concise, but complete. Your written communication should leave no ambiguity to the reader. You should always err on the side of over-communication if you're ever unsure.</p>
<h2 id="heading-cave-focus-time">Cave / Focus Time</h2>
<p>Engineering work requires deep focus. You can and should plan to eliminate distractions from your personal life and your work life so that you can do your best work.</p>
<p>You can add <strong><em>focus time</em></strong> to your calendar to ensure you aren't disrupted by meetings. You can also set your Slack status to 'Cave Time' to indicate to coworkers that they shouldn't disturb you unless it's an emergency.</p>
<h1 id="heading-collaboration">Collaboration</h1>
<p>We value collaboration and mentorship, and recognize that it's not a one way street. Of course, more junior people can learn from more senior people, but more senior people can also learn from junior people.</p>
<h2 id="heading-availability">Availability</h2>
<p>In order to effectively collaborate, we must be available to each other for synchronous work. Regardless of your timezone, you should be available for meetings, pair programming, and other synchronous things from at least the hours of:</p>
<ul>
<li><p>0700 to 1000 PT</p>
</li>
<li><p>1000 to 1300 ET</p>
</li>
<li><p>1200 - 1700 UTC</p>
</li>
<li><p>1400 - 1900 CEST</p>
</li>
</ul>
<h2 id="heading-solving-issues">Solving Issues</h2>
<p>If you're ever stuck on an issue for more than 30 minutes, that's a good sign that you should reach out to the team for help. Likely, there's someone on the team who has seen your issue before and can save the team a lot of time.</p>
<h2 id="heading-pair-programming">Pair Programming</h2>
<p><img src="https://t9003194404.p.clickup-attachments.com/t9003194404/89908a2c-7ea0-425b-8272-be1f28b8a24f/image.png" alt /></p>
<p>We value the practice of pair programming and aim to do it at least once per week with another developer on the team. We recommend the driver / navigator methodology. In this method, the driver is the one who is controlling the keyboard and mouse and the navigator is the one who tells the driver what to do, like you're driving a rally car together. If the driver disagrees, then the driver and the navigator should discuss the best implementation. This method is advantageous because it ensures the one who is not typing stays engaged and forces two way learning.</p>
<p>For most of us engineers, the natural inclination is to hunker down and solve problems by ourselves. But solving problems together can be rewarding and fun! Make an effort to reach out to your coworkers and suggest a pair programming session!</p>
<h1 id="heading-branching">Branching</h1>
<p>For every piece of work we do, we branch. For consistency and for the tight integration with Github, we use the branch name provided by ClickUp:</p>
<p><img src="https://t9003194404.p.clickup-attachments.com/t9003194404/6d8f9f97-23c5-4141-b780-4dce846ca72b/image.png" alt /></p>
<p><img src="https://t9003194404.p.clickup-attachments.com/t9003194404/1c235dc7-4415-48c7-ada9-eec94107c2fc/image.png" alt /></p>
<p>Tip: branch names are not editable, but you can temporarily change the ticket name to something more git-readable and lowercase before creating the branch</p>
<p>Using the branch name from ClickUp will ensure that:</p>
<ol>
<li><p>ClickUp automatically moves your task from TODO to IN PROGRESS when you push the branch.</p>
</li>
<li><p>ClickUp automatically moves your task from IN PROGRESS to IN REVIEW when you create a PR. ClickUp automatically moves your task from IN REVIEW to IN STAGING when the PR is merged.</p>
</li>
<li><p>Github will link to the ClickUp task in the PR.</p>
</li>
</ol>
<h1 id="heading-reviewing-code">Reviewing Code</h1>
<p>Reviewing code earnestly is the single most important thing we can do to ensure the quality of our products, the maintainability of our technology, and knowledge share among the team.</p>
<h2 id="heading-process">Process</h2>
<p>PRs should be created for every change, no matter how small. This enforces discipline on us and prevents us from going down a slippery slope of not getting things reviewed, not catching bugs, letting them get into production, and building a low-quality product.</p>
<p>When reviewing a PR, you should first verify it works as expected. On web, this means running the code locally or via a preview build. On BE, this means running it locally and/or simply verifying the functionality is covered adequately with unit tests.</p>
<h2 id="heading-naming-prs">Naming PRs</h2>
<p>If you created a branch using ClickUp as above, the PR title will automatically be named:</p>
<p><code>CU-{{id}}/{{title}}/{{owner}}</code></p>
<p>Including the ClickUp ID in the PR title creates an audit trail so we can easily refer to the original issue.</p>
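<p>As a side benefit, that naming convention is machine-readable. A small illustrative helper (not part of our actual tooling, and the ID format is assumed to be alphanumeric) could pull the ClickUp ID back out of a PR title, for example for release notes or audit scripts:</p>

```typescript
// Hypothetical helper: extract the ClickUp task ID from a PR title of the
// form "CU-{{id}}/{{title}}/{{owner}}". Returns null when the title does
// not follow the convention.
function extractClickUpId(prTitle: string): string | null {
  const match = /^CU-([a-z0-9]+)\//i.exec(prTitle);
  return match ? match[1] : null;
}
```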
<h2 id="heading-what-to-look-for">What to Look For?</h2>
<h3 id="heading-clear-purpose-intent">Clear purpose / intent:</h3>
<ul>
<li><p>Unclear code: there are many reasons something could be unclear; an anonymous complex regexp, poor naming, mega-functions, lack of commenting, etc. Think as much as you can about the next dev, or even future you! Code is read 10x more than it's written.</p>
</li>
<li><p>Unnecessary code: for example, over-abstraction to the point of getting in the way, code broken out into a function for no reason, etc; TypeScript provides a slew of functionality (utility types, generics, type literals, etc) to prevent things getting complex</p>
</li>
<li><p>Appropriateness: does the fix make sense in the context of the wider code / service / api / module / feature? Sometimes diving into a small fix can miss the wider context (this should also be addressed at the planning stage)</p>
</li>
<li><p>Code that misses the point: is this a problem that even needs to be solved? Less code is better than more code, and no code is better than less code; review the big picture as well as the small picture to make sure we're not writing code that could remain unwritten</p>
</li>
</ul>
<h3 id="heading-reasonable-code-reuse">Reasonable code reuse:</h3>
<ul>
<li><p>Repeated code: does it make sense to abstract it away to enable cleaner and more maintainable code?</p>
</li>
<li><p>Before adding a new dependency, check whether a similar library is already being used</p>
</li>
<li><p>Before creating a utility function, check whether a relevant function already exists</p>
</li>
</ul>
<h3 id="heading-consistency">Consistency:</h3>
<ul>
<li><p>Ensure file and directory naming is consistent with repo</p>
</li>
<li><p>Ensure component and classnames are consistent with repo</p>
</li>
<li><p>Ensure folder structure is consistent with repo</p>
</li>
<li><p>One class or component per file for easiest searchability</p>
</li>
<li><p>Clearly / consistently-named variables (where possible!) especially across related functions</p>
</li>
</ul>
<h3 id="heading-housekeeping">Housekeeping:</h3>
<ul>
<li><p>Logical / functional errors</p>
</li>
<li><p>Commented code: we can always go back to old code; no need to clutter code base with commented code</p>
</li>
<li><p>Minor cleanups: if something can be cleaned up without affecting the functionality (i.e. risking a new bug); just do it</p>
</li>
</ul>
<h3 id="heading-support">Support:</h3>
<ul>
<li><p>Automated tests: should cover at least the happy path and the most common error paths.</p>
</li>
<li><p>Documentation: were the functions commented helpfully, would they benefit from doc comments, does the feature require additional markdown documentation?</p>
</li>
</ul>
<p>When making a suggestion or criticism, try to include an example or a short code snippet showing a better way to do things.</p>
<h3 id="heading-some-specific-things-to-look-for">Some Specific Things to Look For</h3>
<ul>
<li><p>For lists of items, use a <code>key</code> attribute for each one</p>
</li>
<li><p>Always use named exports to remove ambiguity when importing</p>
</li>
<li><p>Poor typing or <code>@ts-ignore</code>: the compiler is your friend; understand any errors and mitigate them</p>
</li>
<li><p>Missing types / generics: many functions can be typed using generics; skipping this is akin to typing as <code>any</code>, so you lose type safety</p>
</li>
</ul>
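<p>To make the generics point concrete, here's a short sketch (ours, not from any of our repos) of why a generic signature beats <code>any</code>: the caller's element type flows through the function, so the compiler keeps checking the code downstream.</p>

```typescript
// Generic: T is inferred from the arguments, so the return type is precise.
function firstOrDefault<T>(items: T[], fallback: T): T {
  return items.length > 0 ? items[0] : fallback;
}

const n = firstOrDefault([1, 2, 3], 0);        // inferred as number
const s = firstOrDefault<string>([], 'none');  // typed as string
// Had the signature been (items: any[], fallback: any): any, both results
// would be `any` and the compiler could no longer catch misuse.
```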
<h2 id="heading-who-should-merge">Who Should Merge?</h2>
<p>A PR should signal an intent that you want to merge your code. As the author, if the code is not ready to merge, you should add a label like DON'T MERGE or WIP, or leave it as a draft PR. If a reviewer approves your PR, all CI checks pass, it has been tested, and the reviewer hasn't made any comments or suggestions, then the reviewer should just merge it in.</p>
<p>If the reviewer approves, but leaves some small comments/suggestions, the reviewer should leave it to the author to either merge on their own or address the comments first.</p>
<p>Reviewers can also turn on 'auto merge' to merge a PR as soon as all build checks (tests, linting, approvals, conflicts) are resolved.</p>
<h2 id="heading-review-prs-in-a-timely-fashion">Review PRs in a Timely Fashion</h2>
<p>If everyone on the team reviews PRs at least when they start work for the day and when they finish work for the day, we will ensure that no PR languishes in a non-reviewed state for more than 12 hours or so.</p>
<p>The faster we can review each other's code the better as we want to avoid developer context switching as much as possible. Also, the faster we review each other's code, the less likely it is that there will be conflicts to fix, which can be a source of frustration and bugs. Of course, if you are in deep focus, you should not drop that focus to review code unless it's an emergency.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>This is a living document. Feel free to comment or make suggestions. Looking forward to building an awesome engineering team and an awesome product with you!</p>
]]></content:encoded></item><item><title><![CDATA[Serving Stable Diffusion XL on Google Cloud]]></title><description><![CDATA[Serving outputs from generative AI models is notoriously resource-intensive1. At Space Runners, we are primarily focused on diffusion models for generating or modifying images, which we apply to fashion items. Visitors should be excited to create wit...]]></description><link>https://blog.ablo.ai/serving-stable-diffusion-xl-on-google-cloud</link><guid isPermaLink="true">https://blog.ablo.ai/serving-stable-diffusion-xl-on-google-cloud</guid><category><![CDATA[AI]]></category><category><![CDATA[gke]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[Diffusion Models ]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[Steven J. Munn]]></dc:creator><pubDate>Mon, 10 Feb 2025 15:52:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738886339457/36f16bc8-b174-4100-bf63-2095c9008673.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Serving outputs from generative AI models is notoriously resource-intensive<a class="post-section-overview" href="#ft1"><sup>1</sup></a>. At Space Runners, we are primarily focused on diffusion models for generating or modifying images, which we apply to fashion items. Visitors should be excited to create with our platform, so we need our models to be customized (through fine-tuning or network architecture modifications) to produce results that stand out. However, we also need to ensure that our model outputs are served quickly<a class="post-section-overview" href="#ft2"><sup>2</sup></a>.</p>
<p>Setting up generative AI for Space Runners involves many steps and technologies. <a target="_blank" href="https://blog.spacerunners.com/our-tech-stack-at-space-runners">A previous blog post on our tech stack</a> mentions some of the tools we are using at the code level. Today, we will focus more on the MLOps/DevOps aspects of how things work in Google Kubernetes. We will first explain why we chose Kubernetes over other GCP products for hosting our models. Then, we will dive into the specifics of setting up generative AI infrastructure with Google Kubernetes Engine (GKE).</p>
<h1 id="heading-model-serving-with-google-cloud-platform">Model Serving with Google Cloud Platform</h1>
<p>Google offers three major products for creating images with Generative AI<a class="post-section-overview" href="#ft3"><sup>3</sup></a>. In order of least to most customizable they are:</p>
<ol>
<li><p>Google Imagen/Gemini</p>
</li>
<li><p>Vertex AI</p>
</li>
<li><p>Google Kubernetes Engine (GKE)</p>
</li>
</ol>
<p>At the time we set up our Ablo website, <strong>Imagen/Gemini</strong> did not produce images of sufficient quality for our use case in fashion. The images contained artifacts, did not adhere closely to prompts, and could not be styled with training data. This may change in the future, but for the time being, it’s not a viable option for us.</p>
<h2 id="heading-vertex-ai-vs-manual-setup-in-gke">Vertex AI vs Manual Setup in GKE</h2>
<p>Although Vertex AI is a powerful platform that abstracts away a lot of GKE’s complexity, it still has some severe limitations, and we ultimately chose to set up our models manually in GKE. For quick reference, here is a list of pros and cons that summarize our experience with the two services.</p>
<h3 id="heading-vertex-ai-pros">Vertex AI Pros</h3>
<ul>
<li><p>Fully managed node scaling</p>
</li>
<li><p>Built-in latency monitoring</p>
</li>
<li><p>Automatic request queue handling</p>
</li>
</ul>
<h3 id="heading-vertex-ai-cons">Vertex AI Cons</h3>
<ul>
<li><p>Poor visibility into debugging logs</p>
</li>
<li><p>Arbitrary and non-configurable timeouts and size limits</p>
</li>
</ul>
<h3 id="heading-gke-pros">GKE Pros</h3>
<ul>
<li><p>Proven track record with 10 years of general availability</p>
</li>
<li><p>Popular with lots of support</p>
</li>
<li><p>Versatile</p>
</li>
</ul>
<h3 id="heading-gke-cons">GKE Cons</h3>
<ul>
<li><p>Relatively complex</p>
</li>
<li><p>Requires a lot of manual setup and monitoring</p>
</li>
</ul>
<p>The Vertex AI documentation does not make it very clear that deploying custom Docker containers is an option for inference. <a target="_blank" href="https://cloud.google.com/vertex-ai/docs/training/containers-overview">The documentation for this</a> is in the "custom training" section.</p>
<p>For our particular use cases, the biggest limitation we ran into with Vertex AI is that we could not configure readiness checks and timeouts to give our Docker containers enough time to load the neural network checkpoint data (see the next section for more details on that topic). Given the poor visibility into Vertex AI’s logs, we switched to GKE rather than go back and forth with GCP support.</p>
<h1 id="heading-google-kubernetes-setup">Google Kubernetes Setup</h1>
<h2 id="heading-storage">Storage</h2>
<p>Checkpoints for SDXL models, including Control Nets, LoRAs for styling, IP-Adapters, and other components, range between 15GB and 70GB. Including these files in a Docker image will result in long build, push, and startup times. The best approach to handle this is to store all the checkpoints in a folder and add that folder to a <code>.dockerignore</code> file. Then, add code to load the checkpoints from a storage bucket once the Docker container starts up in GKE or another context.</p>
<p>At a high level, the process looks like the figure below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738889664483/7e4c09c3-24f6-4d19-8ff6-8c5ddeae7881.png" alt class="image--center mx-auto" /></p>
<p>The GCP Artifact Registry stores the Docker images for the model servers. GKE takes these images to spin up pods. After the pods have started up, they fetch model weights and other data from cloud storage using a <code>gcloud storage cp</code> command.</p>
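<p>To make the startup flow concrete, here is a minimal sketch (in Python, with hypothetical bucket and directory paths) of the logic a pod can run before loading its models: shell out to <code>gcloud storage cp</code> on a cold start, and skip the download when the weights are already on disk.</p>

```python
import subprocess
from pathlib import Path

# Hypothetical locations -- substitute your own bucket and mount point.
CHECKPOINT_BUCKET = "gs://example-model-checkpoints/sdxl"
CHECKPOINT_DIR = Path("/models/checkpoints")


def sync_command(bucket: str, dest: Path) -> list[str]:
    """Build the gcloud invocation that copies the checkpoint files over."""
    return ["gcloud", "storage", "cp", "--recursive", f"{bucket}/*", str(dest)]


def ensure_checkpoints(bucket: str = CHECKPOINT_BUCKET,
                       dest: Path = CHECKPOINT_DIR) -> bool:
    """Download checkpoints on first startup; return True if a copy ran."""
    if any(dest.glob("*")):
        return False  # weights already present, e.g. after a container restart
    dest.mkdir(parents=True, exist_ok=True)
    subprocess.run(sync_command(bucket, dest), check=True)
    return True
```

<p>The model server would call <code>ensure_checkpoints()</code> before loading any pipeline, and the readiness probe only passes once that call returns.</p>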
<h2 id="heading-request-flow">Request Flow</h2>
<p>Information about image styles, and the prompt engineering built around them, needs to be easy to edit. The backend API service handles fetching this type of information from our Postgres database (see our <a target="_blank" href="https://blog.spacerunners.com/our-tech-stack-at-space-runners">tech stack</a>). The backend service then creates a request and passes it along to GKE.</p>
<p><mark>It’s also important to note that the ML serving instances on GKE are much more expensive to run at $5+ per hour versus the machines that run the backend services, so we want to minimize compute time on the ML instances as much as possible.</mark></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738888178264/2f7cb822-1fce-42d0-9be9-c4eeb996e522.png" alt class="image--center mx-auto" /></p>
<p>In the diagram above, numbers inside the arrows indicate the order of execution. For some workflows, like our photo transformer, we need to use secondary services such as an image captioning model. Since these are only ever called by the generative AI service, they are not exposed outside of GKE.</p>
<h2 id="heading-observability-and-alerting">Observability and Alerting</h2>
<p>GKE offers versatile observability and alerting tools. For machine learning inference, the most useful so far have been Cloud Trace and log-based metrics. Cloud Trace allows us to look at the run time of requests and then investigate all the logs associated with a request. To achieve this, we use <a target="_blank" href="https://opentelemetry.io/docs/languages/python/">Python’s OpenTelemetry</a>.</p>
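<p>As a rough illustration of what this instrumentation gives us (a dependency-free sketch; the real code would use <code>tracer.start_as_current_span</code> from the OpenTelemetry SDK and export spans to Cloud Trace), each stage of a request can be wrapped in a timed span:</p>

```python
import time
from contextlib import contextmanager

# Collected (name, duration_seconds) pairs; in production the OpenTelemetry
# exporter ships spans to Cloud Trace instead of a list.
SPANS: list[tuple[str, float]] = []


@contextmanager
def span(name: str):
    """Time a named stage of a request, mimicking start_as_current_span."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append((name, time.perf_counter() - start))


def handle_request() -> None:
    with span("request"):
        with span("load_inputs"):
            time.sleep(0.01)  # stand-in for fetching images and prompts
        with span("inference"):
            time.sleep(0.02)  # stand-in for the diffusion model call


handle_request()
for name, seconds in SPANS:
    print(f"{name}: {seconds * 1000:.1f} ms")
```

<p>Nested spans are what let us see, per request, how much time went to fetching inputs versus running the model.</p>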
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738895608176/dee5389b-9949-47c8-96fa-aa216ef2ce2e.png" alt class="image--center mx-auto" /></p>
<p>Log-based metrics help us set up dashboards and watch the distribution of inference times. The 95th and 99th percentiles give us an idea of what inference times look like in the worst-case scenarios.</p>
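<p>For reference, the percentile math behind those dashboard lines is easy to reproduce. A sketch with made-up inference times (in seconds) using only Python's standard library:</p>

```python
import statistics

# Hypothetical per-request inference times in seconds, as a log-based
# metric would collect them; two slow outliers dominate the tail.
inference_times = [4.2, 5.1, 4.8, 6.0, 5.5, 4.9, 12.5, 5.0, 5.2, 4.7,
                   5.3, 4.6, 5.8, 5.4, 19.0, 4.9, 5.1, 5.0, 4.8, 5.6]

# quantiles(n=100) returns the 99 percentile cut points; index 94 is the
# 95th percentile and index 98 the 99th.
cuts = statistics.quantiles(inference_times, n=100, method="inclusive")
p50 = statistics.median(inference_times)
p95, p99 = cuts[94], cuts[98]

print(f"p50: {p50:.1f}s  p95: {p95:.1f}s  p99: {p99:.1f}s")
```

<p>The median looks healthy while the tail quantiles expose the slow requests, which is exactly why we watch p95/p99 rather than averages.</p>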
<h2 id="heading-choosing-the-right-hardware">Choosing the Right Hardware</h2>
<p>Our goal is to get inference times on the order of 10 seconds—the time at which most people will lose focus to something else<a class="post-section-overview" href="#ft2"><sup>2</sup></a>. Running SDXL, or any large diffusion model without distillation, in that time frame requires a GPU or more specialized hardware, such as TPUs and FPGAs. Our customization requirements, however, make it very difficult to compile our workflows for more specialized hardware. This makes Nvidia GPUs our hardware of choice. It is also part of the reason we went with GCP rather than AWS: the options for mid-range data center GPUs are more numerous.</p>
<p>SDXL, with all of the control nets, LoRAs, and IP-adapters that we load, uses anywhere from 20 to 40 GB of GPU VRAM. So the GPUs we have to choose from are:</p>
<ul>
<li><p>Nvidia L4 (24 GB VRAM)</p>
</li>
<li><p>Nvidia A100 (40 GB VRAM)</p>
</li>
<li><p>Nvidia A100 “Ultra” (80 GB VRAM)</p>
</li>
</ul>
<p>The L4 is much cheaper than the A100, especially on a cost per GB of VRAM basis. Anything we can fit on the L4, we run on an L4; however, at least two of our major workflows require more VRAM.</p>
<p>Nvidia GPU sharing comes in handy here because it makes it possible to have multiple GKE pods share a graphics card. For all of the extra-large workflows that do not fit on the L4, we have the 80 GB A100 “Ultra” configured to be shared by two pods, each getting 40 GB of VRAM.</p>
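<p>The placement rule described above can be sketched as a small routing function (the GPU labels here are illustrative, not exact GCP accelerator SKU names):</p>

```python
def pick_gpu(vram_needed_gb: float) -> tuple[str, int]:
    """Pick a card for a workflow and how many pods can share it.

    Mirrors the rule in the text: anything that fits in an L4's 24 GB
    runs on an L4; larger workflows go to an 80 GB A100 "Ultra" that is
    shared by two pods of 40 GB each.
    """
    if vram_needed_gb <= 24:
        return ("nvidia-l4", 1)
    if vram_needed_gb <= 40:
        return ("nvidia-a100-80gb", 2)  # two 40 GB slices, one per pod
    raise ValueError("workflow exceeds a single 40 GB slice")


# Example: a lean pipeline vs. one stacked with control nets and adapters.
print(pick_gpu(20))  # fits on the cheap L4
print(pick_gpu(38))  # needs a shared A100 "Ultra"
```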
<h1 id="heading-conclusion">Conclusion</h1>
<p>Setting up a generative AI inference pipeline in Google Cloud’s Kubernetes Engine was a challenge that spanned several weeks (plus months of maintenance and refinement). It has proven itself a powerful and reliable platform, giving us all the options we need to create images for our platform.</p>
<p>After deciding on the architecture and systems for our workflows, the next challenge is to write the code and build the Docker containers. This is something we will examine in a future blog post, so stay tuned!</p>
<hr />
<ol>
<li><p>Hugo Huang, Harvard Business Review. <a target="_blank" href="https://hbr.org/2023/11/what-ceos-need-to-know-about-the-costs-of-adopting-genai">What CEOs Need to Know About the Costs of Adopting GenAI</a></p>
</li>
<li><p>Jakob Nielsen, UX Tigers Blog. <a target="_blank" href="https://www.uxtigers.com/post/ai-response-time">The Need for Speed in AI</a></p>
</li>
<li><p>Google Cloud. <a target="_blank" href="https://cloud.google.com/products/ai">AI and machine learning products</a></p>
</li>
</ol>
<hr />
]]></content:encoded></item><item><title><![CDATA[How to create a canvas with a limited drawing area on a background image in Fabric.js]]></title><description><![CDATA[Introduction
SpaceRunners has a design tool where artists can create custom designs in a limited drawing area on any physical object. For example, the SpaceRunners team can upload an image of a T-Shirt and set it as a template. Inside this T-Shirt yo...]]></description><link>https://blog.ablo.ai/how-to-create-a-canvas-with-a-limited-drawing-area-on-a-background-image-in-fabricjs</link><guid isPermaLink="true">https://blog.ablo.ai/how-to-create-a-canvas-with-a-limited-drawing-area-on-a-background-image-in-fabricjs</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[canvas]]></category><category><![CDATA[fabricjs]]></category><category><![CDATA[Design]]></category><dc:creator><![CDATA[Mihovil Kovacevic]]></dc:creator><pubDate>Thu, 05 Dec 2024 15:57:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733412171741/cb86e29d-c16b-4463-93ba-62b0dd43d817.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>SpaceRunners has a design tool where artists can create custom designs in a limited drawing area on any physical object. For example, the SpaceRunners team can upload an image of a T-Shirt and set it as a template. Inside this T-Shirt you can only design on a predefined area as shown in the image below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733411960231/6f827c7b-e6f1-4c23-afa6-0440f4cad413.png" alt class="image--center mx-auto" /></p>
<p>This image shows a part of the SpaceRunners admin where one can position the drawing area inside the template and set its width and height. This area makes sense in the context of bringing these designs to the real world, for example printing the T-Shirt. Design elements that go outside of this area will be cut off when the design is exported.</p>
<p>On the frontend, SpaceRunners is using <a target="_blank" href="https://fabricjs.com/">Fabric.js</a> to power its design tool. Fabric.js provides an object model on top of the HTML <strong>canvas</strong> element to help build canvas experiences faster, with less code and in a more maintainable way. More information can be found in their docs. This article describes a UX flow that achieves the requirements explained above and shows how to implement it with Fabric.js. There's no straightforward API for this, and figuring it out took some time, so we hope this article will be helpful to others.</p>
<h3 id="heading-ui-considerations-to-create-a-seamless-user-experience">UI considerations to create a seamless user experience</h3>
<p>The canvas has to span the full width and height of its container inside the editor. In the image below the canvas is the entire area in gray, below the editor tools and next to the image generation tools.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733412263297/636b7206-d74f-4bf6-9230-ee1ef09fc328.png" alt class="image--center mx-auto" /></p>
<p>This allows for two things:</p>
<ol>
<li><p>When an object on the canvas goes outside of the drawing area, the object itself is cut off, but the resize and rotate controls should still be visible. Since those controls are canvas-based, everything around the drawing area also has to be part of the canvas.</p>
</li>
<li><p>The canvas can be zoomed in and out where the entire background object and the design elements zoom together.</p>
</li>
</ol>
<p>An easy-to-implement approach would be to have the background image as an HTML element and then absolutely position the drawing area on it using the coordinates from the admin. In this case, only the drawing area would be a canvas. Looking at this approach through the lens of the two requirements above, the controls wouldn't be visible outside of the drawing area and the background image wouldn't zoom together with the canvas.</p>
<h3 id="heading-the-approach-to-achieve-the-desired-result">The approach to achieve the desired result</h3>
<ol>
<li><p>Set a background image on the canvas using its <code>setBackgroundImage</code> function. This makes the background image a part of the canvas, so it can be zoomed in and out together with everything else.</p>
</li>
<li><p>Add a Fabric.js clip path to the canvas. A Fabric.js canvas object has a <code>clipPath</code> property where any shape can be placed. When objects on the canvas go outside of the clip path, they're clipped.</p>
</li>
<li><p>Add an overlay image to the canvas using the <code>setOverlayImage</code> function from Fabric.js.</p>
</li>
<li><p>Set an inverted clip path on this overlay image. An inverted clip path cuts off the shape defined for it, keeping everything outside of it visible. The reason for this inverted clip path is to prevent the original clip path from also cutting off the background image.</p>
</li>
</ol>
<p>Here's the code:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> setBackgroundImage = <span class="hljs-function">(<span class="hljs-params">canvas, img, isMobile</span>) =&gt;</span> {
  <span class="hljs-comment">// The clipPath object here has been previously set based on the settings for the template</span>
  <span class="hljs-keyword">const</span> { clipPath, width, height, wrapperEl } = canvas;

  <span class="hljs-comment">// This calculates the background image width to fit inside the container</span>
  <span class="hljs-keyword">const</span> { clientHeight, clientWidth } = wrapperEl;

  <span class="hljs-keyword">const</span> aspectRatio = clientWidth / clientHeight;

  <span class="hljs-keyword">const</span> imageWidth =
    (canvas.width *
      (isMobile
        ? GARMENT_IMAGE_MOBILE_WIDTH
        : clientHeight * PERCENTAGE_OF_CONTAINER_HEIGHT * aspectRatio)) /
    clientWidth;

  img.scaleToWidth(imageWidth);

  img.set({
    left: width / <span class="hljs-number">2</span>,
    top: height / <span class="hljs-number">2</span>,
    originX: <span class="hljs-string">'center'</span>,
    originY: <span class="hljs-string">'center'</span>,
    selectable: <span class="hljs-literal">false</span>,
    centeredScaling: <span class="hljs-literal">true</span>,
    erasable: <span class="hljs-literal">false</span>,
    excludeFromExport: <span class="hljs-literal">true</span>,
  });

  canvas.renderAll();

  <span class="hljs-keyword">const</span> oldClipPath = { ...clipPath };

  canvas.clipPath.top = clipPath.templateBasedTop;

  <span class="hljs-comment">// This positions the clip path based on current container size because the canvas is responsive to container dimensions changes</span>
  scaleObjectTops(canvas, oldClipPath);

  canvas.setBackgroundImage(img).renderAll();

  <span class="hljs-comment">// This adds the overlay background image which prevents the original background image from being clipped by the clip path</span>
  img.clone(<span class="hljs-function">(<span class="hljs-params">copy</span>) =&gt;</span> {
    <span class="hljs-keyword">const</span> clipPath2 = <span class="hljs-keyword">new</span> fabric.Rect({
      height: clipPath.height - <span class="hljs-number">2</span>,
      selectable: <span class="hljs-literal">false</span>,
      stroke: <span class="hljs-string">'transparent'</span>,
      strokeWidth: <span class="hljs-number">0</span>,
      width: clipPath.width - <span class="hljs-number">2</span>,
      left: clipPath.left + <span class="hljs-number">1</span>,
      top: clipPath.top + <span class="hljs-number">1</span>,
      inverted: <span class="hljs-literal">true</span>,
      absolutePositioned: <span class="hljs-literal">true</span>,
      excludeFromExport: <span class="hljs-literal">true</span>,
    });

    copy.set({
      clipPath: clipPath2,
    });
    canvas.setOverlayImage(copy).renderAll();
  });
};
</code></pre>
<p>Here’s the end result visually:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733412322251/1706d0fe-d24f-4a1c-a572-a9e603cee061.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>The code for your app will look different depending on how complex you want your experience to be, but you can reuse the code provided here to achieve this particular pattern with minor adjustments.</p>
<p>Feel free to go try it out at <a target="_blank" href="https://ablo.ai">ablo.ai</a>. We’re currently offering 1000 free credits to use our AI design tools.</p>
<p>In future articles we'll explain more Fabric.js concepts and how to achieve other complex behaviors.</p>
]]></content:encoded></item><item><title><![CDATA[Our Tech Stack at Space Runners]]></title><description><![CDATA[As a seed stage startup, our top priority in choosing technology is the ability to move fast and iterate quickly. We must be able to test new ideas in market quickly and be willing to drop them and try something else if it’s not working. Additionally...]]></description><link>https://blog.ablo.ai/our-tech-stack-at-space-runners</link><guid isPermaLink="true">https://blog.ablo.ai/our-tech-stack-at-space-runners</guid><category><![CDATA[ablo]]></category><category><![CDATA[spacerunners]]></category><category><![CDATA[TechStack]]></category><category><![CDATA[technology]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Karim Varela]]></dc:creator><pubDate>Thu, 26 Sep 2024 17:51:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/EUsVwEOsblE/upload/39cba5fd3571d4d164182ca3711a6d21.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a seed stage startup, our top priority in choosing technology is the ability to move fast and iterate quickly. We must be able to test new ideas in market quickly and be willing to drop them and try something else if it’s not working. Additionally, as an AI image generation company, we must choose flexible and customizable tools that allow us to fully experiment with AI and allow our Users to fully express themselves. All the while, we need to balance this with costs.</p>
<p>Here’s our opinionated tech stack, starting from the frontend and going all the way back to our image generation services.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727373005273/3cfc6fee-cf16-48a0-a458-54e8dced9ab2.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-frontend-web">Frontend (web)</h1>
<p>We have 2 websites we’re actively maintaining and updating:</p>
<ol>
<li><p><a target="_blank" href="https://ablo.ai">ablo.ai</a>: Our core image generation and community design (and soon to be e-comm) product.</p>
</li>
<li><p><a target="_blank" href="https://spacerunners.com">spacerunners.com</a>: Our informational, inspirational site for the company.</p>
</li>
</ol>
<h2 id="heading-stack">Stack</h2>
<ul>
<li><p><strong>Analytics: Segment → Amplitude</strong><br />  <a target="_blank" href="https://segment.com/">Segment</a> is great for piping your user data and analytics to multiple places and <a target="_blank" href="https://amplitude.com/">Amplitude</a> is great for capturing the important events for your User’s behavior.</p>
</li>
<li><p><strong>Canvas manipulation: Fabric.js</strong><br />  <a target="_blank" href="http://fabricjs.com/">Fabric</a> makes it easy to work with the HTML Canvas</p>
</li>
<li><p><strong>Customer Services: Intercom</strong><br />  We integrate <a target="_blank" href="https://www.intercom.com/">Intercom</a> into our sites so customers can easily chat with us.</p>
</li>
<li><p><strong>Deployment: Render</strong><br />  <a target="_blank" href="https://render.com/">Render</a> is free to host and deploy static sites and also gives us preview builds on pull requests, which is super handy for testing before we merge in code.</p>
</li>
<li><p><strong>Error reporting: Sentry</strong><br />  <a target="_blank" href="https://sentry.io">Sentry</a> is great for catching issues in production and we use it full-stack as well.</p>
</li>
<li><p><strong>Framework: React.js</strong><br />  In our opinion, <a target="_blank" href="https://react.dev/">React</a> is still the easiest framework to build fast in and has a huge community and toolset supporting it.</p>
</li>
<li><p><strong>Language: TypeScript</strong><br />  We are full-stack <a target="_blank" href="https://www.typescriptlang.org/">TypeScript</a>. This allows our FE developers to more easily work on the BE.</p>
</li>
<li><p><strong>Real-time: Pubnub</strong><br />  <a target="_blank" href="https://www.pubnub.com/">Pubnub</a> provides a simple API to get real-time notifications, such as when credits are used on your account.</p>
</li>
<li><p><strong>Styling: Chakra UI</strong><br />  <a target="_blank" href="https://v2.chakra-ui.com/">Chakra</a> is a full UI and styling library that enables us to build and style components super quickly (once you learn the basics :) )</p>
</li>
</ul>
<h1 id="heading-services-backend-api">Services Backend (API)</h1>
<p>Our Services BE powers our public API used by Clients all over the world in their design tools and also internally for us in ablo.ai.</p>
<h2 id="heading-stack-1">Stack</h2>
<ul>
<li><p><strong>Analytics: Segment → Amplitude</strong><br />  <a target="_blank" href="https://segment.com">Segment</a> is great for piping your user data and analytics to multiple places and <a target="_blank" href="https://amplitude.com/">Amplitude</a> is great for capturing the important events for your User’s behavior.</p>
</li>
<li><p><strong>API Docs: Readme</strong><br />  <a target="_blank" href="https://readme.com">Readme</a> integrates nicely with our OpenAPI definitions that are generated automatically from our annotations in our Nest app.</p>
</li>
<li><p><strong>Cache: Redis</strong><br />  We cache responses in <a target="_blank" href="https://redis.io/">Redis</a> for some of our heavier queries where the response doesn’t change often. This speeds up our system and reduces the load on our Postgres DB.</p>
</li>
<li><p><strong>Deployment: Render</strong><br />  <a target="_blank" href="https://render.com">Render</a> makes it really easy to do CI/CD and includes our Postgres and Redis stores as well.</p>
</li>
<li><p><strong>DNS: Cloudflare</strong><br />  <a target="_blank" href="https://cloudflare.com">Cloudflare</a> provides a nice proxy to all our endpoints and also provides out of the box DDoS protection.</p>
</li>
<li><p><strong>E-comm: Shopify</strong><br />  <a target="_blank" href="https://shopify.com">Shopify</a> enables us to easily set up a headless store, integrates nicely with our printing and shipping partner (Printful), and handles all the payments for us.</p>
</li>
<li><p><strong>Error reporting: Sentry</strong><br />  We use <a target="_blank" href="https://sentry.io">Sentry</a> full-stack for error reporting to catch issues in production.</p>
</li>
<li><p><strong>File store: GCP Storage</strong><br />  We store images mostly in <a target="_blank" href="https://cloud.google.com/storage">GCP Storage</a> as that’s where our image generation workflows (e.g. Image Maker, Fontmaker, and Photo Transformer) live. GCP Storage is also great because they have a feature that automatically moves files to cold storage if they haven’t been used in a while.</p>
</li>
<li><p><strong>Framework: Nest.js</strong><br />  <a target="_blank" href="https://nestjs.com/">Nest</a> gives us an easy to use modular framework for cleanly separating services in our monolithic BE, defining RESTful endpoints, generating API docs, and running scheduled jobs.</p>
</li>
<li><p><strong>Language: TypeScript</strong><br />  We are full-stack <a target="_blank" href="https://www.typescriptlang.org/">TypeScript</a>. This allows our BE devs to more easily work on the FE.</p>
</li>
<li><p><strong>Printing &amp; Shipping: Printful</strong><br />  <a target="_blank" href="https://printful.com">Printful</a> has a huge catalogue and enables us to send custom designs via API to get printed and shipped.</p>
</li>
<li><p><strong>Real-time: Pubnub</strong><br />  We call a simple <a target="_blank" href="https://www.pubnub.com/">Pubnub</a> API to notify all Users when credits have been used on their Client.</p>
</li>
<li><p><strong>Email: Sendgrid</strong><br />  We use <a target="_blank" href="https://sendgrid.com">Sendgrid</a> for sending transactional emails from our System via API.</p>
</li>
<li><p><strong>Subscriptions: Stripe</strong><br />  <a target="_blank" href="https://stripe.com">Stripe</a> enables us to easily set up monthly credit subscriptions with overage charges and handles all the billing for us.</p>
</li>
<li><p><strong>Transactional data store: Postgres</strong><br />  <a target="_blank" href="https://www.postgresql.org/">Postgres</a> is a tried and true, scalable, relational database. As we’ve scaled, we have had some performance issues with complex queries involving a lot of joins, and have had to make optimizations here and there to our queries and call patterns, but that’s part of the game.</p>
</li>
</ul>
<h1 id="heading-machine-learning-backend-ai">Machine Learning Backend (AI)</h1>
<p>We use machine learning / AI to do a number of things, mostly related to image generation and manipulation:</p>
<ul>
<li><p>Image Maker: Our text to image service</p>
</li>
<li><p>Font Maker: Our text to graphic font service</p>
</li>
<li><p>Photo Transformer: Our image to image service</p>
</li>
<li><p>Background removal</p>
</li>
<li><p>Upscale</p>
</li>
</ul>
<p>We’re always experimenting with the best way to do these things and the landscape is changing rapidly under our feet, but our current stack is something like this:</p>
<ul>
<li><p><strong>Background removal: BiRefNet</strong><br />  <a target="_blank" href="https://www.birefnet.top/">BiRefNet (Bilateral Reference Network)</a> is a performant, open-source background removal library that doesn’t need a GPU.</p>
</li>
<li><p><strong>Deployment: Google Kubernetes Engine</strong><br />  <a target="_blank" href="https://cloud.google.com/kubernetes-engine/">GKE</a> enables us to build our custom AI workflows in containers and utilize GPUs from GCP.</p>
</li>
<li><p><strong>Image generation engines: Stable Diffusion and Flux</strong><br />  <a target="_blank" href="https://stability.ai/">Stable Diffusion</a> gives us the best combination of customizability and quality. <a target="_blank" href="https://blackforestlabs.ai/">Flux</a> is a great combination of speed and quality.</p>
</li>
<li><p><strong>Image storage: GCP</strong> <strong>Storage</strong><br />  <a target="_blank" href="https://cloud.google.com/storage">GCP Storage</a> is great because they have a feature that automatically moves files to cold storage if they haven’t been used in a while.</p>
</li>
<li><p><strong>Language: Python</strong><br />  All the hip machine learning and AI libraries are written in <a target="_blank" href="https://www.python.org/">Python</a>. It also has image manipulation libraries.</p>
</li>
<li><p><strong>Models: Hugging Face</strong><br />  <a target="_blank" href="https://huggingface.co/">Hugging Face</a> has an easy to use repository of models to experiment with and a great community.</p>
</li>
<li><p><strong>Training: Replicate</strong><br />  For custom style training, we use <a target="_blank" href="https://replicate.com/">Replicate</a> to train LoRAs and do inference.</p>
</li>
<li><p><strong>Upscale: SUPIR</strong><br />  <a target="_blank" href="https://supir.xpixel.group/">SUPIR (Scaling-UP Image Restoration)</a> is a great open-source library for upscaling, as it enables us to upscale to 16MP for high-quality printing while preserving details from the original image.</p>
</li>
</ul>
<p>That’s a wrap! Let me know if there’s anything here you’d like to learn more about, and we’ll write a follow up post and go in depth.</p>
]]></content:encoded></item><item><title><![CDATA[Intro to Space Runners]]></title><description><![CDATA[Welcome to the Space Runners tech blog!
Space Runners is a fashion-tech platform revolutionizing how you design, collaborate, and launch fashion collections across both physical and digital realms. We do this by using modern AI and web3 technologies....]]></description><link>https://blog.ablo.ai/intro-to-space-runners</link><guid isPermaLink="true">https://blog.ablo.ai/intro-to-space-runners</guid><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[AI tools for image creation]]></category><category><![CDATA[Web3]]></category><category><![CDATA[fashion]]></category><category><![CDATA[Fashion Industry]]></category><category><![CDATA[fashion tech]]></category><dc:creator><![CDATA[Karim Varela]]></dc:creator><pubDate>Fri, 20 Sep 2024 18:48:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726857980726/55d729b8-af1e-4404-ad62-abdc576c5548.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to the Space Runners tech blog!</p>
<p>Space Runners is a fashion-tech platform revolutionizing how you design, collaborate, and launch fashion collections across both physical and digital realms. We do this by using modern AI and web3 technologies.</p>
<p>Our blog will be your window into the innovative work happening at Space Runners. Through these posts, we’ll share insights, behind-the-scenes looks, and exciting updates on what we’re building.</p>
<h1 id="heading-our-mission">Our Mission</h1>
<p>Our mission is simple: to create the most intuitive tools that empower anyone to innovate, collaborate, and grow in the fashion industry.</p>
<p>Our core tool and product is called <a target="_blank" href="https://ablo.ai">ablo.ai</a>. On ablo.ai, we’re building a community powered collaborative design tool that enables anyone to create and sell their own designs.</p>
<h1 id="heading-the-team">The Team</h1>
<p>Our tech team is small but mighty, composed of experts across multiple fields:</p>
<ul>
<li><p>1 CTO</p>
</li>
<li><p>1 FE focused full-stack engineer</p>
</li>
<li><p>1 BE focused full-stack engineer</p>
</li>
<li><p>1 machine learning engineer</p>
</li>
<li><p>1 technical artist</p>
</li>
</ul>
<h1 id="heading-our-core-values">Our Core Values</h1>
<p>We have a set of core values that we try to live every day:</p>
<ul>
<li><p><strong>Creativity</strong><br />  We push the boundaries of creativity with cutting-edge technology, always testing, learning, and evolving.</p>
</li>
<li><p><strong>Collaboration</strong><br />  We value working together wherever we are around the world. We value good communication and we value other people’s perspectives.</p>
</li>
<li><p><strong>Ownership</strong><br />  We give and expect ownership of things end to end. If you don’t like it, speak up!</p>
</li>
<li><p><strong>Inclusivity</strong><br />  We employ empathy in everything we do and ensure everyone is included.</p>
</li>
<li><p><strong>Fun</strong><br />  Enjoy the journey!</p>
</li>
</ul>
<h1 id="heading-thats-a-wrap">That’s a wrap!</h1>
<p>In future posts, we’ll go deep into our tech stack, how we work, and how we utilize AI especially to do magic.</p>
<p>Want to stay in the loop? Subscribe to our <a target="_blank" href="https://blog.spacerunners.com/newsletter">newsletter</a> for the latest updates and insights straight to your inbox!</p>
]]></content:encoded></item></channel></rss>