Software Choices for a Multi-GPU AI Workstation


    Content Creation

    ComfyUI

    Stable Diffusion launched the offline image generation craze, and ComfyUI took that to the next level, allowing users to build image and video workflows in a node-based system.  The system is extensible with custom nodes and can be expanded to accommodate additional steps that tweak and/or improve the image generation.

    A good example is generating an image using the Qwen 2512 image model.  With just three nodes you can provide a text prompt, send it to the image model, and generate an image.  From there you can configure settings such as output size, step count, and any additional LoRAs you want to use in the pipeline.
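ComfyUI workflows can also be driven programmatically: the UI can export a workflow in "API format" as JSON, which you submit to ComfyUI's HTTP endpoint.  Below is a minimal sketch of that three-node idea.  The node graph is illustrative only (a real exported graph also wires in a sampler, VAE decode, and save node), and the checkpoint filename is a placeholder.

```python
import json
import urllib.request

# ComfyUI's default local endpoint for queueing a workflow
COMFY_URL = "http://127.0.0.1:8188/prompt"

def build_workflow(prompt_text: str, width: int = 1024,
                   height: int = 1024) -> dict:
    """Return a ComfyUI-style prompt graph.  Each key is a node id; each
    node names a class_type and wires inputs as [source_node, output_index].
    This is a simplified sketch, not a complete runnable graph."""
    return {
        "1": {"class_type": "CheckpointLoaderSimple",   # the image model
              "inputs": {"ckpt_name": "model.safetensors"}},  # placeholder
        "2": {"class_type": "CLIPTextEncode",           # the text prompt
              "inputs": {"text": prompt_text, "clip": ["1", 1]}},
        "3": {"class_type": "EmptyLatentImage",         # the output size
              "inputs": {"width": width, "height": height, "batch_size": 1}},
    }

def queue_prompt(workflow: dict) -> None:
    """POST the graph to a running ComfyUI instance."""
    data = json.dumps({"prompt": workflow}).encode()
    req = urllib.request.Request(COMFY_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # requires ComfyUI running on port 8188
```

In practice you would export your working graph from the UI rather than hand-build it; the point is that every knob you set in a node (size, steps, LoRAs) is just a field in this JSON.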

    If you change the workflow slightly, you can use an image-editing model and instruct the system to change a provided image and/or combine more than one image together.  To take this a step further, you can chain both workflows: generate an image from text, then combine it with additional images to create a final result.

    I use ComfyUI with OpenWebUI to provide an image generation tool for the lab, allowing me to generate thumbnails or entire scenes based on the stories and articles being generated.

    N8N

    One of the real powers of AI comes when you use it to automate a workflow, transforming your basic chat model into an agent that can do things for you in a repeatable pattern.  A common use is building summaries from a daily news feed.  You simply program N8N with the pattern you want it to follow (which might also be the pattern you follow every morning), and it will step through the tasks and generate whatever you want.

    One of the N8N nodes provides a hook to an AI model; this can be OpenAI or something local like Ollama (what I have been using).  This node can do anything you would normally use Ollama or ChatGPT for, and it becomes just another stop along the workflow.

    N8N is node-based, similar to LangChain, with pre-built nodes you can add to your workflow, or you can create custom ones if you cannot find one that works for you.  Of course, the real power is the ability to automate tasks and even include decision trees, like any application you might build.  Much like with OpenWebUI, you will want to run this project from Docker to facilitate installation and updates.
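For reference, the Docker setup follows n8n's documented pattern: a named volume so your workflows survive updates, and the default port published.  Treat this as a starting-point config fragment, not a hardened deployment.

```shell
# Create a persistent volume so workflows survive container updates
docker volume create n8n_data

# Run n8n on its default port (image name per n8n's Docker docs)
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```

Updating is then just a matter of pulling the newer image and re-running the same command.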

    OpenClaw

    At GTC 2026, Jensen Huang from Nvidia called OpenClaw the operating system for AI, and while I don’t fully agree, he is not wrong either.  OpenClaw is a gateway that manages tool calling from an LLM.  The LLM can be a local model or a cloud-based service, and it can add another dimension to your LLM chats and automation.

    In maybe one of the most dangerous ways.

    By default, OpenClaw has no guardrails and will do things that may surprise you.  Out of the box it is dumb, but during the onboarding process you will establish some basic parameters for it to follow, give it a name, and give it a “soul”.  These results are stored as a series of markdown files that serve as the initial prompt for your LLM.  What is nice about the markdown files is that you can adjust anything you want in them or ask OpenClaw to make the changes for you.

    From there you simply tell it what you want done and the LLM will figure everything else out.  For instance, if you wanted to monitor an RSS feed, you give it the URL and a frequency for checking the feed.  It will then pull down a copy of the RSS feed and provide a summary of the results via chat message, email, or its own web interface.  From there you can add additional instructions and do even more.
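The fetch-and-digest half of that RSS task is simple enough to sketch; the agent's LLM handles the summarizing, but something like the following is what it ends up doing before the model ever sees the feed.  This is an illustrative sketch, not OpenClaw's actual internals.

```python
import xml.etree.ElementTree as ET

def extract_items(rss_xml: str, limit: int = 5) -> list:
    """Pull title/link pairs out of an RSS 2.0 document --
    the raw material an agent would hand to the LLM to summarize."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        })
        if len(items) >= limit:
            break
    return items

def build_digest(items: list) -> str:
    # Plain-text digest suitable for a chat message or email body
    return "\n".join(f"- {i['title']} ({i['link']})" for i in items)
```

Everything past this point — deciding the summary's tone, where to deliver it, what to do next — is where the agent's markdown instructions and the LLM take over.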

    The real danger is that it naturally runs on your computer under your user account.  If OpenClaw determines that it needs to install some software to complete a task, it will do that.  If OpenClaw decides that your email inbox is too full, it might decide to delete things.  There are also security concerns: since OpenClaw can do web and data searches, it might encounter a prompt injection that overrides its basic instructions and gives out sensitive information, or even self-destructs the install.

    Personally, I find OpenClaw to be an extremely helpful “toy” that can deliver real value, but it is no better than talking with a pre-teen.  If you are very careful about your instructions and provide all the right requirements, you will get decent results.  If you are not detailed enough or forget a step, the model will often misinterpret the intent and you’ll spend the next hour trying to fix things.

    I use OpenClaw with my local Ollama server and have had great success with context windows of 128k.  However, once the cache reaches 80%, the quality of the interactions slowly degrades.  The best fix is to make a backup of your work and issue a “/new” command to clear the cache.  This is no different from what we see when our AI discussions get filled with context slop and start producing hallucinations.
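The bookkeeping behind that rule of thumb is trivial: 80% of a 128k window is 102,400 tokens, so a small check like this (my own sketch, not an OpenClaw feature) tells you when it's time to back up and issue “/new”.

```python
def should_clear(used_tokens: int, context_window: int = 128_000,
                 threshold: float = 0.80) -> bool:
    """True once cache usage crosses the point where, in my experience,
    interaction quality starts to degrade."""
    return used_tokens >= context_window * threshold

# 80% of a 128k window is 102,400 tokens
```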

    I have OpenClaw installed on my AI workstation under my user account, but I have limited the interaction to just the OpenClaw web dashboard and OpenClaw workspace.  This lets me monitor what it is doing and limits my exposure.  Combined with my other security protocols, it lets me wade through the hype and find the best way to use it.