Tech News

  • NVIDIA Digital Human Technologies Bring AI Characters to Life

    Leading AI Developers Use Suite of NVIDIA Technologies to Create Lifelike Avatars and Dynamic Characters for Everything From Games to Healthcare, Financial Services and Retail Applications

    SAN JOSE, Calif., March 18, 2024 (GLOBE NEWSWIRE) -- NVIDIA announced today that leading AI application developers across a wide range of industries are using NVIDIA digital human technologies to create lifelike avatars for commercial applications and dynamic game characters. The results are on display at GTC, the global AI conference held this week in San Jose, Calif., and can be seen in technology demonstrations from Hippocratic AI, Inworld AI, UneeQ and more.

    NVIDIA Avatar Cloud Engine (ACE) for speech and animation, NVIDIA NeMo™ for language, and NVIDIA RTX™ for ray-traced rendering are the building blocks that enable developers to create digital humans capable of AI-powered natural language interactions, making conversations more realistic and engaging.
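
    As a rough illustration of how these building blocks fit together, here is a minimal, hypothetical sketch of the listen-think-speak-animate loop described above. None of the function names below are real NVIDIA APIs; each is only a placeholder for the corresponding service the release names (speech recognition, a language model, text-to-speech, audio-driven facial animation).

    ```python
    # Hypothetical sketch of a digital-human interaction loop.
    # None of these functions are real NVIDIA APIs; each stands in for a
    # service named in the press release (ASR, language model, TTS,
    # audio-driven facial animation).

    def transcribe(audio_bytes: bytes) -> str:
        """Placeholder for speech-to-text (an ASR microservice)."""
        return "What are my discharge instructions?"

    def generate_reply(user_text: str, history: list[str]) -> str:
        """Placeholder for a language model that produces the avatar's answer."""
        return f"Here is a summary of your care plan. You asked: {user_text}"

    def synthesize_speech(text: str) -> bytes:
        """Placeholder for text-to-speech; returns audio for playback."""
        return text.encode("utf-8")  # stand-in for audio samples

    def animate_face(audio: bytes) -> list[float]:
        """Placeholder for audio-driven facial animation (blendshape weights)."""
        return [0.0] * 52  # stand-in for one frame of blendshape values

    def interaction_turn(audio_in: bytes, history: list[str]) -> tuple[bytes, list[float]]:
        """One conversational turn: listen, think, speak, animate."""
        user_text = transcribe(audio_in)
        reply = generate_reply(user_text, history)
        history += [user_text, reply]
        speech = synthesize_speech(reply)
        frame = animate_face(speech)
        return speech, frame

    if __name__ == "__main__":
        audio, blendshapes = interaction_turn(b"\x00" * 160, history=[])
        print(len(audio), "bytes of reply audio,", len(blendshapes), "blendshape weights")
    ```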

    “NVIDIA offers developers a world-class set of AI-powered technologies for digital human creation,” said John Spitzer, vice president of developer and performance technologies at NVIDIA. “These technologies may power the complex animations and conversational speech required to make digital interactions feel real.”

    World-Class Digital Human Technologies
    The digital human technologies suite includes language, speech, animation and graphics powered by AI:

    • NVIDIA ACE — technologies that help developers bring digital humans to life with facial animation powered by NVIDIA Audio2Face™ and speech powered by NVIDIA Riva automatic speech recognition (ASR) and text-to-speech (TTS). ACE microservices are flexible, allowing models to run in the cloud or on the PC depending on local GPU capabilities, to help ensure the user receives the best experience.
    • NVIDIA NeMo — an end-to-end platform that enables developers to deliver enterprise-ready generative AI models with precise data curation, cutting-edge customization, retrieval-augmented generation and accelerated performance.
    • NVIDIA RTX — a collection of rendering technologies, such as RTX Global Illumination (RTXGI) and DLSS 3.5, that enable real-time path tracing in games and applications.

    Building Blocks for Digital Humans and Virtual Assistants
    To showcase the new capabilities of its digital human technologies, NVIDIA worked across industries with leading developers, such as Hippocratic AI, Inworld AI and UneeQ, on a series of new demonstrations.

    Hippocratic AI has created a safety-focused, LLM-powered, task-specific Healthcare Agent. The agent calls patients on the phone, follows up on care coordination tasks, delivers preoperative instructions, performs post-discharge management and much more. For GTC, NVIDIA collaborated with Hippocratic AI to extend its solution to use NVIDIA ACE microservices, including NVIDIA Audio2Face along with NVIDIA Animation Graph and the NVIDIA Omniverse™ Streamer Client, to show the potential of a generative AI healthcare agent avatar.

    “Our digital assistants provide helpful, timely and accurate information to patients worldwide,” said Munjal Shah, cofounder and CEO of Hippocratic AI. “NVIDIA ACE technologies bring them to life with cutting-edge visuals and realistic animations that help better connect to patients.”

    UneeQ is an autonomous digital human platform specialized in creating AI-powered avatars for customer service and interactive applications. Its digital humans represent brands online, communicating with customers in real time to give them confidence in their purchases. UneeQ integrated the NVIDIA Audio2Face microservice into its platform and combined it with Synanim ML to create highly realistic avatars for a better customer experience and engagement.

    “UneeQ combines NVIDIA animation AI with our own Synanim ML synthetic animation technology to deliver real-time digital human interactions that are emotionally responsive and deliver dynamic experiences powered by conversational AI,” said Danny Tomsett, founder and CEO of UneeQ.

    Bringing Dynamic Non-Playable Characters to Games
    NVIDIA ACE is a suite of technologies designed to bring game characters to life. Covert Protocol is a new technology demonstration, created by Inworld AI in partnership with NVIDIA, that pushes the boundary of what character interactions in games can be. Inworld’s AI engine has integrated NVIDIA Riva for accurate speech-to-text and NVIDIA Audio2Face to deliver lifelike facial performances.

    Inworld’s AI engine takes a multimodal approach to the performance of non-playable characters (NPCs), bringing together cognition, perception and behavior systems for an immersive narrative with stunning RTX-rendered characters set in a beautifully crafted environment.

    “The combination of NVIDIA ACE microservices and the Inworld Engine enables developers to create digital characters that can drive dynamic narratives, opening new possibilities for how gamers can decipher, deduce and play,” said Kylan Gibbs, CEO of Inworld AI.

    Game publishers worldwide are evaluating how NVIDIA ACE can improve the gaming experience.

    Developers Across Healthcare, Gaming, Financial Services, Media & Entertainment and Retail Embrace ACE
    Top game and digital human developers are pioneering ways ACE and generative AI technologies can be used to transform interactions between players and NPCs in games and applications.

    Developers and platforms embracing ACE include Convai, Cyber Agent, Data Monsters, Deloitte, Hippocratic AI, IGOODI, Inworld AI, Media.Monks, miHoYo, NetEase Games, Perfect World, Openstream, OurPalm, Quantiphi, Rakuten Securities, Slalom, SoftServe, Tencent, Top Health Tech, Ubisoft, UneeQ and Unions Avatars.

    More information on NVIDIA ACE is available at https://developer.nvidia.com/ace. Platform developers can incorporate the full suite of digital human technologies or individual microservices into their product offerings.

    Developers can start their journey on NVIDIA ACE by applying for the early access program to get in-development AI models. To explore available models, developers can evaluate and access NVIDIA NIM, a set of easy-to-use microservices designed to accelerate the deployment of generative AI, for Riva and Audio2Face on ai.nvidia.com today.

  • NVIDIA Launches Blackwell-Powered DGX SuperPOD for Generative AI Supercomputing at Trillion-Parameter Scale

    • Scales to Tens of Thousands of Grace Blackwell Superchips Using Most Advanced NVIDIA Networking, NVIDIA Full-Stack AI Software, and Storage
    • Features up to 576 Blackwell GPUs Connected as One With NVIDIA NVLink
    • NVIDIA System Experts Speed Deployment for Immediate AI Infrastructure

    SAN JOSE, Calif., March 18, 2024 (GLOBE NEWSWIRE) -- GTC -- NVIDIA today announced its next-generation AI supercomputer — the NVIDIA DGX SuperPOD™ powered by NVIDIA GB200 Grace Blackwell Superchips — for processing trillion-parameter models with constant uptime for superscale generative AI training and inference workloads.

    Featuring a new, highly efficient, liquid-cooled rack-scale architecture, the new DGX SuperPOD is built with NVIDIA DGX™ GB200 systems and provides 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of fast memory — scaling to more with additional racks.

    Each DGX GB200 system features 36 NVIDIA GB200 Superchips — which include 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs — connected as one supercomputer via fifth-generation NVIDIA NVLink®. GB200 Superchips deliver up to a 30x performance increase compared to the NVIDIA H100 Tensor Core GPU for large language model inference workloads.

    “NVIDIA DGX AI supercomputers are the factories of the AI industrial revolution,” said Jensen Huang, founder and CEO of NVIDIA. “The new DGX SuperPOD combines the latest advancements in NVIDIA accelerated computing, networking and software to enable every company, industry and country to refine and generate their own AI.”

    The Grace Blackwell-powered DGX SuperPOD features eight or more DGX GB200 systems and can scale to tens of thousands of GB200 Superchips connected via NVIDIA Quantum InfiniBand. For a massive shared memory space to power next-generation AI models, customers can deploy a configuration that connects the 576 Blackwell GPUs in eight DGX GB200 systems connected via NVLink.
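
    To make the scaling arithmetic concrete, here is a quick back-of-envelope check using only the figures quoted in this announcement (each GB200 Superchip pairs one Grace CPU with two Blackwell GPUs):

    ```python
    # Back-of-envelope scaling for the DGX GB200 SuperPOD, using only the
    # figures quoted in the announcement.
    superchips_per_system = 36          # GB200 Superchips per DGX GB200 system
    gpus_per_superchip = 2              # each Superchip: 1 Grace CPU + 2 Blackwell GPUs
    systems_in_nvlink_domain = 8        # eight systems connected via NVLink

    gpus_per_system = superchips_per_system * gpus_per_superchip      # 72
    nvlink_domain_gpus = gpus_per_system * systems_in_nvlink_domain   # 576

    print(f"{gpus_per_system} Blackwell GPUs per DGX GB200 system")
    print(f"{nvlink_domain_gpus} Blackwell GPUs in one NVLink-connected domain")
    ```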

    New Rack-Scale DGX SuperPOD Architecture for Era of Generative AI
    The new DGX SuperPOD with DGX GB200 systems features a unified compute fabric. In addition to fifth-generation NVIDIA NVLink, the fabric includes NVIDIA BlueField®-3 DPUs and will support NVIDIA Quantum-X800 InfiniBand networking, announced separately today. This architecture provides up to 1,800 gigabytes per second of bandwidth to each GPU in the platform.

    Additionally, fourth-generation NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology provides 14.4 teraflops of In-Network Computing, a 4x increase in the next-generation DGX SuperPOD architecture compared to the prior generation.

    Turnkey Architecture Pairs With Advanced Software for Unprecedented Uptime
    The new DGX SuperPOD is a complete, data-center-scale AI supercomputer that integrates with high-performance storage from NVIDIA-certified partners to meet the demands of generative AI workloads. Each is built, cabled and tested in the factory to dramatically speed deployment at customer data centers.

    The Grace Blackwell-powered DGX SuperPOD features intelligent predictive-management capabilities to continuously monitor thousands of data points across hardware and software to predict and intercept sources of downtime and inefficiency — saving time, energy and computing costs.

    The software can identify areas of concern and plan for maintenance, flexibly adjust compute resources, and automatically save and resume jobs to prevent downtime, even without system administrators present.

    If the software detects that a replacement component is needed, the cluster will activate standby capacity to ensure work finishes in time. Any required hardware replacements can be scheduled to avoid unplanned downtime.

    NVIDIA DGX B200 Systems Advance AI Supercomputing for Industries
    NVIDIA also unveiled the NVIDIA DGX B200 system, a unified AI supercomputing platform for AI model training, fine-tuning and inference.

    DGX B200 is the sixth generation of air-cooled, traditional rack-mounted DGX designs used by industries worldwide. The new Blackwell architecture DGX B200 system includes eight NVIDIA Blackwell GPUs and two 5th Gen Intel® Xeon® processors. Customers can also build DGX SuperPOD using DGX B200 systems to create AI Centers of Excellence that can power the work of large teams of developers running many different jobs.

    DGX B200 systems include the FP4 precision feature in the new Blackwell architecture, providing up to 144 petaflops of AI performance, a massive 1.4TB of GPU memory and 64TB/s of memory bandwidth. This delivers 15x faster real-time inference for trillion-parameter models over the previous generation.
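
    For a rough sense of the per-GPU numbers, the stated DGX B200 system totals can simply be divided across its eight GPUs; these are approximations derived from this announcement, not official per-GPU specifications.

    ```python
    # Rough per-GPU shares of the DGX B200 system totals quoted above.
    # Simple division only; not official per-GPU specifications.
    gpus = 8
    fp4_petaflops_total = 144
    gpu_memory_tb_total = 1.4
    memory_bandwidth_tbs_total = 64

    print(f"~{fp4_petaflops_total / gpus:.0f} petaflops FP4 per GPU")
    print(f"~{gpu_memory_tb_total / gpus * 1000:.0f} GB of GPU memory per GPU")
    print(f"~{memory_bandwidth_tbs_total / gpus:.0f} TB/s of memory bandwidth per GPU")
    ```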

    DGX B200 systems include advanced networking with eight NVIDIA ConnectX™-7 NICs and two BlueField-3 DPUs. These provide up to 400 gigabits per second bandwidth per connection — delivering fast AI performance with NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum™-X Ethernet networking platforms.

    Software and Expert Support to Scale Production AI
    All NVIDIA DGX platforms include NVIDIA AI Enterprise software for enterprise-grade development and deployment. DGX customers can accelerate their work with the pretrained NVIDIA foundation models, frameworks, toolkits and new NVIDIA NIM microservices included in the software platform.

    NVIDIA DGX experts and select NVIDIA partners certified to support DGX platforms assist customers throughout every step of deployment, so they can quickly move AI into production. Once systems are operational, DGX experts continue to support customers in optimizing their AI pipelines and infrastructure.

    Availability
    NVIDIA DGX SuperPOD with DGX GB200 and DGX B200 systems are expected to be available later this year from NVIDIA’s global partners.

    For more information, watch a replay of the GTC keynote or visit the NVIDIA booth at GTC, held at the San Jose Convention Center through March 21.

  • NVIDIA Blackwell Platform Arrives to Power a New Era of Computing

    • New Blackwell GPU, NVLink and Resilience Technologies Enable Trillion-Parameter-Scale AI Models
    • New Tensor Cores and TensorRT-LLM Compiler Reduce LLM Inference Operating Cost and Energy by up to 25x
    • New Accelerators Enable Breakthroughs in Data Processing, Engineering Simulation, Electronic Design Automation, Computer-Aided Drug Design and Quantum Computing
    • Widespread Adoption by Every Major Cloud Provider, Server Maker and Leading AI Company

    SAN JOSE, Calif., March 18, 2024 (GLOBE NEWSWIRE) -- Powering a new era of computing, NVIDIA today announced that the NVIDIA Blackwell platform has arrived — enabling organizations everywhere to build and run real-time generative AI on trillion-parameter large language models at up to 25x less cost and energy consumption than its predecessor.

    The Blackwell GPU architecture features six transformative technologies for accelerated computing, which will help unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing and generative AI — all emerging industry opportunities for NVIDIA.

    “For three decades we’ve pursued accelerated computing, with the goal of enabling transformative breakthroughs like deep learning and AI,” said Jensen Huang, founder and CEO of NVIDIA. “Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution. Working with the most dynamic companies in the world, we will realize the promise of AI for every industry.”

    Among the many organizations expected to adopt Blackwell are Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla and xAI.

    Sundar Pichai, CEO of Alphabet and Google: “Scaling services like Search and Gmail to billions of users has taught us a lot about managing compute infrastructure. As we enter the AI platform shift, we continue to invest deeply in infrastructure for our own products and services, and for our Cloud customers. We are fortunate to have a longstanding partnership with NVIDIA, and look forward to bringing the breakthrough capabilities of the Blackwell GPU to our Cloud customers and teams across Google, including Google DeepMind, to accelerate future discoveries.”

    Andy Jassy, president and CEO of Amazon: “Our deep collaboration with NVIDIA goes back more than 13 years, when we launched the world’s first GPU cloud instance on AWS. Today we offer the widest range of GPU solutions available anywhere in the cloud, supporting the world’s most technologically advanced accelerated workloads. It's why the new NVIDIA Blackwell GPU will run so well on AWS and the reason that NVIDIA chose AWS to co-develop Project Ceiba, combining NVIDIA’s next-generation Grace Blackwell Superchips with the AWS Nitro System's advanced virtualization and ultra-fast Elastic Fabric Adapter networking, for NVIDIA's own AI research and development. Through this joint effort between AWS and NVIDIA engineers, we're continuing to innovate together to make AWS the best place for anyone to run NVIDIA GPUs in the cloud.”

    Michael Dell, founder and CEO of Dell Technologies: “Generative AI is critical to creating smarter, more reliable and efficient systems. Dell Technologies and NVIDIA are working together to shape the future of technology. With the launch of Blackwell, we will continue to deliver the next-generation of accelerated products and services to our customers, providing them with the tools they need to drive innovation across industries.”

    Demis Hassabis, cofounder and CEO of Google DeepMind: “The transformative potential of AI is incredible, and it will help us solve some of the world’s most important scientific problems. Blackwell’s breakthrough technological capabilities will provide the critical compute needed to help the world’s brightest minds chart new scientific discoveries.”

    Mark Zuckerberg, founder and CEO of Meta: “AI already powers everything from our large language models to our content recommendations, ads, and safety systems, and it's only going to get more important in the future. We're looking forward to using NVIDIA's Blackwell to help train our open-source Llama models and build the next generation of Meta AI and consumer products.”

    Satya Nadella, executive chairman and CEO of Microsoft: “We are committed to offering our customers the most advanced infrastructure to power their AI workloads. By bringing the GB200 Grace Blackwell processor to our datacenters globally, we are building on our long-standing history of optimizing NVIDIA GPUs for our cloud, as we make the promise of AI real for organizations everywhere.”

    Sam Altman, CEO of OpenAI: “Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We’re excited to continue working with NVIDIA to enhance AI compute.”

    Larry Ellison, chairman and CTO of Oracle: "Oracle’s close collaboration with NVIDIA will enable qualitative and quantitative breakthroughs in AI, machine learning and data analytics. In order for customers to uncover more actionable insights, an even more powerful engine like Blackwell is needed, which is purpose-built for accelerated computing and generative AI.”

    Elon Musk, CEO of Tesla and xAI: “There is currently nothing better than NVIDIA hardware for AI.”

    Named in honor of David Harold Blackwell — a mathematician who specialized in game theory and statistics, and the first Black scholar inducted into the National Academy of Sciences — the new architecture succeeds the NVIDIA Hopper™ architecture, launched two years ago.

    Blackwell Innovations to Fuel Accelerated Computing and Generative AI
    Blackwell’s six revolutionary technologies, which together enable AI training and real-time LLM inference for models scaling up to 10 trillion parameters, include:

    • World’s Most Powerful Chip — Packed with 208 billion transistors, Blackwell-architecture GPUs are manufactured using a custom-built 4NP TSMC process with two reticle-limit GPU dies connected by a 10 TB/second chip-to-chip link into a single, unified GPU.
    • Second-Generation Transformer Engine — Fueled by new micro-tensor scaling support and NVIDIA’s advanced dynamic range management algorithms integrated into NVIDIA TensorRT™-LLM and NeMo Megatron frameworks, Blackwell will support double the compute and model sizes with new 4-bit floating point AI inference capabilities (see the rough memory-footprint estimate after this list).
    • Fifth-Generation NVLink — To accelerate performance for multitrillion-parameter and mixture-of-experts AI models, the latest iteration of NVIDIA NVLink® delivers groundbreaking 1.8TB/s bidirectional throughput per GPU, ensuring seamless high-speed communication among up to 576 GPUs for the most complex LLMs.
    • RAS Engine — Blackwell-powered GPUs include a dedicated engine for reliability, availability and serviceability. Additionally, the Blackwell architecture adds capabilities at the chip level to utilize AI-based preventative maintenance to run diagnostics and forecast reliability issues. This maximizes system uptime and improves resiliency for massive-scale AI deployments to run uninterrupted for weeks or even months at a time and to reduce operating costs.
    • Secure AI — Advanced confidential computing capabilities protect AI models and customer data without compromising performance, with support for new native interface encryption protocols, which are critical for privacy-sensitive industries like healthcare and financial services.
    • Decompression Engine — A dedicated decompression engine supports the latest formats, accelerating database queries to deliver the highest performance in data analytics and data science. In the coming years, data processing, on which companies spend tens of billions of dollars annually, will be increasingly GPU-accelerated.
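
    One way to see why 4-bit inference matters at this scale: the weight footprint of a model shrinks in direct proportion to the bits per parameter. The estimate below is a rough sketch that counts weights only, ignoring activations, KV cache and other runtime memory, so real deployments need considerably more.

    ```python
    # Rough weight-memory footprint for a 10-trillion-parameter model at
    # different precisions. Weights only; activations, KV cache and any
    # optimizer state are ignored.
    params = 10e12

    def weights_tb(bits_per_param: float) -> float:
        """Terabytes of weight storage at the given precision."""
        return params * bits_per_param / 8 / 1e12

    for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
        print(f"{name}: ~{weights_tb(bits):.0f} TB of weights")
    ```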

    A Massive Superchip
    The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.

    For the highest AI performance, GB200-powered systems can be connected with the NVIDIA Quantum-X800 InfiniBand and Spectrum™-X800 Ethernet platforms, also announced today, which deliver advanced networking at speeds up to 800Gb/s.

    The GB200 is a key component of the NVIDIA GB200 NVL72, a multi-node, liquid-cooled, rack-scale system for the most compute-intensive workloads. It combines 36 Grace Blackwell Superchips, which include 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink. Additionally, GB200 NVL72 includes NVIDIA BlueField®-3 data processing units to enable cloud network acceleration, composable storage, zero-trust security and GPU compute elasticity in hyperscale AI clouds. The GB200 NVL72 provides up to a 30x performance increase compared to the same number of NVIDIA H100 Tensor Core GPUs for LLM inference workloads, and reduces cost and energy consumption by up to 25x.

    The platform acts as a single GPU with 1.4 exaflops of AI performance and 30TB of fast memory, and is a building block for the newest DGX SuperPOD.
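
    Reading those NVL72 figures together, a hedged back-of-envelope calculation, using only the numbers stated in this release, looks like this:

    ```python
    # Back-of-envelope figures for the GB200 NVL72 rack, derived from the
    # numbers quoted above; not official per-GPU specifications.
    superchips = 36
    gpus = superchips * 2                 # 72 Blackwell GPUs
    nvlink_per_gpu_tbs = 1.8              # fifth-gen NVLink, bidirectional, per GPU
    total_fp4_exaflops = 1.4

    print(f"{gpus} GPUs, aggregate NVLink bandwidth ~{gpus * nvlink_per_gpu_tbs:.0f} TB/s")
    print(f"~{total_fp4_exaflops * 1000 / gpus:.1f} petaflops FP4 per GPU (1.4 EF / 72)")
    ```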

    NVIDIA offers the HGX B200, a server board that links eight B200 GPUs through NVLink to support x86-based generative AI platforms. HGX B200 supports networking speeds up to 400Gb/s through the NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet networking platforms.

    Global Network of Blackwell Partners
    Blackwell-based products will be available from partners starting later this year.

    AWS, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to offer Blackwell-powered instances, as will NVIDIA Cloud Partner program companies Applied Digital, CoreWeave, Crusoe, IBM Cloud and Lambda. Sovereign AI clouds will also provide Blackwell-based cloud services and infrastructure, including Indosat Ooredoo Hutchinson, Nebius, Nexgen Cloud, Oracle EU Sovereign Cloud, the Oracle US, UK, and Australian Government Clouds, Scaleway, Singtel, Northern Data Group's Taiga Cloud, Yotta Data Services’ Shakti Cloud and YTL Power International.

    GB200 will also be available on NVIDIA DGX™ Cloud, an AI platform co-engineered with leading cloud service providers that gives enterprise developers dedicated access to the infrastructure and software needed to build and deploy advanced generative AI models. AWS, Google Cloud and Oracle Cloud Infrastructure plan to host new NVIDIA Grace Blackwell-based instances later this year.

    Cisco, Dell, Hewlett Packard Enterprise, Lenovo and Supermicro are expected to deliver a wide range of servers based on Blackwell products, as are Aivres, ASRock Rack, ASUS, Eviden, Foxconn, GIGABYTE, Inventec, Pegatron, QCT, Wistron, Wiwynn and ZT Systems.

    Additionally, a growing network of software makers, including Ansys, Cadence and Synopsys — global leaders in engineering simulation — will use Blackwell-based processors to accelerate their software for designing and simulating electrical, mechanical and manufacturing systems and parts. Their customers can use generative AI and accelerated computing to bring products to market faster, at lower cost and with higher energy efficiency.

    NVIDIA Software Support
    The Blackwell product portfolio is supported by NVIDIA AI Enterprise, the end-to-end operating system for production-grade AI. NVIDIA AI Enterprise includes NVIDIA NIM™ inference microservices — also announced today — as well as AI frameworks, libraries and tools that enterprises can deploy on NVIDIA-accelerated clouds, data centers and workstations.

    To learn more about the NVIDIA Blackwell platform, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at GTC, which runs through March 21.

  • SilverStone IceMyst 360 All-in-One Liquid Cooler Review @ APH Networks

    I really do love the SilverStone naming schemes. The cases are named after famous people of the warring type, and the coolers tend to describe themselves.

    For instance, we have the IceMyst: great name, but being an AIO watercooler, there will be no Ice or Myst unless you consider the LEDs to be sparkles of diamond-shaped light in your PC and play a little bit of Myst on the side.

    The SilverStone IceMyst 360 all-in-one liquid cooler is an excellent performer with an innovative way to cool around the motherboard too.

    Check back for our review of this exciting new cooler, with extra fans.

  • Happy Baud Day!! 3/12/24 a BBS callback

    Back in the days of dialup, the notion of modem speed, or baud rate, was a pretty big deal. The supported speeds were often a selling point for any particular modem and would largely determine how productive your online sessions were.

    When a BBS would advertise, it often listed its supported baud rates in a shorthand notation, much like a date. For March 12th, 2024, that translates to 300/1200/2400 as the supported baud rates, shortened to 3/12/24 by dropping the redundant double zeros.

    So, why was the supported baud rate important??

    The jump from 1200 to 2400 baud was exactly double, meaning that if your favorite BBS supported 2400 baud you could download twice as much data in the same time, complete your online door game session faster and have the opportunity to bank more time with faster uploads. Of course, aside from torrents, the notion of uploading more to download more has largely been forgotten; however, before the widespread use of the internet, a BBS was the de facto way to communicate online and try out new software, and you needed time credits to do this.
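
    As a rough worked example, treating baud as bits per second the way the article does, and assuming roughly 10 bits on the wire per byte (8-N-1 framing, no compression), here is about how long a 100 KB file took at each speed:

    ```python
    # Approximate download time for a 100 KB file at classic modem speeds,
    # assuming ~10 bits on the wire per byte (8-N-1 framing, no compression).
    file_bytes = 100 * 1024
    bits_per_byte_on_wire = 10

    for baud in (300, 1200, 2400):
        seconds = file_bytes * bits_per_byte_on_wire / baud
        print(f"{baud:>4} baud: ~{seconds / 60:.1f} minutes")
    ```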

    Check out the site supporting Baud Day [3/12/2024].  They go into a little more detail about the significance and why days like this are a chance of a lifetime.

    Baud Days are like comets. They do not come around too often. December 24, 1996 (12/24/96) marked the first Baud Day with little fanfare as many saw the popularity of the Computer Bulletin Board System (BBS) waning from the rise of the Internet.

    Personally, I have fond memories of dialing into a BBS; I was curious about computers and found the whole notion extremely enlightening. During the rise of the Internet we simply changed the system we dialed into: instead of “City of Trees BBS” I dialed into AOL or CompuServe, and later into my ISP. The interesting thing is that as Internet technology grew, so did the data requirements, and with them the need for a fast modem and an even faster ISP.

  • Razer Seiren v3 Chroma and Mini @ LanOC Reviews

    I have to hand it to Wes over at LanOC. It seems they have broken the Razer code and were able to get them to ship out a review sample.

    What I like about these microphones is that it appears the industrial designer literally lifted an emoji to create this product. This isn't a huge deal; microphone icons look like that for a reason, but when companies start copying them you know they are running out of ideas. There is also plenty of feature copying from other, similar microphones on the market.

    It’s kind of crazy to think but the last time I had a Razer desktop microphone in for review was back in 2015 when I took a look at the Razer Seiren Pro. That wasn’t Razer’s first desktop microphone but it was part of their first generation of Seiren microphones and Razer has continued to evolve their lineup in the last 9 years. Well, today they have updated that lineup with their new Seiren v3 models. That lineup consists of two different models, the Seiren v3 Chroma and the Seiren v3 Mini. The Seiren v3 Mini is their budget-friendly option and then the Seiren v3 Chroma is the larger flashier option. Today I'm going to check out both and see what they have to offer then put them to the test to see how they sound compared to the competition. I’m excited to see how their design has changed over the years and to see how these new microphones compare to the current competition like the Yeti Orb and Yeti GX that I took a look at last fall.

    If you are in the market for a new microphone and want one to satisfy your OCD need to have your graphics and hardware match, then be sure to check out the review.

  • YEYIAN GAMING Introduces the PHOENIX: A Next-Level High-Performance PC Case

    YEYIAN GAMING Introduces the PHOENIX: A Next-Level High-Performance ATX Mid-Tower PC Case and GeForce RTX™ 4000 Super-Powered Gaming PCs Lineup

    With liquid cooling, four pre-installed ARGB PWM fans, and a chic aesthetic design, the PHOENIX delivers optimal thermal performance and sturdy protection for NVIDIA® GeForce RTX™ 4000 SUPER graphics cards, and is now available in five YEYIAN GAMING PHOENIX prebuilt gaming PCs.

    San Diego, California, March 6th, 2024 – YEYIAN GAMING, a global leader in the design and manufacturing of innovative pre-built gaming PCs, peripherals, and computer components, today introduces the new PHOENIX ATX mid-tower gaming PC case and five PHOENIX-housed prebuilt gaming desktop PCs. The PHOENIX ATX mid-tower gaming PC case is designed to accommodate the latest NVIDIA® GeForce RTX™ 4000 SUPER series graphics cards across 14th Gen Intel® Core™ and AMD® Ryzen™ 7 desktop processor platforms, ensuring optimal thermal management and gaming performance.

    The YEYIAN GAMING PHOENIX ATX mid-tower gaming PC case boasts ample interior space and a user-friendly chassis design. With support for up to 9 x 120mm cooling fans and 1 x 360/240mm (top/front) AIO radiators, the PHOENIX is meticulously designed for air and liquid cooling setups, catering to all high-performance gaming needs. Offered in two variants with tempered glass or meshed metal side panels, the PHOENIX strikes the perfect balance between air/liquid cooling, RGB illumination, cable management, and intelligent connectivity. Available across five YEYIAN GAMING pre-built PCs, each configuration features varying CPU, GPU, RAM, and storage options, ensuring a tailored gaming experience to meet diverse preferences and needs.

    Frank Lee, Vice President of YEYIAN GAMING USA, emphasized, “As graphics cards evolve, so do the challenges of PC case design. The PHOENIX ATX mid-tower gaming PC case rises to meet these challenges with its sleek and elegant design, offering not only optimal ventilation and ample space for liquid cooling but also enhanced protection for the latest NVIDIA® GeForce RTX™ 4000 SUPER GPUs. It stands as the cornerstone of our new lineup of prebuilt gaming desktops, meticulously crafted to cater to the discerning tastes of the gaming community. With a focus on gamer's choice and superior components, the PHOENIX guarantees not only longevity but also unrivaled performance, ensuring an immersive gaming experience for gamers and content creators alike.”

    Superior Ventilation and Cooling System and Layout

    Effective cooling and ventilation are paramount for maintaining peak performance in a gaming PC, especially with the powerhouse NVIDIA® GeForce RTX™ 4000 SUPER series GPU systems. The YEYIAN GAMING PHOENIX ATX mid-tower gaming PC case delivers an optimal thermal solution, ensuring efficient heat dissipation. With support for up to 9 x 120mm or 5 x 140mm system fans, as well as 1 x 360 and 1 x 280/240mm AIO liquid cooler radiators, builders are empowered with unparalleled flexibility in designing their cooling systems. Given the heat generated from NVIDIA® GeForce RTX™ 4000 SUPER GPUs and the latest INTEL 14th gen Intel® Core™ desktop processors and AMD® Ryzen™ 7 desktop processors, superior airflow is essential. To this end, the front panel grille design strategically directs intake air from left to right, bolstering air pressure and optimizing dissipation within the chassis. These meticulously engineered designs and arrangements guarantee optimal ventilation and stability, regardless of the gaming scenario.

    Generous Interior Design for Cutting-Edge Gaming Hardware Setups

    The YEYIAN GAMING PHOENIX ATX mid-tower gaming PC case is intentionally designed for those who want a mid-tower, high-performance, and powerful gaming desktop with a roomy interior. It supports standard ATX, Micro-ATX, and mini-ITX motherboard form factors and the latest NVIDIA® GeForce RTX™ 4000 SUPER series graphics cards with up to 400 mm of clearance, which can easily accommodate NVIDIA® GeForce RTX™ 4080 SUPER/4090 SUPER graphics cards. It also fits any CPU tower air cooler up to 170mm in height and any regular ATX power supply up to 220 mm deep.

    Intelligent Interior Design for Storage and Cable Management

    As graphics cards get more extensive and take up more space, cable management becomes essential for building a liquid-cooled desktop PC, especially in a compact mid-tower PC case. The PHOENIX ATX mid-tower gaming PC case allows the builder to easily organize and conceal the cables behind the motherboard tray, with a 35mm depth that enables cables to run smoothly without interfering with other components inside the chassis. This PC case also provides enough space for 7 x expansion slots, 4 x 2.5" SSD drive bays, and 2 x 3.5" drive bays, ensuring that various storage options are available.

    For gamers and content creators who want a medium-sized PC case that offers plenty of space and options, the PHOENIX ATX mid-tower gaming PC case would be an ideal choice. It is available in tempered glass and meshed metal side panel options for different appearances and occasions, and it broadly supports the latest processors and graphics cards with excellent cooling performance.

    The PHOENIX ATX mid-tower gaming PC case is backed by a one-year warranty from YEYIAN GAMING USA and is now available online from major US retailers.

    YEYIAN GAMING PHOENIX Pre-built Gaming PCs Powered by NVIDIA® GeForce RTX™ 4000 SUPER GPUs

    YEYIAN GAMING is committed to providing trustworthy pre-built gaming desktops with cutting-edge PC components under strict testing and quality assurance. The brand-new PHOENIX mid-tower PC case is available across five exclusive pre-built models, now available on Newegg and the YEYIAN GAMING Webstore: PHOENIX 47F0C-47S1N, PHOENIX 49KFC-47Y1N, PHOENIX 47KFC-47Y1N, PHOENIX 49KFC-48S1N, and PHOENIX 47KFC-48S1N.

    These five exclusive gaming PCs boast the latest NVIDIA® GeForce RTX™ 4000 Super graphics card from the 4070/4070 Ti/4080 SUPER series. Featuring enhanced CUDA Cores and VRAM memory under the Lovelace architecture, the GPU elevates the frame buffer to 16GB and a 256-bit memory bus, ideal for delivering exceptional performance at 4K at 120Hz or 8K at 60Hz. Additionally, content creators will appreciate its capabilities for video editing and rendering large 3D scenes.

    YEYIAN GAMING pre-built gaming desktops offer diverse options tailored for hardcore gaming enthusiasts. Each PHOENIX gaming desktop system is equipped with meticulously selected PC components, including NVMe M.2 SSDs, DDR5 DRAM modules, and Windows 11 preinstalled. Furthermore, all five pre-built gaming PCs can support up to 9 PWM ARGB system fans and 120/240/360mm CPU AIO liquid coolers, ensuring optimal cooling efficiency and system stability for uninterrupted gaming experiences.

    Learn more about YEYIAN GAMING PHOENIX pre-built gaming desktops on the YEYIAN GAMING Webstore.

  • darkFlash DLX4000 @ TechPowerUp

    Seems that it took case makers a really, really long time to figure out what to do with the space left over from the removal of external 5.25" drive bays. For those living under a rock, the area I speak of is to the right of the motherboard, or what we would normally call the front. This has always been the primary intake for cooling the case, unless you modded fan holes into the side panel on the same side as the motherboard tray.

    I'm going to claim credit for starting that, and I have the photos to prove it.

    Anyhow, the darkFlash DLX4000, and just about everyone else, has been taking notes from the O11 and I have to say, the results are extremely similar.

    The darkFlash DLX4000 is a clean, solid and understated chassis with two clear glass panels, so you can feast your eyes on your hardware from multiple angles. Thanks to the well-built chassis and excellent material mix, if you are in the market for this style of case, add it to your shortlist.

    I would agree with everything they said above, though I still think we can improve the designs considerably before the desktop tower becomes irrelevant.

  • Corsair 2500D AIRFLOW PC chassis @ Guru3D

    It has been a while since I have said anything about Corsair. But, you know, when you are a small player and get big overnight, there is a tendency to leave certain people behind. It is sad, but a true fact of guerrilla marketing and unlimited budgets.

    What I find interesting is the new Airflow chassis designs coming out of Corsair.  The original dual chamber cubes were a huge hit with case modders and system builders due to ease of installation and rather low overall height.

    Well, seems they are back again, this time with even smaller cases that look too much like what everyone else is doing.

    Corsair is back with a new badass chassis; this time, the Corsair 2500D AIRFLOW PC chassis is being tested. It is an innovative chassis with lots of space, hiding options, and clearances for a lot of liquid cooling. It's quite the looker as well, with support for the new hidden-connector motherboards.

    The important thing to mention here is that the dual chamber design is back, with even more cable routing holes for the reverse power bracket fad. Well, at least the case looks nice.

  • Crucial Pro Series Supercharges Portfolio with DDR5 Overclocking Memory and World’s Fastest Gen5 SSD

    AI-PC ready, Crucial® DDR5 Pro Overclocking memory is compatible with DDR5 CPUs supporting Intel® XMP 3.0 and AMD EXPO™, while the Crucial T705 Gen5 NVMe® SSD further raises the bar for Gen5 performance 

    BOISE, Idaho, Feb. 20, 2024 (GLOBE NEWSWIRE) -- Micron Technology, Inc. (Nasdaq: MU), today announced two new Crucial Pro Series products with the addition of overclocking-capable memory and the world’s fastest Gen5 SSD. The Crucial DDR5 Pro Memory: Overclocking Edition modules are available in 16GB densities up to 6,000MT/s to deliver higher performance, lower latencies and better bandwidth to fuel gaming wins and reduce performance bottlenecks. These powerful DDR5 overclocking DRAM modules are compatible with the latest DDR5 Intel and AMD CPUs and support both Intel XMP 3.0 and AMD EXPO specifications on every module, eliminating compatibility hassles. Built with leading-edge Micron® 232-layer TLC NAND, the Crucial T705 SSD unleashes the full potential of Gen5 performance. Lightning-fast sequential reads and writes up to 14,500MB/s and 12,700MB/s (up to 1,550K/1,800K IOPS random reads and writes) respectively, enable faster gaming, video editing, 3D rendering and heavy workload AI application processing. With DDR5 Pro Overclocking DRAM and the T705 SSD, enthusiasts, gamers and professionals can harness the speed, bandwidth and performance they need for AI-ready PC builds capable of processing, rendering and storing large volumes of AI generated content. 
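
    For a sense of what those sequential figures mean in practice, here is a rough best-case conversion; the 150 GB file size is just an illustrative large game install, and real-world times depend on file sizes, queue depths and the rest of the platform.

    ```python
    # Rough best-case transfer times at the quoted T705 sequential speeds.
    # Real-world results vary with file sizes, queue depths and the host.
    read_mb_s, write_mb_s = 14_500, 12_700

    game_gb = 150          # illustrative size for a large modern game install
    print(f"Read {game_gb} GB:  ~{game_gb * 1000 / read_mb_s:.0f} s")
    print(f"Write {game_gb} GB: ~{game_gb * 1000 / write_mb_s:.0f} s")
    ```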

    “Today’s high-end PCs require exceptional memory and storage solutions to meet the growing demands of applications and workloads. The class-leading Crucial T705 Gen5 SSD and high-performance Crucial DDR5 Pro Overclocking DRAM offerings continue our legacy of designing products specifically for gamers, creators and other performance users that allow them to fully leverage the capabilities provided by the latest generation of CPU platforms,” said Jonathan Weech, senior director of product marketing for Micron’s Commercial Products Group. 

    Overclocking unlocked! 

    The Crucial DDR5 Pro Memory: Overclocking Edition delivers: 

    • 36-38-38-80 extended timings for 25% lower latency than Crucial DDR5 Pro Memory Plug and Play Edition (see the quick latency arithmetic after this list)
    • Elegant, origami-inspired aluminum heat spreader to complement a variety of gaming rigs
    • Higher frame rates for serious 1080p and 1440p resolution gaming on memory-intensive titles like Rainbow Six® Siege, Forza™ Horizon 4, Horizon Zero Dawn™, Cyberpunk 2077®, Hogwarts Legacy™, Marvel's Spider-Man Remastered or Forspoken™
    • Universal compatibility with DDR5-based Intel Core 12th to 14th Gen desktop CPUs and AMD Ryzen 7000 to 8000G Series desktop CPUs
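
    As a quick sanity check on those timings, first-word latency can be estimated from the CAS latency and transfer rate alone. The sketch below uses only the Overclocking Edition figures quoted here (CL36 at 6,000 MT/s); the Plug and Play Edition's own timings are not stated, so the 25% comparison is not reproduced.

    ```python
    # First-word latency estimate for DDR5 from CAS latency and transfer rate:
    # latency_ns = CL * 2000 / MT_s  (two transfers per clock cycle).
    def first_word_latency_ns(cl: int, mt_s: int) -> float:
        return cl * 2000 / mt_s

    cl, mt_s = 36, 6000          # Crucial DDR5 Pro: Overclocking Edition, as quoted
    print(f"CL{cl} @ {mt_s} MT/s: ~{first_word_latency_ns(cl, mt_s):.0f} ns")

    # Peak per-module bandwidth: 64-bit (8-byte) data bus times the transfer rate.
    print(f"Peak bandwidth: ~{mt_s * 8 / 1000:.0f} GB/s per module")
    ```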

    Additionally, Crucial has fine-tuned all XMP 3.0 and EXPO memory profiles to maximize CPU compatibility without compromising overclocking stability and performance. Activating one of these pre-tuned profiles is necessary to overclock the CPU and memory and is the easiest way to achieve maximum performance.

    Crucial’s fastest Gen5 SSD just got faster. 

    The T705 SSD is available in capacities up to 4TB and features a premium black aluminum and copper heatsink that dissipates heat without noisy fans or liquid cooling, takes full advantage of Microsoft® DirectStorage and is backward compatible with Gen3 and Gen4 motherboards. A non-heatsink version is also available for use with a motherboard heatsink. For a limited time while global supplies last, a 2TB Crucial T705 Gen5 SSD with an exclusive white heatsink is also available and was meticulously designed to meet the aesthetic preferences of enthusiasts and gamers, harmonizing perfectly with white motherboards and PC components. 

    The Crucial T705 Gen5 SSD also provides: 

    • Faster gameplay and reduced game load times than Gen4 SSDs with Microsoft DirectStorage
    • Compatibility with Intel® Core 13th and 14th Gen CPUs and AMD Ryzen™ 7000 CPUs

    The Crucial DDR5 Pro Overclocking DRAM will be available in 16GB densities on February 27, 2024, and 24GB densities later in 2024. All Crucial Pro memory modules have a limited lifetime warranty. Crucial T705 SSDs, including the limited-edition 2TB white heatsink version, are available to pre-order now on www.crucial.com/T705 and will be available on March 12, 2024 through select etailers, retailers and global channel partners. The T705 SSDs have a 5-year warranty. To learn more about the entire high-performance Crucial Pro Series memory and storage product category, visit: https://www.crucial.com/pro-series.