14 Years Later than Planned, NexPhone is Up for Preorder

NexPhone is available for pre-order, some 14 years after it was first announced to the world – back then it was planned to ship with Ubuntu for Android. Created by Nex Computer, the company behind the NexDock laptop shells, the NexPhone aims to deliver on the ambitions Canonical’s Ubuntu Phone set out to achieve: using your phone as a proper PC when connected to a monitor (aka ‘convergence’). In 2012, the plan was to offer the NexPhone with Ubuntu for Android as its sole OS. This would attach to a range of optional devices to function as a tablet, a laptop or […]

You're reading 14 Years Later than Planned, NexPhone is Up for Preorder, a blog post from OMG! Ubuntu. Do not reproduce elsewhere without permission.

Linux Finally Retiring HIPPI: The First Near-Gigabit Standard For Networking Supercomputers

While the Linux kernel has been seeing preparations from NVIDIA for 1.6 Tb/s networking ahead of next-generation supercomputing, the kernel has until now retained support for the High Performance Parallel Interface. HIPPI was the standard for connecting supercomputers in the late 1980s and part of the 1990s, being the first networking standard to offer near-Gigabit connectivity at 800 Mb/s over distances up to 25 meters. But HIPPI looks set to be retired from the mainline kernel with Linux 7.0...
AMD ROCm 7.2 Now Released With More Radeon Graphics Cards Supported, ROCm Optiq Introduced

Back at CES earlier this month, AMD talked up features of the ROCm 7.2 release. ROCm 7.2 wasn't actually released then, though, at least not for Linux. That ROCm 7.2.0 release was pushed out today as the latest improvement to this open-source AMD GPU compute stack, officially extending support to more Radeon graphics cards...
The CPU Performance Of The NVIDIA GB10 With The Dell Pro Max vs. AMD Ryzen AI Max+ "Strix Halo"

With the Dell Pro Max GB10 testing at Phoronix we have been focused on the AI performance with its Blackwell GPU, as the GB10 superchip was designed to meet the needs of AI. Many Phoronix readers have also been curious about the GB10's CPU performance in more traditional Linux workloads. So here are some Linux benchmarks focused on the GB10's CPU performance, going up against the AMD Ryzen AI Max+ 395 "Strix Halo" within the Framework Desktop.
Adjusting One Line Of Linux Code Yields 5x Wakeup Latency Reduction For Modern Xeon CPUs

A new patch posted to the Linux kernel mailing list aims to address the high wake-up latency experienced on modern Intel Xeon server platforms. With Sapphire Rapids and newer, "excessive" wakeup latencies with the Linux menu governor and NOHZ_FULL configuration can impair Xeon CPUs for latency-sensitive workloads, but a 16-line patch aims to improve the situation. That is, changing one line of actual code, with the rest being code comments...
New Linux Patch Improves NVMe Performance +15% With CPU Cluster-Aware Handling

Intel Linux engineers have been working on enhancing NVMe storage performance with today's high core count processors. In situations where multiple CPUs end up sharing the same NVMe IRQ(s), performance penalties can arise if the IRQ affinity and the CPU's cluster do not align. There is a pending patch to address this situation, with a 15% performance improvement reported...
Linux 6.19 ATA Fixes Address Power Management Regression For The Past Year

It's typically rare these days for the ATA subsystem updates in the Linux kernel to contain anything really noteworthy. But today some important fixes were merged for the ATA code to deal with a reported power management regression affecting Linux kernel releases over the past year. ATAPI devices with dummy ports weren't hitting their low-power state, in turn preventing the CPU from reaching low-power C-states, but thankfully that is now resolved with this code...
AMD Making It Easier To Install vLLM For ROCm

Deploying vLLM for LLM inference and serving on NVIDIA hardware can be as easy as pip3 install vllm. It is beautifully simple, as many AI/LLM Python libraries deploy straight away and typically "just work" on NVIDIA. Running vLLM atop AMD Radeon/Instinct hardware, though, has traditionally meant either compiling vLLM from source yourself or following AMD's recommended approach of using Docker containers with pre-built versions of vLLM. Now there is finally a blessed Python wheel that makes it easier to install vLLM leveraging ROCm, without Docker...