We review a lot of computers at CNET, and we’ve been doing it for a long time. Over the years some of the methodology has changed, but our core commitment to in-depth product reviews has not. But how, exactly, do we test the products we review?
The review process for laptops, desktops, tablets and other computer-like devices consists of two parts: performance testing under controlled conditions in the CNET Labs, and extensive hands-on use by our expert reviewers, which includes evaluating a device's aesthetics, ergonomics and features. A final review verdict combines those objective and subjective judgments.
When a computer — typically a laptop, desktop, two-in-one hybrid or Chromebook — arrives at the CNET Labs, we set it up as a typical user of the product would. As a best practice, during setup we disable as many of the invasive privacy and data-collection options as possible. Then we update the OS, GPU drivers, BIOS and manufacturer utilities as needed and use an application such as Sandra from SiSoftware to gather information about the system's components, such as the CPU, GPU and RAM.
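In the Labs we use Sandra for that inventory step, but the basic idea is simple enough to sketch in a few lines. The example below is purely illustrative (it's not our actual tooling) and assumes the third-party psutil Python library is installed; it pulls the same kind of basic CPU, RAM and OS details a test log needs.

```python
# Minimal sketch of an automated system inventory, similar in spirit to
# what a tool like SiSoftware Sandra reports. Requires: pip install psutil
import platform
import psutil

def system_inventory() -> dict:
    """Collect basic CPU, RAM and OS details for a test log."""
    return {
        "os": f"{platform.system()} {platform.release()}",
        "cpu": platform.processor() or platform.machine(),
        "physical_cores": psutil.cpu_count(logical=False),
        "logical_cores": psutil.cpu_count(logical=True),
        "ram_gb": round(psutil.virtual_memory().total / 1024**3, 1),
    }

if __name__ == "__main__":
    for key, value in system_inventory().items():
        print(f"{key:>15}: {value}")
```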
Our benchmark tests consist of a core set we run on every compatible system. There’s also an extended set of tests for specific use cases, such as gaming or content creation, where systems may have more powerful GPUs or higher-resolution displays that need to be evaluated.
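As a purely hypothetical sketch of that branching (the test names and selection rules below are simplified stand-ins, not our full checklist), the logic looks something like this:

```python
# Hypothetical sketch of how a test plan branches: every system runs the
# core set, and capable systems pick up extended tests on top of it.
CORE_TESTS = ["Geekbench 5", "Cinebench R23", "3DMark Wild Life Extreme",
              "Battery life"]
GPU_TESTS = ["3DMark Fire Strike Ultra", "3DMark Time Spy",
             "Shadow of the Tomb Raider"]
CREATOR_TESTS = ["Procyon Video Editing", "Procyon Photo Editing"]

def build_test_plan(has_discrete_gpu: bool, meets_adobe_reqs: bool) -> list:
    """Start from the core set; add extended tests the hardware supports."""
    plan = list(CORE_TESTS)
    if has_discrete_gpu:
        plan += GPU_TESTS
    if meets_adobe_reqs:
        plan += CREATOR_TESTS
    return plan

print(build_test_plan(has_discrete_gpu=True, meets_adobe_reqs=False))
```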

The list of benchmarking software we use changes over time as the devices we test evolve. The most important core tests we’re currently running on every compatible computer are:
Primate Labs Geekbench 5
We run both single-core and multicore CPU tests, and either the Vulkan (Windows) or Metal (MacOS) Compute test. On Android, Apple devices and Chromebooks, we run the CPU tests and the Compute test. Geekbench's CPU tests measure performance across a mixed workload.
Cinebench R23
We run both the single-core and multicore tests on Windows and MacOS devices. Cinebench measures pure CPU processing performance for 3D rendering.
PCMark 10
We’re phasing out this Windows benchmark, but at the moment still run the last-generation version, which simulates a wide range of functions, including web browsing, video conferencing, photo editing, video editing and more.
3DMark Wild Life Extreme
We run this test on MacOS (Apple silicon), Windows, Android and iPadOS systems; it’s one of the few cross-platform benchmarks available to test graphics performance. We additionally run it in Unlimited mode, which eliminates screen resolution as a variable when making cross-device comparisons.
3DMark Fire Strike Ultra, Time Spy, and Port Royal
We run these tests on any system with a discrete GPU to test a system's DirectX 11 or DirectX 12 graphics performance, which is especially important for gaming computers. We're phasing out Port Royal, which was originally designed for Nvidia GPUs, and switching to 3DMark's DXR test for ray-tracing performance. We're also adding 3DMark's CPU Profile, Storage and PCI Express feature tests.
Far Cry 5 benchmark
We're phasing out this game benchmark, but at the moment we run it at the High quality preset in 1,920×1,080 resolution. We're phasing in The Riftbreaker CPU and GPU benchmarks (it's a complex simulation game) and the Marvel's Guardians of the Galaxy in-game benchmark, both in 1080p resolution using their respective High quality presets. Both games can run on systems with low-end discrete GPUs.
Shadow of the Tomb Raider benchmark
This is an older game that balances the CPU and GPU loads rather than relying exclusively on the GPU, and can run well on lower-end gaming hardware. We run the game’s built-in benchmark on systems with a discrete GPU using the Highest quality preset in 1,920×1,080 resolution.
UL Procyon Video and Photo Editing tests
If a system meets the baseline requirements to run Adobe Premiere Pro (for the video test) and Photoshop with Photoshop Lightroom Classic (for the photo test), we use these two benchmarks at 1,920×1,080-pixel resolution to measure a system's suitability for content creation.
Battery life test
For all computers with a battery, we change the settings to keep the system from going to sleep or hibernating, disable pop-ups and notifications that may interfere with the test, and set screen brightness and volume (output to earbuds) to 50%. We then stream a looped, custom YouTube video over Wi-Fi in Chrome and use a timer app to track how long the system remains active.
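The timing itself doesn't need anything fancy. Here's a rough, illustrative sketch of a rundown timer (not the app we actually use) that assumes the third-party psutil library; the streaming workload runs separately in the browser, as described above.

```python
# Illustrative sketch of the timing side of a battery rundown test.
# Requires: pip install psutil. In practice the log should go to a file
# (or a second machine), since console output is lost when the battery dies.
import time
import psutil

POLL_SECONDS = 60
start = time.monotonic()

while True:
    battery = psutil.sensors_battery()
    if battery is None:  # no battery detected (e.g., a desktop)
        print("No battery found; aborting.")
        break
    elapsed_min = (time.monotonic() - start) / 60
    print(f"{elapsed_min:6.1f} min | {battery.percent:3.0f}% remaining")
    if battery.percent <= 1 and not battery.power_plugged:
        print(f"Rundown logged {elapsed_min:.1f} minutes before shutdown.")
        break
    time.sleep(POLL_SECONDS)
```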
JetStream 2, MotionMark and WebXPRT 3
We run these browser-based tests to evaluate Chromebook performance, and occasionally run them on Windows systems for comparison.

Additional testing
We may run a number of additional tests or variations on the standard tests, such as Geekbench 5 and Cinebench on battery power, to provide more context for a system’s performance.
Discretionary testing can also include DLSS (on Nvidia) or FidelityFX Super Resolution (on AMD) performance where supported by in-game benchmarks, or additional in-game benchmarks such as Watch Dogs: Legion. For systems with midrange and above GPUs, we sometimes run SPECviewperf 2020, which covers professional content creation and analysis workloads beyond photo and video editing.
As part of a review, we often include a comparison chart of scores from relevant tests across comparable products. When we make a major change to testing, such as moving from one version of a test to another, we double-test both versions or the entire old and new set to build up a database of comparison data.
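As a trivial, hypothetical illustration of what goes into such a chart (every system name and score below is a made-up placeholder, not a real result), assembling one is just a matter of lining up each system's scores per test:

```python
# Hypothetical comparison chart built from placeholder benchmark scores.
results = {
    "Laptop A": {"Geekbench 5 (multicore)": 9800,
                 "Cinebench R23 (multicore)": 12400},
    "Laptop B": {"Geekbench 5 (multicore)": 11200,
                 "Cinebench R23 (multicore)": 14100},
}

# Collect the union of test names, then print one aligned row per system.
tests = sorted({name for scores in results.values() for name in scores})
print(f"{'System':<10}" + "".join(f"{name:>28}" for name in tests))
for system, scores in results.items():
    print(f"{system:<10}"
          + "".join(f"{scores.get(name, 'n/a'):>28}" for name in tests))
```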
With so many computers using the same handful of CPUs and GPUs, the same operating systems and similar amounts of RAM and storage, these benchmark results usually match our expectations. That means that by looking at a system's specs on paper, you can get a good idea of how it will perform. When a system doesn't match those expectations, that's where things get interesting, and we investigate the cause. Sometimes it's faulty hardware; sometimes it's poor optimization or a compatibility issue. Identifying these edge cases is one of the reasons we test so extensively: so you can avoid buying a lemon.
Here are some of our current favorites, based in part on this testing, but also on our hands-on experience, each system's design and ergonomics and, very importantly, its overall value. That last part matters because, as a long-time member of this team, I want everyone who uses a CNET review to decide what to buy to know that I take your money as seriously as you do.