Lenovo has unveiled a new ThinkStation model, the P320 Tiny, based on a Kaby Lake / Q270 platform with NVIDIA's Quadro P600 GPU. The unique aspect is the dimensions - at 1.4" x 7.1" x 7.2" (1L in volume), it is one of the smallest systems we have seen that includes a discrete GPU. To achieve this compact size, the 135W power adapter is external to the system.

The P320 Tiny supports Kaby Lake CPUs with TDPs of up to 35W (such as the Intel Core i7-7700T). NVIDIA's Quadro P600 is a GP107-based GPU with a 40W TDP. The system comes with two DDR4 SODIMM slots and two M.2 NVMe SSD slots. There is a rich variety of I/O ports - audio jacks in the front, a total of six USB 3.0 ports spread across the front and rear, an RJ-45 GbE port, and six display outputs (4x mini-DP + 2x DP). Thanks to the Quadro GPU, the P320 Tiny carries ISV certifications for applications such as AutoCAD.

Lenovo ThinkStation P320 Tiny: General Specifications
CPU         Intel Kaby Lake (up to Core i7, 35W TDP max.)
Chipset     Intel Q270
RAM         Up to 32 GB DDR4-2400 (2x SODIMM)
GPU         NVIDIA Quadro P600
Storage     2x M.2 PCIe: up to 1 TB NVMe SSD each; ODD optional with add-on
Networking  Gigabit Ethernet; Intel 802.11ac 2x2 (2.4 GHz / 5 GHz) + Bluetooth 4.0
I/O         6x USB 3.0; serial port optional
Dimensions  1.4" x 7.1" x 7.2"
Weight      2.9 lbs

The board used in the system appears to be a custom one - larger than a mini-STX board but smaller than an ITX one. The system is well suited to space-constrained setups and offers extensibility options such as add-ons for extra USB ports and a COM port, or for an optical drive, as shown in the gallery below.

As for operating systems, the new Lenovo ThinkStation P320 Tiny workstation supports both Windows and Linux. The P320 Tiny starts at $799 and is available now.

26 Comments

  • CaedenV - Saturday, June 24, 2017 - link

    Why? Because there is (sadly) no market for it.
    Just look at the mATX standard. Most people who buy mATX end up putting it into a mid- or full-tower case. I have an mATX in a Gen1 Cosmos... it looks downright silly in there, but I already had the case. But that is the issue most builders have... you can go smaller and smaller, but at some point you have to start getting custom small-batch cases, PSUs, etc. that just cost too much. ITX is about as small as you can go while still using standard parts.

    And then there is a flip side to this: if you have to go smaller than ITX for some reason, then it is probably for a very specific application which has other thermal and size constraints to contend with. The simple matter is that once you get so small you are not doing standard work, so the standards fall apart. Small-batch manufacturing of boards, cases, and PSUs adds up quick (though not as bad as it used to be).

    Then there is another big issue: the technology keeps changing too fast. Not to demean the people who do the work, but it is not difficult to slap a bunch of parts on an ATX board and ship them. There are lots of known quantities (thermals, inter-board interference, etc.) and it makes the work fairly straightforward (my dad does board design... I know it isn't quite that simple). But when you get to small-scale parts you start pushing things to the limits. Just how close can you put the CPU to the RAM slots without interference? Do you need a GPU or other expansion slot? Need to add SATA/M.2 or just integrate it? So you figure everything out, publish a standard... and a new chip comes out, or a new RAM standard, or more traditionally mobile parts can be used now. It upsets the apple cart and you have to start over from the ground up. Meanwhile ATX is 20+ years old and just keeps on working. Unless you have a very specific application, there is just no payback time for ultra-small form factors. It is a lot of work, and short-lived. No time to get an ecosystem built up around it.
    Reply
  • fazalmajid - Tuesday, June 20, 2017 - link

    No Xeon means no ECC DRAM. How is this a workstation? Reply
  • Einy0 - Tuesday, June 20, 2017 - link

    That's an opinion... According to a Google study, only 3.2% of all DIMMs will experience at least one correctable or uncorrectable error per year of constant use.

    For servers ECC is important but for most workstations ECC is overkill.
    Reply
  • DanNeely - Tuesday, June 20, 2017 - link

    You're forgetting the No True Scotsman's Workstation fallacy. A Real Workstation (tm) is used for mission critical work where a 12.2% or 22.9% (4 or 8 dimms) chance of an error/year is totally unacceptable. :eyeroll: Reply
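    The per-system figures in the comment above follow directly from the per-DIMM rate: assuming independent DIMM failures at the study's 3.2%/year, the chance that at least one of n DIMMs sees an error in a year is 1 - (1 - p)^n. A minimal sketch of that arithmetic:

    ```python
    # Chance that at least one of n DIMMs sees an error in a year,
    # assuming independent failures at a per-DIMM rate p.
    def system_error_rate(p, n):
        return 1 - (1 - p) ** n

    four_dimms = system_error_rate(0.032, 4)   # ~0.122, i.e. ~12.2%/year
    eight_dimms = system_error_rate(0.032, 8)  # ~0.229, i.e. ~22.9%/year
    ```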
  • benzosaurus - Tuesday, June 20, 2017 - link

    And the OS will respond to the single corrupted bit by immediately performing a hard shut-down, thus protecting you by ensuring that way more than one bit gets corrupted. Reply
  • DanNeely - Wednesday, June 21, 2017 - link

    ECC means that the single bit error will be corrected transparently to the rest of the system. Only much rarer double bit errors can bring the system down.

    And while a system shutdown means a loss of work since the last save, it's also a problem that announces itself so that it can be immediately corrected vs the corruption going initially undetected and tainting every result that builds off of it until the rot spreads to the point of producing something obviously wrong.
    Reply
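    The transparent correction described above can be illustrated with a toy single-error-correcting Hamming(7,4) code (real ECC DIMMs use a wider SECDED code over 64 data bits, with an extra parity bit to also detect double-bit errors, but the principle is the same): the parity-check syndrome points directly at the flipped bit, so it can be fixed without the rest of the system noticing.

    ```python
    # Toy Hamming(7,4) sketch: 4 data bits protected by 3 parity bits.
    # A single flipped bit is located by the syndrome and corrected in place.

    def encode(d):
        # d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def correct(c):
        # Returns (corrected codeword, error position or 0 if clean).
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3  # binary position of the flipped bit
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1
        return c, syndrome

    word = encode([1, 0, 1, 1])
    flipped = word[:]
    flipped[4] ^= 1                 # simulate a single-bit memory error
    fixed, pos = correct(flipped)   # fixed == word, pos == 5
    ```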
  • easp - Wednesday, June 21, 2017 - link

    Immediate shut-down, which is not what should happen with ECC errors, won't cause data corruption if one is using a good journaling file-system and has hardware caching configured properly. It may cause data loss back to the last committed transaction, but it won't leave data in an unknown state. Reply
  • cbm80 - Wednesday, June 21, 2017 - link

    "Random" memory errors are rare. Usually errors indicate a subtle hardware fault of some kind...and even simple parity is much better than nothing, since you can take corrective action. Reply
  • Samus - Wednesday, June 21, 2017 - link

    HP equips some of their USFFs with i3s instead of Xeons to add ECC support. For light applications, an i3 is more than sufficient, and in business I'd take an i3 with ECC over an i5 without it.

    That isn't to say you can't get a Z workstation from HP with an i5/i7 without ECC, but they are pretty rare. Most of the high-end HP i5/i7s (before Xeon becomes the de facto configuration) are the Elitedesk 800s, and they don't dare call those workstations like Lenovo does with their P (and even M) series.

    No wonder Lenovo is losing the sales crown. People are finally realizing that quality doesn't come cheap. And Lenovo is cheap, especially compared to HP. But IT departments have spoken and enterprise is moving back to HP and Dell.
    Reply
  • HeyEric - Wednesday, June 21, 2017 - link

    You should fact check before you troll.
    First, I would like to see you get a Z workstation from HP with ANY 7th gen Core i processor AND ECC memory. The current gen of Core i does not support ECC memory. That's the only difference between Core i and Xeon processors, and now that Intel has mobile Xeon, they will likely close that loophole that allows certain SKUs to support ECC.

    Second, HP likely sells almost as many workstation-class machines with i5/i7 as they do Xeon E3/E5. Based on volumes in the WS market, that entry space matches or exceeds the volume of the higher-end systems. And last time I checked, they're still called HP Z2 and Z240 WORKSTATIONS. Oh, and try to buy a Z2 with a Xeon AND discrete graphics. Please. Try.

    Third, when has Lenovo referred to an M Series as a workstation? Please do tell.

    Finally, Lenovo isn't losing the sales crown. If HP manages to seize it for a quarter, it is NOT in the corporate space, it's in consumer, where there generally aren't IT departments "speaking," they're buying based on... cheap.
    Reply
