It's great that companies do this, but I wish that they would also open-source the verification infrastructure at the same time.
Otherwise, anyone who wants to use it in a commercial setting has to either redo the verification effort and due diligence themselves, or just trust that the core works perfectly. Don't get me wrong, I trust IBM to build a working CPU, but that doesn't mean there aren't corner cases to be found when you put the core in a new SoC environment.
There is lots of open source HDL out there. We need re-usable verification infrastructure too!
Most of that verification infrastructure is tied to proprietary tooling in some way or another. In IBM's case, they have a lot of in-house EDA tooling that they aren't willing to release or even talk about much. They extended VHDL with "aspect-oriented programming" to implement their approach to verification. Most of the other verification tooling for Verilog (now SystemVerilog) started as proprietary extensions from some vendor (Intel, Synopsys, ...) and then made it into the SystemVerilog standard. None of those verification parts of SystemVerilog have an open source compiler that supports them.
The good news is that at least the simulation side has improved massively over the last few years. For synthesizable SystemVerilog you have Verilator (https://www.veripool.org/wiki/verilator), which compiles your design to C++, and then you can implement any verification methodology you like from scratch in C++ :). I did that for parts of an in-house POWER-based processor and it isn't actually as bad as it sounds. You also have the option to use DPI-C and interface with a C++ test driver that way. We managed to do extensive hardware/software co-simulation (full hardware abstraction layer + instruction compiler + FPGA + ASIC) for an accelerator we built, although using proprietary tools because we had Xilinx IP and VHDL involved.
Verilator is also used as a backend for Chisel.
Higan (née bsnes) is written in a very HDL-like way: it's almost entirely an engine for evaluating RTL graphs with coroutines. cen64 is very similar. 'On a clock edge, evaluate the next clock's input latches'.
But yes, it would be really nice to get Verilator's performance close to higan's (which is a multi-dimensional problem: latency versus throughput, etc.), so that you could compile the same source to either software or hardware. A MAME cabinet built around an FPGA with cached configuration images would be the bee's knees.
Exactly, and note: "The A2I core is compliant to Power ISA 2.06 and will need updates to be compliant with either version 3.0c or 3.1. Power ISA 3.0c and 3.1 are the two Power ISA versions contributed to OpenPOWER Foundation by IBM."
Who is going to do that, and how, without the verification infrastructure? (Verification is about the test infrastructure, not the Verilog simulator, which is a commodity.) The _real_ value is in the verification infrastructure.
IBM targeted POWER at the high-end server market. While that space suffers erosion from x86 encroaching from the low end, IBM is not known to play in commodity, price-pressured segments unless it can't avoid it. I'm quite sure that if IBM wanted to compete in the HEDT segment against Intel, it could offer a competitive product (as the Raptor workstation shows).