- WDL conversion for more realistic WDL score and contempt.
- Fixes for contempt with infinite search/pondering and for the WDL display when pondering.
- A new spinlock implementation (selected with -search-spin-backoff) to help with many cpu threads (e.g. 128 threads), obviously for cpu backends only.
- The Python bindings are available as a package, see the README for instructions.
- Some users experienced memory issues with onnx-dml, so the defaults were changed. This may affect performance, in which case you can use the steps=8 backend option to get the old behavior.
- The onnx-dml package now includes a directml.dll installation script.
- Added the threads backend option to onnx, defaulting to 0 (let the onnxruntime decide) except for onnx-cpu, which defaults to 1.
- Some performance improvements for the cuda, onnx and blas backends.
- Persistent L2 cache optimization for the cuda backend. Use the cache_opt=true backend option to turn it on.
- Added a first-move-bonus option to the legacy time manager, to accompany book-ply-bonus for shallow openings.
- Updated describenet for new net architectures.
- Simplified to a single -draw-score parameter, adjusting the draw score from white's perspective: 0 gives standard scoring, -1 gives Armageddon scoring.
- WDL_mu score type is now the default, and the -moves-left-threshold default was changed from 0 to 0.8.
- Changed the mlh threshold effect to create a smooth transition.
- A new score type, WDL_mu, which follows the new eval convention, where +1.00 means a 50% white win chance.
- Adds an Elo-based WDL transformation of the NN value head output. Helps with more accurate play at high level (WDL sharpening), more aggressive play against weaker opponents and draw-avoiding openings (contempt), and piece odds play. A blog post explaining in detail how it works is coming soon.
- Support for networks with attention body and smolgen added to the blas, cuda, metal and onnx backends.
- Added TC-dependent output to the backendbench assistant.
- Support for double Fischer random chess (dfrc).
- Support for using a pgn book with long lines in training: selfplay can start at a random point in the book.
- The default net is now 791556 for most backends, except opencl and dx12, which get 753723 (as they lack attention policy support).
- The onednn package comes with the latest dnnl, compiled to allow running on an intel gpu by adding gpu=0 to the backend options.
- The onnx backends can now use fp16 when running with a network file (not with onnx model files). This is the default for onnx-cuda and onnx-dml, and can be switched on or off by setting the fp16 backend option to true or false respectively.
- Non-multigather (legacy) search code and the -multigather option are removed.
- Partial attention policy support in the onednn backend (good enough for T79).
- Full attention policy support in the cuda, cudnn, metal, onnx, blas, dnnl and eigen backends.
- New onnx-dml backend to use DirectML under Windows; it has better net compatibility than dx12 and is faster than opencl. See the README for usage instructions; a separate download of the DirectML dll is required.
- The metal backend is now the default backend for macos builds.
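The -draw-score item can be made concrete with a small sketch. This is an illustrative assumption, not lc0's actual implementation: expected_score is a hypothetical helper showing how a single draw-score parameter shifts the value of a draw from white's perspective.

```python
def expected_score(win: float, draw: float, loss: float,
                   draw_score: float = 0.0) -> float:
    """Hypothetical expected score in [-1, 1] from white's perspective.

    draw_score = 0 reproduces standard scoring (a draw is worth 0);
    draw_score = -1 gives Armageddon scoring, where a draw counts
    as a loss for white.
    """
    # Probabilities must form a distribution over the three outcomes.
    assert abs(win + draw + loss - 1.0) < 1e-9
    return win * 1.0 + draw * draw_score + loss * -1.0

# With 40% win, 40% draw, 20% loss:
standard = expected_score(0.4, 0.4, 0.2)          # 0.4 - 0.2 = 0.2
armageddon = expected_score(0.4, 0.4, 0.2, -1.0)  # 0.4 - 0.4 - 0.2 = -0.2
```

The point of the single parameter is that intermediate values between 0 and -1 smoothly interpolate between standard and Armageddon scoring.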