DeepSpeed-Inference introduces several features to efficiently serve transformer-based PyTorch models. It supports model parallelism (MP) to serve large models that would otherwise not fit in GPU memory ...