Tesla runs inference in INT8, which is far more efficient than FP16...
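The efficiency comes from quantization: FP weights and activations are mapped to 8-bit integers, halving memory and bandwidth versus FP16 (and quartering it versus FP32). A minimal sketch of symmetric per-tensor INT8 quantization, using NumPy (hypothetical illustration, not Tesla's actual pipeline):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    # Symmetric quantization: scale so the largest magnitude maps to 127.
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate FP32 values from the INT8 codes.
    return q.astype(np.float32) * scale

weights = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(weights)

# INT8 stores 1 byte per value vs 4 for FP32 (2 for FP16),
# so memory and bandwidth shrink accordingly.
print(q.nbytes, weights.nbytes)  # 65536 262144
```

The rounding error per element is bounded by half the scale, which is why INT8 works well for inference while training usually stays in higher precision.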