
pytorch

The State of Machine Learning Frameworks in 2019
In 2019, the war for ML frameworks has two remaining main contenders: PyTorch and TensorFlow. My analysis suggests that researchers are abandoning TensorFlow and flocking to PyTorch in droves. Meanwhile in industry, TensorFlow is currently the platform of choice, but that may not be true for long.

Another difference is deployment. Researchers will run experiments on their own machines or on a server cluster dedicated to running research jobs. Industry, on the other hand, has a litany of restrictions and requirements:

No Python. Some companies will run servers for which the overhead of the Python runtime is too much to take.
Mobile. You can’t embed a Python interpreter in your mobile binary.
Serving. A catch-all for features like no-downtime updates of models, switching between models seamlessly, batching at prediction time, etc.

TensorFlow was built specifically around these requirements and has solutions for all of them: the graph format and execution engine natively have no need for Python, while TensorFlow Lite and TensorFlow Serving address mobile and serving considerations, respectively.

Historically, PyTorch has fallen short in catering to these considerations, and as a result most companies are currently using TensorFlow in production.

Near the end of 2018, two major events threw a wrench into the story:

PyTorch introduced the JIT compiler and “TorchScript,” adding graph-based features.
TensorFlow announced they were moving to eager mode by default in 2.0.

Clearly, these were moves attempting to address their respective weaknesses. So what exactly are these features, and what do they have to offer?

The PyTorch JIT is an intermediate representation (IR) for PyTorch called TorchScript. TorchScript is the “graph” representation of PyTorch. You can turn a regular PyTorch model into TorchScript using either tracing or script mode. Tracing takes a function and an input, records the operations that were executed with that input, and constructs the IR. Although straightforward, tracing has its downsides: it can’t capture control flow that didn’t execute. If the trace took the true branch of a conditional, for example, the false branch is simply absent from the graph. Script mode, by contrast, compiles the function’s source directly, so control flow is preserved.
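A minimal sketch of the tracing pitfall described above (the function and inputs are illustrative, not from the article):

    import torch

    def f(x):
        # Data-dependent control flow: which branch runs depends on the input.
        if x.sum() > 0:
            return x * 2
        return x + 1

    # Tracing with a positive input records only the ops that actually ran,
    # so the traced graph contains just the x * 2 branch (PyTorch emits a
    # TracerWarning about the branch it couldn't capture).
    traced = torch.jit.trace(f, torch.ones(3))
    print(traced(-torch.ones(3)))    # tensor([-2., -2., -2.]) -- wrong branch

    # Script mode compiles the source itself, so both branches survive.
    scripted = torch.jit.script(f)
    print(scripted(-torch.ones(3)))  # tensor([0., 0., 0.]) -- correct branch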
pytorch  tensorflow 
5 days ago by mike
DeepSpeed
"DeepSpeed reduces the training memory footprint through a novel solution called Zero Redundancy Optimizer (ZeRO). Unlike basic data parallelism where memory states are replicated across data-parallel processes, ZeRO partitions model states to save significant memory."
libs  optimization  neural-net  parallel  pytorch  zero  microsoft 
9 days ago by arsyed
outcastofmusic/quick-nlp: Pytorch NLP library based on FastAI
A PyTorch NLP library based on FastAI.
pytorch  attention  code  deep-learning  github  library  nlp  seq2seq 
9 days ago by johns

Copy this bookmark:





to read