YourTTS – Swiss Knife for Text-to-Speech

👉 Try out YourTTS demo
👉 Visit YourTTS project page
👉 Try YourTTS on Colab
👉 Try voice conversion with YourTTS on Colab

The recent surge of new end-to-end deep learning models has enabled new and exciting Text-to-Speech (TTS) use-cases with impressive natural-sounding results. However, most of these models are trained on massive datasets […]

Posted in Machine Learning, Research Notes, Technology

Gradual Training with Tacotron for Faster Convergence

Tacotron is a commonly used Text-to-Speech architecture. It is a very flexible alternative to traditional solutions: it only requires text and the corresponding voice clips to train the model, avoiding the toil of fine-grained annotation of the data. However, Tacotron can also be very time-consuming to train, especially if you don’t know the right […]
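Gradual training here means starting the decoder with a large reduction factor r (frames predicted per decoder step) and lowering it as training progresses. A minimal sketch of such a schedule, assuming a list of (start_step, r, batch_size) triples; the specific values and names are illustrative, not taken from the post:

```python
# Hypothetical gradual-training schedule: (start_step, reduction_factor, batch_size).
# Larger r early on converges faster; smaller r later improves quality.
GRADUAL_SCHEDULE = [(0, 7, 32), (10_000, 5, 32), (50_000, 3, 32), (130_000, 2, 16)]

def reduction_factor(step, schedule=GRADUAL_SCHEDULE):
    """Return the decoder reduction factor r active at a given training step."""
    r = schedule[0][1]
    for start, r_val, _batch in schedule:
        if step >= start:
            r = r_val
    return r
```

At each phase boundary the data loader and decoder are rebuilt with the new r, then training resumes from the same checkpoint.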

Posted in Machine Learning, Research, Research Notes, Uncategorized

Irregular Regularization Methods

Mixup – Shake-Shake – MixFeat – Speed Perturbation (ASR) – Please feel free to extend the list…
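As a taste of the first item, here is a minimal pure-Python sketch of mixup: two examples and their one-hot labels are blended by a random convex combination. The Beta(0.2, 0.2) mixing distribution is a commonly used choice, not a value from this list:

```python
import random

def mix(a, b, lam):
    # Convex combination of two equal-length vectors.
    return [lam * u + (1 - lam) * v for u, v in zip(a, b)]

def mixup(x1, y1, x2, y2, alpha=0.2):
    # Mixing weight drawn from Beta(alpha, alpha); alpha=0.2 is a
    # common default in the literature, assumed here for illustration.
    lam = random.betavariate(alpha, alpha)
    return mix(x1, x2, lam), mix(y1, y2, lam)
```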

Posted in Machine Learning, Research Notes

Text to Speech Deep Learning Architectures

Small Intro and Background: Up until now, I have worked on a variety of data types and ML problems, except audio. Now it is time to learn it. And the first thing to do is a comprehensive literature review (like a boss). Here I’d like to share the top-notch DL architectures dealing with TTS (Text to […]

Posted in Machine Learning, Research, Research Notes

Why mere Machine Learning cannot predict Bitcoin price

Lately, I have been studying time series to see something beyond the limits of my experience. I decided to use what I learned for cryptocurrency price prediction, with a hunch of getting rich. Kidding? Or not :). As I saw more of the intricacies of the problem, I dug deeper and found a new challenge […]

Posted in Machine Learning, Research, Research Notes

Online Hard Example Mining on PyTorch

Online Hard Example Mining (OHEM) is a way to pick hard examples, at reduced computational cost, to improve your network’s performance on borderline cases, which in turn generalizes to overall performance. It is mostly used for object detection. Suppose you’d like to train a car detector and you have positive (with car) and negative images (with […]
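The core of OHEM is simple: compute a per-example loss for the whole batch, then backpropagate only through the hardest (highest-loss) examples. A minimal pure-Python sketch; the `keep_ratio` value is an illustrative assumption, and in PyTorch the same selection is typically done with a loss built with `reduction='none'` followed by `torch.topk`:

```python
def ohem_loss(per_example_losses, keep_ratio=0.25):
    """Average only the hardest fraction of a batch's losses.

    keep_ratio is a hypothetical hyper-parameter (fraction of the
    batch kept), not a value from the post.
    """
    k = max(1, int(len(per_example_losses) * keep_ratio))
    hardest = sorted(per_example_losses, reverse=True)[:k]
    return sum(hardest) / k
```

Because easy examples contribute near-zero gradients anyway, discarding them saves computation in the backward pass while focusing updates on the borderline cases.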

Posted in CodeBook, Machine Learning

Paper review: EraseReLU

paper: ReLU can be seen as a way to train an ensemble of an exponential number of linear models, owing to its zeroing effect. Each iteration activates a random set of units, hence a different combination of linear models. Relying on this observation, the authors discuss that it might be useful to remove non-linearities from some layers […]
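A toy pure-Python sketch of the EraseReLU idea: apply a stack of layers, but skip the ReLU after the layers whose indices are marked as erased. The function names and scalar layers are illustrative, not from the paper:

```python
def forward(x, layers, erased=frozenset()):
    """Apply scalar layers in sequence; omit ReLU at erased indices.

    layers: list of 1-argument functions (stand-ins for real layers).
    erased: indices whose following ReLU is removed (the EraseReLU idea).
    """
    for i, layer in enumerate(layers):
        x = layer(x)
        if i not in erased:
            x = max(0.0, x)  # ReLU
    return x
```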

Posted in Machine Learning, Research, Research Notes

Designing a Deep Learning Project

There are numerous online and offline technical resources about deep learning. Every day, people publish new papers and write new things. However, it is rare to see resources teaching the practical concerns of structuring a deep learning project from top to bottom, from problem to solution. People know fancy technicalities, but even some experienced people feel lost […]

Posted in Machine Learning, Research Notes

Paper Review: Self-Normalizing Neural Networks

One of the main problems of neural networks is taming layer activations so that one can obtain stable gradients and learn faster without any confining factor. Batch Normalization showed us that keeping values at mean 0 and variance 1 seems to make things work. However, despite the indisputable effectiveness of BN, it adds more […]
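The paper's proposed activation, SELU, bakes this self-normalizing behavior into the activation function itself, using two fixed constants derived in the paper. A minimal sketch for a scalar input:

```python
import math

# Constants from the SELU paper (Klambauer et al., 2017).
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    # Scaled exponential linear unit: with properly initialized weights,
    # it pushes activations toward zero mean and unit variance without
    # computing batch statistics.
    return LAMBDA * x if x > 0 else LAMBDA * ALPHA * (math.exp(x) - 1.0)
```

For large negative inputs the output saturates near -LAMBDA * ALPHA, which bounds the activations from below.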

Posted in Machine Learning, Research Notes

Paper Notes: The Shattered Gradients Problem …

paper: The full title of the paper is “The Shattered Gradients Problem: If resnets are the answer, then what is the question?”. It is really interesting work, with all its findings about the gradient dynamics of neural networks. It also examines Batch Normalization (BN) and Residual Networks (ResNet) under this problem. The problem, dubbed “shattered gradients”, is described as […]

Posted in Computer Vision, Machine Learning, Research Notes