From 71cbd6cc2bf086bc75689bfc622cf971e1981617 Mon Sep 17 00:00:00 2001
From: Huw Prosser <16668357+huwprosser@users.noreply.github.com>
Date: Fri, 3 Feb 2023 11:26:22 +0000
Subject: [PATCH] README typo fix.

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 450754e..368fd59 100644
--- a/README.md
+++ b/README.md
@@ -253,7 +253,7 @@ of the model increases multiplicatively. On enterprise-grade hardware, this is n
 exceptionally wide buses that can accommodate this bandwidth.
 
 I cannot afford enterprise hardware, though, so I am stuck. I want to mention here
-that I think Tortoise could do be a **lot** better. The three major components of Tortoise are either vanilla Transformer Encoder stacks
+that I think Tortoise could be a **lot** better. The three major components of Tortoise are either vanilla Transformer Encoder stacks
 or Decoder stacks. Both of these types of models have a rich experimental history with scaling in the NLP realm. I see
 no reason to believe that the same is not true of TTS.