Mirror of https://github.com/neonbjb/tortoise-tts.git (synced 2026-02-02 05:44:23 +01:00)
README typo fix.
commit 71cbd6cc2b
parent 8d342cfbc0
@@ -253,7 +253,7 @@ of the model increases multiplicatively. On enterprise-grade hardware, this is n
 exceptionally wide buses that can accommodate this bandwidth. I cannot afford enterprise hardware, though, so I am stuck.

 I want to mention here
-that I think Tortoise could do be a **lot** better. The three major components of Tortoise are either vanilla Transformer Encoder stacks
+that I think Tortoise could be a **lot** better. The three major components of Tortoise are either vanilla Transformer Encoder stacks
 or Decoder stacks. Both of these types of models have a rich experimental history with scaling in the NLP realm. I see no reason
 to believe that the same is not true of TTS.