* fix: Fix parameter mismatch in EPUBBookLoaderHelper.translate_with_backoff
- Fix TypeError when calling translate_with_backoff with multiple arguments
- Add proper parameter handling in the decorated method
- Add jitter=None to prevent extra parameters from backoff decorator
- Improve code readability and error handling
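The fix above can be illustrated with a minimal, self-contained sketch of an exponential-backoff decorator. This is not the project's actual implementation (which uses the third-party `backoff` package); the decorator, names, and defaults here are assumptions chosen to show the two points in the entry: the wrapper forwards all positional and keyword arguments unchanged (avoiding the TypeError with multiple arguments), and `jitter=None` keeps the delay calculation from involving any extra parameters.

```python
import functools
import time

def with_backoff(max_tries=4, base=0.1, jitter=None):
    """Retry the wrapped callable with exponential backoff.

    jitter=None keeps the delay deterministic; passing a callable
    would let it perturb the delay before sleeping.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):  # forward every argument unchanged
            for attempt in range(max_tries):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_tries - 1:
                        raise  # out of retries, surface the error
                    delay = base * (2 ** attempt)
                    if jitter is not None:
                        delay = jitter(delay)
                    time.sleep(delay)
        return wrapper
    return decorator

calls = []

@with_backoff(max_tries=3, base=0.0)
def translate(text, index):
    # Fails once, then succeeds, to exercise the retry path.
    calls.append(index)
    if len(calls) < 2:
        raise RuntimeError("transient API error")
    return f"translated:{text}"

print(translate("hello", 1))  # succeeds on the second try
```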
* style: format code with black
---------
Co-authored-by: wenping <angenpn@gmail.com>
* chore: Bump google-generativeai and related dependencies
* feat: add support for --temperature option to gemini
* feat: add support for --interval option to gemini
* feat: add support for --model_list option to gemini
* feat: add support for --prompt option to gemini
* modify: model settings
* feat: add support for --use_context option to gemini
* feat: add support for rotate_key to gemini
* feat: add exponential backoff to gemini
* Update README.md
* fix: typos and apply black formatting
* Update make_test_ebook.yaml
* fix: cli
* fix: interval option implementation
* fix: interval for geminipro
* fix: recreate convo after rotating key
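A sketch of the idea behind the key-rotation fix above: after rotating to a new API key, the conversation object must be recreated rather than reused, since the old session is bound to the old key. The class and method names here are hypothetical stand-ins, not the project's real Gemini client.

```python
from itertools import cycle

class GeminiClientSketch:
    """Hypothetical client showing key rotation with session recreation."""

    def __init__(self, keys):
        self._keys = cycle(keys)
        self.current_key = next(self._keys)
        self.convo = self._new_convo()

    def _new_convo(self):
        # Stand-in for starting a fresh chat session with the active key.
        return {"key": self.current_key, "history": []}

    def rotate_key(self):
        self.current_key = next(self._keys)
        # Recreate the conversation so it is bound to the new key;
        # reusing the old session after rotation was the bug being fixed.
        self.convo = self._new_convo()

client = GeminiClientSketch(["key-a", "key-b"])
client.rotate_key()
print(client.convo["key"])  # key-b
```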
* Feat: combine multiple lines into one block
bug: some text is not replaced with the translation
* Fix: some text is not translated
known issues:
1. sometimes the original text still shows up
2. the resume function is not working
* Style: clean up code
fix(chatgptapi_translator): fix errors when using Azure OpenAI
1. Using Azure OpenAI raised an AttributeError because "api_base" was not set, so self.api_base = api_base was added.
2. At line 84, a TypeError occurred when the message content was None, so a guard was added to avoid this error.
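The None-content guard described above can be sketched as a small helper. The function name and the response shape are illustrative assumptions; the point is simply that a None content field is normalized to an empty string instead of flowing into string operations and raising a TypeError.

```python
def extract_content(choice):
    """Return the message text, treating a None content field as empty.

    Mirrors the guard added for Azure OpenAI responses, where the
    message content can be None and would otherwise raise a TypeError
    during later string handling.
    """
    content = choice.get("message", {}).get("content")
    return content if content is not None else ""

print(extract_content({"message": {"content": "hello"}}))  # hello
print(repr(extract_content({"message": {"content": None}})))  # ''
```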
* style: format code style
* style: format code style
* fix: lint
* fix: bugs when using exclude_filelists and only_filelists
1. progress bars: only count tags in files that will be included
2. temp_file: avoid mismatches between translated texts and original texts
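A minimal sketch of the filtering behind both fixes above: compute the set of files that will actually be translated, then derive the progress-bar total (and the temp-file contents) from that filtered list rather than from every file in the book. The function name and list-based signature are assumptions for illustration.

```python
def files_to_translate(all_files, only_filelist=None, exclude_filelist=None):
    """Select the files that will actually be translated.

    The progress-bar total should be computed from this filtered list,
    not from every file in the book, so the bar matches real work done.
    """
    if only_filelist is not None:
        selected = [f for f in all_files if f in only_filelist]
    else:
        selected = list(all_files)
    if exclude_filelist is not None:
        selected = [f for f in selected if f not in exclude_filelist]
    return selected

files = ["ch1.xhtml", "ch2.xhtml", "notes.xhtml"]
print(files_to_translate(files, exclude_filelist={"notes.xhtml"}))
```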
* simplify logic
* add `temperature` parameter to OpenAI-based translators and `Claude`
* add `temperature` parameter to book loaders
* add `--temperature` option to cli
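The three entries above plumb a sampling temperature from the CLI down to the translator. A hedged sketch of that wiring, with argparse for the option and a hypothetical translator class (the real project's class names, defaults, and option handling may differ):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="book translation CLI (sketch)")
    # Sampling temperature forwarded to the translator; 1.0 is a common
    # API default, but the project's actual default may differ.
    parser.add_argument("--temperature", type=float, default=1.0)
    return parser

class TranslatorSketch:
    """Hypothetical translator that accepts a temperature keyword."""

    def __init__(self, key, temperature=1.0):
        self.key = key
        self.temperature = temperature

args = build_parser().parse_args(["--temperature", "0.7"])
translator = TranslatorSketch(key="sk-placeholder", temperature=args.temperature)
print(translator.temperature)  # 0.7
```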
* feat: add gpt4 support
* Update prompt_template_sample.json
* fix: cleaned up formatting for quotes
* feature: added context functionality (--use_context) for the GPT4 model, which accumulates a running paragraph that gives historical context for the current passage
* fix: propagated context_flag argument to txt_loader and srt_loader
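The --use_context idea described above can be sketched as a bounded buffer of recent paragraphs that is prepended to the prompt for the current passage. The window size, class name, and prompt wording here are all illustrative assumptions, not the project's actual implementation.

```python
from collections import deque

class ContextBuffer:
    """Accumulate recent paragraphs as historical context for translation."""

    def __init__(self, max_paragraphs=3):
        # Bounded window: old paragraphs fall off automatically.
        self._window = deque(maxlen=max_paragraphs)

    def add(self, paragraph):
        self._window.append(paragraph)

    def build_prompt(self, passage):
        context = " ".join(self._window)
        if context:
            return f"Previous context: {context}\nTranslate: {passage}"
        return f"Translate: {passage}"

buf = ContextBuffer(max_paragraphs=2)
buf.add("Chapter one begins.")
buf.add("The hero sets out.")
print(buf.build_prompt("He reaches the city."))
```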
* Updated Readme to include GPT4 parameters
* Removed debug output
* fix: lint
---------
Co-authored-by: yihong <zouzou0208@gmail.com>