AIFlow simplifies building AI pipelines using the builder pattern - see aiflow.py.
No other files or folders in the repo are needed to run your flows.
The code has been optimized using the Aider tool.
- AIFlow demo - a Jupyter notebook showing the works
- Empty book to start - a Jupyter notebook to start your own project
- Generating a real book - a project I did to generate a book inspired by The Hitchhiker's Guide (a fun project, no real business goals). PDF version after edits - the version after manual layout edits and adding images generated with DALL·E 3.
General method naming:
- `get` returns data from the class in a structured format (e.g. JSON, a list of strings)
- `display` shows output on the console, which is very helpful in notebooks
- `save` and `load` write output to / read it from a file
- `set` defines a variable or config setting of the class
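The naming conventions and the builder pattern can be illustrated with a minimal, self-contained sketch. This is NOT the real AIFlow class; all names and fields below are illustrative stand-ins:

```python
# Minimal stand-in showing the get/display/save/set naming conventions
# and builder-style chaining. Not the actual AIFlow implementation.
import json


class MiniFlow:
    def __init__(self):
        self.config = {"temperature": 0, "max_tokens": 150}
        self.context = {}

    # set_* methods define config and return self, enabling chaining
    def set_temperature(self, temperature=0):
        self.config["temperature"] = temperature
        return self

    def set_context_of(self, content="", label="latest"):
        self.context[label] = content
        return self

    # get_* methods return structured data (here: a dict)
    def get_config(self):
        return dict(self.config)

    # display_* methods print to the console for notebook use
    def display_context_of(self, label="latest"):
        print(self.context.get(label, ""))
        return self

    # save_* methods write data to a file
    def save_context_to_file(self, label="latest", filename="context.json"):
        with open(filename, "w") as f:
            json.dump({label: self.context.get(label, "")}, f)
        return self


# Builder-style chaining: every mutator returns self
flow = MiniFlow().set_temperature(0.7).set_context_of("Hello", label="greeting")
```

Because every mutating method returns `self`, whole pipelines can be written as one chained expression, which is the style the notebooks in this repo use.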
- `__init__(self, api_key, model=Model.GPT_4, temperature=0, max_tokens=150)`: Initialize the AIFlow class with API key, model, temperature, and max tokens.
- `set_temperature(self, temperature=0)`: Set the temperature for the model.
- `set_model(self, model=Model.GPT_4)`: Set the model to be used.
- `set_max_tokens(self, max_tokens=150)`: Set the maximum number of tokens.
- `set_json_output(self, json_mode=False)`: Set the output format to JSON.
- `display_model_config(self)`: Display the current model configuration.
- `get_token_usage(self)`: Get the token usage statistics.
- `set_output_folder(self, folder="")`: Set the default folder for output.
- `set_verbose(self, level=True)`: Set the verbosity level.
- `set_step_save(self, step=False)`: Enable or disable saving state per step.
- `display_internal_data(self)`: Display internal data for debugging.
- `clear_internal_data(self)`: Clear internal data.
- `pretty_print_messages(self)`: Pretty-print chat messages.
- `pretty_print_messages_to_file(self, file_name="output.txt", html=True)`: Pretty-print chat messages to a file.
- `set_system_prompt(self, prompt="")`: Set the system prompt.
- `add_user_chat(self, prompt, label="latest")`: Add a user chat message and get a response.
- `filter_messages(self, func)`: Filter chat messages using a function.
- `reduce_messages_to_text(self, func)`: Reduce chat messages to text using a function.
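`filter_messages` and `reduce_messages_to_text` both take a function argument. A plausible sketch of how such callables operate on a chat-message list follows; the message structure shown here is an assumption, not taken from aiflow.py:

```python
# Hypothetical sketch of function-driven message filtering and reduction.
# The dict-based message format is assumed, not the actual AIFlow internals.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Name a planet."},
    {"role": "assistant", "content": "Mars."},
]

# A filter function keeps only the messages you care about
assistant_only = [m for m in messages if m["role"] == "assistant"]


# A reduce function collapses messages into a single text blob
def to_text(msgs):
    return "\n".join(f"{m['role']}: {m['content']}" for m in msgs)


text = to_text(assistant_only)
```

In AIFlow you would pass such a callable directly, e.g. `flow.filter_messages(lambda m: m["role"] == "assistant")`, assuming the real class applies your function per message.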
- `generate_completion(self, prompt, label="latest")`: Get a completion for a given prompt.
- `generate_json_completion(self, prompt, label="latest", schema=BaseModel)`: Get a JSON completion for a given prompt, conforming to a schema.
- `replace_tags_with_content(self, input_string="")`: Replace tags in the input string with context content.
- `copy_latest_to(self, label="latest")`: Copy the latest context to a specified label.
- `transform_context(self, label="latest", func=lambda x: x)`: Transform the context using a function.
- `set_context_of(self, content="", label="latest")`: Set the context for a specified label.
- `delete_context(self, label="latest")`: Delete the context for a specified label.
- `display_context_of(self, label="latest")`: Show the context for a specified label.
- `display_context_keys(self)`: Show all context keys.
- `return_context_keys(self)`: Return all context keys.
- `load_to_context(self, filename, label="latest_file")`: Load content from a file into the context.
- `save_context_to_file(self, label="latest", filename="")`: Dump the context to a file.
- `save_context_to_files(self)`: Dump all contexts to files.
- `save_context_to_markdown(self, output_filename="content.md")`: Dump the context to a Markdown file.
- `load_multiple_context_from_file(self, output_filename="context_stuff.txt")`: Load multiple context entries from one file.
- `generate_headings_for_contexts(self, labels=[], prompt="Generate a short 10 word summary of the following content:\n", replace=True)`: Generate headings for multiple contexts.
- `generate_heading_for_context(self, label="latest", prompt="Generate a short 10 word summary of the following content:\n", replace=True)`: Generate a heading for a single context.
- `save_context_to_docx(self, output_filename, chapters_to_include=[])`: Save the context to a DOCX file.
- `save_context_to_html(self, output_filename, chapters_to_include=[])`: Save the context to an HTML file.
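The context store is essentially a label-to-content mapping, and `replace_tags_with_content` substitutes tags in a string with stored content. The sketch below assumes a `[label]` tag syntax, which is a guess; check aiflow.py for the real format:

```python
import re

# Hypothetical sketch of tag substitution against a context dict.
# The [label] tag syntax is an assumption, not confirmed by aiflow.py.
context = {"title": "The Guide", "latest": "Don't panic."}


def replace_tags(input_string, context):
    # Replace each [label] with the matching context entry;
    # unknown labels are left untouched
    return re.sub(
        r"\[(\w+)\]",
        lambda m: context.get(m.group(1), m.group(0)),
        input_string,
    )


result = replace_tags("Chapter from [title]: [latest]", context)
```

This pattern lets one step's output (stored under a label) be spliced into the next step's prompt, which is what makes chained flows composable.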
- `generate_image(self, model="dall-e-2", style="vivid", response_format="url", prompt="A white siamese cat", size="1024x1024", quality="standard", n=1, label="latest_image", html=False)`: Generate an image.
- `save_image_to_file(self, label="latest_image", filename="")`: Save the generated image to a file.
- `analyze_image(self, image="", prompt="What's in this image?", model="gpt-4o", label="latest", detail="low", max_tokens=300)`: Analyze an image.
- `generate_speech(self, model="tts-1", voice="alloy", response_format="mp3", prompt="A white siamese cat", speed=1, filename="", label="latest_speech", html=False)`: Generate speech from text.
- `transcribe_audio(self, filename="", model="whisper-1", language="en", prompt="", response_format="text", temperature=0, label="latest")`: Transcribe audio to text.
- `moderate_content(self, prompt="", label="latest_moderation")`: Run content moderation on a given prompt.
- `save_internal_state(self, filename="")`: Save the internal state to a file.
- `load_internal_state(self, filename="state.json")`: Load the internal state from a file.
- `get_latest_context_as_text(self)`: Get the latest context as text.
- `get_context_as_text(self, label="latest")`: Get the context as text for a specified label.
- `get_reduced_chat_messages_as_text(self, func)`: Get reduced chat messages as text using a function.
- `display_latest_context_as_markdown(self)`: Display the latest context as Markdown.
- `display_context_as_markdown(self, label="latest")`: Display the context as Markdown for a specified label.
- `execute_function(self, func=lambda: "", label="")`: Run a function that may return something or nothing.
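`transform_context` and `execute_function` accept plain Python callables, so arbitrary text processing can be dropped into a flow. A minimal stand-in showing the pattern (not the real class):

```python
# Stand-in illustrating callable-based context transforms.
# This mimics the shape of transform_context, not the real AIFlow API.
context = {"latest": "hello world"}


def transform_context(context, label="latest", func=lambda x: x):
    # Apply func to the stored content and write the result back
    context[label] = func(context[label])
    return context


transform_context(context, func=str.upper)
```

Any unary function works here: a lambda, `str.strip`, or a custom cleaner, which keeps LLM calls and ordinary Python post-processing in the same chained pipeline.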