47 implement aws api as a potential runner type #59
```diff
@@ -1,7 +1,9 @@
 # defines a class object for a task
 # from openai import OpenAI
 import os
-import yaml # requires pyyaml
+import yaml
+import json
+import boto3
 import pandas as pd
 from ollama import chat, ChatResponse, Client
 from benchtools.logger import init_log_folder, log_interaction
```
```diff
@@ -204,11 +206,18 @@ def generate_prompts(self):
         # TODO: consider if this could be a generator function if there are a lot of variants, to avoid memory issues. For now, we will assume that the number of variants is small enough to generate all prompts at once.
         if self.variant_values:
             id_prompt_list = []
-            for value_set in self.variant_values:
+            keys = self.variant_values.keys()
+            for i in range(len(list(self.variant_values.values())[0])):
+                single_dict={}
                 prompt = self.template
-                prompt = prompt.format(**value_set)
-                prompt_id = self.prompt_id_generator(self.task_id,value_set)
+                for key in keys:
+                    single_dict.update({key: self.variant_values[key][i]})
+                prompt = prompt.format(**single_dict)
+                prompt_id = self.prompt_id_generator(self.task_id,single_dict)
                 id_prompt_list.append((prompt_id,prompt))
             return id_prompt_list
         else:
             return [(self.name, self.template)]
```

Contributor:
no. if the thing isn't working, then it's because the data got loaded wrong. this makes no sense

Contributor (Author):
I tested it before and after. Before, it was giving an error about the passed data not being Map data (not verbatim). I looked up the `format` method, and from what I saw it takes a simple dict object with key-value pairs. I didn't see an instance where it took a dict of key-[list of values], unless I didn't understand what exactly you were trying to do...
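The disagreement above is about iterating a dict of parallel value lists. As a hedged sketch (the `variant_values` and `template` data here are illustrative, not the project's actual task data), the same per-row dicts can be built without `for i in range(len(...))` by zipping the value lists:

```python
# Hypothetical example data; in the real Task class, variant_values is
# loaded from the task's YAML definition.
variant_values = {"animal": ["cat", "dog"], "color": ["black", "brown"]}
template = "Describe a {color} {animal}."

prompts = []
keys = list(variant_values.keys())
# zip(*values) pairs up the i-th element of every value list, yielding
# one tuple per "row" and avoiding index arithmetic entirely.
for row in zip(*variant_values.values()):
    single_dict = dict(zip(keys, row))
    prompts.append(template.format(**single_dict))
```

`zip` stops at the shortest input list, so ragged value lists would be truncated silently rather than raising an `IndexError`; on Python 3.10+, `zip(..., strict=True)` turns that into an explicit error.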
```diff
@@ -260,6 +269,9 @@ def write_csv(self, target_folder):
         '''
         write the task to a csv file with a task.txt template file
         '''
+        # Create task folder
+        os.mkdir(os.path.join(target_folder, self.task_id))
+
         # write the template
         with open(os.path.join(target_folder,self.task_id, 'template.txt'), 'w') as f:
             f.write(self.template)
```
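One caveat with the `os.mkdir` call added above: it raises `FileExistsError` if the task folder already exists, for example when the benchmark is written to the same target folder twice. A minimal sketch, using a throwaway temp directory and a hypothetical `task_id`, of the `os.makedirs(..., exist_ok=True)` alternative:

```python
import os
import tempfile

# Stand-ins for the real arguments; target_folder and task_id are hypothetical.
target_folder = tempfile.mkdtemp()
task_id = "task_001"

task_dir = os.path.join(target_folder, task_id)
# exist_ok=True makes repeated runs a no-op instead of raising FileExistsError
os.makedirs(task_dir, exist_ok=True)
os.makedirs(task_dir, exist_ok=True)  # second call does not raise

with open(os.path.join(task_dir, "template.txt"), "w") as f:
    f.write("example template")
```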
```diff
@@ -358,6 +370,24 @@ def run(self, runner=BenchRunner(), log_dir='logs', benchmark=None, bench_path=N
                     )
                     response = chat_completion.choices[0].message.content
                     responses.append(response)
+                case "bedrock":
+                    bedrock_client = boto3.client('bedrock-runtime')
+                    completeion = bedrock_client.invoke_model(
+                        modelId = runner.model,
+                        body = json.dumps(
+                            {
+                                'messages': [
+                                    {
+                                        'role': 'user',
+                                        'content': sub_task
+                                    }
+                                ]
+                            }
+                        )
+                    )
+                    response = json.loads(completeion['body'].read())
+                    response = response['choices'][0]['message']['content']
+                    responses.append(response)
                 case _:
                     print(f"Runner type {runner.runner_type} not supported")
                     return None
```
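A note on the response handling in this hunk: Bedrock's `invoke_model` uses model-specific request and response bodies, and Anthropic models on Bedrock return the generated text under a top-level `content` list, not the OpenAI-style `choices` path parsed above. A hedged sketch of parsing an Anthropic-shaped body (the canned JSON below stands in for a real Bedrock response; no AWS call is made):

```python
import io
import json

# Canned payload shaped like an Anthropic messages response on Bedrock
# (assumption: an Anthropic model is the target; other model families differ).
canned_body = json.dumps(
    {"content": [{"type": "text", "text": "hello from the model"}]}
).encode()

# invoke_model returns the payload as a streaming body whose .read() yields
# bytes; io.BytesIO mimics that interface here.
completion = {"body": io.BytesIO(canned_body)}
payload = json.loads(completion["body"].read())
response = payload["content"][0]["text"]
```

Anthropic request bodies on Bedrock also require fields such as `max_tokens`, so the `messages`-only body in the diff would likely be rejected as well; checking the target model's documented schema before parsing is the safer path.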
Contributor:
no. `for i in range` loops are not pythonic style and are very hard to parse