Merged
8 changes: 7 additions & 1 deletion packages/jobs/fun-fact-job/script.sh
@@ -337,7 +337,13 @@ send_slack_message() {
       --header "Authorization: Bearer ${MUZZLE_BOT_TOKEN}" \
       --header 'Content-Type: application/json; charset=utf-8' \
       --data "${payload}" \
-      https://slack.com/api/chat.postMessage)
+      https://slack.com/api/chat.postMessage || true)
+
+  if [[ -z "${response_code:-}" ]]; then
+    echo "Slack API request failed: curl did not complete successfully" >&2
+    rm -f "${response_file}"
+    return 1
+  fi
 
   if [[ "${response_code}" != '200' ]] || ! jq -e '.ok == true' "${response_file}" >/dev/null 2>&1; then
     cat "${response_file}" >&2
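The `|| true` added to the curl invocation matters because the job scripts evidently run with errexit enabled: without it, a transport-level curl failure would abort the script before the new empty-status check could report anything. A standalone sketch of the pattern, assuming `set -euo pipefail` and a hypothetical URL and helper name (not the job's actual code):

```shell
#!/usr/bin/env bash
set -euo pipefail

# fetch_status URL: print the HTTP status code for URL, or fail cleanly when
# curl itself cannot complete (DNS failure, refused connection, timeout).
fetch_status() {
  local url="$1" response_file response_code
  response_file=$(mktemp)

  # "|| true" keeps errexit from aborting the whole script when curl fails;
  # in that case --write-out yields an empty string or "000".
  response_code=$(curl --silent --max-time 5 \
    --output "${response_file}" \
    --write-out '%{http_code}' \
    "${url}" || true)
  rm -f "${response_file}"

  if [[ -z "${response_code}" || "${response_code}" == '000' ]]; then
    echo "request failed: curl did not complete successfully" >&2
    return 1
  fi
  printf '%s\n' "${response_code}"
}
```

The caller can then branch on the function's exit status instead of parsing curl's stderr, which is the same shape the fun-fact-job hunk takes with its `response_code` check.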
13 changes: 13 additions & 0 deletions packages/jobs/health-job/script.sh
@@ -10,6 +10,15 @@ fi

 PATH=/usr/local/bin:/usr/bin:/bin:${PATH:-}
 
+require_command() {
+  local command_name="$1"
+
+  if ! command -v "${command_name}" >/dev/null 2>&1; then
+    echo "Missing required command: ${command_name}" >&2
+    exit 1
+  fi
+}
+
 HEALTH_URL="${HEALTH_URL:-http://127.0.0.1:3000/health}"
 SLACK_CHANNEL="${SLACK_CHANNEL:-#muzzlefeedback}"
 SLACK_MESSAGE=':this-is-fine: `Moonbeam is experiencing some technical difficulties at the moment.` :this-is-fine:'
@@ -89,6 +98,10 @@ check_health() {
 }
 
 main() {
+  require_command curl
+  require_command grep
+  require_command mktemp
+
   if check_health; then
     exit 0
   fi
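The `require_command` guard added to the health job can be exercised in isolation. This sketch copies the function from the hunk and probes it with a deliberately made-up command name (the probe name is hypothetical, not something the job uses):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Same guard as in the health-job diff: fail fast with a clear message when a
# dependency is absent, rather than dying mid-run with "command not found".
require_command() {
  local command_name="$1"

  if ! command -v "${command_name}" >/dev/null 2>&1; then
    echo "Missing required command: ${command_name}" >&2
    exit 1
  fi
}

# Probe in a subshell so the demo script itself survives the exit 1.
if (require_command definitely-not-installed-xyz 2>/dev/null); then
  echo "unexpectedly present"
else
  echo "missing dependency detected"
fi

require_command sh  # present on any POSIX system; silent on success
```

Calling the guards at the top of `main` (as the diff does) turns a missing `curl` into one readable line on stderr instead of a failure halfway through the health check.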
12 changes: 10 additions & 2 deletions packages/jobs/pricing-job/script.sh
@@ -96,6 +96,7 @@ main() {
   local price_pct
   local price
   local median_rep
+  local sql_batch
   local -a teams
   local -a items
   local row
@@ -125,18 +126,25 @@

   median_rep=$(calculate_median_rep)
 
+  sql_batch=""
   for team_id in "${teams[@]}"; do
     [[ -n "${team_id}" ]] || continue
 
     for item_row in "${items[@]}"; do
       IFS=$'\t' read -r item_id price_pct <<<"${item_row}"
       price=$(awk -v median="${median_rep}" -v pct="${price_pct}" 'BEGIN { printf "%.15f", median * pct }')
-      mysql_query "INSERT INTO price(itemId, teamId, price, itemIdId) VALUES(${item_id}, '$(sql_escape "${team_id}")', ${price}, ${item_id});" >/dev/null
+      sql_batch+="INSERT INTO price(itemId, teamId, price, itemIdId) VALUES(${item_id}, '$(sql_escape "${team_id}")', ${price}, ${item_id});"$'\n'
     done
 
-    echo "Completed update for ${team_id}"
+    echo "Queued update for ${team_id}"
   done
 
+  if [[ -n "${sql_batch}" ]]; then
+    echo 'Executing batch price inserts...'
+    mysql_query "START TRANSACTION;
+${sql_batch}COMMIT;" || { echo 'Batch insert failed; transaction has been rolled back' >&2; return 1; }
Copilot AI (Mar 22, 2026) commented on lines +144 to +145:
Building the entire INSERT workload into sql_batch and passing it as a single mysql -e argument can hit OS ARG_MAX/"Argument list too long" limits and/or MySQL max_allowed_packet as teams/items grow, causing the job to fail before any SQL runs. Consider streaming the SQL to mysql via stdin (or a temp file) and/or chunking into smaller batches (e.g., per team or fixed-size batches) to avoid unbounded query size and quadratic string-concatenation overhead.

Suggested change:
-    mysql_query "START TRANSACTION;
-${sql_batch}COMMIT;" || { echo 'Batch insert failed; transaction has been rolled back' >&2; return 1; }
+    {
+      echo 'START TRANSACTION;'
+      printf '%s' "${sql_batch}"
+      echo 'COMMIT;'
+    } | mysql --host="${MYSQL_HOST}" --user="${MYSQL_USER}" --password="${MYSQL_PASSWORD}" "${MYSQL_DATABASE}" \
+      || { echo 'Batch insert failed; transaction has been rolled back' >&2; return 1; }

+  fi
 
   echo "Completed job in $(( $(date +%s) - start_time )) seconds!"
 }
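The reviewer's streaming suggestion can be taken one step further by flushing in fixed-size chunks, which bounds both the argument/packet size and the length of each transaction as teams and items grow. A hedged sketch of that idea: `run_sql` is a hypothetical stand-in for the job's real `mysql_query` helper (here it just echoes the SQL it receives), and the chunk size is arbitrary:

```shell
#!/usr/bin/env bash
set -euo pipefail

CHUNK_SIZE=500  # arbitrary; tune against max_allowed_packet in practice

run_sql() {
  # Stand-in for the job's mysql invocation reading SQL from stdin;
  # in the real script this would be: mysql --host=... "${MYSQL_DATABASE}".
  cat
}

# flush_chunk SQL: wrap the accumulated statements in one transaction and
# stream them to the database over stdin (no giant -e argument).
flush_chunk() {
  local chunk="$1"
  [[ -n "${chunk}" ]] || return 0
  {
    echo 'START TRANSACTION;'
    printf '%s' "${chunk}"
    echo 'COMMIT;'
  } | run_sql
}

chunk=""
count=0
for item_id in 1 2 3; do  # placeholder for the real team/item loops
  chunk+="INSERT INTO price(itemId, price) VALUES(${item_id}, 9.99);"$'\n'
  count=$((count + 1))
  if (( count >= CHUNK_SIZE )); then
    flush_chunk "${chunk}"
    chunk=""
    count=0
  fi
done
flush_chunk "${chunk}"  # flush the final partial chunk
```

Streaming over stdin sidesteps the ARG_MAX limit entirely, and chunking additionally keeps each transaction small enough that a single failure rolls back a bounded amount of work.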
