104 commits
bdd6245
feat: Added support for OpenLineage integration (#5884)
ntkathole Jan 29, 2026
bba71ac
docs: Add native lineage support description to blog (#5922)
ntkathole Jan 29, 2026
dca6f93
Change publication date to January 29, 2026
franciscojavierarceo Jan 29, 2026
23de772
feat: Add publish docker image of Go feature server. (#5923)
shuchu Jan 30, 2026
91b88b2
feat: Adjust ray offline store to support abfs(s) ADLS Azure Storage …
jbauer12 Jan 30, 2026
ab9c6fc
feat: Add lazy initialization and feature service caching (#5924)
franciscojavierarceo Jan 30, 2026
030a51e
feat: Added online server worker config support in operator (#5926)
ntkathole Jan 31, 2026
f64034c
fix: use logging instead of print during materialization
Shizoqua Jan 1, 2026
66d4052
Update feature_store.py
Shizoqua Jan 5, 2026
4cc99aa
fix: default CLI log level to INFO
Shizoqua Jan 10, 2026
ea75d6e
fix: import default entity_df event timestamp const
Shizoqua Jan 12, 2026
eb874e3
fix: avoid click in FeatureStore materialization
Shizoqua Jan 18, 2026
be90ca3
fix: format feature_store and update materialize docstrings
Shizoqua Jan 20, 2026
9c38991
Fix: remove ANSI color codes from materialization logs
Shizoqua Jan 31, 2026
5d63092
docs: Add multi-team feature store setup guide (#5932)
Copilot Feb 3, 2026
176ed0d
feat: Add DynamoDB in-place list update support for array-based featu…
anshishrivastava Feb 4, 2026
680725a
chore: Updated snowflake-connector-python (#5928)
ntkathole Feb 6, 2026
46b0967
refactor: Centralize feature view object lookup (#5898)
antznette1 Feb 6, 2026
36c63cb
feat(go): Add MySQL registry store support for Go feature server (#5933)
PepeluDev Feb 6, 2026
7629b5c
feat: Add integration tests for dbt import (#5899)
YassinNouh21 Feb 9, 2026
49c84dd
feat: Modernize precommit hooks and optimize test performance (#5929)
franciscojavierarceo Feb 10, 2026
f752488
feat: Batch_engine config injection in feature_store.yaml through ope…
aniketpalu Feb 10, 2026
e565f1d
fix(ci): Use uv run for pytest in master_only benchmark step (#5957)
franciscojavierarceo Feb 10, 2026
73581ea
feat: Add blog post on Feast dbt integration (#5915)
Copilot Feb 10, 2026
9516758
fix: Added a flag to correctly download the go binaries
aniketpalu Feb 11, 2026
7f0b5b0
fix: Adds mapping of date Trino's type into string Feast's type
soliverr Feb 9, 2026
ac88775
fix: Make timestamp field handling compatible with Athena V3 (#5936)
dym-ok Feb 11, 2026
2869e6c
feat: Add PostgreSQL online store support for Go feature server (#5963)
samuelkim7 Feb 12, 2026
f677c49
feat: Improve local dev experience with file-aware hooks and auto par…
franciscojavierarceo Feb 13, 2026
9be77c3
docs: fix minor grammar issues in README files
ananyagupta17 Feb 14, 2026
8b62ad3
fix: Support pgvector under non-default schema (#5970)
Anarion-zuo Feb 15, 2026
ffeea3e
feat: Consolidate Python packaging - remove setup.py/setup.cfg, stand…
ntkathole Feb 16, 2026
945d515
chore: Updated requirements (#5975)
ntkathole Feb 16, 2026
4b51239
chore(release): release 0.60.0
feast-ci-bot Feb 17, 2026
ee11cc7
chore: Optimize entity key serialization/deserialization hot path (#5…
franciscojavierarceo Feb 19, 2026
e9518f4
chore: Added language-specific patterns in gitignore
ntkathole Feb 18, 2026
99dd48c
feat: Added CodeQL SAST scanning and detect-secrets pre-commit hook
ntkathole Feb 18, 2026
43078d3
fix: Use commitlint pre-commit hook instead of a separate action
ntkathole Feb 18, 2026
31c51e6
fix: Fixed pre-commit check
ntkathole Feb 19, 2026
3f3e636
feat(go): Implement metrics and tracing for http and grpc servers (#5…
luisazofracabify Feb 19, 2026
9d28508
fix: Fixes a `PydanticDeprecatedSince20` warning for trino_offline_st…
soliverr Feb 19, 2026
1612cc9
Blog: Historical Feature Retrieval without entity df
jyejare Feb 19, 2026
f393e3b
fix: Add https readiness check for rest-registry tests
ntkathole Feb 17, 2026
4c8b45f
feat: Use orjson for faster JSON serialization in feature server
ntkathole Feb 1, 2026
aba90f4
refactor: Extract shared proto conversion helpers and simplify to_pro…
franciscojavierarceo Feb 25, 2026
f2610c5
fix: Fixed uv cache permission error for docker build on mac
aniketpalu Feb 25, 2026
aa245a8
[perf] Fix redundant registry.get_entity() calls in _get_entity_maps
abhijeet-dhumal Feb 24, 2026
d3ddce7
feat: Add complex type support (Map, JSON, Struct) with schema valida…
ntkathole Feb 26, 2026
5203fc7
chore: Updated feature server ubi image from 311 to python-312
ntkathole Feb 22, 2026
78786e9
feat: Horizontal scaling support to the Feast operator (#6000)
ntkathole Feb 27, 2026
9f6bd24
chore: Remove ikv online store (#6033)
tokoko Mar 1, 2026
1eb20c1
fix: Reenable tests (#6036)
tokoko Mar 1, 2026
46f869b
fix: Integration test failures (#6040)
tokoko Mar 1, 2026
18ec522
fix: Add grpcio dependency group to transformation server Dockerfile
Br1an67 Mar 1, 2026
ff01d4f
chore: Run duckdb offline store tests seperately (#6041)
tokoko Mar 2, 2026
1569d61
perf: Optimize protobuf parsing in Redis online store (#6023)
abhijeet-dhumal Mar 2, 2026
7083435
perf: Remove redundant entity key serialization in online_read
abhijeet-dhumal Feb 23, 2026
641a11d
chore: Run registry tests separately (#6043)
tokoko Mar 3, 2026
99a94c0
fix: Ray offline store tests are duplicated across 3 workflows
ntkathole Mar 3, 2026
cd9eb41
fix: Fix integration tests (#6046)
ntkathole Mar 3, 2026
7dc3b69
fix: Fixed IntegrityError on SqlRegistry (#6047)
ntkathole Mar 3, 2026
ce6398b
test: Move feature_repos into universal test layout skeleton (#6055)
turazashvili Mar 3, 2026
21ea2a9
feat: Feature Server High-Availability on Kubernetes (#6028)
ntkathole Mar 4, 2026
fa4a8d1
perf: Parallelize DynamoDB batch reads in sync online_read (#6024)
abhijeet-dhumal Mar 4, 2026
9293b5b
test: Move ray and spark tests to component/* (#6056)
turazashvili Mar 4, 2026
7a8b62d
perf: Optimize timestamp conversion in _convert_rows_to_protobuf
abhijeet-dhumal Feb 23, 2026
436404f
fix: Check duplicate names for feature view across types (#5999)
Prathap-P Mar 6, 2026
1dec68c
feat: Support arm docker build (#6061)
Anarion-zuo Mar 7, 2026
8ad3ea2
fix: Fix non-specific label selector on metrics service
ntkathole Mar 5, 2026
5d00b56
docs: Add blog post on Feast + MLflow + Kubeflow unified AI/ML lifecy…
Copilot Mar 9, 2026
16cbacf
fix: Add website build check for PRs and fix blog frontmatter YAML er…
ntkathole Mar 9, 2026
80b1f7d
fix: Added MLflow metric charts across feature selection (#6080)
ntkathole Mar 9, 2026
53e0269
feat: Add materialization, feature freshness, request latency, and pu…
ntkathole Mar 5, 2026
a861a3c
feat: Add non-entity retrieval support for ClickHouse offline store
YassinNouh21 Mar 5, 2026
15987cb
Update Feast Registry Rest Tests run on Openshift Env
Srihari1192 Mar 9, 2026
31b3922
feat: Add OnlineStore for MongoDB (#6025)
caseyclements Mar 10, 2026
5070dfa
feat: Making feature view source optional (feast-dev#6074) (#6075)
nquinn408 Mar 10, 2026
543ff29
feat: Adding optional name to Aggregation (feast-dev#5994) (#6083)
nquinn408 Mar 10, 2026
a946fd6
chore(release): release 0.61.0
feast-ci-bot Mar 10, 2026
7caf966
fix: Added missing jackc/pgx/v5 entries
ntkathole Mar 11, 2026
d205a80
feat: Add Oracle DB as Offline store in python sdk & operator (#6017)
aniketpalu Mar 14, 2026
558e28e
feat: Utilize date partition column in BigQuery (#6076)
Anarion-zuo Mar 15, 2026
741274c
feat: Add feast apply init container to automate registry population …
ntkathole Mar 15, 2026
7e8d6ac
fix: Fix regstry Rest API tests intermittent failure
Srihari1192 Mar 11, 2026
98a00b2
feat: Added Agent skills for AI Agents (#6007)
patelchaitany Mar 16, 2026
8e6327c
feat: Created DocEmbedder class (#5973)
patelchaitany Mar 16, 2026
c2fbc54
docs: Update type system reference with missing types (#6069)
ntkathole Mar 16, 2026
5b0df6c
feat: Add Claude Code agent skills for Feast (#6081)
Sagargupta16 Mar 16, 2026
fa9e647
feat: Support distinct count aggregation [#6116]
nquinn408 Mar 16, 2026
30b406c
fix: Fixed intermittent failures in get_historical_features
ntkathole Mar 17, 2026
2a0d873
fix: Fixed the intermittent FeatureViewNotFoundException
ntkathole Mar 17, 2026
6635515
docs: Add blog post on Feast Oracle Database offline store support (#…
aniketpalu Mar 17, 2026
928431b
fix(postgres): Use end_date in synthetic entity_df for non-entity ret…
YassinNouh21 Mar 17, 2026
61e9345
fix: SSL/TLS mode by default for postgres connection
ntkathole Mar 16, 2026
3bf66c6
feat: Add typed_features field to grpc write request ((#6117) (#6118)
nquinn408 Mar 17, 2026
40323bc
Make feature server URL configurable in UI
kchawlani19 Feb 24, 2026
aab9d78
fix: align FeatureStore signatures with cli/repo ops
Mar 18, 2026
d9ce1d5
fix: use logging instead of print during materialization
Shizoqua Jan 1, 2026
c274918
Update feature_store.py
Shizoqua Jan 5, 2026
69b4f69
fix: import default entity_df event timestamp const
Shizoqua Jan 12, 2026
46f3ac0
Fix: remove ANSI color codes from materialization logs
Shizoqua Jan 31, 2026
7f05abe
Merge branch 'master' into fix/materialization-use-logger
Shizoqua Mar 18, 2026
de8a1d0
Update sdk/python/feast/feature_store.py
Shizoqua Mar 18, 2026
d936ff2
fix(dynamodb): handle bytes vs Binary in batch_get response
Mar 18, 2026
2 changes: 1 addition & 1 deletion sdk/python/feast/cli/cli.py
@@ -88,7 +88,7 @@ def format_options(ctx: click.Context, formatter: click.HelpFormatter):
)
@click.option(
"--log-level",
default="warning",
default="info",
help="The logging level. One of DEBUG, INFO, WARNING, ERROR, and CRITICAL (case-insensitive).",
)
@click.option(
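
The hunk above only flips a default string; the behavior change it implies can be sketched stand-alone as below. The helper name `configure_logging` is hypothetical (not from the PR) — the real flag lives on the Feast click CLI — but it mirrors the new default and the case-insensitive handling the help text promises:

```python
import logging


def configure_logging(log_level: str = "info") -> int:
    """Resolve a case-insensitive level name; the default is now INFO (was WARNING)."""
    level = getattr(logging, log_level.upper())
    logging.basicConfig(level=level)
    return level
```

With the old default of `"warning"`, `logger.info(...)` calls added elsewhere in this PR would have been silently dropped; raising the default to INFO makes the new materialization logs visible out of the box.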
105 changes: 43 additions & 62 deletions sdk/python/feast/feature_store.py
🔴 Removed colorama import breaks materialize and materialize_incremental with NameError

The import from colorama import Fore, Style was removed (old line 42), but Style.BRIGHT, Fore.GREEN, and Style.RESET_ALL are still used in multiple print() statements within materialize_incremental() (lines 1678, 1701, 1707-1709) and materialize() (lines 1819, 1830). When either materialization method is called and reaches a feature view that triggers these print statements, a NameError: name 'Style' is not defined will be raised at runtime, crashing materialization.

(Refers to line 1678)

Prompt for agents
In sdk/python/feast/feature_store.py, the import `from colorama import Fore, Style` was removed but Style and Fore are still referenced in print statements at lines 1678, 1701, 1707-1709, 1819, and 1830. Either:

1. Re-add the import `from colorama import Fore, Style` near the top of the file (after line 43, near the other third-party imports), OR
2. Convert all remaining print() calls that use Style/Fore to use the logger instead (matching the pattern used elsewhere in this PR where print statements were converted to logger.info/logger.warning calls).

Option 2 would be more consistent with the PR's intent of migrating from print to logging.

@@ -12,15 +12,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import asyncio
import copy
import itertools
import logging
import os
import time
import warnings
from datetime import datetime, timedelta
from pathlib import Path
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
@@ -29,17 +28,16 @@
Mapping,
Optional,
Sequence,
TYPE_CHECKING,
Tuple,
Union,
cast,
)

if TYPE_CHECKING:
from feast.diff.apply_progress import ApplyProgressContext

import pandas as pd
import pyarrow as pa
from colorama import Fore, Style
from fastapi.concurrency import run_in_threadpool
from google.protobuf.timestamp_pb2 import Timestamp
from tqdm import tqdm
@@ -85,6 +83,9 @@
from feast.online_response import OnlineResponse
from feast.permissions.permission import Permission
from feast.project import Project

if TYPE_CHECKING:
from feast.diff.apply_progress import ApplyProgressContext
from feast.protos.feast.serving.ServingService_pb2 import (
FieldStatus,
GetOnlineFeaturesResponse,
@@ -125,6 +126,7 @@ def _get_track_materialization():


warnings.simplefilter("once", DeprecationWarning)
logger = logging.getLogger(__name__)


class FeatureStore:
@@ -831,7 +833,6 @@ def plan(
self,
desired_repo_contents: RepoContents,
skip_feature_view_validation: bool = False,
progress_ctx: Optional["ApplyProgressContext"] = None,
) -> Tuple[RegistryDiff, InfraDiff, Infra]:
"""Dry-run registering objects to metadata store.

@@ -841,8 +842,6 @@

Args:
desired_repo_contents: The desired repo state.
skip_feature_view_validation: If True, skip validation of feature views. This can be useful when the validation
system is being overly strict. Use with caution and report any issues on GitHub. Default is False.

Raises:
ValueError: The 'objects' parameter could not be parsed properly.
@@ -897,9 +896,6 @@
# the desired repo state.
registry_diff = diff_between(self.registry, self.project, desired_repo_contents)

if progress_ctx:
progress_ctx.update_phase_progress("Computing infrastructure diff")

# Compute the desired difference between the current infra, as stored in the registry,
# and the desired infra.
self.registry.refresh(project=self.project)
@@ -927,7 +923,6 @@ def _apply_diffs(
registry_diff: The diff between the current registry and the desired registry.
infra_diff: The diff between the current infra and the desired infra.
new_infra: The desired infra.
progress_ctx: Optional progress context for tracking apply progress.
"""
try:
# Infrastructure phase
@@ -951,13 +946,7 @@

self.registry.update_infra(new_infra, self.project, commit=True)

if progress_ctx:
progress_ctx.update_phase_progress("Registry update complete")
progress_ctx.complete_phase()
finally:
# Always cleanup progress bars
if progress_ctx:
progress_ctx.cleanup()
self._registry.update_infra(new_infra, self.project, commit=True)
🔴 Botched refactoring leaves dangling try block and duplicate update_infra call in _apply_diffs

The refactoring of _apply_diffs removed the finally clause (which previously cleaned up progress bars) but left the try: keyword in place at line 927. Line 949 (self._registry.update_infra(...)) is a stray duplicate of line 947 (self.registry.update_infra(...)) that was placed outside the try block, causing a SyntaxError: expected 'except' or 'finally' block. This prevents the entire feast.feature_store module from being imported, completely breaking the library.

Even if the syntax error were naively fixed (e.g., by removing the try: keyword), the duplicate update_infra call would remain and cause the registry infrastructure to be committed twice per apply operation.

Intended vs actual code structure

The old code was:

try:
    ...
    self.registry.update_infra(new_infra, self.project, commit=True)
    if progress_ctx:
        progress_ctx.update_phase_progress(...)
        progress_ctx.complete_phase()
finally:
    if progress_ctx:
        progress_ctx.cleanup()

The new code ended up as:

try:
    ...
    self.registry.update_infra(new_infra, self.project, commit=True)

self._registry.update_infra(new_infra, self.project, commit=True)  # stray duplicate

The fix should remove both the try: keyword and the duplicate line 949.

Prompt for agents
In sdk/python/feast/feature_store.py, the _apply_diffs method (starting around line 913) has a broken structure. Two changes are needed:
1. Remove the `try:` keyword on line 927 (and un-indent the block below it to be at the method body level).
2. Delete the stray duplicate line 949: `self._registry.update_infra(new_infra, self.project, commit=True)` — this is a duplicate of line 947.

The resulting code should flow straight from the `self.registry.update_infra(...)` call on line 947 into the OpenLineage emit call on line 952, with no try block and no duplicate update_infra.

Comment on lines 947 to +949
🔴 Progress bar cleanup removed from _apply_diffs causing resource leak on errors

The old code had a finally block in _apply_diffs that called progress_ctx.cleanup() to close tqdm progress bars even when exceptions occurred. This was removed in the refactoring, but progress_ctx is still accepted as a parameter (line 918) and actively used to create and update progress bars (lines 929-945). The caller at sdk/python/feast/repo_operations.py:386 passes a real ApplyProgressContext. Without the finally cleanup, if an exception occurs during infrastructure update or registry commit, the tqdm progress bars (created in ApplyProgressContext.start_phase at sdk/python/feast/diff/apply_progress.py:117) will never be closed, leaving dangling terminal output. The comment at sdk/python/feast/repo_operations.py:400 even says "Cleanup is handled in the new _apply_diffs method" — but it no longer is.

Prompt for agents
In sdk/python/feast/feature_store.py _apply_diffs method, after removing the stray duplicate line 949 and fixing the try block (see BUG-0001), either:
(a) Restore a finally block that calls `if progress_ctx: progress_ctx.cleanup()` after the try block, OR
(b) If you want to remove the try/finally entirely, move the cleanup call to the caller in sdk/python/feast/repo_operations.py around line 399 (in the existing finally block there) so that progress bars are always cleaned up.


# Emit OpenLineage events for applied objects
self._emit_openlineage_apply_diffs(registry_diff)
@@ -1002,18 +991,12 @@ def apply(
an online store), it will commit the updated registry. All operations are idempotent, meaning they can safely
be rerun.

Note: The apply method does NOT delete objects that are removed from the provided list. To delete objects
from the registry, use explicit delete methods like delete_feature_view(), delete_feature_service(), or
pass objects to the objects_to_delete parameter with partial=False.

Args:
objects: A single object, or a list of objects that should be registered with the Feature Store.
objects_to_delete: A list of objects to be deleted from the registry and removed from the
provider's infrastructure. This deletion will only be performed if partial is set to False.
partial: If True, apply will only handle the specified objects; if False, apply will also delete
all the objects in objects_to_delete, and tear down any associated cloud resources.
skip_feature_view_validation: If True, skip validation of feature views. This can be useful when the validation
system is being overly strict. Use with caution and report any issues on GitHub. Default is False.

Raises:
ValueError: The 'objects' parameter could not be parsed properly.
@@ -1364,17 +1347,6 @@ def get_historical_features(
# TODO(achal): _group_feature_refs returns the on demand feature views, but it's not passed into the provider.
# This is a weird interface quirk - we should revisit the `get_historical_features` to
# pass in the on demand feature views as well.

# Deliberately disable writing to online store for ODFVs during historical retrieval
# since it's not applicable in this context.
# This does not change the output, since it forces to recompute ODFVs on historical retrieval
# but that is fine, since ODFVs precompute does not to work for historical retrieval (as per docs), only for online retrieval
# Copy to avoid side effects outside of this method
all_on_demand_feature_views = copy.deepcopy(all_on_demand_feature_views)

for odfv in all_on_demand_feature_views:
odfv.write_to_online_store = False

fvs, odfvs = utils._group_feature_refs(
_feature_refs,
all_feature_views,
@@ -1384,7 +1356,7 @@
on_demand_feature_views = list(view for view, _ in odfvs)

# Check that the right request data is present in the entity_df
if type(entity_df) == pd.DataFrame:
if isinstance(entity_df, pd.DataFrame):
if self.config.coerce_tz_aware:
entity_df = utils.make_df_tzaware(cast(pd.DataFrame, entity_df))
for odfv in on_demand_feature_views:
@@ -1526,8 +1498,9 @@ def _materialize_odfv(
):
"""Helper to materialize a single OnDemandFeatureView."""
if not feature_view.source_feature_view_projections:
print(
f"[WARNING] ODFV {feature_view.name} materialization: No source feature views found."
logger.warning(
"ODFV %s materialization: No source feature views found.",
feature_view.name,
)
return
start_date = utils.make_tzaware(start_date)
@@ -1554,20 +1527,24 @@
all_join_keys = {key for key in all_join_keys if key}

if not all_join_keys:
print(
f"[WARNING] ODFV {feature_view.name} materialization: No join keys found in source views. Cannot create entity_df. Skipping."
logger.warning(
"ODFV %s materialization: No join keys found in source views. Cannot create entity_df. Skipping.",
feature_view.name,
)
return

if len(entity_timestamp_col_names) > 1:
print(
f"[WARNING] ODFV {feature_view.name} materialization: Found multiple timestamp columns in sources ({entity_timestamp_col_names}). This is not supported. Skipping."
logger.warning(
"ODFV %s materialization: Found multiple timestamp columns in sources (%s). This is not supported. Skipping.",
feature_view.name,
entity_timestamp_col_names,
)
return

if not entity_timestamp_col_names:
print(
f"[WARNING] ODFV {feature_view.name} materialization: No batch sources with timestamp columns found for sources. Skipping."
logger.warning(
"ODFV %s materialization: No batch sources with timestamp columns found for sources. Skipping.",
feature_view.name,
)
return

@@ -1596,8 +1573,9 @@ def _materialize_odfv(
all_source_dfs.append(df)

if not all_source_dfs:
print(
f"No source data found for ODFV {feature_view.name} in the given time range. Skipping materialization."
logger.info(
"No source data found for ODFV %s in the given time range. Skipping materialization.",
feature_view.name,
)
return

@@ -1660,10 +1638,8 @@ def materialize_incremental(
>>> from datetime import datetime, timedelta
>>> fs = FeatureStore(repo_path="project/feature_repo")
>>> fs.materialize_incremental(end_date=_utc_now() - timedelta(minutes=5))
Materializing...
<BLANKLINE>
...
"""
_print_materializing_banner()
feature_views_to_materialize = self._get_feature_views_to_materialize(
feature_views
)
@@ -1812,10 +1788,8 @@ def materialize(
>>> fs.materialize(
... start_date=_utc_now() - timedelta(hours=3), end_date=_utc_now() - timedelta(minutes=10)
... )
Materializing...
<BLANKLINE>
...
"""
_print_materializing_banner()
if utils.make_tzaware(start_date) > utils.make_tzaware(end_date):
raise ValueError(
f"The given start_date {start_date} is greater than the given end_date {end_date}."
@@ -3174,7 +3148,7 @@ def validate_logged_features(

return exc
else:
print(f"{t.shape[0]} rows were validated.")
logger.info("%s rows were validated.", t.shape[0])

if cache_profile:
self.apply(reference)
@@ -3305,20 +3279,27 @@ def _print_materialization_log(
start_date, end_date, num_feature_views: int, online_store: str
):
if start_date:
print(
f"Materializing {Style.BRIGHT + Fore.GREEN}{num_feature_views}{Style.RESET_ALL} feature views"
f" from {Style.BRIGHT + Fore.GREEN}{utils.make_tzaware(start_date.replace(microsecond=0))}{Style.RESET_ALL}"
f" to {Style.BRIGHT + Fore.GREEN}{utils.make_tzaware(end_date.replace(microsecond=0))}{Style.RESET_ALL}"
f" into the {Style.BRIGHT + Fore.GREEN}{online_store}{Style.RESET_ALL} online store.\n"
logger.info(
"Materializing %s feature views from %s to %s into the %s online store.",
num_feature_views,
utils.make_tzaware(start_date.replace(microsecond=0)),
utils.make_tzaware(end_date.replace(microsecond=0)),
online_store,
)
else:
print(
f"Materializing {Style.BRIGHT + Fore.GREEN}{num_feature_views}{Style.RESET_ALL} feature views"
f" to {Style.BRIGHT + Fore.GREEN}{utils.make_tzaware(end_date.replace(microsecond=0))}{Style.RESET_ALL}"
f" into the {Style.BRIGHT + Fore.GREEN}{online_store}{Style.RESET_ALL} online store.\n"
logger.info(
"Materializing %s feature views to %s into the %s online store.",
num_feature_views,
utils.make_tzaware(end_date.replace(microsecond=0)),
online_store,
)


def _print_materializing_banner() -> None:
logger.info("Materializing...")
logger.info("")


def _validate_feature_views(feature_views: List[BaseFeatureView]):
"""Verify feature views have case-insensitively unique names across all types.

5 changes: 4 additions & 1 deletion sdk/python/feast/infra/online_stores/dynamodb.py
@@ -737,7 +737,10 @@ def _process_batch_get_response(
values_data = tbl_res["values"]
for feature_name, value_bin in values_data.items():
val = ValueProto()
val.ParseFromString(value_bin.value)
if isinstance(value_bin, (bytes, bytearray)):
val.ParseFromString(value_bin)
else:
val.ParseFromString(value_bin.value)
features[feature_name] = val

# Parse timestamp and set result
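
The DynamoDB hunk above handles a real API asymmetry: boto3's resource-level API wraps binary attributes in `boto3.dynamodb.types.Binary` (which exposes the raw payload as `.value`), while the low-level client returns plain `bytes`. A self-contained sketch of the tolerant branch — `Binary` here is a stand-in class so the example runs without boto3, and `extract_value_bytes` is a hypothetical helper, not the actual method:

```python
class Binary:
    """Stand-in for boto3.dynamodb.types.Binary, which wraps bytes in .value."""

    def __init__(self, value: bytes):
        self.value = value


def extract_value_bytes(value_bin) -> bytes:
    # The patched branch: accept raw bytes/bytearray (low-level client
    # responses) as well as the Binary wrapper (resource-level API).
    if isinstance(value_bin, (bytes, bytearray)):
        return bytes(value_bin)
    return bytes(value_bin.value)
```

In the real code the extracted bytes feed `ValueProto.ParseFromString`, which requires `bytes` and would fail with a `TypeError` if handed the wrapper object directly.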