
Update training_with_built_in_methods.py #21098

Open
wants to merge 1 commit into base: master

Conversation

@he7d3r he7d3r commented Mar 27, 2025

Clarify parameter name

codecov-commenter commented Mar 27, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.68%. Comparing base (f8b893f) to head (a41f298).

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #21098   +/-   ##
=======================================
  Coverage   82.68%   82.68%           
=======================================
  Files         564      564           
  Lines       54124    54124           
  Branches     8411     8411           
=======================================
  Hits        44755    44755           
  Misses       7293     7293           
  Partials     2076     2076           
Flag Coverage Δ
keras 82.50% <ø> (ø)
keras-jax 64.03% <ø> (ø)
keras-numpy 59.11% <ø> (ø)
keras-openvino 32.88% <ø> (ø)
keras-tensorflow 64.35% <ø> (ø)
keras-torch 64.07% <ø> (ø)

Flags with carried forward coverage won't be shown.

@@ -620,8 +620,8 @@ def __getitem__(self, idx):
 """
 To fit the model, pass the dataset instead as the `x` argument (no need for a `y`
 argument since the dataset includes the targets), and pass the validation dataset
-as the `validation_data` argument. And no need for the `batch_size` argument, since
-the dataset is already batched!
+as the `validation_data` argument. And no need for the `validation_batch_size`
+argument, since the dataset is already batched!
@he7d3r (Author)
Note how `validation_batch_size` is not specified (but `batch_size` is) on line 629 below:

train_py_dataset, batch_size=64, validation_data=val_py_dataset, epochs=1
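For context, the guide being edited wraps data in a `keras.utils.PyDataset` that yields whole batches, which is why `fit()` needs no `validation_batch_size` for the validation dataset (while `batch_size=64` is still passed for the training data). A minimal pure-Python sketch of that batching contract, with no Keras dependency and illustrative names only:

```python
import math

class BatchedDataset:
    """Mimics the keras.utils.PyDataset contract: __len__ returns the
    number of batches, and __getitem__(idx) returns one whole batch,
    so no separate batch_size argument is needed when iterating."""

    def __init__(self, samples, batch_size):
        self.samples = samples
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches, not number of samples.
        return math.ceil(len(self.samples) / self.batch_size)

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        return self.samples[lo:lo + self.batch_size]

ds = BatchedDataset(list(range(10)), batch_size=4)
print(len(ds))   # 3 batches: sizes 4, 4, 2
print(ds[2])     # [8, 9]
```

Because each `__getitem__` call already returns a full batch, a consumer such as `fit()` has nothing to re-batch, which is exactly why the sentence in the diff says the `validation_batch_size` argument is unnecessary.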

Projects
Status: Assigned Reviewer
3 participants