MarkDaoust

Member since 9 years ago

210 followers
0 following
167 stars
37 repos

428 contributions in the last year

Pinned
⚡ Multivariate Normal Distributions, in Python
⚡ Code samples for a TensorFlow Workshop [WIP]
⚡ Models built with TensorFlow
⚡ Computation using data flow graphs for scalable machine learning
⚡ Slides and code from our TensorFlow workshop.
⚡ A library for transfer learning by reusing parts of TensorFlow models.
Activity
Dec 3 (2 days ago)
push

MarkDaoust push MarkDaoust/docs

MarkDaoust
MarkDaoust

Update transfer_learning.ipynb

Updated the markdown formatting and revised some sentences to make the concept clearer

MarkDaoust
MarkDaoust

Fixed typos in image segmentation tutorial

Link to the mentioned documentation: https://www.tensorflow.org/tutorials/images/segmentation

MarkDaoust
MarkDaoust

Nit: fix logarithm operation

MarkDaoust
MarkDaoust

Make comments consistent with corresponding code

MarkDaoust
MarkDaoust

Update text.ipynb

Changed the import statement from "from tensorflow.keras.layers.experimental.preprocessing import TextVectorization" to "from tensorflow.keras.layers import TextVectorization".

Removed the note cell below, since experimental.preprocessing.TextVectorization moved under tensorflow.keras.layers: "Note: The Preprocessing APIs used in this section are experimental in TensorFlow 2.3 and subject to change."

Also changed preprocessing.TextVectorization to TextVectorization.
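For reference, a minimal sketch of the updated import and usage (the layer arguments here are illustrative, not taken from the notebook):

import tensorflow as tf
# TextVectorization now lives directly under tf.keras.layers (TF 2.6+),
# so the experimental.preprocessing path is no longer needed.
from tensorflow.keras.layers import TextVectorization

# Illustrative settings; the notebook's actual values may differ.
vectorize_layer = TextVectorization(
    max_tokens=10000,
    output_mode='int',
    output_sequence_length=250)

# Build the vocabulary from some example text, then vectorize a sample.
vectorize_layer.adapt(tf.constant(["the quick brown fox", "jumped over the lazy dog"]))
print(vectorize_layer(tf.constant(["the lazy fox"])))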

MarkDaoust
MarkDaoust

Update text_classification.ipynb

Removed experimental api docs for TextVectorization layer

MarkDaoust
MarkDaoust

Update word2vec.ipynb

updated TextVectorization layer imports

MarkDaoust
MarkDaoust

updated experimental details

Changed tf.config.experimental.set_visible_devices to tf.config.set_visible_devices, and tf.config.experimental.set_virtual_device_configuration to tf.config.set_logical_device_configuration.
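A minimal sketch of the stable API calls, assuming a machine with at least one GPU (device indices and memory limits are illustrative):

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Stable replacement for tf.config.experimental.set_visible_devices:
    # restrict TensorFlow to the first GPU only.
    tf.config.set_visible_devices(gpus[0], 'GPU')

    # Stable replacement for tf.config.experimental.set_virtual_device_configuration:
    # split that GPU into two logical devices of 1 GB each.
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
         tf.config.LogicalDeviceConfiguration(memory_limit=1024)])

    print(tf.config.list_logical_devices('GPU'))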

MarkDaoust
MarkDaoust

Make these look less like errors.

PiperOrigin-RevId: 398106286

MarkDaoust
MarkDaoust

Clean up existing migration docs

PiperOrigin-RevId: 398113643

MarkDaoust
MarkDaoust

Update Migrate from Estimator to Keras APIs (migrating to TF2)

PiperOrigin-RevId: 398217007

MarkDaoust
MarkDaoust

Add traceback to Graph Tensor runtime error and update error message.

PiperOrigin-RevId: 398256308

MarkDaoust
MarkDaoust

Update migrate landing page

PiperOrigin-RevId: 398275191

MarkDaoust
MarkDaoust

Refresh Effective TF2 guide

PiperOrigin-RevId: 398279077

MarkDaoust
MarkDaoust

Fix formatting

PiperOrigin-RevId: 398286660

MarkDaoust
MarkDaoust

Update Migrate checkpoint saving (migrating to TF2)

PiperOrigin-RevId: 398287265

MarkDaoust
MarkDaoust

Add note

PiperOrigin-RevId: 398295579

MarkDaoust
MarkDaoust

Fix validate correctness notebook

PiperOrigin-RevId: 398311532

commit sha: 0747fe67fa0079bd787b7c06e1c570a9c61ddaca

pushed 1 day ago
push

MarkDaoust push MarkDaoust/examples

MarkDaoust
MarkDaoust

Simplifies calculation of attributed string size without making string itself

MarkDaoust
MarkDaoust

Merge branch 'master' into sizeUsingFont

MarkDaoust
MarkDaoust

Update tflite_transfer_converter.py

MarkDaoust
MarkDaoust

Update tflite_transfer_converter.py

MarkDaoust
MarkDaoust

Functional API with InputLayer supported

MarkDaoust
MarkDaoust

gradient computation on only trainable variables

MarkDaoust
MarkDaoust

Mention alternative for TF 1.x

MarkDaoust
MarkDaoust

Merge branch 'master' into master

MarkDaoust
MarkDaoust

Add api-docs generator script

MarkDaoust
MarkDaoust

use the explicit package contents filter

MarkDaoust
MarkDaoust

Update recommendation model to use full vocab as negatives to calculate loss and monitor the recall for labels in both batch samples and full vocabularies.

PiperOrigin-RevId: 330851551

MarkDaoust
MarkDaoust

Merge pull request #245 from MarkDaoust:mm_py_docs

PiperOrigin-RevId: 331054481

MarkDaoust
MarkDaoust

Updated Text Classification Android sample to use model with metadata

PiperOrigin-RevId: 331242240

MarkDaoust
MarkDaoust

[Clean up] Consolidate distribution utils.

PiperOrigin-RevId: 331359058

MarkDaoust
MarkDaoust

updating object detection sample to use TFHub model

PiperOrigin-RevId: 331507205

MarkDaoust
MarkDaoust

Split the inference code from the image classification ref app

PiperOrigin-RevId: 331566630

commit sha: 50efb8d5767eb26d02850db7cd83bfeb8f78768c

pushed 1 day ago
pull request

MarkDaoust pull request quantumlib/OpenFermion

MarkDaoust
MarkDaoust

Direct download the missing file

This fixes the colab, and will get the import build to stop failing.

pull request

MarkDaoust pull request quantumlib/OpenFermion

MarkDaoust
MarkDaoust

Direct download the missing file

This file is no longer included in the pip package, causing the colab and docs import to fail.

Download it from github.

issue

MarkDaoust issue comment quantumlib/OpenFermion

MarkDaoust
MarkDaoust

Direct download the missing file

This file is no longer included in the pip package, causing the colab and docs import to fail.

Download it from github.

MarkDaoust
MarkDaoust

Wait. This is still broken.


created branch

MarkDaoust in MarkDaoust/OpenFermion create branch fix-notebook

created 1 day ago
push

MarkDaoust push MarkDaoust/OpenFermion

MarkDaoust
MarkDaoust

Change _book.yaml to point to /reference/python (#663)

  • Change _book.yaml to point to /reference/python
  • Update _book.yaml
MarkDaoust
MarkDaoust

Remove mention of OFC (#669)

OFC will be deprecated and all functionality is now in OpenFermion proper.

MarkDaoust
MarkDaoust

Remove ofc from projects in docs (#670)

MarkDaoust
MarkDaoust

Update tutorial list (#671)

When OpenFermion-Cirq was merged, we gained those tutorials.

The list in docs is now updated and reordered.

MarkDaoust
MarkDaoust

Found another OFC reference (#672)

maybe this is the last one?

MarkDaoust
MarkDaoust

Fermion partitioning (#676)

  • Added functions from old commit, fixed up typing and docstrings following Kevin's and Victory's suggestions

  • Fixed formatting

  • Added space

MarkDaoust
MarkDaoust

Trotter circuit validation (#674)

  • Made function, tests with feedback from previous pr

  • Fixed formatting

  • Fixed typo

  • Fixed faulty import

  • Fixed import

  • Fixed test

  • Fixed additional space

  • Added Victory's suggestions

  • Fixed formatting

  • Fixed nits

  • Fixed last bracket issue

Co-authored-by: Nicholas Rubin [email protected]

MarkDaoust
MarkDaoust

Add front matter to notebook tutorials (#682)

  • nbfmt tutorials/ with default indent

  • Add AUTHORS file to repo

  • Add license, buttons, and formatting to tutorials/binary_code_transforms

  • Add license, buttons, and formatting to tutorials/bosonic_operators

  • Add license, buttons, and formatting to tutorials/circuits_1_basis_change

  • Add license, buttons, and formatting to tutorials/circuits_2_diagonal_coulomb_trotter

  • Add license, buttons, and formatting to tutorials/circuits_3_arbitrary_basis_trotter

  • Add license, buttons, and formatting to tutorials/intro_to_openfermion

  • Add license, buttons, and formatting to tutorials/jordan_wigner_and_bravyi_kitaev_transforms

MarkDaoust
MarkDaoust

Update cirq pin (#684)

  • Update cirq pin

Updated test files now use final_state_vector. The old function call will be deprecated in cirq v0.10.

  • format check
MarkDaoust
MarkDaoust

Tutorials install from github (#685)

Tutorials install from latest on master.

MarkDaoust
MarkDaoust

Package testing data with OpenFermion (#686)

  • Package testing data with OpenFermion

Data was moved out of the package with the expectation that users would be using this data if they were working on a development branch.

Thus far we have found it useful to have this data available when developing chemistry programs and techniques.

Therefore the correct place for this data is in testing.

We also explore other options like including this as package_data.

Generally, package_data wasn't originally designed with this particular use case in mind.

Overall, though this adds a bit of data to the sdist, it makes testing and using OpenFermion as a development platform easier.

  • try out module form

  • needed to actually add the files

  • Move performance test

why is this used as an example

  • format

  • coverage_test

  • move performance tests to testing

  • expanding test case to hit all ifs

  • windows tried to divide by zero

MarkDaoust
MarkDaoust

Anti-symmetrized integrals, coulomb and exchange integrals (#681)

  • Added functionality to extract anti-symmetrized integrals, coulomb and exchange matrices

  • corrected small mistake

  • Writing tests...

  • added tests, and the anti-symmetrized orbitals are now in spin-orbital basis. To do this, I added a separate function to molecular_data.py that does this transformation.

  • oops, has to be 1/2 of course

  • pylint and format correction

  • format and testing

Co-authored-by: Nicholas Rubin [email protected]

MarkDaoust
MarkDaoust

Verified phase estimation - estimators (#683)

  • Added phase fit estimator

  • Added reference to vpe paper

  • Started function to extract phase function from experimental data

  • Made phase function generation function, tests not yet written

  • Wrote tests, tests pass!

  • Fixed missing init

  • Fixed formatting

  • Formatting

  • Fixed line length

Co-authored-by: Nicholas Rubin [email protected]

MarkDaoust
MarkDaoust

Slight increase in testing speed (#689)

We might want to consider testing incrementally

MarkDaoust
MarkDaoust

Added functions to init file (#673)

  • Added functions to init file

  • Fixing circular dependencies

  • Added comment to init file explaining shift

  • Shifting a commutator calculation from utils to opconversions where it belongs

  • Adding the new files

  • Removing old import statement

  • Maybe this works

  • Made another change

  • Adding import

  • Remove commented code

Co-authored-by: Nick Rubin [email protected]

MarkDaoust
MarkDaoust

Add type hint for binary transform (#656)

MarkDaoust
MarkDaoust

Docs: update notebook buttons (#692)

  • Update site button in notebooks

  • Update colab button in notebooks

  • Update github button in notebooks

  • Update download button in notebooks

MarkDaoust
MarkDaoust

Fixing commit history (#675)

Co-authored-by: Nicholas Rubin [email protected]

MarkDaoust
MarkDaoust

Type hints hamiltonians (#693)

  • Add type hints to dwave, plane wave and spec ops

  • Add type hints to dwave, plane wave, spec ops and jellium

  • Add Union type hint

  • Put Typing imports before other imports

  • Change format to conform to format check

  • Add test for bad parity parameter

  • Remove space before parameter

  • Return value type hint to previous line

  • 3-line gap between functions to 2-line

  • Geometry type hint can be int or float

  • Reformat

  • Split Tuple type check into two separate lines

  • Merge split up type hint into one line

  • Import Union type hint

  • Split type hint into two lines

  • Delete settings.json

Accidentally kept this file, obviously not needed

commit sha: 7532ad9e6d34f6922537f00a5cc001047f2bfdd1

pushed 1 day ago
Dec 2 (3 days ago)
pull request

MarkDaoust merge to tensorflow/docs

MarkDaoust
MarkDaoust

Update deepdream.ipynb

Removed "from tensorflow.keras.preprocessing import image" . Also updated "tf.keras.preprocessing.image.img_to_array" to "tf.keras.utils.img_to_array".

pull request

MarkDaoust merge to tensorflow/docs

MarkDaoust
MarkDaoust

Update transfer_learning_with_hub.ipynb

Updated "val_ds = tf.keras.preprocessing.image_dataset_from_directory" to "val_ds = tf.keras.utils.image_dataset_from_directory" in Dataset section and executed successfully.

Dec 1 (4 days ago)
pull request

MarkDaoust merge to tensorflow/docs

MarkDaoust
MarkDaoust

Functions reference fixed

The reference to functions was broken. I changed the link to this one

MarkDaoust
MarkDaoust

Thanks. That link probably only worked on tensorflow.org.

Relative links work everywhere and keep your context (tensorflow.org, github, colab); the #fragment part works everywhere except colab.

issue

MarkDaoust issue comment tensorflow/java

MarkDaoust
MarkDaoust

Remove ndarray from the docs generator.

ndarray is not part of the repo anymore. This is preventing me from updating the docs to v0.4.

We need to either remove it here or add code to this script to clone that repo and copy-in the code.

MarkDaoust
MarkDaoust

They can both be released independently now (e.g. we've just released 0.4.0 of TF Java but NdArray is still at 0.3.3, which is fine). How can we keep all this in sync on the website?

I'm not exactly sure yet 😅.

IDK anything about Java package management. Do you know a good command I can use to check Maven for the latest version of the two packages? If I can automate the version-number check, I could download the correct branch for both, splice the source together, and generate the docs.
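One possible approach, sketched in Python: Maven Central exposes a search API that reports the latest published version of an artifact. The group/artifact coordinates below are assumptions about where TF Java and NdArray are published, not confirmed from the repo:

import json
import urllib.request

def latest_maven_version(group, artifact):
    # Query Maven Central's search API for the newest version of an artifact.
    url = ('https://search.maven.org/solrsearch/select'
           '?q=g:{}+AND+a:{}&rows=1&wt=json'.format(group, artifact))
    with urllib.request.urlopen(url) as resp:
        docs = json.load(resp)['response']['docs']
    return docs[0]['latestVersion'] if docs else None

# Hypothetical coordinates for the two packages.
print(latest_maven_version('org.tensorflow', 'tensorflow-core-api'))
print(latest_maven_version('org.tensorflow', 'ndarray'))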

pull request

MarkDaoust merge to tensorflow/docs-l10n

MarkDaoust
MarkDaoust

I test-ran this in Colab. It works now.

Nov 30 (5 days ago)
pull request

MarkDaoust pull request tensorflow/java

MarkDaoust
MarkDaoust

Remove ndarray from the docs generator.

ndarray is not part of the repo anymore. This is preventing me from updating the docs to v0.4.

We need to either remove it here or add code to this script to clone that repo and copy-in the code.

push

MarkDaoust push MarkDaoust/java

MarkDaoust
MarkDaoust

Remove ndarray from the docs generator.

ndarray is not part of the repo anymore. This is preventing me from updating the docs to v0.4.

We need to either remove it here or add code to this script to clone that repo and copy-in the code.

commit sha: 68a38c917a320eaabbeca6599d2dbe3e5ae68e9a

pushed 4 days ago
Nov 29 (6 days ago)
pull request

MarkDaoust merge to tensorflow/docs

MarkDaoust
MarkDaoust

Remove a hard-coded constant from the image classification tutorial.

The constant can be inferred from other parts of the code. Making this change allows the example to be generalized more easily.

Nov 23 (1 week ago)
pull request

MarkDaoust merge to tensorflow/io

MarkDaoust
MarkDaoust

Update Kafka to 2.7.2

The previous archive has been removed, so the notebook isn't starting the kafka brokers.

This will expose the error causing failures in #1569.

Nov 22 (1 week ago)
issue

MarkDaoust issue comment tensorflow/io

MarkDaoust
MarkDaoust

Dicom tuto plus tags

Following #1493, suggesting a new subchapter for working with DICOM tags in tfio: "Decode DICOM Metadata and working with Tags".

My tutorial file is named dicom.ipynb, as suggested by @yongtang.

issue

MarkDaoust issue tensorflow/io

MarkDaoust
MarkDaoust

tensorflow.org kafka.ipynb failing

In kafka.ipynb:

write_to_kafka("susy-train", zip(x_train, y_train))

---------------------------------------------------------------------------
NoBrokersAvailable                        Traceback (most recent call last)
/tmp/ipykernel_16467/922230483.py in <module>
     11   print("Wrote {0} messages into topic: {1}".format(count, topic_name))
     12 
---> 13 write_to_kafka("susy-train", zip(x_train, y_train))
     14 write_to_kafka("susy-test", zip(x_test, y_test))

/tmp/ipykernel_16467/922230483.py in write_to_kafka(topic_name, items)
      4 def write_to_kafka(topic_name, items):
      5   count=0
----> 6   producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'])
      7   for message, key in items:
      8     producer.send(topic_name, key=key.encode('utf-8'), value=message.encode('utf-8')).add_errback(error_callback)

/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/kafka/producer/kafka.py in __init__(self, **configs)
    381         client = KafkaClient(metrics=self._metrics, metric_group_prefix='producer',
    382                              wakeup_timeout_ms=self.config['max_block_ms'],
--> 383                              **self.config)
    384 
    385         # Get auto-discovered version from client if necessary

/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/kafka/client_async.py in __init__(self, **configs)
    242         if self.config['api_version'] is None:
    243             check_timeout = self.config['api_version_auto_timeout_ms'] / 1000
--> 244             self.config['api_version'] = self.check_version(timeout=check_timeout)
    245 
    246     def _can_bootstrap(self):

/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/kafka/client_async.py in check_version(self, node_id, timeout, strict)
    898             if try_node is None:
    899                 self._lock.release()
--> 900                 raise Errors.NoBrokersAvailable()
    901             self._maybe_connect(try_node)
    902             conn = self._conns[try_node]

NoBrokersAvailable: NoBrokersAvailable
issue

MarkDaoust issue tensorflow/io

MarkDaoust
MarkDaoust

tensorflow.org avro.ipynb Failing

The tensorflow.org import for avro.ipynb is failing:

------------------
model.fit(x=dataset, epochs=1, steps_per_epoch=1, verbose=1)
------------------

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_16881/4054086148.py in <module>
----> 1 model.fit(x=dataset, epochs=1, steps_per_epoch=1, verbose=1)

/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     65     except Exception as e:  # pylint: disable=broad-except
     66       filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67       raise e.with_traceback(filtered_tb) from None
     68     finally:
     69       del filtered_tb

/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
   1127           except Exception as e:  # pylint:disable=broad-except
   1128             if hasattr(e, "ag_error_metadata"):
-> 1129               raise e.ag_error_metadata.to_exception(e)
   1130             else:
   1131               raise

TypeError: in user code:

    File "/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/keras/engine/training.py", line 878, in train_function  *
        return step_function(self, iterator)
    File "/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/keras/engine/training.py", line 867, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/keras/engine/training.py", line 860, in run_step  **
        outputs = model.train_step(data)
    File "/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/keras/engine/training.py", line 813, in train_step
        f'Target data is missing. Your model has `loss`: {self.loss}, '

    TypeError: Target data is missing. Your model has `loss`: mse, and therefore expects target data to be passed in `fit()`.

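For context, this Keras error usually means the dataset yields only features and no targets. A minimal sketch of the generic fix, using a hypothetical 'label' column (the avro tutorial's actual column names may differ):

import tensorflow as tf

# Hypothetical stand-in for the parsed avro dataset: each element is a dict.
dataset = tf.data.Dataset.from_tensor_slices(
    {'feature': [[1.0], [2.0], [3.0]], 'label': [[0.0], [1.0], [0.0]]}).batch(3)

# Split each element into (features, target) so Model.fit() receives targets.
dataset = dataset.map(lambda d: (d['feature'], d['label']))

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')
model.fit(x=dataset, epochs=1, steps_per_epoch=1, verbose=1)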

Nov 20 (2 weeks ago)
issue

MarkDaoust issue comment tensorflow/tensorflow

MarkDaoust
MarkDaoust

tf.keras.layers.Permute documentation is misleading, error messages even more so

URL(s) with the issue:

https://www.tensorflow.org/api_docs/python/tf/keras/layers/Permute

Description of issue (what needs changing):

Clear description

The Permute layer is quite picky about its dims argument, despite the docs clearly saying:

dims: Tuple of integers. Permutation pattern, does not include the samples dimension. Indexing starts at 1. For instance, (2, 1) permutes the first and second dimensions of the input.

It does not actually follow from this documentation that the dims argument MUST list all of the dimensions, with none missing, as per the check actually present in the code:

   if sorted(dims) != list(range(1, len(dims) + 1)):
      raise ValueError(
          'Invalid permutation `dims` for Permute Layer: %s. '
          'The set of indices in `dims` must be consecutive and start from 1.' %
          (dims,))

The next hurdle is that the error message thrown by runtime is kind of inconsistent:

keras.layers.Permute((3, 2), input_shape=[30, 6, 8], name=f"Permute_layer")
>>> Invalid permutation `dims` for Permute Layer: (3, 2). The set of indices in `dims` must be consecutive and start from 1.

It does not say in the doc that dims must start with 1; it just says indexing starts at 1. OK, now the user knows it must start with 1, but how does one actually get dimensions 2 and 3 swapped? At that point it's just a mess from there on.

I suggest the following changes:

To the docs: please make the example use a 3D tensor rather than a 2D one. Then it would be clear that all dims must be listed, even if they are not to be permuted, e.g.

tmp = keras.layers.Permute((1, 3, 2), name=f"Permute_input")(tmp)

Another note the docs do not make is that len(dims) must match the number of input_shape dimensions, which is, however, checked by the init; again, this possibly conflicts with the doc suggesting that input_shape can be anything (clearly it cannot be arbitrary, since it is coupled with the dims argument).
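A runnable sketch of the suggested 3D example, showing that dims must list every (non-batch) dimension even when only two are swapped:

import tensorflow as tf
from tensorflow import keras

inputs = keras.Input(shape=(30, 6, 8))
# All three non-batch dimensions must appear in `dims`;
# (1, 3, 2) keeps dimension 1 and swaps dimensions 2 and 3.
swapped = keras.layers.Permute((1, 3, 2), name="Permute_input")(inputs)
print(swapped.shape)  # (None, 30, 8, 6)

# Passing (3, 2) alone raises: "The set of indices in `dims` must be
# consecutive and start from 1."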

MarkDaoust
MarkDaoust
Nov 19 (2 weeks ago)
issue

MarkDaoust issue tensorflow/tensorflow

MarkDaoust
MarkDaoust

tf.signal.rfft documentation refers to Tcomplex as an argument

URL(s) with the issue:

https://www.tensorflow.org/api_docs/python/tf/signal/rfft

Description of issue (what needs changing):

Clear description

The table of Args in the documentation includes Tcomplex:

Tcomplex | An optional tf.DType from: tf.complex64, tf.complex128. Defaults to tf.complex64.

But the function does not accept this argument. Calling tf.signal.rfft(..., Tcomplex=...) results in the error:

TypeError: _rfft() got an unexpected keyword argument 'Tcomplex'

This makes sense given the signature in the documentation:

tf.signal.rfft(
    input_tensor, fft_length=None, name=None
)

and https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/ops/signal/fft_ops.py#L114-L140

Submit a pull request?

No. I could not find where this table was generated in the code.
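For reference, a quick sketch of the call as actually accepted (no Tcomplex keyword); the output dtype defaults to complex64:

import tensorflow as tf

signal = tf.random.normal([4, 128])            # batch of real-valued signals
spectrum = tf.signal.rfft(signal, fft_length=[128])
print(spectrum.shape, spectrum.dtype)          # (4, 65) <dtype: 'complex64'>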

issue

MarkDaoust issue comment tensorflow/tensorflow

MarkDaoust
MarkDaoust

tf.signal.rfft documentation refers to Tcomplex as an argument

URL(s) with the issue:

https://www.tensorflow.org/api_docs/python/tf/signal/rfft

Description of issue (what needs changing):

Clear description

The table of Args in the documentation includes Tcomplex:

Tcomplex | An optional tf.DType from: tf.complex64, tf.complex128. Defaults to tf.complex64.

But the function does not accept this argument. Calling tf.signal.rfft(..., Tcomplex=...) results in the error:

TypeError: _rfft() got an unexpected keyword argument 'Tcomplex'

This makes sense given the signature in the documentation:

tf.signal.rfft(
    input_tensor, fft_length=None, name=None
)

and https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/ops/signal/fft_ops.py#L114-L140

Submit a pull request?

No. I could not find where this table was generated in the code.

issue

MarkDaoust issue comment tensorflow/tensorflow

MarkDaoust
MarkDaoust

tf.keras.layers.Permute documentation is misleading, error messages even more so

URL(s) with the issue:

https://www.tensorflow.org/api_docs/python/tf/keras/layers/Permute

Description of issue (what needs changing):

Clear description

The Permute layer is quite picky about its dims argument, despite the docs clearly saying:

dims: Tuple of integers. Permutation pattern, does not include the samples dimension. Indexing starts at 1. For instance, (2, 1) permutes the first and second dimensions of the input.

It does not actually follow from this documentation that the dims argument MUST list all of the dimensions, with none missing, as per the check actually present in the code:

   if sorted(dims) != list(range(1, len(dims) + 1)):
      raise ValueError(
          'Invalid permutation `dims` for Permute Layer: %s. '
          'The set of indices in `dims` must be consecutive and start from 1.' %
          (dims,))

The next hurdle is that the error message thrown by runtime is kind of inconsistent:

keras.layers.Permute((3, 2), input_shape=[30, 6, 8], name=f"Permute_layer")
>>> Invalid permutation `dims` for Permute Layer: (3, 2). The set of indices in `dims` must be consecutive and start from 1.

It does not say in the doc that dims must start with 1; it just says indexing starts at 1. OK, now the user knows it must start with 1, but how does one actually get dimensions 2 and 3 swapped? At that point it's just a mess from there on.

I suggest the following changes:

To the docs: please make the example use a 3D tensor rather than a 2D one. Then it would be clear that all dims must be listed, even if they are not to be permuted, e.g.

tmp = keras.layers.Permute((1, 3, 2), name=f"Permute_input")(tmp)

Another note the docs do not make is that len(dims) must match the number of input_shape dimensions, which is, however, checked by the init; again, this possibly conflicts with the doc suggesting that input_shape can be anything (clearly it cannot be arbitrary, since it is coupled with the dims argument).

MarkDaoust
MarkDaoust

This page hasn't changed. You can give it a shot. The source for this page is here:

https://github.com/keras-team/keras/blob/master/keras/layers/core/permute.py

It might be sufficient to say that dims expects a permutation of the dimensions.

Nov 18 (2 weeks ago)
issue

MarkDaoust issue tensorflow/tensorflow

MarkDaoust
MarkDaoust

Documentation about convolution padding computation is missing

URL(s) with the issue:

Please provide a link to the documentation entry, for example: https://www.tensorflow.org/api_docs/python/tf/nn/convolution?version=nightly

Description of issue (what needs changing):

The documentation refers to an invalid link, https://tensorflow.org/api_guides/python/nn#Convolution, and the reference to that link was recently deleted in this commit: https://github.com/tensorflow/tensorflow/commit/271df9ce498f630b6c417087f18b32f2f7446a83 (cc @MarkDaoust).

However, I think the correct action is to provide an alternative to the invalid link (rather than deleting it).

When the link existed, the page contained the formula for how padding is computed for convolution, which is obviously a necessary piece of information for users to know and must be in the documentation.

As a reference, I found an incomplete third-party website that contains the old version of this now-missing page.


issue

MarkDaoust issue comment tensorflow/tensorflow

MarkDaoust
MarkDaoust

Documentation about convolution padding computation is missing

URL(s) with the issue:

Please provide a link to the documentation entry, for example: https://www.tensorflow.org/api_docs/python/tf/nn/convolution?version=nightly

Description of issue (what needs changing):

The documentation refers to an invalid link, https://tensorflow.org/api_guides/python/nn#Convolution, and the reference to that link was recently deleted in this commit: https://github.com/tensorflow/tensorflow/commit/271df9ce498f630b6c417087f18b32f2f7446a83 (cc @MarkDaoust).

However, I think the correct action is to provide an alternative to the invalid link (rather than deleting it).

When the link existed, the page contained the formula for how padding is computed for convolution, which is obviously a necessary piece of information for users to know and must be in the documentation.

As a reference, I found an incomplete third-party website that contains the old version of this now-missing page.


MarkDaoust
MarkDaoust

The api-reference pages on tensorflow.org are built from the source. Those two links I posted take you to the source for tf.nn.convolution:

If you edit this, and your PR gets merged:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/nn_ops.py#L998-L1135

It will show up on tensorflow.org in a few days.
