Tried it in TensorFlow 2.6 myself, but had some problems. Hope to get some help! #40

Open
Liozizy opened this issue Nov 19, 2021 · 6 comments

Liozizy commented Nov 19, 2021

When I try to define a custom loss layer myself and use the add_weight() function to declare trainable backpropagation variables, the following error is thrown:

ValueError: Variable <tf.Variable 'eqn1_1/constant1:0' shape=(1,) dtype=float32> has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

My code is as follows:

import tensorflow as tf
from tensorflow.keras import layers as KL

class WbceLoss(KL.Layer):

    def __init__(self, **kwargs):
        super(WbceLoss, self).__init__(**kwargs)

    def build(self, input_shape):
        # Trainable constants entering the PDE residual. (The post originally
        # had `shape[1]`, a syntax error; the error message shows the weights
        # were actually created with shape=(1,).)
        self.constant1 = self.add_weight(name="constant1", shape=(1,),
                                         initializer='random_normal',
                                         trainable=True)
        self.constant2 = self.add_weight(name="constant2", shape=(1,),
                                         initializer='random_normal',
                                         trainable=True)
        super(WbceLoss, self).build(input_shape)

    def call(self, inputs, **kwargs):
        tf.compat.v1.disable_eager_execution()
        (out1, out2, out3, cur_time, cur_x_input, cur_y_input,
         cur_z_input, perm_input) = inputs

        constant1 = self.constant1
        constant2 = self.constant2

        # v1-style symbolic gradient of out1 w.r.t. time.
        gradient_with_time = tf.keras.backend.gradients(out1, cur_time)[0]

        # Bias vector [0, 0, constant1], expanded to shape (3, 1, 1).
        a = tf.zeros((1,), dtype=tf.float32)
        bias = tf.convert_to_tensor([a, a, constant1])
        # bias = tf.expand_dims([0., 0., constant1], 0)
        bias = tf.expand_dims(bias, 2)

        # Spatial gradients of the pressure output.
        pressure_grad_x = tf.keras.backend.gradients(out2, cur_x_input)[0]
        pressure_grad_y = tf.keras.backend.gradients(out2, cur_y_input)[0]
        pressure_grad_z = tf.keras.backend.gradients(out2, cur_z_input)[0]

        pressure_grad = tf.convert_to_tensor(
            [pressure_grad_x, pressure_grad_y, pressure_grad_z])
        pressure_grad = tf.keras.backend.permute_dimensions(
            pressure_grad, (1, 0, 2))
        coeff = (1 - out1) / constant2

        # Divergence of m = perm * (grad p - bias).
        m = tf.matmul(perm_input, (pressure_grad - bias))
        m_grad_x = tf.keras.backend.gradients(m, cur_x_input)[0]
        m_grad_y = tf.keras.backend.gradients(m, cur_y_input)[0]
        m_grad_z = tf.keras.backend.gradients(m, cur_z_input)[0]
        m_grad_1 = tf.add(m_grad_x, m_grad_y)
        m_grad = tf.add(m_grad_1, m_grad_z)

        # Assemble the residual and register it as loss and metric.
        m_final = tf.multiply(coeff, m_grad)
        eqn_1 = tf.add(gradient_with_time, m_final)
        eqn_2 = tf.add(eqn_1, out3)
        eqn = tf.negative(eqn_2)

        eqn = tf.compat.v1.to_float(eqn)

        self.add_loss(eqn, inputs=True)
        self.add_metric(eqn, aggregation="mean", name="eqn1")

        return eqn

The full traceback when I train the model is as follows:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
     12     batch_size=241,
     13     shuffle=True,
---> 14     verbose=1)

~\AppData\Roaming\Python\Python36\site-packages\keras\engine\training_v1.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    794         max_queue_size=max_queue_size,
    795         workers=workers,
--> 796         use_multiprocessing=use_multiprocessing)
    797 
    798   def evaluate(self,

~\AppData\Roaming\Python\Python36\site-packages\keras\engine\training_arrays_v1.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    655         validation_steps=validation_steps,
    656         validation_freq=validation_freq,
--> 657         steps_name='steps_per_epoch')
    658 
    659   def evaluate(self,

~\AppData\Roaming\Python\Python36\site-packages\keras\engine\training_arrays_v1.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    175     # function we recompile the metrics based on the updated
    176     # sample_weight_mode value.
--> 177     f = _make_execution_function(model, mode)
    178 
    179     # Prepare validation data. Hold references to the iterator and the input list

~\AppData\Roaming\Python\Python36\site-packages\keras\engine\training_arrays_v1.py in _make_execution_function(model, mode)
    545   if model._distribution_strategy:
    546     return distributed_training_utils_v1._make_execution_function(model, mode)
--> 547   return model._make_execution_function(mode)
    548 
    549 

~\AppData\Roaming\Python\Python36\site-packages\keras\engine\training_v1.py in _make_execution_function(self, mode)
   2077   def _make_execution_function(self, mode):
   2078     if mode == ModeKeys.TRAIN:
-> 2079       self._make_train_function()
   2080       return self.train_function
   2081     if mode == ModeKeys.TEST:

~\AppData\Roaming\Python\Python36\site-packages\keras\engine\training_v1.py in _make_train_function(self)
   2009       # Training updates
   2010       updates = self.optimizer.get_updates(
-> 2011           params=self._collected_trainable_weights, loss=self.total_loss)
   2012       # Unconditional updates
   2013       updates += self.get_updates_for(None)

~\AppData\Roaming\Python\Python36\site-packages\keras\optimizer_v2\optimizer_v2.py in get_updates(self, loss, params)
    757 
    758   def get_updates(self, loss, params):
--> 759     grads = self.get_gradients(loss, params)
    760     grads_and_vars = list(zip(grads, params))
    761     self._assert_valid_dtypes([

~\AppData\Roaming\Python\Python36\site-packages\keras\optimizer_v2\optimizer_v2.py in get_gradients(self, loss, params)
    753             "gradient defined (i.e. are differentiable). "
    754             "Common ops without gradient: "
--> 755             "K.argmax, K.round, K.eval.".format(param))
    756     return grads
    757 

ValueError: Variable <tf.Variable 'constant1_6:0' shape=(1,) dtype=float32> has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

Hope to get some help. Thank you!
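
For context, tf.keras.backend.gradients only returns usable symbolic gradients in v1 graph mode; under TF 2.x it is a common source of exactly this "has None for gradient" failure, because the trainable constants never end up on a differentiable path to the loss. A minimal sketch of the idiomatic TF 2.x alternative using tf.GradientTape, assuming a toy one-dimensional residual (the network, the train_step function, and the residual below are illustrative stand-ins, not the code from this issue):

    import tensorflow as tf

    # Hypothetical stand-ins: `net` maps (t, x) -> u; `constant1` plays the
    # same role as the trainable constant in the original layer.
    net = tf.keras.Sequential([
        tf.keras.layers.Dense(20, activation="tanh", input_shape=(2,)),
        tf.keras.layers.Dense(1),
    ])
    constant1 = tf.Variable([0.1], name="constant1")
    optimizer = tf.keras.optimizers.Adam()

    @tf.function
    def train_step(t, x):
        with tf.GradientTape() as loss_tape:
            # Inner tape: gradients of the network output w.r.t. its inputs,
            # the TF 2.x replacement for tf.keras.backend.gradients.
            with tf.GradientTape(persistent=True) as g:
                g.watch([t, x])
                u = net(tf.concat([t, x], axis=1))
            u_t = g.gradient(u, t)
            u_x = g.gradient(u, x)
            del g
            # Toy residual standing in for the PDE terms built in call().
            residual = u_t + constant1 * u_x
            loss = tf.reduce_mean(tf.square(residual))
        # constant1 participates in `residual` inside the tape, so its
        # gradient is defined rather than None.
        variables = net.trainable_variables + [constant1]
        grads = loss_tape.gradient(loss, variables)
        optimizer.apply_gradients(zip(grads, variables))
        return loss

With this pattern the optimizer sees a real gradient for constant1, because the residual is assembled inside the tape rather than attached through add_loss in graph mode.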

@nnnn123456789

Use TensorFlow 1.x (e.g., 1.15).

@manwu1994

Try:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
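
Worth noting: these two lines only take effect if they run before any layers or models are built, since they switch TensorFlow into v1 graph mode. A minimal sketch of the intended script layout (the model-building step itself is assumed, not shown in this thread):

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()  # switch to v1 graph mode so tf.keras.backend.gradients works

    # ...only now define WbceLoss, build the model, compile, and fit...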

@amiralizadeh1

Try to do it in TensorFlow 1, because there are major changes between the first and second versions. Also, I recommend using Google Colab instead of a local machine, because installing TF 1 there is much easier (you don't have to change your Python or IDE version). You can take a look at my repo for more details.

@hsks commented Mar 6, 2023

@amiralizadeh1 Google Colab no longer supports TensorFlow 1.x.

@mingwei-yang-byte

tensorflow 1.15.0
python 3.6
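
For anyone reproducing this setup: TensorFlow 1.15.0 wheels are only published for Python 3.7 and earlier, so a separate environment is the usual route, e.g. `conda create -n tf1 python=3.6` followed by `pip install tensorflow==1.15.0` (the environment name `tf1` is just an example).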

@justgoinggoxixi

tensorflow 1.15.0 python 3.6

Wow, thank you!
