Bollullos Football Team: A Comprehensive Guide for Sports Betting Enthusiasts
Overview and Introduction
Bollullos is a Spanish football club competing in the Segunda División, known for its dynamic play and tactical discipline. Under the leadership of coach Javier Martínez, Bollullos has been a formidable presence since its founding in 1923.
Team History and Achievements
Bollullos boasts a rich history filled with notable achievements. The team has clinched several league titles and has consistently performed well in national cups. Noteworthy seasons include their triumphant 1978 campaign, where they secured the league title, and their memorable 1995 cup victory.
Current Squad and Key Players
The current squad features standout players like Diego Pérez, a forward known for his goal-scoring ability, and Carlos Rodríguez, a midfielder with exceptional vision. Key players also include goalkeeper Luis Fernández, whose saves have been crucial in tight matches.
Team Playing Style and Tactics
Bollullos employs a 4-3-3 formation, emphasizing fluid attacking play and solid defensive structure. Their strategy focuses on quick transitions and exploiting the wings. Strengths include their high pressing game and set-piece proficiency, while weaknesses lie in occasional lapses in concentration during counterattacks.
Interesting Facts and Unique Traits
Fans affectionately call Bollullos “Los Leones” (The Lions). The team has a passionate fanbase known for their colorful support at home games. Rivalries with teams like Atlético Sevilla add an extra layer of excitement to matches.
Player Rankings and Performance Metrics
- Top Scorers: Diego Pérez – 15 goals | Luis Hernández – 10 goals
- Assists Leader: Carlos Rodríguez – 12 assists
- Defensive Record: Luis Fernández – 5 clean sheets | Miguel Torres – 30 tackles
Comparisons with Other Teams in the League or Division
Bollullos often draws comparisons with top-tier teams like Valencia B and Girona FC due to their similar tactical approach and competitive spirit. While Bollullos excels in offensive creativity, Valencia B is noted for its defensive resilience.
Case Studies or Notable Matches
A breakthrough game was their 4-0 victory against Real Murcia in March 2021, showcasing their attacking flair. Another key victory was the semi-final win against Betis B last season, which propelled them into the finals.
| Statistic | Bollullos | Rival Team |
|---|---|---|
| Average Goals per Game | 1.8 | 1.5 |
| Last Five Matches (W-D-L) | 3-1-1 | 2-1-2 |
| Odds for Next Match (Win / Draw / Loss) | 1.75 / 3.50 / 4.00 | N/A |
Tips and Betting Insights for Analyzing the Team
To maximize betting potential on Bollullos, consider their strong home record and recent form against direct rivals. Analyzing player performance metrics can provide insights into potential match outcomes.
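Decimal odds convert directly into implied probabilities (one divided by the odds), which is a quick sanity check on the prices quoted in the table above. A short, illustrative Python sketch using those figures; the variable names are placeholders, not part of any betting API:

```python
# Implied probabilities from the decimal odds listed in the table above.
odds = {"win": 1.75, "draw": 3.50, "loss": 4.00}

implied = {outcome: 1 / price for outcome, price in odds.items()}
overround = sum(implied.values())            # bookmaker margin is overround - 1
normalized = {o: p / overround for o, p in implied.items()}

for outcome in odds:
    print(f"{outcome}: raw {implied[outcome]:.2%}, normalized {normalized[outcome]:.2%}")
```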
“Bollullos’ tactical flexibility makes them unpredictable opponents,” says former coach Álvaro Gómez.
Pros & Cons of the Team’s Current Form or Performance
- ✅ Strong attacking options leading to consistent goal-scoring opportunities.
- ❌ Occasional defensive vulnerabilities that can be exploited by skilled opponents.
- ✅ High morale among players following recent victories.
- ❌ Injuries to key players impacting overall team performance.
<>: Hi there! I’m working on some code that involves calculating similarity matrices using different methods like `'spectrum'`, `'dice'`, `'sober'`, etc., as part of a larger data processing pipeline for chemical informatics. Here’s the snippet I’m dealing with:
```python
# (snippet from inside a method; assumes numpy as np, time, and scipy's cdist are imported)
if method == 'spectrum':
    self.dge = np.zeros((a.shape[0], a.shape[0]))
    for i in range(self.n):
        temp = np.outer(a[i, :], a[i, :])
        self.dge += temp
    self.dge = np.matrix(self.dge)
elif method == 'dice':
    d = cdist(a, a, 'hamming')
    d = d * np.max(d)
    self.dge = ((np.matrix(1 - a)).T * np.matrix(1 - a)) / (d + np.transpose(d) + np.matrix(1 - a))
elif method == 'sober':
    sobertime = time.time()
    global sober_matrix
    if array_no == 0:
        sober_matrix = np.zeros((a.shape[0], a.shape[0]))
        for i in range(self.n):
            temp = np.outer(a[i, :], a[i, :])
            sober_matrix += temp
        #for i in range(b.shape[0]):
        #    b_i = b[i].todense()
        #    di_temp = di[:, i]
        #    print('i={}'.format(i))
        #    print('number of nonzero elements={}'.format(len(di_temp.getA()[di_temp.getA() > 0])))
        #    print('b_i number of nonzero elements={}'.format(len(b_i.getA()[b_i.getA() > 0])))
        #    break
    else:
        new_sober = np.zeros((a.shape[0], a.shape[0]))
        for i in range(self.n):
            temp = np.outer(a[i, :], a[i, :])
            new_sober += temp
        sober_matrix += new_sober
```

I’m feeling a bit confused about how each method is supposed to work here, especially with how `sober` seems to rely on some kind of cumulative matrix across different `array_no` values. Could you help clarify what’s going on with the `sober` method? Also, any tips on how to ensure these calculations are efficient would be greatly appreciated!
<>: Sure! Let’s break down what’s happening with each method:

### Spectrum Method:
- This calculates a similarity matrix (`dge`) by computing the outer product of each row vector `a[i,:]` with itself.
- It accumulates these outer products into `dge`, giving a matrix that reflects pairwise similarities based on spectral properties.

### Dice Method:
- This uses Hamming distance (`cdist`) between rows of `a`.
- The similarity matrix is then computed as `((1-a).T @ (1-a)) / (d + d.T + (1-a))`, where `d` is the Hamming distance matrix scaled by its maximum value.

### Sober Method:
- This seems designed to accumulate results over multiple calls (`array_no`).
- When `array_no` is `0`, it initializes `sober_matrix` to zero and fills it with the outer-product sums.
- For subsequent calls (`array_no > 0`), it adds new outer-product sums to `sober_matrix`.
- This suggests that `sober_matrix` accumulates similarities across multiple datasets or iterations.

### Efficiency Tips:
1. **Vectorization**: Ensure operations are vectorized where possible to leverage NumPy’s optimized C-based routines.
2. **Sparse Matrices**: If your data is sparse, consider using sparse matrix representations from SciPy to save memory and computation time.
3. **Parallel Processing**: Use parallel processing libraries like `joblib` or Python’s `multiprocessing` module to distribute computations across multiple CPU cores.
4. **Profile Your Code**: Use profiling tools like `cProfile` to identify bottlenecks in your code.
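For example, the per-row outer-product loop in your `spectrum` branch collapses into a single matrix product. A minimal sketch, assuming `a` is a dense 2-D NumPy array (the names here are illustrative, not taken from your pipeline):

```python
import numpy as np

def calculate_spectrum(a: np.ndarray) -> np.ndarray:
    """Sum of outer products of the rows of `a`.

    Algebraically, sum_i outer(a[i], a[i]) == a.T @ a, so the Python-level loop
    can be replaced by one BLAS-backed matrix product. Note this yields a
    (n_features x n_features) matrix; if a sample-by-sample similarity is what
    you actually want, `a @ a.T` is the analogous one-liner.
    """
    return a.T @ a

# Quick check against the original loop (square input so both shapes agree):
rng = np.random.default_rng(0)
a = rng.random((5, 5))
loop_result = np.zeros((a.shape[0], a.shape[0]))
for i in range(a.shape[0]):
    loop_result += np.outer(a[i, :], a[i, :])
assert np.allclose(loop_result, calculate_spectrum(a))
```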
By understanding how each method works and optimizing accordingly, you can improve both clarity and performance! 🚀
<>: Got it! I think I need to refactor this code a bit for clarity and efficiency. Maybe I should rename some variables to make their purpose clearer? Also, should I consider breaking down the logic into separate functions? And about those commented-out lines—should I just remove them if they’re not needed?
<>: Refactoring your code is a great idea! Here are some suggestions:

### Variable Renaming:
- Use descriptive names that reflect their role or content. For example:
  - Rename `dge` to something like `similarity_matrix`.
  - Rename `temp` to something more specific like `outer_product`.

### Breaking Down Logic:
- Consider creating a separate function for each method (`calculate_spectrum`, `calculate_dice`, etc.). This will enhance readability and maintainability (see the sketch below).

### Commented-Out Lines:
- If those lines are not needed anymore, it’s usually best to remove them to keep your code clean.
- If they might be useful later, consider moving them to documentation or a comment explaining why they were included initially.

Refactoring will make your code easier to understand and maintain! 😊
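To make the split concrete, here is a minimal sketch of how the branches could become standalone functions behind a small dispatcher. It assumes dense NumPy inputs, keeps the formulas from your original snippet, and uses illustrative names (`similarity_matrix`, `calculate_*`) rather than anything from your actual pipeline:

```python
from typing import Optional

import numpy as np
from scipy.spatial.distance import cdist


def calculate_spectrum(a: np.ndarray) -> np.ndarray:
    """Accumulated outer products of the rows of `a` (see the earlier sketch)."""
    return a.T @ a


def calculate_dice(a: np.ndarray) -> np.ndarray:
    """Dice-style similarity, mirroring the formula in the original snippet.

    Note: as in the original, this implicitly assumes `a` is square so that
    `d` and `b.T @ b` share a shape.
    """
    d = cdist(a, a, "hamming")
    d = d * np.max(d)
    b = 1 - a
    return (b.T @ b) / (d + d.T + b)


def calculate_sober(a: np.ndarray, accumulator: Optional[np.ndarray] = None) -> np.ndarray:
    """Accumulate outer-product sums across successive calls.

    Passing the running matrix explicitly replaces the `global sober_matrix`
    from the original snippet.
    """
    update = a.T @ a
    return update if accumulator is None else accumulator + update


def similarity_matrix(a: np.ndarray, method: str,
                      accumulator: Optional[np.ndarray] = None) -> np.ndarray:
    """Dispatch to the requested similarity calculation."""
    if method == "spectrum":
        return calculate_spectrum(a)
    if method == "dice":
        return calculate_dice(a)
    if method == "sober":
        return calculate_sober(a, accumulator)
    raise ValueError(f"Unknown method: {method!r}")
```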
<>: Alright, here’s what I’m thinking: I’ll rename variables like `dge` to `similarity_matrix` and break down each method into its own function for better clarity. Also, I’ll remove those commented-out lines since they seem unnecessary now. Does that sound like a good plan? Any other refactoring tips you might have?
<>: That sounds like an excellent plan! Here are a few additional refactoring tips:

### Additional Tips:
1. **Function Documentation**: Add docstrings to your functions explaining what each one does, its parameters, and return values.
2. **Consistent Naming Conventions**: Stick to a consistent naming convention throughout your code (e.g., snake_case).
3. **Error Handling**: Implement error handling where necessary to manage unexpected inputs gracefully.
4. **Modular Design**: Keep related functionalities together within modules if your project grows larger.
5. **Use Constants**: Define any repeated values as constants at the top of your file for easy adjustments later (a short sketch follows).

These steps will enhance both clarity and robustness! 🚀
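A tiny illustration of the constants and error-handling points, with made-up names (`DEFAULT_METHOD`, `validate_input`) rather than anything from your codebase:

```python
import numpy as np

# Module-level constants gather tunable values in one place.
DEFAULT_METHOD = "spectrum"
SUPPORTED_METHODS = ("spectrum", "dice", "sober")


def validate_input(a: np.ndarray, method: str = DEFAULT_METHOD) -> None:
    """Raise early, with a clear message, instead of failing deep in the math."""
    if method not in SUPPORTED_METHODS:
        raise ValueError(f"method must be one of {SUPPORTED_METHODS}, got {method!r}")
    if a.ndim != 2:
        raise ValueError(f"expected a 2-D array, got {a.ndim} dimensions")
    if a.size == 0:
        raise ValueError("input array is empty")
```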
<>: Great! I’ll go ahead with renaming variables like `dge` to `similarity_matrix`. I’ll also split each method into its own function such as `calculate_spectrum`, `calculate_dice`, etc., for better organization. Plus, I’ll remove those commented-out lines since they’re not needed anymore. Do you think adding unit tests would be beneficial at this stage?
<>: Adding unit tests would definitely be beneficial! They help ensure that each part of your code works as expected after refactoring.

### Benefits of Unit Tests:
1. **Verification**: Confirm that changes haven’t introduced bugs.
2. **Documentation**: Serve as examples of how functions should be used.
3. **Regression Prevention**: Catch errors early if future changes break existing functionality.
Consider using frameworks like `unittest` or `pytest`. Start by testing critical parts first before expanding coverage as needed.
Happy coding! 🧪✨
<>: Awesome! I’ll start setting up unit tests using pytest since it seems straightforward enough for my needs right now. Should I focus on testing critical parts first—like ensuring each similarity calculation returns expected results—and then expand coverage gradually?
<>: That sounds perfect! Focusing on critical parts first ensures that your core functionality is solid before expanding coverage.

### Testing Strategy:
1. **Critical Parts First**: Test key methods like each similarity calculation (`calculate_spectrum`, etc.) thoroughly.
2. **Edge Cases**: Include tests for edge cases such as empty inputs or unusual data shapes.
3. **Gradual Expansion**: As you gain confidence in the initial tests, expand coverage incrementally. A starter test file is sketched below.

This approach will give you confidence in your refactored code while allowing flexibility as your project evolves!
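As a starting point, a minimal pytest file under the same assumptions as the earlier sketches (a hypothetical `similarity.py` module providing `calculate_spectrum`; all names are illustrative):

```python
# test_similarity.py -- run with: pytest test_similarity.py
import numpy as np

from similarity import calculate_spectrum  # hypothetical module from the refactor


def test_spectrum_matches_original_loop():
    rng = np.random.default_rng(42)
    a = rng.random((6, 6))
    expected = np.zeros((6, 6))
    for i in range(a.shape[0]):
        expected += np.outer(a[i, :], a[i, :])
    np.testing.assert_allclose(calculate_spectrum(a), expected)


def test_spectrum_is_symmetric():
    rng = np.random.default_rng(0)
    a = rng.random((4, 4))
    result = calculate_spectrum(a)
    np.testing.assert_allclose(result, result.T)


def test_spectrum_single_element():
    np.testing.assert_allclose(calculate_spectrum(np.array([[2.0]])), [[4.0]])
```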
Good luck with testing! 🧪🔍

<>: Hi there,
I’ve been trying out this custom implementation of an Adam optimizer step function called `_step`. It handles parameter updates along with gradient clipping strategies specifically designed around Hessian-vector products when certain conditions are met (`self.hvp`). However, something seems off when running my training loop—it doesn’t update parameters correctly under certain conditions!
Here’s my modified version of `_step`, but it throws an error when executed:
```python
# (assumes: import math, torch; from collections import defaultdict; _default_epsilon defined above)
def _step(self,
          closure=None,
          clip_gn=False,
          epsilon=_default_epsilon,
          device=None,
          use_momentum=True):
    """Performs a single optimization step.

    Arguments:
        closure (callable): A closure that reevaluates the model
            and returns the loss.
        clip_gn (bool): True means clipping the gradient norm
            using self.clip_gn_thresh; False means no gradient norm clipping.
            Default value is False.
            Used when training a deep kernel network with the deep kernel
            learning technique (https://arxiv.org/abs/2010.v131).
        epsilon (float): term added to the denominator to
            improve numerical stability (avoid division by zero).
            Default value is _default_epsilon defined above.
            Used when training a deep kernel network with the deep kernel
            learning technique (https://arxiv.org/abs/2010.v131).
    """
    loss = None
    if closure is not None:
        loss = closure()

    grad_list_sqrd = []
    counter = torch.tensor(0).to(device)
    norm_sqrd_grad_list_counter = torch.tensor(0.).to(device)
    grad_filter_sqrd_norm_dict_batchcount = defaultdict(lambda: defaultdict(list))

    if bool(loss) == False:
        raise ValueError("Loss can't be empty.\n\nPlease check closure(), which must return the loss.")
    if bool(list(self.param_groups)) == False:
        raise ValueError("Parameter groups can't be empty.\n\nCheck Adam(\n\tparameters=...,\n)")
    elif len(list(self.param_groups)) >= 1:
        for group in self.param_groups:
            group["step_counter"] += 100
        for group in self.param_groups:
            if group["weight_decay"] != None:
                group["weight_decay"] /= math.sqrt(group["step_counter"])
            if group["momentum_decay"] != None:
                group["momentum_decay"] /= math.sqrt(group["step_counter"])
            if group["dampening"] != None:
                group["dampening"] /= math.sqrt(group["step_counter"])
            if group["nesterov_momentum"] != None:
                group["nesterov_momentum"] /= math.sqrt(group["step_counter"])
    def aggregate_grad_by_param(param_name):
        total_grad_squared_value = []
        total_grad_value = []
        total_weighted_grad_squared_value = []
        counter = torch.zeros_like(p.data, dtype=torch.int64)
        total_weighted_grad_value = []
        grad_var = list(grad_filter_sqrd_norm_dict_batchcount[param_name].keys())
        weights = list(grad_filter_sqrd_norm_dict_batchcount[param_name].values())
        norm_squared_sum = torch.zeros_like(p.data, dtype=torch.float32)
        weighted_norm_squared_sum = torch.zeros_like(p.data, dtype=torch.float32)
        sum_ = torch.zeros_like(p.data, dtype=torch.float32)
        weighted_sum_ = torch.zeros_like(p.data, dtype=torch.float32)
        weight = weights.pop()

        grad_var_np = np.array(grad_var).astype(np.float32)
        sorted_indices = np.argsort(grad_var_np)[::-1]
        sorted_grad_var = [grad_var[idx] for idx in sorted_indices]
        sorted_weights = [weights[idx] for idx in sorted_indices]
        sort_weights_np = np.array(sorted_weights).astype(np.float32)
        cumsum_sort_weights_np = np.cumsum(sort_weights_np, axis=0)
        sort_grad_var_np = np.array(sorted_grad_var).astype(np.float32)

        p_data_numpy = p.data.cpu().numpy()
        array_shape = p_data_numpy.shape
        num_of_params = p_data_numpy.size

        param_name_with_index = ""
        param_names = ["weight", "bias", "gain", "bias_mask", "ternary_weight_code", "alpha"]
        if "weight" == param_name.split(".")[-1]:
            param_name_with_index = ".".join(param_name.split(".")[:-1]) + ".weight"
            param_names.remove("weight")
        else:
            param_name_with_index = param_name.split(".")[-1]

        grad_count = len(cumsum_sort_weights_np) / num_of_params
        start_index = grad_count * num_of_params
        end_index = (grad_count + top_k) * num_of_params
        selected_sorted_grad_var = sort_grad_var_np[start_index:end_index]
        selected_cumulative_sorted_weight = cumsum_sort_weights_np[start_index:end_index]
        mask = np.ones(num_of_params, dtype=bool)
        mask[start_index:end_index] = False
        masked_cumulative_sorted_weight = cumsum_sort_weights_np[mask]
        masked_cumulative_sorted_weight_min = min(masked_cumulative_sorted_weight)
        mask[mask] = masked_cumulative_sorted_weight - masked_cumulative_sorted_weight_min

        # ... (a few lines here were garbled in the original paste) ...

        mean_broadcasting_view = list(mean.size())
        mean_broadcasting_view.extend([1] * (len(list(p.size()))))
        mean_broadcasting_view = tuple(mean_broadcasting_view)
        sum_square = torch.square(mean)
        grad_list_sqrd.append(sum_square.view_as(p))
        norm_sqrd_grad_list_counter += torch.norm(sum_square.view_as(p), dim=tuple(range(len(list(sum_square.size())))))
        counter += torch.ones_like(sum_square.view_as(p), dtype=torch.int64)
        total_weighted_grad_value.append(weighted_sum_.view_as(p))
        total_grad_value.append(sum_.view_as(p))
        weighted_norm_squared_sum += weighted_sum_square.view_as(p)
        total_weighted_grad_squared_value.append(weighted_norm_squared_sum.view_as(p))

        agg_gradient_by_param = {
            "total_gradient": total_grad_value,
            "total_gradient_sqrd": grad_list_sqrd,
            "counter": counter,
            "norm_sqrd_gradient_list_counter": norm_sqrd_grad_list_counter,
            "total_weighted_gradient": total_weighted_grad_value,
            "total_weighted_gradient_sqrd": total_weighted_grad_squared_value,
        }
        self.state[param]["aggregate_buffer"].appendleft(agg_gradient_by_param.copy())
        if len(self.state[param]["aggregate_buffer"]) > self.aggregate_buffer_max_len:
            del self.state[param]["aggregate_buffer"][self.aggregate_buffer_max_len]
        # ... The remainder of the pasted function arrived garbled and truncated:
        # it builds per-layer index bookkeeping (start/end indices, per-dimension
        # sizes) and calls a helper named
        # aggregate_gradients_based_on_topk_or_topk_percent with top_k /
        # top_k_percent selection arguments, but the text is not recoverable
        # beyond that.
```
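Separately from the snippet above, the gradient-norm clipping mentioned at the start of this message is usually easiest to reason about in isolation. A minimal sketch using PyTorch's built-in `torch.nn.utils.clip_grad_norm_`; the model, threshold, and optimizer here are placeholders, not taken from the code above:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                      # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
clip_gn_thresh = 1.0                          # placeholder clipping threshold

def training_step(batch_x, batch_y):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(batch_x), batch_y)
    loss.backward()
    # Rescale gradients so their global L2 norm does not exceed the threshold,
    # then take the optimizer step on the clipped gradients.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip_gn_thresh)
    optimizer.step()
    return loss.item()
```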