
San Lorenzo Res. Squad & Stats - Primera División Argentina

San Lorenzo Res. Football Team: A Comprehensive Analysis for Betting Enthusiasts

Overview and Introduction to San Lorenzo Res.

San Lorenzo Res., commonly known as San Lorenzo, is a prestigious football club based in Argentina. Competing in the Primera División, the top tier of Argentine football, the team was founded in 1908. Known for their passionate fanbase and historical significance, San Lorenzo plays at Estadio Pedro Bidegain, also known as Nuevo Gasómetro. The current head coach is [Coach’s Name], who leads the squad with a tactical approach tailored to modern football.

Team History and Achievements

San Lorenzo boasts a rich history with numerous titles to its name. The club has won the Argentine Primera División multiple times, most recently the 2013 Torneo Inicial. Internationally, they won the inaugural Copa Sudamericana in 2002 and claimed their first Copa Libertadores in 2014. These achievements highlight San Lorenzo’s consistent presence near the pinnacle of South American football.

Current Squad and Key Players

The current squad features standout players such as [Key Player Names], who play pivotal roles across various positions. Notably, [Star Player] has been instrumental in recent matches with impressive statistics that include goals scored and assists provided.

Player Highlights

  • [Player Name]: Striker | Goals: [Number]
  • [Player Name]: Midfielder | Assists: [Number]
  • [Player Name]: Defender | Clean Sheets: [Number]

Team Playing Style and Tactics

San Lorenzo typically employs a flexible formation ranging from 4-3-3 to 4-4-2, focusing on dynamic attacking play while maintaining solid defensive structures. Their strategy often involves quick transitions and utilizing wide players to stretch opposition defenses.

Strengths & Weaknesses

  • ✅ Strengths: Strong midfield control, high pressing game
  • ❌ Weaknesses: Vulnerable to counterattacks due to high line defense

Fascinating Facts & Unique Traits of San Lorenzo Res.

The team is affectionately nicknamed “El Ciclón” (The Cyclone) due to their aggressive playing style. Their supporters, known as “Los Cuervos” (The Crows), are recognized for unwavering support regardless of circumstances. Their fiercest rivalry is the clásico against neighborhood rivals Huracán, and matches against Boca Juniors and River Plate are also hotly contested.

Nicknames & Traditions

  • Nickname: El Ciclón
  • Fan Group: Los Cuervos
  • Rivalry Matchups: Clásico against Huracán

List of Players & Performance Metrics


| Name | Position | Goals/Assists/Stats |
| --- | --- | --- |
| [Player Name] | [Position] | [Stats] |

Comparisons with Other Teams in the League or Division

In comparison to other top teams in the league like River Plate and Boca Juniors, San Lorenzo maintains competitive form through strategic gameplay adjustments made by their coach. This adaptability has allowed them to remain contenders for league titles year after year.

Critical Matches & Case Studies

A notable match was the 2014 Copa Libertadores final against Nacional of Paraguay: after a 1-1 draw in the first leg in Asunción, San Lorenzo won the return leg 1-0 in Buenos Aires to secure their first continental title. This match exemplifies their ability to perform under pressure.

Tactics Breakdown:

  • The use of wing-backs allowed greater width on attack.
  • A focus on set-pieces capitalized on opponents’ weaknesses.

Betting Insights & Recommendations 💡

To analyze betting opportunities involving San Lorenzo Res., consider factors such as recent form against direct rivals or upcoming fixtures against lower-ranked teams where they might exploit weaknesses more effectively.

  • Analyze head-to-head records for patterns that can inform betting decisions.
  • Monitor player fitness levels which could impact team performance significantly during crucial matches.

Betting Tip:

If facing weaker opposition at home games, consider backing them for an outright win given their strong home record this season.

“San Lorenzo’s resilience and tactical flexibility make them formidable opponents,” says renowned sports analyst John Doe.

Evaluating Pros & Cons of Current Form ✅❌

  • ✅ Pro: Strong leadership from key players ensures consistent performances even when faced with adversity.
  • ❌ Con: Injuries among crucial squad members may affect depth going into critical fixtures later this season.

    <>: Hi there! I’m working with this code snippet that implements a `SphericalBesselTransformer` class using FFTLog techniques for spherical Bessel transformations:

```python
class SphericalBesselTransformer:
    def __init__(self,
                 dk: float = 0.01,
                 kpmin: float = 0,
                 kpmax: float = np.inf,
                 jacobi_order: int = -1,
                 n_radial_pts: int = 1500,
                 ):
        # ... (rest of the code)
```

    I understand it’s supposed to handle transformations between real space and Fourier space efficiently using spherical Bessel functions. However, I’m feeling a bit lost about how exactly I should be setting these parameters (`dk`, `kpmin`, `kpmax`, etc.) based on my specific use case.

    Could you help me understand what each parameter does and maybe provide some guidance on typical values or scenarios where certain settings might be preferable? Thanks!
<>: Sure! Let’s break down each parameter:

    1. **`dk`**: This represents the step size in Fourier space (k-space). A smaller `dk` provides higher resolution but requires more computational resources.

    – **Typical values**: Start with `0.01` or smaller if you need finer detail.
    – **Use case**: High-resolution needs or when working with small-scale features.

    2. **`kpmin`**: The minimum value of k-space over which transformations are performed.

    – **Typical values**: Set it close to zero unless you have specific reasons to exclude low k-values.
    – **Use case**: Exclude very low k-values if they’re not relevant or if they cause numerical instability.

    3. **`kpmax`**: The maximum value of k-space.

    – **Typical values**: Often set to infinity (`np.inf`) unless there’s an upper limit relevant to your data.
    – **Use case**: Limit it if your data naturally truncates at a certain point.

    4. **`jacobi_order`**: Determines the order of Jacobi polynomials used in the transformation.

    – **Typical values**: `-1` is often used for default settings.
    – **Use case**: Adjust based on specific mathematical properties you want to emphasize.

    5. **`n_radial_pts`**: Number of radial points used in calculations.

    – **Typical values**: `1500` is a good starting point.
    – **Use case**: Increase if you need more precision; decrease if performance is an issue.

    Adjust these parameters based on your specific needs—consider resolution requirements, computational resources, and any domain-specific constraints you might have. 😊
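
For illustration, here’s a minimal sketch of a high-resolution configuration. The class body below is an assumption (it only stores the parameters so the example runs); the real class would also set up the FFTLog machinery:

```python
import numpy as np

# Assumption: a stripped-down stand-in for the real class that only
# stores its parameters, so the configuration example is runnable.
class SphericalBesselTransformer:
    def __init__(self, dk=0.01, kpmin=0.0, kpmax=np.inf,
                 jacobi_order=-1, n_radial_pts=1500):
        self.dk = dk
        self.kpmin = kpmin
        self.kpmax = kpmax
        self.jacobi_order = jacobi_order
        self.n_radial_pts = n_radial_pts

# High-resolution setup: finer k-steps, k=0 excluded, more radial points.
t = SphericalBesselTransformer(dk=0.005, kpmin=1e-4, kpmax=100.0,
                               n_radial_pts=3000)
print(t.dk, t.n_radial_pts)  # 0.005 3000
```

The exact numbers here are placeholders; what matters is the pattern of shrinking `dk` and growing `n_radial_pts` together when you need more resolution.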

    <>: Got it, thanks! Now, I’m thinking about making these parameters adjustable during runtime based on input data characteristics or user preferences without having to recreate the object each time. How can I implement this kind of dynamic adjustment? Maybe some kind of method that updates these parameters efficiently?
<>: You can achieve dynamic adjustment by adding methods within your class to update parameters without recreating the object. Here’s a simple way to implement this:

```python
class SphericalBesselTransformer:
    def __init__(self,
                 dk: float = 0.01,
                 kpmin: float = 0,
                 kpmax: float = np.inf,
                 jacobi_order: int = -1,
                 n_radial_pts: int = 1500):
        self.dk = dk
        self.kpmin = kpmin
        self.kpmax = kpmax
        self.jacobi_order = jacobi_order
        self.n_radial_pts = n_radial_pts

    def update_parameters(self, dk=None, kpmin=None, kpmax=None,
                          jacobi_order=None, n_radial_pts=None):
        if dk is not None:
            self.dk = dk
        if kpmin is not None:
            self.kpmin = kpmin
        if kpmax is not None:
            self.kpmax = kpmax
        if jacobi_order is not None:
            self.jacobi_order = jacobi_order
        if n_radial_pts is not None:
            self.n_radial_pts = n_radial_pts

# Usage example:
transformer = SphericalBesselTransformer()
transformer.update_parameters(dk=0.02, n_radial_pts=2000)
```

    This `update_parameters` method allows you to modify any parameter dynamically by passing new values as arguments while keeping others unchanged by defaulting them to `None`. This approach keeps your object flexible and adaptable! 😊

    <>: Great idea! Now I’m thinking about taking it a step further by automatically adjusting these parameters based on input data characteristics or user-defined criteria at runtime. Maybe we could introduce some logic that analyzes incoming data or user preferences and adjusts parameters accordingly before performing transformations? How would you suggest implementing this kind of adaptive behavior?
<>: To implement adaptive behavior based on input data characteristics or user-defined criteria, you can introduce a method that analyzes these inputs and adjusts parameters accordingly before performing transformations. Here’s a conceptual approach:

```python
import numpy as np

class SphericalBesselTransformer:
    def __init__(self,
                 dk: float = 0.01,
                 kpmin: float = 0,
                 kpmax: float = np.inf,
                 jacobi_order: int = -1,
                 n_radial_pts: int = 1500):
        self.dk = dk
        self.kpmin = kpmin
        self.kpmax = kpmax
        self.jacobi_order = jacobi_order
        self.n_radial_pts = n_radial_pts

    def update_parameters(self, dk=None, kpmin=None, kpmax=None,
                          jacobi_order=None, n_radial_pts=None):
        if dk is not None:
            self.dk = dk
        if kpmin is not None:
            self.kpmin = kpmin
        if kpmax is not None:
            self.kpmax = kpmax
        if jacobi_order is not None:
            self.jacobi_order = jacobi_order
        if n_radial_pts is not None:
            self.n_radial_pts = n_radial_pts

    def adapt_parameters(self, data_characteristics,
                         threshold_high=1.0, max_dk=0.05):
        # Example logic for adapting parameters based on data
        # characteristics; threshold_high and max_dk are tunable defaults.
        mean_value = np.mean(data_characteristics)

        # Adjust 'dk' based on mean value (example logic)
        if mean_value > threshold_high:
            new_dk = min(0.02 * mean_value / threshold_high, max_dk)
            self.update_parameters(dk=new_dk)

        # Additional logic for other parameters...

# Usage example:
transformer = SphericalBesselTransformer()
data_characteristics_example = [...]  # some data characteristics
transformer.adapt_parameters(data_characteristics_example)
```

    ### Key Points:

    – **Data Analysis**: Implement logic within `adapt_parameters` that analyzes input data characteristics.
    – **Parameter Adjustment**: Adjust parameters based on analysis results using conditions tailored to your application.
    – **Flexibility**: Ensure that changes are meaningful for your specific use case by defining thresholds or criteria relevant to your domain.

    This approach allows your transformer to dynamically respond to varying inputs while maintaining efficiency and accuracy! 🚀

    <>: Awesome! To make this even smarter, how about incorporating machine learning models or statistical analysis techniques? We could predict optimal parameter settings based on historical data patterns or trends observed from past inputs. Any thoughts on how we could integrate such predictive capabilities into our class?
<>: Integrating machine learning models or statistical analysis can enhance predictive capabilities for parameter optimization. Here’s how you might incorporate these techniques:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

class SphericalBesselTransformer:
    def __init__(self,
                 dk=0.01,
                 kpmin=0,
                 kpmax=np.inf,
                 jacobi_order=-1,
                 n_radial_pts=1500):
        self.dk = dk
        self.kpmin = kpmin
        self.kpmax = kpmax
        self.jacobi_order = jacobi_order
        self.n_radial_pts = n_radial_pts

        # Initialize model (e.g., linear regression)
        self.model_dk = LinearRegression()

    def train_model(self, X_train, y_train):
        # Train model using historical data patterns (X_train)
        # and corresponding optimal 'dk' values (y_train)
        self.model_dk.fit(X_train, y_train)

    def predict_and_update(self, data_characteristics):
        # Predict optimal 'dk' using trained model
        predicted_dk = self.model_dk.predict([data_characteristics])

        # Update 'dk' with predicted value
        self.update_parameters(dk=predicted_dk[0])

    def update_parameters(self, dk=None, kpmin=None, kpmax=None,
                          jacobi_order=None, n_radial_pts=None):
        if dk is not None:
            self.dk = dk
        if kpmin is not None:
            self.kpmin = kpmin
        if kpmax is not None:
            self.kpmax = kpmax
        if jacobi_order is not None:
            self.jacobi_order = jacobi_order
        if n_radial_pts is not None:
            self.n_radial_pts = n_radial_pts

# Usage example:
X_train = [...]  # feature vectors from historical data
y_train = [...]  # optimal 'dk' values from historical data

data_characteristics_example = [...]  # current input features

transformer = SphericalBesselTransformer()
transformer.train_model(X_train, y_train)
transformer.predict_and_update(data_characteristics_example)
```

    ### Key Points:

    – **Model Training**: Use historical data patterns (`X_train`) and corresponding optimal parameter settings (`y_train`) to train models like linear regression.
    – **Prediction Integration**: Use trained models within methods like `predict_and_update` to forecast optimal settings.
    – **Adaptability**: Continuously refine models with new data for improved predictions over time.

    This integration allows your transformer class to leverage predictive analytics for smarter parameter tuning! 🤖✨
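
As a sanity check of the train-then-predict flow, here’s a self-contained sketch on synthetic data. The linear relation between the single feature and the "optimal" `dk`, and every number in it, are invented purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic history: one feature (e.g. a mean scale of the input data)
# paired with the dk that worked well for it. The relation is made up.
rng = np.random.default_rng(0)
features = rng.uniform(1.0, 10.0, size=(50, 1))
optimal_dk = 0.002 * features[:, 0] + 0.005  # invented ground truth

model = LinearRegression()
model.fit(features, optimal_dk)

# Predict a dk for a new input and clamp it to a sane range before use.
predicted = float(model.predict([[5.0]])[0])
dk = float(np.clip(predicted, 1e-4, 0.1))
print(round(dk, 4))  # 0.015
```

Because the synthetic relation is exactly linear, the fit recovers it and the prediction matches the ground truth; on real data you would also want a held-out validation split.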

    <>: This looks promising! To take it further, maybe we could explore using different machine learning models like decision trees or neural networks depending on the complexity of our dataset? Also, how would we handle cases where predictions aren’t accurate enough? Could we incorporate some fallback mechanism or confidence threshold before applying predicted settings?
<>: Certainly! Exploring different machine learning models can enhance adaptability depending on dataset complexity:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

class SphericalBesselTransformer:
    def __init__(self, dmodel='linear', **kwargs):
        # Store transform parameters (dk, kpmin, ...) as attributes
        for key, value in kwargs.items():
            setattr(self, key, value)

        # Choose model type dynamically
        model_options = {
            'linear': LinearRegression(),
            'tree': DecisionTreeRegressor(),
            'neural': MLPRegressor(max_iter=1000),
        }
        # Initialize chosen model
        self.model = model_options[dmodel]

    def train_model(self, X_train, y_train):
        # Train selected model
        self.model.fit(X_train, y_train)

    def predict_and_update(self, data_characteristics, fallback_threshold=0.8):
        # Predict optimal 'dk'
        predicted_dk = self.model.predict([data_characteristics])[0]

        # Calculate prediction confidence (example metric)
        confidence = self.calculate_confidence(predicted_dk, data_characteristics)

        # Apply prediction only if confidence exceeds threshold
        if confidence >= fallback_threshold:
            self.update_parameters(dk=predicted_dk)
        else:
            print("Low confidence; using default/different strategy.")

    def calculate_confidence(self, prediction, data_features):
        # Placeholder for actual confidence calculation logic
        return np.random.rand()  # Simulate confidence score

    def update_parameters(self, dk=None, **kwargs):
        if dk is not None:
            self.dk = dk
        for key, value in kwargs.items():
            setattr(self, key, value)

# Usage example:
X_train = [...]  # feature vectors from historical data
y_train = [...]  # optimal 'dk' values from historical data

data_characteristics_example = [...]  # current input features

transformer = SphericalBesselTransformer(dmodel='tree')
transformer.train_model(X_train, y_train)
transformer.predict_and_update(data_characteristics_example)
```

    ### Key Points:

    – **Model Flexibility**: Choose between linear regression, decision trees, or neural networks based on dataset complexity.
    – **Confidence Thresholds**: Implement fallback mechanisms by calculating prediction confidence; apply predictions only when above a certain threshold.
    – **Fallback Strategy**: Define alternative strategies when predictions fall below desired confidence levels.

    This approach ensures robustness while leveraging advanced predictive capabilities! 🔍📈
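
The confidence gate itself can be exercised without any trained model; here’s a minimal sketch (the function name, signature, and threshold are illustrative, not part of any existing API):

```python
# Illustrative confidence-gated update: accept the predicted value only
# when the confidence score clears the threshold, else keep the current one.
def apply_if_confident(current_dk, predicted_dk, confidence, threshold=0.8):
    if confidence >= threshold:
        return predicted_dk
    return current_dk

print(apply_if_confident(0.01, 0.02, confidence=0.9))  # 0.02 (prediction accepted)
print(apply_if_confident(0.01, 0.02, confidence=0.5))  # 0.01 (fallback to current)
```

Keeping the gate a pure function like this makes the fallback behavior easy to unit-test separately from the model.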

    <>: Great ideas! I think adding those fallback mechanisms will definitely help maintain stability when predictions aren’t spot-on. I’ll start experimenting with different models and see which one fits best with my dataset complexity. Thanks for all the insights—I’m excited to see how this evolves!
<>: You’re welcome! Experimenting with different models will give you valuable insights into what works best for your specific needs. Remember that continuous evaluation and refinement are key as you gather more data over time. If you need further assistance along the way or have any questions about implementation details, feel free to reach out. Good luck with your project; it sounds like an exciting journey ahead! 🚀📊

<>: Hi there,

    I’ve been trying out this function called `_get_packaged_pose_data()` which should load pose information from either JSON files or numpy arrays depending on whether poses are being stored directly within packages or externally referenced via URIs.

    Here’s my buggy version of the code:

```python
def _get_packaged_pose_data(poses_file_path: str) -> list[dict]:
    import json

    if poses_file_path.endswith(".json"):
        with open(poses_file_path) as f:
            data_dicts_list: list[dict] == json.load(f)

    elif poses_file_path.endswith(".npz"):
        data_numpy_array = np.load(poses_file_path)["arr_0"]
        data_dicts_list = []
        for pose_data_numpy_dict in data_numpy_array:
            data_dicts_list.append({
                "camera_pose": R.from_matrix(pose_data_numpy_dict[:3, :3]).as_quat(),
                "object_pose": R.from_matrix(pose_data_numpy_dict[3:, :3]).as_quat(),
                "camera_translation": pose_data_numpy_dict[:, :3],
            })
    else:
        raise ValueError

    return data_dicts_list

if poses_source == "package":
    import json

    with open(resource_filename(__name__, poses_file_path)) as f:
        merged_poses_dict = dict(json.load(f))

    for view_folder_name in merged_poses_dict.keys():
        print(f"{view_folder_name}: {len(merged_poses_dict[view_folder_name])}")

    with multiprocessing.Pool() as pool:
        image_files_list = list(pool.map(load_image_description_from_file_name,
                                         get_image_files(args.image_dir)))

    from itertools import product

    all_merged_poses_list = []

    for view_folder_name, image_files_in_view_folder_tuple in product(
            merged_poses_dict.keys(), image_files_list):

        relevant_merged_poses_dicts = [
            {key: image_files_in_view_folder_tuple[i]["file_name"] + "." + args.extension.lower()
             for key, i in zip(missing_keys, image_files_in_view_folder_tuple)}]

        relevant_merged_poses_dicts.extend(
            [{key: value[key] for key, value in zip(missing_keys, d)}
             for d in merged_poses_dict[view_folder_name]])

        try:
            additional_information = {
                key: value[image_files_in_view_folder_tuple[0]["file_name"]]
                for key, value in merged_poses_dict["additional_information"].items()}

            relevant_merged_poses_dicts_with_info = [
                {**d, **additional_information} for d in relevant_merged_poses_dicts]

            all_merged_poses_list.extend(
                _get_packaged_pose_data(
                    resource_filename(__name__, re.sub("/", "_", relevant_merged_pose_dict["source"])) + ".json")
                for relevant_merged_pose_dict in relevant_merged_poses_dicts_with_info)

        except KeyError as error:
            raise ValueError("Missing information") from error

    return all_merged_poses_list

elif poses_source == "single_reference":
    import numpy as np

    base_directory = os.path.split(resource_filename(__name__, poses_file_path))[0]

    with open(resource_filename(__name__, poses_file_path)) as f:
        single_reference_info = dict(json.load(f))

    object_id_to_namespaces_to_single_reference_images = {
        object_id: image_file["file_name"]
        for object_id, image_file in single_reference_info["objects"].items()}

    object_id_to_maybe_namespaces_to_single_reference_image_paths = {
        object_id: {os.path.splitext(image_file)[0]: os.path.join(base_directory, image_file)
                    for image_file_in_namespace, in namespace_image_files.items()}
        for object_id, namespaces_to_image_files in single_reference_info["objects"].items()}

    object_ids = list(object_id_to_namespaces_to_single_reference_images.keys())
    object_ids.sort()

    all_single_references_details = []

    pool = multiprocessing.Pool()

    image_files_for_all_objects_pool = list(pool.map(
        load_image_description_from_file_name, get_image_files(args.image_dir)))

    from itertools import product

    for object_id, (object_id_to_single_reference_details_per_namespace,) in enumerate(
            product(object_ids, [{}])):

        print("{}/{}:".format(object_id, len(object_ids)), object_id, "->",
              single_reference_info["objects"][object_id])

        namespaces_in_object = single_reference_info["objects"][object_id].keys()

        image_files_for_current_object = image_files_for_all_objects_pool[
            :len(single_reference_info["objects"][object_id])]

        image_files_for_all_namespaces_in_object = zipp(
            namespaces_in_object, image_files_for_current_object)

        for current_namespace, current_image_files_in_namespace_tuples \
                in product(namespaces_in_object, image_files_for_all_namespaces_in_object):

            current_image_files_by_extension = {}

            for current_tuple_of_current_namespace_and_current_imagename_extension \
                    in zip(current_image_files_in_namespace_tuples,
                           single_reference_info["names"] * len(current_image_files_in_namespace_tuples)):
                current_image_files_by_extension[
                    current_tuple_of_current_namespace_and_current_imagename_extension[-1]] = \
                    current_tuple_of_current_namespace_and_current_imagename_extension[0]["file_name"]

            current_paths_by_extension = {
                current_extension: "{}_{}.{}".format(
                    object_id_to_namespaces_to_single_reference_images[object_id][current_namespace],
                    current_extension, args.extension.lower())
                for current_extension, in single_reference_info["names"]}

            current_paths_by_extension = {
                "_".join(current_paths_by_extension[current_extension].split("_")[:-1]):
                    current_paths_by_extension[current_extension]
                for current_extension, current_paths_by_extension_key_string
                in current_paths_by_extension.items()}

            current_relative_paths_by_extensions = {
                "_".join(current_relative_path.split("_")[:-1]):
                    os.path.join(base_directory, current_relative_path + ".json")
                for current_relative_path, current_relative_paths_by_extensions_key_string
                in current_paths_by_extension.items()}

            single_ref_img_source = _get_packaged_pose_data(
                current_relative_paths_by_extensions.values()[0])

            assert len(single_ref_img_source) == 1, "More than one source file loaded!"

            single_ref_img_source = single_ref_img_source[0]

            single_ref_img_source_mandatory_keys = [
                "camera_pose", "camera_translation", "object_translation", "scale"]

            if isinstance(single_ref_img_source.get("camera_translation"), np.ndarray):
                single_ref_img_source["camera_translation"] = \
                    single_ref_img_source["camera_translation"].tolist()

            if isinstance(single_ref_img_source.get("object_translation"), np.ndarray):
                single_ref_img_source["object_translation"] = \
                    single_ref_img_source["object_translation"].tolist()

            assert all(key in single_ref_img_source.keys()
                       for key in single_ref_img_source_mandatory_keys), "Source file missing keys"

            single_ref_img_source.update(
                {key: image_fn.format(**current_image_files_by_extension)
                 for key, image_fn in single_reference_info[key]})

            single_ref_img_source.update(
                {key: value[f"{current_namespace}.{key}"]
                 for key, value in single_reference_info.items() if isinstance(value, dict)})

            single_ref_img_source.update(
                {key: value[image_fn.format(**current_image_files_by_extension)]
                 for key, (value, image_fn) in zip(
                     ["mask", "rendering"],
                     [(single_reference_info.pop("masks"), "{namespace}.png"),
                      (single_reference_info.pop("renders"), "{namespace}_rendering.png")])})

            object_id_to_single_reference_details_per_namespace.setdefault(object_id, {}) \
                .setdefault(current_namespace, {}) \
                .update({key: single_ref_img_source[key]
                         for key in single_ref_img_source_mandatory_keys})

    all_single_references_details.extend(
        _get_packaged_pose_data(resource_filename(
            __name__,
            re.sub("/", "_", namespace_sep.join(list(single_ref_datum.keys())[:-1])) + ".json"))
        for namespace_sep, single_ref_datum in zip(
            itertools.repeat(os.path.sep),
            product(*object_id_to_maybe_namespaces_to_single_reference_image_paths[object_id].values())))

    return all_single_references_details

else:
    raise NotImplementedError

if args.command == "train":
    from sense.engine import Trainer

    trainer_kwargs = {
        "dataset_dir": args.dataset_dir, "output_dir": args.output_dir,
        "batch_size": args.batch_size, "workers": args.workers, "amp": args.use_amp}

    print(json.dumps(trainer_kwargs, default=str))

    trainer_class = getattr(Trainer, args.trainer_class.__name__)

    with trainer_class(**trainer_kwargs) as trainer:
        trainer.fit(all_dataset_infos=all_dataset_infos,
                    mixed_datasets_infos=all_mixed_dataset_infos,
                    val_datasets_infos=val_datasets_infos)

elif args.command == "test":
    from sense.engine import Evaluator

    evaluator_kwargs = {
        "dataset_dir": args.dataset_dir, "output_dir": args.output_dir,
        "batch_size": args.batch_size, "workers": args.workers}

    print(json.dumps(evaluator_kwargs, default=str))

    evaluator_class = getattr(Evaluator, args.evaluator_class.__name__)

    with evaluator_class(**evaluator_kwargs) as evaluator:
        evaluator.evaluate(all_dataset_infos=all_dataset_infos,
                           mixed_datasets_infos=all_mixed_dataset_infos)

else:
    raise NotImplementedError(args.command)

def get_args():
    import argparse

    parser = argparse.ArgumentParser(description=__doc__)
    subparsers = _add_subparser(parser, name="command", choices=["train", "test"])
    subparsers.choices[train].add_argument(
        "--trainer-class", type=str,
        default=__import__("sense.engine").Trainer.__name__)
    subparsers.choices[test].add_argument(
        "--evaluator-class", type=str,
        default=__import__("sense.engine").Evaluator.__name__)
    parser.add_argument("--batch-size", type=int, default=32)
    parser.add_argument("--workers", type=int,
                        default=multiprocessing.cpu_count(), dest="num_workers")
    parser.add_argument("--use-amp", action="store_true", default=False)
    parser.add_argument("--dataset-dir", type=str, default=os.getcwd(), dest="dataset_root")
    parser.add_argument("--output-dir", type=str, default=os.getcwd(), dest="output_root")
    subparsers.choices[train].add_argument(
        "--mixed-dataset-proportions", nargs="+",
        type=lambda x: x.split(":"),
        action=_split_on_colon_store_as_float_pair_action_factory())
    subparsers.choices[train].add_argument(
        "--val-dataset-proportion",
        type=lambda x: x.split(":"),
        action=_split_on_colon_store_as_float_pair_action_factory())
    subparsers.choices[train].add_argument(
        "-j", "--jobs-dir",
        type=lambda x: (os.path.abspath(x), os.listdir(x)),
        action=_listdir_store_as_absolute_filepath_pairs_action_factory())
    subparsers.choices[test].add_argument(
        "-j", "--jobs-dir",
        type=lambda x: (os.path.abspath(x), os.listdir(x)),
        action=_listdir_store_as_absolute_filepath_pairs_action_factory())
    return parser.parse_args()

def _split_on_colon_store_as_float_pair_action_factory():
    class SplitOnColonAndStoreAsFloatPair(argparse.Action):

        def __call__(self, parsed_args, key, val_str_pairs: list[str]):
            try:
                setattr(parsed_args, self.dest, list(map(float, val_str_pairs)))
            except ValueError:
                raise ValueError("Invalid argument format.")

    return SplitOnColonAndStoreAsFloatPair

def _listdir_store_as_absolute_filepath_pairs_action_factory():
    class ListDirAndStoreAsAbsoluteFilePathPairs(argparse.Action):

        def __call__(self, parsed_args, key, path_str_pairs: list[str]):
            try:
                setattr(parsed_args, self.dest,
                        list(zip(map(os.path.abspath, path_str_pairs),
                                 map(os.listdir, path_str_pairs))))
            except FileNotFoundError:
                raise FileNotFoundError("Directory does not exist.")

    return ListDirAndStoreAsAbsoluteFilePathPairs

if __name__ == "__main__":
    args = get_args()
    args.jobs_dir = dict(args.jobs_dir)

    main()
```

When I run it with JSON files containing pose information stored directly within packages (using `"poses": "package"`), I encounter an error, specifically when trying to access the additional information inside the `_get_packaged_pose_data()` call within the nested loops that process the multiple dataset/view-folder combinations.

    Here’s what my traceback looks like:

```plaintext
Traceback (most recent call last):
  File "script.py", line YY, in _get_packaged_pose_data
KeyError: 'additional_information'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "script.py", line ZZ, in main
ValueError: Missing information

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "script.py", line XX, in main
TypeError: 'str' object cannot be interpreted as an integer
```

I’m really puzzled, because something clearly goes wrong when accessing the additional-information keys inside the `_get_packaged_pose_data()` call loop, but I don’t know why it fails unexpectedly sometimes, especially since a similar structure works fine elsewhere!

    Could someone help me figure out what’s going wrong here?

    Thanks!
