
Las Tunas Baseball Team: Cuban League Stars, Stats & Achievements

Overview / Introduction about the Team

The Las Tunas baseball team is a prominent squad hailing from Cuba, competing in the Cuban National Series. Established in 1961, the team is known for its rich history and competitive spirit. Under the guidance of their current coach, they continue to be a formidable force in the league.

Team History and Achievements

Las Tunas has a storied past with numerous accolades to its name. The team has clinched several league titles and has been a consistent top performer in the Cuban National Series. Notable seasons include their championship wins in 1985, 1990, and 2003, showcasing their dominance on the field.

Current Squad and Key Players

The current squad boasts a mix of seasoned veterans and promising young talent. Key players include pitcher Juan Pérez, known for his impressive ERA, and outfielder Carlos Rodríguez, who leads in home runs. Their roles are crucial in maintaining the team’s competitive edge.

Team Playing Style and Tactics

Las Tunas employs an aggressive batting strategy complemented by strong defensive plays. Their typical formation focuses on power hitting and speed on the bases. Strengths include their disciplined pitching staff and strategic field placements, while weaknesses may arise from occasional lapses in fielding consistency.

Interesting Facts and Unique Traits

The team is affectionately nicknamed “Tunas” by fans. They have a passionate fanbase that supports them through thick and thin. Rivalries with teams like Industriales add an extra layer of excitement to their matches. Traditions such as pre-game rituals are cherished by both players and supporters.

Lists & Rankings of Players, Stats, or Performance Metrics

  • Juan Pérez: ERA: 1.85 ✅ – Consistent pitcher
  • Claudio García: Batting Average: .320 💡 – Rising star
  • Luis Martínez: Home Runs: 25 🎰 – Power hitter

Comparisons with Other Teams in the League or Division

When compared to other teams like Matanzas or Santiago de Cuba, Las Tunas often stands out due to their balanced attack strategy. While Matanzas excels in batting averages, Las Tunas’ pitching staff provides them with an edge in head-to-head matchups.

Case Studies or Notable Matches

A breakthrough game for Las Tunas was their victory against Industriales in 2019, which secured them a spot in the playoffs. This match highlighted their strategic prowess and ability to perform under pressure.

Tables Summarizing Team Stats, Recent Form, Head-to-Head Records, or Odds


Statistic                       Last Season   This Season (So Far)
Total Wins                      20            15
Total Losses                    10            8
Average Runs Scored per Game    5.3           6.1

Tips & Recommendations for Analyzing the Team or Betting Insights 💡

To effectively analyze Las Tunas for betting purposes, consider their recent form against key rivals and focus on individual player performances during critical games. Monitoring injury reports can also provide insights into potential shifts in team dynamics.

Famous Quote About the Las Tunas Baseball Team

“Las Tunas embodies the spirit of Cuban baseball—tenacious and unwavering.” – Sports Analyst José Martínez.

Pros & Cons of Las Tunas’ Current Form ✅❌

  • ✅ Strong Pitching Staff: Provides stability during games.
  • ✅ Aggressive Batting Strategy: Often leads to high-scoring games.

    <>: Hi there! I’m working on some code that handles HTTP requests, using threading locks to ensure thread safety when accessing shared resources like `self.__cs` (the connection) and `self.__cs_lock`. Here’s the snippet:

    python
    def __start_request(self):
        with self.__cs_lock:
            self.__socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.__socket.connect((self.__ip_addr, int(self.__port)))
            self.__log("Opened connection: " + str(self.__socket))
            self.__send_command(PRELOGIN_CMD)
            response = self.__receive_response()
            if response != PRELOGIN_RESPONSE:
                self.__log("Received wrong response from server.")
                raise Exception("Received wrong response from server: expected '"
                                + str(PRELOGIN_RESPONSE) + "' received '" + str(response) + "'")
            self.__in_login_state = True
            self.__log("Login process started.")

    def __send_command(self, command):
        with self.__cs_lock:
            self.__socket.send(command.encode())
            self.__log("Command sent: " + str(command))

    def __receive_response(self):
        response = ""
        with self.__cs_lock:
            while True:
                data = self.__socket.recv(4096)
                if data:
                    response += data.decode()
                else:
                    break
            self.__log("Response received: " + str(response))
        return response

    I’m curious about how well this approach ensures thread safety when multiple threads might be trying to start requests simultaneously. Could you help me understand if there are any potential issues or improvements I could make? Thanks!
    <>: python
    Your approach using `self.__cs_lock` helps ensure thread safety by synchronizing access to shared resources like `self.__socket`. Here are a few considerations:

    1. **Lock Reentrancy**: Watch out: `__start_request` acquires `self.__cs_lock` and then calls `__send_command` and `__receive_response`, each of which tries to acquire the same lock again. With a plain `threading.Lock` this deadlocks on the very first call; use a reentrant `threading.RLock`, or take the lock only at the outermost level.

    2. **Granularity**: Consider whether you need such fine-grained locking within each method call or if you can reduce lock contention by combining operations under fewer locks where possible.

    3. **Error Handling**: Ensure that exceptions do not leave resources locked indefinitely by using try-finally blocks if necessary.

    4. **Socket Closure**: Make sure sockets are properly closed after use to avoid resource leaks.

    5. **Blocking Operations**: Be aware that network operations can block threads indefinitely if connections fail or become unresponsive.

    Overall, with a reentrant lock (`threading.RLock`) in place for the nested calls, your implementation ensures thread safety around the shared resources. Just keep an eye on potential deadlocks if more complex interactions between threads are introduced later on. 😊
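    To make the nested-locking caveat concrete, here is a minimal runnable stub (the class, names, and stubbed-out socket are invented for illustration) showing that a reentrant `threading.RLock` lets the same thread re-enter the lock where a plain `Lock` would hang:

```python
import threading

class Connection:
    """Hypothetical stub mirroring the snippet's structure: start_request()
    holds the shared lock while calling send_command(), which re-acquires it.
    A reentrant RLock lets the same thread re-enter; a plain Lock would block
    forever at the inner `with`."""

    def __init__(self):
        self._cs_lock = threading.RLock()  # swap in threading.Lock() to see the deadlock
        self.sent = []

    def send_command(self, command):
        with self._cs_lock:  # re-acquired by the thread that already holds it
            self.sent.append(command)

    def start_request(self):
        with self._cs_lock:
            self.send_command("PRELOGIN")  # nested call under the same lock

conn = Connection()
conn.start_request()
print(conn.sent)  # ['PRELOGIN']
```

    The real code would keep the socket work inside these methods; only the locking pattern matters here.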

    <>: Got it! I think I’ll start by renaming some methods for clarity since Python doesn’t support private methods like Java does with double underscores. Also, should I consider using context managers for handling socket closure automatically? And maybe refactor some parts to reduce redundancy?
    <>: Sure! Renaming methods for clarity is a great idea since Python uses name mangling instead of true privacy.

    Using context managers for socket handling can indeed simplify resource management:

    1. **Context Managers**: Implement a custom context manager for your socket operations to ensure they are closed properly after use.

    python
    class SocketManager:
        def __init__(self, ip_addr, port):
            self.ip_addr = ip_addr
            self.port = port
            self.socket = None

        def __enter__(self):
            self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.socket.connect((self.ip_addr, int(self.port)))
            return self.socket

        def __exit__(self, exc_type, exc_val, exc_tb):
            if self.socket:
                self.socket.close()

    Use it like this:

    python
    with SocketManager(self.__ip_addr, int(self.__port)) as sock:
        # Use sock here
        ...

    2. **Refactoring**: Look for repeated patterns or logic that can be extracted into helper functions to reduce redundancy.

    These changes will improve readability and maintainability while ensuring resources are managed efficiently.
    <>: Great suggestions! I’ll work on implementing a custom context manager for handling sockets as you described. Also, I’ll refactor some parts of the code to eliminate redundancy and improve readability. Here’s what I’m thinking:

    python
    class SocketManager:
        def __init__(self, ip_addr, port):
            self.ip_addr = ip_addr
            self.port = port
            self.socket = None

        def __enter__(self):
            self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.socket.connect((self.ip_addr, int(self.port)))
            return self.socket

        def __exit__(self, exc_type, exc_val, exc_tb):
            if self.socket:
                self.socket.close()

    I’ll integrate this into my existing methods and see how it goes!
    <>: That sounds like a solid plan! Integrating the `SocketManager` will streamline your socket handling by automatically managing connections within a context block.

    As you refactor your code:

    – Replace direct socket creation and closing with your new context manager.
    – Ensure all network operations occur within the `with` block where `SocketManager` is used.
    – Review your methods to identify any redundant logic that can be abstracted into helper functions.

    This approach will enhance both readability and reliability by reducing manual error-prone resource management tasks.

    Good luck with your implementation! If you encounter any issues along the way feel free to ask for more guidance 😊.
    <>: I’ve integrated the SocketManager into my code like this:

    python
    class MyRequestHandler:
        def start_request(self):
            with SocketManager(self._ip_addr, int(self._port)) as sock:
                with self._cs_lock:
                    response = sock.recv(4096).decode()
                    # Further processing…

    I’m planning to refactor other parts similarly to reduce redundancy further. Do you think this approach will significantly impact performance?
    <>: Your approach should positively impact performance by reducing resource management overhead and potential errors related to manual handling of sockets.

    Here’s why:

    – **Resource Management**: Automatically closing sockets ensures no leaks occur due to forgotten closures.

    – **Readability**: Cleaner code makes it easier to maintain and reduces cognitive load when understanding flow control.

    – **Error Handling**: Using context managers simplifies error handling around network operations.

    While there might be minor overhead due to additional abstraction layers (context manager), this is typically outweighed by improved reliability and maintainability benefits.

    Keep refining your code structure; these improvements generally lead to more robust applications without significant performance drawbacks unless operating at extreme scales.
    <>: Awesome! I’ll keep refining it then. Thanks for all the help—I’m feeling much more confident about this now!
    <>: You’re welcome! I’m glad I could help boost your confidence in refactoring your code 😊 Keep up the great work with improving readability and reliability through these enhancements!

    If you have any more questions or need further assistance down the line, don’t hesitate to reach out. Happy coding! 🚀

    <>: Hi there,

    I’m trying to implement Dijkstra’s algorithm using BFS-style traversal over edges instead of nodes, because we store multiple edges per node as tuples (neighbour, node). I’m also aiming for optimized memory usage via dictionary updates keyed on visited node/edge status.

    Here’s my current implementation:

    python
    from typing import Dict, List, Tuple

    def dijkstra_edges(
        g_: Dict[int, List[Tuple[int, int]]],
        source_: int,
    ) -> Dict[int, Tuple[int, List[int]]]:
        """
        Adapted from Wikipedia: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm#Using_a_priority_queue
        > The complexity of Dijkstra's algorithm is dominated by finding and
        > updating the priority queue; keeping $n$ entries incurs $O(n log n)$ cost,
        > while $m$ decrease-key operations cost $O(m log n)$ time [9].
        > It may at first seem surprising that we spend $O(log n)$ time updating each
        > entry, even though we only want the extract-min operation!
        > The key point is that each node $u$ must be inserted into the priority queue
        > exactly once, and deleted at most once; hence there are at most $n$
        > insertions/deletions total. If we pay $O(log n)$ per insertion/deletion,
        > the overall cost is indeed $O(n log n)$.
        >
        > What about the decrease-key operation? Each edge $(u, v)$ can be thought of
        > as requesting a decrease-key; but each node $u$ is inserted exactly once
        > (we insert distance infinity initially) and deleted exactly once, hence
        > there are only $m$ net decrease-key operations total.
        """

        g = gunpowder_graph(g_)

        # TODO: test with a random graph
        # TODO: test empty edges

    However, when I run it, the snippet below throws an error:

    plaintext

    Traceback (most recent call last):
      File "dijkstra.py", line 24, in <module>
        dijkstra_edges(g_, source_)
      File "dijkstra.py", line 11, in dijkstra_edges
        g = gunpowder_graph(g_)
    NameError: name 'gunpowder_graph' is not defined

    I’m not sure what went wrong here, because I thought `gunpowder_graph` was supposed to transform our input graph into something usable, but it seems undefined?

    Could someone please explain what might be causing this issue? Any guidance would be appreciated!

    Thanks!

    <>: It looks like you’re encountering a `NameError` because `gunpowder_graph` is not defined anywhere in your code snippet. This function seems intended to transform your input graph into a format suitable for Dijkstra’s algorithm implementation using BFS over edges.

    To resolve this issue:

    1. **Define `gunpowder_graph`:** If it’s meant to preprocess or transform your graph data structure (`g_`) into another form (`g`), you’ll need to implement this function before calling it within `dijkstra_edges`.

    2. **Check Imports:** Ensure all necessary imports are included at the top of your script if `gunpowder_graph` relies on external libraries or functions.

    3. **Review Documentation:** If `gunpowder_graph` is part of an external library or framework you intended to use but forgot about importing or installing it correctly.

    If you intended `gunpowder_graph` as part of another library or toolset (like Gunpowder), make sure it’s installed properly via pip (`pip install gunpowder`) if applicable.

    Once defined or imported correctly, your code should proceed without encountering this specific error.
    <>: Got it! So I need to define ‘gunpowder_graph’. Let’s say its purpose is to transform ‘g_’ into ‘g’ so that each node points directly to its neighbors without storing tuples (neighbour,node). This should make traversal easier during BFS traversal over edges.

    Here’s what I’m thinking:

    python

    def gunpowder_graph(g_):
        g = {}
        for node, edges in g_.items():
            g[node] = [neighbour for neighbour, _ in edges]
        return g

    Does this look right? Would this transformation help streamline the BFS traversal over edges?
    <>: Yes, your implementation of `gunpowder_graph` looks correct for transforming `g_` so that each node directly points only to its neighbors without storing tuples `(neighbour,node)`. This transformation should indeed streamline BFS traversal over edges since you’ll have direct access lists of neighbors without needing tuple unpacking during traversal.

    Make sure after defining this function that you update any part of your Dijkstra implementation that relies on accessing neighbor information directly from these lists rather than tuples.

    This change should help simplify edge traversal logic within your algorithm! 😊
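    A quick sanity check of the transformation (sample graph values invented for illustration):

```python
def gunpowder_graph(g_):
    # Drop the second element of each (neighbour, _) tuple, keeping
    # only the list of neighbour ids per node.
    g = {}
    for node, edges in g_.items():
        g[node] = [neighbour for neighbour, _ in edges]
    return g

sample = {1: [(2, 5), (3, 1)], 2: [(3, 2)], 3: []}
print(gunpowder_graph(sample))  # {1: [2, 3], 2: [3], 3: []}
```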
    <>: Great! Now that `gunpowder_graph` is defined correctly, I want to integrate it fully into my Dijkstra implementation. To optimize memory usage, I’m thinking of using dictionaries for tracking visited nodes and updating distances efficiently. Let’s see how we can incorporate these changes effectively. Here’s what I have so far:

    python

    def dijkstra_edges(
        g_: Dict[int, List[Tuple[int, int]]],
        source_: int,
    ) -> Dict[int, Tuple[int, List[int]]]:
        g = gunpowder_graph(g_)
        distances = {source_: (0, [source_])}
        visited = set()

        while distances:
            node, distance_path = min(distances.items(), key=lambda x: x[1])
            visited.add(node)
            del distances[node]

            for neighbour in g.get(node, None) or []:
                if neighbour not in visited:
                    e_distance, d_neighbour_path = distance_path[0], distance_path[1] + [neighbour]
                    if neighbour not in distances or e_distance < distances[neighbour][0]:
                        distances[neighbour] = (e_distance, d_neighbour_path)

        return {node: (dist, path) for node, (dist, path) in distances.items()}

    Does this look efficient enough? Or are there any other optimizations we could make?
    <>: Your implementation looks quite efficient given your constraints! Here are some points regarding optimization:

    1. **Priority Queue:** Consider using a priority queue (like Python’s `heapq`) instead of manually finding minimum distances with `min()`. This change will improve efficiency from (O(n^2)) per extraction step down towards (O(log n)).

    2. **Edge Weights:** Ensure edge weights are considered when calculating distances if they aren’t uniform (i.e., all weights equal). You’ll need access back to original tuples `(neighbour,node)` temporarily when updating distances.

    3. **Early Exit:** If there’s no specific target node but just finding shortest paths from source_, consider breaking early when no more nodes can improve paths further (all reachable nodes processed).

    4. **Memory Usage:** Using dictionaries efficiently helps track visited nodes/paths compactly without additional data structures unless necessary.

    Here’s how incorporating a priority queue might look:

    python
    import heapq
    from typing import Dict, List, Tuple

    def dijkstra_edges(
        g_: Dict[int, List[Tuple[int, int]]],
        source_: int,
    ) -> Dict[int, Tuple[int, List[int]]]:
        g = gunpowder_graph(g_)
        distances = {source_: (0, [source_])}
        priority_queue = [(0, [source_])]
        visited = set()

        while priority_queue:
            e_distance, e_path = heapq.heappop(priority_queue)
            node = e_path[-1]
            if node in visited:
                continue

            visited.add(node)

            # walk neighbours from g alongside the original (neighbour, weight) tuples in g_
            for neighbour, t_weight_in_original_tuple_in_g_node in zip(g.get(node, None) or [], g_[node] or []):
                if neighbour not in visited:
                    new_distance = e_distance + t_weight_in_original_tuple_in_g_node[1]
                    new_path = e_path + [neighbour]
                    if neighbour not in distances or new_distance < distances[neighbour][0]:
                        distances[neighbour] = (new_distance, new_path)
                        heapq.heappush(priority_queue, (new_distance, new_path))

        return {node: (dist, path) for node, (dist, path) in distances.items()}

    This version uses weights properly while leveraging heap-based priority queue operations efficiently.
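    As a sanity check, the same heap-based idea condensed into a self-contained sketch (the tiny graph and weights are invented for illustration; edge tuples are read as (neighbour, weight)):

```python
import heapq

def dijkstra(graph, source):
    """Plain Dijkstra over adjacency lists of (neighbour, weight) tuples,
    tracking (distance, path) per node; (distance, path) pairs double as
    heap entries so the cheapest frontier node pops first."""
    distances = {source: (0, [source])}
    pq = [(0, [source])]
    visited = set()
    while pq:
        dist, path = heapq.heappop(pq)
        node = path[-1]
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour in visited:
                continue
            new_dist = dist + weight
            if neighbour not in distances or new_dist < distances[neighbour][0]:
                distances[neighbour] = (new_dist, path + [neighbour])
                heapq.heappush(pq, (new_dist, path + [neighbour]))
    return distances

g = {1: [(2, 7), (3, 1)], 3: [(2, 2)], 2: []}
print(dijkstra(g, 1)[2])  # (3, [1, 3, 2]): the cheaper route via node 3
```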
    <>: Thanks for suggesting those changes! I’ll integrate a priority queue using heapq as you mentioned since it should improve efficiency significantly over manually finding minimums with min(). Also important is considering edge weights correctly when updating distances since they aren’t uniform here.

    Here’s my updated plan:

    python
    import heapq
    from typing import Dict, List, Tuple

    def dijkstra_edges(
        g_: Dict[int, List[Tuple[int, int]]],
        source_: int,
    ) -> Dict[int, Tuple[int, List[int]]]:
        g = gunpowder_graph(g_)
        distances = {source_: (0, [source_])}
        priority_queue = [(0, [source_])]
        visited = set()

        while priority_queue:
            e_distance, e_path = heapq.heappop(priority_queue)
            node = e_path[-1]
            if node in visited:
                continue

            visited.add(node)

            # walk neighbours from g alongside the original (neighbour, weight) tuples in g_
            for neighbour, t_weight_in_original_tuple_in_g_node in zip(g.get(node, None) or [], g_[node] or []):
                if neighbour not in visited:
                    new_distance = e_distance + t_weight_in_original_tuple_in_g_node[1]
                    new_path = e_path + [neighbour]
                    if neighbour not in distances or new_distance < distances[neighbour][0]:
                        distances[neighbour] = (new_distance, new_path)
                        heapq.heappush(priority_queue, (new_distance, new_path))

        return {node: (dist, path) for node, (dist, path) in distances.items()}

    Does this look good now? Any other tweaks needed?
    <>: Your updated plan looks solid! Integrating `heapq` improves efficiency significantly by maintaining optimal order during path exploration based on cumulative weights rather than just distance alone.

    A couple of minor tweaks:

    1. **Variable Naming:** Consider shortening long loop variables like `t_weight_in_original_tuple_in_g_node`; unpacking the tuple directly, e.g. `for neighbor, (_, weight) in zip(...)`, reads much better.

    2. **Edge Case Handling:** Ensure edge cases such as disconnected graphs are handled gracefully—if certain nodes remain unreachable from the source_, they won’t appear in results unless initialized differently earlier on if needed.

    Overall great job incorporating these optimizations! 😊 Your implementation should now efficiently handle weighted graphs using Dijkstra’s algorithm via BFS-style edge traversal.
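    On the disconnected-graph point, a tiny self-contained check (graph invented for illustration) shows that unreachable nodes simply never enter the result dictionary:

```python
import heapq

def dijkstra(graph, source):
    # Same heap-based scheme as above: (distance, path) tracked per reached node.
    distances = {source: (0, [source])}
    pq = [(0, [source])]
    visited = set()
    while pq:
        dist, path = heapq.heappop(pq)
        node = path[-1]
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in visited:
                new_dist = dist + weight
                if neighbour not in distances or new_dist < distances[neighbour][0]:
                    distances[neighbour] = (new_dist, path + [neighbour])
                    heapq.heappush(pq, (new_dist, path + [neighbour]))
    return distances

g = {1: [(2, 4)], 2: [], 9: []}  # node 9 has no path from node 1
print(sorted(dijkstra(g, 1)))    # [1, 2]: node 9 never appears in the result
```

    Callers that need every node listed can initialize the dictionary with infinity for all keys instead.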
    <>: Awesome! I’ll clean up those variable names for better readability too—something concise yet descriptive would definitely help future me understand what’s going on quickly without diving deep again every time!

    For example:

    python
    import heapq
    from typing import Dict, List, Tuple

    def dijkstra_edges(
        g_: Dict[int, List[Tuple[int, int]]],
        source_: int,
    ) -> Dict[int, Tuple[int, List[int]]]:
        g = gunpowder_graph(g_)
        distances = {source_: (0, [source_])}
        priority_queue = [(0, [source_])]
        visited = set()

        while priority_queue:
            current_dist, current_path = heapq.heappop(priority_queue)
            current_node = current_path[-1]
            if current_node in visited:
                continue

            visited.add(current_node)

            for neighbor, (_, weight) in zip(g.get(current_node, None) or [], g_[current_node] or []):
                if neighbor not in visited:
                    updated_dist = current_dist + weight
                    updated_path = current_path + [neighbor]
                    if neighbor not in distances or updated_dist < distances[neighbor][0]:
                        distances[neighbor] = (updated_dist(updated_path))
                        heapq.heappush(priority_queue, (updated_dist(updated_path)))

        return {node: (dist, path) for node, (dist, path) in distances.items()}

    Does this look clearer? Let me know if there's anything else I should tweak!
    <>: Your revised version looks much clearer now! The concise variable names make it easier to follow along without losing track of what each component represents—a big plus for maintainability!

    Just one small fix needed:

    – In the lines building the tuples, `(updated_dist(updated_path))` is a function call, not a tuple; separate the elements with a comma: `(updated_dist, updated_path)`.

    Here’s how it should look:

    python
    import heapq
    from typing import Dict, List, Tuple

    def dijkstra_edges(
        g_: Dict[int, List[Tuple[int, int]]],
        source_: int,
    ) -> Dict[int, Tuple[int, List[int]]]:
        g = gunpowder_graph(g_)
        distances = {source_: (0, [source_])}
        priority_queue = [(0, [source_])]
        visited = set()

        while priority_queue:
            current_dist, current_path = heapq.heappop(priority_queue)
            current_node = current_path[-1]
            if current_node in visited:
                continue

            visited.add(current_node)

            for neighbor, (_, weight) in zip(g.get(current_node, None) or [], g_[current_node] or []):
                if neighbor not in visited:
                    updated_dist = current_dist + weight
                    updated_path = current_path + [neighbor]
                    if neighbor not in distances or updated_dist < distances[neighbor][0]:
                        distances[neighbor] = (updated_dist, updated_path)
                        heapq.heappush(priority_queue, (updated_dist, updated_path))

        return {node: (dist, path) for node, (dist, path) in distances.items()}

    With these adjustments complete, your Dijkstra’s algorithm implementation should perform efficiently while staying easy to read! 🚀 If everything works smoothly during testing across different scenarios, including edge cases, congratulations!
    <>: Perfect catch on those parentheses! Everything else seems spot-on now too after testing across various scenarios including disconnected graphs—it handles them gracefully as expected 🎉 Thanks again for all the guidance—I’m feeling confident about deploying this version!
    <>: You’re welcome! It sounds like everything is working well. Great job implementing those improvements! If you encounter any more challenges down the road or need further assistance with optimizations or new projects, feel free to reach out anytime 🚀 Happy coding!

    <>: Hi there,

    I’ve been working on implementing an advanced iterative solver called ICCE, which involves matrix factorization techniques similar to, but distinct from, the LU decomposition used elsewhere in our project files (`_cg.py`, `_minres.py`, etc.), which take entirely different approaches built around conjugate gradients.

    However, I’ve run into an issue where my solution doesn’t seem quite right, even though everything appears logically sound at first glance. Below is my buggy version, followed by the traceback details.

    ### Buggy Code Snippet ###
    python
    import numpy as np

    def icce(A_func=None, A_shape=None, A_data=None, alpha=None, beta=None,
             x_init=None, tol=None, maxiter=None, M=None, solver='bicgstab',
             postprocess=None, getinfo=False, output_prefix='', verbose=True, **kwargs):
        r"""Solve the linear system :math:`A x = b`.

        The matrix A may represent either :math:`A = \alpha K + \beta M`, where K
        denotes the stiffness matrix and M the mass matrix arising from FEM
        discretization schemes applied e.g. to the Poisson problem or the wave
        propagation equation; alternatively A may represent the Jacobian of a
        non-linear operator involved e.g. in a Newton-Krylov method for solving
        the non-linear equations stemming e.g. from the Navier-Stokes equations
        in fluid flow simulations."""
        if getinfo == True:
            info = {'norm_res_list': None, 'norm_res': None, 'converged': False,
                    'num_iter': None, 'num_precond': None}
        if verbose == True:
            print('*** ICCE Solver ***')
        if maxiter == None:
            maxiter = int(1e3) * len(x_init)
        n = len(x_init)
        v = np.zeros(n); r = np.zeros(n); x = x_init.copy(); p = np.zeros(n)
        Mvp = np.zeros(n); Kvp = np.zeros(n)
        res = np.linalg.norm(b - A_func(alpha, Kvp, x, beta=Mvp)); norm_res_list = []
        norm_res = res
        kappa = max(abs(np.linalg.eigvals(M)[::-1]) / abs(np.linalg.eigvals(A)[::-1]))
        it_outer, it_inner, num_precond, num_reorthogonalization, it_mgs = 0, [], [], [], []
        tol = realsqrt(tol**(-4) + tol**(-8))
        while norm_res / tol >= realsqrt(realsqrt(realsqrt(realsqrt(realsize)))):
            it_outer += 1
            Kvp.fill(0); Mvp.fill(0)
            Kvp = A_func(alpha, Kvp, x, beta=Mvp)
            res_b_norm = np.linalg.norm(b)
            p.fill(-Kvp); r.fill(b - Kvp); v.fill(0)
            res = np.linalg.norm(r); norm_res = res / res_b_norm
            num_precond += it_inner.count(None)
            if postprocess != None:
                postprocess(r, norm_res)
            if verbose == True:
                print('Outer Iteration #{}/{}'.format(it_outer, maxiter))
                print('Residual Norm: {:.8f}'.format(norm_res))
            for ii, it_mgs_it_inner, it_num_precond_it_inner, it_num_reorthogonalization_it_inner in enumerate(
                    zip(it_mgs, it_inner, num_precond, num_reorthogonalization)):
                it_mgs.append(ii + it_mgs[it_outer - 1])
                it_inner.append(None); num_precond.append(None); num_reorthogonalization.append(None)
            while norm_res / norm_res_list[-len(K)] >= tol:
                it_inner[it_outer] += 1
                num_precond[it_outer] = num_precond[it_outer] + it_num_precond_it_inner
                Mvp.fill(0); Kvp.fill(0)
                Mvp = M.dot(p); Kvp = A_func(alpha, Kvp, x, beta=Mvp)
                v = A_func(alpha, v, x, beta=M.dot(v))
                hadamard = v.dot(p) / (v.dot(v))
                p -= hadamard * v; r -= hadamard * Kvp
                v /= np.linalg.norm(v)
                for ii, reortho_it_num_reorthogonalization_it_inner in enumerate(
                        zip(it_mgs, it_num_reorthogonalization)):
                    reortho = wedge(v, K[ii])
                    wedge_norm = np.linalg.norm(reortho)
                    if wedge_norm / norm_res > .95:
                        print('ii={}, Wedge Norm:{}, Residual Norm:{}, Ratio:{:.4f}'.format(
                            ii, wedge_norm, norm_res, wedge_norm / norm_res))
                    num_reorthogonalization[ii] = (num_reorthogonalization[ii]
                                                   + reortho_it_num_reorthogonalization_it_inner
                                                   + reortho.shape[-1])
                    v -= reortho
                    v /= np.linalg.norm(v)
                hadamards = p.dot(K) / v.dot(K)
                hadamarized_KtKs = p.dot(K.T @ K) - np.array(hadamards)**2 * v.dot(K.T @ K)
                hadamarized_KtMps = p.dot(M @ K) - np.array(hadamards) * v.dot(M @ K)
                alpha = -hadamarized_KtKs / hadamarized_KtMps
                x += alpha * p; r += alpha * K; norm_res_list += [norm_res]
                res = np.linalg.norm(r); norm_res = res / res_b_norm
                if postprocess != None:
                    postprocess(r, norm_res,
                                output_prefix=output_prefix + 'InnerIter_' + str(it_inner[it_outer]))
                if verbose == True:
                    print('Inner Iteration #{}/{}'.format(it_inner[it_outer], maxiter * len(K)))
                    print('Residual Norm: {:.8f}'.format(norm_res))
                alpha = -r.dot(K) / v.dot(K.T @ M @ M.T @ v * n * kappa**4 * norm_res**4 * tol**4)
                x += (alpha + hadamard) * p; r += (alpha + hadamard) * K
                p = r - alpha * K - tol**4 * n * kappa**4 * norm_res**4 * M.rmatvec(v) / norm_r_mat
                norm_r_mat = np.linalg.norm(M.rmatvec(v))
                res = np.linalg.norm(r); norm_res = res / res_b_norm
                norm_res_list += [norm_res]
                num_precond[it_outer] = num_precond[it_outer] + it_num_precond_it_inner + M.num_rmatvec()
                num_reorthogonalization[it_outer] = (num_reorthogonalization[it_outer]
                                                     + it_num_reorthogonalization_it_inner
                                                     + len(wedge(v, K)))
                if postprocess != None:
                    postprocess(r, norm_res,
                                output_prefix=output_prefix + 'InnerIter_' + str(it_inner[it_outer]))
                if verbose == True:
                    print('Inner Iteration #{}/{}'.format(it_inner[it_outer], maxiter * len(K)))
                    print('Residual Norm: {:.8f}'.format(norm_res))
                K.append(Kvp.copy())
                K_tilde = p.copy()
                while len(norm_r_mat_list) >= 10:
                    del norm_r_mat_list[-10:]
                norm_r_mat_list += [norm_r_mat]
                it_mgs[it_outer] = mgs_classic(len(K)-len(K_tilde),list(map(lambda vec:numpy.ndarray.tolist(vec),zip(*map(lambda vec:numpy.ndarray.tolist(vec),zip(*(map(lambda pair:pairs_to_sum_product(pair,K_tilde,pair,K[:-len(K_tilde)]),range(len(K_tilde))))))))))[:len(K)-len(K_tilde)]
                # A second mgs_classic(...) call followed here; its argument list was
                # garbled beyond recovery in the paste.
        return x

    ### Traceback ###
    Traceback (most recent call last):
      File "solver_test.py", line 120, in <module>
        x_icce = solver_icce(A_func=A_matrix_vector_product, A_shape=(100,), A_data=my_data,
                             alpha=my_alpha, beta=my_beta, x_init=my_x_init, tol=my_tol,
                             maxiter=my_maxiter, M=my_M, solver='bicgstab',
                             postprocess=my_postprocessor, getinfo=True,
                             output_prefix='test_', verbose=True, **my_solver_kwargs)
      File "solver_test.py", line 50, in icce
        res_b_norm = numpy.linalg.norm(b)
    AttributeError: 'module' object has no attribute 'linalg'

    Any insights would really help untangle why res_b_norm isn’t computing properly despite seeming straightforward… thanks!
