Espanyol vs Sevilla

Expert Overview: Espanyol vs Sevilla

The upcoming match between Espanyol and Sevilla on November 24, 2025, at 20:00 is expected to be a highly competitive encounter. While the card markets point to a fairly disciplined contest, with an expected 0.57 red cards and 3.90 yellow cards, the goal statistics suggest an open game: the data projects an average of 3.57 total goals. Defensively, Espanyol has conceded an average of 1.70 goals per game, while Sevilla has scored an average of 1.98 goals per game.
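A note on reading the figures below: the percentages quoted for each market look like implied probabilities rather than bookmaker prices. Under that assumption, a minimal Python sketch converts any of them into the fair decimal odds they correspond to:

```python
# Minimal sketch: convert an implied probability (in percent) to fair decimal odds.
# Assumes the figures quoted in this preview are probabilities, not bookmaker prices.

def fair_decimal_odds(percent: float) -> float:
    """Fair decimal odds implied by a probability given in percent."""
    if not 0 < percent <= 100:
        raise ValueError("probability must be in (0, 100]")
    return 100.0 / percent

# Examples using figures quoted later in this preview:
print(round(fair_decimal_odds(62.0), 2))  # Over 2.5 Goals at 62%      -> 1.61
print(round(fair_decimal_odds(52.0), 2))  # Home Team To Win at 52%    -> 1.92
print(round(fair_decimal_odds(51.0), 2))  # Both Teams To Score at 51% -> 1.96
```

Any bookmaker price above the fair figure would, on these numbers, represent value.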

Betting Predictions

Defensive and Card Markets

  • Both Teams Not To Score In 2nd Half: 75.30
  • Both Teams Not To Score In 1st Half: 77.20
  • Home Team Not To Score In 1st Half: 69.20
  • Under 5.5 Cards: 64.50
  • Last Goal Minute 0-72: 56.50
  • Over 4.5 Cards: 51.30

Offensive Odds

  • Over 1.5 Goals: 74.80
  • Avg Total Goals: 3.57
  • Avg Goals Scored (Sevilla): 1.98
  • Avg Goals Conceded (Espanyol): 1.70
  • Away Team To Score In Second Half: 64.50

Scores and Timing Odds

  • Average Total Goals: predicted at around 3.57 goals (a Poisson sanity check on this figure follows below).
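To see whether the goal-market percentages hang together, here is a hedged sanity check. It assumes, purely for illustration, that total goals follow a Poisson distribution with mean λ = 3.57, the average quoted above; real match models are more sophisticated, so the output will not match the quoted figures exactly:

```python
import math

# Minimal sketch: Poisson sanity check on the goal markets.
# Assumption (illustration only): total goals ~ Poisson(lambda = 3.57).

LAMBDA = 3.57  # average total goals quoted in this preview

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# P(Over 2.5 goals) = P(X >= 3) = 1 - P(0) - P(1) - P(2)
p_over_2_5 = 1.0 - sum(poisson_pmf(k, LAMBDA) for k in range(3))
print(f"Over 2.5 goals: {p_over_2_5:.1%}")            # ~69.2%, vs the quoted 62%

# P(exactly 2 or 3 goals)
p_two_or_three = poisson_pmf(2, LAMBDA) + poisson_pmf(3, LAMBDA)
print(f"Exactly 2 or 3 goals: {p_two_or_three:.1%}")  # ~39.3%, vs the quoted 57%
```

The simple model roughly agrees with the Over 2.5 figure but prices "two or three goals" well below the quoted 57%, a reminder to treat these percentages as the site's own estimates rather than model-derived probabilities.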

Odds for Specific Scenarios

  • The first goal is likely to come after the opening half hour, with odds at 61%.
      “First Goal After Minute 30+”: 61%

  • The total number of goals may well be two or three, with odds standing at 57%.
      “Sum Of Goals Will Be Either Two Or Three”: 57%

Possible Outcomes and Trends Analysis

  • Espanyol are likely to score in the second half, as indicated by odds of 73%.
      “Home Team To Score In Second Half”: 73%

  • The likelihood of both teams scoring is close to a coin flip, sitting just below the home-win probability.
      “Both Teams To Score”: 51%

  • Espanyol are slight favourites to win, with odds at 52%.
      “Home Team To Win”: 52%

  • The game could see more than two and a half goals based on current predictions.
      “Over 2.5 Goals”: 62%

  • The combined “Over 2.5 BTTS” market (over 2.5 goals with both teams scoring) also looks promising, with odds standing at 54%.
      “Over 2.5 BTTS”: 54%
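
Finally, to turn any of these percentages into a value judgment against an actual bookmaker price, the standard check is expected value: for probability p and decimal odds d, EV per unit stake is p·d − 1. A minimal sketch follows; the 2.10 price used below is invented purely for illustration:

```python
# Minimal sketch: expected value of a bet, per unit stake.
# EV = p * d - 1, where p is the estimated probability and d the bookmaker's decimal odds.

def expected_value(probability: float, decimal_odds: float) -> float:
    """Expected profit per unit staked; positive means a value bet."""
    return probability * decimal_odds - 1.0

# Hypothetical example: this preview puts Espanyol's win probability at 52%.
# The 2.10 bookmaker price is invented for illustration only.
p_home_win = 0.52
offered_odds = 2.10
print(f"EV: {expected_value(p_home_win, offered_odds):+.2%}")  # +9.20% -> value bet
```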