%% Cell type:markdown id: tags:
# Load libraries
%% Cell type:code id: tags:
```
# CDS API
import cdsapi
```
%% Cell type:markdown id: tags:
# Directories for storing the data
%% Cell type:code id: tags:
```
import os
DATADIR = 'era5/'
LABELDIR = 'fire_danger/'
os.makedirs(DATADIR, exist_ok=True); os.makedirs(LABELDIR, exist_ok=True)  # create the target directories if missing
```
%% Cell type:markdown id: tags:
# Download ERA5 data for Argentina from 2002 to 2022
%% Cell type:code id: tags:
```
for year in range(2002, 2023):
    target_file = DATADIR + f'{year}.nc'
    c = cdsapi.Client()
    c.retrieve(
        'reanalysis-era5-single-levels',
        {
            'product_type': 'reanalysis',
            'format': 'netcdf',
            'month': [
                '01', '02', '12',
            ],
            'day': [
                '01', '02', '03',
                '04', '05', '06',
                '07', '08', '09',
                '10', '11', '12',
                '13', '14', '15',
                '16', '17', '18',
                '19', '20', '21',
                '22', '23', '24',
                '25', '26', '27',
                '28', '29', '30',
                '31',
            ],
            'time': [
                '15:00',
            ],
            'variable': [
                '10m_u_component_of_wind', '10m_v_component_of_wind', '2m_temperature',
                'leaf_area_index_high_vegetation', 'leaf_area_index_low_vegetation', 'total_precipitation',
            ],
            'year': [str(year)],
            'area': [
                -20, -79, -57, -43,  # North, West, South, East
            ],
        },
        target_file)
```
%% Cell type:markdown id: tags:
# Download fire danger index data
%% Cell type:code id: tags:
```
for year in range(2002, 2023):
    target_file = LABELDIR + f'{year}.nc'
    c = cdsapi.Client()
    c.retrieve(
        'cems-fire-historical-v1',
        {
            'product_type': 'reanalysis',
            'variable': 'fire_danger_index',
            'dataset_type': 'consolidated_dataset',
            'system_version': '4_1',
            'year': str(year),
            'month': [
                '01', '02', '12',
            ],
            'day': [
                '01', '02', '03',
                '04', '05', '06',
                '07', '08', '09',
                '10', '11', '12',
                '13', '14', '15',
                '16', '17', '18',
                '19', '20', '21',
                '22', '23', '24',
                '25', '26', '27',
                '28', '29', '30',
                '31',
            ],
            'area': [
                -20, -79, -57, -43,  # North, West, South, East
            ],
            'grid': '0.25/0.25',
            'format': 'netcdf',
        },
        target_file)
```
%% Cell type:markdown id: tags:
# Set up directories
%% Cell type:code id: tags:
``` python
DATADIR = 'era5/' # directory containing downloaded era5 data
FIREDATADIR = 'fire_danger/' # directory containing fire data
DESTDIR = 'processed_data/' # directory to save .npy files for each time step and variable
```
%% Cell type:markdown id: tags:
# Load libraries
%% Cell type:code id: tags:
``` python
import numpy as np
import netCDF4 as nc
import os
from tqdm.notebook import tqdm
```
%% Cell type:markdown id: tags:
# Configuration variables
%% Cell type:code id: tags:
``` python
vars = ['u10','v10','t2m','lai_hv','lai_lv','tp','fdimrk']  # considered variables (see 0_download_data.ipynb for long names)
months = [(1,31),(2,28),(12,31)]  # (month, days in month) pairs in the downloaded era5 .nc files
years = np.arange(2002,2023)  # downloaded years
```
%% Cell type:markdown id: tags:
# Process the data into .npy files
%% Cell type:code id: tags:
``` python
# Processing data to create .npy files
for var in vars:
    if not os.path.exists(DESTDIR + f"{var}"):
        os.makedirs(DESTDIR + f"{var}")
    for year in tqdm(years):
        if var == 'fdimrk':
            root = nc.Dataset(FIREDATADIR + f"{year:d}.nc", 'r')
        else:
            root = nc.Dataset(DATADIR + f"{year:d}.nc", 'r')
        v = root.variables[var][:, :-9, :-5]  # crop to a size suitable for the considered Unet-like model, here 140x140
        v = v.data
        root.close()
        if var in ['tp']:  # change unit from m to mm for precipitation
            v = 1000 * v
        t = 0  # time step within the v array currently being written out
        for month, days in months:
            for day in range(days):
                np.save(DESTDIR + f"{var}/{year}_{month:02d}_{day+1:02d}.npy", v[t])
                t += 1
```
%% Cell type:markdown id: tags:
# Save latitude/longitude information
%% Cell type:code id: tags:
``` python
root = nc.Dataset(DATADIR + f"2002.nc", 'r') #constant in time -> take from any year
lat = root.variables['latitude'][:-9].data #crop to get to a size suitable for the considered Unet-like model
lon = root.variables['longitude'][:-5].data
np.save(DESTDIR + 'lat.npy', lat)
np.save(DESTDIR + 'lon.npy', lon)
```
%% Cell type:markdown id: tags:
# Set up directories
%% Cell type:code id: tags:
``` python
DATADIR = 'processed_data/'
```
%% Cell type:markdown id: tags:
# Load libraries
%% Cell type:code id: tags:
``` python
import numpy as np
from tqdm.notebook import tqdm
import os, gc
```
%% Cell type:markdown id: tags:
# Configuration variables
%% Cell type:code id: tags:
``` python
vars = ['u10','v10','t2m','lai_hv','lai_lv','tp']
target_vars = ['fdimrk']
months = [1,2,12]
val, test = [2015,2018], [2019,2022] #years to use for validation and testing, do not use these years to compute normalization constants
train = [x for x in np.arange(2002,2015) if x not in val+test]
```
%% Cell type:markdown id: tags:
# Normalization constants for the input variables:
%% Cell type:code id: tags:
``` python
os.makedirs('norm_consts', exist_ok=True)  # directory for the normalization constants
for var in vars:
    tmp = []  # save values from all time steps and locations into this list, then compute mean and std
    files = sorted(os.listdir(DATADIR + var))  # SHOULD NOT CONTAIN ANY OTHER FILES THAN THOSE CREATED IN 1_prepareData.ipynb
    for f in tqdm(files):
        y, m, d = f.split('_')
        if int(y) in train and int(m) in months:
            t_array = np.load(DATADIR + var + '/' + f)
            t_array[np.isnan(t_array)] = 0
            tmp += list(t_array.flatten())
    mean, std = np.mean(tmp), np.std(tmp)
    print(f'Mean {mean}, std {std}')
    np.save(f'norm_consts/input_{var}.npy', np.array([mean, std]))
    del tmp
    gc.collect()
```
%% Cell type:markdown id: tags:
# Normalization constants for the target variables:
%% Cell type:code id: tags:
``` python
for var in target_vars:
    tmp = []  # save values from all time steps and locations into this list, then compute mean and std
    files = sorted(os.listdir(DATADIR + var))  # SHOULD NOT CONTAIN ANY OTHER FILES THAN THOSE CREATED IN 1_prepareData.ipynb
    for f in tqdm(files):
        y, m, d = f.split('_')
        if int(y) in train and int(m) in months:
            t_array = np.load(DATADIR + var + '/' + f)
            t_array[np.isnan(t_array)] = 0
            tmp += list(t_array.flatten())
    mean, std = np.mean(tmp), np.std(tmp)
    print(f'Mean {mean}, std {std}')
    np.save(f'norm_consts/target_{var}.npy', np.array([mean, std]))
    del tmp
    gc.collect()
```
%% Cell type:markdown id: tags:
# Set up directories
%% Cell type:code id: tags:
``` python
DATADIR = 'processed_data/'
```
%% Cell type:markdown id: tags:
# Load libraries
%% Cell type:code id: tags:
``` python
import numpy as np #DL specific imports below
import sys, time, copy
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
# from torchvision import models, transforms
from skorch.net import NeuralNet #pytorch wrapper skorch
from skorch.helper import predefined_split
from skorch.callbacks import Checkpoint, EarlyStopping
```
%% Cell type:markdown id: tags:
# Configuration variables
%% Cell type:code id: tags:
``` python
months = np.array([(1,31),(2,28),(12,31)])  # (month, days in month) pairs in the downloaded files
valid, test = [2015,2018], [2019,2022]  # years used for validation and testing; normalization constants were computed without them
train = [x for x in np.arange(2002,2015) if x not in valid+test]
targetVar = 'fdimrk'
var = ['u10','v10','t2m','lai_hv','lai_lv','tp']
means = np.array([np.load(f'norm_consts/input_{v}.npy')[0] for v in var])
stds = np.array([np.load(f'norm_consts/input_{v}.npy')[1] for v in var])
targetMean, targetStd = np.load(f'norm_consts/target_{targetVar}.npy')
```
%% Cell type:markdown id: tags:
# Deep learning parameters
%% Cell type:code id: tags:
``` python
##### DL parameters
batch_size = 64
learning_rate = 1e-3
num_epochs = 200
num_workers = 8
weight_decay=0.
patience=30 # early stopping if valid loss did not improve for 30 epochs
```
%% Cell type:markdown id: tags:
# PyTorch settings
%% Cell type:code id: tags:
``` python
member = 0 #ensemble member = seed for weight initialization
torch.manual_seed(member) #for reproducibility and creation of a seed ensemble
np.random.seed(member)
torch.backends.cudnn.benchmark = False
# torch.set_deterministic(True)
torch.use_deterministic_algorithms(True)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
```
%% Cell type:markdown id: tags:
# Count the number of trainable parameters in a PyTorch model
%% Cell type:code id: tags:
``` python
def pytorch_count_params(model):  # counts the number of trainable parameters in a pytorch model
    tot = 0
    for x in model.parameters():
        tot += np.prod(x.size())
    return tot
```
%% Cell type:markdown id: tags:
# ERA5 and fire danger index dataset
%% Cell type:code id: tags:
``` python
class copernicusDataset(Dataset):
    def __init__(self, years, aug=False):  # aug = use rotation and flipping as data augmentation
        self.years = years
        self.length = len(self.years) * sum([m[1] for m in months])  # nb of years * nb of time steps each year
        self.aug = aug

    def idxToFile(self, idx):  # conversion between time step index and (year, month, day) input and target file
        year = self.years[idx // (sum([m[1] for m in months]))]
        tInYear = idx % (sum([m[1] for m in months]))  # time step within this year
        monthIdx = np.argmax(tInYear < np.array([sum(months[:m, 1]) for m in range(1, len(months) + 1)]))
        month = months[monthIdx, 0]
        tInMonth = tInYear - sum(months[:monthIdx, 1])  # time step within this month
        day = tInMonth + 1  # day numbering starts with 1
        return f"/{year:d}_{month:02d}_{day:02d}.npy", f"/{year:d}_{month:02d}_{day:02d}.npy"

    def normalize(self, x):  # normalize the input fields
        return ((x.transpose() - means) / stds).transpose()

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        inpFile, targetFile = self.idxToFile(idx)
        inp = []
        for v in var:
            inp += [np.load(DATADIR + v + inpFile)]
        inp = self.normalize(np.stack(inp))
        target = ((np.load(DATADIR + targetVar + targetFile) - targetMean) / targetStd).reshape((1, 100, 100))
        if self.aug:  # 50 % probability to rotate by 180 deg, 50 % probability to flip left and right
            rot = np.random.randint(2)  # 0 -> no rotate, 1 -> rotate
            inp = np.rot90(inp, k=2 * rot, axes=(1, 2))
            target = np.rot90(target, k=2 * rot, axes=(1, 2))
            if np.random.randint(2):  # 0 -> no flip, 1 -> flip
                inp = np.flip(inp, axis=2)
                target = np.flip(target, axis=2)
        return torch.tensor(inp.astype(np.float32)), torch.tensor(target.astype(np.float32))

trainset = copernicusDataset(train, aug=True)
validset = copernicusDataset(valid)
```
%% Cell type:markdown id: tags:
# UNet model
%% Cell type:code id: tags:
``` python
class UNet(nn.Module):
    def __init__(self):
        super(UNet, self).__init__()  # in: len(var) x 140 x 140
        self.conv1 = nn.Conv2d(in_channels=len(var), out_channels=64, kernel_size=3)  # out: 64 x 138 x 138
        self.bn1 = nn.BatchNorm2d(64)
        self.conv2 = nn.Conv2d(64, 64, 3)  # out: 64 x 136 x 136
        self.bn2 = nn.BatchNorm2d(64)
        self.pool1 = nn.MaxPool2d(2)  # out: 64 x 68 x 68
        self.conv3 = nn.Conv2d(64, 128, 3)  # out: 128 x 66 x 66
        self.bn3 = nn.BatchNorm2d(128)
        self.conv4 = nn.Conv2d(128, 128, 3)  # out: 128 x 64 x 64
        self.bn4 = nn.BatchNorm2d(128)
        self.pool2 = nn.MaxPool2d(2)  # out: 128 x 32 x 32
        self.conv5 = nn.Conv2d(128, 256, 3)  # out: 256 x 30 x 30
        self.bn5 = nn.BatchNorm2d(256)
        self.conv6 = nn.Conv2d(256, 256, 3)  # out: 256 x 28 x 28
        self.bn6 = nn.BatchNorm2d(256)
        self.upconv1 = nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=4, stride=2, padding=1)  # out: 128 x 56 x 56
        ### concat with crop(conv4) -> out: 256 x 56 x 56
        self.conv7 = nn.Conv2d(256, 128, 3)  # out: 128 x 54 x 54
        self.bn7 = nn.BatchNorm2d(128)
        self.conv8 = nn.Conv2d(128, 128, 3)  # out: 128 x 52 x 52
        self.bn8 = nn.BatchNorm2d(128)
        self.upconv2 = nn.ConvTranspose2d(128, 64, 4, 2, 1)  # out: 64 x 104 x 104
        ### concat with crop(conv2) -> out: 128 x 104 x 104
        self.conv9 = nn.Conv2d(128, 64, 3)  # out: 64 x 102 x 102
        self.bn9 = nn.BatchNorm2d(64)
        self.conv10 = nn.Conv2d(64, 64, 3)  # out: 64 x 100 x 100
        self.bn10 = nn.BatchNorm2d(64)
        self.conv11 = nn.Conv2d(64, 1, 1)  # out: 1 x 100 x 100

    def forward(self, x):
        level1 = F.relu(self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x))))))
        level2 = F.relu(self.bn4(self.conv4(F.relu(self.bn3(self.conv3(self.pool1(level1)))))))
        level3 = F.relu(self.bn6(self.conv6(F.relu(self.bn5(self.conv5(self.pool2(level2)))))))
        ### going up again - to center crop the concatenated array, use the pad function with negative padding
        level2 = F.relu(self.bn8(self.conv8(F.relu(self.bn7(self.conv7(torch.cat((F.pad(level2, [-4, -4, -4, -4]), self.upconv1(level3)), dim=1)))))))
        level1 = F.relu(self.bn10(self.conv10(F.relu(self.bn9(self.conv9(torch.cat((F.pad(level1, [-16, -16, -16, -16]), self.upconv2(level2)), dim=1)))))))
        return self.conv11(level1)

model = UNet()
print('Number of parameters in the model', pytorch_count_params(model))
```
%% Cell type:code id: tags:
``` python
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    model = nn.DataParallel(model)
```
%% Cell type:markdown id: tags:
# Mean squared error
%% Cell type:code id: tags:
``` python
class myMSE(nn.Module):  # just normal MSE, but the pytorch implementation somehow did not work properly here
    def forward(self, input, target):
        return ((input - target) ** 2).mean()
```
%% Cell type:markdown id: tags:
# Create the neural network
%% Cell type:code id: tags:
``` python
net = NeuralNet(  # skorch wrapper facility
    model,
    criterion=myMSE,
    batch_size=batch_size,
    lr=learning_rate,
    max_epochs=num_epochs,
    optimizer=optim.Adam,
    iterator_train__shuffle=True,
    iterator_train__num_workers=num_workers,
    iterator_valid__shuffle=False,
    iterator_valid__num_workers=num_workers,
    train_split=predefined_split(validset),  # despite the name, validset is used for validation, not training; see the skorch.helper.predefined_split documentation
    callbacks=[Checkpoint(dirname='training', f_params='best_params.pt'),  # saves the best parameters to best_params.pt
               EarlyStopping(patience=patience, threshold=1e-3, threshold_mode='abs')],  # stops training if the valid loss did not improve for `patience` epochs
    device=device
)
tstart = time.time()
net.fit(trainset)
print('Time for training', (time.time()-tstart)/60, 'min')
```
# BELLA Ideathon: Copernicus Innovation Challenge
## Problem description
Wildfires have doubled worldwide over the last 20 years, with fires consuming the equivalent of 16 football fields per minute, according to a joint study by the University of Maryland (United States), Global Forest Watch (GFW) and the World Resources Institute.
In Argentina we face one of the most devastating environmental problems. According to data gathered by the national firefighting corps, the fire management service and INTA, wildfires had already burned more than 700,000 hectares across several provinces of the country in 2022.
People lose their homes, land, jobs and oxygen. Even after the flames are out, wildfires leave disease and epidemics behind in the wildlife and impair photosynthesis, which is very harmful to our planet.
## Solution
Develop a technology accessible to citizens that uses artificial intelligence (AI) to interpret and process complex data and present the information in a form that humans can understand.
This project predicts the likelihood that a fire will start. It uses the following parameters from the ERA5 hourly data: 10m u component of wind, 10m v component of wind, 2m temperature, leaf area index high vegetation, leaf area index low vegetation and total precipitation. The fire danger index data were obtained from the Copernicus Emergency Management Service.
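
As a rough sketch of how the trained model could be used once the notebooks listed below have been run, the snippet here assembles the input for a single date from the preprocessed `.npy` files, applies the same normalization as in training, and maps the prediction back to fire danger index units. It assumes the `UNet` class from `3_train_model.ipynb` is available in scope and that the directory layout and checkpoint (`training/best_params.pt`) created by the notebooks exist; the date is only an example.

``` python
import numpy as np
import torch

DATADIR = 'processed_data/'
var = ['u10', 'v10', 't2m', 'lai_hv', 'lai_lv', 'tp']
means = np.array([np.load(f'norm_consts/input_{v}.npy')[0] for v in var])
stds = np.array([np.load(f'norm_consts/input_{v}.npy')[1] for v in var])
targetMean, targetStd = np.load('norm_consts/target_fdimrk.npy')

date = '2019_01_15'  # example date from the test years
inp = np.stack([np.load(f'{DATADIR}{v}/{date}.npy') for v in var])
inp = ((inp.transpose() - means) / stds).transpose()  # same normalization as during training

model = UNet()  # assumed to be defined/imported as in 3_train_model.ipynb
model.load_state_dict(torch.load('training/best_params.pt', map_location='cpu'))
model.eval()
with torch.no_grad():
    pred = model(torch.tensor(inp[None].astype(np.float32)))
fire_danger_map = pred.squeeze().numpy() * targetStd + targetMean  # back to fire danger index units
```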
## Team members
- David Akim
- Tomas Emanuel Schattmann
- Aldana Tedesco
## Installation requirements
Run the following commands in a terminal (see the note on CDS API credentials after this list):
- `pip install cdsapi`
- `pip install numpy`
- `pip install netCDF4`
- `pip install torch`
- `pip install skorch`
- `pip install pandas`
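
Note that `cdsapi` also needs Climate Data Store (CDS) credentials before the download notebook will run. These are usually stored in a `~/.cdsapirc` file; the exact URL and key format depend on the CDS version you use, so check your CDS user profile, but it typically looks like the following (the key below is a placeholder):

```
url: https://cds.climate.copernicus.eu/api/v2
key: <your-UID>:<your-API-key>
```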
## Order in which to run the notebooks
1. 0_download_data.ipynb
2. 1_prepare_data.ipynb
3. 2_norm_consts.ipynb
4. 3_train_model.ipynb
# Book settings
# Learn more at https://jupyterbook.org/customize/config.html

title: Plantilla Jupyter Book
author: MiLab
logo: logo.png

# Force re-execution of notebooks on each build.
# See https://jupyterbook.org/content/execute.html
execute:
  execute_notebooks: force

# Define the name of the latex output file for PDF builds
latex:
  latex_documents:
    targetname: book.tex

# Add a bibtex file so that we can create citations
bibtex_bibfiles:
  - references.bib

# Information about where the book exists on the web
repository:
  url: https://gitmilab.redclara.net/hackathon-bella/plantillajupyterbook  # Online location of your book
  # path_to_book: docs  # Optional path to your book, relative to the repository root
  branch: main  # Which branch of the repository should be used when creating links (optional)

# Add GitHub buttons to your book
# See https://jupyterbook.org/customize/config.html#add-a-link-to-your-repository
html:
  use_issues_button: true
  use_repository_button: true
# Table of contents
# Learn more at https://jupyterbook.org/customize/toc.html

format: jb-book
root: docs/intro
parts:
  - caption: Sesiones
    numbered: True  # Only applies to chapters in Part 1.
    chapters:
      - file: docs/sesion01/milab
# Creating courses with MkDocs
```{tableofcontents}
```
This is a course on the [MiLab Pages](https://milab.redclara.net/) service. You can review its [source code](https://gitlab.com/pages/mkdocs) and request a new deployment based on this template for your own projects by filling in the form at [https://forms.gle/y33t6xb1vSKXQV4n8](https://forms.gle/y33t6xb1vSKXQV4n8).
## Editing tutorial
> This template is edited in Markdown format; you can use the guide created by [Matt Cone](https://github.com/mattcone) to learn the syntax: [Markdown quick reference](markdown.md)
To edit a course, follow these steps:
1. Go to the course repository. For example https://gitmilab.redclara.net/cyted/course1
2. Open the web editor by clicking the **Web IDE** button
![upload](images/cyted1.png "Web IDE")
3. To edit existing files, locate them in the `docs/` folder and make the necessary changes.
4. For new pages:
   1. Create a new file with the `.md` extension in the `docs/` directory, for example `docs/documentacion.md`
   2. Add it to the menu in the `mkdocs.yml` file under the `nav:` section, following this format:
```
nav:
  - Title: docs/archivo.md
```
5. Images, documents or any other supplementary material can be added by uploading them from the editor using the icon ![upload](images/upload.png "Upload icon from https://www.flaticon.com/free-icon/upload_747416")
6. To save the changes:
   1. Click the **Create Commit** button
   2. Add a description of the changes in the *Commit message* field.
   > A short sentence broadly describing the improvements made
   3. Select Commit to **master** branch.
   4. Click the **Commit** button
> After the commit, the changes will appear in the web version in roughly 5 minutes.
# Reproducibility `MiLab`
The MiLab platform: version control, messaging for research scenarios, computational resources and research data for open, reproducible science.
---
jupytext:
  cell_metadata_filter: -all
  formats: md:myst
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.11.5
kernelspec:
  display_name: Python 3
  language: python
  name: python3
---
# Notebooks with MyST Markdown
Jupyter Book also lets you write text-based notebooks using MyST Markdown.
See [the Notebooks with MyST Markdown documentation](https://jupyterbook.org/file-types/myst-notebooks.html) for more detailed instructions.
This page shows off a notebook written in MyST Markdown.
## An example cell
With MyST Markdown, you can define code cells with a directive like so:
```{code-cell}
print(2 + 2)
```
When your book is built, the contents of any `{code-cell}` blocks will be
executed with your default Jupyter kernel, and their outputs will be displayed
in-line with the rest of your content.
```{seealso}
Jupyter Book uses [Jupytext](https://jupytext.readthedocs.io/en/latest/) to convert text-based files to notebooks, and can support [many other text-based notebook files](https://jupyterbook.org/file-types/jupytext.html).
```
## Create a notebook with MyST Markdown
MyST Markdown notebooks are defined by two things:
1. YAML metadata that is needed to understand if / how it should convert text files to notebooks (including information about the kernel needed).
See the YAML at the top of this page for example.
2. The presence of `{code-cell}` directives, which will be executed with your book.
That's all that is needed to get started!
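
For instance, putting those two pieces together, a minimal MyST notebook file (the title and cell contents here are only an illustration) could look like:

````
---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
kernelspec:
  display_name: Python 3
  language: python
  name: python3
---
# A minimal MyST notebook

```{code-cell}
print(2 + 2)
```
````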
## Quickly add YAML metadata for MyST Notebooks
If you have a markdown file and you'd like to quickly add YAML metadata to it, so that Jupyter Book will treat it as a MyST Markdown Notebook, run the following command:
```
jupyter-book myst init path/to/markdownfile.md
```
# Markdown Files
Whether you write your book's content in Jupyter Notebooks (`.ipynb`) or
in regular markdown files (`.md`), you'll write in the same flavor of markdown
called **MyST Markdown**.
This is a simple file to help you get started and show off some syntax.
## What is MyST?
MyST stands for "Markedly Structured Text". It
is a slight variation on a flavor of markdown called "CommonMark" markdown,
with small syntax extensions to allow you to write **roles** and **directives**
in the Sphinx ecosystem.
For more about MyST, see [the MyST Markdown Overview](https://jupyterbook.org/content/myst.html).
## Sample Roles and Directives
Roles and directives are two of the most powerful tools in Jupyter Book. They
are kind of like functions, but written in a markup language. They both
serve a similar purpose, but **roles are written in one line**, whereas
**directives span many lines**. They both accept different kinds of inputs,
and what they do with those inputs depends on the specific role or directive
that is being called.
Here is a "note" directive:
```{note}
Here is a note
```
It will be rendered in a special box when you build your book.
Here is an inline directive to refer to a document: {doc}`markdown-notebooks`.
## Citations
You can also cite references that are stored in a `bibtex` file. For example,
the following syntax: `` {cite}`holdgraf_evidence_2014` `` will render like
this: {cite}`holdgraf_evidence_2014`.
Moreover, you can insert a bibliography into your page with this syntax:
The `{bibliography}` directive must be used for all the `{cite}` roles to
render properly.
For example, if the references for your book are stored in `references.bib`,
then the bibliography is inserted with:
```{bibliography}
```
## Learn more
This is just a simple starter to get you started.
You can learn a lot more at [jupyterbook.org](https://jupyterbook.org).
%% Cell type:markdown id: tags:
# Content with notebooks
You can also create content with Jupyter Notebooks. This means that you can include
code blocks and their outputs in your book.
## Markdown + notebooks
As it is markdown, you can embed images, HTML, etc into your posts!
![](https://myst-parser.readthedocs.io/en/latest/_static/logo-wide.svg)
You can also $add_{math}$ and
$$
math^{blocks}
$$
or
$$
\begin{aligned}
\mbox{mean} la_{tex} \\ \\
math blocks
\end{aligned}
$$
But make sure you \$Escape \$your \$dollar signs \$you want to keep!
## MyST markdown
MyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, check
out [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),
or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/).
## Code blocks and outputs
Jupyter Book will also embed your code blocks and output in your book.
For example, here's some sample Matplotlib code:
%% Cell type:code id: tags:
``` python
from matplotlib import rcParams, cycler
import matplotlib.pyplot as plt
import numpy as np
plt.ion()
```
%% Cell type:code id: tags:
``` python
# Fixing random state for reproducibility
np.random.seed(19680801)
N = 10
data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)]
data = np.array(data).T
cmap = plt.cm.coolwarm
rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N)))
from matplotlib.lines import Line2D
custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4),
                Line2D([0], [0], color=cmap(.5), lw=4),
                Line2D([0], [0], color=cmap(1.), lw=4)]
fig, ax = plt.subplots(figsize=(10, 5))
lines = ax.plot(data)
ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']);
```
%% Cell type:markdown id: tags:
There is a lot more that you can do with outputs (such as including interactive outputs)
with your book. For more information about this, see [the Jupyter Book documentation](https://jupyterbook.org)
---
---

@misc{prapas2021deep,
  title = {Deep Learning Methods for Daily Wildfire Danger Forecasting},
  author = {Ioannis Prapas and Spyros Kondylatos and Ioannis Papoutsis and Gustau Camps-Valls and Michele Ronco and Miguel-Ángel Fernández-Torres and Maria Piles Guillem and Nuno Carvalhais},
  year = {2021},
  eprint = {2111.02736},
  archivePrefix = {arXiv},
  primaryClass = {cs.LG}
}

@inproceedings{holdgraf_evidence_2014,
  address = {Brisbane, Australia},
  title = {Evidence for {Predictive} {Coding} in {Human} {Auditory} {Cortex}},
  booktitle = {International {Conference} on {Cognitive} {Neuroscience}},
  publisher = {Frontiers in Neuroscience},
  author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Knight, Robert T.},
  year = {2014}
}

@article{holdgraf_rapid_2016,
  title = {Causal deep learning models for studying the Earth system},
  volume = {16},
  url = {https://gmd.copernicus.org/articles/16/2149/2023/},
  doi = {10.5194/gmd-16-2149-2023},
  journal = {Geoscientific Model Development},
  author = {Tesch, T. and Kollet, S. and Garcke, J.},
  year = {2023},
  pages = {2149--2166}
}
@inproceedings{holdgraf_portable_2017,
title = {Portable learning environments for hands-on computational instruction using container-and cloud-based technology to teach data science},
volume = {Part F1287},
isbn = {978-1-4503-5272-7},
doi = {10.1145/3093338.3093370},
abstract = {© 2017 ACM. There is an increasing interest in learning outside of the traditional classroom setting. This is especially true for topics covering computational tools and data science, as both are challenging to incorporate in the standard curriculum. These atypical learning environments offer new opportunities for teaching, particularly when it comes to combining conceptual knowledge with hands-on experience/expertise with methods and skills. Advances in cloud computing and containerized environments provide an attractive opportunity to improve the effciency and ease with which students can learn. This manuscript details recent advances towards using commonly-Available cloud computing services and advanced cyberinfrastructure support for improving the learning experience in bootcamp-style events. We cover the benets (and challenges) of using a server hosted remotely instead of relying on student laptops, discuss the technology that was used in order to make this possible, and give suggestions for how others could implement and improve upon this model for pedagogy and reproducibility.},
booktitle = {{ACM} {International} {Conference} {Proceeding} {Series}},
author = {Holdgraf, Christopher Ramsay and Culich, A. and Rokem, A. and Deniz, F. and Alegro, M. and Ushizima, D.},
year = {2017},
keywords = {Teaching, Bootcamps, Cloud computing, Data science, Docker, Pedagogy}
}
@article{holdgraf_encoding_2017,
title = {Encoding and decoding models in cognitive electrophysiology},
volume = {11},
issn = {16625137},
doi = {10.3389/fnsys.2017.00061},
abstract = {© 2017 Holdgraf, Rieger, Micheli, Martin, Knight and Theunissen. Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generated a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aimis to provide a practical understanding of predictivemodeling of human brain data and to propose best-practices in conducting these analyses.},
journal = {Frontiers in Systems Neuroscience},
author = {Holdgraf, Christopher Ramsay and Rieger, J.W. and Micheli, C. and Martin, S. and Knight, R.T. and Theunissen, F.E.},
year = {2017},
keywords = {Decoding models, Encoding models, Electrocorticography (ECoG), Electrophysiology/evoked potentials, Machine learning applied to neuroscience, Natural stimuli, Predictive modeling, Tutorials}
}
@book{ruby,
title = {The Ruby Programming Language},
author = {Flanagan, David and Matsumoto, Yukihiro},
year = {2008},
publisher = {O'Reilly Media}
}