2 Commits
main ... main

Author SHA1 Message Date
b172e93a85 update without blocking errors 2026-01-17 22:49:16 +01:00
7d55ba0840 update without blocking errors 2026-01-17 22:48:11 +01:00
13 changed files with 33 additions and 949 deletions

.gitignore (vendored)

@@ -1,3 +1,10 @@
# Scripts
main.py
# UV Environment
.python-version
.venv
# Datasets
dataset.py
data/capacity.csv

README.md

@@ -1,18 +1,15 @@
# Mini Project - Metaheuristic Optimization
- This is the Git repository for the metaheuristic optimization project of group 9, whose members are **AIT MOUSSA Amine, DAANOUNI Siham and DELAMOTTE Clément**.
+ This is the Git repository for the metaheuristic optimization project of group 9, whose members are AIT MOUSSA Amine, DAANOUNI Siham and DELAMOTTE Clément.
- The chosen topic is **electric vehicle charging optimization** and the algorithm implemented is **Multiple Objectives Particle Swarm Optimization (MOPSO) + Surrogate**. The modelling of the problem will be given in the report and the presentation slides.
+ The chosen topic is **electric vehicle charging optimization** and the algorithm implemented is **Multiple Objectives Particle Swarm Optimization (MOPSO) + Surrogate**. The modelling of the problem will be given in the report.
- For the datasets, we drew on various realistic sources to build our own data sets and recover the crucial parameters:
- - data/vehicle_capacity.csv: [Car Dataset (2025)](https://www.kaggle.com/datasets/abdulmalik1518/cars-datasets-2025/data).
- - data/elec_prices.csv: [RTE France (éco2mix)](https://www.rte-france.com/donnees-publications/eco2mix-donnees-temps-reel/donnees-marche), the data was collected manually for winter 2025 (S2-S5) and summer 2025 (S29-S32).
- - data/grid_capacity.txt: [RTE France (éco2mix)](https://www.rte-france.com/donnees-publications/eco2mix-donnees-temps-reel/donnees-marche), same process as above.
+ For the datasets, we drew on various sources to build our own data set:
+ - data/vehicle_capacity.csv: [Car Dataset (2025)](https://www.kaggle.com/datasets/abdulmalik1518/cars-datasets-2025/data)
+ - data/elec_prices.csv: [RTE France (éco2mix)](https://www.rte-france.com/donnees-publications/eco2mix-donnees-temps-reel/donnees-marche), the data was collected manually for winter 2025 (S2-S5) and summer 2025 (S29-S32)
## Installation
To download the project, you can simply run `git clone https://gitea.galaxynoliro.fr/KuMiShi/Optim_Metaheuristique.git`, or grab the project's `.zip` file and extract it.
- The project was built with the ***UV*** *Python package manager*; it is best to use it for its ease of use, **unless you only intend to look at the results in our notebook**. **UV** can be installed on any operating system via the [official website](https://docs.astral.sh/uv/getting-started/installation/#installing-uv).
+ The project was built with the ***UV*** *Python package manager*; it is best to use it for its ease of use. **UV** can be installed via the [official website](https://docs.astral.sh/uv/getting-started/installation/#installing-uv).
**Linux:**
@@ -30,11 +27,6 @@
```bash
winget install --id=astral-sh.uv -e
```
## Usage
You can use the project in two ways:
1. Take the notebook and follow the cells one by one, with the results precompiled in the file.
2. Run the full project from the source code with **UV**.
To load and run the project without issues, first set up the execution environment as follows:
```bash
@@ -44,8 +36,8 @@ uv venv
# Download the project's requirements
uv pip sync uv.lock
- # If uv.lock does not work properly or does not exist, you can generate it from the .toml with the following command:
+ # If uv.lock does not exist, you can generate it with the following command:
uv pip compile --upgrade pyproject.toml -o uv.lock
```
- Finally, you can run any script with the command `uv run main.py`, where `main.py` can be replaced by any other executable Python script.
+ Finally, you can run any script with the command `uv run main.py` (`main.py` can be replaced by any other executable Python script).

Binary file not shown.

Binary file not shown.

data/elec_prices.csv

@@ -14,7 +14,7 @@ Winter 2025; Summer 2025
12.54; 76.77
0.4; 63.01
60.01; 54.1
- 115.8; 69.52
+ 1158; 69.52
93.49; 94.16
71.25; 30.5
79.76; 46.2

data/grid_capacity.txt (deleted)

@@ -1,9 +0,0 @@
                    | Maximum    | Minimum
---------------------------------------------------
Consumption (Winter)| 87 028 MWh | 46 847 MWh
            (Summer)| 52 374 MWh | 29 819 MWh
---------------------------------------------------
Production  (Winter)| 91 341 MWh | 72 926 MWh
            (Summer)| 86 579 MWh | 49 127 MWh

Winter corresponds to S2-S5 and Summer corresponds to S29-S32 (same as the prices)
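main.py's `get_power_constants` averages these four consumption extremes to size the simulated grid limit; a quick sanity check of that arithmetic:

```python
# The four consumption extremes from data/grid_capacity.txt, in MWh
values = [87028, 46847, 52374, 29819]
mean_consumption = sum(values) / len(values)
print(mean_consumption)  # 54017.0
```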

main.py

@@ -1,283 +1,6 @@
import time
import copy

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from mopso import MOPSO
from surrogate_handler import SurrogateHandler


class SmartMOPSO(MOPSO):
    def __init__(self, model_type=None, **kwargs):
        super().__init__(**kwargs)
        # Initialize the surrogate handler if a model type is provided
        self.use_surrogate = (model_type is not None)
        if self.use_surrogate:
            self.surrogate_handler = SurrogateHandler(model_type)
            # Pre-fill with initial particle data
            for p in self.particles:
                self.surrogate_handler.add_data(p.x, p.f_current[1])

    def iterate(self, prediction_freq: int = 10):
        # Main loop (overriding the original logic to manage control flow)
        for t in range(self.t):
            self.select_leader()
            for i in range(self.n):
                # Movement
                self.particles[i].update_velocity(self.leader.x, self.c1, self.c2, self.w)
                self.particles[i].update_position()
                self.particles[i].keep_boudaries(self.A_max)
                if (t % prediction_freq != 0) and self.use_surrogate:
                    # Fast exact calculation (f1, f3)
                    f1 = self.particles[i].f1(self.prices)
                    f3 = self.particles[i].f3()
                    # Slow objective (f2) predicted by the surrogate
                    f2_pred = self.surrogate_handler.predict(self.particles[i].x)
                    # Inject scores without running the expensive 'updating_socs'
                    self.particles[i].f_current = [f1, f2_pred, f3]
                else:
                    # Standard calculation (slow and exact)
                    self.particles[i].updating_socs(self.socs, self.capacities)
                    self.particles[i].evaluate(self.prices, self.socs, self.socs_req, self.times)
                self.particles[i].update_best()
            self.update_archive()

    # Run classic MOPSO, collect data and train the surrogate model
    def train_surrogate_model(self):
        # Data generation
        for t in range(self.t):
            self.select_leader()
            for i in range(self.n):
                # Movement
                self.particles[i].update_velocity(self.leader.x, self.c1, self.c2, self.w)
                self.particles[i].update_position()
                self.particles[i].keep_boudaries(self.A_max)
                # Standard calculation (slow and exact)
                self.particles[i].updating_socs(self.socs, self.capacities)
                self.particles[i].evaluate(self.prices, self.socs, self.socs_req, self.times)
                # Capture data for AI training
                self.surrogate_handler.add_data(self.particles[i].x, self.particles[i].f_current[1])
        # End of dataset generation (based on classic MOPSO)
        self.surrogate_handler.train()


def main():
    print("Hello from optim-meta!")


def calculate_elec_prices(csv_file: str, sep: str = ';'):
    elec_df = pd.read_csv(filepath_or_buffer=csv_file, sep=sep, skipinitialspace=True)
    # Mean of winter and summer 2025 electricity prices (Euros/MWh)
    elec_mean = (elec_df['Winter 2025'].mean() + elec_df['Summer 2025'].mean()) / 2
    # Standard deviation of winter and summer 2025 electricity prices (Euros/MWh)
    elec_std = (elec_df['Winter 2025'].std() + elec_df['Summer 2025'].std()) / 2
    # Conversion from Euros/MWh to Euros/kWh
    elec_mean = elec_mean / 1000
    elec_std = elec_std / 1000
    print(f'Electricity prices:\n - Mean: {elec_mean} EUR/kWh\n - Std: {elec_std} EUR/kWh')
    return elec_mean, elec_std


def generate_capacities(csv_file: str, nb_vehicles: int, seed: int = 42, sep: str = ';'):
    cap_df = pd.read_csv(filepath_or_buffer=csv_file, sep=sep)
    # Collect every kind of battery capacity, keeping unique values
    all_capacities = cap_df['Battery Capacity kwh'].dropna().unique()
    # Sample random values to build the array of capacities
    capacities = pd.Series(all_capacities).sample(n=nb_vehicles, random_state=seed)
    print(f'Capacities of vehicles (kWh): {capacities}')
    return capacities.tolist()


def get_power_constants(nb_vehicles: int, nb_consumers: int = 67000000):
    # Mean consumption in France in 2025 (estimate based on data/grid_capacity.txt)
    mean_consumption = (87028 + 46847 + 52374 + 29819) / 4
    # Ratio to scale the simulation's A_max down to realistic restrictions
    sim_ratio = nb_vehicles / nb_consumers
    a_max = sim_ratio * mean_consumption
    # For init, uniform charging/discharging for every vehicle
    x_max = a_max / nb_vehicles
    x_min = -x_max
    return a_max, x_max, x_min


def run_scenario(scenario_name, capacities: list, price_mean: float, price_std: float,
                 model_type=None, n: int = 20, t: int = 30, w: float = 0.4, c1: float = 0.3,
                 c2: float = 0.2, archive_size: int = 10, nb_vehicles: int = 10,
                 delta_t: int = 60, nb_of_ticks: int = 48):
    A_MAX, X_MAX, X_MIN = get_power_constants(nb_vehicles=nb_vehicles)
    print(f"\n--- Launching Scenario: {scenario_name} ---")
    # Simulation parameters
    params = {
        'A_max': A_MAX, 'price_mean': price_mean, 'price_std': price_std,
        'capacities': capacities, 'n': n, 't': t,
        'w': w, 'c1': c1, 'c2': c2,
        'nb_vehicles': nb_vehicles, 'delta_t': delta_t, 'nb_of_ticks': nb_of_ticks,
        'x_min': X_MIN, 'x_max': X_MAX
    }
    # Instantiate the extended class
    optimizer = SmartMOPSO(model_type=model_type, **params)
    if model_type is not None:
        optimizer.train_surrogate_model()
    start_time = time.time()
    # Run the simulation
    optimizer.iterate()
    end_time = time.time()
    duration = end_time - start_time
    # Retrieve the best f2 (e.g. from the archive)
    best_f2 = min([p.f_best[1] for p in optimizer.archive]) if optimizer.archive else 0
    print(f"Finished in {duration:.2f} seconds.")
    print(f"Best f2 found: {best_f2:.4f}")
    return duration, best_f2, optimizer.archive


# CSV files
elec_price_csv = 'data/elec_prices.csv'
capacity_csv = 'data/vehicle_capacity.csv'
# Global simulation parameters
T = 30          # Number of iterations (for the particles)
W = 0.4         # Inertia (for exploration)
C1 = 0.3        # Individual trust
C2 = 0.2        # Social trust
ARC_SIZE = 10   # Archive size
nb_vehicle = 20
P_MEAN, P_STD = calculate_elec_prices(elec_price_csv)
CAPACITIES = generate_capacities(capacity_csv, nb_vehicles=nb_vehicle)
NB_TICKS = 48
DELTA = 60

results = {
    'MOPSO': [],
    'MLP': [],
    'RF': []
}
nb_particles = [20, 50, 100, 500]
for k in range(len(nb_particles)):
    # 1. Without surrogate (baseline)
    d1, f1_score, _ = run_scenario(
        "Only MOPSO",
        capacities=CAPACITIES,
        price_mean=P_MEAN,
        price_std=P_STD,
        nb_vehicles=nb_vehicle,  # Important for consistency
        model_type=None,
        n=nb_particles[k]
    )
    results['MOPSO'].append((d1, f1_score))
    # 2. With MLP
    d2, f2_score, _ = run_scenario(
        "With MLP",
        capacities=CAPACITIES,
        price_mean=P_MEAN,
        price_std=P_STD,
        nb_vehicles=nb_vehicle,
        model_type='mlp',
        n=nb_particles[k]
    )
    results['MLP'].append((d2, f2_score))
    # 3. With Random Forest
    d3, f3_score, _ = run_scenario(
        "With Random Forest",
        capacities=CAPACITIES,
        price_mean=P_MEAN,
        price_std=P_STD,
        nb_vehicles=nb_vehicle,
        model_type='rf',
        n=nb_particles[k]
    )
    results['RF'].append((d3, f3_score))

# --- DISPLAY RESULTS ---
print("\n=== SUMMARY ===")
print(f"{'Mode':<15} | {'Time (s)':<10} | {'Best f2':<10}")
print("-" * 45)
for k, v in results.items():
    for i in range(len(nb_particles)):
        print(f"{k:<15}_{nb_particles[i]:<15} | {v[i][0]:<10.2f} | {v[i][1]:<10.4f}")


def plot_time_benchmark(nb_particles_list, results_dict):
    t_mopso = [item[0] for item in results_dict['MOPSO']]
    t_mlp = [item[0] for item in results_dict['MLP']]
    t_rf = [item[0] for item in results_dict['RF']]
    plt.figure(figsize=(10, 6))
    plt.plot(nb_particles_list, t_mopso, 'o-', label='No AI (MOPSO)', color='#1f77b4', linewidth=2)
    plt.plot(nb_particles_list, t_mlp, 's--', label='With MLP', color='#ff7f0e', linewidth=2)
    plt.plot(nb_particles_list, t_rf, '^-.', label='With Random Forest', color='#2ca02c', linewidth=2)
    plt.title("Execution time vs. number of particles", fontsize=14, fontweight='bold')
    plt.xlabel("Number of particles", fontsize=12)
    plt.ylabel("Time (s)", fontsize=12)
    plt.grid(True, linestyle=':', alpha=0.7)
    plt.legend(fontsize=11)
    plt.tight_layout()
    plt.show()


plot_time_benchmark(nb_particles, results)


def plot_f2_benchmark(nb_particles_list, results_dict):
    s_mopso = [item[1] for item in results_dict['MOPSO']]
    s_mlp = [item[1] for item in results_dict['MLP']]
    s_rf = [item[1] for item in results_dict['RF']]
    plt.figure(figsize=(10, 6))
    plt.plot(nb_particles_list, s_mopso, 'o-', label='No AI (MOPSO)', color='#1f77b4', linewidth=2)
    plt.plot(nb_particles_list, s_mlp, 's--', label='With MLP', color='#ff7f0e', linewidth=2)
    plt.plot(nb_particles_list, s_rf, '^-.', label='With Random Forest', color='#2ca02c', linewidth=2)
    plt.title("Best F2 score (convergence) vs. number of particles", fontsize=14, fontweight='bold')
    plt.xlabel("Number of particles (log scale)", fontsize=12)
    plt.ylabel("Best F2 score", fontsize=12)
    plt.grid(True, linestyle=':', alpha=0.7)
    plt.legend(fontsize=11)
    plt.xscale('log')
    plt.tight_layout()
    plt.show()


plot_f2_benchmark(nb_particles, results)

if __name__ == "__main__":
    main()
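The `prediction_freq` test in `SmartMOPSO.iterate` alternates exact evaluations with surrogate predictions: every `prediction_freq`-th iteration recomputes f2 exactly, and everything in between uses the trained model. A standalone sketch of that schedule (the function name is illustrative, not from the project):

```python
def evaluation_schedule(iterations: int, prediction_freq: int = 10):
    """Label each iteration 'exact' or 'surrogate', mirroring the
    `t % prediction_freq != 0` test in SmartMOPSO.iterate."""
    return ['exact' if t % prediction_freq == 0 else 'surrogate'
            for t in range(iterations)]

# With the project's defaults (30 iterations, freq 10), only 3 stay exact
schedule = evaluation_schedule(30)
print(schedule.count('exact'))  # 3 (t = 0, 10, 20)
```

This is where the speed-up comes from: the expensive `updating_socs` call runs on 3 of 30 iterations, while the remaining 27 pay only the cost of a model prediction.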

mopso.py

@@ -1,9 +1,9 @@
import random as rd
- from particle import Particle
+ from .particle import Particle
import copy
class MOPSO():
-   def __init__(self, A_max:float, price_mean:float, price_std:float, capacities:list, n:int, t:int, w:float, c1:float, c2:float, archive_size:int=10, nb_vehicles:int=10, delta_t:int=60, nb_of_ticks:int=72, x_min=-100, x_max=100, v_alpha=0.1, surrogate=False):
+   def __init__(self, f_weights:list, A_max:float, price_mean:float, price_std:float, capacities:list, n:int, t:int, w:float, c1:float, c2:float, archive_size:int=10, nb_vehicles:int=10, delta_t:int=60, nb_of_ticks:int=72, x_min=-100, x_max=100, v_alpha=0.1, surrogate=False):
        # Constants
        self.n = n  # Number of particles
        self.t = t  # Number of simulation iterations
@@ -11,13 +11,14 @@ class MOPSO():
        self.c1 = c1  # Individual trust
        self.c2 = c2  # Social trust
        self.archive_size = archive_size  # Archive size
+       self.f_weights = f_weights  # Weights for aggregating all the objective functions
        self.surrogate = surrogate  # Using AI calculation
        # Initialisation of the particles' global parameters
        self.A_max = A_max  # Network's power limit
        self.socs, self.socs_req = self.generate_state_of_charges(nb_vehicles, nb_of_ticks)
-       self.times = self.generate_times(nb_vehicles, nb_of_ticks)
+       self.times = self.generate_times(nb_vehicles, nb_of_ticks, delta_t)
        self.prices = self.generates_prices(nb_of_ticks, price_mean, price_std)  # TODO: Use RTE France prices for random price generation according to the number of ticks
        self.capacities = capacities
@@ -70,7 +71,7 @@ class MOPSO():
    # Checking for best positions

    # Generation of arrival and departure times for every vehicle
-   def generate_times(self, nb_vehicles, nb_of_ticks):
+   def generate_times(self, nb_vehicles, nb_of_ticks, delta_t):
        times = []
        for _ in range(nb_vehicles):
            # Minimum: there is at least one tick of charging/discharging during the simulation
@@ -83,7 +84,7 @@ class MOPSO():
    def generates_prices(self, nb_of_ticks:int, mean:float, std:float):
        prices = []
        for _ in range(nb_of_ticks):
-           variation = rd.uniform(-std, std)  # Random float variation
+           variation = rd.randrange(int(-std * 10), int(std * 10) + 1, 1) / 10  # Random variation in 0.1 steps (randrange needs integer bounds)
            prices.append(mean + variation)
        return prices
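The switch from `rd.uniform` to `rd.randrange` quantizes the price noise to 0.1 steps, which is why the bounds are scaled by 10 (note that `random.randrange` only accepts integer arguments). A small standalone illustration of that draw, not project code:

```python
import random as rd

rd.seed(0)
std = 2.5
# Discretized variations in [-std, std] with 0.1 resolution
draws = [rd.randrange(int(-std * 10), int(std * 10) + 1, 1) / 10
         for _ in range(1000)]

assert all(-std <= v <= std for v in draws)
# Every draw lands on a multiple of 0.1
assert all(abs(v * 10 - round(v * 10)) < 1e-9 for v in draws)
```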

File diff suppressed because one or more lines are too long

particle.py

@@ -64,13 +64,15 @@ class Particle():
                if self.x[tick][i] > 0:
                    self.x[tick][i] = self.x[tick][i] * 0.9
            current_power = self.get_current_grid_stress(tick)

    def generate_position(self):
        pos = []
        for _ in range(self.nb_of_ticks):
            x_tick = []
            for _ in range(self.nb_vehicles):
-               x_tick.append(rd.uniform(self.x_min, self.x_max))
+               x_tick.append(rd.randrange(int(self.x_min), int(self.x_max) + 1, 1))  # randrange needs integer bounds
            pos.append(x_tick)
        return pos
@@ -81,8 +83,7 @@ class Particle():
        for _ in range(self.nb_of_ticks):
            v_tick = []
            for _ in range(self.nb_vehicles):
-               # v_tick.append(rd.randrange(-vel_coeff, vel_coeff + 1, 1) * self.alpha)
-               v_tick.append(rd.uniform(-vel_coeff, vel_coeff) * self.alpha)
+               v_tick.append(rd.randrange(-vel_coeff, vel_coeff + 1, 1) * self.alpha)
            vel.append(v_tick)
        return vel
@@ -146,12 +147,10 @@ class Particle():
        for tick in range(self.nb_of_ticks - 1):  # Stop at the second-to-last tick so the next one can be computed
            for i in range(self.nb_vehicles):
                # SoC(t+1) = SoC(t) + (power(t) * delta_t / capacity)
                # Careful: x is in kW and delta_t in minutes -> convert to hours (/60) since capacity is in kWh
                energy_added = self.x[tick][i] * (self.delta_t / 60)
                # Update the next tick based on the current one
                # initial_socs is used as the base if it is a list of lists [tick][vehicle]
                self.socs[tick + 1][i] = self.socs[tick][i] + (energy_added / capacities[i])
                # Clamp between 0 and 1 (0% and 100%)
                self.socs[tick + 1][i] = max(0.0, min(1.0, self.socs[tick + 1][i]))
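The SoC update above can be checked by hand: with an illustrative 40 kWh battery charging at 10 kW over a 60-minute tick, the state of charge rises by 10 * (60/60) / 40 = 0.25. A minimal sketch of the same rule (names are ours, not the project's):

```python
def next_soc(soc: float, power_kw: float, delta_t_min: float, capacity_kwh: float) -> float:
    """SoC(t+1) = SoC(t) + power * (delta_t / 60) / capacity, clamped to [0, 1]."""
    energy_added = power_kw * (delta_t_min / 60)  # kWh transferred during the tick
    return max(0.0, min(1.0, soc + energy_added / capacity_kwh))

print(next_soc(0.5, 10, 60, 40))   # 0.75
print(next_soc(0.9, 10, 60, 40))   # 1.0 (clamped at full)
print(next_soc(0.5, -10, 60, 40))  # 0.25 (discharging)
```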

pyproject.toml

@@ -5,8 +5,5 @@ description = "Metaheuristic Optimization Project"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"matplotlib>=3.10.8",
"numpy>=2.4.1",
"pandas>=2.3.3",
"scikit-learn>=1.8.0",
]

surrogate_handler.py (deleted)

@@ -1,40 +0,0 @@
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestRegressor


class SurrogateHandler:
    def __init__(self, model_type='mlp'):
        self.model_type = model_type
        self.is_trained = False
        self.data_X = []
        self.data_Y = []
        # Model choice
        if model_type == 'mlp':
            self.model = MLPRegressor(hidden_layer_sizes=(100, 50), max_iter=500, random_state=42)
        elif model_type == 'rf':
            # RandomForest is generally more robust "out of the box"
            self.model = RandomForestRegressor(n_estimators=100, random_state=42)
        else:
            raise ValueError("Model type must be 'mlp' or 'rf'")

    def add_data(self, x_matrix, f2_value):
        # Flatten the position matrix to a one-dimensional vector
        flat_x = np.array(x_matrix).flatten()
        self.data_X.append(flat_x)
        self.data_Y.append(f2_value)

    def train(self):
        if len(self.data_X) < 20:  # No training if there is too little data
            return
        X = np.array(self.data_X)
        y = np.array(self.data_Y)
        self.model.fit(X, y)
        self.is_trained = True

    def predict(self, x_matrix):
        if not self.is_trained:
            return None
        flat_x = np.array(x_matrix).flatten().reshape(1, -1)
        return self.model.predict(flat_x)[0]
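The handler's add_data/train/predict cycle boils down to flattening each position matrix into one feature row and fitting a scikit-learn regressor on the collected rows. A condensed, self-contained version of that flow, using synthetic data and a smaller forest for speed:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
model = RandomForestRegressor(n_estimators=10, random_state=42)

# Each sample: a (ticks x vehicles) position matrix flattened to one row,
# labelled with a scalar objective value (a synthetic stand-in for f2 here)
X = [rng.uniform(-5, 5, size=(4, 3)).flatten() for _ in range(30)]
y = [float(np.sum(np.abs(x))) for x in X]

model.fit(np.array(X), np.array(y))

# Predict f2 for a new position matrix, as SurrogateHandler.predict does
new_matrix = rng.uniform(-5, 5, size=(4, 3))
pred = model.predict(new_matrix.flatten().reshape(1, -1))[0]
```

Since the synthetic targets are all positive sums, the forest's prediction is positive too; swapping in `MLPRegressor` gives the handler's 'mlp' mode with the same interface.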

uv.lock (generated)

@@ -1,46 +1,14 @@
# This file was autogenerated by uv via the following command:
# uv pip compile pyproject.toml -o uv.lock
contourpy==1.3.3
# via matplotlib
cycler==0.12.1
# via matplotlib
fonttools==4.61.1
# via matplotlib
joblib==1.5.3
# via scikit-learn
kiwisolver==1.4.9
# via matplotlib
matplotlib==3.10.8
# via optim-meta (pyproject.toml)
numpy==2.4.1
# via
# optim-meta (pyproject.toml)
# contourpy
# matplotlib
# pandas
# scikit-learn
# scipy
packaging==25.0
# via matplotlib
# via pandas
pandas==2.3.3
# via optim-meta (pyproject.toml)
pillow==12.1.0
# via matplotlib
pyparsing==3.3.1
# via matplotlib
python-dateutil==2.9.0.post0
# via
# matplotlib
# pandas
# via pandas
pytz==2025.2
# via pandas
scikit-learn==1.8.0
# via optim-meta (pyproject.toml)
scipy==1.17.0
# via scikit-learn
six==1.17.0
# via python-dateutil
threadpoolctl==3.6.0
# via scikit-learn
tzdata==2025.3
# via pandas