24 Commits
main ... main

Author SHA1 Message Date
2dadd20da8 Update Report 2026-01-19 17:53:47 +01:00
12c11b0634 Delete Rapport_MOPSO_Surrogate.pdf 2026-01-19 17:53:22 +01:00
29b613753f Upload files to "/" 2026-01-19 17:26:56 +01:00
a8323a2633 Upload files to "/" 2026-01-19 17:24:57 +01:00
ce3b7cf527 Update Report 2026-01-19 16:05:18 +01:00
0347ef9fd1 Delete Rapport_MOPSO_Surrogate.pdf 2026-01-19 16:04:28 +01:00
KuMiShi
20d17eb69f Adding slides 2026-01-18 18:25:33 +01:00
ca254e97ac updating main.py 2026-01-18 17:29:46 +01:00
ac5cbbc690 final version of demo notebook 2026-01-18 17:23:22 +01:00
2d7841dc82 Report 2026-01-18 15:56:00 +01:00
4e97f4d1a1 adding plot of pareto 2026-01-18 15:44:21 +01:00
874813e29e updating mopso demo 2026-01-18 14:51:35 +01:00
0f0a4e540d updating particle.py 2026-01-18 14:50:48 +01:00
0dd6770457 updating mopso.py 2026-01-18 14:50:26 +01:00
76cd66c00d Fix README 2026-01-18 14:20:58 +01:00
9c72e8cdd5 Update README.md 2026-01-18 14:10:34 +01:00
d1c2475d1b Adding Power constraints to simulation 2026-01-18 13:43:44 +01:00
b3f51d8363 Merge modifications 2026-01-18 13:05:42 +01:00
698d1ff7dd Main Modifications 2026-01-18 12:27:51 +01:00
c71fde5088 first version of demo notebook 2026-01-18 12:14:58 +01:00
05298908e5 Main update with different scenario 2026-01-17 23:20:15 +01:00
da82ee9185 update without blocking errors 2026-01-17 23:19:50 +01:00
345ac1166c update without blocking errors 2026-01-17 23:19:22 +01:00
41c3134c9f adding the surrogate handler 2026-01-17 23:16:20 +01:00
13 changed files with 1007 additions and 67 deletions

.gitignore

@@ -1,10 +1,3 @@
# Scripts
main.py
# UV Environment
.python-version
.venv
# Datasets
dataset.py
data/capacity.csv
.venv

README.md

@@ -1,15 +1,18 @@
# Mini Project - Metaheuristic Optimization
This is the Git repository for the metaheuristic optimization project of group 9, whose members are AIT MOUSSA Amine, DAANOUNI Siham and DELAMOTTE Clément.
This is the Git repository for the metaheuristic optimization project of group 9, whose members are **AIT MOUSSA Amine, DAANOUNI Siham and DELAMOTTE Clément**.
The chosen topic is **electric vehicle charging optimization** and the implemented algorithm is **Multiple Objectives Particle Swarm Optimization (MOPSO) + Surrogate**. The modelling of the problem can be found in the report.
The chosen topic is **electric vehicle charging optimization** and the implemented algorithm is **Multiple Objectives Particle Swarm Optimization (MOPSO) + Surrogate**. The modelling of the problem can be found in the report and the presentation slides.
For the datasets, we drew on various sources to build our own dataset:
- data/vehicle_capacity.csv: [Car Dataset (2025)](https://www.kaggle.com/datasets/abdulmalik1518/cars-datasets-2025/data)
- data/elec_prices.csv: [RTE France (éco2mix)](https://www.rte-france.com/donnees-publications/eco2mix-donnees-temps-reel/donnees-marche), the data were collected manually for winter 2025 (weeks S2-S5) and summer 2025 (weeks S29-S32)
For the datasets, we drew on various realistic sources to build our own datasets, so as to recover crucial parameters:
- data/vehicle_capacity.csv: [Car Dataset (2025)](https://www.kaggle.com/datasets/abdulmalik1518/cars-datasets-2025/data).
- data/elec_prices.csv: [RTE France (éco2mix)](https://www.rte-france.com/donnees-publications/eco2mix-donnees-temps-reel/donnees-marche), the data were collected manually for winter 2025 (weeks S2-S5) and summer 2025 (weeks S29-S32).
- data/grid_capacity.txt: [RTE France (éco2mix)](https://www.rte-france.com/donnees-publications/eco2mix-donnees-temps-reel/donnees-marche), same procedure as above.
## Installation
The project was built with the *Python package manager* ***UV***; it is preferable to use it for its ease of use. **UV** can be installed from the [official website](https://docs.astral.sh/uv/getting-started/installation/#installing-uv).
To download the project, you can simply use the command `git clone https://gitea.galaxynoliro.fr/KuMiShi/Optim_Metaheuristique.git`, or grab the project's `.zip` file and extract it.
The project was built with the ***Python package manager UV***; it is preferable to use it for its ease of use, **unless you only intend to look at the results in our notebook**. **UV** can be installed from the [official website](https://docs.astral.sh/uv/getting-started/installation/#installing-uv) on any operating system.
**Linux:**
```bash
@@ -27,6 +30,11 @@ winget install --id=astral-sh.uv -e
```
## Usage
You can use the project in two ways:
1. Grab the notebook and follow the cells one by one, with the results precompiled in the file.
2. Run the full project from the source code with **UV**.
To load and run the project without issues, first configure the execution environment as follows:
```bash
@@ -36,8 +44,8 @@ uv venv
# Download the project requirements
uv pip sync uv.lock
# If uv.lock does not exist, you can generate it with the following command:
# If uv.lock does not work properly or does not exist, you can regenerate it from the .toml with the following command:
uv pip compile --upgrade pyproject.toml -o uv.lock
```
Finally, you can run any script with the command `uv run main.py` (main.py can be replaced by any other executable Python script).
Finally, you can run any script with the command `uv run main.py`, where `main.py` can be replaced by any other executable Python script.

Rapport_MOPSO_Surrogate.pdf (new file; binary file not shown)

Slides_presentation.pdf (new file; binary file not shown)

data/elec_prices.csv

@@ -14,7 +14,7 @@ Winter 2025; Summer 2025
12.54; 76.77
0.4; 63.01
60.01; 54.1
1158; 69.52
115.8; 69.52
93.49; 94.16
71.25; 30.5
79.76; 46.2

data/grid_capacity.txt

@@ -0,0 +1,9 @@
                    | Maximum    | Minimum
----------------------------------------------------
Consumption (Winter)| 87 028 MWh | 46 847 MWh
            (Summer)| 52 374 MWh | 29 819 MWh
----------------------------------------------------
Production  (Winter)| 91 341 MWh | 72 926 MWh
            (Summer)| 86 579 MWh | 49 127 MWh
Winter corresponds to S2-S5 and Summer corresponds to S29-S32 (same as prices)

main.py

@@ -1,6 +1,283 @@
def main():
    print("Hello from optim-meta!")
import time
import numpy as np
import matplotlib.pyplot as plt
import copy
from mopso import MOPSO
from surrogate_handler import SurrogateHandler
import pandas as pd
class SmartMOPSO(MOPSO):
    def __init__(self, model_type=None, **kwargs):
        super().__init__(**kwargs)
        # Initialize Surrogate Handler if model_type is provided
        self.use_surrogate = (model_type is not None)
        if self.use_surrogate:
            self.surrogate_handler = SurrogateHandler(model_type)
            # Pre-fill with initial particle data
            for p in self.particles:
                self.surrogate_handler.add_data(p.x, p.f_current[1])

    def iterate(self, prediction_freq:int=10):
        # Main loop (overriding original logic to manage control flow)
        for t in range(self.t):
            self.select_leader()
            for i in range(self.n):
                # Movement
                self.particles[i].update_velocity(self.leader.x, self.c1, self.c2, self.w)
                self.particles[i].update_position()
                self.particles[i].keep_boudaries(self.A_max)
                if (t % prediction_freq != 0) and self.use_surrogate:
                    # Fast exact calculation (f1, f3)
                    f1 = self.particles[i].f1(self.prices)
                    f3 = self.particles[i].f3()
                    # Slow prediction (f2) by using the Surrogate
                    f2_pred = self.surrogate_handler.predict(self.particles[i].x)
                    # Inject scores without running the expensive 'updating_socs'
                    self.particles[i].f_current = [f1, f2_pred, f3]
                else:
                    # Standard calculation (slow and exact)
                    self.particles[i].updating_socs(self.socs, self.capacities)
                    self.particles[i].evaluate(self.prices, self.socs, self.socs_req, self.times)
                self.particles[i].update_best()
            self.update_archive()
if __name__ == "__main__":
    main()
    # Run classic MOPSO, collect data and run training for the model
    def train_surrogate_model(self):
        # Generation of data
        for t in range(self.t):
            self.select_leader()
            for i in range(self.n):
                # Movement
                self.particles[i].update_velocity(self.leader.x, self.c1, self.c2, self.w)
                self.particles[i].update_position()
                self.particles[i].keep_boudaries(self.A_max)
                # Standard calculation (slow and exact)
                self.particles[i].updating_socs(self.socs, self.capacities)
                self.particles[i].evaluate(self.prices, self.socs, self.socs_req, self.times)
                # Capture data for AI training
                self.surrogate_handler.add_data(self.particles[i].x, self.particles[i].f_current[1])
        # End of dataset generation (based on classic MOPSO)
        self.surrogate_handler.train()
def calculate_elec_prices(csv_file:str, sep:str=';'):
    elec_df = pd.read_csv(filepath_or_buffer=csv_file, sep=sep, skipinitialspace=True)
    # Mean of winter and summer 2025 electricity prices (Euros/MWh)
    elec_mean = (elec_df['Winter 2025'].mean() + elec_df['Summer 2025'].mean()) / 2
    # Standard deviation of winter and summer 2025 electricity prices (Euros/MWh)
    elec_std = (elec_df['Winter 2025'].std() + elec_df['Summer 2025'].std()) / 2
    # Conversion from Euros/MWh to Euros/kWh
    elec_mean = elec_mean / 1000
    elec_std = elec_std / 1000
    print(f'Electricity prices:\n - Mean: {elec_mean}€/kWh\n - Std: {elec_std}€/kWh')
    return elec_mean, elec_std
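The division by 1000 above converts the RTE market prices from €/MWh to €/kWh, which matches per-vehicle powers expressed in kW. A quick sanity check of the conversion (the price value below is illustrative, not taken from the dataset):

```python
# Illustrative conversion: €/MWh -> €/kWh (divide by 1000)
price_mwh = 62.5                 # hypothetical mean market price in €/MWh
price_kwh = price_mwh / 1000     # 0.0625 €/kWh
# Cost of charging at 11 kW for one 60-minute tick:
energy_kwh = 11 * (60 / 60)      # kW * hours = kWh
cost = price_kwh * energy_kwh
print(price_kwh, cost)           # 0.0625 0.6875
```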
def generate_capacities(csv_file:str, nb_vehicles:int, seed:int=42, sep:str=';'):
    cap_df = pd.read_csv(filepath_or_buffer=csv_file, sep=sep)
    # Getting back all kinds of battery capacities with unique values
    all_capacities = cap_df['Battery Capacity kwh'].dropna().unique()
    # Extracting random values for generating the array of capacities
    capacities = pd.Series(all_capacities).sample(n=nb_vehicles, random_state=seed)
    print(f'Capacities of vehicles (kWh): {capacities}')
    return capacities.tolist()
def get_power_constants(nb_vehicles:int, nb_consumers:int=67000000):
    # Mean consumption in France in 2025 (estimate according to data/grid_capacity.txt)
    mean_consumption = (87028 + 46847 + 52374 + 29819) / 4
    # Ratio to reduce A_max of the simulation to realistic restrictions
    sim_ratio = nb_vehicles / nb_consumers
    a_max = sim_ratio * mean_consumption
    # For init, uniform charging/discharging for every vehicle
    x_max = a_max / nb_vehicles
    x_min = -x_max
    return a_max, x_max, x_min
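The scaling in `get_power_constants` can be checked by hand. The sketch below re-computes it standalone, using the script's default of 20 simulated vehicles among roughly 67 million consumers (a re-derivation for illustration, not an import of the project code):

```python
# Worked example of the power-constant scaling (standalone re-computation).
nb_vehicles, nb_consumers = 20, 67_000_000
# Mean French consumption estimate from data/grid_capacity.txt (MWh)
mean_consumption = (87028 + 46847 + 52374 + 29819) / 4   # 54017.0
sim_ratio = nb_vehicles / nb_consumers                   # share of consumers simulated
a_max = sim_ratio * mean_consumption                     # network limit for the simulation
x_max = a_max / nb_vehicles                              # uniform per-vehicle bound
print(mean_consumption, round(a_max, 6), round(x_max, 7))
```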
def run_scenario(scenario_name, capacities:list, price_mean:float, price_std:float, model_type=None, n:int=20, t:int=30, w:float=0.4, c1:float=0.3, c2:float=0.2, archive_size:int=10, nb_vehicles:int=10, delta_t:int=60, nb_of_ticks:int=48):
    A_MAX, X_MAX, X_MIN = get_power_constants(nb_vehicles=nb_vehicles)
    print(f"\n--- Launching Scenario: {scenario_name} ---")
    # Simulation parameters
    params = {
        'A_max': A_MAX, 'price_mean': price_mean, 'price_std': price_std,
        'capacities': capacities, 'n': n, 't': t,
        'w': w, 'c1': c1, 'c2': c2,
        'nb_vehicles': nb_vehicles, 'delta_t': delta_t, 'nb_of_ticks': nb_of_ticks,
        'x_min': X_MIN, 'x_max': X_MAX
    }
    # Instantiate the extended class
    optimizer = SmartMOPSO(model_type=model_type, **params)
    if model_type is not None:
        optimizer.train_surrogate_model()
    start_time = time.time()
    # Run simulation
    optimizer.iterate()
    end_time = time.time()
    duration = end_time - start_time
    # Retrieve best f2 (e.g. from the archive)
    best_f2 = min([p.f_best[1] for p in optimizer.archive]) if optimizer.archive else 0
    print(f"Finished in {duration:.2f} seconds.")
    print(f"Best f2 found: {best_f2:.4f}")
    return duration, best_f2, optimizer.archive
# CSV files
elec_price_csv = 'data/elec_prices.csv'
capacity_csv = 'data/vehicle_capacity.csv'

# Global simulation parameters
T = 30         # Number of iterations (for the particles)
W = 0.4        # Inertia (for exploration)
C1 = 0.3       # Individual trust
C2 = 0.2       # Social trust
ARC_SIZE = 10  # Archive size
nb_vehicle = 20
P_MEAN, P_STD = calculate_elec_prices(elec_price_csv)
CAPACITIES = generate_capacities(capacity_csv, nb_vehicles=nb_vehicle)
NB_TICKS = 48
DELTA = 60

results = {
    'MOPSO': [],
    'MLP': [],
    'RF': []
}
nb_particles = [20, 50, 100, 500]
for k in range(len(nb_particles)):
    # 1. Without Surrogate (baseline)
    d1, f1_score, _ = run_scenario(
        "Only MOPSO",
        capacities=CAPACITIES,
        price_mean=P_MEAN,
        price_std=P_STD,
        nb_vehicles=nb_vehicle,  # Important for consistency
        model_type=None,
        n=nb_particles[k]
    )
    results['MOPSO'].append((d1, f1_score))
    # 2. With MLP
    d2, f2_score, _ = run_scenario(
        "With MLP",
        capacities=CAPACITIES,
        price_mean=P_MEAN,
        price_std=P_STD,
        nb_vehicles=nb_vehicle,
        model_type='mlp',
        n=nb_particles[k]
    )
    results['MLP'].append((d2, f2_score))
    # 3. With Random Forest
    d3, f3_score, _ = run_scenario(
        "With Random Forest",
        capacities=CAPACITIES,
        price_mean=P_MEAN,
        price_std=P_STD,
        nb_vehicles=nb_vehicle,
        model_type='rf',
        n=nb_particles[k]
    )
    results['RF'].append((d3, f3_score))

# --- DISPLAY RESULTS ---
print("\n=== SUMMARY ===")
print(f"{'Mode':<15} | {'Time (s)':<10} | {'Best f2':<10}")
print("-" * 45)
for k, v in results.items():
    for i in range(len(nb_particles)):
        print(f"{k:<15}_{nb_particles[i]:<15} | {v[i][0]:<10.2f} | {v[i][1]:<10.4f}")
def plot_time_benchmark(nb_particles_list, results_dict):
    t_mopso = [item[0] for item in results_dict['MOPSO']]
    t_mlp = [item[0] for item in results_dict['MLP']]
    t_rf = [item[0] for item in results_dict['RF']]
    plt.figure(figsize=(10, 6))
    plt.plot(nb_particles_list, t_mopso, 'o-', label='Without AI (MOPSO)', color='#1f77b4', linewidth=2)
    plt.plot(nb_particles_list, t_mlp, 's--', label='With MLP', color='#ff7f0e', linewidth=2)
    plt.plot(nb_particles_list, t_rf, '^-.', label='With Random Forest', color='#2ca02c', linewidth=2)
    plt.title("Execution time by number of particles", fontsize=14, fontweight='bold')
    plt.xlabel("Number of particles", fontsize=12)
    plt.ylabel("Time (s)", fontsize=12)
    plt.grid(True, linestyle=':', alpha=0.7)
    plt.legend(fontsize=11)
    plt.tight_layout()
    plt.show()

plot_time_benchmark(nb_particles, results)

def plot_f2_benchmark(nb_particles_list, results_dict):
    s_mopso = [item[1] for item in results_dict['MOPSO']]
    s_mlp = [item[1] for item in results_dict['MLP']]
    s_rf = [item[1] for item in results_dict['RF']]
    plt.figure(figsize=(10, 6))
    plt.plot(nb_particles_list, s_mopso, 'o-', label='Without AI (MOPSO)', color='#1f77b4', linewidth=2)
    plt.plot(nb_particles_list, s_mlp, 's--', label='With MLP', color='#ff7f0e', linewidth=2)
    plt.plot(nb_particles_list, s_rf, '^-.', label='With Random Forest', color='#2ca02c', linewidth=2)
    plt.title("Best F2 score (convergence) by number of particles", fontsize=14, fontweight='bold')
    plt.xlabel("Number of particles (log scale)", fontsize=12)
    plt.ylabel("Best F2 score", fontsize=12)
    plt.grid(True, linestyle=':', alpha=0.7)
    plt.legend(fontsize=11)
    plt.xscale('log')
    plt.tight_layout()
    plt.show()

plot_f2_benchmark(nb_particles, results)

mopso.py

@@ -1,8 +1,9 @@
import random as rd
from .particle import Particle
from particle import Particle
import copy

class MOPSO():
    def __init__(self, f_weights:list, A_max:float, price_mean:float, price_std:float, capacities:list, n:int, t:int, w:float, c1:float, c2:float, archive_size:int=10, nb_vehicles:int=10, delta_t:int=60, nb_of_ticks:int=72, x_min=-100, x_max=100, v_alpha=0.1, surrogate=False):
    def __init__(self, A_max:float, price_mean:float, price_std:float, capacities:list, n:int, t:int, w:float, c1:float, c2:float, archive_size:int=10, nb_vehicles:int=10, delta_t:int=60, nb_of_ticks:int=72, x_min=-100, x_max=100, v_alpha=0.1, surrogate=False):
        # Constants
        self.n = n  # Number of particles
        self.t = t  # Number of simulation iterations
@@ -10,24 +11,34 @@ class MOPSO():
        self.c1 = c1  # Individual trust
        self.c2 = c2  # Social trust
        self.archive_size = archive_size  # Archive size
        self.f_weights = f_weights  # Weights for the aggregation of all objective functions
        self.surrogate = surrogate  # Using AI calculation
        # Initialisation of the particles' global parameters
        self.A_max = A_max  # Network's power limit
        self.socs, self.socs_req = self.generate_state_of_charges(nb_vehicles, nb_of_ticks)
        self.times = self.generate_times(nb_vehicles, nb_of_ticks, delta_t)
        self.prices = self.generates_prices(price_mean, price_std)  # TODO: Use RTE France prices for random price generation according to number of ticks
        self.times = self.generate_times(nb_vehicles, nb_of_ticks)
        self.prices = self.generates_prices(nb_of_ticks, price_mean, price_std)  # TODO: Use RTE France prices for random price generation according to number of ticks
        self.capacities = capacities
        # Particles of the simulation
        self.particles = [Particle(nb_vehicles=nb_vehicles, nb_of_ticks=nb_of_ticks, delta_t=delta_t, x_min=x_min, x_max=x_max, alpha=v_alpha) for _ in range(self.n)]
        self.particles = [
            Particle(
                socs=copy.deepcopy(self.socs),
                times=self.times,  # Added here
                nb_vehicles=nb_vehicles,
                nb_of_ticks=nb_of_ticks,
                delta_t=delta_t,
                x_min=x_min,
                x_max=x_max,
                alpha=v_alpha
            ) for _ in range(self.n)
        ]
        self.archive = []
        self.leader = self.particles[0]  # It doesn't matter, as the first thing done is choosing a new leader
        for i in range(self.n):
            self.particles[i].evaluate(self.f_weights, self.prices, self.socs, self.socs_req, self.times)
            self.particles[i].evaluate(self.prices, self.socs, self.socs_req, self.times)
        self.update_archive()
def iterate(self):
@@ -59,7 +70,7 @@ class MOPSO():
        # Checking for best positions

    # Generation of arriving and leaving times for every vehicle
    def generate_times(self, nb_vehicles, nb_of_ticks, delta_t):
    def generate_times(self, nb_vehicles, nb_of_ticks):
        times = []
        for _ in range(nb_vehicles):
            # Minimum: we have one tick of charging/discharging during the simulation
@@ -72,52 +83,54 @@ class MOPSO():
    def generates_prices(self, nb_of_ticks:int, mean:float, std:float):
        prices = []
        for _ in range(nb_of_ticks):
            variation = rd.randrange(-(std*10), (std*10) + 1, 1) / 10  # Random float variation
            variation = rd.uniform(-std, std)  # Random float variation
            prices.append(mean + variation)
        return prices

    # Generates the coordinated states of charge, requested and initial (initially duplicated for the other ticks)
    def generate_state_of_charges(self, nb_vehicles:int, nb_of_ticks:int):
        socs = []
        # Desired structure: socs[tick][vehicle], to stay consistent with self.x[tick][vehicle]
        socs = [[0.0 for _ in range(nb_vehicles)] for _ in range(nb_of_ticks)]
        socs_req = []
        # We ensure soc_req is greater than soc_init (percentages turned into floats)
        for _ in range(nb_vehicles):
            soc_init = rd.randrange(0, 100, 1)
            soc_req = rd.randrange(soc_init + 1, 101, 1)
            # Creating states of charge for each tick in time
            for _ in range(nb_of_ticks):
                socs.append(soc_init/100)
        for i in range(nb_vehicles):
            soc_init = rd.randrange(0, 100, 1)
            soc_req = rd.randrange(soc_init + 1, 101, 1)
            # Filling the 2D matrix
            for tick in range(nb_of_ticks):
                socs[tick][i] = soc_init / 100.0
            socs_req.append(soc_req / 100.0)
            # Adding the requested state of charge
            socs_req.append(soc_req/100)
        return socs, socs_req
    # True if a dominates b, else False
    def dominates(a:Particle, b:Particle):
        dominates = (a.f_current[0] >= b.f_current[0]) and (a.f_current[1] >= b.f_current[1]) and (a.f_current[2] >= b.f_current[2])
    def dominates(self, a:Particle, b:Particle):
        dominates = (a.f_current[0] <= b.f_current[0]) and (a.f_current[1] <= b.f_current[1]) and (a.f_current[2] <= b.f_current[2])
        if dominates:
            # Not strict superiority yet
            dominates = (a.f_current[0] > b.f_current[0]) or (a.f_current[1] > b.f_current[1]) or (a.f_current[2] > b.f_current[2])
            dominates = (a.f_current[0] < b.f_current[0]) or (a.f_current[1] < b.f_current[1]) or (a.f_current[2] < b.f_current[2])
        return dominates
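The corrected comparisons use ≤/< because all three objectives are minimized: `a` dominates `b` when `a` is no worse on every objective and strictly better on at least one. A minimal standalone sketch of that rule, using plain score tuples rather than the project's `Particle` class:

```python
def dominates(fa, fb):
    """True if score vector fa Pareto-dominates fb (minimization)."""
    no_worse = all(a <= b for a, b in zip(fa, fb))
    strictly_better = any(a < b for a, b in zip(fa, fb))
    return no_worse and strictly_better

print(dominates([1.0, 2.0, 3.0], [1.0, 2.5, 3.0]))  # True: better on f2, no worse elsewhere
print(dominates([1.0, 2.0, 3.0], [0.5, 2.5, 3.0]))  # False: worse on f1
print(dominates([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # False: equality is not strict dominance
```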
    def update_archive(self):
        candidates = self.archive + self.particles
        length = len(candidates)
        non_dominated = []
        for i in range(length):
            candidate_i = candidates[i]
            dominates = True
            is_dominated = False
            for j in range(length):
                if i!=j:
                if i != j:
                    candidate_j = candidates[j]
                    dominates = dominates and self.dominates(candidate_i, candidate_j)
                    if dominates:
                    if self.dominates(candidate_j, candidate_i):
                        is_dominated = True
                        break
            if not is_dominated:
                non_dominated.append(candidate_i)
        # Keeping only a certain number of solutions depending on archive_size (to avoid overloading the number of potential directions for particles)
        if len(non_dominated) > self.archive_size:
            final_non_dominated = []
            while len(final_non_dominated) < self.archive_size:

mopso_demonstrations.ipynb

File diff suppressed because one or more lines are too long

particle.py

@@ -1,12 +1,14 @@
import random as rd
import copy

class Particle():
    def __init__(self, socs:list, nb_vehicles:int=10, delta_t:int=60, nb_of_ticks:int=72, x_min=-100, x_max=100, alpha=0.1):
    def __init__(self, socs:list, times:list, nb_vehicles:int=10, delta_t:int=60, nb_of_ticks:int=72, x_min=-100, x_max=100, alpha=0.1):
        # Problem-specific attributes
        self.nb_vehicles = nb_vehicles  # Number of vehicles handled for the generation of position x
        self.delta_t = delta_t  # delta_t for update purposes
        self.nb_of_ticks = nb_of_ticks  # Accounting for time evolution of the solution (multiplied by delta_t)
        self.socs = socs  # States of charge for the particle's current position (self.x)
        self.times = times
        # Minima and maxima of a position value
        self.x_min = x_min
@@ -62,18 +64,13 @@ class Particle():
                if self.x[tick][i] > 0:
                    self.x[tick][i] = self.x[tick][i] * 0.9
                    current_power = self.get_current_grid_stress(tick)

    def update_socs(self, capacities):
        for tick in range(self.nb_of_ticks):
            for i in range(self.nb_vehicles-1):
                self.socs[tick][i+1] = self.socs[tick][i] + (self.x[tick][i] / capacities[i])

    def generate_position(self):
        pos = []
        for _ in range(self.nb_of_ticks):
            x_tick = []
            for _ in range(self.nb_vehicles):
                x_tick.append(rd.randrange(self.x_min, self.x_max + 1, 1))
                x_tick.append(rd.uniform(self.x_min, self.x_max))
            pos.append(x_tick)
        return pos
@@ -84,14 +81,15 @@ class Particle():
        for _ in range(self.nb_of_ticks):
            v_tick = []
            for _ in range(self.nb_vehicles):
                v_tick.append(rd.randrange(-vel_coeff, vel_coeff + 1, 1) * self.alpha)
                # v_tick.append(rd.randrange(-vel_coeff, vel_coeff + 1, 1) * self.alpha)
                v_tick.append(rd.uniform(-vel_coeff, vel_coeff) * self.alpha)
            vel.append(v_tick)
        return vel

    # Objective function
    def evaluate(self, elec_prices, socs, socs_req, times):
        f1 = self.f1(elec_prices)
        f2 = self.f2(socs, socs_req, times)
        f2 = self.f2(self.socs, socs_req, times)
        f3 = self.f3()
        # Keeping in memory the evaluation of each objective for domination evaluation
@@ -103,13 +101,13 @@ class Particle():
        self.f_current = f_current

    def update_best(self):
        current_better = (self.f_current[0] >= self.f_best[0]) and (self.f_current[1] >= self.f_best[1]) and (self.f_current[2] >= self.f_best[2])
        current_better = (self.f_current[0] <= self.f_best[0]) and (self.f_current[1] <= self.f_best[1]) and (self.f_current[2] <= self.f_best[2])
        if current_better:
            # Not strict superiority yet
            current_dominates = (self.f_current[0] > self.f_best[0]) or (self.f_current[1] > self.f_best[1]) or (self.f_current[2] > self.f_best[2])
            current_dominates = (self.f_current[0] < self.f_best[0]) or (self.f_current[1] < self.f_best[1]) or (self.f_current[2] < self.f_best[2])
            if current_dominates:
                self.p_best = self.x
                self.f_best = self.f_current
                self.p_best = copy.deepcopy(self.x)
                self.f_best = self.f_current[:]
    # Calculate the price of the electricity consumption in the grid: SUM(t=1 to T)(Epsilon_t * A_t * delta_t)
    def f1(self, elec_prices):
@@ -142,5 +140,18 @@ class Particle():
                current_grid_stress += self.x[tick][i]
        return current_grid_stress

    def updating_socs(self, socs, capacities):
        pass
    def updating_socs(self, initial_socs, capacities):
        # Computing the time evolution
        for tick in range(self.nb_of_ticks - 1):  # Stop at the next-to-last tick to compute the following one
            for i in range(self.nb_vehicles):
                # SoC(t+1) = SoC(t) + (Power(t) * delta_t / Capacity)
                # Careful: x is in kW and delta_t in minutes -> convert to hours (/60) since capacities are in kWh
                energy_added = self.x[tick][i] * (self.delta_t / 60)
                # Update of the next tick based on the current one
                # initial_socs is used as the base if it is a list of lists [tick][vehicle]
                self.socs[tick+1][i] = self.socs[tick][i] + (energy_added / capacities[i])
                # Clamping between 0 and 1 (0% and 100%)
                self.socs[tick+1][i] = max(0.0, min(1.0, self.socs[tick+1][i]))
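The kW-to-kWh conversion in `updating_socs` can be checked by hand: charging at 10 kW for a 60-minute tick adds 10 kWh, i.e. 0.2 of a 50 kWh battery. A self-contained sketch of one update step (values are hypothetical):

```python
def next_soc(soc, power_kw, delta_t_min, capacity_kwh):
    """One SoC step: SoC(t+1) = clamp(SoC(t) + power * (delta_t/60) / capacity)."""
    energy_kwh = power_kw * (delta_t_min / 60)  # kW * hours = kWh
    return max(0.0, min(1.0, soc + energy_kwh / capacity_kwh))

print(round(next_soc(0.5, 10, 60, 50), 3))  # 0.7: 0.5 + 10 kWh / 50 kWh
print(next_soc(0.95, 20, 60, 50))           # 1.0: 0.95 + 0.4 clamped to 100%
print(next_soc(0.1, -10, 60, 50))           # 0.0: discharging clamped at empty
```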

pyproject.toml

@@ -5,5 +5,8 @@ description = "Metaheuristic Optimization Project"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"matplotlib>=3.10.8",
"numpy>=2.4.1",
"pandas>=2.3.3",
"scikit-learn>=1.8.0",
]

surrogate_handler.py

@@ -0,0 +1,40 @@
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestRegressor

class SurrogateHandler:
    def __init__(self, model_type='mlp'):
        self.model_type = model_type
        self.is_trained = False
        self.data_X = []
        self.data_Y = []
        # Model choice
        if model_type == 'mlp':
            self.model = MLPRegressor(hidden_layer_sizes=(100, 50), max_iter=500, random_state=42)
        elif model_type == 'rf':
            # RandomForest is generally more robust "out of the box"
            self.model = RandomForestRegressor(n_estimators=100, random_state=42)
        else:
            raise ValueError("Model type must be 'mlp' or 'rf'")

    def add_data(self, x_matrix, f2_value):
        # Flattening the position matrix to a 1-dimensional vector
        flat_x = np.array(x_matrix).flatten()
        self.data_X.append(flat_x)
        self.data_Y.append(f2_value)

    def train(self):
        if len(self.data_X) < 20:  # No training if there is too little data
            return
        X = np.array(self.data_X)
        y = np.array(self.data_Y)
        self.model.fit(X, y)
        self.is_trained = True

    def predict(self, x_matrix):
        if not self.is_trained:
            return None
        flat_x = np.array(x_matrix).flatten().reshape(1, -1)
        return self.model.predict(flat_x)[0]
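The handler's protocol (collect samples, train only past a data threshold, return `None` before training) can be sketched without scikit-learn by swapping in a trivial mean predictor as a stand-in model. This illustrates the interface only, not the project's MLP/RF models:

```python
class MeanSurrogate:
    """Stand-in following SurrogateHandler's interface, with a mean predictor."""
    def __init__(self, min_samples=20):
        self.min_samples = min_samples
        self.is_trained = False
        self.data_X, self.data_Y = [], []
        self._mean = None

    def add_data(self, x_matrix, f2_value):
        # Flatten the [tick][vehicle] matrix into one feature vector
        flat_x = [v for row in x_matrix for v in row]
        self.data_X.append(flat_x)
        self.data_Y.append(f2_value)

    def train(self):
        if len(self.data_X) < self.min_samples:  # same gating as the real handler
            return
        self._mean = sum(self.data_Y) / len(self.data_Y)
        self.is_trained = True

    def predict(self, x_matrix):
        if not self.is_trained:  # mirrors the None-before-training rule
            return None
        return self._mean

s = MeanSurrogate()
print(s.predict([[0.0]]))       # None: not trained yet
for k in range(25):
    s.add_data([[float(k)]], float(k))
s.train()
print(s.predict([[3.0]]))       # 12.0: mean of 0..24, whatever the input
```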

uv.lock

@@ -1,14 +1,46 @@
# This file was autogenerated by uv via the following command:
# uv pip compile pyproject.toml -o uv.lock
contourpy==1.3.3
# via matplotlib
cycler==0.12.1
# via matplotlib
fonttools==4.61.1
# via matplotlib
joblib==1.5.3
# via scikit-learn
kiwisolver==1.4.9
# via matplotlib
matplotlib==3.10.8
# via optim-meta (pyproject.toml)
numpy==2.4.1
# via pandas
# via
# optim-meta (pyproject.toml)
# contourpy
# matplotlib
# pandas
# scikit-learn
# scipy
packaging==25.0
# via matplotlib
pandas==2.3.3
# via optim-meta (pyproject.toml)
pillow==12.1.0
# via matplotlib
pyparsing==3.3.1
# via matplotlib
python-dateutil==2.9.0.post0
# via pandas
# via
# matplotlib
# pandas
pytz==2025.2
# via pandas
scikit-learn==1.8.0
# via optim-meta (pyproject.toml)
scipy==1.17.0
# via scikit-learn
six==1.17.0
# via python-dateutil
threadpoolctl==3.6.0
# via scikit-learn
tzdata==2025.3
# via pandas