Rust : async
Simple example
Cargo.toml:
[dependencies]
futures = { version = "0.3.*" }
tokio = { version = "0.2.*", features = ["full"] }
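With a more recent toolchain, the dependencies would look roughly like this instead (note that tokio::prelude was removed in tokio 1.0, so the use tokio::prelude::*; line below would simply be dropped):
[dependencies]
futures = "0.3"
tokio = { version = "1", features = ["full"] }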
Explanations in the code:
use futures::prelude::*;
use tokio::prelude::*;
use tokio::task;
//The principle of asynchronous programming is to get out of the usual way of programming,
//where the program executes from start to finish, blocking on each task. If we have a web server
//for example, we may need to answer 10k requests at the same time.
//We could use threads, and Rust's fearless concurrency; but asynchronous programming is another
//way of doing things.
//
//So, we have asynchronous functions that we can launch. They return a handle, and we can keep on
//executing code, coming back to the handle to see if it has yielded anything. Only inside these
//asynchronous functions can we use asynchronous things.
//
//In Rust, we need two things to do that: the "futures" crate, and a runtime, since Rust does not
//come with one by default. The most common one is Tokio, which we'll use here.
//
//We can then do 3 basic operations: start the runtime, spawn a future, and spawn blocking,
//CPU-intensive operations.
//
//Of course, most work will happen in futures this way. We need to be able to offload the execution
//of CPU-intensive things to other threads so we don't block our main thread at some point,
//otherwise it is all useless.
type Result<T> = std::result::Result<T, Box<dyn std::error::Error + Send + Sync>>;
//1 - Starting the runtime. This is the shortest version, using the #[tokio::main] attribute macro.
#[tokio::main]
async fn main() {
    //2.2 Here we finally get the result of it all.
    app().await.unwrap();
    //3.2 Here we execute our CPU-intensive future.
    otherapp().await.unwrap();
}
//1 bis - This is a more manual way of doing the same thing:
//create the runtime by hand, build the future, and block on it to get the answer.
fn othermain() {
    //runtime
    let mut rt = tokio::runtime::Runtime::new().unwrap();
    //future
    let future = app();
    //blocking
    rt.block_on(future);
}
async fn our_async_program() {
    println!("Hello world");
}
//2.1 - Here we spawn a future.
//We spawn it using 'task' so we can have multiple futures running at once.
//Note that this function is asynchronous, too.
async fn app() -> Result<()> {
    let join = task::spawn(our_async_program());
    let _res = join.await?;
    Ok(())
}
//3.1 - Spawning CPU-intensive tasks.
//This fn is CPU-intensive.
fn fib_cpu_intensive(n: u32) -> u32 {
    match n {
        0 => 0,
        1 => 1,
        n => fib_cpu_intensive(n - 1) + fib_cpu_intensive(n - 2),
    }
}
//So we run it with task::spawn_blocking, which moves the closure to a dedicated thread pool
//and gives us back a future. Note that this function is also asynchronous.
async fn otherapp() -> Result<()> {
    let threadpool_future = task::spawn_blocking(|| fib_cpu_intensive(30));
    //And await it.
    threadpool_future.await?;
    Ok(())
}
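The comment on app() mentions having multiple futures running at once. As a complement, here is a minimal sketch of that idea, assuming the same futures 0.3 / tokio 0.2 setup as above; fetch_one is a hypothetical helper invented only for the illustration:
use futures::future::join_all;
use tokio::task;
//A hypothetical asynchronous unit of work, used only for this illustration.
async fn fetch_one(id: u32) -> u32 {
    //Pretend this does some I/O before returning a value.
    id * 2
}
async fn run_many() {
    //Each task::spawn returns a JoinHandle immediately, so all the futures
    //run concurrently on the runtime.
    let handles: Vec<_> = (0..10u32).map(|id| task::spawn(fetch_one(id))).collect();
    //join_all awaits every handle and collects the results in order.
    let results = join_all(handles).await;
    for r in results {
        //Each element is a Result, because a spawned task may panic.
        println!("got {}", r.unwrap());
    }
}
Calling run_many().await from main (or any other async context) would print the ten results once every spawned task has finished.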