Communicating from the hooked syscall
In this post, we explore how to establish communication from a hooked syscall using Interprocess Communication (IPC) in Rust. We cover both Tokio-based async IPC for usermode components and blocking IPC for the hooked syscall inside an injected DLL.
Intro
Previously, we explored how to hook syscalls for our EDR. Now we need a way to extract and communicate that data without disrupting the hooked process. This post covers Interprocess Communication (IPC) in Rust, using both async (Tokio) and blocking approaches tailored for injected DLLs.
If you like what you see, or want to check out the code in action in the full repository, you can find the project on my GitHub.
IPC
IPC (Interprocess Communication) is a mechanism that allows processes to send data to one another at runtime. On Windows, we can use Named Pipes: objects managed by the operating system that let a client write to a pipe server by specifying its name under the pipe namespace. For this example, I’ll use the following identifier for the named pipe:
pub static PIPE_FOR_INJECTED_DLL: &str = r"\\.\pipe\sanctum_pipe_injected_dll";
The IPC system has two sides: a server, which listens for connections and reads incoming data, and a client, which initiates the connection and writes data to the pipe - a similar model to web networking.
IPC in Rust
Tokio has a nice implementation of named pipes which makes life really easy in an async context. We will use the Tokio implementation of IPC for our server, which listens in the usermode engine (the component that receives actions from the GUI, process telemetry, the driver, etc.), as we want this side to be truly asynchronous - nothing should block or hang threads whilst other parts of the engine are running.
Turning our attention to the hooked syscall, however, we cannot introduce Tokio there: we would be bringing along the whole async runtime it requires, as well as having to mark our callback functions as async. I haven’t tried it, but it sounds like a low-level disaster. In some ways it should not matter whether our injected DLL itself carries the async runtime, but given the low-level control we require over our callback routines, I do not want to add an additional layer of complication.
But this is no matter; a blocking IPC request suits our needs just fine. For this, we can use the std::fs route to named pipes: writing to a named pipe is just like writing to a file, so if you are familiar with that, then IPC is a breeze.
The server
Let’s start with implementing the server. Our server will run in the um_engine part of the project, which introduces its own complication - that process runs as an administrator. This means we need to set a SECURITY_ATTRIBUTES structure to allow reads and writes from any user. I’m not going to discuss it in depth here, as it’s a little convoluted, but if you are interested, check my source file on GitHub.
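For a flavour of what that involves, here is a minimal sketch of a create_security_attributes helper, assuming a recent version of the windows crate (exact signatures vary between crate versions, and the real implementation in the repo differs):

use std::ffi::c_void;
use windows::Win32::Foundation::BOOL;
use windows::Win32::Security::{
    InitializeSecurityDescriptor, SetSecurityDescriptorDacl, PSECURITY_DESCRIPTOR,
    SECURITY_ATTRIBUTES, SECURITY_DESCRIPTOR,
};

/// Build a SECURITY_ATTRIBUTES whose descriptor carries a NULL DACL, which grants
/// access to all users. Both allocations are deliberately leaked so the pointers
/// remain valid for the lifetime of the pipe server.
fn create_security_attributes() -> *mut SECURITY_ATTRIBUTES {
    // Zeroed is fine here; InitializeSecurityDescriptor sets the descriptor up
    let sd: *mut SECURITY_DESCRIPTOR = Box::into_raw(Box::new(unsafe { std::mem::zeroed() }));
    unsafe {
        InitializeSecurityDescriptor(
            PSECURITY_DESCRIPTOR(sd as *mut c_void),
            1, // SECURITY_DESCRIPTOR_REVISION
        )
        .expect("could not initialise security descriptor");
        // DACL present but NULL = everyone has access. Convenient for a demo,
        // far too permissive for production.
        SetSecurityDescriptorDacl(PSECURITY_DESCRIPTOR(sd as *mut c_void), BOOL::from(true), None, BOOL::from(false))
            .expect("could not set NULL DACL");
    }

    Box::into_raw(Box::new(SECURITY_ATTRIBUTES {
        nLength: std::mem::size_of::<SECURITY_ATTRIBUTES>() as u32,
        lpSecurityDescriptor: sd as *mut c_void,
        bInheritHandle: BOOL::from(false),
    }))
}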
Starting the IPC server is as simple as:
use tokio::net::windows::named_pipe::ServerOptions;
let mut server = unsafe {
    ServerOptions::new()
        .first_pipe_instance(true)
        .create_with_security_attributes_raw(PIPE_FOR_INJECTED_DLL, sa_ptr)
        .expect("[-] Unable to create named pipe server for injected DLL")
};
In this example, PIPE_FOR_INJECTED_DLL is the string referenced above (r"\\.\pipe\sanctum_pipe_injected_dll"), and sa_ptr is a pointer to the SECURITY_ATTRIBUTES object. If you are creating a new IPC server and you don’t care about the security attributes, you can instead do:
use tokio::net::windows::named_pipe::ServerOptions;

// create() is safe; only the raw security-attributes variant requires unsafe
let mut server = ServerOptions::new()
    .first_pipe_instance(true)
    .create(PIPE_FOR_INJECTED_DLL)
    .expect("[-] Unable to create named pipe server for injected DLL");
Now we enter the main loop: we wait for a new connection, and once we have one we construct a new server instance before handing the connected one off to a task - this ensures there is constant uptime on the IPC server, with a pipe instance always listening. In the snippet below, SECURITY_PTR is just an atomic holding the pointer to the SECURITY_ATTRIBUTES.
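For reference, SECURITY_PTR can be as simple as the following (a hypothetical definition consistent with how it is used below; the real one lives in the repo):

use std::ffi::c_void;
use std::sync::atomic::AtomicPtr;

// Shared holder for the SECURITY_ATTRIBUTES pointer, so each new pipe
// instance can be created with the same security attributes
static SECURITY_PTR: AtomicPtr<c_void> = AtomicPtr::new(std::ptr::null_mut());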
let _server = tokio::spawn(async move {
    loop {
        // Wait for a connection
        println!("Waiting for IPC message from DLL");
        server.connect().await.expect("Could not get a client connection for injected DLL ipc");
        let connected_client = server;

        // Construct the next server before sending the one we have onto a task,
        // which ensures the server isn't closed
        let sec_ptr = SECURITY_PTR.load(std::sync::atomic::Ordering::SeqCst);
        if sec_ptr.is_null() {
            panic!("Security pointer was null for IPC server.");
        }
        // SAFETY: null pointer checked above
        server = unsafe {
            ServerOptions::new()
                .create_with_security_attributes_raw(PIPE_FOR_INJECTED_DLL, sec_ptr)
                .expect("Unable to create new instance of IPC server for injected DLL")
        };

        let _client = tokio::spawn(async move {
            println!("Hello from the client! {:?}", connected_client);
            // todo: use the client
        });
    }
});
And as easy as that, our IPC server is ready to go.
The client
Now for the client code, which resides within a DLL injected into another process and runs from a function acting as the callback for our syscall hook.
As discussed above, we don’t want to bring in async / Tokio, so we turn to std::fs to write to the named pipe as if it were a file. One thing to note is that when we try to write, we may get an error telling us the pipe is already in use / busy; in that case we want to spin until the pipe is free so we can write to it.
In a nutshell, this looks as follows:
let mut client = loop {
    match OpenOptions::new().read(true).write(true).open(PIPE_FOR_INJECTED_DLL) {
        Ok(client) => break client,
        // If the pipe is busy, try again after a wait
        Err(e) if e.raw_os_error() == Some(ERROR_PIPE_BUSY.0 as _) => (),
        Err(e) => panic!("An error occurred talking to the engine, {e}"),
    }
    sleep(Duration::from_millis(50));
};

let data = format!("PID that the process is trying to open a handle to is: {}", pid);

if let Err(e) = client.write_all(data.as_bytes()) {
    panic!("Error writing to named pipe to UM Engine. {e}");
}
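For completeness, that snippet assumes imports along these lines, with ERROR_PIPE_BUSY coming from the windows crate:

use std::fs::OpenOptions;
use std::io::Write;
use std::thread::sleep;
use std::time::Duration;

use windows::Win32::Foundation::ERROR_PIPE_BUSY;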
And in the context of the full callback function, it looks like this:
#[unsafe(no_mangle)]
unsafe extern "system" fn open_process(
    process_handle: HANDLE,
    desired_access: u32,
    object_attrs: *mut c_void,
    client_id: *mut CLIENT_ID,
) {
    if !client_id.is_null() {
        let pid = unsafe { (*client_id).UniqueProcess.0 } as u32;

        // Debug output so we can see the hook firing
        let x = format!("pid: {}, proc hand: {:?}\0", pid, process_handle);
        unsafe { MessageBoxA(None, PCSTR::from_raw(x.as_ptr()), PCSTR::from_raw(x.as_ptr()), MB_OK) };

        // Send information to the engine via IPC. We do not use Tokio, as we don't want
        // the async runtime in our processes and it would not be FFI safe, so we use the
        // standard library to achieve this.
        let mut client = loop {
            match OpenOptions::new().read(true).write(true).open(PIPE_FOR_INJECTED_DLL) {
                Ok(client) => break client,
                // If the pipe is busy, try again after a wait
                Err(e) if e.raw_os_error() == Some(ERROR_PIPE_BUSY.0 as _) => (),
                Err(e) => panic!("An error occurred talking to the engine, {e}"), // todo: is this acceptable?
            }
            sleep(Duration::from_millis(50));
        };

        let data = format!("PID that the process is trying to open a handle to is: {}", pid);

        if let Err(e) = client.write_all(data.as_bytes()) {
            panic!("Error writing to named pipe to UM Engine. {e}");
        }
    }

    // Proxy the original syscall. 0x26 is the SSN of NtOpenProcess on the build I am
    // testing against - this is version-dependent.
    let ssn: u32 = 0x26;
    unsafe {
        asm!(
            "mov r10, rcx",
            "syscall",
            // Declare the registers through the asm macro so the compiler is aware of
            // their use - loading them by hand caused some instability.
            inout("rax") ssn => _,              // SSN in, NTSTATUS out (discarded)
            inout("rcx") process_handle.0 => _, // rcx is clobbered by the syscall instruction
            in("rdx") desired_access,
            in("r8") object_attrs,
            in("r9") client_id,
            out("r10") _,                       // written by the mov above
            out("r11") _,                       // syscall saves rflags in r11
            options(nostack),
        );
    }
}
Using meaningful data
Now that we have a basic send and receive set up, we have to actually send meaningful data between the two! Luckily for us, this is really straightforward thanks to Rust’s type system and the serde_json crate. We can use the function to_vec, which will serialise our struct into a vector of bytes that we can then write to the named pipe, and deserialise it at the other end with serde_json’s from_slice function, which takes a slice of bytes and deserialises the data back into a struct.
In practice, this looks as follows. We can share a crate of types between our client project and server project, defining the message we wish to send and receive:
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Syscall {
    OpenProcess(OpenProcessData),
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OpenProcessData {
    pub pid: u32,
}
And on the client we construct the message and simply use to_vec before sending the data:

let data = Syscall::OpenProcess(OpenProcessData { pid });
let message_data = to_vec(&data).unwrap();

if let Err(e) = client.write_all(&message_data) {
    panic!("Error writing to named pipe to UM Engine. {e}");
}
And we modify our server code now to handle the incoming data (I’ll show the full function for the exact context):
use std::ffi::c_void;

use serde_json::from_slice;
use tokio::io::AsyncReadExt;
use tokio::net::windows::named_pipe::ServerOptions;
// `Syscall` is the shared enum defined above

pub async fn run_ipc_for_injected_dll() {
    // Store the pointer in the atomic so we can safely access it across threads
    let sa_ptr = create_security_attributes() as *mut c_void;
    SECURITY_PTR.store(sa_ptr, std::sync::atomic::Ordering::SeqCst);

    // SAFETY: sa_ptr was created above and remains valid for the lifetime of the program
    let mut server = unsafe {
        ServerOptions::new()
            .first_pipe_instance(true)
            .create_with_security_attributes_raw(PIPE_FOR_INJECTED_DLL, sa_ptr)
            .expect("[-] Unable to create named pipe server for injected DLL")
    };

    // The listener runs as a detached task; this function returns once it is spawned
    let _server = tokio::spawn(async move {
        loop {
            // Wait for a connection
            server.connect().await.expect("Could not get a client connection for injected DLL ipc");
            let mut connected_client = server;

            // Construct the next server before sending the one we have onto a task,
            // which ensures the server isn't closed
            let sec_ptr = SECURITY_PTR.load(std::sync::atomic::Ordering::SeqCst);
            if sec_ptr.is_null() {
                panic!("Security pointer was null for IPC server.");
            }
            // SAFETY: null pointer checked above
            server = unsafe {
                ServerOptions::new()
                    .create_with_security_attributes_raw(PIPE_FOR_INJECTED_DLL, sec_ptr)
                    .expect("Unable to create new instance of IPC server for injected DLL")
            };

            let _client = tokio::spawn(async move {
                let mut buffer = vec![0; 1024];
                match connected_client.read(&mut buffer).await {
                    Ok(bytes_read) => {
                        // Check we received > 0 bytes
                        if bytes_read == 0 {
                            println!("IPC client disconnected");
                            return;
                        }
                        // Deserialise the request
                        match from_slice::<Syscall>(&buffer[..bytes_read]) {
                            Ok(v) => println!("Data from pipe: {:?}", v),
                            Err(e) => eprintln!("Error converting data to Syscall. {e}"),
                        }
                    },
                    Err(_) => todo!(),
                }
            });
        }
    });
}
And that’s it! As simple as that.
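For context, the engine only needs to call run_ipc_for_injected_dll once at startup. A minimal, hypothetical entry point might look like this (the real engine wires it in alongside the GUI, driver and telemetry handling):

#[tokio::main]
async fn main() {
    // Spawns the listener task and returns immediately
    run_ipc_for_injected_dll().await;

    // The listener runs on its own task; park main so the process stays alive.
    // (Requires Tokio's "signal" feature.)
    tokio::signal::ctrl_c().await.expect("failed to listen for ctrl-c");
}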
Bringing it back to the EDR
Bringing it back to the Sanctum project, this will be the start of my Ghost Hunting technique: the communication between the syscall stub and my engine. Now that we have communication working without breaking the injected process, we can start to look in more detail at implementing and dealing with Ghost Hunting.