Saturday, January 17, 2026

portwitr-interactive

 Interactive terminal-based port, process, file, and resource inspector for Linux


portwitr-interactive is a high-performance, curses-based Terminal User Interface (TUI) designed to give you instant visibility and control over your Linux system — all from a single, interactive view.

A tool for sysadmins and developers

✨ Features

🔍 Live port listing using ss

⚡ Shows CPU% / MEM% usage per process

🧠 Maps PORT → PID → PROGRAM (see the sketch below)

⛔ Firewall toggle for the selected port (temporarily block/unblock traffic)

📂 Displays all open files of the selected process (/proc/<pid>/fd)

🧾 Deep inspection via witr --port

🖥️ Fully interactive terminal UI (curses)

⚡ Real-time refresh

🛑 Stop a process or systemd service directly from the UI (with confirmation)

📝 Warning annotations (e.g., a suspicious working directory is flagged but explained)


Check out and play: https://github.com/sunels/portwitr-interactive
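For a rough idea of how the PORT → PID → PROGRAM mapping can be built, here is a minimal standalone sketch that shells out to ss -tlnp and parses its process column. This is only an illustration in Java (the language used elsewhere on this blog), not the tool's actual implementation; it assumes ss's usual users:(("name",pid=...,fd=...)) column, which generally requires root to be populated for other users' processes.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PortMapSketch {
    // ss -tlnp prints process info like: users:(("cupsd",pid=1234,fd=7))
    private static final Pattern PROC = Pattern.compile("\\(\"([^\"]+)\",pid=(\\d+)");

    public static void main(String[] args) throws Exception {
        Process ss = new ProcessBuilder("ss", "-tlnp").start();
        try (BufferedReader out = new BufferedReader(new InputStreamReader(ss.getInputStream()))) {
            out.readLine(); // skip the header row
            String line;
            while ((line = out.readLine()) != null) {
                String[] cols = line.trim().split("\\s+");
                if (cols.length < 6) continue; // no process column (usually needs root)
                String local = cols[3]; // "Local Address:Port" column
                String port = local.substring(local.lastIndexOf(':') + 1);
                Matcher m = PROC.matcher(cols[5]);
                if (m.find()) {
                    System.out.printf("PORT %s -> PID %s -> %s%n", port, m.group(2), m.group(1));
                }
            }
        }
    }
}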





Sunday, August 10, 2025

TCP Socket: Accept, Buffer, 3-Way Handshake Manim Animation Scripts

 


Saturday, January 25, 2025

Ollama - open-webui - deepseek - CPU - local run

Conversations about it:

Let's build a mini-ChatGPT that's powered by DeepSeek-R1 (100% local):

Why do we need to run it locally when we can always run it from deepseek site?

Privacy, mainly. You can run it from the site if you want, but this is for companies or tech departments that want to run it locally and not worry about what data/info could be leaked.

Okay, but why build your own front end when Open WebUI exists? I can build an identical local solution with two commands (ollama pull, docker run).

A company may want to integrate it into their own site for a specific purpose, with their own branding and feel.

Various reasons, but yes, if I were just messing with it I would do what you mentioned.




sunels@sunels:~$ docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
Unable to find image 'ghcr.io/open-webui/open-webui:ollama' locally

Docker pulls the image on first run and starts Open WebUI with a bundled Ollama; -p 3000:8080 exposes the UI on port 3000, and the two volumes persist downloaded models and chat data across restarts.


http://localhost:11434/ (Ollama API)

http://localhost:3000/ (Open WebUI)

Settings




Run the distilled model:
ollama run yasserrmd/DeepSeek-R1-Distill-Qwen-1.5B
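The same local model can also be queried programmatically through Ollama's REST API on port 11434 (noted above). A minimal sketch using Java's built-in HTTP client, with stream set to false so the whole answer comes back as a single JSON object; the prompt is just a placeholder:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaQuerySketch {
    public static void main(String[] args) throws Exception {
        // Same model that was pulled with "ollama run" above
        String body = """
                {"model": "yasserrmd/DeepSeek-R1-Distill-Qwen-1.5B",
                 "prompt": "Why run an LLM locally?",
                 "stream": false}""";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        // Blocking call; the generated text is in the "response" field of the JSON reply
        HttpResponse<String> reply = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(reply.body());
    }
}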


Download and use the distilled DeepSeek model within Open WebUI.



Don't forget to fetch metadata from Ollama before searching/downloading.

The distilled model's thinking took about 3 minutes (the original model took 11 minutes).



Monday, March 18, 2024

MicroService ASYNC Communication via Message Brokers [Spring, RabbitMQ, Microservice]

UserService communicates with BalanceService using RabbitMQ.

Asynchronous communication between microservices.

RabbitMQ dynamic queue names + replyTo semantics.


USER SERVICE

@RabbitListener(queues = "#{dynamicQueueNameResolver.resolveResponseQueueName()}")
public void handleResponse(@Payload UserBalanceResponse response, org.springframework.amqp.core.Message message) {
    System.out.println("UserController Got a rabbit message = " + response);
    String correlationId = message.getMessageProperties().getCorrelationId();
    correlationIdResponseMap.put(correlationId, response);
    CountDownLatch latch = correlationIdLatchMap.get(correlationId);
    if (latch != null) {
        latch.countDown();
    }
}

@PostMapping("/get-user-balance")
public ResponseEntity<String> getUserBalance(@RequestBody UserRequest userRequest) throws InterruptedException {
    // Generate a correlation ID for the request
    String correlationId = UUID.randomUUID().toString();
    // Set up a latch to wait for the response
    CountDownLatch latch = new CountDownLatch(1);
    correlationIdLatchMap.put(correlationId, latch);

    // Send the request to the BalanceService with the correlation ID
    rabbitTemplate.convertAndSend("user_balance_request_queue", userRequest, message -> {
        message.getMessageProperties().setCorrelationId(correlationId);
        // Set replyTo to this instance's dynamic response queue
        message.getMessageProperties().setReplyTo(dynamicQueueNameResolver.resolveResponseQueueName());
        return message;
    });
    System.out.println("UserController has SENT a rabbit message = " + userRequest);

    // Wait for the response with a timeout
    latch.await(5, TimeUnit.SECONDS);
    correlationIdLatchMap.remove(correlationId);
    UserBalanceResponse userBalance = correlationIdResponseMap.remove(correlationId);

    if (userBalance != null) {
        return ResponseEntity.ok("User balance for user ID " + userRequest.getUserId() + " is: " + userBalance);
    } else {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Failed to get user balance for user ID: " + userRequest.getUserId());
    }
}

BALANCE SERVICE

@RabbitListener(queues = "user_balance_request_queue")
public void processBalanceRequest(UserRequest userRequest, Message requestMessage) {
    System.out.println("GOT userRequest = " + userRequest);
    // Simulate processing the balance request
    // In a real scenario, this could involve querying a database or an external service
    String userBalance = "Balance for user " + userRequest.getUserId() + ": $123"; // Example balance

    // Construct the response object
    UserBalanceResponse response = new UserBalanceResponse(userRequest.getUserId(), userBalance);

    // Send the response back to the UserService using the replyTo queue specified in the request message
    rabbitTemplate.convertAndSend(requestMessage.getMessageProperties().getReplyTo(), response, message -> {
        message.getMessageProperties().setCorrelationId(requestMessage.getMessageProperties().getCorrelationId());
        return message;
    });
}
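The dynamicQueueNameResolver referenced in the @RabbitListener SpEL expression isn't shown above. A hypothetical sketch of such a bean, assuming each UserService instance derives a unique reply-queue name at startup so responses land on the instance that sent the request:

import java.util.UUID;

import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

// Hypothetical implementation; the naming scheme is an assumption, not the original code
@Component("dynamicQueueNameResolver")
public class DynamicQueueNameResolver {

    // Unique per JVM instance, fixed for the instance's lifetime
    private final String responseQueueName = "user_balance_response_queue_" + UUID.randomUUID();

    public String resolveResponseQueueName() {
        return responseQueueName;
    }

    // Declare the queue so the broker creates it before the listener attaches
    // (non-durable, non-exclusive, auto-delete once the instance disconnects)
    @Bean
    public Queue responseQueue() {
        return new Queue(responseQueueName, false, false, true);
    }
}

Spring AMQP's rabbitTemplate.sendAndReceive covers the same request-reply pattern out of the box; the manual correlation-ID map and latch above just make the mechanics explicit.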

Friday, February 2, 2024

Got tired of the Rust Borrow Checker! Try WebAssembly instead :)

I needed to take a break from the fight with the Rust borrow checker :) Just had some fun with Rust & TensorFlow & WebAssembly. How? Ask your favorite AI!