Sunday, February 22, 2026

Heimdall – How a Simple Port Error Turned into My Crazy Terminal Side Project

 

Hey folks,

Everything started because I got so tired of that classic Spring Boot error: “Port already in use”. Every time I had to hunt down the PID in the terminal, kill it manually, and pray it didn’t happen again five minutes later… ugh.

I was fed up with scrolling through ss, netstat, lsof, ps — all that noise just to free a port during a busy workday.

Then things got out of hand (in a good way 😄).

I started building something small… and it kept growing. Now Heimdall is this beautiful curses-based TUI that watches ports, processes, files, risks — and even runs as a daemon in the background.

The coolest parts right now are Sentinel (it spots shady behavior like backdoors, masquerading processes, deleted binaries, suspicious listeners) and Daemon mode (it quietly monitors suspicious outbound connections, suspends the process, and asks you to allow or kill — either via a popup modal if the TUI is open, or a notification + timeout if not).
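
If you're curious how a check like the deleted-binary one can work on Linux: /proc/<pid>/exe is a symlink to the process's binary, and when the file has been removed after launch the link target ends with " (deleted)". Here is a minimal Java sketch of that idea (hypothetical, not Heimdall's actual code):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class DeletedBinaryCheck {
    public static void main(String[] args) throws IOException {
        // Hypothetical sketch: scan every numeric /proc entry (one per PID)
        try (Stream<Path> procs = Files.list(Paths.get("/proc"))) {
            procs.filter(p -> p.getFileName().toString().matches("\\d+"))
                 .forEach(p -> {
                     try {
                         // /proc/<pid>/exe points at the running binary
                         String target = Files.readSymbolicLink(p.resolve("exe")).toString();
                         if (target.endsWith(" (deleted)")) {
                             System.out.println("PID " + p.getFileName()
                                     + " runs a deleted binary: " + target);
                         }
                     } catch (IOException e) {
                         // kernel thread or no permission; skip it
                     }
                 });
        }
    }
}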

Whenever I think “hey, this would be cool to add”, I throw it at an AI agent while I still have tokens left 😂. Sometimes I just feel like the product owner who never stops adding features.

If you want to join the fun, check out the repo: https://github.com/sunels/heimdall

or https://pypi.org/project/heimdall-linux/

Give it a star, try it, break it, send feedback/PRs — I’d love to hear what you think (or what crazy feature I should add next).

Thanks for reading, stay safe out there!

Love, Serkan ❤️


Saturday, January 17, 2026

Linux terminal tool for ports, processes, and open files

Interactive terminal-based port, process, file, and resource inspector for Linux


portwitr-interactive is a high-performance, curses-based Terminal User Interface (TUI) designed to give you instant visibility and control over your Linux system — all from a single, interactive view.

A tool for sysadmins and developers.

✨ Features

🔍 Live port listing using ss

⚡ Shows CPU% / MEM% usage per process

🧠 Maps PORT → PID → PROGRAM (see the sketch after this list)

⛔ Firewall toggle for the selected port (temporarily block/unblock traffic)

📂 Displays all open files of the selected process (/proc/<pid>/fd)

🧾 Deep inspection via witr --port

🖥️ Fully interactive terminal UI (curses)

⚡ Real-time refresh

🛑 Stop a process or systemd service directly from the UI (with confirmation)

📝 Warnings annotation (e.g., a suspicious working directory is flagged but explained)


Check out and play: https://github.com/sunels/heimdall

Installation: https://github.com/sunels/heimdall?tab=readme-ov-file#-installation

    wget https://github.com/sunels/heimdall/releases/download/v0.1.0/heimdall_0.1.0-1_all.deb
    sudo dpkg -i heimdall_0.1.0-1_all.deb

    # run:
    sudo heimdall

    # or just
    heimdall


Sunday, August 10, 2025

TCP Socket: Accept, Buffer, 3-Way Handshake Manim Animation Scripts


Saturday, January 25, 2025

Ollama - open-webui - deepseek - CPU - local run

Conversations about it:

Let's build a mini-ChatGPT that's powered by DeepSeek-R1 (100% local):

Why do we need to run it locally when we can always run it from the DeepSeek site?

Privacy, mainly. You can run it from the site if you want, but this is for companies or tech departments that want to run it locally and not worry about what data/info could be leaked.

Okay, but why build your own front end when Open WebUI exists? I can build an identical local solution with 2 commands (ollama pull, docker run).

A company may want to incorporate it into its own site for a specific purpose, with its own branding and feel.

Various reasons, but yes, if I was just messing with it I would just do what you mentioned.




sunels@sunels:~$ docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
Unable to find image 'ghcr.io/open-webui/open-webui:ollama' locally


http://localhost:11434/

http://localhost:3000/

Settings




Run the distilled model:
ollama run yasserrmd/DeepSeek-R1-Distill-Qwen-1.5B


Download and use the distilled DeepSeek model within Open WebUI.



Don't forget to fetch metadata from Ollama before searching/downloading.

The distilled model's thinking took 3 minutes (the original model took 11 minutes).


Monday, March 18, 2024

MicroService ASYNC Communication via Message Brokers [Spring, RabbitMQ, Microservice]

UserService communicates with BalanceService using RabbitMQ.

Asynchronous communication between microservices.

RabbitMQ dynamic queue names + replyTo semantics.


USER SERVICE

@RabbitListener(queues = "#{dynamicQueueNameResolver.resolveResponseQueueName()}")
public void handleResponse(@Payload UserBalanceResponse response,
                           org.springframework.amqp.core.Message message) {
    System.out.println("UserController Got a rabbit message = " + response);
    String correlationId = message.getMessageProperties().getCorrelationId();
    correlationIdResponseMap.put(correlationId, response);
    CountDownLatch latch = correlationIdLatchMap.get(correlationId);
    if (latch != null) {
        latch.countDown();
    }
}

@PostMapping("/get-user-balance")
public ResponseEntity<String> getUserBalance(@RequestBody UserRequest userRequest)
        throws InterruptedException {
    // Generate a correlation ID for the request
    String correlationId = UUID.randomUUID().toString();
    // Set up a latch to wait for the response
    CountDownLatch latch = new CountDownLatch(1);
    correlationIdLatchMap.put(correlationId, latch);

    // Send the request to the BalanceService with the correlation ID
    rabbitTemplate.convertAndSend("user_balance_request_queue", userRequest, message -> {
        message.getMessageProperties().setCorrelationId(correlationId);
        // Set replyTo to this instance's dynamic response queue
        message.getMessageProperties().setReplyTo(
                dynamicQueueNameResolver.resolveResponseQueueName());
        return message;
    });
    System.out.println("UserController has SENT a rabbit message = " + userRequest);

    // Wait for the response with a timeout; if it never arrives, userBalance stays null
    latch.await(5, TimeUnit.SECONDS);
    correlationIdLatchMap.remove(correlationId);
    UserBalanceResponse userBalance = correlationIdResponseMap.remove(correlationId);

    if (userBalance != null) {
        return ResponseEntity.ok("User balance for user ID "
                + userRequest.getUserId() + " is: " + userBalance);
    } else {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body("Failed to get user balance for user ID: " + userRequest.getUserId());
    }
}

BALANCE SERVICE

@RabbitListener(queues = "user_balance_request_queue")
public void processBalanceRequest(UserRequest userRequest, Message requestMessage) {
    System.out.println("GOT userRequest = " + userRequest);
    // Simulate processing the balance request; in a real scenario this could
    // involve querying a database or an external service
    String userBalance = "Balance for user " + userRequest.getUserId() + ": $123"; // example balance

    // Construct the response object
    UserBalanceResponse response =
            new UserBalanceResponse(userRequest.getUserId(), userBalance);

    // Send the response back on the replyTo queue from the request message,
    // echoing the original correlation ID
    rabbitTemplate.convertAndSend(
            requestMessage.getMessageProperties().getReplyTo(), response, message -> {
        message.getMessageProperties().setCorrelationId(
                requestMessage.getMessageProperties().getCorrelationId());
        return message;
    });
}
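
The SpEL expression #{dynamicQueueNameResolver.resolveResponseQueueName()} needs a bean with that name in the context. A minimal hypothetical implementation (the real one in the project may derive the name differently):

import java.util.UUID;
import org.springframework.stereotype.Component;

@Component("dynamicQueueNameResolver")
public class DynamicQueueNameResolver {

    // One response queue per service instance, so the reply lands on the
    // same node that sent the request.
    private final String responseQueueName =
            "user_balance_response_queue_" + UUID.randomUUID();

    public String resolveResponseQueueName() {
        return responseQueueName;
    }

    // NOTE: the queue itself must also be declared, e.g. as a bean:
    // @Bean Queue responseQueue() { return new Queue(resolveResponseQueueName()); }
}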

Friday, February 2, 2024

Got tired of the Rust Borrow Checker! Try WebAssembly instead :)

I needed a break from the fight with the Rust borrow checker :) Just had some fun with Rust & TensorFlow & WebAssembly. How? Ask your favorite AI!

Saturday, April 23, 2022

Caching with Spring

@Cacheable: the cache is checked before the actual method invocation; on a hit the method is skipped.

@CachePut: the method always runs; its result is put into the cache after the invocation.

unless: evaluated against the return value (#result), after the method runs.

condition: evaluated against the method parameters, before the method runs.


@Cacheable(value = "saveCache", key = "{#a, #b, #c}", unless="#result.result.size() > 0")


@CachePut(value="defaultCache", key="#pk",unless="#result==null")


@Cacheable(value="actors", key="#key", condition="#key == 'sean'")

Tuesday, March 2, 2021

Rest Post Mock Intellij Mockoon

Easily test your endpoints within the IDE: there is a globe icon next to the REST controller; click on it.



Here you can add your sample request and run it easily.


If you need to mock some requests, Mockoon is a very handy tool for that.




Thursday, January 28, 2021

SpringBoot StringKafkaTemplate Conductor Kafka Topic Producer Consumer

Wanna produce and consume from Kafka topics with Spring Boot?

Wanna watch your Kafka topics via Conductor?

Wanna send a REST request to produce data into your Kafka topic?

Don't wanna write code?

Get The Code
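
If you'd rather skim before cloning, here is a minimal sketch of the kind of wiring such a project sets up, assuming a Spring Boot app with spring-kafka on the classpath (DemoKafkaController, demo-topic, and demo-group are illustrative names, not necessarily the repo's):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoKafkaController {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // POST /produce?message=hello pushes the payload onto the topic
    @PostMapping("/produce")
    public String produce(@RequestParam String message) {
        kafkaTemplate.send("demo-topic", message);
        return "sent: " + message;
    }

    // ...and this listener consumes everything written to the same topic
    @KafkaListener(topics = "demo-topic", groupId = "demo-group")
    public void consume(String message) {
        System.out.println("consumed: " + message);
    }
}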

Just check out and sit back :)
Your custom Java annotation, and how to check for it on classes with reflection.

Just another copy-paste-run-bye article!
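
A minimal self-contained sketch of the idea (Audited and PaymentService are made-up names):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// A custom annotation kept at runtime so reflection can see it
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Audited {
    String value() default "";
}

@Audited("payments")
class PaymentService { }

public class AnnotationScanner {
    public static void main(String[] args) {
        Class<?> clazz = PaymentService.class;
        // ...and the reflective check for it on a class
        if (clazz.isAnnotationPresent(Audited.class)) {
            Audited audited = clazz.getAnnotation(Audited.class);
            System.out.println(clazz.getSimpleName()
                    + " is audited under: " + audited.value());
        }
    }
}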

Wednesday, November 18, 2020

AVL TREE LEFT/RIGHT AND DOUBLE ROTATIONS, DEPTH-FIRST SEARCH, BREADTH-FIRST SEARCH, RECURSIVE AND ITERATIVE TRAVERSALS, HEIGHT, DEPTH, SIZE, LEAF AND NODE FINDERS, ROOT-TO-LEAF NODE PATHS




SINGLE ROTATION
DOUBLE ROTATION



Suppose the node to be rebalanced is X.
There are 4 cases that we might have to fix (two are mirror images of the other two):

  • An insertion in the left subtree of the left child of X,
  • An insertion in the right subtree of the left child of X,
  • An insertion in the left subtree of the right child of X, or
  • An insertion in the right subtree of the right child of X.

Balance is restored by tree rotations.

Case 1 and case 4 are symmetric and require the same operation to rebalance.
Cases 1 and 4 are handled by a single rotation.

Case 2 and case 3 are symmetric and require the same operation to rebalance.
Cases 2 and 3 are handled by a double rotation, as sketched below.
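
A minimal Java sketch of those rotations (field and method names are illustrative):

class Node {
    int key, height = 1;
    Node left, right;
    Node(int key) { this.key = key; }
}

class Rotations {
    static int height(Node n) { return n == null ? 0 : n.height; }
    static void update(Node n) { n.height = 1 + Math.max(height(n.left), height(n.right)); }

    // Case 1 (insertion in the left subtree of the left child): single right rotation
    static Node rotateRight(Node x) {
        Node y = x.left;
        x.left = y.right;
        y.right = x;
        update(x);
        update(y);
        return y; // y becomes the new subtree root
    }

    // Case 4 is the mirror image: single left rotation
    static Node rotateLeft(Node x) {
        Node y = x.right;
        x.right = y.left;
        y.left = x;
        update(x);
        update(y);
        return y;
    }

    // Case 2 (insertion in the right subtree of the left child): double rotation =
    // rotate the left child left, then rotate x right; case 3 is the mirror image.
    static Node rotateLeftRight(Node x) {
        x.left = rotateLeft(x.left);
        return rotateRight(x);
    }
}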

Sunday, February 9, 2020

REDIS STREAM - CLI

DOCKER
docker run -it --rm --net host --name redis1 redis
docker exec -it redis1 redis-cli

XREADGROUP
XGROUP CREATE mystream mygroup $ MKSTREAM
XADD mystream * message apple
XADD mystream * message orange
XADD mystream * message strawberry

XGROUP CREATE mystream grp1 0
XREADGROUP GROUP grp1 Bob COUNT 2 STREAMS mystream >

XGROUP CREATE mystream grp2 0
XREADGROUP GROUP grp2 Bobiy COUNT 2 STREAMS mystream >

The special ID >: it is only valid in the context of consumer groups, and it means "messages never delivered to any other consumer so far".




XREAD
XADD mystream * message newcoronavirus
XADD mystream * message nextvirus

XREAD BLOCK 0 STREAMS mystream $
The special ID $: it tells XREAD to use the maximum ID already stored in the stream mystream as its last ID, so we only receive new messages from the moment we started listening. It is similar to the Unix tail -f command.


Use multiple consumer groups for FANOUT (each group gets its own copy of every entry).
Use XREAD for simple point-to-point tailing of the stream.

Sunday, December 2, 2018

Java 8 CompletableFuture parallel tasks and Timeout example


Suppose you have a list of items (IDs referring to a table, URLs to fetch page data from the internet, customer keys for location queries against a map service, etc.) and you start parallel tasks for them. You don't want to wait longer than a given threshold, yet you still want to collect the data from the returned CompletableFutures that did not time out.

How can you achieve this with CompletableFuture?

PS: There is no .orTimeout() option in Java 8; it arrived in Java 9. The other option is .get(N, TimeUnit.SECONDS), but it will not give you what you want, as shown further below.
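
Since the code itself is not reproduced in the text, here is a minimal Java 8 sketch of one way to get the output below: a scheduled executor completes each still-pending future with null once the threshold passes, so a single allOf(...).join() returns as soon as every future is either done or timed out (the ~5-second threshold and names here are illustrative):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

public class ParallelWithTimeout {

    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();

    // Completes the future with null if it is still pending when the timeout fires
    static <T> CompletableFuture<T> withTimeout(CompletableFuture<T> f,
                                                long timeout, TimeUnit unit) {
        SCHEDULER.schedule(() -> f.complete(null), timeout, unit);
        return f;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        long start = System.currentTimeMillis();

        List<CompletableFuture<Integer>> futures = new ArrayList<>();
        for (int i = 0; i <= 9; i++) {
            final int sleepMillis = i * 1000;
            CompletableFuture<Integer> f = CompletableFuture.supplyAsync(() -> {
                System.out.println(Thread.currentThread().getName()
                        + " - - > will sleep for secs :" + sleepMillis);
                try {
                    Thread.sleep(sleepMillis);
                } catch (InterruptedException ignored) { }
                return sleepMillis;
            }, pool);
            futures.add(withTimeout(f, 5200, TimeUnit.MILLISECONDS));
        }
        System.out.println("Duration here = " + (System.currentTimeMillis() - start));

        // Wait once for everything; timed-out futures are already completed with null
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();

        List<Integer> results = futures.stream()
                .map(CompletableFuture::join) // safe: everything is complete by now
                .collect(Collectors.toList());
        System.out.println("Collected Results = " + results);
        System.out.println("Total Duration = " + (System.currentTimeMillis() - start));

        pool.shutdownNow();   // interrupt the sleepers that timed out
        SCHEDULER.shutdown();
    }
}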



Outputs : 

pool-1-thread-1 - - > will sleep for secs :0
Duration here = 233
pool-1-thread-2 - - > will sleep for secs :1000
pool-1-thread-3 - - > will sleep for secs :2000
pool-1-thread-4 - - > will sleep for secs :3000
pool-1-thread-5 - - > will sleep for secs :4000
pool-1-thread-6 - - > will sleep for secs :5000
Collected Results = [0, 1000, 2000, 3000, 4000, 5000, null, null, null, null]
Total Duration = 5235

How did I get there? :) See the code below.
In that version the main thread waits up to 2 seconds for each uncompleted future.
I guess N*2 seconds of waiting is not what you expect to see here.
And yes, the 20-second job is cancelled, so you gain a little at that point. But if you had 20 such tasks, the picture would be very negative again.
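
Again a sketch (illustrative), this time of the naive sequential-get version that produced the output below:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class NaiveSequentialGet {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();

        // 11 tasks on the common ForkJoinPool; the last one sleeps 20 seconds
        List<CompletableFuture<Integer>> futures = IntStream
                .concat(IntStream.rangeClosed(0, 9), IntStream.of(20))
                .mapToObj(secs -> CompletableFuture.supplyAsync(() -> {
                    System.out.println(Thread.currentThread().getName()
                            + " - - > will sleep : " + secs);
                    try {
                        TimeUnit.SECONDS.sleep(secs);
                    } catch (InterruptedException ignored) { }
                    return secs;
                }))
                .collect(Collectors.toList());
        System.out.println("Tasks are created here, duration = "
                + (System.currentTimeMillis() - start));

        // The problem: get(2, SECONDS) runs sequentially, so every unfinished
        // future can add up to 2 more seconds of main-thread waiting
        List<Object> results = futures.stream()
                .map(f -> {
                    System.out.println("Getting results");
                    try {
                        return (Object) f.get(2, TimeUnit.SECONDS);
                    } catch (InterruptedException | ExecutionException | TimeoutException e) {
                        f.cancel(true); // give up on the slow one
                        return (Object) e;
                    }
                })
                .collect(Collectors.toList());

        System.out.println("Total duration = " + (System.currentTimeMillis() - start));
        System.out.println("CollectedResults = " + results);
    }
}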


Outputs : 

ForkJoinPool.commonPool-worker-1 - - > will sleep : 0
ForkJoinPool.commonPool-worker-5 - - > will sleep : 4
ForkJoinPool.commonPool-worker-3 - - > will sleep : 2
ForkJoinPool.commonPool-worker-4 - - > will sleep : 3
ForkJoinPool.commonPool-worker-7 - - > will sleep : 6
ForkJoinPool.commonPool-worker-1 - - > will sleep : 5
ForkJoinPool.commonPool-worker-2 - - > will sleep : 1
ForkJoinPool.commonPool-worker-6 - - > will sleep : 7
Tasks are created here, duration = 79
Getting results
Getting results
Getting results
ForkJoinPool.commonPool-worker-2 - - > will sleep : 8
Getting results
ForkJoinPool.commonPool-worker-3 - - > will sleep : 9
Getting results
ForkJoinPool.commonPool-worker-4 - - > will sleep : 20
Getting results
Getting results
Getting results
Getting results
Getting results
Getting results
Total duration = 13080
CollectedResults = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, java.util.concurrent.TimeoutException]

Wednesday, July 25, 2018

Weak wifi, remote JMX



................HOW TO SEE CONNECTION POOL STATS ON COMMAND LINE VIA JMX ............................

Tool page : https://nofluffjuststuff.com/blog/vladimir_vivien/2012/04/jmx_cli_a_command_line_console_to_jmx

wget https://github.com/downloads/vladimirvivien/jmx-cli/jmxcli-0.1.2-bin.zip
unzip jmxcli-0.1.2-bin.zip
cd jmxcli-0.1.2
# the CLI needs the JDK's tools.jar on its classpath
cp /usr/lib/jvm/java-8-oracle/lib/tools.jar lib/
chmod 777 lib/tools.jar
java -jar cli.jar
list filter:"com.mchange.v2.c3p0:type=PooledDataSource*" label:true
desc bean:$0
exec bean:"com.mchange.v2.c3p0:type=PooledDataSource[z8kflt9w1cicerh10mnh44|20c29a6f]" get:"numBusyConnections"
exec bean:"com.mchange.v2.c3p0:type=PooledDataSource[z8kflt9w1jggz5o1xv9pi5|2101f18a]" get:"numBusyConnections"

Tuesday, May 9, 2017

Evolve ....

Java 7, anonymous Comparator:

Collections.sort(itemList, new Comparator<Item>() {
    @Override
    public int compare(Item o1, Item o2) {
        return o1.getItemId().compareTo(o2.getItemId());
    }
});

Java 8, lambda:

Collections.sort(itemList, (o1, o2) -> o1.getItemId().compareTo(o2.getItemId()));

Java 8, method reference with Comparator.comparing:

Collections.sort(itemList, Comparator.comparing(Item::getItemId));


Wednesday, March 22, 2017

How Hadoop works


Hadoop splits the given input file into small parts to increase parallel processing. It uses its own file system, called HDFS. Each split is assigned to a mapper that runs on the same physical machine that holds the given chunk.

Mappers process the split files (each chunk, a piece of the main file, is roughly one HDFS block in size) line by line in the map function and pass their results to the context.

Because Hadoop supports different programming languages, it uses its own serialization/deserialization mechanism. That is why you see IntWritable, LongWritable, etc. types in the examples. You can write your own Writable classes by implementing the Writable interface according to your requirements.

Hadoop collects all the different outputs of the mappers, sorts them by KEY, and forwards these results to the reducers.


"Book says all values with same key will go to same reducer"



map(inputKey, inputValue, context): the mapper emits (outputKey, outputValue) pairs via context.write


reduce(keyFromMapper, valuesFromMapper, context): the reducer receives all values grouped under one key and emits its own (outputKey, outputValue) pairs via context.write



Hadoop calls the reduce function once for each distinct key, with all of that key's values grouped together.

And finally it writes the results of the reducers to the HDFS file system.

See the WordCount example for better understanding: hadoop-wordcount-example. A minimal sketch of it follows below.
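
For reference, a minimal sketch of the canonical WordCount mapper and reducer (essentially the stock Hadoop example):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: called once per input line; emits (word, 1) for every token
public class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
        }
    }
}

// Reducer: called once per distinct word, with all of its counts grouped
class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}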