Sunday, December 2, 2018

Java 8 CompletableFuture parallel tasks and Timeout example


Suppose that you have a list of items (ids referring to a table, URLs to fetch page data from the internet, customer keys for location queries against a map service, etc.). You will start some parallel tasks for those items, but you don't want to wait longer than your threshold. You also want to collect the data from the returned CompletableFutures that did not time out.

How can you achieve this with CompletableFuture?

PS: There is no .orTimeout() option in Java 8; it was added in Java 9. The other option is .get(N, TimeUnit.SECONDS), but it will not give you what you want.
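
Below is a minimal sketch of one way to get this behaviour (the pool size of 6, the list of 10 sleep durations and the 5-second threshold are my assumptions, reconstructed from the output underneath): every future is handed to a single-threaded scheduler that completes it with null once the threshold passes, so allOf(...).join() can never block longer than the threshold and the slow tasks simply show up as null in the collected results.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

public class ParallelTasksWithTimeout {

    // Hypothetical helper: completes the future with null if it has not finished
    // within the given threshold, so waiting on it can never exceed the threshold.
    static <T> CompletableFuture<T> withinTimeout(CompletableFuture<T> future,
                                                  long timeout, TimeUnit unit,
                                                  ScheduledExecutorService scheduler) {
        scheduler.schedule(() -> future.complete(null), timeout, unit);
        return future;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(6);
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        List<Integer> sleepMillis = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            sleepMillis.add(i * 1000);
        }

        long start = System.currentTimeMillis();

        List<CompletableFuture<Integer>> futures = sleepMillis.stream()
                .map(ms -> withinTimeout(CompletableFuture.supplyAsync(() -> {
                    System.out.println(Thread.currentThread().getName()
                            + " - - > will sleep for secs :" + ms);
                    try {
                        Thread.sleep(ms);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    return ms;
                }, pool), 5, TimeUnit.SECONDS, scheduler))
                .collect(Collectors.toList());

        System.out.println("Duration here = " + (System.currentTimeMillis() - start));

        // Every future is guaranteed to be completed (with its value or with null)
        // no later than the threshold, so this join returns in roughly 5 seconds.
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();

        List<Integer> results = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());

        System.out.println("Collected Results = " + results);
        System.out.println("Total Duration = " + (System.currentTimeMillis() - start));

        scheduler.shutdown();
        pool.shutdownNow(); // interrupt the tasks that are still sleeping
    }
}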



Outputs : 

pool-1-thread-1 - - > will sleep for secs :0
Duration here = 233
pool-1-thread-2 - - > will sleep for secs :1000
pool-1-thread-3 - - > will sleep for secs :2000
pool-1-thread-4 - - > will sleep for secs :3000
pool-1-thread-5 - - > will sleep for secs :4000
pool-1-thread-6 - - > will sleep for secs :5000
Collected Results = [0, 1000, 2000, 3000, 4000, 5000, null, null, null, null]
Total Duration = 5235

How did I get here? :) See the code below, please.
In this example your main thread waits up to 2 seconds for each uncompleted future.
I guess N * 2 seconds of waiting is not what you expect to see here.
And yes, there is a 20-second job and it gets cancelled, so you have a minimal gain at that point. But if you had 20 such tasks, the picture would be very negative again.
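
Here is a sketch of that naive version (again my reconstruction from the output below): tasks run on the common ForkJoinPool via supplyAsync, and the main thread calls get(2, TimeUnit.SECONDS) on every future in turn, collecting the TimeoutException and cancelling the future when the 2 seconds are exceeded.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class NaiveTimeoutExample {

    public static void main(String[] args) throws Exception {
        List<Integer> sleepSeconds = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            sleepSeconds.add(i);
        }
        sleepSeconds.add(20); // one deliberately long task

        long start = System.currentTimeMillis();

        List<CompletableFuture<Integer>> futures = new ArrayList<>();
        for (Integer secs : sleepSeconds) {
            futures.add(CompletableFuture.supplyAsync(() -> {
                System.out.println(Thread.currentThread().getName()
                        + " - - > will sleep : " + secs);
                try {
                    TimeUnit.SECONDS.sleep(secs);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return secs;
            }));
        }

        System.out.println("Tasks are created here, duration = "
                + (System.currentTimeMillis() - start));

        List<Object> results = new ArrayList<>();
        for (CompletableFuture<Integer> future : futures) {
            System.out.println("Getting results");
            try {
                // Blocks up to 2 seconds for EACH not-yet-completed future,
                // so in the worst case the main thread waits N * 2 seconds.
                results.add(future.get(2, TimeUnit.SECONDS));
            } catch (TimeoutException e) {
                results.add(e);
                future.cancel(true); // the 20-second job gets cancelled here
            }
        }

        System.out.println("Total duration = " + (System.currentTimeMillis() - start));
        System.out.println("CollectedResults = " + results);
    }
}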


Outputs : 

ForkJoinPool.commonPool-worker-1 - - > will sleep : 0
ForkJoinPool.commonPool-worker-5 - - > will sleep : 4
ForkJoinPool.commonPool-worker-3 - - > will sleep : 2
ForkJoinPool.commonPool-worker-4 - - > will sleep : 3
ForkJoinPool.commonPool-worker-7 - - > will sleep : 6
ForkJoinPool.commonPool-worker-1 - - > will sleep : 5
ForkJoinPool.commonPool-worker-2 - - > will sleep : 1
ForkJoinPool.commonPool-worker-6 - - > will sleep : 7
Tasks are created here, duration = 79
Getting results
Getting results
Getting results
ForkJoinPool.commonPool-worker-2 - - > will sleep : 8
Getting results
ForkJoinPool.commonPool-worker-3 - - > will sleep : 9
Getting results
ForkJoinPool.commonPool-worker-4 - - > will sleep : 20
Getting results
Getting results
Getting results
Getting results
Getting results
Getting results
Total duration = 13080
CollectedResults = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, java.util.concurrent.TimeoutException]

Wednesday, July 25, 2018

weak wifi, remote jmx



................HOW TO SEE CONNECTION POOL STATS ON COMMAND LINE VIA JMX ............................

Tool page : https://nofluffjuststuff.com/blog/vladimir_vivien/2012/04/jmx_cli_a_command_line_console_to_jmx

wget https://github.com/downloads/vladimirvivien/jmx-cli/jmxcli-0.1.2-bin.zip
unzip jmxcli-0.1.2-bin.zip
cd jmxcli-0.1.2
java -jar cli.jar
cp /usr/lib/jvm/java-8-oracle/lib/tools.jar lib/
chmod 777 lib/tools.jar 
list filter:"com.mchange.v2.c3p0:type=PooledDataSource*" label:true
desc bean:$0
exec bean:"com.mchange.v2.c3p0:type=PooledDataSource[z8kflt9w1cicerh10mnh44|20c29a6f]" get:"numBusyConnections"
exec bean:"com.mchange.v2.c3p0:type=PooledDataSource[z8kflt9w1jggz5o1xv9pi5|2101f18a]" get:"numBusyConnections"
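
If the connection is too shaky even for the interactive CLI, the same attribute can also be read from a tiny program through the standard JMX remote API. Below is a sketch; the host and port are placeholders, and the target JVM has to be started with the com.sun.management.jmxremote.* properties so that the service URL is reachable.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PoolStats {

    public static void main(String[] args) throws Exception {
        // Placeholder host/port of the remote JVM exposing JMX.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");

        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // Same c3p0 MBean name as in the jmx-cli commands above.
            ObjectName pool = new ObjectName(
                    "com.mchange.v2.c3p0:type=PooledDataSource[z8kflt9w1cicerh10mnh44|20c29a6f]");

            Object busy = connection.getAttribute(pool, "numBusyConnections");
            System.out.println("numBusyConnections = " + busy);
        } finally {
            connector.close();
        }
    }
}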

Tuesday, May 9, 2017

Evolve ....

 
// Java 7: anonymous Comparator class
Collections.sort(itemList, new Comparator<Item>() {
    @Override
    public int compare(Item o1, Item o2) {
        return o1.getItemId().compareTo(o2.getItemId());
    }
});

// Java 8: lambda expression
Collections.sort(itemList, (o1, o2) -> o1.getItemId().compareTo(o2.getItemId()));

// Java 8: Comparator.comparing with a method reference
Collections.sort(itemList, Comparator.comparing(Item::getItemId));


Wednesday, March 22, 2017

How Hadoop works


Hadoop divides the given input file into small parts to increase parallel processing. It uses its own file system, called HDFS. Each split is assigned to a mapper that runs on the same physical machine as the chunk it processes.

Mappers process these small chunks and pass their results to the context. Each split (a piece of the main file, by default the size of an HDFS block) is processed line by line in the map function.

Hadoop supports different programming languages, so it uses its own serialization/deserialization mechanism. That is why you see IntWritable, LongWritable, etc. in the examples. You can write your own Writable classes by implementing the Writable interface according to your requirements, as sketched below.
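
For example, a minimal custom Writable could look like this sketch. ItemWritable and its fields are only illustrative; note that a type used as a map output KEY would have to implement WritableComparable instead, because keys get sorted.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class ItemWritable implements Writable {

    private long itemId;
    private String itemName;

    // Hadoop creates instances via reflection, so a no-arg constructor is required.
    public ItemWritable() {
    }

    public ItemWritable(long itemId, String itemName) {
        this.itemId = itemId;
        this.itemName = itemName;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Serialize the fields in a fixed order...
        out.writeLong(itemId);
        out.writeUTF(itemName);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // ...and read them back in exactly the same order.
        itemId = in.readLong();
        itemName = in.readUTF();
    }
}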

Hadoop collects the outputs of all the mappers, sorts them by KEY, and forwards these results to the reducers.


"Book says all values with same key will go to same reducer"



map(KEYIN inputKey, VALUEIN inputValue, Context context)   // emits (KEYOUT, VALUEOUT) pairs via context.write(...)

reduce(KEYIN keyFromMapper, Iterable<VALUEIN> valuesFromMapper, Context context)   // emits (KEYOUT, VALUEOUT) pairs via context.write(...)



Hadoop calls the reduce function once for each unique key, passing all the values that were collected for that key.

Finally, it writes the output of the reducers back to the HDFS file system.

See the WordCount example for a better understanding: hadoop-wordcount-example
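
For reference, here is a rough sketch of the mapper and reducer of such a WordCount job (simplified; class and field names are illustrative):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Called for every line of the input split; emits a (word, 1) pair per token.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Called once per unique word with all the 1s collected for it; emits the total count.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}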