Sunday, April 4, 2021

Istio — a simple, fast way to start


Istio architecture (source: istio.io)

I would like to share with you a sample repo to help you get started and continue your journey learning Istio.

If you are not familiar with Istio and only know it by name, or the internet waves brought you here, you can start by reading this very tiny historical background.
 
If you are familiar with microservices, containers, and preferably Kubernetes, then you are set to play with Istio right away.

The only prerequisite is a Kubernetes cluster, and no, you don't have to go to AWS or Google Cloud: you can have your own on your local machine, a fully blown multi-node cluster with one command.

Check out this fantastic tool to do that; the setup is very quick and easy to follow: https://github.com/kubernetes-sigs/kind

With that done, you can follow demo.md command by command. Everything is driven by a Makefile that groups commands into sets, making them easier to understand and execute.

Jump to the demo file to start, and enjoy:


Tuesday, November 24, 2020

Deep Learning - Digit recognizer for MNIST - part 1


In this blog post, I want to talk a little bit about testing different fully connected neural networks to see how each performs on this dataset:

https://www.kaggle.com/c/digit-recognizer


This is one of the first and most famous classical datasets in the computer vision field. We have images like this:

and we want to be able to identify each image with the least amount of errors.
We have 42,000 labeled images for training and 28,000 unlabeled images for Kaggle evaluation.


In this post I'll use a few deep neural networks built with Keras and evaluate them. Each model will differ in either the number of layers or the nodes per layer, to get a sense of what increases the accuracy of a neural network on this kind of problem, and whether more layers, more units, and larger models mean better performance.

I'll assume you are familiar with Python and the basics of neural networks.
As for Keras basics, if you are not familiar with it, I recommend googling things along the way, as it's not very complicated.


The link to the notebook is here:
https://github.com/blabadi/ml_mnist_numbers/blob/blog-branch-part1/Digit%20Recognizer.ipynb

1- Imports and libs


Regular libraries to build the Keras model, simple layers for a fully connected NN, and some utils to show images.

2- Define some parameters

We have 10 digits (0-9) to classify an image as.
Our images are 28x28 pixels.
The directory is where we save the models.

The batch size tells the model to train on 32 images per iteration.
A training iteration has two steps, a forward and a backward pass over an image batch:
1- Forward: starting from the current weights (initially random), the model predicts labels for the batch.
2- Backward: we calculate the error, and the optimization algorithm updates the parameters to minimize it (using partial derivatives), with a learning rate parameter that decides how big each change is.

An epoch is one full pass over the training set; the parameters learned in previous epochs are kept, so over several epochs we drive the error as low as possible for the current hyperparameter setup.
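To make the forward/backward idea concrete, here is a tiny sketch in plain NumPy of one gradient-descent iteration for a single linear layer with a softmax output. All names and sizes here are illustrative stand-ins, not code from the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, input_size, num_classes = 32, 784, 10

x = rng.random((batch_size, input_size))                            # one batch of flattened images
y = np.eye(num_classes)[rng.integers(0, num_classes, batch_size)]   # one-hot labels

w = rng.standard_normal((input_size, num_classes)) * 0.01  # random initial weights
learning_rate = 0.1

# forward pass: predict class scores, squash them into probabilities (softmax)
scores = x @ w
probs = np.exp(scores - scores.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# backward pass: gradient of the cross-entropy error w.r.t. w,
# then a small step against it, scaled by the learning rate
grad = x.T @ (probs - y) / batch_size
w -= learning_rate * grad
```

An optimizer like the one Keras uses repeats exactly this loop, batch after batch, epoch after epoch.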

3- Loading the dataset



We load our labeled data (X) and split it into two sets:
1- Training: used by the model optimizer.
2- Test (validation): used by us to check the model's accuracy on data it hasn't seen before, but which we have labels for, so we can compare against a ground truth.

My split was 32,000 train, 10,000 validation (roughly 76% train to 24% validation).

We also split out the labels (Y) and converted them to one-hot vectors of 10 classes.
If you look at the last cell output, the number 3 is represented by a vector with a 1 in position 3; this is what lets the model output a probability for each class.
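If the one-hot idea is new to you, here is a minimal NumPy sketch of what that conversion does (Keras has a built-in `to_categorical` utility for this; the helper below is just illustrative):

```python
import numpy as np

def one_hot(labels, num_classes=10):
    """Turn digit labels into one-hot vectors, one position per class."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1  # set a 1 at each label's position
    return encoded

# the digit 3 becomes a vector with a 1 in position 3
print(one_hot([3, 0, 9]))
```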

4- Define the models


I created several different models, but here is the biggest one:


It has an input layer of size 28 x 28 = 784; each input represents a single pixel in the image. Since it's a grayscale image, a pixel can have a value 0-255 and there is only one channel, no RGB.

It has the following layers:

- Dropout: a regularization layer to avoid over-fitting (memorizing the dataset instead of learning the pattern); it randomly drops some nodes so the network can't rely on them.

- Dense: a layer of nodes fully connected to the previous and next layers. Each node has an activation function (a non-linear function applied to the previous layer's output), and each node learns something about the data (a feature, for example the curves in a digit, straight lines, etc.).
The deeper the node, the more complex the features it learns. That's why neural networks work: through composition they build connections that can learn and map complex functions, with the ability to generalize to new data.
example:



Image taken from this paper: http://www.cs.cmu.edu/~aharley/vis/harley_vis_isvc15.pdf

The last layer has 10 nodes; each node outputs the probability of one digit.
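Putting the pieces above together, a fully connected Keras model of this shape can be sketched roughly like this. The layer sizes and dropout rates here are illustrative guesses, not the exact values from the notebook:

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 10        # digits 0-9
INPUT_SIZE = 28 * 28    # flattened grayscale pixels

model = keras.Sequential([
    layers.Input(shape=(INPUT_SIZE,)),
    layers.Dense(512, activation="relu"),   # learns simple strokes and curves
    layers.Dropout(0.3),                    # randomly drops nodes to fight over-fitting
    layers.Dense(256, activation="relu"),   # deeper layer, more complex features
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one probability per digit
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The softmax output pairs with the one-hot labels: the loss compares the 10 predicted probabilities against the ground-truth vector that has a 1 in the correct position.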


5- Train the model

The commented code above trains the model to fit the training data; this is the most time-consuming step.


Each epoch, the optimizer iterates over all the images and prints the accuracy it achieved.
With this model we got a fairly quick result, but as you can see the gains slow down very quickly.
In my case it took a few minutes to finish these epochs, since I'm using a GPU.
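The training step itself boils down to a single `fit` call. Here is a hedged, minimal sketch of what that looks like; the tiny model and random data below are stand-ins just to show the call shape, not the notebook's actual model or data:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# tiny stand-in model, just to demonstrate fit()
model = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# stand-in data: 256 random "images" with random one-hot labels
x_train = np.random.rand(256, 784).astype("float32")
y_train = np.eye(10)[np.random.randint(0, 10, 256)]

# batch_size=32 means 8 weight updates per epoch; epochs=3 means 3 passes over the data
history = model.fit(x_train, y_train, batch_size=32, epochs=3, verbose=0)
print(history.history["accuracy"])  # one accuracy value per epoch
```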

6- Loading and evaluating the models



Here you can see all the models I tried and trained. They are all saved to files so they can be reused later if needed.

Here is the evaluation result on the validation set:

We let each model predict the digits and compare the predictions to the ground truth; Keras does that for us.

Our model achieved 99.15% training accuracy and 97.74% validation accuracy. The validation number is expected to be lower, since these images are completely new to the model. What we want to watch for here is whether the model fails to generalize: if your model gets 99% in training but, say, 80% in test accuracy, the gap is big, which can indicate that the model is over-fitting the training set and not generalizing well to new data. (That's why I added regularization to my biggest model: the more parameters it has, the more easily it can overfit.)
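The accuracy check itself is simple: take the highest-probability class per image and compare it to the ground truth. A small NumPy sketch of that (with made-up predictions; Keras's `evaluate` does the equivalent for us):

```python
import numpy as np

# pretend model output: one row of 10 class probabilities per image
probs = np.array([
    [0.05, 0.05, 0.05, 0.60, 0.05, 0.05, 0.05, 0.05, 0.03, 0.02],  # looks like a 3
    [0.80, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.04, 0.02],  # looks like a 0
])
y_true = np.array([3, 7])  # ground-truth digits; the second prediction is wrong

preds = probs.argmax(axis=1)          # pick the most probable digit per image
accuracy = (preds == y_true).mean()   # fraction of correct predictions
print(preds, accuracy)                # -> [3 0] 0.5
```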


Notes based on the results:


7- Sample predictions from the unlabeled set


Each image has its index:prediction over it. You can see some not-so-clear images, like indexes 399, 366, and 445, but our model got them right.


8- Testing with my own image


I created a digit image myself to see how the models would do.

Only 3 models were able to predict the correct digit.
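Feeding your own image to a model like this takes a little preprocessing. Here is a minimal sketch of the usual steps, assuming the image has already been loaded as a 28x28 grayscale array (actually loading the file would need something like PIL, which I leave out; the helper name is mine, not from the notebook):

```python
import numpy as np

def prepare_image(img_28x28):
    """Scale pixel values from 0-255 down to 0-1 and flatten to the model's input shape."""
    arr = np.asarray(img_28x28, dtype="float32") / 255.0
    return arr.reshape(1, 28 * 28)  # a batch of one flattened image

# stand-in for a hand-drawn digit image
fake_image = np.random.randint(0, 256, (28, 28))
batch = prepare_image(fake_image)
print(batch.shape)  # -> (1, 784)
# the prediction would then be: model.predict(batch).argmax()
```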

The model achieved 97.7% on the Kaggle test set after submission. For a simple fully connected neural network, that seems good to me!

In the next parts I'll try a more complex network using convolutions and a residual network architecture, to see how much further we can reduce the error, knowing that people have already achieved 99.7% on this dataset, if not higher.


Tuesday, March 5, 2019

[part 1] Spring State-of-the-art microservices full project

In this post I want to introduce something I've been working on for the last month: a Java, Spring Boot 2 microservices application that demonstrates a best-of-breed, state-of-the-art Spring tech stack.

The app itself is not new; it's good old NuTracker (a macro nutrition log & diary). I wrote this app as a monolith before, and now I've split it into different microservices, focusing on the architectural components that become essential when moving to a distributed system.

With microservices you basically leave the easy-to-debug, easy-to-write monolith behind to embrace a new mindset, one that is more complex and challenging to code, test, debug, and deploy. The reason you may want to do that is to gain flexibility and scalability that become more costly to retrofit once the application gets beyond the early development stages.

Software starts simple, but over time we add more features, change old ones, and adopt new technologies; and if we are successful, we receive more traffic. Because of all that, and other reasons, we need to write code in a way that:

1- allows clear, definitive boundaries between our different domains, and allows each domain to grow in a somewhat isolated, autonomous manner.

2- allows ease of change and replacement. Code is as alive as we are; it's an abstraction of our thoughts, behaviors, and needs. You can't write code, leave it, and expect it to live forever: it should grow, and then it should die and be replaced. It's a natural life cycle that we should embrace.
Monoliths can be hard to let go of, because then we would have to rewrite the whole thing, which is impractical; we can only rewrite small chunks at a time.

3- is reactive. Reactivity, according to the Reactive Manifesto, means being responsive, elastic, message-driven, and resilient. Our app, infrastructure, and processes for writing and shipping code should revolve around those concepts: we should be able to respond quickly, scale up and down, react to and publish the events that drive our app's behavior, and be resilient to failures so that the whole app doesn't collapse when the unexpected happens.


More resources about this, in case you need a start:

- Building Microservices: Designing Fine-Grained Systems
https://www.amazon.ca/Building-Microservices-Designing-Fine-Grained-Systems/dp/1491950358/
- https://12factor.net/
- https://www.amazon.ca/Cloud-Native-Java-Designing-Resilient/dp/1449374646


The code

In this repository you can find the code; you should be able to pull and run it with Docker.

https://github.com/blabadi/nutracker-microservices

Here are the description and highlights of this architecture:


  • Domain services: entries, food-catalog, identity
  • Support services: config-server, Eureka discovery, Boot Admin, OAuth2 auth-server
  • frameworks: spring boot 2 stack, webflux (reactive).
  • docker for containers, docker-compose for local dev env
  • concerns and design goals:
    • Isolated and autonomous
    • Resilient, Fault tolerant
    • Responsive
    • Efficient
    • Scalable/ Elastic/ Highly available
    • Monitored & traceable
    • Developer quality of life (strong tooling and fast workflow)
  • patterns:
    • service registry & discovery (Eureka)
    • central runtime-changeable configuration (Spring Config Server)
    • circuit breakers (resilience4j)
    • client side load balancing (Ribbon)
    • reactive async I/O flow (reactor)
    • stateless token based authentication (OAuth2 + jwt)
    • API gateway as single point of entry (Spring Cloud Gateway)
    • Monitoring:
      • health checking
      • logs aggregation
  • Technologies:
    • Spring boot 2: spring data, webflux, test
    • Netflix OSS: Eureka, Ribbon
    • resilience4j circuit breaker
    • Monitoring: Elastic Stack (Filebeat, Elasticsearch, Kibana), Spring Actuator & Admin, Zipkin, Sleuth
    • containerization: docker
    • spring security 5, oauth2 + jwt tokens
    • data stores: MongoDB
    • junit 5, mockito, embedded dbs
    • Kafka as a message bus (currently used by Zipkin & Sleuth for traces)
    • maven, git, shell scripts
    • java 11
 
There is a lot here, and a lot to explain and talk about; that will be the topic of other posts in this series.



Sunday, November 4, 2018

[PART 5] NuTracker ReactJS app - Add Login & Profile using Router



In the previous part we finished the dashboard read functionality, now we want to add the skeleton for other pages:

- Login
  In this page the user will be able to login to their account and the dashboard won't show unless the user is logged in.

- Profile
In this page the user will be able to update their daily nutrition goals that they can track in the dashboard.

To have multiple 'pages' in React and navigate between them, we need something that can switch the rendered content based on what we want. We could do that with if statements in the App component plus some location state we store ourselves, but why reinvent the wheel?

React Router


Every major single-page-app web framework has a routing concept and functionality to interact with the usual browser URLs and switch the content based on what the user should see.

For example, on the profile page I want the URL path to be /profile, for login to be /login, and so on.
In more advanced cases you want users to be able to bookmark a page and come back to it in the state they left it in. That is tricky in single-page apps because, as in our case, the state doesn't read anything from the URL; everything is just stored in browser memory.


For this PR I only added navigation and authentication protection for the dashboard & profile using React Router. In the future we can try making the date a URL parameter, so users can bookmark specific dates in their dashboard.

Here are the core lines that hold the routing logic (see routes.js):


Everything added to enable the router is in this PR, and I commented on the PR changes to explain each one.

Things to note


1- React Router manages its own state, it's not managed by redux

Yes, things can get a bit messy, because now we have two different sources of state. This means that if we want a button to update the URL and pass filters (like the date navigation), we need our components to read that from the router state, not from the Redux store.

There is a library that synchronizes Redux and the router, called connected-react-router.

more about this here:
- https://reacttraining.com/react-router/core/guides/redux-integration
- https://redux.js.org/advanced/usagewithreactrouter


2- We can check authentication and decide to either redirect to login or render the component, by adding a custom PrivateRoute component (see routes.js). See also how the login page redirects back to the referrer (the page the user tried to access before being redirected to login).



Here is how the app looks right now:

the dummy login page:

 

our lovely dashboard:





and our dummy profile page:





In the next part I'll cover PropTypes and how to write unit tests for components; then we can add our first form to allow the user to add entries to their day.

Wednesday, October 31, 2018

[PART 4] NuTracker, ReactJS application - Finish Dashboard Read functionality

In the previous part we added the first call to the async API to do search. Now we will build on that to call more APIs and add more components to the page, to finish up the display flows (i.e. what doesn't require writes/forms):
  • search
    • this allows the user to find a specific food and add an entry for it to their day, to record that they consumed a specific portion of that food.
  • show entries
    • This allows users to see what they recorded as a list of entries; each entry represents a food the user added on a specific date.
  • show progress bars
    • The user will be able to set goals, like how many calories, proteins, and fats they want to consume daily, and these bars will calculate from the entries they added how close they are to their daily goals.
  • reload entries & progress metrics when the date changes
    • This gives users the ability to go back in history and see their entries.

All of this is broadly similar to the steps I followed before:
- add a presentation component
- add actions (API calls or anything else) & reducers (store the responses in the state and calculate the new state, for example when the date changes)
- add a container component to pass the props to the presentation component


All the code for this day is in this pull request (so you can check exactly what was added and changed); I have also added comments on some files there to explain a little bit.
It's easier to see and navigate the files in the "Files Changed" tab of the GitHub PR page.


Things Worth noting


 Components Interaction


Up to this point the dashboard didn't have any interaction across components, but in this PR I added the date navigator: when a user navigates in one direction, it should load the entries they created for that new date; in addition, the progress metrics should update according to the new entries.
The way I did this was:
1- When the dashboard first loads, it just loads the current day's information, so the entries list component fetches today's entries in componentDidMount.
2- If the user goes back in the date navigator, that triggers a Redux action (PERIOD_CHANGED, see dateNavActions.js).
3- The period reducer updates the state with the new start/end dates.
4- Our DayEntries component uses state.period as props, so when the state changes because of the date navigation action above, our component is re-rendered.
5- Since this is a re-render and not a mount, we have to use the componentWillReceiveProps lifecycle method to trigger a new Redux action that fetches the entries for the new date period.


 Introducing RequestBuilder

This class is a helper used by the repos to inject headers and parse the response as JSON; it saves us from repeating this logic everywhere.

Adding User info to state

Since we now want to show progress bars for the user's goals, I added an API call through userRepo to get the user profile from the backend (see the Dashboard component). No login was needed, because I hardcoded the user token in the request builder. This way we have real user information without being blocked on login being ready, while at the same time working with the real user data model.


In the next part I plan to:
1- create a dummy login page
2- use react-router to have multiple pages in our app (login, dashboard, profile), each with its own URL


Friday, October 19, 2018

Using Windows 10 built in bash to ssh to ec2 instance



Detailed steps to install Bash on Windows are here: https://www.thewindowsclub.com/run-bash-on-windows-10

Summary:

1- enable the Windows Subsystem for Linux
2- open cmd
3- type bash
4- accept and create a Unix user account
5- wait for the installation to finish
6- you may need to reboot

Reopen cmd and type bash; you should see your prompt change.

7- to SSH to your EC2 instance you need the .pem key file that you downloaded when you created the instance
8- copy that file under /home (or somewhere else, but not under /mnt/*, the Windows files), because otherwise the next step will not work; the file has to be in a Linux directory (Bash on Windows can't change Windows file permissions)
9- run this command:
$ chmod 400 pem_file.pem
otherwise you will get an error that the key's permissions are too open (not secure enough)
10- connect with ssh -i "pem_file.pem" ec2-user@123.456.897 (replace the pem file name and IP with your own values)

Now you can connect from the Windows terminal without an SSH client or PuTTY, and this Bash can be used for other commands and tools like telnet, etc. Fun :)

Tuesday, October 9, 2018

Online IDE stackblitz.com :)


I stumbled upon this neat online IDE, where you can:

- instantly start, share, and run code
- use npm dependencies and install what you need quickly
- drag and drop files from your computer

I've yet to explore it fully, but it seems very promising.

It also has a neat feature to embed itself in blogs like this one, which is awesome because now I can show live examples of running code and share them with my readers!

All I had to do was include an iframe with the URL of my test project on their website.

This is an example of a React project :)




Kudos to the people who create this cool stuff and share it with us!
