Brief Zsh Configs on macOS

macOS, like Linux, ships with its own default shell, but Zsh is an awesome alternative because of how easily it can be customized to my preferences. This note shares the installation steps and my selected configurations; in addition, I will introduce some tools I have recently come to rely on. The prerequisite for this post is brew (Homebrew), the macOS counterpart of apt-get on Ubuntu, so I can install packages without lifting a finger as usual 🍻.

This article is well written and makes the installation on macOS easy. The next step is to install zsh-autosuggestions, which, as its name suggests, helps me type commands super fast by suggesting them from my usage history.
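With brew this is quick; the source line below comes from the formula's install notes and assumes the default Homebrew prefix, so adjust the path if brew --prefix points elsewhere on your machine:

brew install zsh-autosuggestions
echo 'source $(brew --prefix)/share/zsh-autosuggestions/zsh-autosuggestions.zsh' >> ~/.zshrc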

I also need something fancy enough that I can easily recognize commands and their associated parameters; zsh-syntax-highlighting fits that need. Again, brew helps me a lot: brew install zsh-syntax-highlighting.
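The same pattern applies here; the plugin's README recommends sourcing it at the very end of .zshrc, and again the path assumes a default Homebrew install:

brew install zsh-syntax-highlighting
echo 'source $(brew --prefix)/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh' >> ~/.zshrc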

Another cool plugin is zsh-completions, and with brew it seems I don't need to bother with manual setup; explore it here.
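For the record, brew info zsh-completions prints a snippet along these lines to add the completions to fpath (a sketch from memory, so follow the caveats it prints for you):

if type brew &>/dev/null; then
  FPATH=$(brew --prefix)/share/zsh-completions:$FPATH
  autoload -Uz compinit
  compinit
fi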

oh-my-zsh is the last addon that I want to mention here. Using this plugin, we get a good way to colorize the terminal through a bunch of themes, backed by a relatively big community. For instance, my favorite shell is captured below.
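Installing it is the one-liner from the project's README (as of writing), and the theme is then chosen with ZSH_THEME in .zshrc (robbyrussell below is just the default; swap in whichever theme you like):

sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
# then, in ~/.zshrc
ZSH_THEME="robbyrussell"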

Zsh isn't applied unless I invoke it, hence the last step is to change the default shell. This part is tricky on macOS because it requires finding the location of the Zsh binary and then registering it in /etc/shells; note that a plain sudo echo ... >> /etc/shells doesn't work, because the redirection runs as the unprivileged user. My command was

echo "$(which zsh)" | sudo tee -a /etc/shells && chsh -s "$(which zsh)"

Last but not least, I highly recommend autoenv, since I want to get rid of the tedious procedure of changing projects' environment variables all the time. Furthermore, iTerm2 gives a more flexible way to adjust themes, open tabs, and split windows expeditiously.
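autoenv also comes from brew; the activation line below is what the formula's install notes suggest, so double-check the exact path brew prints for you:

brew install autoenv
echo 'source $(brew --prefix autoenv)/activate.sh' >> ~/.zshrc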

Here are my example oh-my-zsh configs,

Incremental Average

Recently, I have been playing with time series in a real-time production system. In my case, new data points are appended all the time, so my series keeps growing. Eventually its length will exceed the computer's memory, which makes any computation that takes the entire array as input an obstacle. I found an alternative approach on Wikipedia and Stack Exchange; below is a note to elaborate.

Say, I managed to compute the average of all values seen so far,

$$\bar{x}_n = \frac{1}{n}\sum_{i=1}^{n} x_i$$

Obviously, $\bar{x}_n$ needs every value from the beginning up to the current time point. It's worth following an incremental manner in which the result is updated repeatedly. For that reason, let me rearrange the above formula slightly:

$$\bar{x}_n = \frac{(n-1)\,\bar{x}_{n-1} + x_n}{n} = \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}$$

It can be seen that the average at time point $n$ can be calculated from only the new value $x_n$ and the previous average $\bar{x}_{n-1}$. In other words, I can save a great amount of memory and it's very convenient. :+1:

Dockerize My Blog

I feel a little bitter 😱 about installing Jekyll again after a long break. Indeed, I don't remember either the steps or the dependencies for different environments. That leads me to dockerize it, since I use Docker daily anyway 🤔. Let me share mine.
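The Dockerfile itself is short; a rough sketch of what it needs to do to support the usage below looks like this (the Ruby base tag and the serve flags are assumptions, not necessarily my exact file):

# sketch only: base image tag and flags are assumptions
FROM ruby:2.5
RUN gem install jekyll bundler
WORKDIR /src
EXPOSE 4000
# serve whatever is mounted at /src and listen on all interfaces
CMD ["jekyll", "serve", "--host", "0.0.0.0"]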

Usage:

  1. First, build the image with docker build -t jekyll . and name it jekyll
  2. Run a container named blog from the jekyll image
  • docker run -d -p 4000:4000 -v [path/to/blog]:/src:rw --name blog jekyll
    • e.g. docker run -d -p 4000:4000 -v /Users/quy/dongchirua:/src:rw --name blog jekyll

After running the container, Jekyll won't bother me anymore 😋. Besides, every change is synced automatically into the container; I just refresh my browser at http://localhost:4000 (I use Docker for Mac). Unfortunately, there is a drawback: the image costs 879 MB on disk 🤓.

Install New Plugins

I have installed new plugins, e.g. jekyll-gist; let's just test it :+1:!
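For anyone reproducing this, a sketch of enabling the plugin looks like the following (older Jekyll versions use a gems: key instead of plugins: in _config.yml):

# _config.yml
plugins:
  - jekyll-gist

A gist is then embedded in a post with a Liquid tag like {% gist <username>/<gist_id> %}, where the username and ID here are placeholders.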

Multiply Matrices in Python

Multiplying matrices is a good way to practice what you understand about Python.

Formula: for an $m \times n$ matrix $A$ and an $n \times p$ matrix $B$, the product $C = AB$ is the $m \times p$ matrix with entries

$$C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$$

Example:

$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \begin{pmatrix} 7 & 8 \\ 9 & 10 \\ 11 & 12 \end{pmatrix} = \begin{pmatrix} 58 & 64 \\ 139 & 154 \end{pmatrix}$$

Approach #1

def multipleMatrixes(A, B):
    # transpose B so its columns can be iterated as rows
    B = list(zip(*B))
    return [[sum(ai * bj for ai, bj in zip(Ai, Bj)) for Bj in B] for Ai in A]

multipleMatrixes([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]])
# [[58, 64], [139, 154]]

§ zip([iterable, ...]) [1] returns an iterator of tuples (a list in Python 2), where the i-th tuple contains the i-th element from each of the argument sequences or iterables.

§ * [2] is the unpacking operator; for example, with def test(A, B): print(A, B), calling test(*[[1, 2], [3, 4]]) prints [1, 2] [3, 4].

Approach #2

def multipleMatrixes(A, B):
    return [[sum(x * B[i][col] for i, x in enumerate(row))
             for col in range(len(B[0]))] for row in A]

multipleMatrixes([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]])
# [[58, 64], [139, 154]]

§ enumerate(sequence, start=0) [3] returns an enumerate object.

Approach #3

def multipleMatrixes(A, B):
    # the result has len(A) rows and len(B[0]) columns
    result = [[0] * len(B[0]) for _ in range(len(A))]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                result[i][j] += A[i][k] * B[k][j]
    return result

multipleMatrixes([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]])
# [[58, 64], [139, 154]]

§ [0] * 3 becomes [0,0,0]

Approach #4: Using Numpy

import numpy as np
#array([[ 58,  64],
#       [139, 154]])
np.dot([[1,2,3],[4,5,6]], [[7,8],[9,10],[11, 12]]) 

Source here

References

How to Build a Docker Image with Node.js, Ruby and Python on ubuntu:16.04

Beforehand, please install Docker on your machine. If you use macOS, I highly recommend the Docker for Mac [1] version (the native version, in my terms) instead of docker-machine, because I'm using the native version to demonstrate. In case you wonder why, please read here.

Step 1 - Preparing

A Dockerfile is a set of instructions for building an image. Dockerfile syntax [2] isn't complicated, so I put mine here and walk through it step by step.

I start with ubuntu:16.04

FROM ubuntu:16.04

Then I install my desired components:

RUN apt-get update && \
    apt-get install -y --force-yes --no-install-recommends \
    apt-transport-https \
    apt-utils \
    build-essential \
    ca-certificates \
    curl \
    git \
    graphicsmagick --fix-missing \
    imagemagick --fix-missing \
    libicu-dev \
    'libicu[0-9][0-9].*' \
    libssl-dev \
    lsb-release \
    python-all \
    rlwrap \
    ssh-client \
    telnet
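The title also promises Node.js and Ruby; on top of the base packages above, a hedged sketch for adding them could look like this (taking both from the Ubuntu repositories is my assumption here, not necessarily what my original file did):

# assumption: Node.js, npm and Ruby straight from the Ubuntu repositories
RUN apt-get update && \
    apt-get install -y --no-install-recommends nodejs npm ruby-full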

Making a default executable for my image with CMD [3]

CMD ["bash"]

Step 2 - Run it

What you need to do next is change directory to the same level as the Dockerfile and run docker build -t demo .; then you can run docker images to see that your demo image is there.
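Concretely, from the directory that contains the Dockerfile:

cd path/to/project   # the directory holding the Dockerfile
docker build -t demo .
docker images        # the demo image should now be listed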

What I Learnt

Finding the right instructions to build an image is usually annoying because some lines in the Dockerfile may raise errors, and each trial consumes time. To illustrate, suppose my Dockerfile had 3 lines and I observed that an error happened at the last line. Obviously, I would have to fix it, run again, and wait. As I mentioned, that frustration is the real problem, hence I thought about how to accelerate the loop.

The fact is that Docker caches preceding layers [4], so I follow the suggestions from the Docker best practices. But I want more than that, so I had an idea: build on top of an already-correct image. It means I break the Dockerfile into parts; the lines from the beginning that don't cause errors form the correct part, and the rest of the lines stay in the working Dockerfile. These parts are connected by FROM my_image:previous_correct

For instance, the correct part is

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y

then I build it and tag it as the base image with docker build -t mine:based . Next, I just focus on the working part, like

FROM mine:based
RUN apt-get install ... # the line that may raise an error

Eventually, when everything is correct, I'm able to merge the parts back into a single final Dockerfile.

Another thing: inside an ubuntu:16.04 container you don't use sudo at the beginning [5], since build steps run as root by default; for example, RUN apt-get update && apt-get install -y works as-is.

References

Install MongoDB with Replication on Docker

One of my previous jobs was to set up an infrastructure with MongoDB. If you asked me why MongoDB, I would say it was a sort of legacy decision. The thing was, I struggled with the replication setup until I found a way to complete it. In this post, I share my steps, with some references.

Foremost, a replica set means several database nodes maintaining the same data set. In practice, my application connects to these nodes; Mongo allows writes only on the primary node, while the secondaries replicate the data after a short delay. When the primary has a problem, the cluster holds an election to choose a new primary from the secondaries.

My problem was an incorrect configuration for my cluster, so the nodes could not communicate. My configuration uses IP addresses, which will probably cause communication problems if those addresses change. Anyway, this post just shows how it works. Bear in mind that I use Docker to install MongoDB, so you need to install Docker first.

Steps

Install mongo nodes [1]

The default port of mongo is 27017. I installed 3 nodes on 1 machine (suppose my real IP is 10.164.5.85), so I have to map a different host port to each mongo container; otherwise I will run into issues later. You can start from 27017, but I chose to start from 27018. The three containers are mongo1, mongo2 and mongo3 respectively, and the replica set name is rs0. They all join the user-defined network curator-cluster, which has to exist before the containers are started.

# the user-defined network must exist before the containers join it
sudo docker network create curator-cluster

# run the three nodes detached, one per host port
sudo docker run -d \
-p 27018:27017 \
--name mongo1 \
--net curator-cluster \
mongo mongod --replSet rs0

sudo docker run -d \
-p 27019:27017 \
--name mongo2 \
--net curator-cluster \
mongo mongod --replSet rs0

sudo docker run -d \
-p 27020:27017 \
--name mongo3 \
--net curator-cluster \
mongo mongod --replSet rs0

My configuration

If you make an incorrect configuration, these nodes cannot find each other [2], or you can get a MongoDB connection error [3].

config = {
  "_id" : "rs0",
  "members" : [
    {"_id" : 0, "host" : "10.164.5.85:27018"},
    {"_id" : 1, "host" : "10.164.5.85:27019"},
    {"_id" : 2, "host" : "10.164.5.85:27020"}
  ]
}
rs.initiate(config)
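This config is entered from a mongo shell attached to one of the nodes; with the container names above, one way in is docker exec:

sudo docker exec -it mongo1 mongo

Once rs.initiate(config) returns something like { "ok" : 1 }, rs.status() should list all three members.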

Verification

To verify, you can use this command

mongo --host replicaSetName/host1[:port1],host2[:port2],host3[:port3] databaseToConnect

for instance:

mongo --host rs0/10.164.5.85:27018,10.164.5.85:27019,10.164.5.85:27020 schedulerinterface-staging

References